The community members are discussing an issue where the output from the llama2 and BGE models sometimes contains only line breaks (\n) instead of the expected text. The comments suggest the issue may be related to the input text, improper prompt formatting, or the specific model being used. Some community members suggest trying different versions or configurations, while others recommend creating a Colab notebook to reproduce the problem. However, there is no explicitly marked answer in the provided information.
I've already done several things: I used the sentence splitter and kept my paragraphs separated by \n\n\n, and I've also tried leaving them with just one \n. However, there is always some example that gives this altered result.
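For context, this is roughly how that splitter setup looks. A minimal sketch assuming a recent LlamaIndex layout (llama_index.core.node_parser); the chunk sizes and the documents variable are placeholders, not values confirmed in this thread:

from llama_index.core.node_parser import SentenceSplitter

# Split on triple line breaks so each paragraph lands in its own chunk.
# chunk_size/chunk_overlap are illustrative, not the original settings.
splitter = SentenceSplitter(
    chunk_size=512,
    chunk_overlap=50,
    paragraph_separator="\n\n\n",
)

# documents is assumed to be an already-loaded list of Document objects.
nodes = splitter.get_nodes_from_documents(documents)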
system_prompt = """<|SYSTEM|>Context information is below. --------------------- {context_str} --------------------- Given context information and not prior knowledge, respond to the user last message contained in the ChatSession. ChatSession: {query_str} Answer: """
response = query_engine.query("""
Assistant: how are you doing today?
User: Hi, I'm doing well. I'm interested in purchasing a new laptop.
Assistant: That's great to hear! I'd be happy to assist you with that. Have you considered using our account as a mode of payment?
User: No, I haven't.
""")
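Since the thread points at prompt formatting as one suspect, it may be worth checking how that template is actually attached to the engine. A minimal sketch of one way to wire it, assuming LlamaIndex's PromptTemplate and the text_qa_template keyword on as_query_engine; the index variable is a placeholder for the already-built vector index and is not confirmed by the original post:

from llama_index.core import PromptTemplate

# Wrap the raw prompt string; {context_str} and {query_str} are the
# variables LlamaIndex fills in at query time.
qa_template = PromptTemplate(system_prompt)

# Pass the template so it replaces the default text QA prompt.
query_engine = index.as_query_engine(text_qa_template=qa_template)

If the custom template is not actually applied, the model receives the default prompt with the whole ChatSession stuffed into it, which could plausibly produce the blank-line responses described above; calling query_engine.get_prompts() and inspecting the result is a cheap way to verify which template is in effect.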