Updated 6 months ago

At a glance

The community members are discussing an issue where the output from using the llama2 and BGE models sometimes only contains line breaks (\n) instead of the expected text. The comments suggest that the issue may be related to the input text, improper prompt formatting, or the specific model being used. Some community members suggest trying different versions or configurations, while others recommend creating a Colab notebook to better understand the problem. However, there is no explicitly marked answer in the provided information.

Hi guys, Sometimes when I generate using llama2 and BGE, my output only has \n (line breaks) what could it be?
[Attachment: image.png]
17 comments
llama2 has decided to freak out it seems lol
Something about the text it was reading maybe caused this
I've already tried several things: I used the sentence splitter with my paragraphs separated by \n\n\n, and I've also tried leaving them with just one \n. However, there is always some example that gives this altered result
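The paragraph-splitting step mentioned above can be sketched without any framework dependency (LlamaIndex's SentenceSplitter exposes a similar paragraph_separator parameter); split_paragraphs below is an illustrative helper, not a library API:

```python
# Minimal, dependency-free sketch of splitting a document into paragraph
# chunks on an explicit separator, mirroring the \n\n\n separation described
# above. split_paragraphs is an illustrative helper, not a library function.
def split_paragraphs(text: str, separator: str = "\n\n\n") -> list[str]:
    """Split on the separator and drop empty or whitespace-only chunks."""
    return [part.strip() for part in text.split(separator) if part.strip()]

doc = "User: hi Assistant: hello\n\n\nUser: bye Assistant: goodbye"
print(split_paragraphs(doc))
```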
Could downgrading the version help?
I don't think this is related to any version. More likely something about the input text, or improper prompt formatting, is causing issues
Which LLM class is this? How did you set it up?
I'll create a Colab notebook. Can you help me understand this?
I'm using CustomLLM to connect to my API that hosts Llama2, and BaseEmbedding to connect to another API that hosts BGE-base
My system prompt is:

system_prompt = """<|SYSTEM|>Context information is below.
---------------------
{context_str}
---------------------
Given context information and not prior knowledge, respond to the user last message contained in the ChatSession.
ChatSession: {query_str}
Answer:
"""
and my query is:

response = query_engine.query("""
Assistant: how are you doing today?
User: Hi, I'm doing well. I'm interested in purchasing a new laptop.
Assistant: That's great to hear! I'd be happy to assist you with that. Have you considered the our account as a mode of payment?
User: No, I haven't.
""")
I have the .txt files inside the data folder, and the structure is:

User: No, pay pal account Assistant: I recommend the our account because...

User: .... Assistant: I recommend the our account because...

User: .... Assistant: I recommend the our account because...

....

....
This doesn't look like a Llama prompt template to me. What model are you using?
I'm using llama-2-13b-chat for generation and bge-base-en-1.5 for embeddings.
https://colab.research.google.com/drive/1dyR_C5pHsE-X72b-k-vA6LUJAf8ggTm9#scrollTo=u73iZkWPaQo4 This notebook contains the code used; it doesn't run here because I access the models via API, which can't be reached from Colab
Just looking at the model card of llama-2-chat, you are not using the right prompt template.
https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ
[Attachment: image.png]
Small models like that don't like a wrong template at all and start spitting out random stuff 😉
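For reference, the model card linked above documents the Llama-2 chat format with [INST] and <<SYS>> markers. A minimal sketch of wrapping the system prompt from this thread in that format (build_prompt is an illustrative helper; the variable names mirror the ones used earlier):

```python
# Sketch of the Llama-2 chat prompt format from the model card, applied to
# the context-injection system prompt shown earlier in the thread.
# build_prompt is an illustrative helper, not part of any library.

SYSTEM = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "respond to the user's last message in the ChatSession."
)

# Llama-2 chat markers per the model card: system text goes inside
# <<SYS>> ... <</SYS>>, and the whole turn is wrapped in [INST] ... [/INST].
TEMPLATE = "[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def build_prompt(context_str: str, query_str: str) -> str:
    """Fill the system prompt with context, then wrap it in chat markers."""
    system = SYSTEM.format(context_str=context_str)
    return TEMPLATE.format(system=system, user=query_str)

print(build_prompt("Some retrieved text.", "User: Hi, I need a laptop."))
```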