i have a question about llama-2 and generate_question_context_pairs

When I use mistral-instruct, everything works well - but with llama-2, the generated response is often prefaced with "Great! Here are two questions based on the provided context:", which ends up as a question in the qa_dataset.

Has anyone else bumped into / worked around this? I am using LlamaCPP to instantiate the llm object, with the default messages_to_prompt / completion_to_prompt.
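
For reference, this is roughly how I'm wiring things up - the model path and data loading are placeholders, and import paths may differ across LlamaIndex versions:

```python
from llama_index import SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.llms import LlamaCPP
from llama_index.llms.llama_utils import messages_to_prompt, completion_to_prompt
from llama_index.evaluation import generate_question_context_pairs

# Placeholder corpus -> nodes
documents = SimpleDirectoryReader("./data").load_data()
nodes = SimpleNodeParser.from_defaults().get_nodes_from_documents(documents)

# llama-2 via llama.cpp, with the default llama2-style prompt formatting
llm = LlamaCPP(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    temperature=0.1,
    max_new_tokens=256,
    context_window=3900,
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
)

# Each "question" the LLM emits becomes a query in qa_dataset
qa_dataset = generate_question_context_pairs(
    nodes,
    llm=llm,
    num_questions_per_chunk=2,
)
```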
This is a classic problem with llama2 -- it's very verbose and always adds extra text

The question generator is pretty simple though -- it just splits the output on newlines.
Thanks for the response - have people had success modifying the prompt to eliminate the friendly chit-chat? Or with ensuring the extraneous text is regular enough to post-process out?
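
One workaround I'm sketching out (just an idea, not anything built in): since the preamble lines don't end in a question mark, filter the generated queries after the fact. And if I'm reading the signature right, generate_question_context_pairs also takes a qa_generate_prompt_tmpl string, so tightening the instructions there ("output only the questions, one per line, no preamble") would be the other lever.

```python
# Rough post-processing sketch: drop any generated "question" that doesn't
# actually look like one (e.g. "Great! Here are two questions based on the
# provided context:"). Assumes real questions end with "?".
def drop_preamble_questions(qa_dataset):
    bad_ids = [
        q_id
        for q_id, question in qa_dataset.queries.items()
        if not question.strip().endswith("?")
    ]
    for q_id in bad_ids:
        qa_dataset.queries.pop(q_id)
        qa_dataset.relevant_docs.pop(q_id, None)
    return qa_dataset

qa_dataset = drop_preamble_questions(qa_dataset)
```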