I have a question about llama-2 and generate_question_context_pairs.
When I use mistral-instruct everything works well, but with llama-2 the generated response is often prefaced with "Great! Here are two questions based on the provided context:", which then ends up as a question in the qa_dataset.
Has anyone else bumped into this / worked around it? I am using LlamaCPP to instantiate the llm object, with the default messages_to_prompt / completion_to_prompt (rough sketch of my setup below).
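For reference, this is roughly what I'm doing. Import paths are the newer v0.10-style ones and may differ on older versions; the model path and chunk size are just placeholders:

```python
# Rough sketch of the setup described above (paths/params are placeholders).
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.evaluation import generate_question_context_pairs
from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.llms.llama_cpp.llama_utils import (
    messages_to_prompt,
    completion_to_prompt,
)

llm = LlamaCPP(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local path
    temperature=0.1,
    messages_to_prompt=messages_to_prompt,        # default Llama-2 [INST] formatting
    completion_to_prompt=completion_to_prompt,
)

# Chunk some corpus text into nodes to generate questions against.
nodes = SentenceSplitter(chunk_size=512).get_nodes_from_documents(
    [Document(text="...corpus text here...")]
)

qa_dataset = generate_question_context_pairs(
    nodes,
    llm=llm,
    num_questions_per_chunk=2,
)
```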
Thanks for the response. Have people had success modifying the prompt to eliminate the friendly chit-chat? Or perhaps with ensuring the extraneous text is regular enough to post-process out?
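To be concrete, something like the sketch below is what I had in mind: tightening the instruction via qa_generate_prompt_tmpl (which I believe generate_question_context_pairs accepts), plus a crude filter over the resulting queries dict as a fallback. The blocklisted preamble phrases are just guesses.

```python
# Hedged sketch, continuing from the setup above.
# 1) Tighten the prompt so Llama-2 (hopefully) skips the preamble.
qa_generate_prompt_tmpl = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and no prior knowledge, generate "
    "{num_questions_per_chunk} questions based only on the context. "
    "Output ONLY the questions, one per line, with no preamble, numbering, "
    "or commentary."
)
qa_dataset = generate_question_context_pairs(
    nodes,
    llm=llm,
    qa_generate_prompt_tmpl=qa_generate_prompt_tmpl,
    num_questions_per_chunk=2,
)

# 2) Fallback: post-process out anything that doesn't look like a real question.
def looks_like_question(text: str) -> bool:
    text = text.strip()
    banned_prefixes = ("great!", "sure!", "here are", "certainly")  # guesses
    return text.endswith("?") and not text.lower().startswith(banned_prefixes)

kept = {qid: q for qid, q in qa_dataset.queries.items() if looks_like_question(q)}
qa_dataset.queries = kept
qa_dataset.relevant_docs = {
    qid: docs for qid, docs in qa_dataset.relevant_docs.items() if qid in kept
}
```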