
LLM Fine-tuning

Hi,

I followed the LLM fine-tuning guide at https://gpt-index.readthedocs.io/en/latest/examples/finetuning/openai_fine_tuning.html, but I used my own query engine:
```python
query_engine = general_index.as_query_engine(
    text_qa_template=CHAT_TEXT_QA_PROMPT,
    refine_template=CHAT_REFINE_PROMPT,
    similarity_top_k=10,
    streaming=False,
    service_context=service_context,
    node_postprocessors=[rerank],
)
```


I attached a screenshot of the finetuning_events.jsonl results. My question is: are these results okay? Should entries like {"messages": [{"role": "user", "content": "You are an expert Q&A system that strictly operates in two modes when refining existing answers:\n1. **Rewrite** an original answer using the new context ...."}]} be present in finetuning_events.jsonl? Is this correct or wrong?
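For anyone unsure about the same thing: each line of finetuning_events.jsonl should be a standalone JSON object in OpenAI's chat fine-tuning format, i.e. a "messages" list whose entries have "role" and "content" keys. Below is a minimal sanity-check sketch; the function name is hypothetical and the checks follow the generic OpenAI chat schema, not anything specific to this guide:

```python
import json

VALID_ROLES = {"system", "user", "assistant"}

def validate_finetuning_events(path):
    """Return a list of problems found in a finetuning_events.jsonl file.

    Each non-empty line must be valid JSON containing a non-empty
    'messages' list whose entries have a known role and string content.
    """
    errors = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError as exc:
                errors.append(f"line {lineno}: not valid JSON ({exc})")
                continue
            messages = record.get("messages")
            if not isinstance(messages, list) or not messages:
                errors.append(f"line {lineno}: missing or empty 'messages' list")
                continue
            for msg in messages:
                if msg.get("role") not in VALID_ROLES:
                    errors.append(f"line {lineno}: unexpected role {msg.get('role')!r}")
                if not isinstance(msg.get("content"), str):
                    errors.append(f"line {lineno}: 'content' is not a string")
    return errors
```

An empty return value means every line parses and has the expected shape, which is a reasonable first check before uploading the file for fine-tuning.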

Thank you!
Attachment: Snimek_obrazovky_2023-09-12_163110.png
I think @jerryjliu0 or @Logan M can help better with this
@Maker Yup, that's right! Those snippets you posted are the beginning of the prompt templates that llama-index uses.

If you read them further, you will see your own data in the prompt
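One way to confirm this is to print the tail of each user message: the llama-index template text comes first and your own retrieved context and question follow it. A minimal sketch, assuming the standard OpenAI chat-format jsonl (the function name and character count are illustrative):

```python
import json

def show_user_message_tails(path, n_chars=300):
    """Print the last n_chars of every user message in a chat-format
    jsonl file -- the part where your own data appears after the
    boilerplate template text."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue
            record = json.loads(line)
            for msg in record.get("messages", []):
                if msg.get("role") == "user":
                    print(f"--- line {lineno}, user message tail ---")
                    print(msg["content"][-n_chars:])
```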
Ahh ok, thank you 🙏 I wasn't sure it was correct, as the format looked weird 😄