Find answers from the community

Updated 2 years ago

Book query

At a glance

The community member has a "fancy idea" to record their prompts and answers based on a new book using LlamaIndex, and then fine-tune a GPT model weekly to ask open questions comparing the new book to other books. Another community member thinks this makes sense, but is unsure if the fine-tuning is necessary unless the goal is to embed new knowledge into the model. They suggest using the LlamaIndex logger to record the exact inputs and outputs. A third community member explains that the purpose of fine-tuning in this case is to let the model know the new book, so it can answer questions like comparing characters or imagining conversations between characters from the new book and other books.

Useful resources
Great! I love it. And I have a fancy idea like this:
1. Record all my prompts and answers based on a new book (injected by LlamaIndex) to a dataset
2. Fine-tune a GPT model every week
Then ask open questions, like comparing the new book to other books. Is this practical?
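Step 1 could be as simple as appending each (prompt, answer) pair to a JSONL file in the chat format that OpenAI's fine-tuning API expects. A minimal sketch (the file name and example strings are made up for illustration):

```python
import json

def append_example(path, prompt, answer):
    # One training example per line, in OpenAI's chat fine-tuning format:
    # {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}
    record = {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": answer},
        ]
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical usage: record one Q&A pair about the new book.
append_example("book_a_dataset.jsonl", "Who is Mr Adam?", "Mr Adam is ...")
```

A weekly fine-tuning job would then just upload the accumulated JSONL file.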
2 comments
I think it makes sense! Although I'm not sure the fine-tuning is necessary πŸ€” unless you're trying to embed new knowledge into the model... but I'm not sure how well that works.

If you want to record the exact inputs/outputs sent to OpenAI, you'll want to use the llama logger (since LlamaIndex takes your query and pairs it with various prompt templates depending on the situation)

Check it out at the bottom of the notebook
https://github.com/jerryjliu/llama_index/blob/main/examples/vector_indices/SimpleIndexDemo.ipynb
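If you'd rather not depend on the llama logger, a minimal stdlib stand-in that captures the exact inputs/outputs of any query callable might look like this (`query_fn` is a hypothetical stand-in for your index's query method):

```python
import functools

def record_io(query_fn, log):
    # Wrap a query function so every call appends the exact
    # input prompt and output answer to `log`.
    @functools.wraps(query_fn)
    def wrapper(prompt):
        answer = query_fn(prompt)
        log.append({"input": prompt, "output": answer})
        return answer
    return wrapper

# Hypothetical usage with a dummy query function standing in
# for the real index query.
log = []
ask = record_io(lambda p: f"echo: {p}", log)
ask("Compare <<A>> to other books")
```

The recorded `log` entries can then feed straight into the fine-tuning dataset.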
The purpose of fine-tuning in this case is to let the model know the new book (let's call it <<A>>), so it can answer questions like:
  1. Which novels have a character similar to Mr Adam in <<A>>? What are their names?
  2. If Mr Adam first met Harry Potter in a classroom, what would they talk about?