Picking relevant nodes

I’m already getting high similarities (0.75+ for the first 10-20 nodes). The difference is that I’m adding an LLM call: instead of using the top k directly to create the response, I use it to select around 10-20 nodes, then let gpt-turbo decide whether each one is truly relevant (this is the key; imo I should find some nodes that aren't relevant even though their similarity is high). I merge the relevant ones and use the usual llama_index solution.
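
Roughly, the filtering step I have in mind looks like this (just a sketch: `retriever` is a placeholder for however the top-k nodes get fetched, the judge prompt is made up, and I'm using the pre-1.0 OpenAI client):

```python
import openai  # pre-1.0 OpenAI client, i.e. openai.ChatCompletion

def is_relevant(question: str, node_text: str) -> bool:
    """Ask gpt-3.5-turbo to judge whether a retrieved node is truly relevant."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system", "content": "Reply with YES or NO only."},
            {
                "role": "user",
                "content": (
                    f"Question: {question}\n\n"
                    f"Passage: {node_text}\n\n"
                    "Is this passage relevant to answering the question?"
                ),
            },
        ],
    )
    return resp["choices"][0]["message"]["content"].strip().upper().startswith("YES")

# grab a generous top k by similarity first, then keep only what the judge accepts
candidates = retriever.retrieve(question)  # placeholder: however you fetch 10-20 nodes
relevant = [n for n in candidates if is_relevant(question, n.text)]
# ...then feed `relevant` into the usual llama_index response synthesis
```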

Do you think I can incorporate all of that into the QA and refine prompts?
My use case is not as simple as “who won the 2022 World Cup”.

2 comments

Yea I think this can be done in the prompt.

You can include details specific to your use case in the prompt, and even a small example!

Basically, it sounds like you just need to make sure that if a node isn't relevant, the model responds appropriately: either with something like "The answer cannot be found", or by returning the previous answer during the refine process.

Here are the current internal prompts:
https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/default_prompts.py

And here are the ones specific to chat models:
https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/chat_prompts.py
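
For example, overriding them with that "not relevant" escape hatch could look like this (rough sketch against the llama_index API from around when those files lived under gpt_index; exact imports and signatures depend on your version, and `index` is a placeholder for whatever index you already built):

```python
from llama_index import QuestionAnswerPrompt, RefinePrompt

# QA prompt with an explicit "not relevant" escape hatch
QA_PROMPT = QuestionAnswerPrompt(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "If the context is not relevant to the question, reply exactly: "
    "The answer cannot be found.\n"
    "Otherwise, answer the question: {query_str}\n"
)

# Refine prompt that keeps the previous answer when new context doesn't help
REFINE_PROMPT = RefinePrompt(
    "The original question is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Here is some more context:\n"
    "{context_msg}\n"
    "If this context is not relevant, return the existing answer unchanged; "
    "otherwise, refine the answer using the new context.\n"
)

# `index` is a placeholder for your existing index
response = index.query(
    "your question",
    text_qa_template=QA_PROMPT,
    refine_template=REFINE_PROMPT,
)
```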
Yeah, I've already played with those prompts 🙂 I was just reading best practices and thought this could be a good solution. I'll give it a shot and compare the two solutions 🙂