
Prompt Helper Parameters

Hey guys, since similarity top-k does not always surface the most relevant data, I was thinking of doing the following:

  • Loop over the n top-k nodes, but use k=1 for each llm_call, asking whether the context is really pertinent to the question (the answer must be 0 for no or 1 for yes, to keep it fast, like a classification problem);
  • keep the nodes whose context is relevant and merge them (cleanly separated);
  • use the QA template, and (only if the merged chunk is long enough) the refine template, on the relevant nodes only.
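The loop described above could be sketched roughly like this. `llm_call` is a hypothetical callable (prompt in, text out) standing in for whatever LLM client you use; the prompt wording and separator are illustrative assumptions, not a tested recipe:

```python
from typing import Callable, List

def filter_relevant_chunks(
    question: str,
    chunks: List[str],
    llm_call: Callable[[str], str],  # hypothetical: prompt -> model reply
) -> str:
    """Classify each retrieved chunk independently (k=1 per call),
    keep only the chunks the model marks as relevant, and merge them."""
    prompt_template = (
        "Question: {question}\n"
        "Context: {chunk}\n"
        "Is the context pertinent to the question? "
        "Answer with a single digit: 1 for yes, 0 for no."
    )
    relevant = []
    for chunk in chunks:
        answer = llm_call(
            prompt_template.format(question=question, chunk=chunk)
        ).strip()
        if answer.startswith("1"):
            relevant.append(chunk)
    # Merge the kept chunks with a clear separator so the QA/refine
    # templates can still tell them apart.
    return "\n\n---\n\n".join(relevant)
```

The merged string can then be fed to the QA template (or the refine template when it is long). Note that each chunk costs one extra LLM call, which is exactly the latency concern raised below.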
I'm following the OpenAI best practices for complex reasoning problems, where they suggest splitting the problem into smaller sub-problems.

What do you think? This way I should get more flexibility and better responses, but I'm worried about the overall execution time.
Do you have any suggestions?
1 comment
I believe you've mistaken the thread πŸ™‚