Prompt Helper Parameters

At a glance
The original poster proposes an approach to improve response relevance: loop over the top-k retrieved results, make one LLM call per node (k=1) to classify its context as relevant or not (0 or 1), merge the relevant nodes, and process them with the QA template, plus the refine template if the merged chunk is long enough. They are concerned about the overall execution time of this approach and ask the community for suggestions. The only comment is from another community member who believes the original post was meant for a different thread.
Hey guys, since similarity top-k does not always surface the most relevant data, I was thinking of doing the following:

  • Loop over the n top-k results, but with k=1 for each llm_call, asking whether the context is really pertinent to the question (the answer must be 0 for no or 1 for yes, to keep it fast, like a classification problem);
  • keep the nodes whose context is relevant and merge them (clearly separated);
  • apply the QA template and (only if the merged chunk is long enough) the refine template to the relevant nodes only (rough sketch below).
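
To make the idea concrete, here is a minimal sketch of the three steps, assuming a generic `llm_call(prompt) -> str` and plain-string contexts; `RELEVANCE_PROMPT`, `qa_template`, `refine_template`, and `max_chars` are all hypothetical names I made up for illustration, not part of any library:

```python
# Sketch only: `llm_call`, `qa_template`, and `refine_template` are
# hypothetical stand-ins for whatever LLM interface and prompt templates
# you already use.

RELEVANCE_PROMPT = (
    "Answer with a single digit, nothing else. Is the context relevant "
    "to the question? 1 = yes, 0 = no.\n\n"
    "Question: {question}\n\nContext: {context}"
)

def classify_relevant(question: str, contexts: list[str], llm_call) -> list[str]:
    """Step 1: one cheap 0/1 classification call per retrieved context (k=1 each)."""
    return [
        ctx
        for ctx in contexts
        if llm_call(
            RELEVANCE_PROMPT.format(question=question, context=ctx)
        ).strip().startswith("1")
    ]

def answer(question, contexts, llm_call, qa_template, refine_template,
           max_chars=3000):
    """Steps 2-3: merge the relevant contexts (clearly separated), answer with
    the QA template, and refine only when the merged text is long enough."""
    relevant = classify_relevant(question, contexts, llm_call)
    merged = "\n\n---\n\n".join(relevant)
    if len(merged) <= max_chars:
        return llm_call(qa_template.format(question=question, context=merged))
    # Long merged context: answer on the first chunk, refine with the rest.
    chunks = [merged[i:i + max_chars] for i in range(0, len(merged), max_chars)]
    ans = llm_call(qa_template.format(question=question, context=chunks[0]))
    for chunk in chunks[1:]:
        ans = llm_call(refine_template.format(
            question=question, context=chunk, existing_answer=ans))
    return ans
```

Since the classification calls are independent of each other, they could also be issued concurrently (e.g., via a thread pool or an async client), which should offset most of the extra latency.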
I'm following the OpenAI best practices for complex reasoning problems, where they suggest splitting the problem into smaller sub-problems.

What do you think? This way I should get more flexibility and better responses. However, I'm worried about the overall execution time.
Do you have suggestions?
1 comment
I believe you've mistaken the thread πŸ™‚