
Updated 3 months ago

Llama index keeps refining over and over

Llama index keeps refining over and over. How do I make it stop?
7 comments
Not sure what you mean. Can you share some code?
Figured it out. Well... mostly.
With the default settings, it refines something like 7-8 times, so I have set it to simple_summarize.
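For reference, a minimal sketch of switching the response mode, assuming the llama_index.core package layout and a hypothetical ./data directory:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Hypothetical data directory; replace with your own documents.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# "simple_summarize" packs the retrieved chunks into a single LLM call
# instead of iteratively refining the answer chunk by chunk.
query_engine = index.as_query_engine(response_mode="simple_summarize")
response = query_engine.query("What does the document say about X?")
print(response)
```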
Is there a way I can set it to refine just once?
I'm not sure what you mean πŸ€” It refines as many times as it needs to in order to read all the retrieved context.

To reduce the number of refine steps (if you're using a vector index), I would decrease the top-k.
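A minimal sketch of lowering the retriever's top-k, assuming the index from the previous snippet already exists:

```python
# Retrieve fewer chunks per query. With the default "refine" mode,
# each retrieved chunk beyond what fits in the first prompt can trigger
# another refine call, so a smaller top-k generally means fewer refines.
query_engine = index.as_query_engine(similarity_top_k=2)
response = query_engine.query("What does the document say about X?")
print(response)
```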
Ah I see. Thank you πŸ™