
I have upgraded my llama index to 0.6.20

I have upgraded my llama_index to 0.6.20 and set "compact" as the response mode, but it is not working as described in the documentation; it behaves the same way as the "refine" mode. Does anyone know why this happens?
https://gpt-index.readthedocs.io/en/latest/reference/query/response_synthesizer.html#llama_index.indices.response.type.ResponseMode
3 comments
Refine and Compact are actually very similar (the Compact response class extends Refine)

The only difference is that refine will query the LLM once per node

Compact will stuff as much text as possible into each LLM call. But it may still make more than one LLM call, refining the answer, if all the text does not fit into the first call
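A rough illustration of the difference (this is a hypothetical sketch of the call-counting behavior, not llama_index internals; the function names and the character-based window are made up for demonstration):

```python
# Hypothetical sketch: how "refine" and "compact" differ in the number of
# LLM calls made for the same set of retrieved nodes.

def refine_call_count(node_texts):
    """Refine mode: one LLM call per retrieved node."""
    return len(node_texts)

def compact_call_count(node_texts, context_window=1000):
    """Compact mode: pack as many node texts as fit into each LLM call,
    making an extra (refine-style) call only for the overflow."""
    calls, current = 0, 0
    for text in node_texts:
        if current and current + len(text) > context_window:
            calls += 1      # current prompt is full; flush it as one call
            current = 0
        current += len(text)
    return calls + (1 if current else 0)

nodes = ["a" * 400, "b" * 400, "c" * 400]
print(refine_call_count(nodes))         # refine: one call per node -> 3
print(compact_call_count(nodes, 1000))  # compact: two nodes fit per call -> 2
```

So with few small nodes, compact collapses to a single call; when the retrieved text exceeds the context window, compact falls back to refine-like behavior, which may be why the two modes look identical in practice.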
Please add the similarity_cutoff and similarity_top_k params along with the response_mode param.
In the updated version, you need to create a query_engine and run queries through it.
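A minimal sketch of that pattern for llama_index 0.6.x (the `./data` directory, the query string, and the exact import path for the postprocessor are assumptions; import paths moved around between 0.6.x releases, and running this requires an LLM API key):

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.indices.postprocessor import SimilarityPostprocessor

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# In 0.6.x the query engine, not the index, takes the retrieval and
# response-synthesis parameters.
query_engine = index.as_query_engine(
    similarity_top_k=3,            # how many nodes to retrieve
    response_mode="compact",       # pack nodes into as few LLM calls as possible
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)
response = query_engine.query("What does the document say about compact mode?")
print(response)
```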
I already added the params you mentioned, but the same problem still exists