Do we have any native implementation in LlamaIndex for "Lost in the Middle"?

Do we have any native implementation in LlamaIndex to help with "Lost in the middle: Performance degrades when models must access relevant information in the middle of long contexts"? I found this implementation in LangChain: https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.long_context_reorder.LongContextReorder.html
Thank you!
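For reference, here's roughly how the LangChain transformer is used (a sketch based on its public API; it assumes the input documents are already sorted most-relevant-first, as a retriever returns them):

```python
from langchain.document_transformers import LongContextReorder
from langchain.schema import Document

# Pretend these came back from a retriever, sorted most-relevant-first.
docs = [Document(page_content=f"chunk {i}") for i in range(6)]

# Re-orders so the most relevant documents sit at the start and end
# of the list, and the least relevant end up in the middle.
reordered = LongContextReorder().transform_documents(docs)
print([d.page_content for d in reordered])
```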
6 comments
I think @Logan M can help you with this. πŸ¦‡ πŸ”¦
@xrt nah, we don't. All it does is re-order the retrieved chunks, so you could implement that easily with a custom node-postprocessor:
https://gpt-index.readthedocs.io/en/stable/core_modules/query_modules/node_postprocessors/usage_pattern.html#custom-node-postprocessor
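Here's a minimal sketch of what that could look like (assuming a llama-index version where a postprocessor just needs a `postprocess_nodes` method, per the usage-pattern doc above; on newer versions you may need to subclass `BaseNodePostprocessor` instead):

```python
from typing import List, Optional

from llama_index import QueryBundle
from llama_index.schema import NodeWithScore


class LostInTheMiddleReorder:
    """Re-order retrieved nodes so the highest-scoring chunks sit at the
    start and end of the context, pushing weaker chunks to the middle."""

    def postprocess_nodes(
        self,
        nodes: List[NodeWithScore],
        query_bundle: Optional[QueryBundle] = None,
    ) -> List[NodeWithScore]:
        # Most relevant first.
        ordered = sorted(nodes, key=lambda n: n.score or 0.0, reverse=True)

        # Walk from least to most relevant, alternating between appending
        # to the end and inserting at the front, so the weakest chunks
        # accumulate in the middle and the strongest land at the edges.
        reordered: List[NodeWithScore] = []
        for i, node in enumerate(reversed(ordered)):
            if i % 2 == 1:
                reordered.append(node)
            else:
                reordered.insert(0, node)
        return reordered
```

You'd then pass it as `node_postprocessors=[LostInTheMiddleReorder()]` when building your query engine.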

Low-key though, I don't know if it's actually helpful. You aren't reducing the size of the context; you still have stuff "in the middle" -- it's just re-ordered.
Haystack actually released this first. I'm not convinced it's helpful, though.
When this issue shows up, is there a threshold beyond which we start seeing it? Or is the issue the same whether we send 4 or 12 chunks?
I'm not totally sure tbh

For what it's worth, we did add the re-order postprocessor to llama-index:
https://gpt-index.readthedocs.io/en/stable/examples/node_postprocessor/LongContextReorder.html
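A minimal usage sketch (assuming you already have documents to index; the import path for `LongContextReorder` has moved between llama-index versions):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.postprocessor import LongContextReorder

# "./data" is an illustrative path to your documents.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Retrieve a larger top-k, then re-order so the highest-scoring
# chunks land at the edges of the prompt rather than the middle.
query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[LongContextReorder()],
)

response = query_engine.query("What does the document say about X?")
print(response)
```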
Thanks, I'll test it.