Do we have any native implementation in
xrt
last year
Do we have any native implementation in LlamaIndex to help with "Lost in the middle: performance degrades when models must access relevant information in the middle of long contexts"? I found this implementation in LangChain:
https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.long_context_reorder.LongContextReorder.html
Thank you!
6 comments
WhiteFang_Jr
last year
I think @Logan M can help you with this. 🦙 🦙
Logan M
last year
@xrt nah, we don't. All it does is re-order the retrieved chunks, so you could implement that easily with a custom node postprocessor:
https://gpt-index.readthedocs.io/en/stable/core_modules/query_modules/node_postprocessors/usage_pattern.html#custom-node-postprocessor
Low-key though, I don't know if it's actually helpful. You aren't reducing the size of the context, you still have stuff "in the middle" -- it's just re-ordering it.
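For reference, the re-ordering itself is only a few lines. Below is a minimal, library-free sketch of the "lost in the middle" reorder that a custom node postprocessor could wrap; the function name `long_context_reorder` and the `(score, text)` tuple representation of nodes are my own assumptions for illustration, and the interleaving mirrors the approach LangChain's `LongContextReorder` takes:

```python
def long_context_reorder(scored_nodes):
    """Reorder (score, text) pairs so the most relevant land at the
    ends of the list and the least relevant sit in the middle.

    This is a sketch of the 'lost in the middle' mitigation: LLMs tend
    to attend best to the start and end of a long context, so we push
    high-scoring chunks toward those positions.
    """
    # Sort ascending by score so the least relevant are handled first
    # and end up in the middle of the final ordering.
    ordered = sorted(scored_nodes, key=lambda n: n[0])
    reordered = []
    for i, node in enumerate(ordered):
        if i % 2 == 1:
            reordered.append(node)      # odd ranks go to the back
        else:
            reordered.insert(0, node)   # even ranks go to the front
    return reordered


if __name__ == "__main__":
    nodes = [(0.9, "a"), (0.7, "b"), (0.5, "c"), (0.3, "d")]
    print(long_context_reorder(nodes))
    # top-scored chunks sit at both ends; the weakest is in the middle
```

To plug this into LlamaIndex you would subclass its node-postprocessor base class and apply the same reorder to the retrieved `NodeWithScore` list (see the custom node postprocessor docs linked above); the exact import path varies by version, so check the docs for your release.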
Logan M
last year
Haystack actually released this first. I'm not convinced that it's helpful, though.
xrt
last year
When does this issue become visible? Is there a threshold beyond which we see it happening, or is the issue the same whether we send 4 or 12 chunks?
Logan M
last year
I'm not totally sure tbh
For what it's worth, we did add the re-order thing to llama-index
https://gpt-index.readthedocs.io/en/stable/examples/node_postprocessor/LongContextReorder.html
xrt
last year
Thanks, I will test.