Unexpected behaviour when no match is found using the LlamaCloudIndex

I'm using the LlamaCloudIndex with similarity_top_k=3. I noticed that when no match is found, source_nodes contains only one chunk, and that chunk is the whole document. Is that expected behaviour? I'm surprised, as I thought there would always be source nodes since we're looking at top similarities, i.e. it would return the most similar nodes even if they aren't actually very similar at all.
Have you added any postprocessing step, like a SimilarityPostprocessor?
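
For reference, attaching a similarity postprocessor looks roughly like this (a minimal sketch; the 0.7 cutoff is only an illustrative value). It drops retrieved nodes whose score falls below the cutoff, which can leave source_nodes empty when nothing is similar enough:

from llama_index.core.postprocessor import SimilarityPostprocessor

query_engine = index.as_query_engine(
    similarity_top_k=3,
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)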
I only have:

from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

index = LlamaCloudIndex(...)
query_engine = index.as_query_engine(...)
response = await query_engine.aquery(...)
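
One quick way to see what is actually coming back is to print the score and size of each retrieved node (a small sketch using the standard response.source_nodes attributes):

for node_with_score in response.source_nodes:
    # each item carries the retrieved node plus its similarity score
    print(node_with_score.score, len(node_with_score.node.get_content()))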
Hey @Logan M, could you please take a look at this? 🙏
I'm pretty sure that by default, the LlamaCloud retriever uses "auto" mode, where it automatically decides either to do top-k chunk retrieval or to retrieve entire files based on metadata.
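
If that's the case, one workaround might be to pin the retriever to chunk-based retrieval. A minimal sketch, assuming the retrieval_mode option exposed by the LlamaCloud retriever (the parameter name and accepted values should be checked against the current LlamaCloud docs):

# assumed option: force chunk-level top-k retrieval instead of "auto" chunk/file routing
retriever = index.as_retriever(retrieval_mode="chunks", similarity_top_k=3)
nodes = await retriever.aretrieve("your question here")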