Hey all, any ideas why the results coming from a direct search_batch request to Qdrant would be different from the results coming from a llamaindex retriever that is pointed at the same Qdrant collection? There is some overlap in the results, but I would have assumed an exact match. No filters are being applied during the search.
Are you sure it's the same? This is the query llama-index uses in most cases under the hood:

Python
response = self._client.search(
    collection_name=self.collection_name,
    query_vector=query_embedding,
    limit=query.similarity_top_k,
    query_filter=query_filter,
)
I have hybrid enabled in the collection, although I am currently testing the retriever without enabling hybrid (not using VectorStoreQueryMode.HYBRID), in which case I believe it is using search_batch. I would still expect there to be no differences, unless I'm missing something with the hybrid option enabled in the collection...
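For reference, a minimal sketch of the setup being described, assuming the current llama-index Qdrant integration; the collection name, URL, and top_k are placeholders, and enable_hybrid=True may also need the sparse-embedding (fastembed) extra installed:

Python
from qdrant_client import QdrantClient
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = QdrantClient(url="http://localhost:6333")

# The collection was created with hybrid support (dense + sparse named vectors)...
vector_store = QdrantVectorStore(
    collection_name="my_collection",
    client=client,
    enable_hybrid=True,
)
index = VectorStoreIndex.from_vector_store(vector_store)

# ...but the retriever is used in the default dense-only mode,
# i.e. without vector_store_query_mode="hybrid".
retriever = index.as_retriever(similarity_top_k=5)
nodes = retriever.retrieve("some query string")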
Right, it would be running this:

Python
response = self._client.search_batch(
    collection_name=self.collection_name,
    requests=[
        rest.SearchRequest(
            vector=rest.NamedVector(
                name="text-dense",
                vector=query_embedding,
            ),
            limit=query.similarity_top_k,
            filter=query_filter,
            with_payload=True,
        ),
    ],
)
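One thing worth double-checking here: with hybrid enabled, the collection stores named vectors, so a hand-rolled query only matches the retriever's if it targets the same "text-dense" vector with an embedding from the same model. A rough sketch of the equivalent direct dense query, with the collection name and URL as placeholders:

Python
from qdrant_client import QdrantClient
from qdrant_client.http import models as rest


def direct_dense_search(query_embedding: list[float], top_k: int = 5):
    """Hand-rolled version of the dense request above against a
    hybrid-enabled collection (names are placeholders)."""
    client = QdrantClient(url="http://localhost:6333")
    return client.search_batch(
        collection_name="my_collection",
        requests=[
            rest.SearchRequest(
                vector=rest.NamedVector(
                    name="text-dense",  # the named vector the retriever targets
                    vector=query_embedding,
                ),
                limit=top_k,
                with_payload=True,
            ),
        ],
    )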
There is no default filter or preprocessing done to a string input passed to retriever.retrieve, correct? I am passing the exact same string to both the retriever and the Qdrant client and getting slightly different results (some overlap of retrieved nodes, albeit with different scores, but also some that don't match). I am not passing a filter to Qdrant and am wondering if that could somehow be the cause, if it is defaulting to something when making the retrieve call?
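One likely source of mismatch is the query vector itself rather than a filter: the retriever embeds the raw string on its own, so a direct Qdrant call only compares fairly if it uses the exact same embedding. A sketch of pulling that vector out of llama-index, assuming the index was built with the default Settings.embed_model:

Python
from llama_index.core import Settings

query_str = "the exact same string"

# get_query_embedding (not get_text_embedding) is what the retriever uses;
# some embedding models prepend a query instruction, so mixing the two
# changes both scores and ordering.
query_embedding = Settings.embed_model.get_query_embedding(query_str)

# Feed this into direct_dense_search(...) above; the two result sets should
# line up if nothing else differs.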
Yea, it shouldn't have any filter 🤔 I know that if your index is huge, the HNSW algorithm can introduce some variance in results
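If HNSW variance is the suspect, Qdrant can be asked to score exactly (brute force) instead of using the ANN index; a quick way to rule that out, reusing the placeholder names from the sketches above:

Python
from qdrant_client.http import models as rest


def exact_dense_request(query_embedding: list[float], top_k: int = 5) -> rest.SearchRequest:
    """Same dense request as above, but exact=True bypasses HNSW and scores
    every point; if results still differ from the retriever, the gap is not
    coming from the ANN index."""
    return rest.SearchRequest(
        vector=rest.NamedVector(name="text-dense", vector=query_embedding),
        limit=top_k,
        with_payload=True,
        params=rest.SearchParams(exact=True),
    )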
Yeah, I thought that as well and bugged the Qdrant Discord about it. The thing is, I get the same output from the Qdrant client's search_batch when pinging it multiple times in succession... could it be a version mismatch between the llama-index integration and the qdrant-client? I want to use the retriever but am now unsure how to reconcile the differences
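A quick way to compare installed versions on both sides (package names are the PyPI distributions; on newer llama-index releases the Qdrant integration ships as its own package):

Python
from importlib.metadata import PackageNotFoundError, version

for pkg in ("qdrant-client", "llama-index", "llama-index-vector-stores-qdrant"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")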
Is there a straightforward way to log or output the exact request being sent to Qdrant when using the retriever?
Hmmm, not sure. I would just set a breakpoint and inspect the arguments being used to ensure they are the same?
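One low-tech alternative to a breakpoint (not an official hook, just wrapping the method on the client instance you hand to the vector store):

Python
import functools

from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# Wrap the method the retriever ends up calling and print its arguments
# before delegating to the real implementation.
original_search_batch = client.search_batch


@functools.wraps(original_search_batch)
def logged_search_batch(*args, **kwargs):
    print("search_batch args:", args)
    print("search_batch kwargs:", kwargs)
    return original_search_batch(*args, **kwargs)


client.search_batch = logged_search_batch

# Pass this client into QdrantVectorStore; every retrieve() call now prints
# the exact collection name, named vector, filter, and limit being sent.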