I'm trying to implement Hybrid Search

I'm trying to implement Hybrid Search with Qdrant and I've successfully set up my collection.
Then in my code I set up the vector store with enable_hybrid=True.
I'm then building the nodes manually, adding them to the vector store, and persisting:

```python
vector_store = QdrantVectorStore(index_name, client=client, enable_hybrid=True, batch_size=20)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
...
vector_store.add(nodes)
index.storage_context.persist()
```

Would the add method also generate both the dense and the sparse vector?
The .add() method assumes you already have dense embeddings attached to each node. It will generate the sparse embeddings, though.
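For reference, here's a minimal sketch of that contract (the import path assumes a recent llama-index layout; embed_model and vector_store are the objects from the snippets in this thread):

```python
from llama_index.core.schema import TextNode

node = TextNode(text="some chunk of text")
# The dense embedding must be attached to the node before calling .add().
node.embedding = embed_model.get_text_embedding(node.get_content())
# With enable_hybrid=True, .add() generates the sparse vectors itself and
# upserts both the dense and sparse representations for each node.
vector_store.add([node])
```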
Yep the dense one is attached. Thanks
Ok we've just tried to execute this and I get the following error:
```
INFO:indexer_llamaindex_v3_qdrant:HTML to Qdrant Indexer failed with exception: Unexpected Response: 400 (Bad Request)
Raw response content:
b'{"status":{"error":"Wrong input: Not existing vector name error: "},"time":0.053041753}'
```
I've initialized the collection as described in the notebook:
```python
client.create_collection(
    collection_name="test",
    vectors_config={
        "text-dense": models.VectorParams(
            size=1024,  # openai vector size
            distance=models.Distance.COSINE,
        )
    },
    sparse_vectors_config={
        "text-sparse": models.SparseVectorParams(
            index=models.SparseIndexParams()
        )
    },
)
```
Here's the full text embedding code:
```python
nodes = []
for j, chunk in enumerate(text_chunks):
    node = TextNode(text=chunk, metadata=metadata)
    node.embedding = embed_model.get_text_embedding(
        node.get_content(metadata_mode=MetadataMode.ALL)
    )
    nodes.append(node)
vector_store.add(nodes)
```
any idea what the problem could be?
According to the LlamaIndex docs:
NOTE: The names of vector configs must be text-dense and text-sparse if creating a hybrid index.

However, the error message suggests that it tries to push a vector without a vector name. Could it be a LlamaIndex issue?
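One way to narrow this down (a sketch, assuming the standard qdrant-client API; "test" is the collection name used above) is to check which named vectors the collection actually has and compare them with what the vector store tries to write:

```python
# Inspect the vector configs Qdrant has for this collection.
info = client.get_collection("test")
print(info.config.params.vectors)         # should be a dict keyed by "text-dense"
print(info.config.params.sparse_vectors)  # should be a dict keyed by "text-sparse"
```

If either of those is unnamed or keyed differently, upserts from the vector store would fail with exactly this "Not existing vector name" error.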
That's pretty weird 🤔
I would try without creating the collection
llama-index should handle that
Yep, that fixed the issue. It's weird: I use the exact same syntax as in the package, but somehow it fails.
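For anyone hitting the same thing later, here's a sketch of the flow that worked, with the manual client.create_collection() step dropped (import paths assume a recent llama-index layout; client, nodes, and embed_model are the objects from the snippets above):

```python
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Don't pre-create the collection: with enable_hybrid=True, QdrantVectorStore
# creates it with the vector names it expects on the first add.
vector_store = QdrantVectorStore("test", client=client, enable_hybrid=True, batch_size=20)

# Nodes already carry dense embeddings; the sparse vectors are generated here.
# Qdrant persists the data server-side, so no explicit persist step is needed.
vector_store.add(nodes)

# Hybrid querying over the same store.
index = VectorStoreIndex.from_vector_store(vector_store, embed_model=embed_model)
query_engine = index.as_query_engine(
    vector_store_query_mode="hybrid",
    similarity_top_k=2,
    sparse_top_k=12,
)
```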
I agree, a little strange lol