Find answers from the community

Uzair
Joined September 25, 2024
How can I add metadata while creating a Qdrant vector store index?
5 comments
I do have an idea about the Qdrant UI, but my concern is why I am not getting embeddings in Qdrant, as shown in the image.
1 comment
Can someone help clarify my doubts?

After creating nodes:
Plain Text
nodes = splitter.get_nodes_from_documents(documents)


At this stage, node embeddings have not yet been generated.

Embeddings are created when we build the index, right?

Plain Text
vector_store = QdrantVectorStore(client=QDRANT_CLIENT, collection_name=collection_name)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(
    nodes,
    storage_context=storage_context,
    insert_batch_size=10,
    embed_model=embed_model,
)


So how can we view our embeddings? Also, in Qdrant, it shows 'embedding:null'. Why is that?
2 comments
Hi, I am creating an index but my session keeps crashing. Can anyone help me figure out the problem?

Here is the embedding model I am using:
Plain Text
# Embedding Model Setting
Settings.embed_model = HuggingFaceEmbedding(
    model_name="jinaai/jina-embeddings-v2-base-es"
)

This model outputs 768 embedding dimensions.

Here I am creating the QdrantVectorStore index:
Plain Text
vector_store = QdrantVectorStore(client=QDRANT_CLIENT, collection_name=collection_name)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)

But it crashes
3 comments
Hi, I am using a custom model from Hugging Face to generate embeddings, and I am using Qdrant to store my documents along with their embeddings, as shown below:
Plain Text
MODEL = "mixedbread-ai/mxbai-embed-large-v1"

reader = SimpleDirectoryReader(input_dir="/content/data")
documents = reader.load_data()
embed_model = HuggingFaceEmbedding(model_name=MODEL)
vector_store = QdrantVectorStore(client=QDRANT_CLIENT, collection_name=collection_name)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    embed_model=embed_model,
)

Now I want to make a retriever that uses the same model as the embedding model. Then, to synthesize the final response, I want to use GPT. How can I build a retriever with the same embedding model I used for the document embeddings?
3 comments
Is there a difference between a summary index and a document summary index? Can anyone help me better understand these two?
9 comments
OK, so apparently this documentation explains how to define choices for routing. But I am having a problem: when I query the bot, it doesn't route to the right query engine. Is there any way to make the choices more robust by providing extra details, as we do in semantic routers by creating a route and then adding utterances that steer queries in the correct direction?
2 comments