


At a glance

The community member asks how to use the LlamaIndex query engine without passing documents or nodes to it. They have already created embeddings for each document in MongoDB, along with some metadata, and want to run the query engine against the index stored in MongoDB, since MongoDB's collections.aggregate is not enough to produce the summary the query engine generates. They followed a tutorial and have provided a link to it.

In the comments, another community member suggests using the VectorStoreIndex class to create an index directly from the vector store, or setting the vector store as the default via a StorageContext. However, there is no explicitly marked answer in the comments.

@Mikko How can we use the LlamaIndex query engine without passing documents or nodes to it? We already have embeddings created for each document in MongoDB, and each record also includes some metadata.

We already did that step before; now we just want to use the query engine with the index stored in MongoDB.

MongoDB's collections.aggregate won't suffice because we want the summary that the query_engine creates.



We followed this tutorial:
https://colab.research.google.com/drive/136MSwepvFgEceAs9GN9RzXGGSwOk5pmr?usp=sharing
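
For context, here is a minimal sketch of the setup the question describes: wrapping the existing MongoDB Atlas collection (which already holds the embeddings) in a LlamaIndex vector store. The connection string, database, collection, and index names are placeholders, and the keyword for the Atlas search-index name has changed across llama-index versions (index_name in older releases, vector_index_name in newer ones).

Python
# Hypothetical setup: point a LlamaIndex vector store at the existing
# MongoDB Atlas collection that already contains the embeddings.
# All connection details and names below are placeholders.
from pymongo import MongoClient
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

mongodb_client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")
vector_store = MongoDBAtlasVectorSearch(
    mongodb_client,
    db_name="my_db",            # placeholder database name
    collection_name="my_docs",  # placeholder collection with the embeddings
    index_name="vector_index",  # newer releases call this vector_index_name
)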
You can try with:

Python
# Works with llama-index >= 0.10, where core classes live in llama_index.core;
# older releases import them from llama_index directly.
from llama_index.core import StorageContext, VectorStoreIndex

# Option 1: build the index directly from the existing vector store
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

# Option 2: wire the vector store into a storage context and pass no documents
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents([], storage_context=storage_context)
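
Either way, the index is built on top of the stored embeddings without re-ingesting any documents, so a query engine can be created from it directly. A short usage sketch (the question string is a placeholder):

Python
# Create a query engine over the index and ask for a synthesized answer;
# retrieval and summarization run against the embeddings already in MongoDB.
query_engine = index.as_query_engine()
response = query_engine.query("Summarize the key points across the documents.")
print(response)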