
Updated 7 months ago


@Mikko How can we use the LlamaIndex query engine without passing documents or nodes to it? We already have the embeddings created for each document in MongoDB, along with some metadata.

We already did that before; now we just want to use the query engine with the index stored in MongoDB.

MongoDB's collections.aggregate won't suffice, because we want the summary that query_engine creates.



We followed this tutorial:
https://colab.research.google.com/drive/136MSwepvFgEceAs9GN9RzXGGSwOk5pmr?usp=sharing
1 comment
You can try with:

Plain Text
from llama_index.core import VectorStoreIndex, StorageContext

# Option 1: build the index directly from the existing vector store
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

# Option 2: use a storage context and pass an empty document list
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents([], storage_context=storage_context)
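Once the index is rebuilt from the existing vector store, you can query it as usual and get the synthesized summary. A minimal sketch, assuming `vector_store` is an already-configured MongoDB vector store object and that credentials for the underlying LLM and embedding model are set in the environment:

```python
from llama_index.core import VectorStoreIndex

# Assumes `vector_store` already points at the MongoDB collection
# holding the precomputed embeddings and metadata.
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

# The query engine retrieves the top-k most similar nodes and
# synthesizes a natural-language answer/summary over them.
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("Summarize what these documents say.")
print(response)
```

No documents are passed anywhere here; retrieval runs entirely against the embeddings already stored in MongoDB.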