purple crow
·
Hi, I wanted a small clarification on the Qdrant integration as a vector store for LlamaIndex.

I am able to query through the retriever and get semantically relevant responses, but when I inspect the documents stored, the vectors show up as 0.

I assume this means that the vectors indexed are 0 only. Can someone comment on what could be wrong here?
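
For context, a minimal sketch of the kind of setup being described, assuming the llama-index-vector-stores-qdrant integration, an in-memory Qdrant client, and a local HuggingFace embedding model; the embed model passed at build time is what computes the vectors upserted into the collection:

import qdrant_client
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(location=":memory:")  # in-memory client, for the sketch only
vector_store = QdrantVectorStore(client=client, collection_name="demo")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# The embed model supplied here computes the vectors that are stored
# in Qdrant alongside the document text.
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
index = VectorStoreIndex.from_documents(
    [Document(text="hello qdrant")],
    storage_context=storage_context,
    embed_model=embed_model,
)

One thing worth checking when inspecting stored documents with qdrant_client directly: scroll and retrieve return vectors as null unless with_vectors=True is passed, which can make correctly stored vectors look empty.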
4 comments
purple crow
·
Query

# load_or_create_index is a user-defined helper that builds or loads the index
index = load_or_create_index(
    storage_context=storage_context,
    embed_model=embed_model,
    documents=documents,
)
query_engine = index.as_query_engine(verbose=True)
response = query_engine.query("What is so funny?")

Does the query engine need the embed model as well?
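
For context, the retriever behind the query engine has to embed the incoming query string with the same model used at index time, so an embed model is needed at query time too. A minimal sketch, assuming the llama_index.core Settings API, a local HuggingFace embedding model, and an LLM configured for response synthesis (the default resolver expects OpenAI credentials):

from llama_index.core import Document, Settings, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Setting the embed model once, globally, lets both index construction
# and the query engine (which embeds the query string) pick it up.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = [Document(text="A llama walks into a bar.")]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(verbose=True)
response = query_engine.query("What is so funny?")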
1 comment
Hello, just starting to build a knowledge graph / long-term memory for my LLM agents.

I am evaluating LlamaIndex for the same.

A couple of quick questions:

  1. Is there an S3 bucket loader in LlamaIndex? (See the sketch after this list.)
  2. I don't understand this pattern:
     vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
     storage_context = StorageContext.from_defaults(vector_store=vector_store)
     Why do I need to pass the vector store both when loading the index and when building the storage context, when the storage context itself already knows which vector store is used? (See the sketch after this list.)
  3. Do I need to store the embeddings in some blob storage as well? In case my vector store goes down, I might need to recreate all of them.
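
On 1 and 2, a minimal sketch under stated assumptions: the llama-index-readers-s3 and llama-index-vector-stores-chroma integration packages are installed, AWS credentials come from the usual boto3 environment, and the bucket and collection names are hypothetical. On the read path, VectorStoreIndex.from_vector_store rebuilds the index from the vector store alone, so neither the documents nor a StorageContext have to be passed again:

import chromadb
from llama_index.core import VectorStoreIndex
from llama_index.readers.s3 import S3Reader
from llama_index.vector_stores.chroma import ChromaVectorStore

# 1. Load documents straight from an S3 bucket (hypothetical name).
documents = S3Reader(bucket="my-agent-memory").load_data()

# 2. Reattach to an existing Chroma collection and rebuild the index
#    directly from the vector store; the StorageContext is only needed
#    on the write path, when documents and vectors are first persisted.
chroma_client = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = chroma_client.get_or_create_collection("agent_memory")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
index = VectorStoreIndex.from_vector_store(vector_store)

On 3, keeping the source documents somewhere durable (for example in S3) is usually enough to re-embed after a vector store loss; a separate blob-storage copy of the embeddings themselves is a disaster-recovery optimization rather than a requirement.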
3 comments