
Updated 8 months ago


I have a question for all of you. Feel free to reply, and if you can respond quickly, that would be great. When we use a vector store index in the backend, what is used to create the embeddings? And if we don't provide an API key, will it still create embeddings? Where do we specify the API key for the vector store index? Can someone clarify how this works?
11 comments
It defaults to OpenAI's text-embedding-ada-002.
It generates embeddings with that model, then inserts them into the vector DB.
We have to provide an API key, otherwise it won't work, because OpenAI's text-embedding-ada-002 is not open source. Where do we specify the API key for this model?
Yeah, it picks up the API key from your env variables,
or you can manually configure the embedding model.
One more thing: what is the main job of a vector store index? If it's just using a third-party model to create embeddings and storing them somewhere else, what is its actual purpose, and why are there different kinds of indexes with different use cases?
Thanks in advance if you can help with this as well.
The vector store index just orchestrates a lot of stuff
  • connecting to your vector db
  • chunking documents into nodes
  • generating embeddings
  • creating a retriever or query engine for you

Can anyone explain why LlamaIndex is considered better for indexing and retrieval than the LangChain framework? What makes it different from LangChain? I was asked this question in an interview, so a proper answer would really help.