Hi guys, small knowledge question: if I'm creating a vector store index from my documents with a service context, does the LLM matter? I'm currently creating two vector indexes (I'm testing between Mistral and Llama), but I noticed that the retrieved context seems to be the same for both.
3 comments
In general it doesn't matter for the embedding step; for that, you have to change the embed model: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#embeddings
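A minimal sketch of what that looks like, assuming a pre-0.10 llama_index where ServiceContext is the configuration object (the data path and embed model name are placeholders, not from the thread):

```python
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings import HuggingFaceEmbedding

# Load some documents (path is a placeholder).
documents = SimpleDirectoryReader("./data").load_data()

# The embed model determines the vectors stored in the index;
# the LLM plays no part in this step.
service_context = ServiceContext.from_defaults(
    embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
```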

The LLM is typically only used after retrieval, for the response synthesis (with some exceptions).
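To make that split concrete, here's a rough sketch (same ServiceContext-era API as above): retrieval alone never calls the LLM, while a full query call retrieves first and then hands the nodes to the LLM for synthesis.

```python
# Retrieval only: pure embedding similarity search, no LLM call.
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("What does the document say about pricing?")

# Full query: retrieval first, then the LLM synthesizes an answer
# from the retrieved nodes.
query_engine = index.as_query_engine()
response = query_engine.query("What does the document say about pricing?")
```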
It only matters if you use your service context in a step or object that requires an LLM, like an agent, for example. For embedding it shouldn't; just make sure the embedding model is set within the service context.
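Applied to the original Mistral-vs-Llama setup, a sketch of why the retrieved context comes out identical (the Ollama model tags here are assumptions, not from the thread): two service contexts that share one embed model produce the same vectors, so only the synthesized answers differ.

```python
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.llms import Ollama

documents = SimpleDirectoryReader("./data").load_data()  # placeholder path
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Same embed model in both contexts => same vectors => same retrieved context.
mistral_ctx = ServiceContext.from_defaults(llm=Ollama(model="mistral"), embed_model=embed_model)
llama_ctx = ServiceContext.from_defaults(llm=Ollama(model="llama2"), embed_model=embed_model)

mistral_index = VectorStoreIndex.from_documents(documents, service_context=mistral_ctx)
llama2_index = VectorStoreIndex.from_documents(documents, service_context=llama_ctx)
```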