Hi guys, small knowledge question: if I'm creating a vector store index from my documents with a service context, does the LLM matter? I'm currently creating two vector indexes (testing Mistral against Llama), but I noticed that the retrieved context seems to be the same.
It only matters if you use your service context in a step or object that actually requires an LLM, like an agent or a query engine's response synthesis. Building the index itself only runs the embedding model, so the LLM has no effect there; just make sure the embedding model is set within the service context. That's also why your two indexes retrieve the same context: same embedding model, same vectors.
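To make that concrete, here's a toy sketch (plain Python, not the LlamaIndex API; `build_index`, `embed`, and `retrieve` are made-up names for illustration). It shows why swapping the LLM can't change retrieval: the stored vectors depend only on the embedding model, and the LLM name is never consulted during indexing or retrieval.

```python
def embed(text):
    # Stand-in embedding model: folds character codes into a 4-dim vector.
    # Purely illustrative; a real setup would call an embedding model here.
    vec = [0.0] * 4
    for i, ch in enumerate(text):
        vec[i % 4] += ord(ch)
    return vec

def build_index(documents, llm_name):
    # llm_name is stored but never used while embedding the documents,
    # mirroring how the service context's LLM is ignored at build time.
    return {"llm": llm_name, "vectors": [(doc, embed(doc)) for doc in documents]}

def retrieve(index, query, top_k=1):
    # Retrieval also uses only embeddings (nearest by squared distance).
    qv = embed(query)
    scored = sorted(
        index["vectors"],
        key=lambda dv: sum((a - b) ** 2 for a, b in zip(qv, dv[1])),
    )
    return [doc for doc, _ in scored[:top_k]]

docs = ["llamas live in the Andes", "mistral is a cold wind"]
idx_a = build_index(docs, llm_name="mistral")
idx_b = build_index(docs, llm_name="llama")

# Same embedding model => identical vectors => identical retrieved context,
# no matter which LLM each index was nominally built with.
assert idx_a["vectors"] == idx_b["vectors"]
assert retrieve(idx_a, "wind") == retrieve(idx_b, "wind")
```

The LLM only enters the picture afterwards, when the retrieved chunks are handed to it to synthesize an answer, so that's the stage where Mistral vs. Llama would actually produce different output.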