Updated 2 months ago

If I'm using `Settings.llm=Ollama(model="mistral")`, is there a specific embedding model I need?

If I'm using `Settings.llm = Ollama(model="mistral")` for my LLM, is there a specific embedding model I need to use when I'm trying to make a `VectorStoreIndex` from the documents? I was using HuggingFace: `Settings.embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")` ... does that make sense?
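A minimal sketch of the setup being described, assuming the modern split llama-index package layout (`llama-index-llms-ollama` and `llama-index-embeddings-huggingface` installed) and a local Ollama server with the `mistral` model pulled. The `"data"` directory is a hypothetical example path; this is configuration wiring, not something run here.

```python
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# LLM used for answering; embedding model used only for indexing/retrieval.
# The two are independent -- they do not need to share a tokenizer or vocabulary.
Settings.llm = Ollama(model="mistral")
Settings.embed_model = HuggingFaceEmbedding(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Build the index; documents are embedded with the embed_model above.
documents = SimpleDirectoryReader("data").load_data()  # hypothetical path
index = VectorStoreIndex.from_documents(documents)
```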
6 comments
Yeah you can use any embedding model
So it doesn't matter if the LLM model was trained on different tokens? How does it know what the token embeddings "mean" if it's a different embedding model than the one the LLM was trained on?
Only the retrieved text is passed to the LLM
ohhhh I see, so the embeddings are only being used to look up the Documents/Nodes, and then the actual text content from the document is what's being used as input/context for LLM?
Yeah basically
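The flow described above can be sketched in pure Python with toy vectors: embeddings are used only to rank stored nodes by similarity to the query, and the *text* of the top match is what ends up in the LLM prompt. The two-dimensional vectors and node texts here are illustrative, not real model output.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "index": each node pairs a stored embedding with its source text.
nodes = [
    ([1.0, 0.0], "Ollama runs LLMs locally."),
    ([0.0, 1.0], "MiniLM produces sentence embeddings."),
]

query_embedding = [0.9, 0.1]  # pretend this came from the embedding model

# Retrieval step: rank nodes by similarity to the query embedding.
best = max(nodes, key=lambda n: cosine(n[0], query_embedding))

# Only the retrieved *text* goes into the prompt -- the LLM never sees the
# embedding vectors, which is why the embedding model can differ from
# whatever the LLM was trained with.
prompt = f"Context: {best[1]}\nAnswer using the context above."
print(prompt.splitlines()[0])  # → Context: Ollama runs LLMs locally.
```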