OK, another question for the void: after I create a VectorStoreIndex and then try to run `query_engine = index.as_query_engine()`, I get an error about OpenAI keys (even though I'm using HuggingFace embeddings and Ollama):

```python
from llama_index.core import VectorStoreIndex, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

llm = Ollama(model="mistral", request_timeout=30.0)
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"
)
index = VectorStoreIndex.from_documents(docs)
```

How do I tell it to use Ollama for the LLM?
Add the llm to Settings with `Settings.llm = llm`