OK, another question for the void: after I create a VectorStoreIndex and then run query_engine = index.as_query_engine(), I get an error about missing OpenAI API keys, even though I'm using HuggingFace embeddings and Ollama:
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import Settings, VectorStoreIndex
from llama_index.llms.ollama import Ollama
llm = Ollama(model="mistral", request_timeout=30.0)
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"
)
index = VectorStoreIndex.from_documents(docs)  # docs loaded earlier with SimpleDirectoryReader
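My best guess so far: the embed model is set, but the LLM isn't, so as_query_engine() falls back to the default OpenAI LLM. A minimal sketch of what I think the full setup should look like (assuming Settings.llm is the right hook, and assuming docs is loaded as above):

```python
from llama_index.core import Settings, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# Set BOTH the LLM and the embed model globally; if Settings.llm is left
# unset, LlamaIndex defaults to OpenAI and demands an API key.
Settings.llm = Ollama(model="mistral", request_timeout=30.0)
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"
)

index = VectorStoreIndex.from_documents(docs)  # docs loaded earlier
query_engine = index.as_query_engine()  # should now use Ollama, not OpenAI
```

Is that it, or is there something else that still reaches for OpenAI?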