Global settings:
```python
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

# Use a locally served Mistral model as the default LLM for all LlamaIndex components.
Settings.llm = Ollama(model="mistral", request_timeout=60.0)
```
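Once the global default is set, any query engine built afterwards picks it up without further configuration; for example (assuming `index` is an existing `VectorStoreIndex`):
```python
# No llm argument needed here: the engine falls back to Settings.llm,
# i.e. the Ollama "mistral" instance configured above.
query_engine = index.as_query_engine()
print(query_engine.query("Summarize the indexed documents."))
```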
Local settings:
```python
# Override the LLM for a single query engine; the global default is unaffected.
index.as_query_engine(llm=Ollama(model="mistral", request_timeout=60.0))
```
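For a fully local end-to-end sketch (assumptions: the `llama-index-llms-ollama` and `llama-index-embeddings-ollama` packages are installed, an Ollama server is running on the default port, and the `mistral` and `nomic-embed-text` models have been pulled), building the index and overriding the LLM per engine looks roughly like this:
```python
from llama_index.core import Document, Settings, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

# Embeddings must also be local, otherwise LlamaIndex falls back to OpenAI embeddings.
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Build a small in-memory index from inline documents (no data directory needed).
documents = [Document(text="Ollama serves local LLMs over an HTTP API on port 11434.")]
index = VectorStoreIndex.from_documents(documents)

# Per-engine override: only this query engine uses this Ollama instance;
# the global Settings.llm (if any) is left untouched.
local_llm = Ollama(model="mistral", request_timeout=60.0)
query_engine = index.as_query_engine(llm=local_llm)
print(query_engine.query("What does Ollama do?"))
```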