
Hi. A quick question about using a local Ollama instance with the Mistral model together with local storage. On the docs page, I only saw usage examples with OpenAI API keys.

I am, however, wondering how I would make this work with a locally running instance of Ollama serving the Mistral model.

The code below works and returns results, so Ollama itself is set up correctly.

Plain Text
from llama_index.llms.ollama import Ollama

llm = Ollama(model="mistral", request_timeout=30.0)
resp = llm.complete("Who is Paul Graham?")
print(resp)


A storage folder was also created and contains files, so persisting the index worked properly as well.
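For reference, the storage folder was presumably produced by building and persisting an index roughly like the sketch below; the data directory, the choice of a local HuggingFace embed model, and the persist path are assumptions, not details from the post:

Plain Text
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Assumed choice: a local embedding model, so no OpenAI key is required
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Load local documents and build a vector index over them
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Write the index files (docstore, vector store, etc.) into ./storage
index.storage_context.persist(persist_dir="./storage")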

What am I missing here? I have been reading the documentation and haven't found an example for this, so I may very well have overlooked something in the docs.

Does anybody know how to achieve this?
Attachment: image.png
You'll need to configure an embed model in addition to the LLM:
Plain Text
query_engine = index.as_query_engine(llm=Ollama(model="mistral", request_timeout=30.0))
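Putting it together, a minimal sketch of reloading the persisted index and querying it fully locally might look like this; the persist directory and the use of HuggingFaceEmbedding as the local embed model are assumptions:

Plain Text
from llama_index.core import Settings, StorageContext, load_index_from_storage
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# Local embed model instead of the default OpenAI embeddings (assumed choice)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Reload the previously persisted index from the local storage folder
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

# Query with the local Ollama Mistral model
query_engine = index.as_query_engine(llm=Ollama(model="mistral", request_timeout=30.0))
print(query_engine.query("Who is Paul Graham?"))

Note that the embed model configured at query time should be the same one used when the index was originally built; otherwise the stored vectors won't match the query embeddings.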

This solved it. Thank you.