Updated 2 months ago

How to increase waiting time?

I'm using a local LLM model from ollama + qdrant:

llm = Ollama(model="model")
...
query_engine = index.as_query_engine()
response = query_engine.query("query")


but keep getting:

TimeoutError: timed out
...
httpcore.ReadTimeout: timed out
...
httpx.ReadTimeout: timed out


I've confirmed that the Ollama server is running and that I can access http://localhost:11434.

I've also confirmed that I can chat with the model from the terminal using the ollama run command.

I also tried the raw LLM (without Qdrant) and it worked, though it occasionally threw the same error.
3 comments
llm = Ollama(model="model", request_timeout=30)
30s is the default
@Logan M Ah shoot! Thank you!
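A fuller sketch of the fix above: since 30 seconds is the default, pass a larger request_timeout to wait longer. The import path and the 120-second value here are assumptions; adjust both for your llama-index version and hardware.

```python
from llama_index.llms.ollama import Ollama  # import path varies by llama-index version

# request_timeout is in seconds; slow local models can easily exceed
# the 30s default, so raise it to give generation time to finish.
llm = Ollama(model="model", request_timeout=120.0)
```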