Ensuring Independent API Calls to Ollama

Is there a way to ensure no shared context is maintained between calls, and every api call to ollama is treated as independent ?
You can use chat engine: https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/

This gives you the ability to maintain context.
I do not want to maintain context at all, every call should be unique
Right now I use something like this
from ollama import Client

client = Client()
client.chat(
    model="llama3.2:3b",
    # Only the current question is sent; no earlier turns are included.
    messages=[
        {"role": "user", "content": question},
    ],
    options={
        "frequency_penalty": 0,
        "stop": ["Thank you", "Best regards"],
    },
)
oh damn, I missed the 'no' πŸ˜†
I think if you rebuild the message block each time you receive a response (i.e. remove the previous messages and send only the new query),

Ollama would treat each request as independent.
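To make the pattern concrete, here is a minimal sketch of the "fresh messages list per call" idea. The `fake_chat` function is a hypothetical stand-in (not part of the `ollama` library) so the statelessness is visible without a running server; for real use you would pass `Client().chat` instead:

```python
# Hypothetical stand-in for client.chat, just to make the pattern runnable;
# swap in ollama.Client().chat against a local server for real use.
def fake_chat(model, messages, options=None):
    return {"message": {"role": "assistant",
                        "content": f"echo: {messages[-1]['content']}"}}

def ask(question, chat=fake_chat):
    # Build the messages list from scratch on every call: no history is
    # carried over, so the server sees each request as independent.
    return chat(
        model="llama3.2:3b",
        messages=[{"role": "user", "content": question}],
    )

first = ask("What is 2+2?")
second = ask("And what did I just ask you?")
# The second call carries no memory of the first: its messages list
# contained only the new question.
```

Because each `ask` constructs its own single-element `messages` list, nothing from earlier turns can leak into a later request.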
I agree, but in my case I feel the responses improve when I have a continuous chat.
Have you found something in the documentation which shows this?
This looks right to me tbh 👀