Ensuring Independent API Calls to Ollama
mostlyAdi
last week
Is there a way to ensure no shared context is maintained between calls, and every API call to Ollama is treated as independent?
WhiteFang_Jr
last week
You can use a chat engine:
https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/
This gives you the ability to maintain context across calls.
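For reference, a minimal sketch of what that looks like, assuming LlamaIndex's SimpleChatEngine and the llama-index-llms-ollama integration (the model name is just an example):

from llama_index.core.chat_engine import SimpleChatEngine
from llama_index.llms.ollama import Ollama

# Point LlamaIndex at a locally served Ollama model.
llm = Ollama(model="llama3.2:3b")
chat_engine = SimpleChatEngine.from_defaults(llm=llm)

# Each chat() call is appended to the engine's memory, so later calls
# see the earlier turns, i.e. context is maintained across calls.
first = chat_engine.chat("What is the capital of France?")
second = chat_engine.chat("What did I just ask you?")  # answers using turn one

Note that this maintains context, which, as the replies below point out, is the opposite of what the question asks for.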
mostlyAdi
edited last week
I do not want to maintain context at all; every call should be unique.
Right now I use something like this:
from ollama import Client

client = Client()  # chat() is an instance method, so create a client first

client.chat(
    model="llama3.2:3b",
    messages=[
        {"role": "user", "content": question},
    ],
    options={
        "frequency_penalty": 0,
        "stop": ["Thank you", "Best regards"],  # Ollama's stop-sequence option is "stop"
    },
)
WhiteFang_Jr
last week
oh damn, I missed the 'no'
WhiteFang_Jr
last week
I think if you keep updating the message block once you receive the response (i.e. remove the previous messages and then send only the new query),
Ollama would treat that as an independent request.
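A minimal sketch of that pattern, reusing the ollama Python client from the snippet above (the helper name is just an example):

from ollama import Client

client = Client()

def ask_independently(question: str) -> str:
    # Build a brand-new messages list on every call; Ollama's chat endpoint
    # only ever sees what you send in `messages`, so a request with no prior
    # turns is effectively stateless.
    response = client.chat(
        model="llama3.2:3b",
        messages=[{"role": "user", "content": question}],
    )
    return response["message"]["content"]

print(ask_independently("What is the capital of France?"))
print(ask_independently("What did I just ask you?"))  # no memory of the first call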
mostlyAdi
last week
I agree, but in my case I feel the responses improve when I have a continuous chat.
mostlyAdi
last week
Have you found something in the documentation which shows this?
WhiteFang_Jr
7 days ago
This looks right to me tbh