Partial response

Has anyone come across a scenario where the OpenAI completion API returns a partial response? The context has the entire information, and the API somehow skips over the last few lines and summarizes only about 70% of the context.
5 comments
Do you mean the response is cut off?
Yup, sorry, just figured it out. I was resetting max_tokens in my code to a lower value.
Was scratching my head for a while... until I read my own code, lol. Sorry about that.
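For anyone hitting the same symptom: a quick way to confirm a cut-off response, assuming the standard OpenAI chat-completions response shape, is to check the choice's `finish_reason`. When the model runs out of `max_tokens`, it reports `"length"` instead of `"stop"`. The mock response below is only an illustration of that shape, not a real API call:

```python
# Detect a truncated completion via finish_reason, assuming the standard
# OpenAI chat-completions response structure.

def was_truncated(response: dict) -> bool:
    """True if the completion stopped because it hit the max_tokens limit."""
    return response["choices"][0]["finish_reason"] == "length"

# Mock response for illustration; a real one comes back from the API call.
mock = {"choices": [{"finish_reason": "length", "message": {"content": "..."}}]}
print(was_truncated(mock))  # True
```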
By the way, is there a way to set this number dynamically, based on token consumption in the prompt? I guess I could count the prompt tokens first and then set a number. What do you think? I feel a fixed number might be limiting.
I know text-davinci-003 let you set max_tokens to -1 (which works like you describe), but they removed that feature from gpt-3.5/4 last time I checked πŸ€”
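Absent a built-in -1 option, the idea from the question above can be done client-side: estimate the prompt's token count, subtract it from the model's context window, and pass the remainder as max_tokens. The sketch below is a rough illustration only: the 4096-token window is an assumed limit (check your model's actual one), and the characters-divided-by-4 estimate is a crude stand-in for a real tokenizer such as tiktoken.

```python
# Sketch: derive max_tokens dynamically from prompt size.
# Assumptions: a 4096-token context window (adjust per model) and a
# rough ~4-characters-per-token estimate instead of a real tokenizer.

CONTEXT_WINDOW = 4096  # assumed model limit, not authoritative

def estimate_tokens(text: str) -> int:
    """Very rough estimate: about 4 characters per token for English text."""
    return max(1, len(text) // 4)

def dynamic_max_tokens(prompt: str, margin: int = 50, floor: int = 16) -> int:
    """Give the completion whatever budget the prompt leaves free,
    minus a safety margin, never below a small floor."""
    remaining = CONTEXT_WINDOW - estimate_tokens(prompt) - margin
    return max(floor, remaining)

prompt = "Summarize the following context..." * 10
print(dynamic_max_tokens(prompt))  # 3961
```

With an exact tokenizer in place of `estimate_tokens`, the same arithmetic gives the completion the full remaining budget instead of a fixed guess.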