Transitioning from LLM OpenAI to AzureOpenAI gpt4o deployment with token output limitations

I transitioned from the OpenAI LLM to an AzureOpenAI gpt-4o deployment, but I can't get the model to produce more than 1000 tokens. I haven't set max_tokens, and I've confirmed it's None in Settings.llm. Not sure what setting I'm missing here. Has anyone run into the same thing?
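For reference, here's roughly how the LLM is configured (deployment name, endpoint, and API version below are placeholders, not my real values):

```python
from llama_index.core import Settings
from llama_index.llms.azure_openai import AzureOpenAI

# max_tokens is intentionally not passed, so it stays None in Settings.llm
Settings.llm = AzureOpenAI(
    engine="my-gpt-4o-deployment",      # Azure deployment name (placeholder)
    model="gpt-4o",
    azure_endpoint="https://my-resource.openai.azure.com/",  # placeholder
    api_key="...",                      # placeholder
    api_version="2024-02-15-preview",   # placeholder
)
```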
Pretty sure if max_tokens isn't set, it gets sent as None to the API. Maybe OpenAI handles that differently than Azure? Have you tried actually setting it to a value like 2000?
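E.g. passing it straight to the constructor (2000 is just an arbitrary example, the rest are placeholders):

```python
from llama_index.core import Settings
from llama_index.llms.azure_openai import AzureOpenAI

Settings.llm = AzureOpenAI(
    engine="my-gpt-4o-deployment",      # placeholder deployment name
    model="gpt-4o",
    azure_endpoint="https://my-resource.openai.azure.com/",  # placeholder
    api_key="...",                      # placeholder
    api_version="2024-02-15-preview",   # placeholder
    max_tokens=2000,                    # explicit output cap instead of None
)
```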
Yeah, doesn't make a difference.
I think it was the multimodal Azure LLM; it has max_new_tokens set to 300 by default.
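If that's what you're using, passing max_new_tokens explicitly when you build the multimodal LLM should lift the cap. Something like this (class and import path per the llama-index Azure multimodal integration; the values are placeholders):

```python
from llama_index.multi_modal_llms.azure_openai import AzureOpenAIMultiModal

mm_llm = AzureOpenAIMultiModal(
    engine="my-gpt-4o-deployment",      # Azure deployment name (placeholder)
    model="gpt-4o",
    azure_endpoint="https://my-resource.openai.azure.com/",  # placeholder
    api_key="...",                      # placeholder
    api_version="2024-02-15-preview",   # placeholder
    max_new_tokens=2000,                # override the 300-token default
)
```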
👀 ohhh, multimodal