vLLM Structured Outputs

Hi, I'm trying to do the same thing as this person (Issue #17677 on GitHub) but I'm running into errors. If I call sllm.complete(prompt) or sllm.chat(ChatMessage[]), I get: 'tool_choice must either be a named tool, "auto" or "none".'

If I set tool_choice to "auto" or "none", I get: "Expected at least one tool call, but got 0 tool calls."

I copied the code from the documentation as well as the version recommended in the GitHub issues. What could be the problem?

I also tried it with is_function_calling_model=True and with False.
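For context, here's a minimal sketch of the kind of setup presumably in play (the `Song` model name and its fields are hypothetical stand-ins for whatever Pydantic model the structured LLM was created with). A structured-output wrapper typically converts the Pydantic model into an OpenAI-style "function" tool definition like this:

```python
from pydantic import BaseModel, Field

# Hypothetical output schema -- stand-in for the real Pydantic model
# behind sllm.
class Song(BaseModel):
    title: str = Field(description="Song title")
    length_seconds: int = Field(description="Length in seconds")

# Roughly the tool definition a structured-output wrapper would build
# from the model and attach to the chat request:
tool_def = {
    "type": "function",
    "function": {
        "name": Song.__name__,
        "description": Song.__doc__ or "",
        "parameters": Song.model_json_schema(),
    },
}

print(tool_def["function"]["name"])
print(sorted(tool_def["function"]["parameters"]["properties"]))
```

If that tool definition (or the matching tool_choice) never makes it into the outgoing request, the server will understandably complain.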
5 comments
Could be a bug? Did vLLM update something? Are you using OpenAILike? (If you are, it seems like vLLM isn't following the OpenAI spec for some reason?)
It's going to need a deeper debug -- looking at the actual API call being made and comparing that to whatever vLLM expects.
I'm out this week though, so I can't look into it at the moment.
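For reference when comparing the outgoing request against the spec: OpenAI-style chat completions accept tool_choice as the string "auto" or "none", or as a named-tool object of a fixed shape (the shapes the error message above lists). A quick sanity check, with the "Song" tool name being hypothetical:

```python
# Per the OpenAI chat-completions spec, a named tool_choice has this
# exact wrapper shape:
def named_tool_choice(tool_name: str) -> dict:
    return {"type": "function", "function": {"name": tool_name}}

def is_valid_tool_choice(tc) -> bool:
    """Check a request's tool_choice against the shapes the error message lists."""
    if tc in ("auto", "none"):
        return True
    return (
        isinstance(tc, dict)
        and tc.get("type") == "function"
        and isinstance(tc.get("function"), dict)
        and isinstance(tc["function"].get("name"), str)
    )

print(is_valid_tool_choice(named_tool_choice("Song")))  # True
print(is_valid_tool_choice("auto"))                     # True
print(is_valid_tool_choice({"name": "Song"}))           # False -- missing the wrapper object
```

If the client is sending a bare name or something else in tool_choice, that would produce exactly the first error; if it's sending "auto" but no tools array, that would explain the "0 tool calls" error.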
Ok, thanks. Yeah, it seems like when I do .complete() it doesn't send the Pydantic struct, and when I do .chat() it formats the input correctly but the response isn't a tool call. I think it may be a problem with the actual API call.