Hi, I'm trying to do the same thing as this person (Issue #17677 on GitHub) but I'm running into errors. If I call sllm.complete(prompt) or sllm.chat() with a list of ChatMessage objects, I get: 'tool_choice must either be a named tool, "auto" or "none".'
If I set tool_choice="auto" or tool_choice="none", I instead get: "Expected at least one tool call, but got 0 tool calls."
I copied the code from the documentation as well as the version recommended in the GitHub issues. What could be the problem?
I also tried both is_function_calling_model=True and False.
Hey, I'm running into an issue using structured outputs with a query engine (following the guide "Query Engine with Pydantic Outputs"). I'm using vLLM as my backend and the OpenAILike module to connect. I'm currently getting Error 400: ValueError: 'tool_choice' must either be a named tool, "auto", or "none". I saw a recommendation to pass tool_choice="none" when using structured_llm.chat(), but since I'm going through a query engine I can't do that. What's the best way to handle this? Do I have to manually create a workflow for it?