Is Strict = True Default For Structured Outputs For Function Calling With Openai (Ex. FunctionCallingProgram)?
At a glance
The community members discuss the default value of the strict parameter in OpenAI's FunctionCallingProgram. The consensus is that strict=True is not the default, because it can add latency and does not work with every Pydantic class, though it can be set manually when calling the OpenAI API. Members also report cases where FunctionCallingProgram fails with an error (0 tool calls) while OpenAI's regular response-format structured output succeeds, suggesting that structured output through function calling is handled within the LlamaIndex library itself. Finally, a member asks how to set the system prompt for FunctionCallingProgram, and another provides a solution using ChatMessagePromptTemplate and the tool_choice parameter.
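The point that strict=True "may not work with all pydantic classes" comes from OpenAI's strict-mode schema constraints: every property must be listed as required, and additionalProperties must be explicitly false. The sketch below checks a Pydantic model's generated JSON schema against those two constraints; the helper name is_strict_compatible is hypothetical and for illustration only, not part of any library.

```python
from typing import Optional

from pydantic import BaseModel, ConfigDict


def is_strict_compatible(model_cls: type[BaseModel]) -> bool:
    """Hypothetical check: does this model's JSON schema satisfy
    OpenAI strict-mode constraints (top level only)?"""
    schema = model_cls.model_json_schema()
    props = set(schema.get("properties", {}))
    required = set(schema.get("required", []))
    # Strict mode: every property required, extra keys forbidden.
    return props == required and schema.get("additionalProperties") is False


class Song(BaseModel):
    # extra="forbid" makes Pydantic emit "additionalProperties": false
    model_config = ConfigDict(extra="forbid")
    title: str
    length_seconds: int


class Album(BaseModel):
    name: str
    notes: Optional[str] = None  # has a default, so not in "required"


print(is_strict_compatible(Song))   # -> True
print(is_strict_compatible(Album))  # -> False
```

Models like Album above are the kind that fail under strict mode unless optional fields are reworked (e.g. made required with a nullable type) and extra fields are forbidden.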
I noticed that for some data models and prompts, extraction via FunctionCallingProgram fails with an error (0 tool calls), but I can get proper output using OpenAI's response format (i.e. not function calling). I'm assuming it's the structured output through function calling that is supported exclusively in LlamaIndex, correct? Maybe if I set strict=True with FunctionCallingProgram I'll get the same output as calling OpenAI directly?
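For reference, when calling OpenAI directly, strict structured outputs for function calling are opted into per tool via a "strict": true flag on the function definition. The sketch below builds such a tool payload from a Pydantic model without making an API call; the Invoice model and record_invoice tool name are made-up examples.

```python
from pydantic import BaseModel, ConfigDict


class Invoice(BaseModel):
    # Strict mode requires "additionalProperties": false in the schema.
    model_config = ConfigDict(extra="forbid")
    customer: str
    total: float


# Tool definition as it would appear in a Chat Completions request;
# "strict": True opts this single tool into structured outputs.
tool = {
    "type": "function",
    "function": {
        "name": "record_invoice",
        "description": "Record a parsed invoice.",
        "parameters": Invoice.model_json_schema(),
        "strict": True,
    },
}

print(tool["function"]["parameters"]["additionalProperties"])  # -> False
```

If the 0-tool-calls failure disappears when the schema meets these constraints and strict is enabled, that would support the guess that the discrepancy is about strict mode rather than LlamaIndex itself.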