When using Query Pipelines is there a way to format the outputs of the intermediate steps?

At a glance

The community member is using Query Pipelines and wants to format the outputs of intermediate steps, specifically to emit a JSON string and save the intermediate output. Prompting alone produces JSON only inconsistently. The comments suggest using the tool-calling API (OpenAIPydanticProgram) instead, or activating JSON mode by setting it in the LLM constructor. The community member then hit an error with GPT-4, indicating that the "response_format" parameter is not supported with that model; it worked with GPT-3.5 Turbo, and they are now trying GPT-4 Turbo.

When using Query Pipelines is there a way to format the outputs of the intermediate steps? At some steps, I would like to output a JSON string and save the intermediate output. I am currently using the prompt to generate JSON output, but it is not consistent. Thanks in advance.
9 comments
Curious what you mean by "not consistent"?

You could have a query pipeline component to parse the LLM output and/or correct the JSON?
Yes, I am doing exactly that (correcting the JSON in a pipeline component) and it works well for gpt-3.5, but with gpt-4 I keep getting extra text before and after the JSON that my script can't handle. I can improve my script for cleaning up the JSON output, but I just wanted to know if there is a way to invoke gpt-4's JSON output feature from the pipeline.
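
For reference, a minimal sketch of such a cleanup component, assuming a recent llama-index with FnComponent available; the brace-matching regex is illustrative, not anything from the thread:

import json
import re

from llama_index.core.query_pipeline import FnComponent

def parse_json_output(llm_text: str) -> dict:
    # Keep everything between the first "{" and the last "}",
    # dropping any prose or markdown fences the model added around it.
    match = re.search(r"\{.*\}", llm_text, re.DOTALL)
    if match is None:
        raise ValueError(f"no JSON object found in LLM output: {llm_text!r}")
    return json.loads(match.group(0))

# Wrap the parser so it can sit between steps in a QueryPipeline.
json_parser = FnComponent(fn=parse_json_output)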
Have you tried just using the tool-calling API instead? (i.e. OpenAIPydanticProgram?)

I think you can activate JSON mode by setting it in the constructor too:
llm = OpenAI(..., additional_kwargs={"response_format": {"type": "json_object"}})
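
Filled out as a runnable snippet (the import path assumes a recent llama-index install, and the model name is just an example):

from llama_index.llms.openai import OpenAI

# JSON mode constrains the model to emit a syntactically valid JSON object.
llm = OpenAI(
    model="gpt-3.5-turbo-1106",
    additional_kwargs={"response_format": {"type": "json_object"}},
)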

There's actually a comparison here too (they pass the response format into the chat call; additional_kwargs works too):
https://docs.llamaindex.ai/en/stable/examples/llm/openai_json_vs_function_calling/?h=json
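
For the tool-calling route, a minimal OpenAIPydanticProgram sketch (the Song schema and prompt are placeholders loosely following that comparison page, not the thread author's code):

from pydantic import BaseModel
from llama_index.llms.openai import OpenAI
from llama_index.program.openai import OpenAIPydanticProgram

class Song(BaseModel):
    """Placeholder output schema; swap in your own fields."""
    title: str
    length_seconds: int

program = OpenAIPydanticProgram.from_defaults(
    output_cls=Song,
    prompt_template_str="Generate an example song about {topic}.",
    llm=OpenAI(model="gpt-3.5-turbo"),
)
song = program(topic="rainy days")  # returns a validated Song instance

Because the schema is enforced through the function-calling API, the result parses into a typed object even when the model would otherwise wrap its JSON in extra prose.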
Yes, I saw that tutorial, but I didn't realize that I can set JSON mode in the constructor. That looks promising and I am going to give that a try. Thank you!
I'm getting this error when using gpt-4:
"Invalid parameter: 'response_format' of type 'json_object' is not supported with this model."
I'll see what happens with gpt-3.5...
I wonder if they changed the param name? That's kind of odd
Seems like it might only be supported on specific models, yeah
Yeah it worked for 3.5 turbo. Trying 4 turbo now. Thanks.
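
As far as I know, OpenAI's JSON mode shipped with the -1106 model generation, so base gpt-4 rejects response_format while gpt-3.5-turbo-1106 and the gpt-4-turbo variants (e.g. gpt-4-1106-preview) accept it:

from llama_index.llms.openai import OpenAI

# Assumed to work: JSON mode on a turbo-generation model.
llm = OpenAI(
    model="gpt-4-1106-preview",
    additional_kwargs={"response_format": {"type": "json_object"}},
)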