Find answers from the community

Updated 4 months ago

Is there a way to return the prompt used

At a glance

The community members are discussing how to access the actual prompt used by the OpenAIPydanticProgram, rather than just the template. One community member explains that the prompt is the template formatted with the provided values, and there are no special tricks involved. Another community member asks if there is a way to see the entire call passed to OpenAI, to understand how the Pydantic objects are incorporated into the function call. A third community member responds that the Pydantic models are converted into a schema using output_cls.schema(), and that schema is then used in the API call.

Is there a way to return the prompt used by OpenAIPydanticProgram?
That is, not the template, but the actual prompt that is passed to OpenAI at run time.
4 comments
The prompt is just the template, formatted with the values you gave. No special tricks here.

Plain Text
prompt_template_str = """\
Generate an example album, with an artist and a list of songs. \
Using the movie {movie_name} as inspiration.\
"""

# `program` is an OpenAIPydanticProgram built from this template
output = program(movie_name="The Shining")

# The prompt actually sent is just the template with the values filled in:
formatted_prompt = prompt_template_str.format(movie_name="The Shining")
Thanks. Maybe a better question: is there a way to see the whole call that is passed to OpenAI? I want to understand how the Pydantic objects are being incorporated into the function-calling ability.
Plain Text
import openai
openai.log = "debug"
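One caveat: `openai.log = "debug"` only exists in openai-python versions before 1.0. On 1.x, standard Python logging (or the `OPENAI_LOG` environment variable) serves the same purpose. A minimal sketch:

```python
import logging

# Send log output to the console; DEBUG level includes request/response details
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)

# Alternatively, set the environment variable OPENAI_LOG=debug before starting Python.
```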


Essentially, the Pydantic models just get converted into a schema using output_cls.schema(), and that gets shoved into the API call.
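To make that concrete, here is a hedged sketch of the conversion described above. The Album/Song models and the payload shape are illustrative, not LlamaIndex's literal code:

```python
from typing import List
from pydantic import BaseModel

class Song(BaseModel):
    title: str
    length_seconds: int

class Album(BaseModel):
    name: str
    artist: str
    songs: List[Song]

# JSON schema for the output class
# (Pydantic v1: .schema(); v2: .model_json_schema())
schema = (
    Album.model_json_schema()
    if hasattr(Album, "model_json_schema")
    else Album.schema()
)

# The schema is attached to the request as a function the model is asked to
# call; the exact payload shape here is a sketch of that idea.
function_payload = {
    "name": schema["title"],
    "description": "Output a well-formed Album object.",
    "parameters": schema,
}
```

The model then responds with arguments matching that schema, which get parsed back into an Album instance.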
Awesome, thanks!