@kapa.ai how to get a structured Pydantic output here?

response = client.chat.completions.create(
    model=deployment_engine,
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": prompt},
    ],
    temperature=model_temperature,
    top_p=model_top_p,
    frequency_penalty=model_frequency_penalty,
    presence_penalty=model_presence_penalty,
)
Please stand by @navya9668 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To get a structured Pydantic output from your chat completion, you can use the OpenAIPydanticProgram module in LlamaIndex. It lets you define a chat prompt template and pass in your LLM together with the Pydantic model that describes the output you want. Here's an example:

Plain Text
from llama_index.program.openai import OpenAIPydanticProgram
from llama_index.core import ChatPromptTemplate
from llama_index.core.llms import ChatMessage

prompt = ChatPromptTemplate(
    message_templates=[
        ChatMessage(
            role="system",
            content="Your system message here",
        ),
        ChatMessage(
            role="user",
            content="Your user message here",
        ),
    ]
)

program = OpenAIPydanticProgram.from_defaults(
    output_cls=YourPydanticModel,  # replace with your Pydantic model
    llm=llm,  # replace with your LLM
    prompt=prompt,
    verbose=True,
)

output = program(your_input)  # replace with your input


In this example, replace YourPydanticModel with the Pydantic model you've defined for your structured output, llm with your language model, and your_input with the template variables you want to pass to the program.

After running this code, output will contain the desired structured data as an instance of your Pydantic model. You can inspect the result with output.dict().

Please note that this is a general example and you might need to adjust it according to your specific use case and the structure of your Pydantic model.
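For concreteness, here is a minimal sketch of what YourPydanticModel could look like and what the resulting output object behaves like. The Song model and its fields are hypothetical, purely for illustration; any Pydantic model works.

```python
from pydantic import BaseModel


class Song(BaseModel):
    """Hypothetical output schema; replace with your own fields."""
    title: str
    length_seconds: int


# You would pass output_cls=Song to OpenAIPydanticProgram.from_defaults.
# The program then returns a validated Song instance, e.g.:
song = Song(title="Example Track", length_seconds=215)
print(song.dict())  # {'title': 'Example Track', 'length_seconds': 215}
```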

Source: GitHub - jerryjliu/llama_index
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
(we have a channel for asking kapa btw, to not clog up the other channels, #🙋ask-kapa-gpt-index )