----------------------
To get a structured Pydantic output from your chat completion, you can use the `OpenAIPydanticProgram` module in LlamaIndex. This module allows you to define a prompt template and pass in the LLM and Pydantic model you've defined. Here's an example of how you can do it:
```python
from llama_index.program.openai import OpenAIPydanticProgram
from llama_index.core import ChatPromptTemplate
from llama_index.core.llms import ChatMessage

# Build a chat prompt; placeholders like {topic} are filled at call time
prompt = ChatPromptTemplate(
    message_templates=[
        ChatMessage(
            role="system",
            content="Your system message here",
        ),
        ChatMessage(
            role="user",
            content="Your user message here, with a {topic} placeholder",
        ),
    ]
)

program = OpenAIPydanticProgram.from_defaults(
    output_cls=YourPydanticModel,  # replace with your Pydantic model
    llm=llm,  # replace with your LLM
    prompt=prompt,
    verbose=True,
)

# Keyword arguments fill the matching template variables in the prompt
output = program(topic="your input here")
```
In this example, `YourPydanticModel` should be replaced with the Pydantic model you've defined for your structured output, and `llm` should be replaced with your language model. Note that the program is called with keyword arguments (here `topic=...`) that fill the matching placeholders in your prompt template, rather than with a single positional input.
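For concreteness, here is a minimal sketch of what those two placeholders could look like, assuming you use the OpenAI LLM and an illustrative `Song` schema (both the model name and its fields are assumptions for this sketch, not part of the original answer):

```python
from pydantic import BaseModel
from llama_index.llms.openai import OpenAI


# Illustrative output schema -- define whatever fields your use case needs
class Song(BaseModel):
    title: str
    artist: str


# Any LlamaIndex-compatible LLM works; OpenAI is shown here as an assumption
llm = OpenAI(model="gpt-4o-mini")
```

With these in place, you would pass `output_cls=Song` and `llm=llm` to `from_defaults`.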
After running this code, `output` will contain the desired structured data as a Pydantic model. You can inspect the results with `output.dict()`.
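A small caveat: on Pydantic v2, `.dict()` still works but is deprecated in favor of `.model_dump()`, which returns the same plain dictionary:

```python
data = output.model_dump()  # Pydantic v2 replacement for output.dict()
print(data)
```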
Please note that this is a general example and you might need to adjust it according to your specific use case and the structure of your Pydantic model.
Source:
GitHub - jerryjliu/llama_index (https://github.com/jerryjliu/llama_index)
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------