
Updated 7 months ago

How can I pass a Pydantic class in PromptTemplate to get structured output?

@kapa.ai How can I pass a Pydantic class in PromptTemplate to get structured output?
Please stand by @praveenks30#120294 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To get structured output using a Pydantic class in LlamaIndex, you can use the OpenAIPydanticProgram module. Here's a step-by-step guide:

  1. Define your Pydantic classes. These classes will represent the structure of the data you want to extract. For example:
Plain Text
from pydantic import BaseModel
from typing import List

class Song(BaseModel):
    title: str
    length_seconds: int

class Album(BaseModel):
    name: str
    artist: str
    songs: List[Song]

(Source: LlamaIndex Docs)

  2. Define a prompt template. This is a string that will be used to generate the prompt for the language model. You can use placeholders in this string that will be filled in with the actual values when you run the program. For example:
Plain Text
prompt_template_str = """\
Generate an example album, with an artist and a list of songs. \
Using the movie {movie_name} as inspiration.\
"""

(Source: LlamaIndex Docs)

  3. Create an instance of OpenAIPydanticProgram using the from_defaults method, passing in your Pydantic class and the prompt template string:
Plain Text
from llama_index.program.openai import OpenAIPydanticProgram

program = OpenAIPydanticProgram.from_defaults(
    output_cls=Album, prompt_template_str=prompt_template_str, verbose=True
)

(Source: LlamaIndex Docs)

  4. Run the program with the actual values for the placeholders in the prompt template:
Plain Text
output = program(movie_name="The Shining")

(Source: GitHub)
The output will be an instance of your Pydantic class, filled in with the data extracted by the language model.
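Assuming the call succeeds, that output is an ordinary Pydantic model instance. A minimal sketch of working with it (the album values below are made up for illustration, and model_dump assumes Pydantic v2):

```python
from typing import List
from pydantic import BaseModel

class Song(BaseModel):
    title: str
    length_seconds: int

class Album(BaseModel):
    name: str
    artist: str
    songs: List[Song]

# Illustrative stand-in for what program(movie_name="The Shining")
# might return; these values are made up, not real model output.
output = Album(
    name="Overlook",
    artist="The Caretakers",
    songs=[Song(title="Room 237", length_seconds=237)],
)

print(output.songs[0].title)  # typed field access
print(output.model_dump())    # convert to a plain dict (Pydantic v2)
```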
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai How can this be passed to PromptTemplate (from llama_index.core import PromptTemplate) ?
Let me check the knowledge base for you. One moment! :books:
----------------------
I'm sorry for any confusion, but based on the provided knowledge sources, there's no direct way to pass a Pydantic class to a PromptTemplate from llama_index.core.

The PromptTemplate class in LlamaIndex is used to format prompts for the language model. It takes a string with placeholders and provides methods to fill in those placeholders with actual values. Here's an example:

Plain Text
from llama_index.core import PromptTemplate

template = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)
qa_template = PromptTemplate(template)
prompt = qa_template.format(
    context_str="In a parallel universe called zebo, 1 + 1 is 3",
    query_str="what is 1 + 1 in universe zebo ?",
)

(Source: GitHub)

On the other hand, Pydantic classes are used to define data models with type annotations, and they can be used to parse and validate data. They are not directly related to the PromptTemplate class and cannot be passed to it.

If you want to use a Pydantic class to structure the output of a language model, you might want to look into the OpenAIPydanticProgram class in LlamaIndex, which allows you to do this. However, this is a separate process from creating and using a PromptTemplate.

If you have a specific use case in mind where you want to use a Pydantic class with a PromptTemplate, could you please provide more details? I might be able to give a more specific answer with more information.
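For background, the "Pydantic way" of structured output generally works by injecting the model's JSON schema into the prompt text and then validating the raw completion afterwards. A minimal sketch of that mechanism using only pydantic and the standard library (this shows the general idea, not LlamaIndex's exact implementation):

```python
import json
from typing import List
from pydantic import BaseModel

class Song(BaseModel):
    name: str
    artist: str

class Album(BaseModel):
    name: str
    songs: List[Song]

# 1. Append format instructions (the model's JSON schema) to the prompt text.
schema = json.dumps(Album.model_json_schema(), indent=2)
prompt = (
    "Generate an example album.\n"
    "Respond only with JSON matching this schema:\n" + schema
)

# 2. Stand-in for the LLM's raw text completion.
raw_completion = '{"name": "Hits", "songs": [{"name": "One", "artist": "A"}]}'

# 3. Validate the completion into the Pydantic class (Pydantic v2 API).
album = Album.model_validate_json(raw_completion)
print(album.songs[0].artist)
```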
----------------------
@kapa.ai Ok. As of now, I am relying on LangchainOutputParser to define the structured output using a response schema and then passing it to PromptTemplate as output_parser. Is there any other way to achieve the same without using output_parser, i.e. the Pydantic way?
Let me check the knowledge base for you. One moment! :books:
----------------------
Yes, you can achieve structured output using Pydantic classes without using an output parser. This can be done using the LLMTextCompletionProgram or OpenAIPydanticProgram classes in LlamaIndex. Here's how you can do it:

  1. Define your Pydantic class. This class will represent the structure of the data you want to extract. For example:
Plain Text
from pydantic import BaseModel
from typing import List

class Song(BaseModel):
    name: str
    artist: str

class Album(BaseModel):
    name: str
    songs: List[Song]


  2. Create an instance of LLMTextCompletionProgram or OpenAIPydanticProgram, passing your Pydantic class in via a PydanticOutputParser (or directly as the output_cls argument):
Plain Text
from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.program import LLMTextCompletionProgram

llm_program = LLMTextCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(output_cls=Album),
    prompt=chat_prompt_tmpl,  # your existing prompt template
    llm=llm,  # your LLM instance
)

(Source: LlamaIndex Docs)

or

Plain Text
from llama_index.program.openai import OpenAIPydanticProgram

program = OpenAIPydanticProgram.from_defaults(
    output_cls=Album, prompt_template_str=prompt_template_str, verbose=True
)

(Source: LlamaIndex Docs)

  3. Run the program. The output will be an instance of your Pydantic class, filled in with the data extracted by the language model.
This way, you can use Pydantic classes to structure the output of a language model without using an output parser.
----------------------
@kapa.ai ok.. I am using AzureOpenAI to create llm instance and using below to achieve structured output :

output_parser = LangchainOutputParser(lc_output_parser)
prompt = PromptTemplate(template = template, output_parser = output_parser)
message = prompt.from_message(input_list = input_list)
response = llm.chat(message)
print(response.message.content)

Now, if I need to use a Pydantic class instead of output_parser, how can I fit it into my code?
Let me check the knowledge base for you. One moment! :books:
----------------------
To use a Pydantic class instead of an output parser, you can modify your code as follows:

  1. Define your Pydantic class. This class will represent the structure of the data you want to extract. For example:
Plain Text
from pydantic import BaseModel
from typing import List

class Song(BaseModel):
    name: str
    artist: str

class Album(BaseModel):
    name: str
    songs: List[Song]


  2. Import the necessary classes from LlamaIndex:
Plain Text
from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core import PromptTemplate


  3. Create an instance of PydanticOutputParser, passing in your Pydantic class:
Plain Text
output_parser = PydanticOutputParser(output_cls=Album)


  4. Create an instance of PromptTemplate, passing in your template and the Pydantic output parser:
Plain Text
prompt = PromptTemplate(template=template, output_parser=output_parser)


  5. Build the messages and run your program. Note that PromptTemplate has no from_message method; format_messages is what produces chat messages for llm.chat:
Plain Text
messages = prompt.format_messages(input_list=input_list)
response = llm.chat(messages)
print(response.message.content)


This way, the output of the language model will be parsed into an instance of your Pydantic class.

Please note that this is a general guide and you might need to adjust the code to fit your specific use case. Also, the Pydantic class should match the structure of the data you expect to get from the language model.
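One caveat, sketched under the assumption that the model returned valid JSON: llm.chat still hands back plain text in response.message.content, so to get an actual Album object you can validate that text yourself, which is roughly the step PydanticOutputParser's parse method performs:

```python
from typing import List
from pydantic import BaseModel

class Song(BaseModel):
    name: str
    artist: str

class Album(BaseModel):
    name: str
    songs: List[Song]

# Stand-in for the response.message.content string returned by llm.chat().
content = (
    '{"name": "Kind of Blue",'
    ' "songs": [{"name": "So What", "artist": "Miles Davis"}]}'
)

# Validate the raw text into the Pydantic class (Pydantic v2 API).
album = Album.model_validate_json(content)
print(album.name, "-", album.songs[0].name)
```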
----------------------