Pydantic

@Logan M, I just upgraded to llama-index v0.10 and discovered that the query engine `output_cls` parameter is no longer compatible with pydantic v2 models, while this was working in v0.9.
Is this known / to be expected?

E.g., with these packages:
Plain Text
llama-index==0.10.20
llama-index-embeddings-openai==0.1.6
llama-index-llms-openai==0.1.12
llama-index-program-openai==0.1.4
llama-index-vector-stores-postgres==0.1.3
python-dotenv==1.0.1
pydantic~=2.6.3

The following script fails:
Plain Text
from dotenv import load_dotenv
from llama_index.core import Document, VectorStoreIndex
from llama_index.llms.openai import OpenAI
from pydantic import BaseModel

load_dotenv()

class MyModel(BaseModel):
    """Data model for a content."""

    content: str

documents = [Document(text="A brief history of chocolate")]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(
    output_cls=MyModel,
    llm=OpenAI(model="gpt-3.5-turbo")
)

response = query_engine.query("What is it about ?")

print("content: ", response.content)
print("response type: ", type(response.response))
# `model_dump` is available on pydantic v2 models but not on v1 models.
print("response dump: ", response.response.model_dump())

Here's the error:
Plain Text
AttributeError: 'BaseModel' object has no attribute 'model_dump'
2 comments
That's expected. I think I responded to the GitHub issue. Use the v1 layer.
Yes, handled as part of the GitHub issue here: https://github.com/run-llama/llama_index/issues/12243. Thanks!
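For reference, a minimal sketch of the suggested "v1 layer" workaround. This assumes pydantic v2 is installed, which bundles its old API under the `pydantic.v1` namespace; a model defined this way serializes with `.dict()` rather than the v2-only `.model_dump()`:
Plain Text
```python
# Sketch of the workaround: define the output model with pydantic's
# bundled v1 compatibility API instead of the v2 API.
from pydantic.v1 import BaseModel


class MyModel(BaseModel):
    """Data model for a content."""

    content: str


m = MyModel(content="A brief history of chocolate")

# v1 models serialize with .dict(); model_dump() does not exist on them,
# which is exactly the AttributeError seen above.
print(m.dict())
```
A model built this way can then be passed as `output_cls` on a pre-v2-compatible llama-index version; the exact bridge behaviour depends on the llama-index release you are on, so check the linked issue for the current guidance.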