@Logan M, I just upgraded to llama-index v0.10 and discovered that the query engine's output_cls parameter is no longer compatible with pydantic v2 models, although this was working in v0.9.
Is this known / expected?

E.g., with these packages:
Plain Text
llama-index==0.10.20
llama-index-embeddings-openai==0.1.6
llama-index-llms-openai==0.1.12
llama-index-program-openai==0.1.4
llama-index-vector-stores-postgres==0.1.3
python-dotenv==1.0.1
pydantic~=2.6.3

The following script fails:
Plain Text
from dotenv import load_dotenv
from llama_index.core import Document, VectorStoreIndex
from llama_index.llms.openai import OpenAI
from pydantic import BaseModel

load_dotenv()

class MyModel(BaseModel):
    """Data model for a content."""

    content: str

documents = [Document(text="A brief history of chocolate")]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(
    output_cls=MyModel,
    llm=OpenAI(model="gpt-3.5-turbo")
)

response = query_engine.query("What is it about?")

print("content: ", response.content)
print("response type: ", type(response.response))
# The `model_dump` method is available on pydantic v2 models but not on v1.
print("response dump: ", response.response.model_dump())

Here's the error:
Plain Text
AttributeError: 'BaseModel' object has no attribute 'model_dump'
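
For reference, here is a minimal workaround sketch that avoids the crash on my side. It assumes (not confirmed) that the object stored in response.response is a pydantic v1 model, as the traceback suggests, and that llama-index 0.10 re-exports a v1-compatible BaseModel through its bridge module:
Plain Text
# Workaround sketch (assumption on my side): define the output model on the
# BaseModel that llama-index itself re-exports, so the query engine and the
# model agree on the pydantic version.
from llama_index.core.bridge.pydantic import BaseModel as LlamaBaseModel

class MyModel(LlamaBaseModel):
    """Data model for a content."""

    content: str

# ... build the index and query engine exactly as above ...

# On a pydantic v1 model the v1 serialization API is available,
# so `.dict()` works where `.model_dump()` raises AttributeError.
obj = response.response
print("response dump: ", obj.dict())

That only sidesteps the error, though; the question remains whether output_cls is supposed to accept plain pydantic v2 models in v0.10.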