Updated 8 months ago

Hello,
I am currently testing against a document and need to obtain structured output from the query engine. I am following the example at https://docs.llamaindex.ai/en/stable/module_guides/querying/structured_outputs/query_engine/, but unfortunately I keep getting a blank object back. Below is my Pydantic model; I am using it to generate a set of questions from the details retrieved from the vector store.

I'm using an OpenAI model and OpenAI embeddings for this example.

Plain Text
from typing import List

from llama_index.core.bridge.pydantic import BaseModel, Field


class Questions(BaseModel):
    """Data model for a question, which has four options, one correct option, and an explanation for the correct option."""

    question: str = Field(..., description="natural language question")
    options: List[str] = Field(..., description="four options for the question")
    correct_option: str = Field(..., description="correct option for the question")
    explanation: str = Field(..., description="explanation for the correct option")

Plain Text
query_engine = index.as_query_engine(
    response_mode="tree_summarize", output_cls=Questions, llm=Settings.llm
)
4 comments
This works for me πŸ‘€ Do you have the latest version of LlamaIndex?
Yes, I'm using the latest one.
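A quick stdlib-only way to confirm which version is actually installed (the distribution names below are assumptions — newer installs split the project into a `llama-index` meta-package and `llama-index-core`):

```python
from importlib.metadata import PackageNotFoundError, version

# Both distribution names are assumptions; adjust to match your install.
for pkg in ("llama-index", "llama-index-core"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```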
Attachment: image.png
Here's my test, it works pretty well
Plain Text
from typing import List

from llama_index.core import Document, VectorStoreIndex
from llama_index.core.bridge.pydantic import BaseModel, Field


class Questions(BaseModel):
    """Data model for a question, which has four options, one correct option, and an explanation for the correct option."""

    question: str = Field(..., description="natural language question")
    options: List[str] = Field(..., description="four options for the question")
    correct_option: str = Field(..., description="correct option for the question")
    explanation: str = Field(..., description="explanation for the correct option")

index = VectorStoreIndex.from_documents([Document.example()])

query_engine = index.as_query_engine(
    response_mode="tree_summarize", output_cls=Questions,
)

response = query_engine.query("Give me a question related to the retrieved context.")

# check attributes
print(response.question)
print(response.options)

# print entire pydantic object
print(response.response)


If you are using an open-source LLM, though, it may not be capable enough to produce the correct object (the code above uses the default gpt-3.5-turbo).
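When a weaker model does emit something but the engine still yields a blank object, it can help to sanity-check the raw JSON against the expected schema by hand. This is just a stdlib sketch — `check_question_payload` is a hypothetical helper, not a LlamaIndex API:

```python
import json
from typing import List

# Fields the Questions model above declares as required.
REQUIRED_FIELDS = {"question", "options", "correct_option", "explanation"}


def check_question_payload(raw: str) -> List[str]:
    """Return a list of problems found in a raw JSON payload that is
    supposed to match the Questions schema; empty list means it looks OK."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    options = data.get("options")
    if isinstance(options, list) and len(options) != 4:
        problems.append(f"expected 4 options, got {len(options)}")
    return problems


# An empty object -- the "blank object" symptom -- fails the check:
print(check_question_payload("{}"))
```

Running the check on the model's raw output usually makes it obvious whether the LLM is omitting required fields or returning malformed JSON.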
Thank you so much for this. I will dig more into this.