This guide doesn't seem to work anymore

This guide doesn't seem to work anymore as "as_structured_llm" doesn't seem to exist for OpenAI
https://docs.llamaindex.ai/en/stable/examples/structured_outputs/structured_outputs/#2-plug-into-rag-pipeline
as_structured_llm definitely works
Maybe make sure you have the latest versions in your env
I'm using it right now πŸ˜…
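For reference, the pattern under discussion looks roughly like this: a sketch assuming a recent `llama-index` with the OpenAI integration installed; `Song` and `extract_song` are made-up names for illustration. Per the linked docs, `.raw` on the response holds the parsed pydantic object.

```python
from pydantic import BaseModel, Field


class Song(BaseModel):
    """A song with its title and length."""

    title: str = Field(..., description="Title of the song")
    length_seconds: int = Field(..., description="Length in seconds")


def extract_song(prompt: str) -> Song:
    # Requires `pip install llama-index-llms-openai` and an OPENAI_API_KEY.
    from llama_index.llms.openai import OpenAI

    # Wrap the LLM so every completion is parsed into a Song instance
    sllm = OpenAI(model="gpt-4o-mini").as_structured_llm(output_cls=Song)
    return sllm.complete(prompt).raw  # .raw is the parsed Song
```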
I got that to work, but it's still giving me the 0 tools error
I think maybe I used as_structured_llm on the model name variable the first time 🀦

But it's still giving me the Expected 1 tool call got 0 error
Expected at least one tool call, but got 0 tool calls
Kind of expected tbh -- try a better prompt
A better system prompt or query?
I'm not quite following how to define "better"
Something that would encourage the llm to think it needs to use your output class

It could also be that your output class is poorly defined (bad classname, bad docstring, bad field names, bad field descriptions)
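In other words, the schema itself acts as part of the prompt: the class name, docstring, and field descriptions are sent to the model as the tool definition. A sketch of the contrast, using hypothetical classes:

```python
from pydantic import BaseModel, Field


# Vague: the model gets almost no signal about when to call this tool
class Output(BaseModel):
    a: str
    b: float


# Descriptive: every name and description nudges the model toward a tool call
class TopicExtraction(BaseModel):
    """Topics discussed in a call transcript, with confidence scores."""

    topic: str = Field(..., description="Name of the topic discussed")
    confidence: float = Field(..., description="Confidence between 0 and 1")
```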
Ok. Is it the system prompt or the query text that I need to change to make it think it needs to use the output class?
In the example, it doesn't set a prompt
Could be both, I'm conflating terms, but both have impact

It's kind of a combination of what you give to the llm as input, along with the schema of the tool
what does your output class look like?
Python
from typing import List
from pydantic.v1 import BaseModel, Field

class Topic(BaseModel):
    name: str = Field(..., title="Topic")
    confidence: float = Field(
        ...,
        description="Confidence value between 0-1 of the correctness of the topic.",
    )
    confidence_explanation: str = Field(
        ..., description="Explanation for the confidence score"
    )

class Questions(BaseModel):
    question: str = Field(..., title="Question")
    answer: str = Field(..., title="Answer")

class CustomerDetail(BaseModel):
    detail: str = Field(..., title="Custom Detail", description="Detail provided by the customer on the call.")

class TranscriptPeople(BaseModel):
    name: str = Field(..., title="Person Name")

class Transcript(BaseModel):
    response: str = Field(..., title="Response", description="The answer to the question.")
    id: str = Field(..., title="Transcript ID")
    description: str = Field(..., title="Transcript Description")
    duration: int = Field(..., title="Duration of the transcript in seconds")
    number_of_questions: int = Field(..., title="Number of questions", description="Number of questions asked in the transcript")
    topics: List[Topic] = Field(..., title="Topics List", description="List of topics discussed in the transcript")
    questions: List[Questions] = Field(..., title="Questions List", description="List of questions asked and answers in the transcript")
    people: List[TranscriptPeople] = Field(..., title="People List", description="List of people in the transcript")
    customer_details: List[CustomerDetail] = Field(..., title="Customer Details", description="List of customer details in the transcript")
I did say "Summarize the <id> transcript". It worked for 1 transcript, but not another
Oh, it worked if I switched it to "Summarize the details of the transcript with the id c-0a4bbee6-cdfd-4982-9d43-0469275632a9"
I find I get better performance if I don't nest pydantic classes, but I'm not sure flattening this would help either (it's pretty broad already)
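"Flattening" here would mean replacing the nested lists with scalar or delimited fields so the tool schema has no sub-objects. A hypothetical sketch (not necessarily an improvement for this schema, as noted above):

```python
from pydantic import BaseModel, Field


class FlatTranscript(BaseModel):
    """Flattened summary of a call transcript (no nested models)."""

    response: str = Field(..., description="The answer to the question")
    topics: str = Field(..., description="Comma-separated topics discussed")
    people: str = Field(..., description="Comma-separated people in the transcript")
```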
Do I need to use pydantic.v1? I got an error previously if I didn't.
And I saw you commented about it on a bunch of github issues (and in the chat history here).
in v0.11.x of llama-index, you don't need to
We moved to pydantic v2 fully
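So with llama-index 0.11+, a plain pydantic v2 import should work; a minimal check that the v2 API behaves as expected:

```python
from pydantic import BaseModel, Field  # v2 import, no pydantic.v1 needed


class Person(BaseModel):
    name: str = Field(..., title="Person Name")


p = Person(name="Logan")
print(p.model_dump())  # pydantic v2 API: {'name': 'Logan'}
```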
Ok. This is pretty magical. Thanks @Logan M