Find answers from the community

Updated last month

Error

At a glance

The post contains partial code that uses a MOVIE Pydantic model, an OpenAI language model, and a PydanticOutputParser to generate a movie recommendation for a given genre. The community member prints the prompt template and runs a QueryPipeline to get the output, which raises an error.

In the comments, another community member asks for the full error message and the version of llama-index being used. They also note that query pipelines are deprecated and suggest calling the language model directly instead, providing an example of how to do that.

There is no explicitly marked answer in the post or comments.

The partial code in question (imports added for completeness):

from pydantic import BaseModel, Field
from llama_index.core import PromptTemplate
from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI

class MOVIE(BaseModel):
    name: str = Field(..., description="name of the movie")
    year: int = Field(..., description="year of movie release")

llm = OpenAI(model="gpt-3.5-turbo")
output_parser = PydanticOutputParser(MOVIE)

json_prompt_str = """\
Suggest a movie for the genre {genre}. Here is the JSON schema to follow.
"""
# format() appends the parser's JSON-schema instructions to the prompt
json_prompt_str = output_parser.format(json_prompt_str)
json_prompt_template = PromptTemplate(json_prompt_str)

print(json_prompt_template)

p = QueryPipeline(chain=[json_prompt_template, llm, output_parser], verbose=True)
output = p.run(genre="action")
print(str(output))
1 comment
What is the full error? What version of llama-index?

FYI, query pipelines are deprecated -- I would just directly use the LLM in this case.

Plain Text
response = llm.complete(json_prompt_str)
obj = output_parser.parse(response.text)
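To make the suggested flow concrete without needing an API key, here is a rough stdlib-only sketch of what the parse step does: the LLM returns JSON text matching the schema, and the parser validates it into the model's fields. The Movie dataclass and parse_movie helper below are hypothetical stand-ins for the Pydantic model and PydanticOutputParser, not llama-index APIs.

```python
import json
from dataclasses import dataclass

@dataclass
class Movie:
    name: str
    year: int

def parse_movie(response_text: str) -> Movie:
    # The output parser's job: decode the JSON payload
    # and coerce/validate it into the model's fields.
    data = json.loads(response_text)
    return Movie(name=str(data["name"]), year=int(data["year"]))

# A raw completion that follows the schema...
raw = '{"name": "Mad Max: Fury Road", "year": 2015}'
movie = parse_movie(raw)
print(movie.name, movie.year)  # Mad Max: Fury Road 2015
```

If the model wraps the JSON in prose or a code fence, the real output parser strips that before decoding; the sketch assumes a clean JSON string.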