how to write a query pipeline to get formatted output with OpenAI llm

@kapa.ai how to write a query pipeline to get formatted output with OpenAI llm
Please stand by @navya9668 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To write a query pipeline that produces formatted output with an OpenAI LLM in LlamaIndex, you can follow these steps:

  1. Define a prompt template. This is a string that will be used to generate the query for the LLM. You can use placeholders in the string that will be replaced with actual values when the query is run.
Plain Text
prompt_str = "Please generate related movies to {movie_name}"
prompt_tmpl = PromptTemplate(prompt_str)


  2. Create an instance of the OpenAI LLM. You can specify the model and other parameters in the constructor.
Plain Text
llm = OpenAI(model="gpt-3.5-turbo")


  3. Create a query pipeline by chaining together the prompt template and the LLM. The chain parameter is a list of components that run in sequence; the output of each component is passed as input to the next.
Plain Text
p = QueryPipeline(chain=[prompt_tmpl, llm], verbose=True)


  4. Run the query pipeline with the actual values for the placeholders in the prompt template.
Plain Text
output = p.run(movie_name="The Departed")


  5. Print the output.
Plain Text
print(str(output))


You can also view the inputs and outputs at each step for debugging purposes with run_with_intermediates, which returns the final output together with the intermediate results of every component.

Plain Text
output, intermediates = p.run_with_intermediates(movie_name="The Departed")
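
If you want to see what each component received and produced, the intermediates object maps each pipeline component to its recorded inputs and outputs. A minimal sketch for printing them, assuming each entry exposes inputs and outputs attributes (these attribute names are an assumption, not confirmed above):

Plain Text
for key, step in intermediates.items():
    # key identifies the pipeline component
    print(key)
    print(step.inputs)   # assumed attribute: what the component received
    print(step.outputs)  # assumed attribute: what the component produced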


Here is the complete code:

Plain Text
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.core import PromptTemplate
from llama_index.llms.openai import OpenAI

# Define the prompt template
prompt_str = "Please generate related movies to {movie_name}"
prompt_tmpl = PromptTemplate(prompt_str)

# Create an instance of the OpenAI LLM
llm = OpenAI(model="gpt-3.5-turbo")

# Create a query pipeline
p = QueryPipeline(chain=[prompt_tmpl, llm], verbose=True)

# Run the query pipeline
output = p.run(movie_name="The Departed")

# Print the output
print(str(output))
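
With a chat model like gpt-3.5-turbo, the pipeline output is typically a ChatResponse, so str(output) renders something like "assistant: ...". Assuming the output is a ChatResponse, you can get just the text:

Plain Text
# output.message is the assistant's ChatMessage; .content holds the text
print(output.message.content)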

For more details, you can refer to the LlamaIndex documentation on query pipelines.
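
If by "formatted output" you mean structured data (for example, a list of titles instead of free text), one possible approach is to have the LLM fill a Pydantic schema. This is a sketch, assuming your llama-index version provides the structured_predict method; it is separate from the pipeline above:

Plain Text
from typing import List

from pydantic import BaseModel

from llama_index.core import PromptTemplate
from llama_index.llms.openai import OpenAI

class RelatedMovies(BaseModel):
    """Structured container for related movie titles."""
    titles: List[str]

llm = OpenAI(model="gpt-3.5-turbo")
prompt_tmpl = PromptTemplate("Please generate related movies to {movie_name}")

# structured_predict (assumed available) asks the model to return
# output matching the RelatedMovies schema
movies = llm.structured_predict(RelatedMovies, prompt_tmpl, movie_name="The Departed")
print(movies.titles)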
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------