To build a query pipeline that produces formatted output with an OpenAI LLM in LlamaIndex, follow these steps:
- Define a prompt template. This is a string used to build the prompt sent to the LLM; placeholders in the string are replaced with actual values when the pipeline is run.
prompt_str = "Please generate related movies to {movie_name}"
prompt_tmpl = PromptTemplate(prompt_str)
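The placeholder syntax follows Python's built-in `str.format` conventions, so the substitution step can be illustrated with plain Python, independent of LlamaIndex:

```python
# The template string with a {movie_name} placeholder, as above.
prompt_str = "Please generate related movies to {movie_name}"

# Substituting a value, the same way PromptTemplate fills placeholders.
formatted = prompt_str.format(movie_name="The Departed")
print(formatted)  # Please generate related movies to The Departed
```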
- Create an instance of the OpenAI LLM. You can specify the model and other parameters in the constructor.
llm = OpenAI(model="gpt-3.5-turbo")
- Create a query pipeline by chaining together the prompt template and the LLM. The `chain` parameter is a list of components that run in sequence; the output of each component is passed as input to the next.
p = QueryPipeline(chain=[prompt_tmpl, llm], verbose=True)
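The chaining idea itself (the first component receives the keyword arguments, and each later component receives the previous component's output) can be sketched in plain Python. The stand-in functions below are hypothetical, purely to show the data flow:

```python
def run_chain(chain, **kwargs):
    # First component receives the keyword arguments;
    # each subsequent component receives the previous output.
    value = chain[0](**kwargs)
    for component in chain[1:]:
        value = component(value)
    return value

# Hypothetical stand-ins for the prompt template and the LLM:
fmt = lambda **kw: "Please generate related movies to {movie_name}".format(**kw)
fake_llm = lambda prompt: f"[LLM response to: {prompt}]"

result = run_chain([fmt, fake_llm], movie_name="The Departed")
print(result)  # [LLM response to: Please generate related movies to The Departed]
```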
- Run the query pipeline with the actual values for the placeholders in the prompt template.
output = p.run(movie_name="The Departed")
- Print the output.
- For debugging, you can also view the inputs and outputs at each step:
output, intermediates = p.run_with_intermediates(movie_name="The Departed")
Here is the complete code:
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.core import PromptTemplate
from llama_index.llms.openai import OpenAI
# Define the prompt template
prompt_str = "Please generate related movies to {movie_name}"
prompt_tmpl = PromptTemplate(prompt_str)
# Create an instance of the OpenAI LLM
llm = OpenAI(model="gpt-3.5-turbo")
# Create a query pipeline
p = QueryPipeline(chain=[prompt_tmpl, llm], verbose=True)
# Run the query pipeline
output = p.run(movie_name="The Departed")
# Print the output
print(str(output))