You can define a Pydantic model for the message like this:

```python
from pydantic import BaseModel, Field

class DiameterMessage(BaseModel):
    diameter_name: str = Field(description="Diameter name")
    required: str = Field(description="Required: y/n")
    type: str = Field(description="Type: request/answer…")
    description: str = Field(description="Description")
```
Then you can create an instance of the model from a dict:

```python
data = {
    "diameter_name": "example",
    "required": "y",
    "type": "request",
    "description": "Example description",
}

message = DiameterMessage(**data)
```
The `message` object will now be an instance of the `DiameterMessage` model, with all the data validated and parsed according to the model's definition. You can access the data in the object using dot notation, like `message.diameter_name`.
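For example, here is field access and validation in action, using only the model defined above (the failing call is deliberately incomplete):

```python
from pydantic import ValidationError

print(message.diameter_name)  # -> "example"

# Pydantic validates at construction time: missing required fields raise an error
try:
    DiameterMessage(diameter_name="incomplete")
except ValidationError as e:
    print(e)  # reports the missing "required", "type", and "description" fields
```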
The error suggests that the `chain` argument you're passing to `QueryPipeline` is not a sequence of modules or module keys. In LlamaIndex, a module is a callable object that takes an input and produces an output, and a module key is a string that identifies a module in a pipeline.

You're passing `prompt_tmpl1` and `llm` to `chain`. `prompt_tmpl1` is a `PromptTemplate`, which is a type of module in LlamaIndex. `llm`, however, is assigned the value `client`, and without more context it's not clear what `client` is. If `client` is not a module or a module key, this could be the source of your error.

Here's an example of how to define a `QueryPipeline` with a chain of modules:
with a chain of modules:from llama_index.core.query_pipeline import QueryPipeline from llama_index.llms.openai import OpenAI from llama_index.core import PromptTemplate # define llm llm = OpenAI(model="gpt-3.5-turbo") # define prompt template prompt_str = "Please write a passage to answer the question: {query_str}" prompt_tmpl = PromptTemplate(prompt_str) # define query pipeline p = QueryPipeline( chain=[prompt_tmpl, llm], verbose=True, )
In this example, `llm` is an instance of `OpenAI`, which is a type of module in LlamaIndex, and `prompt_tmpl` is a `PromptTemplate`, which is also a type of module. This code defines a `QueryPipeline` that first applies the `PromptTemplate` to the input and then passes the result to the `OpenAI` module.

If you're still seeing an error, double-check the objects you're passing to `QueryPipeline` and the modules you're using. You might also find it helpful to look at some examples of how to use `QueryPipeline` in the LlamaIndex documentation.
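As a side note on module keys: instead of passing `chain`, you can register each module under a string key and wire them up explicitly. A minimal sketch reusing `prompt_tmpl` and `llm` from the example above (the key names "prompt" and "llm" are arbitrary):

```python
# Equivalent pipeline built from module keys instead of a chain
p = QueryPipeline(verbose=True)
p.add_modules({"prompt": prompt_tmpl, "llm": llm})
p.add_link("prompt", "llm")  # the output of "prompt" becomes the input of "llm"
```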
The error suggests that the `chain` argument you're passing to `QueryPipeline` is not a sequence of modules or module keys. In your updated code, though, `prompt_tmpl1` is a `PromptTemplate`, which is a type of module in LlamaIndex, and `llm` is an instance of `OpenAI`, which is also a type of module. This should be a valid chain.
However, you're calling `p.run()` without passing any arguments. The `run` method of `QueryPipeline` requires at least one argument, which is the input to the pipeline. If you don't pass an argument to `run`, it might cause an error. Here's an example of how to call `run`:
```python
output = p.run(query_str="What is the diameter of the Earth?")
```
In this example, `query_str` is the input to the pipeline. The exact arguments you need to pass to `run` will depend on the modules in your pipeline.
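For instance, in a hypothetical variation where the prompt template has two variables, `run` would need both as keyword arguments (reusing `llm` from the earlier example):

```python
# Hypothetical two-variable template: run() must receive both variables
prompt_str2 = "Answer this {topic} question: {query_str}"
prompt_tmpl2 = PromptTemplate(prompt_str2)
p2 = QueryPipeline(chain=[prompt_tmpl2, llm], verbose=True)

output = p2.run(topic="astronomy", query_str="What is the diameter of the Earth?")
print(str(output))
```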
If you're still seeing an error, double-check the objects you're passing to `QueryPipeline` and the modules you're using. You might also find it helpful to look at some examples of how to use `QueryPipeline` in the LlamaIndex documentation.

To install the missing module, you can run:

```
%pip install llama-index-llms-openai
```

However, please note that this command is just an example and may not install the specific module you're looking for. If the module is in a local directory instead, you can add that directory to the Python path using the `sys` module, like this:
```python
import sys
sys.path.append("/path/to/directory")
```
"/path/to/directory"
with the path to the directory that contains the module.
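As a quick sanity check, you can confirm the import resolves from that directory (`my_module` is a placeholder for whatever you're importing):

```python
import sys

sys.path.append("/path/to/directory")

import my_module  # placeholder name: replace with your actual module

print(my_module.__file__)  # should point inside /path/to/directory
```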