Here's how you can define a schema for the `SchemaLLMPathExtractor` in LlamaIndex:

```python
from typing import Literal

from llama_index.llms.ollama import Ollama
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor

# best practice to use upper-case
entities = Literal["PACKAGE", "FUNCTION", "CLASS", "METHOD", "PARAMETER", "EXAMPLE"]
# "USED_BY" is declared here so the PARAMETER entry below stays valid
relations = Literal["CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF", "USED_BY"]

# define which entities can have which relations
validation_schema = {
    "PACKAGE": ["CONTAINS"],
    "FUNCTION": ["USES", "CALLS", "HAS", "EXAMPLE_OF"],
    "CLASS": ["CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF"],
    "METHOD": ["USES", "CALLS", "HAS", "EXAMPLE_OF"],
    "PARAMETER": ["USED_BY"],
    "EXAMPLE": ["EXAMPLE_OF"],
}
```
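To make the schema's intent concrete, here is a minimal plain-Python sketch (no LlamaIndex required) of the kind of check a strict validation schema implies: an extracted triplet is kept only if its subject's entity type is allowed to emit that relation. The `is_valid_triplet` helper and the example triplets are hypothetical, not part of LlamaIndex.

```python
# Hypothetical helper illustrating schema-based triplet validation.
# The schema maps an entity type to the relations it may emit.
validation_schema = {
    "PACKAGE": ["CONTAINS"],
    "FUNCTION": ["USES", "CALLS", "HAS", "EXAMPLE_OF"],
    "CLASS": ["CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF"],
    "METHOD": ["USES", "CALLS", "HAS", "EXAMPLE_OF"],
    "PARAMETER": ["USED_BY"],
    "EXAMPLE": ["EXAMPLE_OF"],
}

def is_valid_triplet(subject_type: str, relation: str, schema: dict) -> bool:
    """Keep a triplet only if the subject's type may emit this relation."""
    return relation in schema.get(subject_type, [])

# (subject, subject_type, relation, object) — illustrative data
extracted = [
    ("numpy", "PACKAGE", "CONTAINS", "numpy.mean"),
    ("axis", "PARAMETER", "CONTAINS", "numpy.mean"),  # disallowed by the schema
]
kept = [t for t in extracted if is_valid_triplet(t[1], t[2], validation_schema)]
```

With `strict` validation, only the `PACKAGE → CONTAINS` triplet survives; the `PARAMETER → CONTAINS` one is dropped because `PARAMETER` may only emit `USED_BY`.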
In this schema:

- A `PACKAGE` contains `FUNCTIONS`, `CLASSES`, and `METHODS`.
- A `FUNCTION` uses `PARAMETERS` and can call other `FUNCTIONS` or `METHODS`. It can also have `EXAMPLES`.
- A `CLASS` contains `METHODS` and can use `PARAMETERS`. It can also have `EXAMPLES`.
- A `METHOD` uses `PARAMETERS` and can call other `METHODS` or `FUNCTIONS`. It can also have `EXAMPLES`.
- A `PARAMETER` is used by `FUNCTIONS`, `METHODS`, and `CLASSES`.
- An `EXAMPLE` is an example of a `FUNCTION`, `METHOD`, or `CLASS`.

You can extend this schema with additional entity types such as `CONCEPT`, `APPROACH`, and `GUIDELINE`, and define appropriate relationships for them. Here's an example:

```python
from typing import Literal

from llama_index.llms.ollama import Ollama
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor

# best practice to use upper-case
entities = Literal[
    "PACKAGE", "FUNCTION", "CLASS", "METHOD", "PARAMETER",
    "EXAMPLE", "CONCEPT", "APPROACH", "GUIDELINE",
]
# "USED_BY" is declared here so the PARAMETER and APPROACH entries below stay valid
relations = Literal[
    "CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF",
    "USED_BY", "APPLIES", "FOLLOWS", "EXPLAINS",
]

# define which entities can have which relations
validation_schema = {
    "PACKAGE": ["CONTAINS"],
    "FUNCTION": ["USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS"],
    "CLASS": ["CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS"],
    "METHOD": ["USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS"],
    "PARAMETER": ["USED_BY"],
    "EXAMPLE": ["EXAMPLE_OF"],
    "CONCEPT": ["EXPLAINS"],
    "APPROACH": ["USED_BY", "EXPLAINS"],
    "GUIDELINE": ["FOLLOWS", "EXPLAINS"],
}
```
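Because the `relations` Literal and the `validation_schema` are maintained separately, they can drift out of sync: a relation referenced in the schema but never declared in the Literal would simply never be extracted. Below is a hedged sketch of a consistency check using `typing.get_args` to read the Literal's members; the `undeclared_relations` helper is an illustration, not a LlamaIndex API.

```python
from typing import Literal, get_args

relations = Literal[
    "CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF",
    "USED_BY", "APPLIES", "FOLLOWS", "EXPLAINS",
]

validation_schema = {
    "PACKAGE": ["CONTAINS"],
    "FUNCTION": ["USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS"],
    "CLASS": ["CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS"],
    "METHOD": ["USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS"],
    "PARAMETER": ["USED_BY"],
    "EXAMPLE": ["EXAMPLE_OF"],
    "CONCEPT": ["EXPLAINS"],
    "APPROACH": ["USED_BY", "EXPLAINS"],
    "GUIDELINE": ["FOLLOWS", "EXPLAINS"],
}

def undeclared_relations(schema: dict, relations_literal) -> set:
    """Relations referenced in the schema but missing from the Literal."""
    declared = set(get_args(relations_literal))
    used = {rel for rels in schema.values() for rel in rels}
    return used - declared
```

Running `undeclared_relations(validation_schema, relations)` on the schema above returns an empty set, confirming the two definitions agree.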
With this extended schema:

- A `FUNCTION`, `CLASS`, or `METHOD` applies an `APPROACH` and follows a `GUIDELINE`.
- A `CONCEPT` explains a `FUNCTION`, `CLASS`, `METHOD`, `APPROACH`, or `GUIDELINE`.
- An `APPROACH` is used by a `FUNCTION`, `CLASS`, or `METHOD` and can explain a `CONCEPT`.
- A `GUIDELINE` is followed by a `FUNCTION`, `CLASS`, or `METHOD` and can explain a `CONCEPT`.

You could model RAG as either a `CONCEPT` or an `APPROACH`, depending on how you want to represent it. Here's an example:

```python
# Assuming you have a graph `g`
g.add_node("RAG", type="CONCEPT")
g.add_node("query_engine.query", type="METHOD")
# the METHOD is the subject of APPLIES, per the validation schema
g.add_edge("query_engine.query", "RAG", type="APPLIES")

# If you have a guideline for using RAG
g.add_node("Use RAG for knowledge-intensive tasks", type="GUIDELINE")
g.add_edge("query_engine.query", "Use RAG for knowledge-intensive tasks", type="FOLLOWS")
```
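The `g.add_node`/`g.add_edge` calls above assume you already have some graph object. As a self-contained illustration (plain Python, no graph library), here is a hedged sketch of the same nodes and edges stored in simple dicts, plus a lookup of which guidelines a method follows. All helper names here are illustrative, not a real graph API.

```python
# Illustrative in-memory graph: plain dicts/lists instead of a graph library.
nodes = {}   # name -> entity type
edges = []   # (source, relation, target)

def add_node(name, type):
    nodes[name] = type

def add_edge(source, target, type):
    edges.append((source, type, target))

add_node("RAG", type="CONCEPT")
add_node("query_engine.query", type="METHOD")
add_edge("query_engine.query", "RAG", type="APPLIES")

add_node("Use RAG for knowledge-intensive tasks", type="GUIDELINE")
add_edge("query_engine.query", "Use RAG for knowledge-intensive tasks", type="FOLLOWS")

def followed_guidelines(method):
    """All GUIDELINE nodes the given method FOLLOWS."""
    return [t for s, r, t in edges
            if s == method and r == "FOLLOWS" and nodes.get(t) == "GUIDELINE"]
```

Querying `followed_guidelines("query_engine.query")` walks the `FOLLOWS` edges and returns the guideline node added above.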
In this example, the `METHOD` "query_engine.query" applies the `CONCEPT` "RAG". Additionally, it follows the `GUIDELINE` "Use RAG for knowledge-intensive tasks".

The error suggests that the `prompt` variable is being treated as a string, while the code expects it to be a list of dictionaries (each representing a `ChatMessage`). This could be due to the way the `messages` are being passed to the `chat` method.

The `chat` method of the `Bedrock` class expects a list of `ChatMessage` objects. Each `ChatMessage` object should have a `role` and `content`. Here's how you should structure your `messages`:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")

messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="Tell me a story"),
]

response = llm.chat(messages)
```
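If your calling code currently holds a bare prompt string, the fix is to wrap it in that message structure before calling `chat`. Below is a hedged, self-contained sketch of the conversion using a simplified stand-in for `ChatMessage` (a dataclass with the `role` and `content` fields the real class requires); the `to_messages` helper is hypothetical, not part of LlamaIndex.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChatMessage:
    # Simplified stand-in for llama_index.core.llms.ChatMessage,
    # keeping only the role/content fields discussed above.
    role: str
    content: str

def to_messages(prompt: str, system: Optional[str] = None) -> List[ChatMessage]:
    """Wrap a bare prompt string in the list-of-messages shape `chat` expects."""
    messages = []
    if system is not None:
        messages.append(ChatMessage(role="system", content=system))
    messages.append(ChatMessage(role="user", content=prompt))
    return messages

msgs = to_messages("Tell me a story",
                   system="You are a pirate with a colorful personality")
```

The resulting list has the same shape as the `messages` example above, so it can be passed straight to `llm.chat`.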