
@kapa.ai what would be a good schema for creating a knowledge graph that contains information about how to use a python package? For example, what schema would i use to create a property graph that would contain knowledge about how to use llamaindex?
Please stand by @rawwerks as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
Creating a knowledge graph for a Python package like LlamaIndex involves defining a schema that captures the relevant entities and relationships. Here's an example of how you might define such a schema using the SchemaLLMPathExtractor in LlamaIndex:

Python
from typing import Literal
from llama_index.llms.ollama import Ollama
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor

# best practice to use upper-case
entities = Literal["PACKAGE", "FUNCTION", "CLASS", "METHOD", "PARAMETER", "EXAMPLE"]
relations = Literal["CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF", "USED_BY"]

# define which entities can have which relations
validation_schema = {
    "PACKAGE": ["CONTAINS"],
    "FUNCTION": ["USES", "CALLS", "HAS", "EXAMPLE_OF"],
    "CLASS": ["CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF"],
    "METHOD": ["USES", "CALLS", "HAS", "EXAMPLE_OF"],
    "PARAMETER": ["USED_BY"],
    "EXAMPLE": ["EXAMPLE_OF"],
}

In this schema:

  • A PACKAGE contains FUNCTIONS, CLASSES, and METHODS.
  • A FUNCTION uses PARAMETERS and can call other FUNCTIONS or METHODS. It can also have EXAMPLES.
  • A CLASS contains METHODS and can use PARAMETERS. It can also have EXAMPLES.
  • A METHOD uses PARAMETERS and can call other METHODS or FUNCTIONS. It can also have EXAMPLES.
  • A PARAMETER is used by FUNCTIONS, METHODS, and CLASSES.
  • An EXAMPLE is an example of a FUNCTION, METHOD, or CLASS.
This is just a basic example. The actual schema you use would depend on the specific aspects of the Python package you want to capture in your knowledge graph.
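
For context, a schema like this is typically passed to SchemaLLMPathExtractor and then into PropertyGraphIndex. Below is a minimal sketch that reuses the entities, relations, and validation_schema defined above; the Ollama model name and the "./docs" path are placeholders, so swap in whatever LLM and data you are actually using.

Python
from llama_index.core import PropertyGraphIndex, SimpleDirectoryReader
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor
from llama_index.llms.ollama import Ollama

# placeholder data source; point this at the package's docs or source files
documents = SimpleDirectoryReader("./docs").load_data()

kg_extractor = SchemaLLMPathExtractor(
    llm=Ollama(model="llama3", json_mode=True, request_timeout=3600),
    possible_entities=entities,
    possible_relations=relations,
    kg_validation_schema=validation_schema,
    strict=True,  # drop any extracted triplet that violates the schema
)

index = PropertyGraphIndex.from_documents(
    documents,
    kg_extractors=[kg_extractor],
)
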
Source: LlamaIndex > En > Latest > Examples > Property_graph > Property Graph Construction with Predefined Schemas ()
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
This is pretty good, @kapa.ai, but you didn’t include any schema for concepts. In addition to the schema that is in the code itself, we need schema that represents the concepts and approaches for how one should be using the package. Please add this.
Let me check the knowledge base for you. One moment! :books:
----------------------
Certainly, to capture the concepts and approaches for using the package, you can extend the schema to include entities like CONCEPT, APPROACH, and GUIDELINE, and define appropriate relationships for them. Here's an example:

Python
from typing import Literal
from llama_index.llms.ollama import Ollama
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor

# best practice to use upper-case
entities = Literal["PACKAGE", "FUNCTION", "CLASS", "METHOD", "PARAMETER", "EXAMPLE", "CONCEPT", "APPROACH", "GUIDELINE"]
relations = Literal["CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS", "EXPLAINS", "USED_BY"]

# define which entities can have which relations
validation_schema = {
    "PACKAGE": ["CONTAINS"],
    "FUNCTION": ["USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS"],
    "CLASS": ["CONTAINS", "USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS"],
    "METHOD": ["USES", "CALLS", "HAS", "EXAMPLE_OF", "APPLIES", "FOLLOWS"],
    "PARAMETER": ["USED_BY"],
    "EXAMPLE": ["EXAMPLE_OF"],
    "CONCEPT": ["EXPLAINS"],
    "APPROACH": ["USED_BY", "EXPLAINS"],
    "GUIDELINE": ["FOLLOWS", "EXPLAINS"],
}

In this extended schema:

  • A FUNCTION, CLASS, or METHOD applies an APPROACH and follows a GUIDELINE.
  • A CONCEPT explains a FUNCTION, CLASS, METHOD, APPROACH, or GUIDELINE.
  • An APPROACH is used by a FUNCTION, CLASS, or METHOD and can explain a CONCEPT.
  • A GUIDELINE is followed by a FUNCTION, CLASS, or METHOD and can explain a CONCEPT.
This schema allows you to capture not just the structure of the code, but also the concepts and best practices for using the package.
Please note that this is a hypothetical schema and the actual implementation would depend on the specific aspects of the Python package and its usage that you want to capture in your knowledge graph.
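
To make the extended schema concrete, here are a few illustrative triplets it would allow. The specific entity names below are hypothetical and only meant to show the shape of the data:

Python
# hypothetical (subject, relation, object) triplets permitted by the extended schema
example_triplets = [
    ("VectorStoreIndex", "APPLIES", "Top-k retrieval"),       # CLASS applies an APPROACH
    ("query_engine.query", "FOLLOWS", "Keep chunks small"),   # METHOD follows a GUIDELINE
    ("RAG", "EXPLAINS", "query_engine.query"),                # CONCEPT explains a METHOD
]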
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai - good job. Could you give me an example of how “RAG (Retrieval Augmented Generation)” might be included in the graph?
Let me check the knowledge base for you. One moment! :books:
----------------------
Sure, to include "RAG (Retrieval Augmented Generation)" in the graph, you might consider it as a CONCEPT or an APPROACH depending on how you want to represent it. Here's an example:

Python
# Assuming a simple directed graph `g` for illustration (e.g. networkx)
import networkx as nx

g = nx.DiGraph()
g.add_node("RAG", type="CONCEPT")
g.add_node("query_engine.query", type="METHOD")
# the METHOD applies the CONCEPT, so the edge runs from the method to the concept
g.add_edge("query_engine.query", "RAG", type="APPLIES")

# If you have a guideline for using RAG
g.add_node("Use RAG for knowledge-intensive tasks", type="GUIDELINE")
g.add_edge("query_engine.query", "Use RAG for knowledge-intensive tasks", type="FOLLOWS")


In this example, the METHOD "query_engine.query" applies the CONCEPT "RAG". Additionally, it follows the GUIDELINE "Use RAG for knowledge-intensive tasks".

This is a simplified example. In a real-world scenario, you would likely have more nodes and edges to fully represent the relationships between different entities.

Please note that the actual implementation would depend on the specific aspects of the Python package and its usage that you want to capture in your knowledge graph.
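
If you are working with LlamaIndex's property graph store rather than a generic graph object, the same idea can be sketched with EntityNode and Relation. This assumes the default in-memory SimplePropertyGraphStore; swap in your own graph store as needed.

Python
from llama_index.core.graph_stores import SimplePropertyGraphStore
from llama_index.core.graph_stores.types import EntityNode, Relation

graph_store = SimplePropertyGraphStore()

rag = EntityNode(name="RAG", label="CONCEPT")
method = EntityNode(name="query_engine.query", label="METHOD")
guideline = EntityNode(name="Use RAG for knowledge-intensive tasks", label="GUIDELINE")

graph_store.upsert_nodes([rag, method, guideline])
graph_store.upsert_relations([
    Relation(label="APPLIES", source_id=method.id, target_id=rag.id),
    Relation(label="FOLLOWS", source_id=method.id, target_id=guideline.id),
])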

Sources:
  • LlamaIndex > En > Latest > Examples > Llm > Construct mock Chat history > RAG Approach to import external knowledge into LLM as context Part 3 ()
  • Jerryjliu > Llama_index > Blob > Main > Docs > Docs > Examples > Query_engine > Knowledge Graph RAG Query Engine > Perform Graph RAG Query ()
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@jerryjliu0 and @Logan M - I just onboarded your newest customer support engineer
Let me check the knowledge base for you. One moment! :books:
----------------------
I'm sorry, but as an AI, I'm currently unable to directly access or read content from external links or documents that are not provided in the knowledge sources. I can only provide information based on the knowledge sources given to me. If you have specific information from the paper that you'd like to discuss or questions about, feel free to provide that, and I'll do my best to assist you.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai
llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")
messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="Tell me a story"),
]
llm.chat(messages=[
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="Tell me a story"),
])

The above code is throwing the below error
File "C:\Users\Jay\OneDrive\coding_projects\RAG_AWS_Neptune\venv\Lib\site-packages\llama_index\llms\bedrock\utils.py", line 157, in get_request_body
if len(prompt) > 0 and prompt[0]["role"] == "system":
~^^^^^^^^
TypeError: string indices must be integers, not 'str'

This happens for both 'chat' and 'complete' method
Let me check the knowledge base for you. One moment! :books:
----------------------
The error message suggests that the prompt variable is being treated as a string, while the code expects it to be a list of dictionaries (each representing a ChatMessage). This could be due to the way the messages are being passed to the chat method.

In the LlamaIndex package, the chat method of the Bedrock class expects a list of ChatMessage objects. Each ChatMessage object should have a role and content. Here's how you should structure your messages:

Python
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")

messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="Tell me a story"),
]

response = llm.chat(messages)


If you're still encountering the error, it might be due to an issue with the LlamaIndex package itself. In that case, I would recommend reaching out to the maintainers of the package for further assistance.
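
For reference, here is a minimal sketch of both call styles and how to read the responses, using the same model ID as in your snippet; AWS credentials and region are taken from your environment.

Python
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")

# chat takes a list of ChatMessage objects; the reply text is on .message.content
chat_response = llm.chat([
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="Tell me a story"),
])
print(chat_response.message.content)

# complete takes a plain string prompt; the generated text is on .text
completion = llm.complete("Tell me a story about a pirate")
print(completion.text)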

Sources:
  • LlamaIndex > En > Latest > Api_reference > Llms > Bedrock > Bedrock Part 8 ()
  • LlamaIndex > En > Latest > Examples > Cookbooks > docs_both = SimpleDirectoryReader(input_files=["data/drake_kendrick_beef.pdf"]).load_data() > 1. Basic Completion and Chat > Call chat with a list of messages ()
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------