To connect to your Neo4j database, use the `Neo4jPropertyGraphStore` class from LlamaIndex. You will need to provide the username, password, and URL for your Neo4j instance. Here is an example:

```python
from llama_index.graph_stores.neo4j import Neo4jPropertyGraphStore

graph_store = Neo4jPropertyGraphStore(
    username="neo4j",
    password="your_password",
    url="bolt://localhost:7687",
)
```

Replace `"your_password"` with your actual Neo4j password. You can then build a `PropertyGraphIndex` from it using the `from_existing` method. This method also requires an LLM and an embedding model. Here is an example:

```python
from llama_index.core import PropertyGraphIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

index = PropertyGraphIndex.from_existing(
    property_graph_store=graph_store,
    llm=OpenAI(model="gpt-3.5-turbo", temperature=0.3),
    embed_model=OpenAIEmbedding(model_name="text-embedding-3-small"),
)
```
Once you have a `PropertyGraphIndex`, you can still insert more documents into it using the `insert` method. Here is an example:

```python
from llama_index.core import Document

document = Document(text="LlamaIndex is great!")
index.insert(document)
```
To retrieve nodes from the index, use the `as_retriever` and `retrieve` methods. Here is an example:

```python
nodes = index.as_retriever(include_text=False).retrieve("LlamaIndex")
print(nodes[0].text)
```
Does `.from_existing` go and find all the unprocessed nodes and embed them? No: `from_existing` is meant to load a graph that you created earlier with `from_documents()` or from nodes. Alternatively, you can construct `EntityNode` and `Relation` objects yourself, embed the entity nodes, and then upsert them into the property graph.
Your `SchemaLLMPathExtractor` might be extracting entities without their corresponding relationships, or vice versa. This could happen if the documents do not contain enough information to form complete relationships. You may also have set `strict=False` in your `SchemaLLMPathExtractor`. This means that entities and relationships that are not in your `validation_schema` can be included in the graph; if they do not form complete paths, they can end up as orphan nodes. To avoid this, set `strict=True` in your `SchemaLLMPathExtractor` to only include entities and relationships that are in your `validation_schema`.
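To make the failure mode concrete, here is a small self-contained sketch (plain Python, not a LlamaIndex API) that finds extracted entities which participate in no relationship and would therefore become orphan nodes:

```python
def find_orphans(entities, relations):
    """Return entities that appear in no (source, label, target) triple."""
    connected = set()
    for source, _label, target in relations:
        connected.add(source)
        connected.add(target)
    return [entity for entity in entities if entity not in connected]


# The extractor found three entities but only one complete relationship,
# so "OpenAI" would end up as an orphan node in the graph.
entities = ["LlamaIndex", "Neo4j", "OpenAI"]
relations = [("LlamaIndex", "USES", "Neo4j")]
print(find_orphans(entities, relations))  # -> ['OpenAI']
```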
To see detailed debug output, you can use Python's `logging` module. Import it and configure it to output debug information:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
```
When creating components such as `FunctionCallingAgentWorker` or `OpenAIAgent`, set the `verbose` parameter to `True`. This will cause these components to print out detailed information about their operations. Here are some examples from the knowledge sources:

```python
# Creating a FunctionCallingAgentWorker with verbose output
worker = FunctionCallingAgentWorker.from_tools([lyft_tool, uber_tool], verbose=True)

# Creating an OpenAIAgent with verbose output
agent = OpenAIAgent.from_tools(tools, llm=llm, verbose=True)
```
Remember to replace `lyft_tool`, `uber_tool`, `tools`, and `llm` with your actual tools and LLM.