To use embeddings in this case, you need to set `include_embeddings=True` when creating the `KnowledgeGraphIndex`. This stores embeddings in the index so they can be used for querying later. Here is an example:
```python
from llama_index import (
    KnowledgeGraphIndex,
    ServiceContext,
    SimpleDirectoryReader,
)
from llama_index.llms import OpenAI
from llama_index.storage.storage_context import StorageContext
from llama_index.graph_stores import Neo4jGraphStore

# Define your Neo4jGraphStore with your specific connection parameters
graph_store = Neo4jGraphStore(
    username=username,  # your Neo4j credentials and connection details
    password=password,
    url=url,
    database=database,
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)

# Define your LLM and ServiceContext
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)

# Load your documents
documents = SimpleDirectoryReader(
    "../../../../examples/paul_graham_essay/data"
).load_data()

# Build the KnowledgeGraphIndex with embeddings
index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=storage_context,
    max_triplets_per_chunk=2,
    service_context=service_context,
    include_embeddings=True,  # Include embeddings
)
```
After creating the index with embeddings, you can use the `as_query_engine` method to create a query engine that uses them. Here is an example:
```python
query_engine = index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
    embedding_mode="hybrid",
    similarity_top_k=5,
)
```
In this example, `embedding_mode="hybrid"` means the query engine retrieves triplets using both keyword matching and embedding similarity, and `similarity_top_k=5` limits retrieval to the five most similar matches.
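Once the query engine is set up, you run queries by calling its `query` method. A minimal sketch, continuing from the code above (the question string is just an illustrative placeholder):

```python
# Ask a question; retrieval combines keyword and embedding matches
response = query_engine.query("What did the author work on before college?")
print(response)
```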