Find answers from the community

RUPP
How can I do async indexing by node in KnowledgeGraphIndex?
1 comment
What is the include_text parameter in this function?

query_engine = index.as_query_engine(
    include_text=True, response_mode="tree_summarize"
)

What is the default value of this parameter?
2 comments
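For reference, the llama_index docs describe include_text as controlling whether the source text chunks behind the matched triplets are sent to the LLM along with the triplets themselves, and its default is True. A hedged sketch (verify against your installed version):

```python
# include_text=True (the documented default) also retrieves the underlying
# text chunks that produced the matched triplets, not just the triplets.
query_engine = index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
)
```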
How can I send nodes to OpenAI in parallel? For example: if I have 10 nodes, today I need to wait for one to finish being processed by OpenAI before it can be inserted. Is there a way to do it in parallel, sending all 10 nodes at the same time?
6 comments
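The general pattern is to make the per-node call a coroutine and fan them out with asyncio.gather, so all requests are in flight at once. A minimal stand-alone sketch; process_node is a hypothetical stand-in for the actual OpenAI call (in llama_index you would use the LLM's async methods instead):

```python
import asyncio

# Hypothetical stand-in for "send one node to OpenAI".
async def process_node(node: str) -> str:
    await asyncio.sleep(0)  # simulates the network round trip
    return node.upper()

async def process_all(nodes: list[str]) -> list[str]:
    # gather() starts every request concurrently instead of awaiting
    # each one before launching the next.
    return await asyncio.gather(*(process_node(n) for n in nodes))

results = asyncio.run(process_all([f"node-{i}" for i in range(10)]))
print(len(results))  # 10
```

Watch out for provider rate limits when firing many requests at once; a semaphore around process_node caps the concurrency if needed.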
How can I change the kg_triple_extract_template in KnowledgeGraphIndex?
index = KnowledgeGraphIndex.from_documents(
    documents,
    max_triplets_per_chunk=3,
    service_context=service_context,
    include_embeddings=True,
)
2 comments
How to change the kg_triple_extract_template in KnowledgeGraphIndex?
12 comments
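In the legacy llama_index API, from_documents accepts a kg_triple_extract_template keyword for overriding the extraction prompt. A hedged sketch; the exact keyword name and required template variables ({text}, {max_knowledge_triplets}) can vary by version, so check your release:

```python
from llama_index.prompts import PromptTemplate

# Custom extraction prompt; must expose the variables the index fills in.
custom_prompt = PromptTemplate(
    "Extract up to {max_knowledge_triplets} (subject, predicate, object) "
    "triplets from the text below.\n"
    "Text: {text}\n"
    "Triplets:\n"
)

index = KnowledgeGraphIndex.from_documents(
    documents,
    max_triplets_per_chunk=3,
    service_context=service_context,
    include_embeddings=True,
    kg_triple_extract_template=custom_prompt,  # override the default prompt
)
```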
index = KnowledgeGraphIndex.from_documents(
    documents,
    max_triplets_per_chunk=30,
    service_context=service_context,
    include_embeddings=True,
)

When I set include_embeddings=True, where is this data stored, and how can I load these embeddings?
3 comments
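With include_embeddings=True the triplet embeddings are kept as part of the index's own data, so they are written out by persist() and come back when the index is reloaded. A hedged sketch of the legacy persist/load round trip (the directory name is just an example):

```python
from llama_index import StorageContext, load_index_from_storage

# Write the whole index, including the triplet embeddings, to disk.
index.storage_context.persist(persist_dir="./kg_storage")

# Later: rebuild the storage context from that directory and reload.
storage_context = StorageContext.from_defaults(persist_dir="./kg_storage")
index = load_index_from_storage(storage_context)
```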
RUPP

Azure

How do I solve this error: "Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>"?

I'm getting this error. What is wrong with my code? I persist the index and then load it:

graph_store = Neo4jGraphStore(
    username=username,
    password=password,
    url=url,
    database=database,
)

storage_context = StorageContext.from_defaults(graph_store=graph_store)

index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=storage_context,
    max_triplets_per_chunk=10,
    service_context=servicecontext,
    include_embeddings=True,
)

index.storage_context.persist('testSorage')

from llama_index.storage.docstore import SimpleDocumentStore
from llama_index.vector_stores import SimpleVectorStore
from llama_index.storage.index_store import SimpleIndexStore

storage_context = StorageContext.from_defaults(
    docstore=SimpleDocumentStore.from_persist_dir(persist_dir="./testSorage/"),
    vector_store=SimpleVectorStore.from_persist_dir(persist_dir="./testSorage/"),
    index_store=SimpleIndexStore.from_persist_dir(persist_dir="./testSorage/"),
)

from llama_index import load_index_from_storage

graph = load_index_from_storage(storage_context)
gb = graph.as_query_engine()
gb.query("how much is this")
1 comment
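That error comes from the legacy openai SDK when no Azure deployment is specified: on Azure, the LLM must be given the deployment name via the engine parameter. A hedged sketch using llama_index's AzureOpenAI wrapper; "my-gpt35-deployment" is a placeholder for your own deployment, and credential parameters vary by version:

```python
from llama_index import ServiceContext
from llama_index.llms import AzureOpenAI

# The engine argument names the Azure deployment to call; without it the
# openai SDK raises the "Must provide an 'engine' or 'deployment_id'" error.
llm = AzureOpenAI(
    engine="my-gpt35-deployment",  # placeholder Azure deployment name
    model="gpt-35-turbo",
)

service_context = ServiceContext.from_defaults(llm=llm)
```

Pass this service_context everywhere (both when building and when loading the index), otherwise the loaded index falls back to a default OpenAI LLM without the Azure settings.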
How can I delete nodes inside an index by passing a node ID?
1 comment
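A hedged sketch of the deletion APIs I'm aware of in the legacy releases: every index exposes delete_ref_doc() to remove everything derived from one source document, and some index classes also expose delete_nodes() for individual node IDs. Check your version's API before relying on either; the IDs below are placeholders:

```python
# Remove all nodes derived from one source document.
index.delete_ref_doc("my-ref-doc-id", delete_from_docstore=True)

# Where supported by the index class, remove individual nodes by ID.
index.delete_nodes(["my-node-id"])
```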
class SimilarityNodePostprocessor(BaseNodePostprocessor):
    def _postprocess_nodes(
        self,
        nodes: List[TextNode],
        query_bundle: Optional[QueryBundle],
        value: Optional[int] = None,
    ) -> List[TextNode]:
        results = []
        for n in nodes:
            if n.metadata['x'] == value:
                results.append(n)
        return results


# assemble query engine

query_engine = RetrieverQueryEngine(
    retriever=retriever,
    response_synthesizer=response_synthesizer,
    node_postprocessors=[SimilarityNodePostprocessor()],
)

I am passing the value and nodes, and this returns an error. How can I pass the value and nodes to this class?
1 comment
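The likely cause: the query engine calls _postprocess_nodes(nodes, query_bundle) itself, so it never supplies your extra value argument. Extra configuration should instead be stored on the instance at construction time (with llama_index's BaseNodePostprocessor, a pydantic model, that means declaring value as a field). A minimal stand-alone sketch of the pattern using plain classes, with Node standing in for TextNode:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:  # stand-in for llama_index's TextNode
    metadata: dict

class SimilarityNodePostprocessor:
    """Extra config travels via the constructor, not the callback."""

    def __init__(self, value: Optional[int] = None):
        self.value = value

    def _postprocess_nodes(self, nodes: List[Node], query_bundle=None) -> List[Node]:
        # The framework only supplies nodes and query_bundle here,
        # so read the filter value from the instance.
        return [n for n in nodes if n.metadata.get("x") == self.value]

pp = SimilarityNodePostprocessor(value=1)
kept = pp._postprocess_nodes([Node({"x": 1}), Node({"x": 2})])
print(len(kept))  # 1
```

The query engine then gets a configured instance: node_postprocessors=[SimilarityNodePostprocessor(value=1)].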
How to set two different models, one LLM from Azure and an embedding model from OpenAI?
4 comments
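In the legacy API the LLM and the embedding model are configured independently on the ServiceContext, so they can come from different providers. A hedged sketch; deployment and model names are placeholders, and credential parameters depend on your version:

```python
from llama_index import ServiceContext
from llama_index.llms import AzureOpenAI
from llama_index.embeddings import OpenAIEmbedding

# LLM served from an Azure deployment (placeholder names).
llm = AzureOpenAI(engine="my-azure-deployment", model="gpt-35-turbo")

# Embeddings served directly from OpenAI.
embed_model = OpenAIEmbedding(model="text-embedding-ada-002")

service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
)
```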
How can I make the streaming response arrive at my frontend in real time?
2 comments
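The usual approach is to enable token streaming on the query engine and forward each token to the client as it arrives (e.g. over server-sent events or a websocket). A hedged sketch; send_to_frontend is a placeholder for your own transport:

```python
# streaming=True makes query() return a streaming response whose
# response_gen yields tokens incrementally instead of one final string.
query_engine = index.as_query_engine(streaming=True)
response = query_engine.query("your question")

for token in response.response_gen:
    send_to_frontend(token)  # placeholder: push each token to the client
```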
@kapa.ai Is it possible to use the following setup

from llama_index.storage.storage_context import StorageContext

graph_store = SimpleGraphStore()
storage_context = StorageContext.from_defaults(graph_store=graph_store)

# NOTE: can take a while!
index = KnowledgeGraphIndex.from_documents(
    documents,
    max_triplets_per_chunk=2,
    storage_context=storage_context,
    service_context=service_context,
    include_embeddings=True,
)

in Neo4j, using embeddings?
6 comments
@kapa.ai How can I persist and load the data of a KnowledgeGraphIndex?
3 comments
@kapa.ai Show me an example using KGTableRetriever.
2 comments
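A hedged sketch of KGTableRetriever in the legacy API; import paths vary between releases, retriever_mode can be "keyword", "embedding", or "hybrid", and "hybrid" requires the index to have been built with include_embeddings=True:

```python
from llama_index.indices.knowledge_graph import KGTableRetriever
from llama_index.query_engine import RetrieverQueryEngine

# Retriever over an existing KnowledgeGraphIndex.
retriever = KGTableRetriever(
    index=index,
    retriever_mode="hybrid",  # keyword + embedding lookup
    include_text=True,
)

query_engine = RetrieverQueryEngine.from_args(retriever=retriever)
response = query_engine.query("your question")
```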
@kapa.ai I made an index with KnowledgeGraphIndex with include_embeddings=True; what is the best query for this index?
7 comments
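Since the index was built with include_embeddings=True, the docs' knowledge-graph examples suggest a hybrid keyword-plus-embedding query works best. A hedged sketch; parameter support varies by version, and similarity_top_k is worth tuning:

```python
# Hybrid retrieval uses both keyword matching and the stored triplet
# embeddings; include_text also pulls in the source chunks.
query_engine = index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
    embedding_mode="hybrid",
    similarity_top_k=5,
)
response = query_engine.query("your question")
```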