Find answers from the community

Bar Haim
Joined September 25, 2024
I think I found a bug in the KnowledgeGraphRAGRetriever code: it reads "graph_query_synthesis_prompt" out of kwargs to pass into "KnowledgeGraphQueryEngine", but also leaves it in kwargs, resulting in: TypeError: llama_index.query_engine.knowledge_graph_query_engine.KnowledgeGraphQueryEngine() got multiple values for keyword argument 'graph_query_synthesis_prompt'
3 comments
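For context, a minimal sketch of the double-pass pattern described in this report (the function names here are illustrative stand-ins, not the actual llama_index internals): reading a value from kwargs with .get() leaves it in the dict, so forwarding **kwargs passes the same keyword twice.

```python
def query_engine(graph_query_synthesis_prompt=None, **kwargs):
    """Illustrative stand-in for KnowledgeGraphQueryEngine.__init__."""
    return graph_query_synthesis_prompt

def buggy_retriever(**kwargs):
    # Bug: .get() leaves the key in kwargs, so it is passed twice below.
    prompt = kwargs.get("graph_query_synthesis_prompt")
    return query_engine(graph_query_synthesis_prompt=prompt, **kwargs)

def fixed_retriever(**kwargs):
    # Fix: .pop() removes the key before **kwargs is forwarded.
    prompt = kwargs.pop("graph_query_synthesis_prompt", None)
    return query_engine(graph_query_synthesis_prompt=prompt, **kwargs)

try:
    buggy_retriever(graph_query_synthesis_prompt="PROMPT")
except TypeError as e:
    print("buggy:", e)  # got multiple values for keyword argument ...

print("fixed:", fixed_retriever(graph_query_synthesis_prompt="PROMPT"))
```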
I am building an index with 300k documents. Is it possible to see the progress of the build, e.g. a tqdm-style progress bar?
16 comments
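One generic way to get a tqdm-style readout if you control the ingestion loop (a stdlib-only sketch; recent LlamaIndex versions also accept a show_progress flag on index construction, but check your version's docs):

```python
import sys

def with_progress(items, label="indexing"):
    """Yield items while printing a simple progress counter (tqdm stand-in)."""
    total = len(items)
    for i, item in enumerate(items, 1):
        sys.stderr.write(f"\r{label}: {i}/{total} ({100 * i // total}%)")
        sys.stderr.flush()
        yield item
    sys.stderr.write("\n")

# Illustrative usage: feed documents through the wrapper as you process them.
documents = [f"doc-{n}" for n in range(1000)]  # stand-in for 300k real documents
processed = [d.upper() for d in with_progress(documents)]
```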
I have a graph stored in a Neo4j graph database. Is it possible to query that graph with LlamaIndex? I only saw how to build a new graph out of documents, not how to use an existing graph.
24 comments
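As a generic illustration of the idea (not LlamaIndex or Neo4j API, just a sketch of treating an already-populated triplet store as a retrieval source): wrap the existing graph behind a small retrieve function instead of rebuilding it from documents.

```python
# Stand-in for an existing graph: subject -> list of (relation, object) triplets.
# In the question above this data would already live in Neo4j.
graph = {
    "llama_index": [("is_a", "framework"), ("supports", "neo4j")],
    "neo4j": [("is_a", "graph database")],
}

def retrieve_triplets(query_term, max_results=5):
    """Return triplets whose subject or object matches the query term."""
    hits = []
    for subj, edges in graph.items():
        for rel, obj in edges:
            if query_term in (subj, obj):
                hits.append((subj, rel, obj))
    return hits[:max_results]

print(retrieve_triplets("neo4j"))
```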
Is it possible to make a wrapper for the LLM model and use my own API for embedding and prediction?
2 comments
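A generic sketch of the wrapper pattern (illustrative only; the actual LlamaIndex base classes and method names depend on your version, so check the custom-LLM and custom-embedding docs): expose your own API calls behind the two methods the framework invokes.

```python
from typing import Callable, List

class MyAPIWrapper:
    """Wraps arbitrary embedding/prediction callables, e.g. calls to your own HTTP API."""

    def __init__(self, embed_fn: Callable[[str], List[float]],
                 predict_fn: Callable[[str], str]):
        self._embed_fn = embed_fn
        self._predict_fn = predict_fn

    def get_text_embedding(self, text: str) -> List[float]:
        # Delegate embedding to your own service.
        return self._embed_fn(text)

    def complete(self, prompt: str) -> str:
        # Delegate prediction/completion to your own service.
        return self._predict_fn(prompt)

# Illustrative usage with fake backends standing in for real API calls:
wrapper = MyAPIWrapper(
    embed_fn=lambda t: [float(len(t))],
    predict_fn=lambda p: f"echo: {p}",
)
```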
How can I stream a response without printing it? This:

Python
query_engine = index.as_query_engine(streaming=True)
streaming_response = query_engine.query("Who is Paul Graham.")
streaming_response.print_response_stream()


will print the output; I want to pass it to a chatbot instead.
1 comment
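A generic sketch of consuming a token stream instead of printing it (LlamaIndex streaming responses also expose a response_gen generator you can iterate, but verify that against your version's docs; the fake generator here is an illustrative stand-in):

```python
def fake_response_gen():
    """Illustrative stand-in for a streaming response's token generator."""
    for token in ["Paul ", "Graham ", "is ", "an ", "essayist."]:
        yield token

def forward_to_chatbot(token_gen, send):
    """Push each token to a chatbot callback instead of printing; return full text."""
    parts = []
    for token in token_gen:
        send(token)          # e.g. push over a websocket to the chat UI
        parts.append(token)
    return "".join(parts)

sent = []
full = forward_to_chatbot(fake_response_gen(), sent.append)
```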
Hello, I've built an index. Is there a way to query it but just return the top-k relevant nodes, without the response-generation step?
3 comments
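As a generic sketch of the retrieval-only step (not the LlamaIndex API itself; frameworks typically expose a retriever object for this, so check the retriever docs for your version): score nodes against the query and keep the top k, with no LLM synthesis involved.

```python
import heapq

def top_k_nodes(query, nodes, k=3):
    """Rank nodes by a toy word-overlap score and return the k best (no LLM call)."""
    q = set(query.lower().split())

    def score(text):
        return len(q & set(text.lower().split()))

    return heapq.nlargest(k, nodes, key=score)

nodes = [
    "Paul Graham founded Y Combinator",
    "Lisp is a programming language",
    "Paul Graham writes essays",
]
print(top_k_nodes("essays by Paul Graham", nodes, k=2))
```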
I can't find a doc on how to implement streaming for a custom LLM; the CompletionResponseGen section is not filled in in the docs.
33 comments
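CompletionResponseGen in LlamaIndex is essentially a generator of incremental completion chunks. A generic sketch of implementing the streaming method for a custom LLM (the class and field names here are illustrative, not the exact llama_index types, so check them against your installed version):

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class CompletionChunk:
    """Illustrative stand-in for a streaming chunk: the new delta plus text so far."""
    delta: str
    text: str

def stream_complete(prompt: str) -> Iterator[CompletionChunk]:
    """Yield chunks as the backend produces tokens (fake token stream below)."""
    produced = ""
    for token in ["Hello", ", ", "world", "!"]:  # stand-in for your API's stream
        produced += token
        yield CompletionChunk(delta=token, text=produced)

chunks = list(stream_complete("greet"))
```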