Find answers from the community

Updated 2 months ago

What Alternatives to GraphRAG Offer Better Performance in Terms of Cost, Speed, and Simplicity?

At a glance

The community members are discussing alternatives to GraphRAG, a tool used for high-level context, low-level context, and summarization. The original poster is not satisfied with GraphRAG's performance and suggests that partitioned vector databases or hybrid search could be better alternatives in terms of cost, speed, and simplicity.

The comments suggest that there are many ways to achieve similar functionality, such as using metadata tags, query engines, and agentic retrieval in a vector store. One community member mentions using prompt caching with Gemini to reduce token utilization, but notes that it is not cheap. Another community member raises concerns about handling incremental deletions in the original GraphRAG approach. Overall, the community members are exploring different approaches to address the limitations of GraphRAG.

Any better alternative than GraphRAG in terms of:
  • cost
  • speed
  • simplicity?
I'm really not sold on GraphRAG's performance; it seems high-level context, low-level context, and summarization could be done better using a partitioned vector DB or hybrid search
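To make the hybrid-search suggestion concrete, here is a minimal sketch of the idea: blend a lexical score with a vector-similarity score and rank by the mix. The scoring functions are toy stand-ins (a real setup would use BM25 and a real embedding model), and all names here are made up for illustration.

```python
import math
from collections import Counter

def keyword_score(query: str, doc: str) -> float:
    """Crude lexical-overlap score (stand-in for BM25)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(min(q[t], d[t]) for t in q)
    return overlap / max(len(doc.split()), 1)

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, query_vec: list[float], docs, alpha: float = 0.5):
    """Rank docs by a weighted blend of vector and lexical scores.

    `docs` is a list of (text, embedding) pairs; `alpha` weights the
    vector side of the blend.
    """
    scored = [
        (alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text), text)
        for text, vec in docs
    ]
    return [text for _, text in sorted(scored, reverse=True)]
```

Tuning `alpha` per corpus is usually where the work goes: lexical-heavy for exact identifiers, vector-heavy for paraphrased questions.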
8 comments
I agree with this take

Really, what GraphRAG is trying to solve is, as you said, using a high-level context to narrow down which nodes to include during search

There are many ways to do this tbh. For example, maybe you have many files and want to select files based on a summary or QA pairs that represent each file. It's just a matter of indexing that reference information, and swapping out the reference content for the real content before sending the text to the LLM
How you actually go about that will be specific to your data imo
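The summary-index idea above can be sketched in a few lines: search against lightweight per-file summaries, then substitute the full file content before building the prompt. Everything here (file names, summaries, the overlap scorer) is hypothetical; a real version would embed the summaries and use a vector store.

```python
# Hypothetical per-file summary index: retrieval runs over summaries,
# but the *full* content is what gets sent to the LLM.
index = {
    "billing.md": "Summary: how invoices and refunds are processed.",
    "auth.md": "Summary: login flow, OAuth tokens, session expiry.",
}
full_content = {
    "billing.md": "Full text about invoices, refunds, and proration rules.",
    "auth.md": "Full text about OAuth, sessions, and token refresh.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Score summaries by token overlap, return the full content instead."""
    q = set(query.lower().split())
    ranked = sorted(
        index,
        key=lambda f: len(q & set(index[f].lower().split())),
        reverse=True,
    )
    return [full_content[f] for f in ranked[:k]]
```

The swap step is the whole trick: the index stays small and cheap to search, while the LLM still sees the real text.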
I have 250 GB of data in a GraphRAG setup


If your application doesn't require it, I would not recommend using it... 9 times out of 10, if you add metadata tags + a query engine + agentic retrieval to a vector store, you're golden.

The metadata tags will help you the most.
To cut down on token utilization in the graph, we do a lot of prompt caching with Gemini.

Saves us $25–30k a month
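For anyone unfamiliar with why prompt caching saves that much: a large shared prefix (graph context, instructions) is processed once and reused, so each query only pays for its short suffix. The sketch below is *not* the Gemini API, just a toy model of the accounting, counting words in place of tokens.

```python
import hashlib

class PrefixCache:
    """Toy model of prefix caching: a shared prefix is 'processed'
    once, then only per-query suffixes cost anything."""

    def __init__(self):
        self._seen: set[str] = set()

    def tokens_to_process(self, prefix: str, suffix: str) -> int:
        """Return how many words must be freshly processed for this call."""
        key = hashlib.sha256(prefix.encode()).hexdigest()
        fresh = len(suffix.split())
        if key not in self._seen:
            fresh += len(prefix.split())  # first call pays for the prefix
            self._seen.add(key)
        return fresh
```

With a multi-thousand-token graph context and short questions, nearly all the cost sits in the prefix, which is exactly what the cache amortizes.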
Gemini prompt caching isn't cheap though
I'm curious how you would handle incremental deletes on that data. The original GraphRAG doesn't keep track of which nodes are affected by which documents / chunks
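One way to get the provenance tracking that stock GraphRAG lacks (a sketch, not anything GraphRAG ships): record which chunks contributed to each graph node, and only drop a node once every contributing chunk has been deleted. All names here are hypothetical.

```python
from collections import defaultdict

class ProvenanceIndex:
    """Track which source chunks support each graph node, so deleting
    a chunk only removes nodes with no remaining sources."""

    def __init__(self):
        self.node_sources: dict[str, set[str]] = defaultdict(set)

    def add(self, chunk_id: str, node_ids: list[str]) -> None:
        """Record that `chunk_id` contributed to each node in `node_ids`."""
        for n in node_ids:
            self.node_sources[n].add(chunk_id)

    def delete_chunk(self, chunk_id: str) -> list[str]:
        """Remove a chunk; return the nodes left with no sources at all."""
        orphaned = []
        for node, chunks in list(self.node_sources.items()):
            chunks.discard(chunk_id)
            if not chunks:
                orphaned.append(node)
                del self.node_sources[node]
        return orphaned
```

The same bookkeeping also makes incremental *updates* tractable: re-ingesting a changed chunk is a delete followed by an add, touching only the affected nodes.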
I'm using a similar approach to Cohere's contextual RAG, but the client doesn't seem satisfied with the results.
Sorry, only responding to this now