I'm really not sold on GraphRAG's performance. It seems like high-level context, low-level context, and summaries could all be handled better with a partitioned vector DB or hybrid search.
Really, what GraphRAG is trying to solve is, as you said, using high-level context to narrow down which nodes to include during search.
There are many ways to do this tbh. For example, maybe you have many files and want to select files based on a summary or QA pairs that represent each file. It's just a matter of indexing that reference information, then swapping the reference content out for the real content before sending the text to the LLM.
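A minimal sketch of that proxy-indexing idea (all names here are hypothetical, and toy word-overlap scoring stands in for real embedding/hybrid search):

```python
# Hypothetical sketch: index per-file summaries as retrieval proxies,
# then swap in the full file content before building the LLM prompt.

class ProxyIndex:
    def __init__(self):
        self.entries = []  # list of (proxy_summary, real_content)

    def add(self, summary: str, real_content: str):
        self.entries.append((summary, real_content))

    def search(self, query: str, k: int = 2):
        # Toy relevance: word overlap between query and summary.
        # A real system would use vector or hybrid search here.
        q = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(q & set(e[0].lower().split())),
            reverse=True,
        )
        # Return the *real* content, not the proxy summary.
        return [real for _summary, real in scored[:k]]

index = ProxyIndex()
index.add("billing module overview: invoices, refunds, revenue",
          "def refund(order_id): ...  # full file contents here")
index.add("auth module overview: login, sessions, tokens",
          "def login(user, pw): ...  # full file contents here")

context = index.search("how do refunds work?", k=1)
prompt = "Answer using:\n" + "\n---\n".join(context)
```

The key point is that the text you embed (the summary) and the text you hand to the LLM (the file) don't have to be the same thing.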
If your application doesn't require it, I wouldn't recommend using it... 9 times out of 10, if you add metadata tags + a query engine + agentic retrieval on top of a vector store, you're golden.
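For illustration, the metadata-tag part of that combo can be sketched like this (the store, filter shape, and scoring below are hypothetical placeholders, not any specific library's API):

```python
# Sketch: structured metadata pre-filter, then rank what survives.
# Toy keyword overlap stands in for the actual vector similarity step.

docs = [
    {"text": "Q3 revenue grew strongly", "meta": {"type": "finance", "year": 2023}},
    {"text": "New login flow shipped",   "meta": {"type": "eng",     "year": 2023}},
    {"text": "Q3 revenue was flat",      "meta": {"type": "finance", "year": 2022}},
]

def retrieve(query_terms, filters, k=5):
    # 1) Cheap structured pre-filter on metadata tags...
    pool = [d for d in docs
            if all(d["meta"].get(key) == val for key, val in filters.items())]
    # 2) ...then rank the survivors by (toy) relevance.
    q = {t.lower() for t in query_terms}
    pool.sort(key=lambda d: len(q & set(d["text"].lower().split())),
              reverse=True)
    return pool[:k]

hits = retrieve({"revenue"}, {"type": "finance", "year": 2023})
```

An agentic layer would just decide *which* filters to apply before calling something like `retrieve`, which gets you most of the "narrow the search space" benefit GraphRAG is after.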
I'm curious how you would handle incremental deletes on that data. The original GraphRAG doesn't keep track of which nodes are affected by which documents/chunks.
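One way to make incremental deletes possible is to record a document-to-node provenance map with reference counts at ingest time, so deleting a document only removes nodes that no other document still supports. A rough sketch (names are hypothetical; this mapping is exactly what stock GraphRAG doesn't store):

```python
# Sketch: track which graph nodes each document contributed, plus a
# refcount per node, so deletes only drop truly orphaned nodes.
from collections import defaultdict

class ProvenanceIndex:
    def __init__(self):
        self.doc_to_nodes = defaultdict(set)   # doc_id -> node ids
        self.node_refcount = defaultdict(int)  # node id -> #docs citing it

    def ingest(self, doc_id, node_ids):
        for n in node_ids:
            if n not in self.doc_to_nodes[doc_id]:
                self.doc_to_nodes[doc_id].add(n)
                self.node_refcount[n] += 1

    def delete_doc(self, doc_id):
        orphans = []
        for n in self.doc_to_nodes.pop(doc_id, set()):
            self.node_refcount[n] -= 1
            if self.node_refcount[n] == 0:
                del self.node_refcount[n]
                orphans.append(n)  # safe to drop from the graph
        return orphans

idx = ProvenanceIndex()
idx.ingest("doc1", {"alice", "acme"})
idx.ingest("doc2", {"acme", "bob"})
orphans = idx.delete_doc("doc1")  # only "alice" becomes orphaned
```

Community summaries built on top of the graph would still need re-summarizing after a delete, which is its own headache.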