Hello, I am using your implementation of GraphRAG with LlamaIndex (
https://docs.llamaindex.ai/en/stable/examples/cookbooks/GraphRAG_v1/?h=graphrag), and it works very well! I have a question: can we use "gpt-4o-mini" or our own fine-tuned LLM when building the query engine? Currently it only accepts a limited set of OpenAI models. Thanks!
```python
query_engine = GraphRAGQueryEngine(
    graph_store=index.property_graph_store,
    llm=llm,
)
```
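For context, here is what I was hoping would work — a sketch, assuming the `GraphRAGQueryEngine` and `index` from the cookbook are already defined, and that the `OpenAI` LLM wrapper accepts an arbitrary model id (the `ft:...` id below is a hypothetical placeholder, not a real model):

```python
from llama_index.llms.openai import OpenAI

# Try swapping in gpt-4o-mini for the query engine's LLM.
llm = OpenAI(model="gpt-4o-mini")

# Or, for a fine-tuned model, the id from the OpenAI dashboard, e.g.:
# llm = OpenAI(model="ft:gpt-4o-mini-2024-07-18:my-org::abc123")  # hypothetical id

query_engine = GraphRAGQueryEngine(
    graph_store=index.property_graph_store,
    llm=llm,
)
```

Is something like this supported, or does the model-name validation in the query engine reject these ids?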