Find answers from the community

Updated 4 months ago

LLM

Hello, I am using your implementation of GraphRAG with LlamaIndex (https://docs.llamaindex.ai/en/stable/examples/cookbooks/GraphRAG_v1/?h=graphrag), and it works very well! I have a question: can we use "gpt-4o-mini" or our own fine-tuned LLM when building the query engine? Currently it only seems to accept a limited set of OpenAI models. Thanks!
query_engine = GraphRAGQueryEngine(
    graph_store=index.property_graph_store, llm=llm
)
8 comments
Yeah, you can pass in any LLM; it should work fine.
Hello, I tried again but it still doesn't work. Here is the code and the error I got.
Attachments: image.png, image.png
pip install -U llama-index-llms-openai
Hi, I think it still doesn't work. Can you help check whether the "llm" parameter of GraphRAGQueryEngine accepts "gpt-4o-mini"? Thanks!
Attachment: image.png
Are you running in a notebook? Make sure you restart the kernel after updating that OpenAI LLM package.

It works fine on my end
Yes, it works now. Many thanks!