adish007 · Joined September 25, 2024
Hi @WhiteFang_Jr, @Logan M,

I'm trying out GraphRAG using llama_index_cookbook_v1, and I'm facing an issue while running:

index = PropertyGraphIndex(
    nodes=nodes,
    property_graph_store=GraphRAGStore(llm=llm),
    kg_extractors=[kg_extractor],
    show_progress=True,
)

The error is:

AuthenticationError: Error code: 401 - {'message': 'Invalid API key in request'}

raised from File ~/Library/Python/3.9/lib/python/site-packages/llama_index/core/indices/property_graph/base.py:134, in PropertyGraphIndex.__init__(self, nodes, llm, kg_extractors, property_graph_store, vector_store, use_async, embed_model, embed_kg_nodes, callback_manager, transformations, storage_context, show_progress, **kwargs)

My intuition:

Since I'm using a custom gateway that fronts the OpenAI API, I configured:

import os
from llama_index.llms.openai import OpenAI

os.environ['OPENAI_API_BASE'] = "https://llm-gateway.api.dev.sapt.com/api/openai"
os.environ['OPENAI_API_KEY'] = "EMPTY"

llm = OpenAI(model="gpt-4", default_headers={"Authorization": "123456"})


I think somewhere in the GraphRAG implementation an OpenAI API call is made directly, bypassing the llm I configured.

For example, in the generate_community_summary function inside GraphRAGStore there is a direct call: response = OpenAI().chat(messages), which constructs a fresh client with the default OpenAI endpoint.
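To illustrate the pattern I think would fix it: override the summary step so it uses the store's configured llm rather than a fresh OpenAI() client. This is a minimal stand-in sketch, not the cookbook code; the stub classes, FakeLLM, and method bodies are my own assumptions:

```python
class GraphRAGStoreStub:
    """Stand-in for the cookbook's GraphRAGStore (the real class holds more state)."""
    def __init__(self, llm):
        self.llm = llm

    def generate_community_summary(self, text):
        # The cookbook version does roughly: response = OpenAI().chat(messages)
        # -- a fresh client that ignores self.llm and hits api.openai.com.
        raise NotImplementedError


class GatewayGraphRAGStore(GraphRAGStoreStub):
    """Route the community-summary call through the configured llm instead."""
    def generate_community_summary(self, text):
        messages = [f"Summarize: {text}"]
        return self.llm.chat(messages)  # goes through the gateway-backed LLM


class FakeLLM:
    """Fake LLM so this sketch runs without any network access."""
    def chat(self, messages):
        return f"summary of {len(messages)} message(s)"


store = GatewayGraphRAGStore(llm=FakeLLM())
print(store.generate_community_summary("some community text"))
# → summary of 1 message(s)
```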

I've also tried setting:

from llama_index.core import Settings

Settings.llm = llm


Please help me route these calls through my API gateway.
14 comments