At a glance

The community member is facing an issue while running the GraphRAG using the llama_index_cookbook_v1, where they are getting an "AuthenticationError: Error code: 401 - {'message': 'Invalid API key in request'}" error. They are using a custom gateway that interacts with the OpenAI API, and they suspect that the issue is related to the OpenAI API calls being made directly in the GraphRAG implementation.

The community members suggest using the OpenAILike class from the llama_index library, which allows the use of a custom API gateway. However, they encounter a different error, "InternalServerError: Error code: 500 - {'detail': "500: Internal error due to: Error code: 404 - {'error': {'message': 'This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}"}". They are advised to declare the OpenAILike instance as a chat model by setting is_chat_model=True, is_function_calling_model=True.
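
A sketch of the working configuration that emerges from the thread below (the gateway URL and Authorization token are placeholders; substitute your own values):

Plain Text
from llama_index.core import Settings
from llama_index.llms.openai_like import OpenAILike
from llama_index.embeddings.openai import OpenAIEmbedding

GATEWAY_BASE = "https://<your-gateway>/api/openai"  # placeholder
HEADERS = {"Authorization": "<your-token>"}         # placeholder

# Declare the gateway-backed model as a chat model so requests go to
# v1/chat/completions instead of v1/completions.
llm = OpenAILike(
    model="gpt-4",
    api_base=GATEWAY_BASE,
    api_key="EMPTY",
    default_headers=HEADERS,
    is_chat_model=True,
    is_function_calling_model=True,
)

# Route embeddings through the same gateway so PropertyGraphIndex does not
# fall back to the default OpenAI client.
embed_model = OpenAIEmbedding(
    model_name="text-embedding-3-small",
    api_base=GATEWAY_BASE,
    api_key="EMPTY",
    default_headers=HEADERS,
)

Settings.llm = llm
Settings.embed_model = embed_model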

Hi @WhiteFang_Jr , @Logan M ,

I'm trying out GraphRAG using llama_index_cookbook_v1.

Here I'm facing an issue while running:

index = PropertyGraphIndex(
    nodes=nodes,
    property_graph_store=GraphRAGStore(llm=llm),
    kg_extractors=[kg_extractor],
    show_progress=True,
)

The error is: AuthenticationError: Error code: 401 - {'message': 'Invalid API key in request'}

(raised from File ~/Library/Python/3.9/lib/python/site-packages/llama_index/core/indices/property_graph/base.py:134, in PropertyGraphIndex.__init__(self, nodes, llm, kg_extractors, property_graph_store, vector_store, use_async, embed_model, embed_kg_nodes, callback_manager, transformations, storage_context, show_progress, **kwargs))




My intuition:

Since I'm using a custom gateway that interacts with the OpenAI API:

import os
from llama_index.llms.openai import OpenAI

os.environ['OPENAI_API_BASE'] = "https://llm-gateway.api.dev.sapt.com/api/openai"
os.environ['OPENAI_API_KEY'] = "EMPTY"

llm = OpenAI(model="gpt-4", default_headers={"Authorization": '123456'})


I think an OpenAI API call is made directly somewhere in the GraphRAG implementation.

For example, in the generate_community_summary function inside GraphRAGStore there is a direct call: response = OpenAI().chat(messages)
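
A sketch of the kind of override that would route that call through the configured LLM instead (it assumes the cookbook's GraphRAGStore builds a system/user message pair and calls .chat() on it, as described above; the prompt text here is illustrative, keep the cookbook's original wording):

Plain Text
import re

from llama_index.core import Settings
from llama_index.core.llms import ChatMessage

class GatewayGraphRAGStore(GraphRAGStore):  # GraphRAGStore comes from the cookbook
    def generate_community_summary(self, text):
        messages = [
            ChatMessage(
                role="system",
                content="Summarize the following relationships into a short community summary.",
            ),
            ChatMessage(role="user", content=text),
        ]
        # Use the globally configured LLM (the OpenAILike gateway client)
        # instead of a fresh OpenAI() client.
        response = Settings.llm.chat(messages)
        # str(ChatResponse) is prefixed with "assistant:", so strip it.
        return re.sub(r"^assistant:\s*", "", str(response)).strip()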

I've also tried setting:

from llama_index.core import Settings

Settings.llm = llm


Please help me route these calls through my API gateway.
15 comments
Hi @WhiteFang_Jr

from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(model="gpt-4", api_base="https://llm-gateway.api.dev.sapt.com/api/openai", api_key="EMPTY", default_headers={"Authorization": '123456'})

response = llm.complete("Hello World!")
print(str(response))

Now I'm getting a different error:

InternalServerError: Error code: 500 - {'detail': "500: Internal error due to: Error code: 404 - {'error': {'message': 'This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}"}
Why use openai-like and not OpenAI?
In this case, if you use openai-like, you need to declare it as a chat model

Plain Text
OpenAILike(..., is_chat_model=True, is_function_calling_model=True)
Hi @Logan M , @WhiteFang_Jr ,

The above changes worked for OpenAILike() - but my original problem still exists:

Whenever I run this code with my LLM already initialised (using my custom gateway which interacts with OpenAI services), I'm still getting: AuthenticationError: Error code: 401 - {'message': 'Invalid API key in request'}

LLM Initialisation :
===================

from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(model="gpt-4", api_base="https://b-llm-gateway.api.dev.saptso.p.com/api/openai", api_key="EMPTY", default_headers={"Authorization": '12345'}, is_chat_model=True, is_function_calling_model=True)

llm.complete("Hello World!")



Code block which creates the error:
=====================================

from llama_index.core import PropertyGraphIndex

index = PropertyGraphIndex(
    nodes=nodes,
    property_graph_store=GraphRAGStore(),
    kg_extractors=[kg_extractor],
    show_progress=True,
)
Probably this is being raised from the embedding model, if you read the full traceback (if I had to guess)
If yes, how can I rectify this? Is there any workaround?
File ~/Library/Python/3.9/lib/python/site-packages/llama_index/core/indices/property_graph/base.py:134

I'm seeing that the error is raised from the above path.
Error traceback file for your reference
Hi @Logan M, @WhiteFang_Jr ,

Is there any way to integrate this custom gateway of mine, which communicates with OpenAI services - including the embedding part?

Does LlamaIndex offer any solution where I can use my custom gateway for embeddings with OpenAI as well?
this is probably the most unreadable traceback I've ever seen lmao why is it in this format 😅
it looks like embeddings to me at least
Plain Text
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding(model_name="text-embedding-3-small", api_base="https://b-llm-gateway.api.dev.saptso.p.com/api/openai", api_key="EMPTY", default_headers={"Authorization": '12345'})

index = PropertyGraphIndex(
    nodes=nodes,
    property_graph_store=GraphRAGStore(),
    kg_extractors=[kg_extractor],
    embed_model=embed_model,
    show_progress=True,
)
Hi @Logan M

Sorry for the traceback format 😅
Thanks, the above code works for me

I was also facing some other issues (for example, the existing entity_pattern and relationship_pattern were returning empty results) while following the cookbook for the GraphRAG implementation from:
https://docs.llamaindex.ai/en/stable/examples/cookbooks/GraphRAG_v1/

I found the solution suggested by a user in: https://github.com/run-llama/llama_index/issues/15173
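
The gist of that fix, as an illustrative sketch only (the exact patterns suggested in the issue may differ; print one raw LLM response and adjust the regexes to whatever format your model actually emits):

Plain Text
import re

def parse_fn(response_str: str):
    # print(response_str)  # inspect the raw output once to see its real shape
    # Hypothetical key/value-style patterns; rewrite to match your model's output.
    entity_pattern = r"entity_name:\s*(.+?)\s*entity_type:\s*(.+?)\s*entity_description:\s*(.+)"
    relationship_pattern = (
        r"source_entity:\s*(.+?)\s*target_entity:\s*(.+?)\s*"
        r"relation:\s*(.+?)\s*relationship_description:\s*(.+)"
    )
    entities = re.findall(entity_pattern, response_str)
    relationships = re.findall(relationship_pattern, response_str)
    return entities, relationships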

I have a very humble request to update the documentation at https://docs.llamaindex.ai/en/stable/examples/cookbooks/GraphRAG_v1/ to address the issues I have mentioned above.

Thanks!
I struggled with OpenAILike for half an hour and read through the unit tests and docs, hitting the same errors as the previous user.
I finally found this message... Is is_chat_model always needed? Why not include it in the documentation?

As for the reason, I guess it is exactly why OpenAILike was designed in the first place: to use a customized OpenAI-compatible API endpoint with many models like Gemini/OpenAI/Anthropic.
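
A quick sketch of that use case (the endpoint and model name are placeholders for any OpenAI-compatible server):

Plain Text
from llama_index.llms.openai_like import OpenAILike

# Placeholder endpoint/model for any OpenAI-compatible server (vLLM, a gateway, etc.).
llm = OpenAILike(
    model="my-hosted-model",
    api_base="http://localhost:8000/v1",
    api_key="fake",
    is_chat_model=True,  # needed whenever the served model only supports chat completions
)
print(llm.complete("Hello World!"))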