BadRequestError: Error code: 400 - '$.tools' is too long. Maximum length is 128

Plain Text
BadRequestError: Error code: 400 - {'error': {'message': "'$.tools' is too long. Maximum length is 128, but got 221 items.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Hello guys, I'm getting this error even with gpt-4-1106-preview, which supports a higher context window and more tokens. I'm trying to use RAG. I have created a PandasQueryEngine for the CSV-based data and an UnstructuredReader to parse data from the PDFs.
Data source formats: PDF, TXT, CSV.
I'm using GPT-4 as the LLM and OpenAIEmbedding as the embed model.
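Roughly how those are set up (a minimal sketch; the file paths are placeholders):
Plain Text
from pathlib import Path

import pandas as pd
from llama_index import download_loader
from llama_index.query_engine import PandasQueryEngine

# Query engine over the CSV data
df = pd.read_csv("data/sales.csv")
csv_query_engine = PandasQueryEngine(df=df, verbose=True)

# Parse PDFs with the UnstructuredReader loader from llama-hub
UnstructuredReader = download_loader("UnstructuredReader")
pdf_documents = UnstructuredReader().load_data(file=Path("data/report.pdf"))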
Here is my service context:
Plain Text
from llama_index import ServiceContext
from llama_index.embeddings import OpenAIEmbedding
from llama_index.llms import OpenAI

llm = OpenAI(
    model="gpt-4-1106-preview",
    # model="gpt-4",
    # model="gpt-3.5-turbo",
    temperature=0.5,
)
OpenAI_embeddings = OpenAIEmbedding()
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=OpenAI_embeddings,
    chunk_size=1024,
)


I use RouterQueryEngine and SubQuestionQueryEngine as well; I'm testing out both.
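The router itself is constructed roughly like this (a minimal sketch; the selector choice is an assumption):
Plain Text
from llama_index.query_engine import RouterQueryEngine
from llama_index.selectors.pydantic_selectors import PydanticSingleSelector

# Route each query to the most relevant underlying query engine
router_query_engine = RouterQueryEngine(
    selector=PydanticSingleSelector.from_defaults(),
    query_engine_tools=individual_query_engine_tools,
    service_context=service_context,
)

And the root tool wrapping the router: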
Plain Text
from llama_index.tools import QueryEngineTool, ToolMetadata

root_query_engine_tool = QueryEngineTool(
    # query_engine=root_query_engine,
    query_engine=router_query_engine,
    metadata=ToolMetadata(
        name="router_query_engine",
        # name="sub_question_query_engine",
        description="useful for when you want to answer queries that require analyzing multiple sections of documents",
    ),
)

# Combine all query engine tools into one list
tools = individual_query_engine_tools + [root_query_engine_tool]

The indices are stored in local storage with a persist directory and loaded at runtime.
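Loading at runtime looks roughly like this (a minimal sketch; the persist path is a placeholder):
Plain Text
from llama_index import StorageContext, load_index_from_storage

# Rebuild the index from the persisted directory instead of re-embedding
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)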
I'm using an OpenAI data agent with these tools.
How do I resolve this error?
If I'm reading this right, you sent more than 128 tools?

The only way to overcome this is to create some hierarchy of tools
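For example, you can put the tools themselves in an index and have the agent retrieve only the top-k relevant tools per query, instead of sending all of them to OpenAI (a minimal sketch assuming the ObjectIndex API, with tools and llm defined as above):
Plain Text
from llama_index import VectorStoreIndex
from llama_index.agent import OpenAIAgent
from llama_index.objects import ObjectIndex, SimpleToolNodeMapping

# Embed the tools so only a handful are retrieved per query
tool_mapping = SimpleToolNodeMapping.from_objects(tools)
obj_index = ObjectIndex.from_objects(tools, tool_mapping, VectorStoreIndex)

# The agent now sends at most 5 tool schemas per request, not all 221
agent = OpenAIAgent.from_tools(
    tool_retriever=obj_index.as_retriever(similarity_top_k=5),
    llm=llm,
    verbose=True,
)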
Hi, sorry for butting in, but I keep getting the same error even though there was no change to my code, nor to the dataset the embedding is running on.
Do you have 128+ tools/indexes?
No idea, I'm sorry :S
But really, nothing changed on my side; I just wanted to rerun the load to regenerate a file.
My trouble is that I find no way to actually check what LlamaIndex does/sends, so I don't even know if it reaches the token limit (it shouldn't, though).
This is not related to token limits
In the above code, you'll see the user is using router_query_engine -- are you also using a router? Or an agent, maybe?
This seems unrelated
Nope, I'm simply loading a vector index from docs.
And you get the same error? "'$.tools' is too long. Maximum length is 128, but got 221 ..."
Ah, yes, a different function, but it started receiving the same response from OpenAI for seemingly no reason.
Sorry, my bad.
I only checked up to error 400 and assumed wrongly...
@Logan M @Semirke I have around 217 indices, but my data sources are large. What should I do? How can I use hierarchical tools with a large amount of data?