mangled name OpenAIAgent

At a glance
Plain Text
...
llama_index/agent/openai_agent.py", line 344, in chat
    chat_response = self._chat(
                    ^^^^^^^^^^^
...
python3.11/site-packages/llama_index/agent/openai_agent.py", line 40, in get_function_by_name
    raise ValueError(f"Tool with name {name} not found")


I'm using the OpenAIAgent for chat and I have quite a few tools at the moment (>30). It usually finds the right one, but sometimes it slightly mangles the name, leading to this error. What can I do about this?
Plain Text
    return OpenAIAgent.from_tools(
        tools=tools,  # all of them are VectorStoreIndex query engine tools
        llm=OpenAI(temperature=0),
        verbose=True,
        system_prompt=SYSTEM_PROMPT,
        max_function_calls=len(_indices) * 2,
        memory=ChatMemoryBuffer.from_defaults(llm=OpenAI(), token_limit=1000),
    )
You're really going after it Wyrine, fun to see your llama index journey.
I'm so close 😭
don't make me have to increase my cap this month
[Attachment: image.png]
haha $50?!
light work!
;P
[Attachment: image.png]
Anyway, let me look at OpenAIAgent and how it works
I imagine it uses an LLM prompt to choose the name
and the LLM is actually returning a slightly mangled name
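For what it's worth, the failing lookup from the traceback is roughly this (a hypothetical reconstruction based only on the error above, not the actual llama_index source):

Plain Text
# Sketch of the tool lookup in llama_index/agent/openai_agent.py,
# reconstructed from the traceback -- names may differ by version.
def get_function_by_name(tools, name):
    """Return the tool whose registered name matches the name the LLM asked to call."""
    name_to_tool = {tool.metadata.name: tool for tool in tools}
    if name not in name_to_tool:
        # This is the error in the traceback: the model returned a function
        # name that doesn't exactly match any registered tool name.
        raise ValueError(f"Tool with name {name} not found")
    return name_to_tool[name]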
help me so I can become Elon Musk rich 🙂
it's kind of a bad solution, but you could just ignore queries that fail via some flag that gets passed down
Are you naming the tools yourself?
Does it fail on the same name/tool?
yes:

Plain Text
    def _get_vector_index_query_engine_tool(self):
        query_engine = self.vector_index.as_query_engine(
            text_qa_template=NEW_CITATION_TEMPLATE,
        )
        return QueryEngineTool.from_defaults(
            query_engine=query_engine,
            name=f'{self.metadata["name"]}_vector_index',
            description=f"Useful for retrieving specific context relating to {self.metadata['description']}",
        )
It's possible..? I'm not sure
it should say the name in the error: Tool with name {name} not found
I think I saw it happen like twice and I want to say they were both within the same tool
Just validated that it's not the same name
I need to spend some time understanding how it picks which tool
and whether there is a prompt involved in what gets sent to the LLM
I'd look there.
otherwise Logan might be able to help when he gets back πŸ™‚ cc @Logan M
Hmm yeah, my only advice is to make the tool names as easy as possible for the LLM to write -- maybe remove the vector_index suffix?
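One way to do that is a small helper (hypothetical, not part of llama_index) that keeps names short and limited to the characters the OpenAI function-calling API accepts (letters, digits, underscores, dashes, max 64 chars):

Plain Text
import re

def make_tool_name(raw: str) -> str:
    # Hypothetical helper: normalize a metadata name into something the model
    # can reproduce exactly -- lowercase, [a-z0-9_-] only, <= 64 characters.
    name = raw.strip().lower().replace(" ", "_")
    name = re.sub(r"[^a-z0-9_-]", "", name)
    return name[:64]

Then something like name=make_tool_name(self.metadata["name"]) instead of adding the _vector_index suffix.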

Also, as you increase tools, you might just run into issues in general. Maybe you can look into a retriever to only show relevant tools (like the top 10) to the LLM
https://gpt-index.readthedocs.io/en/stable/examples/agent/openai_agent_retrieval.html#building-an-object-index
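Roughly, following that notebook (a sketch only -- class names vary a bit across llama_index versions, and tools here is assumed to be the same list passed to from_tools above):

Plain Text
from llama_index import VectorStoreIndex
from llama_index.agent import OpenAIAgent
from llama_index.llms import OpenAI
from llama_index.objects import ObjectIndex, SimpleToolNodeMapping

# Index the tools themselves, then let the agent retrieve only the most
# relevant ones per message instead of seeing all 30+ at once.
tool_mapping = SimpleToolNodeMapping.from_objects(tools)
obj_index = ObjectIndex.from_objects(tools, tool_mapping, VectorStoreIndex)

agent = OpenAIAgent.from_tools(
    tool_retriever=obj_index.as_retriever(similarity_top_k=10),
    llm=OpenAI(temperature=0),
    verbose=True,
)

(Some versions expose this via FnRetrieverOpenAIAgent.from_retriever(obj_index.as_retriever(), ...) instead of the tool_retriever argument.)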
That's helpful to know, I'll look into the retriever. I still can't help but be concerned about a whole complex query falling apart on the last step because of a bad guess at a name, and the application crashing along with it. Do you think it's worthwhile to have an optional field for returning empty results when tool lookup fails?
I think some optional return value, or falling back to a default tool, would be nice
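Until something like that exists, a small app-level wrapper works as a stopgap (hypothetical helper; it just matches the error string from the traceback):

Plain Text
def safe_chat(agent, message: str,
              fallback: str = "Sorry, I couldn't find the right tool for that.") -> str:
    # Catch the "Tool with name ... not found" ValueError so one bad
    # function name from the LLM doesn't crash the whole request.
    try:
        return str(agent.chat(message))
    except ValueError as err:
        if "Tool with name" in str(err):
            return fallback
        raise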
I think this relates to the "lost in the middle" problem with LLMs tbh
Should get better in the future though (or using gpt-4 for the top level agent might be a good choice too)
gpt-4 would be really nice to use but it's too expensive lol
and yeah - unfortunately we're implementing now and not in the future πŸ˜‚
who knows if the word "implementing" is even going to exist in the future πŸ™‚
Well, since it would only be for the top level agent (not the query engines/tools) it might not be toooo bad -- but I feel you, the 10x price increase is a lot (but it is a lot more reliable)
Yeah I tried it a little while ago with that setup and it was still too much.
Anyway, I thank you both once again for all your support and your patience with my dumb questions.