
Can anyone tell me why this agent does not attempt to call the tool parsing functions?

Can anyone tell me why this agent does not even attempt to call the tool parsing functions when it gets a perfectly formatted response from the LLM? I've been digging through the source code for hours now....

Python
from typing import List

from llama_index.agent.openai import OpenAIAgent
from llama_index.core.tools import BaseTool, FunctionTool
from llama_index.llms.openai_like import OpenAILike


def add(x: int, y: int) -> int:
    """Add two numbers together."""
    return x + y


def subtract(x: int, y: int) -> int:
    """Subtract the second number from the first."""
    return x - y


def multiply(x: int, y: int) -> int:
    """Multiply two numbers together."""
    return x * y


add_tool = FunctionTool.from_defaults(fn=add)
subtract_tool = FunctionTool.from_defaults(fn=subtract)
multiply_tool = FunctionTool.from_defaults(fn=multiply)

tools: List[BaseTool] = [add_tool, subtract_tool, multiply_tool]

llm = OpenAILike(
    logprobs=None,
    api_version="v2",
    model="Llama-3.1-8b",  # Replace with your model's name
    api_base=API_BASE,  # your OpenAI-compatible server's URL (defined elsewhere)
    api_key=API_KEY,  # defined elsewhere
    max_tokens=100,
    is_chat_model=True,
    is_function_calling_model=True,
)

# Note: is_chat_model / is_function_calling_model belong on the LLM above, not
# on the agent. system_prompt, messages, and advanced_tool_call_parser are
# defined elsewhere.
agent = OpenAIAgent.from_tools(
    system_prompt=system_prompt,
    chat_history=messages,
    tools=tools,
    llm=llm,
    verbose=True,
    tool_call_parser=advanced_tool_call_parser,
    allow_parallel_tool_calls=True,
)

res = agent.chat("Can you add together 5 + 9 for me.")  # returns an AgentChatResponse, not a str
16 comments
Could be because
  • you are using an open-source LLM, and they are generally bad at function calling
  • the openai-like server you are using doesn't actually support function calling using openai's tools/functions api
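For reference, a server that supports the tools API signals a tool call through a structured tool_calls field on the assistant message, not as JSON text in content. Roughly (abridged; field names per the OpenAI chat/completions spec, the id is a made-up example):

Python
# What an OpenAI-compatible /chat/completions response carries when the
# model calls a tool (abridged). Note the arguments live in tool_calls,
# not in the plain-text content field.
response_message = {
    "role": "assistant",
    "content": None,  # no plain text when a tool is being called
    "tool_calls": [
        {
            "id": "call_abc123",  # hypothetical id
            "type": "function",
            "function": {
                "name": "add",
                "arguments": '{"x": 5, "y": 9}',  # JSON-encoded string
            },
        }
    ],
}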
@Logan M Thanks for the response. I'm still confused, though. I am getting a perfect JSON response back: {"name": "add", "parameters": {"x": "5", "y": "9"}}. So I started digging through the source code and putting print statements through all the parsing methods of the agent. It's not even attempting to parse the JSON; none of the methods are getting called.
I'm going to make a custom agent to do this now, but I'm trying to use the LlamaIndex library as intended, so I really wanted to understand why this is the case. I was not able to find the exact decision point behind the lack of calls to the parsing functions over about 6 hours of digging through the code.

One of the comments is a bit confusing:
class OpenAILike(OpenAI):
"""
OpenAILike is a thin wrapper around the OpenAI model that makes it compatible with
3rd party tools that provide an openai-compatible api.

Currently, llama_index prevents using custom models with their OpenAI class
because they need to be able to infer some metadata from the model name.

NOTE: You still need to set the OPENAI_BASE_API and OPENAI_API_KEY environment
variables or the api_key and api_base constructor arguments.
OPENAI_API_KEY/api_key can normally be set to anything in this case,
but will depend on the tool you're using.
"""

This seems to state that the OpenAI"Like" class enables the 3rd-party LLM to work, but sometimes comments don't express the full intent. It does seem that this wrapper should enable my LLM to work, as it follows the OpenAI API.

Considering I'm getting a perfect response, it seems that the parsing functions should still fire off an attempt to parse. At this point I feel like there is a bug somewhere, and maybe I'll open an issue on GitHub.
In my experience, lack of tool calling was always on the LLM.
Thank you, I'm looking into this now!!!
@Logan M OK, so I was thinking that the JSON object was parsed out of the response on the LlamaIndex side, kinda like the ReAct agent, and added to the kwargs... But I think you're saying that it's parsed on the API side and added to the returned response as a param by the API. So I think I need to just go review the OpenAI tool API spec.

Does it sound like I'm on the right track here now?
Yea that sounds right to me.

Specifically, in the OpenAI LLM class, it's doing this to process the response from the openai client
[Attachment: image.png]
Where ChatCompletionMessage is the result of calling client.chat.completions.create(...).choices[0].message
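Since the attachment doesn't survive here, the gist of that processing is roughly this (paraphrased, not the exact source):

Python
# Paraphrased sketch of how the OpenAI LLM class converts the client's
# ChatCompletionMessage into a LlamaIndex ChatMessage. Key point: the server
# must have already parsed the tool calls into the structured tool_calls
# field; LlamaIndex just copies them into additional_kwargs for the agent.
from llama_index.core.llms import ChatMessage


def from_openai_message(openai_message) -> ChatMessage:
    additional_kwargs = {}
    if openai_message.tool_calls:
        additional_kwargs["tool_calls"] = openai_message.tool_calls
    return ChatMessage(
        role=openai_message.role,
        content=openai_message.content,
        additional_kwargs=additional_kwargs,
    )

So if your server puts the JSON in content instead of tool_calls, additional_kwargs stays empty and the agent never sees a tool call to hand to the parser.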
Cool, I was looking at it in the wrong frame of mind. I should be able to figure it out from here now... Thank you!
@Logan M Modified my backend and have the function calling working now.

Do you have any insight as to how streaming function calling works? I'm looking for documentation currently but have not found anything to indicate how I should implement this.
I think I have an idea of how it should work, but I was looking for good docs before I spend all my time implementing it incorrectly lol.

I'm thinking that for streaming function calls to work, it's implemented with the Assistants API rather than chat/completions?
Nope it's still chat/completions

I suggest trying it with openai (if you have a key) and inspecting responses to see how it works

I think it incrementally builds the JSON object for the tool/function as it streams. We have utils to take that and basically parse it into a partial object
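Something like this is what you'd see on the wire (a minimal sketch using the openai client directly, not LlamaIndex's utils; the endpoint and tool schema mirror the example above):

Python
# Minimal sketch of assembling a streamed tool call: each chunk's delta
# carries fragments of the function name/arguments, which the client
# concatenates until the stream ends.
from openai import OpenAI

client = OpenAI(base_url=API_BASE, api_key=API_KEY)  # same endpoint as above

add_schema = {
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two numbers together",
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "integer"},
                "y": {"type": "integer"},
            },
            "required": ["x", "y"],
        },
    },
}

stream = client.chat.completions.create(
    model="Llama-3.1-8b",
    messages=[{"role": "user", "content": "Can you add together 5 + 9 for me."}],
    tools=[add_schema],
    stream=True,
)

name, arguments = "", ""
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    for fragment in delta.tool_calls or []:
        if fragment.function.name:
            name += fragment.function.name
        if fragment.function.arguments:
            arguments += fragment.function.arguments

# After the stream ends, `arguments` holds the full JSON string,
# e.g. '{"x": 5, "y": 9}', ready to be parsed into a (partial) object.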
Yea, that's what I was thinking at first, but I wasn't sure how to get the data back to the server within the streaming call. I'll read through that, and then yea, I'll hook up OpenAI if I still have questions.

Thanks, you've saved me a lot of time. Drop an ETH address in my DMs if you've got one and I'll send a thank you haha : )