Is it just me, or when using a ReAct agent, the tool calls don't seem to be async? When it asks questions to tools, it blocks my whole process.
What kind of tools do you have? Not every tool will have async execution
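E.g. with FunctionTool, if you only give it a sync fn, the "async" path still ends up running your function synchronously and blocks the event loop. Rough sketch (the fetch_docs names are hypothetical; imports assume the llama-index 0.10+ layout):

import httpx
from llama_index.core.tools import FunctionTool

def fetch_docs(url: str) -> str:
    # sync: blocks the whole event loop while the request runs
    return httpx.get(url).text

async def afetch_docs(url: str) -> str:
    # async: yields control back to the loop while waiting
    async with httpx.AsyncClient() as client:
        resp = await client.get(url)
        return resp.text

# fn only -> tool.acall() still executes fetch_docs synchronously
blocking_tool = FunctionTool.from_defaults(fn=fetch_docs)

# fn + async_fn -> the agent can genuinely await the call
async_tool = FunctionTool.from_defaults(fn=fetch_docs, async_fn=afetch_docs)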
LLMs, @Logan M
How did you create the tools tho?
# Imports assume the llama-index 0.10+ package layout
import json
import logging

from llama_index.core.llms import ChatMessage
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.llms.openai import OpenAI
from llama_index.agent.openai import OpenAIAgent

# Hand the sub-agent the full route details as prior chat history
history.append(ChatMessage(
    role='assistant',
    content=(
        f"Here are all the details of the route:\n"
        f"  Context: {route.higher_context}\n"
        f"  Route: {route.route}\n"
        f"  Method: {route.method}\n"
        f"  Description: {route.description}\n"
        f"  Request Body: {json.dumps(clean_request_body, indent=2)}\n"
        f"  Parameters: {json.dumps(clean_parameters, indent=2) if route.parameters else 'None'}\n"
        f"  Possible Responses: {json.dumps(clean_responses, indent=2)}\n"
    ).strip(),
))

logging.info(f"history: {history[-1].content}")

# Sub-agent created without tools; it answers purely from its chat history
function_llm = OpenAI(model="gpt-3.5-turbo")
agent = OpenAIAgent.from_tools(
    None,
    llm=function_llm,
    verbose=True,
    system_prompt=prompt,
    chat_history=history,
)

# Wrap the sub-agent as a query engine tool for the top-level agent
doc_tool = QueryEngineTool(
    query_engine=agent,
    metadata=ToolMetadata(
        name=route.title(),
        description=tool_summary,
    ),
)
I removed the prompt so it could fit here, but yeah, that's basically it.
(I call my ReAct agent with astream_chat, btw:)
response = await self.top_agents[2].astream_chat(replaced_text, chat_history=chat_history)
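For reference, the stream then gets drained roughly like this (a sketch, assuming the StreamingAgentChatResponse interface):

# sketch: consume the token stream from astream_chat
async for token in response.async_response_gen():
    print(token, end="", flush=True)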
You create an agent without giving it tools? πŸ‘€ Or I guess the agent is the tool?

In any case, the query engine tool has an async method. It should be async, unless maybe the LLM you are using doesn't properly support async?
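E.g. you can sanity-check the async path by awaiting the tool directly, something like this (a sketch, reusing doc_tool from your snippet; the question string is made up):

# sketch: hit the query engine tool's async entry point directly
result = await doc_tool.acall("What does this route return?")
print(result.content)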

How do you end up using the tool?
It's used by the top agent, which gets called in get_stream for now.
not sure tbh. I do know that there is async tool calling, but also, it sounds like you might be on an older version of llama-index?
I don't think so; I rebuild my Docker image every day, which pulls the latest llama-index.
will check tho
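(Quick way to check, assuming the 0.10+ package layout:)

import llama_index.core
print(llama_index.core.__version__)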
also curious, if you are using openai, why mix react agents with openai agents? πŸ‘€
For this task, I need the main agent to reason in multiple steps because the API is quite complex, but I don't want to give it too many input tokens, so passing the full docs isn't an option. The sub-agents are there to take only the useful part of the API doc, summarize it based on the goal, and output just what the main agent needs at that moment, without extra information about what this specific route could also do.
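So the shape is roughly this (a sketch; the top-agent setup and the gpt-4 choice here are illustrative):

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

# sketch: top-level ReAct agent reasons over multiple steps and delegates
# per-route doc lookups to the sub-agent tools built above
top_agent = ReActAgent.from_tools(
    [doc_tool],  # one QueryEngineTool per API route
    llm=OpenAI(model="gpt-4"),
    verbose=True,
)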