Is there a way for `OpenAIAgent` function execution to be concurrent, especially in an asynchronous environment? Note that this is different from parallel function calling. What I am looking for is a way to execute those parallel function calls concurrently, something like `asyncio.gather`. Thanks!!

I used `get_current_weather` (modified to be async) to confirm that the function executions are in fact not concurrent.

```python
async def my_tool(...) -> str:
    """Some docstring"""
    await asyncio.sleep(1)
    return "Work done"

tool = FunctionTool.from_defaults(async_fn=my_tool)

# could also use FunctionCallingAgent, same thing, more generic
agent = OpenAIAgent.from_tools([tool], ...)

resp = await agent.achat(...)
```
```python
import asyncio
import json
import time

from llama_index.agent.openai import OpenAIAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

from app.config import OPENAI_API_KEY


async def aget_current_weather(location: str):
    """Get the current weather in a given location"""
    await asyncio.sleep(2)
    # NOTE: this will be printed sequentially
    print(f"sleeping for 2 secs, current time: {time.time()}")
    return json.dumps({"location": location, "temperature": "22", "unit": "celsius"})


async def main():
    aweather_tool = FunctionTool.from_defaults(async_fn=aget_current_weather)
    llm = OpenAI(model="gpt-4o-mini", api_key=OPENAI_API_KEY)
    aagent = OpenAIAgent.from_tools([aweather_tool], llm=llm)
    response = await aagent.achat(
        "What's the weather like in San Francisco, Tokyo, and Paris?",
    )
    print(response)


if __name__ == "__main__":
    asyncio.run(main())
```
Output (note the ~2-second gaps between the three timestamps, i.e. the tool calls ran sequentially):

```
sleeping for 2 secs, current time: 1734580081.053151
sleeping for 2 secs, current time: 1734580083.055695
sleeping for 2 secs, current time: 1734580085.0581667
The current weather in the three cities is as follows:
- **San Francisco**: 22°C
- **Tokyo**: 22°C
- **Paris**: 22°C
All three cities are experiencing the same temperature!
```
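For reference, here is what concurrent execution of the same (simulated) tool looks like outside the agent, using plain `asyncio.gather` — a minimal sketch of the behavior being asked for, not LlamaIndex API:

```python
import asyncio
import json
import time


async def aget_current_weather(location: str) -> str:
    """Simulated async weather lookup with a 2-second I/O wait."""
    await asyncio.sleep(2)
    return json.dumps({"location": location, "temperature": "22", "unit": "celsius"})


async def main():
    start = time.perf_counter()
    # asyncio.gather schedules all three coroutines concurrently,
    # so total wall time is roughly 2s rather than 6s.
    results = await asyncio.gather(
        aget_current_weather("San Francisco"),
        aget_current_weather("Tokyo"),
        aget_current_weather("Paris"),
    )
    elapsed = time.perf_counter() - start
    print(f"elapsed: {elapsed:.1f}s")
    return results


if __name__ == "__main__":
    asyncio.run(main())
```

If the agent awaited each tool call in sequence instead of gathering them, the same three calls would take ~6 seconds, which matches the timestamps in the output above.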
The same test using `FunctionCallingAgent`:

```python
from llama_index.core.agent import FunctionCallingAgent

...

async def main():
    aweather_tool = FunctionTool.from_defaults(async_fn=aget_current_weather)
    llm = OpenAI(model="gpt-4o-mini", api_key=OPENAI_API_KEY)
    aagent = FunctionCallingAgent.from_tools([aweather_tool], llm=llm)
    response = await aagent.achat(
        "What's the weather like in San Francisco, Tokyo, and Paris?",
    )
    print(response)
```
By the way, what does `FunctionCallingAgent` do differently vs `OpenAIAgent` (other than that the former does not support `astream_chat`)?