Hello All!
I think there could be a bug in the OpenAI agent memory.
I have written my own tool to query BigQuery; please see below:
import typing
from dotenv import load_dotenv; load_dotenv()
from llama_index.agent import OpenAIAgent
from llama_index.llms import OpenAI, ChatMessage
from llama_index.tools import BaseTool, FunctionTool
from tools import BigqueryTool
llm = OpenAI(model="gpt-3.5-turbo-0613", temperature=0)
bigquery_tool_spec = BigqueryTool(llm=llm)
tools = bigquery_tool_spec.to_tool_list()
# AGENT_SYSTEM_PROMPT is a string defined elsewhere in my code
agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=llm,
    verbose=True,
    system_prompt=AGENT_SYSTEM_PROMPT,
)
response = agent.chat("Can you first grab the schema?")
print(str(response))
The response is good (I have shortened it):
=== Calling Function ===
Calling function: get_schema with args: {}
Got output: [{"name": "hour", "type": "DATETIME", "mode": "NULLABLE"}, ...]
========================
The schema of the AirGrid platform is as follows:
1. hour: DATETIME
2. buyer_member_id: INTEGER
...
We can see the function returned the data needed and the LLM summarised it correctly!
Now when I call:
agent.all_messages
[ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='You are an agent tasked with helping analyse ....', additional_kwargs={})]
Only the initial system prompt set by
AGENT_SYSTEM_PROMPT
is present; the user message, the function call/result, and the assistant's reply from the chat above are all missing.
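For clarity, here is what I would expect the memory to contain after that single chat turn. This is a minimal plain-Python sketch of a chat buffer to illustrate the expected behaviour — my own toy illustration, not the llama_index implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # "system", "user", "assistant", or "function"
    content: str

@dataclass
class ChatBuffer:
    """Toy stand-in for an agent's memory: every turn should be appended."""
    messages: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append(Message(role, content))

# Simulate the conversation from above.
buf = ChatBuffer()
buf.add("system", "You are an agent tasked with helping analyse ...")
buf.add("user", "Can you first grab the schema?")
buf.add("function", '[{"name": "hour", "type": "DATETIME", "mode": "NULLABLE"}]')
buf.add("assistant", "The schema of the AirGrid platform is as follows: ...")

# After one turn I'd expect four messages, not just the system prompt.
print([m.role for m in buf.messages])
# → ['system', 'user', 'function', 'assistant']
```

So after `agent.chat(...)` I expected the memory to hold four messages like this, but instead only the system message survives.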