Hi, I faced a problem where context is lost when using tools. I created a tool that reads a URL; the agent can answer the first question about the page, but on the next question it completely loses the previously fetched data.

Plain Text
agent = OpenAIAgent.from_tools(
    [read_url_tool], llm=llm, verbose=True
)
...
response = agent.chat(query_text, chat_history)

The result:
Plain Text
Is this film scary https://en.wikipedia.org/wiki/Asteroid_City ?

response: The film "Asteroid City" is not categorized as a scary film. It is described as a 2023 American comedy-drama film directed by Wes Anderson. The plot revolves around a play set in a retrofuturistic version of 1955 during a Junior Stargazer convention. The film explores themes related to extraterrestrials, UFOs, and the postwar period of the 20th century. It features an ensemble cast and has received generally positive reviews.


is it suitable for kids?

response: I apologize but I don't know the answer.

So, how do I make the agent keep the information obtained earlier and treat it the same way as context data? Thanks!
Added: the problem is probably that I create the agent from scratch for every new question (my program serves multiple users and can't keep all conversations loaded in memory). So, is there a way to keep the interim data in some temporary context storage? Thank you.
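One common workaround for the multi-user case described above is to serialize each user's chat history to disk between requests and reload it before recreating the agent. A minimal sketch (plain Python, not LlamaIndex-specific; `save_history`, `load_history`, the directory name, and the role/content dict layout are all hypothetical):

```python
import json
from pathlib import Path

HISTORY_DIR = Path("chat_histories")  # hypothetical per-user storage location
HISTORY_DIR.mkdir(exist_ok=True)

def save_history(user_id: str, messages: list) -> None:
    """Persist one user's messages (role/content dicts) between requests."""
    (HISTORY_DIR / f"{user_id}.json").write_text(json.dumps(messages))

def load_history(user_id: str) -> list:
    """Reload the saved messages, or start fresh for a new user."""
    path = HISTORY_DIR / f"{user_id}.json"
    return json.loads(path.read_text()) if path.exists() else []

# Usage: load before building the agent, save after each turn.
messages = load_history("user-42")
messages.append({"role": "user", "content": "is it suitable for kids?"})
save_history("user-42", messages)
```

If the saved messages also include the tool outputs (not just the Q&A pairs), the rebuilt agent sees the fetched data on the next turn.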
13 comments
What happens if you print agent.chat_history before making the second chat call?
It has the chat history. I save the results and it updates.
I was wrong. It's actually empty. I only pass the chat_history to the agent.chat() function. But the chat_history has only the questions and answers, not the data obtained with the tools.
So, basically the question is "How to keep and pass the interim data taken on some steps?"
I can actually keep them but how to pass them to the agent every next time?
Keeping interim tool output really risks filling the context window too fast, hence why it's left out πŸ€”

in this case though, the agent responded (I think) with some response about Asteroid City, which I would expect to be in the history (unless that was the tool response)
Yes it was the tool response
Experiment: construct the chat_history on the fly:
Plain Text
from llama_index.llms import ChatMessage  # import path varies by llama_index version

response = agent.chat(query_text, chat_history)
chat_history.append(ChatMessage(role='user', content=query_text))
chat_history.append(ChatMessage(role='assistant', content=response.response))
response = agent.chat('Is it suitable for kids?', chat_history)

Even with this approach, providing the chat history and not creating the agent from scratch, it still can't answer.
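The experiment above still only records the question and the final answer, so the fetched page content never makes it into the history. One possible fix is to append the tool output itself as an extra history message. A minimal sketch (`ChatMessage` here is a stand-in dataclass, and `record_turn` is a hypothetical helper, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:  # stand-in for llama_index's ChatMessage
    role: str
    content: str

def record_turn(chat_history, query, tool_output, answer):
    """Keep the tool output in the history, not just the Q&A pair,
    so the next turn can still see the fetched page content."""
    chat_history.append(ChatMessage("user", query))
    if tool_output:  # the piece the default history drops
        chat_history.append(
            ChatMessage("assistant", f"[tool output] {tool_output}")
        )
    chat_history.append(ChatMessage("assistant", answer))
    return chat_history

history = record_turn(
    [],
    "Is this film scary?",
    "Asteroid City is a 2023 comedy-drama by Wes Anderson...",
    "No, it is a comedy-drama.",
)
```

The trade-off is the one noted earlier in the thread: echoing raw tool output into every turn fills the context window quickly, so truncating or summarizing the output before appending may be necessary.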
One additional note: when I do the same with only tools that compute something (like the example "add" and "multiply" tools), the agent memorizes the interim result. In my case, I also add a query engine tool that can take data from the index:
Plain Text
query_tool = QueryEngineTool(
    query_engine=self.query_engine,
    metadata=ToolMetadata(
        name="query_tool",
        description=self.query_description,
    ),
)

In this situation, when the agent can't find a fitting tool it looks in the index and doesn't find anything. But how can I make it store the interim result?
The reason it knows the interim result here is that there are two memories: a working memory for the current step, and a top-level memory.

Once the step is done (in this case, the agent calls one tool, knows it's not done, calls the next, and then knows it's done), the "step" is over.
Probably you'd need to modify the agent runner to change how this works.
There are some guides on this
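The two-memory layout described above can be sketched as a toy model (this is not LlamaIndex code; the class, method names, and the `keep_tool_output` flag are hypothetical, just to show why interim results vanish and what a modified runner would have to do):

```python
class TwoLevelMemory:
    """Toy model of the memory layout described above: a per-step
    working memory that is discarded when the step ends, and a
    top-level memory that normally keeps only the final Q&A."""

    def __init__(self):
        self.top_level = []   # survives across chat() calls
        self.working = []     # tool results for the current step only

    def add_tool_result(self, result: str):
        self.working.append(result)

    def end_step(self, question: str, answer: str, keep_tool_output=False):
        self.top_level.append(("user", question))
        if keep_tool_output:  # the modification the thread is after
            for r in self.working:
                self.top_level.append(("tool", r))
        self.top_level.append(("assistant", answer))
        self.working = []     # working memory is dropped either way

mem = TwoLevelMemory()
mem.add_tool_result("Asteroid City: 2023 comedy-drama by Wes Anderson")
mem.end_step("Is this film scary?", "No, it's a comedy-drama.",
             keep_tool_output=True)
```

With `keep_tool_output=False` (the default behavior being modeled), the tool result would never reach `top_level`, which matches the symptom in the original question.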
Great, let me look into it!