Updated 12 months ago

At a glance

A community member is experiencing issues with the memory and chat history of the LLMCompilerAgentPack when running it as a REPL (chat_repl). Another community member suggests that this could be due to the module filling up the available context window quickly, and recommends using agent.memory.get_all() to get the full chat history and agent.memory.get() to get the current chat buffer.

The LLMCompilerAgentPack seems to have poor or no memory of the chat history when running it as a REPL (chat_repl). Has anyone else bumped into that?
1 comment
I haven't actually used this particular module, but depending on what it's storing in memory, I imagine it could fill up the available context window pretty quickly.

You can use agent.memory.get_all() to get the full chat history, and agent.memory.get() to get the current chat buffer.
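To illustrate why the agent can appear to "forget": a token-limited chat buffer keeps the full history internally, but only hands the most recent messages that fit the token budget back to the LLM. The sketch below is a simplified stand-in, not the real LlamaIndex classes (`SimpleChatMemory` and its whitespace token counting are hypothetical); the actual calls on the agent are `agent.memory.get_all()` and `agent.memory.get()`.

```python
# Simplified illustration of a token-limited chat memory buffer.
# get_all() returns everything stored; get() returns only the most
# recent messages that fit within the token budget, so older turns
# silently drop out of what the model sees.

class SimpleChatMemory:
    def __init__(self, token_limit: int):
        self.token_limit = token_limit
        self.messages: list[str] = []

    def put(self, message: str) -> None:
        self.messages.append(message)

    def get_all(self) -> list[str]:
        # Full chat history, regardless of size.
        return list(self.messages)

    def get(self) -> list[str]:
        # Walk backwards from the newest message, keeping messages
        # until the budget runs out (tokens ~ whitespace-split words).
        budget, buffer = self.token_limit, []
        for msg in reversed(self.messages):
            cost = len(msg.split())
            if cost > budget:
                break
            budget -= cost
            buffer.append(msg)
        return list(reversed(buffer))


memory = SimpleChatMemory(token_limit=8)
memory.put("user: hello there")                       # 3 "tokens"
memory.put("assistant: hi how can I help you today")  # 8 "tokens"

print(len(memory.get_all()))  # 2 -- full history retained
print(len(memory.get()))      # 1 -- only the last message fits
```

If `get()` returns far fewer messages than `get_all()`, the buffer is being truncated, which would explain the REPL seeming to lose its history.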