The LLMCompilerAgentPack seems to have poor or no memory of the chat history when running it as a REPL (chat_repl). Has anyone else bumped into that?
I haven't actually used this particular module, but depending on what it's storing in memory, it could fill up the available context window pretty quickly, I imagine.
You can use agent.memory.get_all() to get the full chat history, and agent.memory.get() to get the current chat buffer (the subset that still fits the token limit).
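Not the real llama_index implementation, but a toy stand-in sketching the get() vs. get_all() distinction and why a token-limited buffer can silently drop older turns (word count used as a crude stand-in for tokens; all names here are hypothetical):

```python
class ToyChatMemory:
    """Toy chat memory: stores everything, but get() only returns
    the newest messages that fit a token budget (not llama_index code)."""

    def __init__(self, token_limit=8):
        self.token_limit = token_limit
        self.history = []

    def put(self, message):
        self.history.append(message)

    def get_all(self):
        # Full chat history, regardless of size.
        return list(self.history)

    def get(self):
        # Current buffer: walk backwards from the newest message,
        # keeping turns until the (word-count) budget is exhausted.
        buf, used = [], 0
        for msg in reversed(self.history):
            cost = len(msg.split())
            if used + cost > self.token_limit:
                break
            buf.append(msg)
            used += cost
        return list(reversed(buf))


mem = ToyChatMemory(token_limit=8)
for turn in ["hello there",
             "tell me a long story about llamas please",
             "ok thanks",
             "what did I first say?"]:
    mem.put(turn)

print(len(mem.get_all()))  # 4 -- every turn is stored
print(mem.get())           # only the recent turns that fit the budget
```

If the agent's responses suggest earlier turns were forgotten, comparing the two calls like this is a quick way to tell whether the history was never stored at all, or stored but trimmed out of the active buffer.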