The LLMCompilerAgentPack seems to have poor or no memory of the chat history when running it as a REPL (chat_repl). Has anyone else bumped into that?
1 comment
I haven't actually used this particular module, but depending on what it's storing in memory, it could fill up the available context window pretty quickly, I imagine.

You can use agent.memory.get_all() to get the full chat history, and agent.memory.get() to get the current chat buffer.
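
For example, here's a rough sketch of inspecting what the agent actually retains between turns. It assumes the pack exposes its agent via get_modules() under the key "agent" (attribute names and the import path may differ across llama_index versions), and that tools and llm are already defined.

```python
# Sketch: compare everything stored in memory vs. what fits in the chat buffer.
# Assumptions: import path and the "agent" module key may differ by version.
from llama_index.packs.agents_llm_compiler import LLMCompilerAgentPack

pack = LLMCompilerAgentPack(tools, llm=llm)
agent = pack.get_modules()["agent"]  # assumption: key name may vary

agent.chat("My name is Alice.")
agent.chat("What is my name?")

# Full chat history stored in memory, regardless of size
all_messages = agent.memory.get_all()

# Only the messages that fit within the buffer's token limit
buffered_messages = agent.memory.get()

print(f"{len(all_messages)} messages stored, {len(buffered_messages)} in the active buffer")
for msg in buffered_messages:
    print(msg.role, ":", str(msg.content)[:80])
```

If get_all() shows your earlier turns but get() doesn't, the history is being stored fine and the buffer's token limit is what's cutting it off.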