@jerryjliu0 where can I find documentation on how the memory module works? Not the examples, but what it's doing under the hood: is it just a list index that keeps growing, which we then walk through and summarize down to e.g. 3k tokens?
I was wondering if anyone has used an implementation of GPT Index to give the ChatGPT API a context longer than 4k tokens via the tree or list index. Also, is there a way to keep the cumulative summary of a list index under a certain number of tokens, so that it fits within the ChatGPT API's context limit?
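For what it's worth, here is a rough sketch of the behavior I mean, a rolling summary that gets re-compacted whenever it would exceed a fixed token budget. This is not the actual GPT Index memory implementation: the class name is made up, token counting is approximated by whitespace-split word count, and the truncating `summarize` function is a stand-in for a real LLM summarization call.

```python
def num_tokens(text: str) -> int:
    # Crude token estimate: count whitespace-separated words.
    # A real implementation would use the model's tokenizer.
    return len(text.split())


def summarize(text: str, max_tokens: int) -> str:
    # Stand-in for an LLM summarization call: just keep the most
    # recent tokens instead of producing an actual summary.
    return " ".join(text.split()[-max_tokens:])


class BoundedSummaryMemory:
    """Hypothetical rolling memory: append entries, and compact the
    cumulative summary whenever it grows past a token budget."""

    def __init__(self, max_tokens: int = 3000):
        self.max_tokens = max_tokens
        self.summary = ""

    def add(self, entry: str) -> None:
        combined = (self.summary + " " + entry).strip()
        if num_tokens(combined) > self.max_tokens:
            # Re-summarize so the memory never exceeds the budget,
            # e.g. to stay inside the ChatGPT API's 4k context.
            combined = summarize(combined, self.max_tokens)
        self.summary = combined


# Usage: a tiny budget so the compaction is easy to see.
mem = BoundedSummaryMemory(max_tokens=10)
for msg in ["alpha beta gamma delta",
            "one two three four five",
            "six seven eight nine ten"]:
    mem.add(msg)
```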