Updated 2 months ago

Does ConversationSummaryBufferMemory work with the new changes?
I have something like
Plain Text
# import paths assumed from the era (langchain 0.0.x / llama-index 0.8.x):
from llama_index import LLMPredictor
from llama_index.llms import OpenAI  # llama-index's OpenAI, not langchain's
from langchain.memory import ConversationSummaryBufferMemory

llm = OpenAI(model=model, temperature=0)
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model=model))

memory = ConversationSummaryBufferMemory(
    memory_key="memory",
    return_messages=True,
    llm=llm,  # a llama-index LLM handed to a langchain memory class
    max_token_limit=29000 if "gpt-4" in model else 7500,
)

But I can't run this, I get the error
Plain Text
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/discord/commands/core.py", line 124, in wrapped
    ret = await coro(arg)
  File "/usr/local/lib/python3.9/dist-packages/discord/commands/core.py", line 978, in _invoke
    await self.callback(self.cog, ctx, **kwargs)
  File "/home/kaveen/GPTDiscord/cogs/commands.py", line 755, in talk
    await self.index_cog.index_chat_command(ctx, model)
  File "/home/kaveen/GPTDiscord/cogs/index_service_cog.py", line 212, in index_chat_command
    await self.index_handler.start_index_chat(ctx, model)
  File "/home/kaveen/GPTDiscord/models/index_model.py", line 488, in start_index_chat
    memory = ConversationSummaryBufferMemory(
  File "/usr/local/lib/python3.9/dist-packages/langchain/load/serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "/usr/local/lib/python3.9/dist-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationSummaryBufferMemory
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
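The last line of the traceback is Python's standard abstract-class error surfacing through pydantic v1's field validation: the llama-index `OpenAI` object is not an instance of langchain's `BaseLanguageModel`, so validation falls through to attempting the abstract base itself. The two facts at play can be shown with a stdlib-only sketch (the class names below are stand-ins, not the real library classes):

```python
from abc import ABC, abstractmethod

class BaseLanguageModel(ABC):  # stand-in for langchain's abstract base
    @abstractmethod
    def predict(self, text: str) -> str: ...

class LlamaIndexLLM:  # stand-in: NOT a subclass of the langchain base
    def predict(self, text: str) -> str:
        return "hi"

# the isinstance check fails because the class trees are unrelated,
# even though the duck-typed interface looks compatible:
print(isinstance(LlamaIndexLLM(), BaseLanguageModel))  # False

# and instantiating the abstract base directly raises the same kind of
# TypeError that pydantic wraps into the ValidationError above:
try:
    BaseLanguageModel()
except TypeError as e:
    print(type(e).__name__)  # TypeError
```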
18 comments
Can you check your Pydantic version once?
Or try once with a pinned Pydantic version: pip install pydantic==1.10.12
yeah still doesn't work
It looks like an issue with langchain
Oh I checked, ConversationSummaryBufferMemory is from langchain. Yes you could be right.

You could try once with lower langchain version
yeah I think I'll just wait for llama-index and langchain to be out of the RC phase and for more of the components to be brought up to date, etc
thank u for the help tho!
tbh I'm surprised if this ever worked before? Seems like you are mixing llama-index LLMs with langchain

But yea, seems like a langchain issue πŸ˜…
Ah yeah it seemed to work pretty well before, I thought llama-index's LLM was just an abstraction over langchain's or something
It used to be! But a few months back we were tired of depending on langchain and made our own πŸ™‚
so should I be using the langchain llm to use the summary buffer memory? or does llama index provide something similar?
I guess the lines between langchain and llama-index are becoming more blurred, there's a lot of overlap now
idk whether to use langchain or llama-index for agents now
especially now that llama-index has an assistant agent and langchain doesn't
Yea there's definitely some overlap. I'm pretty sure langchain wasn't nice enough to make our LLMs work with their memory objects πŸ™„ You'd have to implement it as a custom LLM in langchain I think? Very annoying

But on that note if you wanted to add a summary memory buffer to llama-index, that would be super cool 😎 Right now we just have a basic buffer
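The summary-buffer idea itself is small: keep the most recent messages verbatim under a token budget, and fold anything older into a running summary. A stdlib-only sketch of the concept (every name here is illustrative, not llama-index or langchain API; `summarize` and `count_tokens` would be an LLM call and a real tokenizer in practice):

```python
def count_tokens(text: str) -> int:
    # crude stand-in for a real tokenizer
    return len(text.split())

def summarize(summary: str, dropped: list[str]) -> str:
    # stand-in for an LLM call that folds old messages into the summary
    return (summary + " | " + " / ".join(dropped)).strip(" |")

class SummaryBufferMemory:
    def __init__(self, max_token_limit: int):
        self.max_token_limit = max_token_limit
        self.summary = ""
        self.buffer = []  # recent messages, kept verbatim

    def add_message(self, message: str) -> None:
        self.buffer.append(message)
        self._prune()

    def _prune(self) -> None:
        # pop the oldest messages into the summary until under budget
        dropped = []
        while sum(count_tokens(m) for m in self.buffer) > self.max_token_limit:
            dropped.append(self.buffer.pop(0))
        if dropped:
            self.summary = summarize(self.summary, dropped)

    def context(self) -> str:
        # what you'd prepend to the next LLM call
        return "\n".join(filter(None, [self.summary, *self.buffer]))

mem = SummaryBufferMemory(max_token_limit=5)
mem.add_message("hello there bot")
mem.add_message("please summarize this long conversation")
print(mem.buffer)   # ['please summarize this long conversation']
print(mem.summary)  # hello there bot
```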
Will look into it for sure!
In the meantime I want to keep using the langchain summary buffer though. I should be able to use the langchain LLM with that memory and still pass it in when used in conjunction with a llama-index index, right? That functionality hasn't changed?
or actually nvm, I think I'm getting a bit muddled between langchain and llama-index; I don't think I even use a direct llama-index conversation chain, instead I create a langchain agent and then add the llama-index index as a tool to the agent