Trying to use Gemini with my reply chain

Trying to use Gemini with my reply chain function, which works with GPT, but Gemini keeps spitting out "An error occurred: <MessageRole.MODEL: 'model'>".
Plain Text
async def fetch_reply_chain(message, max_tokens=4096):
    context = []
    tokens_used = 0
    # Rough heuristic: ~4 characters per token; reserve room for the current prompt.
    current_prompt_tokens = len(message.content) // 4
    max_tokens -= current_prompt_tokens
    while message.reference is not None and tokens_used < max_tokens:
        try:
            # Walk up the reply chain one referenced message at a time.
            message = await message.channel.fetch_message(message.reference.message_id)
            role = Role.MODEL if message.author.bot else Role.USER
            message_content = f"{message.content}\n"
            message_tokens = len(message_content) // 4
            if tokens_used + message_tokens <= max_tokens:
                context.append(HistoryChatMessage(message_content, role))
                tokens_used += message_tokens
            else:
                break
        except Exception as e:
            print(f"Error fetching reply chain message: {e}")
            break
    # Reverse so the oldest message comes first.
    return context[::-1]
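For context, HistoryChatMessage and Role aren't names llama-index exports; they look like the poster's own thin wrappers. A minimal sketch of what they might be, assuming they wrap llama-index's ChatMessage and MessageRole (hypothetical definitions, not from the original post):
Plain Text
# Hypothetical reconstructions of the helpers used above. llama-index itself
# exposes ChatMessage and MessageRole in llama_index.core.llms.
from llama_index.core.llms import ChatMessage, MessageRole

Role = MessageRole  # alias assumed by the snippet above

def HistoryChatMessage(content, role):
    # ChatMessage takes the role first, so wrap it to accept (content, role).
    return ChatMessage(role=role, content=content)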

I am trying to set custom chat history via:
Plain Text
memory = ChatMemoryBuffer.from_defaults(token_limit=8192)
context = await fetch_reply_chain(message)
memory.set(context + [HistoryChatMessage(f"{content}", Role.USER)])
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",
    similarity_top_k=2,
    sparse_top_k=12,
    vector_store_query_mode="hybrid",
    memory=memory,
    # ... (rest of the call truncated in the original post)
)
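For completeness, the engine's synchronous chat call is then pushed off the event loop, matching the call site in the traceback further down (this runs inside the async message handler):
Plain Text
# Invoke the blocking chat() in a worker thread so the bot's event loop
# stays responsive (same call site as GPT.py line 58 in the traceback).
chat_response = await asyncio.to_thread(chat_engine.chat, content)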
I would think that setting the bot's role to MODEL would solve this
but it doesn't seem to
(Yes, this is with the latest updates for Gemini via llama-index)
Is there a full traceback? Or nah?
Plain Text
An error occurred: <MessageRole.MODEL: 'model'>
Traceback (most recent call last):
  File "d:\Documents\GitHub\FrogBot\modules\utils\GPT.py", line 58, in process_message_with_llm
    chat_response = await asyncio.to_thread(chat_engine.chat, content)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\thecr\AppData\Local\Programs\Python\Python311\Lib\asyncio\threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\thecr\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Documents\GitHub\FrogBot\.venv\Lib\site-packages\llama_index\core\callbacks\utils.py", line 41, in wrapper
    return func(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Documents\GitHub\FrogBot\.venv\Lib\site-packages\llama_index\core\chat_engine\condense_plus_context.py", line 291, in chat
    chat_response = self._llm.chat(chat_messages)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Documents\GitHub\FrogBot\.venv\Lib\site-packages\llama_index\core\llms\callbacks.py", line 93, in wrapped_llm_chat
    f_return_val = f(_self, messages, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Documents\GitHub\FrogBot\.venv\Lib\site-packages\llama_index\llms\gemini\base.py", line 159, in chat
    merged_messages = merge_neighboring_same_role_messages(messages)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Documents\GitHub\FrogBot\.venv\Lib\site-packages\llama_index\core\utilities\gemini_utils.py", line 36, in merge_neighboring_same_role_messages
    and ROLES_TO_GEMINI[messages[i + 1].role]
        ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
KeyError: <MessageRole.MODEL: 'model'>
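That KeyError pins it down: the ROLES_TO_GEMINI table in the installed gemini_utils has no entry for MessageRole.MODEL, so any history message tagged MODEL fails the role lookup before it ever reaches Gemini. A workaround sketch for older releases, assuming ASSISTANT is a role that table does translate to Gemini's "model" side (an assumption, not verified against every version):
Plain Text
from llama_index.core.llms import MessageRole

# Workaround sketch: tag bot messages ASSISTANT instead of MODEL so the
# role lookup succeeds (assumes ASSISTANT is present in ROLES_TO_GEMINI).
role = MessageRole.ASSISTANT if message.author.bot else MessageRole.USER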
I actually fixed this lol
I tried updating today
pip install -U llama-index-core
oh nice! when did .18 drop?
Lol this morning
Been going hard on releases
So many PRs lately
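For anyone landing here later: after upgrading, you can confirm which core you're actually running with a quick check (assuming your build exposes __version__, as recent llama-index-core releases do):
Plain Text
import llama_index.core

# Should report the updated release after pip install -U llama-index-core.
print(llama_index.core.__version__)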