
Updated 4 months ago

Issue with using Gemini as chat_engine


Code:
Plain Text
import streamlit as st
from llama_index.llms.gemini import Gemini

# `index` is assumed to be a previously built LlamaIndex index
if "chat_engine" not in st.session_state.keys():  # Initialize the chat engine
    print("Chat Engine Created")
    st.session_state.chat_engine = index.as_chat_engine(chat_mode="best", llm=Gemini(), verbose=True)

if st.session_state.messages[-1]["role"] != "assistant":
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            response = st.session_state.chat_engine.chat(prompt)
            print(response)
            st.write(response.response)
            message = {"role": "assistant", "content": response.response}
            st.session_state.messages.append(message) # Add response to message history
32 comments
Error code:
Attachment: image.png
The same code works when I change the llm to OpenAI().
It seems like Gemini couldn't locate the top_candidate
@Logan M Would appreciate any pointer.
No idea on that πŸ€” I also can't test Gemini. Probably requires some debugging
Very strange, it was working a couple of days ago, I think.
Maybe run pip install -U llama-index-llms-gemini?
Ok, just ran that now, it upgraded to llama-index-llms-gemini-0.1.5
It helped a little bit but ran into another error.
@Logan M Any chance you can take a look at it? I can DM you my Gemini Key so you can see if we can recreate the issue.
I can't even use it because it's banned in Canada lol

My VPN used to work, but now it doesn't
Gemini is banned in Canada???
I might be able to fix that though
It is hahahaha
Google hates us for some reason I guess. The government was trying to get them to pay up for news or something, and now it's a vendetta
Any idea how we can fix this bug?
I was thinking maybe this has something to do with the message being a dictionary rather than a ChatCompletionMessage object.
I remember running into that issue a while ago.
Seems like we just need to update these dicts (although it's weird that it's passing MODEL to ROLES_TO_GEMINI); could be some underlying issue
Attachment
image.png
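For context, the kind of role-mapping failure discussed above can be sketched in a few lines: LlamaIndex-style chat messages carry roles like user/assistant/system, while the Gemini API only understands "user" and "model". If the mapping dict is missing an entry for a role that gets passed in (such as MODEL), the lookup raises a KeyError. The dict contents below are an illustrative assumption, not the library's actual source:

```python
# Hypothetical sketch of mapping chat roles onto Gemini's two roles.
# The name mirrors the ROLES_TO_GEMINI mentioned above, but the exact
# contents here are an assumption, not the real library code.
ROLES_TO_GEMINI = {
    "user": "user",
    "assistant": "model",
    "system": "user",  # Gemini has no system role; fold it into "user"
}

def to_gemini_role(role: str) -> str:
    # A role with no entry (e.g. "model" itself being passed back in)
    # raises KeyError, the kind of failure discussed in this thread.
    return ROLES_TO_GEMINI[role]
```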
Ok, I assume other people will have issues like mine as well. In that case, are you going to make a new release of llama-index-llms-gemini?
If a PR merges, ya
Would you like me to create a Github Issue?
Nah, it's fine; give me a few minutes though
No rush, thanks a lot.
Two other questions I have are around context_prompt and the system message:
  1. Plain Text
    st.session_state.chat_engine = index.as_chat_engine(chat_mode = "best", llm = Settings.llm, context_prompt = (system_prompt), verbose=True)
So the thought is to pass a system prompt such as "You are a Virtual Assistant", etc.
Am I doing this correctly? This doesn't give me any error, but I don't know if the context_prompt is working right now, given that the responses coming back don't seem to reflect it.

  2. Plain Text
    st.session_state.messages.append({"role": "system", "content": f'''
     # Here is the document you have to process......
     # '''})  
So the thought here is to pass in some documents to process as a system message in addition to the user prompt. This currently gives an error, since messages can only have two roles (user, assistant). In that case, should I just use user instead?
  1. Ah, the react agent doesn't technically support setting a system prompt; it's a bit more complicated. You have to edit the react prompt header, essentially.
I think this guide works with react agents
https://docs.llamaindex.ai/en/stable/module_guides/models/prompts/usage_pattern.html#getting-and-setting-custom-prompts

Plain Text
chat_engine.get_prompts()
chat_engine.update_prompts(...)
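The get/update pattern above can be sketched with a tiny stand-in class. This mock is purely illustrative (the prompt key "agent_worker:system_prompt" is an assumption; the real keys for your engine should be discovered via chat_engine.get_prompts()):

```python
# Minimal stand-in for the get_prompts()/update_prompts() pattern from the
# linked guide. This is an illustrative mock, not LlamaIndex's implementation.
class MockChatEngine:
    def __init__(self):
        # The key name here is a hypothetical example
        self._prompts = {"agent_worker:system_prompt": "You are a helpful agent."}

    def get_prompts(self):
        # Return a copy so callers can inspect keys without mutating state
        return dict(self._prompts)

    def update_prompts(self, prompts_dict):
        # Overwrite only the keys the caller supplies
        self._prompts.update(prompts_dict)

engine = MockChatEngine()
print(list(engine.get_prompts()))  # inspect the available prompt keys first
engine.update_prompts({"agent_worker:system_prompt": "You are a Virtual Assistant."})
```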

  2. Tbh, under the hood it should be merging neighboring messages. It's converting the system prompt to a user message and then erroring out.
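The merging behavior described here can be sketched as follows: fold any system message into a user message, then merge consecutive messages that share a role. The function name and details are illustrative, not the library's internals:

```python
def merge_neighboring_messages(messages):
    """Illustrative sketch: convert system messages to user messages,
    then merge consecutive messages with the same role."""
    merged = []
    for msg in messages:
        # Gemini-style APIs have no system role, so treat it as "user"
        role = "user" if msg["role"] == "system" else msg["role"]
        if merged and merged[-1]["role"] == role:
            # Same role as the previous message: concatenate the contents
            merged[-1]["content"] += "\n" + msg["content"]
        else:
            merged.append({"role": role, "content": msg["content"]})
    return merged
```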
Thanks for the info, I will give it a try.
Let me know if you are able to make the update for llama-index-llms-gemini
Try pip install -U llama-index-core; it should probably be working now
It's working now, thanks a lot for all the help!
@Logan M