Hi @Logan M

I believe this issue is caused by an incompatibility between the 'google-generativeai' and 'llama-index-llms-gemini' libraries. Have you found a solution for this?

I've tried uninstalling and reinstalling them, but the problem persists. Do you know which specific versions work correctly with LlamaIndex?

Thanks in advance
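
In case it's useful, here's roughly how I'm checking the installed versions and reproducing the failure. This is just a minimal sketch, assuming GOOGLE_API_KEY is set in the environment; the prompt is arbitrary.

    import importlib.metadata as md

    # Print the versions of the two packages suspected of being incompatible.
    for pkg in ("google-generativeai", "llama-index-llms-gemini"):
        print(pkg, md.version(pkg))

    # Constructing the LlamaIndex Gemini LLM is where the error surfaces for me.
    # GOOGLE_API_KEY is assumed to be set in the environment.
    from llama_index.llms.gemini import Gemini

    llm = Gemini()
    print(llm.complete("Say hello"))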
Probably something in llama-index needs to be fixed? I don't have access to gemini to test though (banned in Canada lol)

PRs are welcome πŸ™‚

This code seems wild to me, I'm surprised this ever worked?
Attachment: image.png
I also live in Canada!
It's not banned! I used to use it with LlamaIndex
happy to point you in the right direction for a PR though, the code is here

https://github.com/run-llama/llama_index/blob/1d49e15f4b91f6e4b931d8ae42f69dc678ce8ee4/llama-index-integrations/llms/llama-index-llms-gemini/llama_index/llms/gemini/utils.py#L32-L62
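
One quick way to narrow it down: call google-generativeai directly and see if that works on its own. This is only a sketch, assuming your key is in GOOGLE_API_KEY, and the model name is just an example. If the raw SDK call succeeds, the breakage is almost certainly in the integration code linked above rather than in the SDK.

    import os
    import google.generativeai as genai

    # Configure the SDK with an API key (assumed to live in GOOGLE_API_KEY).
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    # If this raw SDK call works, the problem is most likely in the
    # llama-index-llms-gemini wrapper, not in google-generativeai itself.
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content("Say hello in one word.")
    print(response.text)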

Really? Last time I needed a VPN to use it, and even then it didn't work lol
It's weird!
I used it 2 days ago!
even Google AI Studio lets you use it for free in Canada
I just updated LlamaIndex and I believe that's what caused this issue
yea, probably the latest google-generativeai package is causing issues. I would try downgrading it or raising a PR to work with the latest
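
For the downgrade route, something along these lines; the pinned version below is only an example of the pattern, not a combination I've verified:

    # Upgrade both packages together so their declared dependency pins stay consistent...
    pip install -U llama-index-llms-gemini google-generativeai

    # ...or pin google-generativeai back to an older release that worked for you
    # (0.4.1 is just an example version, not a known-good pin)
    pip install "google-generativeai==0.4.1"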
Hey Logan,

just to update you!
Gemini() worked after creating a new env and installing LlamaIndex from scratch.
I think there was some bug after updating LlamaIndex in my previous env.

Also, both Gemini Flash and Pro work in Canada, and they're free for a certain number of requests per day (around 1,500 requests per day, at 15 requests per minute, for Gemini Flash)
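
For anyone else who hits this, the fresh environment was roughly this (package names as above, nothing special):

    # Create a clean virtual environment and reinstall from scratch.
    python -m venv gemini-env
    source gemini-env/bin/activate
    pip install llama-index llama-index-llms-gemini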