I ran into CUDA OOM error when running Gemma 7b on Ollama
victor · 9 months ago
I ran into a CUDA OOM error when running Gemma 7B on Ollama. My GPU has 8 GB of memory and can run Llama 2 7B and Mistral 7B without issues. I could run Gemma 7B in the Ollama CLI, though, just not via LlamaIndex in a RAG app.
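For context, the setup being described probably looks something like this minimal sketch, assuming llama-index >= 0.10 with the llama-index-llms-ollama and llama-index-embeddings-huggingface packages installed; the embedding model, data directory, and query here are illustrative, not taken from the post:

```python
# Minimal txt-file RAG app over a local Ollama server, sketched from the
# description above; assumes `ollama pull gemma:7b` has already been run.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Route generation through the locally hosted Ollama model.
Settings.llm = Ollama(model="gemma:7b", request_timeout=120.0)
# Use a local embedding model so indexing does not call OpenAI by default.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = SimpleDirectoryReader("data").load_data()  # e.g. a folder of .txt files
index = VectorStoreIndex.from_documents(documents)
response = index.as_query_engine().query("What does the document say?")
print(response)
```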
2 comments
WhiteFang_Jr · 9 months ago
With Ollama in LlamaIndex, you interact with your hosted LLM.
Were you facing an issue while interacting via LlamaIndex?
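For reference, a minimal sketch of that interaction, under the same package assumptions as above: calling the Ollama-hosted model directly, outside the RAG pipeline, can help isolate whether the OOM comes from the model itself or from the rest of the app.

```python
# Direct call to the Ollama-hosted model via LlamaIndex; no indexing
# or retrieval involved, so only the LLM's own GPU footprint is in play.
from llama_index.llms.ollama import Ollama

llm = Ollama(model="gemma:7b", request_timeout=120.0)
print(llm.complete("Say hello in one sentence."))
```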
victor · 9 months ago
Yeah. I was building a simple txt-file RAG app. When I ran Gemma 7B, it hit the CUDA out-of-memory error.
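One possible culprit, offered as an assumption rather than a confirmed diagnosis: if the local embedding model lands on the same GPU as the Ollama-served Gemma 7B, the combined footprint can exceed 8 GB even when the CLI alone runs fine. Pinning embeddings to the CPU is one way to test that; the model name below is illustrative.

```python
# Hypothetical debugging step, not from the thread: keep the embedding
# model off the GPU so Gemma 7B has the card to itself.
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",  # illustrative model choice
    device="cpu",  # force embedding inference onto the CPU
)
```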