Hello there! How can I use llama_index?
Sanadh'eL · 2 years ago
Hello there! How can I use llama_index with GPU?
Logan M · 2 years ago
You'll want to use a local LLM and a local embedding model. You'll need at least 15GB of VRAM, though.
Check out the GPU section of this notebook:
https://colab.research.google.com/drive/16QMQePkONNlDpgiltOi7oRQgmB8dU5fl?usp=sharing
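As a rough illustration of that setup, here is a minimal sketch of pointing llama_index at a local HuggingFace LLM and a local embedding model on the GPU. The import paths, model names, and Settings-based configuration below are assumptions that vary by llama_index version; treat the linked notebook as the authoritative reference.

```python
# Minimal sketch: llama_index with a local LLM + local embedding model on GPU.
# Import paths and model names are assumptions and differ across llama_index
# versions; check the notebook linked above for the exact setup.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Local LLM loaded onto the GPU; device_map="auto" lets transformers place it.
# A 7B model in fp16 is roughly where the ~15GB VRAM figure comes from.
Settings.llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",       # example model, not prescribed by the thread
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
    context_window=3900,
    max_new_tokens=256,
    device_map="auto",
)

# Local embedding model, also run on the GPU.
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",             # example embedding model
    device="cuda",
)

# Build an index over local documents and query it; all inference stays local.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What is this document about?"))
```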
Sanadh'eL · 2 years ago
Thanks, I will read.