
Hello there! How can I use llama_index with a GPU?
You'll want to use a local LLM and a local embedding model. You'll need at least 15GB of VRAM, though.

Check out the GPU section of this notebook:
https://colab.research.google.com/drive/16QMQePkONNlDpgiltOi7oRQgmB8dU5fl?usp=sharing
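For reference, here's a minimal sketch of the local setup (not from the notebook): it assumes the modular llama_index distribution with the `llama-index-llms-huggingface` and `llama-index-embeddings-huggingface` integrations installed, and the model names are just example placeholders.
```python
# Run llama_index fully locally so both the LLM and the embedding
# model execute on the GPU. Model names below are examples only.
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Local LLM; device_map="auto" places the weights on the GPU when one
# is available (this is where most of the ~15GB of VRAM goes).
Settings.llm = HuggingFaceLLM(
    model_name="mistralai/Mistral-7B-Instruct-v0.1",
    tokenizer_name="mistralai/Mistral-7B-Instruct-v0.1",
    context_window=4096,
    max_new_tokens=256,
    device_map="auto",
)

# Local embedding model, also placed on the GPU.
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    device="cuda",
)

# Standard llama_index flow: index some documents and query them.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What do these documents cover?"))
```
With `Settings` configured this way, everything built afterwards (indexes, query engines) uses the local GPU-hosted models instead of a remote API.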
Thanks, I'll give it a read.