Does PersistentClient save data to GPU VRAM?

At a glance

The community member asks whether loading an index from a persistent ChromaDB vector store loads the data into the GPU's VRAM. The comments suggest that the index is saved to disk, not VRAM, and is loaded into regular RAM when needed. However, there is no definitive answer: one community member says it depends on how ChromaDB is implemented, and they are not sure whether it uses VRAM or just regular RAM.

I have a question: when loading an index from a persistent ChromaDB vector store, is it loaded into the VRAM of the GPU?
5 comments
No, when you use persist to save the index, it's saved to your disk somewhere so you don't need to build it again. VRAM is volatile memory, so once you turn off your computer, the data would be gone.
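For reference, a minimal sketch of the persist step being described, assuming only that chromadb is installed; the path, collection name, and toy data here are illustrative, not from the thread:

Python
import chromadb

# PersistentClient writes the collection to the given directory on disk,
# so the data survives a restart; nothing in this step touches GPU memory.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("my_docs")
collection.add(
    ids=["doc-1"],
    documents=["hello world"],
    embeddings=[[0.1, 0.2, 0.3]],
)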
No, I meant after persisting it to disk and loading it again, does it get loaded into VRAM or RAM?
Python
import chromadb
from llama_index.core import Settings, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama
from llama_index.vector_stores.chroma import ChromaVectorStore

# Re-open the collection that was already persisted to disk
chroma_client = chromadb.PersistentClient(path=chroma_persistent_dir)
chroma_collection = chroma_client.get_collection(collection_name)
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
llm = Ollama(model="llama3:70b-instruct", request_timeout=3000.0)
embed_model = HuggingFaceEmbedding(model_name="Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True, embed_batch_size=2)
Settings.llm = llm
Settings.embed_model = embed_model
index = VectorStoreIndex.from_vector_store(vector_store, embed_model=embed_model)

The ChromaDB collection is already saved to disk; all I'm doing is loading the collection and creating an index with VectorStoreIndex. I wanted to know whether it actually loads the index into the GPU's VRAM, since I don't have a lot of VRAM left after loading a big model plus an embedding model.
@Logan M do you have any insight about this, please?
I have no idea; it depends on how Chroma is implemented. I'm not even sure it uses VRAM at all, just normal RAM I think.
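One way to settle this empirically is to snapshot process RAM and PyTorch-tracked VRAM before and after loading and querying the collection. A minimal sketch, assuming psutil and torch are installed and reusing chroma_persistent_dir and collection_name from the snippet above; note that torch.cuda.memory_allocated only sees PyTorch's own allocations, so a zero VRAM delta here is consistent with Chroma living entirely in system RAM (nvidia-smi gives a global view):

Python
import chromadb
import psutil
import torch

def snapshot():
    # Process resident set size (system RAM) and PyTorch-allocated
    # CUDA memory (VRAM), both in MB.
    ram_mb = psutil.Process().memory_info().rss / 1e6
    vram_mb = torch.cuda.memory_allocated() / 1e6 if torch.cuda.is_available() else 0.0
    return ram_mb, vram_mb

ram0, vram0 = snapshot()
client = chromadb.PersistentClient(path=chroma_persistent_dir)
collection = client.get_collection(collection_name)
# Run one query so Chroma actually reads the stored vectors; 1024 matches
# the embedding dimension of gte-large-en-v1.5 (adjust for your collection).
collection.query(query_embeddings=[[0.0] * 1024], n_results=5)
ram1, vram1 = snapshot()
print(f"RAM delta:  {ram1 - ram0:.1f} MB")
print(f"VRAM delta: {vram1 - vram0:.1f} MB")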