Hi everyone -- hope I'm asking this in the correct place. Just getting started with llamaindex. I have an M1 Air with a ~1 GB index built from about 50 megs of YAML content. I'm finding that load_index_from_storage is VERY slow. Curious if this is expected or if I should look into using a different machine. I have a Windows gaming box with a 3080 that might be faster.
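For reference, the load path I'm describing is just the stock local-storage flow, roughly like this (./storage is a placeholder for my actual persist dir):
Python
from llama_index import StorageContext, load_index_from_storage

# rebuild the storage context from the persisted directory
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# loads the entire persisted index (JSON files by default) back into memory
index = load_index_from_storage(storage_context)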
That's pretty expected (and it doesn't have anything to do with your GPU)

I would suggest using a vector db integration like qdrant to make this more efficient
Ty, will take a look!
I guess I will load it, then move it over into the vector db
A little tricky to move it to the vector db, but it is possible!

lemme find the example
Python
from llama_index import VectorStoreIndex, load_index_from_storage

# load the existing index from local storage
index = load_index_from_storage(...)

# get the nodes and their embeddings
nodes = index.docstore.docs
embeddings = index.vector_store._data.embedding_dict

# attach the embeddings to the nodes
nodes_with_embeddings = []
for node_id, node in nodes.items():
    node.embedding = embeddings[node_id]
    nodes_with_embeddings.append(node)

# create a new index with the new backend (e.g. qdrant, chroma, weaviate)
# storage_context here should wrap the new vector store
vector_index = VectorStoreIndex(nodes_with_embeddings, storage_context=storage_context)
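To fill in the storage_context used on the last line, a minimal qdrant setup would look roughly like this (the URL and collection name are just examples):
Python
import qdrant_client
from llama_index import StorageContext
from llama_index.vector_stores import QdrantVectorStore

# connect to a running qdrant instance (URL and collection name are placeholders)
client = qdrant_client.QdrantClient(url="http://localhost:6333")
vector_store = QdrantVectorStore(client=client, collection_name="yaml_docs")

# point the storage context at the new vector store backend
storage_context = StorageContext.from_defaults(vector_store=vector_store)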
Thank you!! I spent some time trying to figure this out yesterday and gave up. Ty so much!