Hi, I am creating an index but my session keeps crashing

Hi, I am creating an index but my session keeps crashing. Can anyone help me figure out the problem?

Here is the embedding model I am using:
Plain Text
# Embedding Model Setting
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.embed_model = HuggingFaceEmbedding(
    model_name="jinaai/jina-embeddings-v2-base-es"
)

This model outputs 768-dimensional embeddings.
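For reference, a quick way to confirm the dimensionality (assuming the embed model above is already set on Settings):
Plain Text
from llama_index.core import Settings

# Sanity check: a single embedding from this model should have length 768.
vector = Settings.embed_model.get_text_embedding("hola mundo")
print(len(vector))  # expected: 768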

Here is how I am creating the QdrantVectorStore index:
Plain Text
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

vector_store = QdrantVectorStore(client=QDRANT_CLIENT, collection_name=collection_name)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)
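
For context, QDRANT_CLIENT is assumed to be a qdrant_client.QdrantClient created elsewhere; a minimal sketch of how such a client might be constructed (the URL is a placeholder):
Plain Text
from qdrant_client import QdrantClient

# Hypothetical client setup; point the URL at your own Qdrant instance.
QDRANT_CLIENT = QdrantClient(url="http://localhost:6333")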

But the session crashes.
3 comments
You can pass the embed_batch_size like this:
https://github.com/run-llama/llama_index/blob/4394c7f11e907c4a7c9926ae98eb53e6d60a1619/llama-index-integrations/embeddings/llama-index-embeddings-huggingface/llama_index/embeddings/huggingface/base.py#L66

Plain Text
# Embedding Model Setting
Settings.embed_model = HuggingFaceEmbedding(
    model_name="jinaai/jina-embeddings-v2-base-es",
    embed_batch_size=50,  # default is 10
)
And what about inserting documents into an index? Is there batching when inserting documents?
You can convert them into nodes and then pass chunks of nodes together.

index.insert_nodes([list of nodes])

This will insert the nodes into your index. The default insert_batch_size value is 2048.
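
A minimal sketch of that workflow, assuming an empty index was created first (e.g. VectorStoreIndex([], storage_context=storage_context)) and using SentenceSplitter as the node parser; the batch size of 500 is just an example, not a library default:
Plain Text
from llama_index.core.node_parser import SentenceSplitter

# Parse the documents into nodes, then insert them in smaller batches.
nodes = SentenceSplitter().get_nodes_from_documents(documents)

batch_size = 500  # example value
for i in range(0, len(nodes), batch_size):
    index.insert_nodes(nodes[i : i + batch_size])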