Updated 8 months ago

@kapa.ai I create a list of TextNode's, and then created an index using VectorStoreIndex, but the TextNode.embedding is empty. Where/when is the embedding created?
Please stand by @node0 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
The VectorStoreIndex in LlamaIndex is responsible for creating the embeddings for your text. When you build an index from documents, VectorStoreIndex splits them into nodes; if you pass it nodes directly, it uses them as-is. It then creates a vector embedding for the text of every node, ready to be queried by an LLM. This is what is meant when we say it "embeds your text".
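To make the "when" concrete, here is a toy sketch in plain Python (not the real LlamaIndex API): the point is that embedding happens at index construction time, with one vector produced per node and stored in the index's vector store.

```python
def toy_embed(text: str) -> list[float]:
    # Stand-in for a real embedding model call (e.g. an API request).
    return [float(len(text)), float(sum(map(ord, text)) % 97)]

class ToyVectorIndex:
    """Illustrative only: embeds every node's text when the index is built."""

    def __init__(self, node_texts: list[str]):
        # Embeddings are created here, at construction time,
        # and kept in the index's vector store keyed by node position.
        self.vector_store = {
            i: toy_embed(text) for i, text in enumerate(node_texts)
        }

index = ToyVectorIndex(["The Shawshank Redemption", "The Godfather"])
print(len(index.vector_store))  # → 2, one vector per node
```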

The process of generating embeddings involves calling your embedding model's API (not the LLM itself) and can take a long time if you have a lot of text, since it involves many round-trip API calls.
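The round-trip cost scales with the number of batched requests. The sketch below (plain Python, not LlamaIndex code) just shows the batching arithmetic: if each batch is one API call, the call count is the text count divided by the batch size, rounded up.

```python
def batches(items: list, batch_size: int):
    # Yield successive slices of at most batch_size items;
    # each slice stands in for one round-trip embedding API call.
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

texts = [f"node {i}" for i in range(25)]
calls = list(batches(texts, batch_size=10))
print(len(calls))  # → 3 API calls for 25 texts at batch size 10
```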

Here is an example of how you might create a VectorStoreIndex:

Python
from llama_index.core.schema import TextNode
from llama_index.core import VectorStoreIndex, StorageContext

nodes = [
    TextNode(
        text="The Shawshank Redemption",
        metadata={
            "author": "Stephen King",
            "theme": "Friendship",
        },
    ),
    # ... more nodes ...
]

# vector_store and embed_model are assumed to be configured beforehand,
# e.g. a vector store client and an embedding model instance.
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(nodes, storage_context=storage_context, embed_model=embed_model)


In this example, the VectorStoreIndex is created from a list of TextNode objects, a StorageContext, and an embedding model (embed_model). The VectorStoreIndex will use the embedding model to generate an embedding for the text of each node.
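As for "where" the embedding ends up: one plausible explanation for your observation (sketched below as a toy, explicitly not a statement of LlamaIndex internals) is that the vectors are written into the vector store keyed by node identity, while the node objects you originally constructed are left untouched, so their embedding attribute can still read as empty.

```python
class ToyNode:
    # Toy stand-in for TextNode: embedding starts out unset.
    def __init__(self, text: str):
        self.text = text
        self.embedding = None

def toy_embed(text: str) -> list[float]:
    # Stand-in for a real embedding model call.
    return [float(len(text))]

def build_index(nodes: list[ToyNode]) -> dict:
    # Vectors go into a separate store keyed by node identity;
    # the input node objects themselves are not mutated.
    return {id(n): toy_embed(n.text) for n in nodes}

node = ToyNode("The Shawshank Redemption")
store = build_index([node])
print(node.embedding)   # → None: the node itself was not updated
print(store[id(node)])  # → [24.0]: the vector lives in the store
```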

For more details, you can refer to the LlamaIndex documentation.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------