Logan cosplay

I’m a little confused on nodes vs. embeddings. I notice better search queries when using embeddings, but I’m wondering how LlamaIndex enhances retrieval when vector databases are implemented
11 comments
So basically, Nodes are text chunks of data @DangFutures
They can be arbitrarily sized based on your preference; a common approach is to make each sentence its own chunk/node of data
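A minimal sketch of that chunking step, using a naive regex sentence splitter rather than LlamaIndex's actual node parser (which also supports token-based chunk sizes and overlap):

```python
import re

def split_into_nodes(text):
    # Naive sentence-level chunking: split on ., !, or ? followed by whitespace.
    # Each resulting sentence becomes one "node" of data.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if s]

doc = "Nodes are text chunks. Embeddings are vectors. Retrieval compares them."
print(split_into_nodes(doc))
# → ['Nodes are text chunks.', 'Embeddings are vectors.', 'Retrieval compares them.']
```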
The way that embedding works is it converts each chunk into a vector representation
Then, when you send a prompt, your prompt is converted to a vector representation
Then, the distance (or similarity) between the two vectors is calculated, typically with cosine similarity or Euclidean distance
What this means is that using LlamaIndex, you can reduce the amount of data sent to the model while also making sure the information is RELEVANT to the prompt
Because you are computing the similarity between your prompt and each node in the vector database, you retrieve only the most relevant nodes (the top_k closest matches)
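Putting the pieces together, a toy top_k retrieval loop might look like this (a sketch of the idea, not LlamaIndex's actual retriever, which delegates this to the vector store):

```python
import math

def cosine_similarity(a, b):
    # Dot product over the product of vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve_top_k(query_vec, node_vecs, k=2):
    # Score every stored node against the query, then keep the k best matches.
    ranked = sorted(
        node_vecs.items(),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [node_id for node_id, _ in ranked[:k]]

nodes = {
    "node_a": [1.0, 0.0],  # points almost the same way as the query
    "node_b": [0.7, 0.7],  # somewhat similar
    "node_c": [0.0, 1.0],  # orthogonal, least relevant
}
print(retrieve_top_k([1.0, 0.1], nodes, k=2))  # → ['node_a', 'node_b']
```

Only those top_k nodes get passed to the LLM as context, which is how the relevant-data filtering happens.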
Does that help?
Holy moly thank you!!!
No problem @DangFutures