I’m a little confused on nodes vs. embeddings. I notice better search queries when using embeddings, but I’m wondering how LlamaIndex enhances retrieval when vector databases are implemented
What this means is that with LlamaIndex you can reduce the amount of information pulled from your data and also make sure the information is RELEVANT to the prompt
Because you are calculating the similarity (typically cosine similarity) between the embedding of your prompt and the embedding of each node in the vector database, which gets you back only the most relevant nodes (the top_k)
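To make that concrete, here's a minimal sketch of the top_k step with plain NumPy (not LlamaIndex's actual internals, just the idea): embed the prompt, score every node by cosine similarity, and keep the k highest. The toy 3-d vectors stand in for real embeddings.

```python
import numpy as np

def top_k_nodes(query_vec, node_vecs, k=2):
    """Return indices of the k nodes most similar to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    n = node_vecs / np.linalg.norm(node_vecs, axis=1, keepdims=True)
    sims = n @ q                        # cosine similarity per node
    return np.argsort(sims)[::-1][:k]   # highest-similarity nodes first

# toy 3-d "embeddings" for four nodes
nodes = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
query = np.array([1.0, 0.05, 0.0])
print(top_k_nodes(query, nodes, k=2))  # → [0 1]
```

Only those top_k nodes get stuffed into the LLM's context, which is how the retrieval stays both small and relevant.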