Can you correct me if I am wrong: only the vector store index uses embeddings and a vector store when creating an index. But when querying a list index, we can also use an embedding-based query, and in that case the embeddings of the nodes are created at query time? Where else does LlamaIndex use embeddings? Not in a tree index or keyword table index? Also, when is it best to use the list index (when creating a synthesized answer?), when the vector index, when the tree index (summaries?), and when the keyword index? I'm not sure I understand the best practices. Sorry for the basic questions, I just want to understand.
1 comment
vector index == semantic retrieval based on similarity. Can be used with response_mode="tree_summarize" if you want
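A minimal sketch of the vector index path, assuming the `llama_index.core` import style and a local `data/` folder of documents (both are assumptions, adjust to your version and setup):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Embeddings are computed for every node at build time and stored in the vector store
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Retrieval is by embedding similarity; the retrieved nodes can still be
# synthesized with tree_summarize if you want a hierarchical summary of them
query_engine = index.as_query_engine(
    similarity_top_k=3,
    response_mode="tree_summarize",
)
print(query_engine.query("What does the document say about X?"))
```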

list index == good for queries that might need to read every node in an index. This is most useful for generating summaries with response_mode="tree_summarize"
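A sketch of the list index for summarization, again assuming the `llama_index.core` imports (the class is called SummaryIndex in recent versions, ListIndex in older ones). It also shows the embedding retriever mode from the question, where node embeddings are computed lazily at query time rather than at build time:

```python
from llama_index.core import SummaryIndex, SimpleDirectoryReader

# No embeddings are needed to build a list index; nodes are simply stored in order
documents = SimpleDirectoryReader("data").load_data()
index = SummaryIndex.from_documents(documents)

# Default retrieval reads every node, which is what you want for a summary
summary_engine = index.as_query_engine(response_mode="tree_summarize")
print(summary_engine.query("Summarize these documents."))

# Optional embedding-based retrieval: embeddings are computed at query time
embedding_engine = index.as_query_engine(retriever_mode="embedding", similarity_top_k=3)
print(embedding_engine.query("What does the document say about X?"))
```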

keyword index == just a basic keyword search. Sometimes people will combine this with the vector store index + reranking to create a hybrid search
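A sketch of the keyword index with the same assumed imports; SimpleKeywordTableIndex extracts keywords with a simple regex, so it needs no embeddings or LLM calls at build time (KeywordTableIndex would use the LLM for keyword extraction instead):

```python
from llama_index.core import SimpleKeywordTableIndex, SimpleDirectoryReader

# Keywords are extracted from each node at build time; no embeddings involved
documents = SimpleDirectoryReader("data").load_data()
index = SimpleKeywordTableIndex.from_documents(documents)

# Query keywords are matched against the keyword table to pick candidate nodes
query_engine = index.as_query_engine()
print(query_engine.query("What does the document say about X?"))
```

For the hybrid setup mentioned above, you would retrieve from both this index and a vector index, then merge and rerank the two node lists before synthesizing an answer.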

tree index == not really useful tbh, it's mostly deprecated at the moment