LlamaIndex's Index doesn't seem to be able to scratch the itch

LlamaIndex's Index doesn't seem to be able to scratch the itch.
ListIndex: good if you want to pull in the latest information once, but too costly because the whole text is dumped into the prompt.
VectorIndex: useless without an embedding API.
KeywordTableIndex: can't limit the number of retrieved nodes.

Given the above, I came up with the idea of using Elasticsearch to extract the N most relevant nodes and feed them into a ListIndex.
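To make the idea concrete, here is a minimal, self-contained sketch of the "retrieve N relevant nodes, then stuff them into one context" pattern. A toy term-overlap scorer stands in for Elasticsearch's BM25 ranking, and a prompt string stands in for the ListIndex; the function names and the scoring rule are illustrative, not real LlamaIndex or Elasticsearch APIs.

```python
def score(query: str, doc: str) -> int:
    # Count query terms that appear in the document
    # (a crude stand-in for BM25 relevance scoring).
    terms = set(query.lower().split())
    words = set(doc.lower().split())
    return len(terms & words)

def retrieve(query: str, docs: list[str], n: int) -> list[str]:
    # Rank all documents by score and keep the top N,
    # as Elasticsearch would for a full-text query.
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    return ranked[:n]

def build_prompt(query: str, nodes: list[str]) -> str:
    # Concatenate only the retrieved nodes into the context,
    # instead of dumping the whole corpus as ListIndex does.
    context = "\n".join(f"- {node}" for node in nodes)
    return f"Context:\n{context}\n\nAnswer the question: {query}"

docs = [
    "ListIndex dumps every node into the prompt.",
    "Elasticsearch ranks documents with BM25.",
    "VectorIndex needs an embedding model.",
]
query = "how does Elasticsearch rank documents"
prompt = build_prompt(query, retrieve(query, docs, 2))
```

Only the retrieval step changes when swapping the toy scorer for a real search backend; the stuffing step stays the same.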

What do you think of this approach? Does something like it already exist?
Let me know if you have any concerns.
2 comments
You can use local embeddings just fine: ServiceContext.from_defaults(embed_model="local:BAAI/bge-base-en-v1.5")
There are actually many options for embeddings:
https://docs.llamaindex.ai/en/stable/core_modules/model_modules/embeddings/modules.html

You can limit keyword results with a node postprocessor (like a reranker; I explained this earlier today, I think)

Generally, the ideal solution is something that combines all of these (e.g. in a router query engine or a sub-question query engine)
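The routing idea can be sketched in a few lines: pick a retrieval strategy per query. The rule below is a hypothetical keyword heuristic, not LlamaIndex's RouterQueryEngine (which uses an LLM selector); it only illustrates the shape of the pattern.

```python
def keyword_retrieve(query: str) -> str:
    # Stand-in for a keyword-table / full-text search engine.
    return f"keyword results for {query!r}"

def summary_retrieve(query: str) -> str:
    # Stand-in for a ListIndex-style engine that reads everything.
    return f"full-text summary for {query!r}"

def route(query: str) -> str:
    # Hypothetical routing rule: broad "summarize"-style questions go
    # to the summary engine, everything else to keyword search.
    if "summarize" in query.lower():
        return summary_retrieve(query)
    return keyword_retrieve(query)
```

In a real router the selection step would be delegated to an LLM or a classifier rather than a string match.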
Thank you for your kind consideration!
However, my machine's specs are very low and I can't run a model locally.
I'm considering using SQLite FTS instead, since Elasticsearch seems like overkill!
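SQLite's FTS5 extension does cover this use case with nothing but the standard library: it supports full-text MATCH queries with BM25 ranking, so the top-N retrieval step needs no external server. A minimal sketch, assuming FTS5 is compiled into your sqlite3 build (it is in most Python distributions); the table and corpus here are made up for illustration.

```python
import sqlite3

# In-memory DB for the demo; a real app would use a file path.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")

corpus = [
    "LlamaIndex builds indices over your documents for LLM querying.",
    "Elasticsearch is a distributed full-text search engine.",
    "SQLite FTS5 provides lightweight full-text search with BM25 ranking.",
]
conn.executemany("INSERT INTO docs (body) VALUES (?)", [(d,) for d in corpus])

def top_n(query: str, n: int) -> list[str]:
    # FTS5 orders results by BM25 relevance when you ORDER BY rank,
    # and LIMIT caps the number of retrieved nodes.
    rows = conn.execute(
        "SELECT body FROM docs WHERE docs MATCH ? ORDER BY rank LIMIT ?",
        (query, n),
    )
    return [r[0] for r in rows]

hits = top_n("search", 2)
```

The rows returned by `top_n` are exactly the N nodes you would then hand to a ListIndex.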