Metadata

I was wondering: when we store extracted metadata in the nodes, how does the query engine look for that metadata at query time, and can that behavior be customized?
Ref- https://docs.llamaindex.ai/en/latest/examples/metadata_extraction/EntityExtractionClimate.html#entity-metadata-extraction
The metadata is included by default when sending the text to the embedding model and LLM.
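A rough sketch of that idea in plain Python (the function and field names here are hypothetical, for illustration only, not the actual llama-index internals):

```python
# Illustrative sketch: extracted metadata is rendered as "key: value" lines
# and prepended to the chunk text, so what gets embedded (or shown to the
# LLM) also reflects the extracted fields. Names are hypothetical.
def build_embed_text(text: str, metadata: dict) -> str:
    header = "\n".join(f"{k}: {v}" for k, v in metadata.items())
    return f"{header}\n\n{text}" if header else text

node_text = "Sea levels are projected to rise under all scenarios."
node_meta = {"entities": "IPCC, sea level", "section": "Summary"}
print(build_embed_text(node_text, node_meta))
```

In llama-index itself, which metadata keys are visible to the embedding model versus the LLM is configurable per node, so this inclusion can be tuned or turned off.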

And when you create the query engine, you can specify metadata filters (or use an auto retriever to let the LLM write the filters for each query)
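To make the pre-filtering idea concrete, here is a minimal plain-Python sketch of an `in`-operator style metadata filter applied before retrieval (this is illustrative only, not the llama-index `MetadataFilters` API; the node dicts and function name are made up):

```python
# Illustrative pre-filter: keep only nodes whose metadata value for a key
# is in an allowed set, before any vector retrieval runs. Hypothetical names.
def metadata_prefilter(nodes, key, allowed_values):
    return [n for n in nodes if n.get("metadata", {}).get(key) in allowed_values]

nodes = [
    {"id": 1, "metadata": {"year": 2021}},
    {"id": 2, "metadata": {"year": 2019}},
    {"id": 3, "metadata": {"year": 2023}},
]
recent = metadata_prefilter(nodes, "year", {2021, 2023})
print([n["id"] for n in recent])  # [1, 3]
```

Recent llama-index versions do expose filter operators (including an `IN`-style operator) on their metadata filter classes, but support varies by vector store, so check the docs for the store you use.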
What do you think would be the best approach to do RAG over, say, 70k–100k documents? One approach, as you mentioned, is a metadata filter so that we can pre-filter (in that case, does llama-index support the `in` operator?). Or do you think something like a knowledge graph is a good approach?
I have yet to see a knowledge graph actually be useful in production 😅 Building them is pretty expensive too if you use an LLM.

I would use some kind of hybrid search plus reranking. Maybe some hierarchical document/agent approach.
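One common way to sketch hybrid search is to fuse a keyword ranking and a vector ranking with reciprocal rank fusion (RRF), then pass the fused top-k to a reranker. A minimal, self-contained example of the fusion step (the doc ids and `k` constant are illustrative; this is not a llama-index API):

```python
# Reciprocal rank fusion: each ranking contributes 1 / (k + rank + 1) to a
# document's score; documents appearing high in both rankings win.
def rrf_fuse(keyword_ranked, vector_ranked, k=60):
    # keyword_ranked / vector_ranked: lists of doc ids, best first.
    scores = {}
    for ranking in (keyword_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

fused = rrf_fuse(["d1", "d2", "d3"], ["d3", "d1", "d4"])
print(fused)  # ['d1', 'd3', 'd2', 'd4']
```

The fused list would then typically be cut to a small top-k and reranked with a cross-encoder or LLM reranker before answering.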