
Updated 10 months ago


Hello. I have a design question on using Azure AI Search for RAG. I have a large set of documents in Azure AI Search, stored with a searchable key doc_id, text chunks, and their embeddings. How can I use LlamaIndex to run an embedding-only query for "foo" scoped to doc_id="XYZ"? Hybrid mode is not designed for this case. Should I create a custom retriever? What is the right approach? Are there any existing examples? I guess this use case is common to all vector-store-based RAG. The key, I guess, is to get the nodes with "XYZ" first and then build a retriever with those nodes.
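The "filter first, then retrieve" idea described above can be sketched in plain Python. This is a toy illustration of the concept only (the `Node`, `cosine`, and `retrieve` names are made up for this sketch and are not the Azure AI Search or LlamaIndex API): restrict the candidate set to one doc_id, then rank the survivors by embedding similarity.

```python
from dataclasses import dataclass
import math


@dataclass
class Node:
    doc_id: str
    text: str
    embedding: list[float]


def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def retrieve(nodes: list[Node], query_embedding: list[float],
             doc_id: str, top_k: int = 2) -> list[Node]:
    # Step 1: pre-filter to the target document (the "metadata filter" step).
    scoped = [n for n in nodes if n.doc_id == doc_id]
    # Step 2: rank only the scoped nodes by similarity to the query.
    scoped.sort(key=lambda n: cosine(n.embedding, query_embedding), reverse=True)
    return scoped[:top_k]
```

In a real deployment the filtering happens server-side in the vector store rather than in client memory, which is what the metadata-filter suggestion below amounts to.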
3 comments
I would use a metadata filter
@DS Thank you. I will try that.
@DS The metadata filter works for this case, but it only supports exact match. It would be great to let the user pass through a fuller OData expression, such as
"filter": "file_id eq 'solicitation 3/GSA.pdf' and search.in(mime_type, 'application/pdf')",
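A filter string like the one above can be assembled safely rather than concatenated by hand. Below is a small hypothetical helper (the `build_odata_filter` name and its signature are my own, not a library API) that builds an OData `$filter` expression combining an exact-match clause with `search.in`, escaping single quotes by doubling them as OData requires.

```python
def build_odata_filter(file_id: str, mime_types: list[str]) -> str:
    """Build an Azure AI Search OData $filter string (illustrative helper)."""
    def quote(value: str) -> str:
        # OData string literals escape an embedded single quote by doubling it.
        return "'" + value.replace("'", "''") + "'"

    # search.in takes the candidate values as one comma-separated quoted string.
    in_list = quote(",".join(mime_types))
    return f"file_id eq {quote(file_id)} and search.in(mime_type, {in_list})"
```

For example, `build_odata_filter("solicitation 3/GSA.pdf", ["application/pdf"])` reproduces the filter expression quoted above.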