
My setup is like this.
User sends a query to the RAG-based chatbot (the condense-plus-context one). There's a similarity cutoff so that irrelevant context isn't passed to the LLM. If the number of documents retrieved is zero, is there any way I can skip the LLM call entirely and return a generic response? @Logan M
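(For reference, the setup described might look roughly like this. This is a minimal sketch: the data path, index construction, and the 0.7 cutoff are illustrative assumptions, not details from the thread.)

```python
# Sketch of the setup described above: a condense-plus-context chat
# engine with a similarity cutoff so low-scoring nodes are never
# passed to the LLM as context. Path and cutoff are illustrative.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.postprocessor import SimilarityPostprocessor

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)
response = chat_engine.chat("your question here")
```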
I think it is already set up so that there won't be any LLM calls when no nodes are retrieved for the query
It could be different for condense + context, though
Hey @chaitanya, how are you able to do a similarity cutoff? Could you please share the information? It would be really helpful for us as well
But I am getting responses based on the LLM's own knowledge
Search for node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)]. https://docs.llamaindex.ai/en/stable/understanding/querying/querying/
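(Expanded into runnable form, following the query-engine style of the linked docs page; the index carries over from the sketch above, and the 0.7 cutoff is just an example threshold.)

```python
from llama_index.core.postprocessor import SimilarityPostprocessor

# Attach the cutoff as a node postprocessor; nodes scoring below 0.7
# are dropped before the response is synthesized.
query_engine = index.as_query_engine(
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)
response = query_engine.query("your question here")
```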
@WhiteFang_Jr You can see the context is empty. If I use only condense question, it gives an empty response, but that is not the case for condense plus context. Any ideas or suggestions?
Attachment: thumbnail_image001.png
I think you could add a custom node postprocessor that adds a node with some default instruction if there are no nodes left.
Do you have any example for this?
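(A minimal sketch of such a postprocessor, assuming the BaseNodePostprocessor interface from llama_index.core; the class name, fallback text, and cutoff value are illustrative, not part of the library.)

```python
# Hedged sketch of the suggestion above: a postprocessor that injects a
# fallback node carrying a default instruction when the similarity
# cutoff has removed every retrieved node. The class name and fallback
# text are illustrative assumptions.
from typing import List, Optional

from llama_index.core.postprocessor import SimilarityPostprocessor
from llama_index.core.postprocessor.types import BaseNodePostprocessor
from llama_index.core.schema import NodeWithScore, QueryBundle, TextNode

FALLBACK_TEXT = (
    "No relevant context was found. Tell the user you cannot answer "
    "from the knowledge base; do not answer from general knowledge."
)

class FallbackNodePostprocessor(BaseNodePostprocessor):
    """Insert a default-instruction node when no nodes survive filtering."""

    def _postprocess_nodes(
        self,
        nodes: List[NodeWithScore],
        query_bundle: Optional[QueryBundle] = None,
    ) -> List[NodeWithScore]:
        if not nodes:
            return [NodeWithScore(node=TextNode(text=FALLBACK_TEXT), score=0.0)]
        return nodes

# Postprocessors run in list order, so the fallback sits after the cutoff.
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",
    node_postprocessors=[
        SimilarityPostprocessor(similarity_cutoff=0.7),
        FallbackNodePostprocessor(),
    ],
)
```

(Note that this variant still makes one LLM call carrying the fallback instruction. To skip the LLM entirely, as the original question asks, the retrieval-plus-cutoff step could instead be run manually before invoking the chat engine, returning the generic response whenever the filtered node list comes back empty.)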