Yeah, I think this can be done in the prompt.
You can include details specific to your use case, and even a small example, right in the prompt!
Basically, it sounds like you just need to make sure that if the node isn't relevant, the model responds appropriately, either with something like "The answer cannot be found" or by returning the previous answer during the refine process (rough sketch below).
Here are the current internal prompts:
https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/default_prompts.py
And here are the ones specific to chat models:
https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/chat_prompts.py
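For example, something like this (a rough sketch against the gpt_index-era API; the exact import paths, index class, and query() keyword names may differ depending on your version, and the question string is just a hypothetical placeholder):

```python
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from gpt_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt

# QA prompt: explicitly tell the model what to say when the retrieved node isn't relevant.
# The {context_str} / {query_str} variables match the defaults in default_prompts.py.
MY_QA_PROMPT = QuestionAnswerPrompt(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using only the context (not prior knowledge), answer the question: {query_str}\n"
    "If the context is not relevant to the question, respond with exactly: "
    "\"The answer cannot be found.\"\n"
)

# Refine prompt: keep the previous answer when the new node adds nothing useful.
# {query_str}, {existing_answer}, {context_msg} are the variables the default refine prompt uses.
MY_REFINE_PROMPT = RefinePrompt(
    "The original question is: {query_str}\n"
    "The existing answer is: {existing_answer}\n"
    "New context is below.\n"
    "---------------------\n"
    "{context_msg}\n"
    "---------------------\n"
    "If the new context is not relevant to the question, return the existing answer unchanged.\n"
    "Otherwise, refine the existing answer using the new context.\n"
)

documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex(documents)

response = index.query(
    "What does the report say about Q3 revenue?",  # hypothetical question for illustration
    text_qa_template=MY_QA_PROMPT,
    refine_template=MY_REFINE_PROMPT,
)
print(response)
```

If you're using a chat model, same idea, you'd just base your custom templates on the ones in chat_prompts.py instead.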