Is there a way for LlamaIndex, instead of giving a "no context" response, to invoke a function like a web downloader, read the response, and add it as context to attempt to resolve an actual answer?
I think it would have to decide ahead of time based on tool descriptions, e.g., with a router or agent that picks between the index and a web tool.
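A minimal sketch of that ahead-of-time approach with a `RouterQueryEngine`, assuming a recent `llama-index` (0.10+ import paths); `index_engine` and `web_engine` are hypothetical query engines you would supply yourself:

```python
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool

# The selector LLM reads the tool descriptions and routes each query
# to whichever engine looks more likely to answer it.
router = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[
        QueryEngineTool.from_defaults(
            query_engine=index_engine,  # hypothetical: your vector index query engine
            description="Answers questions covered by the indexed documents.",
        ),
        QueryEngineTool.from_defaults(
            query_engine=web_engine,  # hypothetical: wraps a web downloader/retriever
            description="Fetches and reads web pages for anything not in the index.",
        ),
    ],
)

response = router.query("What changed in the latest release?")
```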
You could also implement a custom query engine that first calls an inner query engine, then calls the LLM to decide if a "no context" response was given, and if so calls another service like a web retriever tool.
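A rough sketch of that fallback pattern using `CustomQueryEngine`, again assuming 0.10+ import paths; the `FallbackQueryEngine` name and the YES/NO judge prompt are just illustrative:

```python
from llama_index.core.base.base_query_engine import BaseQueryEngine
from llama_index.core.llms import LLM
from llama_index.core.query_engine import CustomQueryEngine


class FallbackQueryEngine(CustomQueryEngine):
    """Try the primary index first; fall back to a web tool on a no-context answer."""

    primary: BaseQueryEngine   # e.g. a vector index query engine
    fallback: BaseQueryEngine  # e.g. wraps a web retriever/downloader
    llm: LLM                   # judges whether the first answer is a non-answer

    def custom_query(self, query_str: str):
        first = self.primary.query(query_str)
        # Ask the LLM whether the first answer amounts to "no context".
        verdict = self.llm.complete(
            "Does the following answer say it lacks enough context to respond? "
            f"Reply YES or NO.\n\nAnswer: {first}"
        )
        if "YES" in verdict.text.upper():
            return self.fallback.query(query_str)
        return first
```

The LLM judge step is optional; a plain string check against your index's `response_mode` refusal text would be cheaper, just more brittle.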