Hey folks, I have a question about the expected behavior of LlamaIndex when the answer to a query is NOT part of the provided context. I have a file with a bunch of product information (pricing, descriptions, etc.). I use LlamaIndex + OpenAI to ask it questions and get back very accurate results.
Sometimes, I want to ask questions whose answers are not in the provided files/data, even though OpenAI certainly has the information. But I don't get these answers. Instead, I get back a response saying that the information is NOT part of the context provided.
Is there a way to get around this? Right now, I don't do any prompt engineering. I simply send the user-provided question as-is to the model.
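One thing that often helps here is overriding the default QA prompt so the model is explicitly allowed to fall back on its general knowledge when the retrieved context doesn't contain the answer. Below is a minimal sketch: the template string and the `build_prompt` helper are illustrative (not LlamaIndex's exact defaults), and the LlamaIndex hookup at the bottom assumes a recent version with `llama_index.core.PromptTemplate` and `as_query_engine(text_qa_template=...)`, so check it against the docs for your installed version.

```python
# Custom QA template that permits falling back to the model's own knowledge.
# {context_str} and {query_str} are the placeholder names LlamaIndex uses
# in its QA templates.
QA_TEMPLATE_STR = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question below. Prefer the context, but if the context does\n"
    "not contain the answer, fall back on your general knowledge and say\n"
    "that you did so.\n"
    "Question: {query_str}\n"
    "Answer: "
)

def build_prompt(context_str: str, query_str: str) -> str:
    """Fill the template the way the query engine would before calling the LLM."""
    return QA_TEMPLATE_STR.format(context_str=context_str, query_str=query_str)

# Plugging it into LlamaIndex (sketch; verify against your version's docs):
# from llama_index.core import PromptTemplate
# qa_template = PromptTemplate(QA_TEMPLATE_STR)
# query_engine = index.as_query_engine(text_qa_template=qa_template)
# response = query_engine.query("Who founded the company?")
```

With a prompt like this, questions answerable from your product file still get grounded answers, while out-of-context questions get the model's own answer flagged as such instead of a refusal.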