The community member is having trouble controlling the output of a large language model (LLM) such as ChatGPT: it tries to answer from its own knowledge rather than strictly adhering to the provided context. Another community member comments that they are experiencing the same issue and that the model was updated recently, which seems to have made the problem significantly worse.
How to strictly restrict answers to the index/context provided. I'm having trouble controlling the ChatGPT LLM's output. It sometimes tries to pull answers from its own knowledge instead of from the context.
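A common first step for this kind of grounding problem is a restrictive system prompt combined with a low temperature: tell the model it may only use the supplied context and must refuse otherwise. Below is a minimal sketch using the OpenAI Python client; the model name, the prompt wording, and the example `context` string are assumptions for illustration, and in a real setup the context would come from whatever index/retriever you are using.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical retrieved context; in practice this comes from your index.
context = (
    "Acme GmbH was founded in 2012 and is headquartered in Berlin. "
    "Its flagship product is the RoadRunner logistics platform."
)

question = "Who founded Acme GmbH?"

# Restrictive instructions: answer only from the context, refuse otherwise.
system_prompt = (
    "You are a question-answering assistant. Answer ONLY using the context "
    "provided by the user. If the context does not contain the answer, reply "
    "exactly with: 'I don't know based on the provided context.' Do not use "
    "any outside or prior knowledge."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; use whichever model you call
    temperature=0,          # low temperature reduces off-context improvisation
    messages=[
        {"role": "system", "content": system_prompt},
        {
            "role": "user",
            "content": f"Context:\n{context}\n\nQuestion: {question}",
        },
    ],
)

print(response.choices[0].message.content)
```

This is not bulletproof (models can still leak prior knowledge), so it helps to pair the prompt with an explicit refusal string you can check for, and to evaluate on questions you know are *not* answerable from the context.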