The community members have noticed a change in behavior in their LLM (Large Language Model) that is causing out-of-context responses. They suspect the issue lies with the LLM itself rather than with the LlamaIndex library. Some community members suggest that OpenAI may be updating its models frequently without notifying users, leading to performance variations.
Potential solutions discussed include appending extra instructions to the query, modifying the prompt templates, and using the default QA prompt. However, the community members note that creating a prompt that generalizes well to all LLMs is challenging. They welcome contributions in the form of pull requests to improve the default prompts.
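The two suggested workarounds — appending extra instructions to the query and overriding the default QA prompt template — can be sketched in plain Python. This is a minimal, illustrative sketch: `CUSTOM_QA_TEMPLATE` and `build_prompt` are hypothetical names, and the commented-out wiring at the end assumes a recent LlamaIndex-style `PromptTemplate`/`text_qa_template` API rather than quoting the library verbatim.

```python
# Hypothetical sketch of customizing a QA prompt; the template text and
# helper below are illustrative, not LlamaIndex's actual defaults.

# A custom QA template that bakes extra grounding instructions into the
# prompt, mirroring the "append extra instructions" suggestion above.
CUSTOM_QA_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using ONLY the context above and no prior knowledge, answer the query. "
    "If the answer is not in the context, say you don't know.\n"
    "Query: {query_str}\n"
    "Answer: "
)

def build_prompt(context_str: str, query_str: str) -> str:
    """Fill the template the way a text QA template would be formatted."""
    return CUSTOM_QA_TEMPLATE.format(
        context_str=context_str, query_str=query_str
    )

# With LlamaIndex installed, a template like this would typically be
# passed to the query engine (API assumed, check the current docs):
#   from llama_index.core import PromptTemplate
#   query_engine = index.as_query_engine(
#       text_qa_template=PromptTemplate(CUSTOM_QA_TEMPLATE)
#   )
```

Because the instructions live in the template rather than each query, every request gets the same grounding constraints, which is the behavior the default QA prompt is meant to provide.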
The community members also observe that the issue has become more frequent over the past 1-2 months, affecting both ChatGPT and the underlying LLM. They find the need for prompt engineering tedious and are interested in exploring other solutions.
@Maximus OpenAI seems to update their models frequently without telling anyone. Performance seems to vary quite a bit, which is likely the cause of this
@WhiteFang_Jr I agree, it's pretty annoying tbh. It sucks that the only solution (either in a PR or customizing prompts yourself) is prompt engineering :PSadge: