The community members are discussing how to prevent the llama_index library from making up answers and instead force it to answer only from the locally indexed data. The comments note that the default prompts already tell the LLM to use only the provided context when answering questions, but that some prompt engineering may be required depending on the specific LLM being used. One community member found that the OpenAI model was drawing on its own prior knowledge and returning made-up information. To resolve this, they explicitly added an instruction to the system prompt telling the LLM not to fabricate information, which produced a different, non-fabricated answer.
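A minimal sketch of the approach described above, assuming the llama_index v0.10+ module layout (exact import paths differ between versions); the system prompt wording and the `./data` directory are illustrative, not the community member's exact setup:

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.openai import OpenAI

# Attach a system prompt that forbids the LLM from answering outside
# the retrieved context. The prompt text here is an example, not the
# exact wording from the discussion.
Settings.llm = OpenAI(
    model="gpt-3.5-turbo",
    system_prompt=(
        "Answer ONLY using the provided context. "
        "If the context does not contain the answer, say you don't know. "
        "Do not use prior knowledge or make up information."
    ),
)

# Build an index over local documents and query it as usual.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What does the indexed data say about X?"))
```

An alternative, as the comments suggest, is deeper prompt engineering: passing a custom `text_qa_template` to `as_query_engine()` replaces the default question-answering prompt entirely, which can help with LLMs that ignore the system prompt.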