How can I prevent llama_index from making up answers?

At a glance

The community members are discussing how to prevent the llama_index library from making up answers and force it to answer only from the contents of the vector store. By default, the prompts tell the LLM to use only the provided context, but depending on the specific LLM some prompt engineering may be required. One community member found that the LLM (an OpenAI model) fabricated information about a case; explicitly instructing the model in the system prompt not to make up information changed the answer completely and eliminated the fabrication.

How can I prevent llama_index from making up answers? Force it to only query from the local LLM.
7 comments
What LLM are you using? By default, the prompts tell the LLM to only use the context provided to answer questions.

But depending on the LLM, some prompt engineering may be required
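For reference, here is a rough sketch of what that prompt customization can look like, overriding the default QA template with a stricter one via update_prompts(). It assumes a recent llama_index install (0.10+ llama_index.core layout); the data directory and prompt wording are placeholders, not part of the original thread.

```python
# Rough sketch: tighten the default QA prompt so the LLM answers only from
# the retrieved context. Data path and prompt wording are assumptions.
from llama_index.core import (
    PromptTemplate,
    SimpleDirectoryReader,
    VectorStoreIndex,
)

documents = SimpleDirectoryReader("./data").load_data()  # placeholder data dir
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

strict_qa_prompt = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the query using ONLY the context above. If the context does not "
    "contain the answer, say you don't know. Do not use prior knowledge.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# "response_synthesizer:text_qa_template" follows the documented
# "module:prompt_name" key convention used by update_prompts().
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": strict_qa_prompt}
)

print(query_engine.query("What does the document say about the case?"))
```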
Sorry, I typed that wrong; I meant only using the contents from the vector_store.
Yea, it should still be doing that by default 😅
I think it gave me some response from the OpenAI model. It made up some info about a case. Weird, it never happened before.
I had to explicitly add to the system_prompt to not make up info, and the answer changed completely (it didn't make up the answer).
Is that the correct way of doing it?
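A non-authoritative sketch of roughly that approach: set a system_prompt on the LLM itself so the "don't invent facts" instruction rides along with every request. The model name and data directory are placeholders, and the llama-index-llms-openai import path assumes a 0.10+ install; whether the system prompt is the best lever (versus the QA template above) may depend on the version and engine in use.

```python
# Rough sketch: attach a system prompt at the LLM level so every request
# carries the "answer only from the retrieved context" instruction.
# Model name and data directory are placeholders.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.openai import OpenAI  # requires llama-index-llms-openai

Settings.llm = OpenAI(
    model="gpt-3.5-turbo",
    system_prompt=(
        "Answer strictly from the retrieved context. If the answer is not "
        "in the context, say that you don't know. Never invent facts."
    ),
)

index = VectorStoreIndex.from_documents(
    SimpleDirectoryReader("./data").load_data()
)
response = index.as_query_engine().query("Summarize the case details.")
print(response)
```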