Updated 5 months ago

Hello everyone. A newbie question for which I can't find an answer in the docs, though it should be simple. I use LlamaIndex with the OpenAI API as the engine for a chatbot over my personal data. But if a request is badly formulated, the LLM returns an answer drawn from its own training data rather than from the data I indexed when launching the chatbot. My question: is there a way to force the LLM to look for answers only in the data provided?
The only way to force this is prompt engineering -- it's a bit of an annoying process.

The default prompt template already tells it to only use information provided πŸ˜…
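Building on the replies above, one common approach is to override the default QA prompt with stricter wording. Below is a minimal sketch: the template text and the refusal phrasing are illustrative, and the commented-out wiring assumes a recent `llama-index` release where `as_query_engine` accepts a `text_qa_template` argument.

```python
# A stricter QA prompt that tells the model to answer only from the
# retrieved context and to refuse when the context is insufficient.
# The {context_str} and {query_str} placeholders are the variable names
# LlamaIndex fills in for text QA templates.
STRICT_QA_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using ONLY the context above and no prior knowledge, answer the query.\n"
    "If the context does not contain the answer, reply exactly: "
    "\"I don't know based on the provided documents.\"\n"
    "Query: {query_str}\n"
    "Answer: "
)

def build_prompt(context_str: str, query_str: str) -> str:
    """Fill the template with str.format, as LlamaIndex does internally."""
    return STRICT_QA_TEMPLATE.format(context_str=context_str, query_str=query_str)

# With LlamaIndex installed and an index built, the template would be
# wired in roughly like this (not executed here):
#
#   from llama_index.core import PromptTemplate
#   query_engine = index.as_query_engine(
#       text_qa_template=PromptTemplate(STRICT_QA_TEMPLATE)
#   )

print(build_prompt("Paris is the capital of France.",
                   "What is the capital of France?"))
```

Note that this only steers the model; no prompt can guarantee the LLM never falls back on its training data, so pairing the stricter template with a lower temperature can help.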