How to Optimize the Chatbot

Hello @All.
How can I get the exact answer from the document?
After a period of use, I've found that the chatbot cannot answer many questions even though the information is in the data, and there are also cases where it cannot answer questions similar to ones it has answered before.
How can I optimize the chatbot?
What LLM are you using?
You could use a smaller chat memory; your LLM may not be good enough to understand a large context.
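
A minimal sketch of shrinking the chat memory with LlamaIndex's `ChatMemoryBuffer`, assuming an existing `index`; the `token_limit` value and `chat_mode` are just illustrative choices:

```py
from llama_index.core.memory import ChatMemoryBuffer

# A smaller buffer means less history is stuffed into each prompt
# (the 1500-token limit is an arbitrary example value)
memory = ChatMemoryBuffer.from_defaults(token_limit=1500)

chat_engine = index.as_chat_engine(
    chat_mode="context",  # retrieval-backed chat; other modes also accept memory
    memory=memory,
)
```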
I am using gpt-4o-mini. gpt-4o would be better, but I still encounter issues sometimes; the frequency is just lower. However, I also have a question: can I control the answers?
Can you elaborate on controlling the answers?

You mean you want to validate whether the given answer meets a certain standard or not?
Exactly. I want to control whether the answers are correct or not. Since I'm building a chatbot for legal matters, there can be no errors, because all the data is already in the documents.
@WhiteFang_Jr @Logan M
I think you can validate the response against the conditions or factors you want to check.
```py
from llama_index.core import Settings

# Get the bot's answer first
response = chat_engine.chat(query)

# Validate the response with an LLM. Add all the conditions
# the bot is required to adhere to, and pass in the bot response.
validate_prompt = """Check whether the response below satisfies all of the
following conditions:
<add your conditions here>

User query: {query}
Bot response: {response}

If the response satisfies the conditions, return True, else return False.
"""
flag = Settings.llm.complete(
    validate_prompt.format(query=query, response=response.response)
)
```

Based on the flag you can either call the chat engine again or return the response.

Also, modify your chat_engine prompt so that it adheres to the conditions while generating the response as well.
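
A minimal sketch of that retry logic, reusing `validate_prompt` and `Settings` from the snippet above; treating any completion containing "true" as a pass is a simplistic assumed convention, not a library feature:

```py
MAX_RETRIES = 2  # arbitrary cap so a bad query can't loop forever

response = chat_engine.chat(query)
for _ in range(MAX_RETRIES):
    verdict = Settings.llm.complete(
        validate_prompt.format(query=query, response=response.response)
    )
    if "true" in verdict.text.lower():
        break  # response passed validation
    chat_engine.reset()  # clear chat history before retrying (optional)
    response = chat_engine.chat(query)
```

For the second suggestion, chat engines created via `index.as_chat_engine(...)` accept a `system_prompt` argument (in the context-based chat modes) where the same conditions can be stated up front.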
Let me ask: what is the comparison mechanism here? If the comparison is based on the returned nodes, then it will most likely work correctly. But what if the node itself is wrong?
No, the comparison would not be with the nodes here. You can combine the user query + the returned response + your prompt containing the instructions against which the response will be evaluated. The evaluation will be done by the LLM following those instructions.
Are you checking for style or structure here?
I am checking for correctness of the returned content.
For evaluation based on correctness, you will need to prepare a reference dataset to evaluate responses against. You can refer to this: https://docs.llamaindex.ai/en/stable/examples/evaluation/correctness_eval/
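
A minimal sketch of the linked `CorrectnessEvaluator`, assuming you have a reference answer for the query in your prepared dataset; the query and reference strings here are placeholders:

```py
from llama_index.core import Settings
from llama_index.core.evaluation import CorrectnessEvaluator

evaluator = CorrectnessEvaluator(llm=Settings.llm)

# `reference` is the ground-truth answer from your prepared dataset
result = evaluator.evaluate(
    query="What does clause 7 of the contract say?",  # placeholder query
    response=str(response),                           # the chat engine's answer
    reference="Clause 7 states that ...",             # placeholder ground truth
)
print(result.passing, result.score, result.feedback)
```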
After checking, is there any way to update the bot so that it learns, and the next time the question is asked it will answer correctly?
You can update the context of the previously retrieved node if the earlier info was incorrect. Or you could add the correct info to the index; that way the bot would pick up the right answer next time.
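
A minimal sketch of the second option, inserting corrected text into an existing `VectorStoreIndex`; the document text and metadata tag are placeholders:

```py
from llama_index.core import Document

# Insert a corrected/canonical answer so retrieval can find it next time
correction = Document(
    text="Clause 7: the tenant must give 60 days' written notice ...",  # placeholder
    metadata={"source": "manual-correction"},  # hypothetical tag for auditing
)
index.insert(correction)
```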
In the case of wrong nodes, this method doesn't seem to work very well.