Hi, I am working on creating a chat_engine where the engine is supposed to generate questions from the context information, ask the student each question, and then evaluate their answer for correctness. So instead of just chatting over the context info, we also have to create the questions?

So we either create a list of questions first and work through it while evaluating the answers (basically two different query engines and two different prompts; a sketch of this appears below, after the prompt). Is this a use case for agents?

OR

We put everything in a single prompt, like this:

"""Perform the following actions:
1 - Introduce yourself to the students.
2 - Wait for a response.
3 - Ask a meaningful [Question] from the context information provided to assess the student's knowledge of the text.
4 - Wait for a response.
5 - Assess the student's response in the context of the provided text only. Evaluate the response on each of the [parameters] and provide a line of feedback.
[parameters]
  • Does the response answer all sub-questions in the question?
  • Does the response answer all sub-questions correctly?
  • Is the answer elaborate enough, or does it need more explanation?
6 - Continue these actions until the student types "Exit".
"""

I am not sure how to frame a common retriever and response_mode here, though (see the sketch just below for one way this could be wired up).
Can you please help me here and give me some direction?
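For the first option (two query engines, two prompts), one possible shape is a single index exposed as two `as_query_engine` views, each carrying its own `text_qa_template`. Again only a sketch under a pre-0.10 llama_index import layout; the prompt wording and the fixed question count are made up for illustration:

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.prompts import PromptTemplate

docs = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(docs)

# Engine 1: turns retrieved context into exam questions. The query string is
# also the retrieval key, so it should name a topic from the text.
question_gen_engine = index.as_query_engine(
    text_qa_template=PromptTemplate(
        "Context information is below.\n"
        "---------------------\n{context_str}\n---------------------\n"
        "Write three questions about '{query_str}' that test a student's "
        "understanding of the context above. Return one question per line."
    )
)

# Engine 2: grades an answer strictly against the retrieved context.
grading_engine = index.as_query_engine(
    text_qa_template=PromptTemplate(
        "Context information is below.\n"
        "---------------------\n{context_str}\n---------------------\n"
        "Using only the context above, evaluate this exchange:\n{query_str}\n"
        "For each criterion (covers all sub-questions, answers them "
        "correctly, sufficiently elaborate), give one line of feedback."
    )
)
```

Because the grading query contains the original question text, retrieval should pull back roughly the same chunks the question was generated from, which keeps the two engines grounded in a common context.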
1 comment
Personally, I think the first approach is more controllable/stable.

Generate the questions, and then one by one gather responses and grade them 🤔
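A rough driver for that suggested flow, reusing the hypothetical `question_gen_engine` and `grading_engine` from the sketch above (the topic string is a placeholder):

```python
# Generate the question list once up front.
raw = str(question_gen_engine.query("the main topic of the text"))
questions = [line.strip() for line in raw.splitlines() if line.strip()]

# Then walk the list: show a question, collect the answer, grade it.
for question in questions:
    print(question)
    answer = input("Your answer (or 'Exit'): ")
    if answer.strip().lower() == "exit":
        break
    print(grading_engine.query(f"Question: {question}\nStudent answer: {answer}"))
```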