@Logan M One last question: can we get the score in a single shot, i.e. while using as_query_engine to get the response?

I need a score for the generated response, comparing it against the provided context information and the user query.
The score? Like the similarity of the retrieved nodes?

print([node.score for node in response.source_nodes])
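
For context, a minimal sketch of where those scores come from (the "data" directory and the query text are placeholders, and the import paths assume a recent llama_index.core release):

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# build a basic index and query engine over local documents
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

response = query_engine.query("What does the document say about X?")

# each source node carries the similarity score from retrieval
for node in response.source_nodes:
    print(node.score, node.node.get_content()[:100])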
I need an evaluation score for the generated response, based on the user input and the relevant source_nodes
The evaluation score -- you'd need to use one of our evaluators for that
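
A rough sketch of that route, assuming the built-in FaithfulnessEvaluator and RelevancyEvaluator from llama_index.core.evaluation and an OpenAI judge model (the model name, data path, and query text are placeholders):

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.evaluation import FaithfulnessEvaluator, RelevancyEvaluator
from llama_index.llms.openai import OpenAI

# same toy index / query engine as in the sketch above
index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
query_engine = index.as_query_engine()

judge = OpenAI(model="gpt-4")
faithfulness = FaithfulnessEvaluator(llm=judge)  # is the response supported by the retrieved context?
relevancy = RelevancyEvaluator(llm=judge)        # do response + context actually address the query?

query = "What does the document say about X?"
response = query_engine.query(query)

faith_result = faithfulness.evaluate_response(response=response)
rel_result = relevancy.evaluate_response(query=query, response=response)

print(faith_result.passing, faith_result.score, faith_result.feedback)
print(rel_result.passing, rel_result.score)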
So there's no way to get the score in a single shot, i.e. while getting the response for the user input?

So I'd need to get the response from OpenAI for the user input, and then pass the response, the user input, and the reference answer to one of the evaluators, right?
Yea.

I guess you could use structured outputs and define a response object that includes some evaluation of the response, to get it in a single shot

https://docs.llamaindex.ai/en/stable/optimizing/advanced_retrieval/structured_outputs/query_engine.html#query-engines-pydantic-outputs
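
A rough sketch of that single-shot idea, following the pydantic-outputs pattern from that page. The GradedAnswer class, its support_score field, and the data path are made up for illustration, and the score here is just the LLM grading its own answer rather than one of the dedicated evaluators:

from pydantic import BaseModel, Field
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

class GradedAnswer(BaseModel):
    """Answer plus the model's own rough assessment of it."""
    answer: str = Field(description="The answer to the user's question")
    support_score: float = Field(
        description="How well the answer is supported by the retrieved context, from 0 to 1"
    )

index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
query_engine = index.as_query_engine(output_cls=GradedAnswer, response_mode="compact")

result = query_engine.query("What does the document say about X?")
graded = result.response  # the parsed GradedAnswer object (exact attribute may vary by version)
print(graded.answer, graded.support_score)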