Hello. I am trying to evaluate a retriever, and I see that, using the methods in this documentation: https://docs.llamaindex.ai/en/stable/examples/evaluation/retrieval/retriever_eval.html , the question-context pairs are generated with generate_question_context_pairs. Say that I already have my own question-context pairs: how can I use them with LlamaIndex's RetrieverEvaluator instead of the generated ones?
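For context, here is a minimal sketch of what I think should work based on my reading of that notebook: wrap my existing pairs in an EmbeddingQAFinetuneDataset (the class that generate_question_context_pairs returns) and hand it to RetrieverEvaluator. The import paths assume a recent llama_index version, and the example ids/texts below are just placeholders of mine:

```python
# A minimal sketch of what I think should work (import paths assume a
# recent llama_index version; older versions expose these under
# llama_index.evaluation instead of llama_index.core.evaluation).
from llama_index.core import VectorStoreIndex
from llama_index.core.schema import TextNode
from llama_index.core.evaluation import (
    EmbeddingQAFinetuneDataset,
    RetrieverEvaluator,
)

# My existing question-context pairs, keyed by ids I choose myself.
corpus = {
    "node_1": "Paris is the capital of France.",
    "node_2": "The Eiffel Tower was completed in 1889.",
}
queries = {
    "query_1": "What is the capital of France?",
    "query_2": "When was the Eiffel Tower completed?",
}
# Maps each query id to the ids of the context(s) that should be retrieved.
relevant_docs = {
    "query_1": ["node_1"],
    "query_2": ["node_2"],
}

# Wrap the pairs in the same dataset class that
# generate_question_context_pairs returns.
qa_dataset = EmbeddingQAFinetuneDataset(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
)

# Build the index from nodes whose ids match the corpus keys, so the
# retrieved node ids can be compared against relevant_docs.
# (Uses the default embedding model, so an API key must be configured.)
nodes = [TextNode(id_=node_id, text=text) for node_id, text in corpus.items()]
index = VectorStoreIndex(nodes)
retriever = index.as_retriever(similarity_top_k=2)

retriever_evaluator = RetrieverEvaluator.from_metric_names(
    ["mrr", "hit_rate"], retriever=retriever
)

# Single query:
result = retriever_evaluator.evaluate(
    query=queries["query_1"], expected_ids=relevant_docs["query_1"]
)

# Whole dataset (await works at the top level of a Jupyter notebook):
eval_results = await retriever_evaluator.aevaluate_dataset(qa_dataset)
```

In particular, do the ids in relevant_docs have to match the node ids used to build the index exactly, or is there another recommended way to line them up?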
When creating an index in a Jupyter notebook, the output is filled with "Batches" progress bars. Is there a way to disable them? The same "Batches" spam also shows up when running RetrieverEvaluator.aevaluate_dataset().
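For reference, the only workaround I have found so far is to globally disable tqdm before building the index, since I believe the "Batches" bars come from the embedding model's encode step rather than from LlamaIndex itself (show_progress already defaults to False when I build the index). This monkey-patch is my own assumption, not something from the docs; is there a cleaner, supported way?

```python
# Workaround I am currently trying: disable all tqdm progress bars globally
# before building the index / running the evaluation. This is a blunt
# monkey-patch of my own, not an official LlamaIndex setting.
from functools import partialmethod

from tqdm import tqdm

tqdm.__init__ = partialmethod(tqdm.__init__, disable=True)

# I have also tried silencing the sentence-transformers logger, since (as
# far as I can tell) its encode() only shows the "Batches" bar at
# INFO/DEBUG level, but I am not sure this is reliable across versions:
# import logging
# logging.getLogger("sentence_transformers").setLevel(logging.WARNING)
```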