Evaluating the impact of chunk size on evaluation metrics

At a glance

The community member built a simple example using their local Ollama and is testing how different chunk sizes affect some evaluation metrics. They are wondering about the intended way to use Settings: they set Settings.llm=&lt;evaluator_model&gt;, but when generating answers they also have to change Settings.chunk_size and set Settings.llm=&lt;evaluatee_model&gt;, then switch everything back afterwards. They ask whether Settings should be set once or whether there is a better way to do this.

In the comments, another community member suggests that Settings are just global defaults and that other interfaces should accept the llm etc. as needed for local overrides.

Hey everyone,

I made a simple example using my local Ollama, and I am testing how different chunk sizes affect some evaluation metrics.

I am wondering what the intended way to use Settings is. I set Settings.llm=&lt;evaluator_model&gt;, but when I am generating answers I have to change Settings.chunk_size, so I set Settings.llm=&lt;evaluatee_model&gt; and then change everything back when I am done. Should Settings be set once? Or is there a better way to do this?
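
Roughly this pattern, as a minimal sketch (assuming llama_index >= 0.10; "llama3" and "mixtral" are placeholder model names for my evaluatee and evaluator):

```python
# Minimal sketch of the swap-the-global pattern described above.
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

evaluatee_llm = Ollama(model="llama3")   # placeholder: model that generates answers
evaluator_llm = Ollama(model="mixtral")  # placeholder: model that judges them

Settings.llm = evaluatee_llm  # generation phase
Settings.chunk_size = 512     # varied per experiment
# ... build the index and generate answers ...

Settings.llm = evaluator_llm  # swap the global to run evaluations
# ... run the evaluators ...

Settings.llm = evaluatee_llm  # swap back for the next chunk size
```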

Thanks!
Settings are just global defaults. Other interfaces should accept the llm etc. as needed for local overrides.
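
For example, something like this (a minimal sketch, assuming llama_index >= 0.10; the model names, the ./data directory, and the query string are placeholders):

```python
# Local overrides instead of mutating Settings between phases.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.evaluation import FaithfulnessEvaluator
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

evaluatee_llm = Ollama(model="llama3")   # placeholder: model under test
evaluator_llm = Ollama(model="mixtral")  # placeholder: model that judges the answers

documents = SimpleDirectoryReader("./data").load_data()

for chunk_size in (256, 512, 1024):
    # Override the chunk size per index via transformations,
    # instead of setting Settings.chunk_size globally.
    index = VectorStoreIndex.from_documents(
        documents,
        transformations=[SentenceSplitter(chunk_size=chunk_size)],
        embed_model=OllamaEmbedding(model_name="nomic-embed-text"),  # placeholder
    )
    # Override the LLM per query engine instead of setting Settings.llm.
    query_engine = index.as_query_engine(llm=evaluatee_llm)
    response = query_engine.query("What does the document say about X?")

    # The evaluator takes its own llm, so no global swap is needed.
    evaluator = FaithfulnessEvaluator(llm=evaluator_llm)
    result = evaluator.evaluate_response(response=response)
    print(chunk_size, result.passing)
```

With local overrides like these, Settings only needs to hold whatever defaults you want to fall back on.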
Thanks! I just re-read the documentation, it was clear there too.