Evaluating the impact of chunk size on evaluation metrics

Hey everyone,

I made a simple example using my local Ollama, and I'm testing how different chunk sizes affect some evaluation metrics.

I'm wondering what the intended way to use Settings is. I set Settings.llm=&lt;evaluator_model&gt;, but when I generate answers I have to change Settings.chunk_size, so I set Settings.llm=&lt;evaluatee_model&gt; and then change it back when I'm done. Should Settings be set once? Or is there a better way to do this?
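Roughly, the swap-based flow looks like this (a minimal sketch: the model names, data directory, questions, and chunk sizes are placeholders, and the exact API may differ slightly between llama_index versions):

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.ollama import Ollama

# Placeholder models: the evaluatee generates answers, the evaluator scores them later.
evaluatee_llm = Ollama(model="llama3")
evaluator_llm = Ollama(model="mistral")

documents = SimpleDirectoryReader("data").load_data()

for chunk_size in (128, 256, 512, 1024):
    # Swap the globals to the evaluatee before indexing / answering...
    Settings.llm = evaluatee_llm
    Settings.chunk_size = chunk_size
    index = VectorStoreIndex.from_documents(documents)
    response = index.as_query_engine().query("What does the document say about X?")

    # ...then swap Settings.llm back to the evaluator for scoring.
    Settings.llm = evaluator_llm
    # evaluation runs here with the evaluator model as the global default
```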

Thanks!
2 comments
Settings are just global defaults. Other interfaces accept the llm etc. as needed for local overrides.
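For example, something along these lines (a sketch only: the model names, data path, question, and FaithfulnessEvaluator are placeholders for whatever models and metrics are actually used, and signatures may vary across llama_index versions):

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.evaluation import FaithfulnessEvaluator
from llama_index.llms.ollama import Ollama

# Placeholder models pulled in the local ollama instance.
evaluatee_llm = Ollama(model="llama3")
evaluator_llm = Ollama(model="mistral")

# Set the global default once for answer generation.
Settings.llm = evaluatee_llm

documents = SimpleDirectoryReader("data").load_data()

for chunk_size in (128, 256, 512, 1024):
    Settings.chunk_size = chunk_size  # chunk_size is still a global default here
    index = VectorStoreIndex.from_documents(documents)

    # The llm can be passed locally, so the global default never has to change.
    query_engine = index.as_query_engine(llm=evaluatee_llm)
    question = "What does the document say about X?"
    response = query_engine.query(question)

    # The evaluator takes its own llm as a local override.
    evaluator = FaithfulnessEvaluator(llm=evaluator_llm)
    result = evaluator.evaluate_response(query=question, response=response)
    print(chunk_size, result.passing)
```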
Thanks! I just re-read the documentation, and it was clear there too.