Hello community and all - a sorta-noob question: when a) creating an index from docs, and b) setting up inference (e.g., loading that same index file and querying it), in BOTH cases we can instantiate an LLM object, and in both cases we can set "temperature" (higher values mean more 'creative' responses). Does setting 'temperature' have any meaning at index-creation time? If so, what? I can see that it would have an impact at inference time. Thanks.