The community member asks whether the temperature setting has any meaning when creating an index, or only when setting up inference with a large language model (LLM). The comments explain that some index types, such as knowledge graph, tree, and keyword indexes, call the LLM during index construction; for those, the choice of LLM and its temperature do matter at creation time, since the index is built from the LLM's output. The community member is told that the LLM and temperature can be specified independently at index creation and at inference.
Hello community and all - a sorta-noob question: how is it that when a) creating an index from docs, and also b) setting up inference (e.g., loading the same index file and querying it), in BOTH cases we can instantiate an LLM object and set "temperature" (higher values mean more 'creative' responses)? Does setting 'temperature' have any meaning at index-creation time? If so, what? I can see that it would have an impact at inference time. Thanks.
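To make the answer concrete, here is a minimal sketch of the two places temperature can be set, assuming a recent LlamaIndex release where `Settings.llm` and `as_query_engine(llm=...)` are available (the `docs/` path and model names are illustrative, not from the thread). A tree index calls the LLM at build time to summarize groups of nodes, so the build-time temperature shapes the summaries that get stored in the index; the query-time temperature only affects how responses are synthesized and can differ without rebuilding anything.

```python
from llama_index.core import Settings, SimpleDirectoryReader, TreeIndex
from llama_index.llms.openai import OpenAI

# Build time: TreeIndex uses the LLM to write the node summaries that are
# persisted in the index, so the temperature set here affects what is stored.
# A low temperature keeps the stored summaries factual and reproducible.
Settings.llm = OpenAI(model="gpt-4", temperature=0.0)

documents = SimpleDirectoryReader("docs/").load_data()  # hypothetical path
index = TreeIndex.from_documents(documents)

# Query time: a different, more 'creative' temperature can be used for
# response synthesis without touching the index that was built above.
query_llm = OpenAI(model="gpt-4", temperature=0.7)
query_engine = index.as_query_engine(llm=query_llm)

print(query_engine.query("What are the key themes in these documents?"))
```

By contrast, a pure embedding-based vector index does not call the LLM at build time, so for that index type the build-time temperature has no effect and only the query-time setting matters.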