LLM settings

At a glance
The community member asks whether the temperature setting has any meaning when creating an index, or only when querying it with a large language model (LLM). The comments explain that some indexes, such as knowledge graph, tree, and keyword indexes, use the LLM during index creation. For these, a higher-quality LLM and a deliberate temperature setting can matter at build time, since the index is constructed from the LLM's output. The community member is informed that the LLM and temperature can be specified both at index creation and at inference time.
Hello community and all - a sorta-noob question: how is it that when (a) creating an index from docs, and also (b) setting up inference (e.g., loading the same index file and querying it), in BOTH cases we can instantiate an LLM object and set "temperature" (higher values mean more 'creative' responses)? Does setting 'temperature' have any meaning at index-creation time? If so, what? I can see that it would have an impact at inference time. Thanks.
1 comment
Some indexes use the LLM at index-creation time (knowledge graph, tree, and keyword indexes).

For these indexes, you sometimes need a higher-quality LLM (and a deliberate temperature setting) when creating the index, since the index is built from the LLM's output.

So in both cases, index creation and inference, you can specify the LLM and the temperature; see the sketch below.
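For illustration, here is a minimal sketch of setting the LLM and temperature at both stages. It assumes a recent llama_index release with the global Settings object, the OpenAI integration, and a TreeIndex (one of the index types that calls the LLM during construction); the model names, temperature values, and the ./docs path are placeholders.

```python
from llama_index.core import SimpleDirectoryReader, Settings, TreeIndex
from llama_index.llms.openai import OpenAI

# Index creation: a tree index calls the LLM to summarize document
# chunks, so the LLM and temperature chosen here are baked into the
# stored index. A low temperature keeps those summaries deterministic.
Settings.llm = OpenAI(model="gpt-4", temperature=0.0)

documents = SimpleDirectoryReader("./docs").load_data()
index = TreeIndex.from_documents(documents)

# Inference: a different LLM (or a higher, more "creative" temperature)
# can be passed at query time; this setting only affects the responses,
# not the index that was already built.
query_engine = index.as_query_engine(
    llm=OpenAI(model="gpt-3.5-turbo", temperature=0.7)
)
print(query_engine.query("What does the document say about X?"))
```

Note that for a plain vector index the LLM is typically not called at build time (only the embedding model is), which is why the temperature question mainly matters for the LLM-driven index types listed above.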