Answer variance

Is the answer time dependent on the hardware where the deployment is done?
Assuming you've set the temperature of the LLM to zero, the answer should be mostly stable.
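
For reference, a minimal sketch of what "setting the temperature to zero" looks like, assuming the OpenAI Python SDK; the model name and prompt are placeholders, not anything from this thread:

```python
# Sketch: pinning temperature to zero so repeated calls give mostly stable answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What causes answer variance?"}],
    temperature=0,  # near-greedy decoding; output becomes mostly, not perfectly, stable
)
print(response.choices[0].message.content)
```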

If you are using OpenAI, you are basically at the whim of whatever they are updating or changing on their end. Usually, though, changes in answers are very subtle.
Thanks for the answer, it's an interesting observation.
But what I meant was whether the "answer time" is dependent on the HW?
Nah, it shouldn't be dependent on the hardware unless you are running an LLM yourself. Everything else should be deterministic, as far as I know.
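
If you want to verify this, a quick sketch for measuring answer time yourself (same assumed SDK and placeholder model as above; with a hosted API the latency is dominated by the provider and the network, not your deployment hardware):

```python
# Sketch: measuring "answer time" (wall-clock latency) of a hosted LLM call.
import time
from openai import OpenAI

client = OpenAI()

start = time.perf_counter()
client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0,
)
elapsed = time.perf_counter() - start
print(f"Answer time: {elapsed:.2f}s")
```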