Assuming you've set the temperature of the LLM to zero, the answers should be mostly stable, though not perfectly deterministic.
If you are using OpenAI, you are essentially at the whim of whatever they update or change on their end. Usually, though, changes in answers are very subtle.
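As a rough sketch, the request parameters that typically help pin things down look like the following. The model name and prompt here are illustrative only, and `seed` is a best-effort reproducibility hint in the OpenAI API, not a guarantee:

```python
# Illustrative request parameters for the OpenAI Chat Completions API.
# In real code these would be passed to client.chat.completions.create(...)
# on an openai.OpenAI() client; shown as a plain dict for clarity.
request = {
    # Pin a dated snapshot rather than a floating alias like "gpt-4o",
    # so silent model updates don't change your answers.
    "model": "gpt-4o-2024-08-06",
    # Temperature 0 makes decoding as stable as the API allows.
    "temperature": 0,
    # Best-effort determinism; compare system_fingerprint across
    # responses to detect backend changes.
    "seed": 42,
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
```

Even with all three set, minor drift can still occur when the provider changes infrastructure behind the same snapshot, which is why comparing the returned `system_fingerprint` between runs is useful.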