I also wanted to start a discussion about the new ChatGPT LLM predictor: even with temperature 0 it seems unreliable for use in gpt_index's query pipelines. What's the plan for this going forward?
https://github.com/jerryjliu/gpt_index/issues/590 Is this something others have noticed too? Is there anything I can change (the Q&A prompt, etc.) that might help? For context, my setup is roughly the sketch below.
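A minimal sketch of what I mean; the exact import paths and constructor args (ChatGPTLLMPredictor taking an llm kwarg, OpenAIChat, the "data" directory, the query string) are assumptions from memory and may not match the current gpt_index version exactly:

```python
# Sketch of the setup in question -- paths/kwargs may differ by gpt_index version
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from gpt_index.langchain_helpers.chatgpt import ChatGPTLLMPredictor
from gpt_index.prompts.prompts import QuestionAnswerPrompt
from langchain.llms import OpenAIChat

# ChatGPT predictor pinned to temperature 0 -- answers still come back
# inconsistent for the same query (assumed constructor args)
llm_predictor = ChatGPTLLMPredictor(
    llm=OpenAIChat(model_name="gpt-3.5-turbo", temperature=0)
)

# Custom Q&A prompt -- is tweaking this template the recommended knob?
QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
)
qa_prompt = QuestionAnswerPrompt(QA_TMPL)

# Build a simple vector index and query it with the custom prompt
documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor)
response = index.query("my question here", text_qa_template=qa_prompt)
print(response)
```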