I also just wanted to start a discussion

I also just wanted to start a discussion about the new ChatGPT LLM predictor. Even with temperature 0 it seems unreliable for use in gpt-index's query pipelines. What's the plan for this in the future? https://github.com/jerryjliu/gpt_index/issues/590 Is this something that others have noticed too? Are there any things I can change (the Q&A prompt, etc.) that might help?
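For reference, a minimal sketch of the setup being discussed, assuming the gpt_index ~0.4-era API with LangChain's ChatOpenAI wrapper; the data directory and query string are placeholders:

```python
from langchain.chat_models import ChatOpenAI
from gpt_index import GPTSimpleVectorIndex, LLMPredictor, SimpleDirectoryReader

# temperature=0 makes sampling greedy, but as the thread notes it does
# not make gpt-3.5-turbo behave reliably in the refine pipeline
llm_predictor = LLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
)

documents = SimpleDirectoryReader("data").load_data()  # placeholder path
index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor)

response = index.query("What does the document say about X?")
print(response)
```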
yeah it's something we're taking a look at - have you been able to find any prompt improvements?
we might start looking at model-dependent prompts
Unfortunately not yet, it's proving really difficult to get chatgpt to behave in the query pipeline
especially if we're looking through multiple nodes, like more than 3
I think so, any chatgpt prompt I made broke text-davinci-003 prompting
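As an illustration of the model-dependent prompts idea above, a hedged sketch of picking a Q&A template per model. The template wording and the qa_prompt_for helper are assumptions for illustration, not gpt_index defaults:

```python
from gpt_index import QuestionAnswerPrompt

# looser template in the style used for completion models
DAVINCI_QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information, answer the question: {query_str}\n"
)

# gpt-3.5-turbo tends to need more explicit instructions to stay grounded
CHATGPT_QA_TMPL = (
    "Answer using only the context below. Do not apologize or refuse.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Question: {query_str}\n"
    "Answer:"
)

def qa_prompt_for(model_name: str) -> QuestionAnswerPrompt:
    # hypothetical helper: route to a template based on the model
    tmpl = CHATGPT_QA_TMPL if "turbo" in model_name else DAVINCI_QA_TMPL
    return QuestionAnswerPrompt(tmpl)

# usage sketch:
# index.query(query, text_qa_template=qa_prompt_for("gpt-3.5-turbo"))
```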
Attachments: IMG_6762.png, IMG_6763.png
some more examples of the gpt turbo behavior
The main issue seems to be that if the refinement step doesn't improve the answer, it just loses the information instead of echoing back the already-good answer from the previous step
this seems to be a pretty common issue
chatgpt drops more context than text-davinci-003
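One workaround that follows from this discussion would be a custom refine prompt that explicitly tells the model to repeat the existing answer when the new context adds nothing. A sketch, assuming gpt_index's RefinePrompt with its {query_str}, {existing_answer}, and {context_msg} template variables; the wording is illustrative, not the library default:

```python
from gpt_index import RefinePrompt

REFINE_TMPL = (
    "The original question is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Here is more context:\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
    "If the new context improves the answer, refine it. "
    "Otherwise, repeat the existing answer verbatim."
)
REFINE_PROMPT = RefinePrompt(REFINE_TMPL)

# usage sketch:
# index.query(query, refine_template=REFINE_PROMPT)
```

The key design choice is the final instruction: it gives the model an explicit fallback path, so a no-op refinement step echoes the prior answer instead of dropping it.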