Hi everyone, I have a question about OpenAI. I am currently using the free credit since I am still in the testing phase, and today I ran into a problem. I use "gpt-3.5-turbo" for my queries, and today it started telling me "Rate limit reached for default-gpt-3.5-turbo", saying that the limit is 3 requests/min. I have never had this error before, and I have always used the same account, the same organisation, and the same key with a lot more than 3 requests/min. Does this come from an OpenAI update or something?
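Whatever the cause of the lower limit, a common way to cope with it on the client side is to retry with exponential backoff whenever a rate-limit error comes back. Below is a minimal, library-agnostic sketch; the `RateLimitError` class here is a hypothetical stand-in for whatever exception your SDK raises (e.g. the rate-limit error in the `openai` package), so swap it for the real one.

```python
import random
import time


# Hypothetical stand-in for the SDK's rate-limit exception --
# replace with the real class from the client library you use.
class RateLimitError(Exception):
    pass


def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(); on RateLimitError, wait and retry with exponential backoff.

    Sleeps base_delay, 2*base_delay, 4*base_delay, ... plus a small random
    jitter, so bursts of requests do not all retry at the same instant.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
    raise RuntimeError(f"still rate-limited after {max_retries} retries")
```

You would then wrap each API call, e.g. `with_backoff(lambda: client.chat.completions.create(...))`, which keeps you under a 3 requests/min cap at the cost of latency rather than failed requests.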
Hi everyone, I have a few questions. Since I am using llama_index as a chatbot, it needs to behave like one. The problem I am having is that it always answers by talking about the context. For example, if after a question I say "okay thanks", it answers me: "Given the additional context provided, the original answer regarding ... remains accurate. Therefore there's no need to refine the original answer." And what is in place of the dots is not even what I asked before. Is there a way to fix this and make it give a normal answer in cases like this? Another thing, also about the chatbot referring to the context, is that it always does it. After a normal question it answers "...in the provided context..." or "Given the context..." or something like that. Can I somehow remove that and just get the actual answer? Because like this it is a bit weird as a chatbot if clients have to use it.
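That phrasing usually comes from the default question-answering and refine prompt templates, which themselves talk about "the context", and the model echoes them. A common fix is to supply your own template that forbids that wording. The sketch below is plain Python showing only the template text; the exact class that wraps it (e.g. a prompt-template class) and the keyword argument for passing it to the query engine depend on your llama_index version, so treat those names as assumptions to check against your installed docs.

```python
# A custom QA prompt that tells the model to answer conversationally and
# never refer to "the context". The {context_str} and {query_str}
# placeholders are the two slots llama_index's QA templates fill in.
CUSTOM_QA_TEMPLATE = (
    "You are a helpful assistant chatting with a customer.\n"
    "Background information (do NOT mention it or refer to 'the context'):\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question directly and conversationally. If the message is\n"
    "small talk (for example 'okay thanks'), reply naturally instead of\n"
    "refining a previous answer.\n"
    "Question: {query_str}\n"
    "Answer: "
)


def render_prompt(context_str: str, query_str: str) -> str:
    """Fill in the template the same way a query engine would."""
    return CUSTOM_QA_TEMPLATE.format(context_str=context_str,
                                     query_str=query_str)
```

A similar replacement for the refine template is what stops the "no need to refine the original answer" replies to messages like "okay thanks".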
Ok thanks, and could creating a chat agent help get even better answers than just increasing top k? Because since I cannot create the same structure as in the chatbot example, I thought that maybe I can do the same thing but as if I only have 1 year. Does that make sense?
All right, very nice, a feature like that would be really useful.