The OpenAI response looks like this:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "\n\nThis is a test!"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}

I know there is a new tokenizer implementation in llama-index, but can we get this kind of body directly in the response, or at least the same usage object in the response?
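(A hedged sketch of what I mean, assuming a recent llama-index version with the TokenCountingHandler callback; the model name, data directory, and query below are placeholders. It exposes counts equivalent to the usage block above rather than the raw OpenAI body.)

import tiktoken
from llama_index import ServiceContext, VectorStoreIndex, SimpleDirectoryReader
from llama_index.callbacks import CallbackManager, TokenCountingHandler

# Count tokens with the same encoding the model uses (assumed: gpt-3.5-turbo).
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
service_context = ServiceContext.from_defaults(
    callback_manager=CallbackManager([token_counter])
)

documents = SimpleDirectoryReader("./data").load_data()  # placeholder data dir
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
response = index.as_query_engine().query("This is a test!")  # placeholder query

# Rough equivalent of the OpenAI "usage" block:
print(token_counter.prompt_llm_token_count)      # ~ prompt_tokens
print(token_counter.completion_llm_token_count)  # ~ completion_tokens
print(token_counter.total_llm_token_count)       # ~ total_tokens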
Hi, I am getting this error for the Notion connector: "Cannot instantiate this tokenizer from a slow version. If it's based on sentencepiece, make sure you have sentencepiece installed." Is there a direct solution for it?
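(Not an official fix, just what usually resolves that message: the error comes from the HuggingFace tokenizers path and typically means the sentencepiece package is missing from the environment. The model name below is only an illustrative sentencepiece-based example.)

# Usually resolved by installing the missing dependency first:
#   pip install sentencepiece
# (protobuf is sometimes needed as well)
import sentencepiece  # should import cleanly after installation

from transformers import AutoTokenizer

# Illustrative check: loading any sentencepiece-based tokenizer exercises the same code path.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
print(tokenizer.tokenize("sanity check"))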
Hi, this is my block of code for the Jira connector. The functions run without any error, but printing the documents gives [] even though I have 2 tickets in my Jira account. I can see similar issues with the GitHub connector as well.
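(A hedged sketch of the kind of call I mean, assuming the llama-hub JiraReader interface; the email, token, server URL, and JQL string are placeholders, and parameter names may differ by version. An empty list often just means the JQL query matches no issues.)

from llama_index import download_loader

# Assumption: the llama-hub Jira loader.
JiraReader = download_loader("JiraReader")
reader = JiraReader(
    email="me@example.com",                   # placeholder
    api_token="YOUR_ATLASSIAN_API_TOKEN",     # placeholder
    server_url="your-domain.atlassian.net",   # placeholder
)

# Worth testing the same JQL in Jira's issue search first:
# if it returns nothing there, load_data() will also return [].
documents = reader.load_data(query="project = TEST")  # placeholder JQL
print(len(documents), documents)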
Hi folks, when I use a small number of documents, GPT Index gives me the expected response. But when the document volume is high, it struggles to give the correct answer and takes a lot of time. Any heads-up on it?
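(A minimal sketch of one common mitigation, assuming a recent llama-index / GPT Index version: use a vector index so queries only pull in the most relevant chunks instead of every document, and keep similarity_top_k small. The data path and query are placeholders.)

from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./docs").load_data()  # placeholder path

# A vector index retrieves only the chunks most similar to the question,
# which keeps both latency and prompt size bounded as the corpus grows.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=3)  # retrieve top 3 chunks

response = query_engine.query("What does the contract say about renewal?")  # placeholder
print(response)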