Find answers from the community

Subhrajit Pramanick
Joined September 25, 2024
The OpenAI response looks like this:
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "\n\nThis is a test!"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
I know there is a new tokenizer implementation in llama-index, but can we get this kind of body directly in the response, or at least the same usage block in the response?
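A minimal sketch of getting the same usage numbers out of llama-index, assuming a version that ships the TokenCountingHandler callback (the ./data folder and the query string are placeholders):

import tiktoken
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.callbacks import CallbackManager, TokenCountingHandler

# Count tokens with the same encoding the OpenAI model uses
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
service_context = ServiceContext.from_defaults(
    callback_manager=CallbackManager([token_counter])
)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
response = index.as_query_engine().query("This is a test!")

# These counters mirror the "usage" block of the raw OpenAI response
print(token_counter.prompt_llm_token_count)
print(token_counter.completion_llm_token_count)
print(token_counter.total_llm_token_count)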
2 comments
Hi, I am getting this error with the Notion connector: "Cannot instantiate this tokenizer from a slow version. If it's based on sentencepiece, make sure you have sentencepiece installed."
Is there a direct solution for it?
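That error generally means the underlying Hugging Face tokenizer is missing the sentencepiece backend, so the usual fix is pip install sentencepiece. A quick sanity check, assuming that is indeed the cause:

# Run `pip install sentencepiece` first if this import fails;
# the tokenizer error is typically raised when the package is missing.
import sentencepiece
print(sentencepiece.__version__)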
4 comments
Hi,
This is my block of code for the Jira connector. The functions run without any error, but the printed documents list is [] even though I have 2 tickets in my Jira account. I can see similar issues with the GitHub connector as well.
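The original code block is not shown here, but for reference a typical llama-hub JiraReader setup looks roughly like the sketch below. The credential values, parameter names, and the JQL query are assumptions to check against the loader's README; an empty list usually means the JQL query matched no issues.

from llama_index import download_loader

# Download the community Jira loader from llama-hub
JiraReader = download_loader("JiraReader")

# Placeholder credentials; use an Atlassian API token, not a password
reader = JiraReader(
    email="you@example.com",
    api_token="YOUR_API_TOKEN",
    server_url="your-site.atlassian.net",
)

# If this JQL matches no issues, load_data() returns an empty list
documents = reader.load_data(query="project = MYPROJECT")
print(len(documents))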
1 comment
I am currently stuck on an issue with ComposableGraph; I have given more info in the doc below. It is not able to produce the output if the answer node is at the top. https://docs.google.com/document/d/1QLPCRKKriiEJqMmzLGUn5ougkx9lGVdZctm1zU4K5_0/edit?usp=sharing
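For context, a minimal ComposableGraph setup looks roughly like this sketch; the sub-indices, summaries, and paths are placeholders, not the setup from the linked doc:

from llama_index import ListIndex, SimpleDirectoryReader, VectorStoreIndex
from llama_index.indices.composability import ComposableGraph

docs1 = SimpleDirectoryReader("./data1").load_data()
docs2 = SimpleDirectoryReader("./data2").load_data()
index1 = VectorStoreIndex.from_documents(docs1)
index2 = VectorStoreIndex.from_documents(docs2)

# The root (top) index is a list index built over the sub-index summaries
graph = ComposableGraph.from_indices(
    ListIndex,
    [index1, index2],
    index_summaries=["Summary of data1", "Summary of data2"],
)

query_engine = graph.as_query_engine()
response = query_engine.query("a question spanning both sub-indices")
print(response)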
3 comments
Hi folks,
When I use fewer documents, GPT Index gives me the expected response, but when the document volume is high it struggles to give the correct answer and takes a lot of time. Any heads-up on it?
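One common cause is querying an index type that sends every document to the LLM. A hedged sketch of a vector index that only retrieves the top-k most similar chunks, which keeps latency roughly flat as the corpus grows (the path and top_k value are placeholders):

from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()

# Only the top-k most similar chunks are sent to the LLM per query,
# so response time does not scale with the total document count
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("your question here")
print(response)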
2 comments
Hi @jerryjliu0, is a callback manager in scope for the future? If yes, when can we expect it?
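For reference, later llama-index releases did add a CallbackManager; a minimal sketch using the built-in debug handler (the data path and query are placeholders):

from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.callbacks import CallbackManager, LlamaDebugHandler

# LlamaDebugHandler prints a trace of each query's internal events
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
service_context = ServiceContext.from_defaults(
    callback_manager=CallbackManager([llama_debug])
)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
index.as_query_engine().query("your question here")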
5 comments