Prompt token usage
Updated 2 years ago

I think I'm missing something about how tokens are calculated. I'm getting this:
This model's maximum context length is 4097 tokens, however you requested 4499 tokens (3499 in your prompt; 1000 for the completion). Please reduce your prompt; or completion length.

But my prompt was like 15 words... how's it possible that it used so many?
1 comment
Have you changed any prompt-related settings? What type of index are you using?

Your prompt might only be 15 words, but when the query runs over the index, each retrieved "chunk" gets inserted into the prompt before the model answers, so the prompt that's actually sent can be far larger than what you typed.
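
A quick way to confirm this is to token-count the assembled prompt rather than the raw question. Here is a rough sketch using tiktoken; the model name, chunk contents, and chunk count are made-up stand-ins, not what your index actually retrieved:

```python
# Rough sketch (not LlamaIndex internals): simulate how retrieved chunks
# inflate the prompt that is actually sent to the model.
import tiktoken

# Encoding assumed for a 4097-token model like the one in the error.
enc = tiktoken.encoding_for_model("text-davinci-003")

query = "What does the report say about revenue growth this year?"  # ~a dozen words

# Hypothetical retrieved chunks; real ones come from your index and are
# often several hundred to ~1000 tokens each.
retrieved_chunks = [
    "Lorem ipsum revenue grew strongly in the third quarter. " * 80
    for _ in range(4)
]

prompt = "\n\n".join(retrieved_chunks + [query])

prompt_tokens = len(enc.encode(prompt))
completion_tokens = 1000  # the completion budget reserved in the error message

print(f"query alone: {len(enc.encode(query))} tokens")
print(f"full prompt: {prompt_tokens} tokens")
print(f"requested:   {prompt_tokens + completion_tokens} tokens (limit is 4097)")
```

If the total comes out above 4097, the usual fixes are retrieving fewer or smaller chunks (for example a lower similarity_top_k or a smaller chunk size, depending on your index settings) or reserving fewer tokens for the completion.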