I think I'm missing something about how tokens are calculated. I'm getting this error:

> This model's maximum context length is 4097 tokens, however you requested 4499 tokens (3499 in your prompt; 1000 for the completion). Please reduce your prompt; or completion length.
But my prompt was only about 15 words. How could it possibly come out to 3,499 tokens?
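
For what it's worth, counting the tokens locally with tiktoken gives me nowhere near 3,499. This is just a minimal sketch: the prompt string below is a placeholder standing in for mine, and I'm assuming the p50k_base encoding since 4097 is the davinci-style context window:

```python
import tiktoken

# Placeholder standing in for my actual ~15-word prompt
prompt = "Summarize the main plot points of Hamlet in three short sentences for a ten year old."

# Assumption: p50k_base, the encoding used by the 4097-token davinci models;
# tiktoken.encoding_for_model("your-model-name") would pick the right one automatically
enc = tiktoken.get_encoding("p50k_base")

token_ids = enc.encode(prompt)
print(len(token_ids))  # prints roughly 15-20, nowhere near the 3499 the API reported
```

So I can't see where the other ~3,480 tokens are coming from. Is the API counting something besides the prompt string I send in the request?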