
Do we need to call an API to get the token count? Or is it an estimation algorithm that runs locally?

At a glance

The post asks whether an API call is needed to get the token count, or whether it comes from an estimation algorithm that runs locally. A community member responds that the token counts reported in the terminal are the actual token counts of the data sent to the LLM/embedding models. They also point to a page in the GPT-Index documentation that explains how to estimate token usage before running anything.

The token counts reported in the terminal are the actual token counts of the data sent to the LLM/embedding models.

You can estimate the token usage before running anything, though: https://gpt-index.readthedocs.io/en/latest/how_to/analysis/cost_analysis.html
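For a rough local estimate that never touches an API, you can run a tokenizer over whatever text you are about to send. The sketch below uses tiktoken, which is not mentioned in the thread; it is an illustrative assumption, not the approach described on the linked cost-analysis page (which covers GPT-Index's own estimation tools).

```python
# Minimal sketch: counting tokens locally with tiktoken (no API call).
# tiktoken and the model name below are assumptions for illustration;
# the linked GPT-Index docs describe the library's own estimation workflow.
import tiktoken


def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    """Return the number of tokens `text` would use for the given model."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))


if __name__ == "__main__":
    prompt = "Do we need to call an API to get the token count?"
    # Printed count is computed entirely on your machine.
    print(count_tokens(prompt))
```

This only approximates what the provider will bill, since the final request may include extra prompt scaffolding, but it is enough to sanity-check cost before sending anything.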