
Updated 2 years ago

Token usage

Ok I’ll check that out. And in terms of calculating tokens manually, what components contribute to the total token usage?

Just thinking of doing a manual calculation to understand, haha
1 comment
The embedding model and the LLM predictor are the two root components that contribute to token counts.

Internally, a function decorator is used to count tokens for the different functions in different places.
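If you want to do the manual calculation yourself, a minimal sketch looks like the one below. Everything here is illustrative: the function names are made up, and the ~4-characters-per-token heuristic is only a rough approximation — exact counts require running the actual tokenizer for your model (e.g. tiktoken for OpenAI models).

```python
# Hypothetical manual estimate of total token usage.
# Assumption: ~4 characters per token, a common rough heuristic.
# Real counts require the model's tokenizer (e.g. tiktoken).

def estimate_tokens(text: str) -> int:
    """Rough token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def total_usage(embedded_chunks, prompt, completion):
    # Embedding model: tokens for every chunk sent to be embedded.
    embedding_tokens = sum(estimate_tokens(c) for c in embedded_chunks)
    # LLM predictor: prompt tokens plus completion tokens.
    llm_tokens = estimate_tokens(prompt) + estimate_tokens(completion)
    return {
        "embedding": embedding_tokens,
        "llm": llm_tokens,
        "total": embedding_tokens + llm_tokens,
    }

usage = total_usage(
    embedded_chunks=["chunk one of my document", "chunk two of my document"],
    prompt="Context: ...\nQuestion: what is token usage?",
    completion="Token usage is the sum of prompt and completion tokens.",
)
print(usage)
```

So the total is just the sum of the embedding-side tokens (every chunk you embed at index time and every query you embed at query time) and the LLM-side tokens (prompt plus completion for each call).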