
Updated 3 months ago

Token counting

From https://gpt-index.readthedocs.io/en/latest/examples/callbacks/TokenCountingHandler.html — is the tokenizer accurate if you use gpt-3.5-turbo-16k like so?
Plain Text
import tiktoken
from llama_index import LLMPredictor
from llama_index.callbacks import CallbackManager, TokenCountingHandler
from langchain.chat_models import ChatOpenAI

# Use the same encoding the model itself uses for accurate counts
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo-16k").encode
)

callback_manager = CallbackManager([token_counter])

llm_predictor = LLMPredictor(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0)
)
3 comments
That looks right at first glance!
Was there an issue?
@Logan M I was testing turbo-16k vs turbo and saw some odd behavior: some responses reported 0 token usage with the -16k model, which I didn't see with the non-16k model. I was just checking that the tokenizer supports the various model names. Thanks!