```python
import tiktoken
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler

token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
```

I use this to calculate token usage for GPT models. How do I calculate token usage for Gemini-based models such as gemini-pro or gemini-1.5-flash-latest?
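One idea I'm considering (a rough sketch, not verified): `TokenCountingHandler` only needs a callable that maps a string to a list whose length equals the token count, so the Gemini SDK's `count_tokens()` could be wrapped to fit that interface. The use of `google.generativeai` and the dummy-list adapter below are my own assumptions, not an official recipe:

```python
import google.generativeai as genai
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler

# Assumption: the API key is already configured, e.g.
# genai.configure(api_key="...")

# count_tokens() makes an API call and returns an object
# with a .total_tokens attribute.
gemini_model = genai.GenerativeModel("gemini-pro")

def gemini_tokenizer(text: str) -> list:
    """Adapter: TokenCountingHandler only takes len() of the returned
    list, so return a dummy list sized to the reported token count."""
    return [0] * gemini_model.count_tokens(text).total_tokens

token_counter = TokenCountingHandler(tokenizer=gemini_tokenizer)
callback_manager = CallbackManager([token_counter])
```

My concern with this approach is that every count is a network round trip, which could be slow. Is there a supported local tokenizer for Gemini models, or is calling `count_tokens()` the intended way?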