Updated 9 months ago

import tiktoken
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings


token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)

Settings.llm = OpenAI(model="gpt-3.5-turbo", temperature=0.2)
Settings.callback_manager = CallbackManager([token_counter])
Can we use this TokenCountingHandler with the BAAI/bge-small model instead of OpenAI? If so, how?
2 comments
Change the tokenizer to one from Hugging Face:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-small-en-v1.5")