What should `embed_model` and the tokenizer be? For OpenAI models I can use

```python
tokenizer = tiktoken.encoding_for_model(model)
```

which lets me compute the token_counter based on the same encoding the model uses, but with Groq that doesn't seem to quite work. For a Llama-family model it looks like I need something along the lines of

```python
tokenizer = AutoTokenizer.from_pretrained("<some llama2 model>").encode
```
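To make the swap concrete, here is a minimal sketch of a token counter that accepts any tokenizer callable, so the same counting code works whether you pass in `tiktoken.encoding_for_model(model).encode` or `AutoTokenizer.from_pretrained(...).encode`. The `count_tokens` helper and the toy tokenizer are hypothetical names for illustration; the toy tokenizer stands in so the sketch runs offline, since real tokenizers need model files:

```python
from typing import Callable, List

def count_tokens(text: str, tokenizer: Callable[[str], List[int]]) -> int:
    """Count tokens using any callable that maps text to a list of token ids.

    In practice, tokenizer would be tiktoken.encoding_for_model(model).encode
    for OpenAI models, or AutoTokenizer.from_pretrained(...).encode for
    Llama-family models (e.g. when the LLM is served via Groq).
    """
    return len(tokenizer(text))

# Offline stand-in: one "token" per whitespace-separated word.
# Real tokenizers return actual subword token ids instead.
def toy_tokenizer(text: str) -> List[int]:
    return list(range(len(text.split())))

print(count_tokens("hello groq token counting", toy_tokenizer))  # 4
```

The point is that the counting logic never depends on which tokenizer produced the ids, so switching providers only means switching the callable you hand in.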