A) What tokenizer does LlamaIndex use when I supply a local embedding model and a local language model (through Ollama), like in this code? B) And how do I supply a tokenizer for an LLM that I am pulling from my own Ollama repo?
The Ollama wrapper in LlamaIndex only passes your text, in the required format, to the hosted model. As far as I can tell, tokenization for generation is handled entirely on the Ollama side (the tokenizer ships inside the model file itself), not on the llama-index side, so there is nothing to supply to the Ollama LLM wrapper.
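That said, LlamaIndex does keep its own tokenizer internally, but only for token counting and text splitting (by default it uses tiktoken's cl100k_base encoding, which won't match a Llama-family model). If you want those counts to line up with the model you serve from Ollama, you can point `Settings.tokenizer` at the matching Hugging Face tokenizer. A minimal sketch, assuming the Hugging Face model id below corresponds to the model you actually pull from Ollama:

```python
from llama_index.core import Settings
from transformers import AutoTokenizer

# LlamaIndex only uses this tokenizer for token counting and chunking;
# generation-time tokenization still happens inside Ollama.
# "meta-llama/Llama-3.1-8B-Instruct" is an assumed model id -- replace it
# with the Hugging Face repo that matches your Ollama model.
hf_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Settings.tokenizer expects a callable that maps a string to token ids.
Settings.tokenizer = hf_tokenizer.encode
```

This only affects llama-index's bookkeeping (chunk sizes, token counters); it does not change how the model itself tokenizes, since that is fixed by the model file Ollama serves.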