Yeah, Ollama is basically an easy way to run several different models locally
When I said Ollama is an LLM, I meant it more in the abstraction sense — it's the interface you call, not the model itself
For example:
from llama_index.llms import Ollama

# Assumes an Ollama server is running locally and that llama2 has already been pulled
llm = Ollama(model="llama2", request_timeout=300)  # timeout is in seconds
response = llm.complete("Hello!")
print(response)
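(Side note: on newer llama_index releases the Ollama integration was split into its own package, so the import path is a bit different — something like this, assuming you've installed llama-index-llms-ollama:)
# pip install llama-index-llms-ollama
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama2", request_timeout=300)
print(llm.complete("Hello!"))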