
.dan._
Hi all, I'm having what's hopefully a simple issue that I haven't been able to find any reference to online.

I'm trying to hook llama-index up with Ollama, specifically using OllamaMultiModal to run a LLaVA model. I have pulled llava and can run it with "ollama run llava" as normal, but as soon as I try to run the following:
Plain Text
from llama_index.multi_modal_llms.ollama import OllamaMultiModal
mm_llm = OllamaMultiModal(model="llava")

I get a Pydantic error:
Plain Text
File "---\agent.py", line 7, in <module>
    mm_llm = OllamaMultiModal(model="llava")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "---\.venv\Lib\site-packages\llama_index\multi_modal_llms\ollama\base.py", line 81, in __init__
    super().__init__(**kwargs)
  File "---\.venv\Lib\site-packages\pydantic\main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for OllamaMultiModal
request_timeout
  Field required [type=missing, input_value={'model': 'llava'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.9/v/missing
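Reading the traceback, it looks like request_timeout has no default value in this version of the integration, so the validator rejects a constructor call that only passes model. Passing request_timeout explicitly might work around the error; the 120-second value below is just an illustrative guess, not a documented default:
Plain Text
from llama_index.multi_modal_llms.ollama import OllamaMultiModal

# request_timeout is a required field per the validation error;
# 120.0 seconds is an arbitrary example value, not a recommended default
mm_llm = OllamaMultiModal(model="llava", request_timeout=120.0)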