Hi all, I'm having what's hopefully a simple issue that I haven't been able to find any reference to online.

I'm trying to hook llama-index up with Ollama, specifically using OllamaMultiModal to run a LLaVA model. I have pulled llava and can run it with "ollama run llava" as normal, but as soon as I try to run the following:
Plain Text
from llama_index.multi_modal_llms.ollama import OllamaMultiModal
mm_llm = OllamaMultiModal(model="llava")

I get a pydantic error:
Plain Text
File "---\agent.py", line 7, in <module>
    mm_llm = OllamaMultiModal(model="llava")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "---\.venv\Lib\site-packages\llama_index\multi_modal_llms\ollama\base.py", line 81, in __init__
    super().__init__(**kwargs)
  File "---\.venv\Lib\site-packages\pydantic\main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for OllamaMultiModal
request_timeout
  Field required [type=missing, input_value={'model': 'llava'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.9/v/missing
3 comments
The model requires the request_timeout field; try passing it in, e.g. mm_llm = OllamaMultiModal(model="llava", request_timeout=60)
You're a lifesaver! πŸ™‚ That page was actually the one that got me into this mess, I guess, because it doesn't include the request_timeout param.
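For anyone who lands here later, a minimal sketch of the working setup, assuming the llama-index Ollama multi-modal integration is installed and an Ollama server with the llava model pulled is running locally; the image path and prompt below are placeholders:
Plain Text
from llama_index.core.schema import ImageDocument
from llama_index.multi_modal_llms.ollama import OllamaMultiModal

# request_timeout (in seconds) has no default in this version of the
# integration, so it must be passed explicitly
mm_llm = OllamaMultiModal(model="llava", request_timeout=60)

# placeholder image file; replace with your own
image_doc = ImageDocument(image_path="example.jpg")

response = mm_llm.complete(
    prompt="Describe this image.",
    image_documents=[image_doc],
)
print(response.text)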