I want to use a llava model from HF instead of ollama. Any idea how that would work in this example? https://docs.llamaindex.ai/en/stable/examples/multi_modal/ollama_cookbook/?h=multimodal

Passing the HF model in directly fails with:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-58-6d30c419b9a0> in <cell line: 1>()
----> 1 response = mm_program(query_str="What was the value of Non Revenue Units in Apr 2022?")
      2 print(response)

1 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1686         if name in modules:
   1687             return modules[name]
-> 1688         raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
   1689
   1690     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'LlavaForConditionalGeneration' object has no attribute 'complete'
```
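The AttributeError happens because `MultiModalLLMCompletionProgram` calls `.complete()` on whatever multi-modal LLM it is given, and a raw `transformers` `LlavaForConditionalGeneration` is a torch module with no such method. To use an HF llava checkpoint, the model has to sit behind an object that exposes that kind of interface. Below is a minimal sketch of the idea only, not a drop-in replacement: the wrapper class, model name, and prompt format are my assumptions (llava-1.5 style), and a real replacement would have to subclass LlamaIndex's `MultiModalLLM` base class and implement its full interface.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration


class HFLlavaWrapper:  # hypothetical adapter, not a LlamaIndex class
    """Expose a complete()-style call backed by an HF llava checkpoint."""

    def __init__(self, model_name: str = "llava-hf/llava-1.5-7b-hf"):
        self.processor = AutoProcessor.from_pretrained(model_name)
        self.model = LlavaForConditionalGeneration.from_pretrained(
            model_name, torch_dtype=torch.float16, device_map="auto"
        )

    def complete(self, prompt: str, image: Image.Image) -> str:
        # llava-1.5 expects the image placeholder before the question text.
        text = f"USER: <image>\n{prompt} ASSISTANT:"
        inputs = self.processor(images=image, text=text, return_tensors="pt").to(
            self.model.device
        )
        output_ids = self.model.generate(**inputs, max_new_tokens=256)
        return self.processor.batch_decode(output_ids, skip_special_tokens=True)[0]
```

The Ollama-specific piece in the cookbook is only the `OllamaMultiModal` object, so once the LLM slot is filled with something that implements the expected interface, the rest of the program setup should stay the same.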
With the cookbook's Ollama setup on Colab I get a different error:

```
---------------------------------------------------------------------------
ConnectError                              Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py in map_httpcore_exceptions()
     65
---> 66 @contextlib.contextmanager
     67 def map_httpcore_exceptions() -> typing.Iterator[None]:

22 frames
ConnectError: [Errno 99] Cannot assign requested address

The above exception was the direct cause of the following exception:

ConnectError                              Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py in map_httpcore_exceptions()
     81
     82     if mapped_exc is None:  # pragma: no cover
---> 83         raise
     84
     85     message = str(exc)

ConnectError: [Errno 99] Cannot assign requested address
```
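`ConnectError: [Errno 99] Cannot assign requested address` from httpx means the client could not open a connection to `http://localhost:11434` from inside the Colab VM, which in this kind of setup most commonly happens because no Ollama server is actually running there. A quick sanity check before calling the program (the "Ollama is running" reply is what current Ollama builds return on the root path, so treat that detail as an assumption):

```python
import httpx

# Sanity check: is anything answering on the Ollama port inside this VM?
try:
    r = httpx.get("http://localhost:11434", timeout=5)
    print(r.status_code, r.text)  # Ollama normally replies "Ollama is running"
except httpx.ConnectError as exc:
    print("No server listening on 11434:", exc)
```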
This is the call that fails:

```python
response = mm_program(query_str="xxxxx?")
print(response)
```
`mm_program` is `<llama_index.core.program.multi_modal_llm_program.MultiModalLLMCompletionProgram at 0x7e35985700d0>`.
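For context, the linked cookbook builds that program with `MultiModalLLMCompletionProgram.from_defaults`, and the `multi_modal_llm` argument is the object whose `.complete()` ends up being called. The sketch below is my reading of that example rather than a verified copy; the output schema, image folder, and prompt string are placeholders:

```python
from pydantic import BaseModel, Field

from llama_index.core import SimpleDirectoryReader
from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.program import MultiModalLLMCompletionProgram
from llama_index.multi_modal_llms.ollama import OllamaMultiModal


# Placeholder output schema; the cookbook defines its own Pydantic class.
class ChartValue(BaseModel):
    metric: str = Field(description="Name of the metric read off the chart")
    value: str = Field(description="Value reported for that metric")


mm_llm = OllamaMultiModal(model="llava:13b")
image_documents = SimpleDirectoryReader("./images").load_data()  # path is a placeholder

prompt_template_str = (
    "Use the attached image to answer the question.\n"
    "Question: {query_str}\n"
)

mm_program = MultiModalLLMCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(output_cls=ChartValue),
    image_documents=image_documents,
    prompt_template_str=prompt_template_str,
    multi_modal_llm=mm_llm,  # the slot an HF-backed LLM would have to fill
    verbose=True,
)
```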
And the `ollama` object? It is `OllamaMultiModal(base_url='http://localhost:11434', model='llava:13b', temperature=0.75, context_window=3900, request_timeout=None, additional_kwargs={})`.
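Since `base_url` and `request_timeout` show up in that repr, both can be set explicitly when the object is constructed, for example to point at wherever the Ollama server actually runs instead of the Colab-local default (the URL below is a placeholder):

```python
from llama_index.multi_modal_llms.ollama import OllamaMultiModal

mm_llm = OllamaMultiModal(
    model="llava:13b",
    base_url="http://my-ollama-host:11434",  # placeholder: a tunnel, another VM, etc.
    request_timeout=120.0,  # vision models can take a while to respond
)
```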
The `base_url` is pointing to localhost, and I am not sure how that would work on Colab. This is what the install looked like:

```
curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%
>>> Installing ollama to /usr/local/bin...
>>> Creating ollama user...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
WARNING: Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
```
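One thing the install output hints at: the script sets up a systemd service, but Colab does not run systemd, so installing alone does not leave anything listening on 127.0.0.1:11434. The server has to be started by hand and the model pulled first. A sketch, assuming the standard `ollama` CLI that the script installs:

```python
import subprocess
import time

import httpx

# Start the Ollama server in the background (systemd will not do it on Colab).
server = subprocess.Popen(
    ["ollama", "serve"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)

# Wait until the API answers on 127.0.0.1:11434.
for _ in range(30):
    try:
        httpx.get("http://localhost:11434", timeout=2)
        break
    except httpx.HTTPError:
        time.sleep(1)

# Pull the multimodal model the cookbook uses before querying it.
subprocess.run(["ollama", "pull", "llava:13b"], check=True)
```

After that, the default `base_url='http://localhost:11434'` should be reachable from the same Colab runtime.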