I tried using a vision LLM, and it said:

Plain Text
Traceback (most recent call last):
  File "/Users/zachhandley/Documents/GitHub/my-project/api/app/db/vector_stores_temp.py", line 295, in <module>
    asyncio.run(main())
  File "/Users/zachhandley/Documents/GitHub/my-project/api/.venv/lib/python3.11/site-packages/nest_asyncio.py", line 30, in run
    return loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zachhandley/Documents/GitHub/my-project/api/.venv/lib/python3.11/site-packages/nest_asyncio.py", line 98, in run_until_complete
    return f.result()
           ^^^^^^^^^^
  File "/usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/tasks.py", line 277, in __step
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "/Users/zachhandley/Documents/GitHub/my-project/api/app/db/vector_stores_temp.py", line 238, in main
    user_images = await vector_store_temp.get_user_images()
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zachhandley/Documents/GitHub/my-project/api/app/db/vector_stores_temp.py", line 191, in get_user_images
    return await self._image_retriever.aretrieve(query_str)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zachhandley/Documents/GitHub/my-project/api/app/ai/zimage_retriever.py", line 249, in aretrieve
    return await self._atext_to_image_retrieve(query)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zachhandley/Documents/GitHub/my-project/api/app/ai/zimage_retriever.py", line 220, in _atext_to_image_retrieve
    engine = index.as_chat_engine(
             ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zachhandley/Documents/GitHub/my-project/api/.venv/lib/python3.11/site-packages/llama_index/core/indices/base.py", line 413, in as_chat_engine
    resolve_llm(llm, callback_manager=self._callback_manager)
  File "/Users/zachhandley/Documents/GitHub/my-project/api/.venv/lib/python3.11/site-packages/llama_index/core/llms/utils.py", line 101, in resolve_llm
    llm.callback_manager = callback_manager or Settings.callback_manager
    ^^^^^^^^^^^^^^^^^^^^
  File "pydantic/main.py", line 357, in pydantic.main.BaseModel.__setattr__
ValueError: "OpenAIMultiModal" object has no field "callback_manager"
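The last two frames explain the error: `resolve_llm` assigns `llm.callback_manager = ...`, but `OpenAIMultiModal` is a pydantic model that does not declare a `callback_manager` field, and pydantic's strict `__setattr__` rejects assignments to undeclared fields. A minimal stand-alone sketch of that behavior (class names here are illustrative stand-ins, not llama_index's actual implementation):

```python
# Mimic of pydantic v1's BaseModel.__setattr__, which raises ValueError
# when the attribute being set is not a declared model field.
class StrictModel:
    __fields__ = {"model", "temperature"}  # declared fields only

    def __setattr__(self, name, value):
        if name not in self.__fields__:
            raise ValueError(
                f'"{type(self).__name__}" object has no field "{name}"'
            )
        object.__setattr__(self, name, value)


class OpenAIMultiModalLike(StrictModel):
    """Stand-in for a multi-modal LLM model with no callback_manager field."""


llm = OpenAIMultiModalLike()
try:
    llm.callback_manager = object()  # what resolve_llm attempts
except ValueError as e:
    print(e)  # "OpenAIMultiModalLike" object has no field "callback_manager"
```

So the crash is not in the user's code at all: `as_chat_engine` funnels the multi-modal LLM through `resolve_llm`, which assumes every LLM it receives declares `callback_manager`.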
Plain Text
    def set_callback_manager(self, callback_manager: Any) -> None:
        """Set callback manager."""
        # TODO: make callbacks work with multi-modal
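The TODO in that excerpt is the tell: callbacks are not yet wired up for multi-modal models, so the field the resolver expects was never declared. Continuing the illustrative mimic from above, declaring the field is enough to make the assignment succeed (a hedged sketch of the shape of a fix, not llama_index's actual code):

```python
# Same strict-model mimic as before: assignments are rejected unless the
# attribute is a declared field.
class StrictModel:
    __fields__ = set()

    def __setattr__(self, name, value):
        if name not in self.__fields__:
            raise ValueError(
                f'"{type(self).__name__}" object has no field "{name}"'
            )
        object.__setattr__(self, name, value)


class PatchedMultiModal(StrictModel):
    # Hypothetical: the multi-modal model now declares callback_manager,
    # so resolve_llm's assignment no longer raises.
    __fields__ = {"callback_manager"}


llm = PatchedMultiModal()
llm.callback_manager = "my-callback-manager"  # accepted, no ValueError
print(llm.callback_manager)
```

Until the library declares that field on its multi-modal classes, the practical takeaway is that multi-modal LLMs cannot be passed through code paths (like `as_chat_engine`) that assume a regular LLM.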
Ferb, I know what I'm gonna do today
good eye @Vicent W. πŸ’ͺ