Hi, I would like to wrap all this as a module in a query pipeline, how can I do it?
from llama_index.core.agent.react_multimodal.step import MultimodalReActAgentWorker
from llama_index.core.agent import AgentRunner, Task
from llama_index.core.multi_modal_llms import MultiModalLLM
from llama_index.multi_modal_llms.openai import OpenAIMultiModal
from llama_index.core.schema import ImageDocument
# mm_llm is the GPT-4V client, as in the multimodal ReAct agent guide
mm_llm = OpenAIMultiModal(model="gpt-4-vision-preview", max_new_tokens=1000)

react_step_engine = MultimodalReActAgentWorker.from_tools(
    [],  # you can put some tools here
    multi_modal_llm=mm_llm,
    verbose=True,
)
agent = AgentRunner(react_step_engine)
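For reference, a minimal sketch of one way to expose that agent as a pipeline module: wrap a small helper in an FnComponent and put it in a QueryPipeline (both live in llama_index.core.query_pipeline). The helper name run_mm_agent and the example image path are placeholders for illustration, not part of the guide.

from llama_index.core.query_pipeline import FnComponent, QueryPipeline


def run_mm_agent(query_str: str, image_path: str) -> str:
    """Attach the image to a task and run the multimodal agent to completion."""
    image_docs = [ImageDocument(image_path=image_path)]
    task = agent.create_task(query_str, extra_state={"image_docs": image_docs})
    step_output = agent.run_step(task.task_id)
    while not step_output.is_last:
        step_output = agent.run_step(task.task_id)
    return str(agent.finalize_response(task.task_id))


mm_agent_component = FnComponent(fn=run_mm_agent)
pipeline = QueryPipeline(chain=[mm_agent_component], verbose=True)
# result = pipeline.run(query_str="Does this chart look correct?", image_path="plot.png")

Because FnComponent infers its input keys from the function signature, the pipeline can be called with query_str and image_path directly, and the same component can later be linked to other modules with add_link.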
Yeah, I am following that guide, but I have not been able to do it correctly. My goal is to use GPT-4V to validate a matplotlib graph produced by Python code that GPT-3.5 generates from an initial prompt, which in turn is based on a user question about my data lake.
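A rough, hedged sketch of that flow as a chained pipeline, reusing the run_mm_agent helper from the sketch above: GPT-3.5 generates plotting code, the code is executed to save a figure, and the multimodal agent judges the result. The function names, prompt text, and plot.png path are all placeholders, and exec is only shown for brevity; real code generation output should be sandboxed.

from llama_index.core.query_pipeline import FnComponent, QueryPipeline
from llama_index.llms.openai import OpenAI

codegen_llm = OpenAI(model="gpt-3.5-turbo")


def generate_plot_code(question: str) -> str:
    """Ask GPT-3.5 for matplotlib code answering the user's question."""
    prompt = (
        f"Write matplotlib code that answers: {question}. "
        "Save the figure to plot.png."
    )
    return codegen_llm.complete(prompt).text


def execute_plot_code(code: str) -> str:
    """Run the generated code and return the path of the saved figure."""
    exec(code, {})  # placeholder: sandbox this in real use
    return "plot.png"


def validate_plot(image_path: str) -> str:
    """Have the GPT-4V agent judge whether the chart looks correct."""
    # run_mm_agent is the helper defined in the earlier sketch
    return run_mm_agent("Does this chart correctly answer the question?", image_path)


validation_pipeline = QueryPipeline(
    chain=[
        FnComponent(fn=generate_plot_code),
        FnComponent(fn=execute_plot_code),
        FnComponent(fn=validate_plot),
    ],
    verbose=True,
)
# verdict = validation_pipeline.run(question="Plot monthly sales from my data lake")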