
Updated 10 months ago

ReActAgent

Has anyone tried local ReActAgents with Ollama before? I have been trying, and I have the feeling it only works with OpenAI. The docs mention GPT-3.5, but I had hoped local models would also work. It doesn't work for me.
18 comments
Yeah, you can use any LLM with any tool present in LlamaIndex.

To use Ollama, you have to deploy the model first. Have you hosted your LLM using Ollama?

Once it is hosted, you can simply declare the LLM using the Ollama class and add it to Settings, or pass it everywhere.
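For example (a minimal sketch, assuming the llama-index-llms-ollama integration is installed and an Ollama server is running on its default port; the model name is a placeholder):

from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

# Declare the locally hosted model once and make it the global default.
# A generous request_timeout gives slower local models time to respond.
Settings.llm = Ollama(model="mistral", request_timeout=90.0)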
Yes, it is running on my machine. It loads, I ask questions, and it responds, but it is not using the agent.

s/rag-ollama/test4.py
docs_folder
Extracting keywords from nodes: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 115/115 [00:00<00:00, 733.53it/s]
Generating embeddings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 115/115 [00:06<00:00, 18.53it/s]
Running on local URL: http://0.0.0.0:8080
So how are you connecting with it?
I think I have shared this earlier as well, if I'm not wrong:
import os

from llama_index.core import SimpleDirectoryReader, Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding

script_dir = os.getcwd()
docs_folder = os.path.join(script_dir, "document")
print(docs_folder)
# persist_dir = os.path.join(script_dir, "./data")
documents = SimpleDirectoryReader(input_dir=docs_folder, recursive=True).load_data()

Settings.llm = Ollama(model="gemma:2b-instruct-q3_K_L", request_timeout=90.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")
Settings.chunk_size = 1024
Settings.chunk_overlap = 64

Yes, I am doing that.
I will try with a simple script on the CLI; maybe it is my Gradio setup.
It doesn't work:

cd /home/impactframes/rag-ollama ; /home/impactframes/micromamba/envs/comfy/bin/python /home/impactframes/.vscode-server-insiders/extensions/ms-python.debugpy-2024.3.10611006-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher 36507 -- /home/impactframes/rag-ollama/simple_example.py
Traceback (most recent call last):
  File "/home/impactframes/micromamba/envs/comfy/lib/python3.10/site-packages/llama_index/core/agent/react/step.py", line 199, in _extract_reasoning_step
    reasoning_step = self._output_parser.parse(message_content, is_streaming)
  File "/home/impactframes/micromamba/envs/comfy/lib/python3.10/site-packages/llama_index/core/agent/react/output_parser.py", line 107, in parse
    return parse_action_reasoning_step(output)
  File "/home/impactframes/micromamba/envs/comfy/lib/python3.10/site-packages/llama_index/core/agent/react/output_parser.py", line 60, in parse_action_reasoning_step
    thought, action, action_input = extract_tool_use(output)
  File "/home/impactframes/micromamba/envs/comfy/lib/python3.10/site-packages/llama_index/core/agent/react/output_parser.py", line 23, in extract_tool_use
    raise ValueError(f"Could not extract tool use from input text: {input_text}")
ValueError: Could not extract tool use from input text: **Thought:** I need to use a tool to help me answer the question.
**Action:** lyft_10k
**Action Input:** {"type": "object", "properties": {"input": {"title": "Input", "type": "string"}}, "required": ["input"]}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/impactframes/rag-ollama/simple_example.py", line 92, in <module>
    response = agent.chat("What was Lyft's revenue growth in 2021?")
It requires the LLM to output a pretty structured format. Most open-source LLMs are pretty unreliable for this.
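Concretely, the ReAct parser expects plain, unformatted lines like these (the tool name lyft_10k comes from the traceback above):

Thought: I need to use a tool to help me answer the question.
Action: lyft_10k
Action Input: {"input": "What was Lyft's revenue growth in 2021?"}

In the failing run, the model wrapped the headers in markdown bold and echoed the tool's JSON schema instead of filling in an argument, so the parser could not extract the tool use.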
@Logan M Yes, it works with OpenAI GPT-3.5 and 4. I do have the miku 70B https://ollama.com/impactframes/mistral_alpha_xs but I think it needs some sort of system message to coerce the LLM into the JSON response this is expecting. I am trying to track down what the schema on the definitions is now.
I know setting the stop token with Ollama to "Observation:" helps a bit.
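Something like this (a sketch; it assumes additional_kwargs is forwarded to the Ollama API as generation options, where "stop" is a supported option):

from llama_index.llms.ollama import Ollama

# Cut generation off before the model hallucinates an "Observation:" line,
# which is supposed to come from the tool call, not from the model itself.
llm = Ollama(
    model="gemma:2b-instruct-q3_K_L",
    request_timeout=90.0,
    additional_kwargs={"stop": ["Observation:"]},
)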
Thank you so much, I will try with the stop token.
I think the BaseModel tools schema is made for OpenAI or something, because the code works perfectly with GPT-3.5 but Ollama can't ReAct. Is there any way to define this to work with open source models?

from pydantic import BaseModel
from llama_index.core.tools import QueryEngineTool, ToolMetadata

class QueryingToolSchema(BaseModel):
    input: str

my_tool_query_engine = [
    QueryEngineTool(
        query_engine=keyword_query_engine,
        metadata=ToolMetadata(
            name="keyword_index",
            description="Keyword index tool to answer questions based on the context provided",
            fn_schema=QueryingToolSchema,
        ),
    ),
    QueryEngineTool(
        query_engine=vector_query_engine,
        metadata=ToolMetadata(
            name="vector_index",
            description="Vector index tool to answer questions based on the context provided",
            fn_schema=QueryingToolSchema,
        ),
    ),
]
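For reference, this is roughly how these tools get wired into the agent (a minimal sketch; agent.chat is the call that raised the ValueError in the traceback above, and verbose=True prints the raw Thought/Action output for debugging):

from llama_index.core import Settings
from llama_index.core.agent import ReActAgent

# Build a ReAct agent over the two query-engine tools defined above.
agent = ReActAgent.from_tools(
    my_tool_query_engine,
    llm=Settings.llm,
    verbose=True,
)
response = agent.chat("What was Lyft's revenue growth in 2021?")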
Nah, it's supposed to be general. Open-source models are pretty bad with structured outputs though (try asking one for JSON that follows a schema, it's hard lol).
Okay, I will scrap the idea of making it work with Ollama. Thank you.