from llama_index.llms import Ollama
from llama_index.agent import ReActAgent
from llama_index.tools import FunctionTool
from datetime import date

def add_numbers(a: int, b: int) -> int:
    """Adds two numbers and returns the result"""
    return a + b

def get_current_date() -> date:
    """returns the current date"""
    return date.today()

tools = [
    FunctionTool.from_defaults(fn=add_numbers),
    FunctionTool.from_defaults(fn=get_current_date),
]

llm = Ollama(model="mistral")
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
response = agent.chat("what is today's date?")

I found this code online and am trying to run it.

I tried pip install llama_index but am getting:

from llama_index.agent import ReActAgent
ImportError: cannot import name 'ReActAgent' from 'llama_index.agent' (unknown location)

Anyone know if the imports changed?
11 comments
from llama_index.llms.ollama import Ollama
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool


You'll also need to install any integrations, e.g. pip install llama-index-llms-ollama
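For reference, here is a sketch of the full original snippet with all of the imports updated for the modular package layout (assuming llama-index >= 0.10 and that you've run pip install llama-index llama-index-llms-ollama):

from datetime import date

from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.ollama import Ollama

def add_numbers(a: int, b: int) -> int:
    """Adds two numbers and returns the result."""
    return a + b

def get_current_date() -> date:
    """Returns the current date."""
    return date.today()

# Wrap the plain functions as tools the agent can call
tools = [
    FunctionTool.from_defaults(fn=add_numbers),
    FunctionTool.from_defaults(fn=get_current_date),
]

llm = Ollama(model="mistral")
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
response = agent.chat("what is today's date?")
print(response)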
from llama_index.llms import Ollama
ImportError: cannot import name 'Ollama' from 'llama_index.llms' (unknown location)

Ok, thank you! Do you know how to fix this last one?
You need to install the integration and update the import as above.
Oops, thanks, I got it now.

I'm scratching my head with this one:

File "C:\Users\lhenry\AppData\Local\Programs\Python\Python312\Lib\site-packages\llama_index\llms\ollama\base.py", line 135, in chat response.raise_for_status() File "C:\Users\lhenry\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_models.py", line 761, in raise_for_status raise HTTPStatusError(message, request=request, response=self) httpx.HTTPStatusError: Client error '404 Not Found' for url 'http://localhost:11434/api/chat' For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404 PS C:\Users\lhenry\Desktop\Projects\AIProject>
Nvm, I got it, I didn't have the LLM running.
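(A 404 from http://localhost:11434/api/chat usually means the Ollama server doesn't have the requested model available; making sure ollama serve is up and pulling the model first with ollama pull mistral normally clears it.)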
File "C:\Users\lhenry\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx_transports\default.py", line 232, in handle_request
with map_httpcore_exceptions():
File "C:\Users\lhenry\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 158, in exit
self.gen.throw(value)
File "C:\Users\lhenry\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx_transports\default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ReadTimeout: timed out

Would running too slowly cause this timeout?
It seems to work with tinyllama.
Increase the request timeout:
llm = Ollama(model="mistral", request_timeout=3600)
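Note that request_timeout is in seconds, so 3600 gives the model up to an hour to respond before httpx raises the ReadTimeout.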
File "C:\Users\lhenry\AppData\Local\Programs\Python\Python312\Lib\site-packages\llama_index\core\agent\react\step.py", line 412, in _get_response
raise ValueError("Reached max iterations.")
ValueError: Reached max iterations.

Is there a similar way to increase the max iterations?
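If I remember right, ReActAgent.from_tools takes a max_iterations argument (the default is 10), so something like this should do it, though double-check against your installed version:

agent = ReActAgent.from_tools(tools, llm=llm, verbose=True, max_iterations=20)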