All the examples I see with llama-index agents use OpenAI models, but I would like to use models like "llama2" instead. If I put llm="llama2" in the highlighted part of the example code below, it throws an error and doesn't work. Does anyone know how to use other models with agents?
Thank you so much, I was already using Ollama but didn't know how to integrate that! Great link, thank you so much for sharing!
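In case it helps anyone else reading later, this is roughly what the integration looks like (a sketch, not official code; the import paths and the ReActAgent usage are assumptions based on recent llama-index versions, so adjust to whatever you have installed):
Plain Text
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.ollama import Ollama  # assumes the llama-index-llms-ollama package is installed

# Point llama-index at a local Ollama model instead of OpenAI
llm = Ollama(model="llama2", request_timeout=120.0)

# A trivial tool so the agent has something to call
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

# The llm argument is where "llama2" goes, via the Ollama wrapper
agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)
print(agent.chat("What is 2123 * 215123?"))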
In Ollama, I use their chat or generate API to interact with the model. Would you happen to know if that option is still available when using an integrated model from llama-index?
Like, instead of agent.chat("What is 2123 * 215123") or llm.complete("What is the capital of France?"),
having something like this, which is what Ollama offers:
Plain Text
import requests

r = requests.post(
    "http://0.0.0.0:11434/api/chat",
    json={
        "model": "llama2",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": True,
    },
)
llm.complete is the same as making that API request, I think
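e.g. something like this (a sketch, assuming the Ollama LLM from the llama-index integration and a local Ollama server on the default port):
Plain Text
from llama_index.llms.ollama import Ollama

# Under the hood this talks to the same local Ollama server as the raw POST request
llm = Ollama(model="llama2")
response = llm.complete("What is the capital of France?")
print(response)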
You can also do
Plain Text
from llama_index.core.llms import ChatMessage
messages = [ChatMessage(role="user", content="What is the capital of France?")]
response = llm.chat(messages)
I see, because the nice thing about that POST request is that I was hosting the model on some other server where I had it downloaded, and I was interacting with it via the REST API....
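You should still be able to do that by pointing the integration at the remote host instead of localhost; something like this (a sketch, assuming the Ollama wrapper's base_url parameter behaves the same as hitting the server directly):
Plain Text
from llama_index.core.llms import ChatMessage
from llama_index.llms.ollama import Ollama

# base_url is a placeholder here; replace it with wherever the remote Ollama server is running
llm = Ollama(model="llama2", base_url="http://<other-server>:11434")
messages = [ChatMessage(role="user", content="What is the capital of France?")]
print(llm.chat(messages))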