Hi,

All the examples I see with llama-index agents use OpenAI models, but I would like to use models like "llama2" instead. If I put llm="llama2" in the highlighted part of the example code below, it throws an error and doesn't work. Does anyone know how to use other models with agents?

Thank you.
Attachment: Screenshot_2024-04-02_at_4.32.28_PM.png
6 comments
You need to initialize the LLM with one of our many LLM integrations

I would recommend Ollama tbh

https://docs.llamaindex.ai/en/stable/api_reference/llms/ollama/
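
For reference, a minimal sketch of swapping the OpenAI model for Ollama in an agent (assuming Ollama is running locally with the llama2 model pulled and the llama-index-llms-ollama package is installed; the multiply tool is just an illustrative stand-in for the tools in the screenshot):

Plain Text
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.ollama import Ollama


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


# Point the agent at a local Ollama model instead of OpenAI
llm = Ollama(model="llama2", request_timeout=120.0)
multiply_tool = FunctionTool.from_defaults(fn=multiply)

agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)
response = agent.chat("What is 2123 * 215123?")
print(response)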
Thank you so much, I was already using Ollama but didn't know how to integrate that! Great link, thank you so much for sharing!

In Ollama, I use their chat or generate API to interact with the model. Would you happen to know if that option is still available when using an integrated model from llama-index?

Like instead of:
agent.chat("What is 2123 * 215123")
or
llm.complete("What is the capital of France?")

having something like this, as in Ollama:
import requests

# Ollama's chat endpoint expects "messages" as a list of role/content dicts
r = requests.post(
    "http://0.0.0.0:11434/api/chat",
    json={
        "model": "llama2",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": True,
    },
)
llm.complete is the same as making that API request I think 👀

You can also do

Plain Text
from llama_index.core.llms import ChatMessage

# llm here is the already-initialized LLM (e.g. the Ollama instance from above)
messages = [ChatMessage(role="user", content="What is the capital of France?")]

response = llm.chat(messages)
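
If you also want the streaming behaviour of the raw API's "stream": True, the same llm object exposes streaming methods; a small sketch, assuming the messages list from the snippet above:

Plain Text
# Stream tokens as they arrive, similar to "stream": True in the raw API
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)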
I see. The nice thing about that POST request is that I was hosting the model on another server where I had it downloaded, and interacting with it via the REST API....
You can still do that with the Ollama class though 👀
It uses the REST API for you
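
As a sketch of that remote-hosting case, the Ollama class accepts a base_url, so pointing it at the server from the earlier requests snippet might look like this (assuming that server is reachable and has llama2 pulled):

Plain Text
from llama_index.llms.ollama import Ollama

# Point the client at a remotely hosted Ollama server instead of localhost
llm = Ollama(model="llama2", base_url="http://0.0.0.0:11434", request_timeout=120.0)
print(llm.complete("What is the capital of France?"))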