Does anyone know if it's possible to use a SimpleChatEngine with a local model via Ollama, without having to use an index? I just want a chat with memory using a local model.
You just need to provide your LLM.
Yea, but where do I put the memory module here?
Plain Text
# v0.9-style import paths
from llama_index.llms import Ollama
from llama_index.memory import ChatMemoryBuffer
from llama_index.storage.chat_store import SimpleChatStore
from llama_index.chat_engine import SimpleChatEngine

llm = Ollama(model="mistral")
# response = llm.complete("Who is Laurie Voss?")
# print(response)

chat_store = SimpleChatStore()

chat_memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=chat_store,
    chat_store_key="user1",
)

chat_engine = SimpleChatEngine.from_defaults(llm=llm)
This errors:
Plain Text
Traceback (most recent call last):
combat-ai  |   File "/usr/local/lib/python3.11/site-packages/llama_index/llms/utils.py", line 29, in resolve_llm
combat-ai  |     validate_openai_api_key(llm.api_key)
combat-ai  |   File "/usr/local/lib/python3.11/site-packages/llama_index/llms/openai_utils.py", line 383, in validate_openai_api_key
combat-ai  |     raise ValueError(MISSING_API_KEY_ERROR_MESSAGE)
combat-ai  | ValueError: No API key found for OpenAI.
combat-ai  | Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
combat-ai  | API keys can be found or created at https://platform.openai.com/account/api-keys
combat-ai  | 
combat-ai  | 
combat-ai  | During handling of the above exception, another exception occurred:
combat-ai  | 
combat-ai  | Traceback (most recent call last):
combat-ai  |   File "/app/app.py", line 29, in <module>
combat-ai  |     chat_engine = SimpleChatEngine.from_defaults(llm=llm)
combat-ai  |                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You could do it like this:
chat_engine = SimpleChatEngine.from_defaults(llm=llm, memory=chat_memory)
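With the memory passed in, the engine carries conversation history across turns. A minimal usage sketch (assuming the v0.9-style imports from the snippet above, and that the mistral model is available on a local Ollama server):
Plain Text
chat_engine = SimpleChatEngine.from_defaults(llm=llm, memory=chat_memory)

# Each call appends to chat_memory under the "user1" key
print(chat_engine.chat("Hi, I'm testing a local chat."))
print(chat_engine.chat("What did I just say?"))  # answered from memory, no index involved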
Yea, so it's erroring asking for OpenAI API keys.
Somehow it is still picking up OpenAI as the default. Try doing this:
Plain Text
from llama_index.core import Settings
Settings.llm = llm  # Ollama instance

Do this at the top, right after declaring llm.
"Settings" is unknown import symbol
ImportError: cannot import name 'Settings' from 'llama_index.core' (/usr/local/lib/python3.11/site-packages/llama_index/core/__init__.py)
What version of LlamaIndex are you using?
llama-index = "^0.9.19"
Ah I see, I assumed v0.10.x +

You'll have to add it to the service context then.
Plain Text
from llama_index import ServiceContext
from llama_index import set_global_service_context

service_context = ServiceContext.from_defaults(
    llm=llm, embed_model="local:BAAI/bge-small-en-v1.5"
)
set_global_service_context(service_context)
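Once the global service context is set, the chat engine should resolve the local LLM from it. A rough sketch, assuming the v0.9 from_defaults falls back to the global service context when none is passed:
Plain Text
# Picks the LLM up from the global service context
chat_engine = SimpleChatEngine.from_defaults(memory=chat_memory)

# ...or pass the service context explicitly
chat_engine = SimpleChatEngine.from_defaults(
    service_context=service_context, memory=chat_memory
)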
I just upgraded... gonna try now. Thanks so much.
If you upgraded, then the above won't work, lol.
But I want to use the latest version because of the docs, etc.
Also, for upgrading, I would recommend creating a new env.
The problem now is that this path doesn't exist:
ImportError: cannot import name 'Ollama' from 'llama_index.llms' (unknown location)
Do I have to add it separately? llama-index-llms-ollama?
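(For reference: in v0.10.x the Ollama LLM does live in the separate llama-index-llms-ollama package. A minimal sketch of the same chat-with-memory setup under the v0.10 import paths, assuming that package is installed and a local Ollama server has the mistral model pulled:)
Plain Text
# pip install llama-index llama-index-llms-ollama
from llama_index.core import Settings
from llama_index.core.chat_engine import SimpleChatEngine
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.storage.chat_store import SimpleChatStore
from llama_index.llms.ollama import Ollama

llm = Ollama(model="mistral")
Settings.llm = llm  # set the global default so nothing falls back to OpenAI

chat_store = SimpleChatStore()
chat_memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=chat_store,
    chat_store_key="user1",
)

chat_engine = SimpleChatEngine.from_defaults(llm=llm, memory=chat_memory)
print(chat_engine.chat("Hello!"))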
Btw, do you happen to know where I can look for more info on how to pass JSON data as part of my message to the LLM?