Hi, I have a few questions about using Ollama with llama_index.
If I am currently chatting with llama3.2 using: llm = Ollama(model="llama3.2:latest") and I want to switch to phi, should I do: llm = Ollama(model="phi")?
If I want to continue the conversation with the previous llama3.2 instance after switching to phi, should I create two separate instances—one for llama3.2 and one for phi?
If I want to start a completely new chat with llama3.2, is it necessary to create a new instance for it?
If I have 5 different conversations (possibly using the same or different models), should I create 5 separate instances to manage them?
I also have a question about the vector store, since multiple models are available in my program.
vectorstore = DuckDBVectorStore.from_local(db_path)
index = VectorStoreIndex.from_vector_store(vectorstore)
The problem is that when I add documents to the DB, it also changes the data on disk.
Suppose we use phi + DB + doc_A, then switch to llama + DB + doc_B. In my current program, switching requires removing doc_A from the DB and appending doc_B to it.
Is it possible to load and create multiple vector stores, so that appending/removing documents doesn't affect the DB on disk?
--------------------------------------------------------------------------------------
My code is below. I found that changing the llm argument doesn't change the chat_engine...
self.chat_engine = self.index.as_chat_engine(
    chat_mode="context",
    llm=self.cur_lm,
    memory=self.memory,
    system_prompt=(
        "You are a helpful assistant which helps users to understand scientific knowledge "
        "about biomechanics of injuries to human bodies."
    ),
    verbose=True,
)
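A likely explanation: `as_chat_engine` captures the `llm` at the moment the engine is built, so mutating `self.cur_lm` afterwards has no effect on the already-constructed engine. Switching models would then mean rebuilding the chat engine with the new llm, reusing the same memory if the conversation should continue. A minimal sketch (the method name `set_llm` is an assumption; `self.index`, `self.memory`, and `self.cur_lm` follow the snippet above):

```python
def set_llm(self, new_llm):
    # The old chat_engine keeps a reference to the old llm, so we must
    # rebuild the engine rather than only reassigning self.cur_lm.
    self.cur_lm = new_llm
    self.chat_engine = self.index.as_chat_engine(
        chat_mode="context",
        llm=self.cur_lm,          # the newly selected model
        memory=self.memory,       # reuse the memory to continue the chat;
                                  # pass a fresh buffer to start a new one
        system_prompt=(
            "You are a helpful assistant which helps users to understand "
            "scientific knowledge about biomechanics of injuries to human bodies."
        ),
        verbose=True,
    )
```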