Hi everyone, I am trying to update an old code that uses Ollama since ServiceContext is deprecated following these instructions: https://docs.llamaindex.ai/en/stable/module_guides/supporting_modules/service_context_migration.html
But what is the equivalent of the embedding model for Ollama?



OLD
Plain Text
from llama_index.llms.ollama import Ollama
from llama_index.core import ServiceContext
from llama_index.core.chat_engine import SimpleChatEngine

llm = Ollama(model="mistral")
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model="local:BAAI/bge-small-en-v1.5", 
)

chat_engine = SimpleChatEngine.from_defaults(service_context=service_context)
print(chat_engine.chat("Hi can you write a python script to use SimpleChatEngine with llama_index"))

NEW
Plain Text
from llama_index.llms.ollama import Ollama
from llama_index.core import Settings
from llama_index.core.chat_engine import SimpleChatEngine

Settings.llm = Ollama(model="mistral")

chat_engine = SimpleChatEngine.from_defaults()
print(chat_engine.chat("Hi can you write a python script to use SimpleChatEngine with llama_index"))
Do this:
Plain Text
Settings.embed_model = "local:BAAI/bge-small-en-v1.5"
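Putting the two pieces together, a minimal sketch of the migrated script might look like this (assuming the mistral model is already pulled in Ollama and the local/HuggingFace embedding package is installed):
Plain Text
from llama_index.llms.ollama import Ollama
from llama_index.core import Settings
from llama_index.core.chat_engine import SimpleChatEngine

# Global defaults replace the old ServiceContext
Settings.llm = Ollama(model="mistral")
Settings.embed_model = "local:BAAI/bge-small-en-v1.5"

# No service_context argument needed; the engine picks everything up from Settings
chat_engine = SimpleChatEngine.from_defaults()
print(chat_engine.chat("Hi can you write a python script to use SimpleChatEngine with llama_index"))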
Thank you, could you please explain the difference between:
Plain Text
Settings.embed_model = "local:BAAI/bge-small-en-v1.5"

and the one from the example with HuggingFaceEmbedding:
Plain Text
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"
)
Both are the same, but the first one requires fewer words 😅
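For illustration, the "local:" string is just shorthand that resolves to a HuggingFace embedding; the explicit form lets you pass extra constructor arguments. A minimal sketch (the cache_folder value here is only an example):
Plain Text
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Shorthand: resolves to a local HuggingFace embedding model under the hood
Settings.embed_model = "local:BAAI/bge-small-en-v1.5"

# Explicit form: equivalent, but you can tweak constructor arguments
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    cache_folder="./hf_cache",  # example argument; optional
)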
I have been trying to set up a query_engine, but my computer is going crazy and I cannot use it. Do you know what is wrong in my code here, please:
Plain Text
from llama_index.llms.ollama import Ollama
from llama_index.core import Settings
from llama_index.core.chat_engine import SimpleChatEngine
from llama_index.core import SimpleDirectoryReader
from llama_index.core import VectorStoreIndex

Settings.llm = Ollama(model="mistral")
Settings.embed_model = "local:BAAI/bge-small-en-v1.5"
Settings.chunk_size = 300

documents = (
  SimpleDirectoryReader(
    input_dir='./documents/',
    required_exts = [".pdf"])
    .load_data()
)

nodes = (
    Settings
    .node_parser
    .get_nodes_from_documents(documents)
)

index = VectorStoreIndex.from_documents(documents, embed_model=Settings.embed_model)

query_engine = index.as_chat_engine(llm=Settings.llm)

query = """Eplain me what is this document?"""

resp = query_engine.query(query)

print(resp)
resp = query_engine.chat(query)
No need to pass llm anywhere if you have done Settings.llm=llm
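For reference, a minimal sketch of that flow relying only on Settings (assuming Ollama is running locally with the mistral model pulled and ./documents/ contains at least one PDF):
Plain Text
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(model="mistral", request_timeout=120.0)  # longer timeout for slow local models
Settings.embed_model = "local:BAAI/bge-small-en-v1.5"
Settings.chunk_size = 300

documents = SimpleDirectoryReader(
    input_dir="./documents/", required_exts=[".pdf"]
).load_data()

# Chunking and embedding both fall back to Settings; nothing is passed explicitly
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What is this document about?"))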
I should also ask: you have hosted your LLM using Ollama already, right?

Also, what error are you facing?
How are the settings passed to the model?
It just wrote something, got killed, and stopped
I closed the terminal 😦

But I got no error message; my computer just went crazy
For this, can you explain how the settings are passed, please? This library is new to me and the tutorials I am looking at are a little outdated :(
https://pastebin.pl/view/ff64ea7a

Here is the error I get after the correction you gave me
So when you set Settings.llm, every internal op like retrieval or text generation first checks whether anything is set for Settings.llm. If it is set, it will use that; otherwise it will default to OpenAI.
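A rough sketch of that resolution pattern (not the library's actual internals, just the idea):
Plain Text
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(model="mistral")

# Components read Settings.llm lazily; since it is set above, every internal step
# (retrieval, response synthesis) uses the Ollama instance. If it were left unset,
# the library would fall back to its default LLM (OpenAI) and expect an API key.
print(type(Settings.llm).__name__)  # -> "Ollama"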