
Embedding Model

Hi, is the "Example: Changing the underlying LLM" documentation wrong, or am I missing something? If I run the following, which should use Ollama instead of OpenAI (if I understand correctly):

Plain Text
from llama_index import ServiceContext
from llama_index.llms import Ollama

# Use a local Ollama model instead of the default OpenAI LLM
llm = Ollama(model="llama2")
service_context = ServiceContext.from_defaults(llm=llm)

...I get the following error:

Plain Text
******
Could not load OpenAIEmbedding. Using HuggingFaceBgeEmbeddings with model_name=BAAI/bge-small-en. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
It is throwing this error for the embedding model, not the LLM. It can be fixed by adding a few more lines of code:

Plain Text
from llama_index import ServiceContext

# Use the default local embedding model (BAAI/bge-small-en)
service_context = ServiceContext.from_defaults(embed_model="local")

# Or choose a specific local model
service_context = ServiceContext.from_defaults(
    embed_model="local:BAAI/bge-large-en"
)

More can be found here: https://docs.llamaindex.ai/en/stable/core_modules/model_modules/embeddings/usage_pattern.html#local-embedding-models
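
For completeness, here is a minimal sketch combining the two snippets above, so that both the LLM and the embeddings run locally and no OpenAI key is needed. It assumes an Ollama server is running with the llama2 model pulled, and the "data" directory is a placeholder for your own documents:

Plain Text
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import Ollama

# Local LLM served by Ollama (assumes `ollama run llama2` works on this machine)
llm = Ollama(model="llama2")

# Local embedding model, so no OPENAI_API_KEY is required
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model="local:BAAI/bge-small-en",
)

# Example usage: build an index over local documents ("data" is a placeholder path)
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

With both set, neither the LLM calls nor the embedding calls should touch the OpenAI API.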
Got it, thanks! I suspected it was something like this, because when I just went ahead and put in my OpenAI key, it worked, but that's clearly not the same as using OpenAI for the LLM class.