Hi, is the "Example: Changing the underlying LLM" documentation wrong, or am I missing something? If I run the following, which should use Ollama instead of OpenAI (if I understand correctly):
```python
from llama_index import ServiceContext
from llama_index.llms import Ollama

llm = Ollama(model="llama2")
service_context = ServiceContext.from_defaults(llm=llm)
```
...I get the following errors:
```
Could not load OpenAIEmbedding. Using HuggingFaceBgeEmbeddings with model_name=BAAI/bge-small-en. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
```
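For context, the message seems to come from the embedding model, not the LLM: `ServiceContext.from_defaults(llm=llm)` only overrides the LLM, while the embedding model still defaults to `OpenAIEmbedding`. A sketch of what I believe should avoid the OpenAI fallback entirely, assuming the legacy `ServiceContext` API supports the `embed_model="local"` shortcut (not verified here):

```python
from llama_index import ServiceContext
from llama_index.llms import Ollama

# Override both the LLM and the embedding model, so nothing
# falls back to OpenAI. embed_model="local" is assumed to pull
# a small local HuggingFace embedding model instead.
llm = Ollama(model="llama2")
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model="local",  # avoid the OpenAIEmbedding default
)
```

If that's the case, the warning is harmless (it already falls back to `HuggingFaceBgeEmbeddings`), but setting `embed_model` explicitly would silence it.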