
Updated 3 months ago


I am trying the multi-index setup, but with a different LLM:
query_engine = summary_index.as_query_engine(
    service_context=ServiceContext.from_defaults(
        # llm=OpenAI(model="gpt-3.5-turbo")
        # llm=Ollama("mistral")
        llm=None
    )
)
I still get the error below (with both llm=None and Ollama):
-----
During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/llama_index/llms/utils.py in resolve_llm(llm)
29 validate_openai_api_key(llm.api_key)
30 except ValueError as e:
---> 31 raise ValueError(
32 "\n**\n"
33 "Could not load OpenAI model. "

ValueError:
**
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

To disable the LLM entirely, set llm=None.
**

Any suggestions?
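The traceback shows LlamaIndex's resolve_llm falling back to the default OpenAI LLM, which fails when no key is configured. A minimal, stdlib-only check (the helper name is mine, not a LlamaIndex API) can confirm whether a key is even visible to the process before you start debugging the ServiceContext itself:

```python
import os

def openai_key_available() -> bool:
    """Return True if an OpenAI API key is visible to this process.

    LlamaIndex's resolve_llm() falls back to the default OpenAI LLM
    when it cannot use the llm you passed; with no key configured,
    that fallback raises the ValueError shown above.
    """
    return bool(os.environ.get("OPENAI_API_KEY", "").strip())
```

If this returns False and you did not intend to use OpenAI, the fix is to make sure no component (LLM or embedding model) silently defaults to OpenAI, as the reply below explains.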
2 comments
Do you want to use a different LLM? If you don't want to use OpenAI at all, you'll need to pass the embedding model as well, since the default embedding model also calls OpenAI.


Python
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import Ollama

# Pass the LLM of your choice; if you don't want to use an LLM, set llm=None.
# embed_model="local" uses a local embedding model instead of OpenAI's.
service_context = ServiceContext.from_defaults(embed_model="local", llm=Ollama(model="mistral"))

set_global_service_context(service_context)
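This works because LlamaIndex consults a module-level global context whenever no per-index service context is supplied. A stdlib-only sketch of that pattern (hypothetical names, not LlamaIndex's actual internals):

```python
_global_service_context = None  # module-level default, set once at startup

def set_global_service_context(ctx):
    """Install a process-wide default context."""
    global _global_service_context
    _global_service_context = ctx

def resolve_service_context(local=None):
    """A per-call context wins; otherwise fall back to the global default."""
    return local if local is not None else _global_service_context
```

Because every index and query engine resolves its context this way, setting the global once means you never have to thread service_context through each as_query_engine call.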