Find answers from the community

Updated last year

At a glance

The community member is using the RetrieverQueryEngine with the Llama2 language model instead of the OpenAI model, and is encountering an error about the OpenAI API key not being set. A comment suggests adding their LLM to the Settings by setting Settings.llm = llm, which makes the specified LLM the default everywhere.

Hi guys. I have a question:
I am using RetrieverQueryEngine with a different LLM than OpenAI; I am using Llama2. When I run the following snippet:

Plain Text
 
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine

retriever = VectorIndexRetriever(
    index=index,
    similarity_top_k=4,
)
query_engine = RetrieverQueryEngine(retriever=retriever)


I get the following error:

ValueError:
**
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

To disable the LLM entirely, set llm=None.
**

How can I solve it?
1 comment
Add your LLM to the Settings.

Plain Text
from llama_index.core import Settings
Settings.llm = llm  # your LLM object


This way it will use your LLM everywhere.
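Putting the two snippets together, a minimal sketch might look like the following. It assumes `index` is the vector index already built in the question, and it uses LlamaIndex's Ollama integration as one example way to get a Llama2 LLM object locally; any LlamaIndex LLM integration can be assigned to `Settings.llm` the same way.

```python
from llama_index.core import Settings
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.llms.ollama import Ollama  # example integration; swap in your own LLM class

# Set the global default LLM so nothing falls back to OpenAI.
# "llama2" here assumes a Llama2 model served locally by Ollama.
Settings.llm = Ollama(model="llama2")

# `index` is assumed to be the VectorStoreIndex from the question.
retriever = VectorIndexRetriever(
    index=index,
    similarity_top_k=4,
)
query_engine = RetrieverQueryEngine(retriever=retriever)
```

With `Settings.llm` set before the query engine is constructed, the OPENAI_API_KEY error should no longer appear, because the engine no longer tries to instantiate the default OpenAI model.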