borjazz · LLM
Hi all, I am working on a RAG application in which I want to use one model for embeddings and a separate LLM for the question-answer flow.
For this I am using a ServiceContext with the following configuration:

from llama_index import ServiceContext, VectorStoreIndex
from llama_index.embeddings import HuggingFaceEmbedding

def setup_index(documents):
    # Embedding-only service context: llm=None so no LLM is needed at index time
    embed_model = HuggingFaceEmbedding('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
    service_context_embedding = ServiceContext.from_defaults(
        embed_model=embed_model,
        llm=None,
        chunk_size=1024,
    )
    return VectorStoreIndex.from_documents(documents, service_context=service_context_embedding)

After this I store a persistent folder with the indices locally.
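For completeness, the persist step looks roughly like this (the ./data folder is just my setup, matching the path used when loading below):

index = setup_index(documents)
# Write the index (docstore, vector store, index store) to disk
index.storage_context.persist(persist_dir="./data")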

When I load the index from the local folder with the following code:

from llama_index import StorageContext, load_index_from_storage

def load_documents():
    # Create storage context from persisted data
    storage_context = StorageContext.from_defaults(persist_dir="./data")
    # Load index from storage context
    index = load_index_from_storage(storage_context)
    return index

I get this error:

Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY. Original error: No API key found for OpenAI.

Does anyone know what is going on? I want to create the indexes without an LLM and then supply one later, via a ServiceContext passed to a response synthesizer.
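In other words, what I am aiming for is roughly the sketch below. Passing the embedding-only service context into load_index_from_storage is my guess at avoiding the default OpenAI resolution (not something I have confirmed), and my_llm stands in for whatever LLM I end up using:

from llama_index import ServiceContext, get_response_synthesizer

# Reuse the embedding-only service context when loading, so the default
# (OpenAI) LLM is never resolved — this is my assumption about the fix
index = load_index_from_storage(storage_context, service_context=service_context_embedding)

# Attach the actual LLM only at query time, via the response synthesizer
# (my_llm is a placeholder for the real model)
llm_service_context = ServiceContext.from_defaults(llm=my_llm, embed_model=embed_model)
response_synthesizer = get_response_synthesizer(service_context=llm_service_context)
query_engine = index.as_query_engine(response_synthesizer=response_synthesizer)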