Back again, still going down the local-model-only path with llama-cpp-python, and I'm getting this same error:
Plain Text
ValueError:
******
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

To disable the LLM entirely, set llm=None.
******
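
(Side note: the llm=None escape hatch the error suggests would be set on the service context, like the sketch below, but I want my local LLM, not no LLM at all:)
Plain Text
# What the error message suggests: disables the LLM entirely
service_context = ServiceContext.from_defaults(llm=None, embed_model=embed_model)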

This time, though, I'm trying to introduce a Multi-Step Query:
Plain Text
import os

from fastapi import FastAPI, Form, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from llama_index import (
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)
from llama_index.indices.query.query_transform import StepDecomposeQueryTransform
from llama_index.query_engine import MultiStepQueryEngine

app = FastAPI()
templates = Jinja2Templates(directory="templates")

# llm and embed_model are my local llama-cpp-python model and embeddings
# (setup shown further down)
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)

# Index setup
PERSIST_DIR = "storage-data"
if not os.path.exists(PERSIST_DIR):
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents, service_context=service_context)
    index.storage_context.persist(persist_dir=PERSIST_DIR)
else:
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context, service_context=service_context)

query_engine = index.as_query_engine(response_mode="compact_accumulate")

# Multi-step query engine setup
step_decompose_transform = StepDecomposeQueryTransform(llm=llm, verbose=True)
multi_step_query_engine = MultiStepQueryEngine(
    query_engine=query_engine,
    query_transform=step_decompose_transform,
    index_summary="Index summary for context"
)

@app.get("/", response_class=HTMLResponse)
async def get_form(request: Request):
    return templates.TemplateResponse("index.html", {"request": request})

@app.post("/query")
async def query(user_input: str = Form(...)):
    response = multi_step_query_engine.query(user_input)
    response_text = str(response)
    return {"response": response_text}

I tried doing step_decompose_transform = StepDecomposeQueryTransform(service_context=service_context), but that just gave me an error about an unexpected argument.
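
For context, my llm and embed_model are set up roughly like this (the model path is a placeholder, and I'm on the legacy llama_index imports):
Plain Text
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.llms import LlamaCPP

# Local LLM served by llama-cpp-python; the path is a placeholder
llm = LlamaCPP(model_path="./models/model.gguf", temperature=0.1)

# Local embeddings so nothing falls back to OpenAI
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")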
Oh, this one is annoying to fix. Can't wait to finish the better global service context I keep rambling about lol

In any case, here's the change needed. MultiStepQueryEngine builds a default response synthesizer when you don't pass one, and that default falls back to the global (OpenAI) service context, which is where your API-key error is coming from. Pass in a synthesizer built from your own service context instead:
Plain Text
from llama_index.response_synthesizers import get_response_synthesizer

multi_step_query_engine = MultiStepQueryEngine(
    # build the synthesizer from your service context so it uses the local LLM
    response_synthesizer=get_response_synthesizer(service_context=service_context),
    ...
)
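Putting that together with the kwargs from your snippet, the whole construction ends up as:
Plain Text
from llama_index.response_synthesizers import get_response_synthesizer

multi_step_query_engine = MultiStepQueryEngine(
    query_engine=query_engine,
    query_transform=step_decompose_transform,
    response_synthesizer=get_response_synthesizer(service_context=service_context),
    index_summary="Index summary for context",
)
That way every step of the multi-step pipeline resolves to your local service context instead of the OpenAI default.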