T.Koyoyo
I ran the code below using the HyDE Query Transform. With OpenAI's LLM it works fine, but with AzureOpenAI's LLM I get the following error. Is this a bug?
Does anyone know anything about this issue?

Plain Text
File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
    raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>
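
For context, the error itself comes straight from the openai library: when api_type is "azure", ChatCompletion.create() has to be told which Azure deployment to hit via engine or deployment_id. A minimal repro outside llama_index (endpoint, key, and deployment name are placeholders):

Plain Text
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # placeholder
openai.api_version = "2023-07-01-preview"
openai.api_key = "<your-key>"  # placeholder

messages = [{"role": "user", "content": "hello"}]

try:
    # No engine/deployment_id -> exactly the error in the traceback above
    openai.ChatCompletion.create(model="gpt-35-turbo", messages=messages)
except openai.error.InvalidRequestError as e:
    print(e)  # Must provide an 'engine' or 'deployment_id' parameter ...

# Passing the Azure deployment name as engine= is what the library expects
response = openai.ChatCompletion.create(
    engine="my-gpt-35-turbo-deployment",  # placeholder deployment name
    messages=messages,
)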


Plain Text
import logging
import sys
from llama_index import ServiceContext, set_global_service_context
from llama_index.indices.query.query_transform import HyDEQueryTransform
from llama_index.query_engine.transform_query_engine import TransformQueryEngine
import common

# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

# ------------------------------
# ■ Requirements
# ------------------------------

# ------------------------------
# ■ Settings
# ------------------------------
llm_model = common.llm_azure()                    # LLM Model
embed_model = common.embed_azure()                # Embedding Model
service_context = ServiceContext.from_defaults(llm=llm_model, embed_model=embed_model)
set_global_service_context(service_context)

# ------------------------------
# ■ Load Index
# ------------------------------
index = common.load_index_vector_store_simple()

# ------------------------------
# ■ Do Query
# ------------------------------
query_engine = index.as_query_engine()
hyde = HyDEQueryTransform(include_original=True)
hyde_query_engine = TransformQueryEngine(query_engine, hyde)
response = hyde_query_engine.query("Which timecard should I use?")
print(str(response))
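
For reference, common.llm_azure() and common.embed_azure() do roughly the following (a sketch of my setup; the endpoint, key, and deployment names are placeholders, and the exact parameter names may differ a bit between llama_index versions):

Plain Text
import os
from llama_index.llms import AzureOpenAI
from llama_index.embeddings import OpenAIEmbedding

# Azure OpenAI connection settings (placeholders)
api_key = os.environ["AZURE_OPENAI_API_KEY"]
api_base = os.environ["AZURE_OPENAI_ENDPOINT"]
api_type = "azure"
api_version = "2023-07-01-preview"

def llm_azure():
    # "engine" is the Azure deployment name of the chat model
    return AzureOpenAI(
        engine="my-gpt-35-turbo-deployment",  # placeholder deployment name
        model="gpt-35-turbo",
        temperature=0.0,
        api_key=api_key,
        api_base=api_base,
        api_type=api_type,
        api_version=api_version,
    )

def embed_azure():
    # "deployment_name" is the Azure deployment of the embedding model
    return OpenAIEmbedding(
        model="text-embedding-ada-002",
        deployment_name="my-ada-002-deployment",  # placeholder deployment name
        api_key=api_key,
        api_base=api_base,
        api_type=api_type,
        api_version=api_version,
    )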
3 comments