At a glance

The community member ran source code that uses the HyDE Query Transform and hit an error when using AzureOpenAI's LLM, but not when using OpenAI's LLM. They asked whether this is a bug and whether anyone has more information about the issue.

In the comments, another community member suggested that the HyDE query transform does not respect the LLM in the service context, so the community member should pass in an LLMPredictor that wraps the Azure LLM. The original community member confirmed that setting up the LLMPredictor solved the problem.

I ran the following source, which uses the "HyDE Query Transform". With OpenAI's LLM it works fine, but with AzureOpenAI's LLM I get the following error. Is this a bug?
Does anyone know anything about this issue?

Plain Text
File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
    raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>


Plain Text
import logging
import sys
from llama_index import ServiceContext, set_global_service_context
from llama_index.indices.query.query_transform import HyDEQueryTransform
from llama_index.query_engine.transform_query_engine import TransformQueryEngine
import common

# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

# ------------------------------
# ■ Requirements
# ------------------------------

# ------------------------------
# ■ Settings
# ------------------------------
llm_model = common.llm_azure()                    # LLM Model
embed_model = common.embed_azure()                # Embedding Model
service_context = ServiceContext.from_defaults(llm=llm_model,embed_model=embed_model)
set_global_service_context(service_context)

# ------------------------------
# ■ Load Index
# ------------------------------
index = common.load_index_vector_store_simple()

# ------------------------------
# ■ Do Query
# ------------------------------
query_engine = index.as_query_engine()
hyde = HyDEQueryTransform(include_original=True)
hyde_query_engine = TransformQueryEngine(query_engine, hyde)
response = hyde_query_engine.query("Which timecard should I use?")
print(str(response))
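
For context on the traceback above: with the openai 0.x SDK, setting api_type = "azure" makes the service route requests by deployment, so a completion call that carries no engine (the Azure deployment name) is rejected before it is sent. A minimal illustrative sketch, borrowing the deployment name from the llm_azure() helper shown in the comments below:

Plain Text
import os
import openai

openai.api_type = "azure"
openai.api_base = os.environ["AOAI_API_HOST"]
openai.api_key = os.environ["AOAI_API_KEY"]
openai.api_version = "2023-05-15"

# Without 'engine', this raises the InvalidRequestError quoted above:
# openai.ChatCompletion.create(messages=[{"role": "user", "content": "ping"}])

# With 'engine' naming the Azure deployment, the call succeeds.
response = openai.ChatCompletion.create(
    engine="gpt-35-turbo_base",  # Azure deployment name
    messages=[{"role": "user", "content": "ping"}],
)
print(response["choices"][0]["message"]["content"])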
3 comments
The LLM is defined as follows; this works fine when using the normal query_engine.
Plain Text
import os

import openai
from llama_index.llms import AzureOpenAI  # imports added for completeness

## ----------------------------------------
## ■ LLM Model
## ----------------------------------------
def llm_azure() -> AzureOpenAI:
    """
    AOAI LLM Model
      -> model : text-davinci-003 | gpt-35-turbo | gpt-35-turbo-16k | gpt-4 | gpt-4-32k
      -> engine: text-davinci-003_base | gpt-35-turbo_base | gpt-35-turbo-16k_base | gpt-4_base | gpt-4-32k_base
    """
    # Point the openai SDK at the Azure endpoint.
    openai.api_key = os.environ["AOAI_API_KEY"]
    openai.api_base = os.environ["AOAI_API_HOST"]
    openai.api_type = "azure"
    openai.api_version = "2023-05-15"

    # 'engine' is the Azure deployment name; 'model' is the underlying model.
    return AzureOpenAI(model="gpt-35-turbo", engine="gpt-35-turbo_base", temperature=0, max_tokens=800)
Seems like the HyDE query transform doesn't respect the LLM in the service context; you need to pass in an LLMPredictor that wraps the Azure LLM.

https://github.com/jerryjliu/llama_index/blob/df1a63ecbcefa2f0d41ada6253b4c6e7e59c7f6b/llama_index/indices/query/query_transform/base.py#L87
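
A minimal sketch of that fix, reusing the names from the script and helper above (an untested sketch; the llm_predictor keyword comes from the linked constructor):

Plain Text
from llama_index import LLMPredictor
from llama_index.indices.query.query_transform import HyDEQueryTransform
from llama_index.query_engine.transform_query_engine import TransformQueryEngine

# Wrap the Azure LLM so HyDE stops falling back to a default
# (non-Azure) OpenAI predictor.
llm_predictor = LLMPredictor(llm=common.llm_azure())

hyde = HyDEQueryTransform(include_original=True, llm_predictor=llm_predictor)
hyde_query_engine = TransformQueryEngine(index.as_query_engine(), hyde)
response = hyde_query_engine.query("Which timecard should I use?")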
@Logan M
Thanks for the thoughtful reply, setting up the LLMPredictor solved the problem.