Azure

At a glance

The community member is using AzureOpenAI and Azure Cognitive Search to retrieve context and is encountering an error in their code. The comments explain that the Azure OpenAI integration has changed and that api_base should be replaced with azure_endpoint. They also discuss using LLMMultiSelector.from_defaults() or select_multi=True to route a query to multiple sub-indexes and synthesize the responses. There is no explicitly marked answer, but the community members provide suggestions on how to address the issue.

Hi, I'm using AzureOpenAI with Azure Cognitive Search as my DB for retrieving context, and I wanted to know why I get this error in my code:
Plain Text
# Imports assumed for this snippet (llama_index 0.9.x era);
# api_key, api_base, api_type, and api_version are defined elsewhere
from llama_index import (
    PromptHelper,
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
    SummaryIndex,
    VectorStoreIndex,
)
from llama_index.embeddings import OpenAIEmbedding
from llama_index.llms import AzureOpenAI
from llama_index.query_engine import RouterQueryEngine
from llama_index.selectors.llm_selectors import LLMMultiSelector
from llama_index.tools import QueryEngineTool

documents = SimpleDirectoryReader("files").load_data()

llm = AzureOpenAI(
    model="gpt-35-turbo-16k",
    engine="gpt-35-turbo-16k",
    api_key=api_key,
    api_base=api_base,
    api_type=api_type,
    api_version=api_version,
)

# You need to deploy your own embedding model as well as your own chat completion model
embed_model = OpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="text-embedding-ada-002",
    api_key=api_key,
    api_base=api_base,
    api_type=api_type,
    api_version=api_version,
)


prompt_helper = PromptHelper(context_window=16384, num_output=2048)
# `vector_store` is the Azure Cognitive Search vector store configured elsewhere
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(
        embed_model=embed_model,
        prompt_helper=prompt_helper,
        llm=llm,
    )

summary_text = (
    "Context information from multiple sources is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the information from multiple sources and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: ")

index1 = SummaryIndex(
    documents,
    service_context=service_context,
    storage_context=storage_context,
    summary_text=summary_text,
    response_mode="tree_summarize",
)

index2 = VectorStoreIndex(
    documents,
    service_context=service_context,
    storage_context=storage_context,
)

list_query_engine = index1.as_query_engine(response_mode="tree_summarize")
vector_query_engine = index2.as_query_engine(similarity_top_k=5)

list_tool = QueryEngineTool.from_defaults(
    query_engine=list_query_engine,
    description="Useful for summarization questions related to the data source",
)

vector_tool = QueryEngineTool.from_defaults(
    query_engine=vector_query_engine,
    description="Useful for retrieving specific context related to the data source",
)



# initialize router query engine (multi selection, LLM selector)
query_engine = RouterQueryEngine(
    selector=LLMMultiSelector.from_defaults(),
    query_engine_tools=[
        list_tool,
        vector_tool,
    ],
    service_context=service_context
)
#query_engine=index1.as_query_engine()
resp = query_engine.query(
    "Write a complete summary of the call transcripts, in French, from the "
    "context, covering one or more conversations between a policyholder and "
    "one or more insurance assistance operators"
)
print(resp)
19 comments
There have been changes to the Azure OpenAI integration.

You can find all the latest changes related to Azure here: https://docs.llamaindex.ai/en/stable/examples/llm/azure_openai.html
You'll need to change api_base to azure_endpoint
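Based on that docs page, the updated constructors would look roughly like this (a sketch: the azure_endpoint value is assumed to come from your existing config, and AzureOpenAIEmbedding is the Azure-specific replacement for the generic OpenAIEmbedding):
Plain Text
from llama_index.llms import AzureOpenAI
from llama_index.embeddings import AzureOpenAIEmbedding

llm = AzureOpenAI(
    model="gpt-35-turbo-16k",
    engine="gpt-35-turbo-16k",
    api_key=api_key,
    azure_endpoint=azure_endpoint,  # replaces the old api_base/api_type pair
    api_version=api_version,
)

embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="text-embedding-ada-002",
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)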
I can't update, otherwise I'll run into the same issues we saw with Logan M
As you can see here, do you have an idea how to proceed given this issue?
selector=LLMMultiSelector.from_defaults(llm=llm),
you need to specify the LLM in here
or you can replace that kwarg with select_multi=True
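In context, the first option would look like this (a sketch reusing the llm, tools, and service_context from the question; import paths assume the same 0.9.x-era llama_index):
Plain Text
from llama_index.query_engine import RouterQueryEngine
from llama_index.selectors.llm_selectors import LLMMultiSelector

query_engine = RouterQueryEngine(
    selector=LLMMultiSelector.from_defaults(llm=llm),  # pass the Azure LLM explicitly
    query_engine_tools=[list_tool, vector_tool],
    service_context=service_context,
)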
Okay, so the selector with LLMMultiSelector is the same as select_multi=True?
And how do I fix my main issue, the one explained here?
that will fix your issue actually (I think)
I now get the error: "TypeError: RouterQueryEngine.__init__() got an unexpected keyword argument 'select_multi'"
trying with from_defaults
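For reference, select_multi appears to be a from_defaults() argument rather than a constructor argument, which is what triggered the TypeError above; if that reading is right, the working variant is (sketch):
Plain Text
query_engine = RouterQueryEngine.from_defaults(
    query_engine_tools=[list_tool, vector_tool],
    service_context=service_context,
    select_multi=True,  # builds an LLM multi-selector internally
)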
Seems to work. Could you confirm it does the same thing as the selector=LLMMultiSelector option?
Like, it's useful for running multiple queries and then selecting the best one, is that it?
What it does is select multiple sub-indexes to send queries to, get the results of each query, and then ask the LLM the original question again using all the responses as context
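As a rough illustration of that flow (a toy model with hypothetical helper names, not the actual llama_index internals):
Plain Text
from typing import Callable, List

def multi_select_route(
    query: str,
    engines: List[Callable[[str], str]],          # the sub-index query engines
    pick: Callable[[str], List[int]],             # LLM selects which engines fit the query
    synthesize: Callable[[str, List[str]], str],  # LLM re-answers with all partial responses
) -> str:
    chosen = pick(query)                            # 1. select one or more sub-indexes
    partials = [engines[i](query) for i in chosen]  # 2. run the query against each
    return synthesize(query, partials)              # 3. combine responses into a final answer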
Okay, I see. Thanks.