@Logan M @nerdai I am trying an example RAG app using AzureOpenAI and LlamaIndex 0.10.x, but I am getting the exception below.

Plain Text
Traceback (most recent call last):
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 188, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 111, in _get_module_details
    __import__(pkg_name)
  File "/Users/sumved/genia/scm/genia-llamaindex-0.10/test.py", line 81, in <module>
    query_engine = index.as_query_engine()
  File "/Users/sumved/genia/scm/genia-llamaindex-0.10/venv/lib/python3.9/site-packages/llama_index/core/indices/base.py", line 391, in as_query_engine
    return RetrieverQueryEngine.from_args(
  File "/Users/sumved/genia/scm/genia-llamaindex-0.10/venv/lib/python3.9/site-packages/llama_index/core/query_engine/retriever_query_engine.py", line 108, in from_args
    response_synthesizer = response_synthesizer or get_response_synthesizer(
  File "/Users/sumved/genia/scm/genia-llamaindex-0.10/venv/lib/python3.9/site-packages/llama_index/core/response_synthesizers/factory.py", line 66, in get_response_synthesizer
    prompt_helper = prompt_helper or prompt_helper_from_settings_or_context(
  File "/Users/sumved/genia/scm/genia-llamaindex-0.10/venv/lib/python3.9/site-packages/llama_index/core/settings.py", line 306, in prompt_helper_from_settings_or_context
    return settings.prompt_helper
  File "/Users/sumved/genia/scm/genia-llamaindex-0.10/venv/lib/python3.9/site-packages/llama_index/core/settings.py", line 206, in prompt_helper
    self._prompt_helper = PromptHelper.from_llm_metadata(self._llm.metadata)
AttributeError: 'AzureOpenAI' object has no attribute 'metadata'
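The error above happens because LlamaIndex's global `Settings` accesses `llm.metadata` when building a `PromptHelper`, so any object assigned as the LLM must expose that attribute. A plain-Python sketch (with hypothetical stand-in classes, not the real library code) of why a raw Azure SDK client fails where the LlamaIndex wrapper succeeds:

```python
# Hypothetical stand-ins to illustrate the failure mode only.

class RawSdkClient:
    """Stands in for openai.AzureOpenAI -- no `metadata` attribute."""
    pass

class LlamaIndexLLM:
    """Stands in for llama_index.llms.azure_openai.AzureOpenAI."""
    @property
    def metadata(self):
        return {"model_name": "gpt-35-turbo", "is_chat_model": True}

def build_prompt_helper(llm):
    # Mirrors the access pattern in llama_index.core.settings:
    # PromptHelper.from_llm_metadata(self._llm.metadata)
    return llm.metadata

print(build_prompt_helper(LlamaIndexLLM()))   # works
try:
    build_prompt_helper(RawSdkClient())
except AttributeError as exc:
    print(exc)  # 'RawSdkClient' object has no attribute 'metadata'
```

The upshot: the traceback usually means the wrong `AzureOpenAI` class ended up in `Settings.llm`, not that LlamaIndex itself is broken.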
Are you using the AzureOpenAI class from LlamaIndex?
Hmmm, I was not able to reproduce

Some quick checks I did in a python terminal

Plain Text
# check that llm.metadata works
>>> from llama_index.llms.azure_openai import AzureOpenAI
>>> llm = AzureOpenAI(engine="gpt-35-turbo")
>>> llm.metadata
LLMMetadata(context_window=4096, num_output=-1, is_chat_model=True, is_function_calling_model=True, model_name='gpt-35-turbo', system_role=<MessageRole.SYSTEM: 'system'>)

# assign it globally
>>> from llama_index.core import Settings
>>> Settings.llm = llm
>>> from llama_index.core import VectorStoreIndex, Document
>>> index = VectorStoreIndex.from_documents([Document.example()])

# test that it works with global + specific override
>>> query_engine = index.as_query_engine()
>>> query_engine = index.as_query_engine(llm=llm)
>>> 
Does that look somewhat similar to what you did?
Do I need to install a separate package for "llama_index.llms.azure_openai"?
yessir 🫡

Maybe try without the legacy imports

pip install llama-index-embeddings-azure-openai llama-index-llms-azure-openai

Plain Text
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.llms.azure_openai import AzureOpenAI
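For completeness, a minimal end-to-end sketch tying the new-style imports together, assuming llama-index 0.10.x; the endpoint, key, deployment names, and API version are placeholders you would replace with your own Azure values:

```python
from llama_index.core import Settings, VectorStoreIndex, Document
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.llms.azure_openai import AzureOpenAI

# Placeholder credentials -- substitute your own Azure OpenAI resource values.
llm = AzureOpenAI(
    engine="gpt-35-turbo",  # your chat deployment name
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="<api-version>",
)
embed_model = AzureOpenAIEmbedding(
    deployment_name="<your-embedding-deployment>",
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="<api-version>",
)

# Register both globally so as_query_engine() picks them up
# instead of falling back to a default (non-LlamaIndex) client.
Settings.llm = llm
Settings.embed_model = embed_model

index = VectorStoreIndex.from_documents([Document.example()])
query_engine = index.as_query_engine()
```

This is a configuration sketch, not runnable as-is without real Azure credentials.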
Let me give it a try now.
Thank you Logan! It's working now.