
Updated 11 months ago


Hello guys. I've recently been working with the AzureOpenAI service as the LLM for LlamaIndex, trying to create a custom agent. I followed the tutorial at https://docs.llamaindex.ai/en/stable/examples/agent/custom_agent.html, changing only the llm part to AzureOpenAI instead of OpenAI. At the end of the tutorial, when I initialize the agent, I get the following error (shown on thread):
I can't seem to find any solution to this error. Has anyone run into the same one? Or is there a better way to build an agent using AzureOpenAI and LlamaIndex?
Langchain is out of the question because I cannot use that library in this project due to some external factors.
As I previously stated, the classes and everything else built in the script are exactly the same as in the tutorial. Help please!!
Hey @axentar, I just tried running through the same notebook using AzureOpenAI and it worked for me.
The error you shared seems to indicate a missing field in the definition of RetryAgentWorker.
Are we sure that it's in there?
Yes! It is exactly as in the example!
hmmmm. Would you be able to submit an issue and share more of your code so I can try to replicate?
from llama_index.llms import AzureOpenAI, ChatMessage
from llama_index.tools import BaseTool, FunctionTool
from llama_index.embeddings import AzureOpenAIEmbedding
from llama_index.agent import ReActAgent
from llama_index import set_global_service_context, ServiceContext

import os
import json
from typing import Sequence, List
import nest_asyncio

nest_asyncio.apply()

llm = AzureOpenAI(
    model="gpt-3.5-turbo",
    deployment_name=os.getenv("DEPLOYMENT_NAME"),
    api_key=os.getenv("OPENAI_API_KEY"),
    azure_endpoint=os.getenv("OPENAI_API_BASE")
)

embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name=os.getenv("DEPLOYMENT_NAME_EMBEDDINGS"),
    api_key=os.getenv("OPENAI_API_KEY"),
    azure_endpoint=os.getenv("OPENAI_API_BASE")
)
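Both clients above read their credentials from environment variables, so a missing or misnamed variable silently becomes None and only fails later inside the client. A small stdlib-only preflight check can surface that early (the helper name missing_env is made up here; adjust the variable names to your own setup):

```python
import os

def missing_env(names):
    """Return the subset of names that are unset or empty in the environment."""
    return [n for n in names if not os.getenv(n)]

# Variable names assumed by the snippets above.
REQUIRED_VARS = [
    "DEPLOYMENT_NAME",
    "DEPLOYMENT_NAME_EMBEDDINGS",
    "OPENAI_API_KEY",
    "OPENAI_API_BASE",
]

missing = missing_env(REQUIRED_VARS)
if missing:
    print(f"Set these before building the clients: {missing}")
```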

service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model
)

# optionally set a global service context

set_global_service_context(service_context)
from llama_index.agent import CustomSimpleAgentWorker, Task, AgentChatResponse
from typing import Dict, Any, List, Tuple
from llama_index.tools import BaseTool, QueryEngineTool
from llama_index.program import LLMTextCompletionProgram
from llama_index.output_parsers import PydanticOutputParser
from llama_index.query_engine import RouterQueryEngine
from llama_index.prompts import ChatPromptTemplate, PromptTemplate
from llama_index.selectors import PydanticSingleSelector
from pydantic import Field, BaseModel
from llama_index.llms import ChatMessage, MessageRole

DEFAULT_PROMPT_STR = """
Given previous question/response pairs, please determine if an error has occurred in the response, and suggest \
a modified question that will not trigger the error.

Examples of modified questions:
  • The question itself is modified to elicit a non-erroneous response
  • The question is augmented with context that will help the downstream system better answer the question.
  • The question is augmented with examples of negative responses, or other negative questions.
An error means that either an exception has triggered, or the response is completely irrelevant to the question.

Please return the evaluation of the response in the following JSON format.

"""

def get_chat_prompt_template(
    system_prompt: str, current_reasoning: Tuple[str, str]
) -> ChatPromptTemplate:
    system_msg = ChatMessage(role=MessageRole.SYSTEM, content=system_prompt)
    messages = [system_msg]
    for raw_msg in current_reasoning:
        if raw_msg[0] == "user":
            messages.append(
                ChatMessage(role=MessageRole.USER, content=raw_msg[1])
            )
        else:
            messages.append(
                ChatMessage(role=MessageRole.ASSISTANT, content=raw_msg[1])
            )
    return ChatPromptTemplate(message_templates=messages)

class ResponseEval(BaseModel):
    """Evaluation of whether the response has an error."""

    has_error: bool = Field(
        ..., description="Whether the response has an error."
    )
    new_question: str = Field(..., description="The suggested new question.")
    explanation: str = Field(
        ...,
        description=(
            "The explanation for the error as well as for the new question."
            "Can include the direct stack trace as well."
        ),
    )
let me send you my notebook
Cell 22 is where it doesn't work
Thanks! I’ll take a look shortly.
ah this is a classic pydantic issue

For compatibility, the notebook should be doing from llama_index.bridge.pydantic import PrivateAttr
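For context, a common failure mode here is that pydantic models reject underscore-prefixed attributes unless they are declared as private attributes, which is why the worker's private state needs PrivateAttr. A minimal made-up sketch, using pydantic directly rather than the llama_index.bridge import the notebook should use (the Worker class is a hypothetical stand-in, not the notebook's RetryAgentWorker):

```python
from pydantic import BaseModel, PrivateAttr

class Worker(BaseModel):
    """Hypothetical stand-in for an agent worker holding private state."""

    verbose: bool = False
    # Without PrivateAttr, assigning to an undeclared underscore attribute
    # on a pydantic model raises an error instead of storing the value.
    _router: dict = PrivateAttr(default_factory=dict)

w = Worker(verbose=True)
w._router["engine"] = "azure"
```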
Thanks Logan! I just took a look at the notebook, and it worked for me as I ran through it with my own Azure OpenAI information. I didn't have to use the from llama_index.bridge.pydantic import PrivateAttr that Logan mentioned, but that is probably the surest way to ensure compatibility with pydantic.

Would you perhaps be running pydantic v2? Our library still uses pydantic v1, so if you're on v2, that's the compatibility issue we're seeing here.

When I run the snippet below, I get back "1.10.13".
Plain Text
import pydantic
print(pydantic.__version__)
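If you want to check this programmatically before running the notebook, a small stdlib-only sketch (the major_version helper is made up for illustration) can branch on the installed major version, since llama_index at the time of this thread expected pydantic v1:

```python
from importlib.metadata import version, PackageNotFoundError

def major_version(ver: str) -> int:
    """Return the major component of a version string like '1.10.13'."""
    return int(ver.split(".", 1)[0])

try:
    installed = version("pydantic")
except PackageNotFoundError:
    installed = None

if installed and major_version(installed) >= 2:
    print("pydantic v2 detected; expect compatibility issues with llama_index")
```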
Will check on that
It worked with pydantic 1.10.13
Sorry, it didn't. Now I get the error "NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}" when I run a query
Are you using azure?
Plain Text
self._router_query_engine = RouterQueryEngine(
    selector=PydanticSingleSelector.from_defaults(llm=llm),
    query_engine_tools=tools,
    verbose=kwargs.get("verbose", False),
)


Change that line to pass your llm into the selector, as shown above; otherwise the selector falls back to a default OpenAI client, which returns a 404 against an Azure endpoint.