
Updated 3 months ago

Agent

Hey all,
I have a use case where I need to use a ReAct prompt for reasoning and search, after which I want to output data in JSON format.
Here is an example describing the use case:

Context - I am making a JD and CV matching system. The ReAct agent will search all the companies the candidate has worked for, using a provided Bing search tool, and then return the results as JSON derived from a pydantic model.

JD - {KFC company hiring for sales manager}
CV - {candidate has worked for mcDonalds and Pizza Hut as sales rep}
Determine whether the companies the candidate has worked for are similar to KFC by using the search_tool.

After determining, output in the following schema:
Plain Text
from pydantic import BaseModel, Field

class ScoringReasoning(BaseModel):
    reasoning: str = Field(description="Detailed reasoning behind the score.")
    score: str = Field(description="Score for this factor. Always show what the score is out of. For example, if a criterion is scored out of 50, output the score as 20/50 or 21/50, not just as 20 or 21.")
    
# Define your desired data structure.
class scoring_schema(BaseModel):
    name_of_candidate: str = Field(description="Name of the candidate")
    email_of_candidate: str = Field(description="Email of the candidate")
    phone_no_of_candidate: str = Field(description="Phone number of the candidate")

    company_similarity_score: ScoringReasoning = Field(description="Score and reasoning for the similarity between the candidate's previous companies and the hiring company.")
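
A quick way to sanity-check a schema like the one above, independent of the agent, is to validate a hand-written JSON payload of the kind the agent is expected to emit (pydantic v2 API; the class definitions are repeated here so the sketch is self-contained, and the sample values are made up):

```python
from pydantic import BaseModel, Field


class ScoringReasoning(BaseModel):
    reasoning: str = Field(description="Detailed reasoning behind the score.")
    score: str = Field(description="Score in the form '20/50', not just '20'.")


class scoring_schema(BaseModel):
    name_of_candidate: str = Field(description="Name of the candidate")
    email_of_candidate: str = Field(description="Email of the candidate")
    phone_no_of_candidate: str = Field(description="Phone number of the candidate")
    company_similarity_score: ScoringReasoning = Field(
        description="Score and reasoning for the similarity between the "
                    "candidate's previous companies and the hiring company."
    )


# Illustrative payload, the shape the LLM is asked to produce.
sample = """
{
  "name_of_candidate": "Jane Doe",
  "email_of_candidate": "jane@example.com",
  "phone_no_of_candidate": "+1-555-0100",
  "company_similarity_score": {
    "reasoning": "McDonald's and Pizza Hut are fast-food chains like KFC.",
    "score": "40/50"
  }
}
"""

# Strict validation: raises pydantic.ValidationError if a field is missing
# or mistyped, which is useful for debugging the agent's raw output.
result = scoring_schema.model_validate_json(sample)
print(result.company_similarity_score.score)  # 40/50
```

Running this catches schema mistakes (typos in field names, missing fields) before any LLM call is involved.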


I have tried to use the output_parser with the ReAct agent, but this gives errors.
Plain Text
from llama_index.core.agent import ReActAgent
react_agent = ReActAgent.from_tools([search_bing_tool], llm=llm, verbose=True, output_parser=scoring_schema)

is there another approach I can take to accomplish this?
6 comments
Can you share the error traceback? That will help us understand the error better.
A possible reason I can think of is that you are using an open-source LLM that is not good at producing pydantic output.
Plain Text
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import BaseTool, FunctionTool

search_bing_tool = FunctionTool.from_defaults(fn=search_bing)
llm = OpenAI(model="gpt-3.5-turbo-0125")
react_agent = ReActAgent.from_tools([search_bing_tool], llm=llm, verbose=True, output_parser=scoring_schema_with_prompts)
# react_agent = ReActAgent.from_tools([search_bing_tool], llm=llm, verbose=True)


from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.core.prompts import ChatPromptTemplate

hr_manager_prompt_chat = ChatPromptTemplate([
    ChatMessage(role=MessageRole.SYSTEM, 
                content=("You are the HR manager of a company. You have received a CV and a job description. You need to match the CV to the job description.")),
    ChatMessage(role=MessageRole.USER, content=("""Please match the CV to the job description.
        Please check the following:
        Search the companies the candidate has worked for before, and determine whether they are similar to the hiring company.
        jd - {jd}
        cv - {cv}"""))
])


react_agent.chat(hr_manager_prompt_chat.format(jd=jd[0], cv=cvs[0]))


I am using gpt-3.5-turbo-0125; I've attached the error traceback.
Seems like it's a parsing error. I am on version 0.10.4.
Yeah, it's a parsing issue. If you're using OpenAI, try it with the OpenAI agent: https://docs.llamaindex.ai/en/stable/examples/agent/openai_forced_function_call.html
The OpenAI agent doesn't support an output parser. I tried passing the schema in the prompt anyway using pydantic's .model_json_schema(), but it doesn't give good results.
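
One workaround that schema-in-prompt approach suggests (this is a sketch of an assumed two-step pattern, not an official LlamaIndex API) is to let the agent answer in free text first, then make a second plain LLM call that reformats the answer as JSON matching the schema, and validate the result strictly with pydantic. The LLM call is stubbed below; a real run would swap in something like `str(llm.complete(prompt))`:

```python
import json

from pydantic import BaseModel, Field


class ScoringReasoning(BaseModel):
    reasoning: str
    score: str


class scoring_schema(BaseModel):
    name_of_candidate: str
    email_of_candidate: str
    phone_no_of_candidate: str
    company_similarity_score: ScoringReasoning


def to_structured(agent_answer: str, llm_complete) -> scoring_schema:
    """Second-pass extraction: ask the LLM to rewrite the agent's free-text
    answer as JSON conforming to the schema, then validate it strictly."""
    prompt = (
        "Reformat the following answer as a JSON object matching this JSON schema:\n"
        f"{json.dumps(scoring_schema.model_json_schema())}\n\n"
        f"Answer:\n{agent_answer}\n\n"
        "Return only the JSON object."
    )
    raw = llm_complete(prompt)  # e.g. str(llm.complete(prompt)) in llama-index
    return scoring_schema.model_validate_json(raw)


# Stubbed LLM response for illustration; values are made up.
def fake_llm(prompt: str) -> str:
    return json.dumps({
        "name_of_candidate": "Jane Doe",
        "email_of_candidate": "jane@example.com",
        "phone_no_of_candidate": "+1-555-0100",
        "company_similarity_score": {
            "reasoning": "Previous employers are fast-food chains like KFC.",
            "score": "40/50",
        },
    })


parsed = to_structured("Jane Doe worked at McDonald's and Pizza Hut as a sales rep.", fake_llm)
print(parsed.company_similarity_score.score)  # 40/50
```

Because validation raises on malformed output, the second call can be retried in a loop with the validation error appended to the prompt, which tends to be more reliable than asking a ReAct agent to emit JSON directly.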