Find answers from the community

ujjwalk9
Offline, last seen 3 months ago
Joined September 25, 2024
ujjwalk9
·

Agent

Hey all,
I have a use case where I need to use a ReAct prompt for reasoning and search, after which I want to output the data in JSON format.
Here is an example describing the use case:

Context - I am building a JD and CV matching system. The ReAct agent will search for information about all the companies the candidate has worked for, using a provided Bing search tool, and then return the results as JSON derived from a pydantic model.

JD - {KFC company hiring for sales manager}
CV - {candidate has worked for McDonald's and Pizza Hut as a sales rep}
Determine if the companies the candidate has worked for are similar to KFC or not by using the search_tool.

After determining, output in the following format:
Plain Text
from pydantic import BaseModel, Field

class ScoringReasoning(BaseModel):
    reasoning: str = Field(description="Detailed reasoning behind the score.")
    score: str = Field(
        description=(
            "Score for this factor. Always show what the score is out of. "
            "For example, if a criterion is scored out of 50, output the score "
            "as 20/50 or 21/50, not just as 20 or 21."
        )
    )

# Define your desired data structure.
class scoring_schema(BaseModel):
    name_of_candidate: str = Field(description="Name of the candidate")
    email_of_candidate: str = Field(description="Email of the candidate")
    phone_no_of_candidate: str = Field(description="Phone number of the candidate")

    company_similarity_score: ScoringReasoning = Field(
        description="Score and reasoning for the similarity between the candidate's previous companies and the hiring company."
    )


I have tried to use the output_parser argument with the ReAct agent, but this gives errors.
Plain Text
from llama_index.core.agent import ReActAgent
react_agent = ReActAgent.from_tools([search_bing_tool], llm=llm, verbose=True, output_parser=scoring_schema)

Is there another approach I can take to accomplish this?
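One common workaround is a two-step approach: let the ReAct agent reason and search in free text, then validate its final answer against the pydantic schema afterwards. The sketch below assumes pydantic v2; `fake_agent_answer` and `extract_json` are hypothetical stand-ins for the text returned by a call like `react_agent.chat(...)` and for whatever extraction step you choose.

```python
import json
import re

from pydantic import BaseModel, Field  # assumes pydantic v2

class ScoringReasoning(BaseModel):
    reasoning: str = Field(description="Detailed reasoning behind the score.")
    score: str = Field(description="Score in the form '20/50'.")

class scoring_schema(BaseModel):
    name_of_candidate: str
    email_of_candidate: str
    phone_no_of_candidate: str
    company_similarity_score: ScoringReasoning

def extract_json(text: str) -> dict:
    """Pull the first {...} block out of a free-text agent answer."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in agent output")
    return json.loads(match.group(0))

# Stand-in for the agent's final answer; in real code this would be
# str(react_agent.chat("Score this candidate ...")).
fake_agent_answer = """Here is the result:
{"name_of_candidate": "Jane Doe",
 "email_of_candidate": "jane@example.com",
 "phone_no_of_candidate": "555-0100",
 "company_similarity_score": {"reasoning": "All are quick-service restaurant chains.",
                              "score": "40/50"}}"""

result = scoring_schema.model_validate(extract_json(fake_agent_answer))
print(result.company_similarity_score.score)  # 40/50
```

LlamaIndex also ships structured-output helpers (pydantic programs and structured prediction on the LLM) that can replace the manual extraction step, but the overall shape stays the same: agent first, schema validation second.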
6 comments
ujjwalk9
·

Document

I’m using LlamaIndex with the llm.complete() function to generate responses based on a document base. However, I want to keep track of which document corresponds to each response. What strategies can I use to achieve this, considering that I want to use only the built-in features provided by LlamaIndex?

Example structure -

Prompt - "Match the following CVs with the provided JD".
JD - type(Document) - Read using SimpleDirectoryReader()
CV - type(Document) - Read using SimpleDirectoryReader()

LLM Response corresponding to each CV-
CV1 - "This is a good CV ..."
CV2 - "This is not a good CV..."

In this way, I want to keep track of which response came from which CV.
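One simple pattern is to call the LLM once per CV and key each response by the document's file-name metadata (SimpleDirectoryReader stores the source file name under doc.metadata["file_name"]). The sketch below mocks the moving parts so it runs standalone: `Doc` is a stand-in for llama_index's Document and `fake_complete` stands in for llm.complete().

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    """Minimal stand-in for a llama_index Document."""
    text: str
    metadata: dict = field(default_factory=dict)

def fake_complete(prompt: str) -> str:
    """Stand-in for llm.complete(); returns a deterministic fake response."""
    return f"Evaluation based on {len(prompt)} characters of context."

jd = Doc(text="KFC hiring a sales manager", metadata={"file_name": "jd.txt"})
cvs = [
    Doc(text="Sales rep at McDonald's", metadata={"file_name": "cv1.pdf"}),
    Doc(text="Sales rep at Pizza Hut", metadata={"file_name": "cv2.pdf"}),
]

# Key each response by the CV it was generated from.
responses = {}
for cv in cvs:
    prompt = f"Match the following CV with the provided JD.\nJD: {jd.text}\nCV: {cv.text}"
    responses[cv.metadata["file_name"]] = fake_complete(prompt)

print(sorted(responses))  # ['cv1.pdf', 'cv2.pdf']
```

Because the loop drives one completion per document, the mapping from CV to response falls out of the dict keys; no extra bookkeeping is needed beyond the metadata the reader already attaches.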
6 comments