that's an odd error. Did you give your tools a strange name?
Any more details?
Actually no, never mind, the sub-agent throws the same error
So the names of the tools have spaces
I guess that might be the issue?
Like 'Electronic Banking XYZ'
Yeah I think that might be it
Perhaps the regex needs to be updated to allow spaces
Let me try without spaces
That error sounds like it's coming directly from openai
Well yeah, it looks like function names cannot have spaces or special characters in them
I have one question @Logan M - when I try to do this approach - Agents calling Agents, I don't seem to get sources in the responses
When it calls query engines tho, it does get sources in its responses
It should have sources -- it might be a little wonky/nested though, maybe try response.sources
It might be under response.sources[0].sources
for example -- nested agent responses
I get "ToolOutput object has no attribute 'sources'" when trying to access response.sources[0].sources
The sources seem to be empty lists
response.sources[0].raw_output maybe? I would inspect the objects a bit with prints or debugging
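Something along these lines might help poke at what's actually in there (just a sketch -- master_agent is the name used later in this thread, and the attribute names are my guess, not verified):
response = master_agent.chat("What is XYZ?")
for i, tool_output in enumerate(response.sources):
    print(i, tool_output.tool_name, type(tool_output.raw_output))
    # if the tool wrapped another agent/query engine, the nested response may carry source_nodes
    print("  nested source_nodes:", getattr(tool_output.raw_output, "source_nodes", None))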
still not 100% sure if it will actually work lol, but looking at the source code I don't see why not
Also there's more weird behavior - I get this intermediate output:
=== Calling Function ===
Calling function: payments_solutions with args: {
"input": "What is XYZ?"
}
=== Calling Function ===
Calling function: Payment_ABC with args: {
"input": ""
}
So it looks like the question from master agent doesn't get passed to the secondary agent?
But I do get some response
But I suspect that is why there is weird behavior with sources being empty?
Well, it will be writing its own queries to the sub-agents
Right, but why is the input empty tho
Because when I ask the secondary agent via master_agent._tools[0]._query_engine.chat(Question), that clearly works
Yeah, this just prints a response object with source_nodes=[], metadata=None
hmm, I think the query method for agents doesn't properly populate the sources
well, fixed it, kind of, it's not.. elegant I suppose lol
response.sources[0].raw_output.source_nodes
@trace_method("query")
def _query(self, query_bundle: QueryBundle) -> RESPONSE_TYPE:
    # run the chat loop with an empty history, then expose the collected
    # source nodes on the returned Response
    agent_response = self.chat(
        query_bundle.query_str,
        chat_history=[],
    )
    return Response(
        response=str(agent_response),
        source_nodes=agent_response.source_nodes,
    )
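With that patch, query should carry sources through, something like (sketch, using the same master_agent from above, and assuming the chat response exposes source_nodes as in the patch):
response = master_agent.query("What is XYZ?")
print(response.response)
print(response.source_nodes)  # should now be populated from the sub-agent's retrieval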
Given I may have to wait till the next release for this
What is the easiest/best way for me to patch my existing installation with this code?
You can edit BaseAgent in llama_index/agents/types.py
Use pip show llama-index to find your installed dir
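Or from Python, just to locate the directory to edit (same info pip show gives you):
import os
import llama_index
print(os.path.dirname(llama_index.__file__))  # then edit agents/types.py under this directory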
@Logan M so this fix gives me sources when I use agent.query
But it doesn't work when I try agent.chat
Which method do I change to make that work?
hmm yea, that's confusing. That will take some dedicated debugging. Somewhere in llama_index/agents/openai_agent.py, in BaseOpenAIAgent
Think I fixed it with some changes to _call_function to include tool_output.source_nodes when tool_output is an instance of AgentChatResponse
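Roughly the shape of it, as a standalone sketch (the helper name and exact check are my own simplification, not the actual _call_function diff; import path may vary by version):
from llama_index.chat_engine.types import AgentChatResponse

def gather_nested_source_nodes(sources):
    # pull source_nodes out of tool outputs whose raw_output is a nested agent response
    nodes = []
    for tool_output in sources:
        raw = getattr(tool_output, "raw_output", None)
        if isinstance(raw, AgentChatResponse):
            # the tool wrapped another agent -- lift its source nodes up to the parent response
            nodes.extend(getattr(raw, "source_nodes", []) or [])
    return nodes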
However, I'm often noticing the agent gives me a response with no sources when it doesn't call any function
Idk why it doesn't call functions sometimes
How can I force it to use its tools?
Mostly prompt engineering, better tool descriptions.
You can force it to use at least one tool every time by using agent.chat("message", function_call="tool_name")
any way to make it use multiple?
Nope -- not right now anyways
It sounds like to me you should just call things in a sequence yourself?
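e.g. roughly this (sketch -- tool names taken from earlier in this thread):
r1 = agent.chat("What is XYZ?", function_call="payments_solutions")
r2 = agent.chat("And where does Payment_ABC fit in?", function_call="Payment_ABC")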
maybe this is a disadvantage of having many tools
each one needs really good descriptions
otherwise they may get lost in the mix
not sure how I can improve them to be honest, the descriptions seem like the best we've got at the company
In what situations do you think a SubQuestionQueryEngine would be more useful than a RouterQueryEngine and vice versa?
Hmm, I think sub question is better for situations where you expect compare/contrast queries
Have you committed this anywhere in a PR?
Perhaps I can add the hack for the chat methods as well there
or do we want to find a more elegant fix lol
I haven't had time to commit yet -- feel free to open a PR! I'm not sure I see a better solution
Also, I was able to get massive improvements when modifying the system prompts to include: "You must ALWAYS use at least one of the tools provided when answering a question; do NOT rely on prior knowledge."
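For reference, I'm wiring it in roughly like this (sketch -- whether from_tools takes system_prompt directly depends on the llama-index version; prefix_messages with a system ChatMessage is the fallback):
from llama_index.agent import OpenAIAgent

system_prompt = (
    "You must ALWAYS use at least one of the tools provided when answering "
    "a question; do NOT rely on prior knowledge."
)
# `tools` is the same list of tools discussed earlier in this thread
agent = OpenAIAgent.from_tools(tools, system_prompt=system_prompt, verbose=True)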
Okay, will get to it! Just so I'm clear, are the steps to contribute: fork the repo, make a branch (are there guidelines on branch names and commit formats?), commit changes, and raise a PR? Does the PR require notes in a specific format?
You got it.
- No requirements on branch names
- When you make a PR, a template is presented for you to fill out/work from
Once I see your PR, if it's the first one, I have to click "approve" to run our CI testing.
To have a better chance at passing CI, run the following commands
(First command ensures you have the proper linting tool versions)
pip install -r requirements.txt
black .
ruff . --fix
mypy llama_index
mypy sometimes raises more errors than needed -- I wouldn't worry about errors that aren't related to the code you changed
@Logan M I raised a PR with these fixes
Do let me know if I've made any stupid mistakes, pretty new to this
Thanks a ton! Should be able to take a look today
Fixed issues and requested a review again
Can you please take a look when you get time?
Really need agents to return sources correctly lol
ah yea, I just wanted to touch up the sources on the react agent
Having it as a list attribute on the agent means sources won't work for concurrent chats/queries
How do you suggest the attribute be handled instead?
I've basically made it a List[ToolOutput], same as the one for OpenAI Agent
Is the intention something else? @Logan M
I meant, it probably shouldn't be set as an attribute with self.sources
I'm being picky here, but if it's attached to self, then if you are hosting on a server, the sources will get all scrambled for each request
Also, the sources list right now is not ever being reset in the react agent, which is another issue
I can fix this in a bit today though
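Roughly what I mean, as a toy sketch (not the actual react agent code; import path may differ by version):
from llama_index.chat_engine.types import AgentChatResponse

def run_turn(message, tools):
    sources = []  # local to this call, so concurrent requests can't scramble each other
    # ... run the tool-calling loop here, appending each ToolOutput to `sources` ...
    return AgentChatResponse(response="...", sources=sources)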
sources is also a list attached to self in the OpenAI agent - correct?
wonder if what you're proposing would also be a factor there? or is that not the case, because requests are being sent to openai there?
I think I would want to fix it for openai too (just not for this pr)
ah screw it, maybe I'm overthinking it lol