You can see the LLM wrote the answer, and then a whole lot more underneath.
If you look at the source code, the output parser checks for `Action:` before `Answer:`, so it does not detect the end of the ReAct loop when both appear in the output.
https://github.com/run-llama/llama_index/blob/fd1edffd20cbf21085886b96b91c9b837f80a915/llama-index-core/llama_index/core/agent/react/output_parser.py#L104

Have you tried just using a different LLM? Honestly, open-source LLMs tend to make terrible agents.
Alternatively, you could write your own output parser that prioritizes `Answer:` and pass it in (using the linked parser as a base?).
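Something like this could serve as a starting point — a toy sketch of the ordering fix in plain Python, not the actual llama_index API (the real `ReActOutputParser.parse` returns reasoning-step objects and handles streaming; the function name and dict shape here are made up for illustration):

```python
def parse_react_output(output: str) -> dict:
    """Toy parser: check for 'Answer:' BEFORE 'Action:', so a response
    that contains both still terminates the ReAct loop."""
    if "Answer:" in output:
        # Take the final answer, dropping any trailing Action: block
        # the LLM hallucinated after it.
        answer = output.split("Answer:", 1)[1].split("Action:", 1)[0].strip()
        return {"type": "response", "answer": answer}
    if "Action:" in output:
        # No answer yet; extract the tool name from the Action: line.
        action = output.split("Action:", 1)[1].split("\n", 1)[0].strip()
        return {"type": "action", "action": action}
    raise ValueError(f"Could not parse output: {output!r}")
```

With that ordering, `parse_react_output("Thought: done.\nAnswer: 42\nAction: search")` yields a response instead of looping on the spurious action.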