A community member reports that a Bedrock LLM agent works well in a Gradio app, but when the same agent is used in the FastAPI app generated by the create-llama command line tool, it fails to call its tools and hallucinates instead. The issue is specific to non-OpenAI models: the same app works fine with OpenAI, but fails when switched to a model like Claude on Bedrock. The Bedrock agent finds the tools, but never receives their results when producing the Observation step, so it hallucinates them, whereas the identical agent works as expected in the Gradio app.
In the comments, one community member is unsure whether the create-llama FastAPI app even uses an agent. A second community member confirms that choosing the "just a simple chatbot or agent" option creates an AgentRunner instance with a selection of tools such as Wikipedia; they also tried the same Bedrock agent code in the create-llama app, and it always fails when using the tools.
Does anyone know why instantiating the SAME Bedrock LLM agent in Gradio works great and is able to call the tools, but when using the agent with the FastAPI app provided by the create-llama command line tool it is not able to call the tools, it just hallucinates? It only works with OpenAI; when switching to Claude (for example) it is not able to use the tools. The Bedrock agent is able to find the tools, but it does not receive the results when outputting the Observation, it's just hallucinations. It's weird because in the Gradio app it just works.
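For reference, a minimal sketch of the kind of standalone setup described above, assuming the llama-index-llms-bedrock integration and a ReActAgent; the model ID and the multiply tool are illustrative, not taken from the thread:

```python
# Minimal sketch of the standalone (Gradio-style) setup that works.
# Assumes llama-index-llms-bedrock is installed; the model ID and the
# multiply tool are illustrative placeholders.
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.bedrock import Bedrock

def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

llm = Bedrock(
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",
)

# ReActAgent drives the Thought/Action/Observation loop the post refers to.
agent = ReActAgent.from_tools(
    tools=[FunctionTool.from_defaults(fn=multiply)],
    llm=llm,
    verbose=True,  # prints each Observation so tool results are visible
)

print(agent.chat("What is 12 multiplied by 34?"))
```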
Yes, if you choose the FastAPI backend with no data and go with "just a simple chatbot or agent", it creates an agent (an AgentRunner instance) with a choice of tools like Wikipedia and others. I tried bringing identical Bedrock agent code into the create-llama app, and it always fails when using the tools.
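A sketch of how that generated backend wires the agent, paraphrased rather than the generated file verbatim: AgentRunner.from_llm returns an OpenAIAgent for OpenAI function-calling models and falls back to a ReActAgent for other LLMs such as Claude on Bedrock, which is one reason the tool-calling path changes when the LLM is swapped.

```python
# Sketch of the create-llama style wiring (paraphrased, not verbatim).
# AgentRunner.from_llm picks an OpenAIAgent when the configured LLM is an
# OpenAI function-calling model, and falls back to a ReActAgent otherwise,
# so Bedrock/Claude takes a different tool-calling path than OpenAI.
from llama_index.core import Settings
from llama_index.core.agent import AgentRunner
from llama_index.tools.wikipedia import WikipediaToolSpec

tools = WikipediaToolSpec().to_tool_list()

agent = AgentRunner.from_llm(
    tools=tools,
    llm=Settings.llm,  # whatever LLM the app configured (OpenAI, Bedrock, ...)
    verbose=True,
)
```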