Updated 4 months ago

Benefits of relying on LlamaIndex's agent versus using OpenAI's Assistants API or asking GPT-4

At a glance
The community members discuss the benefits of using LlamaIndex's agent versus OpenAI's assistants or GPT-4 for tool selection. The key points are:

- Building a custom agent using LlamaIndex's workflows allows for more control over memory, chat history, and tool setup, which can improve accuracy and customization.

- Retrieving context on each message and including it in the system prompt can further enhance the agent's performance.

- OpenAI's Assistants may be more difficult to debug and customize, as they are "black boxes" that either work out-of-the-box for a specific use case or not at all.

Useful resources
Curious, but is there a benefit to relying on LlamaIndex's agent to pick the tool versus just using OpenAI's Assistants API, or even just asking GPT-4 to pick one?
5 comments
If you want, you can always build your own agent more from scratch. Here's an example with workflows:

https://docs.llamaindex.ai/en/stable/examples/workflow/function_calling_agent/

And here's the general workflows guide, if you haven't used them:
https://docs.llamaindex.ai/en/stable/module_guides/workflow/#workflows
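To make the "build your own agent" idea concrete, here is a framework-agnostic sketch of a hand-rolled function-calling agent loop. This is not LlamaIndex's Workflow API (the linked docs show the real version); every name here (`Tool`, `Agent`, `pick_tool`) is hypothetical, and `pick_tool` stands in for the LLM call that would normally choose the tool.

```python
# Minimal sketch of a custom agent loop: you own the chat history,
# the tool registry, and the tool-selection step, which is the
# control the comment above is describing. All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Tool:
    name: str
    description: str
    fn: Callable[[str], str]

@dataclass
class Agent:
    tools: list
    # Stand-in for an LLM call that returns a tool name, or None
    # to answer directly without a tool.
    pick_tool: Callable[[str, list], Optional[str]]
    # You control the memory: it's just a list you can trim,
    # summarize, or persist however you like.
    chat_history: list = field(default_factory=list)

    def run(self, user_msg: str) -> str:
        self.chat_history.append(f"user: {user_msg}")
        choice = self.pick_tool(user_msg, self.tools)
        if choice is None:
            reply = f"(direct answer to: {user_msg})"
        else:
            tool = next(t for t in self.tools if t.name == choice)
            reply = tool.fn(user_msg)
        self.chat_history.append(f"assistant: {reply}")
        return reply
```

Because the loop is plain code, swapping in different memory strategies or tool-selection prompts is just an edit, which is the debuggability advantage over a hosted black box.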
I mean, having control over the memory, chat history, exact tool setup, etc., is pretty powerful. Especially if you take the workflows approach, you have total control over how tools are called.
For example, you can make an agent that retrieves context on every message, and puts it in the system prompt, to improve accuracy when picking tools or answering the user
https://colab.research.google.com/drive/1wVCkvX7oQu1ZwrMSAyaJ8QyzHyfR0D_j?usp=sharing
Black boxes like OpenAI Assistants will be a nightmare to debug and customize (in my opinion) -- it basically has to work for your use case out of the box, or it won't work at all.