I have an OpenAIAgent and I'm trying to feed it a list of messages. Every method I'm finding takes a string to append to the messages, i.e. `*chat(str, history)`. I just want to call completion on the list of messages. Popping off the last message and submitting it as a string is surely incorrect usage of the library, and I'm sure I'm missing something. Is working with the Agent really what I want? Do I want something else in the stack, like a chat_engine instance?
Thanks @grifsec, but I'm not quite that far along yet. I will be getting into that, but I'm still scoping out what the LlamaIndex API provides and how to use it: what an Agent provides, and when to go lower level without fighting the API.
@Logan M it seems as though, going lower in the API, I don't get the automatic handling of the function call. I can pop off the last message and supply it to the agent's chat, and that will work. Would this also be the point where I'd want to create my own Agent type?
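A minimal sketch of the pop-and-pass pattern being discussed, using plain tuples to stand in for chat messages (the commented `agent.chat(...)` call assumes LlamaIndex's `chat(message, chat_history=...)` shape; treat it as an assumption, not a verified signature):

```python
# Hypothetical chat history as (role, content) tuples, for illustration only.
history = [
    ("user", "hi"),
    ("assistant", "hello"),
    ("user", "what's 2+2?"),
]

# Pop the newest message off and pass the rest as history.
last_role, last_msg = history.pop()

# Assumed llama_index-style call (not verified here):
# response = agent.chat(last_msg, chat_history=history)

print(last_msg)  # the string that would be submitted to the agent
```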
Is this scoping of the agent API correct? Agents handle some automagic messaging for tool calls, and possibly data retrieval with an automatic response. If I go lower level, I'm outside the scope of the agent's automation...
The agent basically sends the chat history + tool dicts to the LLM API, reads the tool calls from the LLM response, calls the tools, and adds the tool results to the chat history.
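That loop can be sketched schematically. This is a toy, self-contained version: `fake_llm`, `tools`, and the message dicts are all illustrative stand-ins, not LlamaIndex or OpenAI APIs. It just shows the shape of the loop: call the model, execute any tool call, append the result, repeat until the model answers in plain text.

```python
# Toy tool registry (illustrative).
def add(a, b):
    return a + b

tools = {"add": add}

def fake_llm(messages):
    # Stand-in for the real LLM API: first turn requests a tool call,
    # second turn (after seeing a tool result) returns a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant",
                "tool_call": {"name": "add", "args": {"a": 2, "b": 3}}}
    return {"role": "assistant", "content": "The answer is 5."}

def run_agent(history):
    while True:
        reply = fake_llm(history)          # send history (+ tools) to the model
        history.append(reply)
        call = reply.get("tool_call")
        if call is None:                   # plain text: we're done
            return reply["content"]
        result = tools[call["name"]](**call["args"])   # execute the tool
        history.append({"role": "tool", "content": str(result)})  # feed result back

print(run_agent([{"role": "user", "content": "add 2 and 3"}]))
```

The point of the sketch is that the "agent" layer is just this orchestration loop around the raw chat completion; dropping to the lower-level API means writing this loop yourself.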
What I'm wanting to do is place my agent between a 3rd-party UI and my backend, as a kind of middleware, to be agnostic to the front-end UI. I could of course make a new object and pass along the token, but I would have to guess at the token counts and whatnot.