
I have an OpenAIAgent and I'm trying to feed it a list of messages

I have an OpenAIAgent and I'm trying to feed it a list of messages. Every method I'm finding takes a string to append to the messages, e.g. *chat(str, history). I just want to call completion on the list of messages. Popping off the last message and submitting it as a string would, I'm sure, be incorrect usage of the library, and I'm sure I'm missing something. Is working with the Agent really what I want? Do I want something else in the stack, like a chat_engine instance?
hmm, yea the only api there is agent.chat("string", chat_history=chat_history)

Your idea for popping isn't totally incorrect lol
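
A minimal sketch of the popping approach, assuming you already have a list of FunctionTool objects named `tools` (import paths vary by llama_index version; the legacy layout is shown here):

```python
from llama_index.agent import OpenAIAgent
from llama_index.llms import ChatMessage

agent = OpenAIAgent.from_tools(tools)

messages = [
    ChatMessage(role="user", content="What's on my calendar today?"),
    ChatMessage(role="assistant", content="You have a 2pm standup."),
    ChatMessage(role="user", content="Anything after that?"),
]

# pop the latest user message off and pass the rest as chat_history
last = messages.pop()
response = agent.chat(last.content, chat_history=messages)
print(response)
```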

If you wanted, you could go lower level and do llm.chat(chat_history, tools=[x.metadata.to_openai_tool() for x in tools])
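
Roughly like this, assuming the same `tools` list and an OpenAI LLM; the extra kwargs are forwarded to the OpenAI chat API, and any tool calls come back unexecuted on the response message:

```python
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-4")

chat_response = llm.chat(
    messages,  # the full list of ChatMessage objects, no popping needed
    tools=[x.metadata.to_openai_tool() for x in tools],
)

# at this level nothing is auto-executed; tool calls just sit on the message
tool_calls = chat_response.message.additional_kwargs.get("tool_calls", [])
```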
Thanks, I'll give the second option a try after work today.
Thanks @grifsec, but I'm not quite that far along yet. I'll be getting into that, but I'm still scoping out what the LlamaIndex API provides and how to use it: what an Agent provides, and when to go lower level without fighting the API.

@Logan M It seems as though going lower in the API, I don't get the automatic handling of the function call. I can pop off the message and supply it to the agent chat, and that will work. Would this also be a point where I might want to create my own Agent type?

Is this scoping of the agent API correct?
Agents handle some automagic messaging for tool calls, and possibly data retrieval, with an automatic response. If I go lower level, I'm outside the scope of the agent handling that automation...
the agent basically sends the chat history + tool dicts to the llm api, reads the tool calls from the llm response, calls the tools, and adds the tool results to the chat history
that's all it's doing afaik
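
For reference, a rough sketch of that loop (an illustration, not the agent's actual source). It assumes the same OpenAI `llm` and `tools` as above, a `chat_history` list of ChatMessage, and the tool-call shape returned by the current OpenAI client:

```python
import json

from llama_index.llms import ChatMessage

tools_by_name = {t.metadata.name: t for t in tools}
openai_tools = [t.metadata.to_openai_tool() for t in tools]

# 1. send chat history + tool dicts to the llm api
response = llm.chat(chat_history, tools=openai_tools)
chat_history.append(response.message)

# 2. read the tool calls from the llm response and call the tools
for tool_call in response.message.additional_kwargs.get("tool_calls", []):
    tool = tools_by_name[tool_call.function.name]
    output = tool(**json.loads(tool_call.function.arguments))

    # 3. add the tool results to the chat history
    chat_history.append(
        ChatMessage(
            role="tool",
            content=str(output),
            additional_kwargs={"tool_call_id": tool_call.id},
        )
    )

# 4. ask the llm again so it can answer using the tool results
final = llm.chat(chat_history, tools=openai_tools)
```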
@Logan M Is there a response_gen() that returns the whole chunk object?
What I'm wanting to do is place my agent between a 3rd-party UI and my backend, as a kind of middleware, to stay agnostic to the front-end UI. I could of course make a new object and pass along the token, but I would have to guess at the token counts and whatnot.
I'm digging around in the codebase to see if I can override a method or find one that returns what I want.
hmm, the response gen just returns the incremental responses from the llm. The last one would have the full response
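
A sketch with the lower-level llm.stream_chat (same assumed `llm` and `chat_history` as above): each ChatResponse chunk carries the incremental delta, the accumulated message so far, and a raw provider chunk object, so the last chunk has the full response:

```python
last_chunk = None
for chunk in llm.stream_chat(chat_history):
    print(chunk.delta, end="", flush=True)  # incremental token
    last_chunk = chunk

full_message = last_chunk.message  # accumulated ChatMessage (full response)
raw_chunk = last_chunk.raw         # provider's raw chunk object, if you need it
```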
Yeah, looks like all the generators of the chat_engine are queuing just the token too, so I'd have to do a lot of work to push through the full response.