Wait, isn't the agent itself handling that chat?
Oh, I guess I didn’t set up an agent. I’m making a FastAPI endpoint that just sets up the index and chat engine, then hits astream_chat and returns a streaming response
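Roughly, a minimal sketch of that flow (assuming a VectorStoreIndex built from local docs; the endpoint path and loading code are placeholders):

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

app = FastAPI()
# Placeholder: build the index however you actually do it
index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())

@app.post("/chat")
async def chat(message: str):
    chat_engine = index.as_chat_engine()
    # astream_chat returns a StreamingAgentChatResponse
    response = await chat_engine.astream_chat(message)
    # async_response_gen() yields tokens as they arrive
    return StreamingResponse(response.async_response_gen(), media_type="text/plain")
```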
I want to add a function call to that flow, but I’m not sure of the best way
@Logan M if that doesn’t make sense I can send an example
Doing something exactly like this
What I really want is, in some cases (using function calling), to return a structured JSON output instead of a streaming response
You can attach an output class to an LLM, and it will always respond with that object class
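For example, something like this (a sketch using as_structured_llm; the Invoice schema is made up):

```python
from pydantic import BaseModel
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI

class Invoice(BaseModel):
    """Hypothetical output class; use your own schema."""
    customer: str
    total: float

llm = OpenAI(model="gpt-4o-mini")
# Every response from sllm is parsed into an Invoice
sllm = llm.as_structured_llm(output_cls=Invoice)

output = sllm.chat([ChatMessage(role="user", content="Acme Corp owes $1,250.00")])
invoice = output.raw  # an Invoice instance
```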
Or if you used an agent, you can add a tool with return_direct=True, and it will return the tool output directly
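e.g. a minimal sketch (get_invoice is a hypothetical function):

```python
from llama_index.core.tools import FunctionTool

def get_invoice(invoice_id: str) -> str:
    """Hypothetical: look up an invoice and return it as JSON."""
    return '{"invoice_id": "%s", "total": 1250.0}' % invoice_id

# With return_direct=True the agent returns the tool output as-is,
# instead of passing it back to the LLM for a final response
invoice_tool = FunctionTool.from_defaults(fn=get_invoice, return_direct=True)
```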
Can you share some examples of what you mean
I don’t want to always have a structured output
Yeah, I was just talking about this yesterday. I’m currently making a custom agent to do this... or, well, considering the OpenAIAgent to do it lol : )
There was one more thing I was going to look into before completing that. I think one of the Workflow types has a FastAPI endpoint built in to access the agent. Thought it might do what I want.
Doesn't this also skip the orchestration of automatically responding to the tool call?
@CryptRillionaire.eth do you know if it works with streaming? Thinking about it more, I think for my case it’d be fine if I could just have a function call within my chat request (when I do chat_engine.astream_chat, but I’m not sure how to pass that in)
(Imo it seems like what you want is an agent with tools, not a basic chat engine 😅)
Gotcha, so would you do agent.astream_chat
Just wanna make sure it’d still work with streaming
And is there a way to give the agent access to the docs
I’m using a vector store index
Yes to the last two as well, but I have not set it up to do so yet.
Cool, I’ll try it out. Looking through the docs I didn’t see an example, so I wasn’t sure. Right now I’m doing index.as_chat_engine to initialize, but I’ll try with an agent
An agent wraps a chat engine. The agent automatically handles making additional requests when it gets tool call responses from the LLM.
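And to the streaming question above, agent.astream_chat streams the same way; a sketch reusing the hypothetical invoice_tool from earlier (inside an async context):

```python
from llama_index.agent.openai import OpenAIAgent

agent = OpenAIAgent.from_tools([invoice_tool], verbose=True)

response = await agent.astream_chat("What does Acme Corp owe?")
# Tokens stream just like with chat_engine.astream_chat
async for token in response.async_response_gen():
    print(token, end="")
```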
@CryptRillionaire.eth thank you! I’ll take a look. Yeah, I just need to figure out how to make the agent aware of my index
@Logan M I’m still pretty confused by this. Once I initialize my vector store index, how do I pass it to the agent? I set up a QueryEngineTool that can check my docs, but it only checks one doc at a time. I’ve also seen some examples where you pass in a retriever
But basically I just want to set up an agent, and then on each chat have it check my docs (using a retriever or query engine)
And then I can define any additional tool (like we talked about above)
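For reference, a sketch of that setup, wiring the index in as a QueryEngineTool alongside any other tools (the name and description are placeholders; raising similarity_top_k is one way to pull in more than one chunk per query):

```python
from llama_index.core.tools import QueryEngineTool
from llama_index.agent.openai import OpenAIAgent

# similarity_top_k controls how many chunks are retrieved per query
query_engine = index.as_query_engine(similarity_top_k=5)

docs_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="docs",
    description="Look up information in the indexed documents.",
)

# Additional tools (e.g. the return_direct one above) go in the same list
agent = OpenAIAgent.from_tools([docs_tool, invoice_tool], verbose=True)
```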