
Updated 11 months ago


At a glance
I think I asked this before, but is there a way to designate one model to handle function calling for an agent, and another to handle the generation?
For example:

    query_engine = get_query_engine(
        user_id=user_id, vector_store=vector_store, model=model, filters=filters
    )
    query_engine_tool = QueryEngineTool.from_defaults(query_engine=query_engine)

    model_instance = LLM_INSTANCES[model]
    return ReActAgent.from_tools(
        tools=[query_engine_tool],
        llm=model_instance,
        chat_history=history,
    )

I would love to pass GPT-4 in to handle all the function calling and decision making, but have another model do the final generation somehow.
1 comment
You should be able to alter the LLM that handles generation on the query_engine side by customizing the response synthesizer, I think. On the query-planning side of the agent, I'm not sure there's a simple way to alter the LLM it uses conditioned on the particular step in its reasoning process.
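Not a definitive answer, but the response-synthesizer route might look roughly like this. Module paths assume a recent `llama_index.core` layout; `index`, the model names, and the variable names are placeholders standing in for whatever the original `get_query_engine` helper builds:

```python
# Sketch only: split the reasoning LLM from the generation LLM.
# Assumes a recent llama_index.core package layout and an existing `index`
# (e.g. a VectorStoreIndex) -- both are placeholders, not from the original post.
from llama_index.core import get_response_synthesizer
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.tools import QueryEngineTool
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

reasoning_llm = OpenAI(model="gpt-4")           # drives the ReAct loop / tool choice
generation_llm = OpenAI(model="gpt-3.5-turbo")  # writes the final answer text

# Generation side: give the response synthesizer its own LLM,
# then build the query engine around it.
synthesizer = get_response_synthesizer(llm=generation_llm)
query_engine = RetrieverQueryEngine(
    retriever=index.as_retriever(),  # `index` assumed to exist already
    response_synthesizer=synthesizer,
)
query_engine_tool = QueryEngineTool.from_defaults(query_engine=query_engine)

# Agent side: the llm passed here handles reasoning and function calling.
agent = ReActAgent.from_tools(tools=[query_engine_tool], llm=reasoning_llm)
```

With this split, the agent's `llm` decides when to call the tool, while every answer synthesized from retrieved context goes through `generation_llm`.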