Chat engine

Hello! When I try to use an agent with a QueryEngineTool to look for answers in the context, I get an exception, and the behavior seems inconsistent.
Even though the agent's method is called "chat", it doesn't accept an engine created with the "as_chat_engine" method:

Plain Text
query_engine = index.as_chat_engine(
    chat_mode='condense_plus_context',
    similarity_top_k=similarity_top_k,
    llm=llm_engine,
    system_prompt=prepared_system_prompt,
)

query_tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="query_tool",
        description=self.query_description,
    ),
)
tools.append(query_tool)

agent = OpenAIAgent.from_tools(
    tools,
    llm=llm_engine,
    verbose=True,
    system_prompt=self.system_prompt,
)
response = agent.chat(query_text, chat_history=chat_history)  # <====== Exception

The exception is "Got output: Error: 'CondensePlusContextChatEngine' object has no attribute 'query'"
That's right, a chat engine is not a query engine. They are two different objects with different interfaces
Yeah, but I'd expect the agent's "chat" method to work with engines created as chat engines. If it doesn't work with chat engines, why is the method called "chat"? And how do I keep the conversation going and retain chat history without using a chat engine?
I'm not sure I follow.

A query engine is stateless. It has a .query() method

A chat engine is stateful, with chat history. It has a .chat() method
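To make the stateless/stateful distinction concrete, here is a toy sketch in plain Python. These are hypothetical classes for illustration, not the real LlamaIndex `QueryEngine`/`ChatEngine` implementations: the point is only that one interface answers each call independently while the other carries history between calls.

```python
class ToyQueryEngine:
    """Stateless: every .query() call stands alone."""

    def query(self, text: str) -> str:
        return f"answer to: {text}"


class ToyChatEngine:
    """Stateful: .chat() remembers previous turns."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def chat(self, text: str) -> str:
        self.history.append(text)
        # Each answer can see everything said so far.
        return f"answer to: {text} (turn {len(self.history)})"


qe = ToyQueryEngine()
ce = ToyChatEngine()
print(qe.query("hi"))  # same result no matter how often you call it
print(ce.chat("hi"))   # turn 1
print(ce.chat("hi"))   # turn 2 -- the history grew
```

This is why the two objects expose different methods: a `.query()` call needs no memory, while a `.chat()` call only makes sense against accumulated history.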
Exactly. The agent.chat method works only with a query_engine, not a chat engine, right? That sounds pretty inconsistent.
I still don't think I know what the issue or inconsistency is here 🫠
agent.chat doesn't work with a chat_engine, only a query_engine. How do I make agent.chat work with a stateful engine that supports chat history?
agent.chat is already stateful; the agent itself is a chat engine.
But I want the question to be asked against my data. How can I do that with an agent?
It's already being asked against your data.

An agent looks at the chat history and list of tools. And decides to respond using a tool (i.e. a query engine) or using the chat history alone. This allows it to respond to normal things like "Hi" and also leverage the tool for questions that involve your data

You can "force" it to use a specific tool at least once by passing in the tool name. But this might lead to some awkward/unwanted behavior

agent.chat("...", tool_choice="query_engine_tool")
Thanks! Sorry, but I don't follow. I don't want to force any tool; I want one of the tools to query my data. Should I just use index.as_query_engine instead?
Maybe! It really depends on what you want.

An agent will have your index as one of the tools. But it chooses whether or not it needs to use it
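Putting the advice together, a hedged sketch of the fix for the original post: build the tool from a *query* engine (which has `.query()`) and let `agent.chat()` provide the statefulness. This assumes a recent llama-index install with the OpenAI agent package; `index`, `llm_engine`, `similarity_top_k`, `query_description`, `system_prompt`, `query_text`, and `chat_history` are the variables from the original snippet, not defined here.

```python
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.agent.openai import OpenAIAgent

# A query engine (stateless, exposes .query()) is what QueryEngineTool expects.
query_engine = index.as_query_engine(
    similarity_top_k=similarity_top_k,
    llm=llm_engine,
)

query_tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="query_tool",
        description=query_description,
    ),
)

# The agent supplies the chat history and decides when to call the tool.
agent = OpenAIAgent.from_tools(
    [query_tool],
    llm=llm_engine,
    verbose=True,
    system_prompt=system_prompt,
)
response = agent.chat(query_text, chat_history=chat_history)
```

So the conversation state lives in the agent, and the tool only answers individual questions against your data.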
Ok, I think I've finally got it 😆 Is QueryEngineTool the only tool that can be used against my data?
You can also wrap any function in a tool too 👀

Plain Text
from llama_index.core.tools import FunctionTool

def my_fn_name(input1: str) -> str:
    """This is used as the tool description."""
    return input1

tool = FunctionTool.from_defaults(fn=my_fn_name)
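As I understand it, `FunctionTool.from_defaults` derives the tool's name from the function's `__name__` and its description from the docstring (which is why the docstring matters above). A toy illustration of that derivation in plain Python, without llama-index, using `inspect`:

```python
import inspect


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the product."""
    return a * b


# Hypothetical stand-in for what FunctionTool reads off the function:
# the name comes from __name__, the description from the docstring.
tool_name = multiply.__name__
tool_description = inspect.getdoc(multiply)

print(tool_name)         # multiply
print(tool_description)  # Multiply two integers and return the product.
```

The agent's LLM sees that name and description when deciding whether to call the tool, so descriptive docstrings directly improve tool selection.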