The community members are discussing the purpose of using the chat_mode="openai" option when creating a chat engine. The main points are:
1. The openai mode allows the chat engine to use tools and query them to provide appropriate answers, but it's unclear how to create these tools.
2. One community member suggests that the openai mode is more reliable than the react mode, as it uses a function calling API instead of requiring complex instructions and parsing.
3. Another community member notes that the openai mode provides chat history, which is not available when directly querying the index.
However, there is no explicitly marked answer to the original question about the purpose of using the openai mode in a chat engine.
What is the point of using chat_mode="openai" when creating a chat engine? Is the whole point of this chat_mode to be able to use tools, query them, and then provide an appropriate answer? How do I create tools when creating a chat engine this way? I'm referring to this article - https://gpt-index.readthedocs.io/en/latest/examples/chat_engine/chat_engine_openai.html
What I'm actually asking is whether there's any point in providing this mode. Why would I use this route rather than directly querying my index? I'm guessing my chat engine itself becomes a tool in this case...
But that will happen even if I just write index.as_chat_engine() without specifying the "openai" mode, right? What is the edge added by this "openai" mode? Theoretically, it seems to be just unnecessary overhead. I mean, creating OpenAIAgents with multiple tools makes sense, but I'm not able to understand why I would use "openai" mode in a chat engine.
Yeah, index.as_chat_engine() will use the openai mode automatically (at least in the latest versions) if you are using a supported LLM. Otherwise it uses react mode.
OpenAI mode is much more reliable than react mode, since the function calling API does not require the complex instructions and parsing that react mode does.
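The automatic selection described above can be sketched as plain Python. This is a rough illustration of the idea, not LlamaIndex's actual source: the `resolve_chat_mode` function and its `startswith("gpt-")` capability check are hypothetical stand-ins for the library's real LLM-capability detection.

```python
def resolve_chat_mode(model_name: str, requested_mode: str = "best") -> str:
    """Pick a chat mode: function-calling agent when the LLM supports
    it, ReAct agent otherwise (illustrative logic only)."""
    # Hypothetical capability check: OpenAI chat models expose the
    # function-calling API that the "openai" mode relies on.
    supports_function_calling = model_name.startswith("gpt-")
    if requested_mode == "best":
        return "openai" if supports_function_calling else "react"
    # An explicitly requested mode is honored as-is.
    return requested_mode

print(resolve_chat_mode("gpt-3.5-turbo"))  # openai
print(resolve_chat_mode("llama-2-13b"))    # react
```

So passing chat_mode="openai" explicitly mostly matters when you want to force the function-calling agent (or fail loudly if the LLM can't support it) rather than rely on the default fallback.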