Optimizing multiple index options for consistent responses
At a glance
The thread discusses the best way to combine multiple index options. The original poster tried CondensePlusContextChatEngine, but it kept rewriting the input and returning an empty response. Commenters suggest an agent or the query fusion retriever as alternatives, noting that an agent is itself a type of chat engine that can be given tools to access each index, and that FunctionCallingAgent is an option for LLMs that support tool calling. There is no explicitly marked answer, but several approaches to combining multiple indexes are offered.
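One of the suggested alternatives, the query fusion retriever, merges results from several indexes into a single ranked list. As a rough illustration of the idea (this is a plain-Python sketch of reciprocal rank fusion, one common fusion strategy, not the library's actual implementation; the document IDs are made up):

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several best-first ranked result lists into one,
    as a fusion retriever conceptually does across multiple indexes.
    `k` dampens how much a single high rank dominates the score."""
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical hits from two different indexes for the same query:
vector_hits = ["doc_a", "doc_b", "doc_c"]
keyword_hits = ["doc_b", "doc_d", "doc_a"]
print(reciprocal_rank_fusion([vector_hits, keyword_hits]))
# doc_b ranks first: it appears near the top of both lists
```

Documents that appear highly in more than one index accumulate score from each list, which is why fusion tends to give more consistent responses than querying any single index alone.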
What's the best way to combine multiple index options? I tried the CondensePlusContextChatEngine and it keeps rewriting e.g. "Hello" into a bunch of different things and then returning an empty response, but most examples in the docs want me to use an index -> chat engine
CondensePlusContext will rewrite the input, yes, but it should still respond as normal (it's only rewriting, then retrieving, then passing the retrieved context + chat history to the LLM). You may need to adjust the system prompt.
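The per-turn flow described above (rewrite, retrieve, then answer) can be sketched in plain Python. This is not the LlamaIndex implementation; the function, the stub `retrieve`/`llm` callables, and the prompt wording are all illustrative, including the system-prompt tweak that tells the model how to handle small talk like "Hello":

```python
def condense_plus_context_turn(user_message, chat_history, retrieve, llm):
    """Sketch of one condense-plus-context chat turn:
    1. condense the message into a standalone question using chat history,
    2. retrieve context for the condensed question,
    3. answer from retrieved context + chat history + original message."""
    if chat_history:
        standalone = llm(
            "Rewrite as a standalone question.\n"
            f"History: {chat_history}\nMessage: {user_message}"
        )
    else:
        standalone = user_message  # nothing to condense on the first turn
    context = retrieve(standalone)
    # The system-prompt adjustment lives here: without the small-talk
    # instruction, a greeting with no matching context can yield an
    # empty or confused answer.
    prompt = (
        "Answer using the context below. If the message is just small "
        "talk (e.g. 'Hello'), respond conversationally instead.\n"
        f"Context: {context}\nHistory: {chat_history}\nUser: {user_message}"
    )
    return llm(prompt)

# Toy run with stand-in components that just echo their inputs:
reply = condense_plus_context_turn(
    "Hello",
    [],
    retrieve=lambda q: f"<docs for {q}>",
    llm=lambda prompt: f"LLM({prompt})",
)
print(reply)
```

The key point is step 3: the original message and chat history still reach the LLM, so the rewriting in step 1 only affects what gets retrieved, not what the model ultimately answers.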