
Optimizing multiple index options for consistent responses

What's the best way to combine multiple index options? I tried the CondensePlusContextChatEngine, but it keeps rewriting e.g. "Hello" into a bunch of different queries and then returning an empty response. Most examples in the docs want me to use an index -> chat engine.
CondensePlusContext will rewrite, yes, but it should respond as normal (it's only rewriting, then retrieving, then putting retrieved context + chat history to the LLM). You may need to adjust the system prompt
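Roughly, one turn of that engine can be sketched in plain Python (the `llm` and `retriever` callables here are illustrative stand-ins, not the real llama-index API):

```python
# Rough sketch of one CondensePlusContextChatEngine turn -- the llm and
# retriever arguments are stand-in stubs, not llama-index objects.
def condense_plus_context_turn(message, chat_history, llm, retriever):
    # 1. Condense history + new message into a standalone search query
    standalone = llm(f"Standalone question for: {chat_history} | {message}")
    # 2. Retrieve context for the condensed query
    context = retriever(standalone)
    # 3. Answer from retrieved context + the original history (the system
    #    prompt would be prepended here -- this is the part you can adjust)
    answer = llm(f"Context: {context}\nHistory: {chat_history}\nUser: {message}")
    chat_history.append((message, answer))
    return answer
```

So even if "Hello" gets rewritten oddly at step 1, the final answer at step 3 still sees the original message and history.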
Beyond that, combining multiple indexes, I usually use either an agent or the query fusion retriever
can I use the agent with a chat engine?
uuhhh an agent is a chat engine -- you'd give the agent tools that access your index
You could mix them I guess
But then chat history/state will get very confusing
Oh, I can just give the agent a chat history? I thought they'd be diff
I basically want a chat engine with the ability to use query tools
yea that'd be an agent then 👍
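The "an agent is a chat engine with tools" point can be sketched as a plain-Python loop (the `llm` callable is a stand-in for a real tool-calling model, and the dict shapes are made up for illustration):

```python
# Sketch: an agent keeps chat history and loops, letting the model either
# call a tool or answer directly. The llm callable is a stand-in that
# returns {"tool": ..., "input": ...} or {"content": ...} (illustrative shapes).
def agent_chat(message, chat_history, tools, llm):
    chat_history.append({"role": "user", "content": message})
    step = llm(chat_history)                 # model decides: tool call or answer
    while step.get("tool"):                  # dispatch tool calls until it answers
        result = tools[step["tool"]](step["input"])
        chat_history.append({"role": "tool", "content": result})
        step = llm(chat_history)
    chat_history.append({"role": "assistant", "content": step["content"]})
    return step["content"]
```

Because the history is just a list you own, you can seed it with an existing conversation before the first call.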
ReAct would be nice, but it needs the chat history, maybe
```python
# llama-index ~0.10 import paths -- check your version
from llama_index.core.retrievers import QueryFusionRetriever
from llama_index.core.retrievers.fusion_retriever import FUSION_MODES

fusion_retriever = QueryFusionRetriever(
    retrievers=retrievers,
    similarity_top_k=3,
    num_queries=3,  # generates 2 extra query variations
    mode=FUSION_MODES.RECIPROCAL_RANK,
    verbose=True,
)
```


so like this
with the CondensePlusContext
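For what it's worth, `FUSION_MODES.RECIPROCAL_RANK` merges the ranked lists from each retriever by reciprocal rank fusion: each doc scores the sum of 1/(k + rank) over every list it appears in (k = 60 is the conventional constant). A minimal pure-Python sketch of the idea:

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    # ranked_lists: one list of doc ids per retriever, best match first.
    # A doc's fused score sums 1/(k + rank) over every list it appears in,
    # so docs ranked highly by several retrievers float to the top.
    scores = {}
    for docs in ranked_lists:
        for rank, doc in enumerate(docs, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```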
If you are using an LLM that supports tools (OpenAI, Anthropic), I would use the FunctionCallingAgent instead of ReAct
and you could give it that retriever as a tool (or use a query engine as a tool, up to you)
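Concretely, the wiring might look something like this — a sketch only: import paths and the agent API shift between llama-index versions, and the tool name/description and `llm`/`fusion_retriever` variables are assumed to exist from earlier setup:

```python
# Sketch, assuming llama-index ~0.10 import paths -- check your version.
from llama_index.core.tools import RetrieverTool
from llama_index.core.agent import FunctionCallingAgent

# Wrap the fusion retriever as a tool the agent can decide to call.
retriever_tool = RetrieverTool.from_defaults(
    retriever=fusion_retriever,
    name="project_docs",                 # illustrative name/description
    description="Look up context from the project's indexes.",
)

agent = FunctionCallingAgent.from_tools(
    [retriever_tool],
    llm=llm,                             # a tool-calling LLM (OpenAI/Anthropic)
    system_prompt="You are the project assistant.",
)

response = agent.chat("Hello")           # plain chat can skip the tool entirely
```

A query engine can be wrapped the same way with `QueryEngineTool.from_defaults(...)` if you'd rather have synthesis happen inside the tool.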
What's weird or annoying is that I gave it a pretty basic prompt of who it is and whatnot, related to the project