Hi @Logan M,
I am torn between using
query_engine = index.as_query_engine(vector_store_query_mode="mmr")
or not. Sometimes I get a better answer with vector_store_query_mode="mmr" and sometimes I get a better answer without it.
Is it possible for me to route the query to both query engines (one with mmr and one without) and have the LLM decide which answer is better and output that?
The only thing I found in the documentation is the RouterQueryEngine: https://gpt-index.readthedocs.io/en/latest/examples/query_engine/RouterQueryEngine.html. However, it has me specify a "description" for each query engine, and in my case both descriptions would be the same, so it doesn't fit what I am looking for.
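To make it concrete, here is a rough sketch of what I am imagining (the helper function, the judge prompt wording, and passing an llm with a .complete() call are just my own guesses, not something I found in the docs):

# assuming `index` is already built
plain_engine = index.as_query_engine()
mmr_engine = index.as_query_engine(vector_store_query_mode="mmr")

def query_both_and_judge(question, llm):
    """Query both engines, then ask an LLM to pick the better answer."""
    answer_plain = str(plain_engine.query(question))
    answer_mmr = str(mmr_engine.query(question))

    judge_prompt = (
        f"Question: {question}\n\n"
        f"Answer A:\n{answer_plain}\n\n"
        f"Answer B:\n{answer_mmr}\n\n"
        "Which answer is better? Respond with the better answer only, verbatim."
    )
    # `llm` would be any completion-style LLM object; the exact class/call is my assumption
    return llm.complete(judge_prompt)

Is there a built-in way to do something like this, or would I just wire it up manually as above?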
Thank you!