
Updated 2 years ago

MMR or not

At a glance

The community member is torn between using query_engine = index.as_query_engine(vector_store_query_mode="mmr") and not using it, as they sometimes get better answers with vector_store_query_mode="mmr" and sometimes without it. They are wondering if it's possible to route the query to both query engines (one with MMR and one without) and have the LLM decide which answer is better and output that. The community member found a related example in the documentation, but it doesn't quite fit their use case.

In the comments, a community member suggests using a similar description but mentioning that one is using MMR and one is using cosine, and seeing what the LLM does. They also suggest using a graph with a list index at the top level, but note that in this case, the LLM is not choosing the best answer, it's aggregating both of them.

The original community member acknowledges the suggestions and says they will give it a try.

Useful resources
Hi @Logan M,

I am torn between using query_engine = index.as_query_engine(vector_store_query_mode="mmr") or not. Sometimes I get better answers with vector_store_query_mode="mmr" and sometimes I get better answers without it.

Is it possible for me to route the query to both query engines (one with mmr and one without) and have the LLM decide which answer is better and output that?

The only thing I found in the documentation is this: https://gpt-index.readthedocs.io/en/latest/examples/query_engine/RouterQueryEngine.html but it has me specify a "description" for each query engine, and since both descriptions would be the same, it doesn't fit what I am looking for.

Thank you πŸ™‚
3 comments
Yea that's a tricky one. You could use a similar description but mention that one is using MMR and one is using cosine, and see what the LLM does lol (just note you'll want to use a multi selector)
Example is over here: https://gpt-index.readthedocs.io/en/latest/examples/query_engine/RouterQueryEngine.html#pydanticmultiselector
You could also use a graph with a list index at the top level. But then the LLM is not choosing the best answer, it's aggregating both of them
I see, okay thanks for the suggestions, I will give it a try πŸ‘