Query engine

Hi, I'm having trouble tracing the cause of this issue in the source code (v0.9.14.post3).

I built a hybrid retriever:

Python
from llama_index.retrievers import BM25Retriever, QueryFusionRetriever
from llama_index.query_engine import RetrieverQueryEngine

# Dense retriever over the vector index
vector_retriever = index.as_retriever(similarity_top_k=3)
# Sparse keyword retriever over the same docstore
bm25_retriever = BM25Retriever.from_defaults(docstore=index.docstore, similarity_top_k=3)

# Fuse both result lists with reciprocal rank fusion
retriever = QueryFusionRetriever(
    llm=llama,
    mode="reciprocal_rerank",
    num_queries=1,  # set to 1 to disable query generation
    retrievers=[bm25_retriever, vector_retriever],
    similarity_top_k=3,
    use_async=True,
    verbose=True,
)

query_engine = RetrieverQueryEngine.from_args(retriever)


Note that the llm argument is set to llama, which is defined elsewhere as one of the Llama-2 chat models. Yet when I call query_engine.query(), it still sends API calls to OpenAI for completion. I thought it shouldn't be doing that, since I've passed a non-default model into the llm argument of QueryFusionRetriever.

What gives?
10 comments
A query engine is two things: a retriever and a response synthesizer
Here, you've only configured the retriever
You can pass the service context into from_args()
Or manually get the response synthesizer and pass that in (that's the part that's actually using the LLM)
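For reference, a minimal sketch of both options, assuming the v0.9 import paths and the llama object from the question:

Python
from llama_index import ServiceContext, get_response_synthesizer
from llama_index.query_engine import RetrieverQueryEngine

# Option 1: hand from_args() a service context so the response
# synthesizer it builds uses llama instead of the default OpenAI LLM
service_context = ServiceContext.from_defaults(llm=llama)
query_engine = RetrieverQueryEngine.from_args(
    retriever,
    service_context=service_context,
)

# Option 2: build the response synthesizer explicitly and pass it in
response_synthesizer = get_response_synthesizer(service_context=service_context)
query_engine = RetrieverQueryEngine(
    retriever=retriever,
    response_synthesizer=response_synthesizer,
)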
Thanks very much for the quick reply!
Yup, that did it πŸ‘
So many components to keep track of πŸ˜†
A bit, but it's all in the name of customizing πŸ˜…

v0.10.x makes global default settings a bit easier, if that sounds attractive πŸ™‚ (though it's some work to upgrade to)
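For reference, a minimal sketch of that v0.10.x pattern, assuming the same llama object; the global Settings object replaces the per-component service context:

Python
from llama_index.core import Settings

# Set the global default LLM once; query engines built afterwards
# pick up llama without needing an explicit service context
Settings.llm = llama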