```
query_engine_base = RetrieverQueryEngine.from_args(retriever,
                                                   service_context=service_context)
response = query_engine_base.query(query)
```

I have two questions here. By default, how many chunks get sent as context to my LLM? And second, can I pass a similarity_top_k argument in the code above to control how many of the most similar chunks get appended to the context?
Assuming you are using a vector store index, the default top k is 2

When you create the retriever, you can specify the similarity_top_k
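Mechanically, similarity_top_k just caps how many of the highest-scoring chunks the retriever returns before they get packed into the prompt. A toy sketch of that selection step (pure Python with made-up embedding vectors, not LlamaIndex's actual internals):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy chunk embeddings; in practice these come from an embedding model.
chunks = {
    "chunk_a": [1.0, 0.0, 0.0],
    "chunk_b": [0.9, 0.1, 0.0],
    "chunk_c": [0.0, 1.0, 0.0],
    "chunk_d": [0.0, 0.0, 1.0],
}

def retrieve(query_vec, similarity_top_k=2):
    # Score every chunk against the query, keep the top k.
    scored = sorted(chunks.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:similarity_top_k]]

query_vec = [1.0, 0.05, 0.0]
print(retrieve(query_vec))                      # default k=2 -> 2 chunks
print(retrieve(query_vec, similarity_top_k=5))  # k=5 -> up to 5 chunks
```

The retrieved chunks are then concatenated into the context that gets sent to the LLM, so raising similarity_top_k directly raises how many chunks end up in the prompt.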
So if I do
```
retriever = index.as_retriever(similarity_top_k=5)
query_engine_base = RetrieverQueryEngine.from_args(retriever,
                                                   service_context=service_context)
response = query_engine_base.query(query)
```

So if I do this, my top 5 chunks get appended to the context?
thanks πŸ™‚