That will depend on your data, mainly the chunk size. The largest factor here will be the LLM: with too much context, answer synthesis gets worse (the "lost in the middle" problem, etc.)
As for cost, it mostly comes from passing more results to the LLM as context. Embedding models are pretty cheap in comparison.
But with those numbers (assuming similar chunk sizes), it would be roughly 6x the cost.
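To make the scaling concrete, here's a rough back-of-envelope sketch. The chunk size, token price, and top-k values are all made-up assumptions for illustration; the point is just that prompt cost grows linearly with the number of retrieved chunks:

```python
# All numbers are illustrative assumptions, not real pricing.

def prompt_cost(num_chunks, tokens_per_chunk=500, price_per_1k_tokens=0.01):
    """Approximate cost of the retrieved context passed to the LLM."""
    return num_chunks * tokens_per_chunk / 1000 * price_per_1k_tokens

base = prompt_cost(3)    # e.g. top-3 retrieval
wider = prompt_cost(18)  # 6x as many chunks of the same size
print(f"base: ${base:.4f}, wider: ${wider:.4f}, ratio: {wider / base:.0f}x")
```

Since cost is linear in context tokens, any fixed multiple of retrieved chunks (at similar chunk sizes) gives the same multiple on the LLM-side cost.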