Hey, we are trying to use 10+ nodes in prompt formation, with the following parameters: model = gpt-4-32k, model_context_window = 25000, chat_history_max_tokens = 4000, max_output_tokens = 500, index_response_mode = compact.
Even with this setup, refine prompts are still being triggered. (Currently we run with 3 nodes and the parameters model = gpt-4, model_context_window = 7500, max_output_tokens = 500, and compact mode, and refine prompts are not an issue there.) When refine prompts do kick in, we see a noticeable degradation in answer quality, so we are looking for a way to avoid them. Please assist.
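For context, here is the rough token-budget check we used to reason about whether all 10+ nodes should fit in a single compact prompt. The per-node chunk size and the prompt/query overhead below are assumptions on our part, not measured values:

```python
# Rough token-budget check: does one compact prompt fit all retrieved nodes?
# Chunk size and template overhead are assumed for illustration only.

model_context_window = 25000    # from the gpt-4-32k config above
chat_history_max_tokens = 4000
max_output_tokens = 500
prompt_template_overhead = 300  # assumed: system prompt + QA template + query
tokens_per_node = 1024          # assumed chunk size per retrieved node
num_nodes = 10

available_for_context = (
    model_context_window
    - chat_history_max_tokens
    - max_output_tokens
    - prompt_template_overhead
)
needed_for_nodes = num_nodes * tokens_per_node

print(f"available for node text: {available_for_context}")  # 20200
print(f"needed for {num_nodes} nodes: {needed_for_nodes}")   # 10240
# If needed_for_nodes exceeds available_for_context, compact mode has to
# split the context across multiple LLM calls, which is when refine prompts
# come into play.
```

Under these assumptions the nodes should comfortably fit in one compact prompt, so we are unsure why the refine path is still being used.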