hey Logan M, ravitheja, jerryjliu0,
we are trying to use 10+ nodes in prompt formation.
Our parameters are:
model = gpt-4-32k,
model_context_window = 25000,
chat_history_max_tokens = 4000,
max_output_tokens = 500,
index_response_mode = compact

but refine prompts are still being triggered.
(Currently we are using 3 nodes with model = gpt-4, model_context_window = 7500, max_output_tokens = 500, and compact mode, and there is no problem with refine prompts.)
With refine prompts we have seen a degradation in answer quality, so we are looking for a way to avoid them.
Please assist.
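A back-of-the-envelope token budget shows why compact mode can still fall back to refine. This is a sketch of the accounting, not LlamaIndex's exact implementation; the `prompt_overhead` and per-node token counts below are assumed values for illustration:

```python
def chunks_per_call(context_window: int,
                    chat_history_max_tokens: int,
                    max_output_tokens: int,
                    prompt_overhead: int,
                    node_tokens: int) -> int:
    """Rough estimate of how many retrieved nodes fit into one LLM call.

    Whatever the prompt template, chat history, and reserved output
    space do not consume is available for stuffing node text.
    """
    budget = (context_window
              - chat_history_max_tokens
              - max_output_tokens
              - prompt_overhead)
    return max(budget // node_tokens, 0)

# The settings from the question, with an assumed ~1000-token prompt
# template and ~2048 tokens per node:
fits = chunks_per_call(
    context_window=25000,
    chat_history_max_tokens=4000,
    max_output_tokens=500,
    prompt_overhead=1000,
    node_tokens=2048,
)
print(fits)  # prints 9
```

Under these assumed sizes only 9 nodes fit in the first call, so a 10+ node retrieval forces compact mode to spill into a second call, which is where the refine prompt comes from. Shrinking `chat_history_max_tokens`, using smaller chunks, or raising `model_context_window` toward the model's real limit all raise this number.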
1 comment
can you share some code for this? Curious how you are setting up the service context
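For reference, a service context with these parameters might be wired up roughly as follows (a sketch against the legacy `ServiceContext` API; `index` is assumed to be an already-built index, and exact keyword names may differ by release):

```python
from llama_index import ServiceContext
from llama_index.llms import OpenAI

# Assumed mapping of the question's parameters onto ServiceContext:
# model_context_window -> context_window, max_output_tokens -> num_output.
llm = OpenAI(model="gpt-4-32k", max_tokens=500)
service_context = ServiceContext.from_defaults(
    llm=llm,
    context_window=25000,
    num_output=500,
)

# `index` is a pre-built index (not shown here).
query_engine = index.as_query_engine(
    service_context=service_context,
    similarity_top_k=10,
    response_mode="compact",
)
```

Sharing the actual setup would confirm whether the 25000-token window is really being applied, since a context window inherited from defaults would explain the early fallback to refine.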