Actually, in general the prompt helper itself is more or less deprecated at this point. It's best to set those values on the service context directly.
I also suspect setting the chunk size through the prompt helper like that may be causing issues. Try skipping the prompt helper and doing something like this instead, you may actually see improved results:
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import OpenAI

# configure the LLM once
llm = OpenAI(model="gpt-3.5-turbo", temperature=0)

# chunk_size (and related settings like context_window / num_output) live on the service context now
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=1024)

# make it the default for the whole app
set_global_service_context(service_context)
And since it's set globally, there's no need to pass it in anywhere 👌
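For example, once that runs, index construction just picks up the global automatically. A quick sketch, assuming you have some documents in a local ./data folder (that path is just for illustration, swap in your own loading code):

from llama_index import SimpleDirectoryReader, VectorStoreIndex

# no service_context argument anywhere; the global one set above is used
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()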