@jerryjliu0 Quick question: I am currently using this model and prompt helper:

```python
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.9, model_name="text-davinci-003"))

# define prompt helper
# set maximum input size
max_input_size = 4000096
# set number of output tokens
num_output = 4000096
# set maximum chunk overlap
max_chunk_overlap = -2000

prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
```

But my output is being cut off mid-way. Am I missing something? The output is a five-answer quiz, so it's not terribly long.
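For comparison, here is roughly what I think a more typical setup for text-davinci-003 looks like; the import paths and the exact numbers below are my own guesses from the docs, not something I've verified:

```python
# Sketch of what I assume is a more typical configuration (values are guesses)
from langchain.llms import OpenAI
from llama_index import LLMPredictor, PromptHelper  # import path may differ by version

llm_predictor = LLMPredictor(
    llm=OpenAI(
        temperature=0.9,
        model_name="text-davinci-003",
        max_tokens=512,  # assumption: raise the completion limit so the quiz isn't truncated
    )
)

# text-davinci-003 has a ~4097-token context window, so these values
# need to fit inside that budget (again, assumed defaults, not verified)
max_input_size = 4096   # model context window
num_output = 512        # kept in sync with the LLM's max_tokens
max_chunk_overlap = 20  # small positive overlap between chunks

prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
```

Is something closer to that what's expected here, or is the truncation coming from somewhere else?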