@jerryjliu0 Quick question: I am currently using this model:

llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.9, model_name="text-davinci-003"))

# define prompt helper
# set maximum input size
max_input_size = 4000096
# set number of output tokens
num_output = 4000096
# set maximum chunk overlap
max_chunk_overlap = -2000

prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)

But my output is being cut off midway. Am I missing something? The output is a five-answer quiz, so it isn't terribly long.
@jerryjliu0 I tried this, but the output is still 256. I also decreased the chunk size to see if that was contributing. No matter what I change in LLMPredictor or PromptHelper, the output is always 256.
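For reference, a minimal sketch of how these settings are typically tied together, assuming the legacy llama_index / LangChain APIs. The `max_tokens` argument and the specific values below (512 output tokens, 4096-token context, overlap of 20) are assumptions for illustration, not settings confirmed in this thread; LangChain's `OpenAI` wrapper defaults `max_tokens` to 256, which would match the 256-token cap described above:

```python
# Sketch only -- exact imports and defaults vary by llama_index version.
from langchain.llms import OpenAI
from llama_index import LLMPredictor, PromptHelper

# max_tokens on the LangChain OpenAI wrapper controls completion length;
# it defaults to 256, which matches the truncation seen above. (512 is an
# assumed value, not a confirmed fix.)
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0.9, model_name="text-davinci-003", max_tokens=512)
)

# text-davinci-003 has a roughly 4097-token context window, so max_input_size
# should stay within that; num_output is kept in sync with max_tokens above.
max_input_size = 4096
num_output = 512
max_chunk_overlap = 20
prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
```

If `num_output` and `max_tokens` are not kept in sync, the prompt helper may reserve a different amount of room for the answer than the model is actually allowed to generate.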