
@jerryjliu0 Quick question: I am currently using this model: llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.9, model_name="text-davinci-003"))

# define prompt helper

# set maximum input size
max_input_size = 4000096

# set number of output tokens
num_output = 4000096

# set maximum chunk overlap
max_chunk_overlap = -2000

prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
But my output is being cut off mid-way.
Am I missing something? The output is a five-answer quiz, so it's not terribly long.
8 comments
The max input size of davinci is 4096 (and the output tokens count towards the input size), so num_output needs to be smaller than max_input_size.
is num_output characters or tokens?
What's the default?
num_output is tokens
a token roughly ~= a word (kind of)
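For reference, a configuration along these lines keeps the prompt and the completion inside davinci's 4096-token window (a minimal sketch, assuming the legacy llama_index 0.x PromptHelper signature used above; the import paths and the specific numbers are assumptions, not from this thread):

from langchain.llms import OpenAI
from llama_index import LLMPredictor, PromptHelper

# davinci's context window: prompt + completion must fit in 4096 tokens
max_input_size = 4096
# reserve tokens for the generated answer; must be well below max_input_size
num_output = 512
# small positive overlap between chunks
max_chunk_overlap = 20

prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.9, model_name="text-davinci-003"))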
@jerryjliu0 I tried this, but the output is still 256. I also decreased the chunk size to see if that was contributing. No matter what I seem to change in LLMPredictor or prompt_helper, the output is always 256.

Do you have any ideas?
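If the completion is stuck at exactly 256 tokens, one thing worth checking is the LLM's own max_tokens setting (a sketch, assuming the LangChain OpenAI wrapper, whose max_tokens defaults to 256 and is not overridden by PromptHelper's num_output):

from langchain.llms import OpenAI
from llama_index import LLMPredictor, PromptHelper

# desired completion length in tokens
num_output = 512

# raise max_tokens on the LLM itself so completions are not capped at the 256-token default
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0.9, model_name="text-davinci-003", max_tokens=num_output)
)
# keep PromptHelper consistent with the LLM's completion budget
prompt_helper = PromptHelper(4096, num_output, 20)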