
Is the num_output in PromptHelper working correctly (with ListIndex)?

Is the num_output in PromptHelper working correctly (with ListIndex)? My outputs continue to come out at 256 tokens despite overriding the setting with a larger number.
Oh, the main thing you have to set is the LLM in LangChain (see https://gpt-index.readthedocs.io/en/latest/how_to/custom_llms.html). The PromptHelper num_output can be derived from the LLMPredictor (so you don't need to set a PromptHelper at all if you use OpenAI, AI21, or Cohere), but otherwise you'll still have to define a PromptHelper manually.
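For reference, here is a minimal sketch of that setup, assuming the gpt-index API from around the time of this thread; the model name, token values, query string, and "data" directory are illustrative, not from the thread:

```python
from langchain import OpenAI
from gpt_index import GPTListIndex, LLMPredictor, PromptHelper, SimpleDirectoryReader

# LLM defined via langchain; max_tokens controls how many tokens it may generate,
# and gpt-index derives num_output from this value for supported providers
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=512)
)

# Only needed for LLMs whose metadata gpt-index can't derive automatically
# (i.e. anything other than OpenAI/AI21/Cohere); num_output should match
# the max_tokens value above
max_input_size = 4096
num_output = 512
max_chunk_overlap = 20
prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)

documents = SimpleDirectoryReader("data").load_data()
index = GPTListIndex(
    documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper
)
response = index.query("Summarize the documents.")
print(response)
```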
Take a look and let me know if the example makes sense.
@jerryjliu0 Just to clarify: would this involve setting the max_tokens value in the LLMPredictor?
Thanks so much. I believe I have it working: I do see longer outputs in some of my prompts. I'm having a secondary problem where one particular prompt is causing the response to get cut off.

The problematic prompt is of the format: "Generate 4 multiple choice questions with 4 answer choices each and include detailed answer explanations." It usually ends up truncating after outputting one question and its answer explanation. For some reason, if I don't ask for answer explanations, it successfully outputs all the questions.

Anyway, long story short: max_tokens does seem to be working, so thank you!