
Updated 2 years ago

Hey guys. I'm trying to use this for non-English data, but it returns extremely short answers, no matter how much data I train it on. Does anyone have an idea what might be wrong? Is this a problem on Llama's side, or an OpenAI API limitation? Also, please note that someone is spamming the GitHub issues with ads right now.
1 comment
It seems non-English data uses a lot more tokens than English data.

OpenAI's completion API defaults to a maximum of 256 output tokens, so longer answers get cut off.

See this page for details on how to change that: https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_llms.html#example-fine-grained-control-over-all-parameters

(Or, if you haven't upgraded to 0.6.0-alpha yet, here are the older docs:)
https://gpt-index.readthedocs.io/en/v0.5.27/how_to/customization/custom_llms.html#example-fine-grained-control-over-all-parameters
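In short, the fix is to pass a larger `max_tokens` when constructing the LLM and hand that to the index. A minimal sketch following the linked docs, assuming the 0.6.x-era API (the `ServiceContext` name and `from_documents` constructor differ in 0.5.x — see the second link for that version); the `"data"` directory and the 512 limit are placeholder values:

```python
# Config sketch (0.6.x-era gpt-index / llama_index API; requires an OpenAI API key).
from llama_index import (
    GPTSimpleVectorIndex,
    LLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
)
from langchain.llms import OpenAI

# Raise max_tokens above the 256 default so non-English answers
# (which consume more tokens per word) aren't truncated mid-sentence.
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=512)
)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# Build the index with the customized LLM settings.
documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex.from_documents(
    documents, service_context=service_context
)
```

If you're still seeing short answers after this, it may also be worth counting how many tokens your prompts consume, since a large prompt leaves less room for the completion.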