
Updated 4 months ago

hi all I have a very simple question

At a glance
hi all, I have a very simple question. With the release of GPT-4 it is possible to send GPT-4 8K or 32K tokens of context. My question is whether the function load_and_split is able to fully manage this, or whether I need to split manually with the new limits.
1 comment
It should all be handled under the hood automatically, assuming you set the model to gpt-4 or gpt-4-32k.
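If you do want to split manually, here is a rough, self-contained sketch of what that looks like. The function name `split_for_context` and the ~4-characters-per-token heuristic are my own assumptions for illustration (for accurate counts you would use a real tokenizer such as tiktoken), not the library's actual implementation:

```python
def split_for_context(text, max_tokens=8192, chars_per_token=4, overlap_tokens=200):
    """Split text into overlapping chunks that fit a model's context window.

    Assumes a rough heuristic of ~4 characters per token; swap in a real
    tokenizer (e.g. tiktoken) for exact budgeting. Consecutive chunks
    overlap by `overlap_tokens` worth of characters so no sentence is
    cut without context.
    """
    chunk_size = max_tokens * chars_per_token            # chars per chunk
    step = (max_tokens - overlap_tokens) * chars_per_token  # advance per chunk
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last chunk already covers the end of the text
    return chunks
```

With the larger gpt-4 context windows you would pass `max_tokens=8192` or `max_tokens=32768` (minus room for the prompt and the reply), whereas load_and_split with default settings produces much smaller chunks than either limit requires.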