Hi all, I have a very simple question
amaurino#007
2 years ago
Hi all, I have a very simple question. With the release of GPT-4, it is possible to send GPT-4 either 8K or 32K tokens of context. My question is whether the function load_and_split can fully manage this, or whether I need to split manually with the new limits.
1 comment
Logan M
2 years ago
It should all be handled under the hood automatically, assuming you set the model to gpt-4 or gpt-4-32k.
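For readers who do want to split documents themselves, the core idea behind a chunking helper like load_and_split can be sketched in plain Python. This is a minimal illustration, not the library's actual implementation: it splits by characters rather than tokens, and the chunk sizes below are illustrative, not the real gpt-4 context limits.

```python
def split_text(text: str, chunk_size: int, overlap: int = 0) -> list[str]:
    """Split `text` into chunks of at most `chunk_size` characters,
    optionally overlapping consecutive chunks by `overlap` characters
    (overlap helps preserve context across chunk boundaries)."""
    if chunk_size <= 0 or not 0 <= overlap < chunk_size:
        raise ValueError("need chunk_size > 0 and 0 <= overlap < chunk_size")
    step = chunk_size - overlap  # how far the window advances each chunk
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

# Example: a 10,000-character document split into <= 4,000-character chunks
# with 200 characters of overlap between neighbours.
doc = "x" * 10_000
chunks = split_text(doc, chunk_size=4_000, overlap=200)
print(len(chunks))                              # → 3
print(all(len(c) <= 4_000 for c in chunks))     # → True
```

In practice a real splitter counts tokens with the model's tokenizer and prefers to break on paragraph or sentence boundaries rather than at a fixed character offset, but the sliding-window-with-overlap structure is the same.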