
Trying to use the latest `gpt-4-turbo-preview`

Hello, I'm trying to use the latest gpt-4-turbo-preview, but it isn't showing up as an option, and there's also no option for gpt-4-0125-preview. Is there a way around this, or are we stuck with gpt-4-0613-preview?
11 comments
And a follow-up: if we're using the chat engine, will the context be 'refreshed' over time, or will it eventually run out and need to be restarted?
I'm more interested in the models; I can figure out a way around the context limit.
Do you have the latest version of llama-index? I see the model is available in the source code.
[Attachment: image.png]
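For reference, a minimal sketch of that upgrade path, assuming the OpenAI LLM wrapper; the import path differs between llama-index versions, so treat this as illustrative:

```python
# Sketch: after upgrading llama-index (e.g. `pip install -U llama-index`),
# newer model names can be passed straight to the OpenAI LLM wrapper.
# Older releases import it as `from llama_index.llms import OpenAI`.
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4-turbo-preview")  # or "gpt-4-0125-preview"
print(llm.complete("Say hello"))
```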
Chat engines use a window buffer memory by default, so they shouldn't need a restart.
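A rough sketch of what that default looks like, assuming ChatMemoryBuffer (the token limit here is illustrative):

```python
# Sketch: the default chat memory is a token-limited buffer that drops the
# oldest turns once the limit is hit, so the engine keeps running without a restart.
# Newer releases import from `llama_index.core.memory` instead.
from llama_index.memory import ChatMemoryBuffer

memory = ChatMemoryBuffer.from_defaults(token_limit=3000)  # illustrative limit
```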
Huh? I updated and don't have those options. I'll just uninstall and reinstall, thanks.
I'm looking through the docs but I cannot find a way to change this, any advice?
unless it's the 'custom history' setting
I'd like to be able to use 'best' without using its built-in memory.
Looking through the GitHub repo, I can't seem to find anything about 'best' in the 'chat_engine' folder. I'll keep digging a little more, but I must go to bed soon.
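For what it's worth, 'best' appears to be a chat_mode value resolved inside as_chat_engine rather than a standalone engine class. A hedged sketch of passing your own memory through it, assuming extra kwargs are forwarded to the underlying engine:

```python
# Sketch, not verified against every version: supply a custom memory object
# alongside chat_mode="best"; as_chat_engine is assumed to forward extra kwargs.
from llama_index import Document, VectorStoreIndex
from llama_index.memory import ChatMemoryBuffer

index = VectorStoreIndex.from_documents([Document(text="example context")])
memory = ChatMemoryBuffer.from_defaults(token_limit=2000)  # illustrative limit
chat_engine = index.as_chat_engine(chat_mode="best", memory=memory)
```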
Might have gotten it. Sorry for the tag, I forgot to turn off the tag thing.
It was confusing to set up, but I got it done: a custom chat engine with my own context is complete.
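In case it helps someone later, one possible shape of a "custom chat engine with my own context", assuming ContextChatEngine (class names and import paths vary by version; the prompt and limit are illustrative):

```python
# Sketch of a context chat engine wired up with an explicit retriever and memory.
from llama_index import Document, VectorStoreIndex
from llama_index.chat_engine import ContextChatEngine
from llama_index.memory import ChatMemoryBuffer

index = VectorStoreIndex.from_documents([Document(text="my own context goes here")])
memory = ChatMemoryBuffer.from_defaults(token_limit=4000)  # illustrative limit
chat_engine = ContextChatEngine.from_defaults(
    retriever=index.as_retriever(),
    memory=memory,
    system_prompt="Answer strictly from the retrieved context.",  # illustrative prompt
)
print(chat_engine.chat("Which model versions are supported?"))
```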