
interesting: OpenAI might start offering a context length of up to 32k tokens https://twitter.com/transitive_bs/status/1628118163874516992
This could potentially make gpt-index even more powerful as we'd be able to store even more context in every node.
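To make that concrete, here's a minimal sketch of what larger nodes could look like, assuming the present-day llama_index API (gpt-index was later renamed llama_index); the chunk_size value and data path are illustrative only:

```python
# Sketch: configure larger nodes so each one carries more context.
# Assumes llama-index >= 0.10 and a local ./data directory of documents.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

# With a 32k-token window, chunks can be far larger than the ~1k-token
# defaults that made sense when models capped out at 4k tokens.
Settings.node_parser = SentenceSplitter(chunk_size=4096, chunk_overlap=200)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Fewer, bigger nodes per query means more context packed into each call.
response = index.as_query_engine(similarity_top_k=2).query(
    "What does the source say about context length?"
)
print(response)
```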
for more info, check out this Google Doc (I can't verify its authenticity): https://docs.google.com/document/d/1EgPqpMYZRCxP4YFIbfZY9OWm1TykPzds8z5yUM13vbM/edit#
32K tokens is wild, curious what they are doing to enable that (besides throwing more compute at it)
Oh, they are charging a lot more according to the doc 💸
this is exciting!

yeah i'm excited about how this will augment llamaindex. there are obviously still compute/cost considerations in putting 32k tokens into a single LLM call, and I'd love to see how the expanded context window broadens current use cases while still accounting for the necessary cost/latency trade-offs
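As a rough back-of-the-envelope for that cost concern, here is a small sketch; the flat per-1k-token price is purely an assumption for illustration, since no 32k pricing is confirmed:

```python
# Back-of-the-envelope cost of filling a 32k context window.
# ASSUMPTION: a flat $0.06 per 1k tokens, chosen only for illustration;
# real pricing for a 32k-context model is unconfirmed.
PRICE_PER_1K_TOKENS = 0.06

def call_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Rough USD cost of one LLM call at a flat per-1k-token rate."""
    return (prompt_tokens + completion_tokens) / 1000 * PRICE_PER_1K_TOKENS

# A maxed-out 32k call vs. a maxed-out 4k call at the same rate:
print(call_cost(31_000, 1_000))  # 1.92 -> ~$1.92 per call
print(call_cost(3_000, 1_000))   # 0.24 -> 8x cheaper per call
```

At the same per-token rate, a full 32k window costs roughly 8x a full 4k one per call, before any latency difference.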
thanks for sharing @Glenn
Yeah, it's not clear to me what the scaling limitations/factors of gpt-3+ are. Like, are 32k-token windows a plausible near-future goal for most users? Or are the computational resources needed for that untenable at massive scale 🤷🏽‍♂️
People would want clear validation of utility at the 4k-token limit before moving on to 32k.
@ekzhu totally agreed