Hi, is there a way to set the rate of calling the OpenAI embedding interface in "GPTSimpleVectorIndex"?
Sinel❤ · last year
Hi, is there a way to set the rate of calling the OpenAI embedding interface in "GPTSimpleVectorIndex"? I am worried that a large document will blow through the rate limits on my OpenAI key.
Logan M · last year
Hmm, there isn't a way to set this as a user.
However, I wouldn't worry too much until you actually hit a rate limit error 🙂
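Since there is no user-facing setting for this, one client-side workaround (an assumption on my part, not a LlamaIndex feature) is to throttle your own process before each embedding request. A minimal sliding-window limiter sketch:

```python
import time


class RateLimiter:
    """Client-side throttle: allow at most `max_calls` per `period` seconds.

    Call `wait()` immediately before each API request; it sleeps just long
    enough to keep the request rate under the configured limit.
    """

    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self.calls: list[float] = []  # timestamps of recent calls

    def wait(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have fallen outside the sliding window.
        self.calls = [t for t in self.calls if now - t < self.period]
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call in the window expires.
            time.sleep(max(self.period - (now - self.calls[0]), 0.0))
        self.calls.append(time.monotonic())


# Usage: e.g. cap embedding requests at 3 per second.
limiter = RateLimiter(max_calls=3, period=1.0)
# limiter.wait()  # call this right before each embedding request
```

The `max_calls`/`period` values here are placeholders; you would pick them to stay under your OpenAI account's actual rate limits.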
Sinel❤ · last year
When dealing with large documents, does LlamaIndex have a mechanism for splitting them up, or does it send one massive request to OpenAI all at once?
Logan M · last year
Yeah, whenever you put a document into LlamaIndex, it gets broken into smaller chunks.
Right now, by default, it splits into 1024-token chunks (with slight overlap) using a token text splitter.
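The chunk-with-overlap idea Logan describes can be illustrated with a toy splitter. This is a sketch of the general technique, not LlamaIndex's actual splitter, and the whitespace-split "tokens" stand in for real model tokens (a real splitter would count tokens with a tokenizer such as tiktoken):

```python
def chunk_tokens(tokens, chunk_size=1024, overlap=20):
    """Split a token sequence into fixed-size chunks where each chunk
    repeats the last `overlap` tokens of the previous one.

    Illustrative only -- the default sizes mirror the thread's
    "1024-token chunks with slight overlap" description.
    """
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # last window already covers the tail
    return chunks


# Toy example: 50 whitespace "tokens", 20-token chunks, 5-token overlap.
words = ("lorem ipsum " * 25).split()
chunks = chunk_tokens(words, chunk_size=20, overlap=5)
```

Each embedding request then covers one chunk rather than the whole document, which is why a large document becomes many small requests instead of one massive one.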