
Updated 3 months ago

Hi, is there a way to set the rate of calling the OpenAI embedding interface in `GPTSimpleVectorIndex`? I am worried that a large document will blow past the rate limits on my OpenAI key.
3 comments
Hmmm, there isn't a way to set this as a user.

However, I wouldn't be too worried until you actually hit a rate-limit error 🙂
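If you do start hitting limits, one workaround is to throttle the embedding calls on the client side before they reach OpenAI. This is a generic sketch of a sliding-window rate limiter, not part of the LlamaIndex or OpenAI APIs; the class name and parameters are made up for illustration:

```python
import time

class Throttle:
    """Allow at most `max_calls` calls per `period` seconds.
    Generic client-side sketch; not a LlamaIndex or OpenAI feature."""

    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self.calls = []  # timestamps of recent calls

    def wait(self):
        now = time.monotonic()
        # Drop timestamps that have left the sliding window.
        self.calls = [t for t in self.calls if now - t < self.period]
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call falls out of the window.
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())

# Usage: call throttle.wait() before each embedding request.
throttle = Throttle(max_calls=3, period=1.0)
```

You would invoke `throttle.wait()` immediately before each embedding request, so bursts from a large document get spread out over time instead of hitting the API all at once.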
When dealing with large documents, does LlamaIndex have a mechanism for splitting them up, or does it send one massive request to OpenAI all at once?
Yea, whenever you put a document into LlamaIndex, it gets broken into smaller chunks.

Right now, by default it splits documents into 1024-token chunks (with slight overlap) using a token text splitter.
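To make the chunking concrete, here is a minimal sketch of what a token splitter with overlap does. It uses naive whitespace "tokens" purely for illustration; LlamaIndex's actual splitter uses a real tokenizer, and the function name and defaults here are assumptions, not its API:

```python
def split_tokens(text: str, chunk_size: int = 1024, overlap: int = 20):
    """Split text into chunks of up to `chunk_size` tokens, with `overlap`
    tokens shared between consecutive chunks. Whitespace tokenization is a
    stand-in for a real tokenizer."""
    tokens = text.split()
    chunks = []
    step = chunk_size - overlap  # advance by less than a full chunk
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break  # last chunk already covers the tail
    return chunks

# A ~3000-"token" document becomes three overlapping chunks.
doc = " ".join(f"w{i}" for i in range(3000))
chunks = split_tokens(doc, chunk_size=1024, overlap=20)
```

The overlap means each embedding request stays bounded in size, and a sentence cut at a chunk boundary still appears intact in one of the two neighboring chunks.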