
[Question]: Consistently getting rate limit errors

Hi! I keep getting rate limit errors for Azure OpenAI embedding models. Does anyone have suggestions to help get rid of this error? I tried following the GitHub issues, but none have worked for me so far:
https://github.com/run-llama/llama_index/issues/7879
Don't embed things too fast? πŸ˜… You can either ramp up the number of max retries, or ingest things more slowly

from llama_index.core import VectorStoreIndex
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding

# Raise the retry ceiling so transient 429s are retried instead of raised
embed_model = AzureOpenAIEmbedding(..., max_retries=10)

# Start from an empty index, then insert one document at a time
index = VectorStoreIndex(nodes=[], embed_model=embed_model, ...)
for doc in documents:
    index.insert(doc)
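If retries alone don't cut it, you can also throttle the loop itself. A minimal sketch, assuming the same documents list as above; the 0.5-second pause is purely illustrative, so tune it to your Azure quota:

import time

for doc in documents:
    index.insert(doc)
    time.sleep(0.5)  # illustrative pause between inserts; adjust to your rate limit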
Ah! I did try max_retries, and that did not work. But let me try ingesting document by document. I have 2048 nodes, is that a lot to ingest at once? Just curious to know.
I mean, it's not a ton per se. But it really depends on your rate limits
Thanks for replying! I tried doing it this way, but I am still getting an error for text nodes.

index = VectorStoreIndex(
    nodes=[], storage_context=storage_context, show_progress=True
)
for node in text_nodes[1:]:
    index.insert(node)

Error: AttributeError: 'TextNode' object has no attribute 'get_doc_id'. Did you mean: 'ref_doc_id'?
insert() is for documents. For nodes, use:

index.insert_nodes(...)

But you'll probably want to do it one node at a time, or you can batch them (one at a time will be kinda slow, but not too bad)
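For reference, a minimal batching sketch, assuming the text_nodes list from earlier in the thread; the batch size of 64 and the one-second pause are illustrative, not prescriptive:

import time

BATCH_SIZE = 64  # illustrative; size batches to stay under your Azure quota

for start in range(0, len(text_nodes), BATCH_SIZE):
    batch = text_nodes[start : start + BATCH_SIZE]
    index.insert_nodes(batch)  # insert_nodes accepts a list of nodes
    time.sleep(1)  # brief pause between batches to spread out embedding calls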