The community member keeps hitting a rate limit error when using Azure OpenAI embedding models and has tried the suggestions from the linked GitHub issue without success. The comments suggest either ramping up the number of max retries or ingesting the data more slowly; the community member has already tried the max_retries approach. They also have a large number of nodes (2048) to ingest and are unsure whether that is too many at once, but the comments indicate the node count is not the issue so much as the rate limits themselves. The community member then hits an error when inserting the nodes, because the 'TextNode' object has no 'get_doc_id' attribute; the comments suggest using 'insert_nodes' instead of 'insert' for nodes, potentially one node at a time or in batches.
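For reference, a minimal sketch of the batched insert_nodes approach described above. It assumes nodes is an already-built list of TextNode objects and uses the post-0.10 llama_index import paths (older releases import from llama_index directly); the batch size and pause length are arbitrary illustrative values, not recommendations from the thread.
Plain Text
import time

from llama_index.core import VectorStoreIndex
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding

embed_model = AzureOpenAIEmbedding(
    ...,  # deployment/endpoint/key configuration elided, as in the thread's snippet
    max_retries=10,  # softens transient 429s; does not raise the deployment's quota
)
index = VectorStoreIndex(nodes=[], embed_model=embed_model)

# Insert pre-built nodes with insert_nodes (insert expects Documents and calls
# get_doc_id, which TextNode lacks), in small batches with a pause in between
# so the embedding calls stay under the rate limit.
batch_size = 64      # illustrative value
pause_seconds = 5    # illustrative value
for i in range(0, len(nodes), batch_size):
    index.insert_nodes(nodes[i : i + batch_size])
    time.sleep(pause_seconds)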
Hi! I keep getting a rate limit error for Azure OpenAI embedding models. Does anyone have any suggestions to help remove this error? I tried following the GitHub issues but none have worked for me so far: https://github.com/run-llama/llama_index/issues/7879
Don't embed things too fast? You can either ramp up the number of max retries, or ingest things more slowly:
Plain Text
embed_model = AzureOpenAIEmbedding(..., max_retries=10)
index = VectorStoreIndex(nodes=[], embed_model=embed_model, ...)
for doc in documents:
    index.insert(doc)
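A small variation on the loop above that also slows ingestion explicitly by pausing between inserts; the sleep length is an arbitrary illustrative value, to be tuned to the Azure deployment's rate limit.
Plain Text
import time

for doc in documents:
    index.insert(doc)
    time.sleep(2)  # arbitrary pause so embedding requests stay under the rate limit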
Ah! I did try max_retries; that did not work. But let me try ingesting doc by doc. I have 2048 nodes; is that a lot to ingest at once? Just curious to know.