Hello, we have a little problem with LlamaIndex: when we try to load a PDF file into a database (Postgres on Neon) using Mistral's embedding model, we get an error about going over the token limit. We tried splitting the document per page and using the TokenTextSplitter, with no luck. The only "solution" that worked was lowering the insert_batch_size parameter (to 21 at most), but that parameter should only affect inserts into the DB, not the embedding model, right? 😅
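
In case it helps, here's roughly what our ingestion code looks like (a minimal sketch of the setup as described above; the file name, connection details, and exact chunk sizes are placeholders):

```python
# Sketch of the ingestion pipeline: PDF -> TokenTextSplitter -> Mistral embeddings -> PGVectorStore.
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.core.node_parser import TokenTextSplitter
from llama_index.embeddings.mistralai import MistralAIEmbedding
from llama_index.vector_stores.postgres import PGVectorStore

embed_model = MistralAIEmbedding(model_name="mistral-embed", api_key="...")

# Postgres on Neon; connection parameters are placeholders.
vector_store = PGVectorStore.from_params(
    database="neondb",
    host="...",
    user="...",
    password="...",
    port=5432,
    table_name="pdf_chunks",
    embed_dim=1024,  # mistral-embed returns 1024-dimensional vectors
)

documents = SimpleDirectoryReader(input_files=["document.pdf"]).load_data()

index = VectorStoreIndex.from_documents(
    documents,
    storage_context=StorageContext.from_defaults(vector_store=vector_store),
    transformations=[TokenTextSplitter(chunk_size=512, chunk_overlap=20)],
    embed_model=embed_model,
    insert_batch_size=21,  # the only setting that made the token-limit error go away
)
```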