The original poster has updated their llama-index Python package and is working through the required fixes. They had been using a synchronous OpenSearch client, but the new version appears to require an asynchronous one. In the comments, one community member states that both sync and async methods are supported and implemented in the source code, while another points out that the vector client no longer accepts a connection class kwarg. The community members also discuss the need to upgrade the llama-index-vector-stores-opensearch package.
he had this connection_class that handles requests in a synchronous way... but it looks like the new versions of llama-index can only handle asynchronous requests...
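Following the advice in the thread, the usual first step is to upgrade the relevant packages so the core library and the vector-store integration stay in sync. A minimal sketch, assuming a pip-based environment (the exact version pins are up to you):

```shell
# Upgrade llama-index core and the OpenSearch vector store integration together,
# so both packages move to compatible (async-capable) releases.
pip install --upgrade llama-index llama-index-vector-stores-opensearch
```

After upgrading, any code that passed a synchronous connection class directly to the vector client may need to be revisited, since the newer client handles requests asynchronously.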