Hello everyone, and thank you for your valuable contributions to this community. I am relatively new to the world of LlamaIndex (about 1 month), but I am already developing a chatbot with good results. At the moment, I am using the VectorIndexAutoRetriever with CondensePlusContextChatEngine, and it works great with GPT-4-turbo, GPT-3.5-turbo, and Claude Opus on AWS Bedrock. Unfortunately, it always raises the same exception when I try to use it with GPT-4o. Can anyone tell me whether this depends on the libraries, which perhaps have not yet been updated for this model?
Hey Roland, thank you for the reply. With chat() instead of stream_chat() I get the same problem. VectorIndexAutoRetriever seems to be the problem here: if I use a classic VectorStoreIndex with the base retriever, everything works fine.
Oh, not sure what the issue is here then. Maybe it's a bug, or something that has already been resolved in the latest version. Try updating your llama-index packages.
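Before upgrading, it can help to check which versions you actually have installed. This is a minimal sketch using only the Python standard library; the package names in the loop are assumptions based on the common llama-index package split, so adjust them to whichever integration packages you actually use.

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version string of a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Assumed package names; swap in the integrations you use (e.g. Bedrock).
for pkg in ("llama-index", "llama-index-core", "llama-index-llms-openai"):
    print(pkg, "->", installed_version(pkg) or "not installed")
```

If a package is outdated or missing, `pip install -U <package>` brings it to the latest release, which is where new model names like gpt-4o are most likely to be recognized.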