
Hello everyone, and thank you for your valuable contributions to this community. I am relatively new to the world of LlamaIndex (about a month), but I am already developing a chatbot with good results. At the moment, I am using the VectorIndexAutoRetriever with CondensePlusContextChatEngine, and it works great with GPT-4-turbo, GPT-3.5-turbo, and Claude Opus on AWS Bedrock. Unfortunately, it always raises the same exception when I try to use it with GPT-4o. Can anyone tell me whether this is a library issue, perhaps because the packages have not yet been updated for this model?
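For context, a minimal sketch of the kind of setup being described, assuming the OpenAI integration and one illustrative metadata field (the documents, schema, and question are placeholders, not from the original post):

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.chat_engine import CondensePlusContextChatEngine
from llama_index.core.retrievers import VectorIndexAutoRetriever
from llama_index.core.vector_stores.types import MetadataInfo, VectorStoreInfo
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o")  # the model that triggers the exception

index = VectorStoreIndex.from_documents(
    [Document(text="To reset your password, ...", metadata={"category": "faq"})]
)

# The auto-retriever asks the LLM to infer a query string and metadata
# filters from the user's question, guided by this schema.
vector_store_info = VectorStoreInfo(
    content_info="Support articles for the chatbot",
    metadata_info=[
        MetadataInfo(
            name="category",
            type="str",
            description="Article category, e.g. 'faq' or 'howto'",
        )
    ],
)
retriever = VectorIndexAutoRetriever(
    index, vector_store_info=vector_store_info, llm=llm
)

chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=retriever, llm=llm
)
response = chat_engine.stream_chat("How do I reset my password?")
for token in response.response_gen:
    print(token, end="")
```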
1
W
i
L
7 comments
Can you share the error? That will help in understanding the issue.
Yes, of course, this is the output: [error traceback not captured in this transcript]
This just means the LLM failed to predict the proper outputs; this can happen sometimes.
Thank you, Logan. It happens every time with GPT-4o, and never with other models. I can't find an explanation for this.
@ianez What happens if you use chat() instead of stream_chat() in your app (i.e., turn off streaming temporarily)? Do you still get the same error?
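For reference, the toggle Roland suggests is a one-line change; this sketch assumes the `chat_engine` from the setup above:

```python
# Debug step: use the blocking chat() call instead of stream_chat()
# to rule streaming out as the cause of the failure.
response = chat_engine.chat("How do I reset my password?")
print(response)
```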
Hi Roland, thank you for your reply. With chat() instead of stream_chat(), same problem. VectorIndexAutoRetriever seems to be the problem here... If I use a classic vector index with the base retriever, everything works fine.
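The working fallback ianez describes would look roughly like this (a sketch; `index` and `llm` as in the setup above):

```python
# Plain top-k similarity retrieval with no LLM-inferred filters: this path
# reportedly works with GPT-4o, pointing at the auto-retrieval step as the culprit.
retriever = index.as_retriever(similarity_top_k=2)
chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=retriever, llm=llm
)
print(chat_engine.chat("How do I reset my password?"))
```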
Oh, not sure what the issue is here then? Maybe it's a bug, or something that has been resolved in the latest version. Try updating your llama-index packages.
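For example, upgrading the core package plus the integrations mentioned in the thread (package names assume the OpenAI and Bedrock integrations): `pip install -U llama-index llama-index-llms-openai llama-index-llms-bedrock`.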