
Hello all, I am having an issue with my chatbot. This is the third one I am making, and with this one I am running into a recursion error. When prompting something like "Hi" it's all good and well, but when prompting something specific that requires diving into the storage to answer, I get the error. I could not find anything related on GitHub or Stack Overflow.
The thing is, with this chatbot my storage is way larger than with my previous chatbot. Could that be the issue? I couldn't find anything about a recommended size for a vector store.
What version of llama-index do you have?
I'm using the latest version of llama-index:

llama-index==0.8.29.post1
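
As an aside, the installed version can be double-checked from Python itself; a minimal sketch, assuming a standard pip install:

# Print the installed llama-index version
import llama_index
print(llama_index.__version__)  # e.g. 0.8.29.post1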
I have resolved the previous issue, but I'm facing a challenge with my current chatbot: I have too many indexes. Every prompt gives this error:

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 50940 tokens. Please reduce the length of the messages.
Is there a maximum number of indexes?
Yeah, every LLM has a token limit, usually 4k, 8k, or 16k depending on the model.

But llama-index should be accounting for this, unless you are using the library in an unintended way.
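
For illustration, here is a minimal sketch of the intended usage on llama-index 0.8.x, where the query engine retrieves only the top-k most similar chunks instead of stuffing everything into the prompt; the "data" directory and the top-k value are assumptions, not from the thread:

# Minimal llama-index 0.8.x sketch: keep the prompt under the model's
# context window by retrieving only the top-k most similar chunks.
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# "data" is a placeholder directory of source documents
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# similarity_top_k caps how many chunks are sent to the LLM per query
query_engine = index.as_query_engine(similarity_top_k=2)

response = query_engine.query("What does the storage say about topic X?")
print(response)

With a single VectorStoreIndex, the retriever settings (not the raw size of the store) determine how many tokens reach the LLM, which is why a large storage alone should not blow past the 4097-token limit.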