Hi, can anyone help me solve the error below?

    openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 36614 tokens. Please reduce the length of the messages.
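For context, the error means the complete prompt (system message, chat history, and retrieved context) exceeds the model's 4,097-token window, so the request is rejected before any completion is generated. Here is a minimal sketch of how the prompt size could be checked before sending it, assuming the tiktoken package and a gpt-3.5-turbo-style model (both assumptions, not taken from the original post):

    import tiktoken

    def count_tokens(messages, model="gpt-3.5-turbo"):
        # Rough count: encode each message body with the model's tokenizer.
        enc = tiktoken.encoding_for_model(model)
        return sum(len(enc.encode(m["content"])) for m in messages)

    messages = [{"role": "user", "content": "very long retrieved context ..."}]
    if count_tokens(messages) > 4097:
        print("Prompt too large; trim retrieved context or chat history first.")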
Hi, I have a list of vector store indices and I want to create a chat engine over all of them, but when I try to create one it raises the error above. Here's my code snippet:

    memory = ChatMemoryBuffer.from_defaults(token_limit=2500)

    timea = time.time()
    collections = UserIndexes.objects.filter(user_id=user_id)
    timeb = time.time()
    print("filtering collections", timeb - timea)

    indices = []
    for collection in collections:
        if collection.file_name != "":
            timec = time.time()
            index = get_indexes(collection)
            indices.append(index)
            timed = time.time()
            print("getting index", timed - timec)

    timee = time.time()
    index = GPTListIndex[indices]
    print("getting all indices", timee - timeb)
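Two things stand out in that snippet: GPTListIndex[indices] subscripts the class instead of calling it, and even with parentheses a list index is normally built from nodes or documents, not from other index objects. Below is a minimal sketch of one way the per-collection indices could be composed instead, assuming an older llama_index release that still ships GPTListIndex and ComposableGraph, and reusing the indices, collections, and memory variables from the snippet above; the summary strings, chat_mode, and similarity_top_k values are illustrative assumptions, not part of the original code:

    from llama_index import GPTListIndex
    from llama_index.indices.composability import ComposableGraph

    # Compose the per-collection indices under a single root list index.
    summaries = [f"Data from {c.file_name}" for c in collections if c.file_name != ""]
    graph = ComposableGraph.from_indices(
        GPTListIndex,
        indices,
        index_summaries=summaries,
    )
    query_engine = graph.as_query_engine()
    response = query_engine.query("your question here")

    # Alternatively, a context chat engine over a single index with the capped
    # memory keeps the prompt inside the 4097-token window: fewer retrieved
    # chunks means fewer prompt tokens.
    chat_engine = indices[0].as_chat_engine(
        chat_mode="context",
        memory=memory,
        similarity_top_k=2,
    )
    print(chat_engine.chat("your question here"))

Either way, a 36,614-token prompt suggests far too much retrieved context is being packed into a single request, so reducing how many chunks are retrieved (or the memory token_limit) is what actually addresses the error.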