Find answers from the community

learn.ai
Joined September 25, 2024
Hi, can anyone help me solve the error below?
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 36614 tokens. Please reduce the length of the messages.
21 comments
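The usual fix for this error is to shorten or drop older messages until the prompt fits the model's context window. A minimal, library-agnostic sketch (`trim_messages` and the whitespace-based counter are illustrative, not part of the OpenAI SDK; for exact counts you would swap in a real tokenizer such as tiktoken):

```python
def trim_messages(messages, max_tokens, count_tokens):
    """Drop the oldest non-system messages until the conversation
    fits within max_tokens, keeping any system prompt."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs):
        return sum(count_tokens(m["content"]) for m in msgs)

    while rest and total(system + rest) > max_tokens:
        rest.pop(0)  # discard the oldest turn first
    return system + rest

# Rough whitespace-based counter; a real tokenizer gives exact numbers.
approx_tokens = lambda text: len(text.split())

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "a " * 5000},  # a huge old message
    {"role": "user", "content": "What is 2 + 2?"},
]
trimmed = trim_messages(history, max_tokens=4097, count_tokens=approx_tokens)
```

After trimming, only the system prompt and the most recent question survive, so the request fits the 4097-token limit from the error message.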
When getting structured output, does LlamaIndex call OpenAI to format our data?
4 comments
Hi, I want to fine-tune my index so that it responds in my desired format. How do I do that in a simple way?
14 comments
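Short of actual fine-tuning, the simplest lever for controlling response format is usually the prompt: wrap every query in a template that spells out the shape you want. A hedged sketch in plain Python (the template text and `build_prompt` are just an example of the pattern; LlamaIndex has its own prompt-template hooks for plugging such a template into a query engine):

```python
# Template that states the desired output format up front.
FORMAT_TEMPLATE = (
    "Answer the question using only the context below.\n"
    'Respond as a JSON object with keys "answer" and "sources".\n\n'
    "Context:\n{context}\n\n"
    "Question: {question}\n"
)

def build_prompt(context, question):
    """Fill the template so every query carries the format instructions."""
    return FORMAT_TEMPLATE.format(context=context, question=question)

prompt = build_prompt("Llamas are camelids.", "What are llamas?")
```

Because the format instructions ride along with every query, the model is nudged toward the desired structure without any model training.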
Can we have a chat-continuation feature in the query engine?
2 comments
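A query engine is stateless by design, so one generic way to get continuation is to keep your own transcript and fold it into each query. A library-agnostic sketch (`ChatSession` and the echo function are illustrative, not a LlamaIndex API):

```python
class ChatSession:
    """Wraps a stateless query function with a running transcript."""

    def __init__(self, query_fn):
        self.query_fn = query_fn
        self.history = []  # list of (speaker, text) pairs

    def ask(self, question):
        # Fold prior turns into the query so the engine sees the context.
        transcript = "\n".join(f"{who}: {text}" for who, text in self.history)
        full_query = f"{transcript}\nUser: {question}" if transcript else question
        answer = self.query_fn(full_query)
        self.history.append(("User", question))
        self.history.append(("Assistant", answer))
        return answer

# An echo function stands in for a real query engine's query call.
session = ChatSession(lambda q: f"you said: {q.splitlines()[-1]}")
first = session.ask("hello")
second = session.ask("again")
```

Each call to `ask` sees the accumulated history, which is exactly what a chat engine does internally with its memory buffer.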
learn.ai
Hi
I have a list of vector-store indices and I want to create a chat engine using all of them, but when I try to create one it gives an error.
Here's my code snippet:
memory = ChatMemoryBuffer.from_defaults(token_limit=2500)
timea = time.time()

collections = UserIndexes.objects.filter(user_id=user_id)
timeb = time.time()
print("filtering collections", timeb - timea)

indices = []
for collection in collections:
    if collection.file_name != "":
        timec = time.time()
        index = get_indexes(collection)
        indices.append(index)
        timed = time.time()
        print("getting index", timed - timec)

timee = time.time()
index = GPTListIndex[indices]
print("getting all indices", timee - timeb)

return index.as_chat_engine()
68 comments
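One likely source of the error is `GPTListIndex[indices]`: square brackets subscript the class rather than call its constructor, and a plain class is not subscriptable, so Python raises a TypeError before any llama_index code runs. A stand-in class (`FakeIndex` is hypothetical, used only to reproduce the failure without the library) shows the difference:

```python
class FakeIndex:
    """Stand-in for an index class such as GPTListIndex."""

    def __init__(self, indices):
        self.indices = indices

indices = ["idx_a", "idx_b"]

try:
    graph = FakeIndex[indices]  # subscripting the class, like the snippet above
except TypeError as exc:
    error = str(exc)  # message says the type is not subscriptable

graph = FakeIndex(indices)  # parentheses construct the object instead
```

Whether constructing a single `GPTListIndex` over a list of other indices is the right way to combine them is a separate question for the library's composability docs, but the bracket-vs-parenthesis mixup alone is enough to produce a TypeError.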