mtutty:
Hey all, I've got a minimal Dockerized API using embeddings with ChatGPT to answer questions. Everything works great with a single wiki page imported and the storage context saved to and read from disk. When I went to run the full batch of wiki content, I got this failure:

BadRequestError: 400 This model's maximum context length is 8192 tokens, however you requested 31061 tokens (31061 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

Is there a way to process documents in batches, and have the entire collection available at the end? This is the line I'd like to take apart:

const index = await VectorStoreIndex.fromDocuments(allDocs, { storageContext: ctx });
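
For reference, here's the shape of what I'm imagining (untested sketch; I'm assuming LlamaIndex.TS's Settings.chunkSize, storageContextFromDefaults, and index.insert behave the way I think they do, so version differences may apply; the 31061-token request also makes me suspect whole pages are being embedded as single chunks):

import {
  Document,
  Settings,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";

async function buildIndexInBatches(allDocs: Document[], batchSize = 20) {
  // Same persisted storage context as before, so every batch lands
  // in one collection on disk.
  const ctx = await storageContextFromDefaults({ persistDir: "./storage" });

  // Keep each chunk well under the embedding model's 8192-token limit.
  // (Assumption: Settings.chunkSize drives the default node parser.)
  Settings.chunkSize = 1024;

  // Seed the index with the first batch...
  const index = await VectorStoreIndex.fromDocuments(
    allDocs.slice(0, batchSize),
    { storageContext: ctx },
  );

  // ...then add the remaining documents one at a time, reusing the
  // same index. (Assumption: index.insert embeds and persists a doc.)
  for (const doc of allDocs.slice(batchSize)) {
    await index.insert(doc);
  }

  return index;
}

The slice/insert split is just so fromDocuments can create the index from the first batch; if the pages were going in unsplit, setting chunkSize alone might be enough to fix the 400.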