Token sizes

Thanks! Where would I tweak the chunk size?

Also, when I run the query the response gets cut off (I'm assuming this has something to do with the maximum number of tokens it's allowed to use). How would I go about solving this?
Yea, you can set the chunk size in the ServiceContext object:

from llama_index import GPTSimpleVectorIndex, ServiceContext

# chunk_size_limit caps the token size of each chunk when the documents are split
service_context = ServiceContext.from_defaults(..., chunk_size_limit=1000)
index = GPTSimpleVectorIndex.from_documents(docs, service_context=service_context)

You can also raise the output length through the prompt helper by setting max_tokens on the LLM: https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_llms.html#example-changing-the-number-of-output-tokens-for-openai-cohere-ai21
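Something like this should work (a minimal sketch following that docs page, assuming the legacy llama_index / LangChain APIs; the model name and the 512-token figure are just illustrative):

from langchain import OpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, ServiceContext

# Raise the completion budget on the LLM itself...
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=512)
)
# ...and pass num_output so the prompt helper reserves that much room
# in the context window for the answer.
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    num_output=512,
)
# docs is your already-loaded list of Documents
index = GPTSimpleVectorIndex.from_documents(docs, service_context=service_context)

As I read the docs, num_output should match the LLM's max_tokens; otherwise the prompt helper may not leave enough room in the context window for the full response.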
Even when I increase my max tokens to 2048, for example, my response still keeps getting cut off. Do you know what might be causing this?
[Attachment: image.png]