The community member is asking how to increase the output token limit, as their summaries often don't fit into 256 tokens and end abruptly. Another community member responds that GPT Index uses LangChain LLMs under the hood, so if the asker is using the OpenAI LLM, they need to increase the max_tokens parameter. They provide a link to a guide on how to feed the OpenAI LLM into an LLMPredictor for use with GPT Index.
We use LangChain LLMs under the hood, so assuming you're using the OpenAI LLM, you need to increase max_tokens. See https://gpt-index.readthedocs.io/en/latest/how_to/custom_llms.html for how to feed the OpenAI LLM into an LLMPredictor for use with GPT Index (don't worry about the prompt helper).
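For reference, a minimal sketch of what that looks like, assuming the legacy gpt_index and langchain APIs from around the time of this thread (the "data" directory, the model name, and the 512-token limit are illustrative, not prescribed by the answer):

```python
from langchain import OpenAI
from gpt_index import GPTSimpleVectorIndex, LLMPredictor, SimpleDirectoryReader

# Raise max_tokens above the 256 default so longer summaries
# aren't cut off mid-sentence.
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=512)
)

# Build the index with the custom predictor so queries use the
# higher output limit.
documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor)

response = index.query("Summarize these documents.")
print(response)
```

Note that max_tokens here caps the completion length only; the prompt plus completion still has to fit within the model's overall context window.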