The community member who posted the original question finds that the responses they get through the LLM + OpenAI API are too short, and wonders whether this is because most of the tokens are being consumed by the context. Another community member replies that the short responses are simply what the LLM chooses to write, and suggests that some prompt engineering may be needed to elicit longer answers. They also note that the default max tokens for OpenAI is 256, but that this can be changed.
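
As a minimal sketch of that suggestion, the snippet below raises the cap via the `max_tokens` parameter of the OpenAI Python client; the model name, prompt, and the value 1024 are illustrative assumptions rather than details from the thread (if the 256-token default comes from a wrapper library such as LlamaIndex, the equivalent setting would instead be configured on that library's LLM object):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Raise max_tokens above the low default so the model has room to write
# a longer answer; prompt engineering (e.g. explicitly asking for a
# detailed response) still determines how much it actually writes.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user", "content": "Explain the topic in detail, step by step."}
    ],
    max_tokens=1024,  # up from the reported 256 default
)
print(response.choices[0].message.content)
```

Note that `max_tokens` only sets an upper bound on the completion length; the model may still stop early, which is why the prompt-engineering advice applies as well.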