A community member is experiencing issues with the llm.achat function. Their chat messages include a system prompt and a user request to create a summary, at least 2500 tokens long, of an 18,000-token document. The call generates the response and consumes the tokens, but then hangs indefinitely instead of returning. Another community member comments that they were testing on the latest versions of the relevant packages. The messages passed to the call are shown below:
[
  ChatMessage(
    role=<MessageRole.SYSTEM: 'system'>,
    content='[system_prompt]',
    additional_kwargs={}
  ),
  ChatMessage(
    role=<MessageRole.USER: 'user'>,
    content=[
      {
        'text': '<document>{18000 token document}</document>',
        'type': 'text',
        'cache_control': {'type': 'ephemeral'}
      },
      {
        'text': 'Please create a summary of the document that is at least 2500 tokens long. Currently the document is 18000 tokens long.',
        'type': 'text'
      }
    ],
    additional_kwargs={}
  )
]
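A minimal reproduction sketch, assuming the Anthropic integration for LlamaIndex (the ephemeral cache_control block suggests Anthropic prompt caching). The model name, max_tokens value, timeout, and the placeholder system prompt and document are assumptions for illustration only, and whether the dict-style content blocks are accepted unchanged depends on the llama-index version in use:

```python
import asyncio

from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.llms.anthropic import Anthropic  # assumed provider, given the cache_control blocks

# Hypothetical placeholders -- substitute the real system prompt and document.
SYSTEM_PROMPT = "[system_prompt]"
DOCUMENT = "<document>{18000 token document}</document>"


async def main() -> None:
    # Model name and max_tokens are assumptions for illustration; max_tokens is
    # raised above the default so a 2500-token summary is not cut off.
    llm = Anthropic(model="claude-3-5-sonnet-latest", max_tokens=4096)

    messages = [
        ChatMessage(role=MessageRole.SYSTEM, content=SYSTEM_PROMPT),
        ChatMessage(
            role=MessageRole.USER,
            # Content blocks mirrored from the reported message structure,
            # including the ephemeral cache_control block on the document.
            content=[
                {
                    "text": DOCUMENT,
                    "type": "text",
                    "cache_control": {"type": "ephemeral"},
                },
                {
                    "text": (
                        "Please create a summary of the document that is at "
                        "least 2500 tokens long. Currently the document is "
                        "18000 tokens long."
                    ),
                    "type": "text",
                },
            ],
        ),
    ]

    # Wrap the call in a timeout so a hang surfaces as asyncio.TimeoutError
    # instead of blocking forever.
    response = await asyncio.wait_for(llm.achat(messages), timeout=300)
    print(response.message.content)


if __name__ == "__main__":
    asyncio.run(main())
```

Wrapping the call in asyncio.wait_for turns an indefinite hang into a TimeoutError, which helps distinguish whether the request itself completes (tokens are billed) while the client remains stuck waiting on the response, as described in the report.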