@Logan M Need help! 🥹

At a glance

The community member is experiencing an error when calling llm.stream_complete() with the GPT-4 language model. The error is a ChunkedEncodingError that indicates a problem with the server-side connection. A comment suggests this might be a server-side issue with OpenAI, but there is no explicitly marked answer to the problem.

Need help! 🥹
My llm.stream_complete() call sometimes throws this error: "requests.exceptions.ChunkedEncodingError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read))"

The llm I am using is gpt-4. Any idea how to avoid this error?
1 comment
Google tells me this might be a server-side issue with OpenAI 🤔
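If the error is a transient server-side disconnect, one common workaround is to retry the call. Below is a minimal retry sketch; the `retry_on_error` helper, the exception tuple, and the `consume`/`prompt` names are illustrative assumptions, not part of the LlamaIndex API. Note that because the failure happens mid-stream, the whole stream has to be consumed inside the retried callable so a retry restarts the request from scratch:

```python
import time

def retry_on_error(fn, exceptions, max_attempts=3, delay=1.0):
    """Call fn(); on one of the given exceptions, wait and retry,
    re-raising only after max_attempts failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == max_attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff

# In practice you would wrap the full stream consumption, e.g.
# (hypothetical usage, assuming `llm` and `prompt` are defined):
#
#   import requests
#   text = retry_on_error(
#       lambda: "".join(r.delta for r in llm.stream_complete(prompt)),
#       (requests.exceptions.ChunkedEncodingError,),
#   )

# Demo with a stand-in flaky function that fails twice, then succeeds:
class FakeChunkError(Exception):
    pass

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeChunkError("Connection broken")
    return "ok"

result = retry_on_error(flaky, (FakeChunkError,), max_attempts=5, delay=0)
print(result)  # → ok
```

This doesn't prevent the error, but it keeps an occasional dropped connection from crashing the application.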