
Hi, using llm.stream_complete("blablabla") in a fastapi endpoint, I properly manage to stream the response, however it seems that the line breaks are not rendered in the stream response, any idea how to manage line breaks in a streaming responmse ?

Here is the code:
# Inside the FastAPI endpoint (needs: import asyncio, from fastapi.responses import StreamingResponse)
response = llm.stream_complete(fmt_qa_prompt)

async def generate_tokens():
    for r in response:
        try:
            # Forward each token delta as it arrives, with a small delay between chunks
            yield r.delta
            await asyncio.sleep(0.05)
        except asyncio.CancelledError:
            _logger.error("Cancelled")

return StreamingResponse(generate_tokens(), media_type="text/event-stream")
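One likely reason the line breaks disappear is the media_type="text/event-stream": the SSE format is line-oriented, so bare newlines in the payload act as field/event delimiters, and a frontend that drops the raw text into HTML will also collapse "\n" characters. Below is a minimal sketch of one workaround, reusing the llm and fmt_qa_prompt objects from the snippet above with a made-up /stream route: JSON-encode each delta (newline-delimited JSON) so embedded newlines survive the transport, then decode each line on the client.

import asyncio
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/stream")
async def stream_answer():
    # llm and fmt_qa_prompt are assumed to be set up as in the original snippet
    response = llm.stream_complete(fmt_qa_prompt)

    async def generate_tokens():
        for r in response:
            # json.dumps escapes any "\n" inside the delta, so each line of the
            # response body is exactly one JSON string the client can json.loads
            yield json.dumps(r.delta) + "\n"
            await asyncio.sleep(0.05)

    # A plain/NDJSON media type avoids SSE's newline-based framing rules
    return StreamingResponse(generate_tokens(), media_type="application/x-ndjson")

If the decoded text ends up in HTML, "\n" still needs white-space: pre-wrap (or conversion to <br>) to show up as a visual line break.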
2 comments
Maybe when you receive the stream on your frontend, keep a count of how many chunks to put on one line, and once that count is reached, move to the next line, empty the array, and start filling it again for the next line.
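A rough Python sketch of that idea (the real frontend would likely be JavaScript; the URL, the chunk granularity, and the threshold here are illustrative assumptions): buffer the incoming chunks and start a new line whenever the count reaches a limit.

import httpx

CHUNKS_PER_LINE = 12  # arbitrary threshold, tune to taste

def read_stream(url: str = "http://localhost:8000/stream") -> str:
    lines, current = [], []
    with httpx.stream("GET", url) as resp:
        for chunk in resp.iter_text():  # chunks may not map 1:1 to tokens
            current.append(chunk)
            if len(current) >= CHUNKS_PER_LINE:
                lines.append("".join(current))  # close out the current line
                current = []                    # empty the buffer for the next line
    if current:
        lines.append("".join(current))
    return "\n".join(lines)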
Or maybe add something to the prompt that works as a line-break marker.