The community member is trying to stream the output of an LLM (Large Language Model) but is getting a generator object instead of text. The comments suggest iterating over the generator in a for loop, but the community member is unsure how to do that. Another community member suggests simply printing each chunk of the output, e.g. `for chunk in output: print(chunk, end="")`. However, there is no explicitly marked answer in the comments.
I'm trying to have the LLM stream the output but get this message: `<generator object llm_chat_callback.<locals>.wrap.<locals>.wrapped_llm_chat.<locals>.wrapped_gen at 0x766381b12340>`
How should I have done this properly? (Without the selected code, the script runs fine.)
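The message in the question is just the repr of a generator object: the streaming call returns a generator, and nothing is produced until you iterate it. A minimal sketch of the suggested fix, using a plain Python generator (`fake_llm_stream` is a hypothetical stand-in for the LLM's streaming method; with LlamaIndex's `stream_chat`, each yielded chunk typically carries the new text in a `.delta` attribute):

```python
def fake_llm_stream(text: str):
    """Stand-in for an LLM streaming method: yields the output in chunks."""
    for token in text.split(" "):
        yield token + " "

# Calling the method only returns a generator object. Printing it directly
# shows something like <generator object fake_llm_stream at 0x...>,
# which is the message from the question -- not the streamed text.
output = fake_llm_stream("Hello from the streaming model")

# Iterating the generator is what actually pulls the chunks:
for chunk in output:
    print(chunk, end="")
print()
```

With a real LlamaIndex streaming response you would print `chunk.delta` instead of `chunk` inside the loop; the key point is the same either way, i.e. the generator must be consumed with a loop (or `"".join(...)`) before any text appears.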