Well it looks like it simply does not work at all in fact ... I don't know why this "haiku" query was streamed ...

Here's a simpler example:
Plain Text
from llama_index.llms import OpenAI
from llama_index import ServiceContext
from llama_index.chat_engine import SimpleChatEngine

import os, openai

os.environ['OPENAI_API_KEY'] = "sk-..."
openai.api_key = os.environ['OPENAI_API_KEY']

# Build a service context around a gpt-3.5-turbo LLM
service_context = ServiceContext.from_defaults(
    llm=OpenAI(temperature=0.7, model="gpt-3.5-turbo")
)
chat_engine = SimpleChatEngine.from_defaults(service_context=service_context)

# Ask for a streamed response
response = chat_engine.stream_chat("Why is the sun yellow ?")

# Alternative: iterate over the token generator manually
# for token in response.response_gen:
#     print(token, end="")

response.print_response_stream()
Hmm, let me try this locally
Hmm, seems to stream just fine for me
Attachment: image.png
ok let me try on another computer ...
well AT FIRST it worked .... but I was on llama_index 0.7.4 (I think) ... upgraded to 0.7.17 and it stopped working (on that new computer) ...

Will try to downgrade ...
yup ... 0.7.4 streaming works ... not 0.7.17 ... trying now 0.7.16 ...
0.7.16 no streaming
0.7.10 crashes with 'StreamingAgentChatResponse' object has no attribute 'print_response_stream'
0.7.14 no streaming
And this is all with the same code above?
0.7.12 no streaming
Mind blowing tbh, the latest version should be working. That's what I used above
How do you know it's not streaming?
0.7.11 same crash as 0.7.10
If the response is short enough, it might appear like it's not streaming
0.7.8 same crash
Could prompt it to write something longer, like a short story
yes, but there's also the "wait" before anything shows up ...
yes you can try with the "write me a poem" query
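For example, reusing the chat_engine from the script above (the exact prompt wording here is just illustrative):
Plain Text
# A longer prompt makes the token-by-token streaming easier to notice
response = chat_engine.stream_chat("Write me a poem about why the sun is yellow")
response.print_response_stream()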
older versions won't have that attribute
0.7.4 works though ...
Right, but the response type changed from StreamingResponse to StreamingAgentChatResponse, so that we could align the response types for agents/chat engines
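For anyone stuck on one of the intermediate 0.7.x releases, a small compatibility check can paper over the type change; this is just a sketch, using only the attributes already mentioned in this thread (print_response_stream and response_gen):
Plain Text
# Sketch: work with either streaming response type
response = chat_engine.stream_chat("Why is the sun yellow ?")

if hasattr(response, "print_response_stream"):
    # convenience printer, present on the newer response type
    response.print_response_stream()
else:
    # fall back to iterating the token generator directly
    for token in response.response_gen:
        print(token, end="")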
I can't reproduce on my end, so I'm really not sure what the issue is πŸ€”
So to sum up: 0.7.5 was the last working ... from 0.7.6 to 0.7.11 : crash ... 0.7.12 and onwards no crash but no stream ...
Maybe a fresh venv would help
can you try with 0.7.4 ?
I'm not using venv (yes I know ... :p )
on bash:

Plain Text
python -m venv venv
source venv/bin/activate
pip install llama-index
nooooo don't make me do this πŸ™‚
I'm making you
arf ! lol ! πŸ™‚
Trust me, it makes python dev so much better
I used to be opposed too, but I've been converted haha
So I was told ... but no problem so far ... and ... that's another thing I need to "tame" ... πŸ™‚
Then you'll know for sure your env is using the right versions of packages πŸ™‚
ok ... I need to tame "venv" before going further ... I'm on Win11, folders are sync'd by dropbox across several work computers etc ... πŸ™‚
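For the record, on Windows the activation step differs from the bash snippet above; these are the standard venv commands (nothing llama-index-specific):
Plain Text
python -m venv venv
venv\Scripts\activate          # cmd / Windows Terminal
.\venv\Scripts\Activate.ps1    # PowerShell
pip install llama-index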
ok still not working (I think) ...
let me try something ...
are you using WSL? or powershell?
ok, this works:
Plain Text
for token in response.response_gen:
    print(token, end="", flush=True)

instead of response.print_response_stream()

Notice the flush=True that forces flushing the output stream
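The same effect can be achieved by flushing stdout explicitly; a minimal standard-library equivalent:
Plain Text
import sys

for token in response.response_gen:
    sys.stdout.write(token)
    sys.stdout.flush()  # push each token to the terminal right away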
oh the flush!
I need to add that!
I'm using "WT" which is the new "windows terminal" ...
actually super important haha. Will merge a PR shortly πŸ™‚
cool ! glad I could help ! πŸ™‚
Thanks for the patience and testing!
no problem ...
really loved your videos on the YT channel, waiting for more ...
also I might have a couple of other questions in the next few days πŸ˜‰
Hey thanks! I'm just making videos as I have time to actually build the project out. Glad you like them!
And no worries, happy to help!