
Updated last year


At a glance
I'm following the new blog post on how to deploy a create-llama app, and there's an issue where the Python FastAPI server just hangs. I've narrowed it down to here:

Plain Text
    chat_engine = index.as_chat_engine()
    print("sending")
    # hangs here: stream_chat never returns
    response = chat_engine.stream_chat(lastMessage.content, messages)
    print("received")


"received" never gets printed, only "sending"
16 comments
let's not clutter up the main chat
so does your script run without poetry? can you abort it then?
so long story short, I've been investigating random hangs coming from OpenAI.
I want to see if we have the same issue
without poetry, it just says
Plain Text
ModuleNotFoundError: No module named 'llama_index'
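That error usually just means the interpreter being run is not the one inside the poetry virtualenv where the dependencies live (running via `poetry run python ...` uses the right one). A quick generic check, nothing create-llama-specific, is:

```python
import importlib.util
import sys

def check_dependency(name):
    """Report whether `name` is importable from the current interpreter."""
    if importlib.util.find_spec(name) is None:
        print(f"{name!r} not found in {sys.executable}; "
              f"try running through the environment where it was installed")
        return False
    print(f"{name!r} is importable from {sys.executable}")
    return True

check_dependency("llama_index")
```

Printing `sys.executable` shows which Python is actually running, which makes "installed in one venv, run with another" mistakes obvious.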
are you using the same tutorial?
no, I'm not using poetry, I handled the dependencies myself
so basically I found lots of threads mentioning hangs with OpenAI, and I'm experiencing it myself.
which model are you using?
oh yeah, i changed it to gpt-4-vision-preview
that may be why
gpt-4 works
it was so insanely slow for me that I went back to gpt-3.5-turbo-instruct. That does a summary in 1.4 seconds, which takes 40-50 seconds with gpt-4 or gpt-3.5-turbo
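Timing comparisons like the one above are easy to reproduce with a simple stopwatch around the call. A minimal sketch, where `fake_summarize` is a hypothetical stand-in for the actual LLM summarization call:

```python
import time

def time_call(fn, *args, **kwargs):
    """Return (result, elapsed_seconds) for a single call to fn."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def fake_summarize(text):
    # Stand-in for an LLM call; sleeps to simulate latency
    time.sleep(0.1)
    return text[:20]

_, elapsed = time_call(fake_summarize, "some long document text ...")
print(f"summary took {elapsed:.2f}s")
```

Swapping the model name and re-timing the same prompt gives a fair per-model latency comparison.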
but the net is really full of "openai hangs" topics, so I guess knocking on LlamaIndex's door is fruitless
I’ll try turbo instruct. Thanks for the assistance πŸ‘