yugkha3
Hi, I am having issues with async streaming from the query engine. Before I was using LlamaIndex, this is how I used async streams from the ChatCompletions API to send chunks over a websocket connection as soon as they were received:
Plain Text
# Module-level imports required by this method:
import json
from typing import Dict, List

import openai


    async def call_chatgpt_api(self, messages: List[Dict[str, str]], websocket, streams):
        ans = ""
        if streams:
            # Stream completion chunks and forward each one over the websocket
            # as soon as it arrives.
            async for response in await openai.ChatCompletion.acreate(
                model="gpt-3.5-turbo-16k",
                messages=messages,
                temperature=0.3,
                stream=True,
            ):
                r = response.choices[0]
                if r["finish_reason"] == "stop":
                    break
                if "content" not in r["delta"]:
                    continue
                await websocket.send_text(json.dumps({"chunk": r["delta"]["content"]}))
                print(r["delta"]["content"])
                ans += r["delta"]["content"]
        else:
            # Non-streaming call: return the full completion in one response.
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo-16k",
                messages=messages,
                temperature=0.3,
            )
            ans = response["choices"][0]["message"]["content"]
        return ans

Now I am trying to implement the same thing using LlamaIndex:
https://pastebin.com/5p88sXPK
but I am getting this error:
Plain Text
  File "/QueryEngine.py", line 65, in ask_query_engine
    async for text in query_engine_response.response_gen:
TypeError: 'async for' requires an object with __aiter__ method, got generator

Please help me here.
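For reference, the traceback says `response_gen` is a plain synchronous generator, not an async iterator, which is why `async for` rejects it. Below is a minimal sketch of how such a generator could be consumed inside an async handler, assuming the query engine was built with `as_query_engine(streaming=True)`; the function name, arguments, and `websocket` object here are placeholders, not the code from the pastebin:
Plain Text
import json


async def ask_query_engine(query_engine, query: str, websocket) -> str:
    ans = ""
    # query() on a streaming query engine returns a StreamingResponse.
    response = query_engine.query(query)
    # response_gen is a regular (sync) generator, per the traceback, so it is
    # iterated with a plain `for` loop; only the websocket send is awaited.
    for text in response.response_gen:
        await websocket.send_text(json.dumps({"chunk": text}))
        ans += text
    return ans

Note that `query()` itself is synchronous and will block the event loop while retrieval and synthesis run; an awaited `aquery()` call avoids that, but the plain `for` loop over `response_gen` is what addresses the `__aiter__` error shown above.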