Hi everyone, I'm getting `AttributeError: 'Response' object has no attribute 'response_gen'`

At a glance

The community member is getting AttributeError: 'Response' object has no attribute 'response_gen' when querying a streaming query engine backed by the OpenAI language model in their service. They call that service from an endpoint that wraps its output in a StreamingResponse to return it to the frontend/user.

In the comments, another community member provides a working example using the OpenAI and OpenAIEmbedding classes from the llama_index library. They build a VectorStoreIndex, create a streaming query engine with as_query_engine, and iterate over response_gen to stream the response.

There is no explicitly marked answer in the comments, but the working example provided by the other community member may help the original poster resolve their issue.

Hi everyone, I'm getting `AttributeError: 'Response' object has no attribute 'response_gen'` for this code:

In service:

Plain Text
        ...
        llm = OpenAI(
            model=request.model.value,
            temperature=request.temperature,
            max_tokens=NUM_OUTPUTS,
        )

        service_context = ServiceContext.from_defaults(llm=llm)

        query_engine = index.as_query_engine(
            streaming=True,
            service_context=service_context,
            similarity_top_k=1,
        )

        response_stream = query_engine.query(input_text)
        def _stream_chat(generator):
            for chunk in generator:
                yield chunk

        return _stream_chat(response_stream.response_gen)


I'm calling that service from:
Plain Text
        return StreamingResponse(
            content=IDPService().query(body),
            status_code=status.HTTP_200_OK,
            media_type="text/html",
        )


Any ideas? I'm following the documentation for returning a StreamingResponse to the frontend/user. πŸ™
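
One way to narrow this down (a hedged diagnostic sketch, not from the original post: the hasattr guard and its error message are additions for illustration) is to check what query() actually returned before touching response_gen. The AttributeError means a plain Response came back instead of a streaming one, which usually indicates that streaming=True never took effect for this query:

Plain Text
        response_stream = query_engine.query(input_text)

        # Guard added for illustration: a plain Response has no response_gen,
        # which means streaming=True did not reach the response synthesizer.
        if not hasattr(response_stream, "response_gen"):
            raise RuntimeError(
                "query() returned a non-streaming Response; "
                "check that streaming=True reached the query engine"
            )

        def _stream_chat(generator):
            for chunk in generator:
                yield chunk

        return _stream_chat(response_stream.response_gen)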
1 comment
This works for me πŸ€·β€β™‚οΈ

(I'm not sure if you are using v0.10.x yet, but I used the slightly newer syntax)

Plain Text
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.core import Document, VectorStoreIndex

# Build a small index over the bundled example document
index = VectorStoreIndex.from_documents(
    [Document.example()], embed_model=OpenAIEmbedding()
)

# streaming=True makes query() return a streaming response with response_gen
query_engine = index.as_query_engine(llm=OpenAI(), streaming=True)

response = query_engine.query("Tell me about LLMs.")
for token in response.response_gen:
    print(token, end="", flush=True)
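
To wire that streaming generator into the endpoint from the question, a minimal sketch could look like this (assuming FastAPI; the /query route and its parameter are hypothetical, and query_engine is the one built above):

Plain Text
from fastapi import FastAPI, status
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/query")  # hypothetical route, for illustration only
def query_llm(q: str):
    # query_engine built as in the snippet above; response_gen yields
    # plain-text tokens, so text/plain fits better than the original text/html
    response = query_engine.query(q)
    return StreamingResponse(
        content=response.response_gen,
        status_code=status.HTTP_200_OK,
        media_type="text/plain",
    )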