Find answers from the community

Updated 4 months ago

vLLM error: KeyError: 'text' in get_response

At a glance

A community member encountered a KeyError: 'text' when trying to use vLLM for their RAG (Retrieval-Augmented Generation) implementation. Another community member requested more information, such as the full traceback and how vLLM was used, and provided a link to an example vLLM implementation.

Hi all, I wanted to use vLLM for my RAG, but this error showed up:

, line 9, in get_response
return data["text"]
KeyError: 'text'

Does this have a solution?
1 comment
Can you share the full traceback and how you have used vLLM?

A vLLM implementation example is given here: https://docs.llamaindex.ai/en/stable/examples/llm/vllm/
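A likely cause, assuming get_response is parsing a JSON payload from a vLLM server, is that the response does not have a top-level "text" key — for example, it is an error payload, or it came from an OpenAI-compatible endpoint, which nests generations under "choices". The original get_response isn't shown, so this is a hypothetical defensive rewrite that checks the payload shape before indexing:

```python
def get_response(data: dict) -> str:
    """Extract generated text from a vLLM-style response payload.

    Hypothetical helper (the original get_response was not shared):
    handles both a raw /generate-style payload ({"text": [...]}) and
    an OpenAI-compatible one ({"choices": [...]}), and fails with a
    readable message instead of a bare KeyError: 'text'.
    """
    if "text" in data:
        # A /generate-style payload carries the text directly.
        text = data["text"]
        return text[0] if isinstance(text, list) else text
    if "choices" in data:
        # OpenAI-compatible endpoints nest output under "choices";
        # completion endpoints use "text", chat endpoints use "message".
        choice = data["choices"][0]
        return choice.get("text") or choice["message"]["content"]
    # Neither key present: the server probably returned an error payload,
    # so surface the whole thing instead of raising KeyError: 'text'.
    raise ValueError(f"Unexpected response payload: {data}")
```

Printing the full payload when neither key is present usually reveals the real problem (for example, an error message from the server about the model name or request format).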