Somebody help please

completion_response = model.complete("To infinity, and")
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv\Lib\site-packages\text_generation\client.py", line 154, in chat
    raise parse_error(resp.status_code, payload)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  ext_generation\errors.py", line 81, in parse_error
    message = payload["error"]
              ~~~~~~~^^^^^^^^^
KeyError: 'error'
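For reference, the crash in the traceback is the `text_generation` client doing `payload["error"]` on a server response that has no `"error"` key. A minimal sketch of a defensive workaround, assuming the server returned some other JSON body (the `"detail"` key here is hypothetical, just to illustrate a fallback):

```python
def safe_error_message(payload: dict) -> str:
    # text_generation's parse_error indexes payload["error"] directly,
    # which raises KeyError when that key is absent; .get() with
    # fallbacks avoids the crash and surfaces whatever the server sent.
    return payload.get("error", payload.get("detail", "unknown server error"))

print(safe_error_message({"detail": "model not loaded"}))  # model not loaded
print(safe_error_message({"error": "rate limited"}))       # rate limited
print(safe_error_message({}))                              # unknown server error
```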
6 comments
What LLM is this? Looks like an issue in some 3rd-party dependency
Was using the vllm custom image on hf
Ohh should I use vllm llama index Llm then lol
Maybe? This seems like an issue in the text-generation package though
I know they just added vllm, maybe there's some bugs
Gotcha gotcha yeah was using TGI vllm custom image they added. Damn it huggingface