I think the model pipeline returns a dict, but you need to return a str
Looks like the pipeline might have gotten updated on huggingface
Hmm, understood. To add to the story: I got a server today and used the same Docker setup as I use at home. There I got this error first. After that I restarted my Docker at home and had the same issue.
That's what the input looks like. Just need to parse the answer out of that
I think this was not your fault haha the pipeline just changed. No way to version control that
Ok, and what can I do to fix that? xD Sorry, I don't really understand the problem
There was an update to the README: to use the pipeline with LangChain, you must set return_full_text=True, as LangChain expects the full text to be returned and the pipeline's default is to only return the new text.
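For reference, a minimal sketch of where that setting goes (the task name is an assumption on my side, and the model path is just your local folder, not your exact code):
from transformers import pipeline
from langchain.llms import HuggingFacePipeline

model_pipeline = pipeline(
    "text-generation",                                    # assumption: a standard text-generation task
    model="/workspace/LLama-Hub/Model/dolly-v2-7b/",      # assumption: your local dolly-v2-7b folder
    return_full_text=True,                                # per the README: LangChain wants the full text back
)
llm = HuggingFacePipeline(pipeline=model_pipeline)        # wrap it for LangChain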
Only setting this variable did not fix the problem, though xD
In your code, I think you just need to do this
res = model_pipeline(prompt)["generated_sequence"][0]
It might already be doing that
Maybe just try casting it to a string instead and see what happens
return str(res)
Ok, I will check that next. If you are right, it should be enough to go back 3 commits to yesterday, so I will try that first. At least I hope so xD
That results in this error now xD: TypeError: list indices must be integers or slices, not str (I'm still reverting to yesterday with the models, but it takes some time)
Yea that suggestion of mine is not so good
after looking at the huggingface code again, maybe it's just res = model_pipeline(prompt)[0]["generated_text"]
Wrapping the return value in str() will at least fix it immediately, though; then you can figure out the proper way to parse the output
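Putting those two together, a rough sketch of what the call could look like (the function name is made up, just to show the shape of the output):
def run_prompt(model_pipeline, prompt: str) -> str:
    res = model_pipeline(prompt)         # text-generation pipelines return a list of dicts,
    text = res[0]["generated_text"]      # e.g. [{"generated_text": "..."}], so index the list first
    return str(text)                     # str() as a safety net while you sort out the parsing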
Hmm, that does not work either, and with the local files I run into this:
Traceback (most recent call last):
File "/workspace/LLama-Hub/GpuRequests.py", line 31, in <module>
model_pipeline = pipeline(
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/init.py", line 779, in pipeline
framework, model = infer_framework_load_model(
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py", line 271, in infer_framework_load_model
raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
ValueError: Could not load model /workspace/LLama-Hub/Model/dolly-v2-7b/ with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>, <class 'transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXForCausalLM'>).
Weird, an error about loading the model
Maybe install transformers directly from source?
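Another thing you could try (just a guess on my end, not something from the traceback): load the model and tokenizer classes directly and hand the objects to pipeline(), which usually surfaces a more specific error than letting it infer the classes
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "/workspace/LLama-Hub/Model/dolly-v2-7b/"    # the local folder from your traceback
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)  # should fail with a clearer message if the files are off

model_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    return_full_text=True,
)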
Sorry man, not sure what's going on lol
Getting somewhere maybe though!
@Logan M Thanks for your help anyway! I will try again tomorrow, now it's time to go to sleep xD I will post news here if I get any further
@Logan M return res[0]["generated_text"] results in: Context information is below.
---------------------
What is the capital of london
What is the capital of london
What is the capital of london
What is the capital of london
What is the capital of england
What is the capital of england
---------------------
Given the context information and not prior knowledge, answer the question: What is the capital of england
The capital of england is London
Not perfect, but it shows something readable
Nice! Now go to bed, you can clean up the output tomorrow
I thought I was the only one having this error, I will try @Logan M's solution as well! Thanks
did that direct transformers install do the trick?
Hi, I also have this problem with llama-13b, did you solve it? Looking forward to your reply