Error

Hey guys, I have a problem. My code just stopped working today. I tried to fix it by making it as simple as possible and going back to older versions of the packages, but it still always crashes.
Code: https://gist.github.com/devinSpitz/e7aabdf1036f81745543739d0d5a59b9
Error: https://gist.github.com/devinSpitz/3e83f8ab3d3d49a2875d31c1263d0d9a
I use this in a Docker container, and after the restart today everything stopped working (normally restarts were no problem until today xD).
25 comments
I think the model pipeline returns a dict, but you need to return a str
Looks like the pipeline might have gotten updated on huggingface
Hmm, understood. More to the story: I got a server today and built the same Docker setup as I use at home. There I got this error first. After that I restarted my Docker at home and had the same issue 😦
That's what the input looks like. Just need to parse the answer out of that πŸ’ͺ
I think this was not your fault haha the pipeline just changed. No way to version control that
Ok, and what can I do to fix that? xD Sorry, I don't really understand the problem 😦
There was an update to the README: to use the pipeline with LangChain, you must set return_full_text=True, as LangChain expects the full text to be returned, and the default for the pipeline is to only return the new text.
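As a rough sketch, setting that when the pipeline is created would look something like this (the model path, dtype, and other arguments here are assumptions based on the Dolly README, not your exact code):

Plain Text
import torch
from transformers import pipeline

# Sketch: create the text-generation pipeline with return_full_text=True
# so the prompt + completion is returned, which is what LangChain expects.
generate_text = pipeline(
    model="databricks/dolly-v2-7b",   # or a local path to the model
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
    return_full_text=True,
)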
Only setting this variable did not fix the problem xD
In your code, I think you just need to do this

Plain Text
res = model_pipeline(prompt)["generated_sequence"][0]
Or maybe I'm wrong haha
It might already be doing that πŸ€”

Maybe just try casting as a string and see what happens instead

return str(res)
Ok, will check that next. If you are right, it should be enough to go back 3 commits to yesterday, and I will try that first 😄 At least I hope so xD
That results in this error now xD: TypeError: list indices must be integers or slices, not str (I'm still reverting the models to yesterday, but it takes some time)
Yea that suggestion of mine is not so good

after looking at the huggingface code again, maybe it's just res = model_pipeline(prompt)[0]["generated_text"]

Wrapping the return with an str() call though will fix it immediately, then you can figure out the proper way to parse the output

πŸ€”
Hmm, that does not work either, and with the local files I run into this:
Plain Text
Traceback (most recent call last):
  File "/workspace/LLama-Hub/GpuRequests.py", line 31, in <module>
    model_pipeline = pipeline(
  File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 779, in pipeline
    framework, model = infer_framework_load_model(
  File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py", line 271, in infer_framework_load_model
    raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
ValueError: Could not load model /workspace/LLama-Hub/Model/dolly-v2-7b/ with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>, <class 'transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXForCausalLM'>).
Weird, error about loading the model πŸ™ƒ

Maybe install transformers directly from source?

Sorry man, not sure what's going on lol
Getting somewhere maybe though!
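If you want to try the source install, it would be something like this (standard pip command, assuming pip is how the image installs packages):

Plain Text
pip install git+https://github.com/huggingface/transformers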
@Logan M Thanks for your help anyway 😄 I will try again tomorrow, now it's time to go sleep xD I will post news here if I get any further 🙂
@Logan M return res[0]["generated_text"] results in:

Plain Text
Context information is below.
---------------------
What is the capital of london
What is the capital of london
What is the capital of london
What is the capital of london
What is the capital of england
What is the capital of england

---------------------
Given the context information and not prior knowledge, answer the question: What is the capital of england

The capital of england is London
Not perfect but it shows something readable πŸ˜„
Nice! Now go to bed, you can clean up the output tomorrow 🀣
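When you do clean it up, something like this could be a starting point (a sketch, assuming return_full_text=True so the generated text starts with the prompt):

Plain Text
# Sketch: with return_full_text=True the output begins with the prompt,
# so the actual answer is whatever the model appended after it.
full_text = res[0]["generated_text"]
answer = full_text[len(prompt):].strip() if full_text.startswith(prompt) else full_text.strip()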
I thought I was the only one having this error, I will try @Logan M's solution as well! Thanks 🙂
did that direct transformers install do the trick?
Hi, I also have this problem with llama-13b, did you solve it? Looking forward to your reply🧐