
Adapter

Seems like LinearLayer is missing from llama_index/finetuning/embeddings/adapter_utils.py. It's imported from adapter.py in the same namespace (in llama-index-finetuning).
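For context, the adapter being discussed is essentially a learned linear transform applied to query embeddings. A minimal numpy sketch of what such a layer computes (the class name and dimensions here are hypothetical stand-ins, not the actual llama-index implementation):

```python
import numpy as np


class LinearAdapter:
    """Minimal sketch of a linear embedding adapter: y = x @ W.T + b.

    Hypothetical illustration of the kind of transform the missing
    LinearLayer provides; not the real llama-index class.
    """

    def __init__(self, in_dim: int, out_dim: int, bias: bool = True, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Small random init; a real adapter learns these weights during finetuning.
        self.weight = rng.normal(scale=0.02, size=(out_dim, in_dim))
        self.bias = np.zeros(out_dim) if bias else None

    def forward(self, x: np.ndarray) -> np.ndarray:
        y = x @ self.weight.T
        if self.bias is not None:
            y = y + self.bias
        return y


# Adapt a single query embedding (dimensions chosen for illustration).
adapter = LinearAdapter(in_dim=384, out_dim=384)
query_emb = np.ones(384)
adapted = adapter.forward(query_emb)
print(adapted.shape)  # (384,)
```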
11 comments
Oh wow, good call out. Definitely a bug
We can port it from v0.9.X
Great, thanks for looking.
@Logan M also, pipenv doesn't want to resolve llama-index-core==0.10.3 together with llama-index-finetuning==*; I have to roll back to 0.10.0 to get it to lock.
What's the conflict?
The conflict is caused by:
    The user requested llama-index-core==0.10.3
    llama-index-finetuning 0.1.0 depends on llama-index-core==0.10.0
    The user requested llama-index-core==0.10.3
    llama-index-finetuning 0.0.7 depends on llama-index-core<0.10.0 and >=0.9.54
    The user requested llama-index-core==0.10.3
    llama-index-finetuning 0.0.6 depends on llama-index-core<0.10.0 and >=0.9.32
    The user requested llama-index-core==0.10.3
    llama-index-finetuning 0.0.5 depends on llama-index-core<0.10.0 and >=0.9.32
    The user requested llama-index-core==0.10.3
    llama-index-finetuning 0.0.4 depends on llama-index-core<0.10.0 and >=0.9.32
    The user requested llama-index-core==0.10.3
    llama-index-finetuning 0.0.3 depends on llama-index-core<0.10.0 and >=0.9.32
    The user requested llama-index-core==0.10.3
    llama-index-finetuning 0.0.2 depends on llama-index-core<0.10.0 and >=0.9.32
    The user requested llama-index-core==0.10.3
    llama-index-finetuning 0.0.1 depends on llama-index-core<0.10.0 and >=0.9.32
They all seem to depend directly on 0.10.0 or lower.
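Until a compatible finetuning release is published, one workaround (based on the resolver output above) is pinning core in the Pipfile to the exact version that llama-index-finetuning 0.1.0 declares:

```
[packages]
llama-index-core = "==0.10.0"
llama-index-finetuning = "==0.1.0"
```

This trades the newer core release for a lockable environment; once a finetuning version with a relaxed constraint ships, both pins can be loosened again.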
Ah, 0.1.1 of finetuning didn't get published to fix that. OK, will let you know when I have that done.
No worries, when you have time. Thanks.
My app is back in working order with the new artifact, thanks again for the quick fixes.
Great! Forgot to ping you that I fixed that πŸ™‚