Hey guys, hugely important for not wasting money (maybe I missed it), but when I fine-tune an embedding model following the LlamaIndex guide, how can I save it? It's my first fine-tuning, so I'm pretty ignorant. Thanks in advance!
This is the full embedding fine-tuning right, not the adaptor?

You can just point the embed model to the output path; it should already be saved to the output path you gave:

Plain Text
from llama_index import ServiceContext

# embed_model points at the folder the fine-tuning run wrote to
service_context = ServiceContext.from_defaults(embed_model="local:/path/to/output")
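
For reference, here's a rough sketch of the fine-tuning step from the guide that produces that output folder (this assumes the legacy, pre-0.10 llama_index imports; the dataset file name and model id are placeholders):

Plain Text
# Sketch of the guide's embedding fine-tuning step
# (module paths may differ in newer LlamaIndex versions).
from llama_index.finetuning import (
    EmbeddingQAFinetuneDataset,
    SentenceTransformersFinetuneEngine,
)

# QA pairs generated earlier in the guide (placeholder file name)
train_dataset = EmbeddingQAFinetuneDataset.from_json("train_dataset.json")

finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="BAAI/bge-small-en",          # base model to fine-tune
    model_output_path="/path/to/output",   # the fine-tuned model is written here
)
finetune_engine.finetune()

# the folder at model_output_path now holds the saved model;
# it can be reloaded later via embed_model="local:/path/to/output"
embed_model = finetune_engine.get_finetuned_model()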
Thanks @Logan M! That covers usage.
But suppose I'm in Colab and want to close my session. When I open Colab again, I think I lose the fine-tuned model, no?
I wanted to know how to save it somewhere so I can use it again in the future!
Yea, you can right-click and download the model folder from the file explorer in Colab
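
Another option is copying the folder to Google Drive so it survives the session; a minimal sketch, with illustrative paths:

Plain Text
# Persist the fine-tuned model folder from Colab to Google Drive
import shutil
from google.colab import drive

drive.mount("/content/drive")
shutil.copytree("/path/to/output", "/content/drive/MyDrive/finetuned_embed_model")

# in a later session, mount Drive again and load it with
# embed_model="local:/content/drive/MyDrive/finetuned_embed_model"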