Hey Logan, thanks very much for pointing to the prompt.
Let me explain a bit better what I meant about the parameters: I knew that generate_qa_embedding_pairs was the (new) place to go to modify the parameters; it's just that I didn't trust myself enough to go and change it in the core (common.py), and I wanted to keep the decisions/configuration in the notebook instead.
So instead, I thought that the way the embedding finetuning was (previously) done here
https://github.com/run-llama/finetune-embedding/tree/main was much more straightforward for me when changing the needed settings (batch size, epochs, prompts, questions per chunk, etc.), at least in what I called the "configuration" part (the notebook).
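To make the idea concrete, here is a minimal sketch of what I mean by keeping everything in a "configuration" cell of the notebook and only passing values into the library. The exact keyword names (num_questions_per_chunk, qa_generate_prompt_tmpl, batch_size, epochs) and whether an llm argument is required are assumptions based on the version I had installed, so they may need adjusting:

```python
# A sketch, not a drop-in script: parameter names below are assumptions and
# may differ between llama_index versions -- check your installed signature.
from llama_index.finetuning import (
    generate_qa_embedding_pairs,
    SentenceTransformersFinetuneEngine,
)
from llama_index.llms.openai import OpenAI  # assumed import path for the LLM

# "Configuration" cell: everything I want to tweak lives here, not in common.py.
NUM_QUESTIONS_PER_CHUNK = 2
QA_PROMPT_TMPL = """\
Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge,
generate {num_questions_per_chunk} questions based on the context.
"""
BATCH_SIZE = 10
EPOCHS = 2

# Build the QA pairs with my own prompt / questions-per-chunk settings.
train_dataset = generate_qa_embedding_pairs(
    nodes=train_nodes,  # nodes prepared earlier in the notebook
    llm=OpenAI(model="gpt-3.5-turbo"),  # may be optional depending on version
    qa_generate_prompt_tmpl=QA_PROMPT_TMPL,
    num_questions_per_chunk=NUM_QUESTIONS_PER_CHUNK,
)

# Finetune with batch size / epochs chosen in the notebook as well.
finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="BAAI/bge-small-en",
    model_output_path="finetuned_model",
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
)
finetune_engine.finetune()
embed_model = finetune_engine.get_finetuned_model()
```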
Anyway, I managed to downgrade the package and run it through to the end (the old way). I am really happy with the result, and let me once more express my appreciation for both the work and the support received.