Thank you for the prompt response.
I have a follow-up question regarding the scoring mechanism used in the retrieval process. Given the code snippet:
from llama_index.core import StorageContext, load_index_from_storage  # import path for llama_index >= 0.10

# Rebuild the index from persisted storage and retrieve the top 5 nodes
storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
index = load_index_from_storage(storage_context)
retriever = index.as_retriever(similarity_top_k=5)
result = retriever.retrieve(question)
Could you clarify whether the scores in result are based on cosine similarity? When I compute the similarity directly with the embedding model, using the question and the text of each returned node, the values differ from the scores the retriever returns. I want to make sure my implementation aligns with the expected behavior.
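For reference, this is roughly how I'm computing the direct comparison (a minimal sketch; cosine is a small helper of mine, and I'm assuming Settings.embed_model is the same model that was used when the index was built):

import numpy as np
from llama_index.core import Settings

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embed_model = Settings.embed_model  # assumed to match the model used at index time
q_emb = embed_model.get_query_embedding(question)
for node_with_score in result:
    n_emb = embed_model.get_text_embedding(node_with_score.node.get_content())
    print(node_with_score.score, cosine(q_emb, n_emb))  # retriever score vs. my direct cosine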
Regarding my first question, it seems I might not have been clear. We've fine-tuned our embedding model, but I'm uncertain which embedding approach, SentenceTransformer or HuggingFaceEmbedding, would be better suited for loading it.
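To make the question concrete, these are the two options I'm weighing (a sketch; ./finetuned-embed-model is a hypothetical local path to our fine-tuned checkpoint):

# Option 1: load the checkpoint directly with sentence-transformers
from sentence_transformers import SentenceTransformer
st_model = SentenceTransformer("./finetuned-embed-model")  # hypothetical path
query_emb = st_model.encode(question)

# Option 2: wrap the same checkpoint with LlamaIndex's HuggingFaceEmbedding
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
Settings.embed_model = HuggingFaceEmbedding(model_name="./finetuned-embed-model")  # hypothetical path

Any guidance on which of these is preferable for use with the retriever shown above would be appreciated.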