Hi! Is it possible to do the following:
1. For all the chunks in the document, call the embedding model, calculate the embeddings, and store them in a dict / list.
2. For each query, calculate its embedding.
3. For each query, compute the cosine similarity with all the embeddings of the document.
And while doing all of that, can we also measure the embedding time for each query and the embedding time for the whole dataset? A sketch of what I mean is below.
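Something like this minimal sketch, where `embed()` is a hypothetical placeholder for whatever embedding model call you use (the dummy body below just generates a deterministic random vector so the script runs end to end), and timing is done with `time.perf_counter`:

```python
import time
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical placeholder: swap in your real model call here,
    # e.g. np.array(embed_model.get_text_embedding(text)).
    # This dummy version returns a deterministic random vector per text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1) Embed every chunk once, store the vectors in a dict, time the whole pass.
chunks = ["first chunk ...", "second chunk ..."]
t0 = time.perf_counter()
chunk_embeddings = {chunk: embed(chunk) for chunk in chunks}
dataset_embed_time = time.perf_counter() - t0  # embedding time for the whole dataset

# 2) + 3) For each query: embed it (timed), then score against every chunk.
queries = ["what is ...?"]
for query in queries:
    t0 = time.perf_counter()
    q_emb = embed(query)
    query_embed_time = time.perf_counter() - t0  # embedding time for this query
    scores = {chunk: cosine_sim(q_emb, emb) for chunk, emb in chunk_embeddings.items()}
    best = max(scores, key=scores.get)
    print(f"{query!r}: best chunk = {best!r}, embed time = {query_embed_time:.4f}s")

print(f"dataset embedding time: {dataset_embed_time:.4f}s")
```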
@WhiteFang_Jr I have another question: assume I have a text file which I want to use as the dataset. I want to create one chunk per line, i.e. one chunk for each sentence. A rough version of what I'm picturing is below.
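Roughly this, assuming each sentence already sits on its own line (the file name `data.txt` is made up):

```python
# One chunk per non-empty line of the text file.
with open("data.txt", encoding="utf-8") as f:
    chunks = [line.strip() for line in f if line.strip()]

# If several sentences share a line, you could instead split on sentence
# boundaries first (e.g. with nltk.sent_tokenize) before building chunks.
print(f"{len(chunks)} chunks")
```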