Hi, I am currently configuring Elasticsearch (via from llama_index.vector_stores import ElasticsearchStore) as my local vector database. During this process, I ran into an issue with the embedding model used when saving and loading the index locally.
To elaborate, I define a specific open-source embedding model when building and saving the index locally. However, when I load this index, it does not retain the initially defined open-source embedding model; instead, it automatically falls back to the default OpenAI embedding model. This causes a problem because the length (and content) of the query vector no longer matches the vectors stored in the saved index.
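For reference, here is a minimal sketch of what I am doing (the embedding model name, index name, Elasticsearch URL, and data path are placeholder values, not my exact setup):

```python
from llama_index import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    ServiceContext,
    StorageContext,
)
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.vector_stores import ElasticsearchStore

# Open-source embedding model (placeholder name), used when building the index
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# Elasticsearch-backed vector store (index name and URL are placeholders)
vector_store = ElasticsearchStore(
    index_name="my_index",
    es_url="http://localhost:9200",
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build and persist the index using the open-source embeddings
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,
)

# Later, in a new session: reload the index from the same vector store.
# At this point queries appear to be embedded with the default OpenAI model,
# so the query vector's dimension no longer matches the stored vectors.
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
query_engine = index.as_query_engine()
response = query_engine.query("example question")
```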
Could you provide any insights or guidance on how to resolve this issue?