
Hi, I am currently configuring Elasticsearch (via from llama_index.vector_stores import ElasticsearchStore) as my local database. During this process, I ran into an issue with how the embedding model is used when saving and loading the index locally.

To elaborate, I define a specific open-source embedding model when building and saving the index locally. However, when I load this index, it does not retain the originally defined open-source embedding model and instead defaults to the OpenAI embedding model. This causes a problem because the length (and content) of the query embedding no longer matches the vectors stored in the saved index.
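
For reference, my indexing setup looks roughly like this (the model name, Elasticsearch settings, and paths are placeholders):

from llama_index import ServiceContext, StorageContext, VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.vector_stores import ElasticsearchStore

# Open-source embedding model used at indexing time (model name is just an example)
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# Elasticsearch-backed vector store (index name and URL are placeholders)
vector_store = ElasticsearchStore(index_name="my_index", es_url="http://localhost:9200")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,  # embeddings are generated with the HF model
)
index.storage_context.persist(persist_dir="./storage")  # index metadata saved locally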

Could you provide any insights or guidance on how to resolve this issue?
You need to pass in the service context again when loading the index
Thanks! I'm currently using the code below for loading:
self.index = load_index_from_storage(storage_context)

So I guess I need to use something other than 'load_index_from_storage'?
You can still do that, you just need to pass in the service context:

load_index_from_storage(..., service_context=service_context)
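
For completeness, the load path could look roughly like this, assuming the same embedding model and Elasticsearch settings used at indexing time (model name, index name, and persist dir are placeholders):

from llama_index import ServiceContext, StorageContext, load_index_from_storage
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.vector_stores import ElasticsearchStore

# Recreate the same embedding model that was used to build the index
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# Point the storage context at the same vector store and local persist dir
vector_store = ElasticsearchStore(index_name="my_index", es_url="http://localhost:9200")
storage_context = StorageContext.from_defaults(
    vector_store=vector_store,
    persist_dir="./storage",
)

# Passing service_context keeps queries on the open-source embeddings instead of OpenAI
index = load_index_from_storage(storage_context, service_context=service_context)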
This is cool, and it works perfectly. Thanks for the help!