The community member created embeddings using HuggingFaceEmbeddings and then passed them to an OpenAI LLM for querying, but encountered an error. They asked whether there is a supported workflow for creating embeddings, saving them to an index.json file, and then passing that index to the OpenAI LLM for querying.
In the comments, another community member asked what error the original poster was seeing, then offered a potential solution: pass the service context into the vector index both when creating it and when loading it.
Hey @Logan M, what I have done is: I created the embeddings using HuggingFaceEmbeddings and then passed them to the OpenAI LLM for querying, but it's throwing an error. Is there a scenario where we can create embeddings, save them into index.json, and then pass this to the OpenAI LLM for querying?
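The suggested fix can be sketched as follows. This is a minimal sketch, assuming an older llama_index release where indexes were persisted to index.json (class and method names such as `GPTSimpleVectorIndex.load_from_disk` are from that era and may differ in current versions), plus `langchain` for the HuggingFace embedding wrapper; the `./data` directory and the query string are placeholders:

```python
# Sketch of the suggested fix: wrap the HuggingFace embeddings in a
# ServiceContext and pass that context BOTH when creating the index
# and when loading it back from index.json.
# Assumes an older llama_index release (index.json persistence) and
# langchain installed; names may differ in newer versions.
from langchain.embeddings import HuggingFaceEmbeddings
from llama_index import (
    GPTSimpleVectorIndex,
    LangchainEmbedding,
    ServiceContext,
    SimpleDirectoryReader,
)

# Wrap the HuggingFace embeddings so llama_index can use them.
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# Create the index with the service context and persist it.
documents = SimpleDirectoryReader("./data").load_data()  # placeholder path
index = GPTSimpleVectorIndex.from_documents(
    documents, service_context=service_context
)
index.save_to_disk("index.json")

# Pass the SAME service context when loading the index back, so the
# query is embedded with the HuggingFace model while the default
# OpenAI LLM synthesizes the answer.
index = GPTSimpleVectorIndex.load_from_disk(
    "index.json", service_context=service_context
)
response = index.query("What does the document say?")  # placeholder query
print(response)
```

The key point is that the service context carrying the embedding model must be supplied at load time as well as at creation time; otherwise the loaded index falls back to the default (OpenAI) embeddings, whose vector dimension does not match the stored HuggingFace embeddings, which is a common source of the error described above.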