
Embed model error

At a glance

The community member created embeddings using HuggingFaceEmbeddings and then passed them to the OpenAI LLM, but encountered an error. They asked whether there is a scenario where they can create embeddings, save them to an index.json file, and then pass that to the OpenAI LLM for querying.

In the comments, another community member asked what kind of error the original poster was getting, then suggested a solution: the service context needs to be passed to the vector index both when creating it and when loading it.

Hey @Logan M, what I have done is: I created the embeddings using HuggingFaceEmbeddings and then passed them to the OpenAI LLM for querying, but it's throwing an error. Do we have any such scenario where we can create embeddings, save them into index.json, and then pass that to the OpenAI LLM for querying?
2 comments
What kind of error are you getting?
Oh, you need to put the service context into the vector index:

.from_documents(documents, service_context=service_context)

And also when loading:

.load_from_disk("index.json", service_context=service_context)
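Putting those two lines in context: below is a minimal end-to-end sketch assuming the legacy GPTSimpleVectorIndex API that the save_to_disk / load_from_disk snippets above come from (LlamaIndex ~0.5 with LangChain installed); the embedding model name and the "data" directory are placeholder assumptions.

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import OpenAI
from llama_index import (
    GPTSimpleVectorIndex,
    LangchainEmbedding,
    LLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
)

# Local HuggingFace model for embeddings (model name is a placeholder)
embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
)

# OpenAI LLM for answering queries (reads OPENAI_API_KEY from the environment)
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0))

# Bundle the embed model and the LLM into one service context
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor, embed_model=embed_model
)

documents = SimpleDirectoryReader("data").load_data()

# Pass the service context when building and saving the index ...
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
index.save_to_disk("index.json")

# ... and again when loading it back; otherwise LlamaIndex falls back to the
# default OpenAI embed model and the query embeddings won't match the index
index = GPTSimpleVectorIndex.load_from_disk("index.json", service_context=service_context)
print(index.query("What is this document about?"))

Without the service context at load time, the index re-defaults to OpenAI embeddings, which is one common source of the kind of error described above.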