
Updated 9 months ago

Is it possible to use GPT4ALL embed in llama_index?
2 comments
Currently LlamaIndex supports these embedding model implementations: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#list-of-supported-embeddings

Not seeing GPT4ALL there, but worry not: you can use the custom embedding class to set up the embed model of your choice: https://docs.llamaindex.ai/en/stable/examples/embeddings/custom_embeddings.html
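[Editor's note: the custom-embedding route can be sketched as below. This is a minimal, self-contained sketch: `StubGPT4AllEmbedder` is a stand-in for a real GPT4All backend (the `gpt4all` Python package exposes an `Embed4All` class for this), and `GPT4AllEmbedding` only mirrors the hook methods that LlamaIndex's custom-embedding example asks a `BaseEmbedding` subclass to implement, without importing the library.]

```python
from typing import List

class StubGPT4AllEmbedder:
    """Stand-in for a real embedding backend. In practice you would call
    something like gpt4all's Embed4All().embed(text); here we fold
    characters into a fixed-size vector so the sketch runs anywhere."""

    def __init__(self, dim: int = 8):
        self.dim = dim

    def embed(self, text: str) -> List[float]:
        vec = [0.0] * self.dim
        for i, ch in enumerate(text):
            vec[i % self.dim] += ord(ch) / 1000.0
        return vec

class GPT4AllEmbedding:
    """Mirrors the hooks the LlamaIndex custom-embedding example implements.
    With the real library this class would subclass BaseEmbedding."""

    def __init__(self, backend: StubGPT4AllEmbedder):
        self._backend = backend

    def _get_query_embedding(self, query: str) -> List[float]:
        return self._backend.embed(query)

    def _get_text_embedding(self, text: str) -> List[float]:
        return self._backend.embed(text)

    def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        return [self._backend.embed(t) for t in texts]
```

To use it for real, swap the stub for GPT4All's embedder and subclass LlamaIndex's `BaseEmbedding` so the index and retriever can call these hooks.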
Thank you for your valuable response. I was looking at the example class provided in one of your sources. In the statement embeddings = self._model.encode([[self._instruction, query]]), it is passing two strings as a list to the embedding model. I want to know what is happening there. Does the embedding model return a score by computing the similarity between the two strings provided here?
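[Editor's note: for the record, Instructor-style models take (instruction, text) pairs and return one embedding vector per pair, not a similarity score; the instruction conditions how the text is embedded. A toy illustration of the shape of the call (this `encode` is a stand-in, not the real InstructorEmbedding API):]

```python
from typing import List

def encode(pairs: List[List[str]]) -> List[List[float]]:
    """Toy stand-in: takes a list of [instruction, text] pairs and
    returns one vector per pair, mimicking the call shape of
    InstructorEmbedding's encode()."""
    vectors = []
    for instruction, text in pairs:
        # The instruction influences the embedding; here we just fold
        # both strings into a fixed-size vector for illustration.
        vec = [0.0] * 4
        for i, ch in enumerate(instruction + text):
            vec[i % 4] += ord(ch)
        vectors.append(vec)
    return vectors

# One input pair in -> one embedding vector out (not a similarity score).
out = encode([["Represent the question for retrieval:", "what is gpt4all?"]])
```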

Edit: Okay I implemented everything and now I am getting this error:
Plain Text
utils.py", line 138, in embed_nodes
    new_embeddings = embed_model.get_text_embedding_batch(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: BaseEmbedding.get_text_embedding_batch() missing 1 required positional argument: 'texts'

edit: Okay I solved my problem!!
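[Editor's note: the thread does not say what the fix was, but this exact message is the classic symptom of passing the embedding class itself (rather than an instance) as the embed model: Python then binds the first positional argument to `self`, so `texts` appears to be missing. A minimal reproduction under that assumption:]

```python
from typing import List

class Embedder:
    def get_text_embedding_batch(self, texts: List[str]) -> List[List[float]]:
        return [[0.0] for _ in texts]

# Bug: passing the class, not an instance (note the missing parentheses).
embed_model = Embedder
try:
    embed_model.get_text_embedding_batch(["a", "b"])
except TypeError as e:
    # message ends with: missing 1 required positional argument: 'texts'
    pass

# Fix: instantiate the class before handing it over.
embed_model = Embedder()
embeddings = embed_model.get_text_embedding_batch(["a", "b"])
```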