Hi team,

I'm using llama_index.embeddings.huggingface and could not find any documentation for the validate_supported() method. (https://docs.llamaindex.ai/en/stable/api_reference/embeddings/huggingface/#llama_index.embeddings.huggingface.HuggingFaceInferenceAPIEmbedding.validate_supported)

How can I use this method to validate if the embed model is available?
Here's the source code of the method:

Plain Text
def validate_supported(self, task: str) -> None:
    """
    Confirm the contained model_name is deployed on the Inference API service.

    Args:
        task: Hugging Face task to check within. A list of all tasks can be
            found here: https://huggingface.co/tasks
    """
    all_models = self._sync_client.list_deployed_models(frameworks="all")
    try:
        if self.model_name not in all_models[task]:
            raise ValueError(
                "The Inference API service doesn't have the model"
                f" {self.model_name!r} deployed."
            )
    except KeyError as exc:
        raise KeyError(
            f"Input task {task!r} not in possible tasks {list(all_models.keys())}."
        ) from exc
I think it's just pinging the Hugging Face Inference API to check whether the model you chose is deployed for the given task. It raises a ValueError if the model isn't deployed, and a KeyError if the task name itself isn't recognized; if it returns without raising, the model is available.
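To make the pass/fail behavior concrete, here's a small standalone sketch that mirrors the method's logic against a hand-made deployed-models mapping instead of the live API call. The model names and task listing below are hypothetical example data, not real query results; with the real class you would just call embed_model.validate_supported("feature-extraction") on an instance and catch the same two exceptions.

```python
# Sketch of validate_supported()'s decision logic. The real method fetches
# `all_models` from the Inference API via list_deployed_models(); here we
# pass a hand-made dict so the control flow is easy to follow.

def validate_supported(model_name: str, task: str, all_models: dict) -> None:
    """Raise if `model_name` is not deployed for `task`; return None if it is."""
    try:
        if model_name not in all_models[task]:
            # Known task, but the model isn't in its deployed list.
            raise ValueError(
                f"The Inference API service doesn't have the model {model_name!r} deployed."
            )
    except KeyError as exc:
        # The task name itself isn't a key in the listing.
        raise KeyError(
            f"Input task {task!r} not in possible tasks {list(all_models.keys())}."
        ) from exc


# Hypothetical listing, for illustration only.
deployed = {"feature-extraction": ["BAAI/bge-small-en-v1.5"]}

# Deployed model + known task: returns None (no exception).
validate_supported("BAAI/bge-small-en-v1.5", "feature-extraction", deployed)
```

So to "validate if the embed model is available", call the method and treat an exception as "not available": ValueError means the model isn't deployed, KeyError means the task name is wrong.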