Accessing endpoint models with the serverless Inference API
At a glance
The original poster is familiar with the serverless Inference API and asks how to access models deployed as dedicated Inference Endpoints in the same way. Another community member replies that the same inference client works for endpoint models as well, and links to an example that uses Hugging Face text generation inference (TGI). The remaining comments express gratitude but do not add a further answer to the original question.
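For context, here is a minimal sketch of the approach described in the reply, assuming the `huggingface_hub` library's `InferenceClient`; the endpoint URL and token below are placeholders, not values from the thread. The same client accepts either a Hub model id (serverless Inference API) or a dedicated endpoint URL.

```python
# Minimal sketch: one client class for both the serverless Inference API
# and a dedicated Inference Endpoint (URL and token are placeholders).
from huggingface_hub import InferenceClient

# Serverless Inference API: pass a model id from the Hub.
serverless_client = InferenceClient(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any hosted text-generation model
    token="hf_xxx",                              # your Hugging Face access token
)

# Dedicated Inference Endpoint: pass the endpoint URL instead of a model id.
endpoint_client = InferenceClient(
    model="https://<your-endpoint>.endpoints.huggingface.cloud",  # placeholder URL
    token="hf_xxx",
)

# Both clients expose the same task methods, e.g. TGI-backed text generation.
print(endpoint_client.text_generation("What is an Inference Endpoint?", max_new_tokens=64))
```

The point of the linked example is that no separate API is needed for endpoint models: switching from the serverless Inference API to a dedicated endpoint is just a matter of pointing the same client at the endpoint URL.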