HuggingFace inference service
Creates an inference endpoint to perform an inference task with the hugging_face service.
Request

PUT /_inference/<task_type>/<inference_id>
Path parameters

- <inference_id>
  (Required, string) The unique identifier of the inference endpoint.
- <task_type>
  (Required, string) The type of the inference task that the model will perform.
  Available task types:
  - text_embedding
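Putting the two path parameters together, the request path has the following shape; the identifier below is only a hypothetical example, not a required name:

```python
# Compose the inference endpoint path from its two path parameters.
task_type = "text_embedding"                # the only task type supported by this service
inference_id = "my-hugging-face-endpoint"   # hypothetical identifier chosen by you
path = f"/_inference/{task_type}/{inference_id}"
print(path)  # /_inference/text_embedding/my-hugging-face-endpoint
```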
Request body

- service
  (Required, string) The type of service supported for the specified task type. In this case, hugging_face.
- service_settings
  (Required, object) Settings used to install the inference model. These settings are specific to the hugging_face service.
  - api_key
    (Required, string) A valid access token for your Hugging Face account. You can find your existing access tokens, or create a new one, on the settings page of your account.
    You need to provide the API key only once, during inference model creation. The Get inference API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.
  - url
    (Required, string) The URL endpoint to use for the requests.
  - rate_limit
    (Optional, object) By default, the hugging_face service sets the number of requests allowed per minute to 3000. This helps to minimize the number of rate limit errors returned from Hugging Face. To modify this, set the requests_per_minute setting of this object in your service settings:
    "rate_limit": { "requests_per_minute": <<number_of_requests>> }
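As a sketch of where the rate_limit object sits in a request body, the following builds the service settings with a lowered limit; the value 1500 is an arbitrary example (the default is 3000), and the token and URL are placeholders:

```python
# Request body for PUT _inference/text_embedding/<inference_id> with a
# custom rate limit. 1500 is an arbitrary example value; the default is 3000.
inference_config = {
    "service": "hugging_face",
    "service_settings": {
        "api_key": "<access_token>",   # placeholder, see api_key above
        "url": "<url_endpoint>",       # placeholder endpoint URL
        "rate_limit": {"requests_per_minute": 1500},
    },
}
print(inference_config["service_settings"]["rate_limit"])
```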
Hugging Face service example

The following example shows how to create an inference endpoint called hugging-face-embeddings to perform a text_embedding task type.
```python
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="hugging-face-embeddings",
    inference_config={
        "service": "hugging_face",
        "service_settings": {
            "api_key": "<access_token>",
            "url": "<url_endpoint>"
        }
    },
)
print(resp)
```
```javascript
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "hugging-face-embeddings",
  inference_config: {
    service: "hugging_face",
    service_settings: {
      api_key: "<access_token>",
      url: "<url_endpoint>",
    },
  },
});
console.log(response);
```
```console
PUT _inference/text_embedding/hugging-face-embeddings
{
  "service": "hugging_face",
  "service_settings": {
    "api_key": "<access_token>",
    "url": "<url_endpoint>"
  }
}
```
- api_key: A valid Hugging Face access token. You can find it on the settings page of your account.
- url: The inference endpoint URL you created on Hugging Face.
Create a new inference endpoint on the Hugging Face endpoint page to get an endpoint URL:

1. Select the model you want to use on the new endpoint creation page, for example intfloat/e5-small-v2, then select the Sentence Embeddings task under the Advanced configuration section.
2. Create the endpoint.
3. Copy the URL after the endpoint initialization has finished.
The list of recommended models for the Hugging Face service: