Client Settings
The client that you use to connect to S3 has a number of settings available. The settings have the form s3.client.CLIENT_NAME.SETTING_NAME. By default, s3 repositories use a client named default, but this can be modified using the repository setting client. For example:
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket",
    "client": "my-alternate-client"
  }
}
Most client settings can be added to the elasticsearch.yml
configuration file
with the exception of the secure settings, which you add to the Elasticsearch keystore.
For more information about creating and updating the Elasticsearch keystore, see
Secure settings.
For example, if you want to use specific credentials to access S3 then run the following commands to add these credentials to the keystore:
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
# a session token is optional so the following command may not be needed
bin/elasticsearch-keystore add s3.client.default.session_token
If instead you want to use the instance role or container role to access S3 then you should leave these settings unset. You can switch from using specific credentials back to the default of using the instance role or container role by removing these settings from the keystore as follows:
bin/elasticsearch-keystore remove s3.client.default.access_key
bin/elasticsearch-keystore remove s3.client.default.secret_key
# a session token is optional so the following command may not be needed
bin/elasticsearch-keystore remove s3.client.default.session_token
All client secure settings of this plugin are reloadable. After you reload the settings, the internal S3 clients, which are used to transfer the snapshot contents, will use the latest settings from the keystore. Any existing S3 repositories, as well as any newly created ones, will pick up the new values stored in the keystore.
In-progress snapshot/restore tasks will not be preempted by a reload of the client’s secure settings. The task will complete using the client as it was built when the operation started.
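After updating the secure settings in the keystore on each node, the reload itself can be triggered with the nodes reload secure settings API. As a sketch (the request body is only needed if the keystore is password-protected, and the password shown here is a placeholder):

```
POST _nodes/reload_secure_settings
{
  "secure_settings_password": "keystore-password"
}
```
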
The following list contains the available client settings. Those that must be
stored in the keystore are marked as "secure" and are reloadable; the other
settings belong in the elasticsearch.yml
file.
- access_key (Secure, reloadable) - An S3 access key. If set, the secret_key setting must also be specified. If unset, the client will use the instance or container role instead.
- secret_key (Secure, reloadable) - An S3 secret key. If set, the access_key setting must also be specified.
- session_token (Secure, reloadable) - An S3 session token. If set, the access_key and secret_key settings must also be specified.
- endpoint - The S3 service endpoint to connect to. This defaults to s3.amazonaws.com, but the AWS documentation lists alternative S3 endpoints. If you are using an S3-compatible service then you should set this to the service’s endpoint.
- protocol - The protocol to use to connect to S3. Valid values are either http or https. Defaults to https.
- proxy.host - The host name of a proxy to connect to S3 through.
- proxy.port - The port of a proxy to connect to S3 through.
- proxy.username (Secure, reloadable) - The username to connect to the proxy.host with.
- proxy.password (Secure, reloadable) - The password to connect to the proxy.host with.
- read_timeout - The socket timeout for connecting to S3. The value should specify the unit. For example, a value of 5s specifies a 5 second timeout. The default value is 50 seconds.
- max_retries - The number of retries to use when an S3 request fails. The default value is 3.
- use_throttle_retries - Whether retries should be throttled (i.e. should back off). Must be true or false. Defaults to true.
- path_style_access - Whether to force the use of the path style access pattern. If true, the path style access pattern will be used. If false, the access pattern will be automatically determined by the AWS Java SDK (see the AWS documentation for details). Defaults to false.
In versions 7.0, 7.1, 7.2 and 7.3 all bucket operations used the now-deprecated path style access pattern. If your deployment requires the path style access pattern then you should set this setting to true when upgrading.
- disable_chunked_encoding - Whether chunked encoding should be disabled or not. If false, chunked encoding is enabled and will be used where appropriate. If true, chunked encoding is disabled and will not be used, which may mean that snapshot operations consume more resources and take longer to complete. It should only be set to true if you are using a storage service that does not support chunked encoding. See the AWS Java SDK documentation for details. Defaults to false.
- region - Allows specifying the signing region to use. Specifying this setting manually should not be necessary for most use cases. Generally, the SDK will correctly guess the signing region to use. It should be considered an expert-level setting to support S3-compatible APIs that require v4 signatures and use a region other than the default us-east-1. Defaults to an empty string, which means that the SDK will try to automatically determine the correct signing region.
- signer_override - Allows specifying the name of the signature algorithm to use for signing requests by the S3 client. Specifying this setting should not be necessary for most use cases. It should be considered an expert-level setting to support S3-compatible APIs that do not support the signing algorithm that the SDK automatically determines for them. See the AWS Java SDK documentation for details. Defaults to an empty string, which means that no signing algorithm override will be used.
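As an illustration, several of the non-secure settings above might be combined in elasticsearch.yml for the my-alternate-client client shown earlier. The values here are placeholders, not recommendations:

```yaml
# Hypothetical non-secure client settings in elasticsearch.yml.
# Secure settings (access_key, secret_key, session_token) must go in the
# Elasticsearch keystore instead and cannot be set here.
s3.client.my-alternate-client.endpoint: "s3.amazonaws.com"
s3.client.my-alternate-client.max_retries: 5
s3.client.my-alternate-client.read_timeout: "60s"
s3.client.my-alternate-client.use_throttle_retries: true
```
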
S3-compatible services
There are a number of storage systems that provide an S3-compatible API, and the repository-s3 plugin allows you to use these systems in place of AWS S3.
To do so, you should set the s3.client.CLIENT_NAME.endpoint setting to the system’s endpoint. This setting accepts IP addresses and hostnames and may include a port. For example, the endpoint may be 172.17.0.2 or 172.17.0.2:9000. You may also need to set s3.client.CLIENT_NAME.protocol to http if the endpoint does not support HTTPS.
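For example, a client for a hypothetical S3-compatible service listening on 172.17.0.2:9000 without TLS might be configured in elasticsearch.yml as follows. The client name is a placeholder:

```yaml
# Hypothetical client configuration for an S3-compatible service
s3.client.my-minio-client.endpoint: "172.17.0.2:9000"
s3.client.my-minio-client.protocol: "http"
```

A repository would then reference this client by setting "client": "my-minio-client" in its repository settings, as shown earlier.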
Minio is an example of a storage system that provides an
S3-compatible API. The repository-s3
plugin allows Elasticsearch to work with
Minio-backed repositories as well as repositories stored on AWS S3. Other
S3-compatible storage systems may also work with Elasticsearch, but these are not
covered by the Elasticsearch test suite.
Note that some storage systems claim to be S3-compatible without correctly
supporting the full S3 API. The repository-s3
plugin requires full
compatibility with S3. In particular it must support the same set of API
endpoints, return the same errors in case of failures, and offer a consistency
model no weaker than S3’s when accessed concurrently by multiple nodes.
Incompatible error codes and consistency models may be particularly hard to
track down since errors and consistency failures are usually rare and hard to
reproduce.
You can perform some basic checks of the suitability of your storage system using the repository analysis API. If this API does not complete successfully, or indicates poor performance, then your storage system is not fully compatible with AWS S3 and therefore unsuitable for use as a snapshot repository. You will need to work with the supplier of your storage system to address any incompatibilities you encounter.
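As an illustrative sketch, a small analysis run against the my_s3_repository repository from the earlier example might look like the following. The parameters shown are assumptions chosen to keep the run lightweight; consult the repository analysis API documentation for the full set:

```
POST /_snapshot/my_s3_repository/_analyze?blob_count=10&max_blob_size=1mb
```
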