Repository Settings
The s3 repository type supports a number of settings to customize how data is
stored in S3. These can be specified when creating the repository. For example:

PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket",
    "another_setting": "setting-value"
  }
}
The following settings are supported:
- bucket: (Required) Name of the S3 bucket to use for snapshots. The bucket
  name must adhere to Amazon’s S3 bucket naming rules.
- client: The name of the S3 client to use to connect to S3. Defaults to
  default.
- base_path: Specifies the path to the repository data within its bucket.
  Defaults to an empty string, meaning that the repository is at the root of
  the bucket. The value of this setting should not start or end with a /.
- chunk_size: Big files can be broken down into chunks during snapshotting if
  needed. Specify the chunk size as a value and unit, for example: 1TB, 1GB,
  10MB. Defaults to the maximum size of a blob in S3, which is 5TB.
- compress: When set to true, metadata files are stored in compressed format.
  This setting doesn’t affect index files, which are already compressed by
  default. Defaults to false.
- max_restore_bytes_per_sec: Throttles the per-node restore rate. Defaults to
  unlimited. Note that restores are also throttled through recovery settings.
- max_snapshot_bytes_per_sec: Throttles the per-node snapshot rate. Defaults
  to 40mb per second.
- readonly: Makes the repository read-only. Defaults to false.
- server_side_encryption: When set to true, files are encrypted on the server
  side using the AES256 algorithm. Defaults to false.
- buffer_size: Minimum threshold below which a chunk is uploaded using a
  single request. Beyond this threshold, the S3 repository will use the AWS
  Multipart Upload API to split the chunk into several parts, each of
  buffer_size length, and to upload each part in its own request. Note that
  setting a buffer size lower than 5mb is not allowed, since it would prevent
  the use of the Multipart API and may result in upload errors. It is also not
  possible to set a buffer size greater than 5gb, as that is the maximum
  upload size allowed by S3. Defaults to 100mb or 5% of JVM heap, whichever
  is smaller.
- canned_acl: The S3 repository supports all S3 canned ACLs: private,
  public-read, public-read-write, authenticated-read, log-delivery-write,
  bucket-owner-read, bucket-owner-full-control. Defaults to private. You can
  specify a canned ACL using the canned_acl setting. When the S3 repository
  creates buckets and objects, it adds the canned ACL to those buckets and
  objects.
- storage_class: Sets the S3 storage class for objects stored in the snapshot
  repository. Values may be standard, reduced_redundancy, standard_ia,
  onezone_ia and intelligent_tiering. Defaults to standard. Changing this
  setting on an existing repository only affects the storage class of newly
  created objects, resulting in a mixed usage of storage classes.
  Additionally, S3 Lifecycle Policies can be used to manage the storage class
  of existing objects. Due to the extra complexity of the Glacier class
  lifecycle, it is not currently supported by the plugin. For more
  information about the different classes, see the AWS Storage Classes Guide.
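As an illustration, several of the settings above can be combined in a single registration request. The bucket name and all values below are placeholders chosen for the example, not recommendations:

```
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket",
    "base_path": "snapshots/cluster-one",
    "chunk_size": "1GB",
    "buffer_size": "200mb",
    "compress": true,
    "storage_class": "standard_ia",
    "max_snapshot_bytes_per_sec": "80mb"
  }
}
```

Any setting omitted from the request simply keeps the default documented above.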
The option of defining client settings in the repository settings as documented below is considered deprecated, and will be removed in a future version.
In addition to the above settings, you may also specify all non-secure client settings in the repository settings. In this case, the client settings found in the repository settings will be merged with those of the named client used by the repository. Conflicts between client and repository settings are resolved by the repository settings taking precedence over client settings.
For example:
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "client": "my-client",
    "bucket": "my-bucket",
    "endpoint": "my.s3.endpoint"
  }
}

This sets up a repository that uses all client settings from the client
my-client, except for the endpoint, which is overridden to my.s3.endpoint
by the repository settings.
Recommended S3 Permissions
In order to restrict the Elasticsearch snapshot process to the minimum required resources, we recommend using Amazon IAM in conjunction with pre-existing S3 buckets. Here is an example policy which will allow snapshot access to an S3 bucket named "snaps.example.com". This may be configured through the AWS IAM console by creating a Custom Policy and using a Policy Document similar to this (changing snaps.example.com to your bucket name).
{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::snaps.example.com"
      ]
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::snaps.example.com/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
You may further restrict the permissions by specifying a prefix within the bucket; in this example, the prefix is named "foo".
{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "foo/*"
          ]
        }
      },
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::snaps.example.com"
      ]
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::snaps.example.com/foo/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
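When permissions are restricted to a prefix in this way, the repository's base_path setting should point at that same prefix so that Elasticsearch only reads and writes where the policy allows. A minimal sketch, reusing the example bucket and "foo" prefix from the policy above:

```
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "snaps.example.com",
    "base_path": "foo"
  }
}
```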
The bucket needs to exist to register a repository for snapshots. If you did not create the bucket then the repository registration will fail.
Cleaning up multi-part uploads
Elasticsearch uses S3’s multi-part upload process to upload larger blobs to the repository. The multi-part upload process works by dividing each blob into smaller parts, uploading each part independently, and then completing the upload in a separate step. This reduces the amount of data that Elasticsearch must re-send if an upload fails: Elasticsearch only needs to re-send the part that failed rather than starting from the beginning of the whole blob. The storage for each part is charged independently starting from the time at which the part was uploaded.
If a multi-part upload cannot be completed then it must be aborted in order to delete any parts that were successfully uploaded, preventing further storage charges from accumulating. Elasticsearch will automatically abort a multi-part upload on failure, but sometimes the abort request itself fails. For example, if the repository becomes inaccessible or the instance on which Elasticsearch is running is terminated abruptly then Elasticsearch cannot complete or abort any ongoing uploads.
You must make sure that failed uploads are eventually aborted to avoid unnecessary storage costs. You can use the List multipart uploads API to list the ongoing uploads and look for any which are unusually long-running, or you can configure a bucket lifecycle policy to automatically abort incomplete uploads once they reach a certain age.
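As a sketch of the lifecycle-policy approach, the following bucket lifecycle configuration (which could be applied with the aws s3api put-bucket-lifecycle-configuration CLI command) aborts any multipart upload that remains incomplete after it was initiated. The seven-day threshold and rule ID are illustrative choices, not values prescribed by Elasticsearch:

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": {
        "Prefix": ""
      },
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }
  ]
}
```

The empty Prefix applies the rule to the whole bucket; narrow it to the repository's base_path if the bucket is shared with other data.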
AWS VPC Bandwidth Settings
AWS instances resolve S3 endpoints to a public IP. If the Elasticsearch instances reside in a private subnet in an AWS VPC, then all traffic to S3 will go through the VPC’s NAT instance. If your VPC’s NAT instance is a smaller instance size (e.g. a t2.micro) or is handling a high volume of network traffic, your bandwidth to S3 may be limited by that NAT instance’s networking bandwidth limitations. Instead, we recommend creating a VPC endpoint that enables connecting to S3 from instances that reside in a private subnet in an AWS VPC. This will eliminate any limitations imposed by the network bandwidth of your VPC’s NAT instance.
Instances residing in a public subnet in an AWS VPC will connect to S3 via the VPC’s internet gateway and not be bandwidth limited by the VPC’s NAT instance.