Repository settings
The Azure repository supports the following settings:
- client: Azure named client to use. Defaults to default.
- container: Container name. You must create the Azure container before creating the repository. Defaults to elasticsearch-snapshots.
- base_path: Specifies the path within the container to the repository data. Defaults to empty (root directory).
- chunk_size: Big files can be broken down into chunks during snapshotting if needed. Specify the chunk size as a value and unit, for example: 10MB, 5KB, 500B. Defaults to 64MB (64MB max).
- compress: When set to true, metadata files are stored in compressed format. This setting doesn't affect index files, which are already compressed by default. Defaults to false.
- max_restore_bytes_per_sec: Throttles the per node restore rate. Defaults to 40mb per second.
- max_snapshot_bytes_per_sec: Throttles the per node snapshot rate. Defaults to 40mb per second.
- readonly: Makes the repository read-only. Defaults to false.
- location_mode: primary_only or secondary_only. Defaults to primary_only. Note that if you set it to secondary_only, it will force readonly to true (see the last script example below).
Some examples, using scripts:
# The simplest one
PUT _snapshot/my_backup1
{
  "type": "azure"
}

# With some settings
PUT _snapshot/my_backup2
{
  "type": "azure",
  "settings": {
    "container": "backup-container",
    "base_path": "backups",
    "chunk_size": "32MB",
    "compress": true
  }
}

# With two accounts defined in elasticsearch.yml (my_account1 and my_account2)
PUT _snapshot/my_backup3
{
  "type": "azure",
  "settings": {
    "client": "secondary"
  }
}

PUT _snapshot/my_backup4
{
  "type": "azure",
  "settings": {
    "client": "secondary",
    "location_mode": "primary_only"
  }
}
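As a further illustration of the settings listed above, here are two additional sketches. The repository names my_backup5 and my_backup6 and the 20mb throttle values are illustrative and not part of the original examples; the settings themselves are the documented ones. The first request applies custom per node snapshot and restore throttling, and the second registers a repository against the secondary location, which forces readonly to true as noted in the settings list.
# Custom per node snapshot and restore throttling
PUT _snapshot/my_backup5
{
  "type": "azure",
  "settings": {
    "container": "backup-container",
    "max_snapshot_bytes_per_sec": "20mb",
    "max_restore_bytes_per_sec": "20mb"
  }
}

# Reading from the secondary location; this forces readonly to true
PUT _snapshot/my_backup6
{
  "type": "azure",
  "settings": {
    "container": "backup-container",
    "location_mode": "secondary_only"
  }
}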
Example using Java:
// Register an "azure" repository named "my_backup_java1" with a custom container and chunk size
client.admin().cluster().preparePutRepository("my_backup_java1")
    .setType("azure")
    .setSettings(Settings.builder()
        .put(Storage.CONTAINER, "backup-container")
        .put(Storage.CHUNK_SIZE, new ByteSizeValue(32, ByteSizeUnit.MB))
    ).get();