Getting started
The plugin uses the Google Cloud Java Client for Storage to connect to the Storage service. If you are using Google Cloud Storage for the first time, you must connect to the Google Cloud Platform Console and create a new project. After your project is created, you must enable the Cloud Storage Service for your project.
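If you prefer to script these steps, the gcloud CLI can be used instead of the console. This is only a sketch: the project id is a placeholder, and storage.googleapis.com is assumed to be the service identifier for Cloud Storage in your environment.

    # Create a new project (placeholder id) and enable the Cloud Storage API for it
    gcloud projects create your-project-id
    gcloud services enable storage.googleapis.com --project=your-project-id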
Creating a Bucket
The Google Cloud Storage service uses the concept of a bucket as a container for all the data. Buckets are usually created using the Google Cloud Platform Console. The plugin does not automatically create buckets.
To create a new bucket:
- Connect to the Google Cloud Platform Console.
- Select your project.
- Go to the Storage Browser.
- Click the Create Bucket button.
- Enter the name of the new bucket.
- Select a storage class.
- Select a location.
- Click the Create button.
For more detailed instructions, see the Google Cloud documentation.
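Alternatively, a bucket can be created from the command line with the gsutil tool. This is only a sketch; the bucket name, storage class, and location are placeholders to adapt to your project. Note that bucket names must be globally unique.

    # Create a bucket with the Standard storage class in the US multi-region
    gsutil mb -c standard -l US gs://my_bucket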
Service Authentication
The plugin must authenticate the requests it makes to the Google Cloud Storage service. It is common for Google client libraries to employ a strategy named application default credentials. However, that strategy is not supported for use with Elasticsearch. The plugin operates under the Elasticsearch process, which runs with the security manager enabled. The security manager obstructs the "automatic" credential discovery. Therefore, you must configure service account credentials even if you are using an environment that does not normally require this configuration (such as Compute Engine, Kubernetes Engine or App Engine).
Using a Service Account
You have to obtain and provide service account credentials manually.
For detailed information about generating JSON service account files, see the Google Cloud documentation. Note that the PKCS12 format is not supported by this plugin.
Here is a summary of the steps:
- Connect to the Google Cloud Platform Console.
- Select your project.
- Go to the Permissions tab.
- Select the Service Accounts tab.
- Click Create service account.
- After the account is created, select it and download a JSON key file.
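These steps can also be performed with the gcloud CLI. This is only a sketch; the account name, display name, project id, and key file path are placeholders. The generated key file is in JSON format by default.

    # Create the service account (placeholder names)
    gcloud iam service-accounts create service-account-for-your-repository --display-name="ES snapshots" --project=your-project-id
    # Download a JSON key for that account
    gcloud iam service-accounts keys create service-account.json --iam-account=service-account-for-your-repository@your-project-id.iam.gserviceaccount.com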
A JSON service account file looks like this:
{ "type": "service_account", "project_id": "your-project-id", "private_key_id": "...", "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", "client_id": "...", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://accounts.google.com/o/oauth2/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" }
To provide this file to the plugin, it must be stored in the Elasticsearch keystore. You must add a setting name of the form gcs.client.NAME.credentials_file, where NAME is the name of the client configuration for the repository. The implicit client name is default, but a different client name can be specified in the repository settings with the client key.
Passing the file path via the GOOGLE_APPLICATION_CREDENTIALS environment variable is not supported.
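For example, assuming the key file was downloaded to /path/to/service-account.json (a placeholder path), the credentials for the default client can be added with the elasticsearch-keystore tool. Repeat this on every node of the cluster.

    bin/elasticsearch-keystore add-file gcs.client.default.credentials_file /path/to/service-account.json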
For example, if you added a gcs.client.my_alternate_client.credentials_file setting in the keystore, you can configure a repository to use those credentials like this:
PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_bucket",
    "client": "my_alternate_client"
  }
}
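Once the repository is registered, you can check that Elasticsearch can reach the bucket with the snapshot verify API, using the repository name from the request above:

    POST _snapshot/my_gcs_repository/_verify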
The credentials_file settings are reloadable. After you reload the settings, the internal gcs clients, which are used to transfer the snapshot contents, utilize the latest settings from the keystore. Snapshot or restore jobs that are in progress are not preempted by a reload of the client’s credentials_file settings. They complete using the client as it was built when the operation started.
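As a sketch, after updating a gcs.client.*.credentials_file entry in the keystore on each node, the new credentials can be picked up without restarting the nodes by calling the reload secure settings API:

    POST _nodes/reload_secure_settings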