This plugin batches and uploads Logstash events into Amazon Simple Storage Service (Amazon S3).

Requirements:

  • An Amazon S3 bucket and S3 access permissions (typically access_key_id and secret_access_key)
  • The S3 PutObject permission
  • Run Logstash as superuser to establish a connection

Temporary files on the local drive are used to buffer messages until either the size_file or time_file criterion is met. The default temporary file location depends on the operating system: on Linux it is /tmp/logstash; on OS X it is under /var/folders/

S3 output files will have the following format:

ls.s3.ip-10-228-27-95.2013-04-18T10.00.tag_hello.part0.txt
  • ls.s3 : indicates the Logstash S3 plugin.
  • ip-10-228-27-95 : the IP address of the machine that produced the file.
  • 2013-04-18T10.00 : the timestamp of the file, based on the time_file setting.
  • tag_hello : the event’s tag.
  • part0 : the part number. If you set size_file, more parts are generated whenever the file size exceeds size_file. When a file is full it is pushed to the bucket and then deleted from the temporary directory. Empty files are simply deleted and never pushed.

Crash Recovery:


This plugin will recover and upload temporary log files after a crash or abnormal termination.

Additional Notes


Both the time_file and size_file settings can trigger a log "file rotation". A log rotation pushes the current log "part" to S3 and deletes it from local temporary storage. If you specify both size_file and time_file, a file is created for each tag (if specified), and a log rotation is triggered when either time_file minutes have elapsed or the log file size exceeds size_file.

If you specify only time_file but not size_file, one file for each tag (if specified) will be created. When time_file minutes elapse, a log rotation is triggered.

If you specify only size_file but not time_file, one file for each tag (if specified) will be created. When the size of a log file part exceeds size_file, a log rotation is triggered.

If neither size_file nor time_file is specified, only one file for each tag (if specified) will be created.

WARNING: Since no log rotation is triggered in this case, the files will only be uploaded to S3 when Logstash restarts.
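
As a sketch, a configuration that avoids this pitfall by enabling at least one rotation trigger (the bucket name is a placeholder):

output {
   s3 {
     bucket => "my_bucket"     # placeholder bucket name
     time_file => 5            # rotate and upload at least every 5 minutes
   }
}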

Example Usage:


This is an example of a Logstash config:

output {
   s3 {
     access_key_id => "aws_key"               # (optional)
     secret_access_key => "aws_access_key"    # (optional)
     region => "eu-west-1"                    # (optional)
     bucket => "my_bucket"                    # (required)
     size_file => 2048                        # (optional, in bytes)
     time_file => 5                           # (optional, in minutes)
   }
}

Synopsis


This plugin supports the following configuration options:

Required configuration options:

s3 {
}

Available configuration options:

Setting              | Input type                                                                            | Required | Default value
-------------------- | ------------------------------------------------------------------------------------- | -------- | -------------
access_key_id        | string                                                                                | No       |
aws_credentials_file | string                                                                                | No       |
bucket               | string                                                                                | No       |
canned_acl           | string, one of ["private", "public_read", "public_read_write", "authenticated_read"] | No       | "private"
codec                | codec                                                                                 | No       | "line"
prefix               | string                                                                                | No       | ""
proxy_uri            | string                                                                                | No       |
region               | string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-central-1", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1", "cn-north-1"] | No | "us-east-1"
restore              | boolean                                                                               | No       | false
secret_access_key    | string                                                                                | No       |
session_token        | string                                                                                | No       |
size_file            | number                                                                                | No       | 0
tags                 | array                                                                                 | No       | []
temporary_directory  | string                                                                                | No       | "/var/folders/_9/x4bq65rs6vd0rrjthct3zxjw0000gn/T/logstash"
time_file            | number                                                                                | No       | 0
upload_workers_count | number                                                                                | No       | 1
use_ssl              | boolean                                                                               | No       | true
workers              | number                                                                                | No       | 1

Details


access_key_id

  • Value type is string
  • There is no default value for this setting.

This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order:

  1. Static configuration, using the access_key_id and secret_access_key parameters in the Logstash plugin config
  2. External credentials file specified by aws_credentials_file
  3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
  5. IAM Instance Profile (available when running inside EC2)
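
For example, a minimal sketch of the first method, static configuration (all values are placeholders):

output {
   s3 {
     access_key_id => "my_key"           # placeholder credential
     secret_access_key => "my_secret"    # placeholder credential
     bucket => "my_bucket"
   }
}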

aws_credentials_file

  • Value type is string
  • There is no default value for this setting.

Path to YAML file containing a hash of AWS credentials. This file will only be loaded if access_key_id and secret_access_key aren’t set. The contents of the file should look like this:

 :access_key_id: "12345"
 :secret_access_key: "54321"
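
For example, a sketch that loads credentials from such a file (the path is hypothetical); leave access_key_id and secret_access_key unset so the file is actually read:

output {
   s3 {
     aws_credentials_file => "/etc/logstash/aws_credentials.yml"   # hypothetical path
     bucket => "my_bucket"
   }
}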

bucket

  • Value type is string
  • There is no default value for this setting.

The name of the S3 bucket.

canned_acl

  • Value can be any of: private, public_read, public_read_write, authenticated_read
  • Default value is "private"

The S3 canned ACL to use when putting the file. Defaults to "private".
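
For example, a sketch making uploaded objects publicly readable (the bucket name is a placeholder):

output {
   s3 {
     bucket => "my_bucket"
     canned_acl => "public_read"   # must be one of the four allowed values
   }
}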

codec

  • Value type is codec
  • Default value is "line"

The codec used for output data. Output codecs are a convenient method for encoding your data before it leaves the output, without needing a separate filter in your Logstash pipeline.

endpoint_region (DEPRECATED)

  • DEPRECATED WARNING: This configuration item is deprecated and may not be available in future versions.
  • Value can be any of: us-east-1, us-west-1, us-west-2, eu-west-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, sa-east-1, us-gov-west-1
  • There is no default value for this setting.

AWS endpoint_region

prefix

  • Value type is string
  • Default value is ""

Specify a prefix for the uploaded filename; this can be used to simulate directories on S3.
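
For example, a sketch that groups uploads under a pseudo-directory (the prefix value is illustrative):

output {
   s3 {
     bucket => "my_bucket"
     prefix => "logs/production/"   # objects will appear under logs/production/ in the bucket
   }
}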

proxy_uri

  • Value type is string
  • There is no default value for this setting.

URI of the proxy server, if required.
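
For example (the proxy address is a placeholder):

output {
   s3 {
     bucket => "my_bucket"
     proxy_uri => "http://proxy.example.com:8080"   # placeholder proxy address
   }
}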

region

  • Value can be any of: us-east-1, us-west-1, us-west-2, eu-central-1, eu-west-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, sa-east-1, us-gov-west-1, cn-north-1
  • Default value is "us-east-1"

restore

  • Value type is boolean
  • Default value is false

secret_access_key

  • Value type is string
  • There is no default value for this setting.

The AWS Secret Access Key

session_token

  • Value type is string
  • There is no default value for this setting.

The AWS Session token for temporary credentials

size_file

  • Value type is number
  • Default value is 0

Set the file size in bytes. Files whose size would exceed size_file are split into two or more parts. If you use tags, a separate file is generated for each tag.
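
For example, a sketch rotating parts at roughly 2 KB (the value is illustrative):

output {
   s3 {
     bucket => "my_bucket"
     size_file => 2048   # start a new part once the current one exceeds 2048 bytes
   }
}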

tags

  • Value type is array
  • Default value is []

Define tags to be appended to the file on the S3 bucket.

Example: tags => ["elasticsearch", "logstash", "kibana"]

Will generate this file: "ls.s3.logstash.local.2015-01-01T00.00.tag_elasticsearch.logstash.kibana.part0.txt"

temporary_directory

  • Value type is string
  • Default value is "/var/folders/_9/x4bq65rs6vd0rrjthct3zxjw0000gn/T/logstash"

Set the directory where Logstash will store the temporary files before sending them to S3. Defaults to the current OS temporary directory, e.g. /tmp/logstash on Linux.

time_file

  • Value type is number
  • Default value is 0

Set the time, in minutes, after which the current sub_time_section of the bucket is closed and rotated. If you also define size_file, several part files may be produced within each time section for the current tag. A value of 0 keeps the file open indefinitely. Beware: if both time_file and size_file are 0, files are never pushed to the bucket; the only upload this plugin will then perform happens when Logstash restarts.
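
For example, a sketch combining both triggers so that whichever fires first causes an upload (values are illustrative):

output {
   s3 {
     bucket => "my_bucket"
     time_file => 5        # rotate at least every 5 minutes...
     size_file => 10240    # ...or sooner, once the part exceeds 10 KB
   }
}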

upload_workers_count

  • Value type is number
  • Default value is 1

Specify how many workers to use to upload the files to S3

use_ssl

  • Value type is boolean
  • Default value is true

workers

  • Value type is number
  • Default value is 1

The number of workers to use for this output. Note that this setting may not be useful for all outputs.