Logstash Netflow Module

The Logstash Netflow module simplifies the collection, normalization, and visualization of network flow data. With a single command, the module parses network flow data, indexes the events into Elasticsearch, and installs a suite of Kibana dashboards to get you exploring your data immediately.

Logstash modules support Netflow versions 5 and 9.
What is Flow Data?

Netflow is a type of data record streamed from capable network devices. It contains information about connections traversing the device, including source IP addresses and ports, destination IP addresses and ports, types of service, VLANs, and other information that can be encoded into frame and protocol headers. With Netflow data, network operators can go beyond simply monitoring the volume of data crossing their networks: they can understand where the traffic originated, where it is going, and what services or applications it is part of.
Requirements

These instructions assume that you have already installed the Elastic Stack (Logstash, Elasticsearch, and Kibana) version 5.6 or higher. The products you need are available to download and easy to install.
Getting Started

1. Start the Logstash Netflow module by running the following command in the Logstash installation directory (a concrete example invocation follows these steps):

bin/logstash --modules netflow --setup -M netflow.var.input.udp.port=NNNN

Here NNNN is the UDP port on which Logstash will listen for network traffic data. If you don't specify a port, Logstash listens on port 2055 by default.

The --modules netflow option spins up a Netflow-aware Logstash pipeline for ingestion. The --setup option creates a netflow-* index pattern in Elasticsearch and imports Kibana dashboards and visualizations. Running --setup is a one-time setup step; omit this option on subsequent runs of the module to avoid overwriting your existing Kibana dashboards.

The command shown here assumes that you're running Elasticsearch and Kibana on your localhost. If you're not, you need to specify additional connection options. See Configuring the Module.
2. Explore your data in Kibana:
- Open your browser and navigate to http://localhost:5601. If security is enabled, you’ll need to specify the Kibana username and password that you used when you set up security.
- Open the Netflow: Network Overview dashboard.
- See Exploring Your Data for additional details on data exploration.
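For example, a first run that listens on port 9996 and connects to Elasticsearch and Kibana on remote hosts might look like the following sketch, where es.example.com and kibana.example.com are placeholder hostnames:

bin/logstash --modules netflow --setup -M netflow.var.input.udp.port=9996 -M netflow.var.elasticsearch.hosts=es.example.com:9200 -M netflow.var.kibana.host=kibana.example.com:5601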
Exploring Your Data

Once the Logstash Netflow module starts processing events, you can immediately begin using the packaged Kibana dashboards to explore and visualize your network flow data.
You can use the dashboards as-is, or tailor them to work better with existing use cases and business requirements.
Example Dashboards

On the Overview dashboard, you can see a summary of basic traffic data and set up filters before you drill down to gain deeper insight into the data.

For example, on the Conversation Partners dashboard, you can see the source and destination addresses of the client and server in any conversation.

On the Traffic Analysis dashboard, you can identify high volume conversations by viewing the traffic volume in bytes.

Then you can go to the Geo Location dashboard, where you can visualize the location of destinations and sources on a heat map.

Configuring the Module

You can further refine the behavior of the Logstash Netflow module by specifying settings in the logstash.yml settings file, or by overriding settings at the command line.

For example, the following configuration in the logstash.yml file sets Logstash to listen on port 9996 for network traffic data:

modules:
  - name: netflow
    var.input.udp.port: 9996
To specify the same setting at the command line, use:
bin/logstash --modules netflow -M netflow.var.input.udp.port=9996
For more information about configuring modules, see Working with Logstash Modules.
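The settings-file and command-line approaches can be combined. As a sketch, with the logstash.yml fragment above in place, a one-off command-line override (command-line settings take precedence over logstash.yml) could switch the module back to the default port for a single run:

bin/logstash --modules netflow -M netflow.var.input.udp.port=2055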
Configuration Options

The Netflow module provides the following settings for configuring its behavior. These include Netflow-specific options plus common options that are supported by all Logstash modules.

When you override a setting at the command line, remember to prefix it with the module name, for example netflow.var.input.udp.port instead of var.input.udp.port.
If you don’t specify configuration settings, Logstash uses the defaults.
Netflow Options

var.input.udp.port:
- Value type is number
- Default value is 2055

Sets the UDP port on which Logstash listens for network traffic data. Although 2055 is the default for this setting, some devices use ports in the range of 9995 through 9998, with 9996 being the most commonly used alternative.
Common options

The following configuration options are supported by all modules:

var.elasticsearch.hosts:
- Value type is uri
- Default value is "localhost:9200"

Sets the host(s) of the Elasticsearch cluster. For each host, specify the hostname and port, for example "myhost:9200". If given an array, Logstash load balances requests across the hosts. Exclude dedicated master nodes from this list to prevent Logstash from sending bulk requests to them; this parameter should reference only data or client nodes in Elasticsearch.

Any special characters present in the URLs must be URL-escaped; for example, # should be written as %23.
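For example, a hypothetical logstash.yml fragment that load balances across two data nodes might look like this (the hostnames are placeholders):

modules:
  - name: netflow
    var.elasticsearch.hosts: ["es-data-1:9200", "es-data-2:9200"]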
var.elasticsearch.username:
- Value type is string
- Default value is "elastic"

The username used to authenticate to a secured Elasticsearch cluster.

var.elasticsearch.password:
- Value type is string
- Default value is "changeme"

The password used to authenticate to a secured Elasticsearch cluster.
var.elasticsearch.ssl.enabled:
- Value type is boolean
- There is no default value for this setting

Enables SSL/TLS-secured communication with the Elasticsearch cluster. Leaving this unspecified uses whatever scheme is given in the URLs listed in hosts. If no explicit protocol is specified, plain HTTP is used. If SSL is explicitly disabled here, the plugin refuses to start when an HTTPS URL is given in hosts.

var.elasticsearch.ssl.verification_mode:
- Value type is string
- Default value is "strict"

The hostname verification setting used when communicating with Elasticsearch. Set to disable to turn off hostname verification; note that disabling it has serious security implications.

var.elasticsearch.ssl.certificate_authority:
- Value type is string
- There is no default value for this setting

The path to an X.509 certificate used to validate SSL certificates when communicating with Elasticsearch.

var.elasticsearch.ssl.certificate:
- Value type is string
- There is no default value for this setting

The path to an X.509 certificate used for client authentication when communicating with Elasticsearch.

var.elasticsearch.ssl.key:
- Value type is string
- There is no default value for this setting

The path to the certificate key used for client authentication when communicating with Elasticsearch.
var.kibana.host:
- Value type is string
- Default value is "localhost:5601"

Sets the hostname and port of the Kibana instance used for importing dashboards and visualizations, for example "myhost:5601".

var.kibana.username:
- Value type is string
- Default value is "elastic"

The username used to authenticate to a secured Kibana instance.

var.kibana.password:
- Value type is string
- Default value is "changeme"

The password used to authenticate to a secured Kibana instance.

var.kibana.ssl.enabled:
- Value type is boolean
- Default value is false

Enables SSL/TLS-secured communication with the Kibana instance.

var.kibana.ssl.verification_mode:
- Value type is string
- Default value is "strict"

The hostname verification setting used when communicating with Kibana. Set to disable to turn off hostname verification; note that disabling it has serious security implications.

var.kibana.ssl.certificate_authority:
- Value type is string
- There is no default value for this setting

The path to an X.509 certificate used to validate SSL certificates when communicating with Kibana.

var.kibana.ssl.certificate:
- Value type is string
- There is no default value for this setting

The path to an X.509 certificate used for client authentication when communicating with Kibana.

var.kibana.ssl.key:
- Value type is string
- There is no default value for this setting

The path to the certificate key used for client authentication when communicating with Kibana.
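To pull the common options together, here is a hypothetical logstash.yml fragment that secures connections to remote Elasticsearch and Kibana instances; the hostnames, credentials, and certificate paths are placeholders:

modules:
  - name: netflow
    var.input.udp.port: 9996
    var.elasticsearch.hosts: "es.example.com:9200"
    var.elasticsearch.username: elastic
    var.elasticsearch.password: changeme
    var.elasticsearch.ssl.enabled: true
    var.elasticsearch.ssl.certificate_authority: /path/to/ca.crt
    var.kibana.host: "kibana.example.com:5601"
    var.kibana.username: elastic
    var.kibana.password: changeme
    var.kibana.ssl.enabled: true
    var.kibana.ssl.certificate_authority: /path/to/ca.crt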