Configure authentication credentials
When sending data to a secured cluster through the elasticsearch output, Packetbeat must either provide basic authentication credentials or present a client certificate.
Before you begin: Grant users access to secured resources.
You specify authentication credentials in the Packetbeat configuration file:
- To use basic authentication, specify the username and password settings under output.elasticsearch. For example:

    output.elasticsearch:
      hosts: ["localhost:9200"]
      username: "packetbeat_writer"
      password: "YOUR_PASSWORD"
This user must have the privileges required to publish events to Elasticsearch.
The example shows a hard-coded password, but you should store sensitive values in the secrets keystore.
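For instance, after creating the keystore with packetbeat keystore create and adding a key with packetbeat keystore add ES_PWD (the key name ES_PWD is only an example), you can reference the stored value in the configuration like this:

    output.elasticsearch:
      hosts: ["localhost:9200"]
      username: "packetbeat_writer"
      # ES_PWD is resolved from the Packetbeat secrets keystore at startup
      password: "${ES_PWD}"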
If you’ve configured the Kibana endpoint, also specify credentials for authenticating with Kibana. For example:
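A sketch of those Kibana credentials, assuming the default Kibana port and placeholder user details:

    setup.kibana:
      host: "localhost:5601"
      username: "my_kibana_user"
      password: "YOUR_PASSWORD"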
- To use Public Key Infrastructure (PKI) certificates to authenticate users, configure the certificate and key settings. These settings assume that the distinguished name (DN) in the certificate is mapped to the appropriate roles in the role_mapping.yml file on each node in the Elasticsearch cluster. For more information, see Using role mapping files. For example:

    output.elasticsearch:
      hosts: ["localhost:9200"]
      ssl.certificate: "/etc/pki/client/cert.pem"
      ssl.key: "/etc/pki/client/cert.key"
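As a rough sketch, a role_mapping.yml entry that maps a client certificate DN to a role might look like the following (both the role name and the DN are placeholders):

    packetbeat_writer:
      - "cn=packetbeat,ou=clients,dc=example,dc=com"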
To learn more about Elastic Stack security features and other types of authentication, see Secure a cluster.