Dissect strings

The dissect processor tokenizes incoming strings using defined patterns.
processors:
  - dissect:
      tokenizer: "%{key1} %{key2}"
      field: "message"
      target_prefix: "dissect"
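With this configuration, given an event whose message field is "hello world", the processor would add the tokenized keys under the dissect prefix. A sketch of the expected result (the input value is illustrative):

"dissect": {
  "key1": "hello",
  "key2": "world"
}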
The dissect processor has the following configuration settings:
- tokenizer: The field used to define the dissection pattern.
- field: (Optional) The event field to tokenize. Default is message.
- target_prefix: (Optional) The name of the field where the values will be extracted. When an empty string is defined, the processor will create the keys at the root of the event. Default is dissect. When the target key already exists in the event, the processor won't replace it and will log an error; you need to either drop or rename the key before using dissect.
For tokenization to be successful, all keys must be found and extracted. If one of them cannot be found, an error is logged and the original event is left unmodified.
A key can contain any characters except the reserved suffix or prefix modifiers: /, &, +, and ?.
See Conditions for a list of supported conditions.
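Like any Beats processor, dissect can be applied conditionally with a when clause. A minimal sketch, assuming you only want to tokenize messages that contain a space separator:

processors:
  - dissect:
      tokenizer: "%{key1} %{key2}"
      field: "message"
      # Only run dissect when the message contains a space.
      when:
        contains:
          message: " "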
Dissect example

For this example, imagine that an application generates the following messages:
"App01 - WebServer is starting" "App01 - WebServer is up and running" "App01 - WebServer is scaling 2 pods" "App02 - Database is will be restarted in 5 minutes" "App02 - Database is up and running" "App02 - Database is refreshing tables"
Use the dissect processor to split each message into two fields, for example, service.name and service.status:
processors:
  - dissect:
      tokenizer: '"%{service.name} - %{service.status}"'
      field: "message"
      target_prefix: ""
This configuration produces fields like:
"service": { "name": "App01", "status": "WebServer is up and running" },
service.name is an ECS keyword field, which means that you can use it in Elasticsearch for filtering, sorting, and aggregations.

When possible, use ECS-compatible field names. For more information, see the Elastic Common Schema documentation.
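If you don't need the raw message field once it has been dissected, one option (a sketch, not part of the example above) is to chain a drop_fields processor after dissect:

processors:
  - dissect:
      tokenizer: '"%{service.name} - %{service.status}"'
      field: "message"
      target_prefix: ""
  # Drop the original message once its contents have been extracted.
  - drop_fields:
      fields: ["message"]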