Grok Debugger
You can build and debug grok patterns in the Grok Debugger before you use them in your data processing pipelines. Grok is a pattern-matching syntax that you can use to parse and structure arbitrary text. Grok is well suited to parsing syslog, Apache and other web server logs, MySQL logs, and in general any log format written for human consumption.
Grok patterns are supported in Elasticsearch runtime fields, the Elasticsearch grok ingest processor, and the Logstash grok filter. For syntax, see Grokking grok.
Elastic ships with more than 120 reusable grok patterns. For a complete list of patterns, see Elasticsearch grok patterns and Logstash grok patterns.
Because Elasticsearch and Logstash share the same grok implementation and pattern libraries, any grok pattern that you create in the Grok Debugger will work in both Elasticsearch and Logstash.
Get started
This example walks you through using the Grok Debugger.
Required roles
The Admin role is required to use the Grok Debugger. For more information, refer to Assign user roles and privileges.
- In the main menu, go to Developer Tools under Build, then click Grok Debugger.
- In Sample Data, enter a message that is representative of the data you want to parse. For example:

  55.3.244.1 GET /index.html 15824 0.043
- In Grok Pattern, enter the grok pattern that you want to apply to the data. To parse the log line in this example, use:

  %{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
- Click Simulate. You’ll see the simulated event that results from applying the grok pattern.
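Under the hood, each %{PATTERN:name} reference expands to a named regular-expression capture group. The following Python sketch illustrates roughly how the debugger splits the sample line into fields; the regexes here are simplified stand-ins, not the real grok pattern library definitions:

```python
import re

# Simplified equivalents of the grok pattern
# %{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
# (the real IP, URIPATHPARAM, and NUMBER patterns are more permissive).
GROK_EQUIV = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "  # %{IP:client} (IPv4 only here)
    r"(?P<method>\w+) "                      # %{WORD:method}
    r"(?P<request>\S+) "                     # %{URIPATHPARAM:request}
    r"(?P<bytes>\d+) "                       # %{NUMBER:bytes}
    r"(?P<duration>[\d.]+)"                  # %{NUMBER:duration}
)

event = GROK_EQUIV.match("55.3.244.1 GET /index.html 15824 0.043").groupdict()
print(event)
# {'client': '55.3.244.1', 'method': 'GET', 'request': '/index.html',
#  'bytes': '15824', 'duration': '0.043'}
```

The simulated event in the debugger contains the same named fields, which is why choosing descriptive names like client and duration in the pattern pays off downstream.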
Test custom patterns
If the default grok pattern dictionary doesn’t contain the patterns you need, you can define, test, and debug custom patterns using the Grok Debugger.
Custom patterns that you enter in the Grok Debugger are not saved; they are available only for the current debugging session and have no side effects.
Follow this example to define a custom pattern.
- In Sample Data, enter the following sample message:

  Jan 1 06:25:43 mailserver14 postfix/cleanup[21403]: BEF25A72965: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>
- In Grok Pattern, enter this grok pattern:

  %{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{MSG:syslog_message}

  Notice that the grok pattern references custom patterns called POSTFIX_QUEUEID and MSG.
- Expand Custom Patterns and enter pattern definitions for the custom patterns that you want to use in the grok expression. You must specify each pattern definition on its own line. For this example, specify pattern definitions for POSTFIX_QUEUEID and MSG:

  POSTFIX_QUEUEID [0-9A-F]{10,11}
  MSG message-id=<%{GREEDYDATA}>
- Click Simulate. You’ll see the simulated output event that results from applying the grok pattern that contains the custom patterns. If an error occurs, you can continue iterating over the custom patterns until the output matches the event that you expect.
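Custom pattern definitions work the same way as the built-in ones: each name maps to a regular-expression fragment that becomes a named capture group. This Python sketch mimics the two custom definitions above (the %{SYSLOGBASE} prefix is omitted for brevity, and the MSG stand-in is simplified):

```python
import re

# Simplified stand-ins for the custom definitions entered in the debugger:
#   POSTFIX_QUEUEID [0-9A-F]{10,11}
#   MSG message-id=<%{GREEDYDATA}>
line = ("Jan 1 06:25:43 mailserver14 postfix/cleanup[21403]: "
        "BEF25A72965: message-id=<20130101142543.5828399CCAF"
        "@mailserver14.example.com>")

pattern = re.compile(
    r"(?P<queue_id>[0-9A-F]{10,11}): "      # %{POSTFIX_QUEUEID:queue_id}
    r"(?P<syslog_message>message-id=<.*>)"  # %{MSG:syslog_message}
)

m = pattern.search(line)
print(m.group("queue_id"))  # BEF25A72965
```

Iterating on a custom pattern in the debugger is essentially iterating on this underlying regex until the named groups capture exactly the fields you expect.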