Journalbeat quick start: installation and configuration
This guide describes how to get started quickly with log data collection from systemd journals. You’ll learn how to:
- install Journalbeat on each system you want to monitor
- specify the location of your log files
- parse log data into fields and send it to Elasticsearch
- visualize the log data in Kibana
Before you begin
You need Elasticsearch for storing and searching your data, and Kibana for visualizing and managing it.
To get started quickly, spin up a deployment of our hosted Elasticsearch Service. The Elasticsearch Service is available on AWS, GCP, and Azure. Try it out for free.
Step 1: Install Journalbeat
Install Journalbeat on all the servers you want to monitor.
To download and install Journalbeat, use the commands that work with your system:
DEB:

curl -L -O https://artifacts.elastic.co/downloads/beats/journalbeat/journalbeat-7.9.3-amd64.deb
sudo dpkg -i journalbeat-7.9.3-amd64.deb

RPM:

curl -L -O https://artifacts.elastic.co/downloads/beats/journalbeat/journalbeat-7.9.3-x86_64.rpm
sudo rpm -vi journalbeat-7.9.3-x86_64.rpm

Linux (tar.gz):

curl -L -O https://artifacts.elastic.co/downloads/beats/journalbeat/journalbeat-7.9.3-linux-x86_64.tar.gz
tar xzvf journalbeat-7.9.3-linux-x86_64.tar.gz
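To verify the installation, you can print the version. This is a minimal sanity check that assumes the deb or rpm package put journalbeat on your PATH (for a tar.gz install, run ./journalbeat version from the extracted directory instead):

journalbeat version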
Other installation options
Step 2: Connect to the Elastic Stack
Connections to Elasticsearch and Kibana are required to set up Journalbeat.
Set the connection information in journalbeat.yml. To locate this configuration file, see Directory layout.
Specify the cloud.id of your Elasticsearch Service, and set cloud.auth to a user who is authorized to set up Journalbeat. For example:
cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
cloud.auth: "journalbeat_setup:YOUR_PASSWORD"
This example shows a hard-coded password, but you should store sensitive values in the secrets keystore (a short keystore sketch follows this list).
Alternatively, if you are sending data to a self-managed Elasticsearch cluster, set the host and port where Journalbeat can find the Elasticsearch installation, and set the username and password of a user who is authorized to set up Journalbeat. For example:

output.elasticsearch:
  hosts: ["myEShost:9200"]
  username: "journalbeat_internal"
  password: "YOUR_PASSWORD"

This example shows a hard-coded password, but you should store sensitive values in the secrets keystore.
If you plan to use our pre-built Kibana dashboards, configure the Kibana endpoint. Skip this step if Kibana is running on the same host as Elasticsearch.

Set the Kibana host to the hostname and port of the machine where Kibana is running, for example, mykibanahost:5601. If you specify a path after the port number, include the scheme and port: http://mykibanahost:5601/path.

The username and password settings for Kibana are optional. If you don’t specify credentials for Kibana, Journalbeat uses the username and password specified for the Elasticsearch output. To use the pre-built Kibana dashboards, this user must be authorized to view dashboards or have the kibana_admin built-in role. To learn more about required roles and privileges, see Grant users access to secured resources.
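As a sketch of these Kibana settings in journalbeat.yml (the host name and credentials below are placeholders, not values from your deployment):

setup.kibana:
  host: "mykibanahost:5601"
  username: "my_kibana_user"
  password: "YOUR_PASSWORD"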
You can send data to other outputs, such as Logstash, but that requires additional configuration and setup.
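A minimal sketch of the keystore approach mentioned above (the key name ES_PWD is only an example): create the keystore, add a secret, and reference it from journalbeat.yml.

journalbeat keystore create
journalbeat keystore add ES_PWD

output.elasticsearch:
  password: "${ES_PWD}"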
Step 3: Configure Journalbeat
Before running Journalbeat, specify the location of the systemd journal files and configure how you want the files to be read. If you accept the default configuration, Journalbeat reads from the local journal.
- In journalbeat.yml, specify a list of paths to your systemd journal files. Each path can be a directory path (to collect events from all journals in a directory), or a file path. For example:

  journalbeat.inputs:
    - paths:
        - "/dev/log"
        - "/var/log/messages/my-journal-file.journal"
      seek: head
If no paths are specified, Journalbeat reads from the default journal.
- Set the seek option to control the position where Journalbeat starts reading the journal. The available options are head, tail, and cursor. The default is cursor, which means that on first read, Journalbeat starts reading at the beginning of the file, but continues reading at the last known position after a reload or restart. For more detail about the settings, see the reference docs for the seek option.
- (Optional) Set the include_matches option to filter entries in journald before collecting any log events. This reduces the number of events that Journalbeat needs to process. For example, to fetch only Redis events from a Docker container tagged as redis, use the configuration shown below (a combined sketch of all three options follows this list):

  journalbeat.inputs:
    - paths: []
      include_matches:
        - "CONTAINER_TAG=redis"
        - "_COMM=redis"
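As a combined sketch of the options above, assuming you want to tail the default journal directory /var/log/journal and keep only entries from a hypothetical sshd.service unit (_SYSTEMD_UNIT is a standard journald field, like _COMM above):

journalbeat.inputs:
  - paths: ["/var/log/journal"]
    seek: tail
    include_matches:
      - "_SYSTEMD_UNIT=sshd.service"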
To test your configuration file, change to the directory where the Journalbeat binary is installed, and run Journalbeat in the foreground with the following options specified: ./journalbeat test config -e. Make sure your config files are in the path expected by Journalbeat (see Directory layout), or use the -c flag to specify the path to the config file.
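For example, assuming a deb or rpm install that keeps the configuration in /etc/journalbeat/journalbeat.yml (adjust the path to your layout), you can validate the configuration and then check the connection to the configured output:

journalbeat test config -c /etc/journalbeat/journalbeat.yml -e
journalbeat test output -c /etc/journalbeat/journalbeat.yml -e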
For more information about configuring Journalbeat, also see:
- Configure Journalbeat
- Config file format
- journalbeat.reference.yml: This reference configuration file shows all non-deprecated options. You’ll find it in the same location as journalbeat.yml.
Step 4: Set up assets
Journalbeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:
- Make sure the user specified in journalbeat.yml is authorized to set up Journalbeat.
- From the installation directory, run:

  journalbeat setup -e

  For a tar.gz install, run the command from the extracted directory instead:

  ./journalbeat setup -e

  -e is optional and sends output to standard error instead of the configured log output.
This step loads the recommended index template for writing to Elasticsearch.
A connection to Elasticsearch (or Elasticsearch Service) is required to set up the initial environment. If you’re using a different output, such as Logstash, see Load the index template manually.
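If you only need the index template (there are currently no Journalbeat dashboards to load, as noted in Step 6), a narrower run is possible; this sketch uses the standard Beats setup flag for index management:

journalbeat setup --index-management -e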
Step 5: Start Journalbeat
Before starting Journalbeat, modify the user credentials in journalbeat.yml and specify a user who is authorized to publish events.
To start Journalbeat, run:
sudo service journalbeat start

If you use an init.d script to start Journalbeat, you can’t specify command line flags (see Command reference). To specify flags, start Journalbeat in the foreground.

Also see Journalbeat and systemd.
You’ll be running Journalbeat as root, so you need to change ownership of the configuration file, or run Journalbeat with --strict.perms=false specified.
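On systemd hosts, a minimal sketch for starting the service at boot as well (this assumes the deb or rpm package installed the bundled journalbeat systemd unit):

sudo systemctl enable journalbeat
sudo systemctl start journalbeat

For a tar.gz install, you can instead run the binary in the foreground from the extracted directory, which is where the root-ownership note above applies:

sudo chown root journalbeat.yml
sudo ./journalbeat -e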
Journalbeat is now ready to send journal events to Elasticsearch.
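To confirm that events are arriving, you can count documents in the Journalbeat indices. This is a sketch that assumes Elasticsearch is reachable on localhost:9200 over plain HTTP; adjust the URL and credentials to match your deployment:

curl -u journalbeat_internal:YOUR_PASSWORD "http://localhost:9200/journalbeat-*/_count?pretty"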
Step 6: View your data in Kibana
There is currently no dashboard available for Journalbeat. To start exploring your data, go to the Discover app in Kibana. From there, you can submit search queries, filter the search results, and view document data.
To learn how to build visualizations and dashboards to view your data, see the Kibana User Guide.
What’s next?
Now that you have your logs streaming into Elasticsearch, learn how to unify your logs, metrics, uptime, and application performance data.
- Ingest data from other sources by installing and configuring other Elastic Beats:
  - Metricbeat: Infrastructure metrics
  - Filebeat: Logs
  - Winlogbeat: Windows event logs
  - Heartbeat: Uptime information
  - Elastic APM: Application performance metrics
  - Auditbeat: Audit events
- Use the Observability apps in Kibana to search across all your data:
  - Metrics app: Explore metrics about systems and services across your ecosystem
  - Logs app: Tail related log data in real time
  - Uptime app: Monitor availability issues across your apps and services
  - APM app: Monitor application performance
  - Elastic Security app: Analyze security events
The Logs app shows logs from filebeat-* indices by default. To show Journalbeat indices, configure the source to include journalbeat-*. You can do this in the Logs app when you configure the source, or you can modify the Kibana configuration (kibana.yml). For example:

xpack.infra:
  sources:
    default:
      logAlias: "filebeat-*,journalbeat-*"