Ingest data from Beats to Elasticsearch Service with Logstash as a proxy
This guide explains how to ingest data from Filebeat and Metricbeat to Logstash as an intermediary, and then send that data to Elasticsearch Service. Using Logstash as a proxy limits your Elastic Stack traffic through a single, external-facing firewall exception or rule. Consider the following features of this type of setup:
- You can send multiple instances of Beats data through your local network's demilitarized zone (DMZ) to Logstash. Logstash then acts as a proxy through your firewall to send the Beats data to Elasticsearch Service.
- This proxying reduces the firewall exceptions or rules necessary for Beats to communicate with Elasticsearch Service. It's common to have many Beats dispersed across a network, each installed close to the data that it monitors and each individually communicating with an Elasticsearch Service deployment. Rather than configure each Beat to send its data directly to Elasticsearch Service, you can use Logstash to proxy this traffic through one firewall exception or rule.
- This setup is not suitable for simple scenarios where only one or two Beats are in use. Logstash makes the most sense for proxying when there are many Beats.
The configuration in this example makes use of the System module, available for both Filebeat and Metricbeat. Filebeat's System module sends server system log details (that is, login successes and failures, sudo command usage, and other key usage details). Metricbeat's System module sends memory, CPU, disk, and other server usage metrics.
The following sections show you how to set up Metricbeat and Filebeat, configure a Logstash pipeline to receive their data, verify the data on standard output, and then send it to your Elasticsearch Service deployment for viewing in Kibana.
Time required: 1 hour
Get Elasticsearch Service
- Get a free trial.
- Log into Elastic Cloud.
- Select Create deployment.
- Give your deployment a name. You can leave all other settings at their default values.
- Select Create deployment and save your Elastic deployment credentials. You need these credentials later on.
- When the deployment is ready, click Continue and a page of Setup guides is displayed. To continue to the deployment homepage, click I'd like to do something else.
Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through AWS, Azure, and GCP marketplaces.
Connect securely
When connecting to Elasticsearch Service you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the Kibana main menu and selecting Management > Integrations, and then selecting View deployment details.
To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported: API key and basic authentication. To get you started quickly, this guide uses basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments.
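For reference, here's a sketch of how the two mechanisms look in the Logstash elasticsearch output that you build later in this guide (the placeholder values are yours to fill in):

```
output {
  elasticsearch {
    cloud_id   => "<DeploymentName>:<ID>"
    cloud_auth => "<username>:<password>"   # basic authentication
    # api_key  => "<id>:<api_key>"          # or an API key (preferred for production)
  }
}
```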
Set up Logstash
Download and unpack Logstash on the local machine that hosts Beats, or on another machine granted access to the Beats machines.
Set up Metricbeat
Now that Logstash is downloaded and your Elasticsearch Service deployment is set up, you can configure Metricbeat to send operational data to Logstash.
Install Metricbeat as close as possible to the service that you want to monitor. For example, if you have four servers with MySQL running, we recommend that you run Metricbeat on each server. This allows Metricbeat to access your service from localhost. This setup does not cause any additional network traffic and enables Metricbeat to collect metrics even in the event of network problems. Metrics from multiple Metricbeat instances are combined on the Logstash server.
If you have multiple servers with metrics data, repeat the following steps to configure Metricbeat on each server.
Download Metricbeat
Download Metricbeat and unpack it on the local server from which you want to collect data.
About Metricbeat modules
Metricbeat has many modules available that collect common metrics. You can configure additional modules as needed. For this example we’re using Metricbeat’s default configuration, which has the System module enabled. The System module allows you to monitor servers with the default set of metrics: cpu, load, memory, network, process, process_summary, socket_summary, filesystem, fsstat, and uptime.
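For reference, here is a sketch of the default System module configuration in modules.d/system.yml, abridged (the exact metricsets and collection periods vary by Metricbeat version):

```yaml
# modules.d/system.yml (abridged sketch; defaults vary by version)
- module: system
  period: 10s
  metricsets:
    - cpu
    - load
    - memory
    - network
    - process
    - process_summary
    - socket_summary

- module: system
  period: 1m
  metricsets:
    - filesystem
    - fsstat

- module: system
  period: 15m
  metricsets:
    - uptime
```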
Load the Metricbeat Kibana dashboards
Metricbeat comes packaged with example dashboards, visualizations, and searches for visualizing Metricbeat data in Kibana. Before you can use the dashboards, you need to create the data view (formerly index pattern) metricbeat-*, and load the dashboards into Kibana. This needs to be done from a local Beats machine that has access to the Elasticsearch Service deployment.
Beginning with Elastic Stack version 8.0, Kibana index patterns have been renamed to data views. To learn more, check the Kibana What’s new in 8.0 page.
- Open a command line instance and then go to <localpath>/metricbeat-<version>/
- Run the following command:
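```sh
# Run from the Metricbeat directory, substituting your own Cloud ID and credentials
./metricbeat setup \
  -E cloud.id=<DeploymentName>:<ID> \
  -E cloud.auth=<username>:<password>
```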
Specify the Cloud ID of your Elasticsearch Service deployment. You can include or omit the <DeploymentName>: prefix at the beginning of the Cloud ID; both versions work.

Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between <username> and <password>.

Depending on variables including the installation location, environment, and local permissions, you might need to change the ownership of metricbeat.yml. You might encounter similar permissions hurdles as you work through multiple sections of this document. These permission requirements exist for a good reason: they are a security safeguard to prevent unauthorized access and modification of key Elastic files. If this isn't a production environment and you want a fast pass with fewer permissions hassles, you can disable strict permission checks from the command line by using --strict.perms=false. Depending on your system, you may also find that some commands need to be run as root, by prefixing them with sudo.
Your results should be similar to the following:
```
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
```
Configure Metricbeat to send data to Logstash
- In <localpath>/metricbeat-<version>/ (where <localpath> is the directory where Metricbeat is installed), open the metricbeat.yml configuration file for editing.
- Scroll down to the Elasticsearch Output section. Place a comment pound sign (#) in front of output.elasticsearch and the hosts line beneath it.
- Scroll down to the Logstash Output section. Remove the comment pound sign (#) from in front of output.logstash and hosts, as follows:
```
# ---------------- Logstash Output -----------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
```
In this example, Logstash and the Beats run on the same machine, so localhost works. If Logstash is installed on a different machine, replace localhost with the address of the machine where Logstash is running.
Set up Filebeat
The next step is to configure Filebeat to send operational data to Logstash. As with Metricbeat, install Filebeat as close as possible to the service that you want to monitor.
Download Filebeat
Download Filebeat and unpack it on the local server from which you want to collect data.
Enable the Filebeat system module
Filebeat has many modules available that collect common log types. You can configure additional modules as needed. For this example we’re using Filebeat’s System module. This module reads in the various system log files (with information including login successes or failures, sudo command usage, and other key usage details) based on the detected operating system. For this example, a Linux-based OS is used and Filebeat ingests logs from the /var/log/ folder. It’s important to verify that Filebeat is given permission to access your logs folder through standard file and folder permissions.
- Go to <localpath>/filebeat-<version>/modules.d/ where <localpath> is the directory where Filebeat is installed.
- Filebeat requires at least one fileset to be enabled. In the file <localpath>/filebeat-<version>/modules.d/system.yml.disabled, under both syslog and auth, set enabled to true:
```yaml
- module: system
  # Syslog
  syslog:
    enabled: true
  # Authorization logs
  auth:
    enabled: true
```
From the <localpath>/filebeat-<version> directory, run the filebeat modules command as shown:
./filebeat modules enable system
The system module is now enabled in Filebeat and it will be used the next time Filebeat starts.
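To confirm the change, you can list the modules; enabled modules appear at the top of the output:

```sh
./filebeat modules list
```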
Load the Filebeat Kibana dashboards
Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the data view filebeat-*, and load the dashboards into Kibana. This needs to be done from a Beats machine that has access to the Internet.
- Open a command line instance and then go to <localpath>/filebeat-<version>/
- Run the following command:
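```sh
# Run from the Filebeat directory, substituting your own Cloud ID and credentials
./filebeat setup \
  -E cloud.id=<DeploymentName>:<ID> \
  -E cloud.auth=<username>:<password>
```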
Specify the Cloud ID of your Elasticsearch Service deployment. You can include or omit the <DeploymentName>: prefix at the beginning of the Cloud ID; both versions work.

Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between <username> and <password>. Depending on variables including the installation location, environment, and local permissions, you might need to change the ownership of filebeat.yml.
Your results should be similar to the following:
```
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
Loaded machine learning job configurations
Loaded Ingest pipelines
```
- Exit the CLI.
The data views for filebeat-* and metricbeat-* are now available in Elasticsearch. To verify:
- Log in to Kibana.
- Open the Kibana main menu and select Management, then go to Kibana > Data views. Alternatively, search for data views in the Kibana search bar and choose Kibana / Data Views Management in the results.
Finish configuring Filebeat
- In <localpath>/filebeat-<version>/ (where <localpath> is the directory where Filebeat is installed), open the filebeat.yml configuration file for editing.
- Scroll down to the Outputs section. Place a comment pound sign (#) in front of output.elasticsearch and the hosts line beneath it.
- Scroll down to the Logstash Output section. Remove the comment pound sign (#) from in front of output.logstash and hosts as follows:
```
# ---------------- Logstash Output -----------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
```
In this example, Logstash and the Beats run on the same machine, so localhost works. If Logstash is installed on a different machine, replace localhost with the address of the machine where Logstash is running.
Configure Logstash to listen for Beats
Now that Filebeat and Metricbeat are set up, let's configure a Logstash pipeline to input data from Beats and send the results to the standard output. This enables you to verify the data output before sending it for indexing in Elasticsearch.
- In <localpath>/logstash-<version>/, create a new text file named beats.conf.
- Copy and paste the following code into the new text file. This code creates a Logstash pipeline that listens for connections from Beats on port 5044 and writes to standard output (typically your terminal) with formatting provided by the Logstash rubydebug output plugin:
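```
input {
  beats {
    port => 5044          # listen for Beats connections on the default port
  }
}

output {
  stdout {
    codec => rubydebug    # pretty-print events for easy verification
  }
}
```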
Logstash listens for Beats input on the default port of 5044. Only one line is needed to do this. Logstash can handle input from many Beats of the same and also of varying types (Metricbeat, Filebeat, and others).
This sends output to the standard output, which displays through your command line interface. This plugin enables you to verify the data before you send it to Elasticsearch, in a later step.
- Save the new beats.conf file in your Logstash folder. To learn more about the file format and options, check Logstash Configuration Examples.
Output Logstash data to stdout
Now, let's try out the Logstash pipeline with the Metricbeat and Filebeat configurations from the prior steps. Each Beat sends data into the Logstash pipeline, and the results display on the standard output where you can verify that everything looks correct.
Test Metricbeat to stdout
- Open a command line interface instance. Go to <localpath>/logstash-<version>/, where <localpath> is the directory where Logstash is installed, and start Logstash by running the following command:
bin/logstash -f beats.conf
- Open a second command line interface instance. Go to <localpath>/metricbeat-<version>/, where <localpath> is the directory where Metricbeat is installed, and start Metricbeat by running the following command:
./metricbeat -c metricbeat.yml
- Switch back to your first command line interface instance with Logstash. Now, Metricbeat events are input into Logstash and the output data is directed to the standard output. Your results should be similar to the following:
"tags" => [ [0] "beats_input_raw_event" ], "agent" => { "type" => "metricbeat", "name" => "john-VirtualBox", "version" => "8.13.1", "ephemeral_id" => "1e69064c-d49f-4ec0-8414-9ab79b6f27a4", "id" => "1b6c39e8-025f-4310-bcf1-818930a411d4", "hostname" => "john-VirtualBox" }, "service" => { "type" => "system" }, "event" => { "duration" => 39833, "module" => "system", "dataset" => "system.cpu" }, "@timestamp" => 2021-04-21T17:06:05.231Z, "metricset" => { "name" => "cpu", "period" => 10000 }, "@version" => "1","host" => { "id" => "939972095cf1459c8b22cc608eff85da", "ip" => [ [0] "10.0.2.15", [1] "fe80::3700:763c:4ba3:e48c" ], "name" => "john-VirtualBox","mac" => [ [0] "08:00:27:a3:c7:a9" ], "os" => { "type" => "linux",
- Switch back to the Metricbeat command line instance. Enter CTRL + C to shut down Metricbeat, and then exit the CLI.
- Switch back to the Logstash command line instance. Enter CTRL + C to shut down Logstash, and then exit the CLI.
Test Filebeat to stdout
- Open a command line interface instance. Go to <localpath>/logstash-<version>/, where <localpath> is the directory where Logstash is installed, and start Logstash by running the following command:
bin/logstash -f beats.conf
- Open a second command line interface instance. Go to <localpath>/filebeat-<version>/, where <localpath> is the directory where Filebeat is installed, and start Filebeat by running the following command:
./filebeat -c filebeat.yml
- Switch back to your first command line interface instance with Logstash. Now, Filebeat events are input into Logstash and the output data is directed to the standard output. Your results should be similar to the following:
{ "service" => { "type" => "system" }, "event" => { "timezone" => "-04:00", "dataset" => "system.syslog", "module" => "system" }, "fileset" => { "name" => "syslog" }, "agent" => { "id" => "113dc127-21fa-4ebb-ab86-8a151d6a23a6", "type" => "filebeat", "version" => "8.13.1", "hostname" => "john-VirtualBox", "ephemeral_id" => "1058ad74-8494-4a5e-9f48-ad7c5b9da915", "name" => "john-VirtualBox" }, "@timestamp" => 2021-04-28T15:33:41.727Z, "input" => { "type" => "log" }, "ecs" => { "version" => "1.8.0" }, "@version" => "1", "log" => { "offset" => 73281, "file" => { "path" => "/var/log/syslog" } },
- Review the Logstash output results to make sure your data looks correct. Enter CTRL + C to shut down Logstash.
- Switch back to the Filebeat CLI. Enter CTRL + C to shut down Filebeat.
Output Logstash data to Elasticsearch
In this section, you configure Logstash to send the Metricbeat and Filebeat data to Elasticsearch. You modify the beats.conf file created earlier and specify the output credentials needed for your Elasticsearch Service deployment. Then, you start Logstash to send the Beats data into Elasticsearch.
- In your <localpath>/logstash-<version>/ folder, open beats.conf for editing.
- Replace the output {} section of the file with the following code:
```
output {
  elasticsearch {
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    ilm_enabled => true
    cloud_id => "<DeploymentName>:<ID>"
    cloud_auth => "elastic:<Password>"
    ssl => true
    # api_key => "<myAPIid:myAPIkey>"
  }
}
```
Use the Cloud ID of your Elasticsearch Service deployment. You can include or omit the <DeploymentName>: prefix at the beginning of the Cloud ID; both versions work. Find your Cloud ID by going to the Kibana main menu and selecting Management > Integrations, and then selecting View deployment details.

The default username is elastic. It is not recommended to use the elastic account for ingesting data, as this is a superuser. We recommend using a user with reduced permissions, or an API key with permissions specific to the indices or data streams that will be written to. Check the Grant access to secured resources documentation for information on the writer role and API keys. Use the password provided when you created the deployment if using the elastic user, or the password used when creating a new ingest user with the roles specified in that documentation.

Following are some additional details about the configuration file settings:
- index: The name of the Elasticsearch index with which to associate the Beats output. %{[@metadata][beat]} sets the first part of the index name to the value of the Beat metadata field, and %{[@metadata][version]} sets the second part to the Beat version. For example, if you use Metricbeat version 8.13.1, the index created in Elasticsearch is named metricbeat-8.13.1. Similarly, using the 8.13.1 version of Filebeat, the Elasticsearch index is named filebeat-8.13.1.
- cloud_id: The ID that uniquely identifies your Elasticsearch Service deployment.
- ssl: Set this to true so that Secure Sockets Layer (SSL) certificates are used for secure communication between Logstash and your Elasticsearch Service deployment.
- ilm_enabled: Enables or disables Elasticsearch Service index lifecycle management.
- api_key: If you choose to use an API key to authenticate (as discussed in the next step), you can provide it here.
- Optional: For additional security, you can generate an Elasticsearch API key through the Elasticsearch Service console and configure Logstash to use the new key to connect securely to Elasticsearch Service.
- Log in to the Elasticsearch Service Console.
- Select the deployment and go to ☰ > Management > Dev Tools.
- Enter the following:
```
POST /_security/api_key
{
  "name": "logstash-apikey",
  "role_descriptors": {
    "logstash_read_write": {
      "cluster": ["manage_index_templates", "monitor"],
      "index": [
        {
          "names": ["logstash-*", "metricbeat-*", "filebeat-*"],
          "privileges": ["create_index", "write", "read", "manage"]
        }
      ]
    }
  }
}
```
This creates an API key with the cluster monitor privilege, which gives read-only access for determining the cluster state, and manage_index_templates, which allows all operations on index templates. Some additional privileges also allow create_index, write, and manage operations for the specified indices. The index manage privilege is added to enable index refreshes.
- Click ▶. The output should be similar to the following:
{ "api_key": "aB1cdeF-GJI23jble4NOH4", "id": "2GBe63fBcxgJAetmgZeh", "name": "logstash_api_key" }
- Enter your new api_key value into the Logstash beats.conf file, in the format <id>:<api_key>. If your results were as shown in this example, you would enter 2GBe63fBcxgJAetmgZeh:aB1cdeF-GJI23jble4NOH4. Remember to remove the pound sign (#) to uncomment the api_key line, and comment out the username and password lines:

```
output {
  elasticsearch {
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    cloud_id => "<myDeployment>"
    ssl => true
    ilm_enabled => true
    api_key => "2GBe63fBcxgJAetmgZeh:aB1cdeF-GJI23jble4NOH4"
    # user => "<Username>"
    # password => "<Password>"
  }
}
```
- Open a command line interface instance, go to your Logstash installation path, and start Logstash:
bin/logstash -f beats.conf
- Open a second command line interface instance, go to your Metricbeat installation path, and start Metricbeat:
./metricbeat -c metricbeat.yml
- Open a third command line interface instance, go to your Filebeat installation path, and start Filebeat:
./filebeat -c filebeat.yml
- Logstash now outputs the Filebeat and Metricbeat data to your Elasticsearch Service instance.
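As a quick sanity check, you can confirm that the new indices exist and are receiving documents by running the following in Kibana's Dev Tools:

```
GET _cat/indices/metricbeat-*,filebeat-*?v
```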
In this guide, you manually launch each of the Elastic Stack applications through the command line interface. In production, you may prefer to configure Logstash, Metricbeat, and Filebeat to run as system services. Check the documentation for each application for the steps to configure it to run as a service.
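As a sketch, assuming you installed each application from DEB or RPM packages on a systemd-based Linux host, enabling the services typically looks like this (archive installs require creating your own unit files):

```sh
sudo systemctl enable --now metricbeat
sudo systemctl enable --now filebeat
sudo systemctl enable --now logstash
```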
View data in Kibana
In this section, you log in to Elasticsearch Service, open Kibana, and view the Kibana dashboards populated with your Metricbeat and Filebeat data.
View the Metricbeat dashboard
- Log in to Kibana.
- Open the Kibana main menu and select Analytics, then Dashboard.
- In the search box, search for metricbeat system. The search results show several dashboards available for you to explore.
- In the search results, choose [Metricbeat System] Overview ECS. A Metricbeat dashboard opens.
View the Filebeat dashboard
- Open the Kibana main menu and select Analytics, then Dashboard.
- In the search box, search for filebeat system.
- In the search results, choose [Filebeat System] Syslog dashboard ECS. A Filebeat dashboard opens, displaying your Filebeat data.
Now, you should have a good understanding of how to configure Logstash to ingest data from multiple Beats. You have the basics needed to begin experimenting with your own combination of Beats and modules.