Run Metricbeat on Kubernetes

You can use Metricbeat Docker images on Kubernetes to retrieve cluster metrics.

Running Elastic Cloud on Kubernetes? See Run Beats on ECK.

Kubernetes deploy manifests

You deploy Metricbeat as a DaemonSet to ensure that there’s a running instance on each node of the cluster. These instances are used to retrieve most metrics from the host, such as system metrics, Docker stats, and metrics from all the services running on top of Kubernetes.

In addition, one of the Pods in the DaemonSet will constantly hold a leader lock which makes it responsible for handling cluster-wide monitoring. This instance is used to retrieve metrics that are unique for the whole cluster, such as Kubernetes events or kube-state-metrics. You can find more information about leader election configuration options at Autodiscover.
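For reference, leader election is configured in the manifest through an autodiscover provider with unique: true. A minimal sketch of its shape, with an abbreviated metricsets list (the shipped manifest contains the full configuration):

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      node: ${NODE_NAME}
      # Only the Pod currently holding the leader lease applies this template,
      # so cluster-wide metrics are collected exactly once.
      unique: true
      templates:
        - config:
            - module: kubernetes
              hosts: ["kube-state-metrics:8080"]
              period: 10s
              metricsets:
                - state_node
                - state_deployment
                - state_pod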

Note: If you are upgrading from an older version, make sure no redundant resources are left over from the old manifests, such as an old Deployment specification and its ConfigMaps.

Everything is deployed under the kube-system namespace by default. To change the namespace, modify the manifest file.

To download the manifest file, run:

curl -L -O https://raw.githubusercontent.com/elastic/beats/8.16/deploy/kubernetes/metricbeat-kubernetes.yaml

If you are using Kubernetes 1.7 or earlier: Metricbeat uses a hostPath volume to persist internal data. It’s located under /var/lib/metricbeat-data. The manifest uses folder autocreation (DirectoryOrCreate), which was introduced in Kubernetes 1.8. You need to remove type: DirectoryOrCreate from the manifest and create the host folder yourself.
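The hostPath volume in the manifest looks roughly like this (the volume name data is illustrative); on Kubernetes 1.7 or earlier, remove the type line and create the folder on each node yourself:

volumes:
  - name: data
    hostPath:
      path: /var/lib/metricbeat-data
      # Remove this line on Kubernetes 1.7 or earlier and create the folder manually.
      type: DirectoryOrCreate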

Settings

By default, Metricbeat sends events to an existing Elasticsearch deployment, if present. To specify a different destination, change the following parameters in the manifest file:

- name: ELASTICSEARCH_HOST
  value: elasticsearch
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: changeme
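These environment variables are referenced by the Elasticsearch output in the metricbeat.yml ConfigMap. A sketch of how that reference typically looks (the exact defaults in your manifest may differ):

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
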
Running Metricbeat on control plane nodes

Kubernetes control plane nodes can use taints to limit the workloads that can run on them. To run Metricbeat on control plane nodes, you may need to update the DaemonSet spec to include the proper tolerations:

spec:
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
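On clusters created before Kubernetes renamed the control plane taint, the taint key may still be the legacy node-role.kubernetes.io/master. A toleration sketch covering both keys:

spec:
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
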
Red Hat OpenShift configuration

If you are using Red Hat OpenShift, you need to specify additional settings in the manifest file and grant the metricbeat service account access to the privileged SCC:

  1. In the manifest file, edit the metricbeat-daemonset-modules ConfigMap, and specify the following settings under kubernetes.yml in the data section:

      kubernetes.yml: |-
        - module: kubernetes
          metricsets:
            - node
            - system
            - pod
            - container
            - volume
          period: 10s
          host: ${NODE_NAME}
          hosts: ["https://${NODE_NAME}:10250"]
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          ssl.certificate_authorities:
            - /path/to/kubelet-service-ca.crt
        - module: kubernetes
          metricsets:
            - proxy
          period: 10s
          host: ${NODE_NAME}
          hosts: ["localhost:29101"]

    kubelet-service-ca.crt can be any CA bundle that contains the issuer of the certificate used by the Kubelet API. Depending on the specific OpenShift installation, it can be found either in a Secret or in a ConfigMap. In some installations it is available as part of the service account secret, at /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt. When using the OpenShift installer for GCP, the following ConfigMap can be mounted into the Metricbeat Pod and its ca-bundle.crt used in ssl.certificate_authorities (see the volume mount sketch at the end of this section):

    Name:         kubelet-serving-ca
    Namespace:    openshift-kube-apiserver
    Labels:       <none>
    Annotations:  <none>
    
    Data
    ====
    ca-bundle.crt:
  2. If https is used to access kube-state-metrics, add the following settings to the metricbeat-daemonset-config ConfigMap under the kubernetes autodiscover configuration for the state_* metricsets:

      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.certificate_authorities:
        - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  3. Grant the metricbeat service account access to the privileged SCC:

    oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:metricbeat

    This command allows the Metricbeat container to run as privileged on OpenShift.

  4. If the namespace where Metricbeat is running has the "openshift.io/node-selector" annotation set, Metricbeat might not run on all nodes. In this case, consider overriding the node selector for the namespace to allow scheduling on any node:

    oc patch namespace kube-system -p \
    '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'

    This command sets the node selector for the project to an empty string. If you don’t run this command, the default node selector will skip control plane nodes.

For OpenShift versions prior to 4.x, you also need to modify the DaemonSet container spec in the manifest file to enable the container to run as privileged:

  securityContext:
    runAsUser: 0
    privileged: true
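As mentioned in step 1, the CA bundle for the Kubelet API can be mounted into the Metricbeat Pod from a ConfigMap. A sketch of such a mount, assuming the kubelet-serving-ca ConfigMap has been copied into the namespace where Metricbeat runs (volume name and mount path are illustrative):

# In the DaemonSet container spec:
volumeMounts:
  - name: kubelet-serving-ca
    mountPath: /etc/pki/kubelet-serving-ca
    readOnly: true
# In the DaemonSet Pod spec:
volumes:
  - name: kubelet-serving-ca
    configMap:
      name: kubelet-serving-ca

With this mount, ssl.certificate_authorities in the module configuration would list /etc/pki/kubelet-serving-ca/ca-bundle.crt.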

Load Kibana dashboards

Metricbeat comes packaged with various pre-built Kibana dashboards that you can use to visualize metrics about your Kubernetes environment.

If these dashboards are not already loaded into Kibana, you must install Metricbeat on any system that can connect to the Elastic Stack, and then run the setup command to load the dashboards. To learn how, see Load Kibana dashboards.
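For example, a sketch of running setup using the illustrative Elasticsearch host and credentials from the Settings section (the Kibana host is an assumption; adjust all values to your deployment):

metricbeat setup --dashboards \
  -E output.elasticsearch.hosts='["elasticsearch:9200"]' \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=changeme \
  -E setup.kibana.host=kibana:5601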

If you are using an output other than Elasticsearch, such as Logstash, you need to Load the index template manually and Load Kibana dashboards.

Deploy

Metricbeat gets some metrics from kube-state-metrics. If kube-state-metrics is not already running, deploy it now (see the Kubernetes deployment docs).
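If you need to deploy kube-state-metrics, one common approach (check the kube-state-metrics project for its current instructions) is to apply the standard example manifests from its repository:

git clone https://github.com/kubernetes/kube-state-metrics.git
kubectl apply -f kube-state-metrics/examples/standard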

To deploy Metricbeat to Kubernetes, run:

kubectl create -f metricbeat-kubernetes.yaml

To check the status, run:

$ kubectl --namespace=kube-system  get ds/metricbeat

NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
metricbeat   32        32        0       32           0           <none>          1m

Metrics should start flowing to Elasticsearch.
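One quick way to verify, using the illustrative Elasticsearch host and credentials from the Settings section, is to check that Metricbeat indices or data streams are being created:

curl -u elastic:changeme "http://elasticsearch:9200/_cat/indices?v" | grep metricbeat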

Deploying Metricbeat to collect cluster-level metrics in large clusters

The number of nodes and resources in a Kubernetes cluster can be fairly large, and in such cases the single Pod that collects cluster-level metrics can run into performance issues due to resource limitations. In this case, consider avoiding the leader election strategy and instead running a dedicated, standalone Metricbeat instance using a Deployment in addition to the DaemonSet.
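A minimal sketch of such a Deployment, assuming the same Metricbeat image used by the DaemonSet and a separate ConfigMap that contains only the cluster-scope modules (all names and resource values here are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat-cluster
  namespace: kube-system
spec:
  replicas: 1          # a single dedicated instance replaces the leader-elected Pod
  selector:
    matchLabels:
      k8s-app: metricbeat-cluster
  template:
    metadata:
      labels:
        k8s-app: metricbeat-cluster
    spec:
      serviceAccountName: metricbeat
      containers:
        - name: metricbeat
          image: docker.elastic.co/beats/metricbeat:8.16.0
          args: ["-c", "/etc/metricbeat.yml", "-e"]
          resources:
            requests:
              cpu: 200m
              memory: 400Mi
          volumeMounts:
            - name: config
              mountPath: /etc/metricbeat.yml
              readOnly: true
              subPath: metricbeat.yml
      volumes:
        - name: config
          configMap:
            name: metricbeat-cluster-config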