Run Filebeat on Kubernetes

You can use Filebeat Docker images on Kubernetes to retrieve and ship container logs.

Running Elastic Cloud on Kubernetes? See Run Beats on ECK.

However, version 8.17.0 of Filebeat has not yet been released, so no Docker image is currently available for this version.

Kubernetes deploy manifests

You deploy Filebeat as a DaemonSet to ensure there’s a running instance on each node of the cluster.

The container logs host folder (/var/log/containers) is mounted on the Filebeat container. Filebeat starts an input for the files and begins harvesting them as soon as they appear in the folder.

Everything is deployed under the kube-system namespace by default. To change the namespace, modify the manifest file.
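
For example, to deploy everything into a hypothetical logging namespace instead, change the namespace field on every object in the manifest (the ServiceAccount, ConfigMap, RBAC objects, and the DaemonSet), along these lines:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging    # example namespace; apply the same change to every object in the manifest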

To download the manifest file, run:

curl -L -O https://raw.githubusercontent.com/elastic/beats/8.x/deploy/kubernetes/filebeat-kubernetes.yaml

If you are using Kubernetes 1.7 or earlier: Filebeat uses a hostPath volume to persist internal data. It’s located under /var/lib/filebeat-data. The manifest uses folder autocreation (DirectoryOrCreate), which was introduced in Kubernetes 1.8. You need to remove type: DirectoryOrCreate from the manifest and create the host folder yourself.
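
For reference, the relevant volume definition in the manifest looks roughly like this; on Kubernetes 1.7 or earlier, drop the type line and create the folder on each node yourself:

volumes:
- name: data
  hostPath:
    path: /var/lib/filebeat-data
    type: DirectoryOrCreate    # remove this line on Kubernetes 1.7 or earlier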

Settings

By default, Filebeat sends events to an existing Elasticsearch deployment, if present. To specify a different destination, change the following parameters in the manifest file:

- name: ELASTICSEARCH_HOST
  value: elasticsearch
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: changeme
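
These variables are consumed by the Elasticsearch output in the Filebeat configuration shipped in the manifest's ConfigMap, roughly like this:

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
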
Running Filebeat on control plane nodes

Kubernetes control plane nodes can use taints to limit the workloads that can run on them. To run Filebeat on control plane nodes, you may need to update the DaemonSet spec to include the proper tolerations:

spec:
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
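
On clusters created before the control plane taint was renamed, the taint key may still be node-role.kubernetes.io/master; in that case you may need to append an extra toleration to the same list, for example:

  - key: node-role.kubernetes.io/master
    effect: NoSchedule
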
Red Hat OpenShift configuration

If you are using Red Hat OpenShift, you need to specify additional settings in the manifest file and enable the container to run as privileged. Filebeat needs to run as a privileged container to mount logs written on the node (hostPath) and read them.

  1. Modify the DaemonSet container spec in the manifest file:

      securityContext:
        runAsUser: 0
        privileged: true
  2. Grant the filebeat service account access to the privileged SCC:

    oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:filebeat

    Run this command as an OpenShift administrator. It grants the filebeat service account permission to run privileged containers.

  3. Override the default node selector for the kube-system namespace (or your custom namespace) to allow for scheduling on any node:

    oc patch namespace kube-system -p \
    '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'

    This command sets the node selector for the project to an empty string. If you don’t run this command, the default node selector will skip control plane nodes.

To support OpenShift container runtimes (e.g. CRI-O, containerd), you need to configure the following path:

filebeat.inputs:
- type: container
  paths: 
    - '/var/log/containers/*.log'

The same path needs to be configured if autodiscover is enabled:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*.log

/var/log/containers/*.log is normally a symlink to /var/log/pods/*/*.log, so the paths above can be adjusted accordingly.
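
For example, to read the pod log files directly instead of following the symlinks, the glob needs one extra level for the per-container directory, assuming the usual kubelet layout of /var/log/pods/<namespace>_<pod>_<uid>/<container>/*.log:

filebeat.inputs:
- type: container
  paths:
    - '/var/log/pods/*/*/*.log'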

Load Kibana dashboards

Filebeat comes packaged with various pre-built Kibana dashboards that you can use to visualize logs from your Kubernetes environment.

If these dashboards are not already loaded into Kibana, you must install Filebeat on any system that can connect to the Elastic Stack, and then run the setup command to load the dashboards. To learn how, see Load Kibana dashboards.
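
For example, from a host with Filebeat installed and network access to both Elasticsearch and Kibana, the dashboards can be loaded with a command along these lines (the Kibana host below is a placeholder):

filebeat setup --dashboards -E setup.kibana.host=kibana.example.com:5601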

The setup command does not load the ingest pipelines used to parse log lines. By default, ingest pipelines are set up automatically the first time you run Filebeat and connect to Elasticsearch.

If you are using an output other than Elasticsearch, such as Logstash, you need to load the index template, the Kibana dashboards, and the ingest pipelines manually.
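
For example, the index template can be loaded from any host that can reach Elasticsearch by temporarily disabling the Logstash output on the command line; the Elasticsearch host below is a placeholder:

filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'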

Deploy

To deploy Filebeat to Kubernetes, run:

kubectl create -f filebeat-kubernetes.yaml

To check the status, run:

$ kubectl --namespace=kube-system get ds/filebeat

NAME       DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
filebeat   32        32        0         32           0           <none>          1m

Log events should start flowing to Elasticsearch. The events are annotated with metadata added by the add_kubernetes_metadata processor.
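
The default manifest enables this processor in the Filebeat configuration roughly as follows (shown for reference; the exact ConfigMap contents can differ between versions):

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
      - logs_path:
          logs_path: "/var/log/containers/"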

Parsing json logs

When collecting logs from workloads running on Kubernetes, it is common for these applications to log in JSON format. In that case, special handling can be applied to parse the JSON logs properly and decode them into fields. Below are two different ways of configuring Filebeat's autodiscover to identify and parse JSON logs. We will use an example of one Pod with two containers, where only one of them logs in JSON format.

Example log:

{"type":"log","@timestamp":"2020-11-16T14:30:13+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
  1. Using json.* options with templates

    filebeat.autodiscover:
      providers:
          - type: kubernetes
            node: ${NODE_NAME}
            templates:
              - condition:
                  contains:
                    kubernetes.container.name: "no-json-logging"
                config:
                  - type: container
                    paths:
                      - "/var/log/containers/*-${data.kubernetes.container.id}.log"
              - condition:
                  contains:
                    kubernetes.container.name: "json-logging"
                config:
                  - type: container
                    paths:
                      - "/var/log/containers/*-${data.kubernetes.container.id}.log"
                    json.keys_under_root: true
                    json.add_error_key: true
                    json.message_key: message
  2. Using json.* options with hints

    The key part here is to annotate the Pod properly so that only the logs of the correct container are parsed as JSON. The annotation should be constructed like this:

    co.elastic.logs.<container_name>/json.keys_under_root: "true"

    Autodiscovery configuration:

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log

    Then annotate the pod properly:

    annotations:
        co.elastic.logs.json-logging/json.keys_under_root: "true"
        co.elastic.logs.json-logging/json.add_error_key: "true"
        co.elastic.logs.json-logging/json.message_key: "message"

Logrotation

According to the Kubernetes documentation, Kubernetes is not responsible for rotating logs; rather, a deployment tool should set up a solution to address that. Different log rotation strategies can cause issues that might make Filebeat lose or even duplicate events. You can find more information about Filebeat's log rotation best practices in Filebeat's log rotation specific documentation.