Running Filebeat on Kubernetes
Filebeat Docker images can be used on Kubernetes to retrieve and ship container logs.
Kubernetes deploy manifests
By deploying Filebeat as a DaemonSet we ensure there is a running instance on each node of the cluster.
The Docker logs host folder (/var/lib/docker/containers) is mounted into the Filebeat container. Filebeat starts an input for these files and harvests them as they appear.
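As a rough sketch, the input section of the filebeat.yml shipped in the manifest looks something like the following, using the docker input type available in Filebeat 6.x (the exact configuration in your copy of the manifest may differ):

filebeat.inputs:
- type: docker
  # Harvest logs from every container; by default the docker input
  # reads /var/lib/docker/containers/<container-id>/*.log on the
  # mounted host folder.
  containers.ids:
  - "*"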
Everything is deployed under the kube-system namespace; you can change that by updating the YAML file.
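For example, to move everything to a hypothetical logging namespace, you could rewrite the namespace fields in place (this assumes every resource in the file declares namespace: kube-system):

kubectl create namespace logging
sed -i 's/namespace: kube-system/namespace: logging/' filebeat-kubernetes.yaml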
To get the manifests just run:
curl -L -O https://raw.githubusercontent.com/elastic/beats/6.4/deploy/kubernetes/filebeat-kubernetes.yaml
If you are using Kubernetes 1.7 or earlier: Filebeat uses a hostPath volume to persist internal data. It is located under /var/lib/filebeat-data. The manifest uses folder autocreation (DirectoryOrCreate), which was introduced in Kubernetes 1.8. You will need to remove type: DirectoryOrCreate from the manifest and create the host folder yourself.
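For reference, the relevant part of the manifest looks roughly like this (the volume name data is illustrative; match whatever name your copy of the manifest uses):

volumes:
- name: data
  hostPath:
    path: /var/lib/filebeat-data
    type: DirectoryOrCreate   # remove this line on Kubernetes 1.7 or earlier

On Kubernetes 1.7 or earlier, create the folder on each node yourself, for example with mkdir -p /var/lib/filebeat-data.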
Settings
Some parameters are exposed in the manifest to configure the logs destination. By default they point to an existing Elasticsearch deployment if one is present, but you may want to change that behavior, so just edit the YAML file and modify them:
- name: ELASTICSEARCH_HOST
  value: elasticsearch
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: changeme
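These environment variables are expanded in the Elasticsearch output of the Filebeat configuration. A sketch of how that wiring typically looks, using the Beats ${VAR:default} expansion syntax (the fallback values after the colon are illustrative defaults):

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}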
Deploy
To deploy Filebeat to Kubernetes, just run:
kubectl create -f filebeat-kubernetes.yaml
Then you should be able to check the status by running:
$ kubectl --namespace=kube-system get ds/filebeat

NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
filebeat   32        32        0       32           0           <none>          1m
Logs should start flowing to Elasticsearch, all annotated by the add_kubernetes_metadata processor.
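A minimal sketch of how that processor is typically enabled in filebeat.yml; your manifest may already include an equivalent block:

processors:
- add_kubernetes_metadata:
    # Use the in-cluster Kubernetes API credentials provided to the pod
    # (a 6.x option; assumed here to match the DaemonSet deployment).
    in_cluster: true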