Run Elastic Agent standalone on Kubernetes
Use Elastic Agent Docker images on Kubernetes to retrieve cluster metrics.
Running Elastic Cloud on Kubernetes? Refer to Run Elastic Agent on ECK.
Kubernetes deploy manifests
Deploy Elastic Agent as a DaemonSet to ensure that there is a running instance on each node of the cluster. These instances are used to retrieve most metrics from the host, such as system metrics, Docker stats, and metrics from all the services running on top of Kubernetes.
In addition, one of the Pods in the DaemonSet constantly holds a leader lock, which makes it responsible for handling cluster-wide monitoring. Find more information about leader election configuration options at leader election provider. This instance is used to retrieve metrics that are unique to the whole cluster, such as Kubernetes events or kube-state-metrics. If kube-state-metrics is not already running, deploy it now (see the Kubernetes deployment docs).
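To illustrate, the following is a minimal sketch of how leader election ties into the standalone agent policy, assuming the kubernetes_leaderelection provider and its leader variable as described in the provider documentation; check the downloaded manifest for the exact input definitions:

# Minimal sketch: enable the leader election provider and gate cluster-scoped
# datasets on the Pod that currently holds the lease. Names follow the
# kubernetes_leaderelection provider docs; verify them against your manifest.
providers.kubernetes_leaderelection.enabled: true

inputs:
  - name: kubernetes-cluster-metrics
    type: kubernetes/metrics
    use_output: default
    # Only the elected leader collects cluster-wide data such as Kubernetes events.
    condition: ${kubernetes_leaderelection.leader} == true
    streams:
      - data_stream:
          dataset: kubernetes.event
          type: metrics
        metricsets:
          - event
        period: 10s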
Everything is deployed under the kube-system
namespace by default. Change the namespace by modifying the manifest file.
To download the manifest file, run:
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.16/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml
This manifest includes the Kubernetes integration to collect Kubernetes metrics, the System integration to collect system-level metrics and logs from nodes, and Pod log collection based on dynamic inputs and the kubernetes provider.
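For illustration, the Pod log collection part is driven by a dynamic input along these lines; the exact input type and fields depend on the manifest version, so treat this as an indicative sketch rather than the literal content of the file:

# Indicative sketch of a dynamic log input: the kubernetes provider substitutes
# ${kubernetes.container.id} for every container it discovers on the node.
- name: container-log
  type: filestream
  use_output: default
  streams:
    - data_stream:
        dataset: kubernetes.container_logs
        type: logs
      paths:
        - /var/log/containers/*${kubernetes.container.id}.log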
Settings
Set the Elasticsearch settings before deploying the manifest:
- name: ES_USERNAME
  value: "elastic"
- name: ES_PASSWORD
  value: "passpassMyStr0ngP@ss"
- name: ES_HOST
  value: "https://somesuperhostiduuid.europe-west1.gcp.cloud.es.io:443"
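These environment variables are consumed by the Elasticsearch output defined in the agent policy ConfigMap; conceptually, that output looks like the sketch below (the output name and exact fields may differ in your manifest version):

# Sketch: the ES_* environment variables set on the container are substituted
# into the Elasticsearch output of the standalone agent policy.
outputs:
  default:
    type: elasticsearch
    hosts:
      - ${ES_HOST}
    username: ${ES_USERNAME}
    password: ${ES_PASSWORD}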
Run Elastic Agent on master nodes
Kubernetes master nodes can use taints to limit the workloads that can run on them. The manifest for standalone Elastic Agent defines tolerations to run on master nodes. Agents running on master nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes. To prevent Elastic Agent from running on master nodes, remove the following part of the DaemonSet spec:
spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
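Conversely, on clusters that taint control plane nodes with the newer node-role.kubernetes.io/control-plane key instead of (or in addition to) master, the toleration must list that key for the agent to keep running there; a hedged example covering both keys:

# Example: tolerate both the legacy master taint and the newer control-plane taint.
# Keep only the entries that match the taints your cluster actually applies.
spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule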
Deploy
To deploy to Kubernetes, run:
kubectl create -f elastic-agent-standalone-kubernetes.yaml
To check the status, run:
$ kubectl -n kube-system get pods -l app=elastic-agent
NAME                            READY   STATUS    RESTARTS   AGE
elastic-agent-4665d             1/1     Running   0          81m
elastic-agent-9f466c4b5-l8cm8   1/1     Running   0          81m
elastic-agent-fj2z9             1/1     Running   0          81m
elastic-agent-hs4pb             1/1     Running   0          81m
Red Hat OpenShift configuration
If you are using Red Hat OpenShift, you need to specify additional settings in the manifest file and enable the container to run as privileged.
- In the manifest file, modify the agent-node-datastreams ConfigMap and adjust inputs:

  - kubernetes-cluster-metrics input:

    - If https is used to access kube-state-metrics, add the following settings to all kubernetes.state_* datasets:

        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl.certificate_authorities:
          - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  - kubernetes-node-metrics input:

    - Change the kubernetes.controllermanager datastream condition to:

        condition: ${kubernetes.labels.app} == 'kube-controller-manager'

    - Change the kubernetes.scheduler datastream condition to:

        condition: ${kubernetes.labels.app} == 'openshift-kube-scheduler'

    - The kubernetes.proxy datastream configuration should look like this:

        - data_stream:
            dataset: kubernetes.proxy
            type: metrics
          metricsets:
            - proxy
          hosts:
            - 'localhost:29101'
          period: 10s
    - Add the following settings to all datastreams that connect to https://${env.NODE_NAME}:10250:

        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl.certificate_authorities:
          - /path/to/ca-bundle.crt

      ca-bundle.crt can be any CA bundle that contains the issuer of the certificate used in the Kubelet API. Depending on the specific OpenShift installation, it can be found either in secrets or in configmaps. In some installations it is available as part of the service account secret, in /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt. When using the OpenShift installer for GCP, mount the following configmap in the elastic-agent Pod and use ca-bundle.crt in ssl.certificate_authorities (a volume mount sketch follows these steps):

        Name:         kubelet-serving-ca
        Namespace:    openshift-kube-apiserver
        Labels:       <none>
        Annotations:  <none>

        Data
        ====
        ca-bundle.crt:
- Grant the elastic-agent service account access to the privileged SCC:

    oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:elastic-agent

  This command enables the container to be privileged as an administrator for OpenShift.
- If the namespace where elastic-agent is running has the "openshift.io/node-selector" annotation set, elastic-agent might not run on all nodes. In this case, consider overriding the node selector for the namespace to allow scheduling on any node:

    oc patch namespace kube-system -p \
      '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'

  This command sets the node selector for the project to an empty string.
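As referenced above, the kubelet-serving-ca bundle can be exposed to the agent through a ConfigMap volume. The sketch below assumes the ConfigMap has been made available in the namespace where Elastic Agent runs (ConfigMaps are namespace-scoped) and that the mount path matches whatever you put in ssl.certificate_authorities; the names and paths here are assumptions, not part of the shipped manifest:

# Sketch: mount the CA bundle ConfigMap into the elastic-agent container so that
# ssl.certificate_authorities can reference /etc/kubelet-serving-ca/ca-bundle.crt.
spec:
  template:
    spec:
      containers:
        - name: elastic-agent
          volumeMounts:
            - name: kubelet-serving-ca
              mountPath: /etc/kubelet-serving-ca
              readOnly: true
      volumes:
        - name: kubelet-serving-ca
          configMap:
            name: kubelet-serving-ca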
Autodiscover targeted Pods
Autodiscover conditions can be defined to allow Elastic Agent to automatically identify Pods and start collecting from them using predefined integrations. For example, to automatically identify a Redis Pod and start monitoring it with the Redis integration, add the following configuration as an extra input in the DaemonSet manifest:
- name: redis
  type: redis/metrics
  use_output: default
  meta:
    package:
      name: redis
      version: 0.3.6
  data_stream:
    namespace: default
  streams:
    - data_stream:
        dataset: redis.info
        type: metrics
      metricsets:
        - info
      hosts:
        - '${kubernetes.pod.ip}:6379'
      idle_timeout: 20s
      maxconn: 10
      network: tcp
      period: 10s
      condition: ${kubernetes.pod.labels.app} == 'redis'
Refer to dynamic inputs and kubernetes provider for more information about shaping dynamic inputs for autodiscovery.