Configuration

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Fleet-managed Elastic Agents must connect to Fleet Server to receive their configurations. You can deploy Fleet Server instances using ECK's Agent CRD with the appropriate configuration, as shown in Fleet mode and Fleet Server.

To learn more about the Fleet architecture and related components, check the Fleet documentation.

Fleet mode and Fleet Server

To run both Fleet Server and Elastic Agent in Fleet-managed mode, set the mode configuration element to fleet.

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent-sample
spec:
  mode: fleet

To run Fleet Server, set the fleetServerEnabled configuration element to true, as shown in this example:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server-sample
spec:
  mode: fleet
  fleetServerEnabled: true

You can leave the default value false in any other case.

Configure Kibana

For Fleet to run properly, the following settings must be set correctly in the Kibana configuration:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-sample
spec:
  config:
    xpack.fleet.agents.elasticsearch.hosts: ["https://elasticsearch-sample-es-http.default.svc:9200"]
    xpack.fleet.agents.fleet_server.hosts: ["https://fleet-server-sample-agent-http.default.svc:8220"]
    xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: eck-fleet-server
        is_default_fleet_server: true
        namespace: default
        monitoring_enabled:
          - logs
          - metrics
        package_policies:
        - name: fleet_server-1
          id: fleet_server-1
          package:
            name: fleet_server
      - name: Elastic Agent on ECK policy
        id: eck-agent
        namespace: default
        monitoring_enabled:
          - logs
          - metrics
        unenroll_timeout: 900
        is_default: true
        package_policies:
          - name: system-1
            id: system-1
            package:
              name: system

• xpack.fleet.agents.elasticsearch.hosts must point to the Elasticsearch cluster that Elastic Agents should send data to. For ECK-managed Elasticsearch clusters, ECK creates a Service accessible through the https://ES_RESOURCE_NAME-es-http.ES_RESOURCE_NAMESPACE.svc:9200 URL, where ES_RESOURCE_NAME is the name of the Elasticsearch resource and ES_RESOURCE_NAMESPACE is the namespace it was deployed in.
• xpack.fleet.agents.fleet_server.hosts must point to the Fleet Server that Elastic Agents should connect to. For ECK-managed Fleet Server instances, ECK creates a Service accessible through the https://FS_RESOURCE_NAME-agent-http.FS_RESOURCE_NAMESPACE.svc:8220 URL, where FS_RESOURCE_NAME is the name of the Elastic Agent resource with Fleet Server enabled and FS_RESOURCE_NAMESPACE is the namespace it was deployed in.
• xpack.fleet.packages lists the packages required for Fleet Server and Elastic Agents to enroll.
• xpack.fleet.agentPolicies defines the policies that Fleet Server and Elastic Agents enroll in. Check https://www.elastic.co/guide/en/fleet/current/agent-policy.html for more information.

Set referenced resources

Both Fleet Server and Elastic Agent in Fleet mode can facilitate the Fleet setup. Fleet Server can set up Fleet in Kibana (which otherwise requires manual steps) and enroll itself in the default Fleet Server policy. Elastic Agent can enroll itself in the default Elastic Agent policy. To allow ECK to set this up, provide a reference to the ECK-managed Kibana through the kibanaRef configuration element.

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server-sample
spec:
  kibanaRef:
    name: kibana-sample

ECK can also facilitate the connection between Elastic Agents and an ECK-managed Fleet Server. To allow ECK to set this up, provide a reference to the Fleet Server through the fleetServerRef configuration element.

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent-sample
spec:
  fleetServerRef:
    name: fleet-server-sample

Set the elasticsearchRefs element in your Fleet Server resource to point to the Elasticsearch cluster that will manage Fleet. Leave elasticsearchRefs empty or unset for any Elastic Agent running in Fleet mode, as the Elasticsearch cluster to target comes from the xpack.fleet.agents.elasticsearch.hosts configuration element in Kibana.

Currently, Elastic Agent in Fleet mode supports only a single output, so only a single Elasticsearch cluster can be referenced.

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server-sample
spec:
  elasticsearchRefs:
  - name: elasticsearch-sample

By default, every reference targets all instances in your Elasticsearch, Kibana and Fleet Server deployments, respectively. If you want to direct traffic to specific instances, refer to Traffic Splitting for more information and examples.
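
A minimal sketch of such a reference, assuming your ECK version supports the serviceName field of the reference described in Traffic Splitting and that you have already created a custom Service (the Service name below is illustrative):

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server-sample
spec:
  elasticsearchRefs:
  - name: elasticsearch-sample
    # Route traffic through a custom Service created as described in Traffic Splitting.
    # The Service name is illustrative.
    serviceName: elasticsearch-sample-es-coordinating-nodes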

Customize Elastic Agent configuration

In contrast to standalone Elastic Agent, in Fleet mode the configuration is managed through Fleet and cannot be defined through the config or configRef elements.

You can only configure the setup part of Fleet Server and Elastic Agent. You can override any of the environment variables that agents consume, as documented in Elastic Agent environment variables. This allows mixed setups where some components are deployed in the local Kubernetes cluster and others run externally.
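
As an illustration of such a mixed setup, the following sketch enrolls an ECK-managed Elastic Agent in a Fleet Server running outside the local cluster by overriding the documented FLEET_URL and FLEET_ENROLLMENT_TOKEN environment variables (the URL and the Secret name are illustrative):

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent-sample
spec:
  version: 8.16.0
  mode: fleet
  daemonSet:
    podTemplate:
      spec:
        containers:
        - name: agent
          env:
          # Point the agent at a Fleet Server hosted outside the local Kubernetes cluster.
          - name: FLEET_URL
            value: "https://fleet-server.example.com:8220"
          # Enrollment token read from a Secret; the Secret name and key are illustrative.
          - name: FLEET_ENROLLMENT_TOKEN
            valueFrom:
              secretKeyRef:
                name: fleet-enrollment-token
                key: token
...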

Upgrade the Elastic Agent specification

You can upgrade the Elastic Agent version or change settings by editing the YAML specification file. ECK applies the changes by performing a rolling restart of the Agent’s Pods. Depending on the settings that you used, ECK also sets up Fleet in Kibana, enrolls the agent in Fleet, or restarts Elastic Agent on certificate rollover.
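
For example, to upgrade the sample agent shown earlier, bump the version field and re-apply the manifest; the target version below is illustrative:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent-sample
spec:
  version: 8.16.1  # was 8.16.0; ECK performs a rolling restart of the Agent Pods
...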

Choose the deployment model

Depending on the use case, Elastic Agent may need to be deployed as a Deployment or a DaemonSet. To choose how to deploy your Elastic Agents, provide a podTemplate element under the deployment or the daemonSet element in the specification. If you choose the deployment option, you can additionally specify the strategy used to replace old Pods with new ones.

Similarly, you can set the update strategy when deploying as a DaemonSet. This allows you to control the rollout speed for new configuration by modifying the maxUnavailable setting:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent-sample
spec:
  version: 8.16.0
  daemonSet:
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 3
...
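
If you deploy Elastic Agent as a Deployment instead, a similar sketch applies; the replica count and strategy below are illustrative:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server-sample
spec:
  version: 8.16.0
  deployment:
    replicas: 2
    strategy:
      # Standard Kubernetes Deployment strategy: RollingUpdate or Recreate.
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
...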

Refer to Set compute resources for Beats and Elastic Agent for more information on how to use the Pod template to adjust the resources given to Elastic Agent.
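
As a sketch, resource requests and limits go on the agent container in the Pod template; the values below are illustrative, not recommendations:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent-sample
spec:
  version: 8.16.0
  daemonSet:
    podTemplate:
      spec:
        containers:
        - name: agent
          resources:
            requests:
              memory: 350Mi
              cpu: 200m
            limits:
              memory: 700Mi
              cpu: 500m
...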

Role Based Access Control for Elastic Agent

Some Elastic Agent features, such as the Kubernetes integration, require that Agent Pods interact with Kubernetes APIs. This functionality requires specific permissions. Standard Kubernetes RBAC rules apply. For example, to allow API interactions:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent-sample
spec:
  version: 8.16.0
  elasticsearchRefs:
  - name: elasticsearch-sample
  daemonSet:
    podTemplate:
      spec:
        automountServiceAccountToken: true
        serviceAccountName: elastic-agent
...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  - nodes/metrics
  - nodes/proxy
  - nodes/stats
  - events
  verbs:
  - get
  - watch
  - list
- nonResourceURLs:
  - /metrics
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: default
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io

Deploy Elastic Agent in secured clusters

To deploy Elastic Agent in clusters with the Pod Security Policy admission controller enabled, or in OpenShift clusters, you might need to grant additional permissions to the Service Account used by the Elastic Agent Pods. Those Service Accounts must be bound to a Role or ClusterRole that has the use permission for the required Pod Security Policy or Security Context Constraints. Different Elastic Agent integrations might require different settings in their PSP/SCC.
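
A minimal sketch of such a binding on OpenShift, assuming a Security Context Constraints object named elastic-agent-scc already exists and reusing the elastic-agent ServiceAccount from the RBAC example above (both names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent-scc-user
rules:
- apiGroups: ["security.openshift.io"]
  resources: ["securitycontextconstraints"]
  # Name of an existing SCC that grants the permissions the integrations need; illustrative.
  resourceNames: ["elastic-agent-scc"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent-scc-user
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: default
roleRef:
  kind: ClusterRole
  name: elastic-agent-scc-user
  apiGroup: rbac.authorization.k8s.io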

Customize Fleet Server Service

By default, ECK creates a Service through which Elastic Agents can connect to Fleet Server. You can customize it using the http configuration element. Check the documentation on how to make changes to the Service and customize the TLS configuration.
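
A sketch of such a customization, assuming you want to expose Fleet Server through a LoadBalancer Service and add an extra DNS name to its self-signed certificate (the DNS name is illustrative):

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server-sample
spec:
  mode: fleet
  fleetServerEnabled: true
  http:
    service:
      spec:
        # Expose Fleet Server outside the cluster; ClusterIP is the default.
        type: LoadBalancer
    tls:
      selfSignedCertificate:
        subjectAltNames:
        - dns: fleet-server.example.com
...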

Override default Fleet configuration settings

ECK uses environment variables to control how Elastic Agent and Fleet Server should be configured. Sometimes it might be necessary to override some of these settings. For example, if the Kibana TLS certificate is signed by a well-known root and can’t include kibana-kb-http.namespace.svc as a SAN, KIBANA_FLEET_HOST can be overridden to point to the URL that the certificate specifies. To do that, specify the environment variable in the Pod template, as shown in the following example.

...
spec:
  deployment:
    podTemplate:
      spec:
        containers:
        - name: agent
          env:
          - name: KIBANA_FLEET_HOST
            value: "https://kibana.example.com:443"
...

Check the Elastic Agent docs for a list of all the environment variables that can be used.