OpenTelemetry native support
The Elastic Stack natively supports the OpenTelemetry protocol (OTLP). This means trace data and metrics collected from your applications and infrastructure can be sent directly to the Elastic Stack.
- Send data to the Elastic Stack from an OpenTelemetry collector
- Send data to the Elastic Stack from an OpenTelemetry agent
Send data from an OpenTelemetry collector
Connect your OpenTelemetry collector instances to Elastic Observability using the OTLP exporter:
```yaml
receivers: # ...
  otlp:

processors: # ...
  memory_limiter:
    check_interval: 1s
    limit_mib: 2000
  batch:

exporters:
  logging:
    loglevel: warn
  otlp/elastic:
    # Elastic APM server https endpoint without the "https://" prefix
    endpoint: "${ELASTIC_APM_SERVER_ENDPOINT}"
    headers:
      # Elastic APM Server secret token
      Authorization: "Bearer ${ELASTIC_APM_SECRET_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, otlp/elastic]
    metrics:
      receivers: [otlp]
      exporters: [logging, otlp/elastic]
    logs:
      receivers: [otlp]
      exporters: [logging, otlp/elastic]
```
Notes on this configuration:

- The receivers, like the OTLP receiver that forwards data emitted by APM agents, or the host metrics receiver.
- We recommend using the batch processor and the memory limiter processor. For more information, see recommended processors.
- The logging exporter is helpful for troubleshooting and supports various logging levels; the example above sets `loglevel: warn`.
- `otlp/elastic` is the Elastic Observability endpoint configuration. APM Server supports a ProtoBuf payload via both the OTLP protocol over gRPC transport (OTLP/gRPC) and the OTLP protocol over HTTP transport (OTLP/HTTP). To learn more about these exporters, see the OpenTelemetry Collector documentation: OTLP/HTTP Exporter or OTLP/gRPC exporter.
- `endpoint` is the hostname and port of the APM Server endpoint, without the `https://` prefix. For example, `apm_server_url:8200`.
- The `Authorization` header is the credential for Elastic APM secret token authorization (`Authorization: "Bearer ${ELASTIC_APM_SECRET_TOKEN}"`).
- Environment-specific configuration parameters can be conveniently passed in as environment variables (for example, `ELASTIC_APM_SERVER_ENDPOINT` and `ELASTIC_APM_SECRET_TOKEN`).
- [preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. To send OpenTelemetry logs to Elastic Stack version 8.0+, declare a `logs` pipeline, as shown in the `service.pipelines` section above.
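The `${ELASTIC_APM_SERVER_ENDPOINT}` and `${ELASTIC_APM_SECRET_TOKEN}` references in the configuration above resolve from the collector's environment. As a minimal sketch (the endpoint value, token value, binary name, and config filename are placeholders), the variables might be supplied like this when starting the collector:

```shell
# Placeholder values: the endpoint omits the "https://" prefix, as noted
# in the configuration comments above.
export ELASTIC_APM_SERVER_ENDPOINT="apm_server_url:8200"
export ELASTIC_APM_SECRET_TOKEN="an_apm_secret_token"

# Start the collector with the configuration shown above (binary and
# file names are hypothetical).
otelcol --config otel-collector-config.yaml
```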
You’re now ready to export traces and metrics from your services and applications.
When using the OpenTelemetry collector, you should always prefer sending data via the [OTLP
exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter) to an Elastic APM Server.
Other methods, like using the [elasticsearch
exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter) to send data directly to Elasticsearch, will get data into the Elastic Stack,
but will bypass all of the validation and data processing that the APM Server performs.
In addition, your data will not be viewable in the Kibana Observability apps if you use the elasticsearch
exporter.
Send data from an OpenTelemetry agent
To export traces and metrics to APM Server, instrument your services and applications with the OpenTelemetry API, SDK, or both. For example, if you are a Java developer, you need to instrument your Java app with the OpenTelemetry agent for Java. See the OpenTelemetry Instrumentation guides to download the OpenTelemetry Agent or SDK for your language.
Define the following environment variables to configure the OpenTelemetry agent and enable communication with Elastic APM.
```shell
export OTEL_RESOURCE_ATTRIBUTES=service.name=checkoutService,service.version=1.1,deployment.environment=production
export OTEL_EXPORTER_OTLP_ENDPOINT=https://apm_server_url:8200
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer an_apm_secret_token"
export OTEL_METRICS_EXPORTER="otlp"
export OTEL_LOGS_EXPORTER="otlp"

java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
     -classpath lib/*:classes/ \
     com.mycompany.checkout.CheckoutServiceServer
```
Notes on this configuration:

- [preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. The OpenTelemetry logs intake via APM Server is currently in technical preview.
- `OTEL_RESOURCE_ATTRIBUTES` holds fields that describe the service and the environment that the service runs in. See resource attributes for more information.
- `OTEL_EXPORTER_OTLP_ENDPOINT` is the APM Server URL: the host and port that APM Server listens for events on.
- `OTEL_EXPORTER_OTLP_HEADERS` is the authorization header that includes the Elastic APM secret token or API key. For information on how to format an API key, see API keys. Please note the required space between `Bearer` and the token value.
- The trusted certificate used to verify the TLS credentials of the client (optional).
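The space noted above is easy to drop by accident; the agent sends the value of `OTEL_EXPORTER_OTLP_HEADERS` verbatim, so a malformed header fails authentication. A small sketch of a format check (the token value is a placeholder, and the `ApiKey` form mirrors the secret-token form):

```shell
# Secret-token form (note the space after "Bearer"):
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer an_apm_secret_token"
# API-key form would look like (value hypothetical):
#   export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey an_api_key"

# Check that a space follows the authorization scheme.
case "$OTEL_EXPORTER_OTLP_HEADERS" in
  "Authorization=Bearer "* | "Authorization=ApiKey "*)
    echo "header format OK" ;;    # prints "header format OK"
  *)
    echo "missing space after scheme" ;;
esac
```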
You are now ready to collect traces and metrics, verify that they are arriving, and visualize them in Kibana.
Proxy requests to APM Server
APM Server supports both the OTLP/gRPC and OTLP/HTTP protocols on the same port as Elastic APM agent requests. For ease of setup, we recommend using OTLP/HTTP when proxying or load balancing requests to the APM Server.
If you use the OTLP/gRPC protocol, requests to the APM Server must use either HTTP/2 over TLS or HTTP/2 Cleartext (H2C). No matter which protocol is used, OTLP/gRPC requests will have the header "Content-Type: application/grpc".
When using a layer 7 (L7) proxy like AWS ALB, requests must be proxied in a way that ensures requests to the APM Server follow the rules outlined above. For example, with ALB you can create rules to select an alternative backend protocol based on the headers of requests coming into ALB. In this example, you’d select the gRPC protocol when the "Content-Type: application/grpc" header exists on a request.
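The routing rule described above can be sketched as a small shell function (the function name and protocol labels are illustrative, not ALB syntax): route to a gRPC backend when the request carries the gRPC content type, and to an HTTP/1.1 backend otherwise.

```shell
# Hypothetical sketch of the header-based rule an L7 proxy applies.
select_backend_protocol() {
  case "$1" in
    application/grpc*) echo "gRPC" ;;   # also matches e.g. application/grpc+proto
    *)                 echo "HTTP1" ;;
  esac
}

select_backend_protocol "application/grpc"   # prints "gRPC"
select_backend_protocol "application/json"   # prints "HTTP1"
```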
For more information on how to configure an AWS ALB to support gRPC, see this AWS blog post: Application Load Balancer Support for End-to-End HTTP/2 and gRPC.
For more information on how APM Server services gRPC requests, see Muxing gRPC and HTTP/1.1.
Next steps
- Collect metrics
- Add Resource attributes
- Learn about the limitations of this integration