Common problems

This section describes common problems you might encounter when using a Fleet-managed APM Server.

No data is indexed

If no data shows up in Elasticsearch, first make sure that your APM components are properly connected.

Is Elastic Agent healthy?

In Kibana, open Fleet and find the host that is running the APM integration; confirm that its status is Healthy. If it isn't, check the Elastic Agent logs to diagnose potential causes. See Monitor Elastic Agents to learn more.
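
If you have shell access to the host, you can also check agent health locally with the elastic-agent CLI (run it from the install directory if the binary isn't on your PATH):

elastic-agent status

A healthy agent reports itself and its components, including apm-server, as Healthy.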

Is APM Server happy?

In Kibana, open Fleet and select the host that is running the APM integration. Open the Logs tab and select the elastic_agent.apm_server dataset. Look for any APM Server errors that could help diagnose the problem.

Can the APM agent connect to APM Server?

To determine if the APM agent can connect to the APM Server, send requests to the instrumented service and look for lines containing [request] in the APM Server logs.
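
As a quick reachability check from the host running the APM agent, you can query the APM Server's root endpoint (the host name is a placeholder; if a secret token or API key is required, an unauthenticated request still answers, just with limited detail):

curl -i http://apm-host.example.com:8200/

Any HTTP response confirms the network path and port; no response at all points to a connectivity or SSL problem instead.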

If no requests are logged, confirm that:

  1. SSL isn’t misconfigured.
  2. The host is correct. For example, if you’re using Docker, ensure that APM Server binds to the right interface (for example, set apm-server.host = 0.0.0.0:8200 to match any IP) and set the SERVER_URL setting in the APM agent accordingly (see the sketch after this list).
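
For example, a minimal sketch with placeholder names: when Elastic Agent runs in Docker, publish the APM Server port and point the agent at the published address (most APM agents read the server URL from the ELASTIC_APM_SERVER_URL environment variable; the container's Fleet enrollment settings are omitted here):

# Publish port 8200 from the Elastic Agent container so APM Server is reachable
docker run -d -p 8200:8200 docker.elastic.co/beats/elastic-agent:<version>

# In the instrumented service's environment, point the APM agent at that address
export ELASTIC_APM_SERVER_URL="http://apm-host.example.com:8200"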

If you see requests coming through the APM Server but they are not accepted (a response code other than 202), see APM Server response codes to narrow down the possible causes.

Instrumentation gaps

APM agents provide auto-instrumentation for many popular frameworks and libraries. If the APM agent is not auto-instrumenting something that you were expecting, data won’t be sent to the Elastic Stack. Reference the relevant APM agent documentation for details on what is automatically instrumented.

Common SSL-related problems

SSL client fails to connect

The target host might be unreachable or the certificate may not be valid. To fix this problem:

  1. Make sure that the APM Server process on the target host is running and that you can connect to it. Try to ping the target host to verify that you can reach it from the host running the APM agent. Then use either nc or telnet to make sure that the port is available. For example:

    ping <hostname or IP>
    telnet <hostname or IP> 8200
  2. Verify that the certificate is valid and that the hostname and IP match.
  3. Use OpenSSL to test connectivity to the target server and diagnose problems, as shown in the example below. See the OpenSSL documentation for more info.
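
For example, to inspect the certificate chain the server presents and surface any verification errors (host, port, and server name are placeholders):

openssl s_client -connect <hostname or IP>:8200 -servername <hostname>
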
x509: cannot validate certificate for <IP address> because it doesn’t contain any IP SANs

This happens because your certificate is only valid for the hostname present in the Subject field. To resolve this problem, try one of these solutions:

  • Create a DNS entry for the hostname, mapping it to the server’s IP.
  • Create an entry in /etc/hosts for the hostname. Or, on Windows, add an entry to C:\Windows\System32\drivers\etc\hosts.
  • Re-create the server certificate and add a Subject Alternative Name (SAN) for the IP address of the server, as in the example below. This makes the server’s certificate valid for both the hostname and the IP address.
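
For example, a self-signed certificate with both DNS and IP SANs (the names and IP are placeholders; -addext requires OpenSSL 1.1.1 or later, and production setups typically use a certificate signed by a trusted CA instead):

openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout apm-server.key -out apm-server.crt \
  -subj "/CN=apm-host.example.com" \
  -addext "subjectAltName=DNS:apm-host.example.com,IP:10.0.0.5"
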
getsockopt: no route to host

This is not an SSL problem. It’s a networking problem. Make sure the two hosts can communicate.
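
To see where the path breaks, trace the route from the client host (the hostname is a placeholder):

traceroute <hostname or IP>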

getsockopt: connection refused

This is not an SSL problem. Make sure that APM Server is running and that there is no firewall blocking the traffic.
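
On the APM Server host, you can confirm that the server is listening on the expected port (8200 by default; ss is Linux-specific, use netstat on other platforms):

ss -tlnp | grep 8200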

No connection could be made because the target machine actively refused it

A firewall is refusing the connection. Check if a firewall is blocking the traffic on the client, the network, or the destination host.

I/O Timeout

I/O Timeouts can occur when your timeout settings across the stack are not configured correctly, especially when using a load balancer.

You may see an error like the one below in the APM agent logs, and/or a similar error on the APM Server side:

[ElasticAPM] APM Server responded with an error:
"read tcp 123.34.22.313:8200->123.34.22.40:41602: i/o timeout"

To fix this, ensure that timeouts increase at each hop from the APM agent, through your load balancer, to the APM Server.

By default, the agent timeouts are set at 10 seconds, and the server timeout is set at 3600 seconds. Your load balancer should be set somewhere between these numbers.

For example:

APM agent --> Load Balancer  --> APM Server
   10s            15s               3600s

The APM Server timeout can be configured by updating the maximum duration for reading an entire request.
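
On the agent side, many APM agents expose this as a server_timeout setting, often configurable through the environment (supported option names vary by agent, and the value below is illustrative):

# Keep the agent timeout below the load balancer's idle timeout (agent < LB < server)
export ELASTIC_APM_SERVER_TIMEOUT=10s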

Field limit exceeded

When adding too many distinct tag keys on a transaction or span, you risk creating a mapping explosion.

For example, avoid using user-specified data, like URL parameters, as a tag key. Likewise, using the current timestamp or a user ID as a tag key is not a good idea. However, tag values with a high cardinality are not a problem. Just try to keep the number of distinct tag keys to a minimum.

The symptom of a mapping explosion is that transactions and spans are not indexed anymore after a certain time. Usually, on the next day, the spans and transactions will be indexed again because a new index is created each day. But as soon as the field limit is reached, indexing stops again.

In the agent logs, you won’t see a sign of failures, as APM Server asynchronously sends the data it received from the agents to Elasticsearch. However, APM Server and Elasticsearch log a warning like this:

{\"type\":\"illegal_argument_exception\",\"reason\":\"Limit of total fields [1000] in [INDEX_NAME] has been exceeded\"}

Tail-based sampling causing high system memory usage and high disk IO

Tail-based sampling requires minimal memory to run, and there should not be a noticeable increase in RSS memory usage. However, since tail-based sampling writes data to disk, it is possible to see a significant increase in OS page cache usage due to disk IO. If you see a drop in throughput and excessive disk activity after enabling tail-based sampling, ensure that there is enough memory headroom in the system for the OS page cache to perform disk IO efficiently.
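
To verify this on a Linux host, watch page cache usage and disk utilization while tail-based sampling is enabled (iostat ships with the sysstat package):

free -h        # the buff/cache column reflects OS page cache usage
iostat -x 5    # sustained high %util on the sampling storage device indicates heavy disk IO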