WARNING: Version 5.4 of Packetbeat has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Frequently Asked Questions
This section contains frequently asked questions about Packetbeat. Also check out the Packetbeat discussion forum.
client_server and server fields are empty?
The client_server and server fields are empty when Packetbeat is not configured to capture information about the network topology. To capture this information, set the save_topology configuration option to true and make sure that you are sending the output to Elasticsearch.
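In packetbeat.yml, that might look like the following sketch (note: the exact location of save_topology has changed between Packetbeat releases, so check the configuration reference for your version):

```yaml
# save_topology only has an effect when events are sent to Elasticsearch.
output.elasticsearch:
  hosts: ["localhost:9200"]
  save_topology: true
```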
Dashboard in Kibana is breaking up data fields incorrectly?
The index template might not be loaded correctly. See Step 3: Loading the Index Template in Elasticsearch.
Packetbeat doesn’t see any packets when using mirror ports?
The interface needs to be set to promiscuous mode. Run the following command:
ip link set <device_name> promisc on
For example: ip link set enp5s0f1 promisc on
Packetbeat can’t capture traffic from Windows loopback interface?
Packetbeat is unable to capture traffic from the loopback device (127.0.0.1 traffic) because the Windows TCP/IP stack does not implement a network loopback interface, making it difficult for Windows packet capture drivers like WinPcap to sniff traffic.
As a workaround, you can try installing Npcap, an update of WinPcap. Make sure that you restart Windows after installing Npcap. Npcap creates an Npcap Loopback Adapter that you can select if you want to capture loopback traffic.
For the list of devices shown here, you would configure Packetbeat to use device 4:

PS C:\Users\vagrant\Desktop\packetbeat-1.2.0-windows> .\packetbeat.exe -devices
0: \Device\NPF_NdisWanBh (NdisWan Adapter)
1: \Device\NPF_NdisWanIp (NdisWan Adapter)
2: \Device\NPF_NdisWanIpv6 (NdisWan Adapter)
3: \Device\NPF_{DD72B02C-4E48-4924-8D0F-F80EA2755534} (Intel(R) PRO/1000 MT Desktop Adapter)
4: \Device\NPF_{77DFFCAF-1335-4B0D-AFD4-5A4685674FAA} (MS NDIS 6.0 LoopBack Driver)
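In packetbeat.yml, you could then select that device by its index (a sketch; on Windows, packetbeat.interfaces.device accepts the index reported by -devices):

```yaml
# Capture from device 4, the Npcap loopback adapter in the listing above.
packetbeat.interfaces.device: 4
```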
Packetbeat is missing long running transactions?
Packetbeat has an internal timeout that it uses to time out transactions and TCP connections when no packets have been seen for a long time.
To process long running transactions, you can specify a larger value for the transaction_timeout option. However, keep in mind that very large timeout values can increase memory usage if messages are lost or transaction response messages are not sent.
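For example, to give HTTP transactions more time before they are dropped (a sketch; transaction_timeout is configured per protocol, and the ports shown are illustrative):

```yaml
packetbeat.protocols.http:
  ports: [80, 8080]
  # Keep in-flight transactions for up to 60s of silence before timing out.
  transaction_timeout: 60s
```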
Need to limit bandwidth used by Packetbeat?
If you need to limit bandwidth usage, we recommend that you configure the network stack on your OS to perform bandwidth throttling.
For example, the following Linux commands cap the connection between Packetbeat and Logstash by setting a limit of 50 kbps on TCP connections over port 5044:
tc qdisc add dev $DEV root handle 1: htb
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 50kbps ceil 50kbps
tc filter add dev $DEV parent 1:0 prio 1 protocol ip handle 10 fw flowid 1:10
iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
Using OS tools to perform bandwidth throttling gives you better control over policies. For example, you can use OS tools to cap bandwidth during the day, but not at night. Or you can leave the bandwidth uncapped, but assign a low priority to the traffic.
Error loading config file?
You may encounter errors loading the config file on POSIX operating systems if:
- an unauthorized user tries to load the config file, or
- the config file has the wrong permissions.
See Config File Ownership and Permissions for more about resolving these errors.
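For example, on Linux a common fix is to remove group and other write permission from the config file (a sketch using a stand-in file in the current directory; substitute the path of your real packetbeat.yml):

```shell
# Stand-in for your real config file. Packetbeat also checks ownership,
# so in a real deployment the file should be owned by root or by the
# user running Packetbeat.
touch packetbeat.yml
chmod 0664 packetbeat.yml   # group-writable: loading would fail
chmod go-w packetbeat.yml   # fix: remove write permission for group and other
ls -l packetbeat.yml        # should now show -rw-r--r--
```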
Found Unexpected or Unknown Characters?
Either there is a problem with the structure of your config file, or you have used a path or expression that the YAML parser cannot resolve because the config file contains characters that aren’t properly escaped.
If the YAML file contains paths with spaces or unusual characters, wrap the paths in single quotation marks (see Wrap Paths in Single Quotation Marks).
Also see the general advice under YAML Tips and Gotchas.
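For example, a Windows path that contains backslashes and spaces should be wrapped in single quotation marks so the YAML parser does not misinterpret it (a sketch; the output.file path is illustrative):

```yaml
output.file:
  # Without the quotes, the backslashes and the space would confuse the parser.
  path: 'C:\Program Files\Packetbeat\output'
```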
Logstash connection doesn’t work?
You may have configured Logstash or Packetbeat incorrectly. To resolve the issue:
- Make sure that Logstash is running and you can connect to it. First, try to ping the Logstash host to verify that you can reach it from the host running Packetbeat. Then use either nc or telnet to make sure that the port is available. For example:

  ping <hostname or IP>
  telnet <hostname or IP> 5044
- Verify that the config file for Packetbeat specifies the correct port where Logstash is running.
- Make sure that the Elasticsearch output is commented out in the config file and the Logstash output is uncommented.
- Confirm that the most recent Beats input plugin for Logstash is installed and configured. Note that Beats will not connect to the Lumberjack input plugin. See Updating the Beats Input Plugin for Logstash.
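Taken together, the output section of packetbeat.yml should look something like this sketch (host and port are illustrative):

```yaml
# Elasticsearch output commented out:
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# Logstash output enabled, pointing at the Beats input port:
output.logstash:
  hosts: ["logstash-host:5044"]
```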
@metadata is missing in Logstash?
Logstash outputs remove @metadata fields automatically. Therefore, if Logstash instances are chained directly or via some message queue (for example, Redis or Kafka), the @metadata field will not be available in the final Logstash instance.

To preserve @metadata fields, use the Logstash mutate filter with the rename setting to rename the fields to non-internal fields.
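For example, a mutate filter in the first Logstash instance can copy what you need out of @metadata before it is dropped (a sketch; the target field name beat_name is illustrative):

```
filter {
  mutate {
    # @metadata is removed on output, so rename it to a regular field first.
    rename => { "[@metadata][beat]" => "beat_name" }
  }
}
```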
Difference between Logstash and Beats?
Beats are lightweight data shippers that you install as agents on your servers to send specific types of operational data to Elasticsearch. Beats have a small footprint and use fewer system resources than Logstash.
Logstash has a larger footprint, but provides a broad array of input, filter, and output plugins for collecting, enriching, and transforming data from a variety of sources.
For more information, see the Logstash Introduction and the Beats Overview.
SSL client fails to connect to Logstash?
The host running Logstash might be unreachable or the certificate may not be valid. To resolve your issue:
- Make sure that Logstash is running and you can connect to it. First, try to ping the Logstash host to verify that you can reach it from the host running Packetbeat. Then use either nc or telnet to make sure that the port is available. For example:

  ping <hostname or IP>
  telnet <hostname or IP> 5044
- Verify that the certificate is valid and that the hostname and IP match. For testing purposes only, you can set verification_mode: none to disable hostname checking.
- Use OpenSSL to test connectivity to the Logstash server and diagnose problems. See the OpenSSL documentation for more info.
- Make sure that you have enabled SSL (set ssl => true) when configuring the Beats input plugin for Logstash.
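On the Logstash side, a Beats input with SSL enabled typically looks like this sketch (the port and certificate paths are illustrative):

```
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}
```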
Common SSL-Related Errors and Resolutions
Here are some common errors and ways to fix them:
x509: cannot validate certificate for <IP address> because it doesn’t contain any IP SANs
This happens because your certificate is only valid for the hostname present in the Subject field.
To resolve this problem, try one of these solutions:
- Create a DNS entry for the hostname mapping it to the server’s IP.
- Create an entry in /etc/hosts for the hostname. Or on Windows add an entry to C:\Windows\System32\drivers\etc\hosts.
- Re-create the server certificate and add a SubjectAltName (SAN) for the IP address of the server. This makes the server’s certificate valid for both the hostname and the IP address.
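For example, with OpenSSL 1.1.1 or later you can create a self-signed certificate whose SAN covers both a hostname and an IP address (a sketch; the CN, IP, and file names are illustrative):

```shell
# Generate a key and a self-signed certificate with DNS and IP SANs.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout logstash.key -out logstash.crt \
  -subj "/CN=logstash.example.com" \
  -addext "subjectAltName=DNS:logstash.example.com,IP:192.0.2.10"

# Confirm that the IP shows up in the Subject Alternative Name extension.
openssl x509 -in logstash.crt -noout -ext subjectAltName
```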
getsockopt: no route to host
This is not an SSL problem. It’s a networking problem. Make sure the two hosts can communicate.
getsockopt: connection refused
This is not an SSL problem. Make sure that Logstash is running and that there is no firewall blocking the traffic.
No connection could be made because the target machine actively refused it
A firewall is refusing the connection. Check if a firewall is blocking the traffic on the client, the network, or the destination host.
Fields show up as nested JSON in Kibana?
When Packetbeat exports a field of type dictionary, and the keys are not known in advance, the Discovery page in Kibana will display the field as a nested JSON object:
http.response.headers = { "content-length": 12, "content-type": "application/json" }
To fix this, reload the index pattern in Kibana under Management → Index Patterns. The index pattern will then be updated with a field for each key available in the dictionary:
http.response.headers.content-length = 12
http.response.headers.content-type = "application/json"