Using IP Filtering
You can apply IP filtering to application clients, node clients, or transport clients, in addition to other nodes that are attempting to join the cluster.
If a node's IP address is on the blacklist, Shield still accepts the connection to Elasticsearch, but drops it immediately and processes no requests.
Elasticsearch installations are not designed to be publicly accessible over the Internet. IP Filtering and the other security capabilities of Shield do not change this condition.
Enabling IP filtering
Shield provides an access control feature that allows or rejects hosts, domains, or subnets.
You configure IP filtering by specifying the shield.transport.filter.allow and shield.transport.filter.deny settings in elasticsearch.yml. Allow rules take precedence over deny rules.
Example 1. Allow/Deny Statement Priority.
shield.transport.filter.allow: "192.168.0.1"
shield.transport.filter.deny: "192.168.0.0/24"
The _all keyword denies all connections that are not explicitly allowed.
Example 2. _all Keyword Usage.
shield.transport.filter.allow: [ "192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4" ]
shield.transport.filter.deny: _all
IP filtering settings also support IPv6 addresses.
Example 3. IPv6 Filtering.
shield.transport.filter.allow: "2001:0db8:1234::/48"
shield.transport.filter.deny: "1234:0db8:85a3:0000:0000:8a2e:0370:7334"
Shield supports hostname filtering when DNS lookups are available.
Example 4. Hostname Filtering.
shield.transport.filter.allow: localhost
shield.transport.filter.deny: '*.google.com'
Disabling IP Filtering
Disabling IP filtering can slightly improve performance under some conditions. To disable IP filtering entirely, set the shield.transport.filter.enabled setting in the elasticsearch.yml configuration file to false.
Example 5. Disabled IP Filtering.
shield.transport.filter.enabled: false
You can also disable IP filtering for the transport protocol while keeping it enabled for HTTP:
Example 6. Enable HTTP-based IP Filtering.
shield.transport.filter.enabled: false
shield.http.filter.enabled: true
Specifying TCP transport profiles
To support bindings on multiple hosts, you can specify the profile name as a prefix to allow or deny connections on a per-profile basis.
Example 7. Profile-based filtering.
shield.transport.filter.allow: 172.16.0.0/24
shield.transport.filter.deny: _all
transport.profiles.client.shield.filter.allow: 192.168.0.0/24
transport.profiles.client.shield.filter.deny: _all
Note: When you do not specify a profile, default is used automatically.
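Because the default profile is used when no prefix is given, a rule without a profile prefix should be equivalent to writing it against the default profile explicitly. The following sketch assumes the transport.profiles.default prefix follows the same pattern as the client profile shown above:
shield.transport.filter.allow: 172.16.0.0/24
transport.profiles.default.shield.filter.allow: 172.16.0.0/24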
HTTP Filtering
You may want to apply different IP filtering rules to the transport and HTTP protocols.
Example 8. HTTP only filtering.
shield.transport.filter.allow: localhost
shield.transport.filter.deny: '*.google.com'
shield.http.filter.allow: 172.16.0.0/16
shield.http.filter.deny: _all
Dynamically updating IP filter settings (added in 1.1.0)
In environments with highly dynamic IP addresses, such as cloud-based hosting, it is hard to know the IP addresses up front when provisioning a machine. Instead of changing the configuration file and restarting the node, you can use the Cluster Update Settings API:
curl -XPUT localhost:9200/_cluster/settings -d '{
    "persistent" : {
        "shield.transport.filter.allow" : "172.16.0.0/24"
    }
}'
You can also disable filtering completely by setting shield.transport.filter.enabled:
curl -XPUT localhost:9200/_cluster/settings -d '{
    "persistent" : {
        "shield.transport.filter.enabled" : false
    }
}'
To prevent you from locking yourself out, the default bound transport address is never denied. This means you can always SSH into the system and use curl to apply changes.
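For example, to confirm which persistent filter settings are currently applied, you can query the cluster settings from such a host:
curl -XGET 'localhost:9200/_cluster/settings?pretty'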
Using Tribe Nodes with Secured Clusters
Tribe nodes act as a federated client across multiple clusters. When using tribe nodes with secured clusters, all clusters must have Shield installed and share the same security configuration (users, roles, user-role mappings, SSL/TLS CA). The tribe node itself must also be configured to grant access to actions and indices on all of the connected clusters, as security checking is primarily done on the tribe node.
To use a tribe node with secured clusters:
- Install Shield on the tribe node and every node in each connected cluster.
- Enable message authentication globally. Generate a system key on one node and copy it to the tribe node and every other node in each connected cluster.
  For message authentication to work properly across multiple clusters, the tribe node and all of the connected clusters must share the same system key. By default, Shield reads the system key from CONFIG_DIR/shield/system_key. If you store the key in a different location, you must configure the shield.system_key.file setting in elasticsearch.yml.
- Enable encryption globally. To encrypt communications, you must enable SSL/TLS on every node.
  To simplify SSL/TLS configuration, use the same certificate authority to generate certificates for all connected clusters.
- Configure the tribe in the tribe node's elasticsearch.yml file. You must specify each cluster that's a part of the tribe and configure discovery and encryption settings per cluster. For example, the following configuration adds two clusters to the tribe:
  tribe:
    on_conflict: prefer_cluster1
    c1:
      cluster.name: cluster1
      discovery.zen.ping.unicast.hosts: [ "cluster1-node1:9300", "cluster1-node2:9300" ]
      shield.ssl.keystore.path: /path-to-keystore/es-tribe-01.jks
      shield.ssl.keystore.password: secretpassword
      shield.ssl.keystore.key_password: secretpassword
      shield.transport.ssl: true
      shield.http.ssl: true
    c2:
      cluster.name: cluster2
      discovery.zen.ping.unicast.hosts: [ "cluster2-node1:9300", "cluster2-node2:9300" ]
      shield.ssl.keystore.path: /path-to-keystore/es-tribe-01.jks
      shield.ssl.keystore.password: secretpassword
      shield.ssl.keystore.key_password: secretpassword
      shield.transport.ssl: true
      shield.http.ssl: true
- Configure the same index privileges for your users on all nodes, including the tribe node. The nodes in each cluster must grant access to indices in other connected clusters as well as their own.
  For example, let's assume cluster1 and cluster2 each have a single index, index1 and index2. To enable a user to submit a request through the tribe node to search both clusters (a sample search request is shown after this procedure):
  - On the tribe node and both clusters, define a tribe_user role that has read access to index1 and index2:
    tribe_user:
      indices:
        'index*': search
  - Assign the tribe_user role to a user on the tribe node and both clusters. For example, run the following command on each node to create my_tribe_user and assign the tribe_user role:
    ./bin/shield/esusers useradd my_tribe_user -p password -r tribe_user
- Grant selected users permission to retrieve merged cluster state information for the tribe from the tribe node. To do that, grant them the monitor privilege on the tribe node. For example, you could create a tribe_monitor role that assigns the monitor privilege:
  tribe_monitor:
    cluster: monitor
  Each cluster can have its own users with admin privileges. You cannot perform administration tasks, such as creating an index, through the tribe node; you must send such requests directly to the appropriate cluster.
- Start the tribe node. If you've made configuration changes to the nodes in the connected clusters, those nodes also need to be restarted.
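As a quick check of the example setup above, a search submitted to the tribe node as my_tribe_user should return hits from both index1 and index2. The host name tribe-node and the use of plain HTTP with basic authentication are assumptions; adjust them to your tribe node's address and SSL settings:
curl -u my_tribe_user:password 'http://tribe-node:9200/index1,index2/_search?pretty'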