Use Kibana in a production environment
How you deploy Kibana largely depends on your use case. If you are the only user, you can run Kibana on your local machine and configure it to point to whatever Elasticsearch instance you want to interact with. Conversely, if you have a large number of heavy Kibana users, you might need to load balance across multiple Kibana instances that are all connected to the same Elasticsearch instance.
While Kibana isn't terribly resource-intensive, we still recommend running Kibana separately from your Elasticsearch data or master nodes. To distribute Kibana traffic across the nodes in your Elasticsearch cluster, you can run Kibana and an Elasticsearch client node on the same machine. For more information, see Load balancing across multiple Elasticsearch nodes.
Use Elastic Stack security features
You can use Elastic Stack security features to control what Elasticsearch data users can access through Kibana.
When security features are enabled, Kibana users have to log in. They need to have a role granting Kibana privileges as well as access to the indices they will be working with in Kibana.
If a user loads a Kibana dashboard that accesses data in an index that they are not authorized to view, they get an error that indicates the index does not exist.
For more information on granting access to Kibana, see Granting access to Kibana.
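As a sketch, such a role could be created through the Elasticsearch role API; the role name, index pattern, and privilege choices below are illustrative assumptions, not values from this guide:
# Hypothetical role: read-only access to "sales-*" indices plus read
# access to the Dashboard feature in the default Kibana space.
curl -u elastic -X PUT "localhost:9200/_security/role/sales_dashboards" \
  -H 'Content-Type: application/json' -d'
{
  "indices": [
    { "names": ["sales-*"], "privileges": ["read", "view_index_metadata"] }
  ],
  "applications": [
    {
      "application": "kibana-.kibana",
      "privileges": ["feature_dashboard.read"],
      "resources": ["space:default"]
    }
  ]
}'
A user assigned this role could open dashboards over the matching indices, while dashboards over other indices would produce the index-does-not-exist error described above.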
Require Content Security Policy
Kibana uses a Content Security Policy (CSP) to help prevent the browser from allowing unsafe scripting, but older browsers will silently ignore this policy. If your organization does not need to support Internet Explorer 11 or much older versions of our other supported browsers, we recommend that you enable Kibana's strict mode for Content Security Policy, which will block access to Kibana for any browser that does not enforce even a rudimentary set of CSP protections.
To do this, set csp.strict to true in your kibana.yml:
csp.strict: true
Enable SSL
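A minimal sketch of the relevant kibana.yml settings for serving Kibana over TLS, assuming you already have a server certificate and private key on disk (the paths below are placeholders):
# Placeholder paths; point these at your actual certificate and key.
server.ssl.enabled: true
server.ssl.certificate: /path/to/kibana-server.crt
server.ssl.key: /path/to/kibana-server.key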
Load balancing across multiple Elasticsearch nodes
If you have multiple nodes in your Elasticsearch cluster, the easiest way to distribute Kibana requests across the nodes is to run an Elasticsearch Coordinating only node on the same machine as Kibana. Elasticsearch Coordinating only nodes are essentially smart load balancers that are part of the cluster. They process incoming HTTP requests, redirect operations to the other nodes in the cluster as needed, and gather and return the results. For more information, see Node in the Elasticsearch reference.
To use a local client node to load balance Kibana requests:
- Install Elasticsearch on the same machine as Kibana.
- Configure the node as a Coordinating only node. In elasticsearch.yml, set node.data, node.master, and node.ingest to false:
  # You want this node to be neither master nor data node nor ingest node,
  # but to act as a "search load balancer" (fetching data from nodes,
  # aggregating results, etc.)
  node.master: false
  node.data: false
  node.ingest: false
- Configure the node to join your Elasticsearch cluster. In elasticsearch.yml, set cluster.name to the name of your cluster:
  cluster.name: "my_cluster"
- Check your transport and HTTP host configs in elasticsearch.yml under network.host and transport.host. The transport.host needs to be on a network reachable by the cluster members; the network.host is the network for the HTTP connection for Kibana (localhost:9200 by default):
  network.host: localhost
  http.port: 9200
  # by default transport.host refers to network.host
  transport.host: <external ip>
  transport.tcp.port: 9300 - 9400
- Make sure Kibana is configured to point to your local client node. In kibana.yml, the elasticsearch.hosts setting should be set to ["localhost:9200"]:
  # The Elasticsearch instance to use for all your queries.
  elasticsearch.hosts: ["http://localhost:9200"]
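Once the node is running, a quick way to confirm it joined the cluster as coordinating only (a sketch assuming the local setup above) is to list the node roles; a coordinating-only node shows a bare - in the node.role column:
curl "localhost:9200/_cat/nodes?v&h=name,node.role"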
Load balancing across multiple Kibana instances
To serve multiple Kibana installations behind a load balancer, you must change the configuration. See Configuring Kibana for details on each setting.
Settings unique across each Kibana instance:
server.uuid
server.name
Settings unique across each host (for example, running multiple installations on the same virtual machine):
logging.dest
path.data
pid.file
server.port
Settings that must be the same:
xpack.security.encryptionKey  // decrypting session information
xpack.reporting.encryptionKey  // decrypting reports
xpack.encryptedSavedObjects.encryptionKey  // decrypting saved objects
xpack.encryptedSavedObjects.keyRotation.decryptionOnlyKeys  // saved objects encryption key rotation, if any
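For illustration, every instance's kibana.yml might carry the same keys; the values below are placeholders, not real keys (any sufficiently long random strings of at least 32 characters work):
# Identical on every Kibana instance behind the load balancer.
xpack.security.encryptionKey: "something_at_least_32_characters_long_1"
xpack.reporting.encryptionKey: "something_at_least_32_characters_long_2"
xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters_long_3"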
Separate configuration files can be used from the command line by using the -c flag:
bin/kibana -c config/instance1.yml
bin/kibana -c config/instance2.yml
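As an illustration, instance1.yml might differ from instance2.yml only in the per-instance and per-host settings listed above (the names and paths below are hypothetical):
# instance1.yml
server.name: "kibana-1"
server.port: 5601
path.data: /var/lib/kibana/instance1
pid.file: /var/run/kibana/instance1.pid
logging.dest: /var/log/kibana/instance1.log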
Accessing multiple load-balanced Kibana clusters
To be able to access multiple load-balanced Kibana clusters from the same browser, the setting xpack.security.cookieName must be set in the configuration to avoid conflicts between cookies from the different Kibana instances.
Within each cluster, Kibana instances should share the same cookieName value so that high availability is seamless and the session stays active if the instance currently in use fails.
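As a sketch, with hypothetical cookie names, each cluster picks its own value and uses it on all of its instances:
# kibana.yml on every instance of cluster A
xpack.security.cookieName: "sid-cluster-a"
# kibana.yml on every instance of cluster B
xpack.security.cookieName: "sid-cluster-b"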
High availability across multiple Elasticsearch nodes
Kibana can be configured to connect to multiple Elasticsearch nodes in the same cluster. If a node becomes unavailable, Kibana transparently connects to an available node and continues operating. Requests to available hosts are routed in a round-robin fashion.
Currently the Console application is limited to connecting to the first node listed.
In kibana.yml:
elasticsearch.hosts:
  - http://elasticsearch1:9200
  - http://elasticsearch2:9200
Related configurations include elasticsearch.sniffInterval, elasticsearch.sniffOnStart, and elasticsearch.sniffOnConnectionFault. These can be used to automatically update the list of hosts as a cluster is resized. Parameters can be found on the settings page.
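A minimal sketch of those settings in kibana.yml; the interval value is an arbitrary example, in milliseconds:
# Fetch the node list at startup, refresh it every 60 seconds,
# and refresh it again whenever a connection fails.
elasticsearch.sniffOnStart: true
elasticsearch.sniffInterval: 60000
elasticsearch.sniffOnConnectionFault: true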
Memory
Kibana has a default memory limit that scales based on total memory available. In some scenarios, such as large reporting jobs, it may make sense to tweak limits to meet more specific requirements.
A limit can be defined by setting --max-old-space-size in the node.options config file found inside the kibana/config folder, or in any other folder configured with the environment variable KBN_PATH_CONF. For example, on Debian-based systems, the folder is /etc/kibana.
The option accepts a limit in MB:
--max-old-space-size=2048