Running Enterprise Search Using Docker

As an alternative to the native installation method, you can run Enterprise Search in a Docker container. This is useful for running the solution in development and test environments or in production, when combined with an orchestration tool like Docker Compose or Kubernetes.

This page explains how to run Enterprise Search in Docker using a simple console command or Docker Compose. For instructions on running the solution in Kubernetes, see the dedicated guide for running Enterprise Search with ECK.

Docker Image

The Elastic Docker registry provides a Docker image for Enterprise Search. The image supports both x86 and ARM platforms.

You can download the image from the registry, or use docker pull:

docker pull docker.elastic.co/enterprise-search/enterprise-search:7.17.25
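
The image is multi-platform, and Docker selects the variant matching your host automatically. If you need a specific architecture, you can request it explicitly with the standard --platform flag (shown here as an illustration):

docker pull --platform linux/arm64 docker.elastic.co/enterprise-search/enterprise-search:7.17.25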

Configuration

When running in Docker, you configure Enterprise Search through environment variables, using fully-qualified setting names as the variable names. See Configuration for the list of configurable values.

You must set the values that are required for a standard installation. In most cases, these are allow_es_settings_modification and secret_management.encryption_keys.
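
As an illustration of the mapping, a setting that would go in the enterprise-search.yml configuration file in a native installation is passed to the container as an environment variable with the same fully-qualified name. A minimal sketch, reusing a setting that appears in the examples below:

# enterprise-search.yml (native installation):
#   elasticsearch.host: http://localhost:9200
#
# The same setting as a container environment variable:
docker run -e elasticsearch.host='http://localhost:9200' ...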

Running Enterprise Search with Docker CLI

To start a standalone Enterprise Search instance in a container, use the docker command-line tool:

docker run \
  -p 3002:3002 \
  --add-host="host.docker.internal:host-gateway" \
  -e elasticsearch.host='http://host.docker.internal:9200' \
  -e elasticsearch.username=elastic \
  -e elasticsearch.password=changeme \
  -e allow_es_settings_modification=true \
  -e secret_management.encryption_keys='[4a2cd3f81d39bf28738c10db0ca782095ffac07279561809eecc722e0c20eb09]' \
docker.elastic.co/enterprise-search/enterprise-search:7.17.25
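
Once the container is running, you can follow the bootstrap progress in its logs and confirm that the web interface answers on the published port. A quick, illustrative check (the --filter expression below simply locates the container started from this image):

# follow the startup logs of the container started from the image
docker logs -f $(docker ps -q --filter "ancestor=docker.elastic.co/enterprise-search/enterprise-search:7.17.25")

# after bootstrap completes, the web UI should respond on port 3002
curl -I http://localhost:3002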

Running Enterprise Search with Docker Compose

A much more convenient way to run the solution in a container is with Docker Compose. This method is often used in local development environments to try out the product before a full production deployment.

Here is an example of running Enterprise Search with Elasticsearch and Kibana in Docker Compose:

  1. Create a docker-compose.yml file, replacing the {version} value with the product version you want to use:

    version: "2"
    
    networks:
      elastic:
        driver: bridge
    
    volumes:
      elasticsearch:
        driver: local
    
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:{version}
        restart: unless-stopped
        environment:
          - "discovery.type=single-node"
          - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
          - "xpack.security.enabled=true"
          - "xpack.security.authc.api_key.enabled=true"
          - "ELASTIC_PASSWORD=changeme"
        ulimits:
          memlock:
            soft: -1
            hard: -1
        volumes:
          - elasticsearch:/usr/share/elasticsearch/data
        ports:
          - 127.0.0.1:9200:9200
        networks:
          - elastic
    
      ent-search:
        image: docker.elastic.co/enterprise-search/enterprise-search:{version}
        restart: unless-stopped
        depends_on:
          - "elasticsearch"
        environment:
          - "JAVA_OPTS=-Xms512m -Xmx512m"
          - "ENT_SEARCH_DEFAULT_PASSWORD=changeme"
          - "elasticsearch.username=elastic"
          - "elasticsearch.password=changeme"
          - "elasticsearch.host=http://elasticsearch:9200"
          - "allow_es_settings_modification=true"
          - "secret_management.encryption_keys=[4a2cd3f81d39bf28738c10db0ca782095ffac07279561809eecc722e0c20eb09]"
          - "elasticsearch.startup_retry.interval=15"
        ports:
          - 127.0.0.1:3002:3002
        networks:
          - elastic
    
      kibana:
        image: docker.elastic.co/kibana/kibana:{version}
        restart: unless-stopped
        depends_on:
          - "elasticsearch"
          - "ent-search"
        ports:
          - 127.0.0.1:5601:5601
        environment:
          ELASTICSEARCH_HOSTS: http://elasticsearch:9200
          ENTERPRISESEARCH_HOST: http://ent-search:3002
          ELASTICSEARCH_USERNAME: elastic
          ELASTICSEARCH_PASSWORD: changeme
        networks:
          - elastic

    This sample Docker Compose file brings up a single-node Elasticsearch cluster, starts an Enterprise Search instance connected to it, and configures a Kibana instance as the main way of interacting with the solution.

    All components running in Docker Compose are attached to a dedicated Docker network called elastic and are exposed via a set of local ports accessible only from the local machine. If you want to open the services to other computers on your network, change the port mappings for the services you want to share (e.g. change 127.0.0.1:5601:5601 to 5601:5601 for Kibana).

    A dedicated Docker named volume called elasticsearch is created to store the Elasticsearch node data directory so the data persists across restarts. If the volume does not already exist, Docker Compose creates it when you bring up the cluster.
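
    Note that Docker Compose prefixes volume names with the project name, which defaults to the name of the directory containing docker-compose.yml, so the volume will typically appear as something like myproject_elasticsearch. To locate and inspect it (myproject here is a placeholder for your actual project name):

    docker volume ls
    docker volume inspect myproject_elasticsearch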

  2. Make sure Docker Engine is allotted at least 4 GiB of memory. In Docker Desktop, you configure resource usage on the Advanced tab in Preferences (macOS) or Settings (Windows).

    Docker Compose is not pre-installed with Docker on Linux. See docs.docker.com for installation instructions: Install Compose on Linux
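
    To verify the allocation from the command line, you can query the engine directly; the value is reported in bytes (a quick sanity check, shown for convenience):

    docker info --format '{{.MemTotal}}'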

  3. Run docker-compose to bring up the cluster:

    docker-compose up --remove-orphans
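
    While the services come up, you can watch their status and follow the Enterprise Search logs from another terminal (standard Compose subcommands, shown here as a convenience):

    docker-compose ps
    docker-compose logs -f ent-search
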
  4. After the solution starts and finishes the bootstrap process, you can access it in one of two ways: through Kibana at http://localhost:5601, or directly through the Enterprise Search web interface at http://localhost:3002.

You can use the elastic user and the password specified in the docker-compose.yml file (changeme by default) to log in to the solution.

The Elasticsearch API should be accessible at http://localhost:9200.
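
For example, assuming the default credentials from the docker-compose.yml file above, you can confirm the API is reachable with curl:

curl -u elastic:changeme http://localhost:9200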

To stop the cluster, run docker-compose down or press Ctrl+C in your terminal.

The data in the Docker volumes is preserved and loaded when you restart the cluster with docker-compose up. To delete the data volumes when you bring down the cluster, specify the -v option: docker-compose down -v.