
I am trying to host Elasticsearch and Kibana in AWS ECS (Fargate). I have created a docker-compose.yml file:

version: '2.2'
services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        limits:
          memory: 8Gb
    command: > 
        bash -c 
          'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
          /usr/local/bin/docker-entrypoint.sh'
    container_name: es-$ENV
    environment:
      - node.name=es-$ENV
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      # - discovery.seed_hosts=es02,es03
      # - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=$ES_DB_PASSWORD
      - xpack.security.enabled=true
    logging:
      driver: awslogs
      options:
         awslogs-group: we-two-works-db-ecs-context
         awslogs-region: us-east-1
         awslogs-stream-prefix: es-node
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic

  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    container_name: kibana-$ENV
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: $ES_DB_URL
      ELASTICSEARCH_HOSTS: '["http://es-$ENV:9200"]'
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: $ES_DB_PASSWORD
    networks:
      - elastic
    logging:
      options:
        awslogs-group: we-two-works-db-ecs-context
        awslogs-region: us-east-1
        awslogs-stream-prefix: "kibana-node"

volumes: 
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0

networks:
  elastic:
    driver: bridge

and pass in the env variables using a .env.development file:

ENV="development"

ES_DB_URL="localhost"
ES_DB_PORT=9200
ES_DB_USER="elastic"
ES_DB_PASSWORD="****"

and up the stack in ECS, after creating a docker context pointing to ECS, using this command: docker compose --env-file ./.env.development up. However, after creating the stack, the Kibana node fails to establish communication with the Elasticsearch node. Checking the logs from the Kibana node container:

{
    "type": "log",
    "@timestamp": "2021-12-09T02:07:04Z",
    "tags": [
        "warning",
        "plugins-discovery"
    ],
    "pid": 7,
    "message": "Expect plugin \"id\" in camelCase, but found: beats_management"
}
{
    "type": "log",
    "@timestamp": "2021-12-09T02:07:04Z",
    "tags": [
        "warning",
        "plugins-discovery"
    ],
    "pid": 7,
    "message": "Expect plugin \"id\" in camelCase, but found: triggers_actions_ui"
}
[BABEL] Note: The code generator has deoptimised the styling of /usr/share/kibana/x-pack/plugins/canvas/server/templates/pitch_presentation.js as it exceeds the max of 500KB.

After doing some research I have found that the ECS CLI does not support the service.networks docker-compose field, and the documentation gives these instructions: "Communication between services is implemented by SecurityGroups within the application VPC." I am wondering how to set these instructions in the docker-compose.yml file, because the IP addresses get assigned only after the stack is created.

1 Answer


These containers should be able to communicate with each other via their compose service names. So, for example, the kibana container should be able to reach the ES node using es-node. I assume you need to set ELASTICSEARCH_HOSTS: '["http://es-node:9200"]'?
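For example, a minimal sketch of that environment entry (this assumes the ES compose service is named es-node, i.e. the fixed service name rather than the es-$ENV container name):

```yaml
kibana-node:
  environment:
    # Point Kibana at the ES compose *service name*, which ECS service
    # discovery resolves, instead of localhost or a container_name.
    ELASTICSEARCH_HOSTS: '["http://es-node:9200"]'
```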

I am also not sure about ELASTICSEARCH_URL: $ES_DB_URL. I see you set ES_DB_URL="localhost", but that means the kibana container will be calling localhost to try to reach the ES service (this may work on a laptop where all containers run on a flat network, but that's not how it will work on ECS, where each compose service is a separate ECS service).

[UPDATE] I took a stab at the compose file provided. Note that I have simplified it a bit by removing some things such as the env file and the logging entries (why did you need them? Compose/ECS will create the logging infrastructure for you).

This file works for me (with gotchas - see below):

services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    command: > 
        bash -c 
          'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
          /usr/local/bin/docker-entrypoint.sh'
    container_name: es-node
    environment:
      - node.name=es-node
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=thisisawesome
      - xpack.security.enabled=true
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    container_name: kibana-node
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: es-node
      ELASTICSEARCH_HOSTS: http://es-node:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: thisisawesome

volumes: 
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0

There are two major things I had to fix:

1- the Kibana task needed more horsepower (the default 0.5 vCPU and 512MB of memory were not enough). I set the memory to 8GB (which set the CPU to 1) and the Kibana container came up.

2- I had to increase ulimits for the ES container. Some of the error messages in the logs pointed to max open files and vm.max_map_count, both of which pointed to ulimits needing to be adjusted. For Fargate you need a special section in the task definition. I know there is a way to embed CFN code into the compose file via overlays, but I found it easier/quicker to docker compose convert the compose file into a CFN template and tweak that by adding this section right below the image:

        "ulimits": [
          {
            "name": "nofile",
            "softLimit": 65535,
            "hardLimit": 65535
          }
        ]

So to recap, you'd need to take my compose above, convert it into a CFN file, add the ulimits snippet, and run it directly in CFN.
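A sketch of that workflow (the stack name is a placeholder; run the convert step with the ECS docker context active):

```shell
# Generate the CloudFormation template from the compose file.
docker compose convert > stack.cfn.yml

# Manually add the "ulimits" section shown above to the ES container
# definition inside stack.cfn.yml, then deploy the template directly:
aws cloudformation deploy \
  --template-file stack.cfn.yml \
  --stack-name es-kibana-stack \
  --capabilities CAPABILITY_IAM
```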

You can work backwards from here to re-add your variables etc.

HTH

mreferre
  • I ran it after changing `ELASTICSEARCH_HOSTS: '["http://es-node:9200"]'` and set the `ELASTICSEARCH_URL: ..local` as suggested here - [link](https://docs.docker.com/cloud/ecs-integration/#service-discovery) under the service discovery section, but the issue still seems to persist. – Charith Jayawardana Dec 09 '21 at 11:43
  • Sorry for the stupid question, but are you actually using `..local` literally, or are you replacing the placeholders with the actual names? (The service in your case should be `es-node`, and the project name by default is the folder you are launching the compose up command from.) – mreferre Dec 10 '21 at 08:02
  • Given they are in the same compose file, I think you can omit `.local` as it's assumed. Can you try with `ELASTICSEARCH_URL: 'es-node'`? – mreferre Dec 10 '21 at 08:04
  • No worries, I have replaced them with the actual values. I will try this `ELASTICSEARCH_URL: 'es-node'` and see how it goes. Thank you – Charith Jayawardana Dec 10 '21 at 08:42
  • Edited my answer with a "work around" – mreferre Dec 10 '21 at 18:38