
My problem

I have successfully deployed a Nomad job with a few dozen Redis Docker containers on AWS, using the default Redis image from Docker Hub.

I've slightly altered the default job file created by nomad init to change the number of running containers, and everything works as expected.

The problem is that the actual image I would like to run is hosted in ECR, which requires AWS credentials (access key and secret key), and I don't know how to pass them to Nomad.

Code

job "example" {
  datacenters = ["dc1"]
  type = "service"
  update {
    max_parallel = 1
    min_healthy_time = "10s"
    healthy_deadline = "3m"
    auto_revert = false    
    canary = 0
  }    
  group "cache" {
    count = 30    
    restart {
      attempts = 10
      interval = "5m"    
      delay = "25s"    
      mode = "delay"
    }    
    ephemeral_disk {    
      size = 300
    }    
    task "redis" {
      driver = "docker"    
      config {
        # My problem here
        image = "https://-whatever-.dkr.ecr.us-east-1.amazonaws.com/-whatever-"
        port_map {
          db = 6379
        }
      }
      resources {
        network {
          mbits = 10
          port "db" {}
        }
      }
      service {
        name = "global-redis-check"
        tags = ["global", "cache"]
        port = "db"
        check {
          name     = "alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

What I have tried

  • Extensive Google Search
  • Reading the manual
  • Placing the AWS credentials on the machine that runs the Nomad job (using aws configure)

My question

How can Nomad be configured to pull Docker images from AWS ECR using AWS credentials?

Adam Matan
Interesting that this stays unanswered while a Google search shows this question in the first 5 results. Also, your reputation suggests you know to read the documentation, so the question is not without prior research. People are still looking for this. – titus Dec 31 '20 at 09:50

2 Answers


Pretty late for you, but AWS ECR does not handle authentication the way Docker expects. You need to run

sudo $(aws ecr get-login --no-include-email --region <your-region>)

Running the returned command actually authenticates in a Docker-compliant way.

Note that the region is optional if the AWS CLI is configured. Personally, I attach an IAM role to the box (allowing ECR pull/list/etc.), so that I don't have to deal with credentials manually.
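
As a rough illustration of that flow, a minimal sketch (assuming AWS CLI v1 is installed and configured on the Nomad client node; us-east-1 and the account ID are placeholder values):

# Ask ECR for a temporary login command (the token is valid for about 12 hours).
# With AWS CLI v1 this prints something like:
#   docker login -u AWS -p <token> https://123456789012.dkr.ecr.us-east-1.amazonaws.com
aws ecr get-login --no-include-email --region us-east-1

# Execute the returned command so the Docker daemon on the Nomad client is
# authenticated; Nomad's docker driver can then pull images from that registry.
sudo $(aws ecr get-login --no-include-email --region us-east-1)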

sloan-dog

I don't use ECR, but if it acts like a normal Docker registry, this is what I do for my registry, and it works. Assuming that holds, it should work fine for you as well:

config {
  image = "registry.service.consul:5000/MYDOCKERIMAGENAME:latest"
  auth {
    username = "MYMAGICUSER"
    password = "MYMAGICPASSWORD"
  }
}
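
For ECR specifically, a rough sketch of how that auth block could be combined with the first answer (assumptions: ECR logins use the literal username AWS with the temporary token from aws ecr get-login as the password; the registry URL below is a placeholder):

config {
  # Placeholder ECR registry/repository from the question.
  image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/-whatever-:latest"
  auth {
    username = "AWS"                              # ECR uses this literal username
    password = "<token from aws ecr get-login>"   # temporary token, expires after ~12 hours
  }
}

Since the token is short-lived, baking it into the job file is awkward in practice, which is why the IAM-role approach from the first answer tends to be the nicer option.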
zie
  • 710
  • 4
  • 7