7

I have a Jenkins server on-premise. I have a Jenkinsfile which creates a Docker image, and now I want to push that image to AWS ECR. Do I have to create a special IAM user and provide its access key and secret access key? Or what would be the best way to do this?

I found below on internet

  withAWS(role: 'Jenkins', roleAccount: 'XXXX216610', duration: 900, roleSessionName: 'jenkinssession') {
    sh 'eval $(aws ecr get-login --no-include-email --region us-east-2)'
  }

But as my Jenkins server is on-prem, how will a role work?

AWS_Lernar

4 Answers

9

Instead of eval, you can now use the Jenkins ‘amazon-ecr’ plugin from https://plugins.jenkins.io/amazon-ecr/ for ECR deployments.

pipeline {
  environment {
    // ECR registry URL and the ID of the AWS credential stored in Jenkins
    registry = '1111111111111.dkr.ecr.eu-central-1.amazonaws.com/myRepo'
    registryCredential = 'ID_OF_MY_AWS_JENKINS_CREDENTIAL'
    dockerImage = ''
  }
  agent any
  stages {
    stage('Building image') {
      steps {
        script {
          // Build the image and tag it with the Jenkins build number
          dockerImage = docker.build registry + ":$BUILD_NUMBER"
        }
      }
    }
    stage('Deploy image') {
      steps {
        script {
          // The "ecr:<region>:<credentialId>" scheme is provided by the amazon-ecr plugin
          docker.withRegistry("https://" + registry, "ecr:eu-central-1:" + registryCredential) {
            dockerImage.push()
          }
        }
      }
    }
  }
}
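
Here registryCredential is typically the ID of an AWS access key / secret key pair stored in the Jenkins credential store, which the plugin exchanges for a temporary ECR login token. If you also want a stable tag next to the build number, the same image object can push additional tags; a minimal sketch of an extended 'Deploy image' stage (the 'latest' tag name is just an example):

    stage('Deploy image') {
      steps {
        script {
          docker.withRegistry("https://" + registry, "ecr:eu-central-1:" + registryCredential) {
            // Push the build-number tag and, additionally, a floating "latest" tag
            dockerImage.push()
            dockerImage.push('latest')
          }
        }
      }
    }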
Jonas_Hess
  • Hmm...when you see that a plugin is up for adoption, you're not massively keen on using it – ndtreviv Sep 07 '21 at 14:17
  • https://github.com/jenkinsci/amazon-ecr-plugin is no longer up for adoption. It is maintained now. I also see recent releases – Sairam Krish Jan 19 '22 at 02:11
8

Do I have to create a special IAM user and provide its access key and secret access key? Or what would be the best way to do this?

If you are running Jenkins inside AWS and you are using a secret key and access key, you are violating best practice. You should never use an access key and secret key inside an AWS VPC; these are designed for interacting with AWS from outside the AWS account.

You should create an IAM role with a specific policy that allows Jenkins only to push images to ECR.

As for your current command, eval $(aws ecr get-login --no-include-email --region us-east-2): you will always need this token to push/pull images to ECR. The token has an expiry, so you should read about this approach below, but it works fine with an IAM role.

ECR_AWSCLI-get-login-token
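
Note that aws ecr get-login was removed in AWS CLI v2; the replacement is get-login-password, piped into docker login. A rough equivalent for the region from the question (the account ID is a placeholder):

aws ecr get-login-password --region us-east-2 | \
  docker login --username AWS --password-stdin [YOUR_ACCOUNT_ID].dkr.ecr.us-east-2.amazonaws.com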

You can also explore the Amazon+ECR-plugin

About

The Amazon ECR plugin implements a Docker Token producer to convert Amazon credentials to the Jenkins API used by (mostly) all Docker-related plugins. Thanks to this producer, you can select your existing registered Amazon credentials for various Docker operations in Jenkins, for example using the CloudBees Docker Build and Publish plugin.

Adiii
  • I have Jenkins on premise. I have created a policy which has ECR write access. Now, when creating the role, which service should I choose (as it asks for a service while creating the role)? And where do I edit the trust relationship? – AWS_Lernar Nov 28 '19 at 10:20
  • For an on-premise server, a user with specific permissions only can make sense, but I do not think using a role will work with this plugin. You can look into this: https://docs.aws.amazon.com/codedeploy/latest/userguide/register-on-premises-instance-iam-session-arn.html – Adiii Nov 28 '19 at 10:26
  • You say that to use secret and access keys inside a VPC is a **best practice**, but the plugin that you suggest, **Amazon+ECR-plugin**, needs the secret and access keys – JRichardsz Jun 08 '21 at 14:57
  • "You should never use the access key and secret key inside AWS VPC" , you can use role as well if jenkins running inside VPC and I assume it should work, not sure about backend implementation of the plugin. – Adiii Jun 09 '21 at 01:56
4

It's possible, but very subtle to debug, so make sure you follow the steps below.

  1. Use a dockerfile agent in your Jenkins pipeline (you can name it Dockerfile.jenkins or something else you prefer) and install the Amazon ECR credential helper in it to get a clean and stable build environment.
FROM ubuntu:rolling

# Install the Amazon ECR credential helper so docker can authenticate against ECR transparently
RUN apt-get update && apt-get install -y amazon-ecr-credential-helper
  2. Create a config.json file in your git repo, like .docker/config.json.
{
    "credHelpers": {
        "[YOUR_ACCOUNT_ID].dkr.ecr.[YOUR_REGION].amazonaws.com": "ecr-login"
    }
}
  3. Test docker pull in your Jenkinsfile, and make sure the IAM user behind your access key has the right policy attached (probably AmazonEC2ContainerRegistryFullAccess).
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.jenkins'
        }
    }
    stages {
        stage('TEST ECR') {
            steps {
                script { 
                    sh "DOCKER_CONFIG=.docker AWS_ACCESS_KEY_ID=[YOUR_ACCESS_KEY_ID] AWS_SECRET_ACCESS_KEY=[YOUR_SECRET_KEY] docker pull [YOUR PRIVATE IMAGE]"

                    // docker.build("${tag}", "${DOCKER_BUILD_ARGS} -f Dockerfile .")
                    // sh "docker push ${tag}"
                }
            }
        }
    }
}

If the pull works, you can then change DOCKER_CONFIG=.docker AWS_ACCESS_KEY_ID=[YOUR_ACCESS_KEY_ID] AWS_SECRET_ACCESS_KEY=[YOUR_SECRET_KEY] docker pull [YOUR PRIVATE IMAGE] to docker push [YOUR IMAGE] with the same environment variable settings, as sketched below.
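
Rather than inlining the keys in the Jenkinsfile, you could also bind them from the Jenkins credential store. A minimal sketch of a push stage, assuming a username/password credential with the hypothetical ID aws-ecr-push that holds the access key ID and secret access key:

        stage('PUSH TO ECR') {
            steps {
                script {
                    // Expose the stored keys as environment variables for this step only
                    withCredentials([usernamePassword(credentialsId: 'aws-ecr-push',
                            usernameVariable: 'AWS_ACCESS_KEY_ID',
                            passwordVariable: 'AWS_SECRET_ACCESS_KEY')]) {
                        // The credential helper configured in .docker/config.json picks these up
                        sh "DOCKER_CONFIG=.docker docker push [YOUR IMAGE]"
                    }
                }
            }
        }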

Your repo would then look like this:

.
├── .docker
│   └── config.json
├── Dockerfile
└── Dockerfile.jenkins
kigawas
0

I don't think there is an easy way to assume a role from on-premise servers. As you mentioned, you will need to set up an IAM user and use the credentials in your on-prem application.

Arun Kamalanathan
  • That is not true. You can create a federated web identity token, for example, using the STS service and temporarily assume a role in your account. There is no need for storing credentials. https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithSAML.html – Pat Mar 20 '20 at 18:54
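
As an illustration of what that comment describes, temporary credentials can be requested from STS and exported just for the build. A rough sketch using assume-role-with-web-identity (role ARN, session name, token file and duration are placeholders, and setting up the identity provider is out of scope here):

# Exchange a federated web identity token for temporary role credentials (no long-lived IAM user keys)
aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::[YOUR_ACCOUNT_ID]:role/Jenkins \
  --role-session-name jenkinssession \
  --web-identity-token file://web-identity-token.jwt \
  --duration-seconds 900
# The response contains AccessKeyId, SecretAccessKey and SessionToken; export them as
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN before running
# aws ecr get-login-password / docker push.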