
I wish to deploy a dockerized service with Kubernetes on AWS. To do so I'm using the recently released AWS EKS with the AWS Fargate feature. The service's Docker image is stored in a private package on GitHub.

To deploy my service I'm using a Kubernetes manifest file containing a Secret, a Deployment and a Service.

When deploying locally with kubectl on Minikube, the deployment pods successfully pull the image from the private GitHub package. I also successfully reproduced the process for accessing a private Docker Hub registry.

I then configured kubectl to connect to my EKS cluster. When applying the manifest file I get an ImagePullBackOff status for the deployment pods when pulling from GitHub Packages, while it works fine when pulling from Docker Hub. The differences in the manifest file are as follows:

Generating the secret for GitHub Packages:

kubectl create secret docker-registry mySecret --dry-run=true \
  --docker-server=https://docker.pkg.github.com \
  --docker-username=myGithubUsername \
  --docker-password=myGithubAccessToken -o yaml
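
For reference, the dry run emits a Secret manifest along these lines (the base64 payload is shortened to a placeholder here, not real output):

    apiVersion: v1
    kind: Secret
    metadata:
      name: mySecret
      creationTimestamp: null
    type: kubernetes.io/dockerconfigjson
    data:
      # base64 encoding of {"auths":{"https://docker.pkg.github.com":{"username":...,"password":...,"auth":...}}}
      .dockerconfigjson: <base64-encoded-docker-config>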

Generating the secret for Docker Hub:

kubectl create secret docker-registry mySecret --dry-run=true \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=myDockerhubUsername \
  --docker-password=myDockerhubPassword -o yaml

The image and secret are referenced in the deployment spec as follows:

spec:
  containers:
    # when pulling from github packages 
    - image: docker.pkg.github.com/myGithubUsername/myRepo/myPackage:tag
    # when pulling from dockerhub 
    - image: myDockerhubUsername/repository:tag
    ...
  imagePullSecrets:
    - name: mySecret
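
For completeness, here is a minimal sketch of how the secret is wired into the full Deployment (metadata names such as my-service are placeholders, not my actual values):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-service
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-service
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
            - name: my-service
              image: docker.pkg.github.com/myGithubUsername/myRepo/myPackage:tag
          # imagePullSecrets lives at the pod spec level, as a sibling of containers
          imagePullSecrets:
            - name: mySecret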

To try to make this work specifically with GitHub Packages, I gave AWS Secrets Manager a try.

I created a secret "mySecret" as follows:

{
  "username" : "myGithubUsername",
  "password" : "myGithubAccessToken"
}
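
For reproducibility, this is equivalent to creating the secret with the AWS CLI (a sketch, assuming the default encryption key):

    aws secretsmanager create-secret --name mySecret \
      --secret-string '{"username":"myGithubUsername","password":"myGithubAccessToken"}' \
      --region eu-west-1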

I then created a policy to access this secret:

{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Effect": "Allow",
        "Action": [
            "secretsmanager:GetSecretValue"
        ],
        "Resource": [
            "arn:aws:secretsmanager:eu-west-1:myAWSAccountId:secret:mySecret"
        ]
    }
  ]
}

I then attached the policy to both the Cluster IAM Role of my EKS cluster and the Pod Execution Role referenced in its Fargate profile "fp-default". I'm only working in the default Kubernetes namespace. My secret and cluster are both in the eu-west-1 region.
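
For reference, the attachment boils down to the following CLI calls (the role and policy names below are placeholders for my actual ones):

    aws iam attach-role-policy --role-name myEksClusterRole \
      --policy-arn arn:aws:iam::myAWSAccountId:policy/mySecretAccessPolicy
    aws iam attach-role-policy --role-name myFargatePodExecutionRole \
      --policy-arn arn:aws:iam::myAWSAccountId:policy/mySecretAccessPolicy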

Still, I'm getting the ImagePullBackOff status when deploying.
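
The only diagnostics I have come from the pod events, inspected along these lines (the pod name is a placeholder):

    # the Events section at the bottom shows the image pull error
    kubectl describe pod myPodName
    kubectl get events --sort-by=.metadata.creationTimestamp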

I'm having a hard time finding anything that tackles this issue with AWS EKS on AWS Fargate, and would love some insight on this :)

Edit: The question was edited to present more clearly that the issue is mainly related to using GitHub Packages as the registry provider.

– alx.lzt

3 Answers


I think you need to create your secret in the same namespace as your deployment. I was able to get this to work by creating a secret

kubectl create secret docker-registry mySecret --dry-run=true \
  --docker-server=https://docker.pkg.github.com \
  --docker-username=myGithubUsername \
  --docker-password=myGithubAccessToken \
  --namespace=my-api -o yaml

In my deployment.yaml, I referenced it like this:

    spec:
      containers:
        - name: my-api
          image: docker.pkg.github.com/doc-api:0.0.0
          ports:
          - containerPort: 5000
          resources: {}
      imagePullSecrets:
        - name: mySecret
– Barry Tam
  • Hi! All the deployments are currently done in the default namespace, so both my secret and my pods are in the default namespace. Did you actually manage to get it to work with AWS EKS and Fargate, or just locally? – alx.lzt Feb 25 '20 at 09:32
  • Ahh sorry, I didn't see that you were working in the default namespace. I got this to work on EKS + Fargate, and I was able to hit my service on the public LB that it created. FWIW, I only started playing with EKS/Fargate yesterday and I loosely followed this guide https://www.learnaws.org/2019/12/16/running-eks-on-aws-fargate/ – Barry Tam Feb 25 '20 at 17:38
  • Thanks for this useful link! I went through the guide, but the image is pushed to ECR in its example. Did you actually pull the image from a GitHub package? I tried with `docker.pkg.github.com/myPackage:tag` instead of `docker.pkg.github.com/myGithubUsername/myRepo/myPackage:tag` as in your provided snippet of code, thinking it might be it, but without any success. – alx.lzt Feb 26 '20 at 09:26
  • I did - I actually used a private registry that wasn't `docker.pkg.github.com`; I just wrote it that way to hide my company's registry. You will want to use the full path that you can `docker pull` from locally. – Barry Tam Feb 26 '20 at 20:34
  • Ok, thanks! As I mentioned in an edit of my initial post (title and at the end), the issue seems to be related to accessing GitHub Packages, since I managed to make this work with Docker Hub! – alx.lzt Feb 27 '20 at 08:47

After digging a little and exchanging with some of the AWS technical staff:

The problem seems to be that EKS Fargate uses containerd under the hood.

containerd and Docker pull images differently. containerd is currently tracking multiple issues with registry providers that only correctly support the Docker protocol but not the OCI HTTP API V2, GitHub being one of them.

As mentioned by the director of product at GitHub, the issue will be addressed in a few weeks to a couple of months.
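
One way to reproduce the difference outside of EKS (a sketch, assuming a host with both Docker and containerd installed; the credentials and image path are placeholders) is to pull the same image with containerd's ctr client instead of the Docker CLI:

    # succeeds after a docker login against docker.pkg.github.com
    docker pull docker.pkg.github.com/myGithubUsername/myRepo/myPackage:tag
    # fails against registries that only implement the Docker-specific behavior,
    # since containerd speaks the standard V2/OCI protocol
    sudo ctr images pull --user myGithubUsername:myGithubAccessToken \
      docker.pkg.github.com/myGithubUsername/myRepo/myPackage:tag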

– alx.lzt

If there is no hope with the AWS docs, you can do the following:

  • Run a DaemonSet, which by design creates one pod per node.
  • These pods will mount a hostPath volume (/var/run/docker.sock).
  • These pods have a Docker client and run the following command:

    docker login docker.pkg.github.com -u username -p passWord
    
  • Now you are logged in inside the container, but this is not reflected on the node. You then need to mount another hostPath volume (~/.docker/config.json). The challenge is knowing the home directory of the Fargate nodes. In the example below, I used /root (in volumes), but it could be something else, e.g. /home/ec2-user... something to check.

  • This is how it looks:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: docker-init
    spec:
      selector:
        matchLabels:
          app: worker
      template:
        metadata:
          name: worker
          labels:
            app: worker
        spec:
          initContainers:
          - name: login-private-registries
            image: docker
            command: ['sh', '-c', 'docker login docker.pkg.github.com -u username -p passWord']
            volumeMounts:
              - name: dockersock
                mountPath: "/var/run/docker.sock"
              - name: dockerconfig
                mountPath: "/root/.docker"
          # a pod needs at least one regular container; pause simply
          # keeps the pod alive after the init login has run
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.1
          volumes:
          - name: dockersock
            hostPath:
              path: /var/run/docker.sock
          - name: dockerconfig
            hostPath:
              path: /root/.docker
    
– Abdennour TOUMI
  • Hi! Thanks for this answer, I get what you want me to try and I hope this will work :) I fear that the Fargate serverless architecture might cause some issues regarding the rights to access volumes on EC2 instances managed only by AWS. I'll come back to you to let you know how it went! – alx.lzt Feb 25 '20 at 09:36
  • As I feared, working with volumes on AWS EKS with Fargate is not available yet :( – alx.lzt Feb 25 '20 at 16:12
  • @Abdennour TOUMI imagePullSecrets – c4f4t0r Feb 29 '20 at 10:59