4

I'm able to create an EKS cluster using CloudFormation. I'm also able to add a node group with EC2 instances and autoscaling. On the other hand, I can install a FargateProfile and create Fargate nodes. This works well. But I want to use only Fargate (no EC2 nodes etc.), so I need to host my management pods (in kube-system) on Fargate as well. How can I manage this?

I tried this:

Resources:
  FargateProfile:
    Type: AWS::EKS::FargateProfile
    Properties:
      ClusterName: eks-test-cluster-001
      FargateProfileName: fargate
      PodExecutionRoleArn: !Sub 'arn:aws:iam::${AWS::AccountId}:role/AmazonEKSFargatePodExecutionRole'
      Selectors:
        - Namespace: kube-system
        - Namespace: default
        - Namespace: xxx
      # Assumes a Subnets parameter; note that Fargate profiles only accept private subnets
      Subnets:
        !Ref Subnets

But my management pods remain on the EC2 instances. Probably I'm missing some labels, but is this the way to go? Some labels are generated with a hash, so I can't just add them to my Fargate profile.

With eksctl it seems possible: "Adding the --fargate option in the command above creates a cluster without a node group. However, eksctl creates a pod execution role, a Fargate profile for the default and kube-system namespaces, and it patches the coredns deployment so that it can run on Fargate."
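
For reference, the eksctl invocation that quote refers to is roughly the following (cluster name taken from my template above, region is a placeholder):

eksctl create cluster --name eks-test-cluster-001 --region <region> --fargate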

But how do I do this in CloudFormation?

I also tried adding the labels, but then I got an error from CloudFormation: Model validation failed (#: extraneous key [k8s-app] is not permitted)

DenCowboy

3 Answers

0

I am not duplicating this template, but since I had a similar "model validation failed" issue: if you're using labels, make sure each label uses the format Key: xxx, Value: xxx.
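
For example, a kube-system selector with a label in that shape (using the k8s-app: kube-dns label from the question purely as an illustration) would look roughly like this:

      Selectors:
        - Namespace: kube-system
          Labels:
            - Key: k8s-app
              Value: kube-dns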

Tao Tao

0

The configuration below works for me, but only after manually patching the CoreDNS deployment. Any idea how to get CoreDNS running on Fargate with CloudFormation alone?

Resources:
  FargateProfile:
    Type: 'AWS::EKS::FargateProfile'
    DependsOn: ControlPlane
    Properties:
      ClusterName: my-cluster
      FargateProfileName: fp-default
      PodExecutionRoleArn: !GetAtt FargatePodExecutionRole.Arn
      Selectors:
        - Namespace: default
        - Namespace: kube-system
          Labels:
            - Key: k8s-app
              Value: kube-dns
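
The FargatePodExecutionRole referenced via !GetAtt is not shown above. A minimal sketch of such a role (the logical name matches the reference above; the rest is the standard Fargate pod execution setup) could look like this:

  FargatePodExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              # service principal that EKS Fargate uses to run pods
              Service: eks-fargate-pods.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy'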

0

The CloudFormation template is correct. You also need to change the CoreDNS deployment and remove this annotation from its pod template:

annotations:
  eks.amazonaws.com/compute-type: ec2

To do this, use this command:

# '~1' is the JSON Pointer escape for '/' in the annotation key
kubectl patch deployment coredns \
    -n kube-system \
    --type json \
    -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

Doc ref.: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html