I'm able to create an EKS cluster using CloudFormation. I'm also able to add a "node group" with EC2 instances and autoscaling. Separately, I can also install a FargateProfile and create "Fargate nodes". This works well. But I want to use only Fargate (no EC2 nodes at all), so I also need to host my management pods (in kube-system) on Fargate. How can I manage this?
I tried this:
Resources:
  FargateProfile:
    Type: AWS::EKS::FargateProfile
    Properties:
      ClusterName: eks-test-cluster-001
      FargateProfileName: fargate
      PodExecutionRoleArn: !Sub 'arn:aws:iam::${AWS::AccountId}:role/AmazonEKSFargatePodExecutionRole'
      Selectors:
        - Namespace: kube-system
        - Namespace: default
        - Namespace: xxx
      Subnets: !Ref Subnets
But my management pods remain on the EC2 instances. Probably I'm missing some labels, but is this the way to go? Some labels are generated with a hash, so I can't just add them to my FargateProfile.
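For example, this is how I check which labels the management pods actually carry (I assume the hash-generated one I'm running into is pod-template-hash):

# list the kube-system pods with their labels; pod-template-hash is the generated one
kubectl get pods -n kube-system --show-labels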
With eksctl it seems possible: "Adding the --fargate option in the command above creates a cluster without a node group. However, eksctl creates a pod execution role, a Fargate profile for the default and kube-system namespaces, and it patches the coredns deployment so that it can run on Fargate."
But how can I do this in CloudFormation?
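As far as I understand, the patch eksctl applies just removes the eks.amazonaws.com/compute-type: ec2 annotation from the coredns pod template and reschedules the pods, roughly like this (taken from what the AWS Fargate getting-started docs describe, not something I can express in a CloudFormation template):

# remove the annotation that pins coredns to EC2
# (~1 is the JSON-pointer escape for "/" in the annotation key)
kubectl patch deployment coredns -n kube-system --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
# reschedule coredns so it lands on Fargate
kubectl rollout restart -n kube-system deployment coredns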
I also tried adding the labels, but then I got an error in CloudFormation: Model validation failed (#: extraneous key [k8s-app] is not permitted)
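If I understand the error correctly, CloudFormation expects each selector's Labels as a list of Key/Value objects rather than a plain map, so presumably it would have to look something like this (using k8s-app: kube-dns only as an example label):

      Selectors:
        - Namespace: kube-system
          Labels:
            - Key: k8s-app
              Value: kube-dns

But even if that shape passes validation, I suspect the coredns pods still won't move until the compute-type annotation is patched and the pods are rescheduled, which is the part I don't see how to do from CloudFormation.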