
I deployed an EKS Fargate cluster in AWS and created a Fargate profile for the default namespace without any labels. I found that whenever I deploy a new deployment with kubectl apply, a new Fargate node is created for that deployment. See the screenshot below.

How can I make the deployment share one fargate instance?

And how can I rename the fargate node name?

[screenshot: kubectl output showing one Fargate node per deployment]
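For reference, the setup described above (a Fargate profile matching the default namespace with no label selectors) can be sketched as an eksctl cluster config; the cluster name and region are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder cluster name
  region: us-west-2     # placeholder region
fargateProfiles:
  - name: fp-default
    selectors:
      # No labels specified: every pod created in the
      # "default" namespace is scheduled onto Fargate.
      - namespace: default
```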

Joey Yi Zhao

1 Answer


The spirit of using Fargate is that you get a serverless experience where you don't have to think about nodes (they are displayed only because Kubernetes can't operate without nodes). One of the design tenets of Fargate is that it runs one pod per node for increased security. You pay for the size of the pod you deploy, not for the node the service provisions to run that pod, even if the node is larger than the pod. See here for how pods are sized. What is the use case for which you may want or need to run multiple pods per Fargate node? And why do you prefer Fargate over EKS managed node groups (which support multiple pods per node)?
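Since billing is driven by the pod's resource requests rather than the node behind it, right-sizing means setting those requests explicitly. A minimal sketch of a deployment with explicit requests (the name and image are placeholders; Fargate rounds the combined requests up to the nearest supported vCPU/memory combination):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app          # hypothetical deployment name
spec:
  replicas: 2             # each replica gets its own Fargate node
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: app
          image: nginx    # placeholder image
          resources:
            requests:
              cpu: "250m"      # these requests determine the Fargate
              memory: "512Mi"  # configuration you are billed for
```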

mreferre
  • I came from ECS Fargate, so I was thinking of using Fargate in EKS too. Fargate is cheaper than an EC2 instance, and it scales faster as well. – Joey Yi Zhao May 30 '21 at 23:05
  • The reason for running multiple pods per Fargate node is to save some cost. – Joey Yi Zhao May 30 '21 at 23:06
  • Interesting. Do you have a practical example of how you'd save costs by running multiple pods? I am asking because, given that you pay for the resources configured for your pods, you could probably just right-size your pods rather than jamming multiple pods into a bigger node. I know there are corner cases where this may not work, but I am eager to hear what your blocker is. – mreferre May 31 '21 at 07:24