Environment: AWS EMR cluster with managed autoscaling enabled, running a Hudi job
Issue: I enabled managed scaling with a minimum of 2 nodes, a maximum of 8 task nodes, a maximum of 2 core nodes, and 2 units of on-demand capacity.
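For reference, a managed scaling policy matching the limits described above would be set roughly like this (the cluster ID is a placeholder, and the exact numbers are my assumption of how the min/max described above map onto the ComputeLimits fields):

```shell
# Sketch of the managed scaling policy described above.
# UnitType=Instances since the cluster uses instance groups;
# j-XXXXXXXXXXXXX is a placeholder cluster ID.
aws emr put-managed-scaling-policy \
  --cluster-id j-XXXXXXXXXXXXX \
  --managed-scaling-policy '{
    "ComputeLimits": {
      "UnitType": "Instances",
      "MinimumCapacityUnits": 2,
      "MaximumCapacityUnits": 10,
      "MaximumCoreCapacityUnits": 2,
      "MaximumOnDemandCapacityUnits": 2
    }
  }'
```

(MaximumCapacityUnits covers core + task nodes combined, so 2 core + 8 task = 10 here; that interpretation is an assumption on my part.)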
I ran a Spark job. Once the job started, the cluster scaled up to 4 task nodes, then immediately began scaling down: it resized to 2 task nodes, those 2 task nodes were decommissioned, and it dropped to 0 task nodes before adding 2 nodes again from 0. Throughout the run it kept cycling task nodes between 0 and 2-4 and back to 0, over and over, even though the job was still running. This pattern of scaling down and then scaling back up mid-job seems strange.
Anyone have any idea if I am missing anything? TIA
Note: I checked the YARN ResourceManager; the nodes were being used at full capacity and cores were maxed out.