This behavior of Kubernetes comes down to the JobTracker relying on completed Pods not being removed in order to track Job completion status. See the Kubernetes Enhancement Proposal KEP-2307 for more details.
When nodes are scaled down, completed Pods are evicted, but the JobTracker relies on the existence of those Pods to derive the Job completion status, e.g. Pods Statuses: 4 Running / 2 Succeeded / 0 Failed. Once the completed Pods are removed in a scale-down, the next Job sync resets the completion state for those indexes, and the Pods for those indexes are restarted.
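For illustration, the status of an Indexed Job at this point might look roughly like the sketch below (the Job name and index values are hypothetical; the field names are from the batch/v1 Job API). Without finalizer-based tracking, the controller re-derives these counters from the Pods that still exist, so garbage-collecting the two succeeded Pods effectively erases the "2 Succeeded".

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-indexed-job        # hypothetical name
spec:
  completionMode: Indexed
  completions: 6
  parallelism: 6
status:
  active: 4                        # 4 Running
  succeeded: 2                     # 2 Succeeded
  failed: 0                        # 0 Failed
  completedIndexes: "1,4"          # example indexes; forgotten if their Pods are garbage collected
```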
Setting a PodDisruptionBudget or the "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation does not help: these Pods have already succeeded and are marked ready for garbage collection, so the Cluster Autoscaler does not factor them into its scale-down decisions.
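For reference, these are the two settings that do not help, sketched against the same hypothetical Indexed Job (names, labels, and the container image are placeholders):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-job-pdb
spec:
  minAvailable: 6
  selector:
    matchLabels:
      job-name: example-indexed-job     # label the Job controller adds to its Pods
---
apiVersion: batch/v1
kind: Job
metadata:
  name: example-indexed-job
spec:
  completionMode: Indexed
  completions: 6
  parallelism: 6
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox                # placeholder image
          command: ["sh", "-c", "echo done"]
```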
The fix is to turn on the JobTrackingWithFinalizers feature gate, or to upgrade to Kubernetes 1.26 or later, where this behavior is the default. The enhancement removes the dependency on completed Pods sticking around for Job completion tracking: the Job controller creates Pods with a finalizer, which prevents finished Pods from being removed by the garbage collector. Once the controller has accounted for a finished Pod, it removes the finalizer and records the result in additional state on the Job object. In subsequent Job syncs, the controller ignores finished Pods that no longer carry the finalizer.
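Concretely, on clusters older than 1.26 the gate is enabled with --feature-gates=JobTrackingWithFinalizers=true on the kube-apiserver and kube-controller-manager. With it on, each Pod the Job controller creates carries the batch.kubernetes.io/job-tracking finalizer, the Job is annotated with batch.kubernetes.io/job-tracking, and the interim accounting lives in status.uncountedTerminatedPods. A rough sketch of those objects (the Pod/Job names and the UID are hypothetical; the finalizer, annotation, and field names are the real ones):

```yaml
# Pod created by the Job controller: the finalizer blocks garbage collection
# until the controller has counted this Pod toward the Job's status.
apiVersion: v1
kind: Pod
metadata:
  name: example-indexed-job-1-x7k2p    # hypothetical Pod name
  finalizers:
    - batch.kubernetes.io/job-tracking
---
# The Job itself: the annotation marks it as tracked via finalizers, and
# uncountedTerminatedPods is the extra state the controller keeps while it
# folds finished Pods into the succeeded/failed counters.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-indexed-job
  annotations:
    batch.kubernetes.io/job-tracking: ""
status:
  uncountedTerminatedPods:
    succeeded:
      - 0f6a7c3e-1111-2222-3333-444455556666   # example Pod UID, not yet counted in .status.succeeded
```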
Follow the change tracking in this issue.