Consider the following scenario regarding the workload of an application (only CPU and memory are considered here as an example):
Normal Workload
It requires 4 cores and 16 GB of memory.
Maximum Workload
It requires 8 cores and 32 GB of memory.
Assume that the burst of activity (max workload) happens for only 2 hours per day.
Case 1 - When the application is not containerized
The application has to reserve 8 cores and 32 GB of memory so that it can handle the max workload with the expected performance.
But 4 cores and 16 GB of memory are then wasted for 22 hours a day.
Case 2 - When the application is containerized
Let's assume that a container with 4 cores and 16 GB of memory is spawned. The remaining 4 cores and 16 GB of memory are then available to other applications in the cluster, and another container of the same configuration is spawned for this application only during the 2 hours a day when the max workload occurs.
Therefore the resources in the cluster are used optimally when applications are containerized, as the arithmetic below shows.
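To make the saving concrete, here is a small Python sketch that compares the resources reserved per day in both cases, using only the figures from the scenario above (4 cores/16 GB normal, 8 cores/32 GB max, 2-hour burst):

```python
# Workload figures taken from the scenario above.
NORMAL = {"cores": 4, "mem_gb": 16}   # enough for 22 hours a day
MAX    = {"cores": 8, "mem_gb": 32}   # needed only during the 2-hour burst
BURST_HOURS, DAY_HOURS = 2, 24

for resource in ("cores", "mem_gb"):
    # Case 1: the max-workload capacity is reserved for the full day.
    case1 = MAX[resource] * DAY_HOURS
    # Case 2: one normal-sized container runs all day and a second,
    # identical container runs only during the burst.
    case2 = NORMAL[resource] * DAY_HOURS + NORMAL[resource] * BURST_HOURS
    print(f"{resource}: case 1 reserves {case1} unit-hours/day, "
          f"case 2 reserves {case2}, freeing {case1 - case2} for other apps")
```

For CPU this prints 192 vs. 104 core-hours per day; the 88 core-hours freed are exactly the 4 cores that sit idle for 22 hours in Case 1.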
What if a single machine does not have all the resources required for the application?
In such cases, if the application is containerized, containers/resources can be allocated across multiple machines in the cluster, as sketched below.
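The answer only says "cluster"; assuming Kubernetes as the orchestrator, a minimal sketch with the official Kubernetes Python client shows how a container declares its resource needs so the scheduler can place it on any machine with enough free capacity (the container name and image are hypothetical placeholders):

```python
from kubernetes import client

# Hypothetical container spec; the name and image are placeholders.
# The requests tell the scheduler how much capacity this container needs,
# so it can be placed on any node in the cluster with that much free.
# A second identical container can be scheduled (possibly on another
# machine) for the 2-hour burst described in Case 2.
app_container = client.V1Container(
    name="example-app",
    image="example/app:latest",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "4", "memory": "16Gi"},  # normal-workload size
        limits={"cpu": "4", "memory": "16Gi"},
    ),
)
```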
Increased fault tolerance
If the application is running on a single machine and that machine goes down, the whole application becomes unavailable. But when a containerized application runs across different machines in the cluster, a machine failure makes only a few containers unavailable.
Regarding your question: if the application's workload is going to be uniform throughout its lifetime, there is no benefit, in terms of scale, in breaking the application into smaller containers. You may still consider containerizing it for the other benefits. In terms of scale, containerizing an application pays off only when the workload varies or more workload is anticipated in the future.