I have two physical, "identical" Red Hat Linux servers. I ran a small program on both of them, and my problem is that the program's CPU usage differs between the two servers. I am not a Linux expert, so I am wondering: what could lead to this performance difference?
I wrote the program in both C++ and Java to check whether the inconsistency comes from the programming language. The program itself does a little integer calculation over time so that it consumes a constant amount of CPU time. Both versions show the same percentage difference in CPU usage between the servers.
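For context, here is a minimal sketch of the kind of load generator I mean (not my actual program; the 30% duty cycle and loop constants are just illustrative values): it burns a fixed slice of each second with integer arithmetic and sleeps for the rest.

```cpp
#include <chrono>
#include <cstdint>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::milliseconds(1000);  // one measurement interval
    const auto busy   = std::chrono::milliseconds(300);   // ~30% CPU target per interval

    volatile std::uint64_t sink = 0;  // volatile keeps the work from being optimized away
    for (;;) {
        const auto start = clock::now();
        // Busy phase: plain integer work until the busy slice is used up.
        while (clock::now() - start < busy) {
            for (int i = 0; i < 10000; ++i) {
                sink = sink + static_cast<std::uint64_t>(i) * 31u;
            }
        }
        // Idle phase: sleep out the remainder of the interval.
        std::this_thread::sleep_for(period - (clock::now() - start));
    }
    return 0;  // never reached
}
```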
Factors I have already thought of and believe can be excluded:
- identical server type
- identical processors (both machines have two single-core sockets)
- Intel Hyper-Threading Technology enabled on both
- identical clock speed
- identical OS version (Red Hat Enterprise Linux Server release 5.9)
- identical Java version, Java RE, JVM
- Intel Demand Based Switching can be ignored, since the measurement tool uses the nominal clock speed as the CPU capacity
- processor affinity can probably be excluded as well: I ran multiple measurement series and always get exactly the same CPU usage values (a more direct check by pinning the process is sketched after this list)
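To rule affinity in or out more directly than by repeatability alone, one option is to pin the test program to a single logical CPU on both machines and re-run the measurement. A hedged sketch using the Linux/glibc `sched_setaffinity` call follows; CPU index 0 is an arbitrary choice for illustration.

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // needed for cpu_set_t, CPU_ZERO, CPU_SET, sched_setaffinity
#endif
#include <sched.h>
#include <cstdio>

// Pin the calling process to one logical CPU; returns 0 on success, -1 on error.
static int pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return sched_setaffinity(0, sizeof(set), &set);  // 0 = calling process
}

int main() {
    if (pin_to_cpu(0) != 0) {
        std::perror("sched_setaffinity");
        return 1;
    }
    std::puts("pinned to CPU 0");
    /* ... run the benchmark loop here ... */
    return 0;
}
```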
Is there perhaps a C library, or something similar, that affects the CPU usage of C++ and Java programs and is updated separately from the OS release? Or could the two machines be running a different thread scheduler?
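As a concrete way to start answering that myself: both glibc and the kernel (which contains the thread scheduler) are packaged separately from the Red Hat point release, so two machines that both report "release 5.9" can still differ in their exact errata versions. A small sketch that prints both, using the glibc-specific `gnu_get_libc_version()` and `uname()`:

```cpp
#include <gnu/libc-version.h>   // gnu_get_libc_version() (glibc-specific)
#include <sys/utsname.h>        // uname()
#include <iostream>

int main() {
    utsname u;
    if (uname(&u) != 0) return 1;
    std::cout << "glibc:  " << gnu_get_libc_version() << std::endl;
    std::cout << "kernel: " << u.release << std::endl;
    return 0;
}
```

Running this on both servers and comparing the output would at least confirm whether the C library and kernel really are at the same level.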