I'm seeing a huge difference in running time when comparing the same process on my local machine versus our server.
Our laboratory has a dedicated server running Ubuntu 14 LTS with PBS for job scheduling. In total there are 96 cores split across two queues. We run various experiments as Python routines that are purely CPU-bound, with no I/O or network requests.
On my local machine, my routine finishes in about 10 to 11 hours. The same routine on the server takes more than 25 hours to do the same work.
When monitoring the server with htop, every core was at roughly 100% utilization. I have already tried reducing the load average to about 0.8 per core, but that made no significant difference in processing time.
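One thing I could check is whether the routine is actually getting CPU cycles on the busy server or is being descheduled by other jobs. A minimal sketch of that check is below; `do_work()` is just a placeholder for a representative CPU-bound chunk of my real routine, not my actual code:

    import time

    def do_work():
        # Placeholder for a representative CPU-bound chunk of the real routine.
        total = 0
        for i in range(10000000):
            total += i * i
        return total

    wall_start = time.perf_counter()   # wall-clock time
    cpu_start = time.process_time()    # CPU time consumed by this process only

    do_work()

    wall_elapsed = time.perf_counter() - wall_start
    cpu_elapsed = time.process_time() - cpu_start

    # If cpu_elapsed is much smaller than wall_elapsed, the process is waiting
    # for a core (oversubscription); if they are close, each core is simply slower.
    print("wall: %.2fs  cpu: %.2fs" % (wall_elapsed, cpu_elapsed))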
Could this difference between local and server runs be explained by the CPUs themselves? Can that alone really double the processing time?
Server CPU: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Local CPU: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
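For reference, a small single-core benchmark like the sketch below could be run on both machines to compare raw per-core throughput. The loop is only an illustrative CPU-bound workload (an assumption on my part), not my actual routine:

    import timeit

    def cpu_bound():
        # Tight integer loop; no I/O, so the timing reflects single-core speed.
        total = 0
        for i in range(5000000):
            total += i % 7
        return total

    # Run on one core of each machine and compare the best of 5 runs.
    best = min(timeit.repeat(cpu_bound, number=1, repeat=5))
    print("best of 5: %.2f s per run" % best)

If the ratio between the two machines on a benchmark like this roughly matches the 2.2 GHz vs 3.4 GHz clock difference, the slowdown would seem to be mostly down to the hardware rather than the scheduler.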