I am trying to run an application within a VM on ESXi. The application has a highly latency-sensitive thread. It runs in a tight polling loop and almost always consumes 99.9% of a CPU, which is expected.
Details of the VM and server:
Number of vCPUs allocated to the VM: 36
Number of sockets: 2
Number of cores per socket: 10
Total number of logical CPUs on the server: 40
Hyper-threading: Enabled
ESXi version: 6.5
If nothing else runs on the VM, there is no problem. But if I run other CPU-consuming applications on the VM, the thread mentioned above starts getting fewer CPU cycles. I have verified this by adding counters based on rdtsc().
On KVM, this can be solved by:
- Pinning each vCPU to a physical CPU with
virsh vcpupin <domain> <vcpu> <pcpu>
- Pinning the application threads inside the VM to specific vCPUs (e.g. with taskset)
How does the ESX CPU scheduler work? Is there a way to map vCPUs to physical cpus on ESXi?
When I monitor esxtop on the host, I see that the thread taking 99.9% CPU does not always run on the same core; it keeps moving between cores, and at times it shares a physical core with other CPU-intensive applications.
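For what it's worth, ESXi does expose per-VM scheduling affinity (the rough equivalent of virsh vcpupin). This is a sketch of the relevant advanced settings in the VM's .vmx file, assuming physical CPUs 0-35 are to be used; the exact values need to match your host topology, and latency sensitivity "high" additionally requires a full CPU reservation:

```
# Restrict this VM's vCPUs to a set of physical CPUs (Scheduling Affinity)
sched.cpu.affinity = "0-35"

# Ask the scheduler to treat the VM as latency-sensitive (ESXi 6.x)
sched.cpu.latencySensitivity = "high"
```

The same settings are reachable in the vSphere client under Edit Settings > CPU > Scheduling Affinity and VM Options > Latency Sensitivity. Note that affinity pins the VM's world to a CPU set but does not give it exclusive use of those CPUs; the latency-sensitivity setting plus reservations is what keeps other worlds off them.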