
We are running TeamCity on Windows Server 2008 as a build server. The build server is hosted on VMware ESXi 5. (I have very little VMware experience, so my terminology might be wrong.)

When we start a build, we more often than not experience extremely poor performance. The build server guest has been assigned 4 vCPUs with no upper limit, and no other guest systems are particularly busy.

What we have observed using the vSphere Client is that after a while the CPU rate drops from about 4600 MHz to about 50 MHz. When the build stops, the CPU frequency returns to its normal semi-idle rate.

Another interesting observation is that while the build server is working at about 50 MHz, it gets a burst of CPU every six minutes (see graph).

Yet another observation is that the system clock loses time in proportion to the missing CPU cycles (by roughly a factor of 100 during the low-CPU periods).
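
To put a number on the drift, one option is to poll an NTP server from inside the guest and log the local clock's offset over time. This is only a minimal sketch: it assumes the guest can reach pool.ntp.org and has the third-party ntplib package installed (pip install ntplib), and the 30-second interval is arbitrary.

    # Minimal sketch: log how far the guest clock is off from an NTP
    # reference. Assumes network access to pool.ntp.org and the
    # third-party ntplib package; the polling interval is arbitrary.
    import time
    import ntplib

    client = ntplib.NTPClient()
    while True:
        try:
            resp = client.request("pool.ntp.org", version=3)
            # resp.offset is the local clock error in seconds
            # (negative = local clock is behind)
            print("%s  offset: %+.3f s"
                  % (time.strftime("%H:%M:%S"), resp.offset))
        except ntplib.NTPException as exc:
            print("NTP query failed:", exc)
        time.sleep(30)

If the offset grows steadily while a build runs and stabilizes afterwards, that matches the lost-cycles pattern described above.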

EDIT: Added a chart with host specs.

[Images: CPU graph for build, CPU graph for build with cyclic performance, host specs chart]

Holstebroe
  • How are the memory and disk I/O stats? Also, how's CPU Ready? (A quick way to check %RDY is sketched after these comments.) – Chopper3 Nov 28 '11 at 16:04
  • Are you sure your build VM is not being resource-starved by other, higher-priority VMs? – voretaq7 Nov 28 '11 at 17:18
  • Yes, the other VMs are mostly idle. I have double-checked with vSphere. Memory consumption is not alarming. – Holstebroe Nov 29 '11 at 07:23
  • Just to flesh out Chopper3's comment: can you describe the disk subsystem of the ESXi host and list any other significant software on the TeamCity VM (e.g. antivirus)? – Helvick Nov 29 '11 at 10:54
  • The system runs a Plastic SCM server and a YouTrack issue tracker. Neither is especially resource-demanding, and during the slow builds they are typically sitting idle. Is there anything special I should look for regarding the disk system? – Holstebroe Nov 29 '11 at 11:52
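
A quick way to check CPU Ready: capture the counters on the host with esxtop in batch mode (for example esxtop -b -d 5 -n 60 > stats.csv) and average the VM's %RDY column. The following is a minimal sketch; the VM name and the exact "% Ready" column label are assumptions, so check the header row of your own export.

    # Minimal sketch: average one VM's "% Ready" counter from an esxtop
    # batch-mode export. The VM name and the counter label are
    # assumptions; check the header row of your own CSV.
    import csv
    import sys

    VM_NAME = "buildserver"  # hypothetical VM name

    with open(sys.argv[1], newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        # Headers look like: \\host\Group Cpu(1234:vmname)\% Ready
        cols = [i for i, h in enumerate(header)
                if VM_NAME in h and h.endswith("% Ready")]
        if not cols:
            sys.exit("No %% Ready column found for VM %r" % VM_NAME)
        samples = [float(row[cols[0]]) for row in reader if row[cols[0]]]

    if samples:
        print("Average %%RDY over %d samples: %.1f%%"
              % (len(samples), sum(samples) / len(samples)))

As a rough rule of thumb, sustained %RDY above about 10% per vCPU points at scheduling contention on the host.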

1 Answer


What are the server's specifications? RAM, physical CPUs?

One thing you can try quickly is to cut your build server down to ONE or TWO virtual CPUs and repeat your trial. This is the preferred approach because it's easier for the hypervisor to schedule CPU time for a single vCPU than to find four free physical cores to service the four vCPUs you've provisioned.
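
If you'd rather script the change than click through the vSphere Client, here is a rough sketch using pyVmomi (VMware's Python SDK). The host address, credentials, and VM name are placeholders, and the VM generally has to be powered off before its CPU count can be changed.

    # Rough sketch with pyVmomi: find the VM by name and reconfigure it
    # to a single vCPU. Host, credentials and VM name are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="esxi.example.com", user="root", pwd="secret")
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "buildserver")
        view.Destroy()
        # One socket, one core
        spec = vim.vm.ConfigSpec(numCPUs=1, numCoresPerSocket=1)
        task = vm.ReconfigVM_Task(spec=spec)
        print("Reconfigure task submitted:", task)
    finally:
        Disconnect(si)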

ewwhite
  • +1 for de-provisioning virtual CPUs. My experience has been similar to ewwhite's, and 1 vCPU often offers optimal performance. – voretaq7 Nov 28 '11 at 17:17
  • I have now reduced the number of vCPUs to one per VM (in total three running machines), but the exact same thing happens. After the build has been running for a while the CPU MHz drops from about 4000 to around 54. – Holstebroe Nov 29 '11 at 08:05
  • I still observe the CPU boost cycles: 5 minutes of about 54 MHz then about one minute of about 4000 MHz. This cycle is repeated until the build completes after which normal operation and performance resumes. – Holstebroe Nov 29 '11 at 08:09
  • Previously I kept a few virtual cores (not sockets). Now I have tried running the server as a single-socket, single-core CPU, and so far it has made two successful builds. I will try a couple of additional builds before drawing any conclusions. – Holstebroe Nov 29 '11 at 10:16
  • After a few more builds it is still looking promising. I would prefer a few more cores, however. – Holstebroe Nov 29 '11 at 11:53