
We have an environment with roughly 150 users. The users have thin clients and connect to various terminal servers. We just replaced all of the old Server 2008 terminal server VMs with nice new Server 2019 terminal server VMs, but many of the users are complaining about slowness and general lag. It became apparent that the users complaining about performance are always on our most heavily utilized terminal server VMs, the ones with around 30 users.

We have, at most, 30 users on a single terminal server VM. They have more than enough RAM allocated (64-96 GB depending on the department) and 20 vCPUs. According to the Task Manager Performance tab, CPU utilization never really goes over 50%, nor does RAM. With 20 vCPUs, Hyper-V Manager reports that each VM has access to 3 of the 4 physical processors - so 24 cores / 48 threads.
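
For reference, here is roughly what I plan to run on the host to see CPU pressure at the hypervisor level, since Task Manager inside the guest only reflects what the VM was actually scheduled to run. This is a sketch; the counter names are what I believe Server 2019 exposes, so worth confirming with `Get-Counter -ListSet 'Hyper-V Hypervisor*'` first:

```powershell
# Run elevated on the Hyper-V host, not inside a guest.
# Sustained "% Total Run Time" near 80-100% on the logical processors, or a
# climbing "CPU Wait Time Per Dispatch", means vCPUs are queuing for physical
# cores even though Task Manager inside the guest looks half idle.
$counters = @(
    '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
    '\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time',
    '\Hyper-V Hypervisor Virtual Processor(*)\CPU Wait Time Per Dispatch'
)
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        # Show the ten busiest instances from each sample.
        $_.CounterSamples |
            Sort-Object CookedValue -Descending |
            Select-Object -First 10 Path,
                @{ n = 'Value'; e = { [math]::Round($_.CookedValue, 2) } }
    }
```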

The hosts are PowerEdge R820s and R910s with 4x Xeon E5-4650s and 250-400 GB of physical memory.
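
Given that each E5-4650 is 8 cores / 16 threads, a 20-vCPU VM can't fit inside a single NUMA node, so I also want to look at how NUMA spanning is configured. A minimal sketch of that check, assuming the standard Hyper-V PowerShell module ('TS01' is just a placeholder VM name, and the exact property names may differ slightly by version):

```powershell
# Run on the Hyper-V host.

# Is NUMA spanning enabled, and how many logical processors does the host see?
Get-VMHost | Select-Object LogicalProcessorCount, NumaSpanningEnabled

# Size of each physical NUMA node (processors and memory per node).
Get-VMHostNumaNode | Select-Object NodeId, ProcessorsAvailability, MemoryAvailable

# Virtual NUMA topology projected into one of the terminal servers.
# 'TS01' is a placeholder VM name.
Get-VMProcessor -VMName 'TS01' |
    Select-Object Count, MaximumCountPerNumaNode, MaximumCountPerNumaSocket
```

If spanning is on and each physical node is 16 logical processors, a 20-vCPU guest is going to be stretched across nodes, which would line up with Hyper-V Manager showing the VM on 3 of the 4 physical processors.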

I come from a long history of ESXi environments, so I'm wondering if it's something I'm not configuring properly in Hyper-V, or in Windows itself. I've done some reading on what exactly vCPUs are, how the number of allocated vCPUs affects VM performance, and how allocating vCPUs is not the same thing as allocating physical cores. But I know there are organizations out there with many more users per terminal server, so something just doesn't seem right.
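
For completeness, this is the quick oversubscription check I had in mind - just a rough ratio of the vCPUs handed out to running VMs versus the host's logical processors, not a scheduler analysis:

```powershell
# Run on the Hyper-V host. Rough vCPU oversubscription ratio:
# total vCPUs assigned to running VMs vs. logical processors on the host.
$lp    = (Get-VMHost).LogicalProcessorCount
$vcpus = (Get-VM | Where-Object State -eq 'Running' |
          Get-VMProcessor | Measure-Object -Property Count -Sum).Sum

[pscustomobject]@{
    LogicalProcessors = $lp
    AssignedVCpus     = $vcpus
    Ratio             = [math]::Round($vcpus / $lp, 2)
}
```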

Is there anything that you know of from past experience or otherwise that you'd suggest I check or look into?

asked by d34db33f

  • Have you also looked at your IOPS? Bad read/write performance on the disks can make Windows really slow. Do you run the terminal services on HDDs (and at what RPM), or on SSDs? – Daniel Sep 04 '20 at 21:19
  • Baseline performance parameters? IOPS? Network latency? You focus so much on CPU and memory that I am inclined to say you are totally ignoring the elephant in the room - otherwise you would at least mention those parameters. I am also inclined to say that an 8-year-old CPU may be a tad written off and overloaded with this amount of context switching. You COULD be facing a memory performance bottleneck: 20 vCPUs on what looks like a NUMA architecture - I would say "say goodbye to memory performance", but I am not sure how relevant that is. – TomTom Sep 04 '20 at 21:23
  • I can check on all of that when I get back into the office. The reason I'm focusing on Windows and VM config is because it's not affecting all of the vms. All of the datastores reside on the same SAN. All connected to the same gigabit switches. It's only the vms with 20+ users, everywhere else performance is fine. – d34db33f Sep 04 '20 at 21:46
  • `it's not affecting all of the vms.` Are all of your VMs terminal servers with the same configuration and user count? – Daniel Sep 05 '20 at 09:07
  • @daniel no, but we have ~10 of them, 4 of which reside on the same host and the other 3 don't have issues – d34db33f Sep 06 '20 at 21:17
  • What are the IOPS of the 3 other VMs? They might not have issues just because they do not need much IOPS. In any case, you can try getting the current numbers for each VM and then understand why your terminal servers are slow (a counter sketch is below). You can run something like Live Optics to get an infrastructure performance report: https://www.liveoptics.com/ In addition, the following video might be useful: https://www.starwindsoftware.com/resource-library/how-to-prepare-your-infrastructure-for-remote-work/ – Stuka Sep 13 '20 at 08:46
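
Following up on the IOPS/latency suggestions above, a minimal sketch of the counters to pull from inside one of the slow terminal servers (and one of the healthy ones, for comparison). The latency figure in the code comment is a rough rule of thumb, not a hard limit:

```powershell
# Run inside a slow terminal server VM (and a healthy one, for comparison).
# Sustained Avg. Disk sec/Read or sec/Write above roughly 0.020-0.030 s
# (20-30 ms) is usually enough to feel like "general lag" in an RDS session.
$counters = @(
    '\LogicalDisk(*)\Avg. Disk sec/Read',
    '\LogicalDisk(*)\Avg. Disk sec/Write',
    '\LogicalDisk(*)\Disk Transfers/sec',
    '\LogicalDisk(*)\Current Disk Queue Length'
)
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        # Drop the _Total instance so per-volume numbers stand out.
        $_.CounterSamples |
            Where-Object { $_.InstanceName -ne '_total' } |
            Select-Object InstanceName, Path,
                @{ n = 'Value'; e = { [math]::Round($_.CookedValue, 4) } }
    }
```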

0 Answers