
I have an OpenStack VM that is getting really poor performance on its root disk - less than 50 MB/s writes. My setup is 10 GbE networking, OpenStack Queens deployed with kolla, and storage on Ceph. I'm trying to follow the path through the infrastructure to identify where the performance bottleneck is, but I'm getting lost along the way:

`nova show` lets me see which hypervisor (an Ubuntu 16.04 machine) the VM is running on, but once I'm on the hypervisor I don't know what to look at. Where else can I look?
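
For reference, this is roughly how I'm locating the hypervisor so far (the VM name `myvm` is just a placeholder):

```
# find which compute host the VM is scheduled on (needs admin credentials)
nova show myvm | grep -i hypervisor
# ...then SSH to that host, but what should I inspect there?
```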

Thank you!

  • You could run `virt-top`, `iftop` and `top` on the hypervisor to see whether its resources are at their limit. If you have access to the Ceph cluster you should check its status; maybe the backing OSDs are saturated? How much is going on in the Ceph cluster? If it's not healthy and recovery is happening, the clients can suffer from performance degradation. (A few of these Ceph-side checks are sketched after these comments.) – eblock Dec 18 '20 at 07:12
  • Thanks! I did some more debugging and discovered that the VM IO performance was dropping because it was swapping. – Peter van Heusden Dec 18 '20 at 15:21
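
A minimal sketch of the Ceph-side checks eblock mentions, assuming admin access to the cluster (for example from a monitor node or a host with the `client.admin` keyring):

```
# overall cluster health, recovery/backfill activity and client throughput
ceph -s
# extra detail when the status is not HEALTH_OK
ceph health detail
# per-OSD commit/apply latency, useful for spotting saturated or slow OSDs
ceph osd perf
```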

1 Answer


My advice is to check the performance between the host (hypervisor) and Ceph first. If you are able to create a Ceph block device, you can map it with the `rbd` command, create a filesystem on it, and mount it. Then you can measure the device's I/O performance with the sysstat tools (`iostat`, `sar`) or with `iotop`, `dstat` or `vmstat`.
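
Roughly, the sequence could look like this (a sketch only: the pool name `volumes`, the image name `perftest` and the mapped device `/dev/rbd0` are placeholders, and it assumes the hypervisor has a working `ceph.conf` and client keyring):

```
# create a small test image in an existing pool (size is in MB, so 10 GiB here)
rbd create volumes/perftest --size 10240
# map it as a kernel block device on the hypervisor (prints e.g. /dev/rbd0)
rbd map volumes/perftest
# put a filesystem on it and mount it
mkfs.xfs /dev/rbd0
mkdir -p /mnt/perftest
mount /dev/rbd0 /mnt/perftest
# generate a simple write load...
dd if=/dev/zero of=/mnt/perftest/testfile bs=1M count=4096 oflag=direct
# ...and watch the device from another terminal
iostat -x 1        # or iotop / dstat / vmstat / sar
# clean up afterwards
umount /mnt/perftest
rbd unmap /dev/rbd0
rbd rm volumes/perftest
```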

Norbert_Cs
  • Thanks, as noted above I discovered that my IO performance issues were coming from swapping. I will keep your suggestion in mind for the future! – Peter van Heusden Dec 18 '20 at 15:21
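
For anyone who lands here with the same symptom: a quick way to confirm that a guest is swapping (run inside the VM) is something like:

```
# current swap usage
free -h
# sample memory/paging activity; non-zero si/so columns mean active swapping
vmstat 1 5
```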