I am not sure whether this issue is well known; I could not find any straightforward explanation for it.
I have an ordinary small-size VM on Azure (B1s) running Ubuntu 18.04 from the Canonical image.
There's a regular LNMP stack on it serving a simple WordPress website. All software is the latest stable version from the official repositories.
Everything works fine and the load on the server is usually very small. However, sometimes (occasionally, with no particular time pattern that I could detect) average IOPS bursts to about 400/sec, hitting the cap allowed for this VM size, and the VM hangs and stops responding to anything. The only thing that helps is restarting the VM from the Azure portal, after which it works well again.
It is worth mentioning that all such bursts are read operations only. They also do not align in any way with the website traffic; on the contrary, some of the peaks happened when traffic on the website was at its daily minimum.
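To try to timestamp the bursts locally (rather than relying only on the Azure metrics), something like the following sampler could be left running; this is just a rough sketch, and it assumes the disk shows up as `sda` (the device name and interval would need adjusting):

```python
#!/usr/bin/env python3
# Rough read-IOPS sampler: logs reads completed per second from /proc/diskstats.
# Assumption: the disk to watch is "sda"; change DEVICE/INTERVAL as needed.
import time
from datetime import datetime

DEVICE = "sda"
INTERVAL = 5  # seconds between samples

def reads_completed(device):
    """Return the cumulative 'reads completed' counter for a device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[3])
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

prev = reads_completed(DEVICE)
while True:
    time.sleep(INTERVAL)
    cur = reads_completed(DEVICE)
    print(f"{datetime.now().isoformat()} read IOPS ~ {(cur - prev) / INTERVAL:.1f}",
          flush=True)
    prev = cur
```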
There are no cron jobs or other scripts that could create such load (at least none I am aware of). Apart from the standard Ubuntu installation there's only Nginx, MySQL, PHP and PHP-FPM. Oh, and there's fail2ban with default settings to cut off naughty brute-forcing bots.
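If I manage to catch a burst in progress, a quick per-process check along these lines might show which of those services is doing the reading; again only a sketch (it needs root to read `/proc/<pid>/io` for other users' processes):

```python
#!/usr/bin/env python3
# Take two snapshots of per-process read_bytes from /proc/<pid>/io and print
# the biggest deltas, to see which process is reading during a burst.
import os, time

def read_bytes_by_pid():
    stats = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/io") as io, open(f"/proc/{pid}/comm") as comm:
                name = comm.read().strip()
                for line in io:
                    if line.startswith("read_bytes:"):
                        stats[pid] = (name, int(line.split()[1]))
        except (FileNotFoundError, PermissionError, ProcessLookupError):
            continue  # process exited or is not readable
    return stats

before = read_bytes_by_pid()
time.sleep(10)
after = read_bytes_by_pid()

deltas = [(b - before[pid][1], pid, name)
          for pid, (name, b) in after.items() if pid in before]
for delta, pid, name in sorted(deltas, reverse=True)[:10]:
    print(f"{name:20s} (pid {pid}): {delta / 1024:.0f} KiB read in 10 s")
```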
Here is my IOPS chart from the Azure console for the last week. It is evident that most of the time the I/O load is quite low, and then it just skyrockets.
Any idea what could be the cause of this behavior and where to begin?