There could be many reasons. Data corruption isn't limited to the causes you've outlined, and it doesn't necessarily mean a drive is broken or that anything broke it.
The SSD's firmware could be bad. The controller (old, new, or both) could be bad. A root or kernel process could have been running in bad memory and overwritten the beginnings and ends of the drives. The CPU might even be bad. It's also possible that all the drives really are bad (this doesn't happen often, but it does happen). If you are using software RAID or LVM, you might have upgraded to a buggy version, or simply hit a random bug.
The best approach is to take a byte-for-byte image of any drive you need to recover data from, and work only on the copy. Write the image onto a spare drive, recreate the partition table exactly as you expected it to be, and try mounting. Alternatively, extract the region where you expect the filesystem to start and mount it through a loop device, or run data recovery software against the image. That said, the easiest option by far is to restore from backup.
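As a rough sketch of the imaging-and-loop-mount steps above (device names, paths, and the partition offset here are examples; substitute your own, and consider GNU ddrescue instead of dd if the source drive has read errors):

```shell
# Image the suspect drive byte-for-byte onto scratch storage.
# conv=noerror,sync keeps going past read errors and pads bad blocks,
# so the image stays offset-aligned with the original.
dd if=/dev/sdb of=/mnt/scratch/sdb.img bs=1M conv=noerror,sync status=progress

# If you know where the filesystem began (e.g. the partition started at
# sector 2048, i.e. byte offset 2048 * 512), attach just that region:
losetup --find --show --offset $((2048 * 512)) /mnt/scratch/sdb.img
# losetup prints the loop device it allocated, e.g. /dev/loop0

# Mount it read-only so nothing touches the copy while you inspect it:
mount -o ro /dev/loop0 /mnt/recover
```

Working read-only against the image means a failed recovery attempt costs you nothing but time; you can always re-copy the image and try again.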
It isn't immediately clear what kind of hardware you are using. I would nonetheless run a full hardware test on the server: at least memtest, plus drive tests and a CPU test if you have a capable test suite. Test the drives on their current controller and on another controller if you can, and check their SMART status. Update everything in the storage path: the kernel, filesystem drivers, and LVM (if it is in use) in particular. If you have a hardware RAID device, consider upgrading its firmware.
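The drive-side checks might look like the following (a sketch, not a full test suite; `/dev/sda` is a placeholder for each member drive, and `smartctl`/`badblocks` come from the smartmontools and e2fsprogs packages respectively):

```shell
# Overall SMART health verdict for the drive:
smartctl -H /dev/sda
# Full attribute table; watch Reallocated_Sector_Ct,
# Current_Pending_Sector, and UDMA_CRC_Error_Count in particular:
smartctl -A /dev/sda
# Kick off an extended offline self-test (runs in the drive's firmware):
smartctl -t long /dev/sda
# Later, read back the self-test results:
smartctl -l selftest /dev/sda

# Non-destructive read pass over the whole surface
# (-s shows progress, -v is verbose; the default mode is read-only):
badblocks -sv /dev/sda
```

A rising CRC error count often points at cabling or the controller rather than the drive itself, which is worth knowing before you start replacing disks.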
I have seen this issue caused by faulty RAID controllers several times in the past, too. If the controller has blanked parts of multiple drives, get an RMA for it and put in a replacement.