This is in regard to mass HDD wiping/testing. The software I use, WipeDrive Pro 5 and 7, both run into the same limit. I originally thought it was a limitation of the software, but after speaking with the software company, I was told there is no limit beyond whatever the machine itself is limited to. I go through a lot of hard drives and need to verify their integrity and that all data is removed. I have a few expansion enclosures hooked to various brands of servers: IBM, HP, Dell. They all limit me to exactly 26 simultaneously visible, wipeable drives in the software. I just went into the LSI card setup and it shows every drive on each enclosure, 32 in total right now.
Is this a limit of the card itself being a 3 Gb/s PCIe card, of the PCIe lanes, of the x86 platform, or something else entirely? Could it even be the SAS cable type, SFF-8088, which has 26 pins? I just came across that while looking up the cable type. Perhaps that's the maximum number of simultaneous operations it can process? Any help would be greatly appreciated.
Just for completeness, what OS are you using? Also, would you detail exactly what controller and enclosures you're using and how everything's connected together? – asciiphil Jun 04 '13 at 20:49
Do you see all the drives under Linux? – Zoredache Jun 04 '13 at 20:50
This is a Linux-based utility that briefly flashes the Linux boot output while it starts up. I wasn't familiar with the distro, but it's marketed as WipeDrive Pro or Enterprise. I'm not in an operating system environment; this is just a bootable ISO that I use with an x3650 and an LSI SAS3444E card with one port, out through an SFF-8088 SAS cable into an IBM EXP3000, daisy-chained into an HP MSA70. I sometimes use two EXP3000s, but no matter what the configuration, 26 is my limit of drives that can be simultaneously wiped. The LSI adapter configuration utility shows all 32 drives; 26 has been the maximum every time. – BryanVoy Jun 04 '13 at 20:59
1 Answer
There is no such limit. However, the naming convention in Linux changes after the first 26 device names: /dev/sdaa is the next name after /dev/sdz. The hard limit on Linux is, if memory serves, 128 SCSI disks (which typically includes SAS and SATA disks), after which you run out of allocated device numbers. In any case, I strongly suspect your vendor's support is in error (at the very least, they should have identified the 128-device limit).
Very few distributions ship a /dev containing device names beyond /dev/sdz, so if the contents of /dev are not being constructed at runtime (as they commonly are) and a static /dev is in use, that may well be the cause. If you can get a shell, add a device node with
mknod /dev/sdaa b 65 160
to create a reference to the 27th disk. (Note the b for a block device.) You may then be able to operate on it.
The device number is calculated as follows. Each major number covers 16 disks: major 8 holds the first 16 drives (sda through sdp), major 65 holds drives 17 through 32 (sdq through sdaf), and majors 66 through 71 continue the sequence up to drive 128. The minor number increases in increments of 16 per drive, giving 16 minor numbers per drive: the 0th represents the whole device, and the 1st through 15th represent partitions (so only 15 partitions per device are permitted).
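If you need nodes for several missing disks, the arithmetic is easy to script. Here's a rough sketch, assuming the static sd numbering above; the names and range cover disks 27 through 32 (sdaa through sdaf), so adjust both to match your setup:
n=27
for name in sdaa sdab sdac sdad sdae sdaf; do
    # disks 17-128 live on majors 65-71, 16 disks per major
    major=$(( 65 + (n - 17) / 16 ))
    # whole-disk minors step by 16 (the minors in between are partitions)
    minor=$(( ((n - 17) % 16) * 16 ))
    [ -b "/dev/$name" ] || mknod "/dev/$name" b "$major" "$minor"
    n=$(( n + 1 ))
done
After that you should have block nodes for all 32 disks, and the wiping tool may be able to see them.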
You may also want to try a recent live CD of something (Ubuntu, Gentoo, or anything else reasonably recent) to see if the devices are detected there. Also check dmesg to ensure your kernel is actually finding them.
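For instance, from a shell in the wiping environment or on a live CD, something like the following shows how many disks the kernel itself sees, independent of what exists in /dev:
# one log line per successfully attached disk
dmesg | grep -c 'Attached SCSI disk'
# every block device the kernel knows about, whether or not a /dev node exists
cat /proc/partitions
If all 32 drives show up there but only 26 names appear under /dev, a static /dev is almost certainly the culprit.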
