Does anyone have any experience with the IBM DS3400 SAN? I have found a system with about 30 servers connected to it (mostly virtual machines, including all the operating system drives), among them 2 SQL Servers (1 in very heavy use) and one Exchange server. I have had a look at IBM's website but can't find any guidance on what an acceptable load on this device is.
2 Answers
SAN performance has so many variables involved that I can understand why people don't get into them too much.
Any given SAN has main components that contribute to its limitations; they are:
- The controller(s) - these take and process requests, so both their overall computational and IO capabilities are key here - there's no point having lots of fast disks and/or multiple 8/10Gbps interfaces if the controllers can't keep up. Controllers themselves can be limited by design, age of components, cache(s) and of course the software they run.
- The disks - obviously 288 x 15krpm FC disks are going to be faster than 4 x 2TB SATA disks - but you usually buy SAN disks for one of two reasons: to give you capacity, and sometimes to create performance - of course if you're interested in performance then you need to take everything else into consideration too (see the back-of-the-envelope IOPS sketch after this list).
- The IO interfaces - again there's usually no point having lots of 8/10Gbps interfaces if the controllers or disks can't keep up.
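To make the disk point concrete, here's a back-of-the-envelope sketch in Python. The per-disk IOPS figures are common rules of thumb (not IBM specs), and the controller ceiling is a hypothetical placeholder - substitute your own measured numbers:

```python
# Back-of-the-envelope sketch: raw spindle IOPS vs. a controller ceiling.
# Per-disk figures are rules of thumb; the controller limit is a placeholder.

RULE_OF_THUMB_IOPS = {
    "15k_fc": 180,    # ~15krpm FC/SAS disk
    "10k_fc": 140,    # ~10krpm FC/SAS disk
    "7.2k_sata": 80,  # ~7.2krpm SATA/FATA disk
}

def raw_read_iops(disk_type: str, count: int) -> int:
    """Aggregate random-read IOPS if the spindles were the only limit."""
    return RULE_OF_THUMB_IOPS[disk_type] * count

fast = raw_read_iops("15k_fc", 288)    # ~51,840 IOPS raw
slow = raw_read_iops("7.2k_sata", 4)   # ~320 IOPS raw

CONTROLLER_LIMIT = 40_000  # hypothetical ceiling for an entry-level controller

print(f"288 x 15k FC : {fast:,} raw IOPS")
print(f"4 x 2TB SATA : {slow:,} raw IOPS")
# Whichever component saturates first sets the real ceiling:
print(f"Effective ceiling: {min(fast, CONTROLLER_LIMIT):,} IOPS")
```

The takeaway is the last line: lots of fast spindles behind a weak controller still gives you the controller's number, which is the original point about everything needing to keep up with everything else.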
I'm lucky, I often get to build SAN arrays for a single purpose - either fast as hell for DB work, or big for video content or backup - I have the luxury of getting separate systems for each function, but every so often I need to build something that covers a bit of both. Here's what I do:
- I dedicate the fastest available IO ports to the systems/sub-platforms that need performance - this way these machines (usually DB in my experience) get the full bandwidth of dedicated ports, and there's no opportunity for other, less demanding servers to stomp all over the available bandwidth. I also buy dedicated high-performance (i.e. SSD and/or 15k FC) disks and lay them out in the right number of shelves for optimum performance on the particular SAN (i.e. blocks of 8 shelves with HP EVA boxes and so on). If the SAN array is capable of hard partitioning I'll also dedicate a partition (and some of the cache) to that function.
- I then dedicate a good chunk of the remaining IO ports to mainstream-performance production systems; this way they can be trunked together, they're not subject to interference from the high- or low-performance systems, and I buy reasonably fast (usually 10krpm FC/SAS) disk. I may choose to run these in a dedicated partition depending on requirements.
- I then put all other ancillary/test/reference/backup etc. servers on the remaining pair of ports, knowing that they don't really care about performance and are unlikely to saturate the ports. I can then happily assign them slow (7.2krpm SATA/FATA) disks (I sometimes have to specifically buy disks with >30% duty cycle too). A sketch of this tier layout follows the list.
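As a rough illustration of the three tiers above, here's how the plan might be written down; the port names, shelf counts and role names are hypothetical placeholders, not a DS3400 or EVA configuration:

```python
# Illustrative tier plan only - ports, shelves and disk types are placeholders.

TIERS = {
    "high_performance": {                     # dedicated ports: DB servers
        "ports": ["fc0", "fc1"],              # fastest ports, never shared
        "disks": "SSD / 15k FC",
        "shelves": 8,                         # sized to the array's sweet spot
        "partition": "dedicated",             # hard partition + cache slice if supported
    },
    "mainstream": {                           # trunked ports: general production
        "ports": ["fc2", "fc3", "fc4", "fc5"],
        "disks": "10k FC/SAS",
        "partition": "optional",
    },
    "ancillary": {                            # remaining pair: test/reference/backup
        "ports": ["fc6", "fc7"],
        "disks": "7.2k SATA/FATA",
        "partition": "shared",
    },
}

def tier_for(server_role: str) -> str:
    """Map a server role to a tier; the roles here are examples only."""
    if server_role in ("oltp_db", "exchange"):
        return "high_performance"
    if server_role in ("web", "app"):
        return "mainstream"
    return "ancillary"
```

The point of writing it down like this is that every new server gets slotted into a tier deliberately, rather than landing wherever there happens to be free capacity.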
If you plan out your systems in this way you'll see gradual, not steep, predictable performance drop-off as you add servers and load. Obviously you don't mention any details of your array or usage pattern but there are situations where a single server could saturate a DS3400/NetApp and there are situations where literally thousands of servers would leave the same box with oceans of capacity left - it really does come down to system design and understanding your usage patterns.
Feel free to come back to us when you have more data.

It does NOT depend on the number of systems but on what they do. SQL + Exchange are heavy users. SQL in particular can use a LOT of disks to run properly (as can any database), but again it depends entirely on what you do. I have seen a SQL Server with 190 dedicated disks once. My own one is on 6 disks now, upgrading soon.

I don't think that, beyond the raw technical limits, any general guidance is possible here.
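To show why a busy database eats spindles like that, here's a rough sketch; the workload numbers and the RAID 5 write penalty are illustrative assumptions, not measurements from any particular system:

```python
# Why a write-heavy database needs many spindles: each host write on RAID 5
# costs roughly 4 back-end disk IOs (read data, read parity, write both).
import math

target_iops  = 10_000  # hypothetical workload requirement
write_ratio  = 0.4     # assume 40% of IOs are writes
raid_penalty = 4       # RAID 5 write penalty
disk_iops    = 180     # rule of thumb for one 15krpm disk

# Back-end IOs = reads + (writes * penalty)
backend_iops = (target_iops * (1 - write_ratio)
                + target_iops * write_ratio * raid_penalty)

print(math.ceil(backend_iops / disk_iops), "disks needed")  # -> 123
```

Crank the write ratio or the target IOPS up a little and 190 dedicated disks stops looking strange.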

- That's a fair answer; in my experience with HP EVA systems we have seen considerable degradation in read/write performance as we added additional servers. At some point the cards/processors of the DS3400 will hit a limit, and I am trying to work out how to determine that limit. I am seeing heavily constrained write performance even to dedicated disks at the moment. – u07ch Nov 23 '10 at 10:51
- Dedicated volumes, or dedicated arrays within the enclosure itself? Just checking, as it's not uncommon to find configurations where the volumes are spread across a single big RAID set. – Chris Thorpe Nov 23 '10 at 11:16
- On top of that, it really depends on the load again. 10 big data warehouses can overload a lot of disks that would otherwise be more than happy serving 1,000 web servers. – TomTom Nov 23 '10 at 12:32