0

Just installed 4 × 2 TB Hitachi 5K3000 drives into an IBM x3500 7977 server. It's got an Adaptec 8k ServeRAID card. It turns out the card doesn't detect the drives as SATA II (3.0 Gb/s); it only sees them as SATA I (1.5 Gb/s). I checked, and it appears there's an issue with the backplane that required IBM to limit all SATA drives to 1.5 Gb/s.

The question is: these drives are intended for serving media only, so the box will basically function as a file server. Does the link speed make much of a difference from a performance perspective? Over time I intend to add matching 2 TB disks to the array. The only heavy usage would be occasional large file copies over the network to an external USB drive, aside from random movie watching or downloading.

The OS would be Openfiler booting off a USB stick, and a separate LSI 8308ELP controller would drive 4 × 300 GB 15k SAS disks serving DB/VMDK workloads. This way the box has tiered storage.

I'm also open to OS suggestions. I've only done some basic reading about ZFS/unRAID and various *nix-based distros. Openfiler has been running OK for the last 18 months, but perhaps there is something better out there, especially since I intend to add disks as requirements increase.

Garuda
  • 61
  • 2
  • 8

3 Answers

7

I build video/media servers for a living, and while normally I'd tell you to optimise the hell out of every part of your system (by ensuring as clear a path from disk to NIC as possible, which here would mean changing the disk subsystem to match your disks), in this case you've left out the most important piece of information: your uplink speed. You don't mention it. I may be reading too much into that, but I'm going to make a broad assumption that it's no more than 1 Gbps. If that's the case, then even a RAID 10 array of 4 × 7.2k disks, coupled with a decent amount of cache, should keep a 1 Gbps link pretty busy; in other words, I wouldn't worry about the 1.5 vs. 3.0 Gb/s thing too much. That said, if you can replace the disk subsystem easily and cheaply then I would, but it depends on the effort and cost.
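To make the "keep a 1 Gbps link pretty busy" claim concrete, here's a quick sketch. The ~120 MB/s per-disk sequential figure is an assumed typical number for a 7,200 rpm 2 TB drive, not a measured one, and this ignores seek overhead on non-sequential workloads:

```python
# Back-of-the-envelope: can a 4-disk RAID 10 of 7.2k SATA drives
# keep a 1 Gbps link busy? Per-disk throughput is an assumption.

GBE_PAYLOAD_MBPS = 1_000 / 8          # ~125 MB/s raw on a 1 Gbps wire

def raid10_read_mbps(disks: int, per_disk_mbps: float) -> float:
    """Sequential reads can be serviced from every spindle."""
    return disks * per_disk_mbps

def raid10_write_mbps(disks: int, per_disk_mbps: float) -> float:
    """Each write lands on a mirror pair, so only half the spindles count."""
    return (disks // 2) * per_disk_mbps

read = raid10_read_mbps(4, 120)       # 480 MB/s
write = raid10_write_mbps(4, 120)     # 240 MB/s
print(read > GBE_PAYLOAD_MBPS, write > GBE_PAYLOAD_MBPS)  # True True
```

Even the slower write path is roughly double what a single GbE port can carry, which is why the disk link speed is unlikely to be the bottleneck here.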

What worries me much more is your plan to boot from USB. While that will work, why buy a system with a highly resilient RAID 10 media array and then put a slow single point of failure in front of it as your boot drive? Just boot from the same media disks: you'll lose very little space on them and be a lot more reliable.

As for your OS, it depends on which service protocols you want the box to offer, but you can't go too wrong with a general-purpose Linux (CentOS/Debian etc.) or with Openfiler. You'd need to come back with more information for us to really nail that one.

Chopper3
  • 101,299
  • 9
  • 108
  • 239
  • Well, I'm flexible in terms of uplink. I've got two onboard 1 GbE Broadcom ports (not jumbo-frame capable), an additional four Intel Pro/1000 1 GbE server ports (supporting jumbo frames), and a QLogic 4 Gb Fibre Channel HBA. I'd prefer to have traffic over copper, as I'd need to replace my Fibre Channel switch. I wasn't planning on RAID 10, only RAID 5, even for the DBs running off the 15k SAS disks. Booting off the disks is easy for me; I have no issue doing that. I was just thinking USB would be useful to abstract the disks from the OS. – Garuda Apr 11 '11 at 17:41
  • Sorry, those are all internal interfaces. What is your actual outgoing cleared bandwidth? Not the port speed, the guaranteed minimum. – Chopper3 Apr 11 '11 at 18:55
  • Well, the switch is only a 1 GbE switch, though it's an enterprise L3 switch. The uplink into it will be two teamed 1 GbE NICs, so whatever the combined real-world speed of two teamed 1 GbE NICs with jumbo frames works out to. Is that helpful? Thanks! – Garuda Apr 13 '11 at 18:25
3

Flat out, SATA I still outperforms gigabit Ethernet, so that shouldn't be a problem for you. Whether or not you reach I/O saturation will depend on actual performance and usage patterns, of course. If you start trunking/teaming multiple GbE cards, though, you may begin to notice the reduced link speed.
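The comparison comes down to two numbers. SATA uses 8b/10b line encoding (10 bits on the wire per data byte), so usable throughput is line rate divided by 10; the Ethernet figure below is the raw wire rate, and real TCP/IP overhead would pull it down a bit further:

```python
# Rough throughput comparison, SATA I vs gigabit Ethernet.
# Both numbers are theoretical ceilings, not benchmarks.

SATA1_MBPS = 1.5 * 1000 / 10   # 8b/10b encoding: 150 MB/s usable on a 1.5 Gb/s link
GBE_MBPS = 1.0 * 1000 / 8      # 125 MB/s raw on a 1 Gbps link

print(f"SATA I ~{SATA1_MBPS:.0f} MB/s vs GbE ~{GBE_MBPS:.0f} MB/s")
```

So a single SATA I link already has more headroom than one GbE port; it takes a second teamed NIC before the network side can outrun the disk link.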

sysadmin1138
  • 133,124
  • 18
  • 176
  • 300
  • Yes, I'll definitely at least be trunking two of the 1 GbE NICs for iSCSI (the 300 GB 15k SAS array) and two for NFS (the 4- or 6-disk 2 TB 7.2k SATA array). – Garuda Apr 11 '11 at 18:08
0

As long as the controller isn't doing anything goofy like funnelling all of the disks over the same 1.5 Gb/s channel, your drives will have a hard time saturating a SATA I link.

From some quick googling, those drives (I'm assuming you mean the 5K3000? I can't find a 3K5000) push around 140 MB/s sequential, which comes to only about 75% saturation of a 1.5 Gb/s SATA I connection.
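The ~75% figure falls out of dividing the quoted drive throughput by the raw SATA I line rate (line encoding is ignored here, so this is an upper bound on how full the link gets):

```python
# Reproducing the ~75% saturation estimate for a 5K3000 on SATA I.

drive_mbps = 140                    # quoted sequential rate for the 5K3000
sata1_raw_mbps = 1.5 * 1000 / 8     # 187.5 MB/s raw line rate
saturation = drive_mbps / sata1_raw_mbps
print(f"{saturation:.0%}")          # prints "75%"
```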

Shane Madden
  • 114,520
  • 13
  • 181
  • 251
  • OK, I've got the 4 drives now, but having looked at the trouble involved in expanding the volume in Openfiler at a later date, I'm inclined to grab another couple of disks tonight and just build the RAID 5 array with 6 now. I don't see any realistic likelihood of a transfer occurring across the network at 150 MB/s, unless I'm mistaken about that (layer 3 switch with jumbo frames). – Garuda Apr 11 '11 at 18:06