
I am looking to outfit my new server (new to me) with larger-capacity disks. The server currently has four 146 GB SAS disks in it. My original thought was to just buy larger SAS disks and replace or add to the existing configuration. The issue I have run into is that those drives tend to get a little pricey. So I started thinking.

Do I even need SAS disks? What types of workloads need that kind of bandwidth (bus bandwidth)?

My goal is to set up a single server running either Linux or FreeBSD with ZFS, and use the ZFS pool as storage for a Xen instance running on the same machine. Basically a small virtualization setup for non-production, non-critical usage.
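Concretely, I was picturing something like the sketch below: build the pool, carve a zvol out of it, and hand that to Xen as the guest's disk (the pool name, device paths, and sizes here are placeholders, not an actual layout):

    # Hypothetical pool from two disks in a mirror (device names are examples)
    zpool create tank mirror /dev/sdb /dev/sdc

    # Carve out a 40 GB zvol to back one Xen guest's virtual disk
    zfs create -V 40G tank/vm-disk0

    # Then reference the zvol from the Xen domU config, e.g.:
    #   disk = [ 'phy:/dev/zvol/tank/vm-disk0,xvda,w' ]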

Is there any reason why I need to use SAS drives? Is there more to it than just RPM and bandwidth?

I would still be using "enterprise" SATA drives from [insert server manufacturer here], so I am under the impression reliability would not be a factor in my choice, right?

AtomicPorkchop

2 Answers


I'd use SAS in just about every case, unless this is a home system that won't be running production workloads.

It's less about speed and more about error correction, the protocol, and the reliability of the entire system.
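For instance, the protocol difference is visible in what the drives report. On a SAS drive, smartctl can dump the SCSI error counter log, with per-operation totals of corrected and uncorrected errors, which SATA drives don't expose in the same form (the device name below is hypothetical):

    # Full SMART/health report; on a SAS drive this includes the error counter log
    smartctl -a /dev/sda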

ewwhite

Under any real disk load you will suffer with SATA drives, unless you have a large array that spreads the load across many disks, such as a RAID 10.
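Since the question mentions ZFS: the ZFS analogue of RAID 10 is a pool of striped mirrors, which spreads reads and writes across all the mirror pairs. A rough sketch, with hypothetical device names:

    # Two mirrored pairs striped together -- ZFS's RAID 10 equivalent.
    # I/O gets distributed across both pairs, which eases the per-disk load.
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd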

I have servers with SATA drives running application servers (Tomcat) that have no disk activity other than Tomcat's logs and regular OS housekeeping.

I have remote desktop servers with 2 SATA drives in a mirrored RAID that are OK until the server starts swapping. Once it swaps, the server is toast and generally needs a reboot.

I don't have numbers to share, but in my experience, if you have any sort of load and you don't have a large RAID 10 or similar, SAS will make a big difference whenever your workload requires disk access.

You state that this is not for a production setup, so I would say you could be fine with SATA drives, but it depends on what your non-production use is. If it is to store a bunch of VMs that mostly sit idle, available to test something here and there, and your tests don't require performance (i.e., not benchmark or performance tests), then you can probably go the SATA route safely.

But don't make the mistake of bringing this to production or throwing even one production VM on it!

ETL
  • This would be 8 disks in total connected by a backplane to an HBA. I am bypassing the RAID controller since I am using ZFS. – AtomicPorkchop Feb 25 '15 at 01:47