
Right now I have a FreeBSD host with ZFS and NFSv4. It is replicated to another FreeBSD box for backup purposes.

The ZFS features that are important to me are:

  • software RAID6
  • snapshots, or some other way of replicating to another host (see the sketch after this list)
  • quotas
  • ACLs
  • replacing a failed disk without taking the host offline
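
For reference, the snapshot-based replication between the two boxes works along these lines (a minimal sketch; the pool/dataset name tank/data and the hostname backuphost are placeholders):

# take a point-in-time snapshot on the primary
zfs snapshot tank/data@2013-10-08
# ship it to the backup box; subsequent runs can send incrementally (zfs send -i)
zfs send tank/data@2013-10-08 | ssh backuphost zfs receive -F backup/data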

Question

Could the same or similar setup be done with XFS or GlusterFS on CentOS 6?

Update

The hardware is

  • Supermicro CSE-847E16-R1400LPB chassis, 36 hot-swap bays
  • Supermicro H8DG6-F AMD dual-G34 mainboard
  • AMD Opteron 6320, 2.8GHz 8-core, 8MB L2 cache, 6400MT/s
  • 64GB RAM and 128GB swap

Each host has 36*3TB of space in RAIDZ2, so roughly 100TB usable, of which 50TB is used.
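
A pool that size would normally be built from more than one RAIDZ2 vdev; a minimal sketch of one possible layout (the pool name tank and the da* device names are assumptions; two 18-disk vdevs give 2*(18-2)*3TB = 96TB, close to the stated usable space):

zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13 da14 da15 da16 da17 \
  raidz2 da18 da19 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 da30 da31 da32 da33 da34 da35

# replacing a failed member online, per the feature list above
zpool replace tank da7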

It seems that it is the Ubuntu clients that can crash the host on heavy reads. For now there are ~5 NFS clients. No read caching.

No NFSv4 tuning besides enabling jumbo frames and the buffer settings below:

# network mbuf clusters; /boot/loader.conf is read at boot time
echo 'kern.ipc.nmbclusters="32768"' >> /boot/loader.conf

# raise the socket buffer ceiling and default TCP send/receive buffers
echo 'kern.ipc.maxsockbuf=16777216' >> /etc/sysctl.conf
echo 'net.inet.tcp.sendspace=262144' >> /etc/sysctl.conf
echo 'net.inet.tcp.recvspace=262144' >> /etc/sysctl.conf
# enable RFC 1323 TCP window scaling and timestamps
echo 'net.inet.tcp.rfc1323=1' >> /etc/sysctl.conf
# caps for the auto-tuned TCP buffers
echo 'net.inet.tcp.sendbuf_max=16777216' >> /etc/sysctl.conf
echo 'net.inet.tcp.recvbuf_max=16777216' >> /etc/sysctl.conf
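
To confirm the tunables are active after a reboot, the values can be read back (names taken from the settings above):

sysctl kern.ipc.nmbclusters kern.ipc.maxsockbuf
sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max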
Sandra
  • Why do you want to move away from ZFS? – Joshua Miller Oct 08 '13 at 15:52
  • ZFS is perfect, but NFSv4 on FreeBSD crashes the host on heavy reads. – Sandra Oct 08 '13 at 16:02
  • Why not do it on the ultimate ZFS host: Solaris 11.1. If this is non-production it wouldn't cost you, otherwise you'll have to cough up $1K for your particular config. I believe that will give you a very stable (and fast) ZFS host and ditto for NFSv4. – unixhacker2010 Oct 18 '13 at 12:18

2 Answers


Hey there...

I read this question as really being a problem with the FreeBSD NFS stack...

ZFS works very well on its supported platforms. So much so that I've moved most of my ZFS systems running Solaris and NexentaStor to Linux (RHEL/CentOS), thanks to the ZFS on Linux project. If you're using ZFS now, moving to anything else is a step backwards.

I'm curious about the following, though:

  • How much data are you storing?
  • How many NFS clients do you have?
  • Have you performed any NFS tuning on your existing servers?
  • Are you using any form of L2ARC read caching on the existing setup? How much RAM do you have?
  • What is the hardware configuration of your servers?

Regardless of the answers to the above, you have a few options...

  • Fix or debug your FreeBSD issue. NFS shouldn't crash servers. It may be worth getting to the root cause of this problem if you have a lot of time invested in this setup.
  • Convert to ZFS on another platform. NexentaStor, Linux, Solaris, and OpenIndiana are all pretty solid on the NFS side; a minimal migration sketch follows below.
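
For the second option, moving an existing pool is mostly an export/import exercise; a minimal sketch, assuming a pool named tank with a dataset tank/data, and a ZFS on Linux release that understands the pool's on-disk version:

# on the FreeBSD host: cleanly export the pool
zpool export tank

# on the Linux host, after installing ZFS on Linux and attaching the disks
zpool import tank
zfs set sharenfs=on tank/data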

All in all, some combination of XFS and a cluster filesystem can do some of the same things as ZFS, but it's not a direct comparison; see the sketch below. I don't think you should abandon ZFS yet.
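
For a sense of what that would look like on CentOS 6, here is a rough sketch only (device names, mount point, volume name, and hostnames are placeholders; note there is no ZFS-style checksumming, and snapshots would need LVM underneath):

# software RAID6 across 18 disks, with XFS and user quotas on top
mdadm --create /dev/md0 --level=6 --raid-devices=18 /dev/sd[b-s]
mkfs.xfs /dev/md0
mount -o inode64,usrquota /dev/md0 /export

# online replacement of a failed member
mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
mdadm /dev/md0 --add /dev/sdc

# GlusterFS as the cluster layer, replicating to a second host
gluster volume create backupvol replica 2 host1:/export/brick host2:/export/brick
gluster volume start backupvol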

ewwhite
  • I have now updated the OP. Do I understand correctly that you use ZFS on Linux in production? If so, how does it handle hot-swapping failed disks? Have you had any problems with ZFS on Linux? Reading about OpenIndiana, I don't see much of a community or many package updates. Is that correct? – Sandra Oct 09 '13 at 00:24
  • I've been using ZFS in production for a year. See [this post](http://serverfault.com/questions/543686/what-kind-of-volume-storage-management-has-the-largest-support-feature-set-the/543692#543692) from last week about my experiences. It's been very stable. My only problem came when I needed some specific tuning in order to run ZFS on Linux with Fusion-io SSD cards. And yes, disk hot-swap works as long as your server's backplane supports it. – ewwhite Oct 09 '13 at 00:44
  • What about combining ZFS with GlusterFS? – CMCDragonkai Jun 20 '14 at 05:07

I would go for a mix of technologies.

You can also opt for the fairly new Btrfs if you like adventures.
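
A hedged sketch of how Btrfs would map onto the feature list (its RAID5/6 code was explicitly experimental in 2013; device names, paths, and the hostname backuphost are placeholders):

# RAID6 for data and metadata across four example disks
mkfs.btrfs -d raid6 -m raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mount /dev/sdb /export

# read-only snapshot, shippable to another host with send/receive
btrfs subvolume snapshot -r /export /export/@backup
btrfs send /export/@backup | ssh backuphost btrfs receive /backup

# quotas via qgroups
btrfs quota enable /export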

Spack