
Overview

I currently have a CentOS 7 server set up as a VM host using KVM. It has a 4TB (2x2TB mirrors) ZFS array that will be used for VM and file storage. This was set up as part of a process to rebuild an AD domain from scratch on top of Server 2012 R2 (the current 2008 R2 domain left by the old IT guy is a bit of a mess, and runs on an old Core 2-era desktop machine). We need Server 2012 R2 to implement Group Policy. There's a mix of 7/8.1/10 machines, but everything will be running Windows 10 likely within a few months. Users have a share on the server which is mounted to their machine as Z:, and they also access shares on the server which hold documents used by various departments. We are also going to implement a new backup system (Crashplan/Carbonite/Mozy are the most likely options at this point).

Scale

The network consists of about 55 computers in the main office, as well as other computers outside of the office which will be connected later via OpenVPN. In total, I'm looking at around 150 computers. There are also at least that many users. In addition, the new domain will have somewhere around one to two dozen security groups.

Options

I'm trying to decide how best to handle file storage and sharing for the network. Specifically, whether I should bother with Samba or not, and if not, whether I should store the files in a disk image, or create a data volume on the ZFS array which is then passed to the Server 2012 VM as a raw block storage device.

Storing files in a disk image for use by a VM has always seemed like a poor solution to me. It's simple and will work, but I don't particularly like it. If I need to transfer those files to a Windows server, I have to either mount the image, or boot up the VM. Having all the files in a single image is convenient for transferring, but is an issue if the transfer is interrupted. This can be solved by using something like rsync, but that means I'm limited to transferring only to another Linux box. The other issue with using a disk image is that I'm then restricted to running the backup software inside of the VM (else I lose file-level access). Though I suppose I could probably mount the image read-only on the host, or add the image to another VM with read-only access specifically for the backup software.
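For what it's worth, getting file-level access to the image on the host without booting the VM is workable. A sketch, assuming a hypothetical image path and partition layout (CentOS 7 needs the qemu tools, and ntfs-3g from EPEL to read NTFS):

```shell
# Hypothetical sketch: expose a VM disk image's NTFS filesystem read-only
# on the CentOS host so backup software can see individual files.
# The image path /vmstore/fileserver.qcow2 and partition 2 are assumptions.

modprobe nbd max_part=8                                     # load the network block device module
qemu-nbd --connect=/dev/nbd0 --read-only /vmstore/fileserver.qcow2
mount -o ro /dev/nbd0p2 /mnt/fileserver                     # NTFS partition; requires ntfs-3g

# ... point the backup agent at /mnt/fileserver ...

umount /mnt/fileserver
qemu-nbd --disconnect /dev/nbd0
```

This avoids running the backup software inside the VM, at the cost of an extra moving part (the nbd mapping) in the backup routine.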

I know that ZFS allows you to create data volumes which function as block-level devices that could then be passed through to the VM as a physical drive. This would basically mean having an NTFS partition in the ZFS array. I could give read-only access to this volume to another VM which would handle the backup software, or I could mount the volume as read-only on the bare-metal machine and potentially run the backup software inside of a Docker container. I could also transfer the files to another Linux host through a ZFS snapshot. Transferring to a Windows host would still mean mounting the volume or starting the VM.
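As a concrete illustration of the zvol route, a sketch (the pool name `tank`, the VM name, and the size are assumptions, not recommendations):

```shell
# Hypothetical sketch: carve a ZFS volume (zvol) out of the pool and
# attach it to the Server 2012 R2 guest as a raw block device.

zfs create -V 2T -o volblocksize=64K tank/fileserver-data
# the block device then appears at /dev/zvol/tank/fileserver-data

virsh attach-disk fileserver /dev/zvol/tank/fileserver-data vdb \
    --targetbus virtio --persistent
# Windows sees a new raw disk once the virtio drivers are installed;
# it can then be initialized and formatted as NTFS inside the guest.
```

Note that the zvol should never be mounted read-write on the host while the VM is running; read-only access for backups is safest taken from a ZFS snapshot of the zvol rather than the live device.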

The final option is to store all the files directly in the ZFS array and run Samba to share them out over the network. I'm expecting to have a fairly complex permissions hierarchy though, and I'm worried about managing that through Samba (I've only ever used Samba in home-server environments). On the other hand, I know I can effectively manage the hierarchy in Server 2012 R2, especially with PowerShell. The other thing to consider is that I can run Samba directly on the VM host, inside of a VM, or inside of a Docker container, so it's fairly flexible.
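Since permission management is the main worry with Samba, it may help to see what an AD-joined member server typically looks like. This is a hypothetical sketch, not a tested config (the realm, idmap ranges, and paths are all assumptions); the relevant part is `vfs objects = acl_xattr`, which stores full NT ACLs as extended attributes so the hierarchy can be managed from the Windows security tab or PowerShell rather than from the Linux side:

```ini
# Hypothetical smb.conf fragment for a Samba member server joined to AD
[global]
    security = ads
    realm = CORP.EXAMPLE.COM              ; assumed domain name
    workgroup = CORP
    winbind use default domain = yes
    idmap config * : backend = tdb
    idmap config * : range = 100000-199999
    idmap config CORP : backend = rid     ; map AD SIDs to stable UIDs/GIDs
    idmap config CORP : range = 200000-999999
    vfs objects = acl_xattr               ; store full NT ACLs in xattrs
    map acl inherit = yes
    store dos attributes = yes

[users]
    path = /tank/shares/users
    read only = no
```

With this approach Samba keeps the Windows ACLs itself and the underlying POSIX permissions become largely irrelevant, which sidesteps most of the cross-platform permission-mapping headaches.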

Question

Which of these options will give me the most flexibility in the long run, without introducing unnecessary overhead or complexity? I want to keep my responsibilities as separated as I can, so I like the idea of having Samba share out the files while Server 2012 R2 handles AD and Group Policy (though I could also run a separate Server 2012 R2 instance instead of Samba for that purpose). However, like I said, I'm hesitant to use Samba in a network of this scale. I'm also worried about difficulties troubleshooting file permissions if I use Samba (Windows is bad enough as it is, and that's the native option).

Any other potential solutions are also welcome.

dghodgson

1 Answer


We have a similar setup with one big advantage: our ZFS storage is provided by Solaris. Why is this such a big deal? Solaris has an integrated SMB server. As long as your zpools are secured in a way that lets you sleep well at night, I would recommend something like this:

  • Set up a Solaris Host instead of your CentOS Machine
  • Join the Solaris Host to your Domain
  • Set up Identity mapping
  • Create NFS Shares for the VM Storage
  • Create SMB Shares (on ZFS Datasets) for File Storage
  • Properly set NFSv4 ACLs on your file storage datasets
  • Create Snapshot routine on Datasets
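The steps above can be sketched roughly as follows. This is a hedged outline, not my actual documentation: the dataset and share names are assumptions, and the exact share-property syntax differs between Solaris releases (the `sharesmb` form below is the older OpenSolaris/illumos style):

```shell
# Hypothetical sketch of the Solaris side of this setup.

svcadm enable -r smb/server                     # start the in-kernel SMB server
smbadm join -u Administrator corp.example.com   # join the AD domain (assumed name)

# SMB-shared dataset for file storage; nbmand enables Windows-style mandatory locking
zfs create -o nbmand=on -o sharesmb=name=users rpool/export/users

# start from a permissive NFSv4 ACL, then tighten per-folder from Windows or with chmod A...
chmod A=everyone@:full_set:fd:allow /export/users

# simple daily snapshot routine via cron; these snapshots back the
# "Previous Versions" dialog on Windows clients
echo '0 2 * * * zfs snapshot rpool/export/users@$(date +\%Y\%m\%d)' | crontab -
```

Solaris 11 also ships time-slider/auto-snapshot services that can replace the cron entry.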

Benefits:

  • If a Windows user/host "mounts" such an SMB dataset, it can authenticate to the Solaris box with its Kerberos ticket (issued by the Windows DC)
  • No need to do complicated things; all the needed services are included in Solaris
  • Windows can handle the ZFS snapshots of the SMB shares. They are accessible in Windows via the Previous Versions file/folder dialog
  • NFSv4 ACLs are fully supported by Windows and Solaris. They can handle even the most complex permissions
  • No need to run Linux VMs if you want to deploy additional services, just create a Solaris Zone.

I can give you my setup documentation if you decide to go down the UNIX way.

embedded
  • So Windows accesses the files over NFS, and the rest of the network accesses them from the host over SMB? Identity mapping would mean having an account for every user on the host machine. That seems kind of cumbersome, but I guess I'd have to do that anyway if I was using Samba+NFS on CentOS. – dghodgson Aug 12 '16 at 17:32
  • Either way, I'm glad to see that someone has a similar setup in production, but switching to Solaris is not an option for me. We're using CentOS mainly due to budget constraints. I assume Illumos would likely work as well, but I'm hesitant to use an OS that I have no experience with as the VM host on bare-metal. – dghodgson Aug 12 '16 at 17:33
  • The identity mapping is really easy, just a daemon you enable, as long as you have Identity Management for UNIX enabled in your Active Directory (sometimes called the UNIX schema). – embedded Aug 14 '16 at 15:42
  • Identity management on Solaris is nearly flawless if you use AD (either Windows Server or samba4 works), you just have to make sure that the file server can connect to a domain controller at boot time. The only advantage of Windows Server or samba4 over Solaris is support for SMB3.0, which includes encryption. If you already use IPSec/OpenVPN or do not care about encrypted traffic, I would suggest Solaris. – user121391 Aug 16 '16 at 09:15
  • Additionally you have a different exposure to security flaws. For example, the Badlock bug that affected Windows and samba4 on Linux was not affecting illumos (OpenSolaris): http://wiki.illumos.org/display/illumos/Response+to+the+badlock.org+CVEs If this helps you depends on the circumstances, of course. – user121391 Aug 16 '16 at 09:24