I have looked through many similar questions on here over the past few hours, but none of them quite address the issues I am facing at work.
So we have a new Hetzner machine: 4 x 3TB drives, 64GB RAM, an LSI MegaRAID SAS 9260-4i (which just had a BBU fitted), a Xeon E3-1275, and good network connectivity. It's perfect for our use case.
The Problem
I am the sysadmin / Linux guy. 90% of things I am fine with, but I rarely build servers from scratch, and all our other servers use software RAID (mdadm). I had never set up hardware RAID from scratch with megacli; I have done it now, but any feedback is appreciated. Other than ext2, ext3, ext4 and btrfs, I have no experience with what to expect from XFS or ZFS.
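For reference, this is roughly the shape of the command I used to create the array. I've written it out as a dry-run sketch that just prints the command; the enclosure and slot numbers match the megasasctl output below, but treat the exact cache flags as approximate rather than gospel:

```shell
# Dry-run sketch: build and print the megacli command, don't execute it.
# Enclosure 252, slots 0-3, as shown in the megasasctl output below.
ENCL=252
DRIVES="[${ENCL}:0,${ENCL}:1,${ENCL}:2,${ENCL}:3]"
# WB = WriteBack, RA = ReadAhead, Direct = no controller read cache,
# mirroring the cache policy megacli reports further down.
CMD="megacli -CfgLdAdd -r5 ${DRIVES} WB RA Direct -a0"
echo "$CMD"
```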
What I would like advice on
- Because RAID 5 gives us more space than RAID 10, my boss wants to opt for RAID 5. I am not sure RAID 10 would make much difference anyway, as all files are served over the internet to mostly UK users (UK -> Germany). Do you think RAID 5 versus RAID 10 will make much performance difference?
- My boss has requested that we use XFS as the filesystem, and I am partial to this. We will not be generating that many files; we are really just looking for a filesystem like something a NAS would use, storing files until our update every 2 hours, when we send them out to clients. We will also be writing a fair amount of data and using quite a few IOPS at some stages of the day, and developers will sometimes connect to the server (via a website) to test their new software releases. For general-purpose use I was planning on just ext4 (or ext3), but if you think XFS or even ZFS would be better, I would love to learn.
Any suggestions?
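For what it's worth, I did work out the capacity side: RAID 5 here gives (4 - 1) x 3TB = 9TB usable (which matches the 8382 GiB the controller reports below), versus 2 x 3TB = 6TB for RAID 10. One thing I also picked up while reading is that on hardware RAID 5, mkfs.xfs can be told the stripe geometry so writes align with the array. A sketch of working out the values for this box, assuming a hypothetical 256 KiB strip size (the real value would come from megacli -LDInfo) and the vg0-root volume from the lsblk output below:

```shell
# Hypothetical strip size; read the real one from: megacli -LDInfo -Lall -aAll
STRIP_KB=256
DRIVES=4
DATA_DRIVES=$((DRIVES - 1))   # RAID 5: one drive's worth of capacity goes to parity
# Build the mkfs invocation (printed, not run): su = strip size, sw = data drives.
MKFS="mkfs.xfs -d su=${STRIP_KB}k,sw=${DATA_DRIVES} /dev/vg0/root"
echo "$MKFS"
```

Is that alignment actually worth bothering with on a setup like this?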
- LVM: this is something I personally wanted to add to all our new server builds. Snapshots and volume resizing have saved our @$$ so many times on two of our servers that I thought it would be a good idea to make LVM part of a standard build. I have only ever used it with ext4 filesystems, though. Should that matter? I am guessing not?
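From what I have read, the LVM side should work the same under XFS as under ext4, with two caveats I want to confirm: XFS can be grown online (xfs_growfs) but never shrunk, and mounting an XFS snapshot needs -o nouuid because the snapshot carries a duplicate filesystem UUID. A dry-run sketch of the commands I have in mind (volume names vg0/root as in the lsblk output below; the sizes are made up):

```shell
# Dry-run sketch: build and print the LVM commands, don't execute them.
VG=vg0; LV=root
# 50G of headroom for blocks that change while the snapshot exists (made-up size).
SNAP="lvcreate -s -L 50G -n ${LV}-snap /dev/${VG}/${LV}"
# XFS grows online via xfs_growfs on the mount point, but cannot shrink.
GROW="lvextend -L +100G /dev/${VG}/${LV} && xfs_growfs /"
# nouuid: XFS otherwise refuses to mount a second filesystem with the same UUID.
MOUNT="mount -o nouuid,ro /dev/${VG}/${LV}-snap /mnt"
printf '%s\n' "$SNAP" "$GROW" "$MOUNT"
```

Does that match how people actually run snapshots on XFS, or am I missing something?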
I know this is a long, very specific story, but I would really appreciate any help you can give, or even a pointer in the right direction. I have been reading about these subjects on various boards and articles since Monday, and I can now tell my boss is getting frustrated with me :(
root# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8.2T 0 disk
├─sda1 8:1 0 512M 0 part /boot
├─sda2 8:2 0 8.2T 0 part
│ ├─vg0-root 253:0 0 8.1T 0 lvm /
│ ├─vg0-swap 253:1 0 64G 0 lvm [SWAP]
│ └─vg0-tmp 253:2 0 20G 0 lvm /tmp
└─sda3 8:3 0 1M 0 part
root# megasasctl
a0 LSI MegaRAID SAS 9260-4i encl:1 ldrv:1 batt:FAULT, module missing, pack missing, charge failed
a0d0 8382GiB RAID 5 1x4 optimal
a0e252s0 2794GiB a0d0 online
a0e252s1 2794GiB a0d0 online
a0e252s2 2794GiB a0d0 online
a0e252s3 2794GiB a0d0 online
root# megacli -LDInfo -Lall -aAll | grep 'Cache Policy:'
Default Cache Policy: WriteBack, ReadAhead, Direct, Write Cache OK if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, Write Cache OK if Bad BBU
root# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 16G 0 16G 0% /dev
tmpfs tmpfs 3.2G 600K 3.2G 1% /run
/dev/mapper/vg0-root xfs 8.2T 11G 8.1T 1% /
tmpfs tmpfs 16G 0 16G 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/vg0-tmp reiserfs 20G 33M 20G 1% /tmp
/dev/sda1 ext4 488M 52M 401M 12% /boot
tmpfs tmpfs 3.2G 0 3.2G 0% /run/user/0
Thank you in advance for any help.