A Linux utility used to manage software RAID devices.
Questions tagged [mdadm]
844 questions
7 votes, 2 answers
Is it safe to interrupt an mdadm --grow operation?
I recently changed the disks in my RAID5 from 3x2TB to 3x3TB. I also wanted to change the chunk size from the default 512k to 128k. I added all the new devices to the array and ran:
mdadm /dev/md1 --grow --backup-file=/boot/md1_grow_backup_file…

mateusz.kijowski
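Worth noting: the kernel checkpoints a reshape continuously, and a clean stop can be resumed. A sketch, assuming hypothetical member devices and backup-file path (the asker's real path is truncated above):

```shell
# Watch reshape progress; the kernel checkpoints it as it goes:
cat /proc/mdstat
# After an interruption (clean stop or crash), re-assembly resumes from the
# last checkpoint when pointed at the same backup file used for --grow:
mdadm --assemble /dev/md1 --backup-file=/boot/md1_backup /dev/sd[bcd]1
```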
7 votes, 2 answers
mdadm RAID1: GRUB only on sda
I just finished setting up a CentOS 6.3 64-bit server with mdadm, but then a lightbulb went on and I realised GRUB would only be installed on the first drive, which is about as much use as an ashtray on a motorbike.
I had a look to confirm my…

Backtogeek
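For context, CentOS 6 ships GRUB legacy, where the usual fix is to install the boot code on the second disk as well, mapped as hd0 so it can boot alone. A sketch, assuming /boot is the first partition on each disk:

```shell
# GRUB legacy: put boot code on sdb too, pretending it is the first BIOS disk
grub --batch <<'EOF'
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
EOF
```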
7 votes, 3 answers
RAID1: Which disk will be mirrored?
How does a RAID1 system determine which disk to use as the source and which disk to use as the destination when mirroring?
Assume for instance the following scenario: A RAID1 array is created with two disks A and B. A is replaced by disk C, which is…

tmelen
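The short version of the answer: md picks the sync source by superblock event count, so the most recently updated member wins. A minimal sketch of that comparison, using hypothetical `mdadm --examine` excerpts:

```python
import re

def freshest_member(examine_outputs):
    """Given {device: `mdadm --examine` output}, return the device with the
    highest superblock event count -- the member md treats as the source."""
    def events(text):
        m = re.search(r"Events\s*:\s*(\d+)", text)
        return int(m.group(1)) if m else -1
    return max(examine_outputs, key=lambda dev: events(examine_outputs[dev]))

# Hypothetical excerpts for two RAID1 members:
sda1 = "   Device Role : Active device 0\n        Events : 152\n"
sdc1 = "   Device Role : Active device 1\n        Events : 148\n"
print(freshest_member({"/dev/sda1": sda1, "/dev/sdc1": sdc1}))  # /dev/sda1
```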
7 votes, 1 answer
Use cases for "mdadm --create" vs. "mdadm --build"?
From the mdadm man page, --build section:
This usage is similar to --create. The difference is that it creates a legacy array without a superblock.
^^ So no superblock with --build. 10-4. This is followed by:
With these arrays there is no…

bug11
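To make the difference concrete: since --build writes no superblock, mdadm cannot verify membership or device order, so you must supply them correctly every time. A sketch with hypothetical devices:

```shell
# Superblock-less RAID1 -- the classic --build use case is wrapping legacy
# or foreign volumes without touching any on-disk metadata:
mdadm --build /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
```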
7 votes, 4 answers
Software RAID10 for later growth
I'm wondering what the best practice is for creating RAID10 in software on Linux with the ability to later grow by adding disks or expanding volumes underneath.
I'm using EBS on Amazon, I want to create 8x1GB RAID10 but have the ability to grow…

Richard
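A quick capacity sanity check for the planning here: mdadm RAID10 (default near-2 layout) keeps a fixed number of copies of each chunk, so usable space is raw space divided by the copy count. A small sketch:

```python
def raid10_usable_gb(disks, disk_gb, copies=2):
    # RAID10 stores `copies` replicas of each chunk, so usable capacity is
    # total raw capacity divided by the number of copies.
    return disks * disk_gb / copies

print(raid10_usable_gb(8, 1))  # 8 x 1 GB with two copies -> 4.0 GB usable
```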
7 votes, 4 answers
Linux Software RAID1: How to boot after (physically) removing /dev/sda? (LVM, mdadm, Grub2)
A server set up with Debian 6.0/squeeze. During the squeeze installation, I configured the two 500GB SATA disks (/dev/sda and /dev/sdb) as a RAID1 (managed with mdadm). The RAID keeps a 500 GB LVM volume group (vg0). In the volume group, there's a…

flight
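The usual GRUB 2 answer, sketched here for the two disks named in the question, is to install the boot code on every RAID1 member so either disk boots on its own:

```shell
# Debian / GRUB 2: make each mirror member independently bootable
grub-install /dev/sda
grub-install /dev/sdb
update-grub
```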
7 votes, 1 answer
How do I replace a disk marked as removed from a Linux md RAID5 array?
I had some recent computer issues, and somehow one of my disks is no longer recognized as part of my array. It identifies fine, and both SMART and some other disk-checking utils say it's fine, but somehow the UUID is different.
As a result,…

semi
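The typical recovery sequence, sketched with a hypothetical /dev/sdX1 standing in for the member with the changed UUID:

```shell
mdadm --manage /dev/md0 --remove detached   # drop the stale 'removed' entry
mdadm --zero-superblock /dev/sdX1           # destroys md metadata on sdX1!
mdadm --manage /dev/md0 --add /dev/sdX1     # re-add; triggers a full rebuild
```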
7 votes, 2 answers
LVM2 vs MDADM performance
I've used MDADM + LVM2 on many boxes for quite a while. MDADM served both RAID0 and RAID1 arrays, while LVM2 was used for logical volumes on top of MDADM.
Recently I found that LVM2 can be used without MDADM (one layer fewer, as the…

archer
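For reference, LVM can create the mirror itself through the dm-raid target, which reuses the same kernel md code underneath. A sketch assuming a hypothetical volume group vg0:

```shell
# RAID1 logical volume without a separate mdadm layer:
lvcreate --type raid1 -m 1 -L 10G -n data vg0
```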
7 votes, 0 answers
Write performance is 5 times worse with LUKS on top of mdadm RAID10 than without LUKS
I have servers with many NVMe disks. I am testing disk performance with fio using the following:
fio --name=asdf --rw=randwrite --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --size=10G --runtime=60 --group_reporting
For a single disk, LUKS…

tacos_tacos_tacos
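One avenue worth benchmarking with the same fio job: dm-crypt routes all I/O through kernel workqueues by default, and on fast NVMe these frequently become the bottleneck. cryptsetup 2.3.4 and later can bypass them; a sketch with hypothetical names:

```shell
# Skip dm-crypt's read/write workqueues (measure before and after):
cryptsetup open --perf-no_read_workqueue --perf-no_write_workqueue \
    /dev/md0 cryptmd0
```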
6 votes, 1 answer
Storage Setup for iSCSI/NFS servers
We are preparing to replace our storage servers (iSCSI+NFS). The current servers are Debian Wheezy using mdadm+lvm2 for storage, and failover using drbd and heartbeat (never got heartbeat to work).
For our replacement servers, I would like to use…

Rod
6 votes, 2 answers
MD RAID - disable NCQ
Why is it recommended in an MD RAID (mdadm) setup to disable NCQ per disk?
echo 1 > /sys/block/sdX/device/queue_depth
I've read this tip in many articles on RAID tuning, but nobody explains why.

Javier Franck
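If you do want to apply the sysfs setting from the question, it can be done for every member of an array in one go rather than per disk by hand. A sketch, assuming the standard sysfs layout:

```shell
# Set queue_depth=1 on every member of md0:
for member in /sys/block/md0/slaves/*; do
    echo 1 > "/sys/block/${member##*/}/device/queue_depth"
done
```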
6 votes, 2 answers
mdadm RAID5 random read errors. Dying disk?
First the long story:
I have a RAID5 with mdadm on Debian 9. The RAID has 5 disks, 4 TB each. Four of them are HGST Deskstar NAS, and one that came later is a Toshiba N300 NAS.
In the past few days I have noticed some read errors from that RAID. For…

kevinq
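A common first diagnostic step here is an md scrub, which forces a full read of every sector so media errors surface in dmesg and in each disk's SMART counters. A sketch, assuming the array is md0 and /dev/sdX stands for each suspect disk:

```shell
echo check > /sys/block/md0/md/sync_action   # start a verification pass
cat /proc/mdstat                             # watch the check progress
smartctl -A /dev/sdX                         # inspect reallocated/pending sectors
```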
6 votes, 3 answers
Disabling ext4 write barriers when using an external journal
I'm currently experimenting with different ways of improving write speeds to a fairly large, rotating disk-based, software-raid (mdadm) array on Debian using fast NVMe devices.
I found that using a pair of such devices (raid1, mirrored) to store the…

jcharaoui
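For readers unfamiliar with the setup being described: an external ext4 journal lives on its own block device. A sketch, assuming /dev/md1 is the NVMe mirror and /dev/md0 the big rotating array; disabling barriers afterwards is only safe if the journal device honours cache flushes or has power-loss protection:

```shell
mke2fs -O journal_dev /dev/md1        # format md1 as an external journal
tune2fs -J device=/dev/md1 /dev/md0   # attach it to the ext4 on md0
mount -o nobarrier /dev/md0 /mnt      # the barrier trade-off in question
```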
6 votes, 1 answer
RAID6 resync with fast writes but slow reads
I'm using Debian Jessie.
# uname -a
Linux host 4.9.0-0.bpo.3-amd64 #1 SMP Debian 4.9.30-2+deb9u5~bpo8+1 (2017-09-28) x86_64 GNU/Linux
I have set up a RAID6:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid6…

rabudde
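Two knobs commonly checked first when RAID6 resync speed looks lopsided, sketched here with example values (not recommendations):

```shell
# Kernel-wide resync throttles:
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# Per-array stripe cache, which governs RAID5/6 parity read-modify-write cost:
echo 8192 > /sys/block/md0/md/stripe_cache_size
```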
6 votes, 2 answers
How do I determine the failed/removed HDD in an mdadm RAID?
My current mdstat:
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid6 sde[8] sdh[4] sdg[1] sdd[6] sdb[5] sdc[7]
9766914560 blocks super 1.2 level 6, 512k chunk,…

DimanNe
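The key observation for this question is that the `[U_U…]` status string in /proc/mdstat marks the empty slots. A small parsing sketch; the mdstat sample below is hypothetical (the asker's own output is truncated above):

```python
import re

def missing_slots(mdstat_block):
    """Return the positions marked '_' in a /proc/mdstat status string --
    these are the slots whose member has failed or been removed."""
    m = re.search(r"\[([U_]+)\]", mdstat_block)
    if not m:
        return []
    return [i for i, c in enumerate(m.group(1)) if c == "_"]

sample = """md0 : active raid6 sde[8] sdh[4] sdg[1] sdd[6] sdb[5] sdc[7]
      9766914560 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/6] [_UU_UUUU]
"""
print(missing_slots(sample))  # [0, 3]
```

Mapping a slot back to a physical disk then goes through `mdadm --detail /dev/md0`, which lists each remaining member's slot number and device name.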