I had a server with two disks in a RAID 1 array (Linux software RAID, mdadm) and one of them failed. I called the provider and had them replace the disk.
After a reboot the arrays still contain only the original disk:
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[0]
      1073610560 blocks super 1.2 [2/1] [U_]

md3 : active raid1 sda4[0]
      1839089920 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0]
      16768896 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sda2[0]
      523968 blocks super 1.2 [2/1] [U_]

unused devices: <none>
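The replacement drive itself does show up as /dev/sdb (see the SMART check at the end of this post). Before touching the arrays, I assume I can double-check that it is present and still blank with something like this (I have not verified that the disks use GPT, so the sgdisk line is a guess):

lsblk /dev/sda /dev/sdb   # both disks should be listed, sdb with no partitions
sgdisk -p /dev/sdb        # assuming GPT: should show no partitions on the new disk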
And mdadm shows the second slot of the array as removed; the new drive does not appear at all:
mdadm -D /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Wed Jun 17 00:26:21 2015
Raid Level : raid1
Array Size : 1839089920 (1753.89 GiB 1883.23 GB)
Used Dev Size : 1839089920 (1753.89 GiB 1883.23 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Sun Nov 5 15:56:00 2017
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : rescue:3
UUID : 0c807ba7:4535e375:273f715a:7ab59c54
Events : 2851
    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       2       0        0        2      removed
So the question is: how do I add the new disk, /dev/sdb, back into the arrays? The drive itself looks fine when I test it with smartctl:
smartctl -H /dev/sdb
smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.4.0-98-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
How do I proceed?
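From what I have gathered from the mdadm and sgdisk man pages, my rough plan is the sketch below, but I would like confirmation before touching a degraded array. The sgdisk lines assume the disks are GPT-partitioned, which I have not verified; for MBR I believe sfdisk would be used instead.

# 1. Copy the partition layout from the surviving disk (sda) to the new one (sdb)
sgdisk --replicate=/dev/sdb /dev/sda   # target is given to --replicate, source is the positional argument
sgdisk -G /dev/sdb                     # randomize GUIDs so they don't clash with sda
# (MBR alternative: sfdisk -d /dev/sda | sfdisk /dev/sdb)

# 2. Add the new partitions back into the corresponding arrays
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
mdadm /dev/md2 --add /dev/sdb3
mdadm /dev/md3 --add /dev/sdb4

# 3. Watch the rebuild progress
cat /proc/mdstat

# 4. Reinstall the boot loader on the new disk so it stays bootable (assuming GRUB)
grub-install /dev/sdb

Does that look correct, or am I missing a step?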