
I installed Ubuntu 20.04 Server on a system with three Seagate 6 TB drives. I partitioned both sda and sdb into two partitions each; the first partition on each disk is 5 GB. I combined those two 5 GB partitions into a RAID1 array mounted as /boot. The second partition of those two disks, together with the third disk as a whole, is part of a RAID5 array mounted as /. I did the partitioning from the Ubuntu Server installer itself. The final disk layout looks like this:

# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda       8:0    0  5.5T  0 disk  
├─sda1    8:1    0    1M  0 part  
├─sda2    8:2    0    5G  0 part  
│ └─md0   9:0    0    5G  0 raid1 /boot
└─sda3    8:3    0  5.5T  0 part  
  └─md1   9:1    0 10.9T  0 raid5 /
sdb       8:16   0  5.5T  0 disk  
├─sdb1    8:17   0    5G  0 part  
│ └─md0   9:0    0    5G  0 raid1 /boot
└─sdb2    8:18   0  5.5T  0 part  
  └─md1   9:1    0 10.9T  0 raid5 /
sdc       8:32   0  5.5T  0 disk  
└─md1     9:1    0 10.9T  0 raid5 /
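
For context, a rough manual equivalent of what the installer configured would look something like the commands below. This is only a sketch assuming the device names and partition layout shown above; the installer performed the equivalent steps itself.

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb1            # RAID1 for /boot
# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb2 /dev/sdc   # RAID5 for /
# mkfs.ext4 /dev/md0 && mkfs.ext4 /dev/md1                                          # ext4 is an assumption (installer default)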

The issue is that, immediately after the installation, during the first boot of the OS, the RAID5 array went into degraded mode. What is even more surprising is that /dev/sdb2 is treated as a "spare" instead of an "active" device. As I am writing this message, the RAID is rebuilding.

# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Jul 13 20:41:04 2021
        Raid Level : raid5
        Array Size : 11710287872 (11167.80 GiB 11991.33 GB)
     Used Dev Size : 5855143936 (5583.90 GiB 5995.67 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Jul 14 02:09:29 2021
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Rebuild Status : 28% complete

              Name : ubuntu-server:1
              UUID : 3e9e3342:44ac6698:40bb0467:0ada161a
            Events : 5709

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8        3        1      active sync   /dev/sda3
       3       8       18        2      spare rebuilding   /dev/sdb2
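
The same rebuild can also be followed through /proc/mdstat and the md sysfs interface; these are standard kernel interfaces, listed here only as a pointer (output omitted, as it was not captured from this system):

# cat /proc/mdstat                       # shows the degraded array and recovery percentage for md1
# cat /sys/block/md1/md/sync_action      # typically reports "recover" or "resync" while the array is being built
# cat /sys/block/md1/md/sync_completed   # sectors done / total sectors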

I have reinstalled the OS three times, and the same thing happens every single time. What's going on here?

sherlock
  • This doesn't look that abnormal. Unless the partitions previously had a valid RAID already built on them, RAID5 will require a rebuild/recover process (unlike something like ZFS, which will only resilver blocks in use), which for a 10 TB array could take quite a few hours. – Brandon Xavier Jul 14 '21 at 15:20
  • That's not my point. All of these are fresh HDDs. Why would it start rebuilding just after installing an OS? As I mentioned, the RAID was created from the installer itself. – sherlock Jul 14 '21 at 15:37
  • Installing the OS does not build a RAID5 instantly. Parity has to be calculated for every block in the array by reading the corresponding blocks on N-1 of the disks, XOR'ing them, and writing the result to the remaining disk (yes, that's oversimplified). This happens whether you have actually written any data to it yet or not. – Brandon Xavier Jul 14 '21 at 19:56
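
As a side note on the sync time mentioned in the comments, the md rebuild throughput is capped by two standard kernel sysctls; raising the maximum can shorten the initial build. Shown only as a sketch, and the value below is just an example:

# sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # current limits, in KB/s per device
# sysctl -w dev.raid.speed_limit_max=500000                  # example value; revert once the rebuild finishes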

0 Answers