
I have an Ubuntu box that was installed by an admin who is no longer with us. I have no information on how he configured the RAID, and I have no experience setting up software RAID on Linux systems myself.

I'd like to figure out what RAID layout is actually used on this system.

There are two RAID-related packages installed, dmraid and mdadm, which makes me even more confused.

The physical machine has two identical 1 TB hard disks at /dev/sda and /dev/sdb. Here is the output of the various commands you might need:

Output of dmraid -r:

/dev/sdb: isw, "isw_ffbjceeci", GROUP, ok, 1953525166 sectors, data@ 0
/dev/sda: isw, "isw_ffbjceeci", GROUP, ok, 1953525166 sectors, data@ 0

Output of mdadm --detail --scan:

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=d10f206b:f0ccf4b7:ffedda3d:00dc2380

Output of mdadm --detail /dev/md0:

/dev/md0:
        Version : 00.90
  Creation Time : Thu Apr  8 23:41:19 2010
     Raid Level : raid1
     Array Size : 964879808 (920.18 GiB 988.04 GB)
  Used Dev Size : 964879808 (920.18 GiB 988.04 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Jul 30 17:52:26 2012
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : d10f206b:f0ccf4b7:ffedda3d:00dc2380
         Events : 0.57178142

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1     252        1        1      active sync   /dev/block/252:1

Output of fdisk -l:

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cb9b2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1      120122   964879933+  fd  Linux raid autodetect
/dev/sda2          120123      121601    11880067+   5  Extended
/dev/sda5          120123      121601    11880036   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cb9b2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1      120122   964879933+  fd  Linux raid autodetect
/dev/sdb2          120123      121601    11880067+   5  Extended
/dev/sdb5          120123      121601    11880036   fd  Linux raid autodetect

Disk /dev/md0: 988.0 GB, 988036923392 bytes
2 heads, 4 sectors/track, 241219952 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Output of cat /etc/fstab:

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# / was on /dev/md0 during installation
UUID=0cf600bf-ec3e-4898-ae6b-743b40f1cde0 /               ext4    errors=remount-ro 0       1
# swap was on /dev/md1 during installation
UUID=833dd62b-8127-48b6-affe-981b1236158c none            swap    sw              0       0

So, is it safe to remove one of dmraid or mdadm on the fly? If so, which one should I remove to get rid of the redundant RAID configuration?


Output of cat /proc/mdstat:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 dm-1[1]
      964879808 blocks [2/1] [_U]

unused devices: <none>

Some clarification:

Since the current system works well anyway, there is no urgent need to change the configuration or remove the packages.

But I'm going to do a dist-upgrade from Ubuntu 10.04 to Ubuntu 12.04, and I know there are some changes such as the move to GRUB 2, so I'd like to make sure that this configuration won't cause any problems.

Just in case, I want to know the details of this configuration.


Output of ls -al /dev/mapper:

total 0
drwxr-xr-x  2 root root     100 2012-05-04 10:02 .
drwxr-xr-x 15 root root    3540 2012-05-04 10:03 ..
crw-rw----  1 root root  10, 59 2012-05-04 10:03 control
brw-rw----  1 root disk 252,  0 2012-05-04 10:03 isw_ffbjceeci_Volume0
brw-rw----  1 root disk 252,  1 2012-05-04 10:03 isw_ffbjceeci_Volume01
– Achimnol

2 Answers


Firstly, you have a RAID 1 mirror of two disks.

Secondly, if you can't spot that from what you've let us have, I'd strongly advise you to just leave things as they are and not even think about removing or changing the config: if it works, don't touch it, basically. Why are you considering touching it anyway?

– Chopper3
  • I have a plan for major upgrades, and it may require a full reinstallation of the system. Additionally, I want to know what's going on in my system. I know that the system is configured as a RAID 1 mirror, but I can't figure out which of mdadm or dmraid is taking care of the actual mirroring. My guess is that both are configured for RAID 1, but mdadm seems to be working with a single device only while dmraid does the actual mirroring. Am I correct? – Achimnol Jul 30 '12 at 09:53
  • I understand that but I'd leave things as they are until you reinstall. – Chopper3 Jul 30 '12 at 09:56
  • Yes, I will leave the things as they are until I really have to reinstall the whole system. But I just want to know! – Achimnol Jul 30 '12 at 09:57
  • And the upgrade that I'm going to do is a dist-upgrade from Ubuntu 10.04 to Ubuntu 12.04. I want to be sure that this RAID configuration won't cause any problems during the upgrade... Just in case, I think it would be better to know how the system is configured so I can troubleshoot any possible problems. – Achimnol Jul 30 '12 at 10:00
  • @Achimnol I recently upgraded 11.04 to 11.10 (not yet to 12.04) with a RAID 1 configuration and it didn't cause me any issues. But I can say no more. – Janis Veinbergs Jul 30 '12 at 10:19

I'd endorse what Chopper3 says, but I'd add that I think your RAID device is currently degraded; that is, one half of the mirror has failed. You need to understand what's going on, and fix this, or get someone else to do it, before the other half fails and your server dies. I would definitely want to fix this before upgrading the OS.
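
If you decide to tackle it, the usual way to see which half of the mirror is missing is something like this (standard mdadm commands, though I'm guessing at the member partitions until we know more about your layout):

cat /proc/mdstat                      # [_U] means slot 0 of the mirror is missing
mdadm --detail /dev/md0               # shows which slot is "removed"
mdadm --examine /dev/sda1 /dev/sdb1   # check each candidate partition for an md superblock

Once you know which device should be the missing member, re-adding it would be something like "mdadm /dev/md0 --add <device>" (with the real member device substituted), but don't run that until you understand why it dropped out in the first place.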

That said, the output of /proc/mdstat is slightly odd, in my experience. Could we get the output from ls -al /dev/mapper added to the question?

Edit: my word, that is unusual. I have never seen RAID-1 done this way: it seems to be an md mirror, one half of which is a fully-working dm mirror, and the other half of which is missing. I must withdraw from offering any further advice on this, as I'm baffled why anyone would do such a thing, and dm raid isn't my field.
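
If someone does want to untangle a stack like this, the layering itself can at least be inspected with standard tools (untested on this exact setup, so treat this as a sketch rather than a recipe):

dmsetup ls                      # lists the device-mapper volumes dmraid has created (the isw_* names)
dmsetup table                   # shows what each dm volume maps onto
ls -l /sys/block/md0/slaves     # shows which device md0 sits on top of (dm-1 here)
mdadm --detail /dev/md0         # the md view of the mirror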

– MadHatter