root@rescue:~# fdisk -l

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1           69632 102713344 102643713    49G 83 Linux
/dev/sda2       102782976 467808255 365025280 174.1G fd Linux raid autodetect
/dev/sda3       467808256 468854783   1046528   511M 82 Linux swap / Solaris

Disk /dev/sdb: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf5bbee69

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sdb1           69632 102713344 102643713    49G 83 Linux
/dev/sdb2       102782976 467808255 365025280 174.1G fd Linux raid autodetect
/dev/sdb3       467808256 468854783   1046528   511M 82 Linux swap / Solaris

Disk /dev/md0: 479 MiB, 502267904 bytes, 980992 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md1: 48.9 GiB, 52520026112 bytes, 102578176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

root@rescue:~# lsblk 
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb       8:16   0 223.6G  0 disk  
├─sdb2    8:18   0 174.1G  0 part  
├─sdb3    8:19   0   511M  0 part  
└─sdb1    8:17   0    49G  0 part  
  └─md1   9:1    0  48.9G  0 raid1 
sda       8:0    0 223.6G  0 disk  
├─sda2    8:2    0 174.1G  0 part  
├─sda3    8:3    0   511M  0 part  
│ └─md0   9:0    0   479M  0 raid1 
└─sda1    8:1    0    49G  0 part  
  └─md1   9:1    0  48.9G  0 raid1 

Here, disk /dev/sdb failed, so we replaced it with a new /dev/sdb. After this we are not able to mount the array.

root@rescue:~# mount /dev/md1 /mnt
NTFS signature is missing.
Failed to mount '/dev/md1': Invalid argument
The device '/dev/md1' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

I do not know why it is complaining about NTFS. Is it possible to remove /dev/sdb and retrieve the data from /dev/sda alone?
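The "NTFS signature is missing" message usually just means mount could not detect any filesystem type on the device and fell through to the ntfs-3g helper. Before drawing conclusions, it is worth checking which signatures are actually present on the devices (a sketch using standard rescue-system tools; all commands are read-only):

```shell
# Check what filesystem signature (if any) each device actually carries.
blkid /dev/sda1 /dev/md1   # prints TYPE=... only if a known signature exists
wipefs -n /dev/md1         # read-only listing of all signatures wipefs recognizes
file -s /dev/md1           # identifies the filesystem from the first bytes
```

If none of these report a filesystem on /dev/md1, the problem is more likely a wrong assembly or data offset than actual data loss.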

UPDATE 1

root@rescue:~# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty] 
md1 : active raid1 sda1[0]
      51289088 blocks super 1.2 [2/1] [U_]

unused devices: <none>

root@rescue:~# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Oct 17 00:56:45 2019
     Raid Level : raid1
     Array Size : 51289088 (48.91 GiB 52.52 GB)
  Used Dev Size : 51289088 (48.91 GiB 52.52 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Oct 17 00:56:45 2019
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue.ovh.net:1  (local to host rescue.ovh.net)
           UUID : 0e4f4fb1:e750b67a:6db391a3:a9f6501e
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

UPDATE 2

# mount /dev/md11 /test
NTFS signature is missing.
Failed to mount '/dev/md11': Invalid argument
The device '/dev/md11' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
Prakash
    Why not connect the good remaining disk via external USB storage and copy everything you need ? – Overmind Oct 16 '19 at 07:32
  • Not possible the physical disk is a remote server. I only have access to the disk. – Prakash Oct 16 '19 at 08:21
  • 2
    Please post output of `cat /proc/mdstat` and `mdadm --detail /dev/md1` – Peter Zhabin Oct 16 '19 at 14:34
  • @PeterZhabin I have updated the question. – Prakash Oct 17 '19 at 06:02
  • Can you PXE boot your server off anything else, map your live disk with lio as iscsi target and copy your data where you’ll run iscsi initiator to connect to your target? – RiGiD5 Oct 17 '19 at 06:02
  • Sorry I didn't understand. The server is already in rescue mode. If that what you mean. – Prakash Oct 17 '19 at 06:05
  • Looks like your data on sda1 is screwed. Very strange: lsblk shows sdb1 as a part of the array, but /proc/mdstat and mdadm --detail don't know about sdb1. Was lsblk run before the old disk was removed? Also, do you have 50 GB of free space anywhere on the server? It could be wise to stop md1, take a dump of sda1 and play with the dump, to not complicate things further. – Nikita Kipriyanov Oct 17 '19 at 06:06
  • Please show `blkid /dev/sda1`, `blkid /dev/md1` – Nikita Kipriyanov Oct 17 '19 at 06:09
  • @NikitaKipriyanov I am not so concerned about RAID1 now; all I need is the data from the disk. Forget about /dev/sdb — how can I create a dump of sda and recover the data? – Prakash Oct 17 '19 at 06:09
  • try `dd if=/dev/sda1 of=/somewhere/50gb.img` – Nikita Kipriyanov Oct 17 '19 at 06:11
  • @NikitaKipriyanov I have a copy of /dev/sda in /dev/sdb; how can I recover the data from /dev/sda? – Prakash Oct 17 '19 at 06:12
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/99992/discussion-between-nikita-kipriyanov-and-err0rr). – Nikita Kipriyanov Oct 17 '19 at 06:12

1 Answer


You need to set up your new sdb; it won't magically autoconfigure. The RAID part only mirrors the data inside sda2, not the partition table and the other partitions.

In your case it would look like:

sfdisk -d /dev/sda |sfdisk /dev/sdb      # clone sda partition table into sdb
mkswap /dev/sdb3
mdadm --add /dev/md0 /dev/sdb3

I don't know what to do about this lonely sdb1; I think it's best to forget about it, but keep in mind you don't have a redundant /boot partition, so if the sda disk fails you won't be able to boot. Most bootloaders handle RAID1 /boot partitions fine, so you should favor that setup.
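For completeness, restoring redundancy on the other mirror would follow the same pattern (a sketch; it assumes md1 is still assembled in its degraded `[U_]` state and that sdb's partition table has already been cloned with sfdisk as above):

```shell
# Re-add the new disk's partition to the degraded mirror and let it
# resync from the surviving member.
mdadm --add /dev/md1 /dev/sdb1
cat /proc/mdstat     # watch the recovery progress until [U_] becomes [UU]
```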

EDIT: I misread your strange RAID setup:

  • it's actually md0=sda3+sdb3 (your large partition)
  • you do have /boot on RAID1 with md1=sda1+sdb1, but then your partition type is wrong: it should be 'Linux raid autodetect', just like sda2
  • your md0 array is incoherent: lsblk shows sda3 as a member, but it's a swap partition, which does not make any sense...
zerodeux
  • We actually do not want any RAID1 now. Is it possible to retrieve data from the healthy disk? – Prakash Oct 17 '19 at 08:51
  • Yes, you should be able to directly mount a partition from a RAID1 array, as long as it's not part of an active array (according to your /proc/mdstat that seems to be the case), e.g. ```mkdir -p /mnt/rescue && mount /dev/sda3 /mnt/rescue``` – zerodeux Oct 18 '19 at 10:04
  • I have updated the question with result. – Prakash Oct 18 '19 at 10:10
  • I'm lost, what's /dev/md11 now? I suggested mounting /dev/sda3. And if you want to mount /dev/sda1 (I'm not sure which filesystem you want to mount), you will have to deactivate the RAID array it's part of with `mdadm --stop /dev/md1`. – zerodeux Oct 19 '19 at 19:23
  • After posting here, I tried everything I could, so /dev/md1 was changed to /dev/md11 or /dev/md127.... I am sure there is data, as I can see it using the strings command, but I am not sure how to recover it. I used photorec to restore the data, but it is not helping as it does not restore the original files/folders. – Prakash Oct 20 '19 at 05:41
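One detail worth noting about mounting a member partition directly: with a version-1.2 superblock (which `mdadm -D` reports here), the metadata sits near the start of the partition and the filesystem begins at a data offset, so `mount /dev/sda1` will not find a filesystem. A safer read-only approach is to assemble a degraded array from the good disk alone (a sketch; the stale array name `/dev/md127` is an assumption based on the comment above):

```shell
# Stop any stale auto-assembled array first (the name may differ;
# check /proc/mdstat).
mdadm --stop /dev/md127

# Assemble a degraded, read-only array from the good member only,
# then mount it read-only and copy the data off.
mdadm --assemble --readonly --run /dev/md1 /dev/sda1
mount -o ro /dev/md1 /mnt
```

Working read-only (or on a `dd` image of sda1, as suggested in the comments) avoids making the situation worse while experimenting.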