
So, for various reasons, I've ended up with a 45TB single Linux logical volume, with no partition table, formatted as NTFS and containing 28TB of data (the filesystem itself is 28TB).

The filesystem was created in Linux, and is mountable by Linux. The problem comes when I try to mount it within a KVM-based Windows VM on the same box. Windows does not see a 28TB filesystem, but rather a 1.8TB disk containing a few randomly sized, unhelpful partitions.

[Screenshot: Disk Management output showing Disk 1 with randomly sized partitions]

I presume this is because Windows is trying to read the first few bytes of the real NTFS filesystem data as a partition table.
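That presumption is plausible: an NTFS boot sector carries the same 0x55AA boot signature at bytes 510–511 that an MBR does, so a partition-table parser will happily read the bytes at offset 446 (where MBR partition entries would live) as partitions. A toy sector illustrates the overlap; everything here is illustrative, built in a temp file rather than read from the real device:

```shell
# Build a minimal 512-byte "NTFS boot sector" showing why Windows can
# mistake it for an MBR: the OEM ID 'NTFS    ' sits at byte offset 3,
# and the 0x55AA signature at offset 510 is the same one an MBR carries.
img=$(mktemp)
truncate -s 512 "$img"
printf 'NTFS    ' | dd of="$img" bs=1 seek=3   conv=notrunc 2>/dev/null
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null   # 0x55 0xAA

# OEM ID at offset 3:
od -An -c -j 3 -N 4 "$img"
# boot signature at offset 510 (the same check an MBR parser performs):
od -An -tx1 -j 510 -N 2 "$img"
```

Running the same two `od` reads against the first 512 bytes of the real LV would confirm whether an NTFS boot sector (rather than a genuine MBR) sits at LBA 0.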

I can see a few possible solutions to this problem, but can't work out how to actually execute any of them:

  • Have Windows read an unpartitioned disk (a single volume) as a filesystem?
  • Generate a partition table on this logical volume somehow, without destroying the data held within the filesystem itself?
  • Somehow fake a partition table pointing at the LVM volume, and export this to the KVM guest (running under libvirt)?

The current "partition table" as reported by parted is:

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/chandos--dh-data: 48.0TB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  48.0TB  48.0TB  ntfs
JamesHannah
    What's the current disk layout? `parted -l /dev/sdx` – Matt Nov 06 '12 at 13:14
  • Incredible... I was just going to post an identical question. I `dd`'d a partition to an image, converted it to a vhd and forgot to include a partition table. – Mitch Nov 06 '12 at 13:15
  • I've added the parted output – JamesHannah Nov 06 '12 at 13:38
  • What's weird is the reported size being 2TB(ish) when it's supposedly a 28TB filesystem, you've sized it beyond the size of underlying disk? The easiest way around it I can think of would be to use ntfs-3g and copy the data out, file by file, to another disk, though you may risk damaging ACLs etc this way. – Alex Berry Nov 06 '12 at 17:02
  • @AlexBerry I don't think those sizes are true at all, I think Windows is trying to read the NTFS data itself as an MSDOS partition table, and coming up with just rubbish which it's interpreting as a 2TB disk and a few partitions. The actual volume is a 43GB HP CCISS volume – JamesHannah Nov 06 '12 at 23:30
  • What I mean is, regardless of the partition table, Windows should be able to read the underlying size of the disk separately (the figure under "basic"). – Alex Berry Nov 07 '12 at 10:56

3 Answers


I had a similar problem where I accidentally imaged a partition rather than a whole disk. The images were being copied across the network, and I didn't have time to copy them again. They were, however, much smaller than 28TB, and the process I used required making a copy of the image.

The initial image was taken by using:

dd if=/dev/sda1 of=/image.bin

To add a partition table without copying everything across the network again, I copied just the MBR from the source disk to a file:

dd if=/dev/sda of=/mbr.bin bs=512 count=1

Then I prepended the MBR and copied in the data. Note the `conv=notrunc` on both writes; without it, `dd` would truncate the file that `truncate` just pre-sized:

fdisk -l /mbr.bin
# offset = partition start sector * sector size in bytes
# (e.g. start at sector 256 * 512-byte sectors = 131072 bytes)
truncate -s (image size in bytes + offset from above) /newfile.bin
dd if=/mbr.bin of=/newfile.bin conv=notrunc
dd if=/image.bin of=/newfile.bin oflag=seek_bytes seek=(offset from above) conv=notrunc

Once complete, /newfile.bin has the complete partition table + data.
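As a sanity check, here is the same sequence run at toy scale (sizes shrunk to KiB so it can be tried anywhere; the paths, sizes and sector-256 start are stand-ins, not values from the real disk):

```shell
#!/bin/sh
# Toy-scale rerun of the prepend-the-MBR recipe above.
set -e
cd "$(mktemp -d)"

dd if=/dev/zero    of=mbr.bin   bs=512  count=1  2>/dev/null   # stand-in MBR
dd if=/dev/urandom of=image.bin bs=1024 count=64 2>/dev/null   # stand-in partition image

START_SECTOR=256                       # would come from `fdisk -l` on the real MBR
OFFSET=$((START_SECTOR * 512))         # 131072 bytes
IMAGE_SIZE=$(stat -c %s image.bin)     # 65536 bytes

truncate -s $((IMAGE_SIZE + OFFSET)) newfile.bin
dd if=mbr.bin   of=newfile.bin conv=notrunc 2>/dev/null
dd if=image.bin of=newfile.bin oflag=seek_bytes seek="$OFFSET" conv=notrunc 2>/dev/null

stat -c %s newfile.bin                 # 196608 = offset + image size
```

At full scale the only things that change are the sizes and the offset read from `fdisk -l`; `oflag=seek_bytes` (GNU dd) is what lets `seek=` take a byte count rather than a block count.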

Mitch
  • I was going to suggest something similar, but I imagine @JamesHannah doesn't have a 48TB USB hard drive laying around to store the image on, perhaps the same thing with compression would work though, depending on how much actual data there is. – Alex Berry Nov 06 '12 at 17:00
  • This would be the correct solution, given a small enough amount of data. – JamesHannah Nov 12 '12 at 13:56
  • I am pleased to see you found an alternate solution. Even for a small solution, using Device Mapper could prove much faster. This will certainly end up in my favorites. – Mitch Nov 13 '12 at 00:28

I've actually not found a good solution to this. Luckily there's another drive shelf handy with ~30TB of space which I can use to migrate to a newly partitioned volume. It'll take a long time but it should work.

There was a suggestion that some clever stuff could be done with the Linux Device Mapper (creating a virtual device which maps a fake GPT partition table from a file, alongside the LVM logical volume), but I'll leave that for someone smarter to work out.
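The device-mapper idea can be sketched roughly as follows: a small header file holds a real partition table, and two `linear` targets stitch header + LV into one virtual disk whose first partition starts exactly where the LV begins. Everything below is an assumption (the 1MiB header, the `/dev/loop0` name, and the LV sector count are placeholders), and the dm table is only printed here, not loaded:

```shell
#!/bin/sh
# Sketch: print a device-mapper table that concatenates a 1MiB header file
# (exposed as a loop device) in front of the LV. Placeholder values only.
set -e

LV=/dev/mapper/chandos--dh-data
HEADER_SECTORS=2048         # 1MiB header file, exposed via losetup as /dev/loop0
LV_SECTORS=93750000000      # on the real box: blockdev --getsz "$LV"

# dm table format: <start-sector> <length-sectors> linear <device> <offset>
printf '0 %s linear /dev/loop0 0\n' "$HEADER_SECTORS"
printf '%s %s linear %s 0\n' "$HEADER_SECTORS" "$LV_SECTORS" "$LV"

# root would then pipe that table into:  dmsetup create fakedisk
# write a partition table on /dev/mapper/fakedisk whose partition 1 starts
# at sector 2048, and point libvirt at /dev/mapper/fakedisk instead of the LV.
```

The appeal is that nothing is copied: the LV is presented read-through behind a fake partition table, at the cost of the guest now depending on the loop device and dm table surviving reboots.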

Edit: Actually ended up writing up a solution to this here

JamesHannah
  • Marked my own solution as accepted, as there's a solution which doesn't involve copying somewhere else and back, however bad it is in the long-term. – JamesHannah Nov 12 '12 at 13:55

Disks larger than 2TB need a GPT partition table; for disks smaller than 2TB, MBR is sufficient.
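For what it's worth, the 2TB figure falls out of MBR's format: partition start and length are stored as 32-bit sector counts, so with 512-byte sectors the addressable limit is:

```shell
# 2^32 sectors * 512 bytes/sector = the MBR addressing ceiling
echo $(( (1 << 32) * 512 ))     # 2199023255552 bytes = 2 TiB
```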

MKG
  • Did you read the question? The OPs problem is that he can't easily create any kind of partition table in his environment. – Sven Nov 08 '12 at 09:01