
I have a server with a 9650SE-24M8 controller in it, which hosts a 7x2TB-drive RAID 5 array.

tw-cli shows the following:

Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u1    RAID-5    OK             -       -       256K    11175.8   RiW    ON     

However, I'm unable to partition it any larger than 6000.0GB:

Model: AMCC 9650SE-24M DISK (scsi)
Disk /dev/sdb: 6000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

I'm running Ubuntu 14.04 x86-64.

Not sure if this info is relevant:

Firmware    FE9X 4.10.00.021
Driver  2.26.02.014
BIOS    BE9X 4.08.00.003
Boot Loader BL9X 3.08.00.001

Thank you

EDIT:

I forgot to mention that I tried partprobe, which returns successfully, but there is no change in the total disk size.
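For anyone else hitting this: partprobe only re-reads the partition table, not the device's capacity. Below is a sketch (my addition, not from the original post) of the commands that ask the kernel for a fresh look at the size. They need root and the real hardware, so the sketch only prints them; sdb is the device name from this question.

```shell
# Sketch (my addition): partprobe re-reads the partition table, but a changed
# *device capacity* needs a SCSI-layer rescan. The commands need root and the
# real hardware, so this only prints them; sdb is the device from the question.
dev=sdb
cat <<EOF
echo 1 > /sys/block/$dev/device/rescan   # ask the SCSI layer to re-read capacity
blockdev --getsize64 /dev/$dev           # kernel's current view of the size, in bytes
EOF
```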

This is from fdisk. I know fdisk doesn't handle large drives correctly, but it does display their size:

Disk /dev/sdb: 6000.0 GB, 5999966552064 bytes
256 heads, 63 sectors/track, 726604 cylinders, total 11718684672 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
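A quick sanity check on those figures (my own arithmetic, not part of the original post): the sector count fdisk reports works out to almost exactly half of what a ~11175.8 GiB unit should expose.

```python
# Sanity check on the fdisk figures above (my own arithmetic, not from the post).
reported_sectors = 11718684672           # "total 11718684672 sectors" per fdisk
sector_size = 512                        # bytes, logical
reported_bytes = reported_sectors * sector_size
assert reported_bytes == 5999966552064   # matches fdisk's byte count exactly

# What the unit *should* expose: tw-cli reports 11175.8 (GiB),
# which is roughly 12.0 TB in the decimal units parted prints.
expected_bytes = 11175.8 * 1024**3
print(round(reported_bytes / expected_bytes, 3))  # 0.5: the OS sees half the unit
```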

Edit 2:

Relevant output from parted -l:

Model: AMCC 9650SE-24M DISK (scsi)
Disk /dev/sdb: 6000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name              Flags
 1      1049kB  6000GB  6000GB  ext4         Linux filesystem

/dev/sdb is what my Linux box labeled the block device exposed by the 3ware controller. I partitioned it just in case and, as expected, the partition is 6000GB (the maximum available space as detected by Linux), not the ~11000GB that 3ware reports.

Edit 3:

Attached output from tw-cli /c0/u1 show:

m@r2:~$ sudo tw-cli /c0/u1 show

Unit     UnitType  Status         %RCmpl  %V/I/M  Port  Stripe  Size(GB)
------------------------------------------------------------------------
u1       RAID-5    OK             -       -       -     256K    11175.8   
u1-0     DISK      OK             -       -       p5    -       1862.63   
u1-1     DISK      OK             -       -       p1    -       1862.63   
u1-2     DISK      OK             -       -       p2    -       1862.63   
u1-3     DISK      OK             -       -       p3    -       1862.63   
u1-4     DISK      OK             -       -       p0    -       1862.63   
u1-5     DISK      OK             -       -       p4    -       1862.63   
u1-6     DISK      OK             -       -       p6    -       1862.63   
u1/v0    Volume    -              -       -       -     -       11175.8 

I have auto-carving disabled.
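For reference (my own arithmetic, not part of the original post), the unit size tw-cli reports is consistent with plain RAID 5 math over the seven members, so the controller itself sees the full capacity:

```python
# RAID 5 usable capacity = (number of disks - 1) * per-disk size.
disks = 7
per_disk_gib = 1862.63              # per-member size from the tw-cli output, GiB
usable_gib = (disks - 1) * per_disk_gib
print(round(usable_gib, 1))         # 11175.8, matching tw-cli's Size(GB) column

# In the decimal terabytes parted prints, that is about 12.0 TB:
usable_tb = usable_gib * 1024**3 / 1e12
print(round(usable_tb, 1))          # 12.0
```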

maddios
    "7x2TB drive RAID5 array" <--- you're asking for trouble there, sir. – EEAA Jan 31 '15 at 03:14
  • Do you recommend RAID6? – maddios Jan 31 '15 at 03:16
  • Around these parts, RAID-6 or RAID-10 finds *much* more favour. We've just seen waaaay too many second-drive failures to be happy with -5. Also, don't forget RAID isn't a backup, and you need those, too. Also also, make sure you're constantly monitoring the state of the underlying hardware, so you find out about single drive failures and fix them before they become array-failure issues. – MadHatter Jan 31 '15 at 07:12
  • Can you **show** us an attempt to make a bigger partition, and how it fails? – MadHatter Jan 31 '15 at 07:12
  • Well, I put it right in my description: the MAX size of the drive possible is 6000GB; I'm unable to assign a sector beyond that. Meanwhile it should be what tw-cli reports, or close to it. fdisk and gdisk both corroborate the story. – maddios Jan 31 '15 at 07:42
  • I understand that it should be RAID 6; this is a test, and RAID 6 takes longer to initialize than RAID 5. – maddios Jan 31 '15 at 07:44
  • You **told** us you can't do it; what you **showed** us is the largest-possible success. We'd like you to **show** us it failing; the detail of the failure often sheds light on the nature of the problem. – MadHatter Jan 31 '15 at 07:56
  • Wait, I'm confused: what I'm showing you are not partition sizes but block device sizes. I'm not creating anything at this point, or failing at anything other than getting Linux to show me the block device at 11TB. Having said that, I did attempt to create a partition in case they're lying, and the end sector is exactly what fdisk shows, 11718684672 sectors, which is 6TB. For 11TB it should be roughly double that in sectors. To recap: creating the partition works fine, and it's created at the full size of the device; the issue is that the device itself is smaller than it should be. – maddios Jan 31 '15 at 08:29
  • Can you cut-and-paste into your question the output of `parted -l`? Also I, too, am suspicious that it's `sdb`, rather than eg `sdb1`, that's showing up as 6TB. I don't understand where `sdb` has come from, because you use `tw-cli` to show us the underlying device. Is it possible that you have created an 11TB logical *volume*, inside which is a logical *drive* - that the OS maps to `sdb` - which is only 6TB? – MadHatter Jan 31 '15 at 10:54
  • I didn't do any mapping, at least not myself. All I did was create the 3ware unit and let it initialize; then Ubuntu automatically detected the new volume and assigned it the /dev/sdb logical device. At which point I noticed that it's 6TB, not 11TB. I did mess with partitioning after that, but that was just a waste of time, since there's no way I can partition it larger than the whole drive listed. Oh, and I did try rebooting too. – maddios Jan 31 '15 at 21:54
  • OK, I'm slightly fishing in the dark here, because I don't have the same unit you do. But the doco suggests that 9000 series controllers have the concept of autocarving, where available disc space is automatically presented as several smaller volumes of a fixed maximum size (no, I have no idea why). Could we see what the underlying volume(s) are with `tw-cli /c0/u1 show` (or other controller as appropriate)? – MadHatter Feb 01 '15 at 07:01
  • `tw-cli` usually reports hot spares if any. I don't see one here - as noted by others, I would really go for at least RAID 6 (still with a hot spare) or RAID 10 (also hot spare). RAID 10 gets you better speed and up to 50% disk failure handling. – MrMajestyk Feb 02 '15 at 08:27
  • Thanks for the `tw-cli` output. I agree that logical volume is 11TB. Why ubuntu only sees it as 6TB, I can't imagine. Any chance you could blow away `/c0/u1/v0` and recreate it as 5TB, confirm that Ubuntu sees it getting smaller with `parted`, then nuke it again and create it at 7TB? At least then we'll be absolutely sure of the linkage between `sdb` and `/c0/u1/v0`, and we'll be sure it's not some weird clipping effect at the 8TB device / 512B sector limit. Once that's proven I'll be stumped - I know of no reason why there should be an arbitrary 6TB limit. – MadHatter Feb 02 '15 at 08:51
  • Thanks, I'll look into this (might take some time since initializing the array takes a few days) and I'll get back to you. – maddios Feb 03 '15 at 22:15

2 Answers


You should use another sector size, like 4K sectors instead of 512B sectors. If it actually is 4K and only reports 512B, the problem may be of a different nature. How exactly are you creating the logical partitions? Did you try something like mkpart primary 0.00TB 11.00TB? Also, make sure CONFIG_EFI_PARTITION=y is set (although Ubuntu should have this pre-compiled).
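A minimal sketch of that suggestion (my addition, not Overmind's exact commands): carving one full-size GPT partition with parted. The commands are destructive on real hardware, so they are only printed here; /dev/sdb is the device name from the question.

```shell
# Sketch (my addition): one full-size GPT partition via parted. Printed rather
# than executed, since running these wipes the device; /dev/sdb is from the
# question.
dev=/dev/sdb
cat <<EOF
parted $dev -- mklabel gpt              # fresh GPT label
parted $dev -- mkpart primary 0% 100%   # one partition spanning the whole device
EOF
```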

Overmind
  • Well, the issue seems to be before partitioning: the OS was showing the 6000GB disk size before I partitioned it. To create that one partition I actually used gdisk: hit n, then hit Enter 4x, which just created a partition at the maximum size possible for the given device. – maddios Feb 03 '15 at 22:19
  • Try to test with something that is partition-independent. As HD tune for windows can perform tests even on unpartitioned drives, get an equivalent for linux (Hard Disk Sentinel should do) and make any read/write test that accesses the over 3TB area. That will clear up what's going on. – Overmind Feb 04 '15 at 06:20

Though I received a lot of good answers/solutions here, the real solution was a bit odd.

A full power cycle (the power randomly went out on the rack) solved the issue.

I tried rebooting twice early on with no effect, but for some reason a full shutdown of the machine for a little while, followed by powering it back up, seems to have fixed it. parted now shows the full size: Disk /dev/sdb: 12.0TB.

Very odd indeed.

maddios