
I have a Dell 1U PowerEdge 1950, and for some reason we cannot install Ubuntu 14.04 64-bit Desktop edition on it. I know what you're thinking... why are you installing Desktop? I have asked myself the same question over and over! The individual who will be using this server wants to use the GUI version of VirtualBox, and as a result wants nothing but the Desktop version of Ubuntu installed.

That issue aside, here is the weird part. I create a RAID 1 array between the two drives as normal and run the install as normal. Everything works great and the system installs successfully. Then upon reboot it drops me to the BusyBox v1.21.1 shell at an (initramfs) prompt. If I pull one of the RAID 1 drives and boot again, it boots just fine. If I put that drive back and pull the other one, it also boots just fine. This tells me it has something to do with RAID; i.e., when the array is degraded, the system is effectively booting off a single drive as if no RAID were present, but when the array is healthy, it can't boot.

Also, before you ask, we have confirmed this is not a hardware issue. I thought we had a RAID hardware issue and so I shipped the original server back and had it replaced with a completely different but identical server. I just tried to do the install again this morning on the new server and ran into the exact same issue.

Seems like this is a driver issue, but I have never experienced this before with Ubuntu.

Any thoughts?

Thanks!

Here is the shell I get dropped to with output:

Gave up waiting for root device. Common problems:
 - Boot args (cat /proc/cmdline)
   - Check rootdelay= (did the system wait long enough?)
   - Check root= (did the system wait for the right device?)
 - Missing modules (cat /proc/modules; ls /dev)
ALERT!  /dev/mapper/ubuntu--vg-root does not exist.  Dropping to a shell!


BusyBox v1.21.1 (Ubuntu 1:1.21.0-1ubuntu1) built-in (ash)
Enter 'help' for a list of builtin commands.

(initramfs)
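
For reference, here is the sort of thing I can poke at from that prompt to see whether the logical volume just shows up late (this assumes the LVM tools are in the initramfs, which they should be on a stock Ubuntu LVM install):

    (initramfs) ls /dev/mapper        # root LV missing at first?
    (initramfs) lvm vgscan            # rescan for volume groups
    (initramfs) lvm vgchange -ay      # activate any volume groups found
    (initramfs) ls /dev/mapper        # does ubuntu--vg-root exist now?
    (initramfs) exit                  # if it does, this should continue the boot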
Atomiklan
  • What's the RAID controller (or lack thereof) situation for the drives - are the disks RAIDed in hardware or software? Also, what does the system's boot order configuration look like? – Shane Madden Aug 15 '14 at 17:11
  • This is an enterprise-class hardware RAID card. The Dell 1U PowerEdge 1950 servers come with an LSI SAS 6 card. There are two Dell SAS 15k 146 GB drives configured as a RAID 1. Ubuntu was installed on the RAID 1 using the defaults, i.e., a normal installation using the entire RAID 1. – Atomiklan Aug 15 '14 at 17:22
  • I have never had trouble installing Ubuntu on this exact setup in the past. I don't know, however, whether I have ever installed Ubuntu 14.04 on PowerEdge 1950s. Maybe I have only installed up to Ubuntu 13? – Atomiklan Aug 15 '14 at 17:23
  • Hmm, well in theory the raid controller shouldn't be presenting the disk any differently based on whether the RAID is degraded or not.. so it's really interesting that it's even possible for that to have an impact. `/boot` is on the same RAID group, right? Can you boot to something like [system rescue CD](http://www.sysresccd.org/) and verify that the root partition can be mounted while both disks are inserted? And maybe as a sanity check, try installing the server edition of Ubuntu 14.04 and see if the behavior's any different? – Shane Madden Aug 15 '14 at 17:30
  • That's a good idea. I'll try that next. Stay tuned. – Atomiklan Aug 15 '14 at 17:32
  • Well, good and bad news: it does the exact same thing on the Server version. I was worried it would work on Server, which would have meant a big discussion with the server's owner about why he needs to start using the Server version, and would probably have spiraled into me looking incompetent. http://host.atomiklan.com/FORUMS/StackExchange/ServerFault/1.jpg for a screenshot of the output from the server boot. – Atomiklan Aug 15 '14 at 18:12

2 Answers


An interesting side-note: check out scsi 2:0:0:0 and 2:0:1:0 there; those seem to be the component disks of the RAID? Odd that they're exposed directly.

But, anyway, scsi 2:1:0:0 pops up after that, which is the RAID disk. It's finding the partitions on the correct ID, scsi 2:1:0:0, so all is well there - the problem is that the disk isn't ready until 38 seconds in, and the "Gave up waiting for root device" error fired before the disk was ready.
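
If you want to double-check that timing on a boot that does succeed, something like this should show when each SCSI target appeared (dmesg timestamps are seconds since the kernel started):

    dmesg | grep 'scsi 2:'    # when did the controller's targets show up?
    dmesg | grep ' sd '       # when were the sd* block devices attached?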

Now that we know this, we can see that others have had the same problem on different platforms; seems like they dropped that rootdelay too low in 14.04.

Let's try upping it; get booted into the system on one disk, and in /etc/default/grub, set:

GRUB_CMDLINE_LINUX="rootdelay=90"

...then run update-grub.
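
A minimal sketch of the whole sequence, booted from the single working disk (90 is just a generous starting point; you can tune it down later):

    # In /etc/default/grub, set:
    #   GRUB_CMDLINE_LINUX="rootdelay=90"
    sudo nano /etc/default/grub

    # Regenerate /boot/grub/grub.cfg with the new kernel argument
    sudo update-grub

    # After the next reboot, confirm the argument was picked up
    cat /proc/cmdline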

Once that's done, get both disks in (making sure the controller rebuilds the second disk from the disk you just made that change on) and try to boot - hopefully the increased timeout will allow the disks to be ready within the window.

Shane Madden

I had a similar problem and had been searching Google, but nothing seemed to solve it. I just bought a Dell PowerEdge R720 server with two 1 TB SATA HDDs on a PERC H710 adapter in a RAID 1 configuration. I was installing it from a bootable USB drive made with the YUMI tool.

Anyway, what I did was burn the Ubuntu Server 14.04 ISO to a DVD and boot from that. I plugged in a USB Wi-Fi adapter, the base files downloaded from the internet successfully, and it wrote the boot loader properly to the /boot partition. Make sure you create a 700 MB /boot partition and mark it with Bootable Flag = YES when defining the partition type.
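
For what it's worth, if you're partitioning outside the installer, a rough equivalent of that /boot setup with parted might look like this (/dev/sda is standing in for the PERC virtual disk; adjust to whatever device your system presents):

    # Assumes an msdos disk label and /dev/sda as the RAID virtual disk
    sudo parted /dev/sda mkpart primary ext4 1MiB 701MiB   # ~700 MB /boot partition
    sudo parted /dev/sda set 1 boot on                     # same as "Bootable Flag = YES"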

I checked it with Ubuntu Desktop 14.04 as well, and that also worked fine from both bootable USB and bootable DVD.