
I am setting up a small cluster using one VM (as the master) and 3 bare-metal servers, all running Ubuntu 14.04. Each bare-metal server also has 2 TB of disk space exported using Ceph 0.94.5.

I would like to be able to run LXD VMs (containers) on this cluster and to easily migrate them between nodes. I could have installed OpenStack, but that seems rather complicated (OpenStack may be overkill for a cluster as small as mine). So my solution was to create one big Ceph/RBD block volume and mount it at the LXD container folder (/var/lib/lxd/containers) on all nodes. To move a VM, I just shut it down on one node and start it again on another.
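Concretely, the setup looks something like this (the image name is illustrative, and on Ceph 0.94 the --size argument is in megabytes):

    # Create a 2 TB image in the default 'rbd' pool and format it (once, on one node)
    rbd create lxd-containers --size 2097152
    sudo rbd map rbd/lxd-containers          # appears as e.g. /dev/rbd0
    sudo mkfs.ext4 /dev/rbd0

    # On the node that should run the containers, map and mount it at the LXD path
    sudo rbd map rbd/lxd-containers
    sudo mount /dev/rbd0 /var/lib/lxd/containers

(ext4 is not a cluster filesystem, so the image can only safely be mounted read-write on one node at a time.)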

For a single VM this works fine, but it doesn't look like a long-term solution to me. My questions:

  1. Is there a way to pass a block volume (Ceph/RBD) or a folder to an LXD VM, so that LXD itself would mount it as the root folder (/)? It would be nice to have one block volume per VM instead of all VMs sharing the same folder (a sketch of the manual version I have in mind follows this list).

  2. Is there a simpler solution than OpenStack for my use case (or a simpler installation procedure for OpenStack)?

  3. Ultimately, I would like my cluster to be able to schedule VMs onto nodes, move VMs off failed nodes, and so on. Any suggestions on how to achieve that?
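For question 1, the manual version of what I have in mind would look roughly like this (the container and image names are illustrative, and I have not verified that LXD accepts a pre-existing mount point as its container directory):

    # One RBD image per container (size in megabytes on Ceph 0.94)
    rbd create lxd-web1 --size 10240
    sudo rbd map rbd/lxd-web1                # appears as e.g. /dev/rbd1
    sudo mkfs.ext4 /dev/rbd1

    # Mount it as the container's directory, then create the container
    sudo mkdir -p /var/lib/lxd/containers/web1
    sudo mount /dev/rbd1 /var/lib/lxd/containers/web1
    lxc launch ubuntu:trusty web1

    # "Migration": stop, unmount, and unmap here...
    lxc stop web1
    sudo umount /var/lib/lxd/containers/web1
    sudo rbd unmap /dev/rbd1
    # ...then rbd map, mount, and lxc start on the target node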

dilvan
  • Please be sure to post the findings you've received via the LXC mailing list to help other community members :) – JamieB Nov 26 '15 at 07:20

1 Answer


You didn't say whether you have already followed a guide or not.

But to make sure you are aware of them, take a look at these two guides from Canonical:

https://help.ubuntu.com/lts/clouddocs/installer/

bmullan