
For testing purposes, I want to install OpenStack on two VirtualBox instances using Ansible. As the documentation describes, I pre-configured the local network with four VLANs and created the bridge interfaces. The network connectivity is fine after that.
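For reference, a quick way to confirm the bridges exist and carry addresses from the expected subnets on each host is something like this (a sketch; the bridge names match the config below):

$ ip -br link show type bridge          # list all bridge interfaces and their state
$ ip -br addr show br-mgmt              # br-mgmt should hold an address from 172.29.236.0/22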

I also configured the openstack_user_config.yml file:

---
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22
used_ips:
  - "172.29.236.1,172.29.236.255"
  - "172.29.240.1,172.29.240.255"
  - "172.29.244.1,172.29.244.255"

global_overrides:
  internal_lb_vip_address: 192.168.33.22
  external_lb_vip_address: dev-ows.hive
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
      container_bridge: "br-mgmt"
      container_type: "veth"
      container_interface: "eth1"
      ip_from_q: "container"
      type: "raw"
      group_binds:
        - all_containers
        - hosts
      is_container_address: true
    - network:
      container_bridge: "br-vxlan"
      container_type: "veth"
      container_interface: "eth10"
      ip_from_q: "tunnel"
      type: "vxlan"
      range: "1:1000"
      net_name: "vxlan"
      group_binds:
        - neutron_linuxbridge_agent
    - network:
      container_bridge: "br-vlan"
      container_type: "veth"
      container_interface: "eth11"
      type: "flat"
      net_name: "flat"
      group_binds:
        - neutron_linuxbridge_agent
    - network:
      container_bridge: "br-storage"
      container_type: "veth"
      container_interface: "eth2"
      ip_from_q: "storage"
      type: "raw"
      group_binds:
        - glance_api
        - cinder_api
        - cinder_volume
        - nova_compute
...
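As a sanity check, the management addresses that the dynamic inventory assigns to each container from this config can be listed from the generated inventory file (a sketch, assuming the default /etc/openstack_deploy path, jq installed, and that the hostvars expose a container_address field):

$ jq -r '._meta.hostvars | to_entries[] | "\(.key)  \(.value.container_address // "n/a")"' /etc/openstack_deploy/openstack_inventory.json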

But I get this error after running the playbook:

# openstack-ansible setup-hosts.yml
...
TASK [lxc_container_create : Gather container facts] *********************************************************************************************************************************************************************************************************
fatal: [controller01_horizon_container-6da3ab23]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"controller01_horizon_container-6da3ab23\". Make sure this host can be reached over ssh", "unreachable": true}
fatal: [controller01_utility_container-3d6724b2]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"controller01_utility_container-3d6724b2\". Make sure this host can be reached over ssh", "unreachable": true}
fatal: [controller01_keystone_container-01c915b6]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"controller01_keystone_container-01c915b6\". Make sure this host can be reached over ssh", "unreachable": true}
...

I figured out that the LXC containers created by the Ansible playbooks have no network interfaces and consequently no IP addresses either. That is why Ansible reports "Host unreachable" when it tries to connect to these containers via SSH.

# lxc-ls -f
NAME                                           STATE   AUTOSTART GROUPS            IPV4 IPV6 UNPRIVILEGED
controller01_cinder_api_container-e80b0c98     RUNNING 1         onboot, openstack -    -    false
controller01_galera_container-2f58aec8         RUNNING 1         onboot, openstack -    -    false
controller01_glance_container-a2607024         RUNNING 1         onboot, openstack -    -    false
controller01_heat_api_container-d82fd06a       RUNNING 1         onboot, openstack -    -    false
controller01_horizon_container-6da3ab23        RUNNING 1         onboot, openstack -    -    false
controller01_keystone_container-01c915b6       RUNNING 1         onboot, openstack -    -    false
controller01_memcached_container-352c2b47      RUNNING 1         onboot, openstack -    -    false
controller01_neutron_server_container-60ce9d02 RUNNING 1         onboot, openstack -    -    false
controller01_nova_api_container-af09cbb9       RUNNING 1         onboot, openstack -    -    false
controller01_rabbit_mq_container-154e35fe      RUNNING 1         onboot, openstack -    -    false
controller01_repo_container-bb1ebb24           RUNNING 1         onboot, openstack -    -    false
controller01_rsyslog_container-07902098        RUNNING 1         onboot, openstack -    -    false
controller01_utility_container-3d6724b2        RUNNING 1         onboot, openstack -    -    false
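A way to confirm this from the host is to look inside one of the containers and at its LXC config (a sketch; the container name is taken from the listing above, and the config path assumes the default /var/lib/lxc location):

$ lxc-attach -n controller01_horizon_container-6da3ab23 -- ip -br addr    # shows only lo if the interfaces were never wired up
$ grep -i net /var/lib/lxc/controller01_horizon_container-6da3ab23/config  # look for lxc.net.* / lxc.network.* entries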

Please give me some advice on what I am doing wrong.

Roman

1 Answer


As you have noted, the containers are not getting the management IP.

Have you made sure the br-mgmt bridges on your two VirtualBox hosts are working as expected? Check the connectivity between these two hosts via br-mgmt, e.g. ping the br-mgmt IP address of each host from the other.
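For example, something like this from one host to the other (a sketch; the addresses are from my own lab, substitute your hosts' br-mgmt addresses):

$ ping -c 3 -I br-mgmt 172.29.236.12    # from infra1, ping infra2's br-mgmt address via the bridge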

If you've set up the VLANs and bridges correctly, you should be able to establish connectivity between the hosts over each specific bridge.

$ ansible -vi inventory/myos all -m shell -a "ip route" --limit infra,compute
Using /etc/ansible/ansible.cfg as config file
infra2 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.0.3.0/24 dev lxcbr0  proto kernel  scope link  src 10.0.3.1 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.12 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.12 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.12 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.12 

infra1 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.0.3.0/24 dev lxcbr0  proto kernel  scope link  src 10.0.3.1 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.11 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.11 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.11 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.11 

infra3 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.0.3.0/24 dev lxcbr0  proto kernel  scope link  src 10.0.3.1 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.13 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.13 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.13 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.13

compute1 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.16 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.16 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.16 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.16 

compute2 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.17 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.17 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.17 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.17 

So, using the br-mgmt IPs (172.29.236.x), any of the hosts above should be able to reach its peers on the same br-mgmt subnet.
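A quick way to sweep that check across all nodes, reusing the same inventory as above (a sketch; 172.29.236.11 is infra1's br-mgmt address in my lab):

$ ansible -i inventory/myos all -m shell -a "ping -c 2 172.29.236.11" --limit infra,compute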

mino
  • Hosts can reach each other over every bridge interface. – Roman Nov 07 '18 at 14:09
  • I assume one of the two VBoxes is also the deployer node. Can the deployer node SSH to the other host (again via br-mgmt)? You could show us the full openstack_user config along with your hosts' network config so we can get a better idea. – mino Nov 08 '18 at 00:52
  • I am running the Ansible playbooks not from within the virtual environment. **My [openstack_user config](https://github.com/piar1989/openstack-ansible/blob/master/openstack_user_config.yml)** **My [network configuration](https://github.com/piar1989/openstack-ansible/blob/master/openstack_user_config.yml)** – Roman Nov 08 '18 at 16:27
  • Results for the `ip route` command are [here](https://github.com/piar1989/openstack-ansible/blob/master/controller-ip-route) – Roman Nov 08 '18 at 16:37
  • Two things I see from your setup that may or may not contribute to your issue: your deployment node should also be on the same br-mgmt network according to the [guide](https://docs.openstack.org/project-deploy-guide/openstack-ansible/rocky/deploymenthost.html#configure-the-network), and your openstack_user config should use the br-mgmt IP to identify your hosts. Eg. shared-infra_hosts: controller01: ip: – mino Nov 09 '18 at 02:11
  • I really missed this configuration, thanks. Now I have tried configuring the deployment host (where Ansible is executed) on the same layer 2 network, but the LXC containers are still without IPs. Maybe you have some other ideas? – Roman Nov 09 '18 at 09:30
  • From your infra node's `ip r` output, lxcbr0 is missing. I suggest you check the Ansible log under /openstack/log/ to see what the failure is related to. – mino Nov 09 '18 at 20:34
  • In my [log file](https://github.com/piar1989/openstack-ansible/blob/master/anisble.log) I can't find any errors that are related to my problem. – Roman Nov 12 '18 at 18:43
  • I think the hint is [in these outputs](https://github.com/piar1989/openstack-ansible/blob/master/anisble.log#L1160-L1197). These indicate the network interfaces of the containers are not configured as expected. Please check the documentation for a missing step, or better yet, check with the upstream project group on #openstack-ansible to seek further assistance. – mino Nov 14 '18 at 01:31
  • Thanks, but that's not the root of the problem. I tried specifying the variable `lxc_user_defined_container: "ubuntu-18.04.yml"`, but it does not help. – Roman Nov 15 '18 at 13:13