
I am attempting to install and configure OpenStack Mitaka on a 4-node stack: 1 Controller, 1 Compute, 1 Block Storage, and 1 Object Storage. After setting up the Block Storage node, I am unable to create a volume via the dashboard. The base OS is Ubuntu 14.04, and as mentioned, this is the Mitaka release of OpenStack.

Here is the cinder.conf on the Controller Node

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi


[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *********

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = **********


[database]
connection = mysql+pymysql://cinder:********@controller/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
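
For reference, after editing cinder.conf on the controller I restart the Cinder services like this (assuming the standard Ubuntu 14.04 service names from the Mitaka guide):

# On the controller: restart the Cinder API and scheduler
service cinder-api restart
service cinder-scheduler restart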

Here is the cinder.conf on the Cinder (Block Storage) Node

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.41

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = **********
enabled_backends = lvm
glance_api_servers = http://controller:9292

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = ********

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[database]
#connection = mysql+pymysql://cinder:*******@controller/cinder
connection = mysql+pymysql://cinder:*******@controller/cinder
#connection = mysql://cinder:******@controller/cinder

[api_database]
connection = mysql+pymysql://cinder:*******@controller/cinder_api



[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
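
Similarly, on the Block Storage node I restart the volume service and the iSCSI target daemon after any config change (again assuming the standard service names):

# On the storage node: restart the tgt iSCSI target and the volume service
service tgt restart
service cinder-volume restart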

The status after I create the volume is "error". Here is the error line I get inside the cinder-scheduler.log file on the Controller Node:

2016-09-07 17:14:22.291 10607 ERROR cinder.scheduler.flows.create_volume [req-272c5387-a2e3-4371-8a14-8330831910d0 a43909277cbb418fa12fab4d22e0586c 64d180e39e2345ac9bbcd0c389b0a7c4 - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid host was found. No weighed hosts available

I believe this is the most important part of the error message:

volume:create: No valid host was found. No weighed hosts available
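
From what I have read, "No valid host was found" usually means the scheduler never received a usable capability report from any cinder-volume backend, so I have also been watching the logs on both nodes while retrying the create (assuming the default log locations):

# On the Block Storage node: look for driver or messaging errors
grep -i error /var/log/cinder/cinder-volume.log | tail -n 20

# On the controller: watch the scheduler while retrying the volume create
tail -f /var/log/cinder/cinder-scheduler.log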

When I run the command "cinder service-list" from the Controller Node I get the following output:

+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2016-09-07T22:13:11.000000 |        -        |
|  cinder-volume   |   cinder   | nova | enabled |   up  | 2016-09-07T22:13:30.000000 |        -        |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

It is interesting to note that the host name is cinder, whereas in the Mitaka install guide the host name is block1@lvm. I am not sure why mine is different, or whether that is even relevant, but it may be a clue to my problem.
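
If I understand the multi-backend docs correctly, when enabled_backends is set in the [DEFAULT] section, cinder-volume reports itself once per backend in the form host@backend, so I would have expected a row shaped like this (illustrative):

|  cinder-volume   | cinder@lvm | nova | enabled |   up  |            ...             |        -        |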

This leads me to believe that the Cinder node and the Controller node are able to "see" and communicate with each other. I believe I have configured LVM properly inside the Cinder node. Just in case, here is the filter section from the lvm.conf file:

filter = [ "a/sda/", "a ...
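
The full line is cut off in my paste. For comparison, the filter from the Mitaka guide looks like this (assuming the OS LVM lives on sda and the cinder-volumes disk is sdb, as in the guide; device names may differ on my box):

filter = [ "a/sda/", "a/sdb/", "r/.*/" ]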

With all this being said, I am thinking it is either a partition/hard-drive format issue, or a RabbitMQ (messaging service) issue. I do have rabbitmq-server installed on the Cinder node, which I know is not the way the guide has it set up, so it is probably wrong. What I am attempting to do now is remove rabbitmq-server from the Cinder node. The problem I expect to run into is that the Cinder node and the Controller node won't "see" each other. If that is the case, then maybe there is something wrong with the conf files on any one of the 3 nodes I have running right now: Controller, Compute, and Cinder.
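
To test the messaging theory, I have been checking which clients are actually connected to the broker on the controller (assuming the default RabbitMQ setup from the guide):

# On the controller: list AMQP connections; I expect to see clients
# from both the controller (api/scheduler) and the Cinder node (volume)
rabbitmqctl list_connections user peer_host state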

Let me know what you think. If you see an issue with my conf files, please tell me. The last paragraph is there to explain my thinking and the current state of the project. If you see an error in my logic, or think there may be a better way to solve the problem, I am all ears!

Thanks Everyone!

StevieSwift
  • The cinder tag you have added is for a different Cinder - I don't know much about this Cinder - and even less about that one ;-) – Jimmy Oct 08 '16 at 19:59

2 Answers


First, check the output of the vgs command. If you installed OpenStack via packstack (same as me), the default volume group size is 20GB or so. You can view the packstack answer file to confirm, or check the volume group size directly:

CONFIG_CINDER_VOLUMES_SIZE=20G
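
For example, on the storage node (the output shape here is illustrative):

# Check that the cinder-volumes volume group exists and has free space
vgs cinder-volumes
#   VG             #PV #LV #SN Attr   VSize  VFree
#   cinder-volumes   1   0   0 wz--n- 20.60g 20.60g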

If you want to extend the size of this volume group, use this link:

Hope this will resolve your issue.

Community

You have put the enabled_backends key in the wrong section. It should be defined in the [DEFAULT] section, on both the controller as well as the storage node.
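
On the storage node, the relevant part of cinder.conf should look roughly like this (a sketch based on your paste; note that glance_api_servers, which is also sitting in [keystone_authtoken], belongs in [DEFAULT] as well):

[DEFAULT]
# ... your existing DEFAULT options ...
enabled_backends = lvm
glance_api_servers = http://controller:9292

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

After restarting cinder-volume, cinder service-list should show the host as cinder@lvm instead of plain cinder.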