I'm posting this as a question to report a problem (and workaround) I encountered that doesn't seem to be covered by other questions. It's probably quite specific to the software setup I'm using, but in case it helps...
This was on a single-node configuration (Ubuntu 12.04, Havana OpenStack) that had been running successfully for many years, but this was the first time in a while that I had tried to create a new bootable VM volume from an image.
The command I ran was:
cinder create 50 --display_name bionic-test-annalist-50Gb \
--volume_type lvm-scsi \
--image-id 5121d3e9-ef3d-4ff9-a5b9-f2f31c08cbbe \
--availability-zone nova
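(For context, the --volume_type here selects which of the configured storage backends the scheduler should place the volume on. As a sanity check on your own setup, the defined types can be listed with the standard client command:

cinder type-list

In my case lvm-scsi had been used successfully for all the existing volumes shown below.)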
Following this, cinder list shows the new volume in an error state:
root@seldon:/etc/cinder# cinder list
+--------------------------------------+--------+-----------------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |     Display Name      | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+-----------------------+------+-------------+----------+--------------------------------------+
| 26277f8f-e0cd-43e7-8e5c-c42b0be21706 | in-use | dhoxss-annalist-50Gb  |  50  |  lvm-scsi   |   true   | d436f20c-5f8f-47cb-9ad5-eacaf6bda882 |
| 852fd771-71ec-4d0a-ae62-b48b5e35ff93 | in-use |  demo-annalist-50Gb   |  50  |  lvm-scsi   |   true   | eac53b50-54f3-4e93-804d-91569e1ed337 |
| abe7e7e6-502c-48b5-95ef-207891076e11 | in-use |  test-databank-50Gb   |  50  |  lvm-scsi   |   true   | 367bddfe-da43-40f2-a23c-75a5dac5225e |
| afa05ae4-e956-446b-bb26-a1439502435c | error  | bionic-annalist-50Gb  |  50  |  lvm-scsi   |  false   |                                      |
| ce7e0d7b-dfe3-4c8a-a541-91d9b6b388d9 | in-use | fast-performance-50Gb |  50  |  lvm-scsi   |   true   | 233a8924-cfd0-4f2c-a242-d596f1bb0cee |
| da9a5222-246e-4697-b10e-02c9a912d4b6 | in-use |   dev-annalist-50Gb   |  50  |  lvm-scsi   |   true   | 463ffed0-7a31-467b-9ec6-a5acdbf72723 |
+--------------------------------------+--------+-----------------------+------+-------------+----------+--------------------------------------+
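(More detail on the errored volume is available via cinder show with its ID, e.g.:

cinder show afa05ae4-e956-446b-bb26-a1439502435c

but the more informative message turned out to be in the logs.)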
The Cinder log file (I think it was /var/log/cinder/cinder-scheduler.log) shows this:
2018-10-10 18:29:49.803 2111 WARNING cinder.scheduler.host_manager [req-4d12534f-abcd-499f-99cf-5f49d0308439 c570590c61be4ae5819c9b2d93986df2 1e701a6ab66141b9a64bfd963e301bc6] volume service is down or disabled. (host: seldon)
2018-10-10 18:29:49.804 2111 WARNING cinder.scheduler.host_manager [req-4d12534f-abcd-499f-99cf-5f49d0308439 c570590c61be4ae5819c9b2d93986df2 1e701a6ab66141b9a64bfd963e301bc6] volume service is down or disabled. (host: seldon@lvmdriver-scsi)
2018-10-10 18:29:49.805 2111 ERROR cinder.volume.flows.create_volume [req-4d12534f-abcd-499f-99cf-5f49d0308439 c570590c61be4ae5819c9b2d93986df2 1e701a6ab66141b9a64bfd963e301bc6] Failed to schedule_create_volume: No valid host was found.
Specifically note: Failed to schedule_create_volume: No valid host was found.
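If you need to locate the same failure on another system, something like this should surface it in the scheduler log:

grep 'No valid host' /var/log/cinder/cinder-scheduler.log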
And cinder service-list confirms that the cinder-volume service for the lvm-scsi backend (seldon@lvmdriver-scsi) is down:
root@seldon:/etc/cinder# cinder service-list
+------------------+-----------------------+------+---------+-------+----------------------------+
|      Binary      |         Host          | Zone | Status  | State |         Updated_at         |
+------------------+-----------------------+------+---------+-------+----------------------------+
| cinder-scheduler |        seldon         | nova | enabled |  up   | 2018-10-10T17:30:07.000000 |
|  cinder-volume   |        seldon         | nova | enabled | down  | 2014-03-11T14:17:02.000000 |
|  cinder-volume   | seldon@lvmdriver-sas  | nova | enabled |  up   | 2018-10-10T17:30:12.000000 |
|  cinder-volume   | seldon@lvmdriver-scsi | nova | enabled | down  | 2018-10-10T17:27:55.000000 |
+------------------+-----------------------+------+---------+-------+----------------------------+
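For anyone retracing this on a similarly old system: the Cinder daemons on Ubuntu 12.04 are upstart jobs, so the obvious first diagnostic (not the eventual fix, just the generic first step) looks something like this:

# Check and restart the volume service (upstart, Ubuntu 12.04)
service cinder-volume status
service cinder-volume restart

# Watch for errors as the backends start up
tail -f /var/log/cinder/cinder-volume.log

# Then re-check whether the backend is reported as up
cinder service-list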
Given that the system was previously working, and the existing VMs are still running fine, what's going on here? Web searches didn't turn up any fixes for this situation.