INFO nova.compute.manager: updating host status

This address space is shared among all user instances, regardless of which tenant they belong to.

Each tenant is free to grab whatever address is available in the pool.

I tried the DR procedure here without any improvement. The error is:

ERROR nova.compute.manager [req-adacca25-ede8-4c6d-be92-9e8bd8578469 cb302c58bb4245cebc61e132c79c1111 768bd68a0ac149eb8e300665eb3d3950] [instance: 3cd109e4-addf-4aa8-bf66-b69df6573cea] Cannot reboot instance: iSCSI device not found at /dev/disk/by-path/ip-

This is a very restrictive issue, because I cannot simply attach volumes to instances knowing that after a power failure or a reboot for maintenance my instances will be unavailable.

1. Use "virsh list --all" to list all of your non-running VMs.
2. Go to the instance's directory (default /var/lib/nova/instances/instance-00000001) and run "virsh define libvirt.xml".
3. For example, if there is an instance named "instance-00000001" that you cannot reboot with the nova command because it has a block disk attached, run "virsh start instance-00000001"; it can be started now.
4. You can then "reboot" using nova-client or the dashboard, and afterwards attach the volume to the instance with nova-client or the dashboard.

I hope it can help you.
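The virsh steps above can be sketched as a small shell script. This is a minimal sketch, not the poster's actual script: the instances directory and instance name are assumptions (the nova defaults mentioned in the thread), and commands are only echoed unless you set RUN= to run them for real.

```shell
#!/bin/sh
# Sketch of the libvirt recovery procedure from the thread.
# Assumptions: default instances dir /var/lib/nova/instances,
# example instance name instance-00000001.
# Commands are echoed by default; set RUN= (empty) to execute them.
RUN=${RUN:-echo}
INSTANCES_DIR=${INSTANCES_DIR:-/var/lib/nova/instances}

recover_instance() {
    name=$1

    # 1. list all domains, including the non-running one
    $RUN virsh list --all

    # 2. re-define the domain from the XML nova wrote for the instance
    $RUN virsh define "$INSTANCES_DIR/$name/libvirt.xml"

    # 3. start the domain directly with libvirt
    $RUN virsh start "$name"
}

recover_instance instance-00000001
```

Once the domain is running again, the reboot and volume attach (steps 3-4) go through nova-client or the dashboard as described above.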

By the way, I have put that in a shell script below, to make it easy to run the procedure. As I said in a previous email, the attach was reporting an error that device /dev/vdc was already in use (which is not the case). I changed the device to /dev/vde; it accepts and submits the command, but does not attach the device. I hope someone, including you livemoon :), still has something else to say about this.

2012-11-18 TRACE nova.compute.manager [instance: c5cf37e2-9e96-45a2-a739-638ac9877128] Timeout: Timeout while waiting on RPC response.
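For reference, the attach attempt described above looks like this with nova-client. The volume ID here is a placeholder (the instance UUID is the one from the log earlier in the thread), and the command is echoed rather than executed unless RUN= is set, since this is only an illustration of the failing call, not a fix.

```shell
#!/bin/sh
# Sketch of the volume-attach call that fails in the thread.
# SERVER is the instance UUID from the log above; VOLUME is a
# placeholder -- substitute a real ID from `nova volume-list`.
# Commands are echoed by default; set RUN= (empty) to execute.
RUN=${RUN:-echo}
SERVER=${SERVER:-3cd109e4-addf-4aa8-bf66-b69df6573cea}
VOLUME=${VOLUME:-my-volume-id}

# The poster first tried /dev/vdc (rejected as "already in use"),
# then /dev/vde, which was accepted but never actually attached.
$RUN nova volume-attach "$SERVER" "$VOLUME" /dev/vde
```

If the attach is accepted but the device never appears, checking `nova volume-list` for the volume's status (stuck in "attaching") is a reasonable next diagnostic step.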