OpenStack Operations
List Virtual Machines
--all_tenants shows all vm instances regardless of project.
root@ctl-1:~# nova list --all_tenants
+--------------------------------------+------------------------+---------+------------+-------------+--------------------------------------+
| ID                                   | Name                   | Status  | Task State | Power State | Networks                             |
+--------------------------------------+------------------------+---------+------------+-------------+--------------------------------------+
| 6023f874-f719-4a12-8e0a-c6528df39e87 | API                    | ACTIVE  | None       | Running     | private=10.xx.yy.49                  |
| 95bda306-6e01-4775-8a04-133b2eb76ae7 | Ident API              | ACTIVE  | None       | Running     | private=10.xx.yy.6                   |
| 62c80713-9a12-4956-9406-d24493550519 | Ingest API             | ACTIVE  | None       | Running     | private=10.xx.yy.4, 84.aa.bbb.166    |
| 18c7a005-b079-465a-a28c-160b100da449 | Logstash-dev           | ACTIVE  | None       | Running     | private=10.xx.yy.20                  |
| 112b4b48-5bfa-4006-85c0-1dd58d99b4a2 | assetstore             | ACTIVE  | None       | Running     | private=10.xx.yy.52                  |
Start / stop / reboot Instances
root@ctl-1:~# nova start 0131a281-bf2b-4881-8142-061f095020fb
root@ctl-1:~# nova reboot 0e661f2b-438c-4642-b5f6-1b4af75db4ae
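nova stop works the same way if the instance just needs powering off (ID reused from the start example above):-
root@ctl-1:~# nova stop 0131a281-bf2b-4881-8142-061f095020fb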
List Hypervisors
Note, this lists ALL hypervisors known to the database, including ones which have been retired, rebuilt or otherwise disposed of, and, as shown below, it can include duplicate names in the hostname field.
root@ctl-1:~# nova hypervisor-list
+----+------------------------------+
| ID | Hypervisor hostname          |
+----+------------------------------+
| 1  | ncom-1.int.company.com       |
| 2  | ncom-2.int.company.com       |
| 3  | ncom-4.int.company.com       |
| 4  | ncom-5.int.company.com       |
| 5  | ncom-6.int.company.com       |
| 6  | ncom-7.int.company.com       |
| 7  | ncom-8.int.company.com       |
| 8  | ncom-3.int.company.com       |
| 9  | ncom-7.int.company.com       |
| 10 | ncom-8.int.company.com       |
| 11 | ncom-9.int.company.com       |
| 12 | ncom-10.int.company.com      |
| 13 | ncom-11.int.company.com      |
| 14 | ncom-12.int.company.com      |
| 15 | ncom-11.int.company.com      |
| 16 | ncom-12.int.company.com      |
+----+------------------------------+
List all virtual machines on hypervisor
root@ctl-1:~# nova hypervisor-servers ncom-1.int.company.com
+--------------------------------------+-------------------+---------------+-----------------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname         |
+--------------------------------------+-------------------+---------------+-----------------------------+
| 95bda306-6e01-4775-8a04-133b2eb76ae7 | instance-00000073 | 1             | ncom-1.int.company.com      |
| f1ae00b2-b7a4-4984-999c-ce88e0d1fc32 | instance-0000007a | 1             | ncom-1.int.company.com      |
| fddd3b27-844a-4bb3-ba06-7a6742501fcb | instance-00000091 | 1             | ncom-1.int.company.com      |
| ca307c06-f17a-44d5-bdc4-cd45d3cf0ff3 | instance-00000096 | 1             | ncom-1.int.company.com      |
| 32a96abd-6244-45dd-9ebd-841b8b5d2fca | instance-00000097 | 1             | ncom-1.int.company.com      |
| ea236b7e-20ad-476b-9a68-f1dadebcbe1a | instance-000000ae | 1             | ncom-1.int.company.com      |
| e6942ca2-fe69-4c1c-94d2-5fe4ea778fcc | instance-000000b8 | 1             | ncom-1.int.company.com      |
| ebaa6e76-37af-4ffd-812b-afd92ed5f718 | instance-000000bc | 1             | ncom-1.int.company.com      |
| b900e9d3-5863-41fb-b16b-0e69559d4c97 | instance-000000cb | 1             | ncom-1.int.company.com      |
| 3a10e33e-0d01-46e4-84b7-1f93cc442c33 | instance-000000cc | 1             | ncom-1.int.company.com      |
| 9c0fed31-6be0-4798-952e-08ebdfd2a779 | instance-000000cf | 1             | ncom-1.int.company.com      |
+--------------------------------------+-------------------+---------------+-----------------------------+
root@ctl-1:~#
List all virtual machines
Using this command without --all_tenants will show only instances associated with the current project; as we are admin, this will be the admin project.
root@ctl-1:~# nova list --all_tenants
+--------------------------------------+------------------------+---------+------------+-------------+--------------------------------------+
| ID                                   | Name                   | Status  | Task State | Power State | Networks                             |
+--------------------------------------+------------------------+---------+------------+-------------+--------------------------------------+
| 9c0fed31-6be0-4798-952e-08ebdfd2a779 | API                    | ACTIVE  | None       | Running     | private=10.xx.yy.52                  |
| 95bda306-6e01-4775-8a04-133b2eb76ae7 | Ident API              | ACTIVE  | None       | Running     | private=10.xx.yy.6                   |
| 62c80713-9a12-4956-9406-d24493550519 | Ingest API             | ACTIVE  | None       | Running     | private=10.xx.yy.4, 84.aa.bbb.166    |
| 18c7a005-b079-465a-a28c-160b100da449 | Logstash-dev           | ACTIVE  | None       | Running     | private=10.xx.yy.20                  |
| 45d4ec2d-7ae9-4eb4-af4f-06cacc364a41 | ajstest3               | ACTIVE  | None       | Running     | private=10.xx.yy.51                  |
| 7b25c7e9-33b6-414f-b924-13a3183dd3f9 | asset store            | ACTIVE  | None       | Running     | private=10.xx.yy.53                  |
| 6ba41a72-a50c-492a-8976-9297efb3b139 | assetstore03           | ACTIVE  | None       | Running     | private=10.xx.yy.41                  |
List all floating IP addresses
Display all floating IP addresses:-
root@ctl-1:~# nova floating-ip-bulk-list
+----------------------------------+---------------+--------------------------------------+------+-----------+
| project_id                       | address       | instance_uuid                        | pool | interface |
+----------------------------------+---------------+--------------------------------------+------+-----------+
| bb7d3686c3f74eb28ea15cb659c8019e | 84.aa.bbb.163 | 8cd77b27-0cf8-43d3-8351-7f3b8c4a6903 | nova | eth2      |
| bb7d3686c3f74eb28ea15cb659c8019e | 84.aa.bbb.164 | None                                 | nova | eth2      |
| a59fad25f7d34d7eac7f0ba8d7cf8cd7 | 84.aa.bbb.165 | dc7884aa-327a-49a7-8d56-dc30f6237ca9 | nova | eth2      |
.... edited....
| e232ca1d2ec547caad6f446639751b86 | 84.aa.bbb.185 | e7ceb5d8-b0f6-4df1-8532-0aa1a133f854 | nova | eth2      |
| 58463d6730264c3fac3d7c22349834db | 84.aa.bbb.160 | 3c870ab0-d5be-4da3-a3c4-3f4442a7bb03 | nova | eth2      |
| 3aeda05c2cae4bb0aeda1ae1e9279eb8 | 84.aa.bbb.161 | 53e3e7a9-bb88-483b-beb5-e66c86af3cc5 | nova | eth2      |
| 5f875f5bd39c4b119ca4cc3b7d811e70 | 84.aa.bbb.162 | None                                 | nova | eth2      |
+----------------------------------+---------------+--------------------------------------+------+-----------+
root@ctl-1:~#
Add floating IP address
root@ctl-1:~# nova add-floating-ip 9323d158-5894-47ea-a2fc-1c51df09ee17 84.aa.bbb.180
Show details of specific instance
root@ctl-1:~# nova show tenantID +--------------------------------------+----------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------+ | status | ACTIVE | | updated | 2014-04-09T11:40:32Z | | OS-EXT-STS:task_state | None | | OS-EXT-SRV-ATTR:host | ncom-6.int.company.com | | key_name | rservers | | image | ubuntu-12.04 (d829a34c-4016-4535-9d03-192ad3a6dca6) | | private network | 10.xx.yy.24 | | hostId | 3dbe6985c4bfd6f84e7dd0612f2d01432e5ee94d892db88ad6589de8 | | OS-EXT-STS:vm_state | active | | OS-EXT-SRV-ATTR:instance_name | instance-000000d7 | | OS-SRV-USG:launched_at | 2014-04-09T11:40:32.000000 | | OS-EXT-SRV-ATTR:hypervisor_hostname | ncom-6.int.company.com | | flavor | m1.small (2) | | id | 7d6c6e83-ff4c-456e-a75e-e565e4c33fcb | | security_groups | [{u'name': u'DNS-servers'}, {u'name': u'default'}] | | OS-SRV-USG:terminated_at | None | | user_id | 4de8ce3bd877407783f4c903bb9d1530 | | name | inet04 | | created | 2014-04-09T11:39:51Z | | tenant_id | 4c7db08bd3644fb58fca3d52501c75c6 | | OS-DCF:diskConfig | MANUAL | | metadata | {} | | os-extended-volumes:volumes_attached | [] | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 1 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | +--------------------------------------+----------------------------------------------------------+ root@ctl-1:~#
Due to bug https://bugs.launchpad.net/python-novaclient/+bug/1233492, nova show has to be given the ID instead of the name.
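A minimal sketch of the workaround, reusing an instance from the listing above - grep the list for the name, then show by ID:-
root@ctl-1:~# nova list --all_tenants | grep 'Ident API'
root@ctl-1:~# nova show 95bda306-6e01-4775-8a04-133b2eb76ae7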
Rename Instance
nova rename f84ffd61-a74e-4171-835c-576caf507db4 jenkins-mon01
Delete VM
VM stuck in a non-running state:-
root@ctl-1:~# nova reset-state migrationtest2
root@ctl-1:~# nova delete 222e3e46-abca-4f6b-932e-68112a1d1a40
Reset to running state
root@ctl-1:~# nova reset-state --active <instance>
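For example, to push the stuck instance from the previous section back to active (name reused from above):-
root@ctl-1:~# nova reset-state --active migrationtest2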
Delete VM from database the naughty way
If Horizon is still showing a vm in a stuck state of one sort or another, the vm can be hacked out of the database this way. NOT to be recommended!!
The machines I have tried this on are ghost shells left behind when a compute node has been deleted, so the actual vm files don't exist.
root@ctl-1:~# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 61224
Server version: 5.5.34-0ubuntu0.12.04.1 (Ubuntu)

mysql> use nova;
Database changed
mysql> select hostname, vm_state, deleted_at, power_state from instances where uuid='45d4ec2d-7ae9-4eb4-af4f-06cacc364a41';
+----------+----------+------------+-------------+
| hostname | vm_state | deleted_at | power_state |
+----------+----------+------------+-------------+
| ajstest3 | error    | NULL       | 1           |
+----------+----------+------------+-------------+
1 row in set (0.00 sec)

mysql> update instances set vm_state='deleted', task_state=NULL, deleted=1, deleted_at=now() WHERE uuid='45d4ec2d-7ae9-4eb4-af4f-06cacc364a41';
Query OK, 1 row affected (0.13 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select hostname, vm_state, deleted_at, power_state from instances where uuid='45d4ec2d-7ae9-4eb4-af4f-06cacc364a41';
+----------+----------+---------------------+-------------+
| hostname | vm_state | deleted_at          | power_state |
+----------+----------+---------------------+-------------+
| ajstest3 | deleted  | 2014-05-19 11:17:22 | 1           |
+----------+----------+---------------------+-------------+
1 row in set (0.00 sec)

mysql>
Force delete vm from node
Probably not a good idea unless there is no other way.
This was done to remove an instance from a nova compute node which seemed not to be responding; after removing the instance from the compute node, the instance could be Terminated from the web GUI.
root@ncom-11:~# virsh list --all
 Id Name                 State
----------------------------------------------------
 2  instance-000000c0    running
 3  instance-000000c2    running
 7  instance-000000ca    running
root@ncom-11:~# virsh destroy instance-000000c0
Domain instance-000000c0 destroyed
root@ncom-11:~# virsh undefine instance-000000c0
Domain instance-000000c0 has been undefined
root@ncom-11:~# virsh list --all
 Id Name                 State
----------------------------------------------------
 3  instance-000000c2    running
 7  instance-000000ca    running
Block Live Migration
See setup on compute nodes ncom-1 to ncom-8
Source the Keystone env vars first:-
root@ctl-1:~# ls keystonerc
root@ctl-1:~# . keystonerc
root@ctl-1:~# nova show migrationtest2 | grep hypervisor
| OS-EXT-SRV-ATTR:hypervisor_hostname | ncom-8.int.company.com
root@ctl-1:~# nova live-migration --block-migrate migrationtest2 ncom-7.int.company.com
root@ctl-1:~#
root@ctl-1:~# nova show migrationtest2 | grep hypervisor
| OS-EXT-SRV-ATTR:hypervisor_hostname | ncom-7.int.company.com
root@ctl-1:~#
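To drain a whole hypervisor, the same command can be looped over every instance on it; a sketch (not tried here), assuming ncom-8 is the node being emptied - with no destination host given, the scheduler picks one:-
for id in $(nova hypervisor-servers ncom-8.int.company.com | awk '/instance-/ {print $2}'); do
    # field 2 of the hypervisor-servers table is the instance UUID
    nova live-migration --block-migrate "$id"
done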
Log nova-compute
on source machine:-
2016-02-04 09:15:54.415 1991 INFO nova.compute.resource_tracker [-] Compute_service record updated for ncom-3.int.company.com:ncom-3.int.company.com
2016-02-04 09:15:55.366 1991 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] During sync_power_state the instance has a pending task. Skip.
2016-02-04 09:15:55.367 1991 INFO nova.compute.manager [-] Updating bandwidth usage cache
....
....
....
2016-02-04 09:27:12.215 1991 INFO nova.compute.manager [-] Lifecycle event 2 on VM bd93a229-e399-4148-b061-500cb2dbd34f
2016-02-04 09:27:12.378 1991 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] During sync_power_state the instance has a pending task. Skip.
2016-02-04 09:27:12.378 1991 INFO nova.compute.manager [-] Lifecycle event 2 on VM bd93a229-e399-4148-b061-500cb2dbd34f
2016-02-04 09:27:12.554 1991 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] During sync_power_state the instance has a pending task. Skip.
2016-02-04 09:27:51.694 1991 INFO nova.compute.manager [-] Lifecycle event 1 on VM bd93a229-e399-4148-b061-500cb2dbd34f
2016-02-04 09:27:51.701 1991 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] _post_live_migration() is started..
2016-02-04 09:27:51.818 1991 WARNING nova.virt.libvirt.utils [-] systool is not installed
2016-02-04 09:27:51.865 1991 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] During sync_power_state the instance has a pending task. Skip.
2016-02-04 09:27:51.869 1991 WARNING nova.virt.libvirt.utils [-] systool is not installed
2016-02-04 09:27:55.248 1991 ERROR nova.virt.libvirt.driver [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] During wait destroy, instance disappeared.
2016-02-04 09:27:55.253 1991 INFO nova.virt.libvirt.firewall [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] Attempted to unfilter instance which is not filtered
2016-02-04 09:27:55.398 1991 INFO nova.virt.libvirt.driver [req-fd91da77-e672-41ab-8fa4-a98fad6d92b1 None None] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] Deleting instance files /var/lib/nova/instances/bd93a229-e399-4148-b061-500cb2dbd34f
2016-02-04 09:27:56.405 1991 INFO nova.virt.libvirt.driver [req-fd91da77-e672-41ab-8fa4-a98fad6d92b1 None None] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] Deletion of /var/lib/nova/instances/bd93a229-e399-4148-b061-500cb2dbd34f complete
2016-02-04 09:27:57.063 1991 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] Migrating instance to ncom-5.int.company.com finished successfully.
2016-02-04 09:27:57.063 1991 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] You may see the error "libvirt: QEMU error: Domain not found: no domain with matching name." This error can be safely ignored.
Log nova-compute
on dest machine:-
2016-02-04 09:14:51.585 3149 INFO nova.compute.resource_tracker [-] Compute_service record updated for ncom-5.int.company.com:ncom-5.int.company.com
2016-02-04 09:14:56.618 3149 INFO nova.virt.libvirt.driver [req-e3fc55b5-723d-43fb-b81a-be678e575cc4 4de8ce3bd877407783f4c903bb9d1530 4c7db08bd3644fb58fca3d52501c75c6] Instance launched has CPU info: {"vendor": "Intel", "model": "SandyBridge", "arch": "x86_64", "features": ["pdpe1gb", "osxsave", "dca", "pcid", "pdcm", "xtpr", "tm2", "est", "smx", "vmx", "ds_cpl", "monitor", "dtes64", "pbe", "tm", "ht", "ss", "acpi", "ds", "vme"], "topology": {"cores": 4, "threads": 1, "sockets": 1}}
2016-02-04 09:15:11.131 3149 WARNING nova.virt.disk.vfs.guestfs [req-e3fc55b5-723d-43fb-b81a-be678e575cc4 4de8ce3bd877407783f4c903bb9d1530
2016-02-04 09:15:14.319 3149 INFO nova.compute.manager [-] Lifecycle event 0 on VM bd93a229-e399-4148-b061-500cb2dbd34f
2016-02-04 09:15:14.445 3149 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] During the sync_power process the instance has moved from host ncom-5.int.company.com to host ncom-3.int.company.com
....
....
....
2016-02-04 09:27:48.451 3149 INFO nova.compute.manager [-] Lifecycle event 3 on VM bd93a229-e399-4148-b061-500cb2dbd34f
2016-02-04 09:27:48.579 3149 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] During the sync_power process the instance has moved from host ncom-5.int.company.com to host ncom-3.int.company.com
2016-02-04 09:27:48.580 3149 INFO nova.compute.manager [-] Lifecycle event 3 on VM bd93a229-e399-4148-b061-500cb2dbd34f
2016-02-04 09:27:48.702 3149 INFO nova.compute.manager [-] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] During the sync_power process the instance has moved from host ncom-5.int.company.com to host ncom-3.int.company.com
2016-02-04 09:27:52.684 3149 INFO nova.compute.manager [req-e3fc55b5-723d-43fb-b81a-be678e575cc4 4de8ce3bd877407783f4c903bb9d1530 4c7db08bd3644fb58fca3d52501c75c6] [instance: bd93a229-e399-4148-b061-500cb2dbd34f] Post operation of migration started
Flavors / Flavours
To resize an instance see (Not tried in production):-
http://docs.openstack.org/user-guide/content/nova_cli_resize.html (dead link)
Snapshots
Snapshots created in the GUI are created as private images by default, so they show up in nova image-list, but not in glance image-list:-
root@ctl-1:~# nova image-list +--------------------------------------+------------------------------+--------+--------------------------------------+ | ID | Name | Status | Server | +--------------------------------------+------------------------------+--------+--------------------------------------+ | 403a3b3f-7a26-4350-83d3-7477f443d75e | migrationtest3-snapshot-test | ACTIVE | 3a897963-f10b-4db2-8de8-944dffc0ca90 | +--------------------------------------+------------------------------+--------+--------------------------------------+ root@ctl-1:~# glance image-list +--------------------------------------+------------------------------+-------------+------------------+-----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+------------------------------+-------------+------------------+-----------+--------+ | 41ef7948-9675-4c7f-b086-ed3f994f15c2 | CirrOS 0.3.1 | qcow2 | bare | 13147648 | active | | f2774648-ffcb-4968-8e93-435721076f57 | fedora19 | qcow2 | bare | 237371392 | active | | d829a34c-4016-4535-9d03-192ad3a6dca6 | ubuntu-12.04 | qcow2 | bare | 254738432 | active | +--------------------------------------+------------------------------+-------------+------------------+-----------+--------+ root@ctl-1:~# glance image-update 403a3b3f-7a26-4350-83d3-7477f443d75e --is-public=True +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | Property 'base_image_ref' | d829a34c-4016-4535-9d03-192ad3a6dca6 | | Property 'clean_attempts' | 1 | | Property 'description' | Ubuntu 12.04 LTS | | Property 'image_location' | snapshot | | Property 'image_state' | available | | Property 'image_type' | snapshot | | Property 'instance_type_ephemeral_gb' | 0 | | Property 'instance_type_flavorid' | 1 | | Property 'instance_type_id' | 12 | | Property 'instance_type_memory_mb' | 512 | | Property 'instance_type_name' | m1.tiny | | Property 'instance_type_root_gb' | 10 | | Property 'instance_type_rxtx_factor' | 1 | | Property 'instance_type_swap' | 0 | | Property 'instance_type_vcpus' | 1 | | Property 'instance_uuid' | 3a897963-f10b-4db2-8de8-944dffc0ca90 | | Property 'kernel_id' | None | | Property 'os_type' | None | | Property 'owner_id' | 4c7db08bd3644fb58fca3d52501c75c6 | | Property 'ramdisk_id' | None | | Property 'user_id' | 4de8ce3bd877407783f4c903bb9d1530 | | checksum | 1246f9d0b19033fdf1202c82a2699e1d | | container_format | bare | | created_at | 2014-01-28T12:41:29 | | deleted | False | | deleted_at | None | | disk_format | qcow2 | | id | 403a3b3f-7a26-4350-83d3-7477f443d75e | | is_public | True | | min_disk | 10 | | min_ram | 0 | | name | migrationtest3-snapshot-test | | owner | None | | protected | False | | size | 881590272 | | status | active | | updated_at | 2014-01-28T12:44:55 | +---------------------------------------+--------------------------------------+ root@ctl-1:~# glance image-list +--------------------------------------+------------------------------+-------------+------------------+-----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+------------------------------+-------------+------------------+-----------+--------+ | 41ef7948-9675-4c7f-b086-ed3f994f15c2 | CirrOS 0.3.1 | qcow2 | bare | 13147648 | active | | f2774648-ffcb-4968-8e93-435721076f57 | fedora19 | qcow2 | bare | 237371392 | active | | 
403a3b3f-7a26-4350-83d3-7477f443d75e | migrationtest3-snapshot-test | qcow2 | bare | 881590272 | active | | d829a34c-4016-4535-9d03-192ad3a6dca6 | ubuntu-12.04 | qcow2 | bare | 254738432 | active | +--------------------------------------+------------------------------+-------------+------------------+-----------+--------+ root@ctl-1:~#
Protect Image
See http://docs.openstack.org/user-guide/content/cli_manage_images.html#glance_add_image
root@ctl-1:~# glance image-update 4adea981-478f-4118-92fe-e96687214f2d --is-protected=True
+---------------------------------------+--------------------------------------+
| Property                              | Value                                |
+---------------------------------------+--------------------------------------+
| Property 'base_image_ref'             | d829a34c-4016-4535-9d03-192ad3a6dca6 |
| Property 'description'                | Ubuntu 12.04 LTS                     |
| Property 'image_location'             | snapshot                             |
| Property 'image_state'                | available                            |
| Property 'image_type'                 | snapshot                             |
... edited ...
| is_public                             | True                                 |
| protected                             | True                                 |
+---------------------------------------+--------------------------------------+
root@ctl-1:~#
Password login
Go to Instances, click the Launch Instance button and select the required settings; in the Post-Creation tab, insert the code below as the Customization Script.
#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
The octothorpe line (#cloud-config) is a required directive; mysecret will be encrypted and used as the password, and ssh_pwauth allows ssh password logins (PasswordAuthentication yes), see sshd.
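The same customization script can be supplied when booting from the CLI; a sketch, assuming the script is saved to a file cloud-config.txt, with flavor/image names taken from elsewhere on this page and an illustrative instance name:-
root@ctl-1:~# nova boot --flavor m1.small --image ubuntu-12.04 --user-data cloud-config.txt pwlogintest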
Monitoring
See:- HP hardware monitoring
Remove compute node from service
root@ctl-1:~# nova service-disable
usage: nova service-disable [--reason <reason>] <hostname> <binary>
error: too few arguments
Try 'nova help service-disable' for more information.
root@ctl-1:~# nova service-disable ncom-7 nova-compute
+----------------+--------------+----------+
| Host           | Binary       | Status   |
+----------------+--------------+----------+
| ncom-7         | nova-compute | disabled |
+----------------+--------------+----------+
root@ctl-1:~# nova service-disable ncom-7 nova-network
+----------------+--------------+----------+
| Host           | Binary       | Status   |
+----------------+--------------+----------+
| ncom-7         | nova-network | disabled |
+----------------+--------------+----------+
root@ctl-1:~# nova service-list
+------------------+------------------------------+----------+----------+-------+----------------------------+-----------------+
| Binary           | Host                         | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+------------------+------------------------------+----------+----------+-------+----------------------------+-----------------+
...edited...
| nova-compute     | ncom-7                       | nova     | disabled | up    | 2014-02-04T15:46:20.000000 | None            |
| nova-compute     | ncom-8                       | nova     | disabled | up    | 2014-02-04T15:46:26.000000 | None            |
| nova-network     | ncom-7                       | internal | disabled | up    | 2014-02-04T15:46:33.000000 | None            |
| nova-network     | ncom-8                       | internal | disabled | up    | 2014-02-04T15:46:40.000000 | None            |
+------------------+------------------------------+----------+----------+-------+----------------------------+-----------------+
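To return the node to service afterwards, service-enable is the converse:-
root@ctl-1:~# nova service-enable ncom-7 nova-compute
root@ctl-1:~# nova service-enable ncom-7 nova-network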
Move Instance between Projects
mysql> use keystone; Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Database changed mysql> select id, name, enabled from project\G *************************** 1. row *************************** id: 0000968bc4854852b87edbac32f4ff94 name: Staging enabled: 1 *************************** 2. row *************************** id: 4c7db08bd3644fb58fca3d52501c75c6 name: admin enabled: 1 ....edited.... *************************** 8. row *************************** id: bb7d3686c3f74eb28ea15cb659c8019e name: Monitoring enabled: 1 ....edited.... 11 rows in set (0.00 sec) mysql> select project_id, reservation_id, display_name from nova.instances where project_id = 'bb7d3686c3f74eb28ea15cb659c8019e'; +----------------------------------+----------------+-------------------------------+ | project_id | reservation_id | display_name | +----------------------------------+----------------+-------------------------------+ | bb7d3686c3f74eb28ea15cb659c8019e | r-wnlls6d1 | monitoring6-api | ....edited.... | bb7d3686c3f74eb28ea15cb659c8019e | r-2f8u5i1b | monitoring6-inbound-mail | | bb7d3686c3f74eb28ea15cb659c8019e | r-00cw2es8 | monitoring6-staging-api | | bb7d3686c3f74eb28ea15cb659c8019e | r-yx4j765p | monitoring6-staging-db | | bb7d3686c3f74eb28ea15cb659c8019e | r-w6uobrww | monitoring6-staging-scripts | | bb7d3686c3f74eb28ea15cb659c8019e | r-15x5fm6x | monitoring6-staging-map | | bb7d3686c3f74eb28ea15cb659c8019e | r-960hvsve | monitoring6-staging-bean | | bb7d3686c3f74eb28ea15cb659c8019e | r-lhkh1qja | monitoring6-staging-memcache | +----------------------------------+----------------+-------------------------------+ 14 rows in set (0.00 sec) NOTE: Since there are multiple reservation ID entries for the same instance , we need to look at which instance is the current one by sorting out the deleted ones. 
Eg: If you are looking at an instance with a pattern "scripts" and is currently active , query would be as below mysql> select project_id, reservation_id, display_name, deleted_at from nova.instances where project_id = '0000968bc4854852b87edbac32f4ff94' and display_name like '%scripts%' order by deleted_at; +----------------------------------+----------------+-----------------------------+---------------------+ | project_id | reservation_id | display_name | deleted_at | +----------------------------------+----------------+-----------------------------+---------------------+ | 0000968bc4854852b87edbac32f4ff94 | r-0ukvgnb2 | cid-stag-scripts | NULL | | 0000968bc4854852b87edbac32f4ff94 | r-tt2y1mtb | mon6-stag-scripts01 | NULL | | 0000968bc4854852b87edbac32f4ff94 | r-8mxq4fa0 | cid-stag-scripts01 | NULL | | 0000968bc4854852b87edbac32f4ff94 | r-cu2nwmeq | cid-stag-scripts02 | NULL | | 0000968bc4854852b87edbac32f4ff94 | r-ce4090jg | cid-stag-scripts-1404 | 2014-12-22 11:14:27 | | 0000968bc4854852b87edbac32f4ff94 | r-sklces5v | cid-stag-scripts-1404-test2 | 2014-12-22 12:22:02 | | 0000968bc4854852b87edbac32f4ff94 | r-1xk7ove8 | cid-stag-scripts-1404-test | 2014-12-22 12:22:15 | | 0000968bc4854852b87edbac32f4ff94 | r-q2u6hp32 | cid-stag-scripts-1404 | 2015-01-15 14:32:16 | | 0000968bc4854852b87edbac32f4ff94 | r-w6uobrww | mon-stag-scripts | 2015-01-27 15:06:15 | | 0000968bc4854852b87edbac32f4ff94 | r-u7yn8rjf | cid-stag-scripts01 | 2015-03-06 14:36:18 | +----------------------------------+----------------+-----------------------------+---------------------+ 10 rows in set (0.00 sec) Note: update of project ID needs doing on an instance with the deleted_at status as "NULL" which implies its not deleted yet mysql> update nova.instances set project_id = '0000968bc4854852b87edbac32f4ff94' where reservation_id = 'r-00cw2es8'; Query OK, 1 row affected (0.03 sec) Rows matched: 1 Changed: 1 Warnings: 0 mysql>
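Worth verifying the move before going any further; a quick check reusing the reservation ID updated above:-
mysql> select project_id, reservation_id, display_name from nova.instances where reservation_id = 'r-00cw2es8';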
This works as far as it goes, but it leaves the summary totals in Horizon incorrect. This post explains how to force OpenStack to recalculate the totals by setting certain fields to “-1”. THIS DOES NOT WORK ON HAVANA and it *WILL* BREAK Horizon!! You will be presented with an error screen for every URL.
I know, as I did it myself. (The fix is below; having broken and fixed this does give a better idea of how OpenStack displays resources used.)
Log in to MySQL as the nova user, or as root, in which case you will have to prefix tables with 'nova.'. Use the query above to find the id and name of the project of interest.
mysql> select id, name, enabled from project\G
*************************** 1. row ***************************
     id: bb7d3686c3f74eb28ea15cb659c8019e
   name: Monitoring
enabled: 1
Use this id to select the fields from quota_usages:-
mysql> select id, resource, in_use, updated_at from quota_usages where project_id='bb7d3686c3f74eb28ea15cb659c8019e';
+----+-----------------+--------+---------------------+
| id | resource        | in_use | updated_at          |
+----+-----------------+--------+---------------------+
| 21 | floating_ips    | 3      | 2014-05-17 10:41:04 |
| 23 | instances       | 1      | 2013-12-23 08:20:13 |
| 24 | ram             | 1      | 2013-12-23 08:20:13 |
| 25 | cores           | 1      | 2013-12-23 08:20:13 |
| 26 | fixed_ips       | 12     | 2014-07-23 08:53:22 |
| 27 | instances       | 1      | 2014-04-16 16:11:10 |
| 28 | ram             | 1      | 2014-04-16 16:11:10 |
| 29 | cores           | 1      | 2014-04-16 16:11:10 |
| 78 | instances       | 1      | 2014-03-11 16:44:21 |
| 79 | ram             | 1      | 2014-03-11 16:44:21 |
| 80 | cores           | 1      | 2014-03-11 16:44:21 |
| 89 | security_groups | 3      | 2014-07-24 10:23:23 |
| 91 | instances       | 3      | 2014-07-23 08:53:23 |
| 92 | ram             | 24576  | 2014-07-23 08:53:23 |
| 93 | cores           | 9      | 2014-07-23 08:53:23 |
+----+-----------------+--------+---------------------+
15 rows in set (0.00 sec)

mysql>
Setting in_use='-1' as in the post linked above will destroy all these in_use values, and this is what causes Horizon to fail. To fix the situation they have to be returned to non-zero values; Horizon seems to add them up on the fly to calculate the totals.
As these values are wrong following an instance move from project to project, rather than returning them to the original values I set them to the correct values. These can be found from the project overview page and by counting up the resources used manually.
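The correct totals can also be derived straight from the database; a sketch, assuming the Havana-era nova.instances columns vcpus and memory_mb:-
mysql> select count(*) as instances, sum(vcpus) as cores, sum(memory_mb) as ram from nova.instances where project_id='bb7d3686c3f74eb28ea15cb659c8019e' and deleted=0;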
Eg. for the cores (which are unhelpfully labelled “VCPUs” in the GUI), this is what I did. As history is kept in this table, I set the earlier values to 1 and then adjusted to the correct total in the last row update:-
mysql> update quota_usages set in_use='1' where project_id='bb7d3686c3f74eb28ea15cb659c8019e' and resource='cores' and id='25';
Query OK, 1 row affected (0.04 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> update quota_usages set in_use='1' where project_id='bb7d3686c3f74eb28ea15cb659c8019e' and resource='cores' and id='29';
Query OK, 1 row affected (0.03 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> update quota_usages set in_use='1' where project_id='bb7d3686c3f74eb28ea15cb659c8019e' and resource='cores' and id='80';
Query OK, 1 row affected (0.02 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> update quota_usages set in_use='9' where project_id='bb7d3686c3f74eb28ea15cb659c8019e' and resource='cores' and id='93';
Query OK, 1 row affected (0.04 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql>
Interestingly, in the destination project, where all the values showed as zero (this was an empty project before I manually moved instances to it), creating a new instance of a vm updated the totals to reflect the correct picture; deleting this instance reduced the totals, but they were still correct. It is possible that setting in_use='0' may have allowed Horizon to recalculate rather than have a fatal error.
Editing Security Groups from CLI
If the Horizon GUI errors when adding a new security group to an instance, this may work:-
Set an env variable for the Tenant Name (Project in GUI-speak); this will force nova to operate on the correct Project.
This only shows the Security Groups in Admin, but we need to see the Staging project.
root@ctl-1:~# nova secgroup-list
+----+---------------------+--------------------------------------------------------------------+
| Id | Name                | Description                                                        |
+----+---------------------+--------------------------------------------------------------------+
| 35 | DNS-servers         | Allow services for DNS                                             |
| 47 | Sysmonitoring       | System Monitoring services                                         |
| 1  | default             | default                                                            |
| 14 | logstash_all-in-one | Logstash server configured all-in-one with Redis and Elasticsearch |
+----+---------------------+--------------------------------------------------------------------+
root@ctl-1:~#
Set an env variable and we can work on Staging. This needs to be unset afterwards.
root@ctl-1:~# export OS_TENANT_NAME=Staging
root@ctl-1:~# nova secgroup-list
+----+---------------+-------------------+
| Id | Name          | Description       |
+----+---------------+-------------------+
| 57 | Sysmonitoring | System Monitoring |
| 56 | default       | default           |
+----+---------------+-------------------+
root@ctl-1:~#
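Once finished with the commands below, drop the variable again so later commands operate on the admin project:-
root@ctl-1:~# unset OS_TENANT_NAME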
Show secgroup associated with instance
root@ctl-1:~# nova list-secgroup 0131a281-bf2b-4881-8142-061f095020fb
+----+---------------+----------------------------+
| Id | Name          | Description                |
+----+---------------+----------------------------+
| 57 | Sysmonitoring | System Monitoring          |
| 58 | TLS_Access    | Https web access to server |
| 56 | default       | default                    |
| 44 | Sysmonitoring | System Monitoring services |
| 3  | default       | default                    |
+----+---------------+----------------------------+
root@ctl-1:~#
Remove Security Group from instance
root@ctl-1:~# nova remove-secgroup 3 0131a281-bf2b-4881-8142-061f095020f
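The converse, adding a group to an instance (group name reused from the Staging listing above):-
root@ctl-1:~# nova add-secgroup 0131a281-bf2b-4881-8142-061f095020fb Sysmonitoring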
Remove Security Group from instance the naughty way
This was required to correct a problem when an instance was moved from one project to another: the security groups from the old project moved with the instance but could not be deleted from the GUI or cli.
Log in to the controller and source keystone.
Each association of a rule with an instance in the table nova.security_group_instance_association seems to be unique, so we need to find the mapping of rules to instances. The instance uuid can come from the GUI overview:-
root@ctl-1:~# nova list --all-tenants
(....edited....)
+--------------------------------------+---------------------------+---------+------------+-------------+--------------------------------------+
| ID                                   | Name                      | Status  | Task State | Power State | Networks                             |
+--------------------------------------+---------------------------+---------+------------+-------------+--------------------------------------+
| 0131a281-bf2b-4881-8142-061f095020fb | mon-stag-mapi             | ACTIVE  | None       | Running     | private=10.xx.yy.62                  |
+--------------------------------------+---------------------------+---------+------------+-------------+--------------------------------------+
mysql> select id, security_group_id, instance_uuid from security_group_instance_association where instance_uuid='0131a281-bf2b-4881-8142-061f095020fb';
+-----+-------------------+--------------------------------------+
| id  | security_group_id | instance_uuid                        |
+-----+-------------------+--------------------------------------+
| 301 | 3                 | 0131a281-bf2b-4881-8142-061f095020fb |
| 302 | 44                | 0131a281-bf2b-4881-8142-061f095020fb |
| 388 | 56                | 0131a281-bf2b-4881-8142-061f095020fb |
| 389 | 57                | 0131a281-bf2b-4881-8142-061f095020fb |
| 475 | 58                | 0131a281-bf2b-4881-8142-061f095020fb |
+-----+-------------------+--------------------------------------+
5 rows in set (0.00 sec)
These rules can be decoded to reveal their names in a useful way; the project_id column shows two rules left over from the previous project, and these are the rules to delete. Just to be sure, you can delete the security groups applied to the instance in the GUI, so that the only rules showing in the database are the ones to get rid of. The required rules can then be added back in the GUI.
In this example, rules with ids 3 and 44 need to go.
mysql> select id, name, description, user_id, project_id from security_groups;
(....edited....)
+----+----------------------------+--------------------------------------------------------------------+----------------------------------+----------------------------------+
| id | name                       | description                                                        | user_id                          | project_id                       |
+----+----------------------------+--------------------------------------------------------------------+----------------------------------+----------------------------------+
| 3  | default                    | default                                                            | cefbc8a807414b0fbd80a92e446fc04c | bb7d3686c3f74eb28ea15cb659c8019e |
| 44 | Sysmonitoring              | System Monitoring services                                         | 4de8ce3bd877407783f4c903bb9d1530 | bb7d3686c3f74eb28ea15cb659c8019e |
| 56 | default                    | default                                                            | 4de8ce3bd877407783f4c903bb9d1530 | 0000968bc4854852b87edbac32f4ff94 |
| 57 | Sysmonitoring              | System Monitoring                                                  | 4de8ce3bd877407783f4c903bb9d1530 | 0000968bc4854852b87edbac32f4ff94 |
| 58 | TLS_Access                 | Https web access to server                                         | 4de8ce3bd877407783f4c903bb9d1530 | 0000968bc4854852b87edbac32f4ff94 |
+----+----------------------------+--------------------------------------------------------------------+----------------------------------+----------------------------------+

mysql> delete from security_group_instance_association where id='302';
Query OK, 1 row affected (0.04 sec)
mysql> delete from security_group_instance_association where id='301';
Query OK, 1 row affected (0.02 sec)
mysql> select id, security_group_id, instance_uuid from security_group_instance_association where instance_uuid='0131a281-bf2b-4881-8142-061f095020fb';
+-----+-------------------+--------------------------------------+
| id  | security_group_id | instance_uuid                        |
+-----+-------------------+--------------------------------------+
| 388 | 56                | 0131a281-bf2b-4881-8142-061f095020fb |
| 389 | 57                | 0131a281-bf2b-4881-8142-061f095020fb |
| 475 | 58                | 0131a281-bf2b-4881-8142-061f095020fb |
+-----+-------------------+--------------------------------------+
4 rows in set (0.00 sec)

mysql>
Delete duplicate hypervisors from OpenStack
http://thornelabs.net/2014/08/03/delete-duplicate-openstack-hypervisors-and-services.html
Show running services in order to identify hypervisors to remove:-
root@ctl-1:~# nova service-list +------------------+------------------------------+----------+----------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+------------------------------+----------+----------+-------+----------------------------+-----------------+ | nova-cert | ctl-1.int.company.com | internal | enabled | up | 2014-12-17T15:29:53.000000 | None | | nova-consoleauth | ctl-1.int.company.com | internal | enabled | up | 2014-12-17T15:29:51.000000 | None | | nova-scheduler | ctl-1.int.company.com | internal | enabled | up | 2014-12-17T15:29:45.000000 | None | | nova-conductor | ctl-1.int.company.com | internal | enabled | up | 2014-12-17T15:29:48.000000 | None | | nova-network | ncom-1.int.company.com | internal | enabled | up | 2014-12-17T15:29:49.000000 | None | | nova-compute | ncom-1.int.company.com | nova | enabled | up | 2014-12-17T15:29:51.000000 | None | | nova-compute | ncom-2.int.company.com | nova | enabled | up | 2014-12-17T15:29:51.000000 | None | | nova-network | ncom-2.int.company.com | internal | enabled | up | 2014-12-17T15:29:46.000000 | None | | nova-compute | ncom-4.int.company.com | nova | enabled | up | 2014-12-17T15:29:49.000000 | None | | nova-network | ncom-4.int.company.com | internal | enabled | up | 2014-12-17T15:29:44.000000 | None | | nova-compute | ncom-5.int.company.com | nova | enabled | up | 2014-12-17T15:29:45.000000 | None | | nova-network | ncom-5.int.company.com | internal | enabled | up | 2014-12-17T15:29:50.000000 | None | | nova-compute | ncom-6.int.company.com | nova | enabled | up | 2014-12-17T15:29:54.000000 | None | | nova-network | ncom-6.int.company.com | internal | enabled | up | 2014-12-17T15:29:46.000000 | None | | nova-compute | ncom-7.int.company.com | nova | enabled | down | 2014-01-29T09:23:27.000000 | None | | nova-network | ncom-7.int.company.com | internal | enabled | down | 2014-01-29T09:23:32.000000 | None | | nova-compute | ncom-8.int.company.com | nova | enabled | down | 2014-01-29T09:23:55.000000 | None | | nova-network | ncom-8.int.company.com | internal | enabled | down | 2014-01-29T09:23:59.000000 | None | | nova-network | ncom-3.int.company.com | internal | enabled | up | 2014-12-17T15:29:45.000000 | None | | nova-compute | ncom-3.int.company.com | nova | enabled | up | 2014-12-17T15:29:46.000000 | None | | nova-conductor | ctl-1 | internal | enabled | down | 2014-03-04T16:43:42.000000 | None | | nova-scheduler | ctl-1 | internal | enabled | down | 2014-03-04T16:43:41.000000 | None | | nova-cert | ctl-1 | internal | enabled | down | 2014-03-04T16:43:46.000000 | None | | nova-consoleauth | ctl-1 | internal | enabled | down | 2014-03-04T16:43:41.000000 | None | | nova-compute | ncom-7 | nova | disabled | down | 2014-02-04T15:46:20.000000 | None | | nova-compute | ncom-8 | nova | disabled | down | 2014-02-04T15:46:26.000000 | None | | nova-network | ncom-7 | internal | disabled | down | 2014-02-04T15:46:33.000000 | None | | nova-network | ncom-8 | internal | disabled | down | 2014-02-04T15:46:40.000000 | None | | nova-compute | ncom-9.int.company.com | nova | enabled | up | 2014-12-17T15:29:47.000000 | None | | nova-network | ncom-9.int.company.com | internal | enabled | up | 2014-12-17T15:29:47.000000 | None | | nova-compute | ncom-10.int.company.com | nova | enabled | up | 2014-12-17T15:29:48.000000 | None | | nova-network | ncom-10.int.company.com | internal | enabled | up | 2014-12-17T15:29:50.000000 | None | | 
nova-network | ncom-11 | internal | enabled | down | 2014-03-05T12:34:51.000000 | None | | nova-compute | ncom-11 | nova | enabled | down | 2014-03-05T09:52:35.000000 | None | | nova-compute | ncom-12 | nova | enabled | down | 2014-03-05T09:54:25.000000 | None | | nova-network | ncom-12 | internal | enabled | down | 2014-03-05T12:40:36.000000 | None | | nova-compute | ncom-11.int.company.com | nova | enabled | up | 2014-12-17T15:29:49.000000 | None | | nova-network | ncom-11.int.company.com | internal | enabled | up | 2014-12-17T15:29:46.000000 | None | | nova-compute | ncom-12.int.company.com | nova | enabled | down | 2014-12-17T15:04:09.000000 | None | | nova-network | ncom-12.int.company.com | internal | enabled | down | 2014-12-17T15:04:01.000000 | None | +------------------+------------------------------+----------+----------+-------+----------------------------+-----------------+
Start the removal from mysql:-
root@ctl-1:~# mysql -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 609888 Server version: 5.5.34-0ubuntu0.12.04.1 (Ubuntu) Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> use nova; Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Database changed mysql> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes; +----+---------------------+---------------------+------------------------------+ | id | created_at | updated_at | hypervisor_hostname | +----+---------------------+---------------------+------------------------------+ | 1 | 2013-11-13 16:20:11 | 2014-12-17 15:31:17 | ncom-1.int.company.com | | 2 | 2013-11-22 12:46:14 | 2014-12-17 15:30:50 | ncom-2.int.company.com | | 3 | 2013-12-04 15:12:39 | 2014-12-17 15:30:34 | ncom-4.int.company.com | | 4 | 2013-12-04 15:59:54 | 2014-12-17 15:31:23 | ncom-5.int.company.com | | 5 | 2013-12-04 16:37:46 | 2014-12-17 15:31:08 | ncom-6.int.company.com | | 6 | 2013-12-04 16:54:45 | 2014-01-29 09:22:52 | ncom-7.int.company.com | | 7 | 2013-12-04 17:21:58 | 2014-01-29 09:23:58 | ncom-8.int.company.com | | 8 | 2013-12-05 14:56:10 | 2014-12-17 15:31:09 | ncom-3.int.company.com | | 9 | 2014-02-03 15:33:45 | 2014-02-03 16:25:13 | ncom-7.int.company.com | | 10 | 2014-02-03 15:33:50 | 2014-02-03 16:18:01 | ncom-8.int.company.com | | 11 | 2014-02-04 11:50:21 | 2014-12-17 15:31:09 | ncom-9.int.company.com | | 12 | 2014-02-04 12:00:42 | 2014-12-17 15:31:02 | ncom-10.int.company.com | | 13 | 2014-02-24 12:00:33 | 2014-03-05 09:52:41 | ncom-11.int.company.com | | 14 | 2014-02-24 12:28:56 | 2014-03-05 09:53:58 | ncom-12.int.company.com | | 15 | 2014-03-05 12:37:24 | 2014-12-17 15:31:25 | ncom-11.int.company.com | | 16 | 2014-03-05 12:43:06 | 2014-12-17 15:03:40 | ncom-12.int.company.com | +----+---------------------+---------------------+------------------------------+ 16 rows in set (0.00 sec) mysql> DELETE FROM compute_node_stats WHERE compute_node_id='6'; Query OK, 174 rows affected (0.06 sec) mysql> mysql> DELETE FROM compute_nodes WHERE hypervisor_hostname='ncom-7.int.company.com'; Query OK, 2 rows affected (0.03 sec) mysql> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes; +----+---------------------+---------------------+------------------------------+ | id | created_at | updated_at | hypervisor_hostname | +----+---------------------+---------------------+------------------------------+ | 1 | 2013-11-13 16:20:11 | 2014-12-17 15:35:16 | ncom-1.int.company.com | | 2 | 2013-11-22 12:46:14 | 2014-12-17 15:34:50 | ncom-2.int.company.com | | 3 | 2013-12-04 15:12:39 | 2014-12-17 15:34:39 | ncom-4.int.company.com | | 4 | 2013-12-04 15:59:54 | 2014-12-17 15:34:22 | ncom-5.int.company.com | | 5 | 2013-12-04 16:37:46 | 2014-12-17 15:35:08 | ncom-6.int.company.com | | 7 | 2013-12-04 17:21:58 | 2014-01-29 09:23:58 | ncom-8.int.company.com | | 8 | 2013-12-05 14:56:10 | 2014-12-17 15:35:10 | ncom-3.int.company.com | | 10 | 2014-02-03 15:33:50 | 2014-02-03 16:18:01 | ncom-8.int.company.com | | 11 | 2014-02-04 11:50:21 | 2014-12-17 15:35:00 | ncom-9.int.company.com | | 12 | 2014-02-04 12:00:42 | 2014-12-17 15:35:02 | ncom-10.int.company.com | | 
13 | 2014-02-24 12:00:33 | 2014-03-05 09:52:41 | ncom-11.int.company.com | | 14 | 2014-02-24 12:28:56 | 2014-03-05 09:53:58 | ncom-12.int.company.com | | 15 | 2014-03-05 12:37:24 | 2014-12-17 15:34:53 | ncom-11.int.company.com | | 16 | 2014-03-05 12:43:06 | 2014-12-17 15:03:40 | ncom-12.int.company.com | +----+---------------------+---------------------+------------------------------+ 14 rows in set (0.00 sec) mysql> DELETE FROM compute_node_stats WHERE compute_node_id='7'; Query OK, 44 rows affected (0.02 sec) mysql> DELETE FROM compute_nodes WHERE hypervisor_hostname='ncom-8.int.company.com'; Query OK, 2 rows affected (0.03 sec) mysql> DELETE FROM services WHERE host='ncom-8.int.company.com'; Query OK, 2 rows affected (0.03 sec)
Resize Instances OpenStack to new flavor
(flavour - added to allow search to find both flavor and the correctly spelled flavour).
Prerequisites:
1. The shell for the “nova” user needs to be changed from “/bin/false” to “/bin/bash”.
2. Passwordless ssh authentication between compute nodes needs to be set up as the “nova” user for resizing to work.
3. The same key pair (private + public) should be used for authentication between all compute nodes, considering future scaling requirements; a sketch of this setup follows the list.
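A sketch of that prerequisite setup, run on each compute node (hostnames are illustrative; generate the key pair once and copy the same ~/.ssh/id_rsa* to every node):-
root@ncom-11:~# usermod -s /bin/bash nova
root@ncom-11:~# su - nova
nova@ncom-11:~$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
nova@ncom-11:~$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
nova@ncom-11:~$ ssh ncom-12 hostname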
Test Process:
a) Create an instance on node 12 (current size m1.small (2)):
root@ctl-1:~/.ssh# nova show e014a0bf-300e-4aad-8adf-90765403871c +--------------------------------------+----------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------+ | status | ACTIVE | | updated | 2015-01-19T09:17:42Z | | OS-EXT-STS:task_state | None | | OS-EXT-SRV-ATTR:host | ncom-12 | | key_name | rservers | | image | ubuntu-12.04 (d829a34c-4016-4535-9d03-192ad3a6dca6) | | private network | 10.xx.yy.38 | | hostId | 9eb23996d5f8cb7ead926d3549aab29b82c82051cddaaa446879c82d | | OS-EXT-STS:vm_state | active | | OS-EXT-SRV-ATTR:instance_name | instance-00000153 | | OS-SRV-USG:launched_at | 2015-01-19T09:17:57.000000 | | OS-EXT-SRV-ATTR:hypervisor_hostname | ncom-12.int.company.com | | flavor | m1.small (2) | | id | e014a0bf-300e-4aad-8adf-90765403871c | | security_groups | [{u'name': u'default'}] | | OS-SRV-USG:terminated_at | None | | user_id | 4de8ce3bd877407783f4c903bb9d1530 | | name | test_resize | | created | 2015-01-19T09:17:08Z | | tenant_id | 4c7db08bd3644fb58fca3d52501c75c6 | | OS-DCF:diskConfig | MANUAL | | metadata | {} | | os-extended-volumes:volumes_attached | [] | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 1 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | +--------------------------------------+----------------------------------------------------------+
b) List existing flavors:
root@ctl-1:~/.ssh# nova flavor-list +--------------------------------------+------------------+-----------+------+-----------+------+-------+-------------+-----------+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +--------------------------------------+------------------+-----------+------+-----------+------+-------+-------------+-----------+ | 1 | m1.tiny | 512 | 10 | 0 | | 1 | 1.0 | True | | 10 | m2.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | | 6 | m2.tiny | 512 | 10 | 0 | | 1 | 1.0 | True | | 64fe1e0b-dabd-40c1-aeef-e2a87d1ae934 | m1.small-bigdisk | 2048 | 100 | 0 | | 1 | 1.0 | False | | 7 | m2.small | 2048 | 20 | 0 | | 1 | 1.0 | True | | 8 | m2.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | | 9 | m2.large | 8192 | 80 | 0 | | 4 | 1.0 | True | +--------------------------------------+------------------+-----------+------+-----------+------+-------+-------------+-----------+
c) Resize command to change the instance from m1.small (2) to m1.medium (3):
root@ctl-1:~/.ssh# nova resize test_resize 3 --poll
Instance resizing... 100% complete
Finished
d) The status of the instance on completion should be “VERIFY_RESIZE”, as at this point we still have the option to revert the resize process in case of any error:
root@ctl-1:~/.ssh# nova show e014a0bf-300e-4aad-8adf-90765403871c +--------------------------------------+----------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------+ | status | VERIFY_RESIZE | | updated | 2015-01-19T09:23:36Z | | OS-EXT-STS:task_state | None | | OS-EXT-SRV-ATTR:host | ncom-11 | | key_name | rservers | | image | ubuntu-12.04 (d829a34c-4016-4535-9d03-192ad3a6dca6) | | private network | 10.xx.yy.38 | | hostId | 46e3c29742f5ce7a600e7a58128ddcd40ef2102f16b3870be6ef18ab | | OS-EXT-STS:vm_state | resized | | OS-EXT-SRV-ATTR:instance_name | instance-00000153 | | OS-SRV-USG:launched_at | 2015-01-19T09:23:36.000000 | | OS-EXT-SRV-ATTR:hypervisor_hostname | ncom-11.int.company.com | | flavor | m1.medium (3) | | id | e014a0bf-300e-4aad-8adf-90765403871c | | security_groups | [{u'name': u'default'}] | | OS-SRV-USG:terminated_at | None | | user_id | 4de8ce3bd877407783f4c903bb9d1530 | | name | test_resize | | created | 2015-01-19T09:17:08Z | | tenant_id | 4c7db08bd3644fb58fca3d52501c75c6 | | OS-DCF:diskConfig | MANUAL | | metadata | {} | | os-extended-volumes:volumes_attached | [] | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 1 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | +--------------------------------------+----------------------------------------------------------+
e) If you need to revert the resize at this point, use the command below:
root@ctl-1:~/.ssh# nova resize-revert e014a0bf-300e-4aad-8adf-90765403871c
f) Confirm the resize and the instance state will return to “ACTIVE”:
root@ctl-1:~/.ssh# nova resize-confirm e014a0bf-300e-4aad-8adf-90765403871c root@ctl-1:~/.ssh# nova show e014a0bf-300e-4aad-8adf-90765403871c +--------------------------------------+----------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------+ | status | ACTIVE | | updated | 2015-01-19T09:24:48Z | | OS-EXT-STS:task_state | None | | OS-EXT-SRV-ATTR:host | ncom-11 | | key_name | rservers | | image | ubuntu-12.04 (d829a34c-4016-4535-9d03-192ad3a6dca6) | | private network | 10.xx.yy.38 | | hostId | 46e3c29742f5ce7a600e7a58128ddcd40ef2102f16b3870be6ef18ab | | OS-EXT-STS:vm_state | active | | OS-EXT-SRV-ATTR:instance_name | instance-00000153 | | OS-SRV-USG:launched_at | 2015-01-19T09:23:36.000000 | | OS-EXT-SRV-ATTR:hypervisor_hostname | ncom-11.int.company.com | | flavor | m1.medium (3) | | id | e014a0bf-300e-4aad-8adf-90765403871c | | security_groups | [{u'name': u'default'}] | | OS-SRV-USG:terminated_at | None | | user_id | 4de8ce3bd877407783f4c903bb9d1530 | | name | test_resize | | created | 2015-01-19T09:17:08Z | | tenant_id | 4c7db08bd3644fb58fca3d52501c75c6 | | OS-DCF:diskConfig | MANUAL | | metadata | {} | | os-extended-volumes:volumes_attached | [] | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 1 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | +--------------------------------------+----------------------------------------------------------+
Update DNS settings on Clients
This process is controlled by dnsmasq running on the individual hypervisors. The config is in /etc/nova/dnsmasq.conf:-
dhcp-option=6,10.xx.yy.24,10.10.110.10,10.10.110.11 all-servers
The hypervisor needs to have its dnsmasq processes killed and then nova-network restarted; see the sketch below.
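Something like this on each affected hypervisor (a sketch; nova-network respawns dnsmasq with the new options):-
root@ncom-1:~# pkill dnsmasq
root@ncom-1:~# service nova-network restart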
Troubleshooting
1. Horizon login stalls. Usually caused by apache needing a restart; cause presently unknown. service apache2 restart solves the problem.
2. ERROR nova.virt.libvirt.driver [-] Getting disk size of instance-00000171: [Errno 2] No such file or directory: '/var/lib/nova/instances/7576097d-a8e5-47c3-8d97-df5ea18f9222/disk' message on a compute node. This is left behind after an instance is resized to a different flavor on a different compute node.
2015-06-12 16:16:59.892 16118 ERROR nova.virt.libvirt.driver [-] Getting disk size of instance-00000171: [Errno 2] No such file or directory: '/var/lib/nova/instances/7576097d-a8e5-47c3-8d97-df5ea18f9222/disk'
2015-06-12 16:17:00.567 16118 WARNING nova.compute.resource_tracker [-] [instance: 7576097d-a8e5-47c3-8d97-df5ea18f9222] Instance not resizing, skipping migration.
See https://bugs.launchpad.net/nova/+bug/1178316
root@ncom-8:/var/lib/nova/instances# virsh list --all
 Id Name                 State
----------------------------------------------------
 7  instance-00000175    running
....edited....
 42 instance-00000077    running
 -  instance-0000016f    shut off
 -  instance-00000171    shut off
root@ncom-8:/var/lib/nova/instances# virsh undefine instance-00000171
Domain instance-00000171 has been undefined
root@ncom-8:/var/lib/nova/instances#
Single User Mode (OpenStack Instances)
Changes needed in /etc/default/grub:
a) Uncomment the line below and set the timeout to 5 seconds:
GRUB_HIDDEN_TIMEOUT=5
b) Uncomment the line below and set it to false:
GRUB_HIDDEN_TIMEOUT_QUIET=false
c) Edit the line to read:
GRUB_CMDLINE_LINUX_DEFAULT="console=tty"
d) Set the grub timeout to zero, as the hidden timeout is already set per step a):
GRUB_TIMEOUT=0
e) Update grub
root@test-build:~# update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.2.0-56-virtual
Found initrd image: /boot/initrd.img-3.2.0-56-virtual
Found memtest86+ image: /boot/memtest86+.bin
done
Process: Booting in single user mode with root password set
1. Assuming you are booting under GRUB2, boot your Linux box and hold Shift while booting. This should bring up the GRUB boot menu. The GRUB boot menu passes by quickly; this can be tricky under a virtual machine scenario (VirtualBox, Xen, KVM, etc.), so you might have to reboot a few times before you catch it.
2. Select a boot image from the menu then press 'e' to edit.
3. Select the line containing the kernel entry and edit it as below:
linux /boot/vmlinuz-3.2.0-56-virtual root=UUID=4d5d3260-9dca-43ba-ac6b-c2957cbbd73c ro recovery nomodeset
PS: You can boot into recovery mode directly from the grub menu (so steps 2 & 3 can be skipped and you can select the recovery mode option from the grub menu itself).
4. Press 'F10' to boot with these new settings.
5. If the OS appears to boot normally, but you see a message that says, Give root password for maintenance (or type Control-D to continue):
At this stage you can provide the root password and continue
Process: Booting in single user mode with no root password set (only key-based logins)
This method will get you past the “Give root password for maintenance” message. The environment will be much more primitive, but it should be enough for you to issue a 'passwd' command to change the password for root.
If you want to do more than that then you may have to mount filesystems and manually start the network.
a) Reboot your machine; press the 'Shift' key while booting to get to the GRUB menu.
b) Select your image; press 'e' to edit; select the kernel line.
c) Edit the line as below
linux /boot/vmlinuz-3.2.0-56-virtual root=UUID=4d5d3260-9dca-43ba-ac6b-c2957cbbd73c console=tty rw init=/bin/bash
d) Press 'F10' to boot with these new settings.
e) When you get to the shell, try editing /etc/passwd and /etc/shadow. Usually I just blank out the password field for the root user then reboot. This may not work if PAM was set up to disallow root login; in that case you may need to boot back into single user mode and then update PAM to allow root login, or allow root login without a password. Alternatively, the passwd command may be available, so you can just run this to actually set a real password.
f) The reboot / shutdown commands may not work directly from the shell, so you can use the command below to reboot:
'exec /sbin/init 6'
Alternatively, you can click “Send Ctrl+Alt+Delete” from the console to reboot.