OpenStack Kilo installation: the Compute service

I have been tinkering with OpenStack this week; below are my notes on the process.

Reference: Chapter 5. Add the Compute service (OpenStack Kilo Installation Guide)

The nova service is installed on both the controller node and the compute node.

Install and configure controller node

Configure the database:

#
mysql -u root -p

MariaDB [(none)]>

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
exit
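
To confirm that the grants work, try connecting as the new nova user (NOVA_DBPASS is the placeholder password used above; substitute your real one):

#
mysql -u nova -pNOVA_DBPASS -e "SHOW DATABASES;"   # should list the nova database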

Source the admin-openrc.sh script to gain access to admin-only CLI commands:

#
source admin-openrc.sh
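
For reference, admin-openrc.sh was created in the Identity service chapter; a typical version looks roughly like this (ADMIN_PASS stands in for your actual admin password):

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IMAGE_API_VERSION=2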

Create the nova user:

#
openstack user create nova --password nova

Add the admin role to the nova user:

#
openstack role add --project service --user nova admin
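
To double-check that the user and role exist, list them:

#
openstack user list
openstack role list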

Create the nova service entity:

#
openstack service create --name nova \
--description "OpenStack Compute" compute

Create the Compute service API endpoint:

#
openstack endpoint create \
--publicurl http://controller:8774/v2/%\(tenant_id\)s \
--internalurl http://controller:8774/v2/%\(tenant_id\)s \
--adminurl http://controller:8774/v2/%\(tenant_id\)s \
--region RegionOne \
compute
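
A quick check that the service entity and endpoint were registered:

#
openstack service list
openstack endpoint list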

Install the packages:

#
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
python-novaclient

Edit the /etc/nova/nova.conf file (the commands below back up the original and then overwrite it):

#
cp /etc/nova/nova.conf /etc/nova/nova.confbak
echo "[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
verbose = True
[database]
connection = mysql://nova:NOVA_DBPASS@controller/nova
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
[glance]
host = controller
[oslo_concurrency]
lock_path = /var/lib/nova/tmp" >/etc/nova/nova.conf
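
Writing the file with echo discards everything in the packaged nova.conf (mostly commented-out defaults, so this is usually harmless). If you would rather edit values in place, the openstack-utils package provides openstack-config; a sketch of the same settings:

#
yum install -y openstack-utils
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone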

Populate the Compute database:

#
su -s /bin/sh -c "nova-manage db sync" nova
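
To verify that the sync created the tables (NOVA_DBPASS as above):

#
mysql -u nova -pNOVA_DBPASS nova -e "SHOW TABLES;" | head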

Start the Compute services and configure them to start when the system boots:

#
systemctl enable openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
#
systemctl start openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
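
Before moving on, confirm that the services came up cleanly; failures usually show up under /var/log/nova/ as well:

#
systemctl status openstack-nova-api.service openstack-nova-scheduler.service
grep -i error /var/log/nova/nova-api.log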

The following ports need to be opened on the controller node; the --permanent flag makes the rules persist across reloads. Without them, openstack-nova-compute.service on the compute node fails to start.

#
firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --permanent --add-port=8774/tcp
firewall-cmd --reload
firewall-cmd --list-all
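
To confirm that something is actually listening on those ports on the controller:

#
ss -tnlp | grep -E '5672|8774'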

Install and configure compute node

This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or VMs. For simplicity, this configuration uses the QEMU hypervisor with the KVM extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.

Note:
This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section using the same networking service as your existing environment. For either networking service, follow the NTP configuration and OpenStack packages instructions. For OpenStack Networking (neutron), also follow the OpenStack Networking compute node instructions. For legacy networking (nova-network), also follow the legacy networking compute node instructions. Each additional compute node requires unique IP addresses.

Install the packages:

#
yum install -y openstack-nova-compute sysfsutils

Edit the /etc/nova/nova.conf file (back up the original, then overwrite it):

#
cp /etc/nova/nova.conf /etc/nova/nova.confbak
echo "[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
vnc_enabled = True
my_ip = 10.0.0.31
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://controller:6080/vnc_auto.html
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
[glance]
host = controller
[oslo_concurrency]
lock_path = /var/lib/nova/tmp" >/etc/nova/nova.conf
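
For additional compute nodes, only my_ip and vncserver_proxyclient_address differ. A sketch, assuming the second node's management IP is 10.0.0.32:

#
sed -i 's/10.0.0.31/10.0.0.32/g' /etc/nova/nova.conf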

Determine whether your compute node supports hardware acceleration for virtual machines:

#
egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value of one or greater, your compute node supports hardware acceleration which typically requires no additional configuration.

If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

Edit the [libvirt] section of the /etc/nova/nova.conf file (the file written above has no [libvirt] section, so the command appends one):

#
echo "[libvirt]
virt_type = qemu" >>/etc/nova/nova.conf
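
On nodes that do support hardware acceleration, skip this override and libvirt will use KVM automatically. A quick sanity check for whether the kvm kernel modules are loaded:

#
lsmod | grep kvm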

Start the Compute service, including its dependencies, and configure them to start automatically when the system boots:

#
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
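
Check that nova-compute started and registered with the controller; if it did not, the compute log is the first place to look:

#
systemctl status libvirtd.service openstack-nova-compute.service
tail /var/log/nova/nova-compute.log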

Verify operation

Verify operation of the Compute service.

Note:
Perform these commands on the controller node.

Source the admin credentials to gain access to admin-only CLI commands:

#
source admin-openrc.sh

List service components to verify successful launch and registration of each process:

#
nova service-list

List API endpoints in the Identity service to verify connectivity with the Identity service:

#
nova endpoints

List images in the Image service catalog to verify connectivity with the Image service:

#
nova image-list
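
As an extra check beyond the guide, confirm that the compute node registered as a hypervisor:

#
nova hypervisor-list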