OpenStack Kilo Installation: Block Storage Service

The OpenStack Block Storage service (cinder) adds persistent storage to virtual machines. Block Storage provides an infrastructure for managing volumes and interacts with OpenStack Compute to provide volumes for instances. The service also enables management of volume snapshots and volume types.

The Block Storage service typically consists of the following components:

  • cinder-api
    Accepts API requests and routes them to cinder-volume for action.
  • cinder-volume
    Interacts directly with the Block Storage service and with processes such as cinder-scheduler, communicating with them through the message queue. The cinder-volume service responds to read and write requests sent to the Block Storage service to maintain state, and it can interact with a variety of storage providers through a driver architecture.
  • cinder-scheduler daemon
    Selects the optimal storage node on which to create a volume; its counterpart in Compute is nova-scheduler.
  • Message queue
    Routes information between the Block Storage processes.

Reference: Chapter 8. Add the Block Storage service

Install and configure controller node

Create the cinder database:

#
mysql -u root -p

CREATE DATABASE cinder;

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';


exit
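As a quick sanity check (not part of the upstream guide), the new account can be exercised directly; this assumes the placeholder password CINDER_DBPASS from the grants above:

```shell
# Log in as the cinder DB user and confirm the cinder database is visible.
mysql -u cinder -pCINDER_DBPASS -e 'SHOW DATABASES;' | grep '^cinder$'
```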

Source the admin credentials to gain access to admin-only CLI commands:

$
source admin-openrc.sh

Create the cinder user:

#
openstack user create cinder --password cinder

Add the admin role to the cinder user:

#
openstack role add --project service --user cinder admin

Create the cinder service entities:

#
openstack service create --name cinder \
--description "OpenStack Block Storage" volume
#
openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2

Note
The Block Storage service requires both the volume and volumev2 services. However, both services use the same API endpoint that references the Block Storage version 2 API.

Create the Block Storage service API endpoints:

#
openstack endpoint create \
--publicurl http://controller:8776/v2/%\(tenant_id\)s \
--internalurl http://controller:8776/v2/%\(tenant_id\)s \
--adminurl http://controller:8776/v2/%\(tenant_id\)s \
--region RegionOne \
volume

#
openstack endpoint create \
--publicurl http://controller:8776/v2/%\(tenant_id\)s \
--internalurl http://controller:8776/v2/%\(tenant_id\)s \
--adminurl http://controller:8776/v2/%\(tenant_id\)s \
--region RegionOne \
volumev2
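As a quick check (a sanity step, not part of the original guide), both service entities should now appear in the catalog; grep narrows the output to the cinder entries:

```shell
# Both cinder and cinderv2 should be listed in the service catalog.
openstack service list | grep -E 'cinder(v2)?'
```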

Install the packages:

#
yum install -y openstack-cinder python-cinderclient python-oslo-db

Copy the /usr/share/cinder/cinder-dist.conf file to /etc/cinder/cinder.conf:

#
cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf
chown cinder:cinder /etc/cinder/cinder.conf

Edit the /etc/cinder/cinder.conf file:

#
cp /etc/cinder/cinder.conf /etc/cinder/cinder.confbak
#
echo "[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
auth_strategy = keystone
my_ip = 192.168.200.220
verbose = True
rpc_backend = rabbit
log_dir = /var/log/cinder
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
lock_path = /var/lock/cinder">/etc/cinder/cinder.conf
#
su -s /bin/sh -c "cinder-manage db sync" cinder
/usr/lib/python2.7/site-packages/cinder/openstack/common/service.py:38: DeprecationWarning: The oslo namespace package is deprecated. Please use oslo_config instead.
from oslo.config import cfg
No handlers could be found for logger "oslo_config.cfg"

Cause:
The log configuration option in the file is wrong: the deprecated logdir key is used instead of log_dir.

Fix:
[DEFAULT]
log_dir = /var/log/cinder

sed -i 's/\(log\)\(dir\)/\1_\2/g' /etc/cinder/cinder.conf
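The rename can be tried on a scratch file first; this sketch shows the same substitution turning a deprecated logdir key into log_dir:

```shell
# Demonstrate the logdir -> log_dir rename on a throwaway copy.
tmp=$(mktemp)
printf '[DEFAULT]\nlogdir = /var/log/cinder\n' > "$tmp"
sed -i 's/\(log\)\(dir\)/\1_\2/g' "$tmp"
grep '^log_dir' "$tmp"   # prints: log_dir = /var/log/cinder
rm -f "$tmp"
```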

Populate the Block Storage database:

#
su -s /bin/sh -c "cinder-manage db sync" cinder

Start the Block Storage services and configure them to start when the system boots:

#
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
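Once started, cinder-api listens on its default port 8776; a quick check, assuming ss from iproute2 is available on the controller:

```shell
# The API service should be accepting connections on TCP 8776.
ss -tnl | grep ':8776'
```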

Install and configure a storage node

This section describes how to install and configure a storage node for the Block Storage service. For simplicity, this configuration references one storage node with an empty local block storage device, /dev/sdb, that contains a suitable partition table with one partition, /dev/sdb1, occupying the entire device. The service provisions logical volumes on this device using LVM and provides them to instances via iSCSI transport. You can follow these instructions with minor modifications to scale your environment with additional storage nodes.

You must configure the storage node before you install and configure the volume service on it. Similar to the controller node, the storage node contains one network interface on the management network. The storage node also needs an empty block storage device of suitable size for your environment.

Configure the management interface:

IP address: 10.0.0.41
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
Set the hostname of the node to block1.

On every node, add the new host to the hosts file:

#
echo "#block1
10.0.0.41 block1">> /etc/hosts
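To confirm the entry took effect (a sanity check, not part of the guide), resolve and ping the new name from any node:

```shell
# The name should resolve via /etc/hosts and the host should answer.
getent hosts block1
ping -c 2 block1
```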

If you intend to use non-raw image types such as QCOW2 and VMDK, install the QEMU support package:

#
yum install -y qemu

Install the LVM packages:

#
yum install -y lvm2

Start the LVM metadata service and configure it to start when the system boots:

#
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

Create the LVM physical volume /dev/sdb1:

#
pvcreate /dev/sdb1

Create the LVM volume group cinder-volumes:

#
vgcreate cinder-volumes /dev/sdb1

The Block Storage service creates logical volumes in this volume group.
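A quick way to confirm both steps succeeded (again a sanity check; device names assume the /dev/sdb1 partition above):

```shell
# pvs and vgs should show the new physical volume and the cinder-volumes VG.
pvs /dev/sdb1
vgs cinder-volumes
```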

Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If tenants use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them, which can cause a variety of problems with both the underlying operating system and tenant volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group.

Edit the /etc/lvm/lvm.conf file and complete the following actions:

In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:

devices {
...
filter = [ "a/sdb/", "r/.*/"]
}

Each item in the filter array begins with a for accept or r for reject, and includes a regular expression for the device name. You can test filters with the vgs -vvvv command.

[Warning] Warning
If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "a/sdb/", "r/.*/"]

Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "r/.*/"]

Install the packages:

#
yum install -y openstack-cinder targetcli python-oslo-db python-oslo-log MySQL-python

Edit the /etc/cinder/cinder.conf file:

#
echo "[DEFAULT]
verbose = True
my_ip = 192.168.200.217
enabled_backends = lvm
glance_host = controller
rpc_backend = rabbit
auth_strategy = keystone
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lock/cinder" >/etc/cinder/cinder.conf

Start the Block Storage volume service and its dependencies, and configure them to start when the system boots:

#
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
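If the volume service came up cleanly, its log should be free of errors; a hedged check (the log path follows the log_dir set earlier; adjust if yours differs):

```shell
# Both units should report active, and no ERROR lines should appear
# in the volume log after a clean start.
systemctl is-active openstack-cinder-volume.service target.service
! grep ERROR /var/log/cinder/volume.log
```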

Verify operation

In each client environment script, configure the Block Storage client to use API version 2.0:

$
echo "export OS_VOLUME_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh

Source the admin credentials to gain access to admin-only CLI commands:

$
source admin-openrc.sh

List the service components to verify the successful launch of each process:

$
cinder service-list

Source the demo tenant credentials to perform the following steps as a non-admin tenant:

$
source demo-openrc.sh

Create a 1 GB volume:

$
cinder create --name demo-volume1 1

Verify creation and availability of the volume:

$
cinder list
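The next step in the upstream guide is to attach the volume to an instance; a sketch assuming a running instance named demo-instance1 (the instance name is an assumption, substitute your own). The volume ID is pulled out of the cinder list table with awk:

```shell
# Extract the ID of demo-volume1 from the cinder list table
# (awk field 2, since the table columns are pipe-separated) and attach it.
VOLUME_ID=$(cinder list | awk '/demo-volume1/ {print $2}')
nova volume-attach demo-instance1 "$VOLUME_ID"
```

Afterwards, `cinder list` should show the volume status as in-use.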