kolla-ansible merged configuration overrides: adding Glance and related Ceph configuration

Ceph keyrings, tested using the official documentation's approach

Official documentation version (this is the version that was tested)

ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images' -o ceph.client.glance.keyring 
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' mgr 'profile rbd pool=volumes, profile rbd pool=vms' -o ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups' mgr 'profile rbd pool=backups' -o ceph.client.cinder-backup.keyring
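
The caps above reference the images, volumes, vms and backups pools. If those pools do not exist yet, they can be created and initialized for RBD first, as in the upstream rbd-openstack guide; this is a minimal sketch and pool names or PG settings may need adjusting for your cluster:

# create the pools referenced by the client caps (pg_num left to the autoscaler)
ceph osd pool create volumes
ceph osd pool create images
ceph osd pool create backups
ceph osd pool create vms
# initialize each pool for use by RBD
rbd pool init volumes
rbd pool init images
rbd pool init backups
rbd pool init vms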

In the ZzNnwN. deployment version, the keys are created as follows:

cd /etc/ceph
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' -o ceph.client.cinder.keyring
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o ceph.client.glance.keyring
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups, allow rwx pool=volumes' -o ceph.client.cinder-backup.keyring
ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rwx pool=volumes, allow rx pool=images' -o ceph.client.nova.keyring
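
After creating the keys, the stored caps can be checked with ceph auth get, and the keyrings and ceph.conf have to be made available to kolla-ansible. A minimal sketch, assuming the usual external-Ceph override layout under /etc/kolla/config/ (the exact directory names depend on your kolla-ansible release, so verify them against the external Ceph guide):

# verify the caps that were actually stored
ceph auth get client.glance
ceph auth get client.cinder
ceph auth get client.cinder-backup
ceph auth get client.nova

# assumed kolla-ansible external-Ceph layout; adjust to your release
mkdir -p /etc/kolla/config/{glance,nova,cinder/cinder-volume,cinder/cinder-backup}
cp ceph.conf ceph.client.glance.keyring /etc/kolla/config/glance/
cp ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-volume/
cp ceph.client.cinder.keyring ceph.client.cinder-backup.keyring /etc/kolla/config/cinder/cinder-backup/
cp ceph.conf ceph.client.nova.keyring ceph.client.cinder.keyring /etc/kolla/config/nova/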

The merged-configuration settings verified to take effect so far

The content below is taken from:

https://wn-apple-teawine.fun/2023/06/02/ceph与openstack-yoga集成

See also the official documentation:

https://docs.ceph.com/en/reef/rbd/rbd-openstack/

After kolla-ansible merges the configuration overrides, the generated configuration files for cinder-volume and cinder-backup are located at:

/etc/kolla/cinder-volume/cinder.conf
/etc/kolla/cinder-backup/cinder.conf

The files for glance and nova are located at:

/etc/kolla/glance-api/glance-api.conf
/etc/kolla/nova-compute/nova.conf
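
Once kolla-ansible has regenerated these files, the merged result can be spot-checked on the host, for example:

grep -A 6 '\[glance_store\]' /etc/kolla/glance-api/glance-api.conf
grep -A 9 '\[rbd-1\]' /etc/kolla/cinder-volume/cinder.conf
grep -A 10 '\[libvirt\]' /etc/kolla/nova-compute/nova.conf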

# Glance image store configuration
vi /etc/kolla/config/glance/glance-api.conf
# Add:
[glance_store]
stores=rbd
default_store=rbd
rbd_store_pool=images
rbd_store_user=glance
rbd_store_ceph_conf=/etc/ceph/ceph.conf

# Cinder volume storage configuration
vi /etc/kolla/config/cinder/cinder-volume.conf
# Add:
[DEFAULT]
enabled_backends=rbd-1
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
# rbd_flatten_volume_from_snapshot = true
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver  # this is the driver
rbd_secret_uuid=ea241bba-2031-4b0e-b750-c8356cbb46a8

Newly added: rbd_flatten_volume_from_snapshot = true

# Note: rbd_secret_uuid can be found in /etc/kolla/passwords.yml
# cat /etc/kolla/passwords.yml | grep rbd_secret_uuid

# Cinder backup configuration
vi /etc/kolla/config/cinder/cinder-backup.conf
# Add:
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
# cinder.backup.drivers.ceph.CephBackupDriver is used here
backup_driver=cinder.backup.drivers.ceph.CephBackupDriver  # this is the driver; the short form cinder.backup.drivers.ceph is deprecated
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
restore_discard_excess_bytes=true

# Nova configuration
vi /etc/kolla/config/nova/nova-compute.conf
# Add:
[libvirt]
virt_type=kvm
cpu_mode=none
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
# disk_cachemodes= "network=writeback"
# inject_password=true
# If deploying on a virtual machine, virt_type must be changed to qemu; otherwise instances launched on top of OpenStack will fail to start.
(Reference from the actual environment:
Added: disk_cachemodes= "network=writeback", inject_password=true, images_type=rbd
Removed: virt_type=kvm, cpu_mode=none. With these additions and removals the configuration worked.)
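
The override files under /etc/kolla/config/ only take effect after kolla-ansible regenerates and redeploys the service configuration. A minimal sketch (the inventory path is a placeholder, and --tags support and tag names may vary by release):

kolla-ansible -i ./multinode reconfigure
# or limit the run to the affected services
kolla-ansible -i ./multinode reconfigure --tags glance,cinder,nova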

Glance configuration file (glance-api.conf)

ZzNnwN. deployment version, kolla-ansible merged-configuration override path: /etc/kolla/config/glance/glance-api.conf

Quoted from the official documentation

KILO AND AFTER

Edit /etc/glance/glance-api.conf and add under the [glance_store] section:

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

For more information about the configuration options available in Glance please refer to the OpenStack Configuration Reference: http://docs.openstack.org/.

ENABLE COPY-ON-WRITE CLONING OF IMAGES

Note that this exposes the back end location via Glance’s API, so the endpoint with this option enabled should not be publicly accessible.

ANY OPENSTACK VERSION EXCEPT MITAKA

If you want to enable copy-on-write cloning of images, also add under the [DEFAULT] section:

show_image_direct_url = True

In OpenStack, Glance is the component that manages virtual machine images, while Ceph is a distributed storage system commonly used as an OpenStack storage back end. In the configuration above, show_image_direct_url = True is part of Glance's glance-api.conf and controls whether an image's direct URL is exposed. With show_image_direct_url = True, the Glance API includes a direct_url property containing a link to the image's location in the backing store.

This allows consumers to access the image directly rather than going through the Glance API service, which can be useful, for example, when sharing an image with other users or systems via a direct link. For the Ceph and OpenStack integration, make sure default_store in the Glance configuration is set to rbd so images are stored on the Ceph RBD (block device) back end, and configure the Ceph access permissions so that each OpenStack component can reach and use the Ceph cluster.
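One way to confirm that copy-on-write cloning is actually in effect is to create a volume from an image and check the RBD parent of the resulting volume object. A sketch, where the image name (cirros), volume name and the volume UUID are placeholders:

openstack volume create --image cirros --size 1 cow-test
# with CoW cloning active, the volume's parent should be the Glance image snapshot,
# e.g. "parent: images/<image-id>@snap"; without it, no parent line appears
rbd info volumes/volume-<volume-uuid> | grep parent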

Cinder (official documentation reference)

ZzNnwN. deployment version, kolla-ansible merged-configuration override path: /etc/kolla/config/cinder/cinder-volume.conf

OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:

[DEFAULT]
...
enabled_backends = ceph
glance_api_version = 2
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1

If you are using cephx authentication, also configure the user and uuid of the secret you added to libvirt as documented earlier:

[ceph]
...
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Note that if you are configuring multiple cinder back ends, glance_api_version = 2 must be in the [DEFAULT] section.

The enabled_backends = ceph setting corresponds to the manual back-end operations shown below; in practice it is the volume type selected when creating a disk. In newer versions the type does not need to be created manually: completing the configuration file is sufficient.

For example, when I tried to set the back-end name on the rbd-1 type manually, an error was reported:

Running cinder type-key 1f90a0e1-3657-4d42-9965-4e9ab76ed635 set volume_backend_name=rbd-1 fails with:
ERROR: Volume Type is currently in use. (HTTP 400) (Request-ID: req-48ab8f51-d024-45e4-a9f8-376f7c71bf45)

cinder type-create ceph

cinder type-key 8a8a69b6-ec9f-4a09-8126-a211fce34ef3 set volume_backend_name=ceph

cinder type-list
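
The same volume type operations can also be performed with the unified openstack client, which newer releases document instead of the legacy cinder CLI:

openstack volume type create ceph
openstack volume type set --property volume_backend_name=ceph ceph
openstack volume type list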

https://github.com/zznn-cloud/zznn-cloud-blog-images/raw/main/Qexo/24/4/image_081d10d51610d49db32c3074544340f6.png

Cinder backup

ZzNnwN. deployment version, kolla-ansible merged-configuration override path: /etc/kolla/config/cinder/cinder-backup.conf

In fact the default backup driver is the Swift driver; ZzNnWn. uses cinder.backup.drivers.ceph.CephBackupDriver. Note that the short driver path cinder.backup.drivers.ceph is deprecated.

backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver

https://github.com/zznn-cloud/zznn-cloud-blog-images/raw/main/Qexo/24/4/image_0c901ca872c1b350140046935d9a97ad.png
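
To check that the Ceph backup driver is actually in use, a test backup can be created and the backups pool inspected; the volume name below is a placeholder:

openstack volume backup create --name test-backup <volume-name-or-id>
openstack volume backup list
# the backup objects should appear in the Ceph backups pool
rbd ls backups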

Quoted from the official documentation

CONFIGURING CINDER BACKUP

OpenStack Cinder Backup requires a specific daemon so don’t forget to install it. On your Cinder Backup node, edit /etc/cinder/cinder.conf and add:

backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

Nova and Ceph

ZzNnwN. deployment version, kolla-ansible merged-configuration override path: /etc/kolla/config/nova/nova-compute.conf

The configuration here should be set as follows.

The rbd_secret_uuid can be looked up in either of the following two ways:

apt install libvirt-clients -y
virsh secret-list (only usable once the integration with OpenStack has already been deployed)
cat /etc/kolla/passwords.yml |grep rbd_secret_uuid
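
As a sanity check on a deployment that is already up, the libvirt secret value should match the cephx key of client.cinder; nova_libvirt below is the kolla default container name and is an assumption about your deployment:

virsh secret-get-value <rbd_secret_uuid>
# or, if virsh is not installed on the host:
docker exec nova_libvirt virsh secret-list
# compare with the key stored in Ceph
ceph auth get-key client.cinder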

Full configuration

# Nova configuration
vi /etc/kolla/config/nova/nova-compute.conf
# Add:
[libvirt]
virt_type=kvm
cpu_mode=none
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
# rbd_secret_uuid=ea241bba-2031-4b0e-b750-c8356cbb46a8 (optional; if not set it is generated automatically by the deployment)
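
With images_type=rbd in effect, the ephemeral disks of new instances land in the vms pool. After booting a test instance this can be verified; the flavor, image, network and server names below are placeholders:

openstack server create --flavor m1.tiny --image cirros --network demo-net rbd-test
# expect an object named <instance-uuid>_disk
rbd ls vms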

https://github.com/zznn-cloud/zznn-cloud-blog-images/raw/main/Qexo/24/4/image_5cdbe807c394a81c86e7f04b76d40922.png

https://github.com/zznn-cloud/zznn-cloud-blog-images/raw/main/Qexo/24/4/image_4e38f71c5e42268c245ab6d95a5044b5.png

Quoted from the official documentation

CONFIGURING NOVA TO ATTACH CEPH RBD BLOCK DEVICE

In order to attach Cinder devices (either normal block or by issuing a boot from volume), you must tell Nova (and libvirt) which user and UUID to refer to when attaching the device. libvirt will refer to this user when connecting and authenticating with the Ceph cluster.

[libvirt]
...
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

These two flags are also used by the Nova ephemeral back end.

CONFIGURING NOVA

In order to boot virtual machines directly from Ceph volumes, you must configure the ephemeral backend for Nova.

It is recommended to enable the RBD cache in your Ceph configuration file; this has been enabled by default since the Giant release. Moreover, enabling the client admin socket allows the collection of metrics and can be invaluable for troubleshooting.

This socket can be accessed on the hypervisor (Nova compute) node:

ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help
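
Besides help, the admin socket can dump per-client metrics and the effective configuration, which is useful when verifying RBD cache behaviour; the socket path is only an example and differs per client process:

ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok perf dump
ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok config show | grep rbd_cache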

To enable RBD cache and admin sockets, ensure that each hypervisor's ceph.conf contains:

[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20

Configure permissions for these directories:

mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:libvirtd /var/run/ceph/guests /var/log/qemu/

Note that user qemu and group libvirtd can vary depending on your system. The provided example works for RedHat based systems.

Tip

If your virtual machine is already running, you can simply restart it to enable the admin socket.