Ubuntu NIC-bond Edition: OpenStack Deployment

Environment: 8C/24G, Windows 10 Home, VMware Workstation 17 Pro, Ubuntu 20.04

Due to limited host performance, this test uses a single node.

1. Ubuntu NIC bond configuration

See link | : simulating a switch in Workstation to divide VLANs and configure NIC bonds

2. Ceph deployment (one or two replicas)

For script-based deployment, see link | : one-click Ceph deployment

Note the Ceph network selection: --mon-ip 10.56.60.11 --cluster-network 10.56.50.0/24 --allow-overwrite
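For reference, here is roughly how those flags would sit in a cephadm bootstrap invocation. This is a sketch only (the command is printed, not run; the linked one-click script is the authoritative source):

```shell
# Sketch, not executed here: the noted network flags in a cephadm bootstrap call.
bootstrap_cmd='cephadm bootstrap \
  --mon-ip 10.56.60.11 \
  --cluster-network 10.56.50.0/24 \
  --allow-overwrite'
printf '%s\n' "$bootstrap_cmd"
```

`--cluster-network` keeps OSD replication traffic on the 10.56.50.0/24 segment (bond0 below), separate from the public 10.56.60.0/24 network the MON listens on.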

3. OpenStack deployment (one or two replicas)

1. globals.yml configuration (the network must match section 1)

# -------------------<< divider >>------------------------
# kolla-ansible globals.yml configuration
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
openstack_release: "zznnyoga"
nova_compute_virt_type: "qemu"
node_custom_config: "/etc/kolla/config"

# Private Aliyun registry
docker_registry: "registry.cn-hangzhou.aliyuncs.com" # source for pulling docker images
docker_namespace: "zznn" # docker registry project name

# External ceph ( Workstation ens33, NAT mode )
swift_storage_interface: "vlan60" # interface for the ceph storage network (where the ceph public network lives)
# external_ceph_pool_pgs: 16 # the power of two closest to (hdd osd count) * 3; e.g. with 12 hdd osds, use 32
# external_ceph_ssd_pool_pgs: 32 # the power of two closest to (ssd osd count) * 3; omit if there are no ssds
failed_swap_on: "yes" # swap is disallowed by default
enable_external_ceph: "yes" # enable external ceph
external_ceph_admin_permission: "yes"

# OpenStack internal interfaces (ens33 NAT mode, ens37 host-only mode; strictly ens37 should use an external IP, i.e. the bridged ens38, but this works in VMware because Windows can ping host-only IPs)
keepalived_virtual_router_id: "99" # unique keepalived ID; must differ from every other keepalived ID on the LAN, range 0~255
kolla_internal_vip_address: "10.56.70.100" # internal VIP; must be on the same subnet as the IPs in /etc/hosts (the Workstation lab uses LAN segment 20, internal traffic only)
network_interface: "vlan70" # internal VIP interface, for inter-server traffic
kolla_external_vip_address: "192.168.1.99" # external VIP, used for the horizon dashboard etc. (with HA enabled via enable_haproxy: "yes", this must be an unused IP)
horizon_port: "10000" # dashboard port
kolla_external_vip_interface: "bond3" # external VIP interface; if the NIC carries VLANs, use the external-network sub-interface (here the Workstation bridged NIC ens38 simulates the 192.168.31 segment)

# Other interface settings
# kolla_external_vip_interface: "{{ network_interface }}"
api_interface: "{{ network_interface }}"
storage_interface: "{{ network_interface }}"
cluster_interface: "{{ network_interface }}"
tunnel_interface: "{{ network_interface }}"
dns_interface: "{{ network_interface }}"
# neutron_external_interface: "ens38" # physical NIC for reaching openstack; the service port, carries no IP
neutron_plugin_agent: "openvswitch"

# External network ( ens38 NAT mode; ens38 is the service port and carries no IP; the office 192.168.31 segment serves as the external network, cabled to the office fibre or ethernet )
# neutron_external_interface: "ens38" # physical NIC for the flat network, if a dedicated uplink is needed
neutron_external_interface: "bond2,vlan80" # Workstation bridged NIC simulating the external network (IP optional); a physical server needs an external-network cable
neutron_bridge_name: "{% for i in range(neutron_external_interface.split(',')|length) %}br-ex{{ loop.index0 + 1 }}{% if not loop.last %},{% endif %}{% endfor %}"
# bridge names: one bridge per interface listed in neutron_external_interface
enable_neutron_dns: "no" # whether neutron serves dns; enabling it risks dns reflection ddos attacks, while disabling it means subnets created in horizon need a public dns, e.g. 223.5.5.5
enable_neutron_mtu_1550: "no" # do not change this parameter, or networking issues may follow
enable_neutron_ext_net: "yes" # automatically create the flat network
neutron_ext_net_cidr: "192.168.1.0/24"
neutron_ext_net_range: "start=192.168.1.70,end=192.168.1.120"
neutron_ext_net_gateway: "192.168.1.1"

# Disable the clock check (optional)
# prechecks_enable_host_ntp_checks: "false"
# skyline (optional)
# enable_skyline: "yes"
# Other options
enable_ceph: "no"
enable_haproxy: "yes"
enable_chrony: "yes"
# enable_mariabackup: "no"
# enable_ceph_rgw: "yes"
enable_cinder: "yes"
enable_manila_backend_cephfs_native: "yes"
# enable_neutron_qos: "yes"
# enable_ceph_rgw_keystone: "yes"
glance_backend_ceph: "yes"
glance_enable_rolling_upgrade: "no"
gnocchi_backend_storage: "ceph"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
ironic_dnsmasq_dhcp_range:
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:

# ------------ divider --------------
# Added 2023-10-30 (vpn, qos, database) --------------------->> new settings below ------------
enable_neutron_dvr: "yes" # DVR (Distributed Virtual Router) runs the router data plane on the compute nodes instead of centrally on the network nodes, improving network performance and scalability
enable_neutron_vpnaas: "yes" # vpn
enable_neutron_qos: "yes" # qos
enable_nova_ssh: "yes" # lets users reach instances over SSH for remote management
# enable_neutron_provider_networks: "yes" # provider networks let tenants attach to external networks, typically a physical network or another external service
# enable_trove: "yes" # Trove is OpenStack's database-as-a-service
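The PG-sizing rule quoted in the commented external_ceph_pool_pgs lines above (the power of two closest to 3× the OSD count) can be sketched as a small helper. This is only this post's rule of thumb, not a ceph requirement:

```shell
# Rule of thumb from the config comments: pg_num = power of two closest to (osd_count * 3).
# On a tie, the smaller power is kept.
pool_pgs() {
  target=$(( $1 * 3 )); pg=1
  # double until pg <= target < 2*pg
  while [ $(( pg * 2 )) -le "$target" ]; do pg=$(( pg * 2 )); done
  # keep whichever neighbouring power of two is closer to target
  if [ $(( target - pg )) -gt $(( pg * 2 - target )) ]; then pg=$(( pg * 2 )); fi
  echo "$pg"
}
pool_pgs 12   # 12 hdd osds -> 32
pool_pgs 5    # 5 ssd osds  -> 16
```

With 12 HDD OSDs the target is 36, which sits between 32 and 64 and is closer to 32, matching the example in the comment.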

Problem

Even in this test environment, the ceph osd write speed reaches over 1M/s with this setup.

Note: during this test the floating IPs did not work; the cause is as follows.

Files:

  • /etc/kolla/neutron-server/ml2_conf.ini
  • /etc/kolla/neutron-openvswitch-agent/openvswitch_agent.ini
  • neither bond2 nor vlan80 is on the floating-IP subnet; the port behind bond2 must be cabled into the floating-IP subnet and must not carry an IP
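The subnet mismatch can be checked mechanically. A minimal sketch, assuming the /24 masks used throughout this setup:

```shell
# Same /24? Compare everything before the last octet (assumes /24 masks, as in this lab).
same_subnet24() { [ "${1%.*}" = "${2%.*}" ]; }

same_subnet24 10.56.80.11  192.168.1.70 || echo "vlan80 cannot serve the floating range"
same_subnet24 192.168.1.10 192.168.1.70 && echo "bond3's segment matches the floating range"
```

vlan80's 10.56.80.0/24 can never carry a 192.168.1.x floating address, which is why the uplink behind bond2 has to be cabled to the 192.168.1.0/24 segment instead.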
# Inspect the config file
cat /etc/kolla/neutron-openvswitch-agent/openvswitch_agent.ini

[agent]
tunnel_types = vxlan
l2_population = true
arp_responder = true
enable_distributed_routing = True
extensions = qos

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
bridge_mappings = physnet1:br-ex1,physnet2:br-ex2
datapath_type = system
ovsdb_connection = tcp:127.0.0.1:6640
ovsdb_timeout = 10
local_ip = 10.56.70.11 # this IP is on the 10.56 segment, but the floating IPs are on 192.168

# Inspect the virtual switch internals
(venv3) root@ceph1:/etc/kolla/neutron-openvswitch-agent# docker exec -it openvswitch_vswitchd bash
(openvswitch-vswitchd)[root@ceph1 /]# ovs-vsctl show
0b40fa1b-d121-4e67-ae65-2a702169c162
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex1
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port br-ex1
            Interface br-ex1
                type: internal
        Port bond2 # the external NIC used here is bond2, but at this point bond2 is
            Interface bond2
        Port phy-br-ex1
            Interface phy-br-ex1
                type: patch
                options: {peer=int-br-ex1}
    Bridge br-ex2
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port phy-br-ex2
            Interface phy-br-ex2
                type: patch
                options: {peer=int-br-ex2}
        Port br-ex2
            Interface br-ex2
                type: internal
        Port vlan80
            Interface vlan80
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port tapc4f556bc-44
            tag: 3
            Interface tapc4f556bc-44
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port qvo0650711e-a1
            tag: 2
            Interface qvo0650711e-a1
        Port int-br-ex1
            Interface int-br-ex1
                type: patch
                options: {peer=phy-br-ex1}
        Port qg-97668e2d-78
            tag: 3
            Interface qg-97668e2d-78
                type: internal
        Port int-br-ex2
            Interface int-br-ex2
                type: patch
                options: {peer=phy-br-ex2}
        Port sg-02263924-70
            tag: 2
            Interface sg-02263924-70
                type: internal
        Port tap823e4df9-c8
            tag: 2
            Interface tap823e4df9-c8
                type: internal
        Port fg-69b4fc5a-cc
            tag: 3
            Interface fg-69b4fc5a-cc
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port qr-69d003b3-f2
            tag: 2
            Interface qr-69d003b3-f2
                type: internal
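The two external bridges in the listing above come straight from the neutron_external_interface / neutron_bridge_name pair in globals.yml: the Jinja template emits one br-exN per listed interface. A shell equivalent of that template, for predicting the bridge list:

```shell
# Mirror of the neutron_bridge_name Jinja template: one br-exN per
# comma-separated entry in neutron_external_interface.
bridge_names() {
  names=""; i=1
  IFS=','; for _ in $1; do
    names="${names:+$names,}br-ex$i"; i=$(( i + 1 ))
  done; unset IFS
  echo "$names"
}
bridge_names "bond2,vlan80"   # -> br-ex1,br-ex2
```

So "bond2,vlan80" yields br-ex1 (carrying bond2) and br-ex2 (carrying vlan80), exactly the Port entries shown by ovs-vsctl.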

Solution

(venv3) root@ceph1:~# cat /etc/netplan/00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33: {}
    ens37: {}
    ens38: {}
    ens39: {}
    ens40: {}
    ens41: {}
    ens42: {}
    ens43: {}
  bonds:
    bond0:
      addresses: ["10.56.50.11/24"] # 10G port, for inter-osd ceph traffic (--cluster-network)
      interfaces:
      - ens37
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
        lacp-rate: fast
        transmit-hash-policy: layer3+4
    bond1:
      interfaces:
      - ens38
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
        lacp-rate: fast
        transmit-hash-policy: layer3+4
    bond2:
      interfaces:
      - ens39 # cable this to the floating-IP subnet
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
        lacp-rate: fast
        transmit-hash-policy: layer3+4
    bond3:
      addresses: ["192.168.1.10/24"] # 1G copper port, for remote access; on the floating-IP segment
      gateway4: 192.168.1.1
      interfaces:
      - ens33
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
        lacp-rate: fast
        transmit-hash-policy: layer3+4
  vlans:
    vlan60:
      id: 60
      link: bond1
      addresses: ["10.56.60.11/24"]
    vlan70:
      id: 70
      link: bond1
      addresses: ["10.56.70.11/24"]
    vlan80:
      id: 80
      link: bond1
      #addresses: ["10.56.80.11/24"] # the floating-IP interface carries no IP
    vlan90:
      id: 90
      link: bond1
      addresses: ["10.56.90.11/24"]
  version: 2

The problem is now resolved.

Results

Ceph status

(venv3) root@ceph1:~# ceph -s
  cluster:
    id:     dc7fa486-f1a0-11ee-a64f-c194d5eb9f2e
    health: HEALTH_WARN
            5 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum ceph1 (age 62m)
    mgr: ceph1.phqsba(active, since 62m)
    osd: 1 osds: 1 up (since 62m), 1 in (since 2h)

  data:
    pools:   5 pools, 129 pgs
    objects: 157 objects, 220 MiB
    usage:   111 MiB used, 20 GiB / 20 GiB avail
    pgs:     129 active+clean
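The HEALTH_WARN above is expected on a single-OSD test cluster. If single-replica pools are intentional, the warning can be acknowledged with something along these lines; the block below only prints the commands (they need a live cluster, and size 1 is only sane on a throwaway setup):

```shell
# Sketch: printed rather than executed; run on the ceph node only for a test cluster.
single_replica_cmds='ceph config set global mon_allow_pool_size_one true
for pool in $(ceph osd pool ls); do
  ceph osd pool set "$pool" size 1 --yes-i-really-mean-it
done'
printf '%s\n' "$single_replica_cmds"
```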

Verifying by creating an OpenStack instance >>

https://github.com/zznn-cloud/zznn-cloud-blog-images/raw/main/Qexo/24/4/image_ea5efea6d38310a022b2f7c573e3afcd.png

The cluster test is now complete.