Connecting Cinder to multiple Ceph storage backends

Posted 2020-12-26 15:00:04
Environment

- The current OpenStack environment is in normal use.
- The backend Ceph storage is more than 85% full. We do not want to expand it in place, because that would trigger a large amount of data migration.
- Instead, a new, independent Ceph cluster has been created, to be added to the existing OpenStack environment as a second Ceph backend.
- The old Ceph cluster is called ceph-A; its in-use pool is volumes.
- The new Ceph cluster is called ceph-B; its in-use pool is new_volumes.

Goal

In OpenStack, connect to both Ceph backends at the same time.

The cinder server configuration has two parts:
1. Ceph connection configuration
2. Cinder configuration

Ceph connection configuration

1. Copy the configuration from both Ceph clusters into the /etc/ceph directory on the cinder server, giving each file a distinct name

[root@hh-yun-db-129041 ceph]# tree `pwd`
/etc/ceph
├── ceph.client.admin-develop.keyring      <- admin key for cluster ceph-B
├── ceph.client.admin-volumes.keyring      <- admin key for cluster ceph-A
├── ceph.client.developcinder.keyring      <- key for user developcinder in cluster ceph-B
├── ceph.client.cinder.keyring             <- cinder key for cluster ceph-A
├── ceph.client.mon-develop.keyring        <- mon key for cluster ceph-B
├── ceph.client.mon-volumes.keyring        <- mon key for cluster ceph-A
├── ceph-develop.conf                      <- ceph-B cluster config file (contains mon addresses and other cluster info)
└── ceph-volumes.conf                      <- ceph-A cluster config file (contains mon addresses and other cluster info)

Note: each keyring file must be named ceph.client.(username).keyring, matching the user that connects to Ceph; otherwise the cinder server cannot obtain the correct permissions.
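A quick way to confirm that each keyring/user pair actually works is to list the pool as that user, the same way the Cinder RBD driver will. This is a hedged sketch; the --id values simply follow the file names above:

rbd -c /etc/ceph/ceph-volumes.conf -k /etc/ceph/ceph.client.cinder.keyring --id cinder ls volumes
rbd -c /etc/ceph/ceph-develop.conf -k /etc/ceph/ceph.client.developcinder.keyring --id developcinder ls new_volumes

If either command returns a permission error, the keyring file name and the user name are out of step.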

2. From the command line, test the connection to each Ceph backend

ceph-A connection test

[root@hh-yun-db-129041 ceph]# ceph -c ceph-volumes.conf -k ceph.client.admin-volumes.keyring -s
    cluster xxx-xxx-xxxx-xxxx-xxxx
     health HEALTH_OK
     monmap e3: 5 mons at {hh-yun-ceph-cinder015-128055=240.30.128.55:6789/0,hh-yun-ceph-cinder017-128057=240.30.128.57:6789/0,hh-yun-ceph-cinder024-128074=240.30.128.74:6789/0,hh-yun-ceph-cinder025-128075=240.30.128.75:6789/0,hh-yun-ceph-cinder026-128076=240.30.128.76:6789/0}
            election epoch 452, quorum 0,1,2,3,4 hh-yun-ceph-cinder015-128055,hh-yun-ceph-cinder017-128057,hh-yun-ceph-cinder024-128074,hh-yun-ceph-cinder025-128075,hh-yun-ceph-cinder026-128076
     osdmap e170088: 226 osds: 226 up, 226 in
      pgmap v50751302: 20544 pgs, 2 pools, 157 TB data, 40687 kobjects
            474 TB used, 376 TB / 850 TB avail
               20537 active+clean
                   7 active+clean+scrubbing+deep
  client io 19972 kB/s rd, 73591 kB/s wr, 3250 op/s

ceph-B connection test

[root@hh-yun-db-129041 ceph]# ceph -c ceph-develop.conf -k ceph.client.admin-develop.keyring -s
    cluster 4bf07d3e-a289-456d-9bd9-5a89832b413b
     health HEALTH_OK
     monmap e1: 5 mons at {240.30.128.214=240.30.128.214:6789/0,240.30.128.215=240.30.128.215:6789/0,240.30.128.39=240.30.128.39:6789/0,240.30.128.40=240.30.128.40:6789/0,240.30.128.58=240.30.128.58:6789/0}
            election epoch 6, quorum 0,1,2,3,4 240.30.128.39,240.30.128.40,240.30.128.58,240.30.128.214,240.30.128.215
     osdmap e559: 264 osds: 264 up, 264 in
            flags sortbitwise
      pgmap v116751: 12400 pgs, 9 pools, 1636 bytes data, 171 objects
            25091 MB used, 1440 TB / 1440 TB avail
               12400 active+clean
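Since every command against a non-default cluster needs the same -c/-k flags, shell aliases save typing. A convenience sketch, not part of the original setup:

alias ceph-a='ceph -c /etc/ceph/ceph-volumes.conf -k /etc/ceph/ceph.client.admin-volumes.keyring'
alias ceph-b='ceph -c /etc/ceph/ceph-develop.conf -k /etc/ceph/ceph.client.admin-develop.keyring'
ceph-a -s    # status of ceph-A
ceph-b -s    # status of ceph-B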
Cinder configuration

Configure the cinder server. Note that each backend listed in enabled_backends must have a configuration section with exactly the same name.

/etc/cinder/cinder.conf


enabled_backends=CEPH_SATA,CEPH_DEVELOP...

[CEPH_SATA]
glance_api_version=2
volume_backend_name=ceph_sata
rbd_ceph_conf=/etc/ceph/ceph-volumes.conf
rbd_user=cinder
rbd_flatten_volume_from_snapshot=False
rados_connect_timeout=-1
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_store_chunk_size=4
rbd_secret_uuid=dc4f91c1-8792-4948-b68f-2fcea75f53b
rbd_pool=volumes
host=cinder.vclound.com

[CEPH-new_volumes]
glance_api_version=2
volume_backend_name=ceph-new_volumes
rbd_ceph_conf=/etc/ceph/ceph-new_volumes.conf
rbd_user=cinder
rbd_flatten_volume_from_snapshot=False
rados_connect_timeout=-1
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_store_chunk_size=4
rbd_secret_uuid=4bf07d3e-a289-456d-9bd9-5a89832b413b
rbd_pool=new_volumes
host=cinder.vclound.com
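With both backends enabled, users pick between them through volume types keyed on volume_backend_name. The post does not show this step; a minimal sketch with the classic cinder CLI follows (the type names sata and develop are just examples):

cinder type-create sata
cinder type-key sata set volume_backend_name=ceph_sata
cinder type-create develop
cinder type-key develop set volume_backend_name=ceph-new_volumes
# Create a 1 GB volume on the new cluster:
cinder create --volume-type develop --display-name test-vol 1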
Reply from the original poster, posted 2021-01-14 23:28:30
Run on the Ceph monitor node

CINDER_PASSWD='cinder1234!'
controllerHost='controller'
RABBIT_PASSWD='0penstackRMQ'
1. Create the pool
Create a pool for the cinder-volume service (this setup has only one OSD node, so the replica count is set to 1):
ceph osd pool create cinder-volumes 32
ceph osd pool set cinder-volumes size 1
ceph osd pool application enable cinder-volumes rbd
ceph osd lspools
2. Check pool usage
ceph df
3. Create the account
ceph auth get-or-create client.cinder-volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-volumes, allow rwx pool=glance-images' -o /etc/ceph/ceph.client.cinder-volumes.keyring
# Check:
ceph auth ls | grep -EA3 'client.(cinder-volumes)'
4. Update ceph.conf and push it to all monitor nodes (do not skip this step)
su - cephd
cd ~/ceph-cluster/
cat <<EOF>> ceph.conf
[client.cinder-volumes]
keyring = /etc/ceph/ceph.client.cinder-volumes.keyring
EOF
ceph-deploy --overwrite-conf admin ceph-mon01
exit
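A quick sanity check that the account and the pushed files are in place (hedged; run on ceph-mon01):

ceph auth get client.cinder-volumes
ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.cinder-volumes.keyring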
5. Install the cinder-volume component and the Ceph client (skip this step if the Ceph monitor runs on the controller node)
yum -y install openstack-cinder python-keystone ceph-common
6. Generate a UUID with uuidgen (Cinder and libvirt must use the same UUID)
uuidgen
Running uuidgen produces a value such as:
086037e4-ad59-4c61-82c9-86edc31b0bc0
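The same UUID is needed again in step 9 and in the libvirt secret on each compute node, so it can help to capture it once in a shell variable (the name cephUUID is only illustrative):

cephUUID=$(uuidgen)
echo ${cephUUID}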
7. Configure cinder-volume to talk to the cinder-api service
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:${RABBIT_PASSWD}@${controllerHost}:5672
openstack-config --set /etc/cinder/cinder.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/cinder/cinder.conf cache enabled true
openstack-config --set /etc/cinder/cinder.conf cache memcache_servers ${controllerHost}:11211
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://${controllerHost}:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://${controllerHost}:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password ${CINDER_PASSWD}
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
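openstack-config can also read settings back, which is an easy way to verify the edits before continuing (a hedged check, assuming the openstack-utils package that ships openstack-config):

openstack-config --get /etc/cinder/cinder.conf DEFAULT transport_url
openstack-config --get /etc/cinder/cinder.conf keystone_authtoken username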
8. Set the cinder-volume backend to ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
9. Configure the ceph backend driver for cinder-volume
openstack-config --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
openstack-config --set /etc/cinder/cinder.conf ceph rbd_pool cinder-volumes
openstack-config --set /etc/cinder/cinder.conf ceph rbd_user cinder-volumes
openstack-config --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf ceph rbd_flatten_volume_from_snapshot false
openstack-config --set /etc/cinder/cinder.conf ceph rbd_max_clone_depth 5
openstack-config --set /etc/cinder/cinder.conf ceph rbd_store_chunk_size 4
openstack-config --set /etc/cinder/cinder.conf ceph rados_connect_timeout -1
openstack-config --set /etc/cinder/cinder.conf ceph glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf ceph rbd_secret_uuid 086037e4-ad59-4c61-82c9-86edc31b0bc0
10. Start the cinder-volume service
systemctl enable openstack-cinder-volume.service
systemctl start openstack-cinder-volume.service
systemctl status openstack-cinder-volume.service
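Once the service is up, it is worth confirming that the new backend registered with the scheduler: the cinder-volume entry should appear as <hostname>@ceph with state up. A hedged check (the admin credentials file path is an assumption):

source ~/keystonerc_admin
openstack volume service list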
Run on every compute node that needs to attach Ceph volumes

1. Create the secret file (the UUID must match the one used in the Cinder configuration)
cat << EOF > ~/secret.xml
<secret ephemeral='no' private='no'>
    <uuid>086037e4-ad59-4c61-82c9-86edc31b0bc0</uuid>
    <usage type='ceph'>
        <name>client.cinder-volumes secret</name>
    </usage>
</secret>
EOF
2. Fetch the cinder-volumes account key from the Ceph monitor
ceph auth get-key client.cinder-volumes
This returns something like:
AQCxfDFdgp2qKRAAUY/vep29N39Qv7xWKYqMUw==
3. Register the UUID with libvirt
virsh secret-define --file ~/secret.xml
4. Associate the UUID with the cinder-volumes key in libvirt
virsh secret-set-value --secret 086037e4-ad59-4c61-82c9-86edc31b0bc0 --base64 AQCxfDFdgp2qKRAAUY/vep29N39Qv7xWKYqMUw==

5. List the UUIDs registered in libvirt
virsh secret-list
6. Restart libvirt
systemctl restart libvirtd.service
systemctl status libvirtd.service
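A hedged end-to-end check: create a small volume, confirm it lands in the cinder-volumes pool, then attach it to an instance (the instance name demo-vm is hypothetical):

openstack volume create --size 1 test-ceph-vol
rbd ls cinder-volumes          # run on a Ceph node; the volume shows up as volume-<uuid>
openstack server add volume demo-vm test-ceph-vol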
Rollback procedure in case of errors

1. Delete the pool
First enable pool deletion on all monitor nodes; only then can the pool be removed.
When deleting a pool, Ceph requires the pool name to be typed twice, plus the --yes-i-really-really-mean-it flag.
echo '
mon_allow_pool_delete = true
[mon]
mon allow pool delete = true
' >> /etc/ceph/ceph.conf
systemctl restart ceph-mon.target
ceph osd pool delete cinder-volumes cinder-volumes --yes-i-really-really-mean-it
2. Delete the account
ceph auth del client.cinder-volumes
3. Remove the UUID and cinder-volumes key registered in libvirt
List them:
virsh secret-list
Delete (secret-undefine followed by the UUID):
virsh secret-undefine 086037e4-ad59-4c61-82c9-86edc31b0bc0