Installing Ceph Octopus with the new cephadm

Posted on 2021-7-19 10:56:03

Ceph cluster base package installation
Steps:
2-1. Install and configure the other base packages: pip, deltarpm, ceph-common
2-2. Add firewall rules

2-1 Install and configure the other base packages: pip, deltarpm, ceph-common
1. sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
2. yum install -y python3 epel-release ceph-mgr-dashboard ceph-common
   pip3 install --upgrade pip
3. yum install -y snappy leveldb gdisk python3-ceph-argparse python3-flask gperftools-libs
4. yum install -y ceph
2-2 Add firewall rules

firewall-cmd --zone=public --add-service=ceph-mon --permanent
firewall-cmd --zone=public --add-service=ceph --permanent
firewall-cmd --zone=public --add-service=ntp --permanent
firewall-cmd --reload
Appendix: Ceph-related ports
  Ceph Monitor (ceph-mon): 3300, 6789 (TCP)
  Ceph Manager (ceph-mgr): 6800, 6801, plus a configurable web port (TCP)
  Ceph OSD (ceph-osd): 6800-7300 (TCP)
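If the predefined firewalld service definitions are not available on your system, the same ports can be opened directly by number (a sketch, equivalent to the service-based rules above):
firewall-cmd --zone=public --add-port=3300/tcp --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload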
At this point, all the base dependencies for the Ceph cluster are installed.
Cephadm uses containers (podman or docker) and systemd to install and manage a Ceph distributed cluster, and it integrates tightly with both the CLI and the dashboard GUI.
• cephadm only supports Octopus (v15.2.0) or newer.
• cephadm is fully integrated with the new orchestration API and fully supports the new CLI and dashboard features for managing cluster deployment.
• cephadm requires container support (podman or docker) and Python 3.
• Time synchronization (for example chrony or NTP) is required.
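Before bootstrapping, those prerequisites can be verified on each host with a few standard commands (a minimal sketch; chronyd is assumed to be the time-sync daemon here):
podman --version || docker --version
python3 --version
systemctl is-active chronyd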
CentOS 8 is used here to install Ceph.
Configure /etc/hosts name resolution:
cat >> /etc/hosts <<EOF
192.168.8.65 node1
192.168.8.66 node2
192.168.8.67 node3
EOF
Disable SELinux:
setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Set the hostname:
hostnamectl set-hostname node1
Install podman:
dnf install -y podman
Install cephadm
The cephadm command can:
1. Bootstrap a new cluster
2. Launch a containerized shell with a working Ceph CLI
3. Help debug containerized Ceph daemons
The following steps only need to be run on one node:
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
Install cephadm:
./cephadm add-repo --release octopus
./cephadm install
To bootstrap the cluster, first create the /etc/ceph directory:
mkdir -p /etc/ceph
Then run the cephadm bootstrap command:
cephadm bootstrap --mon-ip 192.168.8.65
This command will:
Create monitor and manager daemons for the new cluster on the local host.
Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.
Write a minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf.
Write a copy of the client.admin administrative (privileged!) key to /etc/ceph/ceph.client.admin.keyring.
Write a copy of the public key to /etc/ceph/ceph.pub.
After the installation completes there is a dashboard web UI.

Once the bootstrap finishes, you can check that /etc/ceph/ceph.conf has been written.
Enable the Ceph CLI
The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that the shell is fully functional.
cephadm shell
You can also install the ceph-common package on the node, which contains all of the ceph commands, including ceph, rbd and mount.ceph (for mounting CephFS file systems):
cephadm add-repo --release octopus
cephadm install ceph-common
The installation can be slow; you can manually switch the repo to a local mirror:
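For example (a sketch based on the Aliyun mirror that is also used later in this thread; it assumes cephadm add-repo wrote /etc/yum.repos.d/ceph.repo pointing at download.ceph.com, so adjust the paths for your release and architecture):
sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
yum clean all && yum makecache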
Add hosts to the cluster
Add the cluster's public key to the new hosts:
ssh-copy-id -f -i /etc/ceph/ceph.pub node2
ssh-copy-id -f -i /etc/ceph/ceph.pub node3

Tell Ceph that the new nodes are part of the cluster:
[root@localhost ~]# ceph orch host add node2
Added host 'node2'
[root@localhost ~]# ceph orch host add node3
Added host 'node3'
Adding hosts automatically scales out the mon and mgr daemons.

Deploy additional monitors (optional)
A typical Ceph cluster has three or five mon daemons spread across different hosts. If there are five or more nodes in the cluster, deploying five mons is recommended.
When Ceph knows which IP subnet the mons should use, it can automatically deploy and scale mons as the cluster grows (or shrinks). By default, Ceph assumes the other mons use the same subnet as the first mon's IP.
In the single-subnet case, at most 5 mons are added by default as hosts join the cluster. If there is a specific IP subnet the mons should use, you can configure it in CIDR format:

ceph config set mon public_network 10.1.2.0/24
cephadm will only deploy mon daemons on hosts with an IP in the configured subnet. To adjust the default number of mons for that subnet, run:
ceph orch apply mon *<number-of-monitors>*
To deploy mons on a specific set of hosts, run:
ceph orch apply mon *<host1,host2,host3,...>*
To view the current hosts and labels, run:
[root@node1 ~]# ceph orch host ls
HOST   ADDR   LABELS  STATUS
node1  node1
node2  node2
node3  node3
To disable automated mon deployment, run:
ceph orch apply mon --unmanaged
To add mons on a different network, run:
ceph orch apply mon --unmanaged
ceph orch daemon add mon node2:192.168.8.66
ceph orch daemon add mon node2:192.168.17.0/24

To add mons to multiple hosts, you can also use:
ceph orch apply mon "host1,host2,host3"
Deploy OSDs
You can display an inventory of the storage devices in the cluster with:
ceph orch device ls

A storage device is considered available if all of the following conditions are met:
The device must have no partitions.
The device must not have any LVM state.
The device must not be mounted.
The device must not contain a file system.
The device must not contain a Ceph BlueStore OSD.
The device must be larger than 5 GB.
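If a disk fails these checks only because it was used before (old partitions, LVM state or a previous OSD), it can be wiped so that it becomes available again (a sketch; zapping destroys all data on the device, and the host/path here are just examples):
ceph orch device zap node2 /dev/sdb --force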
Ceph refuses to provision an OSD on a device that is not available. To make sure the OSDs could be added successfully, I added a new disk to each node earlier. There are several ways to create new OSDs:
Automatically create OSDs on all unused devices:
[root@node1 ~]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
You can see that OSDs have been created on the three disks.
Create an OSD from a specific device on a specific host:
ceph orch daemon add osd host1:/dev/sdb
Deploy MDS
One or more MDS daemons are required in order to use the CephFS file system. (They are created automatically if the newer "ceph fs volume" interface is used to create a new file system.) Deploy the metadata servers with:
ceph orch apply mds *<fs-name>* --placement="*<num-daemons>* [*<host1>* ...]"
CephFS needs two pools, cephfs-data and cephfs-metadata, which store the file data and the file metadata respectively:
[root@node1 ~]# ceph osd pool create cephfs_data 64 64
[root@node1 ~]# ceph osd pool create cephfs_metadata 64 64
Create a CephFS file system named cephfs:
[root@node1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
[root@node1 ~]# ceph orch apply mds cephfs --placement="3 node1 node2 node3"
Scheduled mds.cephfs update...
Verify that at least one MDS has entered the active state. By default, Ceph runs a single active MDS; the others act as standbys.
ceph fs status cephfs
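Once an MDS is active, the file system can be mounted from a client as a quick test (a sketch using the kernel client and the admin keyring; node1 is one of the mon hosts in this example):
mkdir -p /mnt/cephfs
mount -t ceph node1:6789:/ /mnt/cephfs -o name=admin,secret=$(ceph auth get-key client.admin)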
Deploy RGW
Cephadm deploys radosgw as a collection of daemons that manage a particular realm and zone. RGW is short for RADOS Gateway, Ceph's object storage gateway service: a FastCGI service built on top of the LIBRADOS interface that provides RESTful object storage data access and management APIs.

With cephadm, the radosgw daemons are configured via the mon configuration database rather than via ceph.conf or the command line. If that configuration is not yet in place, the radosgw daemons start with default settings (binding to port 80 by default). To deploy 3 rgw daemons serving the myorg realm and the cn-east-1 zone on node1, node2 and node3 (if they do not already exist, the given realm and zone are created automatically before the rgw daemons are deployed):
ceph orch apply rgw myorg cn-east-1 --placement="3 node1 node2 node3"
Or you can create the realm, zonegroup and zone manually with radosgw-admin:
radosgw-admin realm create --rgw-realm=myorg --default
radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
radosgw-admin period update --rgw-realm=myorg --commit
You can see that the RGW has been created.
cephadm also automatically installs monitoring components such as Prometheus and Grafana. The default Grafana username/password is admin/admin, and dashboards for monitoring Ceph with Prometheus are already imported.
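To find out where the dashboard, Grafana and Prometheus are listening (a minimal check; the exact URLs depend on your hosts):
ceph mgr services
ceph orch ls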

OP | Posted on 2021-7-19 10:56:11
Install the Ceph repo

[root@controller ~]# yum install centos-release-ceph-octopus.noarch -y
[root@computer ~]# yum install centos-release-ceph-octopus.noarch -y
Install the Ceph components

[root@controller ~]# yum install cephadm -y
[root@computer ~]# yum install ceph -y
Install libvirt on the computer node

[root@computer ~]# yum install libvirt -y
Deploy the Ceph cluster
Create the cluster

[root@controller ~]# mkdir -p /etc/ceph
[root@controller ~]# cd /etc/ceph/
[root@controller ceph]# cephadm bootstrap --mon-ip 192.168.29.148
[root@controller ceph]# ceph status
[root@controller ceph]# cephadm install ceph-common
[root@controller ceph]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@computer
Modify the configuration

[root@controller ceph]# ceph config set mon public_network 192.168.29.0/24
Add hosts

[root@controller ceph]# ceph orch host add computer
[root@controller ceph]# ceph orch host ls
Initialize the cluster monitors

[root@controller ceph]# ceph orch host label add controller mon
[root@controller ceph]# ceph orch host label add computer mon
[root@controller ceph]# ceph orch apply mon label:mon
[root@controller ceph]# ceph orch daemon add mon computer:192.168.29.149
Create OSDs

[root@controller ceph]# ceph orch daemon add osd controller:/dev/nvme0n2
[root@controller ceph]# ceph orch daemon add osd computer:/dev/nvme0n3
Check the cluster status

[root@controller ceph]# ceph -s
Check the cluster capacity

[root@controller ceph]# ceph df
Create pools

[root@controller ceph]# ceph osd pool create volumes 64
[root@controller ceph]# ceph osd pool create vms 64

# Tag each pool with the application that will use it (these are RBD pools, so the application is rbd)
[root@controller ceph]# ceph osd pool application enable vms rbd
[root@controller ceph]# ceph osd pool application enable volumes rbd
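To confirm how a pool has been tagged (a minimal check):
[root@controller ceph]# ceph osd pool application get volumes
[root@controller ceph]# ceph osd pool ls detail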
Check the mon, osd and pool status

[root@controller ceph]# ceph mon stat
[root@controller ceph]# ceph osd status
[root@controller ceph]# ceph osd lspools
Check the pools

[root@controller ~]# rbd ls vms
[root@controller ~]# rbd ls volumes
OP | Posted on 2021-7-19 11:34:30
RUNNING THE BOOTSTRAP COMMAND
Run the ceph bootstrap command:

cephadm bootstrap --mon-ip *<mon-ip>*
This command will:

Create a monitor and manager daemon for the new cluster on the local host.

Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.

Write a copy of the public key to /etc/ceph/ceph.pub.

Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the new cluster.

Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.

Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.

FURTHER INFORMATION ABOUT CEPHADM BOOTSTRAP
The default bootstrap behavior will work for most users. But if you'd like immediately to know more about cephadm bootstrap, read the list below.

Also, you can run cephadm bootstrap -h to see all of cephadm's available options.

By default, Ceph daemons send their log output to stdout/stderr, which is picked up by the container runtime (docker or podman) and (on most systems) sent to journald. If you want Ceph to write traditional log files to /var/log/ceph/$fsid, use the --log-to-file option during bootstrap.

Larger Ceph clusters perform better when (external to the Ceph cluster) public network traffic is separated from (internal to the Ceph cluster) cluster traffic. The internal cluster traffic handles replication, recovery, and heartbeats between OSD daemons. You can define the cluster network by supplying the --cluster-network option to the bootstrap subcommand. This parameter must define a subnet in CIDR notation (for example 10.90.90.0/24 or fe80::/64).
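For instance (a sketch; both addresses are placeholders taken from other examples in this thread):
cephadm bootstrap --mon-ip 10.1.2.123 --cluster-network 10.90.90.0/24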

cephadm bootstrap writes to /etc/ceph the files needed to access the new cluster. This central location makes it possible for Ceph packages installed on the host (e.g., packages that give access to the cephadm command line interface) to find these files.
Daemon containers deployed with cephadm, however, do not need /etc/ceph at all. Use the --output-dir *<directory>* option to put them in a different directory (for example, .). This may help avoid conflicts with an existing Ceph configuration (cephadm or otherwise) on the same host.

You can pass any initial Ceph configuration options to the new cluster by putting them in a standard ini-style configuration file and using the --config *<config-file>* option. For example:
cat <<EOF > initial-ceph.conf
[global]
osd crush chooseleaf type = 0
EOF
./cephadm bootstrap --config initial-ceph.conf ...
The --ssh-user *<user>* option makes it possible to choose which ssh user cephadm will use to connect to hosts. The associated ssh key will be added to /home/*<user>*/.ssh/authorized_keys. The user that you designate with this option must have passwordless sudo access.
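For example (a sketch; "deploy" is a hypothetical non-root user that already has passwordless sudo on every host):
cephadm bootstrap --mon-ip *<mon-ip>* --ssh-user deploy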

If you are using a container on an authenticated registry that requires login, you may add the three arguments:

--registry-url <url of registry>

--registry-username <username of account on registry>

--registry-password <password of account on registry>

OR

--registry-json <json file with login info>

Cephadm will attempt to log in to this registry so it can pull your container and then store the login info in its config database. Other hosts added to the cluster will then also be able to make use of the authenticated registry.

ENABLE CEPH CLI
Cephadm does not require any Ceph packages to be installed on the host. However, we recommend enabling easy access to the ceph command. There are several ways to do this:

The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that the shell is fully functional. Note that when executed on a MON host, cephadm shell will infer the config from the MON container instead of using the default configuration. If --mount <path> is given, then the host <path> (file or directory) will appear under /mnt inside the container:

cephadm shell
To execute ceph commands, you can also run commands like this:

cephadm shell -- ceph -s
You can install the ceph-common package, which contains all of the ceph commands, including ceph, rbd, mount.ceph (for mounting CephFS file systems), etc.:

cephadm add-repo --release pacific
cephadm install ceph-common
Confirm that the ceph command is accessible with:

ceph -v
Confirm that the ceph command can connect to the cluster and also its status with:

ceph status
ADDING HOSTS
Next, add all hosts to the cluster by following Adding Hosts.

By default, a ceph.conf file and a copy of the client.admin keyring are maintained in /etc/ceph on all hosts with the _admin label, which is initially applied only to the bootstrap host. We usually recommend that one or more other hosts be given the _admin label so that the Ceph CLI (e.g., via cephadm shell) is easily accessible on multiple hosts. To add the _admin label to additional host(s):

ceph orch host label add *<host>* _admin
ADDING ADDITIONAL MONS
A typical Ceph cluster has three or five monitor daemons spread across different hosts. We recommend deploying five monitors if there are five or more nodes in your cluster.

Please follow Deploying additional monitors to deploy additional MONs.

ADDING STORAGE
To add storage to the cluster, either tell Ceph to consume any available and unused device:

ceph orch apply osd --all-available-devices
Or see Deploy OSDs for more detailed instructions.
OP | Posted on 2021-7-19 11:37:35
cat <<EOF > initial-ceph.conf
[global]
osd crush chooseleaf type = 0
EOF
./cephadm bootstrap --config initial-ceph.conf --mon-ip *<mon-ip>*

cephadm shell
OP | Posted on 2021-7-19 11:38:48
To deploy an iSCSI gateway, create a yaml file containing a service specification for iscsi:

service_type: iscsi
service_id: iscsi
placement:
  hosts:
    - host1
    - host2
spec:
  pool: mypool  # RADOS pool where ceph-iscsi config data is stored.
  trusted_ip_list: "IP_ADDRESS_1,IP_ADDRESS_2"
  api_port: ... # optional
  api_user: ... # optional
  api_password: ... # optional
  api_secure: true/false # optional
  ssl_cert: | # optional
    ...
  ssl_key: | # optional
    ...
For example:

service_type: iscsi
service_id: iscsi
placement:
  hosts:
  - [...]
spec:
  pool: iscsi_pool
  trusted_ip_list: "IP_ADDRESS_1,IP_ADDRESS_2,IP_ADDRESS_3,..."
  api_user: API_USERNAME
  api_password: API_PASSWORD
  api_secure: true
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDtTCCAp2gAwIBAgIYMC4xNzc1NDQxNjEzMzc2MjMyXzxvQ7EcMA0GCSqGSIb3
    DQEBCwUAMG0xCzAJBgNVBAYTAlVTMQ0wCwYDVQQIDARVdGFoMRcwFQYDVQQHDA5T
    [...]
    -----END CERTIFICATE-----
  ssl_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5jdYbjtNTAKW4
    /CwQr/7wOiLGzVxChn3mmCIF3DwbL/qvTFTX2d8bDf6LjGwLYloXHscRfxszX/4h
    [...]
    -----END PRIVATE KEY-----
The specification can then be applied using:

ceph orch apply -i iscsi.yaml
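After applying the spec, the new service and its daemons can be checked like any other orchestrator-managed service (a minimal check):
ceph orch ls | grep iscsi
ceph orch ps | grep iscsi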
OP | Posted on 2021-7-19 11:44:03
ceph orch apply mon --unmanaged

ceph orch daemon add mon newhost1:10.1.2.123

ceph orch daemon add mon newhost2:10.1.2.0/24

ceph orch apply mon "host1,host2,host3"
OP | Posted on 2021-7-19 13:50:11
Environment
IP address       Spec      Hostname    Ceph version
10.15.253.161    c2m8h300  cephnode01  Octopus 15.2.4
10.15.253.193    c2m8h300  cephnode02  Octopus 15.2.4
10.15.253.225    c2m8h300  cephnode03  Octopus 15.2.4

# Linux distribution version
[root@cephnode01 ~]# cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
[root@cephnode01 ~]# uname -r
4.18.0-193.14.2.el8_2.x86_64
# Network design: it is recommended to keep the networks separate
10.15.253.0/24   # Public Network
172.31.253.0/24  # Cluster Network
# Besides the system disk, each Ceph node should have at least two identical large-capacity disks attached; no partitioning is needed
[root@cephnode01 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0    1G  0 part [SWAP]
└─sda3   8:3    0 18.8G  0 part /
sdb      8:16   0   20G  0 disk
2.1.1 Ceph installation and version selection
https://docs.ceph.com/docs/master/install/
ceph-deploy is a tool for quickly deploying clusters; the community no longer actively maintains it. It only supports Ceph releases up to Nautilus and does not support RHEL 8, CentOS 8 or newer operating systems.
Since the environment here is CentOS 8, the cephadm deployment tool is used to deploy the Octopus release of Ceph.
2.1.2 Base environment preparation
Run on all Ceph nodes; cephnode01 is used as the example.
# (1) Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
# (2) Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
# (3) On cephnode01, configure passwordless SSH login to cephnode02 and cephnode03
dnf install sshpass -y
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
for ip in 161 193 225 ;do sshpass -pZxzn@2020 ssh-copy-id -o StrictHostKeyChecking=no 10.15.253.$ip ;done
# (4) On cephnode01, add the host names (skip if already configured):
cat >>/etc/hosts <<EOF
10.15.253.161 cephnode01
10.15.253.193 cephnode02
10.15.253.225 cephnode03
EOF
for ip in 193 225 ;do scp -rp /etc/hosts root@10.15.253.$ip:/etc/hosts ;done
# (5) Raise the maximum number of open files
echo "ulimit -SHn 102400" >> /etc/rc.local
cat >> /etc/security/limits.conf << EOF
* soft nofile 65535
* hard nofile 65535
EOF
# (6) Kernel parameter tuning
echo 'net.ipv4.ip_forward = 1' >>/etc/sysctl.conf
echo 'kernel.pid_max = 4194303' >>/etc/sysctl.conf
# Only fall back to swap when memory is nearly exhausted
echo "vm.swappiness = 0" >>/etc/sysctl.conf
sysctl -p
# (7) Sync network time and set the time zone (skip if already configured)
# Install chrony for time synchronization, syncing against the cephnode01 node
yum install chrony -y
vim /etc/chrony.conf
server cephnode01 iburst
---
systemctl restart chronyd.service
systemctl enable chronyd.service
chronyc sources
# (8) read_ahead: improve disk reads by prefetching data into memory
echo "8192" > /sys/block/sda/queue/read_ahead_kb
# (9) I/O scheduler: use noop (elevator) for SSDs and deadline for SATA/SAS
# (on blk-mq kernels such as CentOS 8's 4.18 kernel the equivalent schedulers are "none" and "mq-deadline")
# https://blog.csdn.net/shipeng1022/article/details/78604910
echo "deadline" >/sys/block/sda/queue/scheduler
echo "deadline" >/sys/block/sdb/queue/scheduler
#echo "noop" >/sys/block/sd[x]/queue/scheduler
3. Add the Octopus yum repo

# quote the heredoc delimiter so that $basearch is written literally for yum to expand
cat >>/etc/yum.repos.d/ceph.repo <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
EOF
yum clean all && yum makecache
# Install basic utilities
yum install net-tools wget vim bash-completion lrzsz unzip zip -y
4. Deploy with the cephadm tool
https://docs.ceph.com/docs/master/cephadm/install/
Starting with release 15, deployment with the cephadm tool is supported; ceph-deploy is only supported up through release 14.
4.1 Fetch the latest cephadm and make it executable
Configure on the cephnode01 node:

[root@cephnode01 ~]# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
[root@cephnode01 ~]# chmod +x cephadm
[root@cephnode01 ~]# ll
-rwxr-xr-x. 1 root root 184653 Sep 10 12:01 cephadm
4.2 Use cephadm to fetch and install the latest Octopus release
The domestic yum mirror has already been configured manually, so there is no need to add the repo again as in the official documentation.

# Install on all Ceph nodes
[root@cephnode01 ~]# dnf install python3 podman -y
[root@cephnode01 ~]# ./cephadm install
...
[root@cephnode01 ~]# which cephadm
/usr/sbin/cephadm
5. Create a new Ceph cluster
5.1 Designate the admin node
Create a network that any host accessing the Ceph cluster can reach, specify the mon IP, and write the generated configuration files into the /etc/ceph directory.

[root@cephnode01 ~]# mkdir -p /etc/ceph
[root@cephnode01 ~]# cephadm bootstrap --mon-ip 10.15.253.161
...
        URL: https://cephnode01:8443/
        User: admin
    Password: 6v7xazcbwk
...
You can log in at the URL https://cephnode01:8443/; on the first login you must change the password, then verify access.

5.2 Map the ceph command to the local host
Cephadm does not require any Ceph packages to be installed on the host, but it is recommended to enable easy access to the ceph command.
The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that the shell is fully functional.

[root@cephnode01 ~]# cephadm shell
[root@cephnode01 ~]# alias ceph='cephadm shell -- ceph'
[root@cephnode01 ~]# exit
# Install the ceph-common package, which provides the ceph, rbd and mount.ceph commands
[root@cephnode01 ~]# cephadm install ceph-common
# Check the version
[root@cephnode01 ~]# ceph -v
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
Check the status

[root@cephnode01 ~]# ceph status
  cluster:
    id:     8a4fdb4e-f31c-11ea-be33-000c29358c7a
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph135 (age 14m)
    mgr: ceph03.oesega(active, since 10m)
    osd: 0 osds: 0 up (since 31m), 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown
5.3 Add new nodes to the Ceph cluster

[root@cephnode01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@cephnode02
[root@cephnode01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@cephnode03
[root@cephnode01 ~]# ceph orch host add cephnode02
Added host 'cephnode02'
[root@cephnode01 ~]# ceph orch host add cephnode03
Added host 'cephnode03'
5.4 Deploy additional monitors
Select the nodes that should run a mon; here all of them are selected.

[root@cephnode01 ~]# ceph orch host label add cephnode01 mon
Added label mon to host cephnode01
[root@cephnode01 ~]# ceph orch host label add cephnode02 mon
Added label mon to host cephnode02
[root@cephnode01 ~]# ceph orch host label add cephnode03 mon
Added label mon to host cephnode03
[root@cephnode01 ~]# ceph orch host ls
HOST        ADDR        LABELS  STATUS
cephnode01  cephnode01  mon
cephnode02  cephnode02  mon
cephnode03  cephnode03  mon
Tell cephadm to deploy mons according to the label; this step waits while each node pulls the images and starts the containers.

[root@cephnode01 ~]# ceph orch apply mon label:mon
To verify that the installation completed, check the other two nodes:

[root@cephnode02 ~]# podman ps -a
...
[root@cephnode02 ~]# podman images
REPOSITORY                     TAG       IMAGE ID       CREATED         SIZE
docker.io/ceph/ceph            v15       852b28cb10de   3 weeks ago     1 GB
docker.io/prom/node-exporter   v0.18.1   e5a616e4b9cf   15 months ago   24.3 MB
6. Deploy OSDs
6.1 View the usable disks
[root@cephnode01 ~]# ceph orch device ls
HOST    PATH      TYPE   SIZE  DEVICE  AVAIL  REJECT REASONS
ceph01  /dev/sda  hdd   20.0G          False  locked, Insufficient space (<5GB) on vgs, LVM detected
ceph01  /dev/sdb  hdd   20.0G          True
ceph02  /dev/sda  hdd   20.0G          False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph02  /dev/sdb  hdd   20.0G          True
ceph03  /dev/sda  hdd   20.0G          False  locked, Insufficient space (<5GB) on vgs, LVM detected
ceph03  /dev/sdb  hdd   20.0G          True
6.2 Use all available disks

[root@cephnode01 ~]# ceph orch apply osd --all-available-devices
To add a single disk instead:

[root@cephnode01 ~]# ceph orch daemon add osd cephnode02:/dev/sdc
6.3 Verify the deployment

[root@cephnode01 ~]# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE    RAW USE  DATA     OMAP     META      AVAIL   %USE  VAR   PGS  STATUS
 0    hdd  0.01949   1.00000  20 GiB  1.0 GiB  3.8 MiB    1 KiB  1024 MiB  19 GiB  5.02  1.00    1      up
 1    hdd  0.01949   1.00000  20 GiB  1.0 GiB  3.8 MiB    1 KiB  1024 MiB  19 GiB  5.02  1.00    1      up
 2    hdd  0.01949   1.00000  20 GiB  1.0 GiB  3.8 MiB    1 KiB  1024 MiB  19 GiB  5.02  1.00    1      up
                       TOTAL  60 GiB  3.0 GiB   11 MiB  4.2 KiB   3.0 GiB  57 GiB  5.02
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
7. Storage deployment
7.1 CephFS deployment
Deploy the mds service for CephFS, specifying the file system name and the number of mds daemons:

[root@cephnode01 ~]# ceph orch apply mds fs-cluster --placement=3
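The MDS daemons will stay in standby until a file system exists. With the newer volume interface mentioned earlier in this thread, the data/metadata pools and the file system can be created in a single step (a sketch; "cephfs" is just an example name):
[root@cephnode01 ~]# ceph fs volume create cephfs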
[root@cephnode01 ~]# ceph -s
  cluster:
    id:     8a4fdb4e-f31c-11ea-be33-000c29358c7a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephnode01,cephnode02,cephnode03 (age 1m)
    mgr: cephnode01.oesega(active, since 49m), standbys: cephnode02.lphrtb, cephnode03.wkthtb
    mds:  3 up:standby
    osd: 3 osds: 3 up (since 51m), 3 in (since 30m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean
7.2 Deploy RGW
Create a realm:

[root@cephnode01 ~]# radosgw-admin realm create --rgw-realm=rgw-org --default
{
    "id": "43dc34c0-6b5b-411c-9e23-687a29c8bd00",
    "name": "rgw-org",
    "current_period": "ea3dd54c-2dfe-4180-bf11-4415be6ccafd",
    "epoch": 1
}
Create a zonegroup:

[root@cephnode01 ~]# radosgw-admin zonegroup create --rgw-zonegroup=rgwgroup --master --default
{
    "id": "1878ecaa-216b-4c99-ad4e-b72f4fa9193f",
    "name": "rgwgroup",
    "api_name": "rgwgroup",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [],
    "placement_targets": [],
    "default_placement": "",
    "realm_id": "43dc34c0-6b5b-411c-9e23-687a29c8bd00",
    "sync_policy": {
        "groups": []
    }
}
Create a zone:

[root@cephnode01 ~]# radosgw-admin zone create --rgw-zonegroup=rgwgroup --rgw-zone=zone-dc1 --master --default
{
    "id": "fbdc5f83-9022-4675-b98e-39738920bb57",
    "name": "zone-dc1",
    "domain_root": "zone-dc1.rgw.meta:root",
    "control_pool": "zone-dc1.rgw.control",
    "gc_pool": "zone-dc1.rgw.log:gc",
    "lc_pool": "zone-dc1.rgw.log:lc",
    "log_pool": "zone-dc1.rgw.log",
    "intent_log_pool": "zone-dc1.rgw.log:intent",
    "usage_log_pool": "zone-dc1.rgw.log:usage",
    "roles_pool": "zone-dc1.rgw.meta:roles",
    "reshard_pool": "zone-dc1.rgw.log:reshard",
    "user_keys_pool": "zone-dc1.rgw.meta:users.keys",
    "user_email_pool": "zone-dc1.rgw.meta:users.email",
    "user_swift_pool": "zone-dc1.rgw.meta:users.swift",
    "user_uid_pool": "zone-dc1.rgw.meta:users.uid",
    "otp_pool": "zone-dc1.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "zone-dc1.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "zone-dc1.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "zone-dc1.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "realm_id": "43dc34c0-6b5b-411c-9e23-687a29c8bd00"
}
Deploy a set of radosgw daemons for this realm and zone; here only two nodes run rgw:

[root@cephnode01 ~]# ceph orch apply rgw rgw-org zone-dc1 --placement="2 cephnode02 cephnode03"
Verify:

[root@cephnode01 ~]# ceph -s
  cluster:
    id:     8a4fdb4e-f31c-11ea-be33-000c29358c7a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephnode01,cephnode02,cephnode03 (age 1m)
    mgr: cephnode01.oesega(active, since 49m), standbys: cephnode02.lphrtb, cephnode03.wkthtb
    mds:  3 up:standby
    osd: 3 osds: 3 up (since 51m), 3 in (since 30m)
    rgw: 2 daemons active (rgw-org.zone-dc1.cephnode02.cdgjsi, rgw-org.zone-dc1.cephnode03.nmbbsz)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean
Enable the dashboard for RGW

# Create an RGW admin user
[root@cephnode01 ~]# radosgw-admin user create --uid=admin --display-name=admin --system
{
    "user_id": "admin",
    "display_name": "admin",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "admin",
            "access_key": "WG9W5O9O11TGGOLU6OD2",
            "secret_key": "h2DfrWvlS4NMkdgGin4g6OB6Z50F1VNmhRCRQo3W"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "system": "true",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
Set the dashboard credentials

[root@cephnode01 ~]# ceph dashboard set-rgw-api-access-key WG9W5O9O11TGGOLU6OD2
Option RGW_API_ACCESS_KEY updated
[root@cephnode01 ~]# ceph dashboard set-rgw-api-secret-key h2DfrWvlS4NMkdgGin4g6OB6Z50F1VNmhRCRQo3W
Option RGW_API_SECRET_KEY updated
Disable certificate verification, use HTTP access, and use the admin account

ceph dashboard set-rgw-api-ssl-verify False
ceph dashboard set-rgw-api-scheme http
ceph dashboard set-rgw-api-host 10.15.253.225
ceph dashboard set-rgw-api-port 80
ceph dashboard set-rgw-api-user-id admin
Restart the RGW service

ceph orch restart rgw