
Ceph Installation and Deployment




I. Node Specifications

[The node specification table was an image in the original; the recoverable details are: ceph01 = 192.168.13.47/24, ceph02 = 192.168.13.48/24, ceph03 = 192.168.13.49/24, each node with a spare disk /dev/sdb of roughly 20 GB for its OSD, inferred from the 60 GiB total in section XII.]
Ceph official site: ceph.com; deployment method: ceph-deploy

II. Change the Hostnames

1. Set the hostname on ceph01:

[root@localhost ~]# hostnamectl set-hostname ceph01 
[root@localhost ~]# su 
[root@ceph01 ~]#

2. Set the hostname on ceph02:

[root@localhost ~]# hostnamectl set-hostname ceph02 
[root@localhost ~]# su 
[root@ceph02 ~]#

3. Set the hostname on ceph03:

[root@localhost ~]# hostnamectl set-hostname ceph03 
[root@localhost ~]# su 
[root@ceph03 ~]#
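
An optional check, not part of the original walkthrough: confirm that the new name was stored persistently (hostnamectl writes it to /etc/hostname):

hostnamectl status | grep 'Static hostname'
cat /etc/hostname    # should print this node's name, e.g. ceph03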

III. Configure the Network

ceph01: 192.168.13.47/24
ceph02: 192.168.13.48/24
ceph03: 192.168.13.49/24
The commands below switch ens33 to a manually assigned address (ipv4.method manual). If the connection does not come back up on its own after a network restart, activate it again with nmcli connection up ens33; a helper sketch after the ceph03 output below shows one way to avoid that.

1. Configure the network on ceph01:

[root@ceph01 ~]# nmcli connection show 
NAME  UUID                                 TYPE     DEVICE 
ens33 e2d80065-81bc-42b0-8a36-730ae6da1112 ethernet ens33
[root@ceph01 ~]# nmcli connection modify ens33 ipv4.method manual 
[root@ceph01 ~]# nmcli connection modify ens33 ipv4.addr 192.168.13.47/24 
[root@ceph01 ~]# nmcli connection modify ens33 ipv4.gateway 192.168.13.2 
[root@ceph01 ~]# nmcli connection modify ens33 ipv4.dns 114.114.114.114 
[root@ceph01 ~]# nmcli connection up ens33 
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)
[root@ceph01 ~]# ip a 
	1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 
	link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 
	inet 127.0.0.1/8 scope host lo 
		valid_lft forever preferred_lft forever 
	inet6 ::1/128 scope host 
		valid_lft forever preferred_lft forever 
	2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 
	link/ether 00:0c:29:37:87:2b brd ff:ff:ff:ff:ff:ff 
	inet 192.168.13.47/24 brd 192.168.13.255 scope global noprefixroute ens33 
		valid_lft forever preferred_lft forever 
	inet6 fe80::b7bb:bd87:8730:25d/64 scope link noprefixroute 
		valid_lft forever preferred_lft forever 
[root@ceph01 ~]#

2. Configure the network on ceph02:

[root@ceph02 ~]# nmcli connection show 
NAME  UUID                                 TYPE     DEVICE 
ens33 e2d80065-81bc-42b0-8a36-730ae6da1112 ethernet ens33
[root@ceph02 ~]# nmcli connection modify ens33 ipv4.method manual 
[root@ceph02 ~]# nmcli connection modify ens33 ipv4.addr 192.168.13.48/24
[root@ceph02 ~]# nmcli connection modify ens33 ipv4.gateway 192.168.13.2 
[root@ceph02 ~]# nmcli connection modify ens33 ipv4.dns 114.114.114.114 
[root@ceph02 ~]# nmcli connection up ens33
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)
[root@ceph02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
	link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
	inet 127.0.0.1/8 scope host lo
		valid_lft forever preferred_lft forever
	inet6 ::1/128 scope host
		valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
	link/ether 00:0c:29:73:be:85 brd ff:ff:ff:ff:ff:ff
	inet 192.168.13.48/24 brd 192.168.13.255 scope global noprefixroute ens33 
		valid_lft forever preferred_lft forever 
	inet6 fe80::b7bb:bd87:8730:25d/64 scope link tentative noprefixroute dadfailed
		valid_lft forever preferred_lft forever 
	inet6 fe80::b8c:bc91:3125:2e93/64 scope link noprefixroute
		valid_lft forever preferred_lft forever
[root@ceph02 ~]#

3. Configure the network on ceph03:

[root@ceph03 ~]# nmcli connection show 
NAME  UUID                                 TYPE     DEVICE 
ens33 e2d80065-81bc-42b0-8a36-730ae6da1112 ethernet ens33 
[root@ceph03 ~]# nmcli connection modify ens33 ipv4.method manual 
[root@ceph03 ~]# nmcli connection modify ens33 ipv4.addr 192.168.13.49/24
[root@ceph03 ~]# nmcli connection modify ens33 ipv4.gateway 192.168.13.2 
[root@ceph03 ~]# nmcli connection modify ens33 ipv4.dns 114.114.114.114 
[root@ceph03 ~]# nmcli connection up ens33 
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)
[root@ceph03 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
	link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 
	inet 127.0.0.1/8 scope host lo 
		valid_lft forever preferred_lft forever 
	inet6 ::1/128 scope host 
		valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
	link/ether 00:0c:29:4f:a8:91 brd ff:ff:ff:ff:ff:ff 
	inet 192.168.13.49/24 brd 192.168.13.255 scope global noprefixroute ens33 
		valid_lft forever preferred_lft forever 
	inet6 fe80::b7bb:bd87:8730:25d/64 scope link tentative noprefixroute dadfailed
		valid_lft forever preferred_lft forever 
	inet6 fe80::b8c:bc91:3125:2e93/64 scope link tentative noprefixroute dadfailed
		valid_lft forever preferred_lft forever 
	inet6 fe80::1e35:965f:5370:e430/64 scope link noprefixroute 
		valid_lft forever preferred_lft forever 
[root@ceph03 ~]#
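
If you would rather not retype the four modify commands on every node, a small helper along these lines captures the same settings; the script name set-net.sh is arbitrary, and connection.autoconnect is an extra setting (not in the original commands) so the interface comes back by itself after a network restart. Run it on each node with that node's own address, e.g. ./set-net.sh 192.168.13.49:

#!/bin/sh
# set-net.sh -- hypothetical helper; the single argument is this node's IP address
ADDR="$1"
nmcli connection modify ens33 \
    ipv4.method manual \
    ipv4.addresses "${ADDR}/24" \
    ipv4.gateway 192.168.13.2 \
    ipv4.dns 114.114.114.114 \
    connection.autoconnect yes
# bring the connection up with the new settings
nmcli connection up ens33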

IV. Disable the Firewall

Disable the firewall on ceph01 (likewise on ceph02 and ceph03):

[root@ceph01 ~]# systemctl stop firewalld 
[root@ceph01 ~]# systemctl disable firewalld 
[root@ceph01 ~]# systemctl status firewalld 
● firewalld.service - firewalld - dynamic firewall daemon
	Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled) 
	Active: inactive (dead) 
		Docs: man:firewalld(1) 

Dec 03 18:34:25 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon... 
Dec 03 18:34:26 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon. 
Dec 03 18:36:48 localhost.localdomain systemd[1]: Stopping firewalld - dynamic firewall daemon... 
Dec 03 18:36:48 localhost.localdomain systemd[1]: Stopped firewalld - dynamic firewall daemon. 
[root@ceph01 ~]#
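
To double-check that the service is stopped now and will stay off after a reboot:

systemctl is-active firewalld     # expect: inactive
systemctl is-enabled firewalld    # expect: disabled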

V. Configure Local Name Resolution

Configure ceph01 (likewise on ceph02 and ceph03):

[root@ceph01 ~]# vi /etc/hosts 
[root@ceph01 ~]# cat /etc/hosts 
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 
192.168.13.47 ceph01 
192.168.13.48 ceph02 
192.168.13.49 ceph03 
[root@ceph01 ~]#
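
Before relying on these names (ceph-deploy addresses the nodes by them), it is worth verifying that they resolve:

getent hosts ceph01 ceph02 ceph03    # should print the three entries from /etc/hosts
ping -c 1 ceph02                     # quick reachability check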

VI. Configure Passwordless SSH Login

Configure ceph01 (likewise on ceph02 and ceph03):

  • Generate an RSA public/private key pair:
[root@ceph01 ~]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:pL7PSgeEzh7W2mPAKsrtpfmJ0G/MwYo4pqE6DhdqKI4 root@ceph01
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|     .           |
|    . . .        |
|   + o o         |
|  . O + S        |
|...= O .         |
|Oo+.*.B .        |
|&*o.**.=         |
|E*.*o+ooo        |
+----[SHA256]-----+
[root@ceph01 ~]# ls .ssh/
id_rsa id_rsa.pub
[root@ceph01 ~]#
  • Copy the public key to all three nodes
[root@ceph01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@ceph01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host 'ceph01 (192.168.13.47)' can't be established.
ECDSA key fingerprint is SHA256:O2UIVl6/sJEnLAljn6wNjyo+1h+92mCCet7cYbF8W8w.
ECDSA key fingerprint is MD5:0a:98:47:06:ab:aa:f8:51:74:d2:dd:30:23:f6:2c:bd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph01's password:
Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@ceph01'" and check to make sure that only the key(s) you wanted were added.

[root@ceph01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@ceph02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host 'ceph02 (192.168.13.48)' can't be established.
ECDSA key fingerprint is SHA256:O2UIVl6/sJEnLAljn6wNjyo+1h+92mCCet7cYbF8W8w.
ECDSA key fingerprint is MD5:0a:98:47:06:ab:aa:f8:51:74:d2:dd:30:23:f6:2c:bd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph02's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@ceph02'" and check to make sure that only the key(s) you wanted were added.

[root@ceph01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@ceph03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host 'ceph03 (192.168.13.49)' can't be established.
ECDSA key fingerprint is SHA256:O2UIVl6/sJEnLAljn6wNjyo+1h+92mCCet7cYbF8W8w.
ECDSA key fingerprint is MD5:0a:98:47:06:ab:aa:f8:51:74:d2:dd:30:23:f6:2c:bd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph03's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@ceph03'" and check to make sure that only the key(s) you wanted were added.

[root@ceph01 ~]#
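
The three ssh-copy-id runs above can also be written as one loop; you are still prompted once for each node's root password, and afterwards each ssh call should print the remote hostname without asking for anything:

for host in ceph01 ceph02 ceph03; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "root@${host}"
done
# verify passwordless login works on all three nodes:
for host in ceph01 ceph02 ceph03; do
    ssh "root@${host}" hostname
done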

VII. Configure Repositories

Configure ceph01 (likewise on ceph02 and ceph03):

  • Configure the base repository
[root@ceph01 ~]# mkdir /etc/yum.repos.d/bak 
[root@ceph01 ~]# mv /etc/yum.repos.d/* /etc/yum.repos.d/bak/ 
mv: cannot move '/etc/yum.repos.d/bak' to a subdirectory of itself, '/etc/yum.repos.d/bak/bak'
[root@ceph01 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.huaweicloud.com/repository/conf/CentOS-7-anon.repo
--2020-12-03 19:25:15--  https://mirrors.huaweicloud.com/repository/conf/CentOS-7-anon.repo
Resolving mirrors.huaweicloud.com (mirrors.huaweicloud.com)... 117.78.24.34, 117.78.24.42, 117.78.24.40
Connecting to mirrors.huaweicloud.com (mirrors.huaweicloud.com)|117.78.24.34|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]
Saving to: '/etc/yum.repos.d/CentOS-Base.repo'

	[ <=>                                              ] 1,811       --.-K/s   in 0s

2020-12-03 19:25:15 (233 MB/s) - '/etc/yum.repos.d/CentOS-Base.repo' saved [1811]

[root@ceph01 ~]#

The mv error above ("cannot move ... to a subdirectory of itself") can be ignored: the repo files under /etc/yum.repos.d/ have in fact been moved into /etc/yum.repos.d/bak.

  • Before the move
[root@ceph01 ~]# cd /etc/yum.repos.d/ 
[root@ceph01 yum.repos.d]# ls 
bak               CentOS-CR.repo          CentOS-fasttrack.repo   CentOS-Sources.repo 
CentOS-Base.repo  CentOS-Debuginfo.repo   CentOS-Media.repo       CentOS-Vault.repo 
[root@ceph01 yum.repos.d]#
  • After the move
[root@ceph01 yum.repos.d]# ls bak 
[root@ceph01 yum.repos.d]# cd bak/ 
[root@ceph01 bak]# ls 
CentOS-Base.repo    CentOS-Debuginfo.repo   CentOS-Media.repo   CentOS-Vault.repo   
CentOS-CR.repo      CentOS-fasttrack.repo   CentOS-Sources.repo 
[root@ceph01 bak]#
  • Configure the EPEL repository
[root@ceph01 ~]# vi /etc/yum.repos.d/epel.repo 
[root@ceph01 ~]# cat /etc/yum.repos.d/epel.repo 
[epel] 
name=epel_huaweicloud 
baseurl=https://mirrors.huaweicloud.com/epel/7/x86_64/ 
gpgcheck=0 
enabled=1 
[root@ceph01 ~]#
  • Configure the Ceph repository
[root@ceph01 ~]# vi /etc/yum.repos.d/ceph.repo 
[root@ceph01 ~]# cat /etc/yum.repos.d/ceph.repo 
[ceph] 
name=ceph_huaweicloud 
baseurl=https://mirrors.huaweicloud.com/ceph/rpm-15.2.4/el7/x86_64/
gpgcheck=0 
enabled=1 
[ceph-noarch] 
name=ceph-noarch_huaweicloud 
baseurl=https://mirrors.huaweicloud.com/ceph/rpm-15.2.4/el7/noarch/ 
gpgcheck=0 
enabled=1 
[root@ceph01 ~]# 
[root@ceph01 ~]# yum clean all
Loaded plugins: fastestmirror
Cleaning repos: base ceph ceph-noarch epel extras updates
Cleaning up list of fastest mirrors
[root@ceph01 ~]# yum makecache
Loaded plugins: fastestmirror
Determining fastest mirrors
base                                                     | 3.6 kB  00:00:00
~~ progress output omitted ~~
Metadata Cache Created
[root@ceph01 ~]#
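
Once ceph01's repo files look right, they can be pushed to the other two nodes instead of editing each file again by hand. This sketch uses the passwordless SSH from section VI and assumes ceph02/ceph03 have likewise moved their original repo files out of the way:

for host in ceph02 ceph03; do
    scp /etc/yum.repos.d/{CentOS-Base,epel,ceph}.repo "root@${host}:/etc/yum.repos.d/"
    ssh "root@${host}" "yum clean all && yum makecache"
done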

VIII. Configure NTP

1. On the ceph01 node:

[root@ceph01 ~]# vi /etc/chrony.conf 
[root@ceph01 ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project. 
# Please consider joining the pool (http://www.pool.ntp.org/join.html). 
server ntp.aliyun.com iburst 
# Record the rate at which the system clock gains/losses time. 
driftfile /var/lib/chrony/drift 
# Allow the system clock to be stepped in the first three updates 
# if its offset is larger than 1 second. 
makestep 1.0 3 
# Enable kernel synchronization of the real-time clock (RTC). 
rtcsync 
# Enable hardware timestamping on all interfaces that support it. 
#hwtimestamp * 
# Increase the minimum number of selectable sources required to adjust 
# the system clock. 
minsources 2 
# Allow NTP client access from local network. 
allow 192.168.13.0/24 
# Serve time even if not synchronized to a time source. 
#local stratum 10 
[root@ceph01 ~]# timedatectl set-timezone Asia/Shanghai 
[root@ceph01 ~]# chronyc sources -v 
210 Number of sources = 1

  .-- Source mode '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined, 
| /    '?' = unreachable, 'x' = time may be in error, '~' = time too variable. 
||                                                  .- xxxx [ yyyy ] +/- zzzz 
||        Reachability register (octal) -.          | xxxx = adjusted offset,
||        Log2(Polling interval) --.      |         | yyyy = measured offset, 
||                                   \    |         | zzzz = estimated error. 
||                                    |   |          \ 
MS Name/IP address           Stratum Poll Reach LastRx Last sample 
===============================================================================

^- 203.107.6.88                    2 6 17 33 +86399s[+86399s] +/-   31ms 
[root@ceph01 ~]#

2. On the ceph02 node:

[root@ceph02 ~]# vi /etc/chrony.conf 
[root@ceph02 ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project. 
# Please consider joining the pool (http://www.pool.ntp.org/join.html). 
server 192.168.13.47 iburst 
# Record the rate at which the system clock gains/losses time. 
driftfile /var/lib/chrony/drift 
# Allow the system clock to be stepped in the first three updates 
# if its offset is larger than 1 second. 
makestep 1.0 3 
# Enable kernel synchronization of the real-time clock (RTC). 
rtcsync 
# Enable hardware timestamping on all interfaces that support it. 
#hwtimestamp * 

# Increase the minimum number of selectable sources required to adjust 
# the system clock. 
#minsources 2 

# Allow NTP client access from local network. 
#allow 192.168.0.0/16 

# Serve time even if not synchronized to a time source. 
#local stratum 10 

# Specify file containing keys for NTP authentication. 
#keyfile /etc/chrony.keys 
[root@ceph02 ~]# 
[root@ceph02 ~]# timedatectl set-timezone Asia/Shanghai 
[root@ceph02 ~]# chronyc sources -v 
210 Number of sources = 4

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address           Stratum Poll Reach LastRx Last sample
===============================================================================
^- electrode.felixc.at 3 6 67 31 +1823us[+1823us] +/- 141ms 
^- tock.ntp.infomaniak.ch 1 6 67 31 +45ms[ +45ms] +/- 145ms 
^- 203.107.6.88 2 6 37 34 +268us[ +268us] +/- 36ms 
^* 119.28.183.184 2 6 37 35 +84us[-5223us] +/- 44ms 
[root@ceph02 ~]#

3. On the ceph03 node:

[root@ceph03 ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project. 
# Please consider joining the pool (http://www.pool.ntp.org/join.html). 
server 192.168.13.47 iburst 
# Record the rate at which the system clock gains/losses time. 
driftfile /var/lib/chrony/drift 
# Allow the system clock to be stepped in the first three updates 
# if its offset is larger than 1 second. 
makestep 1.0 3 
# Enable kernel synchronization of the real-time clock (RTC). 
rtcsync 
# Enable hardware timestamping on all interfaces that support it. 
#hwtimestamp * 

# Increase the minimum number of selectable sources required to adjust 
# the system clock. 
#minsources 2 

# Allow NTP client access from local network. 
#allow 192.168.0.0/16 

# Serve time even if not synchronized to a time source. 
#local stratum 10

# Specify file containing keys for NTP authentication. 
#keyfile /etc/chrony.keys 
[root@ceph03 ~]# 
[root@ceph03 ~]# timedatectl set-timezone Asia/Shanghai 
[root@ceph03 ~]# chronyc sources -v 
210 Number of sources = 4

  .-- Source mode '^' = server, '=' = peer, '#' = local clock. 
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined, 
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable. 
||                                                 .- xxxx [ yyyy ] +/- zzzz 
||       Reachability register (octal) -.          | xxxx = adjusted offset, 
||       Log2(Polling interval) --.      |         | yyyy = measured offset, 
||                                 \     |         | zzzz = estimated error. 
||                                  |    |          \ 
MS Name/IP address           Stratum Poll Reach LastRx Last sample
===============================================================================
^? a.chl.la                        2 6 3 12 -1593us[-1593us] +/- 136ms 
^* ntp.xtom.nl 2 6 77 15 -838us[+1093us] +/- 132ms 
^? ntp6.flashdance.cx 2 7 6 136 +22ms[+6903us] +/- 234ms 
^- tock.ntp.infomaniak.ch 1 6 155 4 -7759us[-7759us] +/- 118ms 
[root@ceph03 ~]#
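
Edits to /etc/chrony.conf only take effect after the daemon is restarted; notice that the ceph02/ceph03 output above still lists public pool servers rather than 192.168.13.47, which is the usual symptom of a not-yet-restarted chronyd. On each node, after editing the file:

systemctl restart chronyd
systemctl enable chronyd
chronyc sources     # ceph02/ceph03 should now show 192.168.13.47 as their source
chronyc tracking    # offset and stratum relative to the selected source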

IX. Install Ceph

The same procedure applies on ceph01, ceph02, and ceph03:

[root@ceph01 ~]# yum install -y ceph 
Loaded plugins: fastestmirror 
Loading mirror speeds from cached hostfile 
Resolving Dependencies 
--> Running transaction check 
---> Package ceph.x86_64 2:15.2.4-0.el7 will be installed
--> Processing Dependency: ceph-mon = 2:15.2.4-0.el7 for package: 2:ceph-15.2.4-0.el7.x86_64
--> Processing Dependency: ceph-osd = 2:15.2.4-0.el7 for package: 2:ceph-15.2.4-0.el7.x86_64
--> Processing Dependency: ceph-mds = 2:15.2.4-0.el7 for package: 2:ceph-15.2.4-0.el7.x86_64
--> Processing Dependency: ceph-mgr = 2:15.2.4-0.el7 for package: 2:ceph-15.2.4-0.el7.x86_64
--> Running transaction check 
—————————————————————————————————————— output omitted —————————————————————————————————————— 

Updated as a dependency: 
	cryptsetup-libs.x86_64 0:2.0.3-6.el7


Complete!
[root@ceph01 ~]#
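
With passwordless SSH already configured, the same installation can be driven from ceph01 for all three nodes in one loop, with a version check at the end:

for host in ceph01 ceph02 ceph03; do
    ssh "root@${host}" "yum install -y ceph && ceph --version"
done
# each node should report ceph version 15.2.4 (octopus)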

X. Install the Ceph Deployment Tool

This only needs to be installed on the ceph01 node:

[root@ceph01 ~]# yum install python-setuptools ceph-deploy -y 
Loaded plugins: fastestmirror 
Loading mirror speeds from cached hostfile 
Resolving Dependencies 
—————————————————————————————————————— output omitted —————————————————————————————————————— 

Installed: 
	ceph-deploy.noarch 0:2.0.1-0          python-setuptools.noarch 0:0.9.8-7.el7 

Installed as a dependency: 
	python-backports.x86_64 0:1.0-8.el7   python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 
	python-ipaddress.noarch 0:1.0.16-2.el7 

Complete! 
[root@ceph01 ~]#
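
A quick sanity check that the deploy tool is usable before continuing:

ceph-deploy --version    # expect: 2.0.1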

XI. Deploy Ceph

Deploy a new Ceph cluster, with MONs on ceph01, ceph02, and ceph03 (operate only on ceph01):

[root@ceph01 ~]# ceph-deploy new ceph01 ceph02 ceph03 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf 
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph01 ceph02 ceph03 
[ceph_deploy.cli][INFO ] ceph-deploy options: 
—————————————————————————————————————— output omitted —————————————————————————————————————— 
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph01', 'ceph02', 'ceph03'] 
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.13.47', '192.168.13.48', '192.168.13.49'] 
[ceph_deploy.new][DEBUG ] Creating a random mon key... 
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring... 
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf... 
[root@ceph01 ~]#
[root@ceph01 ~]# ceph-deploy mon create-initial 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf 
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial 
—————————————————————————————————————— output omitted —————————————————————————————————————— 
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring 
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring 
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring 
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists 
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring 
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring 
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpaX3Qjj 
[root@ceph01 ~]#
[root@ceph01 ~]# ceph-deploy admin ceph01 ceph02 ceph03 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf 
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph01 ceph02 ceph03 
[ceph_deploy.cli][INFO ] ceph-deploy options: 
[ceph_deploy.cli][INFO ] username                       : None 
[ceph_deploy.cli][INFO ] verbose                        : False 
[ceph_deploy.cli][INFO ] overwrite_conf                 : False 
[ceph_deploy.cli][INFO ] quiet                          : False 
[ceph_deploy.cli][INFO ] cd_conf                        : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fde42f704d0> 
[ceph_deploy.cli][INFO ] cluster                        : ceph 
[ceph_deploy.cli][INFO ] client                         : ['ceph01', 'ceph02', 'ceph03'] 
[ceph_deploy.cli][INFO ] func                           : <function admin at 0x7fde43a99230> 
[ceph_deploy.cli][INFO ] ceph_conf                      : None 
[ceph_deploy.cli][INFO ] default_release                : False 
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph01 
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host 
[ceph01][DEBUG ] detect machine type 
[ceph01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf 
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph02 
[ceph02][DEBUG ] connected to host: ceph02
[ceph02][DEBUG ] detect platform information from remote host 
[ceph02][DEBUG ] detect machine type 
[ceph02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf 
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph03 
[ceph03][DEBUG ] connected to host: ceph03 
[ceph03][DEBUG ] detect platform information from remote host 
[ceph03][DEBUG ] detect machine type 
[ceph03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf 
[root@ceph01 ~]#
[root@ceph01 ~]# ceph-deploy osd create --data /dev/sdb ceph01 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf 
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb ceph01 
—————————————————————————————————————— output omitted —————————————————————————————————————— 
[ceph01][INFO ] checking OSD status... 
[ceph01][DEBUG ] find the location of an executable 
[ceph01][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json 
[ceph_deploy.osd][DEBUG ] Host ceph01 is now ready for osd use. 
[root@ceph01 ~]# 
[root@ceph01 ~]# ceph-deploy osd create --data /dev/sdb ceph02 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf 
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb ceph02 
—————————————————————————————————————— output omitted —————————————————————————————————————— 
[ceph02][INFO ] checking OSD status... 
[ceph02][DEBUG ] find the location of an executable 
[ceph02][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json 
[ceph_deploy.osd][DEBUG ] Host ceph02 is now ready for osd use. 
[root@ceph01 ~]# 
[root@ceph01 ~]# ceph-deploy osd create --data /dev/sdb ceph03 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf 
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb ceph03 
—————————————————————————————————————— output omitted —————————————————————————————————————— 
[ceph03][INFO ] checking OSD status... 
[ceph03][DEBUG ] find the location of an executable 
[ceph03][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json 
[ceph_deploy.osd][DEBUG ] Host ceph03 is now ready for osd use. 
[root@ceph01 ~]#
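
One gap worth flagging: the health output in the next section shows an active mgr on ceph01 with standbys on ceph02 and ceph03, yet no mgr creation step appears above. With ceph-deploy 2.0.1 that step is normally explicit, along these lines (run on ceph01):

ceph-deploy mgr create ceph01 ceph02 ceph03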

XII. Test (Ceph Health)

[root@ceph01 ~]# ceph -s 
	cluster: 
		id: 	57cdd1ff-f2a0-471f-b867-0f9356c143d5 
		health: HEALTH_OK 

	services: 
		mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 2m)
		mgr: ceph01(active, since 81s), standbys: ceph02, ceph03 
		osd: 3 osds: 3 up (since 6s), 3 in (since 6s) 

	data: 
		pools: 	 1 pools, 1 pgs 
		objects: 0 objects, 0 B 
		usage: 	 3.0 GiB used, 57 GiB / 60 GiB avail 
		pgs: 	 1 active+clean 

[root@ceph01 ~]#
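
Beyond ceph -s, a few more read-only checks against the admin keyring distributed in section XI:

ceph health detail    # should print HEALTH_OK with no warnings
ceph osd tree         # all three OSDs should be "up", one per host
ceph df               # raw capacity should match the 60 GiB total shown above
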
Original article: https://blog.csdn.net/weixin_44064728/article/details/115753651