
Quick Ceph Installation and Deployment on RHEL 7.2

I. Installation Preparation

1. Installation Overview
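
The cluster in this post consists of three RHEL 7.2 nodes; based on the hosts and commands used below, the roles are roughly:

192.168.1.41    c001041    admin / ceph-deploy node, monitor
192.168.1.42    c001042    monitor, OSD (osd0)
192.168.1.43    c001043    monitor, OSD (osd1)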



2. Create the ceph User

useradd -d /home/ceph -m ceph

passwd ceph


echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

chmod 0440 /etc/sudoers.d/ceph

sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
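
To confirm that passwordless sudo works for the new user (assuming the steps above succeeded), a quick check is:

su - ceph -c 'sudo whoami'        # should print "root"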


3. Install and Configure NTP

yum install -y ntp ntpdate ntp-doc

ntpdate  2.cn.pool.ntp.org

hwclock --systohc

systemctl enable ntpd.service

systemctl start ntpd.service
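
Optionally, verify that ntpd has selected an upstream server (the chosen peer is marked with an asterisk; this can take a few minutes):

ntpq -p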


4. Install open-vm-tools

If you are running all nodes inside VMware, you need to install this virtualization utility. Otherwise skip this step.
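
On RHEL 7.2 the package is available in the standard repositories; a minimal install for VMware guests would be:

yum install -y open-vm-tools
systemctl enable vmtoolsd.service
systemctl start vmtoolsd.service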




5. Disable SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
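
The change above only takes effect after a reboot; to also stop SELinux from enforcing in the running system right away, you can additionally run:

setenforce 0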


6. Configure /etc/hosts

192.168.1.41            c001041

192.168.1.42            c001042

192.168.1.43            c001043


sed -i '$ a\192.168.1.41\tc001041'  /etc/hosts

sed -i '$ a\192.168.1.42\tc001042'  /etc/hosts

sed -i '$ a\192.168.1.43\tc001043'  /etc/hosts
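
To verify that the entries were added and resolve correctly, for example:

getent hosts c001041 c001042 c001043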



II. Configure the SSH Server

# su - ceph

$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/ceph/.ssh/id_rsa): 

Created directory '/home/ceph/.ssh'.

Enter passphrase (empty for no passphrase): 

Enter same passphrase again: 

Your identification has been saved in /home/ceph/.ssh/id_rsa.

Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.

The key fingerprint is:

47:7f:10:c9:8d:bd:22:6f:2c:96:c9:6b:a9:0a:0a:d0 ceph@h0073021

The key's randomart image is:

+--[ RSA 2048]----+

|           ..=   |

|            +.o  |

|          . .  . |

| .       ......  |

|. E     S..*...  |

|.        .* +.   |

|.   .    . =     |

| . . .    +      |

|  .   ...o       |

+-----------------+


Note: press Enter at every prompt to accept the defaults.

Edit the file and add the following content:

vim ~/.ssh/config


Host c001041

        Hostname c001041

        User ceph

 

Host c001042

        Hostname c001042

        User ceph

 

Host c001043

        Hostname c001043

        User ceph

 

$ sudo chmod 644 ~/.ssh/config

$ sudo ssh-keyscan c001041 c001042 c001043 >> ~/.ssh/known_hosts

$ ssh-copy-id c001041

$ ssh-copy-id c001042

$ ssh-copy-id c001043

Note: all of the above operations are performed on c001041.
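
A quick way to confirm that key-based login works from c001041 (run as the ceph user):

$ ssh c001042 hostname
$ ssh c001043 hostname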

III. Configure Firewalld

systemctl start firewalld

systemctl enable firewalld

sudo firewall-cmd --zone=public --add-port=80/tcp --permanent

sudo firewall-cmd --zone=public --add-port=2003/tcp --permanent

sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent

sudo firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent

sudo firewall-cmd --reload

Open ports 6800-7300 on the OSD nodes:

sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

sudo firewall-cmd --reload

Note: if you are not using a firewall, this step can be skipped.

By default, Ceph Monitors communicate with each other on port 6789, and OSDs communicate on ports in the 6800-7300 range.
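
The currently opened ports can be confirmed with:

sudo firewall-cmd --zone=public --list-ports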



IV. Create the Ceph Repository on Each Node

Create ceph.repo in the /etc/yum.repos.d/ directory and add the following content:

[Ceph]

name=Ceph packages for $basearch

baseurl=https://mirrors.163.com/ceph/rpm-jewel/el7/$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1


[Ceph-noarch]

name=Ceph noarch packages

baseurl=https://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1


[ceph-source]

name=Ceph source packages

baseurl=https://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1


Note: the USTC mirror https://mirrors.ustc.edu.cn/ceph/ can be used as an alternative.
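
After writing the repo file, refreshing the yum metadata on each node confirms that the repository is reachable:

yum clean all
yum makecache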

V. Create the Ceph Cluster

1. Prepare the installation on the admin node c001041

ssh root@c001041

su - ceph

$ mkdir ~/ceph-cluster

$ cd ~/ceph-cluster


$ sudo yum install https://mirrors.163.com/ceph/rpm-jewel/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm

Install ceph-deploy:

$ sudo yum install ceph-deploy


If you run into trouble after installing Ceph, the following commands can be used to purge the packages and configuration:

$ sudo ceph-deploy purge c001041 c001042 c001043

$ sudo ceph-deploy purgedata c001041 c001042 c001043

$ sudo ceph-deploy forgetkeys


2. Create the Ceph cluster

$ sudo ceph-deploy new c001041 c001042 c001043


$ sudo vim ceph.conf

Add the following lines at the end of the [global] section:

public network = 192.168.1.0/24

osd pool default size = 2

Save and exit.
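
After ceph-deploy new and the edit above, the [global] section should look roughly like the following (the fsid and monitor entries are generated by ceph-deploy new; the fsid shown here is the one reported later by ceph -s, yours will differ):

[global]
fsid = 8a529d9d-046e-472b-a771-98be158b7405
mon_initial_members = c001041, c001042, c001043
mon_host = 192.168.1.41,192.168.1.42,192.168.1.43
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.1.0/24
osd pool default size = 2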

3. Install Ceph on every Ceph node

$ sudo ceph-deploy install c001041  c001042 c001043


Configure the initial monitor(s) and gather all keys:

$ sudo ceph-deploy mon create-initial


4. Create OSDs

Add two OSDs. Log in to each Ceph node and create a directory for the OSD daemon:

$ ssh c001042

$ sudo mkdir /var/local/osd0

$ exit


$ ssh c001043

$ sudo mkdir /var/local/osd1

$ exit
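
Note: on Jewel the OSD daemons run as the ceph user, so if the activate step further below fails with a permission error on these directories, changing their ownership usually fixes it, e.g.:

$ ssh c001042 sudo chown ceph:ceph /var/local/osd0
$ ssh c001043 sudo chown ceph:ceph /var/local/osd1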

Then, from the admin node (c001041), run ceph-deploy to prepare the OSDs:

$ sudo ceph-deploy osd prepare c001042:/var/local/osd0 c001043:/var/local/osd1


Finally, activate the OSDs:

$ sudo ceph-deploy osd activate  c001042:/var/local/osd0 c001043:/var/local/osd1
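
If ceph.client.admin.keyring does not exist under /etc/ceph on the nodes, first push the configuration and admin key there from the ceph-cluster directory on c001041:

$ sudo ceph-deploy admin c001041 c001042 c001043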


Make sure you have the correct permissions on ceph.client.admin.keyring:

$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring


Check the cluster's health status:

[ceph@c001041 ceph-cluster]$ ceph -s

    cluster 8a529d9d-046e-472b-a771-98be158b7405

     health HEALTH_OK

     monmap e1: 3 mons at {c001041=192.168.1.41:6789/0,c001042=192.168.1.42:6789/0,c001043=192.168.1.43:6789/0}

            election epoch 4, quorum 0,1,2 c001041,c001042,c001043

     osdmap e10: 2 osds: 2 up, 2 in

            flags sortbitwise,require_jewel_osds

      pgmap v55: 64 pgs, 1 pools, 0 bytes data, 0 objects

            13079 MB used, 89270 MB / 102350 MB avail

                  64 active+clean


$ ceph --version

ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
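
As a final sanity check, the OSD tree should show both OSDs up under their hosts:

$ ceph osd tree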



