Steps to deploy a Ceph storage cluster for OpenStack:
1. Prepare the system environment:
192.168.138.5 ceph-node1 ceph-mon, ceph-osd, ceph-deploy
192.168.138.7 ceph-node2 ceph-mon, ceph-osd
192.168.138.8 ceph-node3 ceph-mon, ceph-osd
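If the hostnames have not been set yet, they should match the table above; a minimal sketch (run the matching line on its own node):
hostnamectl set-hostname ceph-node1   # on 192.168.138.5
hostnamectl set-hostname ceph-node2   # on 192.168.138.7
hostnamectl set-hostname ceph-node3   # on 192.168.138.8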
2. Set up passwordless SSH trust between the nodes:
On ceph-node1, run the following:
cat > /etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.138.5 ceph-node1
192.168.138.7 ceph-node2
192.168.138.8 ceph-node3
EOF
scp /etc/hosts 192.168.138.7:/etc/hosts
scp /etc/hosts 192.168.138.8:/etc/hosts
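Optionally, a quick check from ceph-node1 that the names resolve:
ping -c 1 ceph-node2
ping -c 1 ceph-node3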
3. Generate an SSH key pair on every node:
ssh-keygen
4. Copy the authorized_keys file to the other nodes:
On ceph-node1, first append its own public key to authorized_keys, then copy the file to the other nodes:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys root@ceph-node2:/root/.ssh
scp /root/.ssh/authorized_keys root@ceph-node3:/root/.ssh
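ceph-deploy relies on this passwordless trust, so it is worth verifying from ceph-node1 before going on:
ssh root@ceph-node2 hostname
ssh root@ceph-node3 hostname
# both commands should print the remote hostname without prompting for a password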
5. Stop the firewall and disable SELinux on all nodes:
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
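If the firewall must stay enabled in your environment, an alternative to stopping firewalld is to open the ports Ceph uses: 6789/tcp for the monitors and 6800-7300/tcp for the OSDs. A sketch:
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload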
6. Configure the domestic NetEase (163) yum mirror and install EPEL on all nodes:
mv /etc/yum.repos.d/CentOS-Base.repo{,.bak}
curl http://mirrors.163.com/.help/CentOS7-Base-163.repo -o /etc/yum.repos.d/CentOS-Base.repo
yum install epel-release -y && yum clean all && yum update -y
7. Add a crontab entry on all nodes to sync the time every 4 hours:
yum install ntpdate -y
cat >> /etc/crontab <<EOF
0 */4 * * * root /usr/sbin/ntpdate pool.ntp.org >> /var/log/sync-time.log 2>&1
EOF
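It may also help to do one manual sync before the cron job takes over, and to make sure crond is enabled:
ntpdate pool.ntp.org
systemctl enable crond && systemctl restart crond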
8. Install the Ceph Jewel yum repository on all nodes:
rpm -Uvh http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum update -y
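To confirm the Jewel repository installed by ceph-release is now visible to yum:
yum repolist | grep -i ceph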
9. Deploy the Ceph Jewel cluster with ceph-deploy:
Install ceph-deploy on ceph-node1:
yum install ceph-deploy -y
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-jewel/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
mkdir /etc/ceph
# The following commands must be run from the /etc/ceph/ directory!
cd /etc/ceph/
ceph-deploy new ceph-node1 ceph-node2 ceph-node3
ceph-deploy install ceph-node1 ceph-node2 ceph-node3
ceph-deploy mon create-initial
ceph -s
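Between ceph-deploy new and ceph-deploy install it is common to add a few cluster-wide options to the generated ceph.conf; the network and replica values below are assumptions for this 3-node lab and should be adapted to your environment:
cat >> /etc/ceph/ceph.conf <<EOF
public_network = 192.168.138.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 1
EOF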
10. Add a disk of the same size to each node.
11. On ceph-node1, use ceph-deploy to prepare the disks and create the OSDs:
cd /etc/ceph/
ceph-deploy disk list ceph-node1
ceph-deploy disk list ceph-node2
ceph-deploy disk list ceph-node3
ceph-deploy disk zap ceph-node1:sdb ceph-node2:sdb ceph-node3:sdb
ceph-deploy osd create ceph-node1:sdb ceph-node2:sdb ceph-node3:sdb
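With Jewel-era ceph-deploy the OSD spec takes the form {node}:{data-disk}[:{journal}], so if each node also had a hypothetical SSD sdc to hold the journal, the call would look like:
ceph-deploy osd create ceph-node1:sdb:sdc ceph-node2:sdb:sdc ceph-node3:sdb:sdc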
12. Verify the cluster:
ceph -s              # overall cluster status
ceph osd tree        # OSD tree and status
ceph health detail   # detailed health information
To remove an OSD from the cluster (in the recommended order):
ceph osd out osd.xx
systemctl stop ceph-osd@xx
ceph osd crush remove osd.xx
ceph auth del osd.xx
ceph osd rm osd.xx
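For example, to retire a hypothetical osd.2 hosted on ceph-node3 (substitute the real ID and node), the same sequence run from ceph-node1 would be:
ceph osd out osd.2
ssh ceph-node3 systemctl stop ceph-osd@2
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm osd.2
ceph osd tree   # osd.2 should no longer be listed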
To stop an OSD or restart a mon:
systemctl stop ceph-osd@xx
systemctl restart ceph-mon@ceph-node1
To suppress the health warning when the number of PGs per OSD exceeds the default of 300:
cat >> /etc/ceph/ceph.conf << EOF
mon_pg_warn_max_per_osd = 1000
EOF
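For the new setting to take effect it has to reach the monitors; assuming the cluster was deployed from /etc/ceph on ceph-node1 as above, one way is to push the config and restart each mon:
cd /etc/ceph/
ceph-deploy --overwrite-conf config push ceph-node1 ceph-node2 ceph-node3
systemctl restart ceph-mon@ceph-node1   # repeat for the mon on each node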