Summary: This 风哥教程 (Fengge) tutorial draws on the official documentation for Linux, Red Hat Enterprise Linux, Ansible Automation Platform, Docker, Kubernetes, and Podman, among others, and describes in detail how to configure and use the technologies involved.
This document covers the fundamentals of Ceph distributed storage and how to deploy it.
Part01 - Ceph Architecture Overview
1.1 Ceph Components
[root@ceph-admin ~]# cat > /root/ceph-architecture.txt << 'EOF'
Ceph Architecture Components
============================
1. MON (Monitor)
   - Maintains the cluster state maps
   - Manages cluster membership
   - Provides consensus for cluster decisions
   - Recommended: deploy 3 or 5 monitor nodes
2. OSD (Object Storage Daemon)
   - Stores the actual data
   - Handles data replication
   - Performs data recovery
   - One OSD daemon per disk
3. MDS (Metadata Server)
   - Provides metadata services for CephFS
   - Manages the file system hierarchy
   - Caches metadata
4. MGR (Manager)
   - Provides cluster monitoring
   - Manages cluster modules
   - Hosts the web dashboard
5. RGW (RADOS Gateway)
   - Provides S3/Swift-compatible interfaces
   - Supports a RESTful API
   - Object storage service
Storage interfaces:
- RADOS: Reliable Autonomic Distributed Object Store
- RBD: block device interface
- CephFS: file system interface
- RGW: object storage interface
EOF
# View system configuration
[root@ceph-admin ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain
192.168.1.10 ceph-admin
192.168.1.11 ceph-mon1
192.168.1.12 ceph-mon2
192.168.1.13 ceph-mon3
192.168.1.21 ceph-osd1
192.168.1.22 ceph-osd2
192.168.1.23 ceph-osd3
# Configure passwordless SSH login
[root@ceph-admin ~]# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
[root@ceph-admin ~]# for host in ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2 ceph-osd3; do
    ssh-copy-id -o StrictHostKeyChecking=no root@$host
done
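Before moving on, it is worth confirming that passwordless SSH really works from the admin node to every other node. The loop below is a small optional check (not part of the original steps); it should print each hostname without prompting for a password.
[root@ceph-admin ~]# for host in ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2 ceph-osd3; do
    ssh -o BatchMode=yes root@$host hostname
done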
1.2 Installing Ceph
[root@ceph-admin ~]# dnf install -y ceph-deploy
Updating Subscription Management repositories.
Last metadata expiration check: 0:05:23 ago on Fri Apr 4 19:20:00 2026.
Dependencies resolved.
================================================================================
Package Architecture Version Repository Size
================================================================================
Installing:
ceph-deploy noarch 2.1.0-1.el9 epel 100 k
Transaction Summary
================================================================================
Install 1 Package
Total download size: 100 k
Installed size: 300 k
Downloading Packages:
ceph-deploy-2.1.0-1.el9.noarch.rpm 500 kB/s | 100 kB 00:00
--------------------------------------------------------------------------------
Total 500 kB/s | 100 kB 00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : ceph-deploy-2.1.0-1.el9.noarch 1/1
Verifying : ceph-deploy-2.1.0-1.el9.noarch 1/1
Installed:
ceph-deploy-2.1.0-1.el9.noarch
Complete!
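As a quick sanity check (not shown in the original output), you can confirm the tool is on the PATH; the reported version should match the package installed above (2.1.0 here).
[root@ceph-admin ~]# ceph-deploy --version
2.1.0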
# Create the cluster directory
[root@ceph-admin ~]# mkdir -p /root/ceph-cluster
[root@ceph-admin ~]# cd /root/ceph-cluster
# Create the cluster
[root@ceph-admin ceph-cluster]# ceph-deploy new ceph-mon1 ceph-mon2 ceph-mon3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.1.0): /usr/bin/ceph-deploy new ceph-mon1 ceph-mon2 ceph-mon3
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-mon1][DEBUG ] connected to host: ceph-admin
[ceph-mon1][INFO ] Running command: ssh -CT -o StrictHostKeyChecking=no ceph-mon1
[ceph-mon2][DEBUG ] connected to host: ceph-admin
[ceph-mon2][INFO ] Running command: ssh -CT -o StrictHostKeyChecking=no ceph-mon2
[ceph-mon3][DEBUG ] connected to host: ceph-admin
[ceph-mon3][INFO ] Running command: ssh -CT -o StrictHostKeyChecking=no ceph-mon3
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon1
[ceph_deploy.new][DEBUG ] Monitor ceph-mon1 at 192.168.1.11
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon2
[ceph_deploy.new][DEBUG ] Monitor ceph-mon2 at 192.168.1.12
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon3
[ceph_deploy.new][DEBUG ] Monitor ceph-mon3 at 192.168.1.13
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-mon1', 'ceph-mon2', 'ceph-mon3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.11', '192.168.1.12', '192.168.1.13']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
# View the generated configuration file
[root@ceph-admin ceph-cluster]# cat ceph.conf
[global]
fsid = 12345678-90ab-cdef-1234-567890abcdef
mon_initial_members = ceph-mon1, ceph-mon2, ceph-mon3
mon_host = 192.168.1.11,192.168.1.12,192.168.1.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
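Optional: if you plan to separate client traffic from replication traffic (see the tips at the end of this article), public_network and cluster_network can be appended to the [global] section before the monitors are deployed. The 10.0.0.0/24 cluster network below is only an assumed example; adjust both subnets to your environment.
[root@ceph-admin ceph-cluster]# cat >> ceph.conf << 'EOF'
public_network = 192.168.1.0/24
# assumed dedicated replication network, change to match your environment
cluster_network = 10.0.0.0/24
EOF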
Part02 - Deploying the Ceph Cluster
2.1 Installing the Ceph Packages
[root@ceph-admin ceph-cluster]# ceph-deploy install --release quincy ceph-admin ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2 ceph-osd3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.1.0): /usr/bin/ceph-deploy install --release quincy ceph-admin ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2 ceph-osd3
[ceph_deploy.install][DEBUG ] Installing stable version quincy on host ceph-admin
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-admin…
[ceph-admin][INFO ] Distro info: Rocky Linux 9.2 Blue Onyx
[ceph-admin][INFO ] Installing Ceph on ceph-admin
[ceph-admin][INFO ] Running command: yum -y install epel-release
[ceph-admin][INFO ] Running command: yum -y install ceph ceph-radosgw
# Deploy the monitors
[root@ceph-admin ceph-cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.1.0): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph, hosts ceph-mon1 ceph-mon2 ceph-mon3
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon1…
[ceph-mon1][DEBUG ] determine host passed in as ceph-mon1
[ceph-mon1][DEBUG ] write conf to /etc/ceph/ceph.conf
[ceph-mon1][DEBUG ] create the mon path if it does not exist
[ceph-mon1][DEBUG ] writing the keyring to /var/lib/ceph/mon/ceph-ceph-mon1/keyring
# View the generated keyrings
[root@ceph-admin ceph-cluster]# ls -l
total 16
-rw-------. 1 root root  71 Apr 4 19:25 ceph.client.admin.keyring
-rw-r--r--. 1 root root 285 Apr 4 19:25 ceph.conf
-rw-------. 1 root root  71 Apr 4 19:25 ceph.mon.keyring
# Distribute the configuration file and keyrings
[root@ceph-admin ceph-cluster]# ceph-deploy admin ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2 ceph-osd3
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-mon1
[ceph-mon1][DEBUG ] write conf to /etc/ceph/ceph.conf
[ceph-mon1][DEBUG ] write keyring to /etc/ceph/ceph.client.admin.keyring
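The admin keyring pushed above is only readable by root. If you later run ceph commands as a non-root user on any of these nodes, a common optional step is to relax its permissions, for example:
[root@ceph-mon1 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring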
# Deploy the managers
[root@ceph-admin ceph-cluster]# ceph-deploy mgr create ceph-mon1 ceph-mon2
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph, hosts ceph-mon1 ceph-mon2
[ceph-mon1][DEBUG ] write conf to /etc/ceph/ceph.conf
[ceph-mon1][DEBUG ] create mgr path /var/lib/ceph/mgr/ceph-ceph-mon1
# Check the cluster status
[root@ceph-mon1 ~]# ceph -s
cluster:
id: 12345678-90ab-cdef-1234-567890abcdef
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 2m)
mgr: ceph-mon1(active, since 1m), standbys: ceph-mon2
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
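Since the MGR daemon hosts the web dashboard mentioned in section 1.1, you can optionally enable it now that a manager is active. This is a minimal sketch assuming the ceph-mgr-dashboard package is installed on the manager nodes; a dashboard user still has to be created before you can log in.
[root@ceph-mon1 ~]# ceph mgr module enable dashboard
[root@ceph-mon1 ~]# ceph config set mgr mgr/dashboard/ssl false
[root@ceph-mon1 ~]# ceph mgr services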
2.2 Adding OSD Nodes
[root@ceph-admin ceph-cluster]# ceph-deploy disk list ceph-osd1
[ceph-osd1][DEBUG ] connection detected need for sudo
[ceph-osd1][DEBUG ] connected to host: ceph-osd1
[ceph-osd1][DEBUG ] execute command: lsblk --paths --output NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
[ceph-osd1][INFO ] Disk /dev/sda: 100 GB, 107374182400 bytes, 209715200 sectors
[ceph-osd1][INFO ] Disk /dev/sdb: 1000 GB, 1073741824000 bytes, 2097152000 sectors
[ceph-osd1][INFO ] Disk /dev/sdc: 1000 GB, 1073741824000 bytes, 2097152000 sectors
# Wipe (zap) the disk
[root@ceph-admin ceph-cluster]# ceph-deploy disk zap ceph-osd1 /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.disk][DEBUG ] zapping /dev/sdb on ceph-osd1
[ceph-osd1][DEBUG ] connection detected need for sudo
[ceph-osd1][DEBUG ] connected to host: ceph-osd1
[ceph-osd1][DEBUG ] execute command: sgdisk --zap-all --clear --mbrtogpt /dev/sdb
[ceph-osd1][DEBUG ] Creating new GPT entries.
[ceph-osd1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
# Create the OSD
[root@ceph-admin ceph-cluster]# ceph-deploy osd create --data /dev/sdb ceph-osd1
[ceph_deploy.osd][DEBUG ] Deploying osd, cluster ceph, host ceph-osd1, device /dev/sdb
[ceph-osd1][DEBUG ] create the osd path /var/lib/ceph/osd/ceph-0
[ceph-osd1][DEBUG ] write fsid to /var/lib/ceph/osd/ceph-0/fsid
[ceph-osd1][DEBUG ] write ceph_fsid to /var/lib/ceph/osd/ceph-0/ceph_fsid
[ceph-osd1][DEBUG ] write magic to /var/lib/ceph/osd/ceph-0/magic
# Add more OSDs
[root@ceph-admin ceph-cluster]# ceph-deploy disk zap ceph-osd1 /dev/sdc
[root@ceph-admin ceph-cluster]# ceph-deploy osd create --data /dev/sdc ceph-osd1
[root@ceph-admin ceph-cluster]# ceph-deploy disk zap ceph-osd2 /dev/sdb
[root@ceph-admin ceph-cluster]# ceph-deploy osd create --data /dev/sdb ceph-osd2
[root@ceph-admin ceph-cluster]# ceph-deploy disk zap ceph-osd3 /dev/sdb
[root@ceph-admin ceph-cluster]# ceph-deploy osd create --data /dev/sdb ceph-osd3
# Check the cluster status again
[root@ceph-mon1 ~]# ceph -s
cluster:
id: 12345678-90ab-cdef-1234-567890abcdef
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 10m)
mgr: ceph-mon1(active, since 9m), standbys: ceph-mon2
osd: 4 osds: 4 up, 4 in
data:
pools: 1 pools, 32 pgs
objects: 0 objects, 0 B
usage: 4.0 GiB used, 3.9 TiB / 4.0 TiB avail
pgs: 32 active+clean
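With all four OSDs up and in, ceph osd tree shows how they are placed under their hosts in the CRUSH map, and ceph osd df shows per-OSD usage. Both are quick checks worth running after adding disks (output omitted here; expect 4 OSDs marked up/in).
[root@ceph-mon1 ~]# ceph osd tree
[root@ceph-mon1 ~]# ceph osd df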
Tips from 风哥:
- Deploy at least 3 Monitor nodes
- Use SSDs for the OSD nodes where possible
- Configure a dedicated cluster network
- Check cluster health regularly
- Plan the number of PGs carefully (see the example below)
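As a concrete example of the last two tips, the commands below check cluster health in detail and create a test pool. The pool name rbd-pool and the PG count of 128 are assumed values for a small cluster like this one, not something prescribed above; size PGs according to your OSD count and expected number of pools.
[root@ceph-mon1 ~]# ceph health detail
[root@ceph-mon1 ~]# ceph osd pool create rbd-pool 128 128
[root@ceph-mon1 ~]# ceph osd pool application enable rbd-pool rbd
[root@ceph-mon1 ~]# ceph osd pool get rbd-pool pg_num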
This article was compiled and published by 风哥教程 for learning and testing purposes only. When republishing, please credit the source: http://www.fgedu.net.cn/10327.html
