
Ceph Download: Download Sources and Methods for Ceph Distributed Storage

1. Ceph Overview and Version Notes

Ceph is a unified distributed storage system that exposes three interfaces: object storage, block storage, and file system storage. It distributes data with the CRUSH algorithm and offers high scalability, reliability, and performance.

Current Ceph releases:

Ceph 19.2.x (Squid) - latest stable release, published in 2024, with improved performance and new features

Ceph 18.2.x (Reef) - long-term support release, suitable for production

Ceph 17.2.x (Quincy) - older stable release, still under maintenance

Ceph 16.2.x (Pacific) - legacy stable release

Core Ceph components:

MON (Monitor): cluster monitor; maintains the maps of cluster state

OSD (Object Storage Daemon): stores the actual data

MDS (Metadata Server): serves metadata for CephFS

MGR (Manager): provides monitoring and management interfaces

RGW (RADOS Gateway): object storage gateway exposing S3/Swift-compatible APIs
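
Each component runs as its own daemon. As a quick orientation (a minimal sketch, assuming Ceph is already installed on the host with its systemd packaging), every daemon type maps to a systemd unit template:

# List running Ceph daemons by component type
# systemctl list-units 'ceph-mon@*'       # Monitors
# systemctl list-units 'ceph-mgr@*'       # Managers
# systemctl list-units 'ceph-osd@*'       # OSDs
# systemctl list-units 'ceph-mds@*'       # MDS daemons
# systemctl list-units 'ceph-radosgw@*'   # RGW daemons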

2. How to Download Ceph

Ceph offers two main installation approaches: package-based installation and containerized deployment.

Method 1: Official package repository (recommended)

# Add the Ceph repository (RHEL/CentOS/Rocky Linux 9)
# vi /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-19.2/el9/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-19.2/el9/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-19.2/el9/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

# Import the GPG key
# rpm --import https://download.ceph.com/keys/release.asc

# List available versions
# yum list ceph --showduplicates
ceph.x86_64 19.2.0-1.el9 Ceph
ceph.x86_64 19.2.1-1.el9 Ceph
ceph.x86_64 18.2.4-1.el9 Ceph
ceph.x86_64 18.2.5-1.el9 Ceph
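
With the repository configured, a specific build can be pinned at install time instead of taking the latest. A minimal sketch (the version string must match one of the versions listed above):

# Install a pinned version
# yum install -y ceph-19.2.1-1.el9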

Method 2: Download RPM packages

# Download the Ceph RPM package
# cd /fgedudb/ceph
# wget https://download.ceph.com/rpm-19.2/el9/x86_64/ceph-19.2.1-1.el9.x86_64.rpm

# Sample download output:
--2026-04-04 14:00:15-- https://download.ceph.com/rpm-19.2/el9/x86_64/ceph-19.2.1-1.el9.x86_64.rpm
Resolving download.ceph.com... 158.69.68.124
Connecting to download.ceph.com|158.69.68.124|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15678901 (15M) [application/x-rpm]
Saving to: 'ceph-19.2.1-1.el9.x86_64.rpm'

ceph-19.2.1-1.el9.x86_64.rpm 100%[===============================================>] 14.96M 5.2MB/s in 2.9s

2026-04-04 14:00:18 URL:https://download.ceph.com/rpm-19.2/el9/x86_64/ceph-19.2.1-1.el9.x86_64.rpm [15678901/15678901] -> "ceph-19.2.1-1.el9.x86_64.rpm" [1]

# Download dependency packages
# wget https://download.ceph.com/rpm-19.2/el9/x86_64/ceph-base-19.2.1-1.el9.x86_64.rpm
# wget https://download.ceph.com/rpm-19.2/el9/x86_64/ceph-mon-19.2.1-1.el9.x86_64.rpm
# wget https://download.ceph.com/rpm-19.2/el9/x86_64/ceph-osd-19.2.1-1.el9.x86_64.rpm
# wget https://download.ceph.com/rpm-19.2/el9/x86_64/ceph-mgr-19.2.1-1.el9.x86_64.rpm

# Verify the downloaded files
# ls -lh ceph*.rpm
-rw-r--r-- 1 root root 15M Apr 4 14:00 ceph-19.2.1-1.el9.x86_64.rpm
-rw-r--r-- 1 root root 8.5M Apr 4 14:01 ceph-base-19.2.1-1.el9.x86_64.rpm
-rw-r--r-- 1 root root 5.2M Apr 4 14:01 ceph-mon-19.2.1-1.el9.x86_64.rpm
-rw-r--r-- 1 root root 6.8M Apr 4 14:01 ceph-osd-19.2.1-1.el9.x86_64.rpm
-rw-r--r-- 1 root root 4.5M Apr 4 14:01 ceph-mgr-19.2.1-1.el9.x86_64.rpm
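
Since these packages were fetched manually, it is worth verifying their digests and GPG signatures against the Ceph release key imported in Method 1 before installing. A minimal sketch:

# Check digests and GPG signatures of the downloaded packages
# rpm -K ceph*.rpm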

Method 3: Docker container images

# Pull the Ceph container image
# docker pull ceph/ceph:v19

# Sample pull output:
v19: Pulling from ceph/ceph
Digest: sha256:a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0u1v2w3x4y5z6a7b8c9d0e1f2
Status: Downloaded newer image for ceph/ceph:v19
docker.io/ceph/ceph:v19

# List the images
# docker images ceph/ceph
REPOSITORY TAG IMAGE ID CREATED SIZE
ceph/ceph v19 a1b2c3d4e5f6 3 days ago 1.2GB
ceph/ceph v18 b2c3d4e5f6g7 2 weeks ago 1.1GB

# Pull a specific version
# docker pull ceph/ceph:v18.2.5
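
To confirm exactly which Ceph build a pulled image ships, the ceph binary inside the image can be invoked directly. A minimal sketch:

# Print the Ceph version contained in the image
# docker run --rm ceph/ceph:v19 ceph --version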

Method 4: Build from source

# Clone the Ceph source code
# cd /fgedudb/ceph
# git clone --branch v19.2.1 https://github.com/ceph/ceph.git

# Sample clone output:
Cloning into 'ceph'...
remote: Enumerating objects: 856234, done.
remote: Counting objects: 100% (856234/856234), done.
remote: Compressing objects: 100% (156789/156789), done.
remote: Total 856234 (delta 699445), reused 856234 (delta 699445), pack-reused 0
Receiving objects: 100% (856234/856234), 456.78 MiB | 15.23 MiB/s, done.
Resolving deltas: 100% (699445/699445), done.
Note: switching to 'v19.2.1'.

# Fetch the submodules
# cd ceph
# git submodule update --init --recursive

# Check the version
# cat src/ceph_ver.h
#define CEPH_GIT_VER "v19.2.1"
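
The cloned tree can then be built with the helper scripts that ship in the Ceph repository itself. A minimal sketch (a full build needs substantial disk space and CPU time):

# Install build dependencies, generate the build tree, and compile
# ./install-deps.sh
# ./do_cmake.sh
# cd build
# ninja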

3. System Environment Preparation

Before deploying a Ceph cluster, prepare the operating system environment on every node.

Step 1: Configure hostnames and name resolution

# Set the hostname (on every node)
# hostnamectl set-hostname ceph-node1.fgedu.net.cn

# Configure /etc/hosts entries
# vi /etc/hosts
192.168.1.51 ceph-node1 ceph-node1.fgedu.net.cn
192.168.1.52 ceph-node2 ceph-node2.fgedu.net.cn
192.168.1.53 ceph-node3 ceph-node3.fgedu.net.cn
192.168.1.54 ceph-node4 ceph-node4.fgedu.net.cn

# Verify name resolution
# ping -c 2 ceph-node2
PING ceph-node2 (192.168.1.52) 56(84) bytes of data.
64 bytes from ceph-node2 (192.168.1.52): icmp_seq=1 ttl=64 time=0.235 ms
64 bytes from ceph-node2 (192.168.1.52): icmp_seq=2 ttl=64 time=0.156 ms
--- ceph-node2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
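
The same check can be run for every node in one pass. A minimal sketch using the host list configured above:

# Confirm that every node name resolves
# for h in ceph-node1 ceph-node2 ceph-node3 ceph-node4; do getent hosts $h; done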

Step 2: Configure time synchronization

# Install chrony
# yum install -y chrony

# Configure the NTP servers
# vi /etc/chrony.conf
server ntp.aliyun.com iburst
server ntp.tencent.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10

# Start and enable the chronyd service
# systemctl start chronyd
# systemctl enable chronyd

# Verify time synchronization
# chronyc sources
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88 2 6 17 32 -23us[ -60us] +/- 21ms
^+ 139.199.215.251 2 6 17 32 +156us[ +119us] +/- 32ms

# Check the clock offset
# chronyc tracking
Reference ID : CB6B0658 (203.107.6.88)
Stratum : 3
Ref time (UTC) : Fri Apr 04 06:15:00 2026
System time : 0.000012345 seconds fast of NTP time
Last offset : -0.000023456 seconds
RMS offset : 0.000012345 seconds
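
Ceph Monitors are sensitive to clock skew (by default, drift beyond 0.05 s triggers a HEALTH_WARN), so it is worth comparing offsets across all nodes once the passwordless SSH from Step 3 is in place. A minimal sketch:

# Compare clock offsets on every node
# for h in ceph-node1 ceph-node2 ceph-node3 ceph-node4; do ssh $h "chronyc tracking | grep 'System time'"; done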

Step 3: Configure passwordless SSH

# Generate an SSH key (run on the deployment node)
# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# Sample output:
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0 root@ceph-node1
The key's randomart image is:
+---[RSA 2048]----+
| .o. |
| …o |
| . .+ . |
| +.=. . |
| o B.=S |
| . + B.o |
| . =.* |
| o E.* |
| .oO= |
+----[SHA256]-----+

# Copy the public key to every node
# ssh-copy-id root@ceph-node1
# ssh-copy-id root@ceph-node2
# ssh-copy-id root@ceph-node3
# ssh-copy-id root@ceph-node4

# Sample output:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@ceph-node2'"
and check to make sure that only the key(s) you wanted were added.

# Verify passwordless login
# ssh ceph-node2 "hostname"
ceph-node2
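
The check can be extended to all nodes at once; BatchMode makes ssh fail instead of prompting if key authentication is broken. A minimal sketch:

# Verify passwordless SSH to every node
# for h in ceph-node1 ceph-node2 ceph-node3 ceph-node4; do ssh -o BatchMode=yes $h hostname; done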

Step 4: Configure system parameters

# Configure kernel parameters
# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
vm.swappiness = 0

# Apply the settings
# sysctl -p

# Configure file descriptor limits
# vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536

# Disable the firewall, or open the required ports
# systemctl stop firewalld
# systemctl disable firewalld

# Alternatively, open only the Ceph ports
# firewall-cmd --permanent --add-port=6789/tcp
# firewall-cmd --permanent --add-port=3300/tcp
# firewall-cmd --permanent --add-port=6800-7300/tcp
# firewall-cmd --reload
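
A quick verification that the parameters actually took effect (note that the limits.conf changes only apply to new login sessions). A minimal sketch:

# Confirm the settings are active
# sysctl vm.swappiness
# ulimit -n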

4. Ceph Cluster Installation

A Ceph cluster can be deployed with either the ceph-deploy or the cephadm tool. This walkthrough uses ceph-deploy; note that ceph-deploy is no longer actively maintained upstream, so cephadm is the recommended deployment tool for recent releases such as Squid.

Step 1: Install the ceph-deploy tool

# Install ceph-deploy
# yum install -y ceph-deploy

# Sample installation output:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch 0:2.1.0-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
ceph-deploy noarch 2.1.0-0 Ceph-noarch 156 k

Transaction Summary
================================================================================
Install 1 Package

Total download size: 156 k
Installed size: 456 k
Downloading packages:
ceph-deploy-2.1.0-0.noarch.rpm | 156 kB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ceph-deploy-2.1.0-0.noarch 1/1
Verifying : ceph-deploy-2.1.0-0.noarch 1/1

Installed:
ceph-deploy.noarch 0:2.1.0-0

Complete!

# Create the deployment directory
# mkdir -p /fgedudb/ceph-cluster
# cd /fgedudb/ceph-cluster

Step 2: Create the cluster

# Define the initial Monitor nodes
# ceph-deploy new ceph-node1 ceph-node2 ceph-node3

# Sample output:
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.1.0): /usr/bin/ceph-deploy new ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node3][DEBUG ] connected to host: ceph-node3
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1', 'ceph-node2', 'ceph-node3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.51', '192.168.1.52', '192.168.1.53']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

# List the generated files
# ls -la
total 16
drwxr-xr-x 2 root root 4096 Apr 4 14:20 .
drwxr-xr-x 3 root root 4096 Apr 4 14:20 ..
-rw-r--r-- 1 root root 251 Apr 4 14:20 ceph.conf
-rw------- 1 root root 73 Apr 4 14:20 ceph-deploy-ceph.log
-rw------- 1 root root 77 Apr 4 14:20 ceph.mon.keyring

# View the generated config file
# cat ceph.conf
[global]
fsid = a1b2c3d4-e5f6-7890-abcd-ef1234567890
mon_initial_members = ceph-node1, ceph-node2, ceph-node3
mon_host = 192.168.1.51,192.168.1.52,192.168.1.53
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
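
Before pushing this file to the nodes, many deployments also pin the public network explicitly so daemons bind to the intended interface. A hedged sketch (public_network is a standard Ceph option; the subnet matches the lab addressing used in this guide):

# Optionally declare the public network under [global]
# vi ceph.conf
public_network = 192.168.1.0/24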

Step 3: Install the Ceph packages

# Install Ceph on all nodes
# ceph-deploy install --release squid ceph-node1 ceph-node2 ceph-node3 ceph-node4

# Sample output:
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.1.0): /usr/bin/ceph-deploy install --release squid ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph_deploy.install][DEBUG ] Installing stable version squid on cluster ceph hosts ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /usr/bin/yum -y install ceph
[ceph-node2][INFO ] Running command: /usr/bin/yum -y install ceph
[ceph-node3][INFO ] Running command: /usr/bin/yum -y install ceph
[ceph-node4][INFO ] Running command: /usr/bin/yum -y install ceph

[ceph_deploy.install][INFO ] Ceph is installed on ceph-node1, ceph-node2, ceph-node3, ceph-node4

# Verify the installation
# ssh ceph-node1 "ceph --version"
ceph version 19.2.1 (a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0) squid (stable)

Step 4: Deploy the Monitor nodes

# Deploy the Monitors
# ceph-deploy mon create-initial

# Sample output:
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.1.0): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.mon][DEBUG ] deploying monitor, host ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /usr/bin/ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node1.asok mon_status
[ceph-node1][DEBUG ] ********************************************************************************
[ceph-node1][DEBUG ] status for monitor: ceph-node1
[ceph-node1][DEBUG ] {
[ceph-node1][DEBUG ] "name": "ceph-node1",
[ceph-node1][DEBUG ] "rank": 0,
[ceph-node1][DEBUG ] "state": "leader",
[ceph-node1][DEBUG ] "election_epoch": 3,
[ceph-node1][DEBUG ] ...
[ceph-node1][DEBUG ] }
[ceph_deploy.mon][INFO ] monitor ceph-node1 is running
[ceph_deploy.mon][INFO ] processing monitor ceph-node2
[ceph_deploy.mon][INFO ] processing monitor ceph-node3
[ceph_deploy.mon][INFO ] all monitors are running

# Check the generated admin keyring
# ls -la ceph.client.admin.keyring
-rw------- 1 root root 63 Apr 4 14:25 ceph.client.admin.keyring

Tip: Deploy an odd number of Monitors (3, 5, or 7) so that a quorum can always be formed. Production environments should run at least three Monitor nodes.

5. Ceph Cluster Configuration

After the base deployment, the Manager and OSD components need to be configured.

Step 1: Deploy the Manager nodes

# Deploy the Managers
# ceph-deploy mgr create ceph-node1 ceph-node2

# Sample output:
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph, hosts ceph-node1 ceph-node2
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][INFO ] Running command: /usr/bin/ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-mgr.ceph-node1.asok status
[ceph_deploy.mgr][INFO ] mgr.ceph-node1 is running
[ceph_deploy.mgr][INFO ] processing mgr.ceph-node2

# Distribute the config file and admin keyring
# ceph-deploy admin ceph-node1 ceph-node2 ceph-node3 ceph-node4

# Make the admin keyring readable
# ssh ceph-node1 "chmod +r /etc/ceph/ceph.client.admin.keyring"
# ssh ceph-node2 "chmod +r /etc/ceph/ceph.client.admin.keyring"
# ssh ceph-node3 "chmod +r /etc/ceph/ceph.client.admin.keyring"
# ssh ceph-node4 "chmod +r /etc/ceph/ceph.client.admin.keyring"

# Check the cluster status
# ceph -s
cluster:
id: a1b2c3d4-e5f6-7890-abcd-ef1234567890
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3

services:
mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 5m)
mgr: ceph-node1(active, since 1m), standbys: ceph-node2
osd: 0 osds: 0 up, 0 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:

Step 2: Create the OSDs

# List available disks
# ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3 ceph-node4

# Sample output:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /usr/bin/ceph-disk list
[ceph-node1][DEBUG ] /dev/sdb :
[ceph-node1][DEBUG ] /dev/sdb1 other, xfs, mounted on /boot
[ceph-node1][DEBUG ] /dev/sdc :
[ceph-node1][DEBUG ] /dev/sdc1 other
[ceph-node1][DEBUG ] /dev/sdd :
[ceph-node1][DEBUG ] /dev/sdd1 other

# Zap (wipe) the disks
# ceph-deploy disk zap ceph-node1 /dev/sdc
# ceph-deploy disk zap ceph-node1 /dev/sdd
# ceph-deploy disk zap ceph-node2 /dev/sdc
# ceph-deploy disk zap ceph-node2 /dev/sdd
# ceph-deploy disk zap ceph-node3 /dev/sdc
# ceph-deploy disk zap ceph-node3 /dev/sdd

# Create the OSDs
# ceph-deploy osd create --data /dev/sdc ceph-node1
# ceph-deploy osd create --data /dev/sdd ceph-node1
# ceph-deploy osd create --data /dev/sdc ceph-node2
# ceph-deploy osd create --data /dev/sdd ceph-node2
# ceph-deploy osd create --data /dev/sdc ceph-node3
# ceph-deploy osd create --data /dev/sdd ceph-node3

# Sample output:
[ceph_deploy.osd][DEBUG ] Deploying osd, cluster ceph, host ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /usr/sbin/ceph-volume lvm create --bluestore --data /dev/sdc
[ceph-node1][DEBUG ] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a1b2c3d4-e5f6-7890-abcd-ef1234567890
[ceph-node1][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdc
[ceph-node1][DEBUG ] --> ceph-volume lvm activate successful for: /dev/sdc
[ceph_deploy.osd][INFO ] OSD created on ceph-node1

Step 3: Verify the cluster state

# Check cluster health
# ceph -s
cluster:
id: a1b2c3d4-e5f6-7890-abcd-ef1234567890
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 10m)
mgr: ceph-node1(active, since 8m), standbys: ceph-node2
osd: 6 osds: 6 up, 6 in

data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 6.0 GiB used, 594 GiB / 600 GiB avail
pgs: 1 active+clean

# View the OSD tree
# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.58589 root default
-3 0.19530 host ceph-node1
0 hdd 0.09765 osd.0 up 1.00000 1.00000
1 hdd 0.09765 osd.1 up 1.00000 1.00000
-5 0.19530 host ceph-node2
2 hdd 0.09765 osd.2 up 1.00000 1.00000
3 hdd 0.09765 osd.3 up 1.00000 1.00000
-7 0.19530 host ceph-node3
4 hdd 0.09765 osd.4 up 1.00000 1.00000
5 hdd 0.09765 osd.5 up 1.00000 1.00000

# View detailed health information
# ceph health detail
HEALTH_OK

6. Storage Service Configuration

Ceph provides three storage services: object storage, block storage, and a file system.

Step 1: Configure RBD block storage

# Create the RBD pool
# ceph osd pool create rbd 128 128
pool 'rbd' created

# Enable the RBD application on the pool
# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'

# Initialize the RBD pool
# rbd pool init rbd

# Create RBD images
# rbd create rbd/image1 --size 10G
# rbd create rbd/image2 --size 20G

# List RBD images
# rbd ls rbd
image1
image2

# Show image details
# rbd info rbd/image1
rbd image 'image1':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: a1b2c3d4e5f6
block_name_prefix: rbd_data.a1b2c3d4e5f6
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Fri Apr 4 14:35:00 2026
access_timestamp: Fri Apr 4 14:35:00 2026
modify_timestamp: Fri Apr 4 14:35:00 2026

# Map the RBD image on a client
# rbd map rbd/image1
/dev/rbd0

# Format and mount
# mkfs.xfs /dev/rbd/rbd/image1
# mkdir -p /mnt/rbd
# mount /dev/rbd/rbd/image1 /mnt/rbd

# Verify the mount
# df -h /mnt/rbd
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 10G 33M 10G 1% /mnt/rbd
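
A mapping created with rbd map does not survive a reboot. A hedged sketch using the rbdmap service to restore it at boot (the keyring path assumes the admin keyring distributed in Section 5):

# Make the mapping persistent across reboots
# vi /etc/ceph/rbdmap
rbd/image1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
# systemctl enable rbdmap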

Step 2: Configure RGW object storage

# Deploy the RGW service
# ceph-deploy rgw create ceph-node1

# Sample output:
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph, host ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph_deploy.rgw][INFO ] rgw.ceph-node1 is running

# Check RGW status
# ceph -s | grep rgw
rgw: 1 daemon active (ceph-node1)

# Create an S3 user
# radosgw-admin user create --uid="admin" --display-name="Admin User"

# Sample output:
{
"user_id": "admin",
"display_name": "Admin User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "admin",
"access_key": "A1B2C3D4E5F6G7H8I9J0",
"secret_key": "a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}

# Access with an S3 client
# s3cmd --configure
# s3cmd mb s3://test-bucket
Bucket 's3://test-bucket/' created
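
Once the bucket exists, uploads and listings work as against any S3 endpoint. A minimal sketch:

# Upload an object and list the bucket
# s3cmd put /etc/hosts s3://test-bucket/
# s3cmd ls s3://test-bucket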

Step 3: Configure the CephFS file system

# Create the CephFS pools
# ceph osd pool create cephfs_data 128
# ceph osd pool create cephfs_metadata 64

# Enable the CephFS application on the pools
# ceph osd pool application enable cephfs_data cephfs
# ceph osd pool application enable cephfs_metadata cephfs

# Create the CephFS file system
# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2

# Deploy the MDS service
# ceph-deploy mds create ceph-node1 ceph-node2

# Check CephFS status
# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

# Check MDS status
# ceph mds stat
cephfs:1 {0=ceph-node1=up:active} 1 up:standby
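
The mount command below reads the client key from a secret file, which has to be created first. A minimal sketch that extracts the admin key:

# Create the secret file referenced by the mount below
# ceph auth get-key client.admin > /etc/ceph/admin.secret
# chmod 600 /etc/ceph/admin.secret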

# Mount CephFS on a client
# mkdir -p /mnt/cephfs
# mount -t ceph 192.168.1.51:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# Verify the mount
# df -h /mnt/cephfs
Filesystem Size Used Avail Use% Mounted on
192.168.1.51:6789:/ 585G 0 585G 0% /mnt/cephfs

7. Production Best Practices

Deploying Ceph in production requires attention to performance tuning, security hardening, and monitoring/alerting.

Performance tuning

# Tune OSD parameters (osd_op_threads/osd_disk_threads were removed in modern Ceph; the sharded equivalents are used instead)
# ceph config set osd osd_memory_target 4G
# ceph config set osd osd_op_num_shards 8
# ceph config set osd osd_op_num_threads_per_shard 2

# Enable PG autoscaling
# ceph config set global osd_pool_default_pg_autoscale_mode on

# Configure default pool replica counts
# ceph config set global osd_pool_default_size 3
# ceph config set global osd_pool_default_min_size 2

# Configure a CRUSH rule
# ceph osd crush rule create-replicated rule-host default host
# ceph osd pool set rbd crush_rule rule-host

# View the configuration
# ceph config dump
WHO MASK NAME VALUE
global osd_pool_default_size 3
global osd_pool_default_min_size 2
osd osd_memory_target 4G
osd osd_op_num_shards 8
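
The effective per-pool settings can be confirmed directly. A minimal sketch checking the rbd pool configured earlier:

# Verify the replica count and CRUSH rule on the rbd pool
# ceph osd pool get rbd size
# ceph osd pool get rbd crush_rule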

Monitoring and alerting

# Enable the Dashboard module
# ceph mgr module enable dashboard

# Create a self-signed certificate
# ceph dashboard create-self-signed-cert

# Create a Dashboard user
# ceph dashboard ac-user-create admin -i password.txt administrator

# Find the Dashboard URL
# ceph mgr services
{
"dashboard": "https://ceph-node1:8443/"
}

# Enable the Prometheus module
# ceph mgr module enable prometheus

# Find the Prometheus endpoint
# ceph mgr services
{
"dashboard": "https://ceph-node1:8443/",
"prometheus": "http://ceph-node1:9283/"
}
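
The exporter serves plain HTTP, so it can be smoke-tested with curl before wiring it into Prometheus. A minimal sketch:

# Fetch a few lines from the metrics endpoint
# curl -s http://ceph-node1:9283/metrics | head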

# Configure health warnings
# ceph config set global mon_warn_on_pool_no_redundancy true
# ceph config set global mon_warn_on_legacy_crush_straw true

Production recommendations: run at least three Monitor nodes to maintain quorum. Give each OSD a dedicated disk; avoid partitions. Use a replica count of 3 with a minimum of 2. Enable PG autoscaling. Check cluster health and disk utilization regularly, and set up off-site disaster recovery backups.
