
Linux Tutorial FG156 - Storage Fault Troubleshooting and Recovery

Outline

Introduction: This tutorial references the official documentation for Linux, Red Hat Enterprise Linux, Ansible Automation Platform, Docker, Kubernetes, and Podman, and explains the configuration and use of the relevant technologies in detail.

1. Overview of Storage Faults

Storage faults include disk failures, file system corruption, LVM metadata corruption, RAID array failures, and more. Detecting and recovering from them promptly is critical to data safety. The list below summarizes the common fault types, and a quick triage sketch follows it.


# Common storage fault types
# Disk failure: physical damage or SMART warnings
# File system corruption: damaged metadata or an inconsistent file system
# LVM failure: corrupted metadata or an inaccessible volume group
# RAID failure: a disk failure degrading or breaking the array
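
When a fault is first reported, it helps to look at every layer once before drilling down. Below is a minimal triage sketch; the device name /dev/sda and the presence of smartmontools, mdadm, and LVM are assumptions about your environment.

# Quick triage: one command per storage layer
smartctl -H /dev/sda                               # disk: overall SMART health
dmesg | grep -iE 'i/o error|ata[0-9]' | tail -20   # kernel: recent I/O errors
cat /proc/mdstat                                   # RAID: look for (F) or [_U]
vgs; lvs                                           # LVM: volume group / LV status
df -h                                              # file systems still mounted?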

2. Disk Fault Detection

Use the SMART tools to check disk health; a sketch for continuous monitoring with smartd appears after the command walkthrough.


# Disk fault detection

# 1. Check the disk's SMART information
[root@localhost ~]# smartctl -a /dev/sda
smartctl 7.3 2022-02-28 r5335 [x86_64-linux-5.14.0] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD30EFRX-68EUZN0
Serial Number: WD-WCC4E1234567
LU WWN Device Id: 5 0014ee 1234567890
Firmware Version: 82.00A82
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: 5400 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4c
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Fri Apr 3 10:30:00 2026 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

# 2. Run a short SMART self-test
[root@localhost ~]# smartctl -t short /dev/sda
smartctl 7.3 2022-02-28 r5335 [x86_64-linux-5.14.0] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Short offline test immediately".
Drive command "Execute SMART Short offline test immediately" successful.
Testing has begun.

# 3. Check the self-test results
[root@localhost ~]# smartctl -l selftest /dev/sda
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      1234         -
# 2  Short offline       Completed without error       00%      1200         -
# 3  Extended offline    Completed without error       00%      1150         -

# 4. View the error log
[root@localhost ~]# smartctl -l error /dev/sda
SMART Error Log Version: 1
No Errors Logged

# 5. Scan the disk for bad blocks (read-only test)
[root@localhost ~]# badblocks -v /dev/sda
Checking blocks 0 to 5860533168
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found.
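
One-off checks are easy to forget. A few raw attributes (Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable) are classic early-failure indicators, and smartd can watch them continuously. A minimal sketch, assuming smartmontools is installed and mail to root is deliverable:

# Inspect the attributes that most often precede failure
smartctl -A /dev/sda | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'

# /etc/smartd.conf: monitor all attributes on all disks, run a short
# self-test daily at 02:00, and mail root on any problem
DEVICESCAN -a -s (S/../.././02) -m root

# Apply the configuration
systemctl enable --now smartd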

3. File System Fault Repair

Use xfs_repair and fsck to repair file system faults. A scripted repair driven by fsck's exit codes is sketched at the end of this section.

# File system fault repair

# 1. Dry-run check of the file system (-n reports problems but changes nothing)
[root@localhost ~]# xfs_repair -n /dev/sdb1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
        - resetting inode 128 link counts to 1
No modify flag set, skipping filesystem flush and exiting.

# 2. Repair the XFS file system (it must be unmounted first)
[root@localhost ~]# umount /data
[root@localhost ~]# xfs_repair /dev/sdb1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - process known inodes and perform inode discovery...
Phase 4 - check for duplicate blocks...
Phase 5 - rebuild AG headers and trees...
Phase 6 - check inode connectivity...
Phase 7 - verify and correct link counts...
done
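
If the log is dirty, xfs_repair refuses to run and asks you to mount the file system so the log can be replayed. Zero the log only as a last resort, since the most recent metadata changes may be lost. A short sketch:

# Preferred: replay the log by mounting once, then repair
mount /dev/sdb1 /data && umount /data
xfs_repair /dev/sdb1

# Last resort for a corrupt, unreplayable log: -L zeroes the log
# and may lose the most recent metadata updates
xfs_repair -L /dev/sdb1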

# 3. Repair the ext4 file system
[root@localhost ~]# umount /backup
[root@localhost ~]# fsck -y /dev/sdc1
fsck from util-linux 2.37.4
e2fsck 1.46.5 (30-Dec-2021)
/dev/sdc1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdc1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdc1: 11/655360 files (0.0% non-contiguous), 126789/2621440 blocks

# 4. Mount the repaired file system
[root@localhost ~]# mount /dev/sdb1 /data
[root@localhost ~]# mount | grep /data
/dev/sdb1 on /data type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
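
fsck encodes its result in the exit status (0 = no errors, 1 = errors corrected, 2 = errors corrected but a reboot is required, 4 = errors left uncorrected), which makes unattended repair scriptable. A minimal sketch using the /dev/sdc1 and /backup names from the example above:

#!/bin/bash
# Repair an ext4 file system and act on fsck's exit status
umount /backup
fsck -y /dev/sdc1
rc=$?
if [ "$rc" -le 1 ]; then
    mount /dev/sdc1 /backup            # clean, or all errors corrected
elif [ "$rc" -eq 2 ]; then
    echo "errors corrected, reboot required" >&2
else
    echo "fsck failed or left errors (rc=$rc)" >&2
    exit 1
fi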

4. LVM Fault Recovery

Recover corrupted LVM metadata from its automatic backups; restoring an older archive copy is sketched at the end of this section.


# LVM fault recovery

# 1. Check LVM status
[root@localhost ~]# vgdisplay
Volume group "vg0" not found
Cannot process volume group vg0

# 2. Scan the physical volumes
[root@localhost ~]# pvscan
PV /dev/sda2 VG vg0 lvm2 [100.00 GiB / 0 free]
Total: 1 [100.00 GiB] / in use: 1 [100.00 GiB] / in no VG: 0 [0 ]

# 3. Restore the volume group metadata (from /etc/lvm/backup by default)
[root@localhost ~]# vgcfgrestore vg0
Restored volume group vg0

# 4. Activate the volume group
[root@localhost ~]# vgchange -a y vg0
2 logical volume(s) in volume group "vg0" now active

# 5. Verify the volume group
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name               vg0
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               100.00 GiB
PE Size               4.00 MiB
Total PE              25599
Alloc PE / Size       25599 / 100.00 GiB
Free  PE / Size       0 / 0
VG UUID               abc123-def456-ghi789-jkl012-mno345

# 6. Back up the LVM metadata
[root@localhost ~]# vgcfgbackup vg0
Volume group "vg0" successfully backed up.
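
vgcfgrestore uses the most recent backup in /etc/lvm/backup by default. LVM also keeps older copies under /etc/lvm/archive, which matters when the latest backup already contains the corruption. A sketch; the archive file name is illustrative:

# List every archived metadata version for vg0
vgcfgrestore --list vg0

# Restore a specific archive, then reactivate
vgcfgrestore -f /etc/lvm/archive/vg0_00005-1234567890.vg vg0
vgchange -a y vg0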

5. RAID Fault Handling

Handle software RAID (mdadm) array faults; an automated monitoring sketch closes the section.


# RAID fault handling

# 1. Check the RAID status
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[0] sdc1[1]
      2097152 blocks super 1.2 [2/2] [UU]

unused devices: <none>

# 2. View the RAID details
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Apr  3 10:00:00 2026
        Raid Level : raid1
        Array Size : 2097152 (2.00 GiB 2.15 GB)
     Used Dev Size : 2097152 (2.00 GiB 2.15 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Apr  3 10:30:00 2026
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : abc123:def456:ghi789:jkl012:mno345
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

# 3. Simulate a disk failure
[root@localhost ~]# mdadm --fail /dev/md0 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

# 4. Check the degraded state
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdc1[1] sdb1[0](F)
      2097152 blocks super 1.2 [2/1] [_U]

unused devices: <none>

# 5. Remove the failed disk
[root@localhost ~]# mdadm --remove /dev/md0 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0

# 6. Add a new disk
[root@localhost ~]# mdadm --add /dev/md0 /dev/sdd1
mdadm: added /dev/sdd1

# 7. Check the rebuild progress
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdd1[2] sdc1[1]
      2097152 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  5.2% (109312/2097152) finish=0.1min speed=109312K/sec

unused devices: <none>

# 8. Wait for the rebuild to complete
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdd1[2] sdc1[1]
      2097152 blocks super 1.2 [2/2] [UU]

unused devices: <none>
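
Rather than polling /proc/mdstat by hand, mdadm can run in monitor mode and raise an alert the moment an array degrades. A minimal sketch, assuming local mail delivery to root works:

# One-off: run the monitor as a daemon and mail root on events
mdadm --monitor --scan --daemonise --mail=root

# Persistent alternative on Red Hat family systems: add
#   MAILADDR root
# to /etc/mdadm.conf, then enable the bundled service
systemctl enable --now mdmonitor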

6. Hands-On Case Study

Handle a failed disk and recover the data; a script automating the full sequence appears at the end.

# Case study: handle a disk failure and recover the data

# 1. Detect the disk failure (SMART is queried on the whole disk, not a partition)
[root@localhost ~]# smartctl -a /dev/sdb | grep "SMART overall-health"
SMART overall-health self-assessment test result: FAILED!

# 2. View the detailed kernel error messages
[root@localhost ~]# dmesg | grep sdb
[12345.678901] sd 1:0:0:0: [sdb] ATA-8: WDC WD30EFRX-68EUZN0, 82.00A82, max UDMA/133
[12345.678902] sd 1:0:0:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
[12345.678903] sd 1:0:0:0: [sdb] Write Protect is off
[12345.678904] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[12345.678905] sd 1:0:0:0: [sdb] No Caching mode page found
[12345.678906] sd 1:0:0:0: [sdb] Assuming drive cache: write through
[12345.678907] blk_update_request: I/O error, dev sdb, sector 12345678

# 3. Check the RAID status
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[0] sdc1[1]
      2097152 blocks super 1.2 [2/2] [UU]

unused devices: <none>

# 4. Mark the disk as failed
[root@localhost ~]# mdadm --fail /dev/md0 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

# 5. Remove the failed disk
[root@localhost ~]# mdadm --remove /dev/md0 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0

# 6. Check the degraded state
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdc1[1]
      2097152 blocks super 1.2 [2/1] [_U]

unused devices: <none>

# 7. Add the new disk
[root@localhost ~]# mdadm --add /dev/md0 /dev/sdd1
mdadm: added /dev/sdd1

# 8. Watch the rebuild progress
[root@localhost ~]# watch -n 1 cat /proc/mdstat
Every 1.0s: cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdd1[2] sdc1[1]
      2097152 blocks super 1.2 [2/1] [_U]
      [===>.................]  recovery = 15.2% (318976/2097152) finish=0.2min speed=159488K/sec

unused devices: <none>

# 9. Wait for the rebuild to complete
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdd1[2] sdc1[1]
      2097152 blocks super 1.2 [2/2] [UU]

unused devices: <none>

# 10. Verify data integrity
[root@localhost ~]# mount /dev/md0 /mnt/raid
[root@localhost ~]# ls -l /mnt/raid
total 4
drwxr-xr-x 2 root root 4096 Apr 3 10:00 data
[root@localhost ~]# md5sum /mnt/raid/data/*
d41d8cd98f00b204e9800998ecf8427e /mnt/raid/data/file1.txt
d41d8cd98f00b204e9800998ecf8427e /mnt/raid/data/file2.txt

# 11. Update the RAID configuration
[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf
[root@localhost ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
ARRAY /dev/md0 metadata=1.2 UUID=abc123:def456:ghi789:jkl012:mno345

# 12. Back up the RAID configuration
[root@localhost ~]# cp /etc/mdadm.conf /etc/mdadm.conf.bak
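
The replace-and-rebuild sequence from this case can be scripted end to end; the loop below blocks until /proc/mdstat reports both members in sync. A sketch using the device names from the example above:

#!/bin/bash
# Replace a failed RAID1 member and wait for the rebuild to finish
mdadm --fail   /dev/md0 /dev/sdb1
mdadm --remove /dev/md0 /dev/sdb1
mdadm --add    /dev/md0 /dev/sdd1

# Poll until the array shows [UU] (both members active and in sync)
until grep -q '\[UU\]' /proc/mdstat; do
    grep -o 'recovery = *[0-9.]*%' /proc/mdstat   # print progress, if any
    sleep 10
done
echo "rebuild complete"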
