
Creating a RAID 5 Array from Multiple Disks and Expanding It Later

✅ Create the volumes and attach them to the EC2 instance; they then show up as follows

[root@ip-127-0-0-1 data]# lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1       259:0    0  40G  0 disk 
├─nvme0n1p1   259:1    0  40G  0 part /
├─nvme0n1p127 259:2    0   1M  0 part 
└─nvme0n1p128 259:3    0  10M  0 part /boot/efi
nvme1n1       259:4    0  25G  0 disk 
nvme2n1       259:5    0  25G  0 disk 
nvme3n1       259:6    0  25G  0 disk 
nvme4n1       259:7    0  25G  0 disk 
nvme5n1       259:8    0  25G  0 disk 
nvme6n1       259:9    0  25G  0 disk 
nvme7n1       259:10   0  25G  0 disk
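The volume provisioning itself is not shown above. If you are creating the disks with the AWS CLI, a minimal sketch might look like this (the availability zone, volume ID, and instance ID below are placeholders; repeat per data disk):

# Hypothetical IDs; substitute your own zone/instance. Creates one 25 GiB gp3
# volume and attaches it to the instance.
aws ec2 create-volume --availability-zone us-east-1a --size 25 --volume-type gp3
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf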

✅ Create the RAID 5 array

✅ First creation
[root@ip-127-0-0-1 data]# mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
mdadm: Defaulting to version 1.2 metadata
[1040141.720624] raid6: avx512x4 gen() 15874 MB/s
[1040141.900622] raid6: avx512x2 gen() 15900 MB/s
[1040142.080622] raid6: avx512x1 gen() 15608 MB/s
[1040142.260621] raid6: avx2x4   gen() 13282 MB/s
[1040142.440613] raid6: avx2x2   gen() 17548 MB/s
[1040142.620613] raid6: avx2x1   gen() 12584 MB/s
[1040142.634906] raid6: using algorithm avx2x2 gen() 17548 MB/s
[1040142.830616] raid6: .... xor() 17751 MB/s, rmw enabled
[1040142.848740] raid6: using avx512x2 recovery algorithm
[1040142.876973] xor: automatically using best checksumming function   avx       
[1040142.911142] async_tx: api initialized (async)
[1040142.953397] md/raid:md5: device nvme3n1 operational as raid disk 2
[1040142.977724] md/raid:md5: device nvme2n1 operational as raid disk 1
[1040143.007732] md/raid:md5: device nvme1n1 operational as raid disk 0
[1040143.038997] md/raid:md5: raid level 5 active with 3 out of 4 devices, algorithm 2
[1040143.074939] md5: detected capacity change from 0 to 157181952
[1040143.104449] md: recovery of RAID array md5
mdadm: array /dev/md5 started.
✅ Deleting and re-creating after a problem
[root@ip-127-0-0-1 data]# mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
mdadm: Defaulting to version 1.2 metadata
[1040812.304463] md/raid:md5: device nvme3n1 operational as raid disk 2
[1040812.341375] md/raid:md5: device nvme2n1 operational as raid disk 1
[1040812.378576] md/raid:md5: device nvme1n1 operational as raid disk 0
[1040812.412584] md/raid:md5: raid level 5 active with 3 out of 4 devices, algorithm 2
[1040812.452558] md5: detected capacity change from 0 to 157181952
mdadm: array /dev/md5 started.
[1040812.490559] md: recovery of RAID array md5
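The teardown that preceded this re-creation is not shown in the transcript. A common sequence for deleting an array before re-creating it (a sketch, not necessarily the author's exact commands):

# Unmount if mounted, stop the array, then clear the member superblocks
# so the disks can be reused cleanly.
mdadm --stop /dev/md5
mdadm --zero-superblock /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1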

✅ Monitor the build

✅ Check the array status
[root@ip-127-0-0-1 data]# mdadm --detail /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Tue Apr 29 08:51:21 2025
        Raid Level : raid5
        Array Size : 78590976 (74.95 GiB 80.48 GB)
     Used Dev Size : 26196992 (24.98 GiB 26.83 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Apr 29 08:52:21 2025
             State : clean, degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 24% complete

              Name : 5
              UUID : 765eb9e7:993e38a9:30e4c551:c2d3696b
            Events : 4

    Number   Major   Minor   RaidDevice State
       0     259        4        0      active sync   /dev/sdb
       1     259        5        1      active sync   /dev/sdc
       2     259        6        2      active sync   /dev/sdd
       4     259        7        3      spare rebuilding   /dev/sde
✅ Check the build progress
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [==>..................]  recovery = 13.0% (3417244/26196992) finish=4.4min speed=85522K/sec

unused devices: <none>
Wait until the recovery reaches 100% (about 6-8 minutes here); formatting the array before the initial sync finishes leads to the errors described later.
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [===================>.]  recovery = 98.8% (25887256/26196992) finish=0.0min speed=87390K/sec

unused devices: <none>
[root@ip-127-0-0-1 data]# [1041121.201669] md: md5: recovery done.
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
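Instead of re-running cat by hand, the progress can be polled, e.g.:

watch -n 5 cat /proc/mdstat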

✅ Configure automatic mounting (persistence)

✅ Create the XFS filesystem on /dev/md5
[root@ip-127-0-0-1 data]# mkfs.xfs /dev/md5
mkfs.xfs: /dev/md5 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.
✅ Inspect the RAID 5 layout
[root@ip-127-0-0-1 data]# lsblk
NAME          MAJ:MIN RM SIZE RO TYPE  MOUNTPOINTS
nvme0n1       259:0    0  40G  0 disk  
├─nvme0n1p1   259:1    0  40G  0 part  /
├─nvme0n1p127 259:2    0   1M  0 part  
└─nvme0n1p128 259:3    0  10M  0 part  /boot/efi
nvme1n1       259:4    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 
nvme2n1       259:5    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 
nvme3n1       259:6    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 
nvme4n1       259:7    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 
✅ Update mdadm.conf
[root@ip-127-0-0-1 data]# mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
ARRAY /dev/md5 metadata=1.2 name=5 UUID=bf661b1a:944c5721:0250d992:e188b1b9
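As an optional sanity check while the array is still unmounted, you can verify that the config just written actually assembles the array; a sketch:

mdadm --stop /dev/md5        # safe only while the array is unmounted
mdadm --assemble --scan      # re-assembles /dev/md5 from /etc/mdadm.conf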
✅ Get the filesystem UUID of /dev/md5
[root@ip-127-0-0-1 data]# blkid /dev/md5
/dev/md5: UUID="f8efe843-ed80-4ad0-bbc4-5c18677b257f" BLOCK_SIZE="512" TYPE="xfs"
✅ Configure mounting at boot
[root@ip-127-0-0-1 data]# tail -1 /etc/fstab 
UUID=f8efe843-ed80-4ad0-bbc4-5c18677b257f /data/raid-storge/  xfs  defaults,nofail  0  0
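Note that mount -a fails if the mount point does not exist, so create it first (not shown in the transcript):

mkdir -p /data/raid-storge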
[root@ip-127-0-0-1 data]# mount -a
[1041519.933683] XFS (md5): Mounting V5 Filesystem
[1041520.000302] XFS (md5): Ending clean mount
✅ Check the size of the mounted path
[root@ip-127-0-0-1 data]# df -h
Filesystem        Size  Used Avail Use% Mounted on
devtmpfs          4.0M     0  4.0M   0% /dev
tmpfs             3.9G     0  3.9G   0% /dev/shm
tmpfs             1.6G  636K  1.6G   1% /run
/dev/nvme0n1p1     40G  5.4G   35G  14% /
tmpfs             3.9G     0  3.9G   0% /tmp
/dev/nvme0n1p128   10M  1.3M  8.7M  13% /boot/efi
overlay            40G  5.4G   35G  14% /var/lib/docker/overlay2/84699b7470c48b0c4a1cb8b91b868be21f96c388de173f25df9ac741be7d0d0e/merged
tmpfs             782M     0  782M   0% /run/user/1000
/dev/md5           75G  568M   75G   1% /data/raid-storge
⚠️ Caveats
The error below occurs when you start formatting before the RAID 5 build has reached 100%:
[root@ip-127-0-0-1 data]# mount -a
[1041253.813403] XFS (md5): Mounting V5 Filesystem
[1041253.829487] XFS (md5): totally zeroed log
[1041253.849364] XFS (md5): Corruption warning: Metadata has LSN (1:352) ahead of current LSN (1:0). Please unmount and run xfs_repair (>= v4.3) to resolve.
[1041253.914765] XFS (md5): log mount/recovery failed: error -22
[1041253.942365] XFS (md5): log mount failed
mount: /data/raid-storge: wrong fs type, bad option, bad superblock on /dev/md5, missing codepage or helper program, or other error.
The fix
Force-wipe every filesystem signature on the RAID device:
[root@ip-127-0-0-1 data]# wipefs -a /dev/md5
/dev/md5: 4 bytes were erased at offset 0x00000000 (xfs): 58 46 53 42
# Confirm the device is clean again (no output means no signatures remain):
[root@ip-127-0-0-1 data]# wipefs /dev/md5
# Zero out the leading region with dd (covers the log area and superblock)
[root@ip-127-0-0-1 data]# dd if=/dev/zero of=/dev/md5 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.185719 s, 565 MB/s
# Re-create the XFS filesystem
[root@ip-127-0-0-1 data]# mkfs.xfs -f /dev/md5
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md5               isize=512    agcount=16, agsize=1227904 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=19646464, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
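With a clean filesystem in place, the mount can be retried using the fstab entry added earlier:

mount -a
df -h /data/raid-storge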

✅ Expand the RAID 5 array by adding a new disk

✅ Confirm the new disk has no partitions
[root@ip-127-0-0-1 data]# lsblk
NAME          MAJ:MIN RM SIZE RO TYPE  MOUNTPOINTS
nvme0n1       259:0    0  40G  0 disk  
├─nvme0n1p1   259:1    0  40G  0 part  /
├─nvme0n1p127 259:2    0   1M  0 part  
└─nvme0n1p128 259:3    0  10M  0 part  /boot/efi
nvme1n1       259:4    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 /data/raid-storge
nvme2n1       259:5    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 /data/raid-storge
nvme3n1       259:6    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 /data/raid-storge
nvme4n1       259:7    0  25G  0 disk  
└─md5           9:5    0  75G  0 raid5 /data/raid-storge
nvme5n1       259:8    0  25G  0 disk 
✅ If /dev/nvme5n1 carries partitions or a filesystem, wipe it:
[root@ip-127-0-0-1 data]# wipefs -a /dev/nvme5n1
[root@ip-127-0-0-1 data]# dd if=/dev/zero of=/dev/nvme5n1 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.207361 s, 506 MB/s
✅ Add the disk to the array

This adds it as a spare disk:

[root@ip-127-0-0-1 data]# mdadm --add /dev/md5 /dev/nvme5n1
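To confirm the new device landed as a spare before growing:

mdadm --detail /dev/md5 | grep -i spare   # the new disk should show as "spare"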
✅ Grow the RAID 5 array (increase the device count)
[root@ip-127-0-0-1 data]# mdadm --grow /dev/md5 --raid-devices=5
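mdadm can also journal the reshape's critical section to a file outside the array, which helps recovery if the machine crashes mid-reshape; a hedged variant of the same command:

# --backup-file must live on a device that is NOT part of the array.
mdadm --grow /dev/md5 --raid-devices=5 --backup-file=/root/md5-grow.backup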
✅ Check the reshape progress
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 nvme5n1[5] nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      [>....................]  reshape =  4.5% (1202548/26196992) finish=12.4min speed=33539K/sec

unused devices: <none>
[root@ip-127-0-0-1 data]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 nvme5n1[5] nvme4n1[4] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      78590976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      [=====>...............]  reshape = 28.3% (7431172/26196992) finish=9.4min speed=33237K/sec

unused devices: <none>
[root@ip-127-0-0-1 data]# [1042541.211177] md: md5: reshape done.
[1042541.299852] md5: detected capacity change from 157181952 to 209575936
✅ Update the mdadm configuration file
[root@ip-127-0-0-1 data]# mdadm --detail --scan >> /etc/mdadm.conf
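Note that this appends a second ARRAY line for /dev/md5 (the first was added when the array was created). One way to keep the file to a single, current entry (a sketch):

# Remove the stale ARRAY entry, then append the current scan output.
sed -i '\|^ARRAY /dev/md5|d' /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf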
✅ Grow the filesystem
For an XFS filesystem:
[root@ip-127-0-0-1 data]# xfs_growfs /data/raid-storge
meta-data=/dev/md5               isize=512    agcount=16, agsize=1227904 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=19646464, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 19646464 to 26196992
For ext4, use instead:
resize2fs /dev/md5

Note: xfs_growfs must be run while the filesystem is mounted (it takes the mount point); running it against an unmounted device will fail. resize2fs, by contrast, can also resize an unmounted ext4 filesystem.

✅ Verify the new capacity
[root@ip-127-0-0-1 data]# df -h /data/raid-storge
Filesystem      Size  Used Avail Use% Mounted on
/dev/md5        100G  747M  100G   1% /data/raid-storge
⚠️ Caveats:

A reshape is a high-risk operation; back up your data beforehand.
Do not power off, reboot, or reformat while a reshape is in progress.
Reshapes are slow, especially on large disks, and can take hours or more (see the sysctl sketch below).
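On the last point: the md driver throttles rebuild/reshape bandwidth through two sysctls, which can be raised temporarily if the disks and workload allow (the value below is illustrative):

sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # current limits (KB/s per device)
sysctl -w dev.raid.speed_limit_min=50000                   # raise the floor to speed up a reshape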
