I have an older server that has the following problem:
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sdf1[0] sde1[1] sdd1[2](F)
1953511936 blocks [2/2] [UU]
md0 : active raid5 sdc1[2] sdb1[1] sda1[0]
1953519872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
Note that sdd1 has now failed, and that's a problem. Also note that sdd was only a 2TB drive. To fix the problem AND upgrade my storage I bought two 5TB drives and immediately slotted one of them into md1 to get the mirror rebuilt.
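For reference, swapping the failed member out and the new one in looked roughly like this. This is a sketch, not a transcript: it assumes the failed partition is /dev/sdd1 and the new 5TB partition is /dev/sdf1, which will vary on your system.

```shell
# Remove the failed member from the mirror (it's already marked (F))
mdadm /dev/md1 --fail /dev/sdd1
mdadm /dev/md1 --remove /dev/sdd1

# Add the partition on the new 5TB drive; the rebuild starts immediately
mdadm /dev/md1 --add /dev/sdf1

# Watch the resync progress
cat /proc/mdstat
```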
Since they are 5TB drives I can no longer use fdisk; I have to use parted and a GPT partition table. Now you may be asking yourself why partition the disk at all if it's just a RAID device. I'm not sure I have a great answer besides it's what I've always done.
The problem I'm currently having is that my newly added sdf is not staying in the array past a reboot. I believe that's because I forgot to mark the partition as a RAID member (think type fd, Linux raid autodetect, from the old MBR days), although I was pretty sure that was no longer needed since the kernel would scan the disks. I've changed my partition creation to the steps below, and once the rebuild has completed I'll reboot to verify that this was indeed the problem.
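One other thing worth ruling out if the flag turns out not to be the culprit: a member can also vanish at boot when the array isn't recorded in mdadm.conf inside the initramfs. A sketch of checking and fixing that, assuming a Debian/Ubuntu-style layout (the config path and initramfs tool differ by distro):

```shell
# Show ARRAY lines for the currently running arrays
mdadm --detail --scan

# Append them to mdadm's config if they're missing there
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the config is present at boot
update-initramfs -u
```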
# Add in new disk
dmesg
# VERIFY NEW DISK IS /dev/sdg
parted -a optimal /dev/sdg
mklabel gpt
unit TB
mkpart primary 0.00TB 5.00TB
set 1 raid on
print
quit
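The same partitioning can also be done non-interactively, which is handy if you end up doing this for the second drive too. A sketch with the same assumption that the new disk is /dev/sdg; the raid flag set here is what should let the member survive a reboot:

```shell
# Scripted version of the interactive session above
parted -s -a optimal /dev/sdg \
    mklabel gpt \
    mkpart primary 0% 100% \
    set 1 raid on

# Confirm the partition exists and the raid flag took
parted /dev/sdg print
```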