A raid5 array created with mdadm appears degraded during creation

Weird behavior I just observed. I’m still recovering from a hard disk crash in my file server last week, and I’m now ready to set up a raid5 array on the machine. So I use this command to set the array up:

mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 --auto=md /dev/sde1 /dev/sdf1 /dev/sdg1

For some weird reason, /proc/mdstat now tells me this:

md1 : active raid5 sdg1[3] sdf1[1] sde1[0]
      626304 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [=======>.............]  recovery = 39.5% (124680/313152) finish=0.1min speed=31170K/sec

As you can see from the [3/2] and [UU_], only two of the three devices are active, and mdadm --detail /dev/md1 confirms that the third one is being treated as a spare while it rebuilds:

State : clean, degraded, recovering
...
Number   Major   Minor   RaidDevice State
   0       8       65        0      active sync   /dev/sde1
   1       8       81        1      active sync   /dev/sdf1
   3       8       97        2      spare rebuilding   /dev/sdg1

The fact that this array is degraded is even reported to syslog (mdadm is running in monitor mode) and mailed to root. Hm.
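(In case you’re wondering what “running in monitor mode” means here: roughly a setup like the following. This is just a minimal sketch, and the config file location and exact flags vary between distributions, so check your mdadm man page.)

# in mdadm.conf (often /etc/mdadm.conf or /etc/mdadm/mdadm.conf): where alerts get mailed
MAILADDR root

# run mdadm in monitor mode as a daemon, watching all arrays listed in the config
mdadm --monitor --scan --daemonise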

Now, for this example I tried things with three 300M partitions, and after a short while I could see that everything ended up just fine:

md1 : active raid5 sdg1[2] sdf1[1] sde1[0]
      626304 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
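By the way, if you have no spare partitions lying around for such a throwaway test, sparse files attached to loop devices work just as well. A rough sketch (the file paths and loop device names are made up; losetup tells you which devices it actually used):

# create three 300M sparse files and attach them to loop devices
for i in 0 1 2; do
  truncate -s 300M /tmp/raidtest$i.img
  losetup -f --show /tmp/raidtest$i.img
done

# create a throwaway raid5 from the loop devices (assuming loop0..loop2)
mdadm --create --verbose /dev/md9 --level=5 --raid-devices=3 /dev/loop0 /dev/loop1 /dev/loop2

# tear everything down again afterwards
mdadm --stop /dev/md9
for i in 0 1 2; do losetup -d /dev/loop$i; done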

The problem is, my real array is supposed to use three 1.5TB partitions, and I didn’t want to wait several hours for the sync to complete just to find out that something was wrong. For some reason, this rather confusing behavior during the first sync phase of a new raid5 array doesn’t seem to be documented anywhere, or at least I couldn’t find anything about it, so here it is now, for others to find on Google 😉
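If you do end up waiting on a big array, you at least don’t have to stare at the screen. Something like this is enough (the --wait flag is from mdadm’s misc mode; check your version’s man page before relying on it):

# refresh the resync/recovery progress every five seconds
watch -n 5 cat /proc/mdstat

# or simply block until the recovery is done, then look at the final state
mdadm --wait /dev/md1
mdadm --detail /dev/md1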

3 Comments

  1. Thanks! I’m glad you put this out there. I’ve been a little worried, wondering if all this time watching my new raid5 build with four 1.5TB disks was going to be for nothing!


  2. Thanks for the heads-up! I’ve been playing with different RAID configurations and kept puzzling over this. RAID6 creation doesn’t show this weird [UUU_] behavior, which made it even more confusing.


  3. I came across this blog by accident but I thought I should comment…

    This is documented behaviour in the mdadm man page. Look under the section “CREATE MODE” and you will see:

    “When creating a RAID5 array, mdadm will automatically create a degraded array with an extra spare drive. This is because building the spare into a degraded array is in general faster than resyncing the parity on a non-degraded, but not clean, array. This feature can be over-ridden with the --force option.”

    I hope that helps to clarify.

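For completeness, here is a sketch of what that --force override would look like for the array from this post. I haven’t tested this variant myself, so treat it as an illustration of the quoted man page passage rather than a recommendation:

# create the array non-degraded right away; the parity is then resynced
# instead of being rebuilt onto a spare (see the CREATE MODE quote above)
mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 --force /dev/sde1 /dev/sdf1 /dev/sdg1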
