Weird behavior I just observed. I’m still fighting the hard disk crash I had in my file server last week, and I’m now ready to set up a raid5 array on the machine. So I use this command to create the array:

    mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 --auto=md /dev/sde1 /dev/sdf1 /dev/sdg1

For some weird reason, /proc/mdstat now tells me this:

md1 : active raid5 sdg1[3] sdf1[1] sde1[0]
      626304 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [=======>.............]  recovery = 39.5% (124680/313152) finish=0.1min speed=31170K/sec

As you can see, one device is marked as a spare: the status is [3/2] [UU_], and sdg1 shows up with device number [3] instead of [2]. mdadm --detail /dev/md1 actually says so as well:

State : clean, degraded, recovering
...
Number   Major   Minor   RaidDevice State
    0       8       65        0      active sync   /dev/sde1
    1       8       81        1      active sync   /dev/sdf1
    3       8       97        2      spare rebuilding   /dev/sdg1
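
If you don’t want to rely on the --detail output (or on my reading of it), the RAID superblock on each member records its role as well; something like this lets you compare them, with the device names obviously being the ones from my setup:

    # inspect the RAID superblock of the member that shows up as a spare
    mdadm --examine /dev/sdg1

    # compare with one of the members that is listed as active sync
    mdadm --examine /dev/sde1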

The fact that this array is degraded is even reported to syslog (mdadm is running in monitor mode; a minimal setup for that is sketched further down) and mailed to root. Hm. Now, for this example I tried things out with three 300M partitions, and after a short while I could see that everything ends up being just fine:

md1 : active raid5 sdg1[2] sdf1[1] sde1[0]
      626304 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
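
If you want to reproduce the 300M test above without sacrificing real partitions, a few loopback files should work just as well. This is only a rough sketch: the file names and /dev/md9 are arbitrary, and your losetup may prefer slightly different options:

    # create three small backing files and attach them to loop devices
    for i in 0 1 2; do
        dd if=/dev/zero of=/tmp/raidtest$i.img bs=1M count=300
        losetup /dev/loop$i /tmp/raidtest$i.img
    done

    # create the throwaway array and watch the initial "recovery" run
    mdadm --create --verbose /dev/md9 --level=5 --raid-devices=3 /dev/loop0 /dev/loop1 /dev/loop2
    watch cat /proc/mdstat

    # tear everything down again afterwards
    mdadm --stop /dev/md9
    for i in 0 1 2; do losetup -d /dev/loop$i; done
    rm /tmp/raidtest?.img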

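For the record, the syslog entries and the mail come from mdadm’s monitor mode, which most distributions start for you anyway. In case yours doesn’t, a minimal manual setup looks roughly like this; the config path is the Debian one and may differ elsewhere:

    # /etc/mdadm/mdadm.conf (other systems use /etc/mdadm.conf)
    MAILADDR root

    # watch all arrays, log events to syslog and mail them to the address above
    mdadm --monitor --scan --syslog --daemonise
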
Problem is, my real array was supposed to use three 1.5TB partitions, and I didn’t want to wait several hours for the sync to complete just to find out whether something had gone wrong. For some reason, this rather confusing behavior during the first sync phase of a new raid5 array doesn’t seem to be documented anywhere, or at least I couldn’t find anything about it, so here it is now, for others to find on Google ;-)
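
As far as I can tell, this is simply how mdadm initializes the parity of a brand new raid5 array: it builds the array degraded and then reconstructs the last member as if it were a spare being rebuilt, which is why the whole thing shows up as a recovery. For anyone in the same boat with a multi-TB array who would rather not babysit /proc/mdstat for hours, something along these lines should do the trick; --wait simply blocks until any resync or recovery on the device has finished, and the mail variant assumes a working local mail setup:

    # block until the recovery on /dev/md1 has finished, then check the result
    mdadm --wait /dev/md1
    mdadm --detail /dev/md1 | grep -E 'State|Failed|Spare'

    # or fire-and-forget: mail the final state to root once the sync is through
    ( mdadm --wait /dev/md1; mdadm --detail /dev/md1 | mail -s 'md1 sync finished' root ) &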