First, let me reassure you: if your mdadm array is built on partitions (e.g. sda1), the first "dd" was harmless and did not copy any mdadm metadata (the metadata lives inside the partition itself, not in the MBR).
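If you want to double-check this on your system, you can inspect the RAID superblock directly; a quick sketch, with a placeholder device name (replace sdX1 with one of your actual member partitions):

# the v1.x superblock is stored inside the member partition itself
mdadm --examine /dev/sdX1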
What you are observing is normal MD RAID behavior. You re-added the new drives with two separate mdadm -a commands, right? In that case, mdadm first rebuilds the first drive (leaving the second one in "spare" mode) and only then transitions the second drive to "spare rebuilding" status. If you re-add both drives with a single command (e.g. mdadm /dev/mdX -a /dev/sdX1 /dev/sdY1), the rebuilds run concurrently.
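To make the difference concrete, a minimal sketch with placeholder names (mdX, sdX1 and sdY1 stand in for your actual array and member partitions):

# sequential: the second drive stays "spare" until the first rebuild finishes
mdadm /dev/mdX -a /dev/sdX1
mdadm /dev/mdX -a /dev/sdY1

# concurrent: both drives go to "spare rebuilding" at once
mdadm /dev/mdX -a /dev/sdX1 /dev/sdY1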
Let's have a look at my (test) failed RAID6 array:
[root@kvm-black test]# mdadm --detail /dev/md200
/dev/md200:
        Version : 1.2
  Creation Time : Mon Feb 9 18:40:59 2015
     Raid Level : raid6
     Array Size : 129024 (126.02 MiB 132.12 MB)
  Used Dev Size : 32256 (31.51 MiB 33.03 MB)
   Raid Devices : 6
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Feb 9 18:51:03 2015
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost:md200  (local to host localhost)
           UUID : 80ed5f2d:86e764d5:bd6979ed:01c7997e
         Events : 105

    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync   /dev/loop0
       1       7        1        1      active sync   /dev/loop1
       2       7        2        2      active sync   /dev/loop2
       3       7        3        3      active sync   /dev/loop3
       4       0        0        4      removed
       5       0        0        5      removed
Re-adding the drives using two separate commands (mdadm /dev/md200 -a /dev/loop6; sleep 1; mdadm /dev/md200 -a /dev/loop7) produced the following detail report:
[root@kvm-black test]# mdadm --detail /dev/md200
/dev/md200:
        Version : 1.2
  Creation Time : Mon Feb 9 18:40:59 2015
     Raid Level : raid6
     Array Size : 129024 (126.02 MiB 132.12 MB)
  Used Dev Size : 32256 (31.51 MiB 33.03 MB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Mon Feb 9 18:56:40 2015
          State : clean, degraded, recovering
 Active Devices : 4
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 9% complete

           Name : localhost:md200  (local to host localhost)
           UUID : 80ed5f2d:86e764d5:bd6979ed:01c7997e
         Events : 134

    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync   /dev/loop0
       1       7        1        1      active sync   /dev/loop1
       2       7        2        2      active sync   /dev/loop2
       3       7        3        3      active sync   /dev/loop3
       6       7        6        4      spare rebuilding   /dev/loop6
       5       0        0        5      removed
       7       7        7        -      spare   /dev/loop7
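Note how /dev/loop7 simply waits as a plain spare while /dev/loop6 rebuilds. The recovery progress can also be followed live from /proc/mdstat, for example:

watch -n 5 cat /proc/mdstat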
After some time:
[root@kvm-black test]# mdadm --detail /dev/md200
/dev/md200:
        Version : 1.2
  Creation Time : Mon Feb 9 18:40:59 2015
     Raid Level : raid6
     Array Size : 129024 (126.02 MiB 132.12 MB)
  Used Dev Size : 32256 (31.51 MiB 33.03 MB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Mon Feb 9 18:57:43 2015
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost:md200  (local to host localhost)
           UUID : 80ed5f2d:86e764d5:bd6979ed:01c7997e
         Events : 168

    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync   /dev/loop0
       1       7        1        1      active sync   /dev/loop1
       2       7        2        2      active sync   /dev/loop2
       3       7        3        3      active sync   /dev/loop3
       6       7        6        4      active sync   /dev/loop6
       7       7        7        5      active sync   /dev/loop7
Adding the two drives with a single command (mdadm /dev/md200 -a /dev/loop6 /dev/loop7) leads to this report:
[root@kvm-black test]# mdadm --detail /dev/md200
/dev/md200:
        Version : 1.2
  Creation Time : Mon Feb 9 18:40:59 2015
     Raid Level : raid6
     Array Size : 129024 (126.02 MiB 132.12 MB)
  Used Dev Size : 32256 (31.51 MiB 33.03 MB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Mon Feb 9 18:55:44 2015
          State : clean, degraded, recovering
 Active Devices : 4
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 90% complete

           Name : localhost:md200  (local to host localhost)
           UUID : 80ed5f2d:86e764d5:bd6979ed:01c7997e
         Events : 122

    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync   /dev/loop0
       1       7        1        1      active sync   /dev/loop1
       2       7        2        2      active sync   /dev/loop2
       3       7        3        3      active sync   /dev/loop3
       7       7        7        4      spare rebuilding   /dev/loop7
       6       7        6        5      spare rebuilding   /dev/loop6
So, in the end: let mdadm do its magic, then check that all drives are marked "active sync".
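As a quick final check, something like this should do (the grep pattern is just a convenience; --detail --test, if your mdadm build supports it, encodes the array health in the exit status):

# every member should report "active sync"; the count should equal "Raid Devices"
mdadm --detail /dev/md200 | grep -c 'active sync'

# alternatively, rely on mdadm's exit status
mdadm --detail --test /dev/md200 && echo "array is healthy"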