I've set up 8x500 GB disks in RAID 0 on an EBS-optimized EC2 instance, and I ran iops.py to check the speed, but it seems really slow. Is this normal for RAID 0? I'd have thought I'd be getting over 1000 IO/s more or less constantly.
Here are the stats:
python python/iops.py --num_threads 16 --time 2 /dev/md0
/dev/md0, 4.29 TB, 32 threads:
512 B blocks: 831.6 IO/s, 415.8 KiB/s ( 3.4 Mbit/s)
1 KiB blocks: 290.8 IO/s, 290.8 KiB/s ( 2.4 Mbit/s)
2 KiB blocks: 543.7 IO/s, 1.1 MiB/s ( 8.9 Mbit/s)
4 KiB blocks: 581.5 IO/s, 2.3 MiB/s ( 19.1 Mbit/s)
8 KiB blocks: 275.5 IO/s, 2.2 MiB/s ( 18.1 Mbit/s)
16 KiB blocks: 486.8 IO/s, 7.6 MiB/s ( 63.8 Mbit/s)
32 KiB blocks: 415.3 IO/s, 13.0 MiB/s (108.9 Mbit/s)
64 KiB blocks: 277.8 IO/s, 17.4 MiB/s (145.6 Mbit/s)
128 KiB blocks: 205.3 IO/s, 25.7 MiB/s (215.3 Mbit/s)
256 KiB blocks: 116.4 IO/s, 29.1 MiB/s (244.2 Mbit/s)
512 KiB blocks: 114.1 IO/s, 57.0 MiB/s (478.5 Mbit/s)
1 MiB blocks: 60.4 IO/s, 60.4 MiB/s (506.8 Mbit/s)
2 MiB blocks: 28.5 IO/s, 57.1 MiB/s (478.9 Mbit/s)
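For context on what those numbers measure: as I understand it, iops.py just times random reads at each block size. A rough sketch of that kind of measurement (simplified from the real script, which I believe uses O_DIRECT and multiple threads; the temp-file demo here is only illustrative, the real run targets /dev/md0):

```python
import os
import random
import tempfile
import time

def measure_iops(path, block_size=4096, duration=0.5):
    """Issue random reads of block_size bytes for `duration` seconds
    and return the achieved IO/s. (The real tool opens the device with
    O_DIRECT to bypass the page cache; skipped here so this sketch can
    run against an ordinary file.)"""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        count = 0
        deadline = time.time() + duration
        while time.time() < deadline:
            # pick a block-aligned random offset within the file
            offset = random.randrange(0, size - block_size) // block_size * block_size
            os.pread(fd, block_size, offset)
            count += 1
        return count / duration
    finally:
        os.close(fd)

# demo against a scratch file (stand-in for the block device)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(8 * 1024 * 1024))
    scratch = f.name
print("%.1f IO/s" % measure_iops(scratch, block_size=4096, duration=0.2))
os.unlink(scratch)
```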
cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 xvde[0] xvdf[7] xvdg[6] xvdh[5] xvdi[4] xvdb[3] xvdc[2] xvdd[1]
4194295808 blocks super 1.2 512k chunks
unused devices: <none>
mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Jul 29 08:23:38 2013
Raid Level : raid0
Array Size : 4194295808 (3999.99 GiB 4294.96 GB)
Raid Devices : 8
Total Devices : 8
Persistence : Superblock is persistent
Update Time : Mon Jul 29 08:23:38 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : hostname
UUID : 7df463c3:5de17a1b:cfe3345c:8e8c22ac
Events : 0
Number Major Minor RaidDevice State
0 202 64 0 active sync /dev/xvde
1 202 48 1 active sync /dev/xvdd
2 202 32 2 active sync /dev/xvdc
3 202 16 3 active sync /dev/xvdb
4 202 128 4 active sync /dev/xvdi
5 202 112 5 active sync /dev/xvdh
6 202 96 6 active sync /dev/xvdg
7 202 80 7 active sync /dev/xvdf
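One thing I noticed while writing this up (quick back-of-envelope arithmetic, not from any docs): with the 512 KiB chunk size shown above, a random read at or below the chunk size lands on a single EBS volume, so per-request IOPS would be bounded by one volume rather than all eight. A small sanity check of that geometry:

```python
# geometry from the mdadm output above: 512 KiB chunks across 8 devices
chunk_kib = 512
devices = 8
print("full stripe:", chunk_kib * devices, "KiB")

for block_kib in (4, 64, 512, 1024, 2048, 4096):
    # ceil(block / chunk), capped at the device count; ignores alignment
    spanned = min(devices, -(-block_kib // chunk_kib))
    print(f"{block_kib:5d} KiB read touches ~{spanned} volume(s)")
```

If that reasoning is right, only the 1 MiB+ block sizes in the table above would ever fan a single read out across more than one volume.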