
I've set up 8 × 500 GB disks in RAID0 on an EBS-optimized EC2 instance, and I ran iops.py to check the speed, but it seems really slow. Does anyone know if these are normal speeds for RAID0? I'd have thought I'd be getting over 1000 IO/s more or less constantly.

Here are the stats:

python python/iops.py --num_threads 16 --time 2 /dev/md0
/dev/md0,   4.29 TB, 32 threads:
 512   B blocks:  831.6 IO/s, 415.8 KiB/s (  3.4 Mbit/s)
   1 KiB blocks:  290.8 IO/s, 290.8 KiB/s (  2.4 Mbit/s)
   2 KiB blocks:  543.7 IO/s,   1.1 MiB/s (  8.9 Mbit/s)
   4 KiB blocks:  581.5 IO/s,   2.3 MiB/s ( 19.1 Mbit/s)
   8 KiB blocks:  275.5 IO/s,   2.2 MiB/s ( 18.1 Mbit/s)
  16 KiB blocks:  486.8 IO/s,   7.6 MiB/s ( 63.8 Mbit/s)
  32 KiB blocks:  415.3 IO/s,  13.0 MiB/s (108.9 Mbit/s)
  64 KiB blocks:  277.8 IO/s,  17.4 MiB/s (145.6 Mbit/s)
 128 KiB blocks:  205.3 IO/s,  25.7 MiB/s (215.3 Mbit/s)
 256 KiB blocks:  116.4 IO/s,  29.1 MiB/s (244.2 Mbit/s)
 512 KiB blocks:  114.1 IO/s,  57.0 MiB/s (478.5 Mbit/s)
   1 MiB blocks:   60.4 IO/s,  60.4 MiB/s (506.8 Mbit/s)
   2 MiB blocks:   28.5 IO/s,  57.1 MiB/s (478.9 Mbit/s)


cat /proc/mdstat
Personalities : [raid0] 
md0 : active raid0 xvde[0] xvdf[7] xvdg[6] xvdh[5] xvdi[4] xvdb[3] xvdc[2] xvdd[1]
      4194295808 blocks super 1.2 512k chunks

unused devices: <none>

mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jul 29 08:23:38 2013
     Raid Level : raid0
     Array Size : 4194295808 (3999.99 GiB 4294.96 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Mon Jul 29 08:23:38 2013
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : hostname
           UUID : 7df463c3:5de17a1b:cfe3345c:8e8c22ac
         Events : 0

    Number   Major   Minor   RaidDevice State
       0     202       64        0      active sync   /dev/xvde
       1     202       48        1      active sync   /dev/xvdd
       2     202       32        2      active sync   /dev/xvdc
       3     202       16        3      active sync   /dev/xvdb
       4     202      128        4      active sync   /dev/xvdi
       5     202      112        5      active sync   /dev/xvdh
       6     202       96        6      active sync   /dev/xvdg
       7     202       80        7      active sync   /dev/xvdf
Mackwerk
    Amazon is a cloud service provider whose business model depends on sharing hardware resources across different customers. It's impossible for us to say whether this is enough IOPS; you'll have to direct this question to Amazon. Alternatively, look at your contract and see if there is any SLA on performance. – pauska Jul 29 '13 at 11:11

2 Answers

3

Are you paying for provisioned IOPS? If not, you'll be getting around 100 IOPS per EBS volume on average, which ties in with what you're seeing across 8 EBS volumes striped together.

From here:

Standard volumes offer storage for applications with moderate or bursty I/O requirements. Standard volumes deliver approximately 100 IOPS on average with a best effort ability to burst to hundreds of IOPS. Standard volumes are also well suited for use as boot volumes, where the burst capability provides fast instance start-up times.
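A rough back-of-the-envelope check, assuming the ~100 IOPS-per-volume baseline quoted above (the exact baseline is Amazon's approximation, not a guarantee):

```python
# Expected aggregate IOPS for 8 standard (non-provisioned) EBS volumes,
# assuming the ~100 IOPS-per-volume average quoted above.
BASELINE_IOPS_PER_VOLUME = 100  # approximate average for standard volumes
num_volumes = 8

expected = BASELINE_IOPS_PER_VOLUME * num_volumes
measured_512b = 831.6  # from the iops.py run above, 512 B blocks

print(f"expected ~{expected} IOPS, measured {measured_512b} IO/s")
```

The measured 831.6 IO/s at small block sizes lands almost exactly on that 800 IOPS estimate, which is consistent with standard (non-provisioned) volumes.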

Chris McKeown
2

You're not really giving us enough information to go on. Taking any kind of write penalty out of the equation, if we assume you're using standard 7,200 RPM SATA drives, then your per-disk IOPS would be as follows:

1 / (12 ms seek time (0.012) + 5.5 ms latency (0.0055)) = 57.14 IOPS

So we would use the following to determine the IOPS of the array:

8 (number of disks) × 57.14 (max IO of a single disk) ≈ 457 max read IOPS for the array.
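The two formulas above can be sketched in a few lines, assuming the hypothetical 12 ms average seek and 5.5 ms rotational latency figures used as the example:

```python
# Per-disk IOPS estimate for a 7,200 RPM SATA drive, then the RAID0
# aggregate: reads are spread across all member disks, so the array's
# max read IOPS is roughly num_disks * per-disk IOPS.
seek_time_s = 0.012            # average seek time (12 ms, example figure)
rotational_latency_s = 0.0055  # average rotational latency (5.5 ms)

per_disk_iops = 1 / (seek_time_s + rotational_latency_s)
num_disks = 8
array_read_iops = num_disks * per_disk_iops

print(f"per disk: {per_disk_iops:.2f} IOPS")   # ~57.14
print(f"array:    {array_read_iops:.0f} IOPS") # ~457
```

Swap in the manufacturer's seek-time and latency numbers for your actual disks to get a comparable estimate; for write IOPS you'd also divide by the write penalty of the RAID level.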

So it comes down to: what speed/type of disks are you using? Find the average seek time and latency from the manufacturer and plug them into the formula above to see whether the IOPS you're measuring is what's expected (again, write penalty and overhead should be factored in for write IO).

This might be harder since you're not in a physical environment, but I'm sure someone can give you a clue about the disks used in EC2 pools.

David V