I'm maintaining an AWS RDS database (db.m6g.xlarge) with a 750 GB gp2 disk. That gives 2250 IOPS, and RDS monitoring tells me the average read throughput is ~50 MB/s and the write throughput is ~2 MB/s.
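For reference, the 2250 figure matches my understanding of the gp2 sizing rule (3 IOPS per GiB, floored at 100 and capped at 16000); a quick sanity-check sketch:

```python
# Back-of-the-envelope check of the gp2 baseline IOPS for this volume size.
# As I understand it, gp2 provisions 3 IOPS per GiB, min 100, max 16000.
def gp2_baseline_iops(size_gib: int) -> int:
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(750))  # 2250, matching what RDS monitoring reports
```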
The database IOPS hovers around 2250, so it clearly needs more IOPS.
I'm looking at the new gp3 storage type, and when I set up a price estimate for a 750 GB gp3 disk I get 12000 IOPS and 500 MiB/s (524.288 MB/s) of throughput for the same price as the current gp2 disk.
This sounds too good to be true. Getting 12000 IOPS on gp2 would require a 4 TB volume and cost about 1500 USD/month, and an io1 disk with 12000 IOPS would cost about 2600 USD/month, while the database with a gp3 disk would cost about 700 USD/month.
I believe the "catch" in this is that any IOPS and/or throughput that exceeds the 12.000 IOPS and 500 MiBps will incur additional costs, but it's not entirely clear to me how this is calculated.
- $0.005 per provisioned IOPS-month over 3,000
- $0.04 per provisioned MiB/s-month over 125 MiB/s
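Taking those two line items at face value, my reading is that the add-on scales with whatever is *provisioned* above the stated thresholds, per month. This is only my interpretation of the pricing text, not a confirmed billing formula:

```python
# My literal reading of the quoted gp3 line items: pay per provisioned unit
# above the stated thresholds, per month. Interpretation only, not confirmed.
IOPS_RATE = 0.005        # USD per provisioned IOPS-month over 3,000
THROUGHPUT_RATE = 0.04   # USD per provisioned MiB/s-month over 125 MiB/s

def gp3_addon_cost(provisioned_iops: int, provisioned_mibps: int) -> float:
    extra_iops = max(provisioned_iops - 3_000, 0)
    extra_mibps = max(provisioned_mibps - 125, 0)
    return extra_iops * IOPS_RATE + extra_mibps * THROUGHPUT_RATE

# e.g. provisioning 20000 IOPS and 500 MiB/s:
print(gp3_addon_cost(20_000, 500))  # 85.0 + 15.0 = 100.0 USD/month under this reading
```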
If I move our database from gp2 to gp3, we get an additional 9750 IOPS. With our current throughput of about 50 MB/s, I can't see how we'd exceed the 500 MiB/s baseline for gp3.
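To double-check that assumption, I figure something like the following (a boto3 sketch; `my-db-instance` is a placeholder identifier) would pull the peak IOPS and throughput from CloudWatch for the last two weeks and compare them against the gp3 baselines:

```python
import datetime
import boto3

# Sketch: pull peak RDS IOPS and throughput from CloudWatch to check whether
# the workload ever gets near the gp3 baselines (12000 IOPS / 500 MiB/s).
# "my-db-instance" is a placeholder identifier.
cw = boto3.client("cloudwatch")
end = datetime.datetime.now(datetime.timezone.utc)
start = end - datetime.timedelta(days=14)

def peak(metric: str) -> float:
    resp = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],
        StartTime=start,
        EndTime=end,
        Period=3600,             # hourly maxima over the last two weeks
        Statistics=["Maximum"],
    )
    return max((p["Maximum"] for p in resp["Datapoints"]), default=0.0)

# Summing read and write peaks overestimates slightly (peaks may not coincide),
# which is fine for a conservative check.
total_iops = peak("ReadIOPS") + peak("WriteIOPS")                         # ops/s
total_mib_s = (peak("ReadThroughput") + peak("WriteThroughput")) / 2**20  # bytes/s -> MiB/s
print(f"peak IOPS ~{total_iops:.0f}, peak throughput ~{total_mib_s:.1f} MiB/s")
```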
- Is there anything particular about the gp3 storage type that I need to be careful about, or that I've overlooked?
- How does an IOPS-month work for IOPS above the 12000 IOPS baseline? If an RDS database hovers around 10000 IOPS but sometimes "bursts" to 20000 IOPS, how does that count towards an IOPS-month?
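For completeness, this is roughly what I expect the switch itself to look like via boto3 (placeholder identifier again; I haven't run this against the production instance yet):

```python
import boto3

# Sketch of the planned gp2 -> gp3 storage change (placeholder identifier).
# Storage modifications typically keep the instance available, but further
# storage changes are usually blocked for several hours afterwards.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",  # placeholder
    StorageType="gp3",
    ApplyImmediately=True,
)
```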