Architecturally, the solution you seem to have decided on is difficult to implement and usually isn't the best way to approach the problem.
A better approach is often to run more than one set / cluster of web servers in different physical locations, and to keep your databases synchronized between them.
Availability Zones
The easiest way to achieve this is to take advantage of availability zones (AZs). AZs are independent data centers in the same area with very low latency between them, and each AZ has its own network and power feeds. It's rare for multiple AZs to fail at the same time, but it can happen - my feeling is it might happen every couple of years.
The advantage of AZs is that AWS makes it easy to increase application reliability within a single region. You can put a load balancer in front of web / application servers in multiple AZs, with each AZ serving traffic. RDS Multi-AZ deployments keep a synchronous standby in another AZ, so data written in AZ A is also committed in AZ B before the write is acknowledged. If AZ A fails, RDS fails over to the standby in AZ B, typically within seconds to a few minutes.
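If you'd rather script this than click through the console, a minimal boto3 sketch of that setup is below: a Multi-AZ RDS instance plus an application load balancer whose subnets sit in two different AZs. All identifiers, subnet IDs and credentials here are made-up placeholders, not values from your environment.

```python
# Sketch: Multi-AZ RDS instance + load balancer spanning two AZs (placeholder values).
import boto3

rds = boto3.client("rds", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Multi-AZ RDS: AWS provisions a synchronous standby in a second AZ and
# fails over to it automatically if the primary AZ has a problem.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",   # placeholder - use Secrets Manager in practice
    AllocatedStorage=100,
    MultiAZ=True,                     # this flag creates the standby replica
)

# Application load balancer with one subnet per AZ, so web/app traffic
# keeps flowing if a single AZ goes down.
elbv2.create_load_balancer(
    Name="app-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholders, one per AZ
)
```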
Multi Region
Multi-region architectures are more difficult. You route traffic between regions with Route 53, which can either distribute load across regions (weighted or latency-based routing) or fail over to a secondary region when health checks on the primary fail.
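As a rough illustration, a Route 53 failover configuration looks something like the boto3 sketch below. The hosted zone ID, health check ID and IP addresses are placeholders.

```python
# Sketch: Route 53 DNS failover between a primary and a secondary region.
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",          # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-us-east-1",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    # Health check that Route 53 uses to decide when to fail over.
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary-eu-west-1",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "198.51.100.20"}],
                },
            },
        ]
    },
)
```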
You can't have synchronous replicas with RDS across regions. You can create cross-region read replicas, and you can manually promote a replica to master if required. Once the outage resolves, getting back to the original configuration can be problematic, as the old master DB is out of sync with the new master. If you want multi-master databases across regions, I suspect you'll have to run them yourself on EC2 instances.
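For illustration, creating a cross-region read replica and promoting it during an outage looks roughly like this with boto3 (ARNs and identifiers are placeholders):

```python
# Sketch: cross-region RDS read replica, then manual promotion during a failover.
import boto3

# Create the replica in the standby region, pointing at the source DB's ARN.
rds_dr = boto3.client("rds", region_name="eu-west-1")
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:app-db",
    SourceRegion="us-east-1",   # lets boto3 presign the cross-region request
)

# During a regional outage, promote the replica; it becomes a standalone,
# writable master. Replication from the old master stops at that point,
# which is why getting back to the original configuration is awkward.
rds_dr.promote_read_replica(DBInstanceIdentifier="app-db-replica")
```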
TLDR
Use AZs to increase reliability for standard failures. If you need exceptional availability, use another region as either a hot or a cold site, depending on your RTO (recovery time objective).