I have two microservices, A and B, written in PHP. Their endpoints are a.example.com and b.example.com. Both services need to be publicly accessible. In addition, service B makes a lot of curl requests to a.example.com while processing.
Both services run in the same AWS VPC (i.e. within the same private network). I also have an external CDN (e.g. Akamai) in front of each endpoint.
Design 1:
Public and Service B make requests to A
|
V
a.example.com
|
V
CDN
|
V
Public Load Balancer
|
V
Web Servers for service A
- Higher cost, because service B's requests to A are billed as AWS egress bandwidth
- Better protection for service A, as all traffic reaches it through the CDN
- Slower response time for service B, as its traffic leaves the VPC and comes back in through the CDN
Design 2:
Public makes requests to A        Service B makes requests to A
          |                                    |
          V                                    V
    a.example.com                   a-internal.example.com
          |                                    |
          V                                    |
         CDN                                   |
          |                                    |
          V                                    V
 Public Load Balancer               Internal Load Balancer
          |                                    |
          V                                    V
              Web Servers for service A
- Faster response time for service B, as its traffic stays inside the private network
- Lower bandwidth cost, as internal traffic is not billed as AWS egress
- A risk that service A may be overwhelmed by service B, since the internal path bypasses the CDN's protection
- Extra cost for the additional internal load balancer
- Extra complexity from maintaining two endpoints, a. and a-internal. (a sketch of how I would keep the endpoint configurable in service B follows this list)
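To show why the two-endpoint complexity is mostly a configuration concern, here is a minimal sketch of how service B might call service A with the base URL injected from the environment. SERVICE_A_BASE_URL is a hypothetical variable name I made up for illustration; switching between Design 1, Design 2, or a single split-DNS name would then only change configuration, not code.

<?php
// Sketch of how service B might call service A, with the base URL injected via
// configuration. SERVICE_A_BASE_URL is a hypothetical variable: it could point
// at https://a.example.com (Design 1), https://a-internal.example.com
// (Design 2), or stay as a.example.com if split DNS resolves it internally.

$baseUrl = getenv('SERVICE_A_BASE_URL') ?: 'https://a.example.com';

$ch = curl_init($baseUrl . '/some/endpoint');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 2,
    CURLOPT_TIMEOUT        => 5,
]);

$response = curl_exec($ch);
if ($response === false) {
    error_log('Call to service A failed: ' . curl_error($ch));
}
curl_close($ch);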
Question: are these correct designs for microservice inter-connections? If not, what is the common design for connecting microservices to each other?
Bonus question: is split-horizon DNS (e.g. an AWS Route 53 private hosted zone) a good fit if I want to keep the single endpoint a.example.com instead of maintaining two (a. and a-internal.)?
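To make the bonus question concrete, here is roughly what I have in mind, sketched with the AWS SDK for PHP v3 (the region, VPC ID and load balancer details below are placeholders, not my real values): a private hosted zone scoped to a.example.com only, so that inside the VPC the name resolves to the internal load balancer while public resolvers keep going through the CDN. Scoping the zone to a.example.com rather than all of example.com should avoid shadowing b.example.com inside the VPC.

<?php
// Sketch only: region, VPC ID and the internal ALB's details are placeholders.
require 'vendor/autoload.php';

use Aws\Route53\Route53Client;

$route53 = new Route53Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

// Create a private hosted zone for a.example.com, associated with the VPC.
// Inside the VPC this zone answers for a.example.com; public DNS is untouched.
$zone = $route53->createHostedZone([
    'Name'            => 'a.example.com',
    'CallerReference' => uniqid('a-private-', true),
    'VPC'             => [
        'VPCRegion' => 'us-east-1',
        'VPCId'     => 'vpc-0123456789abcdef0',
    ],
    'HostedZoneConfig' => [
        'Comment'     => 'Resolve a.example.com to the internal LB inside the VPC',
        'PrivateZone' => true,
    ],
]);

// Alias the zone apex (a.example.com) to the internal load balancer.
// An alias A record is used because a CNAME is not allowed at a zone apex.
$route53->changeResourceRecordSets([
    'HostedZoneId' => $zone['HostedZone']['Id'],
    'ChangeBatch'  => [
        'Changes' => [
            [
                'Action'            => 'UPSERT',
                'ResourceRecordSet' => [
                    'Name' => 'a.example.com',
                    'Type' => 'A',
                    'AliasTarget' => [
                        'HostedZoneId'         => 'Z35SXDOTRQ7X7K', // placeholder: the ELB's own hosted zone ID
                        'DNSName'              => 'internal-service-a-123456.us-east-1.elb.amazonaws.com',
                        'EvaluateTargetHealth' => false,
                    ],
                ],
            ],
        ],
    ],
]);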