
I'm looking for the best way to access a service running in a container in ECS cluster "A" from another container running in ECS cluster "B". I don't want to make any ports public.

Currently I have found a way to make it work within the same VPC: by adding the security group of the cluster "B" instances to an inbound rule of the security group of cluster "A", the services from cluster "A" become reachable from containers running in "B" via their private IP addresses.
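For reference, the rule I add looks roughly like this with boto3 (the security group IDs and port are just placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow instances of cluster "B" (identified by their security group)
# to reach the service port on instances of cluster "A".
# The security group IDs and port are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-aaaa1111",  # security group of cluster "A" instances
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "UserIdGroupPairs": [
                {"GroupId": "sg-bbbb2222", "Description": "cluster B instances"}
            ],
        }
    ],
)
```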

But that requires adding this security rule (which is not convenient) and won't work across different regions. Maybe there's a better solution that covers both cases, i.e. the same VPC and region as well as different VPCs and regions?

XZen

2 Answers


The most flexible solution for your problem is to rely on some kind of service discovery. The AWS-native options would be Route 53 Service Registry or AWS Cloud Map. The latter is newer and also the one recommended in the docs; check out the documentation of both.

You could also go for an open-source solution like Consul.
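If you go the Cloud Map route, resolving the registered instances from a container in cluster "B" could look roughly like this (the namespace and service names below are made up):

```python
import boto3

# Cloud Map ("servicediscovery") client; namespace/service names are made up.
sd = boto3.client("servicediscovery")

response = sd.discover_instances(
    NamespaceName="internal.example",  # placeholder namespace
    ServiceName="cluster-a-service",   # placeholder service
    HealthStatus="HEALTHY",
)

# Instances registered via ECS service discovery carry their private IP
# (and port, if configured) as attributes.
for instance in response["Instances"]:
    attrs = instance["Attributes"]
    print(attrs.get("AWS_INSTANCE_IPV4"), attrs.get("AWS_INSTANCE_PORT"))
```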

All this could be overkill if you just need to link two individual containers. In that case you could write a small script, deployed as a Lambda, that queries the AWS API and retrieves the target info.
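A minimal sketch of such a Lambda, assuming the tasks use awsvpc networking (the cluster and service names are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    # Placeholder cluster/service names -- replace with your own.
    cluster = event.get("cluster", "cluster-a")
    service = event.get("service", "my-service")

    task_arns = ecs.list_tasks(cluster=cluster, serviceName=service)["taskArns"]
    if not task_arns:
        return {"ips": []}

    tasks = ecs.describe_tasks(cluster=cluster, tasks=task_arns)["tasks"]

    ips = []
    for task in tasks:
        # With awsvpc networking every task has an ENI attachment that
        # carries the task's private IPv4 address.
        for attachment in task.get("attachments", []):
            for detail in attachment.get("details", []):
                if detail["name"] == "privateIPv4Address":
                    ips.append(detail["value"])
    return {"ips": ips}
```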


Edit: Since you want to expose multiple ports on the same service, you could also use a load balancer and declare multiple target groups for your service. This way you can communicate between containers via the load balancer. Note that this can lead to increased costs because traffic goes through the LB.
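Roughly, attaching a single ECS service to two target groups could look like this (all names, ARNs and ports are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder names and ARNs -- substitute your own resources.
ecs.create_service(
    cluster="cluster-a",
    serviceName="my-service",
    taskDefinition="my-task:1",
    desiredCount=2,
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api/...",
            "containerName": "app",
            "containerPort": 8080,
        },
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/admin/...",
            "containerName": "app",
            "containerPort": 9090,
        },
    ],
)
```

Containers in cluster "B" would then talk to the load balancer's DNS name on the listener ports mapped to those target groups.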

Here is an answer that talks about this approach: https://stackoverflow.com/a/57778058/7391331

trallnag
  • Thank you, I've tried using Route 53, but it seems it doesn't fit my needs - it looks like it allows 'exposing' one port per service, but in my case I have only one service (in every cluster) and multiple tasks (containers) under it, and I want to 'expose' multiple ports for one service. Looks like both Route 53 and AWS Cloud Map solve 'service discovery' but not 'container discovery', maybe I should consider changing my cluster to have 1 service - 1 task - 1 container instead of 1 service and multiple containers, which is inconvenient for me though :( – XZen Nov 25 '20 at 08:47
  • Why inconvenient? I prefer it because I can redeploy individual elements. If everything is in one task definition, everything will be redeployed. Btw, I added another alternative to my answer – trallnag Nov 25 '20 at 09:02
  • Inconvenient because for some containers I have multiple ports to be exposed, so even if I do 1 service - 1 container it won't help. I'll try the alternative way you mentioned, thank you – XZen Nov 26 '20 at 09:06
  • AWS Cloud Map works well for this use case. One **service** contains multiple **instances** (tasks in ECS or containers in general). Each instance is identified by IP and port (and other useful attributes like AVAILABILITY_ZONE). It is not limited to one instance (port) per service. If you're using ECS, feel free to enable [Service Discovery](https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/) for your cluster. – vanekjar Dec 15 '20 at 00:21

To avoid adding custom security rules, you could simply set up VPC peering between regions, which would allow instances in VPC 1 in Region A to reach instances in VPC 2 in Region B. This document describes how such connectivity may be established. The same document also provides references on how to link VPCs in the same region.
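As a rough sketch with boto3 (the regions and VPC IDs below are placeholders), the inter-region peering itself boils down to:

```python
import boto3

# Placeholder regions and VPC IDs -- replace with your own.
requester = boto3.client("ec2", region_name="us-east-1")
accepter = boto3.client("ec2", region_name="eu-west-1")

# Request a peering connection from VPC 1 (Region A) to VPC 2 (Region B).
peering = requester.create_vpc_peering_connection(
    VpcId="vpc-0aaa11112222bbbb3",
    PeerVpcId="vpc-0ccc44445555dddd6",
    PeerRegion="eu-west-1",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Wait until the request is visible in the peer region, then accept it.
accepter.get_waiter("vpc_peering_connection_exists").wait(
    VpcPeeringConnectionIds=[pcx_id]
)
accepter.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Routes to the peer CIDR (and security group rules) still have to be
# added on both sides afterwards.
```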

sashimi