
My team is developing an application that consists of 20 microservices, to be deployed on AWS ECS with the EC2 launch type. I am planning to launch 3 EC2 instances in the cluster. Each microservice will have its own task definition and service.

The Docker images are built, pushed to AWS ECR, and then deployed to ECS via Jenkins using a Jenkinsfile. In the task definition JSON files I have set the host port to 0 so that each container gets a random host port and there won't be any port conflicts if we increase the desired count of a service to 2 or more.
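For context, the relevant part of one of the task definitions looks roughly like this (the family, image, and ports are placeholders); with hostPort set to 0, Docker maps the container port to a free ephemeral port on the instance:

{
  "family": "orders-service",
  "containerDefinitions": [
    {
      "name": "orders-service",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-service:latest",
      "memory": 512,
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 0,
          "protocol": "tcp"
        }
      ]
    }
  ]
}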

I guess containers in the same task definition can communicate using links, but in my case we cannot predict which instance a given container will end up on.

If we run RabbitMQ on a different server, how can it talk to the different microservices? I am using the default network mode in the task definitions; should I change that to awsvpc?

sandeep krishna
Is your problem related to how to consume your deployed microservices or RabbitMQ instances when you don't know their IP or public domain? – JRichardsz Dec 21 '18 at 16:26

2 Answers


"awsvpc" network mode is not for service discovery

The awsvpc network mode isn't made for this purpose out of the box. It simply allocates an elastic network interface to each running task, giving the task a dynamic private IP address and an internal DNS name. That brings major network-related benefits such as finer control, flow logs, and traffic monitoring per task definition, but I am not sure there is any way to keep track of those dynamic private IPs and use them wisely.

Ref - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
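For reference, switching would be a small change to the task definition, sketched below with placeholder names. Under awsvpc you no longer set a host port at all, since each task gets its own ENI, and the ECS service additionally needs a networkConfiguration with subnets and security groups:

{
  "family": "orders-service",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "orders-service",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-service:latest",
      "memory": 512,
      "portMappings": [
        { "containerPort": 8080, "protocol": "tcp" }
      ]
    }
  ]
}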

Solutions -

You can look into a service discovery mechanism. AWS recently launched ECS Service Discovery, which you can integrate with your services; it relies heavily on Route 53 and updates DNS records on the fly.
Refs -
https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html
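A rough sketch of what the registration could look like, assuming you have already created a Route 53 auto naming (Cloud Map) service; the cluster name, service names, port, and registry ARN below are placeholders. Because your tasks use bridge networking with dynamic host ports, the discovery service would need to publish SRV records, which is why the container name and port appear in the registry entry:

{
  "cluster": "my-ecs-cluster",
  "serviceName": "orders-service",
  "taskDefinition": "orders-service:1",
  "desiredCount": 2,
  "launchType": "EC2",
  "serviceRegistries": [
    {
      "registryArn": "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-abcd1234",
      "containerName": "orders-service",
      "containerPort": 8080
    }
  ]
}

This kind of document would be passed to aws ecs create-service via --cli-input-json.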

On a lighter note, people have suggested "ecs-task-kite" as well, but it doesn't seem to be production ready as of now - https://github.com/awslabs/ecs-task-kite

Using the AWS ECS service discovery mechanism seems like your best bet for now, although I haven't yet had a chance to test it out.

Another way would be to run RabbitMQ on a separate cluster and expose it via an NLB or ALB, depending on the protocol you need support for, and let the services cluster talk to the RabbitMQ cluster through its ALB/NLB endpoint. Keep the Auto Scaling group at [min, max, desired = 1] in case RabbitMQ is not scalable.
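In that setup each microservice only needs to know the load balancer's DNS name, which could be passed in as plain environment variables in the container definitions; the variable names and endpoint below are made up for illustration (for AMQP on port 5672 an NLB is the likelier fit):

{
  "containerDefinitions": [
    {
      "name": "orders-service",
      "environment": [
        { "name": "RABBITMQ_HOST", "value": "rabbitmq-nlb-1234567890.elb.us-east-1.amazonaws.com" },
        { "name": "RABBITMQ_PORT", "value": "5672" }
      ]
    }
  ]
}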

vivekyad4v

I have a similar deployment running right now. The RabbitMQ server is a separate instance in the same VPC as the cluster, and the security group is configured to allow all traffic from that network. The "weak" part of my deployment is that I'm not doing any service discovery: since the machine is never restarted, its local IP address (10.0....) never changes, so I have defined an extraHosts entry in the task definition pointing to that IP address.
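Roughly like this in the container definition (the container name, hostname, and IP address are placeholders for my real values):

{
  "containerDefinitions": [
    {
      "name": "orders-service",
      "extraHosts": [
        {
          "hostname": "rabbitmq.internal",
          "ipAddress": "10.0.1.25"
        }
      ]
    }
  ]
}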

When a service wants to read or publish a message, it uses that hostname. If for some reason I reboot the RabbitMQ instance and the IP address changes, I'll have to redeploy all the containers with the updated IP address.

This could be fixed by reading the instance metadata, as described in the official docs (EC2 Instance Metadata):

[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/local-ipv4

and registering the address somewhere (Redis, ZooKeeper, etcd ...) so your services can look it up. That way no redeploy is needed if the RabbitMQ instance reboots.

Carlos