
I am trying to configure load balancing for my ECS Fargate cluster. Currently I have an application load balancer set up to redirect from port 80 to port 9003, which is the port the containers use for their service. I would like the load balancer to forward to port 9003 over HTTPS, if that is possible. Currently I have a listener on port 9003 that forwards to an IP-based target group. Below are the details for the target group:
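For reference, this is roughly the CLI equivalent of what I would expect to run to add an HTTPS listener that forwards to that target group (all ARNs below are placeholders for my actual resources, and I have not verified this end to end):

```shell
# Hypothetical sketch: an HTTPS listener on the ALB that terminates TLS at the
# load balancer and forwards to the existing IP-based target group on port 9003.
# Every ARN here is a placeholder, not a real resource.
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-alb/abc123 \
    --protocol HTTPS \
    --port 443 \
    --certificates CertificateArn=arn:aws:acm:us-west-2:123456789012:certificate/example \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-tg/def456 \
    --region us-west-2
```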

[Screenshot: Target group details]

The container is designed to run a script at startup. When I run the script locally, it takes a few minutes to complete, so I am not sure whether I need to increase the timeout and interval settings in the target group's health check. I also have a security group set up on the ECS service. Currently it allows any traffic from the application load balancer to reach the containers running on the service, and I have also added an ingress rule allowing the container port (9003) to communicate with anything within the VPC.
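If it turns out the health check does need loosening, I assume it would be something along these lines (the ARN is a placeholder and the values are guesses on my part, not tested settings):

```shell
# Hypothetical sketch: relax the target group's health check to allow for the
# slow startup script. The interval must be greater than the timeout.
# Placeholder ARN; values chosen for illustration only.
aws elbv2 modify-target-group \
    --target-group-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-tg/def456 \
    --health-check-interval-seconds 60 \
    --health-check-timeout-seconds 30 \
    --healthy-threshold-count 2 \
    --unhealthy-threshold-count 5 \
    --region us-west-2
```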

The issue I am facing is that the containers are stuck in a loop: they provision, then minutes later they drain and stop, and ECS Fargate registers a new IP in the target group. The only details present are as follows:

Stopped reason: Essential container in task exited

Under Details in the Container section:

Exit Code   0
Entry point ["bash"]
Command ["/tmp/init.sh"]

Any advice on how to overcome this would be helpful.

Dave Michaels
  • "I am not sure if I need to increase the timeout and interval settings in the target group." There's an "Edit" button for the health checks right there in your screenshot. – Mark B Sep 29 '21 at 21:31
  • Is there nothing in the container logs stating why the container exited? I would start by adding more logging to your container application to determine why it is exiting. – Mark B Sep 29 '21 at 21:33

1 Answer


The stopped reason indicates the problem is internal to your application: the process Docker is monitoring (your init script) is exiting. If it were related to health checks, the stopped reason would specifically say "Task failed ELB health checks". Note that an exit code of 0 means the script ran to completion and returned normally; when the essential container's main process exits, the task stops, whether or not the exit was an error. You need to check your application logs.
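A common cause of a clean exit like this is a startup script that does its provisioning and then simply returns, instead of ending by handing control to the long-running service. A minimal sketch of the pattern (the file name and the service command are placeholders, not your actual script):

```shell
# Sketch of an init script that stays alive after provisioning.
# Writes the script to a placeholder path, then runs it once to show the shape.
cat > /tmp/init_sketch.sh <<'EOF'
#!/bin/bash
set -e
echo "one-time provisioning..."   # placeholder for the real setup work
# `exec` replaces the shell with the service process, so the container's main
# process keeps running for as long as the service does. `true` is only a
# stand-in; put the real foreground service command here.
exec ${SERVICE_CMD:-true}
EOF
chmod +x /tmp/init_sketch.sh
/tmp/init_sketch.sh
```

With the real service command in place of `true`, the container stays up until the service itself exits.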

If you don't see anything there, try the following (substituting your own cluster name, region, and task ARN):

aws ecs list-tasks \
     --cluster cluster_name \
     --desired-status STOPPED \
     --region us-west-2

aws ecs describe-tasks \
     --cluster cluster_name \
     --tasks arn:aws:ecs:us-west-2:account_id:task/cluster_name/task_ID \
     --region us-west-2

The response from the second command, in particular the stoppedReason field and the per-container exit details, may give you more information.
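To pull out just those fields instead of the full task description, the same call can be filtered with --query (this is a sketch with the same placeholder ARN as above):

```shell
# Extract only the stop reason and per-container exit details from the
# stopped task. Placeholder ARN; requires valid AWS credentials.
aws ecs describe-tasks \
    --cluster cluster_name \
    --tasks arn:aws:ecs:us-west-2:account_id:task/cluster_name/task_ID \
    --region us-west-2 \
    --query 'tasks[0].{stoppedReason: stoppedReason, containers: containers[].{name: name, exitCode: exitCode, reason: reason}}'
```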