
Let's say I have 2 containers: one with nginx and another with a simple app. Each container has its own ECR repo image and its own task definition. My nginx container is running in a public subnet with a public IP. How do I pass requests from nginx to my app container like this:

proxy_pass http://app_container:9000;

How can I make my second container visible only to the nginx container? Should I put it in a private subnet? Do I need to map a port for the app container in the task definition? Should I use Cloud Map? Should I call it with localhost:9000? Will the container be reachable by the same name it has in the task definition?

I tried using service discovery, but I still don't know how to call my container. I created the service and it's running, but my nginx container can't reach it no matter what I try, and the documentation doesn't explain it well. How exactly should I address my container?

Sasquatch

1 Answer


How can I make my second container visible only to the nginx container? Should I put it in a private subnet?

Yes, a private subnet is fine here.

Do I need to map a port for the app container in the task definition?

No. Since you are using two separate task definitions, the containers run as two separate services, so a shared port mapping between them isn't an option here.

Should I use Cloud Map?

No, just use ECS Service Discovery (which is built on Cloud Map under the hood, but ECS manages it for you).

Should I call it with localhost:9000?

No, that would only work if both containers were in the same task definition. To be honest, that is probably the correct solution for this sort of thing: running nginx as a completely separate service is unnecessary and makes all of this much more complicated.
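As an illustration, a single task definition holding both containers might look something like this (a minimal sketch; the family name, image URIs, and `awsvpc` network mode are assumptions for illustration, not taken from the question):

```json
{
  "family": "web",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "<account>.dkr.ecr.<region>.amazonaws.com/nginx:latest",
      "portMappings": [{ "containerPort": 80 }]
    },
    {
      "name": "app",
      "image": "<account>.dkr.ecr.<region>.amazonaws.com/app:latest",
      "portMappings": [{ "containerPort": 9000 }]
    }
  ]
}
```

With `awsvpc` networking, all containers in the same task share one network namespace, which is why nginx can reach the app container at `localhost:9000`.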

Will the container be reachable by the same name it has in the task definition?

No. You can either use Service Discovery and address the container by the name you gave the service in the Service Discovery namespace, or move both containers into the same task definition and use localhost for inter-container communication.

I tried using service discovery, but I still don't know how to call my container. I created the service and it's running, but my nginx container can't reach it no matter what I try, and the documentation doesn't explain it well. How exactly should I address my container?

Without any details about exactly what you did, it's impossible to point out what went wrong. When you created the private DNS namespace for Service Discovery, what DNS name did you use? The service address is the service name plus the private DNS namespace name.
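As a sketch: if the Service Discovery service were named `app` inside a private DNS namespace named `internal` (both names are assumptions for illustration), the nginx side would look like:

```nginx
# nginx resolves app.internal through the VPC resolver (the Route 53
# private zone that Service Discovery creates). Port 9000 must also be
# open in the app task's security group to the nginx task.
location / {
    proxy_pass http://app.internal:9000;
}
```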

Mark B
  • Thanks for answering. I gave up trying to use different services and I'm grouping everything together in a single service and task. So you said localhost:9000 would work, but what if I have 2 containers with the same port 9000? How do I map the container port to another one without exposing it to the host? – Sasquatch Dec 02 '22 at 14:42
  • 1
    You can't have multiple containers in a single ECS task that use the same port. That would generate errors from ECS. I don't think it would even let you deploy that. What are you doing that you have multiple containers with the same port? – Mark B Dec 02 '22 at 14:49
  • I have 2 php-fpm containers, which by default listen on port 9000. How do I change that in a task definition? – Sasquatch Dec 02 '22 at 15:24
  • 1
    You can't edit that in the task definition, unless you can pass a port setting to the PHP process via the `CMD` or `ENDPOINT` property. Looking at this, it appears you will need to change a file in one of the docker images to change the port: https://stackoverflow.com/questions/68039059/how-to-change-php-fpm-default-port – Mark B Dec 02 '22 at 16:23
  • It's working now, thanks a lot! But I have another problem. I will point my domains to the task's public IP (which nginx listens on), but if the task goes down, the new task will have a new IP. To avoid this, what IP should I point my domains to? – Sasquatch Dec 02 '22 at 17:22
  • 1
    You shouldn't be pointing a domain directly to an ECS task. You should have an Application Load Balancer that you point your domain to, and configure the ECS service to register itself with the load balancer. – Mark B Dec 02 '22 at 17:32
  • Ok, I see I need to create a target group and define a specific IP inside my VPC. Is that the internal IP of my ECS service? If so, how can I find it? And if I don't put any IP in the target group, will it find my ECS service by itself? – Sasquatch Dec 02 '22 at 18:15
  • 1
    No, you do not add targets manually in the target group for ECS services. Do what I said in my last comment, configure the ECS service with knowledge of the load balancer/target group and ECS will automatically register tasks with the target group as it creates them. – Mark B Dec 02 '22 at 18:18
  • Ok, so once my load balancer is running, will it push requests to my nginx container (listening on 80), or should I remove the nginx container and use the load balancer to forward to the app containers directly? – Sasquatch Dec 02 '22 at 18:26
  • 1
    Yes the load balancer will send request to the Nginx container. As part of configuring the load balancer settings on the ECS service you have to specify what container and port to send requests to. – Mark B Dec 02 '22 at 18:35
  • You are my hero! Now one last thing (I always think it's the last but it never is): I have an ACM SSL certificate; how do I pass that certificate to the ssl directives inside the nginx container? – Sasquatch Dec 02 '22 at 19:54
  • 1
    You add the ACM SSL certificate to the load balancer. SSL is terminated at the load balancer. The load balancer sends unencrypted traffic to Nginx. If you need Nginx to be aware that the client connection is over SSL it can check the `x-forarded-proto` header. But generally you should just add another listener rule in the load balancer that redirects all port 80 traffic to port 443, and never worry about unencrypted traffic on the back-end. – Mark B Dec 02 '22 at 20:16
  • So the right thing to do is to listen on 80 and redirect to 443, then forward 443 to my TG, and then listen on 443 in the nginx conf? Also, if I just listen on 443 with nginx, do I not need the ssl directives? – Sasquatch Dec 02 '22 at 20:36
  • 1
    Almost, but you have the Nginx port wrong: Listen on 80, redirect to 443. Then forward 443 to my TG, and then listen on 80 in the nginx conf. Since Nginx doesn't have an SSL certificate installed in the container it can't listen on port 443. The target group is where the port mapping from 443 on the load balancer to port 80 on the container happens. – Mark B Dec 02 '22 at 20:38
  • Ok, I got the nginx part, but I'm a bit confused by the listeners. I have an HTTP:80 listener redirecting to 443, and another HTTPS:443 listener with the certificate that's forwarding to the TG. So I'm missing a rule to do 443 -> 80 to my nginx container. Should I just add another HTTPS:443 listener, or am I doing it wrong? What rules should I have? – Sasquatch Dec 02 '22 at 20:46
  • 1
    I feel like you're really overthinking it at this point. There isn't a "rule to do 443 -> 80". You setup a target group that sends traffic to your container on port `80`. When you tell the 443 listener on the load balancer to forward traffic to that target group, it automatically forwards that traffic to port 80, because that's the traffic port you configured on the target group. That's it. There are no rules or redirects or anything else you need to configure. The port mapping happens automatically, based on the target group settings. – Mark B Dec 02 '22 at 20:50
  • I just want to be sure I'm not doing anything wrong. I appreciate your time and patience. Yes, the target group is listening on 80. So I just need one HTTPS:443 listener forwarding to the target group, and that's it? – Sasquatch Dec 02 '22 at 20:58
  • 1
    Yes that's all you need. – Mark B Dec 02 '22 at 21:09
  • I feel like you saved me a week of AWS documentation and Google searches. THANKS – Sasquatch Dec 02 '22 at 21:26
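Putting the comment thread together, the nginx config ends up plain HTTP behind the load balancer. A minimal sketch (the localhost upstream and the header check are assumptions based on the setup discussed above, not confirmed config):

```nginx
server {
    # TLS terminates at the ALB; the target group maps 443 -> 80,
    # so nginx only ever sees plain HTTP on port 80 and needs no
    # ssl directives or certificate.
    listen 80;

    # Optional belt-and-braces redirect at the app level, using the
    # header the ALB sets on forwarded requests. Normally the ALB's
    # own 80 -> 443 listener rule makes this unnecessary.
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        # Both containers in the same task share localhost.
        proxy_pass http://localhost:9000;
    }
}
```

Note that if the app container is php-fpm, as the comments suggest, the upstream line would be `fastcgi_pass localhost:9000;` inside a FastCGI location block rather than `proxy_pass`, since php-fpm speaks FastCGI, not HTTP.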