
I have a Python 3 project that runs in a Docker container environment.

My Python project uses an AWS access key and secret, read from a credentials file stored on my machine, which is copied into the container with ADD in the Dockerfile.
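
Roughly, the Dockerfile does something like this (the base image, paths, and the Airflow command here are simplified placeholders, not my exact setup):

```dockerfile
# Simplified sketch; base image, paths and command are illustrative.
FROM python:3.7

# Copy the local AWS credentials file into the container at build time.
# boto3 picks it up from ~/.aws/credentials (here /root/.aws/credentials).
# This is why the image can only be built on a machine that has the file.
ADD credentials /root/.aws/credentials

COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt

CMD ["airflow", "webserver", "-p", "8080"]
```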

I deployed the project to EC2. The server has one task running, which works fine: I can reach the webserver (Airflow) through port 8080.

When I commit and push to the master branch on GitHub, a hook downloads the content and deploys it, with no build stage.

The new code does reach the EC2 server (I verified this over SSH), but the container running in the task gets "stuck": the bind volumes disappear and stop working until I start a new task. Only then are the volumes mounted again from scratch, now pointing at the new code. This step is fully manual.
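
For context, the task definition mounts the code with a host bind volume along these lines (container name and paths are placeholders I've simplified):

```json
{
  "containerDefinitions": [
    {
      "name": "airflow-webserver",
      "mountPoints": [
        {
          "sourceVolume": "dags",
          "containerPath": "/usr/local/airflow/dags"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "dags",
      "host": { "sourcePath": "/home/ec2-user/project/dags" }
    }
  ]
}
```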

Then, to fix this, I heard about AWS ECS Blue/Green deployment, so I implemented it. In this setup CodePipeline adds a build stage, and that is where the problem starts. If, in the build, I try to push a Docker image to ECR (the image my task definition references), it fails. It fails because neither the build environment nor the repo I push my new code to contains the credentials file.
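
The build stage runs a buildspec along these lines (account ID, region, and repository name are placeholders); the `docker build` step is the one that fails, since the repo checked out by CodePipeline has no `credentials` file for the Dockerfile's ADD to pick up:

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Log in to ECR (region and account ID are placeholders).
      - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
  build:
    commands:
      # Fails here: the checked-out repo has no `credentials` file,
      # so the Dockerfile's ADD cannot find it.
      - docker build -t my-airflow:latest .
      - docker tag my-airflow:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-airflow:latest
  post_build:
    commands:
      - docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-airflow:latest
```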

I tried building the latest Docker image from my localhost and skipping the build stage in CodePipeline, and the deployment works, but then when I go to port 8080 on either of the working IPs I can get into the webserver, yet the code is not there. If I click anywhere, it says the code was not found.
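
By "from my localhost" I mean building and pushing manually from my machine, where the credentials file exists (same placeholder account, region, and repository names as above):

```sh
# Build locally, where the credentials file exists, then push to ECR.
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker build -t my-airflow:latest .
docker tag my-airflow:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-airflow:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-airflow:latest
```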

So, as a general review, I would like to understand what I am doing wrong and get some general guidelines on how to fix it, and, on the other hand, ask why my EC2 instance in the AWS ECS Blue/Green cluster has 3 IPs.

[Screenshot: the EC2 instance showing three IP addresses in the console]

The first one is the IP I use to reach the server through port 22. If I run `docker ps` there, I see one or two containers running, depending on whether I am in the middle of a deployment. If I search for my new code there, it is not present...

The other two IPs change after every deployment (I guess they are blue and green), and both work fine until CodePipeline destroys the green one (5-minute wait time), but the code is not there. I know this because when I click any of the links in the webserver, it fails saying the Airflow DAG hasn't been found.

So my problem is that I have a fully working AWS ECS Blue/Green deployment, but without my code, so my webserver has nothing to run.

  • You have 3 IPs because two are for your tasks - I guess you are using `awsvpc` networking mode. So if you do b/g you will need two more IPs. Can your container instance handle 5 ENIs in total? – Marcin Jun 24 '20 at 20:19
  • @Marcin What do you mean by whether it can handle 5 ENIs? What's the point of handling 5? – mrc Jun 25 '20 at 05:51
  • Elastic network interface. If you use `awsvpc`, which seems to be the case, each task will need to have its own ENI attached to an instance. – Marcin Jun 25 '20 at 05:52
  • @Marcin, yes, and it's working. But I've never tried with 5 ENIs. Why did you ask about 5 specifically? The point of the question is that my code is not propagated to the task :S – mrc Jun 25 '20 at 05:56
  • You ask why you have 3 IPs and why green environment terminates after a timeout? Have I misunderstood your issue? – Marcin Jun 25 '20 at 06:00
  • @Marcin well, as I wrote in the last paragraph: "So my problem is that I have a fully working AWS ECS Blue/Green deployment, but without my code, so my webserver has nothing to run." I understand now why I had 3 IPs, and just one is the "real" IP. The green environment ends after 5 minutes because that is the termination time I set up when deploying. The issue is that the code last merged into my GitHub branch is not on the server. – mrc Jun 25 '20 at 06:03
  • I'm sorry, I don't understand how something can be "fully working" yet not give you your desired outcome. But it's ok. Hopefully someone else will be able to provide better help. – Marcin Jun 25 '20 at 06:07
