
I'm a beginner with microservices and have spent hours today on the most tiny, painful parts of AWS. I'd appreciate any expert advice, as I suspect the next step is very small but could otherwise take me hours to work out.

So I deployed a nano instance and then SSHed into it. I had to redo it to fix the security group, but it worked eventually. I used scp to put my Docker image up there per the instructions here: in summary, docker save to make a .tar of the image locally, then docker load to load it into the remote system after waiting 15 minutes for scp to upload. Then I typed docker run at the command prompt.
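For reference, the workflow described above looks roughly like this as commands (the key path, hostname, and image name are placeholders, not values from the question; the gzip step is an optional addition that would shrink that 15-minute upload):

```shell
# Locally: export the image to a tarball.
docker save -o my_image.tar my_image

# Optional: compress before uploading to cut transfer time.
gzip my_image.tar

# Copy the tarball to the instance (key and hostname are placeholders).
scp -i my-key.pem my_image.tar.gz ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:~

# On the instance: decompress, load the image, and run it.
gunzip my_image.tar.gz
docker load -i my_image.tar
docker run my_image
```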

I had resorted to these (Linux) terminal measures because over the last 3 days I had twice tried and failed to do it from the AWS console: it uploaded but wouldn't run.

Now it runs fantastically when I type docker run my_image, and I can see it there with both docker images and docker ps -a!

But the command prompt on my AWS instance is busy while it runs; if I close the terminal window it will surely die. Now that I know it works there, how can I 'deploy' it, i.e. let it run and continue running for a month or until further notice? I think it might need some kind of JSON file called a 'task definition', but I don't really know what to do next. Can this task definition and all remaining tasks be done from within a terminal logged into the instance?

cardamom
  • The -d flag answer from @nathanpeck is exactly what you want, given the description in your last paragraph – NHol Jun 06 '17 at 16:23

1 Answer


I see a few things wrong:

  • You should use a Docker registry service instead of scp-ing an image. On AWS there is EC2 Container Registry (ECR), or you can use Docker Hub. This will make it much easier to get your images onto your instances.

  • I'm not sure why you weren't able to start your container from the console. I assume you are using AWS ECS? Try the troubleshooting guide to help you figure out why your task wasn't running.

  • It sounds like you are starting the Docker container in the foreground (attached to your shell) rather than in the background. Add the -d flag to your docker run command to run the container in the background so you can close your SSH session. Note that if the application process inside the container crashes, the container will still stop. This is one reason to use an orchestrator such as AWS ECS, which lets you define a service that will always try to keep a certain number of your tasks running. ECS also handles getting the Docker image onto the instance and starting the container in the background for you.
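The registry workflow in the first bullet might look like this sketch (the region, account ID, and repository name are illustrative placeholders; the login command shown is the current AWS CLI v2 form, which differs from what was current in 2017):

```shell
# Create an ECR repository (one-time; name is a placeholder).
aws ecr create-repository --repository-name my_image

# Authenticate the local Docker client to ECR.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the repository URI and push it.
docker tag my_image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my_image:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my_image:latest

# On the instance: pull instead of scp-ing a tarball.
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my_image:latest
```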

nathanpeck
  • Thanks! The `-d` flag worked. I could start it in the terminal, and the flag put it in the background rather than the foreground. I then logged out, closed the terminal, closed the AWS console, and the service was still running very well; I had to log back in and use `docker stop` to stop it again. Can the "orchestrator AWS ECS" be done from inside this instance's terminal somehow? It sounds possibly not robust enough otherwise, if the process crashes and the container stops. – cardamom Jun 06 '17 at 16:34
  • 1
    AWS ECS controls your instance externally using an agent that runs on the instance. You don't have to restart anything manually because AWS ECS will restart your containers automatically. It also allows you to make a simple API call to AWS ECS and it will update your container. The real power comes when you have multiple instances, because AWS ECS will control an entire fleet of instances for you and start and stop containers on your behalf in response to high level instructions like "start five of this container distributed across availability zones and keep them running" – nathanpeck Jun 19 '17 at 14:01
  • Thanks, well the good news is it's been running perfectly for about a week now, managed by ECS (which I learned how to use); I no longer log into the instance to manage it or enter any of the commands here. You said _a simple API call to AWS ECS and it will update your container_. Whenever I update (re-push) my container, I go and update the task and cluster as well. Maybe that isn't necessary; I suspect that re-pushing the container might let it keep going without my having to log into AWS or touch the task or cluster. – cardamom Jun 19 '17 at 14:13
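For readers wondering what the "task definition" mentioned in the question actually looks like: it is a JSON document you register with ECS. A minimal sketch follows; the family name, image URI, and resource values are all illustrative assumptions, and in practice you would register the file with `aws ecs register-task-definition --cli-input-json file://task-def.json`.

```shell
# Write a minimal ECS task definition to a file (all values are placeholders).
cat > task-def.json <<'EOF'
{
  "family": "my-image-task",
  "containerDefinitions": [
    {
      "name": "my_image",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my_image:latest",
      "memory": 256,
      "cpu": 128,
      "essential": true
    }
  ]
}
EOF

# Sanity-check that the file is well-formed JSON before registering it.
python3 -m json.tool task-def.json > /dev/null && echo "task-def.json is valid JSON"
```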