
I am looking at a number of Docker containers running on a CentOS 7 VM. Each container will be running a number of processes. For each process, I want to be able to see the CPU, network, and RAM usage in order to identify when the container is starting to get overloaded. Getting the CPU and RAM of the container as a whole is not going to be enough, as the container could have 100% of its RAM allocated while its individual processes actually have RAM to spare. I have used Sysdig chisels to get the CPU usage of separate processes, but they do not cover network and RAM use. For network statistics specifically, I want to know about dropped packets and any other relevant statistics. Ideally, I would be able to use a Sysdig-like tool to retrieve the stats from the host, instead of having to use resources inside the container to run a separate log generator.
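For reference, the kind of Sysdig chisel invocation I have been using for per-process CPU looks roughly like this (the container name `mycontainer` is just a placeholder):

```
# Per-process CPU for a single container, collected from the host;
# -pc adds container context, "mycontainer" is a placeholder name
sudo sysdig -pc -c topprocs_cpu container.name=mycontainer
```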

Alex Pomerenk
  • A container should not contain more than one process. You can see the total stats of a container using: `docker stats`. – ESala Jul 01 '16 at 17:12
  • Yes, traditionally Docker containers only run one process. There are use cases for multiple processes in a container, however (http://tiborsimko.org/docker-running-multiple-processes.html), and the Docker documentation also talks about running multiple processes (https://docs.docker.com/engine/admin/using_supervisord/). – Alex Pomerenk Jul 01 '16 at 17:29
  • hmm that's new, very interesting. Have an upvote ;) – ESala Jul 01 '16 at 17:40

1 Answer


I believe you can use `docker top <container id>` to view all the processes running in a container, but by default this will not show you memory and CPU usage.
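If it helps, `docker top` passes any extra arguments through to `ps` on the host, so something like the following may add CPU and memory columns (the exact output depends on your `ps` version):

```
# Extra arguments are forwarded to ps; "aux" adds %CPU and %MEM columns
docker top <container id> aux
```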

You can, as suggested in the comments, view the total memory usage, CPU usage, I/O, etc. of an entire container using `docker stats <container id>`.
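A rough example, using `--no-stream` for a one-off snapshot rather than the live view (the `--format` variant needs a reasonably recent Docker version):

```
# Single snapshot of CPU, memory, network and block I/O for a container
docker stats --no-stream <container id>

# Newer Docker versions can select columns via a Go template
docker stats --no-stream \
  --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}" \
  <container id>
```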

Alternatively, you can log into your container and have a look yourself using `docker exec -it <container id> bash`. Note that bash may not be available, depending on your base image, so you may have to use `sh` instead.
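For example, you can also run a one-off command instead of keeping an interactive shell open, assuming the tools exist in the image:

```
# Fall back to sh when bash is not in the image
docker exec -it <container id> sh

# Or run a single command without an interactive shell,
# assuming ps is available inside the container
docker exec <container id> ps aux
```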

tommyyards
  • You can actually just use the `docker attach` command, which will allow you to get into the container and use whatever command-line tools that container has. The problem with that is that it is then a process running in the container, not on the host, so it will eat up RAM and CPU, and if you're logging, also disk space and I/O, on the container rather than the host. – Alex Pomerenk Jul 05 '16 at 12:26
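For completeness, a minimal sketch of the attach approach mentioned above; the detach key sequence assumes the container was started with `-it`:

```
# Attach your terminal to the container's main process (no new shell is started);
# anything run in that session uses the container's resources, not the host's
docker attach <container id>

# Detach without stopping the container (if started with -it): Ctrl-p, then Ctrl-q
```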