
I want to see the logs from my Docker Swarm service. Not only because I want all my logs to be collected for the usual reasons, but also because I want to work out why the service is crashing with "task: non-zero exit (1)".

I see that there is work in the pipeline to implement docker logs, but is there a way to access logs for production services today? Or is Docker Swarm not ready for production with respect to logging?

Joe
  • The upcoming Docker 1.13 has the change you mention above; I think the beta is currently out. Otherwise, a good practice is to do what @Dockstar recommends and configure a log driver. We use Logstash/Kibana with the gelf driver, configured on the Docker daemon so that every Docker host in our clusters logs to Kibana. Also be sure to grab the syslog output, because that's where stuff goes when your container dies (e.g. out of memory) or Docker itself is having issues. – Jilles van Gurp Dec 02 '16 at 22:34

3 Answers

With Docker Swarm 17.03 you can now access the logs of a multi-instance service via the command line:

docker service logs -f {NAME_OF_THE_SERVICE}

You can get the name of the service with:

docker service ls

Note that in 17.03 this is an experimental feature (not production-ready), and in order to use it you must first enable experimental mode on the Docker daemon.
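One way to enable experimental mode (assuming a standard Linux install; the path below is the default daemon config location) is via the daemon configuration file:

```json
{
  "experimental": true
}
```

Save this as /etc/docker/daemon.json, restart the Docker daemon, and `docker version` should then report `Experimental: true` in the Server section.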

Update: docker service logs is now a standard feature of docker >= 17.06. https://docs.docker.com/engine/reference/commandline/service_logs/#parent-command

Similar question: How to log container in docker swarm mode

inquisitive
db80

What we've done successfully is use Graylog. If you look at the docker run documentation, you can specify a log driver and log options that allow you to send all console messages to a Graylog cluster.

docker run... --log-driver=gelf --log-opt gelf-address=udp://your.gelf.ip.address:port --log-opt tag="YourIdentifier"

You can also technically configure this at the global level for the Docker daemon, but I would advise against that: the global configuration won't let you set a per-container "tag" option, which is exceptionally useful for filtering down your results.
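For context (this is not part of the answer, just an illustrative sketch), the gelf driver emits JSON payloads in the GELF format, and the `tag` log-opt above surfaces as a `_tag` field on every message, which is what makes that filtering possible. Host name, port, and field values below are placeholders; 12201 is the conventional GELF UDP port.

```python
import json
import socket
import zlib


def gelf_message(host, short_message, **extra):
    """Build a minimal GELF 1.1 payload, roughly as the gelf driver does.

    Per the GELF spec, custom fields carry a leading underscore; the
    Docker gelf driver adds fields such as _container_name and _tag.
    """
    msg = {"version": "1.1", "host": host, "short_message": short_message}
    for key, value in extra.items():
        msg["_" + key] = value  # additional fields need a leading underscore
    return msg


def send_gelf(msg, address=("127.0.0.1", 12201)):
    """Compress the payload and fire it over UDP (no delivery guarantee)."""
    payload = zlib.compress(json.dumps(msg).encode("utf-8"))
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, address)
    sock.close()


# A message tagged the way --log-opt tag="YourIdentifier" would tag it:
msg = gelf_message("web-1", "container started", tag="YourIdentifier")
```

Note that UDP is fire-and-forget, which is also why packet loss is possible (see the comment below about the gelf driver only supporting UDP).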

Docker service definitions also support log driver and log options so you can use docker service update to adjust your services without destroying them.
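As a sketch of how the same flags map onto services (service and image names here are made up for illustration):

```
# Create a service that ships its logs via the gelf driver.
docker service create \
  --name my-web \
  --log-driver gelf \
  --log-opt gelf-address=udp://your.gelf.ip.address:port \
  --log-opt tag="YourIdentifier" \
  nginx:alpine

# Change the logging configuration of a running service; Swarm
# redeploys the tasks with the new settings.
docker service update \
  --log-driver gelf \
  --log-opt gelf-address=udp://your.gelf.ip.address:port \
  my-web
```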

Dockstar
  • Thanks very much! I'll take a good look. It's hard to pick out the relevant bits of the documentation; very useful to have a steer! – Joe Dec 02 '16 at 20:23
  • And sorry, can you confirm that this works with Swarm? And if possible, point to any documentation about using log drivers in Swarm? – Joe Dec 02 '16 at 20:28
  • Hey! I can confirm this works in swarm, as we are collecting these logs across swarm instances. Log configurations for Docker are at https://docs.docker.com/engine/admin/logging/overview/. – Dockstar Dec 02 '16 at 20:59
  • Here are the details for swarm service creation. You can see the log-driver and log-opts are present :) https://docs.docker.com/engine/reference/commandline/service_create/ – Dockstar Dec 02 '16 at 21:04
  • Thanks for your help! I saw that, but I also saw the ticket for a future release that says "implement logging in swarm", and I'm still not sure how reliable the documentation is. Tickets in question: https://github.com/docker/docker/issues/24319 and https://github.com/docker/docker/pull/24476 – Joe Dec 05 '16 at 10:15
  • Hey there. Both those tickets are closed or merged and deal with the configuration above. Basically they were to allow you to configure log drivers on services the same as individual containers, which is what we have implemented in the above statements :) – Dockstar Dec 06 '16 at 14:30
  • Thanks! I've got it all hooked up now. – Joe Dec 06 '16 at 15:33
  • Just note that the gelf driver only supports UDP, so packets might be lost. Also, you cannot use a service name in the gelf address, because the daemon manages the logs and does not share the swarm DNS. If you want to use ELK running in your swarm, you must expose a port, and right now (although it should be fixed with 17.05) when the container goes down, UDP packets will not reach the new container immediately because DNS caches are not deleted properly. – herm May 09 '17 at 11:02
  • For things like logging and databases, I still find they're best separated from docker. Also I never brought up the docker service logs command because it's still experimental. Once that moves into a GA release, we'll probably drop Graylog and move to a less resource-intensive logging solution. – Dockstar Jun 14 '17 at 18:24
  • As a follow-up, after having done some work with Kubernetes, what may make more sense is to run EFK with Fluentd on the swarm hosts. It'll just collect and parse the JSON logs, sending them to Elasticsearch. I accomplish the same thing in Kubernetes with a DaemonSet. – Dockstar Jun 18 '19 at 17:29

As the documentation says:

docker service logs [OPTIONS] SERVICE|TASK

resource: https://docs.docker.com/engine/reference/commandline/service_logs/
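A couple of the supported flags from that reference make this more useful in practice (the service name here is a placeholder):

```
# Follow a service's logs with timestamps, starting from the last 50 lines.
docker service logs --follow --timestamps --tail 50 my-service
```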

Yakir Giladi Edry