181

If you use the Coreutils tail command in Linux, you have a -f option that lets you follow a log file from the log's current position (it does not go to the very beginning of the file and display everything).

Is this functionality available in docker logs without waiting for it to traverse the whole log?

I have tried:

docker logs --since 1m somecontainer

and

docker logs -f --since 1m somecontainer

It appears that it actually traverses the entire log file (which can take a long time) and then starts echoing to the screen once it reaches the time frame you specify.

Is there a way to start tailing from the current point without waiting? Is my best option to always log out to some external file and tail that with the Coreutils tail command?
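
One workaround along these lines, assuming the default json-file driver and root access on the host (the container name below is only a placeholder), is to tail the container's log file directly; docker inspect reports its path:

sudo tail -f "$(docker inspect --format '{{.LogPath}}' somecontainer)"   # usually requires root

Note that each line of that file is a JSON object wrapping the original log line, so the output is noisier than docker logs.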

Amin Shojaei
svenwinkle
  • Do you need to keep the entire log file for other reasons, or would you be okay with only having the last few megs of logs preserved? – BMitch Sep 01 '18 at 11:26
  • I'd be fine with just having the last few megs. Are you suggesting a fifo buffer? – svenwinkle Sep 03 '18 at 17:45
  • There's an option to have docker automatically rotate the logs. So instead of waiting several minutes to parse hundreds of megs of logs, you can have the logfile limited to only a few megs. Not exactly the solution you requested, but it would dramatically speed things up which seems to be your goal. – BMitch Sep 03 '18 at 18:12

7 Answers

416

See docker logs --help for the full list of options. Try the command below, which starts from the last 10 lines and then follows the log.

docker logs -f --tail 10 container_name
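
If a time window or timestamps are also wanted, the flags can be combined; a rough sketch, with container_name as a placeholder:

docker logs -f -t --tail 100 container_name            # last 100 lines with timestamps, then follow
docker logs -f --tail 100 --since 5m container_name    # start from the tail, entries from the last 5 minutes only
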
Raphael PICCOLO
Light.G
  • When you have a very large json log file (e.g. several gigs in size) would this skip the parsing for all the lines up to the last 10 by the docker engine? – BMitch Nov 13 '19 at 14:14
  • @BMitch Although I did not find any description of this on the official `docker.com` site, I did some testing: it takes less than 0.5 seconds to get the last 200 lines from a 16 GB log file, which I think is evidence that it does. – Light.G Apr 01 '20 at 10:28
  • @BMitch yes it does. – Amila Sep 01 '21 at 08:38
42

Alternatively, you can check the log by time (e.g. the last 2 minutes):

docker logs --since=2m <container_id>   # since the last 2 minutes
docker logs --since=1h <container_id>   # since the last 1 hour
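
Recent Docker versions also accept an --until flag, which lets you bound the window on both ends; a sketch (the container ID is a placeholder):

docker logs --since=2h --until=1h <container_id>   # entries from 2 hours ago up to 1 hour ago
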
rc.adhikari
  • The OP posted this as a non-option in their question: "It appears that it actually traverses the entire log file (which can take a long time) and then starts echoing to the screen once it reaches the time frame you specify." – BMitch Nov 13 '19 at 14:13
  • The reason to use the above is that it is easy to re-run for the latest activity; with `docker logs -f --tail 10`, you have to keep breaking out of the follow cursor and re-running it to load the most recent log. – rc.adhikari Nov 13 '19 at 15:20
  • That explains why you prefer this over other upvoted answers, but not why you recommended this when the OP specifically listed issues with this option and was seeking alternatives that wouldn't first parse the logfiles. The problem they listed was when they have an extremely large logfile that they didn't want to parse before outputting. – BMitch Nov 13 '19 at 15:25
  • I agree with you; I am just giving this as an alternative way to check the log. – rc.adhikari Nov 13 '19 at 15:28
21

Use the --tail switch:

docker logs -f <container name> --tail 10

This will show the log starting from the last 10 lines onwards.

Dror
10

I think you are doing it correctly and it seems to work as expected when I try it. Are you using some non-default log driver etc?

To follow only new log lines you can use -f --since 0m.
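
Spelled out as a full command (the container name is just an example):

docker logs -f --since 0m somecontainer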

DevThiman
Mattias Wadman
  • Thanks for the reply. I am using the default log driver - json-file. I could probably just switch to syslog and watch logs on the host. – svenwinkle Sep 03 '18 at 17:50
  • Also I tried using `-f --since 0m` as you suggested. It still spent 2 minutes traversing the full log file before returning output. It's not awful but it does increase debug time. – svenwinkle Sep 03 '18 at 17:52
3

If you want to get the logs based on the service name (in the case of docker-compose), you can use this shorthand (nginx here is an example of a service name):

docker logs -f --since=1m $(docker ps -f name=nginx --quiet)
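
If the project is already managed by Compose, its own logs subcommand gives roughly the same result without the docker ps lookup; a sketch, assuming a service named nginx:

docker-compose logs -f --tail=10 nginx   # Compose v1
docker compose logs -f --tail 10 nginx   # Compose v2 plugin
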
medunes
2

The default setting for the log driver is a JSON file format, and the only way I can think of to reliably parse that involves parsing the file from the beginning, which I suspect is exactly what docker does. So I'm not sure there's an option to do exactly what you are asking. However, there are two log options you can adjust when starting a container with the default JSON log driver.

  1. max-size: this limits how large a single JSON log file will grow to. After this, docker will create a new file. By default it is unlimited (-1).
  2. max-file: this limits the number of JSON log files that will be created, each up to the max size set above. By default it is set to 1.

You can read about these options here: https://docs.docker.com/config/containers/logging/json-file/

I typically set these options with new default values for all containers being run on the docker host using the following lines inside my /etc/docker/daemon.json file:

{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}

Those two options say to keep up to 3 different 10 meg JSON log files, for a limit of roughly 20 to 30 megs of logs per container. You need to trigger a reload of the dockerd process to load this file (killall -HUP dockerd or systemctl reload docker).
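
To double-check what actually took effect, one option (a sketch; the container name is a placeholder) is to query the daemon default and a container's recorded log configuration:

docker info --format '{{.LoggingDriver}}'                                # daemon-wide default log driver
docker inspect --format '{{json .HostConfig.LogConfig}}' somecontainer   # driver and options for one container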

You can override this on an individual container by passing the log options on your run command (or inside the compose file):

docker container run --log-opt max-size=5m --log-opt max-file=2 ...

There does not appear to be a way to change the logging options of an existing container, so you will need to recreate your containers to apply these changes.

The end result is that docker may still have to parse the entire file to show you the most recent logs, but the file will be much smaller with automatically rotating logs than the default unlimited logging option.

DevThiman
BMitch
-2

See docker logs --help for the full list of options; the command below will follow the container's current logs:

docker logs -f <container_name_or_id>

D.maurya