209

I want to redirect all the logs of my Docker container to a single log file so I can analyse them. I tried

docker logs container > /tmp/stdout.log 2>/tmp/stderr.log

but this puts the logs in two different files. I had already tried

docker logs container > /tmp/stdout.log

but it did not work.

Chris Stryczynski
KETAN PATIL

14 Answers

257

How about this option:

docker logs containername >& logs/myFile.log

It will not continuously redirect the logs, as the question asks for, but it copies everything logged so far into a specific file in one go.
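For reference, here is a minimal sketch of the same idea (containername is just a placeholder); since >& is a bash/zsh shorthand, a portable form is shown as well:

# bash/zsh: send both stdout and stderr of "docker logs" to one file
mkdir -p logs
docker logs containername >& logs/myFile.log

# POSIX-portable equivalent (works in plain sh too)
docker logs containername > logs/myFile.log 2>&1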

mahler
Eddy Hernandez
  • If I am not wrong, this command will basically copy all the logs from when the container was started until now to myFile.log. Right? – S Andrew Dec 22 '17 at 07:15
  • @SAndrew basically yes! but new versions of Docker may change. better to see `docker logs --help` to be sure – Eddy Hernandez Dec 23 '17 at 07:13
  • this will not stream logs, just paste the current history into a file – Ben Feb 16 '21 at 10:30
  • ^ the OP doesn't ask to stream logs. It asks how to redirect to a single file, which is what this answer explains. – DarkCygnus Mar 29 '22 at 15:53
  • Just to complement this answer: if you want the logs since a specific date, you can use "docker logs CONTAINER_ID --since 2022-12-13 >& logs/myFile.log". This will redirect the output of the "docker logs" command to the file "logs/myFile.log" – Joseloman Dec 13 '22 at 16:15
223

No need to redirect logs.

By default, Docker stores each container's logs in a single log file. To find the log file path, run:

docker inspect --format='{{.LogPath}}' containername

/var/lib/docker/containers/f844a7b45ca5a9589ffaa1a5bd8dea0f4e79f0e2ff639c1d010d96afb4b53334/f844a7b45ca5a9589ffaa1a5bd8dea0f4e79f0e2ff639c1d010d96afb4b53334-json.log

Open that log file and analyse it.

If you redirect the logs, you will only capture the logs produced up to that point; you will not be able to see live logs.

EDIT:

To see live logs, you can run the command below:

tail -f `docker inspect --format='{{.LogPath}}' containername`

Note:

The log file /var/lib/docker/containers/f844a7b45ca5a9589ffaa1a5bd8dea0f4e79f0e2ff639c1d010d96afb4b53334/f844a7b45ca5a9589ffaa1a5bd8dea0f4e79f0e2ff639c1d010d96afb4b53334-json.log is created only if the container is generating logs; if there are no logs, the file will not exist. It is similar to running docker logs containername and getting nothing back: in that scenario this file will not be available.
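If you want to follow that file but read only the log messages rather than the raw JSON lines, here is a small sketch, assuming the default json-file logging driver and that jq is installed:

# Resolve the container's log file path (reading it usually requires root)
LOGFILE=$(docker inspect --format='{{.LogPath}}' containername)

# Follow the raw JSON log and print only the "log" field of each entry
sudo tail -f "$LOGFILE" | jq -r '.log'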

pl_rock
  • `tail -f \`docker inspect --format='{{.LogPath}}' myapp\`` - it really is JSON – Adam Dec 13 '17 at 16:38
  • It will fail if that file is not there; that is, if Docker has not generated any logs, the file will not have been created. But if Docker is generating logs, this command is good for seeing live logs. Thanks Adam, adding it to my answer to help others. – pl_rock Dec 14 '17 at 05:06
  • "Docker by default store logs to one log file." - in what context? All the containers running on a docker host get output to a single file? A single container? – Chris Stryczynski Sep 20 '19 at 11:15
  • @ChrisStryczynski Docker creates a log file per container – Eddy Hernandez Oct 16 '19 at 16:59
  • For me it returned each line wrapped in a JSON object: {"log": "", "stream": "stdout", "time": "2022-01-23T20:56:34.451682111Z"}. – Martin Drozdik Jan 24 '22 at 12:39
  • No need to `tail -f` - use `docker logs -f containername` – Kyle Pittman Aug 29 '22 at 17:37
  • There is a downside to this approach. This will give you the source that Docker uses to generate real logs. These are JSON objects, and less readable than the pure normal logs. – ViBoNaCci Dec 06 '22 at 16:37
  • @ViBoNaCci On the plus side, this approach saves space and allows for direct file manipulation, which is much more efficient when your logs are several GB large. And if you just want the log messages, you can always use `jq ".log" logfile.json` (e.g. after a `grep`, `awk`, `head`, `tail` or whatever filtering fits your needs). – Skippy le Grand Gourou Feb 28 '23 at 10:38
  • Can we set up a cron job to periodically copy this whole log file to a new file? Because when the container is deleted, this log file is deleted too. @pl_rock – Yusuf Aug 12 '23 at 05:10
90

docker logs -f <yourContainer> &> your.log &

Explanation:

  • -f (i.e. --follow): writes all existing logs and then keeps following, logging everything that comes next.
  • &> redirects both the standard output and standard error.
  • Likely you want to run that method in the background, thus the &.
  • You can separate stdout and stderr with: > output.log 2> error.log (instead of using &>), as in the sketch below.
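A small sketch putting those pieces together (mycontainer is a placeholder for your container name):

# Follow the container's logs in the background, stdout and stderr in separate files
docker logs -f mycontainer > output.log 2> error.log &

# Remember the PID of the background "docker logs" so it can be stopped later
echo $! > docker-logs.pid

# ...and stop following when you are done
kill "$(cat docker-logs.pid)"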
juanmirocks
22

To capture both stdout & stderr from your docker container in a single log file, run the following:

docker logs container > container.log 2>&1
Jean Velloen
14

Assuming that you have multiple containers and you want to aggregate the logs into a single file, you need to use a log aggregator like fluentd. fluentd is supported as a logging driver for docker containers.

So in docker-compose, you need to define the logging driver:

  service1:
    image: webapp:0.0.1
    logging:
      driver: "fluentd"
      options:
        tag: service1

  service2:
    image: myapp:0.0.1
    logging:
      driver: "fluentd"
      options:
        tag: service2

The second step would be to update the fluentd conf to handle the logs for both service1 and service2:

 <match service1>
   @type copy
   <store>
    @type file
    path /fluentd/log/service/service.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
  </store>
 </match> 
 <match service2>
    @type copy
   <store>
    @type file
    path /fluentd/log/service/service.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
  </store>
 </match> 

In this config, we are asking for the logs to be written to a single file at this path:
/fluentd/log/service/service.*.log

And the third step would be to run the customized fluentd, which will start writing the logs to the file.
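As a rough sketch of that third step (not the exact setup from the step-by-step instructions): assuming the customized config above is saved as ./fluent.conf, you could run the official fluent/fluentd image and publish port 24224, which the fluentd logging driver sends to by default; the image tag and host paths here are assumptions:

# Run a customized fluentd that receives logs from the fluentd logging driver
# and writes the files under ./log on the host (you may need to adjust permissions on ./log)
docker run -d --name fluentd \
  -p 24224:24224 -p 24224:24224/udp \
  -v "$(pwd)/fluent.conf:/fluentd/etc/fluent.conf" \
  -v "$(pwd)/log:/fluentd/log" \
  fluent/fluentd:v1.16-1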

Here is the link for step by step instructions

A bit long, but a correct way, since you get more control over the log file paths etc., and it works well in Docker Swarm too.

Abhishek Galoda
  • I learned something from this answer. Even though it may not directly suit the OP, it may very well be an inspiration for the majority of people who stumble upon this thread. – tortal Sep 03 '20 at 17:36
  • This is exactly what I was looking for, thank you! You described the perfect way for me to collect logs. – Fedcomp Jun 18 '21 at 19:30
  • This is perfect. I was able to find this detailed documentation on fluentd's website that may offer more thorough instructions - https://docs.fluentd.org/container-deployment/docker-compose – Ragav Y Feb 01 '22 at 13:04
9

The below works for me:

docker logs <container-id> > mylogs.txt 2>&1
Naveenraj
6

First, check your container ID:

docker ps -a

You can see the container ID in the CONTAINER ID column of the first row; it probably looks like "3fd0bfce2806". Then use it in your shell:

docker inspect --format='{{.LogPath}}' 3fd0bfce2806

You will see something like this:

/var/lib/docker/containers/3fd0bfce2806b3f20c2f5aeea2b70e8a7cff791a9be80f43cdf045c83373b1f1/3fd0bfce2806b3f20c2f5aeea2b70e8a7cff791a9be80f43cdf045c83373b1f1-json.log

Then you can view it with:

cat /var/lib/docker/containers/3fd0bfce2806b3f20c2f5aeea2b70e8a7cff791a9be80f43cdf045c83373b1f1/3fd0bfce2806b3f20c2f5aeea2b70e8a7cff791a9be80f43cdf045c83373b1f1-json.log

It will be in JSON format; you can use the timestamps to trace errors.
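The last two steps can also be combined into a single line; a small sketch, reusing the example ID 3fd0bfce2806 from above (sudo is usually needed to read files under /var/lib/docker):

sudo cat "$(docker inspect --format='{{.LogPath}}' 3fd0bfce2806)"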

kiran beethoju
2

The easiest way, which I use, is this command in the terminal:

docker logs elk > /home/Desktop/output.log

The structure is:

docker logs <Container Name> > path/filename.log
2

This command works for me:

$ docker logs -f containername &> /tmp/out.log &
Jon Huang
1

Bash script to copy all container logs to a specified directory:

#!/usr/bin/env bash

TARGET_DIR=~/logs/docker_logs
mkdir -p "$TARGET_DIR"
for name in $(sudo docker ps --format '{{.Names}}'); do
    path=$(sudo docker inspect --format='{{.LogPath}}' "$name")
    sudo cp -f "$path" "$TARGET_DIR/$name.log"
done
Mugen
1

As @juanmirocks pointed out, to continuously log the container's output to a file you can use:

docker logs -f <yourContainer> &> your.log &

To append to the end of the existing file instead of overwriting it, use the append redirector >> instead of >. Note that >> by itself only appends stdout, so add 2>&1 (or use bash's &>> shorthand) if you also want stderr:

docker logs -f <yourContainer> >> your.log 2>&1 &
0

If you work on Windows and use PowerShell (like me), you could use the following line to capture the stdout and stderr:

 docker logs <containerId> | Out-File 'C:/dev/mylog.txt'

I hope it helps someone!

mirind4
  • To save all container logs to file, based on the container name... `foreach ($element in $(docker ps -a --format "{{.Names}}")) {docker logs $element | Out-File "C:/dockerlogs/$element.log"}` – Francisco Jun 18 '20 at 09:48
0

docker logs -f docker_container_name >> YOUR_LOG_PATH 2>&1 &
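If the log collection should keep running after you close the terminal, one variation (a sketch using the standard nohup tool with the same command as above) is:

# nohup keeps the background "docker logs -f" alive after the shell exits
nohup docker logs -f docker_container_name >> YOUR_LOG_PATH 2>&1 &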

focus zheng
-1

Docker captures both stdout and stderr from the container and docker logs replays them, so we can treat the log output like any other shell stream (add 2>&1 if you want stderr in the same file). To redirect the current logs to a file, use a redirection operator:

$ docker logs test_container > output.log
$ docker logs -f test_container > output.log

Instead of sending output to stderr and stdout, redirect your application’s output to a file and map the file to permanent storage outside of the container.

$ docker logs test_container > /tmp/output.log

Docker will not accept relative paths on the command line, so if you want to use a different directory, you’ll need to use the complete path.

Anish Varghese