
Scenario: I have load-balanced servers running a web application. To debug, I'd like to see the output of the same log file (such as the nginx log) combined into a single stream, i.e. a tail -f on all servers in the cluster, one of which is the server I am currently logged in to.

Here is a simple approach using background processes, for example with three servers app-server-01, app-server-02, app-server-03, where I am logged in to the first one:

ssh -t "jneutron@app-server-02" "tail -f /var/log/nginx/access.log" &
ssh -t "jneutron@app-server-03" "tail -f /var/log/nginx/access.log" &
tail -f /var/log/nginx/access.log

This combines the log output across multiple servers as desired, but you end up with lingering background processes after hitting Ctrl-C.
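You can confirm the leftovers after a Ctrl-C with something like this (the bracketed [s] is a common trick to keep grep from matching its own process):

$ ps -ef | grep '[s]sh .* tail -f'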

I was looking for an automated way to clean up those background processes, and came up with a script, shared here as an answer.

Peter Thoeny

1 Answer


Here is a bash script that cleans up the background processes on Ctrl-C. It can be configured with a user and a list of servers, and it assumes that you run the script on one of the servers in the list.

#!/bin/bash

# Purpose: Continuously tail and combine log files on current and other servers

# configuration:
#   servers; entries must match the hostnames returned by `uname -n`:
servers=("jimmy-neutron-01" "jimmy-neutron-02" "jimmy-neutron-03")
#   remote user:
user="jneutron"

ctrl_c_cleanup() {
    echo
    echo 'Ctrl-C trap: kill remote tail processes:'
    # find the PIDs of the backgrounded ssh processes; the [@] and \- in the
    # pattern keep grep from matching its own command line
    plist=$(ps -ef | grep -E "ssh .* $user[@].* tail \-f" | awk '{print $2}' | xargs echo)
    echo "kill -9 $plist"
    echo "$plist" | xargs kill -9
}

if [ $# -lt 1 ]; then
    echo "Purpose: Continuously tail and combine log files on current and other servers"
    echo "Example: $ xtail /var/log/nginx/access.log"
else
    echo "tail -f $1 across servers: ${servers[@]}"
    trap ctrl_c_cleanup INT  # trap Ctrl-C and call ctrl_c_cleanup()
    host=`uname --nodename`
    for server in ${servers[@]}; do
        if [[ $server != $host ]]; then
            ssh -t "$user@$server" "tail -f $1" &
        fi
    done
    tail -f "$1" # tail local log and wait for Ctrl-C
fi

#EOF

Save this as xtail, make it executable, and place it in a directory on your PATH.
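For example, assuming ~/bin is a directory on your PATH:

$ chmod +x xtail
$ mv xtail ~/bin/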

Example usage:

$ xtail /var/log/nginx/access.log
tail -f /var/log/nginx/access.log across servers: jimmy-neutron-01 jimmy-neutron-02 jimmy-neutron-03
...continuous log output...
^C
Ctrl-C trap: kill remote tail processes:
kill -9 116919 116921
Peter Thoeny
  • `tail -f /var/log/... <( ssh remote tail -f /var/log/...) ` works just as well and is much less involved than this. – Ginnungagap Aug 14 '23 at 20:37
  • @Ginnungagap: Interesting. Does that also work with more than one remote tail? – Peter Thoeny Aug 14 '23 at 23:47
  • `<( ... )` gets replaced by a path to a file descriptor that gets the output so yes, in the case of `tail` you can put as many as you want. – Ginnungagap Aug 15 '23 at 07:50
  • @Ginnungagap: I tried your suggestion and could not get it to work, so I stick with the original approach – Peter Thoeny Aug 19 '23 at 04:13
  • Tip: Rather than relying on `ps` to find the PIDs to kill: Immediately after `ssh ... &` save the PID of the background ssh process that you get from `$!` – DouglasDD Aug 19 '23 at 18:59
  • @Ginnungagap portability note: `tail -f multiple files ...` requires a relatively recent GNU tail binary. Some other versions need it written as `tail -f multiple -f files ...`. Other versions don't support multiple files (where you would need a tool like "multitail"). – DouglasDD Aug 19 '23 at 19:15
  • @DouglasDD: Good idea, I'll try that – Peter Thoeny Aug 19 '23 at 19:35