
I want to run a series of commands concurrently such that if one of them dies, they all stop.

Carl Patenaude Poulin

2 Answers


If you've got Bash 4.3 (2014) or later, this Shellcheck-clean code demonstrates one way to do it:

#! /bin/bash -p

# Start some concurrent processes
sleep 30 & sleep 31 & sleep 32 & sleep 33 & sleep 34 & sleep 35 & sleep 36 &
sleep 5 &

# Wait for any job to complete
wait -n

# Get the list of still running jobs, and kill them
jobs=$(jobs -r)
while read -r num _ cmd; do
    printf 'Killing: %s\n' "$cmd" >&2
    kill "%${num//[^[:digit:]]/}"
done <<<"$jobs"
  • The -n option for wait was introduced in Bash 4.3, so the code will not work with older versions of Bash.
  • See ProcessManagement - Greg's Wiki to learn about techniques for working with concurrent processes in Bash. It explains all of the process control and monitoring mechanisms used in the code.
  • An example value of the num variable is [3]+. ${num//[^[:digit:]]/} extracts the job number (3 in this case) by removing all non-digit characters. See Removing part of a string (BashFAQ/100 (How do I do string manipulation in bash?)) for more information about ${var//old/}.
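The two Bash features the bullets describe can be exercised in isolation. This is a small demo, not part of the answer's code; it assumes Bash 4.3+ for `wait -n`, and the `[3]+` value is a sample of the first field that `jobs -r` prints:

```shell
#!/usr/bin/env bash
# Demo of the two features used above (assumes Bash 4.3+ for wait -n).

sleep 2 & sleep 0.2 &        # the second job finishes first
wait -n                      # returns as soon as any background job completes
echo "a job finished with status $?"

num='[3]+'                                # sample first field of `jobs -r` output
echo "job number: ${num//[^[:digit:]]/}"  # all non-digits removed, leaving 3

kill %1 2> /dev/null         # clean up the still-running sleep
```

The `wait -n` call returns after roughly 0.2 seconds rather than 2, and `$?` holds the exit status of whichever job finished first.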
pjh

This is my solution using job control.

  1. Whenever a job dies, it kills the parent script
  2. When the parent script dies, it kills the whole process group, including all jobs

I would hope that there's something simpler out there.

#!/usr/bin/env bash

# Mocks

function process () {
    while true; do
        echo "Process $1 is working..."
        sleep 10
    done
}

# Actual implementation

function terminate_entire_process_group () {
    trap - EXIT
    kill -15 -$$
}
trap terminate_entire_process_group EXIT

function terminate_parent_process () {
    trap - EXIT
    kill $$ 2> /dev/null
}

(
    trap terminate_parent_process EXIT
    process 1
) &

(
    trap terminate_parent_process EXIT
    process 2
) &

wait
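Step 2 above hinges on `kill` accepting a negative PID, which signals every member of a process group at once. A minimal sketch, separate from the answer's script, assuming the script is its own process-group leader (true when launched from an interactive shell; `setsid` can guarantee it), which is also what `kill -15 -$$` in `terminate_entire_process_group` relies on:

```shell
#!/usr/bin/env bash
# Sketch of the negative-PID form of kill: SIGTERM delivered to a whole
# process group. Assumes this script is its own process-group leader.

trap 'echo "group terminated"; exit 0' TERM

sleep 100 &          # background jobs inherit the script's process group
sleep 100 &

kill -15 -$$         # leading dash: signal the whole group, jobs and script alike
wait                 # not reached; the TERM trap fires first
```

Both `sleep` processes and the script itself receive SIGTERM from the single `kill`, so the TERM trap prints its message and no process is left behind.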
Carl Patenaude Poulin