
I have to run a Bash command. But this command will take a few minutes to run.

If I execute this command normally (synchronously), my application will hang until the command is finished running.

How do I run Bash commands asynchronously from a Perl script?

The Anh Nguyen (edited by Peter Mortensen)
    See [How can I fire and forget a process in Perl?](http://stackoverflow.com/questions/2133910/how-can-i-fire-and-forget-a-process-in-perl) – devnull May 27 '13 at 09:04
  • Do you care about bash output? – mpapec May 27 '13 at 09:21
  • Yes, I want the result. When the command finishes, it should alert my application back, like an event trigger. – The Anh Nguyen May 27 '13 at 10:16
  • Below is an example with threads where you get the script output, but you have to check manually from time to time whether the script has finished. – mpapec May 27 '13 at 10:33

3 Answers


If you do not care about the result, you can just use system("my_bash_script &");. It returns immediately, and the script does whatever needs to be done in the background.

I have two files:

$ cat wait.sh
#!/usr/bin/bash
for i in {1..5}; do echo "wait#$i"; sleep 1; done

$ cat wait.pl
#!/usr/bin/perl
use strict; use warnings;
my $t = time;
system("./wait.sh");
my $t1 = time;
print $t1 - $t, "\n";
system("./wait.sh &");
print time - $t1, "\n";

Output:

wait#1
wait#2
wait#3
wait#4
wait#5
5
0
wait#1
wait#2
wait#3
wait#4
wait#5

It can be seen that the second call returns immediately, but the script keeps writing to stdout.

If you need to communicate with the child, then you need to use fork and redirect STDIN and STDOUT (and STDERR). Or you can use the IPC::Open2 or IPC::Open3 packages. In any case, it is good practice to wait for the child to exit before the caller exits.
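For instance, IPC::Open3 lets the parent capture the child's output while it is still running. A minimal sketch, assuming ./wait.sh is the script from the example above:

```perl
#!/usr/bin/perl
use strict; use warnings;
use IPC::Open3;
use Symbol 'gensym';

# Capture the child's stdout and stderr on separate filehandles
my $err = gensym;
my $pid = open3(my $in, my $out, $err, './wait.sh');
close $in;                       # nothing to send to the child

while (my $line = <$out>) {      # lines arrive as the child produces them
    print "child says: $line";
}
waitpid($pid, 0);                # reap the child; exit code is in $?
print "exit status: ", $? >> 8, "\n";
```

Note that this sketch only drains stdout; if the child writes a lot to stderr, you would need IO::Select (or IPC::Run) to read both handles without deadlocking.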

If you want to wait for the executed processes, you can try something like this in Bash:

#!/usr/bin/bash

cpid=()
for exe in script1 script2 script3; do
  $exe &                # launch asynchronously
  cpid[$!]="$exe"       # remember each PID as an array index
done

while [ ${#cpid[*]} -gt 0 ]; do
  for i in ${!cpid[*]}; do
    # if /proc/<PID> is gone, the process has exited
    [ ! -d /proc/$i ] && echo UNSET $i && unset cpid[$i]
  done
  echo DO SOMETHING HERE; sleep 2
done

This script first launches the scripts asynchronously and stores their PIDs in an array called cpid. Then a loop checks whether they are still running (whether /proc/<PID> still exists). If one no longer exists, the text UNSET <PID> is printed and that PID is deleted from the array.

It is not bulletproof: if the DO SOMETHING HERE part runs for a very long time, the same PID can be reused for another process. But it works well in the average environment.

But this risk can also be reduced:

#!/usr/bin/bash

# Enable job control and handle SIGCHLD
set -m
remove() {
  for i in ${!cpid[*]}; do
    [ ! -d /proc/$i ] && echo UNSET $i && unset cpid[$i] && break
  done
}
trap "remove" SIGCHLD

#Start background processes
cpid=()
for exe in "script1 arg1" "script2 arg2" "script3 arg3" ; do
  $exe&
  cpid[$!]=$exe;
done

#Non-blocking wait for background processes to stop
while [ ${#cpid[*]} -gt 0 ]; do
  echo DO SOMETHING; sleep 2
done

This version lets the script receive the SIGCHLD signal when an asynchronous subprocess exits. When SIGCHLD arrives, the handler looks for the first no-longer-existing process and removes it from the array. The waiting while-loop becomes much simpler.

TrueY
  • Yet another option is `open(my $fh, '|-', 'wait.sh', ...)` or `open(my $fh, '-|', 'wait.sh', ...)` if you want to read its output or write its input but not both. – reinierpost May 27 '13 at 12:39
  • No, you don't understand me @TrueY. My bash command takes a lot of time to run, so while it is executing I want my app to show the text "Executing...", and after the command finishes, the system should trigger (notify) my app to show "Execute done!". – The Anh Nguyen May 28 '13 at 10:06
  • @ThếAnhNguyễn: Sorry for my misunderstanding, but your description does not contain much information. I had a [SO answer to another question](http://stackoverflow.com/questions/16772186/bash-parallelize-md5sum-checksum-on-many-files/16773505#16773505). It runs a specific number of parallel programs in bash. I tailored it to your needs. – TrueY May 28 '13 at 11:23

You can use threads to start the Bash command asynchronously,

use threads;
my $t = async {
  return scalar `.. long running command ..`;
};

and later manually test whether the thread is ready to join, and get the output in a non-blocking fashion,

my $output = $t->is_joinable() && $t->join();
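Put together as a runnable sketch (the polling loop and the placeholder command are assumptions about how you would integrate this, not part of the original answer):

```perl
#!/usr/bin/perl
use strict; use warnings;
use threads;

# Placeholder for the long-running command
my $t = async {
    return scalar `echo started; sleep 2; echo done`;
};

# Poll until the thread can be joined; do other work in the meantime
my $output;
until (defined $output) {
    print "Executing...\n";
    sleep 1;
    $output = $t->is_joinable() ? $t->join() : undef;
}
print "Execute done! Output:\n$output";
```

This matches the asker's requirement of showing "Executing..." while the command runs, at the cost of polling rather than being notified.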
mpapec
  • +1. I have a minor issue. If after `async` line `$t` is used, then perl returns an error: `Global symbol "$t" requires explicit package name`. A `sleep 1;` before the first usage of `$t` solves this problem, but is there a proper way to handle this? – TrueY May 27 '13 at 11:30
  • ``my $t = async { return `./wait.sh`; }`` `print $t;`. If `sleep 1` is added in between the two lines then it works properly. – TrueY May 27 '13 at 12:54
  • I've got `Perl exited with active threads:` and you don't need `sleep` to avoid this behavior, as `my $foo = $t->join();` will block until thread is ready to join. – mpapec May 27 '13 at 13:39
  • Hmmm... `$t->join;` also does not work as the `$t` is not visible (same error message). Same under `cygwin` and `CentOS 5.7`. – TrueY May 27 '13 at 13:48
  • @TrueY And you're sure the `$t->join` is in the scope of `$t`? – aschepler May 28 '13 at 11:58

The normal way to do this is with fork. Your script forks, and the child then calls either exec or system on the Bash script (depending on whether the child needs to handle the return code of the Bash script or otherwise interact with it).

Then your parent would probably want a combination of wait and/or a SIGCHLD handler.

The exact specifics of how to handle it depend a lot on your situation and exact needs.
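A minimal sketch of this pattern, assuming ./wait.sh from the first answer as the long-running command, and using a non-blocking waitpid instead of a SIGCHLD handler:

```perl
#!/usr/bin/perl
use strict; use warnings;
use POSIX ":sys_wait_h";    # WNOHANG for non-blocking waitpid

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: replace this process with the long-running command
    exec('./wait.sh') or die "exec failed: $!";
}

# Parent: do other work, polling the child without blocking
while (waitpid($pid, WNOHANG) == 0) {
    print "Executing...\n";
    sleep 1;
}
print "Execute done! Exit status: ", $? >> 8, "\n";
```

waitpid with WNOHANG returns 0 while the child is still running and the child's PID once it has exited, so the loop falls through as soon as the command finishes.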

Jonathan Hall