
I issue the command task1 & task2 dir/ && task3 &, where:

  • Task1 never ends, so we send it to the background.
  • Task2 changes the working directory.
  • Task3 depends on task2 completing successfully.

Issuing jobs -l will show:

[1]- 39281 Running                 task1 &
[2]+ 39282 Running                 task2 && task3 &

Issuing ps will show:

39281 ttys002    0:04.17 task1  
39282 ttys002    0:00.00 task2
39283 ttys002    0:03.66 task3

Questions:

  1. Is there a way to show task3 in the jobs command output?
  2. Why doesn't task3 show up as its own job?
  3. Is there a way to kill all three tasks simultaneously?
  4. Why doesn't task3 die if I kill task2?
  5. Is there a better way to do this?

Goal: Issue a single line of commands to start a workflow and send it to the background. When ready, kill all the started processes in one go.

For context, in my case task1 is a grunt task with livereload, so it needs to run in the background. Task2 changes the directory so that task3 can watch for file changes in that directory.

Scorpius
  • Use pstree -ap and look at the relationship between the tasks; maybe task3 is not a child of your shell. – michael501 Apr 04 '14 at 21:55
  • I don't have my computer to hand, but what happens if you put parentheses around the whole lot so it runs in a sub-shell? Does killing the sub-shell allow you to kill the children? – Mark Setchell Apr 04 '14 at 22:07
  • Is task2 a long-running task? I attempted to reproduce this but could not: $ ./task1 & ./task2 && ./task3 & It would be useful if you could provide short script samples that reproduce the undesired behaviour. – Rob Kielty Apr 04 '14 at 22:13
  • @michael - I'm on OS X, so pstree is not readily available. – Scorpius Apr 04 '14 at 22:44
  • @MarkSetchell I tried this and killing the job associated with the bash sub-process only kills bash, grunt and ruby remain running. – Scorpius Apr 04 '14 at 22:50
  • @RobKielty task2 is just a cd command so that task3 runs from the correct path. – Scorpius Apr 04 '14 at 22:51
  • 1
    `cd` is a shell builtin, and would not show up in the output of `ps`. Further, if `task2` is a separate process, any change it makes to its *own* working directory will have no effect on `task3`. – chepner Apr 05 '14 at 13:35

3 Answers


Using bash with job control, you can kill the two most recent background jobs more or less simultaneously with

kill %% %-

If you don't want the job listing to show the cd part, you'll need to change the directory in the current shell, which means isolating the cd expression:

sleep 10000 & cd .. && { sleep 20000 & }
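
A quick way to try this out, with sleep standing in for the real tasks (just a sketch; job numbers and exact output will differ):

sleep 10000 & cd .. && { sleep 20000 & }
jobs -l          # should list two jobs: [1] sleep 10000 and [2] sleep 20000
kill %% %-       # %% is the current job, %- the previous one; both sleeps receive SIGTERM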
kojiro

This will kill all child processes started from that shell, including your background processes:

pkill -P $$
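
A minimal way to see it in action (sleep is just a throwaway child process); note that -P matches only direct children of the shell, so a grandchild started via a subshell may survive:

sleep 10000 &      # a background child of the current shell
pkill -P $$        # send SIGTERM to every process whose parent is this shell
jobs               # the sleep job should now be reported as terminated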
Justin

Questions:

Is there a way to show task3 in the jobs command output?

Not in this case.

Why doesn't task3 show up as its own job?

Because task3 is not an independent process started separately from the others, like task1; it depends on the return value of task2 (remember that task2 can return its value to the shell before the process has actually ended). Only when task3 returns a value to the shell is the whole "flow" finished, as far as the shell is concerned. That is why task2 and task3 must share a single job id.
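
A quick way to see this, with sleep standing in for the real tasks:

sleep 100 & sleep 200 && sleep 300 &
jobs     # two jobs: [1] sleep 100 and [2] sleep 200 && sleep 300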

Is there a way to kill all three tasks simultaneously?

I don't think I understand the question, sorry. You can capture the PIDs, so you can kill the processes by PID.

Why doesn't task3 die if I kill task2?

Because task2 is not the parent of task3; task3 is just a task that gets executed after task2 returns a success (zero) exit code to the shell. In this case task2 and task3 are two independent processes that run in the same calling context. The task2 dir/ && task3 & in your example is roughly equivalent to the following:

( task2 dir/
  if [ $? -eq 0 ]; then
    task3
  fi
) &

As you can see, task2 and task3 are not linked in any way other than by task2's exit code.

Is there a better way to do this?

I think you're doing a good job.

In other words:

  • from the bash perspective, a "job" is an entire flow of processes that starts with the first one and ends when the last one returns its exit status to the shell. In this case task2 and task3 must share a single job id. That's why we use the term 'job' instead of 'process'.

  • from the "processes hierarchy" perspective, task2 and task3 are totally independent so they will have two different pids and the PPID will always be the bash shell.

About your goal: I'm still not sure I understand. You have the PIDs of task1 and task3, so you can kill both with a single kill command, like kill pid-for-task1 pid-for-task3 (see the sketch below).
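
A minimal sketch of that idea, assuming an interactive bash with job control (so each background job gets its own process group) and using $! to capture the PIDs; task1, task3 and dir/ are placeholders for the real commands:

task1 &                  # start task1 in the background
pid1=$!                  # $! holds the PID of the most recently backgrounded job
( cd dir/ && task3 ) &   # do the cd in a subshell so the current shell's directory is untouched
pid3=$!                  # PID of the subshell leading task3's job

# ... later, stop everything in one go; the negative PIDs address the
# whole process group of each job, so task3 should be caught even if it
# is a child of the subshell rather than of the interactive shell:
kill -- -"$pid1" -"$pid3"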