
I have a cgi script which gets a request from the HTML JavaScript, runs some system commands, and then sends a response back to the JavaScript. In my cgi script, in one of the set functions, I have the lines of code below.

$shred1 = `/usr/bin/find /usr/demo/logs -type f | xargs shred -uvz -n 5`;
$shred2 = `/usr/bin/find /var/lib/pgsql/data -type f | xargs shred -uvz -n 5`;

The first line, i.e. "shred1", is executed successfully, but the second line, "shred2", just hangs and won't even return an error. Is there anything I am missing here, or is there another approach to achieve the same result? Thank you.

charan
  • Try executing the command as the user that runs the cgi script and see if you get any error, or if it hangs as well. Maybe there is just a lot of data and `shred` takes a long time to run. You can use `iotop` to check if there is any I/O happening on the disk and `top`/`htop` to check if it is doing anything at all. – toydarian Jul 30 '20 at 07:09
  • Yes, I tried executing it as both the user and root, but it just hangs. If I run the command `/usr/bin/find /var/lib/pgsql/data -type f | xargs shred -uvz -n 5` in a terminal it works, but not from the cgi script. – charan Jul 30 '20 at 07:17
  • I would `tee` the stdout and stderr of the command to a logfile. Maybe this gives some insight. You could also prefix the `shred` by an `echo` and check whether it's hanging too, i.e. whether the problem is with the `find` or with the `shred`. How long did you wait until you decided that it hangs? Maybe there is just a lot to shred in the second case. – user1934428 Jul 30 '20 at 07:24
  • It is a problem with `shred`, I think, because when I replace `shred` with `rm -rf` it works perfectly. And if I run the `shred` command manually for the same directory, it takes 2 minutes to complete. So I am checking the status of the cgi script only after 2 or 4 minutes. – charan Jul 30 '20 at 07:36
  • Try `/usr/bin/find /var/lib/pgsql/data -type f | xargs -n1 -I{} -P1 bash -c "echo {} >> /tmp/logfile; shred -uvz -n 5 {}"` to see if `shred` hangs at a certain file. This will shred one file after another, so you can `tail -f /tmp/logfile` and see at which file it stops. If it is a certain file that causes the problem, you see which one it is. – toydarian Jul 30 '20 at 07:59
  • And another thing, I just realized: Is there a PostgreSQL process running that is accessing those files? If that is the case, it might have a lock on one of the files and `shred` will wait for the lock to be released before it shreds the file. That could cause the hang. So make sure to stop the database before running the command. – toydarian Jul 30 '20 at 08:00
  • No, the postgres DB is not running, and in /tmp/logfile it hangs exactly at /var/lib/pgsql/data/base/1/12842. I tried the same thing 3 times. – charan Jul 30 '20 at 08:44
  • And the same command works when I run it from the terminal instead of cgi script. – charan Jul 30 '20 at 08:44
  • 1
    Then run `/usr/bin/find /var/lib/pgsql/data -type f | xargs -n1 -I{} -P1 bash -c "echo {} >> /tmp/logfile; shred -uvz -n 5 {} &> /tmp/shred.log"` using cgi and check `/tmp/shred.log` for any output shred gives. – toydarian Jul 30 '20 at 11:28
  • Thank you, after redirecting the output to /tmp/shred.log the command gets executed successfully. – charan Jul 30 '20 at 15:24
  • @charan I put the solution and some further explanation in an answer. I would very much appreciate, if you would click the "accept" button! :) – toydarian Aug 05 '20 at 06:56

1 Answer


For future reference:
The problem here was that the output of shred blocked on, or filled up, a buffer or pipe in the cgi execution environment. As shred has no "quiet" flag (according to the man page), the solution is to run it like this, redirecting the output to /dev/null:

$shred1 = `/usr/bin/find /usr/demo/logs -type f | xargs -I{} bash -c "shred -uvz -n 5 {} &> /dev/null"`;
$shred2 = `/usr/bin/find /var/lib/pgsql/data -type f | xargs -I{} bash -c "shred -uvz -n 5 {} &> /dev/null"`;

In case of problems caused by too many files being shredded at once, use xargs' -n and -P flags to limit the number of files passed to each shred invocation and the number of invocations run in parallel.
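As a rough sketch of that batching idea (the scratch directory and file names below are made up purely for illustration; note that -n after xargs is xargs' max-args flag, distinct from shred's -n iteration count):

```shell
# Create a scratch directory with a few dummy files (illustrative only).
tmpdir=$(mktemp -d)
for i in 1 2 3 4; do echo "dummy data" > "$tmpdir/file$i"; done

# Pass at most 2 files to each shred invocation (xargs -n 2) and run at
# most 2 invocations in parallel (xargs -P 2); discard shred's output so
# nothing blocks when run from a cgi script.
find "$tmpdir" -type f -print0 \
  | xargs -0 -n 2 -P 2 shred -uz -n 3 > /dev/null 2>&1

# All files should be gone now.
remaining=$(find "$tmpdir" -type f | wc -l)
echo "remaining files: $remaining"
rmdir "$tmpdir"
```

Using -print0/-0 additionally keeps filenames with spaces or newlines from being split incorrectly, which the plain `find | xargs` in the question would mishandle.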

If the output of shred is of any importance, instead of sending it to the void, it can be written to a file by replacing /dev/null with the path of the log file it should be stored in.
In that case, the log file is overwritten on every run. If that should not happen, use &>> instead of &> to append to the log file (and don't forget to rotate the log file from time to time in that case).
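A minimal sketch of the append variant (the scratch files and the temporary log path here are placeholders, not the paths from the question):

```shell
# Scratch files to shred (illustrative only).
tmpdir=$(mktemp -d)
echo "data" > "$tmpdir/a"
echo "data" > "$tmpdir/b"
log=$(mktemp)

# &>> appends both stdout and stderr of shred to the log file, so output
# from earlier runs is preserved instead of being overwritten.
find "$tmpdir" -type f \
  | xargs -I{} bash -c "shred -uvz -n 2 '{}' &>> '$log'"

# shred -v reports each pass on stderr, so the log should have content.
lines=$(wc -l < "$log")
echo "log lines: $lines"
rm -f "$log"
rmdir "$tmpdir"
```

Note that &> and &>> are bash-specific, which is why the command is wrapped in `bash -c` rather than relying on whatever /bin/sh the cgi environment provides.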

toydarian