
I'm working on an audio encoder cgi script that utilises libmp3lame. I'm writing in a mixture of C/C++.

I plan to have an entry-point cgi that can spawn multiple encoding processes that run in the background. I need the encoding processes to be asynchronous as encoding can take several hours but I need the entry-point cgi to return instantly so the browser can continue about its business.

I have found several solutions for this (some complete/ some not) but there are still a few things I'd like to clear up.

Solution 1 (easiest): The entry-point cgi is a bash script which runs a C++ process cgi in the background, redirecting its output with `> /dev/null 2>&1 &` (simples! but not very elegant).

Solution 2: Much like solution 1, except the entry-point cgi is in C++ and uses system() to run the process(es), again redirecting their output with `> /dev/null 2>&1 &`.

[question] This works well but I'm not sure if shared hosting companies allow use of the system() function. Is this the case?

Solution 3 (incomplete): I've looked into using fork()/pthread_create() to spawn separate threads, which seems more elegant as I can stay in the realms of C. The only problem: it seems that the parent doesn't exit until all of its children have returned.

[question] Is there any way to get the parent to exit whilst allowing the children to continue in the background?

[idea] Maybe I can send the child processes' output to the black hole! Can I simply redirect stdout to /dev/null? If so, how do I do this?

I hope this makes sense to someone. I'm still a bit of a noob with C stuff so I may be missing very basic concepts (please have mercy!).

I'd be very grateful for any advice on this matter.

Many thanks in advance,

Josh


1 Answer


You probably want the standard Unix daemon technique, involving a double fork:

void daemonize(void)
{
  if (fork()) exit(0); // fork.  parent exits.
  setsid(); // become process group leader
  if (fork()) _exit(0); // second parent exits.
  chdir("/"); // just so we don't mysteriously prevent fs unmounts later
  close(0); // close stdin, stdout, stderr.
  close(1);
  close(2);
}

Looks like modern Linux machines have a daemon() library function that presumably does the same thing.

It's possible that the first exit should be _exit, but this code has always worked for me.

Eric Seppanen
  • Excellent. I haven't come across this before. I'll get testing. Will write back in a bit. Thanks for the reply. – Josh Dec 23 '09 at 18:31
  • Wow! Worked straight out of the box. Thank you. I don't quite get the fs unmount bit - but hey, it works! Cheers. Josh. – Josh Dec 23 '09 at 19:01
  • Each process will hold a reference to its current working directory. If you try to unmount a filesystem (even a lowly flash drive) while somebody still holds references to anything in the filesystem, it will fail. This can look pretty mysterious if it's a program running in the background holding up that filesystem. Common practice is for daemons is to `chdir("/")` so that sort of thing doesn't happen. – Eric Seppanen Dec 23 '09 at 20:39
  • There should also be error handling for things that can fail, like `fork()`. I wanted to create the shortest possible example code. This is an ancient, standard technique, and you will find similar code in just about any Unix/Linux daemon. – Eric Seppanen Dec 23 '09 at 20:44
  • Thanks Eric, I think I'm getting the hang of it now. I found a nice article about Unix Daemon Servers (has helped me anyway). The url is: http://bit.ly/7diFoD if anyone is interested. – Josh Dec 23 '09 at 23:21
  • @Eric: I tried this trick in Python to achieve the same end, but it's not working for me. Perhaps you could figure out what I'm doing wrong? I've posted this question: http://stackoverflow.com/questions/6024472/start-background-process-daemon-from-cgi-script – Mitch Lindgren May 16 '11 at 23:26