My question arises from implementing a version of the standard I/O fgets function, called tfgets, that times out and returns NULL if it does not receive an input line on standard input within 5 seconds. A reference solution is:
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

sigjmp_buf buf;

/* SIGCHLD handler: announce itself, then jump back to the sigsetjmp in tfgets */
void sigchild_handler(int sig) {
    puts("handler");
    siglongjmp(buf, 1);
}

/* read a line with a 5-second timeout */
char *tfgets(char *s, int size, FILE *stream) {
    if (fork() == 0) {          /* child: sleep 5 seconds, then exit, raising SIGCHLD */
        sleep(5);
        printf("Child exits.\n");
        exit(0);
    }
    switch (sigsetjmp(buf, 1)) {
    case 0:                     /* direct return: install the handler and block in fgets */
        signal(SIGCHLD, sigchild_handler);
        return fgets(s, size, stream);
    case 1:                     /* returned via siglongjmp from the handler: timed out */
        return NULL;
    default:
        break;
    }
    return s;
}

int main() {
    char buf[1024];
    if (tfgets(buf, 1024, stdin) == NULL)
        printf("BOOM!\n");
    else
        printf("%s", buf);
    return 0;
}
This is implemented with sigsetjmp and siglongjmp. I noticed that when the user inputs something within 5 seconds, the program behaves normally and returns. At that point the forked child is usually still sleeping, since the main routine does not run that long. To keep main() alive, I added a while(1) loop just before its return. When I run it, type something within 5 seconds, and then let it spin, the child should wake up about 5 seconds later and the signal handler should be executed. However, the siglongjmp doesn't work.
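Concretely, the modified main() is just the one above with the loop inserted before the return:

int main() {
    char buf[1024];
    if (tfgets(buf, 1024, stdin) == NULL)
        printf("BOOM!\n");
    else
        printf("%s", buf);
    while (1)       /* keep main() alive so the child's SIGCHLD arrives ~5 s after the fork */
        ;
    return 0;       /* never reached */
}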
In this scenario, tfgets has already returned, but the jump target recorded by sigsetjmp lies in its (now destroyed) stack frame. The NOTES section of the sigsetjmp man page says:
If the function which called setjmp() returns before longjmp() is called, the behavior is undefined. Some kind of subtle or unsubtle chaos is sure to result.
My question: is "the long jump simply isn't taken" just one possible outcome of this undefined behavior, i.e., not something I can rely on? If so, how could I fix it, for example along the lines of the sketch below? Thanks a lot~
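Here is the kind of fix I was imagining, as a rough sketch only (tfgets_fixed is a placeholder name, and I haven't verified that killing and reaping the child like this is the right approach). The idea is to disarm SIGCHLD and get rid of the child before the function returns, so the handler can never fire once the sigsetjmp frame is gone. It reuses the global buf and sigchild_handler from above and additionally needs <sys/wait.h> for waitpid().

#include <sys/wait.h>   /* for waitpid() */

char *tfgets_fixed(char *s, int size, FILE *stream) {
    pid_t pid = fork();
    if (pid == 0) {                         /* child: acts as the 5-second timer */
        sleep(5);
        exit(0);
    }

    char *result;
    if (sigsetjmp(buf, 1) == 0) {
        signal(SIGCHLD, sigchild_handler);
        result = fgets(s, size, stream);    /* input arrived in time */
    } else {
        result = NULL;                      /* jumped back from the handler: timed out */
    }

    /* Ensure the handler cannot run after this frame is gone: restore the
       default disposition, then kill and reap the timer child. If we already
       timed out, the child has exited; kill() on the zombie is harmless and
       waitpid() just reaps it. */
    signal(SIGCHLD, SIG_DFL);
    kill(pid, SIGKILL);
    waitpid(pid, NULL, 0);
    return result;
}

There is still a window where SIGCHLD can arrive right as fgets returns and the input gets discarded, but the jump target stays valid for the whole lifetime of the call, so at least the behavior is defined. Is something like this reasonable, or is there a cleaner way?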