
I'm sending an array of floats, one by one, over a TCP socket. My server program (the receiver, which handles multiple connections simultaneously) should read data until it receives the value 0. If the client (the sender) doesn't send anything for 10 seconds after connecting (or after the last sent value), I want the server to close that connection. I found the signal approach below, but I think it is a poor fit for threads (it seems more suited to fork()) because it forces me to use global variables: I would need to pass the socket to the handler so I can close it, and AFAIK that is not possible.

void time_out(int semnal) {
  printf("Time out.\n");
  close(socket);  /* `socket` has to be a global here, which is the problem */
  exit(1);
}

Each time a client connects or sends something, I call this:

signal(SIGALRM, time_out);
alarm(10);

What other options do I have to count 10 seconds and be able to restart this timer?

  • If you're using any sort of I/O multiplexing (e.g. `epoll`), then there's a good chance that there's already a native timing mechanism available for those (e.g. `timerfd` on Linux, or `select` with timeout, etc.) – Kerrek SB Mar 06 '13 at 09:54
  • Can you please be a little more specific? A link or a small example of how `timerfd` works would help (I found the Linux man page, but its example is not quite straightforward). – 55651909-089b-4e04-9408-47c5bf Mar 06 '13 at 10:16
  • The manual for `timerfd` is pretty clear, I find. You make a file descriptor that becomes ready to read when the time interval is up. Put that file descriptor into your monitored set, and you have a convenient way of knowing when the timer fired. – Kerrek SB Mar 06 '13 at 21:22

1 Answer


You could use `select` with an empty FD set and a timeout. That's pretty common and pretty portable.

– Joe