12

I have a client server situation where the client opens a TCP socket to the server, and sometimes long periods of time will pass with no data being sent between them. I have encountered an issue where the server tries to send data to the client, and it seems to be successful, but the client never receives it, and after a few minutes, it looks like the client then gets disconnected.

Do I need to send some kind of keep alive packet every once in a while?

Edit: To note, this is with peers on the same computer. The computer is behind a NAT that forwards a range of ports to this computer. The client connects to the server via DNS, i.e. it uses mydomain.net and a port to connect.

Nick Banks
  • 4,298
  • 5
  • 39
  • 65
  • You can manually set the timeout values. In Java the variable for it is `SO_TIMEOUT` – arunmoezhi Jul 29 '12 at 19:52
  • @arunmoezhi In *C* the *socket option constant* for a *read timeout* is SO_RCVTIMEO. In Java you call the setSoTimeout() method, but again this sets a read timeout. None of which is what he is asking about. – user207421 Jul 30 '12 at 00:31
  • I thought the setSoTimeout() method sets the variable SO_TIMEOUT. – arunmoezhi Jul 30 '12 at 00:55
  • @arunmoezhi Of course it does. But that's not what you said. There is no SO_TIMEOUT at all in Java, let alone a variable. – user207421 Jul 30 '12 at 05:44
  • 1
    @EJP: I was referring to http://docs.oracle.com/javase/1.4.2/docs/api/java/net/SocketOptions.html#SO_TIMEOUT – arunmoezhi Jul 31 '12 at 22:03

4 Answers

9

On Windows, idle sockets with no data flowing are a big source of trouble in many applications and must be handled correctly.

The problem is that SO_KEEPALIVE's probe period can only be set system-wide (otherwise the default is a useless two hours) or, with the later Winsock API, per socket.

Therefore, many applications send an occasional byte of data every now and then (to be discarded by the peer) only so that the network layer will declare disconnection when the ACK never arrives (after the layer has done all due retransmissions and the ACK timeout has expired).

Answering your question: no, the sockets do not disconnect automatically.

Yet, you must be careful with the above issue. What complicates it further is that this behavior is very hard to test. For example, if you set everything up correctly and expect to detect disconnection properly, you cannot test it by unplugging the physical layer: the NIC will sense the carrier loss, and the socket layer will signal all application sockets that relied on it to close. A good way to test it is to connect two computers through three cable segments and two switches, then disconnect the middle segment; this physically separates the machines while preventing carrier loss at either NIC.

Pavel Radzivilovsky
  • 18,794
  • 5
  • 57
  • 67
  • 1
    Because of the C# tag. This behavior is platform-dependent. – Pavel Radzivilovsky Jul 30 '12 at 00:53
  • 1
    +1 for suggesting occasional ping-pong. This is a useful way to make sure the connection is active, and that way you can handle disconnected clients. – David Anderson Jul 30 '12 at 03:53
  • 1
    Why are 'sockets with no data sent' a 'big source for trouble' *on Windows?* and not, by implication, on other systems? – user207421 Jul 30 '12 at 05:47
  • I don't see how it's difficult to test, either. It's pretty trivial to block the thread/s in either client or server that perform the tx, so preventing the sending of any data without closing any sockets. – Martin James Jul 31 '12 at 10:35
  • Sending a byte every few minutes seems to fix the issue. Thanks. Any idea on a minimum amount of time between pings that I should use? – Nick Banks Jul 31 '12 at 18:33
  • @MartinJames Blocking the server does not help in the case of TCP, where the connection is managed by the network layer and its own processes. Which is good, because this enables stopping your app in a debugger without breaking the connection. – Pavel Radzivilovsky Aug 06 '12 at 20:47
  • @gamernb I suggest 10 seconds. Should be good for every normal use case. If you want the number to be less magic, then it should be much bigger than the ACK/retrans timeout and much smaller than user unavailability tolerance. – Pavel Radzivilovsky Aug 06 '12 at 20:48
1

TCP sockets don't close automatically at all. TCP connections, however, do. But if this is happening between peers on the same computer, the connection should never be dropped as long as both peers exist and have their sockets open.

user207421
  • 305,947
  • 44
  • 307
  • 483
0

There is a timeout built in to TCP, and you can adjust it: see the SendTimeout and ReceiveTimeout properties of the Socket class. But I have a suspicion that is not your problem. A NAT router may also have an expiration time for TCP connections before it removes them from its port-forwarding table. If no traffic passes through the router within that timeout, it will block all incoming traffic (having cleared the forwarding entry from its memory, it no longer knows which computer to send the traffic to); a new outgoing connection will also likely use a different source port, so the server may not recognize it as the same connection.

Scott Chamberlain
  • 124,994
  • 33
  • 282
  • 431
0

It's safer to use the keep-alive option (SO_KEEPALIVE under Linux) to prevent disconnection due to inactivity, but this may generate some extra packets.

This sample code does it under Linux:

int val = 1;
....
// After creating the socket s: setsockopt() returns 0 on success, -1 on error
if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, (char *)&val, sizeof(val)) != 0)
    fprintf(stderr, "setsockopt failure: %d\n", errno);

Regards.

TOC
  • 4,326
  • 18
  • 21