
I'm working with TCP sockets in C but still don't really understand "how far" the delivery of data is guaranteed.

My main problem is that in my case the server sometimes sends a message to the client and expects an answer shortly after. If the client doesn't answer in time, the server closes the connection. While reading through the man page for the recv() function in C, I found the MSG_PEEK flag, which lets me look/peek into the stream without actually consuming the data.
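For context, this is roughly how I imagined the peeking part (a simplified sketch, not my real code; `sockfd` is just some connected TCP socket and the buffer size is arbitrary):

```c
/* Sketch: look at whatever has already arrived without removing it from
 * the socket's receive buffer. sockfd is a connected TCP socket. */
#include <sys/socket.h>
#include <sys/types.h>

ssize_t peek_message(int sockfd, char *buf, size_t len)
{
    /* MSG_PEEK copies queued data into buf but leaves it in the buffer,
     * so a later recv() without the flag returns the same bytes again. */
    ssize_t n = recv(sockfd, buf, len, MSG_PEEK);

    /* ... inspect buf here and decide when to really read ... */
    return n;
}
```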

But does the server even care if I read from the stream at all?

Let's say the server "pushes" a series of messages into the stream and a client should receive them. As long as the client doesn't call recv(), those messages will stay in the stream, right? I know that ACK messages are sent when data is received, but is the ACK sent when I call the recv() function, or is it already sent when the message successfully reached its destination and could (emphasising could) be received by the client if it chooses to call recv()?

My hope is to trick the server into thinking the message hasn't been completely delivered yet, because the client has not called recv() yet. That way the client could already evaluate the message using the MSG_PEEK flag and make sure it always answers in time. Of course I know the timeout behaviour of my server depends on its implementation. My question is basically whether peeking makes the server think the message hasn't reached its destination yet, or whether the server won't even care, and when the ACK is sent relative to recv().

I read the man page on recv() and the Wikipedia article on TCP, but couldn't really figure out how recv() takes part in the process. I found some similar questions on SO but no answer to my question.

  • No, the ACK is not sent when you read. – Steve Summit Dec 09 '22 at 16:20
  • You might want to think of it as if there are *two* streams: a TCP stream between your machine (your machine's OS) and the server, and a stream between your machine's OS and your program. When TCP packets are received, your OS acks them and assembles them into a buffer, waiting for your program to read them. When your program reads, it drains that buffer. – Steve Summit Dec 09 '22 at 16:22
  • If you're having problems with your code not responding to input from the server in time, I believe you'll want to rearrange the architecture of your program, not try to play tricks with ACKs. In particular, you want either (a) your program to spend most of its time in a blocking `read` or `recv` call waiting for input (meaning that it's guaranteed to respond immediately), or (b) your program to be event-driven, such that the moment it learns that input is available, it goes and reads it. – Steve Summit Dec 09 '22 at 16:24
  • If you're spending significant time processing, and that's what's causing you to miss messages from the server, you may want to switch to a multithreaded architecture, so that you can have one thread doing processing, and one thread talking to the server. – Steve Summit Dec 09 '22 at 16:25
  • Now, everything I've said assumes your C program is running under a full-fledged OS, with a conventional networking stack. If you're doing embedded programming, things will probably be completely different. (In particular, in an embedded system, there may well *not* be an intermediate buffer on your side where some portion of the received TCP stream is being assembled, waiting for your program to read it.) – Steve Summit Dec 09 '22 at 16:28

1 Answer


TL;DR

Does the recv() function trigger sending the ACK?

No, not on any regular OS. Possibly on an embedded platform with an inefficient network stack. But it's almost certainly the wrong problem anyway.


Your question about finessing the details of ACK delivery is a whole can of worms. It's an implementation detail, which means it is highly platform-specific. For example, you may be able to modify the delayed ACK timer on some TCP stacks, but that might be a global kernel parameter, if it even exists.
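As an illustration of just how platform-specific these knobs are: Linux happens to expose a per-socket TCP_QUICKACK option, which pushes in the opposite direction of what you want (it asks the kernel to ACK immediately rather than delay), and the kernel may clear it again after use. A rough sketch, assuming Linux:

```c
/* Linux-specific sketch: TCP_QUICKACK asks the kernel to send ACKs
 * immediately instead of delaying them. It is not portable, and the
 * kernel may reset it, so it would have to be re-applied as needed. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int enable_quickack(int sockfd)
{
    int one = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
}
```

None of this changes the basic picture: the kernel acknowledges received data on its own schedule, independently of when (or whether) you call recv().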

However, it's all irrelevant to your actual question. There's almost no chance the server is looking at when the packet was received, because it would need its own TCP stack to even guess that, and it still wouldn't be reliable (TCP retransmission can keep backing off and retrying for minutes). The server is looking at when it sent the data, and you can't affect that.

The closest you could get is if the server uses blocking writes, is single-threaded, and you fill the receive window with unacknowledged data. But that will probably just delay the server noticing you're late rather than actually deceiving it.

Just make your processing fast enough to avoid a timeout instead of trying to lie with TCP.
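For example, a minimal receive-and-answer loop along these lines (just a sketch assuming POSIX sockets; handle_message and the fixed-size buffers are placeholders for whatever your protocol actually needs) keeps the client parked in a blocking recv(), so it can reply the moment the server's message arrives:

```c
/* Minimal sketch: block in recv(), then answer immediately.
 * sockfd is a connected TCP socket; message framing and the actual
 * reply logic (handle_message) are placeholders for your protocol. */
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void handle_message(const char *msg, size_t len,
                           char *reply, size_t *reply_len)
{
    /* Placeholder: build the protocol reply from the received message. */
    (void)msg; (void)len;
    memcpy(reply, "OK\n", 3);
    *reply_len = 3;
}

static void serve(int sockfd)
{
    char buf[4096];
    char reply[4096];

    for (;;) {
        ssize_t n = recv(sockfd, buf, sizeof buf, 0);  /* blocks until data arrives */
        if (n <= 0)
            break;                                     /* peer closed or error */

        size_t reply_len = 0;
        handle_message(buf, (size_t)n, reply, &reply_len);

        /* Answer right away; keep any heavy processing off this path. */
        if (send(sockfd, reply, reply_len, 0) < 0)
            break;
    }
    close(sockfd);
}
```

If the processing itself is what's slow, move it to another thread (as suggested in the comments) and keep this loop responsive.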

– Useless