I am currently developing two applications that work over a custom video streaming "protocol". Basically:
- The server captures video frames from a webcam, slices them into parts, and sends these parts to the clients over UDP.
- The clients receive all the frame parts and handle all the "reordering": placing the parts in the right order, storing new frames "over" old ones, and so on. This is where a "protocol" comes in: clients need to make sense of the video data they receive so they can reorder parts correctly and display a proper frame.
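To give an idea of what "making sense of the video data" means, each part carries metadata along these lines. This is only a conceptual sketch with placeholder names, not my actual header layout:

    /* Sketch only: placeholder names, not my actual header layout.
     * Each part carries enough metadata for the client to know which
     * frame it belongs to and where it sits inside that frame. */
    #include <stdint.h>

    struct part_header {
        uint32_t frame_id;    /* identifies the frame this part belongs to    */
        uint16_t part_index;  /* position of this part within the frame       */
        uint16_t part_count;  /* total number of parts that make up the frame */
    };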
The mechanism itself works perfectly fine (after quite a lot of struggle, I'll admit). However, now that my applications are running, the client struggles to receive some frame parts. It will successfully retrieve the parts for the first, let's say n, frames, and then just... hang on recvfrom. I won't bother you with all the applications' details, but here are some stats:
- The server captures a frame (38,016 bytes) every 40,000 microseconds (25fps).
- Each frame is divided into 24 parts (38,016 / 24 = 1584 bytes per part).
Let's assume we have 1 client only. On the network side, this means that:
- Every 40,000 microseconds, the server sends 24 buffers to the client (one sendto call per buffer). Each buffer is 1584 bytes long. On the other end, the client calls recvfrom 24 times.
- In one second, the server can capture 25 frames, so it sends 25 * 24 = 600 frame parts to the client. This represents 25 * 24 * 1584 = 950,400 bytes per second.
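To make the sending pattern concrete, here is a simplified sketch of the server's send loop. The names are placeholders (capture_frame stands in for the real webcam capture) and error handling is omitted; it is not my exact code, just the shape of it:

    /* Simplified sketch of the server's send loop. capture_frame() is a
     * placeholder for the real webcam capture; error handling is omitted. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define FRAME_SIZE      38016
    #define PARTS_PER_FRAME 24
    #define PART_SIZE       (FRAME_SIZE / PARTS_PER_FRAME)   /* 1584 bytes */
    #define FRAME_PERIOD_US 40000                            /* 25 fps     */

    static void capture_frame(uint8_t frame[FRAME_SIZE])
    {
        memset(frame, 0, FRAME_SIZE);   /* stand-in for the webcam capture */
    }

    void serve(int sock, const struct sockaddr *dest, socklen_t dest_len)
    {
        uint8_t frame[FRAME_SIZE];

        for (;;) {
            capture_frame(frame);

            /* 24 sendto() calls per frame, one 1584-byte buffer each */
            for (int i = 0; i < PARTS_PER_FRAME; i++)
                sendto(sock, frame + i * PART_SIZE, PART_SIZE, 0, dest, dest_len);

            usleep(FRAME_PERIOD_US);    /* next frame in 40,000 microseconds */
        }
    }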
We'll also assume that the client is always listening when the server is sending. Now, at these rates, the client keeps up for one or two seconds. The application does not freeze, but the client eventually starts hanging on recvfrom, as if the server had stopped broadcasting.
I added some verbosity to my server to make sure it kept broadcasting, and it does. After a few seconds, it seems that the server's sendto calls no longer reach the client's recvfrom calls. I checked all my network-related code, and since it is quite simple, there isn't much that can go wrong: the server builds a buffer, calls sendto, and starts preparing the next buffer... The client simply waits for buffers.
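The client side is essentially the mirror image. A simplified sketch of the receive path (again with placeholder names, not my exact code):

    /* Simplified sketch of the client's receive path: one blocking
     * recvfrom() per part. With no timeout configured, a part that never
     * arrives means this call never returns, which matches the hang. */
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    #define PART_SIZE 1584

    ssize_t receive_part(int sock, uint8_t buf[PART_SIZE])
    {
        return recvfrom(sock, buf, PART_SIZE, 0, NULL, NULL);
    }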
Since I can't find an explanation in my code, I'm starting to believe something is... stuck on the network side. It seems like something, somewhere, is preventing my UDP packets from reaching the client after some time. Now, since UDP is completely control-free, I can't find a way to check my buffers' transport from within my program.
However, would there be a way for me to see whether the system actually transmits my packets, or whether it eventually reaches one of its limits and starts dropping them? If so, what is this limiting mechanism, and is there a way to configure my system so that it lets my applications work at the rate I programmed them to?
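For what it's worth, the only socket-level knob I am aware of is the receive buffer size. If the limiting mechanism turns out to be the client's UDP receive buffer overflowing, I assume something like the following (untested sketch) would let me inspect and enlarge it, but I don't know whether that is actually the mechanism at play:

    /* Untested sketch: query the client's current UDP receive buffer and
     * ask for a larger one. The kernel caps the effective size at
     * net.core.rmem_max, so this may also require a sysctl change. */
    #include <stdio.h>
    #include <sys/socket.h>

    void show_and_grow_rcvbuf(int sock)
    {
        int size = 0;
        socklen_t len = sizeof(size);

        if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len) == 0)
            printf("current SO_RCVBUF: %d bytes\n", size);

        int wanted = 4 * 1024 * 1024;   /* arbitrary example value */
        setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted));
    }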
Since my applications communicate over the loopback interface (the server is on 127.0.0.1:n), I thought it would be useful to include some information about this interface. I am running a GNU/Linux system (kernel 3.13.0).
$ ifconfig lo
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:98821 errors:0 dropped:0 overruns:0 frame:0
TX packets:98821 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:202639359 (202.6 MB) TX bytes:202639359 (202.6 MB)