I'm trying to improve the performance of my multicast application (specifically, to reduce its packet loss), which runs on a large network.
My experiments show that on the first run of the application some packets are lost. But when I run the application again soon after the previous run (sometimes even with a small delay in between), there is no packet loss. However, when I re-run the application after a long delay (for example 20 minutes or so), I see the packet loss again.
When I checked their timestamps, I saw that the lost packets were mostly the ones sent at the very beginning. So it seems like the switches or routers need some kind of warm-up (I don't know what else to call this phenomenon).
I've checked the tcpdump results, and the number of packets received by the receiver application was exactly the same as the number of packets received by the network card.
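For reference, the capture looked roughly like this (eth0 and 239.0.0.1 stand in for my actual interface and multicast group):

tcpdump -i eth0 -nn udp and dst 239.0.0.1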
I've already tried the following tricks:

1- changing the CPU affinity of the process (pinning it to different cores) and its scheduling policy
2- changing the priority of the socket descriptor
Changing the priority of the socket descriptor did make things better (it reduced the number of lost packets), but even after setting the priority to high, there were still some lost packets.
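Concretely, what I did looks roughly like this (Linux-specific; the core number, scheduling priority, and socket priority are just the values I experimented with, not recommendations):

#include <sched.h>
#include <sys/socket.h>
// compiled with g++, which defines _GNU_SOURCE for the CPU_* macros

int main()
{
    int sock_fd = socket(AF_INET, SOCK_DGRAM, 0);   // the multicast UDP socket

    // 1- pin the process to a single core, with a real-time scheduling policy
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(2, &mask);                              // core 2 is arbitrary
    sched_setaffinity(0, sizeof(mask), &mask);      // pid 0 = calling process

    sched_param param{};
    param.sched_priority = 80;                      // 1..99, needs privileges
    sched_setscheduler(0, SCHED_FIFO, &param);

    // 2- raise the priority of the socket's outgoing traffic
    int prio = 6;                                   // 0..6 without CAP_NET_ADMIN
    setsockopt(sock_fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio));
    return 0;
}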
// For example:
MulticastSender multicast_sender;
multicast_sender.init();

// Here I need a function to find out whether the switch is already warmed up or not

while (some_condition)
{
    // ...
    multicast_sender.send(something);
    // ...
}
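The only workaround I can think of so far is to blindly send some throwaway packets and wait before the real transmission starts. This is just a rough sketch: the packet count and the delay are pure guesses, and dummy_packet is a placeholder payload that receivers would have to ignore:

#include <chrono>
#include <thread>

// Guess: prime the path with throwaway packets so the switches/routers can
// build their multicast forwarding state, then pause before the real data.
const int kWarmupPackets = 50;                 // guessed, not measured

MulticastSender multicast_sender;
multicast_sender.init();

for (int i = 0; i < kWarmupPackets; ++i)
    multicast_sender.send(dummy_packet);       // receivers must ignore these

std::this_thread::sleep_for(std::chrono::milliseconds(100));   // guessed delay

// ... then start the real send loop shown above

But I'd much rather detect readiness than guess at it.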
So I was wondering: is there any way to add some code to find out whether the switch (or router) is already warmed up enough?