I am writing a network program as part of a networking project. The program generates a bunch of packets (TCP is used for communication via the Berkeley socket API), sends them to a particular port, and measures the responses that come back. The program works perfectly fine, but I wanted to do a little back-end calculation of how much data rate my program is actually generating. What I tried to do is measure the time before and after the routine that sends the packets out and divide the total data by that time, i.e. a total of 799 packets are sent out in one run of the routine, where each packet is 82 bytes, so:
799 x 82 x 8 = 524144 bits. The time measured was 0.0001 s, so the data rate is 524144 / 0.0001 = 5.24 Gbps.
Here is the piece of code that I tried:
#include <stdio.h>
#include <sys/time.h>

/* Returns the elapsed time between start and end in seconds. */
double diffTime(struct timeval *start, struct timeval *end)
{
    double start_sec = (double) start->tv_sec + (double) start->tv_usec / 1000000.0;
    double end_sec   = (double) end->tv_sec   + (double) end->tv_usec   / 1000000.0;
    return end_sec - start_sec;
}

int main(void)
{
    struct timeval start, end;

    while (1) {
        gettimeofday(&start, NULL);          /* start time of the batch */
        /* Call packet sending routine */
        gettimeofday(&end, NULL);            /* end time of the batch */
        printf("Time taken for sending out a batch is %f secs\n",
               diffTime(&start, &end));
    }
    return 0;
}
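For completeness, this is roughly how I turn that measured time into a rate, placed inside the loop right after the second gettimeofday call (the batch size of 799 packets and the packet size of 82 bytes are just hard-coded here to illustrate the calculation above; in the real program they come from the sending routine):

double elapsed   = diffTime(&start, &end);        /* seconds for one batch */
double bits_sent = 799.0 * 82.0 * 8.0;            /* packets * bytes * bits per byte */
double rate_gbps = (bits_sent / elapsed) / 1e9;   /* bits per second -> Gbps */
printf("Approximate application-level rate: %.2f Gbps\n", rate_gbps);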
I wanted to confirm whether I am approaching the problem correctly. Also, if this is the right method, is there a way to find out at what rate the packets are actually going out on the wire, i.e. the physical rate of the packets leaving the Ethernet interface? Can we estimate the difference between the packet rate I calculated in the program (which is measured in user mode, and I expect it to be quite different since the user/kernel boundary is crossed on every system call) and the actual packet rate? All help is much appreciated.
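The only cross-check I could think of so far (this is just a guess on my part, not something I have verified against the actual link) is to sample the kernel's transmit byte counter for the interface before and after sending the batch and divide the difference by the elapsed time. On Linux that counter is exposed at /sys/class/net/<iface>/statistics/tx_bytes; "eth0" below is only a placeholder name, and the counter includes all traffic on the interface, not just my program's packets:

#include <stdio.h>

/* Read the interface's cumulative transmitted-byte counter from sysfs
   (Linux-specific; the interface name is supplied by the caller). */
static long long read_tx_bytes(const char *iface)
{
    char path[128];
    long long bytes = -1;
    FILE *f;

    snprintf(path, sizeof(path), "/sys/class/net/%s/statistics/tx_bytes", iface);
    f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%lld", &bytes) != 1)
            bytes = -1;
        fclose(f);
    }
    return bytes;
}

/* Intended use around the sending routine:

   long long before = read_tx_bytes("eth0");
   gettimeofday(&start, NULL);
   ...send the batch...
   gettimeofday(&end, NULL);
   long long after = read_tx_bytes("eth0");
   double gbps = (after - before) * 8.0 / diffTime(&start, &end) / 1e9;
*/

I am not sure whether this counter reflects what has actually gone out on the wire or only what the driver has queued, which is part of what I am asking.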
Thanks.