
I have Windows code as follows:

    LARGE_INTEGER frequency;        // ticks per second
    LARGE_INTEGER t1, t2;           // ticks
    double data_in_one_sec = 0;
    double second = 0;

    while (1) {
        QueryPerformanceFrequency(&frequency);
        // start timer
        QueryPerformanceCounter(&t1);

        udp_packet_len = recvfrom(sock, udp_packet, sizeof(udp_packet), 0,
                                  (struct sockaddr *) &addr, &addrlen);
        data_in_one_sec += (double)udp_packet_len;

        // stop timer
        QueryPerformanceCounter(&t2);
        second += ((double)(t2.QuadPart - t1.QuadPart) * 1000000.0 / frequency.QuadPart) / 1000000.0;

        if (second >= 1.0) {
            stream.udp_bitrate = data_in_one_sec / second;
            data_in_one_sec = 0;
            second = 0;
        }

        // do something with this data
        parse_udp_data(udp_packet, udp_packet_len);
    }

I am dividing the number of bytes received (accumulated from `udp_packet_len`) by the elapsed time to determine the UDP stream bitrate (bps), but the result is inaccurate for larger amounts of data. I see the same behavior with another function that measures elapsed time, clock().

Is there any way to calculate receiving UDP data bitrate more accurately?

Przemo
  • Don't count seconds, count ticks. Maybe that incremental computation destroys a lot of precision. It's awkward code, too. – usr Sep 10 '15 at 09:51
  • Also, you are just counting the recvfrom time. That can be very small. Shouldn't you just count *all* time to get the true bit rate? – usr Sep 10 '15 at 09:52

0 Answers