
I'm learning C (and Cygwin) and trying to complete a simple remote execution system for an assignment.

One simple requirement that I'm getting hung up on is: 'Client will report the time taken for the server to respond to each query.'

I've searched around and implemented other working solutions, but I always get back 0 as a result.

A snippet of what I have:

#include <stdio.h>
#include <string.h>
#include <strings.h>   /* bzero() */
#include <time.h>
#include <unistd.h>    /* read(), write(), sleep() */

for(;;)
{
    //- Reset loop variables
    bzero(sendline, 1024);
    bzero(recvline, 1024);
    printf("> ");
    fgets(sendline, 1024, stdin);

    //- Handle program 'quit'
    sendline[strcspn(sendline, "\n")] = 0;
    if (strcmp(sendline,"quit") == 0) break;

    //- Process & time command
    clock_t start = clock(), diff;
    write(sock, sendline, strlen(sendline)+1);
    read(sock, recvline, 1024);
    sleep(2);
    diff = clock() - start;
    int msec = diff * 1000 / CLOCKS_PER_SEC;

    printf("%s (%d s / %d ms)\n\n", recvline, msec/1000, msec%1000);
}

I've also tried using a float, and multiplying by 10,000 instead of dividing by 1,000, just to see if there is any glint of a value, but I always get back 0. Clearly something must be wrong with how I'm implementing this, but after much reading I can't figure it out.

--Edit--

Printout of values:

clock_t start = clock(), diff;
printf("Start time: %lld\n", (long long) start);
//process stuff
sleep(2);
printf("End time: %lld\n", (long long) clock());
diff = clock() - start;

printf("Diff time: %lld\n", (long long) diff);
printf("Clocks per sec: %d", CLOCKS_PER_SEC);

Result: Start time: 15 End time: 15 Diff time: 0 Clocks per sec: 1000

-- FINAL WORKING CODE --

#include <sys/time.h>

//- Setup clock
struct timeval start, end;

//- Start timer
gettimeofday(&start, NULL);

//- Process command
/* Process stuff */

//- End timer
gettimeofday(&end, NULL);

//- Calculate difference in microseconds
long int usec =
    (end.tv_sec * 1000000 + end.tv_usec) -
    (start.tv_sec * 1000000 + start.tv_usec);

//- Convert to milliseconds
double msec = (double)usec / 1000;

//- Print result (3 decimal places)
printf("\n%s (%.3fms)\n\n", recvline, msec);
HyperionX
  • Note: I have already read [this question](http://stackoverflow.com/questions/18436734/c-unix-millisecond-timer-returning-difference-of-0?rq=1) and the solution didn't work. – HyperionX Sep 07 '15 at 11:18
  • What are the raw `clock_t` values? `printf("%lld\n", (long long) start)` – chux - Reinstate Monica Sep 07 '15 at 11:19
  • The problem lies in your assignment: you're assigning the difference to an `int`, which will be 0 for any value less than 1. Use `float msec = float(clock()-start)/CLOCKS_PER_SEC;` instead. – Pooja Nilangekar Sep 07 '15 at 11:19
  • @chux I got a value of 46, would that mean it's actually working correctly and I'm just handling it wrong? – HyperionX Sep 07 '15 at 11:20
  • Better to post the before _and_ after time values. It is possible the time difference is below your clock tick. 2) Also post the value of `CLOCKS_PER_SEC` – chux - Reinstate Monica Sep 07 '15 at 11:21
  • Minor format idea: `"%d.%03d s"`. – chux - Reinstate Monica Sep 07 '15 at 11:26
  • @chux Just amended the post to include the clock values and the result I'm getting. It seems both start and finish are 15 (ms?), so the diff would correctly be 0, but why the same time? – HyperionX Sep 07 '15 at 11:31
  • Note: `printf("Clocks per sec: %d", CLOCKS_PER_SEC);` is presumptive as `CLOCKS_PER_SEC` is not known to be type `int`. Unless a variable or constant type is known, best to explicitly cast it wide: `printf("%lld\n", (long long) CLOCKS_PER_SEC)` or even better `printf("%jd\n", (intmax_t) CLOCKS_PER_SEC)` – chux - Reinstate Monica Sep 07 '15 at 15:25
  • Also: `long int usec = (end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec * 1000000 + start.tv_usec);` likely overflows the multiplication and subtraction. Code is getting the expected answer, but UB happened twice. Suggest `long int usec = (end.tv_sec - start.tv_sec)*1000000 + (end.tv_usec - start.tv_usec);` – chux - Reinstate Monica Sep 07 '15 at 15:31
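
A minimal sketch of the reordering suggested in the last comment, assuming the same struct timeval approach as the final code above; usleep() here just stands in for the write()/read() round trip:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    usleep(250000);                       /* stand-in for the request/response */
    gettimeofday(&end, NULL);

    /* Subtract seconds and microseconds separately before scaling, so
       neither intermediate value overflows a 32-bit long. */
    long usec = (end.tv_sec - start.tv_sec) * 1000000L +
                (end.tv_usec - start.tv_usec);
    printf("Elapsed: %.3f ms\n", usec / 1000.0);
    return 0;
}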

2 Answers


I think you misunderstand clock() and sleep().

clock() measures the CPU time used by your program, but sleep() sleeps without using any CPU time. Maybe you want to use time() or gettimeofday() instead?
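
To see the difference, a minimal, self-contained sketch (assuming a POSIX environment such as Cygwin) that times the same two-second sleep with both clock() and gettimeofday(): the CPU time comes out near zero, while the wall-clock time is roughly 2000 ms.

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    struct timeval wall_start, wall_end;

    clock_t cpu_start = clock();        /* CPU time consumed by this process */
    gettimeofday(&wall_start, NULL);    /* wall-clock time */

    sleep(2);                           /* blocks without burning CPU */

    clock_t cpu_diff = clock() - cpu_start;
    gettimeofday(&wall_end, NULL);

    long wall_usec = (wall_end.tv_sec - wall_start.tv_sec) * 1000000L +
                     (wall_end.tv_usec - wall_start.tv_usec);

    printf("CPU  time: %.3f ms\n", cpu_diff * 1000.0 / CLOCKS_PER_SEC);
    printf("Wall time: %.3f ms\n", wall_usec / 1000.0);
    return 0;
}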

Roddy
  • Thank you! I implemented `time()` and calculated the difference using `difftime()`. However, does this have the resolution for converting to ms? – HyperionX Sep 07 '15 at 11:48
  • @Blake No - `time` is just seconds, but `gettimeofday` can be more accurate, IIRC. – Roddy Sep 07 '15 at 11:50
  • Thanks a lot, just added my final code to OP for future programmers. Good to see StackOverflow was kind to me tonight haha – HyperionX Sep 07 '15 at 12:37

Cygwin means you're on Windows.

On Windows, the "current time" on an executing thread is only updated every 64th of a second (roughly 16ms), so if clock() is based on it, even if it returns a number of milliseconds, it will never be more precise than 15.6ms.

GetThreadTimes() has the same limitation.

Medinoc
  • That would not affect `GetThreadTimes()`, and if `clock()` returns so small a value, it's probably based on it rather than on the internal clock. – Medinoc Sep 07 '15 at 11:40
  • @Medinoc `timeBeginPeriod`? – user877329 Sep 07 '15 at 12:03
  • @user877329 This is for multimedia timers, but it's possible they could be used for such measuring purposes. The high-resolution Performance Counter may be more appropriate, though. – Medinoc Sep 07 '15 at 12:08
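
For reference, a minimal sketch of the Performance Counter approach mentioned in the last comment, assuming a native Win32 build with <windows.h> (under Cygwin's POSIX layer, the gettimeofday() code in the question is the usual route):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);   /* counter ticks per second */

    QueryPerformanceCounter(&start);
    Sleep(2000);                        /* stand-in for the request/response */
    QueryPerformanceCounter(&end);

    double msec = (double)(end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("Elapsed: %.3f ms\n", msec);
    return 0;
}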