
I can't figure out why the execution time of the following code snippet varies significantly between a Windows (MSVC++) virtual machine, a Linux (GCC) virtual machine, and a Mac (Xcode) physical machine.

#include <iostream>
#include <cstdio>   // for getchar()
#include <ctime>
#include <ratio>
#include <chrono>

using namespace std;
using namespace std::chrono;


int main()
{
    const int TIMES = 100;   // number of timed runs
    const int STARS = 1000;  // stars printed per run

    steady_clock::time_point t1;
    steady_clock::time_point t2;
    int totalCountMicro = 0;
    int totalCountMilli = 0;

    for (int i = 0; i < TIMES; i++) {
        t1 = steady_clock::now();
        for (int j = 0; j < STARS; j++) cout << "*";
        t2 = steady_clock::now();
        cout << endl;
        // accumulate the elapsed time of this run in both units
        totalCountMilli += duration_cast<duration<int, milli>>(t2 - t1).count();
        totalCountMicro += duration_cast<duration<int, micro>>(t2 - t1).count();
    }

    cout << "printing out " << STARS << " stars " << TIMES << " times..." << endl;
    cout << "takes " << (totalCountMilli / TIMES) << " milliseconds on average." << endl;
    cout << "takes " << (totalCountMicro / TIMES) << " microseconds on average." << endl;

    getchar();

    return 0;
}

The code above prints 1000 stars 100 times and calculates the average time taken to print 1000 stars.

The results are:

Windows virtual machine:

  • compiler: MSVC
  • 33554 microseconds
  • compiler: GCC (MinGW)
  • 40787 microseconds

Linux virtual machine:

  • compiler: GCC
  • 39 microseconds

OSX physical machine:

  • compiler: Xcode C++
  • 173 microseconds

My first thought was that the virtual machine might be the problem, but since the Linux virtual machine handled it pretty fast, I believe there is probably some other reason that I don't know.

Any thoughts or comments will be highly appreciated!

  • How about measuring the time *outside* of *both* loops (see the first sketch after these comments)? Other than that, the console and many of the Windows implementations of C++ standard stuff are very slow (another example is std::mutex) – deviantfan Mar 27 '17 at 08:45
  • @deviantfan tried to measure without the loop (only print 1000 stars one time), but still got the same difference. You mentioned Windows' implementations of C++ standard stuff are very slow; do you have any reference for this conclusion? I am suspecting the same thing, but didn't find the proof. – artecher Mar 27 '17 at 08:49
  • What virtual machine? Using MSVC 2017 in a Hyper-V machine, I get 30 ms for the run (compared to 10 ms running native). – Bo Persson Mar 27 '17 at 08:56
  • @artecher For the mutex example, a) there are some published benchmarks etc. on the internet, b) I did my own extensive testing some time ago (a rough sketch of such a test appears after these comments), and c) if it's just "is it slower yes/no" then it's pretty easy to test on your own too. ... For common usage patterns, being 70-100 times slower than std::mutex on Linux is pretty normal. ... About the console part, again pretty easy to test: writing to a file on an SSD vs. writing to the console... – deviantfan Mar 27 '17 at 09:06
  • I tested with Windows 10 64-bit on Oracle VM VirtualBox installed on a MacBook Pro. @BoPersson, you just reminded me that I made a mistake: the time on Windows was 33554 microseconds, not milliseconds. Actually all are microseconds, so the difference remains. Updated the original question. Sorry. – artecher Mar 27 '17 at 09:07
  • Well, then it appears that steady_clock on Windows is one of its better parts... better than mutex. – deviantfan Mar 27 '17 at 09:08
  • @deviantfan then with this huge performance difference, deploying a time-critical C++ program on Windows is obviously not wise... – artecher Mar 27 '17 at 09:15
  • @artecher Ah wait, I misunderstood your last comment ... previously I was thinking you realized that they are somewhat equally fast (so I wrote that apparently steady_clock is good, and the console is slow on both systems, so...). Re-reading, it appears steady_clock is bad after all... – deviantfan Mar 27 '17 at 09:17
  • @artecher And deploying a time-critical application on anything without realtime support (which includes Windows as well as most Linux) is not wise. ... If you just meant "fast", well, there's a reason most supercomputers don't run Windows :p – deviantfan Mar 27 '17 at 09:19
  • @deviantfan GCC (MinGW) on Windows is also slow. So it seems it's not the implementation of the C++ standard, but just that C++ running on Windows is slow? – artecher Mar 27 '17 at 09:29
  • @artecher MinGW (the compiler) is not the same as the library with the implementations for steady_clock, mutex etc. MinGW is using the **same** lib VS is using, the one MS made (which is different from the lib GCC on Linux is using, and that's the real difference for steady_clock etc.). – deviantfan Mar 27 '17 at 09:37
  • @deviantfan thanks for the explanation on MinGW. That makes a lot of sense. – artecher Mar 27 '17 at 09:40
  • Writing the stars to a file (`ofstream`) instead of `cout` takes the time down to 11 microseconds on my machine (see the file-output sketch after these comments). Apparently the Windows console isn't optimized for scrolling lots of text. – Bo Persson Mar 27 '17 at 10:09
  • @Bo Persson: it's ridiculously slow; I was hit by this many times when logging a lot to a console. I've no idea how fast it is on Linux. – Andriy Tylychko Mar 27 '17 at 11:16
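
For reference, a minimal sketch of deviantfan's first suggestion above: take one measurement around the whole workload instead of timing each run separately. This is illustrative code, not part of the original question.

#include <chrono>
#include <iostream>

using namespace std;
using namespace std::chrono;

int main()
{
    const int TIMES = 100;
    const int STARS = 1000;

    // one measurement around both loops, instead of one per run
    steady_clock::time_point t1 = steady_clock::now();
    for (int i = 0; i < TIMES; i++) {
        for (int j = 0; j < STARS; j++) cout << "*";
        cout << "\n";
    }
    steady_clock::time_point t2 = steady_clock::now();

    cout << "total: " << duration_cast<microseconds>(t2 - t1).count()
         << " microseconds for " << TIMES << " runs" << endl;
    return 0;
}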
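
And a rough idea of the kind of std::mutex micro-benchmark deviantfan may be describing: timing uncontended lock/unlock pairs in a loop. The iteration count is arbitrary, and this only measures the uncontended fast path.

#include <chrono>
#include <iostream>
#include <mutex>

using namespace std;
using namespace std::chrono;

int main()
{
    mutex m;
    const int N = 1000000;  // arbitrary iteration count

    steady_clock::time_point t1 = steady_clock::now();
    for (int i = 0; i < N; i++) {
        m.lock();    // uncontended lock/unlock, the common fast path
        m.unlock();
    }
    steady_clock::time_point t2 = steady_clock::now();

    cout << N << " lock/unlock pairs took "
         << duration_cast<microseconds>(t2 - t1).count()
         << " microseconds" << endl;
    return 0;
}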
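
Finally, a sketch of Bo Persson's file-output comparison: the same star-printing workload, but written to an ofstream instead of cout. The file name "stars.txt" is just illustrative.

#include <chrono>
#include <fstream>
#include <iostream>

using namespace std;
using namespace std::chrono;

int main()
{
    const int TIMES = 100;
    const int STARS = 1000;

    ofstream out("stars.txt");  // illustrative file name

    steady_clock::time_point t1 = steady_clock::now();
    for (int i = 0; i < TIMES; i++) {
        for (int j = 0; j < STARS; j++) out << "*";
        out << "\n";
    }
    steady_clock::time_point t2 = steady_clock::now();

    cout << "average per run: "
         << duration_cast<microseconds>(t2 - t1).count() / TIMES
         << " microseconds" << endl;
    return 0;
}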

0 Answers