6

When I do a fresh compilation of my project, which includes 10+ open-source libs, it takes about 40 minutes (on normal hardware).

Question: where are my bottlenecks, really? Hard-drive seeking or CPU GHz? I don't think multi-core would help much, correct?

--Edit 1--
my normal hardware = i3 overclocked to 4.0 GHz, 8 GB 1600 MHz DDR3, and a 2 TB Western Digital drive

--Edit 2--
my code = 10%, libs = 90%. I know I don't have to build everything every time, but I would like to find out how to improve compilation performance, so that when buying a new PC for development I can make a smarter choice.

--Edit 3--
cc = Visual Studio (damn)

c2h2
  • 11,911
  • 13
  • 48
  • 60

5 Answers

4

You're wrong: multi-core brings a tremendous speed-up, right up until the moment your hard drive gives up, actually :)

Proof by example: distcc, which brings distributed builds (my builds use about 20 cores in parallel; they're actually bound by the local preprocessing phase).

As for the real bottleneck, it has to do with the #include mechanism. Languages with modules compile much faster...
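To make that concrete, here is a minimal sketch (hypothetical file name) of why #include dominates: inclusion is textual, so every translation unit re-expands and re-parses the same headers from scratch.

    // math_utils.h -- every .cpp that includes this header re-parses
    // everything below from scratch, once per translation unit. For even
    // a couple of standard headers, the preprocessed output easily runs
    // to tens of thousands of lines.
    #pragma once
    #include <vector>
    #include <algorithm>

    double mean(const std::vector<double>& values);

You can see the expansion yourself with the compiler's preprocess-only mode (cl /P with MSVC, g++ -E with GCC).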

Matthieu M.
  • 287,565
  • 48
  • 449
  • 722
4

40 minutes to build is most likely (in fact, at 40 minutes I'd go as far as saying almost definitely) caused by poor #include usage. You are including things that don't need to be included; they may only need forward declarations.

Tidying up your code will make a HUGE difference. I know it's a lot of work, but you will be surprised. At one company I worked at, a library that took over 30 minutes to build was optimised down to a 3-minute build in just over a week, by making sure that all #includes were needed and by adding forward declarations instead of #including. This library was significantly over a million lines of code, to give you an idea...
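For instance (hypothetical names), a header that only uses a type by pointer or reference needs just a forward declaration, not the full definition:

    // widget.h -- before: #include "big_dependency.h" dragged that header
    // (and everything it transitively includes) into every file that
    // includes widget.h. After: a forward declaration suffices, because
    // the header only uses BigDependency by reference and pointer.
    #pragma once

    class BigDependency;  // forward declaration instead of #include

    class Widget {
    public:
        void attach(BigDependency& dep);
    private:
        BigDependency* dep_ = nullptr;
    };

    // widget.cpp -- only the implementation file pays for the full include.
    #include "widget.h"
    #include "big_dependency.h"

    void Widget::attach(BigDependency& dep) { dep_ = &dep; }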

Goz
  • 61,365
  • 24
  • 124
  • 204
2

Since VS 2010, VS can optionally use multiple cores when compiling a single project. It can also compile multiple projects in parallel. However, the parallel speed-up doesn't seem to be significant in my experience; Xcode, for example, is much better at doing parallel builds.

Fortunately, you don't have to rebuild the open-source libs every time, right? You could build them once, store the .lib files in version control, and use those for subsequent builds.

Have you tried precompiled header files for your own code? This can yield a massive speedup.
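As a rough sketch of the classic MSVC setup (file names follow the default stdafx convention; the exact project settings vary by VS version):

    // stdafx.h -- put big, rarely-changing includes here.
    #pragma once
    #include <windows.h>
    #include <vector>
    #include <string>
    #include <map>

    // stdafx.cpp -- compiled once with /Yc"stdafx.h" to build the .pch;
    // all other .cpp files are compiled with /Yu"stdafx.h" and must begin
    // with #include "stdafx.h". (In the IDE: C/C++ -> Precompiled Headers;
    // combine with /MP to use multiple cores.)
    #include "stdafx.h"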

Frederik Slijkerman
  • 6,471
  • 28
  • 39
  • yes, I know from CS lesson 1 that I don't have to build everything; just being a paranoid noob who wants to do a fresh build of everything for each new release. – c2h2 Feb 01 '11 at 14:01
  • @c2h2: Then you'll be a paranoid noob who wastes money on hardware and spends a lot of time sitting around bored, waiting for your code to compile. Seriously, take advantage of your compiler's precompiled headers support: it's there for a reason, and it works just fine. That library code is *not* going to change. – Cody Gray - on strike Feb 01 '11 at 14:07
  • @c2h2: How often do you build a new release? Is a 40 minute build time a huge deal, even if you do it once a day? (Did you mean something other than every new release?) – Fred Nurk Feb 01 '11 at 14:10
  • Wrong. Files are compiled separately using the /MP switch. This does not answer the question. – Joe Jul 18 '12 at 07:25
  • That flag is only officially supported (in the IDE) since VS 2010. – Frederik Slijkerman Jul 18 '12 at 07:27
2

Multi-core compilation will help, tremendously in most cases.

You'll have to analyze your projects, and the time spent in each phase, in order to determine where the bottlenecks are.

In typical large C++ projects, the process is CPU bound first, then disk bound. If it's the other way around, you're probably in header-dependency hell.

There are actually a ton of ways to reduce compile times and dependencies in your projects. The best single reference I know of is by Lakos:

http://www.amazon.com/Large-Scale-Software-Design-John-Lakos/dp/0201633620/ref=sr_1_1?ie=UTF8&qid=1296569079&sr=8-1

It's one of the most important and practical C++ books I've read.

You can typically reduce compile times dramatically (e.g., over 40x faster if you take it very seriously), but it may take a lot of work/time to correct an existing codebase.
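One classic insulation technique from that book is the pimpl idiom; here is a minimal sketch with hypothetical names:

    // parser.h -- clients see only this. The heavy implementation details
    // (and their #includes) live entirely in parser.cpp, so changing them
    // never forces clients of Parser to recompile.
    #pragma once
    #include <memory>
    #include <string>

    class Parser {
    public:
        Parser();
        ~Parser();  // must be defined in the .cpp, where Impl is complete
        bool parse(const std::string& input);
    private:
        struct Impl;                  // defined only in parser.cpp
        std::unique_ptr<Impl> impl_;
    };

    // parser.cpp
    #include "parser.h"
    // #include "heavy_grammar_tables.h"  // hypothetical heavy dependency,
    //                                    // kept out of the header on purpose

    struct Parser::Impl {
        int state = 0;  // stand-in for expensive-to-compile members
    };

    Parser::Parser() : impl_(std::make_unique<Impl>()) {}
    Parser::~Parser() = default;
    bool Parser::parse(const std::string& input) { return !input.empty(); }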

justin
  • 104,054
  • 14
  • 179
  • 226
1

When you compile from scratch, yes, it will take longer. Use the 40-year-old technology of make, which VS includes as project management, to compile only what needs to be compiled after the first run.

That said, C++'s translation unit model plus extensive use of templates can be a significant practical problem.

Fred Nurk
  • 13,952
  • 4
  • 37
  • 63