
I am used to putting header guards around my objects like:

#ifndef SOMETHING_H
#define SOMETHING_H

class Something {
...
};
#endif

but I have been given code where they also do:

#ifndef SOMETHING_H
#include "something.h"
#endif

for every include. Supposedly, this is better. Why? Is this redundant with the guards already inside the header?
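
To be explicit about what I think is going on: the check at the include site tests the very same macro that something.h defines for itself, so once the header has been seen in a translation unit, the preprocessor can skip the #include line without opening the file:

// consumer file
#ifndef SOMETHING_H   // defined by the first inclusion of something.h
#include "something.h"
#endif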

Chad Befus
  • possible duplicate of [Are redundant include guards necessary?](http://stackoverflow.com/questions/2233401/are-redundant-include-guards-necessary) The answer to this question is relevant to you. – Serdalis Nov 06 '13 at 00:07
  • to me it would seem that both are exactly the same; I would have thought that declaring it in the header file would be better, since it will always be protected, whereas outside of the header file it can easily be forgotten by a programmer – Matthew Pigram Nov 06 '13 at 00:07
  • @Serdalis: You are correct, this is a duplicate of that (which itself is a duplicate of http://stackoverflow.com/questions/1021357/wrapping-includes-in-ifndefs-adds-any-value). Apparently I need to work on my search abilities. Thanks. – Chad Befus Nov 06 '13 at 00:15
  • @MatthewPigram: I also thought the implementation file was a bad place to put them, but as per the other questions, it does improve compilation time. – Chad Befus Nov 06 '13 at 00:16

4 Answers


The thinking behind it is that the preprocessor will not need to open the header file and read its contents to determine that the header has already been included, thus saving some time during compilation. However, most compilers these days are already smart enough to spot multiple inclusions of the same file and ignore subsequent occurrences.
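
As a rough sketch of where the saving comes from (file names as in the question; this assumes the external check uses the same SOMETHING_H macro the header defines):

// consumer.cpp

// Plain include: the preprocessor must locate and open something.h
// every time, even when the internal guard makes the re-read a no-op.
#include "something.h"

// External guard: once SOMETHING_H is defined, the file is never
// opened again, so the lookup and read are skipped entirely.
#ifndef SOMETHING_H
#include "something.h"
#endif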

Praetorian

This is discussed in pretty good detail here:
http://c2.com/cgi/wiki?RedundantIncludeGuards

Here are the highlights:

  • Yes, this is redundant, but on some compilers it may be faster, because the preprocessor avoids opening the header file at all when it doesn't need to.
  • "Good compilers make this idiom unnecessary. They notice the header is using the include-guard idiom (that is, that all non-comment code in the file is bracketed with the #ifndef). They store an internal table of header files and guard macros. Before opening any file they check the current value of the guard and then skip the entire file."
  • "Redundant guards have several drawbacks. They make include sections significantly harder to read. They are, well, redundant. They leak the guard name, which should be a secret implementation detail of the header. If, for example, someone renames the guard they might forget to update all the places where the guard name is assumed. Finally, they go wrong if anyone adds code outside of the guard. And of course, they are just a compile-time efficiency hack. Use only when all else fails."
Andrew Clark

It's good to have guards in header and class-definition files so that, during compilation, a header reached through more than one path is not read again (for example, if a.cpp includes both a.h and b.h, and b.h also includes a.h, then a.h will not be processed a second time).

The case that worries me most in your question is two different files defining the same guard macro name. That can prevent the compiler from seeing constants, classes, types, etc. that it needs to see, because it will "believe" that the file was "already read".

Long story short, use a different #ifndef macro in each file to prevent confusion.
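
A hypothetical sketch of the collision (list.h, map.h, and UTILS_H are invented names):

// list.h
#ifndef UTILS_H
#define UTILS_H
struct List { /* ... */ };
#endif

// map.h -- accidentally reuses the guard name from list.h
#ifndef UTILS_H
#define UTILS_H
struct Map { /* ... */ };
#endif

// main.cpp
#include "list.h"   // defines UTILS_H
#include "map.h"    // UTILS_H already defined: the whole header is skipped
List l;             // fine
Map m;              // error: 'Map' was not declared in this scope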

Hypino
  • There is only one #define for any constant; it is in the .h file. Everywhere else, the guards are around the include statement. – Chad Befus Nov 06 '13 at 00:35

The purpose of doing this is to save on compile time. When the compiler sees #include "something.h", it has to go out and fetch the file. If it does that ten times and the last nine all basically amount to:

#if 0
...
#endif

then you're paying the cost of finding the file and fetching it from disk nine times for no real benefit. (Technically speaking, the compiler can pull tricks to try and reduce or eliminate this cost, but that's the idea behind it.)

For small programs, the savings probably aren't very significant, and there isn't much benefit to doing it. For large programs consisting of thousands of files, it isn't uncommon for compilation to take hours, and this trick can shave off a substantial amount of time. Personally, it's not something I would do until compilation time started becoming a real issue, and like any optimization I would look carefully at where the real costs are before running around making a bunch of changes.

Ben S.
  • This would have been a good answer ~ 5 years ago. Nowadays, compilers take care of this. – Konrad Rudolph Nov 06 '13 at 00:16
  • @KonradRudolph You may very well be right; I don't work with enough different compilers to say how common optimizations to this are. I suspect that many programmers who use this technique picked it up years ago and have simply stuck with it without bothering to check whether it was still necessary. It's also worth pointing out that there are plenty of companies out there using compilers that are more than five years old, either because they're targeting an older platform or because upgrading didn't seem worth the costs. – Ben S. Nov 06 '13 at 00:22