
fluffy

Posted on • Updated on • Originally published at beesbuzz.biz

#ifndef guards vs. #pragma once

After getting into an extended discussion about the supposed performance tradeoff between #pragma once and #ifndef guards versus the question of correctness (I was taking the side of #pragma once, based on some relatively recent indoctrination to that end), I decided to finally test the theory that #pragma once is faster because the compiler doesn't have to try to re-#include a file that has already been included.

For the test, I automatically generated 500 header files with complex interdependencies, and had a .c file that #includes the first 100 of them (each header in turn #includes every header generated before it, so the same files get re-included many times over). I ran the test three ways: once with just #ifndef guards, once with just #pragma once, and once with both. I performed the test on a fairly modern system (a 2014 MacBook Pro running OS X, using Xcode's bundled Clang, with the internal SSD).

First, the test code:

#include <stdio.h>

// Define one (or both) of these, or pass -D on the compiler command line,
// to choose which guard style the generated headers use:
//#define IFNDEF_GUARD
//#define PRAGMA_ONCE

int main(void)
{
    int i, j;
    FILE* fp;

    /* Generate 500 headers; header i #includes every header before it. */
    for (i = 0; i < 500; i++) {
        char fname[100];

        snprintf(fname, 100, "include%d.h", i);
        fp = fopen(fname, "w");

#ifdef IFNDEF_GUARD
        fprintf(fp, "#ifndef _INCLUDE%d_H\n#define _INCLUDE%d_H\n", i, i);
#endif
#ifdef PRAGMA_ONCE
        fprintf(fp, "#pragma once\n");
#endif

        for (j = 0; j < i; j++) {
            fprintf(fp, "#include \"include%d.h\"\n", j);
        }

        fprintf(fp, "int foo%d(void) { return %d; }\n", i, i);

#ifdef IFNDEF_GUARD
        fprintf(fp, "#endif\n");
#endif

        fclose(fp);
    }

    /* Generate main.c, which directly #includes the first 100 headers
       and calls each of their foo functions. */
    fp = fopen("main.c", "w");
    for (int i = 0; i < 100; i++) {
        fprintf(fp, "#include \"include%d.h\"\n", i);
    }
    fprintf(fp, "int main(void){int n = 0;");
    for (int i = 0; i < 100; i++) {
        fprintf(fp, "n += foo%d();\n", i);
    }
    fprintf(fp, "return n;}");
    fclose(fp);
    return 0;
}
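For reference, here is what a generated header looks like with both mechanisms enabled - this is include2.h as emitted by the generator above:

#ifndef _INCLUDE2_H
#define _INCLUDE2_H
#pragma once
#include "include0.h"
#include "include1.h"
int foo2(void) { return 2; }
#endif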

And now, my various test runs:

# gcc pragma.c -DIFNDEF_GUARD
# ./a.out
# time gcc -E main.c  > /dev/null

real    0m0.164s
user    0m0.105s
sys 0m0.041s
# time gcc -E main.c  > /dev/null

real    0m0.140s
user    0m0.097s
sys 0m0.018s
# time gcc -E main.c  > /dev/null

real    0m0.193s
user    0m0.143s
sys 0m0.024s
# gcc pragma.c -DPRAGMA_ONCE
# ./a.out
# time gcc -E main.c  > /dev/null

real    0m0.153s
user    0m0.101s
sys 0m0.031s
# time gcc -E main.c  > /dev/null

real    0m0.170s
user    0m0.109s
sys 0m0.033s
# time gcc -E main.c  > /dev/null

real    0m0.155s
user    0m0.105s
sys 0m0.027s
# gcc pragma.c -DPRAGMA_ONCE -DIFNDEF_GUARD
# ./a.out
# time gcc -E main.c  > /dev/null

real    0m0.153s
user    0m0.101s
sys 0m0.027s
# time gcc -E main.c  > /dev/null

real    0m0.181s
user    0m0.133s
sys 0m0.020s
# time gcc -E main.c  > /dev/null

real    0m0.167s
user    0m0.119s
sys 0m0.021s
# gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include/c++/4.2.1
Apple LLVM version 8.1.0 (clang-802.0.42)
Target: x86_64-apple-darwin17.0.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
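To summarize, here are the real (wall-clock) times from the three runs of each configuration:

Configuration           Run 1     Run 2     Run 3
#ifndef guards only     0.164s    0.140s    0.193s
#pragma once only       0.153s    0.170s    0.155s
both                    0.153s    0.181s    0.167s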

As you can see, the versions with #pragma once were indeed slightly faster to preprocess than the #ifndef-only one, but the difference was negligible, and would be far overshadowed by the time spent actually building and linking the code. Perhaps with a large enough codebase it might lead to a difference of a few seconds in build times, but between modern compilers being able to optimize #ifndef guards (most detect the idiom and skip re-reading the file entirely), OSes having good disk caches, and ever-increasing storage speeds, the performance argument seems moot, at least on a typical developer system in this day and age. Older and more exotic build environments (e.g. headers hosted on a network share, building from tape, etc.) may change the equation somewhat, but in those circumstances it seems more useful to simply make a less fragile build environment in the first place.

The fact of the matter is that #ifndef has standardized, well-defined behavior, whereas #pragma once does not, and #ifndef also handles weird filesystem and search-path corner cases that can thoroughly confuse #pragma once (for example, the same header reached via different paths, symlinks, or hardlinks may be treated as two different files, or two different files as the same one), leading to incorrect behavior that the programmer has no control over. The main problem with #ifndef is programmers choosing bad names for their guards (name collisions and so on), but even then the consumer of an API can override a poor name using #undef - not a perfect solution, perhaps, but a possible one - whereas #pragma once offers no recourse if the compiler is erroneously culling an #include.
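To illustrate that escape hatch, here's a hypothetical scenario (the file and guard names are made up) where two third-party headers accidentally collide on a guard name and the consumer works around it with #undef:

/* vendor_a.h: uses a generic guard name */
#ifndef CONFIG_H
#define CONFIG_H
int vendor_a_setup(void);
#endif

/* vendor_b.h: accidentally reuses the same guard name */
#ifndef CONFIG_H
#define CONFIG_H
int vendor_b_setup(void);
#endif

/* consumer.c: without the #undef, vendor_b.h would expand to nothing */
#include "vendor_a.h"
#undef CONFIG_H
#include "vendor_b.h"

With #pragma once there is no equivalent knob: if the compiler wrongly decides two files are the same (or fails to see that they are), the consumer can't do anything about it.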

Thus, even though #pragma once is demonstrably (slightly) faster, I don't agree that this in and of itself is a reason to use it over #ifndef guards.

Top comments (1)

Josh Maxwell (cctechwiz)

Thanks for the concrete comparisons.

I agree that the most worrisome thing about #pragma once is that it is not standardized, and will probably never be. I definitely prefer the way #pragma once looks in code, but can't justify using something that I have no control over in the long run.