
The C Preprocessor vs D

Back when C was invented, compiler technology was primitive.
Installing a text macro preprocessor onto the front end was a
straightforward and easy way to add many powerful features.
The increasing size and complexity of programs has shown
that these features come with many inherent problems.
D doesn't have a preprocessor, but
D provides a more scalable means to solve the same problems.

The C Preprocessor Way

C and C++ rely heavily on textual inclusion of header files.
This frequently results in the compiler having to recompile tens of thousands
of lines of code over and over again for every source file, an obvious
source of slow compile times. What header files are normally used for is
more appropriately done with a symbolic, rather than textual, insertion.
This is done with the import statement. Symbolic inclusion means the compiler
just loads an already compiled symbol table. The need for macro "wrappers" to
prevent multiple #inclusion, funky #pragma once syntax, and incomprehensible
fragile syntax for precompiled headers is simply unnecessary and irrelevant to
D.

The D Way
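A minimal sketch of symbolic inclusion, using the standard std.stdio module:

```d
// import loads std.stdio's already compiled symbol table;
// no header text is re-parsed for every source file, and no
// guard macros are needed.
import std.stdio;

void main()
{
    writeln("hello");
}
```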

The C Preprocessor Way

This is used in C to adjust the alignment for structs.

The D Way

For D classes, there is no need to adjust the alignment (in fact, the
compiler is free to rearrange the data fields to get the optimum layout,
much as the compiler will rearrange local variables on the stack frame).
For D structs that get mapped onto externally defined data structures,
there is a need, and it is handled with:
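A sketch using the align attribute (the struct and field names are illustrative):

```d
// align (1): packs the following members on 1-byte boundaries,
// matching an externally defined byte layout.
struct Packed
{
  align (1):
    byte tag;
    int  value;
}

// All members have alignment 1, so no padding is inserted.
static assert(Packed.sizeof == 5);
```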

Preprocessor macros add powerful features and flexibility to C. But
they have a downside:

Macros have no concept of scope; they are valid from the point of definition
to the end of the source. They cut a swath across .h files, nested code, etc. When
#include'ing tens of thousands of lines of macro definitions, it becomes
problematic to avoid inadvertent macro expansions.

Macros are unknown to the debugger. Trying to debug a program with
symbolic data is undermined by the debugger only knowing about macro
expansions, not the macros themselves.

Macros make it impossible to tokenize source code in isolation, as an earlier
macro definition can arbitrarily change how later text is tokenized.

The purely textual basis of macros leads to arbitrary and inconsistent usage,
making code using macros error prone. (Some attempt to resolve this was
introduced with templates in C++.)

Macros are still used to make up for deficits in the language's expressive
capability, such as for "wrappers" around header files.

Here's an enumeration of the common uses for macros, and the
corresponding feature in D:

The C Preprocessor Way

It's common in a function to have a repetitive sequence
of code to be executed in multiple places. Performance
considerations preclude factoring it out into a separate
function, so it is implemented as a macro. For example,
consider this fragment from a byte code interpreter:

A related use is compile-time assertions. The usual C idiom is a macro
that triggers a compile-time semantic error, such as a division by zero
in a constant expression, if the condition evaluates to false. The
limitations of this technique are a sometimes very confusing error
message from the compiler, along with an inability to use such a
static assertion outside of a function body.

The D Way

D has the static assert,
which can be used anywhere a declaration
or a statement can be used. For example:
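A minimal sketch showing static assert at each of those levels:

```d
// At module scope, outside any function body.
static assert(int.sizeof == 4);

struct S
{
    // Inside an aggregate declaration.
    static assert(byte.sizeof == 1);
}

void main()
{
    // As a statement, with an optional custom message.
    static assert(1 + 1 == 2, "arithmetic is broken");
}
```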