You already know what your target architecture is (or at least you should)

How do I determine programmatically what processor architecture
my program was compiled for?
I want the x86 version of my program to behave differently from the
ia64 version.
Is there some API I can call?

But why do you need an API for this?
You already know what your target architecture is
because you compiled it yourself!

It so happens that the Microsoft Visual C++ compiler
defines several symbols for you automatically
(assuming you're not running
the compiler in "strict ANSI compliance mode").
If you're willing to tie yourself to the Microsoft Visual C++ compiler,
you can use those symbols.

If you don't want to tie yourself to a particular compiler,
you'll have to pass that information yourself.
For example, you could have your x86 makefile pass
-DBUILDING_FOR_x86 in the compiler flags,
while having the ia64 makefile pass
-DBUILDING_FOR_ia64, and so on.
This is the approach used by the build utility
that comes with the Windows DDK:
The DDK's makefile system defines a variety
of symbols that programs (and other makefiles) can use to alter their
behavior depending on the compilation environment.

As we saw in the earlier article,
you can also use the _WIN64 symbol to detect that
the target platform is 64-bit Windows.

But the point is that this is all something you control yourself.
You're the one who is compiling the program.
You know what your target architecture is.
No need to ask somebody to tell you something that is already
entirely under your own control.

A disassembler could probably guess. Just look for lots of AMD64-specific opcodes in the code segment. As I understand it, they’re gibberish constants to an x86.

[If a disassembler doesn’t even know what instruction set to decode it’s already in a major world of hurt. I could give you an ia64 byte stream – good luck with that. Presumably the disassembler has an external cue – you may have told it explicitly, it may have inferred it from the module header, it may have a live process to extract context from. -Raymond]

Most AMD64 instructions are not gibberish when interpreted as x86 code, because AMD64 mostly reuses legacy opcodes (for example, the one-byte forms of INC and DEC, or the BCD arithmetic opcodes). Of course this does not account for in-code constants, but I doubt it’s an easy task to distinguish the x86 and AMD64 instruction sets just by reading the byte stream. Reading the PE header is certainly easier.

If that’s the case – is there an API for reading the PE header and telling what architecture the program runs on? If so, then the answer to the original query is actually yes.

Vorn

[Even if there were such a function, it would be overkill. You already know the answer. You don’t need to ask somebody to tell you something you already know. It’s like having a function that tells you what language your program was written in. You already know what language your program was written in since you wrote it in that language. -Raymond]

Developers raised on Project Properties often don’t understand the build process; it’s handled automagically by VS. The command-line options and defines they see in Properties are just gobbledegook. It’s not clear to them whether something happens at VS autocomplete, compile, link, or run time; they just know it happens.

By hiding a lot of the details, VS makes it possible for these people to overcome their defective/incomplete models of the programming environment and write C++ that kinda sorta works. Rejoice, for this provides much blog material for The Old New Thing.

What if you want to know at runtime what architecture a dll was compiled against?

Obviously the compiler target is inherently compiler-specific. But some people may find it useful to know what instructions may be contained in a DLL.

This may be especially true for people who wish to load some plugin DLLs but don’t want any non-64-bit ones in the mix to trigger WOW64 (this is therefore less a runtime query and more of a file-based one, but the principle is the same: determining the ISA, pointer-length, and endianness assumptions of some code you did not compile).

Obviously the answer will, in many cases, be “unless you know how it was built, you can’t without trying it!”

[Are you saying somebody took your x86 library and linked it into an x64 binary? How does that work? -Raymond]

Most of you are completely missing the point. This isn’t about analyzing some other module to see what language it is written in, or what architecture it targets. This is about when you know the target architecture of the code you are currently writing.

Look at the examples — it’s so obvious. The target architecture is decided when you compile something. The compiler defines preprocessor macros (X86, AMD64, whatever) precisely so your code can choose different behaviors for different targets. Your code ALREADY knows (or rather, you, as a competent developer, should realize that these macros are available to you) the target architecture!

This is like sizeof(INT_PTR). It’s decided when the compiler runs, and the build of the compiler decides the value. sizeof(INT_PTR), no matter what its value, will not change while your program is running. That’s why sizeof is an intrinsic language keyword, not a C runtime function like GetSizeOfIntPtr().

This has NOTHING to do with whether you are using VC++, Notepad, or even whether you’re developing on Windows or a completely different platform!

The post was not about disassemblers, or any other software where one module analyzes a different module. The post was about a module choosing its own behavior, based on the target architecture.

I’d like to take the opportunity to discourage using the ifdefs from Raymond’s article more than once in an app. If you use them to set capability-style defines instead, then adding new architectures and capabilities will be a lot easier, and the platform-specific code will be smaller and more localized.

My guess for the original question is that this is some sort of optimization question – you need different code paths for an MP4 encoder on x86-64 vs. x86 if you want the best speed. The alternative, behavioral changes, makes no sense at all.

A number of people asked whether you can tell what architecture a module was built for by looking at the header. The answer is "Of course you can". It’s specified in the "Machine" field of the IMAGE_FILE_HEADER structure.

Some people, like id software, check endian-ness at run-time, too. It’s kind of pointless, and they then have to use indirect calls for no good reason. I don’t think Visual Studio made them that stupid, though.

While it’s true that for unmanaged code you don’t need this, in the managed world you might. A .NET 2.0 assembly with the "MSIL" target architecture (the "Any CPU" setting in Visual Studio 2005) will run as x86 on an x86 OS, as native x64 on an x64 OS, and as native ia64 on an Itanium OS.

So here the question of how to determine the architecture at runtime is a valid one. If all you care about is 32 vs 64 bit it’s easy: just check the value of System.IntPtr.Size. But if you want to tell the difference between x64 and ia64, you’re out of luck with the .Net BCL. As far as I know the only way to do it would be to use PInvoke to call GetSystemInfo or GetNativeSystemInfo.

The same technique would apply to native code as well, a combination of sizeof(void*) and Get(Native)SystemInfo would do the trick. But as Raymond pointed out, there’s no reason to.

[Are you saying somebody took your x86 library and linked it into an x64 binary? How does that work? -Raymond]

I meant something along the lines of:

I have an app which is 64-bit. It has a directory where plugin DLLs can be dumped. I wish to call LoadLibrary() on all of these, except where I can know beforehand that the library is 32- or 64-bit capable. I do not wish to simply call LoadLibrary and have it fail (since I am not sure I will be able to determine whether the failure was due to the DLL being fundamentally unsound or simply the wrong bitness). Say I want to dump some DLLs in but load only IA64-compiled ones instead of AMD64 ones…

I know this is a contrived example which would be easily solved by separate directories or a naming convention, but the concept is rational: to want to know, before invoking LoadLibrary on a DLL, what ISA it targets (and in the case of AMD64 vs. IA64 it is not even a question of bitness).

I wholly agree that the question as framed by Raymond (since he certainly knows how to be exact) is *not* asking this, since it explicitly says ‘my program’, hence the reasonable assumption that the source and/or linkage is entirely known at compilation and under your control.

What I am talking about is: can you know, from a DLL, whether it *definitely cannot* be loaded via LoadLibrary, using a system call or simple parsing of the header?

The answer ‘no’ is perfectly acceptable – I just thought it was a reasonable question.

Coming more from a managed, high level of introspection background I find the question an interesting one.

[Well, okay, but that’s a different problem from the one posed in this article. -Raymond]

Yeah, it’s pretty common for a video player to have different decoding routines for different instruction sets (even if they’re not actually written in assembly). That way you can distribute one binary with all of them and choose the most appropriate one at run time, rather than having to distribute several different versions of the application, one for each processor type.

This is a good example of why you need to state your goal when asking the question though, because the easy answer (compile time preprocessor checks) won’t work for you.

"Visual C++ would be fully justified in defining these symbols in even the strictest of ISO/ANSI compliant modes. GCC and Intel C++ have similar macros defined in their strict modes."

It would be justified, but if the purpose of the ISO/ANSI strict mode is to make sure the code would compile anywhere, leaving no room for doubt, then letting these macros be defined would violate the purpose of that mode.

Would not. The entire meaning of the ISO/ANSI statement that the implementation gets to use those kinds of identifiers is that the implementation gets to use those kinds of identifiers.

A program that inspects those identifiers will almost surely be a non-strictly-conforming program, but that’s a nearly irrelevant statement about the program, not about the implementation. Strictly conforming programs are useless for practical purposes even if they’re longer than 5 lines.

Re-reading the blog entry, the question Mr. Chen quotes and the problem Mr. Chen says is being solved don’t properly add up.

The quote asks what architecture it was compiled for. It does not say what it was compiled on. I have a 32-bit processor but can compile 64-bit code because the compiler allows me to; I just can’t test it myself.

Mr. Chen says “This person wants the program to detect whether it was compiled with an x86 compiler, an ia64 compiler, an amd64 compiler, or whatever.” – notice the compiled with.

.NET 2.0 appears to support a ProcessorArchitecture enumeration.

[By “compiled with an XYZ compiler” I mean “compiled with a compiler that generates code intended to run on an XYZ system”. -Raymond]

This sounds an awful lot like arguing over semantics – I read the quote to mean a compiler that targets x86, a compiler that targets ia64, a compiler that targets amd64, or whatever; in which case the argument stands.

If you are compiling for a target platform, you know the target platform you are compiling for – and if you don’t, then your problems are far worse than any simple API call can solve.

The kernel actually prohibits you from calling NtMapViewOfSection to map a 32-bit image section into a 64-bit process. This restriction is completely arbitrary and pointless, especially since 64-bit code in a 32-bit process is allowed to map image sections of either type.

It should be user mode’s problem whether something "makes sense"; the kernel should let user mode do whatever it wants that doesn’t violate system integrity.