Introduction

This article is the complement to Dynamic TEXT Section Image Verification. It will demonstrate detecting hardware faults or unauthorized patches; back patching the executable to embed the expected hash value of the .text section; and repairing the effects of hostile code (for example, an unauthorized binary patcher).

The ideas presented in this article work equally well whether the executable was patched on disk or in memory. However, the self-repair occurs in memory. In the context of Reverse Engineering and Patching, Kris Kaspersky (no relation to Kaspersky Labs namesake Eugene Kaspersky, a former KGB Cryptologist) refers to this as Online Patching in his book Hacker Disassembling Uncovered.

The samples will use a flat GZip'd file for storing a copy of the unaltered .text section. As Garth Lancaster points out, the reader should explore using executable resources to embed the hash or archived .text section. An example can be found in Adrian Cooper's Adding and Extracting Binary Resources.

The code presented in this article was successfully tested on Windows 2000, Windows XP, Windows Server 2003, and Windows Vista. Many thanks to Tim Deveaux and Joergen Sigvardsson for their assistance in testing the code against Windows Vista. Note that a standard user account successfully executed the demonstrated code under Windows Vista. See the 'Windows Vista Compatibility' Section at the end of the article for a discussion.

Downloads

There are eight downloads available with this article. Loosely speaking, the following concepts are introduced:

Self Healing 1 - Base Line (walking the EXE header)

Self Healing 2 - Hashing the .text section

Self Healing 3 - Self Healing 2 with Back Patching

Self Healing 4 - Extracts and Compresses the .text section

Self Healing 5 - Self Healing 4 with Back Patching

Self Healing 6 - Archived .text section Restoration, Back Patching

Self Healing 7 - Full Demonstration (Tampering and Healing)

RelExe - A Release Build Executable of Self Healing 7

Tools

Tool requirements for this article are the same as those in Dynamic TEXT Section Image Verification, though this article focuses less on previously demonstrated correctness. However, the GZip compression and Gunzip decompression routines from the Crypto++ library warrant a closer look.

The author's copy of WinZip 11.0 (Build 7313) claims the created archive is not valid (since the archive does not have a .gz extension). It appears WinZip relies solely on the file name extension. As an alternative, those who have WinRAR installed should find it a suitable replacement which works properly as it examines the file's header.

For those who are interested in other C++ Cryptographic libraries, please see Peter Gutmann's Cryptlib or Victor Shoup's NTL.

SHA Hash Function

This article will use SHA-224. SHA-224 is a member of the SHA-2 family of hashes, currently recommended by NIST. The SHA-2 family produces digests of at least 160 bits, which is the current best practice. In the case of SHA-224, the digest is 224 bits (28 bytes) in length.
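
The digest-length arithmetic can be captured in a tiny self-contained sketch. The constants below are defined locally and merely mirror Crypto++'s CryptoPP::SHA224::DIGESTSIZE, so the fragment stands alone without the library:

```cpp
#include <cassert>
#include <cstddef>

// Digest sizes in bytes for two SHA-2 family members (bits / 8). The
// SHA224 constant mirrors CryptoPP::SHA224::DIGESTSIZE; it is reproduced
// here so the sketch compiles without Crypto++.
const std::size_t SHA224_DIGESTSIZE = 224 / 8;  // 28 bytes
const std::size_t SHA256_DIGESTSIZE = 256 / 8;  // 32 bytes

// The back patched expected hash therefore occupies a 28 byte array:
//   BYTE cbExpectedImageHash[ SHA224_DIGESTSIZE ];
```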

Self Integrity Checking

Computer viruses have employed integrity checking in the past, including the use of Hamming Codes to attack breakpoints and to correct errors; the Yankee Doodle family of viruses, which appeared in 1989, is one example. Self Integrity Checking has also been a topic studied in academic circles. For an example, see J. Giffin, M. Christodorescu, and L. Kruger, Strengthening Software Self-Checksumming via Self-Modifying Code.

Microsoft offers a digital signature scheme for .NET assemblies called Strong-Named Assemblies. However, as Cracking .NET Assemblies demonstrates, the system is easily subverted. Self Healing is interleaved with the program, rather than existing as a shell around the program, as Strong-Named Assemblies do. By integrating the integrity check into the executable, it is hoped the system will be more difficult to remove.

Self Healing Software

It was suggested the article be named 'Tamper Aware and Self Repairing Code'. However, the author felt 'Self Repairing Code' was a bit sterile and detached — the MSI installer repairs. This code is much more tightly coupled to the programmer's work, so the metaphorical 'Self Healing Code' was used to embody the process.

Press Hype

At times there is much in the press on the topic of Self Healing Software, which would lead one to believe the area is thoroughly studied (and patented). Once investigated, the label 'Self Healing Software' applied by the press seems to be a bit of a misnomer.

For example, take the following press release uncovered by a Google search of 'Self Healing Software': Self-Healing Software Gets Push from IBM. One would expect an article describing software which could be flown on the Space Shuttle, have radiation flip bits in its program code, and repair itself.

This is not the case. The IBM article discusses the capabilities of Tivoli Monitoring software (in the author's opinion it is a very nice product). In contrast to the title, IBM's statement in the article is:

...Tivoli Monitoring 6.1 oversees and fixes IT service-related problems in servers or databases for online applications such as e-mail

The thrust of this CodeProject article is software integrity and self repair, while the "IT service-related problems" mentioned in the Computerworld article refer to automatically diagnosing and correcting issues such as those between an Email server and a Firewall.

Patent Issues

Dr. Brooke Stephens did uncover Patent 6530036, Self-Healing Computer System Storage. However, US Patent law being what it is, Patent 6530036 does not strictly apply to this CodeProject article. The holders of the patent restart their storage system should an anomaly be detected (the anomaly detection occurs through a proxy). The system described in this article recovers on the fly and does not use a proxy.

Self Healing Systems Workshop

In 2002, the first workshop on Self Healing systems was held in Charleston, South Carolina. The two papers of interest follow. However, neither embeds the system in the program code itself (each uses an 'external agent'). Both papers are available for download with this article.

The reader should keep in mind that the author is neither a lawyer nor a programmer; he is a Network Engineer and Network Architect with a passion for Programming and Cryptography. The compiler workings and caveats, combined with the Penicillin Code presented in this article, made for an interesting application of Cryptography and interesting reading.

Back Patching

In the strictest sense, back patching is an operation performed by the compiler during compilation. This article borrows the term since the article's endeavors are so closely related to the compiler's use of it.

Consider the following code fragment which calculates parity:

if( 0 == a % 2 )
{
p = 0;
}
else
{
p = 1;
}

On the first pass, the compiler encounters if( 0 == a % 2 ) and generates code to perform the comparison. Next, the assignments p = 0 and p = 1 are encountered. What is generated is a compare instruction followed by either a fall through into the first assignment, or a jump instruction (stepping over the first assignment) into the second assignment. The point to observe at this step is that "how far to jump" is not known because the full if/else statement has not been evaluated. The disassembly of the above code is shown below (note that code not relevant to this discussion, the modular reduction, has been masked).

On second pass, the code for the statements p = 0 and p = 1 has been generated (that is, the size of the emitted opcodes is now known), so the jump opcode at 0x411A16 can be patched with a displacement (more correctly, the immediate value of the operand can now be written). This is known as back patching.
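
The two-pass process can be sketched in a few lines of C++. This is an illustration over a plain byte buffer with hypothetical helper names (EmitJumpPlaceholder, BackPatch), not the compiler's actual machinery; the opcode values follow the x86 short-jump encoding:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Pass one: emit JMP rel8 (0xEB) with a zero placeholder displacement,
// because the size of the code being jumped over is not yet known.
// Return the index of the byte to patch later.
std::size_t EmitJumpPlaceholder(std::vector<std::uint8_t>& buf) {
    buf.push_back(0xEB);   // JMP rel8 opcode
    buf.push_back(0x00);   // displacement not yet known
    return buf.size() - 1; // fixup location
}

// Pass two: back patch. The rel8 displacement is measured from the
// end of the jump instruction to the current end of the buffer.
void BackPatch(std::vector<std::uint8_t>& buf, std::size_t fixup) {
    buf[fixup] = static_cast<std::uint8_t>(buf.size() - (fixup + 1));
}
```

After emitting, say, three bytes of code for p = 0, back patching writes a displacement of 3 into the placeholder byte.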

Error Free Hash Transcription

These examples will require the reader to often copy the calculated hash into the expected hash. To this end, the following tip is presented. First, open the properties of the Windows NT command interpreter (cmd.exe).

Next, enable Quick Edit mode in the command interpreter.

With Quick Edit mode enabled, one can now:

Insert the caret at first character of the hash

Left mouse click and hold

Highlight the hash text

Release left mouse button

Press ENTER to copy to the clipboard

Hash Variable Placement and Initialization

Though the issue of hash variable placement and initialization will not arise until examples two and three, it will be addressed now. There are two important caveats associated with variable placement and initialization.

Initialized Global Hash Variable

For the first caution, consider the following program fragment:

BYTE cbExpectedImageHash[ CryptoPP::SHA224::DIGESTSIZE ];

Notice that the BYTE array cbExpectedImageHash has been declared, but not initialized. This allocation will exist in the .bss section (uninitialized data section). The first run will produce the following result.

This result is expected in the Debug build: the compiler has initialized the BYTE array on behalf of the programmer to an expected value. Next, one would take the calculated image hash (09165E0392F4028240D0AEEA30B6CAF494CC929089757082347119ED), and use it to initialize cbExpectedImageHash as follows:

The effects of the above are subtle: The variable cbExpectedImageHash was moved from uninitialized data (.bss section) to initialized data (.data section).

In the interim the compiler has emitted different code: though cbExpectedImageHash still exists in a DATA Segment (now the initialized data section), the two versions have different initialization code, which by default resides in the .text section. The simple initialization that applied when the data existed in the .bss section (uninitialized data section) has been removed. A second run of the above code would therefore produce the following incorrect results:

A third run is required to properly calculate the precomputed hash.

Local Hash Variable

The final caveat in DATA Section initialization has to do with placement of the hash variable on the stack. Simply put, one can back patch the executable as often as one desires between non-Visual Studio runs (i.e. outside of the environment), and one will NOT obtain the correct results when the variable is not Global in scope. This is because cbExpectedImageHash resides on the program's stack, and its initialization code resides in the .text section. In the case of a command line project, the variable cbExpectedImageHash must be placed outside of main(); declaring it inside main() will not produce the expected results.
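
The contrast between the two placements can be sketched as follows. BYTE and DIGESTSIZE are defined locally so the fragment stands alone, and the initializer is truncated for brevity:

```cpp
#include <cassert>
#include <cstddef>

typedef unsigned char BYTE;
const std::size_t DIGESTSIZE = 28;  // SHA-224 digest length in bytes

// Global scope: the 28 initializer bytes are stored in the .data section,
// so back patching the file changes data, not code, and the .text section
// hash is unaffected.
BYTE cbExpectedImageHash[DIGESTSIZE] = { 0x09, 0x16, 0x5E /* remaining bytes */ };

const BYTE* GlobalHashAddress() { return cbExpectedImageHash; }

// Local scope: the initializer becomes instructions (immediate operands)
// inside the function body, i.e. inside the very .text section being
// hashed, so every change to the expected hash changes the hash again.
BYTE LocalHashFirstByte() {
    BYTE local[DIGESTSIZE] = { 0x09, 0x16, 0x5E /* remaining bytes */ };
    return local[0];
}
```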

Analysis of Code Generations

Viewing the disassembly of the following trivial code reveals the reason for the continual code generation changes when the BYTE array[] is placed inside main(): the encoding of the immediate values within the opcodes.

To understand why a global variable does not cause the code generation issue above, one can use PE Browse to examine the .data section (initialized data section) of the executable for the following example.

Notice below that the array is now stored in the .data section, rather than as a collection of immediate value opcodes or in the .bss section (uninitialized data section). Recall this article does not hash data sections, only the distinguished .text section. This is the "allocation and initialization" of array. Hence the .text section code no longer changes between runs.

Below is a view of the .data section when examining the executable using PE Browse.

If one were to hover the mouse over the variable array in the Visual Studio Debugger, Intellisense would report the address of array as 0x408030. If one were to accidentally overflow array memory, the first byte to be overwritten would be at 0x40805C — the byte value of 0xA0.

Polling Versus Notification

This article uses Crypto++ and hashing to determine, through Polling, when the .text section has been modified in memory. Polling appears to be the only option available to the programmer. An obvious point to observe: even if event triggering were possible, an executable which has had an unauthorized patch applied on disk would never raise an in-memory write event.

Windows API

If Microsoft Windows provided the programmer with a memory write notification (into the .text section) API, one could simply wait for the trigger and inject the Penicillin Code as required. According to Dr. Newcomer, Microsoft MVP, such a notification is not available.

Debug Registers

As Matthew Faithfull points out (reiterated by Oleg Starodumov above), under the Visual Studio Debugger, one can set a hardware breakpoint to accomplish the task for data. In Debugging Applications, John Robbins presents the source code for a debugger. However, the program uses software breakpoints and not hardware breakpoints.

Use VirtualQueryEx() to get the current page protection of the memory location in question.

Use VirtualProtectEx() to change protection of the memory location to current_page_protection | PAGE_GUARD.

Look for exceptions [WaitForDebugEvent()] with code STATUS_GUARD_PAGE and the address belonging to the memory location. (STATUS_GUARD_PAGE is not defined in the include files (I wish I knew why); its numerical value is 0x80000001.)

Once you (your debugger) receive such an exception, do what you want to do, then use SetThreadContext() to set the thread to single step execution, then dispose of the debug event [ContinueDebugEvent() with DBG_CONTINUE]. If the target process is multi-threaded, you should suspend all the other threads (otherwise the other threads may access the memory location without you seeing that).

Wait for EXCEPTION_SINGLE_STEP exception, after which call ContinueDebugEvent() with DBG_CONTINUE and go to step 2. If you suspended threads at the previous step, resume them now.

For the purpose of this article, the exception of interest would be STATUS_GUARD_PAGE_VIOLATION.

Self Healing 1

Self Healing 1 is taken from Dynamic TEXT Section Image Verification. It is a basic rewrite (which should have been performed in the previous article) — primarily a copy and paste to rearrange the executable for functionality and aesthetics. It will serve as the starting point of this article.

ImageInformation() locates the start of the TEXT Section in memory by combining the module address returned from GetModuleHandle() with offsets parsed from the various headers, and populates the parameters for use later in the program.

The sample then dumps the byte codes encountered by reading the in-memory .text section using standard memory read functions — note there is no requirement for MapViewOfFile() or ReadProcessMemory() since the operations are within the confines of its own process.
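
The header walk ImageInformation() performs can be sketched portably against a synthetic buffer. FindPeSignature is a hypothetical helper name; the real code operates on the address returned by GetModuleHandle(NULL) and goes on to locate the .text section header. The sketch assumes a little-endian host, as on x86:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Read e_lfanew from offset 0x3C of the IMAGE_DOS_HEADER, then verify the
// "PE\0\0" signature of the IMAGE_NT_HEADERS. A synthetic byte buffer
// stands in for the real module image mapped at the module base address.
bool FindPeSignature(const std::uint8_t* image, std::size_t size,
                     std::uint32_t* peOffset) {
    if (size < 0x40 || image[0] != 'M' || image[1] != 'Z')
        return false;                         // no IMAGE_DOS_HEADER
    std::uint32_t e_lfanew;
    std::memcpy(&e_lfanew, image + 0x3C, sizeof e_lfanew);
    if (e_lfanew > size - 4)
        return false;                         // e_lfanew out of range
    if (std::memcmp(image + e_lfanew, "PE\0\0", 4) != 0)
        return false;                         // no IMAGE_NT_HEADERS signature
    *peOffset = e_lfanew;                     // section headers follow here
    return true;
}
```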

Self Healing 2

The sample provided in Self Healing 2 builds upon the previous example by adding a Cryptographic Hash function. The Hash Function creates a digest of the executable's .text section.

The code's data was modified by adding two BYTE arrays for SHA-224 hash of the .text section: the expected (precalculated) hash, and the calculated (runtime) hash.

In addition to the BYTE arrays for the hash, a hash object and code to perform the hashing was added. This code can be seen below.

To build an executable which functions properly requires two compilations: the first compilation and subsequent run generates the expected (now known as the precalculated) hash. Then the precalculated hash is added to the executable. Finally, a second run will result in the precalculated hash equalling the runtime hash.
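
The verification half of this flow reduces to "digest the live bytes, compare with the embedded digest". The sketch below substitutes a self-contained FNV-1a digest for Crypto++'s SHA-224, purely to avoid the library dependency; VerifyImage and Fnv1a are illustrative names, not the sample's functions:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// 64-bit FNV-1a over a byte range: a stand-in digest so the compare logic
// can be shown without Crypto++. The real samples use SHA-224.
std::uint64_t Fnv1a(const unsigned char* p, std::size_t n) {
    std::uint64_t h = 1469598103934665603ULL;          // FNV offset basis
    while (n--) { h ^= *p++; h *= 1099511628211ULL; }  // FNV prime
    return h;
}

// Returns true when the runtime digest matches the precalculated digest.
bool VerifyImage(const unsigned char* text, std::size_t n,
                 std::uint64_t expected) {
    return Fnv1a(text, n) == expected;  // mismatch: tampered, or stale hash
}
```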

"Note that the operation of running the executable under the Debugger will cause the hash to change." This is because the Visual Studio Debugger will insert software breakpoints (0xCC opcode or Interrupt 3) into the program. To compound this issue, the software breakpoints are not displayed when viewing a disassembly. According to Oleg Starodumov, Microsoft VC++ MVP:

[The] Visual Studio debugger can only use hardware breakpoints on data access (only for write). If you need to break when the code executes, consider WinDbg.

Finally, taking from Ken Johnson, Microsoft SDK MVP:

...in WinDbg, if you use the 'ba' command then the code bytes in question will not be modified (i.e. substituted with an 0xcc/int 3). You are limited to 4 simultaneously active 'ba' breakpoints as they use the hardware supplied debug registers, which only support four target addresses.

Because of the Visual Studio software breakpoint issue, the program was built and then run from the Command Line rather than the Visual Studio environment. This is readily apparent if the reader observes the change in the Title Bar text.

In the two images below, Self Healing 2 was: run once from the Visual Studio Environment (yellow text); and run once from the Command Line (green text) to demonstrate the breakpoint issue. In either case, the code is exactly the same.

The above code was run to create the precalculated hash. In the intermediate step between the first and second runs of the program, one would back patch the executable to populate the correct expected digest. The code of Self Healing 2 is displayed below before the first run.

Armed with the correct expected hash value (E259A10464E487076CDB8F83E6D06ACB53564A1684BA84B3ABA72F4B), one can now insert it into the code for proper initialization of cbExpectedImageHash as shown below.

Self Healing 3

Though introduced previously, Self Healing 3 is a proper run of sample 2 from the command line outside the debugger with the expected image hash variable back patched (and in Global scope).

Self Healing 4

The fourth sample extracts and compresses the unmodified .text section of the executable. For this portion of the article, the compressed .text section will be saved to a file named TextImage.gz.

The extracted and compressed .text section is the data which is subsequently restored, should one detect a load error or unauthorized memory patch. The reader should explore other means for storing the extracted and compressed data. Candidates include:

As an Executable's Resource

As a Resource DLL

As a File

In the Windows Registry

Of the candidates, the Windows Registry is probably the least desirable (this is not the case if one chooses to embed only the expected hash value, as the hash is fewer than 32 bytes). Microsoft recommends limiting a registry value to approximately 2,048 bytes of data. Please read Microsoft's Registry Element Size Limits in MSDN.

A flat file was chosen for simplicity, functionality, and to demonstrate the Crypto++ Gzip and Gunzip classes.

This sample simply takes the in memory .text section, compresses it, and writes it to a file. Self Healing 4 is examined in detail under the next section, after back patching has occurred. For completeness, the Command Line run is shown below. Note the place holder for the expected hash: 0x00, 0x01, ..., 0x05, 0x06 to assure consistent code generation across runs for the back patch.

Self Healing 5

Self Healing 5 performs the TEXT Section export after compressing the image. Note that back patching has occurred.

Since the program is being run from the Command Line, the interpreter may have a pwd — or present working directory — different than that of the program directory. In this case, pwd is C:\. As such, the archive is placed in C:\ rather than in the program's build directory.

Navigating WinRAR to the root of C:\ and opening the archive reveals a consistent TEXT Section image. Taking from the information dumped in the fifth sample, the .text section size is 0x17FCE5, which is 1,572,069 decimal bytes.

The final step to be performed is extracting TextImage.gz, and then opening the extracted file using a hex editor to verify the correctness of the compression and extraction operations. This is verified below using UltraEdit32.

The code in this example adds one function call as follows. pCodeStart and dwCodeSize are being used from ImageInformation().

The Gzip constructor takes a BufferedTransformation* (the FileSink object) and a deflate level as parameters. The documentation for the Gzip and FileSink constructors being used follows; reference the Gzip and FileSink classes in the Crypto++ manual.

Then one encounters the Put() and MessageEnd() functions previously seen with the HexEncoder. Different objects (Gzip vs. HexEncoder), same semantics: the data is pushed into the object and processed, and then the object is told to complete its operations and flush its buffers.
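
The push model can be illustrated with a toy filter. ToySink is a hypothetical stand-in, buffering like a Gzip feeding a FileSink but performing no compression; only the Put()/MessageEnd() contract is the point:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Data pushed with Put() is buffered; MessageEnd() tells the object to
// finish its operations and flush the buffered bytes downstream.
class ToySink {
public:
    void Put(const unsigned char* p, std::size_t n) {
        pending_.insert(pending_.end(), p, p + n);   // buffer the pushed data
    }
    void MessageEnd() {                              // complete and flush
        committed_.insert(committed_.end(), pending_.begin(), pending_.end());
        pending_.clear();
    }
    std::size_t Committed() const { return committed_.size(); }
private:
    std::vector<unsigned char> pending_, committed_;
};
```

Until MessageEnd() is called, nothing reaches the downstream store; this is why both calls appear in the sample.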

Self Healing 6

Sample 6 is rather boring: it simply reads the TEXT Section archive, places it in a buffer (a rather large buffer in Debug builds for a Command Line project), and dumps the first 96 bytes to compare with the original TEXT Section. This sample is presented after the back patching operation (back patching was performed in Samples 2 through 5, essentially making one example into two).

The Gunzip code is shamelessly lifted from Wei Dai's Crypto++ test.cpp (with the addition of a wrapping try/catch block):

The following code snippet and figure of a Release build run (using green text) demonstrate using conditional compilation based on _DEBUG. The programmer now enjoys four builds for a Debug and Release pair. Also noteworthy is the dramatic reduction in .text section size for the Release build: 0x40130, or 262,448 decimal bytes. After compression, this is 135,687 decimal bytes.

Having switched to WriteProcessMemory() for the Tampering (1 byte), the sample again uses the function for the Healing. However, the entire .text section is restored; the author spent considerable time attempting to perform function-level detection and restoration, without success.

It is felt that function-level detection and restoration can be performed, but not without a dynamic disassembler. This is clearly feasible, since SoftICE (among other debuggers) has the feature. With that said, Russell Osterlund respectfully declined to share his source code for PEBrowse.

What does not perform as expected, particularly in Debug builds, is using the address-of (&) operator to book-end a function. Consider the following code fragment:

The start of main() can be determined with &main(); similarly, the address of the first function is &Function1(). One would then incorrectly conclude that sizeof( main ) is the difference of the addresses.

In Debug builds, &main() will return the address of a jump stub (for a discussion, see GetAddressOfMain() in Dynamic TEXT Section Image Verification). Next, there is no guarantee the binary layout of the Debug or Release build mimics that of the source file. Finally, function inlining in Release builds could optimize away the function call altogether.
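
A minimal sketch of why the address arithmetic is unreliable follows. Function1 and Function2 are placeholders with deliberately distinct bodies so the linker cannot fold them together; the only portable facts are that the two addresses exist and differ, not that their difference is a size:

```cpp
#include <cassert>

// Nothing guarantees Function2 is laid out immediately after Function1:
// the linker may reorder functions, insert padding, or (in Debug builds)
// hand back the address of a jump stub rather than the function body.
int Function1() { return 1; }
int Function2() { return 2; }

// The only safe observation: the addresses differ. Subtracting them does
// NOT yield sizeof(Function1).
bool AddressesDiffer() {
    return reinterpret_cast<const void*>(&Function1) !=
           reinterpret_cast<const void*>(&Function2);
}
```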

The reader is encouraged to further this work by creating a deterministic method for both functional level detection and restoration.

The results of the Debug (blue text) and Release (green text) executions are shown below.

The functions of interest are now AlterTextImage() and HealTextImage(). AlterTextImage() simply writes one No Operation instruction (NOP, 0x90) to the first byte of the .text section:

Best practices would dictate that one verify the integrity of the archived copy of the .text section before performing the above operation (perhaps using a hash). An even better solution would be to digitally sign the archived code, so that there would be no way to forge the hash or the enclosed Penicillin Code without detection. For an example of Message Signing with Recovery, see Product Activation Based on RSA Signatures. The exercises are left to the reader.
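
The tamper-and-heal cycle can be simulated portably. In the sketch below a std::vector stands in for the in-memory .text section, a pristine copy plays the role of the TextImage.gz archive, and plain writes replace WriteProcessMemory(); AlterTextImage, Tampered, and HealTextImage are sketch names modeled on the sample's functions:

```cpp
#include <assert.h>
#include <cstring>
#include <vector>

typedef std::vector<unsigned char> Image;

// Tamper: write one NOP (0x90) to the first byte of the "section".
void AlterTextImage(Image& text) { text[0] = 0x90; }

// Detect: the samples compare SHA-224 digests; a direct comparison
// suffices for this sketch.
bool Tampered(const Image& text, const Image& archive) {
    return text != archive;
}

// Heal: restore the entire section from the archived copy, as the
// sample does after decompressing TextImage.gz.
void HealTextImage(Image& text, const Image& archive) {
    std::memcpy(&text[0], &archive[0], archive.size());
}
```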

Windows Vista Compatibility

The author is very pleased to report these techniques are 100% Windows Vista Compatible. One minor issue was encountered: TextImage.gz could not be created in C:\ when running the program under a standard user account.

The Windows Message Box is being invoked because one has written garbage into the .text section — recall that the archive creation and subsequent restoration failed. Admittedly, the author should have placed more checks in the demonstration code.

If the archive file existed (from a previous run under an Administrator account), the program worked as desired.

This is because without virtualization, Local Users (of which an Authenticated User is a member) are allowed three permissions by default. Note that this computer is part of a private Domain (home.pvt):

Comments and Discussions

You claim at the start that one of the things this technique can protect against is:

"unauthorized patches"

But this technique is no defence against a competent, deliberate attempt to modify the behaviour of the code. (E.g., a malicious attack.) Indeed, you point this out in one of your replies to a comment:

"The sad part is no matter how complex your check is, all it takes is changing a "jmp conditional" to a "jmp" to bypass any single check in a protection scheme. This CRC check is no exception. However, as a tool to defend against corruption, or as a fail-safe to use in mission-critical resources this is a very useful tool."

The only kind of unauthorized patch it would prevent is an untested one... If someone is competent enough to find and adjust some piece of functionality of your component, then (a) they will notice that it has failed to work when they test it and (b) they will have the necessary skills to disable the check.

As a means of detecting accidental corruption it seems more interesting. However, did you consider trying to build such a self-check around existing integrity mechanisms such as code signing? The nice thing about using the standard Windows mechanisms for code signing is that the end user can see the signature and be confident that the binary comes from where it claims to have come - Windows Explorer shows this stuff in the Properties dialog, and there are existing tools for dealing with these signatures. (Whereas with your technique, a sneaky hacker could just alter the embedded hash instead of disabling it, leaving no obvious clue that anything is wrong.)

(There are some situations where components with an authenticode signature are verified automatically before being run. .NET components not installed in the GAC are a case in point. I'm not sure that happens with regular Win32 components, and if it doesn't wouldn't it make sense to build something that just checks that, rather than devising a new form of check.)

I apologize if you feel I misled you. It was not my intention. I've deliberately sidestepped the cracker vs. protectionist debate. That article will come much later. I think the introduction stated the gist of the article fairly well:

This article is the complement to 'Dynamic TEXT Section Image Verification'. The article will demonstrate detecting hardware faults or unauthorized patches; back patching the executable to embed the expected hash value of the .text section; and demonstrate the process of repairing the effects of hostile code (for example, an unauthorized binary patcher).

Ian Griffiths wrote:

However, did you consider trying to build such a self-check around existing integrity mechanisms such as code signing?

Yes - three replies here:
1) It is available when using .NET framework (what does one who is maintaining legacy code with Visual C++ 6.0 use?)
2) Use of a Signature Scheme with Recovery was stated at the end of the article. It is available from Visual C++ 5.0 and above.
3) Code Signing may not be feasible for whatever reason

In regards to [2], from the article:

Best practices would dictate that one verify the integrity of the archived copy of the .text section before performing the above operation (perhaps using a hash). An even better solution would be to digitally sign the archived code, so that there would be no way to forge the hash or the enclosed Penicillin Code without detection.

With respect to [3], reasons for not using Code Signing could include the lead time required to incorporate it, purchasing the certificate, etc. Also, Microsoft does a horrible job protecting its own binaries (reference Eliminating Explorer's Delay when Deleting an In-Use File[^]), so an author may choose to supplement Microsoft's methods with his or her own.

In the end, I'm not aware of such a treatment of this subject in the context of Win32 with all the prototype code given away. More tools for the war chest is not a bad thing.

I found an interesting publication with regards to CRCs. It basically demonstrates changing a document, calculating the new CRC, and then changing a few more bytes (based on the new CRC) to recapture the old CRC.

It can be done. I was hasty to reply this morning as I was in a rush, but I'll explain a bit more.

I've always been interested in ways to fight reverse engineering, as I do quite a bit of it myself to learn the way code compiles to assembly, what my own code looks like when compiled, and to understand more about how "hackers" break programs.

The sad part is no matter how complex your check is, all it takes is changing a "jmp conditional" to a "jmp" to bypass any single check in a protection scheme. This CRC check is no exception. However, as a tool to defend against corruption, or as a fail-safe to use in mission-critical resources this is a very useful tool.

The way you abused the compiler's methods and recompiled the correct hash into .DATA was the most useful step of the article for me, but all the other examples are nice as well.

Ah, quick question I suppose. Where on the security scale does "CRC" fall? I'm speaking about WinRAR / PKZIP CRC, specifically. Any ideas?

The way you abused the compiler's methods and recompiled the correct hash into .DATA was the most useful step of the article for me, but all the other examples are nice as well.

Abuse is such a harsh word - how about 'exploited our new knowledge'?

Michael Sadlon wrote:

Ah, quick question I suppose. Where on the security scale does "CRC" fall? I'm speaking about WinRAR / PKZIP CRC, specifically. Any ideas?

It really does not. A CRC is a code used to detect transmission errors - the underlying mathematical structure is such that one can easily determine the expected CRC (and sometimes reconstruct the message). To pound it home (and the underlying structure), see A Deterministic Method of Determining a Document's Modified State[^]. This article analyzes CRC, Adler, and SHA. I was very surprised at the results. Hence the reason for the hashing...

Michael Sadlon wrote:

This CRC check is no exception. However, as a tool to defend against corruption, or as a fail-safe to use in mission-critical resources this is a very useful tool.

In practice, I would probably use MD5 in place of CRC. For the article, I tried to stay with best practices (hence the SHA-2 hash). I can't imagine someone finding a collision (matching hash) which: 1) runs at all, and 2) provides the same functionality with the same hash (with some cracking going on).

The sad part is no matter how complex your check is, all it takes is changing a "jmp conditional" to a "jmp" to bypass any single check in a protection scheme. This CRC check is no exception. However, as a tool to defend against corruption, or as a fail-safe to use in mission-critical resources this is a very useful tool.

By the way, I spent years on HCU's legendary site. Email correspondences with Fravia and Mammon (especially Mammom) were quite common - Mammom gave me my first taste of Network Security on an old RedHat box - pre 4.0 release.

I've always been interested in ways to fight reverse engineering, as I do quite a bit of it myself to learn the way code compiles to assembly, what my own code looks like when compiled, and to understand more about how "hackers" break programs.

Thanks. I was not successful in researching the subject (there did not appear to be anything out there). So I put what I had out there. What I had was built around previous articles, and what I thought.

... So hopefully someone with real subject matter knowledge will correct any inconsistencies.

Like so many of my articles, this has gone through a flurry of revisions (early on in its life). You may want to look again - hash variable initialization/code generation is now covered in detail (complete with disassemblies).