About a year ago I blogged about the strategy of decoys and promised a second post. Here it is. Straight out of the user manual, so to speak: our tamper-proofing implementation makes use of decoys on many levels.

Some instructions are like red flags: they only occur in “suspicious” circumstances. Besides generating such instructions on the fly where they are needed, our tool plants a few of them in a whole bunch of other places, adding a small container of extra needles to the proverbial haystack. Let the attacker guess which needle is the right one.

Not just single instructions may be recognized; the same is true for instruction patterns. In fact, looking for patterns is a common attack method. (While this is not relevant to the topic at hand, rest assured the tool spends quite some effort on avoiding patterns.) More relevant here: when some vanilla extra code is needed somewhere, the tool can deliberately generate the known patterns it otherwise tries to avoid.

And of course, extra, never-executed code can be added. Let the attacker spend the appropriate extra effort. Software engineers can easily forget how resourceful and clever attackers are. Attackers worth the name will gather good statistics on the instructions, or groups of instructions, in use and find out that some code just doesn’t quite look right. The cleverest way to get decoy code with the right distribution for a specific application is also the simplest one: let the programmer specify some extra binary input files of his choice; the tool will simply draw instructions from those and not worry about any math, statistics, or being fancy. Using code found in the real application to be protected as decoys, however, would be a rather stupid idea: it would hand the attacker exact copies of the code he is looking for.
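The distribution-matching idea can be sketched in a few lines. This is a toy, assuming the reference instructions have already been disassembled into mnemonic strings; the function name and input format are invented for illustration, and a real tool would of course work on actual object code:

```python
# Toy sketch: generate decoy instruction sequences whose mnemonic
# distribution matches a user-supplied reference binary, NOT the
# protected application itself.
import random
from collections import Counter

def decoys_from_reference(reference_mnemonics, length, rng=None):
    """Sample a decoy sequence following the reference distribution."""
    rng = rng or random.Random(0)            # seeded for reproducibility here
    counts = Counter(reference_mnemonics)     # empirical distribution
    mnemonics = list(counts)
    weights = [counts[m] for m in mnemonics]
    return rng.choices(mnemonics, weights=weights, k=length)

reference = ["mov", "mov", "add", "mov", "cmp", "jne", "mov", "push"]
decoy = decoys_from_reference(reference, 6)
```

The point is only that the tool never invents a distribution; it copies whatever the programmer supplies.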

Extra code can be more than a bundle of instructions that merely sits there to be looked at: we distinguish such simple, dead instruction sequences from cleverer ones which also get executed but have little effect. Unlike the former, such code will really run in the live application.
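A toy example of such an executed decoy, invented for illustration and far too transparent for real use:

```python
def active_decoy(x):
    """Decoy that really runs: it does visible work, yet the net
    effect on the computation is nil."""
    acc = 0
    for i in range(17):
        acc = (acc + x * i) & 0xFFFFFFFF  # busy, observable arithmetic
    # Fold the result in and immediately cancel it again.
    return (x + acc) - acc
```

A real tool would entangle the useless value with live data far less obviously, so that removing the decoy is not a matter of local inspection.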

Don’t forget: the tool only does the tool’s work; to really fool an attacker, the programmer can help by making the extra decoys look truly attractive. This touches one of our standard mantras: an application doesn’t need to be changed at all to be protected, but when real first-class, high-level protection is desired, the tool’s efforts can be greatly augmented by a programmer who knows what he is doing.

A simple measure against crackers is detecting whether the program is being debugged. There is an arms race between defenders adding new recognition features and attackers finding ways to fool them. In fact, White Hawk Software is under no illusion that we can always recognize when an attacker uses a debugger. So we use a large number of different methods, annoying the attacker so that he never knows, after clearing the next hurdle, whether he is finished.

For an extra kick, the WHS-protected program mounts its defense in such a subtle way that the attacker might not even recognize it as a defense. Maybe a wrong algorithm is chosen, or the precision becomes dismal… Sadly, we cannot create subtle misbehavior automatically: in practice it turns out either not to be subtle enough, or not to disturb the attacker at all. Real subtlety is the domain of manual protections and therefore reserved for very high-end defenses; here, White Hawk Software can give the user enough control to create and integrate such a protection by hand. One thing, however, we can do automatically: add a delay between detection and reaction, so the attacker might not see the spot where his presence was detected. Another trick against scripted attackers is not to trigger every time, but with some randomness.
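A minimal sketch of the delay-and-randomness idea; the class and parameter names are invented for this post, and a real protection would hide this state much less obviously:

```python
# Sketch: decouple detecting a debugger from reacting to it, so the
# attacker cannot tie the eventual failure back to the check.
import random
import time

class DelayedResponse:
    def __init__(self, min_delay_s=30.0, trigger_prob=0.5, rng=None):
        self.detected_at = None
        self.min_delay_s = min_delay_s      # wait at least this long
        self.trigger_prob = trigger_prob    # then act only sometimes
        self.rng = rng or random.Random()

    def note_detection(self):
        # Record the detection, but do nothing visible yet.
        if self.detected_at is None:
            self.detected_at = time.monotonic()

    def should_act(self):
        # Act only after the delay has elapsed, and even then only with
        # some probability, so scripted attacks see inconsistent behavior.
        if self.detected_at is None:
            return False
        if time.monotonic() - self.detected_at < self.min_delay_s:
            return False
        return self.rng.random() < self.trigger_prob
```

The polling sites calling `should_act()` would be scattered through the program, far away from the detection sites.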

We want to present some tricks for recognizing a debugger. The absolute last thing we want is to educate attackers, and that is indeed not happening: rest assured, attackers already know everything written here, so it’s OK to let the other people in on it too. By the way, when researching known anti-debugger tricks we found a large number of them to be broken. Traditionally it is the good guy who uses a debugger, with malware trying to prevent the debugging. For this reason (at least we occasionally think so) textbooks describe detection methods which don’t quite work completely. We will stay within that tradition: we will stay far too general and high-level to risk exposing new information to criminals.

There are very different classes of debuggers. For example: breakpoints can be created by modifying the code, or implemented in hardware. A debugger can stop all threads, or just one single thread. An emulated processor could be used. The program could be run on special debugging hardware. No single measure will recognize them all.

Finally, here are some ways to detect that your program may be being debugged:

The operating system has an API to detect debuggers. Look for that call.

There are some bits set by debuggers which can be checked directly without using the API. (E.g. Windows Process Environment Block)

Recognize presence of trap instructions. (And don’t trip over data with the same encoding.)

Special-purpose breakpoint registers can be “used” by us, so use by a debugger becomes conflicted.

Debuggers themselves can have bugs which are known and exploited by malware. (But eventually those get fixed.)
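As an illustration (not taken from our tool), here are Linux analogues of two of these checks: reading the TracerPid field of /proc/self/status plays the role of the Windows PEB flag, and scanning for the 0xCC byte, the x86 `int3` encoding, approximates trap-instruction detection. The helper names are invented for this example:

```python
# Two self-contained debugger checks for Linux, purely illustrative.

def tracer_pid():
    """Return the PID of an attached tracer, or 0 if none (Linux)."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("TracerPid:"):
                return int(line.split()[1])
    return 0

def count_trap_bytes(code: bytes) -> int:
    """Count 0xCC bytes in a code buffer.

    Beware: data can share the encoding, so a nonzero count is a hint,
    not proof, of planted breakpoints."""
    return code.count(0xCC)
```

A real protection would never leave such checks as nicely named, isolated functions; they would be inlined, obfuscated, and duplicated.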

Another short blog entry which isn’t really for software producers, but aimed at everyday software users: I found it more difficult than necessary to set up encryption. Here is what I did; maybe it can help somebody.

Encryption is a really big deal. If you already do this and worry about real classified stuff, don’t read on. You already know how to handle encryption and these simple instructions may be useless for you.

If you simply load your security certificates into your browser and are happy, you can also stop wasting your time reading this blog. That is probably good enough for many users.

People in between (like me) just think the standard processes for using encryption are too complex. The system may be foolproof, but it sure fails to convince me that the stuff I’m sending around doesn’t contain my private keys. I may have an ultra-secure certificate, but why should I believe my computer keeps it really secret? Some unknown code in my browser somehow uses a certificate, paints nicely closed locks on the monitor, and what not. But I know my normal desktop computer is not safe. I know my virus checker reliably catches about 60% of the simpler viruses (and secretly deletes the binaries of tamper-proofed test programs, which it usually assumes to be malicious).

Getting concrete: here are a few programs which are really simple to use. So simple that you might avoid making mistakes of your own, and so simple that their code is self-contained and far away from whatever malware has already attacked your computer.

1) Encrypting or decrypting any file. The program creates a window; you give it the password and simply drag and drop files into it. At http://spi.dod.mil/ewizard.htm, in the middle of the page, it says “Download EW-Public”. Unzip the downloaded file into a new directory. No installation is required. The directory contains simple instructions.

2) Encrypting or decrypting just lines of text, e.g. within an email message. The recommended program creates a window with a form field for the password. Use it with drag and drop as with the other program, only this time with lines of text instead of complete files. At http://www.fourmilab.ch/javascrypt/, use what they call the “Lean” version. The simpler a program is, the fewer chances you have of making errors. You can make a local copy; no installation is required. The directory contains the full program and simple instructions.

3) When security really matters, there is a program which can make an otherwise unsafe computer safe. How? Use is simple: you reboot your computer into that program and get a “desktop” which is safe and completely separated from your file system. Among other security tools, this program already contains both encryption applications mentioned above. At http://spi.dod.mil/lipose.htm, get one of the “LPS-Public ISO Images”.

Using White Hawk Software tools does not prevent users from adding some protection code of their own.

Decoys are among the best defenses. However, use of decoys can become dangerous and may be tricky.

Consider a decoy having been introduced. What are the possibilities?

The decoy is not detected.
Nothing happens; no good, no bad. It still is there for possible later detection.

The decoy is detected, but confused with the real thing.
The best possible outcome: the attacker stops searching because he believes he has found the real thing.

The decoy is detected and recognized to be a decoy.
The worst possible outcome: the attacker is reinforced in the belief that there must be something worth hiding, and will multiply his efforts to search for the real thing.

Use of decoys is a strategic decision which can be made only after evaluating the possible outcomes and their consequences.

Should decoys be protected? Of course. If a decoy is not protected, doesn’t that just scream that this code is intended for viewing? On the other hand: don’t protect it too well, or an attacker will have no clue about the decoy and won’t waste his time on it.
So, how well should decoys be protected? That is a difficult question; maybe protect them just a tiny bit less than the real code. Or have several decoys and protect them at different levels.

When time permits I plan on blogging about how NestX86 itself takes advantage of decoys at different levels. …Here it is.

We have reached an interesting waypoint: we get to eat our own dog food. We created a protection for the protection tool. First the obvious: we need tamper-proofing for the same reason everybody else does. But the second point is more interesting: just about every customer will ask us whether we protect our own tool. We simply have to do it.

Of course we would have a good excuse: the majority of the NestX86 protection tool is written in a high-level, byte-coded language, while our product is about protecting machine code in object files. That may be a very good excuse technically, but to our customers it may nevertheless feel lame.

We can’t really handle byte codes, but we still had to fake it. Instead of writing another protection tool, we made a custom protection. It is a prime example of how good expert work and protection design can make up for a lot of automatic tooling. We also learned the lesson we want our customers to learn: one can develop one’s own protection, but buying a tool is much cheaper. (We knew that before.)

We know our program. We know what parts we really want to protect, and what parts might give the most insight into the internal workings of the important parts. Our tool’s performance is good enough that we can easily give away some computer cycles for the protection. Doing it manually, we can add some devious decoys: not random decoys, but decoys aimed at the particular circumstances. In addition, we can protect the base libraries together with the real tool; the sheer quantity of protected code should discourage most attackers. (How could an attacker crack the binary when the source code, with all its documentation, is still hard to comprehend?) We don’t rename identifiers randomly, but make sure our renaming causes confusion and some aliasing. A small amount of artificial sub-classing, the use of undocumented private algorithms, and some multithreading dot the i’s and cross the t’s. And not yet having a protection tool didn’t stop us from creating a number of special-purpose hacks that modify our source code before compilation.
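As a toy illustration of the renaming idea, here is a scheme invented for this post (our tool’s actual renaming is deliberately not described): unrelated identifiers all receive names built entirely from easily confused characters.

```python
# Toy renamer: every identifier becomes a meaning-free look-alike name.
import itertools

def confusing_renames(identifiers):
    """Map each identifier to a distinct name made only of the
    easily-confused characters l, 1, I, O, 0."""
    alphabet = "l1IO0"
    names = ("".join(chars)
             for n in itertools.count(4)                 # names of growing length
             for chars in itertools.product(alphabet, repeat=n))
    # The "v" prefix keeps the result a valid identifier even when the
    # generated tail happens to start with a digit.
    return {ident: "v" + next(names) for ident in identifiers}
```

In a real protection the mapping would additionally be chosen so that look-alike names land on unrelated entities, suggesting relationships that do not exist.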

For now we skip semantics-preserving transformations, until we have a decent automatic analyzer and composer for byte codes. If we weren’t a penny-pinching startup, we would have bought a tool from one of our competitors; the free tools we tried were too difficult to use on our program base. But the grin of a competitor’s salesperson selling us a tamper-proofing tool would have been completely unbearable.

Go try to hack our tool while there is still some possibility; with the next release, or maybe the one after, you can forget it. Sorry, this is rhetorical only: for legal reasons, the license for our tool prohibits reverse engineering it.

But what if you still need to stick to an XP platform? Maybe the software is just a small part of a complex system? For software producers who, for whatever reason, must still create code for an old Windows XP platform after its end of life, there is a solution: White Hawk Software helps make the software hacker-proof.

Even when the application software has been tamper-proofed, hackers might still take over an XP machine. The protected software may go down with the machine, but it will not continue running and produce wrong results.