Blog

Using White Hawk Software tools does not prevent users from adding some protection code of their own.

Decoys are among the best defenses. However, using decoys can be tricky, and even dangerous.

Suppose a decoy has been introduced. What are the possible outcomes?

The decoy is not detected.
Nothing happens, neither good nor bad. The decoy remains in place for possible later detection.

The decoy is detected, but confused with the real thing.
The best possible outcome. The attacker stops searching because he believes he has found what he was looking for.

The decoy is detected and recognized to be a decoy.
The worst possible outcome. The attacker's suspicion that there must be something worth hiding is reinforced, and he will multiply his efforts to find the real thing.

Use of decoys is a strategic decision which can be made only after evaluating the possible outcomes and their consequences.

Should decoys be protected? Of course. If a decoy is not protected, doesn’t that just scream that this code is intended for viewing? On the other hand, don’t protect it too well, or an attacker will never notice the decoy and won’t waste his time on it.
So, how well should decoys be protected? That is a difficult question; maybe protect them just a tiny bit less than the real code. Or have several decoys and protect them at different levels.

When time permits, I plan to blog about how NestX86 itself takes advantage of decoys at different levels. …Here it is

We reached an interesting waypoint: “We got to eat our own dog food.” We created a protection for the protection tool. First the obvious: we need tamper-proofing for the same reason everybody else does. But the second point is more interesting: nearly every customer will ask us whether we protect our own tool. We simply have to do it.

Of course we would have a good excuse: the majority of the NestX86 protection tool is written in a high-level, byte-coded language, while our product is about protecting machine code in object files. That may be a very good excuse technically, but to our customers it may nevertheless feel lame.

We can’t really handle byte codes, but we still had to fake it. Instead of writing another protection tool, we made a custom protection. It is a prime example of how good expert work and protection design can make up for a lot of automatic tooling. We also learned the lesson we want our customers to learn: one can develop one’s own protection, but buying a tool is much cheaper. (We knew that before.)

We know our program. We know what parts we really want to protect, and what parts might give the most insight into the inner workings of the important parts. Our tool’s performance is good enough that we can easily give away some computer cycles for the protection. Working manually, we can add some devious decoys: not random decoys, but decoys aimed at our particular circumstances. In addition, we can protect the base libraries together with the real tool; the sheer quantity of protected code should discourage most attackers. (How can an attacker crack the binary if the source code, with all its documentation, is still hard to comprehend?) We don’t rename identifiers randomly, but make sure our renaming causes confusion and some aliasing. A few artificial subclasses, the use of undocumented private algorithms and some multithreading dot the i’s and cross the t’s. Not yet having a byte-code protection tool didn’t stop us from creating a number of special-purpose hacks that modify our source code before compilation.
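One such pre-compilation hack could look roughly like the following sketch (the identifier names and the alias mapping are invented; this is not our actual renamer, just an illustration of renaming that deliberately creates near-identical, confusable names):

```python
import re

# Hypothetical mapping: distinct identifiers collapse into aliases
# that differ only in easily confused characters (l vs 1, I vs l).
RENAMES = {
    "compute_key": "xI1l",
    "verify_key": "xIl1",   # differs from xI1l only in the l/1 order
    "log_result": "xl1I",
}

def obfuscate(source: str) -> str:
    # Whole-word replacement, so substrings of other names are untouched.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, RENAMES)) + r")\b")
    return pattern.sub(lambda m: RENAMES[m.group(1)], source)

print(obfuscate("out = verify_key(compute_key(seed)); log_result(out)"))
# → out = xIl1(xI1l(seed)); xl1I(out)
```

A reader of the renamed code must constantly double-check which alias is which, which is exactly the confusion and aliasing effect described above.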

For now we skip semantics-preserving transformations, until we have a decent automatic analyzer and composer for byte codes. If we weren’t a penny-pinching startup, we would have bought a tool from one of our competitors. The free tools we tried were too difficult to use on our code base. And the grin of a competitor’s salesperson selling us a tamper-proofing tool would have been completely unbearable.

Go try to hack our tool while there is still some possibility; with the next release, or maybe the one after that, you can forget it. Sorry, this is rhetorical only: for legal reasons, the license for our tool prohibits reverse engineering.

But what if you still need to stick to an XP platform? Maybe the software is just a small part of a complex system? For software producers who, for whatever reason, must still create code for an old Windows XP platform after its end of life, there is a solution: White Hawk Software helps make the software hacker-proof.

Even when the application software has been tamper-proofed, hackers might still take over an XP machine. The protected software may go down with the machine, but it will not keep running and produce wrong results.

A recent article in Wired magazine explained how some very smart mathematicians had theorized for years that there was a way to use encryption techniques to protect executable code as well as data. As far as I can tell, most of them never got around to it, since the mathematical simulation and proof that this would work was estimated to be a multi-year project. However, some new research and concept tools in this area are close to coming to fruition, hence the article.

Cooper’s Hawk in nest – thanks to Cornell Univ.

But what if someone with a lot of experience in obfuscation tools, among others, created a new, complex tool set that used a variety of techniques simultaneously to properly protect sensitive parts (or the whole) of a software system? Tools that can balance speed, protection and size? Tools that can protect object code as well as work on source code? That is what Dr. Jacobi has been doing for White Hawk for the past three years, using intense applied science and starting from a clean slate. He also previously worked for a major vendor in this area that applied completely different techniques.

He has developed a software protection technique based on random control of novel obfuscations, mutually checking protection aspects, and algorithmic combinations of diverse code primitives. We are busy packaging the X-86 version of this as NEST-X86 for demo and beta testing in late March 2014. Forget trying to model its strengths and weaknesses: each company will implement its chosen protection plans in different ways with this tool set.
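As a rough illustration of the "mutually checking" idea only (this is a simplified sketch, not the NEST-X86 mechanism; it uses Python and checksums byte code where the real tool works on machine code, and all names are invented):

```python
import hashlib

def _digest(fn) -> bytes:
    # Checksum a function's compiled bytecode (a stand-in for the
    # machine code a real protection tool would checksum).
    return hashlib.sha256(fn.__code__.co_code).digest()

def guard_a(payload):
    # Guard A verifies guard B before doing its share of the work.
    if _digest(guard_b) != EXPECTED_B:
        raise RuntimeError("tampering detected in guard_b")
    return payload * 2

def guard_b(payload):
    # Guard B verifies guard A in turn, so the checks are mutual.
    if _digest(guard_a) != EXPECTED_A:
        raise RuntimeError("tampering detected in guard_a")
    return payload + 1

# "Protect time": record the genuine digests once both guards exist.
EXPECTED_A = _digest(guard_a)
EXPECTED_B = _digest(guard_b)

print(guard_b(guard_a(20)))  # 20*2 + 1 = 41
```

Because each guard refuses to run when the other has been modified, an attacker cannot quietly patch one check in isolation; in a real deployment the guards would cover the program's critical code as well.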

Do think about signing up to be a beta tester or even a beta breaker – if you can.

A long time ago I was so fascinated by how one air traffic control system could handle all the planes for three New York-area airports simultaneously, long before there was the internet or even multi-user computers, that I wrote a paper on it as part of my postgraduate degree program. Recently the SenseCy blog called out some highlights from the AIAA’s official release of “A Framework for Aviation Cyber Security,” which discussed the connectivity challenge in a networked world.

With the enormous number of computers involved today in air traffic control, airport and ground control, as well as on-board control, concerns about cyber security in this special industry have expanded dramatically. Because air travel is the primary method of international travel, and because other transportation systems don’t fall out of the sky when they fail, more attention needs to be paid to aviation than to rail, ship or car travel (not that those aren’t susceptible to attacks too).

Since the development of drones and their much wider deployment in recent conflicts, even Joe Public knows that planes can be controlled from the right computers; this has also been portrayed in movies and TV shows. My concern is that all the attention seems to be paid to protecting these “front end access” systems. But what if malicious code has infected the back-end systems or embedded code, for example? Such infections may lie dormant for a long time and then cause a lot of problems. A virus made its way onto the International Space Station via a simple USB drive an astronaut brought aboard, so this is not just a theoretical discussion.

I hope and trust that some more attention will be paid to making back-end and embedded systems more tamper-proof before I next leave for the airport.

Recently some people have asked us why we don’t call our software “hacker-proof tools” rather than tamper-proofing software tools. Both terms are correct, of course, but we think the word “hacker” often carries a connotation of “amateur,” or at least of not being a full-time professional.

Yes, we want to protect your software from hackers, but we also want to protect it from professional code-breakers, competitors and virus developers. Hence the stronger term tamper-proof.

Picture of the International Space Station, which itself was infected with a virus. Photo thanks to Wiki Commons.

Renowned Russian virus and security expert Eugene Kaspersky revealed recently that a virus had even been discovered on board the International Space Station – despite it being a million miles from the nearest internet node. It turns out an astronaut accidentally brought the virus along on a USB “thumb” drive for use on one of the many laptops deployed in the space station. See the full story in the International Business Times.

The moral of this story is that you don’t have to be attached to the internet to be infected. So don’t just run virus checkers and hope for the best: mission-critical software should be tamper-proofed so that no malware can hook in and cause any damage whatsoever.

In Washington on Thursday, Nov. 7th, FBI Director James Comey said that cyber attacks increasingly represent the most serious threats to homeland security and in the next decade will likely eclipse the risk posed by traditional terrorist threats.
He told a Senate committee that this cyber risk is a multi-layered threat posed by thieves, hackers and others who can travel the world via the internet at the “speed of light.” His stern warning continued: “there are no safe neighborhoods.”