According to Dark Reading, Jesse Hertz and Shawn Denbow found numerous flaws in commonly used RATs, including SQL injection, arbitrary file reading, and weak encryption.
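To make the first of those flaw classes concrete, here is a minimal, entirely hypothetical sketch of how SQL injection arises in a RAT's command-and-control panel. None of this is Hertz and Denbow's actual code; the table, hostnames, and query are invented for illustration. The point is simply that when a C2 panel splices victim-reported data straight into SQL, the "victim" machine controls the attacker's database.

```python
import sqlite3

# Hypothetical C2 database: the attacker logs victim check-ins and keeps
# private notes. (Names and schema invented for this sketch.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE victims (hostname TEXT, secret TEXT)")
conn.execute("INSERT INTO victims VALUES ('desktop-01', 'attacker-only-note')")

def log_checkin_vulnerable(hostname):
    # String concatenation instead of a parameterized query: the infected
    # machine reports 'hostname', so a defender controls this value.
    query = "SELECT hostname FROM victims WHERE hostname = '%s'" % hostname
    return conn.execute(query).fetchall()

# A defender-controlled machine can report a crafted "hostname" that turns
# the attacker's own lookup into a dump of his private data.
payload = "x' UNION SELECT secret FROM victims --"
print(log_checkin_vulnerable(payload))  # → [('attacker-only-note',)]
```

In other words, the same sloppy coding habit that makes ordinary web applications exploitable makes the attacker's infrastructure exploitable too.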

"This shows that it is possible, and that it's not hard, to pick apart attacker tools and come up with proactive defenses against them," says John Villamil, senior security consultant with Matasano, who served as Denbow and Hertz's adviser for the project. "If nothing else, it can help forensics companies analyzing traffic from compromises ... and help build tools that analyze these Trojans, and provide signatures [to detect them]."

Vulnerability research into attacker tools is rare, but not unheard of. "It's very rare to see this type of research," Villamil says.

RATs, which typically provide keylogging, screen and camera capture, file management, code execution, and password sniffing, give the attacker a foothold both in the infected machine and in the targeted organization.

This is great news for cybersecurity. It opens new opportunities for attribution of computer attacks, along lines I’ve suggested before: “The same human flaws that expose our networks to attack will compromise our attackers’ anonymity.”

In this case, the flaws identified by Hertz and Denbow could allow defenders to decrypt stolen documents and even to break into the attacker’s command and control link while the attacker is still online. That opens up the possibility of a true counterhack, in which the defender exploits a flawed attack to gain control of the attacker’s machine.
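The "decrypt stolen documents" part follows from the weak encryption mentioned above. As a toy illustration only, and not a description of any specific RAT, suppose the malware "encrypts" exfiltrated files with a key hardcoded in its own binary (here a repeating-key XOR; the key and filenames are invented). Anyone who extracts that key from a captured sample can read everything the RAT ever stole:

```python
# Invented key for illustration; a real analyst would recover the key by
# reverse-engineering a captured RAT binary.
HARDCODED_KEY = b"pestcontrol"

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR is its own inverse, so this one routine both
    # encrypts (attacker side) and decrypts (defender side).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

stolen = xor_cipher(b"quarterly financials", HARDCODED_KEY)   # in transit
recovered = xor_cipher(stolen, HARDCODED_KEY)                 # defender side
print(recovered.decode())  # → quarterly financials
```

A static key shared across every infection means one reverse-engineering effort unlocks every victim's stolen traffic, which is exactly why this class of flaw is so valuable to defenders.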

It’s only a matter of time before counterhacks become possible. The real question is whether they’ll ever become legal. Both the reporter and the security researcher agree that, “legally, organizations obviously can't hack back at the attacker.”

I think they're wrong on the law, but first let's explore the policy question. Should victims be able to poison attackers' RATs and then use the compromised RAT against their attacker?

We'll start with the obvious. Somebody should be able to do this. And, indeed, it seems nearly certain that somebody in the U.S. government -- using some combination of law enforcement, intelligence, counterintelligence, and covert action authorities -- can do this. (I note in passing, though, that there may be no one below the President who has all these authorities, so that as a practical matter RAT poisoning may not happen without years of delay and a convulsive turf fight. That's embarrassing, but beside the point, at least today.)

Asking government to do the job has some drawbacks, though. Counterhacking is likely to work best while the attacker is actually online, when the defenders can stake out the victim’s system, ready to give the attacker bad files, to monitor the command and control machine, and to copy, corrupt, or modify exfiltrated material. Defenders may have to swing into action with little warning.

Who is going to do this? Put aside the turf fight. Does anyone think that the NSA, the FBI, or the CIA has enough technically savvy counterhackers to stake out the networks of the Fortune 500, waiting for the bad guys to show up?

And even if they do, who wants them there? Privacy campaigners will hate the idea of giving the government that kind of access to private networks, even networks that are under attack. For that matter, businesses holding sensitive data won’t much like the stark choice of either letting foreign governments steal it all or giving the US government wide access to their networks.

From a policy point of view, surely everyone would be happier if businesses could hire their own network defenders to do battle with attackers. Hiring defenders would greatly reinforce the thin ranks of government investigators. It would make wide-ranging government access to private networks less necessary. And busting the government monopoly on active defense would probably increase the diversity, imagination, and effectiveness of the counterhacking community.

But, you ask, what about vigilantism, that tired bugaboo of the Justice Department’s Old Guard?

First, as I’ve suggested elsewhere, allowing private counterhacking doesn’t mean reverting to a Hobbesian war of all against all. Government can set rules and discipline violators, just as it does with other privatized forms of law enforcement, from the securities industry’s FINRA to private investigators.

Second, the "vigilantism" claim depends heavily on sleight of hand. People who hate this idea invariably call it "hacking back," with the heavy implication that the defenders will blindly fire malware at whoever touches their network, laying indiscriminate waste to large swaths of the Internet. For the record, I'm against that kind of hacking back too. But RAT poison makes possible a kind of counterhacking that is far more tailored and prudent. Indeed, with such a tool, trashing the attacker's system is dumb; it is far more valuable as an intelligence tool than for any other purpose.

Of course, even if they aren't trashing machines, the defenders will be collecting information. And gathering information from someone else's computer certainly raises moral and legal questions. So let's look at the computers that RAT poisoning might allow investigators to access.

First, and most exciting, this research could allow us to short-circuit some of the cutouts that attackers use to protect themselves. I grant that I’m beyond my technical capabilities in saying this, but it seems highly unlikely to me that an attacker can use a RAT effectively without a real-time connection from his machine to the compromised network. Sure, the attacker can run his commands through onion routers and cutout controllers. But at the end of all the hops, the attacker is still typing here and causing changes there. If the software he’s using can be compromised, then it may also be possible to reverse the flow, injecting arbitrary code into his machine and thus compromising both ends of the attacker's communications. That’s the Holy Grail of attribution, of course.

Is there a policy problem with allowing private investigators to compromise the attacker's machine for the purpose of gathering attribution information? Give me a break. Surely not even today's ACLU could muster more than a flicker of concern for a thief's right to keep his victim from recovering stolen data.

The harder question comes when the attacker is using a cutout -- an intermediate command and control computer that actually belongs to someone else. In theory, gathering information on the intermediate computer intrudes on the privacy of the true owner. But, assuming that he's not a party to the crime, he has already lost control of his computer and his privacy, since the attacker is already using it freely. What additional harm does the owner suffer if the victim gathers information on his already-compromised machine about the person who attacked them both? Indeed, an intermediate command and control machine is likely to hold evidence about hundreds of other compromised networks. Most of those victims don't know they've been compromised, but their records are easy to recover from the intermediate machine once it has been accessed. Surely the social value of identifying and alerting all those victims outweighs the already attenuated privacy interest of the true owner.

In short, there's a strong policy case for letting victims of cybercrime use tools like this to counterhack their attackers. If the law forbids it, then to paraphrase Mr. Bumble, "the law is a ass, a idiot," and Congress should change it.

But I don't think the law really does prohibit counterhacking of this kind, for reasons I'll offer in a later post.


UPDATE: I modified a phrase that turned out to be more colorful than helpful to literal-minded readers.