In early 1998, Defense Department computer networks came under attack. The offensive was codenamed Moonlight Maze, and it came—for the first time that anyone knew—from a foreign power.

Moonlight Maze marked the first skirmish in what would soon emerge as a new theater of global conflict: cyberwar. Nearly two decades before cyberattacks became a routine feature of international relations, the story of how the U.S. grappled with the attack and its attackers—never before reported in full—shows how a fledgling national security team in the Pentagon bureaucracy learned to follow a new kind of trail to identify cyber culprits. It also reveals just how big a challenge this new kind of war posed—and, in many ways, still poses—to the American security establishment.

***

In early March of 1998, word came through that someone had hacked into the computers at Wright-Patterson Air Force Base in Ohio and was pilfering files—unclassified but sensitive—on cockpit design and microchip schematics. Over the next few months, the hacker fanned out to other military facilities. No one knew his location (the hopping from one site to another was prodigious, swift, and global); his searches bore no clear pattern (except that they involved high-profile military R&D projects).

Nine months earlier, when cyber-war was still a hypothetical matter, the Pentagon had staged a war game called Eligible Receiver, in which 25 members of an NSA Red Team, using commercially available gear, hacked into every Defense Department computer network, shutting down or distorting vital communications. Now, in an operation that an interagency task force dubbed Moonlight Maze, the Pentagon was experiencing the real thing, from a foreign power.

The hacker would log in to the open computers of university research labs to gain access to military sites and networks. He didn’t dart in and out of a site, like some joyride hackers the government had seen; he was persistent. He was looking for specific information, he seemed to know where to find it, and, if his first path was blocked, he stayed inside the network, prowling for other approaches.

He was also remarkably sophisticated, employing techniques that impressed even the NSA teams that were following his moves. He would log on to a site, using a stolen username and password; when he left, he would rewrite the log so that no one would know he’d ever been there. Finding the hacker was touch-and-go: the analysts would have to catch him in the act and track his moves in real time; even then, since he erased the logs when exiting, the on-screen evidence would vanish after the fact. It took a while to convince some higher-ups that there had been an intrusion.

A year earlier, the analysts probably wouldn’t have detected a hacker at all, unless by pure chance. With few exceptions, the Army, Navy and civilian leaders in the Pentagon would have had no way of knowing whether an intruder was present, much less what he was doing or where he was from.

That all changed with the Eligible Receiver war game, which convinced high-level officials, even those who had never thought about the issue, that America was vulnerable to a cyber-attack and that this condition endangered not only society’s critical infrastructure but also the military’s ability to act in a crisis.

Right after Eligible Receiver, Deputy Secretary of Defense John Hamre called a meeting of senior civilians and officers in the Pentagon to ask what could be done. One solution, a fairly easy gap-filler, was to authorize an emergency purchase of devices known as intrusion-detection systems or IDS—a company in Atlanta, Georgia, called Internet Security Systems, could churn them out in quantity—and to install them on more than a hundred Defense Department computers. As a result, when Moonlight Maze erupted, far more Pentagon personnel saw what was happening, far more quickly, than they otherwise would have.

Not everyone got the message. After Eligible Receiver, Matt Devost, who’d led the aggressor team in war games testing the vulnerability of American and allied command-control systems, was sent to Hawaii to clean up the networks at U.S. Pacific Command headquarters, which the NSA Red Team had devastated. Devost found gaps and sloppiness everywhere. In many cases, software vendors had long ago issued warnings about the vulnerabilities along with patches to fix them; the user had simply to push a button, but no one at PacCom had done even that. Devost lectured the admirals, all of them more than twice his age. This wasn’t rocket science, he said. Just put someone in charge and order him to install the repairs. Several months later, around the time of Moonlight Maze, Devost was working computer forensics at the Defense Information Systems Agency. He came across PacCom’s logs and saw that they still hadn’t fixed their problems: despite his strenuous efforts, nothing had changed. (He decided at that point to quit government and do computer-attack simulations in the private sector.)

Even some of the officers who’d made the changes, and installed the devices, didn’t understand what they were doing. Six months after the order went out to put intrusion-detection systems on Defense Department computers (still a few months before Moonlight Maze), Hamre called a meeting to see how the devices were working.

An Army one-star general furrowed his brow and grumbled that he didn’t know about these IDS things: ever since he’d put them on his computers, they were getting attacked every day.

The others at the table suppressed their laughter. The general didn’t realize that his computers might have been getting hacked every day for months, maybe years; all the IDS had done was to let him know it.

Not long before Moonlight Maze, Hamre called another meeting, imbued with the same sweat of urgency as the one he’d called in the wake of Eligible Receiver, and asked the officers around him the same question he’d asked before: “Who’s in charge?”

They all looked down at their shoes or their notepads, because no one was in charge. The IDS devices may have been in place, but no one had issued protocols on what to do if the alarm went off or how to distinguish an annoying prank from a serious attack.

Finally, Brigadier General John “Soup” Campbell, who had been the Joint Staff’s point man on Eligible Receiver, raised his hand. “I’m in charge,” he said, though he had no idea what that might mean.

By the time Moonlight Maze started wreaking havoc, Campbell was drawing up plans for a new office called Joint Task Force-Computer Network Defense—or JTF-CND. Orders to create the task force had been signed July 23, and it had commenced operations on December 10. It was staffed with just twenty-three officers, a mix of computer specialists and conventional weapons operators who had to take a crash course on the subject, all crammed into a trailer behind a government building in the Virginia suburbs, not far from the Pentagon. It was an absurdly modest effort for an outfit that, according to its charter, would be “responsible for coordinating and directing the defense of DoD computer systems and computer networks,” including “the coordination of DoD defensive actions” with other “government agencies and appropriate private organizations.”

Campbell’s first steps would later seem elementary, but no one had ever taken them—few had thought of them—on such a large scale. He set up a 24/7 watch center, established protocols for alerting higher officials and combatant commands of a cyber intrusion, and—the very first step—sent out a communiqué, on his own authority, advising all Defense Department officials to change their computer passwords.

By that point, Moonlight Maze had been going on for several months, and the intruder’s intentions and origins were still puzzling. Most of the intrusions, the ones that were noticed, took place in the same nine-hour span. Some intelligence analysts in the Pentagon and the FBI looked at a time zone map, did the math, and guessed that the attacker must be in Moscow. Others, in the NSA, noted that Tehran was in a nearby time zone and made a case for Iran as the hacker’s home.

Meanwhile, the FBI was probing all leads. The hacker had hopped through the computers of more than a dozen universities—the University of Cincinnati, Harvard, Bryn Mawr, Duke, Pittsburgh, Auburn, among others—and the bureau sent agents to interview students, tech workers, and faculty on each campus. A few intriguing suspects were tagged here and there—an IT aide who answered questions nervously, a student with a Ukrainian boyfriend—but none of the leads panned out. The colleges weren’t the source of the hack; they were merely convenient transit points from one target site to another.

Finally, three breakthroughs occurred independently. One was inspired by Cliff Stoll, the Berkeley astronomer and computer systems administrator who, a decade earlier, had nabbed an East German spy using the university’s portal to steal computer-based military secrets. The breakthrough had come when Stoll created a “honey pot”—a set of phony files, replete with directories, documents, usernames and passwords (all of Stoll’s invention), seemingly related to the American missile-defense program, a subject of particular interest to the hacker. Once lured to the pot, he stayed in place long enough for the authorities, with whom Stoll had been in touch, to trace his movements and track him down. The interagency intelligence group in charge of solving Moonlight Maze—mainly NSA analysts working under CIA auspices—followed Stoll’s example, creating a honey pot, in this case a phony website of an American stealth aircraft program, which they figured might lure their hacker. (Everyone in the cyber field was enamored of The Cuckoo’s Egg, Stoll’s book about his exploits; when Stoll, a long-haired Berkeley hippie, came to give a speech at NSA headquarters not long after his book was published, he received a hero’s welcome.) Just as in Stoll’s scheme, the hacker took the bait.

But with their special access to exotic tools, the NSA analysts took Stoll’s trick a step further. When the hacker left the site, he unwittingly took with him a digital beacon—a few lines of code, attached to the data packet, which sent back a signal that the analysts could follow as it piggybacked through cyberspace. The beacon was an experimental prototype; sometimes it worked, sometimes it didn’t.

But it worked well enough for them to trace the hacker to an IP address of the Russian Academy of Sciences, in Moscow.

Some intelligence analysts, including at NSA, remained skeptical, arguing that the Moscow address was just another hopping point along the way to the hacker’s real home in Iran.

Then came the second breakthrough. While Soup Campbell was setting up Joint Task Force-Computer Network Defense, he hired a naval intelligence officer named Robert Gourley to be its intel chief.

Gourley was a hard-driving analyst with a background in computer science. In the waning days of the Cold War, he’d worked in a unit that fused intelligence and operations to track, and aggressively chase, Russian submarines. Now ensconced in his task force office, using a secure phone line, he called Rich Haver, an intel veteran he’d met a few times, laid out the Moonlight Maze problem, as well as the debate over the intruder’s identity, and asked if he had advice on how to resolve it.