I was saddened to hear the news about Doug Engelbart’s passing. Although most famous for having invented the mouse (I once had the privilege of holding the original – in some ways it was even better than its successors, as its two beveled wheels allowed the mouse to easily be drawn in a straight line), his contributions to the digital world we now take for granted run much deeper than that specific innovation.

I had the privilege of meeting Mr. Engelbart on a few occasions, and in the wake of this news I’m prompted to repost something I wrote a few years ago following one of those encounters, something that contemplated how law and innovation so often seemed to collide in a way deleterious to the latter. As we take this moment to recognize the rich legacy Mr. Engelbart leaves the world, it should remind us never to allow law to deprive the world of other such gifts in the future.

In early December I attended the “Program for the Future,” celebrating the 40th anniversary of a seminal event in technological history: Doug Engelbart’s “mother of all demos.” While today the technologies he showed off in his 1968 presentation must seem ordinary and quaint, back then they were revolutionary and laid the foundation for what we now take for granted.

While perhaps most widely known for being the world debut of the mouse, which he invented, Engelbart’s presentation is most notable for how it advanced collective intelligence. What made the presentation so important weren’t the technologies themselves but the human problems they stood to solve.

So in celebration of Engelbart’s important contribution to the world, a group of futurists and technologists gathered together at The Tech museum in San Jose to contemplate the future innovations yet to come. For me, the event was a bit nostalgic. Before law school, as a technologist in Silicon Valley, I often attended such events. Sometimes they got a bit silly, as there’d be so much “blue skying” and thinking about what could be done that nothing would actually get done. But these kinds of events were still important, because they fostered an environment where bolts of inspiration could be seized upon and fanned into exciting innovations.

I still gravitate towards technology-related events, only today they are invariably legally-related. At these events technology is always considered in the context of regulatory frameworks, and the people doing the thinking are always lawyers and policy makers. Notably, however, at this event I was one of maybe a handful of attendees who were lawyers. And therein lies the disconnect.

At an event on CFAA reform last night I heard Brewster Kahle say what to my ears sounded like, “Law that follows technology tends to be ok. Law that tries to lead it is not.”

His comment came after an earlier tweet I’d made:

I think we need a per se rule that any law governing technology that was enacted more than 10 years ago is inherently invalid.

In posting that tweet I was thinking about two horrible laws in particular, the Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act (ECPA). The former attempts to forbid “hacking,” and the latter ostensibly tried to update 1968’s Wiretap Act to cover information technology. In both instances the laws as drafted generally incorporated the attitude that technology as understood then would be the technology the world would have forever hence, a prediction that has obviously been false. But we are nonetheless left with laws like these on the books, laws that hobble further innovation by enshrining in our legal code what is right and wrong when it comes to our computer code as we understood it in 1986, regardless of whether, if considered afresh and applied to today’s technology, we would still think so.

To my tweet a friend posed a challenge, however: “What about Section 230? (47 U.S.C. § 230).” This is a law from 1996, and he has a point. Section 230 is a piece of legislation that largely immunizes Internet service providers from liability for content posted on their systems by their users – and let’s face it: the very operational essence of the Internet is all about people posting content on other people’s systems. However, unlike the CFAA and ECPA, Section 230 has enabled technology to flourish, mostly by purposefully getting the law itself out of the way of the technology.

The above are just a few examples of laws that have either served technology well – or served to hamper it. There are certainly more, and some laws might ultimately do a bit of both. But the general point is sound: law that is too specific is often too stifling. Innovation needs to be able to happen however it needs to, without undue hindrance caused by legislators who could not even begin to imagine what that innovation might look like so many years before. After all, if they could imagine it then, it would not be so innovative now.

This article on TechDirt summarizes a brouhaha that recently broke out in a corner of the Internet I tend to haunt with other lawyers and cyberlaw professionals, and which has started to percolate into the mainstream. The upshot is that someone is upset that other people have reposted her tweets without her permission and control, and she is convinced this is legally wrongful. So convinced is she, in fact, that she keeps threatening to sue a number of them who have used these tweets to comment on her erroneous legal theory, which only stokes further interest in criticizing her as even more observers come to note that the law is not, in fact, on her side. (TechDirt’s analysis does a decent job explaining why.)

It is easy to be tempted to join in the mocking of this person’s very public tantrums, and to be sure, threatening litigation is not to be taken lightly. Doing so, particularly when cloaked in legal ignorance, is ripe for justifiable criticism.

But while the exhibition of personal arrogance begs the schadenfreude of public censure, the underlying problem it can reveal does not. The reality is that my cyberlaw peers and I are so inured to how this area of law “works” (to the extent that it does) that we tend to forget how foreign it is to most laypeople (and even many other lawyers), for whom its mystical machinations can be really terrifying. This sort of knowledge gap isn’t good for anyone. That’s how we end up with bad law.

The answer naturally cannot be to modify the law to fit its common misperceptions. Sometimes the law is what it is for very good reasons, or at least reasons that cannot simply be discounted, even if those reasons aren’t intuitively obvious to a layperson. We can’t use common misapprehensions as the pillars upon which law should be based. In fact, when we have done so in recent years, often in response to technology (another complex system that can be scary to those who don’t understand it), the end result has been law that so overreacts that it creates more problems while failing to properly solve any.

At the same time, however, rather than mocking those who don’t understand the law, those who do understand it should be endeavoring to explain it. Let’s get everyone on the same page to understand how law works and why, so we can all work together to fix it when it doesn’t. After all, in a democracy law should belong to everyone, not just the rarefied few specially trained to understand it.

Of course, the above sympathetic sentiment is directed at those who would be willing to learn. It’s not a moral failing to not know everything about the law, but it is to not care whether one does or not before proceeding with bumptious legal threats or dangerously inapt policy advocacy. Those who would seek to use the law as a weapon without bothering to learn how it operates are justly entitled to whatever chastisement they get.

These actions challenged the status quo, however, and the status quo fought back. For those who treat knowledge as a currency that can be hoarded, acts to free it are seen as a threat. Unfortunately for Aaron, those people have power, and they wielded it against him. Furthermore, and most saliently for this project, it happened not through private actions, but by leveraging the power of the state to pursue and criminally prosecute him for his efforts.

Thus the parallel purpose of this project is to help advocate for better legal policy, so that we don’t empower the state to punish our innovators for innovating. The disruption they spawn, though perhaps painful for incumbents who liked things as they were, is necessary in order to have a future that benefits everyone.

A word about “hacking.” Hacking is a word often colloquially misused to describe the unauthorized access of a computer system. Among self-described hackers, however, the correct term to describe such behavior is “cracking,” as in “safe cracking.” “Hacking” instead describes a far more neutral, or even beneficial activity: the creative problem solving involved in engineering a solution. (Links point to Eric Raymond’s Jargon File.)

It would greatly assist policy discussion to keep these terms clear, particularly given the interest in criminalizing the unauthorized access of computer systems. Associating the activities of hacking with the more pejorative definition loses nuance and tends to lead to the criminalization of more benign, even objectively good, technology uses.

Thus this site will endeavor to use the correct term as much as possible. But when citing other media it may necessarily parrot whatever word was used, however incorrectly.

Edit 2/20/13: I’ve realized I’m shouting into the wind on this issue. “Hacking” is too colloquially accepted to describe all sorts of innovative applications of technology, good and bad, to ever completely avoid. But I will remind others that the term does indeed describe both good uses and bad uses and should not be presumed to be a pejorative.

Yes, I do have other relevant things to blog about than more TSA antics. This isn’t supposed to be a TSA-only blog. But (a) some recent news is too outrageous/tempting to skip, and (b) there are relevant lessons to be extrapolated.

People in authority are very good at deeming things threats. They are very good at using their police power to exert control over what they deem as threats. They are less good at actually meting out their authority commensurate to the actual problem, and as a consequence it’s very easy for innocent people to have their rights unduly affected.

These observations hold for many contexts, and technology regulation is no exception. Exercises of governmental power can easily be heavy-handed, imprecise, and ill-suited for the problems they pretend to solve. The identification and definition of the underlying problems can also be equally ham-fisted and oftentimes ignorant of actual risk. Which is not to say that all government regulation is illegitimate. On the contrary, these examples illustrate why it’s important to question and discuss exactly when and how governments should be involved in technology use and development. They may well have important roles to play. But only if they are played with care.

The Electronic Frontier Foundation has launched a new project, Global Censorship Chokepoints, whose mission is to track instances of censorship caused by allegations of copyright infringement.

Global Chokepoints is an online resource created to document and monitor global proposals to turn Internet intermediaries into copyright police. These proposals harm Internet users’ rights of privacy, due process and freedom of expression, and endanger the future of the free and open Internet. Our goal is to provide accurate empirical information to digital activists and policy makers, and help coordinate international opposition to attempts to cut off free expression through misguided copyright laws, policies, agreements and court cases.

There is some overlap between that project and this one, especially insofar as the state allows itself to be the enforcement arm of copyright infringement complaints. But there is plenty of work to go around when it comes to protecting free speech around the world. (Digital Age Defense also looks at state imposition of intermediary liability for non-IP related reasons. See, e.g., attempts by the Indian government to demand content filters in order not to cause social unrest.)

LATE one June afternoon in 1903 a hush fell across an expectant audience in the Royal Institution’s celebrated lecture theatre in London. Before the crowd, the physicist John Ambrose Fleming was adjusting arcane apparatus as he prepared to demonstrate an emerging technological wonder: a long-range wireless communication system developed by his boss, the Italian radio pioneer Guglielmo Marconi. The aim was to showcase publicly for the first time that Morse code messages could be sent wirelessly over long distances. Around 300 miles away, Marconi was preparing to send a signal to London from a clifftop station in Poldhu, Cornwall, UK.

Yet before the demonstration could begin, the apparatus in the lecture theatre began to tap out a message. At first, it spelled out just one word repeated over and over. Then it changed into a facetious poem accusing Marconi of “diddling the public”. Their demonstration had been hacked – and this was more than 100 years before the mischief playing out on the internet today. Who was the Royal Institution hacker? How did the cheeky messages get there? And why?

Recently it appeared the fear of a foreign hacker penetrating the online systems of American infrastructure had been realized with news that a Russian hacker had attacked and disabled a pump in an Illinois water system. These fears have now been shown to be misplaced: the supposed “hack” was a login by an engineer traveling in Russia at the time he was requested to perform some work on the system, and the pump broke down on its own, unrelatedly, months later.

Vulnerabilities of public infrastructure are not an idle concern. The Stuxnet virus, which specifically targeted nuclear facilities in Iran, illustrates that infrastructure can be a compelling target and quite feasible to affect if those systems are not properly protected.

But the water system “hack” shows that proper protection of infrastructure — and, accordingly, any law intended to advance this — needs to be done carefully, with clear understanding of the actual threat and competent engineering not prone to panicked histrionics. From the BBC article about it:

“Nobody checked with anybody. Lots of people assumed things they shouldn’t have assumed, and now it’s somebody else’s fault and we’re into a finger-pointing marathon,” wrote Nancy Bartels.

“If the public can be distracted from the issue of how DHS and ISTIC fumbled notification so badly, then nobody will be to blame, which is what’s really important after all.

“Meanwhile, one of these days, there’s going to be a really serious infrastructure attack, and nobody’s going to pay attention because everyone is going to assume that it’s another DHS screw-up.”