If you think competition between companies drives innovation, what might happen when they also have to go up against autonomous pieces of software running distributed across millions of computers, through the Internet and around the world? It sounds like something out of a Singularity-obsessed science fiction novel, but if you know where to look, you can already see the bones of this idea taking shape today. The results may still look pretty strange, but there are some fascinating things happening in the “Distributed Autonomous Corporation” (DAC) space.

A DAC is a (so far) hypothetical construction that could perform at least some of the same functions as a corporation, non-profit organization, or other grouping of humans without the centralized legal or physical trappings of those organizations. This could be accomplished by creating a blockchain-type system (similar to that running Bitcoin) in which the code that makes up the DAC runs. DACs are simply algorithms tied to payment accounts that pay for their own computing cycles used, are paid for the services they provide, and can modify their own code.

DACs as an idea have been tossed around the Bitcoin community for a few years, and were somewhat codified in a series of blog posts by Stan Larimer beginning with “Bitcoin and the Three Laws of Robotics”. Larimer posits that the Bitcoin system itself is a DAC, suggesting that much of the network’s value comes from “performing a trustworthy confidential fiduciary service,” much like a Swiss bank would do. Unlike a Swiss bank, however, Bitcoin is open source and thus anyone can look at the code and be relatively assured that the network itself will act as designed and is worthy of trust. Of course, as we’ve seen time and again since the launch of the Bitcoin software, the same cannot be said of the human beings that may provide any related services.

When we talk about competition in the cloud services marketplace, we’re usually thinking of Google’s services, Amazon’s AWS, Dropbox’s storage, or VMware’s large-scale virtualization. But those types of cloud offerings are lately coming up against some unique competition: personal cloud offerings that are open source and meant to be run from inexpensive computers within the home, such as the credit-card-sized $40 Raspberry Pi. For consumer-oriented online services, such as webmail, document storage, and calendaring, these personal cloud projects aim to give users a privacy-protective alternative if they want one. How well do they work? I spent my free time over the past week setting one up for myself, and it turns out the biggest challenge actually comes from the broadband providers.

I started out by buying a Raspberry Pi computer and a 16GB SD card from Amazon and installing ArkOS on the SD card. ArkOS is an open source Linux server management console that, once installed, lets the user install web, mail, file storage, and other services with the click of a mouse. At least, that’s the idea. ArkOS is still very much in alpha, and there isn’t yet a plugin to run an email server (though plans for one are very much on the to-do list). Fortunately for me, I have a little experience in Linux administration, and I managed to get email up and running. ArkOS does have a personal file storage and sync plugin, called ownCloud, which I also set up.

The most immediate problem facing a personal cloud user, however, isn’t the alpha nature of the software or a lack of familiarity with the arcane inner workings of Linux; it’s a domain name. Or, more specifically, the IP address connected to the domain name. The domain name system is one of the magical underpinnings of the Internet: it turns the name you know, like facebook.com, into the series of numbers that routers and switches use to let you communicate with a server far away.
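That name-to-number translation is visible from any scripting language. A minimal illustration using Python’s standard library (looking up “localhost” here, so it works even without a network connection):

```python
import socket

# DNS in one line: ask the resolver to turn a human-readable name into
# the numeric IP address the network actually routes on. "localhost" is
# resolved locally, so this works without an Internet connection; swap in
# any real domain name to query your configured DNS servers.
address = socket.gethostbyname("localhost")
print(address)
```

The same call with a real domain name is exactly what your browser does, behind the scenes, every time you type a URL.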

It’s those numbers, called IP addresses, that are the problem. Each ISP has a certain number of them to hand out to its users, and without one, you’re not on the Internet. Oh, and we’re running out of them as more and more people bring more and more devices online (a point that I’ll come back to in just a second). Because getting every user to correctly configure their computer with an assigned IP address would be a hassle, ISPs generally use the Dynamic Host Configuration Protocol (DHCP) to assign IP addresses to computers automatically.

All well and good, except that with DHCP you can’t guarantee that you’ll get the same IP address every time you start up (in practice, with most ISPs, you usually do, but you can’t be sure). Without a static IP address, it is hard to point a domain name at your brand new server, since you would have to notice each time the address changed and then update the DNS records. While the use of DHCP is a matter of convenience for most ISP customers, some ISPs do give users the option of a static IP. My Internet access is through Verizon FiOS, which lets its business customers purchase a static IP for a monthly fee; in the end it would have cost me around $50 extra per month. Fortunately there are technological workarounds, including running a program periodically that checks whether your address has changed and automatically updates your DNS records.
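The core of that workaround fits in a few lines. This is a minimal sketch, not any particular dynamic DNS client: the lookup URL and update endpoint below are hypothetical placeholders you would replace with your actual DNS provider’s API, and the interesting part is just the change-detection logic.

```python
import urllib.request

# Hypothetical endpoints -- substitute your DNS provider's real API.
LOOKUP_URL = "https://ip.example.com"          # returns your public IP as text
UPDATE_URL = "https://dns.example.com/update"  # would receive the new A record

def current_public_ip(opener=urllib.request.urlopen):
    """Ask an external service what IP address we appear to have."""
    with opener(LOOKUP_URL) as resp:
        return resp.read().decode().strip()

def run_once(observed_ip, state, update=lambda ip: None):
    """Update DNS only when the observed address differs from the last one.

    `state` is a small persistent dict (e.g. loaded from a JSON file);
    `update` pushes the new address to the provider. Returns True when an
    update was actually performed.
    """
    if state.get("ip") != observed_ip:
        update(observed_ip)        # push the new A record to the provider
        state["ip"] = observed_ip  # remember it so we don't update again
        return True
    return False
```

Run from cron every few minutes, this keeps the domain pointed at your home server even when your ISP reshuffles addresses.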

Today President Obama gave a speech and issued a Presidential Policy Directive (PPD) surrounding the reforms he is making to the National Security Agency and international intelligence gathering in general. In the PPD, the President recognized that collection of signals intelligence poses risks to “our commercial, economic, and financial interests, including a potential loss of international trust in U.S. firms.” While it was gratifying to see the President grappling with the issues that we’ve been exploring for months, the actual policy changes proposed were high level and the devil, as they say, will be in the details.

There must be at least some hope, however. We have, today, policies regarding when the U.S. government will collect information on foreigners and how it will treat that information when it is collected. People everywhere can begin making decisions about which online services to trust with our data based on the features of the service and their respect for our data — rather than the geographical location of the service itself.

For many months now, the focus of commerce on the Internet has been a connection to the United States. If the U.S. government follows through on some of the privacy protections that everyone deserves, it will be a start that can bring us back to the ideal world where companies from everywhere compete on their products rather than the surveillance performed by governments.

A warning for everyone: Advertising-supported webmail all over the world will be shutting down in the not too distant future. Since pretty much every single person I know has a webmail account of some kind, I feel like this will be relevant news for pretty much all of our readers.

Ok, so it’s not actually 100% certain that ad-supported webmail is shutting down, but that certainly seems to be what some consumer groups and the courts are aiming for, given a recent court opinion. A federal court in California ruled that a computer scanning the text of an email could constitute a wiretap under the law. If scanning text is a wiretap, say goodbye to the advertising-supported model of the web (which has led to such unprecedented innovation), or even to spam filters, for that matter.

Last Thursday, Judge Lucy Koh denied a motion to dismiss filed by Google in a case alleging that because Gmail scans emails in order to serve advertisements, Google is in violation of the Wiretap Act. That holding ushered in a follow-on suit against Yahoo! yesterday, Kevranian v. Yahoo! Inc., making similar allegations. Suits against other online services may follow.

The Wiretap Act, by the way, is a close cousin of the Electronic Communications Privacy Act (ECPA), which the Digital Due Process Coalition has been working on a legislative fix for. The Wiretap Act generally prohibits intercepting “wire, oral, or electronic communications” by anyone, although it is often invoked to prohibit the government from listening in without a warrant.

But the Wiretap Act isn’t just about phone lines anymore, as it was when it was first written in 1968 (though it has been amended many times since then). It applies to the online world too, where data packets moving over the Internet are routinely inspected for completely legitimate reasons such as routing, combating fraud, spam, and cyber attacks, and serving advertising. Nobody thinks that Nigerian princes looking to move large sums of money should be able to complain about the “wiretapping” that shunts their pleas into a spam box. That isn’t to say that the Wiretap Act doesn’t provide some important protections, but we have to think carefully about how its provisions apply in the 21st century.

An open source project now gaining steam aims to fix some of the privacy and security problems with our modern email system. Bitmessage is a peer-to-peer messaging system loosely based on the cryptographic theories behind Bitcoin. It supposedly provides for entirely encrypted communications and even goes so far as to mask the sender and addressee of a message from third parties. (I say supposedly because there has not yet been a thorough audit of the system’s security.) Full-scale peer-to-peer messaging with encryption and masked metadata is obviously intriguing to lots of users in the wake of the NSA revelations in the US, but Bitmessage has one major problem: your friends (probably) aren’t using it yet.
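One of the Bitcoin-inspired ideas Bitmessage borrows is requiring a small proof-of-work on every message to deter spam: the sender must grind through nonces until a hash falls below a target. The toy sketch below illustrates the concept only; the parameters are simplified for demonstration, and the real protocol derives its target from message length and time-to-live rather than the fixed value used here.

```python
import hashlib
import struct

def proof_of_work(payload: bytes, target: int) -> int:
    """Find a nonce whose double-SHA-512 trial hash falls below `target`.

    The sender pays this (small) computational cost once per message;
    a lower target means more work. Illustrative only, not the exact
    Bitmessage wire algorithm.
    """
    digest = hashlib.sha512(payload).digest()
    nonce = 0
    while True:
        trial = hashlib.sha512(
            hashlib.sha512(struct.pack(">Q", nonce) + digest).digest()
        ).digest()
        if int.from_bytes(trial[:8], "big") < target:
            return nonce
        nonce += 1

def verify(payload: bytes, nonce: int, target: int) -> bool:
    """Receivers check the work with a single hash, which is cheap."""
    digest = hashlib.sha512(payload).digest()
    trial = hashlib.sha512(
        hashlib.sha512(struct.pack(">Q", nonce) + digest).digest()
    ).digest()
    return int.from_bytes(trial[:8], "big") < target
```

The asymmetry is the point: producing the nonce takes many hash attempts, while checking it takes one, so flooding the network with messages gets expensive for the spammer but stays cheap for everyone verifying.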

We talk a lot about allowing disruptive innovation here on DisCo and look a lot at the regulatory structures that can hold back that kind of innovation. One thing we haven’t touched on much, however, is the issue of competing against the network effects that entrench an older service by themselves. In this case email is the entrenched competitor.

We learned today that the latest victims of email hacking here in the US were not average consumers suddenly staring down the barrel of identity theft (as is so often the case), but instead a whole host of Congressional staff who have had their usernames and passwords stolen and posted to a public website. While this will no doubt be a hassle for those staffers, and I don’t envy the systems administrators down on the Hill right now, we shouldn’t let this teachable moment pass us by.

What is perhaps most interesting about the hacked passwords is that they exemplify, in many cases, everything you should not do when constructing a strong password. Many are just dictionary words with numbers tacked onto the end, the names of the staffers’ bosses, or their favorite sports teams. While industry and security experts have long emphasized to users the elements of a strong password (sufficient length, no common words, a mix of numbers and punctuation), obviously many people still use easy-to-guess passwords.
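The patterns described above are mechanical enough that a few lines of code can flag them, which is exactly why attackers’ cracking tools try them first. This is a rough illustrative sketch (the word list is made up for the example, not drawn from the leaked data):

```python
import re

# Purely illustrative stand-ins for a real dictionary/common-password list.
COMMON_WORDS = {"password", "letmein", "senator", "congress", "baseball"}

def looks_weak(password: str) -> bool:
    """Flag the classic weak patterns: too short, or a dictionary word
    with a few digits bolted onto the end (e.g. "password123")."""
    if len(password) < 8:
        return True
    stripped = re.sub(r"\d+$", "", password.lower())  # drop trailing digits
    return stripped in COMMON_WORDS
```

Anything this five-line check can catch, a password-cracking wordlist caught years ago, which is the real lesson of the leak.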

Passwords alone, however, are not the end of the conversation in this day and age. There is little reason today why any information service can’t offer additional protections in its authentication processes. One favorite is two-factor authentication, which is becoming more and more widely available online, from Google to Dropbox to Twitter. If that sounds familiar, we’ve talked about it a couple of times here in the past.
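The most common flavor of two-factor authentication, the six-digit codes from an authenticator app, is an open standard (TOTP, RFC 6238, built on HOTP, RFC 4226) and fits in a handful of lines of standard-library Python. A minimal sketch, assuming the shared secret has already been exchanged (real services hand you one encoded in base32):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC the counter with the shared secret, then
    dynamically truncate the result down to a short numeric code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238: use the current 30-second time window as the counter,
    so server and phone independently compute the same code."""
    t = int(time.time() if for_time is None else for_time)
    return hotp(secret, t // step)
```

Because both sides derive the code from the secret plus the clock, a stolen password alone is no longer enough; the attacker also needs the device holding the secret.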

We should all be thinking, however, of what comes next, because passwords are inherently a technology of yesterday that we should be working to move away from. Biometrics and other advanced technologies we haven’t even heard of yet are the future, and companies should be competing to develop them and roll them out to improve everyone’s security.

This area is one in which the federal government itself – including consumer-oriented agencies like the Federal Trade Commission – can and should play a strong role. Poor account security can cause massive consumer harm. That is why identity theft and security have been the number one complaint to the FTC for the past 5 consecutive years, according to the Commission. When the government itself is the victim, there can be no greater case for government-originated guidance, workshops, and institutional education on improving end-user security. No doubt, the public would benefit from agencies like the FTC and others bringing to bear their own experience on mitigating this persistent problem.

Over the weekend, the New York Times’ Bits blog featured a post by Quentin Hardy, summarizing a talk given by Kate Crawford, a Microsoft Research employee, at a Berkeley I School conference. The (somewhat provocative) title of the post is “Why Big Data is Not Truth” and it makes the rather uncontroversial conclusion, by way of a quotation from Ms. Crawford, that “We need to think about how we will navigate these [big data] systems. Not just individually, but as a society.” It gets there, however, by arguing against a number of strawmen surrounding big data that I’ve heard elsewhere and which I wanted to highlight.

First, Ms. Crawford sets up the “myth” that big data is objective, which she then dismantles, in part by pointing out that the users of Twitter skew toward the young, urban, and affluent. This weekend’s protesters in Gezi Park may disagree with that characterization (fun fact: around 90% of the tweets using the various hashtags associated with the protests are coming from local Turkish users, and around 88% are in Turkish). Still, the broader point Crawford seems to be making (that the data in our big data systems is inherently biased in various ways and we must accommodate that truth) is a well understood limitation of all statistical systems, and at the same time no reason to stop engaging in statistics. What the Times post didn’t mention, however, is the way in which big data compensates for sampling error in statistical modeling through larger and larger data sets. No data set is ever perfect, but the more data you have, the closer to “truth” you can get.
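That last point is basic statistics: the error of an estimate shrinks roughly as one over the square root of the sample size. A quick simulation makes it concrete (the numbers here are arbitrary; and note this shrinks random sampling error only, it does nothing to correct a systematic bias in how the data was collected, which is Crawford’s point):

```python
import random
import statistics

random.seed(42)
TRUE_MEAN = 10.0  # the "truth" our samples are trying to estimate

def sample_mean(n):
    """Estimate the mean from n noisy observations."""
    return statistics.fmean(random.gauss(TRUE_MEAN, 2.0) for _ in range(n))

# How much do estimates scatter around the truth at two sample sizes?
spread_small = statistics.stdev(sample_mean(10) for _ in range(200))
spread_large = statistics.stdev(sample_mean(1000) for _ in range(200))
# With 100x the data, the scatter should be roughly 10x smaller.
print(spread_small, spread_large)
```

A hundred times more data buys you estimates about ten times closer to the truth, which is the statistical engine behind the big data pitch.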

Secondly, the Times piece has Crawford questioning the myth that big data doesn’t discriminate. She points out that even anonymized data can have information such as gender, race, and sexual orientation extracted from it by a determined analyst. It seems to me that this supporting fact undermines her argument. Indeed, big data doesn’t discriminate: “big data” is just a term for the force multipliers that come into play when you cram a lot of information into one system. People discriminate. How any one entity uses big data should be examined for discrimination; casting the blame on big data itself is meaningless.

The Bitcoin community has been up in arms again the past few weeks, but I’m not convinced that it’s for a good reason. The agency of the U.S. Treasury Department charged with organizing the fight against financial crime and money laundering, the Financial Crimes Enforcement Network (FinCEN), released a “guidance” on March 18th targeting virtual currencies. Clearly this guidance is aimed at Bitcoin. While there have been outcries on various comment boards from people enamored of Bitcoin and inherently suspicious of anything with the word “government” attached, this guidance may well be the best thing to happen to Bitcoin in a long time.

The mark of any early disruptive technology has to be the moment when the government perks up its ears and begins to take action to bring the technology under its supervision. You can learn a lot about the future of that technology by how the government goes about this. A technology that sees the government seriously limit or ban its use will live a different life than one where the government takes a light touch. Despite the panic of the more libertarian wing of the Bitcoin movement, FinCEN’s guidance seems firmly in the latter category. Indeed, as Timothy Lee mentions in a piece on Forbes, government regulation may be a good sign for Bitcoin.

The new guidance spells out who in the Bitcoin community will be bound by federal laws that require “money transmitters” to collect certain information about their customers, for the purposes of fighting money laundering. (By the way, if the topic of money transmitters interests you, Ali wrote about Square’s regulatory issues a few weeks ago.) The good news is that FinCEN stated the obvious and said that in virtual currencies, just as in real currencies, average users who buy things with the currency are not subject to FinCEN’s regulations. Just as obviously, the guidance states that individuals or companies that exchange bitcoins for non-virtual currency are money transmitters in the same way as banks that deal with converting non-virtual currencies are.

More to the point, the guidance places Bitcoin on the same footing as non-virtual currencies. If I sell my friend 5 BTC, FinCEN isn’t going to get involved any more than if I sold him the leftover €5 I had from my trip last week. These sorts of low-level private exchanges, while they may technically fall under the rules, aren’t what concerns FinCEN.

As a quick aside, the new guidelines are not specific to Bitcoin and claim to sweep in all virtual currencies. It will be interesting to see how, if at all, these regulations apply to the virtual currencies that are used in video games. Particularly in some of the larger massively multiplayer games, the in-game currency is broadly used to buy goods (if only virtual ones), and at least in the case of the currency ISK from CCP’s game Eve Online, the company tacitly allows transfers into and out of the currency from real world money through a somewhat convoluted system.

The big question left by the FinCEN guidance concerns the “miners” who do the work that powers the Bitcoin network and are rewarded with newly minted bitcoins. As the guidance stands, miners are considered money transmitters only insofar as they sell the bitcoins they mine; if they keep those bitcoins for their own use, they are not. I’m less comfortable with this part of the regulation, but again, a lot will depend on how it is implemented in the real world.

Just a few days ago Reuters ran an article on Bitcoin, entitled “Bitcoins Need More than Fear and Love to Thrive” in which the author argues that jolts to the stability of Bitcoin are coming about because fear and love are the forces driving it. The author ends the article by saying that “Bitcoin’s success depends on it becoming more of a bore.” I can’t think of anything more boring than FinCEN regulation.

The Verge pointed out in a blog post today a new service from one of my favorite websites, Docracy (a website that takes a GitHub approach to drafting contracts? How could that not be a straight shot to my heart?). I hadn’t heard of the service yet, and it fits neatly into a series of posts [1, 2, 3] that Rob, Matt, and I wrote around a month ago. As The Verge details, the service scrapes a large number of terms of service every day, stores them, and highlights changes.

The Verge sees the opportunity here for people to catch companies who silently change the terms that bind their users, and that’s true. This service, though, can also serve to help the competition in terms of service that we were discussing in our previous blog posts. Companies that are proud of their terms should start linking to their page at Docracy to show that they will be transparent about how and when their ToS change.

Any company that wanted to take it to the next level could use Docracy as a collaborative space to design changes to its terms alongside its users, allowing them to propose changes and discuss their impact. The final terms wouldn’t necessarily have to be the product of that collaboration, but companies that even went so far as to have the discussion would earn plenty of goodwill with discriminating users.

About DisCo

The Disruptive Competition Project (DisCo) is a project to promote disruptive innovation and competition to policymakers. Plenty of other groups in DC defend incumbent industries and protect the status quo; DisCo brings together experts to explain how disruptive change in the modern economy promotes growth and advances our society.
