from the in-the-ether dept

Hype around blockchain has risen to an all-time high. A technology once perceived to be the realm of crypto-anarchists and drug dealers has gained increasing popular recognition for its revolutionary potential, drawing billions in venture-capital investment by the world's leading financial institutions and technology companies.

Regulators, rather than treating blockchain platforms (such as Bitcoin or Ethereum) and other "distributed ledgers" merely as tools of illicit dark markets, are beginning to look at frameworks to regulate and incorporate this important technology into traditional commerce.

Beyond potentially making a lot of people poorer – who probably should have known better than to invest in an experimental "robotic corporation" – last week's theft from The DAO has created a massive political rift within the blockchain community, and threatens to undermine trust in a technology described as the "trust machine". In addition, the event raises serious questions about the cybersecurity risks of distributed applications, the (lack of) enforcement of existing securities laws, and the potential for increased scrutiny by regulators looking to protect unwary investors.

Prior to last week, The DAO was widely considered a phenomenal success. It enjoyed the largest crowdfunding in history, raising the equivalent of more than $150 million, or about a tenth of the value of the Ethereum blockchain platform on which it was built. While you could conceivably build a DAO for anything, since it was a piece of software, The DAO was created for the purpose of developing the Ethereum platform and other decentralized software projects. According to its "manifesto" on daohub.org:

The goal of The DAO is to diligently use the ETH it controls to support projects that will:

• Provide a return on investment or benefit to the DAO and its members.
• Benefit the decentralized ecosystem as a whole.

In short, it was developed as a venture-capital fund and, importantly, its investors expected returns.

@steve_somers Personally I think it will be spent more smartly than if it was just as pure ETH. Now falls under governance of the many.

What is a DAO, anyway? And how does it work? Christoph Jentzsch — founder of the German company Slock.it, which helped create The DAO — explained the concept in his white paper as "organizations in which (1) participants maintain direct real-time control of contributed funds and (2) governance rules are formalized, automated and enforced using software."

As American Banker's Tanaya Macheel writes, DAOs and the smart contracts on which they are built could have a lot to offer traditional financial institutions:

In theory, distributed autonomous organizations (of which the DAO is one of the first examples) are a hardcoded solution to the age-old principal-agent problem. Simply put, backers shouldn't have to worry about a third party mismanaging their funds when that third party is a computer program that no one party controls.

At a time when the financial services industry is trying to automate old processes to cut costs, errors and friction, DAOs represent perhaps the most extreme attempt to take people out of the picture.

DAOs can be deployed on the distributed global computer of the Ethereum platform or other suitable blockchains, including private ones. One mechanism to fund them is through a "crowdsale" of DAO tokens that act like shares of stock, which is what The DAO did. Token-holders can vote on new proposals (weighted by the number of tokens a user controls) to change the structure of the DAO and alter its code. Tokens also can be traded and have an exchange-value. As The DAO's "official website" daohub.org describes it:

The DAO is borne from immutable, unstoppable, and irrefutable computer code, operated entirely by its members.

How exactly does an immutable decentralized computer get "hacked"? According to DAO developer Felix Albert, it wasn't. Unlike the failed bitcoin exchange Mt. Gox – where nearly $500 million of bitcoins were lost due to a combination of breach and fraud – the theft exploited a previously undiscovered (or, more accurately, unfixed) bug in The DAO's code.

A quirk of robotic corporations is that they take their bylaws literally. Like Asimov's robots, DAOs are built with rules to govern their behavior that cannot easily be revised or overwritten once they are set in motion. Inevitably, these sometimes conflict with our preconceived ideas of how they ought to operate.

The attack [on The DAO] is a recursive calling vulnerability, where an attacker called the "split" function, and then calls the split function recursively inside of the split, thereby collecting ether many times over in a single transaction.

It wasn't really a hack at all. It was human error. Making matters worse, The DAO's promoters (in this case, Slock.it Chief Operating Officer Stephan Tual) had said this kind of bug wouldn't be an issue just a few days before the theft (whoops).
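The mechanics of that recursive-call (reentrancy) bug are easier to see in miniature. Below is a rough Python sketch – a toy model, not The DAO's actual Solidity code – showing how paying out before updating a balance lets a re-entrant call collect many times over:

```python
class NaiveVault:
    """Toy 'contract' that pays out BEFORE zeroing the balance,
    mirroring the ordering bug behind The DAO's recursive split."""

    def __init__(self, deposits):
        self.balances = dict(deposits)
        self.paid_out = 0  # total funds the vault has sent

    def withdraw(self, account, callback=None):
        amount = self.balances.get(account, 0)
        if amount > 0:
            self.paid_out += amount     # 1. funds leave first...
            if callback:
                callback(self)          # 2. ...control passes back to the caller...
            self.balances[account] = 0  # 3. ...and the balance is zeroed too late


def drain(vault, account, times):
    """Re-enter withdraw() before step 3 ever runs, collecting repeatedly."""
    state = {"calls": 0}

    def reenter(v):
        if state["calls"] < times - 1:
            state["calls"] += 1
            v.withdraw(account, reenter)

    vault.withdraw(account, reenter)


vault = NaiveVault({"attacker": 100})
drain(vault, "attacker", 5)
print(vault.paid_out)  # 500: five full payouts from a single 100-unit balance
```

Each nested call still sees the original balance, because the zeroing in step 3 hasn't happened yet – exactly the class of bug the "split" exploit leaned on.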

Lots of potential vulnerabilities for The DAO had been discussed, and it was even suggested to place a moratorium on proposals. Meanwhile, its promoters confidently asserted everything was fine:

We are assuming that the base contract is secure. This assumption is justified due to the community verification and a private security audit.

Additionally, Slock.it's blog claimed that the generic DAO framework code had been audited by a leading security firm:

We're pleased to announce that one of the world's leading security audit companies, Deja Vu Security, has performed a security review of the generic DAO framework smart contracts.

On close inspection, the only report they linked in their blog was three pages long. It's unclear whether a rigorous formal audit had ever been conducted. After the attack, people started asking for the audit report and wondering why Slock.it hadn't shared it. The security firm, Deja Vu, even responded on Reddit.

Hi Everyone, Adam Cecchetti CEO of Deja vu Security here. For legal and professional reasons Deja vu Security does not discuss details of any customer interaction, engagement, or audit without written consent from said customer. Please contact representatives from Slock.it for additional details.

Whoever was in charge of auditing the code screwed up big-time. As former Ethereum release coordinator Vinay Gupta explained on YouTube, The DAO was an experiment that was never built to handle this much risk:

We all knew as we watched this happening that this was an emperor's clothes scenario ... there was no way that that smart contract had undergone an appropriate amount of scrutiny for something that was a container for $160 million.

Sure, everyone involved should have stopped it from getting carried away. But what are the actual consequences when a decentralized extralegal robot corporation doesn't do what it's expected to? Is anyone really "in charge" of making sure it works? Is anyone on the hook if the whole thing goes down the tubes because of its creators' (or proposal authors') lack of due diligence?

For one thing, as Coin Center's Peter Van Valkenburgh explains, DAOs are likely to run afoul of existing securities law – potentially implicating their developers, promoters and investors:

The Securities Act intentionally defines "promoter" broadly: "any person that, alone or together with others, directly or indirectly, takes initiative in founding the business or enterprise of the issuer." Given the breadth of this language, developers should carefully weigh the risks of being visibly associated with the release and sale of [DAO] tokens.

Individuals deemed to be promoters of a [DAO] may be found to be in violation of Section 5(a) and 5(c) of the Securities Act. Under these sections it is unlawful to directly or indirectly offer to sell or buy unregistered securities, or to "carry" for sale or delivery after the sale an unregistered security or a prospectus detailing that security. Even if a [DAO] is deemed to be an unregistered security, it remains very unclear how promoting that [DAO] would or would not equate to these unlawful activities, and who—if anyone—would be found to have violated the law. Nonetheless, broad interpretation of these laws may potentially implicate any participant or visibly affiliated developer or advocate.

So DAO evangelists could soon be in hot water, regardless of any disclaimers they put up.

To the Securities and Exchange Commission's credit, the agency has thus far been relatively open to innovations like crowdfunding, as well as to the potential of blockchain technology. As SEC Chairwoman Mary Jo White recently said in an address at Stanford University:

Blockchain technology has the potential to modernize, simplify, or even potentially replace, current trading and clearing and settlement operations ... We are closely monitoring the proliferation of this technology and already addressing it in certain contexts ... One key regulatory issue is whether blockchain applications require registration under existing Commission regulatory regimes, such as those for transfer agents or clearing agencies. We are actively exploring these issues and their implications.

Beyond financial regulation, the broader legal treatment of DAOs is a murky subject. With applications running on Ethereum, it's not always clear what the point of enforcement is. You can't exactly sue a DAO in court and then seize its assets. And, while The DAO's creators were in the public eye, that doesn't necessarily have to be the case; it could be deployed anonymously.

Maybe the next DAOs should be anonymous. Avoids the blame game and force us to use tools to build trust despite not trusting the creators.

Even if DAOs are created without a formal legal status, governments may impose legal status on them. As business lawyer Stephen Palley writes at CoinDesk:

If you don't formalize a legal structure for a human-created entity, courts will impose one for you. As most lawyers will tell you: a general partnership, unless properly formalized or a deliberately created structure, is a Very Bad Thing ... [T]he members of a general partnership can end up jointly and severally liable on a personal basis for partnership obligations.

Even if the SEC or other government entity decides to crack down on DAOs, it might be easier said than done. Because they operate on pseudonymous distributed computers, those parties may not be easy to track down (notably, we still don't know who Satoshi Nakamoto is). Even if you did, they might not have any control over it or know what it was doing. Its code also may have been radically altered from its original programming/intent.

But as far as The DAO is concerned, are we in for a slew of lawsuits or calls for SEC action by disgruntled investors? Not so fast. Investors in The DAO may yet be able to recover their losses.

Various prominent stakeholders in the Ethereum community, from Ethereum inventor Vitalik Buterin to Slock.it's Christoph Jentzsch, have suggested that the only sensible solution is to create a "fork" of the Ethereum network that could freeze the attacker's stolen funds and shut down The DAO, with the option of a "hard fork" to fully reverse the theft and return investors' funds. Some have criticized this approach as a "bailout" or "asserting centralized control." But it's worth noting that it would require a majority of miners to adopt it voluntarily; whether they will remains to be seen.

Either way, Ethereum's credibility may be adversely affected. On the one hand, people need to trust that smart-contracts do what they are supposed to — particularly where millions of dollars are on the line. On the other hand, the credibility of the platform is also tied to its immutability. If developers and miners collude to reverse transactions they don't like, that sets a bad precedent.

Additionally, if the community decides The DAO's investors need to take a haircut, it could open up a Pandora's box of legal troubles for its developers and promoters (and maybe even miners and investors), potentially stifling advancement of this important technology.

But wait a minute. Why didn't the attacker see this coming? Surely if he was sophisticated enough to find a "recursive call" bug, he would have known that split funds would be locked away for 27 days – giving the community time to get wise to his activities and find a solution like the fork.

As previously mentioned, The DAO theft also crashed ETH prices. Savvy readers will note that a DAO vulnerability doesn't mean the Ethereum platform itself was compromised (any more than a nasty bug in Photoshop means that everyone with Windows 10 is at risk).

Was it possible this whole event was a ruse to pull off a "big short", as one user suggests on Reddit? As of now, there's no proof of that, but it's an interesting theory.

But was this even a theft at all? As Slock.it's representative said, "code is law!" If the code doesn't do what you think it does — that's your fault. At least, that's the theory behind an anonymous letter uploaded to Pastebin and purportedly authored by The DAO's attacker:

I have carefully examined the code of The DAO and decided to participate after finding the feature where splitting is rewarded with additional ether. I have made use of this feature and have rightfully claimed 3,641,694 ether, and would like to thank the DAO for this reward. It is my understanding that the DAO code contains this feature to promote decentralization and encourage the creation of "child DAOs".

I am disappointed by those who are characterizing the use of this intentional feature as "theft". I am making use of this explicitly coded feature as per the smart contract terms and my law firm has advised me that my action is fully compliant with United States criminal and tort law.

Adding that:

I reserve all rights to take any and all legal action against any accomplices of illegitimate theft, freezing, or seizure of my legitimate ether, and am actively working with my law firm. Those accomplices will be receiving Cease and Desist notices in the mail shortly.

If the fork moves forward to freeze or seize the attacker's digital assets, could that open up the broader Ethereum community and its miners to legal liability? We'll have to wait and see what happens.

Regardless of how The DAO "theft" is resolved, regulators shouldn't rush to impose stricter regulations on Ethereum (which is just a platform), on DAOs in general, or even on The DAO specifically, should it be reincarnated with better security practices.

While The DAO attack raises serious questions about the viability of creating this "DAO 2.0", that doesn't mean we should stop it from happening. Whether or not you believe all the hype about Ethereum being as important as the invention of the internet, it's an exciting technology that's worth giving the opportunity to grow.

Unlike Bitcoin, which has been around for eight years, Ethereum is only a year old. It officially launched in July 2015, but is already the second-largest cryptocurrency by market capitalization. It's vastly more complex than Bitcoin and still in its infancy; it will have inevitable growing pains on the way to maturity.

The internet wasn't built in a day, and neither will smart-contract technology come to fruition without a permissive regulatory environment in which to grow – much as the Clinton administration's Framework for Global Electronic Commerce provided for the early internet.

Certainly, vetting DAO code (particularly new proposals) is a big problem. More fundamentally, smart-contract security is an emerging area to which people are rightly starting to pivot, following the lessons of The DAO attack. As Ethereum developer Peter Borah writes:

In his response to the bug, Slock's COO expressed shock, referring to it as "unthinkable", and pointing to the "thousands of pairs of eyes" that somehow missed this. It's certainly hard to blame anyone for being shaken by the sudden disappearance of tens of millions of dollars. However, this natural reaction hides the simple truth that anyone who has dabbled in programming knows: bugs in programs are far from unthinkable — they are inevitable.

Making code open-source is not enough. We need mechanisms to create smarter (i.e., fault-tolerant) smart contracts. This could mean more rigorous independent testing, strategies to implement better development practices or, at least, more time to develop through trial-and-error in a lower-risk context. Stakeholder interests also must be aligned to make sure appropriate vetting happens, particularly where voting on code alterations is involved and particularly if we want to develop more complex autonomous programs.
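One concrete development practice worth naming here is the "checks-effects-interactions" ordering: update internal state before handing control to anyone else. A rough Python sketch of what that fix looks like (again a toy model, not real Solidity):

```python
class SaferVault:
    """Toy 'contract' that zeroes the balance BEFORE paying out
    ('checks-effects-interactions'), so a re-entrant call finds nothing."""

    def __init__(self, deposits):
        self.balances = dict(deposits)
        self.paid_out = 0

    def withdraw(self, account, callback=None):
        amount = self.balances.get(account, 0)
        if amount > 0:                  # check
            self.balances[account] = 0  # effect: state is updated first
            self.paid_out += amount
            if callback:
                callback(self)          # interaction: any re-entry sees a zero balance


vault = SaferVault({"attacker": 100})
# The attacker tries to re-enter withdraw() mid-payout, DAO-style.
vault.withdraw("attacker", lambda v: v.withdraw("attacker"))
print(vault.paid_out)  # 100: the recursive call collects nothing extra
```

The same hostile caller who could drain a naively ordered contract gets exactly one payout here, because the state change precedes the external call.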

The DAO is an instance of people getting carried away with an exciting new technology, while not effectively managing the new cybersecurity risks that come with it. But just because a group of people screwed up The DAO, it doesn't mean all DAOs are DOA.

While there's an overabundance of utopian thinking in this space, blockchain-based experiments in decentralized governance and peer-to-peer commerce offer immense benefits and truly revolutionary potential. Regulators should continue to take a wait-and-see approach and not use this incident as an invitation to shut them down or impose harsh new regulations.

from the no-leg-to-stand-on dept

I'll admit that very few things in this existence we all share give me as much pleasure as poking at the prudish censorship employed by Facebook. Its overly broad puritanical guidelines, theoretically designed to save our sensitive eyes from anything as horrible as a breast or a penis, often instead result in the censorship of parody, renowned artwork, and bronze statues. That sincere but misguided attempt to keep things PG on its site is inherently funny, but not nearly as funny as the fact that the following image was (rather innocently) included in one of a collection of children's books in France, entitled Images of Ponies and Horses.

Right about now you're thinking that you just witnessed a French how-to manual on having a horse be all that it can be inside of you. But it isn't! Honest! What it actually is is an attempt by the illustrator to show how similar the bone structures of human beings and horses are by aligning their respective physiologies in this way. A rep from the publisher told BuzzFeed:

"Obviously, we never wanted to shock our readers with that drawing," a Fleurus spokesperson told BuzzFeed. "We publish educational books and make realistic or explanatory illustrations. In that case, our goal was to make the child visually comprehend that the bone structure of the horse and the human being are similar," they said. "Putting them in the same position makes the likening more understandable and concrete."

So, we have an unfortunately designed illustration that was supposed to be educational now going viral entirely because of the context our own dirty minds add to the image. BuzzFeed wrote a post on the image, only to find that -- you guessed it -- Facebook had begun flagging the article as it was being shared on the site.

Well, things got even weirder with this horse thing. Facebook seems to be flagging this article — the one you're reading right now — as pornography. And then, after Facebook removes this article from your feed, it makes you go through your photos and verify that none of them are pornographic. In fact, Facebook's moderators seem to find this horse picture so inappropriate, a member of BuzzFeed's social media team received a 24-hour ban from posting on BuzzFeed's Facebook page.

And this is why Facebook should get out of the morality business to every last degree possible. An article about a hilarious, but innocent, educational illustration is being flagged, users are being hassled about their other photos on the site, and some folks are even getting banned. Because? Well, because it appears that Facebook moderators have the same perverse baseline psyche as the rest of us, resulting in an image of a man and a horse being compared physiologically becoming suspected horse-man-porn. And the article pointing out what it actually is is the one that got flagged. That's as much of a failure of this sort of thing as we could hope for.

from the what-a-mess dept

Geolocation is one of those tools that the less technically minded like to use to feel smart. At its core it's a database, showing locations for IP addresses, but like most database-based tools, the old maxim of GIGO [Garbage In, Garbage Out] applies. Over the weekend Fusion's Kashmir Hill wrote a great story about how one geolocation company has sent hundreds of people to one farm in Kansas for no reason other than laziness. And yes, it's exactly as bad as it sounds.

Most people aren't terribly technically minded: give them a tool, tell them it CAN produce an output, and they'll assume that whatever output it gives them IS the best one available. It's extremely common with 'forensic evidence' and jurors in court cases, where it's given weight well beyond its actual evidentiary value (to the point that jurors now distrust cases without it) – there's even a name for it, "the CSI effect", named after one of the TV shows that uses it as a cornerstone.

One of the latest tools to get the blind trust of morons is IP Geolocation. At its basic level, it's a database of IP addresses with latitude and longitude listed, so when you look up an IP address, you get a pair of coordinates you can associate as an 'origin' for that.
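In sketch form, a lookup is nothing more exotic than a table scan with a fallback. The blocks and coordinates below are made up purely for illustration – real databases hold millions of rows – but the shape is the same:

```python
import ipaddress

# Hypothetical GeoIP table: CIDR block -> (lat, lon). These entries are
# invented for illustration only.
GEO_TABLE = [
    (ipaddress.ip_network("203.0.113.0/24"), (33.75, -84.39)),   # "Atlanta"
    (ipaddress.ip_network("198.51.100.0/24"), (45.42, -75.69)),  # "Ottawa"
]
DEFAULT = (38.0, -97.0)  # catch-all for addresses with no entry


def locate(ip):
    """Return (lat, lon) for an IP, falling back to a default when unknown."""
    addr = ipaddress.ip_address(ip)
    for network, coords in GEO_TABLE:
        if addr in network:
            return coords
    return DEFAULT


print(locate("203.0.113.9"))  # (33.75, -84.39)
print(locate("192.0.2.1"))    # (38.0, -97.0): no entry, so the default wins
```

That DEFAULT catch-all is exactly where the trouble starts, as we'll see below: every address the database knows nothing about gets pinned to one arbitrary spot.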

However, there are a number of problems with that:

First, what about those that don't have a lat/long listed?

Secondly, how often are they updated?

Third, how do they deal with cellular or 'mobile' devices?

So let's quickly address them.

Those that don't have a lat/long listed.

Well, there are a few ways to handle it, but the way some choose is just to guess. The article that started me on this points out that the company MaxMind decided to guess the closest average place it could – the geographical center of the US. Except 39°50'N 98°35'W is a messy decimal (39.8333333 N, 98.585522 W), so it rounded the coordinates to 38 N, 97 W. That's the front yard of a farm in Kansas.

Other times they just guess and get a town and put it somewhere there, although even that can be off a bit. It can be a lot off, as you'll see shortly.
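How far off is that tidying-up? A quick check with the standard haversine (great-circle) formula, using the coordinates quoted above, puts the rounded point well over two hundred kilometres from the true center:

```python
import math


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


true_center = (39.8333333, -98.585522)  # geographic center of the contiguous US
rounded = (38.0, -97.0)                 # the database's tidied-up default
print(haversine_km(*true_center, *rounded))  # a bit under 250 km of pure rounding error
```

So before a single real-world lookup happens, every "unknown" address is already hundreds of kilometres from the point the default was supposedly standing in for.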

How often are they updated?

There's no telling. With the great shortage of IPv4 addresses and an ever-expanding list of devices – from cell phones to thermostats and even fridges – IP addresses are shifting around everywhere. There are also mergers and splits of companies, bankruptcies and so on. So unless the database is frequently updated, there's little chance that anything it has to say will be accurate – again, we'll see that directly.

Finally, how does it deal with cellular devices?

Simply put, they don't. The handoff mechanism means that you'll often carry one IP address from one tower to the next (otherwise you'd have to terminate and restart any data transfer as you shifted between towers). In addition, most cellular providers hide their customers behind NAT, precisely because of the lack of discrete IPv4 addresses to give out (and their… slowness in migrating to IPv6).

Odds are you're going to get a local network control center or regional corporate office instead, which makes the result of practically no use at all.

Oh dear....

This all assumes as well that entries are made in good faith. One of the more common uses of geolocation is targeted adverts, especially on 'adult websites', where they promise there's a horny woman (or man, if your browsing is detected as such, or the 'content' suggests you may be female) close by. Or you may have seen it in the scam adverts on news sites that should know better than to accept low-rate advertising based on scams (with their easy-to-spot clickbait headlines about insurance 'tricks' or similar).

This means that if you can 'rig' the database, you can expose the stupidity in parts of it, as was best demonstrated by Randall Munroe in his XKCD comic series.

So just how inaccurate are these systems? The easiest way to tell by far is to run some IP addresses where you know the location through these systems and see how far off they can be. So I did.

The most obvious one to start with is my own home connection's IP address. So I tried the link in the story, and boy was it off! Just for the record, I live on the south side of Atlanta's metro area, near Macon – Walking Dead country, in fact.

That's right, it put me in Ottawa, capital of Canada, roughly 1900km (1180 miles) and 1 whole country off. Part of that comes from the second question, how current the data is. It's listing my IP as belonging to Nortel networks. Problem is, I'm not a subscriber to Nortel – no-one is, the company was wound down years ago. Yet some databases still have them listed.

Cellphones don't fare much better. I used the same service on a 4G Verizon phone sitting at my computer. Its location: San Diego. That's 1900 miles (3050 km) off. Other services gave locations of New York, Atlanta, and Macon.

Wondering if it was just my semi-rural system that was messed up, I called a few friends who live in the Atlanta suburbs (a few streets from each other) and asked for their IP addresses; one used Comcast, the other AT&T. Maybe things would be better and more accurate in a big-city environment?

I ran a number of different GeoIP services, and got a very mixed bag of results. One thing's certain though: none of the four sets of coordinates gave an accurate location for the person (for obvious reasons I'm not going to give you their addresses, or mine for that matter).

Of them all, only one service – IPCIM.com – gave an error circle with a location (a twenty-five-mile radius), but it didn't do so for every lookup. To me, providing it at all indicates knowledge of the system's inaccuracy, while its absence at other times seems to show the service just doesn't care.

The second and third locations are the same coordinates, but they're less certain of the third than the second, despite both being off.

There's also something specific to note. There are four providers covered here. Two lookups were done from the exact same location, yet their reported locations came nowhere near matching. Two more were IP addresses just streets away, but they didn't match well either, although many went to the same default locations, including two that went to the 'lazy US center' investigated in the Fusion piece.

More importantly, of the 30+ geolocating attempts made here, not a single one managed to be within a mile of the actual location (although one location was within a mile and a half, while another was within 3 miles – again, I'm not going to give out specifics). So for those who want to rely on them as being a source of where something is, the simple answer is "don't". This applies as much to those tracking down people who are leaving spammy comments, as it does to police officers and lawyers seeking to use them for court actions criminal or civil.

In fact lawyers and the police have absolutely NO excuse to use these kinds of databases in litigation at all as there are better, more accurate tools at their disposal – the courts themselves. In criminal cases a warrant is the preferred method, obtaining subscriber information from the ISP (fixed or cellular) which is far more accurate than any geolocation service because it's data coming from the entity actually providing the connection. In a civil trial you have a discovery subpoena to do pretty much the same thing and for the same reasons.

If you're doing it 'on your own', remember that these tools are about as accurate as taking a dart and throwing it not at a map on the wall, but at a Google Maps display on your computer screen. Sure, you'll be out a display, but you won't potentially be facing criminal charges when you act on what is basically bullshit data. At the very best, geolocation can be used to advise, but it can be INCREDIBLY off, sometimes by thousands of miles.

If you'd rather see them on a map, they're here. (Legend: Charter in green, Verizon in red, Comcast in blue, AT&T in yellow.)

NOTE: One data source was extremely interesting in its provision of 11+ decimal places in its results. While this might seem to imply accuracy, it actually underscores how inaccurate the data is. Eight decimal places gives a resolution of about 1.1 millimeters – roughly the thickness of a CD/DVD. The 11 decimal places given in all their results is going to extremes, with locations specified to less than a hair's width. Those figures have been rounded down here.
The "Marietta (bedroom)" label was actually on the output from their database.
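The arithmetic behind that note is simple: one degree of latitude is roughly 111,195 meters, and each decimal place divides that by ten. A quick sketch:

```python
def lat_resolution_m(decimal_places):
    """North-south distance represented by the last decimal place of a
    latitude, assuming 1 degree of latitude is ~111,195 meters."""
    return 111_195 * 10.0 ** -decimal_places


print(lat_resolution_m(8))   # ~0.0011 m: about a millimeter
print(lat_resolution_m(11))  # ~1.1e-06 m: around a micrometer, far finer than a hair
```

Eleven decimal places of claimed precision from a database that can miss by 1900 kilometers is false precision on a grand scale.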

I would like to thank David and James for their help with this. And for obvious reasons, we have forced changes in IP addresses for all our connections (and the release of this article was delayed to ensure that).

Use of one of the research community's most valuable and extensively applied tools for manipulation of genomic data can introduce erroneous names. A default date conversion feature in Excel (Microsoft Corp., Redmond, WA) was altering gene names that it considered to look like dates. For example, the tumor suppressor DEC1 [Deleted in Esophageal Cancer 1] was being converted to '1-DEC.'

Here we have the interesting interaction of two very different fields, where the name of a gene involved in esophageal cancer, DEC1, was interpreted by Excel to mean the date, 1 December. As the paper points out, these kinds of substitution errors are already to be found in key public databases:

DEC1, a possible target for cancer therapy, was incorrectly rendered, and it could potentially be missed in downstream data analysis. The same type of error can infect, and propagate through, the major public data resources. For example, this type of error occurs several times in even the immaculately curated LocusLink database.

As that notes, a gene that might be relevant for treating cancer could well be missed because of this incorrect conversion to a date by Excel. Although it is unlikely that any serious harm has been caused by this -- yet -- it's a useful reminder of the dangers of depending a little too heavily on the results of software without checking for corruption of this kind.
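One cheap safeguard is to scan gene-symbol columns for values that look like Excel's date renderings before any downstream analysis. A minimal sketch – the pattern covers the common 'day-month' form and is illustrative, not exhaustive:

```python
import re

# Values like '1-Dec' or '2-Sep' are what Excel produces when it mangles
# gene symbols such as DEC1 or SEPT2 into dates.
EXCEL_DATE_RE = re.compile(
    r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$",
    re.IGNORECASE,
)


def flag_mangled_symbols(symbols):
    """Return the entries that look like Excel date renderings of gene names."""
    return [s for s in symbols if EXCEL_DATE_RE.match(s)]


print(flag_mangled_symbols(["TP53", "1-Dec", "BRCA1", "2-Sep"]))  # ['1-Dec', '2-Sep']
```

Better still is preventing the conversion in the first place, by importing gene-symbol columns as text rather than letting the spreadsheet guess at types; a scan like this only catches data that has already been corrupted.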

According to the Prairie Village Post, earlier this month lawyer Mark Molner was driving through a Kansas City suburb on his way home from his wife’s sonogram. All of a sudden, his BMW was blocked in front by a police car as another officer on a motorcycle pulled up behind him. (His pregnant wife witnessed the incident from a nearby parked car.)

According to what Molner told the Post, one of the officers then approached his car with his gun out.

“He did not point it at me, but it was definitely out of the holster,” Molner told the Post. “I am guessing that he saw the shock and horror on my face, and realized that I was unlikely to make (more of) a scene.”

The mistake prompting this guns-drawn approach of Molner's vehicle could have been made by anybody. The ALPR read a "7" as a "2" and returned a hit for a stolen vehicle. The hit also returned info for a stolen Oldsmobile, which clearly wasn't what Molner was driving. But that could mean the plates were on the wrong vehicle, which is also an indication of Something Not Quite Right.
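Here's a rough sketch of the kind of cross-check that could flag a stop like this before guns come out. The confusion table and plate values are invented for illustration, not drawn from any real ALPR system:

```python
# Digits/letters that plate-reader OCR commonly swaps; illustrative only.
CONFUSABLE = {"7": "2", "2": "7", "8": "B", "B": "8", "0": "O", "O": "0"}


def near_misses(plate):
    """Plates one OCR confusion away from the read: a hot-list hit on the
    original read could really belong to any of these instead."""
    variants = set()
    for i, ch in enumerate(plate):
        if ch in CONFUSABLE:
            variants.add(plate[:i] + CONFUSABLE[ch] + plate[i + 1:])
    return variants


def hit_is_verified(hot_list_vehicle, observed_vehicle):
    """The cheapest safeguard of all: does the car in front of you
    match the vehicle description on the hot-list entry?"""
    return hot_list_vehicle == observed_vehicle


print(sorted(near_misses("ABC1234")))        # ['A8C1234', 'ABC1734']
print(hit_is_verified("Oldsmobile", "BMW"))  # False: verify before unholstering
```

A hit whose vehicle description doesn't match the car in view, and whose plate has a near-miss twin, is precisely the case that calls for verification rather than a stop.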

The PD's statement on the incident is fairly sensible and measured.

“The officer has discretion on whether or not to unholster his weapon depending on the severity of the crime. In this case he did not point it at the driver, rather kept it down to his side because he thought the vehicle could possibly be stolen. If he was 100 percent sure it was stolen, then he would have conducted a felony car stop which means both officers would have been pointing guns at him while they gave him commands to exit the vehicle.”

That makes sense, but there's still a chance this situation could have been averted. Molner's plate triggered the hit several miles before he was pulled over, but pursuing police were unable to verify the plate due to traffic density. It appears the officers made a last-minute decision to perform the unverified stop shortly before Molner would have driven out of the PD's jurisdiction. The stop occurred on the city/state boundary between Kansas and Missouri.

This lack of verification is what bothers Molner.

“I’m armchair quarterbacking the police, which is not a good position to be in,” Molner told the Post. “But before you unholster your gun, you might want to confirm that you’ve got the people you’re looking for.”

So, when the plate reader kicked back a bad hit, the cops did attempt to verify the plate, but when that verification failed, it looks very much like they overrode procedural safeguards rather than risk losing a collar.

As these plate readers become more common, the number of erroneous readings will increase. If the verification safeguards are followed, problems will be minimal. But if anyone's in a hurry... or the vehicle description is too vague... or it's night... or someone's had a bad/slow day... or if the end of the month is approaching and the definitely-not-a-quota hasn't been met… bad things will happen to good people.
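The base-rate math behind that prediction is simple. Here's a minimal sketch in Python, using invented numbers (the scan volume, misread rate, and hotlist-match rate below are assumptions for illustration, not figures from the article) to show how even a rarely-wrong reader generates a steady stream of bad hits at scale:

```python
# Hypothetical illustration: all three input numbers are assumptions,
# not data from any real ALPR deployment.

def expected_false_hits(scans_per_day: int, misread_rate: float,
                        hotlist_match_given_misread: float) -> float:
    """Expected bad hotlist hits per day caused by OCR misreads alone."""
    return scans_per_day * misread_rate * hotlist_match_given_misread

# Assume a busy department scans 10,000 plates a day, misreads 1 in 100
# (a "7" read as a "2", say), and 1 in 1,000 misread plates happens to
# match a stolen-vehicle hotlist entry.
daily = expected_false_hits(10_000, 0.01, 0.001)
print(f"{daily:.2f} bad hits per day")
```

Under those assumed rates, that's a guns-maybe-drawn stop of an innocent driver every week or two, per department, from misreads alone; and every term in the product only grows as readers proliferate.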

Placing too much faith in an automated system can have terrible consequences. Molner came out of this without extra holes, electricity or bruises. Others may not be so lucky.

from the make-sure-to-dot-all-i's-and-blot-out-all-sensitive-info dept

It appears as if the New York Times, in its latest publication of leaked NSA documents, failed to properly redact the PDF it uploaded, exposing the name of the NSA agent who composed the presentation as well as the name of a targeted network.

As soon as the article was posted, someone from or associated with a popular cryptography website claims to have downloaded a PDF of the Snowden document from The New York Times and discovered that three of the redactions intended to obscure sensitive national security information were easily defeated by highlighting, copying and pasting the text. The poorly-redacted file was subsequently posted to the cryptography website, then promoted via Twitter. (We're not going to post the name of the website that posted the file, to protect the information contained within.)
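Why does copy-paste defeat a redaction? Because a PDF stores its searchable text layer separately from any shapes drawn on top of it. The following is a conceptual sketch (not the NYT's actual file or tooling, and the analyst name is invented): drawing a black rectangle hides text on screen, but a text extractor never sees the rectangle.

```python
# Conceptual model of a PDF page: a text layer plus drawn overlays.
# The name below is a made-up placeholder, not anything from the leak.

from dataclasses import dataclass, field

@dataclass
class Page:
    text_runs: list = field(default_factory=list)   # the searchable text layer
    overlays: list = field(default_factory=list)    # shapes drawn on top

    def draw_black_box(self, region):
        """Visual-only 'redaction': covers pixels, leaves the text intact."""
        self.overlays.append(("black_rect", region))

    def extract_text(self):
        """What copy-paste (or any text extractor) actually returns."""
        return " ".join(self.text_runs)

page = Page(text_runs=["Prepared by", "J. Doe, S2 analyst"])
page.draw_black_box(region=(100, 200, 300, 220))  # covers the name on screen

print(page.extract_text())  # → "Prepared by J. Doe, S2 analyst"
```

Proper redaction tools delete the underlying text runs (and re-save the file) rather than merely drawing over them; the model above is the difference between the two in miniature.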

…

So, the identity of an NSA agent is out there in public view within the same document in which a target of this program is named. All of this is due to the incompetence of whoever failed to properly redact the PDF before publishing it for the world to see — as well as for the aforementioned cryptography site to nab and republish it.

…

This was bound to happen at some point in this ongoing saga: the name of an American agent has been leaked to the public via a document stolen by Edward Snowden. To add to the irresponsibility of how Snowden went about this operation, he distributed untold thousands of documents to a gaggle of technological neophytes who barely understand how to use Adobe Acrobat, much less the phenomenally complicated details of top secret NSA operations.

Cesca somehow feels the privacy of a single NSA agent trumps the public's interest in infringements on their own privacy -- not just here in the US but all over the world. Certainly, the New York Times should have made sure its redactions were actually redactions before publishing the document, but Cesca's hyperbolic attack isn't doing his side any favors.

One agent's name was exposed, one who may not even be employed by the agency at this point. (The documents are from 2010.) The target revealed is nothing more than Al Qaeda's "branch operation" in Mosul, Iraq. Al Qaeda has been the focus of counterterrorism efforts since before the 9/11 attacks, and the revelation that the NSA is targeting mobile networks in Mosul shouldn't come as a shock to anybody, least of all Al Qaeda members.

This doesn't excuse the NYT's carelessness, however. It is disseminating some very sensitive NSA documents and should be ensuring any information it chooses to withhold stays withheld. But this error doesn't invalidate Snowden's exposure of the NSA's programs, no matter how Cesca (and those like him) spin it.

The NSA and other government agencies have suffered redaction failures as well, accidentally exposing information they would rather have withheld from the public. Does the government get held to the same standard by the NSA's booster club? Hardly. Humans make mistakes, no matter which side of this issue they're on.

[The original document uploaded by the NY Times is posted below (via Cryptome). To see the unredacted text, simply click on the Text tab.]

from the stop-not-controlling-things-you-can't-control! dept

The Great Internet Porn Firewall of Britain is now in full effect and, contrary to earlier reports, the no-porn filter will be mandatory even for smaller, "boutique" ISPs. How this will play with Andrews & Arnold, the ISP inviting customers seeking internet filtering to check with North Korea, remains to be seen.

All "questionable content" boxes are to be pre-ticked to provide maximum sanitization, per UK policy, and if someone wishes for a less censored internet experience, they'll have to go through the trouble of informing their ISP that they are indeed a responsible adult capable of handling NSFW material.

Finally, DCMS demand ISPs give them magic beans (“We want industry to continue to refine and improve their filters to ensure they do not – even unintentionally – filter out legitimate content”) and threaten them with regulation if they do not answer to future demands, or “maintain momentum”.

There's nothing quite like a faith-based technological platform crafted by a crack team of professional busybodies and bureaucrats, especially one that assumes the only fuel needed is good intentions and the "momentum" will sustain itself into perpetuity. OR ELSE.

The not-so-veiled threat at the end really drives the point home. What happens when ISPs inevitably fail to deliver the impossible and can't prevent something that is, by definition, unpreventable? What are the consequences of failing to "maintain momentum" or "proactiveness" or whatever term the government is using to redefine "doing what they're told?" The "strategy guide" spells it out this way.

And while Government looks to the industry to deliver, through the self-regulatory mechanisms already established under UKCCIS, we are clear that if momentum is not maintained, we will consider whether alternative regulatory powers can deliver a culture of universally-available, family-friendly internet access that is easy to use.

Jesus. That's frightening. If ISPs don't march in lockstep with Cameron's orders, they'll simply be beaten into shape by restrictive government mandates that ensure "a culture of universally-available, family-friendly internet access." If that doesn't sound like a slightly kinder, gentler version of any totalitarian regime's homegrown "internet," then I didn't just throw up a little in my mouth while typing out that quote.

Why would the government threaten to set up its own internet, one dangerously low on a.) blackjack and b.) hookers? For the children, of course. Every form of media, not just the internet, is subject to these guidelines.

This should be underpinned by a basic, common set of media standards, building on existing standards that already apply in many places. We would expect this to include:

• Protection of minors: including protecting children’s exposure to material that seeks to sexualise them, strong sexual content, violence, imitable and dangerous behaviour, any specific health priorities, safety of children in content and protecting against commercial influence.

The UK government's neverending quest to turn the internet into a Disney-esque wonderland, where no one sees anything they don't want to see and no one is ever even mildly insulted, is pathetic. And disturbing. Cameron's plans infantilize the nation's children and adults alike, treating both as precious bundles of stupidity too incompetent to make their own decisions on appropriate content.

If Cameron's ultimate goal is to govern a nation of infants, he's well on his way. But he's going to find the behavior behind the disturbing images will continue on unabated. His solutions will work about as well as slapping band-aids on someone bleeding internally. At some point down the road, he or his successors will triumphantly point at the unstained bandages as proof of their effectiveness. And if something should actually mar the surface, the call will go out for bigger bandages -- and more of them.

from the i'm-sure-tat's-effective dept

In yet another copyright trolling case, it appears that the trolls are so sloppy that they're suing over the same IP address for sharing the same file (the animated movie, Zambezia) in multiple cases. The story focuses on one guy, who has filed a motion to quash in response, noting that the sloppiness of filing three times raises significant questions about the trolling operation. Either they're incredibly sloppy, or they're hoping that by repeating the same IP address in multiple lawsuits, at least one judge will let the subpoena go through, leading to the inevitable demand letter. Either way, it should raise some eyebrows from the court about why anyone would file against the same IP address for the same movie in three different cases.

from the look-forward,-not-back dept

There's been a lot of hand-wringing, among the types of people who hand-wring about these things, over the flurry of activity on Reddit and Twitter late last night / early this morning suggesting that one of the suspects in the Boston Bombings was Sunil Tripathi, a Brown student who went missing last month (and, for what it's worth, when people thought it was him, folks from 4Chan started complaining that they had done the real sleuthing and were pissed off that Reddit got the credit -- but, now that it's turned out to be wrong, 4Chan seems happy to let Reddit take the heat). Alexis Madrigal has the basics of the story, which has allowed the usual crew of folks who hate the concept of "citizen journalism," or whatever it's called today, to whine about how awful "Reddit" journalism is. Ryan Chittum, defender of legacy newspapers, seemed particularly gleeful in calling out that Reddit "fails again," and in saying that the mainstream media did it right.

But the bigger problem is this idea that it's "Reddit" or, as some people have argued, "the internet" against the legacy media. That's not true at all. Everyone made mistakes during the rapidly changing story, but only on Reddit did you actually see the details of the process. The legacy news organizations present things as if coming from a place of authority, while Reddit is like an open newsroom where anyone can jump in. The conversation about Tripathi, for example, was about whether or not Suspect #2 was him -- it wasn't based on a declaration that it absolutely was him. Furthermore, when you look at the reason why the story actually spread, it was after some more known "press" names retweeted the initial tweet from Greg Hughes, which claimed (incorrectly) that Tripathi's name went out on the police scanner (ironically, he posted that about a minute after posting "This is the Internet's test of 'be right, not first' with the reporting of this story").

But here's the real issue: people can fret about all of this, but it doesn't change one thing: this is going to happen and continue to happen. People are naturally curious, and when there's a news story going on they're going to talk to each other and try to figure things out. That happens all the time in newsrooms already, before stuff goes on the air or is officially published. It's just that the public doesn't see the process. On Reddit, or anywhere else that the public can converse, it does happen in public. The problem comes in assuming the two things are the same. Furthermore, it's even more insane to blame "Reddit" or "the internet" as if those are singular entities that anyone has control over. They're not. As Karl Bode noted, they're just massive crowds of people.

An even better point was made by Charles Luzar, who noted that "the crowd doesn't implicitly profess its empirical correctness like the media does," but rather admits quite openly that it's a process in action. Further, he notes that even if the crowd presents false information before finding factual information, that's still "effective crowdsourcing" and, if anything, provides a greater role to the media to be effective curators of the actual facts.

In the end, it seems likely that this incident will actually help the next time there's a big breaking news story, because (hopefully) it will give people more reason to be at least somewhat skeptical of stories coming out. But it's not going to change the fact that groups on various platforms are going to talk about things, and often try to do a little sleuthing themselves. Sometimes they'll get it right, and sometimes they won't -- just the same as many others. A much better focus going forward would be providing more training and tools to help the world get better at it.

from the getting-it-wrong dept

Amongst economists and those who draw on their thinking, the names Reinhart and Rogoff are well known for work published under the title "Growth in a Time of Debt," which sought to establish the relationship between public debt and GDP growth. The key result, that median growth rates for countries with public debt over 90% of GDP are about one percent lower than otherwise, and that the mean growth rate is much lower still, has been cited many times, and invoked frequently to justify austerity economics -- the idea being that if the public debt is not reduced, growth is likely to suffer badly.

In a new paper, "Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff," Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst attempt to replicate those results. After trying and failing, they reached out to Reinhart and Rogoff, who were willing to share their data spreadsheet. This allowed Herndon et al. to see how Reinhart and Rogoff's data was constructed.

They find that three main issues stand out. First, Reinhart and Rogoff selectively exclude years of high debt and average growth. Second, they use a debatable method to weight the countries. Third, there also appears to be a coding error that excludes high-debt and average-growth countries. All three bias in favor of their result, and without them you don't get their controversial result.
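The mechanics of those errors are worth seeing concretely. Here's a minimal sketch in Python, using invented growth figures (not the actual Reinhart-Rogoff dataset) to show how a row-exclusion error and country-level weighting can, purely mechanically, flip a positive average negative:

```python
# Illustrative only: the growth numbers below are made up.
# Hypothetical real-GDP growth (%) in high-debt years, per country.
growth = {
    "A": [2.6, 2.4, 2.2, 2.5],   # many high-debt years, steady growth
    "B": [2.1, 1.9, 2.3],
    "C": [-7.6],                  # a single catastrophic year
    "D": [2.8, 2.4],
}

# Weight every country-year equally:
years = [g for ys in growth.values() for g in ys]
year_weighted = sum(years) / len(years)

# Weight every country equally, so C's lone bad year counts as much as
# A's four good ones (the debatable weighting method):
country_means = [sum(ys) / len(ys) for ys in growth.values()]
country_weighted = sum(country_means) / len(country_means)

# Add a row-exclusion error on top: a formula range that misses country A,
# one of the high-debt, average-growth cases:
excluded = [sum(ys) / len(ys) for c, ys in growth.items() if c != "A"]
excluded_mean = sum(excluded) / len(excluded)

print(f"year-weighted mean:      {year_weighted:+.1f}%")
print(f"country-weighted mean:   {country_weighted:+.1f}%")
print(f"country-weighted, no A:  {excluded_mean:+.1f}%")
```

With these toy numbers, the year-weighted mean is comfortably positive, while the country-weighted mean goes negative and dropping one good-growth country drags it lower still; the same arithmetic, applied to the real dataset, is what separates Herndon-Ash-Pollin's figure from Reinhart-Rogoff's.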

In his post, Konczal goes on to give a good explanation of just what went wrong. Correcting those three major errors produces the following result:

So what do Herndon-Ash-Pollin conclude? They find "the average real GDP growth rate for countries carrying a public debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as [Reinhart-Rogoff claim]." [UPDATE: To clarify, they find 2.2 percent if they include all the years, weigh by number of years, and avoid the Excel error.] Going further into the data, they are unable to find a breakpoint where growth falls quickly and significantly.

That is, not only is there no significant difference in growth between countries whose public debt-to-GDP ratio is over 90% and those with much lower values, there is apparently no critical number above which growth falls catastrophically. Put another way, from the corrected research, there does not seem to be any reason why the public debt-to-GDP ratio cannot keep on rising while preserving normal levels of growth.

That clearly runs entirely contrary to the current dogma that public debt must be reduced at all costs in order to keep growth at a healthy level. As the authors of the new paper conclude (pdf):

RR's [Reinhart and Rogoff's] findings have served as an intellectual bulwark in support of austerity politics. The fact that RR's findings are wrong should therefore lead us to reassess the austerity agenda itself in both Europe and the United States.

That debate about public debt reduction and the need for austerity measures certainly won't stop just because a key justification for the approach has been found to be completely wrong. But it's worth noting that alongside the major political ramifications of this new finding, there is another, rather less contentious, conclusion to be drawn.

The three errors in the original work by Reinhart and Rogoff finally came to light when they allowed other researchers to examine their model and the data they employed in it. It then became clear that the model was flawed, and that not all the relevant data had been included in the calculation. Neither was obvious from the result alone.

This reinforces a point we have made before. Alongside the results of their work, academics also need to release the datasets and any mathematical/computational models used to derive them. Without those additional resources, it is not possible for other researchers to reproduce the results, which may -- as turned out to be the case for Reinhart and Rogoff's famous paper -- contain fundamental errors that completely undermine the conclusions drawn from them.