from the quite-possible dept

Last week, Gizmodo's Dell Cameron had a great report on how the DOJ's Amber Alert site was configured so stupidly that it could be used to redirect people to any website (this was also true of weather.gov and the National Oceanic and Atmospheric Administration). And it was being used. To redirect people to hardcore porn. Basically, the sites were designed such that anyone who knew the right URL could tack a new URL onto the end of it, and the site would redirect visitors to that destination. Porn sites used this for a few reasons: first, getting referrals from high-ranking sites can help their own Google ranking; second, because the visible link comes from a trusted government domain, it carries extra weight with search engines; and finally, the links may look much more legit to people doing searches (though that would be more true of scam sites than porn sites).

Redirect scripts like this used to be fairly common, but they died off long ago. Except in the federal government. From Cameron's article:

“This is like the 1990s called and wants its vulnerable redirect script back,” said Adriel Desautels, founder of the penetration testing firm Netragard.
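For what it's worth, the underlying hole really is that simple. Here's a minimal sketch in Python of the vulnerable pattern and the kind of allowlist check that closes it (the hostnames and function names are mine for illustration, not the actual government scripts):

```python
from urllib.parse import urlparse

# Hypothetical hostnames -- the real government scripts differed.
TRUSTED_HOSTS = {"amberalert.gov", "weather.gov"}

def vulnerable_redirect(query_url: str) -> str:
    """The 1990s-style pattern: forward to whatever URL the query string supplies."""
    return query_url  # no validation at all -- any destination works

def safer_redirect(query_url: str, fallback: str = "https://amberalert.gov/") -> str:
    """Allowlist check: only redirect within hosts the site actually controls."""
    host = urlparse(query_url).hostname or ""
    if host in TRUSTED_HOSTS or host.endswith(tuple("." + h for h in TRUSTED_HOSTS)):
        return query_url
    return fallback

# An attacker crafts something like .../redirect?url=https://porn.example
print(vulnerable_redirect("https://porn.example"))  # forwards anywhere
print(safer_redirect("https://porn.example"))       # falls back to the trusted site
```

That second check is roughly the fix any modern web framework would push you toward, which is why scripts like the vulnerable version mostly died off decades ago.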

But, here's the thing: does this mean that the DOJ (and the NOAA) could be violating SESTA/FOSTA? It's possible! And that just goes to show how poorly drafted the law is. Remember, under the law, it is now illegal to "participate in a venture" that "knowingly" is "assisting, supporting, or facilitating" a violation of sex trafficking laws. So, if someone were to create a DOJ Amber Alert redirect to a sex trafficking website (or just an escort site, since people keep insisting those serve little purpose other than sex trafficking) would the DOJ be in violation?

The obvious response is that the DOJ isn't "knowingly" doing this. But... is that true? As Cameron's article notes, every time you hit one of those Amber Alert redirects, the DOJ gives you a nice little parting message on your way out, acknowledging that you're leaving its site.

Is that enough to "knowingly" participate? Maybe. I would bet that if non-governmental websites popped up similar messages, SESTA/FOSTA supporters would argue it's proof of knowledge. After all, Rep. Cathy McMorris Rodgers claimed that merely "turning a blind eye" was enough to prove "knowledge." And here, clearly, the DOJ must be logging those exit pages. Is it ignoring them? Is that turning a blind eye? Does that count as knowledge?

Maybe it's a stretch, but the fact that the language of the bill even makes this a possibility just demonstrates how poorly drafted the bill is, and shame on all the politicians who refused to step up and fix it.

from the didn't-see-that-coming dept

Here's a bit of a surprise. The Wall Street Journal's Editorial board has come out vehemently against SESTA. The reason this is surprising is that much of the push for SESTA has been a fairly obvious attack on internet companies, especially Google, by trying to undermine CDA 230. And the Wall Street Journal has spent years attacking Google at every opportunity.

But, this time, the editorial gets the story right -- highlighting that the effort is clearly being driven by anti-Google animus, even though it will create all sorts of other problems (problems that Google can mostly survive easily). However, the most important part of the editorial details why SESTA is not actually needed. Throughout the process, the backers of the bill always point to Backpage.com as the reason the bill is necessary. As we pointed out, when the bill was first released, nearly every quote from Senators backing it mentioned how it was necessary to take down Backpage.

But what none of them want to talk about is that we don't need this law to take down Backpage. As the WSJ report notes, Backpage is currently in court in Massachusetts, where it seems likely that a judge will say it's not protected by CDA 230, because of recent evidence showing that it was a more active participant in creating advertisements involving trafficking. Earlier rulings saying that Backpage was protected by CDA 230 were issued without the evidence, revealed later, that Backpage actually was an active participant:

Yet neither the California nor federal judge was presented with the evidence in the Senate report that Backpage had edited ads to elude law enforcement. And in light of new evidence, a federal judge in Boston is considering allowing another case against Backpage to proceed.

At the very least, you have to wonder why Congress can't wait to see what happens in Boston.

Separately, it has been widely reported that the DOJ has begun a grand jury investigation into Backpage. And, despite what people will tell you, nothing in Section 230 prevents the Justice Department from going after federal crimes (including sex trafficking). The WSJ points this out as well:

Justice last year initiated a grand jury investigation into Backpage’s sex-trafficking, and courts may correctly decide the case in due course.

And that leaves out two other points showing that SESTA is not needed to take down Backpage. First, two years ago, Congress passed another law, the SAVE Act, which was also clearly targeted at Backpage, making it illegal to advertise sex trafficking. And, for reasons no one has explained, that law has never actually been used. Throughout the entire SESTA process, politicians have acted as if the SAVE Act never even happened. No one has discussed why we need another law on top of that one, or why the SAVE Act has not been used, even though it addresses the same issue in a far more targeted fashion.

And the biggest reason of all that we don't need SESTA to take on Backpage: Backpage has already shut down its adult ads section, due to the mounting pressure from politicians, lawsuits and law enforcement. The only thing SESTA adds to Backpage is even more ways to pile on after the fact. It's hard to see how that is necessary at all.

Either way, as the WSJ Editorial points out, SESTA seems totally unnecessary to deal with the problem that everyone is claiming it's meant to help... but it would cause lots and lots of other problems:

Most website operators are aware that people use their platforms for nefarious purposes, but ferreting out criminal activity isn’t always easy. While algorithms can help, human judgment is often necessary. Google plans to hire 10,000 employees to review YouTube videos for inappropriate content.

Revising Section 230 for sex-trafficking could open up a Pandora’s box. Small websites might create overly restrictive screens that filter out non-objectionable content such as ads to help sex-trafficking victims. This could also make it harder to ask other countries to provide internet companies legal immunity for user content.

The WSJ editorial actually underplays the so-called "moderator's dilemma" created by SESTA. As Eric Goldman has explained multiple times, under the wording of SESTA, most websites are encouraged to take one of two extreme positions, neither of which is helpful. The first: if they moderate, they will become overly cautious, quick to take down anything that might put them at risk, or for which they received a notice. That could include (as the editorial notes) content designed to help sex-trafficking victims, or it might just be anything that mentions certain keywords. Or maybe it'll be anything that someone flags as "sex-trafficking content" just because they want it gone (as we see with tons and tons of DMCA notices).

The other option in the moderator's dilemma is not to moderate at all. Since SESTA has a "knowledge" standard, the best way to avoid liability is not to have any knowledge of what people are doing -- and the best way to do that is not to moderate at all. What's quite incredible to me is that the backers and supporters of SESTA refuse to acknowledge that CDA 230, as currently written, encourages moderation, and SESTA removes that encouragement. In other words, in an effort to get websites to stop allowing sex trafficking, they're actually encouraging more sites to deliberately look the other way.
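To see just how crude the over-filtering horn of that dilemma gets in practice, here's a toy sketch of the kind of blunt keyword filter a risk-averse site might deploy (the blocklist and example posts are invented for illustration):

```python
# A toy sketch of the "overly cautious" horn of the moderator's dilemma.
# The blocklist and example posts are invented, not from any real site.
BLOCKLIST = {"trafficking", "escort"}

def overly_cautious_filter(post: str) -> bool:
    """Return True if the post would be taken down under a blunt keyword rule."""
    words = post.lower().split()
    return any(term in words for term in BLOCKLIST)

# The ad gets removed -- but so does a victim-support notice:
print(overly_cautious_filter("escort services available tonight"))               # True
print(overly_cautious_filter("resources for trafficking victims seeking help")) # True
print(overly_cautious_filter("apartment for rent downtown"))                     # False
```

The filter can't tell an ad from an outreach post, which is exactly the over-removal problem the editorial warns about.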

One more point that the WSJ editorial board gets right: this is going to lead to a ton of lawsuits. And almost all of those will be misdirected: suing the company operating a website for actions of its users, rather than suing the users. Given that even the anti-internet Wall Street Journal understands the many problems of the bill, why is it that so many in Congress refuse to deal with it?

from the it-really-does dept

If you represent a tech company, please consider signing our letter to Congress from tech companies about concerns regarding the latest attempt to dismantle Section 230 of the CDA in a manner that will be completely counterproductive to the stated goals of the bill. Sex trafficking is an incredibly serious issue, and we support efforts by law enforcement and various groups to fight it -- but we are greatly concerned that the approach being put forth here will actually be counterproductive to that goal, and create numerous other problems.

As we've discussed over the past week, Congress has launched a highly questionable attempt to modify Section 230 of the CDA, ostensibly as an effort to take down Backpage (ignoring (1) that they already passed another law two years ago targeting Backpage and then never used it, (2) that the DOJ is already able to go after Backpage if it broke the law and may be investigating the company as we speak, and (3) that Backpage has already shut down its adult section), but which will actually create havoc for basically the entire internet. It will do a variety of damaging things, including opening up every tech platform to frivolous lawsuits from individuals and fishing expeditions from state Attorneys General, if anyone uses any part of a platform in a manner that touches on sex trafficking. We've already discussed how the bill could kill Airbnb, for example.

But, really, the worst part of the bill is that it's entirely counterproductive. The tech industry may not be experts in trafficking, but we now have decades of experience dealing with what happens when you blame platforms for the actions of their users -- and the bill here won't help deal with trafficking, but could make a bad problem worse. CDA 230 works by encouraging platforms to moderate their content, by making sure that any attempts to moderate don't make the platform liable for the actions of users. This enables platform companies to freely monitor usage and manage their platforms responsibly. But under the new bill, should it become law, any "knowledge" of trafficking activity using the platform puts the platform itself at risk of violating both civil and criminal law. As such, any monitoring behavior is likely to be used against the platform. This only incentivizes companies to take a total "hands-off" approach to policing their own platform. And this is doubly ridiculous because the tech industry has worked closely with law enforcement over the years to combat trafficking, creating a variety of tech platforms and using big data to help find, target and stop trafficking. But under this bill, participating in those programs very likely will be used against these platforms. And this would be a real tragedy as it could lead to more trafficking, rather than less.

And, on top of that, as we saw when Craigslist was targeted in the past and trafficking just moved over to Backpage, the trafficking will continue and will move to platforms less interested in working with law enforcement (perhaps overseas platforms). The end result: (1) doesn't stop trafficking, (2) pushes tech companies not to cooperate for fear of greater liability and (3) creates massive other problems for those tech companies in the way of increased liability and frivolous lawsuits. It's a bad idea on nearly every front.

... the bill would be as if Congress decided that FedEx was legally liable for anything illegal it ever carries, even where it’s ignorant of the infraction and acts in good faith. That would be a crazy notion in itself, but rather than applying only to FedEx's tech equivalents—the giants like Google and Facebook—it also would apply to smaller, less well-moneyed services like Wikipedia. Even if the larger internet companies can bear the burden of defending against a vastly increased number of prosecutions and lawsuits—and that’s by no means certain—it would be fatal for smaller companies and startups. Amending Section 230's broad liability protection for internet service providers would expand the scope of criminal and civil liability for those services in ways that would force the tech companies to drastically alter or eliminate features that users have come to rely on. It could strangle many internet startups in their cribs.

Because of all of this, our think tank organization, the Copia Institute, teamed up with our friends at Engine to put together an open letter from tech companies to Congress about this bill. We've put it up at 230matters.com. Some great tech/internet companies have already signed onto the letter, including Reddit, GitHub, Cloudflare, Medium, Automattic, Rackspace, Tucows and more. We'll be sending a second version of the letter in a few weeks, so if you represent a tech/internet company and have the authority to do so, you can sign the letter on that site (if you work at such a company and don't have the authority, please forward this to someone who does...). We all support the larger goal of stopping sex trafficking, but as these tech companies know all too well, this bill will actually harm that goal by making it harder for the companies to help in that process.

from the liability-liability-liability dept

Earlier this week, we wrote about a dangerous bill to punch a giant hole in Section 230 of the CDA. We spent a lot of time in that post detailing how problematic the bill is and how it would actually be counterproductive to the stated goal of stopping human trafficking. But, beyond just being counterproductive to the stated goal, the bill would likely create fairly massive negative consequences for tons of internet companies. If anyone used any part of that company's products and services for trafficking, it would open up companies not just to liability, but to costly legal action, even if they're eventually vindicated.

And I wanted to dig into one example: Airbnb. As we've discussed in the past, Airbnb relies heavily on CDA 230, because otherwise, any time anything went wrong with an Airbnb-hosted place, Airbnb would face potentially crippling lawsuits. And I'm thinking about Airbnb specifically because of a recent ruling that Eric Goldman pointed out, in which Airbnb's largest competitor, VRBO, was saved by CDA 230. You can go over to Eric's blog to read the details, but the really short version is that someone booked a "luxury resort" via VRBO for a ridiculous sum of money, and the promised rental never materialized. The victims targeted VRBO with the lawsuit, but the court had none of it.

The plaintiffs claimed VRBO qualified as a “seller of travel services,” which would subject it to heightened regulation. The court disagrees: “HomeAway merely provides a venue for others to sell or provide lodging, but does not provide the actual facility where people can ‘lodge.'” See the uncited SF Housing Rights Committee v. Homeaway ruling. Further, if VRBO did constitute a seller of travel services, Section 230 would apply: “To hold HomeAway liable for misleading or inaccurate material (e.g., images from another property listing appearing on a different HomeAway website being duplicated on the VBRO.com Jewels of Belize rental account) in the third party created Jewels of Belize listing contravenes Section 230 of the CDA.”

To get around Section 230, the plaintiffs invoked VRBO’s “Basic Rental Guarantee,” which provides reimbursement for certain types of fraud (but doesn’t cover direct wire transfers like those at issue here). VRBO denied the reimbursement claim, concluding that the listing came from an authorized property owner and the property actually existed (but apparently had serious problems). The court says VRBO did what it promised to do in the “guarantee.” The fact VRBO used the term “guarantee” (a term I wouldn’t have chosen personally) didn’t convert the reimbursement promise into something more.

Now, there's a somewhat reasonable argument to be made that it would be smart business strategy for VRBO to handle the situation better and to reimburse these people. But making the company legally liable is another story.

So, now let's get back to SESTA, the bill that was released earlier this week. Surely, some of you are thinking, that's not a big deal for Airbnb? After all, everyone's just talking about how it's designed to take down Backpage (ignoring that Backpage already shut down its adult section, and that current law already lets the DOJ go after Backpage if it violated federal trafficking laws). And Airbnb isn't Backpage. But... there actually have been a whole bunch of stories this year claiming that prostitutes are now using Airbnb. And, with some folks trying to conflate prostitution with trafficking, is it any stretch of the imagination to think lawyers will start suing Airbnb, claiming it's violating federal anti-trafficking laws?

As Eric Goldman asks in his post:

In light of that, how would Airbnb and VRBO change their behavior to reduce their potential liability for sex trafficking pursuant to the proposed bills? Heck if I know, and I doubt Congress knows either.

Put yourself in the shoes of Airbnb and tell me: how would you completely rule out that anyone could possibly abuse the Airbnb service for trafficking? It's... not easy. This doesn't mean Airbnb wants this activity to happen via its platform. I'm 100% sure that it does not. It would be thrilled if there were an easy way to stop it. But... there's not. This is a basic impossibility. And yet, if it happens, then suddenly the company itself could be liable for criminal activity. Criminal activity that was done by others, that Airbnb would have no effective way to stop, short of shutting down the site or doing something ridiculous and drastic. That... seems like a pretty big overreaction. Yes, trafficking is a problem, but this bill is sending a wrecking ball through the internet as it seeks out one particular company.

And, yes, I get that some people don't like Airbnb, so maybe they're fine with this kind of collateral damage, but Airbnb is just one example here. Plenty of other sites that you probably do like will face similar problems. This is a bad bill that won't even help with its stated goals, but will create serious problems for people around the globe.

from the unintended-consequences dept

We're back again with another in our weekly reading list posts of books we think our community will find interesting and thought provoking. Once again, buying the book via the Amazon links in this story also helps support Techdirt.

One of the issues that we focus on quite frequently around here is the "unintended consequences" of politicians and regulators regulating in good faith. Some argue that many of these "unintended" consequences really are quite intended (i.e., regulatory capture creating barriers to new entrants), but in many cases the consequences truly are unintended. You have people trying to regulate extremely complex systems by doing a fairly superficial attack on one or a small number of variables, never even bothering to consider how that might impact other variables. Or, if you want to think about it more visually, if you squeeze the toothpaste tube in one spot, the toothpaste is going to just move around elsewhere (and potentially fly out and all over everything). With that in mind, Greg Ip's Foolproof: Why Safety Can Be Dangerous and How Danger Makes Us Safe is a fun read on such unintended consequences.

It covers a variety of different areas where there were attempts to effectively "regulate" safety, but which actually caused the toothpaste to shoot right out of the tube and made everyone less safe (even as they thought they were safer). In fact, part of the problem is the false belief that we're safer, leading people to take riskier and riskier actions. But that's not all. A second part of the issue is the added layer of complexity. Increase the complexity of regulations, and the focus shifts to loopholes, and to hiding activity through more convoluted behaviors that can mask the danger as well -- something that pops up in the financial system time and time again. While there are some who like to use the kinds of examples in the book to argue for doing away with regulations altogether, that's not necessarily the solution. But regulating for "safety" without understanding these kinds of unintended consequences can lead to serious problems. And, as Ip's book notes, it's easy to look for "villains" when things go catastrophically wrong, but it often is the well-intentioned people seeking "safety" or "protection" who create these unintended consequences in the first place.

from the unintended-effects dept

A month ago, we wrote about Kim Dotcom's plans to form his own political party in New Zealand. But that's not the only way that Dotcom is going on the attack against the system. Here's Vikram Kumar, the Chief Executive of Dotcom's "privacy company" Mega, on another bold move:

Mega chief executive Vikram Kumar told ZDNet that the company was being asked to deliver secure email and voice services. In the wake of the closures [of other email services like Lavabit], he expanded on his plans.

Kumar said work is in progress, building off the end-to-end encryption and contacts functionality already working for documents in Mega.
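The core idea behind end-to-end services like the ones Mega is describing is simple: encryption and decryption happen on the users' machines, so the service only ever handles ciphertext. Here's a deliberately toy sketch of that property -- a one-time pad stand-in, not real cryptography, and nothing like Mega's actual implementation:

```python
import secrets

# A toy end-to-end scheme -- a one-time pad stand-in, NOT real cryptography and
# nothing like Mega's actual design -- just to illustrate the property such
# services rely on: the server only ever relays ciphertext; keys stay with users.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the key; XOR is its own inverse."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared between users, never the server

ciphertext = xor_bytes(message, key)     # this is all the server ever sees
recovered = xor_bytes(ciphertext, key)   # decryption happens client-side

print(recovered == message)  # True
```

The point is architectural, not cryptographic: if the service never holds the key, a subpoena to the service (the Lavabit scenario) can't produce the plaintext.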

Meanwhile, according to the site's former lawyer, Newzbin intends to make a comeback with new services designed to defeat NSA spying.

"We are horrified by the recently disclosed antics of the NSA in indiscriminately spying on all Internet users," David Harris told TorrentFreak.

"We cut our teeth protecting our users from the surveillance of corrupt US corporations like the Hollywood movie studios. We now plan to bring a range of privacy services to help protect the public from governments. Our first product is a secure email service which is about to enter closed early beta testing."

What's interesting here is the common thread with Dotcom. Both he and Newzbin's David Harris ran services that tried to resist unwanted scrutiny. That puts both of them in a good position when it comes to establishing secure email services -- at least, insofar as that is possible given the limitations imposed by email's architecture. It looks like the US government's uncritical support for the copyright industry's frenzied pursuit of file-sharing sites could ultimately lead to the creation of new online services that thwart its law enforcement agencies in the far more important area of national security.

The police should be allowed to hack into mobile phones and computers, even when these are located abroad. This is proposed by the Dutch government on May 2nd of 2013. While this appears to be a powerful asset for law enforcement, in reality it creates unnecessary vulnerabilities for citizens.

The bill would also make it a crime for a suspect to refuse to decipher encrypted files during a police investigation.

It is expected the draft legislation will be put to parliament by the end of the year.

The bill singles out child pornography and terrorism as two areas of special concern. The publication of stolen data would also become punishable.

It's easy to see how the last of those could be abused to silence inconvenient whistleblowers. Bits of Freedom sums up well the key danger with the bill:

other countries, such as China, will use the powers as a justification for their own activities. They will follow the Dutch example by allowing their police to use the same methods, including hacking abroad, in order to delete controversial data. Civilians will become the victims in an arms race between hacking governments.

Indeed, it's worth considering for a moment what the Chinese response will be when it finds Dutch police, with the full approval of the Dutch government, deleting files or installing spyware on computers on its territory. It won't matter if the latter were involved in breaking into Dutch systems, or controlling a global botnet: national pride will be at stake over what will effectively be an attack on Chinese citizens and property. So as not to lose "face", a robust response is guaranteed. Is the Netherlands (population 16,788,973) really ready to take on China (population 1,353,821,000) over this?

from the unintended-consequences dept

One thing we've talked about for years is that lawmakers are notoriously bad at thinking through the unintended consequences of legislation they put forth. They seem to think that whatever they set the law to be will work perfectly, and that there won't be any other consequences. This is one reason why we're so wary of simple "fixes," even when the idea or purpose sounds good up front. "Protecting artists" sounds good... unless it destroys the kinds of services artists need. Cybersecurity sounds good, unless it actually makes it easier to violate your privacy. And, now, people are realizing that not only may cybersecurity rules like CISPA be awful for privacy, but they could potentially lead to more "cyber" attacks, as companies look to "hack back" against those who attack them. As Politico describes:

The idea is known as "active defense" to some, "strike-back" capability to others and "counter measures" to still more experts in the burgeoning cybersecurity field. Whatever the name, the idea is this: Don't just erect walls to prevent cyberattacks, make it more difficult for hackers to climb into your systems — and pursue aggressively those who do.

So, how would cybersecurity rules create more hacking? Well, possibly by encouraging this kind of behavior by providing some amount of cover for it. The Cybersecurity bill in the Senate last year included an undefined allowance for "counter measures." CISPA doesn't explicitly mention that, but some in the security field are interpreting the bill to provide some amount of cover for such "counter measures" in which they could "perform hacks against threats." But, if you're trying to discourage online attacks, that seems like a problem. The likelihood of someone attacking the wrong target is quite high, and it could create quite a mess.

Thankfully, the folks behind CISPA suggest that they're willing to change the bill to make it more explicit that such countermeasures are not allowed, but until that's in place, it's a serious concern:

Some of those fears have reached Rep. Mike Rogers (R-Mich.), chairman of the chamber's Intelligence Committee and one of CISPA's lead authors. In fact, panel aides told POLITICO they're open to revising the relevant definitions in the bill. And Rogers himself this year has railed on the idea of an aggressive active defense, describing it as a "disaster for us" at a time when the country's digital defenses remain subpar.

Even if they fix this particular hole, it's these kinds of things that should worry all of us about broad laws that provide things like blanket immunity over ill-defined concepts like "cybersecurity" and "cyberattacks." The likelihood of abuse is quite high, especially in an ever-changing technology world. Just look at computer laws like the CFAA and ECPA, which cover various computer crimes and privacy today. Both are ridiculously outdated, with concepts that are laughable by any rational view today. And thus, there are massive unintended consequences associated with both laws. Before we rush into creating new laws with big, broad, vague terms, perhaps we should focus on fixing the old ones and proceed with caution on any new ones.

from the creating-more-problems-then-you-solve dept

Since we learned of the plan for the so-called Six Strikes program by ISPs, there have been protests and warnings from multiple sources about multiple issues. Accordingly, the Daily Dot has a nice little piece about what sort of unintended consequences we can expect to come out of this plan. A couple of them are well-traveled ground here at Techdirt, including whether businesses will still offer WiFi when "pirates" naturally flock there to carry out their piratey actions. Likewise, we've discussed the importance of the Open Wireless movement, which will certainly take a massive hit if and when these ISP plans are spun up. All that being said, the third unintended consequence mentioned in the article is probably the most important, since it will render all of this an exercise in futility: greater adoption of privacy tools by the masses.

According to comments Lesser made at an Internet Society meeting in November 2012, the definition of who the CAS is after is extremely narrow, at least for its planned first iteration. It only tracks those who upload the most-popular copyrighted content, like blockbuster movies and best-selling albums, via the peer-to-peer service BitTorrent, and it only identifies them by their Internet protocol (IP) addresses. That's it. So pirates who can avoid BitTorrent, or peer-to-peer altogether, or download without uploading (a major faux pas on some torrent sites), or hide their IP addresses, will avoid detection.

Learning to conceal one’s IP address is already a major point of Internet activism, for reasons that have nothing to do with piracy. The Electronic Frontier Foundation, for instance, suggests bloggers in dangerous parts of the world hide their IP addresses to ensure their anonymity from authoritarian governments.

In other words, these plans will spark an interest in privacy tools designed to get around the "strikes." It's an arms race that essentially cannot be won, because every new tactic simply spurs the growth of interest in counter-tactics, and probably leaves the average computer user even more prepared for the next attempt than they would have been otherwise. This type of thing likely creates tech-savvy people where there previously would have been none. Meanwhile, businesses and WiFi device owners will close off access out of fear.

BitTorrent proxies and VPN services are the preferred way for people to remain anonymous while downloading. These services replace a user’s home IP-address with one provided by the proxy service, making it impossible for tracking companies to identify who is doing the file-sharing. In the U.S. 16% of all file-sharers already hide their IP-address, and this is likely to increase when the copyright alert system goes live.
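Put together, the detection criteria described above amount to a remarkably narrow predicate -- something like this toy sketch (the function and its parameters are invented for illustration, not drawn from the actual CAS):

```python
# Toy model of the narrow detection criteria described above. The function and
# its parameters are invented for illustration, not drawn from the actual CAS.

def would_be_flagged(protocol: str, uploads: bool,
                     popular_title: bool, ip_masked: bool) -> bool:
    """Flag only unmasked IPs uploading popular titles over BitTorrent."""
    return protocol == "bittorrent" and uploads and popular_title and not ip_masked

print(would_be_flagged("bittorrent", True, True, False))  # True: plain seeder
print(would_be_flagged("bittorrent", True, True, True))   # False: behind a VPN
print(would_be_flagged("usenet", True, True, False))      # False: not BitTorrent
print(would_be_flagged("bittorrent", False, True, False)) # False: download-only
```

Flip any one of those conditions and detection fails entirely, which is why a single cheap VPN subscription defeats the whole scheme.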

What's missing from all of this is exactly how any of these plans are going to turn previous "pirates" into paying customers for media companies. History suggests they will not do so, will not curb piracy, and will in fact only annoy people who like open WiFi connections, while preparing users all the more for the next round of the race. If there were a more perfect example of a plan that achieves nothing except collateral damage than Six Strikes, I can't imagine what it would be.

from the the-world-works-in-bizarre-ways dept

If you haven't yet, you owe it to yourself to read Kevin Drum's recent article for Mother Jones about the possible link between crime rates and leaded gasoline. The article makes a rather convincing case that the massive growth, and then subsequent decline, in crime over the last six decades or so was influenced quite strongly by the fact that automobile gasoline had lead -- and then went unleaded due to environmental concerns. The article cites numerous studies that all seem to suggest the same thing -- and carefully tries to get past the "correlation is not causation" issue by looking at multiple studies that tackle the same question from different angles (different time periods, locations, population types, etc.) to try to eliminate other possible explanations. One of the parts that struck me as most interesting was the data on big cities as compared to other regions:

Like many good theories, the gasoline lead hypothesis helps explain some things we might not have realized even needed explaining. For example, murder rates have always been higher in big cities than in towns and small cities. We're so used to this that it seems unsurprising, but Nevin points out that it might actually have a surprising explanation—because big cities have lots of cars in a small area, they also had high densities of atmospheric lead during the postwar era. But as lead levels in gasoline decreased, the differences between big and small cities largely went away. And guess what? The difference in murder rates went away too. Today, homicide rates are similar in cities of all sizes. It may be that violent crime isn't an inevitable consequence of being a big city after all.

The article has not gone entirely without criticism. Drum has distanced himself from the claim of the key researcher he relies on in the piece that 90% of the rise and fall of crime (not 90% of crime) is attributable to lead, suggesting that 50% might be a more reasonable number. Separately, Ronald Bailey has reasonably taken Drum to task for blithely making statements about "blindingly obvious" things concerning IQ and ADHD that turn out to be... not true. When you take those things out of the equation, some of the report relies on "aggressiveness" and "impulsivity," but as Bailey notes, there is no national data series on aggressiveness or impulsivity. And, having seen way too many "studies" on video games / violent media causing greater "aggressiveness" and "impulsivity," but always failing to show that those traits actually lead to more crime, it pays to be somewhat skeptical.
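For the curious, the statistical move at the heart of these studies is a lagged correlation: shift the lead-exposure series by roughly two decades and see whether the fit to crime rates improves. A toy sketch of that check, with entirely invented numbers:

```python
# Toy illustration (all numbers invented) of the lagged-correlation analysis the
# lead-crime studies rest on: childhood lead exposure should track crime rates
# roughly two decades later, so you shift one series before correlating.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

lead_exposure = [1, 2, 4, 7, 9, 8, 5, 3, 2, 1]  # exposure, by birth cohort
crime_rate    = [0, 0, 1, 2, 4, 7, 9, 8, 5, 3]  # crime, same shape, two steps later

unlagged = pearson(lead_exposure, crime_rate)
lagged = pearson(lead_exposure[:-2], crime_rate[2:])  # align cohorts with the lag
print(f"unlagged r = {unlagged:.2f}, lagged r = {lagged:.2f}")  # lag strengthens fit
```

Of course, a strong lagged correlation in made-up (or real) data still isn't causation by itself -- which is exactly why the studies Drum cites attack the question from multiple independent angles.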

That said, the data is very interesting, and certainly worth much more research and better understanding. At the very least, it's a reminder of our complex ecosystem and economy, where understanding cause and effect is often incredibly complicated, and the end results may be quite surprising. It is all too easy to jump to conclusions about cause and effect (and, yes, we are just as guilty of this as others at times) -- but the real world is an impossibly complex mixture of inputs and variables, that rarely succumb to simple explanations that follow the initial "most obvious" rationale.