from the good-move dept

While some aspects of YouTube's ContentID feature have been quite cool, creating new ways for content creators to monetize their works, there have been significant problems too, especially in taking down legitimate content with little recourse for the uploader. Thankfully, it appears that the folks at YouTube have finally realized that the counter-notification/appeals process for ContentID takedowns was bogus. A lot of people confuse DMCA takedowns and ContentID takedowns, but they're different. With the DMCA, there's an official counternotice process, and if the copyright holder doesn't sue (realistically, file for an injunction), then YouTube puts your content back up after 10 business days. With ContentID, however, there are no legal rules. Google handled ContentID disputes by letting the copyright holder simply "reject" the dispute -- and that was about the end of it, even in cases where they were putting ads on someone else's content. Now, however, YouTube has revamped the appeals process so that if someone disputes a ContentID takedown, the copyright holder needs to file an actual DMCA claim to keep claiming infringement:

Users have always had the ability to dispute Content ID claims on their videos if they believe those claims are invalid. Prior to today, if a content owner rejected that dispute, the user was left with no recourse for certain types of Content ID claims (e.g., monetize claims). Based upon feedback from our community, today we’re introducing an appeals process that gives eligible users a new choice when dealing with a rejected dispute. When the user files an appeal, a content owner has two options: release the claim or file a formal DMCA notification.

This is a much more reasonable process that doesn't allow people claiming copyright to effectively take over a video regardless of whether or not the video's uploader disputes it. This probably should have happened a long time ago, but it's good to see it finally has.

The announcement also claims that their system is becoming better at avoiding "invalid claims." It sounds as though there's some sort of threshold now, where if something is borderline, it goes into a manual review queue, rather than automatically being taken down. So the more "gray area" cases will get a human review first.

We'll see how all of this works out, but it's good to see that Google is taking many of the complaints about ContentID's overeager takedowns seriously.

from the but-still-no-youtube dept

As the fervor over the hateful Innocence of Muslims movie is beginning to die down, you may have heard that, in response to that film, the Iranian government blocked access to Gmail. There has been much speculation over why Gmail suddenly became a target, including what seems to be a ridiculous claim from the Iranian Telecommunications Ministry that it was simply trying to put a heavy block on YouTube (which has been blocked since long before this movie showed up). But, as most of us probably expected, Gmail is back on.

Regardless of whether or not the block on Gmail was intentional, the obstruction to one of the world’s most popular email services resulted in many complaints from Iran officials. Legislator Hossein Garousi reportedly threatened to summon Iran’s telecommunications minister Reza Taqipour for parliamentary questioning if the service was not unblocked.

Iran continues to block any site or network that expresses "anti-government views," including sites like Twitter, Facebook and YouTube, which helped rally citizens and coordinate the massive protests following the questionable re-election of President Mahmoud Ahmadinejad.

Now, the blocking of such sites probably doesn't shock any of us anymore. It's unfortunate, but they're doing it. Hell, Iran has previously announced plans to build their very own internet. The good news is that Iranian citizens aren't simply rolling over at their government's heavy-handed censorship of the internet. They know how to use technology to get around the filters too.

Even though YouTube was previously blocked in Iran before the film was released and Gmail access was barred, Reuters reports on the ability of Iranian citizens to “circumvent Internet restrictions” using virtual private network (VPN) software, which makes it appear as if the computer accessing the content is located in another country.

So best of luck to you, Iranian government, because you're going to need it if you think that suppressing thought and the freedom to access an unfettered internet is going to work out for you in the long term. At least you can rest easy knowing that your citizens can't play online roleplaying games. We've got that covered from our end.

from the urls-we-dig-up dept

Autonomous robots are popping up everywhere. Some can fly, and some can drive. Others can swim across the ocean. Considering that there are still a lot of places in the oceans not yet explored, fish-like robots could gather amazing amounts of data and help us keep an eye on 70% of the Earth's (water-covered) surface. Here are just a few projects that are working on ocean-faring bots.

from the here's-to-15-more-(at-least) dept

We recently wrote about Techdirt turning 15. A few weeks later, without anyone (including us) noticing, we also published our 50,000th post. Every day I'm in awe of the community here and the conversations and discussions that we've had. In the back of my head, it still feels like the early days, when I'd publish something and know that absolutely no one would read it -- even though that doesn't appear to be the case any more. I'm thrilled that I get to "virtually" spend time with all of you every day, and to celebrate, we thought it would be great to see some of you in person too! We're having a 15th Anniversary Meetup/Happy Hour in San Francisco next Wednesday, October 10th from 6pm to 8pm. The event is being hosted by our good friends at Hattery Labs and should be a lot of fun. It's free, but we have limited space, so please sign up to get on the guest list before we run out of space... We look forward to seeing many of you next week!

from the think-small dept

There's plenty of breathless writing about the imminent 3D-printing revolution, but realistically, what is it likely to mean for most people? They probably won't all be printing out their own planes, but they may well be printing out small replacement parts for goods they own. Here's an early example of that from the world of electronics, spotted by the Shapeways site:

Teenage Engineering not only make one of the sexiest synthesizers but also get the prize for being the first electronics company to offer their replacement parts as downloadable 3D Printed files.

We work hard to make our OP-1 [synthesizer] users happy with free OS updates and added functionality. But sometimes we fail. As some have noted, the shipping cost of the OP-1 accessories is very high. This is because we can't find a good delivery service for small items. Meanwhile, we have decided to put all CAD files of the parts in our library section for you to download. The files are provided in both STEP and STL format. Just download the files and 3D print as many as you want.

Worth noting that this is about serving customers by helping them avoid high shipping costs -- not something every company cares about. Notice, too, that Teenage Engineering explicitly encourages people to print as many replacement parts as they want -- no attempt to limit this to "one-offs" through stupid licensing agreements, for example.

Of course, that's exactly as it should be -- but too often isn't. However, it's also a shrewd move. It means that customers are likely to use their synthesizers for longer, and to become more attached to them. Building customer loyalty in this way is likely to turn them into good ambassadors for the company, and makes the next sale more likely, so Teenage Engineering's generosity is also good business. Similarly, making the CAD files available encourages users to modify and customize the parts, again building loyalty to the brand, and enriching the ecosystem that grows up around the product.

It would be surprising if this kind of approach did not become more widespread among manufacturers of many categories of goods, given the clear advantages it offers. It's not quite as exciting as printing out a car or a plane, but is a practical application of 3D printers that might well help drive their wider use thanks to the direct, everyday savings they can bring.

from the just-can't-wait dept

California college students hit with tuition increases in recent years will get a little financial help after Gov. Jerry Brown signed legislation Thursday to create a website on which popular textbooks can be downloaded for free.

Twin bills by Senate President Pro Tem Darrell Steinberg (D-Sacramento) will give students free digital access to 50 core textbooks for lower-division courses offered by the University of California, California State University and California Community College systems. Hard copies of the texts would cost $20.

The bill establishes a new California Open Education Resources Council, which will be required to choose the 50 core textbooks and then:

to establish a competitive request-for-proposal process in which faculty members, publishers, and other interested parties would apply for funds to produce, in 2013, 50 high-quality, affordable, digital open source textbooks

The ripple effect of this legislation should spread way past California and throughout the whole country. With quality, publisher-grade, peer-reviewed options becoming newly available in open formats and competing against the publishers' high-priced textbooks, faculty will need to pause to review these and see how they can be used in their classrooms.

After all, if high-quality textbooks are freely available in digital form -- and hence available for low prices as printed copies -- hard-pressed universities elsewhere in the US (and internationally) would be crazy not to consider them. The CC-BY license means that the text can be freely modified for local needs, or translated.

A group of Finnish mathematics researchers, teachers and students is writing an upper secondary mathematics textbook in a booksprint. The event started on Friday 28th September at 9:00 (GMT+3) and the book will be (hopefully) ready on Sunday evening.

from the surveillance-society dept

We've written plenty about how the US government has been quite aggressive in spying on Americans. It has been helped along by a court system that doesn't seem particularly concerned about the 4th Amendment and by the growing ability of private companies to have our data and to then share it with the government at will. Either way, in a radio interview, Wall Street Journal reporter Julia Angwin (who's been one of the best at covering the surveillance state in the US) made a simple observation that puts much of this into context: the US surveillance regime has more data on the average American than the Stasi ever did on East Germans. And, of course, as we've already seen, much of that data seems to be collected illegally with little oversight... and with absolutely no security benefit.

To be fair, part of the reason why this is happening is purely technical/practical. While the Stasi likely wanted more info and would have loved to tap into a digitally connected world like today's, that just wasn't possible. The fact that we have so much data about us in connected computers makes it an entirely different world. So, on a practical level, there's a big difference.

That said, it still should be terrifying. Even if there are legitimate technical reasons for why the government has so much more data on us, it doesn't change the simple fact (true both then and now) that such data is wide open to abuse, which inevitably happens. The ability of government officials to abuse access to information about you for questionable purposes is something that we should all be worried about. Even those who sometimes have the best of intentions seem to fall prey to the temptation to use such access in ways that strip away civil liberties and basic expectations of privacy. Unfortunately, the courts seem to have very little recognition of the scope of the issue, and there's almost no incentive for Congress (and certainly the executive branch) to do anything at all to fix this.

from the tpp-vs.-wcit dept

We've been covering two big stories lately: the ongoing negotiations over the TPP agreement (Trans Pacific Partnership), as well as the upcoming fight over key internet governance issues via the UN's ITU (International Telecommunication Union) as part of WCIT (the World Conference on International Telecommunications). There's an interesting contrast between these two discussions. Both have been hit with accusations of government bureaucrats keeping things way, way too secret. The ITU has been notoriously secret, despite claims of opening up (which generally involve releasing redacted versions of documents that had already been widely leaked, un-redacted, much earlier). Then we have the USTR, claiming unprecedented transparency, even as it seems to think that "transparency" means getting people to testify about a document they're not supposed to have seen.

Of course, part of the USTR's claim to "transparency" rests on the fact that it has a variety of ITACs (Industry Trade Advisory Committees) who have more or less full access to the negotiating positions of the US -- which the public and Congress do not. But those ITACs are limited to just a few industries -- often the legacy industries seeking greater protectionism, not the up-and-coming innovators who are more important for economic growth. Basically, these ITACs are locked up tight and dominated by a clearly protectionist attitude toward copyright.

The WCIT process has its own ITAC as well -- though, in that context, it stands for "International Telecommunications Advisory Committee." More or less the same idea.

Except there's one major difference. Whereas the ITACs having to do with TPP have been quite secretive, the flipside is happening with the ITACs related to the WCIT fight. And it's the US driving the transparency. There are lots of reasons to be concerned by the ITU WCIT process, but the US government is making it easy for the public to participate:

Join me and make a difference. 303,000,000 Americans have just been offered access to the notoriously secret ITU WCIT documents. Just join ITAC, the State Department International Telecommunications Advisory Committee, and enjoy access. “It takes a simple email with a request to be placed on the ITAC listserv, based on some material interest in a given topic,” Paul Najarian of State writes. Simply send an email to join ITAC_Listserve_Requests@state.gov and you automatically have access to ITAC.

So, basically, anyone can join the ITAC concerning WCIT. That's quite different than with TPP, certainly. Oh, and then there's this bit of transparency, straight from the State Department. The following is an email from Terry Kramer, the US Ambassador and head of the US delegation dealing with WCIT:

First, we welcome all interested stakeholders to participate in our WCIT preparatory process and help the U.S. Government form positions in advance of the conference. We solicit this input and feedback through the United States International Telecommunications Advisory Committee (ITAC). I believe that the ITAC process is critically important in helping the U.S. Government convene the type of open, public, and necessary consultations from all stakeholders that helps strengthen our positions in advance of the WCIT. The ITAC has advised the Department of State on U.S. participation in international telecommunications treaty organizations such as the International Telecommunication Union for decades and has, accordingly, been critical in the preparation of prior U.S. positions for meetings of international treaty organizations, developing and coordinating proposed contributions to international meetings and submitting them to the Department of State for consideration. For the WCIT, the ITAC will continue to serve this critical role. Therefore, we welcome any person and any and all organizations, whether corporate or non-profit, to participate in the ITAC if they would like to assist with the WCIT preparatory process.

Second, all WCIT preparatory documents – including revisions of the TD-62 compilations of Member States proposals, the final report of the Council Working Group, and Member State proposals – have been and will continue to be made available to interested ITAC members. It is imperative that we ensure full consideration of a WCIT proposal’s impact on economic growth, the Internet’s openness, and the world at large and this is best done through the adoption of open and transparent processes that allow for wide consultation. Thus, we will continue to share these WCIT documents with stakeholders so that they can provide more informed views and help us develop positions that reflect the input of the diverse range of interests in the United States.

Starting this week, I will proactively communicate our positions on participation and document availability to underscore the US Government’s commitment to
transparency.

Okay, just to translate, if that's a bit dry: when it comes to WCIT, where the US finds itself on the defensive, suddenly it's a lover of openness and transparency. The State Department readily invites anyone to join an ITAC and promises to quickly reveal all relevant documents it receives. Furthermore, it knows that sharing the documents will lead to "more informed views."

In other words, all of the things that the USTR refuses to do with TPP -- and which it claims are effectively impossible in an international agreement. Of course, the reality seems to suggest that when the US is in control (as with TPP), then it seeks to hoard and limit info, preferring secrecy to openness and transparency. Yet, when it's not, as with the ITU process, suddenly government officials are magically in love with openness and transparency. In those cases, it's willing to let anyone join an ITAC and is willing to share whatever documents it can provide.

All of this really highlights the dishonesty of the USTR. While, yes, the negotiation processes for these two issues are somewhat different, there's no reason the USTR can't take after the State Department in terms of transparency concerning an international negotiation. It just chooses not to, because then experts and the public might stand up and point out why the TPP is dangerous.

from the surprising... dept

One of the more frustrating things about debates on copyright is how many established lawyers in the government seem to refuse to recognize the reality of what's happening in the market, preferring instead to rely on disproven ideas -- such as the claim that without strong copyright protection we get less output, or that there is no way to "compete with free." So it's a bit of a surprise to see a recent paper from John Newman, a Justice Department trial attorney, discussing the nature of what he calls "copyright freeconomics."

Innovation has wreaked creative destruction on traditional content platforms. During the decade following Napster’s rise and fall, industry organizations launched litigation campaigns to combat the dramatic downward pricing pressure created by the advent of zero-price illicit content. These campaigns attracted a torrent of debate, still ongoing, among scholars and stakeholders — but this debate has missed the forest for the trees. Industry organizations have abandoned litigation efforts, and many copyright owners now compete directly with infringing products by offering licit content at a price of $0.

This sea change has ushered in an era of “copyright freeconomics.” Drawing on an emerging body of behavioral economics and consumer psychology literature, this Article demonstrates that, when faced with the “magic” of zero prices, the neoclassical economic model underpinning modern U.S. copyright law collapses. As a result, the shift to a freeconomic model raises fundamental questions that lie at the very heart of copyright law and theory. What should we now make of the established distinction between “use” and “ownership”? To what degree does the dichotomy separating “utilitarian” from “moral” rights remain intact? And — perhaps most importantly — has copyright’s ever-widening law/norm divide finally been stretched to its breaking point? Or can copyright law itself undergo a sufficiently radical transformation and avoid the risk of extinction through irrelevance?

I honestly don't think there's much new or surprising in the paper, though I'd argue that part of the problem is an improper definition of "classical economics." I've long felt that the "behavioral economics" crowd tries to distinguish itself by setting up strawmen about what "classical economics" says -- and this paper has a bit of that. That is, I don't think that the use of "free" in economics breaks classical economic models, unless you set up the model incorrectly, which I think Newman does a bit here, leaving out additional variables beyond "cost" that go into the equation. That said, that's a nitpick: the overall point stands. Free can be described perfectly well by classical economics or by newer behavioral economic models, showing how it fits into a reasonable market rather than destroying it. And, in the end, Newman seems to come to the same realization, even if we disagree about how it fits into classical economics: free isn't horrifying, it's part of the economic landscape, there are ways to view that as a good thing, and the paper generally supports that view.

The other interesting bit of the paper is Newman's suggestion for changing copyright law in a way that might actually make traditional "maximalists" and "minimalists" both happy: expand moral rights under copyright, and let copyright holders effectively choose whether to enforce the "economic" rights to exclude (by going after statutory damages) or, alternatively, the "moral" rights to protect their reputation. His argument is that this might fit better with the nature of content creation today:

Because some creators and distributors are now realistically motivated solely by non-pecuniary incentives while others are motivated by pecuniary ones, yet both groups often create the same "types" of works, segregating rights based on type of work (as does the current legal structure) is likely an inefficient means of incentivizing authorship and dissemination. Instead, copyright law could be altered such that copyright owners may choose to enforce one of two bundles of utilitarian-based rights: either the pecuniary-focused rights (reproduction, distribution, et al.) or the social-status-based rights (attribution and integrity). This structure would operate somewhat similarly to the current remedies structure, under which copyright owners can choose to pursue either actual damages (and/or lost profits) or statutory damages. Importantly, it would allow creators and distributors -- who are in the best position to do so -- to self-segregate based on primary incentive type. Thus, such an enforcement structure may well be a much more efficient means of stimulating creative output than our current set of copyright laws.

I'm not convinced that this is really such a wise course of action, and I'm a bit nervous about expanding moral rights for a whole host of reasons. But it is an interesting thought exercise to wonder if a limited set of moral rights might limit crazy cases with ridiculous statutory damages -- giving copyright holders an alternative for what they're really after in at least a segment of copyright lawsuits.

But what's more interesting is that a DOJ lawyer would be exploring this topic at all. While Newman is explicit that these are his views alone, and do not represent the DOJ in any way, I think it's a good sign to see that at least one DOJ lawyer is grappling with this topic, rather than taking the traditional "the law is the law" view in which "free" is clearly bad and destructive towards the economy. Hopefully more of this kind of thinking and economic explorations filter through to others in the government as well.

from the make-it-stop dept

There were actually a few different interesting events happening in San Francisco last night, all of which were tempting, but it was impossible not to head over to The Commonwealth Club to hear former Senator and current MPAA boss Chris Dodd being interviewed by former SF mayor and current California Lieutenant Governor Gavin Newsom. Given pretty much everything we've written about Dodd during his short tenure at the MPAA, I could have guessed most of what he was going to say... and, indeed, there were few surprises.

As in the past, he stuck to his favorite themes since the defeat of SOPA: pretending to extend an olive branch to Silicon Valley and talking about how we all need to "work together," while ignoring that Silicon Valley has tried repeatedly to help Hollywood innovate, and every time we're called thieves for doing so. Or, worse, Hollywood starts demanding ever-increasing fees, making it impossible to build a profitable business, or innovators are told to make the product worse to slow the inevitable move into the future. What Dodd really means is not that he wants Silicon Valley to help Hollywood innovate, but that he wants Silicon Valley to figure out ways to prop up the obsolete parts of Hollywood's business models with technological forms of protectionism.

As per usual, Dodd also tried to completely ignore the fact that there were many, many times during the crafting of SOPA and PIPA that the tech industry asked for a seat at the table, and Dodd's MPAA rejected it. He ignored the fact that, during the height of the debate, when Senator Feinstein tried to broker a meeting between top tech companies and Hollywood studios, it was the MPAA studios who rejected the meeting. When asked directly (after the on-stage interview) about the failures of the MPAA itself to actually work with the tech industry, Dodd more or less tried to pass it off on past MPAA leadership, despite much of it happening under his watch.

And, of course, Dodd continues to focus on the tech industry as the party he needs to talk to... and not the public. This, honestly, is the biggest problem and misconception in Dodd's approach. He's still viewing this as a fight between the tech industry and the movie industry. He still hasn't figured out that it was really the users of technology -- i.e., the public at large -- who form the key party here. While speaking at the Commonwealth Club is one way to "reach out" (though it didn't seem like there were many tech industry folks there), those aren't the people he needs to reach (I would guess that the majority of the audience were AARP members). What Dodd could have done is actually meet with the public. He could have gone on Reddit and done an AMA. Even the President of the US can do that -- why not Chris Dodd?

Perhaps it's because Dodd and the MPAA know that the folks on Reddit would actually fact check his bogus statements in real time.

Because if there's one other common thread through Dodd's speeches since the whole SOPA/PIPA fight blew up, it's that he often has a rather loose relationship with something called "facts." And last night was no exception. He, once again, argued that the movie industry employs 2.1 million people. As the Congressional Research Service has shown, the actual number is 374,000 -- oh, and it's growing, except possibly at theaters, but that's got everything to do with consolidation, not copyright issues.

Dodd's bizarre move of the night was to use The Hurt Locker as his key example of why we need greater copyright protectionism. He argued that the movie was a financial disaster, because of piracy. Unfortunately, the evidence says... no freaking way. The movie had a production budget of $15 million. Yet, it made $17 million in the domestic box office, $33 million in the international box office, and then another $34 million on DVD. And that doesn't count any additional licensing, such as for Netflix streaming or TV broadcast. So, between box office and DVD rentals, we're talking a take of $84 million on a $15 million production budget. Another report claims that the movie was rented 8 million times, and was purchased on pay-per-view or VOD another 3 million times by mid-2010 (and probably plenty more since then). So there's likely to be a few more millions to pile on top there.

Now, that doesn't include the marketing budget, but the same report that details the rentals also highlights that the studio behind The Hurt Locker, Summit Entertainment, didn't spend that much on marketing the flick. In fact, people in the article complain that "Summit is not spending any money." Even if we go crazy and assume that Summit spent twice the production budget on marketing (so another $30 million in marketing the film), it seems pretty clear that the movie did quite well. To argue that it was in trouble due to piracy is simply hogwash.
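The back-of-the-envelope math above can be sketched out explicitly. This uses only the figures cited in this article (production budget, box office, and DVD numbers), and the doubled marketing budget is the same deliberately generous assumption made in the paragraph above, not a reported figure:

```python
# Revenue figures for The Hurt Locker, as cited in the article (USD).
production_budget = 15_000_000
domestic_box_office = 17_000_000
international_box_office = 33_000_000
dvd_sales = 34_000_000

# Total take from box office plus DVD -- the $84 million cited above.
revenue = domestic_box_office + international_box_office + dvd_sales
print(revenue)  # 84000000

# Deliberately generous assumption: marketing at twice the production
# budget (i.e., $30 million), even though reports suggest Summit spent
# far less than that.
assumed_marketing = 2 * production_budget

# Even under that assumption, the film is comfortably in the black --
# before counting VOD, pay-per-view, Netflix, or TV licensing.
profit = revenue - production_budget - assumed_marketing
print(profit)  # 39000000
```

Any reasonable set of assumptions leads to the same conclusion: the movie turned a healthy profit.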

Even worse, Dodd conveniently left out that the producers of The Hurt Locker sued tens of thousands of fans, and called any fans who criticized this bizarre move morons and thieves. He also ignored that among those that the producers sued was a dead man. So far, this strategy of suing fans has not met with legal success. Either way, you'd think such things would be relevant, but Dodd didn't mention them at all. In fact, quite bizarrely, he later claimed that one of the things the movie industry learned from the failures of the recording industry was that suing "the kids" who are file sharing is "misguided."

And yet his one shining example of a movie decimated by piracy (even though it wasn't) is a film whose producers directly sued over 20,000 of "the kids" and continues to do so? Really?

Perhaps this is why Chris Dodd doesn't want to have an open discussion with the public. The public might call him out (and, if you were wondering, people could only submit written questions at the event, rather than getting to stand up and ask).

When Dodd was asked about The Innocence of Muslims film, after first distancing himself from it and noting that it was not an MPAA production, he delivered a stirring defense of free speech, directly arguing that he "gets uncomfortable" with the idea of the movie industry "becoming a cop on speech." That's kind of funny, because so many of his efforts are about forcing others -- mainly the tech and broadband industries -- to "become cops" on expression.

There were a couple points at which Dodd went into his current favorite stump speech. Newsom asked him a question about whether Hollywood was "all red carpets." That had to have been fed to him by Dodd, who has been using the line about how Hollywood is not all red carpets for months now. He then does his "pull on the heartstrings" bit, about how the makeup artist and "the guy behind the microphone" are all suffering because of piracy -- but he fails to explain how. Again, the industry is making more films than ever before, and they're actually doing pretty damn well. He also ignores the real reason why those people might be suffering: because they're union employees, and the big MPAA studios have been trying to do non-union productions or move filming offshore to avoid having to pay American salaries.

Finally, he did the politician thing where he made statements that he'll ignore later or weasel out of at some point. He talked about how he would "do anything and everything... to protect the vitality of the internet." Yet, it was under his watch, and via direct MPAA suggestion and later pressure, that both SOPA and PIPA included DNS blocking which would have undermined the internet in a big, bad way. In fact, from what we've heard, even when Congress talked about dropping DNS blocking early on, it was Dodd's MPAA who was adamant that it had to stay in. Later, he also claimed that SOPA and PIPA were dead and that they needed a completely different approach. When asked directly afterwards, he insisted that he didn't think there would be any more legislation... but, of course, he left out the international trade forums that the MPAA has its fingers deeply in. Things like ACTA and TPP are heavily influenced by the MPAA, and while ACTA is on life support, the TPP is still very much alive, and may be significantly worse. So, don't think for a second that the MPAA isn't still pushing legislative and regulatory "solutions" to its perceived problem.

All in all, there was nothing too surprising, but it all highlights, yet again, how Chris Dodd is absolutely the wrong person for the job. There was no visionary talk. There was no recognition of a truly new approach. There was no recognition of the public's concerns. There was no realization that the talk needs to be with the public, not with top execs from a few big tech companies. In other words, he's still doing business as usual, when what the MPAA really needs is a visionary who will actually recognize that the path forward is learning to embrace, not fear, innovation, and working with the public to understand what they want and to try to fulfill that. The MPAA needs a visionary right now, and that's not Chris Dodd.

from the how-does-NOW-sound? dept

Well, if nothing else, you can't knock Death Grips' work ethic. After becoming an indie sensation with their critically acclaimed 2011 debut, "Exmilitary" (still available for free on Soundcloud), Death Grips signed with Epic and released "The Money Store" in April 2012.

Rather than rest on their newly-signed laurels, Death Grips announced that they would release another album in October. And release it they did, only without Epic's involvement or blessing. The unofficial release of their third album began with this tweet:

The label wouldn't confirm a release date for NO LOVE DEEP WEB "till next year sometime"

Death Grips was looking to put another album out in October, and if Epic couldn't keep up with their release schedule, so be it. Another tweet followed, implying that Epic itself hadn't even heard the new album yet.

And away they went, dumping their brand new album into various file lockers and tweeting the links to every new upload and blog entry referring to their impromptu release party.

We only have Death Grips' version of the events at this point, but it looks as though release date negotiations must have gone off the rails sometime on September 30th. A string of tweets paraphrasing a sample used on "Exmilitary's" first track, "Beware," set the stage:

He came to me with money in his hand
He offered me I didn't ask him. I wasn't knockin someones door down. I was running from that.
I looked at it and said this is a bigger jail that I just got out of.
I run the underworld guy. I decide who does what and where they do it at.
What am I gonna run around and act like I'm some teeny bopper somewhere for someone else's money?
I ROLL THE NICKELS. THE GAME IS MINE. I DEAL THE CARDS.

2. The album art (definitely NSFW -- unless you're treating this person for erectile dysfunction or are This Guy) was still under discussion.

3. Epic wasn't happy with Death Grips topping the Bittorrent charts, legal or no. Death Grips seems to be fine with it. They're still giving away their first album at Soundcloud (although you're more than welcome to purchase it). Possibly they were considering "alternate distribution" and Epic iced the album in order to talk some sense into them. Not that this plan worked...

So, what have we learned? For starters, pissing off your artists in this day and age can have some serious repercussions, especially if you're in the business of collecting a chunk of every album sold. Windows are made to be broken. Buyer beware. Etc. Does this mean you should kowtow to every demand from your signed artists? No, but this does mean that setting release dates arbitrarily simply won't work anymore.

You also might want to take a good look at the artist you're signing and ask yourself, "Is this a good fit for a major label?" Between the explicit album cover, the Bittorrent numbers, the abrasive, uncompromising musical style, the fact that their first album sounded "like it was recorded under someone's house with a webcam" and the general volatility of the recording industry, maybe everyone involved should have realized it was never going to work out.

from the have-we-done-anything-useful? dept

Since September 11th, the government has often had something of a blank check (and an equivalent lack of oversight) for anything labeled as part of an anti-terror effort. As such, it should hardly come as a surprise that such programs are wasteful, possibly fraudulent, bad for civil liberties and (oh yeah) anywhere from completely useless to actively harmful in fighting terrorism. A Congressional investigation into the Department of Homeland Security's (DHS) "fusion centers," which were supposed to be a key force in anti-terrorism efforts, presents an absolutely scathing condemnation of the effort.

The Subcommittee investigation found that DHS-assigned detailees to the fusion centers forwarded "intelligence" of uneven quality - oftentimes shoddy, rarely timely, sometimes endangering citizens' civil liberties and Privacy Act protections, occasionally taken from already-published public sources, and more often than not unrelated to terrorism. The Subcommittee investigation also found that DHS officials' public claims about fusion centers were not always accurate. For instance, DHS officials asserted that some fusion centers existed when they did not. At times, DHS officials overstated fusion centers' "success stories." At other times, DHS officials failed to disclose or acknowledge non-public evaluations highlighting a host of problems at fusion centers and in DHS' own operations.

Oh, and did we mention how wasteful they were? Apparently, taxpayer money simply "disappeared" into the program, often being spent on totally unrelated things like flat-screen TVs:

The Subcommittee investigation also reviewed how the Federal Emergency Management Agency (FEMA), a component of DHS, distributed hundreds of millions of taxpayer dollars to support state and local fusion centers. DHS revealed that it was unable to provide an accurate tally of how much it had granted to states and cities to support fusion centers efforts, instead producing broad estimates of the total amount of federal dollars spent on fusion center activities from 2003 to 2011, estimates which ranged from $289 million to $1.4 billion. The Subcommittee investigation also found that DHS failed to adequately police how states and municipalities used the money intended for fusion centers. The investigation found that DHS did not know with any accuracy how much grant money it has spent on specific fusion centers, nor could it say how most of those grant funds were spent, nor has it examined the effectiveness of those grant dollars. The Subcommittee conducted a more detailed case study review of expenditures of DHS grant funds at five fusion centers, all of which lacked basic, "must-have" intelligence capabilities, according to assessments conducted by and for DHS. The Subcommittee investigation found that the state and local agencies used some of the federal grant money to purchase: dozens of flat-screen TVs; Sport Utility Vehicles they then gave away to other local agencies; and hidden "shirt button" cameras, cell phone tracking devices, and other surveillance equipment unrelated to the analytical mission of a fusion center.

Of course, this kind of thing isn't all that uncommon. I remember a story from nearly a decade ago about all the money designated for things like E911 services instead being used to pay for boots and pens. We recently wrote about the failure of a NY City program that spied on Muslims to turn up a single lead, but this takes that kind of failure to a whole new level.

Of course, the scary part in all this isn't just the misuse of funds or the failure to produce anything relevant. It's that what was done almost certainly violated the public's rights. And apparently, such violations of civil liberties were a very common problem.

The inappropriate reporting appears to have been a regular problem. An April 2009 email from an alarmed senior I&A official stated: “[State and Local Fusion Center officials] are collecting open-source intelligence (OSINT) on U.S. persons (USPER), without proper vetting, and improperly reporting this information through homeland information reporting (HIR) channels,” wrote Barbara Alexander, then director of the Collection and Requirements Division, which oversaw HIR reporting. “The improper reporting of this information through HIR channels is likely a result of a lack of training on proper collection and reporting procedures . . . they are inadvertently causing problems.” In an interview with the Subcommittee, Ms. Alexander said she recalled being told the Reporting Branch was “flooded” with inappropriate reporting. “A lot of information was coming in inappropriately,” she remembered. “The information was not reportable.”

[....] Ms. Schlanger’s presentation, a copy of which DHS provided to the Subcommittee, indicated that areas in which DHS intelligence reporters had overstepped legal boundaries included: reporting on First Amendment-protected activities lacking a nexus to violence or criminality; reporting on or improperly characterizing political, religious or ideological speech that is not explicitly violent or criminal; and attributing to an entire group the violent or criminal acts of one or a limited number of the group’s members.

The investigation goes on to quote numerous examples of "reports" prepared on information that DHS is not allowed to report on as it violates civil liberties.

In the end, as with so many "anti-terror" programs, what we have is a program that took in a ton of taxpayer funds with almost no oversight as to what happened to those funds (DHS can't even say whether the total was $289 million or $1.4 billion), produced no intelligence of any use, and undertook plenty of efforts that were clearly beyond the mandate of Homeland Security. And all of this is supposed to make us feel safer?

from the shrink-shrink-shrink dept

Famed economist Gary Becker and appeals court judge Richard Posner have long teamed up to publish The Becker-Posner Blog, in which they pick key issues and each discusses the same issue in separate posts. It's a really great blog, and we've mentioned it in the past -- in a situation where we disagreed with Posner's suggestion that copyright should be expanded to help newspapers. More recently, we've noted that Posner's been very interested in patent issues, and has been somewhat vocal about how the system is mostly broken. So it's no surprise that patents are a recent topic on the blog.

Posner's contribution actually touches on both patents and copyrights, both of which he admits seem to be excessive, though (somewhat surprisingly) he argues that patents are a bigger problem. I get the sense that he hasn't spent that much time on copyright issues, given some of the statements he makes. Perhaps if he explores the issue more deeply he'll realize that some of the problems are just as serious, if not more so, in copyright law.

Posner starts with the premise that IP works in cases where there are high capital expenditures for creation/invention and low barriers to copying -- but that it doesn't work otherwise. There's increasing evidence that the premise is a bit faulty, and there are reasonable questions about whether patents and copyrights really are the best tool even in those high capital expenditure cases, but his recognition that the system barely works at all otherwise is welcome. He falls into the cliche of basically comparing pharma patents (which he claims work) to software patents (where they clearly don't) -- and suggesting that it's merely about recognizing that the costs and benefits differ across industries. His conclusion, though, is that the "costs" probably outweigh the benefits in most cases:

The pharmaceutical and software industries are the extremes so far as the social benefits and costs of patent protection are concerned, and there are many industries in between. My general sense, however, bolstered by an extensive academic literature, is that patent protection is on the whole excessive and that major reforms are necessary.

I think if he explores the issue more deeply, he'll realize that the different impact on different sectors is more a symptom of the problem with how the system is set up, rather than the problem itself.

On copyrights, he recognizes problems, but seems to think it's more limited, arguing that the law treats different industries differently (his basic suggestion for fixing patents):

For example, when recorded music came into being, instead of providing it with the same copyright regime as already governed books and other printed material, Congress devised a separate regime tailored to what were considered the distinctive characteristics of music as a form of intellectual property. Patent law could learn from that approach.

That's a... generous retelling of copyright law. It is true that Congress has tried to duct-tape "fixes" onto copyright law when innovation challenges the status quo, but doing so has also created newer and bigger problems, as the different "rights" start to overlap and blur together. Is a stream a performance, a distribution or a reproduction? All three? These things get complex fast -- and in part that's because of Congress's constant patching without recognizing the longer-term impact.

Posner also has this oddity:

The problem of copyright law is less acute than the problem of patent law, partly because copyright infringement is limited to deliberate copying; patent infringement does not require proof even that the infringer was aware of the patent that he was infringing.

I think "deliberate" is a slightly sloppy word choice here. He's correct that copyright (mostly) requires actual copying, but the copying need not be "deliberate" in the traditional sense. Law prof. John Tehranian has famously covered just how much we accidentally infringe on a daily basis. Posner seems to be suggesting that to infringe copyright, you must know you're infringing -- but these days that's true in fewer and fewer cases. That said, he does still recognize some key problems:

Nevertheless, as in the case of patent law, copyright protection seems on the whole too extensive... The most serious problem with copyright law is the length of copyright protection, which for most works is now from the creation of the work to 70 years after the author’s death. Apart from the fact that the present value of income received so far in the future is negligible, obtaining copyright licenses on very old works is difficult because not only is the author in all likelihood dead, but his heirs or other owners of the copyright may be difficult or even impossible to identify or find. The copyright term should be shorter.

He also argues that a good solution would be a much broader and clearer definition of fair use -- something we agree would be helpful:

The problem is that the boundaries of fair use are ill defined, and copyright owners try to narrow them as much as possible, insisting for example that even minute excerpts from a film cannot be reproduced without a license. Intellectual creativity in fact if not in legend is rarely a matter of creation ex nihilo; it is much more often incremental improvement on existing, often copyrighted, work, so that a narrow interpretation of fair use can have very damaging effects on creativity.

The various harmful effects of the patent and copyright systems encouraged Arnold Plant, an English economist, to publish over 75 years ago two influential articles on why England and other countries would be better off without patents and copyrights. Among other things, he argued that the temporary monopoly power given to patent holders often led to inefficiently high prices and reduced output of patented goods, that many persons would continue to invent even if they could not patent their inventions, and that patents distort innovations in favor of goods and processes that can be patented and away from innovations that cannot be patented. His favorite example of the latter is basic research in the sciences that produced the theory of relativity, the theory of evolution, and in more modern times our understanding of DNA and genes. He believed that the patent system induced some creative scientists to work in areas that could be patented rather than on basic scientific research that could not be patented.

He notes that, in the end, getting rid of them entirely would upset too many apple carts, and he basically follows Posner's reasoning that high cap-ex + low barriers to copying should get protected, while everything else should be excluded:

Probably the best solution would be to maintain the patent system on drugs and a few other products that are expensive to innovate and cheap to copy, and eliminate patents on everything else. In particular, this means eliminating patents in the software industry, the source of much of the patent litigation and patent trolling.

He admits that he's not sure where to draw the line, and his best suggestion is to "start by eliminating the ability to patent software, and go on from there to prune the number and type of inventions and innovations that are eligible for patent protection." That seems to be an admission that there's a problem, but not a deep enough understanding to know how to fix the system. Once again, it seems like a case of targeting the symptoms rather than the cause.

All in all, I find the argument from Boldrin & Levine about why the system should be done away with completely a lot more compelling. In it, they specifically note Posner's occasional blindness to the root of the problem and his focus on symptoms. I'd like to believe this is a question of familiarity with the subject matter. The solutions and ideas that Becker and Posner put forward are similar to what we often hear from people who see the problems of the patent system but haven't fully thought through the issues. One hopes that as they explore the details more thoroughly, they might realize that the problems run deeper than they seem to believe.

the internet has become a part of the public space where new forms of cross-border trade are achieved, along with innovative market development and social and cultural interaction

and then in line with the reference in its title to a "Digital Freedom Strategy", it

calls on the Council and the Commission, in the context of free trade agreements, to consider the possibility of implementing objective and transparent safeguards aimed at preserving unrestricted access to the open internet and ensuring the free flow of information and related services in accordance with existing legislation

before plucking the following surprising statement out of nowhere:

Is aware that there is concern that some people increasingly hear the word copyright and hate what lies behind it;

But that's nothing to what comes afterwards:

Calls on the Member States and the Commission to develop IPR policy in order to continue to allow those who wish to create their own content and share it without acquiring IPR to do so;

Yes, you read that correctly: an official document from the important trade committee of the European Parliament is calling for the option to create without copyright being attached. Had this come from some obscure and informal grouping, buried deep in the bowels of Brussels, and infested with pirates, such a call might be dismissed as simply a wacky and totally irrelevant view. But this has been published by one of the main committees, which had just one pirate politician present, but many representatives from other parties that traditionally have regarded the sanctity of copyright as somewhere north of the sanctity of life.

Of course, the proposal stands no chance of being implemented, because EU countries are signatories to the Berne Convention, which requires copyright to attach automatically as soon as a work is "fixed" -- meaning that creation without copyright is not permitted. But equally, this is an official request to another European Parliament committee, the one for Foreign Affairs. It will be fascinating to see how the latter responds to this extraordinary production of the EU machine.

from the stop-it dept

I made this same basic argument almost seven years ago, but it seems that many news websites still think it's a good idea to break up stories into multiple pages. Farhad Manjoo, over at Slate, has an article arguing that paginating long articles is a bad idea whose only purpose is to goose page view numbers and ad views for websites -- it does nothing to make the reading experience better. Somewhat ironically, he's writing this at Slate, which does paginate stories. At least Slate has a "single page" option, which is what I linked to above, though you can look at the idiotically broken-up version if you'd like as well.

From my standpoint, sites that do excessive pagination -- especially if they have no "single page" option -- are automatically less interesting to me. I find Forbes to be one of the worst here. While I think the site has some good reporting, I will often look for alternative sources to link to stories, because I don't want to send users to a page where they'll have to click five times just to read a single story. To me, this makes Forbes look really bad: like it knows it has to trick readers to get page views, rather than trusting its content. If Forbes doesn't trust its own content, why should I?

Thankfully, Manjoo points out that some newer, more innovative sites -- such as Buzzfeed and The Verge (both of which are immensely popular) -- have decided that breaking up stories into multiple pages just doesn't make any sense:

I asked Joshua Topolsky, the Verge’s editor, whether he had a hard time convincing the advertising sales department at the magazine to ditch pages. He said he didn’t: “From the beginning, there's been a company-wide belief that we can marry great advertising with great content and not have to cheat or trick our users,” Topolsky emailed. “And so far, that's proven 100 percent correct. Our traffic has been on a big climb, and I believe advertisers are really beginning to see the true value in engaged users who care (and return) versus sheer volume of pageviews (though our pageviews have also been through the roof).”

Jonah Peretti, BuzzFeed’s founder, echoed this sentiment. BuzzFeed publishes dozens of photo galleries daily, and lately it’s been getting into longform reporting, too. (See Doree Shafrir’s 7,000-word piece on nightmares.) If it paginated, it could boost its pageviews significantly. But it has never paginated, and Peretti suspects the site never will. (Even BuzzFeed’s homepage isn’t paginated—it keeps loading older stories as you scroll.) BuzzFeed can afford to run stories in full because its advertising model—which relies more on “branded content” and not banner ads—doesn’t rely on pageviews. For Peretti, the most important metric for a story is how many unique people click on it, and how widely it’s shared. He says: “If you build things that people are excited about sharing with their friends—if you build things that don’t annoy people and if it’s presented in a user-friendly way—then, long-term, people will share content more, new people will come and check out what you’re doing, people will have more positive feelings about you, and … OK, maybe it’s a little bit utopian of a view, but it’s working for us.”

In other words, what I suspected seems to be true for those sites. If you trust your content, and trust your readers -- and want them to share your content -- you don't break up your stories in annoying ways. You make it easier for your readers to consume and share your content. Hopefully other sites will begin to realize this, though considering how long we've been discussing this, I doubt it.

For the most part, it really does seem that those who go for pagination come from more old-school media businesses -- and perhaps that's not surprising. They look at things through older metrics, such as page views, rather than metrics like how many people share your content. On top of that, I find the arguments "in favor" of pagination questionable. Manjoo asked Slate why it paginates its articles, and was told that readers like it better that way.

“Pages that run too long can irritate readers,” Plotz said in an email. “We run stories of 2,000, 4,000, even 6,000 words, and to run that much text down a single page can daunt and depress a reader. So pagination can make pages seem more welcoming, more chewable.” An editor at another site made a further point that pagination can be a useful signal to readers about the length of an article—if you see an article with 10 pages, you know to set aside a lot of time to read it (or skip it).

Depress a reader? Really? I recognize that there's TL;DR syndrome, but that applies to long articles whether they're paginated or not. Also, the idea that the number of pages is a useful "signal" ignores that we've already got this magic thing called... the scrollbar, which effectively does the same thing.

The whole thing just seems like a rationalization for trying to boost pageviews by annoying readers, and that doesn't seem like a good long term strategy.