The movie finally opened for real and the results -- $16.2 million -- were considered a disappointment. The credulous reporters over at Variety immediately decided to pin the blame on the leak, rather than on the fact that almost everyone agrees the movie sucks and that the third film in a crappy franchise almost never does particularly well anyway. The report points to some research claiming that when a film leaks, "it loses nearly 20 percent of its potential revenue." Variety conveniently leaves out the fact that the research was done via a program "made possible through a gift from the MPAA," which kinda seems relevant....

Meanwhile, it seems worth noting that another study, of the Wolverine leak a few years ago under fairly similar circumstances, suggested that the leak actually helped the film at the box office. At best, Hollywood might legitimately claim that the leaked copy made people realize the movie sucked, so they told their friends not to go -- but then the studios are left arguing that they "made a movie so bad that pirates--who paid nothing to watch--told people it wasn't worth seeing." That doesn't really sound like it's the leak's fault... so much as the fact that the movie sucked.

As always, the same basic rule applies to movies: make a good product and any leak isn't going to have a significant impact at the box office. People go out to the movies for the social experience of it. A good movie is an event. Make a good movie and the fact that it leaks online isn't going to matter much. That's not what happened here.

Around the time of the Jaffe and Lerner book, the USPTO seemed to actually take much of the criticism to heart. One big part of Jaffe and Lerner's criticism was the simple fact that patent examiners had significant incentives to approve patents, and almost none to reject patents. That is, the metrics by which they were measured included the rate of how many patent applications they processed. But, since there is no such thing as a truly final rejection of a patent, people would just keep asking the USPTO to look at their application again. Each time an examiner had to do this, their "rate" would decline, since they'd be spending even more time on the same old patent application. But approving a patent got it off your plate and let the court system sort out any mess. However, after the book was published, the USPTO actually seemed to pay attention and changed its internal incentives a bit to push for high quality approvals. Not surprisingly, this meant that the approval rate dropped. But, since there was more demand for bogus patents to sue over, more people appealed the rejections and the backlog grew.

Patent system lovers started whining about the "backlog," but what they were really pissed off about was the fact that their bogus patents weren't getting approved. Unfortunately, their message resonated with the new regime of the Obama administration, mainly Commerce Dept. boss Gary Locke and USPTO head David Kappos. Back in 2010, we noted that the USPTO had shifted back to approving "pretty much anything" and had clearly lowered its quality standards in an effort to rush through the backlog. Not surprisingly, we were attacked mercilessly by patent system supporters for saying this; they insisted we were crazy, and that David Kappos had found some magic elixir that made all USPTO examiners super efficient (or something like that -- their actual explanations were not much more coherent). No matter what, they insisted it was entirely possible to massively ramp up the number of approvals, decrease the backlog and not decrease patent quality.

Needless to say, we've been skeptical that this was possible.

And now the data is in, suggesting we were absolutely right all along. A new study by Chris Cotropia and Cecil Quillen of the University of Richmond and independent researcher Ogden Webster used information obtained via FOIA requests to delve into what was really going on at the patent office (link to a great summary of the research by Tim Lee). The key issue is (once again) the fact that patents are never truly rejected in full: applicants just keep trying again and again until someone at the USPTO approves the application. And the USPTO, to hide some of this, counts applications that are eventually approved as "rejections," artificially deflating the reported "approval rate" of patent applications.
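The basic shape of that correction can be sketched with toy numbers (hypothetical figures for illustration, not data from the study): if each refiling of a rejected application is counted as a separate "rejected" application, the denominator gets padded and the apparent approval rate falls, even though nearly every distinct application eventually gets allowed.

```python
# Sketch of a "true" allowance-rate correction, using made-up numbers
# (NOT figures from the Cotropia/Quillen/Webster study).

def naive_rate(allowed, total_filings):
    """Allowances divided by all filings, refilings included --
    every refiling pads the denominator."""
    return allowed / total_filings

def corrected_rate(allowed, total_filings, refilings):
    """Collapse refilings of the same underlying application, so the
    denominator counts only distinct applications."""
    distinct_applications = total_filings - refilings
    return allowed / distinct_applications

# Hypothetical year: 100 distinct applications, 88 eventually allowed,
# with 50 refilings of rejected applications along the way.
allowed, refilings = 88, 50
total_filings = 100 + refilings

print(f"naive:     {naive_rate(allowed, total_filings):.0%}")                  # 59%
print(f"corrected: {corrected_rate(allowed, total_filings, refilings):.0%}")   # 88%
```

The gap between the two numbers is the whole game: the same underlying behavior can be reported as either a sub-60% or a near-90% approval rate depending on how refilings are counted.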

When the researchers corrected for all of this, they found that in 2012 almost 90% of all patent applications were eventually approved. 90%! That's about where the rate was in 2004 and 2005 (as discussed above), though in 2001 it actually came close to 100%! However, as noted above, by the second half of the '00s corrections had been put in place and the approval rate had declined to under 70% in 2009 -- meaning the USPTO was actually rejecting bad patents. But over the past three years, the rate has shot right back up. And with an approval rate that much higher, it's clear the USPTO is approving many, many more bad patents.

In fact, it's likely that the story is even worse than before. Back in 2004 and 2005, when the approval rates were similar, the public was not yet aware of just how bad the patent troll problem was, so far fewer people were trying to get their own bad patents to troll over. In the past five years or so that has changed quite a bit, and the number of applications has shot up massively as well. In 2004 there were 382,139 applications. By 2011 that had shot up roughly 50% to 576,763.

I don't think anyone thinks that we suddenly became 50% more inventive between 2004 and 2011. No, the truth is that people were suddenly flooding the USPTO with highly questionable patent applications on broad and vague concepts, hoping to get a lottery ticket to shake down actual innovators. And, the USPTO under David Kappos complied, granting nearly all of them. Incredible.

When Thomas Jefferson put together the first patent system -- after being quite skeptical that patents could actually be a good thing -- he was quite careful to note that patents should only be granted in the rarest of circumstances, since such a monopoly could do a lot more harm than good. And yet, today, we encourage tons of people to send in any old bogus idea, and the USPTO has turned into little more than a rubber stamp of approval, allowing patent holders to shake down tons of people and companies, knowing that many will pay up rather than fight, and then leaving the few cases where someone fights back to be handled by the courts (who seem ignorant of the game being played).

The end result is a true disaster for actual innovation and the economy. We should all be able to agree that bad patents are not a good thing. And the USPTO is, undoubtedly, approving tons of awful patents when its true approval rate is hovering around 90%.

from the the-(record)-needle-and-the-damage-done dept

Neil Young has been unhappy with the state of digital audio for a while, and he's made various overtures about fixing it. Now, some trademark applications found by Rolling Stone suggest his plans are in motion, though details on those plans are scarce. The only real clue comes from a tangential mention in an unrelated press release:

A press release issued last September by Penguin Group imprint Blue Rider Press, which is publishing Young's upcoming memoir, may have revealed the working title of Young's entire project. In addition to the memoir, says the release, "Young is also personally spearheading the development of Pono, a revolutionary new audio music system presenting the highest digital resolution possible, the studio quality sound that artists and producers heard when they created their original recordings. Young wants consumers to be able to take full advantage of Pono's cloud-based libraries of recordings by their favorite artists and, with Pono, enjoy a convenient music listening experience that is superior in sound quality to anything ever presented."

But does Young actually have a new idea? There are already lossless formats like FLAC that some audiophiles swear by, not to mention uncompressed formats like WAV and AIFF. But there is theoretically room for improvement: most uncompressed digital audio is sampled at a rate of 44,100 Hz, but some pro studio equipment can record at twice that, and technologies like DSD can go much, much further. Moreover most consumer audio consists of 16-bit samples, which could be bumped up to 24-bit. So on the technical side, there is the potential for new formats to popularize higher-quality digital audio. Who knows if that's what Young has in mind.
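The raw numbers behind those sampling-rate and bit-depth claims are easy to work out. Here's a small sketch comparing CD-quality PCM against one plausible "hi-res" target (96 kHz / 24-bit is an assumption for illustration; nothing has been announced about Pono's actual specs):

```python
# Raw (uncompressed) PCM data rate: sample_rate * bit_depth * channels.
def pcm_bitrate_kbps(sample_rate_hz, bit_depth, channels=2):
    """Data rate in kilobits per second for an uncompressed PCM stream."""
    return sample_rate_hz * bit_depth * channels / 1000

cd_quality = pcm_bitrate_kbps(44_100, 16)   # CD audio: 44.1 kHz, 16-bit stereo
hi_res = pcm_bitrate_kbps(96_000, 24)       # hypothetical 96 kHz, 24-bit stereo

print(f"CD quality:   {cd_quality:.1f} kbps")   # 1411.2 kbps
print(f"96/24 hi-res: {hi_res:.1f} kbps ({hi_res / cd_quality:.1f}x the data)")
```

Roughly 3.3x the data per minute of music, which is why any hi-res push runs straight into the bandwidth and storage questions discussed below.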

That, however, leads to the bigger question: is there really a market for such a format? The digital audio debate has been raging for years, and it has a lot of contours -- not just the strengths and weaknesses of digital and analog formats, but also changing approaches to sound engineering and the debates over loudness, audio compression and overprocessing. While some audiophiles insist they can tell the difference, blind listening tests have shown they rarely can. For the average listener, convenience, selection and price surely trump such a negligible (and possibly undetectable) quality difference -- and since it sounds like Young hopes to develop a proprietary, cloud-only format, I'm guessing those other factors aren't high priorities.

Moreover, since most people listen to their music on earbuds and other low-definition systems, the quality bottleneck sits much further down the line than the file format -- and since an increasing amount of music is recorded with consumer tools like GarageBand that operate at the standard sampling rates for uncompressed AIFF/WAV files, there's another bottleneck above the file format too. In theory, these factors are part of what Young wants to change with his push towards higher quality, and there may be some potential in that direction over time as bandwidth and storage space increase -- perhaps even some sort of immediate market among audiophiles. But it's hard to see what he could offer that existing formats don't already provide.

I know some people will insist that digital audio sucks, and that they can tell the difference—but frankly that's a meaningless assertion if they haven't done a controlled test. There are simply too many biases to account for. But even if it is a real problem for some people, it is likely to be a very small niche market, not a cultural sea-change like Young seems to envision. Some of his proclamations about the effect of music sound eerily close to Prince's insane ramblings about how audio interacts with the brain, which is hard to swallow. Music may create transcendent human experiences once it's inside your head, but your ears are still made of flesh and bone, not magic. And evidence suggests that most people's ears can't tell the difference.

from the well,-there's-that... dept

Neil Young apparently isn't too concerned about copyright infringement these days, according to his comments at the D: Dive into Media conference:

It doesn't affect me because I look at the internet as the new radio. I look at the radio as gone. [...] Piracy is the new radio. That's how music gets around. [...] That's the radio. If you really want to hear it, let's make it available, let them hear it, let them hear the 95 percent of it.

Of course, that's a bit of a reversal from back when he was angry that YouTube wasn't paying him when people uploaded his songs. Still, it's good to see him come around to the view that infringement is, basically, a new form of radio. Artists like Chuck D have been making that argument for over a decade.

Young is still concerned... but about the fact that the quality of MP3 files sucks. He'd prefer technologies that provide a much fuller sound:

Steve Jobs was a pioneer of digital music, his legacy was tremendous. [...] But when he went home, he listened to vinyl.

from the do-books-need-to-be-expensive-to-be-good? dept

Seth Godin is nothing if not prolific. As well as publishing a string of popular marketing books with catchy titles like "All Marketers Are Liars", "The Big Moo" and "Small Is the New Big", he writes short but smart blog posts every day, some of which are rather obvious, but many of which contain real gems of insight.

This fluency with words means he is well placed to comment on the age of abundance we are entering thanks to the rise of digital technologies. One of his latest pieces is entitled "How the long tail cripples bonus content/multimedia", and appears as part of The Domino Project, "a new way to think about publishing. Founded by Seth Godin and powered by Amazon" -- a partnership that is itself symptomatic of the digital times.

The post is in response to a HuffPo interview with David "Skip" Prichard, President and CEO of Ingram Content Group. Prichard shows himself to be optimistic and surprisingly open to new ideas for someone leading a book distribution company -- not a sector known for its innovation.

But Godin concentrates on one particular aspect of Prichard's replies, which is typified by the following exchange:

Are there enhanced books available this holiday season that have already changed the definition of a book?

Yes, for example, a biography can come to life in many ways. Jacqueline Kennedy: Historic Conversations on Life with John F. Kennedy has all of the interview audios, videos, photographs, text, and transcripts available. Even classics -- Penguin has updated Pride & Prejudice with clips from the movie and even instructions on dancing. For the 75th anniversary of The Hobbit, HarperCollins released an e-version with exclusives including J.R.R. Tolkien's book illustrations and recently discovered Tolkien recordings. Publishers are still learning what added value readers will or won't pay for. I expect we'll continue to see lots of experimentation in this arena.

Godin describes these "breathtaking visions of the future" as "economically ridiculous", and comments:

The Long Tail creates acres of choice, so much as to make the number of options almost countless. But at the same time, it embraces (in every format) much lower production values. For what Michael Jackson and Sony paid to produce the Thriller album, today's artists can make and market more than 5,000 songs. You just can't justify spending millions of dollars to produce a record in the long tail world.

This is an important point that the copyright industries are extremely reluctant to acknowledge, because it's at odds with their business models based on just a few massive blockbusters that are highly profitable. There's a good reason for their preference: the elevated costs involved in creating these works act as a barrier to entry for newcomers, and help preserve the status quo. The new model, based around large numbers of low-cost products, is available for anyone to adopt -- including artists selling directly to their public.

As Godin puts it:

it's not a few publishers putting out a few books for the masses. No, the market for the foreseeable future is a million publishers publishing to 100 million readers.

He explains what that means for ebooks:

The typical ebook costs about $10 in out of pocket expenses to write (more if you count coffee and not just pencils). But if we add in $50,000 for app coding, $10,000 for a director and another $500,000 for the sort of bespoke work that was featured in Al Gore's recent 'book', you can see the problem. The publisher will never have a chance to make this money back.

Finally, Godin addresses the inevitable complaint that the imminent loss of those $500,000 multimedia ebooks -- like the imminent disappearance of $100 million movies -- means the end of creativity as we know it:

The quality is going to remain in the writing and in the bravery of ideas, not in teams of people making expensive digital books.

from the well-that's-not-going-to-play-well... dept

Eric Goldman points us to a very, very interesting new research paper by Atanu Lahiri and Debabrata Dey, showing all sorts of real examples of how "piracy" appears to increase the quality of the related goods being infringed upon. Of course, this counters the "common sense" argument that such infringement inevitably lowers the quality of content, since the creators and distributors of said content can no longer invest as much in it.

The key explanatory factor here: the best way to compete with piracy is to offer a better product yourself. And one way to do that is to increase the quality. For example:

A case in point is the European unit of the cable TV channel HBO, which is fighting against unauthorized distribution of its content by illegal torrent websites by raising the quality of its offerings. The piracy rate faced by HBO is estimated to be between 30% to 50%. HBO has responded to this high piracy rate by churning out new high quality contents in different European languages (Briel 2010). New contents are available through both HBO’s cable TV channels as well as its new IPTV channels. HBO’s innovative offerings have reduced piracy and brought in new subscribers. Valve, a video game manufacturer, has also adopted a similar strategy. Since releasing its game Team Fortress 2 in 2007, it has made frequent quality enhancements, including addition of new weapons and avatars. This strategy has encouraged enthusiastic gamers, who have a strong preference for the latest version, to switch to legal downloads.

The study doesn't just look at such anecdotal cases. It digs into the evidence as well, showing how R&D investment from software companies continues to increase, almost directly in line with claims that "piracy" rates for those companies have increased. The conclusion: less enforcement of copyright laws will likely lead to greater quality output in many cases -- and conversely, in markets facing the same conditions, greater enforcement likely leads to less social benefit as quality decreases. In fact, they find that content creators (or distributors) are likely to increase profits by focusing on product quality rather than enforcement.

Most of the paper focuses on creating and testing an economic model that explains this behavior, and highlights when such factors apply and when they don't, for the purpose of optimizing both policy and an individual copyright holder's response to piracy. That is, they do find some conditions under which the traditional "common sense" view holds, but those appear relatively rare. In fact, one part of the study models a world in which there are "ethical" consumers who don't infringe for ethical reasons -- and finds that in such a world, there tend to be even fewer reasons for increased enforcement.

Of course, when you think about much of this, it makes sense. We've argued from the beginning that there are tons of ways to "compete" with unauthorized access, and providing quality is definitely one such way. It's nice to see this bit of research add depth to the debate, both with real-world examples of this happening today and a detailed economic model that explains the behavior.

And yet... our policy makers continue to think that the best answer is simply to keep on ratcheting up enforcement.

from the ain't-what-the-data-shows dept

One of the key tenets of those who support stronger copyright law and stronger copyright enforcement is the idea that it is a necessary incentive for a great deal of creative output. We regularly hear claims of how creative output would drop without such strong copyright protections. However, the actual evidence has simply not supported this theory at all, with multiple studies showing that even as there was a massive increase in infringement, thanks to the internet, there has actually been a very large increase in creative output as well. But, of course, some will shoot back that the creation of new works alone may not be indicative of what's really going on. After all, what if all of that new music is terrible because the "good stuff" can't make money? Thankfully, it looks like new research is tackling this question.

Hypebot points us to some new research by economist Joel Waldfogel, in which he attempts to determine whether the rise of file sharing has had any significant impact on the creation of quality new music and artists, and his answer is no, it has not. In other words, the very theory underlying an awful lot of the copyright industry's claims simply is not borne out by the evidence. This study does not just take a superficial look at how much new content was produced, but really tries to dig deeper and focus on quality. I won't go into all the details of the methodology, but suffice it to say that it's a creative way of trying to separate out quality, by running a statistical analysis on multiple critics' "best" lists and indices. From there, it looks at how many new albums each year pass specific "quality" thresholds, and finds that, contrary to the theory, there is really no difference in output of quality works pre- and post-Napster.

Even more interesting, this study also appears to debunk the other claim by the recording industry that the rise of file sharing means that new acts are no longer developed and able to grow and release quality albums. In fact, the study finds no support for that claim:

The evidence thus far indicates no decline in the volume of new recorded music products forthcoming since Napster. It is possible, however, that the new music is coming from artists who were established prior to Napster. While products still come to market, it is possible that new artists are not establishing careers.

To explore this we examined the albums on three analogous best-of lists, for the 1980s, the 1990s, and the 2000s, from Pitchfork Media. For each of the 300 albums, we determined the year the artist released his, her, or their first recording (whether an album, a single, or an "EP"). These data allow us to calculate the career age of an artist at the time he has an album on a best-of list. The question is whether artists have continued to establish careers since 1999. To explore this, we calculate the share of best albums since 1999 whose artists’ first recordings appeared after 1999. Since 1999, 49 percent of artists on the best of the 2000s list debuted following Napster. Figure 9 shows this year-by-year pattern: there is a systematic, although not a monotonic, rise from 10 percent of albums in 2000 to 100 percent at the end of the decade. On average, about half of the best-of albums since Napster are from artists whose recording debut occurred since Napster.

Although this is clearly a substantial share, determining whether the launching of new artists has changed requires a comparison with earlier periods. To this end, we calculate analogous annual shares for the two previous decades: the annual share of 1980s best-of albums from artists debuting after 1979, and the share of 1990s best-of albums from artists debuting after 1989. All three patterns are very similar, rising fairly steadily to 100 percent by the end of each decade. A regression of a dummy for whether an artist debuted since the decade of his appearance on dummies for years since the beginning of the decade and a dummy for the post-Napster decade confirms the lack of a statistically meaningful difference in the tendency for new artists to appear on the list since Napster.
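The core calculation in that excerpt -- the share of a decade's best-of albums whose artists debuted after a cutoff year -- is simple enough to sketch with toy data (the albums and debut years below are made up for illustration, not the study's actual Pitchfork data):

```python
# Sketch of the study's debut-share calculation, with hypothetical data.
# Each entry on a decade's best-of list is (album_year, artist_debut_year).

def post_cutoff_share(albums, cutoff_year):
    """Fraction of list albums whose artist first recorded after cutoff_year."""
    post = sum(1 for _, debut in albums if debut > cutoff_year)
    return post / len(albums)

# Toy 2000s best-of list (made-up albums and debut years).
best_of_2000s = [
    (2000, 1994), (2002, 1998), (2004, 2001),
    (2006, 2003), (2008, 2005), (2009, 2007),
]

# Share of the list from artists who debuted after Napster (1999).
share = post_cutoff_share(best_of_2000s, cutoff_year=1999)
print(f"{share:.0%} of the toy list is from post-Napster debuts")  # 67%
```

Running the same calculation on the 1980s and 1990s lists with cutoffs of 1979 and 1989 gives the comparison baseline the authors describe: if new artists had stopped establishing careers after Napster, the 2000s share would lag the earlier decades, and it doesn't.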

In other words, there's no evidence that new artists are no longer being developed or are creating high quality, successful music. So, contrary to the theoretical claims, the evidence shows that more content is being created, despite greater infringement, and that there has been no noticeable decline in quality output or in the development of new artists. So why is it that the industry still makes such claims, and the press and politicians believe them?

Oh, there is one other interesting tidbit in the research: The only real noticeable difference that the research turned up between pre- and post-Napster music production... was that more of the successful new musicians are coming from independent labels, rather than the major labels. For the two decades prior to Napster, the percentage of successful indie artists remained constant, but it jumped post-Napster. That makes sense. The independent labels, for the most part, have been more willing to experiment and embrace new models, while the majors have fought them more. On top of that, artists no longer need to feel as obligated to go through one of the gatekeeper "major labels."

That certainly helps explain why the major labels like to perpetuate these kinds of myths... but not why anyone believes them.

from the not-adding-value dept

For years, the strategy of the entertainment industry has been to come out with "new formats" that more or less require people to rebuy content they've already purchased so that they can use it on modern equipment. One of the things that worries the industry so much about digital content is that it might be somewhat future-proof, in that it can be moved from device to device with ease. Yet that won't stop them from trying. A whole bunch of you sent in this story about how Apple and some of the labels are looking at ways to sell (really "license") higher quality versions of digital music files. Amusingly, almost everyone who submitted this sent it in with some sort of sneering line about how this is clearly yet another attempt by the labels to get people to re-"buy" the same music they had already bought, suggesting an awful lot of people aren't very interested in such a deal. Honestly, if the labels are serious about offering higher quality files, they should let people upgrade their existing authorized versions as a thank you for actually paying, instead of getting unauthorized versions. Otherwise, it seems pretty likely that people will decide to go for the unauthorized option anyway. Consumers aren't stupid, no matter how much some folks in the industry seem to think they are.

from the if-so,-how? dept

I caught most of Commerce Secretary Gary Locke's speech (to a surprisingly small audience) yesterday at CES. There really wasn't that much that was worth commenting on, as it was mostly filled with typical political platitudes, and statements that often were based on questionable assumptions. For example, when he spoke about patents, as he's done before, he talked up the importance of approving more patents faster. But, right after that, he also talked about the importance of increasing the quality of approved patents, and getting rid of bad patents. What he didn't explain is how the USPTO would deal with the inherent conflict. If you speed up the pace of approving patents, you're inevitably going to let more bad patents through. It's nice to just say you want to speed up patent approvals while improving the quality of patents, but you have to at least recognize that the two goals are clearly in conflict. There may be ways to mitigate that (though, I'm not convinced any would actually work all that well in the long run), but it seems like the typical political promises of things that work against each other, such as claiming to want to increase government funded social services, while decreasing taxes. The two concepts are inherently in conflict, but politicians make such promises all the time. Still, if Locke really believes it's possible to bridge that conflict, it would be nice if he actually explained how.

from the and-it-looks-pretty-good dept

A few years back at a Cato Institute conference on copyright, a guy from NBC Universal challenged me with the question of "how will we make $200 million movies?" if content is freely shared. As I noted at the time, that's really the wrong question. No one watching a movie cares about how much the movie costs. They just want to see a good movie. The question for a good filmmaker or producer or a studio should be "how do I make the best movie I can that will still be profitable?" Starting out with a "cost" means that you don't focus on ways to save money or contain costs. You focus on ways to spend up to those costs. That's backwards, and it's how you fail as a business.

Imagine if Dell or IBM or HP went around saying "but how can we profitably make $5,000 computers?" It's a silly question, and it doesn't get you to focus on things like reducing costs. And, it's important to note that technology keeps making the cost of making, distributing and promoting content cheaper. No, it's still not cheap to make movies, but you can make better and better films for less and less money these days.

Jim Harper (who, it should be noted, was the guy who invited me to that Cato event in the first place), reminds us of this with a blog post jokingly entitled How to Make a $200 Million Movie, but which actually shows how it's getting cheaper and cheaper to make a film these days. Specifically, he shows an amusing new short film from Futuristic Films, which looks pretty good and notes in the opening that the whole damn thing was shot with a Pentax K-7 DSLR, which you can find these days for around $800 or so:

After that he shows the following "making of" video that highlights how the filmmakers were able to make such a film for very little money:

Now, no one will claim that the quality is equivalent to a $200 million movie. But it keeps getting better and better and better... at the same time that it's getting cheaper and cheaper and cheaper. Oh, and you might recognize the filmmakers in question. They're the same folks who made the movie Ink and then celebrated when a copy was leaked via BitTorrent, helping the film become incredibly popular, shooting way up IMDB's movie meter, making it (for a time) one of the 20 most popular films on the site, despite being a small indie production.

I would bet these guys aren't going around whining, "but how can we make a $200 million movie?"