from the urls-we-dig-up dept

We've previously pointed out how some charities might need some changes to improve their reputation, public image and effectiveness. Some charities have been vilified for million-dollar CEO salaries or inefficient operations, but in the end, everyone wants to see more net good come from their donations than would have happened by doing nothing, right? Here are just a few examples of charitable organizations getting some good results -- even if their methods may still be debated.

from the if-you're-using-gdp,-you're-missing-the-point dept

For years we've pointed out how GDP (Gross Domestic Product) isn't a great way to measure the economy, especially in the digital age. Even if we assume that GDP can be calculated accurately (and, really, it can't), it's an aggregate piece of information, hiding lots of important things underneath. In the extreme, you could have one massively wealthy person who collects all the money, while everyone else has no money, and you could still see a "healthy" economy in GDP terms. Even worse, when it comes to the information age, GDP calculations get... both terrible and terribly misleading. Part of the problem is assuming that value only comes from things that are paid for. There's always been some element of this problem in traditional GDP calculations when dealing with more informal economies (how do you calculate the GDP of a stay-at-home parent who cares for a kid and cooks the meals?). But, when it comes to the information age, this issue has grown exponentially -- especially since so much online is "free to the user."

On top of that, the ongoing march of technology continues to make things cheaper and better (yay, Moore's Law), but getting a computer that's twice as powerful for half the price shows up in GDP calculations as half the economic output, rather than 4x the value. That's why it's great to see economic historian Joel Mokyr take this issue on in a great Wall Street Journal piece pointing out that too many economists focus on GDP and don't understand the information age.
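
The computer example above can be made concrete with a little arithmetic. This is a toy sketch: the $1,000 price and the performance units are made-up illustrative numbers, and "price times units" is a deliberate caricature of output measurement, not any official GDP methodology.

```python
# Toy illustration of quality-adjusted value vs. measured output.
# Prices and performance units here are invented for illustration.

old_price, old_perf = 1000.0, 1.0  # year 1: baseline computer
new_price, new_perf = 500.0, 2.0   # year 2: twice the power, half the price

# A naive output measure only sees the money that changes hands:
output_ratio = new_price / old_price                # 0.5 -- "half the output"
# But the buyer gets more capability for less money:
perf_ratio = new_perf / old_perf                    # 2.0 -- twice the power
value_per_dollar_ratio = perf_ratio / output_ratio  # 4.0 -- 4x the value

print(output_ratio, perf_ratio, value_per_dollar_ratio)
```

Same transaction, two readings: a price-based aggregate records a shrinking economy, while value per dollar has quadrupled.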

Many new goods and services are expensive to design, but once they work, they can be copied at very low or zero cost. That means they tend to contribute little to measured output even if their impact on consumer welfare is very large. Economic assessment based on aggregates such as gross domestic product will become increasingly misleading, as innovation accelerates. Dealing with altogether new goods and services was not what these numbers were designed for, despite heroic efforts by Bureau of Labor Statistics statisticians.

The aggregate statistics miss most of what is interesting. Here is one example: If telecommuting or driverless cars were to cut the average time Americans spend commuting in half, it would not show up in the national income accounts—but it would make millions of Americans substantially better off. Technology is not our enemy. It is our best hope.

And, yes, economists will argue that they understand the problems of GDP, and yet they still rely on it because there isn't something better. As we've noted, however, when you have a bad metric, even if you know it's a bad metric, you still tend to optimize for that metric. Because that's what you have. Yet optimizing for GDP could actually hinder innovation, creating results that are negative for the well-being of the public, just because of the impact on GDP.

And that leads to bad policies, misdirected concerns and dangerous views on innovation itself.

from the overreacting dept

Wired Magazine recently had an interesting cover story titled How Airbnb and Lyft Finally Got Americans to Trust Each Other. It's a basic discussion of how much of the so-called "sharing economy" is built around trust, and how that's a sort of surprising thing, considering how little people tend to trust each other. Over at New York Magazine, writer Kevin Roose played the grumpy contrarian, arguing that the success of the sharing economy has nothing to do with trust, but is all about desperation. And thus a big debate has kicked off. I'm going to argue that the debate is meaningless, because it's not an either/or situation and neither point really matters much anyway.

First, the whole "desperation" argument is really somewhat nonsensical. You could make the same argument about any major cultural or economic shift in history if you wanted to. Did mass production and the industrial age come about because of desperation? Why, yes, you could show how that's the case as well. But it still created tremendous benefits around the globe and created tremendous progress (even for those who were desperate). I mean, you could equally argue that nearly all work is the result of desperation. If you don't have a job and aren't independently wealthy, you're pretty desperate for a job. But we don't automatically argue that all economic productivity is because of desperation.

Second, the whole "trust" issue is overstated in the Wired article. Even the article itself notes that studies show that Americans actually trust each other a lot less today than in the past (potentially for good reason). And that's because people seem to be confusing general trust with specific trust. What these services enable are ways to have a better sense of who you can trust. In fact, you could argue that what these services have done is help show who you can trust within an inherently untrustworthy population.

But my major takeaway from this argument is that both sides are missing the larger point of why this is so important. Just recently, we were discussing Jeremy Rifkin's new book, in which he argued that this is actually the beginning of an entirely new economic paradigm that eclipses capitalism. But, as I argued in the piece, I think it's just a truer form of capitalism, one that allows for much more efficient use of resources for everyone -- and that's regardless of whether or not there's more trust or desperation in the system.

Prior to industrialization, trust was more prevalent, in part because you would have many, many interactions with the same small group of people. You didn't deal much with outsiders, and tended to know the people you dealt with on a regular basis. That engendered trust, because relationships were built, and you knew that abusing trust would come back to bite you in future interactions. With industrialization and urbanization, some of that trust broke down, because you no longer only dealt with a close-knit community of folks over and over again. You had many more transactions where you likely would never deal with the counterparty ever again -- opening up a lot of opportunity for fraud and scams. Like in the classic Prisoner's Dilemma experiment -- when it's run only once, people tend to cheat. When you know it'll be run many times over, people learn to "trust" each other, because it leads to much better long term outcomes.
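
The iterated-game dynamic described above can be sketched in a short simulation. The payoff values below are the standard textbook numbers for the Prisoner's Dilemma, and both strategies are deliberate simplifications.

```python
# One-shot vs. repeated Prisoner's Dilemma: defection wins a single
# round, but reciprocal "trust" wins over many rounds.

PAYOFF = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds):
    """Run an iterated game; each strategy sees the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

always_defect = lambda last: "D"
tit_for_tat = lambda last: "C" if last is None else last  # start nice, then mirror

# One-shot: the defector exploits the cooperator (5 vs 0).
print(play(always_defect, tit_for_tat, 1))        # (5, 0)
# Repeated play: mutual cooperation beats mutual defection (300 vs 100 each).
print(play(tit_for_tat, tit_for_tat, 100))        # (300, 300)
print(play(always_defect, always_defect, 100))    # (100, 100)
```

The repeated-interaction payoff is exactly the pre-industrial small-community dynamic: when you know you'll transact again, honoring trust is the profitable strategy.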

So, without those regular interactions with the same kinds of people, government often stepped in with regulations to try to effectively force a more trustworthy framework on the world. You had health inspectors for food, safety regulations for work, general regulations on hotels and taxis and a variety of similar laws -- all designed to make sure that these kinds of transactions, which are generally one-offs, can be trustworthy and safe. Given the overall world they existed in, those regulations made perfect sense.

However, as is often the case in a regulatory environment, they also introduced certain inefficiencies in the process, making running those businesses more expensive, locking in certain (perhaps less-than-efficient) business practices, and often keeping out new upstarts and innovators. As we've seen, over time, incumbents (despite claiming to hate regulations) will often embrace such regulations because they keep out competitors.

So here's where the interesting shift has come into play. Things like Lyft and AirBnB are using a combination of transparency and information to create systems that allow for both the more efficient use of resources and more trustworthy transactions, even without those regulations. This freaks some people out, and it clearly does not always work perfectly. But, on the whole, it has created some really amazing new opportunities on both sides of these markets, in which greater information transparency provides a better alternative to legacy regulations, with significantly less overhead. That's freeing up economic resources by increasing efficiency in a really compelling way.

And this is why the traditional players, who had embraced the regulations wholeheartedly, are so pissed off. It does seem unfair that AirBnB can effectively compete with hotels without complying with hotel regulations. But part of the reason it can is that the system AirBnB has created doesn't need those kinds of regulations. While it's just anecdotal, my own experiences using AirBnB have consistently been much better than hotel stays, and the kind of trust built up matches the pre-industrial version much more closely. As an example, I'm actually Facebook friends with one AirBnB host whose apartment I used once, and I will likely stay at his place again in the future. He didn't join AirBnB out of desperation, but because to him it's a great way to run his own business, which he's always wanted to do.

Thus, I think arguing over whether or not these services have increased trust or are a result of desperation is sort of a meaningless argument. It's happening one way or the other. What's much more interesting about this is how it's actually opening up all sorts of new efficiencies and economic opportunities for everyone -- and doing so by using information to show why old regulations, no matter how much they made sense at the time, may be inefficient and obstructionist today.

from the so-of-course-the-entertainment-industry-wants-to-kill-it dept

We've written a few times in the past about research done by Paul Heald on copyright and its impact on the availability of certain content. He's recently published an interesting new study on how the DMCA's notice-and-takedown regime facilitates making content available by decreasing transaction costs among parties. As we've discussed at length, the entertainment industry's main focus in the next round of copyright reform is to wipe out the notice-and-takedown provisions of the DMCA. The legacy recording and movie industries want everyone else to act as copyright cops, and hate the idea that notice-and-takedown puts the initial burden on them, as copyright holders.

However, Heald's research looks at music on YouTube and concludes that the notice-and-takedown system has actually enabled much greater authorized availability of music, by reducing transaction costs. The idea is pretty straightforward. Without a notice-and-takedown provision, someone who wants to post music to YouTube needs to go out and seek a license. Of course, getting permission from all the various rightsholders is frequently impossible; the transaction costs alone are far too high. Yet, with notice-and-takedown, the person can upload the content without permission, and then the copyright holder is given the option of what to do with it. On YouTube, that includes the option of monetizing it, thus "authorizing" the use. That creates a natural experiment for Heald to explore, in which he can see how much content is "authorized" thanks to such a setup. And the result, not surprisingly, is that this system has enabled much greater authorized (and monetized) access to music than an alternative, high transaction cost system, under which uploaders must first seek out permission to upload everything.

In fact, the analysis shows that a tremendous number of popular US music hits from 1930 to 1960 are available in what's likely an authorized (i.e., monetized) fashion, even though nearly all of them were almost certainly uploaded without permission. Under the system that the RIAA and MPAA would like, this would be next to impossible. Instead, they'd want to negotiate deals first, making it nearly impossible for such works to be available, and meaning that both the availability and monetization of those works wouldn't be happening. As Heald concludes:

Congress should resist calls to dismantle platforms like YouTube which take advantage of current limits on secondary liability to create a marketplace that radically reduces the high cost of negotiating over rights to music and visual content. The access YouTube provides to valuable cultural products is far from perfect, but it provides a partial solution to the problem of disappearing works, at least in the music context. In any event, no new legislative initiative should proceed in the absence of concrete data testing the claim of copyright owners that their proposals make works more, rather than less, available to the public.

from the interesting-book,-questionable-premise dept

As regular readers here on Techdirt will know, I've been talking about the importance of understanding what happens to economic equations when the marginal cost of something is zero for over 15 years. It's a very common theme around here. One of my complaints has been that those who come from a worldview in which economics is entirely about the efficient allocation of scarce resources tend to fall into a weird intellectual black hole when they try to put a zero in the equation. But I've long argued that this is the wrong way to look at things. The basic equations still work fine; it's just that you have to recognize that the flip side of zero is infinity. When you have a zero marginal cost item, you are creating an infinite good -- a resource that can never run out. When you begin to realize that you have a new form of resources -- inputs, in economic terms -- suddenly you realize that you're massively expanding the pie, allowing incredible new things to be created from that limitless pool of resources. That's powerful stuff.
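
A small numerical sketch of that "flip side of zero" point follows. The $1M fixed cost is an arbitrary illustrative number; the point is only the shape of the average-cost curve.

```python
# With fixed creation cost F and marginal cost m per copy, the average
# cost per copy is F/n + m. As copies grow, it approaches m -- and when
# m is zero, the cost per copy vanishes entirely.

def average_cost(fixed_cost, marginal_cost, copies):
    """Total cost spread over n copies, plus the per-copy marginal cost."""
    return fixed_cost / copies + marginal_cost

FIXED = 1_000_000  # illustrative: cost of recording an album, writing software

for n in (1, 1_000, 1_000_000):
    print(n, average_cost(FIXED, 0.0, n))
# 1          1000000.0
# 1000       1000.0
# 1000000    1.0
```

The creation cost never changes, but the cost per copy goes to zero: once the first copy exists, the supply of further copies is effectively inexhaustible.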

So, as you can imagine, I was excited when the publisher of Jeremy Rifkin's new book, The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, reached out to send me a promo copy a few weeks ago. I am only halfway through it, so I'll probably write more about it when it's done, and there's an awful lot of really interesting examples and profound thinking going on. So I'm really enjoying the basic part of it. However, there's one aspect of the book that I have trouble with, and it's exemplified in Rifkin's op-ed in the NY Times a few weeks ago, called The Rise of Anti-Capitalism. You can probably already suspect the problem I'm seeing, based on the title. The explanation of zero marginal cost and how more and more of our economy is heading there is spot on. And, as we've been noting for over a decade as well, this goes way, way beyond just "content" like music and movies. It's going to impact nearly every important industry in our lives:

The first inkling of the paradox came in 1999 when Napster, the music service, developed a network enabling millions of people to share music without paying the producers and artists, wreaking havoc on the music industry. Similar phenomena went on to severely disrupt the newspaper and book publishing industries. Consumers began sharing their own information and entertainment, via videos, audio and text, nearly free, bypassing the traditional markets altogether.

The huge reduction in marginal cost shook those industries and is now beginning to reshape energy, manufacturing and education. Although the fixed costs of solar and wind technology are somewhat pricey, the cost of capturing each unit of energy beyond that is low. This phenomenon has even penetrated the manufacturing sector. Thousands of hobbyists are already making their own products using 3-D printers, open-source software and recycled plastic as feedstock, at near zero marginal cost. Meanwhile, more than six million students are enrolled in free massive open online courses, the content of which is distributed at near zero marginal cost.

Frankly, I think the power of zero marginal cost goods -- or, as I prefer to call them, infinite goods -- is almost entirely ignored in energy, manufacturing and education (and, importantly, also in healthcare and finance). So it's certainly encouraging to see Rifkin highlight where this is all heading. Where I run into trouble, however, is his belief that this then leads to "the end of capitalism" or "anti-capitalism." To be clear, he explains how what comes out of this, a more collaborative society, will be a great thing. And, again, there's some agreement there. I just think that it's still very much capitalism. Capitalism does not mean that collaboration does not happen. In fact, collaboration is a key part of a well-functioning capitalist society. Ronald Coase famously laid out his theory of the firm in 1937, explaining how transaction costs are a key element in leading people to create long-term collaboration. A zero marginal cost world will change the nature of those transaction costs, and will certainly change the nature of collaboration and companies, but it's not anti-capitalist. It's arguably a purer form of capitalism, in which collaboration takes place with more transparency and more information. Those who believe that collaboration is anti-capitalist tend to misunderstand capitalism -- either as extremist Randian Objectivists, or as people so opposed to capitalism that they accept the Objectivists' definition of it.

Take, for example, this aspect of Rifkin's argument in the NY Times piece:

THE unresolved question is, how will this economy of the future function when millions of people can make and share goods and services nearly free? The answer lies in the civil society, which consists of nonprofit organizations that attend to the things in life we make and share as a community. In dollar terms, the world of nonprofits is a powerful force. Nonprofit revenues grew at a robust rate of 41 percent — after adjusting for inflation — from 2000 to 2010, more than doubling the growth of gross domestic product, which increased by 16.4 percent during the same period. In 2012, the nonprofit sector in the United States accounted for 5.5 percent of G.D.P.

[....]

This collaborative rather than capitalistic approach is about shared access rather than private ownership. For example, 1.7 million people globally are members of car-sharing services. A recent survey found that the number of vehicles owned by car-sharing participants decreased by half after joining the service, with members preferring access over ownership. Millions of people are using social media sites, redistribution networks, rentals and cooperatives to share not only cars but also homes, clothes, tools, toys and other items at low or near zero marginal cost. The sharing economy had projected revenues of $3.5 billion in 2013.

Except, when you look, the most successful and disruptive examples of this "collaborative" approach are not non-profits or civil society, but rather perfectly capitalist companies that have unlocked tremendous potential for revenue not just for themselves, but for their users. Things like AirBnB, Uber, Lyft, Sidecar, FlightCar, RelayRides, Zaarly, LendingClub, AirTasker, Kickstarter, LiquidSpace and many, many more are disrupting all sorts of industries, but doing so in ways that are actually about the more efficient use of resources, unlocking potential that had previously been locked up (often because the transaction costs were too high). But they're not anti-capitalistic at all. They're making capitalism much better. They're helping to move away from markets where power is held by just a few large companies, toward ones where individuals have more power directly.

These aren't non-profits or civil society creating these disruptions, and it seems odd for Rifkin to imply that's what's happening. That's not to knock non-profit organizations or civil society groups -- both of which do great things in many cases. But it conflates a variety of different issues to argue that the response to a zero marginal cost society and infinite goods is that non-profits and civil society "take up the slack." Instead, what we are seeing is that new forms of (very capitalist) companies are forming. They're disruptive -- but disruptive in a good way. They're often about providing more economic freedom and power out to users, such that the transactions are actually beneficial to all players, rather than having a few large companies hoarding the power in the middle.

But having companies hoard power has never been true capitalism in the first place. That hoarding is what happens when transaction costs are too high -- sometimes driven by political and regulatory capture -- allowing certain firms to gain monopoly or oligopolistic control over certain markets, create economic friction, raise transaction costs further, and keep most of the value created rather than distributing it to the end points. However, the new disruptive players in the market are often reversing that trend. They're increasing trust, decreasing transaction costs, spreading much of the value to the end points, and simply taking a small cut of the transaction along the way. That's not anti-capitalist, or the "end of capitalism" -- it's a better recognition of what true capitalism is supposed to be about: more efficient transactions, with minimal friction, where all parties benefit.

So there's plenty that I find compelling in Rifkin's book and theories, but I think that he makes a leap too far in arguing that it somehow goes against capitalism, or that civil society and non-profits are somehow "the solution" to a problem that's not clearly a problem.

from the you-have-to-be-kidding-me dept

We've argued for years that there are different kinds of middlemen involved in making markets. Some are efficient, leading to better reach, easier access, and more convenient transactions, while some are inefficient, blocking access, keeping prices inflated, and generally limiting a market. We tend to separate these into two camps: gatekeepers, who limit efficiency, and enablers, who increase efficiency. In truth, there's a pretty big spectrum between those two endpoints, and a single company can shift back and forth along the spectrum, acting as a gatekeeper some of the time and an enabler at other times. Historically, it's generally (though not always) been true that disruptive innovators are enablers, breaking down the walls set up by the gatekeepers, making markets more efficient, and generally distributing power away from a central gatekeeper out to the end points (the actual participants in the market, rather than the middleman). However, I had thought that it was at least generally recognized and accepted that gatekeepers tend to be bad for markets, and enablers tend to be good.

Obviously, you'd expect some who work for gatekeepers to disagree, though they tend to disagree by arguing that they're not really gatekeepers. However, perhaps I overestimated some of those who support gatekeepers. I was somewhat shocked to hear recently that when Public Knowledge's Gigi Sohn spoke at the World Creators Summit, she was hissed at and booed for suggesting that gatekeepers are a problem. Apparently, the attendees of the World Creators Summit like gatekeepers who hold back actual creators, set up barriers to reaching a market, and only provide a winning lottery ticket to a very small number of creators. Weird.

I'd thought that perhaps that was a one-off situation. However, the "director of legal policy" for the Copyright Alliance -- a front group for the music labels and movie studios -- Terry Hart, has now written a blog post that could be titled a defense of gatekeepers controlling artists' works. The article is really a summary of a new paper by a law professor, Guy Pessach, who argues that disintermediation is bad in the copyright realm. I will note that this is not a research paper or a study. It's an "essay." And the central theme is to take the rather contrarian position that "disintermediation" (i.e., doing away with gatekeepers) in the copyright market will "undermine cultural diversity, decentralization and authors' welfare." This, despite all evidence to the contrary -- so it's worth a read.

Frankly, the paper is a mess. It more or less misinterprets the whole "disintermediation" argument, saying it's about getting rid of middlemen entirely, rather than moving from gatekeepers to enablers. The paper instead turns into an ill-informed and confused attack on internet companies as the problem. It actually seeks to argue that artists have less power and control when using internet services than they do in signing deals with major labels/studios. Bizarrely, and incorrectly, it tries to argue that internet intermediaries are locked in and static, while suggesting that traditional intermediaries (record labels, publishers, movie studios, etc.) are not.

Additionally, it is both anticipated and apparent that markets for Internet intermediaries are highly concentrated, with very few entities dominating. Since much of the cost of producing an Internet intermediary (design, technological innovation) is unrelated to the number of users of the service, the average cost of providing service to each additional user may fall as the number of users increases. Economies of scale reduce the level of competition. Cost of entry is rapidly rising while strong network effects give advantages to large-scale intermediaries.

But, of course, that's false. Anyone who's paid even the slightest attention to the dynamic in both markets knows that's false. The major labels have dominated the recorded music business for many decades. Ditto the major studios. In the internet world, it's constantly changing. A decade ago, Yahoo was on top. Apple was just starting to return to being interesting. Facebook didn't exist. YouTube didn't exist. Even MySpace didn't exist a decade ago. Kickstarter didn't exist. Twitter, Tumblr, Hulu, IndieGogo, SoundCloud, SongKick, Bandcamp, TopSpin, TuneCore, Pandora, Spotify -- none of them existed. And that list could be much, much bigger. To argue that the internet world is stagnant and unchanging as compared to the recorded music or movie worlds is dumbfounding.

Then there's this bit of insanity:

I begin by referring to authors and creators while presuming that it is authors and creators, rather than traditional corporate media, who are in control of their copyrights. Even so, the bargaining position of originating authors and creators, versus a handful of Internet intermediaries, may be weaker than it was for traditional distributors and corporate media.

Really, now? With traditional intermediaries -- i.e., gatekeepers -- if you weren't able to sign a deal, you basically didn't have a career in music or movies. And so those traditional intermediaries signed ridiculous contracts, in which you gave up your copyrights and nearly all of the royalties. The new enabling companies don't act as a gate. They let anyone make use of them, and they tend to give you full control and ownership of the effort -- you retain your copyright, and you tend to get a much larger percentage of the money earned.

Amusingly, the paper also seems to argue that the wide open internet is somehow less egalitarian than when you had an A&R guy at a major label deciding who the next big music act would be. Really?

It is now apparent and documented that due to network effects and power law distribution, the typology of the Internet is such that there is “a complete absence of democracy, fairness, and egalitarian values on the web . . . . [T]he topology of the web prevents us from seeing anything but a mere handful of the billion documents out there.”

Whereas the old record label system basically cut that off much earlier. It wasn't egalitarian at all. It would sign a very small number of artists, tell the rest to go do something else with their lives, and then select a very few acts each year, putting all of its marketing muscle behind a payola scheme to convince the public, "this is what you like this year."

Pessach also doesn't seem to understand the nature of promotion, and the concept of multiple revenue streams. Take, for example, his "case study" around YouTube, which he trots out to "prove" that artists suffer under the success of YouTube:

YouTube operates a content partnership program that enables creators who upload content to YouTube to earn revenues from advertisements that appear along with their video clips. This is YouTube’s main and only option that enables creators to get remuneration for making their content available to the public.

Actually, no, that's not the only option for monetization. First, YouTube allows links directly to buy the songs in question, on at least iTunes, Amazon and Google Play. YouTube also pays ASCAP/BMI and others a license for streaming, and so artists make money that way, contrary to Pessach's later claims that YouTube never pays for direct usage. To leave all that out suggests Pessach simply is unfamiliar with the site he's critiquing and basically removes all credibility from the argument. Second, it leaves out entirely the nature of indirect benefits to widespread attention on YouTube, which leads to sales, concert tickets, opportunities for licensing and much, much more. And, even if we assume that Pessach is accurate in claiming that the only option is to monetize through ads, the deal still tends to be better than most record label deals, where they take 85 to 90% of any royalties. He suggests it's unfair that artists get a "take it or leave it" deal from YouTube on the revenue sharing, but apparently he's never spoken to an artist who gets a record label contract. It tends to be the same thing, unless they're already a huge star.
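
The comparison above is just arithmetic. In the sketch below, the ~85-90% label take comes from the text (modeled as 87.5%); the 50/50 platform split is a purely hypothetical figure for illustration, not YouTube's actual terms.

```python
# Back-of-the-envelope comparison of what reaches the artist under
# two revenue-sharing arrangements. Numbers are illustrative only.

def artist_share(revenue, intermediary_cut):
    """Money reaching the artist after the intermediary's percentage cut."""
    return revenue * (1 - intermediary_cut)

gross = 1000.0  # dollars of revenue attributable to a work

label_deal = artist_share(gross, 0.875)    # label keeps ~85-90% of royalties
platform_deal = artist_share(gross, 0.5)   # hypothetical 50/50 ad split

print(label_deal)     # 125.0
print(platform_deal)  # 500.0
```

Even under a hypothetical even split, the artist keeps several times what a typical label royalty deal would leave them.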

If we recognize that YouTube is really the equivalent to radio, rather than a label, as Pessach seems to be trying to analogize, then the deal is so much better with YouTube. On radio, first of all, most artists never get any airtime. The few that do often have to have massive payola behind them, and then there are no performance rights royalties (in the US) for the musicians, though there are songwriting/publishing fees to ASCAP and such (but, again, that's true on YouTube as well). On radio there's no choice. On radio there are no direct links to buy as there are on YouTube (which Pessach apparently never noticed). On radio there's no ease of sharing with friends, no embedding to promote the artists you like to your friends. Oh, and there's no revenue share at all, a la YouTube's partner program. Pessach's argument, in short, is to compare apples and oranges, and then misrepresent the apples. Yikes.

He later gets to the crux of his argument, which is basically that the "new intermediaries" "don't finance or invest in the production of content." But, again, he's making a false comparison, pointing to YouTube or Facebook or whatnot, as if they're supposed to do advances. But that's silly, because we're talking about totally different types of intermediaries, ones that are more like radio, than a label. And, it's not like radio ever financed or invested in the production of content either. But if we want to talk about financing the creation of new production of content, let's talk about crowdfunding platforms like Kickstarter, IndieGoGo, PledgeMusic and more. Kickstarter is never mentioned in the paper. Not once. Or how about direct to fan models? TopSpin? Not mentioned. Bandcamp? Not in there at all. And yet, all of those services are used by thousands of artists to finance and "invest" in the production of new content by allowing artists to go directly to their fans and get support.

Instead, the paper keeps going back to YouTube as the problem. But, again, if you compare YouTube to terrestrial radio, using the same "metrics" that Pessach keeps going back to, it seems like YouTube wins every single time. Take, for example, the following:

Finally, in addition to authors’ and creators’ economic welfare, YouTube’s model may also give rise to long-term alienation that creators and authors may feel against their almost only effective channels to exposure and audience attention. Ironically, or not, it is the psychological and sociological motives of creativity (the same ones that underlie the disintermediation movement) which make creators and authors disadvantaged. Creators’ desire to be exposed and gain as much audience attention (and love) as possible to their creative works is a parameter, which further undermines their bargaining position against a handful of dominant networked intermediaries who control the bottlenecks to audience attention.

Beyond the fact that this paragraph is entirely speculative, rather than based on even the slightest bit of evidence, radio is a much much much bigger bottleneck for artists reaching their audience, in that most artists can never, ever get on the radio at all. Yet, somehow Pessach wants to believe that YouTube is worse for artists? Tell that to the growing number of artists like Alex Day, Jack Conte, Dan Bull, Macklemore and others who have built success stories around their YouTube videos.

Pessach's other "examples" of bad internet intermediaries are just as laughable. He points to the widely debunked story about Huffington Post being able to sell for $300 million and not giving any of that money to the bloggers who "made the site popular." Except, most of that story is a myth. HuffPo pays for a large editorial and reporting staff, and many of its most popular stories come from paid staff. For unpaid contributors, it's a tradeoff between whether they want the promotion of the platform. If not, they have a myriad of other options, including setting up their own damn blog. Unlike with the record labels where it used to be either "get signed to a label or go home," those who wish to blog could go in all different directions to make money.

His next example is what he says was "Instagram's failed attempt to commercially utilize, for advertisement purposes, photos that were uploaded by its users." That, again, is a total bastardization of reality. There was a lot of hype about this, but Pessach's interpretation of what happened is wrong. The reality was that a bunch of people didn't understand some boilerplate language used on tons of sites, and assumed, incorrectly, that Instagram was going to put your photos in ads. As the company explained, that had never been its intention at all -- but it was some boilerplate language in the terms, which it quickly changed to clarify for users. Again, when Pessach seems to continually misrepresent things, it really detracts from the argument. No wonder the Copyright Alliance is such a huge fan of the paper.

In the end, the paper is basically just an attempt to tar and feather new enablers that have given many new artists new ways to create, promote, connect and monetize their art -- while bizarrely suggesting, absent any proof, that the old gatekeepers were somehow better for artists. About the only explanation it presents is "advances." Yes, the labels gave out advances to a very small number of artists... and then basically held them as indentured servants while recouping that money, piling on more and more "expenses," and counting only the tiny fraction that is the artists' royalties towards recouping. Alternatively, artists could make use of the new platforms, retain control over their work, and pay only small fees (between 5 and 30% at the top end) for the services provided. Furthermore, the suggestion that these new enablers have meant less diversity in content creation is simply laughable on its face, and deserves no further comment.
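To put those percentages side by side, here's a quick back-of-the-envelope comparison using the figures cited above (an 85-90% label take versus a 5-30% platform fee); the $10,000 gross revenue number is purely hypothetical:

```python
# Hypothetical gross revenue from an artist's work -- the dollar amount is
# made up; only the percentages come from the discussion above.
revenue = 10_000.00

label_take = 0.90    # label keeps up to 90% of royalties
platform_fee = 0.30  # platform fee at the high end (~30%)

artist_under_label = revenue * (1 - label_take)
artist_on_platform = revenue * (1 - platform_fee)

print(f"Artist's share under a 90% label deal: ${artist_under_label:,.2f}")
print(f"Artist's share after a 30% platform fee: ${artist_on_platform:,.2f}")
```

Even at the platforms' most expensive tier, the artist keeps seven times as much of the gross as under a 90% label deal.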

If the various RIAA and MPAA front groups are going to try to push support for gatekeepers, one would hope that they'd come up with slightly more competent arguments that can at least pass the laugh test.

from the please-make-it-stop dept

So... we'd already taken a stab at debunking Jaron Lanier's "gobbledygook economics" a few weeks back when it started appearing, but since then there's been more Lanier everywhere (obviously, in coordination with his book release), and each appearance seems more ridiculous than the last. Each time, the focus is on the following economically ridiculous concepts: (1) there should be micropayments for anyone doing anything free online, because someone benefits somewhere; and (2) modern efficiency via technology has destroyed the middle class. Neither claim makes any sense at all.

So Kodak has 140,000 really good middle-class employees, and Instagram has 13 employees, period. You have this intense concentration of the formal benefits, and that winner-take-all feeling is not just for the people who are on the computers but also from the people who are using them. So there’s this tiny token number of people who will get by from using YouTube or Kickstarter, and everybody else lives on hope. There’s not a middle-class hump. It’s an all-or-nothing society.

The Kodak/Instagram comparison comes up over and over again, and it's moronic. It makes no sense. To demonstrate, let's take something else that's old and something else that's modern that sorta-kinda seems similar, and compare the two: Very, very, very few people make money "auctioning" goods via Christie's. Yet, a few years ago, eBay noted that 724,000 Americans made their primary or secondary incomes from eBay sales, with another 1.5 million supplementing their income. In the simplistic world of Jaron Lanier, this should be proof that eBay is good, and Christie's is bad, right? But, of course that's silly.

The fact that Instagram only employed a few people and Kodak employed a lot says nothing about the impact of technology on modern society or the economic status of the middle class. Even if we take the ridiculous leap and pretend that the two companies are somehow equivalents (and they're not even close), you could just as easily point out that Instagram created a lot more value for people than Kodak did. First off, it didn't involve toxic chemicals that create massive amounts of waste and pollution. Second, because people don't have to buy expensive rolls of film to take pictures any more, they get to save money and put it to better use. Third, because we no longer have to worry about the expense of each photo, people are free to take many more photos, capture more memories and generally enjoy photography more. Fourth, because Instagram makes the sharing of photos much easier, it enables much greater communication among family and friends, building stronger community bonds. I mean, you could go on and on and on.

But I'll let economics professor Donald Boudreaux go even further and explain why the fact that one company or industry once employed a lot of people tells us nothing about whether modern technology is "destroying jobs" -- because it just isn't true.

Mr. Lanier sounds profound, I suppose, to people unfamiliar with history. So let’s re-write Mr. Lanier’s prose just a bit in order to put his fears in historical context:

“At the height of its power, agriculture employed 90 percent of the population and produced output worth vastly more than half of U.S. GDP. It even invented countless plant hybrids and animal breeds. But today nearly all farms of the past have gone bankrupt (or, seeing the economic writing on the wall, were transformed to other uses). Agriculture today employs only about one percent of the workforce. Where did all those jobs disappear? And what happened to the wealth that all those good agricultural jobs created?”

Economic efficiency often shifts jobs around, but it creates a much larger pie, which leads to new job creation. We can reasonably ask whether some people get left behind, or which kinds of skills are favored as industries become obsolete, but the idea that efficiency destroys the middle class is just silly.

But it gets worse. Lanier repeatedly claims that the only reason we have jobs today is because of some sort of "social contract" that has been broken:

We kind of made a bargain, a social contract, in the 20th century that even if jobs were pleasant people could still get paid for them. Because otherwise we would have had a massive unemployment. And so to my mind, the right question to ask is, why are we abandoning that bargain that worked so well?

When did "we" make this "bargain" and, honestly, what is he talking about? There was no such bargain made. Jobs have nothing to do with whether they are "pleasant." And we didn't create jobs to avoid unemployment. We created jobs because there was demand for work, meaning there was demand for products and services, just as there still is today. But he doubles down on this crazy thought and says that when old jobs became obsolete it was this non-existent "social contract" that created new jobs:

Of course jobs become obsolete. But the only reason that new jobs were created was because there was a social contract in which a more pleasant, less boring job was still considered a job that you could be paid for. That’s the only reason it worked. If we decided that driving was such an easy thing [compared to] dealing with horses that no one should be paid for it, then there wouldn’t be all of those people being paid to be Teamsters or to drive cabs. It was a decision that it was OK to have jobs that weren’t terrible.

I'm just left shaking my head here because this statement is so ridiculous and so ignorant that it, alone, should cause people to assume that Lanier knows absolutely nothing about economics or history. New jobs were created because of demand, and because new technologies create efficiencies which create and enable new jobs. It has nothing to do with "decisions" being made or "social contracts." It has to do with efficiency and new things being enabled through innovation.

Next up, Lanier did an interview with Eric Been at the Nieman Journalism Lab and it was more of the same, except here he lays out his "theory" for a massive system of micropayments, in which any time you do anything that provides useful data to someone else, they need to pay you royalties. Let's take one example and think about how insane this process would be:

So, for instance, let’s suppose you translate between languages, and some of your translations provide example phrase translations that are used in automatic translators. You would keep getting dribbles of royalties from having done that, and you start accumulating a lot of little ways that you’re getting royalties — not in the sense of contractual royalties, just little payments from people that are doing things that benefited from information you provided. If you look at people’s interest in social networking, you see a middle-class distribution of interest. A lot of people would get a lot of little dribs and drabs, and it would accumulate over a lifetime so you’d start to have more and more established information that had been referenced by you that people are using. What should happen is you should start accumulating wealth, some money that shows up because of your past as well as your present moment.

Well, if he wants to create jobs, I guess adding the most incredibly massive inefficient bureaucracy and tollbooth for sharing any bit of information is one way to do so. The fact that it would cause the internet economy to come to a screeching halt apparently isn't figured into all of this. There are tremendous benefits to information sharing and information exchange. Putting a price tag and an ongoing royalty on every single such action would make it incredibly expensive to do anything that involves information, which would mean less information sharing, less information exchange and less benefit from that information.

This is the broken window fallacy exploded exponentially for a digital era. It seems to assume that the only "payment" is monetary. That is, if you do something for free online -- share a video or a photo, like a link, listen to a song -- that you're somehow getting screwed because some company gets that info and you're not getting paid. But that's ridiculous. The people are getting "paid" in the form of the benefit they get: free hosting and software for hosting/streaming videos and pictures, free ability to communicate easily with friends, access to music, etc. The list goes on and on, but Lanier seems to not understand the idea that there are non-monetary benefits, which is why various online services which he seems to hate are so popular.

A token few will find success on Kickstarter or YouTube, while overall wealth is ever more concentrated and social mobility rots. Social media sharers can make all the noise they want, but they forfeit the real wealth and clout needed to be politically powerful. Real wealth and clout instead concentrate ever more on the shrinking island occupied by elites who run the most powerful computers.

This is bullshit, plain and simple. Under the "old" system, you had a smaller "token few" who found success via getting a major label contract or having a publisher accept them into the club of published authors. Most people who wanted to create did so as a total hobby and made no money whatsoever from it. You had a tiny tiny few who were able to do so. Today, lots of people can make some money creating -- it may not be a huge amount for all, but almost anyone can make some amount, and some of them can make a good amount, and unlike before the creators themselves are in control now. They're not signing contracts that give away 90% of the revenue and all control over their works. They still control it and they control where most of the revenue goes.

In fact, as I was finishing up this post, I saw the news that Kickstarter had now surpassed 100,000 projects, 44% of which got funded to the tune of $535 million. And the site has only been around for four years (and there are many more crowdfunding platforms). Compare that to the success rate of someone ten years ago who wanted to make a career in music. They likely wouldn't even get in the door. And, even if they did, the labels admit that nearly all signed musicians end up failing out of the system. This system seems a hell of a lot better.
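As a quick sanity check on those Kickstarter figures (100,000 projects, 44% funded, $535 million pledged), the implied scale per funded project is easy to derive; note the per-project average below is my own back-of-the-envelope estimate, not a number reported by Kickstarter:

```python
# Figures cited above; the average at the end is derived, not reported.
total_projects = 100_000
funded_share = 0.44
total_pledged = 535_000_000  # dollars

funded_projects = round(total_projects * funded_share)
avg_per_funded = total_pledged / funded_projects

print(f"Funded projects: {funded_projects:,}")
print(f"Average pledged per funded project: ${avg_per_funded:,.0f}")
```

That works out to roughly 44,000 funded projects averaging on the order of $12,000 each -- in only four years, on a single platform.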

It's as if Lanier is talking about a mythical past that never existed to make some point about the future. But all of the evidence suggests that more people are now able to make use of these tools to create new incomes and new opportunities to make money, while in the past you had to wait for some gatekeeper. Lanier, a beneficiary of the old gatekeepers, may like the old system, but he's confused about history, facts, reality and economics in making this ridiculous argument -- and it's a shame that those interviewing him or publishing his ridiculously misinformed screeds don't seem to ever challenge him on his claims.

from the that's-all-bullshit dept

Intellectual Ventures is at it again -- playing the "oh, little innocent us? we're not doing anything that should really concern anyone" card in the press. Wired is running an op-ed by Raymond Hegarty, IV's "VP of Global Licensing in Europe." He's a long-term patent maximalist... and it shows. The article is entitled Intellectual Ventures: Why the Patent System Needs Aggregators Like Us, which should give you an idea of the fanciful rewriting of history you're about to read.

The U.S. patent system borrowed from mainland Europe a concept that had evolved over hundreds of years: the “moral right” for inventors to protect their ideas. But America's founders went even further – they also included the obligation for inventors to publish.

This extra part of the deal was ingenious: It has been key to America's history as a global leader in innovation.

Except, of course, that patents weren't actually based on "moral rights" at all. Hegarty is making that part up. It was an economic right designed solely for the purpose of "promoting the progress" of "the useful arts." And, yes, it had a disclosure component -- which would be brilliant if it worked. But it doesn't. Especially in the technology industry where IV holds most of its patents. Time and time again we've seen the same thing. No one in technology learns anything from looking over broadly written patents full of legal phrases that mean nothing to engineers. It's not uncommon to hear engineers say that they don't even understand what's in their own patents, once the lawyers get through reworking them. The whole idea that patents disclose things is a myth.

If Hegarty really wanted patents to disclose things, he'd support the idea that any software code covered by a patent had to have the original, working source code be submitted with the patent. Of course, that would make an awful lot of IV's patents not worth very much, since they could no longer be used to shake down companies which are actually innovating.

Because inventors were incentivized by protection, yet still obligated to publish, their ideas became available for everybody to see. Not only did this increase the global pool of knowledge, it also allowed follow-on developers to avoid the blind alleys experienced by the original inventor.

Again, this is part of the myth -- but it's bunk, as most engineers will tell you. It's much, much easier to learn from other products just by looking at those products or reverse engineering what they do, rather than reading over the patent. Hegarty is simply making up a world that does not exist.

The published patent also provides a roadmap to further innovation: the work-around. When developers become too enamored with popular features, they stop innovating. By preventing access to such successful features, patents conversely force competitors to come up with the new ideas or workarounds that lead to fresh innovation.

Another myth with no real support. The idea that developers become "too enamored with popular features" and only innovate because patents block them from staying enamored is laughable. It reads as though he's never spent any time with actual living engineers or innovators. People innovate for a variety of reasons, and the idea that people don't try something new unless forced to by a patent limitation is simply ridiculous. Studies have shown that the primary drivers of innovation are, first, solving one's own needs and, second, staying ahead of the competition. Both of those provide plenty of reason for innovation without the artificial restriction of a patent.

Having started out his article by rewriting history and how and why people innovate, he then goes on to suggest that in such a purely mythical world, massive, obnoxious patent trolls like the one who pays his salary don't just have a place, but are somehow vital to the system.

But as technologies converge and the products we use become increasingly complex, the system needs intermediaries within the market – companies like Intellectual Ventures – to help sift through and navigate the published landscape. By developing focused expertise, these patent licensing entities and intermediaries can function as patent aggregators, assembling portfolios of relevant inventions and providing access through licensing.

Don't you see? Intellectual Ventures didn't just buy up 30,000 or so patents from a bunch of universities struggling to defend their overeager decisions to set up tech transfer offices just for the sake of shaking down actual innovators -- it did it to help companies "sift through and navigate" the patent "landscape." This is the point at which most normal, living, thinking people who are familiar with the patent system call this out for what it is: bullshit. 100% bullshit.

No company is going to Intellectual Ventures and paying them upwards of $100 million to have IV help them sift through the patents that are out there, to better understand the "disclosures" so they can further innovate. They're paying up to avoid getting sued and hit with a judgment that could be many hundreds of millions of dollars. In more colloquial language, this is generally known as a shakedown. But, thanks to our patent system, it's a "legal" form of a shakedown.

Yes, sometimes aggregators have to go to court to protect their patent rights – and get labeled with all kinds of nasty names for doing so.

Oh, Hegarty, be fair now: people were calling Intellectual Ventures a patent troll since long before it started suing companies. And no one is calling you nasty names for going to court. We're calling you nasty names for abusing the system massively to take money away from actual innovators to move it to those who have done nothing to move the market forward.

He then goes on to wax rhapsodic about the wonderful smartphone and how it's just so chock full of patent goodness. And, you see, what that really means is not that there are tons of companies abusing the system, but that the little guy -- the mythical sole inventor laboring away in his garage -- is somehow at risk of not getting his due, if it weren't for the kindly and benevolent likes of Intellectual Ventures... here to save the day.

Patent aggregators sift through the issued patents with an expert eye, and provide efficient access to the long tail of patents. When tens of thousands of patents touch a product, hundreds of inventors spread around the globe deserve to be paid. But in the race to market, product companies often ignore the long tail; small inventors have very little power to do anything about this unless they can enlist the help of patent aggregators.

In other words: please small-time patent holder, sell us your patent, so we can shakedown big companies for more money. The whole idea that anyone at Intellectual Ventures "sifts through" its patents with an "expert eye" for the sake of helping companies innovate is laughable. They look to bundle as many patents together as possible, so that they can go to companies and use the modern equivalent of the famous line from an IBM lawyer to Sun execs back in the day: "OK, maybe you don't infringe these seven patents. But we have 10,000 U.S. patents. Do you really want us to go back to Armonk [IBM headquarters in New York] and find seven patents you do infringe? Or do you want to make this easy and just pay us $20 million?"

It's notable, by the way, that last year, when the reporters at This American Life did their episode all about patent trolls, focusing mainly on Intellectual Ventures, they asked the company to give them any evidence of individual inventors helped along by IV. The company could come up with only one name, and when TAL tried to track that guy down, they discovered nothing to support the claims at all -- just another troll case with a questionable patent being used to shake down actual innovators.

But aggregators, in order to maximize returns from the patents they've acquired, are incentivized to package and license patents as broadly as possible. If patents are available to all-comers, not just used to exclude, companies can focus on improving their products and competing through innovation.

You have to sit back and wonder if Hegarty actually believes this stuff or if he's really just getting a chance to exercise the more "creative" muscles in his writing skills. The company has done more to block competing through innovation than probably anyone else. It's done more to funnel money that could have gone towards actual innovation into the wallets of its own execs and investors. No one is running to IV because they think that it will help them "improve their products." In talking to lots of companies who have dealt with IV over the years I've never, not once, come across one who did a deal with the company eagerly or for the sake of helping them innovate. No, everyone does it for one reason only: to not get sued (or, possibly, to have access to patents to hit back against others who sue). IV isn't helping innovation, it's trying to monopolize the arms dealership business in the patent wars. And it's laughing all the way to the bank.

Despite this complexity, we must maintain the founding principle of the U.S. patent system – providing an incentive for inventors to create without fear of being ripped off. Only then can inventors continue to focus on doing what they do best: inventing. Society benefits when the value of ideas is recognized.

Society benefits when innovative products are brought to market and people who want them buy them in a free market. Society does not benefit when one company buys up a ton of useless, broad and vague patents that have nothing to do with the innovative products on the market, and then demands cash from companies if they don't want to get sued.

Aggregators also provide a signal to the market as the debate around patent quality continues. Every time Intellectual Ventures purchases a patent, we are making a bet that it is a quality patent. We purchase only 15 percent of the tens of thousands of patents we review, drawing on and continually building the expertise of our acquisitions team. Sometimes patents come as a package deal so we have to buy 10 to get the six or seven we really want, which is why only 40,000 of our 70,000 assets are in active licensing programs. But we continuously prune our portfolio to maximize quality – thus helping the market navigate the long tail of patents.

Translation: yes, some patents are so bad that even we can't figure out ways to misread what they were supposed to cover into pretending they cover something entirely different.

Ultimately, the users of those products – you – are the ones who benefit.

By paying a tax that increases the cost of the products you buy by a massive amount.

The whole thing is, once again, ridiculous. It's based on myths and an attempted rewriting of what's actually happening. It's sick and it's cynical to make such claims knowing full well that the only thing that Intellectual Ventures is doing for this market is sucking money out of actual research and development.

from the not-even-close dept

Continuing our series of posts concerning the Republican Study Committee report on the problems of the copyright system and how to fix them (which it quickly retracted under industry pressure), today we're going to explore the second "myth" that author Derek Khanna helped debunk: that "copyright is free market capitalism at work." We've already covered the first myth, about the purpose of copyright, as well as responded to various responses to the report by copyright maximalists.

That response feeds nicely into this post, because the whole argument that copyright is "free market capitalism" depends almost entirely on the key claim of maximalists: that copyright is property, full stop. However, as we noted in our response, copyright has both property-like attributes and many non-property-like attributes. And it's when you look at the actual market that you have to recognize that those non-property-like attributes start to stand out. The only way you can argue that copyright is free market capitalism at work is to flat out ignore the ways in which copyright is unlike property.

To hopefully demonstrate this clearly, we'll start out with two examples of other "markets" that show that just because you set up a property right and create a market, that doesn't mean it's a free market. First up: air. Yes, that stuff we all breathe. It's clearly a valuable good. Extremely valuable. But... if we're to believe the maximalist view, because we don't directly pay for the air we breathe (even if we pay for it indirectly) it must be "valueless" or "worthless." So, clearly, the best way to deal with this is to set up a monopoly privilege in air -- such that you need to buy a "license" to breathe air that isn't yours.

Think of the massive industry that would be built up around this. It would really be a tremendously large industry, because people would be willing to pay every last penny to make sure that they had air to breathe. Talk about having inelastic demand! But, of course, the "problem" is that we have (mostly) abundant supply. Yet, putting monopoly rights on it would solve that problem right away, restricting supply through artificial monopolies, and allowing owners to charge. Boy, would that create a market! Of course, it would be complex, so perhaps we could "ease" things along by creating an Airrights Royalty Board to set some compulsory rates to make the whole market function "better." Think of how we could juice the economy there! Every single person needs air, so they would pay. Clearly, overnight, it would boost the economy.

Of course, this is silly. Everyone knows that it's silly, but as you listen to the arguments for copyright as being a free market, recognize that it's no different from the scenario above. The problem is basically a restatement of Bastiat's broken window parable. The government can introduce artificial inefficiencies into the market, but that doesn't mean it's part of a free market. A free market is one in which resources are allocated efficiently. But a market in which you have entities choosing to introduce inefficiencies on purpose to create new markets isn't a "free market" at all. It just creates an inefficient market that draws money to itself and away from more efficient purposes and allocation. You can, if you want, argue that this government interference in the market is good for society or a particular group -- but you cannot argue that it's "free market capitalism," because it's not.

The second example is similar. It's the idea that Ed Felten came up with a few years back, known as the Pizzaright Principle, which stated simply is:

Pizzaright – the exclusive right to sell pizza – is a new kind of intellectual property right. Pizzaright law, if adopted, would make it illegal to make or serve a pizza without a license from the pizzaright owner.

Creating a pizzaright would be terrible policy, of course. We’re much better off letting the market decide who can make and sell pizza.

The Pizzaright Principle says that if you make an argument for expanding copyright or creating new kinds of intellectual property rights, and if your argument serves equally well as an argument for pizzaright, then your argument is defective. It proves too much. Whatever your argument is, it had better rest on some difference between pizzaright and the exclusive right you want to create.

This is the same basic concept again. You can create new artificial markets by inserting property-like rights anywhere you want. But most people in other situations recognize that's not free market capitalism at all, but market distorting interference. So, as you listen to those who argue that copyright is free market capitalism, apply these tests. Does it apply equally to airrights and pizzarights? If so, the argument is defective. To date, I have yet to hear an argument for copyright being free market capitalism that doesn't equally apply to airrights or pizzarights.

Of course, there are other important ways in which copyrights are actually against the free market -- and, again, it's here where recognizing the key differences between copyright and scarce property come into play. As Rick Falkvinge recently reminded us, copyright is something that actually limits property rights rather than creates new ones:

Which brings us to the third notable item: “the exclusive right”. This is what we would refer to colloquially as a “monopoly”. The copyright industry has been tenacious in trying to portray the copyright monopoly as “property”, when in reality, the exclusive rights created are limitations of property rights (it prohibits me from storing the bitpatterns of my choosing on my own hardware).

This is a key point that often gets lost in all of this. The only thing that copyright does is limit others' actual property rights. Now, again, this doesn't mean you can't argue that this limitation is valuable and important. But it's a simple fact that the "exclusive right" copyright provides is nothing more than a way to try to stop people from exercising their own property rights over products they own.

In the end, it's fine to argue that copyright has important benefits and value -- but that's not the same thing as arguing that it's a part of free market capitalism. Because it's not.

from the disruption-and-broken-windows dept

There's been a ton of talk from politicians lately about the importance of "creating jobs." This comes from both major political parties, of course. We've seen the Democrats jump heavily on the jobs agenda and the Republicans have been hyping up their ability to create jobs as well. A few months ago, This American Life produced a fantastic episode on the hilariousness of politicians claiming that they're going to "create" jobs, with a focus on Wisconsin Governor Scott Walker (one of the few stories about him that has nothing to do with unions).

All of this talk about "job creation" from politicians has really been bugging me... with the only truly "honest" politician I've seen being the mostly ignored Presidential candidate and former New Mexico Governor, Gary Johnson. After the National Review praised him as "the best job creator of them all" (based on jobs numbers associated with all the GOP Presidential candidates), rather than accepting the cheap political accolade, Johnson rejected the crown:

"The fact is, I can unequivocally say that I did not create a single job while I was governor."

Instead, he noted that it was "entrepreneurs and businesses" that created the jobs, and all he tried to do was keep obstacles out of the way.

Still, it is true that governments can create jobs. It's just that they're almost never the jobs that actually help the economy. The government could hire 20 million people to move piles of dirt around, or to just sit around, if it wanted to. That would "create jobs." But it wouldn't be good for the economy, because those people wouldn't be adding value or producing anything that expands the economy.

This, of course, was famously explained a century and a half ago by Frederic Bastiat, who laid out the fallacy of the broken window as an economic or "jobs" stimulator. And yet it's still oh so tempting for politicians to jump on this train. The problem for those who buy into the broken window fallacy is that they make really bad decisions on "jobs": they create the easiest jobs to create, which almost always add the least value to the economy (and most likely take value away from it).

It's why you get amazing statements from President Obama (who really must know better) in which he talks about ATMs meaning fewer jobs for tellers and auto check-in kiosks at airports that mean fewer jobs for airline employees. But this turns out to be wrong in oh-so-many ways. First, it's just wrong on the facts:

At the dawn of the self-service banking age in 1985, for example, the United States had 60,000 automated teller machines and 485,000 bank tellers. In 2002, the United States had 352,000 ATMs--and 527,000 bank tellers. ATMs notwithstanding, banks do a lot more than they used to and have a lot more branches than they used to.

It's "easy" to claim that technology "destroys" jobs, but that's never how it plays out in practice. It may change jobs, but increased efficiency creates jobs through economic growth. There are all sorts of complex economic proofs of this in action, but the simplest way to understand it (and there's plenty of both empirical and formal proof to back this up) is that when you increase efficiency, you can produce more for less -- and thus, by definition, you have increased the size of the overall pie. Plenty of people can (and do!) quibble about how that pie is divided and allocated, but arguing that technology destroys jobs is a red herring.
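The "bigger pie" arithmetic above can be sketched with some made-up numbers (all figures here are purely hypothetical, just to make the logic concrete):

```python
# Hypothetical economy: an efficiency gain lets the same workforce
# produce twice as much output per worker.

workers = 100
output_per_worker_before = 10   # units produced per worker
output_per_worker_after = 20    # after the efficiency gain

pie_before = workers * output_per_worker_before
pie_after = workers * output_per_worker_after

# Labor needed per unit of output is cut in half -- this is the part
# that "looks like" jobs disappearing...
labor_per_unit_before = 1 / output_per_worker_before  # 0.10 workers/unit
labor_per_unit_after = 1 / output_per_worker_after    # 0.05 workers/unit

# ...but the total pie has doubled: there is twice as much real output
# to divide up, fund new demand, and support new kinds of work.
assert pie_after == 2 * pie_before
print(pie_before, pie_after)  # → 1000 2000
```

The toy numbers say nothing about how the larger pie gets divided -- which, as noted above, is where the legitimate arguments live -- only that efficiency grows the pie rather than shrinking it.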

It's for that reason that I'm a bit surprised to see Jeff Jarvis more or less jumping on this bandwagon by claiming that "we're going to have a jobless future":

Our new economy is shrinking because technology leads to efficiency over growth. That is the notion I want to explore now.

Pick an industry: newspapers, say. Untold thousands of jobs have been destroyed and they will not come back. Yes, new jobs will be created by entrepreneurs -- that is precisely why I teach entrepreneurial journalism. But in the net, the news industry -- make that the news ecosystem -- will employ fewer people in companies. There will still be news but it will be far more efficient, thanks to the internet.

Take retail. Borders. Circuit City. Sharper Image. KB Toys. CompUSA. Dead. Every main street and every mall has empty stores that are not going to be filled. Buying things locally for immediate gratification will be a premium service because it is far more efficient -- in terms of inventory cost, real estate, staffing -- to consolidate and fulfill merchandise at a distance. Wal-Mart isn't killing retailing. Amazon is. Transparent pricing online will reduce prices and profitability yet more. Retail will be more efficient.

While I agree with Jarvis on many, many things, he's missing half of the equation here, and doing a sort of reverse "broken window fallacy." He's looking at jobs that are changing, but not looking at the massive new opportunities it creates. Eric Reasons points me to my own post which touches on this.

It's easy to look at how jobs appear to "disappear" in a dynamic market -- whether it's the tellers President Obama is talking about or the "journalists" Jarvis talks about. But that ignores all of the new jobs created around the new efficiencies. Take, for example, the fears, decades ago, that an automated telephone switching network would wreak havoc on our economy. After all, telephone companies employed thousands of operators whose job it was to "connect calls." Automate that, and all of those women (and they were predominantly women) would be "out of work." Devastating, right? Well, no, actually. Not at all.

A switched telephone network not only made the phone system more valuable and useful (increasing its usage), but opened up all sorts of new opportunities for businesses and jobs. At a basic level, you could just note that call centers were suddenly possible, as was the ability to do customer service (and, annoyingly, telemarketing) on a large scale. But, it also did much more. A switched telephone network also paved the way to an eventual internet system, which has led to a huge revolution, millions upon millions of jobs, and the fact that you are reading this today.

The idea that technology leads to efficiency over growth is preposterous. Efficiency is growth. But it's not always obvious how or where that growth occurs.

And that's why I think there's something of a paradox of job creation. The job creation we really want for the economy is the job creation that initially looks bad -- the kind that worries Obama and Jarvis, because they believe it's somehow "taking away jobs." And yet it's not. It's actively creating more jobs; it's just not as obvious how or where, but they are being created, without question. Instead, the focus gets put on exactly the wrong kinds of jobs. You hear about stimulus projects that grant money or protectionism to certain industries. On its face, that appears to create jobs, because the companies receiving that support "hire" more people. But it comes at the expense of the productivity and economic growth that would create real long-term jobs and real long-term opportunity.

So the best way to create jobs is the politically impossible one: increasing efficiency, which may appear to replace jobs even as it creates many more. It means allowing real competition to take place, rather than propping up a few big legacy players. It means supporting true innovation by encouraging startups and entrepreneurship, rather than rewarding the legacy players who seek to hold back the innovators. Job creation is a paradox. Anything politicians do to try to force it almost always does the opposite.