Category Archives: Political Economy

Whether we can continue to get the journalism we need, given the declining revenues and funding in journalism, is a concern of many people around the world, including in Hong Kong. To what extent is it possible to have independent journalism under such economic conditions? Before we get there, let’s ask first, what does it mean for journalism to be “independent”? What exactly should it be independent of?

Journalism is often at its best when it can “speak truth to power”: when journalists can ask the questions nobody else wants to ask, or even speak out against the powers-that-be when nobody else has the courage to do so. This is why it is important to think about how journalism is funded: who pays the bill, and who can subsequently exert pressure on editors and journalists. For example, newspapers rely on advertisers (57%) more than on circulation (36%) for their revenues (Pew, 2015). That means it is important for newspapers to keep advertisers happy. It also means that these advertisers can exert disproportionate pressure and influence: this is a problem if we agree that journalism is not only a business, but also serves a larger, public function in society. It is a lesson Hong Kong learned the hard way when House News, an online news outlet, closed down in 2014 after several major advertisers pulled out under political pressure (SCMP, 2014). In the words of Tsoi, the founder of House News:

“Despite our popularity, many big companies don’t place advertisements on our website because of our critical stance towards the government and Beijing”.

So how can we have “independent” journalism, and what kind of funding would this require? We need to start thinking about what I call models of “decentralized funding” for journalism. “Centralized funding” is when your funding comes from only a few, and therefore powerful and influential, sources. In contrast, “decentralized funding” is when funding comes in many small amounts from multiple funders, or indeed, citizens. If all these small amounts add up to something significant, that creates a situation where no particular source is powerful enough to exert meaningful influence, and where journalism can be more or less “independent”. But is that possible? Here are a few examples of journalism that relies on “decentralized funding”. This is not meant to be an exhaustive list, nor do I claim these are entirely new phenomena; that said, new technologies have given rise to several interesting ideas and opportunities worth exploring.
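To make the distinction concrete: one crude way to compare funding models is to ask what share of the budget the single largest funder controls. A minimal sketch in Python (the function name and the dollar figures are my own illustrative assumptions, not data from any real outlet):

```python
def largest_funder_share(contributions):
    """Fraction of total funding controlled by the single largest funder."""
    return max(contributions) / sum(contributions)

# "Centralized" funding: a few big advertisers dominate the budget.
centralized = [500_000, 400_000, 50_000, 50_000]

# "Decentralized" funding: 1,000 readers each contributing a small amount.
decentralized = [1_000] * 1_000

print(largest_funder_share(centralized))    # 0.5: one advertiser controls half the budget
print(largest_funder_share(decentralized))  # 0.001: losing any one reader barely matters
```

Under the first model, a single funder pulling out removes half the revenue; under the second, no individual funder has meaningful leverage.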

Crowdfunding: websites like Kickstarter, Indiegogo and Fringebacker are online platforms that enable a project to raise funds from a large number of people. Recent cases of journalism funded this way in Hong Kong include Factwire and Hong Kong Free Press. Patreon is another example of a crowdfunding platform, but instead of hosting a one-time fundraising effort to jumpstart a project (as Factwire and Hong Kong Free Press did), it allows people to become a “patron”; that is to say, to financially support an individual or project on a regular basis.

Subscription: traditional subscription still exists, even online. For example, Malaysiakini, an online news website in, you guessed it, Malaysia, receives significant funding from its many subscribers who are willing to pay a sum every month. People are willing to subscribe, and pay money, because whereas the traditional media in Malaysia are highly censored, the online media are still relatively free and open: the internet is where they can get actual news. Malaysia’s situation is a bit peculiar like that: thanks to a pledge it made in 1998 in an attempt to attract foreign investment, the government will not censor the internet (Open Net).

Micro-payment: for the longest time, micro-payment was seen as the holy grail that would save quality journalism. While this has yet to happen, and I am not sure if it ever will, that doesn’t mean there are no interesting changes in this domain: in China, several platforms now allow users to “tip” content they like. For example, WeChat allows its users to tip writers for posts they like.

Centralized, but independent: Last but not least, independent journalism does not necessarily require “decentralized funding” to exist. Foundations have long played an important role in funding important works of journalism. A recent example is ProPublica, funded by the Sandler Foundation, whose aim is to do quality investigative journalism. That said, many places around the world lack foundations with an explicit mission to serve the public interest, including Hong Kong, a society that is relatively well-off (I have never seen as many luxury cars as I have here).

It is paramount that we start thinking about, and experimenting with, models of “decentralized funding” for journalism, so that we can continue to get the journalism we need. If you know of any examples of decentralized funding that I should learn more about, I’d love to hear about them!

There was an interesting article in Politico yesterday, titled [gulp] “Honey, I shrunk the Obama data machine.” The article discusses next steps for the Democratic data machine in the lead-up to the 2013 and 2014 elections. The big question: can the Obama analytics tools translate to the state and congressional levels?

The answer (to paraphrase): “yes, but only some of them.”

When people talk about the #Demdata advantage in campaigns, they’re really talking about (at least) three distinct phenomena. Two translate well to smaller campaigns, the third doesn’t. The dividing line is something that I call the analytics floor.

(1) One of the biggest advantages Democrats hold over Republicans is the rich voter file that Democrats have developed. Republicans are working to build their own national database, to sometimes-comedic ends. That voter file can be exported to congressional campaigns, special elections, governor’s races, etc. OFA alumni like Dan Wagner of Civis Analytics specialize in just this sort of data modeling. Obama invested millions in developing the voter file and built a network of hundreds of experts in combining the voter file with polling data to produce much clearer maps of the electorate. As those experts turn to consulting and expand their reach outward, the price of these services will become more affordable over time.

(2) A second advantage comes in the form of lessons learned through persuasion and turnout experiments. The Analyst Institute was very busy during the 2012 election cycle, running tests to determine what sort of techniques and appeals can best sway undecided voters and motivate disinterested supporters. These lessons in political behavior are transportable from one election to another — if they’ve determined that voter “report cards” drive people to the polls, that’s a lesson that can improve off-year elections as well. Democrats have invested in cutting-edge social science, and are in no rush to share their findings with Republican competitors. This advantage will echo into 2014 and beyond.

(3) The third facet of #DemData is what Daniel Kreiss calls “computational management.” Computational management refers to the day-to-day role that analytics can play in campaign management, and the “culture of testing” it promotes. The Obama campaign tested everything. It tested e-mail subject lines. It tested font sizes. It tested niche television spots. Data settled arguments and maximized investments. Here’s where things get dicey.

Day-to-day inputs aren’t going to be available or useful to smaller campaigns the way they were to the Obama campaign. If you’re running a mayoral race in Hartford, there will be one or two polls conducted *at most*, and they’ll probably come from a relatively unknown firm. That’s orders of magnitude less data than the Obama “cave” was working with. If you’re running a Rockville City Council race, there may be no polling available at all. And the number of people visiting your website, receiving your emails, or reading your tweets is so small that you can’t run tests to find out which messages, frames, or asks are most effective. You need scale for computational management. The analytics floor is the dividing line between large-scale and small-scale.
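The point about testing can be quantified. The usual normal-approximation formula for comparing two proportions shows how many recipients a campaign needs before an A/B test of, say, email appeals can detect a realistic effect. A rough sketch (the 3% baseline click rate, the 10% relative lift, and the function name are my own illustrative assumptions; the default z-values correspond to two-sided alpha = 0.05 and 80% power):

```python
def sample_size_per_arm(p_base, p_test, z_alpha=1.96, z_beta=0.84):
    """Approximate recipients needed per variant to detect the difference
    between two response rates (normal approximation for two proportions)."""
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    effect_sq = (p_test - p_base) ** 2
    return (z_alpha + z_beta) ** 2 * variance / effect_sq

# Detecting a 10% relative lift on a 3% email click rate:
n = sample_size_per_arm(0.03, 0.033)
print(round(n))  # roughly 53,000 recipients per variant, over 106,000 total
```

A presidential campaign with a list of millions clears that bar easily; a city council campaign with a few thousand addresses never will. That gap is the analytics floor.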

Computational management is a solution to large-scale problems, though. Honestly, running for city council just isn’t that complicated. Talk to your neighbors, earn the endorsements of community leaders, place a table at community events. The districts are small enough that you will mostly be relying upon personalized political communication anyway. The Hartford mayoral race is a bit more complicated, so data and modeling play a modest role. Think of that as a rule: as we increase the size of the electorate, the power of the office, and the (resultant) money being spent on the election, the size and complexity of the campaign apparatus increases as well.

The Obama campaign’s biggest managerial innovation was using multiple forms of data to improve decision-making in this complex environment. Analytics is a solution to the problems introduced at massive scale. Below the analytics floor, the tools are less useful, but they’re also less necessary.

In an interview with Salon and in his newest book, “digital visionary” (Salon’s words) Jaron Lanier claims that the internet has destroyed the middle class. Kodak employed 140,000 people, while at the point of its sale to Facebook, Instagram employed just 13, and (without much exaggeration) thus the internet killed the middle class. QED.

What a crock.

Lanier is apparently incapable of stepping back from technological determinism and looking at the actual causes of our ballooning economic inequality — which, to cut to the chase, is primarily a result of our policy choices. Yet the role of government in determining the overall shape of the economy is too often understated or outright ignored by those who wring their hands about growing economic inequality.

With some noted exceptions, those who criticize Lanier still mostly point at the old standby twin bogeymen of automation and outsourcing. The HuffPost chat in which all of the guests are willing to challenge Lanier’s conclusions is typical on this count but hardly alone. To his credit, Buffalo State College economist Bruce Fisher starts heading in the right direction with his concerns about fostering and preserving the political and social engagement of those who are being left out, but he fails to take it the next step and discuss the major policy changes and political neglect that have brought us to this point.

The best explanation that I’ve seen of America’s growing wealth inequality is Winner-Take-All Politics, in which Jacob Hacker and Paul Pierson start with a simple look at other industrialized countries to show that inequality isn’t an inexorable outcome of trade and automation. The Germans and Swedes certainly have similar chances to outsource their manufacturing and use technology to reduce their labor forces.

The wealth distribution in particular is just shocking — the US has a wealth Gini of .801 (where 1.000 is “one person owns everything”), the fifth highest among all included countries and almost exactly the same as the distribution of wealth across the entire planet (.803). Think about that for a second; we have the same radically unequal distribution of capital within the US as among the entire population of the world across all countries — from Hong Kong and Switzerland to Nigeria and Haiti.
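For readers who want to see what a Gini of .801 actually measures, the coefficient can be computed directly from a list of holdings: 0 means everyone owns the same, and values near 1 mean one person owns nearly everything. A quick illustration (the toy numbers are mine):

```python
def gini(values):
    """Gini coefficient of a distribution, via the sorted-values formula
    (equivalent to half the mean absolute difference, divided by the mean)."""
    xs = sorted(values)
    n = len(xs)
    weighted_sum = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted_sum / (n * sum(xs))

print(gini([1, 1, 1, 1]))    # 0.0: perfect equality
print(gini([0, 0, 0, 100]))  # 0.75: one person in four owns everything
```

(With only four people the maximum possible value is 0.75, i.e. (n-1)/n; with millions of people, a Gini of .801 means wealth is concentrated nearly as tightly as the second example suggests.)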

Across countless major policy areas — health care, education, financial regulation, taxation, support for the unemployed, and many more — the rest of the industrialized world generally does far more to make their societies fairer for all. Our shrinking protections for workers may be the greatest single cause of the shrinking middle class. Of course, this can be done badly — I would certainly not want to swing as far as Italy and Spain, where it’s nearly impossible to fire somebody once they’re a regular, full-time employee. Yet we should not allow employers to fire union organizers with near impunity. We should not force organizers to wait for months between card check and votes to unionize, so that employers can “educate” their captive-audience workforce with the most pernicious disinformation and intimidation. We should not sit idly by while nearly half of states fail to meet even “minimum workplace-safety inspection goals, due to state budget cuts and reduced staffing.”

It’s true that the middle class is being gutted in the US, but this is primarily due to how our political system turns the act of surviving and thriving into a high-wire act for an ever-larger slice of the population. Laid-off baby boomers, even those with desirable skills, are having a devil of a time finding work in a country where age discrimination is only nominally illegal. Meanwhile, our children attend public schools with an unconscionably unequal distribution of funding, so moving or being born into a more affordable neighborhood may cost kids their futures, too.

Teens and laid off workers alike are told that college is the route to a better future, but the cost of education is skyrocketing as states and the feds slash public investment in higher education. Many families — even many families with health insurance — are one major medical problem away from unemployment and bankruptcy. Since it’s totally legal to use credit reports and current employment status in making hiring decisions, being laid off or losing one’s job after a medical problem can quickly become a death spiral. None of this is due to outsourcing or automation, but is instead the result of a noxious combination of deliberate policy changes (the privileged seeking to strengthen their own hand) and policy drift (the rest of us sitting idly by or being ignored when we do speak up).

Frankly, I’m glad that Lanier has released this book, sloppy though it may be. (The people raving about this book as a carefully wrought masterpiece are deluding themselves — and not, as Lanier accuses others of doing, “diluting themselves”.) This is not primarily because he has some insights here and there, but because we need to talk about the gutting of the middle class as loudly and as frequently as possible. We must do so, however, in a way that examines how our collective decisions have gotten us to this point. That includes making international comparisons with other “laboratories of democracy” to see how we can do better.

After even a cursory glance abroad, we will see that we should stop returning to the too-easy explanations based on globalization and technology. These forces are at play across the world, and the other wealthy industrialized countries have generally not had the same dismal results. The more likely culprit is in the halls of government.

Still, I’m really excited for my colleague Andrew Lund, who is leading the conversation with Mr. Copps, as well as the many Hunter students and faculty who will be able to attend. Thus, I wanted to share a bit about what I’d like them (and the world) to know about this great public servant.

To fully appreciate how exceptional Copps was as an FCC Commissioner, a role he fulfilled from 2001 to 2011, you need to know how thoroughly the Commission has traditionally been a “captured” agency — that is, generally doing the bidding of the industries that it was constructed, in principle, to regulate.

You should also know how the “revolving door” of government works: after serving in a position of any real importance, many former public servants take plum jobs in the private sector, where they can leverage their regulatory knowledge and even their interpersonal connections to the advantage of their new employers.

Once he started his term at the FCC, Commissioner Copps knew that, after his time in government, he could easily walk into a plum job in the private sector. After all, this had been the route taken by many of his predecessors — as well as many of his colleagues who stepped down in the interim.

Unfortunately, when looking at the decisions made by many of the FCC folks who turned that experience into very well-paid private-sector jobs, one could be forgiven for wondering whether many of them truly had the public interest at heart. Some of their decisions suggest that they were, at least in part, also thinking about their long-term earning potential. I won’t name names, but all of us who follow communication law reasonably closely know the most obvious examples.

When looking at Commissioner Copps’ decisions, however, nobody could possibly doubt that his true allegiance really was with the public for the full decade of his service. Media reform groups like Free Press and Public Knowledge finally had an unabashed, reliable ally with his hand on the levers of power, on issues from broadcasting to telecommunications to pluralism and diversity.

The real sea change on ownership came in late 2002 and 2003, as then-Chair Michael Powell proposed a substantial roll-back in the rules against media consolidation. Copps and fellow Commissioner Jonathan Adelstein pushed to have substantial public discussion around the proposal, including multiple, well-publicized hearings. Powell said no — allowing just one hearing — so Copps and Adelstein went on tour, holding 13 unofficial hearings.

Through this and other efforts, working alongside public interest-minded NGOs, Copps helped bring major public attention to Powell’s proposal, ultimately bringing it to a halt. This slowed (though certainly did not stop) the process of media consolidation, through which ever fewer companies control ever more of our media landscape.

I would love to say a great deal more about Copps’ time at the FCC, but I’ll say just a few more words on one more issue: broadband regulation. He came in just in time to dissent from the FCC’s decisions to give away the keys to the kingdom on broadband interconnection, in the decision that led to the Brand X ruling by the Supreme Court.

The FCC ruled that broadband infrastructure companies — the folks who’ve used eminent domain and massive public subsidies as key tools as they’ve laid the cable, phone, or fiber lines over which broadband is transmitted — are not obligated to share their “last mile” systems with competitors. (This requirement for “interconnection” was already in place for landline local and long-distance telephone service, where it led to an explosion of competition and plummeting prices.)

Again, though ownership and broadband policy are among his best-known issues, Copps was a tireless voice for the public interest on virtually every issue imaginable that came before the Commission. Even though he stepped down from the Commission over a year ago, he continues the work today.

Even as a former Commissioner who spent a decade being the thorniest thorn in the sides of those seeking to make a quick buck at the public’s expense, Mr. Copps could still easily make a quick buck himself working for industry. There are plenty of companies, industry trade groups, and swanky D.C. law firms that would be quite happy to give him a huge salary, a cushy office, and a first-class travel budget to speak on their behalf.

Instead, Copps has moved on to work for Common Cause, one of our nation’s strongest voices fighting for the best interests of ordinary people. This is just the latest in a long line of decisions in which he has chosen to fight for the public interest, even though it’s easier and more lucrative to fight for those who already have disproportionate money and influence.

For public interest advocates, Michael Copps was, at a minimum, the greatest FCC Commissioner since Nicholas Johnson retired nearly 40 years ago — and perhaps the greatest ever. His work at the Commission will be missed, but I look forward to seeing him continue to have a major role in pushing for a fairer, more just media system for many years to come.

One more point, for anybody who’s read this far: As of now, Copps’ Wikipedia page is a mere stub — the Wikipedia term for an article that is too short and needs to be expanded. In this case, a great deal more needs to be said in order to do its subject justice. I call on you to help me do this in the coming weeks. Mr. Copps was and remains a tireless and effective servant of the public, and this is but a small favor we can do in return.

It’s been fascinating to watch the Wisconsin protests unfurl over the past 10 days. Governor Scott Walker has chosen to stuff his budget repair proposal full of Trojan horse provisions, including “emergency” power to sell off state assets through no-bid contracts to his favorite corporate backers and an end to collective bargaining rights for all state employee unions who didn’t endorse him in the last election. That’s not hyperbole on my part. The governor is being exactly that crass. Readers who are interested in more information on the topic should check out Stephanie Taylor’s excellent essay at Slate.com.

Social media has played an augmenting role in these protests. There are the pizza orders, which are pretty cool. There’s the twitter- and blog-based information diffusion. There are the solidarity events planned around the country, occurring throughout the past week and again this Saturday. There’s the $300,000 raised by DailyKos, DFA and PCCC to support the “Wisconsin 14.” And of course there are the “mundane mobilization tools” used to coordinate the events themselves. But in general, whereas the focus on social media in the Arab protests has been so intense as to border on self-parody, no one has really spent much time talking about the internet’s role in this saga. Nor should they. The story here is pretty simple. Walker is trying to destroy the central organizing structure for working-class interests. This isn’t about reduced benefits – the unions have already agreed to cuts – it’s about power, plain and simple.

The new generation of internet-mediated organizations can achieve a lot of things. They’re optimized for the new media environment in terms of organizational overhead, staff structure, membership communications, and rapid tactical repertoires. But they can’t organize workers in a specific industry or location to improve salaries, working conditions, or benefits. MoveOn isn’t going to sit across from management at the negotiating table. Your local Meetup group sure doesn’t have that capacity either. This is the most obvious space where “organizing without organizations” comes up short. You need to build power if you’re going to confront power.

What’s more, MoveOn, DFA, PCCC, DailyKos, Living Liberally, the New Organizing Institute and the rest of the netroots are fully aware of this. Professional organizers, old-school and new-school alike, understand that the Wisconsin fight is about power. Take away the unions, and the super-wealthy will be the only interests in America capable of aggregating massive resources to effect policy change. It looks an awful lot like a coordinated, multi-year strategy to knock out every significant organization of the left (first ACORN, now Planned Parenthood and the unions).

I feel the need to point this out because the short-version summary of my research is “the new media environment is transforming the interest group ecology of American politics. We’re experiencing a ‘generation shift.’” I want to be absolutely clear about this: what’s happening in Wisconsin isn’t about digital media or about generational displacement in the advocacy group system. It is a coordinated right-wing assault on the rights of citizens to organize in the workplace. Internet-mediated organizations are doing all they can to support the unions in this fight, because they know full well that the unions fill a niche that internet-mediated issue generalists, online communities-of-interest, and neo-federated organizations cannot. I cannot think of a single serious online organizer who believes otherwise.

Note: I’ll be spending the next few months writing a book about the new generation of internet-mediated political groups. This post will be my first “book blog,” in which I try out new ideas that I’m planning to include in the manuscript. Book blog pieces will be less tied to the politics-of-the-day, and will be a bit lengthier. They also give readers a window into the broader project as it develops. As such, feedback is particularly appreciated.

I’ve written once before on this blog about Moore’s Law, the surprisingly accurate 1965 prediction that computing capacity would double every 18 to 24 months. What I’ve noticed recently is that, while Moore’s Law is common knowledge within the tech community (you see it mentioned in almost every issue of Wired magazine), it’s much less well understood in the political and social science communities. Those crowds are aware, of course, that their computer from 4 years ago now seems ancient, slow, and lacking in storage space, but it appears to me that the deep political implications of Moore’s Law (which I’ll be calling “Moore’s Law Effects” in the book, unless someone wants to earn their way into the acknowledgments by suggesting a catchier name!) have largely gone overlooked.

I checked through the indexes of several major internet-and-politics books and, sure enough, there’s no mention of Moore’s Law: Bruce Bimber’s Information and American Democracy, Matt Hindman’s The Myth of Digital Democracy, Bimber and Davis’s Campaigning Online, Phil Howard’s New Media Campaigns and the Managed Citizen. I’ll check a few others on Monday when I’m in the office, but I’m pretty sure there’s no mention of it in Kerbel’s Netroots, Davis’s Typing Politics, Chadwick’s Internet Politics, or either of Cass Sunstein’s books either. These are good books I’m talking about here — award-winners that rightly deserve the praise they’ve received. I’d be thrilled if my book ends up half as good as many of them. Yet Moore’s Law doesn’t earn a single mention, nor does it show up in most of the influential articles in the field. It just hasn’t entered the discourse.

The one exception I’ve found is a Berkeley Roundtable on the International Economy working paper by Zysman and Newman that eventually became the lead article of a co-edited volume, How Revolutionary Was the Digital Revolution? It’s a political economy treatment of the digital era as a whole and seems pretty promising (Amazon should have it to me by mid-week). I really enjoyed the following quote from the working paper: “…Information technology represents not one, but a sequence of revolutions. It is a continued and enduring unfolding of digital innovation, sustaining a long process of industrial adaptation and transition” (p. 8). That “sequence of revolutions” line is what I think we’ve largely been missing when talking about digital politics.

Take Bimber and Davis’s Campaigning Online for instance. They conducted first-rate research in the 2000 election cycle on citizen access to campaign websites. The central finding was that, by and large, the only citizens who visit such sites are existing partisans. The sites are useful for message reinforcement, rather than message persuasion. As a result, Bimber and Davis conclude that the impact of the internet on political campaigns is pretty slight. Web sites simply don’t reach undecided voters, so they aren’t of much use in determining election results.

Their book was released in September 2003. By that time, the Dean campaign had already attracted overwhelming media attention, leading observers everywhere to rethink the importance of mobilization. It was an unlucky sequence of events, having a definitive work on the internet and American political campaigns come out just as the Dean campaign was overthrowing everything we thought we knew about the internet and American political campaigns.

Here’s the thing, though: Bimber and Davis weren’t wrong. The internet of 2000 wasn’t particularly useful for mobilization. John McCain raised a bit of online money around his primary, but online bill paying was still in its untrustworthy infancy, and the social web was still restricted to the lead-adopter crowd who had heard of Pyra Labs. The suite of technologies making up the internet changed between 2000 and 2003. It changed again between 2003/04 and 2006. [Pop quiz: what was John Kerry’s YouTube strategy in the ’04 election? (A: YouTube didn’t exist until 2005.)] And it continues to change. The internet of 2010 is effectively a different medium than the internet of 2000. The devices we use to access it have changed. Cheap processing power and increasing bandwidth speeds let us access video and geolocational features that were prohibitively expensive, technically infeasible, or simply impossible in 2000. We’ve traveled through five iterations of Moore’s Law, and that means the devices and architecture of the earlier internet have been overwritten (HTML to XML being just the tip of the iceberg).
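The arithmetic behind those five iterations is worth spelling out, because the compounding is what makes the sequence-of-revolutions point bite. A minimal sketch (the function name and the flat two-year doubling period are my own illustrative assumptions; estimates of the doubling period range from 18 to 24 months):

```python
def moores_law_factor(years, doubling_period=2.0):
    """Multiple by which capacity grows over `years`,
    doubling once every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(10))  # 32.0: five doublings between 2000 and 2010
print(moores_law_factor(30))  # 32768.0: over three decades, a ~33,000x difference
```

A 32-fold increase in a decade is not an incremental upgrade to one medium; it is enough capacity to make qualitatively new media possible on the same wires.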

The internet is a sequence of communications revolutions, and that is entirely because of Moore’s Law. It makes the internet different from previous revolutions in information technology. Consider: as the television or radio moved from 10% household penetration to 80% household penetration, how much did the technology itself change? I’d argue not much at all. A television set from 1930 is fundamentally pretty similar to a television set from 1960. The major changes of the 20th century can be counted on one hand – color television, remote control, the VCR, maybe a couple of others. It is frequently noted that the internet’s penetration rate has been faster than these previous communications technologies. But what rarely gets mentioned is that the internet itself has changed pretty dramatically in the process. (Need further convincing? Watch the 1995 movie Hackers and listen for the reference to one character’s blazing-fast 28.8 kbps modem. LolCats and YouTube aren’t so fun at 28.8 kbps. Or read James Gleick’s 1995 New York Times Magazine essay “This is Sex?” in which he explains that the internet is a terrible place for pornography because search is so complicated and the pictures load so slowly!)
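A back-of-the-envelope transfer-time calculation makes the modem reference vivid. A sketch (the 5 MB file size is my own illustrative assumption, and protocol overhead is ignored):

```python
def transfer_seconds(megabytes, kilobits_per_second):
    """Seconds to move a file of `megabytes` over a `kilobits_per_second` line."""
    bits = megabytes * 8 * 1_000_000
    return bits / (kilobits_per_second * 1_000)

print(round(transfer_seconds(5, 28.8)))  # 1389: about 23 minutes for one 5 MB clip
print(transfer_seconds(5, 100_000))      # 0.4: under half a second at 100 Mbps
```

The same file, the same internet protocols, a roughly 3,500-fold difference in waiting time: that is why the 1995 internet and the 2010 internet support entirely different uses.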

Transitioning into the political sphere, it bears noting that every election since 1996 has been labeled “the internet election” or “the year of the internet” by some set of researchers and public intellectuals. The paradox, of sorts, is that they have been right every time. 2012 will be different than 2010, 2008, 2006, 2004, 2002, and 2000. It will be a different medium, in which users engage in modified activities, and this will create new opportunities for campaigns and organizations to engage in acts of mobilization and persuasion. The cutting-edge techniques of last year become mundane, encouraging organizations to maintain a culture of ostentatious innovation.

Now, I’m not suggesting that the internet exists in some state of quantum uncertainty, where we can predict basically nothing about the future from the past or present. In fact, as Rasmus Kleis Nielsen points out, the tools that have the biggest impact on campaign organizations tend to be the ones that have become mundane, reaching near-universal penetration rates and no longer subject to a steep learning curve. (As we recently learned with Google Wave, e-mail is very much a settled routine at this point.) Indeed, one of the lessons here may be that we are on much safer ground when studying individual internet-mediated tools that have reached near-universal adoption (within a given community). The techno-centric studies of Facebook, YouTube, and Twitter that are a recent fad of sorts are on much weaker ground, because those tools are themselves still changing pretty dramatically thanks to increasing adoption and the ongoing influence of Moore’s Law.

The other thing it tells us, however, is that we should focus attention on the new organizations and institutions being built out of the digital economy. The continual waves of innovation made possible by Moore’s Law mean that existing industries do not face a single, one-time change in communications media to which they must adapt. Rather, an existing market leader who hires the best consultants, purchases a fleet of state-of-the-art hardware and software, and spends two years developing a plan for the digital environment will suddenly find that the internet has changed in a few important ways, the hardware and software are outdated, and the plan those consultants developed has collected more dust than accolades.

Communications revolutions (or changes in “information regime,” if you prefer to avoid talk of revolution) create a classically disruptive moment for various sectors of the economy. Rather than advantaging existing market leaders, whose R&D departments let them lead the way in sustaining innovations, disruptive moments tend to lead to the formation of new markets that undercut the old ones (this is classic Christensen). Startups do better under those conditions, because they have low operating costs and no ingrained organizational routines. And while individual areas of the internet eventually give way to monopolies (particularly if we lose net neutrality and let major firms capture markets and tamp down on competition), those monopolies aren’t as secure as they were in previous eras. Just ask AOL, CompuServe, Microsoft, or Yahoo. The wrong policy decisions can still basically kill the internet, but Moore’s Law creates a scenario in which ongoing disruptions continually advantage new entrants, experimenting with new things.

That, frankly, is why my focus has been on the rise of these internet-mediated advocacy groups. It’s because they represent a disruption of the advocacy group system. They embrace ostentatious innovation, keep their staffing and overhead small, and otherwise continue to act like a start-up (and are often founded by technologists with a background in startup culture). They fiddle with membership and fundraising regimes, and develop new tactical repertoires unlike anything found among the older advocacy groups. And Moore’s Law suggests that the internet is still in a state of becoming, that the emergence of these new institutions is much more substantial than the mass behavioral patterns found among citizens in the internet of 2010, which may very well be altered as Moore’s Law allows the internet to become something else in 2012.

Moore’s Law, disruption theory, and new developments at the organizational level. That’s what I think has been missing from our understanding of the internet and American politics thus far.

Markos Moulitsas of DailyKos announced Monday that he is suing Research 2000 for fraudulent activity, based on a statistical analysis conducted by Mark Grebner, Michael Weissman, and Jonathan Weissman. I won’t comment on the details of their study here — Nate Silver has done a much better job of that already — but instead want to make a broader comment about the internet, markets, and “scalp-taking.”

I’ll note as a caveat that Research 2000 is launching a counter-suit. The facts will be revealed in time, and the way things look today may not turn out to be the reality of the situation. I don’t mean this blog entry to prejudge the results of this trial.

That said, it appears as though the progressive political blogosphere has just claimed a second scalp within the polling industry. The first occurred back in the fall of 2009, when Nate Silver at 538 raised serious concerns about Strategic Vision. Noticing serious anomalies in their data, as well as a lack of public information about the company itself, Silver asked some very public questions about whether they were fabricating their data. The head of Strategic Vision cried foul and claimed he’d see Nate in court, but he then beat a hasty retreat and hasn’t been heard from since.

DailyKos has contracted with R2K since the 2008 election cycle, and has sent them a lot of business. After Nate published his inaugural pollster rankings last month, Markos announced that he’d be rethinking the partnership with R2K (who fared poorly compared to other pollsters). That apparently led to a few statisticians deciding to take a deeper look at R2K’s numbers, which revealed anomalies that would be consistent with mild cooking of the books and/or outright fraud.
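The red flags in question are the kind of patterns that should not survive contact with genuine random sampling. As an illustration only (the function and setup below are mine, not drawn from the Grebner/Weissman/Weissman study), here is a minimal sketch of the chance baseline for one such pattern: if two subgroup toplines in a poll are honest, independent numbers, they should share parity (both even or both odd) only about half the time.

```python
import random

def parity_match_rate(n_polls, seed=0):
    """Simulate n_polls pairs of independent toplines (0-100) and report
    how often the two numbers in a pair share the same parity.
    With genuinely independent results this hovers near 50%."""
    rng = random.Random(seed)
    matches = sum(
        (rng.randint(0, 100) % 2) == (rng.randint(0, 100) % 2)
        for _ in range(n_polls)
    )
    return matches / n_polls

rate = parity_match_rate(100_000)
print(f"parity match rate: {rate:.3f}")  # close to 0.500
```

A reported match rate far above that 50% baseline across hundreds of released polls is exactly the sort of anomaly that invites the questions the statisticians raised.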

Talking Points Memo took a deeper look at the head of R2K, Del Ali, and found that his background consists of two degrees in recreation. That’s pretty odd, to say the least. You would expect the head of a major polling firm to have a background in, well, statistics.

And that leads us to the point I’d like to make: how is this possible? Professional polling is a competitive and lucrative business, with longstanding industry leaders and standard-setting organizations. Neither Strategic Vision nor R2K was a minor player — both were significant pollsters whose findings were reported by mainstream media sources. Both (it appears) were somewhere between shady and fraudulent. In a well-functioning market, incentives should exist for shaming and discrediting such actors. The field of professional polling involves enough statistical wizardry and high enough stakes that, if such incentives operate anywhere, they should operate there. And yet we now have seen two occasions in which, essentially because Nate Silver and company have made a hobby of advocating for responsible polling practices, major irregularities have been uncovered, with field-transforming impacts.

There’s a lesson here about just how robust the market mechanisms in various knowledge industries actually are. Even in a field that has incentives for self-policing, even in a field tied to academic institutions like AAPOR that are full of people who have the means and motive to investigate such irregularities, there has been a distinct lack of accountability for years. The lowered transaction costs of the internet have enabled skilled hobbyists to dramatically affect that market. The internet itself doesn’t magically improve the polling industry (far from it), but it did create a new opportunity structure through which motivated volunteers could challenge and affect existing institutions.

Bravo to Nate Silver for his nearly one-man quest to improve the polling industry. He didn’t have to take on this challenge, and I’m sure it has made him plenty of enemies in the process. Kudos to Markos Moulitsas as well for partnering with statistical researchers and readily admitting it once he learned there was a problem with his data. The internet doesn’t make industries perform more responsibly; it just creates new opportunities for motivated outsiders to mobilize knowledge/people/resources in new and interesting ways. Between R2K and Strategic Vision, we have a good example of just how poorly the “statistical wizardry” industry was actually functioning, and also a case study in how networked volunteers can transform such industries.

As the old proverb goes, “may you live in interesting times…” Interesting times, indeed.

Scrolling through Twitter this morning, I noticed the following tweet from Tom Matzzie (@tommatzzie), formerly of MoveOn.org:

I hope all my progressive groups and friends remember #Haiti today. I’d be bummed if they didn’t.

Matzzie has backed up the talk himself, pledging to match up to $1,000 in disaster relief donations from his fellow Twitter donors. As the day has progressed, I’ve already seen online appeals from Color of Change and MoveOn (both urging their lists to donate to groups such as Oxfam and Doctors Without Borders). Nothing so far from the single-issue political advocacy groups, though of course the Red Cross and others have appropriately sprung into action.

I don’t mean this to be a critique of the single-issue groups, but it does bring one point to mind that bears examination.

In the presentation that I’ve been giving about my research, I use the phrase “Headline Chasing” to describe the distinctions between MoveOn-style targeted fundraising and the direct mail funding appeals that fueled advocacy groups for the past 40 years. It’s an intentionally provocative term. The new generation of advocacy groups organize around whatever issue is at the top of the public agenda, whereas the earlier generation of groups mobilize around specific issue topics, regardless of their immediate salience. That proves very effective as a fundraising tactic, but it implies a sort of nimbleness and fluidity that may or may not be such a good thing.

I think today’s fundraising appeals are an important example of the unquestionably positive side of this “headline chasing.” MoveOn isn’t making a buck off this tragedy. They are mobilizing their large supporter list and asking them to help out through other organizations. When tragedy strikes, tragedy rules the headlines. And in that moment, unless the tragedy impacts an issue group’s central focus, the large majority of organizations remain silent, clearing out of the way while the Red Cross and others take center stage. The new political economy of advocacy organizations allows the progressive netroots to get behind the Red Cross, Doctors Without Borders, and other center-stage organizations and quietly help out. Internet-mediated organizations are performing mitzvahs right now, because their structure allows them to. Older organizations, progressive or not, remain sidelined because the logic of their structure demands it.

Let’s hope that organizations, governments, and individuals do all they can to come together in the wake of this tragedy. A 7.0 earthquake is a reminder of just how fragile many social institutions actually can be. Unpredictable tragedy like this can happen anywhere, and national boundaries should not stand in the way of efforts to aid our fellow human beings.

Relying on his framework of the four modalities of control that he used in Code, Professor Lessig explains how the law, markets, norms, and architecture together exert influence, and how, depending on your policy objectives, these four forces can be complementary or conflicting. He suggests that together they form an “economy of influence” that we need to understand if we want to make effective policy.

He continues by explaining “independence”, in the sense of not being dependent on something else. Independence matters because it means that you try to find the right answer for the right reason, as opposed to doing so for a wrong reason born of some dependence.

Independence, however, does not mean independence from everything. Lessig reframes independence as a “proper dependence”. In legal terms, it means that a judge depends on the law for her judgment. So independence is about defining proper dependence, and limiting improper dependence.

Responsibility is the third concept Lessig takes up. He tells us about a case he argued in 2006, Hardwicke vs ABS, which centered on a series of child abuse incidents, all perpetrated by a single person. The question raised: who is responsible? Lessig argues that responsibility does not lie solely with that individual, because the individual is pathological and has no power to reform. Instead, he makes the case that responsibility lies with all the people who knew about the wrongdoing but refused to pick up the phone. Nevertheless, the focus of the law was on the one pathological person. Lessig suggests it is more productive to assign responsibility to those who have the power to make changes, instead of those who are pathological and not in a position to reform. He notes the irony that the one person least likely to reform was held responsible, while the one entity that could have done something about it was immune.

He raises another example of “responsibility” gone awry. He cites Al Gore’s book “The Assault on Reason” and lambasts its narrow conception of responsibility. It focuses on former president Bush, arguably the man least likely to reform, and forgets those who could have done something about it and who, Lessig suggests, are also critically responsible.

His argument is one of “institutional corruption”. What it is not: what happened with Blagojevich; it is not bribery, not “just politics”, not any violation of existing rules. Instead, institutional corruption is “a certain kind of influence situated within an economy of influence that has a certain effect: either it 1) weakens the effectiveness of the institution or 2) weakens public trust in the institution”.

He explains the system of institutional corruption using the White House. Referring to Robert Kaiser’s book “So Damn Much Money”, he argues that the story of the government has dramatically changed in the past fifteen years and that the engine of this change has been the growth of the lobbying industry. He illustrates this with numbers: lobbyists provide cash which members use as support for their campaigns. The cost of campaigns has exploded over the years, and subsequently, members have become dependent on lobbyists for cash; he cites that money from lobbyists makes up 30-70% of campaign budgets! This is not new, he carefully explains, but, citing Kaiser again, what is new is that the scale of this practice has gotten out of hand. Members need and take much more, becoming dependent on those who supply. That covers members’ time in office, but institutional corruption also needs to be understood as something that continues after tenure: 50% of senators translate their Senate tenure into careers as lobbyists, and 42% of House members do the same. This suggests a business model, focused on life after government, that perpetuates itself, with influential people becoming dependent on this system surviving, both during and after their time in Congress.

He goes on to give example after example of institutional corruption. He mentions the important work done by maplight.org, which tracks money in politics and has shown that members who voted to gut a bill received three times the contributions from lobbyists compared to those who voted against it. Simply put, policies get bent toward those who pay. He cites a study by Alexander, Scholz and Mazza measuring rates of return on lobbying expenditures, which concludes that the ROI is a whopping 22,000%! He again cites Kaiser, who suggests that lobbying is a $9-12 billion industry.

Why does this matter? It matters if it
1) weakens the effectiveness of the institution, or
2) weakens public trust in the institution.

In the first case, he argues that lobbying can shift policy. He cites a study by Hall and Deardorff, “Lobbying as Legislative Subsidy”, on how the work of congresspersons shifts as a result of lobbying. Imagine you’re a congressperson and you see it as your goal to work on two issues: one is to stop piracy, the other is to help moms on welfare. The line of lobbyists that will happily help you with stopping piracy is long, whereas not so many will help you with the latter. So the work of the congressperson shifts, and thus the work of Congress shifts.

Lessig suggests it also bends policies. Does money really not change results? Citing the Sonny Bono Copyright Term Extension Act of October 27, 1998, he shows how lobbying power was instrumental in getting the copyright term extended for another twenty years. Does this advance the public good? A clear no. Lessig backs this up by recounting how, in the challenge at the Supreme Court, an impressive line-up of Nobel Prize-winning economists, including Milton Friedman, supported the position that copyright extension did not advance the public good; signing on, Friedman said, was a “no brainer”. But, Lessig concludes, there were “no brains” in the House. An easy case of institutional corruption. There are two explanations: either they are idiots, or they are guided by something other than reason. He suggests, of course, that it’s the latter. It is not misunderstanding that explains these cases.

Lessig continues to explain how corruption can be seen as weakening public trust. He tells us about how the head of the committee in charge of deciding the future of healthcare is getting $4 million from the healthcare industry. Or how a congressperson ended up opposing the public option even though the majority of his constituency supports it. The idea is not that there might be a direct link between the money and the vote, but that if you take money to do something that is against the public interest, people will automatically make that link, and this weakens public trust. If you don’t take money and you go against the popular vote, that won’t reek of corruption.

Lessig goes on to discuss different fields: medicine and the healthcare industry, citing research by Drummond Rennie from UCSF that shows an overwhelming bias in favor of the sponsoring company’s drugs; how there are 2.5 doctors for every detailer (a detailer being someone who is like a lobbyist for the pharmaceutical companies, promoting drugs to doctors, often giving “gifts”); and how the budget for detailing has tripled in the past ten years.

Lessig asks us: how can we find out whether these claims are true? Do detailing practices either weaken the effectiveness of medicine, or weaken the public trust for it? What would it take to know?

There is also the issue of “the structure of fact finding” that Lessig suggests is corrupt. Again, he argues we need to understand whether this is a process by which results are affected or trust is weakened. He cites how sponsor funded research can cause delay, and mentions the case of “popcorn lung”.

Lessig makes a strong case that we need more than intuition: we need a framework or metric to know for sure. Because we all have ideological commitments, we need to escape them in order to have a proper understanding of corruption. This, in short, is the aim of his new project, The Lab. It should be neutral ground with a framework that determines whether and when institutional corruption exists, and that develops remedies for institutional corruption where it exists. He sees the initial work having three dimensions: 1) data, necessary to describe influence and track its change; 2) perception, to measure perceptions of institutional corruption and understand how they have changed; and 3) causation, what we can say about what causes what in these contexts of alleged corruption. Having this information, we can then design remedies.

“Moore’s Law,” first articulated in 1965, tells us that the number of transistors on a chip will roughly double every two years. As predictions go, it has proven surprisingly durable, and it is a handy conceptual framework for understanding why the internet is evolving so quickly.
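The arithmetic behind the curve is simple compounding. A minimal sketch (the function name is my own, purely illustrative):

```python
def capacity_after(years, doubling_period=2.0):
    """Relative capacity under Moore's Law: a doubling every
    ~doubling_period years compounds to 2^(years / doubling_period)."""
    return 2 ** (years / doubling_period)

# A decade at this pace is five doublings, a roughly 32x increase:
print(capacity_after(10))  # 32.0
```

Ten years at that pace turns any storage or processing baseline into a rounding error, which is what the anecdotes below illustrate.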

Consider: when I was in college a decade ago, I carried around a spare floppy disk for saving files and transferring them from one computer to another. It could store about 100 megabytes, which was a lot more storage than you’d find on campus webmail. A few friends of mine had new computers with a whole GIGAbyte of space. It wasn’t entirely clear to me what one would do with all that space.

Today, my iPod has 120 gigs. I save my files to Gmail and they reside in the cloud. Wireless networked computing means I can access them anywhere, even on the road from an iPhone. My computer, bought four years ago, is antiquated and slow. It’s a Mac that doesn’t even have embedded video. And therein lies a disconnect for businesses and governments in the digital age.

Let’s say that the university I work at decides to invest in a complete upgrade of its computer systems. This includes hardware and software, a new fleet of computers, and custom-built e-learning tools. (Full disclosure: this post was inspired by the “MyCourses” e-learning training I attended this morning.) In so doing, they incur some substantial sunk costs. Those computers and those software programs have got to be usable for several years. Organizations and governments don’t move seamlessly up the line presented in the figure above. They move in a stepwise fashion, investing in new tools every few years which they are in turn saddled with as the technology continues to evolve.
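That stepwise pattern can be made concrete with a toy model. Assuming (my numbers, purely illustrative) a two-year doubling period for the state of the art and a five-year organizational upgrade cycle:

```python
def frontier(year, doubling_period=2.0):
    """Continuously advancing state of the art (relative capability)."""
    return 2 ** (year / doubling_period)

def org_capability(year, upgrade_cycle=5):
    """An organization that re-invests only every `upgrade_cycle` years
    is frozen at the frontier as of its most recent upgrade."""
    last_upgrade = (year // upgrade_cycle) * upgrade_cycle
    return frontier(last_upgrade)

# Just before its year-5 refresh, the organization's gear is a
# fraction of the current frontier:
gap = org_capability(4, 5) / frontier(4)
print(f"{gap:.2f}")  # 0.25 -> four years behind means 1/4 the capability
```

The gap is worst just before each refresh, and every refresh only resets the clock rather than closing the gap for good.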

Why is this important? Well, for one thing, it helps to explain why large bureaucracies will virtually always have outdated websites. Purchasing a fleet of new computers or moving into a new area of web-related activity carries a cost, and no organization can afford to keep pace with the rapid expansion. Sunk costs are a reality of e-government and organizational adoption.

Particularly in terms of software, however, it seems to me to be a strong argument in favor of partnering with organizations like Google or, better yet, relying on open source software platforms. Learning today about Brown University’s “MyCourses” software platform (it is similar to the “Blackboard” site at Penn… a password-protected site for students to access the syllabus, readings, grades, etc.), I couldn’t help but think that the more advanced functionalities could be accomplished for free with a Ning site. I’m sure “MyCourses” was cutting edge when Brown first made the investment [well… I’m not sure, but I’m willing to believe it], but companies like Ning have been free to innovate — incentivized to do so, in fact — while Brown has incurred the sunk costs.

In a digital environment that evolves as quickly as the internet, we can draw a hardline distinction between the entities who have an incentive to continually create and innovate (tech giants, startups who hope to challenge tech giants, and open source communities) and the entities who proceed in stepwise fashion, incurring sunk costs, standing pat for several years, and then incurring another round of sunk costs. It then follows that one of the benefits afforded to informal networks (policy networks, activist networks, political networks, communities-of-interest… Clay Shirky’s “organizing without organizations”) is that their lack of resources also leads to a lack of sunk costs, meaning that any online action they engage in is likely to be, and essentially remain at, the cutting edge.

Those are just a few early thoughts on the topic. E-gov hasn’t been one of my major research areas thus far, but I’m warming up to it. Thoughts/reactions/feedback would be appreciated (as always).