About a month ago, news outlets reported that the music-streaming app Grooveshark had been pulled from Google Play (Google's app distribution service) for violating its copyright-infringement
policies. The app and its related online
service allow users to upload songs they own to Grooveshark’s servers. From there the songs can be streamed to
anyone with access to the app or to Grooveshark’s online site, creating a sort
of on-demand radio populated by content uploaded by users.

There are, of course, a number of legal
problems with the service and the technology upon which it is built, not least
of which are claims by the music industry that Grooveshark and its parent
company (Escape Media Group) are engaging in widespread contributory
infringement of copyright. Claims by
Grooveshark that it is an internet service provider (much like YouTube) and
therefore protected from contributory liability by what is arguably a far-reaching interpretation of the Digital Millennium Copyright Act’s “Safe Harbor”
clause may have found some sympathy in the courts, but to this observer, the
deeper issue is how technologies like Grooveshark (looking a lot like Napster
circa 1999 without the download option) continue to have traction among users
who have “grown up” with industry-sponsored lessons on the “rights” and
“wrongs” of digital copyright.

Perhaps
what the industry fails to see is that user practices in the consumption of
digital content are becoming fixed and reproduced by a matrix of other digital
technologies and services that quite literally make an argument for leaky
content. Why should a video stay on your
phone if it can immediately go up on YouTube? There’s an app for that. Why should
your music stay on your hard drive when you can upload it to the cloud? There’s an app for that, too. A host of digital technologies invite users
to leak their content out of their original vessels onto other platforms. The social web is built on that premise. The “upload” option has become not only a
technical standard on digital devices but also a cultural norm among many users
(especially younger ones).

In my opinion, Grooveshark is not enough like YouTube to warrant “Safe Harbor” protection, and the music industry is likely to make that argument stick eventually. Google may, at least, be legally vindicated for removing the app. However, Grooveshark is part of a technological ecology that invites sharing, that generally defers questions of copyright until after the fact, and that builds business models upon an upload culture. For the music industry, that will be considerably harder to curb.

September 25, 2012

In the following Election Tuesday post, Michael P. Lynch, author of In Praise of Reason, explains why we need shared standards of reason.

The Romney campaign declared last month that they weren’t
going to be pushed around by fact-checkers. Such remarks were by turns horrifying
and amusing to many, but their open acceptance by some on the Right was
revealing. What it reveals is that current political disputes aren’t just over
the facts. They are also over who has the best methods for determining what
the facts are. And many on the Right are suspicious of “fact-checking” as just
another way of using biased methods to impose a liberal view of the world.

This is frustrating. But rejecting it wholesale without
trying to understand the underlying problem is a mistake. The real problem here
is that when we can’t even agree over fact-finding methods, then we are
disagreeing over our very standards of reason—over what counts as rational or
justified and what doesn’t. And when
that happens, we’ve hit rock bottom—the debate has grounded out on principles
so basic it is hard to see how it can be resolved because neither side sees the
other as rational.

Democracies are supposed to be spaces of reasons. In
democratic politics, we ought to give and ask for reasons for our political
views. In order to do that, however, we need some common currency of shared standards—some
common principles to which we can all appeal when assessing each other’s
claims. Without that, reason-giving breaks down and politics becomes war by
other means. And that is what is happening in our country right now.

What this means is that
those of us who favor scientific methods can’t be content with just heaping
scorn on the other side's standards of rationality. Nor should we assume that
everyone will just see the virtues of a scientific approach to evidence and
reason. We need to do more: to actively show why—morally and politically, and not
just scientifically—some standards of reason are more rational than others.
Ignore this, and we run the risk of more people giving up not only on
fact-checkers—but the facts themselves.

September 21, 2012

Pain is a biological enigma. It is protective, but not always. Its effects are not only sensory but also emotional. There is no way to measure it objectively, no test that comes back positive for pain; the only way a medical professional can gauge pain is by listening to the patient’s description of it. The idea of pain as a test of character or a punishment to be borne is changing; prevention and treatment of pain are increasingly important to researchers, clinicians, and patients. In honor of Pain Awareness Month, here's an excerpt from Understanding Pain by Fernando Cervero. In Understanding Pain, Cervero explores the nature of pain: why it hurts and why some pain is good and some pain is bad.

Think about the simple act of buying a
new pair of shoes. You try them on in the shop, think they fit quite well, and
walk a few steps to make sure they are reasonably comfortable. After that
simple test, you decide to buy them. You know from experience that the first
few times you wear them they are going to hurt a bit. If you are unlucky they
may hurt a lot. A minor rub that in the shop was almost imperceptible may
develop into an unpleasant pain. It may take a while for your feet to get used
to the shoes. Only after they stop rubbing the annoying sore spots will the
pain go away.

The problem with your new shoes is the
consequence of a unique property of pain sensation: its inability to adapt.
Every other sensory experience, after a prolonged and constant stimulus, adapts
to a lower level or even stops being perceived altogether. If you walk into a
room and there is an intense odor, it doesn’t take long for you to stop
perceiving the odor. You don’t hear the rumbling sound of your washing machine after
a few minutes. Interestingly, you can tell when the washing machine stops,
because you detect that the noise has ended even though by then you weren’t
really hearing it. We have a powerful mechanism of sensory adaptation that
eliminates a continuous noise or a persistent odor from our perceptual world
and helps us to see in very bright or very dark conditions. Our senses are
dampened by persistent and constant stimulation and are awakened by sudden
changes and by contrasts. The alternating black and white stripes of pedestrian
crossings and the two-tone sirens of fire engines and ambulances keep our
senses alert to these important signals by preventing sensory adaptation.
Nothing blunts our senses more than constant and uniform stimulation.

Pain is the only exception to the
adaptation rule. In fact, pain not only doesn’t adapt; it produces the opposite
effect: it amplifies as it persists. Hence the problem with your new shoes. In
the shop you may not even have noticed the slight rubbing, and you would hardly
have called it a pain sensation. Yet as this very small source of minor pain
bombards your brain continuously, the tiny pain becomes progressively larger
and larger. It amplifies to a point where wearing your new shoes may become torture.
The amount of pain that you feel once the amplification process has set in is
out of proportion to the minor rubbing. You are suffering the consequences of a
process known as sensitization.

Using the tools of psychophysics (the
science that studies the relationship between physical stimuli and the
sensations they produce), sensory adaptation is revealed by a shift toward the
right of the curve that relates stimulus intensity to sensory perception. This
rightward shift means that after your senses adapt it will take a greater
intensity of the stimulus to produce the same amount of sensation, and that
your sensory threshold (the intensity at which you begin to perceive a
stimulus) will be higher. However, when we use the same techniques to measure
pain perception after a continuous painful stimulus, we note that the
pain-perception curve has shifted in the opposite direction, toward the left,
showing sensory amplification rather than sensory adaptation. Now, less intense
stimuli produce more intense pain. We call this process hyperalgesia, meaning increased pain sensitivity. And because the
pain threshold has also moved toward lower stimulus intensities, we may now
feel pain at intensities of stimulation that hadn’t been painful before. We
have a special word—allodynia—for the
feeling of pain caused by stimulations that don’t normally produce pain.
Allodynia and hyperalgesia are consequences of pain amplification, the
properties that make pain unique among sensory perceptions and that demonstrate
that pain doesn’t adapt to prolonged and continuous stimulation.
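The leftward shift Cervero describes can be sketched with a toy model. The logistic curve and all the numbers below are illustrative assumptions, not from the book; the point is only that lowering the threshold (shifting the curve left) makes a previously innocuous stimulus produce substantial pain, which is allodynia in miniature.

```python
import math

def perceived_pain(intensity, threshold):
    """Toy logistic stimulus-response curve: perceived pain (0 to 1)
    as a function of stimulus intensity and the pain threshold."""
    return 1.0 / (1.0 + math.exp(-(intensity - threshold)))

NORMAL, SENSITIZED = 5.0, 2.0  # sensitization lowers the threshold (leftward shift)

stimulus = 3.0  # a stimulus below the normal pain threshold
print(perceived_pain(stimulus, NORMAL))      # weak response: barely painful
print(perceived_pain(stimulus, SENSITIZED))  # strong response: allodynia
```

The same stimulus that sat below threshold on the normal curve now lands well up the sensitized curve, which is exactly the rightward-to-leftward shift the psychophysics measurements reveal.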

The pain caused by your new pair of shoes is trivial when compared to the pain
of patients who suffer from chronic pain. A pain that doesn’t go away is a pain
that increases and increases until eventually it dominates all aspects of a
person’s life. The lack of adaptation to pain is what drives many chronic-pain
patients to anxiety and then to depression. The pain is always there. You may
learn to live with it, but it will never go away. Pain amplification can be
helpful under normal circumstances because it helps you to take care of an
injured body part. This is essential for the healing process, and it is a
consequence of the protective nature of pain. For people with chronic pain,
however, the amplification of pain sensitivity expressed as allodynia and
hyperalgesia becomes the dominant symptom of their diseases and ruins the
quality of their lives. Pain amplification adds suffering to the unpleasantness
of chronic pain.

September 20, 2012

The political news this week has been dominated by a secret film of Mitt Romney speaking in May at a fundraising event in Florida. In it, Romney, speaking "off the cuff," described a bloc of Americans that support President Obama:

There are 47 percent of the people who will vote for the president no matter what. All right, there are 47 percent who are with him, who are dependent upon government, who believe that they are victims, who believe the government has a responsibility to care for them, who believe that they are entitled to health care, to food, to housing, to you-name-it. That that's an entitlement. And the government should give it to them. And they will vote for this president no matter what.

The remarks have generated a heated discussion about "the 47%" - who they are, how they vote, and the role of the government. We thought we should check in with Peter Wenz, author of Take Back the Center, a new book arguing for the return of a progressive tax code. Not surprisingly, he saw little of value in Romney's remarks:

If Mitt Romney had read Take Back the Center he’d have known better than to suppose that the 47 percent of Americans who don’t pay any federal income tax are loafers looking for government handouts. In the first place, they pay other taxes and therefore support the government. If they have a job, they pay into the Social Security and Medicare funds. If they’re unemployed they still pay sales taxes and property taxes (either directly to the government or indirectly to their landlords).

Most of the 47 percent depend on Social Security after a lifetime of work, or are employed at jobs that pay less than a living wage. Workers who clean motel rooms, wait on tables at Denny’s, cashier at grocery stores, or greet customers at Walmart typically pay no income tax if they have dependents. Walmart actually instructs its new workers on how to apply for such government benefits as subsidized housing, free school lunches for their kids, and food assistance for their home. Walmart knows that it doesn’t pay a living wage. The family of its founder, by contrast, the Walton family, has assets of $69 billion, which just about equals the assets of the entire bottom 30 percent of the U.S. population.

Romney seems to imagine that rich people who pay a significant amount of income tax are supporting the government by dint of their own hard work. He neglects to notice how handsomely the government is supporting them. All of the beneficiaries of businesses that pay less than a living wage are being supplied workers by government subsidy. Without housing and food assistance, poor people would be too busy trying to stay warm, feed themselves, and keep out the rain to show up to make beds at motels, wait on tables, or greet customers at Walmart.

More directly, the government supports whole industries with tax dollars. Most basic research is done by the government and then turned over to private enterprise at little cost. So anyone making money from computers, cell phones, and the Internet, all resulting from government research, is a beneficiary of government favor. Basic medical research is done for the most part by government-supported institutions, which is an enormous subsidy of the pharmaceutical industry. An additional subsidy is the provision of Medicare Part D that disallows the government from bargaining for lower drug prices. The constraint on bargaining costs taxpayers about $50 billion a year.

Mining companies pay below-market rates for extractions from government-owned land, and ranchers and farmers in the west pay below-market rates for the water they need. General tax revenues pay for most of the roadways and road repairs in the United States, not the tax on gasoline, so the automotive industry which paid for Romney’s affluent youth is a major beneficiary of government favor. The nuclear industry exists only through government subsidies and loan guarantees. Our economy’s financial sector, which garners 40 percent of corporate profits, would have imploded but for massive government bailouts.

In short, most of the 47 percent who pay no income tax are hard-working Americans or people dependent on Social Security after a lifetime of hard work. The very rich, by contrast, benefit from government favors out of all proportion to their economic and social contributions.

September 18, 2012

This week's Election Tuesday post is by Ian Bogost, author of Persuasive Games and Newsgames (with Simon Ferrari and Bobby Schweizer), among others. It discusses political games and communication (or lack thereof).

Recently, the journalist Monroe Anderson asked Obama strategist David Axelrod “why so many voters were so clueless as to how President Obama had spent the
first two years of his first term.” Axelrod's response: “information gridlock.”
Essentially, the White House hadn't been able to communicate effectively with
the public about its accomplishments. Anderson siphoned this state
of affairs through the lens of games, asking two speakers on a newsgames panel
at a journalism conference how games might be used to communicate “Obamacare”
more effectively. The two responses are pretty good game designs. One involves
simulating the experience of different illnesses: “Let them walk through and let
them see it with the Obamacare version and without the Obamacare version, not
telling them which is and which isn't.” The other is a game about “how to
survive without health insurance...People will say, 'Oh, wow'; if these things
happened to me, I'd be screwed.”

In the presidential election of 2004, Gonzalo
Frasca and I helped create the first-ever official US presidential candidate
game, for then-Democratic sweetheart Howard Dean. Several more officially
endorsed games appeared that election cycle. In 2008, only a couple surfaced,
including Pork Invaders, a silly Space Invaders knock-off from the McCain
campaign. This year, as far as I know, not a single official political game was
conceived or created. Meanwhile, the two designs Anderson's panelists suggest
are just the sort I love, just the sort I have been advocating for in my
research and my game development for years. The problem is this: neither the
Obama White House nor the Obama campaign would ever make games like the ones
Anderson's interviewees suggest. That's not because the designs are bad;
ironically, it's because they are good. As I've argued before,
the representation of policy choices and their outcomes is anathema to politics,
because the latter is concerned more with politicking than with
policy, with campaigning over legislating. This is a different sort of
failure to communicate, one rooted in the widespread misconception of politics
as a matter of professionals getting, keeping, or losing their jobs, rather than
citizens living in (hopefully) better and better communities. Meanwhile, the
administration and the campaigns alike keep Facebooking and Tweeting their
soundbites, hoping two-sentence answers will be enough.

September 17, 2012

Happy Monday! Here's some eye candy from The Color Revolution by Regina Lee Blaszczyk. There are so many great images in this book (121 color illustrations, to be exact) that we're splitting this eye candy post in two--check back next week for more from this book.

The color revolution grew out of
American industry’s drive for efficiency in design, production, and
distribution. This is the cover of a 1939 catalog published by the Kalamazoo
Stove Company.

How You Can Do Your Own Color
Planning with Sears Harmony House “Go-Together” Colors (Chicago, 1955).

Advertisement by Monsanto Chemical
Company in Fortune, September 1946.

Five
of the U.S.A.'s 55 presidential elections were won by a candidate other than the
one the electorate wished for, three of them within the last century; almost
equally damning, in at least twelve the winner was doubtful. Woodrow Wilson was
elected in 1912 with 42 percent of the votes, but Theodore Roosevelt and William Taft
received together over 50 percent: had Wilson run against either one alone, he would
most likely have lost. Bill Clinton was the winner with 43 percent in 1992, yet
together George Bush and Ross Perot polled 56 percent: pitted alone against Clinton
the evidence shows Bush would have won. In 2000 George W. Bush defeated Al
Gore, but had Ralph Nader not been a candidate in Florida most of his 97
thousand votes would have gone to Gore, giving him the state and thus the
election, with 291 Electoral votes to Bush's 246.

Why
can the electorate's will be denied? Because of majority voting: picking one single
candidate among many denies a voter the right to express even the simplest
opinion concerning the worth of any candidate.
She votes for one—he is, in her estimation, excellent, very good, or merely
acceptable, though she is unable to say so—and she can express absolutely
nothing about whether any other candidate is good, poor, or simply to be rejected.

Early
prognoses promise a close election between Barack Obama and Mitt Romney in
2012; some twenty-two other candidates are on the ballots of one or more
states, with the Libertarian Gary Johnson on those of at least 44 states (493
Electoral votes) and the Green Jill Stein on those of at least 32 states (403
votes). The errors of 1912, 1992, 2000, and before may well occur again.
Even if this year's election were a two-man race, the “wrong” man could win, since
majority voting bars any evaluation whatsoever.

What
can be done? The remedy is simple. Elect presidents by “majority judgment.” A
voter evaluates every candidate as either excellent,
very good, good, acceptable, poor or to
reject. The majority opinion determines which of these grades to assign
each candidate. The grades rank the candidates; the one with the highest grade wins.
Why this is the best known method of election is explained in the book, Majority Judgment: Measuring, Ranking and
Electing, where the many theoretical reasons for using this system of
election are developed and confirmed by extensive descriptions of uses and
experiments.
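The tallying step described above can be sketched in a few lines of code. The candidate names and ballots below are invented for illustration, and this is only a minimal sketch: the full method in Majority Judgment also specifies a tie-breaking procedure when two candidates share the same majority grade.

```python
# Grades ordered from worst to best, as in the six-grade scale above.
GRADES = ["to reject", "poor", "acceptable", "good", "very good", "excellent"]

def majority_grade(ballots):
    """Return the majority grade: the (lower) median of the ballots' grades."""
    ranked = sorted(ballots, key=GRADES.index)      # worst grade first
    return ranked[(len(ranked) - 1) // 2]           # lower median for even counts

def rank_candidates(votes):
    """votes: {candidate: [grade, ...]} -> candidates, best majority grade first."""
    return sorted(votes, key=lambda c: GRADES.index(majority_grade(votes[c])),
                  reverse=True)

# Hypothetical five-voter election between two made-up candidates.
votes = {
    "A": ["excellent", "good", "acceptable", "good", "poor"],
    "B": ["very good", "acceptable", "poor", "acceptable", "to reject"],
}
print(majority_grade(votes["A"]))  # A's majority grade
print(majority_grade(votes["B"]))  # B's majority grade
print(rank_candidates(votes))      # ranking by majority grade
```

Because the median, not the mean, determines each grade, a few extreme ballots cannot drag a candidate's grade up or down, which is part of the case the authors make for the method.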

September 04, 2012

September brings us another month closer to the 2012 election (where did summer go?). We'll be posting election-related content each Tuesday as we count down to Election Day.

Today's Election Tuesday post is by Steven M. Schneider and Kirsten A. Foot, authors of Web Campaigning, and explains how the use of the Web in political campaigns has changed (and also stayed the same) since the early 2000s.

The
2012 campaign represents the high-water mark of online political action. The sheer quantity of Web sites and online apps dwarfs what we saw online
just two or three cycles ago. We have moved far from 1990s-era “virtual
billboards” (D’Alessio, 1997), and even well beyond the email-address gathering,
volunteer-form laden, “roll your own Website” era exemplified by the 2004 Dean
campaign (Trippi, 2004). It is safe to say that, beginning with the 2008
Obama primary and general election campaign efforts, and spreading almost
ubiquitously by 2012 to many federal and even state-level candidates, online
structures facilitating connection through social networking sites like
Facebook have redefined the candidate Web site. Yet in many
respects, much of what we see in the 2012 cycle -- especially on the Web --
extends what we observed in the early 2000s. Hardly revolutionary: We
might even call the 2012 online campaign “politics as usual” (Margolis &
Resnick, 2000).

In Web
Campaigning, we argued that what campaigns do on the
web can be analyzed through four practices: informing, involving, connecting,
and mobilizing—and that these practices remain relatively stable over time. As the Web has moved from its producer-centric first generation to a
platform emphasizing sharing and co-production, Web producers have increasingly
sought to mobilize site visitors and to connect them with other supporters and
organizations, rather than simply involve or inform them. Mobilization as a
practice, first hinted at in the 2000 McCain primary campaign, became the
dominant motif in the 2008 Obama campaign, and is the primary practice in many
2012 efforts.

As a
result of the dramatic expansion of mobilizing, several techniques that emerged
in 2008 have become de rigueur in 2012. The familiar Facebook and
Twitter icons invite visitors to interact with their friends and followers,
pushing campaign materials into a realm far outside the control and oversight
of candidate organizations. “Sharing” is the behavioral successor to the
“linking” technique we observed in 2004 and before. In the 2012 cycle,
campaigns engage in what we called “co-production” far beyond what was imagined
or thought strategic in the formerly cautious world of campaign Web managers.

Finally,
we should comment on an area of tremendous growth in online campaigning that
remains, to date, largely unexplored: the world of apps. As our online
world moves beyond the Web to cloistered apps accessed from tablets and smart
phones, the challenge for scholars to identify practices and define techniques
requires new methods and new tools. While scholarship from the 2000s
focused on the Web as an object of study, scholarship in this decade and the
next should focus on the screen as the object of study. Such an effort
will require new approaches to observation and analysis.

D'Alessio, David. 1997. "Use of the World Wide Web in the 1996 U.S.
Election." Electoral Studies 16 (4):489-500.