Category Archives: Motivations

“How do I get people to do X”, or “more of X”? That question is pretty much the motivation for the notes I write for myself here.
We economists are pretty expert at answers of the form “pay them the right amount, at the right time, as the right function of observables”. But on the interwebs, the question often is how to get people to work harder or contribute more for free. For one thing, a lot of ventures don’t bother with things like, well, revenues (at least initially). And often more important, the transaction costs of identifying, contracting with, and setting up and executing payments to a large number of micro-contributors exceed the benefits of paying them.
So, there is much attention to intrinsic motivation: making people feel good enough about what they’re doing that they want to do it without something messy, like being paid. A lot of sites have been developing and refining tools like leaderboards and badges to give people a sense of accomplishment, some recognition, perhaps a reputation.
More recently, especially following the explosive success of lo-fi casual social gaming (can you spell F-A-R-M-V-I-L-L-E?), folks are trying to combine gaming with intrinsic motivators, in what is called “gamification”. Foursquare does this with its badges; so does Scvngr. A recent article in Incentives Magazine (of course) provides a pretty detailed overview of the emerging gamification industry. A number of firms now sell tools, widgets and platforms allowing folks to gamify any web site.
Games have been used for a while to induce socially useful work: these are usually called “games with a purpose”, and their early growth and success is due largely to the work of Luis von Ahn and Laura Dabbish. The idea behind GWAP is to design a game that is intrinsically fun to play, but the playing of which directly produces useful work. One well-known example is the ESP game: two people, anonymously matched over the web, are shown the same image and type in labels. The more times they type in the same labels, the more points they score. Meanwhile, labels that are popular are saved as tags for the image. Google now uses this system in its Image Labeler.
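The ESP game's core mechanic is simple enough to sketch in a few lines. Here is a minimal, hypothetical illustration of the agreement-scoring idea (the function name and point values are my own, not from the actual game or Google's Image Labeler):

```python
# Minimal sketch of the ESP game's matching mechanic. Names and the
# points-per-match value are illustrative assumptions, not the real
# implementation.

def esp_round(labels_a, labels_b, points_per_match=10):
    """Score one round: both players label the same image independently;
    only labels they agree on earn points and become tags."""
    matches = set(labels_a) & set(labels_b)   # labels both players typed
    score = points_per_match * len(matches)   # both players earn the points
    tags = sorted(matches)                    # agreed labels saved as image tags
    return score, tags

score, tags = esp_round(["dog", "grass", "frisbee"], ["dog", "park", "frisbee"])
# "dog" and "frisbee" match, so the pair scores and the image gets those tags.
```

The design point is that neither player can see the other's labels, so the only way to score is to type labels a stranger would also choose, which is exactly what makes the agreed labels useful as tags.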
The gamification business generalizes this. The games themselves need not produce useful work: rather, the fun of being able to play them motivates the user to do something (not necessarily the game) that the provider values. For example, customers might stay on a site longer, or engage more so that they remember the site (or develop loyalty) and return later.
I particularly like the following observation from the article, because it touches on the critical importance of storytelling in effective (and persuasive) communication. Barry Kirk, solution vice president of consumer loyalty at Maritz Loyalty & Motivation, said, “Before slapping badges on everything, make sure your ‘game story’ is well thought out”. He added, “If this were a game, would it be interactive, playful, and engaging? All good games are special experiences, and how to apply gamification is just getting started.”

Carrot: Verizon was getting about 6,100 customers a week to switch to paperless billing; then in August it entered all switchers into a contest to win a Toyota Prius hybrid, and the rate increased to 17,000. But it sends out about 20.5 million paper bills per month. At that rate: about 23 years to full conversion (and presumably more than one giveaway Prius).
Stick: T-Mobile was getting about 7,000 customers weekly to go paperless. In August it added a $1.50 fee to every print-on-paper bill; the rate went up to about 231,000 per week. Years to full conversion: about 1.25.
Jawbone: T-Mobile’s customers screamed. A class action suit was filed alleging this was a change of contract terms. About a month after adding the fee, T-Mobile reversed course and is back to encouraging customers to “go green”.
Via The New York Times.
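The conversion-time arithmetic above is easy to check. A back-of-the-envelope calculation using only the figures reported in the post (the 52-weeks-per-year convention and T-Mobile's implied bill count are my own inferences):

```python
# Back-of-the-envelope check of the paperless-billing conversion figures.
# All inputs are the numbers reported in the post.

WEEKS_PER_YEAR = 52

# Verizon: 20.5 million paper bills per month, switching ~17,000/week.
verizon_bills_remaining = 20_500_000
verizon_rate_per_week = 17_000
verizon_years = verizon_bills_remaining / verizon_rate_per_week / WEEKS_PER_YEAR
# Works out to roughly 23 years to full conversion, as the post says.

# T-Mobile: ~231,000 switchers/week and "about 1.25 years" together imply
# roughly 15 million paper bills outstanding (a figure the post does not state).
tmobile_implied_bills = 231_000 * WEEKS_PER_YEAR * 1.25
```

Note the comparison is between stocks and flows: the carrot raised the weekly flow of switchers by a factor of about 2.8, while the stick raised it by a factor of about 33, which is the whole difference between a quarter-century and fifteen months.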

The State of New York announced settlement of a lawsuit it filed against LifeStyle Lift for “astroturfing” (paying its employees to “flood” the net with false positive reviews). The company will pay a $300,000 fine, plus an undisclosed amount of New York’s legal costs.
Lifestyle Lift is a facial cosmetic surgery procedure that purports to be quicker and safer than traditional facelift procedures, with shorter recovery time and lower cost.
According to the NY State Attorney General’s office, employees published anonymous reviews to the web to trick potential customers. They did this on legitimate review sites, and they also created standalone web sites that purported to be independent, where they created all of the “reviews” or edited reviews by third parties to skew the discussion.
See also this New York Times story.
Laws that impose possible fines or other punishments (such as jail time) are an incentives-based approach to shaping behavior. A simplified version of the idea is that if the cost of the punishment, multiplied by the likelihood that the agent will be caught and punished (and discounted to present value), is greater than the expected benefit from the improper behavior, it will not be in the agent’s self-interest to engage in the behavior.
One concern about using legal punishment incentives is that they involve multiple sources of uncertainty (about punishment size and likelihood of being caught and punished), and that seemingly large ex post punishments may not be that much of an ex ante deterrent.
Lifestyle Lift was fined $300,000 plus legal costs. Suppose that it had known with certainty that it would have to pay this fine several years after earning money as a result of publishing false reviews. Would it have chosen to be honest? That depends, of course, on how many consumers it falsely induced to get the procedure, and on the profit per procedure. According to current customer comments on one review site that claims to have been abused (RealSelf.com), the procedure costs on the order of $5,000, only some portion of which is profit. Suppose that the profit rate is 10% (about $500): then, of the “nearly 100,000” customers it claims to have served, Lifestyle Lift would have needed to falsely induce at least 600 of them for the lying to pay off. If many more than 600 had been tricked, then even knowing the fine would occur may not have been a sufficient deterrent. Factor in the uncertainty about being caught and punished, and the number of customers it had to successfully trick to come out ahead would be even smaller (there were also uncertainties about the benefits of lying that would have to be taken into account).
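The break-even calculation in that paragraph can be made explicit. This is a hedged sketch: the 10% profit rate is the post's own assumption, and the 50% detection probability below is a hypothetical value I've chosen purely to illustrate how uncertainty weakens the deterrent:

```python
# Deterrence break-even for the Lifestyle Lift fine.
# profit_rate is the post's assumption; p_caught is a hypothetical value.

fine = 300_000                              # settlement fine
price = 5_000                               # approximate price of the procedure
profit_rate = 0.10                          # assumed profit margin
profit_per_customer = price * profit_rate   # about $500 per tricked customer

# If punishment is certain, lying pays once this many customers are tricked:
breakeven_certain = fine / profit_per_customer          # 600 customers

# If the firm expects to be caught only with probability p < 1, the
# expected fine shrinks, and so does the break-even number:
p_caught = 0.5                              # hypothetical detection probability
breakeven_uncertain = (p_caught * fine) / profit_per_customer   # 300 customers
```

Against the “nearly 100,000” customers the company claims to have served, a break-even in the hundreds suggests why an ex post fine of this size may be a weak ex ante deterrent.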
There is at least one reason the incentive might be greater: harm to Lifestyle Lift’s reputation. For example, this settlement was reported in the New York Times, and the story is starting to circulate through blogs and other information sources.
On RealSelf.com, where presumably the false reviews have now been removed, 65% say that the procedure is not worth it. Meanwhile, Lifestyle Lift now posts a badge and promised “Internet Code of Conduct” on its own web site, stating that it “is proud to take a leadership role in establishing new standards of Internet conduct and communications.” I don’t know when that “code” first appeared, but it seems likely that this is an example of trying to turn lemons into lemonade.

The idea is not news, but it’s a charming example of for-profit enterprises seeking donated labor: LinkedIn asks translators if they want to translate LinkedIn pages for free. And gosh, some professional translators don’t like the idea.
Actually, it’s not clear that LinkedIn was asking the translators to work for nothing. Apparently it was planning to credit their work, thus offering some compensation in the form of marketing or advertising. And at least for some, this could be valuable compensation. It’s not just “free advertising” in the sense of getting to post one’s name and profession in a public place. Instead, it’s a free demonstration of the professional’s work: “Like this translation? It was done by Tom, and here’s his LinkedIn page”.
There seems to be a fair bit of consensus that for many people who work on open-source software projects for “free”, a powerful motivator is getting training (in working in a complex, team-based software engineering environment) and publicly documented evidence of their skills (the code checked in and permanently associated with one’s user account). I’ve speculated that a similar motivation might explain some who write book reviews (for “free”, and which help a for-profit company) at Amazon.com: writing practice with grading by others (without paying nasty tuition), and a public record of one’s skills (but how many paid jobs are there to which budding book reviewers can apply these days?).

The University of Michigan switched to an online course evaluation system this year (Fall 2008 was the first full run).
One of the primary concerns during the study that went into the decision, and during the implementation, was what would happen to response rates. Selection bias among respondents is a concern for any survey, and there are obvious (and documented) reasons to expect that response is correlated with student satisfaction with a course and a teacher, which definitely would create a bias, the more so the lower the response rate. With the traditional fill-in-a-form system, if administered during a class late in the semester when attendance is high (perhaps because students are getting concerned about what will be on the exam!), convenience and peer pressure can make the response rate fairly high (as I recall, the norm at UM was above 70%). With an online system, evals are filled out at the student’s convenience. This might catch students who would have missed the evaluation class day, but might lose many others.
The evidence UM collected from other campuses (and from two schools at UM that had already implemented their own online systems) suggests that response rates do tend to be a bit lower, though not much.
My ICD colleague, Yan Chen, and I have chatted a bit about providing incentives to students to complete the online evaluations. One idea that she’d heard seemed effective in another department is to award a nominal amount of “extra credit” to students who submit an evaluation (in some systems, even though the evaluation content is anonymized, it is possible to track whether a student submitted something).
Here is a blog entry on the use of course evaluation incentives from the Center for Teaching, Learning and Technology at Washington State University. Their impression is that the incentives (including extra credit and randomized cash awards) have had little impact on participation (at least at the level at which the incentives were offered). Their conclusions are mostly based on anecdotes, but they provide a link to a discussion of a comparative study that was done in their College of Engineering which, it appears, was being submitted for publication.

The Peer-to-Patent system created by Beth Noveck’s group at NYU Law School and being piloted by the U.S. Patent Office has gotten a fair bit of attention. The basic idea is to gather user-contributed content from experts who can help patent examiners figure out whether a proposed invention is novel (no prior art). Anyone can submit comments on the posted patent proposals, and in particular can cite evidence of prior art (which generally leads, if valid, to denial of the patent application). The purpose is to speed up patent reviews, and in particular to help prevent the granting of invalid patents, because it is often costly, time-consuming, and chilling to later innovation to fight a granted patent and prove it invalid.
Andy Oram wrote an editorial in the Feb 2008 Communications of the ACM urging computer scientists to participate (viewing article may require subscription). He explained the system, and why it would be good for innovation for experts to donate their time to read and comment on patent applications.
Why would experts — whose time is somewhat valuable — want to do this? Andy argues that the primary reason is public service: donate to create a public good (a better software patent system) for all. There are lots of ideas of things that would be “good for all” that require volunteer donations of time, effort, money. It’s actually not a given that such public goods are a good idea: the value of a public good does not always or automatically exceed the cost of the time or other resources donated by the people who created it. The experts Andy seeks to contribute to Peer-to-Patent are highly trained people whose time is generally valued quite highly. In any case, if P-to-P depends on volunteer contributions by experts, how likely is it to succeed? These are people who already feel deluged by requests to volunteer their time to referee conference and journal articles, advise students on projects, advise government, serve on department and university committees, serve on professional organization committees and edit journals, etc., etc. I know few serious, successful academics who work less than 50 or 60 hours a week already.
Andy also suggests another reason to volunteer time for Peer-to-Patent: the bad patent you block may save your startup company! Now we’re talking… a monetary incentive to “volunteer” time. But this is a bit problematic too: it points out a strategic concern with P-to-P. Potential competitors, or entrepreneurs who at least want to use the disclosed invention, have an interest in trying to block patent applications, and may try to do so even if the invention is legitimate. They can flood the Patent Office with all sorts of “prior art”, which may not be valid, but now the patent examiners will have more work to do. And just as patent examiners may conclude incorrectly that a patent application is valid, so may they conclude incorrectly that one is invalid. It’s not prima facie obvious, especially given that those most motivated to “donate” time and effort are those who themselves have a financial stake in the outcome, that user-contributed content in this setting will be a good thing, on balance.

In another June 2008 American Economic Review article, Ellingsen and Johannesson introduce a standard concept from social psychology into a standard economic model of incentives, and find that it helps explain some well-known empirical puzzles.
This is not at all the first article in the economics literature that explores the role of social motivations, and the authors provide a good discussion of prior work.
In “Pride and Prejudice: The Human Side of Incentive Theory”, Ellingsen and Johannesson add two motivational premises to the standard principal-agent model: people value social esteem, and the value they experience depends symmetrically on who provides the esteem: they value esteem more from those whom they themselves esteem.
Their main result is to show how an incentive that otherwise would have a positive effect on behavior can have a negative effect for some people because of what the incentive tells the agent about the principal. For example, they suggest this as an explanation for “the incentive intensity puzzle that stronger material incentives and closer control sometimes induce worse performance” (p. 990).