Luke/SI asked me to look into what the academic literature might have to say about people in positions of power. This is a summary of some of the recent psychology results.

The powerful or elite are fast-planning abstract thinkers who take action (1) in pursuit of a single or minimal set of objectives; they favor strict rules for their stereotyped out-group underlings (2) but are rationalizing (3) & hypocritical when it serves their interests (4), especially when they feel secure in their power. They break social norms (5, 6) or ignore context (1) - a tendency which turns out to be worsened by disclosure of conflicts of interest (7) - and they lie fluently without mental or physiological stress (6).

What are the powerful good for? They can help in shifting among equilibria: solving coordination problems or inducing contributions towards public goods (8), and their abstracted Far perspective can be better than the concrete Near perspective of the weak (9).

These benefits may not exceed the costs (is inducing contributions all that useful with improved market mechanisms like assurance contracts - made increasingly famous thanks to Kickstarter?). Now, to forestall objections from someone like Robin Hanson - that these traits, if negative, can be ameliorated by improved technology and organizations, and that the rest just represents our egalitarian forager prejudice against the elites and corporations who gave us the wealthy modern world - I would point out that these traits look like they would be quite effective at maximizing utility and would be selected for in future settings…

(Additional cautions include that, in order to control for all sorts of confounds, these are generally small WEIRD samples in laboratory or university settings involving small-scale power shifts, priming, or other cues; as such, all the usual criticisms apply.)

In five studies, we explored whether power increases moral hypocrisy (i.e., imposing strict moral standards on other people but practicing less strict moral behavior oneself). In Experiment 1, compared with the powerless, the powerful condemned other people’s cheating more, but also cheated more themselves. In Experiments 2 through 4, the powerful were more strict in judging other people’s moral transgressions than in judging their own transgressions. A final study found that the effect of power on moral hypocrisy depends on the legitimacy of the power: When power was illegitimate, the moral-hypocrisy effect was reversed, with the illegitimately powerful becoming stricter in judging their own behavior than in judging other people’s behavior. This pattern, which might be dubbed hypercrisy, was also found among low-power participants in Experiments 3 and 4. We discuss how patterns of hypocrisy and hypercrisy among the powerful and powerless can help perpetuate social inequality.

…feelings of power reduce sensitivity to social disapproval (Emerson, 1962; Thibaut & Kelley, 1959), thus reducing the grip of social norms and standards on power holders’ behavior (Galinsky et al., 2008). As a result, even very strong norms, such as those regulating sexual behavior or compassion, are often ignored by the powerful (Bargh, Raymond, Pryor, & Strack, 1995; Van Kleef et al., 2008).

Powerful people who feel that their position is illegitimate are less inclined to assertively take what they want (Lammers, Galinsky, Gordijn, & Otten, 2008) and at the same time are less inclined to judge others for doing so, compared with people who feel their power is deserved (Chaurand & Brauer, 2008). Therefore, in our final study, we independently manipulated power and its legitimacy to test whether legitimacy crucially moderates the effect of power on hypocrisy.

We show with a laboratory experiment that individuals adjust their moral principles to the situation and to their actions, just as much as they adjust their actions to their principles. We first elicit the individuals’ principles regarding the fairness and unfairness of allocations in three different scenarios (a Dictator game, an Ultimatum game, and a Trust game). One week later, the same individuals are invited to play those same games with monetary compensation. Finally, in the same session, we again elicit their principles regarding the fairness and unfairness of allocations in the same three scenarios.

Our results show that individuals adjust abstract norms to fit the game, their role and the choices they made. First, norms that appear abstract and universal take into account the bargaining power of the two sides. The strong side bends the norm in its favor and the weak side agrees: Stated fairness is a compromise with power. Second, in most situations, individuals adjust the range of fair shares after playing the game for real money compared with their initial statement. Third, the discrepancy between hypothetical and real behavior is larger in games where real choices have no strategic consequence (Dictator game and second mover in Trust game) than in those where they do (Ultimatum game). Finally, the adjustment of principles to actions is mainly driven by individuals who behave more selfishly and who have stronger bargaining power.

…Individuals destroy the resources of others because of envy (Mui, 1995; Maher, 2010; Charness et al., 2010; Harbring and Irlensbusch, 2011) or for the joy of destruction (Zizzo and Oswald, 2001; Abbink and Sadrieh, 2009); the power of public office sometimes leads politicians to use it for their personal gain (Aidt, 2003); feelings of entitlement push leaders to take more than followers from a common resource (de Cremer and van Dijk, 2005).

de Cremer, D., van Dijk, E. (2005). When and why leaders put themselves first: Leader behaviour in resource allocations as a function of feeling entitled. European Journal of Social Psychology, 35, 553-563.

…social psychologists studying moral hypocrisy have shown that individuals evaluate the moral transgression of fair principles more negatively when this transgression is enacted by others than when enacted by themselves (Valdesolo and DeSteno, 2008).

Such an illusory preference for fairness has been identified by Dana, Weber and Kuang (2007) (see also Larson and Capra, 2009; Grossman, 2010; van der Weele, 2012). Indeed, fairness decreases substantially when the link between fairness and outcome is obfuscated. The choice to play fair is frequently motivated by the willingness to appear fair more than by the willingness to produce a fair outcome and this is why greater anonymity leads to more selfish transfers in the dictator game (Andreoni and Bernheim, 2009; Ariely et al., 2009).

Four studies support this hypothesis. Individuals who took coffee from another person’s can (Study 1), violated rules of bookkeeping (Study 2), dropped cigarette ashes on the floor (Study 3), or put their feet on the table (Study 4) were perceived as more powerful than individuals who did not show such behaviors. The effect was mediated by inferences of volitional capacity, and it replicated across different methods (scenario, film clip, face-to-face interaction), different norm violations, and different indices of power (explicit measures, expected emotions, and approach/inhibition tendencies).

…‘‘Power tends to corrupt, and absolute power corrupts absolutely,’’ wrote Lord Acton to Bishop Mandell Creighton in 1887. This classic adage not only reflects popular sentiments about power; it is also supported by scientific research (e.g., Kipnis, 1972).

Kipnis, D. (1972). Does power corrupt? Journal of Personality and Social Psychology, 24, 33-41.

IMPORTANT: Regarding the scientific fraud of my former supervisor Stapel: the Levelt committee has investigated all my work with Stapel. All my work on the topic of power has been cleared of suspicion of data-fraud. This research is all based on data that I collected myself or collected together with other co-authors (i.e. not Stapel). There is one paper (on racism in legal decisions) where I was misled. This paper contains false data. It is currently being retracted.

In this chapter, we propose one answer to the question of when values and moral principles play a central role in people’s judgments and plans. We explore the possibility that values and moral principles are more prominent in judgments and predictions regarding psychologically more distant events. This perspective is based on construal level theory (CLT; Liberman & Trope, 2008; Liberman, Trope, & Stephan, 2007; Trope & Liberman, in press), according to which the construal of psychologically more distant situations highlights more abstract, high-level features. Because values and moral rules tend to be abstract and general, people are more likely to use them in construing, judging, and planning with respect to psychologically more distant situations.

For example, Nussbaum, Trope, and Liberman (2003, Study 2) conceptualized personal dispositions as high-level construals and situational constraints as low-level construals and demonstrated that people expect others to express their personal dispositions and act consistently across different situations in the distant future more than in the near future. In the study, participants imagined an acquaintance’s behavior in four different situations (e.g., a birthday party, waiting in line at the supermarket) in either the near future or the distant future and rated the extent to which the acquaintance would display 15 traits (e.g., behave in a friendly vs. an unfriendly manner) representative of the Big Five personality dimensions (extraversion, agreeableness, conscientiousness, emotional stability, and intellect). Cross-situational consistency was assessed by computing, for each of the 15 traits, the variance in each predicted behavior across the four situations and the correlations among the predicted behaviors in the four situations. As predicted, participants expected others to behave more consistently across distant-future situations than across near-future situations. This finding was replicated with ratings of participants’ own behavior in different situations: Participants anticipated exhibiting more consistent traits in the distant future than in the near future (Wakslak, Nussbaum, Liberman, & Trope, 2008, Study 5).
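The consistency measure described above is simple to sketch. A minimal illustration follows; the ratings are invented for demonstration only (the study's actual data are not reproduced here):

```python
from statistics import variance

# Hypothetical 1-9 ratings of one acquaintance on a single trait
# ("friendly") across the four imagined situations. Invented numbers.
near_future = [7, 3, 8, 4]      # ratings swing with the situation
distant_future = [6, 5, 7, 6]   # ratings stay close to a stable disposition

# Lower variance across situations = higher expected cross-situational
# consistency. The study computed this per trait (and also correlated
# the 15-trait profiles between pairs of situations).
print(round(variance(near_future), 2))     # 5.67
print(round(variance(distant_future), 2))  # 0.67
```

The distant-future pattern above (low variance) corresponds to the finding that people expect dispositions, not situations, to drive behavior at a temporal distance.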

Wakslak, C. J., Nussbaum, S., Liberman, N., & Trope, Y. (2008). Representations of the self in the near and distant future. Journal of Personality and Social Psychology, 95, 757-773.

For each scenario (e.g., national flag), participants chose between two restatements of each action. One restatement referred to an abstract moral principle (high-level construal; e.g., desecrating a national symbol) and the other restatement referred to the means of carrying out the action (low-level construal; e.g., cutting a flag to create rags). We found that distant-future transgressions were identified in moral terms more often than near-future transgressions. These findings suggest that people are more likely to think of a temporally distant action, rather than one in the near term, as having moral implications. CLT predicts similar results for other forms of psychological distance: Situations should be more readily construed in terms of moral principles when they occurred further back in the past, when they apply to more socially or spatially distant individuals or groups, and when they are less likely actually to occur. When the same actions are proximal, they are more likely to be construed in terms that are devoid of moral implications. For example, accepting minority students with lower grades into one’s university will be seen as “endorsing affirmative action” when it is unlikely to be implemented, but it will be seen in more concrete terms (e.g., as “making acceptance rules more complicated”) when it becomes more likely.

The vignettes also included situational details that rendered the transgressions harmless (low-level information; e.g., the siblings used contraceptives, they had sex just once, they kept it a secret). Participants were instructed to imagine that the transgressions would occur tomorrow (the near-future condition) or next year (the distant-future condition) and judged the extent of their wrongness. We found that moral transgressions were judged more severely when imagined in the distant future compared to the near future. The same pattern occurred with social distance (Eyal et al., 2008, Study 3), which was manipulated by asking participants to focus either on the feelings and thoughts they experienced while reading about the events (low social distance) or to think about another person they knew, such as a colleague, a friend, or a neighbor, and focus on the feelings and thoughts that this person would experience while reading about the events (high social distance). Notice that the social distance manipulation did not involve judging one’s own versus another person’s actions, but only one’s imagined perspective. Notably, this manipulation does not support interpreting the results in terms of moral hypocrisy, according to which people judge their own moral transgressions less harshly than another person’s transgressions because they wish to appear better than others. As predicted, moral transgressions were judged more harshly when imagined from a third person perspective (high social distance) compared to one’s own perspective (low social distance). Another study (Eyal et al., 2008, Study 4) examined temporal distance effects on judgments of moral acts. Participants read vignettes that described virtuous acts related to widely accepted moral principles (high-level information; e.g., a couple adopting a disabled child) as well as low-level, situational details that rendered the acts less noble (e.g., the government offering large adoption payments).
It was found that these behaviors were judged to be more virtuous when they were described as happening in the distant future rather than the near future.

Temporal distance from moral transgressions was also found to affect people’s emotional responses. Agerström and Björklund (2009, Studies 1 and 2) asked Swedish participants to imagine situations that involved a threat to human welfare taking place in the near future (today) or in the distant future (in 30 years). For example, one scenario, set in Darfur, Africa, described a woman who was raped and beaten by the Janjaweed militia. Each scenario was followed by a description of a prosocial action that, if taken, could improve the situation (e.g., donate money). Participants rated how wrong it would be for another Swedish citizen not to take the proposed prosocial action given that they had the means to do so. They also rated how angry they would feel if the target person failed to take the prosocial action. It was found that distant-future moral failures were judged more harshly and invoked more anger than near-future moral failures.

In another study, Agerström and Björklund (2009) examined whether the greater reliance on moral principles in judgments of distant-future compared to near-future transgressions would generalize to individuals’ self-perceptions. Participants rated the likelihood of engaging in prosocial actions in reaction to other people’s moral transgressions. For example, participants indicated how much money they were willing to donate to help improve the situation in Darfur. As predicted, participants were more likely to express prosocial behavioral intentions when imagining the act occurring in the more distant future. Taken together, these findings suggest that moral rules are more likely to guide people’s judgments of distant rather than proximal behaviors.

For example, individuals for whom altruism was subordinate in importance to achievement were more likely to refuse to help a fellow student in the distant future than in the near future, whereas individuals for whom achievement was subordinate to altruism were more likely to help a fellow student in the distant future than in the near future. These findings show that secondary values, which are nonetheless part of an individual’s self-identity, may mask the influence of central values on near future intentions. Centrality of values may be defined not only within an individual but also within a situation. For example, when medically treating a person from a rival group in a war, the competition is central and mercy is secondary, whereas in a hospital, the reverse is true. An interesting prediction that follows from CLT is that the secondary value will guide behavioral intentions in the near future more than in the distant future. Thus, in a war, benevolence will come into play in near-future plans more than in distant-future plans, leading people to be more merciful than would otherwise be expected. In his poem “After the Battle”, Victor Hugo tells of his father (“that hero with the sweetest smile”), an officer in the war against Spain, who encounters a Spanish soldier asking for something to drink. Although on the battlefield, and although the Spaniard tries to kill him, the officer orders: “All the same, give him something to drink.”

But lying does not come without cost. Ordinary lie-tellers experience negative emotions, decrements in mental function, and physiological stress. Liars are also at risk of getting caught. Despite people’s best attempts to get away with their prevarications, lies are often behaviorally “leaked” through subtle changes in body movement and speech rate. Power, it seems, enhances the same emotional, cognitive, and physiological systems that lie-telling depletes. People with power enjoy positive emotions, increases in cognitive function (4-5), and physiological resilience such as lower levels of the stress hormone cortisol (6-7). Thus, holding power over others might make it easier for people to tell lies.

Participants were assigned to the role of “leader” or “subordinate” and engaged in a series of social interactions in which the leader had control over the subordinate’s monetary and social outcomes (9)…. If the individual could successfully convince the experimenter (regardless of whether they were lying) they could keep the $100 in cash. All participants were then interviewed about whether they had stolen the money: half were lying and half were telling the truth. The interviewer (blind to experimental condition) asked all participants the same critical questions (e.g., “Did you steal the $100?”; “Why should I believe you?”). After the interview, participants completed measures of moral emotional feelings (rated emotion terms: bashful, guilty, troubled, scornful) and a computerized task assessing degree of cognitive impairment. All participants provided saliva samples before and after the experiment to assess changes in the stress hormone cortisol (9). The interviews were videotaped and coded for two classic nonverbal markers of deception: one-sided shoulder shrugs and accelerated prosody (9). Low-power individuals showed the expected emotional, cognitive, physiological, and behavioral signs of deception; in contrast, powerful people demonstrated no evidence of lying across emotion, cognition, physiology, or behavior (see Figure). In other words, power acted as a buffer allowing the powerful to lie significantly more easily (less disturbing emotion, less cognitive impairment, less of a rise in the stress hormone cortisol) and more effectively (fewer nonverbal cues associated with lying). Only low-power individuals felt bad after lying (panel A), suffered cognitive impairment (panel B), spiked in levels of the stress hormone cortisol (panel C), and demonstrated nonverbal “leakage” (more one-sided shoulder shrugs and accelerated prosody; panel D). (9)

But the investment game has been manipulated in numerous ways that produce differing levels of trust and greater selfishness. One manipulation of particular interest is the introduction of the possibility that, at the end of the game, the trustor will learn whether she gets something back but will not know whether this is the result of the trustee’s choice or some exogenous force – e.g., luck.34 Given the opportunity to hide behind the possibility that a return of nothing was just bad luck for the trustor, trustees predictably keep more for themselves, presumably rationalizing the outcome as fair in an uncertain world. The authors of one such study recently drew parallels to financial relationships between investors and securities professionals, because the financial markets generate a great deal of good and bad luck that obscures the value added by professional trustworthiness.35

Unfortunately, high testosterone levels do not fit well with fiduciary characteristics like empathy and moral decision-making. Emerging research on the subject suggests that testosterone buffers emotional constraints on aggression and risk-taking, leading to a more “cold” utilitarian calculus and a greater willingness to do harm to gain a preferred outcome.49

See Dana R. Carney & Malia F. Mason, Decision Making and Testosterone: When the Ends Justify the Means, 46 J. EXPERIMENTAL SOC. PSYCHOL. 668, 668-69 (2010). As the authors point out, the ends need not necessarily be immoral. Id. at 670.

Power also seems to increase hypocrisy – insistence on adherence to strict norms by others, while enjoying far greater nimbleness in justifying one’s own departures on utilitarian or other rationalized grounds52 – and optimism and risk-taking.53 Of course, power may be gained in the first place by those skilled at rationalization and willing to take risks, in which case there is a dynamic feedback loop that is likely to generate increasing hypocrisy and hubris over time.

Although disclosure is often proposed as a potential solution to these problems, we show that it can have perverse effects. First, people generally do not discount advice from biased advisors as much as they should, even when advisors’ conflicts of interest are disclosed. Second, disclosure can increase the bias in advice because it leads advisors to feel morally licensed and strategically encouraged to exaggerate their advice even further. As a result, disclosure may fail to solve the problems created by conflicts of interest and may sometimes even make matters worse.

…In the domain of medicine, for example, research shows that while many people are ready to acknowledge that doctors might generally be affected by conflicts of interest, few can imagine that their own doctors would be affected (Gibbons et al. 1998). Indeed, it is even possible that disclosure could sometimes increase rather than decrease trust, especially if the person with the conflict of interest is the one who issues the disclosure. Research suggests that when managers offer negative financial disclosures about future earnings, they are regarded as more credible agents, at least in the short term (Lee, Peterson, and Tiedens 2004; Mercer, forthcoming). Thus, if a doctor tells a patient that her research is funded by the manufacturer of the medication that she is prescribing, the patient might then think (perhaps rightly) that the doctor is going out of her way to be open or that she is “deeply involved” and thus knowledgeable. Thus, disclosure could cause the estimator to place more rather than less weight on the advisor’s advice. Third, even when estimators realize that they should make some adjustment for the conflict of interest that is disclosed, such adjustments are likely to be insufficient. As a rule, people have trouble unlearning, ignoring, or suppressing the use of knowledge (such as biased advice) even if they are aware that it is inaccurate (Wilson and Brekke 1994). Research on anchoring, for example, shows that quantitative judgments are often drawn toward numbers (the anchors) that happen to be mentally available. This effect holds even when those anchors are known to be irrelevant (Strack and Mussweiler 1997; Tversky and Kahneman 1974), unreliable (Loftus 1979), or even manipulative (Galinsky and Mussweiler 2001; Hastie, Schkade, and Payne 1999). Research on the “curse of knowledge” (Camerer, Loewenstein, and Weber 1989) shows that people’s judgments are influenced even by information they know they should ignore. 
And research on what has been called the “failure of evidentiary discreditation” shows that when the evidence on which beliefs were revised is totally discredited, those beliefs do not revert to their original states but show a persistent effect of the discredited evidence (Skurnik, Moskowitz, and Johnson 2002; Ross, Lepper, and Hubbard 1975). Furthermore, attempts to willfully suppress undesired thoughts can lead to ironic rebound effects, in some cases even increasing the spontaneous use of undesired knowledge (Wegner 1994).

…More interesting, and as predicted, all three measures also reveal that disclosure led to greater distortion of advice. The amount that advisors exaggerated, calculated by subtracting advisors’ own personal estimates from their public suggestions, was significantly greater in the high/disclosed condition than in either of the other two conditions (p<0.05) and significantly greater by the other two measures as well: advisor suggestion minus actual jar values and advisor suggestion minus the average of personal estimates in the accurate condition (p<0.05 for both). In the accurate condition, for example, advisors provided estimators with suggestions of jar values that were, on average, within $1 of their own personal estimates. In the high/undisclosed condition, however, advisors gave suggestions that were $3.32 greater than their own personal estimates, and in the high/disclosed condition, they gave suggestions that were inflated more than twice as much, at more than $7 above their own personal estimates. Disclosure, it appears, did lead advisors to provide estimators with more biased advice.

…Although disclosures did increase discounting by estimators, albeit not significantly, this discounting was not sufficient to offset the increase in the bias of the advice they received. As Table 6 (fourth row) shows, estimator discounting increased, on average, less than $2 from the accurate condition to the high/undisclosed condition and less than $2.50 from the high/undisclosed condition to the high/disclosed condition. However, Table 5 (second row) shows that suggestions increased, on average, almost $4 from the accurate condition to the high/undisclosed condition and increased $4 again from the high/undisclosed condition to the high/disclosed condition. Thus, while estimators in the high/disclosed condition discounted suggestions about $4 more than did estimators in the accurate condition, the advice given in the high/disclosed condition was almost $8 higher than advice given in the accurate condition. Instead of correcting for bias, estimates were approximately 28 percent higher in the high/disclosed condition than in the accurate condition (first row of Table 6).
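The arithmetic in the excerpt above can be restated in a tiny sketch; the dollar figures are the approximate values quoted in the excerpt, not the exact entries of the paper's Tables 5 and 6:

```python
def residual_bias(advice_inflation, estimator_discount):
    """Bias that survives to the final estimate: how much more the advice
    was inflated minus how much more the estimator discounted it
    (both relative to the accurate condition, in dollars)."""
    return advice_inflation - estimator_discount

# Approximate figures from the excerpt above.
undisclosed = residual_bias(advice_inflation=4.0, estimator_discount=2.0)
disclosed = residual_bias(advice_inflation=8.0, estimator_discount=4.5)

print(undisclosed, disclosed)  # 2.0 3.5
# Disclosure roughly doubled the inflation of advice, but discounting
# rose by less, so the net bias reaching the estimator went up, not down.
```

This makes the paper's core point concrete: disclosure only helps if the estimator's extra discounting outpaces the advisor's extra inflation, and in these data it did not.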

…Even in one-shot dictator games (Forsythe et al. 1994), research has long shown that many people will share resources and show self-restraint toward anonymous others (Camerer 2003), especially when it is common knowledge that the recipient expects such benevolence (Dana, Cain, and Dawes 2006). Likewise, research on cheating behavior shows that people do not tend to cheat as much as they can get away with, only to the extent that they can rationalize to themselves (Mazar, Amir, and Ariely 2008).

…When the welfare of others is a consideration, disclosure might reduce moral concerns. Prior research has suggested that when people demonstrate ethical behavior, they often become more likely to subsequently exhibit ethical lapses (Jordan, Mullen, and Murnighan 2009; Zhong, Liljenquist, and Cain 2009). For example, people who are given an opportunity to demonstrate their own lack of prejudice are more likely to subsequently display discriminatory behavior (Monin and Miller 2001). Likewise, after a conflict of interest has been disclosed, advisors may feel that advisees have been warned and that advisors are “morally licensed” to provide biased advice.

…Disclosure of a conflict of interest can also reduce the perceived immorality of giving biased advice by signaling that bias is widespread and therefore less aberrant (Schultz et al. 2007). If advice recipients’ expectations affect advisor behavior (Dana et al. 2006), then the lowered expectations for honesty that come with disclosure might allow an advisor to rationalize providing biased advice because that is exactly what the advisee expects, or should expect, to receive.

…Why is the call for disclosure so popular despite how it can backfire? One possible explanation is that most people are simply not aware of disclosure’s pitfalls. At first glance, disclosure seems like a sensible remedy to a situation in which one party possesses an otherwise hidden incentive to mislead another party. A more cynical explanation would play on the Chicago Theory of Regulation (Becker 1983; Peltzman 1976; Stigler 1971), which posits that regulation typically exists not for the general benefit of society but for the benefit of the regulated groups. These entities might be aware of the ineffectiveness of disclosure but accept it because it benefits them. For example, even though consumer advocates fought hard for warning labels on cigarette packages, the tobacco industry has defended itself against litigation since then by citing the warning labels as evidence that consumers knew the risks. “What was intended as a burden on tobacco became a shield instead” (Action on Smoking and Health 2001). Moreover, even the regulators may be attracted to disclosure if they see it as absolving them of responsibility for protecting consumers by ostensibly empowering consumers to protect themselves. Disclosure may also be perceived as the lesser of evils for those who might otherwise face more substantive regulation. For example, pharmaceutical firms are often strong proponents of disclosure laws, since it is better for them (and for researchers who receive their funding) if researchers must disclose financial ties to the industry rather than actually having to sever them. This all suggests that disclosure may be problematic for more reasons than those identified by the experiments reported above. It would be a mistake, however, to conclude that disclosure is always counterproductive, as some recent laboratory research illustrates (Church and Kuang 2009; Koch and Schmidt 2009). 
Research on practical examples of disclosure, summarized in Full Disclosure (Fung, Graham, and Weil 2007), also shows that disclosure can have real beneficial effects. For example, following a spate of highly publicized SUV rollovers, regulations that required auto manufacturers to publicly disclose rollover ratings led to significant and rapid changes in auto design, resulting in a general decrease in the rollover risk for SUVs. Disclosure is likely to be helpful when information is disclosed in an easily digestible form (or is made available to intermediaries, e.g., ratings companies, who process it for consumers) and when it is clear how one should respond to the disclosed information. The rollover ratings met both criteria: the ratings were represented simply as one to five stars, making it easy for consumers to compare—that is, evaluate jointly—the relative rollover risks of various SUVs. Even when information isn’t presented in such a simple form, disclosure is likely to prove helpful when the recipients are savvy repeat-players who know what to do with the disclosed information, such as institutional investors, experienced attorneys, or managers in government agencies (Church and Kuang 2009; Malmendier and Shanthikumar 2007). Disclosure is much less likely to help individuals such as personal investors, purchasers of insurance, home buyers, or patients, who are unlikely to possess the knowledge or experience to know how much they should discount advice or whether they should get a second opinion in a given conflict-of-interest situation (Malmendier and Shanthikumar 2007).

As predicted, results revealed that posing in high-power (vs. low-power) nonverbal displays caused neuroendocrine and behavioral changes for both male and female participants: High-power posers experienced elevations in testosterone, decreases in cortisol, and increased feelings of power and tolerance for risk; low-power posers exhibited the opposite pattern. In short, posing in powerful displays caused advantaged and adaptive psychological, physiological, and behavioral changes – findings that suggest that embodiment extends beyond mere thinking and feeling, to physiology and subsequent behavioral choices.

…The neuroendocrine profiles of the powerful differentiate them from the powerless, on two key hormones—testosterone and cortisol. In humans and other animals, testosterone levels both reflect and reinforce dispositional and situational status and dominance; internal and external cues cause testosterone to rise, increasing dominant behaviors, and these behaviors can elevate testosterone even further (Archer, 2006; Mazur & Booth, 1998). For example, testosterone rises in anticipation of a competition and as a result of a win, but drops following a defeat (e.g., Booth, Shelley, Mazur, Tharp, & Kittok, 1989), and these changes predict the desire to compete again (Mehta & Josephs, 2006). In short, testosterone levels, by reflecting and reinforcing dominance, are closely linked to adaptive responses to challenges.

Power is also linked to the stress hormone cortisol: Power holders show lower basal cortisol levels and lower cortisol reactivity to stressors than powerless people do, and cortisol drops as power is achieved (Abbott et al., 2003; Coe, Mendoza, & Levine, 1979; Sapolsky, Alberts, & Altmann, 1997). Although short-term and acute cortisol elevation is part of an adaptive response to challenges large (e.g., a predator) and small (e.g., waking up), the chronically elevated cortisol levels seen in low-power individuals are associated with negative health consequences, such as impaired immune functioning, hypertension, and memory loss (Sapolsky et al., 1997; Segerstrom & Miller, 2004). Low-power social groups have a higher incidence of stress-related illnesses than high-power social groups do, and this is partially attributable to chronically elevated cortisol (Cohen et al., 2006). Thus, the power holder’s typical neuroendocrine profile of high testosterone coupled with low cortisol—a profile linked to such outcomes as disease resistance (Sapolsky, 2005) and leadership abilities (Mehta & Josephs, 2010)—appears to be optimally adaptive.

It is unequivocal that power is expressed through highly specific, evolved nonverbal displays. Expansive, open postures (widespread limbs and enlargement of occupied space by spreading out) project high power, whereas contractive, closed postures (limbs touching the torso and minimization of occupied space by collapsing the body inward) project low power. All of these patterns have been identified in research on actual and attributed power and its nonverbal correlates (Carney, Hall, & Smith LeBeau, 2005; Darwin, 1872/2009; de Waal, 1998; Hall, Coats, & Smith LeBeau, 2005).

Despite people’s positive perceptions of narcissists as leaders, it was previously unknown if and how leaders’ narcissism is related to the performance of the people they lead. In this study, we used a hidden-profile paradigm to investigate this question and found evidence for discordance between the positive image of narcissists as leaders and the reality of group performance. We hypothesized and found that although narcissistic leaders are perceived as effective because of their displays of authority, a leader’s narcissism actually inhibits information exchange between group members and thereby negatively affects group performance. Our findings thus indicate that perceptions and reality can be at odds and have important practical and theoretical implications.

…For example, narcissists tend to overestimate their intelligence (Campbell, Rudich, & Sedikides, 2002), creativity (Goncalo, Flynn, & Kim, 2010), academic abilities (Robins & Beer, 2001), and leadership capabilities (Judge, LePine, & Rich, 2006). Generally, other people do not agree with narcissists’ idealized self-images and perceive narcissists as arrogant, egocentric, overly dominant, and even hostile (Paulhus, 1998). However, the context of leadership constitutes a notable exception in which narcissists tend to be judged positively. For example, individuals with high levels of narcissism receive higher leadership ratings than individuals with low levels of narcissism do (Judge et al., 2006) and tend to emerge as leaders in groups (Brunell et al., 2008; Nevicka, De Hoogh, Van Vianen, Beersma, & McIlwain, 2011). In addition, higher narcissism in U.S. presidents is associated with more positive evaluations of their leadership (Deluga, 1997). It is therefore not surprising that narcissistic characteristics are ascribed to many prominent leaders, such as Nicolas Sarkozy (De Sutter & Immelman, 2008) and Steve Jobs (Robins & Paulhus, 2001).

…Of the two prior studies investigating this question, one found no effects of narcissistic leadership on performance (Brunell et al., 2008), and the other showed that organizational performance was merely more volatile, but no worse or better, because of narcissistic leaders’ risky decision making (Chatterjee & Hambrick, 2007). Unfortunately, neither of these studies examined the effects of narcissistic leaders on group dynamics, communication, and information exchange, factors that are critically important to group decision making (Stasser, 1999), group performance (De Dreu, Nijstad, & van Knippenberg, 2008), and organizational effectiveness (Zaccaro, Rittman, & Marks, 2001)…Prior research has hinted at a potentially negative effect of narcissistic individuals on group and organizational performance. For example, in one study, individuals with high levels of narcissism allocated more resources to themselves than did individuals with low levels of narcissism—at a long-term cost to other group members (Campbell, Bush, Brunell, & Shelton, 2005). However, prior research did not provide a clear link between leader’s narcissism and group or organizational performance.

Participants were assigned to a high power or control role and then performed a computerised spatial cueing task in which they were required to direct their attention to a target that had been preceded by either a valid or invalid location cue. Compared to participants in the control condition, power-holders were better able to override the misinformation provided by invalid cues. This advantage occurred only at 500 ms stimulus onset asynchrony (SOA), whereas at 1000 ms SOA, when there was more time to prepare a response, no differences were found. These findings are taken to support the growing idea that social power affects cognitive flexibility…Post-test questionnaires confirmed that these effects could not be attributed to differences in positive affect or self-efficacy. We suggest that power most affected performance during invalid trials because these required a greater degree of cognitive flexibility; individuals needed to ignore the cue and unexpectedly orient attention towards the opposite location. In line with this account, the effect was only evident at relatively short SOAs where participants had little time to prepare an appropriate response. At longer SOAs or on valid trials, the need for flexibility was lower which may explain why no effect was seen.

Social power affects the way in which information is attended and discriminated (Fiske, 1993; Guinote, 2007a). Power holders have more resources and fewer constraints, which gives them more attentional resources and allows them to discriminate between relevant and irrelevant information (Guinote, 2007a; Overbeck & Park, 2001). In contrast, powerless people face more constraints and environmental threats (Keltner, Gruenfeld, & Anderson, 2003). Their dependency encourages them to attend to multiple cues in the environment, in search of any potentially useful information. Thus, they treat information more equally, attending not only to the central information but also to the peripheral or distracting information (Slabu & Guinote, 2010). This overflow in information processing makes powerless people less able to respond promptly to specific situational demands, and induces attentional inflexibility (Guinote, 2007a).

Fiske, S. T. (1993). Controlling other people: The impact of power on stereotyping. American Psychologist, 48(6), 621-628. doi: 10.1037/0003-066X.48.6.621

Research using basic cognitive paradigms supports these claims. For example, Guinote (2007b) showed that high power participants are better able to focus their attention to target objects and ignore the influence of irrelevant background distracters (see also Smith & Trope, 2006). A further outcome of the cognitive flexibility experienced by powerful individuals is the increased ability to adjust their actions in line with changing contextual cues. This includes the ability to suppress dominant responses and implement non-dominant ones when the task calls for non-dominant responses (Guinote, 2007b).

Smith, P. K., & Trope, Y. (2006). You focus on the forest when you’re in charge of the trees: Power priming and abstract information processing. Journal of Personality and Social Psychology, 90(4), 578-596. doi: 10.1037/0022-3514.90.4.578

For example, several studies have shown that having power increases the ability to resolve conflicts and plan action sequences; power-holders are immune to stimulus-response compatibility effects, and are better able to switch attention between the holistic and detailed components of stimuli, as changing task demands dictate (Guinote, 2007b; Smith, Jostmann, Galinsky, & van Dijk, 2008)… More broadly, our findings build on those reported by Willis, Rodríguez-Bailón, and Lupiáñez (2011), who showed that powerful individuals can make better use of cues present in the environment to increase their executive control (see also Smith, et al., 2008). Their data support the idea that social power can impact rudimentary processes associated with spatial orienting and control.

Elevated power increases the psychological distance one feels from others, and this distance, according to construal level theory (Y. Trope & N. Liberman, 2003), should lead to more abstract information processing. Thus, high power should be associated with more abstract thinking—focusing on primary aspects of stimuli and detecting patterns and structure to extract the gist, as well as categorizing stimuli at a higher level—relative to low power. In 6 experiments involving both conceptual and perceptual tasks, priming high power led to more abstract processing than did priming low power, even when this led to worse performance. Experiment 7 revealed that in line with past neuropsychological research on abstract thinking, priming high power also led to greater relative right-hemispheric activation.

Though the abstraction hypothesis has not been directly tested, there is some research that supports it. For example, in Overbeck and Park’s (2001) experiments, high- and low-power participants interacted via e-mail with several different targets holding the opposite power role and received various kinds of information from them. Some of this information was relevant to the task at hand (e.g., Jim waited until the last minute to try to schedule a meeting), and some was irrelevant (e.g., Jim just started a jazz ensemble). Not only did participants in the high-power role recall more information overall than did the low-power participants, but they were especially superior at recalling relevant information. Thus, high-power participants focused more on primary information, a hallmark of abstract thinking.

Portuguese participants used more abstract language to describe both their ethnic group and an outgroup when they were part of the majority (i.e., a higher power group) than when they were part of the minority (i.e., a lower power group; Guinote, 2001). Similarly, participants who played the role of judges during a task used more abstract, trait-like language in referring to themselves than did participants who were workers (Guinote, Judd, & Brauer, 2002).

Guinote, A. (2001). The perception of group variability in a non-minority and a minority context: When adaptation leads to outgroup differentiation. British Journal of Social Psychology, 40, 117–132.

Guinote, A., Judd, C. M., & Brauer, M. (2002). Effects of power on perceived and objective group variability: Evidence that more powerful groups are more variable. Journal of Personality and Social Psychology, 82, 708–721.

Powerholders, more than the powerless, should thus be guided by their primary, overriding goals rather than by subordinate, incidental concerns. This would mean that powerholders are more likely to act in accordance with their core attitudes and values (Chen et al., 2001). Indeed, individuals placed in high-power roles or those higher in personality dominance have been found to express their true attitudes more during a discussion than have participants lower in power or dominance (Anderson & Berdahl, 2002). Such goal-driven behavior also has implications for stereotyping. Powerholders should be more likely to stereotype those beneath them when such stereotyping is seen as an effective means to their goals. Evidence for this has already been found in the context of the Social Influence Strategy × Stereotype Match hypothesis (Vescio, Snyder, & Butz, 2003).

Chen, S., Lee-Chai, A. Y., & Bargh, J. A. (2001). Relationship orientation as a moderator of the effects of social power. Journal of Personality and Social Psychology, 80, 173-187

Anderson, C., & Berdahl, J. L. (2002). The experience of power: Examining the effects of power on approach and inhibition tendencies. Journal of Personality and Social Psychology, 83, 1362–1377

Thought condition again had different effects on performance for the two priming conditions, F(1, 161) = 4.67, prep = .91, ηp² = .03 (see Fig. 1). Low-power participants performed significantly better after unconscious thought than after conscious thought, prep = .96. High-power participants performed equally well in both thought conditions and did not differ from low-power participants in the unconscious-thought condition, Fs < 1. Furthermore, our manipulations did not significantly affect participants’ confidence in and certainty of their attitudes, preps < .70, their reported effort or motivation, preps < .84, or the amount of apartment information they correctly recalled, Fs < 1. Differences in performance could not be attributed to depth of processing. When given problems requiring a complex decision, high-power participants were equally good at identifying the better choice after conscious versus unconscious thought, whereas the performance of low-power participants suffered when they consciously deliberated. These results provide further evidence that conscious and unconscious thought differ in the type of processing that occurs. The powerful seem to be able to handle so many impactful decisions, without making excessive errors, in part because they generally think more abstractly.

We further manipulate status by allocating the central position to the person who earns the highest, or the lowest, score on a trivia quiz. These high-status and low-status treatments are compared, and we find that the effect of organizational structure – the existence of a central position – depends on the status of the central player. Higher status players are attended to and mimicked more systematically. Punishment has differential effects in the two treatments, and is least effective in the high-status case.

In this study, we ask whether social status serves as a useful mechanism for solving public goods problems. Status can act as a coordinating device, as it does in pure coordination games, with higher-status individuals more likely to be mimicked (followed) by others. In addition, in a setting with costly punishment, social status may enhance the effectiveness of punishment and reduce anti-social punishment, enhancing overall efficiency…Status is awarded by the experimenter using scores on a general-knowledge trivia quiz that is unrelated to the experimental game. The central position is given to either the high scorer (high-status treatment) or the low scorer (low-status treatment). Subjects play two games: a standard linear voluntary contribution mechanism (VCM) and a VCM with costly punishment. We find that higher-status central players are more likely to be “followed” in the key situation when the peripheral player is contributing less than the central player. We also find that high status central players punish less, and peripheral players are more responsive to punishment by a higher-status central player…Our results suggest that punishment, while important to enforcing cooperative norms in many social dilemmas, does not boost contributions in all instances. Punishment is used more readily by low-status groups, and increases overall contributions only among low-status groups. However this seems to be primarily a main effect of the punishment institution, as there is little evidence that punishment tokens levied actually increase contributions in low-status groups; indeed there is weak evidence that the response to punishment is greater in high-status groups. Retaliatory punishment of central players is seen only in the low-status groups. An unexpected consequence of these differences is that punishment is not efficiency- enhancing when the status of the central player is high. 
Costly punishment is used less in these groups, but contributions are not higher than without punishment. This generates a flat contribution pattern, and no differences between the VCM with and without punishment opportunities. At the other extreme, low status central players punish and are heavily punished, and make significantly less money in the experiment than any other type of subject. But the reaction of low status groups to the new environment generates a significant increase in the provision of the public good.
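To make the game in these studies concrete: a linear voluntary contribution mechanism (VCM) lets each player split an endowment between a private account and a shared pot, where every token in the pot pays back a fixed marginal per-capita return (MPCR) to all players. The sketch below is a minimal illustration of that payoff structure; the endowment and MPCR values are illustrative assumptions of mine, not the parameters used in the experiment.

```python
def vcm_payoffs(contributions, endowment=20, mpcr=0.4):
    """Linear VCM payoffs: each player keeps whatever part of the
    endowment they did not contribute, plus an MPCR share of the
    total pot. With 1/n < mpcr < 1, free-riding is individually
    optimal even though full contribution maximizes group welfare."""
    pot = sum(contributions)
    return [endowment - c + mpcr * pot for c in contributions]

# Full cooperation: everyone contributes 20, each earns 0.4 * 80 = 32
cooperate = vcm_payoffs([20, 20, 20, 20])

# One free-rider among cooperators earns 20 + 0.4 * 60 = 44,
# beating the cooperators' 24 - hence the dilemma that status
# and punishment are being tested as remedies for.
mixed = vcm_payoffs([0, 20, 20, 20])
```

The punishment treatment then adds a second stage where players can pay tokens to reduce others' earnings; the finding above is that this stage only raises contributions when the central player is low-status.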

Second, high-status agents may have a strong influence on others, as others seek their company and guidance, affecting choices and decision making by lower-status individuals. Thus high-status individuals are more likely to be mimicked or deferred to (Ball et al. 2001, Kumru and Vesterlund 2005). Imitating or learning from higher-status exemplars can help solve coordination problems (Eckel and Wilson 2007); the behavior of the higher-status individual provides an example that is observed and can be followed by others.

Gil-White and Henrich (2001) argue that attending to and mimicking high status individuals is a valuable strategy in a world where successful individuals may have superior information. Cultural transmission is enhanced when higher-status, successful individuals are copied by others. Copying successful individuals has evolutionary payoffs, so that humans may have evolved a preference for paying attention to and learning from high-status agents (see also Boyd and Richerson 2002, Boyd et al. 2003). Bala and Goyal (1998) capture the essence of the idea of attending to a high-status agent in a model where the presence of a commonly-observed agent, which they term the “royal family”, can have a significant impact on which among multiple equilibria is selected…Experimental research confirms the tendency of individuals to mimic high-status agents. Eckel and Wilson (2001) show that a commonly observed agent can influence equilibrium selection in a coordination game…Imitation makes the population of subjects more likely to reach a Pareto-superior, but risk-dominated, equilibrium, an outcome that rarely occurs otherwise (Cooper et al. 1990). Kumru and Vesterlund (2005) show a related result, with high-status first-movers more likely to be mimicked in a 2-person sequential voluntary contribution game. In their setting, high status enhances the ability of leaders to increase total contributions.

Now, some 25 years later, seven studies we conducted [Piff et al 2012], some on this same campus, have proved the opposite, that greed, far from being good, undermines moral behavior….Unethical behaviors among the wealthy are as timeless and pervasive as the ethical principles that try to rein them in. Our research pinpointed why wealth produces unethical conduct with such regularity: greed. Across studies, wealthier subjects expressed the conviction that greed is moral, echoing [Ivan] Boesky and Gekko and their intellectual companions (e.g., Ayn Rand). And it was their greed-is-good attitudes, we found, that gave rise to their unethical behavior. Wealth gives rise to a me-first mentality, and the ideology of unbridled self-interest serves as its lofty justification. Greg Smith is to be applauded for calling out the culture of greed at Goldman Sachs. It is a knockout blow, one as important as Ivan Boesky’s proclamation nearly a generation ago. Nobel laureate Milton Friedman famously argued that the single social responsibility of business is to increase profits as long as “it stays within the rules of the game.” The problem is, when greed for profits is the bottom line, the rules may fall by the wayside.

Videos of 60-s slices of these interactions were coded for nonverbal cues of disengagement and engagement, and estimates of participants’ SES were provided by naive observers who viewed these videos. As predicted by analyses of resource dependence and power, upper-SES participants displayed more disengagement cues (e.g., doodling) and fewer engagement cues (e.g., head nods, laughs) than did lower-SES participants….Research relevant to this hypothesis is limited, but suggestive. For example, in a meta-analytic review of status and nonverbal behavior, upper SES individuals were found to speak in ways that are less attentive to the audience, for example, with fewer turn-inviting pauses (Hall et al., 2005)…SES was measured objectively using self-reports of family income and education (e.g., Lachman & Weaver, 1998). [They used undergraduates, not people who had personally clawed into power.]

Consistent with the previously cited studies about how acting rude or defecting is perceived as power.

Recent research suggests that lower-class individuals favor explanations of personal and political outcomes that are oriented to features of the external environment. We extended this work by testing the hypothesis that, as a result, individuals of a lower social class are more empathically accurate in judging the emotions of other people. In three studies, lower-class individuals (compared with upper-class individuals) received higher scores on a test of empathic accuracy (Study 1), judged the emotions of an interaction partner more accurately (Study 2), and made more accurate inferences about emotion from static images of muscle movements in the eyes (Study 3). Moreover, the association between social class and empathic accuracy was explained by the tendency for lower-class individuals to explain social events in terms of features of the external environment.

See the previous discussions of blame, self-centeredness, lack of empathy, and rule-breaking; related: fundamental attribution bias.

Previous research indicates that lower-class individuals experience elevated negative emotions as compared with their upper-class counterparts. We examine how the environments of lower-class individuals can also promote greater compassionate responding-that is, concern for the suffering or well-being of others. In the present research, we investigate class-based differences in dispositional compassion and its activation in situations wherein others are suffering. Across studies, relative to their upper-class counterparts, lower-class individuals reported elevated dispositional compassion (Study 1), as well as greater self-reported compassion during a compassion-inducing video (Study 2) and for another person during a social interaction (Study 3). Lower-class individuals also exhibited heart rate deceleration-a physiological response associated with orienting to the social environment and engaging with others-during the compassion-inducing video (Study 2)…For example, when describing environmental trends in economic inequality and everyday life outcomes (e.g., being laid off from work), undergraduates of lower subjective socioeconomic status—measured by ranking oneself in society in terms of income, education, and job status relative to others—attribute the causes of economic inequality to more external reasons (e.g., political influence, educational opportunity) than dispositional reasons (e.g., hard work, talent), relative to their upper-class counterparts (Kraus et al., 2009)…Converging evidence also suggests that lower-class individuals favor an interdependent view of the self, whereas upper-class individuals are more inclined to espouse beliefs in an individuals’ independence and autonomy (Stephens, Fryberg, & Markus, 2011; Stephens, Markus, & Townsend, 2007). 
For instance, in one study lower-class university students, whose parents’ highest level of education was a high school diploma, tended to make choices that helped them blend in with others (e.g., by choosing a pen that resembled other pens; Stephens et al., 2007). In contrast, upper-class individuals, whose parents graduated from college, tended to prefer choices that helped them stand out (e.g., by choosing a unique pen). In recent work, Stephens and colleagues (2011) suggest that stronger relational norms among working-class individuals result in a less positive perception of individual choice, which favors an individual’s own needs.

In studies 1 and 2, upper-class individuals were more likely to break the law while driving, relative to lower-class individuals. In follow-up laboratory studies, upper-class individuals were more likely to exhibit unethical decision-making tendencies (study 3), take valued goods from others (study 4), lie in a negotiation (study 5), cheat to increase their chances of winning a prize (study 6), and endorse unethical behavior at work (study 7) than were lower-class individuals. Mediator and moderator data demonstrated that upper-class individuals’ unethical tendencies are accounted for, in part, by their more favorable attitudes toward greed…Individuals from upper-class backgrounds are also less generous and altruistic. In one study, upper-class individuals proved more selfish in an economic game, keeping significantly more laboratory credits—which they believed would later be exchanged for cash—than did lower-class participants, who shared more of their credits with a stranger (7). These results parallel nationwide survey data showing that upper-class households donate a smaller proportion of their incomes to charity than do lower-class households (10)…Research finds that individuals motivated by greed tend to abandon moral principles in their pursuit of self-interest (13). In one study, a financial incentive caused people to be more willing to deceive and cheat others for personal gain (14). In another study, the mere presence of money led individuals to be more likely to cheat in an anagram task to receive a larger financial reward (1)…Why are upper-class individuals more prone to unethical behavior, from violating traffic codes to taking public goods to lying? This finding is likely to be a multiply determined effect involving both structural and psychological factors. 
Upper-class individuals’ relative independence from others and increased privacy in their professions (3) may provide fewer structural constraints and decreased perceptions of risk associated with committing unethical acts (8). The availability of resources to deal with the downstream costs of unethical behavior may increase the likelihood of such acts among the upper class. In addition, independent self-construals among the upper class (22) may shape feelings of entitlement and inattention to the consequences of one’s actions on others (23). A reduced concern for others’ evaluations (24) and increased goal-focus (25) could further instigate unethical tendencies among upper-class individuals. Together, these factors may give rise to a set of culturally shared norms among upper-class individuals that facilitates unethical behavior.

If there are particular studies you want to read, you can ask here (or on the research request page) and I'll jailbreak them for you. There are so many possible studies that I didn't feel like jailbreaking them all in advance...

There is, she says, a common misperception that at moments like this, when people face an ethical decision, they clearly understand the choice that they are making. "We assume that they can see the ethics and are consciously choosing not to behave ethically," Tenbrunsel says. This, generally speaking, is the basis of our disapproval: They knew. They chose to do wrong.

But Tenbrunsel says that we are frequently blind to the ethics of a situation. Over the past couple of decades, psychologists have documented many different ways that our minds fail to see what is directly in front of us. They've come up with a concept called "bounded ethicality": That's the notion that cognitively, our ability to behave ethically is seriously limited, because we don't always see the ethical big picture. One small example: the way a decision is framed. "The way that a decision is presented to me," says Tenbrunsel, "very much changes the way in which I view that decision, and then eventually, the decision it is that I reach." Essentially, Tenbrunsel argues, certain cognitive frames make us blind to the fact that we are confronting an ethical problem at all.

Tenbrunsel told us about a recent experiment that illustrates the problem. She got together two groups of people and told one to think about a business decision. The other group was instructed to think about an ethical decision. Those asked to consider a business decision generated one mental checklist; those asked to think of an ethical decision generated a different mental checklist.

Tenbrunsel next had her subjects do an unrelated task to distract them. Then she presented them with an opportunity to cheat. Those cognitively primed to think about business behaved radically different from those who were not — no matter who they were, or what their moral upbringing had been. "If you're thinking about a business decision, you are significantly more likely to lie than if you were thinking from an ethical frame," Tenbrunsel says. According to Tenbrunsel, the business frame cognitively activates one set of goals — to be competent, to be successful; the ethics frame triggers other goals. And once you're in, say, a business frame, you become really focused on meeting those goals, and other goals can completely fade from view.

Though people in positions of power have many advantages that sustain their power, stories abound of individuals who fall from their lofty perch. How does this happen? The current research examined the role of illusions of alliance, which we define as overestimating the strength of one’s alliances with others. We tested whether powerholders lose power when they possess overly positive perceptions of their relationships with others, which in turn leads to the weakening of those relationships. Studies 1 and 2 found that powerful individuals were more likely to hold illusions of alliance. Using laboratory as well as field contexts, Studies 3, 4, and 5 found that individuals with power who held illusions of alliance obtained fewer resources, were excluded more frequently from alliances, and lost their power. These findings suggest that power sometimes leads to its own demise because powerful individuals erroneously assume that others feel allied to them.

So not only do bosses set too much store by their strengths, as our Schumpeter column notes, they also habitually overestimate their ability to win respect and support from their underlings. Somehow, on reaching the corner office, they lose the knack of reading subtle cues in others’ behaviour: in a further experiment Mr Brion found that when a boss tells a joke to a subordinate, he loses his innate ability to distinguish between a real and fake smile.

I sat alone on election night as the results came in. I wanted it that way. I wanted to just let myself be swept up in it.

Losing power is felt physically, emotionally, in waves of sensation, in moments of acute distress.

I know now that there are the odd moments of relief as the stress ekes away and the hard weight that felt like it was sitting uncomfortably between your shoulder blades slips off. It actually takes you some time to work out what your neck and shoulders are supposed to feel like. I know too that you can feel you are fine but then suddenly someone’s words of comfort, or finding a memento at the back of the cupboard as you pack up, or even cracking jokes about old times, can bring forth a pain that hits you like a fist, pain so strong you feel it in your guts, your nerve endings. I know that late at night or at quiet moments in the day feelings of regret, memories that make you shine with pride, a sense of being unfulfilled can overwhelm you. Hours slip by.

I know that my colleagues are feeling all this now. Those who lost, those who remain.

Caruso, Vohs, and Baxter's recent paper in the Journal of Experimental Psychology ("Mere Exposure to Money Increases Endorsement of Free Market Systems and Social Inequality," 2012) suggests that critics should also object to commercialism on instrumental grounds. Mere exposure to money makes people more pro-market:

[S]ubtle reminders of the concept of money, relative to non-money concepts, led participants to endorse more strongly the existing social system in the United States in general (Experiment 1) and free market capitalism in particular (Experiment 4), to assert more strongly that victims deserve their fate (Experiment 2), and to believe more strongly that socially-advantaged groups should dominate socially-disadvantaged groups (Experiment 3).

[P]articipants read about the current organ transplant system in the United States. They were told that because organs such as kidneys are in short supply, the United Network for Organ Sharing (UNOS) uses a systematic formula to determine which patients get priority. In addition to assessing the likelihood that the transplant will work, this formula aims to ensure that the socially disadvantaged get preferential access to kidneys because they tend to lack other alternatives (such as dialysis) and therefore are most in need.

Participants then learned that although this is the existing system in the United States, in other countries there is a free market for organs. Just as wealthier and more successful people can afford to purchase relatively better medical care if they choose, in a free market system anyone can buy or sell organs. Accordingly, priority does not necessarily go to those who are the most needy or disadvantaged, but to whoever can most afford to pay.

Result: Fully 37% of Americans subtly exposed to money supported a free market in organs - versus 0% of Americans who were not so exposed.*

It seems relevant to what powerful uploads might do; or just future humans period - Hanson has pointed out that we should expect ever more extreme differences in wealth & power as time & economic growth go on.

In addition to gwern's reply, it's also highly relevant for replying to arguments suggesting that game theory shows co-operation to be the most beneficial course of action, therefore we should expect AIs to want to co-operate with humans. The obvious flaw with that argument is that somebody might have more beneficial courses of action - such as exploitation - available to them, in which case we would expect AIs to ruthlessly exploit us if they were in the right position. It seems reasonable to presume that if you are in a position of power over someone, it easily becomes more beneficial to exploit them than to co-operate.

"Are humans more likely to exploit other humans when they have more power over them" is a testable prediction of this hypothesis: if exploitation is more beneficial than co-operation when you're powerful enough, then we would expect our brains to be evolved to take advantage of that.

Americans may be more narcissistic now than ever, but narcissism is not evenly distributed across social strata. Five studies demonstrated that higher social class is associated with increased entitlement and narcissism. Upper-class individuals reported greater psychological entitlement (Studies 1a, 1b, and 2) and narcissistic personality tendencies (Study 2), and they were more likely to behave in a narcissistic fashion by opting to look at themselves in a mirror (Study 3). Finally, inducing egalitarian values in upper-class participants decreased their narcissism to a level on par with their lower-class peers (Study 4). These findings offer novel evidence regarding the influence of social class on the self and highlight the importance of social stratification to understanding basic psychological processes.

We find that across a range of experimental designs, subjects who reach their decisions more quickly are more cooperative. Furthermore, forcing subjects to decide quickly increases contributions, whereas instructing them to reflect and forcing them to decide slowly decreases contributions. Finally, an induction that primes subjects to trust their intuitions increases contributions compared with an induction that promotes greater reflection. To explain these results, we propose that cooperation is intuitive because cooperative heuristics are developed in daily life where cooperation is typically advantageous. We then validate predictions generated by this proposed mechanism. Our results provide convergent evidence that intuition supports cooperation in social dilemmas, and that reflection can undermine these cooperative impulses.

Here we show that employees of a large, international bank behave, on average, honestly in a control condition. However, when their professional identity as bank employees is rendered salient, a significant proportion of them become dishonest. This effect is specific to bank employees because control experiments with employees from other industries and with students show that they do not become more dishonest when their professional identity or bank-related items are rendered salient. Our results thus suggest that the prevailing business culture in the banking industry weakens and undermines the honesty norm.

...On average, the bank employees had 11.5 years of experience in the banking industry. Roughly half of them worked in a core business unit, that is, as private bankers, asset managers, traders or investment managers. The others came from one of the support units (for example, risk or human resources management). Subjects participated in a short online survey. After answering some filler questions about subjective wellbeing, subjects in the professional identity condition were asked seven questions about their professional background (for example, "At which bank are you presently employed?" or "What is your function at this bank?"). Those in the control condition were asked seven questions that were unrelated to their profession (for example, "How many hours per week do you watch television on average?").

After the priming questions, all subjects anonymously performed a coin tossing task that has been shown to reliably measure dishonest behaviour in an unobtrusive way [18-20] and to predict rule violation outside the laboratory [17]. The rules required subjects to take any coin, toss it ten times, and report the outcomes online. For each coin toss they could win an amount equal to approximately US$20 (as opposed to $0) depending on whether they reported 'heads' or 'tails'. Subjects knew in advance whether heads or tails would yield the monetary payoff for a specific coin toss. Moreover, subjects were informed that their earnings would only be paid out if they were higher than or equal to those of a randomly drawn subject from a pilot study. We introduced this element to mimic the competitive nature of the banking profession [9]. Given that the maximum payoff is approximately $200, subjects faced a considerable incentive to cheat by misreporting the outcomes of their coin tosses.

...We conducted a manipulation check in which subjects converted word fragments into meaningful words. For example, they could complete the word fragment "_ _ oker" with the bank-related word "broker" or an unrelated word such as "smoker". This allowed us to test whether the treatment increased professional identity salience. The frequency of bank-related words in the professional identity condition was increased by 40%, from 26% in the control to 36% (P = 0.035, rank-sum test), indicating that our manipulation was successful.

Figure 1a shows the binomial distribution of earnings in the coin tossing task that would result if everyone behaved honestly, and the empirical distribution from the control condition. Both distributions closely overlap, suggesting that the control group behaved mostly honestly. On average, they reported successful coin flips in 51.6% of the cases, which is not significantly different from 50% (95% confidence interval: 48%, 56%). By contrast, the bank employees were substantially more dishonest in the professional identity condition (Fig. 1b). On average, they reported 58.2% successful coin flips, which is significantly above chance (95% confidence interval: 53%, 63%) and significantly higher than the success rate reported by the control group (P = 0.033, rank-sum test). Figure 1 shows that the treatment effect appears to be driven by two factors: (1) a higher fraction of subjects claiming the maximum earnings; and (2) an increase in incomplete cheating (that is, reporting 6, 7 and 8 successful coin flips). Assuming that subjects did not cheat to their disadvantage, the rate of misreporting is 16% in the professional identity condition. Alternatively, we can compute the fraction of subjects who cheated, which is 26% (Supplementary Methods).
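The honest-behaviour baseline and the implied misreporting rate above can be reproduced with a few lines of arithmetic. This is an illustrative sketch, not the paper's analysis code; `binom_pmf` is my own helper, and the sample sizes behind the confidence intervals are in the paper, not this excerpt:

```python
import math

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n fair coin tosses."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Fig. 1a's honest baseline: Binomial(10, 0.5) over 0..10 reported successes.
honest = [binom_pmf(k, 10) for k in range(11)]

# Implied misreporting rate in the professional-identity condition:
# 58.2% successes were reported where 50% is expected, so (assuming nobody
# cheats to their own disadvantage) the fraction of losing tosses reported
# as wins is (0.582 - 0.5) / 0.5, i.e. about 16%.
misreporting = (0.582 - 0.5) / 0.5
```

The 16% figure in the text falls out of this arithmetic directly; the 26% fraction-of-subjects-who-cheated estimate instead requires the full reported-earnings distribution (the paper's Supplementary Methods), which this excerpt does not contain.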

Regression analysis (Extended Data Table 1) shows that the treatment effect is robust even when we control for a large set of individual characteristics such as age, gender, education, income, and nationality (P = 0.034, Wald test). When also controlling for business unit and experience in the banking industry, we find that employees in core business units were more dishonest than those in support units (P = 0.008, Wald test). However, the treatment effect is not stronger in these units because the interaction between the professional identity condition and working in a core unit is not significant (P = 0.960, Wald test).

Power dynamics are a ubiquitous feature of human social life, yet little is known about how power is implemented in the brain. Motor resonance is the activation of similar brain networks when acting and when watching someone else act, and is thought to be implemented, in part, by the human mirror system. We investigated the effects of power on motor resonance during an action observation task. Separate groups of participants underwent a high-, neutral, or low-power induction priming procedure, prior to observing the actions of another person. During observation, motor resonance was determined with transcranial magnetic stimulation (TMS) via measures of motor cortical output. High-power participants demonstrated lower levels of resonance than low-power participants, suggesting reduced mirroring of other people in those with power. These differences suggest that decreased motor resonance to others’ actions might be one of the neural mechanisms underlying power-induced asymmetries in processing our social interaction partners.


I did, at first; and rethought it before I posted. And I figured that the same response was also roughly correct if it was a "dig at Alicorn." Doing useful drudgery despite bystander effects is remarkable and surprising, so arch comments about someone not doing so would be silly.

Given that everyone around here is usually pretty reasonable, if prone to fallacies of transparency, I therefore assume that Eliezer's actually giving straightforward applause, rather than being ironic. (If I'm wrong ... well, that'd be useful to learn.)

Men are in more positions of power than women or children. And by more, I mean across all cultures and all times and by a large margin. This was not stated in the above. Stating so might further illuminate the psychology of power.

Most anthropologists hold that there are no known societies that are unambiguously matriarchal, but possible exceptions include the Iroquois, in whose society mothers exercise central moral and political roles.

However, this reluctance to accept the existence of matriarchies might be based on a specific, culturally biased notion of how to define 'matriarchy': because in a patriarchy 'men rule over women', a matriarchy has frequently been conceptualized as 'women ruling over men', whereas in reality women-centered societies are - apparently without exception - egalitarian.

In other words, there isn't a trivial symmetry between those societies that are called "patriarchy" and those that are called "matriarchy".

Feminism is necessary because of an auto-correct feature that suggests closely related words that are used more often?

The word "patriarchy" and its derivatives are used extensively by feminists and feminist scholars. If anything, the fact that the word "patriarchy" pops up more often can indicate that there is more awareness of current and historic gender asymmetry.
Also, the word patriarchy is bound to be used more often simply due to historical context.

There are legitimate reasons for the feminist movement to continue. This is not one of them.

I've seen too many comments like yours on Facebook or on Reddit without a hint of irony to think that it's a joke. Or maybe I'm just terribly dull when it comes to differentiating what is and isn't humor.

I think it's fair to say that most of us here would prefer not to have most Reddit or Facebook users included on this site, the whole "well-kept garden" thing. I like to think LW continues to maintain a pretty high standard when it comes to keeping the sanity waterline high.

No problem; even before your comment I considered adding a disclaimer so that people wouldn't misinterpret it as a slam against feminism. (The downvote on the great-grandparent wasn't from me, by the way.)

For this to illuminate the psychology of power, we'd first have to be able to accurately articulate the differences between "men" and "women" (the quotes are because I understand those terms to be gender roles, which makes universals tricky; I still don't know all the true differences between males and females).

In yet another study, the Berkeley researchers invited a cross section of the population into their lab and marched them through a series of tasks. Upon leaving the laboratory testing room the subjects passed a big jar of candy. The richer the person, the more likely he was to reach in and take candy from the jar — and ignore the big sign on the jar that said the candy was for the children who passed through the department. Maybe my favorite study done by the Berkeley team rigged a game with cash prizes in favor of one of the players, and then showed how that person, as he grows richer, becomes more likely to cheat...

A team of researchers at the New York State Psychiatric Institute surveyed 43,000 Americans and found that, by some wide margin, the rich were more likely to shoplift than the poor. Another study, by a coalition of nonprofits called the Independent Sector, revealed that people with incomes below twenty-five grand give away, on average, 4.2 percent of their income, while those earning more than 150 grand a year give away only 2.7 percent. A UCLA neuroscientist named Keely Muscatell has published an interesting paper showing that wealth quiets the nerves in the brain associated with empathy: if you show rich people and poor people pictures of kids with cancer, the poor people's brains exhibit a great deal more activity than the rich people's. (An inability to empathize with others has just got to be a disadvantage for any rich person seeking political office, at least outside of New York City.) "As you move up the class ladder," says Keltner, "you are more likely to violate the rules of the road, to lie, to cheat, to take candy from kids, to shoplift, and to be tightfisted in giving to others. Straightforward economic analyses have trouble making sense of this pattern of results."...

Not long ago an enterprising professor at the Harvard Business School named Mike Norton persuaded a big investment bank to let him survey the bank's rich clients. (The poor people in the survey were millionaires.) In a forthcoming paper, Norton and his colleagues track the effects of getting money on the happiness of people who already have a lot of it: a rich person getting even richer experiences zero gain in happiness. That's not all that surprising; it's what Norton asked next that led to an interesting insight. He asked these rich people how happy they were at any given moment. Then he asked them how much money they would need to be even happier. "All of them said they needed two to three times more than they had to feel happier," says Norton.

Based on observing my own thoughts, I strongly suspect there's an Asch conformity-type effect for high status folks, where the opinions of high-status folks end up getting disproportionate weight in your beliefs just because they're high status. (The fact that philosophers have identified "appeals to authority" as frequently fallacious is additional evidence.) Is anyone aware of any research on this?