November 30, 2013

A century or so ago, each college had its own admissions tests, but the inefficiencies of that became apparent. Eventually the country settled on two competing college admissions tests -- the SAT and the ACT -- with colleges free to pile on eccentric essay questions or whatever else they feel like. A few colleges may still have their own unique entrance exam (All Souls College * at Oxford has a famous one), but in the U.S. customized admissions tests are quite rare.

Now you might think that if national tests are good enough for Yale, they ought to be good enough for, say, the New Haven Fire Department. And there are companies that publish tests for fire departments across the country to use. Yet, as we saw with the 2009 Ricci Supreme Court case, nobody thought it odd that New Haven had spent $100,000 on a consulting firm to dream up a customized test just for New Haven. That's pretty common in the fraught world of hiring firemen.

In fact, much of the defense against Mr. Ricci's reverse discrimination lawsuit consisted of the allegation that the test wasn't customized enough: the consulting firm had borrowed a question from earlier tests that referred to "downtown" when there is no downtown in New Haven (or something like that -- all I remember is that the lack of local customization of one question was a big deal in the national press for a couple of weeks in 2009.)

How, exactly, is the New Haven FD so different from, say, the New Canaan FD that New Haven needs to spend one hundred grand on its own test? Caltech, Yeshiva, the Air Force Academy, and Smith are probably more different from each other than most municipal fire departments are different from each other, and yet they all find the SAT and ACT helpful.

Now there are some nationally available general purpose job hiring tests like the Wonderlic IQ test famously used by the NFL. Still, a glance at the 17,000 words of federal guidelines on whether or not the EEOC will come down on you like a ton of bricks if you use formal testing as part of the hiring process demonstrates clearly that the answer to any legal question involving testing and hiring is Maybe. (Or Consult Your Attorneys or You'll Find Out.)

If you started your own college and then, as it got more successful, you announced that you were going to mandate the SAT and/or ACT as part of the admissions process, nobody would blink an eye. But if you start a business that grows big enough to get on the EEOC's radar, can you assume that you can just use some battle-tested national test? Or do you have to validate that the national test specifically works at your not-so-unique company?

Maybe.

Over the last century, industrial/organizational psychologists have put a lot of effort into understanding the mysteries of testing. One of their major findings is that you don't need all that many different tests. The kinds of things that can be measured well by testing aren't unique to one college or one company or one job or whatever. Test optimization runs into pretty severe diminishing returns.

And yet, almost nobody outside of the profession is aware of this major discovery of 20th Century social science. Why not?

One reason is that a large number of the people who understand this -- professional I/O psychologists -- are employed to come up with new tests that will do what 100 years of I/O research says can't be done.

Hey, it's a living.
------------------------

* By the way, once every century the learned dons of All Souls College stage a drunken torchlight parade while singing the "Mallard Song" about a giant duck devoured by their predecessors in the 15th Century. They are led by the specially chosen "Lord Mallard" (typically a distinguished classicist or a future Archbishop of Canterbury) who waves a duck on a pole. The last Hunting the Mallard was in 2001, while the next one will be in 2101.

As I mentioned in the post below, the Atlantic has a long article about Silicon Valley start-ups attempting to use Big Data for job hiring testing. In the post-WWII era, the article says, American corporations did lots of testing of job applicants, but that fell out of fashion because science. Or something. So for the last generation, firms have mostly relied upon resumes and interviews and tried to avoid putting much in writing where it can get subpoenaed.

But now in 2013, instead of giant corporations like P&G doing the testing themselves, it's going to be done for the giant corporations by cool little start-ups with cute names like Knack, Evolv, and Gild. So the New Testing won't be like the bad Old Testing of the 1950s when the racist, sexist white male power structure was building a giant middle class with secure jobs and pensions. Or something.

A reader writes:

Saw your post about pre-employment testing. It's been a while since I've had my head in the selection literature, but historically the predictive validity of various selection methodologies has tended to follow a consistent pattern. The most predictive procedures tend to be those that emphasize biographical data, primarily work history and education, with coefficients in the .5 range or even better. This is followed by formal testing, and this is where it gets a little tricky. Much of the early selection research was conducted or funded by the military, primarily the Navy through the NPRDC (now the NPRST). This is because the military can't spend a lot of time messing around trying to figure out what MOS people are suited for. Therefore, you have the two component tests you describe, an IQ-ish type of test and a group of aptitude-type tests. This kind of testing has relatively high predictive validities, almost as high as those for biographical data. The problem with both of these methodologies is disparate impact. Therefore, employers are going to great lengths to try to avoid correlations with prohibited dimensions. If you want to read something that strikes fear into the heart of HR managers, peruse section 5 of the Uniform Guidelines on Employee Selection Procedures.
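The reader's "coefficients in the .5 range" sound modest, but a validity of .5 is worth a lot once you're selecting from a big applicant pool. Here's a quick simulation (all numbers invented for illustration): generate standardized test scores and job performance correlated at roughly r = .5, hire the top quarter by test score, and compare the hired group's average performance to the pool's.

```python
import random
import statistics

random.seed(0)
r = 0.5  # assumed validity coefficient, per the ~.5 range mentioned above

# Simulate applicants: performance = r * test + noise, both standardized.
applicants = []
for _ in range(100_000):
    test = random.gauss(0, 1)
    perf = r * test + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    applicants.append((test, perf))

# Hire the top quarter of applicants by test score.
applicants.sort(key=lambda a: a[0], reverse=True)
hired = applicants[: len(applicants) // 4]

print(statistics.mean(p for _, p in applicants))  # near 0: the pool average
print(statistics.mean(p for _, p in hired))       # roughly 0.6 SD better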

The whole federal guideline related to job testing is 16,942 words long. From skimming it, I'd say that, yeah, you could get away with testing. After all, P&G still gives a "Reasoning Test" that looks much like the GMAT that MBA applicants take. But this Guideline is intended to make you think long and hard before trying it. Sure, P&G got their test validated, but then P&G is largely staffed by competent people hired in part through testing. Can your staff successfully jump through every hoop in the 16,942 words?

After all, your managers like interviewing. They each think that -- while everybody else is terrible at interviewing -- they are way above average at it. Interviewing lets them hire people they like. So why risk a federal lawsuit to make them hire people they feel less sympatico with just because they'd be better workers?

What you really have above is one methodology that screens for conscientiousness (work history) and one that screens for intelligence (tests). Privately, Industrial/Organizational Psychology types will tell me that these are really the only things that matter in selection. The rest is just BS.

The real scandal in employee selection is the almost zero predictive validity of interviews. No matter how they are constructed, they tend to contribute almost nothing to employee selection. Probably the best they do is identify obvious jerks, but even that is questionable. The reason we continue to do them is mostly cultural, I suppose. There is also a cult of "personality testing" inside of HR these days; the MBTI seems to be the favored tool, although there are others. This is obviously quite distinct from the AFQT-type testing referenced above.

To understand the HR field today, one has to realize it embodies two very different and often contrary functions: selection and compliance. The first area is dominated by I/O psych types and the second by lawyers. The academic side of HR, in particular university-based research into selection, is almost universally populated by the former. On the practitioner side, while most HR people are not lawyers, the legal issues tend to overwhelm everything. This has led to growth in formal structured procedures for all of these types of decisions and associated documentation requirements. Unfortunately, despite all of the attempts to create procedural fairness, the drift has been back into interview-type "fit" exercises, and hence what HR people call the "like-me" problem.

November 29, 2013

What happens when Big Data meets human resources? The emerging practice of "people analytics" is already transforming how employers hire, fire, and promote.

... By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential. “P&G picks its executive crop right out of college,” BusinessWeek noted in 1950, in the unmistakable patter of an age besotted with technocratic possibility. IQ tests, math tests, vocabulary tests, professional-aptitude tests, vocational-interest questionnaires, Rorschach tests, a host of other personality assessments, and even medical exams (who, after all, would want to hire a man who might die before the company’s investment in him was fully realized?)—all were used regularly by large companies in their quest to make the right hire.

Hilariously elaborate testing suites were fashionable in the immediate postwar era. Robert Heinlein's 1948 sci-fi juvenile Space Cadet begins with the hero undergoing a couple of days of extremely expensive testing to try to get into the Space Academy (rooms turn upside down, ringers try to provoke test-takers into fistfights, etc.). The actual astronaut applicant testing a decade later was even more convoluted than Heinlein had imagined.

The process didn’t end when somebody started work, either. In his classic 1956 cultural critique, The Organization Man, the business journalist William Whyte reported that about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles. “Should Jones be promoted or put on the shelf?” he wrote. “Once, the man’s superiors would have had to thresh this out among themselves; now they can check with psychologists to see what the tests say.”

Remarkably, this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,” Peter Cappelli told me—the days of testing replaced by a handful of ad hoc interviews, with the questions dreamed up on the fly. Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased. Instead, companies came to favor the more informal qualitative hiring practices that are still largely in place today.

But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.

Some were based on untested psychological theories. Others were originally designed to assess mental illness, and revealed nothing more than where subjects fell on a “normal” distribution of responses—which in some cases had been determined by testing a relatively small, unrepresentative group of people, such as college freshmen. When William Whyte administered a battery of tests to a group of corporate presidents, he found that not one of them scored in the “acceptable” range for hiring. Such assessments, he concluded, measured not potential but simply conformity. Some of them were highly intrusive, too, asking questions about personal habits, for instance, or parental affection.

Unsurprisingly, subjects didn’t like being so impersonally poked and prodded (sometimes literally).

Tom Wolfe's The Right Stuff has a very funny chapter about how around 1959 researchers went nuts with joy trying out any test they could think of on the initial astronaut applicants. The doctors were used to testing either sick people or average people, but here were hundreds of above-average test pilots and fighter aces willing to put up with anything to go into outer space. A radioactive enema test? Sure!

The federal government's 1960 Project Talent exam, a post-Sputnik study of 440,000 high school students, contained two dozen subtests and took two days to administer.

One discovery from all these massive exercises in social science was that you didn't actually need all these different kinds of tests. Some tests were just fashionable Freudian quackery. But lots of other tests all came up with reasonable but highly correlated results. Standard IQ-type tests would carry most of the load.

For example, the military expanded its hiring test from the four-part IQ-like AFQT to the ten-part ASVAB, but it continues to use the AFQT subset to eliminate applicants. The other six parts of the ASVAB superset are then used for placement: e.g., if you score well on the vehicle repair knowledge subtest you might find yourself fixing trucks. But even if you ace the auto repair subtest, you have to make the grade on the IQ-like AFQT core to be allowed to enlist.

I spent a couple of hours on the phone nine years ago with the retired head psychometrician of one of the major wings of the armed forces and he told me that the biggest discovery of his decades on the job was that g dominated practically anything else you could test for.

This finding actually took a lot of the fun out of psychometrics. You'd dream up some seemingly brilliant test to find the perfect fighter jock or cook or file clerk, but when you got done extracting the general factor of intelligence from the results, you'd find that all the customization you'd done for the job hadn't added much predictive value over that of the heavily g-loaded AFQT scores. It makes sense to test for how much applicants already know about flying planes or fixing engines, because the military can save time on training, and how much they've already learned likely says something about their motivation to learn more. But testing for specific potential hasn't worked out the way Heinlein expected. Instead, testing for g works, and other tests of potential haven't proven terribly helpful.
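A toy simulation shows the shape of the problem the psychometrician described. Suppose every subtest mostly measures g, job performance mostly depends on g, and your clever customized subtest taps a job-specific ability on top of that (all the loadings below are made up). Regressing performance on the g-loaded core alone, and then on the core plus the customized subtest, shows how little the customization buys:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumed toy model: performance depends mostly on g, slightly on a
# job-specific ability. The core battery is a reliable measure of g;
# the customized subtest mixes g with the job-specific ability.
g = rng.standard_normal(n)
spec = rng.standard_normal(n)
core = 0.95 * g + 0.31 * rng.standard_normal(n)   # AFQT-like g-loaded core
custom = 0.85 * g + 0.53 * spec                   # customized job subtest
perf = 0.60 * g + 0.10 * spec + 0.79 * rng.standard_normal(n)

def r_squared(predictors, y):
    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

base = r_squared([core], perf)
both = r_squared([core, custom], perf)
print(f"core alone R^2 = {base:.2f}, custom test adds only {both - base:.3f}")
```

Under these invented loadings, the g-loaded core explains about a third of the variance in performance, and the customized subtest adds only a few hundredths on top: severe diminishing returns, exactly because the custom test is itself mostly measuring g all over again.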

For all these reasons and more, the idea that hiring was a science fell out of favor.

Which mostly shows how fad driven corporate America is. Serious institutions like the military (AFQT) and Procter & Gamble still use IQ-type tests in hiring. Procter & Gamble provides a sample of its venerable Reasoning Test here. P&G paid a lot of money to validate that its Reasoning Test was correlated with on-the-job performance to get the EEOC off its back.

In contrast, the federal government developed a superb test battery in the 1970s for federal civil service hiring, the Professional and Administrative Career Examination (PACE), but the outgoing Carter Administration junked it in January 1981 because of disparate impact in the Luevano case. The Carter Administration promised that Real Soon Now it would replace PACE with a test that was equally valid at hiring competent government bureaucrats, but upon which blacks and Hispanics didn't score worse. That was 32 years ago.

Similarly, at the moderate-sized marketing research firm where I worked, initially they just gave Dr. Gerry Eskin's Advanced Quantitative Methods in Marketing Research 302 final exam from the U. of Iowa to each MBA who walked in the door looking for a job. It did a pretty good job at hiring good people. Eventually the company grew large enough that the EEOC noticed the hiring exam. Instead of ponying up the money to validate Eskin's exam, though, we just junked it and winged it after that, with less satisfactory results.

The turn against the postwar objective P&G-style testing hasn't made America more fair. Peck notes:

Perhaps the most widespread bias in hiring today cannot even be detected with the eye. In a recent survey of some 500 hiring managers, undertaken by the Corporate Executive Board, a research firm, 74 percent reported that their most recent hire had a personality “similar to mine.” Lauren Rivera, a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests. “The best way I could describe it,” one attorney told her, “is like if you were going on a date. You kind of know when there’s a match.” Asked to choose the most-promising candidates from a sheaf of fake résumés Rivera had prepared, a manager at one particularly buttoned-down investment bank told her, “I’d have to pick Blake and Sarah. With his lacrosse and her squash, they’d really get along [with the people] on the trading floor.” Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.

Funny how that works.

It's not a coincidence that when I read up on the history of psychometrics in the U.S. in the mid-20th Century, I found that an awful lot of breakthroughs took place at land grant colleges rather than at Harvard and Yale. People in places like Iowa City thought better objective testing was going to be better for people in Iowa. And they were largely right. Of course, we now know -- instinctively! -- that these midwestern methodologies were a giant conspiracy by the white male power structure. So today we fight the power by just hiring Harvard and Yale grads.

But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before. For better or worse, a new era of technocratic possibility has begun.

Consider Knack, a tiny start-up based in Silicon Valley. Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential.

Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test.

A lot of what Silicon Valley does these days is wheel re-invention. Nobody remembers the past because so much effort has been invested in distorting memories to validate current power arrangements, so a lot of things that are sold as technological breakthroughs never before possible are really just ways to get around government regulations that were imposed because they seemed like a good idea at the time.

For example, there are now a lot of Ride Sharing companies that you can hire via your smartphone to come pick you up and drive you somewhere. In other words, they are taxicab companies, but because they are High Tech and all that, they feel entitled to ignore all the expensive rules the government has piled on taxicab firms about how they have to take people in wheelchairs to South-Central.

Here's a guess: much of what these Silicon Valley startups measure that's actually useful is good old IQ. And it will have the same disparate impact problems as everything else did.

... Because the algorithmic assessment of workers’ potential is so new, not much hard data yet exist demonstrating its effectiveness.

Actually, the military has been measuring job performance versus test scores for 60 years. Many of the results are available online, typically in Rand Corp. documents. But who is interested in that?

There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people. The distance an employee lives from work, for instance, is never factored into the score given each applicant, although it is reported to some clients. That’s because different neighborhoods and towns can have different racial profiles, which means that scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.” Citing client confidentiality, he wouldn’t say more.

That's what my marketing models professor at UCLA B-school said in 1982: on the hiring and insurance sides of the business, it's easy to come up with highly effective models of who you want and who you don't want if you are allowed to use race. But you aren't allowed to, so that's where the challenge is.

A long time ago, Americans thought that one of America's advantages was that we were pretty good at building and maintaining giant organizations like Procter & Gamble that just keep going decade after decade. The Motley Fool says:

Of all the Dow Jones Industrial Average components, Procter & Gamble (NYSE: PG) might stand out as being one of the most boring ...

But now we know that Americans are actually terrible at institutional maintenance and the only thing we are good at is creating tiny Silicon Valley start-ups with whimsical names. Thus, these little job applicant testing companies are the only hope big firms have of ever hiring anybody any good because it's impossible to come up with an effective system like P&G has. (Sarcasm alert.)

Jeff sent me an email with the above title and a link to a press release, “Nut consumption reduces risk of death,” which begins:

According to the largest study of its kind, people who ate a daily handful of nuts were 20 percent less likely to die from any cause over a 30-year period than those who didn’t consume nuts . . . Their report, published in the New England Journal of Medicine, contains further good news: The regular nut-eaters were found to be more slender than those who didn’t eat nuts, a finding that should alleviate fears that eating a lot of nuts will lead to overweight. . . .

For the new research, the scientists were able to tap databases from two well-known, ongoing observational studies that collect data on diet and other lifestyle factors and various health outcomes. The Nurses’ Health Study provided data on 76,464 women between 1980 and 2010, and the Health Professionals’ Follow-Up Study yielded data on 42,498 men from 1986 to 2010. . . .

Sophisticated data analysis methods were used to rule out other factors that might have accounted for the mortality benefits. For example, the researchers found that individuals who ate more nuts were leaner, less likely to smoke, and more likely to exercise, use multivitamin supplements, consume more fruits and vegetables, and drink more alcohol. However, analysis was able to isolate the association between nuts and mortality independently of these other factors. . . .

The authors noted that this large study cannot definitively prove cause and effect; nonetheless, the findings are strongly consistent with “a wealth of existing observational and clinical trial data to support health benefits of nut consumption on many chronic diseases.” . . .

The study was supported by National Institutes of Health and a research grant from the International Tree Nut Council Nutrition Research & Education Foundation.

Similarly, back in the mid-1990s, my doctor said my cholesterol was too high so I should take a statin. An affable detail man had just given him a box of free Mevacor pills. (Mevacor had been the first statin on the market when introduced in 1987.) So my doctor gave me a fistful and wrote a prescription. But I looked up statins on the new-fangled Internet and found that the hot new statin was Lipitor, which went on to become the biggest moneymaking pill in the world in the 2000s.

The most striking Lipitor study was one from Scandinavia that showed that among middle-aged men over a 5-year-period, the test group who took Lipitor had a 30% lower overall death rate than the control group. Unlike the nuts study, this was an actual experiment.

That seemed awfully convincing, but now it just seems too good to be true. A lot of those middle-aged deaths that didn't happen to the Lipitor takers didn't have much of anything to do with long-term blood chemistry, but were things like not driving your Saab into a fjord. How does Lipitor make you a safer driver?

I sort of presumed at the time that if they had taken out the noisy random deaths, that would have made the Lipitor Effect even more noticeable. But, of course, that's naive. The good folks at Pfizer would have made sure that calculation was tried, so I'm guessing that it came out in the opposite direction of the one I had assumed: the more types of death included, the better Lipitor looks. Apparently, guys who took Lipitor every day for five years were also good about not driving into fjords and not playing golf during lightning storms and not getting shot by the rare jealous Nordic husband or whatever. Perhaps it was easier to stay in the control group than in the test group?

Here’s how I would approach claims of massive reductions in overall deaths from consuming some food or medicine:

Rank order the causes of death by how plausible it is that they are linked to the food or medicine. For example:

1. Diabetes

2. Heart attacks

3. Strokes

4. Cancer

5. Genetic diseases

6. Car accidents

7. Drug overdoses

8. Homicides

9. Lightning strikes

If this nuts-save-your-life finding is valid, then the greatest effects should be found in causes of death near the top of the list (e.g., diabetes). But if it turns out that eating nuts only slightly reduces your chances of death from diabetes but makes you vastly less likely to be struck by lightning, then we've probably got a selection effect in which nut eaters are more careful people in general and thus don't play golf during thunderstorms, or whatever.
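You can watch this selection effect materialize out of nothing in a simulation. In the made-up model below, nuts do nothing at all; an unobserved "carefulness" trait just happens to drive both nut eating and not standing on golf courses in thunderstorms. The nut eaters nonetheless show a dramatically lower lightning death rate, while their diabetes rate is unchanged:

```python
import math
import random

random.seed(0)
n = 200_000

counts = [0, 0]                                   # [nut eaters, non-eaters]
deaths = {"diabetes": [0, 0], "lightning": [0, 0]}

for _ in range(n):
    careful = random.gauss(0, 1)   # unobserved "carefulness" trait
    grp = 0 if careful > 0 else 1  # careful people choose to eat nuts
    counts[grp] += 1
    # Nuts are causally inert here. Diabetes risk ignores carefulness;
    # lightning death is far likelier for the careless.
    if random.random() < 0.02:
        deaths["diabetes"][grp] += 1
    if random.random() < 0.002 * math.exp(-careful):
        deaths["lightning"][grp] += 1

for cause, (d_nut, d_rest) in deaths.items():
    ratio = (d_nut / counts[0]) / (d_rest / counts[1])
    print(f"{cause}: nut-eater risk ratio = {ratio:.2f}")
```

The diabetes ratio comes out near 1.0 while the lightning ratio comes out far below it, even though nuts did nothing: which is exactly the pattern that should make you suspect the nut eaters, not the nuts.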

One of the weirder status markers of the most exclusive colleges is that undergraduates normally can't major in business. West Hunter summarizes the distribution of majors nationally v. at Harvard:

Nationally [ 2009-2010], about 21% are business majors, 10% major in social sciences and history, 8% in “health professions and related programs”, 7% are educational majors, 6% psych majors, 5% in visual and performing arts, 5% in biological and biomedical sciences, 5% in “communication, journalism, and related programs”, 4% in engineering, 3% English and literature, 2% in computer and information sciences, 1.4% in physical sciences, 1% in mathematics and statistics. I haven’t mentioned everything.

Harvard [2009] is different. They had 758 kids majoring in economics, 495 in government, 306 in social studies, 290 in psychology, 247 in English, 236 in history, 158 in history and literature, 155 in neurobiology, 154 in molecular cell biology, 154 in sociology, and so forth.

A few colleges have done well for themselves having elite business schools for undergrads, such as Wharton at the U. of Pennsylvania and, in recent years, Stern at NYU. In general, though, letting undergrads study something directly applicable to getting a white-collar corporate job is considered déclassé. Studying business -- at least before your late 20s in MBA school -- is downscale.

Instead, economics has become the de facto business major at the Harvards of America. You're not dirtying your hands studying anything all that relevant to the vast majority of corporate jobs. Yet majoring in economics shows: Although I Am A Gentleman, I Am Also Interested in Money. So, it's the ideal major for the career path of Wall Street or consulting entry level analyst --> MBA School --> Big Money.

Something striking is how little relevance a strong knowledge of economics has for most jobs. As an old Econ major, I noticed this in the mid-1980s when I was in charge of introducing personal computers into the large marketing research company where I worked. I hired a PC tech guy who turned out to be notably smarter than me in sheer problem solving ability. But, like the Scarecrow in the Wizard of Oz, he didn't have a diploma. He'd enlisted in Admiral Rickover's nuclear Navy out of high school and spent six years minding the reactors on nuclear subs.

I quickly let him move up from fixing computers to playing a sizable role in tactics and strategy. That led to the one incident in which I noticed a major benefit from my having taken nine full semester courses in Econ. One day in 1987 my assistant proposed that we should stop buying PCs from Dell (or PC's Ltd. or whatever it was called way back then). Instead, we should buy the parts in bulk and hire workers to assemble them for us.

This was back before highly engineered laptops, when PCs were simply big metal boxes with lots of slots in them, so assembling the 100 or so parts didn't require any impressive machine tools. Lots of hobbyists assembled their own PCs back then, and there were opportunities for hot rodding. For example, I got an early IBM PC AT. The CPU's clock speed was set at 6 megahertz, but I ordered a part that made it run at 8 megahertz and enjoyed 33% faster speeds.

I replied that PCs were a highly competitive business with no obvious barriers to entry, so it's precisely the fact that his plan wasn't ridiculous that meant that it most likely wasn't worth doing: competition would have driven the rate of return down to a level where we'd be indifferent between assembling PCs ourselves or having Dell assemble them for us. Management attention was our main limiting resource, so outsourcing to young Mr. Dell the obsessing over PC parts purchasing and assemblage was a task I was happy to pay him a small premium to do.

My colleague did not believe me, so I encouraged him to make up a spreadsheet and see for himself whether he could beat Dell's price by much. A couple of hours later he came back and said I was right: if we did everything perfectly, we'd only save $10 or $15 per PC, and we wouldn't do everything perfectly. So why bother?
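For flavor, here's roughly what that spreadsheet boiled down to, with entirely invented 1987-ish numbers (the real parts list and prices are long gone): total the bulk component costs, add assembly labor and an allowance for botched builds, and compare against the vendor's price.

```python
# Hypothetical numbers; the point is the structure of the comparison,
# not the 1987 prices themselves.
dell_price = 2000.00

parts_bulk = {              # assumed bulk component costs, in dollars
    "case_and_psu": 180.0,
    "motherboard_cpu": 650.0,
    "memory": 300.0,
    "hard_drive": 430.0,
    "floppy_drive": 90.0,
    "video_card": 120.0,
    "monitor_keyboard": 150.0,
}

assembly_labor = 35.0       # a few hours of a technician's time per box
rework_allowance = 0.02 * sum(parts_bulk.values())  # dead parts, botched builds

self_build = sum(parts_bulk.values()) + assembly_labor + rework_allowance
print(f"self-build: ${self_build:.2f}, vendor: ${dell_price:.2f}, "
      f"saving: ${dell_price - self_build:.2f}")
```

With these made-up numbers the margin comes out to a few dollars per machine, which is the shape of the answer my colleague's real spreadsheet gave: competition had already squeezed the assembler's margin down to roughly what the assembly would cost you to do yourself.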

So that was one instance in which my Econ major's knowledge of economic theory could have saved us two hours of work. But there were surprisingly few others. (Of course, this was in managing a high school grad who would have maxed out a Raven's Progressive Matrices IQ test. A lot of other economic theory he simply grasped without having studied it because it was obvious to him.)

Fortunately for the dynamism of American business, most econ majors never take the quietist lessons of econ theory to heart. While U. of Chicago economics professors are said to never bother to pick up $20 bills off the sidewalk because surely the efficient market would have done it already, most Harvard econ majors are blithely immune to econ's basic model of perfect competition reducing profits to the risk-adjusted interest rate.

Two questions:

1. What are the effects on the student of the upscale study path of econ -> business v. the downscale path of business -> econ?

2. What are the effects on America of a social structure in which econ majors make up a sizable fraction of the rich and connected?

I've never seen any academic studies of either question.

Off the top of my head, it would seem to make more sense when you are younger to study the narrower topics of business; then, if you wish, come back to academia once you have some real world experience and study the broader topics of economics.

I was an unusual student in that I went directly from undergraduate to MBA school in 1980. Something I noticed was that at age 21-23 I enjoyed the MBA school topics more than the undergrad topics because they felt more age-appropriate. As an undergrad econ major, I was supposed to worry about things like: What should the Fed do now? I had lots of opinions on the Fed, but I never felt like the Fed was on the verge of calling me up and asking me for advice, much less paying me money to tell them what to do. In B-School, however, we talked about case studies like: should a pet foods company bring out a line of gourmet refrigerated dog foods for the luxury market? * That seemed like the kind of topic that a company might plausibly pay me money in the next few years to advise them upon.

As for the current dominance of econ majors in our national lives, it wasn't always like this. As an econ major in the 1970s, there was a sense that we band of brothers, we happy few were taking on the palpable ignorance that had led to idiocies like the President declaring a national wage and price freeze on August 15, 1971.

The good news is that a country run by econ majors isn't likely to do that again.

The bad news is that certain fetishes of econ majors, such as an unthinking advocacy of free trade or open borders, become totemic status markers of having majored in Big Money Econ. Americans respect people with lots of money, and so they respect econ majors, which makes them unskeptical about ideas that econ majors absorbed during their impressionable late teenage years.

Hence the elite credulity about open borders is understandable: dismissing immigration restrictionists as ignoramuses who obviously haven't majored in econ does correlate at some level with making a lot of money on Wall Street.

But it's a helluva way to run a country.

* In case you are wondering about gourmet refrigerated dog foods, a little poking around on the Internet shows that they were not successful back in my day, but have over the last few years become an established category. I suspect a necessary condition was the development of stand-alone pet food stores and/or the small, cheap refrigerators that are now ubiquitous in retail outlets, typically for soda pop. The big problem I noticed in 1981 with the idea (besides the obvious questions about whether it's stupid and does your dog really care?) was that supermarkets would be reluctant to stock dog food in with refrigerated human food, while pet food aisles in supermarkets didn't have refrigerators.

As Dave Barry explained about ski jumping, "This exciting sport got its start as a symptom of mental illness in Northern climes such as Norway and Sweden, where it is cold and dark and there is very little to do except pay taxes." The centerpiece of the ancient "Wide World of Sports" opening (video above, 0:13 - 0:17) was a horrific ski jumping crash used to exemplify "the agony of defeat."

As I mentioned a while ago regarding urban bicycle riding, our society is generally progressing toward greater and greater safety, but some fields seem largely exempt from safety scrutiny for reasons of politics and fashion: cycling is one, and female sports are another.

Since there isn't much disparate treatment by sex left anymore (heck, the Summer Olympics have added women's boxing), in the usual You Go Girl media cheerleading we wind up with lots and lots of disparate impact articles about less than galvanizing topics such as the feminist crisis of women at Harvard Business School blowing off their homework to go on hot dates with eligible bachelors.

But, Olympic ski jumping really did have disparate treatment. Women weren't allowed to ski jump in the Winter Olympics ... until the 2014 Sochi games! The New York Times Magazine celebrates in the traditional Patriotic Feminist Chauvinist manner that should be familiar from all the other American women's Olympic team fads of recent decades. Mireille Silcoff's article focuses on the American superstar, Sarah Hendrickson (19 years old and 94 pounds):

First there was the struggle to make women’s ski jumping an Olympic sport. Now the American team just wants to win.

... It has been a decade-long fight to get women’s ski jumping into the Olympics — it was one of the last restricted winter sports — and [Sarah] Hendrickson’s outsize talent, a natural ability honed since age 7 that far surpasses that of most male jumpers, was like a banner to parade at the opening ceremony. You said we can’t? Well, look at this.

... The resistance to women in ski jumping makes frustratingly little sense when you recognize what female jumpers can do. “The gap between men and women in ski jumping is so small, you can’t believe it,” Bernardi told me. “Every year, with girls like Sarah, the girls are flying better, better, better.” Today, he said, there might not actually be another sport in which, at the superelite level, the differences in male and female capability are so minimal. “Maybe there is something with horses? Equestre? But even there it is half the horse.”

Van said she believed that this is also the reason women have been excluded from the top competitions in the sport for so long. “If women can jump as far as men, what does that do to the extreme value of this sport?” she asks. “I think we scared the ski-jumping [establishment].”

Ski-jumping is part of Nordic skiing (as opposed to Alpine skiing), and we all know how male chauvinist Nordic cultures are.

There is so little difference between women and men in the sport because lightness and technique count just as much as muscle and power.

But, if you actually read the article, it turns subversive, although, judging from the reader comments, almost nobody notices that.

The story of America's best hope for gold, 19-year-old, 94-pound Sarah Hendrickson, turns out to be a horrorshow.

Hendrickson is recovering from a training crash in Germany. She gives the reporter her smart phone with the video of her crash on it, but won't watch it herself.

On the couch next to me, Hendrickson clutched her cardigan sleeves, yawning loudly to miss the horrible clatter of her 94-pound body landing at more than 70 miles an hour on the ground where the jump hill flattens, the area that means you’ve gone too far.

Hendrickson’s surgeon calls her knee injury “the terrible triad, plus one”: the A.C.L. ruptured completely, the M.C.L. pulled right off the tibia and severe damage was done to both the lateral and medial meniscus.

So, one problem with ski jumping for women is that, as Bob Dylan cruelly said, "But you break just like a little girl."

Within a few months in 2013, five top female ski jumpers suffered serious knee injuries and had to withdraw for long recovery periods, thus putting their good chances at the Olympics in Sochi at risk. On 12 January 2013, Daniela Iraschko, the 2011 World Champion, fell in Hinterzarten and withdrew,[14] Anja Tepeš suffered a serious injury on 17 March in Oslo,[15] 2013 Cup de France winner Espiau suffered a knee injury in June[16] and on 12 August 2013 Alexandra Pretorius, two-time women's Grand Prix winner, suffered a serious knee injury in Courchevel.[17] On 21 August 2013, Sarah Hendrickson, the 2013 World Champion, suffered knee ligament damage in Oberstdorf.[18]

But there's more than just crashes:

Increasingly, women are prioritizing lightness as well. ... as the reedlike Hendrickson explained to me, it’s only a matter of time before extreme skinniness becomes the norm on the women’s side as well.

Hendrickson separated a small portion of her tofu curry from the rest and cut it into pea-size pieces. It was hard to tell whether she was dividing her lunch to encourage herself to eat more or less. “I don’t like the feeling of being full,” she said. “I hate it.” She ate the cut-up pieces, then asked to take her soup, rice and remaining tofu home in a doggie bag. She looked at my nearly cleaned plate and asked whether I wanted a doggie bag too, as if the few morsels left could possibly make for a meal.

Earlier in the season, I watched her lose her usual composure in a hotel lobby when she realized that Bernardi hadn’t told her she could eat dinner early. “You said 8, but I heard some teams got to eat at 6!” she said, stamping a bunny-slippered foot. “You know I hate eating late! You know I never eat late!”

Since 2004, Federation Internationale de Ski has implemented rules to address concerns about eating disorders among [male] ski jumpers. ...

Hendrickson’s coaches had been concerned enough about her strength to ask her to build “a little more body mass.” She was encouraged to begin eating snacks before bed, and they also wanted her to drink protein shakes. ...

Throughout this first week back in training in Park City, her teammates suggested that Hendrickson’s rise was causing tension among them. “Sarah’s different than before,” Hughes said. ... And then we have to go through all this stuff on the team, like, ‘Is Sarah happy today, or is she going to start screaming?’”

“Let’s just put it this way, I know I get cranky when I am under a certain weight,” Jerome said. “And with Sarah, people are just walking on eggshells.” ...

Hendrickson’s biggest obstacle [to recovery from her injuries] now, she said, is strength. “I really need to work on eating enough, even if, because I am not as active, my mind is kind of like, ‘Well, you don’t need food,’ or, ‘I’m not hungry.’ So that’s one of my battles — I just have to eat.”

For the first six weeks, a physiotherapist brought Hendrickson a smoothie every day at 3 p.m. “And I don’t know what gets put in these smoothies,” she said, laughing. “Because if I made them, they’d probably have half the calories.”

And then there's this postscript to the article:

Postscript: November 22, 2013

Coach Paolo Bernardi quit the United States women’s ski jumping team on Thursday after this article went to press. “I resign for personal reasons, and it was a hard decision,” Bernardi wrote on his Facebook page, in a post that has since been deleted. “... I hope to find another team soon that can give me the motivation to start again.” In an email to The Times, Bernardi said the reason for his quitting was “deeper” than only Sarah Hendrickson’s setback, but “of course has a lot to do with Sarah.”

If you read the article closely with an open mind, you might come to the conclusion that the conventional wisdom is wrong: Combine the brutal injuries with ski jumping's incentives to become anorexic and this sounds like a terrible sport to encourage American girls to take up.

In fact, a remarkable percentage of the best New York Times articles are like that. The NYT employs smart, hardworking reporters. If you read their output closely enough, they often wind up undermining The Narrative.

Perhaps The Hunger Games works best as an allegorical critique of poor dumb Red State provincials volunteering to serve in the Capitol’s wars without even getting a cut of the Beltway’s black-budget contracts.

November 25, 2013

Can we just exhaust all epistemic uncertainties right now and be done with it? Thumbing through my History of Western Philosophy, here is a crib sheet for the benefit of the Reality-based Community:

pre-Socratic: The knockout game is impossible because a fist would have to traverse an infinite number of infinitesimally small spaces just to reach a head.

Socratic: Those who admit they know nothing about the knockout game are wiser than those who think they do.

Aristotelian: No man can be called happy until he has died never having suffered the knockout game.

Scholastic: The knockout game is mentioned in neither the Bible nor the Greek philosophers [sic- bad, second-hand translation from Arabic of a bad translation from the Greek]

Descartes: All I can know for certain is that I am thinking of the knockout game.

Leibniz: If the knockout game were real, this would not be the best of all possible worlds.

Hume: That getting punched in the head, blacking out, and hitting the concrete have always tended to follow one another in the past does not mean we ever have grounds to believe that getting punched in the head causes one to black out and hit the concrete (this one's for you, Yglesias!)

Kant: It is never permissible to lie, even if it is to misdirect knockout game players from their intended victim.

Hegel: The knockout game is the necessary antithesis to the Trayvon Martin shooting's thesis. Their synthesis will advance the World Spirit.

Nietzsche: All higher culture is based on the knockout game.

William James: The point is not whether the knockout game is good or bad, but is it useful?

Heidegger: To be a victim of the knockout game is to experience true dasein by being thrown into the world and having one's crushed, bleeding temple be finally ready-at-hand instead of just present-at-hand.

With crime rates in the news again, let me just quote my April 2013 Taki's article on the data for anybody looking for an authoritative source for use in on-line debates:

At present, the best source for Obama Administration data on homicides by race is a 2011 PDF by Alexia Cooper and Erica L. Smith of the federal Bureau of Justice Statistics: “Homicide Trends in the United States, 1980-2008.” ... From page 2:

Based on available data from 1980 to 2008—

Blacks were disproportionately represented as both homicide victims and offenders. ... The offending rate for blacks (34.4 per 100,000) was almost 8 times higher than the rate for whites (4.5 per 100,000).
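The "almost 8 times" characterization can be checked directly against the two rates quoted in the BJS passage above:

```python
# Ratio check on the two BJS offending rates quoted above.
black_rate = 34.4  # offenders per 100,000 (BJS, 1980-2008)
white_rate = 4.5   # offenders per 100,000 (BJS, 1980-2008)

ratio = black_rate / white_rate
print(f"{ratio:.1f}x")  # -> 7.6x, consistent with "almost 8 times higher"
```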

Violent and property crime rates rose for U.S. residents in 2012, the Bureau of Justice Statistics (BJS) announced today. These estimates are based on data from the annual National Crime Victimization Survey (NCVS) which has collected information from victims of crime age 12 or older since 1973.

It's not a huge change, but it's unsettling after we were getting used to the good news on crime during the first couple of years of the recession. You can read my discussion of the numbers here.

I remember that in the summer of 2011, a story about a crowd of teenagers at the Wisconsin State Fair randomly attacking fairgoers went viral as a sign of a burgeoning race war. The Milwaukee Journal Sentinel fanned the flames, calling the teenagers "rampaging youths" who caused "mob-like disturbances":

"Then around the closing time of 11 p.m., witnesses told the Journal Sentinel, dozens to hundreds of black youths attacked white people as they left the fair, punching and kicking people and shaking and pounding on their vehicles."

"Dozens to hundreds"? When witnesses can't differentiate between 24 and 100, should we really rely on them to speculate whether a crime was racially motivated?

In other words when the number of attackers is too many to count, we shouldn't listen to the victims and other witnesses. What do they know? Are you going to trust people with concussions? They're just wimps who got beat up. They're losers with memory loss. They probably had it coming.

One of the reasons the story gained so much traction could have stemmed from the fact that Milwaukee is the most segregated city in the country, and it validated white residents' fear that their black neighbors are dangerous.

Like I said, those racists had it coming.

Now, the false trend story of black mob violence has cropped up again, as it seems to do annually, in conservative media outlets. (McKay Coppins wrote about this phenomenon in BuzzFeed last year.) The new scare is the "knockout game," in which black youths supposedly attack innocent people just for fun.

It's a respectable story now, finally, because it recently happened to some New York City Jews, and Jewish leaders are allowed to complain about hate crimes against their people.

Conservative pundits decry the MSM for suffering from political correctness and whitewashing crimes perpetrated by black people, but a more reasonable explanation for why most media outlets aren't devoting round-the-clock coverage to the knockout game is that—sorry, Sean Hannity—there is no hard data showing that it's a trend.

An important clarification: the game definitely exists, and has been around for at least a couple of years.

It's not news if it's not a trend. If an earthquake levels San Francisco tomorrow, we shouldn't cover that because that's not a story because it's not a trend. Earthquakes are just something random that happens. They aren't increasing so you shouldn't notice them.

I'm not claiming the game doesn't exist. But the idea that it's reached epidemic levels, or that it's only being played by young black people, is a fallacy. As Alan Noble convincingly writes, "Analyzing data is not as simple as watching some YouTube videos and Googling 'knockout game.'" And when it comes to the knockout game's supposed popularity, the data is almost entirely anecdotal:

Here’s the fascinating thing about this “spreading” trend: nobody seems to have any evidence that it’s spreading, or that it’s new, or that it’s racially motivated, or that black youths are the ones typically responsible, or that whites are typically targeted. This hasn’t stopped Mark Steyn, Thomas Sowell, and Matt Walsh from describing this specifically as a crime committed by blacks against whites, CNN from claiming that it is “spreading,” or Alec Torres at NRO from saying it is “evidently increasing [in] popularity.”

This is precisely the type of story meant to animate the deepest recesses of our lizard brains—"Danger lurks around every corner! Identify your enemy!" At the epicenter of this narrative is Colin Flaherty, a writer for WorldNetDaily who probably has a Google alert set up for "black suspect." He's made it his life's work to report any single crime perpetrated by a black person in the U.S. against a white person. In a recent blog post, he lists as evidence six separate crimes in Philadelphia over the course of two years, which share nothing in similarity except for the fact that they involved black people.

Imagine if another national "journalist" started doing the same for, say, any crime committed in Alabama, or any arson charge in the country. People would start to think Alabama was going through a crime epidemic, or that arson was becoming all the rage with criminals.

Like the white-racists-are-burning-down-black-churches-in-Alabama quasi-hoax that was a huge respectable story in the 1990s? (What happened then was this: there are a lot of churches in America, many of them closed most of the week, more than a few of them more or less abandoned. And every year hundreds of churches across the country catch fire, more than a few due to arson. Whether this arson is more for functional reasons [e.g., nobody around most of the time], or because churches attract firebugs for psychological reasons [flames of hell?], or because some financially failing ministers unleash a little Protestant Lightning to collect fire insurance, is unknown. What happened was that the national media started paying selective attention to black churches being subject to arson, and soon we had a national crisis on our hands.)

That would be ridiculous, because it's ridiculous to assume that a few unrelated counts of arson make arson an epidemic. But when you inject race into the equation, it conveniently aligns with the assumptions of people who happen to be racist. That's the sort of twisted logic that justifies why more than half of the U.S. prison population is made up by black and Hispanic people, even though they comprise a quarter of the total population.

Crime happens to every type of person, and is perpetrated by every type of person. What makes the false narrative of the knockout game—or any "black mob violence" story—crop up every year is the fact that some people will always believe the color of someone's skin predisposes him to commit a crime. When a few YouTube videos are able to convince terrified white folks that young black people are dangerous, they may as well assume that all cats can play the keyboard.

Seriously, as I've been pointing out for several years, the spread of video recording technology is having multiple effects on crime and our perceptions of crime.

It encourages some idiots to commit more crimes so they can post them on World Star Hip Hop for their idiot friends to watch.

But the Surveillance Society also works to discourage crimes. Recall the 2011 bus shooting in Philadelphia that was recorded on multiple newly installed high-definition video cameras. The half dozen or so perps are readily distinguishable. In the long run, that level of surveillance ought to discourage crime as the lesson sinks in that you can't intimidate video cameras into not testifying against you in court.

But mostly, the spread of video just makes it harder for the media to limit our awareness of what really goes on in our country.

The overarching trend is that the spread of readily accessible information increasingly converts the prestige press into Gatekeepers who see their job as preventing the public from engaging in acts of pattern recognition.

True story. A couple of years back, I was walking home at night on North Capitol Street here in Washington, D.C., when two dudes randomly assaulted me before running away without stealing anything. At the time, I didn't think it was all that strange—I've lived in urban areas all my life, and plenty of people I know have been victims of anonymous street crime. The good news is that urban crime rates have been trending downward since I've been about 9 years old, so we're making important progress in this regard.

The weird thing was that after I blogged briefly about this, a number of conservative bloggers, particularly those of a racist bent [me, of course], decided that this wasn't just one of many random acts of criminality that occur in the big city. No! It was an instance of "Knockout King," which I suppose was the 2011 version of 2013's more robust Knockout Game white racial panic.

But to be clear about something—insofar as there's supposedly a "game" here where the contestant tries to knock someone out with one punch, that absolutely isn't what happened. I was knocked down, but definitely not out, and then after that I got kicked a bunch of times. If you're familiar with the phrase "don't kick a man while he's down," take note—it really hurts quite a bit to be kicked while you're down. In fact, this substantial deviation from the "rules" of the "game" is a lot of what made getting violently assaulted for no reason such a physically unpleasant experience. ...

People shouldn't minimize these concerns about urban violence, but it accomplishes nothing in terms of tackling them to concoct weird trends and games out of thin air.

I'm struck by how our society wants victims of black-on-white violence to play the macho tough guy in public, as Yglesias tries to do here, or as journalist Brian Beutler (who was shot during a robbery while walking with a journalist named Matthew) does here.

Being a crime victim is, among other things, being psychologically assaulted.

Beyond physical injuries, well, I've never been the victim of street violence, but judging from the psychological trauma I've felt merely from being the victim of burglars -- the reminder of one's own insecurity, the insult to one's self-respect -- that aspect of crime shouldn't be overlooked. And being punched and kicked by strangers is far worse.

Our culture has made much progress in better psychological treatment of women who are raped -- e.g., providing female cops to interview the victim, counseling, support groups, and so forth. It helps crime victims to have their culture acknowledge the terrible thing done to them, and that it wasn't their fault.

But this balm is selectively dispensed upon our culture's usual Who? Whom? lines.

For youngish white male victims of black violence, the media's message is: Don't be a wimp. Walk it off, dude. Don't go crying about how it makes you feel. It's not part of a larger pattern, it's just something that happened to you personally. Nobody else cares about what happened to you because it's not a Thing we are supposed to talk about on TV, it's just your problem. Remember, what does not kill you makes you stronger, so, shut up and deal with it.

November 24, 2013

Economist Paul Collier, CBE, co-director of the Centre for the Study of African Economies at Oxford, writes in the New Statesman:

As part of my research, I have come up with ten building blocks needed for reasoned analysis of migration. Some are straightforward; others are analytically tricky and you will need to chew on them. Indeed – with apologies for a self-serving remark – you will need to read the book.

Block 1 Around 40 per cent of the population of poor countries say that they would emigrate if they could. There is evidence that suggests this figure is not a wild exaggeration of how people would behave. If migration happened on anything approaching this scale, the host societies would suffer substantial reductions in living standards. Hence, in attractive countries, immigration controls are essential.

Block 2 Diasporas accelerate migration. ... These links cut the costs of migration and so fuel it. As a result, while diasporas are growing, migration is accelerating.

Block 3 Most immigrants prefer to retain their own culture and hence to cluster together. This reduces the speed at which diasporas are absorbed into the general population. The slower the rate at which they are absorbed, the lower the rate of immigration that is compatible with stable diasporas and migration. By design, absorption is slower with multicultural policies than with assimilative policies.

Block 4 Migration from poor countries to rich ones is driven by the wide gap in income between them. ... Migrants are escaping the consequences of their systems but usually bring their culture with them.

Block 5 In economic terms, migrants are the principal beneficiaries of migration but many suffer a wrenching psychological shock. ...

Block 6 Because migration is costly, migrants are not among the poorest people in their home countries. The effect on those left behind depends ultimately on whether emigrants speed political and social change back home or slow it down. A modest rate of emigration, as experienced by China and India, helps, especially if many migrants return home. However, an exodus of the young and skilled – as suffered by Haiti, for example – causes a haemorrhage that traps the society in poverty.

Block 7 In high-income societies, the effect of immigration on the average incomes of the indigenous population is trivial.

But, what about the costs to the indigenous population? What everybody is interested in is not incomes or costs, but their net: standard of living.

Block 8 The social effects of immigration outweigh the economic, so they should be the main criteria for policy. These effects come from diversity. Diversity increases variety and this widening of choices and horizons is a social gain.
Yet diversity also potentially jeopardises co-operation and generosity. Co-operation rests on co-ordination games that support both the provision of public goods and myriad socially enforced conventions. Generosity rests on a widespread sense of mutual regard that supports welfare systems. Both public goods and welfare systems benefit the indigenous poor, which means they are the group most at risk of loss. As diversity increases, the additional benefits of variety get smaller, whereas the risks to co-operation and generosity get greater. ...

Block 9 The control of immigration is a human right. The group instinct to defend territory is common throughout the animal kingdom; it is likely to be even more fundamental than the individual right to property. ... It sometimes makes sense to grant the right to migrate on a reciprocal basis. Thousands of French people want to live in Britain, while thousands of Britons want to live in France.

Block 10 Migration is not an inevitable consequence of globalisation. The vast expansion in trade and capital flows among developed countries has coincided with a decline in migration between them.

These ten building blocks are not incontrovertible truths but the weight of evidence favours them to varying degrees. If your views on migration are incompatible with them, they rest on a base too fragile for passionate conviction.

Read the whole thing there. So far, after three days up, this long, important article has a grand total of 15 comments.

The middle class is doing well in Silicon Valley, if you define the middle class to be people like my old Rice U. roommate Fritz, an engineer and former U.S. Navy officer who is head of quality control for a Silicon Valley firm that makes pacemakers and other more advanced life-and-death medical devices. Back in the 1990s, he and his wife tired of their 900 square foot house in the Valley, so they moved their family to Half Moon Bay on the foggy Pacific. Fritz does the long commute over the Santa Cruz Mountains. It's a fine middle class life, if you assume that the middle class starts at about Rice STEM grads and goes up from there.

Here's the Google Wallet FAQ. From it: "You will need to have (or sign up for) Google Wallet to send or receive money. If you have ever purchased anything on Google Play, then you most likely already have a Google Wallet. If you do not yet have a Google Wallet, don’t worry, the process is simple: go to wallet.google.com and follow the steps." You probably already have a Google ID and password, which Google Wallet uses, so signing up for Wallet is pretty painless.

You can put money into your Google Wallet Balance from your bank account and send it with no service fee.

Google Wallet works from both a website and a smartphone app (Android and iPhone -- the Google Wallet app is currently available only in the U.S., but the Google Wallet website can be used in 160 countries).

Or, once you sign up with Google Wallet, you can simply send money via credit card, bank transfer, or Wallet Balance as an attachment from Google's free Gmail email service. Here's how to do it.

(Non-tax deductible.)

Fourth: if you have a Wells Fargo bank account, you can transfer money to me (with no fees) via Wells Fargo SurePay. Just tell WF SurePay to send the money to my ancient AOL email address (steveslrATaol.com -- replace the AT with the usual @). (Non-tax deductible.)

Fifth: if you have a Chase bank account (or, theoretically, other bank accounts), you can transfer money to me (with no fees) via Chase QuickPay (FAQ). Just tell Chase QuickPay to send the money to my ancient AOL email address (steveslrATaol.com -- replace the AT with the usual @). If Chase asks for the name on my account, it's Steven Sailer with an n at the end of Steven. (Non-tax deductible.)

My Book:

"Steve Sailer gives us the real Barack Obama, who turns out to be very, very different - and much more interesting - than the bland healer/uniter image stitched together out of whole cloth this past six years by Obama's packager, David Axelrod. Making heavy use of Obama's own writings, which he admires for their literary artistry, Sailer gives the deepest insights I have yet seen into Obama's lifelong obsession with 'race and inheritance,' and rounds off his brilliant character portrait with speculations on how Obama's personality might play out in the Presidency." - John Derbyshire Author, "Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics" Click on the image above to buy my book, a reader's guide to the new President's autobiography.