If you listen to Bruce Bueno de Mesquita, and a lot of people don’t, he’ll claim that mathematics can tell you the future. In fact, the professor says that a computer model he built and has perfected over the last 25 years can predict the outcome of virtually any international conflict, provided the basic input is accurate. What’s more, his predictions are alarmingly specific. His fans include at least one current presidential hopeful, a gaggle of Fortune 500 companies, the CIA, and the Department of Defense.

The debate over the definition of beauty has been waged by both scientists and philosophers for centuries. We tested the idea that a facial configuration close to the population mean is fundamental to attractiveness.

First, we digitized images of faces of male and female college students (i.e., transformed the facial images into little dots of lightness and darkness called "pixels"). Each face is represented by a matrix of pixel values that can be mathematically averaged with the matrices of other faces. Once the faces are digitized and averaged together, we can turn the averaged pixel values back into an image and have the composite faces rated for attractiveness.

College students rated the male and female composite faces as significantly higher in attractiveness than the individual faces used to create them, provided the composites contained at least 16 different faces. Thus, averaged faces are attractive. Note that when we use the word "average," we mean the arithmetical mean, not an average-looking person. If, for example, you take a female composite (averaged) face made of 32 different faces and overlay it on the face of an extremely attractive female model, the two images line up almost perfectly, indicating that the model's facial configuration is very similar to the composite's facial configuration.

[...] we view averageness as fundamental and necessary to facial attractiveness. Averageness is not the only component of attractiveness, but without it, no face will be attractive.
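The digitize-and-average step is simple enough to sketch in a few lines (a toy illustration with NumPy arrays standing in for aligned greyscale images, not the researchers' actual code):

```python
import numpy as np

def composite_face(faces):
    """Average a list of digitized faces (2-D arrays of pixel values)."""
    stack = np.stack(faces)        # shape: (n_faces, height, width)
    return stack.mean(axis=0)      # pixel-wise arithmetic mean

# Three toy 2x2 "faces": each pixel of the composite is the mean of the
# corresponding pixels in the individual faces.
faces = [np.array([[0, 255], [128, 64]]),
         np.array([[64, 191], [128, 0]]),
         np.array([[128, 127], [128, 128]])]
print(composite_face(faces))       # the composite: values 64, 191, 128, 64
```

In the real study the faces are full-resolution images rather than 2x2 grids, but the arithmetic is exactly this pixel-wise mean.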

My two cents: what makes you ugly are extreme characteristics (e.g. a big nose or ears); averaging simply takes care of these 'large errors'. The same principle is behind the frequently superior performance of composite forecasts (e.g. of economic variables), where the arithmetic mean of a number of forecasts is often more accurate than any of the individual component forecasts.
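The forecast-combination analogy is easy to verify by simulation (made-up numbers: sixteen unbiased forecasters with independent errors):

```python
import random

random.seed(1)
truth = 2.0                       # the quantity being forecast
n_forecasters, n_rounds = 16, 1000

indiv_err = mean_err = 0.0
for _ in range(n_rounds):
    forecasts = [truth + random.gauss(0, 1) for _ in range(n_forecasters)]
    indiv_err += abs(forecasts[0] - truth)                    # one forecaster
    mean_err += abs(sum(forecasts) / n_forecasters - truth)   # the composite

print(indiv_err / n_rounds, mean_err / n_rounds)
```

With independent errors, the composite's typical error is about a quarter of an individual's - it shrinks like one over the square root of the number of forecasters.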

THE current tempest over whether Toyota is hypocritical in selling the Prius while at the same time signing up with Ford, GM, and Chrysler in opposing a Senate bill mandating higher fuel mileage standards illustrates the dilemma of so-called "corporate social responsibility." Toyota isn't being hypocritical at all. Toyota isn't even a person. The company exists to maximize shareholder value [...]

Presumably Toyota invented the Prius to fulfill an important and potentially growing market niche in cars that consumed less energy. As such, it was being neither socially responsible nor irresponsible; it was simply responding to market demands. In making a political decision to join with the Big Three in opposing higher fuel standards, Toyota was also acting as a profit-maximiser [...]

The problem is that Toyota, and other companies that venture into politics, are undermining the democratic process. Most people are not just consumers and investors. They're also citizens, who have citizen values—including saving the planet from global warming. Toyota has every right to respond to the part of our heads that corresponds to consumer and investor values, but companies have no ethical grounds for entering into the democratic realm, which should be reserved for citizen values. If corporate social responsibility means anything at all, it should mean refraining from corrupting the political process.

There are three issues here:

1) Given that they are the shareholders' agents, are CEOs justified in straying from the goal of profit maximisation?

2) Companies (and the rich in general) have a much louder 'voice' (through their ability to fund political campaigns, their control of the media, their better information and contacts, etc.) when it comes to government policy and setting social priorities. Where should we draw the line? Would it be acceptable, say, for the big auto-makers to collude to decrease emissions? Do companies have the right to have their own social policies when they do not have a democratic mandate?

3) What is the effect of corporate social responsibility and related activities (e.g. charity) on the equilibrium level of public good provision? In other words, and to turn the question as it is usually posed on its head, does private giving crowd out public spending?

A reader named Warren Smith informs me of an Australian TV commercial (which you can watch on YouTube), in which two fashion models have the following conversation:

Model 1: But if quantum mechanics isn’t physics in the usual sense — if it’s not about matter, or energy, or waves — then what is it about?

Model 2: Well, from my perspective, it’s about information, probabilities, and observables, and how they relate to each other.

Model 1: That’s interesting!

The commercial then flashes the tagline “A more intelligent model,” followed by a picture of a Ricoh printer.

More intelligent, or simply more shameless? Ladies and gentlemen of the jury, allow me to quote from Lecture 9 of my Quantum Computing Since Democritus notes:

But if quantum mechanics isn’t physics in the usual sense — if it’s not about matter, or energy, or waves, or particles — then what is it about? From my perspective, it’s about information and probabilities and observables, and how they relate to each other.

Postscript: Responding to his (many, many) comments, Scott stumbles upon two fundamental truths that many a blogger will relate to:

The longer I blog, the more I despair of ever achieving my central goal in life, namely for everyone to like me.

The longer I blog, the more I despair of ever achieving my secondary goal in life, namely for everyone to understand me.

Burning our money, in a venomous post with a number of ridiculous assertions, asks whether immigrants have made 'us' (indigenous Britons) richer:

Has it (immigration) made us richer?

* No: overall GDP per head has been little changed by immigration

The most widely quoted study is still the one published by the highly respected National Institute for Economic and Social Research (NIESR) last year (and see their Lords submission here). They reckoned that immigration between 1997 and 2005 had raised GDP by 3.1%. But since it had increased the population by 3.8%, GDP per head had actually fallen.

Why might that be?

First, because many of these new immigrants take low-skilled jobs in which they produce less than the average UK worker. High-productivity investment-banking immigrants are very much the exception.

Second, they displace indigenous workers who join the dole queue (e.g. see this blog on the problems in Slough, where hungry incomers from Eastern Europe consigned existing Pakistani immigrants to an 18% fall in their employment rate in three years).

I won't question the facts he quotes or address the morality of the issue (as an immigrant, I suffer from many defects including attention deficit disorder). But this guy can't even get his algebra right.

He presents the fall in GDP per capita that is attributable to immigration as proof that immigration does not make us richer. But, of course, it is perfectly conceivable for immigration to be making EVERYONE better off (both the indigenous population and the immigrants) while leading to a lower GDP per capita. How?

Let's say what's-his-name is the sole inhabitant of Britain and he earns (produces) £100 a year. GDP per capita in Britain in this scenario is £100.

Now let's say I come to live in Britain, from a country where I was earning £20 a year, and I now earn (produce) £30 a year. Let's also say that, because of immigration, what's-his-name now earns £150. GDP per capita in Britain is now down to £90 from £100, yet we are all much, much better off.
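For the sceptics, here is the arithmetic of the two-person example, spelled out:

```python
before = {"whats_his_name": 100}                    # pre-immigration Britain
after = {"whats_his_name": 150, "immigrant": 30}    # post-immigration earnings

gdp_per_head_before = sum(before.values()) / len(before)   # 100.0
gdp_per_head_after = sum(after.values()) / len(after)      # 90.0

# Per-capita GDP falls...
assert gdp_per_head_after < gdp_per_head_before
# ...yet the original inhabitant earns more (150 vs 100)...
assert after["whats_his_name"] > before["whats_his_name"]
# ...and the immigrant earns more than the £20 he made at home.
assert after["immigrant"] > 20
```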

Ah, the wonderful things you learn in primary school when you are not busy harassing little Johnny immigrant...

Attent™ [...] tackles the problem of information overload in corporate email using psychological and economic principles from successful games. Attent creates a synthetic economy with a currency (Serios) that enables users to attach value to an outgoing email to signal importance. It gives recipients the ability to prioritize messages and a reserve of currency that they can use to signal importance of their messages to others.

1) Who is responsible for controlling the currency supply (and will internal corporate communications enter a boom and bust cycle?)

2) Can a goods market work without a credit market to enable capital to find its way to its most productive use?

3) Will the reduced informational cost faced by the readers compensate for the increased informational cost faced by the senders?

4) Is the system incentive compatible? Do senders' valuations correspond to those of the readers, even on average?

5) Does this formal system offer much at all? Don't we already have ways to show that a particular piece of information is important? (the word URGENT in the subject line, coupled with a couple of follow-up calls and a reputation for not sending useless email, does it for me)

In related news, in today’s America there are more World of Warcraft players than farmers.

You can say the Soviets were a tad too keen on planning, but you sure can't blame them for lack of optimism:

Soviet plan for WW3 nuclear attack unearthed by Nato historian: While most Western planners were convinced that any first strike would lead to total mutual destruction, the plan - written in matter-of-fact language - shows that Warsaw Pact nations presumed a massive ground war would follow nuclear attacks.

Mr Lunak described the military plans as “fairy tale” thinking based on World War II warfare: “They (the Soviets) really planned to send ground troops out in the field and have them fight for a few days until they died from radiation,” he said.

Emerging evidence—crunchy statistics from real data, not the mushy self-help stuff—supports the contention that giving stimulates prosperity, for both individuals and nations. Charity, it appears, can really make you rich.

[...] People do give more when they become richer—research has shown that a 10 percent increase in income stimulates giving by about 7 percent—but people also grow wealthier when they give more.

How do we know this? When two variables like giving and income are interrelated, economists use something called an instrumental variable to see which is pushing and which is pulling. In a nutshell, that means selecting something that’s closely related to donations but not directly to income, like volunteering. Volunteers tend to be money givers and vice versa because of the same charitable impulse. But income doesn’t always directly affect volunteering. (While people have differing amounts of money, they all have the same amount of time.)

We start by predicting how much money people would donate based on how much they volunteer, regardless of income. This projection essentially strips out the role of income in giving. Next, we see whether that predicted donation level correlates with income. If it does, and the correlation is positive, it means that giving pushes up income and not just vice versa.
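Here is a stylised version of that two-step procedure on simulated data (plain two-stage least squares; the variable names, coefficients, and data are invented, and this is not the authors' actual estimation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
volunteer = rng.normal(size=n)              # the instrument: charitable impulse
confound = rng.normal(size=n)               # unobserved factor driving both
giving = 1.0 * volunteer + 0.5 * confound + rng.normal(size=n)
income = 2.0 * giving + confound + rng.normal(size=n)   # true causal effect: 2.0

def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = slope(giving, income)               # biased upward by the confounder
giving_hat = slope(volunteer, giving) * volunteer   # stage 1: predicted giving
iv = slope(giving_hat, income)              # stage 2: recovers roughly 2.0
print(round(naive, 2), round(iv, 2))
```

The naive regression overstates the effect because the unobserved factor drives both giving and income; instrumenting giving with volunteering recovers the true coefficient.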

This is precisely what is found in the S.C.C.B.S. data: More giving doesn’t just correlate with higher income; it causes higher income. And not just a little. Imagine two families that are identical in size, age, race, education, religion, and politics. The only difference is that this year the first family gives away $100 more than the second. Based on my analysis of the S.C.C.B.S. survey, the first family will, on average, earn $375 more as a result of its generosity.

How can this be? Is it a statistical anomaly—or even a metaphysical phenomenon? While the link between giving and prosperity is not as mechanistic as returns on municipal bonds, there are some very earthbound explanations for it. Psychologists and neuroscientists have identified several ways that giving makes us more effective and successful. For example, new research from the University of Oregon finds that charity stimulates parts of the brain called the caudate nucleus and the nucleus accumbens, which are associated with meeting basic needs such as food and shelter—suggesting to the researchers that our brains know that giving is good for us. Experiments have also found that people are elevated by others into positions of leadership after they are witnessed behaving charitably.

The financial advantages of giving aren’t limited to individual givers. There is also evidence that donations push up income even more at the level of an entire nation’s economy. We can demonstrate this by looking at average household charity and per capita G.D.P. as they change over time. Charity and G.D.P. levels have moved together over the years. Corrected for inflation and population changes, U.S. government data show that G.D.P. per person in America has risen over the past 50 years by about 150 percent. At the same time, donated dollars per person have risen by about 190 percent.

These trends by themselves don’t tell us which force is pushing and which is pulling, however. To figure that out, we need to determine whether past values of one affect future values of the other. By using a method called vector autoregression, economists can see how changes in this year’s G.D.P. are affected by past values of both G.D.P. and charity. If an increase in last year’s charity levels correlates with a jump in this year’s G.D.P., it is logical to conclude that donating is stimulating the economy.

As in the case of individual income, the evidence is that increases in G.D.P. and giving mutually reinforce each other: Economic growth pushes up charitable giving, and charitable giving pushes up economic growth. Data from the Statistical Abstract of the United States and the Center on Philanthropy at Indiana University provide examples: In 2004, $100 in extra income per American drove about $1.47 in additional charitable giving per person. At the same time, $100 in giving stimulated more than $1,800 in increased G.D.P. This rate of social return shows that economic-multiplier effects are not limited to private investment. In short, giving plays a positive role in American economic growth. [...]
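The vector-autoregression idea can also be sketched on toy data - one equation of a VAR(1), estimated by least squares (simulated series with the mutual reinforcement built in; real studies use more lags and proper inference):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 500
gdp = np.zeros(T)
giving = np.zeros(T)
for t in range(1, T):                       # mutual reinforcement, by design
    gdp[t] = 0.5 * gdp[t - 1] + 0.3 * giving[t - 1] + rng.normal()
    giving[t] = 0.2 * gdp[t - 1] + 0.5 * giving[t - 1] + rng.normal()

# One equation of the VAR(1): regress this year's GDP on last year's GDP
# and last year's giving.
X = np.column_stack([np.ones(T - 1), gdp[:-1], giving[:-1]])
coef = np.linalg.lstsq(X, gdp[1:], rcond=None)[0]
print(coef)   # roughly [0, 0.5, 0.3]: lagged giving helps predict GDP
```

A positive coefficient on lagged giving is the pattern the article treats as giving stimulating the economy.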

Although I do have my reservations (I'm not sure volunteering is a terribly good instrument; people may expect their income to increase next year, so that their giving this year is affected by next year's income as well; giving to charity may be a proxy for being religious or having extensive social contacts; and so on), this is fascinating research.

The findings also seem to support my own pet theory of a causal link from more charity to higher GDP and a higher personal income for those giving to charity, operating partly through the resulting lower taxes and lower public provision of public goods (OK, I did not include the dynamic bit in the original post, but the story is essentially one of higher incomes under a lower tax regime, and it has been told myriad times before).

The news of a possible diagnostic test for Alzheimer’s disease is very interesting [...]

But let’s run some numbers. The test was 91% accurate when run on stored blood samples of people who were later checked for development of Alzheimer’s, which compared to the existing techniques is pretty good. Is it good enough for a diagnostic test, though? We’ll concentrate on the younger elderly, who would be most in the market for this test. The NIH estimates that about 5% of people from 65 to 74 have AD. According to the Census Bureau (pdf), we had 17.3 million people between those ages in 2000, and that’s expected to grow to almost 38 million in 2030. Let’s call it 20 million as a nice round number.

What if all 20 million had been tested with this new method? We’ll break that down into the two groups – the 1 million who are really going to get the disease and the 19 million who aren’t. When that latter group gets their results back, 17,290,000 people are going to be told, correctly, that they don’t seem to be on track to get Alzheimer’s. Unfortunately, because of that 91% accuracy rate, 1,710,000 people are going to be told, incorrectly, that they are. You can guess what this will do for their peace of mind. Note, also, that almost twice as many people have just been wrongly told that they’re getting Alzheimer’s as the total number of people who really will.

Meanwhile, the million people who really are in trouble are opening their envelopes, and 910,000 of them are getting the bad news. But 90,000 of them are being told, incorrectly, that they’re in good shape, and are in for a cruel time of it in the coming years.

The people who got the hard news are likely to want to know if that’s real or not, and many of them will take the test again just to be sure. But that’s not going to help; in fact, it’ll confuse things even more. If that whole cohort of 1.7 million people who were wrongly diagnosed as being at risk get re-tested, about 1.556 million of them will get a clean test this time. Now they have a dilemma – they’ve got one up and one down, and which one do you believe? Meanwhile, nearly 154,000 of them will get a second wrong diagnosis, and will be more sure than ever that they’re on the list for Alzheimer’s.

Meanwhile, if that list of 910,000 people who were correctly diagnosed as being at risk get re-tested, 828,000 of them will hear the bad news again and will (correctly) assume that they’re in trouble. But we’ve just added to the mixed-diagnosis crowd, because almost 82,000 people will be incorrectly given a clean result and won’t know what to believe.

I’ll assume that the people who got the clean test the first time will not be motivated to check again. So after two rounds of testing, we have 17.3 million people who’ve been correctly given a clean ticket, and 828,000 who’ve correctly been given the red flag. But we also have 154,000 people who aren’t going to get the disease but have been told twice that they will, 90,000 people who are going to get it but have been told that they aren’t, and over 1.6 million people who have been through a blender and don’t know anything more than when they started.

Sad but true: 91% is just not good enough for a diagnostic test.
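Lowe's first-round numbers check out; here they are reproduced from his stated assumptions (20 million tested, 5% prevalence, a test that is right 91% of the time either way):

```python
tested = 20_000_000
will_get_ad = int(tested * 0.05)        # 1,000,000
wont = tested - will_get_ad             # 19,000,000
accuracy = 0.91

true_negatives = int(wont * accuracy)            # correctly cleared
false_positives = wont - true_negatives          # wrongly told they're at risk
true_positives = int(will_get_ad * accuracy)     # correctly flagged
false_negatives = will_get_ad - true_positives   # wrongly cleared

print(true_negatives, false_positives, true_positives, false_negatives)
# 17290000 1710000 910000 90000 -- the false positives outnumber everyone
# who will actually get the disease, almost two to one.
```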

Yes, doctors need to be able to calculate the probability a patient has a given disease taking into account not only the accuracy of the test but also other available information (e.g., for random testing, prevalence of the disease amongst an age-group); and they need to communicate this information clearly to the patient. This misunderstanding is a real problem, and something that doctors and everyone else need to be educated about.

But to go from that to '91% is just not good enough' is a huge leap.

As long as there isn't a 100% accurate test, we can never be certain whether the disease is present or not; but the test does give a lot of relevant information, and we can lower the probability of a false alarm as much as we like by administering the test again and again (assuming the test's errors are independent across administrations).

If a disease affects 1 in 20 people and the test is 90% accurate, a 'positive' result means you have a mere 32% probability of actually being ill. If you administer the test a second time and get a second positive, this probability jumps to 81%, and it keeps rising with the number of positive results. For a negative test result, the news is even better: the first negative result translates to a 99.5% probability that you are healthy, and a second negative to a 99.9% probability that you are.

(18% of the people will get one positive and one negative, which simply means there is a 95% probability they are healthy - i.e. the same as before taking any tests. Instead of 'not knowing what to believe', as Lowe speculates, their doctors should simply explain that they need more testing if they want to push the accuracy of the standard, pre-test prediction (healthy) above 95%.)
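For those who want to play along, the Bayes-rule updating behind my figures (5% prevalence, 90% accuracy) fits in a few lines:

```python
def update(p_ill, result, accuracy=0.9):
    """Posterior probability of illness after one test result ('+' or '-')."""
    if result == "+":
        num = p_ill * accuracy
        den = p_ill * accuracy + (1 - p_ill) * (1 - accuracy)
    else:
        num = p_ill * (1 - accuracy)
        den = p_ill * (1 - accuracy) + (1 - p_ill) * accuracy
    return num / den

p = 0.05                          # prevalence: 1 in 20
one_pos = update(p, "+")          # ~0.32: still more likely healthy than not
two_pos = update(one_pos, "+")    # ~0.81
one_neg = update(p, "-")          # ~0.006: a 99.4% probability of being healthy
mixed = update(one_pos, "-")      # back to 0.05: the two results cancel out
print(one_pos, two_pos, one_neg, mixed)
```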

Pay attention now, here comes the correct conclusion: If you don't have any symptoms, a positive test result for most diseases doesn't mean much - in most cases, you are still more likely to be healthy than not.

Next time you take a test, ask your doctor to calculate the probability you are actually ill or healthy; and if you want more certainty, take the test again, and again, until you are content with the degree of certainty on offer. And thank all those nice researchers for them 90% accurate tests - at least if they are not painful.

...is not a measure of confidence in the point estimate; it says nothing about accuracy.

Statistical significance simply means that, if the true value of the parameter were zero, the probability of obtaining an estimate as far from zero as the one observed (given the sample size and the degrees of freedom of the estimator in question) would be below 5% or 1% (the levels conventionally chosen). Any estimate which is large enough relative to its standard error will be found statistically significant, even in small samples; this doesn't mean, however, that the accuracy of the point estimate can't be very poor.
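A toy example of the distinction (numbers invented for illustration): an estimate can clear any conventional significance threshold while the interval around it remains enormous.

```python
estimate, std_error = 10.0, 3.0             # invented for illustration
t_stat = estimate / std_error               # ~3.33, beyond the 1% cutoff of 2.58
ci_low = estimate - 1.96 * std_error        # 95% interval, normal approximation
ci_high = estimate + 1.96 * std_error
print(t_stat, (ci_low, ci_high))
# Significant at the 1% level, yet the interval runs from about 4.1 to 15.9:
# we are fairly sure the effect isn't zero, and quite unsure how big it is.
```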

James Watson (discoverer, with Francis Crick, of the double helix structure of DNA) comments on development policy. From the Independent:

One of the world's most eminent scientists was embroiled in an extraordinary row last night after he claimed that black people were less intelligent than white people and the idea that "equal powers of reason" were shared across racial groups was a delusion.

James Watson, a Nobel Prize winner for his part in the unravelling of DNA who now runs one of America's leading scientific research institutions, drew widespread condemnation for comments he made ahead of his arrival in Britain today for a speaking tour at venues including the Science Museum in London.

The 79-year-old geneticist reopened the explosive debate about race and science in a newspaper interview in which he said Western policies towards African countries were wrongly based on an assumption that black people were as clever as their white counterparts when "testing" suggested the contrary. He claimed genes responsible for creating differences in human intelligence could be found within a decade.

Dr Watson told The Sunday Times that he was "inherently gloomy about the prospect of Africa" because "all our social policies are based on the fact that their intelligence is the same as ours – whereas all the testing says not really". He said there was a natural desire that all human beings should be equal but "people who have to deal with black employees find this not true".

I won't comment on the thing that most needs commenting on (I may return to this tomorrow, and I'm sure there will be ample talk elsewhere in the blogosphere), but I can't help noticing that his words show a deep appreciation for economists. Assume you drop a 'dumb' bomb on Cameroon or Belgium and everyone's IQ falls by 20 points. As an economic adviser to Cameroon, or to one of the countries that want to see Cameroon become rich, how on earth would you change your tune to reflect that? Are 'good economics' different for clever and for dumb nations? Politician: 'Hey, I have new data here; average IQ fell from 124 to 104. What should we do to maximise our growth prospects?' Economic Adviser: 'Gosh, I had given you the right policy prescription for clever people. For dumb people, you need to raise the marginal rate of income tax to 30%, impose tariffs on imported goods and start subsidising your farmers.'

(I am not saying that there are clever and dumb nations, so don't attack me in the comments for that. I know the post does not deal with the important aspect of the matter here, but hey, that's what I felt like commenting on.)

Swearing at work helps employees cope with stress, academics at a Norfolk university have said. A study by Norwich's University of East Anglia (UEA) into leadership styles found the use of "taboo language" boosted team spirit.

Professor Yehuda Baruch, professor of management, warned that attempts to prevent workers from swearing could have a negative impact.

He said: "In most scenarios, in particular in the presence of customers or senior staff, profanity must be seriously discouraged or banned.

"However, our study suggested that, in many cases, taboo language serves the needs of people for developing and maintaining solidarity, and as a mechanism to cope with stress. Banning it could backfire. Managers need to understand how their staff feel about swearing.[...]"

The graphics are really helpful in two different ways. First, there is a bar-chart sort of display for figuring out why you match up better with one candidate than another [...] Secondly, there are sliders so that you can emphasize issues you are particularly concerned with.

I like it, although I do get slightly irritated with multiple-choice questions when no available answer reflects my position (I guess that's partly the candidates' fault, though). And if you are curious, I get a fairly even mix of Democrats and Republicans, with no candidate agreeing with me on more than a handful of issues.

Alex Tabarrok links to a pretty neat optical illusion, which spurred this follow-up. If you think these may be computer tricks, rather than your brain playing games with you, simply print out the page - the illusions will work just as well.

None of the images below really move...

Are the lines parallel?

Circle or coil?

And the best one for last (you may want to click on the image to maximise its size):

1. Look at the four little dots in the middle of the picture for 30 seconds.
2. Then look at a wall near you - a bright spot will appear.
3. Blink a few times and you will see a figure.

Who do you see? A miracle or what?

Sorry, no sources for these (they were forwarded to me in emails etc.), other than the 'rotating snake illusion' (second from top), which is due to Akiyoshi Kitaoka. If you liked this post, his webpage has a very large number of illusions to occupy your mind with.

And if you feel like digging deeper, check out my very-worthwhile-looking latest purchase - Mind Hacks (the book's blog is here). No unconditional 'buy' recommendation yet as I haven't had time to read it, but stay tuned.

MEXICO CITY -- After apparently covering 15 kilometres of a marathon course in a time faster than any human being could possibly run it, not to mention crossing the finish line wearing a windbreaker, hat and skintight running pants in 16-degree weather, Mexican politician Roberto Madrazo was disqualified Monday as winner of his age category in the Sept. 30 Berlin marathon. [...]

Race officials said Monday they disqualified him for apparently taking a shortcut - an electronic tracking chip indicates he skipped two checkpoints in the race and would have needed superhuman speed to achieve his win.

According to the chip, Mr. Madrazo took only 21 minutes to cover 15 kilometres between the 20-kilometre and 35-kilometre marks - faster than any human can run. The world record for 15 kilometres is 41 minutes, 29 seconds, by Felix Limo of Kenya.

A member of the Institutional Revolutionary Party, which often resorted to fraud to win elections, Mr. Madrazo already had a tarnished reputation at home.

In 1996, Mexico's attorney-general confirmed reports that he had spent tens of millions of dollars more than the legal campaign spending limit in his winning 1994 bid for the Tabasco state governorship.

While under investigation on those charges, Mr. Madrazo told police he had been kidnapped for seven hours, beaten and threatened with death by unidentified assailants. Police couldn't find evidence of any such abduction, and many saw it as a sympathy ploy.

There are times when being proven right brings no pleasure. For several years, I argued that America's economy was being supported by a housing bubble that had replaced the stock market bubble of the 1990's. But no bubble can expand forever. With middle-class incomes in the United States stagnating, Americans could not afford ever more expensive homes. [...]

[...] Record-low interest rates in 2001, 2002 and 2003 did not lead Americans to invest more - there was already excess capacity. Instead, easy money stimulated the economy by inducing households to refinance their mortgages, and to spend some of their capital.

It is one thing to borrow to make an investment, which strengthens balance sheets; it is another thing to borrow to finance a vacation or a consumption binge. But this is what Alan Greenspan encouraged Americans to do. When normal mortgages did not prime the pump enough, he encouraged them to take out variable-rate mortgages - at a time when interest rates had nowhere to go but up.

Predatory lenders went further, offering negative amortisation loans, so the amount owed went up year after year. Sometime in the future, payments would rise, but borrowers were told, again, not to worry: house prices would rise faster, making it easy to refinance with another negative amortisation loan. The only way (in this view) not to win was to sit on the sidelines. All of this amounted to a human and economic disaster in the making. Now reality has hit: newspapers report cases of borrowers whose mortgage payments exceed their entire income. [...]

But lower short-term interest rates have led to higher medium-term interest rates, which are more relevant for the mortgage market, perhaps because of increasing worries about inflationary pressures. It may make sense for central banks (or Fannie Mae, America's major government-sponsored mortgage company) to buy mortgage-backed securities in order to help provide market liquidity. But those from whom they buy them should provide a guarantee, so the public does not have to pay the price for their bad investment decisions. Equity owners in banks should not get a free ride. [...]

It is the victims of predatory lenders who need government help. With mortgages amounting to 95% or more of the value of the house, debt restructuring will not be easy. What is required is to give individuals with excessive indebtedness an expedited way to a fresh start - for example, a special bankruptcy provision allowing them to recover, say, 75% of the equity they originally put into the house, with the lenders bearing the cost.

So, let me get this straight. People took on massive variable-rate mortgages (rather than the more expensive fixed-rate ones that offer protection against the risk of rising interest rates) because the lenders said that house prices could only go up. Furthermore, Alan Greenspan apparently told them 'to borrow to finance a vacation or a consumption binge'. And, even more to the point, a lot of people took on those big bad mortgages*. Hence, there is a strong moral case that the government should arrange for them to be compensated.

I'm not against bail-outs in general, because politics and economic policy should be guided by practical considerations, not by a desire to be 'fair' in each situation. I am not dismissive (or supportive) of Stiglitz's case that the government should give some money to these people to avoid 'a human and economic disaster in the making'. But to argue that the people who invested in housing do not bear any responsibility for their actions and are somehow entitled to compensation on moral grounds is simply ridiculous.

*The number of affected individuals is central to Stiglitz's argument. If you borrow money to start a company and the company fails, too bad. But if you and a lot of other people borrow to invest in the housing market, the moral calculus is completely transformed.

Andrew Gelman weighs in on the issue of vote-allocation fairness in a two-stage voting system, such as block voting in the EU (an issue I covered, albeit from a different perspective, in an older post) and the electoral college in the US. Here's an edited down version of his argument:

Commentators and experts have taken two positions on the allocation of votes in a two-stage voting system, such as block voting in the European Union or the Electoral College in the United States. From one side (for example, this article by Richard Baldwin and Mika Widgren), there is the claim that mathematical considerations of fairness demand that countries (or, more generally, blocks) get votes in proportion to the square root of their populations. [...]

My claim (and that of my coauthors Jonathan Katz and Joe Bafumi), then, is that even if one accepts the voting power criterion, the square-root rule is inappropriate. Could we be right? Is it possible that the consensus of experts on voting power in Europe is wrong, and three political science professors from the United States got it right?

A quick summary of our argument: the square-root rule is derived from a game-theoretic argument that also implies that elections in large countries will be much, much closer (on average) than elections in small countries. This implication is in fact crucial to the reasoning justifying the square-root rule, but it is not empirically correct. For example, if a country is 9 times larger, its elections should be approximately 3 times closer to 50/50. This doesn't happen. Larger elections are slightly closer than small elections, but by very little - so little that perhaps a 0.9 power rule would be appropriate, not a square-root (0.5 power) rule.
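The scaling implication Gelman describes is easy to check with a quick simulation. The sketch below assumes the textbook random-voting model behind the square-root rule (every voter an independent fair coin flip); the electorate sizes (400 and 3,600) are arbitrary illustrative choices, not numbers from Gelman's paper. Under that model, a "country" 9 times larger should have elections about √9 = 3 times closer:

```python
import random
import statistics

def mean_abs_margin(n_voters, trials=1000, seed=1):
    """Average |vote share - 1/2| when each voter flips a fair coin."""
    rng = random.Random(seed)
    margins = []
    for _ in range(trials):
        yes = sum(rng.random() < 0.5 for _ in range(n_voters))
        margins.append(abs(yes / n_voters - 0.5))
    return statistics.mean(margins)

small = mean_abs_margin(400)
large = mean_abs_margin(3600)   # a "country" 9 times larger
print(large < small)            # larger electorates are closer...
print(small / large)            # ...by a factor of about 3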

[...]I think it's really time for the voting-power subfield of political science, economics, and mathematics to move beyond this silly model (i.e. the square-root).

Unlike their sisters in the animal kingdom, human females don't openly advertise their ovulation. But even without a human version of the baboon's bright pink behind, signs of fertility sneak out, according to several studies. Subconsciously, women dress more provocatively and men find them prettier when it's prime time for conception. And a report from the University of New Mexico demonstrates that the cyclic signs have economic consequences.

Psychologist Geoffrey Miller and colleagues tapped the talent at local gentlemen's clubs and counted tips made on lap dances. Dancers made about $70 an hour during their peak period of fertility, versus about $35 while menstruating and $50 in between.

Miller links the wage fluctuations to changes in body odor, waist-to-hip ratio, and facial features. And although the dancers already operate at the upper limits of flirtatiousness, he says there may also be subtle shifts in their behavior—"how they talk and move when enticing a customer to buy a dance, and how they perform the dance itself."

Women on the pill averaged $37 an hour (with no mid-cycle peak) versus $53 for women off the pill. The contraceptive produces hormonal cues indicating early pregnancy - not an enticing target for a would-be suitor - so birth control could cost a dancer many thousands of dollars a year.

The researchers were surprised that almost no one in the business had noticed the pattern before. But if you're a woman in any service-industry job looking to maximize your tips, Miller suggests scheduling more shifts for the phase right before ovulation: "It might help to know about this so that you can exploit these effects."

The queues that formed outside Northern Rock, the country's fifth-biggest mortgage lender, represented the first bank run in Britain since 1866. The panic was prompted by the very announcement designed to prevent it. Only when the Bank of England said that it would stand by the stricken Northern Rock did depositors start to run for the exit. Attempts by Alistair Darling, the chancellor of the exchequer, to reassure savers served only to lengthen the queues of people outside branches demanding their money. The run did not stop until Mr Darling gave a taxpayer-backed guarantee on September 17th that, for the time being, all the existing deposits at Northern Rock were safe.

[...] the Bank of England emerges worst. At the outset, Mervyn King, its governor, talked tough. Mr King wanted to teach financiers that they should not expect the central bank to bail them out if they took on too much risk. Unlike the European and American central banks, the Bank of England held back from pumping cash into the markets and then did so modestly, insisting on the usual top-notch collateral. It argued that central-bank money could do little to save the three-month interbank lending market, which had gummed up.

The Bank of England's tough line has turned out to be wrong, and events have forced Mr King to relent. On September 19th, the day after the run on Northern Rock had ended, the Bank of England performed a breathtaking volte-face. It announced that over the next few weeks it would indeed be providing funds to try to sort out the three-month market. Furthermore, it said that it would lend against riskier collateral, including mortgages.

The charge against Mr King is that his purism turned a crisis into a fiasco. If the Bank of England had acted more promptly to restart seized-up lending markets, his critics say, Northern Rock might have muddled through. No one will ever know whether that is true. Either way, the lurches in the central bank's policy leave Mr King looking either as if he made a mistake, or as if he cannot stand up for his views.

I disagree with the Economist's take, at least in part. What would a sensible central bank/regulator/government want to do in a case like Northern Rock's? Two things: 1. ensure the financial system does not collapse alongside the troubled bank, and 2. send a clear signal that any financial institution taking on too much risk is in for a world of pain. The problem is that these aims are contradictory: if a bank collapses, it is bound to send ripples through the financial system and may well set off a domino effect; if it is rescued, the message other banks receive is that they can relax about risk - the central bank will come to the rescue if things turn ugly.

Now look at the aftermath of Northern Rock. The bank is still open and its remaining depositors are safe (good). The rest of the financial system is safe too (good). And as for moral hazard, take a look at the graph above: would you really like that happening to the bank you run or have invested money in?

With regards to the 'critics' the Economist refers to: Why should Northern Rock be allowed to 'muddle through'? These fellas took some risks which didn't work out and they should suffer as a result. The recent reforms that made it impossible for the Bank to mount a covert operation actually help reduce moral hazard: if you are in trouble, my friend, the whole world will find out - so make sure you don't walk too near the edge.

With regards to the 'purists'' argument - that a bank should never be bailed out by government: do you believe that only the death penalty has any deterrent effect on murder? Is the extra deterrent effect of committing never to help out a bank in trouble worth the additional risk of an across-the-board financial meltdown?

Long story short: The Northern Rock affair ended pretty much the best way it could have - except on the communications front.

You may have spotted the little 'From the Blogosphere' widget that I recently put on the sidebar. The service is called BlogRush, and it has attracted a fair amount of attention in the blogosphere. Here's more, from their site (emphasis mine):

BlogRush is a "Cooperative Syndication Network." It's a network of blogs that run a small "widget" on their pages. Each time this widget is loaded it will contain 5 clickable headlines which are the blog post titles to other users' posts. Clicking on any of these links will open a new browser window and load the blog and full post. Users earn "syndication credits" based on each time their blog loads the widget as well as each time any of their referrals (users that signup after clicking the "add your blog posts" link on the widget) loads the widget. They also earn additional credit based on all the activity through 10 generations of referrals. 1 Syndication Credit = having one of their recent blog post titles served inside the widget on another member's blog.

Blogrush has been labelled a pyramid scheme by many in the blogosphere - and it is. This, in itself, is not a problem: if you are offered a deal and you accept it, it must make sense to you. Furthermore, there's no real money at stake, and you can choose to remove the widget from your site any time you please.

What I'm pissed off about is that they make an impossible commitment (that is, they lie) about the conversion rate from pageviews on your site to links to your site from widgets on other blogs. The system can only ever serve five links per widget impression, so the total number of link slots equals the total number of pageviews times 5 (remember, each widget displays five links) - and the 'syndication credits' owed to members cannot all be honoured once they outgrow that. In the absence of the referral system, each pageview earns one credit against five available slots, so the conversion rate from your own pageviews to links to your site should be 1 to 5. (The FAQ does not specify what happens with excess link spots: they post paid-for ads, perhaps? The quoted piece refers to '5 clickable headlines which are the blog post titles to other users' posts'.) With the referral system, this conversion rate can only be worse, and it is likely to go beyond 1 to 1 before long, so BlogRush will simply not be able to deliver on its commitments. And it is very unlikely that they are not aware of the math: after all, they put the bloody link-allocation algorithm together.
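The accounting can be sketched in a few lines. The numbers below (a flat per-generation referral bonus, a full 10-generation referral chain) are illustrative assumptions - BlogRush never published its exact credit formula - but they show why the promise breaks: each impression creates only five link slots, while referral bonuses can push the credits owed per impression past that.

```python
SLOTS_PER_IMPRESSION = 5  # each widget load shows five headlines

def credits_owed_per_impression(bonus_per_generation, generations=10):
    """Credits the system owes for one widget impression: one to the
    displaying blog, plus an (assumed) flat bonus to each referrer in
    a full chain of `generations` referral generations."""
    return 1 + bonus_per_generation * generations

for bonus in (0.0, 0.2, 0.5, 1.0):
    owed = credits_owed_per_impression(bonus)
    status = "sustainable" if owed <= SLOTS_PER_IMPRESSION else "over-promised"
    print(f"bonus {bonus:.1f}/generation -> {owed:.0f} credits owed vs "
          f"{SLOTS_PER_IMPRESSION} slots per impression ({status})")
```

Whatever the true bonus schedule is, the structure is the same: referral credits are a claim on a pool of slots that grows only linearly with pageviews.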

To cut a long story short, I don't mind that BlogRush is unlikely to be a good source of traffic (what most bloggers tend to complain about), but I am angry that they blatantly lie about the terms of the deal - which is probably illegal too.

Advance warning: This is a tedious post, and it is extremely unlikely you will find it either interesting or informative.

Santosh Anagol is an economics PhD student at Yale and he blogs at Brown Man's Burden. Going through his stuff, I came across a short paper he wrote back in 2004 about the implications of multicollinearity (I won't link to Wikipedia on this, as the article on multicollinearity is lacking and potentially misleading. For more information, read a standard econometrics textbook.)

What he does is simple enough: he simulates the model y = β1x1 + β2x2 + u (with β1 = 2 and β2 = 1, and the error normally distributed and uncorrelated with the x's), then runs the regression three times, with σ12 (the covariance of x1 and x2) going from zero to .99.

At correlations below .999 our statistical model nails the point estimates and has large t-values. So we don’t need to worry about correlated regressors unless the correlation is EXTREMELY high.

Talking about variables with a correlation of .99 is not very relevant for practical purposes (for many popular datasets, I doubt the correlation between the recorded values and their true values is even as high as .95). In any case, the sample size chosen (1000) is large, and it is not surprising that the OLS estimators yield estimates close to the true values even in the presence of .95 correlation (not surprising to an experienced econometrician, that is; see the conclusion to the post). What is more interesting to observe is how the confidence interval around these point estimates changes as σ12 is raised. With the x's barely correlated, the 95% confidence interval around β1 is roughly .13 units wide; with σ12 = .5 it grows to .15 units, and at σ12 = .95 it reaches almost .4 units.
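That widening follows directly from the textbook variance formula: with two regressors, se(β1) = σ/√(n·var(x1)·(1 − ρ²)), so the interval width scales with 1/√(1 − ρ²). A quick sketch (assuming unit variances and σ = 1, which roughly matches the note's setup) reproduces the widths quoted above:

```python
import math

def ci_width(rho, n=1000, sigma=1.0, var_x=1.0):
    """Width of the 95% CI for an OLS slope whose regressor has
    correlation rho with one other regressor (textbook formula:
    se = sigma / sqrt(n * var_x * (1 - rho**2)))."""
    se = sigma / math.sqrt(n * var_x * (1 - rho ** 2))
    return 2 * 1.96 * se

for rho in (0.0, 0.5, 0.95, 0.99):
    print(f"rho = {rho}: 95% CI width ~ {ci_width(rho):.3f}")
```

At ρ = .99 the interval is about seven times wider than at ρ = 0 - which is exactly the 'more information needed' problem, not bias.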

Continuing with the results:

With regressors that have correlations around .99, we get some bad results. In this case the point estimates are off, and one of them is significant. This would obviously be the wrong conclusion about the DGP.

'Statistical significance' is often misunderstood as a measure of confidence in the point estimate, but it is nothing of the sort. Finding an estimate to be 'statistically significant' simply means that the (95%, in this case) confidence interval does not include zero - in other words, the data would be very unlikely if the true value of the parameter were zero, and thus the variable of interest is likely to have an effect on y.

So, the conclusions we would draw about the DGP from the above results are actually the right ones: β1 is unlikely to equal zero (and it doesn't; it equals 2 by construction), and the 95% confidence interval for β2 runs from -3.12 to 2.9897, which covers the true value (by construction, β2 = 1). The only reason β1 is found to be statistically significant and β2 is not is that β1 was picked to equal 2 and β2 to equal 1, so our estimators need more information to establish that x2 has an effect on y than to establish the same thing for x1.

The point estimates are indeed off, but this is purely due to the particular random sample - and the large confidence intervals alert us to the possibility that this is the case. Run the same model with a larger sample size (or draw a large number of other random samples and plot the probability distribution of your estimators), and the OLS estimates will be spot on.

And a final observation:

If two variables are highly correlated, will it screw up coefficients on other, exogenous variables? I ran the model with another regressor x3 that was uncorrelated with x1 and x2 , and with a coefficient of 3 in the data generating process. The degree of correlation between x1 and x2 DOES NOT CHANGE point estimates and t-stats of our coefficient on x3.

...which is to be expected from theory. Any explanatory variable that is uncorrelated with the x's of interest does not need to enter the model at all - it can safely reside in the error term without introducing any bias (the Gauss-Markov assumptions only require the error to have mean zero given the x's). The coefficient on x3 would be the same even if x1 and x2 were not included in the regression, and the coefficients on x1 and x2 are unaffected by the inclusion of x3 in the specification.

Before leaving this post, I should make clear that I am not critical of Santosh's note; in fact, I think it's great and his effort is to be applauded. From the introduction to the paper:

I’ve been confused for a while about the effects of having x variables that are correlated. This is pretty embarrassing, given this is undergrad metrics stuff. But I’ve also seen enough grad students and professors throw around ”multicollinearity” without really understanding its implications that its worth straightening out.

This is not 'undergrad metrics stuff' at all. It is true that economics undergrads learn about the qualitative effect of 'multicollinearity', but an understanding of its significance in practice only comes after substantial exposure to the literature and hands-on experience (as with so many things in econometrics). Santosh's attitude is the right one, and playing around with simulated data is a great, low-cost way to digest the theory and really understand econometrics - one that tutors should encourage far more than is currently the case.

The World Freedom Atlas is a “geovisualization tool” for world statistics. The amount of information is impressive - Peter Klein went to the trouble of linking to some of the sources:

It includes the most important variables used by economists including income and purchasing power from the Penn World Table, legal origin from LLSV, economic freedom from the Fraser Institute and the Heritage Foundation, policy constraints from Witold Henisz, the World Bank’s governance indicators, and a host of other variables from Acemoglu, Johnson and Robinson; Barro and Lee; Easterly and Levine; Persson and Tabellini; and several others.

When you have a working knowledge of economics, it’s like having a mild super power.

This is Scott Adams, creator of Dilbert, who goes on to describe how he is using his. Many holders of economics degrees do not really believe in economics and its ability to explain human behaviour; Adams is not one of those people. His training is evident in the Dilbert books, where he often analyses situations using thinly disguised economics models (with comically questionable assumptions).

Economists, and those that have to bear with us, will agree that learning economics changes the way you look at the world. But will it make you happy?

For the sake of argument, forget the fact that economics degrees tend to make you rich, famous and popular with the sex of your preference. Forget that it can transform mere mortals to social analysis gods. Focusing purely on the ways in which learning economics alters the way you feel, should a rational, perfectly informed, utility-maximising individual choose to study economics?

Judging from my own experience, the answer is yes. Here's why:

1. I cherish my consumer surplus. I value most of the stuff I buy way more than what I have to pay for it; vanilla ice cream makes me happy beyond belief, and the same is true of the music of Dream Theater and the (soon-to-be-purchased) Apple iPhone. And what am I asked to pay for them? Peanuts.

2. I cherish my producer surplus. I am getting paid way, way more than the salary that would make me indifferent between supplying labour and staying at home.

3. I never have regrets: I did the best I could given the information available to me at the time. Judging I could have done better using information I acquired at a later date makes as much sense as regretting the existence of gravity. On a related topic, I understand the irrelevance of sunk costs.

4. While I do care for my welfare in relative terms, my welfare in absolute terms looms large in my utility function - and, boy, look how its value has been growing.

5. The selfishness of my fellow human beings does not make me anxious or depressed. Adam Smith (or was it Mandeville?) taught me that humans, selfish as they are, can make happy societies. And perhaps more to the point, they can make me happy.

Bluematter. is proud to award Carlos M. Jarque the Distinguished Medal for Most Misspelled Economist of All Time. On Google Scholar, a search for 'Jacque-Bera' (see the note to editors at the bottom of this post) returns no fewer than 177 results, while a search for 'Jacque Bera normality' generates 581. Many of these papers are published in prestigious journals, and my brief inquiry revealed a handful of papers (e.g.) that include tables giving the value of the Jarque-Bera statistic while referring to the Jacque-Bera statistic in the main text. Analysis suggests that the Jarque-Bera statistic is misspelled in between 5% and 15% of published papers. Further anecdotal evidence points to fewer than 20% of people getting the true name of the JB statistic right in conversation and unpublished work. To (roughly) quote Indifference Merv: 'It makes it difficult to figure out whether there are in fact two distinct statistics, Jarque-Bera and Jacque-Bera. Jarque has been done a great injustice by the profession'.

Note to Editors: Carlos M. Jarque is an economist with a long and distinguished career in economics, politics and management. Amongst econometricians, he is best known for his contribution (with Anil K. Bera) to testing for normality of observations and regression residuals: the Jarque-Bera statistic (here is Wikipedia, and here is the paper).
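However you spell it, the statistic itself is simple: JB = n/6 · (S² + (K − 3)²/4), where S is the sample skewness and K the sample kurtosis; under normality it is asymptotically χ² with two degrees of freedom. A minimal sketch (the simulated samples are illustrative, of course):

```python
import random

def jarque_bera(xs):
    """Jarque-Bera normality statistic: n/6 * (S^2 + (K - 3)^2 / 4),
    where S is sample skewness and K is sample kurtosis."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

rng = random.Random(0)
normal = [rng.gauss(0, 1) for _ in range(5000)]
skewed = [rng.expovariate(1.0) for _ in range(5000)]
print(jarque_bera(normal))  # small, chi-squared(2)-sized: consistent with normality
print(jarque_bera(skewed))  # very large: normality clearly rejected
```

Large values reject normality; for a 5% test the χ²(2) critical value is about 5.99.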

Postscript: Anil K. Bera does not fare much better. A search for Jarque-Berra returns 173 papers.

About and contact

Subscribe

*Disclaimer

This blog reflects my personal views and is in no way representative of those of my employer or my mum. To make sure no misunderstanding arises and their lives stay stress-free, I will remain anonymous.