Monday, July 31, 2006

Via Paul Kedrosky: the figure below was displayed by Microsoft's Craig Mundie at an analyst presentation. If the numbers are correct, Microsoft is now the leading technology company in R&D spending. Somehow I can't believe they're getting a decent return on that investment.

For comparison, the entire US high energy physics annual budget is under $1 billion, as is CERN's budget. Total annual US defense spending on R&D (this includes weapon systems, "missile defense", etc.) is about $60 billion.

Saturday, July 29, 2006

I want to recommend a book I've been reading recently, The Eighth Day of Creation by H.F. Judson. It's the most detailed intellectual history of molecular biology I've yet found, covering not just the science but the scientists as well. Someone described it as a New Yorker-style book covering the discovery of DNA, RNA and protein synthesis.

It may be chauvinistic, but I can't help noticing the prominent role played by physicists who crossed over into molecular biology: Bragg, Delbrück, Crick, Wilkins, Gamow, Szilard (yes, the Gamow and Szilard you know from big bang cosmology and the atomic bomb, respectively), Walter Gilbert, etc. The influence of Schrödinger's little book What is Life? is pervasive.

It's hard for me to think of many scientific histories as good as this one, in which the writer has a deep understanding of both the science and the personalities involved. Two examples are Subtle is the Lord (Abraham Pais on Einstein) and QED and the Men Who Made It (Sam Schweber on quantum electrodynamics), but these border on unreadable for the non-specialist. Perhaps Genius, Gleick's biography of Feynman, and The Enigma, Andrew Hodges's biography of Turing, also qualify. Can anyone suggest others?

Tuesday, July 25, 2006

This Times article describes controlled (should I say directed?) breeding experiments on small mammals such as foxes and rats, performed by the Siberian scientist Dmitri Belyaev. Belyaev managed to produce nice (tame, domesticated) and nasty (aggressive) versions of each animal in a surprisingly short time, simply by breeding according to the exhibited trait. The tame foxes can even perform difficult cognitive tasks, like reading humans well enough to determine what they are looking at -- dogs can do this, but chimps, though generally smarter, cannot. It's a great demonstration of how fast evolution can proceed when selection pressure is strong enough. One of the fascinating side effects of selection based on behavior is that certain physical traits (in foxes: floppy ears, white patches of fur and differently shaped skulls) were also altered, so the tame foxes can be readily distinguished from wild or aggressive ones. (I don't see why this would be a priori implausible: the genes for tameness might have superficial physical as well as behavioral effects.)

These results rule out any simple-minded dismissal of the idea that superficial physical traits might be correlated with cognitive or behavioral traits. Sub-populations that look alike might actually behave alike. Sometimes you can judge a book by its cover :-)
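The speed of response to strong selection can be illustrated with a toy simulation (a crude sketch, not Belyaev's actual protocol; population size, gene count, and noise level are all invented): each generation, breed only the tamest 10% of a population whose "tameness" is polygenic plus environmental noise.

```python
import random

# Toy truncation selection on a polygenic "tameness" trait.
# Each animal's phenotype = sum of 20 gene effects + environmental noise.
random.seed(42)

N_GENES, POP, KEEP = 20, 500, 50

def make_founder():
    # each gene contributes 0 or 1 to the genetic tameness value
    return [random.randint(0, 1) for _ in range(N_GENES)]

def phenotype(genes):
    return sum(genes) + random.gauss(0, 2)  # genes + environment

def breed(p1, p2):
    # child inherits each gene from a randomly chosen parent
    return [random.choice(pair) for pair in zip(p1, p2)]

pop = [make_founder() for _ in range(POP)]
for gen in range(8):  # the article: tame foxes were common after ~8 generations
    tamest = sorted(pop, key=phenotype, reverse=True)[:KEEP]
    pop = [breed(random.choice(tamest), random.choice(tamest))
           for _ in range(POP)]

mean_tameness = sum(sum(g) for g in pop) / POP
print(round(mean_tameness, 1))  # rises well above the founder mean of ~10
```

Even with substantial environmental noise, the mean genetic value climbs rapidly when only the top decile breeds -- consistent with tameness becoming common within a human lifetime.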

NYTimes: Belyaev chose to test his theory on the silver fox, a variant of the common red fox, because it is a social animal and is related to the dog. Though fur farmers had kept silver foxes for about 50 years, the foxes remained quite wild. Belyaev began his experiment in 1959 with 130 farm-bred silver foxes, using their tolerance of human contact as the sole criterion for choosing the parents of the next generation.

“The audacity of this experiment is difficult to overestimate,” Dr. Fitch has written. “The selection process on dogs, horses, cattle or other species had occurred, mostly unconsciously, over thousands of years, and the idea that Belyaev’s experiment might succeed in a human lifetime must have seemed bold indeed.”

In fact, after only eight generations, foxes that would tolerate human presence became common in Belyaev’s stock. Belyaev died in 1985, but his experiment was continued by his successor, Lyudmila N. Trut. The experiment did not become widely known outside Russia until 1999, when Dr. Trut published an article in American Scientist. She reported that after 40 years of the experiment, and the breeding of 45,000 foxes, a group of animals had emerged that were as tame and as eager to please as a dog.

As Belyaev had predicted, other changes appeared along with the tameness, even though they had not been selected for. The tame silver foxes had begun to show white patches on their fur, floppy ears, rolled tails and smaller skulls.

...There was far more to Belyaev’s experiment than the production of tame foxes. He developed a parallel colony of vicious foxes, and he started domesticating other animals, like river otters and mink. Realizing that genetics can be better studied in smaller animals, Belyaev also started a study of rats, beginning with wild rats caught locally. His rat experiment was continued after his death by Irina Plyusnina. Siberian gray rats caught in the wild, bred separately for tameness and for ferocity, have developed these entirely different behaviors in only 60 or so generations.

The collection of species bred by Belyaev and his successors form an unparalleled resource for studying the process and genetics of domestication. In a recent visit to Novosibirsk, Dr. Brian Hare of the Planck Institute used the silver foxes to probe the unusual ability of dogs to understand human gestures.

If a person hides food and then points to the location with a steady gaze, dogs will instantly pick up on the cue, while animals like chimpanzees, with considerably larger brains, will not. Dr. Hare wanted to know if dogs’ powerful rapport with humans was a quality that the original domesticators of the dog had selected for, or whether it had just come along with the tameness, as implied by Belyaev’s hypothesis.

He found that the fox kits from Belyaev’s domesticated stock did just as well as puppies in picking up cues from people about hidden food, even though they had almost no previous experience with humans. The tame kits performed much better at this task than the wild kits did. When dogs were developed from wolves, selection against fear and aggression “may have been sufficient to produce the unusual ability of dogs to use human communicative gestures,” Dr. Hare wrote last year in the journal Current Biology.

Saturday, July 22, 2006

Via Economist's View. I think Robert Reich's view is a bit too rosy, although his points are all well taken. Regarding point (1), the top central planners in China are likely quite good, but the system is still such that tremendous resources are misallocated -- for example, into unnecessary property development projects. They are gaining very, very fast in productivity and mastery of new technologies, although the 11 percent number in (2) isn't to be trusted entirely. Regarding point (3), I just watched the movie Syriana and recommend it to anyone who isn't in the oil industry :-)

I've been watching the statistics coming out of China about its economic growth. Here are three things you should know. (1) The people managing China's economy (I'm not talking about the politicians but about the financial and economic wizards who are actually making decisions about money supply, capital markets, and the like) are extremely good. They match the best economic minds anywhere in the world. In other words, they know what they're doing. (2) The latest data show China is now growing at a rate faster than 11 percent. That's extraordinary. It's faster than China has been growing for the last five years -- and that was faster than anyone had predicted. China's rate of economic growth is the biggest economic news in the world. (3) That growth is putting huge demands on world energy supplies, and raw materials. Oil prices will continue to rise, as will all other commodities. This is the most important economic fact in the world right now. It is also among the most important political facts in the world.

Some new data on genetic vs environmental influences on IQ in this recent Times magazine article. Until recently, twin studies could only examine the effect of environmental variation within a limited range -- from working to upper class -- because very poor families are generally not allowed to adopt babies. In these twin studies, the effect of family background has been found to recede to almost nothing by late adulthood, but the possibility that severe deprivation might have a stronger effect had not been ruled out. Recent investigations, as detailed below, have focused on very poor families and found a significant effect. We might characterize this as discovering the non-linear region of gene--family environment interaction :-)

NYTimes: A century’s worth of quantitative-genetics literature concludes that a person’s I.Q. is remarkably stable and that about three-quarters of I.Q. differences between individuals are attributable to heredity. This is how I.Q. is widely understood — as being mainly “in the genes” — and that understanding has been used as a rationale for doing nothing about seemingly intractable social problems like the black-white school-achievement gap and the widening income disparity. If nature disposes, the argument goes, there is little to be gained by intervening. In their 1994 best seller, “The Bell Curve,” Richard Herrnstein and Charles Murray relied on this research to argue that the United States is a genetic meritocracy and to urge an end to affirmative action. Since there is no way to significantly boost I.Q., prominent geneticists like Arthur Jensen of Berkeley have contended, compensatory education is a bad bet.

...When quantitative geneticists estimate the heritability of I.Q., they are generally relying on studies of twins. Identical twins are in effect clones who share all their genes; fraternal twins are siblings born together — just half of their genes are identical. If heredity explains most of the difference in intelligence, the logic goes, the I.Q. scores of identical twins will be far more similar than the I.Q.’s of fraternal twins. And this is what the research has typically shown. Only when children have spent their earliest years in the most wretched of circumstances, as in the infamous case of the Romanian orphans, treated like animals during the misrule of Nicolae Ceausescu, has it been thought that the environment makes a notable difference. Otherwise, genes rule.
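The twin logic in the passage above is often quantified with Falconer's formula, which estimates heritability from the gap between identical-twin and fraternal-twin correlations. A minimal sketch -- the correlations below are illustrative numbers I chose to land near the "three-quarters" figure quoted above, not data from any particular study:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability from twin correlations:
    h^2 ~= 2 * (r_MZ - r_DZ), since identical twins share all their
    genes and fraternal twins on average share half."""
    return 2 * (r_mz - r_dz)

# illustrative correlations for IQ between twin pairs
h2 = falconer_h2(r_mz=0.85, r_dz=0.48)
print(round(h2, 2))  # 0.74 -- about "three-quarters" heritable
```

Turkheimer's point can be phrased in these terms: in the poorest families, r_MZ and r_DZ converge, driving the estimated h^2 toward zero.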

Then along came Eric Turkheimer to shake things up. Turkheimer, a psychology professor at the University of Virginia, is the kind of irreverent academic who gives his papers user-friendly titles like “Spinach and Ice Cream” and “Mobiles.” He also has a reputation as a methodologist’s methodologist. In combing through the research, he noticed that the twins being studied had middle-class backgrounds. The explanation was simple — poor people don’t volunteer for research projects — but he wondered whether this omission mattered.

Together with several colleagues, Turkheimer searched for data on twins from a wider range of families. He found what he needed in a sample from the 1970’s of more than 50,000 American infants, many from poor families, who had taken I.Q. tests at age 7. In a widely-discussed 2003 article, he found that, as anticipated, virtually all the variation in I.Q. scores for twins in the sample with wealthy parents can be attributed to genetics. The big surprise is among the poorest families. Contrary to what you might expect, for those children, the I.Q.’s of identical twins vary just as much as the I.Q.’s of fraternal twins. The impact of growing up impoverished overwhelms these children’s genetic capacities. In other words, home life is the critical factor for youngsters at the bottom of the economic barrel. “If you have a chaotic environment, kids’ genetic potential doesn’t have a chance to be expressed,” Turkheimer explains. “Well-off families can provide the mental stimulation needed for genes to build the brain circuitry for intelligence.”

Thursday, July 20, 2006

This Economist article describes backend problems in the rapidly growing market for credit derivatives. The idea of keeping track of billion dollar trades on scraps of paper seems alarming to me. On the other hand I know how hard it is to build an IT infrastructure for a growing business.

OVER a year ago, a whiff of something nasty filled the nostrils of the world's financial regulators. It came, appropriately, from the back end of the credit-derivatives market, an unregulated asset class that was growing so fast that banks and hedge funds that dabbled in it had lost track of their trades.

In other markets where trading is private (rather than on an exchange), the problem might have seemed minor, involving thankless back-office tasks with monotonous names like matching and confirmation. But this time regulators saw a threat to the stability of banks, because of the popularity of credit-default swaps (CDSs), instruments that disperse lending risk around the financial system.

From almost nothing in 2000, trading in CDSs has ballooned to a notional value of $17 trillion at the latest count. That still leaves plenty of room to grow—interest-rate swaps, for example, are a $160 trillion market. But the CDS market, which allows banks and other financial firms to buy and sell protection against the risk of default by a borrower, punches above its weight. According to the International Monetary Fund, it is far bigger than the world's corporate-bond markets, it helps set the cost of borrowing by companies, and it might even reduce swings in the credit cycle.

For all its virtues, however, its practitioners have been hopeless at keeping tabs on their own trades, especially in the secondary market into which hedge funds have stormed. So last year regulators pressed the industry's 14 top dealers, which they do supervise, to put their rubber gloves on and sort out the plumbing.

Since then, bankers in a group known, like a ruling clique, as the “14 families” have laboured to Dyno-Rod a backlog of unconfirmed swap trades. Prodded by regulators, traders have shut themselves in hotel rooms for one weekend after another to sort out discrepancies with their counterparties. Some banks have even temporarily halted trading to allow the back offices to catch up with the sales staff.

The efforts appear to have borne fruit. The main dealers agreed to a June 30th deadline to cut the backlog of unconfirmed trades by 70% from their levels when they were first summoned by the Federal Reserve Bank of New York last September. “Everyone's already achieved that, as far as I know,” said Mark Davies, head of global credit trading at Bear Stearns, one of the firms.

Yet the smell has not quite gone away. Last month Alan Greenspan, former chairman of the Federal Reserve, startled bond traders at a dinner in New York with both a friendly pat and a slap on the wrist. Credit derivatives, he gushed, were “becoming the most important instruments I've seen in decades.” But he then went on to say how appalled he was at the “19th-century technology” used to trade credit-default swaps, with deals done over the phone and on scraps of paper. In London the Financial Services Authority, which has warned that unconfirmed trades could cause liquidity problems and accelerate a financial crisis, is partially mollified. “We've gone from a red light to an amber,” an official says.

It goes without saying that an automated, transparent back-office system is a good way to bring new investors into the market and improve liquidity. There is also broad support for the way regulators have let banks find their own answers to the problems, rather than imposing rules.

But the supervisory oversight, as well as the solutions dreamt up by the big dealers, make some people nervous. They think there may be subtle changes in the $220 trillion market for over-the-counter derivatives, which is unregulated because it involves trades between private parties.

A good deal of the grumbling comes from hedge funds. Some of these, bankers say, have unsuccessfully resisted moves to automated trading, preferring to keep details of their trades to themselves and to play dealers off against each other.

There are other worries. At the centre of the complex trading infrastructure is a vast industry-owned utility called the Depository Trust & Clearing Corporation (DTCC), which the 14 dealers—ten of which have seats on its board—see as integral to automation (handily, it gave such users a rebate of $528m last year). The utility seeks to stitch together electronic platforms that stretch from traders' desks, through confirmation, to storing records of derivative contracts until they expire.

DTCC's prevalence has led to concerns that putting so much global information into storage in America may one day make the industry subject to American regulation, no matter where trades take place. The utility's Peter Axilrod believes this fear is unwarranted. He points to the light touch shown by supervisors so far.

Another fear is that DTCC might trample on private competitors as it moves into other areas of derivatives. When it began testing a system for recording initial agreements to trade last month, it described itself as a tap-dancing gorilla.

Such concerns are likely to loom larger now that the backlog of paperwork has been reduced. Of particular interest is whether DTCC will realise its ambitions to store hundreds of trillions of dollars' worth of credit, equity and interest-rate derivative contracts. This would be a hugely complex task: already it admits that its plans to warehouse CDS contracts are taking longer than expected. And the rest of the world might well worry that too much of the plumbing of a global market would be on American soil.

Tuesday, July 18, 2006

Via an intrepid correspondent... the Times covers numerous areas where human "experts" are outperformed by machine intelligence. Of course, we're not quite ready to let the machines take over just yet. I think IA = Intelligence Amplification is much more promising in the short term than AI = Artificial Intelligence.

The professor, Chris Snijders of the Eindhoven University of Technology, has been studying the routine decisions that managers make, and is convinced that computer models, by and large, can do a better job of it. He even issued a challenge late last year to any company willing to pit its humans against his algorithms.

“As long as you have some history and some quantifiable data from past experiences,” Mr. Snijders claims, a simple formula will soon outperform a professional’s decision-making skills. “It’s not just pie in the sky,” he said. “I have the data to support this.”

Some of Mr. Snijders’s experiments from the last two years have looked at the results that purchasing managers at more than 300 organizations got when they placed orders for computer equipment and software. Computer models given the same tasks achieved better results in categories like timeliness of delivery, adherence to the budget and accuracy of specifications.
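A sketch of the kind of "simple formula" Snijders describes: score each supplier by a weighted sum of quantifiable past-performance features, in the spirit of Dawes-style linear models. The features, weights, and suppliers here are invented for illustration; a real model would fit the weights to historical outcomes.

```python
def score(supplier, weights):
    # weighted sum over whatever quantifiable features we track
    return sum(weights[k] * supplier[k] for k in weights)

# illustrative weights over past-performance features
weights = {"on_time_rate": 0.5, "budget_adherence": 0.3, "spec_accuracy": 0.2}

suppliers = {
    "A": {"on_time_rate": 0.95, "budget_adherence": 0.80, "spec_accuracy": 0.90},
    "B": {"on_time_rate": 0.70, "budget_adherence": 0.95, "spec_accuracy": 0.85},
}

best = max(suppliers, key=lambda name: score(suppliers[name], weights))
print(best)  # "A"
```

The point of the research cited below is that even a crude formula like this, applied consistently, tends to beat expert judgment applied inconsistently.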

No company has directly taken Mr. Snijders up on his challenge. But a Dutch insurer, Interpolis, whose legal aid department has been expanding rapidly in recent years, called in Mr. Snijders to evaluate a computer model it had designed to automate the routing of new cases — a job previously handled manually by the department’s in-house legal staff.

The manager in charge of the project, Ludo Smulders, said the model was much faster and more accurate than the old system. “We’re very satisfied about the results it’s given our organization,” he said. “That doesn’t mean there are no daily problems, but the problems are much smaller than when the humans did it by hand. And it lets them concentrate more on giving legal advice, which is what their job is.”

Mr. Snijders’s work builds on something researchers have known for decades: that mathematical models generally make more accurate predictions than humans do. Studies have shown that models can better predict, for example, the success or failure of a business start-up, the likelihood of recidivism and parole violation, and future performance in graduate school.

They also trump humans at making various medical diagnoses, picking the winning dogs at the racetrack and competing in online auctions. Computer-based decision-making has also grown increasingly popular in credit scoring, the insurance industry and some corners of Wall Street.

The main reason for computers’ edge is their consistency — or rather humans’ inconsistency — in applying their knowledge.

“People have a misplaced faith in the power of judgment and expertise,” said Greg Forsythe, a senior vice president at Schwab Equity Ratings, which uses computer models to evaluate stocks.

The algorithms behind so-called quant funds, he said, act with “much greater depth of data than the human mind can. They can encapsulate experience that managers may not have.” And critically, models don’t get emotional. “Unemotional is very important in the financial world,” he said. “When money is involved, people get emotional.” Many putative managerial qualities, like experience and intuition, may in fact be largely illusory. In Mr. Snijders’s experiments, for example, not only do the machines generally do better than the managers, but some managers perform worse over time, as they develop bad habits that go uncorrected from lack of feedback.

Other cherished decision aids, like meeting in person and poring over dossiers, are of equally dubious value when it comes to making more accurate choices, some studies have found, with face-to-face interviews actually degrading the quality of an eventual decision.

“People’s overconfidence in their ability to read someone in a half-an-hour interview is quite astounding,” said Michael A. Bishop, an associate professor of philosophy at Northern Illinois University who studies the social implications of these models.

And the effects can be serious. “Models will do much better in predicting violence than will parole officers, and in that case, not using them leads to a more dangerous society,” he said. “But people really don’t believe that the models are as accurate as they are.”

Models have other advantages beyond their accuracy and consistency. They allow an organization to codify and centralize its hard-won knowledge in a concrete and easily transferable form, so it stays put when the experts move on. Models also can teach newcomers, in part by explaining the individual steps that lead to a given choice. They are also faster than people, are immune to fatigue and give the human experts more time to work on other tasks beyond the current scope of machines.

So if they’re so good, why aren’t they already used everywhere?

Not everyone is convinced that managers are incorrigibly myopic. “I’ve never seen any evidence that there is a pattern of decline at all, and it just doesn’t fit with the way management literature is going, which is all around the emotional intelligence angle,” said Laura Empson, the director of the Clifford Chance Center of the Said Business School at Oxford University.

“I think there are a lot of people who have a strong technological orientation who would agree life would be a lot simpler if it weren’t for the humans,” she said. “But the reality is, organizations do have a lot of very intense and complicated human issues within them.”

Max H. Bazerman, a professor at Harvard Business School, wonders how many managerial decisions can actually be modeled. “The vast majority of decisions that we make in professional life don’t have this quality,” he said.

He agrees that models can make better decisions about credit card applications and college admissions, he said, “but there are many decisions that are much more unique, where that database doesn’t exist. I’m as skeptical about human intuition as these folks, but it’s not only a computer model that we replace it with. Sometimes it’s thinking more clearly.”

Many in the field of computer-assisted decision-making still refer to the debacle of Long Term Capital Management, a highflying hedge fund that counted several Nobel laureates among its founders. Its algorithms initially mastered the obscure worlds of arbitrage and derivatives with remarkable skill, until the devaluation of the Russian ruble in 1998 sent the fund into a tailspin.

“As long as the underlying conditions were in order, the computer model was almost like a money machine,” said Roger A. Pielke Jr., a professor of environmental studies at the University of Colorado whose work focuses on the relation between science and decision-making. “But when the assumptions that went into the creation of those models were violated, it led to a huge loss of money, and the potential collapse of the global financial system.”

In such situations, “you can never hope to capture all of the contingencies or variables inside of a computer model,” he said. “Humans can make big mistakes also, but humans, unlike computer models, have the ability to recognize when something isn’t quite right.”

Another problem with the models is the issue of accountability. Mr. Forsythe of Schwab pointed out that “there’s no such thing as a 100 percent quantitative fund,” in part because someone has to be in charge if the unexpected happens. “If I’m making decisions,” he said, “I don’t want to give up control and say, ‘Sorry, the model told me.’ The client wants to know that somebody is behind the wheel.”

Still, some consider the continuing ascendance of models as inevitable, and recommend that people start figuring out the best way to adapt to the role reversal. Mark E. Nissen, a professor at the Naval Postgraduate School in Monterey, Calif., who has been studying computer-vs.-human procurement, sees a fundamental shift under way, with humans becoming increasingly peripheral in making routine decisions, concentrating instead on designing ever-better models.

“The newest space, and the one that’s most exciting, is where machines are actually in charge, but they have enough awareness to seek out people to help them when they get stuck,” he said — for example, when making “particularly complex, novel, or risky” decisions.

The ideal future, then, may lie in letting computers and people each do what they do best. One way to facilitate this development is to train people to identify the typical cognitive foibles that lead to bad choices. “I’ve now worked with these models for so long,” Mr. Snijders said, “that my instincts have changed along the way.”

As Mr. Bishop of Northern Illinois University puts it, by making smart use of computer models’ advantages, “you’ll become like the crafty A student who doesn’t work that hard but gets mostly right answers, rather than the really hard-working student who gets lots of wrong answers and as a result gets C’s.”

Prosopagnosia, or face blindness, affects a surprisingly large fraction (a few percent) of the population. Those who suffer from it have difficulty distinguishing human faces, except by conscious effort (recalling particular features, or contextual clues). Preliminary evidence suggests that (a) we have a specialized module in our brains for face recognition and (b) there are one or more alleles (gene variants) which disable this function to varying degrees.

How could these alleles survive selection? One would guess that face blindness is an evolutionary handicap, at least to some degree (although perhaps less so in small hunter-gatherer groups, or in theoretical physics ;-). Is there a compensating advantage provided by the mutation?

It's fascinating to consider how many other strange cognitive mutations are present in our population at the percent (or fraction of a percent) level. Memory? Musical ability? Specialized mathematical ability (e.g., visualizing geometrical shapes, a "feel" for the magnitudes of quantities, or lightning calculation)?

I suspect we'll find more and more of these, and their associated alleles, as time goes by. See GNXP.com for more discussion and references.

It just occurred to me that there are likely dozens of readers of this blog who have prosopagnosia. Would anyone care to share their (anonymous) comments on how they adjust to the condition, and when they noticed having it?

NYTimes: Dr. Sellers, a professor of English at Hope College in Holland, Mich., has a disorder called prosopagnosia, or face blindness, and she has had it since birth. “I see faces that are human,” she said, “but they all look more or less the same. It’s like looking at a bunch of golden retrievers: some may seem a little older or smaller or bigger, but essentially they all look alike.”

Face blindness can be a rare result of a stroke or a brain injury, but a study published in the July issue of The American Journal of Medical Genetics Part A is the first report of the prevalence of a congenital or developmental form of the disorder.

The researchers say the phenomenon is much more common than previously believed: they found that 2.47 percent of 689 randomly selected students in Münster, Germany, had the disorder.

Dr. Thomas Grüter, a co-author of the paper, said there were reasons to believe that the condition was equally common in other populations. “First,” he said, “our population was not selected in terms of cognition deficits. And second, a study done by Harvard University with a different diagnostic approach yielded very similar figures.”

Dr. Grüter is himself prosopagnosic. His wife and co-author, Dr. Martina Grüter of the Institute for Human Genetics at the University of Münster, did not realize he was face blind until she had known him more than 20 years. The reason, she says, is he was so good at compensating for his deficits.

“How do you recognize a face?” she asked. “For most people, this is a silly question. You just do. But people who have prosopagnosia can tell you exactly why they recognize a person. Thomas consciously looks for the details that others notice unconsciously.”
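The prevalence estimate quoted above (2.47 percent of 689 students, i.e. roughly 17 cases) carries sampling uncertainty; a rough 95 percent interval from the normal approximation to the binomial:

```python
import math

# 95% confidence interval for the quoted prevalence (normal approximation)
n, p = 689, 0.0247
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{100*lo:.1f}% to {100*hi:.1f}%")  # roughly 1.3% to 3.6%
```

So "a few percent" is about as precise as a single sample of this size allows, which is why the independent Harvard figures mentioned by Grüter matter.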

Monday, July 17, 2006

Herb Allen (of boutique investment bank Allen & Company) has traditionally run a summer retreat for the biggest movers and shakers in media and technology in Sun Valley, Idaho. This year it's reported that financiers (hedge fund and private equity guys) have invaded the party, perhaps signalling a shift in the balance of power.

I hoped for an invite to this event a few years ago, as one of our startup's investors (a hedge fund) was a regular attendee and its founder an Allen & Co. alumnus. Alas, the invite never materialized, but I did have several meetings with Allen & Co. bankers (one of whom turned out to be the son of a famous former Treasury Secretary), both in their luxurious Manhattan offices and in our not-so-exciting Oakland digs. The Manhattan offices are lushly carpeted, wood paneled and soundproofed. A remote control on the oak table let us order coffee and drinks, delivered by a uniformed servant, during the meeting.

NYTimes DealBook: The annual mogul-fest that Allen & Company holds here every year is best known for A-list attendees like Rupert Murdoch of the News Corporation, Richard D. Parsons of Time Warner, Howard Stringer of Sony and Sumner M. Redstone of Viacom.

It is a conference that has taken on an almost supernatural reputation for deal making amid barbecues, discussion panels and whitewater rafting. (The seeds of Walt Disney’s acquisition of ABC/Capital Cities in 1996 were sown here.) But this year, the ones to watch were deep-pocketed people you may have never heard of, from the world of private equity and hedge funds.

“I walked into dinner last night and didn’t recognize three-quarters of the people there,” said the chief executive of one of the world’s largest media companies, who refused to speak on the record for fear of upsetting Herbert A. Allen of Allen & Company, who frowns on attendees talking publicly about the invitation-only conference, which ended yesterday. “It was all these hedge fund and finance guys I had never met before. The balance of power is shifting.”

The guest list at Mr. Allen’s conference, which began in 1982, may be the ultimate barometer of where the center of influence lies in corporate America and on Wall Street. For most of the 1980’s and early 90’s, the power players were Hollywood studio moguls like Barry Diller, then of Paramount, and Michael D. Eisner, formerly of Disney. By the mid-1990’s, cable and telecommunications executives like John C. Malone of Liberty Media and Brian L. Roberts of Comcast became the belles of the ball. In the late 1990’s and early 2000’s, technology and Internet executives like Stephen M. Case, the founder of America Online, and Jerry Yang, co-founder of Yahoo, were drawing crowds at the hotel bar.

Now, flush with billions in cash and the ability to borrow heavily on top of that, the private equity bigwigs and hedge fund managers have become the stars. Call it Predators’ Ball 2.0 — a kind of outdoorsy reprise of Michael Milken’s famous gathering of leveraged-buyout mavens of the 1980’s.

“We used to come here every year to sniff each other,” said the chief executive of another media company, who also did not want his name used. “Now, all these finance people are sniffing us.” With media stocks down virtually across the board, some may smell opportunity.

Friday, July 14, 2006

We derive a fundamental upper bound on the rate at which a device can process information (i.e., the number of logical operations per unit time), arising from quantum mechanics and general relativity. In Planck units a device of volume V can execute no more than the cube root of V operations per unit time. We compare this to the rate of information processing performed by nature in the evolution of physical systems, and find a connection to black hole entropy and the holographic principle.
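To get a sense of scale, here is a rough numeric illustration of the bound (my own back-of-envelope conversion to SI units, not from the paper): in Planck units a device of volume V can execute at most V^(1/3) operations per unit time, which in ordinary units is (V^(1/3)/ℓ_P)/t_P operations per second.

```python
# Numeric illustration of the bound: in Planck units a device of
# volume V can execute at most V**(1/3) operations per unit time.
# Converting to SI: rate <= (V**(1/3) / l_planck) / t_planck ops/sec.

L_PLANCK = 1.616e-35   # Planck length in meters
T_PLANCK = 5.391e-44   # Planck time in seconds

def max_ops_per_second(volume_m3):
    """Upper bound on logical operations per second for a device
    of the given volume (in cubic meters)."""
    side_in_planck_lengths = volume_m3 ** (1.0 / 3.0) / L_PLANCK
    return side_in_planck_lengths / T_PLANCK

# A one-liter (10^-3 m^3) device: roughly 1e77 ops/sec,
# absurdly far beyond any realizable computer.
print(f"{max_ops_per_second(1e-3):.2e}")
```

Note the bound scales with the linear size of the device, not its volume, which is the connection to holography mentioned in the abstract.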

Thursday, July 13, 2006

You may have noticed little Google ads on this blog, placed through their AdSense program. A snippet of JavaScript on this page is executed by your browser, which then loads the appropriate text ads from a Google server. The ads are targeted by keyword, and ad placement is sold to advertisers in a keyword auction.

I put the ads up mainly out of curiosity -- as you know, I am fascinated by all things Google :-)

What I've learned recently is that certain key words are very valuable. If you are reading this blog and see ads related to, e.g., hedge funds, derivatives, FX trading, volatility, etc., you can send me a dollar just by clicking! Please click early and often -- you'll be transferring funds from rapacious luxocrats to a humble physics professor ;-)
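Google's keyword auctions are based on a generalized second-price (GSP) mechanism. The sketch below is my own simplification (ignoring quality scores, reserve prices, and the rest of the real system): advertisers are ranked by bid, and each winner pays roughly the bid of the advertiser ranked just below.

```python
# Minimal sketch of a generalized second-price (GSP) keyword auction,
# the mechanism underlying Google's ad auctions. Simplified: no
# quality scores or reserve prices; prices are per click.
def gsp_auction(bids, num_slots):
    """bids: dict advertiser -> bid per click. Returns a list of
    (advertiser, price_paid_per_click) for each ad slot, best slot
    first. Each winner pays the next-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(num_slots, len(ranked))):
        advertiser, _ = ranked[i]
        # pay the bid of the advertiser ranked just below, if any
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((advertiser, price))
    return results

print(gsp_auction({"A": 2.50, "B": 1.75, "C": 0.90}, num_slots=2))
# [('A', 1.75), ('B', 0.9)]
```

This is why high-value keywords (hedge funds, mortgages, mesothelioma lawyers) fetch dollars per click: the price is set by competing bidders, not by Google.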

Google's $6 billion-a-year advertising business is at risk because it can't be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors...

But the overarching problem is both hard to solve and important: How do you tell if there's an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn't automated his responses, and isn't being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that's infected with a Trojan.
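One obvious first-line defense against the crudest kind of click fraud is to discard rapid repeat clicks from the same source. The toy filter below is my own illustration of that idea; real systems use many more signals (and, as the excerpt above argues, none of them fully solve the problem).

```python
# Toy illustration of a first-line click-fraud filter: discard repeat
# clicks on the same ad from the same IP within a time window.
# Real fraud detection uses many more signals; this is only a sketch.
def filter_clicks(clicks, window_seconds=3600):
    """clicks: list of (timestamp, ip, ad_id) tuples, sorted by
    timestamp. Returns the subset judged billable."""
    last_billed = {}   # (ip, ad_id) -> timestamp of last billed click
    billable = []
    for ts, ip, ad_id in clicks:
        key = (ip, ad_id)
        if key not in last_billed or ts - last_billed[key] >= window_seconds:
            billable.append((ts, ip, ad_id))
            last_billed[key] = ts
    return billable

clicks = [(0, "1.2.3.4", "ad1"), (10, "1.2.3.4", "ad1"),
          (20, "5.6.7.8", "ad1"), (4000, "1.2.3.4", "ad1")]
print(len(filter_clicks(clicks)))  # 3 (the quick repeat is dropped)
```

Of course, a botnet of thousands of infected machines defeats an IP filter trivially, which is exactly the hard version of the problem described above.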

Monday, July 10, 2006

Latest on income inequality (via Economist's View). From 2003 to 2004 the top 1% gained 17%, while the other 99% of the population barely advanced. So the fruits of economic growth went overwhelmingly to a small group. The top 1% of earners now account for about 20% of pre-tax earnings. IIRC, the threshold for the top 1% is about $275k per annum.

Economists Thomas Piketty and Emmanuel Saez have recently made available an updated version of their groundbreaking data series on U.S. income inequality.[1] The data are unique because of the detailed information they provide regarding income gains at the top of the income spectrum, and also because they extend back to 1913. By contrast, widely used Census data on income developments do not capture income trends among the top one percent of households and go back only to the end of World War II.

...The Piketty and Saez data offer the first real snapshot of income trends among those at the pinnacle of the income spectrum in 2004. The data show that income gains between 2003 and 2004 were particularly large for those at the very top of the income spectrum, resulting in a nearly unprecedented one-year increase in income concentration.[3] The Piketty and Saez data show:

1) From 2003 to 2004, the average incomes of the bottom 99 percent of households grew by less than 3 percent, after adjusting for inflation.

2) In contrast, the average incomes of the top one percent of households experienced a jump of almost 17 percent, after adjusting for inflation. (Census data show that real median income fell between 2003 and 2004. Average income is pulled up by gains at the top of the income spectrum; the 3 percent rise among the bottom 99 percent seems to largely reflect gains by households in the top quintile of the income spectrum. In contrast, trends in median income capture the experience of households in the middle of the income spectrum.)

3) The top one percent of households garnered 36 percent of the income gains in 2004.
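The arithmetic connecting growth rates to share of gains is simple. The sketch below uses hypothetical round-number average incomes of my own invention, not the actual Piketty-Saez microdata, so it won't reproduce the quoted 36 percent exactly; it just shows the mechanism.

```python
# Back-of-envelope check (hypothetical round numbers, NOT the actual
# Piketty-Saez data): how group growth rates translate into a share
# of total income gains.
n_bottom, n_top = 99, 1        # households per group, per 100
avg_bottom = 45_000            # assumed average income, bottom 99%
avg_top = 900_000              # assumed average income, top 1%

gain_bottom = n_bottom * avg_bottom * 0.03   # "less than 3 percent" growth
gain_top = n_top * avg_top * 0.17            # "almost 17 percent" growth
top_share_of_gains = gain_top / (gain_top + gain_bottom)
print(f"{top_share_of_gains:.0%}")
# With these invented averages the top 1% captures ~53% of the gains;
# the actual data (different base incomes and within-group detail)
# give 36%, but the qualitative point is the same.
```

The lesson: when one group's income base is large and its growth rate much higher, it can capture a wildly disproportionate share of aggregate growth.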

Gdrive, a Web-based hard drive application from Google, has been rumored for some time. Enterprising sleuths seem to have found some interesting tidbits -- looks like client software that syncs with a distributed storage network. I can't wait for mine! (Although, do you want all your files searchable by Google?)

This is what was on the page just some hours ago (the page isn’t active anymore, but Corsin made a backup):

Platypus (Gdrive)

A filer for the world. But better.

Storing your files in Platypus has a number of advantages over storing your files on either your C: drive or filer.

Backup. If you lose your computer, grab a new one and reinstall Platypus. Your files will be on your new machine in minutes.

Sync. Keep all your machines synchronized, even if they run different operating systems.

VPN-less access. Not at a Google computer? View your files on the web at http://troutboard.com/p.

Collaborate. Create shared spaces to which multiple Googlers can write.

Disconnected access. On the plane? VPN broken? All your files are still accessible.

The page also invites you to "find a new bug, get a free Platypus t-shirt!" and to browse a Platypus share by username or group name. The Gdrive download is available for Windows, Mac and Linux. Within the page source, Google author Justin Rosenstein is listed (Justin was Product Manager for Google Page Creator). A couple of feature listings are also hidden as comments in the source:

Publish. All of the files you store on Platypus are automatically accessible from the (corporate) web.

Share. Other Googlers can mount your Platypus folders and open your files in read-only mode.

Collaborate. ... It also has advantages over storing your files in your filer or WWW directory

Local IO speeds. Open and save as quickly as you could if you were accessing them from your C: drive.

Can we say good-bye to traditional methods of saving files, accessing them, and creating backups?
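How might a sync client like the rumored Gdrive/Platypus decide what to transfer? This is pure speculation about Google's design, but the standard approach is to diff content hashes of local files against a remote manifest, as in this sketch:

```python
# Sketch of the standard hash-diff approach a sync client might use
# (speculative illustration; nothing here is based on actual
# Gdrive/Platypus internals).
import hashlib

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def plan_sync(local_files, remote_manifest):
    """local_files: dict path -> file contents (bytes).
    remote_manifest: dict path -> content hash.
    Returns (uploads, downloads): paths whose content differs or is
    missing remotely, and paths present only on the remote side."""
    uploads = [p for p, data in local_files.items()
               if remote_manifest.get(p) != file_hash(data)]
    downloads = [p for p in remote_manifest if p not in local_files]
    return uploads, downloads

local = {"notes.txt": b"hello", "draft.txt": b"v2"}
remote = {"notes.txt": file_hash(b"hello"), "old.txt": "deadbeef"}
print(plan_sync(local, remote))  # (['draft.txt'], ['old.txt'])
```

Hashing means unchanged files cost nothing to "sync," which is what makes the advertised cross-platform, always-backed-up experience cheap enough to offer at scale.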

Tuesday, July 04, 2006

Mysterious hedge funds have reached the popular consciousness! Details magazine, in an article entitled The New American Class System, profiles the new-money "luxocrats" who have risen to the top of our winner-take-all economy.

Don't miss the slideshow field guide to luxocrats -- the hedge fund guy, the Silicon Valley geek, the old money trust funder, the struggling guy trying to make it in NYC on $500k a year.

The article is, of course, more amusing than realistic, but see here and here for accurate numbers on the US wealth distribution. If you want something even meatier, read this Economist article on the growth of credit derivatives (hedge funds are the largest traders of credit derivatives). The CDS (Credit Default Swap) market has a notional value of $17 trillion now! If I recall correctly, this market barely existed five years ago.

Bonus! Here's a long profile of James Simons, one of the most successful hedge fund managers of all time (via the new blog angryphysics; where's the anger, though? :-) The article has some interesting details about a lawsuit between Renaissance and two former employees, physics guys trained at MIPT (one of Landau's institutes in Moscow) and MIT.

Details: You know this guy from college. He runs a hedge fund—let's call it Colossal Capital Strategies—and if you’re diligent or masochistic enough to do a little research, you’ll find out that he made about $100 million last year. He’s 36 years old and he has billions under management. The Wall Street Journal likes to refer to him as a “high-net-worth individual,” but that wording seems so odd and clunky—the sort of boardroom-pretzel terminology that some lawyer from Enron might use to explain away a phantom transaction.

No. Let’s call him a luxocrat.

A luxocrat is not merely rich, but rich in a way that you couldn’t have imagined back in college. (Otherwise you would have been nicer to the guy.) He’s rich enough to guarantee that every calorie that passes his lips has been fussed over by a master chef, rich enough to share his truffled foie gras with the treasury secretary on a Gulfstream headed for Barbados, rich enough to impulse-buy doodads from the Robb Report with a black American Express Centurion card (whose existence you weren’t even aware of), rich enough to make your proud and dutiful little 401(k) look, in comparison, like the mound of coins that an Appalachian beet farmer might stash away in a pickle jar. A luxocrat is enormo-rich, robber-baron rich, 21st-century rich, swelled up with a wealth of such magnitude that it suggests the American class system would have to undergo a wholesale restructuring in order to accommodate it.

Which is arguably what’s happening at this very moment.

Usually when you read about the widening gap between the rich and the poor, you think of the insane chasm that yawns between, say, Bill Gates and a laborer trying to subsist on a few cents a day on the Bangladeshi floodplain. But there is a new status gap opening up in the American consciousness, one in which a young guy can earn or inherit what used to be a very respectable sum (well into the six figures, for instance) and still feel as if he’s stuck on a treadmill of perpetual proletarian worry and strain. Meanwhile he can’t help but notice that he’s got ultra-wealthy contemporaries (hedge-funders, real-estate moguls, Google geeks, surfers who retired at 30 after hitting the Silicon Valley jackpot) whose daily activities seem to have been shorn of all that worry and strain. The end result is the emergence of a different social pecking order—one in which the new money dwarfs the old, in which the currency-market globalists and Palo Alto venture capitalists conquer the top while even the Hollywood producers and Merrill Lynch desk jockeys get shunted to the bottom. A world in which the old rituals and pathways to success have been vaporized.

Let’s begin with the luxocrat’s home. “There is so much more big money in the hands of younger people,” says Pam Liebman, the president and CEO of the Corcoran Group, a national real-estate powerhouse with almost $12 billion in annual sales. “And their way of thinking is much less traditional.” In Manhattan, for instance, every budding tycoon used to lust for the pedigreed grandeur of an apartment in one of the pre-war buildings on Fifth Avenue or Park Avenue; he could acquire status by living in proximity to old money. Now “old,” in any configuration, has lost its allure. The luxury tyro’s ideal habitat is a brand-new, Lysol-scrubbed steel-and-glass monument to personal service. (Consider the residential tower that Eric Packer, the cold-blooded 28-year-old billionaire in Don DeLillo’s 2003 novel Cosmopolis, likes to call home: “It had the kind of banality that reveals itself over time as being truly brutal. He liked it for this reason.”) Even if the building happens to be beautiful and designed by Richard Meier or Santiago Calatrava, what you see from the outside is less important than what the luxocrat has access to on the inside. The magic word is amenities: He wants a concierge, a pool, a gym, a spa, a playroom for the kids, valet parking, haute cuisine delivered to his door. He wants things taken care of. He wants, in effect, to live in a five-star hotel 24-7. You can’t really blame him. “People are busy, and they want to be pampered,” Liebman says. “They want things that make their lives easier, more pleasant.”

Sunday, July 02, 2006

Physicist turned author and screenwriter Leonard Mlodinow has a nice article in the LA Times on the hit or miss nature of the movie industry. He recapitulates the myth of expertise as it applies to studio executives, whom he compares to dart throwing monkeys (a la fund managers in finance).

Mlodinow wrote a charming memoir about his time as a postdoc at Caltech in the early 1980s. Fresh from Berkeley, having written a PhD dissertation on the large-d expansion (d is the number of dimensions), he was in over his head at Caltech, but found a friend and mentor in the ailing Richard Feynman.

We all understand that genius doesn't guarantee success, but it's seductive to assume that success must come from genius. As a former Hollywood scriptwriter, I understand the comfort in hiring by track record. Yet as a scientist who has taught the mathematics of randomness at Caltech, I also am aware that track records can deceive.

That no one can know whether a film will hit or miss has been an uncomfortable suspicion in Hollywood at least since novelist and screenwriter William Goldman enunciated it in his classic 1983 book "Adventures in the Screen Trade." If Goldman is right and a future film's performance is unpredictable, then there is no way studio executives or producers, despite all their swagger, can have a better track record at choosing projects than an ape throwing darts at a dartboard.

That's a bold statement, but these days it is hardly conjecture: With each passing year the unpredictability of film revenue is supported by more and more academic research.

That's not to say that a jittery homemade horror video could just as easily become a hit as, say, "Exorcist: The Beginning," which cost an estimated $80 million, according to Box Office Mojo, the source for all estimated budget and revenue figures in this story. Well, actually, that is what happened with "The Blair Witch Project" (1999), which cost the filmmakers a mere $60,000 but brought in $140 million—more than three times the business of "Exorcist." (Revenue numbers reflect only domestic receipts.)

What the research shows is that even the most professionally made films are subject to many unpredictable factors that arise during production and marketing, not to mention the inscrutable taste of the audience. It is these unknowns that obliterate the ability to foretell the box-office future.

But if picking films is like randomly tossing darts, why do some people hit the bull's-eye more often than others? For the same reason that in a group of apes tossing darts, some apes will do better than others. The answer has nothing to do with skill. Even random events occur in clusters and streaks.

...If the mathematics is counterintuitive, reality is even worse, because a funny thing happens when a random process such as the coin-flipping experiment is actually carried out: The symmetry of fairness is broken and one of the films becomes the winner. Even in situations like this, in which we know there is no "reason" that the coin flips should favor one film over the other, psychologists have shown that the temptation to concoct imagined reasons to account for skewed data and other patterns is often overwhelming.

...Actors in Hollywood understand best that the industry runs on luck. As Bruce Willis once said, "If you can find out why this film or any other film does any good, I'll give you all the money I have." (For the record, the film to which he referred, 1993's "Striking Distance," didn't do any good.) Willis understands the unpredictability of the film business not simply because he's had box-office highs and lows. He knows that random events fueled his career from the beginning, and his story offers another case in point...
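Mlodinow's dart-throwing-ape argument is easy to check by simulation. In the sketch below (my own illustration; the parameters are arbitrary), 100 "executives" each make ten 50/50 picks purely at random, and we look at the best record. Skill plays no role, yet someone almost always ends up looking like a star.

```python
# Simulating the clustering of pure luck: 100 executives, ten 50/50
# green-light decisions each, no skill anywhere. The best record in
# the group nonetheless looks impressive.
import random

def best_record(n_execs=100, n_picks=10, p_hit=0.5, seed=2006):
    """Return the best hit count among n_execs random pickers."""
    rng = random.Random(seed)
    records = [sum(rng.random() < p_hit for _ in range(n_picks))
               for _ in range(n_execs)]
    return max(records)

print(best_record())  # usually 8-10 hits out of 10, despite zero skill
```

The "hot" executive in this simulation is indistinguishable from a genius; hire him on his track record and his next ten picks will regress straight back to coin flips.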