Wednesday, August 16, 2017

OpenAI has created a Dota 2 bot that plays at the level of human professionals. Humans can look forward to coexistence with increasingly clever AIs in both virtual and real world settings. See also Robots taking our jobs.

OpenAI: Dota 1v1 is a complex game with hidden information. Agents must learn to plan, attack, trick, and deceive their opponents. The correlation between player skill and actions-per-minute is not strong, and in fact, our AI’s actions-per-minute are comparable to that of an average human player.

Success in Dota requires players to develop intuitions about their opponents and plan accordingly. In the above video you can see that our bot has learned — entirely via self-play — to predict where other players will move, to improvise in response to unfamiliar situations, and how to influence the other player’s allied units to help it succeed.

About the game ("Defense of the Ancients").

Wikipedia: Dota 2 is played in matches between two teams of five players, with each team occupying and defending their own separate base on the map. Each of the ten players independently controls a powerful character, known as a "hero", who all have unique abilities and styles of play. During a match, a player and their team collects experience points and items for their heroes in order to fight through the opposing team's heroes and other defenses. A team wins by being the first to destroy a large structure located in the opposing team's base, called the "Ancient".

Related: this is a nice recent interview with Demis Hassabis of DeepMind. He talks a bit about Go innovation resulting from AlphaGo.

Monday, August 14, 2017

These authors extrapolate from existing data to predict the sample sizes needed to identify SNPs which explain a large portion of heritability in a variety of traits. For cognitive ability (see red curves in the figure below), they predict that sample sizes of roughly a million individuals will suffice.

Summary-level statistics from genome-wide association studies are now widely used to estimate heritability and co-heritability of traits using the popular linkage-disequilibrium-score (LD-score) regression method. We develop a likelihood-based approach for analyzing summary-level statistics and external LD information to estimate common variants effect-size distributions, characterized by proportion of underlying susceptibility SNPs and a flexible normal-mixture model for their effects. Analysis of summary-level results across 32 GWAS reveals that while all traits are highly polygenic, there is wide diversity in the degrees of polygenicity. The effect-size distributions for susceptibility SNPs could be adequately modeled by a single normal distribution for traits related to mental health and ability and by a mixture of two normal distributions for all other traits. Among quantitative traits, we predict the sample sizes needed to identify SNPs which explain 80% of GWAS heritability to be between 300K-500K for some of the early growth traits, between 1-2 million for some anthropometric and cholesterol traits and multiple millions for body mass index and some others. The corresponding predictions for disease traits are between 200K-400K for inflammatory bowel diseases, close to one million for a variety of adult onset chronic diseases and between 1-2 million for psychiatric diseases.

This figure shows predicted effect size distributions for a number of quantitative traits. You can see that height and intelligence are somewhat different, but not in any fundamental sense.
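
The "flexible normal-mixture model" for effect sizes described in the abstract is easy to sketch: each susceptibility SNP draws its effect from one of two zero-mean normal components, one narrow and one wide. A minimal simulation (the component weight and SDs below are illustrative values, not the fitted parameters from the paper):

```python
import math
import random

random.seed(0)

def sample_effect(p_small=0.9, sd_small=0.01, sd_large=0.05):
    """Draw one SNP effect size from a two-component normal mixture.
    Parameters are illustrative, not fitted values from the paper."""
    sd = sd_small if random.random() < p_small else sd_large
    return random.gauss(0.0, sd)

# Most susceptibility SNPs draw tiny effects; a minority draw from the
# wider component, producing a heavier-tailed overall distribution.
effects = [sample_effect() for _ in range(10_000)]
rms_effect = math.sqrt(sum(e * e for e in effects) / len(effects))
```

For traits needing two components (the anthropometric and cholesterol traits in the abstract), the minority wide component is what the earliest, best-powered GWAS hits come from; the single-component traits like cognitive ability lack those easy targets, which is part of why their required sample sizes are larger.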

Thursday, August 10, 2017

The Spring 2017 issue of the Stanford Medical School magazine has a special theme: Sex, Gender, and Medicine. I recommend the article excerpted below to journalists covering the Google Manifesto / James Damore firing. After reading it, they can decide for themselves whether his memo is based on established neuroscience or bro-pseudoscience.

Perhaps top Google executives will want to head down the road to Stanford for a refresher course in reality.

... Nirao Shah decided in 1998 to study sex-based differences in the brain ... “I wanted to find and explore neural circuits that regulate specific behaviors,” says Shah, then a newly minted Caltech PhD who was beginning a postdoctoral fellowship at Columbia. So, he zeroed in on sex-associated behavioral differences in mating, parenting and aggression.

“These behaviors are essential for survival and propagation,” says Shah, MD, PhD, now a Stanford professor of psychiatry and behavioral sciences and of neurobiology. “They’re innate rather than learned — at least in animals — so the circuitry involved ought to be developmentally hard-wired into the brain. These circuits should differ depending on which sex you’re looking at.”

His plan was to learn what he could about the activity of genes tied to behaviors that differ between the sexes, then use that knowledge to help identify the neuronal circuits — clusters of nerve cells in close communication with one another — underlying those behaviors.

At the time, this was not a universally popular idea. The neuroscience community had largely considered any observed sex-associated differences in cognition and behavior in humans to be due to the effects of cultural influences. Animal researchers, for their part, seldom even bothered to use female rodents in their experiments, figuring that the cyclical variations in their reproductive hormones would introduce confounding variability into the search for fundamental neurological insights.

But over the past 15 years or so, there’s been a sea change as new technologies have generated a growing pile of evidence that there are inherent differences in how men’s and women’s brains are wired and how they work.

... There was too much data pointing to the biological basis of sex-based cognitive differences to ignore, Halpern says. For one thing, the animal-research findings resonated with sex-based differences ascribed to people. These findings continue to accrue. In a study of 34 rhesus monkeys, for example, males strongly preferred toys with wheels over plush toys, whereas females found plush toys likable. It would be tough to argue that the monkeys’ parents bought them sex-typed toys or that simian society encourages its male offspring to play more with trucks. A much more recent study established that boys and girls 9 to 17 months old — an age when children show few if any signs of recognizing either their own or other children’s sex — nonetheless show marked differences in their preference for stereotypically male versus stereotypically female toys.

Halpern and others have cataloged plenty of human behavioral differences. “These findings have all been replicated,” she says.

... “You see sex differences in spatial-visualization ability in 2- and 3-month-old infants,” Halpern says. Infant girls respond more readily to faces and begin talking earlier. Boys react earlier in infancy to experimentally induced perceptual discrepancies in their visual environment. In adulthood, women remain more oriented to faces, men to things.

All these measured differences are averages derived from pooling widely varying individual results. While statistically significant, the differences tend not to be gigantic. They are most noticeable at the extremes of a bell curve, rather than in the middle, where most people cluster. ...

The recent SMPY paper below describes a group of mathematically gifted (top 1% ability) individuals who have been followed for 40 years. This is precisely the pool from which one would hope to draw STEM and technological leadership talent. There are 1037 men and 613 women in the study.

The figures show significant gender differences in life and career preferences, which affect choices and outcomes even after ability is controlled for. According to the results, SMPY men are more concerned with money, prestige, success, and creating or inventing something with impact. SMPY women prefer time and work flexibility, want to give back to the community, and are less comfortable advocating unpopular ideas. Some of these asymmetries are at the 0.5 SD level or greater. Here are three survey items with a ~0.4 SD or larger asymmetry:

# Society should invest in my ideas because they are more important than those of other people.

# Discomforting others does not deter me from stating the facts.

# Receiving criticism from others does not inhibit me from expressing my thoughts.

I would guess that Silicon Valley entrepreneurs and leading technologists are typically about +2 SD on each of these items! One can directly estimate M/F ratios from these parameters ...

For example, if a typical male SV entrepreneur / tech leader is roughly +2 SD on these traits whereas a comparable female is +2.5 SD (relative to the female distribution), the fraction of the population above the threshold would be 3 to 4 times larger for males. This doesn't mean that the females who are above +2.5 SD (in the female population) are ill-suited to the role (they may be as good as the men), just that there are fewer of them in the general population. I was shocked to see that even top Google leadership didn't understand this point, which Damore tried to make in his memo.

A 6ft3 Asian-American guard (Jeremy Lin) might be just as good as other guards in the NBA, but the fraction of Asian-American males who are 6ft3 is smaller than for other groups, like African-Americans. Even if there were no discrimination against Asian players, you'd expect to see fewer (relative to base population) in the NBA due to the average height difference.
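
The tail arithmetic behind both examples can be checked directly. A short sketch using normal upper-tail fractions, with the +2 / +2.5 SD thresholds from the entrepreneur example above:

```python
import math

def upper_tail(z):
    """Fraction of a standard normal population lying above z SDs."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# A cutoff at +2 SD of the male distribution corresponds to +2.5 SD
# of the female distribution if the means differ by 0.5 SD.
male_frac = upper_tail(2.0)    # about 2.3% of men clear the bar
female_frac = upper_tail(2.5)  # about 0.6% of women do
print(round(male_frac / female_frac, 1))  # -> 3.7, i.e. between 3:1 and 4:1
```

Running the same calculation on height distributions gives the Jeremy Lin effect: a modest shift in group means produces a multi-fold difference in representation at a fixed tail cutoff.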

The image below is from the actual memo. Does Damore sound like a sexist brogrammer Neanderthal?

OKRs = Objectives and Key Results. Damore is pointing out that pro-diversity objectives may incentivize managers to discriminate by gender or race in hiring and promotion.

According to Margot Cleveland (attorney who teaches labor law at Notre Dame):

The Federalist: ... Damore wrote “Google has created several discriminatory practices.” This reads of a classic case of opposition to an unlawful employment practice. (Under the case law, the practice need not actually be illegal if the employee reasonably believed it discriminatory.)

This passage may well be Google’s undoing. Damore can present a prima facie case of illegal retaliation: he engaged in protected activity by opposing several discriminatory practices, and was fired from his job. The close temporal nexus creates an inference that Google fired him because of his opposition to illegal discrimination.

... Google will counter that it fired him not because of his opposition but because of the gender stereotypes he included in the memo.

But of course Google Brain was simultaneously using these "stereotypes" (i.e., correlations) as its core revenue driver:

Professor Cleveland concludes:

... Once before a jury, Google will be hard-pressed to justify Damore’s firing because the jury will be force-fed the actual words Damore wrote, not the press’ hysterical gloss. In this regard, Google was in a no-win situation: Once the Neanderthal narrative formed, Google had no real choice but to fire Damore—which doesn’t make it right or, as Google is likely to find out soon, legal. In the meantime, the rest of the country will be treated to a nice civics refresher course and a deep-dive into federal employment and labor law.

Not to mention a deep-dive into the science of statistical / distributional group differences!

Here is Damore's brief summary of his memo (which contains many citations to original scientific research), and the conclusion:

Google’s political bias has equated the freedom from offense with psychological safety, but shaming into silence is the antithesis of psychological safety.
● This silencing has created an ideological echo chamber where some ideas are too sacred to be honestly discussed.
● The lack of discussion fosters the most extreme and authoritarian elements of this ideology.
○ Extreme: all disparities in representation are due to oppression
○ Authoritarian: we should discriminate to correct for this oppression
● Differences in distributions of traits between men and women may in part explain why we don't have 50% representation of women in tech and leadership.
● Discrimination to reach equal representation is unfair, divisive, and bad for business.

I hope it’s clear that I’m not saying that diversity is bad, that Google or society is 100% fair, that we shouldn’t try to correct for existing biases, or that minorities have the same experience of those in the majority. My larger point is that we have an intolerance for ideas and evidence that don’t fit a certain ideology. I’m also not saying that we should restrict people to certain gender roles; I’m advocating for quite the opposite: treat people as individuals, not as just another member of their group (tribalism).

This actual excerpt is of course very different from the heavily biased (mendacious) characterizations of the memo in the (lying) media. Perhaps that should make you wonder about the reliability of mainstream accounts concerning this matter.

Damore correctly anticipated his own demise! CEO Sundar Pichai's company-wide message seems to ban almost all scientific discussion of statistical or distributional group differences, on threat of termination:

This has been a very difficult time. I wanted to provide an update on the memo that was circulated over this past week.

First, let me say that we strongly support the right of Googlers to express themselves, and much of what was in that memo is fair to debate, regardless of whether a vast majority of Googlers disagree with it. However, portions of the memo violate our Code of Conduct and cross the line by advancing harmful gender stereotypes in our workplace. Our job is to build great products for users that make a difference in their lives. To suggest a group of our colleagues have traits that make them less biologically suited to that work is offensive and not OK. It is contrary to our basic values and our Code of Conduct, which expects “each Googler to do their utmost to create a workplace culture that is free of harassment, intimidation, bias and unlawful discrimination.”

The memo has clearly impacted our co-workers, some of whom are hurting and feel judged based on their gender. Our co-workers shouldn’t have to worry that each time they open their mouths to speak in a meeting, they have to prove that they are not like the memo states, being “agreeable” rather than “assertive,” showing a “lower stress tolerance,” or being “neurotic.”

At the same time, there are co-workers who are questioning whether they can safely express their views in the workplace (especially those with a minority viewpoint). They too feel under threat, and that is also not OK. People must feel free to express dissent. So to be clear again, many points raised in the memo—such as the portions criticizing Google’s trainings, questioning the role of ideology in the workplace, and debating whether programs for women and underserved groups are sufficiently open to all—are important topics. The author had a right to express their views on those topics—we encourage an environment in which people can do this and it remains our policy to not take action against anyone for prompting these discussions. ...

Larry Summers was fired from the Harvard presidency (at least in part) for pointing out, correctly it seems, that males exhibit higher variance in performance on cognitive tests (more very low- and very high-scoring men than women per capita). His detractors justified the termination by citing his highly public and symbolic role as leader of the institution. In contrast, Damore was simply an engineer (with a background in computational biology) expressing his opinion on basic scientific questions still under active investigation by researchers all over the world. His firing has to be regarded as scary, authoritarian policing of thought.
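
The variance claim works through the same kind of tail arithmetic as a mean shift, but with equal means and unequal SDs. A sketch using an illustrative male/female SD ratio of 1.1 (a made-up number for exposition, not a measured one):

```python
import math

def upper_tail(z):
    """Fraction of a standard normal population above z SDs."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Equal means; male SD = 1.1, female SD = 1.0 (illustrative values).
# A fixed score cutoff k (in female-SD units) sits at k / 1.1 male SDs.
for k in (2.0, 3.0, 4.0):
    ratio = upper_tail(k / 1.1) / upper_tail(k)
    print(k, round(ratio, 1))
# The per-capita male/female ratio grows as the cutoff moves outward,
# and by symmetry the same asymmetry appears in the low tail.
```

This is why a variance difference that is nearly invisible near the middle of the distribution produces large per-capita ratios at the extremes, which is where Summers' remarks were aimed.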

Mr. Damore, who worked on infrastructure for Google’s search product, said he believed that the company’s actions were illegal and that he would “likely be pursuing legal action.”

“I have a legal right to express my concerns about the terms and conditions of my working environment and to bring up potentially illegal behavior, which is what my document does,” Mr. Damore said.

Before being fired, Mr. Damore said, he had submitted a complaint to the National Labor Relations Board claiming that Google’s upper management was “misrepresenting and shaming me in order to silence my complaints.” He added that it was “illegal to retaliate” against an N.L.R.B. charge.

Monday, July 31, 2017

... Estimates suggest that an extra robot per 1000 workers reduces the employment to population ratio by 0.18-0.34 percentage points and wages by 0.25-0.5%. This effect is distinct from the impacts of imports, the decline of routine jobs, offshoring, other types of IT capital, or the total capital stock.

If each robot does the work of a few workers, direct substitution of robot labor for human work explains the fraction-of-a-percent (negative) effect on employment and compensation, with a smaller second-order (positive) effect from the comparative advantage of humans in complementary jobs. This is not the optimistic scenario in which buggy-whip makers displaced by the automobile easily find good new jobs in the expanded economy. We can expect many more robots (and virtual AI robots) per 1000 workers in the near future.

Related talk at HKUST by Harvard labor economist Richard Freeman: Work and Income in the Age of Robots and AI. This time it's different?

Here's Richard in 2011 when we were working on a project at Alibaba headquarters :-)

Rapid growth in number of Chinese S&E articles, reaching parity with US in 2013, and well ahead of Japan and India.

Fraction of high impact (top 1% most cited) papers highest for US research (~1.9%). China and Japan comparable at ~0.8% as of 2012. China's fraction roughly doubled between 2001 and 2012.

As of today total number of high impact papers is still probably ~2:1 in favor of US. But I think most people would be surprised to see that China has caught up with (surpassed?) Japan in this quality metric.

US and China now each account for ~30% of global high tech value-added manufacturing. Value-added means net of input components -- going beyond simple assembly.

Sunday, July 30, 2017

This is a Caltech TEDx talk from 2013, in which Doris Tsao discusses her work on the neuroscience of human face recognition. Recently I blogged about her breakthrough in identifying the face recognition algorithm used by monkey (and presumably human) brains. The algorithm seems similar to those used in machine face recognition: individual neurons perform feature detection just as in neural nets. This is not surprising from a purely information-theoretic perspective, if we just think about the space of facial variation and the optimal encoding. But it is amazing to be able to demonstrate it by monitoring specific neurons in a monkey brain.

An earlier research claim, that certain neurons are sensitive only to specific individual faces (which she recapitulates at 8:50 in the video, recorded four years ago), seems not to be true. I always found it implausible.

On her faculty web page Tsao talks about her decision to attend Caltech as an undergraduate:

One day, my father went on a trip to California and took a tour of Caltech with a friend. He came back and told me about a monastery for science, located under the mountains amidst flowers and orange trees, where all the students looked very skinny and super smart, like little monkeys. I was intrigued. I went to a presentation about Caltech by a visiting admissions officer, who showed slides of students taking tests under olive trees, swimming in the Pacific, huddled in a dorm room working on a problem set... I decided: this is where I want to go to college! I dreamed every day about being accepted to Caltech. After I got my acceptance letter, I began to worry that I would fall behind in the first year, since I had heard about how hard the course load is. So I went to the library and started reading the Feynman Lectures. This was another world…where one could see beneath the surface of things, ask why, why, why, why? And the results of one’s mental deliberations actually could be tested by experiments and reveal completely unexpected yet real phenomena, like magnetism as a consequence of the invariance of the speed of light.

Researchers have demonstrated they can efficiently improve the DNA of human embryos.

The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, MIT Technology Review has learned.

The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.

Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.

Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.

... Some critics say germline experiments could open the floodgates to a brave new world of “designer babies” engineered with genetic enhancements—a prospect bitterly opposed by a range of religious organizations, civil society groups, and biotech companies.

The U.S. intelligence community last year called CRISPR a potential "weapon of mass destruction.”

... Mitalipov and his colleagues are said to have convincingly shown that it is possible to avoid both mosaicism and “off-target” effects, as the CRISPR errors are known.

A person familiar with the research says “many tens” of human IVF embryos were created for the experiment using the donated sperm of men carrying inherited disease mutations.

Work on cognitive enhancement will probably be done first in monkeys, proving Planet of the Apes prophetic within the next decade or so :-)

His team’s move into embryo editing coincides with a report by the U.S. National Academy of Sciences in February that was widely seen as providing a green light for lab research on germline modification.

The report also offered qualified support for the use of CRISPR for making gene-edited babies, but only if it were deployed for the elimination of serious diseases.

The advisory committee drew a red line at genetic enhancements—like higher intelligence. “Genome editing to enhance traits or abilities beyond ordinary health raises concerns about whether the benefits can outweigh the risks, and about fairness if available only to some people,” said Alta Charo, co-chair of the NAS’s study committee and professor of law and bioethics at the University of Wisconsin–Madison.

In the U.S., any effort to turn an edited IVF embryo into a baby has been blocked by Congress, which added language to the Department of Health and Human Services funding bill forbidding it from approving clinical trials of the concept.

Despite such barriers, the creation of a gene-edited person could be attempted at any moment, including by IVF clinics operating facilities in countries where there are no such legal restrictions.

Tuesday, July 25, 2017

Prior to the modern era of genomics, it was claimed (without good evidence) that divergences between isolated human populations were almost entirely due to founder effects or genetic drift, and not due to differential selection caused by disparate local conditions. There is strong evidence now against this claim. Many of the differences between modern populations arose over relatively short timescales (e.g., ~10ky), due to natural selection.

Most of our understanding of the genetic basis of human adaptation is biased toward loci of large phenotypic effect. Genome-wide association studies (GWAS) now enable the study of genetic adaptation in highly polygenic phenotypes. Here we test for polygenic adaptation among 187 worldwide human populations using polygenic scores constructed from GWAS of 34 complex traits. By comparing these polygenic scores to a null distribution under genetic drift, we identify strong signals of selection for a suite of anthropometric traits including height, infant head circumference (IHC), hip circumference (HIP) and waist-to-hip ratio (WHR), as well as type 2 diabetes (T2D). In addition to the known north-south gradient of polygenic height scores within Europe, we find that natural selection has contributed to a gradient of decreasing polygenic height scores from West to East across Eurasia, and that this gradient is consistent with selection on height in ancient populations who have contributed ancestry broadly across Eurasia. We find that the signal of selection on HIP can largely be explained as a correlated response to selection on height. However, our signals in IHC and WC/WHR cannot, suggesting a response to selection along multiple axes of body shape variation. Our observation that IHC, WC, and WHR polygenic scores follow a strong latitudinal cline in Western Eurasia support the role of natural selection in establishing Bergmann's Rule in humans, and are consistent with thermoregulatory adaptation in response to latitudinal temperature variation.

From the paper:

... To explore whether patterns observed in the polygenic scores were caused by natural selection, we tested whether the observed distribution of polygenic scores across populations could plausibly have been generated under a neutral model of genetic drift ...

...

Discussion

The study of polygenic adaptation provides new avenues for the study of human evolution, and promises a synthesis of physical anthropology and human genetics. Here, we provide the first population genetic evidence for selected divergence in height polygenic scores among Asian populations. We also provide evidence of selected divergence in IHC and WHR polygenic scores within Europe and to a lesser extent Asia, and show that both hip and waist circumference have likely been influenced by correlated selection on height and waist-hip ratio. Finally, signals of divergence among Asian populations can be explained in terms of differential relatedness to Europeans, which suggests that much of the divergence we detect predates the major demographic events in the history of modern Eurasian populations, and represents differential inheritance from ancient populations which had already diverged at the time of admixture. ...
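
The polygenic scores compared across populations above are, at bottom, weighted allele counts: for each individual, sum the dosage of the effect allele at each SNP times its GWAS effect size. A toy sketch (made-up effect sizes and genotypes, not data from the paper):

```python
def polygenic_score(dosages, effects):
    """Sum of allele dosages (0, 1, or 2 copies per SNP) weighted by
    GWAS effect sizes. Dosages and effects are parallel lists over SNPs."""
    assert len(dosages) == len(effects)
    return sum(d * b for d, b in zip(dosages, effects))

# Three toy SNPs with made-up effect sizes (in trait SD units).
effects = [0.02, -0.01, 0.03]
person = [2, 0, 1]  # copies of the effect allele at each SNP
print(round(polygenic_score(person, effects), 2))  # -> 0.07
```

The population-level test then asks whether the spread of mean scores across populations exceeds what a neutral model of genetic drift would produce; an excess is the signal of selection.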

Sunday, July 23, 2017

This paper suggests that some genetic variants which increase risk of coronary artery disease (CAD) have been maintained in the population because of their positive effects in other areas of fitness, such as reproduction.

Traditional genome-wide scans for positive selection have mainly uncovered selective sweeps associated with monogenic traits. While selection on quantitative traits is much more common, very few signals have been detected because of their polygenic nature. We searched for positive selection signals underlying coronary artery disease (CAD) in worldwide populations, using novel approaches to quantify relationships between polygenic selection signals and CAD genetic risk. We identified new candidate adaptive loci that appear to have been directly modified by disease pressures given their significant associations with CAD genetic risk. These candidates were all uniquely and consistently associated with many different male and female reproductive traits suggesting selection may have also targeted these because of their direct effects on fitness. We found that CAD loci are significantly enriched for lifetime reproductive success relative to the rest of the human genome, with evidence that the relationship between CAD and lifetime reproductive success is antagonistic. This supports the presence of antagonistic-pleiotropic tradeoffs on CAD loci and provides a novel explanation for the maintenance and high prevalence of CAD in modern humans. Lastly, we found that positive selection more often targeted CAD gene regulatory variants using HapMap3 lymphoblastoid cell lines, which further highlights the unique biological significance of candidate adaptive loci underlying CAD. Our study provides a novel approach for detecting selection on polygenic traits and evidence that modern human genomes have evolved in response to CAD-induced selection pressures and other early-life traits sharing pleiotropic links with CAD.

Author summary

How genetic variation contributes to disease is complex, especially for those such as coronary artery disease (CAD) that develop over the lifetime of individuals. One of the fundamental questions about CAD––whose progression begins in young adults with arterial plaque accumulation leading to life-threatening outcomes later in life––is why natural selection has not removed or reduced this costly disease. It is the leading cause of death worldwide and has been present in human populations for thousands of years, implying considerable pressures that natural selection should have operated on. Our study provides new evidence that genes underlying CAD have recently been modified by natural selection and that these same genes uniquely and extensively contribute to human reproduction, which suggests that natural selection may have maintained genetic variation contributing to CAD because of its beneficial effects on fitness. This study provides novel evidence that CAD has been maintained in modern humans as a by-product of the fitness advantages those genes provide early in human lifecycles.

From the paper:

... research in quantitative genetics has shown that rapid adaptation can often occur on complex traits that are highly polygenic [29, 30]. Under the ‘infinitesimal (polygenic) model’, such traits are likely to respond quickly to changing selective pressures through smaller allele frequency shifts in many polymorphisms already present in the population [13, 31].

For a subset of CAD loci, we found significant quantitative associations between disease risk and selection signals and for each of these the direction of this association was often consistent between populations ...

In the comparison across populations, directionality of significant selection-risk associations tended to be most consistent for populations within the same ancestral group (Fig 1B). For example, in PHACTR1, negative associations were present within all European populations (CEU, TSI, FIN), and in NT5C2 strong positive associations were present in all East Asian populations (CHB, CHD, JPT). Other negative associations that were consistent across all populations within an ancestry group included five genes in Europeans (COG5, ABO, ANKS1A, KSR2, FLT1) and four genes (LDLR, PEMT, KIAA1462, PDGFD) in East Asians. ...

... By comparing positive selection variation with genetic risk variation at known loci underlying CAD, we were able to identify and prioritize genes that have been the most likely targets of selection related to this disease across diverse human populations. That selection signals and the direction of selection-risk relationships varied among some populations suggests that CAD-driven selection has operated differently in these populations and thus that these populations might respond differently to similar heart disease prevention strategies. The pleiotropic effects that genes associated with CAD have on traits associated with reproduction that are expressed early in life strongly suggests some of the evolutionary reasons for the existence of human vulnerability to CAD.

Bonus: ~300 variants control about 20% of total variance in genetic CAD risk. This means polygenic risk predictors will eventually have a strong correlation (e.g., at least ~0.4 or 0.5) with actual risk. Good enough for identification of outliers.
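
The correlation estimate follows from the standard identity: a linear predictor capturing a fraction R² of variance correlates with the outcome as √R². A quick check, taking the ~21% figure reported in the abstract quoted in this post:

```python
import math

# Fraction of CAD heritability captured by the 304 independent variants.
variance_explained = 0.212
correlation = math.sqrt(variance_explained)
print(round(correlation, 2))  # -> 0.46
```

Strictly, this is the correlation with the genetic (heritable) component of risk rather than with total observed risk, but it is the relevant quantity for flagging genetic outliers.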

Genome-wide association studies (GWAS) in coronary artery disease (CAD) had identified 66 loci at 'genome-wide significance' (P < 5 × 10−8) at the time of this analysis, but a much larger number of putative loci at a false discovery rate (FDR) of 5% (refs. 1,2,3,4). Here we leverage an interim release of UK Biobank (UKBB) data to evaluate the validity of the FDR approach. We tested a CAD phenotype inclusive of angina (SOFT; ncases = 10,801) as well as a stricter definition without angina (HARD; ncases = 6,482) and selected cases with the former phenotype to conduct a meta-analysis using the two most recent CAD GWAS2, 3. This approach identified 13 new loci at genome-wide significance, 12 of which were on our previous list of loci meeting the 5% FDR threshold2, thus providing strong support that the remaining loci identified by FDR represent genuine signals. The 304 independent variants associated at 5% FDR in this study explain 21.2% of CAD heritability and identify 243 loci that implicate pathways in blood vessel morphogenesis as well as lipid metabolism, nitric oxide signaling and inflammation.

This is a recent review article (2016):

Genetics of Coronary Artery Disease ...Overall, recent studies have led to a broader understanding of the genetic architecture of CAD and demonstrate that it largely derives from the cumulative effect of multiple common risk alleles individually of small effect size rather than rare variants with large effects on CAD risk. Despite this success, there has been limited progress in understanding the function of the novel loci; the majority of which are in noncoding regions of the genome.

Tuesday, July 18, 2017

From what I have read, I doubt that a hybrid team of human + AlphaGo would perform much better than AlphaGo itself. Perhaps worse, depending on the epistemic sophistication and self-awareness of the human. In hybrid chess it seems that the Elo rating of the human partner is not the main factor, but rather an understanding of the chess program, its strengths, and its limitations.

... Some interpret this unique partnership to be a harbinger of human-machine interaction. The superior decision maker is neither man nor machine, but a team of both. As McAfee and Brynjolfsson put it, “people still have a great deal to offer the game of chess at its highest levels once they’re allowed to race with machines, instead of purely against them.”

However, this is not where we will leave this story. For one, the gap between the best freestyle teams and the best software is closing, if not closed. As Cowen notes, the natural evolution of the human-machine relationship is from a machine that doesn’t add much, to a machine that benefits from human help, to a machine that occasionally needs a tiny bit of guidance, to a machine that we should leave alone.

But more importantly, let me suppose we are going to hold a freestyle chess tournament involving the people reading this article. Do you believe you could improve your chance of winning by overruling your 3300-rated chess program? For nearly all of us, we are best off knowing our limits and leaving the chess pieces alone.

... We interfere too often, ... This has been documented across areas from incorrect psychiatric diagnoses to freestyle chess players messing up their previously strong position, against the advice of their supercomputer teammate.

For example, one study by Berkeley Dietvorst and friends asked experimental subjects to predict the success of MBA students based on data such as undergraduate scores, measures of interview quality, and work experience. They first had the opportunity to do some practice questions. They were also provided with an algorithm designed to predict MBA success and its practice answers—generally far superior to the human subjects’.

In their prediction task, the subjects had the option of using the algorithm, which they had already seen was better than them in predicting performance. But they generally didn’t use it, costing them the money they would have received for accuracy. The authors of the paper suggested that when experimental subjects saw the practice answers from the algorithm, they focussed on its apparently stupid mistakes—far more than they focussed on their own more regular mistakes.

Although somewhat under-explored, this study is typical of when people are given the results of an algorithm or statistical method (see here, here, here, and here). The algorithm tends to improve their performance, yet the algorithm by itself has greater accuracy. This suggests the most accurate method is often to fire the human and rely on the algorithm alone. ...

Saturday, July 15, 2017

The largest component of genetic variation is an N-S cline (phenotypic N-S gradient discussed here). The variance accounted for by the second (E-W) PC vector is much smaller, and the Han population is fairly homogeneous in genetic terms: ...while we revealed East-to-West structure among the Han Chinese, the signal is relatively weak and very little structure is discernible beyond the second PC (p.24).
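For readers unfamiliar with the method: the PCs here come from an eigendecomposition (equivalently, SVD) of the standardized genotype matrix. A minimal sketch, with random data standing in for real genotypes:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp = 200, 500

# Toy genotype matrix (0/1/2 minor-allele counts); the actual study uses
# thousands of individuals and millions of variants
G = rng.binomial(2, 0.3, size=(n_ind, n_snp)).astype(float)

# Standardize each SNP, then take the top principal components
G -= G.mean(axis=0)
G /= G.std(axis=0) + 1e-12
U, S, Vt = np.linalg.svd(G, full_matrices=False)
pcs = U[:, :2] * S[:2]               # PC1, PC2 coordinates per individual

# With real structured data, PC1 (the N-S cline) captures far more
# variance than PC2 (the weaker E-W signal)
var_frac = S**2 / (S**2).sum()
print(f"variance fraction: PC1 {var_frac[0]:.3f}, PC2 {var_frac[1]:.3f}")
```

With unstructured random genotypes the PC variance fractions are nearly equal; the interesting finding in the paper is how dominant PC1 is relative to everything else.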

Neandertal ancestry does not vary significantly across provinces, consistent with admixture prior to the dispersal of modern Han Chinese.

As are most non-European populations around the globe, the Han Chinese are relatively understudied in population and medical genetics studies. From low-coverage whole-genome sequencing of 11,670 Han Chinese women we present a catalog of 25,057,223 variants, including 548,401 novel variants that are seen at least 10 times in our dataset. Individuals from our study come from 19 out of 22 provinces across China, allowing us to study population structure, genetic ancestry, and local adaptation in Han Chinese. We identify previously unrecognized population structure along the East-West axis of China and report unique signals of admixture across geographical space, such as European influences among the Northwestern provinces of China. Finally, we identified a number of highly differentiated loci, indicative of local adaptation in the Han Chinese. In particular, we detected extreme differentiation among the Han Chinese at MTHFR, ADH7, and FADS loci, suggesting that these loci may not be specifically selected in Tibetan and Inuit populations as previously suggested. On the other hand, we find that Neandertal ancestry does not vary significantly across the provinces, consistent with admixture prior to the dispersal of modern Han Chinese. Furthermore, contrary to a previous report, Neandertal ancestry does not explain a significant amount of heritability in depression. Our findings provide the largest genetic data set so far made available for Han Chinese and provide insights into the history and population structure of the world's largest ethnic group.

The Loveless (free now on Amazon Prime) was the first film directed by Kathryn Bigelow (Point Break, Zero Dark Thirty) and also the first film role for a young Willem Dafoe. Dafoe has more leading man star power in this role than in most of his subsequent work.

Loveless was shot in 22 days, when Bigelow was fresh out of Columbia film school. The movie could be characterized as a biker art film with some camp elements, though its overall mood is dark and nihilistic. The video above is a fan mash-up of Loveless and Bruce Springsteen's Born to Run. It works well on its own terms, although Born to Run is more romantic than nihilistic, at least musically. The lyrics by themselves, however, fit the film rather well.

Born To Run

Bruce Springsteen

In the day we sweat it out on the streets of a runaway American dream
At night we ride through the mansions of glory in suicide machines
Sprung from cages out on highway nine,
Chrome wheeled, fuel injected, and steppin' out over the line
H-Oh, Baby this town rips the bones from your back
It's a death trap, it's a suicide rap
We gotta get out while we're young
`Cause tramps like us, baby we were born to run

Yes, girl we were

Wendy let me in I wanna be your friend
I want to guard your dreams and visions
Just wrap your legs 'round these velvet rims
And strap your hands 'cross my engines
Together we could break this trap
We'll run till we drop, baby we'll never go back
H-Oh, Will you walk with me out on the wire
`Cause baby I'm just a scared and lonely rider
But I gotta know how it feels
I want to know if love is wild
Babe I want to know if love is real

Oh, can you show me

Beyond the Palace hemi-powered drones scream down the boulevard
Girls comb their hair in rearview mirrors
And the boys try to look so hard
The amusement park rises bold and stark
Kids are huddled on the beach in a mist
I wanna die with you Wendy on the street tonight
In an everlasting kiss

One, two, three, four

The highway's jammed with broken heroes on a last chance power drive
Everybody's out on the run tonight
But there's no place left to hide
Together Wendy we can live with the sadness
I'll love you with all the madness in my soul
H-Oh, Someday girl I don't know when
We're gonna get to that place
Where we really wanna go
And we'll walk in the sun
But till then tramps like us
Baby we were born to run
Oh honey, tramps like us
Baby we were born to run
Come on with me, tramps like us
Baby we were born to run

Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
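The RN module itself is simple enough to sketch in a few lines: a shared function g applied to every pair of object representations, summed, then passed through a readout f. Below is a numpy stand-in with random weights where the paper uses learned MLPs for g and f:

```python
import numpy as np

rng = np.random.default_rng(0)
d_obj, d_hidden = 8, 16

# Random weights standing in for the learned MLPs g_theta and f_phi
Wg = rng.normal(size=(2 * d_obj, d_hidden))
Wf = rng.normal(size=(d_hidden, 1))

def relation_network(objects):
    """RN(O) = f( sum_{i,j} g(o_i, o_j) ) over all ordered object pairs."""
    pair_sum = np.zeros(d_hidden)
    for o_i in objects:
        for o_j in objects:
            pair = np.concatenate([o_i, o_j])      # one object pair
            pair_sum += np.maximum(pair @ Wg, 0)   # g: linear + ReLU
    return pair_sum @ Wf                           # f: final readout

objects = rng.normal(size=(5, d_obj))   # e.g., 5 CNN feature vectors
out = relation_network(objects)
```

Because the sum runs over all ordered pairs, the output is invariant to the ordering of the input objects -- the structural prior that lets RNs reason about sets of entities and their relations.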

Tuesday, July 11, 2017

Humans are going to have to learn to "trust the AI" without understanding why it is right. I often make an analogous point to my kids -- "At your age, if you and Dad disagree, chances are that Dad is right" :-) Of course, I always try to explain the logic behind my thinking, but in the case of some complex machine optimizations (e.g., Go strategy), humans may not be able to understand even the detailed explanations.

In some areas of complex systems -- neuroscience, genomics, molecular dynamics -- we also see machine prediction that is superior to other methods, but difficult even for scientists to understand. When hundreds or thousands of genes combine to control many dozens of molecular pathways, what kind of explanation can one offer for why a particular setting of the controls (DNA pattern) works better than another?

There was never any chance that the functioning of a human brain, the most complex known object in the universe, could be captured in verbal explication of the familiar kind (non-mathematical, non-algorithmic). The researchers that built AlphaGo would be at a loss to explain exactly what is going on inside its neural net...

Sunday, July 09, 2017

Romero is 40 years old! He is a former World Champion and Olympic silver medalist for Cuba in freestyle wrestling. Watch the video -- it's great! :-)

He lost a close championship fight yesterday in the UFC at 185lbs. The guy he lost to, Robert Whittaker, is a young talent and a class act. It's been said that Romero relies too much on athleticism and doesn't fight smart (this goes back to his wrestling days). He should have attacked more ruthlessly after hurting Whittaker's knee with a kick early in the fight.

Saturday, July 08, 2017

I'm old enough to have been aware of Donald Trump since before the publication of Art of the Deal in 1987. In these decades, during which he was one of the best known celebrities in America, he was largely regarded as a progressive New Yorker, someone who could easily pass as a rich Democrat. Indeed, he was friendly with the Clintons -- Ivanka and Chelsea are good friends. There were no accusations of racism, and he enjoyed an 11-year run (2004-2015) on The Apprentice. No one would have doubted for a second that he was an American patriot, the least likely stooge for Russia or the USSR. I say all this to remind people that the image of Trump promulgated by the media and his other political enemies since he decided to run for President is entirely a creation of the last year or two.

If you consider yourself a smart person, a rational person, an evidence-driven person, you should reconsider whether 30+ years of reporting on Trump is more likely to be accurate (during this time he was a public figure, major celebrity, and tabloid fodder: subject to intense scrutiny), or 1-2 years of heavily motivated fake news.

In the article below, Politico considers the very real possibility that Trump could have run, and won, as a Democrat. If you're a HATE HATE HATE NEVER NEVER TRUMP person, think about that for a while.

Politico: ... Could Trump have done to the Democrats in 2016 what he did to the Republicans? Why not? There, too, he would have challenged an overconfident, message-challenged establishment candidate (Hillary Clinton instead of Jeb Bush) and with an even smaller number of other competitors to dispatch. One could easily see him doing as well or better than Bernie Sanders—surprising Clinton in the Iowa caucuses, winning the New Hampshire primaries, and on and on. More to the point, many of Trump’s views—skepticism on trade, sympathetic to Planned Parenthood, opposition to the Iraq war, a focus on blue-collar workers in Rust Belt America—seemed to gel as well, if not better, with blue-state America than red. Think the Democrats wouldn’t tolerate misogynist rhetoric and boorish behavior from their leaders? Well, then you’ve forgotten about Woodrow Wilson and John F. Kennedy and LBJ and the last President Clinton.

There are, as with every what-if scenario, some flaws. Democrats would have deeply resented Trump’s ‘birther’ questioning of Barack Obama’s origins, and would have been highly skeptical of the former reality TV star’s political bona fides even if he hadn’t made a sharp turn to the right as he explored a presidential bid in the run up to the 2012 election. His comments on women and minorities would have exposed him to withering scrutiny among the left’s army of advocacy groups. Liberal donors would likely have banded together to strangle his candidacy in its cradle—if they weren’t laughing him off. But Republican elites tried both of these strategies in 2015, as well, and it manifestly didn’t work. What’s more, Trump did once hold a passel of progressive stances—and he had friendships all over the political map. As Bloomberg’s Josh Green notes, in his Apprentice days, Trump was even wildly popular among minorities. It’s not entirely crazy to imagine him outflanking a coronation-minded Hillary Clinton on the left and blitzing a weak Democratic field like General Sherman marching through Georgia. ...

Application of the experimental design of genome-wide association studies (GWASs) is now 10 years old (young), and here we review the remarkable range of discoveries it has facilitated in population and complex-trait genetics, the biology of diseases, and translation toward new therapeutics. We predict the likely discoveries in the next 10 years, when GWASs will be based on millions of samples with array data imputed to a large fully sequenced reference panel and on hundreds of thousands of samples with whole-genome sequencing data.

Background
Five years ago, a number of us reviewed (and gave our opinion on) the first 5 years of discoveries that came from the experimental design of the GWAS.1 That review sought to set the record straight on the discoveries made by GWASs because at that time, there was still a level of misunderstanding and distrust about the purpose of and discoveries made by GWASs. There is now much more acceptance of the experimental design because the empirical results have been robust and overwhelming, as reviewed here.

... GWAS results have now been reported for hundreds of complex traits across a wide range of domains, including common diseases, quantitative traits that are risk factors for disease, brain imaging phenotypes, genomic measures such as gene expression and DNA methylation, and social and behavioral traits such as subjective well-being and educational attainment. About 10,000 strong associations have been reported between genetic variants and one or more complex traits,10 where “strong” is defined as statistically significant at the genome-wide p value threshold of 5 × 10−8, excluding other genome-wide-significant SNPs in LD (r2 > 0.5) with the strongest association (Figure 2). GWAS associations have proven highly replicable, both within and between populations,11, 12 under the assumption of adequate sample sizes.

One unambiguous conclusion from GWASs is that for almost any complex trait that has been studied, many loci contribute to standing genetic variation. In other words, for most traits and diseases studied, the mutational target in the genome appears large so that polymorphisms in many genes contribute to genetic variation in the population. This means that, on average, the proportion of variance explained at the individual variants is small. Conversely, as predicted previously,1, 13 this observation implies that larger experimental sample sizes will lead to new discoveries, and that is exactly what has occurred over the last decade. ...
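The "strong association" bookkeeping described above amounts to thresholding at p < 5 × 10−8 and then greedily discarding any SNP in LD (r2 > 0.5) with a stronger hit. A minimal sketch with made-up p-values and a toy LD matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n_snp = 1000
pvals = 10 ** rng.uniform(-12, 0, n_snp)     # toy GWAS p-values
r2 = rng.uniform(0, 1, (n_snp, n_snp))
r2 = (r2 + r2.T) / 2                         # symmetric toy LD matrix
np.fill_diagonal(r2, 1.0)

def clump(pvals, r2, p_thresh=5e-8, r2_thresh=0.5):
    """Greedy LD clumping: keep the strongest hit, drop SNPs in LD with it."""
    hits = [i for i in np.argsort(pvals) if pvals[i] < p_thresh]
    kept = []
    for i in hits:
        if all(r2[i, j] <= r2_thresh for j in kept):
            kept.append(i)
    return kept

independent = clump(pvals, r2)
print(f"{len(independent)} independent genome-wide-significant SNPs")
```

Real pipelines (e.g., PLINK's clumping) additionally restrict the LD comparison to SNPs within a physical window, but the logic is the same.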

Tuesday, July 04, 2017

This is the best technical summary of the Los Alamos component of the Manhattan Project that I know of. It includes, for example, detail about the hydrodynamical issues that had to be overcome for successful implosion. That work drew heavily on von Neumann's expertise in shock waves, explosives, numerical solution of hydrodynamic partial differential equations, etc. A visit by G.I. Taylor alerted the designers to the possibility of instabilities in the shock front (Rayleigh–Taylor instability). Concern over these instabilities led to the solid-core design known as the Christy Gadget.

... Unlike earlier histories of Los Alamos, this book treats in detail the research and development that led to the implosion and gun weapons; the research in nuclear physics, chemistry, and metallurgy that enabled scientists to design these weapons; and the conception of the thermonuclear bomb, the "Super." Although fascinating in its own right, this story has particular interest because of its impact on subsequent developments. Although many books examine the implications of Los Alamos for the development of a nuclear weapons culture, this is the first to study its role in the rise of the methodology of "big science" as carried out in large national laboratories.

... The principal reason that the technical history of Los Alamos has not yet been written is that even today, after half a century, much of the original documentation remains classified. With cooperation from the Los Alamos Laboratory, we received authorization to examine all the relevant documentation. The book then underwent a classification review that resulted in the removal from this edition of all textual material judged sensitive by the Department of Energy and all references to classified documents. (For this reason, a number of quotations appear without attribution.) However, the authorities removed little information. Thus, except for a small number of technical facts, this account represents the complete story. In every instance the deleted information was strictly technical; in no way has the Los Alamos Laboratory or the Department of Energy attempted to shape our interpretations. This is not, therefore, a "company history"; throughout the research and writing, we enjoyed intellectual freedom.

... Scientific research was an essential component of the new approach: the first atomic bombs could not have been built by engineers alone, for in no sense was developing these bombs an ordinary engineering task. Many gaps existed in the scientific knowledge needed to complete the bombs. Initially, no one knew whether an atomic weapon could be made. Furthermore, the necessary technology extended well beyond the "state of the art." Solving the technical problems required a heavy investment in basic research by top-level scientists trained to explore the unknown - scientists like Hans Bethe, Richard Feynman, Rudolf Peierls, Edward Teller, John von Neumann, Luis Alvarez, and George Kistiakowsky. To penetrate the scientific phenomena required a deep understanding of nuclear physics, chemistry, explosives, and hydrodynamics. Both theoreticians and experimentalists had to push their scientific tools far beyond their usual capabilities. For example, methods had to be developed to carry out numerical hydrodynamics calculations on a scale never before attempted, and experimentalists had to expand the sensitivity of their detectors into qualitatively new regimes.

... American physics continued to prosper throughout the 1920s and 1930s, despite the Depression. Advances in quantum theory stimulated interest in the microscopic structure of matter, and in 1923 Robert Millikan of Caltech was awarded the Nobel Prize for his work on electrons. In the 1930s and 1940s, Oppenheimer taught quantum theory to large numbers of students at the Berkeley campus of the University of California as well as at Caltech. Also at Berkeley in the 1930s and 1940s, the entrepreneurial Lawrence gathered chemists, engineers, and physicists together in a laboratory where he built a series of ever-larger cyclotrons and led numerous projects in nuclear chemistry, nuclear physics, and medicine. By bringing together specialists from different fields to work cooperatively on large common projects, Lawrence helped to create a distinctly American collaborative research endeavor - centered on teams, as in the industrial research laboratories, but oriented toward basic studies without immediate application. This approach flourished during World War II.

Sunday, July 02, 2017

The excerpt below is from a recent comment thread, arguing that the US Navy should de-emphasize carrier groups in favor of subs and smaller surface ships. Technological trends such as rapid advancement in machine learning (ML) and sensors will render carriers increasingly vulnerable to missile attack in the coming decades.

1. US carriers are very vulnerable to *conventional* Russian and PRC missile (cruise, ASBM) weapons.

2. Within ~10y (i.e., well within projected service life of US carriers) I expect missile systems of the type currently only possessed by Russia and PRC to be available to lesser powers. I expect that a road-mobile ASBM weapon with good sensor/ML capability, range ~1500km, will be available for ~$10M. Given a rough (~10km accuracy) fix on a carrier, this missile will be able to arrive in that area and then use ML/sensors for final targeting. There is no easy defense against such weapons. Cruise missiles which pose a similar threat will also be exported. This will force the US to be much more conservative in the use of its carriers, not just against Russia and PRC, but against smaller countries as well.

Given 1. and 2. my recommendation is to decrease the number of US carriers and divert the funds into smaller missile ships, subs, drones, etc. Technological trends simply do not favor carriers as a weapon platform.

Basic missile technology is old, well-understood, and already inexpensive (compared, e.g., to the cost of fighter jets). ML/sensor capability is evolving rapidly and will be enormously better in 10y. Imagine a Mach 10 robot kamikaze with no problem locating a carrier from 10km distance (on a clear day there are no countermeasures against visual targeting using the equivalent of a cheap iPhone camera -- i.e., robot pilot looks down at the ocean to find carrier), and capable of maneuver. Despite BS claims over the years (and over $100B spent by the US), anti-missile technology is not effective, particularly against fast-moving ballistic missiles.

One only has to localize the carrier to within a few × 10 km for initial launch, letting the smart final targeting do the rest. The initial targeting location can be obtained through many methods, including aircraft/drone probes, targeting overflight by another kind of missile, LEO micro-satellites, or even (surreptitious) cooperation from Russia/PRC (or a commercial vendor!) via their satellite network.

... the Navy plans to modernize its carrier program by launching a new wave of even larger and more expensive ships, starting with the USS Gerald Ford, which cost $15 billion to build — by far the most expensive vessel in naval history. This is a mistake: Because of changes in warfare and technology, in any future military entanglement with a foe like China, current carriers and their air wings will be almost useless and the next generation may fare even worse.

... most weapons platforms are effective for only a limited time, an interval that gets shorter as history progresses. But until the past few years, the carrier had defied the odds, continuing to demonstrate America’s military might around the world without any challenge from our enemies. That period of grace may have ended as China and Russia are introducing new weapons — called “carrier killer” missiles — that cost $10 million to $20 million each and can target the U.S.’s multibillion-dollar carriers up to 900 miles from shore.

... The average cost of each of the 10 Nimitz class carriers was around $5 billion. When the cost of new electrical systems is factored in, the USS Ford cost three times as much and took five years to build. With the deficit projected to rise considerably over the next decade, defense spending is unlikely to receive a significant bump. Funding these carriers will crowd out spending on other military priorities, like the replacement of the Ohio class ballistic missile submarine, perhaps the most survivable and important leg of our strategic deterrent triad. There simply isn’t room to fund an aircraft carrier that costs the equivalent of the entire Navy shipbuilding budget.

... The Navy’s decision on the carriers today will affect U.S. naval power for decades. These carriers are expected to be combat effective in 2065 — over 150 years since the idea of an aircraft carrier was first conceived. ...

Thursday, June 29, 2017

This is a beautiful result. IIUC, these neuroscientists use the terminology "face axis" for what machine learning would call variation along an eigenface or feature vector.

Scientific American: ...using a combination of brain imaging and single-neuron recording in macaques, biologist Doris Tsao and her colleagues at Caltech have finally cracked the neural code for face recognition. The researchers found the firing rate of each face cell corresponds to separate facial features along an axis. Like a set of dials, the cells are fine-tuned to bits of information, which they can then channel together in different combinations to create an image of every possible face. “This was mind-blowing,” Tsao says. “The values of each dial are so predictable that we can re-create the face that a monkey sees, by simply tracking the electrical activity of its face cells.”

I never believed the "Jennifer Aniston neuron" results, which seemed implausible from a neural architecture perspective. I thought the encoding had to be far more complex and modular. Apparently that's the case. The single neuron claim has been widely propagated (for over a decade!) but now seems to be yet another result that fails to replicate after invading the meme space of credulous minds.

... neuroscientist Rodrigo Quian Quiroga found that pictures of actress Jennifer Aniston elicited a response in a single neuron. And pictures of Halle Berry, members of The Beatles or characters from The Simpsons activated separate neurons. The prevailing theory among researchers was that each neuron in the face patches was sensitive to a few particular people, says Quiroga, who is now at the University of Leicester in the U.K. and not involved with the work. But Tsao’s recent study suggests scientists may have been mistaken. “She has shown that neurons in face patches don’t encode particular people at all, they just encode certain features,” he says. “That completely changes our understanding of how we recognize faces.”

... To decipher how individual cells helped recognize faces, Tsao and her postdoc Steven Le Chang drew dots around a set of faces and calculated variations across 50 different characteristics. They then used this information to create 2,000 different images of faces that varied in shape and appearance, including roundness of the face, distance between the eyes, skin tone and texture. Next the researchers showed these images to monkeys while recording the electrical activity from individual neurons in three separate face patches.

All that mattered for each neuron was a single-feature axis. Even when viewing different faces, a neuron that was sensitive to hairline width, for example, would respond to variations in that feature. But if the faces had the same hairline and different-size noses, the hairline neuron would stay silent, Chang says. The findings explained a long-disputed issue in the previously held theory of why individual neurons seemed to recognize completely different people.

Moreover, the neurons in different face patches processed complementary information. Cells in one face patch—the anterior medial patch—processed information about the appearance of faces such as distances between facial features like the eyes or hairline. Cells in other patches—the middle lateral and middle fundus areas—handled information about shapes such as the contours of the eyes or lips. Like workers in a factory, the various face patches did distinct jobs, cooperating, communicating and building on one another to provide a complete picture of facial identity.

Once Chang and Tsao knew how the division of labor occurred among the “factory workers,” they could predict the neurons’ responses to a completely new face. The two developed a model for which feature axes were encoded by various neurons. Then they showed monkeys a new photo of a human face. Using their model of how various neurons would respond, the researchers were able to re-create the face that a monkey was viewing. “The re-creations were stunningly accurate,” Tsao says. In fact, they were nearly indistinguishable from the actual photos shown to the monkeys.

Highlights
•Facial images can be linearly reconstructed using responses of ∼200 face cells
•Face cells display flat tuning along dimensions orthogonal to the axis being coded
•The axis model is more efficient, robust, and flexible than the exemplar model
•Face patches ML/MF and AM carry complementary information about faces

Summary
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.

200 cells is interesting because (IIRC) standard deep learning face recognition packages right now use a 128-dimensional feature space. These packages perform roughly as well as humans (or perhaps a bit better?).
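The axis model in the paper is linear, so decoding reduces to least squares. Here is a minimal sketch (not the authors' code; the dimensions and noise level are made up for illustration): each cell's firing rate is the projection of a 50-dimensional face vector onto that cell's preferred axis, and with ~200 cells the face can be recovered by solving the overdetermined linear system.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells, n_dims = 200, 50                 # ~200 face cells, 50-d face space
axes = rng.normal(size=(n_cells, n_dims))  # each cell's preferred axis

face = rng.normal(size=n_dims)             # a new face, as a point in face space

# Axis model: each cell's rate is the projection of the face onto its axis,
# plus a little response noise.
rates = axes @ face + 0.01 * rng.normal(size=n_cells)

# Linear decoding: recover the face vector from the population response by
# least squares (possible because n_cells > n_dims).
face_hat, *_ = np.linalg.lstsq(axes, rates, rcond=None)

print("reconstruction error:", np.linalg.norm(face - face_hat))
```

Note the flip side the paper emphasizes: many different faces project identically onto a single cell's axis, so no individual cell "recognizes" a face; only the population does.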

Monday, June 26, 2017

The Chinese government is not the only entity that has access to millions of faces + identifying information. So do Google, Facebook, Instagram, and anyone who has scraped information from similar social networks (e.g., US security services, hackers, etc.).

In light of such ML capabilities it seems clear that anti-ship ballistic missiles can easily target a carrier during the final maneuver phase of descent, using optical or infrared sensors (let alone radar).

Simple estimates: a ~10 min flight time means ~10 km uncertainty in the final position of a carrier (assume a speed of 20-30 mph) initially located by satellite. A missile course correction at a distance of ~10 km from the target allows ~10 s of maneuver (assuming Mach 5-10 velocity) and requires only a modest angular correction. At this distance a 100 m target has angular size ~0.01 radian, so it should be readily detectable in an optical image. (Carriers are visible to the naked eye from space!) Final targeting at a distance of ~1 km can use a combination of optical / IR / radar sensors that makes countermeasures difficult.

So hitting a moving aircraft carrier does not seem especially challenging with modern technology.
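The back-of-envelope numbers above are easy to check. A quick sketch (all inputs are the rough figures from the estimate, not real missile parameters):

```python
# Position uncertainty: carrier at ~25 mph during a ~10 minute missile flight.
carrier_speed = 25 * 0.447            # mph -> m/s, ~11 m/s
flight_time = 10 * 60                 # seconds
drift = carrier_speed * flight_time   # ~6.7 km, so "~10 km" is the right order

# Time available for terminal maneuver, starting the course correction
# ~10 km out at Mach 5 (~1.7 km/s; rough sea-level sound speed used).
mach5 = 5 * 340.0                     # m/s
maneuver_time = 10_000 / mach5        # ~6 s at Mach 5, less at Mach 10

# Angular size of a ~100 m target seen from 10 km.
angular_size = 100 / 10_000           # ~0.01 rad

print(f"drift ~{drift / 1000:.1f} km, maneuver ~{maneuver_time:.0f} s, "
      f"angular size ~{angular_size:.3f} rad")
```

The drift distance sets the search box for the terminal sensor, and 0.01 radian is enormous by imaging standards, which is why optical detection at that range is uncontroversial.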

Friday, June 23, 2017

2016 was the 10th anniversary of The Prestige, one of the most clever films ever made. This video reveals aspects of the movie that will be new even to fans who have watched it several times. Highly recommended!

Wikipedia: The Prestige is a 2006 British-American mystery thriller film directed by Christopher Nolan, from a screenplay adapted by Nolan and his brother Jonathan from Christopher Priest's 1995 novel of the same name. Its story follows Robert Angier and Alfred Borden, rival stage magicians in London at the end of the 19th century. Obsessed with creating the best stage illusion, they engage in competitive one-upmanship with tragic results. The film stars Hugh Jackman as Robert Angier, Christian Bale as Alfred Borden, and David Bowie as Nikola Tesla. It also stars Michael Caine, Scarlett Johansson, Piper Perabo, Andy Serkis, and Rebecca Hall.

See also Feynman and Magic -- Feynman was extremely good at reverse-engineering magic tricks.

In Destined for War, the eminent Harvard scholar Graham Allison explains why Thucydides’s Trap is the best lens for understanding U.S.-China relations in the twenty-first century. Through uncanny historical parallels and war scenarios, he shows how close we are to the unthinkable. Yet, stressing that war is not inevitable, Allison also reveals how clashing powers have kept the peace in the past — and what painful steps the United States and China must take to avoid disaster today.

At 1h05min Allison answers the following question.

Is there any reason for optimism under President Trump in foreign affairs?

[65:43] ... Harvard and Cambridge ... ninety-five percent of whom voted [against Trump] ... so we hardly know any people in, quote, real America, and we don't have any perception or understanding or feeling for this. But I come from North Carolina and my wife comes from Ohio ... in large parts of the country they have extremely different views than the New York Times or The Washington Post or, you know, the elite media ...

[67:11] I think part of what Trump represents is a rejection of the establishment, especially the political class and the elites, which are places like us, places like Harvard and others, who lots of people in our society don't think have done a great job with the opportunities that our country has had.

[67:33] ... Trump's willingness to not be orthodox, to not be captured by the conventional wisdom, to explore possibilities ...

[68:31] ... he's not beholden to the Jewish community, he's not beholden to the Republican Party, he's not become beholden to the Democratic Party ...