Here are my insights into why this paper is fundamentally relevant for anyone working with genetic sequence data in an evolutionary context. . .

Scientific frontiers appear when we integrate analyses from the micro and the macro scale. Examples of this include how biology is informed by chemistry, chemistry is informed by physics, and classical physics is informed by quantum physics. This trend holds for EvoDevo: we are rapidly arriving at an understanding of evolution from increasingly fundamental first principles. To be specific, we are beginning to understand how mutations in protein sequence and structure — at the biophysical scale — have consequences for the function and phenotype of cells, individuals, and species — at the macro scale [see Dean and Thornton, Nature Reviews 2007].

In order to reveal the evolutionary trajectory of a particular protein structure, we need to examine ancient forms of that protein. However, the simple acquisition of ancestral molecules can be a major obstacle when we examine evolutionary histories over millions of years because the ancestral forms are typically extinct. As a computational alternative, we can time travel via statistical inference [see Thornton, Nature Reviews 2004].

I study computational and phylogenetic methods that make it possible for us to probabilistically infer phylogenies and reconstruct ancestral gene sequences. One of the most important inventions in the history of phylogenetic methods is the use of Markov models to approximate the evolution of gene sequences. Markov models appear all over information science: they model natural language, radio transmissions, and white noise, and they power speech recognition, your email’s spam filter, and global weather prediction. Even Google’s PageRank algorithm is, at its core, a Markov model of a random web surfer.

The core idea of the Markov model concerns characters transitioning (i.e. mutating) over time. Suppose we have some character — like a single nucleotide or an amino acid — that is currently in state X, where X is one of the letters in our nucleotide or amino acid alphabet. Over a time span of length t, X will mutate to state Y with a probability determined by a matrix of relative substitution rates. This model obeys the Markov property: the probability of Y later mutating to state Z over time t2 is independent of the fact that the character was previously in state X.
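
To make this concrete, here is a minimal discrete-time sketch of such a model. The one-step matrix below is a made-up, Jukes–Cantor-style nucleotide matrix (equal rates among all bases), not an empirically estimated one; the last two lines check the factorization property that the Markov assumption implies.

```python
STATES = "ACGT"

# One-step transition probabilities: a base stays put with probability 0.97
# and mutates to each of the other three bases with probability 0.01.
P = [[0.97 if i == j else 0.01 for j in range(4)] for i in range(4)]

def mat_mult(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transition_probs(steps):
    """P raised to the power `steps`: transition probabilities after
    `steps` units of time."""
    result = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for _ in range(steps):
        result = mat_mult(result, P)
    return result

# The Markov property in action: evolving for 5 steps is the same as
# evolving for 2 steps and then 3 more, regardless of the starting state.
lhs = transition_probs(5)
rhs = mat_mult(transition_probs(2), transition_probs(3))
```

Real substitution models (like WAG) work in continuous time with a rate matrix, but the memoryless behavior is exactly the same.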

If we calculate transition probabilities for all branches in a phylogenetic tree, we can thus calculate the likelihood of that tree and infer the maximum a posteriori ancestral protein sequence. In this discussion, I will avoid articulating all the mathematical minutiae of how we calculate probabilities for trees and ancestral sequences; you can learn more by reading this excellent book edited by Oliver Gascuel. Instead, I want to focus on the substitution matrix: it is an approximation of molecular evolution and it makes critical assumptions about evolutionary forces.
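
For the curious, here is a toy sketch of the pruning (dynamic-programming) idea behind tree likelihood calculations. It is my own illustration, not code from the book: it assumes a hypothetical two-state alphabet and a simple symmetric transition function to stay short.

```python
import math

# Toy pruning: compute P(observed tip states | tree) for one character.
# A node is either a leaf ("state", s) or an internal node
# ("internal", [(child, branch_length), ...]).

STATES = (0, 1)  # a hypothetical two-state alphabet keeps the example small

def p_transition(x, y, t):
    """Probability that state x becomes state y over a branch of length t
    (two-state symmetric model)."""
    same = 0.5 + 0.5 * math.exp(-2.0 * t)
    return same if x == y else 1.0 - same

def conditional_likelihoods(node):
    """Return [P(tip data below node | node is in state s) for s in STATES]."""
    kind, payload = node
    if kind == "state":
        return [1.0 if s == payload else 0.0 for s in STATES]
    like = [1.0, 1.0]
    for child, t in payload:
        child_like = conditional_likelihoods(child)
        for s in STATES:
            like[s] *= sum(p_transition(s, y, t) * child_like[y]
                           for y in STATES)
    return like

# Two leaves in states 0 and 1, joined at the root by branches of length 0.1.
tree = ("internal", [(("state", 0), 0.1), (("state", 1), 0.1)])
root_like = conditional_likelihoods(tree)

# Tree likelihood under a uniform (0.5, 0.5) prior on the root state.
likelihood = 0.5 * root_like[0] + 0.5 * root_like[1]
```

The per-state values at the root are also the ingredients for ancestral reconstruction: the state maximizing the (prior-weighted) root likelihood is the maximum a posteriori ancestral state.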

In its simplest form (a 4×4 nucleotide matrix or a 20×20 amino acid matrix), a substitution matrix assumes that all residues with the same state sit in a homogeneous biophysical environment, and are thus exposed to the same mutational forces. For example, the WAG matrix assumes that all glutamic acids (E) can be treated equally, and thus the relative substitution rate for any glutamic acid mutating into aspartic acid (D) is 6.174, while the relative rate of any glutamic acid mutating to cysteine (C) is 0.021. The assumption of structural homogeneity is often invalid; for example, as is illustrated in this week’s review by Worth et al., residues buried in the solvent-inaccessible core of a protein tend to be more conserved than residues located on the protein’s exterior. This insight implies that we need a secondary substitution matrix expressing relative mutation rates for residues located in protein cores. As an example, if E stands for an exterior glutamic acid and E’ stands for a core glutamic acid, we should expect the relative substitution rate for E-to-D to be larger than the relative rate for E’-to-D’.
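
A minimal sketch of that two-matrix idea, where a residue’s substitution rate is looked up by its structural environment: the exposed E-to-D and E-to-C rates are the WAG values quoted above, while the “buried” numbers are purely hypothetical placeholders for a core-specific table.

```python
# Relative substitution rates keyed by structural environment. The
# "exposed" E->D and E->C values are the WAG rates quoted above; the
# "buried" values are invented placeholders for a core-specific table.
RATES = {
    "exposed": {("E", "D"): 6.174, ("E", "C"): 0.021},
    "buried":  {("E", "D"): 1.200, ("E", "C"): 0.004},  # hypothetical
}

def relative_rate(source, target, environment):
    """Look up the environment-specific relative substitution rate."""
    return RATES[environment][(source, target)]
```

Under this scheme, the E-to-D rate for an exposed residue exceeds the E’-to-D’ rate for a buried one, matching the conservation pattern Worth et al. describe.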

The article by Worth et al. reviews a large historical body of results concerning protein structure conservation. It further describes how environment-specific substitution tables (ESSTs) let us explicitly incorporate information about structural conservation into our Markov model of evolution. The insights from this paper are fundamental for anyone working with genetic sequence data in an evolutionary context.

Patrick began by talking about the history of genetics: statistical genetics and Mendelian genetics fragmented into many subfields over the past seventy years (pictured below).

Each subfield asks its own distinct question about genes (pictured below). For example, population genetics explores how fitness is determined by the transmission of genes, whereas molecular genetics explores how genes affect phenotype. Ultimately, an interdisciplinary synthesis provides a holistic understanding of the interplay between genes, gene transmission, gene effects, phenotypes, and fitness.

In the spirit of this “functional synthesis”, Jesse Bloom explained how the H1N1 flu virus gained resistance to Oseltamivir (a.k.a. Tamiflu). Oseltamivir binds the neuraminidase active site, which inhibits H1N1 viral release from an infected cell. Tamiflu resistance is suspected to have begun in 2006; as of 2009, almost all H1N1 strains are Tamiflu resistant. Resistance is conferred by the H274Y mutation. By itself, H274Y reduces the fitness of H1N1, so it was believed that the mutation would not spread through the flu population. Why, then, did resistance to Tamiflu spread? Jesse speculates that, in general, some neutral mutations can increase protein stability, creating a “stability buffer” that enables otherwise fitness-reducing mutations. For the case of H1N1 Tamiflu resistance, his hypothesis appears to be correct: Jesse revealed that the R194G mutation (a neutral mutation) compensates for the H274Y mutation, thus allowing H274Y to spread through the H1N1 population.

I saw too many talks today to comprehensively discuss them all. Here are a few that stand out:

Matt Hahn discussed the correlation (or lack thereof) between protein sequence similarity and protein function similarity. Although we have increasingly complex models of sequence evolution (using Markov models, for example), we know almost nothing about how protein function evolves. Matt raised three questions: (1) How fast does protein function evolve? (2) Can we correlate the rate of evolution for protein function with the rate of evolution for protein sequences? (3) Can we find evidence for differential rates of protein function evolution in different types of protein families? Given the short time constraint (15 minutes!), Matt did not conclusively answer any of these questions — but that’s not necessarily a critique of his lecture. His hypothesis was that protein function should evolve more slowly in orthologs and faster in paralogs. To test this hypothesis, Matt gathered protein function annotations from the Gene Ontology Consortium and plotted these data against rates of evolution for protein sequences. Surprisingly, Matt observed that (1) orthologs appear to evolve faster than paralogs, and (2) there is no relation between the rates of sequence evolution and functional evolution. Both results are difficult to explain. Obviously, Matt’s results depend on the accuracy of the Gene Ontology annotations, which are unlikely to be entirely accurate. I think Matt is asking a set of questions that are critically important, but I don’t think accurate answers will be found until we develop a different method for classifying and measuring protein function.

Paul Hohenlohe discussed RAD sequencing with the Illumina Genome Analyzer II to measure genetic variance (as Fst) in stickleback populations. (RAD sequencing is introduced by Selker et al., Genetics 2007.) Sticklebacks are ancestrally a saltwater fish with bony armor plates; when they colonize freshwater habitats, the colonizing populations lose some — or all — of their armor. Paul applied RAD sequencing to Alaskan stickleback populations and showed that population structure varies between the saltwater and freshwater populations. Paul’s analysis provides a compelling example of RAD sequencing as a high-throughput method for population genomics.
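
For readers unfamiliar with Fst, here is a back-of-the-envelope version of Wright’s Fst at a single biallelic site, computed from expected heterozygosities. The allele frequencies are made up, and this is a simplification of the estimators actually used for RAD data.

```python
def fst(p1, p2):
    """Wright's Fst = (Ht - Hs) / Ht for two equally sized populations with
    alternate-allele frequencies p1 and p2 at one biallelic site."""
    h1 = 2 * p1 * (1 - p1)        # expected heterozygosity, population 1
    h2 = 2 * p2 * (1 - p2)        # expected heterozygosity, population 2
    hs = (h1 + h2) / 2            # mean within-population heterozygosity
    p_bar = (p1 + p2) / 2
    ht = 2 * p_bar * (1 - p_bar)  # heterozygosity of the pooled population
    return 0.0 if ht == 0 else (ht - hs) / ht
```

Identical frequencies in the two populations give Fst = 0 (no differentiation), while a fixed difference gives Fst = 1.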

Joe Felsenstein talked about “phylogenetic geometric morphometrics.” Given homologous extant morphologies with a set of identified (x,y) coordinates, Joe first showed geometric techniques to rotate and translate the extant geometries such that they are “aligned” in a fashion roughly analogous to sequence alignment. Next, given a phylogeny relating the extant morphologies, Joe discussed a model using Brownian motion to infer ancestral forms — i.e., an ancestral set of Cartesian coordinates. I’m not a developmental biologist, so I can’t offer much critique of this method. I’m curious how he plans to deal with missing data — i.e., extant morphologies with (x,y) coordinates that don’t appear in all descendants.
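
The “alignment” step can be sketched as an ordinary 2-D Procrustes superposition: translate each landmark configuration to a common centroid, then apply the closed-form optimal rotation (I omit the scaling step). This is my own illustration with made-up landmarks, not Joe’s actual method.

```python
import math

def centered(shape):
    """Translate (x, y) landmarks so their centroid sits at the origin."""
    n = len(shape)
    cx = sum(x for x, _ in shape) / n
    cy = sum(y for _, y in shape) / n
    return [(x - cx, y - cy) for x, y in shape]

def align(shape, reference):
    """Center both shapes, then rotate `shape` onto `reference` using the
    closed-form optimal 2-D rotation angle."""
    a = centered(shape)
    b = centered(reference)
    num = sum(xa * yb - ya * xb for (xa, ya), (xb, yb) in zip(a, b))
    den = sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(a, b))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in a]

# A square and the same square rotated by 90 degrees align exactly.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rotated = [(-y, x) for x, y in square]
recovered = align(rotated, square)  # matches centered(square)
```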

Finally, James Foster talked about “evolutionary computation.” Specifically, any process which demonstrates replication, variation, and selection will necessarily demonstrate evolution. James’ point is that evolution can take place on digital artifacts as well as biological artifacts. He gave several examples of genetic algorithms applied to problems as far-reaching as ML phylogenetic estimation (Zwickl 2006), electronic circuit construction (Koza 1985), and jet engine design (Rechenberg 1966). I totally agree with James’ point that evolutionary computation is useful for solving a wide gamut of problems, but I’m afraid it fell on many deaf ears at this biologically-focused conference.
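
As a minimal illustration that replication, variation, and selection suffice for evolution in silico, here is a toy genetic algorithm on the classic OneMax problem (maximize the number of 1-bits in a bit string). This is my example, not one James gave.

```python
import random

def evolve(length=20, pop_size=30, generations=100, seed=0):
    """Evolve bit strings toward all-ones via selection, replication,
    and point mutation."""
    rng = random.Random(seed)
    fitness = lambda genome: sum(genome)  # count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection
        offspring = []
        for parent in survivors:                 # replication
            child = parent[:]
            child[rng.randrange(length)] ^= 1    # variation
            offspring.append(child)
        pop = survivors + offspring
    return max(pop, key=fitness)

best = evolve()
```

Swap in a different genome encoding and fitness function and the same loop optimizes circuits, phylogenies, or jet engines — that substrate-neutrality was James’ point.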

I’m attending the Evolution 2009 conference in Moscow, Idaho. Below are some notes from the first day. There are eight separate lecture tracks, so it’s impossible for me to see everything. I’m mostly attending lectures focused on phylogenetics, systematics, and molecular evolution. . .

This morning, I planned to hear Peter Turchin talk about “warfare and the evolution of social complexity.” Unfortunately, I missed his lecture due to an unpublished schedule rearrangement. Instead, I listened to talks on the subject of speciation. Asegul Birand presented simulations demonstrating that species’ range affects speciation rates. Marcus Kronforst characterized hotspots of genetic differentiation in Heliconius butterflies; specifically, Marcus showed that wing coloration patterns are adaptive traits that generate reproductive isolation.

Later, I attended a mid-morning session on phylogenetic methods. . .

Jennifer Riplinger (from Jack Sullivan’s lab) discussed the problem of model selection for maximum likelihood bootstrap replicates. In theory, we should perform model selection for each bootstrap replicate; in practice, most people use the same maximum likelihood model for all replicates. Jennifer examined the role of replicate model selection on CytB, 18S RNA, and COX1 sequences. Her results show that model selection for individual bootstrap replicates is unnecessary and does not yield significantly different bootstrap values. Jennifer makes a good point, but I would like to see her analysis repeated for simulated datasets, where the true phylogenetic partitioning is known. Furthermore, everyone should be careful about placing too much trust in bootstrap values (see Douady 2003).

Randal Linder presented a software tool “SATe” to simultaneously align sequence data and estimate phylogeny. Given the short time allowance (only 15 minutes!), I had a difficult time determining how SATe is different from ALIFRITZ or Bali-Phy. Randal used the “SP” metric to show that SATe produces more accurate alignments than ClustalW, MAFFT, MUSCLE, or Prank. I am unfamiliar with the “SP” metric, and I wonder if his analysis would yield different results if he used AMA — instead of SP — to measure accuracy.

Jason Evans (of the Sullivan Lab) talked about his approach for averaging models during phylogenetic inference. Due to the short time constraint, I didn’t entirely understand his cost-based averaging method. I think integrating uncertainty about the evolutionary model is an appealing phylogenetic problem, but I need to read Jason’s publication before I can say anything critical about his particular method.

Rachel Schwartz talked about error in phylogenetic branch length estimation. Rachel used simulations to show that Bayesian branch lengths (estimated using MrBayes) generally underestimate the true branch length, while maximum likelihood branch lengths generally overestimate the true length. The underestimation/overestimation bias is magnified for “deep” internal branches. In general — for a rooted tree — Bayesian branch lengths make old nodes older and young nodes younger. On the other hand, maximum likelihood branch lengths make old nodes younger and young nodes older. Overall, the bias is less pronounced for maximum likelihood estimates, and therefore Bayesian branch lengths should probably be avoided. Rachel’s talk was robust and comprehensive, and I look forward to reading the forthcoming publication.

Finally, I attended an afternoon symposium in which Michael Alfaro discussed a method (named Medusa) for integrating fossil information into phylogenetic estimates of birth/death rates. Afterwards, Brian Moore (from John Huelsenbeck’s lab) presented a collection of Bayesian tools for estimating phylogenetic divergence times and diversification rates.

Here are two good overview articles on evolutionary computation. The first article is more recent and is targeted primarily at computer scientists; the second article is slightly outdated and targeted primarily at ecologists.