Advances in molecular biology now permit complex biological systems to be tracked at an exquisite level of detail. The information flow is so great, however, that using intuition alone to draw connections is unrealistic. Thus, the need to integrate mathematical biology with experimental biology is greater than ever. To achieve this integration, obstacles that have traditionally prevented effective communication between theoreticians and experimentalists must be overcome, so that experimentalists learn the language of mathematics and dynamical modeling and theorists learn the language of biology. Fifty years ago Alan Hodgkin and Andrew Huxley published their quantitative model of the nerve action potential; in the same year, Alan Turing published his work on pattern formation in activator-inhibitor systems. These classic studies illustrate two ends of the spectrum in mathematical biology: the detailed model approach and the minimal model approach. When combined, they are highly synergistic in analyzing the mechanisms underlying the behavior of complex biological systems. Their effective integration will be essential for unraveling the physical basis of the mysteries of life.

A half century ago, 1952 was a banner year for modern quantitative biology. Not only did Alan Hodgkin and Andrew Huxley publish their classic four papers in The Journal of Physiology (1–4), which revolutionized our understanding of nerve excitation and subsequently won them the Nobel Prize, but 1952 was also the year that Alan Turing2 published a paper entitled "The Chemical Basis of Morphogenesis" in Philosophical Transactions of the Royal Society (6). Both papers used differential equations to model biological processes. But whereas Hodgkin and Huxley used a detailed modeling approach to develop a quantitative description of the nerve action potential, Turing’s approach was more general. In his paper, Turing used modeling to show how simple nonequilibrium chemical reactions, described by differential equations for reaction and diffusion through space, could spontaneously cause patterns to form in time and space, now known as Turing patterns. Turing speculated that this general mechanism might be at the heart of biological morphogenesis. The goal of this perspective is to provide a brief update, and some speculation as well, about how these prescient papers remain relevant to modern biology in the post-genomic era.

Living organisms come into being through the self-organizing process of developmental morphogenesis, a form of spatiotemporal pattern formation. Throughout their lifetime, living organisms must constantly adapt spatiotemporally to new environmental challenges. If life can be understood in terms of physical laws, then there must be rules that determine how conglomerates of nucleic acids, proteins, lipids, carbohydrates, and other organic molecules self-assemble at the microscopic level to create the predictable and reproducible macroscopic structures that characterize living organisms. To understand life, we must uncover those rules.

From basic principles, if a system contains only centrifugal forces, it expands to infinity, whereas if it contains only centripetal forces, it collapses to a point. Turing realized that pattern formation requires a tension between expanding (activating) and collapsing (inhibiting) forces, analogous to the yin and yang in Eastern philosophy. Moreover, he discovered a key principle: that short-range activation coupled to long-range inhibition is a natural recipe for spatiotemporal pattern formation. Consider a chemical reaction in which reactants combine to form products. Suppose a product of the reaction feeds back positively to further activate the reaction and that, at the same time, this activator catalyzes the production of an inhibitor of the reaction. Figure 1A illustrates the basic concept. If the activator diffuses more slowly than the inhibitor, then Turing’s conditions of short-range activation and long-range inhibition are satisfied. Assuming that fresh reactants (A and B) remain available, the reaction rate is governed by the local concentrations of the activator (C) vs. the inhibitor (X*). When the reaction is initiated, the activator autocatalyzes its own production, but at the same time activates the inhibitor. If the inhibitor diffuses rapidly (or activator production temporarily outpaces inhibitor production), then inhibitor concentration falls below that of the activator, allowing the reaction to proceed vigorously at the initiation site. A modest distance away, however, the concentration of the more rapidly diffusing inhibitor will exceed that of the activator, and the reaction is shut down. Even further away, the concentration of the inhibitor eventually falls off to a low enough level to allow the reaction to proceed autocatalytically again. Thus, nodes of high concentration of the activator form spontaneously, purely on the basis of the physical characteristics of reaction and diffusion (Fig. 1B).
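Turing’s criterion can be checked numerically for a toy activator-inhibitor pair. The sketch below (all parameter values are illustrative assumptions, not taken from Turing’s paper) linearizes a generic two-species reaction around its uniform steady state and shows that diffusion alone destabilizes a band of spatial wavelengths when the inhibitor diffuses much faster than the activator:

```python
import numpy as np

# Jacobian of a generic activator (a) / inhibitor (h) reaction at its
# uniform steady state (illustrative values):
# row 1: effect of a and h on activator production
# row 2: effect of a and h on inhibitor production
J = np.array([[1.0, -1.0],
              [4.0, -2.0]])
D_act, D_inh = 0.5, 10.0   # inhibitor diffuses 20x faster than activator

def growth_rate(k):
    """Largest real part among eigenvalues of J - k^2*diag(D): the linear
    growth rate of a spatial perturbation with wavenumber k."""
    A = J - k**2 * np.diag([D_act, D_inh])
    return np.linalg.eigvals(A).real.max()

ks = np.linspace(0.0, 3.0, 301)
rates = np.array([growth_rate(k) for k in ks])

# Uniform perturbations (k = 0) decay, but a band of finite wavelengths
# grows: the signature of a Turing instability.
print(growth_rate(0.0) < 0, rates.max() > 0)
```

Without diffusion the reaction is stable; with unequal diffusion coefficients, perturbations of intermediate wavelength grow spontaneously, setting the spacing of the resulting nodes.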

These chemical activator-inhibitor systems belong to the class called reaction-diffusion systems. The local reaction can be oscillatory, as in the system in Fig. 1B, or it can be excitable, meaning that a suprathreshold stimulus is necessary to produce a large excursion, followed by a return to equilibrium (e.g., the action potential in nerve or heart cells). All these systems exhibit spontaneous pattern formation due to the coupling of short-range activation with long-range inhibition. If we now imagine that the activators and inhibitors are transcription factors turning on and off the melanin gene in skin cells, a physical explanation for a biological phenomenon such as skin patterning can be developed by modeling the appropriate differential equations representing the reaction and diffusion terms (Fig. 2A, B). This approach has been used to develop theoretical models for many types of biological pattern formation (7). Examples include phyllotaxis (leaf patterning, Fig. 2C); developmental morphogenesis (Fig. 2D), in which the activators and inhibitors are called developmental morphogens; and reentrant cardiac arrhythmias such as ventricular tachycardia and fibrillation (8), in which the diffusing substance is ionic current (Fig. 2E). A wealth of patterns emerges spontaneously from such systems, from stripes and spots to spiral and scroll waves. These self-organizing patterns are also called emergent properties, which are controlled by global parameters, such as diffusion rates and reaction rates, that determine the characteristics of short-range activation and long-range inhibition.
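The excitable type of local dynamics can be illustrated with the FitzHugh-Nagumo equations, a standard two-variable reduction of nerve excitability (the parameter values below are the conventional textbook ones; this is a sketch, not a model from the papers cited here):

```python
def fhn_response(kick, t_end=100.0, dt=0.01):
    """Integrate the FitzHugh-Nagumo model from rest after an
    instantaneous displacement `kick` of the voltage-like variable v.
    Returns the maximum v reached afterward."""
    a, b, eps = 0.7, 0.8, 0.08       # classic excitable-regime parameters
    v, w = -1.1994, -0.6243          # approximate resting state
    v += kick
    v_max = v
    for _ in range(int(t_end / dt)):
        dv = v - v**3 / 3.0 - w      # fast activation variable
        dw = eps * (v + a - b * w)   # slow recovery variable
        v += dt * dv
        w += dt * dw
        v_max = max(v_max, v)
    return v_max

# A subthreshold kick decays back to rest; a suprathreshold kick
# triggers a full, all-or-none excursion before returning to equilibrium.
print(fhn_response(0.3), fhn_response(1.5))
```

The all-or-none response to the two kicks captures what "excitable" means in the text: a threshold separates small, decaying perturbations from a single large excursion followed by a refractory return to rest.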

Yet many experimental biologists are either unaware of, or discount, these elegant theoretical approaches to pattern formation in biological morphogenesis. As intellectually attractive as the theory is, it has been daunting to test experimentally. In complex biological systems, identifying the relevant activators and inhibitors among the many thousands of proteins and signaling molecules in a typical cell has been challenging, although not always hopeless (9). In addition, recent studies suggest that the simpler but dynamically richer emergent systems may tend to be replaced by more complex but more stable hierarchic signaling mechanisms through evolution (10–12).

Nevertheless, remarkable advances in molecular biology raise the possibility that this impasse may soon be broken. Three key achievements are the decoding of the genomes, high-throughput array technology, and genetic engineering/gene transfer technology. With the decoding of the genomes, the first step necessary to identify the proteins that regulate biological pattern formation has been completed. A current limitation is that we do not yet know the function of most of the proteins encoded by the genes; determining those functions is the goal of proteomics. The second advance, array technology, now permits the expression levels of thousands of genes to be tracked simultaneously at multiple time points. For the first time, it is feasible to monitor how all the genes in a biological process of interest change over time. For example, Fig. 3 illustrates a DNA array study clocking how all 6000 genes of budding yeast turn on and off during the cell cycle (13). Although gene expression does not always correlate with protein expression, the first protein arrays have been developed. Many technical challenges remain before gene and protein arrays become routine as reliable biological tools, but the proof of concept has now been established. The third key advance has been the development of genetic engineering and gene transfer technology. These techniques allow highly selective perturbations to be imposed on in vitro or in vivo biological systems. With these three tools, experimental biologists will eventually be able to characterize the profile of a biological system in complete detail as it evolves over time and space and to determine how this profile changes in response to highly selective perturbations.
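As a toy illustration of how such time-course data can be mined, the sketch below scores expression profiles for periodicity at the cell-cycle frequency, in the spirit of the Fourier scoring used in yeast array studies; the data are synthetic and the function names are our own invention:

```python
import numpy as np

def periodicity_score(x, period, dt=1.0):
    """Magnitude of the Fourier component of time series x at the given
    period: large for profiles that oscillate at that period."""
    t = np.arange(len(x)) * dt
    omega = 2.0 * np.pi / period
    x = x - x.mean()                      # remove baseline expression level
    return np.hypot(np.dot(x, np.cos(omega * t)),
                    np.dot(x, np.sin(omega * t))) / len(x)

# Synthetic 18-point time courses sampled every 7 min (one value per array):
t = np.arange(18) * 7.0
cycling = 1.0 + 0.8 * np.cos(2 * np.pi * t / 63.0)   # peaks once per 63-min cycle
drifting = 1.0 + 0.01 * t                            # steady drift, no rhythm

print(periodicity_score(cycling, 63.0, dt=7.0) >
      periodicity_score(drifting, 63.0, dt=7.0))
```

Ranking all genes by such a score separates candidates that cycle with the process of interest from genes whose expression merely drifts, a first filter before any mechanistic modeling.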

However, the obvious problem with so many genes turning off and on in time and space, as well as their gene products being post-translationally modified by multiple signaling cascades, is that the complexity is immense. How can we sort out which candidate genes/proteins drive the process of interest and which are merely passively responding? Does it even make sense to try to group individual genes/proteins into "actors" vs. "spectators", or do they typically serve both roles due to the multiple interdependencies inherent in redundant signaling pathways? In complex interdependent systems, perturbing one element creates a domino effect that sweeps through the entire system, making cause and effect often impossible to intuit.

It is clear that the complexity is too great to rely on intuition alone. To deal with complexity of this magnitude, it is essential to integrate experimental biology with mathematical biology, in particular with the types of modeling approaches illustrated 50 years ago by Hodgkin and Huxley and by Turing. These two modeling approaches, although fundamentally different, are highly complementary. Turing’s paper (6) illustrated the "minimal model" approach. With no specific biological data to incorporate, Turing’s model was purely conceptual, using the minimum of parameters necessary to produce interesting dynamical behavior. The advantage is that by minimizing parameters, the full dynamical repertoire of the system can be systematically explored, to identify the range of emergent behaviors and global parameters that control them. The disadvantage is the limited ability to directly relate predictions of the model to experimental observations, since there often are no explicit counterparts in the model corresponding to the biological details.

Hodgkin and Huxley, on the other hand, had precise biological data and used a "detailed modeling" approach to represent the individual biological processes (ionic currents) in physiological detail. The advantage of this approach is that the model can be compared directly with experimental observation and used to predict experimental behavior. For example, when their simulations based solely on Na and K currents failed to exactly reproduce the observed action potential, Hodgkin and Huxley predicted the existence of a third "leak" current, now known to be a Cl current. The disadvantage, however, is that the ability to identify the global parameters that control emergent behaviors decreases as the model grows in complexity. As the number of free variables and adjustable parameters increases (i.e., as the model becomes high dimensional), so does the number of potential behaviors that are consistent with the model. To paraphrase Stanislaw Ulam, a lead mathematician on the Manhattan Project, "Give me 15 free parameters, and I can draw an elephant. Give me 16, and I can make it dance." Hodgkin and Huxley had set out to describe the neuronal action potential in a detailed model, and succeeded extremely well. But the real dynamical understanding of the system required the low-dimensional reduction to the concept of an excitable element.
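The contrast with the minimal approach can be made concrete: the Hodgkin-Huxley equations themselves are compact enough to integrate in a few lines. The sketch below uses simple Euler integration in the modern sign convention (mV, ms); the stimulus amplitudes are illustrative choices:

```python
import math

def hh_spike(i_stim, stim_ms=2.0, t_end=50.0, dt=0.01):
    """Integrate the Hodgkin-Huxley squid-axon equations and return the
    peak membrane voltage after a brief current pulse of i_stim uA/cm^2."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3        # max conductances (mS/cm^2)
    e_na, e_k, e_l = 50.0, -77.0, -54.4      # reversal potentials (mV)

    # Voltage-dependent gate rate constants (1/ms), with guards at the
    # removable singularities of the alpha_m and alpha_n expressions.
    def a_m(v):
        x = v + 40.0
        return 1.0 if abs(x) < 1e-7 else 0.1 * x / (1.0 - math.exp(-x / 10.0))
    def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
    def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
    def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    def a_n(v):
        x = v + 55.0
        return 0.1 if abs(x) < 1e-7 else 0.01 * x / (1.0 - math.exp(-x / 10.0))
    def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

    v = -65.0                                 # resting potential
    m = a_m(v) / (a_m(v) + b_m(v))            # gates start at steady state
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))
    v_max = v
    for step in range(int(t_end / dt)):
        t = step * dt
        i_ion = (g_na * m**3 * h * (v - e_na) +   # Na current
                 g_k * n**4 * (v - e_k) +         # K current
                 g_l * (v - e_l))                 # leak current
        i_app = i_stim if t < stim_ms else 0.0
        v += dt * (i_app - i_ion)                 # C_m = 1 uF/cm^2
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        v_max = max(v_max, v)
    return v_max

# A strong pulse fires an all-or-none spike; a weak one does not.
print(hh_spike(10.0), hh_spike(1.0))
```

Every parameter here corresponds to a measured biological quantity, which is precisely the strength of the detailed approach; the price is that the reasons for the model’s emergent behavior are no longer visible in the equations.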

Thus, both approaches to modeling have advantages and disadvantages. The most effective strategy is to combine both approaches, as illustrated by the following example of cardiac arrhythmias. In this field, the legacies of both Turing’s and Hodgkin and Huxley’s studies have been profound. Although not directly extrapolated to cardiac tissue, Turing’s concepts had a major indirect impact. In the 1960s, Moe et al. (14) used a minimal model of cardiac tissue with cardiac cells represented as simple automata, and postulated that cardiac fibrillation was caused by multiple reentrant wave fronts. At the same time in the Soviet Union, Krinsky developed a minimal model of a 2-dimensional cardiac tissue, and demonstrated that reentrant excitation could be sustained by spiral waves (15). Subsequently, Zaikin and Zhabotinsky (16) and Winfree (17) showed that chemicals in a dish, known as the Belousov-Zhabotinsky reaction, could produce spatiotemporal patterns according to the theoretical principles discovered by Turing. Winfree demonstrated spiral waves as one of these patterns. Gul’ko and Petrov (18) took the next step by modeling cardiac tissue with reaction-diffusion equations, and substantiated that spiral wave reentry could develop. Winfree, after visiting Krinsky in the Soviet Union, brought this exciting information to the attention of Western scientists (19) and went on himself to further explore its cardiac implications (20). Although a spiral wave in cardiac tissue is not formally equivalent to a Turing pattern, the analogy is conceptually straightforward: excitability serves the role of short-range activation, allowing an electrical wave to propagate regeneratively, and refractoriness (inactivation of the excitable element for a specified period of time) serves the role of long-range inhibition, preventing the tissue from being reexcited until the refractory period has passed.
Subsequently, functional reentry (21), i.e., reentry in the absence of an anatomical obstacle, and then spiral waves (22) were described experimentally in real cardiac tissue and postulated to be the basis of fibrillation (23).

Meanwhile, other investigators (24–27) studying 1-dimensional reentry around an anatomical circuit had identified a property of refractoriness called restitution (the dependence of refractory period on the interval since the last excitation) as playing a key role in producing oscillations in the wavelength of cardiac waves. If these oscillations grew large enough, reentry self-terminated. Karma (28) reasoned that in 2- or 3-dimensional tissue, oscillations in wavelength, if spatially localized, would terminate reentry locally, but not globally, and could be a mechanism of spiral wave breakup producing a fibrillation-like state. To test this idea, Karma developed a minimal 2-variable model of the cardiac action potential in which the property of restitution could be directly controlled. In his landmark study, he demonstrated that the steepness of restitution was a global parameter controlling spiral wave breakup in this simple model. If restitution was shallow (slope <1), the spiral wave remained stable; if restitution was steep (slope >1), the spiral wave broke up.
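The role of the restitution slope can be captured in a toy iterated map, stripped of all ionic detail (this is a generic sketch, not Karma’s model): each action potential duration (APD) is a function of the preceding diastolic interval (DI), with DI equal to the pacing cycle length minus the previous APD.

```python
def restitution_beats(slope, n_beats=100, bcl=300.0):
    """Iterate a linear restitution map APD_{n+1} = f(DI_n) with
    DI_n = BCL - APD_n. The fixed point sits at APD = 200 ms, DI = 100 ms;
    APD is clipped to a plausible range so that steep slopes produce
    sustained alternans rather than runaway values."""
    apd = 220.0                                 # start off the fixed point
    apds = []
    for _ in range(n_beats):
        di = bcl - apd
        apd = 200.0 + slope * (di - 100.0)      # linear restitution curve
        apd = min(max(apd, 120.0), bcl - 20.0)  # clip to plausible APDs
        apds.append(apd)
    return apds

for s in (0.8, 1.5):
    a = restitution_beats(s)
    print(f"slope {s}: last beats differ by {abs(a[-1] - a[-2]):.3f} ms")
```

A small perturbation of APD is multiplied by the (negative of the) slope on each beat, so a slope below 1 damps the oscillation toward a steady rhythm, whereas a slope above 1 amplifies it into sustained beat-to-beat alternation, the 1-dimensional analog of the wavelength oscillations that break up spiral waves.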

Although these findings were intriguing, their relevance to real cardiac tissue was unclear. Karma’s minimal 2-variable action potential model consisted of a generic inward (excitatory) current and a generic outward (recovery) current, neither of which has a direct biological counterpart. Here the legacy of Hodgkin and Huxley’s modeling approach assumed critical importance, in the form of detailed ionic models of the cardiac action potential that included the relevant biological details (ionic currents). Subsequent simulations in these physiologically realistic models of cardiac tissue have confirmed the importance of the restitution slope as a determinant of spiral wave stability (29, 30), and these predictions have now been verified experimentally in real cardiac tissue using drugs that flatten restitution (8, 31). Moreover, these findings have significant clinical relevance in providing a strategy for developing effective antiarrhythmic drugs (32).

Could the importance of restitution steepness as a global parameter controlling spiral wave behavior have been ascertained from the detailed model without the minimal model as a guide? Perhaps, but the problem is that there are literally billions of ways to modify individual ionic currents in the detailed model, all of which could produce the same effect on restitution steepness. Without the insight provided by the minimal model, it would be very difficult to ascertain whether the change in spiral wave behavior was due to specific effects of the ionic current(s) modified or to the effect on restitution. On the other hand, without the detailed model, identification of the appropriate targets for modifying restitution pharmacologically or genetically would be purely empirical.

How can this integrated approach of mathematical and experimental biology be adapted to study other complex biological systems in the era of genomics and proteomics? Figure 4 summarizes our thoughts on this issue. At the genome/proteome level, bioinformatics (information science) is essential for organizing the vast genomic/proteomic databases. In the future, medicine will be revolutionized by accurate linking of single-nucleotide polymorphisms (SNPs) to human disease, permitting physicians to focus to a much greater extent on prevention of disease rather than treatment after the ravages of the disease process are already manifest. At the protein level, theoretical structural biology has become an essential tool for understanding how molecules perform their functions. If function can be related directly to structure, then there is great promise for designing molecules to selectively modulate physiological functions and thereby improve the state of human health and the treatment of disease. For addressing fundamental mechanistic questions about living organisms, however, the combined approaches taught to us half a century ago by Turing and by Hodgkin and Huxley are perhaps the most illuminating.

As a general strategy, we have learned that in fundamental biological processes, such as cell cycle regulation, morphogenesis, and signaling cascades, interesting dynamical behaviors such as spontaneous oscillations (limit cycles), sensitive biochemical switches (bistability), and excitability require the presence of positive and negative feedback loops, often coupled with time delays. By examining a complex system for the subsets of its components comprising these loops, we gain an initial clue about potentially sensitive targets for modifying dynamical behavior. For example, multi-site phosphorylation of cell cycle proteins, which targets them for degradation via ubiquitination, has recently been identified as a key source of the nonlinearity required for normal cell cycle dynamics (33, 34). Because multi-site phosphorylation is such a common motif in signaling networks, it is interesting to speculate that it may represent a general biological mechanism conferring nonlinearity and generating interesting dynamical behavior. The importance of such loops to dynamics can best be explored in minimal models and, if significant, can be extended to detailed models to formulate experimentally testable hypotheses.
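The kind of sensitive biochemical switch mentioned above can be sketched with a single differential equation in which a protein activates its own production through a steep, multi-site-phosphorylation-like (Hill) nonlinearity; all parameter values below are illustrative assumptions:

```python
def settle(x0, t_end=200.0, dt=0.01):
    """Integrate dx/dt = basal + v_max * x^4/(K^4 + x^4) - gamma*x,
    a positive-feedback loop with an ultrasensitive (Hill n = 4)
    activation term, and return the steady-state level reached."""
    basal, v_max, K, gamma = 0.2, 2.0, 1.0, 1.0
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (basal + v_max * x**4 / (K**4 + x**4) - gamma * x)
    return x

# Bistability: the same equation settles at either a low or a high state
# depending only on the starting level -- a biochemical toggle switch.
print(settle(0.1), settle(1.5))
```

Without the steep nonlinearity (Hill coefficient 1), the same loop has a single steady state; it is the ultrasensitivity conferred by cooperative, multi-site modification that creates the two stable states separated by a threshold.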

In conclusion, the need to integrate mathematical biology with experimental biology is greater than ever. To achieve this integration, obstacles that have traditionally prevented effective communication between theoreticians and experimentalists have to be overcome by teaching experimentalists the language of mathematics and dynamical modeling, and teaching theorists the language of biology. This is often difficult to accomplish within the traditional organization of highly territorial university departments. It is becoming less and less likely that individual scientists, however gifted, will be able to cover the full experimental-theoretical territory within a single laboratory. Whether organized within the traditional department structure or as "big science" programs, interdisciplinary approaches will be essential to achieve progress. It is an exciting time to be a biomedical scientist. The optimists believe that a glimmer of light at the end of the tunnel is discernible. We suspect that Alan Turing, Alan Hodgkin, and Andrew Huxley would count themselves in this group.

Acknowledgements

Supported by National Institutes of Health P50 HL52319 and the Laubisch and Kawata Endowments. We thank Boris Kogan for his critical comments and historical perspectives.

Footnotes

2 Alan M. Turing (1912-1954) was one of the great underappreciated geniuses of the 20th century, who in addition to the contribution cited above is credited with laying the foundations of digital computation, leading to the modern computer, and with breaking the secret code used by the Axis during World War II, which helped save Britain from defeat. He was persecuted for his homosexuality, leading to his apparent suicide at age 41, which may account for his relative obscurity compared to Hodgkin and Huxley (5).