Thursday, March 21, 2013

The genetics of emergent phenotypes

Why are some brain disorders so common? Schizophrenia,
autism and epilepsy each affect about 1% of the world’s population, over their
lifetimes. Why are the specific phenotypes associated with those conditions so
frequent? More generally, why do particular phenotypes exist at all? What
constrains or determines the types of phenotypes we observe, out of all the
variations we could conceive of? Why does a system like the brain fail in
particular ways when the genetic program is messed with? Here, I consider how
the difference between “concrete” and “emergent” properties of the brain may
provide an explanation, or at least a useful conceptual framework.

There is now compelling evidence that
disorders like epilepsy, schizophrenia and autism can be caused by mutations in
any of a very large number of different genes (sometimes singly, sometimes in
combinations). This is fundamentally changing the way we think about these
disorders. It is no longer tenable to consider them as unitary categories.
Instead, it is very clear that the underlying etiology is extremely
heterogeneous – possibly more so than for any other human disease.

How can this fact be explained? Why is it that mutations in
so many different genes (perhaps thousands) can give rise to the specific
phenotypes associated with those disorders?

The normal logic of genetic analysis entails some
correspondence between the phenotypes associated with mutations in specific
genes and the functions of the products encoded by those genes. This connection
between mutation and phenotype is one of the main reasons why experimental
genetics is so powerful. For example, if we carry out a genetic screen for
mutations affecting cell death in a worm, or embryonic patterning in a fruit
fly, the expectation is that the genes we discover will be directly involved in
those processes. That is how the molecular processes regulating cell death and
embryonic patterning were discovered.

This logic can sometimes be applied to humans too – but not
always. Let’s consider two genetic conditions – microcephaly and epilepsy –
both affecting the brain, but in quite distinct ways.

Microcephaly
is a rare condition characterised by a small brain. In particular, the cerebral
cortex is smaller than normal, due to a defect in the generation of the normal
number of neurons in this brain area. It can be inherited in a simple,
Mendelian fashion, due to a mutation in any one of at least six different
genes. Remarkably, the proteins encoded by these genes are all involved in
some aspect of cell division of neuronal progenitors. In particular, they
determine whether early divisions expand the initial pool of progenitors (in
the normal situation) or prematurely generate neurons (when any of these genes
is mutated).

The genes implicated in microcephaly are thus directly
involved in the process affected: the generation of neurons in the cerebral
cortex. It is not too inaccurate to say that this is what these genes are
“for”.

This is not the case for epilepsy. It too can be inherited due to specific mutations, but there are many, many more of them, and the known genes have diverse functions:
from controlling cell migration or specifying synaptic connectivity to encoding
ion channels or metabolic enzymes. These are not genes “for” regulating the
spatial and temporal dynamics of electrical activity in neuronal networks.

Put another way, the reason that we see microcephaly as a
phenotype is that there are genes that control the process we are looking at –
generation of neurons in the cortex. The existence of that phenotype thus
reflects a property of the genetic system. In contrast, the generation of
seizures does not relate in any meaningful way to the genetic system – instead,
it is an emergent property of the neural system. We see that phenotype not
because there are many genes directly controlling that process, but because it
is a state that the brain tends to get into, in response to a wide diversity of
insults. (Indeed, seizures are one of the symptoms sometimes associated with
microcephaly).

I have used the term “emergent” twice now without defining
it and had better do so before I get pilloried by those allergic to the word.
There is good reason for a negative reaction, as the term is fraught with
multiple meanings and seemingly mystical connotations.

Concepts of emergence range from the mundane (the whole is
more than the sum of its parts) to the magical (where the behaviour of a system
is not reducible to or predictable from the state and interactions of all its
components, and where new properties emerge apparently “for free”). In fact, it
is possible to allow for new principles and properties at higher levels without
invoking such mystical concepts or over-riding the fundamental laws of physics.

Nature is organised hierarchically into systems at different
levels. Subatomic particles are arranged into atoms, atoms into molecules, molecules into cells, cells into tissues and organs, and ultimately organisms, and individual organisms into collectives and societies. At each level, qualitatively
novel properties arise from the collective action of the components at the
level below. Emergence refers to the idea that many of these properties are
highly unexpected and extremely difficult to predict (though not necessarily
impossible in principle). One objection to the term is that it is therefore
essentially a statement about us (about our level of understanding) and not
about the system itself. I think it goes further than that, however, and does
denote some principles of nature that actually exist in the world, regardless
of whether we understand them or not.

While the emergent behaviour of a system is reducible to the
microstates of the components at the level below and the fundamental physical laws
controlling them, the emergent properties are not deducible purely from those laws. To put it another way, the
microstates of a system are sufficient to explain the properties or macrostates
observed at any moment but are not sufficient to answer another question – why those
properties exist. Why is it that those are the properties observed in that
particular system, or that tend to be observed across diverse systems? These
properties arise because additional laws or principles apply at the higher
level, which constrain the arrangements of the components at the lower level to some purpose.

Many of these principles of functional organisation are
abstract and apply to diverse systems – principles of network organisation, cybernetics and control
theory, information
content, storage and processing, and many others. All of these principles
constrain the architecture of a system in a way that ensures its optimality for
some function.

In artificial design of complex machines, these engineering
principles are incorporated to ensure that the parts are arranged so as to
produce the desired functions of the system as a whole. In living organisms, it
is natural selection that does this work, leading to the illusion of design (or
teleonomy), apparent only
in hindsight. System architectures that produce useful emergent properties at
the higher level (i.e., the phenotype of the organism, which is all that
selection can see) are retained and those that do not are removed. In this way,
the abstract engineering principles constrain the functional organisation of
the components of the system – there are only certain types of arrangements
that can generate specific functions. This is top-down causation, but over a
vastly different timescale from the mystical, moment-to-moment versions
proposed by some emergence theorists.

Let’s move from the abstract to a more specific example and
think about how these issues relate to the kinds of phenotypes we see when a
system is challenged. Consider a complicated, highly specified system like a
fighter jet. It has many different parts – engines, turbines, fuselage, flaps,
wheels, weapons, etc. – each with multiple subcomponents and each with a
specific job to do. If we were examining multiple designs for a jet, we might
consider various specs for, say, the turbines. We might vary the number of
blades, their size, angle, etc. These are all concrete properties of the system
and there are a finite number of them.

Contrast that with an emergent property of the jet,
something like aerodynamic stability, fuel efficiency or even something harder
to define, like “performance”. These properties depend on the specs of all the
individual components of the plane, but also, more importantly, on their
functional organisation and the interactions between them (and the interactions
of the whole system with the environment). A property like performance is not
easily linked to any specific component – instead it emerges in a highly
non-linear fashion from the specs of all of the components of the system and
how they are combined.

If you randomly broke one component in the jet, it is thus
much more likely that you would affect performance than that you would affect
the turbines specifically. The bits of the turbines are not “for performance”,
per se – they are for whatever job they do in the turbine. There aren’t any
bits of the jet that you would say are “for performance”, in fact, but all of
them can affect performance.

The kinds of functions affected by disorders like epilepsy,
autism or schizophrenia are like performance. For epilepsy, it is the highest-order
properties of neural systems – the temporal and spatial dynamics of electrical
activity. For schizophrenia and autism, it is functions like perception,
cognition, sense of self, executive planning, social cognition and orderly
thought – the most sophisticated and integrative functions of the human mind.
These rely on the intact functioning of neural microcircuits in many different
areas and the coordinated actions of distributed brain systems. Evolution has
crafted a complex and powerful machine with remarkable capabilities, but those
capabilities are consequently vulnerable to attack on any of a very large
number of components.

Thinking about these phenotypes in this way thus provides an
explanation for why epilepsy and schizophrenia are so much more common than
microcephaly. The mutational target – the number of genes in which mutations
can cause a particular phenotype – is much, much bigger. (This obviates the
need to invoke some kind of counter-balancing benefit of the mutations that
cause these disorders to explain why they persist at a high frequency. The
individual causal mutations do not
persist – they are strongly selected against, but new mutations arise all the
time. Under this mutation-selection
balance model, the prevalence of a disorder is determined by an equilibrium
between the mutational target size and the strength of selection).
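The mutation-selection balance argument can be put in rough numbers. A minimal sketch, with entirely hypothetical mutation rates, selection coefficients and target sizes, assuming the textbook approximation that a dominant deleterious allele equilibrates at a frequency of about mu/s:

```python
# Illustrative sketch of mutation-selection balance (all numbers hypothetical).
# For a dominant deleterious mutation, the equilibrium allele frequency is
# roughly mu / s, where mu is the per-gene mutation rate and s is the strength
# of selection against carriers. Summing across every gene in the mutational
# target gives a rough expected prevalence of the disorder.

def equilibrium_prevalence(target_size, mu_per_gene, s):
    """Approximate disorder prevalence under mutation-selection balance."""
    return target_size * mu_per_gene / s

# A small mutational target (like microcephaly) vs a large one
# (like schizophrenia or epilepsy), with the same per-gene parameters:
small = equilibrium_prevalence(target_size=10, mu_per_gene=1e-5, s=0.5)
large = equilibrium_prevalence(target_size=500, mu_per_gene=1e-5, s=0.5)
print(f"small target: {small:.4f}, large target: {large:.4f}")
```

With these invented parameters, a fifty-fold difference in target size translates directly into a fifty-fold difference in prevalence, even though every individual mutation is strongly selected against.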

But this perspective does not explain everything that needs
explaining. These conditions do not manifest simply as a general decrease in
brain “performance”. It is not just that normal brain functions are somewhat
degraded. Instead, qualitatively new states or phenotypes emerge. Psychosis is probably the
most striking example – psychiatrists call the hallucinations and delusions
that characterise psychosis “positive symptoms”, reflecting the fact that they
are a novel, additional manifestation, not just a decrease in the function of
specific mental faculties (as with the negative symptoms, such as a decrease in
working memory).

Why does this specific, qualitatively novel state arise as a
consequence of so many distinct mutations? This is where our fighter jet runs
out of steam, as a (now mixed) metaphor. The problem with that metaphor is that
fighter jets are designed and built from a blueprint. Parts of the blueprint
correspond to parts of the jet and their arrangement is also specified directly
on the blueprint.

This is not at all the case for the anatomy of the brain.
The genome is not a blueprint – there are no parts of the DNA sequence that correspond
to parts of the brain. Instead, the structure of the brain emerges through epigenesis –
the execution of the developmental algorithms encoded in the genome, which direct
the unfolding of the organism. (Aristotle coined this term epigenesis, which
contrasted with the prevailing theory, known as pre-formationism – the idea
that the fertilised egg already contains within it a teeny-weeny person, with
all its bits in place, which simply grows over the period of gestation).

The ultimate phenotype of an organism is thus emergent in
the more common sense of that word – it is something that arises over time. This
emphasises the need to consider developmental trajectories when trying to understand
the highly heterogeneous etiology of these disorders.

Complex, dynamic systems tend to gravitate towards certain
stable patterns of activity and interactions in the network. Such patterns are called "attractors", and the set of states that converge on each one is its "basin of attraction". You
can think about them like hollows in a flat sheet, with the current network
state represented by the position of a ball rolling over this landscape. The
flat bits of this landscape represent unstable, fluid states that are likely to
change. The hollows represent more stable states – particular patterns of
activity of the network that are easy to get into and hard to get out of. Generally speaking, the deepest such basin will represent the typical pattern of brain physiology. It takes a big push to get the ball up and out of this basin. But there are other basins – alternative stable states – and the pathophysiological state we recognise as psychosis may be one of those.

Such alternate states may exist as by-products of the
functional organisation of the system. The system architecture will have been
selected to robustly
generate a particular functional outcome. However, when individual
components are interfered with, new functional states may emerge – ones that
are unexpected and that the system has not been selected to produce. They arise
instead as an emergent property of the broken system, as a specific failure
mode.

It is vital to understand not just the nature of such
states, but the trajectories that dynamic systems (in this case organisms)
follow to get into them. (In dynamic systems, the relations between components
of the system are not fixed but change over time). If we take our flat sheet
and tilt it from one end, turning it into a board with channels in it, rather
than hollows, then we can represent the path of a developing organism through
phenotype space, over time.

This is Conrad Waddington’s famous “epigenetic landscape”
– a powerful metaphor for understanding how dynamic systems can be channelled
into specific, stable states. The shape of the landscape will be determined by
an individual’s genotype – some people may have much deeper channels heading
towards typical brain physiology while others may have a greater chance of
heading towards particular pathophysiological states, like psychosis or
epilepsy.

One reason why psychosis and epilepsy may be common states
is that they can reinforce themselves, through altering the relations of
components of the system. In a process known as “kindling”, seizures
induce changes in
neuronal networks that render them increasingly excitable and more likely
to undergo further seizures. A similar
dynamic process, involving homeostatic processes in dopaminergic signaling
pathways, may be involved in psychosis. These homeostatic mechanisms in the
developing brain can, under certain circumstances, be maladaptive, pushing the
network state into a particular pathophysiological pattern, in response to
diverse primary insults.
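The kindling dynamic is, at heart, a positive feedback loop, which a few lines of code can illustrate. All of the numbers here are invented; the point is only that when each event raises the probability of the next, a modest initial insult can snowball into a stable, self-maintaining state:

```python
import random

# Toy illustration of kindling-style positive feedback (numbers invented):
# each seizure slightly raises network excitability, which raises the chance
# of the next seizure.

def simulate(insult, years=100, gain=0.05, seed=1):
    rng = random.Random(seed)
    excitability = insult          # baseline risk set by the initial insult
    seizures = 0
    for _ in range(years):
        if rng.random() < excitability:
            seizures += 1
            # the kindling step: each event makes the network more excitable
            excitability = min(1.0, excitability + gain)
    return seizures

print(simulate(insult=0.01))   # mild insult: few or no seizures
print(simulate(insult=0.10))   # larger insult: feedback drives many more
```

Because the update is self-reinforcing, the relationship between the size of the initial insult and the eventual seizure burden is strongly non-linear – many different primary insults can feed the same loop.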

Finally, a developmental perspective can also provide an
explanation for the high levels of phenotypic variability observed with
mutations conferring risk for psychiatric disorders. Such mutations can
manifest in different ways, statistically increasing risk for multiple
conditions. A person’s risk for developing schizophrenia is statistically much
higher if they have a close relative with the condition, but their risks of
developing autism or epilepsy (or bipolar disorder or depression or
attention-deficit hyperactivity disorder) are all also higher. Even monozygotic
(“identical”) twins are often not concordant for these clinical diagnoses. So,
while genetics can lead to a much greater susceptibility to these conditions,
whether a specific individual actually develops them depends also on other
factors.

One of those factors, often overlooked, is intrinsic
developmental variation. The development of the brain is inherently probabilistic,
not deterministic (more like a recipe than a blueprint). This is evident at the
level of individual cells, nerve fibres and synapses and can manifest at the
macro level as variation in specific traits or symptoms in individuals with the
exact same genotype.

Waddington’s landscape can also visualise this important
role of chance in determining an individual’s eventual phenotypic outcome. If
you roll a marble down this board multiple times, you will get multiple
outcomes, essentially by chance (due to thermodynamic noise at the molecular
level, affecting gene expression, protein interactions, etc.).

For a concrete property such as brain size, the amount of
noise affecting the phenotype will be low, as a small number of components and
processes are involved. The correspondence between genotype and phenotype will
therefore be quite linear for concrete properties. In contrast, emergent
properties that depend on large numbers of components will be more subject to
noise and the relationship between genotype and phenotype will be far less
linear. This explains why
mutations causing psychiatric disorders show lower penetrance and higher
variability in phenotypic expression – this is the predicted pattern for
emergent properties.
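That contrast can be illustrated with a toy simulation (all parameters hypothetical): a "concrete" trait read off a single component with little noise, versus an "emergent" trait that depends non-linearly on a hundred noisy components. The same "mutation" yields an almost invariant concrete phenotype but a widely variable emergent one:

```python
import random
import statistics

# Hypothetical sketch contrasting a "concrete" trait (few components, little
# noise) with an "emergent" one (many noisy components combined non-linearly).

def concrete_trait(mutation_effect, rng):
    # one component, small developmental noise
    return mutation_effect + rng.gauss(0, 0.01)

def emergent_trait(mutation_effect, rng, n_parts=100):
    # many components, each subject to substantial noise
    parts = [mutation_effect + rng.gauss(0, 0.5) for _ in range(n_parts)]
    # non-linear read-out: the trait depends on how many parts cross a threshold
    return sum(p > 1.0 for p in parts) / n_parts

rng = random.Random(42)
concrete = [concrete_trait(1.0, rng) for _ in range(1000)]
emergent = [emergent_trait(1.0, rng) for _ in range(1000)]
# the emergent trait shows far more variation for the exact same "genotype"
print(statistics.stdev(concrete), statistics.stdev(emergent))
```

With an identical "mutation" in every run, the concrete trait barely varies, while the threshold read-out amplifies the component-level noise – a caricature of incomplete penetrance and variable expressivity.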

To sum up, thinking about these kinds of disorders as
affecting emergent properties can explain why they are common, why the genes
responsible are so diverse, why their products are only distally and indirectly
related to the processes affected by the clinical symptoms and why the
phenotypic outcomes are inherently variable.

This is a great post on the emergent properties of mental states. I'm going to keep the jet analogy in mind for the next time I get into an argument about the materialism of consciousness at a party. ;) I do have one question, though: How do you account for the wide range in severity of mental phenotypes using your "attractor" model? Are Aspergers and severe autism different troughs in the epigenetic landscape? What about life-altering OCD vs. moderate counting and cleaning obsessions? Your explanation works neatly in the case of "all or nothing" diseases, but I think you'll agree that mental states are not so black and white.

You're absolutely right - the attractor metaphor works best for clear distinctions. But if you think about the attractor landscape as being shaped by a person's particular genome (and other factors), then the exact phenotypes that emerge (including the severity) are easier to accommodate. (Throw a good dash of chance in the mix and the range of ultimate phenotypes is not so surprising).

excellent review and insights - thanks. i agree with your explanation for the observations of a wide range of phenotypes. i also think that part of the problem with explaining the range of symptoms is taxonomy. we classify diseases (those we don't have molecular confirmation for) according to their symptoms, hence there are many overlaps. if we saw these conditions as a continuum of expressions of modified functions, the model would fit well.

I agree - the classification scheme used is clearly arbitrary in many respects. There is evidence that disorders like schizophrenia and autism do have some validity, in that cluster analyses show a non-random clustering of symptoms. However, it is equally clear that these categories overlap other clinical categories in many respects (certainly for individual symptoms) and that their etiology is highly overlapping.

Jim Ranck, my post-doc advisor, used to say, "it's amazing that brains don't go into seizure all of the time", meaning that it's hard to maintain the cerebral cortex in an active, but not hyperactive, state. This way of thinking about it makes the 'emergent' state of a seizure not so surprising. If there are many road-blocks to a seizing cortex, eliminating any one of them may be sufficient.

This is one of the best pieces I've ever read on development. It is wide-ranging, conceptual, accessible, and yet technically deep enough to have real meat to it. Reading this makes me hope that you are working on a book that expands on your thinking! You have a rare and welcome combination of expository ability and deep expertise.

Todd, thanks very much for those kind comments. It's always very encouraging to get feedback like that. I am, as it happens, working on a book, at very early stages so far, and hope to have some time to really work on it sometime soon. Just working on cloning myself first!

Thanks for this thoughtful piece and your many other excellent contributions to the scientific literature. They are a pleasure to read. I am a cognition and 'schizophrenia' researcher, with a strong interest in genetics, and I have been struggling with many issues that you touch on here and elsewhere. I find your characterization here of schizophrenia, epilepsy, etc, as 'specific failure modes' very persuasive. Other pieces I have read, including some of your articles, make a strong case for the importance of highly penetrant, rare genetic variants as a starting point for such complex disorders. I am having trouble drawing these two threads of thinking together, though. It seems that, if 'typical' brain wiring is based on a probabilistic yet very robust program, it should require more than a single nudge to redirect the program into a pathological but stable alternative. I understand that you are not arguing for Mendelian-like transmission in every case, but I guess I feel instinctively that there should generally be a more complex constellation of causal factors undelying a developmental endpoint as complex as psychosis. Thanks.

Thanks Dwight for your kind comments and also for a great question. It is an issue I have struggled to try and make sense of too - if the program of neural development is so robust then why can it be disrupted by single mutations in so many different genes? One possible answer is that the genetic network controlling development has evolved robustness to deal with environmental variance and, more importantly, intrinsic noise in the system. It can deal with small fluctuations of many components because it has evolved to do so. As a byproduct, it evolves robustness to mutations - at least to the cumulative effect of many minor mutations (like SNPs). There may be less selective pressure to evolve robustness to major mutations because that requires a kind of foresight that evolution does not have - the system does not know it might be advantageous in the future to evolve more robustness now to a challenge it has not yet encountered. When it does encounter it - when a mutation arises in a specific individual - it is too late. I realise that's all a bit hand-wavy, but trying to answer "why" questions in biology always is!

Another possible factor is that maybe the system is not so hypersensitive to mutations - maybe we're just really good at detecting what are actually minor changes in the function of the system because we are so attuned to interpreting each other's behaviour: http://www.wiringthebrain.com/2012/08/are-human-brains-especially-fragile.html
