600 trillion synapses and Alzheimer's disease

Mark Wanner

Late-onset Alzheimer’s disease is extremely difficult to model in the laboratory, one reason why no successful therapy for it exists. But scientists are using new capabilities to create more accurate, more useful research tools, bringing new hope for progress and better outcomes for patients.

Neuroscience always makes me think of astrophysics.

Why? Well, our own galaxy, the Milky Way, is pretty big as galaxies go, containing 250 billion stars, give or take 100 billion or so. Our brains contain about 100 billion neurons.* So you get the idea. There’s the numerical equivalent of a smallish galaxy in each of our skulls.

But that’s not all. Each neuron has, on average, about 7,000 synaptic connections with other neurons. That puts the synapse count in the neighborhood of 600 trillion. In young children, before synaptic pruning begins in earnest, the estimated number reaches as high as 1 quadrillion. The system of interconnections almost defies comprehension.
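The arithmetic behind the 600 trillion figure is easy to sanity-check; here is a quick back-of-envelope sketch (using the ~86 billion neuron count from the footnote, which is what puts the product near 600 trillion rather than 700 trillion):

```python
# Back-of-envelope synapse estimate using the figures quoted in the text.
neurons = 86e9               # ~86 billion neurons in an adult brain (see footnote)
synapses_per_neuron = 7_000  # average synaptic connections per neuron

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:,.0f}")  # 602,000,000,000,000, i.e. roughly 600 trillion
```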

Add in underlying genetics, environmental variables that can have significant effects, and human behaviors that both derive from neural circuits and can also affect their future function, and the subject becomes even more complex. Of course, it’s only thanks to our extensive neural circuitry that we can even try to understand it. But can we actually figure out how our brains work and what goes wrong in a neurological disease?

Which brings us to Alzheimer’s disease (AD). AD is actually pretty simple for a neurological disease, at least on the surface. In AD, neurons die over time. A lot of them. After quite a while — usually decades — neuron death affects memory and cognition, and ultimately the damage becomes so extensive that the patient dies. Hallmarks of AD include the accumulation of protein aggregates thought to disrupt neuron function, and much of the effort around therapy development has revolved around trying to reduce these aggregates, called beta-amyloid plaques and tau tangles, in patient brains.

But scratch the surface and it’s not at all simple. For patients with the most common form of AD, late-onset AD or LOAD, the environmental and behavioral contributions to disease accumulate over an extremely long time. Diet and exercise (or lack thereof) are among the leading risk factors, but their long-term effects are very difficult to recreate in an experimental setting. Even sleep habits early in life are coming under scrutiny. Other physiological traits also come into play, and recent attention has focused on immune function and vascular health as factors in disease progression. Genetically speaking, there is a strong connection between a variant of the APOE gene, known as ApoE4, and LOAD risk, but no one is quite sure how the mechanism works. Other genetic signals are still indistinct, so identifying the at-risk population beyond ApoE4 carriers remains out of reach. Perhaps as a result, not one therapeutic has succeeded in clinical trials so far, including some that looked highly promising in preclinical tests.

The current clinical perspective on therapy development was of particular interest. I was somewhat surprised to find that the experts in the field are mostly still following what’s known as the “amyloid hypothesis,” the belief that the beta-amyloid plaques seen in patients are central to AD pathogenesis and hold the key to therapy development. Most of the therapies tested to date were developed to reduce or eliminate the plaques, and some have done so quite effectively, but none have mitigated disease progression. Understandably, the hypothesis has come under quite a bit of fire in recent years, not least because in animal models the buildup of amyloid plaques is not necessarily associated with neuron degeneration and death. And if things weren’t already difficult enough, new insight into additional sources of variability in the large amyloid precursor protein, from which beta amyloid is cleaved, further muddies the picture. So what might be a way to solve the puzzle?

Some clinicians and researchers think that the plaques themselves are a stepping stone to later disruption caused by other factors, most notably tau tangles. The mechanism is not yet well understood, but research indicates that tau disruption and aggregation spreads in the brain in the presence of amyloid plaques, but is localized without them. The scenario suggests that tau and/or associated mechanisms underlie actual neuron death, and as tau spreads to more areas of the brain, more neurons die. It also explains why reducing beta amyloid only after plaques form has no therapeutic effect, as the amyloid plaques will have already served as stepping stones for tau by the time the treatment regimen starts. If plaque formation can be prevented or reversed at an early stage, however, tau won’t spread as quickly and neurodegeneration will be mitigated. This modified amyloid hypothesis, as it were, is gaining traction, and has an appealing aspect: some of the previously tested drugs that failed to slow or reverse AD in patients might actually work as preventatives if used far earlier in disease progression, long before symptoms arise.

But how do you know who’s in the pre-symptomatic stages of AD? After all, until quite recently any definitive diagnosis of AD had to be posthumous. Therefore it’s imperative to find biomarkers to identify relatively young people at the earliest stages of disease or at the highest risk before it even starts. For this purpose, advanced imaging methods hold perhaps the most promise, at least in the short term, and have been the focus of significant interest over the past couple of years. Positron emission tomography (PET) in particular allows visualization of amyloid and tau at preclinical stages, and the latest work is identifying early pathophysiological patterns in both human patients and model organisms such as mice. These patterns must be established as preceding disease at a high confidence level, as any pre-symptomatic therapy or preventative should be administered only to those with the highest likelihood of subsequent disease.

Speaking of mice, there has also been important progress in how AD is studied in the laboratory. In addition to the duration and environmental challenges noted above, mouse-based research has been impeded by patent trolls, confounded by phenotypic inconsistencies (e.g., a popular mouse model for amyloid accumulation exhibited no neurodegeneration), limited by differences in lifespan and physiology between mice and humans, and more. For a long time the field also focused on early-onset AD because of its clearer genetic contributors, but that work yielded data that is likely not applicable to most human patients with LOAD.

To overcome these challenges, it will take more than one lab or one group. In a session about the current efforts underway to improve model organism research for AD, the predominant theme was collaboration and the importance of sharing resources and data. JAX’s work in this area, spearheaded by the MODEL-AD center, was a centerpiece, as more and more mice that carry variants and mutations seen in human LOAD patients are being produced and characterized. Not all will display relevant disease phenotypes, but the early indications are promising for several new strains. Additional and complementary work is being done by the UK Dementia Research Institute, RIKEN in Japan, and the Cure Alzheimer’s Research Consortium in the United States. The effort to understand the complex processes underlying LOAD is an ongoing imperative, and timely, accurate preclinical research data is necessary if an effective preventative or treatment is to be developed.

The upshot, it seems, is that it’s no surprise that AD, and LOAD in particular, has been so difficult to address clinically. The good news, though, is that the field is rising to meet its challenges, and the near future holds exciting potential. It took the Hubble Space Telescope to peer far enough into space to learn about the universe’s origins. We may now have the tools to peer far enough into our own brains to learn how they work, and how to keep them working well, even at ages when, in days past, many of our neurons would already have died.

* Or slightly fewer, perhaps. Calculations are confounded by uneven distribution of neurons in different areas of the brain, so extrapolation can be difficult. In fact, a few years ago a scientist liquefied four human brains and counted nuclei in a defined volume of the liquid to overcome this issue. The rather disagreeable process yielded a slightly lower average total: 86 billion neurons per brain.

Mark Wanner followed graduate work in microbiology with more than 25 years of experience in book publishing and scientific writing. His work at The Jackson Laboratory focuses on making complex genetic, genomic and technical information accessible to a variety of audiences. Follow Mark on Twitter at @markgenome.