Link List

Wednesday, May 9, 2018

The funding mantra of genomewide mapping is that common variants cause common disease (CV-CD). This was convenient for HapMap and other association-based attempts to find genetic causation, because the approach didn't require very dense genotyping or massive sample sizes. Normally, based on Mendel's widely known experiments, one would expect anything 'genetic' to run in families; however, because of notions like low penetrance--a low probability of having the trait even if you've inherited the causal variants--small nuclear families can't work, as a rule, and big enough families would be too costly or even impossible to ascertain. In particular, for traits due to the effects of many genes, to environmental factors, or to variants with only weak causal effects, families would not really be practicable.

So, conveniently, when DNA sequencing on a genomewide scale became practicable, the idea was that sequence variants might not have wholly determinative effects but the effects might be enough that we just need to find them in the population as a whole, not the smallish families that we have a hard enough time ascertaining. People carrying such a variant would have a higher probability of showing the trait.

It was a convenient short-cut, but there is a legitimate evolutionary rationale behind this: The same mutation will not recur very often, so that if there are many copies of a causative allele (sequence variant) in a population, these are probably identical by descent (IBD), from a single ancestral mutational event. In that sense, genomewide association studies (GWAS) are finding family members carrying the same allele, but without having to work through the actual (inaccessible, very large, multi-generation ancestral) pedigrees that connect them. If the IBD assumption were not basically true, then different instances of the same nucleotide change would have different local genomic backgrounds, and their effects would often vary among the descendants of different mutations, confounding association tests--though the analysis rarely, if ever, attempts to detect or adjust for this.

In principle it can work well if a trait really is caused by alleles at a tractably small number of genes. That's a very big 'if', but assuming it--which is similar to assuming the trait is a classical Mendelian trait--one can find association of the allele with the trait among affected people, because, at that site at least, they are distant relatives. To be detected, though, the effect of a given allele has to be strong enough, and its frequency in the sample high enough, to pass a statistical significance test. This is a potentially major issue, since in very large samples searching countless sites across the genome, reaching significance requires surviving stringent multiple-testing thresholds, which in a sense requires an allele's frequency and/or individual impact to be high.
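
As a sketch of the statistical logic (all counts are invented for illustration; a real GWAS would use more sophisticated tests and covariates), a case-control comparison at one genome site boils down to a 2x2 table of allele carriers vs non-carriers, tested for independence. A chi-square statistic above about 3.84 passes a single nominal 0.05 test, but a genome-wide search conventionally demands the equivalent of roughly 29.7 (p < 5e-8) to survive multiple-testing correction:

```python
# Toy case-control association test, of the kind described above.
# All counts are hypothetical, chosen only to show the machinery.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under independence of rows (case status)
    # and columns (carrier status)
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: carriers / non-carriers of a candidate allele
stat = chi_square_2x2(300, 700,    # cases
                      200, 800)    # controls
print(f"chi-square = {stat:.1f}")
```

Here the invented counts give a large statistic, but shrink the carrier excess modestly and the site vanishes below the genome-wide bar, which is the author's point about weak individual effects.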

In essence, this is the underpinning and implicit justification for the huge GWAS empire. There are many details, but one important assertion by the leaders of the new EverBigger (and more costly) AllOfUs project is that common diseases are their target. Rare diseases generally just won't show up often enough to find statistically reliable 'hits'.

Of course, 'common' is a subjective term, and if one searches millions of genome sites whose allele frequencies vary in the sample, tons of them might be 'common' by such a definition. And they will also have to have effects strong enough to be detectable under suitably convincing significance criteria. So we might expect CV-CD to be a proper description of such studies. But there is a subtle difference: the implication (and once, 20 years ago, the de facto expectation) is that this meant that one or a few common variants cause the common disease.

Obviously, if that assumption of convenience were roughly true, then one can think of pharmaceutical or other preventive measures to target the causal variants in these genes in affected persons. In fact, we have largely based the nearly 20-year GWAS effort on such a wedge rationale, starting with smaller-scale projects like HapMap. Unfortunately, that was a huge success!

Why unfortunately? Because, no matter how you define 'common', what we've clearly found, time and again, trait after trait, is that these common diseases are in each case due to effects of a different set of 'common' alleles whose effects are individually weak. In that sense, the individual allele per se is not very predictive, because many unaffected people also carry that allele. Every case is genetically unique so one Pharma does not fit all. It is, I would assert, highly misrepresentative if not irresponsible to suggest otherwise, as is the common PR legerdemain.

Instead, what we know very clearly is that in many or most 'common' disease instances, since each case is caused by different sets of alleles, not only is each case causally unique, but no one allele is, in itself, even nearly necessary for the disease. There isn't usually a single 'druggable' target of Pharma's dreams. There was perhaps legitimate doubt about this 20 years ago when the adventure began, but no longer.

Indeed, it is generally rare for anything close to a majority of cases, compared to controls, to share any given allele, and even when that happens, the risk, as statistically estimated by comparing cases and controls, is usually only slightly attributable to that allele's effects. Even then, most variation typically remains unaccounted for, as measured against the trait's estimated heritability, because it seems due to a plethora of alleles too weak or rare to be detected in the sample--even if they are there and are, collectively, the greatest contributor to risk. And, of course, we've not mentioned lifestyles and other environmental factors, the often largely non-overlapping results from different populations, or various other complications.
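
A toy simulation, with entirely invented parameters, illustrates the predictive weakness being described. Under a polygenic threshold model--many common risk alleles, each individually weak, with the top few percent of total burden 'affected'--most carriers of any given risk allele are unaffected, so the allele by itself predicts little:

```python
import random

# Polygenic-threshold toy model. All numbers (loci, allele frequency,
# sample size, affection threshold) are invented for illustration.
random.seed(1)
N_LOCI, FREQ, N_PEOPLE = 100, 0.3, 5000

def genotype():
    # Count of risk alleles (0, 1, or 2) at each diploid locus
    return [sum(random.random() < FREQ for _ in range(2))
            for _ in range(N_LOCI)]

people = [genotype() for _ in range(N_PEOPLE)]
scores = [sum(g) for g in people]                    # total risk burden
threshold = sorted(scores)[int(0.95 * N_PEOPLE)]     # top ~5% 'affected'
affected = [s > threshold for s in scores]

# How predictive is carrying a risk allele at one arbitrary locus?
carriers = [i for i, g in enumerate(people) if g[0] > 0]
frac = sum(affected[i] for i in carriers) / len(carriers)
print(f"{frac:.2f} of risk-allele carriers at locus 0 are affected")
```

The carrier group's affection rate barely exceeds the population's ~5%, even though the locus genuinely contributes: causal, common, and nearly useless for prediction.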

The non-Mendelian Mendelian reality of life
I think that as a community we were led into this causal cul de sac by taking Mendel too literally or too hopefully. To be sure, some traits are qualitative--they appear in two or a few distinct states, like green vs yellow peas--and these are basically the kinds of traits Mendel studied, because they were tractable. In such cases each gene transmits in families in a regular way that, in his honor, we call 'Mendelian'. And human genetics has had great success identifying them and their causal genes (cystic fibrosis is one well-known example, but there have been many others). However, common diseases are generally not caused by individual alleles at single genes. Quantitative geneticists, such as agricultural breeders, have basically known about the complexity of most traits for a century, even if specific contributing genes couldn't be identified until methods like GWAS came along 15-20 years ago.

Since we know all this now, from countless studies, it is irresponsible to hijack huge funding for more and more of the same, based on a CV-CD promise that neither the public nor many investigators understand (or, if they do, dare acknowledge). One might go farther and suggest that this makes 'CV-CD' a semantic shell-game that Congress and the public are still buying--bravely assuming that the administrators and scientists themselves, who are pushing this view, actually understand the genomic (and environmental) landscape.

NIH Director Collins is busy and has to worry about his institute's budget. He may or may not know the kinds of things we've mentioned here--but he should! His staff and his advisors should! We have not invented these points, whether or not we've explained them fully or precisely enough, and we have no vested interest in the viewpoint we're expressing. But the evidence shows that research should now be capitalizing, so to speak, on what we've actually learned from the genomic mapping era, rather than just doing more of the same, no matter how safe that is for careers (a structural problem that society should remedy).

Instead of ever more wheel-spinning, what we really need is new thinking, different rather than just more of the same Big Data enumeration. Until new ideas bubble up, neither we nor anyone else can specify what they should be. Continuing to pay for ever bigger data serves several immediate interests very well: the academic enterprise whose lifeblood includes faculty salaries and overhead funding for research done in their institution, the media and equipment suppliers who thrive on ever-biggerness, and the administrators and scientists whose imagination is too impoverished to generate some actual ideas. More is easier, more insightful is very much harder.

So, yes, common diseases are caused by common variants--tens or hundreds of them! Enumerating them is becoming a stale, repetitive costly business and maybe 'business' is the right word. The public is paying for more, but in a sense getting less. Until some day, someone thinks differently.

Sunday, May 6, 2018

So the slogan du jour, All Of Us, is the name of a 1.4 billion dollar initiative being launched today by NIH Director Francis Collins. The plan is to enroll one million volunteers in this mega-effort, the goal of which is, well, it depends. It is either to learn how to prevent and treat "several common diseases" or, according to Dr Collins who talked about the initiative here, "It's gonna give us the information we currently lack" to "allow us to understand all of those things we don't know that will lead to better health care." He's very enthusiastic about All of Us (aka Precision Medicine), calling it a "national adventure that's going to transform medical care." This might be viewed in the context of promises in the late 1900s that by now we'd basically have solved these problems--rather than needing ever-bigger longer-term 'data'.

And one can ask how the data quality can possibly be maintained if medical records of whoever volunteers vary in their quality, verifiability, and so on. But that is a technical issue. There are sociological and ontological issues as well.

All of Us?
Serving 'all of us' sounds very noble and representative. But let's see how sincere this publicly hyped promise really is. Using very rough figures, which will serve the point, there are 320 million Americans, so 1 million volunteers would be about 0.3% of 'all' of us. So first we might ask: What about achieving some semblance of real inclusive fairness in our society, by making a special effort to oversample African Americans, Hispanics, and Native Americans, before the privileged, mainly white, middle class get their names on the rolls? That might make up for past abuses affecting their health and well-being.

So, OK, let's stop dreaming but at least make the sample representative of the country, white and otherwise. Does that imply fairness? There are, for example, about 300,000 Navajo Native Americans in the country. If All Of Us means what it promises, there would be about 950 Navajos in the sample. And about 56 Hopi tribespeople. And there are, of course, many other ethnic groups that would have to be included. Random (proportionate) sampling would include about 600,000 'white' people in the sample.
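
The back-of-envelope arithmetic here can be made explicit. The figures below are the same rough, Google-level numbers the text uses (the Hopi count is the one implied by the text's "about 56"); proportional allocation is just each group's population share times the sample size:

```python
# Proportional allocation in a 1-million-person sample, using the
# post's own rough population figures.
US_POPULATION = 320_000_000
SAMPLE_SIZE = 1_000_000

groups = {
    "Navajo": 300_000,
    "Hopi": 18_000,                # rough figure implied by the text's ~56
    "white": 192_000_000,          # ~60% of the US, per the text's ~600,000
}

for name, pop in groups.items():
    expected = round(pop / US_POPULATION * SAMPLE_SIZE)
    print(f"{name}: {expected} expected in the sample")

print(f"sample fraction: {SAMPLE_SIZE / US_POPULATION:.2%}")
```

The Navajo figure comes out near 940 (the text's "about 950"), and the whole sample is roughly 0.3% of the population--the scale of the representativeness problem being described.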

These are just crude subpopulation counts from superficial Google searching, but the point is that in no sense is the proposed self-selected sample of volunteers going to represent All Of Us in anything resembling a fair distribution of medical benefits. You can't get as much detailed genomewide (not to mention environmental) data from a few hundred sampled individuals as you can from hundreds of thousands. To be fair and representative in that sense, the sample would have to be stratified in some way rather than volunteer-based. It seems very unlikely that the volunteers who will be included are in any real sense going to be representative of the US, rather than, say, of university and other privileged communities, major cities, and so on--even if not because of intentional bias, but simply because such people are more likely to learn of All Of Us and to participate.

Of course, defining what is fair and just is not easy. For example, there are far more Anglo Americans than Navajo or Hopi, so the Anglos might expect to get most of the benefits. But that isn't what All Of Us seems to be promising. To get adequate information from a small group, given the causal complexity we are trying to understand, that group should probably be heavily oversampled. Even doing that would leave room for samples from the larger populations of Anglo and African Americans adequate for the kind of discovery we could anticipate from this sort of Big Data study of the causes of common disease.

More problems than sociology
That is the sociological problem of claiming representativeness of 'all' of us. But of course there is a deeper problem that we've discussed many times, and that is the false implied promise of essentially blanket (miracle?) cures for common diseases. In fact, we know very well that complex causation, of the common diseases that are the purported target of this initiative, involves tens to thousands of variable genome locations, not to mention the environmental ones that are beyond simple counting. Further, and this is a serious, nontrivial point, we know that these sorts of contributing causes include genetic and environmental exposures in the sampled individuals' futures, and these cannot be predicted, even in principle. These are the realities.

And, even if the project were truly representative of the US population demographically, as a sample of self-selected volunteers there remains the problem of representing diseases in the population subsets. Presumably this is why they are focusing on "common diseases", but still, the sample will have to be stratified by possible causal exposures (lifestyles, diets, etc.) and ethnicity, and then they'll have to have enough controls to make case-control comparisons meaningful. So, how many common diseases, and how will they be represented (males/females, early/late onset, related to what environmental lifestyles, etc.)? One million volunteers isn't going to be representative, nor a large enough sample once it has to be stratified for statistical analysis, especially if the sample also includes the ethnic diversity that the project promises.

And there's the epistemological problem of causation being too individualistic for this kind of hypothesis-free data fishing to solve--indeed, it is just this kind of research that has shown us clearly that it is not what we need now. We need research focused on problems that really are 'genetic', and some movement of resources toward new thinking, rather than perpetuating the same kind of open-ended 'Big Data' investment.

And more
In this context, the PR seems mostly to be spin for more money for NIH and its welfare clients (euphemistically called 'universities'). Every lock on Big Money for the Big Data lobby, or perhaps belief-system, excludes funding for focused research, for example, on diseases that would seem to be tractably understood by real science rather than a massive hypothesis-free fishing expedition.

How could the 1.4 billion dollars be better spent? A legitimate goal might be to do a trial run of a linked electronic records system as part of an explicit move towards what we really need, and which would really include all of us: a real national healthcare system. This could be openly explained--we're going to learn how to run such a comprehensive system, so we don't get overwhelmed with mistakes. But then, for the very same reason, a properly representative project is what should be done. That would involve stratified sampling and a more properly thought-out design. But that would require new thinking about the actual biology.

Thursday, April 26, 2018

Here's a link to a famous John Cleese (of Monty Python fame) sketch on gene mapping. We ask you to decide whether this is funnier than the daily blast of GWAS reports and their proclaimed transformative findings: which is more Monty than the full Monty.

Why we keep spending money on papers that keep showing how MontyPythonish genomewide association with complex traits is, is itself a valid question. To say with a straight face that we now know of hundreds, much less thousands, of genomewide sites that affect some trait--in some particular sample of humans, with much or most of the estimated heritability still unaccounted for--without conceding that enough is enough, is almost in itself a comedy routine.

We have absolutely no reason--or, at least, no need--to criticize anything about individual mapping papers. Surely there are false findings, mis-used statistical tests, and so on, but that is part of normal life in science, because we don't know everything and have to make assumptions. Some of the findings will be ephemeral, sample-specific, and so on. That doesn't make them wrong. Instead, the critique should be aimed at authors who present such work with a straight face as if it were (1) important, (2) novel in any meaningful way, and (3) not itself a demonstration that, with so many qualitatively similar results by now, we should stop public funding of this sort of work. We should move on to more cogent science that reflects, but doesn't just repeat, the discovery of genomic causal (or, at least, associational) complexity.

The bottom line
What these studies show, and there is no reason to challenge the results per se, is that complex traits are not to be explained by simple, much less additive, genetic models. There is massive causal redundancy, with similar traits due to dissimilar genotypes. But this shouldn't be a surprise. Indeed, we can easily account for it in terms of evolutionary phenomena, both related to processes like gene duplication and to the survival protection that alternative pathways provide.

Even if each GWAS 'hit' is correct and not some sort of artifact, it is unclear what the message is. To us, who have no vested interest in continuing, open-ended GWAS efforts with ever-larger samples, the bottom line is that this is not the way to understand biological causation.

We reach that view on genomic considerations alone, without even considering the environmental and somatic mutation components of phenotype generation, though these are often obviously determinative (as secular trends in risk clearly show). We reach this view without worrying about the likelihood that many or perhaps even most of these 'hits' are some sort of statistical, sampling, analytic or other artifact, or are so indirectly related to the measured trait, or so environment-dependent as to be virtually worthless in any practical sense.

What GWAS ignore
There are also three clear facts that are swept under the rug, or just ignored, in this sort of work. One is somatic mutations, which are not detected in constitutive genomewide studies but could be very important (e.g., in cancer). The second is that DNA is inert and does something only in interaction with other molecules; many of those interactions relate to environmental and lifestyle exposures, which candid investigators know are usually dreadfully inaccurately measured. The third is that future mutations, not to mention future environments, are unpredictable, even in principle. Yet the repeatedly stressed objective of GWAS is 'precision' predictive medicine. It sounds like a noble objective, but it's not so noble given the known and knowable reasons these promises can't be met.

So, if biological causation is complex, as these studies and diverse other sorts of direct and indirect evidence clearly show, then why can't we pull the plug on these sorts of studies, and instead, invest in some other mode of thinking, some way to do focused studies where genetic causation is clear and real, rather than continuing to feed the welfare state of GWAS?

We're held back by inertia, and the lack of better ideas, but another important if not defining constraint is that investigator careers depend on external funding and that leads to safe me-too proposals. We should stop imitating Monty Python, and recognize that if the gene-causation question even makes sense, some new way of thinking about it is needed.

Wednesday, April 25, 2018

Drug resistant malaria has emerged in Southeast Asia several times in history and subsequently spread globally. When there are no other antimalarials to use this has led to public health and humanitarian disasters, especially in high transmission settings (parts of sub-Saharan Africa).

Currently there is a single effective antimalarial left: Artemisinin. But malaria parasites in Southeast Asia are already developing resistance to this antimalarial, leading many in the malaria research community and in public health to worry that we will soon be left with untreatable malaria.

One proposed solution to this problem has been to attempt to eliminate the parasite from regions where drug resistance consistently emerges. The proposed strategy uses a combination of increasing access to health care (so that ill people can be quickly diagnosed and treated, therefore reducing transmission) and targeting asymptomatic reservoirs by asking everyone who lives in a community where there is a large reservoir to take antimalarials, regardless of whether or not they feel ill (mass drug administration).

In Southeast Asia malaria largely persists in areas that are difficult to access and remote. The parasite thrives in conflict zones and in the fringes of society. These are the areas that frequently don’t have strong healthcare or surveillance systems and some have even argued that control or elimination would be impossible in such areas because of these difficulties.

Today on World Malaria Day my colleagues and I published the results after 3 years of an elimination campaign in Karen State of Myanmar. The job is not complete. But this work has shown that it is feasible to set up a health care system, even in remote and difficult-to-access areas, and that most villages can achieve elimination through beefing up the health care system alone. In places where there are high proportions of people with asymptomatic malaria, access to health care alone doesn’t suffice, and malaria persists for a longer period of time. With high participation in mass drug administration, which requires a great deal of community engagement, these communities are able to quickly eliminate the parasites as well. We are hopeful that similar programs will be expanded throughout Southeast Asia, regardless of the geographic and political characteristics of the regions, so that elimination can be achieved and sustained.

[Figure: Malaria (P. falciparum) incidence in the target area over three years. The project expanded over the three years, and overall incidence has decreased.]

Tuesday, April 24, 2018

When I was active in the grant process, including serving as a panelist for NIH and NSF, I realized that the work overload--and the somewhat arbitrary practice that if any reviewer spoke up against a proposal it got conveniently rejected without much if any discussion--meant that reviews were usually scanty at best. Applications are assigned to several reviewers to evaluate thoroughly, so that the entire panel doesn't have to read every proposal in depth, yet each member must vote on each proposal. Even with this underwhelming level of consideration, the panel members simply cannot carefully evaluate the boxes full of applications for which they are responsible. In my experience, once we got down to business, for those applications not immediately NRF'ed (not recommended for funding), there would be some discussion of the surviving proposals; but even then, with still tens of applications to evaluate, most panelists hadn't read the proposal, and it seemed that even some of the secondary or tertiary assignees had only scanned it. The rest of the panel usually sat quietly and then voted as the assigned readers recommended. Obviously (sssh!), much of the final ranking rested on superficial consideration.

When a panel has a heavy overload of proposals it is hard for things to be otherwise, and one at least hoped that the worst proposals got rejected, those with fixable issues were given some thoughtful suggestions about improvement and resubmission, and at least that the best ones were funded.

But there was always the nagging question as to how true that hopeful view was. We used to joke that a better, fairer reviewing system was to put the proposals to the Stairway Test: throw them down the stairs and the ones that landed closest to the bottom would be funded!

Well, that was a joke about the apparent fickleness (or, shall we say randomness?) of the funding process, especially when busy people had to read and evaluate far, far too many proposals in our heavily overloaded begging system, in which not just science but careers depend on the one thing that counts: bringing in the bucks.

The Stairway Test (technical criteria)

Or was it a joke? A recent analysis in PNAS showed that randomness is perhaps the best way to characterize the reviewing process. One can hope that the really worst proposals are rejected, but as for the rest.....the evidence suggests that the Stairway Test would be much fairer.
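
This is not the PNAS analysis itself, but a toy simulation (all parameters invented) shows how noisy scoring drifts toward a lottery. When reviewer scores are mostly noise on top of true proposal quality, ranking by score funds proposals of only modestly higher average quality than drawing names at random:

```python
import random

# Toy model of grant review: 'quality' is each proposal's true merit,
# 'score' is what noisy review reports. All parameters are invented.
random.seed(42)
N_PROPOSALS, N_FUNDED, NOISE = 500, 50, 3.0

quality = [random.gauss(0, 1) for _ in range(N_PROPOSALS)]
score = [q + random.gauss(0, NOISE) for q in quality]  # very noisy reviews

# Fund the top-scored proposals vs a pure lottery ('Stairway Test')
by_score = sorted(range(N_PROPOSALS), key=lambda i: -score[i])[:N_FUNDED]
lottery = random.sample(range(N_PROPOSALS), N_FUNDED)

mean = lambda idx: sum(quality[i] for i in idx) / len(idx)
print(f"mean true quality -- review-ranked: {mean(by_score):.2f}, "
      f"lottery: {mean(lottery):.2f}")
```

With a noise level three times the spread of true quality, the review-ranked portfolio sits far below what a perfect ranking would select, and not dramatically above the lottery--which is the Stairway Test's point.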

I'm serious! Many faculty members' careers literally depend on the grant system. Those whose grants don't get funded are judged to be doing less worthy work, and loss of jobs can literally be the direct consequence, since many jobs, especially in biomedical schools, depend on bringing in money (in my opinion, a deep sin, but in the context of our venal science support system, one not avoidable).

The Stairway Test would allow those who did not get funding to say, quite correctly, that their 'failure' was not one of quality but of luck. Deans and Chairs would, properly, be less able to terminate jobs because of failure to secure funding, if they could not claim that the victim did inferior work. The PNAS paper shows that the real review system is in fact not different from the Stairway Test.

So let's be fair to scientists, and the public, and acknowledge honestly the way the system works. Either reform the system from the ground up, to make it work honorably and in the best interest of science, or adopt a formal recognition of its broken-nature: the Stairway Test.

Wednesday, March 14, 2018

Below is the second installment in a short series of posts by a current Penn State graduate student in Chemical Ecology, Tristan Cofer. The thoughts are based on conversations we have been having, and reading he has been doing on these topics. The idea of the posts is to provide reflections by someone entering the next generation of scientists, and looking at the various issues in understanding, epistemology, and ontology, as they are seen today, by philosophers and in practice:

******************

Probabilities are everywhere. They come up in our conversations when we talk about making plans. They are there in our games as “chances”, “odds”, and “risks”. We use them informally when we make decisions about our health and well-being. And, in a more formal sense, we use them in science when we make inferences about data. Indeed, probabilities are so common that they can at times seem almost familiar.

But just what exactly are we talking about when we talk about “probabilities”? When I say, for instance, that the probability that a tossed coin will land heads up is 50%, am I saying something about that coin’s disposition to produce a certain outcome, or am I only expressing the degree to which I believe that that outcome might occur? Do probabilities exist out there in the real world as things that we can measure, or are they just in our minds as opinions and beliefs?

The short answer seems to be, yes, probabilities are both. They have an objective and a subjective element to them. This duality has apparently been there from the start, when formal probability concepts were first developed in the seventeenth century. According to the philosopher Ian Hacking, during the Renaissance the term “probable” was taken to mean “approved by some authority” rather than by evidence. It was not until the Enlightenment, when early Empiricists first began looking to Nature for “signs” to support causal associations, that “probable” came to mean “having the power to elicit a change”. Hence, “approval by testimony” from people and institutions was superseded by evidential observations. Transforming signs into evidence helped to advance what we might call frequentist-based induction, which was formalized as a mathematical concept in the Port Royal Logic in 1662.

Of course subjective probabilities have hardly disappeared, and in fact, it may be argued that we have seen their resurgence in the popularity of Bayesian- or conditional-based statistical inquiry. That being said, however, I am not sure that understanding how the term “probability” developed gets us much closer to understanding what probabilities really are.

It seems that in order to make progress here, we must talk about cause and effect. Namely, we need to discuss whether probabilities are like physical laws that define an event, or whether they are contrivances that we use to describe things after the fact. If they are descriptions based on the past, then what rationale do we have for extending our inferences into the future? Is there any legitimate guarantee that future events will proceed at the same frequency as their predecessors? And even if they do, then for how long?

On the other hand, we might ask, if probabilities are only descriptive, then what makes them so regular? Why does a tossed coin land heads up one-half of the time, almost as though it had some property that we might call its “probability”? Moreover, how are probabilities such as this determined? Could it be that we really are living in a clock-work universe, and that even our uncertainty is defined by deterministic processes? These questions are perhaps beyond what science and mathematics are able to answer. But maybe that is okay. This seems to be fertile ground for philosophical inquiry, which might provide insights where they are needed most.
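
The regularity in question can at least be exhibited, if not explained, by simulation. A minimal sketch (the perfectly fair coin is of course an idealization) shows the frequentist face of the duality: the running frequency of heads settles near 0.5 as tosses accumulate, which is the law of large numbers at work:

```python
import random

# Simulated coin tosses: the observed frequency of heads drifts toward
# the 'propensity' 0.5 as the number of tosses grows.
random.seed(0)

def heads_frequency(n_tosses):
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

for n in (10, 1000, 100_000):
    print(f"{n:>6} tosses: frequency of heads = {heads_frequency(n):.3f}")
```

Of course, the simulation assumes the 0.5 it then 'discovers', which is exactly the philosophical puzzle: are we measuring a property of the coin, or only describing the output of a rule we put in?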

Wednesday, February 28, 2018

Our daughter Ellen wrote the post that I republish below 3 years ago, and we've reposted it in commemoration of Rare Disease Day, February 29th, each year since. I wish I could include an update reporting that the cause of her rare disease has been identified. She would very much like to know, not only because it would explain this thing that has defined so much of her life, but also because, in this genetics age, being able to tell a new doctor the cause of her condition would mean they'd have no doubts. Sometimes a diagnosis isn't enough, and when you have a rare disease doubt can remain a frequent aspect of encounters with the medical system.

It's not that there has been no action. After a lengthy, ultimately failed attempt by a previous lab, which was unsuccessful for reasons unclear to us but probably technology-related, Ellen is currently included in another large sequencing project, and we're hopeful that we'll get some kind of an answer. They've done whole genome sequencing of her DNA, as well as Ken's and mine, and are about to begin to look for her causal variant. To date, we know that she hasn't been found to have one of the known variants associated with her disease. There are occasional reports of new variants in other families with the same disease, and that could help identify hers, but what if she doesn't have one of these, either?

Finding a causal gene variant is easiest when a disease is rare and there are multiple cases in one family but Ellen is the only person in our family, for as far back as we can trace on both sides, with HKPP. When the disease is rare and only one family member has it, there's not really a peg to hang your hat on -- where do you start to look for the causal variant?

Ellen has classic hypokalemic periodic paralysis (HKPP), a disease for which causal DNA variants in a small number of ion channel genes have been identified in a number of families, where they essentially act as classical Mendelian variants. There are several possibilities here -- she could have a de novo mutation, a mutation new to her, inherited from neither parent. If it's one that is shared by other people with HKPP, that would be easy to identify, but if not, even if it lies in one of the three genes so far found to be associated with the disease, how could it be shown to be causal, rather than simply a mutation with no effect? And searches of 'her' genome are based on blood samples -- what if she carries a somatic mutation that arose after the embryonic separation of blood-related tissues from other tissues?

Some families with HKPP have members with the supposed causal variant who are symptom-free. This isn't unusual in genetics -- it's been called "incomplete penetrance" for a century, which basically means that one can carry a causal mutation without the condition it apparently does cause in others. There can be various explanations for this. For example, when a disease responds to environmental triggers, as does HKPP, it's possible that gene by environment interaction at some critical age is required to set up the cascade of events that leads to paralytic episodes. Curiously, HKPP generally begins at puberty, for some unidentified reason -- perhaps some triggering event doesn't happen in disease-free family members with a causal variant, or perhaps the disease is polygenic rather than monogenic and those who are disease-free don't have the required critical mass of variants. This means that it's possible that Ken or I could have "the" causal variant but, because of incomplete penetrance -- whatever mechanism that would involve -- we don't have the disease. Or, we gave Ellen a mix of variants that together cause her disease, but neither of us carries the full combination that came together in her. At the very least, neither of us carries a known or plausibly relevant variant in the known HKPP-related genes that have been tested.

Ellen isn't the only person with HKPP whose causal variant hasn't been found. Perhaps there are other ion channel genes associated with the disease that are not yet identified. Or perhaps in some people it's too genetically complex for causation to be parsed. Because of all these possible difficulties, identifying the cause of Ellen's disease is not likely to be straightforward. We are hopeful that the geneticists currently working on this will have something to tell her in the end, but whether it's something simple that she'll be able to tell her doctors, we don't yet know.

This is one personal story for Rare Disease Day, but I think it's very relevant to all the promises of "personalized medicine" being made these days. Having your DNA sequenced isn't a magic answer. Sometimes the technology is limiting, sometimes the problem is actually impossible to solve.

By Ellen Weiss

Despite being the product of two of the authors of this blog -- two people skeptical about just how many of the promised fruits of genetic testing will ever actually materialize -- I have been involved in several genetic studies over the years, hoping to identify the cause of my rare disease.

February 29 is Rare Disease Day; the day on which those who have, or who advocate for those who have, a rare disease publicly discuss what it is like to live with an unusual illness, raise awareness about our particular set of challenges, and talk about solutions for them.

I have hypokalemic periodic paralysis, which is a neuromuscular disease: a channelopathy that manifests itself as episodes of low blood potassium in response to known triggers (such as sodium, carbohydrates, heat, and illness) that force potassium from the blood into muscle cells, where it remains trapped due to faulty ion channels. These hypokalemic episodes cause muscle weakness (ranging from mild to total muscular paralysis), heart arrhythmias, difficulty breathing or swallowing, and nausea. The symptoms may last only briefly, or muscle weakness may last for weeks or months, or, in some cases, become permanent.

I first became ill, as is typical of HKPP, at puberty. It was around Christmas of my seventh grade year, and I remember thinking to myself that it would be the last Christmas that I would ever see. That thought, and the physical feelings that induced it, were unbelievably terrifying for a child. I had no idea what was happening; only that it was hard to breathe, hard to eat, hard to walk far, and that my heart skipped and flopped all throughout the day. All I knew was that it felt like something terrible was wrong.

Throughout my high school years I continued to suffer. I had numerous episodes of heart arrhythmia that lasted for many hours, that I now know should've been treated in the emergency department, and that made me feel as if I was going to die soon; it is unsettling for the usually steady, reliable metronome of the heart to suddenly beat chaotically. But because I was bound within the privacy teenagers are known for, I was reluctant to talk about what was happening in my body, and my parents struggled to make sense of my new phobic avoidance of exercise and other activities.

HKPP is a genetic disease and causal variants have been found in three different ion channel genes. Although my DNA has been tested, the cause of my particular variant of the disease has not yet been found. I want my mutation to be identified. Knowing it would likely not improve my treatment or daily life in any appreciable way. I'm not sure it would even quell any real curiosity on my part, since, despite having the parents I have, it probably wouldn't mean all that much to this non-scientist.

But I want to know, because genetics has become the gold standard of diagnostics. Whether it should be or not, a genetic diagnosis is considered to be the hard-wired, undeniable truth. I want that proof in my hand to give to physicians for the rest of my life. And of course, I would also like to contribute to the body of knowledge about HKPP in the hopes that future generations of us will not have to struggle with the unknown for so many years.

For many people, having a rare disease means having lived through years of confusion, terrible illness, misdiagnoses, and the pressure to try to convince skeptical or detached physicians to engage in investigating their suffering.

I was sick for all of my adolescent and young adult years; so sick that I neared the edge of what was bearable. The years of undiagnosed, untreated chaos in my body created irrevocable changes in how I viewed myself and my life. It changed my psychology, induced serious anxiety and phobias, and was the backdrop to every single detail of every day of my life. And yet, it wasn't until I was 24 years old that I got my first clinical clues of what was wrong: an emergency room visit for arrhythmia revealed very low blood potassium. Still, for 4 more years I remained undiagnosed, years of horrible suffering during which my loved ones had to take care of me like a near-infant -- accompanying me to the hospital, watching me vomit and struggle to eat or walk to the bathroom, and waking up at 3am to take care of me. For 4 more years I begged my primary physician and countless ER doctors during desperate visits to investigate what was going wrong, asked them to believe that anxiety was a symptom not a cause, and scoured medical information myself, until I was diagnosed. It wasn't until I was 28 that I found a doctor who listened to me when I told him what I thought I had, made sense of my symptoms, recognized the beast within me, and began to treat me.

My existence, while still stained to a degree every day by my illness, has improved so immeasurably since being treated properly that the idea of returning to the uncontrolled, nearly unbearable sickness I once lived with frightens me very much. I fear having to convince physicians of what I know of my body again.

What I went through isn't all that uncommon among the millions of us with a rare disease. Lengthy periods of misdiagnoses, lack of diagnoses, begging well-meaning but stumped, disbelieving, or truly apathetic physicians to listen to us are common themes. These lost years lay waste to plans, make decisions for us about parenthood, careers, and even whether we can brush our own teeth. They induce mistrust, anxiety, exhaustion.

Each rare disease is, of course, by definition rare. But having a rare disease isn't. Something like 10% of us have one. It shouldn't be a frightening, frustrating, lengthy ordeal to find a physician willing to consider that what a patient is suffering from may be outside of the ordinary, because that isn't unlikely at all. Mathematically, it only makes sense for doctors to keep their eye out for the unusual.
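The arithmetic behind that point is simple: each individual disease is rare, but there are thousands of them, so the chance of having *some* rare disease adds up. A rough back-of-the-envelope sketch, using assumed illustrative numbers (the disease count and average prevalence here are round-number assumptions, not data from this post):

```python
# Assumed, illustrative figures -- not measured values:
n_diseases = 7_000          # rough count of catalogued rare diseases
avg_prevalence = 1 / 60_000 # assumed average per-disease prevalence

# Probability of having none of them, treating diseases as
# independent (a simplification -- some co-occur or overlap):
p_none = (1 - avg_prevalence) ** n_diseases
p_any = 1 - p_none
print(f"Chance of having at least one rare disease: {p_any:.1%}")
```

With these assumed inputs the aggregate chance comes out on the order of one in ten -- individually rare, collectively commonplace, which is exactly why "it's probably something ordinary" is a weaker default than it sounds.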

I hope that one day the messages we spread on Rare Disease Day will have swept through our public consciousness enough that they will penetrate the medical establishment. Until then, I will continue to crave the irrefutable proof of my disorder. I will continue to worry about someday lying in a hospital bed, weak and verging on intolerably sick, trying to convince a doctor that I know what my body needs, a fear I am certain many of my fellow medically-extraordinary peers share.

And that is why I, this child of skeptics, seek answers, hope and proof through genetics.
