About five million people in the US suffer from heart failure, and approximately half of them die within five years of being diagnosed. Only about 2,500 people a year receive a heart transplant – the treatment of last resort. A new heart can be life-saving, but it is also life-changing. Even under the best conditions, the surgery is complex, and recovery carries a heavy physical and emotional burden.

And not all heart transplant recipients fare equally well after the surgery. Researchers have found that black heart transplant patients are more likely to die after surgery than white or Hispanic patients.

While many different factors contribute to the disparity, the research indicates that where patients received their heart transplants played a big role. Black patients were more likely to have their transplants performed at the worst-performing centers.

This is merely one of many examples of health disparities faced by black Americans. But as a cardiologist, I find this finding especially troubling because many of the heart failure patients I treat are black.

So how do patients decide where to have their heart transplants performed? And wouldn’t a person who needs a heart transplant choose to go to a top center?

Quality is obviously a major factor. But there is another big consideration in deciding where to get a transplant: accessibility.

Not all transplant centers have the same results

Researchers at Ohio State University reviewed the records of heart transplants performed at 102 transplant centers in the US from 2000 to 2010. The researchers focused on the rate of death during the first year after the transplant in over 18,000 heart transplant recipients.

They found that black patients had a higher rate of dying within one year of receiving a new heart (15.3%) than either Hispanics (12.5%) or whites (12.8%).

To find out why this was happening, the researchers used a mathematical model to predict the risk of dying within a year after the transplant for every patient based on the severity of their disease and complicating risk factors such as advanced age or reduced kidney function. They then compared the calculated risk with the actually observed death rates. The difference between the prediction and reality allowed them to determine the quality of a transplant center.
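The center-quality metric described here boils down to an observed-versus-expected comparison. A minimal sketch of that logic, with hypothetical numbers rather than the study's actual risk model:

```python
# Illustrative sketch of grading a transplant center by comparing observed
# one-year mortality with the mortality predicted from patient risk factors.
# The numbers are hypothetical, not those of the Ohio State study.

def classify_center(observed_deaths: int, expected_deaths: float) -> str:
    """Label a center by its observed/expected (O/E) mortality ratio."""
    oe_ratio = observed_deaths / expected_deaths
    if oe_ratio < 1.0:
        return "better than expected"
    if oe_ratio > 1.0:
        return "worse than expected"
    return "as expected"

# A hypothetical center with 20 deaths where risk models predicted 25:
print(classify_center(20, 25.0))  # better than expected
```

The key point is that a center is judged not by raw death rates, which are dominated by how sick its patients are, but by how it performs relative to what its patient mix predicts.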

It turned out that a greater proportion of blacks received their heart transplant at centers with higher-than-expected mortality as compared with whites and Hispanics (56.4% versus 47.1% versus 48.1%, respectively).

The contrast was even starker between the top- and worst-performing centers. Blacks had the lowest rate of being transplanted at centers with excellent performance (blacks: 18.5%; whites: 25.3%; Hispanics 28.3%). They also had the highest likelihood of undergoing their transplant surgery at the worst-performing centers.

It turns out that where a person has their transplant is critical. Only 8.7% of black patients died during the first year after the transplant if they were fortunate enough to undergo surgery at a top center. But this number was more than twice as high (18.3%) for blacks at the worst-performing centers.

The study didn’t provide any definitive explanations as to why the majority of blacks underwent heart transplantation at centers with lower than expected outcomes.

Choosing a transplant center isn’t much of a choice

Patients do not “choose” a transplant center by simply looking it up in a catalog or on a website. While performance statistics for each organ transplant center in the United States are publicly available in the Scientific Registry of Transplant Recipients, those statistics are only part of the decision for where a patient will get their transplant. The “choice” is often made for the patients by the doctors who refer them to a transplant center and by the accessibility of the center.

I’m a cardiologist, and in the Chicago area, where I practice, there are five active heart transplant centers. We can show the numbers for the centers to our patients when discussing the possibility of a heart transplant and also provide some additional advice based on our prior experiences with the respective transplant teams. Because our patients are nearly all based in the Chicagoland area, most of these programs are reasonable options for them. However, patients and doctors in cities or regions that don’t have as many transplant centers, or who live in more remote areas may not have the luxury of choice.

Accessibility matters because care doesn’t end with the surgery

Unless you’ve had a heart transplant, or know someone who has, it’s hard to understand just how life-changing the surgery is. I’ve noticed that many people are unprepared for the emotional and physical toll of the surgery and recovery. And it’s this toll that can make accessibility such an important factor when choosing a transplant center.

After surgery, patients spend a couple of weeks recovering in the hospital. Even when they can go home, their health is closely monitored with frequent lab tests and checkups.

After the transplant, patients start taking medications to suppress their immune systems and keep their bodies from rejecting the new heart. And they have to stay on these medications for the rest of their lives. This means a lifetime of close monitoring to make sure that their heart is functioning well and that there aren’t any complications from the immune suppression.

For instance, during the first couple of months after surgery, patients have heart biopsies, where a small piece of the heart is removed to check for signs of rejection, every one to two weeks. As recovery progresses, biopsies may become monthly. The heart sample is so small that it does not damage the heart, but the biopsy is still an invasive procedure requiring hospitalization. And waiting for results can be stressful.

All of this means heart recipients spend a lot of time during the first year after their transplant seeing doctors and waiting for test results. Being close to a transplant center is important – it’s just easier to get to appointments. But accessibility isn’t just about the patient. It’s also about their support network. Imagine going through all of that alone.

On a practical level, family members and friends provide rides to the hospital, keep track of medications and doctor’s appointments and help with household chores during the recovery period. But what is most important is the emotional support that they provide.

So why do black transplant patients tend to wind up in transplant centers that don’t perform as well? Right now, we don’t know. Is it because they were referred to these centers by their cardiologists despite other feasible alternatives? What role does the health insurance of patients play in determining where they receive the heart transplant? Why are centers with a high percentage of black transplant recipients performing so poorly? And most importantly, what measures need to be taken to improve the quality of care?

These are important questions that physicians, public health officials and politicians need to ask themselves in order to address these disparities.

Back in 2001, when we first began studying how regenerative cells (stem cells or more mature progenitor cells) enhance blood vessel growth, our group as well as many of our colleagues focused on one specific type of blood vessel: arteries. Arteries are responsible for supplying oxygen to all organs and tissues of the body and arteries are more likely to develop gradual plaque build-up (atherosclerosis) than veins or networks of smaller blood vessels (capillaries). Once the amount of plaque in an artery reaches a critical threshold, the oxygenation of the supplied tissues and organs becomes compromised. In addition to this build-up of plaque and gradual decline of organ function, arterial plaques can rupture and cause severe sudden damage such as a heart attack. The conventional approach to treating arterial blockages in the heart was to either perform an open-heart bypass surgery in which blocked arteries were manually bypassed or to place a tube-like “stent” in the blocked artery to restore the oxygen supply. The hope was that injections of regenerative cells would ultimately replace the invasive procedures because the stem cells would convert into blood vessel cells, form healthy new arteries and naturally bypass the blockages in the existing arteries.

As is often the case in biomedical research, this initial approach turned out to be fraught with difficulties. The early animal studies were quite promising and the injected cells appeared to stimulate the growth of blood vessels, but the first clinical trials were less successful. It was very difficult to retain the injected cells in the desired arteries or tissues, and even harder to track the fate of the cells. Which stem cells should be injected? Where should they be injected? How many? Can one obtain enough stem cells from an individual patient so that one could use his or her own cells for the cell therapy? How does one guide the injected cells to the correct location, and then guide the cells to form functional blood vessel structures? Would the stem cells of a patient with chronic diseases such as diabetes or high blood pressure be suitable for therapies, or would such a patient have to rely on stem cells from healthier individuals and thus risk the complication of immune rejection?

The complexity of blood-vessel generation became increasingly apparent, both when studying the biology of stem cells as well as when designing and conducting clinical trials. A large clinical study published in 2013 studied the impact of bone marrow cell injections in heart attack patients and concluded that these injections did not result in any sustained benefit for heart function. Other studies using injections of patients’ own stem cells into their hearts had led to mild improvements in heart function, but none of these clinical studies came close to fulfilling the expectations of cardiovascular patients, physicians and researchers. The upside to these failed expectations was that it forced the researchers in the field of cardiovascular regeneration to rethink their goals and approaches.

One major shift in my own field of interest – the generation of new blood vessels – was to reevaluate the validity of relying on injections of cells. How likely was it that millions of injected cells could organize themselves into functional blood vessels? Injections of cells were convenient for patients because they would not require the surgical implantation of blood vessels, but was this attempt to achieve a convenient therapy undermining its success? An increasing number of laboratories began studying the engineering of blood vessels in the lab: investigating the molecular cues which regulate the assembly of blood vessel networks, identifying molecular scaffolds which would retain stem cells and blood vessel cells, and combining various regenerative cell types to build functional blood vessels. This second wave of regenerative vascular medicine is engineering blood vessels which will have to be surgically implanted into patients. It will be much harder to get approval for such invasive implantations than for the straightforward injections conducted in the first wave of studies, but most of us who have moved towards a blood vessel engineering approach feel that there is a greater likelihood of long-term success, even if it takes a decade or longer until we obtain our first definitive clinical results.

The second conceptual shift which has occurred in this field is the realization that blood vessel engineering is not only important for treating patients with blockages in their arteries. In fact, blood vessel engineering is critical for all forms of tissue and organ engineering. In the US, more than 120,000 people are awaiting an organ transplant but only a quarter of them will receive an organ in any given year. The number of people in need of a transplant will continue to grow but the supply of organs is limited and many patients will unfortunately die while waiting for an organ which they desperately need. The advances in stem cell biology have made it possible to envision creating organs or organoids (functional smaller parts of an organ) which could help alleviate the need for organs. One thing that most organs and tissues need is a network of tiny blood vessels that permeate the whole tissue: small capillary networks. For example, a liver built out of liver cells could never function without a network of tiny blood vessels which supply the liver cells with metabolites and oxygen. From an organ engineering point of view, microvessel engineering is just as important as the building of functional arteries.

In one of our recent projects, we engineered functional human blood vessels by combining bone marrow-derived stem cells with endothelial cells (the cells which coat the inside of all blood vessels). It turns out that the stem cells do not become endothelial cells but instead release a molecular signal – the protein SLIT3 – which instructs the endothelial cells to assemble into networks. Using a high-resolution microscope, we watched this process in real time over the course of 72 hours in the laboratory and could observe how the endothelial cells began lining up into tube-like structures in the presence of the bone marrow stem cells. The human endothelial cells were like building blocks, and the human bone marrow stem cells were the builders “overseeing” the construction. When we implanted the assembled blood vessel structures into mice, we could see that they were fully functional, allowing mouse blood to travel through them without leaking or causing any other major problems (see image, taken from reference 3).

I am sure that SLIT3 is just one of many molecular cues released by the stem cells to assemble functional networks, and there are many additional mechanisms which still need to be discovered. We still need to learn much more about which “builders” and which “building blocks” are best suited for each type of blood vessel that we want to construct. The fact that human fat tissue can serve as an important resource for obtaining adult stem cells (“builders”) is quite encouraging, but we still know very little about the overall longevity of the engineered vessels, the best way to implant them into patients, and the key molecular and biomechanical mechanisms which will be required to engineer organs with functional blood vessels. It will be quite some time until the first fully engineered organs are implanted in humans, but the dizzying rate of progress suggests that we can be quite optimistic.

The cholesterol-lowering drugs known as ‘statins’ are among the most widely prescribed medications for patients with cardiovascular disease. Large-scale clinical studies have repeatedly shown that statins can significantly lower cholesterol levels and the risk of future heart attacks, especially in patients who have already been diagnosed with cardiovascular disease. A more contentious issue is the use of statins in individuals who have no history of heart attacks, strokes or blockages in their blood vessels. Instead of waiting for the first major manifestation of cardiovascular disease, should one start statin therapy early on to prevent cardiovascular disease?

If statins were free of charge and had no side effects whatsoever, the answer would be rather straightforward: Go ahead and use them as soon as possible. However, like all medications, statins come at a price. There is the financial cost to the patient or their insurance to pay for the medications, and there is a health cost to the patients who experience potential side effects. The Guideline Panel of the American College of Cardiology (ACC) and the American Heart Association (AHA) therefore recently recommended that the preventive use of statins in individuals without known cardiovascular disease should be based on personalized risk calculations. If the risk of developing disease within the next 10 years is greater than 7.5%, then the benefits of statin therapy outweigh its risks and the treatment should be initiated. The panel also indicated that if the 10-year risk of cardiovascular disease is greater than 5%, then physicians should consider prescribing statins, but should bear in mind that the scientific evidence for this recommendation was not as strong as that for higher-risk individuals.
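The guideline's tiered thresholds amount to a simple decision rule. A minimal sketch of that logic as described above (the hard part in practice – calculating the 10-year risk itself via a validated risk calculator – is assumed here as a given input):

```python
# Sketch of the ACC/AHA threshold logic described in the text. The 10-year
# cardiovascular risk would come from a validated risk calculator; here it
# is simply taken as an input between 0 and 1.

def statin_guidance(ten_year_risk: float) -> str:
    if ten_year_risk > 0.075:   # >7.5%: benefits judged to outweigh risks
        return "statin therapy recommended"
    if ten_year_risk > 0.05:    # >5%: consider, but the evidence is weaker
        return "consider statin therapy"
    return "no statin indicated by risk alone"

print(statin_guidance(0.09))  # statin therapy recommended
print(statin_guidance(0.06))  # consider statin therapy
```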

The recommendation that individuals with a comparatively low risk of developing future cardiovascular disease (10-year risk lower than 10%) would benefit from statins was met with skepticism by some medical experts. In October 2013, the British Medical Journal (BMJ) published a paper by John Abramson, a lecturer at Harvard Medical School, and his colleagues which re-evaluated the data from a prior study on statin benefits in patients with less than 10% cardiovascular disease risk over 10 years. Abramson and colleagues concluded that the statin benefits were overstated and that statin therapy should not be expanded to include this group of individuals. To further bolster their case, Abramson and colleagues also cited a 2013 study by Huabing Zhang and colleagues in the Annals of Internal Medicine which (according to Abramson et al.) had reported that 18% of patients discontinued statins due to side effects. Abramson even highlighted the finding from the Zhang study by including it as one of four bullet points summarizing the key take-home messages of his article.

The problem with this characterization of the Zhang study is that it ignored all the caveats that Zhang and colleagues had mentioned when discussing their findings. The Zhang study was based on the retrospective review of patient charts and did not establish a true cause-and-effect relationship between the discontinuation of the statins and actual side effects of statins. Patients may stop taking medications for many reasons, but this does not necessarily mean that it is due to side effects from the medication. According to the Zhang paper, 17.4% of patients in their observational retrospective study had reported a “statin related incident” and of those only 59% had stopped the medication. The fraction of patients discontinuing statins due to suspected side effects was at most 9-10% instead of the 18% cited by Abramson. But as Zhang pointed out, their study did not include a placebo control group. Trials with placebo groups document similar rates of “side effects” in patients taking statins and those taking placebos, suggesting that only a small minority of perceived side effects are truly caused by the chemical compounds in statin drugs.
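The arithmetic behind that correction is simple enough to check directly from the two figures the Zhang paper reported:

```python
# Recomputing the figures from the Zhang paper as cited in the text: 17.4%
# of patients reported a "statin related incident", and only 59% of those
# actually stopped the medication.
reported_incident = 0.174
stopped_given_incident = 0.59

discontinued = reported_incident * stopped_given_incident
print(f"{discontinued:.1%}")  # 10.3% -- roughly the 9-10% upper bound, not 18%
```

In other words, the 18% figure conflates patients who reported any statin-related complaint with the smaller subset who actually discontinued the drug because of one.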

Admitting errors is only the first step

Whether 18%, 9% or a far smaller proportion of patients experience significant medication side effects is no small matter, because the analysis could affect millions of patients currently being treated with statins. A gross overestimation of statin side effects could prompt physicians to prematurely discontinue medications that have been shown to significantly reduce the risk of heart attacks in a wide range of patients. On the other hand, severely underestimating statin side effects could result in the discounting of important symptoms and the suffering of patients. Abramson’s misinterpretation of the statin side effect data was pointed out by readers of the BMJ soon after the article was published, and it prompted an inquiry by the journal. After re-evaluating the data and discussing the issue with Abramson and colleagues, the journal issued a correction in which it clarified the misrepresentation of the Zhang paper.

Fiona Godlee, the editor-in-chief of the BMJ, also wrote an editorial explaining the decision to issue a correction regarding the question of side effects, and why there was not sufficient cause to retract the whole paper: the other point made by Abramson and colleagues – the lack of benefit in low-risk patients – might still hold true. At the same time, Godlee recognized the inherent bias of a journal’s editor when it comes to deciding whether or not to retract a paper. Every retraction of a peer-reviewed scholarly paper is somewhat of an embarrassment to the authors of the paper as well as the journal because it suggests that the peer review process failed to identify one or more major flaws. In a commendable move, the journal appointed a multidisciplinary review panel which includes leading cardiovascular epidemiologists. This panel will review the Abramson paper as well as another BMJ paper which had also cited the inaccurately high frequency of statin side effects, investigate the peer review process that failed to identify the erroneous claims and provide recommendations regarding the ultimate fate of the papers.

Reviewing peer review

Why didn’t the peer reviewers who evaluated Abramson’s article catch the error prior to its publication? We can only speculate as to why such a major error was not identified by the peer reviewers. One has to bear in mind that “peer review” for academic research journals is just that – a review. In most cases, peer reviewers do not have access to the original data and cannot check the veracity or replicability of analyses and experiments. For most journals, peer review is conducted on a voluntary (unpaid) basis by two to four expert reviewers who routinely spend multiple hours analyzing the appropriateness of the experimental design, methods, presentation of results and conclusions of a submitted manuscript. The reviewers operate under the assumption that the authors of the manuscript are professional and honest in terms of how they present the data and describe their scientific methodology.

In the case of Abramson and colleagues, the correction issued by the BMJ refers not to Abramson’s own analysis but to the misreading of another group’s research. Biomedical research papers often cite 30 or 40 studies, and it is unrealistic to expect that peer reviewers read all the cited papers and ensure that they are being properly cited and interpreted. If this were the expectation, few peer reviewers would agree to serve as volunteer reviewers since they would have hardly any time left to conduct their own research. However, in this particular case, most peer reviewers familiar with statins and the controversies surrounding their side effects should have expressed concerns regarding the extraordinarily high figure of 18% cited by Abramson and colleagues. Hopefully, the review panel will identify the reasons for the failure of BMJ’s peer review system and point out ways to improve it.

It is difficult to obtain precise numbers to quantify the actual extent of severe research misconduct and fraud since it may go undetected. Even when such cases are brought to the attention of the academic leadership, the involved committees and administrators may decide to keep their findings confidential and not disclose them to the public. However, most researchers working in academic research environments would probably agree that these are rare occurrences. A far more likely source of errors in research is the cognitive bias of the researchers. Researchers who believe in certain hypotheses and ideas are prone to interpreting data in a manner most likely to support their preconceived notions. For example, it is likely that a researcher opposed to statin usage will interpret data on side effects of statins differently than a researcher who supports statin usage.

While Abramson may have been biased in the interpretation of the data generated by Zhang and colleagues, the field of cardiovascular regeneration is currently grappling with what appears to be a case of biased interpretation of one’s own data. An institutional review by Harvard Medical School and Brigham and Women’s Hospital recently determined that the work of Piero Anversa, one of the world’s most widely cited stem cell researchers, was significantly compromised and warranted a retraction. His group had reported that the adult human heart exhibits an astonishing regenerative potential, replacing its entire collective of beating heart cells roughly every 8 to 9 years (a 7%–19% yearly turnover of beating heart cells). These findings were in sharp contrast to a prior study which had found only a minimal turnover of beating heart cells (1% or less per year) in adult humans. Anversa’s finding was also at odds with the observations of clinical cardiologists, who rarely see a near-miraculous recovery of heart function in patients with severe heart disease. One possible explanation for the huge discrepancy between the prior research and Anversa’s studies was that Anversa and his colleagues had not taken into account the possibility of contamination that could have falsely elevated the cell regeneration counts.

Improving the quality of research: peer review and more

The fact that researchers are prone to making errors due to inherent biases does not mean we should simply throw our hands up in the air, say “Mistakes happen!” and let matters rest. High-quality science is characterized by its willingness to correct itself, and this includes improving methods to detect and correct scientific errors early on so that we can limit their detrimental impact. The realization that the lack of reproducibility of peer-reviewed scientific papers is becoming a major problem for many areas of research, such as psychology, stem cell research and cancer biology, has prompted calls for better ways to track reproducibility and errors in science.

One important new paradigm being discussed to improve the quality of scholarly papers is post-publication peer evaluation. Instead of viewing the publication of a peer-reviewed research paper as an endpoint, post-publication peer evaluation invites fellow scientists to continue commenting on the quality and accuracy of the published research even after its publication and to engage the authors in this process. Traditional peer review relies on just a handful of reviewers who decide the fate of a manuscript, but post-publication peer evaluation opens up the debate to hundreds or even thousands of readers, who may be able to detect errors that could not be identified by the small number of traditional peer reviewers prior to publication. It is also becoming apparent that science journalists and science writers can play an important role in the post-publication evaluation of published research by investigating and communicating flaws identified in research papers. In addition to helping dismantle the Science Mystique, critical science journalism can help ensure that corrections, retractions or other major concerns about the validity of scientific findings are communicated to a broad non-specialist audience.

In addition to these ongoing efforts to reduce errors in science by improving the evaluation of scientific papers, it may also be useful to consider new proactive initiatives which focus on how researchers design and perform experiments. As the head of a research group at an American university, I have to take mandatory courses (in some cases on an annual basis) informing me about laboratory hazards, the ethics of animal experimentation or the ethics of how to conduct human studies. However, there are no mandatory courses helping us identify our own research biases or minimize their impact on the interpretation of our data. There is an underlying assumption that if you are no longer a trainee, you probably know how to perform and interpret scientific experiments. I would argue that it does not hurt to remind scientists regularly – no matter how junior or senior – that they can become victims of their biases. We have to learn to continuously re-evaluate how we conduct science and to be humble enough to listen to our colleagues, especially when they disagree with us.

One of the world’s largest clinical cell therapy trials has begun to enroll 3,000 heart attack patients, some of whom will have bone marrow cells extracted with a needle from their hip and fed into their heart using a catheter in their coronary arteries.

The BAMI trial has €5.9m in funding from the European Commission and will be conducted in ten European countries. Enlisted patients will be randomly assigned into two groups: one group will receive the standard care given to heart attack patients while the other will get an added infusion of bone marrow cells.

A number of studies, including one in the New England Journal of Medicine and another in the European Heart Journal, have suggested that bone marrow cells could be beneficial to patients with heart disease. However, because these studies were too small to work out whether cell infusions affected patients’ survival, they instead focused on the extent of scar formation after a heart attack or the ability of the heart muscle to contract after cell infusion.

One commonly used surrogate measure is the cardiac ejection fraction, which measures the fraction of blood squeezed out by the heart during a contraction. A healthy ejection fraction ranges from 55% to 65%. Bone marrow cell infusion has been associated with a modest but statistically significant improvement in heart function. In 2012, a comprehensive analysis of 50 major studies with a combined total of 2,625 heart disease patients showed that cardiac ejection fraction in patients receiving these infusions was 4% higher than in control patients.
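For readers unfamiliar with the measure, ejection fraction is computed from two ventricular volumes. A small sketch with hypothetical volumes:

```python
# Ejection fraction: the share of blood in the filled left ventricle
# (end-diastolic volume, EDV) that is ejected in one beat, leaving behind
# the end-systolic volume (ESV). The volumes below are illustrative, in mL.

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    return (edv_ml - esv_ml) / edv_ml * 100.0  # expressed as a percentage

# A hypothetical healthy heart: fills to 120 mL, 48 mL remains after contraction.
print(round(ejection_fraction(120, 48)))  # 60 -- within the 55-65% healthy range
```

This also makes clear why ejection fraction is only a surrogate: it tracks the pumping efficiency of one chamber, not whether the patient ultimately lives longer.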

While the results were encouraging, the study was a retrospective analysis of patients who had varying treatments and endpoints. There also remain questions over the 400 patients included in the analysis from trials showing benefits of bone marrow cell infusions that were conducted by the controversial German cardiologist Bodo Strauer, whom some scientists have accused of errors in research.

The new large-scale BAMI trial will be able to provide a more definitive answer to the efficacy of bone marrow cell infusions and address the even more important question: does this experimental treatment prolong the lives of heart attack patients?

A hard cell

Despite the impressive target of enrolling 3,000 patients, there is a problem with how the trial is being framed. The underlying premise of why bone marrow cells are thought to improve heart function is that the bone marrow contains stem cells which could potentially regenerate the heart. In media reports, the BAMI trial is portrayed as a study which will test whether stem cells can heal broken hearts, and a press release by Barts Health NHS Trust, which is leading on the trial, described the study as “the largest ever adult stem cell heart attack trial”. But the scientific value of the BAMI trial for stem cell research is questionable.

In 2013, a Swiss study reported the results of treating heart attack patients with bone marrow cells. Not only did the study find no significant improvement of heart function with cell therapy, the researchers also reported that only 1% of the infused cells had clearly defined stem cell characteristics. The vast majority of the infused bone marrow cells were a broad mixture of various cell types, including immune cells such as lymphocytes and monocytes.

Scientific studies have even cast doubts about whether any of the scarce stem cells in bone marrow can convert into beating heart muscle cells. A study published in 2001 suggested bone marrow cells injected into mouse hearts could differentiate into heart muscle cells, but the finding could not be replicated in a subsequent study published in 2004.

If there are so few stem cells in the bone marrow and if the stem cells do not become cardiac cells, then how does one explain the improvements observed in the smaller studies? Researchers have proposed a variety of potential explanations, including the release of growth factors or proteins by bone marrow cells that are independent of their stem cell activity.

The disease machine

The success of modern medicine lies in its ability to isolate causal mechanisms of disease and design therapies which specifically target these mechanisms using rigorous scientific methods. Instead of using nebulous “fever tinctures” or willow bark, physicians now prescribe therapies with well-defined active ingredients such as paracetamol (acetaminophen) or aspirin.

Infusing heterogeneous bone marrow cell mixtures into the hearts of patients seems like a throwback to the era of mysterious herbal extracts containing a variety of active and inactive ingredients.

Even if the BAMI trial succeeds in demonstrating that infusions of bone marrow cell mixtures can prolong lives, the scientific value of the results will still remain doubtful. We will not know whether the tiny fraction of stem cells contained in the bone marrow was responsible for the improvement or whether the effect was due to one of the many other cell types contained in the cell mixtures.

One could argue that it is irrelevant to know the mechanism of action as long as the infusions can prolong patient survival. But for any evidence-based therapy to succeed, it is essential for physicians to know how to dose or modify the therapy according to the needs of an individual patient. This won’t be possible if we don’t even understand how the treatment works.

We should also consider the impact of a negative result. If the BAMI trial fails to show improved survival, will the lack of efficacy be interpreted as a failure of stem cell therapy for heart disease? An alternate explanation would be that a negative result was due to infusing numerous cell types, most of which were not stem cells.

The ultimate test of a treatment’s efficacy is how it fares in controlled, large-scale trials. And these trials need to be grounded in solid scientific data and provide answers that can be interpreted in the context of scientifically sound mechanisms. The BAMI trial might provide an answer to the question of whether or not bone marrow cell infusions are efficacious in heart disease, but it will not teach us much about stem cells.

Jalees Rehman has received research funding from the National Institutes of Health (NIH).

Is it possible to be overweight or obese and still be considered healthy? Most physicians advise their patients who are overweight or obese to lose weight because excess weight is a known risk factor for severe chronic diseases such as diabetes, high blood pressure or cardiovascular disease. However, in recent years, a controversy has arisen regarding the actual impact of increased weight on an individual’s life expectancy or risk of suffering from heart attacks. Some researchers argue that being overweight (body mass index between 25 and 30) or obese (body mass index greater than 30) primarily affects one’s metabolic health, and it is the prolonged exposure to metabolic problems that in turn leads to cardiovascular disease or death.
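The body mass index cut-offs cited above follow directly from the BMI formula: weight in kilograms divided by the square of height in meters. A minimal sketch of the calculation and the categories mentioned in the text (function names are illustrative, not from any of the cited studies):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Classify a BMI value using the standard cut-offs referenced in the text."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

# Example: a person weighing 90 kg at 1.75 m tall
print(round(bmi(90, 1.75), 1))      # 29.4
print(bmi_category(bmi(90, 1.75)))  # overweight
```

Note that BMI is a crude screening measure; as the discussion below shows, the metabolic parameters that accompany a given BMI matter at least as much as the number itself.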

According to this view, merely having excess weight is not dangerous. It only becomes a major problem if it causes metabolic abnormalities such as high cholesterol levels, high blood sugar levels and diabetes, or high blood pressure. This suggests that there is a weight/health spectrum: at one end are overweight or obese individuals with normal metabolic parameters who are not yet significantly affected by the excess weight (“healthy overweight” and “healthy obesity”); at the other end are overweight and obese individuals who also have significant metabolic abnormalities due to the excess weight, and who are at a much higher risk of heart disease and death because of those metabolic problems.

Other researchers disagree with this view and propose that all excess weight is harmful, independent of whether the overweight or obese individuals have normal metabolic parameters. To resolve this controversy, researchers at the Mount Sinai Hospital and University of Toronto recently performed a meta-analysis and evaluated the data from major clinical studies comparing the mortality (risk of death) and heart disease (as defined by events such as heart attacks) in normal weight, overweight and obese individuals and grouping them by their metabolic health.

The study was recently published in the Annals of Internal Medicine (2014) as “Are Metabolically Healthy Overweight and Obesity Benign Conditions?: A Systematic Review and Meta-analysis” and provided data on six groups of individuals: 1) metabolically healthy and normal weight, 2) metabolically healthy and overweight, 3) metabolically healthy and obese, 4) metabolically unhealthy and normal weight, 5) metabolically unhealthy and overweight and 6) metabolically unhealthy and obese. The researchers could only include studies which had measured metabolic health (normal blood sugar, blood pressure, cholesterol, etc.) alongside weight.

The first important finding was that metabolically healthy overweight individuals did NOT have a significantly higher risk of death and cardiovascular events when compared to metabolically healthy normal weight individuals. The researchers then analyzed the risk profile of the metabolically healthy obese individuals and found that their risk was 1.19-fold higher than that of their normal weight counterparts, but this slight increase in risk was not statistically significant. The confidence interval was 0.98 to 1.38; for this finding to be statistically significant, the lower bound of the interval would have needed to be higher than 1.0 instead of 0.98.

The researchers then decided to exclude studies which did not provide at least 10 years of follow-up data on the enrolled subjects. This new rule excluded studies which had shown no significant impact of obesity on survival. When the researchers re-analyzed their data after the exclusions, they found that metabolically healthy obese individuals did have a statistically significant higher risk! Metabolically healthy obese subjects had a 1.24-fold higher risk, with a confidence interval of 1.02 to 1.55. The lower confidence bound was now a tick higher than the 1.0 threshold, making the finding statistically significant.
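The statistical reasoning in the two analyses above hinges on a single rule: a risk ratio is only considered statistically significant if its confidence interval excludes the null value of 1.0. A minimal sketch using the interval bounds reported in the study (the helper function is illustrative, not from the paper):

```python
def is_significant(rr: float, ci_low: float, ci_high: float) -> bool:
    """A risk ratio is statistically significant (at the level implied by the
    confidence interval) only if the interval excludes the null value of 1.0."""
    return ci_low > 1.0 or ci_high < 1.0

# Metabolically healthy obese vs. normal weight, all studies included:
print(is_significant(1.19, 0.98, 1.38))  # False - the interval contains 1.0
# Same comparison, restricted to studies with >= 10 years of follow-up:
print(is_significant(1.24, 1.02, 1.55))  # True - the lower bound exceeds 1.0
```

This also shows how fragile the headline result is: the difference between "no significant risk" and "significantly elevated risk" comes down to a lower bound moving from 0.98 to 1.02.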

Another important finding was that among metabolically unhealthy individuals, all three groups (normal weight, overweight, obese) had a similar risk profile. Metabolically unhealthy normal weight subjects had a three-fold higher risk than metabolically healthy normal weight individuals. The metabolically unhealthy overweight and obese groups also had a roughly three-fold higher risk when compared to their metabolically healthy counterparts. This means that metabolic parameters are far more important as predictors of cardiovascular health than weight alone (compare the rather small 1.24-fold higher risk with the three-fold higher risk).

Unfortunately, the authors of the study did not provide a comprehensive discussion of these findings. Instead, they conclude that there is no “healthy obesity” and suggest that all excess weight is bad, even if one is metabolically healthy. The discussion section of the paper glosses over the important finding that metabolically healthy overweight individuals do not have a higher risk. The authors also do not emphasize that even the purported effects of obesity in metabolically healthy individuals were only marginally significant. The editorial accompanying the paper is even more biased and carries the definitive title “The Myth of Healthy Obesity”. “Myth” is a rather strong word considering the rather small impact of the individuals’ weight on their overall risk.

Some press reports also went along with the skewed interpretation presented by the study authors and the editorial.

“The message here is pretty clear,” said the lead author, Dr. Caroline K. Kramer, a researcher at the University of Toronto. “The results are very consistent. It’s not O.K. to be obese. There is no such thing as healthy obesity.”

Suggesting that the message is “pretty clear” is somewhat overreaching. One of the key problems with using this meta-analysis to reach definitive conclusions about “healthy overweight” or “healthy obesity” is that the study authors and the editorial equate increased risk with being unhealthy. Definitions of what constitutes “health” or “disease” should be based on scientific parameters (biomarkers in the blood, functional assessments of cardiovascular health, etc.) and not just on increased risk. Men have a higher risk of dying from cardiovascular disease than women. Does this mean that being a healthy man is a myth? Another major weakness of the study is that it included no data on regular exercise. Numerous studies have shown that regular exercise reduces the risk of cardiovascular events. It is quite possible that the mild increase in cardiovascular risk in the metabolically healthy obese group may be due, in part, to lower levels of exercise.

This study does not prove that healthy obesity is a “myth”. Overweight individuals with normal metabolic health do not yet have a significant elevation in their cardiovascular risk. At this stage, one can indeed be “overweight” as defined by one’s body mass index but still be considered “healthy”, as long as all the other metabolic parameters are within normal ranges and one abides by general health recommendations such as avoiding tobacco and exercising regularly. If an overweight person progresses to becoming obese, he or she may be at slightly higher risk for cardiovascular events even if their metabolic health remains intact. The important take-home message from this study is that while obesity itself can increase the risk of cardiovascular disease, it is far more important to ensure metabolic health by controlling cholesterol levels and blood pressure, preventing diabetes and encouraging regular exercise, rather than focusing solely on an individual’s weight.

Medieval alchemists devoted their lives to the pursuit of the infamous Philosopher’s Stone, an elusive substance that was thought to convert base metals into valuable gold. Needless to say, nobody ever discovered the Philosopher’s Stone. Well, perhaps some alchemist did get lucky but was wise enough to keep the discovery secret. Instead of publishing the discovery and receiving the Nobel Prize for Alchemy, the lucky alchemist probably just walked around in junkyards, surreptitiously collected scraps of metal and brought them home to create a Scrooge-McDuck-style money bin. Today, we view the Philosopher’s Stone as just a myth that occasionally resurfaces in the titles of popular fantasy novels, but cell biologists have discovered their own version of the Philosopher’s Stone: the conversion of fibroblast cells into precious heart cells (cardiomyocytes) or brain cells (neurons).

Fibroblasts are an abundant cell type, found in many organs such as the heart, liver and skin. One of their main functions is to repair wounds and form scars in the process. They are fairly easy to grow and expand, both in the body and in a culture dish. The easy access to large quantities of fibroblasts makes them analogous to the “base metals” of the alchemist. Adult cardiomyocytes, on the other hand, are not able to grow, which is why a heart attack that causes the death of cardiomyocytes can be so devastating. There is a tiny fraction of regenerative stem-cell-like cells in the heart that are activated after a heart attack and regenerate some cardiomyocytes, but most of the damaged and dying heart cells are replaced by a scar – formed by the fibroblasts in the heart. This scar keeps the heart intact so that the wall of the heart does not rupture, but it is unable to contract or beat, thus weakening the overall pump function of the heart. In a large heart attack, a substantial portion of cardiomyocytes is replaced with scar tissue, which can result in heart failure and death.

A few years back, a research group at the Gladstone Institute of Cardiovascular Disease (University of California, San Francisco) headed by Deepak Srivastava pioneered a very interesting new approach to rescuing heart function after a heart attack. In a 2010 paper published in the journal Cell, the researchers were able to show that plain old fibroblasts from the heart or from the tail of a mouse could be converted into beating cardiomyocytes! The key to this cellular alchemy was the introduction of three genes – Gata4, Mef2c and Tbx5, also known as the GMT cocktail – into the fibroblasts. These genes encode developmental cardiac transcription factors, i.e. proteins that regulate the expression of the genes which direct the formation of heart cells. The basic idea was that by introducing these regulatory factors, they would act as switches that turn on the whole heart gene program. Unlike the approach of the Nobel laureate Shinya Yamanaka, who had developed a method to generate stem cells (induced pluripotent stem cells, or iPSCs) from fibroblasts, Srivastava’s group bypassed the stem cell generation process and directly created heart cells from fibroblasts. In a follow-up paper published in the journal Nature in 2012, the Srivastava group took this research to the next level by introducing the GMT cocktail directly into the hearts of mice and showing that this substantially improved heart function after a heart attack. Instead of merely forming scars, the fibroblasts in the heart were being converted into functional, beating heart cells – cellular alchemy with great promise for new cardiovascular therapies.

As exciting as these discoveries were, many researchers remained skeptical, because the cardiac stem cell field has so often seen paradigm-shifting discoveries appear on the horizon, only to find out later that they cannot be replicated by other laboratories. Fortunately, Eric Olson’s group at the University of Texas Southwestern Medical Center also published a paper in Nature in 2012, independently confirming that cardiac fibroblasts could indeed be converted into cardiomyocytes. They added a fourth factor to the GMT cocktail because it appeared to increase the success of conversion. Olson’s group was also able to confirm Srivastava’s finding that directly treating the mouse hearts with these genes helped convert cardiac fibroblasts into heart cells. They also noticed an interesting oddity: their success in creating heart cells from fibroblasts in the living mouse was far better than what they would have expected from their experiments in a dish. They attributed this to the special cardiac environment and the presence of other cells in the heart that may have helped the fibroblasts convert into beating heart cells. However, another group of scientists attempted to replicate the findings of the 2010 Cell paper and found that their success rate was far lower than that of the Srivastava group. In the paper entitled “Inefficient Reprogramming of Fibroblasts into Cardiomyocytes Using Gata4, Mef2c, and Tbx5”, published in the journal Circulation Research in 2012, Chen and colleagues found that very few fibroblasts could be converted into cardiomyocytes and that the electrical properties of the newly generated heart cells did not match those of adult heart cells. One of the key differences between this Circulation Research paper and the 2010 paper of the Srivastava group was that Chen and colleagues used fibroblasts from older mice, whereas the Srivastava group had used fibroblasts from newborn mice. Arguably, the use of older cells by Chen and colleagues might be a closer approximation to the cells one would use in patients. Most patients with heart attacks are older than 40 years, not newborns.

These studies were all performed on mouse fibroblasts, but they did not address the question of whether human fibroblasts would behave the same way. A recent paper in the Proceedings of the National Academy of Sciences from Eric Olson’s laboratory (published online before print on March 4, 2013 by Nam and colleagues) has now attempted to answer this question. Their findings confirm that human fibroblasts can also be converted into beating heart cells; however, the group of genes required to coax the fibroblasts into converting is slightly different and also requires the introduction of microRNAs – tiny RNA molecules that can regulate the expression of whole groups of genes. Their paper also points out an important caveat: the generated heart-like cells were not uniform and showed a broad range of function, with only some of them spontaneously contracting, and with an electrical activity pattern that did not match that of adult heart cells.

Where does this whole body of work leave us? One major finding seems to be fairly solid: fibroblasts can be converted into beating heart cells. The efficiency of conversion and the quality of the generated heart cells – from mouse or human fibroblasts – still need to be optimized. Even though the idea of cellular alchemy sounds fascinating, there are many additional obstacles that need to be overcome before such therapies could ever be tested in humans. The method used to introduce these genes into the fibroblasts relied on viruses which permanently integrate into the DNA of the fibroblast and could cause genetic anomalies. It is unlikely that such viruses could be used in patients. The fact that the generated heart cells show heterogeneity in their electrical activity could become a major problem for patients, because patches of newly generated heart cells in one portion of the heart might be beating at a different rate or rhythm than other patches. Such electrical dyssynchrony can cause life-threatening heart rhythm problems, which means that the electrical properties of the generated cells need to be carefully understood and standardized. We also know little about the long-term survival of these converted cells in the heart and whether they maintain their heart-cell-like activity for months or years. The idea of directly converting fibroblasts by introducing the genes into the heart – instead of first obtaining the fibroblasts, converting them in a dish and then implanting the converted cells back into the heart – sounds very convenient. But this convenience comes at a price: it requires human gene therapy, which carries its own risks, and it is very difficult to control the cell conversion process in the intact heart of a patient. If cells are converted in a dish, on the other hand, one can test and discard the suboptimal cells and only implant the most mature and functional heart cells.

This process of cellular alchemy is still in its infancy. It is one of the most exciting new areas in the field of regenerative medicine, because it shows how plastic cells are. Hopefully, as more and more labs begin to investigate the direct reprogramming of cells, we will be able to address the obstacles and challenges posed by this emerging field.

Image credit: Painting in 1771 by Joseph Wright of Derby – The Alchymist, In Search of the Philosopher’s Stone via Wikimedia Commons

Any research related to cannabis is bound to be sensationalized or politicized because people have strong emotional and political views about its usage. A few months ago, my fellow Scilogs blogger Suzi Gage wrote an excellent blog post about a study that investigated the link between cannabis usage and intelligence. That study had many critical flaws which were often ignored when the research was reported and discussed in the media. All research should be conducted and reported cautiously. However, when research touches on highly controversial topics, it is even more important that researchers clarify whether their research establishes statistically rigorous associations and true cause-effect relationships or whether they are merely pointing out important observations that require further research to derive definitive conclusions.

“In regard to the literature, cannabis-related stroke is not a myth, and a likely mechanism of stroke in most cannabis users is the presence of reversible MIS induced by this drug. The reality of the relationship between cannabis and stroke is, however, complex because other confounding factors have to be considered (ie, lifestyle and genetic factors). To confirm that cannabis may be a precipitating factor of RCVS with severe complications, an epidemiological study to determine the incidence of MIS, complicated or not by stroke, in the general population and in the cannabis users is necessary.”

The abbreviation MIS stands for multifocal intracranial stenosis, and refers to the presence of multiple blockages in the blood vessels of the brain that are impeding the blood flow and causing the stroke. RCVS stands for reversible cerebral vasoconstriction syndrome and describes a transient spasm of the blood vessels that briefly interrupts the blood flow to the brain, thus causing the stroke. The novelty of the paper by Wolff and colleagues is their idea that cannabis usage causes reversible MIS, i.e. that cannabis transiently causes multiple blockages that are reversed when cannabis is removed.

This is indeed an interesting idea, but unfortunately, they do not present any convincing data to back up this intriguing hypothesis. The data in their paper consists of a table listing 59 cases of stroke in patients who used cannabis, combining their own data with data that has been published by others. The authors admit that many of the patients who used cannabis were also active users of tobacco and alcohol, which makes it difficult to attribute the stroke to cannabis as opposed to these other confounding factors.

Most of the patients had some sort of brain imaging performed to diagnose the stroke, but fewer than half of them had a follow-up scan later on to see if the blockage was still present. In those few cases where follow-up imaging was performed, most did show some degree of reversibility of the blockages in the brain blood vessels. However, the authors do not present data on whether strokes in non-cannabis users show similar reversibility patterns.

Wolff and colleagues also reference an older 2001 paper, “Triggering Myocardial Infarction by Marijuana” by Mittleman et al, and state that their current findings are in accordance with the conclusions of the 2001 paper. The paper by Mittleman and colleagues studied heart attacks and not strokes, but both diseases are caused by reduced blood flow, so it is not unreasonable to compare the data. The 2001 paper stated that the risk of having a heart attack is increased 4.8-fold within an hour of using cannabis. However, one has to bear in mind that the 2001 paper studied heart attacks in 3,882 patients, of whom only 3.2% used cannabis and only 9 had used cannabis within an hour of the heart attack. The 4.8-fold risk determination was therefore based on this tiny sample of 9 patients!
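To get an intuition for why an estimate based on only 9 exposed cases is so fragile, one can look at the multiplicative uncertainty implied by an event count of that size. A rough back-of-the-envelope sketch using a normal approximation on the log scale (an illustrative calculation, not the actual case-crossover method used by Mittleman and colleagues):

```python
import math

def count_uncertainty(n_events: int, z: float = 1.96):
    """Approximate 95% multiplicative uncertainty on a rate estimated from
    n_events, using a normal approximation on the log scale where the
    standard error of log(rate) is roughly 1/sqrt(n_events)."""
    se = 1 / math.sqrt(n_events)
    return math.exp(-z * se), math.exp(z * se)

low, high = count_uncertainty(9)
# With only 9 events, the true rate could plausibly be about half
# or nearly double the observed estimate:
print(round(low, 2), round(high, 2))  # 0.52 1.92
```

Applied to the reported 4.8-fold estimate, this sampling uncertainty alone would span a range of roughly 2.5-fold to 9.2-fold, before even accounting for confounders such as tobacco.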

In summary, the paper by Wolff and colleagues does not really answer the question of whether cannabis-related stroke is myth or reality. The small sample size, the observational nature of the data, the lack of follow-up imaging on all the patients and the lack of controlling for confounding risk factors such as tobacco (which has a very strong association with stroke) make it difficult to draw definitive conclusions. All we can say is that Wolff and colleagues have presented an intriguing hypothesis: cannabis might cause strokes by inducing transient blockages or spasms of blood vessels in the brain. We need more definitive data to determine how cannabis usage is “related” to strokes: is it a true cause of stroke, or is it just a marker of other, more established risk factors such as tobacco usage?