Archives For
health

Generally, when skeptics or popular science writers talk about medicine and money, it’s to ward off something one could call an argument ad shillium: rejecting scientific studies outright with declarations that anyone who sticks up for doctors and pharmaceutical companies over the hot and trendy snake oil salesperson of the month must be a paid shill. Shilling certainly happens, both in the real world and online, but when one’s argument rests on basic science, money is not relevant to the conversation. However, that doesn’t mean it’s not important when new ideas come along and gain serious traction. Case in point: Theranos, a company which many people rightly suspect could shake up healthcare in the United States by offering dozens of blood tests using just a drop of blood at your corner pharmacy, is facing a barrage of questions as to how exactly its tests work, and seems unwilling to tell anyone about its lab-on-a-chip.

Ordinarily, this is where an experienced skeptic would look for signs of quackery. Useless tests, pseudoscientific mumbo-jumbo on the website, avoidance of the FDA, and special pleading for an enigmatic technology which offers vague benefits that don’t run afoul of the agency’s rules for the sale of pharmaceuticals and medical devices. But that’s not the case with Theranos. In fact, the company recently got a nod from the FDA to continue its work and is seeking approval of its technology and testing methods, and scientists who have tried to parse how it can test for so many things with so little blood say that it’s more than likely upgrading old technology into a new, compact toolkit. There’s no voodoo or snake oil here, just good old fashioned science and faster, better computers and machinery. Furthermore, the fees for each test are posted openly, and they’re a lot lower than those of its competitors, whose pricing is opaque at best.

So if there’s nothing amiss at Theranos, why all the secrecy? Well, after many millions spent on research, development, and testing, the company wants to expand significantly, and if it shares how it does what it does with the world, especially if it’s just an overhaul of existing methodology with better machinery, its competitors can quickly catch up and limit its growth. I’m sure it’s also trying to avoid getting patent-trolled and bogged down in expensive litigation, more than likely of the frivolous, made-to-line-lawyers’-pockets variety, since there’s no shortage of people with an abandoned medical testing device patent from which a troll can manufacture an infringement or two and file in East Texas. Perhaps this is unfair to scientists, and to some degree to patients who may want a second opinion after Theranos’ tests show something alarming, but this is the result of setting up a healthcare system with opaque pricing and strict regulation, and of turning the technology world into a legal minefield through easy to obtain, vaguely worded, frivolous patents.

Anti-vaccine activists would have us believe that autism is the result of some sort of undefined, scary-sounding toxicity and can be cured by a gluten-free diet and detoxification, typically conducted by a profiteering quack. However, the real scientific evidence points to genetics and brain development, meaning that no one develops autism or turns autistic, but is born this way and will fall somewhere along the spectrum once the condition can be diagnosed. Recently, another study provided additional evidence for this theory by taking skin cell cultures from people with autism, reverting them into stem cells, inducing them to grow into micro-brains, and comparing the results to skin cells from their non-autistic parents subjected to the same treatment. Right away, the researchers noted an over-abundance of inhibitory neurons, which created roadblocks to forming the connections necessary for processing sensory and social input.

While this isn’t confirmation that this is in fact what causes autism, it’s a substantial step toward identifying the culprits. It also narrowed down the gene responsible and gave the researchers a good idea of how to control its expression. While some pop sci outlets trumpet this as work we can use to develop a cure for autism, I’m not so sure it’s that simple. After all, autism isn’t the kind of disorder where clearing out an excess of inhibitory neurons with pills, or even gene therapy, would suddenly turn autistic individuals into neurotypical ones. With their brains affected from birth, their lives have been built around their neurons compensating for all the neurotransmitter dead ends. It would take many years for their brains to re-wire themselves and fashion a new personality. And while those with severe autism would greatly benefit, would this be a desired, or even an ethical, treatment for high functioning autistic people?

If autism shapes how you see the world and you have always had it, yes, it can make life really confusing and difficult. But when one learns to overcome, to recognize one’s problems and find coping mechanisms, the journey has made this person who he or she is today. It’s tempting, in the words of autism quacks, to “fix” them, but considering how integral autism has been to how they became who they are, the “fix” in question would mean undoing a lifetime of learning, and in some way undoing who they are today, in exchange for the ability to better process certain stimuli and social interactions, and for better emotional coping skills. Again, for low functioning autistic people, there are arguments in favor of the benefits outweighing the risks, but for those who’ve learned to see this condition as a part of who they are and can easily function on their own, even benefiting a little from some of its positive side effects, being “cured” won’t always be the best choice…

When someone dies young, we say that this person’s death is tragic, that he or she died before his or her time. When someone dies in advanced age, we say that the deceased lived a full life and it must have been their time to go, as if age alone was the culprit. Both stances are very problematic from a scientific standpoint because, you see, nowhere does our biological makeup have a kill switch. There is no one gene or one process that acts like a ticking clock so that once it runs out, we die. In fact, there are creatures that seem to be near-immortal in this regard, weird jellyfish and microbes that can regenerate themselves when their bodies become worn and frail, as if to start their lives anew. To blindly submit to mortality as if it’s somehow ordained by some force from above is to neglect the complete lack of any scientific basis for “our time to go” and subscribe to the frequently repeated misconception that people die from old age, when old age doesn’t kill the person but simply makes him or her very easy prey for numerous diseases.

Considering aging a disease or a medical condition is actually a lot more important than it might sound because at stake is government approval for anti-aging drug trials, and with it researchers’ ability to argue that their work is valid even though it fights something doctors don’t see as a disease. In reality, aging is a complex degenerative condition that needs to be treated like one, and while there is no one switch we can flip to stop it, there are things we can do to slow it, partially reverse some of its effects, and allow for a longer period of life in good health and with fewer aches and pains. If the worst thing that comes out of these drug trials is treatments that don’t actually extend our lifespans but drastically improve our physical and mental fitness, that’s already a huge net gain because not only are people better off, we’d also save trillions with less acute treatment needed for the typical physical and cognitive problems of old age. It’s also a very realistic goal over the next 15 years, provided that the funding is there, of course.

We should think of aging much the same way we think of HIV and AIDS. Left untreated, it won’t kill us by itself, but it will open enough doors for something to come along to do that dirty work, and so we should fight it with an arsenal of lifestyle changes. It’s going to be many years before we see rousing successes, but we already have promising pathways desperately in need of the funding and scientific rigor and legitimacy to be taken to their full potential. Convincing those in charge of the purse strings and regulatory approvals that they’re fighting a real problem, rather than just messing around with something nature has preordained, will be crucial because when they don’t think a valid problem is being fought or a valid question is being asked, they’re rather unlikely to keep writing the checks and giving green lights. There’s a cultural battle to be fought here because history is replete with those who claimed to know how to beat aging or death with potions and rituals which yielded nothing or even killed their patients. But armed with the basic understanding of how biology actually works, today’s scientists have a real shot at it.

Over all the posts I’ve written about brain-machine interfaces and their promise for the everyday person, one of the key takeaways was that while the idea was great, the implementation would be problematic because doctors would be loath to perform invasive and risky surgery on a patient who didn’t strictly need said surgery. But what if, when you want to link your brain to a new, complex, and powerful device, you could just get an injection of electrodes that unfurl into a thin mesh which surrounds your neurons and allows you to beam a potent signal out? Sounds like a premise for a science fiction novel, doesn’t it? Maybe something down the cyberpunk alley explored by Ghost In The Shell and The Matrix? Amazingly, no. It’s real, and it’s now being tested in rats with extremely positive results. Just 30 minutes after injection, the mesh unwound itself around the rats’ brains and retained some 80% of its ideal functionality. True, it’s not quite perfect yet, but this is a massive leap toward fusing our minds with machinery.

Honestly, I could write an entire book about all the things easy access to this technology could do in the long run because the possibilities are almost endless. We could manipulate a machine miles away as if we inhabited it, Avatar style, give locked-in stroke victims a way to communicate and control their environment, extend our nervous systems into artificial limbs fused with our existing bodies, and perhaps even challenge what it means to be human and become a truly spacefaring species at some point down the line. Or we could use it to make video games really badass, because that’s where the big money will be after medicine, after which we’ll quickly diversify into porn. But I digress. The very idea that we’re slowly but oh so surely coming closer and closer to easy to implant brain-machine interfaces is enough to make me feel all warm and fuzzy from seeing science fiction turn into science fact, and twitch with anticipation of what could be done when it’s finally ready for human trials. Oh, the software I could write and the things it could do with the power of the human brain and a cloud app…

Let me start this post with something seldom seen on this blog: a personal story. Just a couple of years ago, yours truly was doing a mixed martial arts drill. My opponent had at least 100 lbs. on me, and as he tried tackling me, as was his job, physics and gravity let him power through a stance that was supposed to stop him from flipping me onto my back. Feeling myself slide, I did what I was trained to do and improvised. Digging in my toes and dropping my weight as low as possible, I slipped out of his grip, pivoted, and managed to flip him over my shoulder and onto his back. As he quickly rolled to get up, I managed to catch him with one hand digging into his neck and the other tearing at a tricep, setting myself up for a knee to his jaw. The trainer called time, we let each other go, and he stood up as I straightened myself out. Less than a minute later, it hit me. Every other drill would have to be done with a vicious, shooting pain in my back. After a long, exceedingly painful hour, I was lying face down on an urgent care exam table.

Movies often make feats of superhuman strength look easy, and although I had just pulled off a movie-worthy move, reality quickly stepped in to show me my place. On rainy days, my back is whiny, and if I walk around all day, I have to grin, bear it, and try not to reach for painkillers. I absolutely love doing MMA, but the last several months, until I was urged to stop for a while, had been supported by compression gear, Vicodin, and muscle relaxers. One of these days, I hope my back heals up just enough to get back to fighting, and if there were a therapy which could fix it with little more than a 30 minute IV drip and one injection, I’d happily sign up for it, much like journalist Tyler Graham did when he received stem cell therapy for his shoulder. There was a catch, of course. The procedure Graham tried is still unproven, and the evidence of what it’s able to do is purely anecdotal. Patients are paying a lot of money to go to SoCal, have their fat processed by a doctor to induce it to turn into adult stem cells, and have it injected back into the site of tissue damage to seemingly miraculously fix whatever’s torn or worn out.

And that’s really the problem here. How do stem cells do what they do? We don’t know. There are plenty of ideas, and trials are ongoing to figure out just how to control these cells’ restorative powers because the potential is revolutionary, to put it mildly. But because the stem cell therapies a lot of doctors offer today are a crapshoot, it’s entirely possible that your treatment will do nothing at all as it’s attacked by your immune system as dangerous mutant cells to be killed for the sake of your health, or, even worse, result in a painful, malignant tumor. Both scenarios are known to have happened in the lab and in the field, and until scientists get a really good handle on how to perform stem cell treatments, it’s probably not a good idea to have one. This is a textbook case for why we need basic science rather than maverick doctors willing to turn you into a human guinea pig for a substantial fee. Personally, I’m glad that Graham feels better and hope that he’s one of the lucky ones who’ve been helped. And while I understand his decisions, and appreciate that he was very lucid and skeptical about the whole thing, I won’t follow suit.

In a quote frequently attributed to John Lennon, a boy was asked what he wanted to be when he grew up, and he replied that he wanted to be happy. He was then told that he did not understand the question, to which he retorted that the person asking didn’t understand life. And he’s right: we all want to be happy. That’s especially true at work, where most of us will spend nearly a third of our waking hours and deal with countless stresses big and small on a daily basis, seemingly for nothing more than a paycheck. Work should be interesting and give us some sense of worth and purpose, but 70% of all workers are apathetic about, or outright hate, their jobs, which clearly means whatever your bosses are doing to make you happy simply isn’t working. Though I’m sort of making a big assumption that your bosses are even trying to make you happy, much less care that you exist, or that they need to worry about whether you like the job they have you doing. And that, objectively, is perhaps the most worrisome part of it all…

You see, social scientists and doctors have long since figured out what makes you happy, why it is in the interest of every company’s bottom line to keep employees happy, and how your perpetual case of the Mondays could be eliminated, or at least severely reduced. Most American workers, as we can see from the statistics, are dealing with the stress of being at a job they dislike, which increases their levels of cortisol, a stress hormone that hardens arteries and increases the odds of having a heart attack. If they’re not there yet, the prolonged stress also causes a host of very unpleasant issues like irregular sleep, disordered eating, anxiety, and depression. In fact, close to a quarter of the American workforce is depressed, which is estimated to cost over $23 billion per year in lost productivity. We also know exactly why people hate their jobs, and contrary to what many business owners think, it has nothing to do with employees being greedy and lazy; it’s usually terrible management, and feeling as if they’re utterly disposable and irrelevant.

People who are unemployed for a year or more are almost as likely to be depressed as working stiffs, and their odds of being diagnosed with depression go up by nearly 2% every time they double their time out of work. So while a bad job can make people miserable, not having one is every bit as bad, if not worse. And these are just the numbers for one year of unemployment, so what lies beyond that could be far scarier, since every trend shows mental health suffers without work or purpose, and physical health quickly deteriorates as well. This leaves us stuck in an odd dilemma. We know that people need to, and want to, work, and we know full well that when they hate their jobs, their performance lags, as does their health, forming a vicious cycle of bad work and disengagement contributing to poor health, worse work, and more disaffection on the job. It seems obvious that something should be done to address this, yet for the last 15 years there has been no change in the stats. Why? The short answer? Terrible management.

Think about your own worst bosses. They never hesitated to tell you that you were wrong, to look down on you, or to watch over your shoulder because they had no trust in you, and they turned any inevitable slip-up or small error, even if you immediately caught and corrected it, into some new justification for watching you like a hawk, right? Or if not, did they simply never talk to you about anything, merely drop off more work and expect it to be done silently? Combine those daily putdowns with a constant threat of being outsourced simply to save a dollar, being shoved into an open office where you have no personal space or privacy and face constant distractions, on top of a lack of any career progression path in sight, and tell me that’s a job even those who live to work would find engaging. As many organizations grow, managers dissociate from the people they’re managing, seeing them as little more than numbers on a spreadsheet because that’s what they are in their daily list of things to do. This breeds disengagement, which breeds frustration, which causes talented employees to run for greener pastures.

Keeping one’s employees happy should not be one of those HBR think pieces that makes your executive team “ooh” and “ahh” in a meeting where you run through PowerPoint slides showing how much money you’re losing to turnover, depression, and bad management. It should be the top priority of middle managers and supervisors, because happy employees work harder, show loyalty and dedication, and help recruit more good talent. Yes, spending on benefits like catered lunches, gym memberships, better healthcare, easy access to daycare, or flexible time off policies sounds exorbitant, I know, and many businesses can’t afford all of that. But showing employees that you care, that you listen to them, and treating them with respect pays off as engaged employees become more productive and dedicated. In a knowledge economy, there’s no excuse for the employee-employer relationship to be much like one between a master and an indentured servant. It should be a business partnership with benefits for both parties extending well beyond “here’s your paycheck, now get to work.” The science says so, and besides, when you’re a manager, isn’t keeping employees motivated and productive your top priority?

We’re using far too many antibiotics. That has been the cry from the FDA and the WHO for the last several years as more and more antibiotic-resistant strains have been found after they colonized or killed patients. Of course, these bacteria aren’t completely immune to our arsenal of drugs; they’re just harder to kill with certain antibiotics or require different ones. But a rather small yet unsettling number have required doctors to use every last antibacterial weapon they had available to even make a dent in their populations. There’s not much we can do because, in effect, we’re fighting evolution. The more antibiotics we throw at the bacteria, the more chances we give resistant strains to survive and thrive. Doctors are starting to prescribe less, and the pressure on farmers to stop prophylactic use of antibiotics is mounting, but we’re still overdoing it, and the problem is growing and in need of some very creative new solutions.

Enter a genetic engineering technique known as CRISPR-Cas9, which replaces DNA sequences that short snippets of RNA are encoded to identify with ones provided by scientists. It’s not new by any means, but this is the first time it has been used in an evolutionary experiment intended to stem the rise of antibiotic resistance. Israeli researchers essentially gave a bacterial colony immunity to a virus, but at the cost of deleting the genes which gave it antibiotic resistance. The bacteria happily propagated the immunity as they grew while maintaining their new weakness to antibiotics which were only marginally effective on them before. There was a real advantage for the bacteria in propagating this new mutation: the virus to which they were now immune was lethal, acting as the greater selective pressure, while susceptibility to antibiotics just wasn’t an important factor, so the bacteria acted like they got a fair deal.
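The selection logic at the heart of the experiment can be sketched as a toy model. Note that the `simulate` function and all the numbers below are illustrative assumptions, not figures from the study; the point is only to show why a lethal phage is the stronger selective pressure.

```python
# Toy model: a colony starts dominated by an antibiotic-resistant,
# phage-susceptible strain, with a small edited strain that traded
# its resistance genes for phage immunity. The phage culls the
# susceptible strain every generation, so the antibiotic-sensitive
# genotype takes over. All parameters are made up for illustration.

def simulate(generations=20, phage_kill=0.5, growth=2.0):
    resistant = 1000.0  # antibiotic-resistant, phage-susceptible cells
    edited = 10.0       # phage-immune, antibiotic-sensitive cells
    for _ in range(generations):
        resistant *= (1 - phage_kill) * growth  # phage culls this strain
        edited *= growth                        # immune strain grows freely
    return edited / (resistant + edited)  # antibiotic-sensitive fraction

print(f"antibiotic-sensitive fraction after 20 generations: {simulate():.4f}")
```

Even with the edited strain starting at 1% of the population, it dominates within a couple dozen generations, which is why the bacteria "accept" the trade as long as the phage, not the antibiotic, is what's killing them.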

Even better, the edits were made by a specially engineered virus, meaning you can, in theory, just infect bacteria-prone surfaces with it and demolish their antibiotic resistance, right? Well, yes, it would be possible. However, the researchers worry that new antibiotic-resistant mutations can still evolve, and that there’s no way to prevent the bacteria’s genetic drift from accepting the genes for viral immunity while holding on to their existing antibiotic resistance mechanisms. But this technique is still useful for reducing the number of resistant bacteria, or for targeting strains with very well known resistance mechanisms to allow doctors to use existing antibiotics. Ultimately, what will help the most is more research into new antibiotics, curtailing their use in doctors’ offices for viral infections regardless of the patients’ complaints, and eliminating preventative use of antibiotics on farm animals. Still, research like this can help us identify new resistant strains and give us a fighting chance to slow them down while we find new ways to fight them.

Back in the day, I argued that if we were going to get serious about space exploration, we also had to budget for large, luxury spacecraft rather than just capsules into which we would cram the brave men and women we’d be sending to other worlds, with a pat on the back for agreeing to deal with the discomfort and damage to their bodies. Among the reasons listed were the basic physiological problems of spending many months in zero gravity, and the mental health hazards of boredom and cabin fever. But now there’s another very important point to add to the list. If you spend too much time outside the Earth’s magnetosphere, you will become less competent at the elementary tasks of exploration. Curiosity, focus, determination, situational awareness, the very traits that make humans such good generalists on our own world, and which robots can handle only within very limited contexts, which is why we’d want humans to aid them when exploring new planets, will all become severely diminished after long-term bombardment by cosmic rays.

This is the result of a recent study which exposed mice, genetically engineered to have neurons that glow under the right conditions, to lab-generated cosmic rays. After the equivalent of a few months’ worth of exposure to particles like ionized titanium and oxygen, the mice became a lot less curious, grew mentally sluggish, and learned more slowly. The results were comparable to dementia patients, and under the microscope, the reason was readily apparent. Cosmic rays attacked an inordinate number of dendrites, the parts of a neuron that exchange neurotransmitters with its neighbors. Fewer connections meant less efficiency and accuracy in communication, which resulted in what amounts to reduced competency across the board. This is another reason to hold off on planning grand Mars missions. Damaging the minds of astronauts, perhaps for the rest of their lives, is too high a price to pay just to get a flag-planting moment…

Mornings are awful. Always have been, always will be, rousing you out of bed, interrupting your sleep in unhealthy ways, rushing you to work at ungodly hours during which you must navigate e-mails and other minutiae while your mind shakes itself awake for real tasks. Unless, of course, you do what most people around you do and reach for the nearest legal stimulant to brush off all those early morning cobwebs. I’m talking about coffee, one of the most frequently consumed drugs in the world, bringing in over $30 billion in revenues from the 2.25 billion cups of coffee drunk around the world each day, and supporting a network employing over 25 million people. And as with every drug, there’s a natural dependency. Forgoing it means anxiety, shakes, cold sweats, headaches, irritability, fatigue, and a general foggy haze in which you struggle to operate. It’s a much less intense version of pretty much any other kind of “dope sick” addicts get when they’re unable to secure their fix. Yet it’s sold openly, to anyone and everyone, at a profit.

What does that have to do with mornings though? Maybe nothing. Maybe everything. Think for just a moment about why you have to go to work so early, especially when you’re not in the logistics, travel, or maintenance business, where one could make the case for early mornings or working through the night. Why do you have to be in the office by 8 am or 9 am along with everyone else? If you need your coffee fix to no longer feel like a zombie, that’s why. Mornings were invented for one simple reason: to get you addicted to coffee. Industry shills known as “morning people,” a code obviously denoting the fiction of someone actually enjoying being forcibly woken up at the separation of the gluteal muscles of dawn, have convinced much of the developed world to set work schedules in a way that maximizes your boss’ ability to get you hooked on coffee, then keep you stuck in a never-ending cycle of sleep deprivation so you come back for another fix, day in, day out, even when you can sleep in and don’t have to work.

And Big Coffee and its members like Starbucks, Peet’s, and Coffee Bean are not the only ones making a profit off your addiction. They’ve allied themselves with Big Ag’s breakfast industry to sell you cereals, granola bars, yogurt, and other “breakfast food,” as it’s denoted. Of course, that’s not all there is to it. You see, many fast food chains and coffee stores sell breakfast foods that are highly caloric, containing significant amounts of saturated fat and sugar, which, coupled with the sedentary lifestyle enforced by many workplaces, often leads to weight gain, and that weight gain interferes with the very sleep that would make people less tired. Basically, we’re looking at an elaborate, vicious cycle of addiction for corporate profit. We need to wake up to the injustice of mornings and petition Big Coffee to stop pushing companies to open early, as well as to remove the addictive chemical caffeine from the vast majority of their offerings still containing it in doses as high as 436 milligrams. I will be putting together an official letter writing campaign and a petition calling for the end of our forced caffeine addiction on Change.org in the next few days.

Likewise, yours truly isn’t sitting back and just counting on these corrupt corporate behemoths, many with the same market caps and annual profits as Monsanto, to roll over, and is in the final stages of a partnership with several vendors to offer a new, natural energy drink alternative for those who must start their day early. If we can’t hit Big Coffee in the media, we need to hit it in the only place it really cares about: the wallet. You wouldn’t just be buying an energy drink that helps you stay alert and awake, you’d be giving these corporate drug pushers the finger to say loudly and proudly that you don’t need their damn coffee and “breakfast food,” that you can see all their tricks from a mile away, and that you’re too smart to let them ensnare you. Even better, should you have any of those “reward cards” that encourage you to be a good little addict and come back for a discount on your next fix, why not make a video of yourself cutting such a card, or creatively destroying it in some other way, upload it, then link to it in your entry in the petition once it’s up and running? I’m ready to take on mornings. Who’s with me?

With the media fascinated by Bruce Jenner’s transition from male to female and Laverne Cox’s photo shoot for Allure intended to inspire others struggling with gender identity issues, there’s a rare discussion of what it means to be transgender. More importantly, if someone decides to transition to another gender, what can science do to make this person feel comfortable in what would basically be a new body after all the hormone therapy and surgeries? And what can the kind of technology still in its infancy, but barreling toward clinical testing, offer in the foreseeable future? Could modified viruses for gene therapy turn males into females and vice versa? Could printing new organs produce an entirely new reproductive system? In short, could gene therapy and printed organs and tissues make the transition more complete?

Despite offering us a way of manipulating the fundamental building blocks of life, gene therapies would be dealing with an entire body which developed not just from reading the genome and translating the codons into proteins, but from environmental cues, triggers, and anomalies. Even using the same homeobox genes that define our body plans doesn’t quite get you a full instruction set for a human body, so changing these genes after the body is formed is unlikely to have much effect. Such genes are like Lego blocks you get to arrange once. Each gets you a finger, a toe, a foot, or a leg, and so on. During development, you could use chemical signals to tweak them and assemble them how you want. But after they’re finally locked into place, things are more or less done, and the formed structures would need to be modified mechanically, i.e. surgically.

We don’t yet know if it’s possible to change a Y chromosome to an X, only that it’s possible for our modified viral agents to silence or promote gene expression. And even if we could, there’s not going to be a mechanism for a penis to suddenly become a vagina or the other way around because, again, these structures are now in place. Surgery would still be the only way to make this step of the transition until we can figure out some sort of nanotechnology to do this, though we could argue that this will also be a form of surgery, just a much less outwardly invasive one than scalpels and saws. And by now it should really go without saying that we couldn’t naturally induce a different reproductive system to grow. But what if we print one, or grow one, using the patient’s modified stem cells, then implant it? Would this work?

From an engineering standpoint, it seems like it would, and after extensive hormonal therapy, such organs might work as they should, allowing something as radical as a trans man impregnating his partner, or a trans woman becoming pregnant and giving birth. However, there’s a catch. We know how to make the organs, but have no guarantee that organs this complex could grow in the lab and function without a hitch. Creating viable germ cells and supporting a gestation don’t seem so complicated to us at first blush because they feel so natural that much of our effort goes into figuring out how to prevent both until we want them to happen. But consider the fact that if we knew exactly what’s necessary to support a pregnancy, we could create artificial uteri to allow premature babies to develop fully rather than place them in incubators and hope for the best. A uterus grown in a lab would seem like a good shortcut at first blush, but what ethics board would permit the experiments necessary for clinical studies?

So what’s the takeaway here? For those struggling with gender identity and wanting to make a transition to another sex, there’s a lot of promise in new medical technologies being developed today, and on paper, it looks like a complete biological transition could be in the cards. But this technology is not quite there yet, and there are so many questions to answer that it will be more than a decade at the very least before we can even think about using it in clinical practice. I would say, though, that helping transgender patients and studying transgender issues raises so many interesting and widely relevant questions that it would be a disservice to the future of medicine not to explore them, because answering them will help us understand what being male or female means, as well as offer treatments for many reproductive conditions and anomalies, like infertility and ED, or even let us replace reproductive systems destroyed by cancerous tumors with brand new ones. In other words, transgender people could be a reproductive researcher’s Rosetta Stone…