Category Archives: Chem Education


We chemists tend not to give vacuum pumps much thought, which is a shame seeing as we work with them daily. Much of our work depends on being able to lower the pressure of a system for one reason or another. Want to evaporate solvent? Your diaphragm pump has your back. Want to distill an irritatingly high-boiling, off-yellow mixture down to your pristine colorless product? Look no further than your trusty rotary vane pump. Need to shoot your compound onto a mass spec? The instrument’s turbo pump creates the ultra-high-vacuum environment required for accurate analysis.

I could go on.

Despite the ubiquity and utility of vacuum pumps in the chemistry lab, the trend I’ve noticed is that most workaday chemists know little to nothing about how they work and how to take care of them.

“Steve, this says the last time you changed the pump oil was April 2014.”

As a result of this ignorance around pumps, I propose all chemistry degree programs, at both the graduate and undergraduate levels, teach a mandatory class on vacuum pumps. I submit for your review a syllabus outline for this class:

Vacuum Pumps 101

Introduction to vacuum pump types: How to tell a rotary vane from a diaphragm pump

When to use a vapor trap: Always

Oil changes: Coors Lite = good, Guinness = bad

Handling acid vapors: How to destroy a pump

Quiz: What is that sound?

Gas ballast use: Why is my pump oil in two phases?

Lab practical: Why are there leftover screws?

So maybe it’s a seminar series or a 2-credit class. Upon completion, students are given a license to use rotary vane pumps. The lab practical will be graded pass/fail, with failing students relegated to using old rotary evaporator diaphragm pumps.

I’m here to talk about why science is so expensive. This was prompted by the recent news regarding Turing Pharmaceuticals, led by CEO Martin Shkreli. Turing recently acquired exclusive rights to manufacture and sell pyrimethamine, an off-patent drug used primarily to treat parasitic protozoan infections. In a move which Ayn Rand herself would probably describe as “heavy-handed,” Shkreli has opted to increase the price of the drug over fifty-fold overnight*, somewhat disconcerting considering it’s used to keep AIDS patients from, you know, dying.

The issue garnered international attention, with many (many) calling this an example of the absurd pricing power granted to the pharmaceutical industry. Except it’s really not a par-for-the-course industry move, and just as many industry insiders are condemning the price hike. I won’t delve into the details, as plenty of others much more knowledgeable than I already have. I’m also not going to beat the “drug research is expensive” dead horse (spoiler: it is). The point is, many people seem to cry foul**: “There’s no way R&D can be that expensive!”

Instead, I’m going to talk about why science in general is so expensive (second spoiler: it is) and where the associated costs come from. Television seems to have popularized the idea of a genius scientific duo working in tandem and checking off a major scientific breakthrough (bioweapon vaccine, invisibility suit, quantum ion machine gun, etc.) in a montage-filled afternoon. Unfortunately, that idea doesn’t mesh well with reality.

Science Cannot Exist in a Vacuum

Let’s start by examining the core of what science is, as an ideal. What is science? I’d argue that the answer to that question is quite simple: science is discovering new things and telling others about those discoveries. That’s it.

All the impact factors, intellectual property, marketing, and publications? Those are politics, or business; consequences of science, but decidedly not science.

What’s it take to do science? Fundamentally, a scientist, of course. You can’t research new materials without chemists, map the universe without astrophysicists, or study the genome without geneticists. But it turns out you also can’t map the universe without computer scientists. Likewise, you probably need a few synthetic chemists somewhere along the line to make probes for your geneticists.

Eighteenth century science could be done by a single bowtie-clad bearded guy with sufficiently deep pockets. But modern science cannot. Modern science is collaborative by nature. To the scientists reading, this is a point I shouldn’t need to argue. In the past week alone I have collaborated with: several materials scientists, a biologist, an inorganic chemist, a couple laser specialists, and at least one battery scientist. And that’s not even an exhaustive list.

In order to properly do my job, I need to be surrounded by a dozen or so other scientists, who in turn, each need a dozen or so to support them. My company employs in the ballpark of 100 scientists, and I’ve worked with almost all of them at one time or another.

And that’s where the baseline cost of doing science comes from. To do science, you need a network of scientists. How big that network needs to be depends on how complicated and diverse the problems you are working on are. And as much as we scientists generally love doing science, we don’t do it for free.

Scientists Need Tools

This seems obvious, but most non-scientists probably haven’t stopped to consider how much scientific equipment costs. And I’m talking capital equipment, not stuff that gets used up like glassware and reagents.

Consider: I operate a Varian 300 MHz NMR spectrometer on a daily basis. It’s an old instrument, but it’s in excellent working condition. Even so, it takes time and money to keep it up and running. How much do you think it costs to operate for a year? When you consider the service contract, preventative maintenance, cryogens, and time, I think it’s safe to say that it’s more than I make in a year. And that’s a fixed cost; whether I use it every single day or it sits in its air-conditioned room as a giant paperweight, that cost is the same.

Now imagine a larger company which may operate several such instruments. They’d almost certainly have at least one full-time scientist charged with maintaining them, and that’s on top of the other costs.

And an NMR is actually a pretty simple instrument to take care of. If you’ve got an LC/MS (a modern triple-quad will run around $150k), you now have tons of additional operating costs to consider, and you probably need one technician per 2-3 instruments just to keep them operational. Same goes for GC/MS.

If you’re a materials scientist, you need an electron microscope. Those cost a pretty penny. Plus, they pull huge amounts of power to run. If you’re doing any serious biology experimentation, you probably need a decent confocal microscope. I have it on good authority that a single lens for one can set you back tens of thousands of dollars.

Power, service contracts, technicians, software licenses, instrumentation. Everything adds up, real quick. You could buy a townhouse for the cost of an 800 MHz NMR spectrometer.

In addition to the tools, you need a place to do science. Fume hoods, bench space, ventilation. You’ve got to keep the lights on and water flowing, and pay the building’s rent. When you spend money on science, you’re spending money on this infrastructure, all of which needs to be paid for somehow.

Consumables

Now you’ve got a sufficiently large group of interdependent scientists, all under the same roof, and all standing around with shiny new tools and instruments. But they still need stuff to work with, and that means spending money on consumables. Glassware, solvents, reagents, gases. If you’re of the biological persuasion, cells and animal models. And all of those are sold at a premium, because remember all that infrastructure I talked about earlier? Yeah, the companies supplying researchers have all of that to deal with too, plus the added costs of manufacturing, quality assurance, regulatory compliance and shipping logistics.

All that means that a pair of scissors from Sigma Aldrich costs roughly fifteen times what it would cost at Staples. But really, that’s just a somewhat egregious example that non-scientists can get their head around. I’ve ordered my share of compounds which cost upwards of a thousand dollars for one milligram. I’ll go through almost a 4-liter jug of acetone in a week, a liter or so of DCM, chloroform, or any of half a dozen other common solvents. And if you’re running a lot of NMR (like I am) you’re going to chew through deuterated solvents, which cost around ten times what their isotopically abundant counterparts do.

Working with living organisms? Well now everything, reagents included, needs to be sterile, which takes time and energy and is passed on to you in the form of higher prices.

Plus everything needs to be disposed of, and as you’re probably aware, you can’t just pour your reactions down the sink when you’ve finished with them. Having chemical and biohazardous waste disposal companies on call to haul out the stuff you buy costs money too, sometimes even more than the cost of the stuff you originally bought (I’m looking at you, old lecture bottles).

Time, Time, and More Time

Those of you personally familiar with the scientific process will be intimately aware of the single largest cost of doing research: time. Time is money, and science takes a lot of time. Much, much more often than not, research leads to failure, which leads to slightly more informed failure, which, if you’re persistent and lucky might lead to success. That’s not the fault of the researchers, that’s just the way it is.

And in order to do experiments, even ones that will invariably fail, you still need all those consumables talked about earlier. I recently attempted to prepare a compound according to a slightly modified literature procedure that involved bubbling a fluorinated, gaseous compound through an anhydrous ammonia solution. And guess what? It didn’t work. By all accounts it should have, but hey, research. Over a thousand bucks for a cylinder of specialty gas, another couple hundred for the solvents and ammonia. An entire day of careful planning and preparation. And all of that gone in one shot on a failed*** experiment. I had to order everything again, modify my procedure, and try again; the second time turned out to do the trick.

The point is: it cost twice as much in materials, took more than twice the amount of my time as originally planned, and set my progress back a week. And that’s just a single experiment in the context of a much larger research program. When you consider that even a modestly sized research program will have several scientists performing related but independent experiments, each of which carries a significant risk of failure, it’s easy to see how much time things can really take.

And that’s OK, because failure is a feature of the scientific process, not a bug. Science cannot progress without failure. Most of the time there’s really nothing that can be done about it, even under ideal conditions; sometimes things just don’t work. But what many don’t realize is that failure incurs the same costs as success, and it takes a lot of failure to get to the point where you can call a research program “successful.”

Ultimately this is why emergent technologies are so expensive. This is why you see new drugs that cost close to $100,000 for a course of treatment. It’s because for each Sovaldi, there were hundreds (or thousands) of candidate compounds which didn’t make the cut for one reason or another. Every one of Tesla’s high-capacity lithium metal oxide batteries sold will be covering the cost of the thousands of experiments that undoubtedly led to their development.

_________________________________

*They’ve since backpedaled a bit on the exact price point, but not before very harsh and very public backlash

**To be clear, Shkreli and his ilk don’t seem to be doing any actual research with their profits

***There are no failed experiments, only new ways not to make the compound in question

Oh dear. Well, I didn’t think I’d have cause to write about methamphetamine production again, but here we are. Many readers will have heard news about the explosion that rocked the NIST lab near Washington, D.C. back in July. Luckily, no one was seriously injured, though one security guard did sustain some burns.

No more than a couple of days later, initial investigations revealed the cause of the explosion appeared to be… methamphetamine synthesis. Now, any competent chemist in a national lab would (hopefully) be able to perform any of the common meth syntheses without incident. Certainly without blowing the windows out of the building and hospitalizing themselves.

But as it turns out, the culprit wasn’t a chemist, but the security guard injured in the blast. More details have been emerging since the incident. After resigning from the force, the guard in question pled guilty to attempted methamphetamine manufacture.

It turns out the method the guard was attempting to employ is the one known colloquially as the “Shake and Bake” method. This involves reduction of pseudoephedrine to methamphetamine, then treatment of the reaction mixture with hydrochloric acid, forming a salt which is easily separated. And in true MacGyver style, the reagents used in this reduction are all improvised: camping stove fuel as a solvent, lithium from batteries, lye, and ammonium nitrate (fertilizer). HCl is generated by the action of sulfuric acid (sold as drain cleaner) on table salt. Literally everything you need can be purchased at Wal-Mart.

And what do we do with these reagents? Why, toss them in a water bottle, close the cap, and shake, of course. You can’t hear it, but I’m actually screaming behind my keyboard.

The idea is you vent the bottle, as a good amount of gas is going to come off of that particular reaction. The reason people use this method to make meth, aside from easy access to the starting materials, is that it can be done on a very small scale: a few grams.

What I don’t understand is why, if you’re going to illicitly make methamphetamine in a synthetic chemistry lab, you decide to bypass all those fancy solvents, reagents, glassware, and safety equipment. Maybe they were worried someone was taking inventory of the reagents they’d need? In my experience, it’s highly unlikely anyone was.

Instead of doing some homework and using the lab equipment that was already right there, they opted to go straight to the basement-bottom chemistry.

And again, I can only speculate as to exactly what caused the explosion (chemists: take your pick of things that could go wrong with that procedure), but I’d put money on overpressure in the “reaction vessel,” resulting in rupture, and exposure of lithium to air. That would likely generate enough heat to ignite the expanding camp-fuel-solvent cloud. And ka-boom.

Imagine a world in which antibiotics didn’t exist. Imagine if instead of requiring a week-long prescription and bed rest, a bout of pneumonia was fatal. Or if a staph infection couldn’t be cleared up by a simple cephalexin regimen.

You only have to look back about 90 years to find this hypothetical scenario playing out in reality. Prior to the discovery of penicillin by Alexander Fleming in 1928, such common bacterial infections were debilitating and potentially lethal. And we as a species are slowly (some would argue rapidly) regressing toward that scenario.

Thanks to bacterial drug resistance, our line of antibiotic defense is becoming obsolete. Antibiotic research is a tricky area. It’s an evolutionary arms race of sorts — humans come up with new antibiotics, then microbes acquire selective resistance to them. Repeat ad infinitum. Except we can’t repeat forever. Not at our current pace.

See, there are about a dozen classes of antibiotics. They are generally classified by structure and by mechanism of action. Medicinal chemists can make minor tweaks to existing structures, which generate new antibiotics that are very similar to the old ones. This can stave off bacterial resistance for a little while. But in order to really put the pressure on microbes, you need a whole new class of antibiotics. And it’s been over 30 years since we discovered a new class. Until this week.

Enter last week’s article in Nature. A team of researchers spanning industry and academia published a very interesting paper in which they describe the discovery of a new antibiotic, termed “teixobactin.” It’s large, has lots of chiral centers, and has some strange-looking amino acid residues.

Structure of teixobactin

Teixobactin scores respectably well against a host of pathogens, as shown in Table 1 of the paper (reproduced below). In some cases, it even surpasses the efficacy of current-generation antibiotics. In mouse models infected with methicillin-resistant S. aureus (MRSA), all animals treated with teixobactin survived down to doses as low as 1 mg/kg body mass (single dose, i.v., 1 hour post-infection).

Interestingly, the authors report that the compound does not induce drug resistance in Staphylococcus aureus or Mycobacterium tuberculosis, two common disease-causing organisms. To quote the article (emphasis my own):

Serial passage of S. aureus in the presence of sub-MIC levels of teixobactin over a period of 27 days failed to produce resistant mutants as well. This usually points to a non-specific mode of action, with accompanying toxicity. However, teixobactin had no toxicity against mammalian NIH/3T3 and HepG2 cells at 100 µg/ml (the highest dose tested). The compound showed no haemolytic activity and did not bind DNA.

This is pretty important, since it indicates that teixobactin seems to act with selectivity; there are lots of compounds that kill bacteria very effectively but will kill your body’s cells as well. The team went on to investigate the mechanism of action of the new compound. They found that it acts as a peptidoglycan synthesis inhibitor; the peptidoglycan layer forms the cell wall of bacteria and is essential to their growth and survival. Teixobactin appears to bind precursors of peptidoglycan synthesis, and does not appear to act directly on any known protein targets. This distinction is likely the reason bacterial cells do not develop resistance to the compound.

But what’s even more important than the discovery of this new antibacterial compound is the method used to discover it. Nearly all antibiotics discovered so far have come from isolating chemical compounds made by other organisms. Interestingly, bacteria themselves often generate antibacterial compounds. Perhaps somewhat counterintuitive, but it makes a good deal of sense from an evolutionary perspective; if an organism can produce a compound toxic to other species, it gains a powerful tool for survival. A major challenge for scientists is growing bacterial cultures in the laboratory. Some species, such as E. coli and S. aureus, grow quite nicely on Petri dishes. The other 99% of bacterial species cannot be cultured in the laboratory.

Since the bulk of the bacterial biosphere cannot be easily cultured, it has never been screened for antibacterial properties. The researchers in this paper came up with a piece of technology they dubbed the “iChip” (Steve would be proud). The iChip allows a single bacterial cell to be isolated from a soil sample, while being kept separate from the rest of the soil microbes. The cell, affixed to the chip, is then placed back in the soil from which it came, where it can multiply into a colony. The large colonies produced can then be grown in vitro as is typical for “well-behaved” microbes.

The optimists out there are hopeful that new-found access to “unculturable” bacteria will allow scientists to rapidly screen for natural product-based antibiotics. Of course, as with so many other discoveries, only time will tell whether this method proves to be of practical use for drug discovery.

You may (or most likely, may not) have heard the big news this week regarding one of the oldest players in the nuclear magnetic resonance (NMR) game. Agilent Technologies, owner of Varian Inc., has decided to close down shop in their long-standing NMR sector, leaving Bruker the only sizable company still making the things.

This wasn’t always the case, however. JEOL, Bruker, Varian, Oxford Instruments, even IBM [1] and General Electric were all once major players in the NMR market [2].

This full page ad for an 80 MHz IBM FT-NMR ran in ACS Analytical Chem. in 1980

Now Bruker stands pretty much alone. Carrying on with this nostalgic theme, I’ve decided to compile a digital museum of sorts on the history of NMR instrumentation. I’ve gathered images and information from across the web, including others’ blogs, advertisements, reports, and even articles (credit given where credit is deserved). I hope you enjoy the walk down memory lane with me.

Nuclear Magnetic Resonance: From First Principles to Applications

NMR, at least in principle, dates back to 1925, when the idea of spin magnetic moment was first theorized. Electrons, neutrons, and protons (among other particles) all possess a property called “spin,” but not in the classical sense in which the Earth spins on its axis; particles do not have an axis to “spin” around. Spin is simply an intrinsic property of certain particles, like mass or charge (for a more detailed, but still simplified, explanation of spin click here).

Because the particles that make up a nucleus (protons and neutrons) have spin, the nucleus can also have spin (as long as the particles’ spins do not sum to zero). This nuclear spin was first measured in 1937 by Isidor Isaac Rabi in lithium isotopes and protons, a discovery that won him the Nobel Prize in Physics in 1944.

With the underlying principles of nuclear magnetic resonance discovered, it was only a matter of time before Edward Mills Purcell adapted the technique to bulk materials in 1945, which won him a Nobel Prize as well (shared with Felix Bloch, who made the same observation independently).

In 1949 Varian’s F6 Nuclear Fluxmeter became the first commercially available product to employ the principles of NMR.

Varian’s F6 Nuclear Fluxmeter, reproduced from [3]

From there, the field exploded. Engineering developments in magnetic coil designs and field stabilizers allowed crude commercial NMR spectrometers to enter the market by the mid-1950s.

The first published “high-res” NMR spectrum of ethanol (top) and a spectrum obtained on a modern instrument (bottom). Note the observed splitting patterns [4]

Unfortunately, the practical limits of NMR spectroscopy were reached by the late 1950s. Up to that point, NMR data had been acquired by scanning a sample across a broad range of radio frequencies (RF), in sequence, over and over, until enough signal was obtained to be useful. This technique, called continuous wave or “CW-NMR,” was time-consuming, taking several minutes per scan; dozens, hundreds, or even thousands of scans could be required to resolve a sample.

An early Varian CW-NMR spectrometer. The power supply, control panel, magnet, and cooling unit (left to right) are visible [3]

Running 5000 scans (often required for dilute samples) at 5 minutes per scan would require over two weeks of continuous scanning, clearly not tenable from a resources management perspective.

Two solutions to this problem began to develop, largely side-by-side, in the 1960’s.

Fourier Transform NMR

What if, instead of scanning the RF band sequentially, the entire RF band could be excited at once? Short-pulse radio frequency excitation had been known since the early days of NMR: an RF pulse could excite an entire frequency range at once, in a matter of seconds. The signal resulting from such a pulse is called a free induction decay (FID). Unfortunately, an FID is a function of time, and the raw time-domain signal is of no direct use to chemists.

However, other forms of chemical analysis, such as infrared spectroscopy, had successfully employed a mathematical operation known as the Fourier transform to convert data from the time domain to the frequency domain. In 1957 it was shown that, in theory, it should be possible to convert FID data to the frequency domain, giving data identical to that obtained by CW-NMR.

In 1966 Ernst and Anderson published the results of their extensive effort to perform a Fourier transform on FID data. The two employed minicomputers to generate tapes that could be processed by larger computers. The results were groundbreaking, and CW-NMR was rapidly phased out in favor of Fourier transform nuclear magnetic resonance. The advent of smaller, cheaper, and faster computers in the early 1970’s made FT-NMR all but ubiquitous.
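The time-to-frequency conversion that Ernst and Anderson automated is easy to demonstrate with modern tools. A toy sketch in Python (the frequencies, decay constant, and sampling rate below are made-up illustrative values, not real NMR parameters):

```python
import numpy as np

# Simulate a free induction decay (FID): two resonances, at 100 Hz and
# 250 Hz, each decaying exponentially in time (a T2*-like decay).
sr = 2000                      # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / sr)  # 1 second of signal
fid = (np.cos(2 * np.pi * 100 * t)
       + 0.5 * np.cos(2 * np.pi * 250 * t)) * np.exp(-t / 0.2)

# Fourier transform: time domain (FID) -> frequency domain (spectrum)
spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(len(fid), 1 / sr)

peak = freqs[np.argmax(spectrum)]
print(peak)  # the strongest peak sits at ~100 Hz
```

The decaying cosines go in as an uninterpretable wiggle; the transform hands back sharp peaks at exactly the resonance frequencies, which is the whole trick.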

Varian 100 MHz magnet (right) and control panel (left). The magnet weighs over 8000 pounds and was cooled by water running through the PVC pipes visible. Borrowed from David Purkiss [4]

Superconducting Magnets

Early NMR spectrometers used copper or iron-core solenoids to generate a semi-uniform electromagnetic field. This field was limited by the resistance of the solenoid coils. A stronger magnetic field would lead to higher resolution and faster acquisition times. However, generating a more powerful EM field also requires pumping more electric current through the solenoid, generating large amounts of heat and consuming a huge amount of power. Furthermore, the magnetic capacity of iron eventually maxed out, physically plateauing the progress of high-field spectrometers.

Superconductors had been known since 1911; however, early superconducting magnets were large and impractical. In theory, replacing the iron-core magnet in a spectrometer with a superconducting magnet would provide a massive increase in field strength. And again, in theory, if you wanted a higher-field spectrometer, you could just build a bigger superconducting coil.

The problem became an engineering one: superconductors must operate at cryogenic temperatures. To achieve these temperatures, the superconducting coil needed to be immersed in liquid helium (4 K, -452 °F), contained in a dewar, which is itself contained in a dewar of liquid nitrogen (77 K, -320 °F). The first such instrument became available in 1964, the Varian HR-200.

Field strength in NMR is generally given in megahertz (MHz), even though frequency is not a direct measurement of magnetic field strength. This is done to simplify comparison of resolving power across instruments of different field strengths. The quoted frequency corresponds to the resonance frequency of a proton in the magnetic field of a particular instrument. A proton in a 7.05 tesla magnetic field will resonate at 300 MHz, so a 7.05 T instrument is referred to as a “300 MHz.” The HR-200 (200 MHz) represented a massive increase in resolving power over previous non-superconducting magnets, which clocked in around 50 MHz.
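The conversion between tesla and MHz is just multiplication by the proton’s gyromagnetic ratio, about 42.58 MHz per tesla. A quick sketch (the function names are my own):

```python
# Larmor frequency: nu = gamma_bar * B0, with gamma_bar = gamma / (2*pi)
GAMMA_BAR_1H = 42.577  # MHz per tesla, for the proton

def field_to_mhz(b0_tesla):
    """Proton resonance frequency (MHz) in a given magnetic field."""
    return GAMMA_BAR_1H * b0_tesla

def mhz_to_field(freq_mhz):
    """Magnetic field (tesla) implied by a quoted 'N MHz' instrument."""
    return freq_mhz / GAMMA_BAR_1H

print(round(field_to_mhz(7.05)))     # a 7.05 T magnet -> ~300 MHz
print(round(mhz_to_field(1000), 1))  # a "1000 MHz" magnet -> ~23.5 T
```

The same arithmetic recovers the 23.5 T figure quoted below for the biggest instrument in existence.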

Onward and Upward

After the advent of FT-NMR, coupled with superconducting magnets, the practical constraints previously placed on NMR no longer applied. Increasing a magnet’s resolving power from 200 MHz to 500 MHz was simply a matter of scaling up existing technology. Improvements in computing technology, software, and programmed pulse sequences allowed for more efficient use of the magnet’s hardware. The largest NMR spectrometer [6] currently in existence (to my knowledge) operates at 23.5 T, a whopping 1000 MHz of resolving power.

Bruker’s Avance 1000 NMR spectrometer. It stands two-stories tall, and is presently the most powerful such instrument in the world.

We have reached a point where it is no longer necessary to go bigger for most work. A 300 MHz instrument (what I use every day) is more than capable of performing all the basic 1-D and 2-D NMR experiments; for routine samples, there’s little you’d run on an 800 MHz instrument that a 300 couldn’t handle.

Moving away from purely chemical applications, NMR showed significant promise in medical diagnostics. An analytical NMR spectrometer has a sample chamber made to fit a glass tube a few millimeters in diameter and 8-10 inches in length. So what’s to stop us from making an instrument with a giant sample chamber and sticking a whole person inside? If you’ve ever been to the hospital and had an MRI, you’ve done exactly that. MRI stands for Magnetic Resonance Imaging; the “nuclear” was dropped from the name as a public relations move.

A modern MRI Scanner.

Hope you enjoyed this brief history of NMR, and maybe learned something along the way.

You’ve all seen these articles. They circulate Facebook, and are propagated by television personalities (Oprah, Dr. Oz, I’m looking at you), blogs, and aggregator sites such as Elite Daily (and others). The headline will usually be something flashy ending in a question mark. “Can eating a bar of chocolate every day prevent diabetes?” Or “Is this new compound discovered at University X the cure for cancer/HIV/obesity/other?” You get the gist of it.

The latest offender claims something along the lines of red wine being a substitute for physical exercise. Wouldn’t that be nice? If you could down a bottle of cabernet sauvignon instead of hitting the treadmill? The headline, in its various forms, claims some version of the following: “Scientists determine red wine better than exercise,” brazenly implying that we all got together and agreed.

Faux-science journalism keeps popping up. It’s misleading at best, and unethical, deceptive, and manipulative at worst. It’s a disservice to the actual scientific discoveries being pursued; not every study needs to cure cancer, nor does every new compound need to be a miracle weight-loss drug. I’m going to take you through how real research turns into the abomination that is click-bait journalism in this post.

To the Source!

First things first. We need to go to the source of the reported claim. I’m going forward with the red wine/exercise claim here.

Interestingly (but perhaps not surprisingly), you need to click through several links that claim to be the source until you get to the actual peer reviewed scientific paper from which this crazy claim is derived.

I started at an Elite Daily article titled “OMFG: Science Says A Glass Of Red Wine May Be Equivalent To An Hour At The Gym” [1] (protip: real scientific articles rarely have “OMFG” in the title). Clicking the link to their source, I was taken to another article, this time at Science Daily. Clicking the source link on SD led me to the actual paper, a full-length article published in the Journal of Physiology, a peer-reviewed academic journal.

Let’s examine how the claims evolved from science to complete bullshit over three iterations.

The Peer-Reviewed Paper

The actual paper is published in the Journal of Physiology, and is available to read for free on the publisher’s site.

Let’s examine the claims and methods of the paper. I’ll keep this concise.

Resveratrol is a natural product found in red wine, many fruits, and some other plant matter

Skeletal muscle force, cardiovascular performance, and metabolism were all boosted in rats whose diets were supplemented with resveratrol

So, some scientists added this compound, resveratrol, to the diet of test rats, and maintained a group of rats without resveratrol as a control group.

The natural product resveratrol

Important to note is the dose of the compound given: 146 milligrams per kilogram of body mass per day. Speaking from a pharmacokinetics perspective, this is a HUGE dose. Most drug-like compounds are given in doses 10-1000x lower than that. To put that in perspective, the dose of Tylenol for an adult male is about 10 milligrams per kilogram of body mass. 146 milligrams per kilogram of Tylenol corresponds to 20 extra-strength capsules, and would most likely destroy your liver.
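The capsule comparison above is easy to sanity-check. A back-of-the-envelope sketch, assuming a 70 kg adult and 500 mg of acetaminophen per extra-strength capsule (both values are my own illustrative assumptions, not from the paper):

```python
# Scale the rats' resveratrol dose to an adult human, then express the
# equivalent Tylenol dose in extra-strength capsules.
dose_mg_per_kg = 146   # resveratrol dose given to the rats, mg/kg/day
body_mass_kg = 70      # assumed adult body mass
capsule_mg = 500       # assumed mg of acetaminophen per capsule

total_mg = dose_mg_per_kg * body_mass_kg
capsules = total_mg / capsule_mg

print(total_mg)         # 10220 mg per day at that dose rate
print(round(capsules))  # ~20 extra-strength capsules
```

Ten grams a day of anything drug-like is an enormous load, which is the point of the comparison.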

So, after eating this resveratrol-rich diet, the rats were examined for exercise capacity. How was this done? With tiny treadmills, of course. No, I am not joking.

Rat treadmills, a real thing

And what exactly was concluded from this effort?

The performance of rats on the resveratrol diet was 21% better than that of the rats without resveratrol (at the 99.9% confidence level)

Looking good so far! I think this paper reaches some interesting (though not exactly earth-shattering) conclusions. Apparently they are moving on to limited clinical trials to see if resveratrol helps patients with impaired heart function. However, I would not be at all surprised if resveratrol is cytotoxic or even carcinogenic at the doses given to rats in the study. My own skepticism aside, the paper has valid methodology, reaches real, statistically significant conclusions, and demonstrates potential for further study.

You may notice, however, that nowhere does anyone related to the study claim that red wine somehow equals exercise. Nor do they suggest a diet including red wine is a viable way to ingest resveratrol in biologically relevant concentrations.

So where did we go wrong?

Iteration Two: Science Daily

Science Daily is a scientific news aggregator site. They compile recent scientific articles and summarize them for a non-technical audience. Generally speaking, they do a pretty decent job of preserving the conclusions drawn in the original paper without grossly sensationalizing them. Their articles are short, generally include no data, and tend to emphasize the results over the methods. That's not necessarily a bad thing. At least they cite the original paper.

The first line of the article states “A natural compound found in some fruits, nuts, and red wine may enhance exercise training and performance, demonstrates newly published medical research from the University of Alberta.” [3] That claim is not at all false. However, you can probably see how, stripped of the underlying context, it could be blown out of proportion.

Iteration Three: Elite Daily

Here we go.

The title of the ED article claims that a glass of red wine is equivalent to an hour at the gym. First off, no one ever mentioned anything about equivalence. Who came up with the “one glass equals one hour” thing? Certainly not the authors of the paper. The article goes on to say that “the benefits only come from one single glass,” citing another (even more sensationalized) article at the Latin Times [4].

Let’s first examine the resveratrol content of red wine. Red wine contains between 1 and 13 milligrams of resveratrol per liter [5]. Let’s be generous and assume the upper end. To match the dose given to the rats, a 70-kilogram adult male would need to consume about 10 grams of pure resveratrol per day. At 13 milligrams per liter, that corresponds to nearly 790 liters of wine. For those of you metrically challenged, that’s just over 200 gallons. Of wine. Per day. Do not try to drink 200 gallons of wine per day.
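Here is the back-of-envelope math with the assumptions made explicit: a 70 kg adult (my assumption) and the generous 13 mg/L resveratrol figure. The exact volume shifts with the assumed body mass, but it stays absurdly large either way:

```python
# How much red wine would deliver the rat-study resveratrol dose?
DOSE_MG_PER_KG = 146        # dose from the rat study, mg per kg per day
BODY_MASS_KG = 70           # assumed adult body mass
RESVERATROL_MG_PER_L = 13   # generous upper end of the 1-13 mg/L range
LITERS_PER_GALLON = 3.785   # US gallon

daily_dose_mg = DOSE_MG_PER_KG * BODY_MASS_KG       # ~10 g of resveratrol
wine_liters = daily_dose_mg / RESVERATROL_MG_PER_L  # liters of wine needed
wine_gallons = wine_liters / LITERS_PER_GALLON      # for the metrically challenged

print(f"Dose: {daily_dose_mg / 1000:.1f} g/day")
print(f"Wine required: {wine_liters:.0f} L ({wine_gallons:.0f} gal) per day")
```

Even if you assume a wine unusually rich in resveratrol, you are still in hundreds-of-liters-per-day territory.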

Or do. Whatever, I’m not a doctor.

The claim that exercise could be substituted, in part or in whole, by drinking red wine is clearly a complete fabrication; there is simply no way a human could drink enough red wine to take in a comparably useful dose of resveratrol.

In Conclusion

Be wary of scientific news coming out of aggregator sites. The best place to get the real deal behind a paper is, unsurprisingly, the paper itself. That’s not always an option, for a number of reasons. However, the abstract is always free to read, and any major conclusions are always (at least in well-written articles) stated upfront in the abstract. If it had been discovered that red wine could substitute for exercise, you’d better believe the first or second sentence of the paper’s abstract would say so.

Check the sources! If a news release links to another news release as its source, you’ve most likely entered the realm of unverified speculation, or complete fabrication.

Even the more reputable news sites (like Science Daily) generally sacrifice scientific rigor for the sake of clarity. While fabrications are rare, conclusions can be exaggerated, or key details omitted in the name of brevity.

In the last week or so, I’ve seen dozens of posts on various social media outlets promoting awareness for amyotrophic lateral sclerosis (ALS, Lou Gehrig’s Disease) using #IceBucketChallenge. This is awesome, and spreading public awareness about this disease is certainly an important step in the right direction.

You probably sense a “but” coming. You’d be right.

Admittedly, I don’t work in ALS research. I am, however, a researcher whose work is funded primarily by federal grant money. And let me tell you one thing: getting funded is unequivocally difficult.

An inordinately large number of researchers spend a disproportionately large amount of their time engaging in grant writing, not research.

A 2007 study found that upwards of 40% of university faculty members’ time was spent on the grant-securing process. Since you may not work in research, allow me to frame that in a more accessible way:

Imagine you work in a factory making widgets. Now imagine that every time you want to make a widget, which, let’s remember, is your primary job function, you must walk to your CFO’s office and give him a 30-minute presentation outlining, in perfect detail, exactly why you want to make a widget. He will consider your request, and 15-20% of the time, he will allow you to make a widget. The other 80-85% of the time, he will say to you, “I’m sorry, but right now we can’t give you the resources to make a widget. Come up with a better reason why we should, then come see me again.”

You can probably imagine in this hypothetical situation, widgets are not produced with particularly high efficiency.

Supporting ALS awareness is great. But what’s even better is funding the research that will ultimately allow us to find better treatments.

I’d ask that you do one of two things if you care about the progress of research for ALS treatment:

Have money to spare? Donate it directly to an organization that funds ALS research.
Don’t have money to spare? That’s fine, you can still help. Call your federal representative. Call your senator. Tell them you think federal funding for biomedical research and development should be a national priority.