Monday, December 31, 2012

In my last post, I gave an overview of why there's no scientific basis for claiming that GMOs are, by definition, harmful to human health, focusing mostly on the molecular biology angle. Now I'd like to add to that by describing how food gets digested at the molecular level. If a GMO is going to be harmful, it's going to happen at this scale, and it's going to be unique to the specific GMO in question. Hopefully, both pieces together will help you spot the real issues and the BS in the GMO debate. Not coincidentally, a major concept here is the same one from the molecular biology discussion: DNA is DNA.

Everything we ingest is essentially made up of four basic types of macromolecules: sugars, fats (or lipids), proteins, and nucleic acids. We're only concerned with the latter two, because all life, genetically modified or not, uses the same range of sugars and fats. Let's start with how proteins are digested. Remember, the point of a GMO is to get an organism to produce a specific protein, or set of proteins, that it wouldn't otherwise produce. Proteins are fascinating molecules, and I would totally have enjoyed an entire course on them alone. Depending on its sequence of amino acids, a protein can be an enzyme, a hormone, or a structural material, like collagen or the exoskeleton of a crustacean. Our digestive system is full of hundreds of different enzymes, each specialized in breaking down a particular macromolecule, including other proteins. And no matter what type of macromolecule it is, they are all broken down largely by the same type of reaction, called hydrolysis.

The dehydration synthesis reaction for two amino acids, in this case glycine on the left, and alanine on the right. (Source)

The proteins we ingest are made up of combinations of 20 different amino acids, each with a unique structure that gives it its own properties, such as whether it can be in contact with water. They also vary widely in size. Both of these properties help determine how the protein folds up, and therefore its structure and function. Each amino acid is bonded to the next by removing a hydrogen ion (H+, or proton) from one, and a hydroxide ion (OH-) from the other. The H+ and OH- spontaneously attract each other to form water, giving the reaction its name: dehydration synthesis. The reaction is easily reversed by adding a water molecule back into the bond to cut it (hence hydrolysis, literally "water cutting"). Hydrolysis is thermodynamically favorable, meaning it can happen spontaneously on its own, but by itself it is far too slow to meet our body's demand for new building blocks, so enzymes speed it up.

Enzymes in our digestive system that do precisely this include pepsin, trypsin, and chymotrypsin, each of which specializes in breaking the bond at just a few specific amino acids in the stomach and the duodenum. These enzymes are able to reach virtually all peptide bonds because the low pH in our stomachs unfolds the protein from its complex shape into more of a chain. The end result is that the protein we ate is broken up into individual amino acids for absorption into the bloodstream through the lining of our intestines. At that point, they have none of the properties of the protein we ingested, and no function on their own. It doesn't matter which protein it was, or where it came from; it almost always ends up as nonfunctional pieces that are recycled in new dehydration reactions elsewhere in the body to create new proteins. There's an exceptionally small chance for any protein to survive the digestive system intact and functional, and a novel GMO protein is no more likely than any other protein to do so.

Many food allergies, however, are indeed caused by proteins that resist full digestion at one or several of these steps for one reason or another. When this happens, our immune systems recognize the foreign protein and attack it, causing the allergy. It's certainly plausible that this could happen with a GMO protein, but the risk is not deemed serious enough by the FDA to require a clinical trial for new GMOs. I think it's perfectly reasonable for consumers to demand testing for potential food allergies, but I also think the potential risk is quite a bit overhyped. It's certainly worthy of debate, and I'd be open to requiring independently operated, small-scale clinical trials.

Digesting the nucleic acids DNA and RNA follows a very similar process. They too are polymers whose building blocks are joined by dehydration reactions, and they are broken up by hydrolysis by enzymes called nucleases. The resulting nucleotides can either be reused to build new nucleic acids, or be broken down further into a sugar, a phosphate group, and the base that makes up the genetic code.

Hydrolysis of DNA. An arrow on the bottom left shows a water molecule approaching the phosphate (PO4) group that binds the pair of nucleotides. The pentagon is a sugar, called deoxyribose. The molecular structures of the bases are not shown (Source).

Some critics have claimed that transgenes could be taken up intact by bacteria in our digestive systems, but there is no evidence that a transgene has ever survived the entire digestive tract. This is a hypothetical and exceedingly unlikely event that is not supported by current science. Furthermore, a transgene is no more likely than any other gene we digest to be transferred this way. And if, say, the gene providing Roundup resistance in Monsanto's corn were transferred in full to a bacterium, how exactly would it provide a survival benefit without constant exposure to Roundup in our digestive systems?

When people claim that corporations are buying and paying for the science on GMOs, they may not realize it, but they're saying that everything we've learned over decades of research in molecular biology and digestion is suspect. Because so much of what we do know is fundamental, the FDA long ago declared that GMOs are "substantially equivalent" to their non-GMO counterparts, and required no additional testing. They didn't do it to appease powerful interests; they did it because it would have been irrational to do otherwise. Their jurisdiction is public health, and in this case they made the choice with much more science behind it. If you listen to vocal critics of GMOs, it seems they're starting to get this message.

When people say "we have the right to know" what's in our food, hopefully now you can get an idea of why a label saying something is genetically modified doesn't really tell you much except what you already know: this organism was grown using industrial agricultural methods.

Wednesday, December 26, 2012

Oh boy. The first genetically modified animal intended for our dinner plates has passed its last major regulatory hurdle, and presumably has the green light for FDA approval. AquaBounty Technologies' AquAdvantage salmon will soon appear (unlabeled) in supermarkets and restaurants around the United States. Back in April, the FDA finalized its environmental assessment of the salmon, and it finally released the results this past week, indicating that it sees little risk of negative environmental impact. There are certainly valid reasons to make a personal decision to avoid this salmon, but I want to try to separate the science from the science fiction to help you make a more informed decision about it. This seems like the perfect time to introduce the concepts and issues around GMOs as someone with personal lab experience with them. And since at this point I think pretty much anyone who reads this knows me, you can vouch that I'm hardly a shill for Big Ag.

At 18 months, the transgenic fish is clearly much larger than the same-age normal fish. But overall growth of the same generation of fish evens out by 36 months. (Image Credit: Aqua Bounty Technologies)

To develop AquAdvantage, AquaBounty isolated the growth hormone gene from the largest salmon species, the chinook. Then they inserted it into the genome of an Atlantic salmon, allowing the fish to grow at twice its normal rate. This gene only affects the rate of growth; it does not create a new giant species of Atlantic salmon. Just by looking at a full-grown transgenic fish next to a full-grown natural Atlantic salmon, you'd never be able to tell the difference. While I grant it may sound a little bizarre, and I definitely thought so about GMOs in general before I became more familiar with them, the techniques are hardly novel at this point. Nearly every biology major of the last 30 years has performed techniques like these dozens of times. The ick factor, I think, comes from a misconception about what genes actually are, and from an ethical unease about toying with nature that usually underestimates how often genes transfer from species to species naturally. There's nothing wrong or shameful about this, really. It's more shameful that this information is so arcane.

Ultimately, the first issue comes down to the fact that DNA is DNA. Think back to your high school biology class, where you learned about its structure. It's the same stuff in every cell in my body, in yours, and in the virus I caught the other day that's really pissing me off right about now. A gene is basically a stretch of DNA that codes for a specific protein, with molecular switches nearby that turn it on or off. While this stretch of DNA and its switches may vary in size and precise sequence between species, there's nothing uniquely "chinook" about the chinook's DNA, just as there's nothing uniquely human about ours. Nobody is injecting crazy new hormones into anything; the fish simply produces a very similar version of an existing hormone, with the same kind of molecular "machinery" controlling its expression.

Who wants a tri-colored fish that was injected with some sort of red shit? (Image Source: Yvonnegraphy)

Because of all this, any stretch of DNA is theoretically fungible from species to species. Evolution occurs because the order of base pairs changes over time due to mutations, or sometimes because viruses leave traces that get inserted into the host's DNA. Roughly 8% of every person's DNA originally came from viruses, just from natural gene transfer. That doesn't mean we're part virus, and it certainly doesn't mean our ancestors were somehow more "natural" humans. Nature is constantly changing and adapting to outside stimuli, and the adaptation happens at the DNA level. The concept of genetic modification is to carry out these types of gene transfers deliberately, nothing more, nothing less.

Based on everything we know about molecular biology, chemistry, and toxicology, it just doesn't make sense for GMOs, by definition and across the board, to present a public health issue, apart from the possibility that the specific novel protein the new gene encodes produces an allergic reaction. This is reflected in a broad consensus (AAAS, WHO, a systematic review of 42 peer-reviewed studies, the Royal Society of Medicine) of science and health organizations around the world. I understand why many people do not trust this consensus. It's absolutely true that chemicals or pharmaceuticals once deemed completely safe by industry and regulatory agencies were later found to be not so safe. I'm always very suspicious of industry, but I try not to let my suspicions replace or override evidence. Evidence is the sum of the best of our understanding of a particular topic, and while it can sometimes be incorrect or incomplete, in the case of GMOs the small amount of contradictory evidence is of very poor quality. I could write an entire post about the problems with a recent article linking GMOs to cancer in rats, problems that ultimately render the experiment of little to no value, but the conclusions it claimed to reveal will never fully disappear.

So, getting back to AquAdvantage, how exactly did this salmon pass an environmental assessment? The first question that needs answering is whether this new growth gene could escape into natural populations of Atlantic salmon. Now, think back to high school biology again: it's not enough for a gene to get passed on from a parent; the gene must provide a selective advantage to persist. It's plausible that quicker growth would be naturally selected for once it hits the ecosystem, but certainly not definite. The risk is real enough, though, that AquaBounty says it will make, at minimum, 95% of the transgenic fish sterile females that cannot possibly pass on the gene in question. While this does not completely eliminate the theoretical risk, it does reduce the probability of it actually occurring. To further reduce the possibility of a worst-case scenario, the fish will be bred at a facility on Prince Edward Island where, if they do escape, it will be very difficult for them to survive the extra-cold winter temperatures and high salinity of the Gulf of St. Lawrence they would escape into. Ultimately, this was deemed good enough by the FDA. If you have a quibble with that, read the assessment and determine how and why you think it is insufficient. The FDA isn't going to listen to assertions and arguments without evidence.

That's the science behind GMOs, and none of what I said is all that controversial within the relevant fields. In the science bubble, these facts speak for themselves. It's a mistake, and a bit arrogant, to act as if that is or should be the case outside of science, in the court of public opinion as they say. Most people make decisions based on much more complicated factors, not the least of which are anecdotes that strike an emotional chord. Sticking only to arguments based in evidence doesn't really take those complicated factors into account. It's important to try and tell a compelling story, one that competes with powerful anecdotes, so my hope is that what follows below is a decent attempt at it.

To someone who uses evidence to guide decisions on science and technology, there is a bit of irony in labeling the political right as "anti-science," because for many on the left, the validity of science seems to depend on whether or not corporations fight it or hold it up. We on the left are often amazed at the mass delusions the right has accepted as truth, from climate denial, to intelligent design, to conversion therapy to "save" people who "made the choice" to be gay. Unfortunately, we are not as critical of our own misconceptions, from the way we talk about GMOs to not vaccinating our children. I don't think the two sides are totally equivalent here, but these misconceptions all stem from a mistrust that is sometimes, but often not, justified. It's a difficult thing to accept, but if you believe in science, you have to accept that mistrust alone is not an appropriate lens through which to view biotech. I haven't read the full environmental assessment, and I'm certain there are valid criticisms to be made of it; there always are. However, none of them amount to a (albeit probably half-serious) headline saying "The Apocalypse is Here". I think we have the capacity to be a whole lot more rational than the current makeup of the right wing, but this requires examining our own thinking and being secure enough to accept that maybe our gut reactions are leading us to places that aren't totally justified. It's totally OK to have a visceral reaction to an article saying this fish will destroy humanity. What's not OK is letting those emotions close you off to information that makes things a bit more complex, because very little in the world is purely black and white. It wasn't the case when George W. Bush was saying "you're either with us or you're against us", and it's not the case when thinking about organic vs. industrial agriculture.

While Monsanto is certainly prone to exaggeration and unethical business practices, and is one of the largest contributors to an unsustainable food system, this does not undo the science on its side about GMOs. Technology pretty much always comes with some risk and some benefit, and the question is always whether the benefits outweigh the risks. Often, it seems that the only benefit is to the bottom line of the company that develops the GMO, and AquAdvantage is not really an exception. The benefits do go to AquaBounty, but what tends to get overlooked is that fish farmers who can cycle out their enclosures more quickly will also see a benefit. Plenty of family farmers actively choose Roundup Ready corn because they perceive it to be a more reliable means of providing for their families. I may not want to support either one with my own money, because I don't want to support "conventional" agriculture and I don't approve of fish farming, but I'm an unapologetic pragmatist. The burden is on us to demonstrate a better way while accepting that economics matter, and that they matter from a self-interest point of view. I don't want to force someone to be more environmentally responsible with no assurance that their ability to finance their huge, expensive equipment is safe, and I don't want to advocate for any policy that is based more in ideology than evidence. Most of these farmers are heavily in debt and make short-term economic decisions because of it, so make your solution more enticing from a short-term economic point of view. Demonstrate the utility of your solution, and do it without expecting much help from government. We really do need a better way.

Right now, scientists are working to develop drought-resistant GMOs that can survive dry spells like the one we went through in the Midwest this past summer, require less irrigation, and conserve our aquifers. Another group is working on crops that use less nitrogen, potentially minimizing the use of synthetic fertilizers made from petroleum and thus reducing the kind of agricultural runoff that has created a giant dead zone in the Gulf of Mexico. Some of these problems could possibly be addressed by conventional breeding, but compared to genetic engineering it's less efficient in both time and money. It's certainly possible that neither of these GMOs ever pans out. Sure, the ideal solution is to grow our food in a diverse field, without monocropping and synthetic inputs, but I don't see much value in dismissing something that tries to improve on synthetic inputs without forcing farmers to adopt an entirely new and economically unproven method. We're certainly willing to give electric vehicles time to develop, knowing full well that they're mostly impractical right now and that the electricity is still mostly generated by fossil fuels. They're nowhere near their potential, and even their biggest cheerleaders acknowledge this. The same goes, I think, for GMOs. Monsanto doesn't help by insulting our intelligence and acting as if the potential has already arrived, but really, what does that even matter? Nobody believes that. Why would we dismiss GMOs like these out of hand because of the techniques used to create them? Are there really more barriers to competition in biotech than in any other industry, such that the companies involved now will always control it? Just think about where IT once was. In 20 years, I have little doubt we'll know full well where biotech stands, and today will be looked back upon the way the 1960s are in IT; just replace IBM and Bell with Monsanto and Cargill.
They'll lose control, because big corporations are good at using influence and access to maintain their market share, but not through innovation. That's where the proverbial college dropout in his or her garage comes in, and they'll most definitely be coming.

Please don't be afraid to comment on this post if you disagree. I'm happy to engage with people who think I'm crazy.

Friday, December 14, 2012

The language of science, of course, is math. In physics and chemistry, you need algebra and calculus to have more than a passing knowledge of the relevant topics. For our purposes, exploring the effects of environmental risk factors and treatments on health and the environment, the language is statistics. If you're not familiar with the basics, quite simply, you will easily be led astray by hype. There's no way to fit all of the basics into a single, much less readable and interesting, blog post, but I do think you can highlight what separates someone like Nate Silver from Dick Morris. So I'm totally gonna. You don't have to understand the results section of a study to correctly gauge whether what you're being told is important, but you do need to understand the concepts behind it. The two most important concepts deal with the nature of randomness: confidence intervals, and errors in hypothesis testing.

Don't be like Dick. Too many people are Dicks.

Virtually every study we will come across involves a rather simple, but important concept: sampling. When you think about it, it's really quite amazing how many of the recent polls in the presidential election, using only around 1500 random people, were able to be so accurate in determining the results of a voting population that ended up being over 125 million people. The assumption that a random sample accurately reflects the larger population like this is the basis for studies that link cancer to various agents, or the prevalence of a certain contaminant in Lake Michigan.
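If it seems too good to be true that 1,500 people can stand in for 125 million, you can check the math yourself. Here's a quick simulation (a sketch in Python, entirely my own illustration rather than anything from the actual polls) that repeatedly "polls" 1,500 random voters from an electorate where the candidate's true support is 51%:

```python
import random

random.seed(42)

TRUE_SUPPORT = 0.51   # true fraction of the electorate backing candidate A
SAMPLE_SIZE = 1500    # a typical national poll
NUM_POLLS = 1000      # rerun the poll many times

estimates = []
for _ in range(NUM_POLLS):
    # each poll asks 1,500 randomly chosen voters
    yes = sum(1 for _ in range(SAMPLE_SIZE) if random.random() < TRUE_SUPPORT)
    estimates.append(yes / SAMPLE_SIZE)

# standard error of a proportion: sqrt(p*(1-p)/n) ~= 1.3 points here,
# so a 95% margin of error of about +/- 2.6 points
MOE = 0.026
within = sum(1 for e in estimates if abs(e - TRUE_SUPPORT) <= MOE)
print(f"{within / NUM_POLLS:.0%} of simulated polls fell within the margin of error")
```

Roughly 95% of the simulated polls land within about 2.6 points of the truth, which is exactly the margin-of-error math pollsters report.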

Unlike a presidential poll, however, there isn't just a handful of possible results (e.g. Obama, Romney, undecided). If you're looking at something like blood pressure readings for people taking a certain medication for hypertension, you could get virtually any conceivable value, although the probability can reasonably be considered 0 outside of a certain range. We're dealing with normal distributions, specifically two different normal distributions, one for exposure and one for no exposure, and then comparing them. This is the crux of data analysis in a nutshell. We want to know what the middle of the curve (i.e. the mean) for people or plots of land or crops exposed to a certain treatment tells us, compared to what the middle of the curve for those not exposed tells us.

Before we discuss this, though, let's take a closer look at the normal distribution. The blue shading shows what I mean by the probability of getting virtually any conceivable number; the percentages are the probabilities of finding any single data point in that range. The key takeaway is that almost anything is technically possible, so when you run only a single test, you always know in the back of your mind that the results could be a total fluke. When scientists report the results of their studies, they acknowledge this by listing the confidence interval (CI) of their curve; presidential polls do the same thing when they openly declare their margin of error (MOE). There has to be some sort of cutoff for readers to understand the impact of your results. A CI or an MOE gives the range in which you are 95% confident the true mean lies, if you were theoretically to perform an infinite number of readings. The more individual data points you take, the more certain you can be that your curve represents reality, so the tighter this range gets. But there's always some uncertainty, and it gets compounded a bit when you start testing one curve against another.
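To make the CI idea concrete, here's a short sketch computing a 95% confidence interval for a mean. The blood-pressure numbers are made up purely for illustration; the formula (mean plus or minus 1.96 standard errors) is the standard normal approximation:

```python
import math
import random

random.seed(0)

# hypothetical systolic blood-pressure readings for 50 treated patients
# (illustrative numbers only, not from any real study)
readings = [random.gauss(128, 12) for _ in range(50)]

n = len(readings)
mean = sum(readings) / n
# sample standard deviation
sd = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
# the standard error of the mean shrinks as the sample grows
sem = sd / math.sqrt(n)
# 95% confidence interval, normal approximation (z = 1.96)
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean = {mean:.1f}  95% CI = ({ci_low:.1f}, {ci_high:.1f})")
```

Notice the `math.sqrt(n)` in the denominator: quadruple your sample and the interval shrinks by half, which is exactly the "more data, tighter range" point above.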

Science always starts with a hypothesis (not synonymous with theory! Don't even!) that needs to be tested. Sometimes your peaks are pretty close together, and sometimes they're further apart. Those of us working with statistics put the burden of proof on showing a "statistically significant" result, which is to say the burden is on demonstrating a meaningful difference in results and finding evidence in the numbers for it. Our starting (null) hypothesis is always that we will not find evidence of a meaningful effect, with the assumption that the two curves (samples) come from the same population, similar to how you assume that the Gallup and Rasmussen polls, although they may be telling you slightly different things, represent the same electorate.

One of the big questions we're asking, of course, is, "how likely is a difference this large if these two curves really do come from the same population?" This is represented in studies as the "p-value". Obtaining a p-value first requires you to "fit" your data to a standard normal distribution, which for our purposes is generally more than acceptable. You can't compare two curves unless you measure them against the same standard. I'll spare you the details of how it's done; just be aware that the comparison is made fairly.
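If you want a feel for how that comparison works under the hood, here's a bare-bones sketch of a two-sample z-test. Real studies usually use a t-test or something fancier, and the groups and numbers below are invented for illustration:

```python
import math
import random

random.seed(1)

def p_value(a, b):
    """Two-sided z-test: how likely is a gap between sample means this
    large if both samples really come from the same population?"""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))          # two-tailed p-value

control = [random.gauss(120, 10) for _ in range(200)]   # untreated group
treated = [random.gauss(128, 10) for _ in range(200)]   # genuine 8-point effect
control2 = [random.gauss(120, 10) for _ in range(200)]  # second untreated sample

print("genuine effect:  p =", p_value(control, treated))
print("same population: p =", p_value(control, control2))
```

A genuine effect produces a tiny p-value, while two samples from the same population usually produce a large one — usually, because as the next section explains, chance alone can still fool you.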

You can maybe get a feel for the challenge from the image below. Remember, we're taking one set of readings for each treatment, and if we were to take another set just for good measure, the peak could be quite different just by chance; it could plausibly land anywhere in your confidence interval. Sometimes you see this from week to week in the presidential polls, when the 1,500 or so people who respond one week seem to show a major swing in opinion from the week prior. News outlets tend to run with the horse-race narrative, looking for the one gaffe or moment that caused this crazy swing. The Nate Silvers of the world look at the swing and say, "Simmer down, people. It's almost certainly due to randomness."

For our studies, this is what's being analyzed! Taken from Missouri State

Because of this effect, when your curves are close together, it's going to be pretty difficult to tell the two apart. You're going to have a very high probability of retaining your null hypothesis that there's no evidence of an effect, because you (hopefully) have a good amount of data and the two groups look pretty similar. The yellow part takes the randomness of your sample into account and highlights the possibility that we just caught a fluke: maybe there is a real difference, but by total chance we didn't happen to catch it.

The red shading, conversely, shows how we allowed a 5% chance of concluding we found evidence of an effect when it was really just random chance. In practice, it's fairly rare to get curves so far apart that you're practically certain you found evidence of an effect. If you're reading about a study, it's because the researchers found this kind of evidence and the media thought it would generate interest. By convention, the researchers allowed themselves the same 5% cutoff for being in error, and oftentimes the reported p-value is pretty close to that 5% cutoff.
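That 5% allowance is easy to watch in action. The sketch below (my own illustration, not from any study discussed here) runs two thousand simulated experiments where both groups are drawn from the same population, so every "significant" result is, by construction, a false positive:

```python
import math
import random

random.seed(7)

def significant(a, b, alpha=0.05):
    """z-test again: is the gap between sample means larger than chance
    alone would produce (1 - alpha) of the time?"""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2)) < alpha

TRIALS = 2000
false_positives = 0
for _ in range(TRIALS):
    # both groups come from the SAME population, so any "effect" is a fluke
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if significant(a, b):
        false_positives += 1

print(f"false positive rate: {false_positives / TRIALS:.1%}")
```

The rate hovers around the 5% cutoff we allowed ourselves, which is the red shading in action: run enough experiments on nothing, and "significant" results appear anyway.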

And now you know, in excruciating detail, why any single study is just one piece of evidence to throw onto the scale and weigh, even in the best possible circumstances. But statistical uncertainty is just the very, very tip of the iceberg.

Monday, December 10, 2012

A new meta-analysis published in the latest edition of JNCI provides strong evidence that higher levels of carotenoids in women's bloodstreams are associated with a reduced risk of breast cancer. Here's a pretty accessible article from HuffPo, and another a bit more in-depth, about the findings, what carotenoids are, and where you can find them. If you are a woman at risk of breast cancer, it's well worth reading. There's a lot to consider, though, so let's get to it.

In my last post, I said in no uncertain terms that the conclusions of any one study do not represent the full story, but when done properly provide a possible clue. A lot of times, they provide little more than evidence that a hypothesis warrants further investigation. There are two potential exceptions to be on the lookout for when reading articles about health and medicine. Most publications are good about saying what type of analysis was performed.

One is called a systematic review, which is pretty much what it sounds like. The researcher surveys all of the evidence out there on a particular subject and lays it all out in a single article. The other possibility is a meta-analysis, which is similar except that a new statistical analysis is performed on the multiple studies put together, essentially expanding the sample sizes of the people exposed to a treatment or pollutant, and the controls who are not. The latter is what this paper did.

You don't need much experience with data analysis to know that increasing the sample sizes can make results more representative of the general population, thus potentially allowing you to make a stronger conclusion. Every Royals fan knows by now that our April enthusiasm will be sucked dry by Memorial Day, when more games get played and hot starts level off, or revert to the mean, so to speak.
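The effect of pooling on certainty is easy to see with a toy calculation. The eight studies below are invented numbers, not the actual JNCI data; the point is just that a sample-size-weighted pool has a much smaller standard error than any single small study:

```python
import math

# eight invented cohort studies, each as (mean effect, sample size);
# none of these numbers come from the actual JNCI analysis
studies = [(0.8, 40), (1.3, 55), (0.6, 35), (1.1, 60),
           (0.9, 45), (1.4, 30), (0.7, 50), (1.2, 65)]
SD = 1.0  # assume a common within-study standard deviation

# pool by weighting each study's mean by its sample size
total_n = sum(n for _, n in studies)
pooled_mean = sum(m * n for m, n in studies) / total_n

# the standard error shrinks with the square root of the combined sample
single_sem = SD / math.sqrt(studies[0][1])  # one small study on its own
pooled_sem = SD / math.sqrt(total_n)        # all eight combined

print(f"pooled mean effect: {pooled_mean:.2f}")
print(f"standard error, single study: {single_sem:.3f}  pooled: {pooled_sem:.3f}")
```

The hot starts of individual studies level off in the pooled estimate the same way a Royals April levels off by Memorial Day.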

The analysis should also smooth out the variations that can happen in smaller-scale results due to bias or chance, and provide insight into the "true" effect. Bias, in the sense I'm using it, does not refer to a Fox News-like investigator deviously "cooking the books" for a preferred outcome, but rather a tendency to under- or overestimate the effect of the exposure due to characteristics of the study subjects. Any one of those circles could be the result of this bias, and the theory is that looking at the fuller picture minimizes its effect. This particular analysis covered eight cohort studies, which, suffice it to say for now, means the subjects were not randomly assigned by the investigators, making the results particularly subject to these unintentional biases. The funnel plot below provides a visual representation:

Each little circle represents the result of a single published study. Just by chance, there should be some results that do not show a meaningful effect (negative, left of center), and some that do show a meaningful effect (positive, right of center). This is essentially just another visual representation of the bell curve that we're all familiar with.

The big caveat, and this is the case for all reviews and meta-analyses, is that it's possible that every published study out there over- or underestimates the effect, so that the "true" effect may still be questionable. This happens because a study that doesn't show the hypothesized effect (a negative result) is less likely to be published. Journals like to publish studies that show something interesting, on the assumption that a study showing nothing interesting isn't worth reading. Here's a visual representation of what's called publication bias:

When negative results don't get published, you get a funnel plot skewed in a single direction, like the graph on the right. If you look at the circles in both images, you can clearly see that their center (i.e. the mean, a.k.a. your result) is quite different. The image on the right overestimates the effect of whatever the patients were exposed to. It's certainly plausible that our carotenoids are prone to this situation. How do we know there aren't 10 studies sitting in various researchers' file cabinets that will never be published because they were negative? They obviously can't be included in a meta-analysis if they aren't published.
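You can simulate publication bias in a few lines. In the sketch below (invented numbers, purely for illustration), the treatment truly does nothing, but if only the studies that happened to land on the positive side get published, the "literature" reports an effect anyway:

```python
import random

random.seed(3)

TRUE_EFFECT = 0.0  # suppose the treatment genuinely does nothing

# 40 small studies, each estimating the effect with sampling noise
all_studies = [TRUE_EFFECT + random.gauss(0, 0.5) for _ in range(40)]

# publication bias: only "positive" results ever make it into print
published = [e for e in all_studies if e > 0]

mean_all = sum(all_studies) / len(all_studies)
mean_published = sum(published) / len(published)

print(f"mean of ALL studies:       {mean_all:+.2f}")
print(f"mean of PUBLISHED studies: {mean_published:+.2f}")
```

The published-only average manufactures an effect out of pure noise: the file-cabinet studies are exactly the missing left half of the funnel.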

So what's the real conclusion to take away from all this?

There seems to be pretty strong evidence that carotenoids may provide some sort of smallish protective effect with regard to breast cancer, and only breast cancer as far as this evidence is concerned. I'll show you how to look at a results section with all the statistics gibberish and gauge the size of an effect for yourself some other time. There are plausible mechanisms for how this would work described in the HuffPo article, so it's not some mysterious shot in the dark. However, we're still well short of definitive proof. Eat fruits with high carotenoid content because they're generally yummy and healthful in many other ways, too. Do not buy a $20 bottle of carotenoid supplements, and beware anyone trying to sell you on them. And look at a headline such as this one and roll your eyes, now that you know better.

I'll come back to publication bias from time to time, because it's everywhere. The pharmaceutical companies do large controlled experimental trials where the subjects are assigned randomly (i.e. the results are considered more conclusive than those of a cohort study), and they have plenty of incentive not to publish when the trials have a negative result. Everyone likes a good Big Pharma bashing sesh, and I'm happy to separate the genuine bullshit they pull from the conspiracies that don't really hold up to much scrutiny.

Friday, December 7, 2012

I think most of what I'll be doing in this space will be providing context, nuance, and a scientific perspective on studies making the rounds in health and the environment. A recurring theme will be the cognitive gap between how a scientist reviews these studies vs. how they are presented in the media. Why am I so insistent on this? It's as good a place for a first real blog post as any other.

I think this post by Orac at scienceblogs is a good start, and helpfully illustrates why evidence matters.

One majoor (sic)—perhaps the major—difference between skeptics and cranks like antivaccinationists is that skeptics recognize human cognitive weaknesses that allow us to be misled so easily by spurious correlations. We realize that, far more often than we are prepared to believe, things really do happen by coincidence. When there are enough numbers, and there can be a lot of coincidences.

Scientists are trained in an often counter-intuitive thought process, one that simply doesn't come naturally to humans. This way of thinking even has its own language that is really the exact antithesis of the language used in journalism.

Science is, in essence, inoculation against these tendencies to draw false (conclusions) and to confuse correlation with causation, a weapon against the limitations of individual observations. However, it always interests me “what we’re up against,” because it goes very much against the grain to think scientifically. Our brains are not hard-wired that way. Learning to accept science over one’s own observations does not come naturally; so it is not surprising that so many people have a great deal of difficult doing just that.

My goal will be to try to avoid condescending language and pejoratives, because that ultimately does nothing to inform people who aren't already part of the choir, but I think the gist of this idea couldn't be more spot on. Weird things happen, and often we really don't know what the cause is. To us, that's entirely OK. Figuring out the "why" is the challenge, and it's unforgivable to let ideology and/or an emotional reaction guide us to an answer. Science, of course, is constantly used to promote an agenda, but it's allowed to be largely because of what I like to call "single-study syndrome".

Scientists set a very high bar to be convinced of anything. Our first instinct is to essentially tear apart every study and claim that comes out, looking for reasons why its conclusions are limited, or possibly even worthless, even if the conclusion seems on its face to be totally intuitive. Associations may or may not be meaningful, but they need to be shown as statistically significant (a whole other blog post) more than once, and ideally across a couple of different study designs (another idea for a blog post!). It's best to just go ahead and assume there are no real bombshells in science. Once the headlines fade away, there really aren't.

If you want to keep checking my blog, all I really ask of you is to accept, or even just consider, the two main ideas of this post:

Your instincts and personal experiences cannot be trusted to explain anything across the board for all people.

Neither can any one particular study.

I'm going to have a lot of fun explaining the latter, time and time again. I hope you'll enjoy reading it.

Monday, December 3, 2012

Welcome, welcome. Realizing I share a lot of things on Facebook that are likely of zero interest to ~97% of my friends, I started thinking, there must be some way for people to choose whether they want to be exposed to my links. Two years later I signed up for Blogger.

Getting down to brass tacks, issues of science and technology cannot be uncoupled from politics, and deeply-held political beliefs cannot be challenged by merely stating facts. Environmental risk factors (GMOs, synthetic chemicals, pollution, vaccines, and so forth) are constantly associated with horrible outcomes, from cancer and autism to nothing short of the end of humanity. Clearly, there are issues of cultural identification at play (our team vs. your team, etc.), and those simply must be considered if those of us with a particular set of skills are going to communicate effectively. My goals are simple but ambitious: explore different ways of informing people about issues pertaining to molecular biology, toxicology, biotech, medicine, nutrition, and so forth that don't just rely on spouting facts from an air of supposed authority.

I want to help people understand why evidence matters, how you assess the quality of evidence, and appreciate the nuance of the things I spend my days immersed in. Illustrative explanations of, for instance, the limitations of certain study designs and statistical significance cannot be completely done away with, but they should be enhanced with language that attempts to be consensus-building. I look forward to failing, and adjusting, failing some more and seeing if maybe I stumble upon something that clicks from time to time.

The title of this blog comes from Mr. Show. The title of this post comes from Big Trouble In Little China, so you better goddamn well believe there will be miscellaneous cultural bric-a-brac as well.