29 September 2009

I discovered today that I got scooped on a project I was working on. I wasn’t very far along, but I had some data in the can. Alas, there’s really no point in continuing with the project and no way to salvage it. It’s been done.

This is not something I ever expected to happen. I don’t work in a research field that could exactly be called “fast paced.” C’est la guerre.

These problems with publishing are starting to get old. I was scooped today, and my last two papers have been kicked back from journals for various reasons. I just want to get something finished and published, damn it.

28 September 2009

On the TED blog, Jonathan Haidt says something important about how people interact with evidence. I’ve added some bolding for emphasis.

We engage in moral thinking not to find the truth, but to find arguments that support our intuitive judgments, so that we can defend ourselves if challenged. The crucial insight here comes from psychologist Tom Gilovich at Cornell, who says that when we want to believe a proposition, we ask, "Can I believe it?" – and we look only for evidence that the proposition might be true. If we find a single piece of evidence then we’re done. We stop. We have a reason we can trot out to support our belief. But if we don’t want to believe a proposition, we ask, "Must I believe it?" – and we look for an escape hatch, a single reason why maybe, just maybe, the proposition is false. So people who have a negative intuitive reaction to Obama, or who are fearful about the enormous changes going on, are already inclined to believe rumors against him and his plans. They hear about death panels and forged birth certificates and ask “can I believe it?” The answer is usually yes, particularly if Fox News raises these questions and brings on experts who claim that the propositions are true. Even if Fox News presents both sides, the fact that somebody on TV endorsed a proposition gives viewers permission to believe it, if they want to. Conversely, Democrats can give rebuttals till they’re blue in the face, but if people are asking themselves “must I believe it” about the Democrats’ claims then the answer they will usually reach is “no.” Logic and consistency just aren't very important when it comes to morality.

Haidt frames this in the context of moral thinking, but my experience is that these exact same patterns occur in areas that have almost no moral repercussions at all. For instance, what is the moral or ethical issue involved in accepting that twelve human beings have walked on the surface of the moon? Part of this may be fear of science, as Haidt notes that:

(M)aterialism is deeply and profoundly threatening to many people.

Science is 100% materialism, through and through, beginning to end. This means that, according to Haidt, scientists are going to have an incredible problem putting their message across on any subject.

Of course, with many scientific issues, I can absolutely see where people think there are ethical issues involved. Is there any way to break through this pattern of thinking? Haidt’s response fills me with dread, frankly.

While it is useful to rebut charges and get your arguments out in circulation, you have to understand that arguments and evidence have little impact on people as long as their feelings tilt them against you. You’ve got to create trust and liking first, and then people will be willing to listen. People can believe pretty much whatever they want to believe about moral and political issues, as long as some other people near them believe it, so you have to focus on indirect methods to change what people want to believe. You have to get them to the point where they ask themselves “can I believe it?” about your claims, rather than about your opponents' claims.

The first likable, apparently trustworthy person you meet who exposes you to an idea has a huge advantage in convincing you of that idea, regardless of whether it’s true or not. This supports a lot of what Randy Olson is arguing in his book (reviewed here).

23 September 2009

Here’s a question that prospective grad students and post docs might want to ask prospective supervisors.

“How did you get along with your main supervisor during your Ph.D.?”

Because patterns tend to repeat. You know the old saying: children of alcoholics, or children in abusive homes, are the ones most at risk of becoming alcoholics or abusers themselves. I suspect that many people mentor their students the way they themselves were mentored, whether consciously or not.

21 September 2009

I’m surprised I've never seen this list of seven warning signs of bogus science. (Of the seven, the second made the most recent appearance on this blog, here.)

For long term stuff, I might add something Arthur C. Clarke said on an episode of the television series, Arthur C. Clarke’s Mysterious World. Though I don’t have the exact quote, it was along the lines of:

Science generally gets to the bottom of things in 50 years – if there’s any bottom of things to get to.

If a claim has persisted for decades without good evidence in its favour, it’s unlikely to ever get good evidence in its favour. Doubtless there are a few counter-examples, but these are rules of thumb, not laws.

18 September 2009

This article by Jeff Ello is about information technology (IT) professionals, but it also applies to scientists:

While everyone would like to work for a nice person who is always right, IT pros will prefer a jerk who is always right over a nice person who is always wrong. Wrong creates unnecessary work, impossible situations and major failures. Wrong is evil, and it must be defeated.

That last sentence is one of the best summaries of why scientists are, as Randy Olson puts it, “handicapped by a blind obsession with the truth.”

16 September 2009

In my more cynical moments, I’ve often thought that people will do anything to combat climate change... as long as it doesn’t inconvenience them.

To make a point about climate change, Lewis Pugh did something that must have been mighty inconvenient. He swam across the North Pole. Making that point cost him the feeling in his hands for four months.

As a scientist, I think a fair amount about climate change, even though my research isn’t about that. I’ve mostly discussed it here as an example of how it is that people can end up buying into bad science, the psychology of belief, skepticism and denialism, and so on.

But besides trying to convince people that climate change is true, how many scientists have actually changed their research or their lab to reduce carbon emissions? I haven’t. I’m not proud of that, but it’s true. I wish I knew how to lead on this issue, and have a “green” lab, but I’m not sure how that would be possible. Science is a very energy intensive endeavor. We constantly have administration fretting about the energy costs of our science building compared to other buildings on campus. Then there are the hundreds, if not thousands, of miles researchers travel to go to conferences.

At some point, we scientists are not only going to have to talk the talk on climate change, we’re going to have to walk the walk. And I’m afraid it could be mighty inconvenient.

Related links: A Nature editorial about the importance of universities building energy efficient buildings; a letter to Nature about how applying for a job typically requires eleven (!) paper copies of records.

At MolBiol Research Highlights, Alejandro Montenegro-Montero’s advice to graduate students mentions rotations. I point out that not all programs have them. Strangely, I had made the exact same point only days before at BenchFly’s post on first year grad school strategies.

Along the lines of grad school advice, a request for input on Twitter resulted in this post on the BioData blog.

Randy Olson has been working in Hollywood for over a decade, but he’s still one of us. He gets what being an academic scientist does to you: you become literal, critical, and absolutely focused on destroying error – and it never goes away. He gets us. But he also gets how other people see us, and Olson has a message for us, his former colleagues: For other people, it’s not just about the data, guys.

Olson isn’t the first person to say that persuading non-scientists about the truth of things requires more persuasion than just evidence. This has not been a popular message, particularly among a lot of my fellow science bloggers.* These kinds of messages get characterized as weak-kneed capitulation, compromising the truth.

For that reason, Olson will probably face his strongest criticism for suggesting that scientists not be unlikeable. It sounds a lot like admonitions of other writers never to offend, which has generated a growling response that there are some people that we scientists want to offend: the people who deal out lies, errors, and untruths.

Olson has not cracked that hard problem: how to communicate with those nice people who are just like you and me, except for a few beliefs that are divorced from reality. You know the ones: the creationists, the climate change deniers, the anti-vaccine campaigners, the moon landing conspiracy theorists, the birthers, and so on. Olson’s tips and suggestions won’t matter when dealing with those people, but that’s not Olson’s book. It’s a book that somebody needs to write – badly – but Olson’s approach shouldn’t be dismissed because of that. He’s pointing out that when you launch a full out assault on your enemies, you risk inflicting a lot of casualties on people who might have been on side.

Part of what convinced me that Olson is on the right track were the uncomfortable moments reading this book when you recognize yourself and think, “Oh, damn, he’s right.”

For instance, Olson talks about how being an academic means being critical. We academics forget that even honest and correct criticism can be very deflating.

Have you ever walked out of a movie that you loved, and you’re replaying some of those favourite moments and lines in your head... and one of the people you’re with points out something that’s completely illogical? Do you happily respond to that honest and correct criticism, “Wow, I’m so glad you pointed that out!” If so, you’re a better person than me, because my response was an irritated, “That’s not the point.” **

And yet, we scientists are routinely praised for pointing out those annoying little untruths. On the very day I received my copy of Olson’s book, one of my blog posts was picked as an editor’s choice specifically because it was critical.

On that note, I don’t think it’s any accident that the words highlighted in the blurbs on the back are the ones that say how critical this book is. After all, this book is aimed at scientists and academics, so if you want their respect, you’ve got to show them that you’re criticizing! In fact, the tone here is very amiable and affable. The most critical sections of the book seem more exasperated than stinging.

On a similar note, Olson also talks about how scientists are extremely literal. Here again, you don’t have to look further than recent stuff in the blogosphere. The new film Creation is starting to get reviews, and here’s Eugenie Scott’s review on Panda’s Thumb.

As someone with a stake in how the public understands evolution and it’s most famous proponent, the bottom line for me was that the science be presented accurately. The second was that the story of Darwin’s life be presented accurately.

Her bottom line is not whether the movie has a good story, is emotionally powerful, well acted, or any of the other dozens of things that most people look for in a movie. Her bottom line is accuracy. Such a scientist. For many, looking for that first is missing the point of why they watch a movie.

Finally, Olson has something in common with Adam Savage. It’s not just that they do science-y stuff on film. MythBusters host Savage was quoted as saying recently:

I realized that my humiliation and good TV go hand in hand.

Olson is not afraid to make a point at his own expense. Don’t Be Such a Scientist starts with Olson on the receiving end of a truly terrifying bawling out by an acting teacher. Those four pages alone are nearly worth the price of admission, but it’s not the lowest or most embarrassing moment for Olson in the book. This is self-deprecation taken to a new high, and it’s an illustration of one of Olson’s key tactics for communication: don’t “rise above,” as he puts it. In other words, don’t be high and mighty. Audiences tend not to like such people.*** Though I’ve tried to avoid righteous indignation on this blog, there are occasions where I bet someone reading it thought, “Boy, is he full of himself.”

There is more about this book that I’d like to comment on and explore, but I’ll leave that for later. I’m teaching a class on biological writing this semester, and I hope I can bring some of the issues Olson raises into the class. Don’t Be Such a Scientist is a rich source of ideas, and I’ll be riffing off them for some time to come.

*** I do have to wonder what Olson makes of the success of House, a show with a character that seems to violate almost every single suggestion that Olson has. The character is unlikeable, always rising above...

13 September 2009

Data also has to be accessible and stored securely. 56% of the researchers surveyed stored their data on a hard drive. There is a problem with storing data in one place, as many learned the hard way from Hurricane Katrina, when an incredible amount of data was lost. ... Noble suggests storing data online, as hard drives cannot be accessed once you are out of the lab, and as another researcher pointed out “I think there should be online storage of research data … so when ever you have time you can analyze your data.”

There’s a danger, though, of thinking that data stored “online” is somehow different from data stored on your desktop computer’s hard drive. It isn’t. That data is still dependent, in some way, shape, or form, on being stored someplace on a physical object. Data on the web are not like Cartesian souls, immaterial information free to exist independently of the physical world.

If one of the platters spinning at Blogger or Google that contains the archive of this blog malfunctions or is destroyed, is there another copy to restore it from? I don’t know. I haven’t heard any horror stories yet, but the fact that I am ignorant of the answer is worrisome.
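The practical upshot is that “online” only helps if there is genuinely more than one physical copy, and the copies actually match. One common way to check that two copies of a file are identical is to compare cryptographic hashes. A minimal sketch, with placeholder file paths (the paths are hypothetical, not anything from an actual setup):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so big data files don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: a working copy and a second, separately stored copy.
# copies_match = sha256_of("data/results.csv") == sha256_of("/backup/results.csv")
```

If the two hexadecimal digests differ, one of the copies is damaged or stale, and it’s time to find out which.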

12 September 2009

I was watching an interview with Annie Lennox recently. She commented that a lot of people wanted to be famous, but they didn’t really know why.

There are at least two things involved in fame. One is blind, clueless luck. The other is more interesting.

Fame indicates that someone has done something meaningful to other people, something that resonates and connects. And I think this is why a lot of artists, communicators, politicians, and many others want to be famous: it means they’ve connected with a lot of people.

In science, you can get famous by making some nifty discovery totally by accident. Maybe it’s easier to get famous that way. Being famous for creating meaning, and influencing lives, seems much rarer for scientists.

Maybe that should change. Maybe more scientists should try to become famous not for discoveries, but for influencing people in a positive way.

11 September 2009

“We’re going to have some problems getting this under the microscope...”

There are just times you’d like to be a fly on the wall when certain science projects are being planned. I can’t quite imagine the conversations that led up to this paper. “Let’s look at the brain of the biggest fish in the world.” (I suppose the fish start small and have to grow up big. But still.)

The brains of sharks are interesting, in part because they are much larger than people would think. People tend to think of sharks as primitive (how many shark documentaries have used the phrase, “unchanged for millions of years”?), and primitive means small brains. But compared to body size, shark brains are often as big as birds’ and mammals’.

And when thinking about the evolution of brains, extremes are often very informative. The whale shark (Rhincodon typus) is extreme not only in its size (as noted, they’re bigger than any other fish in the world), but also in its diet: it’s a filter feeder, living off tiny plankton. This is not the first thing that comes to mind when people hear the words “giant shark.”

This paper is interesting not only because the species is unusual for neurobiology, but because it applies a technique that is used a lot for humans, but quite rarely for other beasties: magnetic resonance imaging (MRI). Now, this is not fMRI, which is constantly in the science headlines: this is purely anatomical data, not imaging the brain of a live shark.

Although the whale shark has a massive brain in absolute terms, it turns out that it isn’t very large relative to its body mass compared to other sharks. In fact, it’s small.

In a situation like this, there are two hypotheses that come to mind. The first is that the feature was inherited from a common ancestor, in which case, you’d predict that the whale shark’s relatives also have small brains. The second is that the feature may be an adaptation to the particular ecology of the species, and the prediction there would be that species with the most similar lifestyle would have small brains.

In this case, the whale shark has a small brain in common with other large filter-feeding sharks, like the basking shark (compared using previously published data). It’s easy to think that filter feeders can afford to have small brains, but the authors caution that social behaviour, in sharks and their allies, is another factor that is often strongly correlated with brain size.
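Comparisons of brain size “relative to body size” like these are conventionally done on log-log axes, with the residual from the fitted allometric line serving as relative brain size: species above the line have bigger brains than expected for their bodies. A minimal sketch of those mechanics, using invented placeholder numbers rather than anything from the paper:

```python
import numpy as np

# Hypothetical (body mass in kg, brain mass in g) pairs for several shark
# species; these values are placeholders, not data from the paper.
body = np.array([5.0, 50.0, 500.0, 9000.0])
brain = np.array([8.0, 35.0, 150.0, 400.0])

# Fit brain ~ body on log-log axes (allometric scaling line).
slope, intercept = np.polyfit(np.log10(body), np.log10(brain), 1)
predicted = slope * np.log10(body) + intercept

# Residuals: positive = bigger brain than expected for that body size,
# negative = smaller than expected (as reported for the whale shark).
residuals = np.log10(brain) - predicted
print(residuals)
```

The slope of such fits typically comes out below 1, reflecting that brains scale up more slowly than bodies do.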

When looking at individual regions of the brain, the whale sharks also had something in common with other oceanic, pelagic sharks, but not their relatives: a very large cerebellum. The cerebellum is usually described as being involved in motor coordination. Why would these open ocean sharks need such a large cerebellum? The authors suggest that perhaps the use of that open ocean is more complex than you might expect. The sharks are not just lazing around at the top of the water, but making significant vertical migrations and travelling very long distances.

These possibilities seem a bit foggy, however, based on the traditional notions of cerebellar function. Usually, the cerebellum is involved in coordinating fine movements, not long range navigation. There may be some other undiscovered ecological or behavioural force in play shaping the brains of these massive animals.

10 September 2009

I was listening to an interview with neuro researcher Dwayne Godwin on this week’s Science Talk podcast. After discussion of Godwin’s work at the U.S. Capitol to convince legislators about the need for science funding, interviewer Steve Mirsky asks what advice Godwin would give to a young researcher about how to build a career when funding is so bad.

What I would say to young people coming into science is, “Hold on.” Because I think that things are going to get better. I think there's a realization in congress, based on my meetings, a realization certainly with the incoming administration, that science is a worthy endeavor, and I feel that the decisions that are being made by the upper levels of our leadership now are based on evidence. And for scientists, that's a good thing. Because what it means is, the realization that science is our future, and is critical to our future, is going to be realized at those levels as well.

I hate to say it, but I think that is bad advice. What I find ironic is that this comes right after he talks about how:

The National Institutes of Health (NIH) funds about 7% of grant proposals, when it used to fund about 30%.

And how many politicians are opposed to increasing government spending?

What many scientists seem unwilling to face is the possibility that the party is over.* For all we know, a young scientist may never again see U.S. federal research agencies funding 30% of grant proposals. (I’d be stunned to see a return to a 20% success rate.)

Less than one in ten isn’t a competition. It’s a crap shoot. Actually, I take that back: the odds of rolling a seven in craps are better than 7%.
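The dice arithmetic is easy to check by brute force; a throwaway sketch in Python, assuming nothing beyond two fair six-sided dice:

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of two six-sided dice.
rolls = list(product(range(1, 7), repeat=2))
p_seven = sum(1 for a, b in rolls if a + b == 7) / len(rolls)

print(round(p_seven, 3))  # 6/36 ≈ 0.167, comfortably better than a 7% funding rate
```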

A young scientist should make a career plan that involves a scenario of how they will survive if they do not get federal funding. That means looking hard at what kind of job they want to hold. If you want a job where you are going to be evaluated based on getting federal funds (and that’s most major research universities), ask if you are willing to be fired in six years.

Even if you are one of the lottery winners who gets the federal grant, develop $5 projects if you possibly can. You may not have continuous funding throughout your career, so having projects you can do for very little money may keep you productive through the lean times.

The problem you have to be aware of is that your research career may span decades, but no American administration lasts more than eight years. As the saying goes, “A week is a long time in politics.” You will face changes in funding, so figure out how to create a research program that will survive fluctuations.

Update, 25 March 2014: For some reason, I got thinking about this post, and wondered how long ago it was. I tweeted that it was four and a half years on, and the situation has not improved. Godwin noticed the tweet, and replied:

09 September 2009

Marv Wolfman, one of the most successful comics writers around, wrote today about the medium that he loves (and that I do, too):

When you look at the billion dollars plus that Dark Knight grossed, or the hundreds of millions grossed by Iron Man – a character few people outside of comics knew anything about – we see that people love what we do, but that love has not always been reflected in the sales of the comics themselves. Back in the 90s, when I was one of the two founding editors of Disney Adventures magazine – a magazine that sold over a million copies a month – I started calling regular comics a 32 page pamphlet. I meant that to be as derogatory as it sounds.

Comics were trapped in a ghetto; beloved and ignored at the same time. The days of the pamphlet are over. The tail of the dinosaur just hasn’t informed the brain. We need to look ahead to other formats, to other kinds of stories to tell and to other ways to distribute what we do. There is a generation now who gets all their entertainment over the net and that is not going to change. We are not going to go back to a 100% paper society. That is ridiculous both for distribution and for the environment. Computer book readers are going to get more popular and when they move to color, there will be no reason at all to have to print a magazine when you can download one to a perfect flat screen with no glare that looks exactly like paper anyway.

I feel much the same about textbooks in particular and other aspects of higher education in general. Why are we still doing things the same as 20, 30, 40 years ago or more? “The tail of the dinosaur just hasn’t informed the brain.”

Having said that, you know what? I still love comics on paper. I read a lot on the net, but comics are one of the things that I still like the physicality of. But I agree with Marv. The ebook experience seems to be pulling close to paper, and that’s the future.

(Pictured: Vigilante, a recent title by Marv.)

Additional: An interview with cartoonist Berkeley Breathed, who had major success with the newspaper comic strip Bloom County, includes this comment:

Newspapers have about five years left. Young readers of the newspaper comics simply don’t exist anymore in numbers that count. Those eyeballs are elsewhere and will not come back. Online comics are terrific. But they will never have 1% of the readership any major comic had 20 years ago, by the nature of the technology. They’re different beasts now. No, after having 70 million daily readers in 1985, getting 3000 a day online isn’t terribly energizing at this stage.

The best cartoon shows – like Rocky and Bullwinkle, ReBoot, or Avatar: The Last Airbender – work on two levels. There’s one layer of meaning that kids pick up on, and another layer that their parents watching beside them pick up on. The same signal has different meanings to different audiences. This new paper by Moosman and colleagues investigates how one signal might do double duty in the animal kingdom.

Fireflies (which in this case are beetles rather than flies) light up to attract mates. But this is a conspicuous signal, and a conspicuous signal has its downsides: eavesdroppers can pick up on those signals, and even imitate them to lure in prey. Moosman and colleagues suspected this flashing light had yet a third effect: to act as a warning sign.

This particular firefly, Photinus pyralis, is thought to have chemical compounds that make it distasteful to some predators. Another firefly, Photuris, eats these smaller Photinus, and gains the same defensive chemical compounds from its prey.

In that case, is it possible that predators might come to recognize the flash as a warning signal, like the bright colours of poison dart frogs or some venomous snakes? This paper ran several tests of this hypothesis.

First, the authors confirmed that the three species of insect-eating bats they were examining and fireflies overlapped in both space and time: fireflies were signaling at times and places where bats were flying.

Second, they examined a lot of bat poo for traces of firefly remains, and found none. They found plenty of other insects, including others that were firefly sized, suggesting that bats avoided these particular insects.

Third, they gave captive bats food pellets containing portions of fireflies. Bats rejected food significantly more often if it contained traces of fireflies.

At this point, all this is pretty strong evidence that bats don’t like to eat these bugs. But is it because the bats recognize the flashing light signal? The authors tested this by capturing several wild bats and exposing them to artificial lures that flashed... or not. If the lights were a warning sign in the bat’s mind, you’d predict more attacks on the non-flashing lures than the flashing lures.
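If a bat were indifferent to the lights, its attacks should split 50/50 between flashing and non-flashing lures, which makes this a textbook binomial question. A sketch of an exact two-sided binomial test, with made-up attack counts purely for illustration (the paper’s actual numbers aren’t reproduced here):

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: probability of a split at least
    as extreme as k successes out of n trials under chance level p."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    # Sum the probabilities of every outcome no more likely than the observed one.
    return sum(q for q in probs if q <= observed + 1e-12)

# Hypothetical counts: 14 attacks on the dark lure, 4 on the flashing one.
p_value = binomial_two_sided_p(14, 18)
print(p_value < 0.05)  # True: a 14-of-18 split is unlikely under indifference
```

With small numbers of trials per bat, though, only fairly lopsided splits can reach significance, which may be part of why most of the species-by-lure combinations came out negative.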

The behavioural results don’t strongly support the idea. Of the three bat species, only one, Eptesicus fuscus (shown), preferentially attacked the non-flashing lures, and only for the larger, Photuris-sized lures. The authors do point out that E. fuscus is the species that overlaps with the fireflies the most, and thus may be the bat with the most to gain by recognizing a flash as a danger sign.

Another potentially problematic aspect of the experimental design was that the bats were presented not with flashes at a natural, intermittent rate, but with a super-firefly, continuous set of flashes. This might be okay, because greater stimuli often generate greater responses (think of the goose that will retrieve an enormous egg over a properly sized one), but it’s also possible that the unnatural stimulus is getting an unnatural response.

Given that only one of the six combinations of three bat species and two sizes of lures showed evidence of flashes being warning signs, this sentence in the conclusion paints the situation with far too wide a brush:

In conclusion, bioluminescence of adult fireflies should indeed be considered in the context of a warning signal against bats(.)

Although it is often important to show that something can happen, that this happens in only one out of six cases tested raises real questions about whether this does happen with regularity. The authors risk overreaching just a little past the data they have.

The internet is changing things. This is known. But how many industries have ignored this and died? And will mine be one of them? This article in Washington Monthly says I’m the academic equivalent of an auto worker at General Motors.

In recent years, Americans have grown accustomed to living amid the smoking wreckage of various once-proud industries—automakers bankrupt, brand-name Wall Street banks in ruins, newspapers dying by the dozen. It’s tempting in such circumstances to take comfort in the seeming permanency of our colleges and universities, in the notion that our world-beating higher education system will reliably produce research and knowledge workers for decades to come. But this is an illusion. Colleges are caught in the same kind of debt-fueled price spiral that just blew up the real estate market. They’re also in the information business in a time when technology is driving down the cost of selling information to record, destabilizing lows.

For me, it’s very easy to envision a scenario like this playing out over the next decade.

Costs of a university education will continue to rise above the average inflation rate.

Universities pretend nothing is wrong.

Some politician decides to start making political hay about the cost of undergraduate university education, and how limited access is harming the nation’s economic potential. (This will be easier in America, for various cultural reasons.)

New online educational organizations, unable to be accredited to offer undergraduate degrees, offer other forms of certification, as well as basic undergrad classes whose course credit traditional universities accept.

The accreditation system will get a grilling like banks and Wall Street recently did. Traditional universities will fight tooth and nail to save it, and by extension themselves, but lose the public relations war in the process.

Online organizations see growth in offering certificates, while traditional undergraduate degrees become less popular.

A lot of universities close up shop, which really does a number on the nation’s long term research prospects.

Note that all of this concerns undergraduate education, which is only one of a university’s roles. I do not see other institutions and the internet changing universities’ roles as research centers, graduate training centers, and so on, as strongly as they will be changing undergraduate education.

But undergraduate education is important. It’s an open question how universities could survive if undergrad enrollment catastrophically declined.

Want me to make it worse? I’ll make it worse. To stay competitive in research, graduate training, and some of the other things that universities do, they need good people. This article says universities aren’t going to get those people. The academic hazing process on the way to job security, from post-docs with poor wages and little security to the draconian process of getting tenure, is so unfriendly to starting and raising a family that a lot of potential academics are chucking in the towel.

Evolve or die. Unfortunately, from my vantage point in the thick of it all, I seriously wonder how much universities will let themselves change.

Additional: Here’s a related article on university costs that also feeds into my gloomy mood.

Earlier this year, the National Association of Independent Colleges and Universities announced... that the average increase in tuition and fees at private institutions this school year would be... just a little higher than inflation.

As many know, this is the 150th anniversary of the publication of On the Origin of Species. If I may be so bold, one of the things that distinguishes our thinking about evolution in the last 50 years from the first hundred might be the speed at which natural selection can operate. For a long time, we thought of evolution as taking a long time: millions of years would be needed to see the gradual accumulation of changes. In the past few decades, we have learned that the effects of selection can show up over the course of mere decades.

There are a few fast changing situations that should press the fast forward button on natural selection. Invasions are one. That’s why they’re invasions, not slow expansions. Boronow and Langkilde look at how the invasion of red fire ants is affecting fence lizards.

The ants (Solenopsis invicta) are nasty little buggers. A dozen can kill a fence lizard in less than a minute. You’d think that would apply some pretty strong selection on the lizards if any traits in the population provide even a little defense against the ants.

To test whether natural selection has started acting on the fence lizards (Sceloporus undulatus), they collected lizards from two locations: one invaded by the ants 70 years ago, and one not yet invaded. Then they allowed some angry ants to bite restrained lizards and measured the animals’ performance on several behavioural tasks, like biting, running, and so on. A control group of lizards was handled, but not bitten. They also looked at the effect of dilute venom on the lizards’ blood directly.

The bottom line?

There’s no effect.

The lizards from the region that had been putting up with ants for seven decades had the same behavioural responses to the ants as lizards from the region with no ants. No differences in the blood responses to venom, either, though the blood was affected by venom.

The authors suggest that the ant venom might have a “tipping point.” Less than a certain dose, and the lizard is fine. More than that dose, and you’ve got a scaly corpse. The range in between “fine” and “dead” could be minuscule, in which case, there may not be a lot of variation for natural selection to work on. Thus, if the lizards can keep the bites under the critical value, they suffer no fitness consequences.

Another issue is that the fence lizards do live with other fire ants, like Solenopsis xyloni. These have weaker venom, and they’re not as numerous as the red fire ants, but it might be that the fence lizards have already been pushed to have defenses against fire ants.

A third possibility is simply that there is no existing variation that gives some members of the population greater resistance than others. Seventy years, which is about 35 generations of lizards, is quite a while, but may not be long enough. Who knows when just the right mutation will give some lucky lizard – and its offspring – a selective advantage.

03 September 2009

Does this look like something deserving a message in all capitals and a triple exclamation point?

This little guy found his way into my lab today. It’s a recently hatched Mediterranean gecko (Hemidactylus turcicus), which are quite common here. Herbert (for that is what I named him – or her, makes no nevermind) was duly released back into the local habitat.

This paper on mice evolving a new coat colour has been making a big splash in science news. It’s being touted as a new textbook example of evolution. Actually, not just an example, but an “icon.” I’m not sure what to think about that, given that Icons of Evolution is a notorious creationist book. Plus, the last time someone was touting “it’ll be in all the textbooks” was during the promotion of the breathlessly over-hyped Darwinius / Ida fossil.

Reading the technical paper is very frustrating. I hate to say it, but I don’t think the story is as complete or as impressive – yet – as the press releases indicate.

The press release version of this story is that in the Sand Hills of Nebraska, a region with very light coloured soil, local deer mice have evolved a light coloured fur coat in a relatively short span of geological time. (In the picture, light deer mice are shown on dark soil, not their matching soil; likewise, the dark mice are shown on light soil.) In fact, the light coat was not present in the original population, and only evolved after the region emerged.

That is an interesting story. On what basis do Linnen and colleagues make these claims?

First, they measured and compared the coats of five mice from two locations. Based on these measurements, they show the Sand Hills mice are more reflective across the light spectrum, and this appears to be due to a pigment called pheomelanin.

In lab mice, a well-known gene called Agouti affects coat colour. By breeding laboratory deer mice with mutations related to this gene, they showed these differing coat colours behave as classic Mendelian traits, with light dominant over dark. Two dark mice will breed true; two light coloured mice will either breed true or have a mix of light and dark offspring (about three light offspring for every one dark).
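That 3:1 ratio can be sketched in a few lines of code. This is just an illustration of the Mendelian logic, with made-up allele labels (“L” for the dominant light allele, “l” for the recessive dark one), not anything taken from the paper:

```python
import random

random.seed(42)

# Hypothetical allele labels: 'L' = dominant light, 'l' = recessive dark.
def offspring_coat(parent1, parent2):
    """Each parent passes on one randomly chosen allele.
    Any 'L' in the genotype gives a light coat."""
    genotype = random.choice(parent1) + random.choice(parent2)
    return "light" if "L" in genotype else "dark"

# Cross two heterozygous (light-coated) mice many times.
litter = [offspring_coat("Ll", "Ll") for _ in range(10000)]
light = litter.count("light")
dark = litter.count("dark")
print(light / dark)  # should hover near 3.0
```

Two dark mice ("ll" × "ll") can only produce dark offspring, which is why they breed true.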

They also measured the expression of the Agouti gene during development, by tracking mRNA levels for the gene. As expected, they found Agouti more heavily expressed in light deer mice than dark.

They then captured deer mice in the wild, near the edge of the Sand Hills, to see if an Agouti mutation was associated with coat colour. As predicted, it was. In doing so, they found no relationship between where they collected the deer mice and their coat colour. From this, they argue, mice of both colours are interbreeding. I would like to see actual behavioural mate choice tests to support this.

The argument that selection has occurred revolves around variation in the genome sequence. It’s fairly technical, and I am not going to pretend that I fully understand the logic here. But even granting that it’s all correct, I would still argue that this is the weakest part of the paper. The molecular data are an indirect inference suggesting selection for colour. The field experiments I’d like to see demonstrating a colour advantage seem to have been done, but they’re in a fairly obscure journal from the 1940s. Anyone have copies of Contributions from the Laboratory of Vertebrate Biology of the University of Michigan handy?

Similarly, their argument for a recent evolution of the light coat colour is not based on anything like archaeological pelts. It’s all mathematical models that make assumptions about population size and relate the strength of selection to variability. To be clear, I’m not coming down on using models, just pointing out what kind of evidence is being used to make the case. To give an example, they note the Sand Hills are about 8,000-10,000 years old, but they don’t give an approximate age for the origin of the Agouti mutation in years. 8,000-10,000 years is ~0.4 to 0.5 4N generations of deer mice, and they estimate the Agouti mutation arose 0.05 to 0.18 4N generations ago. And I am not going to pretend I understand what those figures mean or how they calculated them.
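For what it’s worth, a “4N generation” is just a unit of time equal to four times the effective population size, counted in generations. Here is a back-of-the-envelope sketch; the generation time (one year) and effective population size (5,000) are my own illustrative assumptions, chosen only so that 8,000-10,000 years works out to ~0.4-0.5 units, not values from the paper:

```python
# Back-of-the-envelope conversion between calendar years and "4N generations".
# Both constants are illustrative assumptions, NOT figures from the paper.
GENERATION_TIME_YEARS = 1.0   # assumed deer mouse generation time
N_EFFECTIVE = 5000            # assumed effective population size

def years_to_4n_units(years):
    """Convert calendar years into units of 4N generations."""
    generations = years / GENERATION_TIME_YEARS
    return generations / (4 * N_EFFECTIVE)

def units_to_years(units):
    """Convert 4N-generation units back into calendar years."""
    return units * 4 * N_EFFECTIVE * GENERATION_TIME_YEARS

# Age of the Sand Hills, 8,000-10,000 years, in 4N units:
print(years_to_4n_units(8000), years_to_4n_units(10000))   # 0.4 0.5

# The estimated mutation age of 0.05-0.18 units, back in years:
print(units_to_years(0.05), units_to_years(0.18))          # 1000.0 3600.0
```

Under these (assumed) numbers, the mutation would be on the order of a thousand to a few thousand years old, comfortably younger than the dunes themselves, which is presumably the point of the comparison.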

I could be badly biased here, because I am an organismal biologist, and this is mostly a molecular biology paper. But if I’m having problems understanding the details, I wouldn’t unleash this as a case study in evolution on students who weren’t graduate level or very close to it. The level of complexity a student would have to master to get a firm grasp on the evidence is high enough that to a lot of undergraduate students, this would be a plausible “just so” story. This research would fail as a textbook example right now, fine work though it is. For this to become a textbook example of evolution in action, I do think that there would have to be better explanations of the existing field experiments, and probably a whole lot of new ones, too.

I was reading a new column in the Dallas Examiner attacking evolution that contains an oh-so-familiar refrain: “What does Darwinian evolution have to fear from free scientific inquiry?”

The answer, of course, is nothing. I’ve written about this before here, but I came up with a new analogy to express why researchers are bored and frustrated by creationists.

I imagine most people reading this have a driver’s license. To get it, you had to prove you were capable of driving. You occasionally have to get it renewed.

Imagine that to keep your driver’s license, you had to go through that whole lengthy driving exam – written and in the car – once a week. Every week.

Is that a valuable, productive use of your time? No. It’s a waste.

What would be your emotional response? Boredom, frustration and maybe just a pinch of royally pissed off.

And if people said to you, “If you can drive, why are you so afraid of the driving exam?”, might you have an intemperate moment where you consider making them eat their lower molars?

You have proved that you are road ready. While that’s no guarantee you’ll never get in an accident, things normally don’t change so much that what held true one week won’t hold true the next. Why go through all the hassle of proving it again... and again... and again?

While I’ve used “creationist” in the title, it could be replaced with “anti-vaxxer,” “climate change denier,” “birther,” or any of the other denialist lines of thought out there. They all use the same argument to impugn motives and create questions about them rather than engage the evidence.

His point was that American interest in science, science education, and so on, would be driven by its effect on people’s wallets. But it got me thinking about Dan Pink’s recent TED talk on how financial incentives work only for a very limited set of tasks: very rote, defined, mechanical tasks. And I thought, “Is it any accident that this most capitalist of countries did so well throughout the first half of the 20th century, when so much of the economy was based on rote, defined, mechanical tasks, like manufacturing?”

This in turn got me thinking about Richard Florida’s arguments that the “creative class” is becoming the significant driver of the economy. This fits with Pink’s thesis that we are increasingly being asked to solve “candle problems” at work. And science is about those hard, not easily defined, creative problems most of the time.

Put those three things together, and America’s position as the pre-eminent capitalist society in the world could, paradoxically, hurt its ability to retain its economic competitiveness.

Not only is there a large commercial harvest of Louisiana red swamp crayfish (Procambarus clarkii), and not only is it a terribly invasive pest species in many parts of the world, they are the “white lab rat” of crustacean research. But this picture shows why many people in the southern U.S. call them, “mud bugs.”

Smell is the oldest and most basic sense. Smell is the detection of external chemicals, which bacteria, without even having neurons (because they are one-celled), are able to do with ease. Taste is a mere spin-off of smell, as it is also about the detection of chemicals, just those at slightly higher concentrations a little closer to the body.

A new paper by Shah and colleagues blurs the already fuzzy line between smell, taste, and even nociception (the detection of tissue-damaging stimuli). They examined skin cells in the interior of the throats and lungs of humans. These cells are not neurons. There are many examples, however, of skin (epithelial) cells generating electrical activity (“skin pulses”) that resembles the action potentials of neurons.

The skin cells on inside of the airway have lots of little hairlike cilia (pictured), beating away to keep nasty bits out. Traditionally, it’s been thought these just beat, but some cilia have sensory jobs, and the authors decided to see if these might do both.

This project shows the advantage of high-throughput molecular techniques. They were able to look for a whack of genes for bitter taste receptors in one go, using microarrays: little chips with dots that “light up” if the right molecule is present. They found four bitter receptors being expressed in these ciliated skin cells.

It’s not fair to the authors to say it was all downhill from there, as there was clearly a lot of hard work, but I think the authors had a pretty easy time conceptually from here on out.

You’ve got receptors in the cells: where are they, specifically? With antibodies, you can show they’re found in the cilia.

You’ve got receptors: are they physiologically active? By putting a bitter chemical on these cells, you can see calcium, which is important in cell signaling, flooding into these skin cells (using fluorescent molecules that detect influxes of calcium ions).

You’ve got a physiological change inside the cell: does that change the beating of the cilia? Yes: you can increase the beating by about 25%. I’m guessing this was detected with regular old optical microscopes.

The last logical question is the one the paper doesn’t have data to answer directly. If you get one of those bitter chemicals in your lungs, how do the beating cilia get rid of it? This is one question that can’t be answered with the cultured cells used for all the experiments above.