30 October 2012

This is a crustacea. Lindernia crustacea, to give it its proper Latin name. Or, to use an English name, Malaysian false pimpernel. Not sure how the species name connects to the animals, but a lovely little bloom regardless.

Photo by bob in swamp on Flickr; used under a Creative Commons license.

We never use a word unless you can show us a poem in which it’s been used. Because if people don’t love it enough to include it in a poem, it probably means it means nothing to them.

This is a good start, but even poems may be a little too esoteric. Film and television are better tests because they are mass market media. Filmmakers cannot afford to confuse or alienate audience members. Even book titles may be too much of a niche market, although I might be persuaded that if you have a word in a best-seller, that’d pass the jargon test. More generally, a word that you can find in the title of anything that is pure, unabashed, popular culture will pass the jargon test.

A final word of warning, though. The test I am proposing here is not the only reason to rule out using a word or a phrase. For instance, a lot of words that pass my jargon test can be found on Carl Zimmer’s list of banned words for science writers. “Holy grail,” for instance; people know what it means, but it’s still overused.

26 October 2012

When I was at the International Congress for Neuroethology in August, I tweeted this piece of advice offered for neuroethologists:

Use the champion animal.

Speaker Bill Kristan attributed this to Walter Heiligenberg. The idea is simple: study the animal that is the best adapted, or makes greatest use of some feature or ability.

I was fascinated that I had never heard this quote before, even though Heiligenberg is well-remembered in the neuroethology community. (He died in 1994). I was further fascinated by how “sticky” this quote was at the meeting. “Champion animal” turned up in talk after talk, until by day 5, I was calling it “the Heiligenberg rule.”

I wondered if Heiligenberg had ever written that memorable advice down. After running into a few dead ends in Google and Google Scholar, I found this, which seems to be the origin of the phrase (Heiligenberg 1991):

We have learned that some animal species are champions in particular aspect of sensory or motor performance and that such superior capabilities are linked to highly specialized neuronal structures. Such structures incorporate and optimize particular neuronal designs that may be less conspicuous in organisms lacking these superior capabilities (Bullock 1984, 1986a,b). Moreover, the behavioral repertoire of such “champion” species readily offers paradigms for testing the performance of their special designs at the level of the intact animal.

I was a little disappointed that the verifiable version of punchy, memorable advice is stuck in longer, more mundane scientific prose. I suppose I should not be surprised, given that many other great ideas start off as rather lengthy bits in print, and get shorter (and more memorable!) in the retelling.

For instance, the phrase, “an inordinate fondness for beetles,” is often quoted (or misquoted) as being from J.B.S. Haldane. According to Stephen Jay Gould, who researched the phrase (reprinted in his book Dinosaur in a Haystack), Haldane almost certainly said this in conversation. But the versions of this idea that Haldane wrote down (“endowed with a passion...for beetles”) are nowhere near as good as “inordinate fondness.”

Then there’s the story of how a quote from a business professor in the 1960s became widely attributed to Charles Darwin. And in that case, too, the quote got shorter and more memorable with repeated retelling.

I am sort of hoping that Heiligenberg might have said the short version in conversation. The idea is worth encapsulating in a short, powerful sentence instead of academic prose.

24 October 2012

This press release is showing up in my social media feeds because it claims that researchers have got dinosaur DNA.

Schweitzer and her team also tested for the presence of DNA within the cellular structures, using an antibody that only binds to the "backbone" of DNA. The antibody reacted to small amounts of material within the "cells" of both the T. rex and the B. canadensis. To rule out the presence of microbes, they used an antibody that binds histone proteins, which bind tightly to the DNA of everything except microbes, and got another positive result. They then ran two other histochemical stains which fluoresce when they attach to DNA molecules. Those tests were also positive. These data strongly suggest that the DNA is original, but without sequence data, it is impossible to confirm that the DNA is dinosaurian.

Expression of doubt.

We went down this road before in the 1990s. Multiple papers claimed to have found DNA that was tens of millions of years old, with at least one claiming to have closed in on the 65 million year mark that was the end of the dinosaur era. Those claims could not be replicated, and are now widely viewed as false alarms arising from contamination.

Another paper, by Allentoft and colleagues just weeks ago, argued that DNA’s half-life is too short for us to be likely to get any usable DNA from material 65 million years old or older. See this Wired article with the great title, Jurassic Park impossible because of stupid laws of physics, for a summary.
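Allentoft and colleagues put a number on that half-life: roughly 521 years for the bonds in bone DNA, under the conditions of their samples. As a back-of-the-envelope sketch (the half-life figure is theirs; the arithmetic below is just illustration), here is what that implies at 65 million years:

```python
import math

HALF_LIFE_YEARS = 521        # Allentoft et al.'s estimate for bone; conditions vary
AGE_YEARS = 65_000_000       # roughly the end of the dinosaur era

# Number of half-lives elapsed since the animal died
n_half_lives = AGE_YEARS / HALF_LIFE_YEARS

# 0.5 ** n_half_lives underflows an ordinary float, so track the
# base-10 logarithm of the surviving fraction instead
log10_fraction = n_half_lives * math.log10(0.5)

print(f"{n_half_lives:,.0f} half-lives elapsed; surviving fraction ~ 10^{log10_fraction:,.0f}")
```

The surviving fraction comes out around ten to the minus tens of thousands: not merely small, but far past any conceivable detection limit.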

So the claim of DNA in this new research? I'll bet a dollar it's either degraded beyond any usefulness, or that it’s some sort of artifact. Antibodies are tricky things.

I’m much less skeptical about the claims that the team found ancient proteins. Proteins come in all sorts of varieties, and some are no doubt stronger and more stable than others.

Maybe I’m just skeptical because my heart’s been broken too many times by this line of research... “We’ve got something...! Oh, wait... no we don’t. We’ve got... ooops. Looking good... er...”

23 October 2012

Artistic “style” is often immediately recognizable, but – given the track record of successful forgeries of paintings – almost indefinable.

Could we turn to honeybees for help?

A new paper by Wu and colleagues asks if bees can learn to differentiate paintings based on artistic style. They’re not interested in detecting forged paintings; they’re interested in learning.

We know bees can learn to tell flowers apart, distinguishing flowers that give food rewards from those that don’t. But there’s some controversy over how sophisticated their learning abilities are. Some argue that the bees are looking for only simple primary cues, like “white” and “circle.”

Humans, on the other hand, routinely combine those primary cues into larger categories. For a long time, categorization was considered very high-level thinking, requiring intent. Philosophers of mind often touted categorizing as something that only humans could do, because animals and machines did not have intent (the argument went).

Wu and colleagues had a simple enough experiment: present the bees with two paintings. One was by Claude Monet (on the left in the pairs below), and one was by Pablo Picasso (on the right in the pairs below). They placed a food reward behind one, and let the bees try over and over again to find the food.

The bees were able to learn if a Monet meant munchies or if a Picasso portended a picnic. They never got perfect, though. They tended to top out at choosing the right master about 75% of the time. When given five pairs of pictures to learn, the bees continued to perform at about the same rate, getting to about 75% accuracy across all the pairs.
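That 75% figure is far from perfect, but it is convincingly better than guessing. A quick sketch (the 100-trial session here is illustrative, not a number from the paper) shows how unlikely such a score is by chance alone:

```python
from math import comb

# Probability of getting 75 or more correct out of 100 two-choice trials
# purely by guessing (binomial distribution with p = 0.5)
n, threshold = 100, 75
p_by_chance = sum(comb(n, k) for k in range(threshold, n + 1)) / 2**n

print(f"P(>= {threshold}/{n} correct by guessing) = {p_by_chance:.1e}")
```

The probability is well under one in a million, so even an imperfect 75% is strong evidence the bees learned something.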

The bees showed no preference for the impressionistic Monet or the cubist Picasso. They started off choosing each about half the time.

Did the bees actually recognize the artists’ distinct styles, though? Would they generalize? The experimenters tested this by training the bees on several different sets of paintings, again until the bees were choosing better than chance. Then, they gave the bees new pairs of paintings by these artists that they had never seen before. If the bees had developed categories for style, those rewarded for going to a Monet should keep going to a new Monet.

The bees... were not great at this task. Their performance usually sank to near chance. In a few cases, the bees chose the new “correct” painting by the artist they had been trained on at about the same rate as during training. That they can do this at all suggests that the bees are paying attention to more than just simple properties of the visual stimuli. Bees seem to be able to learn categories, but they’re not great at it.

Wu and colleagues suggest that with time, bees could learn to do this task better. The problem is that honeybees are short-lived, and would not make it through the semester of an art appreciation class.

22 October 2012

In the last few months, the documentary The Revisionaries has been playing at festivals and art house theatres, showing the events around the last set of revisions to the Texas K-12 education standards.

Well, the contentious Texas State Board of Education shown in that film could be getting a complete scrub. All fifteen seats are up for re-election, and seven incumbents are not running again, meaning almost half the faces on the board will be new.

This is the sort of complaint that I see from time to time. And when I saw it, I never got why people got upset about it. There was something about it that always seemed snooty to me.

It occurred to me that I never quite got the ire because in my scientific research, I am operating in a low information environment. I’m confident that I have surveyed almost all the relevant literature for some of the species I work with. There’s not a huge number of people in the field. Every new paper gives something new to work with. And the rate of directly relevant new papers is measured in a few per year.

A lot of people, however, are working in high information environments. Forget about knowing all there is about a single species; so much is known that people have problems knowing about one small aspect of one species. Relevant papers probably come out weekly.

Intellectually, I knew some fields are more active than others. But I don’t think I appreciated how much that affects how people view the “problems” of the scientific literature. Scientists in high information environments desperately want filters. They want glamour mags to tell them what’s important. Scientists in low information environments want more. They want to know why nobody is researching what to them are completely obvious questions.

19 October 2012

Your smartphone and your nervous system have the same problem, but running in opposite directions. The problem is conversion.

Your smartphone screen runs on electricity. But for your device to be of any use to you, that electricity has to be converted into light from your screen.

Your nervous system has the same problem, but in reverse: sensory neurons have to convert light back into an electrical signal.

Intuitively, you might expect this to be just one step. You could imagine that a neuron might just have an ion channel in the membrane that opens in response to light.

In fruit flies, there is indeed a molecule that changes when light hits it, and there is an ion channel that opens to cause an electrical current to flow into the neuron. But there is a whole series of links that chain the two together. And Hardie and Franze have discovered that one of those links is that cells in the eye have to twitch.

This movie shows that when you flash a light on one of the “facets” of the eye, it shortens for a moment. The brighter the light, the bigger the twitch.

Using mutants that lack some of the ion channels that are normally active when light is shone on the eye, they were able to introduce a new ion channel, gramicidin, into the sensory cells. Gramicidin does not respond to light: it responds to stretch. Once this was added to the sensory cell, Hardie and Franze were able to get the cells responding to light, suggesting that the shortening of the cells was needed for the cells to generate an electrical current.

They showed that the normal, light-sensitive channels responded to changes in the shape of the membrane. They took individual cells and changed the concentrations of chemicals so that water would tend to flow into the cell. Voila! Pressure changes without light, and the ion channels that normally respond to light still open.

All this suggests that the twitch within the eye is not just incidental: it is absolutely necessary for the fly to see light.

18 October 2012

Scientific papers are conservative and dry, and taxonomic papers are often even more notoriously dry and conservative than normal. So when a new species is described in the title as “extraordinary,” it’s worth popping open the PDF and having a peek at the figures.

Whoa.

This is a sponge.

Not exactly the loofa-shaped blob most people think of as sponges, is it?

This picture above is my favourite, but there is a lot of variation in the number and directions of the main branches, as shown in this set of images:

Not only does this deep sea sponge have the sort of otherworldly appearance that inspired James Cameron for Avatar, Lee and colleagues claim that it’s also a meat eater.

Carnivorous sponges were discovered back in the 1990s. They’re not active “chase and kill” predators, but “wait and digest” predators. Carnivorous sponges have places where invertebrates get snagged on spiny bits (spicules), and the sponge then sort of dissolves the captured corpses that sit on its body.

Disappointingly, the paper does not have pictures of invertebrates speared on the sponge. In close up, it features some nasty looking hooks and spines:

The evidence that the authors use to claim that this sponge is carnivorous is the combination of the spiky spicules plus the comb-like arrangement of the vanes, which would seem to provide maximum capturing opportunities for unwary crustaceans swept along in the current. Lee and colleagues note that you see the same sort of arrangement in other filter-feeding carnivores. Just how the sponge might digest its prey remains to be seen. Given how deep it lives – all the specimens found were at least 3,000 meters down – it’ll probably be a while before anyone is able to do some detailed research on this animal’s feeding.

Making the rounds on the science blogosphere this week is a Washington Post op-ed from a father who is wondering why his 15 year old son has to take chemistry. Cue reaction from professional scientists and allies that chemistry is important and that it’s hard to predict what a 15 year old will do as an adult (see external links).

There may be a bigger, more subtle issue here: the standardization of a curriculum. There are a couple of issues that arise from standard curricula.

First is a lack of flexibility. Why should a student take a chemistry class at the age of 15? Why can’t he take it at 16? 18? In university? It is not as though there is a sensitive period of one school year where if you do not learn a topic, your brain shuts off and becomes unable to absorb that information.

Sir Ken Robinson has noted repeatedly that we lump students together by their age, as though the most important feature about people was their “date of manufacture.”

Second is the difficulty in justifying decisions to those not in the immediate loop. For instance, when I teach general biology, I teach the Krebs cycle. This is a notoriously hard subject to teach. I know it comes into play in other classes, but one reason why I teach it is because “everyone else does.” There is a standard set of topics in introductory biology across North American universities, and the Krebs cycle is in it. But I could not tell you why it’s taught in first year biology instead of second or third year biology (say). Presumably there is a reason, but it is obscure to me. If I were to try to explain how that bit of information became part of the standard curriculum versus something else, I would probably give an unsatisfactory account.

Now imagine the frustration, not of a kid, but of a parent who is trying to understand the curriculum her child is following, asking why it’s taught, and not being able to get a clear answer from an instructor. The instructor doesn’t necessarily set the curriculum, and may not be able to give a clear answer. That has to be unsatisfactory to someone trying to understand.

Let’s be honest about the degree to which “We’ve always done it that way” guides the behaviour of institutions, including educational ones. Latin was taught in schools for a long time.

16 October 2012

I’ve featured Lybia tessellata before, but then, the picture didn’t show the anemones that this crab commonly carries on its claws. You can see why this is called the “pom pom crab.” Totally looks like it’s cheerleading.

Dr. Doyenne covers a very interesting paper about what happens to papers that are rejected. One of the major conclusions of the paper is that articles that are rejected once get more citations later. The effect is real, but tiny. I also picked this up at the Scholarly Kitchen.

This is a brilliant example of the difference between significance (in the statistical sense) and significance (in the sense of importance). They were only able to pull this out because they had a sample size of tens of thousands of papers. Hat tip to Joe Pickrell for pointing this out.
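A toy simulation makes the point (all of the numbers here are invented to mirror the situation, not taken from the paper): with a couple hundred thousand simulated papers, even a difference of a tenth of a citation shows up as a huge z score.

```python
import random
import statistics

random.seed(1)

n = 200_000
# Hypothetical citation counts: the "rejected first" group averages just
# 0.1 citations more -- a difference too small to matter in practice
never_rejected = [random.gauss(10.0, 5.0) for _ in range(n)]
rejected_first = [random.gauss(10.1, 5.0) for _ in range(n)]

# Two-sample z statistic; with n this large, even 0.1 stands out
diff = statistics.mean(rejected_first) - statistics.mean(never_rejected)
se = (statistics.pvariance(never_rejected) / n
      + statistics.pvariance(rejected_first) / n) ** 0.5

print(f"difference = {diff:.3f} citations, z = {diff / se:.1f}")
```

A z score that large is wildly “significant” in the statistical sense, yet a tenth of a citation changes nothing about anyone’s career.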

15 October 2012

The next round of #SciFund is coming! I do not have a project in this round myself, as I’ll be too busy finally going on my expedition that I raised funds for in rounds 1 and 2! But I’ll still be active in helping out and offering advice.

In my rounds, I was pleased by some of the positive feedback I got for the videos I made. Here’s how I did them.

The idea almost always starts with movie music: that sets the tone and gives me a style. I find it easier to riff off an existing style of film or TV, because I know what it’s supposed to “look like.” In my second #SciFund project, I did “monster movie,” so that dictated black and white, bad dubbing.

I use Windows Live Movie Maker (built into Windows 7) for the video. I used this because it’s included on my computer, and I was too cheap at the time to buy proper video editing software. Plus, I figure limitations inspire me to be more creative. Yeah, that’s it.

I shoot any new video with just a consumer point and shoot digital camera or even my phone. The sound from my cameras is never good enough, so I always plan to add sound later rather than doing it while I'm recording video.

Some still images (like my title logos) I make from scratch in a graphics editor, because the Movie Maker text editor is very low end and limited.

I get the graphic and video elements in Movie Maker roughly in the order and duration I want. This is my first draft.

I make the audio in Audacity. Movie Maker only lets you import one audio track at a time, and I always want dialogue playing over the music. I use Audacity to combine multiple audio tracks (speech, music clips, sound effects). I usually do this in “chunks,” maybe tens of seconds long, depending on how long the video is supposed to be.

I spend a lot of time redoing my voice overs for maximum punch, and splicing “takes” of different sentences together. I think for some sentences, I did 30-50 takes. For instance, listen to the narration of this draft:

You can hear the hesitation a few times in my speaking. Now compare it with the final version:

I add the sound to video in Movie Maker, and fiddle with the time of the video / graphics so that the two align for maximum punch. I spend a lot of time shaving off fractions of seconds here and there.

Then I add the next chunk of audio, and repeat until done. I save it, and then upload it into YouTube and Vimeo.

12 October 2012

Scientists are often admonished to make their research “relevant” when talking to non-scientists. Here’s a recent example.

It is important to establish early on why your work is relevant to your audience. If you don’t tell them why it matters to them, it is much harder to maintain their attention.

Why is it always on scientists to keep proving over and over and over again that what scientists do is relevant?

Have you ever looked at the number of stories that appear on news websites that have no relevance to people whatsoever?

Is it relevant to most people if Justin Bieber threw up on stage (if you were in the audience, I guess it would be) or if Lady Gaga gained weight or what Paul Ryan’s fitness routine is or how Olivia Wilde’s sex life is going?

The entire freakin’ sports section. Unless you or someone you know is a pro athlete, how is the outcome of some game in another city relevant to you?

C’mon. It mostly isn’t. It’s gossip. Humans are social animals, and we love gossip, competition, and arbitrarily lining up into teams.

“But I’m interested in those!” Yes, and that’s okay. You’re allowed to be interested in those things and many others. But they don’t have to perpetually justify their existence and coverage and attention the way science does. Large news organizations cover a viral video on YouTube and never have to explain the “relevance” of why they’re doing it.

People love all sorts of things that aren’t relevant to them apart from their own intrinsic interest. Why does science have this higher bar to jump?

11 October 2012

Yesterday, I told the story of how a rant against working women got published in the Canadian Journal of Physics. It has, I’m pleased to say, been a very popular post. The most common reaction to it was, “Wow. Just... wow.”

Sarah Kavassalis read my article, and knew something that I did not. Canadian Science Publishing, which publishes the Canadian Journal of Physics and other National Research Council Journals, has a Twitter account.

It may still not say “retraction,” but it’s a big improvement. Someone could have done nothing, but instead did something good. Thank you, Sarah Kavassalis. And bravo, unknown person manning the @cdnsciencepub Twitter account.

I am not able to attend this year’s Neuroscience meeting, because I have a mess of travel coming up in the next few months, not least of which is my long awaited #Scifund expedition!

I have been to New Orleans a few times at previous Neuroscience meetings, though. In fact, the first Neuroscience meeting I attended was in New Orleans. So trust me when I say this.

Never bet a shoeshine in New Orleans.

Here is what happens.

You will be approached by someone who will offer a friendly wager.

“I’ll bet I can tell you where you got those shoes.”

You agree to purchase a shoeshine if he can tell you where you got your shoes. Most people, being from out of town and tourists, think this is a safe bet, and that this person could not possibly guess where you bought your shoes. Neuroscience attendees are even worse, because they think they are so smart.

The punchline comes in a few different forms.

One is, “You got one on your right foot and one on your left foot. I said I’d tell you where you got ‘em, not where you bought ‘em.”

Another version is, “You got ‘em on Bourbon Street in New Orleans.”

And you will be paying for a shoeshine. And it won’t be a cheap shoeshine, either.

When Neuroscience rolls into New Orleans, you have a lot of highly educated people walking around the French Quarter who more or less have the word “SUCKER” tattooed on their foreheads. Don’t make it easy to be tagged as a sucker: take your conference badge off when leaving the conference center.

And the moral of the story is: No matter how smart you think you are, you’re not that smart.

10 October 2012

When I want to talk to students about how the scientific publishing process can go awry, I almost always end up telling the story of how the Canadian Journal of Physics published a screed against feminism.

I was in graduate school in the early 1990s when the story broke. The Canadian Journal of Physics had published a set of papers on chaos. Chaos theory was big at the time, following James Gleick’s best-seller Chaos (1987). Among the regular sort of papers one would expect to find in a journal of physics was one very unusual short paper.

First, it was labelled “Sociology.”

Second, it contained exactly two citations. And one reference was to a dictionary for a definition.

Third, the paper was outlandish. It was an almost unimaginable broadside attack on feminism, blaming it for nearly all of society’s ills. Almost as an aside, it insulted most of social science at the same time. To give you a little sampling, this paper claimed:

Women should not be in the work force, because they are “nurturers.”

Half the children of working mothers suffered “serious psychological damage.”

Surveys and controlled experiments in social science distort findings, and “wisdom” is a superior method of obtaining knowledge in the social sciences.

Abstinence until marriage would improve things, which should be vigorously promoted through television advertising campaigns.

It’s rare to find such blunt “women belong in the home” arguments in any print medium, let alone a scientific journal. It was astonishing.

The article was penned by Gordon Freeman (pictured), who was the guest editor of this one issue of the journal. It was pretty obvious what had happened, in broad strokes: he abused his editorial power to get his poisonous opinion piece into the pages of the journal.

The details of exactly how this happened were a little more complicated. Freeman organized a conference on chaos theory, and was assembling papers that had been presented at a conference for publication in the Canadian Journal of Physics. Apparently, the deal was that the journal would publish all the papers Freeman compiled, provided that they were presented at the conference, and that they were peer-reviewed.

Freeman lied about presenting the paper at the conference.

Somehow, Freeman managed to get back a positive review. The mind boggles, but he did.

The paper was published, and the excrement collided with the rotary cooling device.

Once this story blew up, as was inevitable, the regular editor of Canadian Journal of Physics, Ralph Nicholls, refused to reveal who the reviewers of Freeman’s article were, following the standard practice of anonymous peer-review that most journals follow. Nicholls was removed as editor for refusing to show the review to the editor in chief for all the National Research Council journals.

It also came out that Freeman’s institution, the University of Alberta, considered this to be human subjects research, and that Freeman had not obtained approval for it.

The journal apologized for the paper nine months after it was published (Dancik 1991a), but did it in a very uninspired way. They did say Freeman’s work “had no place in a scientific journal,” but they didn’t specifically disavow Freeman’s opinions. And there was no explanation of how Freeman’s paper managed to get into the journal in the first place. Although Science magazine later described the notice as a “retraction,” the word didn’t appear in the three-sentence editorial.

The Canadian Association of Physicists certainly considered it a retraction, and wrote to Freeman asking him to shut up about his paper. (Freeman, by all accounts, loved the attention this article was giving him.)

The Editor-in-Chief of the NRC Research Journals has published an editorial note in the Canadian Journal of Physics stating that your article has no place in a scientific journal and expressing regret that it was published. This clearly retracts any approval of the article by the CJP and, in effect, retroactively disavows its publication. ... I would ask you, therefore, in view of the harm engendered to the scientific reputation of the Canadian Journal of Physics and, by extension, to the whole physics community, to refrain from making any further references to this article in any public forum, and not to distribute reprints with the name of the journal attached. I believe that scientific ethics demand that this article effectively be struck from the public record(.)

Moreover, the retraction / apology was printed on an unnumbered page, which made it hard to cite, and to link to the original paper. This led the journal to print the apology a second time a few months later (pictured; Dancik 1991b).

The outrage over this paper was so great that many people were advocating reprinting the entire issue without Freeman’s article, and sending it to every library, so there would be no trace of it left. But a petition to reprint the issue didn’t get any response until the story appeared in Science magazine in 1992 (Crease, 1992).

The National Research Council, the publisher of the journal, at one point considered publishing a special issue about the controversy around Freeman’s paper as a mea culpa. It never appeared. The decision not to publish it came out at a conference on the ethics of publishing (reported in Crease 1993; Huston 1993). The uninspired response for why there was no supplement? “It seems a little late now.” This seems to support a claim by newspaper columnist Morris Wolffe on how the controversy was handled:

The (National Research Council), it was clear, was hoping the Freeman matter would go away.

Ultimately, some additional commentary did appear in the March/April 1993 issue of the Canadian Journal of Physics. Proceedings of the ethics symposium appeared in the journal Scholarly Publishing.

If you go to the Canadian Journal of Physics website today to look up the article, this is what you find:

It’s still available, with no indication that this wretched paper has been retracted, or warranted an apology, not once, but twice.

(Update, 11 October 2012: This part of the story, at least, gets a happier ending: there is now a note and link out to follow-ups that were published in the journal. Read more about how this happened here.)

This story was one of the first cases where I became aware of the concept of retraction. It definitely shaped my view of what retraction was meant to do. The notion of reprinting the issue set it in my mind that the goal of retraction was to, as near as possible, expunge an article from the scientific record. I’ve since realized that in practice, retraction is a much more complicated beast.

But if ever there was a paper that deserved an unambiguous retraction, this would be it.

09 October 2012

This is the character Sawamori, featured in the Japanese television series Daimajin Kanon. Sawamori is a crayfish Onbake: a spirit that has been given a new existence because a human being cared deeply about it.

Oddly, although Japan has a native crayfish, in the story Sawamori is described as beginning life as a crayfish from... Russia(!?). I have been unable to make a good guess as to which Russian crayfish species Sawamori might have been before becoming an Onbake.

08 October 2012

The normal application for an academic position consists of about four things: a CV, a research statement, a teaching statement, and references.

For me, the CV is the most important document. There’s the old adage that the best predictor of future performance is past performance. I spend much more time scrutinizing CVs than any other part of the application. Even then, I confess that it is usually a quick read-through.

I get the impression that applicants spend the most time polishing their research statement, and the least on their teaching statement. I understand why. The research statement is the thing you can fine-tune the most for each individual job application, but the fact that you control what it says means you might overestimate its importance. People tend not to spend time on their teaching statement because much of academic culture does not value it.

Here’s what I look for when I’m looking through CVs. Obviously, what any particular person looks for in a CV will vary.

Publications. I look at the publication titles to get a sense of the kind of research you do. The CV informs me as much about your research, if not more, than the research statement.

Yes, I do count the number of publications, though I don’t have any specific number in mind. I keep a rough comparison with others in the applicant pool. I don’t look for Glamour Mag pubs, though if I notice them, I think, “Good for you,” but that’s about all.

Teaching experience. Given that we’re at a big undergraduate university with a substantial teaching load, I am looking for evidence of some teaching experience. I don’t necessarily expect that you’ve taught a class from stem to stern, but at least that you’ve been a teaching assistant, and maybe given some guest lectures.

Service. I am looking for some evidence that you do more than just run experiments. I particularly like it when I see people who have done stuff for their scientific societies. That shows someone is a good academic citizen, and someone who gives back.

The overall questions I am trying to answer from your CV: Do I think you are a good fit for this department, both in terms of the kind of research you do and your willingness to teach? Do you do research that I can understand? Do you do more, scientifically, than publish papers?

05 October 2012

Earlier this year, I self-published my ebook Presentation Tips. I did this after getting enough positive comments about it on Twitter that I thought it had enough value to make it more widely available.

There were all kinds of reasons to self-publish it. It was probably too short a book to be attractive to a traditional publisher. I could keep costs low to readers, because I didn’t have to make a profit. I ultimately had full control over the project; I could update it with new material if I wanted, for example.

I did this in part to be provocative, to see how people in my own institution would react.

As expected, I got pushback, with comments that it was self-published and not peer reviewed. I bet that if I had sold it to a publisher who released it as a traditional book on paper, it would have been published without any peer review beyond editorial review, and nobody would even have raised the question of whether it was a peer reviewed book or not. Such is the power of tradition.

I think I’m learning what we need to do to make the “do it yourself” option for scholarly publishing more viable.

For me, peer review mainly implies review by peers, unsolicited by me. For me, putting up a document publicly (say, as a pre-print), and then tracking unsolicited comments by other scientists, would be peer review.

Other people have a longer list of requirements for peer review to “count.” It needs to be handled by a third party (an editor), who selects reviewers unknown to the author. And those reviewers remain anonymous to the author. Of course, this is the standard, conservative professional publishing model.

For some kinds of works, we could hit all the criteria for peer review using the online science community. I envision something like this. I complete my manuscript. I jump on to Twitter and say, “Hey, can someone handle reviews for this?” Someone agrees to act as a one-time clearing house for reviews; more a facilitator than an editor. The facilitator contacts a few other people in the online science community by private email and says, “Could you review this manuscript?” The facilitator gathers up the reviews and ships them back to the author. The writer then decides what changes to make before publishing.

This creates a trail to show that the work was peer reviewed, but does not take the power to publish away from the author.

In a way, this is what Bora Zivkovic did for blogging with The Open Laboratory blogging anthologies. Gather works, get a bunch of volunteers to review. And it worked! It took blogging from something that was, when the Open Lab anthology started, slightly disreputable and dodgy for many academics (and still is, to some degree) and made it into something where people are proud to have made the cut.

I don’t think this would work for original technical papers yet. I think the process of finding skilled reviewers with appropriate expertise would be too difficult for people to do on a sort of ad hoc basis. But this might work for short works meant for a general audience.

Additional, 15 February 2013: Nature reports on a company called Rubriq that seems to be planning something very similar to what I outline above.

04 October 2012

Citation is one of the defining characteristics of academic writing. But academic presentations have different rules than articles. To be more exact, there aren’t any rules for presentations.

Some ideas work better than others, though. Don’t treat a talk like a paper. It isn’t.

A surprisingly common strategy is to go through the talk (sometimes with no references apparent on any slide), and say at the end, “And here are my references...”

...show a slide full of small text, but flick through it in less than a second...

...because you have another slide. And that stays up only a second...

...so you can get to this one, end the presentation, and ask for questions. (Those are slides from an actual talk, incidentally.)

This is a horrible way to give references! Yet, I see this frequently. This is the wax fruit of academic citations: no matter what it looks like, you still can’t eat it. Academics are obsessed with citations because they allow other people to fact check. Giving references like this does not do the job.

You cannot put up slides this dense with text, flash them on the screen for less than a second, and expect anyone to get any usable information out of them. You could put in a list of Harlequin romances in the references and nobody would know.

There is no context for the references. In a paper, you can flip back and forth, so you can see that Simon et al. (2004) is given as supporting a particular claim about the infectious pathway of an organism, rather than providing information about the relevant statistical techniques. With slides, you can’t expect anyone to remember that the reference to Simon et al. (2004) was on slide #3, put up twelve minutes ago, and that it shouldn’t be confused with the Simon et al. (2005) on slide #8, which made a totally different point.

If you want to give references in a talk, it needs to be:

Complete enough that someone can track it down, and

Given at the point of need.

In practice, this means you can’t just put “Simon et al. (2004)” and expect people to know what paper you’re talking about. Heck, I can’t even remember the year for some of my own papers. My suggestion is something concise, so a human being can quickly scribble a note. Something like:

Simon et al. 2004. Mol Biol Evol 21: 1409.

Now, put it on the slide where you’re making the claim, not at the end.

There are advantages for you, the presenter, as well as for the audience. For a long time, I resisted putting references on my slides. I wanted to maximize the signal, and my strategy was to just say what the important sources were. Over time, however, I have often found myself going back to old slides without references and thinking, “Argh, where did I find that? There was something else good on that site I want to use, and now I can’t find it!” Putting references on your slides will make it easier for you in the long run.

02 October 2012

Ixa cylindrus, which has no common English name I could find easily. According to this site, they are called “Kabuto-kobushi” in Japanese.

My suspicion is that these wide extensions of the carapace help to prevent the crab from being eaten by making it wider than a predator’s mouth. The one shortcoming of that hypothesis: if that were so, why wouldn’t they be sharp?

01 October 2012

There were two featured talks on Monday, which was an interesting study in science communication styles.

First, we had Seth Shostak, who talked about the search for life on other planets (something I’ve written about a bit).

A few things particularly interested me. First, Shostak was surprisingly specific and optimistic about when he expected us to find life on other worlds. He bet everyone in the audience a cup of coffee that we would have confirmed the discovery of extra-terrestrial life in two dozen years. He spoke Monday, so mark your calendars for 23 September, 2036. I’ll take that action.

The other thing that struck me was that Shostak was pinning a lot of hope on Moore’s Law: the generalization that computing power doubles every couple of years. He claimed that we would have computing power equal to a human brain by 2020, which meant that humans would have built our own replacements (!), and that we should be ready for any intelligence we contact to be machine intelligence.
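To get a feel for why this generalization carries so much optimistic weight, here is a quick back-of-the-envelope sketch of Moore’s Law-style growth. This is my own illustrative arithmetic, not anything from either talk, and it assumes a clean two-year doubling period, which is a simplification.

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the growth multiple after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# The 24 years from Shostak's 2012 talk to his 2036 deadline would mean
# twelve doublings of computing power:
print(moores_law_factor(24))  # 2**12 = 4096-fold increase
```

Twelve doublings in two dozen years is a roughly four-thousand-fold increase, which is the kind of number that makes speculative predictions feel inevitable to their speakers.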

If you have a chance to hear Shostak speak, do not miss it. He is fantastic: on target, informed, and extremely funny. He is one of those people who has taken a sort of standard academic slide talk and ramped it near to the top of its form.

In the evening, we had Michio Kaku. And I had to admit, I was stunned to see this sort of line-up for a physicist. What you can’t see is how far this stretched: almost literally around the block. There wasn’t enough room, and so a few hundred people had to watch his presentation in an overflow theatre in another building.

I was less than impressed by Kaku’s presentation. He wasn’t speaking as a scientist; he was there to act as an oracle, telling us about all the wonderful things that awaited us “in the future.” With his shock of white hair, Kaku certainly looks the part. Like Shostak, Kaku put a lot of emphasis on Moore’s Law. I got the impression Kaku talks to a lot of upper-class business people with a lot of privilege, because the talk he gave would be very reassuring to such people.

Kaku was polished and funny and provocative, but it felt like he had given the talk so many times that it was canned. The responses to some of the questions in particular seemed not to be made directly to what was asked, but felt more like, “This is related to that section of another speech I give, so I’ll pull that up and use it to answer the question.”

And for someone who had given as many talks as he obviously had, I was a bit stunned by the slides he used. He had stock PowerPoint templates, often with pretty crummy low-resolution graphics. Once an academic, I suppose...

He ended his talk by telling a story about Einstein and his chauffeur... without letting us know it’s an untrue story.

Without further ado, here’s a Michio Kaku drinking game.

When Kaku says, “We physicists...”, take a drink.

When Kaku says, “In the future...”, take a drink.

Trust me, those two phrases alone will be sure to leave you feeling no pain.