31 December 2012

The first word that came to my mind when thinking of my publishing this year was "waiting."

I didn't publish as much science this year as last. But that's as expected. 2011 may have been my annus mirabilis, with six papers. This year, just two: one journal article and one book chapter. I haven't even seen the book chapter myself in the flesh yet - but it's been shipped, so I'm willing to believe it exists. I'll have a post about the long genesis of the book chapter once I have the printed copy in my hands.

Almost every attempt to publish this year was an exercise in patience. The typesetting of the published manuscript happened at a crawl. Another manuscript that I mentioned in the same post is still on an editor's desk, where it seems to have been for almost a year and a half since I first submitted it. One person asked me why I haven't contacted the editor about it, and my rationale is that any time they spend reading an email from me is time they're not spending getting papers published. Another manuscript went through a much longer review process than I expected. I am hoping that those papers will see the light of day sometime this year.

And it wasn't just the manuscripts; new projects stalled, too. I had a couple of very promising little projects that just need a few last pieces of information before I can write them up and send them out the door. But these are also taking longer to complete than I was hoping.

But as I thought about it a bit more, I remembered that this was also a year where I dabbled with a couple of self-published experiments. I published a paper here on my blog – not a first, but still unusual enough that the story behind it was my most read post of the year. I also self-published my Presentation Tips ebook in a Kindle version, which has earned me a cool $13 in profit.

Getting a proper research article published through the usual routes feels like a greater accomplishment, just because you have had to go through more barriers. The ease of self-publishing stands in stark contrast, and makes that route look very appealing. It's only been a few months since those self-published projects came out, though. This year may help determine whether I actually made a ripple with those experiments.

28 December 2012

There are certain periods that, in retrospect, seem almost impossible in their unbridled creativity. I want to make the case that Stan Lee’s co-creation of the Marvel universe was one of the greatest outpourings of artistic creativity ever. Name me another writer in any medium in the last 50 years who has given us as many characters that are now etched into the public consciousness as Stan Lee.

You have Spider-Man, the Fantastic Four, the X-Men, and almost all of the characters making up the Avengers – Iron Man, the Hulk, Thor.

In comics, one of the things that defines a great comic is not just the hero, but the rogues’ gallery: the villains that the hero faces. Stan Lee not only helped create some of the best rogues’ galleries in all of comics, but he did it in an amazingly short period of time.

Look at the first year of The Amazing Spider-Man. In twelve issues, we get Doctor Octopus, the Lizard, the Vulture, Sandman, and Electro. Go just three more issues, and we get Mysterio, the Green Goblin, and Kraven the Hunter. Other heroes had to wait years, if not decades, to get a line-up of villains that good. Some famous superheroes still don’t have a rogues’ gallery that good. (Quick! Name all the Wonder Woman villains you can.)

In Fantastic Four, Stan created possibly the best villain in all of comics: Doctor Doom.

But I’m not singling out Stan Lee as the greatest writer in comics just because of the characters he created. At their best, Stan’s stories had an amazing economy. It’s no accident that retellings of classic Marvel stories often run many more issues than the original story did. Stan routinely packed more plot and character into a single issue than most comics writers today manage in three or four.

One of the best examples is Fantastic Four #25, the first major battle between the Thing and the Hulk.

This issue is near the high water mark for superpowered battles. In twenty-odd pages, the fight takes twist after turn. The Thing gains the advantage, then loses it, then gains it back. Ben Grimm knows he can’t beat the Hulk in a straight-up contest of strength, but keeps himself in the fight through ingenuity and quick thinking.

It’s just amazing. Fights between superheroes and supervillains are so often used, so cliché, that it’s easy to forget how exciting a good one can be.

Part of the success was no doubt that Stan had great artistic partners. They didn’t call Jack Kirby (penciller of Fantastic Four, above) “The King” for nothing. Much has been written about just who contributed what (not to mention a bunch of court cases), and I don’t want to undervalue the contributions of anyone in creating those books. But those books wouldn’t be as good without Stan’s dialogue between characters, and, more importantly, the inner monologues.

Thought balloons are something of a lost part of comics. The few times I’ve talked to comic writers about them, they seemed to think they’re an unsophisticated storytelling device. I’ve always been intrigued by them, because comics is almost alone among visual media in having a way to show the difference between what a character says and what a character thinks.

Again, nobody did that better than Stan. He gave his characters inner lives that were typically a stark contrast to their current situation. In fights, characters would get surprised, figure out plans, while retaining a cool exterior – the heroes snapping out the quips that Stan was famous for.

That’s my case. Lots of people have written comics. Many of them have written great comics. But in my mind, Stan Lee, working with the artists he did, is without peer.

19 December 2012

We normally think that each of our senses is more or less distinct. Sure, there’s that condition called synesthesia, where people experience numbers with colours and that sort of thing, but that’s pretty rare, right?

Maybe not. A new paper suggests our different senses may be influencing each other more often than we think. The team looked at how smell, something we normally think of as one of our weaker, less important senses, holds sway over our vision, the sense that most people think of as our strongest and most important.

Zhou and colleagues used a phenomenon called binocular rivalry to test this. Binocular rivalry is not when two binocular stores are competing on price.

Normally, the two halves of our brain get complementary information coming from each eye, which the brain stitches together into one almost seamless visual experience. Using a little bit of visual trickery, it’s possible to feed all of the left brain one image, and all of the right brain a completely different, incompatible image. Faced with two competing sets of information, people see only one image at a time, alternating with the other every few seconds in an unpredictable way.

You can get a sense of it from this picture if you let your eyes cross so that the images are superimposed (a little like a 3-D stereogram).

In the overlapping image in the center, you will tend to see either green circles or red bars, not a half-and-half mix. The two images will alternate back and forth in an unpredictable way.

In their main experiment, Zhou and colleagues showed people rival pictures of a rose and a banana at the same time. While doing this, they gave their volunteers the smell of a rose, and people became more likely to see the image of the rose.

When they gave them the smell of the banana, they were more likely to see the banana.

They also got this effect with a mix of images and words. In a second experiment (which must have been less fun for the volunteers), the rival images were a male torso and a set of words. When presented with the smell of, um, good ol’ B.O. (eeewww), the subjects were more likely to see the person instead of the words... but only if the smell of sweat was given in the right nostril.

Why does the nostril matter? Like the rest of our body, each nostril is wired to one half of the brain, so the input from each nostril has a different effect on one side of the brain than the other.

The “nostril” effect can be broken fairly easily, though. If you show a picture of a banana, with a rival image being the word “rose”, the scent of the rose still makes you more likely to see the word “rose,” but it no longer matters which nostril you smell the rose-like scent through.

The one thing I can’t quite understand is why this paper is in The Journal of Neuroscience. There is no neuroscience in this paper. No brightly lit brain blobs, no EEGs, no neurons, nothing. This is a straight sensory perception paper.

Today, faced with the looming threat of sequestration (the so-called “fiscal cliff”), which could cut basic research budgets by something like 8%, this is the scene in Washington:

I am surprised that American scientists do not seem to want to demonstrate publicly that they are worried about how much funding cuts could hurt them. Instead, I just see the same low-level drone of emails in my inbox from scientific societies asking scientists to contact Congress and support research funding. But I’ve seen those emails for years, and there doesn’t seem to be any greater urgency this time. Maybe the leadership of those societies is convinced that sequestration won’t happen, just like hitting the debt ceiling didn’t happen a while back.

What would it take to get scientists waving placards on the Mall in Washington, DC? I’ve speculated this might just be due to geography (the UK is smaller), but there is a high enough concentration of science on the eastern seaboard to make a decent showing.

And I know it’s not because Americans are more restrained than the Canadians or the British.

Science is Vital picture from here, Death of Evidence picture from here.

10 December 2012

As I expected, the big announcement last week was for a medical school. What I didn’t completely expect was that my institution, The University of Texas-Pan American, is sort of going away.

There is going to be a new University of Texas institution in South Texas in August, 2014. It still has to be approved by the Texas legislature, but I can’t imagine it will not pass.

I had suspicions that something was up when I saw news stories like this one:

Higher education institutions in the Rio Grande Valley could be reorganized as part of a proposal laying out plans for a comprehensive medical school in the region, state Sen. Juan “Chuy” Hinojosa said Tuesday.

When I saw that, I thought, “We’re getting Brownsville.” The University of Texas at Brownsville (UTB) was always a small campus, with only a couple of thousand students. It had, for a long time, been associated with a community college. They split in 2011, and from what I had heard, UTB had been suffering since then. They had no physical space of their own; that was all given to the college. And, as I noted, they were small. Bringing UTB into the fold made sense to me.

I was aware that the Regional Academic Health Centers (RAHC) had been struggling, too. Despite one branch being located on the UTPA campus, it was controlled by the University of Texas Health Science Center at San Antonio, and that distance caused problems. The RAHC faculty were expected to teach, but there were effectively no teaching spaces and no students there.

That aspect of the proposal makes a lot of sense to me.

A lot of time on Thursday and Friday went into meetings surrounding this announcement. I went to a town hall meeting on Friday that was filled to capacity; people were excited, though it was hard to tell about what, exactly.

The thing that surprised me was what got talked about first. I expected it would be the medical school, given how long it had been discussed by so many people. But it wasn’t. The first thing that came out was PUF.

It is strange to hear all these administrators talking about this, because they don’t say, “pee you eff,” they say “puff.” One of them even joked, “I thought ‘PUF’ was a magic dragon.”

I don’t think I had ever heard about “puff” in over a decade at this university, but it’s an acronym for “Permanent University Fund.” This is a fund set up by the state that is worth around $11 billion. For various reasons, the universities in the region were never eligible to tap into this fund, but a new university would be.

This change to create a new university, with access to PUF, requires two-thirds approval from the Texas legislature. But – and this is a critical “but” – it doesn’t require the legislature to spend a dime. Besides, announcements of this scale usually don’t go forward unless those involved are extremely confident of getting the votes.

And back room deals are clearly the secret to things like this. At the town hall meeting on Friday, it was obvious that this motion to make a new university with a medical school happened because a lot of the key players, like Chancellor Cigarroa, were from the Rio Grande Valley. Old boys’ network in action.

Fusing these institutions is going to be a pain. But my institution, UTPA, is probably going to suffer the least upheaval.

I was interviewed by university affairs about this, and they seemed to want to talk about the medical school. I understand why: there is a long standing need in an underserved community.

But the Rio Grande Valley is not about to become one big hospital.

The commitment to creating an emerging research university is, over the long haul, going to have a bigger impact on the area than just the medical school.

03 December 2012

Really? Here’s a small selection of individuals who could be candidates for the gig of “the Newton of neuroscience.”

Luigi Galvani, who discovered that there is electricity in living organisms. To me, if you are to draw a single line between how the ancients understood brains and behaviour and how we understand them today, I would draw it at Galvani and bioelectricity. That changed everything.

Santiago Ramón y Cajal, who established that the nervous system is composed of individual cells.

Otto Loewi, who proved that neurons release chemical neurotransmitters after having the idea for the critical experiment in a dream – twice.

Alan Hodgkin and Andrew Huxley, who established how neurons send electrical signals along their length. (Surprisingly, this is the only picture I can find with the pair in the same frame, despite their historic association. Hodgkin is on the left.)

All of these are seminal, fundamental advances in our understanding of how nervous systems work. These are findings that apply almost universally across the animal kingdom, and even in cases where they are not true – nonspiking neurons, electrical synapses – we probably couldn’t understand those without the knowledge gained by these people. Like Newton’s findings, these are unifying principles of the field. And also like Newton’s work, they are not the final word.

I think one reason none of these individuals is recognized as a “Newton of neuroscience” is because their discoveries do not address the primary preoccupation of modern neuroscience.

The goal of neuroscience today is not to understand the nervous system; it is to understand the human mind.

You can see this in Marcus’s article. Marcus equates “neuroscience” with human brain imaging of cognitive function. The first example is speech perception. He describes how bright blobs of brain “became a fixture in media accounts of the human mind”. Every example is about human thinking.

When someone says, “Neuroscience has not found its Newton,” it may be because they are looking for someone who will start to explain human consciousness. And by that standard, no, we’re nowhere close to understanding that. But that’s like declaring, “Physics has not found its Pythagoras, let alone its Euclid,” because Newton didn’t develop a grand unified theory or a theory of everything.

These may look simple and elegant to someone outside the field, but research is fractal: every level is messy and disorganized when viewed from inside the thick of it. Chemistry looks simple to people in biology. Biology looks simple to people in psychology. Individual psychology looks simple to people in social psychology. And so on.

Of the sort of scientists I nominate here, Sci writes:

But even these are not really neuroscience Newtons. They are not because as we gain more and more knowledge of the brain, we are able to see: there is no unifying theory of the brain.

Newton proposed no unifying theory of motion, either. He gave us three independent laws of motion and one for gravity. And needless to say, physicists still have their hands full with new phenomena (dark matter, dark energy) that were not even predicted by their theories. And gravity still stubbornly refuses to be integrated with the other fundamental forces of the standard model.

How long can an insect live? Cicadas might be up near the top. Some cicadas are famous for remaining in the larval stage for thirteen or seventeen years. That makes them pretty long-lived insects, even if they spend most of that time as larvae underground, out of sight.

A lot of cicadas are synced up in these thirteen and seventeen year cycles, so that in peak years, huge numbers of these insects emerge. Then they are everywhere, singing to attract mates so they can get the next brood of baby cicadas on their long road to maturity.

Now, these two times – thirteen and seventeen years – are notable because they are both prime numbers. As I understood it, the leading explanation is that lots of things in nature tend to cycle. But most of those cycles are fairly short. One possible advantage of something that cycles with a prime number is that it’s unlikely that any other short cyclic events will consistently coincide with the emergence of the new adult cicadas.

Imagine cicadas emerged on a twelve year cycle. Any predator that was on a roughly two, three, four, or six year cycle could sync up with the food feast of cicada emergence – provided there was a little give in their cycles so they could line up in the first place. But that sort of synchronization between predators and prey is much harder to do with a prime number. Thus, cicadas never face large numbers of predators just waiting for them to come out from their long larval stage.
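The arithmetic behind this is just least common multiples. A toy calculation (my own illustration, not from the paper) shows how often a predator on a short cycle would coincide with a cicada emergence for a twelve year cycle versus the prime cycles:

```python
from math import lcm

def emergences_between_overlaps(cicada_cycle, predator_cycle):
    """How many cicada emergences pass between years when a predator's
    cycle lines up with an emergence (ratio of the least common
    multiple to the cicada's cycle)."""
    return lcm(cicada_cycle, predator_cycle) // cicada_cycle

for cicada in (12, 13, 17):
    for predator in (2, 3, 4, 6):
        n = emergences_between_overlaps(cicada, predator)
        print(f"{cicada}-year cicada vs {predator}-year predator: "
              f"coincide every {n} emergence(s)")
```

For a twelve year cicada, every one of those predator cycles hits every single emergence (lcm(12, 2), lcm(12, 3), lcm(12, 4), and lcm(12, 6) are all 12). For thirteen or seventeen year cicadas, a four year predator only lines up once every four or seventeen emergences respectively – generations apart.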

A new paper suggests that the cicadas might even reap a bigger advantage than that.

Koenig and Liebhold ran a new analysis estimating how many birds there are in the years when cicadas emerge in large numbers, and how many there are in the years when the cicadas don’t. They have population estimates for fifteen predatory bird species over 45 years. Their data set is as old as I am.

Surprisingly, there are routinely fewer birds in the years when cicadas emerge. The authors propose that this indicates that the long cycle has somehow allowed the cicadas to emerge during years that are safer than usual.

The authors do briefly mention alternative hypotheses. Cicadas are famously loud insects. Maybe the cicadas are so abundant and noisy that they actually drive birds away from their normal habitats. The authors say this is unlikely, because the bird counts go down even in places where the cicadas are not calling.

Koenig and Liebhold suggest that it's more or less coincidence that the cicada broods last for a prime number of years. They suggest that the emergence of these huge numbers of insects has some sort of knock-on effect, such that when they occur, the bird populations are affected, and go through booms and busts of their own - and the birds' low point comes around again in about thirteen or seventeen years.

The details of how this might happen aren't clear.

I suppose that the good news about being a cicada researcher is that you have time to plan new studies. The bad news is that it probably doesn't take thirteen years to plan those projects... or seventeen.