The neuroscience of emotion: From reaction to regulation


I’m trying to become a critically reflective researcher, a captain who sits at the helm and steers the research ship (rather than functioning more like the ship’s rudder, careening left and right at the whims of the ship itself). In so doing, I recently identified a number of practical goals for my research program. Here they are, slightly updated to include a #4:

1. I would like to publish more of our existing data and, moving forward, develop an efficient method of publishing data in a way that eliminates back-log.

2. I would like to use methods and measures that have even greater ecological validity and that will allow us to answer questions that are important and fundable.

3. I would like to apply for funding to cover fMRI scanning costs so that I can answer questions about neural circuits and train students to do so too. I would also like to collaborate with others to use methods such as tDCS and EEG/ERP.

4. And, based on subsequent reflection, one addition: I would like to pursue answers to questions that align with my clinical, cognitive, and social interests.

This post articulates a plan that partially addresses #1, namely:

What is the most efficient method of publishing data? (No more back-log!)

In a recent phone conference, Jeff Birk, Sarah Cavanagh, Maryna Raskin, Phil Opitz, and I were talking about a manuscript stemming from our work on emotion regulation among people who do versus don’t have a history of recurrent depression. At one point, the brilliant Sarah posed a meta-question/comment. She essentially said something like this (I’m paraphrasing):

I noticed that we’re on version 11 of this paper and when I went back to look up some study-related detail, it was dated 2012. Without suggesting that anyone has done anything wrong in any way, I’m wondering if there might be a better process we could use to move this thing forward in a more efficient way?

I have to admit that my initial reaction to Sarah’s eminently reasonable and ultimately incredibly helpful question/comment was to feel ashamed that I had steered this particular manuscript in a way that kept us in the harbor (we hadn’t really even gotten very far from the dock). Frankly, the fact that manuscripts can move slowly in my lab in part stems directly from my unspoken desire to make them perfect. Now, I realize in the abstract that perfection is impossible. But it’s hard to set aside the very concrete thought that motivates striving for perfection, namely that if what we submit isn’t stellar, then people in the field (who I admire and respect!) might infer that the science coming out of my lab sucks (or, worse, they’ll make an internal attribution and infer that I suck at being a scientist). Horrors.

But, setting the mental drama aside, I realized quickly that this wasn’t about me. It was about the writing process and the tantalizing idea that the process could change in ways that would make research lives better for everyone. And, indeed, we need to do something. Files on my computer suggest that Phil sent version 1 of the paper to co-authors in September 2013, not too long after we finished collecting all of the data. More than two years and ten drafts later, submission is still just a dot on a distant horizon. Granted, it’s a monster data set, one that stems from a laboratory session for which data were collected from July 2010 to August 2012, a year of every-four-months longitudinal follow-up surveys, and an MRI session, the last of which was run in May 2013. Still, though. It’s November 2015 – that timeline is too darn long.

So, we talked about it a bit and all agreed that our democratic approach of having all authors read and comment on every draft simply wasn’t working. In addition to this process being subject to the vagaries of everyone’s unique scheduling challenges, this approach didn’t ultimately respect the idea that position on the byline should be meaningful. In principle, the first author should contribute the most to the manuscript, followed by the second author, third, and so on. Senior author, me in this case, should contribute heavily too. As pointed out by Maryna, this cascading level of contribution means that the authors in the middle of the byline, who are essentially getting the least amount of credit, should also do the least amount of actual work on the paper.

Right. Now what?

After that conversation, Sarah went off and did some sleuthing and found several interesting and helpful posts like this one and this one that led her to construct a work flow that the group subsequently honed by email, shown next.

The Work Flow

The work flow that follows is meant to reflect the steps to be taken after data have been collected and it’s time to figure out what interesting discoveries we’ve made. There are seven steps as follows:

MESSY DRAFT: The first author (FA) sends a messy draft to the team; the second author (SA) reads it and adds edits/comments (the senior author has the option to do so as well but could wait until the clean-up draft to read/edit/comment); the messy draft is then discussed by the whole team in a meeting.

CLEAN-UP DRAFT: The FA sends a clean-up draft to the SA and senior author, who both read/edit/comment. The group meets if necessary.

And perhaps this plan or something close to it is totally obvious to anyone who happens to be reading this, but it was a welcome epiphany for me. I had been going along wanting everyone to have a voice, everyone to have the opportunity to contribute their ideas and make the manuscript as good (perfect?) as it could possibly be before clicking “Submit.” But the fact is that there are better ways to ensure that everyone contributes and that the manuscript benefits from everyone’s unique perspective.

Moreover, let’s face it. No matter how close to perfection the work seems to be when finally we click “Submit,” nobody else will see it as such. As Phil said during our phone call, read any set of reviews and that’s immediately clear. No matter how hard we work to think and write clearly about the work we’re doing, to write the best manuscript we can, other smart people will come along and see it differently, point out flaws, and suggest improvements. Given that reality, we plan to use the work flow described above moving forward. Doing so promises a steadier flow of manuscripts being published, ideally in no more than three drafts. No more perfection and no more back-log. <insert contented sigh here>
