Archive for the 'Science Communication' category

In real science, i.e., science that includes variability around a central tendency, we deal with uncertainty.

We believe, however, that there IS a central tendency, an approximate truth, a phenomenon or effect. But we understand that any single viewpoint, datum or even whole study may only reflect some part of a larger distribution. That part may or may not always give an accurate viewpoint on the central tendency.

So we have professional standards in place that attempt to honestly reflect this variable reality.

Most simply, we present the central tendency of effects (e.g., mean, median or mode) and some indication of variability around that central tendency (standard error, interquartile range, etc.).
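For the simplest case, here is a minimal sketch of what that reporting convention amounts to (my own illustration, using the mean and standard error of the mean; the function name is mine, not any standard):

```python
import statistics

def summarize(values):
    """Return (central tendency, variability) as (mean, standard error of the mean)."""
    mean = statistics.mean(values)
    # SEM = sample standard deviation / sqrt(n)
    sem = statistics.stdev(values) / len(values) ** 0.5
    return mean, sem

# e.g., three observations: mean 5.0, stdev 1.0, SEM = 1/sqrt(3)
m, se = summarize([4.0, 5.0, 6.0])
```

The point is not the arithmetic, which is trivial, but the norm: the single number is always accompanied by an honest statement of the spread around it.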

Even when we present a single observation (such as a pretty picture of a kidney or brain slice all highlighted up with immunohistochemical tags) we assert that the image is representative. This statement means that this individual image has been judged to be close to the central tendency of the images that were used to generate the distributional estimates that contribute to the numerical central tendency and variability graphs / tables presented.
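If you wanted to operationalize "representative" rather than eyeball it, one defensible reading (a sketch of my own, assuming the images can be reduced to a scalar measurement; the function name is hypothetical) is simply the observation closest to the group mean:

```python
def most_representative(values):
    """Return the single observation nearest the group mean --
    one defensible reading of 'representative' for an N=1 display."""
    mean = sum(values) / len(values)
    return min(values, key=lambda v: abs(v - mean))

# mean of [2.0, 4.0, 9.0] is 5.0, so 4.0 is the representative datum
pick = most_representative([2.0, 4.0, 9.0])
```

Note what this rules out: selecting the outlier that best flatters your hypothesis is, by construction, the opposite of this procedure.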

Now look, I understand that it is a bit of a joke. There are abundant cracks and redefinitions that point out that the "most representative image" really means "the image that best makes our desired point".

There is a critically important point here. Our profession does not validate the least representative image as an acceptable standard. Our professional standards say that the image really should be representative if we ever present N=1 observations as data.

The alleged profession of journalism does not concern itself with truth and representativeness at all.

Their professional ethical standards, to the extent they exist, focus on whether the N=1 actually occurred AT ALL. In addition, they focus on whether that datum was collected fairly by their rules, i.e., was the quote on the record. Accuracy, again for the alleged profession, concerns only episodic truth. Did this interviewee literally string these words together in this order at some point during the interview? If so, then the quote is accurate, and can be used in a published work to support the notion that this is what that interviewee saw, experienced or believes.

It is entirely irrelevant to the profession of journalism if that accident of strung-together words communicates the best possible representation of the truth of what that person saw, experienced or believes. Truth, in this sense, is not the primary professional ethical concern of journalism.

If the journalist pulls a quote out of an hour of conversation that best fits their pre-existing agenda with respect to the story they are planning to tell, it literally does not matter if every other sentence spoken by that person tells a different tale. It's totally okay because that interviewee literally said those words in that order on the record (and it is on tape!).

If a scientist processes twenty brains in the experiment, grabs the one outlier that tells the story they want to tell, trashes the 19 that say the opposite and calls it a representative image (even if by inference rather than directly)....this is fraud and data fakery. Not okay. Clearly outside the professional bounds.

That, my friends, is the difference.

And this is why you should only agree to talk to journalists* that will send you a nearly final draft of their piece to ensure that you have been represented accurately.

If every single one of us scientists insisted on this, it would go a long way to snapping the alleged profession into line. And greatly improve the accurate communication of scientific findings and understandings to nonspecialist** audiences.

The life of the academic scientist includes responding to criticism of their ideas, experimental techniques and results, interpretations and theoretical orientations*.

This comes up pointedly and formally in the submission of manuscripts for potential publication and in the submission of grant applications for potential funding.

There is an original submission, a return of detailed critical comments and an opportunity to respond to those critiques with revisions to the manuscript / grant application and/or argumentative rebuttal.

As I have said repeatedly in this forum, one of my most formative scientific mentors told me that you should take each and every comment seriously. Consider what is being said, why it is being said and try to respond accordingly. This mentor told me that I would usually find that by considering even the most idiotic seeming comments seriously, the manuscript (or grant application) is improved.

I have found this to be a universal truth of my professional work.

My understanding of what I was told by my mentor, versus what I have filled in additionally in my similar comments to my own trainees is now very fuzzy. I cannot remember exactly how extensively this mentor stamped down what is now my current understanding. For example, it is helpful to me to consider that Reviewer #3 represents about 33% of peers instead of thinking of this person as the rare outlier. I think that one may be my own formulation. Regardless of the relative contributions of my mentor versus my lived experience, it is all REALLY valuable advice that I have internalized.

The paper and grant review process is not there, by any means, to prove to you beyond a shadow of a doubt** that the reviewer's position is correct and you are wrong. Reviewers who provide citations for a criticism are by no means the majority in my experience...although you will see this occasionally. Even then, you could always engage cited statements from an antagonistic default setting. This is unwise.

The upshot of this critique-not-proof system means that as a professional, you have to be able to argue against yourself in proxy for the reviewer. This is why I say you need to consider each comment thoughtfully and try to imagine where it is coming from and what the person is really saying to you. Assume that they are acting in good faith instead of reflexively jumping behind paranoid suspicions that they are just out to get you for nefarious purposes.

This helps you to critically evaluate your own product.

Ultimately, you are the one that knows your product best, so you are the one in position to most thoroughly locate the flaws. In a lot of ways, nobody else can do that for you.

Professionalism demands that you do so.

__
*Not an exhaustive list.

**colloquially, they are leading you to water, not forcing you to drink.

Scholarship has absolutely gone to shit these days. It's ridiculous. So I find myself pointing out citation failures on an increasingly frequent basis. If it includes my own pubs too, so what? What difference does that make?

Deciding who should and should not be on the author line of a science publication is not as simple as it seems. As we know, citations matter, publications matter and there are all sorts of implications for authorship of a science publication.

A question about this arose on the Twitts:

Question for the Twitterati regarding paper authorship. I'm in private industry. I'm writing a paper. It is completely my own work. 1/n

Of course, we start from a very basic concept. Authorship of a scientific paper is deserved when someone has made a significant contribution to that paper. I can't distill it down any more than that. Nice and clean.

The trouble comes in when we consider the words significant and contribution.

This is where people disagree.

I also rely on another basic concept which is that someone should try to match, to a large extent, the practices within the subfields from which similar work is published. This can mean the journal itself, the scientific sub-domain or the institution type from which the paper is being submitted.

On to the specifics of this case.

First, do note that I understand that not everyone is in the position to wield ultimate authority when it comes to these matters. @forensictoxguy appears to be able to decide so we'll take it from that perspective. I will mention, however, that even if you are not the deciderer for your papers, you can certainly have an opinion and advocate this opinion with the person in charge of the decision making.

My first observation is that there is nothing wrong with single-author papers. They might be rare these days but they do occur. So don't be afraid to offer up a single-author paper now and again.

With that said, we now move on to the fact that the author line is a communication. Whether you are trying to convey a message about yourself as a scientist or not, your CV tells a story about you. And everything on there has potential implications for some audiences.

I don't want to just throw on authors simply "because". That's obviously not ethical. 6/n.

ethical, schmethical. Again, you don't throw someone on a paper "just because", you do it because they made a contribution. A contribution that you, as the primary/communicating/deciderering author, get to determine and evaluate. It is not impossible that these other people referred to in the Tweet made, or will make, a contribution. It could be via setting the environment (physical resources, administrative requirements, funding, etc), training the author or it could be through direct assistance with crafting the manuscript after all the work has been done. All of these are valid as domains for significant contribution.

This scenario of a private industry research lab appears, from the tweets, to be one where the colleagues and higher-ups are not intimately involved in pushing paper submissions. It appears to be a case where the author in question is deciding whether or not to even bother publishing papers. Therefore, the politics of ignoring more-senior folks (if they exist) is unfamiliar. I can't do much but read through the Tweet lines and assume this person is not risking annoying someone who is their boss. Obviously if someone in a boss-like status would be miffed, it is in your interest to find some way that they can make a contribution that is significant in your own understanding or to have a bloody discussion about it at the very least.

Leaving off the local politics, we can turn to the implications for your CV and the story of you as a scientist that it is going to tell.

If all you ever have are first-author publications it will look, to the modern eye, like you are non-collaborative, meaning not a team player. This is probably an impression you would like to avoid, yes, even within an industrial setting. But this is easy to minimize. I can't set any hard and fast rules but if you have some solo-author and some multiple-author pubs sprinkled throughout your timeline, I can't see this being a big deal. Particularly if your employment particulars do not demand a lot of pubs and, see above, the other people around you are not publishing. Eventually it would become clear that you are the one pushing publication so it isn't weird to see solo-author works.

Consider, however, that you are possibly losing the opportunity to burnish your credentials. The current academic science arc has an expectation for first-author papers as a trainee (grad student, postdoc) which is then supposed to transition to last-author pubs as a scientific supervisory person (aka professor or PI). Industry, I surmise, can have a similar path whereby you start out as some sort of lowly Scientist and then transition to a Manager where you are supervising a team.

In both of these scenarios, academic and industry, looking like you are a team-organizing, synthetic force is good. Adding more authors can be helpful in creating this impression. Looking like you are the driving intellectual participant on a sub-area of science is good. This concern looks like it votes for thinning your authorship lines- after all, someone else in your group might start to leech credit away from you if they appear consistently or in a position (read: last author, co-contributing author) that implies they are more of the unifying intellectual driver.

This is where you need to actually think about your situation.

I tell trainees who are worried about being hosed out of that one deserved first-author position or being forced to accept a co-contributing second author this: You are in for the long haul. If you are publishing multiple papers in this area of science (and you should be) then for the most part you will have first-author papers, and in the end analysis it will be clear that you are the consistent and most important participant. It will be a simple matter for your CV to communicate that you are the ONE. So it may not be worth sweating the small stuff on each contentious authorship issue.

In a related vein, it costs you little to be generous, particularly with middle authors that have next to no impact on your credit for this work.

If you only plan to publish one paper, obviously this changes the calculation.

Do you ever plan to make a push for management? Whether of the academic PI or industry variety, I think it is useful to lay down a record of being the leader of the team. That can mean being communicating author or being last author. At some point, even in industry, an ambitious scientist may wish to start being last author even under the above-mentioned scenario.

This is what brand new PIs have to do. Find someone, anyone, to be the first author on pubs so that they can be the last author. This is absolutely necessary for the CV as a communication device. Undergrad volunteer? Rotation student? Summer intern? No problem, they can be the first author, right? Their level of contribution is not really the issue. I can see an industry scientist who wants to start making a push for management doing something similar to this.

As always, I return to the concept that you have to do your own research within your own situation to figure out what the expectations are. Look at what most people like yourself, in your situation, tend to do. That's your starting point. Then think about how your CV is going to look to people over the medium and long term. And make your authorship decisions accordingly.

Since many of you are AAAS members, as am I, I think you might be interested in an open letter blogged by Michael Balter, who identifies himself as "a Contributing Correspondent for Science and Adjunct Professor of Journalism at New York University".

I have been writing continuously for Science for the past 24 years. I have been on the masthead of the journal for the past 21 years, serving in a variety of capacities ranging from staff writer to Contributing Correspondent (my current title.) I also spent 10 years as Science’s de facto Paris bureau chief. Thus it is particularly painful and sad for me to tell you that I will be taking a three-month leave of absence in protest of recent events at Science and within its publishing organization, the American Association for the Advancement of Science (AAAS).

Sounds serious.

What's up?

Yet in the case of the four women dismissed last month, no such explanation was made, nor even a formal announcement that they were gone. Instead, on September 25, Covey wrote a short email to Science staff telling us who the new contacts were for magazine makeup and magazine layout. No mention whatsoever was made of our terminated colleagues. As one fellow colleague expressed it to me: “Brr.”

Four staff dismissals that he blames on a newcomer to the organization.

I think that this collegial atmosphere continued to dominate until earlier this year, when the changes that we are currently living through began in earnest. Rob Covey came on board at AAAS in September 2013, and at first many of us thought that he was serving mostly in an advisory capacity; after all, he had a reputation for helping media outlets achieve their design and digital goals, a role he had played at National Geographic, Discovery Communications, and elsewhere. I count myself among those who were happy about many of the changes he brought about, including the redesign of the magazine, the ramping up of our multimedia presence, etc. But somewhere along the way Covey began to take on more power and more authority for personnel decisions, an evolution that has generated increasing consternation among the staff in all of Science’s departments.

New broom sweeps?

(In addition, according to all the information I have been able to gather about it, Covey was responsible for one of the most embarrassing recent episodes at Science, the July 11, 2014 cover of the special AIDS issue. This cover, for which Science has been widely excoriated, featured the bare legs [and no faces] of transgender sex workers in Jakarta, which many saw as a crass objectification and exploitation of these vulnerable individuals. Marcia McNutt was forced to publicly apologize for this cover, although she partly defended it as the result of “discussion by a large group.” In fact, my understanding, based on sources I consider reliable, is that a number of members of Science’s staff urged Covey not to use the cover, to no avail.)

This will be interesting to watch, particularly if we hear more about the July 11 cover and any possible role that the individuals Balter references in this statement, "The recent dismissal of four women in our art and production departments", had in the opposition or approval argument.

One duffymeg at Dynamic Ecology blog has written a post in which it is wondered:

How do you decide which manuscripts to work on first? Has that changed over time? How much data do you have sitting around waiting to be published? Do you think that amount is likely to decrease at any point? How big a problem do you think the file drawer effect is?

This was set within the background of having conducted too many studies and not finding enough time to write them all up. I certainly concur that by the time one has been rolling as a laboratory for many years, the unpublished data does have a tendency to stack up, despite our best intentions. This is not ideal but it is reality. I get it. My prior comments about not letting data go unpublished were addressing that situation where someone (usually a trainee) wanted to write up and submit the work but someone else (usually the PI) was blocking it.

To the extent that I can analyze my de facto priority, I guess the first priority is my interest of the moment. If I have a few thoughts or new references to integrate with a project that is in my head...sure I might open up the file and work on it for a few hours. (Sometimes I have been pleasantly surprised to find a manuscript is a lot closer to submitting than I had remembered.) This is far from ideal and can hardly be described as a priority. It is my reality though. And I cling to it because dangit...shouldn't this be the primary motivation?

Second, I prioritize things by the grant cycle. This is a constant. If there is a chance of submitting a manuscript now, and it will have some influence on the grant game, this is a motivator for me. It may be because I am trying to get it accepted before the next grant deadline. Maybe before the 30 day lead time before grant review when updating news of an accepted manuscript is permitted. Perhaps because I am anticipating the Progress Report section for a competing continuation. Perhaps I just need to lay down published evidence that we can do Technique Y.

Third, I prioritize the trainees. For various reasons I take a firm interest in making sure that trainees in the laboratory get on publications as an author. Middle author is fine but I want to chart a clear course to the minimum of this. The next step is prioritizing first author papers...this is most important for the postdocs, of course, and not strictly necessary for the rotation students. It's a continuum. In times past I may have had more affection for the notion of trainees coming in and working on their "own project" from more or less scratch until they got to the point of a substantial first-author effort. That's fine and all but I've come to the conclusion I need to do better than this. Luckily, this dovetails with the point raised by duffymeg, i.e., that we tend to have data stacking up that we haven't written up yet. If I have something like this, I'll encourage trainees to pick it up and massage it into a paper.

Finally, I will cop to being motivated by short term rewards. The closer a manuscript gets to the submittable stage, the more I am engaged. As I've mentioned before, this tendency is a potential explanation for a particular trainee complaint. A comment from Arne illustrates the point.

on one side I more and more hear fellow Postdocs complaining of having difficulties writing papers (and tellingly the number of writing skill courses etc offered to Postdocs is steadily increasing at any University I look at) and on the other hand, I hear PIs complaining about the slowliness or incapabability of their students or Postdocs in writing papers. But then, often PIs don’t let their students and Postdocs write papers because they think they should be in the lab making data (data that might not get published as your post and the comments show) and because they are so slow in writing.

It drives me mad when trainees are supposed to be working on a manuscript and nothing occurs for weeks and weeks. Sure, I do this too. (And perhaps my trainees are bitching about how I'm never furthering manuscripts I said I'd take a look at.) But from my perspective grad students and postdocs are on a much shorter time clock and they are the ones who most need to move their CV along. Each manuscript (especially first author) should loom large for them. So yes, perceptions of lack of progress on writing (whether due to incompetence*, laziness or whatever) are a complaint of PIs. And as I've said before, this interacts with the PI's motivation to work on your draft. I don't mind if it looks like a lot of work needs to be done but I HATE it when nothing seems to change following our interactions and my editorial advice. I expect the trainees to progress in their writing. I expect them to learn both from my advice and from the evidence of their own experiences with peer review. I expect the manuscript to gradually edge towards greater completion.

One of the insights that I gained from my own first few papers is that I was really hesitant to give the lab head anything short of what I considered to be a very complete manuscript. I did so and I think it went over well on that front. But it definitely slowed my process down. Now that I have no concerns about my ability to string together a coherent manuscript in the end, I am a firm advocate of throwing half-baked Introduction and Discussion sections around in the group. I beg my trainees to do this and to work incrementally forward from notes, drafts, half-baked sentences and paragraphs. I have only limited success getting them to do it, I suspect because of the same problem that I had. I didn't want to look stupid and this kept me from bouncing drafts off my PI as a trainee.

Now that I think the goal is just to get the damn data in press, I am less concerned about the blah-de-blah in the Intro and Discussion sections.

But as I often remind myself, when it is their first few papers, the trainees want their words in press. The way they wrote them.

It isn't as though I insist that each and every published paper everywhere and anywhere is going to be of substantial value. Sure, there may be a few studies, now and then, that really don't ever contribute to furthering understanding. For anyone, ever. The odds favor this and do not favor absolutes. Nevertheless, it is quite obvious that the "clutter", "signal to noise", "complete story" and "LPU=bad" dingdongs feel that it is a substantial amount of the literature that we are talking about. Right? Because if you are bothering to mention something under 1% of what you happen across in this context then you are a very special princess-flower indeed.

Second, I wonder about the day to day experiences of people that bring them to this. What are they doing and how are they reacting? When I am engaging with the literature on a given topic of interest, I do a lot of filtering even with the assistance of PubMed. I think, possibly I am wrong here, that this is an essential ESSENTIAL part of my job as a scientist. You read the studies and you see how it fits together in your own understanding of the natural world (or unnatural one if that's your gig). Some studies will be tour-de-force bravura evidence for major parts of your thinking. Some will provide one figure's worth of help. Some will merely sow confusion...but proper confusion to help you avoid assuming some thing is more likely to be so than it is. In finding these, you are probably discarding many papers on reading the title, on reading the Abstract, on the first quick scan of the figures.

So what? That's the job. That's the thing you are supposed to be doing. It is not the fault of those stupid authors who dared to publish something of interest to themselves that your precious time had to be wasted determining it was of no interest to you. Nor is it any sign of a problem of the overall enterprise.

Yet, publishing LPU's clearly hasn't harmed some prominent people. You wouldn't be able to get a job today if you had a CV full of LPU's and shingled papers, and you most likely wouldn't get promoted either. But perhaps there is some point at which the sheer number of papers starts to impress people. I don't completely understand this phenomenon.

We had some incidental findings that we didn't think worthy of a separate publication. A few years later, another group replicated and published our (unpublished) "incidental" results. Their paper has been cited 12 times in the year and a half since publication in a field-specific journal with an impact factor of 6. It is incredibly difficult to predict in advance what other scientists will find useful. Since data is so expensive in time and money to generate, I would much, much rather there be too many publications than too few (especially given modern search engines and electronic databases).