Helping Women Achieve in Academic Science


I feel like a lot of advice for women in male-dominated fields leans toward the “act like a man” type. I have definitely given that type of advice here. For instance, in previous posts (here, here), I advocated modulating your voice when speaking publicly about science. I have also given advice about what types of clothing to wear (here, here; for the men: here), and I typically say to err on the side of modesty. There is also advice about acting confident and negotiating for yourself. These actions are stereotypically seen as male traits, and women are stereotyped as “less aggressive” and “less self-confident.” Frankly, I know a lot of really successful women who do these things really well, and the actions themselves are not particularly gendered – except in how society views them.

Sometimes, fitting in is important, and “acting like a man” in some ways can be helpful. But, in general, I think there are a lot of great things about being more stereotypically “feminine” that can really benefit collaboration, criticism, and science in general. So, I titled this post “act like a woman.” I do understand that every woman is different, and I do not mean that all women act as I describe (I include caveats below). Rather, I think that, in our society, women and men are socialized in different ways. Women are socialized to be better communicators (though not all women are great communicators) and men are socialized to be more aggressive (though not all men are aggressive). My aim with this post is to point out some stereotypically “female” social behaviors and how they could be beneficial to scientific discourse. I also want to say that many of my male colleagues already act in the nice ways I describe below, and it is a pleasure to work with them. If you disagree or have other points I missed, feel free to comment or write a response post for me to post up here.

Saying “I’m Sorry” There was a fabulous sketch on Inside Amy Schumer where she shows a panel of high-achieving, amazing women who spend the entire panel saying “I’m sorry” (episode, sketch, interesting articles about the sketch: Above Average, Huffington Post). As is often the case with comedy, and especially with Amy Schumer, the sketch made a social point and then went haywire when one of the women basically apologized herself to death.

I find myself doing this all the time. I apologize for things that are and are not my fault. In my very progressive local environment, many people will actually respond, “There is no need to be sorry.” True, sometimes I say sorry when it isn’t needed, but what I am really saying is, “I see you. I recognize you are there and you are a person who deserves a comment.” It doesn’t always actually mean, “I’m sorry.” Sometimes it means, “I empathize with you,” or “I acknowledge you.” I suppose I could say those other things, but “I’m sorry” is what always pops out of my mouth.

Saying “I’m sorry” is one of those things that men, women, and others try to de-socialize out of women. They say it undermines your power to apologize all the time. But does it? I have male colleagues who are sweet and kind, and guess what? I have noticed that they apologize a lot, too. It doesn’t seem to take away their power. Plus, it is not exactly distracting or bad or anything else (unlike in Amy Schumer’s sketch). In general, it is minor and not even noticeable. Why are we trying to stop people from acknowledging each other and, in a sense, just trying to be nice? So, I am keeping the “I’m sorries.” I’m sorry if this bugs you.

Saying “I was Wrong” I find it ironic how adamant scientists can be even in the face of their utter incorrectness. We have to be able to acknowledge that we are wrong, or we might as well stop doing science at all. In general, although, again, not always, I find that women are more capable of accepting that they are wrong, moving on to a new idea, and considering other possibilities in their science. But why? Is the proverbial woman more empathetic and thus more capable of seeing someone else’s view? Is it that we are bashed and criticized so much (more?) that we are more open to such critique?

Whatever the reason, the ability to accept that you might be wrong is essential, especially at a time when about 30–60% of studies have been shown to be “false.” There was a nice NPR story about this just this week, actually, talking about the fact that many studies, especially in medicine and biology, turn out to be incorrect. In my opinion, this has to do with:

(1) Biology and parts of other sciences are inherently statistical, yet we do not do a good job of quantifying our results and making the uncertainty clear. In many fields, a one-sigma difference is called “significant.” Think about that. One sigma. Statistically speaking, a difference that large will appear by chance alone roughly 30% of the time. That is what a one-sigma difference means!! So, why are we shocked?

(2) People have a hard time admitting uncertainty in their publications. Indeed, there is no incentive to accurately report your uncertainty when we are pushed to make big, broad claims about our work to publish in “high impact” journals. I find it weird that the old, standard journals with good, solid work, much of which is reproducible, have the lowest impact factors. I find it even weirder that the newest journals on the street often have crazy-high impact factors when they have only been around a short time. That system is clearly flawed, even more than the one-sigma significance system. At least one-sigma significance has a quantifiable uncertainty!
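The one-sigma arithmetic in point (1) is easy to check for yourself. Here is a quick back-of-the-envelope sketch (in Python, my choice of language, not anything from the NPR story) of where that ~30% figure comes from, using only the standard library:

```python
import math

def two_sided_tail(n_sigma):
    """Probability that a standard normal draw lands more than
    n_sigma standard deviations from the mean (either direction)."""
    return math.erfc(n_sigma / math.sqrt(2))

# A "one-sigma" difference shows up from noise alone about a third of the time;
# two- and three-sigma thresholds are far more conservative.
for n in (1, 2, 3):
    print(f"{n} sigma: {two_sided_tail(n):.1%} chance from noise alone")
```

Running this gives roughly 31.7% for one sigma, 4.6% for two, and 0.3% for three – so a field that calls one sigma “significant” should expect a large fraction of its “significant” results to be noise.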

In the NPR story, the scientist double-checking all the work said that he had some of his own original work debunked. He was asked why it is so hard for scientists to face the fact that they might be wrong. He said it was because we feel like the fact we discovered is a personal possession – we own it. I disagree. I am not so tied to my personal possessions, and many scientists are similarly minded. I think it is more that it feels like family, or even a part of your own self-identity. Your scientific discoveries define you. To realize that they may be wrong is like realizing you, yourself, are not who you think you are.

So, I see that it doesn’t pay to clearly say, “I could be wrong, and it may be by 30%.” On the other hand, your short-term gain is science and society’s long-term loss, because we are working off of faulty data. So, overall, I think we could all benefit from being a little more honest with ourselves about our shortcomings and admitting that we could be wrong – by as much as 30%.

Listen and summarize – don’t just contribute your own ideas all the time. There is a saying, “You have two ears and one mouth, so you should listen twice as much as you speak.” I have a majorly hard time with this, especially when the topic is something I am excited or passionate about. But, I have found myself in a number of meetings, especially over the summer, where the room was dominated by big voices and personalities talking about things I wasn’t as interested in or passionate about. I noticed that the domination was coming mostly from males… OK, entirely. This is partly because I am in a male-dominated field. But, it was more than that. To me, in these meetings, I distinctly had the impression of male animals marking turf and competing with rivals for dominance of the room, ideas, and airtime. If I were to draw a picture, it would look like this:

This situation happened twice in recent memory. In the first instance, I was one of two women in the room. The other woman and I worked with the group to synthesize the discussion and the loud ideas coming from the men. I also contributed many ideas that were incorporated. I felt valued and heard in that instance. It was clear to both of us that, without us, very little would have been accomplished, because no one else was doing the oversight and group dynamic management that we were doing.

On another, more recent occasion, there were three women. We all shirked our responsibilities as the “women who help” to synthesize and steer the conversation to productive avenues (see this article). Why did we do this? It seemed fruitless and a waste of time, given the personalities in the room and the way they interacted with each other. It was easier to keep our heads down. Every now and then, we three women would discuss separately, come to a good idea, and then patiently wait, literally with hands raised, until the males calmed enough to see us. We would give our idea, which was good and often accepted, and another topic would follow with more “hoo-hoo” and “haa-haa” (gorilla noises in my head, see image) about the next topic. In this second venue, all three of us felt under-valued and unheard, despite the fact that our contributions were significant to making progress for the group. It is demoralizing and marginalizing and off-putting, and, worst of the worst, wasteful and an impediment to progress.

Again, yes, “Not All Men” and “Not All Women,” but what I am advocating is the end of that type of behavior altogether. Good leadership and meeting management can help avoid these types of meetings and interactions, but it would be better if such people just acted more politely and graciously – acted like a woman – in the first place. I guess I am just saying that the aggressive posturing doesn’t actually work to make progress on solving problems, so why bother doing it?

Be constructive and nurturing – not destructive and critical for the sake of being critical. Many scientists are teachers; many are not – even if they work at a university. Even those who do teach don’t always value or develop that part of their jobs. Teaching, especially at the K-12 levels, is a primarily female occupation these days, but the opposite is true at the professorial level. Why? Is it because the endeavor of actual teaching is seen as more nurturing and caring than other professions (such as scientific researcher), and women are the nurturers of our society, so they are steered toward those jobs? Whatever the reason for the switch from women educators at lower levels to men at higher levels, it is not really my point here – sorry to lead you astray.

Here, I am advocating that science would be more fun, more collaborative, more productive, and more welcoming to under-represented groups if we could be more pedagogical with our criticism. As I have said before (above and in prior posts), criticism is vital for the re-evaluation and assessment needed to understand right and wrong. What many people say, though, is that our current form of critique is too harsh. This relates to the points made above about impact factors and the cut-throat granting environment. What I have found, as an editor and as a scientist who gets reviewed, is that reviewers are often emotional, unhelpful, and, frankly, a$$h0les when doing reviews. This attitude doesn’t help science or the authors you are reviewing.

Instead of being harsh, I wish people would try to be educational. As with everything I am saying, there are always specific places where this is not true. I have a favorite “home base” journal where I like to publish. This journal is great because, mostly, the reviewers are helpful and pedagogical without being pedantic, patronizing, or condescending. The reviews are helpful in making our papers better. Needless to say, this is not a “high-impact” journal in the short run. But I have been able to replicate experiments published in that journal, AND using the experimental methods outlined in the papers as published. In the long run, these papers will be the truly impactful ones – the ones that are correct.

I would like to note: when being pedagogical, try not to be patronizing or fatherly. This can be a hard line to walk. Just remember that the thing you are reviewing is actually written by “the expert” on that subject. You are brought in as an expert on what you do, which is not exactly what the authors you are reviewing do. They are the experts – it is their science. You are there to offer advice to help improve the manuscript or proposed science. Consider them a colleague seeking advice. If you blast off a review and act like a know-it-all, that is a$$h0lish, too. Basically, follow the golden rule – treat these authors the way you would want to be treated by a reviewer. Keep calm; don’t get emotional. Stick to the science and the facts – not your opinion of science and the facts. And, for heaven’s sake – cite your references in your review!

Man, that was a long one. What do you think? Comments are welcome. If you want to get an email every time I infrequently post, push the +Follow button.

I was chatting a few months ago with an AllyManOfScience who complimented me by saying he uses a lot of the laboratory organizational ideas I present here to organize his lab. (Lab organizational stuff can be found here, here, here, here.) I asked if he had anything to add or modify from what I said, and he added something very interesting. He said that he prefers to hire students who have some background as an athlete or musician at a high level. People who have done sports or music at a high level are very comfortable with criticism. They have an inherent understanding that even a good performance can still be made better, and that critiques are not personal. Critiques are made to make their performance better. I started thinking about it, and I realized that a lot of scientists I know did do sports or music. I was a gymnast who competed at a fairly high level and worked out 24 hours per week to hone my skills. I wasn’t Olympic level, but high enough to be getting a lot of criticism after each routine on a regular basis. HusbandOfScience was a band nerd who taught himself guitar. He spent hours practicing guitar in high school. If you have a good musical ear, you can self-correct and do not need others to tell you you did it wrong. Other WomenOfScience friends were cheerleaders, synchronized swimmers, and even champion dog show groomers/runners. All of these activities take skill and practice and involve getting criticism.

Science is full of criticism. You have to take it and say thank you. Then ask for more if you want to make it. You do an experiment – you get criticism. You make a figure – you get criticism. You give a talk – you get criticism. You make a poster – you get criticism. You write a paper – you get criticism. You apply for a grant – you get criticism. Over and over and over. It doesn’t stop. It won’t stop. The most famous people in science still get criticism when they submit a paper or a grant – even if they get the paper accepted or the grant money a lot more easily than you.

If you have a hard time taking criticism, I say practice and get better at it, or leave. You can get better at taking criticism. The first time I got a paper review as a graduate student, I cried. We made the changes and the paper got in. The second time I got a paper review as a graduate student, I cried… OK, so I didn’t learn how to take criticism overnight. By the time I was a postdoc, I didn’t cry. I was learning how to take criticism. As a professor, my first couple grant rejections got to me, but after writing 10 proposals and finally getting one funded, I didn’t get so bummed when I didn’t get funded.

Reviews can be too harsh. Sometimes reviews are too harsh, too emotional, or just plain mean. And this sucks. But, your job as a logical scientist is to try to see through the crazy and find the truth in the words. Of course, you are entitled to be pissed off at a mean review or overly harsh or unhelpful critique. But, after you have cooled down, try to figure out what is actually wrong with what you did. Perhaps nothing. Perhaps they misread something that was perfectly clear…but perhaps you could make it clearer. Even bat sh*t crazy reviewer number 3 probably has some point.

There are bad reviews. I don’t want to say that all reviews are equal. I am on the editorial board for a journal, and I serve to find the reviewers and make the editorial decisions. Some reviews are, frankly, emotional. As an editor, I don’t want to see, nor do I care about, your emotions as a reviewer. I also don’t care about your personal opinions about science. I care about facts. Your reviews should be full of science facts. If you think that cats can fly, and that is your scientific opinion, you need to back that up with some references. I am OK with your opinions about the style of the writing as long as you make helpful suggestions to make it a better paper. If your review is emotional and not helpful, I’m not going to take it seriously. You are reviewing a scientific paper – not TROLLING your favorite blog.

So, what do you think? Add your two cents here in a comment, or send me a post.

There was a recent article, “How to be the Perfect Mother,” from Huffington Post that was a hilarious look at how society gives us conflicting information about how we should act as mothers. You should go look at it if you are a mother, know a mother, or have a mother. Just go see it.

This article, combined with two recent manuscript reviews coming back, got me thinking about how reviewers also often write conflicting advice for your manuscripts. So, I decided to write a satirical version of a manuscript review as an example.

***Note: any resemblance to reviews you may have received or written is purely coincidental.

Enjoy!

We have read and reviewed the manuscript, “This Science Thing is Important for this Other Thing” by Prof.Science. This manuscript investigates the ScienceThing and its interactions with OtherThing, a very important and understudied topic. This group performed many new experiments that had never been done before and had 6 figures each with A-J panels. Their work was executed well and revealed new information about the interactions of ScienceThing with OtherThing that we never knew before. Their clearly written manuscript had a simulation that modeled the results and showed similar trends suggesting a mechanism.

Major Concerns:

In performing these experiments, they used well-tested experimental methods along with specific tests to control for errors. They have used these methods to test for effects of ScienceThing on OtherThing and have quantified the effects. Since these methods are well-tested and accepted in the field, they are not novel. We want only novel experiments even if we cannot interpret the results we get from them. Thus, we suggest that the authors perform all new experiments. Further, did the authors investigate how ScienceThing affected OtherThingII? Only one paper on OtherThingII exists, from the OldFart Group, but it is clearly more important than OtherThing, and it should be explored even though almost no reagents exist for OtherThingII. Unless OtherThingII is also investigated, I do not think this paper is very worthwhile.

The authors display histograms of their work and how ScienceThing affects the OtherThing. It is important to be quantitative and have numerical data. For each histogram, they fit to a Gaussian and report the R-squared value of the fit to the data. They use these fits to discuss the results. Why do they do this? Why not use a simple p-value to the data? Isn’t a Student’s t-test done on everything? It is clear that the two distributions do not overlap, so they should report the p-value.

The authors used a toy model to show that the ScienceThing behavior that they see could be due to a minimal number of simple rules. Being quantitative and having models is important. We want more quantitative work and models in this field of science. The simulation has the same trends as the experimental data, but it does not exactly match the data, so the model must be worthless. Why did these authors have a model? They are not theorists or modelers; they are experimentalists. They should remove the model, it detracts from the data.

Without the model, the authors do not have a mechanism. We want all science to be mechanistic. It is not good enough to simply observe something and report what happens. For instance, although their toy model uses 3 simple rules and has the same general trends as the data, they cannot rule out a model with 10 complicated rules. Thus, they have not revealed the mechanism behind the results they see, and thus the impact of the work is lower in my opinion. Until their work becomes more mechanistic, their results are purely qualitative, and the work is not worth publishing.

Other Issues:

There are a number of sp errs in this manuscript. Don’t they care how they present thmseves? Its like thei didn’t even porrof read before they sent it out. They need to really fix this. There are way too many issues for me to helpfully point out.

They are missing a number of very important citations particularly from the OldFart group, “Science Stuff: A novel Regulator of Nothing,” JSS 1979; “Science Stuff Moves Science Thing,” JSS 1998; and “Science Stuff to Science Thing,” AJSS 2000. These important references about how ScienceStuff is connected to ScienceThing are important and should be added.

Their experimental methods are not good. They didn’t even present them! I suppose they could be in the supplement, but I didn’t read it, so I wouldn’t know. Even if they are in the supplement, they need to have them in the main text. Maybe, once they take out the model, they will have room in this 5-page paper to have detailed methods.

In conclusion, after having read this paper, I feel that these results were obvious and could have been guessed from deductive reasoning. Thus, the experiments were not necessary and the results are not novel. Further, to make the results important and novel, the authors would need to perform a number of extra experiments that were not in the original 60 plots presented, and they would need a mechanism, which they have not proven. Overall, it is clear that this study has no value and, thus, I recommend that this paper be rejected.

Anything to add? Post or comment here. Maybe we can add more examples?