Yes, they did include Watts Up With That in a list of science sites. Personally, I think it gives an interesting bit of context and flavour to the study. It doesn’t legitimise the site. Remember, they are looking at gender, not truth claims. That said, I did think it was odd they didn’t reflect more on its slightly different status (if only because I think that’s interesting sociologically).

Additionally, they interviewed six ‘web authors’ (Mendick & Moreau, 2010: 8. See also Appendix 4), and carried out group interviews with 32 young ‘web users’ (Mendick & Moreau, 2010: 8-9. See also Appendix 5). The researchers note that they find any distinction between writers and users problematic with respect to online media, stressing a blurring of boundaries around such roles and explicitly distancing themselves from the attitude to online science media taken by a report on Science and the Media published by the UK government last year (Mendick & Moreau, 2010: 4-6).

What they found

I’m going to focus on the results from their analysis of web content, as this post is already quite long and I want to leave space to also discuss their methodology. Do read the full report yourself if you are interested – it’s quite accessibly written.

Their results suggest online science informational content is male-dominated: far more men than women are present. On some websites, they found no SET women at all. All 14 of the people in SET identified on the sampled pages of the RichardDawkins.net website were men, as were all 29 of those mentioned on the sampled pages of the Channel 4 website (Mendick & Moreau, 2010: 11).

They found less hyperlinking of women’s than men’s names (Mendick & Moreau, 2010: 7). Personally, I’d have really liked some detail as to how they came up with this, and what constituted ‘hyperlinking of women’s names’ precisely. It’s potentially an interesting finding, but I can’t quite get a grip on what they are saying.
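The report doesn’t spell out how this was measured, but to make concrete the kind of operational definition I’d have liked to see, here is a rough sketch of one way ‘hyperlinking of names’ could be counted. The names and the snippet of HTML are purely illustrative, and this is my guess at a procedure, not a reconstruction of theirs: for each name, tally whether its mentions fall inside an `<a>` tag or sit as plain text.

```python
# A minimal sketch: count, per name, how many mentions on a page are
# hyperlinked (inside an <a> tag) versus plain text. Illustrative only.
from html.parser import HTMLParser

class LinkedNameCounter(HTMLParser):
    def __init__(self, names):
        super().__init__()
        self.names = names
        self.in_link = 0  # depth of <a> nesting at the current position
        self.counts = {n: {"linked": 0, "plain": 0} for n in names}

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        # Tally each name's occurrences in this text run, keyed by whether
        # the run sits inside a link.
        for name in self.names:
            hits = data.count(name)
            if hits:
                key = "linked" if self.in_link else "plain"
                self.counts[name][key] += hits

page = '<p><a href="/goodall">Jane Goodall</a> spoke; Ada Lovelace was mentioned.</p>'
counter = LinkedNameCounter(["Jane Goodall", "Ada Lovelace"])
counter.feed(page)
print(counter.counts)
# {'Jane Goodall': {'linked': 1, 'plain': 0}, 'Ada Lovelace': {'linked': 0, 'plain': 1}}
```

Even something this crude would have let readers see exactly what was being claimed, and would make the finding reproducible on other pages.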

They also note that when women did appear, they were often peripheral to the main story, or ‘subject to muting’ (i.e. seen but not heard). They also noted many instances where women were pictured but remained anonymous, as if they were used to illustrate a piece – for ‘ornamental’ purposes – and give the example of the Wikipedia entry on scientists, which includes a picture of a woman as an example, but stress that she is anonymous (Mendick & Moreau, 2010: 12).

Echoing findings of earlier research on science in the media (e.g. the Bimbo or Boffin paper), they noted that women, when represented, tended to be associated with ‘feminine’ attributes and activities, demonstrating empathy with children and animals, etc. They also noted a clustering in specific fields. For example, in the pages they’d sampled of the Guardian, they found seven mentions of women scientists compared with twenty-eight of men, and three of these women were in a single article, about Jane Goodall (Mendick & Moreau, 2010: 12-13).

The women presented were often discussed in terms of appearance, personality, sexuality and personal circumstances, again echoing previous research. They also noted that women scientists, when present, tended to be younger than the men, and there was a striking lack of ethnic diversity (Mendick & Moreau, 2010: 14).

There were also some interesting hints about women having a particularly hard time when it came to sceptical communities: women were more likely to be associated with dishonesty, or at least foolishness. This was true both at the Bad Science end of things and in what might be seen as the ‘pseudo-scepticism’ of Watts Up With That (Mendick & Moreau, 2010: 17-18).

One site they did seem to quite like was Science: So What (Mendick & Moreau, 2010: 19).

What I thought

I’m going to be quite critical of this research. It’s not actively bad, it just seems to lack depth and precision. I suspect Mendick and Moreau were doing their best with low resources and an overly-broad brief. I also think that we are still feeling our way in terms of working out how to study online science media, and so can learn something from such a critique.

Problem number one: it’s a small study, and yet a ginormous topic. I’d much rather they had looked at less, but made more of it. At times I felt like I was reading a cursory glance at online science. Problem number two: the methodological script seemed a bit stuck in the print era. I felt the study lacked a feel for the variety of routes people take through online science. It lacked a sense of online science’s communities and cliques, its cultures and sub-cultures, its history and its people. It lacked context. Most of all, it lacked a sense of what I think sits at the centre of online communication: the link.

It tries to look at too much, too quickly. We’re told that of the blog entries sampled from Bad Science, three out of four of the women mentioned were associated with ‘bad science’, compared to 12 out of 27 of the men. They follow this up with a note that Goldacre has appeared on television critiquing Greenfield, a clip of which is on his site (Mendick & Moreau, 2010: 17-18). OK, but ‘bad’ needs unpacking here, as does the gendered nature of the area Goldacre takes aim at. As for Susan Greenfield, she is a very complex character when it comes to the politics of science and gender (one I’d say it is dangerous to treat representations of simplistically). Moreover, this is a very small sample, without much feel for the broader media context the Bad Science blog works within, including not only other platforms for Ben Goldacre’s voice but comment threads, forums and a whole community of other ‘bad science bloggers’ (and their relationships with each other). NB: I think there are interesting and important discussions to have about gender and sceptic communities, which is precisely why discussion of this needs to be done well.

I also got a bit annoyed at the analysis of the Wikipedia entry on scientists (they note the image of a scientist is of a woman, but that she is anonymous). OK, it’s an example of a nameless woman, but the culture of anonymity around the idea of a scientist is important to remember here. There is a gender politics to this, but that needs to be brought out, as do the new ways in which cultures of the web may disrupt or change this politics (personally, I’d start quoting this fascinating statement on Holfordwatch whilst reaching for my copy of Modest_Witness@Second_Millennium).

In fact, I was surprised not to see issues of identifying gender within anon/pseudonymous identities come up. To me, this flagged up a lack of attention to the forums that many of the sites they looked at contain. Indeed, the relative lack of attention the report paid to conversations between people was, I thought, especially odd considering a key finding of one of the rare bits of research that has been done on young people and science online is that they go to the web to talk to each other, rather than to be fed content (admittedly, this study is a bit old, but I was surprised not to see it referenced).

The approach to Twitter was, I thought, especially weak. It boiled down to a keyword search for Ada Lovelace, Susan Greenfield and Alice Roberts on one side, and Charles Babbage, Richard Dawkins and Robert Winston on the other. I guess it could generate some data to then have a play with, but they don’t seem to do anything with it, and I remain unconvinced that it’s the best first step anyway. Keywords just don’t capture Twitter. Trending terms, maybe (maybe).
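To illustrate what I mean – and this is a toy sketch with invented data, not a reconstruction of what Mendick and Moreau actually did – compare a flat keyword count with even a minimal nod to conversational context:

```python
# A toy illustration of why keyword counting is a blunt instrument on Twitter:
# the same matching logic is blind to whether a name appears in a broadcast
# or a reply, i.e. to the conversational structure around it.
# The tweets below are invented for the example.
tweets = [
    {"text": "Reading about Ada Lovelace today", "in_reply_to": None},
    {"text": "@friend totally agree re Richard Dawkins", "in_reply_to": "friend"},
    {"text": "RT: great piece on Richard Dawkins", "in_reply_to": None},
]

def keyword_count(tweets, term):
    """The flat measure: how many tweets mention the term at all."""
    return sum(term in t["text"] for t in tweets)

def contextual_count(tweets, term):
    """Split the same matches by whether they occur inside a conversation."""
    counts = {"broadcast": 0, "reply": 0}
    for t in tweets:
        if term in t["text"]:
            counts["reply" if t["in_reply_to"] else "broadcast"] += 1
    return counts

print(keyword_count(tweets, "Richard Dawkins"))     # 2
print(contextual_count(tweets, "Richard Dawkins"))  # {'broadcast': 1, 'reply': 1}
```

Even this crude split changes the story: a name that mostly surfaces in replies is circulating very differently from one being broadcast, and a bare keyword count can’t see that.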

The report needed to reflect something of the routes people take through online science. Their use of focus groups does capture this up to a point, but it really was very small and, for me, called out to be supplemented with more ethnographic work. The hyperlink disrupts the basis for a traditional content analysis; news-sharing and link curation sites, folksonomies, etc even more so. You can’t treat Twitter like a pile of paper to search for the existence of particular words within: it’s too complex a social system. We need to consider the time people dwell online, and how they interact with each other there.

To conclude, it’s always easy to say what people haven’t done and point a finger with ‘it’s more complicated than that’. My argument is that this study spreads itself too thin. Maybe it’s best to think of it as a first sketch towards later work that will learn how to capture the richness of the subject matter. To make a practical suggestion, the iterative research methodology applied by the Cardiff study, which applied feedback from research subjects along the way, strikes me as extremely applicable to studying online media.

I want to reiterate that I suspect the researchers were working with an overly-broad brief and simply weren’t given the resources to meet it. If we want to understand the cultures and politics of science online – and I think we should – we need to fund people with the time and resources to have a proper look.

17 thoughts on “Studying the politics of online science”

I have to admit I have avoided speaking up in these discussions, partly because there is more anecdote and hearsay than I’m comfortable with,* and partly because some of the views run counter to my own (intuitive) impression of the science blogging scene (but then that reflects my personal view of others).

As you say, there are lots of confounders for work like this. I haven’t read the paper you refer to (haven’t time, sorry – busy with a grant application) but one confounder might be that the measurement reflects those who speak up at the sites, not those who visit the sites.

I personally would have tried to balance or avoid sites obviously led by one person. One confounder there might be that some proportion of those who speak out will be doing it in part from seeing what they perceive as a “reflection” of part of themselves. Not expressing myself well here, sorry, but I hope you get the drift. Probably a ‘sign’ for me to get back to my grant application, then! ;-)

* Just as an example, I felt uncomfortable at another writer using Rebecca Skloot as an example of someone being under-recognised, but I would have had to cast the first stone as there were no other comments at the time I read the piece and I didn’t feel like sticking my neck out! :-/

Thanks for the analysis, Alice. Interesting report, and I do agree with your breakdown of the methodology. It also speaks to a frequent problem I’ve noticed lately with research into science outreach – studies seem to reach broadly to answer bigger questions rather than taking smaller bites and providing details that will contribute to a clearer answer over time. I wonder if it’s because – relatively speaking – it’s such a new field, that we’re desperate for answers now?

I’m also finding myself curious about conclusions on gender division in science communication circles, whether in blogging or elsewhere. I only say this as a result of significant personal bias, as in my social circle sci-com is dominated overwhelmingly by women. In the organisation in which I work, most of the professional science communicators are female, for instance. Just the other week I spoke at the Australian National Youth Science Forum, where other industry representatives were also presenting. There was one other male rep in my session, with about six female scientists. At my recent Australian Science Communicators’ ACT chapter meeting, there were two blokes and six girls.

My point is, while I concede there might be male domination in online media, I wonder what this might be a feature of. Do I live in a clump of female sci-commers? Last year’s Australian blogging competition – Big Blog Theory – was won by a pair of female science writers, with other strong female contenders in the running such as Natasha Mitchell’s ‘All in the Mind’ blog. Are some cultures stronger than others? What variables should we be considering in determining the influence of gender in communication?

In any case, thanks again for the breakdown. Good food for thought. :)

Mmm, the MSc I teach on is 20 years old, so it’s not THAT new a field. Also, Mendick and Moreau are sociologists more than sci com researchers. I do take your point about a lot of sci com/outreach research though. I also think sci com research in general has problems, and there are a lot of reasons for this. And that maybe it’s finally getting a bit better.

As for gender in the field – loads of women in the UK community too, so it’s not just an Aussie ‘clump’! The most high-profile journalists are men; everything else is mainly female. I’m not sure about scientists who do outreach compared to professional sci commers – I’m sure there is some data on this somewhere though (although I’m not sure about the rigour of any of it).

Relative to many areas of social research, twenty years is still new enough to have a history only within the current generation. And there are still a lot of people who barely even regard it as a discipline (few of the kids I spoke to at the forum even thought it was possible as a field in its own right, and were excited by the thought of it). However it is something of an arbitrary statement, given relevant studies in sociology, pedagogy, media etc. are far from new.

Good to hear it’s not a local thing. :) And yes, profile might introduce some imbalance. I’ll keep an eye out for anything on the topic. Cheers.

I asked Jon Mendel, who, like me, has been thinking through some of the methodological issues involved in studying this, what he thought. I asked him on Twitter, which is where he replied, and he got a load of great replies I thought were worth pasting here.

Sampling – did the Science: So What site attract a mass audience? Still on sampling: have they said why they didn’t sample woman-led or -run specialist sites?

Would agree with a number of your points re link and anon/pseudonym. Also importance of topic.

I get the sense that this flattens out (virtual) space/place/network when extracting a sample of content to analyse. E.g. much online content takes on meaning in the context of a broader network, community norms, spaces that work in certain ways.

Also how this links to Geography more broadly: how many of sampled sites are largely London/England (not just UK) based?

The mixed approach to content analysis, though, alongside interviews is an approach which is often productive.

Does raise some interesting ideas. And addressing some of my quibbles would take much time/resources: depends on brief given?

I thought the points about choosing sites were fair. Although I could imagine reasonable replies to justify them, they are points (like why include Watts Up With That) which I think were worth giving more detail on. Maybe this is the sort of detail that might come out in further work.

I like the ‘flatten out’ analogy. I’ve been thinking about this more, and I think my criticism of the methodology comes down to this: if we admit the boundary-blurring pro-sumer stuff (which this report is keen to do) then we have to admit that our primary sources have been profoundly changed, and so our methodologies must be too. For me, these sources have become ever-more social (media has always been social) and so the sociological end of media studies really comes into its own.

As Jon says, the mix of content analysis alongside interviews is often a good strategy. In many respects it is a strategy developed to deal with the social nature of media. So, as the sociality of media has been souped up, we need to soup this end of the research up too, perhaps adding more ethnography to the mix.

I also suspect we need to get clever about how we think about content analysis, perhaps drawing in some of the automated metrics devices. I think we need to differentiate between and within the sites a bit more too, not just lump teh internez together. This was a point I left out of the analysis above, but I really wanted more about how these sites were different, even within sites.

The Cardiff study I linked to does this content analysis + interviews approach very well. It’s also funded via the same project, with similar aims – it makes a good comparison:
(1) First off, I think they had it easier. If you are looking at ‘UK mainstream media’ it’s easier to say ‘ok, I’ll look at three of the main tv channels and a handful of the national papers’. I.e. they didn’t have the sampling issues in the same way.
(2) Secondly, as I say above, they took an iterative and gradual approach where they showed the research to stakeholder groups and then fed this back into the research as they were going. Lots of social research does something like this now. It’s a good idea. I also suspect the Cardiff study must have been given a larger budget, and possibly a lot more time. Good research takes resources.

I guess I wanted to see the Cardiff study + some refining and thought over studying online issues.

Still on sampling: have they said why they didn’t sample woman-led or -run specialist sites?

This is part of what I meant when I wrote I personally would have tried to balance or avoid sites obviously led by one person. Just occurred to me that one variation might be to take blog streams that have writers of both sexes; then you can do comparisons on the same site, incl. tracking users to see if they follow a mix of sexes, mostly one, etc. (By ‘streams’ I mean a stream of posts, not separate blogs like how BoingBoing works, if you get my meaning.)

Sorry, I’m a little confused :-) Are you referring to my suggestion of studying a mixed stream of posts or an earlier point? I’m not suggesting they “limit” themselves to studying a mixed stream, but that this, as another study, might give an analyst more direct comparisons in the data. Both sexes of readers (well, people who comment, really), both sexes of writers in the same setting; then you’ve got 2×2 tables off the one site, etc. It was only a loose thought.

Thanks to Alice for her thoughtful reading of the research and to all who have commented. We acknowledge that the research as it is could only provide a snapshot and give indicators of some key issues. The net was cast wide – when territory is not well researched this is sometimes necessary as a precursor to more detailed work.

We would love to be working with others active in the field to secure funding and resources to enable more in-depth and iterative studies in this area to be undertaken.

As Alice suggested, this project was a small-scale (and by necessity low-budget) exploratory study of online representations of women in science, engineering and technology. The project was commissioned by the UKRC, with the aim of opening a discussion of this relatively new area of research. This is also acknowledged by the authors in the report, who further recommend larger-scale research that could explore the issues they have identified in greater depth, and see how far the findings from their work apply more generally. The authors also provide some interesting suggestions for future research on gender and online media, such as:

* Explore minority-interest and/or feminist sites on SET that can suggest alternative modes of representation.
* Focus on constructions of ethnicity, social class, age and disability in online representations of women and men in SET.
* Look further at SET online content beyond the UK and develop cross-national comparisons that could indicate directions for change.
* Look at individual choices and narratives and the role of online representations as one of an array of influences on young people’s relationships with SET and their educational and employment choices.
* Produce a mapping of guidelines and policies relating to online content production, including the moderation of online content, explore how they are implemented (or not) and by whom.

We will be producing a toolkit soon to help SET organisations and individuals to consider gender equality when generating content for their websites, blogs and other forums. This will be a tool we can develop in the light of feedback and new research.

Thanks for that, Anna. I thought about including the notes for further research, but decided the post was long enough. I’m still concerned with the basic issue of not treating social media with due respect to its socially-constructed nature. Good luck with your toolkit.

I’m one of the authors of the report – thanks, Alice, for blogging about this, and to everyone else who’s shared their view. It’s great to get all this useful feedback. I’m really glad you feel our report is accessibly written.

I hope you’ll judge the report in context – Marie-Pierre and I had to do the work in 10 weeks part-time. The brief was to look at websites and to interview web authors and young people and then to analyse all this and then write a fairly lengthy report. That’s a big brief to cover in 45 work days. I agree with a lot of your critiques and definitely think that the area would benefit from an ethnographic and iterative approach but (as you guessed) we just didn’t have the time or resources to do this. (We hardly had time to think!) We feel that some research is better than none and hope you agree.

I think the approaches we used for Twitter and YouTube have big problems. These are massive sites – so it’s a question of where to start and what to do if you’re only vaguely familiar with them and have a day to sample and analyse their representations of people doing science. But having said that, even this very limited analysis contributed to our findings. For example, when we combined it with the other sites and with wider research on gender then we could use it to say something about how ‘pornification’ is leading to an increased and visceral sexualisation of women online that undermines their scientific contributions and that’s becoming taken for granted.

There’s not much work out there on this and so we hope that we’ve identified areas where other people can follow up. There’s quite a lot of surprising findings in there – like the association of women with ‘bad science’ that I don’t think anyone’s really talking about (I’ll restrain myself from ranting about this). Marie-Pierre and I would love to follow up the work ourselves so if anyone’s got any spare cash feel free to get in touch :)

Finally, I’m interested in why you’ve focused on the website analysis. Maybe it’s because I work in education, but the most fascinating part of the work for me is the second half where we looked at what web users and web authors had to say.

Thanks for the response (and sorry that it got stuck in my spam folder for several days – for some reason Anna’s did too…!)

I focused on the website analysis partly because that’s the methodology I know best, but I actually went into the report expecting to find the user and author analysis the most interesting. I guess it really didn’t tell me anything I wasn’t expecting. I did draft bits on it, but deleted them (I had a gut feeling readers would be more interested in the site analysis too, maybe wrongly). As with the website analysis, I think it does throw up points for further research. I did read it with interest.

Re bad science – I know women have spoken of feeling uncomfortable within skeptics groups (although equally many feel very empowered through their involvement with them too). This talk was interesting, as was some of the reaction.