A BOY, his voice heavy with embarrassment and regret, was performing Samuel Beckett in Serbo-Croatian. “Mr. Godot,” he said, “told me to tell you that he won’t come this evening, but surely tomorrow.”

Embattled and under siege, Sarajevo waited for the international community to come to its aid while thousands of soldiers and civilians were killed.

Now, 20 years later, Bosnia and Herzegovina is in the throes of protest and flames over officials’ corruption. But this time, instead of waiting for Godot, the people are acting for themselves.

Around the country, protesters are not just occupying streets and public squares but organizing plenums to create alternative governments. In Sarajevo, one such assembly was taking place at the youth center, which before the wars of the 1990s was one of the most popular Western-style clubs in Yugoslavia. During the war it was hit by artillery shells and caught fire.

Now I watched as more than 1,000 people — mothers without a job, former soldiers, professors, students, desperate unpaid workers — gathered here to discuss the future of the country.

Horvat reports the results of these people’s assemblies, which have been powerful. (So read the op-ed.) This reminds me of Arendt’s observation of the power that springs up when people gather together, and of the slogan that has inspired grassroots movements around the world: “We are the ones we’ve been waiting for.”

This does not mean that the international community should ever sit by and say, let them take care of their own problems. But neither should the international community come in and install an alternate regime or force democratization practices that might be counterproductive. As the director of the southwest Industrial Areas Foundation, Ernesto Cortes, Jr., says, “Never do for anyone else what they can do for themselves.” That’s his “Iron Rule.” So the best kind of help is that which helps communities organize themselves and decide their own futures.

INTERNET OR ONLINE SURVEYS have become a popular and attractive way to measure opinions and attitudes of the general population and more specific groups within the general population. Although online surveys may seem to be more economical and easier to administer than traditional survey research methods, they pose several problems for obtaining scientifically valid and accurate results. A peer-reviewed article by Responsive Management staff published in the January-February 2010 issue of Human Dimensions of Wildlife details the specific issues surrounding the use of online surveys in human dimensions research. Reprints of the article can be ordered here. Responsive Management would like to thank Jerry Vaske of Colorado State University for his assistance with the Human Dimensions article and for granting us permission to distribute this popularized version of the article.

Mark Damian Duda
Executive Director

The above is from

and note this:

Self-Selected Listener Opinion Poll (SLOP)

Paul J. Lavrakas

A self-selected listener opinion poll, also called SLOP, is an unscientific poll that is conducted by broadcast media (television stations and radio stations) to engage their audiences by providing them an opportunity to register their opinion about some topic that the station believes has current news …

Throughout my time as a philosopher, I’ve heard quite a bit of talk regarding ‘epistemic responsibility’ when it comes to discrimination, harassment, and assault. I’ve heard it much more frequently over the last few weeks, and so I feel compelled to say a few words about it. As it happens, I think I have a very different view of the nature of epistemic justification and the conditions under which agents can be said to have it than those who bring up epistemic responsibility in these sorts of conversations, but I want to address a slightly different question: What does moral responsibility require of us when allegations of discrimination, harassment, or assault are made? To be clear, what follows is not an endorsement of a presumption of guilt—rather, it’s an endorsement of action, sympathy, and compassion in the absence of certainty. It seems to me…

Just to round out my current round of complaints about the rankings of the Philosophical Gourmet Report (and then I really will finish those article revisions!), I want to point out another way in which bias shows up. The “top” 25 programs overall have smaller percentages of tenure-stream women faculty than even the already-dismal percentage in doctoral programs throughout the profession. Only eight of those 25 do better than average.

Julie Van Camp estimates that the national average is 22.7 percent. Rounding up, let’s say that anything at 23% or above counts as better than average.

Below is the list. For all the programs listed in the APA’s guide to graduate programs, I’ve used their self-reported numbers. For those who did not submit their data to the APA (shame, shame, shame), I’ve used Julie Van Camp’s numbers. The latter I list here with an asterisk. I’ll italicize those better than average.

NYU 21%

Rutgers 21%

Princeton 17%

Michigan 26%

*Harvard 21%

Pittsburgh 22%

MIT 18%

*Yale 16%

*Stanford 25%

*UNC 17%

*Columbia 35%

UCLA 25%

USC 20%

CUNY 18%

Cornell 33%

Arizona 23%

UC-Berkeley 27%

Notre Dame 16%

*Brown 27%

*Chicago 20%

UT-Austin 17%

UCSD 17%

UW-Madison 22%

Duke 20%

Indiana 22%

Of the next 26 that made the top 51, 16 are better than average.
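The tally behind “only eight of those 25” can be checked with a short script. This is just a sketch for verification, using the percentages exactly as listed above (the variable names are mine, not anyone’s official data set) and Van Camp’s 22.7% estimate as the cutoff:

```python
# Tenure-stream women faculty percentages for the "top" 25 PGR programs,
# as listed above (asterisked figures in the list are Van Camp's numbers).
top_25 = {
    "NYU": 21, "Rutgers": 21, "Princeton": 17, "Michigan": 26,
    "Harvard": 21, "Pittsburgh": 22, "MIT": 18, "Yale": 16,
    "Stanford": 25, "UNC": 17, "Columbia": 35, "UCLA": 25,
    "USC": 20, "CUNY": 18, "Cornell": 33, "Arizona": 23,
    "UC-Berkeley": 27, "Notre Dame": 16, "Brown": 27, "Chicago": 20,
    "UT-Austin": 17, "UCSD": 17, "UW-Madison": 22, "Duke": 20,
    "Indiana": 22,
}

NATIONAL_AVERAGE = 22.7  # Van Camp's estimate for doctoral programs

# Programs whose listed percentage beats the national average.
better = [name for name, pct in top_25.items() if pct > NATIONAL_AVERAGE]
print(len(better))  # prints 8
```

Note that Arizona’s 23% clears the 22.7% threshold, which is what brings the count to eight.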

So, what to make of this? Is the problem that these “top” schools are not that interested in hiring more women? Or is it that they are deemed “top” because they are not hiring more women—and hence not doing the kind of “non-philosophical” work those women tend to do? Linda Alcoff writes that the PGR “works to reward convention and punish departments that take the risk of supporting an area of scholarship that is not (yet) widely accepted or respected in the profession. Hiring in the areas of critical race philosophy or feminist philosophy is not going to improve a department’s ranking. As a result, philosophy departments are trying to outdo themselves in conformism and ‘tailism’—tailing the mediocre mainstream rather than leading.”

Additionally, could the fact that 85% of the evaluators were men have anything to do with the problem? There’s not a lot of use in speculating, since the report never pretends to be objective. It is a reputational ranking based on views of those who have, in certain circles, a good reputation. So it is all circular.

And now I worry that it is also sexist. I am not saying that the evaluators are themselves sexist but rather that unconscious biases are bound to slip into a survey that is so shoddily constructed.

Can this survey be saved? No, dear colleagues, it is time we walked away. I urge anyone who has been involved in this exercise, whether by turning over your list of faculty, serving on the advisory board, or acting as an evaluator, to stop.

On 11/25/2007 I posted on the dilemma of being a mother and a philosopher, having one’s attention trained in seemingly opposite directions, and what the connection might be to the dearth of women and mothers in philosophy. The comments that poured out in relation to that post are amazing, even six years later. (And some of you will see your younger selves there.) If you care about these issues, give it a read.

I’m wondering now how it seems for younger women / parents in philosophy. So have a look at that old stream and comment here. Are accommodations at conferences any better? Are departments supportive? Are partners helpful? Do you feel that tug between thinking and parenting? Does that have to be an opposition or can it be a productive relationship?

For the 2009 Philosophical Gourmet Report ranking of US doctoral programs, Brian Leiter circulated a list of the faculty at 99 US programs. But for the 2011-12 rankings, the list covered only 60 programs. That’s a 39% drop, in the space of just two years, in the number of departments willing to participate. No wonder Leiter has not published the list in the usual spot under methodology. But it can be retrieved as an RTF document from this page. [Edit: see correction below in my comment replying to Leiter.]

[Nonetheless] I compared the list of 60 faculties [used for the Philosophical Gourmet Report’s 2011 rankings] to Julie Van Camp’s ranking of departments by their percentage of tenure-stream women faculty. From top to bottom of these women-friendly departments (in terms of having above average percentage of women faculty), here is a list of those that do not participate in the PGR rankings:

University of Georgia

University of Oregon

Emory University

Villanova University

SUNY-Albany

University of New Mexico

University of South Carolina

Arizona State

SUNY Binghamton

University of Oklahoma

Loyola University – Chicago

SUNY Stony Brook

University of Cincinnati

University of Kansas

DePaul University

Fordham University

Marquette University

Temple University

University of Memphis

Duquesne University

University of Kentucky

Michigan State University

Bravo to all these programs — both for hiring women to the tenure stream and for saying no to the PGR.

The 2011 Report:
The list of the top 51 doctoral programs is included in the 2011 Philosophical Gourmet Report. The Report’s Advisory Board for 2011 had 56 members, nine of them women (16.1%), and the rankings were based on reports from 302 evaluators, including 46 women (15.2%).

The 2009 Report:
The Report’s Advisory Board for 2009 had 55 members, eight of them women (14.5%), and the rankings were based on reports from 294 evaluators, including 37 women (12.6%).

The 2006-08 Report:
The Report’s Advisory Board for 2006-2008 had 56 members, seven of them women (12.5%), and the rankings were based on reports from 269 evaluators, including 26 women (9.67%).

The 2004-06 Report:
The Report’s Advisory Board for 2004-2006 had 59 members, eight of them women (13.6%), and the rankings were based on reports from 266 evaluators, including 32 women (12.0%).

The 2002-04 Report:
The Report’s Advisory Board for 2002-2004 had 43 members, five of them women (11.6%), and the rankings were based on reports from 177 evaluators, including 24 women (13.6%).
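The percentages above follow directly from the raw counts. As a quick check (the tuple layout is my own arrangement of the figures reported above):

```python
# Per Report: (board size, women on board, total evaluators, women evaluators),
# taken from the figures listed above.
reports = {
    "2011":    (56, 9, 302, 46),
    "2009":    (55, 8, 294, 37),
    "2006-08": (56, 7, 269, 26),
    "2004-06": (59, 8, 266, 32),
    "2002-04": (43, 5, 177, 24),
}

for year, (board, board_w, evals, evals_w) in reports.items():
    print(f"{year}: board {100 * board_w / board:.1f}% women, "
          f"evaluators {100 * evals_w / evals:.1f}% women")
```

Running this reproduces each Report’s figures, including the 2011 evaluator pool at 15.2% women, i.e., roughly 85% men.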

Van Camp also notes that the very “top” six programs in the PGR have a lower percentage of women on the faculty than the national average for doctoral-granting programs. Go HERE to see her helpful chart showing percentages of tenured and tenure-track women faculty in doctoral-granting programs.

On that chart she includes when and how a school was ranked in the PGR since 2002. Of the top ten on her list, six have no ranking, meaning they have not shown up (since 2002) as one of the PGR’s top 51 programs. That can happen in two ways: (1) the program was ranked 52nd or worse, or (2) the program did not turn over its list of faculty, meaning it chose not to participate at all.

The 2009 PGR was based on a list of faculty from 99 doctoral programs. How many were on the 2011 list? Leiter provides previous lists under methodology, but not the 2011 list, at least not as of this writing. I know anecdotally that many of the programs with more women on the faculty choose not to turn over their lists to Leiter. I think this is because of his explicit bias against self-identified pluralist programs, most of which tend to have more women on the faculty. Regarding some problems with this bias, see this post on the New APPS blog.

Is there a systematic bias in the PGR methodology that leads it to value more male-dominated departments? Well, yes. An unrepresentative and hand-picked advisory board plus unrepresentative and hand-picked evaluators will lead to a slanted take on the value of the work going on in the profession. You don’t have to be a standpoint epistemologist to see this.