06/12/2019

On How to Destroy The NDPR; a thought on Clark Glymour's latest

1) We want information about the causal processes that produce the data, and their regularities; and 2) We have available only experimental interventions that are themselves insufficient for those goals. This is not your grandfather's science, or Popper's.

Clark Glymour flunks the minimal requirement of a book review: conveying what a book is about and what its distinctive contribution is. Instead, he rants and snarls about the book he wishes Profs. Lill Anjum and Mumford had written.

The problem is really three-fold. First, we learn nothing about the main arguments of the book. Even if the metaphysical justification of the norms of science were a misguided project, it would still be interesting -- for aesthetic or conceptual reasons -- to see how it is supposed to work, and whether Lill Anjum & Mumford succeed at offering such a justification.

Second, Glymour takes for granted (with a nod to Michael Friedman) that the main business of the philosophy of science is to offer "new frameworks for scientific inquiry." While I am no neo-Kantian, I agree that something in this vicinity is a genuinely valuable enterprise. (I am consistent here; I said so in 2011 when I commented on Glymour's manifesto, which also appealed to Friedman.) But neither Glymour nor Friedman ever establishes that this is the only valuable enterprise for the philosophy of science. If you can't offer a single argument for the unique significance of your own preferred project, maybe keep an open mind about others' (or pretend to)?

As it happens, I agree with Glymour that [A] figuring out how to produce genuine causal knowledge from the reams of data in science is truly a significant challenge.+ But it is not the only such challenge. It is [B] equally important -- perhaps, with an eye on inductive risk, even more so -- to figure out how to ensure that such causal knowledge does not end up exhibiting an otherwise invisible, but morally suspect, status quo bias about the background conditions that help produce it (such as it is). I do not expect the previous sentence to convince Glymour. But he has suggested (in his manifesto) that society's willingness-to-pay and potential contributions to public policy are a good metric for evaluating philosophical research programs. On that score, [B] is increasingly popular.

Third, at one point Glymour seems to offer an argument against the enterprise of the book:

It would seem that the broad norms of scientific method are pretty clear: follow procedures that have an empirically or mathematically warranted good chance -- and preferably the best chance among available procedures -- of finding the true and avoiding the false. It's means-end. The justification on that scale of abstraction is elementary decision theory. The hard part is finding such methods suitable for the kinds of data we now collect and showing that any particular method or class of methods satisfy those criteria.

I want to make two claims about this quote. 1) I agree with Glymour that justification on that scale of abstraction is elementary decision theory (although on bad days I am sure I flunk precise expression of it). But whatever decision theory is, it's not metaphysical justification. (Did I mention yet that it would have been nice to get an exposition from Glymour on what that is?) So, it misses its mark against the book's project.
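For concreteness, here is a toy sketch -- entirely my own illustration, with made-up numbers, and not anything drawn from Glymour's review -- of what "means-end" justification at that level of abstraction looks like: given candidate methods with known chances of finding the true and avoiding the false, elementary decision theory just says pick the method with the lowest expected loss.

```python
# Hypothetical methods, each characterized by:
# (P(accept H | H is true), P(accept H | H is false)) -- invented numbers.
methods = {
    "method_A": (0.90, 0.20),  # sensitive but error-prone
    "method_B": (0.80, 0.05),  # more conservative
}

prior_true = 0.5          # assumed prior probability that H is true
loss_miss = 1.0           # cost of rejecting a truth
loss_false_accept = 1.0   # cost of accepting a falsehood

def expected_loss(p_accept_if_true, p_accept_if_false):
    """Expected loss = P(true)*P(miss)*cost + P(false)*P(false acceptance)*cost."""
    miss = prior_true * (1 - p_accept_if_true) * loss_miss
    false_accept = (1 - prior_true) * p_accept_if_false * loss_false_accept
    return miss + false_accept

# The "justification" of a method choice, on this picture, is just minimization:
best = min(methods, key=lambda m: expected_loss(*methods[m]))
for name, probs in methods.items():
    print(name, round(expected_loss(*probs), 3))
print("preferred:", best)  # method_B has the lower expected loss here
```

The point of the sketch is only that nothing in this calculation is metaphysical: the priors, the loss function, and the methods' error rates are all taken as given, which is precisely where the book's project (as I understand it) would want to dig.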

Now 2) the final sentence (about the hard part) is ambiguous between (i) locating methods among the known scientific methods and showing that any particular method or class of methods has a good or even best chance of producing causal knowledge (finding the true, etc.) of the processes that produce the data and (ii) inventing new methods that have a good chance or even the best chance of producing causal knowledge of the processes, etc. Given everything else he says, I assume Glymour means (ii).

I actually think (ii) is much stronger than 'we' need: what we need is (as he says) "methods suitable for the kinds of data we now collect." That these methods "satisfy those criteria" is clearly important for our evaluation of the strengths and weaknesses of the methods, but it is less significant. My evidence for this is induction over the history of science (a topic I have spent a good chunk of my life on): many methods and procedures got used long before we understood what the chances were that they reliably produced the true and avoided the false; and if no competing procedures exist, they go on being used even when there are grounds for doubting the odds of success.*

Okay, let me wrap up. Regular readers know I think polemical book reviews can serve an important purpose in bringing philosophical disputes out into the open and so advance our understanding. (But that would require mutual engagement.) This is one reason to cherish the NDPR. But the NDPR discredits itself by publishing Glymour's review; it breaks no new ground and, from the vantage point of rhetoric and clarity, does not advance beyond his manifesto. "Be nice" is a useful motto, especially if you have nothing new to report.**

*I leave it to the decision theorist to justify scientific practice.

+I also agree that during the last few decades 'science' has undergone some very significant changes in evidential practices and standards.

How much editorial review is common on book reviews? I've published reviews in the NDPR twice, and only once had any discussion at all -- at my initiative, about review length. On a couple of other reviews I have also discussed length with editors, but never content. (I've published 8 book reviews in a variety of venues, so it's not a tiny sample.) If it's in fact very unusual to get editorial input on content, then even if this is a bad review, I don't think it's necessarily a failing of the NDPR.

Because a consistent failure to uphold editorial standards is not blameworthy? (That ordinary procedures have been followed may be a proper standard in a bureaucracy and the law, but it is not a very good one in academic publishing.)

Glymour might think the main business of philosophy is to offer new frameworks for scientific inquiry, but Friedman himself doesn't say anything this narrow. Friedman's main contrast is just with other ways of doing "scientific philosophy" - he certainly doesn't argue that this is the only valuable way to do philosophy.

I wonder what Glymour makes of this: "... in philosophy (and, mutatis mutandis, also in the other humanities), it is always to our advantage to let a thousand flowers bloom. Finally, it is folly as well for philosophy (and for the other humanities) to regret this lack of scientific status..." (p. 24, The Dynamics of Reason).

"(That ordinary procedures have been followed may be a proper standard in a bureaucracy and the law, but it is not a very good one in academic publishing.)"

I don't claim that these _should_ be ordinary procedures, but only that, if they are the ordinary ones, they shouldn't be changed w/o prior notice. Doing that would be a pretty bad policy, including within academic publishing.

That said, I also expect that, if significant editorial review of book reviews were put in place, it would be much harder to get people to write book reviews. (My impression, from talking to a few book review editors at good journals, is that it's actually fairly difficult right now.) As noted, I've published a fair number of book reviews, but I get no "career" credit for it at all, and if I had to worry about dealing with significant editorial control, I'd be much less likely to do them.