Her point appears to be that humanities and social sciences (“social sciences” aren’t mentioned in the title)—including history, literature, psychology, political science, and linguistics—shouldn’t be treated like “hard sciences.” What does she mean by that? And that’s the problem, for her criterion for “hard science” appears to be the use of mathematics and statistics:

I don’t mean to pick on this single paper. It’s simply a timely illustration of a far deeper trend, a tendency that is strong in almost all humanities and social sciences, from literature to psychology, history to political science. Every softer discipline these days seems to feel inadequate unless it becomes harder, more quantifiable, more scientific, more precise. That, it seems, would confer some sort of missing legitimacy in our computerized, digitized, number-happy world. But does it really? Or is it actually undermining the very heart of each discipline that falls into the trap of data, numbers, statistics, and charts? Because here’s the truth: most of these disciplines aren’t quantifiable, scientific, or precise. They are messy and complicated. And when you try to straighten out the tangle, you may find that you lose far more than you gain.

. . .over and over, with alarming frequency, researchers and scholars have felt the need to take clear-cut, scientific-seeming approaches to disciplines that have, until recent memory, been far from any notions of precise quantifiability. And the trend is an alarming one.

. . . It’s one of the things that irked me about political science and that irks me about psychology—the reliance, insistence, even, on increasingly fancy statistics and data sets to prove any given point, whether it lends itself to that kind of proof or not.

Konnikova gives a few examples of what she considers misguided attempts to apply mathematical models or statistics to “humanities,” including assessment of the factuality of stories like Beowulf or The Iliad using likelihood analysis, and forensic linguistics, the application of linguistic methods to legal jargon and practice.

Now I don’t know anything about either of these fields, and perhaps Konnikova is right here. But where she goes wrong is in concluding two things: that all “hard scientific” study must involve math or statistics, and that there are other “nonscientific” ways of knowing involved in what she calls the “humanities.”

First of all, not all science involves math or statistics. Granted, much of it does, but much is simply observational, especially in biology. One example is the observation of mimicry, like the fly-mimicking beetle I posted about the other day. And there is not a single equation in On the Origin of Species, the greatest and most influential biology book of all time. Finding a transitional fossil (like Tiktaalik) in the right sediments is science, for it gives substantial evidence for what early tetrapods were like (yes, I know some math is involved in dating strata).

The point is that the methods of science do not absolutely require statistics or mathematics. Those methods rely on replicated observation, eliminating alternative hypotheses, generating new and testable hypotheses, and constant doubt. That’s not so different from the methods used by archaeologists, historians, linguists, psychologists, and, yes, Biblical scholars. Is Konnikova unaware of the gazillions of psychology experiments that use statistics, including the recent flap about whether our “decisions” are made before we’re conscious of them?

Second, Konnikova fails to make the case that the “humanities” can tell us something real about the world without using the methods of science outlined above. Instead, she just throws sand in the reader’s eyes. For example:

It’s one of the things that irked me about political science and that irks me about psychology—the reliance, insistence, even, on increasingly fancy statistics and data sets to prove any given point, whether it lends itself to that kind of proof or not. I’m not alone in thinking that such a blanket approach ruins the basic nature of the inquiry. Just consider this review of Jerome Kagan’s new book, Psychology’s Ghosts, by the social psychologist Carol Tavris. “Many researchers fail to consider that their measurements of brains, behavior and self-reported experience are profoundly influenced by their subjects’ culture, class and experience, as well as by the situation in which the research is conducted,” Tavris writes. “This is not a new concern, but it takes on a special urgency in this era of high-tech inspired biological reductionism.” The tools of hard science have a part to play, but they are far from the whole story. Forget the qualitative, unquantifiable and irreducible elements, and you are left with so much junk.

Well, how does one go about finding out whether self-reported experience is influenced by culture, class, experience, and a particular research situation? You do a scientific test! And that often involves statistics. One researcher, for example (I can’t recall the paper), did an analysis of genetic studies of IQ, and discovered that the political leanings, upbringing, and education of the researcher were strongly correlated with whether or not that researcher found significant genetic differences in IQ between races. (The differences were in the expected direction.) In other words, the methods of hard science uncovered a possible observer bias.
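A test of the kind described above can be sketched with a simple chi-square test of independence between a researcher attribute and the finding reported. This is only an illustration of the statistical idea, not the actual study (which I can’t identify either); the counts below are invented, and the helper function is hypothetical.

```python
# Hypothetical sketch: is a researcher attribute (e.g. political leaning)
# independent of whether the study reported a significant effect?
# All counts are invented for illustration.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: leaning A / leaning B; columns: reported effect / no effect
stat = chi_square_2x2(18, 2, 5, 15)

# Compare with the 5% critical value for 1 degree of freedom (3.84):
# a statistic above it suggests the attribute and the finding are associated.
print(stat > 3.84)
```

A significant association would not prove bias by itself, but it is exactly the kind of quantitative red flag that purely qualitative inspection would miss.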

And I disagree profoundly with this statement:

Sometimes, there is no easy approach to studying the intricate vagaries that are the human mind and human behavior. Sometimes, we have to be okay with qualitative questions and approaches that, while reliable and valid and experimentally sound, do not lend themselves to an easy linear narrative—or a narrative that has a base in hard science or concrete math and statistics. Psychology is not a natural science. It’s a social science. And it shouldn’t try to be what it’s not.

I’m not sure what Konnikova means by “qualitative approaches”—I hope it’s not just storytelling—for she gives no examples. But how do you find out if a “qualitative approach” is valid and experimentally sound without (a) proper controls and (b) replication? That is, without the methods of science. (The “linear narrative” thing is just postmodern obfuscation.) And of course you can hardly open an experimental psychology journal without finding statistics!

Here’s what she says about history:

To be of equal use, each quantitative analysis must rely on comparable data – but historical records are spotty and available proxies differ from event to event, issues that don’t plague something like a plane crash. What’s more, each conclusion, each analysis, each input and output must be justified and qualified (same root as qualitative; coincidence?) by a historian who knows—really knows—what he’s doing. But can’t you just see the models taking on a life of their own, being used to make political statements and flashy headlines? It’s happened before. Time and time again. And what does history do, according to the cliodynamists, if not repeat itself?

To hear Konnikova tell it, one can’t really learn anything about history, because it’s all mushy and fuzzy. I agree that history doesn’t often use statistics (although read Steve Pinker’s The Better Angels of Our Nature to see how he deploys fancy statistics to argue that societies are getting better); but historians can still make hypotheses and do sleuthing, as well as interview people and cross-check their statements. That’s how we know that the Holocaust really happened despite the claims of denialists.

Finally, she excludes nearly everything in humanities and social science as being outside the domain of “hard science”:

It’s tempting to want things to be nice and neat. To rely on an important-seeming analysis instead of drowning in the quagmire of nuance and incomplete information. To think black and white instead of grey. But at the end, no matter how meticulous you’ve been, history is not a hard science. Nor is literature. Or political science. Or ethics. Or linguistics. Or psychology. Or any other number of disciplines. They don’t care about your highly involved quantitative analysis. They behave by their own rules. And you know what? Whether you agree with me or not, what you think—and what I think—matters not a jot to them.

Well, if she defines “hard science” as “science that involves mathematics and statistics,” then her claim is true by definition—except that some hard science, as I’ve noted above, doesn’t use math or stats, and some social sciences do. But what she fails to recognize is that for many of these disciplines (I exclude literature), one finds out what is true by using the same methods of rational inquiry that undergird the “hard” sciences. Archaeologists and historians try to cross-check facts and authenticate documents and dates. Linguists do indeed use quantitative analysis, and reconstruct the history of languages in ways similar to those used by biologists to reconstruct the history of life.

And as for ethics, well, yes, you can’t determine what is right by the methods of science, but you can certainly inform moral decisions, and learn about morality, from science. One example is Pyysiäinen and Hauser’s moral-situation study, which showed (statistically) that atheists and believers resolve novel moral dilemmas in the same way. If your stand on abortion or animal rights depends on whether fetuses or animals seem to feel pain, those questions are also subject to scientific study.

In the end, I’m not quite sure why Konnikova goes off on the incursion of “hard-science” methods into social science and the humanities. The underlying principles of finding truth are the same in all of these areas, regardless of whether one uses math or not.

It may be uncharitable of me, but I suspect Konnikova’s trying to tout the humanities and social sciences as “other ways of knowing.” But that won’t work. As the social sciences and humanities mature, they come to realize that their criteria for finding truth are the same as those used in biology and physics. Indeed, they even become more mathematical. It may be harder to suss out what’s true about humans than about, say, ants, but that reflects our more complex culture, not different ways of knowing about different organisms. (This does not apply, of course, to things like literature, where the notion of “truth” is itself slippery.)

There is only one way of finding out what is true, and that doesn’t involve revelation or making up stories.


Related: here’s a blog post on stuff the humanities have actually achieved. A lot of people discount this stuff because of hindsight bias, and because when they get mathematical they get called “not humanities but science”.

“stuff the humanities have actually achieved”
It’s an interesting list but I would suggest that most of it could safely be covered by the umbrella of “science” – albeit social science.
For example, practically all the examples from anthropology, psychology and epidemiology result from the process of rigorous scientific verification.

Talk about thinking in black or white. All science uses a combination of observation and statistical analysis. Einstein’s thought experiments required nothing other than his imagination to create his theories of relativity; experimental data and mathematics were used only to back up the theories. It is no different for any of these other fields.

Quantitative ought to mean about the same thing as evidence-based. And that’s good even in fields such as history. Her real complaint seems to be with the blind use of poor mathematical models. However, I’m not sure that she even understands what it is she is complaining about.

Lots of science with no statistics/math in any reasonably good crime detection story and/or SF novel by the likes of Isaac Asimov. And they have good linear narratives as well!!!

Without a more empiricist approach, social sciences often run the risk of devolving into the same Tower of Babel as religion with conflict between Marxist, Weberian, Ayn-Randian analyses of various historical events. Empirical/Pragmatic (and even statistical) thinking can get you past these ideologically motivated schools of thinking (and perhaps reveal the half-truths underlying the ideologies.)

There may be “other ways of knowing” but how reliable are they? They may yield great insight but how do you test them?

The sciences and humanities overlap, and the era of CP Snow’s “Two Cultures” is ending.

I think this is an interesting topic and worth discussion. But I continue to feel that your analysis of the relevant concerns here, while interesting, is missing the point being emphasized by the author you criticize. Consider your description of history as an example. It seems to me your description is based upon the fallacy of composition—of explaining the whole in terms of a part. While it is true that history depends on making some observations and, in this respect, may be said to be like the sciences you describe, this is not what is distinctive about history as a discipline. Historical inquiry is believed by many people to require a method of interpretation where the investigator creates a “narrative” among the phenomena discovered. This method is unlike anything found in the natural sciences because it involves subjective interpretation. This is usually put by saying that the natural sciences offer causal explanations, but fields like history offer interpretive explanations of a different type.

There is a discussion of this idea here (see especially Section 2 on “interpretivism”).

“This method is unlike anything found in the natural sciences because it involves subjective interpretation. This is usually put by saying that the natural sciences offer causal explanations, but fields like history offer interpretive explanations of a different type”

Yes, but surely not every interpretation is equally likely to be true. This applies even if we admit that there is no one “true” interpretation (i.e. no one true narrative of the American Civil War), but rather a range of likely interpretations.

At some point, there must be ways to falsify claims like “Lincoln was a huge racist who only abolished slavery to keep the Brits from siding with the Confeds”, or “The United States was founded on Christian principles and should rightly be called a theocracy”.

A lot of things in biology are also interpretive – look at proposed explanations for various events in evolutionary history.

The important point is not that we NEVER offer a suggested interpretive explanation but that we are clear when we are doing it. As blitz442 points out, different interpretations will have different likelihoods. Without taking a scientific approach, though, we can never “know” which is more likely – we just have hunches and feelings, which are notoriously biased and unreliable. It may even be that we can never know which is more likely out of two (or more) interpretations but we still need a scientific approach to come to that conclusion with any confidence. Science seeks answers but it doesn’t have to find them to still be science.

I understand what you are saying here, but don’t think you are entering the actual debate yet. The term “interpretation” doesn’t mean “one possible explanation of X” as it does in your examples. It means something rather different in the context of historical explanation. According to the article I linked to above:

“….interpretivists assert that the social world is fundamentally unlike the natural world insofar as the social world is meaningful in a way that the natural world is not,”

they claim “….the aim of social inquiry [is] to make sense of the actions, beliefs, social practices, rituals, value systems, institutions and other elements that comprise the social world. This involves uncovering the intentions and beliefs that inform human action, which in turn requires making sense of the broader social context in which those beliefs, intentions, and actions reside.”

Notice the phrase “making sense of” which refers to the act of understanding human action in terms of beliefs and intentions in some context. This is NOT a form of causal explanation, which is what the natural sciences and biology are predicated upon. If “meanings” are real aspects of people’s behavior, then they need explanation I presume. But the worry is that “quantitative, statistical, causal etc.” accounts don’t really explain this feature of our lives that many are concerned with.

“Notice the phrase “making sense of” which refers to the act of understanding human action in terms of beliefs and intentions in some context. This is NOT a form of causal explanation, which is what the natural sciences and biology are predicated upon.”

If by causal you mean caused by preceding conditions then I would say that what you described here is most definitely “causal.” Beliefs, intentions, the context of place and time are all things that contribute to causing people to perform certain actions.

I think history is a bad example for you because at its root history is the study of human behavior and I can’t think of anything that could benefit more from scientific methodologies than the study of human behavior.

Science is not in competition with the discipline of history, it is just a very powerful tool that, when used well, will only enhance the discipline. I think all this angst from certain quarters of the humanities may largely be from people that were brought up in their field before scientific methodologies were used in their field, and they are uncomfortable with it largely because of an erroneous understanding of science and its underlying methodologies.

With all due respect, you are not really addressing the issue I think. I don’t think it’s helpful to try to explain away the concern with history here by suggesting the humanities have “an erroneous understanding of science and its underlying methodologies, etc. etc.” This is exactly parallel to a Marxist who replies to his critics by saying: “I don’t need to take your criticism seriously because you have an erroneous understanding of Marxism which I prefer.” The issue turns on whether sciences can do a good job explaining certain historical phenomena or not.

Nobody disagrees with you that beliefs and intentions contribute to causing people’s actions. What needs to be explained here is the nature of these beliefs and intentions *themselves* which you appeal to. The claim being made by the interpretivists is that the natural sciences cannot explain the existence of “meaningful beliefs and intentions” in quantifiable, causal terms because these are subjective features of the social world. Just to give an example to express the general idea (without filling this out much): Suppose a scientist tells me the proximate “causal antecedents” that led to Martin Luther King’s assassination. He still wouldn’t have explained why this sequence of physical events mattered to our society, was immoral, created social unrest, was racist, changed America’s self-consciousness, etc. These phenomena are not really explained by listing sequences of physical events or giving mathematical formulas. But this is something the historian is interested in explaining–the meaning of these social events in our lives.

Suppose a scientist tells me the proximate “causal antecedents” that led to Martin Luther King’s assassination. He still wouldn’t have explained why this sequence of physical events mattered to our society, was immoral, created social unrest, was racist, changed America’s self-consciousness, etc.

What makes you think that the scientist’s explanation wouldn’t encompass all of those things just as much as the historian’s? They are properties of complex human interactions, and as such are just as much within the remit of science as anything else. Why wouldn’t they be?

These phenomena are not really explained by listing sequences of physical events or giving mathematical formulas.

You have too narrow a view of science. When a scientist talks about, for example, hurricanes, they don’t just give lists of molecule-level physical events, they also talk about the high-level phenomena that the aggregation of those events give rise to.

Thanks for this reply since now we are getting to the issue I feel. You ask: “What makes you think that the scientist’s explanation wouldn’t encompass all of those things just as much as the historian’s?” The answer was in my previous post: “….because these are subjective features of the social world” and the natural sciences only address objective phenomena by objective methods. The interpretivist is claiming that historical explanation includes understanding agents’ perspectives, values, reasons, beliefs, intentions, etc. that shape the social world and these are subjective phenomena. You are quite correct that they are properties of complex human interactions. But the claim is that they are meaningful and subjective properties of complex human interactions, and such states are not explained by causal, statistical, etc. methods. (This point does not depend on focusing on micro-level features of physical processes.)

The answer was in my previous post: “….because these are subjective features of the social world” and the natural sciences only address objective phenomena by objective methods. The interpretivist is claiming that historical explanation includes understanding agents’ perspectives, values, reasons, beliefs, intentions, etc. that shape the social world and these are subjective phenomena.

I don’t accept that there is any fundamental distinction here. For example, suppose we study other mammals, studying their complex social interactions, their politics, their jockeying to become an alpha male or depose an alpha male, their giving of alarm calls to warn their group of predators, et cetera. All of this depends on agents’ “perspectives, values, reasons, beliefs, intentions”, and all of this is quite properly science and accepted as such. And so it is when the mammal being studied is the human.

Okay, so your argument is, “this human cognitive/behavioral phenomenon is too complicated to be understood by employing scientific methodologies.” That is an assertion, and it is not well supported. I assert that you are wrong.

“This is exactly parallel to a Marxist who replies to his critics by saying: “I don’t need to take your criticism seriously because you have an erroneous understanding of Marxism which I prefer.””

Depending on the criticism and the accuracy of the critic’s understanding of Marxism, the Marxist may be absolutely right. So what?

“The issue turns on whether sciences can do a good job explaining certain historical phenomena or not.”

I addressed this directly. Apparently you don’t agree. Fine, but not so nice to say that I did not address the issue you presented when in fact I did.

“But this is something the historian is interested in explaining–the meaning of these social events in our lives.”

With all due respect, you are making exactly the same mistake that people who claim that science can’t be used to determine moral values make. Science can be used to better understand all aspects of the issue so that we can make informed value judgments. “The meaning of these social events in our lives” is a value judgment.

I was not trying to not be nice. I was trying to say that what you were referring to in your example about “caused by preceding conditions” was not exactly the issue being raised. Knowing whether X is caused by preceding conditions is not the same point as explaining what X is, and my suggestion was that the issue concerns the latter.

In any case, I think I’ve said all I can and will stop. I will just note that “meanings” are not value judgments. We are both reading the words on the screen in front of us and understanding the meanings of the sentences there. This does not involve making value judgments.

I see no reason why you cannot model beliefs, intentions and “meanings” if you wanted to. (Asimov’s psychohistory comes to mind here!) If belief A changes the likelihood of behaviour B then it IS part of the causal explanation. If not, it is irrelevant speculation and “fluff”. Likewise, the social environment can be a factor just as the genetic and biotic environment can be a factor in trying to tease apart causal genetic relationships.

As far as I can see, anyone making this kind of argument does not appreciate how complicated biology is. Just because the causal explanation is a complex one, it does not mean that there is a non-scientific short-cut to some alternative “truth” about it – it just means that we are a long, long way from achieving full knowledge about the chain of causality. That doesn’t make imperfect knowledge or models useless – otherwise we would never do any science.

Everything is an imperfect model and we are always “wrong” but our models get progressively better and we get progressively less wrong. The only way to know this is through scientific approaches, though, and that must apply to human behaviour just as much as anything else. Saying that something “is not science and should not be treated as such” is just another way to say “this is too hard, I give up, let’s just embrace speculation and stop there”. (Or, as in (most?) art, to say that there is no objective “truth” or knowledge to be found.)

The issue raised above does not concern problems with complexity or modelling of causal factors. It is not claiming that the social world is more complex than biology and hence (sic) is special or something. The claim is that the existence of “meaningful beliefs and intentions” is not well explained in quantifiable or causal terms, and that the modelling you speak of does not do justice to the social phenomena historians are interested in explaining. I don’t know why you insist on interpreting this concern as “fluff”. It seems like a reasonable concern to raise and one I’ve heard many historians express myself.

Apologies. Perhaps “fluff” is the wrong word. “Speculation”. It is interesting to speculate – it can inform public opinion and challenge cherished beliefs. But speculation is not knowledge. Speculation does not actually explain anything – it just speculates about possible explanation. (We do it in biology too but we are – or, at least, should be – always clear that it is speculation until the relevant experiments have been done.)

That is fine if that is what you want to do. Perhaps that is Maria’s point – psychology does not want to explain anything, it just wants to speculate. I don’t think too many psychologists would be happy about that.

The issue that I (and others, I think,) have is with the apparent claim that this speculation is somehow a different route to knowledge, or a different kind of knowledge. It isn’t. (Or, at least, I am as yet far from being convinced that it is.) Worse, the idea that speculative explanations for events should not be subject to the same level of skepticism and objective criticism as “hard sciences” is just dangerous and promotes woolly (or fluffy!) thinking.

If, on the other hand, Maria’s point is simply that speculation can be useful and not everything has to have a full explanation, I think most people would agree. That is not the tone that I get, however.

Just try plotting the frequency of the buzzword “narrative” by year, publication / institution and country, and maybe you’ll see a pattern.

When I was teaching computing classes for archaeologists and historians, I used to challenge students to formalise their work as logical frameworks using the simple methodology of decision flowcharts. Problems becoming explicit through this simple analysis led them to get interested in multi-valued logic and set theory; subjects they wouldn’t have touched with a bargepole otherwise. Even if they didn’t become proficient in the handling of formal languages, those who stayed the course got a sense of logical criteria and tests applicable to their statements about history. Too often, “narrative” is just a fig-leaf for sloppy handiwork in structuring data and hypotheses.

Agreed: I don’t think you will find a historian who opposes increasing the probability of discovering what actually happened, whatever that might entail. A historian’s duty, at least in my opinion, is to reconstruct what probably occurred, not what could or might have happened.

I wonder if Carrier’s books on Bayes’ Theorem and History will give some historians conniptions, as they now have to become proficient in probability theory. Might be how physicists felt when the maths their subject required suddenly went crazy.

“It may be uncharitable of me, but I suspect Konnikova’s trying to tout the humanities and social sciences as “other ways of knowing.” “

I’d agree.

“There is only one way of finding out what is true, and that doesn’t involve revelation or making up stories.”

Yup.

I think she is also attempting to “blow out science’s candle” to make the humanities’ “candle burn brighter”. These sentences suggest this:

“Sometimes, there is no easy approach to studying the intricate vagaries…”

Didja hear that? The humanities involve intricate vagaries! Intricate! You scientists don’t often deal with intricacies, do you? And in the humanities, you can’t just take the “easy approach,” like you can in theoretical physics.

And

“It’s tempting to want things to be nice and neat.”

Yes, scientists are simply misapplying rigorous methodology out of some kind of OCD. Folks in the humanities are just so much more well-adjusted.

I should have added that I’m not trying to perpetuate the distinction she makes between “science” and “humanities.” Science is a method, not a field. The humanities can of course avail themselves of scientific methods.

There really shouldn’t be two cultures, but people have artificially imposed them.

Having read this and some of the discussion about the same post on Sandwalk, I wonder whether she is just getting “Science” and “Reductionism” confused? Science does not have to be Reductionist. There is no validity to an argument that something is “too complex” for science, although it might indeed be “too complex” to get a clear answer given our current constraints. But surely knowing that you don’t know is better than pretending that you have a “different way” to get that knowledge?

I am not sure what you mean by science not having to be reductionist, but I note that when it works it is always reductionist.

I heart Marjanović’s description in the comment thread of Konnikova’s post:

The whole is more than the sum of its parts.

Agreed?

Fine.

The whole is the sum of its parts plus the sum of the interactions between those parts plus the sum of the interactions between these interactions plus the sum of the interactions between those interactions plus…

Agreed?

If so, how about we try to figure out the parts first, then the interactions between the parts, then the interactions between the interactions…

That’s reductionism.

This applies for emergent theories such as chemistry as well as currently fundamental theories such as the Standard Model of particles whereof chemistry is built. It applies at all levels of interconnection in between, and even when cosmology connects back to particle theory making one wonder what is “fundamental” and what is “emergent”.

In short, if science isn’t reductionist in toto, what would be a counterexample?

For complex systems, it is often useful to abstract out the lower levels into a “black box” that converts certain inputs into certain outputs according to a set of rules or a probability distribution etc. You do not need to have a reductionist approach where you work out all the parts and all the interactions BEFORE you try to make useful models of the larger system. Often, an empirical distribution or empirical validation of a model’s predictions is sufficient to be useful. Of course, reduction is the only route to “ultimate causal truth” but not all science needs to be concerned with ultimate causes to be useful or informative—indeed much of it (particularly in biology) is not. Furthermore, certain emergent properties can only be seen by “running” the system and seeing what happens—they cannot be predicted from the reductionist parts and interactions.

In an ideal world, the behaviour of your black box would be built on a reductionist breakdown all the way back to fundamental physics, but we do not live in an ideal world, and if the product is the same, it does not really matter for “higher level” problems. The big example, I guess, would be the whole field of Systems Biology. Now, you can argue about how useful Systems Biology is, but I think you would be hard-pressed to argue that it is (a) necessarily reductionist and/or (b) not science. The same applies to people trying to model climate or understand ecosystems.

Discussing the merits and drawbacks of a systems biology approach would have been a more grounded and informative point on which to launch into informed speculation. That’s the essay I would have liked to have read.

I would disagree. It is just that some emergent properties cannot be predicted from their reductionist parts and need to be modelled at a higher level. I don’t really see this as dualism. (Maybe I am wrong.)

As for Group Selection… I am a “selfish gene” fan. When modelling emergent properties it is always important to remember what they are, and that to fully understand their cause you DO need to work out what is going on at the lower level. I think Group Selectionists don’t realise (or choose to ignore) that they are dealing with an emergent property of kin selection/inclusive fitness.

Humans are poor natural statisticians. Before a theory is stated, it must be shown that the observed facts are not more simply explained by randomness. Suppose 35 people randomly gather in a room. There is an 81.43% chance that two of them will share the same birthday. Would Ms. Konnikova be tempted to propose a more appetizing explanation? I suspect the answer is yes. If so, be prepared to hear a lot of nonsense.
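The birthday arithmetic is easy to check directly. Here is a minimal Python sketch (the function name is mine, purely for illustration), computing the collision probability as one minus the product of the “no shared birthday so far” factors:

```python
from math import prod

def birthday_collision_prob(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming independent, uniformly distributed birthdays."""
    # P(all distinct) = (365/365) * (364/365) * ... * ((365 - n + 1)/365)
    p_distinct = prod((days - k) / days for k in range(n))
    return 1 - p_distinct

print(f"{birthday_collision_prob(35):.4f}")
```

With n = 35 this evaluates to roughly 0.814, in line with the figure quoted above – no “more appetizing explanation” needed.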

I might just add that the opposite also happens – studies in biology are sometimes made to seem more “scientific” by including lots of statistical analyses that are not necessarily relevant. Most journal editors seem to think the same – authors often are forced to include some (usually meaningless) p-values in their papers to make them “more scientific”.

Konnikova appears to be venting, and I wonder if she really believes everything she wrote in that Sci Am essay. It does not speak well for any academic to be so contemptuous of science or its underlying methodologies. I am sure that there is much poorly executed/applied science in the humanities, but you need to be able to distinguish between operator error and the usefulness of the tool in question.

She made many comments that illustrate her poor understanding of science and its underlying methodologies.

“Because here’s the truth: most of these disciplines aren’t quantifiable, scientific, or precise.”

Surely she understands that whether the use of scientific methodologies will be useful depends entirely on what kinds of questions an investigator in any of these disciplines is asking? Even in literature the methods of science can be put to good use, such as analyzing the structure of various forms of poetry from pre-literate oral traditions to classical forms. I suggest that Konnikova lacks imagination.

“They are messy and complicated.”

The methods of science evolved out of humanity’s attempts to understand reality. Reality is very messy and complicated. The methods of scientific inquiry are a direct result of trying to make sense of the complicated messiness of reality, and they excel at it compared to any other methodology.

For example, if you take the base model of phonons in crystals, the longest-wavelength phonon amounts to all atoms oscillating in lockstep – in effect, a piece of crystal sliding across your table if friction worked as a Brownian ratchet. Of course we don’t see these perpetual motion machines, because you have to constrain the model with a condition of conservation of energy. But that is not in the math!

Similarly, I’m currently reading Susskind on multiverses. Susskind et al. have constructed a multiverse field theory analogous to field theories like electromagnetism. But unlike them, its field propagators tell us how a local volume of the multiverse changes its vacuum energies as universes branch off from each other, not how waves propagate. So here the math is used to abstract away from the physics; again, the actual physics is not contained in the math.

Perhaps psychology will one day construct a theory of the mind analogous to evolution. Then a predictive quantification will follow.

As for the rest, I am reminded of something I saw the other day, the use of qualitative research as a skyhook.

“Qualitative research is a method of inquiry employed in many different academic disciplines, traditionally in the social sciences, but also in market research and further contexts.[1] Qualitative researchers aim to gather an in-depth understanding of human behavior and the reasons that govern such behavior. The qualitative method investigates the why and how of decision making, not just what, where, when. Hence, smaller but focused samples are more often needed than large samples.”

I have never heard of it before, but naively it looks to me like case studies. Which is fine by me; in astrobiology, Earth is the type case we have access to. But we have to keep in mind that empirical learning without theory, contingent on the sample, is not equivalent to empirical science, which makes generalizable predictions from a sample.

For one, she doesn’t understand predictivity. She takes one appallingly bad example, the attempt to see any and all patterns in historical trends (“cliodynamics”), and accuses what seems to be a textbook case of pseudoscientific data-fishing of being – pseudoscience.

For another, she doesn’t understand the use of mathematics in, say, physics. It is a scaffold, but not a skeleton, for the body of work, because you can’t base science on math alone.

I think quantitative vs. qualitative is a false dichotomy. It is not one or the other. I am not sure it is even really possible to accurately describe real phenomena in a useful way without using both, even if you tried.

Funny, I just finished a class in “Action Research for Educators” in which I found out more about qualitative data. But basically I found that this type of data did little without quantitative data to back it up.
I really didn’t like the postmodernist bent of the class at all.

By the way, action research is meant to be research done by the educator for the educator him/herself.

First, don’t waste any more time reading the adolescent rant of Ms. Konnikova. Read instead the paper by Mac Carron & Kenna on “Universal Properties of Mythological Networks” she’s quoting as a negative example (arXiv:1205.4324v2). It’s rather well done, as these things go. [Disclaimer: I’m thrilled, because it’s the first wholly independent confirmation I’m getting to see of a case I’ve been making for years, from the other side of the equation.]

Second, if you read German, try and locate a copy of Edzard Visser’s “Homers Katalog der Schiffe” at a public library near you. Visser’s brilliant work is thoroughly scholarly, without an iota of maths, yet his analysis of place names, power spheres, dialectal and geographical locations can be easily translated into a neat formal model that allows geo-locational and statistical analysis. The conclusion dovetails nicely with the more recent quantitative work by Mac Carron & Kenna. It also yields abundant material for further analysis.

I’ve spent more than three decades staving off the kind of fluff propagated by Ms. Konnikova. My blunt retort is always a paraphrase of Rutherford:
“All science is either mathematics or stamp collecting.”

The fun part is that people of the mindset exemplified by Ms. Konnikova not only confuse mathematics with “numbers”; they also never fail to get the “stamp collecting” part wrong. Rutherford may have dissed it, I don’t. The “stamp collecting” metaphor stands for the slow, patient work of collecting, organising and structuring observational data. These are the material substrate of scientific work. Once sufficient material is gathered and scientific methods are deployed, scientific criteria apply, whether the practitioners label themselves scientists or not. The distinction between “sciences” and “humanities” is obsolete and should be dropped.

Not impressed. Their “test” for a power law is nothing of the sort – a common failure of these kinds of works; see Shalizi et al., who developed a proper test and showed that the presumed “power laws” often were not (but exponential distributions instead).
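The kind of check Shalizi and colleagues advocate can be sketched in a few lines: fit each candidate distribution by maximum likelihood and compare log-likelihoods. This is only a toy version with simulated data (the function names and the simulated sample are mine, for illustration; the full procedure also includes goodness-of-fit and significance tests):

```python
import math
import random

def fit_powerlaw_mle(xs, xmin):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin))."""
    tail = [x for x in xs if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

def loglik_powerlaw(xs, xmin, alpha):
    """Log-likelihood under f(x) = ((alpha - 1)/xmin) * (x/xmin)**(-alpha)."""
    return sum(math.log((alpha - 1) / xmin) - alpha * math.log(x / xmin)
               for x in xs if x >= xmin)

def loglik_exponential(xs, xmin):
    """Log-likelihood under an exponential shifted to start at xmin (rate by MLE)."""
    tail = [x - xmin for x in xs if x >= xmin]
    lam = len(tail) / sum(tail)
    return sum(math.log(lam) - lam * t for t in tail)

# Simulated sample from a true power law, via inverse-transform sampling.
random.seed(42)
xmin, alpha_true = 1.0, 2.5
data = [xmin * (1 - random.random()) ** (-1 / (alpha_true - 1))
        for _ in range(2000)]

alpha_hat = fit_powerlaw_mle(data, xmin)
ll_pl = loglik_powerlaw(data, xmin, alpha_hat)
ll_exp = loglik_exponential(data, xmin)
print(alpha_hat, ll_pl > ll_exp)
```

On data actually drawn from a power law, the power-law fit wins the comparison; on many real data sets it goes the other way, which is precisely the commenter’s point.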

You got “the other side of the equation” part right, even though it was an excess of indulging in math. =D

Also, what would be the idea of showing a behavior analogous to historical events? That doesn’t make them historical. Here the ‘mythery’ deepens.

the presumed “power laws” often were not (but exponential distributions instead)

Sigh.
Hence my qualification, “rather well done, as these things go“. Been trying to illustrate the difference between power laws and stretched exponentials in my narrow field (quantitative archaeology) since 1997, without much success. Not that many people in the field would care to know about either, anyway.

what would be the idea of showing a behavior analogous to historical events?

Without reading too much into it, precisely that.

Sticking to my example from the Iliad: the distribution pattern in the epic is mapping a real historical geography with, as one example of a very rough metric, the same rank-size metric observed in the real historical geography of the region, plus archaeological evidence showing similar distributions from later periods. This differs from, say, an invented Tolkienian Middle Earth geography. Assign a Bayesian value to that if you wish.
This conveys some plausibility to the relative sizes of the Homeric naval contingents. Of course, plausibility is not historicity; but if invention replaces history, it is the invention of someone informed about historically plausible contingent sizes.
In this case, epic ≠ mythology ≠ invention.
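The rank-size comparison invoked above can be made concrete in a few lines of Python. The data and function here are hypothetical, purely to illustrate the metric: sort the sizes, regress log(size) on log(rank), and compare the resulting slopes between, say, the epic’s contingents and the region’s settlement record.

```python
import math

def rank_size_slope(sizes):
    """Least-squares slope of log(size) vs log(rank), sizes sorted descending.
    A slope near -1 is the classic Zipfian rank-size pattern."""
    sizes = sorted(sizes, reverse=True)
    xs = [math.log(r) for r in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical contingent sizes following an exact Zipf pattern (size = 120/rank)
zipfian = [120 / r for r in range(1, 11)]
print(round(rank_size_slope(zipfian), 3))  # -1.0
```

Two lists with closely matching slopes do not prove a historical link, of course, but as a “very rough metric” it is exactly the sort of quantity one could feed into the Bayesian assessment the comment mentions.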

The organizing of the collected data is also mathematics. It’s just the collection itself that’s less mathematical; and even then, the usual first step is to translate the collected experience into something that is a number in disguise.

Correct.
I didn’t want to press the point further, but many workers in the “humanities” would be stunned to realise that they’re performing mathematical operations day in, day out, much like it was a revelation to Molière’s Monsieur Jourdain to learn that he was speaking in prose.

“(This does not apply, of course, to things like literature, where the notion of “truth” is itself slippery.)”

That is only because we do not yet really have the tools to engage with literature scientifically. We are beginning to, though… and I predict some interesting improvements in literature over the next 1,000 years.

The point is that the methods of science do not absolutely require statistics or mathematics. Those methods rely on replicated observation, eliminating alternative hypotheses, generating new and testable hypotheses, and constant doubt.

I’d disagree that none of these require mathematics. They may use methods that do not avail of mathematics in a direct and obvious way; however, determining the degree of replication implicitly involves comparative relationships, and eliminating alternative hypotheses is done by a function mapping the hypotheses’ descriptions to a linear (or at least partial) ordering based on a measure of how well they describe the evidence, such that hypothesis A is “better than” hypothesis B. In so far as it is required that a hypothesis have the potential to be communicated, it relies on formalizable language; and doubt involves the assignment of probabilities. The mathematics is still hiding, entwined in the foundations, in a deep disguise that prevents quick recognition.

The methods of science absolutely require mathematics; they merely do not require that the mathematics be done with absolutely precise rigor all the time and at every stage.

JAC: As the social sciences and humanities mature, they come to realize that their criteria for finding truth are the same as those used in biology and physics.

That’s the reasonable explanation as to why we’re seeing all this math and statistics in the social sciences. Notice the progression from the old-school naturalists to modern-day systematists. These disciplines are simply becoming more robust and objective. It’s apparent Konnikova is ignoring any historical perspective.

This “old school naturalist” wishes to remind Biologists not to throw the baby out with the bathwater. Geology did that twenty years ago and the mistake is only now being noticed and (to some degree) fixed.

As for the paper, it seems like a long-winded way of saying “But I majored in Psych because I hate math, so I shouldn’t have to do it!”

Everybody should take a statistics class, because it’s the one kind of math that we’re all bombarded with every day, yet even many science majors never have to take one.

I also dare to maintain that history combined with archeology produces harder facts than astronomy and evolutionary biology ever can produce.
For a few months now there has been simply no doubt that Julius Caesar defeated the Belgians near Thuin.
And they are damn close to establishing the historicity of the Kingdom of David, closer than JAC can ever dream of completing the family tree of the primates.
Alas, my sources are in Dutch.

I also dare to maintain that history combined with archeology produces harder facts than astronomy and evolutionary biology ever can produce.

Harder facts? Now that’s a tall claim. I hae ma doots.

As someone professionally biased towards showing that history and archaeology can produce hard facts at all, I think we should bear in mind what a thin lode we are mining.
If only for the paucity of data and direct observables in the archaeo-historical disciplines, compared with the wealth of the geological record, let alone the immensity of astrophysical data. (And that’s not considering the dismal poverty of theoretical frameworks, in the true sense of the term, in the historical disciplines.)

Historicity: if we define it as “the confirmed factuality of a set of identifiable non-trivial past events, n≥1, inferred from or referred to by historical sources”, establishing it is a big deal for a historian. It’s just one observable datapoint for any other empirical scientist.
I know from painful experience that establishing historicity (like making a speech on economics, in the immortal words of Lyndon B. Johnson to J.K. Galbraith), is “a lot like pissing down your leg. It seems hot to you, but it never does to anyone else.”

Is Caesar defeating the Belgians at Thuin and not Huy or some other candidate spot trivial?
The issue I discuss is not the quantity of data, but the quality.
In the end all physics is based on induction by simple enumeration (or whatever it’s called; I’m Dutch). The last time you observed it, the universe was expanding. The next time you observe it, it might contract due to some process we do not yet know or understand. We can’t know for sure.
Mind you, this happens. See superconductivity. Several Nobel Prizes have been awarded for research on this subject: in 1962 and 1972 for theoretical models. Both claimed that high-temperature superconductivity was impossible. Until 1986 all observations confirmed this. Then Bednorz and Müller published their research, which showed experimentally that it was possible after all, and they received the prize within one year – essentially for refuting something physicists thought they knew.
Another great example is Hawking’s A brief history of time, where he first argues that black holes exist and in the next chapter that they are not that black at all.
Btw, I am a teacher of maths and physics.

The age of an object dug up by an archeologist is measurable.
Recently Israeli archeologists have dug up some grain. It will decide by which Hebrew king the palace where it was found was built. As such it will decide to what extent some parts of the OT are historically reliable.
That science is as hard – based on measurement – as testing Conservation of Energy.
Note: energy is not directly measurable, only indirectly.
Similar for the Milgram Experiment.

Objection, Your Honour.
Dendrochronology aside, things are not so simple.
Radiometric age determination of plant remains is one thing. Establishing an incontrovertible link between the directly datable object, say a handful of charred grains, and the context it is supposed to be dating is much harder. It involves a genuinely archaeological, often tenuous chain of reasoning and evidence weighting.

What most outsiders don’t realise is that archaeologists tend to be rather good at relative chronology by intrinsically archaeological means, not unlike those used by geologists and paleontologists, e.g., stratigraphy and the evolution of typological series. A few anchor points in time have proved sufficient for reasonable guesses about the absolute age (nowadays formalisable as Bayesian inference). A series of artefacts datable by conventional archaeological methods can carry a great relative weight of evidence.

So the accuracy of a radiometric two-sigma date with a spread of +/- 100 years for a find from a recent or historical period can be of very reduced use, if conventional comparatistic methods can estimate the relative age of an artefact within +/- 25 years. This is a problem if one is trying to date an architectural structure, say the ruins of a palace, and the pet hypothesis is that the palace was built in the mid-10th century BCE, according to an indirect historical source. Radiometric dating will only reveal that the charred remains originate within a 95% confidence interval from the early 11th to the late 9th century BCE. This tells you that you are not completely off the wall, but it proves nothing yet. (I’m not entering into more advanced methods like pooled series of radiometric measurements, wiggle matching on the calibration curve, etc. These can effectively help narrow down the dating span.)
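The narrowing described here – a broad radiometric interval combined with a tighter typological estimate – is exactly what a Bayesian treatment formalizes. A toy grid computation, with hypothetical numbers chosen only to mirror the palace example:

```python
import math

def posterior_date(years, prior, sigma_c14, mu_c14):
    """Grid Bayes: typological prior times a Gaussian radiometric likelihood."""
    like = [math.exp(-0.5 * ((y - mu_c14) / sigma_c14) ** 2) for y in years]
    post = [p * l for p, l in zip(prior, like)]
    z = sum(post)
    return [p / z for p in post]

# Hypothetical setup: calendar years 1100-800 BCE on a 1-year grid.
years = list(range(-1100, -799))  # negative = BCE
# Radiometric date: 950 BCE +/- 50 years (one sigma), i.e. a wide interval.
mu, sigma = -950, 50
# Typological prior: the artefact series says 975-925 BCE, flat within, zero outside.
prior = [1.0 if -975 <= y <= -925 else 0.0 for y in years]

post = posterior_date(years, prior, sigma, mu)
mean = sum(y * p for y, p in zip(years, post))
sd = math.sqrt(sum((y - mean) ** 2 * p for y, p in zip(years, post)))
print(round(mean), round(sd, 1))  # posterior is far tighter than +/- 50
```

The posterior spread lands well under the +/- 50-year radiometric one, which is the sense in which the conventional comparatistic evidence carries most of the weight here.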

With luck, “hard science” — physics — may tell you something, but much of the circumstantial evidence, the entire chain of reasoning and all of the validation process will be based on “stamp collecting”.

I also don’t understand why *everything* has to be called a “science”. If you’re analyzing the lead content of a treasure owned by some historical figure, fine. You’re USING chemistry as part of your study. But why then is history (or literature, or art) suddenly a science?

I also don’t understand why *everything* has to be called a “science”. … why then is history (or literature, or art) suddenly a science?

The substance of the claim is that ultimately the same rules of evidence and reason apply in all these fields, and there are not “other ways of knowing” that use fundamentally different ideas of what is correct, that are incompatible with science.

What you then call this unified sphere of knowledge is mere semantics (but since “science” is just Latin for “knowledge” it’s as good a word as any).

I agree it’s semantics. But maybe if these humanities etc want to maintain any credibility they’ll either describe their studies as multidisciplinary or will say they’re “borrowing” from science rather than hijacking the title.

Why? If the humanities use basically the same methods (namely formulating hypotheses and testing them empirically) and thus add to human knowledge, they are science. Period.
As a teacher of maths and physics I enjoy reading books on history. What historians do is no less strict than what I have learned.

If the study of human history generates hypotheses that can be tested empirically, then yes it is a science.

It’s hard to see how this is not the case, as human history is based on facts of the world and is ultimately constrained by how the Universe actually operates. There also must be a finite number of historical narratives or stories that correspond well to the facts.

If modern historians are cranking out speculations that are not vulnerable in some way to falsification, then how much are they really contributing to the advancement of historical knowledge?

- The “narrative” that the Founding Fathers intended the United States to be a Christian theocracy can be tested empirically.

- The speculation that FDR knew about the Japanese attack on Pearl Harbor and allowed it to happen can be tested empirically.

- The main causes of the Great Depression can be tested empirically.

- The speculation that lead poisoning of Roman leadership was a significant cause of the fall of the Roman Empire can be tested empirically.

- The assertion that the British Empire ultimately was an economic loser can be tested empirically.

The main point that you need to address, though, is whether an assertion is worth much in helping us understand history if it corresponds so little with the facts of the world that it cannot (at least in principle) be falsified. It really is up to historians to make their speculations and assertions rigorous enough to be vulnerable to falsification as much as possible.

Although the author laments the increasing use of “fancy statistics”, the example of the “Truth Likelihood” of stories like Beowulf are nothing but pseudoscience. I agree with the author that people should be happy with calling their field of study an art; I would go further and say that it is also their duty to weed out the pseudoscience. I disagree that psychology is not a science – it should be, but unfortunately it is still fettered by bullshit just as medical science is still loaded with bullshit (though that has improved dramatically in the past 90 years).

I seriously can’t understand the sentiment: “Lots of fields of study are usually wishy-washy, and now some people are discovering much more rigorous methods, but we have to fight to keep them wishy-washy.” I find it especially frustrating as a cognitive psychologist who uses mathematical models. We have learned so much from these models that we could have never discovered from “qualitative methods” (I envision asking people to give really detailed descriptions of what they feel like is going on in their head). A notice to all the story tellers: those new methods you hate are called progress. If you are not coming along, you should probably get off the tracks.

That’s the impression I had, but it’s not clear that it is the author’s intention. I think the author is against the inappropriate use of statistics etc. as a justification for something. I also believe the author is also confused and believes that in any particular field such as psychology there may be room for untestable hypotheses as well as testable hypotheses – perhaps this is a version of NOMA in the psychological community? That’s my guess anyway, based on the knowledge of warring camps in psychology – the folks who work to evolve the subject into a more rigorous science vs. the wishy-washy acolytes of Freud and company.

Sigh. Now those who disagree with you and clearly state why are “too emotional”.

You were given clear responses to your queries, by me and others. You have repeatedly evaded several direct queries. And yes, accusing another poster of blindly following whatever JAC says is a strawman argument, because apparently you have no solid rebuttal to the actual argument that was put to you.

I think that you are trolling at this point and I’m not sure that further conversation is fruitful. I can be proven wrong if you would care to tackle any of the substantive points that I or other posters raised.

When you lied and misrepresented your interlocutor’s position as “whatever Jerry says”, that was trolling. You knew exactly what you were doing there.

“They said reading a document was the method. I pointed out that was not a known scientific method.”

Documents are data. Historical hypotheses, such as “FDR knew about the attack on Pearl Harbor”, can be supported by the data or disconfirmed by the data. How is analyzing data to see whether it supports an assertion about what is true not a scientific method?

With regard to GGS, Diamond asserted that the reason that certain societies developed advanced civilization and others did not was due to uneven access to plants and large mammals that could be domesticated. This can be tested and, more importantly, is falsifiable.

I ask again, if Diamond was not doing history using scientific methods, then what in your estimation was he doing?

Nope. I’d thank you to stop putting words in my mouth. I plainly asked the commenter (whoever they were, and whoever you are) IF they were saying that Dr. Coyne was automatically correct.

In any event, at least you’re beginning to address my questions. Yes, data is in certain documents. Heck, in culinary school you could argue that recipes are nothing BUT data. Does that mean cooking should be a science class too?

Once again, and for the fifth time, I ask you: to which methods do you refer?

Thanks very much for this post – I tackled the awful article a few days ago myself for the exact same reasons, but you did a far better job. As an undergraduate of English (transferring to cognitive linguistics as soon as possible), I can say that it is people like Konnikova that keep English faculties downright boring and useless. In my time there, I have become increasingly disillusioned to the point of hatred with the subject, and just as I’m about to leave, I start to see some actual science trickle into the lectures as late as 2012, but these are seen as quite the novelty.

I’ve always been very positive about psychology developing as a genuine science (unlike the bullshit that Freud and many others promulgated). People have established patterns of behavior in humans and even when there is no clear answer to the question “why”, at least the phenomena have been established as fact and are indeed repeatable in tests. Many tests for psychological problems are not chemical tests, they are tests which are carefully designed to identify problems despite the self-reporting and the myriad of strange behavior which is perfectly normal in humans. As work progresses in the neurosciences, psychologists will have more tools to work with. I’m betting that psychologists come up with interesting and verifiable ideas about morality (so :P~ to the philosophers out there – your philosophizing isn’t creating any knowledge of morality).

I actually understand where (I think,) Konnikova is coming from on this topic, although I think she takes an interesting insight and goes in a different direction with it than I would have. Konnikova seems to conclude that the answer is to shove science out the door rather than to look at how a scientific framework needs to be adapted and properly applied in various scenarios. I agree that physics and psychology may be vastly different in that framework, though.

The example this brings to mind for me, in the field of psychology, is working with autistic children. I often see a clash between therapists with great form and those with great content. Therapists with great form are often fond of selecting an extraordinarily measurable goal, like having a child say “blue” when a blue card is held up. Data is collected. Charts are made. Percentages, pre- and post-teaching accuracy, and rate of acquisition are recorded with care. Usually a big fuss is made over the superiority of being “data driven”. Three months later there is, from a data-driven perspective, an absolutely beautiful example of progress to behold – on paper. In reality, the kid can say “blue” when you hold up one very specific blue card.

On the other hand, I also frequently see therapists with seemingly great content, with an intuitive feel for what to teach and how to teach it. They may do a lot of natural environment teaching based on their therapeutic intuition, and get results that seem to be much more meaningful. After the three months, the child is more communicative, affectionate, interested in people, and so on. There’s often a tendency, however, for these therapists to shrug their shoulders and say you just have to have “a feel for it”, and besides, how can you collect data on a child becoming more loving?

So yeah, I feel her pain somewhat and I know how frustrating it can be when there’s an assumption in a field like psychology that the more data driven approach is automatically superior, no questions asked. I’ve certainly come up against that as well. That said, again, I think the answer is not to shrug one’s shoulders and give up. I conclude from this that we need to take a good hard look at how to develop an appropriate framework, in various fields, in handling subjective value judgements and interpretations.

This is the kind of post-modernist bullshit that made me lose heart for my Ph.D. The post-modernists took over the editorial boards of all the journals I’d have to publish in to get tenure. Sure, my colleagues in *my* specialty respect the niche journals we’ve been relegated to, but do the non-specialists who would be deciding tenure in my future career really understand how batshit crazy my field became in the 1990s?

I absolutely believe in proof and evidence and multiple attestations before making declarations in my historical field. That’s the scientific method, even if it’s not science. The post-modernists threw all that out because it would necessarily be tainted by the observer’s worldview (they misquote Heisenberg when they say that), and by culture…

….so if I pull something out of my ass and hold it up for examination at a plenary session of the only association your tenure committee would be impressed by, DON’T TELL ME IT STINKS!

Post-modernism stinks but they have my field by the balls so I’m doing something else.

Amelie
I think you are missing the point. When it is suggested that historians do/should use scientific methods, this does not mean using test tubes or electrodes, or that it should be carried out by researchers in white lab coats (although these things may add useful evidence in some cases). Rather it is about using a particular way of thinking: generating hypotheses about the particular event/period you are interested in and seeking the evidence that enables you to test the validity of your hypothesis. The evidence can be in whatever form is relevant, whether that be documents, corpses in the ground, or whatever. This is the sleuthing that Jerry referred to.

Actually, Jonathan, it is you who are missing the point. And it is obvious you never read my original comment. I made the point that Konnikova probably defines “hard sciences” as those that use specific methods.

History can obviously use all sorts of methods. So can archaeology and botany, for cripes sake. Many fields are multidisciplinary.

But those on this comment thread who say history is a type of science are very confused. If you received your doctorate in History, would your certificate say M.Sc.?

He made the point: “But where she goes wrong is in concluding two things: that all “hard scientific” study must involve math or statistics, and that there are other “nonscientific” ways of knowing involved in what she calls the “humanities.””

It is not about whether or not historians use multi-disciplinary approaches or if they use statistical techniques but that they can only advance their field by holding up their hypotheses (ideas) to test against observable facts. These facts may be obtained using the traditional methods of historians or using techniques borrowed from other disciplines but that does not change the fundamental approach.

What you call the study or any degree you might earn from your studies is neither here nor there.

Why would you embarrass yourself with the obvious strawman at #1? Just because someone supports another person’s argument does not mean that they think that person is infallible. Also, Wallace didn’t just parrot Jerry’s argument; he’s attempted to explain to you the logic behind it.

Should we inquire of you whether you think that Konnikova is always right?

Regarding #2, a lot of the nomenclature around how we label scholars and the departments where they hang out is fraught with convention and historical baggage. Jerry has a PhD, but he does not run around calling himself a philosopher. That PhD label is just an artifact of a time when people who attempted to understand the natural world rationally were called “natural philosophers”.

Under a broad definition of science, such as “science is the use of reason, empirical observation, doubt, and testing as a way of acquiring knowledge”, a good historian is in fact going about his or her discipline not much differently than a good biologist or physicist. But all this means is that the artificial boundary between much of the “Humanities” and the “Sciences” should probably dissolve. No more arguments about whether psychology belongs in the science department or the humanities department: a psychologist will simply be viewed as a specialist ethologist who focuses on one animal.

Is everything a science? No. Creative, subjective disciplines and other areas that are not really offering any rebuttable, objective, falsifiable knowledge claims should probably not be considered sciences. And I’m not sure that some areas of study that include the name science, such as “Library Science”, fit the definition either.

When Jared Diamond wrote “Guns, Germs, and Steel”, was he operating as a humanities professor or a scientist? What about Steven Pinker’s latest book on human violence? Would a proper historian not have regarded these as legitimate areas for historical study, or would the historian have used entirely different, non-scientific, humanities-department-sanctioned methods to arrive at their conclusions?

If you are fretting over whether to shelve GGS in the history section or the science section of the library, you are missing the point.

I tend to question the knowledge of someone who thinks that the use of mathematics entails being quantitative. There are many qualitative mathematical theories, like Euclidean geometry, set theory, and so on. I wonder what the author thinks of those? Moreover, there is an exact philosophy movement (there is even a Society for Exact Philosophy) that applies formal methods to philosophy, some of them qualitative (even elementary formal logic is such!).

I think the point she’s making, more than anything, is the same one you’re making. Observation is a valid part of science, and there’s a notable tendency to ignore that when a person thinks they “understand” what science is. Notable enough for you to point it out yourself.

The fact that she missed it is itself evidence of the merit of what she was saying; don’t get hung up on the fact that she didn’t have the right words to say it.

The scientific process should be cautious about starting with “philosophical givens”, since doing so can foreclose a better explanation. Historically, science has often begun with “givens” pre-determined by those in charge. On occasion, science begins with an idea and proceeds to pound the circle into the square and declare that it fits, when it is in the observation of the splinters that an ineffectual outcome can be seen. Hypothesis testing and the scientific process have their own historical roots worth examining. Thank you, Konnikova, for reminding us that in science “everything is related” and that statistical analysis must entertain alternatives that encompass the big picture. “Novel ideas are for science what Houdini was for straitjackets” (S. J. Gould). When science meets a roadblock, it needs to search for better alternatives, not a bigger hammer.

“Don’t worry about someone stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.” – Howard H. Aiken, American computer pioneer (1900-1973)
