Marie-Claire Shanahan has a couple of great posts up about the science of science education, and research on what it takes to actually change someone’s mind. They’re great posts, and hold the promise of many more insightful looks at the skills and approaches best suited to increasing science literacy.

For instance, a survey by Nature Education (gated, here’s a report in USA Today) in 2010 found that while college-level science educators generally think science education is mediocre or poor, 85% of them think they personally have a positive or strongly positive effect. It is, first of all, essentially impossible for 85% of professors to have a positive effect and for the college education system to be crappy. This misperception of competence (the Dunning-Kruger effect) is an obstacle to getting help to the people who most need help with their teaching. The survey also found that good or bad teaching had no realistic impact on hiring and tenure decisions at major universities, which means there’s no incentive for self-improvement among the 15% who know they’re not having a positive effect, let alone the unaware.

This problem is even greater in the realm of skeptical outreach, where the outreach is informal by nature (so there’s no end-of-course exam), and largely conducted by people who are not trained teachers or trained scientists or trained science communicators (and who can be expected to have even greater Dunning-Kruger effects). At a TAM! panel on communicating skepticism, Phil Plait noted that there are no well-established metrics for skeptical outreach, so it’s hard to really know what works and what doesn’t work, let alone to promulgate those effective techniques to the folks in the field.

I was primed for that conversation because, at Netroots Nation a few weeks earlier, I’d attended several workshops in which political activists talked about using controlled experiments to see what tactics and strategies work best. Whether it was testing language on doorhangers to see how it changed voter turnout, or comparing details of the wording of mass emails to see what maximized clickthroughs and donations, there was a lot of focus on testability and use of scientific protocols. If political hacks can use scientific methods to evaluate and improve their outreach efforts, surely scientists and other skeptics dedicated to promoting science in society could do the same.
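The kind of controlled comparison those workshops described is cheap to evaluate. As a rough sketch (all counts below are invented for illustration), a standard two-proportion z-test can tell you whether one email wording really outperformed another on clickthroughs or whether the gap is plausibly noise:

```python
# Hypothetical sketch of A/B testing two email wordings by clickthrough
# rate, in the spirit of the Netroots Nation workshops. Counts are made up.
from math import erf, sqrt

def two_proportion_z(clicks_a, sent_a, clicks_b, sent_b):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal tail, via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 120 clicks out of 5,000 emails; variant B: 90 out of 5,000.
z, p = two_proportion_z(clicks_a=120, sent_a=5000, clicks_b=90, sent_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below a conventional threshold suggests the wording difference mattered; the same template applies to doorhanger-versus-turnout comparisons, with "clicks" swapped for "voters who turned out."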

Immediately after Phil spoke, PZ Myers jumped in (the panel was: Plait, Scott, Myers, Jamy Ian Swiss, and Carol Tavris). PZ disagreed with Plait, saying he is glad we don’t have metrics for skeptical outreach. He said that he doesn’t like when people come to him waving scientific research showing that their technique works and that techniques like PZ’s don’t, because he doesn’t think those papers address the specific situations he’s dealing with, and it’s wrong to say he should change his behavior in response to those studies. Jamy Ian Swiss then emphasized that point, saying “I don’t want what works best,” just what works best for him.

Myers, of course, is a biology professor at a small liberal arts state college (thus with a focus on education), and a prominent science communicator. Swiss is a magician who, the previous day, had been given an award in honor of his service to the skeptical community. These are people who take skepticism seriously, and who take promoting skepticism and science seriously. But the attitude they expressed is as unskeptical as could be.

I’ll grant that there are times when one does simply go with what works for oneself. There’s no absolute metric one can use to pick a favorite baseball team, or a favorite novelist, or indeed a religion. So we can’t go with what’s best; we just go with what works for us. And, while recognizing that there’s no absolute metric, we still look to literary critics, sportswriters, etc., to steer us away from objectively bad decisions about those topics, even if there’s no objectively best choice.

But on empirical matters, I think a skeptic is defined by insisting on clearly delimited metrics of success and failure, and by a refusal to accept “this works for me” as an answer, and a reluctance to countenance special pleading or other logical fallacies. And special pleading is what PZ’s comments were: he was saying that the peer reviewed research on communication in general, or science communication in particular, wasn’t germane to his specific situation, and therefore had nothing to say at all.

Which is absurd. If someone came to me hawking a homeopathic treatment for male pattern baldness, and dismissed my citations of homeopathy refutations by saying those didn’t test his specific concoction or this specific application of it, I wouldn’t celebrate his sophistry. I’d say that homeopathy’s uniform ineffectiveness and lack of theoretical foundation mean that any claim that homeopathy works must be backed by substantial evidence presented in equally prestigious venues, using clear, well-established, and objective metrics. And if a friend tells me he wants to keep using Chinese herbs even after I show him a paper demonstrating that they work no better than a placebo, I ought not to shrug and accept his claim “I don’t need what works best, just what works for me.” I think Tim Farley is right, that skepticism is about using science to keep people from spending money on stuff that doesn’t work. And what does or doesn’t work has to be based on some clear metric. Skeptics – by definition, I’d say – do not dismiss the utility of metrics!

Evidence matters, and the truth matters, and that’s why skepticism matters. I was more than a bit shocked that people applauded PZ’s and Jamy’s comments, and I even wrote to PZ asking him to clarify. The email exchange generated more smoke than fire, alas, so I throw it open to you, dear readers. Maybe there’s some meaningful distinction I’m missing, or some failure in the analogies above. But if I’m right, if the analogy is legitimate, and science really can tell us which skeptical outreach techniques work, then I urge you to suggest some clear, objectively measurable metrics that skeptics can use in their campaigns. While we’re at it, how can we get research on science education into college and high school classrooms? How can we overcome the Dunning-Kruger effects surrounding educational approaches in science classrooms and informal skeptical efforts?

Comments

Indeed science without scepticism is like the Pacific without water – but better funded by big government.

I found the UK government funded video of a teacher killing children for expressing doubt about alleged catastrophic global warming to be more obscene than a child porn video would have been.

Since PZ Myers barred me from Pharyngula for questioning CAGW, I do not see him as a communicator. I may be biased. The fact that he did not ban the guy who first answered my question with obscenity was a niggle.

At the level of the philosophy underlying scientific questioning, there are at least two quite different approaches, which may be behind the works best/works best for me conflict. A mechanist approach uses a clockwork metaphor, in which different configurations of enmeshed gears contribute to an outcome; in this case, there may be a “what works best”, and it makes sense to push teachers toward such methods. A contextualist approach (or functional contextualist approach), on the other hand, uses the metaphor of a behavior in context, in which the same physical behavior in different contexts may have vastly different outcomes, and vastly different behaviors in different contexts may have quite similar outcomes. Context is a source of error in the mechanist model, but a crucial causal variable in the contextualist model.

These are two different philosophical underpinnings, but both are scientific approaches. The vast majority of research in teaching has been done in a mechanistic approach, but more recently pedagogical research has begun using functional analysis (rather than assuming that a student is acting up “for attention”, actually manipulating the variables to see what reinforces that student’s behavior – peer attention is not the same as teacher attention, for instance).

I feel it is tremendously important to care about evidence. It is absolutely possible to do that in a contextualist approach–indeed, if the situation calls for it, it is (in my opinion) a considerably more effective and productive approach than the mechanist “one size fits all” view. The clockwork approach sweeps too much information under the rug as “error”; variability is meaningful.

Admittedly, when *some* people wave the “it works for me” banner, they have not done the work to demonstrate that it actually does work. Unless they are trained in, say, reversal designs, operationalization of outcomes, etc., then they are using the “it works for me” as a smokescreen. And if that is your experience with “it works for me”, you may be forgiven for seeing a contextualist approach as inferior. As I said, these are fundamentally different underlying philosophies; if you ask questions grounded in mechanism, contextualist answers will not satisfy you, and vice versa. Once you understand the reasons, though, you will be able to see a very different picture.

I agree that metrics and proven methods are important when teaching to a captive audience (in schools, for example). There, you really have a duty to get the most bang for the buck, and it is the duty of a teacher to drop his or her idiosyncratic approach and do what works best.

However, the non-captive audience for skeptic writing is diverse, and approaches that work on the majority do not necessarily work on all types of people. To reach these people, a diversity of approaches has much value.

Imagine a writer, full of personality and insights, who passionately wants to share with the world. Maybe somebody like Hemingway. What should he or she do? Go to an airport bookstore and study the best-selling titles, and try to parrot their style? Or be true to his or her creative vision? I think science writers who have self-selecting audiences should follow their passion and their personality. They are likely to attract and engage some people who would not be reached by the “optimal” method. This is not the same as ignoring pedagogical evidence. It recognizes the “tail” in any statistical distribution, and treats non-typical students as worthy of attention.

A complete misrepresentation, but then, it’s about what I expect from you.

people come to him waving scientific research showing that their technique works and that techniques like PZ’s don’t

That’s a lie.

I don’t like it when people come to me waving scientific research showing that their technique works and then tell me that I must follow it now. I don’t disagree with the research; I say if it works, and you like it, do it. But there is no research saying my approach doesn’t work, and I know it does work because I’ve been applying it successfully for years, with good effect. I make no claim that it is the best of all possible procedures, or that everyone else must emulate me — that’s more your style — only that it suits me, and that it appeals to some people well.

What I detest is dishonest clowns like you with an agenda that aims to reduce communication to your chosen formula, and especially clowns like you who will misrepresent the science as a club to shame anyone who doesn’t follow your program. On that panel, I made the point that we need multiple strategies, that different people will respond differently to different approaches, NOT that we should ignore research on communication.

In my Comment 3 I said “approaches that work on the majority do not necessarily work on all types of people.” I should clarify that the single “best” approach, as evidenced by metrics, may not even work on the majority. Maybe the best approach works for 20% of students, the next-best approach works for 10%, etc. The best approach might not work for 80% of the population. (I don’t know the numbers; I am just making a logical point.) If that is the case, it is even more important (and completely scientific) for the scientific community to deploy a diversity of approaches.
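That logical point is easy to make concrete. In this toy sketch (every number is invented, and it assumes each approach reaches a disjoint slice of the audience), the single best-performing approach still reaches far fewer people than deploying all of them:

```python
# Invented reach fractions for four hypothetical outreach approaches.
# Assumption: each approach reaches a non-overlapping audience segment.
reach = {"A": 0.20, "B": 0.10, "C": 0.08, "D": 0.05}

best_single = max(reach.values())   # the single "best" approach
portfolio = sum(reach.values())     # all four approaches deployed together

print(f"best single approach reaches {best_single:.0%} of the audience")
print(f"deploying all four reaches {portfolio:.0%} of the audience")
```

With real audiences the segments would surely overlap, shrinking the portfolio's advantage, but the qualitative point stands: "best by the metric" and "best coverage" are different questions.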

PZ: Whether or not you said it on the panel, people do come to you with research showing that your confrontational approach doesn’t work. You can stick your fingers in your ears all you want, but that research exists, and you’ve been given a chance to address it. Instead you respond with this sort of ad hominem.

“I’ve been applying it successfully for years, with good effect.”

Is that based on some metric, or just on your gut feeling? If you’re against metrics, how do you know it succeeds, and if you’re against metrics, why do you care what effect it has? This is the contradiction I’m getting at in the post.

On that panel, you made the point that you don’t want metrics because you don’t like people trying to tell you how to do what you do. Which is fine as far as it goes, but strikes me as unskeptical.

I agree that multiple approaches are necessary. But they should all be rooted in empirical research and evaluated based on objective metrics. And when people do that for confrontation, it fares poorly.

Where is the research that method A works “best,” and what metric defines “best”?

Even if method A did work with highest rate of success, method B could be more cost effective and we all only have finite resources. Likewise method B could be better targeted to a specific population, like college students. With medicine, we always have method C — do nothing at all — but that is not the case in communication or education.

Clearly if you want to raise the perceived preparation level of incoming students, you just create higher barriers to entry, so that the only ones who get a science education are those with a science education. But I don’t think colleges work like that.

rpenner: What metric one uses depends on one’s goals. Nor am I saying we all need to do the exact same thing. There could be several good methods, but there are also methods that fail, and we should avoid those even if we each adopt different good methods. Contrary to PZ’s fantasies above, I’ve never said everyone needs to do the same thing, just that they ought to avoid things that don’t work or are counterproductive. And I reject his claim that we don’t need metrics.

Read the first two links, from Shanahan, which summarize science education research. Even with diverse audiences, etc., we can draw consistent lessons about what works and what doesn’t.

Josh – your real mistake is thinking that PZ is a skeptic and science communicator. It is similar to thinking of Rush Limbaugh as a political thinker. In fact, PZ is an ideologue and a public spectacle. And I don’t say this just to be insulting. His goal is not good science education, it’s the destruction of religion; that’s what makes him an ideologue. The behavior you report here is just more of that. His blogger equivalent of “shock jock” behavior is what makes him a public spectacle.

The conversation about communicating science would get a lot more productive if people stopped believing PZ is a serious skeptical thinker, just as the political conversation in the US would be more productive without Fox news and the power of talk radio hosts like Rush Limbaugh.

Science would be a lot better off if either “skeptics” became real skeptics instead of ideologues, or people stopped mistaking “skepticism” for the public face of science.

In the past forty years “skepticism” has come to mean an ideological position which isn’t skeptical at all. It’s an attempt to enforce a rigid set of positions on the media and the intellectual class, at times in the face of evidence. My usual clash with it, though, comes from my skepticism over the social sciences and the imposition of those on real science, the attempt to hold scientists and others to an ideological materialist orthodoxy and similar violations of the “skeptical” POV. It’s quite possible to be far too skeptical for the “skeptics”. They don’t like it anymore than the Bigfoot community likes skepticism about their assertions.

Just now, I’m having duels, both with a creationist here and with materialists, over the inevitable inability to know something as a result of missing evidence on another blog. At least one of them is a “skeptic” having a “skeptical” blog of her own. I’d question her reliance on evidence, which is not there, though she’s hardly the only “skeptic” I’d say that about.

Good thoughts, and I agree. Lately I’ve been focusing on measurement in my skeptical work. Fortunately, what I focus on is internet-based efforts, so there are lots of ways to measure – traffic patterns, pagerank, google placement, and so on. But we need to encourage skeptics to do it more, even in non-digital realms.

One thing I find often left out of the debate on ‘what works best in communication’ is a precise qualification of the desired outcome. Before anybody can discuss methodology, you need to know what the method aims to do.

Within atheism there is a multitude of goals. Some are secular, some are clearly political, some are educational, some are hostile and punitive. I’m most interested in secularisation and education; anything that conflicts with those goals, I’m concerned about.

In discussing methods, there are three things I want to hear: what you’re trying to do, how you’re trying to do it, and what you’re looking for to identify whether you’ve achieved it. If I only hear one of those things clearly, I find it’s not a discussion on methods but merely cheerleading for a vague cause. And a waste of time.

I’m with Mike. Clearly articulated goals, methods, and means of evaluating the results. What else is there?

To toss in my own anecdotal confirmation bias for a moment, it seems like we’ve had a rash of skeptics behaving unskeptically (or rationalists behaving irrationally) in the last few weeks. Since questioning one’s own assumptions is the bedrock of critical thinking, it’s a very good thing that these conversations are going on. Thanks Josh!

Now, he’s right that people wave studies with confidence far more than statistically justified at the range they’re trying to extrapolate to; and especially when they neglect to include the qualifiers and uncertainties. This, however, predisposes him to discount such studies whenever they are waved at him, in that the mere fact that it’s being waved at him by someone of group X has historically been an indicator that it is either badly flawed, or not actually relevant.

Contrariwise, I don’t think such studies are impossible. There are some definite IRB approval issues for directly relevant experiments on human subjects’ cognition. That said, I’ve been intermittently looking around for the last two or three years, and there doesn’t seem to have been a lot of research published on the cognitive science of how people change their minds, especially in light of how TO change people’s minds.

Or in English: he’s not entirely right, but a bit overly dismissive.

The other issue is more one of the “what’s best” and “what works for me”. For example, suppose after getting a Nussbaum Mad Science Grant and finding a suitably apathetic IRB to approve some dubious experiments on undergraduate students, Doctor Potatohead discovers that persuasive dialogs are approximately nineteen times as likely to be effective when cast in iambic pentameter and sung to Greensleeves. Leaving aside the WEIRD nature limiting applicability, such a finding would definitely have a radical impact on science communication. If, however, PZ’s singing is slightly less melodious than cuttlefish flatulence, it nevertheless might be an extremely ineffective approach for him.

Less indirectly: PZ’s personality may leave him less suited to more gentle and diplomatic approaches. Personalities aren’t completely invariant and static, but radical changes are unusual. Even if there are tactics that are more effective than PZ uses now, it may well be that he’s unsuited to them. Which leaves the question of what tactics within his limited repertoire are most effective at contributing to the social attitude changes he hopes for, and whether his current approach is likely to have more beneficial impact than a “quiet down and watch others do the job” approach, or even “spend a few years practicing being nicer until getting good at it”.

There’s also the increasingly complex question at a more ecological level, involving whether multiple people using diverse approaches makes for a more effective meta-approach (almost certainly), and thus what mapping between the set of approaches and set of activists is optimal. (Though this involves the further complexity of the activists not necessarily all having exactly identical optimization goals.) However, for that, PZ’s position makes sense, if you think of him saying “It looks like having a few aggressive and irritating folks contributes to faster beneficial change, and I’m good at that.”

That said, I don’t think he’s interested in the meta-discussion, in part because so many of those who resort to it are tone trolls trying to find a way to get him to shut up.

So, by all means, do the research. Present it. If there’s a significant benefit to be had, there WILL be younger players who will come along, adopt those means, and as a result rise to pre-eminence to rival PZ’s. If there’s a massive benefit (which I doubt), it will reduce the size of the “stinging fly” niche in the social ecology he presently occupies to obscurity.

And besides, one of the ways an advocate of an improved approach to persuasion should be able to demonstrate its effectiveness thereto is to apply that tactic to persuading PZ to adopt it. =)

Also, looking through part II of Shanahan’s writing turns up “The most effective texts were those that directly addressed and refuted common misconceptions.” This would seem to support PZ’s preference for a more direct form of confrontation, at least as a first approximation.

At a second approximation, there subjectively would seem to eventually be a point of diminishing returns, and perhaps could be a point of negative returns. However, without measurement, that’s rather speculative; and to further claim PZ is into the negative returns zone without measurement data to support the thesis subjectively reminds me a bit much of conservatives that talk about the Laeffer curve and insist current tax rates must be past the peak.

Also, looking through part II of Shanahan’s writing turns up “The most effective texts were those that directly addressed and refuted common misconceptions.” This would seem to support PZ’s preference for a more direct form of confrontation, at least as a first approximation.

The problem is what one means by “confrontation.” Depending on how it’s used, it can mean being direct but civil, or it can mean going all out with the ridicule.

PZ: Whether or not you said it on the panel, people do come to you with research showing that your confrontational approach doesn’t work.

That’s an incomplete sentence that illustrates Cuttlefish’s (and PZ’s?) point. Doesn’t work for whom on whom in what context? What speaker/writer? What audience? What goal(s)–to convince the unconvinced, or bolster the convinced, or shake the opposition’s assumptions, or move the grounds of debate, or what?

Your simplistic sentence embraces the univariate fallacy–the notion that one variable (in this case, confrontational vs. non-confrontational) is all-important and that other variables and especially the interactions among those several variables are irrelevant. They’re not, and the attempt to characterize the system as a univariate system so over-simplifies the case as to be actively pernicious.

We hear about the Dunning-Kruger effect; let’s not forget about the Overton Window. In a lovely irony, it might be PZ’s (alleged) ineffectiveness that enables Josh’s (purported) effectiveness.

Doesn’t work for whom on whom in what context? What speaker/writer? What audience? What goal(s)–to convince the unconvinced, or bolster the convinced, or shake the opposition’s assumptions, or move the grounds of debate, or what?

That’s actually been answered. “tribalscientist,” a.k.a. Mike McRae, brought up his own summary of the research in a comment on John Wilkins’ “Tone Wars” blog post. If you want to rally the base and dissuade members of your own group from defecting, ridiculing outsiders works pretty well. If you are trying to convince either the opposition or those who are on the fence, then it’s not necessarily so effective.

Also, if we’re going to talk science and skepticism, bringing up a data-deficient model like the Overton Window is not that helpful.

@17 J. J. Ramsey: “The problem is what one means by ‘confrontation.’ Depending on how it’s used, it can mean being direct but civil, or it can mean going all out with the ridicule.”

Confrontation would potentially seem to include either, with the latter being what would usually be termed “more confrontational”. Thus, my earlier comparison to the (misspeled) Laffer curve.

Among other factors, the “Backfire Effect” also would need to be taken into account. I also suspect (in part due to conjecture about that effect) response to variable confrontation might need to be measured with Altemeyer’s RWA metric and Sidanius’s SDO metric as controlled variables.

Any discussion of “what works” that fails to take into account Pharyngula’s traffic statistics is doomed. Numbers are evidence.

PZ’s approach irrefutably “works” as skeptical outreach. His comments are populated largely by people who’d never considered organized scepticism until they were radicalized by Pharyngula.

There is no one best approach, because there is no one group to reach out to. Granted, PZ’s approach works best for reaching out to what one demographer once called the “Fuck-You Boys,” but the numbers certainly indicate his phenomenal success at reaching that audience. And no one in the Accommodationist camp can make a similar claim.

As a different example, Amanda Marcotte and Greta Christina, to name just two, have succeeded in tying skepticism and rationality to Feminism, and in turn have brought in a whole ‘nother group of people to the conversation. And again, the Accommodationist position is undermined by the sheer weight of evidence that this approach also works, and works well.

In science, there’s theory, observation, and experiment. And if the results of experiment after experiment undermine the theory, then it’s the theory that has to change. The wide variety of blogs on science and skepticism, with their varied approaches, are a large-scale experiment on science communications theory. And so far, the results in terms of readership don’t look good for the theorists.

Thanks for linking to my posts, Josh, much appreciated. And I agree, evidence does matter. People are complicated, but understanding the strategies that are most often effective is important and valuable.
And for those who read my post and were wondering about the direct confrontation, these studies were all done with materials written for schools or universities so the refutation was civil – direct but not antagonistic.

While we’re at it, how can we get research on science education into college and high school classrooms? How can we overcome the Dunning-Kruger effects surrounding educational approaches in science classrooms and informal skeptical efforts?

Good science education requires teachers who truly understand science. These are not as common as they should be. But nobody can fully understand science without the neural circuitry to deal with logic and the open-mindedness that a scientist needs. That develops very, very early. We need to be engaging students in philosophical discussions from kindergarten.

The Dunning-Kruger Effect is a much, much bigger problem. It is a closed loop, particularly in education settings. What some of my own research shows is that it is positively correlated with narcissism, entitlement, external attributions for failures, and poor study strategies. Students who overestimate their competence the most use rehearsal learning strategies (flash cards, bullet-point memorization, template paper writing), blame their poor performance on external factors (luck, teacher), fail to accept that they do not have a grasp on the material, and feel entitled to continue the poor strategies that don’t work.

It is extremely difficult to teach people who think that they already ‘get it’. The most common sentence I heard during my office hours in the past 3 years was, “I understand it. I just don’t know how to word it the way that you want it.” Of course all I wanted was for them to demonstrate understanding; ‘wording’ is not the issue.

Unfortunately, narcissism has increased dramatically in recent years and ‘teaching to the test’ is now the norm, thanks to the NCLBA. Students don’t know how to learn and don’t think that they need to learn. Instead, they simply perform tasks and check off boxes until they get what they are convinced that they deserve: a degree.

What we need is to change the narcissistic, anti-intellectual culture.

Ron Knop’s point shouldn’t be lost here: I don’t believe PZ’s goal is skepticism or science education. That makes it easier to disregard his opinion on how to advance and defend good science education.

On confrontation: Tone is a strawman. I think we’ve seen in the past few weeks some pretty compelling evidence that confrontation isn’t being used as an intelligently targeted tool; it’s a default position used against anyone who disagrees with ‘you’ about anything.

On Overton Windows: A great example. If you’ve actually gone and studied the window, you’ll see it’s used to engineer compromises that move the debate in a specific direction. RBH wants to claim that the unforeseen result of more people being accepting of Atheists is a result of an Overton Window strategy. Nope; the Overton Window is a carefully considered political strategy that utilizes the results of wedge politics to move a percentage of people toward a goal via compromise.
There is no compromise in what the New Atheists have been doing – they’ve actually used confrontation against anyone who doesn’t completely agree with them.

Unforeseen results are just that – unplanned and unexpected. Using them retroactively to justify what you’re doing ignores two things:
– You don’t know what other unforeseen effects you’re having and
– in employing wedge politics but not the Overton Window strategy, the unforeseen and undiscovered effects you’re having may hurt your movement in the long run in spite of any gains you’ve discovered in the short run.

And if you reject data and introspection, you likely won’t know if that’s the case until too late.

And as I re-read about the Overton Window at the place that developed it http://www.mackinac.org/7504 I once again wonder if one of the unforeseen results might be a process of “selection” for people who enjoy confrontation – not as a carefully considered tool but as a default response to any disagreement – as the prominent faces of New Atheism?

Ron Knop’s point shouldn’t be lost here: I don’t believe PZ’s goal is skepticism or science education. That makes it easier to disregard his opinion on how to advance and defend good science education.

I don’t think it’s his goals that are the issue. Take a look at the thread to which I’d linked in comment #19, and see how poorly PZ Myers handles himself. He tries a strawman, which fails because he keeps getting called on it, and then finally he grossly misreads, “Did I not say I had the last word? Best to continue this at PZ’s melange of minionry,” to mean that he would swarm the blog with his “minions.” He comes off like someone whose anger has so choked his brain that he acts like a nutter.

I have to admit, though, that I find it rich that Myers is so quick to call others liars when he is hardly fastidious about being truthful himself. Heck, if I want to get personal, Myers went so far as to make false claims about me supposedly making “extremely inappropriate comments about [his] under-age daughter’s sex life.” Real class act, that.

RBH: You are right that the sentence you quoted leaves out a lot, but this is a conversation that’s stretched over years and years, so a certain shorthand is justifiable. And you are right that effectiveness is audience- and goal-specific; I hope the post above lays out the outline of what audiences and goals I have in mind.

More broadly, I’d note that PZ’s anti-metric stance would make all of this irrelevant. Whatever the audience, whatever the goal, PZ doesn’t want metrics. Period. And that’s a problem. If we can’t even agree that we can measure the effectiveness of techniques, whether we agree or disagree about audiences and goals and the best metrics to use is irrelevant.


About TfK

Joshua Rosenau spends his days defending the teaching of evolution at the National Center for Science Education. He was formerly a doctoral candidate at the University of Kansas, in the department of Ecology and Evolutionary Biology. When not battling creationists or modeling species ranges, he writes about developments in progressive politics and the sciences.

The opinions expressed here are his own and do not reflect the official position of NCSE. Indeed, older posts may no longer reflect his own official position.