In the medical profession in particular, there are some very rigid beliefs about what constitutes good enough “evidence of effectiveness” to justify offering, recommending, allowing patients to try, or even just not vehemently opposing a particular type of treatment. There are some glimmers of hope in other sectors (e.g. the Best Evidence Synthesis work here in New Zealand). But there are still three areas where building a credible evidence base faces very serious challenges, given the constraints and realities surrounding them: (1) cutting-edge treatments; (2) treatments that are by their very nature tailored/individualized rather than standardized across patients or populations; and (3) learning what works for small sub-populations.

The new funding rules for the US Department of Education’s $650 million Investing in Innovation fund appear to be based on an out-of-date model of evidence-based policy and an out-of-date hierarchy of evidence. Recent developments in our understanding of evidence-based policy suggest changes are needed both to the selection criteria and to how successful proposals will be evaluated.

Most lay people can grasp the difference between grading/rating and ranking, so what’s wrong with the media? Following on from Patricia Rogers’ recent posts about the misreporting of evaluation findings, this post looks at an example from the New Zealand media (reporting on the new National Standards for literacy and numeracy) of leading the public astray through a complete misunderstanding of this very fundamental evaluation concept. Jane also ponders why the mainstream media in particular gets this kind of thing wrong so often …

Another Head Start evaluation, another controversy about whether the results show it works. In her comment on our post on the NY School Milk Study, Susan Wolf drew our attention to some important differences between the recent evaluation report on Head Start and how it was represented in an email from the Brookings