No. 76

STONE the crows! Accountability is out of order. At least for guild members who hate the idea of being called to account.

On Thursday, Adam Creswell reported that health fund NIB was upsetting 160,000 dentists, podiatrists and optometrists, plus people in 14 other related occupations, by giving clients an opportunity to rate their performance online.[i] There were suggestions the site could work like Trip Advisor, where users report accommodation experiences (hint to new users: look for reviews by Australians and Brits, the Yanks put the “w” in whinge). [ii]

It took no time at all for industry associations to come over all outraged. The disgruntled would post unfair complaints, some said; others argued service providers who work from home would have their addresses publicised (which rather missed the point that this is where customers find them now).

And the AMA claimed professional privilege (although why the ABC asked the doctors, who it appears are not included in the NIB plan, confounds the Crows). According to Steve Hambleton from the doctors’ union, “it is an attempt to commoditise a very personal experience that patients and doctors have.” [iii]

Given it is patients doing the reviewing, the Crows are obviously missing something. But what is clear is that criticism of NIB demonstrates two things about guilds – they hate being judged by anyone but their peers and can always demonstrate that any performance measure they do not control is flawed. Especially if it allows customers to compare providers.

Like the National Assessment Program – Literacy and Numeracy (NAPLAN), which really, really upset teacher union officials last year. Christine Cawsey from the NSW Secondary Principals Council compared the campaign to stop its data being used to rank schools with the defence of Stalingrad.[iv]

The comrades claimed NAPLAN would hold poor-performing schools up to ridicule and was an attack on teachers. Even worse, the media would transform the data into league tables! That NAPLAN gives parents some sense of how their kids’ class compares with similar schools across the suburb or state does not justify the database to its opponents.

All sorts of people have bought the argument, including NSW education minister Adrian Piccoli, who when in opposition backed the idea of prosecuting newspapers that used NAPLAN data to rank schools.[v] (Funnily enough, now he is in government, journalists from the Sydney Morning Herald and The Australian, which both published NAPLAN-based league tables, are not in chains.)

Above all, enemies of ranking always argue the metrics are mucked up. Thus Anna Patty, not numbered among NAPLAN’s most adamant advocates, argues that standardised testing in New York schools, which inspired Julia Gillard, when education minister, to create NAPLAN, “amounted to little more than smoke and mirrors – demonstrations of academic improvements where there were none.” [vi]

That metrics are manipulated is always the strongest argument used by enemies of accountability. Thus Roland Pryzbylewski, who stopped being a cop to become a teacher in David Simon’s great tele-novel of politics in Baltimore, explains to a colleague how poor managements always disguise under-performance by “juking the stats”, whether to falsely lower crime rates or inflate literacy results.

Fair enough, but disputes over data are insiders’ arguments, generally between managers who want to justify their existence by demonstrating things are improving and practitioners who think they should be left alone to do the job as they want to do it.

They especially occur between regulators and self-governing guilds – and those that think they should be self-governing, notably teachers. And even though NAPLAN is now a fixture on every school calendar, insiders still oppose it for providing outsiders with information.

Australian Primary Principals Association president Norm Hart said the exams had been publicly distorted – going from a diagnostic tool for student results to a measure to which schools’ and staff reputations are tied. [viii]

And just now the guilds are winning. The campaign against using NAPLAN to assess school performance continues. Canberra’s MyHospitals website is a cot case when it comes to comparative data.[ix] While it is possible to compare hospitals against national averages, they are not rated against their neighbours – not much use for anybody who needs to know whether to take a suddenly sick kid late on Saturday night to casualty at St Vincent’s, Prince of Wales or RPA.

We are yet to see how Canberra’s university site (expected to be ready before the next academic year) works. [x] But, given it will use data already available and is not expected to rank institutions from top to bottom, it is hard to see what help it will be for people with no experience of university life in working out which institutions are performing well.

Arguments over ranking methodologies are fair enough; measuring service delivery is always difficult. The measure that assesses university research output, Excellence in Research for Australia, uses independent experts to list scholarly journals by academic esteem and allocates scores based on where people publish. It sounds sensible, but editors of low-ranked journals are always happy to explain why it is unfair. That’s the problem with ranking the performance of smart people: the losers always argue that the process is crook.

As law academic James Allan described the ERA, “this exercise was shot through with subjective evaluations wrapped in opaque processes.” [xi]

But demands for accurate data are also cover for professionals who hate the idea of being held accountable for performance by anyone outside the guild. Australian Education Union official Peter Job made one of the clearest cases against external accountability before the fight over standardised tests and league tables even got going:

Market based accountability models by their nature emphasise competition and judge successes and failures. Such an approach is accompanied by a strong sense of blame accorded to those perceived to be the latter, whether schools, teachers or students, along with the notion, largely developed by non-teaching “experts”, that educational improvement can be obtained not through resourcing and support, but by surveillance and performance mechanisms based on supposedly measurable data.[xii]

Replace “schools” with health and you see why NIB is upsetting people. And the louder and more detailed the guild arguments against accountability, and especially rankings, the less chance governments will publish league tables that allow the public to admire the winners and ask questions of everybody else.

Professor Allan’s cogent complaint about the ERA is not that the data is dodgy, it’s that the feds will not use it:

It requires government bodies to make tough, rather ruthless and usually politically unpalatable choices, based on what at the margins is shonky data. If they have the cojones to do that, all the many costs of this ERA exercise may be worth it. I’m just not at all sure this government, or any Coalition one either for that matter, has that sort of nerve. [xiii]

Perhaps. But the more guilds complain the greater is the public interest in having their members’ performance assessed.

As Adam Smith put it, “The real and effectual discipline which is exercised over a workman, is not that of his corporation, but that of his customers. It is the fear of losing their employment which restrains his frauds and corrects his negligence.” [xiv]