National Institute of Economic and Social Research

Monday, 17 October 2016

[IMPORTANT DISCLAIMER: This blog represents my personal view only: not that of NIESR or of the evaluation consortium led by Ecorys]

People sometimes ask me why I spend so much time correcting
the inaccurate or misleading use of statistics by politicians and newspapers.
After all, they say, it’s just politics and spin, mostly – a Minister getting
her facts wrong doesn’t necessarily mean anything for real people or actual
policy. Perhaps I should calm down and
focus on what's really happening, not the public statements. However, with the publication of the evaluation of the
Troubled Families Programme (TFP), we have a perfect case study of how the
manipulation and misrepresentation of statistics by politicians and civil
servants – from the Prime Minister downwards – led directly to bad policy and,
frankly, to the wasting of hundreds of millions of pounds of taxpayers’ money.

The findings of the evaluation are set out here - those
interested in the detail of the TFP should read the synthesis report, produced
by a consortium led by Ecorys and including NIESR, or at least the Executive
Summary of NIESR’s National Impact Study. But, after trawling through literally
hundreds of regressions, the bottom line is quite simple:

The key finding is that across a wide range of
outcomes, covering the key objectives of the Troubled Families Programme -
employment, benefit receipt, school attendance, safeguarding and child welfare -
we were unable to find consistent evidence that the programme had any
significant or systematic impact. The
vast majority of impact estimates were statistically insignificant, with a very
small number of positive or negative results.
These results are consistent with those found by the separate and
independent impact analysis using survey data, also published today, which also
found no significant or systematic impact on outcomes related to employment, job
seeking, school attendance, or anti-social behaviour.

In other words, as far as we can tell from
extensive and voluminous analysis of tens of thousands of individual records,
using data from local authorities, DWP, HMRC, the Department for Education, and
the Police National Computer, the Troubled Families Programme had no impact on the
key outcomes it was supposed to improve at all. It didn’t make people more (or
less) likely to come off benefits. Or to get jobs. Or to commit fewer crimes. And so
on. And, just to rub it in, these findings were confirmed by an entirely
separate evaluation, conducted by another research organisation, which used a
different data set and a different methodology to come up with essentially the
same answer – no measurable impact on any of the key outcomes.
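That pattern - a handful of scattered significant estimates among hundreds of insignificant ones - is exactly what chance alone would produce if the programme had no effect at all. A minimal sketch of the arithmetic (the figures here are illustrative assumptions, not the evaluation's actual counts):

```python
# Multiple-testing arithmetic: if a programme truly has zero effect,
# how many of N regressions will still look "significant" by chance?
alpha = 0.05      # conventional 5% significance threshold (assumption)
n_tests = 300     # illustrative order of magnitude, not the actual count

# Number of regressions expected to appear "significant" purely by chance
expected_false_positives = alpha * n_tests
print(expected_false_positives)    # 15.0

# Probability that at least one of the N tests comes out "significant"
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(round(p_at_least_one, 4))    # effectively 1.0
```

So a "very small number" of significant results, some positive and some negative, is precisely the statistical signature of a programme with no systematic impact.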

But the key point here – and the indictment of
politicians and civil servants - is not that the TFP didn’t achieve what it set
out to do. That's unfortunate of course.
But successful policymaking requires experimentation and risk-taking – and by
definition, sometimes that results in failure. If new programmes never failed
to deliver the promised results, that would show government was not taking
enough risks. That is not, or should not be, the issue. Indeed, many social policy experts thought that the basic principles underlying the programme made a lot of sense. The point is that it was the government’s
deliberate misrepresentation of the data and statistics that led to badly
formulated targets, which in turn translated into a funding model that could
have been designed to waste money. Bad stats meant bad policy.

And yes, I (and others - I'd note in particular Ruth Levitas and Stephen Crossley) told them so. In February 2012, I explained the fundamental
flaw in the analysis – that the government was taking a set of families who
were undeniably poor and disadvantaged, and redefining them – without a shred
of evidence – as dysfunctional and antisocial.
I said:

What
began as a shortcut taken by civil servants with the data was translated into a
speech by the Prime Minister that simply misrepresented the facts. That in turn
resulted in sensationalist and misleading headlines; the end result, more
likely than not, will be bad policy.

Did
they stop, listen, and think? No. Instead they chose to translate an obviously
flawed analysis, constructed for the purposes of a speech, into local level targets and funding. Four months later, I
wrote:

Even
leaving aside the morality of using the language of "stigmatising"
with respect to a set of families many of whom neither deserve nor will benefit
from any such thing, this is a terrible way to make policy. Using data -
and a completely arbitrary national target number - that everyone knows are
simply wrong, solely because it would be embarrassing to admit a mistake, will
make the programme less effective and risks wasting public money. Not
only does it reflect badly on Ministers, it also does no credit to the senior civil
servants who allow the publication of information which - at the most
charitable - appears to reflect a complete lack of understanding of the
relevant data. This is a clear case for the National Audit Office.

We can now skip ahead three years. It was at this point, in March 2015, that
Ministers decided to pre-empt the result of the evaluation, claiming that:

More
than 105,000 troubled families turned around saving taxpayers an estimated £1.2
billion

This was untrue. And we – including the civil
servants responsible for the press release - knew it at the time, as I pointed out:

We
have, as of now, absolutely no idea whether the TFP has saved taxpayers
anything at all; and if it has, how much. The £1.2 billion is pure,
unadulterated fiction.

But it was worse than that. As Stephen Crossley observed, anyone who actually bothered to read
the CLG report in detail would have realised that the TFP targeting and funding
model was - just as I had predicted - resulting in huge misallocations of
money:

Manchester
(for example) have identified, worked with and turned around a staggering 2385
‘troubled families’. Not one has ‘slipped through the net’ or refused to engage
with the programme. Leeds and Liverpool have a perfect success rate in each
‘turning around’ over 2000 ‘troubled families’. By my reckoning, over 50 other
local authorities across the country have been similarly ‘perfect’ in their TF
work. Not one single case amongst those 50 odd councils where more ‘troubled
families’ were identified or where a ‘troubled family’ has failed to have been
turned around.

Commenting on Stephen's analysis, I said:

In
other words, CLG told Manchester that it had precisely 2,385 troubled families,
and that it was expected to find them and “turn them around”; in return, it
would be paid £4,000 per family for doing so. Amazingly, Manchester did
precisely that. Ditto Leeds. And Liverpool. And so on. And CLG is
publishing these figures as fact. I doubt the North Korean Statistical
Office would have the cheek.
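The implausibility of those perfect scores, and the money riding on them, can be put in rough numbers. A back-of-envelope sketch (the 90% per-family success probability is a deliberately generous hypothetical assumption; the family count and £4,000 payment are the figures cited above):

```python
# How plausible is a 100% reported success rate for Manchester?
p_success = 0.90           # hypothetical, generous per-family success rate
n_families = 2385          # Manchester's reported 'troubled families'
payment_per_family = 4000  # payment-by-results fee, in pounds

# Probability that every single family is genuinely "turned around"
p_all_succeed = p_success ** n_families
print(p_all_succeed)       # effectively zero (smaller than 1e-100)

# Money paid out on the back of the perfect score
print(n_families * payment_per_family)   # 9540000 pounds for Manchester alone
```

Even at a 90% true success rate - far higher than any realistic intervention achieves - a clean sweep of 2,385 families is essentially impossible; yet the funding model paid out as if it had happened, in Manchester and in dozens of other authorities.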

At this point, it should have been
blindingly obvious to the most casual observer that TFP was not - as the
government had claimed - a "payments by results" programme. Numbers
which had absolutely no basis in any objective reality had first become the
basis for targets, then for claimed "success", and then for money. It wasn't payment by results. It was make up the
results as you go along. And cash the cheques.

The results of the evaluation should hardly come as a surprise.

As a postscript, it's worth noting the
caveats in NIESR's evaluation, which state, carefully and correctly, that
issues with data quality mean that:

the
results cannot be taken as conclusive evidence that the programme had no impact
at all

This is quite true. But CLG's attempt
to use this as an excuse for their failures conforms perfectly to the classic
definition of the Yiddish term "chutzpah" (cheek or audacity in
English) - "the man who kills his mother and father, and asks the court
for mercy because he's an orphan". The data quality issues are entirely
the result of the decision by the government to press ahead with spending
hundreds of millions of pounds on an untested, unpiloted programme, on the
basis of little or no evidence, rather than piloting it and/or rolling it out
in such a way that a more robust data collection and evaluation strategy would
have been possible.

So whose fault is this sorry
saga? The senior civil servant who
directed the Troubled Families Programme, Louise Casey, once said:

If
No 10 says bloody 'evidence-based policy' to me one more time, I'll deck them.

With David Cameron, who appointed her
to this role in 2011, we can be pretty sure no physical violence was
required. Nothing he told her interfered
with her instincts to press ahead, and to ignore both the evidence and the
warnings from me and from others.

So it starts at the top. But while
most of the blame rightly rests with Ministers, including the former Prime
Minister, and the responsible senior civil servants at CLG - and they should be
held accountable – it is also important to note that the normal checks and
balances that should have picked up on all this simply failed. What on earth
did the Treasury think it was doing allowing public money to be squandered like
this? Were they too busy cutting core local
authority budgets to notice that CLG was throwing hundreds of millions of
pounds away with no serious scrutiny?

Nor was it just civil servants.
Parliament didn’t do any better. Where was the National Audit Office? As far as
I can tell it produced one frankly mediocre report, which fudged or buried the
key points - which were all by this time in the public domain. The key Parliamentary Committees - the Public
Accounts Committee (PAC) and the CLG Committee? They too were asleep at the
wheel. Opposition parties and MPs do not appear ever to have raised any of these points, even though they had ample opportunity to do so.

On Wednesday, when the PAC holds
a hearing – Ms. Casey will be testifying – they will have a chance to redeem
themselves by asking some of the questions that should have been asked years
ago. You can already read the written evidence – I’d particularly highlight that
from Stephen Crossley.

What
lessons can we learn from this? Most
obvious is the one I started with. Statistics and facts do actually matter.
They translate directly into policy, and hence into real outcomes for real
people. But for that to happen in the
way that it should – and not be distorted by politicians on the way – the
process not just for the production but also for the analysis and interpretation of
statistics and evidence needs to be genuinely independent. I set out some
modest proposals here.