
Industrial strategy is one of the big issues for the What Works Centre and its local partners and innovation is one of the main themes of industrial strategies in the UK, and around the world.

Public policy plays a number of important roles in supporting innovation — see this debate between Mariana Mazzucato and Stian Westlake for a good intro. And as I wrote back in January, it’s equally important that we understand what the most effective tools are.

The good news for the UK is that we are — slowly — building an evidence base on what works for promoting innovation, as well as other pillars of industrial policy. What’s more, what we have suggests some current UK programmes work pretty well.

*

Our latest case study summarises Innovate UK’s programmes of support for microbusinesses and SMEs: mainly grants but also loans, awarded on a competitive basis, either to individual firms, or to promote partnerships with other companies or with universities.

Using standard UK administrative data, evaluators were able to set supported firms alongside similar non-supported companies, then compare how the two groups did. This ‘difference-in-differences’ approach is one of the methods we endorse, as it meets our minimum standards for good evaluation.
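To see the intuition, here is a minimal difference-in-differences calculation with made-up numbers. The figures and the firm groups are invented for illustration; this is not the evaluators’ data or code, just the basic arithmetic behind the method.

```python
# Toy difference-in-differences (DiD) calculation with invented numbers.
# Outcome: average employment per firm, before and after support.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD = (change in treated group) - (change in control group)."""
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change

# Hypothetical employment counts: supported firms vs. similar unsupported firms
treated_pre = [10, 12, 8]      # supported firms, before the grant
treated_post = [40, 45, 35]    # supported firms, after
control_pre = [11, 9, 10]      # matched unsupported firms, before
control_post = [15, 13, 14]    # matched unsupported firms, after

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(effect)  # → 26.0 (treated grew by 30 on average, controls by 4)
```

The point of subtracting the control group’s change is to strip out whatever would have happened to similar firms anyway, leaving an estimate of the programme’s causal effect.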

Encouragingly, Innovate UK’s programmes seem to have raised treated firms’ survival prospects (by 14 percentage points), employment (an extra 32 staff on average), and possibly sales too (although this result is less robust). These positive effects are biggest for companies aged 2–5 years and those aged 6–19 years. That is, these programmes seem to have helped innovative firms to scale.

*

This is another helpful piece of the industrial strategy puzzle, for several reasons.

First, in our innovation evidence review back in 2015, we found lots of evidence that these kinds of programmes raised firms’ R&D — but rather less evidence on growth impacts further down the line. Now we have good UK evidence of those growth and scaling impacts.

Second, we already know that the UK’s R&D tax credit system is pretty effective in stimulating firms’ patenting. We can now add good evidence on grants and loans alongside that.

Third, we can set these innovation findings alongside other evidence on business support programmes — where again, we have a decent stock of UK evidence, with several programmes (e.g. on export support) showing positive impacts.

Finally, it’s reassuring to see that evidence for these types of innovation support programmes in the UK broadly lines up with what we’ve found for OECD countries as a whole. We’ve had a number of conversations with policymakers worried that innovation programmes are very context-specific, so results from one country won’t generalise to others. This may be true in some cases. But for grants, loans and tax credits, what we know suggests that what works across the OECD also works in the UK.

At the What Works Centre we’re keen on experiments. As we explain here, when it comes to impact evaluation, experimental and ‘quasi-experimental’ techniques generally stand the best chance of identifying the causal effect of a policy.

Researchers are also keen to experiment on themselves (or their colleagues). Here’s a great example from the Journal of Economic Perspectives, where the editors have conducted a randomised controlled trial on the academics who peer-review journal submissions.

Journal editors rely on these anonymous referees, who give their time for free, knowing that others will do the same when they submit their own papers. (For younger academics, being chosen to review papers for a top journal also looks good on your CV.)

Of course, this social contract sometimes breaks down. Reviewers are often late, or drop out midway through the process, but anonymity means that such bad behaviour rarely leaks out. To deal with this, some journals have started paying reviewers. But is that the most effective solution? To find out, Raj Chetty and colleagues conducted a field experiment on 1,500 reviewers at the Journal of Public Economics (where Chetty is an editor). Here’s the abstract:

We evaluate policies to increase prosocial behavior using a field experiment with 1,500 referees at the Journal of Public Economics. We randomly assign referees to four groups: a control group with a six-week deadline to submit a referee report; a group with a four-week deadline; a cash incentive group rewarded with $100 for meeting the four-week deadline; and a social incentive group in which referees were told that their turnaround times would be publicly posted. We obtain four sets of results.

First, shorter deadlines reduce the time referees take to submit reports substantially. Second, cash incentives significantly improve speed, especially in the week before the deadline. Cash payments do not crowd out intrinsic motivation: after the cash treatment ends, referees who received cash incentives are no slower than those in the four-week deadline group. Third, social incentives have smaller but significant effects on review times and are especially effective among tenured professors, who are less sensitive to deadlines and cash incentives. Fourth, all the treatments have little or no effect on rates of agreement to review, quality of reports, or review times at other journals. We conclude that small changes in journals’ policies could substantially expedite peer review at little cost. More generally, price incentives, nudges, and social pressure are effective and complementary methods of increasing pro-social behavior.
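The core of the design is simple random assignment across the four arms. The sketch below shows what that looks like in code; the arm labels are my own shorthand for the groups named in the abstract, not the authors’ actual code or data.

```python
# Minimal sketch: randomly assign 1,500 referees to the four experimental arms
# described in the abstract. Arm names are my own labels, purely illustrative.
import random

ARMS = ["six_week_control", "four_week_deadline", "cash_incentive", "social_incentive"]

def assign_arms(n_referees, seed=0):
    """Shuffle referees, then deal them out evenly across the arms."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    referees = list(range(n_referees))
    rng.shuffle(referees)
    return {ref: ARMS[i % len(ARMS)] for i, ref in enumerate(referees)}

assignment = assign_arms(1500)
counts = {arm: sum(1 for a in assignment.values() if a == arm) for arm in ARMS}
print(counts)  # 1500 / 4 = 375 referees per arm
```

Randomisation is what lets the authors attribute differences in turnaround times to the treatments themselves, rather than to differences between the kinds of referees in each group.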

*

What can we take from this?

First, academics respond well to cash incentives. No surprise there, especially as these referees are all economists.

Second, academics respond well to tight deadlines – this may surprise you. One explanation is that many academics overload themselves and find it hard to prioritise. For such an overworked individual, tightening the deadline may do the prioritisation for them.

Third, the threat of public shame also works – especially for better-paid, more senior people with a reputation to protect (and less need to impress journal editors).

Fourth, this experiment highlights some bigger issues in evaluation generally. One is that understanding the logic chain behind your results is just as important as getting the result in the first place. Rather than resorting to conjecture, it’s important to design your experiment so you can work out what is driving the result. In many cases, researchers can use mixed methods – interviews or participant observation – to help do this.

Another is that context matters. I suspect that some of these results are driven by the power of the journal in question: for economists the JPubE is a top international journal, and many researchers would jump at the chance to help out the editor. A less prestigious publication might have more trouble getting these tools to work. It’s also possible that academics in other fields would respond differently to these treatments. In the jargon, we need to think carefully about the ‘external validity’ of this trial. In this case, further experiments – on sociologists or biochemists, say – would build our understanding of what’s most effective where.

A version of this post originally appeared on the What Works Centre for Local Economic Growth blog.

The Centre will conduct systematic reviews of UK and international research, ranking the most effective interventions, and will work closely with local government, local enterprise partnerships and other ‘users’ to help develop stronger economic policymaking across the UK. As NICE and the EEF already do, it may eventually commission research too.

Henry Overman is stepping down from SERC to lead the Centre. I’m becoming one of the Deputy Directors, and will be working at LSE alongside my research-focused role at NIESR. I’ll be leading on the academic workstream, co-ordinating the systematic reviews and demonstrator projects, as well as advising Henry on the Centre’s direction.

We’ll be working with a strong team of academics across the country – in Liverpool, Leeds, Newcastle and Bristol, as well as London. We’ll also team up with New Economy Manchester on capacity-building and demonstrator projects. And we’ll be using the UK-wide networks developed by Centre for Cities and Arup.

Developing a new organisation from scratch is exciting, challenging and a huge amount of work, as I can attest from my early days at the Centre for Cities. Unlike most start-ups, we are very lucky to have secure initial funding. And we have an emerging body of good practice to draw on. But we still have a great deal to do in the months ahead. I look forward to working with many of you as we build out.

This is the first phase of a research programme with roots in the resurgence of industrial policy around the world. Like many others, the UK government wants to promote ICT and digital content activities – in the global North at least, this is generally high value activity, with spillover effects to the rest of the economy.

A big problem is that we have little idea of the true size and nature of these digital companies. That’s because official definitions use SIC codes, which don’t work well for companies doing innovative, high-tech stuff.

To try and fix this, we use big data provided by Growth Intelligence. GI pull in data from the web, social media, news feeds, patents and a range of other sources, and layer this on top of public data from Companies House. That gives a much richer picture of who’s out there, their characteristics and their performance.

Crucially, GI’s data buys us a lot more precision than SIC-based analysis. We can look at industries and at products, services, clients and distribution platforms. For increasingly tech-powered sectors like architecture, that allows us to distinguish ‘digital’ companies producing (say) specialist CAD software from ‘non-digital’ ones making buildings.
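A toy example of the underlying idea: classify a firm as ‘digital’ from text signals rather than from its SIC code alone. The keyword list and firm records below are invented, and Growth Intelligence’s actual pipeline draws on far richer sources; this sketch just shows why text gets past the limits of SIC codes.

```python
# Toy classifier: flag a firm as 'digital' from its description text,
# rather than its SIC code. Keywords and firms are invented examples.

DIGITAL_KEYWORDS = {"software", "cad", "platform", "app", "saas"}

def is_digital(description):
    """Return True if the firm's description mentions digital products."""
    words = set(description.lower().split())
    return bool(words & DIGITAL_KEYWORDS)

# Two hypothetical firms sharing the same SIC code (architectural activities)
firms = [
    {"name": "Acme Architects", "sic": "71111", "desc": "design of residential buildings"},
    {"name": "DrawBot Ltd", "sic": "71111", "desc": "specialist CAD software for architects"},
]

digital_firms = [f["name"] for f in firms if is_digital(f["desc"])]
print(digital_firms)  # → ['DrawBot Ltd']
```

Both firms carry the same SIC code, so an official classification would treat them identically; the text signal is what separates the software producer from the building designer.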

*

Overall, we find over 40% more digital companies than official estimates suggest. We also find that digital companies who report revenue or employment are pretty resilient, with faster revenue growth and higher average employment than non-digital companies.

And contrary to the popular sense that it’s all about London start-ups, we find hotspots of digital activity across the country, including some perhaps surprising places like Aberdeen, Middlesbrough and Blackpool.

*

Okay, this is all fascinating stuff for researchers. But what should Government do differently? First, the big data field is still in its early days, and we’d encourage officials to explore how it can complement conventional statistics. Second, better data should lead to better-designed industrial policies. Finding the optimal policy mix, however, is a separate and much harder question to answer.