Avoiding the quick fix approach to solving youth unemployment

A key priority for EU policy makers is to combat high levels of youth unemployment. Supporting young people to start a new business is increasingly regarded as a way to achieve this goal. And yet the understanding of what drives the success and failure of youth entrepreneurship policies remains incomplete. In a rush to deliver ‘quick fixes’ to the problem of youth unemployment, governments are falling short of devising well-coordinated policy interventions firmly grounded in evidence that could, in the longer term, be more effective and save public money.

The most recent Eurofound report on Start-up support for young people in the EU highlights the breadth and variety of publicly supported measures on youth entrepreneurship available in the EU. But it also points to the lack of robust policy impact evaluations and the many shortcomings of those evaluation practices deployed.

Questioning the rationale of youth entrepreneurship policies

The point of departure of many start-up support measures under the Youth Guarantee is that entrepreneurship is a viable solution to youth unemployment. The assumption is that all young people are potential entrepreneurs. The reality is different: very few make it and become successful entrepreneurs, and those who do not can end up scarred by the experience of failure.

The other problem is that many start-up support schemes under the Youth Guarantee are, in practice, general activation measures for the unemployed as a whole. The risk is that young people are subsumed within a much broader target group whose specific needs go unaddressed, diminishing the potential impact of the interventions.

The bumpy road from conception to evaluation of policy interventions

So where’s the rub? Where do policy makers get it wrong? It starts with the design of the interventions. First, the objectives of the interventions are rarely specified when programmes are being developed. Many start-up support measures state higher-level aims, focused, for example, on increasing employability, but quantifiable and explicit targets are hardly ever set. This compromises the entire evaluation process: in the absence of measurable targets, evaluators have no option but to infer and impute them afterwards.

Another problem is that many start-up interventions for young people tend to be discrete, small-scale, temporary measures with relatively limited financial resources. They are rarely embedded within larger policy frameworks or youth employment strategies. Such measures are often short-lived and their potential impact is reduced. This is certainly not a good basis on which to build any evaluative research.

These first observations point to another important shortcoming: policy evaluations are not planned rigorously enough from the outset. They are often an exercise disconnected from the policy delivery, instead of being an integrated part of it and adequately resourced.

From a policy maker’s perspective, the big concern is the limited funds for policy evaluations, especially in a context where public funding is increasingly rationed. This concern is particularly important considering that impact evaluations using more sophisticated methods (those controlling for differences between participants and non-participants) make heavy demands on time, as well as on human and financial resources. But what holds policy makers back from conducting impact evaluations, even more than any budgetary considerations, is that these costly endeavours often point to little or no effect of the policy measures under scrutiny.

Evaluation is an iterative process that helps design better policies

So if costly and sophisticated impact evaluations show little or no impact, why bother at all? Simpler monitoring-type evaluations, relying mainly on self-reported data on beneficiaries’ opinions and views, or on basic figures for budget and uptake, are less costly and tend to yield more positive results. These are still referred to as ‘evaluations’ in the policy jargon, and they tend to be favoured over evaluations using more appropriate quantitative statistical methods.

This is evident in how few youth entrepreneurship interventions are subject to robust impact evaluations compared with those that receive lighter forms of evaluation. A simpler evaluation of the UK’s Prince’s Trust Enterprise Programme (for disadvantaged youth) shows positive results and high satisfaction among participants. However, an earlier evaluation adopting a more scientific approach pointed to very limited impact, especially in terms of employment probabilities.

The point of committing to and investing in more robust impact evaluations is to link the results to the process of delivering the policy measure. If no or limited policy impact is shown, this is not the end of the exercise. The evaluation should lead to changes in the way the measure is implemented. In practice, impact evaluations rarely indicate whether and how the measure should be modified or whether it should be halted altogether.

Although the evaluation of the French CréaJeunes measure on enterprise training for young people found no policy impact, the programme continues to be run in the same format and is now part of the national Youth Guarantee. The results from this evaluation suggest that it may be wiser to revisit the specific design elements of the programme in order to improve its effectiveness and impact.

There is also evidence from a recent evaluation of the Swedish Junior Achievement Company Programme that offering a hands-on opportunity to run a business at a fairly young age, in a risk-free environment, may be more effective. It helps young people to determine whether entrepreneurship is a suitable career path for them, without any fear of business failure.

If the results are not fed back into policy design and delivery, the risk is to perpetuate ineffective interventions that fail to address any youth problem and only waste public money.