We highlight the importance of randomisation bias, a situation in which the process of participation in a social experiment is affected by randomisation per se. We illustrate how this arose in the UK Employment Retention and Advancement (ERA) experiment, in which over one quarter of the eligible population was not represented. Our objective is to quantify the impact that the ERA-eligible population would have experienced under ERA, and to assess how this impact relates to the experimental impact estimated on the potentially selected subgroup of study participants. We show that the typical matching assumption required to identify the average treatment effect of interest is composed of two parts. One part remains testable under the experiment even in the presence of randomisation bias, and offers a way to correct the non-experimental estimates should they fail the test. The other part rests on what we argue is a very weak assumption, at least in the case of ERA. We apply these ideas to the ERA programme and demonstrate the power of this strategy. Further exploiting the experiment, we assess, in our application, the validity of the claim often made in the literature that knowledge of long and detailed labour market histories can control for most of the selection bias in the evaluation of labour market interventions. Finally, for the case of survey-based outcomes, we develop a reweighting estimator that accounts for both non-participation and non-response.
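The reweighting logic for survey-based outcomes can be illustrated with a generic inverse-probability-weighting sketch. This is not the paper's estimator: the covariate, the two logistic selection models (one for study participation, one for survey response), and all simulated quantities below are hypothetical, chosen only to show how weighting by the product of the two estimated selection probabilities can undo a selected-sample bias in an outcome mean.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)                      # observable driving selection (hypothetical)

# Hypothetical selection mechanisms: both participation and survey
# response are more likely for higher values of x.
p_part = 1 / (1 + np.exp(-(0.5 + 0.8 * x)))
part = rng.uniform(size=n) < p_part
p_resp = 1 / (1 + np.exp(-(1.0 + 0.5 * x)))
resp = rng.uniform(size=n) < p_resp

y = 2.0 + 1.5 * x + rng.normal(size=n)      # outcome also depends on x

# The outcome is observed only for participating survey respondents.
obs = part & resp

# Estimate the two selection probabilities from the observable,
# then weight each observed unit by the inverse of their product.
X = x.reshape(-1, 1)
m_part = LogisticRegression().fit(X, part)
m_resp = LogisticRegression().fit(X[part], resp[part])
w = 1 / (m_part.predict_proba(X[obs])[:, 1]
         * m_resp.predict_proba(X[obs])[:, 1])

naive = y[obs].mean()                       # biased: selected sample only
reweighted = np.average(y[obs], weights=w)  # corrected for both selections
print(naive, reweighted, y.mean())
```

Under this stylised set-up the naive mean over-represents high-x units (who are both more likely to participate and more likely to respond), while the reweighted mean moves back towards the eligible-population mean; the same product-of-probabilities structure is what lets a single set of weights address non-participation and non-response jointly.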