The Educational Attainment of Looked After Children - Local Authority Pilot Projects: Final Research Report

7 Impact of the Pilots - Quantitative Research

7.1 Introduction

7.1.1 This chapter contains an account of the quantitative data collection and analysis. The findings presented here are intended to answer Research Question 2: What, if any, was the impact of the pilot projects on quantitative measures, including school attendance, exclusion and attainment? The chapter describes the study population, and there is both interpretation of the data and discussion of the issues arising from the difficulties experienced in collecting accurate data about looked after children and young people. In the interests of clarity this chapter summarises the main findings, while more detail is provided in additional data tables provided in Appendix 4.

7.2 Methodology

7.2.1 The aim was to collect 'baseline' quantitative data for children and young people for the academic year prior to the start of the pilots, i.e. 2005-06, and then to follow up by collecting 'outcome' data for session 2006-07. This could never be a perfectly designed study, since the pilot projects differed considerably in purpose, target population, design and scale; they had different start and end dates, ran for different lengths of time, and some experienced delays in starting.

7.2.2 There is, of course, a very important caveat in respect of what measurable improvements it is reasonable to expect within such a relatively short period of time. Significant improvements can be achieved in attendance and attainment when individual children respond to intensive support. However, many of the young people targeted by pilot projects would have had rather complex support needs, compounded by many years of neglect. Also, some of the projects were based on activities designed to make improvements in more indirect ways, such as through personal education planning, development of the planning and monitoring systems, or by professional training.

7.2.3 We prepared notes of guidance and a pro-forma for both baseline and outcome stages, and invited pilot project co-ordinators to record the required data and return these to the research team. The recording sheet asked for details of age, gender, looked after category, attendance, exclusion, and attainment levels on 5-14 National Assessments and on national qualifications.

7.2.4 Additionally, at the outcome stage, approximately one year later, project co-ordinators were asked to indicate a measure of involvement in pilot-related activities for each participant, from no involvement, through some or moderate involvement, to high involvement.

7.2.5 We also asked pilot project co-ordinators to arrange for young people aged over 10 years to complete the Strengths and Difficulties Questionnaire (SDQ) and the Harter and Harter Self-perception/Self-esteem Questionnaire at the outcome stage, since this was an opportunity to describe a fairly large population of looked after young people on these measures and because we wanted to compare the results with those reported in previous studies. Ideally we would have had both before and after measures, but as the research began after most of the projects had started, this was not possible. In the event, the organisational difficulties encountered meant that the number of questionnaires returned was insufficient for particularly meaningful analysis.

7.2.6 We experienced considerable difficulty in getting the data requested from some of the pilots, especially in relation to outcome data. Typically this was because project leaders did not have immediate access to attendance and attainment data, and were dependent on co-operation from another branch of the local authority for which this work was not necessarily a priority. The difficulties experienced have important implications for understanding the issues raised about the educational progress of looked after children and young people in general, and more specifically in relation to offering guidance about data collection for monitoring purposes within local authorities. They may also have implications for the reliability of the data. These points are elaborated in the following section.

7.3 Data collection issues

7.3.1 The process of collecting the required data was generally problematic, though the particular difficulties encountered varied. Sometimes this was a function of the scale of the project. Whereas it was relatively easy to collect data in a single-strand project with a small number of children, in projects potentially aimed at all looked after children within a local authority there were significant challenges. In one project, for example, baseline data could not initially be provided as the local authority had transferred to a new electronic management system and historic information could not be accessed centrally. The information was subsequently provided by manual methods directly from schools, and it was submitted after we had received the pilot's outcome data. In another authority, which had experienced a number of changes of co-ordinator, outcome data were provided only with difficulty and with considerable help from the research team. These difficulties reinforce previous research findings about problems in tracking looked after children and young people (Jacklin, Robinson, & Torrance, 2006).

7.3.2 The problems encountered fell roughly into two categories. Some projects had clearly underestimated the degree of negotiation and planning required to identify the young people and output the required data. This category of problem included lack of clarity about the deployment of resources to extract the data where this was expected to be extremely time-consuming. In most cases, to do this successfully required the co-operation of administrative staff invariably working in different sections or departments, and project staff in some pilots experienced difficulties in getting this work prioritised.

7.3.3 The second category of problem relates to methodological issues which emerged and which have implications for reporting on the accuracy of the data. The research was useful in uncovering a number of difficulties. One included recording errors, such as discrepancies between records for the same child held on different databases (typically social work and education) within the same local authority, and incorrect attendance and attainment information. Sometimes the explanation appeared to lie in a failure to record information, particularly when a child had moved home and school placements. Where we found examples of missing 5-14 National Assessment levels it was not always possible to establish whether this was due to a failure to record the data or because the child had not been assessed, although authorities were asked to check and clarify this point where possible.

7.3.4 A number of the pilots found that the process of collecting the data alerted staff to particular discrepancies in relation to recording attendance. The problem appears to have been most acute where looked after pupils were involved in off-site education, on either a part-time or a full-time basis. This problem is best understood through the observations of a manager in one pilot project:

We also quickly realised that a pupil can be attending on a very part-time basis (as little as two hours weekly) and their attendance is registered as 100%. There is a school of thought that the pupil is attending to the best of their ability and 100% is an appropriate figure, but it gives a very misleading impression of the amount of education this pupil is receiving, and the more worrying thing is that an undesirable minimum of attendance might easily go unnoticed by those who are not in direct contact with the pupil. There are not the same statutory requirements to review the case at regular intervals for those who are looked after at home and something like this could go on for a longer period than anticipated at its conception for our most vulnerable looked after children. There appears to be no standard agreement about how to record the attendance of pupils who participate in full-time off-site education, even where a child remains on the roll of a mainstream school.

7.3.5 In one local authority, project staff found that practice varied between schools. In one school 100% attendance was recorded, in another the child was assumed to be absent 100% of the time, while in yet another no attendance data were entered, despite the off-site project having supplied the information to the school every week. In one project, workers found examples of a failure to record National Units achieved by young people while attending alternative education.

7.3.6 The number of young people for whom these kinds of recording errors have occurred is likely to be a minority of the study population (typically older children participating in out-of-school projects), but the lack of clarity supports a more general conclusion that there is a need to be more explicit about how attendance and achievement should be recorded in situations where looked after children attend school part-time and/or participate in some form of off-site educational provision. This is clearly an important aspect of the corporate parent responsibility.

7.3.7 Despite the difficulties experienced in obtaining data returns, we have no reason to doubt the accuracy of the information received from pilots, a view that is reinforced by comparisons of our study population with previously published government data (see discussions later in this chapter). A matched data set of over 600 looked after young people is a relatively large study population, though particular analyses have inevitably been carried out on smaller sub-sets of the population.

7.3.8 The combined effects of the small numbers in some projects, spread across age groups, and missing outcome data meant that we were unfortunately unable to carry out the quantitative analysis by type of activity.

7.4 Description of the study population

Data collected

7.4.1 We received from the 18 pilot projects information about 722 individual young people in total across both baseline and outcome data collection stages. However, when the baseline and outcome data were matched on individual identity codes, a total of 636 young people were represented within the matched dataset.

7.4.2 Both baseline and outcome datasets had missing data. The Fife pilot (40 young people) indicated at the baseline stage that it would be unable to provide outcome data, both because the focus of the project was on developing systems and because the administrative resource allocated to the project would not be available at the outcome stage. It was reported that 46 young people (across nine pilots) had withdrawn from the programmes. Where a reason for withdrawal was noted, this was typically one of the following: 'young person no longer in the looked after category'; 'moved to another local authority'; or 'left school'.

Characteristics of the young people

7.4.3 Table 2 below shows details of the young people in the matched dataset across 17 pilots by age bands. The age distribution in the population differed in some respects from the distribution of all looked after children and young people nationally in Scotland. While the proportion at the early secondary school stage in the study population was in line with the national profile, there was a smaller proportion at primary school-age and a higher proportion at the older end of the age spectrum in the study population.

7.4.4 There were 280 girls (44%), 349 boys (55%) and seven cases (1%) for whom gender was not reported. The gender proportions in the study population are exactly in line with the national profile.

7.4.5 Table 3 below shows the care category of the young people in the population studied. The different figures for the total study population between the two tables (i.e. 614 in Table 2 and 624 in Table 3) arise as a result of data missing from pilot returns. More details are provided in the notes under the tables and in page footnotes.

Table 2: Age and stage of participants for each pilot in matched data set

Table 3: Care category of participants

Note: Fife not included. The care-category total does not add up to 636 (the total of the matched dataset) because the care category was not provided for 12 young people.

7.4.6 In terms of care category, the study population closely resembles the national profile in the proportion of young people looked after at home (39% compared with 43% for all Scotland) and in foster care settings (28% compared with 29%). The proportion in kinship care considerably under-represents the national profile (5% compared with 15%), while the proportion in residential care over-represents it (20% compared with 12%). The young people in residential care in the study population were all living in children's units or houses in the community and none was living in a residential school or in secure care.

SDQ and Harter Questionnaires

7.4.7 As reported in the introduction to this chapter, we were relatively unsuccessful in achieving the desired level of questionnaire returns. Since the questionnaires were administered at a single point they could only ever yield descriptions of the study population towards the end of the pilot projects, rather than provide an indication of the value added by participation. We outline below a summary of the data received, but this information needs to be approached with caution because of the relatively small numbers involved and because it undoubtedly represents a skewed population.

7.4.8 Pilot project co-ordinators were asked to arrange for young people aged 10 years and above to complete the Strengths and Difficulties Questionnaire (SDQ) and the Harter and Harter Self-perception/Self-esteem Questionnaire. Based on estimates about the numbers who might realistically complete questionnaires, we hoped for a potential sub-set of 390 young people. In the event, this proved to be overly optimistic. We received returns from nine of the pilots: 67 SDQs (37 girls and 28 boys), i.e. 17% of the potential sub-set (11% of the matched dataset), and 79 Harter questionnaires (43 girls and 28 boys), i.e. 20% of the potential sub-set (12% of the matched dataset).

7.4.9 The questionnaires were more likely to have been completed by young people living in residential care and foster care settings than those looked after at home. The SDQs were more likely to have been completed by young people who had high involvement in pilot activities, and therefore the results might be expected to present a more positive picture of the young people's wellbeing than would be typical of the overall study population. The Harter questionnaire, however, was completed by roughly even numbers of young people at all three levels of involvement.

Strengths and Difficulties Questionnaire (SDQ)

7.4.10 The SDQ is a brief screening questionnaire which gives an indication of emotional wellbeing. It comprises 25 items on five sub-scales, each with five items. Respondents rate themselves on statements according to whether they consider them to be 'not true', 'a bit true' or 'definitely true'. The scores for individuals are then analysed to indicate 'low needs', 'some needs' and 'high needs'.
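The scoring scheme described above can be sketched as follows. The sub-scale names are the standard five; the item ratings and the rule that total difficulties excludes the pro-social scale follow the instrument, but any banding thresholds (low/some/high needs) are deliberately omitted here, as the published cut-off tables are not reproduced in this report.

```python
# Sketch of SDQ self-report scoring as described above.
# Each of the 25 items is rated 0 ('not true'), 1 ('a bit true')
# or 2 ('definitely true'); five items per sub-scale.

SUBSCALES = ["emotional", "conduct", "hyperactivity", "peer", "prosocial"]

def score_sdq(ratings):
    """ratings: dict mapping each sub-scale name to five item ratings (0-2)."""
    scores = {name: sum(items) for name, items in ratings.items()}
    # Total difficulties is the sum of the four 'difficulties' scales,
    # i.e. everything except the pro-social scale.
    scores["total_difficulties"] = sum(
        scores[s] for s in SUBSCALES if s != "prosocial"
    )
    return scores

# Illustrative (invented) responses for one young person.
example = {
    "emotional": [1, 0, 2, 1, 0],
    "conduct": [0, 1, 0, 0, 1],
    "hyperactivity": [2, 1, 1, 0, 1],
    "peer": [0, 0, 1, 0, 0],
    "prosocial": [2, 2, 1, 2, 2],
}
print(score_sdq(example)["total_difficulties"])  # 12
```

Each difficulties sub-scale therefore ranges from 0 to 10, and total difficulties from 0 to 40; individual totals are then compared against the published need bandings.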

7.4.11 The scores on the SDQ scales for the sub-population obtained are presented in Table 8 in Appendix 4. On the 'pro-social' scale the group is towards the 'low needs' end: 81% of the sub-population of 67 young people had low needs, compared with only 9% who had high needs. On the four 'difficulties' scales, however, an average of 45% of a sub-population of 58 young people rated themselves as having high needs, compared with 31% who had low needs. More young people rated themselves as having conduct problems and some indication of hyperactive behaviour than as having social or peer issues.

7.4.12 There were no statistically significant differences in terms of age, gender and looked after category, with the exception of gender in relation to emotional symptoms. Girls rated themselves in a way that suggested greater emotional need than boys, but with mean scores still within the low need range. (The proportions of respondents falling into each category of need are shown in Table 9 in Appendix 4.)

7.4.13 The SDQ also invites respondents to indicate whether they have difficulties in one or more of the areas of emotions, concentration, behaviour, or being able to get on with other people, and whether these difficulties are minor, definite or severe. Thirty-eight out of 67 respondents (57%) indicated that they had some difficulty, though in most cases these were rated as minor. This reflects the results in the first section of the questionnaire, with 55% registering some need/high need on the total difficulties score. For 26 (68%) of the young people, the difficulties had been present for more than a year. The perceived difficulties were more likely to cause problems in school and family contexts than with friendships and in leisure activities, a finding that is consistent with the young people's self-ratings on the scales (see Table 10 in Appendix 4).

7.4.14 How do these findings compare with those in other studies? A previous study of 41 looked after children in one English local authority found that 65% rated themselves, on average, across the four difficulties scales, as having low needs (compared with 45% in our sub-set), while 15% self-rated as having high needs (compared with 31% in our sub-set). That study also found a mean rating for total difficulties, measured by self-ratings on the SDQ, of 12.6, while in our sub-population the mean rating was 15.9 (Richards, Wood, & Ruiz-Calzada, 2006). The mean for the general population is 10.3.

7.4.15 Richards et al. also found that teachers' and carers' ratings of difficulties were higher than those of the young people themselves. This indicates that despite our sub-set being skewed towards those who had higher involvement in projects their self-ratings of perceived difficulties do appear to be somewhat high. Whilst we should be cautious in drawing conclusions, this finding does support previous work indicating concerns about the poor mental health of looked after children and young people (Meltzer, Gatward, Corbin, Goodman, & Ford, 2003).

Harter Self-Esteem Questionnaire

7.4.16 The Harter questionnaire is an instrument designed to measure self-esteem in children and young people. It invites respondents to rate how closely they resemble one of a pair of opposing propositions, such as 'some kids find it hard…' versus 'other kids find it easy…'. The questionnaire has two underlying principles: that self-esteem has several components; and that young people's evaluation of their self-esteem is based on a comparison of their attributes with those of their peers.

7.4.17 Responses are rated and added to create scores on the following scales: scholastic performance, social acceptance, athletic performance, physical appearance, behaviour, and global self-esteem.

7.4.18 The detailed results are shown in Table 11 in Appendix 4. We used a version of the questionnaire that was revised for use in a large study of Scottish schoolchildren (Hoare, Elton, Greer, & Kerley, 1993). The results of that study therefore provide a useful comparison. Each item in the questionnaire is scored on a scale from 1 to 4, so the mid-point is 2.5; a higher score indicates greater self-esteem. Hoare et al. present their results in a table showing mean scores by age groups and for males and females. Means are reported for the five scales and also for 'global' self-esteem. While mean scores vary by age, they fall within a range of 2.75 (for girls in S4) to 3.06 (for boys in P5). Our sub-set gave a global mean score of 2.65 for boys and girls of all ages. Girls in our sub-set scored lower than boys (a mean of 2.58, compared with 2.74), consistent with the findings of Hoare et al.

7.4.19 These findings are indicative of lower self-esteem among the looked after population, again consistent with previous findings which highlight poor mental wellbeing in this group of young people.

7.5 The impact of the pilot projects

7.5.1 The following sections present findings from analyses of the matched dataset. We examine the value added by young people's involvement in the pilot projects, in relation to data about attendance, exclusion and attainment.

Attendance and exclusion

7.5.2 Tables 4 and 5 together describe attendance at, and exclusion from, school of the children in the matched data set during the school year 2006-2007, i.e. the year of involvement in the project for most of the young people. Data were not available for 85 young people in relation to attendance and for the number of days excluded, and for 94 in relation to the number of times excluded.

7.5.3 As Table 4 shows, a large variation in attendance is evident among the young people in the study population. The mean of 81% is somewhat lower than the mean attendance reported for looked after children and young people nationally in Scotland in 2007 (87%) and is also very considerably lower than the mean (93%) for pupils who are not looked after.

Table 4: Attendance and exclusion

                                                N     Minimum   Maximum   Mean    Std. Deviation
% attendance in year during programme           551   1.18      100.00    81.2    22.6
Number of days excluded in year during          551   0         91.5      3.5     11.0
programme
Number of times excluded in year during         542   0         9         0.63    1.5
programme

7.5.4 Table 5 shows data on exclusion from school for a total of 542 young people. Most of them, i.e. 420 (78% of those for whom data were provided), were never excluded from school, while 130 (24%) had been excluded during the year. Excluded pupils had experienced, on average, 15 days' exclusion in the year (ranging from a minimum of one day to a maximum of 91 days).

Table 5: Number of times excluded

No of times excluded    No of pupils    Total exclusions    % of pupils
(instances)                             (pupil numbers x
                                        times excluded)
0                       420             0                   77.5
1                       47              47                  8.7
2                       20              40                  3.7
3                       23              69                  4.2
4                       11              44                  2.0
5                       7               35                  1.3
6                       4               24                  0.7
7                       4               28                  0.7
8                       2               16                  0.4
9                       4               36                  0.7
Total                   542             339                 100

7.5.5 Of the 130 reported to have been excluded, the number of instances of exclusion was reported for 122 young people and of these a total of 75 (61%) had been excluded more than once. As a comparison: of all pupils who were excluded from school in Scotland in 2006-07, 40% were excluded more than once.

7.5.6 The rate of exclusions (339 out of a total of 542) represents 625 instances of exclusion per 1000 pupils. This rate is very high compared to the national rate of exclusions for looked after children and young people (368 per 1000), which in turn is significantly higher than the rate of exclusion for all pupils in Scotland (64 per 1000). Another useful comparison is the peak rate for all pupils in Scotland (at the S3 stage) of 204 per 1000. Our study population is of course likely to be skewed towards young people particularly at risk of exclusion or selected for participation in pilot activities because they had been excluded from school.
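The per-1000 rate used in these comparisons is simply the number of instances divided by the number of pupils, scaled up; a minimal check of the figure quoted above:

```python
# Exclusion rate per 1000 pupils, as used in the comparisons above.
def rate_per_1000(instances, pupils):
    return instances / pupils * 1000

study_rate = rate_per_1000(339, 542)  # instances and pupils from Table 5
print(round(study_rate))  # 625
```

The same formula underlies the national comparison figures of 368 per 1000 (looked after pupils) and 64 per 1000 (all pupils).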

7.5.7 While gender appeared to have virtually no effect on attendance, boys had a higher mean number of exclusions than girls and were more likely to be excluded (see Table 12 in Appendix 4). Male pupils account for 78% of all exclusions in Scottish schools and are excluded at a significantly higher rate, so the gender difference in the research population is small compared with the national picture. Government statistics for exclusion in Scotland do not provide rates by gender for looked after children and young people.

Impact of the projects on attendance and exclusion

7.5.8 Changes in attendance and exclusion between baseline and outcome data were calculated (see Table 13 in Appendix 4). We found that attendance had increased from 78% to 81%, a statistically significant difference.

7.5.9 The mean number of exclusions declined from 0.85 to 0.63, a change which is also statistically significant. The mean number of days excluded increased marginally from 3.5 to 3.7.

7.5.10 Attendance was lower among the older age groups. Attendance improved during the pilot year in all age groups, with the improvements reaching statistical significance among the 9-10 year olds and those aged over 15. Among the over 15s both the number of exclusions and the number of days excluded reduced significantly.

Exclusion rates across the pilot authorities

7.5.11 Considerable differences in exclusion rates have been reported for all pupils between local authorities in Scotland, varying from 10 per thousand in Orkney and 12 in East Renfrewshire, to 110 in Glasgow, and 126 in Dundee.

7.5.12 We were therefore interested to compare exclusion rates for the pilot local authorities. Examining the rates for nine of the 18 pilots (others were excluded from this analysis due to low numbers), a similarly wide variation was found (see Table 14 in Appendix 4). We should be very cautious in interpreting these findings, particularly as the pilots had different aims, with some specifically targeting young people at the peak age for risk of exclusion. However, the rates of exclusion for all looked after children show considerable variation between local authorities in Scotland and this is clearly a matter which should be examined further.

Impact of the pilots on attainment: 5-14 National Assessment Levels

7.5.13 Children in P1-S2 are assessed by their teachers, and attainment is reported to parents, in all areas of the 5-14 Curriculum. National Assessments are used by teachers to confirm their judgements about pupils' levels of attainment in reading, writing and mathematics. Assessments are based on attainment levels and targets set out in national guidelines, which define six levels, A to F. Level A should be attainable by some pupils in P2 and by most in P3, while Level E should be attainable by some pupils in P7/S1 and by most in S2. Level F should be attainable in part by some pupils, and completed by a few, in the course of P7-S2.

7.5.14 Details of the results for 5-14 National Assessments in reading, writing and mathematics are shown for the matched dataset and also for four different age cohorts at outcome stage in Appendix 4 (Tables 15-19). The appendix also shows 5-14 data collected nationally for looked after children and published in June 2003 (Table 20). The data tables use age groupings, while the government data are presented in school stages. However, to take one example, we could compare the 11-12 age band of the study population with the S1 stage in the national data. On the target of achieving Level D or better, the pilot population differs from the 2003 national cohort of looked after children in reading (pilot group 22.9%; national 42.0%) and writing (pilot group 18.8%; national 30.0%) but is similar in mathematics (pilot group 30.6%; national 31.0%). We need to be cautious about making such comparisons because the pilot population may not be representative (e.g. it is likely to include children with more difficulties) and because of the small numbers involved.

7.5.15 It is clear that looked after children have low attainment in reading, writing and mathematics. Between 65% and 70% of non-looked after children have attained Level D by age 11-12, compared with the much lower proportions of looked after children attaining these levels. One approximate comparison is to say that while the average non-looked after child progresses at a rate of up to one National Assessment level for every year of chronological age (i.e. six levels in about nine years), it takes the average looked after child about three years to progress up one level.

7.5.16 Since we had 5-14 National Assessment data for more than 230 young people for two consecutive years, i.e. at both baseline and outcome stages, we checked what progress they had made in one year. We found that about 40% of the children and young people participating in the pilots advanced by one 5-14 National Assessment level (38% in Reading; 41% in Writing; 38% in Maths), much better than the average progress for looked after children and similar to the advances made by non-looked after children nationally. These findings are statistically significant (see Table 21 in Appendix 4).

7.5.17 In fact we found that the mean improvement amounted to between 0.4 and 0.5 of a 5-14 level. As the 5-14 levels form a six-point scale (A-F), with progression expected over about nine years, one might expect young people to progress at a mean rate of about 0.6 of a level per year. On this basis, the progress of the pilot population could be judged as good for particularly disadvantaged pupils.

7.5.18 We also examined the data by care category, distinguishing between young people looked after at home, in residential care, in foster care and in kinship care. There were, however, no apparent differences in mean improvement between categories of care.

Impact of the pilots on attainment: National Qualifications

7.5.19 Results for Standard Grades and National Qualifications were provided for 122 young people in the matched data set. These exam results were converted to tariff points using the Unified Points Score Scale, a system which allocates points to all awards and grades within awards, and allows a single tariff score to be computed for an individual. Scores were computed both for total exam results and for the 'best 5' results. In reality, for many of the young people the two scores were identical, with some having fewer than five passes. Those who might have been expected to have awards but did not were allocated zero points.
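The 'total' and 'best 5' tariff scores described above can be sketched as follows. Note that the points-per-grade values used here are hypothetical placeholders for illustration only; the actual Unified Points Score Scale assigns its own specific points to each award and grade.

```python
# Sketch of the tariff-score computation described above.
# NOTE: these point values are HYPOTHETICAL, for illustration only;
# they are not the published Unified Points Score Scale values.
HYPOTHETICAL_POINTS = {
    ("Standard Grade", 1): 38, ("Standard Grade", 2): 28,
    ("Standard Grade", 3): 22, ("Standard Grade", 4): 16,
    ("Standard Grade", 5): 10, ("Standard Grade", 6): 4,
}

def tariff_scores(awards):
    """awards: list of (qualification, grade) tuples for one young person.
    Returns (total score, 'best 5' score); an empty list scores zero."""
    points = sorted((HYPOTHETICAL_POINTS[a] for a in awards), reverse=True)
    return sum(points), sum(points[:5])

# With fewer than five passes the two scores coincide, as noted above.
total, best5 = tariff_scores([("Standard Grade", 3), ("Standard Grade", 4),
                              ("Standard Grade", 5)])
print(total, best5)  # 48 48
```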

7.5.20 Table 6 shows the tariff scores for the young people for whom results were provided, and illustrates the very wide range of attainment achieved. This range is partly a function of the spread of ages of those who were in S3 to S5 and beyond. For the 78 young people in S4 from the pilots (excluding those not presented for exams), the mean tariff score based on all results was 59.5 (SD 50.1). The national mean for pupils who are not looked after is 173. This comparison is of course simply confirmation of the known low performance of young people who are looked after.

Table 6: Tariff points for National Qualifications

                                                                    N     Min   Max   Mean   SD
Mean scores of 'best 5', including those who had not been           148   0     264   36.7   42.4
presented for exams
Mean scores of 'best 5' for those who had been presented            122   2     264   44.5   42.6
for exams
Mean scores of all results, including those who had not been        149   0     264   43.6   52.3
presented for exams
Mean scores of all results for those who had been presented         122   2     264   53.0   52.9
for exams

7.5.21 We found that baseline 5-14 National Assessment levels (particularly in maths) had some moderate effects in predicting subsequent results obtained by the young people in Standard Grades and National Qualifications, an unsurprising finding.

7.5.22 Some other features were also found to correlate with results in Standard Grades and National Qualifications. Girls appear to have done significantly better than boys, an effect not observed in the analysis of 5-14 National Assessment data. Young people with high reported attendance, unsurprisingly, attained more. Young people looked after at home achieved significantly less, while those in residential care and, especially, those who are fostered, achieved significantly more. Level of involvement in the pilot projects produced no significant effect, although the number for whom we have such data (i.e. 69) is small.

7.5.23 Because of the significant number of older young people from the Glasgow project represented in the data set, we were able to distinguish two particular programmes (CLASS and EVIP); the young people on these programmes appeared to be attaining less, and Glasgow students overall appeared to have lower attainment than non-Glasgow young people. These results seemed odd, given the particular focus on tuition and attainment in this pilot. However, there are two possible explanations. First, the tariff score does not include vocational qualifications, which the Glasgow programmes emphasised and which young people in EVIP are known to have achieved. Secondly, there were particular problems in data recording within the Glasgow pilot, and it is possible that many young people who appeared to have no qualifications in fact did not have their achievements recorded (see also Tables 22-24 in Appendix 4).

7.5.24 However, not all of the differences observed necessarily arose during the study year, and they cannot confidently be attributed to the effects of involvement in the pilot projects. They may have a longer history, or be due to differences in the characteristics of the young people at the point of recruitment to the pilots. We therefore examined what effects we could observe relative to a baseline of previous attainment.

Impact of the pilots: Involvement in pilot project activity

7.5.25 In order to explore the hypothesis that the pilot projects added value overall, i.e. that the young people showed improvement in attendance, exclusion and attainment, we used multiple regression analysis33 to examine whether the extent of the young people's involvement in pilot project activities influenced their outcomes.

7.5.26 The project co-ordinators indicated the level of involvement on a three-point scale for a total of 402 young people (63% of the matched dataset), as shown in Table 7. We do not know exactly how judgements were made about level of involvement, but we can assume that assignment to the categories was not arbitrary and, since the data in any case exclude cases where this judgement proved impossible, it is reasonable to conclude that the results are meaningful.

Table 7: Reported level of involvement in pilot project activities

Level of involvement: N (%)

No involvement: 69 (17%)

Low/moderate involvement: 130 (32%)

High involvement: 203 (51%)

Total: 402 (100%)

7.5.27 In a simple model which examined outcomes correlated with level of involvement, we found a statistically significant correlation for attendance only. It is likely, however, that the young people with high involvement in the projects had better attendance in the first place, and so the improvement observed cannot be attributed solely to their engagement with the pilot activity.

7.5.28 Using a 'value-added' multiple regression model, where baseline data are held constant, we were able to examine apparent progress for the 203 students reported as having a high involvement in pilot projects. The results showed no effects in relation to attendance, days excluded and instances of exclusion.
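The difference between the simple model of paragraph 7.5.27 and the 'value-added' model described above can be illustrated with a minimal sketch. The code below uses entirely synthetic (invented) data, not the study's dataset: it simulates the confound discussed in 7.5.27, where highly involved pupils also start from a higher baseline, and shows how including baseline attainment as a regressor isolates the apparent added value of involvement.

```python
# Hedged sketch of a 'value-added' regression, on synthetic data only.
# All variable names and effect sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Synthetic baseline attainment and a high-involvement indicator (0/1).
baseline = rng.normal(50.0, 10.0, n)
high_involvement = rng.integers(0, 2, n).astype(float)

# Simulate the confound: highly involved pupils start 5 points higher,
# while the true 'added value' of involvement is 3 points.
baseline = baseline + 5.0 * high_involvement
outcome = 0.8 * baseline + 3.0 * high_involvement + rng.normal(0.0, 4.0, n)

# Naive model: outcome ~ involvement only. The coefficient absorbs the
# baseline difference between the groups and so overstates the effect.
X_naive = np.column_stack([np.ones(n), high_involvement])
naive_coef = np.linalg.lstsq(X_naive, outcome, rcond=None)[0][1]

# Value-added model: outcome ~ baseline + involvement. Holding baseline
# constant, the involvement coefficient estimates progress over the year.
X_va = np.column_stack([np.ones(n), baseline, high_involvement])
va_coef = np.linalg.lstsq(X_va, outcome, rcond=None)[0][2]

print(round(naive_coef, 1), round(va_coef, 1))
```

In this simulation the naive coefficient comes out well above the true effect, while the baseline-adjusted coefficient recovers a value close to the 3 points actually added, which is the logic behind holding baseline data constant in the analysis reported here.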

7.5.29 However, statistically significant progress was found in the 5-14 National Assessments in reading and writing, although not in mathematics: young people reported to have high levels of involvement in the pilots appeared to have made appreciably more progress in reading and writing than those with less involvement. Given the particular emphasis on literacy across the pilots, it is reasonable to conclude that this additional support was effective in achieving improvements in reading and writing.

7.5.30 It was unfortunately not possible to make any meaningful claims about value added for the different pilot authorities or for particular activities, since the variation between the projects meant that the numbers required for reliable calculations were not available.

7.6 Conclusions

7.6.1 The focus of this chapter was the impact of the pilot projects measured by quantitative data, i.e. school attendance, exclusion from school, and attainment measured by 5-14 National Assessments and National Qualifications.

7.6.2 Results were obtained on the Goodman Strengths and Difficulties Questionnaire and the Harter Self-Esteem Questionnaire for only a small sub-set of the pilot population. Bearing in mind this limitation, the findings support previous work indicating concerns about the poor mental wellbeing of looked after children and young people.

7.6.3 Previous research has shown that collecting robust data about the outcomes of looked after children and young people is problematic, and this finding has been confirmed by the results of the research on the pilot projects. The data tracking systems of many of the pilot local authorities were of variable quality, but the research process itself appears to have been helpful to the pilot authorities in relation to identifying weaknesses in tracking looked after children and young people.

7.6.4 Attendance improved in all age groups, findings which were statistically significant among 9-10 year olds and those over 15. The instances of exclusion and the number of days excluded reduced significantly amongst those young people over 15.

7.6.5 About 40% of the young people participating in the pilots advanced by one 5-14 National Assessment level, much better than the average progress reported for all looked after children and similar to advances made by non-looked after children nationally. Again, this finding was statistically significant.

7.6.6 The research identified effects related to the involvement of the young people in pilot activities per se, but was not able to attribute effects to particular activities. Perhaps this is not particularly important, since evidence from a previous research study suggests that high levels of participation in 'study support' activities can have a significant impact on attainment, attitudes to school and attendance. Improvements were found to be related to curriculum-focused activities, but also to drop-in sessions, sport and other activities (MacBeath et al., 2001). Perhaps we can conclude that local authorities and voluntary agencies should be encouraged to provide a range of activities capable of engaging looked after children and young people, and that the precise nature of the intervention is probably less important than participation in an activity.

7.6.7 Younger looked after children who had high levels of involvement in the pilot projects appeared to have made appreciably more progress in one year than the others, as measured by 5-14 National Assessments in reading and writing. This is encouraging, because it suggests that providing targeted additional support can raise attainment.