Executive summary, 2008 state pilot program results

E 1850.3 P643r 2008
Pilot Program 2008 Evaluation Results
Program Description
In 2006 the Oklahoma State Legislature directed the State Board of Education to establish a
pilot infant and toddler program (hereafter called "Pilot Program") funded through private
donations and state funds to serve at-risk children and their families in at least one rural and
one urban area of the state. The Request for Proposals (RFP) developed by the Oklahoma State
Department of Education (2006) specified varying levels of educational preparation for
classroom personnel, family support services with specified caseloads, and continual
professional development. Community Action Project of Tulsa County (CAPTC) was awarded
the grant contract, and during fiscal year 2007 (July 1, 2006 to June 30, 2007) it collaborated
with five other early childhood agencies to implement high-quality infant and toddler services in
13 sites across Oklahoma.
The Pilot Program reached 90 classrooms in June 2007 and 139 in June 2008. Currently, these
classrooms are distributed across 15 agencies operating in a variety of communities across
Oklahoma. These providers offer services to 1,204 infants and toddlers from families whose
incomes are at or below 185% of the federal poverty level. Ten of the participating providers are
non-profit organizations (seven of which operate Early Head Start and/or Head Start
programs), four are private/for-profit providers, and one is a Tribal government.
Program Evaluation
CAPTC, as the lead grantee for the Pilot Program, had dual goals for program evaluation: to
inform continuous program improvement and to document child and family outcomes. To
achieve these goals, CAPTC collaborated with the Early Childhood Education Institute at the
University of Oklahoma-Tulsa to design and implement a three-phase evaluation.
In the Phase I evaluation, which is ongoing, all Pilot Program sites report on grant
requirements such as teacher qualifications, teacher-child ratios, and hours of operation. These
reports are submitted monthly to CAPTC.
The focus of the Phase II evaluation, implemented with Pilot Program sites in their second year
of operation, is to assess program quality and to use these data to inform action plans for
continuous program improvement. The major evaluation question is: What is the quality of
Pilot Program-funded classrooms?
To answer this Phase II evaluation question concerning program quality, Early Childhood
Education Institute evaluators randomly selected classrooms and administered tools commonly
used to assess quality in infant and toddler settings. A specially trained observer spent at least
3 hours of observation time per classroom while administering these measures.
Evaluation Results for 2007-08: Overall Classroom Quality
In the spring of 2008, 46 (77%) of the 60 classrooms in their second year of participation in the
Statewide Pilot Program were observed for 3 hours and assessed using the Infant/Toddler
Environment Rating Scale-Revised Edition, or ITERS-R (Harms, Cryer, & Clifford, 2003). The
ITERS-R has 35 items divided into seven subscales (Space and Furnishings, Personal Care
Routines, Listening and Talking, Activities, Interaction, Program Structure, and Parents and
Staff) that assess the overall quality of programs for children from birth to 30 months of age.
The anchor ratings associated with ITERS-R scores are: 1=Inadequate, 3=Minimal, 5=Good,
7=Excellent.
[Figure: Mean ITERS-R subscale scores for the 46 observed classrooms. Standard deviations:
Space & Furnishings SD=.82; Personal Care Routines SD=.78; Listening & Talking SD=.78;
Activities SD=.70; Interaction SD=.85; Program Structure SD=.70; Parents & Staff SD=.43;
Total Score SD=.33.]
Overall, as displayed in the figure above, the average ITERS-R score was 5.08. This rating is
considered good and is similar to the results found in the Early Head Start National Study
(Administration for Children and Families, 2006). Scores of 5 (good) or greater were evident on
5 of the 7 subscales. Of the 46 classrooms observed, 23 scored above 5 on Space and
Furnishings, 6.4% scored above 5 on Personal Care Routines, 93.4% scored above 5 on Listening
and Talking, 95.1% scored above 5 on Activities, 89.1% scored above 5 on Interaction, 45.6%
scored above 5 on Program Structure, and 69.4% scored above 5 on their Total ITERS-R score.
Thus, many programs demonstrated indicators of high-quality programming for infants and
toddlers.
Because a rating of 5 is considered good, the number and percentage of classrooms scoring
below 5 is of interest when targeting quality improvement efforts. Of the 46 classrooms
observed, 37% scored below 5 on Space and Furnishings, 93.6% scored below 5 on Personal
Care Routines, 6.6% scored below 5 on Listening and Talking, 4.9% scored below 5 on Activities,
10.9% scored below 5 on Interaction, 54.4% scored below 5 on Program Structure, and 30.6%
of the classrooms scored below 5 on their Total ITERS-R score. These results have been shared
with program administrators and were used to drive professional development programming.
Evaluation Results for 2007-08: Teacher Sensitivity
In the spring of 2008, 46 (77%) of the 60 classrooms in their second year of participation in
the Statewide Pilot Program were observed for 3 hours and assessed using the Arnett Caregiver-
Child Interaction Scale, or Arnett (Arnett, 1989). The Arnett is a 26-item scale that assesses the
quality and content of the interactions between teacher and child. The items measure the
emotional tone, discipline style, and responsiveness of the caregiver in the classroom. When
using the Arnett, the observer rates the extent to which the caregiver exhibits the behavior
described in each item on the following 4-point scale: 4=Very much, 3=Quite a bit, 2=Somewhat,
1=Not at all.
[Figure: Mean Arnett Caregiver-Child Interaction Scale subscale ratings for the 46 observed
classrooms: Sensitive 3.69 (SD=.21); Harsh (SD=.25); Detached (SD=.27).]
Overall, as displayed in the figure above, staff were rated as "sensitive," with a mean rating of
3.69 on the 4-point scale. Staff were rated "not at all" harsh or detached, with mean scores of
1.17 and 1.35, respectively. The mean sensitivity rating of 3.69 is consistent with the means of
3.4 to 3.5 found in the Early Head Start classrooms that participated in the Early Head Start
Research and Evaluation Project (Administration for Children and Families, 2004).
Classrooms with scores of 4 (very much) or 3 (quite a bit) on the Harsh and Detached subscales,
or scores of 2 (somewhat) or 1 (not at all) on the Sensitive subscale, were targeted for quality
improvement efforts. Again, these results were shared with program administrators to inform
future State Pilot Program professional development.
Evaluation Results for 2007-08: Conclusions
Taken together, these two measures suggest high program quality in the Pilot Program sites.
Both the measure of overall classroom quality (ITERS-R) and the measure of teacher sensitivity
in interactions (Arnett) indicated good quality.
As noted, although positive, these results did drive Pilot Program professional development. In
conjunction with the Pilot Program professional development experts, a plan was implemented
to use the ITERS-R and Arnett results to inform and focus individual mentoring and large-group
trainings. The professional development experts delivered feedback to the observed sites based
on their unique ITERS-R and Arnett results. The overall results will shape the content of future
large-group, seminar-style trainings.
Next Steps
A new evaluation component, added as Pilot Program sites enter their third year of funding
and operation, will examine child outcomes and specific program characteristics. The Phase III
evaluation will use a quasi-experimental design with comparison-group methodology and
random sampling of children.
The child outcomes and program information produced during Phase III will be of interest to
audiences including the State Department of Education, funders, and state-level governmental
officials. Due to the anticipated scope of this phase of the evaluation and the inclusion of a
comparison group, the results are also expected to reach a wider audience, including the
early childhood education profession and national-level policy forums.
REFERENCES
Administration for Children and Families (2004). The role of Head Start programs in addressing
the child care needs of low-income families with infants and toddlers: Influences on child
care use and quality. Washington, DC: U.S. Department of Health and Human Services.
Administration for Children and Families (2006). Early Head Start Research and Evaluation
Project: Research to Practice-Child Care. Washington, DC: U.S. Department of Health and
Human Services.
Arnett, J. (1989). Caregivers in day-care centers: Does training matter? Journal of Applied
Developmental Psychology, 10, 541-552.
Harms, T., Cryer, D., & Clifford, R. (2003). Infant/Toddler Environment Rating Scale-Revised
Edition. New York, NY: Teachers College Press.