Earlier today, the Texas
Education Agency released a report titled 2015-16 A-F Ratings: A Report to the 85th
Texas Legislature. HB
2804 (84th Texas Legislature, 2015) requires changes to the state
accountability system, effective with the 2017-18 school year. The changes
include assigning districts and campuses a rating of A, B, C, D, or F for overall performance, as well as for
performance in each of the following five domains: Domain I: Student
Achievement, Domain II: Student Progress, Domain III: Closing Performance Gaps,
Domain IV: Postsecondary Readiness, and Domain V: Community and Student
Engagement. TEA must assign the grades in Domains I-IV, and overall; public
schools and districts generate their own grades in Domain V, Community and
Student Engagement. The state must
incorporate three of the designated grades, selected by the schools and districts,
in determining the overall letter grades.

HB 2804 also required the agency
to report what the A-F ratings would have been for each Texas public school district
and campus, in Domains I-IV, if the A-F rating system had been in place for the
2015-16 school year. The 494-page document was due to the legislature by
January 1, 2017, as a precursor to the A-F accountability rating system that is
to be implemented in the summer of 2018.

The agency’s report contains
helpful resource materials, such as an overview of the A-F rating system, the domain
targets (which are the cut scores used to determine the letter grades), the domain
methodologies, and over 300 pages’ worth of results for campuses and school districts.
The report also contains three pages of caveats,
or limitations, that are tempting to overlook but are important for understanding
just what is and is not included in this preliminary “work-in-progress”
report. The following caveats, on pages
12-14, indicate that these A-F ratings are just a “starting point,” with numerous technical decisions that remain to
be made before implementation of the A-F system in spring 2018 (emphasis added):

· “The ratings in this report are for informational purposes to meet a legislative requirement.”

· “No inferences about district or campus performance in the 2015–16 school year should be drawn from these ratings, and these ratings should not be considered predictors of future district or campus performance ratings.”

· “The Domain I–IV targets used to determine the A–F ratings in this report are based on rating cut points determined by the commissioner for the purpose of demonstrating one possible, but not necessarily the final, approach.”

· “The final methodology to determine the overall rating label, including the process to convert the domain outcomes to a scale that can be weighted across the five domains, will be developed with further stakeholder input and is expected to be adopted in the Texas Administrative Code in spring 2018.”

Preliminary Observations and Key Findings on Statewide Results

It’s what the report does not
include that may be most important in providing context at this point. The
agency appears to have opted to let the data speak for themselves; there are
few, if any, analytic or interpretive remarks provided to help explain
statewide results or trends. Moak, Casey
and Associates (MCA) has prepared some observations and charts to help meet this
need. (NOTE: MCA will continue to analyze the A-F results in
this report and will have additional findings to discuss later.) Here are some of our most immediate
observations:

· Statewide, campuses and districts received a letter grade of “C” most often in Domains I, II, and IV, while “D’s” and “B’s,” in that order, were the grades most often received in Domain III. (Figure 1)

· The number of campuses with an “A” ranges from 740 (10%) in Domain III to 981 (13%) in Domain I. In contrast, the number of campuses with an “F” ranges from 595 (8%) in Domain III to 1,040 in Domain II. (Table 1)

· Most campuses and districts rated as “Met Standard” in 2016 received “A,” “B,” and “C” letter grades in the four domains. (Tables 1 and 2)

· Approximately 30% of Met Standard-rated campuses received “D” and “F” ratings in the four domains. For example, 29% (2,105 of 7,099) of campuses received “D” and “F” ratings in Domain I, while 31%, 39%, and 28% of Met Standard-rated campuses received “D” and “F” ratings in Domains II, III, and IV, respectively. (Table 1)

· Most campuses and districts rated as “Improvement Required” (or “IR”) in 2016 received “D” and “F” ratings in each of the domains. However, some 97 IR-rated campuses and another 27 districts received grades as high as an “A,” “B,” or “C” in Domain II; similarly, 118 IR-rated campuses and 19 districts received grades as high as an “A,” “B,” or “C” in Domain IV. (Table 1)

· In each of Domains I-IV, for elementary schools, middle schools, high schools, and public school districts (including charters), the average letter grade was a “C.” For K-12 campuses, the average letter grades for Domains I-IV were “C,” “B,” “C,” and “C,” respectively. (Tables 1 and 2)

There also appear to be some
noteworthy relationships between campus or district student demographics and
letter grade ratings:

· Campuses with low percentages of Economically Disadvantaged students enrolled (0-20%) received the most “A’s” in each of the four domains. (Figure 5)

· Campuses with high percentages of Economically Disadvantaged students enrolled (60-80% and 80-100%) received the most “D’s” and “F’s” in Domains I, II, and IV. (Figure 5)

· Campuses with high percentages of White students enrolled (60-80% and 80-100%) tend to receive more “A’s” and “B’s” in Domains I, II, and IV than campuses with similarly high enrollments of African American or Hispanic students. (Figure 4)

· Campuses with high percentages of Hispanic students enrolled (60-80% and 80-100%) tend to receive more “C’s” and “D’s” in Domains I, II, and IV than campuses with similarly high enrollments of African American or White students; they also received more “A’s” and “B’s” in Domain III than the other student groups. (Figure 3)

· The relatively small number of campuses with high percentages of African American students enrolled (60-80% and 80-100%) received the most “F’s” in each of the four domains. (Figure 2)

Important Reminder about Correlations: Subscribers are wise to keep
in mind the “golden rule” of interpreting correlations between variables: correlation does not indicate causation. Causation can only be demonstrated through
carefully designed experiments; descriptive statistics do not prove that anything
caused anything else.

TEA included eight
correlation tables in Appendix E. However, the agency did not include key
information such as p-values, other descriptive statistics, or explicit
definitions for each of the variables used. The absence of these details matters
to the analysis. However, if we assume statistical significance, two interpretations can be made:

There is a strong negative relationship between the percentage of economically
disadvantaged students and Domain I: Student Achievement in both the district
and campus data (-0.70 and -0.66, respectively). Statistically, this means that
as student poverty increases in a district or campus, performance in Domain I
generally decreases. (Please see Appendix E, pages E-3 and E-5.) This is consistent with years of analyses
conducted by MCA and others on state accountability data.

There is a weak negative relationship between the percentage of economically disadvantaged
students and Domain II: Student Progress in both the district and campus data
(-0.25 and -0.10, respectively). This means that as student poverty increases, there
is a slight tendency for districts’ and campuses’ scores in Domain II to decrease.
However, this relationship is so weak, particularly for campuses, that it may
be most reasonable to say that public schools perform about the same in this
domain regardless of how many economically disadvantaged students are enrolled.
(Please see Appendix E, pages E-3 and E-5.)
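For readers less familiar with how such coefficients are produced and read, the following sketch computes a Pearson correlation coefficient like those in Appendix E. The data are entirely hypothetical, for illustration only; they are not TEA's figures, and the variable names are our own.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance-like numerator: how the two variables move together.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Denominator normalizes by each variable's spread, bounding r in [-1, 1].
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical districts: percent economically disadvantaged paired with a
# Domain I score. Higher poverty paired with lower scores yields a negative r;
# the closer r is to -1, the stronger the relationship.
pct_econ_disadvantaged = [10, 25, 40, 55, 70, 85]
domain_i_score = [92, 88, 81, 74, 70, 62]

r = pearson_r(pct_econ_disadvantaged, domain_i_score)
print(round(r, 2))  # strongly negative, close to -1
```

A coefficient near -0.70, as reported for districts in Domain I, indicates a strong negative relationship; one near -0.10, as for campuses in Domain II, indicates a relationship so weak it is of little practical consequence. As noted above, none of this establishes causation.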

Concluding Remarks

In the absence of overall ratings
and/or in-depth analyses, it is difficult to arrive at summary conclusions
about the “work-in-progress” A-F ratings of public school districts and
campuses. It is clear, however, that the new
ratings cluster around the letter grade of “C.” When there are strong contrasts between
multiple years’ worth of ratings from the current, Index-based system (in place
since 2013) and the new ratings, there are bound to be conflicting
interpretations and a great deal more heat than light shed on public school
performance. It can, and perhaps should,
be discomforting to see schools that, for instance, may have been on this
year’s Public Education Grant program list (likely labeled “failing schools”
in the media) earn satisfactory or better letter grades in the new system,
just as it is discomforting to see schools that may have earned Distinction
Designations given preliminary “D” or “F” ratings in one or more
domains based on data generated
in the very same school year. It may
be most helpful to keep in mind the myriad limitations to the current modeling,
and to use those as points to inform the discussion around the final decisions
that are yet to be made by the state education agency, and to inform
legislators as they enter the 85th session.

NOTE: The following link provides access to
TEA’s new A-F Accountability Resources webpage that includes a copy of the
legislative report and informative resources that provide an overview of the
A-F system and the individual domains:
