Let me note right off that I do not like WalletHub’s reports because the reporting is superficial, and WalletHub does not link to detailed, professional-quality reports. Instead, WalletHub produces research snippets that make for easy headlines, and those who wish for anything resembling a professionally-researched report are out of luck.

In its 2016 “best and worst” states, WalletHub somehow decided, for example, that completion of an AP exam was worth twice as much as the high school grad rate for low-income students. Moreover, it does not clarify whether the grad rate is a four-year cohort rate. Also, it gives points for something called “bookworm ranking” (no details), and it accords “double weight” to the percentage of students completing the SAT and/or ACT.

As one might expect in this era of test-centric ed reform, the “best and worst” rankings rely heavily on test scores, some of which are vaguely defined as “math test scores” and “reading test scores.” These could be NAEP scores, and they could be from 2015. The point is that readers should not have to guess what exactly was measured, and they should not have to guess the reasoning behind WalletHub’s weighted ratings– but guess, they must.

And readers should not have to guess about the study’s limitations. Each study should include a section for discussing limitations.

So much for “should.”

One obvious limitation of ranked data is that some entity must come last. Likewise, an entity can seemingly “rise” in the rankings not because of any change of its own but because other entities “fall.” It is also possible for all entities to actually improve (or decline) on some measured outcome at a rate that leaves the rankings unchanged.
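That last quirk is easy to see with a toy example (the numbers here are invented for illustration, not drawn from WalletHub’s data): every state improves by the same amount, yet the rank order– which is all a “best and worst” list reports– does not budge.

```python
# Toy example: every state's score improves, yet the rankings do not move.
# The numbers are invented for illustration; rank order hides uniform gains.
before = {"State A": 70, "State B": 65, "State C": 60}
after = {state: score + 5 for state, score in before.items()}  # all improve

def rank(scores):
    """Return state names ordered from highest to lowest score."""
    return sorted(scores, key=scores.get, reverse=True)

print(rank(before) == rank(after))  # -> True: identical rankings despite gains
```

A ranking alone cannot distinguish this scenario from one in which every state declined.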

Of course, the very idea of the rankings could also be nonsense, with the criteria for ranking dependent upon the researcher’s whims and preferences in selecting variables, the means of measurement, and the weighted importance assigned to those variables.

Such issues should be included in a discussion of the study’s limitations. Again, WalletHub skips discussing limitations, just as it skips justifying the weighted values it accorded its variables or even clearly defining those variables. At most, it offers the occasional statement– hardly worthy of being called a developed discussion of study cautions or shortcomings.

But as poor as the WalletHub education reporting is, it still produces an outcome upon which an education-ranking-dazzled media might seize.

Consider this “worst”: Louisiana falls dead last in WalletHub’s calculated “best and worst states” score.

Now, even though I have been kept in the dark regarding many details of this WalletHub, uh, study, I realize that since it relies heavily upon test scores (likely NAEP, but also ACT and SAT), what I am viewing is a determination of “best and worst states” that is chiefly based upon test scores.

Massachusetts is first, followed by New Jersey and Connecticut. No surprise there.

WalletHub reports the “five highest” and “five lowest” for each of the test scores, such as ACT, but it doesn’t report the actual scores. So, the reader is left to imagine just how low Louisiana’s average composite ACT score is. In 2015, it was 19.4, with 100 percent of graduates tested.

As it turns out, all 13 states that tested 100 percent of grads in 2015 scored ACT composites ranging from 19.0 to 20.6. This level of detail is not available on WalletHub.

Of the five states reported with “lowest average ACT scores” in the WalletHub report, four of these tested 100 percent of grads (Louisiana, Alabama, Mississippi, North Carolina), and one tested 93 percent of grads (Hawaii). This info is not available on WalletHub.

As for the five states with “highest average ACT scores” (Connecticut, Massachusetts, New Hampshire, Maine, and New York), those 2015 ACT composites were based on between 10 percent (Maine) and 32 percent (Connecticut) of grads. This info is not available on WalletHub.

As for WalletHub’s overall “best schools” and ACT: Massachusetts has a 2015 ACT composite of 24.4– with only 28 percent of graduates tested. New Jersey’s composite was 23.2 (29 percent tested), and Connecticut’s was 24.4 (32 percent tested). This information is not available on WalletHub.

The percentage of grads tested makes a difference to ACT composite scores, but a WalletHub reader does not get to know these details. (A similar outcome occurs with state participation on the SAT.)
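The participation effect shows up even in the handful of 2015 figures quoted above– the only state pairs given here. This is a back-of-the-envelope check on those four data points, not WalletHub’s (undisclosed) analysis:

```python
# 2015 ACT composite vs. percent of graduates tested, for the four states
# whose figures appear in the text. A hand-computed Pearson correlation
# shows the strong inverse relationship between participation and score.
states = {
    "Louisiana": (100, 19.4),
    "Massachusetts": (28, 24.4),
    "New Jersey": (29, 23.2),
    "Connecticut": (32, 24.4),
}
pcts = [p for p, _ in states.values()]
comps = [c for _, c in states.values()]

n = len(pcts)
mean_p, mean_c = sum(pcts) / n, sum(comps) / n
cov = sum((p - mean_p) * (c - mean_c) for p, c in zip(pcts, comps))
var_p = sum((p - mean_p) ** 2 for p in pcts)
var_c = sum((c - mean_c) ** 2 for c in comps)
r = cov / (var_p * var_c) ** 0.5

print(f"correlation(percent tested, composite) = {r:.2f}")  # strongly negative
```

Four states is far too few for statistical claims; the point is simply that comparing composites without reporting participation rates is misleading.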

WalletHub concisely declared that Louisiana has the “worst” school systems. But chin up, Pelican State: It turns out that WalletHub really likes the schools in New Orleans– schools that were predominantly taken over by the state post-Katrina to create what is now the all-charter Recovery School District (RSD)– a district that needed the leverage of grade inflation even to become a C district, and one with a 2016 ACT average composite of 16.7.

In another of its lets-rank-ed reports, “2016’s Cities with the Most and Least Efficient Spending on Education,” New Orleans wins first place among 112 of America’s most populated cities for providing education on the cheap (more nobly termed “return on investment,” or ROI). In short, WalletHub compared each city’s state-determined percentage of proficient grade 8 students (yes, both the tests and the determination of proficiency vary by state) with the amount of education money spent “per capita” (which appears to be per resident in the given metro area, based on dollar amounts reported by WalletHub, but again, not clearly delineated).

Here’s what WalletHub offers on the “adjustment” front, including the best WalletHub has to offer regarding study limitations– a single limitation sentence:

In order to gauge the return on educational investment for 112 of the most populated U.S. cities, WalletHub divided each city’s aggregated standardized test scores in reading and math for grade 8 by its total amount of education spending per capita. Please note that standardized test score data includes only the percentage of students who completed the tests and scored at or above the passing levels imposed by their respective states.

To control for major cross-city differences in economic status among cities, we adjusted education spending levels by two key economic factors: poverty rate and median household income. Moreover, given that education spending is further affected by the percentage of children in single-parent families and the percentage of households that do not speak English as their first language, we adjusted expenditures on these two measures as well.

The adjusted “education spending per capita” measure employed in this study assumes all cities have an average poverty rate, rate of single-parent families, rate of households who speak a language other than English at home and median household income. This allowed us to compare ROI on education spending net of cross-city differences within these indicators.

WalletHub assures readers that it has adjusted education spending so that cities might be fairly compared, but you’ll just have to trust WalletHub’s methodology because, in WalletHub style, no numeric or formulaic details on the exact adjustments are included, as they surely would be in a quality, professional research report.

Let’s just do like the media and jump to results, shall we?

In New Orleans’ case, the adjustment doesn’t matter. Whether education expenses “per capita” are adjusted or not, New Orleans garners first place for the, uh, “most efficient spending,” given its state-determined percentage of grade 8 students scoring proficient on the state-determined test.

New Orleans actually tied with Miami for WalletHub’s adjusted ROI first-place rank, even though Miami spent almost twice as much “per capita” as New Orleans: $1,147. After WalletHub’s under-detailed income adjustment, Miami’s state-determined grade 8 proficiency on the state’s tests was 54.52%.

A fine Return on Investment, apparently.

Now, this WalletHub “efficient spending” is a downer for some cities, like Anchorage, where the state-determined proficiency on state tests is 74.78%. Turns out the “per capita” cost of $2,248 is just too much to warrant anything but a ranking of 96 out of 112, which, in the public eye and given the title of this WalletHub report, must hint at efficient-spending failure.

Other losers in the world of WalletHub ROI efficiency include Fremont, CA: 75.35% proficiency at $1,250 “per capita” (84 out of 112 “adjusted” ranking); Chesapeake, VA: 82.10% proficiency at $2,036 “per capita” (85 out of 112 “adjusted”); New York, NY: 49.47% proficiency at $2,549 “per capita” (103 out of 112); and Washington, DC: 53.76% proficiency at $3,552 “per capita” (106 out of 112).
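WalletHub’s exact adjustments are undisclosed, but the ratio it describes– proficiency divided by “per capita” spending– can be sketched from the raw figures above. Notably, even the unadjusted ratio reproduces the relative order of these five cities’ “adjusted” rankings:

```python
# Raw version of the ratio WalletHub describes: state-determined grade 8
# proficiency divided by "per capita" education spending. WalletHub's
# poverty/income adjustments are undisclosed, so only the raw ratio is shown.
cities = {
    "Fremont, CA": (75.35, 1250),
    "Chesapeake, VA": (82.10, 2036),
    "Anchorage, AK": (74.78, 2248),
    "New York, NY": (49.47, 2549),
    "Washington, DC": (53.76, 3552),
}

roi = {city: pct / dollars for city, (pct, dollars) in cities.items()}
for city, ratio in sorted(roi.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{city:15s} {ratio:.4f} proficiency points per dollar")
```

Whatever the adjustments do, the basic arithmetic rewards cheapness, not achievement: Chesapeake’s 82.10% proficiency cannot overcome its $2,036 price tag.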

So, in the case of this “efficient spending” WalletHub report, what matters is not necessarily the percentage proficient for these cities on the given grade 8 state test. What matters is how cheaply the proficiency is supposedly purchased.

That is how in the Wonderland of WalletHub, “worst school systems” Louisiana is home to “most efficient spending” New Orleans.

And even the “best school systems” star, Massachusetts, is home to apparently middle-of-the-road spending efficiency in Boston, which WalletHub reports as having a state-test grade 8 proficiency of 52.30% at $1,750 “per capita”– good for an “adjusted” efficiency ranking of 61 out of 112.

Will Boston work on becoming as spending-efficient as New Orleans?

Perhaps Massachusetts could just lower its threshold for state proficiency so that Boston might yield a higher-proficiency bang for its “per capita” buck and, in doing so, become the WalletHub “most efficient spending” city star of 2017.

We could market it as the Boston-New Orleans “efficient spending” gap closure.


https://en.m.wikipedia.org/wiki/WalletHub
I looked at this for some perspective. These rating schemes for education are proliferating. Few people look behind the curtain or have the expertise to question them, as you do. Ratings for cost-effectiveness (ROI) in education are also proliferating, with per-student costs– not allocations of those dollars– compared against one or two indicators of “outcomes,” usually high school graduation rates and test scores. OECD is working on these metrics, and think tanks are producing ROI reports.
Test scores and other information about schools are so abundant, and “nearly free or free,” that any reprocessing is easy and a tempting avenue for reuse and profit. The most sophisticated reprocessing of data into ratings can be unearthed with a close look at greatschools.org. The site discloses its methodology for using the data and has state-by-state citations of which test scores are used in ratings. The site is nonprofit in name only: it charges fees for leasing that data. Among the noteworthy users are Zillow and Scholastic, but for a fee, the website will direct users to a specific school or “partner” based on zip code and other information. Almost all state test scores for individual public schools make their way to greatschools.org, where they are transformed for use in a 10-point rating scheme. Among other outcomes, Success Academy schools get the highest rating.

Reblogged this on Politicians Are Poody Heads and commented:
“Return on investment”? Return on investment???
Really? That’s what we should be looking at?
Schools are not, and should never be, judged on “return on investment,” the way for-profit companies are.
Schools should be judged on how the kids are doing by wide measures, and I don’t mean how they are doing by state and nationally mandated tests that only measure how students who have been drilled on these tests are doing.
Do the schools teach critical thinking skills? Do they offer art, music, PE for the students? Do they teach science and social studies?
Or are they all about the standardized test scores and the “return on investment”?
And while we’re at it, what are the various governmental entities– local, state, and national– doing to relieve poverty, which is the biggest problem when addressing why children are not doing well in school? Oh, and they should also be addressing the needs of English language learners and special education students.