Let me note right off that I do not like WalletHub’s reports: the analysis is superficial, and WalletHub does not link to detailed, professional-quality reports. Instead, it produces research snippets that make for easy headlines, leaving anyone who wants a professionally researched report out of luck.

In its 2016 “best and worst” states ranking, WalletHub somehow decided, for example, that completion of an AP exam was worth twice as much as the high school graduation rate for low-income students. Moreover, it does not clarify whether that grad rate is a four-year cohort rate. It also awards points for something called a “bookworm ranking” (no details given), and it accords “double weight” to the percentage of students completing the SAT and/or ACT.
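To see why such weighting choices matter, here is a minimal sketch of how a weighted composite score works. The metric names, values, and weights below are hypothetical illustrations, not WalletHub’s actual methodology (which is the point: the methodology is not published in usable detail).

```python
# Minimal sketch of a weighted composite score. All metrics and weights
# here are hypothetical; WalletHub does not publish its full methodology.
def composite_score(metrics, weights):
    """Weighted sum of normalized metric values (each value in 0..1)."""
    return sum(weights[name] * value for name, value in metrics.items())

# Hypothetical weighting: AP exam completion counts twice as much as the
# low-income graduation rate, mirroring the imbalance described above.
weights = {"ap_exam_completion": 2.0, "low_income_grad_rate": 1.0}

state_a = {"ap_exam_completion": 0.30, "low_income_grad_rate": 0.90}
state_b = {"ap_exam_completion": 0.60, "low_income_grad_rate": 0.40}

# Under this weighting, State B outranks State A despite a far lower
# graduation rate for low-income students.
print(composite_score(state_a, weights))
print(composite_score(state_b, weights))
```

The example shows how a hidden weighting scheme, not the underlying data, can drive a state’s rank, which is exactly why readers need the weights and definitions spelled out.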

As one might expect in this era of test-centric ed reform, the “best and worst” rankings rely heavily on test scores, some of which are vaguely defined as “math test scores” and “reading test scores.” These could be NAEP scores, and they could be from 2015. The point is that readers should not have to guess what exactly was measured, and they should not have to guess the reasoning behind WalletHub’s weighted ratings. But guess they must.

And readers should not have to guess about a study’s limitations. Each study should include a section discussing them.