Thoughts About The New ACT/SAT Concordance Tables-Summer 2018

Longtime readers of this blog will know that I work with Method Test Prep, a national ACT/SAT preparation company whose mission is to level the playing field of standardized testing. As such, I have previously written in this space about developments relating to the ACT and SAT (here’s just one example) and will continue to do so when developments call for it. The joint publication of concordance tables by the ACT and SAT will be a great source of answers to questions for people about how to compare scores across the two tests, but it also raises some questions that I hope will encourage discussion among all stakeholders. My colleague Evan Wessler and I collaborated on an article for the Method Test Prep blog about this and I’ve reposted it here as well. I hope that you will find it interesting and useful.

**********

In Accordance With Concordance

by Evan Wessler and Ethan Lewis

Where some see significant change, those who look deeper see something quite different. When it comes to SAT and ACT concordance, it’s important to know what’s at stake.

Let’s [Concor]dance!

Because the ACT and SAT are different tests with distinct scoring scales, students’ results are not automatically comparable. But there needs to be a way to reconcile scores. Students who have taken both exams naturally want to know if their scores on one test are higher than their scores on the other; counselors want to be able to advise their students properly; colleges, universities, and scholarship providers want to make sure that student scores meet or exceed their cutoff criteria. To accomplish this, we need a document called a concordance. When the College Board released a new SAT in 2016, it changed the test’s scoring scale––shifting from 2400 points back to 1600 points––and unilaterally released a concordance that converted new SAT scores to old ones, and then converted these to ACT scores. This provoked the ire of the ACT, which dismissed the new tables as invalid due to a lack of available score data from the new SAT. Eventually, the College Board committed to cooperating with the ACT to establish new (and, in the eyes of the ACT, credible) concordance tables; two years later, the new concordance is now available, and should be used by all parties interested in comparing scores across the two exams.

The more things change…

The whole idea behind this spat was that the SAT changed in a big way––so big, in fact, that the previously determined concordance between the exams would no longer hold. While the new SAT is genuinely a very different exam than its predecessor, the more things change, the more they stay the same. Here’s a snapshot of the former and current SAT-ACT concordance tables, with old and new conversions shown.

Adapted from Guide to the 2018 ACT®/SAT® Concordance, The College Board, 2018.

The yellow cells highlight the scores that have apparently shifted. Looks like a lot of change, doesn’t it? The conclusion most organizations have drawn is that things have gotten “better” for SAT students and “worse” for ACT students. For an example that shows why, take a look at the 1340 SAT score. According to the table, this used to be equivalent to a 28 on the ACT, but is now worth a 29. Conversely, in the reverse direction, a 28 on the ACT now lands a student the equivalent of a 1320 on the SAT, whereas it used to be worth as much as a 1340. This interpretation, however, is a bit too simplistic.

Free Samples!

When we take a deeper look, however, we begin to see how such small differences are all but irrelevant. To understand why, we must learn more about the statistical methods used to generate these concordance tables.

In order to produce a concordance, the College Board and ACT must collect data by sampling. That is, because it would be impractical for the organizations to use data from every single SAT and ACT examinee or test, they instead make inferences from a subset of the available data (in this case, 589,753 members of the class of 2017 who took both tests). Regardless of the statistical methods used to generate average score equivalences across exams, sampling inherently generates a certain degree of variability, known by statisticians as standard error, in the final numbers. You’re probably familiar with the standard error of a sampling statistic: when you see a “±” value, that’s the standard error talking.

In this document, the College Board states the standard error of the score conversion values as follows.

When using the SAT Total and ACT Composite concordance table to estimate a student’s proximal ACT Composite score from their SAT Total score, the estimates in the table have a standard error of approximately ±2.26 (2) ACT Composite score points on its 1–36 point scale. When using this table to estimate a student’s proximal SAT Total score from their ACT Composite score, the estimates have a standard error of approximately ±79.57 (80) SAT Total score points on its 400–1600 point scale.

(The emphasis is our own.)

Let’s return to the example scores we used before to demonstrate how things supposedly got “better” for SAT takers and “worse” for ACT takers. Using the table alone, we might conclude that an SAT score of 1340 used to concord to a 28 on the ACT, but now concords to a 29. But the 29 in this table is not really a 29: it’s 29 ± 2. Because of the way standard error is calculated, the practical interpretation of the measurement plus-or-minus the standard error is this: we are 68% confident that a score of 1340 on the SAT concords to an ACT score between 27 and 31. Notice how this range comfortably includes the 28 that the 1340 used to “equal”. It doesn’t take long to see that, when extended to all of the other values in the table, the standard error erases the apparent changes in the tables, placing them well within the ranges of confidence produced by the sampling method.
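For readers who like to see the arithmetic, here is a minimal sketch in Python of the ±1-standard-error interval described above. It assumes the rounded standard errors quoted from the College Board’s guide (2 ACT points, 80 SAT points); the function name and structure are our own illustration, not part of any official tool.

```python
def concordance_interval(concorded_score, standard_error, scale_min, scale_max):
    """Return the 68% confidence interval (score +/- one standard error)
    around a concorded score, clamped to the test's scoring scale."""
    low = max(scale_min, concorded_score - standard_error)
    high = min(scale_max, concorded_score + standard_error)
    return low, high

# An SAT 1340 concords to an ACT 29 in the new table; with the rounded
# standard error of 2 points, the 68% interval is 27-31.
print(concordance_interval(29, 2, 1, 36))        # (27, 31)

# An ACT 28 concords to an SAT 1320; with the rounded standard error of
# 80 points, the 68% interval is 1240-1400.
print(concordance_interval(1320, 80, 400, 1600))  # (1240, 1400)
```

Note how wide these intervals are relative to the one- or two-point “shifts” in the new tables: the old equivalents sit comfortably inside them.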

The long and short of it is this: any sampling method used to generate concordance produces not “exact” numbers, but instead ranges within an acceptable degree of confidence, or certainty. Thus, the concordance table alone does not tell the whole story. When the standard error of the numbers in this table is taken into account, we reach a simple conclusion: the concordance table hasn’t really changed, and things have not gotten markedly “better” or “worse” for either SAT or ACT takers.

So You’re Saying I Have A Chance…

Despite the mathematical fact that scores on the two tests are essentially the same (in relation to each other) as they were before the new concordance tables were published, the story doesn’t end there. Many colleges and universities publish score thresholds for scholarships based on the old, unadjusted concordance. Similarly, some states offer their residents reduced (or free) tuition based on test scores, and unless they speedily change their documentation, the new concordance tables might seem to advantage one test over another. Let’s take a look at a few examples:

At Louisiana State University, recipients of the Academic Scholars Award get $15,500 per year based on an ACT score of 30-32 or an SAT score of 1330-1430 and a cumulative 3.0 GPA. With the new concordance, a 30-32 ACT concords to 1360-1440. So well-meaning advisors might tell students to take the SAT, because a 1330 still meets the published SAT threshold even though it now concords to only a 29 on the ACT.

At Liberty University, students can get into the Honors Program with a 28 ACT or a 1330 SAT. Since the new concordance equates a 1330 to a 29 ACT, if Liberty doesn’t change its documentation, students might conclude that it would be wiser to take the ACT and shoot for a 28.

At the University of Arizona, the “Wildcat Excellence” award criteria are on a sliding scale based on ACT or SAT scores and high school GPA. As you can see in the table below, a small difference in test scores can be worth a lot of money.

For instance, a student with a 3.8 GPA and a 29 ACT stands to receive $18,000. With the new concordance, the 29 on the ACT is a 1330-1350 on the SAT. But a 1380 on the SAT now concords to a 30 on the ACT, which, based on the chart, would get our student $25,000. What should our student do? Take the ACT again and shoot for an actual 30? Take the SAT and try for a 1390? Either option might work, but for $7,000, it would make sense to do something, unless Arizona updates its table.

Similarly, the state of Florida’s Bright Futures Program is a wonderful tool for ensuring college access and rewarding students with high test scores and grades. “Florida Academic Scholars” get 100% free tuition plus a stipend for books at state universities with a weighted cumulative GPA of 3.50 and a 29 ACT or 1290 SAT.

As we just saw, under the new concordance, a 29 ACT equates to an SAT score between 1330 and 1350. So if all a student must do to be a Florida Academic Scholar is get a 1290 (which is now a 27 ACT), it would seem like they should eschew the ACT and pursue the SAT instead, shooting for that 1290. As we now know, the standard error makes the apparent difference mathematically insignificant, but if the state of Florida doesn’t update its criteria, then there is an effective difference in “real life”.

Final Thoughts

Because the SAT and ACT generate so much stress for students and uncertainty for everyone in the college admissions process, any change in the tests can generate a disproportionate level of anxiety. The hubbub over the concordance tables is understandable, and is surely something that should be understood by anyone involved in the college process. It’s good that there is now an official, universally agreed-upon conversion between the two college admissions tests, but it is crucial that applicants, advisors, and advocates make sure that colleges, universities, and scholarship agencies have updated their score thresholds so that students can pursue the test preparation that makes the most sense for them.