This article is an excerpt from “Edtech, ESSA, and the SAT: Is the Use of Data in Schools Destroying K-12 Innovation?” The talk was given at the University of Virginia’s Data Science Institute on April 18.

When I served as a 6th grade science teacher at a KIPP middle school starting in 2009, my teaching position was distinct in one particular way. The state of Texas required 5th and 8th graders to take a statewide standardized test in science, but there was no end-of-year exam in 6th and 7th grade.

This put my students and me in a unique situation, because while I did align my course to state standards, we didn’t face pressure to perform on multiple-choice, high-stakes testing. My students worked on group projects, engaged in experiments, and performed in game-based assessments. And yes, students took an end-of-year exam that I created—but it wasn’t a do-or-die experience for them, tied to funding and “Adequate Yearly Progress.”

With the recent passing of the Every Student Succeeds Act (ESSA), United States administrators and teachers find themselves in a similarly unique situation. Schools may not have to live up to Adequate Yearly Progress, and high-stakes testing may no longer determine funding.


So, we have a question in front of us: do we stick with the multiple choice-ridden assessments of old, or give more weight to new forms of assessment and data? And perhaps even more importantly—how do we assess students going forward for a world we can’t predict?

When the Gates and Broad Foundations pledged $60 million toward implementing the Common Core State Standards nationwide in the late 2000s, it quickly became evident that states’ exams weren’t properly aligned to these skill-based, “college and career-ready” standards. Two consortia—Smarter Balanced and PARCC—attempted to answer this call, but lost footing and contracts (sometimes drastically: PARCC dropped from 23 contracts to 6) in less than five years. Even when SBAC and PARCC found success, it still came in the form of sit-down, multiple choice exams.

A timeline of NCLB and Common Core.

Hess writes that the Every Student Succeeds Act “retains NCLB's testing framework (reading and math in grades 3-8 and in high school, science once at each level of schooling), but also creates more flexibility for states to explore new assessment options.” For the first time in fifteen years, states can cap the time students spend taking tests, and even better, they can design their own forms of assessments.

When it comes to redesigning tests, multiple choice isn’t the sole option. And thank goodness—when was the last time someone had to exercise multiple-choice skills when interviewing for an engineering, computing, or product management role?

The Bureau of Labor Statistics projects that 71 percent of all new jobs in STEM fields (science, technology, engineering and mathematics) during the next decade will be in computer science. If we’re going to prepare kids for those roles (and the roles that we don’t yet know about), we need to start testing them on skills that can’t be tested in an “A, B, C, or D” format. Creativity, collaboration, the ability to think on one’s feet—those skills are transferable, applicable in a variety of industries, and difficult to track.

Alternative Types of Data

In both school observations and my own classroom practice, I’ve found that a variety of data sources and assessments offers a more holistic approach to evaluating both students and teachers. Here are a few of the options to consider.

“Qualitative Data”: Project-Based and Problem-Based Assessments

High Tech High (HTH), a set of public charter schools in Chula Vista and San Diego, doesn’t assess with tests; rather, the curriculum is built around annual student-driven projects, many of which are interdisciplinary. Featured in the recent documentary Most Likely to Succeed, High Tech High does have students take California state exams (with HTH’s low-income students performing 30% above the statewide average on both the math and ELA exams), yet daily and monthly assessment comes down to the use of rubrics and observation. And the stats? HTH sends 96% of graduates to college, and 86% of those graduates are either still in or have graduated from college.

A second option comes with the adoption of adaptive technologies—something that EdSurge delved deeply into back in February. Defined as “education technology that can respond to a student's interactions in real-time by automatically providing the student with individual support,” adaptive technology can be divided up into three groups: content, sequence, and assessment.

Platforms with adaptive assessments adjust to students as they perform well or poorly on an exam. Whether on ScootPad, ALEKS, or SuccessMaker, the difficulty of questions will increase as a student answers them accurately. If the student struggles, the questions will get easier. What’s more—when a tool falls into the “adaptive sequence” category, it continuously collects and analyzes student data, which educators can see behind the scenes.
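For readers curious what that adjustment loop looks like under the hood, here is a minimal sketch of the idea: difficulty steps up after a correct answer and down after a miss, within fixed bounds. This is an illustrative toy, not the actual logic of ScootPad, ALEKS, or SuccessMaker—real platforms use far more sophisticated models.

```python
def next_difficulty(current: int, answered_correctly: bool,
                    minimum: int = 1, maximum: int = 10) -> int:
    """Step difficulty up on a correct answer, down on an incorrect one,
    keeping the result within [minimum, maximum]."""
    step = 1 if answered_correctly else -1
    return max(minimum, min(maximum, current + step))

# Simulate a short exam session starting at mid-range difficulty.
difficulty = 5
for correct in [True, True, False, True]:
    difficulty = next_difficulty(difficulty, correct)
print(difficulty)  # ends at 7: up, up, down, up
```

Even in this simplified form, you can see why the approach produces useful behind-the-scenes data: every step records how a student responded at a known difficulty level.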

Other Forms of Standardized Test Data: Still Relevant to Higher Education

Now, while it may sound like the opposite, I don’t think that we should eradicate multiple choice altogether. But it’s important to note that a variety of options exist out there, and they’re all worth evaluating. Take the SAT and/or ACT—college-bound high school students take them anyway, and with the SAT’s recent updates, questions are now more Common Core-aligned. If there is a desire to stick with multiple choice, we need to ask ourselves how we can create and/or leverage assessments that are more relevant to the student trajectory, from K-12 into higher education learning experiences and beyond.

“Educators will have useful insights and advice on many of these questions. The trick is to put them forward in concrete, actionable, and persuasive ways.”

Rick Hess, Education Author

The Challenge: Advocating For New Data and Assessments

At the end of the day, I won’t choose what types of data and assessments enter classrooms in the coming years. Educators, districts, students and parents will—when they lobby their state administrators and policymakers. And truthfully, we all have a responsibility to speak up, from entrepreneurs to nonprofit leaders. As Rick Hess writes, no longer will “Washington is making us do that” be a viable excuse.

“This all creates huge opportunities for teacher leaders, school leaders, and district staff to put their knowledge to work,” he writes. “Educators will have useful insights and advice on many of these questions. The trick is to put them forward in concrete, actionable, and persuasive ways.”

So, listeners, how do you think we should assess our kids? How can we put our insights forward?
