Big Data, the Common Core, and the Global Governance of Education, Part 2: What is Data-Driven Decision Making?

In Part 1, I introduced the series and its aims, and presented its major theses. It documented and examined the role of Pearson’s Sir Michael Barber in directing the implementation of the Core regime, and in establishing an ideological framework to guide implementation, known as deliverology. Part 2 interrogates the meaning of a concept coveted by Core architects: data-driven decision making. This analysis sets the stage for an examination of the role radical behaviorist ideology plays in driving current education policy.

What Does Data-Driven Decision Making Mean?

In speaking to a friend the other day, I quipped: “I actually really like data, but I never let it drive my car.” We laughed.

It’s easy to poke fun at this oft-heard slogan, one educators are compelled to repeat like an incantation at Hogwarts. But current education policy is about as close to being “research-based” as New York’s Education Commissioner John King is to listening to parents.

There’s little vetted research supporting federally and state-imposed reforms: the growth of charters, test-based teacher evaluation, so-called turnaround models, and high-stakes testing. Nor is there much in the annals of research to support the Common Core Standards regime. There is, however, much in those annals to suggest that these policies will not improve the quality of education, and much evidence to warn us that they are destructive in nature.

No matter how much our existing stores of data and their scientific analysis reveal that poverty and attendant social problems are key sources of the difficulty many schools and communities face, no matter how much they reveal the disintegrating effects of funding cuts and school closings, no matter what the facts are, the Doctors of Reform continue to prescribe the same medicine, despite the fact that the patient is now on life support as a result of their “help.”

So, what does “data-driven decision making” actually mean, if it doesn’t mean basing policy on research?[1] What exactly is this federally mandated approach about?[2]

As I continue work on this “big data” series, the meaning of “data-driven decision making” has become much clearer: it means removing decision-making power from the practice setting. The key to the present “data-driven” agenda is “big data” and the algorithms developed to make sense of this data. The issue here is not big data per se, but the ideology — the aim — that guides the development and use of that technology.

Algorithms using very large cloud-based education datasets, containing information that has been gathered on us, our children, our friends, and our co-workers, largely without our consent, will be used to make decisions “for us.” Wherever a teacher or principal once used professional judgment, based on her classroom observations, tests, knowledge of students and the community, and so on, learning analytics will be flown in to take over. Like drones, these “personalized learning” systems will be controlled from a faraway office, possibly in another country, and likely guarded by a private security firm.[3] Their operators will know little to nothing about education, and absolutely nothing about the children and teachers they lord over.

The “Big Data” McKinsey report cited in Part 1 makes this aim clear. Its authors write that big data will be used to “replace … human decision making with automated algorithms.” In their words:

Sophisticated analytics can substantially improve decision making, minimize risks [to whom they don’t exactly say], and unearth valuable insights that would otherwise remain hidden. Such analytics have applications for organizations from tax agencies that can use automated risk engines to flag candidates for further examination to retailers that can use algorithms to optimize decision processes such as the automatic fine-tuning of inventories and pricing in response to real-time in-store and online sales. In some cases, decisions will not necessarily be automated but augmented by analyzing huge, entire datasets using big data techniques and technologies rather than just smaller samples that individuals with spreadsheets can handle and understand. Decision making may never be the same; some organizations are already making better decisions by analyzing entire datasets from customers, employees, or even sensors embedded in products.

Of course, the decisions to be replaced with algorithms are not those of executives. Executives deliberate, not only on the basis of information, but ultimately on the basis of what serves their interests. Data-driven decision making works so that no one in the practice setting can make the “wrong decision,” because the framework for decision making is tightly controlled by those who control the data. No worries that the teacher or principal might make a decision that would interfere with the bottom line. No worries that educators might decide something in the student’s interest, despite its cost.

Yes, big data may enable better decision making. The question is, better decisions for whom? Does anyone really believe that their interests are served by being excluded from decision making about things that directly affect them, with their will, their agency, replaced by an algorithm? Does anyone really believe a computer and 400 data points (the number of data points reportedly used by inBloom) are a substitute for conscious human beings working together in a particular historical and cultural moment? Teachers and students make thousands of decisions every minute in a classroom, a process that cannot be replaced by a machine, because decision making properly understood requires consciousness and an aim!

The report does not focus on education, but since Core architect David Coleman and Core implementation guru Sir Michael Barber both worked at McKinsey before playing their current education policy roles, there’s no doubt that education is a key target of the big data agenda. The inBloom big data repository, the role Pearson plays in the so-called Learning Registry, and the use of all of this to develop “personalized learning” make that clear.[4]

A Vision of Educators as McPearson Behavior Compliance Managers

Big data-driven decision making means everyone is to be singularly mobilized on the basis of data from test scores and other types of “benchmarks.” Simply put, data-driven decision making is the pseudo-clever, Harvard Business School-esque moniker for teaching to the test.

This regime does not need real teachers to accomplish the task of teaching to the test; computers and McPearson classroom behavior compliance management drones will do just fine. Hence the effort to reduce the requirements for becoming a teacher.

Seen in this light, the scripted lessons, the Core-aligned modules, and the robotic instructional videos are not aberrations, not the result of poor implementation of a good idea. The Core standards, Core curriculum, and Core assessments have all been taken care of for educators by non-educators; few decisions remain for educators to make. Schools are becoming mazes, where students and teachers are to chase the data cheese and “do nows.” The agenda to establish a massive student sorting, ranking, and tracking apparatus based on students’ potential for “added value,” calculated from their “personal characteristics” as represented in the massive data cloud, is hidden by the glowing utopian reform rhetoric. But nothing could be less personal than “personalized learning” via big data regimes.

Thus, this is not about educators using data to improve teaching and learning; it is about regulating those working in the practice setting, about mechanizing that work so that virtually (pun intended) anyone could do that work, thus vastly reducing the cost in terms of salaries and benefits, making more funds available to the techno-sharks “driving” reform. This is conscious work to reduce the quality of education.

I’ll leave readers with this understanding of “personalized learning,” drawn from a U.S. Department of Education document entitled Transforming American Education: Learning Powered by Technology: “Personalized learning is defined as the tailoring of pedagogy, curriculum, and learning environments to meet the needs and goals of individual learners through the use of technology.”

The passive voice says it all. The “tailoring” is accomplished by the algorithm and big data, and thus those who control the data and the algorithm will make decisions about the pedagogy, curriculum and “learning environment”. No professionals needed.
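To make the point concrete, here is a deliberately trivial sketch of the kind of rule such a system applies. Every name, field, and cut score below is hypothetical, invented purely for illustration; the point is that the “tailoring” is a mechanical lookup over stored data points, and every decision in it was made in advance, far from the classroom, by whoever wrote the thresholds.

```python
def assign_pathway(student_record):
    """Map a student's stored data points to a prescribed 'pathway'.

    A toy stand-in for a 'personalized learning' decision rule. The cut
    scores below are policy decisions baked into the code, fixed far from
    the practice setting and invisible to teacher, student, and parent.
    """
    # Missing data silently defaults to zero, i.e., to the lowest track.
    score = student_record.get("benchmark_score", 0)
    if score < 40:
        return "remediation_module_A"
    elif score < 70:
        return "standard_module_B"
    return "enrichment_module_C"

# One field among the hundreds of data points a system like inBloom
# reportedly stored per student:
print(assign_pathway({"benchmark_score": 55}))  # prints "standard_module_B"
```

Notice that no one in the practice setting appears anywhere in this logic; the only “decision maker” left is the author of the thresholds.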

Notes

1. And what about its partner in irrationalism, “measurable data”? Measurable data is one of the most inept phrases to come out of reform land in at least a decade.

2. The current model of Data-Driven Instruction (DDI) was adopted by New York State so as to win Race to the Top funds. While the notion of data-driven instruction has been around for some time, its current form is particular, and specifically mandated by RTTT requirements.

1 Comment

Betsy December 28, 2013 at 3:23 pm

All of this will require the mass adoption of lying. Teachers will lie to students or give them answers; students will lie on tests to get the correct assessments, to “save” their teachers’ jobs and limit caustic indoctrination… Administrators will lie to further their positions, and parents will lie to keep their kids from getting any bad data points from, say, going to the guidance counselor or school psychologist. They will lie because they may possess ethics prohibiting the ad hoc prescribing of kiddie fad drugs, and so on.
Mikhail Baryshnikov said that the hardest part of his adapting to living in America after his defection from the Soviet Union was to not lie.
As we clearly see from the example of these educrats who lie to the public about charters, CCSS, et al., it goes without saying that this Stalinistic repertoire would follow down the lane.