Author: ben_admin

Although technology is ever more ubiquitous in schools, there is no single best way to integrate technology into teaching. Rather, it is advantageous for educators to become familiar with a cross-section of theoretical approaches and frameworks that can be applied contextually in the unique circumstances of their practice. This document highlights four approaches worth teachers' consideration.

TPACK

TPACK emphasizes three distinct knowledge domains required for effective teaching: content (subject) knowledge, pedagogical (teaching) knowledge, and technological knowledge. Moreover, the interplay and overlap of these domains is essential and is recognized as its own distinct form of knowledge (for example, technological content knowledge means understanding the technology that is essential for learning in a specific discipline). TPACK is a framework teachers can use to evaluate what they need to know to teach their curriculum well and to explore how the different knowledge domains overlap and contribute to effectiveness in each area.

SAMR

SAMR (Substitution, Augmentation, Modification, Redefinition) is a way of thinking about the impact that differing levels of technology integration can have on learning activities. Instructors may use SAMR to evaluate the technology integration in their practice and to explore deeper integrations that better realize the potential of technology, enrich learning, and facilitate transformative outcomes.

Constructionism

Originator: Seymour Papert

Constructionism extends Piaget’s constructivist learning theory by suggesting that learners are best able to construct knowledge structures through purposeful, active engagement in the creation of a public entity. If learning is “constructive” in nature, it happens best by “construction.” Teachers can use Constructionism as a lens for exploring discovery- and project-based learning, as learners quite literally build understanding through making.
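Papert’s own vehicle for constructionist learning was the Logo turtle, where a learner discovers geometry by programming a shape into existence. As a rough, display-free sketch of that idea (the `Turtle` class below is illustrative, not from any particular library):

```python
import math

# A minimal, display-free "turtle" in the spirit of Papert's Logo language.
class Turtle:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0          # degrees; 0 = facing right
        self.path = [(0.0, 0.0)]    # points visited while "drawing"

    def forward(self, distance):
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def turn(self, degrees):
        self.heading = (self.heading + degrees) % 360

# The classic discovery exercise: a square emerges from repeating
# "go forward, turn 90" four times -- the learner constructs the idea
# of exterior angles by building the shape and seeing the turtle
# return to its starting point.
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.turn(90)

print(t.path)
```

The point is not the code itself but the activity: the shape is a public artifact the learner makes, debugs, and reasons about.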

Bloom’s Digital Taxonomy

An update to Bloom’s Revised Taxonomy that adds “digital verbs” to highlight the cognitive processes at work when student learning is mediated by digital technology. Instructors can use the Digital Taxonomy to investigate the different kinds of thinking they are asking of students and to align activities with lower- and higher-order cognitive processes as appropriate.

Joining Bryan Alexander’s book club has been on my to-do list for quite a while now. With grad school and other “life stuff” I haven’t been able to make it happen – until now! Cathy O’Neil’s Weapons of Math Destruction was on my must-read list, and when I saw Bryan’s post confirming it as the next selection, I jumped right in. I’ve finished the book now and wanted to share my thoughts and my responses to the provided book club discussion questions.

First off, I did like the book quite a bit. It was illuminating to get a data scientist’s insider perspective on algorithms and the myriad ways they operate just beneath the surface of our everyday lives; they influence our interactions with institutions, mediate our transactions, and shape our perceptions by determining the media content we see. The danger, O’Neil warns us, is when algorithms become Weapons of Math Destruction – automated decision makers that codify human bias or prejudice into unassailable mathematical facts of life. These systems don’t bother to correct the misconceptions that lead to unfair outcomes, and their inner workings are kept secret by their corporate masters (or, frighteningly, are ill-understood even by their creators). Most of all, O’Neil contends that these WMDs tend to punish or exploit the poor and marginalized while favoring the privileged, who can often count on access to an empathetic human decision-maker instead of an indifferent mathematical formula.

Throughout the book, O’Neil cites cases in which algorithms are used to automate and optimize the process of economic and social stratification. From credit scoring and law enforcement to hedge funds, predatory lending, college admissions, and retail worker scheduling – over and over we see systematized processes that make the rich even richer while excluding or abusing the downtrodden.
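As a toy illustration (my own, not from the book) of how a proxy feature plus a feedback loop can automate stratification, consider a made-up score that penalizes a neighborhood and then compounds the result:

```python
# Invented numbers for illustration only: a "score" that subtracts a
# neighborhood-based proxy penalty, plus a feedback loop where a low
# score raises borrowing costs (eroding wealth) and a high score
# brings cheap credit (building wealth).
def score(wealth, neighborhood_penalty):
    return wealth - neighborhood_penalty

def simulate(wealth, penalty, rounds=5):
    history = []
    for _ in range(rounds):
        s = score(wealth, penalty)
        # The feedback loop: the score changes the very input it measures.
        wealth += 10 if s >= 50 else -10
        history.append(s)
    return history

favored = simulate(wealth=60, penalty=0)     # no proxy penalty
penalized = simulate(wealth=60, penalty=20)  # same wealth, penalized zip code

print(favored)    # [60, 70, 80, 90, 100] -- scores climb
print(penalized)  # [40, 30, 20, 10, 0]   -- identical start, scores collapse
```

Two actors with identical starting wealth diverge entirely because the proxy feature feeds back into the outcome it claims to predict – the pattern O’Neil describes again and again.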

The book invites, but never quite answers, the question: is technology inherently good or bad? Are the tools, the tool-makers, or the tool-wielders at fault? O’Neil suggests there is plenty of blame to go around: the unstated aim of the powerful and wealthy is often to maintain their privileged position at the top of the heap (Sociology 101!), and ambitious or opportunistic firms are eager to sell algorithmic “solutions” to difficult social problems – solutions that are not as bulletproof as advertised. Ultimately, O’Neil says, we need to “stop relying on blind faith and start putting the ‘science’ back into data-science.” (p. 219)

Discussion Questions

How can political campaigns best use big data and data analytics without causing harm?

Algorithms and big-data analysis can help actors achieve their aims more accurately, automatically, and efficiently. O’Neil’s big question is “what are those aims?” I saw few examples in the book where a unit deploying algorithms genuinely aimed to serve the greater good but saw its noble goals go awry due to bad data science. Perhaps this is because transparency and the opportunity for feedback tend to accompany ethically deployed algorithms.

Which educational uses of algorithms actually benefit learners?

I think there is room for algorithms in education when they are complementary to learning – particularly in informal learning scenarios where there is no time/money/opportunity for more in-depth instruction. Duolingo is a great example of a tool that provides additional learning opportunities outside of the classroom that might not exist otherwise.
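Duolingo’s actual scheduling model is proprietary, but the family of techniques it belongs to – spaced repetition – can be sketched with a simple Leitner-box scheme (the class, box count, and intervals below are all invented for illustration):

```python
# A minimal Leitner-box sketch of spaced repetition: items answered
# correctly are reviewed less often; misses are reviewed sooner.
REVIEW_INTERVALS = {1: 1, 2: 3, 3: 7}   # box number -> days until next review

class Card:
    def __init__(self, prompt, answer):
        self.prompt = prompt
        self.answer = answer
        self.box = 1                    # every new card starts in box 1

    def review(self, correct):
        # Promote on success (capped at box 3); demote to box 1 on a miss.
        self.box = min(self.box + 1, 3) if correct else 1
        return REVIEW_INTERVALS[self.box]

card = Card("hola", "hello")
print(card.review(correct=True))   # promoted to box 2 -> review in 3 days
print(card.review(correct=True))   # promoted to box 3 -> review in 7 days
print(card.review(correct=False))  # missed -> back to box 1, review tomorrow
```

This is the kind of algorithm that complements learning rather than gatekeeping it: it schedules practice the learner would otherwise have no structure for.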

Which actors (agencies, nonprofits, companies, scholars) are best placed to help address the problems O’Neil identifies?

I ran across Gobo from MIT’s Media Lab, an interesting example of a counter-algorithm designed to let you customize your social media feeds. Are open source code, transparency, and greater user control a step in the right direction?