Redefining The Human #edcmooc

From ‘reasserting the human’, this week we move on to ‘redefining the human’ in the final block of the E-Learning and Digital Cultures MOOC. Last week I wrote about how current educational theories and practices are largely based on differing versions of humanist philosophy. Now we are being asked to consider a rather different perspective on ‘being human’ in the digital age: the notion that we are already posthuman, and that ‘human being’ is a variously constructed social category, not a pre-determined and fixed entity with universal characteristics. Instrumental posthumanists, for example, treat the human body and human life as things that can and ought to be optimised by technologies. Pacemakers, cosmetic surgery, prostheses, exercise equipment that provides biofeedback data, genetically modified food, diet supplements and Google Glass are all posthuman technologies already in widespread use in the ‘developed’ world. This raises two questions: to what extent can we continue to enhance the human body and mind before we redefine what it is to be ‘human’? And what are the implications for education?

Whereas instrumental posthumanism merely integrates post-industrial technologies with humanist values, critical posthumanist theories challenge the very values and assumptions on which humanism is based. Though varied in nature, they share the view that humanism is a limiting and often oppressive ideology that needs careful examination. Humanism often includes the belief that ‘technology’ is the opposite of ‘natural humanity’. Critical posthumanists do not see these as opposed: the human body is just as ‘technological’ or ‘mechanical’ as the digital device on which you’re reading this post. The brain and the heart rely on electricity, just as DNA is a kind of programming. Critical posthumanism holds that technology is itself neither good nor bad, helpful nor hurtful; it is the contexts in which it is used, and the conditions under which it is produced, that make it a positive or negative thing.

In True Skin, a short science-fiction film by Stephan Zlotescu, synthetic enhancement has become the norm and the boundary between human and machine has been erased (think Pop-On Body Spares for humans). At the end of the film the protagonist, facing death – at least the death of his current body – takes advantage of an internet service which backs up all of his memories so that they can be inserted into his future (new) self. Sound familiar? It’s that old two-way ‘computer as human brain, human brain as computer’ metaphor (see previous post MOOCs and Metaphors).

What this notion says about the nature of mind, memory and learning, and about how technological mediation is positioned in relation to them, is a theme picked up in this week’s reading assignments – in particular in Nicholas Carr’s 2008 article for The Atlantic, Is Google Making Us Stupid?, a defining polemic which became the water cooler around which critics of the internet gathered to bemoan the demise of critical thinking:

“Still, their [Google founders Sergey Brin and Larry Page] easy assumption that we’d all ‘be better off’ if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimised. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction”

‘As we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence’, concludes Carr. One of the challenges for us is this: is it possible to counter the technological determinism of this view without resorting to over-simplistic assertions of human dominance over technology? How should we respond, as teachers and learners, to the idea that the internet damages our capacity to think? On the EDCMOOC Discussion Forum this week, one of the contributors had this to say:

“Despite the interesting links and comments made on this thread, it really needs to be noted how many educators use Google, especially Google Scholar through a university library. That is, we teach our students how to use a database such as this – as well as many, many others – to access the billions of well researched and written (peer-reviewed and non-refereed) articles on the web. Usually there will then be a few articles our learners will download and read as hard copy … in the traditional way!

Of course, there are many readings we might wish to come back to and never do, but that’s because there is way too much for one human to read. As mere humans, we need to select and then focus … as suggested in the very first short animation posted by our lecturers on how to approach this course.

Google and digital learning gives us faster and easier access to the information in virtual space. We still can access information in other ways – we still can read in different ways. Binary thinking is for computers not humans.”