Wednesday, August 3, 2011

Most of us can relate to the befuddled lady in the "Age-Activated Attention Deficit Disorder" video (http://tinyurl.com/6xcej6g). With the constant distractions of modern life interrupting every task she begins, the lady depicted can't keep up with the frequent alterations to her memory synapses, which are potentially activating a few genes capable of creating the proteins needed for memory storage, proteins which might even find their way into the gene pool in case reproduction was on her agenda (oh, NOW I remember where I was going in the car :-)

For many of us, these distractions would be Internet-activated: checking email, Facebook, and hey, what's this? Google Plus! This is new, can't wait to check it out! We get a pleasant jolt of dopamine in our nerve synapses just anticipating the next Internet event; this is the classic addiction syndrome, to which many of us succumb at the expense of other things we should be doing. Worse, our brains are being altered to accommodate our newly learned horizontal tracking behaviors, and this is drawing resources from areas that used to serve our more focused vertical thinking skills. These facts are as certain as global warming. The question is, as with global warming, to what extent should we be concerned?

In his book The Shallows, Nicholas Carr lays out a case for his contention that our infatuation with the Internet is costing us our capacity for concentration, contemplation, and reflection. In one collapsible argument after another, Carr anticipates the flaw we would have spotted, concedes it, and then says: but hang on, here's more evidence. My own video conception of the denizens of this planet being overwhelmed with inputs impinging on focus takes place in Neanderthal times, in a cave. Someone is hungry, so Daddy goes looking for his club, but on the way he gets distracted by a painting on a wall near where he sometimes leaves his clubs. He's out of a certain pigment, so he calls to the wife, who suggests he go into the forest and collect some moss off the trees. Meanwhile, granny is remarking on the fact that their last child was born with a distinctly less sloped forehead (due to re-allocation of brain cells, get it?). Distracted, she fails to prevent another child from touching a hot coal near the fire. The child starts crying.

So what else is new? Learning creates new synapses. It changes our brains. That's positive, isn't it? We survivors are here thanks to that process of species improvement.

Paul Howard-Jones uses the analogy of fire to frame our use of the Internet (http://www.thersa.org/events/video/vision-videos/dr-paul-howard-jones). Fire brings warmth and access to fine cuisine, but it can be the source of tragedy and must be treated with caution. We have trained ourselves and our children over eons to take advantage of its affordances while avoiding its pitfalls. The title of Jones's video lecture, "What is the Internet doing to our Brains?", echoes the subtitle of Carr's book. In this lecture Jones assesses whether the latest scientific findings support popular fears about how technology is rewiring our brains.

Jones addresses three popular beliefs: (1) that technology is a 21st-century addiction, (2) that Facebook is infantilizing us, and (3) that Google is degrading our intelligence, as Carr famously suggested in his Atlantic article, "Is Google Making Us Stupid?" (http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/6868/), which prompted Stephen Downes to write that if that were true he must be a raving lunatic by now, or something to that effect.

In taking on the notion that using search engines takes something away from us in neural terms, Jones reminds us that "learning is always associated with changes in the brain." He cites research in which naive and experienced Googlers used search engines, and another case in which subjects practiced difficult multiplication problems. These studies found that in unpracticed subjects, processing tended to take place in areas of the brain already taxed by the demands of short-term memory, whereas in experienced subjects this activity moved to the rear of the brain, to areas associated with automaticity. Yes, experienced subjects had learned how to search or multiply more efficiently, and yes, their brains had been rewired. That always happens in learning.

Jones addresses other areas of research, dismissing 1990s findings of decreased socialization among teenage Internet users, because the friends of those research subjects would not themselves have been connected. Nowadays kids are, and current research shows that where social networking is used to augment existing relationships, it leads to happiness and well-being. Does screen reading disrupt sleep (apparently reading from small screens does)? If so, it would disrupt memory and learning as well. Does use of technology contribute to obesity by suppressing exercise? After weighing the results of 178 studies, Jones finds "no evidence of digital technology's special influence on the brain."

The social media site Facebook might indeed be a remedy for a major problem of the elderly. Nick Harding's article in The Independent (http://www.independent.co.uk/life-style/gadgets-and-tech/features/science-of-the-social-network-2329529.html) reports on a year-long study by Daniel Miller, professor of anthropology at University College London, which Miller has written up in his book, Tales From Facebook: "If there is one obvious constituency for whom Facebook is absolutely the right technology, it is the elderly. It allows them to keep closely involved in the lives of people they care about when, for one reason or another, face-to-face contact becomes difficult... Its origins are with the young but the elderly are its future."
Furthermore, amid all this talk about what is lost with the new technologies, Facebook is seen here as a throwback to a time when everyone in small communities knew everyone else's business: "As Facebook transforms our relationship with public and private, it also updates the notion of community, becoming a simulacrum of the neighbourhoods lost in the West over the past 50 years - a place where people can keep abreast of the lives of their online neighbours."

Such findings support Jones's contention that, as with our use of fire, the positive aspects of technology can be emphasized through a better understanding of what it actually does for us. Looking at past research on technology used to train memory and other useful skills, Jones notes that transfer has been shown to be a problem in traditional studies, but that research on video games suggests the opposite. Gaming research has revealed that enhancements can be achieved in performance on motor tasks, in the ability to task-switch and to filter distractions, and in inference ability. The reason is that in each instance the addictive response to the constant distractions of the Internet (the dopamine hit) is harnessed toward these outcomes. Jones grants that technology can generate addiction and aggression, but more importantly, "benefits arise from exactly the same processes, learning new skills, pro-social behavior, and immense educational potential." So it's not whether we use technology but how we use it (we know fire burns, so use it safely).

NPR's On the Media recently did a show on video games, including a segment on the Future of Gaming (http://www.onthemedia.org/2011/jul/01/future-gaming/). In this segment Brooke Gladstone explores how the potential envisaged by Jones is playing out in today's marketplace, culminating in Jane McGonigal's TED talk from February 2010 on her research into how video games can contribute to training for a better world.

George Siemens has been expressing some dissatisfaction with the shallower aspects of social media in his Elearnspace blog; e.g. http://www.elearnspace.org/blog/2011/07/30/losing-interest-in-social-media-there-is-no-there-there/. Here, George dismisses social media as being mostly about flow, not substance. Perhaps, but without flow, substance would be lost, and that to me is one value of social media.
George is saying, I think, that social media is impoverished where it doesn't create content but simply kicks it along. This is certainly true in many cases of vanity posting (and just look at Farmville); however, social media's significant impact in contexts where mainstream media is locked down is well understood. Clay Shirky dwells for a chapter in Here Comes Everybody on the idea that popular uprisings occur not when "everyone knows," and not even when "everyone knows that everyone knows," but only when "everyone knows that everyone knows that everyone knows" that the king has no clothes. Social media like Twitter are highly significant in creating that awareness. But George points out that social media can also appear self-serving, cliquish, and a waste of time if you're spending mouse clicks sorting your friends yet again into this circle or that.

But in this post George doesn't count blogging as social media. On the contrary, where "Social media=emotions," "Blogging/writing/transparent scholarship=intellect."

Michael Coghlan notes in a recent talk on The Shallows (http://michaelc.podomatic.com/entry/2011-07-12T07_08_48-07_00; print version http://tinyurl.com/6xcej6g) that he becomes productive only when he disconnects. I can relate to that; I've been mulling over this blog post for over a week now, articulating it bit by bit in posts to the Webheads Yahoogroup. Such fora comprise another form of social media: time-consuming, unproductive, and shallow only if you consider such deliberations avoidance of a more deeply construed final product. However, anyone who teaches writing knows the importance of process in achieving a well-crafted product. The most progressive writing teachers are putting their students in touch with peers via social media (here, including blogs). Is this misguided? I think George and Michael are bringing their valid and treasured perspectives to bear on a 'problem' which, I guess I'm saying, is actually part of the process.

Harold Jarche's presentation on personal knowledge management (PKM) complements Siemens's views on some of the dilatory effects of social media, with an explanation of how what are suddenly being called distractions fit into the process of knowledge management, and with an assertion at the end that Jarche's own critical thinking skills have improved as a result of his cycle of PKM (which I suppose would be anecdotal evidence of lateral thinking processes leading to vertical ones).
Jarche defines PKM as "a set of processes individually constructed to help the flow from implicit to explicit knowledge." Managing the flow of knowledge, "staying abreast of events and advances in our respective fields takes more time than many of us have." Consequently, "the lines between learning and working are getting blurred," and proper management of workflow becomes essential.

Knowledge management seeks to make implicit knowledge explicit through internal (how do I deal with this?) and external (who can I work with on this?) processes. This entails a continuous loop of four internal elements (sort, categorize, make explicit, retrieve), percolated through the key external elements of connect, exchange, and contribute. This enables us to observe, reflect, and put tentative thoughts out; to read, listen, converse, and reflect. Jarche points out that this is more about attitude (what I call paradigm shifts) than about a particular set of tools, but the rest of the presentation is essentially about which tools go with which part of the flow.
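Jarche presents this loop as an attitude, not an algorithm, but for the programmatically inclined it can be caricatured in a few lines of code. The sketch below is my own illustrative invention, not Jarche's formalism; every class, function, and name in it is hypothetical.

```python
# A playful sketch (not Jarche's own model) of the PKM loop's internal
# elements: sort, categorize, make explicit, retrieve.
from dataclasses import dataclass, field

@dataclass
class Item:
    text: str
    tags: set = field(default_factory=set)
    note: str = ""          # "make explicit": the idea restated in our own words

class PKM:
    def __init__(self):
        self.library = []   # the growing, connected digital library

    def sort(self, items, keep):
        # Sort: filter the incoming flow, keeping only what matters to us.
        return [i for i in items if keep(i)]

    def categorize(self, item, *tags):
        # Categorize: attach tags so the item can be found again later.
        item.tags.update(tags)

    def make_explicit(self, item, note):
        # Make explicit: add our own reflection, then file the item away.
        item.note = note
        self.library.append(item)

    def retrieve(self, tag):
        # Retrieve: pull back everything filed under a given tag.
        return [i for i in self.library if tag in i.tags]

pkm = PKM()
incoming = [Item("Carr on memory"), Item("cat video")]
for item in pkm.sort(incoming, keep=lambda i: "video" not in i.text):
    pkm.categorize(item, "shallows", "memory")
    pkm.make_explicit(item, "Relates distraction to long-term memory formation.")

print([i.text for i in pkm.retrieve("memory")])  # → ['Carr on memory']
```

The external elements (connect, exchange, contribute) are where this caricature breaks down, of course: they happen between people, not inside a data structure.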

Jarche concludes by saying that PKM is "part of a social learning contract" wherein we have an "obligation" to participate so that we can learn from each other. "Cooperation is the glue that holds together the important social networks in which we work and live."

This helps put Siemens's insights into perspective, but Jarche also touches on Carr's when he says at the end that he feels that he has been creating a powerful resource, "a growing and connected digital library. It has also helped me to better develop my critical thinking skills."

Carr, however, comes to similar conclusions himself in his series of beguilingly collapsible arguments, but then he explains why all the input we're subjected to now is different from previous information revolutions (e.g., the one in which calculators freed our minds to better internalize maths concepts). This latest onslaught isn't freeing up mental processing power so much as making what's left incapable of keeping what flows past around long enough for the proteins to form that will commit it to long-term memory.

I would argue that, again, these resources are being optimally allocated. We are evolving systems for tagging and bookmarking that place information at our fingertips where and when we need it, so we can process perfectly well once we recall where we can link to what we need. I guess I am arguing that resources once devoted to long-term memory are increasingly being devoted to tracking linking mechanisms, whereas Carr seems to be saying that this is the shallow part: we index it but don't process it. On the other hand, this very process could be developing a level of abstraction that further pushes the boundaries of the cognition which distinguishes us from other, less capable species.

Samuel Johnson was aware of this distinction in the 18th century ("Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it." from Boswell's Life of Johnson). Coghlan mentions this in his talk, where he contrasts horizontal thinking (multitasking, allowing us in the midst of composition to link out to the Internet for the source of a certain quote by Johnson, for example) with vertical thinking (layered and focused, allowing us to complete the piece in which the quote is inserted). He shows us Howard Rheingold's video (http://www.screenr.com/rNl) about how Rheingold finds, organizes, curates, filters, and begins to compose using an impressive array of tools, all of which culminates in two paragraphs of prose with awesome face validity. Coghlan intends this as an illustration of how a wired academic can harness distracted horizontal processes in the service of deeper vertical ones, but in the example we see no direct evidence of developing or adding value to content (though we can see that Howard is about to take that step, if the phone doesn't ring :-).
Rheingold's workflow appears, however, to work for him; he is after all a Stanford professor with an impressive publication record, and his output and workflow illustrate how he is able to manage and leverage certain processes of linking and abstraction to eventually produce a well-crafted final product.

Carr, though, seems to be saying that this kind of workflow is making us incapable of deeper processing. True, I was listening to an instructional designer in a VTE podcast the other day saying that whereas before you could count on a 12-minute attention span, now you have to reach learners in 5 minutes. But this doesn't mean we are incapable of processing. We are reading Carr's book, aren't we? We are deeply examining the ideas to the extent that they engage us. So? We're distracted! Isn't this the human condition? From time immemorial? (And if it's immemorial, then it didn't form the protein to make it into long-term memory back then either.)

"Once flow becomes too rapid and complex, we need a model that allows individuals to learn and function in spite of the pace and flow. A network model of learning (an attribute of connectivism) offloads some of the processing and interpreting functions of knowledge flow to nodes within a learning network. Instead of the individual having to evaluate and process every piece of information, she/he creates a personal network of trusted nodes: people and content, enhanced by technology. The learner aggregates relevant nodes…and relies on each individual node to provide needed knowledge. The act of knowing is offloaded onto the network itself. This view of learning scales well with continued complexity and pace of knowledge development."David Weinberger has an interesting take on The Shallows. He says that "if the Net is the shallows (a brilliant title, by the way), then the old media that Nicholas romanticizes was the narrows: narrowing the richness of shared experience to a manageable trickle."http://davidweinberger.sys-con.com/node/1453793/mobile.Today's learners must learn to navigate between the narrows and the shallows. A recent CIBER meta-analysis of the reading and learning behaviors of student visitors to libraries, both brick & mortar and virtual, highlights trends it sees for the near future (next 5 years, 10 years from 2008) as noted in CIBER. (2008). Information behaviour of the researcher of the future. UCL. http://www.ucl.ac.uk/infostudies/research/ciber/downloads/ggexecutive.pdf

The report finds (p. 9) that the emerging information-seeking behavior of those in the study is indeed "horizontal, bouncing, checking and viewing in nature." These users are "promiscuous, diverse and volatile." This poses "serious challenges for traditional information providers, nurtured in a hardcopy paradigm and ... still tied to it."

As with any data, these findings are subject to interpretation. My interpretation is that as information becomes more accessible, patterns of access are of course going to change. On the other hand, maybe not all that much has fundamentally changed. I can't tell you how many nickels I put into photocopying machines in grad school in 1981 to take away copies of journal articles that I possibly skimmed at home looking for factoids to augment my references, or perhaps didn't read at all. Except that back then, the library could neither track my behavior, apart from monitoring my expenditure of nickels, nor know what I did with the material once I got it home and put it on the growing stack of accumulated papers.

I'm taking the stance that this is much ado about nothing much. Carr points out early in his book that, regarding the "torrent of new content ... one side's abundant Eden is the other's vast wasteland." Young people might tend to flit among links in their recreational browsing, as we all do, but to extrapolate from this to "they therefore never engage in deep vertical absorption of what they are browsing" is in my view quite likely false. It could be that they have so much more data to scan that they simply click on a lot more horizon, as we all do, before latching on to the bits we feel we need to explore in greater depth (possibly because there is so much horizon out "there", and now in my mind I hear Siemens warn, "there's no there there" :-).

As we learned from Clay Shirky's Cognitive Surplus, it's not so much a question of what someone is doing at a particular time as what they would otherwise be doing. Dan Pink once asked Shirky in an interview what his favorite episode of Gilligan's Island was, an inside joke because Shirky counts himself as one of a generation plunged into the vast wasteland of passive addiction to the shallows of television. It would have been impossible, when Clay Shirky was curled up on the couch, to know how the young people of his day (the TV generation) would develop as the researchers and academics of the future. Shirky nevertheless seems to have undergone some positive plasticity, considering his subsequently observable ability to grasp concepts and convey them to others in deeply textured prose.

Regarding my own cognitive surplus, I hardly ever sit down for any length of time in front of a TV anymore, and though my kids might do so, they don't just watch whatever's on; they'll have chosen their program and have some purpose in watching it. I would think that this observable change in behavior is freeing up time and cognitive surplus for the kind of horizon scanning that emerges in some of these studies.

In other words, when you have a few moments at the end of a long day and no pressing deadlines, what do you choose to do? Play solitaire? Sit down in front of a TV? Pick up a good book? See what's on YouTube? Check email, Facebook, Twitter? If you tend toward the latter end of the scale, you're in good shape in my view. Many of us didn't have those latter options growing up, but now that we do, we learn a lot from YouTube, email, Facebook, Twitter, and other interactions with our PLN. When it's time to write and reflect, we switch it all off and get down to it, as Michael said he did in writing his article. Switched back on, it feeds and stimulates the work we do during the times when it's switched off.

The CIBER report recommends that "information skills have to be developed during formative school years and that remedial information literacy programmes at university level are likely to be ineffective [and that libraries should] go with the flow and help children to become more effective information consumers."

We should be making ourselves and our kids aware of how to successfully leverage the affordances of the new technologies while avoiding the pitfalls, just as we did for fire, TV, the telephone before that, and books in the 16th century (much decried by writers back then; get the irony? writers?). What we would need (but will never have) is a comparative study of how much deep cognitive endeavor people engaged in during the TV era versus what they engage in now. I think that, in Shirky's terms, a lot of cognitive surplus used to be absorbed into recreational time, and it still is; we can only devote so much energy a day to deep cognition. But for recreation we now web surf rather than watch TV, storing up links (utilizing our tagging and feed systems) for retrieval later during our focused work hours.

This is actually a positively enlightening development, making possible, in my view, a renaissance in thinking and sharing, along with a reversal of power directionality, as when cognitive surplus gets invested in Wikileaks, for example. Many people scan those revelations superficially, but many others are demonstrably capable of delving into them deeply and distilling what they find into packets appropriate for consumption by the scanners. Is that a problem? (Answer: only if it's your power that's being reversed :-)