On Dependence

Matthew Shadbolt, Contributor

May 22, 2012

Part 3: Digital Decisions

“Our Digital Society in the Next 30 Years: An Interview with John Battelle”

“What I envisage is that, instead of designing everything (and particularly computer software) on the assumption that ‘people are going to behave like machines’ — that is, without feeling, love, hatred, anticipation, intuition, imagination, etc. (the very qualities we think of when we ask what it is to be human) — we design everything on the assumption that people are not heartless or stupid but marvelously capable, given the chance, each and every one.

“I’d like to see machines, systems, environments of all kinds, made such that if they are to work well everyone who uses or inhabits them is challenged to act at her or his best and that there are no built-in obstacles to doing that. The main obstacles to this at present are not so much the machines and technical processes but the presence of our other selves, as paid guardians, ‘protecting’ every one of us from our ‘mechanically stupefied selves’ and enforcing rules of behavior and design which assume that ‘users know nothing and producers know all.'”

Over the past two posts, we’ve explored various ideas surrounding our increasing technological dependence upon energy, and how it impacts our disrupted understanding of what a home means. In forming those digital decisions, it’s always also interesting to expand upon the consequences of those actions, with particular focus upon how that decision-making is arrived at.

Is the abdication of responsibility for our own actions changing the way we think? Do we behave a particular way when we make these kinds of decisions not because they are convenient, or because we simply don’t have time, but because the way we think has changed?

The concerns surrounding society’s increasingly umbilical dependence upon technology are well documented (ironically) online, but what happens when your experience of the world is exclusively framed by the technology you choose to surround yourself with? How does being able to reach anyone, anywhere, change us as human beings?

One of the most interesting of those concerns is how the magical lure of technology is actually dulled by the experience of daily life. Simply put, real life is slower, often less exciting, and not filled with the types of synthetic, oxytocin-fueled connections our brains increasingly crave. And while our use of technology aggressively disintermediates social interaction under the guise of reducing manual work, society is keen to not only pursue these changes, but emphatically celebrate them too.

Writing in Technorati, Christopher Califf asks, “Is Internet dependence helping us evolve or devolve?” One example he uses to illustrate this idea is finding information, something we’re all familiar with and one of the most recognizable online activities in the world. Califf proposes that instead of learning and retaining the information we seek, we’re simply remembering how and where to locate it. If we can Google something within five seconds and get an immediate response at almost no cost or effort, does that have a higher value than being able to remember and recall the same information? Can our memories be as detailed as Google? Most certainly not, but what this illustrates is, of course, the difference between data and insight.

Google is fantastic at mapping information, essentially plotting and cataloguing dots across the world of all the fruits of human knowledge. However, what algorithms increasingly struggle with is how to semantically connect those same dots in meaningful ways based on specific context. This is the huge human problem of search, and one that Google is wrestling with in its efforts to not only wrangle the colossal volume of information at its disposal, but also in assembling the largest (now social) resource of artificial intelligence ever created.

In abdicating the responsibility for remembering things to a search engine, are we essentially dulling our own capacity to learn, and pass on that knowledge to others? If the skill of information simply becomes who can find it fastest, with the result of producing the most comprehensive set of information on a particular topic, fields such as education, psychology and personal interaction become severely impacted.

“In a minute there is time for decisions and revisions, which a minute will reverse.”

In becoming more and more reliant on where the information is, rather than what it is, modern systems such as the Internet foreshadow significant consequences for memory, reading and learning. Eco Ali goes further on this same point by trying to gauge the level of dependency currently in place. Is dependence upon technology ushering in a brighter future or a self-destructive hell? In defining dependence, Ali describes it as the point at which “the use of technology becomes the unique fashion by which humans relate to the world.”

Given the sheer volume of human interactions passing through channels such as Facebook or Twitter, it’s easy to see where the concerns are coming from. In many ways, technology has always been a critical agent of human change, as far back as there were humans here, but as we get increasingly alienated and disenfranchised from the world, under the ironic illusion of ubiquitous connectivity, large sections of the population are now exclusively interacting with the world through the pixelated lens of technology.

Even in a literal sense, Google’s latest initiatives to overlay digital information in the form of glasses will further disintermediate us from what’s actually happening around us. Under the guise of being helpful and “augmenting” our experience of the world, it also increasingly distances it from us at the same time. For many of us now, a moment unshared is a moment unfulfilled. If we didn’t share that thought on Facebook, we didn’t have it. If we didn’t post that picture, we weren’t there.

“Google Project Glass”

As social media increasingly and aggressively illustrates, Ali suggests, how we employ technology expresses our intentions, projects and values. The types of technologies we surround ourselves with say so much about us to others. But this is to suggest that technology is some form of independent, autonomous, disruptive agent of change. It’s not. It’s replacing and eroding many natural human forms of interaction, such as nurturing, connecting, remembering and finding. It offers the synthetic promise of all of these, but with a fraction of the substance. As our experience of the world increasingly blends with the digital layer of information overlaid on top of it, is that lens a barrier, or an experience that improves cognition?

There are strong arguments on both sides of this (notably Clive Thompson’s excellent ‘Your Outboard Brain Knows All’), but what’s inevitable is that our experience of the world is shifting to mean something very, very different. How often have you heard someone ask, “What would we do without Google Maps to get here?” or “Thank goodness for Amazon, I just couldn’t face going shopping at the mall today”?

What this situation describes is the ethical dilemma at the heart of Google’s (and increasingly Facebook’s) value proposition. If everyone is connected, with the ability to find anything, anywhere, at any time, how does that change us as humans, and is it for the better? Is dulling our ability to learn the price we pay for ubiquitous access to information? The argument, of course, is much more complex than this simplistic characterization, but let’s take a specific example: handwriting.

“Nicholas Carr: The Internet Weakens Deep Thinking”

Writing on how technology is increasingly impacting education, Sequoia Davison outlines the declining trend in handwriting as a fundamental measure of literacy, specifically calling out how functions such as auto-corrects are having negative impacts upon grammar and spelling. If the software you’re using to write automatically adjusts your own mistakes, why do you even need to know how to spell?

With SAT scores in English in swift decline, there’s an increasing disparity between academic expectations and actual learning. For many, the idea of writing something out in long form, or taking notes with a pen, is now a laughable marker of technological latency.

Recent Pew Internet & American Life Project studies analyzing the behaviors of American teenagers found that more than 75 percent of them now have a smartphone and, perhaps even more incredibly, that 58 percent of 12-year-olds own one too. And of course, it’s a trend spiraling upwards towards market saturation.

For example, when the same study found that 38 percent of teens texted regularly in 2008, compared with 54 percent just three years later, it’s easy to understand what’s happening. How does constant cellular communication impact a teen’s social life, skills and education? Is it replacing them or augmenting them?

“Communication” has shifted to mean something very different, very rapidly. As technology actively reduces verbal and in-person communication, improvisation and spontaneity decline with it. The channels are effective, but we are not, and where those same channels help to combat the awkward silences of adolescence, the likelihood of those silences reappearing in adulthood only increases.

Facebook is severely disrupting how people build and maintain a network of friends, for example, and while technological dependence facilitates communicative shortcuts and efficiencies of time and location, the result is that it’s increasingly difficult to wall off the outside world. This is something we’re also seeing as a growing issue in politics, with China, and now Iran, attempting to disintermediate themselves from the rest of the Web (and therefore the world).

As Steve Cohen accurately suggests, the transformative power of the mass media in the digital space makes it difficult not only for the state to insulate itself from the outside world, but also for the individual. Cohen goes as far as to propose that the benefit of a world with no privacy is also a world with no secrecy. When it comes to the political process, it’s also easy to see how technology has become a disruptive force, often for positive reasons. For example, the 2008 Obama electoral campaign relied heavily upon online donations, completely disrupting the traditional process of fundraising, to the extent that “the mobilization of the public through the Web has managed to overcome the anti-democratic impact of money in our electoral system.”

“Nick Bilton interviewed at Web 2.0 Summit 2011”

“I’ve started to notice that while technology allows us to connect to people far away, it can simultaneously disconnect us from people who may be directly in front of us.”

Writing in the always insightful Bits Blog for The New York Times, technology writer Nick Bilton sees a gradual societal transition of cautiously stepping back from the digital space, something he characterizes as reaching an “inflection point” with social tools. If we spend most of our days simply feeding the Internet, what room does that leave for the people we’re spending time with in real life? Bilton beautifully articulates this as “grazing on a conversation that’s not the one in the room.”

We see this everywhere, and it can be intensely frustrating when attempting to either hold a conversation or spend meaningful time together. For many, it’s simply not something that multitasking offers a solution for. We’re either in the moment in the room, or in the moment on our devices, and it’s tough for the brain to process the two together.

So are our digital decisions producing chemical and biological shifts in our bodies for the long term? Is this a point in our evolution whereby it will be difficult to return to a moment where clarity without the digital layer is possible? Nicholas Carr, writing in The Atlantic, goes further than Bilton in asking the fundamental question in the DNA of this discussion, “Is Google making us stupid?”

“Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going — so far as I can tell — but it’s changing.”

Carr produces a lengthy but brilliant analysis of how learning is changing in the era of Google, and while the promise of a brighter, more transparent and open future is often the guise under which these services appear, they are also chipping away at fundamental aspects of thinking and learning, such as concentration and contemplation. For many (perhaps even some of you reading this), it’s a real challenge to stay focused on long pieces of writing. “Deep reading” (the process of sustained, undistracted, intellectually inspiring reading), as Carr suggests, is now a very real problem, and, recalling Marshall McLuhan, Carr relays that media has never been a set of passive channels of information. They supply the “stuff of thought” but also shape “the process of thought” too. And while it’s indisputable that we’re consuming more information than ever before online, the nature and quality of that consumption is in question, especially as the content begins to morph around our own behaviors in order to satisfy and optimize the goals of advertisers.

Simply put, how we read, and subsequently think, is being reshaped by the Web, and very often we find ourselves reading with a staccato quality, in small, digestible pieces, disconnected from each other, and without any real depth. As Scott Karp asks, “What if I do all my reading on the Web, not so much because the way I read has changed, i.e., I’m just seeking convenience, but because the way I think has changed?” It’s a fascinating set of ideas, and one that many will actively empathize with. The Web, in almost all its forms, optimizes and prioritizes efficiency and immediacy. Distraction is rampant, and the ability to create deep, rich, meaningful mental connections is simply not a focus. It’s a database-driven set of solutions predicated around search and impressions.

“Is Google Making Us Stupid?”

“It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of ‘reading’ are emerging as users ‘power browse’ horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.”

The notion of “power browsing” is an interesting one for the real estate industry, in particular, with many of the more popular real estate search portals actively fueling and creating experiences around this behavior. It’s the premise of “show me everything and let me filter” rather than “intuit what works best for me and just show me that.” In many ways, it’s the difference between searching listings and finding homes.

Karp’s suggestion that brains are changing as a result of these seismic disruptions is also supported by a growing community of neuroscientists, who raise the further concern that the brain cannot evolve fast enough to keep up with the pace of technology. This accounts for distractions, attention deficits and the cognitive dissonance associated with multitasking. Simply put, there’s too much brain noise for our biological makeup to keep pace with.

And it’s not just happening in the brain’s formative years when we’re children. James Olds, often considered one of the founders of modern neuroscience, and Peter Milner, who together with Olds discovered and popularized the idea of the “reward center,” found that “the brain is malleable enough to have the ability to reprogram itself on the fly, altering the way it functions” well into adulthood. For some of us, this continues to be a challenge.

In searching for the origins of this behavior, Carr charts its progress starting with the invention of clocks and timepieces, one of the original pieces of technology. In deciding when to eat, sleep and work, we simply stopped listening to our senses, and started obeying the technologies that surrounded us. Now we’re seeing the modern implementation of this same idea, as the Internet itself begins to subsume all other types of technologies: maps, clocks, typewriters, radios, televisions, telephones, and of course, many others (as Part 2 suggests, home appliances are next).

Carr continues by outlining how, when this happens (just think of all the different technologies subsumed by the iPhone’s ecosystem of apps, for example), the original medium gets recreated in the Web’s image. It’s television on your computer, but it’s surrounded by hyperlinks and banner ads. The language of the Web is becoming ubiquitous as the Web morphs around traditional media.

The classic example of this is the severe disruption and disintermediation of the newspaper industry, which continues to be faced with enormous challenges not only to its existing business models, but also to its often crumbling infrastructures. Today, newspapers have little choice but to play by new media rules such as headline-baiting, photo galleries, and bite-sized, ad-fueled consumption. In essence, the Web exercises unprecedented influence over the way our thoughts are formed, through the aggressive consumption of all of our forms of media exposure.

“The Internet is a machine designed for the efficient and automated collection, transmission and manipulation of information, and its legions of programmers are intent on finding ‘the one best method’ — the perfect algorithm — to carry out every mental movement of what we’ve come to describe as ‘knowledge work.'”

And while it’s true that the Web does indeed attempt to systematize and categorize everything, the stranglehold over how information is found and meaning extracted from it is algorithmic and efficient, but not reflective of how learning actually happens. Intelligence is not the output of a mechanical, mathematical process that can be isolated, measured and optimized. Intelligence originates in contemplation, ambiguity, discussion and richer, more meaningful experiences. With Google building artificial intelligence on a massive, unprecedented scale, remember that it is still coupled to the premise of selling advertising. Google has a vested commercial interest in its own data collection for the express benefit of appealing to marketers wanting to expose products to potential audiences. This is one of the main reasons that Facebook still poses such a colossal threat to Google. The data Facebook has collected on its millions of users is already highly targeted and used by advertisers, in a way far superior to that available in Google AdWords.

If we want to reach females aged 20-25 in the suburbs of San Francisco who have already liked a number of local restaurants’ pages but are interested in coupons for home shopping instead, it’s possible. Carr concludes that it’s in Google’s (and other online advertising partners’) economic interests to drive their own users to distraction. More page views, more impressions, more revenue.

“The best minds of my generation are thinking about how to make people click ads.”

It’s always been a concern of ours that technology will replace aspects of our behavior currently perceived as valuable. It was predicted that radio would empty concert halls, television would destroy the film industry, and that online shopping would bring an end to brick-and-mortar stores. None of those concerns was ever validated or fulfilled, but the set of changes over the past 15 years feels different.

It’s not a shift away from one service to another; it’s our own decision-making that’s under threat here, calculatedly and systematically disintermediated as we delegate more and more responsibility to algorithms and technology, fueling our own self-fulfilled dependence. In many ways, those decisions serve our best interests for survival, as in the case of curbing energy consumption, but they also carry immense risks: biological, generational consequences for learning and education.

Where and how we choose to be dependent affects us all, and it’s an important choice to make while we still can.