Your brain is your phone

Around 370 BC, in Plato’s Phaedrus, Socrates worried about what a new technology was doing to young people’s brains. As younger scholars moved from oral arguments to written ones, Socrates argued against writing, which he felt would “create forgetfulness in the learners’ souls, because they will not use their memories.”

Millennia later, little has changed: in the 20th century, moral panic erupted over television, which detractors worried would turn us all into zombified couch potatoes, incapable of creative thought. Decades later, video games were even worse, turning our children into amoral, trigger-happy monsters.

In 2017, more than any other item of everyday technology, the smartphone is the focus of our fears. This isn’t surprising—we use them to store precious memories and factual information, to find other people and to talk to them. Our smartphones have become an externalized part of our brains.

The first time you lose your phone, it can feel like losing a limb—and that feeling can be startling. But some new technologies have that effect, coming to occupy a part of your mental map of yourself. For those who’ve grown up with smartphones, an inability to imagine existence without one is normal. But if you remember life before smartphones, you might worry about how the world is changing, and how these new devices are changing us.

It’s important to think about what, exactly, the relationship is between our brains and our smartphones, and the impact that relationship has on our mental health, our social lives, and how young people perceive the world around them. But focusing our worry on how smartphones might be “rewiring kids’ brains” misses a bigger point. We should think less about changing brains and more about whether we can trust the devices doing the changing. After all, they’re designed and built by people trying to make money from us.

Believe everything you read in the media right now, and the picture looks bleak. We’re supposed to be increasingly lonely, anxious creatures—and it’s supposedly particularly bad for young people.

A recent piece in The Atlantic by psychologist Jean M. Twenge—asking “Have Smartphones Destroyed a Generation?”—is the latest prominent example. It paints a compelling, dystopian picture of an isolated, vitamin-D-deficient, digitally native generation, locked in their rooms and harmed by constant connectivity. Young people are drinking less, having less sex, sleeping less, and even applying for fewer driving licenses—and the culprit, the argument goes, is the connectivity the smartphone affords.

This kind of argument, finding that young people are behaving differently and attributing the changes to a specific new technology, isn’t an unusual one. As sociologist David Oswell explores in an essay in Cool Places: Geographies of Youth, an anthology detailing youth cultures around the world, this happened with television as well: “Television is conceived, primarily by the press, as all powerful, whereas young people are constructed as ‘children’: innocent, manipulable, and in need of protection.” To older generations, television was a symbol of the unpredictability, uncontrollability, and moral laxness of younger ones.

We can see the same thing happening now, centering on smartphones and the idea that everyone who has one is online all the time. As digital youth researcher Katie Davis pointed out in her response to Twenge’s piece, many of the trends identified are correlational but not necessarily causal. Just as digital media isn’t the sole factor behind these trends, it’s simplistic to suggest everyone under 20 is affected by smartphone usage in the same way. There are plenty of other things that influence intergenerational changes in patterns of behavior.

But just this month, UK children’s commissioner Anne Longfield set off a round of discussion in the British press by suggesting that it was parents’ responsibility to manage their children’s time online, comparing children’s use of social media to “junk food.” This is just the latest in a long line of worries that “biologically [we’ve] not evolved to accommodate the sedentary, yet frenzied and chaotic, nature of today’s technology,” as pediatric therapist Cris Rowan put it.

But we can just as easily group together findings that could also be spun as positive impacts. Studies have found that screen-based media, particularly video games, increase decision-making speed without negatively affecting the accuracy of those decisions. University of Rochester researchers Daphne Bavelier and C. Shawn Green found that playing games might even be an effective form of cognitive therapy, and that children under the age of 10 who played games regularly had reaction times comparable to adults. Sleep researcher Russell Foster has pointed out that though people “feel” their devices are disrupting their sleep, empirical evidence is still lacking. And as Simon Maybin explores in a BBC piece, the idea that new technologies have shortened people’s attention spans is largely a myth—as is the very idea of an “attention span.”

Young people clearly do like likes on social media—but that could be for a range of reasons beyond simple dopamine hits. There are more abstract, personal issues of social acceptance, or professional vanity, or even just a petty look-at-me-now post aimed at an ex. And why, for that matter, is getting a dopamine rush from hugging a friend “good,” but from a friend liking your photo “bad”?

To be clear: the research shows smartphones are affecting young people’s brains. The important question, though, isn’t whether our devices are responsible for changing how young people act and think—using technology always changes our brains. To label that change “good” or “bad” is a fallacy as old as intergenerational misunderstanding.

“What’s neat is that we’re in a time where anyone slightly older than you can give you an ‘I remember when’ moment about a piece of tech that was new for him or her,” explains Robert Rosenberger. “But what’s amazing about that is that that stuff gets normal really fast, and it’s that ‘getting normal’ which is the neat part.”

Rosenberger is a technology philosopher at the School of Public Policy at the Georgia Institute of Technology. He doesn’t buy into the claim that new technologies are inherently bad for us — or our brains. He points to the way our brains form relationships between learned behavior and technology. “I don’t think it’s inherently negative,” he tells me over the phone. “I think the question is more, how conscious are we of those relationships?”

Rosenberger is one of the leading researchers of so-called “Phantom Vibration Syndrome” (PVS), when you sense a rustle in your jeans or bag and go to reach for your vibrating phone—even when it’s on the table in front of you. “PVS is a good example of all this, because it’s something that’s proved by survey data that it’s kind of an epidemic,” he says. “But it’s not hurting people; most people aren’t bothered by it.”

As Rosenberger explains, we develop neural pathways that expect phone vibrations, and they lead us to interpret every rumble, every movement, as that of a phone in a pocket. There are other examples, too, like when people search for missing glasses that are actually perched on top of their heads. It’s a silly mental mistake—a side effect of the brain getting accustomed to an external technology and absorbing it into its map of the body. We can also unconsciously train our brains to ignore these relationships, like visiting a website and tuning out the banner ads.

Iain Gilchrist is a neuropsychologist at Bristol University who specializes in visual exploration. He points out how the term “inundated with information” has “demonized” the modern, connected brain. “What the eye does is actually move around about three times a second,” he says. “It actually points your brain to which information is interesting and relevant.” Literally speaking, we’re already biologically “inundated.”

“Yes, technology is cognitively demanding, but things have been cognitively demanding for a very long time,” Gilchrist says. “I don’t think the technology itself, or the way it’s presented, is fundamentally changing the brain or necessarily putting us under more pressure than we’ve been under in previous times in history.”

He points to a historical example: “What’s happened is that technology has evolved to fit quite well with how human cognition works,” he explains. “There were times where people printed books and the letters were so small that you were really struggling to read them — and then people stopped printing type that small.”

Our changing habits aren’t just about information, but also about entertainment and community. Today’s teenagers came along at least two decades after the first generation to experience life online, but smartphones have radically opened access to the internet and made connection a normal, expected part of life. Never before have young people been able to seek out, share, and enjoy the content they love with such specificity. We can see every new digital content trend as another fit with human cognition, as Gilchrist puts it.

I saw this for myself in a primary school in southwest London, watching a group of 11-year-olds gathered around the class computer looking at YouTube. They were entranced by a video of a purple and blue gloopy substance falling thickly from a wooden spoon, changing shape and kissing the inside of a bowl. After the class ended they used my phone to visit the 42,000-follower-strong Instagram account @satisfying.video, watching a half-painted pot spin on a potter’s wheel as a paintbrush made long, slow indentations in it, perfectly cutting the soft clay. It was mesmerizing.

I find all this fascinating because it’s evidence that the internet, for lots of young people, is a place they want to be a part of. Whether we’re talking about Instagram-driven dopamine hits, or seeking out relaxing videos, or finding spaces for community—so much of what this generation knows as normal is not only dependent on the widespread adoption of smartphones, but an inevitable consequence of it. Their devices are an intuitive part of who they are, and they also create feedback loops that influence how they spend their time.

When people worry about smartphones, they often talk of separating the user from the device for the sake of their mental health. But many younger people view that device as a part of themselves. A young person’s entire sense of connection and friendship may well be mediated by screens, blending analog and digital friendships. Given the boundless possibilities of being online, they love that even in the same physical space, they can choose to watch slime videos together.

How, for example, should a teen find social spaces in a town where the youth clubs have been shut down? As long as young people have had access to the web, it’s been a place to find community regardless of physical constraints. For marginalized communities, the internet is a lifeline. YouTube reflects diversity in a way that TV doesn’t, while “Black Twitter” or “Desi Twitter” are community safe spaces crucial to their members, for everything from jokes to flagging injustices that don’t get enough attention from mainstream news coverage.

The children I met at Plumcroft could be said to represent a generation of natural problem solvers, with the internet as an intuitive tool. Instead of focusing on the physiological changes to the brain, particularly in the young, we should think about why some changes panic us, and why others don’t. And as we question smartphones and our future brains, perhaps the more important focus is not our ever-changing brains but the devices themselves.

“I would love to see users and designers that are thinking more about the consequences of technologies becoming normal,” says Rosenberger. “I think the future of the human brain might be more of the same, but there are sure to be problems that we don’t know — or problems in our lives we don’t know because we’ve let technology become normal in our lives without thinking about it.” Gilchrist also questions the idea that blame rests solely with the users of technology. Like Rosenberger, he lays responsibility at the feet of the people who design, code, and build these products.

There are some figures in Silicon Valley concerned about the ethical issues raised by the devices the industry produces. Tristan Harris, a former “product philosopher” at Google, has made a name for himself as the founder of the advocacy group Time Well Spent, which demands “technology that serves us, not advertising.” “The answer isn’t to abandon technology,” its manifesto reads. “The answer is to change the technology industry to put our best interests first.”

Time Well Spent prominently features a poll of 200,000 iOS users that ranks how “happy” or “unhappy” different phone apps make them feel. Two of the top ten in the unhappy column are variants of Candy Crush, a game notorious for its addictive design. Social media apps like Reddit and Facebook also rank high on the unhappy list, as do dating apps.

As Harris puts it: “[T]he tech industry uses design techniques to keep people hooked to the screen for as long and as frequently as possible. Not because they’re evil but because of this arms race for attention … Technology steers what 2 billion people are thinking and believing every day. It’s possibly the largest source of influence over 2 billion people’s thoughts that has ever been created. Religions and governments don’t have that much influence over people’s daily thoughts.”

We know that we’re developing stronger and stronger bonds with our smartphones — they’re becoming a part of us, changing how we think, making us rely on their connectivity and in turn altering how we interpret and communicate with the world. To go back to Socrates, this was true of the written word as well: external objects could remember for us.

But a smartphone isn’t a piece of paper. It isn’t even a television. The devices themselves, their operating systems, and the apps that those operating systems run — they’re designed to turn attention into profit. Smartphones are more like malls than public squares. We’re putting our trust into devices which have an incentive to mislead and deceive us. When we analyze how the children of today are interacting with smartphones, a society shaped by privatized thoughts is a reason to worry.

Harris launched a new campaign this year to push designers to take responsibility for their decisions, asking questions like: “Does your product respect people’s schedules and boundaries?” and “Does your product eliminate detours and distractions?” He’s also pushing for ordinary users to become more aware of their device usage.

But regulating the time you spend on your smartphone, or the apps you use on it, doesn’t necessarily improve the quality of what you do there. The kinds of measures suggested by someone like Anne Longfield, by contrast, are blunter and cruder, and “just don’t go online” isn’t much help to someone who’s housebound and depends on their smartphone for social contact, for example. People from different social, economic, or ethnic groups have different needs, and those backgrounds also shape some of the negative effects we associate with smartphone usage.

Our conversations about smartphones and other devices need to move past whether the technology is a welcome or unwelcome force, a thing for good or bad. Our brains are adaptable, and they are adapting to this new environment; we can’t stop that process. Our challenge is to figure out how to manage our relationships with smartphones, now and in the years to come.
