San Francisco Is Smarter Than You Are

The city is a big brain that can solve big problems.

By Jim Davies

Its achievements are undeniable. Having hosted what some historians call the greatest creation of wealth in human history,1 the San Francisco Bay Area had the fastest growth rate in the United States in 2012,2 the highest per-capita gross domestic product,3 one of the highest average IQs,4 and has been called one of the country’s greenest cities.5 If cities were people, then San Francisco would certainly be called a genius. But are we willing to extend that term to a city, or should we insist that genius is contained within the confines of the human head?

To understand this question, let’s start inside the head. For the most part, there’s no single process in charge. Most parts of our brain work free from any conscious control, and intelligence is an emergent property of neuron behavior: A brain is intelligent, even though the individual neurons that make it up are not. At a higher level, human minds have different functions that are sometimes in competition with each other. One part of the mind might desire cupcakes, but another part of the mind knows that eating them might make us grumpy. One part of our mind knows we’re looking at an optical illusion, but another is still fooled by it. The evolutionarily newer parts of our brain know it’s “just a movie,” but we get scared nonetheless.

These conflicts can affect even our highest faculties. According to neuroscientist Joshua Greene, moral judgments are made according to two separate processes in the brain, what he calls “personal” and “impersonal.” Suppose a train is about to kill five people on a track, and you are asked if it is morally justified to pull a switch that will divert it to a different track, where it would kill only one person. Most people say that pulling the switch in this “impersonal” version of the dilemma is morally justified.

Suppose instead that, rather than pulling a switch, you were required to push a heavy man onto the tracks, killing him, in order to stop the train from killing five other people. In contrast with the first version of the problem, people are more likely to find this action immoral.6

The second version of the problem relies on reasoning by evolutionarily older parts of our brains. Our old brains don’t want us hurting people, and in our ancestral environment, hurting someone usually meant putting your hands on them. We didn’t evolve with remote-controlled killer drones. A different, evolutionarily newer part of the brain evaluates the first version of the problem. It tends to think in a utilitarian way: What is for the greater good?

Brain imaging studies in Greene’s lab support this dichotomy. When participants engaged in “personal” moral problems, the parts of the brain implicated in social cognition and emotion were more active (the posterior cingulate/precuneus, the medial prefrontal cortex, and the inferior parietal lobe). For impersonal moral problems, the areas involved with abstract reasoning were more active (the right dorsolateral prefrontal cortex and the bilateral inferior parietal lobe).

The upshot is that the brain argues with itself like a committee. Cognition is “distributed” throughout the brain, with some processes even working at cross-purposes with others. What if some of those distributed elements, though, are not in the brain itself?

Think about how many memory aids we use: We jot down phone numbers, and keep count on our fingers. Cognitive scientists Wayne Gray and Wai-Tat Fu showed that using these aids is not very different from remembering facts and figures ourselves. They gave participants a task that required access to a lot of information while varying the difficulty of retrieving that information from a computer screen.7 They found that the participants decided between using the screen and their own memories in a way that minimized the effort involved, without privileging one source over the other. This suggested that externalized knowledge is accessed like any other memory, and that we treat “memories” on a computer screen just like memories in our head. Something to keep in mind the next time you wonder whether your phone is making you smarter or stupider.
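The decision rule Gray and Fu observed can be sketched in a few lines. This is a minimal illustration, not their actual model: the function name, the cost values, and the tie-breaking toward internal recall are all assumptions made for the example. The point is simply that the agent compares expected effort and picks the cheaper source, with no built-in preference for the head over the screen.

```python
# A minimal sketch (not Gray & Fu's published model) of effort-based
# choice between memory sources: pick whichever source has the lower
# expected time cost, internal or external.

def choose_source(recall_cost, lookup_cost):
    """Return the cheaper memory source; ties arbitrarily favor recall."""
    return "internal" if recall_cost <= lookup_cost else "external"

# When the screen is quick to read, the external source wins...
print(choose_source(recall_cost=1.2, lookup_cost=0.4))  # external
# ...but if the interface adds delay, internal memory becomes cheaper.
print(choose_source(recall_cost=1.2, lookup_cost=2.5))  # internal
```

Slowing down the interface, on this view, should shift people toward memorizing, which is roughly what Gray and Fu found when they made on-screen information harder to retrieve.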

Let’s broaden our definition of cognition even further. For two years I was involved in anthropological research that focused on a biomedical engineering lab: basically, scientists watching scientists.8 The scientists we watched performed a visualization task that required them to re-represent data originally taken by their instruments. This was strikingly similar to how visual mental imagery is processed in the brain, which also relies on re-representation: When you imagine a jar of peanut butter, you generate an “image” of the jar in your visual buffer. Once there, it can be re-perceived by the same neural systems that work under normal, real-time perception. The lab, at its core, acted like a large, distributed visual brain circuit.

A lab is still a far cry from a city. But in laying the groundwork for understanding distributed cognition in the 1990s, the anthropologist Edwin Hutchins studied something much bigger: huge seafaring ships, some large enough to carry 20 helicopters and 1,800 people. Remarkably, he found that there was no single person on board these ships who could figure out where the ship was located. Instead, it required several people, charts, papers, clocks, and a variety of physical instruments with unlikely names like alidade and hoey. His 1995 book Cognition in the Wild introduced the startling idea that the cognitive task of navigation was socially distributed. Just like the engineer in the lab or you and your smartphone, thinking and perceiving were activities spread over many people and things.

In fact, this may be the new norm. While some of us can name the inventor of the light bulb, does anybody know who invented the iPhone? Contemporary discoveries and inventions are increasingly accomplished by larger groups of people. Research done by teams also has more impact than work done alone, as shown by a study of nearly 20 million scientific papers and 2 million patents—and this effect has been increasing over time.9

Which brings us back to San Francisco. If memories and functions can flow seamlessly across devices, people, and artifacts, then why can’t we consider an entire city to be a kind of genius? Like the brain, it is using stored information and solving problems. Chief among these is how people can stay alive, and even flourish, under high geographic density. This is a considerable challenge, and one that the city itself solves—in fact, the larger the city, the better it seems to solve its own problems.

As a city grows in population, there is more efficient use of infrastructure, higher productivity, and an increase in cultural expression.10 There are per capita increases in the numbers of patents and educational and research institutions. These quantities scale superlinearly, following a power law with an exponent greater than one: Outputs grow faster than linearly with population. Perhaps increasing the number of technologies, people, and level of communication in a city benefits intelligence in the same way that a larger number of neurons makes possible the great intelligence of human beings.

This raises an exciting possibility: If people are to cities as neurons are to brains, and cities (unlike brains) do not have any known limit to their size, then gigantic cities of the future might produce innovations on a scale that wouldn’t be possible for the cities of today. Faced with pollution, disease, and scarcity, should we be looking to creative environments rather than to individual innovators? Where will we turn to find solutions to the pressing problems of the 21st century?

Let’s make sure San Francisco is working on them.

Jim Davies is an associate professor at the Institute of Cognitive Science at Carleton University in Ottawa, where he is director of the Science of Imagination Laboratory.

References

1. Rao, A. & Scaruffi, P. A History of Silicon Valley: The Greatest Creation of Wealth in the History of the Planet. Omniware, Palo Alto, CA (2011).
