
Why does everyone assume that the AI revolution will either lead to a fiery apocalypse or a glorious utopia, and not something in between? Of course, part of this is down to the fact that you get more attention by saying “The end is nigh!” or “Utopia is coming!”

But part of it is down to how humans think about change, especially unprecedented change. Millenarianism doesn’t have anything to do with being a “millennial,” being born in the 90s and remembering Buffy the Vampire Slayer. It is a way of thinking about the future that involves a deeply ingrained sense of destiny. A definition might be: “Millenarianism is the expectation that the world as it is will be destroyed and replaced with a perfect world, that a redeemer will come to cast down the evil and raise up the righteous.”

Millenarian beliefs, then, intimately link together the ideas of destruction and creation. They involve the idea of a huge, apocalyptic, seismic shift that will destroy the fabric of the old world and create something entirely new. Similar belief systems exist in many of the world’s major religions, and also the unspoken religion of some atheists and agnostics, which is a belief in technology.

Look at some futurist beliefs around the technological Singularity. In Ray Kurzweil’s vision, the Singularity is the establishment of paradise. Everyone is rendered immortal by biotechnology that can cure our ills; our brains can be uploaded to the cloud; inequality and suffering wash away under the wave of these technologies. The “destruction of the world” is replaced by a Silicon Valley buzzword favorite: disruption. And, as with many millenarian beliefs, your mileage varies on whether this destruction paves the way for a new utopia—or simply ends the world.

There are good reasons to be skeptical of this way of thinking, and to interrogate it. The most compelling is probably that millenarianism seems to be a default mode of how humans think about change; just look at how many variants of this belief have cropped up all over the world.

These beliefs are present in aspects of Christian theology, although they only really became mainstream in their modern form in the 19th and 20th centuries. Consider the Tribulation—many years of hardship and suffering—before the Rapture, when the righteous will be raised up and the evil punished. After this destruction, the world will be made anew, or humans will ascend to paradise.

Despite being dogmatically atheist, Marxism has many of the same beliefs. It is all about a deterministic view of history that builds to a crescendo. In the same way as Rapture-believers look for signs that prophecies are beginning to be fulfilled, so Marxists look for evidence that we’re in the late stages of capitalism. They believe that, inevitably, society will degrade and degenerate to a breaking point—just as some millenarian Christians do.

In Marxism, this is when the exploitation of the working class by the rich becomes unsustainable, and the working class bands together and overthrows the oppressors. The “tribulation” is replaced by a “revolution.” Sometimes revolutionary figures, like Lenin, or Marx himself, are heralded as messiahs who accelerate the onset of the Millennium; and their rhetoric involves utterly smashing the old system such that a new world can be built. Of course, there is judgment, when the righteous workers take what’s theirs and the evil bourgeoisie are destroyed.

Even Norse mythology has an element of this, as James Hughes points out in his essay in Global Catastrophic Risks, the book edited by Nick Bostrom and Milan Ćirković. Ragnarok involves men and gods being defeated in a final, apocalyptic battle—but because that was a little bleak, the myth adds that a new earth will arise where the survivors will live in harmony.

Judgment day is a cultural trope, too. Take the ancient Egyptians and their beliefs about the afterlife: the lord of the underworld, Osiris, weighs the mortal’s heart against a feather. “Should the heart of the deceased prove to be heavy with wrongdoing, it would be eaten by a demon, and the hope of an afterlife vanished.”

Perhaps in the Singularity, something similar goes on. As our technology and hence our power improve, a final reckoning approaches: our hearts, as humans, will be weighed against a feather. If they prove too heavy with wrongdoing—with misguided stupidity, with arrogance and hubris, with evil—then we will fail the test, and we will destroy ourselves. But if we pass, and emerge from the Singularity and all of its threats and promises unscathed, then we will have paradise. And, like the other belief systems, there’s no room for non-believers; all of society is going to be radically altered, whether you want it to be or not, whether it benefits you or leaves you behind. A technological rapture.

It almost seems like every major development provokes this response. Nuclear weapons did, too. Either this would prove the final straw and we’d destroy ourselves, or nuclear energy could be harnessed to build a better world. People talked at the dawn of the nuclear age about electricity that was “too cheap to meter.” The scientists who worked on the bomb often thought that with such destructive power in human hands, we’d be forced to cooperate and work together as a species.

When we see the same response over and over again to different circumstances, cropping up in different areas, whether it’s science, religion, or politics, we need to consider human biases. We like millenarian beliefs; and so when the idea of artificial intelligence outstripping human intelligence emerges, these beliefs spring up around it.

We don’t love facts. We don’t love information. We aren’t as rational as we’d like to think. We are creatures of narrative. Physicists observe the world and weave their observations into narrative theories, stories about little billiard balls whizzing around and hitting each other, or space and time that bend and curve and expand. Historians try to make sense of an endless stream of events. We rely on stories: stories that make sense of the past, justify the present, and prepare us for the future.

And as stories go, the millenarian narrative is a brilliant and compelling one. It can lead you towards social change, as in the case of the Communists, or the Buddhist uprisings in China. It can justify your present-day suffering, if you’re in the tribulation. It gives you hope that your life is important and has meaning. It gives you a sense that things are evolving in a specific direction, according to rules—not just randomly sprawling outwards in a chaotic way. It promises that the righteous will be saved and the wrongdoers will be punished, even if there is suffering along the way. And, ultimately, a lot of the time, the millenarian narrative promises paradise.

We need to be wary of the millenarian narrative when we’re considering technological developments and the Singularity and existential risks in general. Maybe this time is different, but we’ve cried wolf many times before. There is a more likely, less appealing story. Something along the lines of: there are many possibilities, none of them are inevitable, and lots of the outcomes are less extreme than you might think—or they might take far longer than you think to arrive. On the surface, it’s not satisfying. It’s so much easier to think of things as either signaling the end of the world or the dawn of a utopia—or possibly both at once. It’s a narrative we can get behind, a good story, and maybe, a nice dream.

But dig a little below the surface, and you’ll find that the millenarian beliefs aren’t always the most promising ones, because they remove human agency from the equation. If you think that, say, the malicious use of algorithms, or the control of superintelligent AI, are serious and urgent problems that are worth solving, you can’t be wedded to a belief system that insists utopia or dystopia are inevitable. You have to believe in the shades of grey—and in your own ability to influence where we might end up. As we move into an uncertain technological future, we need to be aware of the power—and the limitations—of dreams.

Image Credit: Photobank gallery / Shutterstock.com


In November 2017, a gunman entered a church in Sutherland Springs in Texas, where he killed 26 people and wounded 20 others. He escaped in his car, with police and residents in hot pursuit, before losing control of the vehicle and flipping it into a ditch. When the police got to the car, he was dead. The episode is horrifying enough without its unsettling epilogue. In the course of their investigations, the FBI reportedly pressed the gunman’s finger to the fingerprint-recognition feature on his iPhone in an attempt to unlock it. Regardless of who’s affected, it’s disquieting to think of the police using a corpse to break into someone’s digital afterlife.

Most democratic constitutions shield us from unwanted intrusions into our brains and bodies. They also enshrine our entitlement to freedom of thought and mental privacy. That’s why neurochemical drugs that interfere with cognitive functioning can’t be administered against a person’s will unless there’s a clear medical justification. Similarly, according to scholarly opinion, law-enforcement officials can’t compel someone to take a lie-detector test, because that would be an invasion of privacy and a violation of the right to remain silent.

But in the present era of ubiquitous technology, philosophers are beginning to ask whether biological anatomy really captures the entirety of who we are. Given the role they play in our lives, do our devices deserve the same protections as our brains and bodies?

After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself—and all this dating back years.

In 2014, the United States Supreme Court used this observation to justify the decision that police must obtain a warrant before rummaging through our smartphones. These devices “are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy,” as Chief Justice John Roberts observed in his written opinion.

The Chief Justice probably wasn’t making a metaphysical point—but the philosophers Andy Clark and David Chalmers were when they argued in “The Extended Mind” (1998) that technology is actually part of us. According to traditional cognitive science, “thinking” is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.

If accepted, the extended mind thesis threatens widespread cultural assumptions about the inviolate nature of thought, which sits at the heart of most legal and social norms. As the US Supreme Court declared in 1942: “freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind.” This view has its origins in thinkers such as John Locke and René Descartes, who argued that the human soul is locked in a physical body, but that our thoughts exist in an immaterial world, inaccessible to other people. One’s inner life thus needs protecting only when it is externalized, such as through speech. Many researchers in cognitive science still cling to this Cartesian conception—only, now, the private realm of thought coincides with activity in the brain.

But today’s legal institutions are straining against this narrow concept of the mind. They are trying to come to grips with how technology is changing what it means to be human, and to devise new normative boundaries to cope with this reality. Justice Roberts might not have known about the idea of the extended mind, but it supports his wry observation that smartphones have become part of our body. If our minds now encompass our phones, we are essentially cyborgs: part-biology, part-technology. Given how our smartphones have taken over what were once functions of our brains—remembering dates, phone numbers, addresses—perhaps the data they contain should be treated on a par with the information we hold in our heads. So if the law aims to protect mental privacy, its boundaries would need to be pushed outwards to give our cyborg anatomy the same protections as our brains.

This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of “extended” assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterizing the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.

The extended mind thesis also challenges the law’s role in protecting both the content and the means of thought—that is, shielding what and how we think from undue influence. Regulation bars non-consensual interference in our neurochemistry (for example, through drugs), because that meddles with the contents of our mind. But if cognition encompasses devices, then arguably they should be subject to the same prohibitions. Perhaps some of the techniques that advertisers use to hijack our attention online, to nudge our decision-making or manipulate search results, should count as intrusions on our cognitive process. Similarly, in areas where the law protects the means of thought, it might need to guarantee access to tools such as smartphones—in the same way that freedom of expression protects people’s right not only to write or speak, but also to use computers and disseminate speech over the internet.

The courts are still some way from arriving at such decisions. Besides the headline-making cases of mass shooters, there are thousands of instances each year in which police authorities try to get access to encrypted devices. Although the Fifth Amendment to the US Constitution protects individuals’ right to remain silent (and therefore not give up a passcode), judges in several states have ruled that police can forcibly use fingerprints to unlock a user’s phone. (With the new facial-recognition feature on the iPhone X, police might only need to get an unwitting user to look at her phone.) These decisions reflect the traditional concept that the rights and freedoms of an individual end at the skin.

But the concept of personal rights and freedoms that guides our legal institutions is outdated. It is built on a model of a free individual who enjoys an untouchable inner life. Now, though, our thoughts can be invaded before they have even been developed—and in a way, perhaps this is nothing new. The Nobel Prize-winning physicist Richard Feynman used to say that he thought with his notebook. Without a pen and pencil, a great deal of complex reflection and analysis would never have been possible. If the extended mind view is right, then even simple technologies such as these would merit recognition and protection as a part of the essential toolkit of the mind.

This article was originally published at Aeon and has been republished under Creative Commons.

The longevity field is bustling but still fragmented, and the “silver tsunami” is coming.

That is the takeaway of The Science of Longevity, the behemoth first volume of a four-part series offering a bird’s-eye view of the longevity industry in 2017. The report, a joint production of the Biogerontology Research Foundation, Deep Knowledge Life Science, Aging Analytics Agency, and Longevity.International, synthesizes the growing array of academic and industry ventures related to aging, healthspan, and everything in between.

This is huge, not only in scale but also in ambition. The report, totally worth a read here, will be followed by four additional volumes in 2018, covering topics ranging from the business side of longevity ventures to financial systems to potential tensions between life extension and religion.

And that’s just the first step. The team hopes to publish updated versions of the report annually, giving scientists, investors, and regulatory agencies an easy way to keep their finger on the longevity pulse.

“In 2018, ‘aging’ remains an unnamed adversary in an undeclared war. For all intents and purposes it is mere abstraction in the eyes of regulatory authorities worldwide,” the authors write.

That needs to change.

People often arrive at the field of aging from disparate areas with wildly diverse opinions and strengths. The report compiles these individual efforts at cracking aging into a systematic resource—a “periodic table” for longevity that clearly lays out emerging trends and promising interventions.

The ultimate goal? A global framework serving as a road map to guide the burgeoning industry. With such a framework in hand, academics and industry alike would finally be poised to push for the kind of large-scale investment and regulatory change needed to tackle aging with a unified front.

Infographic depicting many of the key research hubs and non-profits within the field of geroscience.
Image Credit: Longevity.International
The Aging Globe
The global population is rapidly aging. And our medical and social systems aren’t ready to handle this oncoming “silver tsunami.”

What’s more, because disease risk rises exponentially with age, medical care for the elderly becomes a game of whack-a-mole: curing any individual disease such as cancer only increases healthy lifespan by two to three years before another one hits.
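The two-to-three-year figure can be sanity-checked with a toy mortality model. Here is a minimal sketch, assuming Gompertz mortality (a death hazard that rises exponentially with age) and treating “curing cancer” as a uniform 15 percent cut in the overall hazard at every age. Both parameters are hypothetical illustrations, not figures from the report:

```python
import math

def life_expectancy(a=1e-4, b=0.085, hazard_scale=1.0, dt=0.05, max_age=120.0):
    """Life expectancy at birth under a Gompertz hazard h(t) = scale*a*exp(b*t).

    Survival is S(t) = exp(-(scale*a/b) * (exp(b*t) - 1));
    life expectancy is the integral of S(t), computed numerically here.
    """
    A = hazard_scale * a / b
    e0, t = 0.0, 0.0
    while t < max_age:
        e0 += math.exp(-A * (math.exp(b * t) - 1.0)) * dt
        t += dt
    return e0

baseline = life_expectancy()                    # all causes of death present
no_cancer = life_expectancy(hazard_scale=0.85)  # hazard cut by 15% at every age
gain = no_cancer - baseline
print(f"gain: {gain:.1f} years")                # roughly two years
```

With these made-up parameters the gain is about two years, consistent with the whack-a-mole intuition: because the remaining hazard still climbs exponentially, removing any one cause of death buys only a short reprieve before the others catch up.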

That’s why in recent years there’s been increasing support for turning the focus to the root of the problem: aging. Rather than tackling individual diseases, geroscience aims to add healthy years to our lifespan—extending “healthspan,” so to speak.

Despite this relative consensus, the field still faces a roadblock. The US FDA does not yet recognize aging as a bona fide disease. Without such a designation, scientists cannot run clinical trials that target aging itself (that said, many have used alternate measures, such as age-related biomarkers or Alzheimer’s symptoms, as a proxy).

Luckily, the FDA’s stance may be set to change. The promising anti-aging drug metformin, for example, is already in clinical trials examining its effect on a variety of age-related symptoms and diseases. This report, and others to follow, may help push progress along.

“It is critical for investors, policymakers, scientists, NGOs, and influential entities to prioritize the amelioration of the geriatric world scenario and recognize aging as a critical matter of global economic security,” the authors say.

Biomedical Gerontology
The causes of aging are complex, stubborn, and far from fully understood.

But the report lays out two main streams of intervention with already promising results.

The first is to understand the root causes of aging and stop them before damage accumulates. It’s like meddling with cogs and other inner workings of a clock to slow it down, the authors say.

The report lays out several treatments to keep an eye on.

Geroprotective drugs are a big one. Often repurposed from drugs already on the market, these traditional small-molecule drugs target a wide variety of metabolic pathways that play a role in aging. Think antioxidants, anti-inflammatories, and drugs that mimic caloric restriction, a proven way to extend healthspan in animal models.

More exciting are the emerging technologies. One is nanotechnology. Nanoparticles of carbon, “bucky-balls,” for example, have already been shown to fight viral infections and dangerous ion particles, as well as stimulate the immune system and extend lifespan in mice (though others question the validity of the results).

Blood is another promising, if surprising, fountain of youth: recent studies found that molecules in the blood of the young rejuvenate the heart, brain, and muscles of aged rodents, though many of these findings have yet to be replicated.

Rejuvenation Biotechnology
The second approach is repair and maintenance.

Rather than meddling with the inner clockwork, here we turn back the hands of the clock. The main example? Stem cell therapy.

This type of approach would especially benefit the brain, which harbors small, scattered numbers of stem cells that deplete with age. For neurodegenerative diseases like Alzheimer’s, in which neurons progressively die off, stem cell therapy could in theory replace those lost cells and mend those broken circuits.

Once a blue-sky idea, stem cell therapy was hugely propelled toward reality by the discovery of induced pluripotent stem cells (iPSCs), which let scientists turn skin and other mature cells back into a stem-like state. But to date, stem cells haven’t been widely adopted in clinics.

It’s “a toolkit of highly innovative, highly invasive technologies with clinical trials still a great many years off,” the authors say.

But there is a silver lining. The boom in 3D tissue printing offers an alternative approach to stem cells in replacing aging organs. Recent investment from the Methuselah Foundation and other institutions suggests interest remains high, despite the technology still being a long way from mainstream use.

A Disruptive Future
“We are finally beginning to see an industry emerge from mankind’s attempts to make sense of the biological chaos,” the authors conclude.

Looking through the trends, they identified several technologies rapidly gaining steam.

One is artificial intelligence, which is already used to bolster drug discovery. Machine learning may also help identify new longevity genes or bring personalized medicine to the clinic based on a patient’s records or biomarkers.

Another is senolytics, a class of drugs that kill off “zombie cells.” Over 10 prospective candidates are already in the pipeline, with some expected to enter the market in less than a decade, the authors say.

Finally, there’s the big gun—gene therapy. The treatment, unlike others mentioned, can directly target the root of any pathology. With a snip (or a swap), genetic tools can turn off damaging genes or switch on ones that promote a youthful profile. It is the most preventative technology at our disposal.

There have already been some success stories in animal models. Using gene therapy, rodents given a boost in telomerase activity, which lengthens the protective caps of DNA strands, live healthier for longer.

“Although it is the prospect farthest from widespread implementation, it may ultimately prove the most influential,” the authors say.

Ultimately, can we stop the silver tsunami before it strikes?

Perhaps not, the authors say. But we do have defenses: the technologies outlined in the report, though still immature, could one day stop the oncoming tidal wave in its tracks.

Now we just have to bring them out of the lab and into the real world. To push the transition along, the team launched Longevity.International, an online meeting ground that unites various stakeholders in the industry.

By providing scientists, entrepreneurs, investors, and policy-makers a platform for learning and discussion, the authors say, we may finally generate enough drive to implement our defenses against aging. The war has begun.

Read the report in full here, and watch out for others coming soon here. The second part of the report profiles 650 (!!!) longevity-focused research hubs, non-profits, scientists, conferences, and literature. It’s an enormously helpful resource—totally worth keeping it in your back pocket for future reference.

When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.
Sixty years later, these problems are not yet solved. The AI Index, from Stanford, is an attempt to measure how much progress has been made in artificial intelligence.
The index takes a unique approach, aggregating data across many domains. It contains Volume of Activity metrics, which measure things like venture capital investment, attendance at academic conferences, published papers, and so on. The results are what you might expect: tenfold increases in academic activity since 1996, an explosive growth in startups focused around AI, and corresponding venture capital investment. The issue with this metric is that it measures AI hype as much as AI progress. The two might be correlated, but then again, they may not be.
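Growth figures like “tenfold since 1996” imply that each activity series is tracked relative to a base year. A minimal sketch of that kind of normalization, using made-up numbers rather than the index’s actual data:

```python
def normalize_to_base_year(series):
    """Rescale a yearly series so the first (base) year equals 1.0."""
    base = series[0]
    return [value / base for value in series]

# Hypothetical counts of AI papers at a few checkpoints (e.g. 1996, 2003, 2010, 2017)
papers = [100, 250, 400, 1000]
growth = normalize_to_base_year(papers)
print(growth)  # [1.0, 2.5, 4.0, 10.0] — a "tenfold increase" over the base year
```

A single growth multiple like this conflates hype with substance in exactly the way the article warns about: publication and investment counts rise when a field is fashionable, whether or not it is actually advancing.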
The index also scrapes data from the popular coding website GitHub, which hosts more source code than any other site in the world. They can track the amount of AI-related software people are creating, as well as the interest levels in popular machine learning packages like TensorFlow and Keras. The index also keeps track of the sentiment of news articles that mention AI: surprisingly, given concerns about the apocalypse and an employment crisis, those considered “positive” outweigh the “negative” by three to one.
But again, this could all just be a measure of AI enthusiasm in general.
No one would dispute that we’re in an age of considerable AI hype, but the progress of AI is littered with booms and busts: growth spurts that alternate with AI winters. So the AI Index attempts to track the progress of algorithms against a series of tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, though systems that combine natural language processing and image recognition still can’t answer questions about images very well.) Speech recognition on phone calls is almost at parity with humans.
In other narrow fields, AIs are still catching up to humans. Machine translation might be good enough that you can usually get the gist of what’s being said, but it still scores poorly on the BLEU metric for translation accuracy. The AI Index even keeps track of how well programs can do on the SAT, so if you took the test, you can compare your score to an AI’s.
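BLEU itself is straightforward to sketch: it combines modified n-gram precisions with a brevity penalty. Here is a simplified single-reference, sentence-level version (the real metric is computed over a corpus and usually applies smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Each candidate n-gram is credited at most as often as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if total == 0 or overlap == 0:
            return 0.0  # no smoothing here: any zero precision zeroes the score
        precisions.append(overlap / total)
    # Penalize candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
print(bleu(ref, ref))                                       # 1.0 for a perfect match
print(round(bleu("the cat sat on a mat".split(), ref), 2))  # one wrong word costs a lot
```

Even good human translations disagree on word choice, so high BLEU scores are rare in practice; the metric correlates only loosely with perceived quality, which is part of why “scores poorly on BLEU” understates how usable machine translation already is.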
Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that’s simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can always debate the best method of assessing translation or natural language understanding. The Loebner Prize, a simplified question-and-answer Turing test, recently adopted Winograd schema-type questions, which rely on contextual understanding; AI has more difficulty with these.
Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard because of a lack of understanding of our own intelligence. Computers are superhuman at chess, and now even a more complex game like Go. The braver predictors who came up with timelines thought AlphaGo’s success was faster than expected, but does this necessarily mean we’re closer to general intelligence than they thought?
Here is where it’s harder to track progress.
We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper that shows AI can now predict skin cancer with more accuracy than dermatologists. We could even try to track one specific approach to general AI; for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.

“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”

Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks can be turned into algorithms and accomplished by software—rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps measuring the ability of an AI system to learn and adapt to the work routines of humans in office-based tasks could be possible.
The AI Index doesn’t attempt to offer a timeline for general intelligence, as this is still too nebulous and confused a concept.
Michael Wooldridge, head of Computer Science at the University of Oxford, notes, “The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress.” He is concerned about another AI winter, and about overhyped “charlatans and snake-oil salesmen” exaggerating the progress that has been made.
A key concern that all the experts bring up is the ethics of artificial intelligence.
Of course, you don’t need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why are Amazon, Google, and Facebook worth any money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.
Barbara Grosz of Harvard champions this view, saying, “It is important to take on the challenge of identifying success measures for AI systems by their impact on people’s lives.”
For those concerned about the AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility for adapting to AI trends should be measured, too; are we providing people with enough educational opportunities to retrain? How about teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffers from being US-centric.
We are running a race, but we don’t know how to get to the endpoint, or how far we have to go. We are judging by the scenery, and by how far we’ve run already. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI Index, as an annual collection of relevant information, is a good start.
Image Credit: Photobank gallery / Shutterstock.com

Advances in neural implants and genetic engineering suggest that in the not-too-distant future we may be able to boost human intelligence. If that’s true, could we—and should we—bring our animal cousins along for the ride?
Human brain augmentation made headlines last year after several tech firms announced ambitious efforts to build neural implant technology. Duke University neuroscientist Mikhail Lebedev told me in July it could be decades before these devices have applications beyond the strictly medical.
But he said the technology, as well as other pharmacological and genetic engineering approaches, will almost certainly allow us to boost our mental capacities at some point in the next few decades.
Whether this kind of cognitive enhancement is a good idea, and how we should regulate it, are matters of heated debate among philosophers, futurists, and bioethicists. For some, though, the prospect raises another question: could we do the same for animals?
There’s already tantalizing evidence of the idea’s feasibility. As detailed in BBC Future, a group from MIT found that mice genetically engineered to express the human FOXP2 gene, which is linked to learning and speech processing, picked up maze routes faster. Another group at Wake Forest University studying Alzheimer’s found that neural implants could boost rhesus monkeys’ scores on intelligence tests.
The concept of “animal uplift” is most famously depicted in the Planet of the Apes movie series, whose planet-conquering protagonists are likely to put most people off the idea. But proponents are less pessimistic about the outcomes.
Science fiction author David Brin popularized the concept in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.
Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans.
Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced.
The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.
In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no such single dimension with which to rank the intelligence of different species. Each species combines a bundle of cognitive capabilities, some well below our own and others superhuman. He uses the example of the squirrel, which can remember the precise locations of thousands of acorns for years.
Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.
There are also fundamental barriers that may make it difficult to achieve human-level cognitive capabilities in animals, no matter how advanced brain augmentation technology gets. In 2013 Swedish researchers selectively bred small fish called guppies for bigger brains. This made them smarter, but growing the energy-intensive organ meant the guppies developed smaller guts and produced fewer offspring to compensate.
This highlights the fact that uplifting animals may require more than just changes to their brains, possibly a complete rewiring of their physiology that could prove far more technically challenging than human brain augmentation.
Our intelligence is intimately tied to our evolutionary history—our brains are bigger than other animals’; opposable thumbs allow us to use tools; our vocal cords make complex communication possible. No matter how much you augment a cow’s brain, it still couldn’t use a screwdriver or talk to you in English because it simply doesn’t have the machinery.
Finally, from a purely selfish point of view, even if it does become possible to create a level playing field between us and other animals, it may not be a smart move for humanity. There’s no reason to assume animals would be any more benevolent than we are, having evolved in the same ‘survival of the fittest’ crucible that we have. And given our already endless capacity to divide ourselves along national, religious, or ethnic lines, conflict between species seems inevitable.
We’re already likely to face considerable competition from smart machines in the coming decades if you believe the hype around AI. So maybe adding a few more intelligent species to the mix isn’t the best idea.
Image Credit: Ron Meijer / Shutterstock.com