
In April 2000, Wired published a controversial article entitled “Why the Future Doesn’t Need Us” by Bill Joy (2000), co-founder and chief scientist at Sun Microsystems. In this article, Joy called for a moratorium on research in three technological fields—artificial intelligence, nanotechnology, and genetic engineering. He noted that, while we were poised to make rapid technological advances in each of these three areas, our understanding of the ethical questions these technologies would inevitably raise was lagging far behind. Fearing that a convergence of these technologies could be deadly, Joy writes, “We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don’t believe so, but we aren’t trying yet, and the last chance to assert control—the fail-safe point—is rapidly approaching.”

The intervening years since Joy’s warning have indeed brought significant advances in each of these technologies—Deep Learning, nanobots, CRISPR-Cas9, to name just a few. While a moratorium on their development was never in the cards, Joy was right about one thing. These technologies have huge implications for how human life will unfold, indeed, for what it might mean to be human in the coming decades. Each holds great promise—for new medical cures, for new materials, and for new insights into our world. They will bring great wealth to some and could ease the human condition for many.

However, as Joy warned, each of these technologies also brings the possibility of great peril. Science fiction writers have explored the worst-case scenarios. But we need not go to extremes to find reasons for concern. Artificial intelligence may not surpass human intelligence in the foreseeable future, but it is likely to soon displace many workers from their jobs. Nanotechnology may not end with the whole world converted to “grey goo,” as engineer and futurist Eric Drexler once suggested, but we do not know what long-term effects nanoparticles, and other technological innovations, could have on the environment. Genetic manipulation may not end in biological warfare, but it is likely to exacerbate the divide between those who can afford it and those who cannot. For good or ill, these technologies will change the way we work, live, think, and love. Thus, it makes sense to approach them from a religious perspective. How do these technologies change our understanding of ourselves, our place in the world, our relationships to one another, the way we face death, or our relationship to God?

These are not new questions. Since the first humans fashioned weapons and clothing and controlled fire, humans have been using technology to master our environment. French philosopher Jacques Ellul describes the purpose of technology as “to defend man” (Ellul 1964, p. 405). Through technology we seek shelter from the elements and from predation, cures for sickness, and ways to make our lives safer, longer, and more comfortable. But our technologies go beyond a defensive role. Early humans also mixed paint and fashioned brushes in order to express their awe of the natural world. Technology provides us with means for communication and creation. Through the three new technologies Joy mentions, we seek not only to make our lives safer and easier but also to create new intelligences or to use genes or atoms as the building blocks of a species or material that has never existed before. In so doing, we risk making fundamental changes both to the world around us and to our very nature as human beings. Are we, therefore, “playing God”? Is this a proper role for us? If it is, how do we exercise such tremendous power wisely, with humility and compassion?

Each of the authors in this volume grapples with one or more of these questions. Ted Peters examines this charge of “playing God” through the lens of CRISPR-Cas9, a technique that allows us to easily edit the genomic sequence, thus opening the door not only to cures for a variety of genetically based illnesses but also to the making of new creatures and the enhancing of our own genetic code. Who should regulate this technology? Should religious bodies have a say?

Brian Green points out that through much of history the Catholic (and, thus, Christian) church has paid little attention to technology. This is not to say that the development of technology, and its concomitant ethical issues, has gone totally without notice in the Judeo-Christian tradition. Genesis 3–9 provides a digest of stories that examine both the good and the bad effects of several early technological developments—clothing, agriculture, the rise of cities, warfare, transportation. The cautionary tales of Cain and Abel, the Tower of Babel, and the Flood were designed to show the Israelites that when left to their own technological devices, without a covenant with God, things were likely to go from bad to worse. In the face of what could be ecocidal climate change, Pope Francis echoes the Genesis lesson in his encyclical Laudato Si’. Green examines this renewed interest in technology on the part of the Catholic church, while Whitney Bauman expands on one of the pope’s main points, that it is necessary for us to see ourselves as a bit less exceptional, to examine our human nature in the context of our embeddedness within our environment.

That we are and should remain a part of nature’s biological continuum is not a universally accepted stance. Devotees of transhumanism, cryogenics, and artificial intelligence hope to transcend the limitations of our finite and fallible human bodies, either by uploading our minds to computers or by making a new sort of progeny in intelligent machines. Jeffrey Pugh asks whether such machines would truly be our “mind children” or whether something fundamental to our human nature would be lost in a silicon platform. Uploading the mind assumes that what is essential in our self is merely a pattern. Levi Checketts highlights a selection of writers who disagree and suggests that the variety of ways they conceive what is essential in our nature could provide fertile ground for rethinking our religious anthropologies. Brent Waters and Calvin Mercer note that the ultimate goal for many in the transhumanist movement is to fight our ultimate foe—death. Waters examines our desire to control the two endpoints of our lives, birth and death, while Mercer asks whether the movement to cryogenically preserve our bodies in hopes of either a bodily or mental resurrection in some more technologically advanced future is a perversion of the Christian hope in the Resurrection.

The dreams of transhumanism may look like science fiction and be easily relegated to a distant future. But the new technologies are changing our lives here and now. Computers have become ubiquitous. We spend increasing amounts of time staring at screens—shopping, reading, texting, and watching pornography or cat videos. In the meantime, the owners of the sites we visit are scooping up vast quantities of information about us, our habits, our friends and associates, our tastes, and our day-to-day activities. Michael Fuller raises several ethical questions created by the advent of techniques that allow computers to mine this data for relevant information and patterns, techniques we lump together under the term “big data,” and looks to religion for ways to address these issues. Sara Lumbreras highlights a different breeding ground for new ethical issues in the increasing autonomy of our machines. Self-driving cars, drones, robotic caregivers, and autonomous weapons are only a few of the areas in which machines will be making complex decisions without human oversight. This has resulted in the development of the fledgling discipline of machine ethics. Should our machines make strictly utilitarian choices, or do we wish them to have a larger value set that mimics our own? What role should religion play in guiding our programming of these machines?

Do our technologies threaten religion itself? We used to believe in the power of God. Have we replaced that belief with a belief in the power of our own technologies? Several of our authors explore this question—Peters in terms of genetic manipulation, Green in reference to the environment, and Ionut Untea in reference to what produces the experience of awe in our lives today.

Bill Joy ends his warning regarding the potential of our new technologies by observing, “Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided.” Indeed. Theologian Reinhold Niebuhr reminds us that we humans tend to overreach, prompted by “the desire to find a way of completing human destiny which would keep man’s end under his control and power” (Niebuhr 1996, p. 321). We look to our technologies to give us precisely this control and power. But Niebuhr also noted our capacity for faith that “completes our ignorance without pretending to possess its certainties and knowledge” coupled with our capacity for a humility and contrition that “mitigates our pride without destroying hope” (Niebuhr 1996, p. 321).

Our technologies will keep moving forward. Through them we will inevitably change our world, ourselves, and our understanding of what we are and what we wish to become. This volume advances an ongoing dialogue between religion and technology in the hope that in bringing the wisdom and experience of our forebears together with the aspirations of science and engineering we can find the wisdom to steer those changes well.