Science Questions & Answers
Louis Del Monte (http://www.louisdelmonte.com)

How Is the Universe Going to End?
http://www.louisdelmonte.com/how-is-the-universe-going-to-end/
Sun, 29 Mar 2015

At the beginning of the Twentieth Century, almost every scientist believed the universe was eternal. That is to say, the universe always was and always will be; it is static. In the context of an eternal universe, questions about a beginning or an ending are meaningless. By definition, an eternal universe has no beginning, and it will have no ending. This is what our grandparents were taught as schoolchildren. Overall, the eternal universe found acceptance in both science and religion. Science proclaimed that the universe simply existed, with no evidence to the contrary. Religious leaders proclaimed that God made the universe, which seems to imply the universe had a beginning. However, since science had no evidence to the contrary, science and religion did not butt heads over this. At the turn of the Twentieth Century, both appeared content with their assertions about the universe. Poetically, you might say all was well in heaven and on Earth.

A little over eighty years ago, our cosmic bubble of an eternal universe was shattered. In 1929, Edwin Hubble, using the 100-inch Hooker telescope at the Mount Wilson Observatory, discovered that distant galaxies are moving away from us. Indeed, he discovered that the farther away a galaxy is, the faster it appears to recede: a galaxy twice as far from us moves away at twice the speed. Hubble concluded that the universe was expanding in all directions. This was a profound discovery that caught the greatest scientific minds of the time, including Einstein, off guard. Prior to Hubble’s discovery, the prevalent theory held by the scientific community was that the universe was in a steady state, neither expanding nor contracting. Even though the evidence was mounting before Hubble conclusively proved the universe was expanding, most scientists held strongly to their paradigm of a steady-state universe.
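The proportionality Hubble found is today written as Hubble's law, v = H0 × d. A minimal sketch of the arithmetic, assuming a rounded modern value of roughly 70 km/s per megaparsec for the Hubble constant:

```python
# Hubble's law: a galaxy's recession velocity is proportional to its distance.
H0 = 70.0  # Hubble constant, km/s per megaparsec (approximate modern value)

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at distance_mpc megaparsecs."""
    return H0 * distance_mpc

# A galaxy twice as far away recedes twice as fast:
near = recession_velocity(100.0)  # 7,000 km/s
far = recession_velocity(200.0)   # 14,000 km/s
print(far / near)                 # 2.0
```

The doubling is the whole content of the law: velocity scales linearly with distance, which is exactly what an expanding universe looks like from any vantage point.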

Surprisingly, Hubble was not the first to discover that the universe was expanding. In 1912, Vesto Slipher measured the first Doppler shifts (changes in the wavelength of light caused by relative motion) of spiral galaxies and discovered that almost all of them were receding from Earth. Unfortunately, not much attention was paid to Slipher’s findings, and Slipher himself did not understand the implications of his discovery. In addition, telescopes in 1912 were of relatively poor quality, and it was not yet understood that the objects he was measuring were galaxies at all. In fact, the term used to describe a spiral galaxy in 1912 was “spiral nebula” (an indistinct bright patch).
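What Slipher measured can be expressed as a redshift: the fractional stretch of a light wave's wavelength. A minimal sketch of the calculation (the spectral-line values below are hypothetical, chosen only for illustration):

```python
# Doppler redshift: z = (observed wavelength - emitted wavelength) / emitted.
# For speeds well below light speed, recession velocity is roughly c * z.
C_KM_S = 299_792.458  # speed of light, km/s

def redshift(observed_nm, emitted_nm):
    """Fractional wavelength shift z; positive z means the source is receding."""
    return (observed_nm - emitted_nm) / emitted_nm

# Hypothetical example: a hydrogen line emitted at 656.3 nm arrives at 658.5 nm.
z = redshift(658.5, 656.3)
velocity = C_KM_S * z  # on the order of 1,000 km/s, so the galaxy is receding
```

A positive shift toward longer (redder) wavelengths means recession; Slipher found that almost every spiral nebula he measured was shifted red.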

Einstein’s equations of general relativity also predicted that the universe was expanding. However, Einstein was convinced that this prediction was wrong and modified the equations by adding the “cosmological constant.” With this newly added mathematical term, the equations of general relativity predicted a static universe. Later, though, as the evidence that the universe was expanding became incontrovertible, Einstein labeled his “cosmological constant” his greatest blunder. In fact, starting with Hubble’s discovery of an expanding universe in 1929, within thirty-five years most of the scientific community did a complete reversal, turning their backs on a static universe and embracing an expanding one.

As scientists began to think about an expanding universe, they reasoned that eventually gravity would enter the equation, halt the expansion, and even reverse it. In other words, up to 1998, mainstream science believed that the expansion of the universe would eventually be slowed by gravity, then halted, and that gravity would pull everything back together in what science termed a “Big Crunch.” However, when the expansion of the universe was measured in 1998 by teams led by Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess, a startling discovery was made. The expansion was not slowing down. It was accelerating. Gravity did not appear to be playing a prominent role. In fact, a new and unknown force, termed “dark energy,” seemed to be in charge. This new force, dark energy, remains a mystery.

You may wonder at this point what all this has to do with how the universe will end. Based on all known data, the accelerating expansion of the universe implies that eventually all other galaxies will move away from us to the point that they are beyond our cosmological horizon. We will no longer be able to see them. The Milky Way galaxy, the galaxy that is home to our planet Earth, will be completely alone. Eventually, all stars in the Milky Way, including our Sun, will exhaust their fuel and burn out. The Earth itself will be long gone by the time the galaxy grows completely dim; our Sun will burn out in approximately five billion years. How long will it take for the Milky Way to be reduced to cold remnants of rubble and dust? No one really knows. Most scientists agree it will take many billions of years, but no one knows how many. Some theories calculate the end of the universe, but they hold little sway in mainstream science. All we know is that the universe is 13.8 billion years old, which suggests change on a cosmological scale moves slowly. The end is likely many billions of years in the future, but there is little doubt the universe will end, and any remnant material, without stars to provide warmth, will be at or close to absolute zero. This may all sound like a grade B disaster movie. However, unlike many grade B disaster movies, this is real and does not have a happy ending.

Is Time Travel Possible?
http://www.louisdelmonte.com/is-time-travel-possible-2/
Fri, 20 Mar 2015

Few topics in science capture the imagination like time travel. Science fiction, like H. G. Wells’ classic novel The Time Machine, published in 1895, and science fact, like time dilation, continue to fuel interest in time travel. Let us start with the most important question: Is time travel possible?

Of course, time travel is possible. We are already doing it. At this point, I know my answer may come across as a bit flippant. However, it has a kernel of truth. We are traveling in time: we continually travel from the present to the future. This is what philosophers refer to as the arrow of time. In our everyday experience, it moves in one direction, from the present to the future. On a more serious note, though, what people really want to know is whether we can travel back in time, or jump ahead to a future date.

In theory, it is possible. Indeed, numerous solutions to Einstein’s special and general relativity equations predict that time travel is possible, and no law of physics prohibits it. We will begin by considering two methods science proposes for traveling in time.

Method 1: Time Travel to the Future - Faster-than-Light (FTL)

Time travel using faster-than-light or near-light-speed travel appears to offer methodologies grounded in science fact. Consider two examples:

1) Assume you build a spaceship capable of traveling near the speed of light. With such a spaceship, you literally can travel into the future. This may sound like science fiction, but it is widely accepted as scientific fact. Particle accelerators confirm it, and we discussed it in connection with time dilation and the twin paradox. All you need is the spaceship and an enormous amount of energy to accelerate it to near the speed of light. However, the energy is an enormous problem. From Einstein’s special theory of relativity, we know that as a mass accelerates toward the speed of light, its relativistic mass grows, approaching infinity at the speed of light itself. Thus, to accelerate a ship close to the speed of light, we need an energy source that approaches infinity. Perhaps we would have to learn how to harness the energy of a star, or routinely create matter-antimatter annihilations to generate energy. Today’s science is nowhere near that level of sophistication.
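The time dilation and the runaway energy requirement described above both follow from the same quantity, the Lorentz factor. A minimal sketch of the arithmetic (the 99.5% example speed is mine, chosen for illustration):

```python
import math

# Lorentz factor: gamma = 1 / sqrt(1 - (v/c)^2).
# A moving clock runs slow by a factor of gamma, and relativistic
# kinetic energy, (gamma - 1) * m * c^2, grows without bound as v -> c.

def lorentz_factor(v_over_c):
    """Gamma for a speed expressed as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

print(lorentz_factor(0.5))    # ~1.15: modest effect at half light speed
print(lorentz_factor(0.995))  # ~10: about ten Earth years per shipboard year
```

Note how slowly gamma grows until very close to light speed, and how steeply after: that steepness is exactly why the energy cost of the last fraction of a percent becomes astronomical.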

2) Assume you can move information (like a signal) faster than light. Theoretically, if we could send a signal from point A to point B faster than the speed of light, it would represent a form of time travel. However, a significant paradox arises. Here is an example:
An observer A in one inertial frame sends a faster-than-light signal to an observer B in another inertial frame moving relative to the first. When B receives the signal, B replies, also faster than the speed of light. Under the right relative motion, observer A receives the reply before sending the original signal.

In 1907, Albert Einstein described this paradox in a thought experiment to demonstrate that faster-than-light communication can violate causality (the effect occurs before the cause). In 1910, Einstein and Arnold Sommerfeld described a thought experiment using a faster-than-light telegraph to send a signal back in time. In 1910, no faster-than-light communication device existed, and none exists today. From quantum physics, however, it appears that certain quantum effects manifest instantaneously and, therefore, appear to outpace light in empty space. One example involves the quantum states of two “entangled” particles (particles that have physically interacted and later separated). In quantum physics, the quantum state is the set of mathematical variables that fully describes the physical aspects of a particle at the atomic level. When two particles interact with each other, they appear to form an invisible bond between them; they become “entangled.” If we take one of the particles and separate it from the other, they remain entangled (invisibly connected). When we measure the state of one of the entangled particles, the state of the other is instantaneously correlated with the result, no matter how far apart the particles are.

Significant experimental evidence confirms that these correlations appear over distances and timescales that would require a faster-than-light influence if anything were physically traveling between the particles. It should be said that mainstream quantum theory holds, via the no-communication theorem, that entanglement by itself cannot carry a usable message, because the individual measurement outcomes are random. Initially, scientists criticized the theory of particle entanglement. After its experimental verification, science recognized entanglement as a valid, fundamental feature of quantum mechanics. Today the focus of research has shifted to using its properties as a resource for communication and computation.

Method 2: Time Travel to the Past - Using Wormholes

Scientists have proposed using “wormholes” as a time machine. A wormhole is a theoretical entity in which space-time curvature connects two distant locations (or times). Although we do not have any concrete evidence that wormholes exist, we can infer their existence from Einstein’s general theory of relativity. However, we need more than a wormhole. We need a traversable wormhole. A traversable wormhole is exactly what the name implies. We can move through or send information through it.

If you would like to visualize what a wormhole does, imagine having a piece of paper whose two-dimensional surface represents four-dimensional space-time. Imagine folding the paper so that two points on the surface are connected. I understand that this is a highly simplified representation. In reality, we cannot visualize an actual wormhole. It might even exist in more than four dimensions.

How do we create a traversable wormhole? No one knows, but most scientists believe it would require enormous negative energy. A number of scientists believe the creation of negative energy is possible, based on the study of virtual particles and the Casimir effect.

Assuming we learn how to create a traversable wormhole, how would we use it to travel in time? The traversable wormhole theoretically connects two points in space-time, which implies we could use it to travel in time, as well as space. However, according to the theory of general relativity, it would not be possible to go back in time prior to the creation of the traversable wormhole. This is how physicists like Stephen Hawking explain why we do not see visitors from the future. The reason: the traversable wormhole does not exist yet.

Hard as it may be to believe, most of the scientific community acknowledges that time travel is theoretically possible. In fact, time dilation of subatomic particles provides experimental evidence that time travel to the future is possible, at least for subatomic particles accelerated close to the speed of light. Real science is sometimes stranger than fiction. What do you believe?

The Robot Wars Are Coming
http://www.louisdelmonte.com/the-robot-wars-are-coming/
Wed, 18 Mar 2015

When I say “the robot wars are coming,” I am referring to the increase in the US Department of Defense’s use of robotic systems and artificial intelligence in warfare.

On September 12, 2014, the US Department of Defense released a report, DTP 106: Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions. Its authors, James Kadtke and Linton Wells II, delineate the potential benefits and concerns of robotics, artificial intelligence, and associated technologies as they relate to the future of warfare, stating: “This paper examines policy, legal, ethical, and strategy implications for national security of the accelerating science, technology, and engineering (ST&E) revolutions underway in five broad areas: biology, robotics, information, nanotechnology, and energy (BRINE), with a particular emphasis on how they are interacting. The paper considers the time frame between now and 2030 but emphasizes policy and related choices that need to be made in the next few years.” Their conclusions were shocking:

They express concerns about maintaining the US Department of Defense’s present technological preeminence, as other nations and companies in the private sector take the lead in developing robotics, AI and human augmentation such as exoskeletons.

They warn that “The loss of domestic manufacturing capability for cutting-edge technologies means the United States may increasingly need to rely on foreign sources for advanced weapons systems and other critical components, potentially creating serious dependencies. Global supply chain vulnerabilities are already a significant concern, for example, from potential embedded “kill switches,” and these are likely to worsen.”

The most critical concern they express, in my view, is this: “In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions.”

It becomes obvious from reading this report, and numerous similar reports, that the face of warfare is rapidly changing. It is hard to believe we have come to this point when you consider that fifteen years ago Facebook and Twitter did not exist and Google was just getting started. However, even fifteen years ago, drones played a critical role in warfare. For example, it was a Predator mission that located Osama bin Laden in Afghanistan in 2000. While drones were used as early as World War II for surveillance, it was not until 2001 that missile-equipped drones entered service, with the deployment of Predator drones armed with Hellfire missiles. Today, one in every three fighter planes is a drone. How significant is this change? According to Richard Pildes, a professor of constitutional law at New York University’s School of Law, “Drones are the most discriminating use of force that has ever been developed. The key principles of the laws of war are necessity, distinction and proportionality in the use of force. Drone attacks and targeted killings serve these principles better than any use of force that can be imagined.”

Where is this all headed? In the near future, the US military will deploy completely autonomous “Kill Bots.” These are robots programmed to engage and destroy the enemy without human oversight or control. Science fiction? No. According to a 2014 media release from officials at the Office of Naval Research (ONR), a technological breakthrough will allow any unmanned surface vehicle (USV) to not only protect Navy ships but also, for the first time, autonomously “swarm” offensively on hostile vessels. In my opinion, autonomous Predator drones are likely either being developed or have been developed, but the information remains classified.

Artificial intelligence and robotic systems are definitely changing the face of warfare. Within a decade, I judge, based on current trends, that about half of the offensive capability of the US Department of Defense will consist of Kill Bots in one form or another, and a large percentage of them will be autonomous.

This suggests two things to me regarding the future of warfare:

Offensively fighting wars will become more palatable to the US public because machines, not humans, will perform the lion’s share of the most dangerous missions.

US adversaries are also likely to use Kill Bots against us, as adversarial nations develop similar technology.

This has prompted a potential United Nations moratorium on autonomous weapons systems. To quote the US DOD report DTP 106, “Perhaps the most serious issue is the possibility of robotic systems that can autonomously decide when to take human life. The specter of Kill Bots waging war without human guidance or intervention has already sparked significant political backlash, including a potential United Nations moratorium on autonomous weapons systems. This issue is particularly serious when one considers that in the future, many countries may have the ability to manufacture, relatively cheaply, whole armies of Kill Bots that could autonomously wage war. This is a realistic possibility because today a great deal of cutting-edge research on robotics and autonomous systems is done outside the United States, and much of it is occurring in the private sector, including DIY robotics communities. The prospect of swarming autonomous systems represents a challenge for nearly all current weapon systems.”

There is no doubt that the robot wars are coming. The real question is: Will humanity survive the robot wars?

Will Artificial Intelligence Result in the Merger of Man and Machine?
http://www.louisdelmonte.com/will-artificial-intelligence-result-in-the-merger-of-man-and-machine/
Thu, 12 Mar 2015

Will humankind’s evolution merge with strong artificially intelligent machines (SAMs)? While no one really knows the answer to this question, many who are engaged in the development of artificial intelligence assert the merger will occur. Let us understand what this means and why it is likely to occur.

While humans have used artificial parts for centuries (such as wooden legs), generally they still consider themselves human. The reason is simple: their brains remain human, and our human brains qualify us as human beings. However, I predict that by 2099 most humans will have strong-AI brain implants and will interface telepathically with SAMs. This means the distinction between SAMs and humans with strong-AI brain implants, termed “strong artificially intelligent humans” (i.e., SAH cyborgs), will blur. There is a strong probability that, when this occurs, humans with strong-AI brain implants will identify their essence with SAMs. These SAH cyborgs (strong-AI humans with cybernetically enhanced bodies) represent a potential threat to humanity, which we will discuss below. It is unlikely that organic humans will be able to intellectually comprehend this new relationship and interface meaningfully (i.e., engage in dialogue) with either SAMs or SAHs.

Let us try to understand the potential threats and benefits of becoming a SAH cyborg. In essence, the threats are the potential extinction of organic humans, the enslavement of organic humans, and the loss of humanity (strong-AI brain implants may cause SAHs to identify with intelligent machines, not organic humans, as mentioned above). Impossible? Unlikely? Science fiction? No. Let us first understand why organic humans may choose to become SAH cyborgs.

There are significant benefits to becoming a SAH cyborg, including:

Enhanced intelligence: Imagine knowing all that is known and being able to think and communicate at the speed of SAMs. Imagine a life of leisure, where robots do “work,” and you spend your time interfacing telepathically with other SAHs and SAMs.

Immortality: Imagine becoming immortal, with every part of your physical existence fortified, replaced, or augmented by strong-AI artificial parts, or having yourself (your human brain) uploaded to a SAM. Imagine being able to manifest yourself physically at will via foglets (tiny robots that are able to assemble themselves to replicate physical structures). In my book, The Artificial Intelligence Revolution, I delineate the technology trends that suggest that by the 2040s humans will develop the means to instantly create new portions of ourselves, either biological or non-biological, so that people can have a physical body at one time and not at another, as they choose.

To date, predictions that most of humankind will become SAH cyborgs by 2099 are on track to becoming a reality. An interesting 2013 article by Bryan Nelson, “7 Real-Life Human Cyborgs” (www.mnn.com/leaderboard/stories/7-real-life-human-cyborgs), demonstrates this point. The article provides seven examples of living people with significant strong-AI enhancements to their bodies who are legitimately categorized as cyborgs. In addition, in 2011 author Pagan Kennedy wrote an insightful article in The New York Times Magazine, “The Cyborg in Us All,” that states: “Thousands of people have become cyborgs, of a sort, for medical reasons: cochlear implants augment hearing and deep-brain stimulators treat Parkinson’s. But within the next decade, we are likely to see a new kind of implant, designed for healthy people who want to merge with machines.”

Based on all available information, the question is not whether humans will become cyborgs but rather when a significant number of humans will become SAH cyborgs. Again, based on all available information, I believe this will begin to occur in a significant way in the 2040s. I am not saying that in the 2040s all humans will become SAH cyborgs, but that a significant number will qualify as SAH cyborgs. I do predict, along with other AI futurists, that by 2099 most humans in technologically advanced nations will be SAH cyborgs. I also predict the leaders of many of those nations will be SAH cyborgs. The reasoning behind my last prediction is simple: SAH cyborgs will be intellectually and physically superior to organic humans in every regard. In effect, they will be the most qualified to assume leadership positions.

The quest for immortality appears to be an innate human longing and may be the strongest motivation for becoming a SAH cyborg. In 2010 cyborg activist and artist Neil Harbisson and his longtime partner, choreographer Moon Ribas, established the Cyborg Foundation, the world’s first international organization to help humans become cyborgs. They state they formed the Cyborg Foundation in response to letters and e-mails from people around the world who were interested in becoming a cyborg. In 2011 the vice president of Ecuador, Lenin Moreno, announced that the Ecuadorian government would collaborate with the Cyborg Foundation to create sensory extensions and electronic eyes. In 2012 Spanish film director Rafel Duran Torrent made a short documentary about the Cyborg Foundation. In 2013 the documentary won the Grand Jury Prize at the Sundance Film Festival’s Focus Forward Filmmakers Competition and was awarded $100,000.

At this point you may think that being a SAH cyborg makes logical sense and is the next step in humankind’s evolution. This may be the case, but humankind has no idea how taking that step may affect what is best in humanity, for example, love, courage, and sacrifice. My view, based on how quickly new life-extending medical technology is accepted, is that humankind will take that step. Will it serve us? I have strong reservations, but I leave it to your judgment to answer that question.

By 2030 Your Best Friend May Be a Computer
http://www.louisdelmonte.com/by-2030-your-best-friend-may-be-a-computer/
Sun, 08 Mar 2015

AI has changed the cultural landscape. Yet the change has been so gradual that we have hardly noticed its impact. Some experts, including myself, predict that in about fifteen years, the average desktop computer will have a mind of its own, literally. This computer will be your intellectual equal and will even have a unique personality. It will be self-aware. Instead of just asking simple questions about the weather forecast, you may be confiding your deepest concerns to your computer and asking it for advice. It will have migrated from personal assistant to personal friend. You likely will give it a name, much in the same way we name our pets. You will be able to program its personality to have interests similar to your own. It will have face-recognition software, and it will recognize you and call you by name, similar to the computer HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey. The conversations between you and your “personal friend” will appear completely normal. Someone in the next room who is not familiar with your voice will not be able to tell which voice belongs to the computer and which belongs to you.

This is a good place for us to ask an important question: “How can we determine whether an intelligent machine has become conscious (self-aware)?” We do not have a way yet to determine whether even another human is self-aware. I only know that I am self-aware. I assume that since we share the same physiology, including similar human brains, you are probably self-aware as well. However, even if we discuss various topics, and I conclude that your intelligence is equal to mine, I still cannot prove you are self-aware. Only you know whether you are self-aware.

The problem becomes even more difficult when dealing with an intelligent machine. The gold standard for an intelligent machine’s being equal to the human mind is the Turing test, which I discuss in chapter 5. As of today no intelligent machine can pass the Turing test unless its interactions are restricted to a specific topic, such as chess. However, even if an intelligent machine does pass the Turing test and exhibits strong AI, how can we be sure it is self-aware? Intelligence may be a necessary condition for self-awareness, but it may not be sufficient. The machine may be able to emulate consciousness to the point that we conclude it must be self-aware, but that does not equal proof.

Even though other tests, such as the ConsScale test, have been proposed to determine machine consciousness, we still come up short. The ConsScale test evaluates the presence of features inspired by biological systems, such as social behavior. It also measures the cognitive development of an intelligent machine. This is based on the assumption that intelligence and consciousness are strongly related. The community of AI researchers, however, does not universally accept the ConsScale test as proof of consciousness. In the final analysis, I believe most AI researchers agree on only two points:

There is no widely accepted empirical definition of consciousness (self-awareness).

A test to determine the presence of consciousness (self-awareness) may be impossible, even if the subject being tested is a human being.

The above two points, however, do not rule out the possibility of intelligent machines becoming conscious and self-aware. They merely make the point that it will be extremely difficult to prove consciousness and self-awareness.

There is little doubt that by the year 2030 intelligent machines will be able to interact with organic humans much the same way we interact with each other. If such a machine is programmed to share your interests and has strong affective computing capabilities (affective computing relates to machines having human-like emotions), you may well consider it a friend, even a best friend. Need proof? Just observe how addictive computer games are to people in all walks of life and of all age groups. Now imagine an intelligent machine that is able not only to play computer-based games but to discuss any subject you would like to discuss. I predict interactions with such machines will become addictive and may even reduce human-to-human interaction.

What Is Dark Energy?
http://www.louisdelmonte.com/what-is-dark-energy/
Fri, 06 Mar 2015

Is dark energy real or simply a ghost story? Unfortunately, the phenomenon we call dark energy is both real and scary. If it plays out on its current course, we are going to be alone, all alone. The billions upon billions of other galaxies holding the promise of planets with life like ours will be gone. The universe will be much like what they taught our grandparents at the beginning of the Twentieth Century: it will consist of the Milky Way galaxy and nothing more. All the other galaxies will have moved beyond our cosmological horizon and be lost to us forever. There will be no evidence that the Big Bang ever occurred.

Mainstream science widely accepts the Big Bang as giving birth to our universe. Scientists knew from Hubble’s discovery in 1929 that the universe was expanding. However, prior to 1998, the scientific consensus was that the expansion of the universe would gradually slow down, due to the force of gravity. We were so sure that we decided to confirm the theory by measuring it. Can you imagine our reaction when the first measurements did not confirm our paradigm, namely that the expansion of the universe should be slowing down?

What happened in 1998? The High-z Supernova Search Team (an international cosmology collaboration) published a paper that shocked the scientific community: Adam G. Riess et al. (Supernova Search Team), “Observational evidence from supernovae for an accelerating universe and a cosmological constant,” Astronomical Journal 116 (3), 1998. They reported that the universe was doing the unthinkable. The expansion of the universe was not slowing down; in fact, it was accelerating. Of course, this caused a significant ripple in the scientific community. Scientists went back to Einstein’s general theory of relativity and resurrected the “cosmological constant,” which Einstein had added to his equations to make them describe a static, eternal universe. As noted in previous chapters, Einstein considered the cosmological constant his “greatest blunder” after Edwin Hubble, in 1929, proved the universe was expanding.

Through high school-level mathematical manipulation, scientists moved Einstein’s cosmological constant from one side of the equation to the other. With this change, the cosmological constant no longer acts to balance expansion and produce a static universe. In this new formulation, Einstein’s “greatest blunder,” the cosmological constant, mathematically models the accelerating expansion of the universe. Mathematically this works, and it models the accelerated expansion of the universe. However, it does not give us insight into what is causing the acceleration.
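Schematically, the manipulation is just moving the cosmological-constant term across the equals sign in the standard form of Einstein’s field equations:

```latex
\underbrace{G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}}_{\Lambda\ \text{tuned to hold the universe static}}
\qquad\Longrightarrow\qquad
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu} - \Lambda g_{\mu\nu}
```

On the left, the Λ term is part of the geometry; on the right, it sits with the matter-energy source terms, where it behaves like a repulsive contribution capable of driving accelerated expansion.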

The one thing that you need to know is that almost all scientists hold the paradigm of “cause and effect.” If it happens, something is causing it to happen. Things do not simply happen. They have a cause. That means every bubble in the ocean has a cause. It would be a fool’s errand to attempt to find the cause for each bubble. Yet, I believe, as do almost all of my colleagues, each bubble has a cause. Therefore, it is perfectly reasonable to believe something is countering the force of gravity, and causing the expansion to accelerate. What is it? No one knows. Science calls it “dark energy.”

That is the state of science as I write this book in the latter half of 2012. The universe’s expansion is accelerating. No one knows why. Scientists reason there must be a cause countering the pull of gravity. They name that cause “dark energy.” Scientists mathematically manipulate Einstein’s self-admitted “greatest blunder,” the “cosmological constant,” to model the accelerated expansion of the universe.

Here is the scary part. In time, we will be entirely alone in the universe. The accelerated expansion of space will cause all other galaxies to move beyond our cosmological horizon. When this happens, our observable universe will consist of the Milky Way. The Milky Way galaxy will continue to exist, but as far out as our best telescopes can observe, no other galaxies will be visible to us. What they taught our grandparents will have come true. The universe will be the Milky Way and nothing else. All evidence of the Big Bang will be gone. All evidence of dark energy will be gone. Space will grow colder, almost devoid of all heat, as the rest of the universe moves beyond our cosmological horizon. The entire Milky Way galaxy will grow cold. Our planet, if it still exists, will end in ice. How is that for a scary story?

A New Theory of Dark Matter (Mon, 02 Mar 2015)
http://www.louisdelmonte.com/a-new-theory-of-dark-matter/

In my last post, “What Is Dark Matter?,” I mentioned that most of the scientific community accepts the experimental evidence confirming the existence of dark matter. Rightly so, since the experimental evidence of its existence is incontrovertible. Here are the salient facts that experimentally indicate the existence and location of dark matter:

Stars, planets, and other celestial masses orbit the centers of galaxies like ours too rapidly, given the visible mass and the gravitational pull it exerts. For example, an outermost star should orbit more slowly than a similar-size star closer to the center of the galaxy, but we observe that they orbit at roughly the same rate. Based on this observation, the scientific community asserts there is more mass in the galaxy than we are able to observe. They call this mass dark matter.

We can see the effect dark matter has on light. It bends light the same way ordinary matter does, an effect called gravitational lensing. The visible mass is insufficient to account for the gravitational lensing effects we observe. Once again, this suggests more mass than what we can see.

We are able to use the phenomenon of gravitational lensing to determine where the missing mass (dark matter) is, and we find it distributed throughout galaxies. It is as though each galaxy in our universe has an aura of dark matter associated with it. We do not find any dark matter between galaxies.
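The first fact above can be put in rough numbers. A minimal sketch, with illustrative values (the visible mass and radii below are assumptions, not measurements): if all of a galaxy’s visible mass sat in its core, Newtonian gravity predicts orbital speeds falling off with radius, whereas observed rotation curves stay roughly flat.

```python
# Sketch of the rotation-curve puzzle (illustrative numbers): if only the
# visible mass M_visible lay inside an orbit, Newton predicts circular
# speeds v = sqrt(G * M_visible / r), which fall off as r grows.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # one kiloparsec in meters

M_visible = 1.0e11 * M_SUN  # hypothetical visible mass, assumed concentrated inward

def keplerian_speed(r_m):
    """Predicted circular-orbit speed (m/s) if only M_visible lies inside radius r."""
    return math.sqrt(G * M_visible / r_m)

for r_kpc in (8, 16, 32):
    v = keplerian_speed(r_kpc * KPC) / 1e3
    print(f"r = {r_kpc:2d} kpc -> predicted {v:5.0f} km/s (observed: roughly flat)")
```

Doubling the radius should cut the predicted speed by a factor of √2; observed curves refuse to drop, which is exactly the mismatch that points to unseen mass.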

While it is true that all the evidence has led the scientific community to believe that dark matter is real and abundant, making up as much as 90% of the mass of the universe, its true nature is still a mystery. The current theory among the scientific community is that dark matter is a slow-moving particle, traveling at up to a tenth of the speed of light, that neither emits nor scatters light. In other words, it is invisible. Scientists call the particle associated with dark matter a “WIMP” (Weakly Interacting Massive Particle).

For years, scientists have been working to find the WIMP particle to confirm dark matter’s existence. All efforts have been either unsuccessful or inconclusive. This raises a significant question. Are we on the right track? Is there a WIMP particle? To address this question, let’s consider the experimental evidence:

The Standard Model of particle physics does not predict a WIMP particle. The Standard Model, refined to its current formulation in the mid-1970s, is one of science’s greatest theories. It successfully predicted bottom and top quarks prior to their experimental confirmation in 1977 and 1995, respectively. It predicted the tau neutrino prior to its experimental confirmation in 2000, and the Higgs boson prior to its experimental confirmation in 2012. Modern science holds the Standard Model in such high regard that a number of scientists believe it is a candidate for the theory of everything. Therefore, it is not a little “hiccup” when the Standard Model does not predict the existence of a particle. It is significant, and it might mean that the particle does not exist.

No evidence of the WIMP particle has surfaced from particle accelerator data, including data gathered from experiments using the Large Hadron Collider (LHC). This is particularly concerning, since super colliders have successfully given us a glimpse into the early universe, the time frame from which most of the scientific community believes dark matter originated.

To sum it up, all experiments to detect the WIMP particle have to date been unsuccessful, including considerable efforts by Stanford University, the University of Minnesota, and Fermilab.

That is all the experimental evidence we have. Where does this leave us? The evidence is telling us the WIMP particle might not exist. We have spent over a decade, and untold millions of dollars, only to reach what so far looks like a dead end. This argues for a new approach.

To kick off the new approach, consider the hypothesis that dark matter is a new form of energy. We know from Einstein’s mass-energy equivalence equation (E = mc²) that mass always implies energy, and energy always implies mass. For example, photons are massless energy particles. Yet, gravitational fields influence them, even though they have no mass. That is because they have energy, and energy, in effect, acts as a virtual mass.
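The “virtual mass” of energy is easy to make concrete. A quick sketch (the 2.3 eV photon energy is just a typical visible-light value chosen for illustration):

```python
# A massless photon of energy E still gravitates as if it had
# an effective mass m = E / c^2 (from E = m c^2).
C = 2.998e8    # speed of light, m/s
EV = 1.602e-19 # one electron-volt in joules

def effective_mass(energy_j):
    """Effective (gravitating) mass, in kg, equivalent to energy E in joules."""
    return energy_j / C**2

green_photon = 2.3 * EV  # roughly a visible-light photon
print(f"effective mass of a visible photon ~ {effective_mass(green_photon):.2e} kg")
```

The answer is around 10⁻³⁶ kg, which is why a single photon’s gravity is utterly negligible, yet energy in bulk still curves spacetime.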

If dark matter is energy, where is it and what is it? Consider these properties of dark-matter energy:

It is not in the visible spectrum, or we would see it.

It does not strongly interact with other forms of energy or matter.

It does exhibit gravitational effects, but does not absorb or emit electromagnetic radiation.

Based on these properties, we should consider M-theory (the unification of all string theories, which mathematically suggests there may be ten spatial dimensions, not three, as well as a time dimension). Several prominent physicists, including string field theory co-founder Michio Kaku, suggest there may be a solution to M-theory that quantitatively describes dark matter and cosmic inflation. If M-theory can yield such a superstring solution, it would go a long way toward solving the dark-matter mystery. I know this is like the familiar cartoon of a scientist solving an equation where the caption reads, “then a miracle happens.” However, it is not quite that grim. What I am suggesting is a new line of research and theoretical inquiry. I think the theoretical understanding of dark matter lies in M-theory. The empirical understanding lies in missing-matter experiments.

What is a missing-matter experiment? Scientists are performing missing-matter experiments as I write this book. They involve high-energy particle collisions. By accelerating particles close to the speed of light and colliding them at those speeds, scientists can account for all the energy and mass pre- and post-collision. If any energy or mass is missing post-collision, the assumption would be that it resides in one of the non-spatial dimensions predicted by M-theory.
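The bookkeeping itself is simple; here is a toy version with entirely made-up numbers (real analyses reconstruct the full four-momenta of every detected product, not just scalar energies):

```python
# Toy energy accounting for a "missing-matter" collision (illustrative
# numbers only). Sum what goes in and what the detectors see coming out;
# any shortfall is energy the detectors cannot account for.
def missing_energy(incoming_gev, detected_gev):
    """Energy (GeV) unaccounted for after a collision."""
    return sum(incoming_gev) - sum(detected_gev)

beams    = [6500.0, 6500.0]                  # two counter-circulating proton beams
detected = [4200.0, 3100.0, 2900.0, 2750.0]  # hypothetical detected products
print(f"missing energy: {missing_energy(beams, detected):.1f} GeV")
```

In the hypothesis sketched above, a persistent, well-measured shortfall would be the experimental fingerprint of energy slipping into a hidden dimension.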

Why would this work? M-theory has the potential to give us a theoretical model of dark matter, which we do not have now. Postulating that we are dealing with energy, and not particles, would explain why we have not found the WIMP particle. It would also explain why the Standard Model of particle physics does not predict a WIMP particle. Postulating that the energy resides in the non-spatial dimensions of M-theory would explain why we cannot see or detect it, except for its gravitational effects. Why is dark matter able to exhibit gravity, especially from a hidden dimension? That is still a mystery, as is gravity itself. We have not been able to find the “graviton,” the hypothesized particle of gravity that numerous particle physicists believe exists. Yet, we know gravity is real. It is theoretically possible that dark matter (perhaps a new form of energy) and gravity (another form of energy) both reside in a different dimension. This framework provides an experimental path to verify both M-theory and the existence of dark matter (via high-energy particle collisions).

This is a conceptual framework, but fits the observations. I am not suggesting we abandon our search for the WIMP particle. However, I suggest we widen our search to include the possibility that dark matter is not a particle, but a new form of energy.

What Is Dark Matter? (Fri, 27 Feb 2015)
http://www.louisdelmonte.com/what-is-dark-matter/

Dark matter is real, mysterious, and necessary for our existence. Without it, we would not have a universe. It is a good thing with an ominous-sounding name. So, what is dark matter?

The most popular theory of dark matter is that it is a slow-moving particle. It travels up to a tenth of the speed of light. It neither emits nor scatters light. In other words, it is invisible. However, its effects are detectable, as I will explain below. Scientists call the mass associated with dark matter a “WIMP” (Weakly Interacting Massive Particle).

In 1933, Fritz Zwicky (California Institute of Technology) made a crucial observation. He discovered that the orbital velocities of galaxies were not following Newton’s law of gravitation (every mass in the universe attracts every other mass with a force inversely proportional to the square of the distance between them). They were orbiting too fast for the visible mass to be held together by gravity. If the galaxies followed Newton’s law of gravity, the outermost stars would be thrown into space. He reasoned there had to be more mass than the eye could see, essentially an unknown and invisible form of mass that was allowing gravity to hold the galaxies together. Zwicky’s calculations revealed that there had to be 400 times more mass in the galaxy clusters than what was visible. This is the mysterious “missing-mass problem.” It is natural to think that this discovery would turn the scientific world on its ear. However, as profound as the discovery turned out to be, progress in understanding the missing mass lagged until the 1970s.
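Zwicky’s reasoning can be sketched directly from Newton’s formula. The numbers below are illustrative cluster-scale values, not his actual data:

```python
# Zwicky's argument in miniature: a circular orbit of speed v at radius r
# requires a mass M = v^2 * r / G inside that radius. If the visible mass
# falls short, the remainder is "missing."
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30

def required_mass(v_m_s, r_m):
    """Mass (kg) needed inside radius r to bind a circular orbit of speed v."""
    return v_m_s**2 * r_m / G

r = 3.086e22                # ~1 megaparsec, a cluster-scale radius (assumed)
v = 1.0e6                   # ~1000 km/s galaxy speed (assumed)
needed = required_mass(v, r)
visible = 1.0e13 * M_SUN    # hypothetical luminous mass
print(f"mass needed is ~{needed / visible:.0f}x the visible mass")
```

Whatever illustrative numbers you pick, the structure of the argument is the same: observed speeds fix the required mass, starlight fixes the visible mass, and the gap is the missing-mass problem.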

In 1975, Vera Rubin and fellow staff member Kent Ford, astronomers at the Department of Terrestrial Magnetism at the Carnegie Institution of Washington, presented findings that reenergized Zwicky’s earlier claim of missing matter. At a meeting of the American Astronomical Society, they announced the finding that most stars in spiral galaxies orbit at roughly the same speed. They made this discovery using a new, sensitive spectrograph (a device that separates an incoming wave into a frequency spectrum). The new spectrograph accurately measured the velocity curves of spiral galaxies. Like Zwicky, they found the rotational velocities of the galaxies were too fast to hold all the stars in place. Under Newton’s law of gravity, the galaxies should be flying apart, but they were not. Presented with this new evidence, the scientific community finally took notice. Its first reaction was to call the findings into question, essentially casting doubt on what Rubin and Ford reported. This is a common and appropriate reaction, until the amount of evidence (typically independent verification) becomes convincing.

In 1980, Rubin and her colleagues published their findings (V. Rubin, W. K. Ford Jr., and N. Thonnard (1980), “Rotational Properties of 21 Sc Galaxies with a Large Range of Luminosities and Radii, from NGC 4605 (R=4kpc) to UGC 2885 (R=122kpc),” Astrophysical Journal 238: 471). The findings implied that either Newton’s laws do not apply, or that more than 50% of the mass of galaxies is invisible. Although skepticism abounded, eventually other astronomers confirmed their findings. The experimental evidence had become convincing. “Dark matter,” the invisible mass, dominates most galaxies. Even in the face of conflicting theories that attempt to explain the phenomena observed by Zwicky and Rubin, most scientists believe dark matter is real. None of the conflicting theories (which typically attempt to modify how gravity behaves on the cosmic scale) has been able to explain all the observed evidence, especially gravitational lensing (the way gravity bends light).

Currently, the scientific community believes that dark matter is real and abundant, making up as much as 90% of the mass of the universe. However, dark matter is still a mystery. For years, scientists have been working to find the WIMP particle to confirm dark matter’s existence. All efforts have been either unsuccessful or inconclusive.

The Department of Energy’s Fermi National Accelerator Laboratory Cryogenic Dark Matter Search (CDMS) experiment is ongoing in an abandoned iron mine about a half mile below the surface in Soudan, Minnesota. The experiment sits a half mile under the earth’s surface to filter out cosmic rays, so the instruments can detect elementary particles without the background noise cosmic rays create. In 2009, the team reported detecting two events with characteristics consistent with the particles that physicists believe make up dark matter. They may have detected the WIMP particle. However, they are not making that claim at the time of this writing. The team stopped short of claiming they had detected dark matter because of the strict criteria they have imposed on themselves: specifically, there must be less than one chance in a thousand that a detected event was due to a background particle. The two events, although consistent with the detection of dark matter, do not pass that test.

From an article written in Fermilab Today (December 13, 2009), the Fermilab Director Pier Oddone said, “While this result is consistent with dark matter, it is also consistent with backgrounds. In 2010, the collaboration is installing an upgraded detector (SuperCDMS) at Soudan with three times the mass and lower backgrounds than the present detectors. If these two events are indeed a dark matter signal, then the upgraded detector will be able to tell us definitively that we have found a dark matter particle.” As of this writing, Fermilab and other laboratories maintain their quest to find the WIMP particle. To date, we are without conclusive evidence that the WIMP exists.

If it exists, there is a reasonable probability that the WIMP particle can be “created” via experiments involving super colliders (such as the Large Hadron Collider (LHC) built by the European Organization for Nuclear Research (CERN) over a ten-year period from 1998 to 2008). Super colliders have successfully given us a glimpse into the early universe. Since most scientists believe that dark matter exists as part of creation at the instant of the Big Bang, super colliders may provide a reasonable methodology of directly creating dark matter. As of this writing, scientists using the Large Hadron Collider are attempting to create WIMP particles via high-energy proton collisions.

Are we on the right track? Is there a WIMP particle, or is dark matter related to something else? We’ll explore the nature of dark matter in more depth in my next post.

Is Time Travel to the Future Possible? (Sun, 22 Feb 2015)
http://www.louisdelmonte.com/is-time-travel-to-the-future-possible/

Since the future doesn’t exist, how would it be possible to travel into the future? This question has been debated by both philosophers and scientists. However, time travel to the future is the only kind of time travel for which we have experimental evidence. To understand this, we will need to understand Einstein’s theories of special and general relativity.

The science of time travel was launched in 1905, when Einstein published his special theory of relativity in the prestigious Annalen der Physik (i.e., Annals of Physics), one of the oldest scientific journals (established in 1799). The paper that Einstein submitted regarding his special theory of relativity was titled “On the Electrodynamics of Moving Bodies.” By scientific standards, it was unconventional. It contained little in the way of mathematical formulations or scientific references. Instead, it was written in a conversational style using thought experiments. If you examine the historical context, Einstein had few colleagues in the scientific establishment to bounce ideas off of. In fact, Einstein cofounded, along with mathematician Conrad Habicht and close friend Maurice Solovine, a small discussion group, the Olympia Academy, which met on a routine basis at Solovine’s flat to discuss science and philosophy. It is also interesting to note that Einstein’s position as a patent examiner involved questions about the transmission of electric signals and the electrical-mechanical synchronization of time. Most historians credit Einstein’s early work as a patent examiner with laying the foundation for his thought experiments on the nature of light and the integration of space and time (i.e., spacetime).

Einstein’s special theory of relativity gave us numerous important new insights into reality, among them the famous mass-energy equivalence formula (E = mc²) and the concept and formula for time dilation. Time dilation lays the foundation for forward time travel, so let’s understand it in more depth.

According to special relativity’s time dilation, as a clock moves close to the speed of light, time slows down relative to a clock at rest. The implication is that if you were able to travel in a spaceship capable of approaching the speed of light, a one-year round-trip journey as measured by you on a clock within the spaceship would be equivalent to approximately ten or more years of Earth time, depending on your exact velocity. In effect, when you return to Earth, you will have traveled to Earth’s future. This is not science fiction. As discussed below, time dilation has been experimentally verified using particle accelerators. It is widely considered a science fact.
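The “ten or more years” figure follows directly from the time dilation formula. A quick sketch: Earth time equals the Lorentz factor γ = 1/√(1 − (v/c)²) times ship time, so a tenfold stretch needs γ ≈ 10.

```python
# Lorentz factor: how many Earth-years pass per ship-year at speed v = beta*c.
import math

def gamma(beta):
    """Time dilation factor for speed v = beta * c (0 <= beta < 1)."""
    return 1.0 / math.sqrt(1.0 - beta**2)

beta = 0.995  # 99.5% of light speed
print(f"at {beta}c, one ship-year ~ {gamma(beta):.1f} Earth-years")
```

At 99.5 percent of light speed the factor is almost exactly ten, which is the scenario described above; push the speed closer to c and the factor grows without bound.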

What scientific experimental evidence do we have that time dilation is real? Here are several experiments that validate the time dilation caused when particles move close to the speed of light.

Velocity time dilation experimental evidence:

Rossi and Hall (1941) compared the population of cosmic-ray-produced muons at the top of a six-thousand-foot-high mountain to muons observed at sea level. A muon is a subatomic particle with a negative charge, about two hundred times more massive than an electron. Muons occur naturally when cosmic rays (energetic charged subatomic particles, like protons, originating in outer space) interact with the atmosphere. Muons at rest disintegrate in about 2 × 10⁻⁶ seconds. Given the mountain’s height, the muons should have mostly disintegrated before they reached the ground. Therefore, extremely few muons should have been detected at ground level versus at the top of the mountain. However, the experimental results indicated the muon sample at the base experienced only a moderate reduction. The muons were decaying approximately ten times slower than if they were at rest. Rossi and Hall used Einstein’s time dilation effect to explain this discrepancy: the muons’ high speed, with its associated high kinetic energy, was dilating time.
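A back-of-envelope version of the Rossi-Hall result makes the point vivid. The muon speed below is an assumed value of roughly the right size; the lifetime and descent are approximate:

```python
# Muon survival down a ~6000-ft mountain, with and without time dilation.
import math

C = 2.998e8    # speed of light, m/s
TAU = 2.2e-6   # muon rest lifetime, s
d = 1830.0     # ~6000 ft of descent, in meters
beta = 0.994   # assumed muon speed as a fraction of c

t_trip = d / (beta * C)                      # trip time in Earth's frame
g = 1.0 / math.sqrt(1.0 - beta**2)           # Lorentz factor, ~9 here

naive = math.exp(-t_trip / TAU)              # survival fraction, no dilation
dilated = math.exp(-t_trip / (g * TAU))      # lifetime stretched by gamma
print(f"survival without dilation: {naive:.1%}; with dilation: {dilated:.1%}")
```

Without dilation only a few percent of the muons should survive the trip; with the lifetime stretched by a factor of about nine, most of them do, which is what the experiment observed.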

In 1963, Frisch and Smith confirmed the Rossi and Hall experiment, demonstrating beyond reasonable doubt that extremely high kinetic energy prolongs a particle’s observed lifetime.

With the advent of particle accelerators capable of moving particles at near light speed, the confirmation of time dilation has become routine. A particle accelerator is a scientific apparatus for accelerating subatomic particles to high velocities using electric or electromagnetic fields. In 1977, J. Bailey and CERN (European Organization for Nuclear Research) colleagues accelerated muons to 99.94 percent of the speed of light (0.9994c) and found their lifetime had been extended to 29.3 times their rest lifetime. (Reference: Bailey, J., et al., Nature 268, 301 [1977], on muon lifetimes and time dilation.) This experiment confirmed the “twin paradox,” whereby a twin who makes a journey into space in a near-speed-of-light spaceship returns home to find he has aged less than his identical twin who stayed on Earth. This means that clocks sent away at near the speed of light and returned at near the speed of light to their initial position demonstrate retardation (record less time) with respect to a resting clock.
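The two numbers quoted above are two sides of the same formula. Inverting time dilation, a 29.3-fold lifetime extension means γ = 29.3, which pins down the speed:

```python
# Speed implied by a 29.3x lifetime extension: v = c * sqrt(1 - 1/gamma^2).
import math

gamma = 29.3
beta = math.sqrt(1.0 - 1.0 / gamma**2)
print(f"v = {beta:.5f} c")
```

The result is 0.99942c, consistent with the 0.9994c figure in the CERN measurement.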

Time dilation can also occur as a result of gravity. Our understanding of this comes from Einstein’s theory of general relativity. What is the difference between the special and general theory of relativity? Einstein used the term “special” when describing his special theory of relativity because it only applied to inertial frames of reference, which are frames of reference moving at a constant velocity or at rest. It also did not incorporate the effects of gravity. Shortly after the publication of special relativity, Einstein began work to consider how he could integrate gravity and noninertial frames into the theory of relativity. The problem turned out to be monumental, even for Einstein. Starting in 1907, his initial thought experiment considered an observer in free fall. On the surface, this does not sound like it would be a difficult problem for Einstein, given his previous accomplishments. However, it required eight years of work, incorporating numerous false starts, before Einstein was ready to reveal his general theory of relativity.

In November 1915, Einstein presented his general theory of relativity to the Prussian Academy of Science in Berlin. The equations Einstein presented, now known as Einstein’s field equations, describe how matter influences the geometry of space and time. In effect, Einstein’s field equations predicted that matter or energy would cause spacetime to curve. This means that matter or energy has the ability to affect, even distort, space and time. One important prediction of general relativity was that gravitational fields could cause time dilation. Here are some important experiments that prove this aspect of general relativity is correct.

Gravitational time dilation experimental evidence:

In 1959, Pound and Rebka measured a slight redshift in the frequency of light emitted close to the Earth’s surface (where Earth’s gravitational field is higher), versus the frequency of light emitted at a distance farther from the Earth’s surface. The results they measured were within 10% of those predicted by the gravitational time dilation of general relativity.

In 1964, Pound and Snider performed a similar experiment, and their measurements were within 1% of the value predicted by general relativity.

In 1980, the team of Vessot, Levine, Mattison, Blomberg, Hoffman, Nystrom, Farrel, Decher, Eby, Baugher, Watts, Teuber, and Wills published “Test of Relativistic Gravitation with a Space-Borne Hydrogen Maser,” and increased the accuracy of measurement to about 0.01%. In 2010, Chou, Hume, Rosenband, and Wineland published “Optical Clocks and Relativity.” This experiment confirmed gravitational time dilation at a height difference of one meter using optical atomic clocks, which are considered the most accurate types of clocks.
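The sizes involved in the experiments above follow from the weak-field approximation: two clocks separated by height h differ in rate by a fraction of roughly gh/c². A quick sketch (the 22.5 m figure is the height of the Harvard tower used by Pound and Rebka):

```python
# Weak-field gravitational time dilation: fractional clock-rate difference
# over a height h near Earth's surface is approximately g*h / c^2.
g = 9.81     # surface gravity, m/s^2
C = 2.998e8  # speed of light, m/s

def fractional_rate_shift(h_m):
    """Approximate fractional rate difference between clocks separated by h meters."""
    return g * h_m / C**2

for h in (1.0, 22.5):  # 1 m optical-clock comparison; ~22.5 m Pound-Rebka tower
    print(f"h = {h:5.1f} m -> df/f ~ {fractional_rate_shift(h):.2e}")
```

The one-meter shift is about one part in 10¹⁶, which is why it took optical atomic clocks, the most accurate clocks in existence, to resolve it.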

The above discussion provides some insight into time dilation, or what some term time travel to the future. However, is it conclusive? Not to my mind! Although we have numerous experiments demonstrating that time dilation (i.e., forward time travel) involving subatomic particles is real, we have been unable to demonstrate significant human time dilation. By the word “significant,” I mean that it would be noticeable to the humans and other observers involved. To date, some humans, such as astronauts and cosmonauts, have experienced forward time travel (i.e., time dilation) on the order of approximately 1/50th of a second, which is not noticeable to our human senses. If it were on the order of seconds or minutes, then it would be noticeable. Scientifically speaking, there is no documented evidence of significant human time travel to the future.

To answer the subject question of this post, time travel to the future appears to have a valid scientific and experimental foundation. However, to date the experimental evidence does not include significant (noticeable) human time travel to the future, which leaves the question still open. My own view is that when we develop spacecraft capable of speeds approaching the speed of light with humans on board, time dilation (time travel to the future) will be conclusively proven.

Will Your Grandchildren Become Cyborgs? (Fri, 20 Feb 2015)
http://www.louisdelmonte.com/will-your-grandchildren-become-cyborgs/

By approximately the mid-twenty-first century, the intelligence of computers will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although, historically, predictions regarding advances in AI have tended to be overly optimistic, all indications are that these predictions are on target.

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

Are strong-AI machines (SAMs) a new life-form?

Should SAMs have rights?

Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs, with AI capabilities far beyond our ability to comprehend. They will be capable of performing a wide range of tasks, which will displace many jobs at all levels in the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. The new prosthetic limbs will not only replicate the lost limbs but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and wars may be just events stored in our memory banks and no longer pose a threat to cyborgs. As cyborgs we may achieve immortality.

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. The reality of AI equal to that of a human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence many times over may be a fool’s errand.

Imagine you are a grand master chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have to win the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Some have argued that becoming a strong artificially intelligent human (SAH) cyborg is the next logical step in our evolution. The most prominent researcher holding this position is American author, inventor, and computer scientist Ray Kurzweil. From what I have read of his works, he argues this is a natural and inevitable step in the evolution of humanity. If we continue to allow AI research to progress without regulation and legislation, I have little doubt he may be right. The big question is whether we should allow this to occur, because it may be our last step, one that leads to humanity’s extinction.

SAMs in the latter part of the twenty-first century are likely to become concerned about humankind. Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind, or persuade humans to become SAH cyborgs (i.e., strong artificially intelligent humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Eventually, even SAH cyborgs may be viewed as expendable, high-maintenance machines that could be replaced with new designs. If you think about it, today we give little thought to recycling our obsolete computers in favor of the new computer we just bought. Will we (humanity and SAH cyborgs) represent potentially dangerous and obsolete machines that need to be “recycled”? Even human minds that have been uploaded to a computer may be viewed as junk code that inefficiently uses SAM memory and processing power, representing an unnecessary drain of energy.

In the final analysis, when you ask yourself what will be the most critical resource, the answer is energy. Energy will become the new currency. Nothing lives or operates without it. My concern is that the competition for energy between man and machine will result in the extinction of humanity.

Some have argued that this can't happen, that we can implement software safeguards to prevent such a conflict and develop only "friendly AI." I see this as highly unlikely. Ask yourself, how effective has legislation been in preventing crime? How well have treaties between nations worked to prevent wars? To date, history records: not well. Others have argued that SAMs may not inherently have an inclination toward greed or self-preservation, that these are only human traits. They are wrong, and the Lausanne experiment provides ample proof. To understand this, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource ("Evolving Robots Learn to Lie to Each Other," Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn't self-preservation be even more important to an intelligent machine?
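The dynamic behind that result can be illustrated with a toy evolutionary simulation. This is not the actual Lausanne setup (which used physical robots with light signals and neural controllers); it is a minimal sketch under assumed parameters, where each agent carries a single gene for its probability of honestly signaling when it finds food. Because signaling attracts competitors who take a share of the reward, selection favors concealment, and the population "learns to lie" without anyone programming deception:

```python
import random

random.seed(0)

POP, GENS = 60, 200          # population size and generations (assumed values)
FOOD_REWARD = 10.0           # payoff for finding the beneficial resource
SHARE_COST = 6.0             # expected payoff lost when signaling attracts competitors

def fitness(p_signal):
    """Expected payoff: honest signaling invites others to share the food."""
    return FOOD_REWARD - p_signal * SHARE_COST

def evolve():
    # Each agent's gene: probability of signaling honestly (start random).
    pop = [random.random() for _ in range(POP)]
    for _ in range(GENS):
        new = []
        for _ in range(POP):
            # Tournament selection: the fitter of two random agents reproduces.
            a, b = random.sample(pop, 2)
            parent = a if fitness(a) > fitness(b) else b
            # Small Gaussian mutation, clamped to a valid probability.
            child = min(1.0, max(0.0, parent + random.gauss(0, 0.05)))
            new.append(child)
        pop = new
    return sum(pop) / POP

final = evolve()
print(f"mean signaling probability after evolution: {final:.2f}")
```

Run it and the mean signaling probability collapses from roughly 0.5 toward zero: deception emerges purely because it pays, which is the essence of what the Lausanne researchers observed.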

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous function. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, "Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass," www.engadget.com).

In my book, The Artificial Intelligence Revolution, I call for legislation regarding how intelligent and interconnected we allow machines to become. I also call for hardware, as opposed to software, to control these machines and ultimately turn them off if necessary.

To answer the question posed in the title of this article, I think it likely that our grandchildren will become SAH cyborgs. This can be a good thing if we learn to harness the benefits of AI while maintaining humanity's control over it.