Glenn Yeffeth

We're not going to discuss my somewhat obsessive interest in The Matrix. Let's just say it's one of those things and move along. Obviously, this book will not interest you if you did not like The Matrix or are not interested in the ideas behind it.

But if you, like me, are fascinated with religion, philosophy, and science, then this is a book you may want to read. That is probably why The Matrix fascinates me so much: because it addresses subjects in which I am deeply interested.

But we said we weren't going to talk about that.

This book does address some of the problems I had with the movie. First and foremost was the "human battery" idea, which, to quote Tom and Ray, is "Booooooooogus!" Robert J. Sawyer, in his essay "Artificial Intelligence, Science Fiction, and The Matrix," says:

What the AIs of The Matrix plainly needed was not the energy of human bodies but, rather, the power of human minds--of true consciousness.

I find that a far more reasonable explanation than the one given in the movie, and to be fair, Sawyer suggests that perhaps Morpheus simply doesn't understand what is going on.

Peter B. Lloyd also addresses this issue, and again sees Morpheus as mistaken:

That Morpheus has misunderstood what is going on is underscored by his mention in the same speech of the machine's discovery of a new form of nuclear fusion. Evidently, the fusion is the real source of energy that the machines use...the human brain...is a supreme parallel computer. Most likely, the machines are harnessing the spare brainpower of the human race as a colossal distributed processor for controlling the nuclear fusion reactions.

This works well for me: just as Morpheus did not know about the previous Ones, so he did not truly understand why the machines kept the humans alive.

Dino Felluga and Andrew Gordon debate whether The Matrix was an intellectual film or just an "intellectual poseur." After the second and third films, I am somewhat inclined toward the second proposition: I do think the films set themselves up to be more than they were, and that the philosophy and religion were somewhat neglected in the second and third movies. But that doesn't mean the first movie did not pose questions, and when it posed them, it did not respond with easy answers--or sometimes with any answers at all.

I liked the fact that the movies did not answer all the questions they asked. Does that make them intellectual? Probably not in the strictest sense of the term, but they do raise questions that are not easily answered, and they ask the viewer to consider the nature of artificial intelligence: will it deserve rights, or will it be acceptable to treat artificial creatures as slaves? This is not a question we should blithely ignore, yet it is one that is not being asked. Unfortunately, I am not sure the movie succeeded in this goal, in that these large questions are still not being debated by the public.

I particularly enjoyed Peter J. Boettke's essay "Human Freedom and the Red Pill," but this should surprise no one, considering my previous interest in, and arguments about, free will.

Ray Kurzweil and Bill Joy consider whether we are headed for a future where virtual reality can be confused for reality and whether humans are becoming unnecessary.

My favorite part of the book related to a recent discussion about what I would like my "superpower" to be. I said the ability to learn as quickly as they do in The Matrix, when they "jack in." Ray Kurzweil said:

We don't have quick downloading ports in our brains. But as we build nonbiological analogs of our neurons, interconnections, and neurotransmitter levels where our skills and memories are stored, we won't leave out the equivalent of downloading ports. We'll be able to download capabilities as easily as Trinity downloads the program that allows her to fly the B-212 helicopter.