Epistemology

One stock criticism of philosophers is their uselessness: they address useless matters or address useful matters in a way that is useless. One interesting specific variation is to criticize a philosopher for philosophically discussing matters of what might be. For example, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another example, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific sort of criticism.

One version of this sort of criticism can be seen as practical: since the shape of what might be cannot be known, philosophical discussions involve a double speculation: the first speculation is about what might be and the second is the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value—and this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second criticism, one that does assume philosophy has value; it is this very assumption that gives the objection its force. The basic idea is that philosophical speculation about what might be uses up resources that could instead be applied to existing problems. Naturally, someone who regards all philosophy as useless would regard philosophical discussion about what might be as a waste of time; responding to that view would require a general defense of philosophy, which goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should have focused on the ethical problems regarding current warfare. After all, there is a multitude of unsolved moral problems in regards to existing warfare—there hardly seems any need to add more unsolved problems until either the existing problems are solved or the possible problems become actual problems.

This does have considerable appeal. To use an analogy, if a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology she might have in the future. This is, of course, the classic “don’t you have something better to do?” problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is certainly reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting time. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be a fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation would generally be harm free. That is, it is rather unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game of Scrabble or watching a sunset. It would be preferable to have a somewhat better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives in force. To use the classic analogy, it is much easier to address a rolling snowball than the avalanche that it will cause.

In the case of speculative matters that have ethical aspects, it seems that it would be generally useful to have the moral discussions in place ahead of time. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car—it certainly seems to be a good idea to work out the ethics of such matters as how the car should be programmed to “decide” what to hit and what to avoid when an accident is occurring. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems. Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It would seem to be a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.

Philosophers also like to discuss what might be in other contexts than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications that matter—even (or even especially) in regards to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be that person across time? While this might seem to be a purely theoretical concern, it quickly becomes a very practical concern when one is discussing the above mentioned technology. For example, consider a company that offers a special sort of life insurance: they claim they can back-up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether that restored backup would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is a rather different matter from paying so that someone who thinks they are you can go to your house and have sex with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation of what might be—in fact, I have already written many of these in previous posts. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a practical sense.

Readings & Notes (PDF)

Class Videos (YouTube)

Part I Introduction

Class #2: This is the unedited video for the 5/12/2015 Introduction to Philosophy class. It covers the last branches of philosophy, two common misconceptions about philosophy, and argument basics.

Class #3: This is the unedited video for class three (5/13/2015) of Introduction to Philosophy. It covers analogical argument, argument by example, argument from authority and some historical background for Western philosophy.

Class #4: This is the unedited video for the 5/14/2015 Introduction to Philosophy class. It concludes the background for Socrates, covers the start of the Apology and includes most of the information about the paper.

Class #5: This is the unedited video of the 5/18/2015 Introduction to Philosophy class. It concludes the details of the paper, covers the end of the Apology and begins part II (Philosophy & Religion).

Part II Philosophy & Religion

Class #6: This is the unedited video for the 5/19/2015 Introduction to Philosophy class. It concludes the introduction to Part II (Philosophy & Religion), covers St. Anselm’s Ontological Argument and some of the background for St. Thomas Aquinas.

Class #7: This is the unedited video from the 5/20/2015 Introduction to Philosophy class. It covers Thomas Aquinas’ Five Ways.

Class #8: This is the unedited video for the eighth Introduction to Philosophy class (5/21/2015). It covers the end of Aquinas, Leibniz’ proofs for God’s existence and his replies to the problem of evil, and the introduction to David Hume.

Class #9: This is the unedited video from the ninth Introduction to Philosophy class on 5/26/2015. This class continues the discussion of David Hume’s philosophy of religion, including his work on the problem of evil. The class also covers the first 2/3 of his discussion of the immortality of the soul.

Class #10: This is the unedited video for the 5/27/2015 Introduction to Philosophy class. It concludes Hume’s discussion of immortality, covers Kant’s critiques of the three arguments for God’s existence, explores Pascal’s Wager and starts Part III (Epistemology & Metaphysics). Best of all, I am wearing a purple shirt.

Part III Epistemology & Metaphysics

Class #11: This is the 11th Introduction to Philosophy class (5/28/2015). The course covers Plato’s theory of knowledge, his metaphysics, the Line and the Allegory of the Cave.

Class #12: This is the unedited video for the 12th Introduction to Philosophy class (6/1/2015). This class covers skepticism and the introduction to Descartes.

Class #13: This is the unedited video for the 13th Introduction to Philosophy class (6/2/2015). The class covers Descartes’ 1st Meditation, Foundationalism and Coherentism, as well as the start of the Metaphysics section.

Class #14: This is the unedited video for the fourteenth Introduction to Philosophy class (6/3/2015). It covers the methodology of metaphysics and roughly the first half of Locke’s theory of personal identity.

Class #15: This is the unedited video of the fifteenth Introduction to Philosophy class (6/4/2015). The class covers the 2nd half of Locke’s theory of personal identity, Hume’s theory of personal identity, Buddha’s no-self doctrine and “Ghosts & Minds.”

Class #16: This is the unedited video for the 16th Introduction to Philosophy class. It covers the problem of universals, the metaphysics of time travel in “Meeting Yourself” and the start of the metaphysics of Taoism.

Part IV Value

Class #17: This is the unedited video for the seventeenth Introduction to Philosophy class (6/9/2015). It begins part IV and covers the introduction to ethics and the start of utilitarianism.

Class #18: This is the unedited video for the eighteenth Introduction to Philosophy class (6/10/2015). It covers utilitarianism and some standard problems with the theory.

Class #19: This is the unedited video for the 19th Introduction to Philosophy class (6/11/2015). It covers Kant’s categorical imperative.

Class #20: This is the unedited video for the twentieth Introduction to Philosophy class (6/15/2015). This class covers the introduction to aesthetics and Wilde’s “The New Aesthetics.” The class also includes the start of political and social philosophy, with the introduction to liberty and fascism.

Class #21: No video.

Class #22: This is the unedited video for the 22nd Introduction to Philosophy class (6/17/2015). It covers Emma Goldman’s anarchism.

Thanks to improvements in medicine, humans are living longer and can be kept alive well past the point at which they would naturally die. On the plus side, longer life is generally (but not always) good. On the downside, this longer lifespan and the accompanying medical intervention mean that people will often need extensive care in their old age. This care can be a considerable burden on the caregivers. Not surprisingly, there has been an effort to develop a technological solution to this problem, specifically companion robots that serve as caregivers.

While the technology is currently fairly crude, there is clearly great potential here and there are numerous advantages to effective robot caregivers. The most obvious are that robot caregivers do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be ideal 24/7/365 caregivers. This makes them superior in many ways to human caregivers who get tired, get depressed, get angry and have many other responsibilities.

There are, of course, some concerns about the use of robot caregivers. Some relate to such matters as their safety and effectiveness while others focus on other concerns. In the case of caregiving robots that are intended to provide companionship and not just things like medical and housekeeping services, there are both practical and moral concerns.

In regards to companion robots, there are at least two practical concerns regarding the companion aspect. The first is whether or not a human will accept a robot as a companion. In general, the answer seems to be that most humans will do so.

The second is whether or not the software will be advanced enough to properly read a human’s emotions and behavior in order to generate a proper emotional response. This response might or might not include conversation—after all, many people find non-talking pets to be good companions. While a talking companion would, presumably, eventually need to be able to pass the Turing Test, it would also need to pass an emotion test—that is, read and respond correctly to human emotions. Since humans often botch this, there would be a fairly broad tolerable margin of error here. These practical concerns can be addressed technologically—it is simply a matter of software and hardware. Building a truly effective companion robot might require making it very much like a living thing—the comfort of companionship might be improved by such things as smell, warmth and texture. That is, the companion should appeal to all the senses.

While the practical problems can be solved with the right technology, there are some moral concerns with the use of robot caregiver companions. Some relate to people handing off their moral duties to care for their family members, but these are not specific to robots. After all, a person can hand off the duties to another person and this would raise a similar issue.

In regards to those specific to a companion robot, there are moral concerns about the effectiveness of the care—that is, are the robots good enough that entrusting the life of an elderly or sick human to them would be morally responsible? While that question is important, a rather intriguing moral concern is that the robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate (fake) human emotions via cleverly written algorithms that respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit and such a deceit seems to be morally wrong.
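To make the "mechanical behavior in accord with its software" point concrete, here is a minimal sketch in Python. The emotion labels, replies and function are invented for illustration and are not taken from any actual companion robot; the point is only that a scripted reply can be produced without anything being felt.

```python
# A toy illustration of purely mechanical "caring": the reply is selected
# by a dictionary lookup, and nothing is felt by the machine.

# Hypothetical emotion labels an (assumed) emotion-recognition component
# might output, mapped to canned replies.
RESPONSES = {
    "sad": "I'm sorry you're feeling down. Would you like to talk about it?",
    "angry": "That sounds frustrating. I'm here to listen.",
    "happy": "That's wonderful to hear!",
}

def respond(detected_emotion: str) -> str:
    """Return a scripted reply for the detected emotion label."""
    return RESPONSES.get(detected_emotion, "I see. Tell me more.")

print(respond("sad"))  # scripted sympathy, produced without feeling anything
```

However sympathetic the printed reply sounds, it is the product of a lookup, and that is the heart of the deceit worry.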

One obvious response is that people would realize that the robot does not really experience emotions, yet still gain value from its “fake” companionship. To use an analogy, people often find stuffed animals to be emotionally reassuring even though they are well aware that the stuffed animal is just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect—if someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real in order to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship that a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that people deserve. One reasonable approach is to build on the idea that people have the capacity to actually feel the emotions they display and to actually understand them. In philosophical terms, humans have (or are) minds and robots (of the sort that will be possible in the near future) do not have minds. They merely create the illusion of having a mind.

Interestingly enough, philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior). While such behaviorism has been largely abandoned, it does survive in a variety of jokes and crude references to showing people some “love behavior.”

The usual “solution” to the problem is to go with the obvious: I think that other people have minds by an argument from analogy. I am aware of my own mental states and my behavior and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in many and various forms of deceit. For example, a person can fake being in pain or make a claim about love that is untrue. Piercing these deceptions can sometimes be very difficult since humans are often rather good at deceit. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way he wants people to believe he is thinking and feeling.

In contrast, a companion robot is not thinking or feeling what it is displaying in its behavior, because it does not think or feel. Or so it is believed. The reason that a person would think this seems reasonable: in the case of a robot, we can go in and look at the code and the hardware to see how it all works and we will not see any emotions or thought in there. The robot, however complicated, is just a material machine, incapable of thought or feeling.

Long before robots, there were thinkers who claimed that a human is a material entity and that a suitable understanding of the mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes thoughts and emotions and understand the hardware it “runs” on.

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par—both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit while humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can, and has, been argued that there is more to a human person than the material body—that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit—going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does and this will yield all the benefits of having a human companion.

My experiences as a tabletop and video gamer have taught me numerous lessons that are applicable to the real world (assuming there is such a thing). One key skill in getting about in reality is the ability to model reality. Roughly put, this is the ability to get how things work and thus make reasonably accurate predictions. This ability is rather useful: getting how things work is a big step on the road to success.

Many games, such as Call of Cthulhu, D&D, Pathfinder and Star Fleet Battles make extensive use of dice to model the vagaries of reality. For example, if your Call of Cthulhu character were trying to avoid being spotted by the cultists of Hastur as she spies on them, you would need to roll under your Sneak skill on percentile dice. As another example, if your D-7 battle cruiser were firing phasers and disruptors at a Kzinti strike cruiser, you would roll dice and consult various charts to see what happened. Video games also include the digital equivalent of dice. For example, if you are playing World of Warcraft, the damage done by a spell or a weapon will be random.
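For readers who have not played these games, here is a minimal sketch of the roll-under percentile check just described. The skill value and the flavor text are invented for illustration and do not reproduce any particular rulebook.

```python
import random

def roll_under(skill: int) -> bool:
    """Percentile check: roll 1-100 and succeed if the roll is at or under the skill."""
    return random.randint(1, 100) <= skill

# An invented character with a Sneak skill of 65 trying to slip past the cultists.
if roll_under(65):
    print("Unseen: the cultists of Hastur suspect nothing.")
else:
    print("Spotted!")
```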

Being a gamer, it is natural for me to look at reality as also being random—after all, if a random model (gaming system) nicely fits aspects of reality, then that suggests the model has things right. As such, I tend to think of this as being a random universe in which God (or whatever) plays dice with us.

Naturally, I do not know if the universe is random (contains elements of chance). We tend to attribute chance to the unpredictable, but this unpredictability might be a matter of ignorance rather than chance. After all, the fact that we do not know what will happen does not entail that it is a matter of chance.

People also seem to believe in chance because they think things could have been different: the die roll might have been a 1 rather than a 20, or I might have won the lottery rather than not. However, even if things could have been different, it does not follow that chance is real. After all, chance is not the only thing that could make a difference. Also, there is the rather obvious question of proving that things could have been different. This would seem to be impossible: while it might be believed that conditions could be recreated perfectly, one factor can never be duplicated: time. Recreating an event will always be a recreation rather than the original event. If the die comes up 20 on the first roll and 1 on the second, this does not show that it could have been a 1 the first time. All it shows is that it was 20 the first time and 1 the second.

If someone had a TARDIS and could pop back in time to witness the roll again and if the time traveler saw a different outcome this time, then this might be evidence of chance. Or evidence that the time traveler changed the event.

Even traveling to a possible or parallel world would not be of help. If the TARDIS malfunctions and pops us into a world like our own right before the parallel me rolls the die, and we see it come up 1 rather than 20, this just shows that he rolled a 1. It tells us nothing about whether my roll of 20 could have been a 1.

Of course, the flip side of the coin is that I can never know that the world is non-random: aside from some sort of special knowledge about the working of the universe, a random universe and a non-random universe would seem exactly the same. Whether my die roll is random or not, all I get is the result—I do not perceive either chance or determinism. However, I go with a random universe because, to be honest, I am a gamer.

If the universe is deterministic, then I am determined to do what I do. If the universe is random, then chance is a factor. However, a purely random universe would not permit actual decision-making: everything would be determined by chance. In games, there is apparently the added element of choice—I choose for my character to try to attack the dragon, and then roll dice to determine the result. As such, I also add choice to my random universe.

Obviously, there is no way to prove that choice occurs—as with chance versus determinism, without simply knowing the brute fact about choice there is no way to know whether the universe allows for choice or not. I go with a choice universe for the following reason: If there is no choice, then I go with choice because I have no choice. So, I am determined (or chanced) to be wrong. I could not choose otherwise. If there is choice, then I am right. So, choosing choice seems the best choice. So, I believe in a random universe with choice—mainly because of gaming. So, what about the lessons from this?

One important lesson is that decisions are made under uncertainty: because of chance, the results of any choice cannot be known with certainty. In a game, I do not know if the sword strike will finish off the dragon. In life, I do not know if the investment will pay off. In general, this uncertainty can be reduced, which shows the importance of knowing the odds and the consequences: such knowledge is critical to making good decisions in a game and in life. So, know as much as you can for a better tomorrow.
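To put "knowing the odds and the consequences" in concrete terms, here is a small sketch of weighing two options by their expected value. The probabilities and payoffs are invented purely for illustration.

```python
def expected_value(p_success: float, payoff: float, loss: float) -> float:
    """Average outcome of a risky choice: weight the payoff and the loss by their odds."""
    return p_success * payoff + (1 - p_success) * loss

# Invented numbers: a long-shot investment versus a modest, safer one.
risky = expected_value(0.10, 10_000, -1_000)   # 10% chance of a big win
safe = expected_value(0.95, 500, -100)         # 95% chance of a small gain
print(f"risky: {risky:.0f}, safe: {safe:.0f}") # prints risky: 100, safe: 470
```

The long shot has the bigger payoff, but once the odds are weighed in, the safer option comes out ahead; that is the sort of knowledge that reduces the uncertainty.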

Another important lesson is that things can always go wrong. Or well. In a game, there might be only a 1 in 100 chance that a character will be spotted by the cultists, overpowered and sacrificed to Hastur. But it could happen. In life, there might be only a 1 in 100 chance of a doctor who takes precautions catching Ebola from a patient. But it could happen. Because of this, the possibility of failure must always be considered, and it is wise to take steps to minimize both the chances of failure and its consequences.

Keeping in mind the role of chance also helps a person be more understanding, sympathetic and forgiving. After all, if things can fail or go wrong because of chance, then it makes sense to be more forgiving and understanding of failure—at least when the failure can be attributed in part to chance. It also helps in regards to praising success: knowing that chance plays a role in success is also important. For example, there is often the assumption that success is entirely deserved because it must be the result of hard work, virtue and so on. However, if success involves chance to a significant degree, then that should be taken into account when passing out praise and making decisions. Naturally, the role of chance in success and failure should be considered when planning and creating policies. Unfortunately, people often take the view that both success and failure are mainly a matter of choice—so the rich must deserve their riches and the poor must deserve their poverty. However, an understanding of chance would help our understanding of success and failure and would, hopefully, influence the decisions we make. There is an old saying “there, but for the grace of God, go I.” One could also say “there, but for the luck of the die, go I.”

When I was a young kid I played games like Monopoly, Chutes & Ladders and Candy Land. When I was a somewhat older kid, I was introduced to Dungeons & Dragons and this proved to be a gateway game to Call of Cthulhu, Battletech, Star Fleet Battles, Gamma World, and video games of all sorts. I am still a gamer today—a big bag of many-sided dice and exotic gaming mice dwell within my house.

Over the years, I have learned many lessons from gaming. One of these is to keep rolling. This is, not surprisingly, similar to the classic advice of “keep trying” and the idea is basically the same. However, there is some interesting philosophy behind “keep rolling.”

Most of the games I have played feature actual dice or virtual dice (that is, randomness) that are used to determine how things go in the game. To use a very simple example, the dice rolls in Monopoly determine how far your piece moves. In vastly more complicated games like Pathfinder or Destiny, the dice (or random number generators) govern such things as attacks, damage, saving throws, loot, non-player character reactions and, in short, much of what happens in the game. For most of these games, the core mechanics are built around what is supposed to be a random system. For example, in games like Pathfinder, when your character attacks the dragon with her great sword, a roll of a 20-sided die determines whether you hit or not. If you do hit, then you roll more dice to determine your damage.
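As a rough sketch of that mechanic, here is a minimal attack-and-damage roll in Python. The attack bonus, armor value and damage die are invented and are not meant to reproduce any particular rulebook.

```python
import random

def d(sides: int) -> int:
    """Roll one die with the given number of sides."""
    return random.randint(1, sides)

attack_bonus = 7          # invented character bonus
dragon_armor_class = 18   # invented target number

if d(20) + attack_bonus >= dragon_armor_class:
    damage = d(12) + 5    # invented damage roll for the great sword
    print(f"Hit! The dragon takes {damage} damage.")
else:
    print("The sword glances off the dragon's scales.")
```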

Having played these sorts of games for years, I can think very well in terms of chance and randomness when planning tactics and strategies within such games. On the one hand, a lucky roll can result in victory in the face of overwhelming odds. On the other hand, a bad roll can seize defeat from the jaws of victory. But, in general, success is more likely if one does not give up and keeps on rolling.
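The "keep rolling" lesson can even be put in numbers: the chance of succeeding at least once grows quickly with the number of attempts. A small sketch, using a made-up 25% chance of success per attempt:

```python
def chance_of_at_least_one_success(p_per_attempt: float, attempts: int) -> float:
    """One minus the chance of failing every single time."""
    return 1 - (1 - p_per_attempt) ** attempts

# With a 25% chance per roll, persistence changes the odds dramatically.
for n in (1, 5, 10, 20):
    print(n, round(chance_of_at_least_one_success(0.25, n), 3))
# prints roughly: 1 0.25, 5 0.763, 10 0.944, 20 0.997
```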

This lesson translates very easily and obviously to life. There are, of course, many models and theories of how the real world works. Some theories present the world as deterministic—all that happens occurs as it must and things cannot be otherwise. Others present a pre-determined world (or pre-destined): all that happens occurs as it has been ordained and cannot be otherwise. Still other models present a random universe.

As a gamer, I favor the random universe model: God does play dice with us and He often rolls them hard. The reason for this belief is that the dice/random model of gaming seems to work when applied to the actual world—as such, my belief is mostly pragmatic. Since games are supposed to model parts of reality, it is hardly surprising that there is a match up. Based on my own experience, the world does seem to work rather like a game: success and failure seem to involve chance.

As a philosopher, I recognize this could simply be a matter of epistemology: the apparent chance could be the result of our ignorance rather than an actual randomness. To use the obvious analogy, the game master might not be rolling dice behind her screen at all and what happens might be determined or pre-determined. Unlike in a game, the rule system for reality is not accessible: it is guessed at by what we observe and we learn the game of life solely by playing.

That said, the dice model seems to fit experience best: I try to do something and succeed or fail with a degree of apparent randomness. Because I believe that randomness is a factor, I consider that my failure to reach a goal could be partially due to chance. So, if I want to achieve that goal, I roll again. And again. Until I succeed or decide that the game is not worth the roll. Not being a fool, I do consider that success might be impossible—but I do not infer that from one or even a few bad rolls. This approach to life has served me well and will no doubt do so until it finally kills me.