Category Archives: Essays

Like most Americans, I used to hold some self-evident beliefs about food:

The three dogmas of the food phobic:

There are foods I “like” and foods I “dislike” and I ought to stick to the things that I like.

The better something tastes, the unhealthier it must be, and vice versa. You must choose between a long life of disgusting food and a short life of indulgence.

There is a value hierarchy for all the edible parts of any animal. For example, top sirloin is the ideal for beef. There’s a similar value hierarchy for animals themselves. Decisions about which animal and which part of the animal to eat are therefore a simple cost/benefit equation.

Two things completely changed my attitude toward food: getting married, and moving to China.

The psychology of taste

Our perception of taste is closely associated with our memories of things such as the taste of past meals, our emotional states, and sensory associations with similar foods. We come to associate foods with sensory reactions based on many factors such as familiarity, the quality of most meals, the people we were with, etc. By dissociating taste as such from negative experiences we can learn to appreciate food for its inherent taste, without emotional baggage. We can learn to prefer the taste of healthy foods by the same process.

Sensory integration therapy for food phobics

The first step to fixing food phobias is to recognize the problem: it’s not OK to exclude foods because of food sensitivities. All the “most hated” American foods are delicious when prepared properly. Having recognized the problem, here is the program that worked for me:

The strategy is to introduce foods in different settings, gradually building exposure and positive associations. For example, when my wife learned that I hated zucchini, she gradually introduced it into my diet, starting with small amounts balanced by other flavors and growing until zucchini was the dominant ingredient. Here is what she cooked:

Stuffed peppers with zucchini and sausage

Potato and zucchini frittata

Roasted vegetable meatloaf with zucchini

Grated zucchini topped with marinara

Lasagna with zucchini noodles

Zucchini gratin

Zucchini latkes

Zucchini fried in butter with onions

Parmesan crusted fried zucchini

The same program was used for eggplant, Brussels sprouts, avocados, cabbage, and okra. Once I learned to appreciate foods for their taste and texture rather than react to negative associations and unfamiliar textures, it was no longer necessary to disguise the ingredients. When I have a negative reaction to something, I isolate the components of the food (source, flavor, smell, texture) and think about which aspect I reacted to. Often I am reacting to negative memories and associations, not the food itself. Consciously understanding that a negative reaction has no rational basis is often enough to overcome it.

The importance of ceremony

The ceremonial aspect of dining is very important when learning to appreciate food. If you merely try to inhale as many calories as quickly as possible, any unusual tastes will be an unpleasant distraction. A proper sit-down meal is required to take the time to really analyze the taste of foods and form new positive sensory-conceptual associations to replace the old negative ones.

A cosmopolitan attitude to dining

One of the main differences between the Chinese diet and the Western diet is that the entire animal is considered edible. Whereas Americans stuff everything other than “choice” cuts into burgers, sausages, and McNuggets, the Chinese proudly consume the head, claws, organs, and other miscellaneous parts of animals as delicacies. This is not because they’re poorer – the head and feet are the most expensive parts of the animal. Neither do they restrict themselves to a few “blessed” animals – the entire animal kingdom is on the menu.

The difference is that of the food elitist versus that of the food connoisseur. The elitist believes that only a narrow socially accepted list of foods is good enough for him. The connoisseur is an explorer, who uses his palate as the universe-expanding sensory organ it was meant to be. The elitist lives within the small dietary-social circle he was born into. The connoisseur traverses the biological and cultural realms.

The approach I now take to eating new things is an exploratory one. Instead of responding with “like” or “dislike,” I try to understand the flavor components and texture of the food. I appreciate meals from many perspectives – sensory, anatomical, social, and historical – to fully integrate them with my worldview.

Note: I have found that adopting a Paleo diet enhances flavor discrimination. For example, a carrot is actually quite sweet and delicious to eat raw, but a typical carb-addict wouldn’t know it.

None of this is to claim attitude alone will make everything taste good. Meals must be prepared skillfully to taste good. The notion I want to dispel is that taste is either genetic or set by undecipherable psychological factors we cannot affect. Human culture has a rich history of many culinary traditions and we ought to learn to appreciate them without emotional baggage or provincial bias.

The purpose of the Bill of Rights is to make a fundamental and clear statement about the rights of man. They are fundamental because all Congressional acts are subservient to them and clear because, unlike the complex legal code, the basic rights were intended to be known by all.

Having lived through war, the Founders recognized that during war it is necessary to suspend the normal function of law, as “law” is a concept that is only possible in civil society. But they also recognized the danger of allowing any exception that would lead to the violation of rights. So, they provided strict limits: the President is the Commander in Chief, but he may only act with the consent of Congress, and that consent expires after two years. Furthermore, Congress has the power to issue Letters of Marque and Reprisal, which authorize specific individuals to attack specific groups and bring them before admiralty courts. In both cases, enemies were to be explicitly identified by Congress and enjoyed the protection of the rules of war.

We may argue about how practical these principles are and how earnestly they were followed from the start, but it is worth considering how they are routinely violated in the so-called “war on terror” going on today:

There is no war: “Terror” is an emotion, not a group of people. Therefore, no actual “war” and no actual “victory” (which requires clearly identified parties) is possible. This makes a congressional declaration of war impossible. While Congress makes occasional statements in support of the executive office, they are Constitutionally meaningless.

There is no enemy: The Constitution provides for Letters of Marque and Reprisal in cases where a war is not possible or desirable. But there is no enemy in the “war on terror.” “Al Qaeda” is a quasi-mythical entity which has more existence as an entity in the minds of those who hate/fear and/or admire it than as a physical organization of material command and support. Most of its “followers” are non-violent. Many more advocate violence (not admirable but not an act of war) than practice it. Many of those killed as “terrorists” have only some vague emotional bond with its ideology, others none at all. Certainly there is no physical network in which all such persons can be proven to be involved.

Few people would openly admit that they prefer irrational treatments and doctors. But most people do in fact advocate irrational health practices – using euphemisms for “irrational” such as “holistic,” “alternative,” “homeopathic,” and the deadly “natural.”

Medicine requires reason

The human body operates according to certain causal principles. If we wish to make a change in our health, we must understand some of those causal principles and act according to our understanding. To act without a rational basis is to disconnect our goals from their achievement. Irrationality does not guarantee failure – it just means that success, to the extent that it happens, will be due to factors other than our own efforts.

The study of human health is especially difficult

In the field of health, especially rigorous rationality is necessary for at least five reasons:

The human body will solve, or at least try to solve, most problems on its own. This makes establishing causality due to external factors quite difficult and introduces biases such as the placebo effect and the regression fallacy.

The body is very complex! Because it evolved over billions of years, the causal relationships in the body are extremely complex and interdependent.

For example, even if we know that the body has too little of a certain substance, taking that substance may (a) do nothing, (b) cause the body to produce even less of the substance, or (c) cause an unpredictable side effect. On the other hand, if the body has too much of something, then the solution may be (a) to consume less of that substance, (b) to consume more of it, or (c) nothing at all, because consumption has no relationship to the level of that substance.

It can be difficult to measure the extent to which medical problems are solved. While some things can be measured, many others, such as pain levels, are very difficult to quantify.

It is difficult to isolate causal factors in human beings since changes in health take time to develop and we can’t control every factor during an experiment or dissect human subjects when it is over.

Humans tend to be irrational when it comes to their own mortality! Our fear of death leads us to irrational over- or under-spending on health and makes us especially vulnerable to logical fallacies.

In medicine, rationality requires quality science research

There is a name for the field that applies rigor to the discovery of facts about nature: science. Science has been so successful in improving the state of human knowledge that many irrational, anti-scientific quacks have begun to use the term “scientific” to describe anti-scientific practices and ideas. In response, the medical community has come up with a term which identifies the distinguishing aspect of rationality: “evidence-based medicine.” This phrase is a necessary redundancy that names the essential characteristic of science: that it is based on sensory evidence. The alternative to evidence-based medicine is not science at all, but emotionalism – “I feel it is true, so it must be.”

In the last hundred years, we have discovered certain practices for ensuring the conclusions of our medical experiments are valid. We know experimentally that observing these practices leads to more accurate conclusions. Let me emphasize that: the truth of medical claims is strongly correlated with the degree to which experiments follow accepted scientific standards. There are a number of objective scales for measuring the quality of an experiment.

Five characteristics of quality medical studies

Here is the essence of quality medical studies:

The experiment and its results are described in enough detail to reproduce the experiment and compare results

There is a randomized control group

The selection of control subjects is double blind

The methods of randomization and blinding are accurately described and appropriate

There is a description of withdrawals and dropouts.
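The five-item checklist above can be sketched as a simple scoring function. This is my own illustrative sketch, not an established instrument: the equal weighting, the criterion labels, and the function name are all assumptions.

```python
# Illustrative only: score a study by counting how many of the five
# characteristics listed above it satisfies. Equal weighting is an
# assumption made for simplicity.

CRITERIA = [
    "fully described and reproducible",
    "randomized control group",
    "double-blind selection",
    "randomization and blinding methods described and appropriate",
    "withdrawals and dropouts reported",
]

def quality_score(met: dict) -> int:
    """Count how many of the five criteria a study meets."""
    return sum(1 for c in CRITERIA if met.get(c, False))

study = {
    "fully described and reproducible": True,
    "randomized control group": True,
    "double-blind selection": False,
    "randomization and blinding methods described and appropriate": True,
    "withdrawals and dropouts reported": True,
}
print(quality_score(study))  # 4 of 5 criteria met
```

A real quality scale would weight and define these items far more carefully; the point here is only that each criterion is a concrete yes/no check, not a matter of feeling.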

How to judge health claims

Unfortunately, it is difficult to design good experiments and more difficult still to reach firm conclusions from most experiments. In the legitimate (“evidence-based”) medical community, the degree to which practitioners adhere to the principles varies greatly – but they at least try. Fortunately, in the “quack community,” while there is sometimes the pretense of evidence, basic scientific principles are so grossly violated and ignored that it becomes easy to distinguish fraud from legitimate science.

Here is an important point: it is difficult to make firm conclusions in medicine. But when valid scientific principles are not followed, it is easy to conclude that no valid conclusion can be reached. In other words, you can’t always be sure what’s good for you, but you can be sure when someone is talking nonsense.

Another important point: When someone makes irrational health claims, it does not mean that those claims are false. It just means those claims were not derived by rational (scientific) principles, and so we cannot say anything about their truth – we can only ignore them as arbitrary. It is as if someone claimed that an invisible, undetectable pink unicorn lives in the sky – that which cannot be proven or disproven can only be dismissed.

How can we apply these ideas? It so happens that most health claims in the non-scientific media and many health “practitioners” are unscientific. This does not mean that they are wrong, or that people don’t feel helped by them. It means that their claims have no connection to reality.

In some cases, the practices that quacks suggest are helpful – but not for the reasons they identify. More importantly, in all cases, following rational, scientific principles increases the likelihood of successful outcomes over quackery (i.e., emotionalism).

To conclude, to judge whether a medical claim is legitimate or arbitrary nonsense, check whether:

It is based on quality experiments

It is consistent with medical consensus (of evidence-based medicine)

The certainty of the claim is well-established (by numerous studies, systematic reviews, etc.)

This essay was written on August 13th, 2003 and edited slightly for this post:

Is religion a value to mankind? Some alleged benefits which have been attributed to religion include: scientific and philosophical principles, technologies such as the printing press, the colonization of the new world, great works of art such as Michelangelo’s David and the Sistine Chapel, monasteries that preserved and carried on knowledge during the Middle Ages, social institutions such as charities, schools, and universities. It’s undeniable that all these things have benefited mankind and that religion played a part in them.

On a personal note, I have benefited greatly from Judaism. A Jewish organization helped my parents come to America, placed me in private school so I could learn English and Hebrew, sent me to summer camp, paid for my trip to Israel, and even helped fund my college tuition. In addition to these material benefits, I learned a lot about history, philosophy, ethics, Hebrew, and social interaction while attending Sunday school and then helping to teach it for three years. Many of my religious teachers were intelligent and inspirational people who taught me many things in the classroom and by example.

So, it is indisputable that religion has done many good things for man. Is this sufficient evidence to conclude that religion is a value to man? The fact that an institution does good is not sufficient evidence that it is good overall. Consider a profession which is not considered desirable despite doing some good for people: medical quackery. A quack who sells a fake remedy for all ailments provides some benefit to people: the placebo effect often makes people feel better, and the alcohol, cocaine, and other drugs contained in remedies were often effective at making their users feel better. However, despite the benefit he provides, the quack also defrauds people, does not fix underlying health problems, and often addicts his patients to his “medicine.” Even though the quack provides a benefit, a real doctor could provide a greater benefit without the accompanying harm. Thus, when evaluating religion, we must consider the total effect, not just isolated benefits, and evaluate whether the benefits religion provides are essential to its nature.

This question commits the logical fallacy of the stolen concept. The question of what is “scientifically provable” is derived from our metaphysics and epistemology. We use our basic philosophy to derive the epistemological standard by which to investigate the specific aspects of reality (e.g. physics, chemistry, mathematics, and economics). To demand that philosophical statements be scientifically validated is to demand that a derivative which depends on philosophy be used to prove philosophy. This is like trying to build a house by assembling the roof, walls, and windows before the foundation. It is fine to examine the whole structure of knowledge to verify that it is internally consistent and sound. But we cannot use a higher-level deduction to prove the premise it depends on. The only way to validate philosophical claims is to use reason: to use logic to validate abstract ideas by reducing them to sensory evidence.

What is the difference between science and philosophy?

Science is distinguished from philosophy by subject matter: science studies the specific nature of the universe, while philosophy (of which religion is a primitive form) studies the fundamental and universal nature of the universe and man’s relationship to it. Both are concerned with facts, but they differ in subject matter and the standard of evidence. In the field of philosophy, we must be logically rigorous, but we cannot, and need not, measure the physical evidence quantitatively as in the subject-specific sciences.

Science is made possible by the acceptance of certain philosophical axioms in metaphysics and epistemology. In metaphysics, science requires recognizing that all entities behave in a causal manner according to their nature. In epistemology, it recognizes that man is capable of perceiving and understanding reality by the use of his senses, and because his consciousness is fallible and not automatic, he needs to actively adhere to reason and logic to reach the right conclusions. Science requires a systematic method to collect evidence and correctly interpret it because knowledge of how nature works is not self-evident.

Science differs in degree from informal empirical methods such as “trial and error” and in kind from non-empirical methods such as revelation, astrology, or emotionalism. But the basic method – rational investigation based on the evidence of reality – must be used in all fields, whether philosophy, law, chemistry, mathematics, or cooking.

(This is the second part of selections from a Facebook debate. Part 1 is here.)

Introduction:

My disagreement with the theist hinges on the question “Can we know God?” or “Can we have knowledge of the supernatural?” The theist says yes: we use both experience and the “sensus divinitatus” to acquire knowledge of God. I disagree – I believe that knowledge of reality can only be obtained through reason, and the supernatural is by its very definition opposed to reason. Furthermore, the “divine sense” the theist refers to is just emotionalism. In this post, I will focus on the essence of our disagreement by examining in detail the nature of this supposed divine sense and revealing it to be pure emotionalism.

To recap three key points from my last note:

I reviewed valid and invalid means of acquiring knowledge and concluded that truth can only be reached by perceiving reality and integrating sensory data – i.e., by reason.

Emotions are a kind of thinking that tells us about our mental state.

We can learn from others, but ultimately new knowledge is formed by integrating new evidence into our own experience of reality.

Faith is emotionalism

My key criticism of the theistic argument for faith is that it is emotionalism. But emotions are not evidence of reality, only of one’s mental state. Neither revelation nor any other evidence for the supernatural is possible. I believe this argument is sufficient to disprove all religious convictions, as all other (i.e. “historical”) arguments for the supernatural are revealed to be absurd once a proper epistemology (i.e. reliance on the senses) is adopted.

The Nature of the Senses

Let’s begin with the senses we agree on: sight, hearing, touch, smell and taste. This much has been known since Aristotle. What is the exact nature and method of these senses?

(In the next few posts, I’m going to re-post selections from a Facebook debate:)

Many apologists claim that their faith is based on reason and evidence. In fact, faith is just a kind of emotionalism.

Two analogies:

Suppose you decided to base your knowledge of reality on the result of dart throws. Whenever you have doubts about something, you write four possible answers on a dart board. You aim the dart in the general direction of the board, turn off the lights, and throw. Whichever answer is closest to the dart becomes your conclusion.

What is wrong with this methodology? If you adhere to the correspondence theory of truth (that for a belief to be true, it must correspond to reality), then you should realize that the answer “chosen” by the dart has no correspondence to reality. Why not? Because there is no causal connection between your ideas and the random path taken by the dart. The dart’s path is not a valid proof of your conclusion because it is not derived from observation or logical consideration of the ideas in question.

Frustrated, you try another methodology:

You write down the four answers as before, and then take a large dose of hallucinogenic and amnesia-inducing drugs. You pick the answer in your drugged state but have no memory of how you selected it once you are sober again. Is this conclusion valid? Now you are not depending on random chance, but on a distorted version of your own mental processes. Is your method any more valid? No – there is still no causal connection between the idea and your drugged ravings. The answers you are most likely to choose will probably correspond to your existing conclusions. But that is still not any kind of proof or evidence.

Reason means a valid epistemology:

In order for evidence to be valid, there must be a valid epistemological process. To prove that a claim is true, we must verify it by deriving a conclusion step by step from the evidence of our own senses in accordance with the laws of logic. This process is known as reason. If we fail to rely on our senses and logic, we might as well be throwing the allegorical darts in the dark. Doing so willingly is irrationality.

What is the “evidence” given for supernatural claims?

There are two possible kinds: empirical claims and non-empirical claims. Empirical claims are based on observation, such as “the universe exists, so God must have created it” or “I saw Jesus on a piece of toast I ate last week.” These claims are wrong, but they do not involve faith, since they can be proven or disproven. No one would take such arguments seriously however if it were not for claims based on non-empirical evidence – faith. This takes many forms in different religions, but generally it is a kind of “revelation.” Ultimately, all revelation can be reduced to emotionalism. How so? This requires an understanding of the nature of emotion:

I have jury duty tomorrow morning, so I thought I would share some thoughts on the moral and contractual obligations of a juror:

A trial is, or ought to be, a fact-finding process, conducted in order to determine whether pre-existing legal principles are applicable to a specific case. It should not be a religious, philosophical, or political discourse – that is, the rules by which guilt or responsibility is determined must be known beforehand. It is not up to the judge or jury to determine what the law ought to be, only to apply it to the established facts. If the law were determined rather than applied at trial, it would be impossible for anyone to obey it. Furthermore, a just legal system should be uniform – people must have assurance that outcomes will not depend on the particular judge and jurors they stand before.

However, while it is not the job of the juror to determine whether the law is just, it is his moral responsibility to treat other men justly. Someone who is hired as a repo agent may not have a contractual obligation to determine whether the collateral he collects is for debts which are legitimately in default, but he has a moral obligation to refuse his assignments if he suspects that he is seizing legitimate property. If he refuses assignments on tenuous grounds, he may justly be fired, but if he has some certainty that he is seizing legitimate property, he becomes as much a thief as his employer. Likewise with the juror.

One criticism of jury nullification is that a jury is neither qualified to judge the law nor has any legitimacy in doing so. And this is certainly true as a matter of law. A juror who disagrees with the practical implementation of the moral principles behind a law ought to defer to the established process. He can always exercise his disagreement and try to effect change in his role as a private citizen.

But the situation is different when a juror disagrees with the moral principles behind a law. A law based on incorrect moral principles is inherently unjust, regardless of the facts of the case. The conviction of anyone based on such a law is necessarily an act of aggression. Any participation in the process, even solely in the function of determining the facts, is an immoral act. No judge can honestly ask a juror to breach his integrity, or blame him for refusing to do so. Everyone, regardless of his role, has a personal moral obligation to treat others justly and refrain from willingly participating in injustice.

What is a moral juror to do then? He should refuse to serve if he believes that the principles of a law are inherently unjust. By doing so, he does not undermine the legal process, since another juror can be substituted, nor does he violate his own integrity. For example, a juror exceeds his role if he refuses to convict because he thinks that the punishment for an action is too harsh, but he acts properly if he refuses to serve because he does not believe the act being prosecuted to constitute an act of coercion at all.

In the special case that the judge is unable to find enough jurors who accept the morality of the law, he has two choices: he may either require the charges to be dropped, or he may invite the dissenting jurors to serve anyway. If they do so, they cannot be blamed for acquitting the defendant based on their judgment of the law in addition to their judgment of the facts. However, the possibility remains that they will change their minds after hearing the evidence. Presumably, as long as the law is consistent with the basic moral principles of citizens qualified to serve on the jury, it should not be difficult to find sufficient jurors.

How good are your communication skills? How often do you feel that misunderstandings get in the way of your personal relationships or your career? Do you ever avoid talking to people because you don’t know how to express what you feel, or because you are afraid that you will be misunderstood?

What if you could dramatically improve the effectiveness of your spoken and written communication? Would it increase your confidence when speaking to coworkers, friends, and romantic interests? Would you take more chances if you could speak directly to someone’s mind, almost as if you had a telepathic connection with your listener?

The problem with most people’s view of communication is that they think it is an innate talent. They think that if you’re not a smart, good-looking extrovert with a good voice, you can never be a great communicator. It’s true that these things help. But just because you’re tall and have strong legs doesn’t mean that you can win a gold medal at the Olympics. And even if you are short and weak by nature, that doesn’t mean you can’t double or triple your performance. Of course, no workout will make you two feet taller. But unlike your body, your brain is very flexible.

You might think that speaking is something we learn automatically, and don’t have much control over. It’s true that we learn how to talk automatically and subconsciously, just like we learned to run automatically. But, just as a trained athlete can run faster and longer than an amateur, so can a conscious effort to improve your skills vastly improve your performance.

I’m going to share some of the things I learned with you as a kind of test. If my ideas are any good, you will remember most of what I said. After you’re done watching, please leave a comment to let me know how I did.

The five tips are: less is more, use examples, no distractions, repeat, repeat, repeat, and five or less.

One: Less is more.

Paying attention is hard. It takes an effort to follow what someone is saying. Don’t make that effort any harder than it absolutely has to be. Keep it simple. Keep it short. Keep it focused.

Long and unusual words take longer to recognize than shorter, more familiar ones. Many people adopt a stilted academic tone when they have something important to say. Don’t do it. Don’t say comprehend; say understand, or follow, or just get. Don’t go on a harangue, tirade, or diatribe; go on a rant.

The same goes for sentence and paragraph size. Ditto for analogies and figures of speech – they require an extra mental cross-reference. Just say it. Don’t give me a piece of your mind. Just say it. And whatever you do, cut out the “likes,” the “umms,” and the “you knows.” You need to take mental breaks when speaking, but practice making them silent. Your perceived competency will immediately go up 50%. Yes, I just made that number up. Here’s another made-up rule: if your finished work is not 30% shorter than your first draft, it’s too long.
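The “30% shorter” rule (which the essay admits is made up) can be sketched as a quick check. Measuring drafts by word count, and the function name itself, are my own assumptions:

```python
# Illustrative sketch of the essay's made-up "30% shorter" rule:
# is the finished draft at least 30% shorter (by word count) than
# the first draft?

def short_enough(first_draft: str, final_draft: str, cut: float = 0.30) -> bool:
    """True if final_draft keeps at most (1 - cut) of the first draft's words."""
    first_words = len(first_draft.split())
    final_words = len(final_draft.split())
    return final_words <= (1 - cut) * first_words

# A 10-word draft trimmed to 7 words just barely passes the 30% cut.
print(short_enough("one two three four five six seven eight nine ten",
                   "one two three four five six seven"))  # True
```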

Two: Use relevant visual examples.

Your brain is just a big network of triggers made up of images, sounds, tastes, and sensations. If you want me to remember what you said, you need to tie some of those triggers to what you just said. Use examples I know. If you want us to go out for sushi, remind me of the smoked salmon we ate last week. Yes, examples are not just for English class. See? That’s another one.

Good examples are about important things your audience is already familiar with. Don’t talk to young people about how you applied conflict resolution to your mother-in-law. Talk about your parents. Talk about shiny, fast, loud, dangerous, smelly things if you want to create strong mental triggers to your message.

Three: No distractions.

“Cue words” are concepts that can trigger emotional responses that block rational analysis. For example: democracy, Obama, guns, abortion. Just by saying those words, I’ve triggered a whole cascade of mental activity. Regardless of your political orientation, your mind is now busy trying to classify me as friend or enemy, or maybe just trying to think of something intelligent to say about them. Don’t distract me by mentioning things that trigger distracting emotional responses, or words with a whole host of irrelevant connotations. I’m not saying that you should not talk about controversial topics – just don’t distract the reader with them unnecessarily, even if you think he sides with you.

Four: Repeat, repeat, repeat.

Repetition is crucial to forming long-term memory. You’ve heard this before: say what you’re going to say, say it, then say what you said. Here’s an advanced trick: you can improve memorization by using spaced repetition. Make your point then repeat it with increasing intervals of time between each repetition.
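The spaced-repetition trick above can be sketched in code. The doubling rule and the specific intervals are my own illustrative assumptions; the essay only specifies increasing gaps:

```python
# Illustrative only: generate the times at which to restate a point,
# with the gap between repetitions doubling each time (an assumed
# schedule; any increasing sequence of gaps fits the essay's advice).

def repetition_points(start: float, first_gap: float, count: int) -> list:
    """Return times (e.g. minutes into a talk) at which to repeat a point."""
    times = []
    t, gap = start, first_gap
    for _ in range(count):
        times.append(t)
        t += gap
        gap *= 2  # each interval is longer than the last

    return times

# State the point at minute 0, then repeat at minutes 1, 3, and 7.
print(repetition_points(0, 1, 4))  # [0, 1, 3, 7]
```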

Five: Five or less.

Most people can only keep a limited number of ideas in their immediate memory at once. Once they exceed that number, they are going to forget some of the things they learned. For most people, that number is five. So regardless of the topic, organize your presentation or argument so that you never list more than five items for any given category.
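The “never more than five items” rule amounts to chunking any list into groups of at most five. A minimal sketch (the function name is mine; the limit of five comes from the essay):

```python
# Illustrative only: split a list of points into chunks of at most
# `limit` items, per the essay's five-item working-memory rule.

def chunk(items: list, limit: int = 5) -> list:
    """Split items into sublists of at most `limit` elements each."""
    return [items[i:i + limit] for i in range(0, len(items), limit)]

# Eight points become a group of five and a group of three.
print(chunk(list(range(8))))  # [[0, 1, 2, 3, 4], [5, 6, 7]]
```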

The five tips are: less is more, use relevant examples, no distractions, repeat, repeat, repeat, and five points or less.

The debate between free will and determinism stems from the apparent conflict between the universal rule of causality found in nature and the apparent ability of men to choose between multiple courses of action in pursuit of the most desirable outcome. Inorganic matter such as chairs, stones, and planets blindly follows whatever forces affect it, and non-human organisms act for their survival alone, but human beings seem to be an exception to nature’s rule by their unique ability to ponder how to live their lives and which values to live by. Determinists, however, reject the idea that any of these choices are freely made, and claim that a man is no exception to nature’s law because he and his choices are nothing more than the product of his environment. Decisions, they usually claim, are simply the product of conflicting environmental influences duking it out. A proper understanding of the nature of volition, however, can reconcile the apparent conflict between free will and causality, and soundly reject the position that man is merely a product of his environment.

Determinists claim that the nature of the universe is such that it is governed by certain universal scientific laws, so that each action is caused by a specific prior cause, and human action is no exception. They claim that the human mind is also governed by these rules, so that no course of action is possible to a human other than the one determined by the specific and unique set of prior factors acting on him. Thus, human choices are not “free” because they are determined ahead of time by whatever environmental, social, genetic, biological, and other unknown factors caused them to be made. Accordingly, men cannot be held morally responsible for their actions, since they have no more control over the causal chain of events in reality than anything else in nature.

The determinist would say that whether the human mind operates by random firing of neurons or strict logic is irrelevant: both are governed by specific prior causes, and even if science could show that human choices were caused by random firing of neurons, the choice would not be “free” because it would not be “chosen” independent of prior factors. To the determinist, free will would not be possible under any condition: if choice were caused by prior factors, it would follow the strict laws of causation, and if it were independent of any prior causes, it would have to be random, and hence not “chosen” in any meaningful way.

The classic reply in favor of free will is to adopt some sort of indeterminism: to claim that free will involves some sort of exception from the rules of causation. Traditionally, God has played this role, providing a mystical staging ground, exempt from causality, that allowed choice to occur. Rene Descartes took a similar position by arguing that the mind exists on a separate plane from the body, and more recently, quantum physics and chaos theory have provided scientific excuses to “escape” causation and allow a possibility for “free” choice to occur. All of these notions are nonsense. If a human choice is independent of any prior factors grounded in reality, then it must be random, and randomness is in no way a “choice.” A man who acts randomly is mad, not “free.” Whether God or quantum physics is the excuse, it is not viable to claim that human choice is independent of prior cause and yet not completely random.

The self-determinist position rejects both of these views. Affirming free will does not involve a rejection of causality in favor of a magical mechanism for human choice, but an affirmation of the process of volition behind all human choice. The self-determinist position rejects both the notion that supernatural forces are involved and the notion that human decisions violate, or are independent of, whatever laws, known and as yet unknown, govern the workings of the universe. No “alternative world” where different choices were made is possible, because the brain itself is not exempt from the same rules that govern all other matter. Rather, “free will” – as I see it – refers to the uniquely human process of volition that allows multiple courses of action to be considered and evaluated, and one selected. Hence, the process of volition does not involve a separate realm of uncaused thought and decisions, but the specific process of human thought and decision-making.

Free will is “free” in the sense that the human mind has the ability to consider multiple decisions and choose particular outcomes. In reality, only one choice and only one decision is actually made – the hardware of the brain allows no uncaused, truly random or causeless factors to enter the process – but from the perspective of the person making a decision, multiple decisions are possible, and multiple outcomes are considered. It might be said that the process I have just described is really just an illusion of free will, since in actuality only one decision is made after the deliberation. However, the term “free will” does not refer to either uncaused or random actions (for then it would become useless) but to our ability to evaluate multiple courses of action, consider different outcomes, and then select the action most likely to leave the world in a more desirable state than if we had selected a different action or none at all.

While we do not yet understand the specific physical process by which we make decisions, the evidence of our own ability to choose between multiple outcomes is readily available by introspection. We can easily observe the fact that we can consider different factors, evaluate different possibilities, and come up with original choices and decisions. Unlike the motions of inanimate objects, human actions have both a purpose and a goal, and unlike an animal’s actions, they arise from the choice to pursue certain goals and values, rather than the automatic guidance of instinct. The result – the creation of human civilization and peaceful interaction between individuals in society – stands as a testament to human originality, creativity, and, more fundamentally, the choice of some values over others.

A better understanding of the distinction between human choice and the interaction of non-volitional matter can be gained by examining the fundamental requirements for an intelligence. Suppose that a human brain or a sentient mind was somehow transferred onto a computer. Would that very complex computer program have free will? It would not base its decisions on randomness or act without a cause, but if it were able to conceptualize and independently choose between different ideas and decisions, it would in fact have free will. Free will, then, is not dependent on random neurons or some other otherworldly trait of the human brain, but on the ability to independently consider and choose between different alternatives. An intelligent computer may “think” by varying the charges on electrical circuits while humans think by firing electrical charges between axons and dendrites, but both will be able to conceive of the concept of a thunderstorm and decide that it is better not to be outside or on a non-grounded line when lightning strikes. In both cases, they will use their particular means of thought to reach decisions about which alternative scenario (golf course or inside, non-grounded line or a heavy-duty surge protector) will provide the most desirable outcome. Thus, the ability of both sentient computer programs and sentient human beings to create original ideas rests in the (so far) uniquely human ability to create concepts out of sensory inputs, relate the concepts to one another, and reach the conclusions about the nature of reality that are necessary for our survival.

While the decisions reached by the human mind and the artificial intelligence are limited by their particular hardware, the software program and the human mind can function independently of the hardware they run on. By “independent,” I do not mean independent of causality, but rather able to perform conceptual analysis that is not strictly limited by that hardware. For example, I cannot multiply large numbers in my head any more than a computer can feel tired or excited, but I can write out the solution on paper, and a computer can emulate the biological influences of a human. The function of one’s mind is not limited to the particular nature of the brain, giving humans the ability to discover new relationships and understanding among old concepts.

Objections to volition often rely on downplaying the difference between human volition and the behavior of other organic and inorganic matter. Some determinists argue that humans are no different from animals – they act on whatever goals they believe necessary for their survival. However, man is unique in his ability to choose the values he lives by, if he decides to live at all. Animals have no such choice: their actions are automatic and governed by instinct. When we interact with animals, we do so only by force or reward, not by reason, and when we punish them, it is only to alter their behavior, not to carry out justice. When a dog misbehaves, we punish it not because we hold it responsible but to change its habits; when a human acts in an immoral way, we hold the person morally responsible – culpable for the basic choice to think or not. Humans can choose what to live for, how to live, and even whether they should live at all.

A more basic argument against free will is the comparison of a human mind to inanimate matter, such as a car. After all, we turn a key and the car either starts or not, depending on whether reality is such that the process of causation leads to an engine starting or to the battery being dead. In the same way, the determinist will claim, the human mind will either make the right or wrong choices, depending on what prior state it is in. However, a car and a human mind are fundamentally different: the ignition process is a rigid mechanical chain, whereas human thought (when one chooses to think) involves a process of evaluation and conceptualization, which considers multiple possible avenues of action and allows for an evaluation of the consequences of each choice. A car that could think would be able to evaluate whether it is low on gas, and then decide to start or not depending on a variety of such factors. Of course, a human may design such a car, but the decision to include such a feature still rests with the human, not the car.

While the determinist position generally accepts the possibility of thought, it rejects the possibility of true choice, negating the possibility of moral responsibility. However, the determinist position is itself contradictory. By saying that humans should “pretend to have free will,” the determinist accepts that all human thought requires choices to be made between various alternatives. By arguing that his position is a true statement about reality rather than simply the product of various influences, he implicitly accepts the correct definition of volition while rejecting its logical consequences. The determinist cannot argue that he knows his position is true – after all, he is only arguing for it because of prior environmental factors, not because it is independently true or false. In short, in arguing for determinism, the determinist implicitly accepts the opposite of his position.