First, let me apologize for the long delay between posts. Life keeps me busy, and I no longer have the free time for research and extensive writing that I had as a university student. I am posting this because it is a subject that particularly touched me, though I do have several posts on economics and democracy that have been lined up for quite some time. There have been plenty of subjects I would have loved to write about since I put my writing on hold, but this one I would like to address right now.

The question I pose in the title seems like an easy one to answer. To any decent human being, the answer should be no. The conclusion seems to go unquestioned. The idea that the killing of innocent lives is morally wrong and unjust is so embedded in the mores and norms of our culture, and countless others across the globe, that the question seems nearly absurd on its face. Yet I recently found myself asking the question and defending the foregoing conclusion in a Facebook discussion. Surely, though, the only opposition would come from a militant extremist, some brainwashed fascist, or simply a troll?

Actually, the tiff was with none other than Fouzi Slisli, a human relations professor at SCSU. (This is the same professor whom, by the way, I vehemently defended on this blog and in the SCSU University Chronicle regarding a presentation he and others had given on the attack on Gaza in 2009, which was interrupted by professor Edelheit. This is also the same professor I praised, both here and in the University Chronicle, for his trip to Palestine and his presentation about that trip.) I do not deny that Dr. Slisli takes outspoken stances on several issues, some of which I agree with, but this one goes beyond the pale.

This started when the professor posted a link to a Telegraph article titled “Muslim group claims royal wedding is legitimate terror target.” Seemingly approving of the notion, he says, “They’re not saying they are going to target the wedding; they’re just saying the wedding is a legitimate target and might be targeted by others…” I replied, “No such thing as a legitimate target that has as its essence a civilian population.” The conclusion seems obvious enough. But not for Dr. Slisli.

Dr. Slisli contends that the U.S.—and the West in general—has targeted civilians and has deliberately killed civilians. This is undoubtedly true. I agree with the professor here. In fact, I wrote on this blog about the criminal bombings of Nagasaki and Hiroshima, describing them as “One of the worst terrorist attacks in human history.” The intentional killing of civilians is a sad reality of U.S. foreign policy and is a reason why the U.S. is one of the leading terror states. However, the fact that the West attacks civilians in no way justifies the position that killing civilians is okay. It should seem obvious enough that the actions of the West do not dictate morality. A moral theory based on such a concept would be shallow, as a few moments of thought and reflection make evident.

Certainly the West’s behavior vis-à-vis its rhetoric makes it hypocritical. But as logic might remind us, hypocrisy does not invalidate an argument. Tu quoque (“you too”) is a kind of fallacious argument that aims to discredit a conclusion because its arguer does not adhere to it. The fact that the U.S. has engaged or is currently engaged in targeting civilians has no bearing on the question of its legality or its morality. As I stated to him, “The question, though, isn’t whether the West has attacked civilians. The question is what is the proper response? Is it proper to attack the civilians of the offending nation—say you or I? That is to say, is it legitimate [to] deliberately target civilians for any reason? The answer is no. And the answer doesn’t change just because Western governments have violated the rule. Sure, it tells us a lot about the moral culture of Western nations. But if it’s wrong for the West, that also means it’s wrong for everyone else. That’s just the elementary principle of moral universality.” (Many readers know that I’ve repeatedly mentioned the principle of moral universalism on this blog, and I’ll return to it later.) The principle of universalism dictates that you apply to yourself the standards you apply to others (more stringent ones, in fact) and vice versa. If it’s wrong for the West to kill civilians, it is wrong for you and your cohorts to do the same; if it is right for you to kill civilians, it is right for the U.S.

Dr. Slisli contends that moral universalism, “lofty as it is, does not capture the complexity of the issue.” While I believe the principle is both basic and elementary (far from lofty)—the necessary basis for any decent moral theory—the professor takes issue with it. He claims I am “making the weaker sides to a conflict uphold a morality that you know full well the stronger side does not/will not uphold.” But again, that has no bearing on the question of either its legality or its morality. In any case, Dr. Slisli says Islam offers a “contingency plan” that universalism does not for those who suffer the transgressions of others: “the law of equality.” This law states, “If then any one transgresses the prohibition against you, Transgress ye likewise against him. But fear Allah, and know that Allah is with those who restrain themselves.” Those from the Christian tradition can think of a similar idea found in the Bible (“an eye for an eye”). Thus, “if anyone transgresses this universal law against you, the Qur’an instructs, then Muslims are allowed to transgress likewise against the enemy,” posits Dr. Slisli. (Of course, “Allah prefers if Muslims have restraint.”) He therefore concludes that, while it’s preferable to have restraint, it is not necessary when “ONE HAS TO PROTECT ONESELF” (emphasis his). He does claim, however, “I am not stating my own opinion here” and that he is “merely explaining the legal frameworks that the Qur’an sets for the rules of war and the legal status of civilians and civilian infrastructure.” I’ll leave the latter claim to more competent scholars.

In any case, the phrase that Muslims ought to show restraint unless “ONE HAS TO PROTECT ONESELF” is an important one because it requires the person using force to demonstrate that it is in fact for the purpose of protecting oneself. So certainly the onus is on the attacker to demonstrate that attacking innocent civilians is an act of “protecting oneself.” And quite frankly, I don’t think that onus can be met. In fact, I would venture to say that such attacks have the opposite effect: they endanger oneself more. The reason should be obvious, but I’ll return to it later.

At this point, the discussion turns ugly. Dr. Slisli perverts my statements, saying my act of “Preaching non-violence while the powerful is sawing through the weak is, in practical terms, nothing but a complicity by inaction.” Careful readers will note that at no point do I ever “preach non-violence,” and most certainly not to those stricken by violence. In fact, I believe violence is legitimate, but only under very particular circumstances, and the onus is on the perpetrator to demonstrate that violence is appropriate. So, for example, the use of force for the purpose of self-defense is legitimate. You can find this principle in Article 51 of the UN Charter. Self-defense has always been a legitimate act. Thus, I fully support the Quranic injunction that allows the use of force to “protect oneself.” Again, though, one has to demonstrate that the use of force is, in fact, self-defense.

To attack innocent civilian populations under the guise of self-defense is an act reserved only for the most morally depraved. And I do not pretend that this is an uncommon excuse for violence and terror. Take, say, Hitler when he invaded Poland and began his slaughter of Jews and millions of others; he did so under the pretense of self-defense. That’s always the pretense. We could go through a long list, but I doubt that would be necessary.

So let’s summarize. According to international law, Quranic injunctions, and elementary morality, self-defense is legitimate. The use of force, violence, etc. is legitimate insofar as it can be demonstrated to be so, for example for the purpose of self-defense. Attacking those who have not attacked you does not qualify as self-defense. Ergo, the killing of innocent civilians is illegitimate and deeply immoral. It is for this reason that such acts are outlawed, condemned (nearly) universally, considered terrorism, and counted among the gravest abuses of human rights.

Yet the professor is having none of it. He clings to the claim that, because the U.S. does it, it’s okay for everyone else to do it. He ponders, “If the West refuses to apply the universal laws of common decency with people A, B and C, why should people A and B and C apply the laws of common decency with the West?” He gives two reasons why A, B, and C might. The first is that “the balance of power OBLIGES THEM to uphold the laws of common decency” while the other side does not—i.e., they are too weak to retaliate. The second is that “People A, B and C are ‘better people’ and although the West doesn’t deal with them decently, they CHOOSE to act and be better.” He admits the latter case demonstrates “admirable strength because it produces moral rectitude.” Yet he says this is not the path to follow, because it is a deceit by the West to prevent its victims from retaliating. He wonders, “Is it a coincidence you think that intellectuals in colonial societies have always advised the colonized to use non-violence?” He claims the idea that we ought not attack innocent civilians has “sinister uses as a weapon to disarm populations …”

Therefore, Dr. Slisli concludes, the proper order of things is for A, B, and C to “apply common decency with People D, E and F and EVERY OTHER people who submit to the universal laws of common decency.” But should someone not adhere to the “universal laws,” then A, B, and C “also HAVE THE RIGHT TO DECLARE THAT COMMITMENT VOID IF THE OTHER SIDE FLAGRANTLY VIOLATES IT.” There’s a problem with this argument, though. A law is not “universal” if it is not applied universally. Of course, what the professor really meant to say, if he were being a little more honest, is, “it’s wrong for them to do it to me, but it’s okay for me to do it to them.” And it’s a demonstration of the sheer hypocrisy found in those defending the attacks on innocent lives. And that’s a vile maxim that operates nearly everywhere: it’s a crime if they do it, but not when I do it. If you think about it, that’s the exact opposite of what one might call a “universal law.”

Finally, an argument made by others (and hinted at by Dr. Slisli when he accuses me of “a complicity by inaction”) is that innocent civilians really aren’t innocent at all. (In a separate posting, Dr. Slisli contends the innocents being targeted by al-Qaeda, including Muslims, are “the Crusader-Zionist alliance and those who collaborate with them,” and thus fair game. But, “At any rate, this is an inter-Muslim debate in which Americans have no business sticking their nose.” When innocent American lives are at stake, I believe we might have the right to stick our nose in, so I’ll continue.) One commenter notes, “We are all party to what our government/military does until it stops,” as if it’s a valid argument for attacks on civilians. But if the commenter, whom I’ve also defended elsewhere, agrees with me that the bombings of Nagasaki and Hiroshima were wrong, as I suspect she does, then it is wrong for terrorists to bomb us here. That the victims were citizens of Imperial Japan made them no more legitimate targets than you or I are simply because we are U.S. citizens. In the same vein, the attack on the World Trade Center was no more legitimate than the U.S. and Israel’s punishment of Gazan citizens for voting the wrong way in a free election. Both represent an illegitimate and immoral use of force.

So back to the original topic of the royal wedding: just because the spectators of the royal wedding are citizens of the country, or merely residents, or merely tourists, or merely bystanders does not make them a legitimate target. And, as hinted in the previous sentence, attacks on civilian populations do not even assure the attacker that those targeted are only nationals of that country, as there could very easily be non-associated agents within the same population. But even if we could assume it was only nationals within the civilian population being targeted, is nationality ever a legitimate basis for attack? I suspect the commenter who says we are all party to our government’s crimes also believes that other discriminations based on nationality are wrong. So if I asked her whether it’s okay for us to make certain nationalities pay more in taxes, or to put certain nationalities in internment camps, or maybe even to toss certain nationalities into furnaces (because of the crimes their nations committed, of course), I’m confident she’d say no. Yet there is such a disconnect that she sees nothing wrong in the idea that innocent civilians may be subjected to terror attacks because of what their government has done.

And that brings me to the final point, which I’ve discussed throughout this blog: even to the extent that I do live in a “democracy,” my influence on policy is basically near zero. Democracy is mostly nominal and is defined in procedural terms: I pull a lever every four years and keep quiet and to myself in the time in between. Does that make me responsible to some extent? Maybe one could argue so. But it certainly does not make me a legitimate target for attacks, nor does it make Dr. Slisli one, nor the aforementioned commenter—neither of whom, I’m sure, is ready to admit they are vile war criminals deserving death.

I understand the importance of criticizing one’s own crimes. Again, to the extent that I do live in a democracy and free society, I can make some effort to address them. I take seriously Dr. Slisli’s argument that, “If you want to talk universalism, then you should make the aggressor stop aggression FIRST …” Those who have read my blog know well my critique of state crimes, particularly those of the U.S. That has always been my focus. A dishonest person is one who criticizes the crimes of others but does not reflect on his own. But that does not make the crimes of others any less of a crime. This is a moral truism we should not easily let escape from our minds.

The answer to this question requires some careful examination that goes beyond the platitudes we are supposed to take as self-evident. What we’re constantly told is that Social Security is in shambles. It’s bankrupt. The elderly on Social Security are outpacing the workers who contribute to it, and we’re headed for a crisis very soon. Even King Banaian, professor and chairman of the economics department at SCSU, says we suffer from “cognitive dissonance”; it’s “part of the angst that grips” us, though none of us “want to hear of big changes.” Ed Morrissey of the Hot Air blog says it was foolhardy to listen to those who “assured us that Social Security was safe for decades without reform.”

The reason for this maelstrom is that, as The New York Times reports, “the system will pay out more in benefits than it receives in payroll taxes” this year. The recession has claimed millions of jobs and, as a result, tax receipts are down. At the same time, the Baby Boomer generation is beginning to retire en masse and will be collecting its Social Security benefits. By 2016, “indefinite deficits” are expected. Naturally, we should be frightened.

Indeed, Social Security looks like it is in shambles. Save for some major reforms, which may very well include privatizing the system, the entire program appears to be heading for collapse. In fact, we’re probably better off getting rid of it entirely.

That much seems like common sense. If you collect less than you hand out, you’re eventually going to go broke, and the system cannot continue as is. This common sense is what drives the usual iterations about how Social Security is doomed. But, as with everything claimed to be common sense and self-evident, we should force ourselves to ask if it’s true. The assumption, of course, is that you don’t question it. It’s easy to parrot what the demagogues and pundits are saying on television and blogs; it requires some effort to look a bit beyond the rhetoric and platitudes.

Is it true that a fiscal disaster is on its way? As it happens, it’s not. In fact, if we bother to compare our Social Security system to the pension systems of other highly developed nations, just as the OECD has done, we find that the United States has one of the least generous pension systems for the elderly. Yet the fiscal hawks keep pushing on us “the great deficit scare,” though prominent economists such as Robert Eisner have been telling us for a long time now how absurd their claims are. Eisner’s book is over a decade old now, but we can learn some valuable lessons from it. Moreover, Dean Baker of the Center for Economic and Policy Research warns that the policies deficit hawks want to push through, which are not based on sound economics, would be much more devastating than any projected deficit.

It’s certainly true the American population is aging, and faster than the workforce is growing (or will be soon). The technical literature in economics refers to this as the dependency ratio. It tells us the number of dependent people (children under the age of 15 and adults aged 65 and over) for every 100 working-age people (people aged 15 to 64). The United States does not have the largest dependency ratio—far from it, in fact. And when we actually bother to look, the dependency ratio is not currently the highest it’s ever been (nor will it be for a long time). That peak was around 1965. There was a problem in the 1960s, a more significant problem than we face today, back when real GDP was almost a quarter of what it is today (i.e., when we were much poorer).
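To make the definition concrete, the ratio is just the number of dependents (young plus old) per 100 working-age people. A minimal sketch in Python, using made-up population counts purely for illustration (these are not actual census figures):

```python
def dependency_ratio(under_15, working_age, over_64):
    """Dependents (young + old) per 100 working-age people (aged 15 to 64)."""
    return 100 * (under_15 + over_64) / working_age

# Hypothetical population counts in millions, for illustration only.
ratio = dependency_ratio(under_15=61, working_age=200, over_64=40)
print(round(ratio, 1))  # 50.5 dependents per 100 working-age people
```

Computed this way for real census data, the ratio peaked around 1965 and is lower today, which is the point being made above.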

What did they do about it? Did they say the rights to a decent life in a highly developed nation simply “are not natural rights of the people,” and therefore we should just stop helping the young and the elderly find a more decent life? Actually, that’s not what they did. They increased expenditures. That’s how they dealt with the unprecedented dependency ratio, one we won’t come close to experiencing for a long time. The solution to the current “crisis” is the same. You increase expenditures to ensure disadvantaged people can still live a life that isn’t marred by poverty, sickness, and starvation—so that people’s basic needs are met. There’s a consensus in every rich and developed nation that safety nets are a society’s moral obligation. In fact, the world came together and agreed on the Universal Declaration of Human Rights, which affirms these rights, calling them “indispensable for [a person’s] dignity and the free development of his personality.”

When we actually look at the published literature, there is almost unanimous agreement that there is no “crisis,” that the dangers of an aging society are being wildly overblown (it is argued, in fact, that an aging society is beneficial), and that the problems that do lie ahead are quite manageable (in the same way the bigger problems of the 1960s were managed). What’s pointed out is that any fiscal problem that might possibly arise is easily addressed. For example, the Social Security Board of Trustees reports that future problems (there is no problem currently) could be remedied with a simple increase in the payroll tax. The estimated 75-year actuarial deficit for OASDI is just 2% of taxable payroll (so you increase the rate from something like 14% to 16%). The OECD also came out with a major report on easy solutions for any possible future problem with the pension system, none of which included abandoning the pension system. One reason is that it’s recognized that we have a moral obligation, and that there is in fact something that separates us from primitive animals that might simply “let nature take its course” (one of the more repugnant euphemisms I’ve heard).
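The arithmetic behind that “simple increase” can be spelled out. A short sketch, using a hypothetical taxable-payroll figure (not the actual SSA number), of how a deficit equal to 2% of taxable payroll translates into a rate increase from roughly 14% to 16%:

```python
# Hypothetical taxable payroll, in billions of dollars (illustration only).
taxable_payroll = 5_000.0

actuarial_deficit_share = 0.02   # deficit as a share of taxable payroll
current_rate = 0.14              # rough combined payroll tax rate from the text

# Revenue the higher rate must raise each year, and the new rate itself.
extra_revenue_needed = taxable_payroll * actuarial_deficit_share
new_rate = round(current_rate + actuarial_deficit_share, 4)

print(round(extra_revenue_needed, 1))  # billions per year
print(new_rate)
```

The deficit scales with payroll, so closing it requires only adding the deficit share to the rate, regardless of the payroll figure assumed.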

So the solution, then, is quite simple. We don’t need to get rid of Social Security. Nor is there a need for “big changes” or major reform.

I’m in international economics this semester with Professor Ming Lo. The class is very interesting and Dr. Lo is a great professor. The topic of child labor came up in class as we were discussing globalization. Most people today agree that child labor is unethical. The question becomes, how do we stop it?

One response has been to simply outlaw it. For example, in 1938, President Franklin Roosevelt signed the Fair Labor Standards Act in an attempt to curb child labor and protect children from the horrors of industrialization, which had brought with it brutal, and often fatal, working conditions. This had an effect in domestic markets, but it did not stop similar abuses of children in foreign markets. This is why Senator Harkin (D-IA) introduced the Child Labor Deterrence Act in 1992 and again in several subsequent years. The bill would “prohibit the importation of products that have been produced by child labor, and included civil and criminal penalties for violators.” Well, this had an effect. According to Jagdish Bhagwati, University Professor of economics at Columbia University and author of the 2004 book In Defense of Globalization, garment employers in Bangladesh laid off an estimated 50,000 child workers, fearing passage of the bill. We don’t know what happened to these children, but it is believed that they moved to the underground economy. That is to say, they found worse jobs in worse conditions, in unregistered garment factories, for example. In at least some cases, this may have meant child prostitution and being sold into the sex trade. Very few people would call this a positive result.

So how do we stop child labor if we agree that it ought to be stopped? Clearly, banning imported products made with child labor will likely not eliminate child labor, but rather make it more concealed and even more dangerous and exploitative than it was before. Not doing anything doesn’t seem to be the solution either; child labor has persisted everywhere until actions were undertaken to deal with it. Dr. Bhagwati suggests in his book that we label products that are made by child laborers. In this way, consumers can decide whether or not to buy the product. Although I agree it is a good idea to label products in this way (it increases consumer information), there are some problems. For one, many consumers still purchase goods even when they are aware of the negative aspects associated with them. People continued to buy Nike products, for example, even after it was exposed that many of them were produced in sweatshops under unethical working conditions. Sometimes the benefit we receive from purchasing a product outweighs any negative thoughts we have about the ethical standards of its production. That is, even if we agree that what we’re buying was produced unethically, we are still inclined to purchase it. Second, even if demand for products created with child labor does decrease because of increased awareness, the effect won’t be much different from prohibiting the import of these products. Children will be forced into other sectors, including underground markets that help conceal the true abuses of these children. While labeling may help us feel better, it doesn’t do much in the way of ending the exploitation of children. There does not seem to be any clear and easy solution to this problem, and I certainly don’t have the answer.
I do believe, however, that a principal component of any solution must address the underlying causes that drive parents and their children to see child labor as their best available option. In other words, we need to tackle the issue of world poverty and the social conditions in developing countries that lead to child labor. Decreasing our demand for these products is a step in the right direction, but clearly not enough to end this blight on human affairs.

Recently, the Supreme Court ruled in Citizens United v. Federal Election Commission that corporations (and labor unions) can spend unlimited amounts of their money on elections. Essentially, the Supreme Court ruled that corporations can run campaigns. Many have lauded the decision as a great defense of First Amendment rights.

Is it? “Freedom is awaking from its coma today,” declares conservative Rush Limbaugh. Dr. Spagnoli, writing on his blog, states, “there’s no reason to deny corporations [free speech].” This is because “free speech [is a human right],” he says. I agree with Dr. Spagnoli that free speech is a human right. But are corporations humans?

As it happens, corporations are not people. They are social constructs, entities created to carry out specific functions. However, as I discussed in an earlier blog post, Are corporations individuals?, corporations slowly became considered “persons” through a series of judicial rulings. There is no law that says corporations are humans. It’s nowhere in the Constitution. The Fourteenth Amendment was passed after the Civil War to give rights to people, specifically the newly freed slaves. It declared, “No State shall … deprive any person of life, liberty, or property, without due process of law.” It affirmed the rights of people. It was there to protect blacks from the evils they had endured under the brutal regime of slavery that had oppressed them for centuries.

Well, corporate lawyers were very savvy, and they began to say, “look, corporations are persons.” Corporations deserve the protection that was meant for freed slaves. In fact, when you look at the history of it, it’s very perverse. According to work done by Doug Hammerstrom, of the 150 cases involving the Fourteenth Amendment heard by the Supreme Court up to Plessy v. Ferguson, only 15 involved blacks. The other 135 were brought by corporations. This is the exact opposite of what we would expect. Yet through a series of activist decisions by judges, decisions with no basis in law, corporations gained personhood. Richard Grossman proclaims, “600,000 people were killed to get rights for people, and then with strokes of the pen over the next 30 years, judges applied those rights to capital and property, while stripping them from people.”

So now they can say corporations deserve the rights of flesh-and-blood persons, like the right to free speech; the ability to sue others; the right to “life, liberty, or property”; the right to own other businesses; the right to run campaigns; and so on. But there’s nothing inherent to a corporation that says it’s a person deserving the rights of flesh-and-blood people. That has only come about through very perverse judicial activism (e.g., Santa Clara County v. Southern Pacific Railroad). Moreover, there’s nothing in economic theory that says corporations ought to be treated as persons. That corporations should run campaigns has nothing to do with capitalism. There’s nothing about efficiency that says corporations should be allowed to do this. In a free and competitive market, it wouldn’t happen.

Anyone who argues that corporations should be treated as persons and have the same rights would also have to accept that corporations should be allowed to run for office, hold office, vote in elections, and so on. But no one agrees with that, and for obvious reasons. Moreover, Dr. Spagnoli does not say that only corporations should have the rights of persons. He also says “corporations, trade unions etc.” should not be denied the right to free speech. Well, what does “etc.” constitute? If a corporation is a person, why not a sports team? Can a townhome association be considered a person under the Fourteenth Amendment? Why not?

What happened before corporations were granted the rights of persons? They were chartered by the state to carry out some function meant to serve the public good. They had a specific charter, their shareholders were accountable, they had limited rights, they were regulated, and so on. That they should be running campaigns was completely unfathomable, particularly to the Founding Fathers, who were very wary of corporate power. Within this framework, corporations had moral obligations to the communities they served. When judges granted corporations personhood, however, the moral obligations we ascribe to flesh-and-blood persons were not ascribed to corporations. The only moral obligation and social responsibility corporations have, according to people like Milton Friedman and Ayn Rand, is to serve their own interests: to maximize profits. These are not the same moral obligations we think flesh-and-blood people have. Most decent people, ignoring extreme ethical egoists, believe we ought to consider what happens to other people, that we have an obligation not to harm others, that we should not rape the environment, that we should not ignore grave injustices, that we should treat flesh-and-blood people as ends rather than means, and so on. Even those who support corporate personhood do not ascribe these moral obligations to corporations. These are very special types of “persons” indeed.

Should people have the right to free speech in a democracy? Yes. Are corporations people? No.

One of the problems that ideologues of any persuasion run into is the problem of democracy. What do I mean by “the problem of democracy”? I mean that the democratic majority often does not adhere or conform to the ideology that a person or group holds. This can be a problem for the ideologue if he or she professes to be a democrat (a supporter of democracy). So, for example, the libertarian may decry the government’s role in society, despite the democratic majority wanting social programs or government regulation. Thus, any claim that we should wipe out social spending is inherently anti-democratic in this sense. My previous post on government involvement touches on this issue. Of course, ideologues can bypass this “problem” if they do not profess to be democrats. Instead, they can simply implement the policies of their ideology, no matter how much the public is opposed. That is, they become authoritarians. For the libertarian or the anarchist, this is inherently paradoxical. We cannot claim to be libertarians and authoritarians at the same time—the ideas are necessarily opposed to each other. It is not possible to authoritatively implement our policies in the name of libertarianism, for example. That isn’t to say no one has tried; Augusto Pinochet, in his brutal dictatorship over Chile, enacted free-market reforms in the name of “liberation.” We know that’s hypocritical, and we understand the perversity in his understanding of “liberty.” There, “liberty” meant liberty for the corporation, not for the people. Thus, the ideas of libertarianism and anti-democratic measures are incompatible.

How can the ideologue cope with “the problem of democracy”? How can we accept certain principles that the majority rejects, yet still call ourselves “champions of democracy”? I have two suggestions, and others are welcome. First, be what could be called a philosophical ideologue (cf. philosophical anarchism). That is to say, you keep your beliefs in whatever ideology you choose, but you accept the majority’s opinion as the one that should be adhered to. So, for example, if you’re against social spending but the majority supports it, you continue to believe that social spending is wrong but accept the majority’s choice as the will of the people. For some, this might seem like an unsatisfying solution, and I accept that. It does seem contradictory to accept the choice and at the same time not accept it. It would seem as if we are not truly adhering to our ideologies (a common argument against anarchists who do not support the overthrow of the state is that they’re not real anarchists). Do we or do we not accept that argument? The second suggestion is that we teach or advocate our ideology in ways that are not anti-democratic. We explain our philosophies (non-coercively) to others in the hope that they will accept them. In this way, we can influence the outcome of the democratic choice without resorting to authoritarianism.

I accept that others may not accept this. They may say we have to cling to our ideologies, no matter what. We must reject the democratic majority. They may not say it in this way, but it is what they’re saying. I reject this argument and find it to be dangerous. Over ideology, I am a democrat.

P.S. This is a further exploration of a concept that Dr. Spagnoli explores on his blog in a post titled “What is Democracy?” In it, he explains, “Napoleon Bonaparte propelled his armies across Europe on behalf of the universal principles of liberty, equality and fraternity . . . Napoleon’s armies occupied Europe because they wanted to export French principles and French civilization. . . . France was the advance guard of the struggle of humanity for freedom and against old-style authoritarianism.” The parallels to contemporary foreign affairs are obvious enough. Claims Dr. Spagnoli, “Attacking, conquering and occupying other countries, even with the purpose of liberating these countries from oppression and archaic authoritarian forms of government, seems to be highly illogical and self-contradictory. It’s incompatible with the very principles of democracy (democracy is self-determination).” The question being raised is, “are we allowed to impose or enforce democracy in an authoritarian way?” Likewise, I raise the question if libertarians are allowed to impose or enforce libertarianism in an authoritarian way. I say no.

Global warming is a crime against that which does not exist, namely future people. Of course, it is also a crime against people who do exist in the present, e.g. the poor in Bolivia whose glacial water sources are quickly disappearing.

This is a point I just thought about in a discussion about global warming on some other forum. It’s worth mentioning that global warming (read: anthropogenic climate change) is a classic example of externalities. Neoclassical economics tells us that when people (which includes corporations) don’t have to pay the price for the consequences of their actions, there is market failure. Resources are not being allocated efficiently—one reason why any claim about markets being efficient should be taken with a grain of salt. For example, producers of pollution do not take into consideration the harmful effects of pollution—i.e. the true cost of pollution is ignored—and so pollution is overproduced (because the price does not reflect the cost). However, not only is global warming a classic example of market failure, it is the “greatest market failure” ever, in the words of Nicholas Stern:

The science tells us that GHG emissions are an externality; in other words, our emissions affect the lives of others. When people do not pay for the consequences of their actions we have market failure. This is the greatest market failure the world has seen. It is an externality that goes beyond those of ordinary congestion or pollution, although many of the same economic principles apply for its analysis.

This externality is different in 4 key ways that shape the whole policy story of a rational response. It is: global; long term; involves risks and uncertainties; and potentially involves major and irreversible change.

As it happens, there is a solution to the problem of prices that do not reflect true costs: make the price reflect the cost. In this case, you increase the price. That’s what some people have called the carbon tax (i.e. a Pigouvian tax). The externality goes away and resources are allocated more efficiently. Now, we know the cost of our pollution and activity on this planet is enormous. It is several orders of magnitude larger than any cost associated with mitigating it, in fact. The rational human being should therefore be opting to mitigate it. The real question becomes whether or not we’re rational.
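To make the mechanics concrete, here is a toy sketch in Python. All the numbers are hypothetical, chosen purely for illustration: without the tax, the polluter expands output until its private marginal benefit is exhausted; with a tax equal to the external cost, it stops at the point where the benefit of one more unit covers the harm that unit imposes on others.

```python
# Toy Pigouvian-tax sketch (all figures hypothetical).
# Marginal private benefit of the q-th unit of output: 100 - 10*q
# External (pollution) cost imposed on others per unit: 30
EXTERNAL_COST = 30

def marginal_benefit(q):
    return 100 - 10 * q

def chosen_output(tax):
    """Producer adds units while the marginal benefit exceeds its private cost (the tax)."""
    q = 0
    while marginal_benefit(q + 1) > tax:
        q += 1
    return q

untaxed = chosen_output(tax=0)            # the external cost is simply ignored
taxed = chosen_output(tax=EXTERNAL_COST)  # the price now reflects the true cost

print(untaxed, taxed)  # → 9 6
```

With these invented numbers, output falls from nine units to six once the producer must pay the external cost, and the units no longer produced are exactly those whose benefit fell short of the harm they caused.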

But let us think about the four key ways that Stern says global warming is distinct from other typical externalities. It’s global, long-term, risky and involves uncertainties, and is irreversible (within reasonable amounts of time, that is). What this means is that we’re condemning future populations of humans to live with the adverse effects of our actions. When we think about it for just a moment or two, we quickly realize that this is fundamentally wrong. It is morally wrong. Yet, many of these people do not even exist yet. They haven’t been born. At the same time, when they do come into existence, they will have to live in a much worse environment because of the actions we are committing in the present. It is in this sense that we are committing a crime against that which does not yet exist (namely future generations).

This is very peculiar indeed. The non-harm and non-aggression principles of libertarianism tell us not to harm other people. But they say nothing of people who do not exist (in that they have yet to exist). In a sense, I think many people in the present feel undisturbed about the effects of human activity on future generations because it’s a rather intangible idea, somewhat abstract. It’s hard to connect with. If we are able to so brazenly ignore the plight of suffering Africans in the present, surely it is almost impossible for us to feel anything for generations of humans who are yet to exist. The effects of what goes on in our neighborhoods, our cities, our states, or even our nation are much more immediate than that which goes on halfway around the world. So I think there is a problem of immediacy here. What happens to future generations is not immediate to us. This allows us to do what we do without so much as batting an eyelid. Again, though, this is because we aren’t the ones paying the costs. Future generations will have to pay for it, and they will pay greatly. This is an externality. We can fix it by making the price of our actions reflect the true cost, and in this way we will also make the problems associated with our actions more immediate to us.

A couple of months ago, I wrote a piece supporting the sale and purchase of organs. I argued that allowing a market for organs would get rid of the lack of supply of organs that are desperately needed to save the lives of those in need of an organ transplant (the literature tends to agree, as does empirical evidence from Iran). Every year, tens of thousands of people (about 100,000 in the U.S. alone) in need of new organs must wait because there is no adequate supply of healthy organs available to them, and thousands more die because of this shortage. Every year the shortage continues to grow (demand is increasing faster than supply). Healthy organs abound, but the shortage arises from the fact that it is a crime to sell or purchase organs. The altruistic donation of organs, while undoubtedly saving many lives, simply does not come close to supplying the necessary amount of organs. The logical conclusion seems to be that there ought to be a market for organs.

Some people, however, detest this notion. In a recent post on his blog, Dr. Filip Spagnoli makes several nuanced arguments against free organ trade. The main arguments he makes, insofar as I understand them, are that the poor, qua the poor, will be desperate for money and therefore forced to sell their organs; the wealthy, qua the wealthy, will be able to disproportionately afford the organs and therefore be unfairly benefited; organ transplants are dangerous; solid-organ donation is nonrenewable and therefore decidedly and relevantly different from blood donation; organ trade is tantamount to commodification of the body; and an opt-out system is the best solution to our problem.

King Banaian, a professor and chairman of the economics department at SCSU, made an argument similar to the first listed above, back in 2005. The poor are desperate, goes the argument, and so it’s unfair that they would end up selling organs (i.e. they lack informed consent). The basic underlying premise of this argument is that the poor are irrational, incapable of thinking for themselves. The government, therefore, knows what’s better for them than they themselves do. That is, they have to be protected from themselves (because they’re poor). I reject that argument. If the premise is true for the sale of organs, then it would be equally true of other economic decisions they make, including the sale of their labor or their purchase of goods. That doesn’t seem to be the case and, if it is, we certainly don’t completely restrict their involvement in the market (and even more certainly not the involvement of others).

So the argument seems to be that, if it weren’t for the money incentive being offered, the poor wouldn’t choose to donate their organs. But that’s true of basically all people. Economic agents would not do many things if it were not for the money incentive. Dell wouldn’t sell me computers if they weren’t being compensated for it. Is that exploitation (of their want for money)? I don’t think so. That’s called trade, and both the seller and buyer are made better off by it. So what the detractors have failed to explain is how the poor are made better off by ensuring they are not able to receive money that would help alleviate their predicament.

The second argument is that the wealthy, because they have more wealth, “will be able to benefit disproportionately from the market because prices will be high …” Dr. Spagnoli says this is true because society is aging, but admits prices will fall because of competition among suppliers, particularly from poor places like Africa. Still, the fact that the wealthy can disproportionately afford things does not suggest to me there ought not to be trade. So long as the distribution of wealth is unequal (which seems to always be the case), some people will be able to afford more than others. That doesn’t mean we outlaw trade. (The wealthy can afford more food and labor than the poor; do we therefore outlaw the sale of food or labor?) That seems to be punishing the rich merely for the fact that they’re rich. And in this case, the punishment is death. Detractors of organ trade fail to explain how outlawing trade benefits the less-than-wealthy who are in need of an organ. It seems to me that as the wealthy begin to demand fewer organs, prices will begin to fall (allowing even the poor to afford to buy organs). By outlawing organ trade, detractors are (at best) disallowing some people from buying lifesaving organs merely because of their wealth.

The next argument is that organ transplants are dangerous, so they should not be allowed. It’s true that all surgery, even the most benign, carries risk (including death). That reason alone is not enough to outlaw the transplantation of organs. If it’s dangerous to sell organs for money, then it’s equally dangerous to donate organs altruistically. But we allow the altruistic kind. Clearly, danger is not the underlying factor here. Moreover, the dangers are overplayed, I think. The death rate for liver transplantation is 0% in Japan and 0.3% in the United States. These rates will continue to decline as advances in medicine continue and as surgeons progress along the learning curve. Kidney transplantation is even safer (people who donate kidneys live longer than those who don’t). Remember, it was Joseph Murray who won the Nobel Prize in Medicine for performing the first successful kidney allograft in 1954. The patient in that case is still alive. While there is serious risk in many activities, including surgery (or timber cutting or smoking, which are both legal), that’s not enough to outlaw the practice.

Another argument is that blood (or sperm perhaps) is okay to sell because it’s renewable. The only defense here is that the distinction between renewable and nonrenewable is “relevant,” but without any further explanation. How is the distinction important? The fact that some goods are scarce doesn’t tell me a lot (other than that their price is going to be higher). Furthermore, modern liver transplantation in live patients consists of removing only a portion of the liver from the donor, which will regenerate and return to full functionality within a matter of a few weeks. Does the detractor now accept the trade of livers? (Similarly, the sale of bone marrow, which is just as renewable as blood, is a serious crime in the United States.)

A very common argument that also comes up is that the trade of organs is tantamount to the commodification of the body. I say, So what? It saves the lives of real human beings. That’s what matters. “Commodification is dehumanization,” they say. I counter that there is nothing more dehumanizing than simply letting people die, which is precisely what you do when you outlaw the trade of organs. So what is worse: that people might sell parts of their body or that hundreds of thousands of people are unable to get organs that could save their lives? Supporters of this argument also fail to explain how blood, bone marrow, sperm, ovarian eggs, or bearing children (all of which have markets) do not constitute commodification. Or, if they do, do they believe these practices ought to be outlawed? It’s also worth mentioning that even in altruistic donations, people are still profiting from it: the doctor, the nurses, the hospital, and so on. That is, everyone except the donor.

They might respond, We’re not letting them die because we have a solution, which is the opt-out system. Opt-out, also called presumed consent, begins with the assumption that all people wish to donate their organs upon death. If you do not consent, you must specifically tell this to the state. (My contention is that this is akin to saying the state owns your body by assumption.) This is opposed to the opt-in system, like in the United States, where the assumption is that people do not wish to donate their organs upon death, unless otherwise specified. In that way, the hope of opt-out supporters is that people are not altruistic, but rather lazy or ignorant. I find that deeply unethical, but Dr. Spagnoli points to several studies from his country, Belgium, which show opt-out is providing an ample amount of organs (that is, people seem to be lazy or ignorant). (The rate of “donation” is marginally higher than that of the U.S.) A lot of the scholarly literature, however, does indeed seem to suggest opt-out provides higher rates of organ “donations” than opt-in. If this were necessarily true, though, Sweden and Israel (opt-out states) would not have such low donation rates. A study published in 2005 further shows that opt-out systems do not guarantee higher donation rates. The authors find that, when correcting for mortality rates, the apparent efficacy of opt-out disappears. Perhaps better than either opt-in or opt-out would be mandated choice: all competent adults must choose whether or not they wish to donate their organs. Family decisions (which negatively affect the efficacy of both opt-in and opt-out) must not override the individual’s choice.

Still, it seems giving enough of an incentive to potential donors is the key to finally getting rid of organ shortages, which cause unnecessary deaths each year. As I explained earlier, Iran has been successful in accomplishing this. If detractors are unwavering, however, then perhaps markets are not necessarily the correct solution. Instead, I might propose a system wherein the government pays for the organs and then distributes these as equitably as possible. This removes a lot of the fears detractors have of unfair market allocations. Others, unfortunately, might reflexively bash the idea because it relies on (*moan*) the government. But I ask these people the same question as before: What is worse—that the government might be involved in the trade of organs or that thousands must die needlessly each year because they are unable to procure necessary organs?

As I live, saith the Lord GOD, I have no pleasure in the death of the wicked; but that the wicked turn from his way and live. Ezekiel 33:11

I remember very clearly the state of terror that gripped the East Coast during the fall of 2002. I was only just entering high school then, but the atrocities taking place along the Eastern Seaboard were jarring to me. The atrocities I’m speaking about are the murders committed by John Allen Muhammad and Lee Boyd Malvo, who shot 13 people and killed 10 of them. Last night, Muhammad, dubbed the “D.C. Sniper,” was put to death via lethal injection in the state of Virginia for the heinous crimes he committed.

The question I want to focus on is whether the death penalty is ever justified, even for decidedly evil people like Muhammad. In 2008, the United States put to death 37 people, the fifth most just after North Korea, Saudi Arabia, Iran, and China. That’s quite the distinguished neighborhood we’re in, to say the least. Each of these nations (including the U.S.) is known for their human rights violations, which should not come as a surprise. The U.S. is surpassed only by Pakistan for the most prisoners on death row awaiting their “imminent deaths” that can take up to 20 years to come. All of the West, with the exception of the U.S., has outlawed capital punishment. The United States is the only developed nation, outside of Japan and Singapore, to still execute people. In the EU, Article 2 of the Charter of Fundamental Rights of the European Union outlaws the barbaric practice.

The conservative right in this country has always perplexed me with its unwavering commitment to the sanctity of life when it comes to the unborn while, at the same time, almost reflexively supporting the state’s right to murder human beings. This is evidenced by the fact that an overwhelming majority (95% last year) of executions in the United States occur in the South, with Texas being the leader.

Some may state that my use of the word “murder” here is unfair, biased, etc. However, let’s consider the conditions under which people are being killed by the state: the person is incapacitated, restrained, and no longer a threat. Let’s say someone invades your house, but you are able to capture him and tie him up. You are not then allowed to shoot the person once you have him under your control. That would be murder. Specifically, murder is the unjustified “killing of another human being with intent.” Therefore, if capital punishment is unjustified, it is unambiguously murder; what remains is to explain why it is unjustified.

Now that we’ve seen that we are isolated within the West on this issue, that conservatives are the biggest supporters and perpetrators of capital punishment, and that the practice constitutes murder, let’s try to evaluate some of the claims. One of the main justifications given for capital punishment is the so-called “deterrent effect.” If the state puts to death a murderer, this disincentivizes (deters) would-be murderers because they are fearful of being put to death by the state if they are caught. Of course, this has been the justification for all sorts of draconian practices. “If we want to stop a practice, we can murder whoever does it and it will stop.” (If we want to stop robbery, why don’t we just impale robbers, as Draco did?) As one might expect, this theory falls flat on its face as soon as any empirical data are examined. The “deterrence effect” is a myth. A review of the scholarly literature by Bailey and Peterson published in the 1997 book “The Death Penalty in America” (Chapter 9) emphatically affirms that there exists no “deterrent effect.” Just a cursory survey shows that murder rates are higher in states that have the death penalty than in those that do not. That’s the correlation. What’s the causation? Well, there are obviously many factors, but one interesting one is what’s called the “brutalization effect.” This hypothesis says that when the state kills human beings, it sends the opposite of the message it intended: that the deliberate killing of human beings is acceptable and there exists no inherent sanctity in life. In effect, the death penalty brutalizes society and homicide rates go up. That is to say, executions dehumanize people and send the message that human beings are mere instruments. Interrupted time-series analyses from Oklahoma have found a brutalization effect for homicides of strangers. Older studies find a similar effect in New York and Arizona.

Supporters also claim executions create a “specific deterrent,” i.e. those put to death are deterred from committing future crimes because they are dead. We could therefore claim that if we wanted to stop a robber from robbing again, we could kill him. Does anyone accept that argument? No one does. Moreover, life in prison for murderers is also a “specific deterrent” (as well as a general one). There is no reason to take the unnecessary step of execution.

But what if executions deterred? It is the utilitarian argument that killing human beings is okay because it may save the lives of would-be victims from would-be murderers. Since it is utilitarian, it is concerned with the consequences of actions, not the principles on which they are based. It follows from this theory, if we assume executions deter future crime, that the execution of innocent prisoners is in fact moral because it deterred future crimes. That is one of the many “repugnant conclusions” that practitioners of utilitarianism must accept. Further, if we are truly committed to deterring future crimes, we should be impaling or burning prisoners at the stake, which would surely deter would-be criminals. But no one accepts this, because it is an elementary moral principle that humans are not mere tools for creating some desired end. We accept that there are things we simply do not tolerate as moral beings, even against the most reprehensible of people.

It might be argued that it simply isn’t the case that innocent people would be executed. This is another interesting argument coming from the conservative right; for all their contempt and distrust of government, they almost reflexively assume the judiciary gets it right and the criminal justice system is inherently efficient and just. The sobering fact is that, “since 1973, 139 people in 26 states have been released from death row with evidence of their innocence” (source). This fact alone, that the criminal justice system in the United States is prone to errors of enormous consequence, demolishes any conceived justification for capital punishment (if we agree putting innocent people to death is wrong). Unfortunately, once prisoners are executed, there is no longer any effort to examine their guilt. The aforementioned source mentions at least eight executed prisoners whose guilt has been seriously doubted. It also notes two people who have been pardoned or exonerated after their execution.

Furthermore, the use of capital punishment is discriminatory. The race of the victim murdered is the greatest predictor of whether the person who committed the murder will be executed. According to a 2003 report by Amnesty International that explores the role of race in the judicial system, “Blacks and whites were the victims of these murders in almost equal numbers. Yet 80 per cent of the people executed since 1977 were convicted of murders involving white victims.” The report also finds that minorities are underrepresented in juries, which may skew the results of convictions and sentences. Explains Justice Scalia, “the unconscious operation of irrational sympathies and antipathies, including racial, upon jury decisions and (hence) prosecutorial decisions is real, acknowledged in the decisions of this court, and ineradicable.” Justice Scalia neglects to mention, however, that the death penalty is not ineradicable. In the words of Senator Feingold, “We simply cannot say we live in a country that offers equal justice to all Americans when racial disparities plague the system by which our society imposes the ultimate punishment.”

In the United States, this “ultimate punishment” is most often manifested in lethal injections. In their dubious quest to find more “humane” and efficient ways to kill human beings, practitioners of executions have settled on a lethal cocktail of drugs meant to swiftly and painlessly kill its victim. The effectiveness of this practice has been highly criticized. It is argued that the anesthetic used, a short-acting barbiturate, quickly wears off and leads to a very painful death for the prisoner. The prisoner, however, is unable to communicate this pain because the pancuronium bromide, a paralytic muscle relaxant, has paralyzed him. The attempts to make executions look like medical procedures have had, in Dr. Groner’s words, the purpose of making them “socially more acceptable.” A 2007 study found that current procedures “may not reliably effect death through the mechanisms intended” and that prisoners may in fact be fully aware and suffering painful asphyxiation rather than the intended cardiac arrest. To wit, a prisoner named Angel Diaz who was put to death in 2006 required two doses of the lethal cocktail because the first was insufficient to kill the man within 35 minutes. A list of further botched executions can be found here.

Additionally, from a purely economic standpoint, executions are more costly than life in prison. Eliminating the death penalty could save the public hundreds of millions of dollars, which could be better spent on public safety and efforts that actually succeed in reducing murder and other crime. Saving money should not be the sole purpose of abolishing the death penalty, but if we accept the argument I have made above then we get the added benefit of increased efficiency and a better utilization of scarce resources.

A natural question that remains is, Why does capital punishment still exist in the United States? There may well be a rational explanation, which is that “The death penalty has served the political class at great expense to the greater society.” Politicians benefit from supporting the death penalty because it helps them win elections. It will only be after the American public realizes, as it increasingly is, that the death penalty is inherently wrong that policymakers will stop clinging to the antiquated practice.

As we remember the moments of terror that shocked the nation seven years ago, let us reflect on the statement of those who protested the execution of even the most reprehensible kind of person: “We remember the victims, but not with more killing.”

Today I attended a presentation given by Paul Neiman, an Assistant Professor of Philosophy at SCSU. The topic of his presentation was international business ethics. He focused on providing an ethical framework for conducting international business from a social contract perspective, expanding on the work of John Rawls and his “A Theory of Justice.” The presentation is based on a paper he is currently in the process of writing.

The basic premises are that social contracts should incorporate common shared presumptions that are reasonable and generally acceptable. Based on this, Dr. Neiman argues there are four restrictions that seem reasonable and generally acceptable to place on free markets:

1. Contractors should not be forced into accepting or rejecting principles.
2. Contractors should not be willfully deceived in arguing for or against principles.
3. All contractors must have an equal right to propose or argue against principles.
4. All contractors should expect that the terms of the social contract will be enforced.

These are all well and good. They are reasonable and generally acceptable rules to impose. The problem is when Dr. Neiman ventured into what these rules would result in for two contractors, one representing a corporation and another representing a community, coming to negotiate a deal but who are completely unfamiliar with their constituents. That is, the business contractor does not know anything about the corporation he is representing other than that they seek profit maximization. Likewise, the community contractor does not know anything about the community she represents except that they care about their living standards, their culture and social norms, their environment, and so on. These two people are then supposed to negotiate a deal based on the four rules above, and we’re supposed to assume both have an equality of power (the community does not desperately need the corporation and the corporation does not desperately need the community).

Based on these rules, Dr. Neiman posits that the negotiators will come to expect that the corporation is obligated to pay a living wage, be fully responsible for any environmental damage it creates, respect cultural and social norms all the time (i.e. not just when it is profitable to do so), and so on. Clearly it seems the balance of power rests with the community negotiator, not with the corporation (and Dr. Neiman justifies this by saying it is the community that has the deal-breaking terms, as if the corporation has none of its own).

To be frank, the presentation made little economic sense to me, as someone who is minoring in the subject. That’s just my opinion. Let’s take the living wage obligation, for example. Dr. Neiman says the community won’t accept any corporation that won’t pay its population a living wage because that would decrease their wages. I assume that means they won’t take any salary below the average. Already something seems quite wrong with this ethical framework. What happens to, say, cashiers at a grocery store? They usually don’t get paid a “living wage” because they do not add at least that much revenue to the company. If the marginal revenue from an additional laborer does not at least equal the marginal cost of hiring that laborer, the laborer won’t be hired. That is, the company won’t hire someone at a cost that exceeds the benefit that hiring that person would bring. Otherwise they’re just losing money. So what does this mean when we say the corporation is obligated to pay a living wage? Well, it means this theoretical world doesn’t have any cashiers or any other job, for that matter, that would normally pay less than a “living wage.” (That’s the classical argument against minimum wages, a topic I’ve explored here [by far my most popular post for some reason]. The difference, really, is in the magnitude of the minimum. The hypothetical purpose of a minimum wage, as I see it, is to set wages at an equilibrium price that clears the market, essentially correcting for market failures that exist. A “living wage” minimum sets it way above this level and is based more on ethical rather than economic arguments.)
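The hiring rule above can be put in a few lines of Python. The revenue figures and wages here are invented purely for illustration, not drawn from any data: a profit-seeking firm takes on an extra worker only while the revenue that worker adds covers the wage, so a mandated wage above a worker’s marginal revenue eliminates that job rather than raising its pay.

```python
# Toy sketch of the marginal-productivity hiring rule (all figures hypothetical).
# Each additional cashier adds less revenue than the last (diminishing returns).
MARGINAL_REVENUE = [18, 14, 11, 9, 7]  # dollars/hour added by the 1st..5th hire

def workers_hired(wage):
    """Hire another worker only while the revenue they add covers their wage."""
    return sum(1 for mr in MARGINAL_REVENUE if mr >= wage)

market = workers_hired(wage=8)   # four candidate hires cover an $8/hour wage
floor = workers_hired(wage=15)   # at a $15/hour floor, only the first one does
print(market, floor)  # → 4 1
```

With these made-up numbers, raising the required wage from $8 to $15 doesn’t lift three workers to $15; it leaves them unhired at zero, which is exactly the unemployment objection raised in the next paragraph.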

So the result is actually very high unemployment because there is a lack of jobs. Jobs that cannot afford to pay a “living wage” won’t exist. People whose labor is not worth this living wage are out of luck. And I did ask Dr. Neiman about this problem. He essentially responded that he doesn’t buy the argument and that perhaps some people will just have to live unemployed to preserve the interests of the community, namely high wages. To me, this is a very astonishing ethical guideline he is proposing. What he is saying is that it is better to receive nothing rather than something. That it is better to live in poverty and unemployment than to receive at least some amount equal to the worth of your labor (if your labor is worth less than the “living wage”). Is that ethical? Further, having people earn zero rather than even a minimal amount decreases the average wage, which is what Dr. Neiman was originally against.

It seems I haven’t written anything about the Iraq War on this blog. I always thought I had, since it seems like an obvious topic. (I have, however, discussed the use of torture by President Bush as a result of this war.) I even have a post on the Afghanistan War. (Many consider the case for the Afghanistan War much stronger than the Iraq War. Indeed it is, but only marginally; both wars are fundamentally wrong.)

I could approach the Iraq War in the same way I approached the Afghanistan War. Namely, if something is wrong for others, then it is equally wrong for us. That is the idea of moral universalism, which every respectable moral theory has at its core. See my post on the Afghanistan War for more on moral universalism and its application to U.S. foreign policy. The justification for the Iraq War fails in the same regard. I can speak more on this if anyone is interested.

There is clearly a lot to say about the Iraq War and a lot can be said about why it was wrong. I cannot possibly cover all of these but I will try to cover some of what I feel are important points.

Let’s look at some of those justifications, many of which were offered in a discussion at the SCSU Scholars blog (updated link) regarding President Obama’s foreign policy. The first justification is that most of Congress and some of the international “coalition” (mainly the West) supported President Bush’s war of aggression against Iraq. But this is irrelevant to the question of whether the war was right or wrong. If 51% of Congress supported the genocide of a population, would that make it justifiable? Certainly not. The rightness of an action is not dictated by Congress (or a coalition, for that matter). Conservative critics often lambast Congress and the President for the laws they pass, but when it comes to war they automatically get it right? No, because, again, rightness is independent of congressional mandates.

Second, it should also be considered that the American public and the rest of the world were blatantly lied to by Bush and his administration. “Lie” implies knowing the truth and deliberately stating a falsehood contrary to it, and that is indeed what occurred in order to sell the war in Iraq. That’s virtually beyond doubt. The Downing Street memo, “the smoking gun,” clearly demonstrated that Bush wanted to depose Saddam Hussein on the grounds of WMDs and terrorist links but had to knowingly lie to the American public to do so (it should be clear to everyone that both of these are patent falsehoods). Add to that the Manning memo, the 2004 document leaks in the UK, the Bush-Aznar memo, the Niger uranium forgeries, and Ron Suskind’s pile of evidence that Bush and his administration planned the war in Iraq long before 2003 and fabricated evidence or misled the public to wage it. What you have is a clear case that the primary rationale given by Bush’s administration was contrived and deliberately used to mislead the American public and the rest of the world into getting on board with the hawkish agenda. So even when a majority of people agree with you, the point is moot when those people have been duped.

Another point used to justify the war is that no one from the Bush administration has been indicted or tried for their actions, which supposedly shows those actions were right. This is another nonsense argument. Even in the more generous case of O.J. Simpson, who was in fact tried, we do not say his killing of his wife was justified (and it’s clear he did it). (Also, morality is not dictated by law; I can do something morally wrong even if it’s not against the law.) The fact that Bush has not been tried says nothing about the rightness of his actions. What it does show is how hypocritical we are. This goes back to the idea of moral universalism: if a war of aggression is the “supreme crime” for others, then it is for Bush as well. That Bush has not been tried says a lot about who we are as a people, but little about the justification for the war in Iraq.

Perhaps the strongest justification the hawks have in defense of the Iraq War is that it resulted in the removal of Saddam Hussein, who we all agree was a dictator who committed terrible atrocities. (It should, however, be noted that he did so with full support from the U.S.) This is the argument Bush and his administration switched to when it became glaringly obvious to the world that the primary justifications given for the war were completely invalid. It is the utilitarian argument, which says the Iraq War was right because it saved more lives than it ended, got rid of a dictator, and so on. I personally believe utilitarianism is a shoddy moral theory, for reasons I’ve laid out in other posts on this blog, but it should be mentioned that even some utilitarians would disagree with this assessment. They may argue that overall utility actually decreased because of the war (or at least was not maximized), and I think they would be correct in saying so. Rule utilitarians might also claim that invasions and occupations such as these, as a general rule, do not maximize utility, and I think they would be correct as well.

But does the ousting of Saddam really make this war justifiable, and does it make it legal? The answer is no. The UN Charter is actually very specific: “The Security Council shall determine the existence of any threat to the peace, breach of the peace, or act of aggression, and shall make recommendations, or decide what measures shall be taken in accordance with Articles 41 and 42,” where “measures not involving the use of armed force” are to be preferred. The exception is Article 51, which allows for self-defense until the Security Council can respond. That is what the international community has agreed to, and it’s clear the U.S. war in Iraq fails to meet the requirement. In 2004, Kofi Annan, then the UN Secretary-General, declared, “From our point of view and the UN Charter point of view, [the war] was illegal.” Not only is this war blatantly illegal, it is considered the supreme war crime because it encompasses all the evil that follows from it: “to initiate a war of aggression…is not only an international crime; it is the supreme international crime, differing only from other war crimes in that it contains within itself the accumulated evil of the whole.”

Richard Perle, the former chairman of the Defense Policy Board Advisory Committee under Bush (and about as conservative as you can get), even admits the war was illegal but argues it was nonetheless justified. Indeed, it’s possible for morally right actions to exist even if they are contrary to written law. But was the war justified? If it is not justified even from the utilitarian perspective, then it’s certainly going to be hard to justify! (By the way, if it was right to invade and occupy Iraq, it could equally be said that it would be right for another country to invade and occupy the U.S.) Deontologically, people often use “just war theory” to determine whether a war is just, both in the criteria for entering into war and in how the war is conducted once entered. I think just war theory is a bit dubious, but even this theory makes it clear that the Iraq War was not just. Just war theory holds that nations have the right to defend against aggression, but in this case that right would belong to Iraq, not the U.S. The war in Iraq had absolutely nothing to do with self-defense (Iraq could not even defend itself).

The question to answer now is, “Where do we go from here?” I think it’s fairly clear. First, the U.S. has the obligation to withdraw from the country and pay massive reparations to the Iraqi people. That’s what it ought to do. Second, we also have the obligation to hold the guilty responsible for their crimes. Will that ever happen? I seriously doubt it, because we fail to meet even the minimal moral standard of admitting that what’s wrong for others is wrong for us too.