Does Being a Vegetarian Really Save
Animals?

Some people argue vegetarianism isn’t morally necessary
because a single meat purchase will not actually cause
more farm animals to be raised or slaughtered. Thus,
regardless of whether or not the production of meat
is inhumane to animals, someone who buys meat is doing
nothing wrong. This argument fails to show that meat
purchases are morally permissible, however, because
our choice to buy meat affects the expected number
of animals bred, raised, and slaughtered.

Given the size of modern animal agriculture, it seems
plausible to assume that a single meat purchase is
too insignificant, relative to the vast number of
other meat purchases, to be noticed by the manager
of a factory farm. If the manager cannot perceive
any increase in demand caused by a single meat purchase,
no additional animals will be raised or slaughtered,
and thus no harm will have been done to animals by
the purchase. In other words, it is claimed that most
meat purchases are “causally inefficacious.”

This may be true, but it is irrelevant to how we ought
to make moral decisions under uncertainty. When we
make a decision about how to act, we can never know
for certain all of the actual consequences that will
result from all our possible actions. We may, after
making a decision to act in a particular way, come
to know the actual consequences that resulted from
the one action we decided upon. However, this knowledge
is not helpful in making the original decision, since
it is not only reached after the fact, but also limited
to only one of the many possible actions we may have
had to choose from. Consequently, it is more reasonable
that we should make decisions, not on the basis of
actual consequences (which we can’t know for
certain), but on the basis of expected consequences – the
value of each consequence that might result from an action
multiplied by the probability of that consequence
occurring – that one might reasonably predict given the
available evidence. Since the expected consequences,
not actual consequences, can be known when making
decisions, only expected consequences can help ethical
individuals decide what course of action to take.

Acting on expected consequences can be understood
in problems of “contributory causation,”
where many people seem responsible for causing something
to happen. Jonathan Glover provides an example of
contributory causation called The 100 Bandits, where
100 bandits descend on a village that has 100 villagers,
and each villager has one bowl containing 100 baked
beans. Each bandit takes one bean from each bowl,
so that each bandit ends up with a bowl of 100 beans.
Now, no villager can perceive the difference made
by one bean being stolen from his bowl (either at
the moment or later, due to malnutrition). Thus none
of the bandits would seem to have individually harmed
any of the villagers, and so, it seems, no harm has
been done at all. Yet 100 villagers are without
lunch and hungry. So something is wrong.

Glover suggests we approach contributory problems
like The 100 Bandits by employing a “divisibility
principle” – in other words, a single
agent is causally responsible for the consequences
of a contributory result divided by the number of
contributing agents. In this case, the hunger of 100
lunch-less villagers is divided over 100 bandits.
Glover would thus say that each bandit is responsible
for the hunger of one lunch-less villager. If we accept
Glover’s divisibility principle, each bandit
ought not to steal 100 beans because he would then
be causally responsible for the disutility of one
lunch-less villager.

There may be a more compelling solution to contributory
problems such as this one, however, that does not
attempt to reconcile actual causal responsibility
with our intuitions about moral responsibility. For
in the case of the Bandits, it is not true that none
of the bandits is actually causally responsible for
harming the villagers. At the very least a handful
of the bandits are causally responsible for the villagers’
hunger – those bandits who complete threshold
units. While it is true that no villager can perceive
the difference made by one bean stolen from their
bowl, each can clearly perceive the difference made
by 100 beans stolen from their bowl. Thus there must
be some number of beans between one and 100 that is
the smallest number of beans a villager can perceive.
Call this number the threshold unit. Say, for instance,
the threshold unit is 20. Any number of beans stolen
below 20 cannot be perceived. Any number of beans
stolen between 20 and 39 is perceived only as 20 beans
being stolen; between 40 and 59, only as 40 beans
being stolen; and so on, up to 100 beans. Thus bandits
who cause a 20th bean to be stolen are responsible
for the disutility of 20 beans being stolen. For instance,
bandits who cause the 100th bean to be stolen from
a bowl are responsible for the consequence of 20 beans
being stolen, since had they not caused the 100th
bean to be stolen, only 80 beans would have been perceived
as stolen.
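This retrospective attribution, using the essay’s illustrative
threshold of 20, can be sketched in a few lines of Python (the
function name is ours, chosen for illustration):

```python
def retrospective_responsibility(bean_index: int, threshold: int) -> int:
    """Perceived harm, in beans, attributable to the bandit who steals
    the bean at this 1-based position in a bowl."""
    # Only a bandit who completes a threshold unit (steals the 20th,
    # 40th, ... bean) makes a perceptible difference; he is then
    # responsible for that entire unit of beans.
    return threshold if bean_index % threshold == 0 else 0

# The bandit who steals the 100th bean is responsible for 20 beans,
# since without him only 80 beans would have been perceived as stolen;
# the bandit who steals the 99th bean is responsible for none.
```

Summed over all 100 positions in a bowl, these attributions recover
the full perceived loss of 100 beans.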

This is the approach to take in describing the causal
responsibility, after the fact, of agents in similar
problems of contributory causation. However, as suggested
above, this retrospective description of actual consequences
does not help us to decide on a course of action.

For this, ethical individuals must combine the knowledge
of thresholds with expected consequences. Imagine
that the bandits are contemplating stealing beans
again. This time, each bandit knows villagers can
perceive only threshold units of 20, but each bandit
does not know whether he will be stealing a 20th bean
from each bowl. Under this uncertainty, each bandit
ought to calculate the expected consequence of stealing
100 beans as the probability of completing a threshold
unit in each bowl (1/20) times the consequence of
perceiving that threshold unit (20) times the number
of bowls (100), which equals 100 – one hungry
villager.
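The calculation above can be written out directly (all figures are
the essay’s hypothetical ones):

```python
threshold = 20     # beans a villager can perceive, known to the bandits
bowls = 100        # one bowl per villager
p_complete = 1 / threshold  # chance a given stolen bean completes a unit
consequence = threshold     # beans perceived as lost when a unit completes

# expected beans perceived as lost if a bandit steals one bean per bowl
expected_beans = p_complete * consequence * bowls
# (1/20) * 20 * 100 = 100 beans, i.e. one villager's full lunch
```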

Even if each bandit knows neither the size of the
threshold unit nor which bean he is stealing, he can
still calculate the expected consequences. In each
case he will know that the consequences of reaching
a threshold unit times the probability of completing
a threshold unit in each bowl is one. (This is so
because the size of the threshold unit and the probability
of completing it always vary inversely.) Hence the
expected consequence of stealing 100 beans will always
be 100. The only condition under which the expected
consequence will be less than 100 is when a bandit
has information about both the exact size of the threshold
unit and the exact position of a particular bean within
that unit. In most cases of contributory causation,
this kind of information will not be available.
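The inverse-variation point can be checked directly: whatever the
threshold size, the expected consequence of stealing one bean from
each of 100 bowls comes out the same (a minimal sketch; the function
name is ours):

```python
def expected_beans_stolen(threshold: int, bowls: int = 100) -> float:
    """Expected beans perceived as lost when a bandit steals one bean
    from each bowl, knowing neither the threshold nor his position."""
    p_complete = 1 / threshold  # probability of completing a unit
    consequence = threshold     # size of the unit if completed
    # the two factors vary inversely, so their product is always 1
    return p_complete * consequence * bowls

# For every possible threshold from 1 to 100 beans, the expected
# consequence of stealing 100 beans is 100.
```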

As a decision procedure, expected consequences yield
the same prescription as Glover’s divisibility
principle: don’t steal beans. This makes sense, since
the product of each bandit’s probability of completing
a perceptible unit in a bowl and the size of that unit
is one bean per bowl, which is exactly the per-bandit
share of the total harm that the divisibility principle
assigns. One virtue of calculating expected consequences,
then, is that it provides the same prescriptions as
Glover’s divisibility principle but without a questionable
view of actual causal responsibility.

Once we recognize the expected consequences of an action,
the “causal inefficacy” defense of buying
meat no longer holds. There must be some threshold
at which point a unit of meat demanded by some group
of customers is perceived by the grocer. At the very
most, the size of this threshold unit is the difference
between the demand for no meat and the current demand
for meat. Likewise, there must be some threshold where
a unit of meat demanded by some group of grocers is
perceived by the butcher. And so on, all the way to
the farmer. The expected consequence of completing
a threshold unit that affects the production and slaughter
of animals is thus the product of all the probabilities
of completing each threshold unit [p(All) = p(Grocer) ×
p(Butcher) × … × p(Farmer)] times the consequence
of that entire threshold unit of animal production.
It is likely that the probability is quite small.
However, the consequence of completing the threshold
unit is the consequence of the entire unit, not some
portion of it. This consequence is quite large and
terrible, since it involves raising and slaughtering
a significant number of animals.
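A sketch of this chained calculation, with purely hypothetical
probabilities for each link in the supply chain (none of these
figures come from real market data):

```python
# Hypothetical chances that a purchase completes each threshold unit
p_grocer = 1 / 100
p_butcher = 1 / 50
p_farmer = 1 / 40

# p(All) = p(Grocer) * p(Butcher) * p(Farmer): a very small number
p_all = p_grocer * p_butcher * p_farmer       # 1 in 200,000

# Hypothetical size of the farmer's threshold unit, in animals
unit_consequence = 10_000_000

# A tiny probability times a very large consequence is not negligible
expected_animals = p_all * unit_consequence   # 50 animals
```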

For example, take the case of The 200 Million Consumers.
There are 200 million consumers, each of whom eats
50 farmed animals each year. In this market, there
are only ten possible annual outputs of animals for
farmers: one billion animals, two billion, and so
on, up to ten billion. The difference between each
of these annual outputs – one billion – is the smallest
unit of demand perceivable to the farmer and is thus
the threshold unit. Since there are 20 million customers
per threshold unit, and only one of these customers
will actually complete the unit of which his own
purchase is a part, the probability of any given customer
completing a unit is one in 20 million. That means by buying
meat for the year, an individual has a one-in-20 million
chance of affecting the production and slaughter of
one billion animals. The expected consequence is then
one-20-millionth times one billion, which equals 50
– that is, raising and slaughtering 50 animals
per year. Given the horrors of today's animal
agriculture, that is a substantial consequence. These
hypothetical numbers are close to the actual numbers
for meat production and consumption in the United
States.
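The arithmetic of The 200 Million Consumers works out as follows
(all figures are the essay’s):

```python
consumers = 200_000_000
animals_per_consumer = 50          # animals eaten per consumer per year
threshold_animals = 1_000_000_000  # smallest output change a farmer perceives

# number of customers whose combined demand makes up one threshold unit
consumers_per_unit = threshold_animals // animals_per_consumer  # 20 million

p_complete = 1 / consumers_per_unit  # chance of completing a unit
expected_animals = p_complete * threshold_animals
# 1/20,000,000 * 1,000,000,000 = 50 animals raised and slaughtered per year
```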

As with The 100 Bandits, in the case of The 200 Million
Consumers, only a small fraction of individuals may
actually cause harm, as determined after the fact.
While at first glance this seems to weaken the argument
against buying meat, on closer inspection it makes
no difference. Since we can never have perfect knowledge
beforehand, only a decision procedure can tell us
whether or not we ought to buy meat. An ethical individual
must thus use expected consequences to make a decision
about buying meat, and the expected consequences of
buying meat are terrible.