September 21, 2004

COMMENT ON THE CARTER CENTER’S ANSWER TO OUR REPORT

HAUSMANN AND RIGOBON

(Caracas, August 21, 2004 - NPS 081)

What is our claim?
The Carter Center issued a report entitled
“Report on an Analysis of the
Representativeness of the Second Audit
Sample, and the Correlation between Petition
Signers and the Yes Vote in the Aug. 15,
2004 Presidential Recall Referendum in
Venezuela” which is a response to our paper
“In search of the black swan: Analysis of
the Statistical Evidence of Electoral Fraud
in Venezuela”. In preparing their response,
the Carter Center never contacted us to ask
questions about our methodology, nor did it
ask to see our data in order to reproduce
our results, although we offered both.

In our original paper we studied – among
other things – whether the sample used by
the Carter Center for the purpose of the
audit was a random sample of the whole
universe of automated voting precincts. We
also presented what we believe to be
evidence of fraud, but the Carter Center
report does not deal with this aspect of our
report.

It is useful to remind the reader of the
reasons why we looked into this question. We
asked ourselves whether it was possible to
have massive electronic fraud take place and
for the audit conducted under the
observation of the Carter Center and the
Organization of American States not to have
found it. We argued that this was possible
if the audit was conducted on a sample that
was not truly random and hence
representative of the whole universe. If
fraud was committed in some precincts and
not in others, it would be possible to
direct the audit to the non-altered
precincts causing the audit to fail to find
any wrongdoing. To check this possibility we
developed a test for randomness. We based
ourselves on a well accepted principle in
statistics which holds that if a
relationship between certain variables holds
for a universe as a whole, it should hold
for any random sub-sample of that universe.
This implies that if we estimate a
relationship for the whole universe, we
should not be able to reject the hypothesis
that the relationship is statistically
different for the audited sample.

We proceeded by estimating a relationship in
logarithms between the number of votes, on
the one hand, and four factors that should
affect this result, on the other. Each of
the four factors has a clear reason to be in
the relationship we estimated:

1. The number of registered voters in each
precinct that signed the recall petition in
November 2003.

2. The number of voters who were registered
at the time of the signature collection in
November 2003 and hence could have signed
the petition at that time.

3. The number of voters in each precinct who
were not registered at the time of the
signature collection in November 2003 but
who were registered for the August 2004
referendum.

4. The number of voters in each precinct who
were registered to vote on August 15, 2004
but did not vote.

The logic for including these variables in
the relationship is straightforward. First,
the higher the number of signers in a
precinct, the higher the expected number of
Yes votes, given that signers have expressed
a preference for such a vote. Second, the
higher the number of registered voters at
the time of the recall referendum, the
higher the potential number of additional
yes votes as some voters may not have been
able to sign or may have preferred not to do
so given that signing was a public event
while the vote was secret. Third, the higher
the number of new voters, the higher the
expected number of Yes (and No) votes as
these voters have yet to express their
political preferences one way or the other.
Finally, the higher the number of voters who
do not show up to vote, the lower the number
of Yes (and No) votes.
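The paper describes this specification only in words; in symbols, one plausible rendering (our notation here, not necessarily the authors' exact equation) is:

```latex
\log Y_i = \beta_0 + \beta_1 \log S_i + \beta_2 \log V_i
         + \beta_3 \log N_i + \beta_4 \log A_i + \varepsilon_i
```

where, for precinct $i$, $Y_i$ is the number of Yes votes, $S_i$ the number of signers, $V_i$ the number of voters registered in time to sign, $N_i$ the number of new voters, and $A_i$ the number of registered voters who abstained.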

When we estimate this relationship for the
whole universe we find that all the
variables are significant at the 1 percent
confidence level. When we check whether this
relationship holds with similar parameters
for the audited sample, we can reject the
hypothesis that it does, also with a very
high confidence level. In particular we
remark that the estimated elasticity between
signatures and votes – conditional on
controlling for the other three factors – is
10.5 percent higher in the audited sample
than in the rest of the universe of
automated precincts.
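To illustrate the kind of test involved, the following sketch (synthetic data with invented parameters, not the actual referendum dataset or the authors' code) fits a pooled log-log regression with an interaction dummy for the audited sample; the interaction coefficient then estimates the elasticity gap between the two groups:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration, NOT the actual data: log votes depend on
# log signatures with elasticity 1.0 in the non-audited precincts
# and 1.105 (10.5% higher) in the "audited" sample.
n = 4000
audited = rng.random(n) < 0.05               # a small audited subset
log_sig = rng.normal(5.0, 1.0, n)            # log signatures
elasticity = np.where(audited, 1.105, 1.0)   # elasticity differs by group
log_votes = 0.5 + elasticity * log_sig + rng.normal(0, 0.1, n)

# Pooled OLS with an interaction term: the coefficient on
# audited * log_sig estimates the elasticity gap between groups.
X = np.column_stack([np.ones(n), log_sig, audited, audited * log_sig])
coef, *_ = np.linalg.lstsq(X, log_votes, rcond=None)
gap = coef[3]
print(f"estimated elasticity gap: {gap:.3f}")  # near the planted 0.105
```

If the sample were truly random, the interaction coefficient would be statistically indistinguishable from zero; a significantly non-zero estimate is the rejection described above.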

This is the essence of our proof. We called
our paper “In search of the black swan” in
reference to Karl Popper’s dictum that a
thousand white swans do not prove the
proposition that all swans are white, but
one black swan does show that they are not.
For the white-swan proposition that the
sample used in the audit was random, we
provided a black swan proof that it was
not.

How does the Carter Center answer our claim?
They make three propositions:

1. They check whether the means of the
votes in the two samples are similar.

2. They compare the correlation between
signatures and votes in the two samples,
which they find to be similar.

3. They test the random number generator
program used by the Electoral Council and
find that it does generate a random draw of
all the precincts. They also correctly point
out that the numbers are not truly random in
the sense that the same initial seed number
generates the same sequence of numbers.

Similar sample means

With respect to the first point, the
question that the Carter Center asks is
whether the unconditional means of
the two samples are similar. By
unconditional we mean that they do not
control for the fact that precincts are
different in the four dimensions we include
in our equation or in any other dimension.
To see the importance of conditioning,
let us imagine that there is fraud and let
us suppose that the fraud is carried out in
a large number of precincts but not in all
of them. The question is: is it possible to
choose an audit sample of non-tampered
centers that has the same mean as the
universe of tampered and un-tampered
precincts? The answer is obviously yes. Let
us give an example using a population with a
varying level of income, say from US$ 4,000
per year to several million. Assume that
half of them have been taxed 20 percent of
their income while the other half has not.
Is it possible to construct an audit sample
of non-taxed individuals whose average
income is similar to that of those that have
been taxed? Obviously the answer is yes.
However, if one controls for the level of
education, the years of work experience and
the positions they hold in the companies
they work in, it should be possible to find
that the audited individuals actually have a
higher net income than the non-audited
group. That is the essence of what we do.
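A minimal simulation of this income example (all numbers hypothetical) shows how an audit sample drawn entirely from the untaxed group can nonetheless match the unconditional mean of the whole population:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: incomes ranging from a few thousand
# dollars to several million (log-normal), half taxed at 20%.
income = np.exp(rng.normal(11.0, 1.2, 20000))
taxed = rng.random(20000) < 0.5
net = np.where(taxed, 0.8 * income, income)

target = net.mean()  # unconditional mean over taxed AND untaxed

# Build an "audit sample" of 200 untaxed individuals whose incomes
# are closest to the population mean: every one of them escaped the
# tax, yet the sample mean matches the unconditional target.
untaxed_net = net[~taxed]
idx = np.argsort(np.abs(untaxed_net - target))[:200]
audit_sample = untaxed_net[idx]
print(round(audit_sample.mean() / target, 3))  # ratio close to 1.0
```

An unconditional comparison of means would pass this sample as representative; only a comparison that conditions on individual characteristics would reveal that every sampled person is untaxed.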

Now, let’s go back to the case in point.
Precincts vary from those where the Yes got
more than 90 percent of the vote to those
where it got less than 10 percent. This is a
very large variation relative to the
potential size of the fraud, say 10 or 20
percent. It is perfectly feasible to choose
a sample that has the same mean as the rest
of the universe.

However, the non-random nature of the sample
would be revealed if we compared the means
while controlling for the fact that each
precinct is different. That is what we do,
and this is the randomness test that the
audited sample failed.

Similar correlation coefficients

The second check consists of comparing the
correlation between signatures and votes in
the two samples, which they find to be very
similar. This is clearly not a test of
anything relevant to the case in point. To
see this, suppose that in the audited sample
there is a perfect relationship in which
each signature becomes 2 votes, while in the
non-audited sample, because of fraud, each
signature becomes only 1 vote. The
correlation coefficient in both samples is
nonetheless 1. This is because the
correlation coefficient is affected by
whether the two variables move up and down
together, but not by whether they do so in a
relationship of 1-to-1, 2-to-1 or 10-to-1.
This procedure is certainly no proof of
randomness or of the absence of fraud.
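The point is easy to verify numerically; in this sketch (synthetic data), an honest 2-votes-per-signature sample and a fraudulent 1-vote-per-signature sample have essentially the same correlation coefficient:

```python
import numpy as np

rng = np.random.default_rng(2)
signatures = rng.uniform(100, 1000, 500)
noise = rng.normal(0, 5, size=500)

votes_honest = 2.0 * signatures + noise  # every signature -> 2 votes
votes_fraud = 1.0 * signatures + noise   # fraud: signature -> 1 vote

r_honest = np.corrcoef(signatures, votes_honest)[0, 1]
r_fraud = np.corrcoef(signatures, votes_fraud)[0, 1]
# Both correlations are essentially 1: the coefficient is blind to
# whether the slope is 1-to-1, 2-to-1 or 10-to-1.
print(round(r_honest, 3), round(r_fraud, 3))
```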

Test of the sample number generator

The final point is that the random number
generator actually generates a sample that
can potentially pick any precinct in the
universe, and that it was tested and
appeared to actually generate random
numbers. However, there are many ways in
which this kind of analysis is weak. The
most obvious one is that the program does
not really generate random numbers but a
predetermined set of numbers for each
seed-number that initiates the sequence. By
putting a known seed-number the Electoral
Council would know beforehand which
precincts would come up, and could thus
decide which precincts to leave unaltered.
It is our understanding that in the audit
conducted on August 18-20, the seed number
was provided by the Electoral Council and
entered into their computer. It does not
matter if, as reported by the Carter Center,
after 1000 draws, the likelihood of any
precinct being chosen looks reasonably
random. The point is that the first draw is
completely pre-determined by the seed
number.
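The determinism of seeded generators is easy to demonstrate; in this sketch (the precinct and sample counts are placeholders, and this is not the actual audit software) two draws initialized with the same seed return identical samples:

```python
import random

# Placeholder numbers: ~4,500 automated precincts, a sample of 150.
def draw_precincts(seed, universe=4500, sample=150):
    rng = random.Random(seed)
    return rng.sample(range(universe), sample)

run_a = draw_precincts(20040818)
run_b = draw_precincts(20040818)
print(run_a == run_b)  # True: the seed fully determines the draw
```

Whoever chooses the seed can therefore reproduce the entire draw in advance, even if the generator's long-run output looks statistically random.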

Other problems involve the possibility that,
at the time and place in which the program
was run, a logic bomb might have been active
that would make the program work
differently. The bomb could then erase
itself, leaving no trace.

Conclusions

The Carter Center report does not address
the two main findings of our report. It
completely disregards the evidence we put
forth regarding the statistical evidence for
the existence of fraud in the statistical
record. It only addresses the issues we
raise regarding the randomness of the sample
used for the audit they observed on August
18-20, 2004. They show that the
unconditional means between the audited
sample and the rest of the universe are
similar. However, this is no proof of
randomness. Conditional on the
characteristics of the precincts, we show
them to be different and this result is not
challenged or addressed by the report. The
report also argues that the correlation
coefficient between signatures and votes in
the audited sample is similar to that in the
rest of the precincts, but this is an
irrelevant statistic for this discussion.
Finally, the report checks the source code
of the software used but leaves open wide
avenues for fraudulent behavior.

We do not know what happened during the
audit, as we were not present. We do know
that the sample fails the randomness test we
designed. The Carter Center has nothing to
say about this fact. Paraphrasing Popper
again, the Carter Center seems content in
finding the odd white swan here and there.
That does not prove the proposition that the
sample was randomly chosen. We have
presented a formal test of randomness and
the sample fails it.
That is a black swan.