The correctness of the
incremental and divide-and-conquer algorithms depends on
reliable orientation and incircle tests.
The orientation test determines
whether a point lies to the left of, to the right of,
or on a line; it is used in many (perhaps most)
geometric algorithms. The incircle
test determines whether a point lies inside, outside, or on a circle.
Inexact versions of these
tests are vulnerable to roundoff error, and the wrong answers
they produce can cause geometric algorithms to hang, crash,
or produce incorrect output.
Figure 17 demonstrates
a real example of the failure of Triangle's divide-and-conquer
algorithm.

Figure 17:
Left: A Delaunay triangulation (two of the guitar's tuning screws).
Right: An invalid triangulation
created by Triangle with exact arithmetic disabled.
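
To make the discussion concrete, the following sketch (in C; a naive
version, not Triangle's robust code) shows the orientation test as it
is usually written. The incircle test has the same shape, but with a
three-by-three determinant. Roundoff in the subtractions and products
can yield the wrong sign when the points are nearly collinear, which
is precisely how failures like the one in Figure 17 arise.

    /* Naive orientation test: positive if a, b, c occur in
     * counterclockwise order, negative if clockwise, and zero if the
     * points are collinear.  Roundoff error can flip the sign for
     * nearly collinear inputs. */
    double orient2d_naive(const double a[2], const double b[2],
                          const double c[2])
    {
      double detleft  = (a[0] - c[0]) * (b[1] - c[1]);
      double detright = (a[1] - c[1]) * (b[0] - c[0]);
      return detleft - detright;
    }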

The easiest solution to many of these robustness
problems is to use software implementations of
exact arithmetic, albeit often at great expense.
It is common to hear reports of implementations being
slowed by factors of ten or more as a consequence.
The goal of improving the speed of correct
geometric calculations has received much recent
attention [4, 8, 1],
but the most promising proposals take integer or rational inputs, often
of limited precision.
These methods do not appear to be usable if it is convenient
or necessary to use ordinary floating-point inputs.

Triangle includes fast correct
implementations of the orientation and incircle tests
that take floating-point inputs.
They owe their speed to two features. First, they employ
new fast algorithms for arbitrary precision
arithmetic that have a strong advantage
over other software techniques in computations that manipulate
values of extended but small
precision.
Second, they are adaptive; their running time
depends on the degree of uncertainty of the result, and is usually small.
For instance, the adaptive orientation test is slow only if the
points being tested are nearly or exactly collinear.
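
To give the flavor of these arbitrary precision algorithms, the sketch
below shows the classic two-sum transformation (due to Knuth), one of
the error-free building blocks described in [16]: the exact sum of two
doubles is expressed as a rounded sum plus an exact roundoff term, and
sequences of such terms form the extended-precision "expansions" that
the predicates manipulate.

    /* Two-Sum: computes x and y such that a + b = x + y exactly,
     * where x is the rounded sum and y is the roundoff error.
     * Assumes IEEE 754 arithmetic with round-to-nearest and no
     * value-changing compiler optimizations. */
    void two_sum(double a, double b, double *x, double *y)
    {
      double bvirt, avirt, bround, around;
      *x = a + b;
      bvirt = *x - a;
      avirt = *x - bvirt;
      bround = b - bvirt;
      around = a - avirt;
      *y = around + bround;
    }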

The orientation and incircle tests both work by computing the sign of
a determinant.
Fortune and Van Wyk [8] take
advantage of the fact that only the sign is needed
by using a floating-point filter: the
determinant is first evaluated approximately, and only if forward error
analysis indicates that the sign of the approximate result cannot be trusted
does one use an exact test. Triangle's adaptive implementations
carry this suggestion to its logical
extreme by computing a sequence of successively more accurate approximations
to the determinant, stopping only when the accuracy of the sign is assured.
To reduce computation time, some of these approximations can reuse previous,
less accurate computations.
Shewchuk [16] presents details of the arbitrary precision
arithmetic algorithms and the adaptivity scheme, and provides empirical
evidence that multiple-stage adaptivity can significantly improve
on two-stage adaptivity
when difficult point sets are triangulated.
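
The structure of the first, filtered stage is sketched below. The
error bound constant and the exact fallback are stand-ins (a bound of
this general form is derived in [16]); the point is that only calls
whose approximate determinant is too close to zero pay for additional
precision.

    #include <float.h>

    /* Placeholder for the later adaptive/exact stages (not shown). */
    double orient2d_exact(const double a[2], const double b[2],
                          const double c[2]);

    /* Relative error bound on the approximate determinant; eps is
     * the rounding unit (half of DBL_EPSILON for IEEE doubles). */
    #define EPS      (DBL_EPSILON / 2.0)
    #define ERRBOUND ((3.0 + 16.0 * EPS) * EPS)

    double orient2d_filtered(const double a[2], const double b[2],
                             const double c[2])
    {
      double detleft  = (a[0] - c[0]) * (b[1] - c[1]);
      double detright = (a[1] - c[1]) * (b[0] - c[0]);
      double det = detleft - detright;
      double detsum;

      if (detleft > 0.0) {
        if (detright <= 0.0) return det;  /* signs differ: det is safe */
        detsum = detleft + detright;
      } else if (detleft < 0.0) {
        if (detright >= 0.0) return det;
        detsum = -detleft - detright;
      } else {
        return det;                       /* detleft is exactly zero */
      }

      if (det >= ERRBOUND * detsum || -det >= ERRBOUND * detsum)
        return det;                       /* sign certified by the filter */
      return orient2d_exact(a, b, c);     /* uncertain: escalate */
    }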

Using the adaptive tests,
Triangle computes Delaunay triangulations, constrained
Delaunay triangulations, and convex hulls exactly, roundoff error
notwithstanding.
Table 1 shows that the robust tests usually incur only
a 10% to 30% overhead, though more time may be needed for point sets
with many near-degeneracies. One exception is the divide-and-conquer
algorithm with vertical cuts. Because this algorithm repeatedly
merges tall, thinly separated triangulations, it performs many
orientation tests on nearly collinear points, and hence the robust
version is much slower than the non-robust version. The variant that
uses alternating cuts encounters nearly collinear points less often;
hence, its robust version suffers a smaller speed handicap, and its
non-robust version is less likely to fail.

Of course, adaptive tests do not solve all robustness problems.
Geometric computations that produce new vertices, including circumcenters
and segment intersections, could be performed exactly in principle, but
the results would have large bit complexity and would be inconvenient to
manipulate and expensive to store. Worse, vertices of arbitrarily large
bit complexity could eventually be produced in a cascading effect
when the Delaunay refinement algorithm inserts circumcenters of triangles
whose vertices were themselves circumcenters. Hence, it is infeasible
to make the algorithm perfectly robust. Fortunately, the Delaunay
refinement algorithm is naturally stable with regard to floating-point
roundoff error. Problems arise only when triangles are refined to so small
a size
that it is no longer possible to construct a circumcenter that is distinct
from its triangle's vertices.
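
For illustration, one common way to keep the circumcenter computation
as stable as the arithmetic allows is to compute the center as an
offset from one of the triangle's vertices, so the subtractions act on
nearby coordinates. The sketch below (not necessarily Triangle's
exact code) takes that approach; the cross product in the divisor is
twice the triangle's signed area, which shows directly where accuracy
is lost as triangles shrink toward degeneracy.

    /* Circumcenter of triangle (a, b, c), returned in center[],
     * computed as an offset from vertex a.  The divisor contains
     * twice the signed area, so tiny or nearly degenerate triangles
     * lose accuracy. */
    void circumcenter(const double a[2], const double b[2],
                      const double c[2], double center[2])
    {
      double xba = b[0] - a[0], yba = b[1] - a[1];
      double xca = c[0] - a[0], yca = c[1] - a[1];
      double balength = xba * xba + yba * yba;
      double calength = xca * xca + yca * yca;
      double d = 0.5 / (xba * yca - yba * xca);

      center[0] = a[0] + (yca * balength - yba * calength) * d;
      center[1] = a[1] + (xba * calength - xca * balength) * d;
    }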

I have not produced a robust version of the sweepline algorithm for a
somewhat technical reason. The sweepline algorithm maintains a priority
queue (normally implemented as a heap) containing two types of events:
site events, where the sweepline passes over an input point, and
circle events, where the sweepline reaches the top of a circle defined
by three consecutive vertices on the boundary
of the triangulation. Unfortunately, the y-coordinate
of such a circle top is expensive to compute exactly, may be irrational,
and has a somewhat complicated exact representation.
A robust implementation must keep the events correctly ordered,
and hence must replace the simple comparisons normally used to maintain
a priority queue with a test that correctly compares two circle tops.
Even a fast adaptive version of such a test would be so much slower
than simple comparisons that
event queue maintenance, which is a dominant cost of the sweepline algorithm,
would become prohibitively expensive.
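
To see why, consider computing the circle-event key directly, as in
the sketch below (illustrative only, not code from Triangle). The key
is the circle's center ordinate plus its radius, and the radius brings
in a square root, so the key is irrational in general; comparing two
such keys exactly requires far more work than one floating-point
comparison.

    #include <math.h>

    /* y-coordinate of the topmost point of the circle through a, b,
     * and c, computed relative to vertex a.  The square root is what
     * makes this quantity irrational in general. */
    double circle_top_y(const double a[2], const double b[2],
                        const double c[2])
    {
      double xba = b[0] - a[0], yba = b[1] - a[1];
      double xca = c[0] - a[0], yca = c[1] - a[1];
      double balength = xba * xba + yba * yba;
      double calength = xca * xca + yca * yca;
      double d = 0.5 / (xba * yca - yba * xca);
      double xc = (yca * balength - yba * calength) * d;  /* center - a */
      double yc = (xba * calength - xca * balength) * d;
      return a[1] + yc + sqrt(xc * xc + yc * yc);  /* center_y + radius */
    }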