(with Dale Brownawell)
Suppose $V\ib\RR^n$ is the zero set of $m$ integer
polynomials of degree at most $D$ and height at most $H$.
If the projection of $V$ onto any (say, the first)
coordinate is zero-dimensional, then we obtain a lower
bound $B(n,m,D,H)$ on the absolute value of any non-zero
component of a point in $V$. This generalizes a previous
zero bound of Yap.

(with Long Lin)
We consider domain subdivision algorithms for
computing isotopic approximations of nonsingular curves
represented implicitly by an equation $f(X,Y)=0$.
Two algorithms in this area are
from Snyder (1992) and \pv\ (2004).
We introduce a new algorithm that combines the
advantages of these two algorithms:
like Snyder, we use the parametrizability criterion
for subdivision, and like \pv\ we exploit non-local isotopy.
We also extend our algorithm in two important and practical directions:
first, we allow subdivision cells
to be rectangles with arbitrary but bounded aspect ratios.
Second, we extend the input domains to regions with
arbitrary geometry, which might not be simply connected.
Our algorithm halts as long as the curve has no singularities
in the region.
We report on very encouraging preliminary experimental results.
Our Java code and data are online (a Core Library version will
be available soon).
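The exclusion test at the heart of such subdivision algorithms can be sketched in a few lines. The code below is a hypothetical illustration (naive floating-point interval bounds for the circle f(x,y) = x^2 + y^2 - 1), not the algorithm of the paper: a box is discarded once interval arithmetic certifies that f cannot vanish on it, and the surviving boxes of minimal size cover the curve.

```python
# A minimal sketch (hypothetical, not the paper's algorithm) of the
# interval-exclusion step shared by Snyder- and Plantinga-Vegter-style
# subdivision: a box B is discarded once interval arithmetic certifies
# that f cannot vanish on B.

def f_range(x0, x1, y0, y1):
    """Interval evaluation of f(x,y) = x^2 + y^2 - 1 on the box."""
    def sq_range(a, b):            # range of t^2 for t in [a, b]
        lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
        return lo, max(a * a, b * b)
    xl, xh = sq_range(x0, x1)
    yl, yh = sq_range(y0, y1)
    return xl + yl - 1.0, xh + yh - 1.0

def subdivide(x0, x1, y0, y1, min_size=0.25):
    """Return boxes of side min_size that interval arithmetic cannot exclude."""
    lo, hi = f_range(x0, x1, y0, y1)
    if lo > 0.0 or hi < 0.0:       # 0 not in the interval: no curve here
        return []
    if x1 - x0 <= min_size:        # small enough: keep as a candidate box
        return [(x0, x1, y0, y1)]
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    return (subdivide(x0, xm, y0, ym) + subdivide(xm, x1, y0, ym) +
            subdivide(x0, xm, ym, y1) + subdivide(xm, x1, ym, y1))

boxes = subdivide(-2.0, 2.0, -2.0, 2.0)
# The retained boxes cover the unit circle; the interior near the
# origin and the far exterior are excluded.
```

The full algorithms add the parametrizability and isotopy machinery on top of this exclusion step; the sketch only shows why subdivision terminates away from the curve.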

(with Jihun Yu)
Exact rounding of numbers and functions
is a fundamental computational problem.
This paper introduces the mathematical and computational foundations
for exact rounding.
We show that all the elementary functions in ISO standard
(ISO/IEC 10967) for Language Independent Arithmetic
can be exactly rounded, in any format, and to any precision.
Moreover, a priori complexity bounds can be given
for these rounding problems.
Our conclusions are derived from results in
transcendental number theory.
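A standard practical approach to exact rounding is Ziv's strategy of recomputing at ever higher working precision. The sketch below (using Python's decimal module; the stabilization check is a heuristic stand-in for a rigorous error bound) illustrates the escalation loop whose guaranteed termination is exactly what zero bounds from transcendental number theory supply.

```python
from decimal import Decimal, localcontext

def exp_rounded(x, digits):
    """Round exp(x) to `digits` significant digits by recomputing at
    increasing working precision until the rounded value stabilizes
    (a Ziv-style escalation loop; a guaranteed stopping rule would
    need an a priori zero bound, which is the point of the paper)."""
    prev = None
    prec = digits + 10
    while True:
        with localcontext() as ctx:
            ctx.prec = prec
            full = Decimal(x).exp()
        with localcontext() as ctx:
            ctx.prec = digits
            rounded = +full          # unary + rounds to ctx.prec digits
        if rounded == prev:
            return rounded
        prev, prec = rounded, prec * 2

print(exp_rounded(1, 10))   # 2.718281828
```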

This paper proves two generalizations of the well-known
master theorem of divide-and-conquer recurrences.
Our approach stresses the use of real recurrences
and elementary methods, including real induction.
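As a concrete instance, the recurrence T(n) = 4T(n/2) + n falls into the leaf-heavy case a > b^c of the master theorem (a=4, b=2, c=1), so T(n) = Theta(n^{log_2 4}) = Theta(n^2). The sketch below (illustrative only, not from the paper) checks this against the exact unfolding T(2^k) = 2*4^k - 2^k.

```python
from functools import lru_cache

# Numerical sanity check of one master-theorem case:
# T(n) = 4 T(n/2) + n with T(1) = 1 is "leaf-heavy" (a > b^c),
# so the master theorem predicts T(n) = Theta(n^2).

@lru_cache(maxsize=None)
def T(n):
    return 1 if n == 1 else 4 * T(n // 2) + n

# Unfolding the recurrence for n = 2^k gives T(n) = 2n^2 - n exactly,
# consistent with the Theta(n^2) prediction.
for k in range(1, 12):
    n = 2 ** k
    assert T(n) == 2 * n * n - n
print(T(1024))
```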

(with Michael Burr, Sungwoo Choi, and Ben Galehouse)
Let f(X,Y) be a C^1 function for which we can
evaluate the sign of f at dyadic points,
and evaluate interval versions of f and its derivatives.
Plantinga-Vegter gave an algorithm PV to compute
the isotopic approximation to the curve f=0 within
any given box, under the assumption that f is
non-singular.
We generalize the Plantinga-Vegter algorithm
to more general regions. In case f is algebraic
(i.e., integer polynomials) and square-free, we show how
to isolate the singularities of f and to compute
the isotopic approximation.
Our result appears to be the first purely numerical
algorithm that can handle singular algebraic curves.

(with Michael Burr and Felix Krahmer)
We give an adaptive analysis of the
complexity of EVAL, an evaluation-based
real root isolation algorithm. The
complexity is bounded in terms of an integral.
Moreover, we give an a priori bound of $O(d^2L)$ for
the integral for the benchmark problem of
isolating all real zeros of a square-free integer
polynomial of degree d and logarithmic height L.
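The two interval predicates that drive EVAL can be sketched as follows. This is hypothetical illustration code, not the implementation analyzed in the paper: a box is discarded when the interval value of f excludes 0, and accepted as isolating when the interval value of f' excludes 0 and f changes sign at the endpoints.

```python
# A sketch (hypothetical code, not the paper's implementation) of the two
# interval predicates driving EVAL: discard a box if 0 is excluded from
# f's interval value; accept it as isolating if 0 is excluded from f's
# derivative there and f changes sign at the endpoints.

def ieval(coeffs, lo, hi):
    """Naive interval Horner evaluation; coeffs listed highest degree first."""
    a, b = coeffs[0], coeffs[0]
    for c in coeffs[1:]:
        ps = (a * lo, a * hi, b * lo, b * hi)
        a, b = min(ps) + c, max(ps) + c
    return a, b

def horner(coeffs, x):
    v = 0
    for c in coeffs:
        v = v * x + c
    return v

def eval_isolate(f, df, lo, hi):
    fl, fh = ieval(f, lo, hi)
    if fl > 0 or fh < 0:                      # C0 test: no root in [lo, hi]
        return []
    dl, dh = ieval(df, lo, hi)
    if dl > 0 or dh < 0:                      # C1 test: f monotone here
        return [(lo, hi)] if horner(f, lo) * horner(f, hi) < 0 else []
    mid = (lo + hi) / 2                       # otherwise subdivide
    return eval_isolate(f, df, lo, mid) + eval_isolate(f, df, mid, hi)

# Isolate the real roots of f = x^2 - 2 on [-3, 3]:
roots = eval_isolate([1, 0, -2], [2, 0], -3.0, 3.0)
print(roots)   # two isolating intervals, one around each of +-sqrt(2)
```

The integral in the adaptive analysis bounds precisely the number of boxes this subdivision generates.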

(with Jin-San Cheng and Xiao-shan Gao)
We show how to isolate all the real zeros in a
zero-dimensional triangular system of integer
polynomials. The method is complete in that we
make no regularity assumptions on the system.
The result is made possible through the use of
evaluation bounds. Our implementation indicates
that the approach is practical for systems of modest size.

We outline a theory of real computation that is
suitable for Exact Geometric Computation (EGC).
It is based on a concept of
explicit sets and explicit algebraic structures.
The relation to the analytic theory of real computation
is shown, and we provide a framework in which
the algebraic model of real computation coexists in
complementary roles with a numerical model of real computation.

(with Eli Daiches and Ken Been)
We give a model of the dynamic map labeling problem,
list a set of desiderata, introduce the class of
point-invariant dynamic labels, and provide a
fast and practical solution to this problem.

(with Sungwoo Choi, Sung-il Pae and Hyunju Park)
In robot motion, the simplest example of
a transcendental motion is
a rotation with constant angular velocity.
When combined with translation, this is a helical motion.
We show that it is decidable whether two algebraic
rigid bodies collide, given that one of them
has a helical motion and the other an algebraic motion.
The general case of two helical motions colliding
is open. This is another example of transcendental
number theory yielding positive algorithmic results.

This is a popular article describing the central problem of
deciding zero, a puzzle that is at the center of
robust computation, real computation, as well as
transcendental number theory. Its intended audience
is high school and college students.

(with Zilin Du)
We give a uniform complexity bound for evaluation
of hypergeometric functions to any absolute
precision. This is further extended to
the case when the argument is a black-box number.
In Proc. ASCM, Dec 2005.

(with Arno Eigenwillig and Vikram Sharma)
We give an amortized bound for Descartes'
method and de Casteljau's method for isolating
all the real roots of a square-free polynomial of degree d.
Our bound is optimal for $L > \log d$, where L is the
bit size of the coefficients and d is the degree
of the polynomial. (NOTE: the student co-authors of this
paper won the Best Student Paper Award at ISSAC 2006).
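The bisection scheme that the amortized analysis charges against can be sketched directly. The code below is an illustration of plain Descartes-style isolation (not the paper's amortized algorithm): the sign-variation count of a Moebius-transformed polynomial bounds the number of roots in an interval, counts 0 and 1 are conclusive, and anything else forces a bisection.

```python
from fractions import Fraction as F

# A sketch (not the paper's amortized algorithm) of Descartes-style root
# isolation: the number of sign variations of the transformed polynomial
# bounds the number of roots in (a,b); variation counts 0 and 1 are
# conclusive, anything else forces a bisection.

def pmul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def variations(p, a, b):
    """Sign variations of (1+x)^n p((a+bx)/(1+x)), n = deg p.
    Coefficients of p are listed lowest degree first."""
    n = len(p) - 1
    q = [F(0)] * (n + 1)
    for i, c in enumerate(p):
        term = [F(1)]
        for _ in range(i):
            term = pmul(term, [a, b])        # (a + b x)^i
        for _ in range(n - i):
            term = pmul(term, [F(1), F(1)])  # (1 + x)^(n-i)
        for k, t in enumerate(term):
            q[k] += c * t
    signs = [c for c in q if c != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def isolate(p, a, b):
    v = variations(p, a, b)
    if v == 0:
        return []
    if v == 1:
        return [(a, b)]
    m = (a + b) / 2
    mid = [(m, m)] if sum(c * m**i for i, c in enumerate(p)) == 0 else []
    return isolate(p, a, m) + mid + isolate(p, m, b)

# Roots of x^2 - 2 in (-2, 2):
print(isolate([F(-2), F(0), F(1)], F(-2), F(2)))
```

The amortized bound of the paper controls the total bit cost of all the polynomial transformations this recursion performs.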

Smale's notion of approximate zero is extended
to the situation where the Newton evaluation
is inexact. We call the corresponding notion
"robust approximate zero". We provide a point
estimate for such robust approximate zeros.
We also provide the global complexity of
approximating a real zero of a polynomial
to $n$ absolute bits, starting from a robust
approximate zero.
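The phenomenon can be illustrated with a toy experiment (this is not the paper's point estimate): every evaluation of f is deliberately rounded, modeling inexact Newton evaluation, yet the iteration still approximates the zero to roughly the accuracy the evaluation error permits.

```python
# A toy illustration (not the paper's analysis) of robust Newton
# iteration: every evaluation of f is rounded to 12 decimal places,
# modeling inexact arithmetic, yet the iteration still approximates
# the zero of f(x) = x^2 - 2 to roughly that accuracy.

def f_noisy(x):
    return round(x * x - 2.0, 12)   # inexact evaluation, error <= 5e-13

def df(x):
    return 2.0 * x

x = 2.0                              # a (robust) approximate zero
for _ in range(10):
    x = x - f_noisy(x) / df(x)

print(abs(x - 2.0 ** 0.5))           # tiny: limited by the evaluation error
```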

The complexity of root isolation via Sturm sequences
is the product of two bounds:
(1) the complexity of each Sturm sequence evaluation,
(2) the number of Sturm sequence evaluations.
Assume square-free integer polynomials of degree $d$ with
$L$-bit coefficients.
For (1), we give a new and simpler approach which achieves
the current best
bound of $\widetilde{O}(d^3L)$ while avoiding the
more complicated algorithms of Reischert and of Lickteig-Roy.
Here, $\widetilde{O}$ means that logarithmic terms are ignored.
For (2), we give an alternative charging scheme that
matches the current best
bound of $\widetilde{O}(dL)$ from Davenport. The main advantage
of our approach is that it can be generalized to the case
of complex roots.
Both our results are based on amortization arguments.
In Proc. Symbolic-Numeric Workshop, Xi'an, China
(Aug 2005).
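The object whose cost bounds (1) and (2) multiply can be sketched directly. The code below is a plain textbook Sturm-sequence evaluation over exact rationals (assuming a square-free input), not the asymptotically fast version analyzed above: the number of real roots of p in (a, b] equals V(a) - V(b), where V(t) counts sign changes along the sequence p0 = p, p1 = p', p_{i+1} = -rem(p_{i-1}, p_i).

```python
from fractions import Fraction as F

# Textbook Sturm-sequence root counting over exact rationals
# (assumes p square-free; not the fast algorithm of the paper).

def prem(p, q):
    """Remainder of p divided by q; coefficients highest degree first."""
    p = list(p)
    while len(p) >= len(q) and any(p):
        factor = p[0] / q[0]
        for i in range(len(q)):
            p[i] -= factor * q[i]
        p.pop(0)
    return p or [F(0)]

def sturm_sequence(p):
    dp = [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]
    seq = [p, dp]
    while len(seq[-1]) > 1:
        seq.append([-c for c in prem(seq[-2], seq[-1])])
    return seq

def sign_changes(seq, t):
    vals = []
    for poly in seq:
        v = F(0)
        for c in poly:
            v = v * t + c
        if v != 0:
            vals.append(v)
    return sum(1 for u, w in zip(vals, vals[1:]) if u * w < 0)

def count_roots(p, a, b):
    """Number of real roots of square-free p in (a, b]."""
    seq = sturm_sequence(p)
    return sign_changes(seq, a) - sign_changes(seq, b)

# p = x^3 - 3x has roots -sqrt(3), 0, sqrt(3):
p = [F(1), F(0), F(-3), F(0)]
print(count_roots(p, F(-2), F(2)))   # 3
```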

We give the first complete subdivision algorithm for
intersecting two Bezier curves. Various geometric
separation bounds are also given: the minimum
distance between an algebraic point and a curve,
between two isolated singularities of a curve, etc.

We show that the shortest path between two points
amidst a collection of disc obstacles is computable.
This appears to be the first instance of a nontrivial
combinatorial problem involving transcendental numbers
that has been shown computable on a Turing machine.
Further, by appeal to effective bounds in Transcendental
Number Theory, we show that this problem can be
solved in single exponential time.

Geometric algorithms are designed for the Real RAM Model.
Substituting machine floating point arithmetic may cause algorithms
to fail. There is no comprehensive documentation of what can
go wrong and why. We consider some basic algorithms in
Computational Geometry and show, by explicit numerical examples,
the various ways they can fail.
We discuss how to find such numerical examples semi-systematically.
We hope this paper will be useful for teaching in the classroom.
Code, more examples, theory and pictures are available from our
website www.mpi-sb.mpg.de/~mehlhorn/ftp/ClassroomExamples.ps.
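One such failure can be reproduced in a few lines. The example below is constructed here for illustration (it is not one of the paper's examples): the standard planar orientation predicate, evaluated in double precision, reports three points as collinear although exact arithmetic shows a right turn.

```python
from fractions import Fraction

# A constructed classroom-style failure: the orientation predicate
# sign((bx-ax)(cy-ay) - (by-ay)(cx-ax)) evaluated in doubles reports
# "collinear" for points that exact arithmetic shows make a right turn.

def orient(ax, ay, bx, by, cx, cy):
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

a, b = (0.0, 0.0), (1e16, 1e16)
c = (1e16 + 1.0, 1e16)       # 1e16 + 1.0 rounds back to 1e16 in doubles

float_sign = orient(*a, *b, *c)
exact_sign = orient(Fraction(0), Fraction(0),
                    Fraction(10**16), Fraction(10**16),
                    Fraction(10**16) + 1, Fraction(10**16))

print(float_sign, exact_sign)   # 0.0 ("collinear") vs -10^16 (right turn)
```

The rounding happens before the subtraction even starts: the coordinate 10^16 + 1 is not representable in double precision, so the float predicate never sees the turn.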

This paper introduces a theory of guaranteed
accuracy computation. This is a generalization of
guaranteed sign computation which is the critical
idea in Exact Geometric Computation.
We introduce a new approach to computing
over the reals, and prove some basic theorems.
We also introduce a new numerical
computational model based on Sch\"onhage's pointer
machine. The main result is a transfer theorem
showing that every problem that is algebraically
computable is also numerically approximable.

A survey of robust geometric computation, covering both the
finite precision approach as well as exact approaches.
Topics include constructive root bounds,
filters, degeneracies and perturbation techniques.

(with Sylvain Pion)
This paper describes a new ``factoring technique'' that
can be used to improve the known constructive root bounds.
In this paper, we show how this technique is applied
to the BFMS Bound and to the Measure Bound. The technique
is especially effective when the inputs are k-ary rational
numbers, which includes the overwhelming majority of
real world input data (either binary or decimal rationals).

(with T. Asano and D. Kirkpatrick)
This paper introduces the concept of "pseudo approximation
algorithms", and shows how these can be systematically
converted into true approximation algorithms.
The technique is useful because pseudo approximation algorithms
seem easier to construct. We apply it to the two
simplest NP-hard optimum motion planning problems:
(A) Euclidean shortest path (ESP) and (B) $d_1$-optimum motion
of a rod. For (A), our new algorithm is simpler than previous
solutions, and has only logarithmic dependence on input bit size $L$
(previous solutions are polynomial in $L$). For (B), this
is the first $\varepsilon$-approximation algorithm.

(with K. Been and Z. Du)
This paper introduces
(a) the characteristic property of thinwire settings,
(b) some criteria for responsiveness in such a setting,
(c) a responsive architecture for visualization in such a setting,
and (d) the Telescoping Window Interface,
a truly scalable user interface for visualizing
arbitrarily large images.

(with E.C. Chang)
This paper introduces a new scheduling problem in which
requests can be partially served. This gives one
formulation of the notion of "Level of Service" which is
important in multimedia applications. Our model arose in
our thinwire visualization applications.
We present two online schedulers
that achieve a competitive ratio of 2 in the sense of Tarjan-Sleator.
These bounds are tight.

(with Chen Li)
A fundamental problem in Exact Geometric Computation
is to determine the sign of algebraic
expressions. This usually depends on bounding the roots of
polynomials away from zero. Classical bounds are often
non-constructive, but we seek ``constructive root bounds''.
Recently, Burnikel, Fleischer, Mehlhorn and Schirra (BFMS) introduced
such a bound for the important class of radical expressions.
In the division-free case, it is essentially tight and
improves on all previous bounds. In the general case,
its root "bit-bound" is quadratic in the degree of the
algebraic expression. We provide a new bound that can often
avoid this quadratic dependence on degree. Our method is also
applicable to more general algebraic expressions.
Our improvement can give dramatic speedups in practice.
We implemented this bound in our
Core Library and report on some experiments.

We apply our test to proving geometric theorems in the
classical ruler-and-compass setting. In contrast to
Wu's method which
works with complex geometry, ours addresses
real geometry. Our method is less general
than the Groebner Bases method, but is almost perfectly
matched to proving ruler-and-compass theorems. These
features, plus our use of randomized testing,
suggest that our method may turn out to be
the most efficient method for verifying this class of theorems.
Some experimental results are reported.
This prover is implemented using our Core Library, and
is distributed with the Library.
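The flavor of randomized testing mentioned above can be sketched generically. The code below is Schwartz-Zippel-style randomized zero testing (a generic illustration, not the prover itself): a nonzero polynomial of total degree d vanishes at a random point of an N-point grid with probability at most d/N, so a handful of exact random evaluations refutes a false identity with high probability.

```python
import random
from fractions import Fraction

# Generic Schwartz-Zippel-style randomized zero testing (an illustration,
# not the prover of the paper): exact evaluation at random grid points.

def probably_zero(poly, nvars, degree, trials=20, grid=10**6):
    rng = random.Random(2024)
    for _ in range(trials):
        pt = [Fraction(rng.randrange(grid)) for _ in range(nvars)]
        if poly(*pt) != 0:
            return False                 # a witness: certainly nonzero
    return True                          # zero with high probability

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero ...
assert probably_zero(lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2, 2, 2)
# ... while (x + y)^2 - x^2 - y^2 is not:
assert not probably_zero(lambda x, y: (x + y)**2 - x**2 - y**2, 2, 2)
```

A "yes" answer is only probabilistic; turning it into a proof requires the exact zero tests that the Core Library supplies.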

Universal Construction for the FKS Scheme (April 1999)
We introduce an analysis of the Fredman-Komlos-Szemeredi
(FKS) scheme for optimal static hashing, based on the
universality concepts of Carter-Wegman. This leads to
a more general and improved construction of FKS schemes,
starting from any strongly universal hash set.
We resolve a question first raised in the FKS paper:
can hash functions in FKS schemes avoid large
arithmetic and primes? We give an affirmative answer.
This paper introduces the concept of weighted universal hashing
and convex combination of universal hash functions.
An implementation is available upon request.
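The two-level scheme being analyzed can be sketched as follows. The code below is a simplified, hypothetical illustration (not the paper's construction): a first-level Carter-Wegman hash h(x) = ((ax + b) mod p) mod m spreads n keys into n buckets, and a bucket of size s gets a second-level table of size s^2 with a collision-free hash found by retrying random (a, b); the expected total space is O(n).

```python
import random

# A simplified two-level FKS sketch with Carter-Wegman universal hashing
# (hypothetical illustration, not the construction of the paper).

P = (1 << 61) - 1          # a Mersenne prime larger than any key

def make_hash(rng, m):
    a = rng.randrange(1, P)
    b = rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m

def build_fks(keys, seed=7):
    rng = random.Random(seed)
    n = len(keys)
    h = make_hash(rng, n)
    buckets = [[] for _ in range(n)]
    for k in keys:
        buckets[h(k)].append(k)
    tables = []
    for bucket in buckets:
        s = len(bucket)
        while True:                       # retry until injective on bucket
            g = make_hash(rng, max(1, s * s))
            slots = [None] * max(1, s * s)
            ok = True
            for k in bucket:
                if slots[g(k)] is not None:
                    ok = False
                    break
                slots[g(k)] = k
            if ok:
                tables.append((g, slots))
                break
    return h, tables

def lookup(structure, x):
    h, tables = structure
    g, slots = tables[h(x)]
    return slots[g(x)] == x

keys = [3, 17, 41, 2024, 10**9 + 7, 123456789]
fks = build_fks(keys)
```

Each second-level retry succeeds with probability at least 1/2 by universality, which is what makes the expected construction time linear.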

Wavelet Foveation (Jan 1999)
(with E.-C. Chang and S. Mallat)
This paper develops the mathematics of the foveation operator
introduced by Chang and Yap in 1997. It is shown to be
a bounded operator that is diagonally dominant.
This gives a mathematical justification of the practice
of using wavelets to perform foveation.

A Core Library For Robust Numeric and
Geometric Computation (Dec 1998)
(with V. Karamcheti, C. Li and I. Pechtchanski)
Describes a new library that will bring
the techniques of Exact Geometric Computation (EGC)
into the hands of any C/C++ programmer. Describes
interface, implementation, and some optimization experiments.

A New Number Core for Robust Numerical and
Geometric Libraries (Oct 1998)
Describes a novel number core that supports
Exact Geometric Computing (EGC) technology. It introduces
4 levels of numerical accuracy, including a ``guaranteed
accuracy level''. A library currently under construction
will have the following features:

(1) Usability: there is no change in
programming behavior for achieving EGC algorithms.

(2) Efficiency: by working with compiler experts and
exploiting the Trimaran system (a state-of-art
compiler infrastructure), we attack efficiency
not only at the algorithmic level, but at the
level of compiler front- and back-ends.

(3) Impact: by collaborating with teams at SUNY Stony Brook
(Glimm and Mitchell), UNC Chapel Hill (Manocha and Lin)
and Brown (Preparata and Tamassia), and their industrial
and laboratory partners, we ensure that this work has
immediate impact on practice.
Open problems for each of these items are discussed.

A wavelet approach to foveating images (March 1997)
(with E.-C. Chang and T.-J. Yen)
We investigate realtime techniques for visualization
of very large images over a ``thinwire''.
An example is map servers on the internet.
Current map servers on the internet have the following properties:

- Visual discontinuities in zooming/panning

- Small viewing window

- Distinctly non-realtime response.

We have constructed an image server
that can effectively overcome these limitations.
The key to our approach is the use of foveation techniques,
which are in turn based on wavelets.

Roundness Classification (1997)
(with K. Mehlhorn and T.C. Shermer)
We present a ``complete''
3-step procedure for bounding the relative roundness
of a given planar object $I$. This procedure
is based on the probing model.
Our method is efficient (being linear time) and extremely simple.
It can be used in contemporary Coordinate
Measuring Systems (CMS) technology.
This paper also introduces the concepts of {\em relative
roundness} $\rho(I)\ge 0$ and {\em near-center} of an object $I$.
(This paper supersedes in part the paper
``Probing for near-centers and estimating relative roundness''
below.)

A wavelet approach to foveating images (1997)
(with E.-C. Chang)
Motivated by applications of foveated images in visualization, we
introduce the foveation transform
of an image. We study the basic properties of
these transforms using the multiresolution framework of Mallat.
We also consider practical methods
of realizing such transforms. In particular, we
introduce a new method for foveating images based
on wavelets. Preliminary experimental results are shown.
For a Java demo, see our
foveated image server.

A simultaneous search problem (1996)
(with E.-C. Chang)
We pose and solve the following search problem
that arises in metrology: we want to locate two numbers
$x,y$ in the unit interval by comparisons. Each ``simultaneous comparison''
is specified by a real number $r$, and the outcomes
for the (usual) comparisons $x:r$ and $y:r$ are given. Based
on the outcomes, the algorithm can formulate another
simultaneous comparison, etc. Let $k\ge 1$ be given.
At any point, we can localize $x$ to some interval $I_x$ and $y$ to some
$I_y$. The goal is to minimize the sum $| I_x|+| I_y|$ after $k$
simultaneous comparisons. Let $U_k$ denote the minimax
bound on $| I_x|+| I_y|$ after $k$ simultaneous comparisons.
We derive exact bounds on $U_k$. The asymptotic behaviour
of $U_k$ depends on whether $k$ is odd or even.

(with Ee-Chien Chang)
The classification problem is fundamental in
the metrology of Geometric Tolerancing:
given a part $B$, is it within some tolerance $\Phi$?
The ``standard methodology'' for the classification problem
comprises three steps:
sampling $\sigma$,
algorithmic step $\alpha$
and policy decision $\pi$.
The issues we examine are posed around this methodology,
using a simple 1-dimensional problem as case study.

(with T.Asano and D.Kirkpatrick)
We prove that optimizing the orbit length of a fixed
point in the interior of a rod moving amidst polygonal
obstacles in the plane is $NP$-hard. We also present
approximation algorithms.

(with Joonsoo Choi)
We prove that for any collection of pairwise disjoint axes-parallel
boxes in $d$-dimensions, and for any source $s$ and target $t$
not inside the boxes, there exists a coordinate direction $\phi$
such that every rectilinear geodesic (shortest path) from
$s$ to $t$ is monotonic along direction $\phi$. Previously,
this result was known in 2- and 3-dimensions.

(with E. Schoemer, J. Sellen and M. Teichmann)
We introduce a general
technique for transforming non-linear problems
into a higher dimensional linear problem. Using
this, we show the best current complexity bounds
for computing the smallest-radius cylinder that
encloses an input set of points in 3-space.

(with T. Shermer)
We give a simple and efficient procedure for estimating the
relative roundness of an object. It is the first ``complete''
procedure that integrates sampling, computation
and estimation steps. We introduce the
concept of ``near center'' and show that 6 probes
suffice to locate such a point.
(Presented at
1995 ASME Workshop on Tolerancing & Metrology,
June 21-23, 1995, Precision Engineering Laboratory,
University of North Carolina, Charlotte)

Describes the application of computational geometry and
exact computation to computational problems in
dimensional tolerancing and metrology. In
Snapshots of Computational and Discrete Geometry, Vol.3,
(eds. D.Avis and J.Bose),
McGill School of Comp.Sci, Tech.Rep. No.SOCS-94.50.
A volume dedicated to G.Toussaint.

(with J.S.Choi) We prove that any $L_1$-shortest
path amidst obstacles comprising disjoint boxes in $3$-space
must be monotone in at least one coordinate direction.
Some algorithmic consequences are derived.
(appeared in 11th Symp.CG'95, and the PhD thesis
of Joonsoo Choi)

(with J.S.Choi and J.Sellen)
We introduce the concept of precision-sensitive
algorithms and apply this to the Euclidean shortest
path problem in $3$-space.
(In 11th ACM Symp.CG'95, and the PhD thesis
of Joonsoo Choi)

We revisit a problem of Papadimitriou,
and show how to compute epsilon-shortest paths
amidst polyhedral obstacles in polynomial time
in the true bit-complexity model. An alternative
subdivision scheme is also introduced.

(with T.Dub\'e)
Demonstrates the usefulness of big-floats in exact computation --
despite the fact that big-float numbers are inherently
approximations. Specifically, although
comparisons involving square roots can be
reduced to integer operations, there are too many
cases (a fact not often noted),
and doing so is overall less efficient than the use of
approximations accompanied by root bounds.
Also describes our implementation of a
big-float package.

(with T.Dub\'e)
In Computing in Euclidean Geometry
(eds. Du and Hwang, World Scientific Press, 2nd Edition, 1995)
A survey of the approaches to non-robustness of geometric
algorithms, and especially the exact computation approach.
Also surveys available big-number packages.

Describes a research program for exact
computation, intended as an antidote to non-robustness problems
in geometric computation. Geometric computation is characterized
as one that constructs a combinatorial structure that is
implicitly defined by numerical data (e.g., a convex hull).

For $n\ge 1$ and $d\ge 2$,
we describe a commutative Thue system that has $\sim 2n$ variables
and $O(n)$ rules, each rule of size $d+O(1)$ and that counts to
$d^{2^{n}}$ in a certain technical sense. This gives a more
``efficient'' alternative to a well-known construction of Mayr and Meyer.
Using this construction, we sharpen the known
double-exponential lower bounds
for the maximum degrees $D(n,d), I(n,d), S(n,d)$
associated (respectively) with Gr\"obner bases,
ideal membership problem and the syzygy basis problem:

$$D(n,d)\ge S(n,d)\ge d^{2^{m}},\quad I(n,d)\ge d^{2^{m}} $$

where $m\sim n/2$, and $n,d$ sufficiently large. For comparison,
it was known that $D(n,d)\le d^{2^{n}}$ and $I(n,d)\le (2d)^{2^{n}}$.

We present a framework for the
asymptotically fast Half-GCD (HGCD) algorithms,
based on properties of the norm. This framework
covers both the integer and the polynomial case.
Two benefits of our approach are
(a) a simplified (and first published) correctness proof
of the polynomial HGCD algorithm,
and (b) another explicit integer HGCD algorithm.

Minimal circumscribing simplices
We analyze simplices with locally minimal volume
that circumscribe a convex polyhedron in $\RR^d$.
A necessary condition is that all centroids of facets of
the simplex lie on the circumscribed polyhedron.
We give a complete classification of the locally minimal
circumscribing simplices whose facets have contact with
the polytope either of maximal dimension ($d-1$, flush facet)
or of minimal dimension ($0$, contact at facet centroid).
But there can be locally minimal circumscribing simplices
whose facets have contacts of neither maximal nor minimal
dimension. In 3-space, we give a complete necessary
and sufficient classification of locally minimal simplices.

Gives a constructive, elementary proof of Robbiano's characterization
of admissible orderings. Based on this characterization, we give
a tight double-exponential bound on the complexity of the normal
form algorithm for
any admissible ordering. With a simple refinement of the normal
form algorithm, we improve this to a single exponential.