
"... We use random sampling for several new geometric algorithms. The algorithms are "Las Vegas," and their expected bounds are with respect to the random behavior of the algorithms. These algorithms follow from new general results giving sharp bounds for the use of random subsets in geometric ..."

We use random sampling for several new geometric algorithms. The algorithms are "Las Vegas," and their expected bounds are with respect to the random behavior of the algorithms. These algorithms follow from new general results giving sharp bounds for the use of random subsets in geometric algorithms. These bounds show that random subsets can be used optimally for divide-and-conquer, and also give bounds for a simple, general technique for building geometric structures incrementally. One new algorithm reports all the intersecting pairs of a set of line segments in the plane, and requires O(A + n log n) expected time, where A is the number of intersecting pairs reported. The algorithm requires O(n) space in the worst case. Another algorithm computes the convex hull of n points in E^d in O(n log n) expected time for d = 3, and O(n^⌊d/2⌋) expected time for d > 3. The algorithm also gives fast expected times for random input points. Another algorithm computes the diameter of a set of n...
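As a point of reference for the segment-intersection result, here is a hedged sketch of the problem being solved, not the paper's algorithm: a standard orientation-based segment intersection test and a brute-force O(n^2) enumeration of intersecting pairs. The paper's randomized algorithm reports the same A pairs in O(A + n log n) expected time; all names here are illustrative.

```python
import itertools

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(s, t):
    """True if closed segments s and t share a point (standard CLRS-style test)."""
    p1, p2 = s
    p3, p4 = t
    d1 = orient(p3, p4, p1)
    d2 = orient(p3, p4, p2)
    d3 = orient(p1, p2, p3)
    d4 = orient(p1, p2, p4)
    if d1 * d2 < 0 and d3 * d4 < 0:        # proper crossing
        return True
    def on_seg(a, b, c):                    # c collinear with a-b: inside bbox?
        return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))
    if d1 == 0 and on_seg(p3, p4, p1): return True
    if d2 == 0 and on_seg(p3, p4, p2): return True
    if d3 == 0 and on_seg(p1, p2, p3): return True
    if d4 == 0 and on_seg(p1, p2, p4): return True
    return False

def all_intersecting_pairs(segments):
    """Brute-force O(n^2) baseline; the surveyed algorithm is output-sensitive."""
    return [(i, j)
            for (i, s), (j, t) in itertools.combinations(enumerate(segments), 2)
            if segments_intersect(s, t)]
```

The brute-force loop is only a correctness baseline; the expected-time bound above comes from random sampling, which this sketch deliberately omits.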

"... Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analysis are usually very similar and rely on an exponential potential function. We present a simple meta algorithm that unifies ..."

Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. We present a simple meta algorithm that unifies these disparate algorithms and derives them as simple instantiations of the meta algorithm.
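The meta algorithm the abstract refers to is easiest to see in its best-known instantiation, prediction with expert advice (the Hedge rule). A minimal sketch, assuming losses in [0, 1] and the exponential update w_i ← w_i · e^(−η · loss_i); the function name and the value of η are illustrative, not taken from the paper:

```python
import math

def multiplicative_weights(loss_rounds, eta=0.5):
    """Hedge-style multiplicative weights: keep one weight per 'expert',
    play the induced distribution each round, then shrink each weight by
    exp(-eta * loss).  Returns the final weights and the algorithm's
    cumulative expected loss."""
    n = len(loss_rounds[0])
    w = [1.0] * n
    total_loss = 0.0
    for losses in loss_rounds:
        s = sum(w)
        p = [wi / s for wi in w]                      # current distribution
        total_loss += sum(pi * li for pi, li in zip(p, losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return w, total_loss
```

The exponential potential function mentioned in the abstract is exactly the sum of the weights: bounding how fast it shrinks shows the algorithm's loss exceeds the best expert's by at most O(η T + (ln n)/η).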

...r or low-discrepancy set. Formally, the analysis is similar to our analysis of the Set Cover algorithm in Section 3.4. Clarkson used this idea to give a deterministic algorithm for Linear Programming [Cla88]. Following Clarkson, Brönnimann and Goodrich use similar methods to find Set Covers for hypergraphs with small VC dimension [BG94]. Recently, multiplicative weights arguments were also used in conte...

"... We show that with recently developed derandomization techniques, one can convert Clarkson's randomized algorithm for linear programming in fixed dimension into a lineartime deterministic one. The constant of proportionality is d O(d) , which is better than for previously known such algorithms. ..."

We show that with recently developed derandomization techniques, one can convert Clarkson's randomized algorithm for linear programming in fixed dimension into a linear-time deterministic one. The constant of proportionality is d^O(d), which is better than for previously known such algorithms. We show that the algorithm works in a fairly general abstract setting, which allows us to solve various other problems (such as finding the maximum volume ellipsoid inscribed into the intersection of n halfspaces) in linear time.

"... Abstract. Partitioning a multi-dimensional data set into rectangular partitions subject to certain constraints is an important problem that arises in many database applications, including histogram-based selec-tivity estimation, load-balancing, and construction of index structures. While provably op ..."

Abstract. Partitioning a multi-dimensional data set into rectangular partitions subject to certain constraints is an important problem that arises in many database applications, including histogram-based selectivity estimation, load-balancing, and construction of index structures. While provably optimal and efficient algorithms exist for partitioning one-dimensional data, the multi-dimensional problem has received less attention, except for a few special cases. As a result, the heuristic partitioning techniques that are used in practice are not well understood, and come with no guarantees on the quality of the solution. In this paper, we present algorithmic and complexity-theoretic results for the fundamental problem of partitioning a two-dimensional array into rectangular tiles of arbitrary size in a way that minimizes the number of tiles required to satisfy a given constraint. Our main results are approximation algorithms for several partitioning problems that provably approximate the optimal solutions within small constant factors, and that run in linear or close to linear time. We also establish the NP-hardness of several partitioning problems; it is therefore unlikely that there are efficient, i.e., polynomial time, algorithms for solving these problems exactly. We also discuss a few applications in which partitioning problems arise. One of the applications is the problem of constructing multi-dimensional histograms. Our results, for example, give an efficient algorithm to construct the V-Optimal histograms which are known to be the most accurate histograms in several selectivity estimation problems. Our algorithms are the first to provide guaranteed bounds on the quality of the solution.
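The one-dimensional case the abstract calls well understood can be made concrete with the textbook dynamic program for V-Optimal histograms: split a sequence into k contiguous buckets minimizing the total sum of squared errors from each bucket's mean. This is a didactic sketch of the classical 1-D algorithm, not the paper's multi-dimensional tiling method:

```python
def v_optimal(values, k):
    """Optimal 1-D V-Optimal histogram cost via O(k n^2) dynamic programming.
    Prefix sums give each bucket's sum of squared errors in O(1)."""
    n = len(values)
    ps = [0.0] * (n + 1)    # prefix sums of values
    ps2 = [0.0] * (n + 1)   # prefix sums of squared values
    for i, v in enumerate(values):
        ps[i + 1] = ps[i] + v
        ps2[i + 1] = ps2[i] + v * v

    def sse(i, j):
        """Sum of squared errors of the bucket values[i:j] about its mean."""
        s = ps[j] - ps[i]
        m = j - i
        return ps2[j] - ps2[i] - s * s / m

    INF = float("inf")
    # dp[j][b] = min cost of covering the first j values with exactly b buckets
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for j in range(1, n + 1):
        for b in range(1, min(j, k) + 1):
            dp[j][b] = min(dp[i][b - 1] + sse(i, j) for i in range(b - 1, j))
    return dp[n][k]
```

The multi-dimensional analogue is where the paper's contribution lies: no comparably simple exact DP is known there, which is why the paper resorts to approximation algorithms and NP-hardness proofs.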

...n analogous fashion. ⊓⊔ Lemma 5. The loop in Step (2) of MAX-pxp terminates after O(p log n) iterations. Proof: The proof is similar to that of Lemma 3.4 of [4], which itself follows the arguments in [7, 22, 34]. Note that the weight w(X) is initially 2n, and that it increases by at most a factor of (1 + ε/(2(2+ε)p′)) in each iteration, since in each iteration we multiply 2 An exception is t...

...r, a sample of size Θ((1/ε^2)(d log(O(1)/ε) + log(1/δ))) is required ([VC71]; see also [13, 17, 23] and §6). Hence the time to compute such an approximate center is O((d^2/ε^2)(d log(d/ε) + log(1/δ)))^d [3]. This bound is constant in that it does not depend on n, but it has a constant factor exponential in d. Alternatively, a deterministic linear-time sampling algorithm can be used in place of random sa...

"... Improving on a recent breakthrough of Sharir, we find two minimum-radius circular disks covering a planar point set, in randomized expected time O(n log 2 n). 1 Introduction The k-center problem for a point set S is to find k points (called centers, usually not required to be a subset of S) such ..."

Improving on a recent breakthrough of Sharir, we find two minimum-radius circular disks covering a planar point set, in randomized expected time O(n log^2 n).

1 Introduction

The k-center problem for a point set S is to find k points (called centers, usually not required to be a subset of S) such that the maximum distance from any point in S to the nearest center is minimized. A case of particular interest is the planar two-center problem [4], which can be viewed less abstractly as one of covering a set of points in the plane by two congruent circular disks, in such a way as to minimize the radius r* of the disks. For a long time the best algorithms for this problem had time bounds of the form O(n^2 log^c n) [1, 5, 12, 11]. In a recent breakthrough, Sharir [16] greatly improved all of these algorithms, giving a two-center algorithm with running time O(n log^c n). The basic idea is to search for different types of partition depending on the relative positions of the two disk...

"... An Abstract Optimization Problem (AOP) is a triple (H, <, Φ) where H is a finite set, < a total order on 2 H and Φ an oracle that, for given F ⊆ G ⊆ H, either reports that F = min<{F ′ | F ′ ⊆ G} or returns a set F ′ ⊆ G with F ′ < F. To solve the problem means to find the minimum set ..."

An Abstract Optimization Problem (AOP) is a triple (H, <, Φ) where H is a finite set, < a total order on 2^H, and Φ an oracle that, for given F ⊆ G ⊆ H, either reports that F = min_<{F′ | F′ ⊆ G} or returns a set F′ ⊆ G with F′ < F. To solve the problem means to find the minimum set in 2^H. We present a randomized algorithm that solves any AOP with an expected number of at most e^(2√n + O(n^(1/4) ln n)) oracle calls, n = |H|. In contrast, any deterministic algorithm needs to make 2^n − 1 oracle calls in the worst case. The algorithm is applied to the problem of finding the distance between two n-vertex (or n-facet) convex polyhedra in d-space, and to the computation of the smallest ball containing n points in d-space; for both problems we give the first subexponential bounds in the arithmetic model of computation.
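For the smallest-ball application, the flavor of randomized algorithms in this family can be seen in Welzl's recursion for the planar case, the smallest enclosing circle. This is a minimal didactic sketch, not the paper's subexponential AOP algorithm; it assumes general position (no three candidate boundary points collinear), and the list slicing makes it an O(n^2) teaching version of an expected-linear algorithm:

```python
import math
import random

def circle_from(boundary):
    """Circle determined by 0, 1, 2, or 3 boundary points.
    Three points are assumed non-collinear (general position)."""
    if not boundary:
        return (0.0, 0.0), 0.0
    if len(boundary) == 1:
        return boundary[0], 0.0
    if len(boundary) == 2:
        (x1, y1), (x2, y2) = boundary
        c = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        return c, math.dist(c, boundary[0])
    (ax, ay), (bx, by), (cx, cy) = boundary          # circumcircle of 3 points
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    c = (ux, uy)
    return c, math.dist(c, boundary[0])

def _welzl(pts, boundary):
    if not pts or len(boundary) == 3:
        return circle_from(boundary)
    p = pts[0]
    center, r = _welzl(pts[1:], boundary)
    if math.dist(center, p) <= r + 1e-9:             # p already covered
        return center, r
    return _welzl(pts[1:], boundary + [p])           # p must lie on the circle

def smallest_enclosing_circle(points):
    pts = list(points)
    random.shuffle(pts)                              # randomness drives the expected bound
    return _welzl(pts, [])
```

The key structural fact — the optimum is determined by at most three points — is what both this recursion and the abstract's oracle-based framework exploit in higher dimensions.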

...in n, see Dyer [7], Clarkson [1], Seidel [22], Sharir & Welzl [23]. The currently best (randomized) algorithm combines the recent subexponential results in [12] and [19] with an algorithm by Clarkson [2]. This gives a bound of O(d^2 n + e^O(√(d log d))). The best deterministic algorithm is due to Chazelle & Matoušek [4] and is obtained by 'derandomizing' Clarkson's algorithm. Its runtime is O(d^O(d)...

"... Research conducted over the past fifteen years has amply demonstrated the advantages of algorithms that make random choices in the course of their execution. This paper presents a wide variety of examples intended to illustrate the range of applications of randomized algorithms, and the general prin ..."

Research conducted over the past fifteen years has amply demonstrated the advantages of algorithms that make random choices in the course of their execution. This paper presents a wide variety of examples intended to illustrate the range of applications of randomized algorithms, and the general principles and approaches that are of greatest use in their construction. The examples are drawn from many areas, including number theory, algebra, graph theory, pattern matching, selection, sorting, searching, computational geometry, combinatorial enumeration, and parallel and distributed computation.

1. Foreword

This paper is derived from a series of three lectures on randomized algorithms presented by the author at a conference on combinatorial mathematics and algorithms held at George Washington University in May, 1989. The purpose of the paper is to convey, through carefully selected examples, an understanding of the nature of randomized algorithms, the range of their applications and the principles underlying their construction. It is not our goal to be encyclopedic, and thus the paper should not be regarded as a comprehensive survey of the subject. This paper would not have come into existence without the magnificent efforts of Professor Rodica Simion, the organizer of the conference at George Washington University. Working from the tape-recorded lectures, she created a splendid transcript that served as the first draft of the paper. Were it not for her own reluctance she would be listed as my coauthor.

...7.2. Linear programming with a fixed number of variables

This is another elegant example of the use of randomization in computational geometry. It is due to Clarkson [12]. The linear programming problem is, of course, to minimize a linear objective function c·x subject to a system of linear inequalities Ax ≤ b. The data of the problem consists of the d-dimensional vec...
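Clarkson's use of randomization for low-dimensional LP belongs to the same family as Seidel's randomized incremental algorithm, which is easy to sketch in the plane. A hedged illustration, not Clarkson's algorithm itself: constraints a1·x + a2·y ≤ b are inserted in random order; whenever the new constraint is violated, the optimum is recomputed on its boundary line by a one-dimensional LP. The names, tolerances, and the bounding box M are illustrative choices:

```python
import random

EPS = 1e-9

def solve_1d(cd, constraints, lo, hi):
    """Maximize cd*t subject to a*t <= b for each (a, b), with lo <= t <= hi.
    Returns an optimal t, or None if infeasible."""
    for a, b in constraints:
        if abs(a) < EPS:
            if b < -EPS:
                return None                 # 0*t <= b with b < 0: infeasible
        elif a > 0:
            hi = min(hi, b / a)
        else:
            lo = max(lo, b / a)
    if lo > hi + EPS:
        return None
    return hi if cd >= 0 else lo

def lp_2d(c, constraints, M=1e7):
    """Maximize c[0]*x + c[1]*y subject to a1*x + a2*y <= b for each
    (a1, a2, b); the solution is confined to the box [-M, M]^2."""
    cx, cy = c
    x = M if cx >= 0 else -M                # optimum of the empty problem
    y = M if cy >= 0 else -M
    cons = list(constraints)
    random.shuffle(cons)                    # random insertion order
    seen = []
    for a1, a2, b in cons:
        if a1 * x + a2 * y <= b + EPS:      # still satisfied: nothing to do
            seen.append((a1, a2, b))
            continue
        # The new optimum lies on the line a1*x + a2*y = b.
        n2 = a1 * a1 + a2 * a2
        px, py = a1 * b / n2, a2 * b / n2   # a point on the line
        dx, dy = -a2, a1                    # direction along the line
        sub = []                            # earlier constraints, restricted to the line
        for e1, e2, eb in seen:
            sub.append((e1 * dx + e2 * dy, eb - e1 * px - e2 * py))
        for e1, e2, eb in ((1, 0, M), (-1, 0, M), (0, 1, M), (0, -1, M)):
            sub.append((e1 * dx + e2 * dy, eb - e1 * px - e2 * py))
        t = solve_1d(cx * dx + cy * dy, sub, -4 * M, 4 * M)
        if t is None:
            return None                     # the whole program is infeasible
        x, y = px + t * dx, py + t * dy
        seen.append((a1, a2, b))
    return x, y
```

The expected-time argument is the same backwards analysis used throughout this literature: a random constraint is violated by the current optimum with probability O(1/i) at step i, giving expected linear time in fixed dimension.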

"... In the first part of the paper we survey some far-reaching applications of the basic facts of linear programming to the combinatorial theory of simple polytopes. In the second part we discuss some recent developments concerning the simplex algorithm. We describe subexponential randomized pivot ru ..."

In the first part of the paper we survey some far-reaching applications of the basic facts of linear programming to the combinatorial theory of simple polytopes. In the second part we discuss some recent developments concerning the simplex algorithm. We describe subexponential randomized pivot rules and upper bounds on the diameter of graphs of polytopes.

1 Introduction

A convex polyhedron is the intersection P of a finite number of closed halfspaces in R^d. P is a d-dimensional polyhedron (briefly, a d-polyhedron) if the points in P affinely span R^d. A convex d-dimensional polytope (briefly, a d-polytope) is a bounded convex d-polyhedron. Alternatively, a convex d-polytope is the convex hull of a finite set of points which affinely span R^d. A (nontrivial) face F of a d-polyhedron P is the intersection of P with a supporting hyperplane. F itself is a polyhedron of some lower dimension. If the dimension of F is k we call F a k-face of P. The empty set and P itself are...

...pivot rules and recent upper bounds for the diameter of graphs of polytopes. The algorithms we consider should be regarded in the general context of LP algorithms discovered by Megiddo [25], Clarkson [5], Seidel [28], Dyer, Dyer and Frieze [7], and many others. But we will not attempt to give this general picture here. For the use of randomized algorithms in computational geometry the reader is referre...