Timothy M. Chan's Publications: Range searching


Adaptive and approximate orthogonal range counting

(with Bryan T. Wilkinson)

We present three new results on one of the most basic problems in geometric data structures, 2-D orthogonal range counting. All the results are in the w-bit word RAM model.
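
As background, here is a minimal offline baseline for 2-D dominance counting, written as a Python sketch (it is not one of the paper's data structures, and the function names are purely illustrative): sort the points and the query corners by x, sweep, and maintain a Fenwick tree over y-ranks, answering a batch of q queries in O((n + q) log n) total time.

    from bisect import bisect_right

    def offline_dominance_count(points, queries):
        """For each query (qx, qy), count the points with x <= qx and y <= qy.

        Standard sweep + Fenwick-tree baseline; not the adaptive/approximate
        word-RAM structures of the paper.
        """
        ys = sorted({y for _, y in points})
        rank = {y: i + 1 for i, y in enumerate(ys)}      # 1-based y-ranks
        tree = [0] * (len(ys) + 1)                       # Fenwick (BIT) array

        def add(i):                                      # insert one point at y-rank i
            while i < len(tree):
                tree[i] += 1
                i += i & (-i)

        def prefix(i):                                   # inserted points with y-rank <= i
            s = 0
            while i > 0:
                s += tree[i]
                i -= i & (-i)
            return s

        pts = sorted(points)                             # sweep by x-coordinate
        order = sorted(range(len(queries)), key=lambda q: queries[q][0])
        ans, p = [0] * len(queries), 0
        for q in order:
            qx, qy = queries[q]
            while p < len(pts) and pts[p][0] <= qx:
                add(rank[pts[p][1]])
                p += 1
            ans[q] = prefix(bisect_right(ys, qy))
        return ans

A count over a general rectangle [x1, x2] x [y1, y2] then follows from four such dominance counts by inclusion-exclusion.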


Linear-space data structures for range minority query in arrays

(with Stephane Durocher, Matthew Skala, and Bryan T. Wilkinson)

We consider range queries in arrays that search for low-frequency elements: least frequent elements and alpha-minorities. An alpha-minority of a query range is an element that occurs in the range with multiplicity at most an alpha fraction of the range's size. Our data structure for the least frequent element range query problem requires O(n) space, O(n^{3/2}) preprocessing time, and O(sqrt{n}) query time. A reduction from boolean matrix multiplication to this problem shows the hardness of simultaneously improving both the preprocessing time and the query time. Our data structure for the alpha-minority range query problem requires O(n) space and O(1/alpha) query time, and allows alpha to be specified at query time.
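
For concreteness, a brute-force Python reference for the two query semantics (quadratic per query, nothing like the linear-space structures above; the function names are illustrative):

    from collections import Counter

    def alpha_minority(A, i, j, alpha):
        """Return an element of A[i..j] with multiplicity at most
        alpha * (j - i + 1), or None if no alpha-minority exists.
        Brute force for the semantics only; the paper's structure
        answers this in O(1/alpha) time with O(n) space."""
        freq = Counter(A[i:j + 1])
        limit = alpha * (j - i + 1)
        for x, c in freq.items():
            if c <= limit:
                return x
        return None

    def least_frequent(A, i, j):
        """Return a least frequent element of A[i..j] (brute force)."""
        freq = Counter(A[i:j + 1])
        return min(freq, key=freq.get)

For example, on A = [3, 1, 3, 2, 3, 3, 1], alpha_minority(A, 0, 6, 1/3) may return 1 (multiplicity 2 <= 7/3), and least_frequent(A, 0, 6) returns 2.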


Linear-space data structures for range mode query in arrays

(with Stephane Durocher, Kasper Green Larsen, Jason Morrison, and Bryan T. Wilkinson)

A mode of a multiset S is an element a in S of maximum multiplicity; that is, a occurs at least as frequently as any other element in S. Given an array A[1:n] of n elements, we consider a basic problem: constructing a static data structure that efficiently answers range mode queries on A. Each query consists of an input pair of indices (i, j) for which a mode of A[i:j] must be returned. The best previous data structure with linear space, by Krizanc, Morin, and Smid (ISAAC 2003), requires O(sqrt{n} log log n) query time. We improve their result and present an O(n)-space data structure that supports range mode queries in O(sqrt{n / log n}) worst-case time. Furthermore, we present strong evidence that a query time significantly below sqrt{n} cannot be achieved by purely combinatorial techniques; we show that boolean matrix multiplication of two sqrt{n} by sqrt{n} matrices reduces to n range mode queries in an array of size O(n). Additionally, we give linear-space data structures for orthogonal range mode in higher dimensions (queries in near O(n^{1-1/(2d)}) time) and for halfspace range mode in higher dimensions (queries in O(n^{1-1/d^2}) time).
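
As a point of reference, here is a Python sketch of the classical block-decomposition approach, with roughly O(n) space, O(n^{3/2}) preprocessing, and O(sqrt{n} log n) query time as written; it is not the O(sqrt{n / log n}) structure of the paper, and the class and method names are illustrative.

    import math
    from bisect import bisect_left, bisect_right
    from collections import defaultdict

    class RangeMode:
        """Block-decomposition range mode structure (illustrative baseline)."""

        def __init__(self, A):
            self.A = A
            n = len(A)
            self.t = max(1, math.isqrt(n))              # block size ~ sqrt(n)
            self.pos = defaultdict(list)                # occurrence lists per value
            for idx, v in enumerate(A):
                self.pos[v].append(idx)
            nb = (n + self.t - 1) // self.t             # number of blocks
            # block_mode[bi][bj] = (mode, multiplicity) over blocks bi..bj
            self.block_mode = [[(None, 0)] * nb for _ in range(nb)]
            for bi in range(nb):
                freq, best_v, best_f = defaultdict(int), None, 0
                for bj in range(bi, nb):
                    for idx in range(bj * self.t, min((bj + 1) * self.t, n)):
                        freq[A[idx]] += 1
                        if freq[A[idx]] > best_f:
                            best_v, best_f = A[idx], freq[A[idx]]
                    self.block_mode[bi][bj] = (best_v, best_f)

        def _count(self, v, i, j):
            """Multiplicity of v in A[i..j], by binary search on its occurrence list."""
            p = self.pos[v]
            return bisect_right(p, j) - bisect_left(p, i)

        def query(self, i, j):
            """Return a mode of A[i..j] (inclusive indices)."""
            t = self.t
            bi, bj = (i + t - 1) // t, (j + 1) // t - 1  # blocks fully inside [i, j]
            best_v, best_f = None, 0
            if bi > bj:                                  # range shorter than one block
                candidates = range(i, j + 1)
            else:
                best_v, _ = self.block_mode[bi][bj]
                best_f = self._count(best_v, i, j)
                candidates = list(range(i, bi * t)) + list(range((bj + 1) * t, j + 1))
            for idx in candidates:                       # elements of the partial blocks
                f = self._count(self.A[idx], i, j)
                if f > best_f:
                    best_v, best_f = self.A[idx], f
            return best_v

The correctness argument is the standard one: a mode of A[i..j] either appears in one of the two partial blocks, in which case it is recounted exactly as a candidate, or lies entirely within the spanned full blocks, in which case it is dominated by the precomputed span mode.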


Orthogonal range searching on the RAM, revisited

(with Kasper Green Larsen and Mihai Patrascu)

We present a number of new results on one of the most extensively studied topics in computational geometry, orthogonal range searching. All our results are in the standard word RAM model.


Optimal partition trees

We revisit one of the most fundamental classes of data structure problems in computational geometry: range searching. Back in SoCG'92, Matousek gave a partition tree method for d-dimensional simplex range searching achieving O(n) space and O(n^{1-1/d}) query time. Although this method is generally believed to be optimal, it is complicated and requires O(n^{1+eps}) preprocessing time for any fixed eps > 0. An earlier method by Matousek (SoCG'91) requires O(n log n) preprocessing time but O(n^{1-1/d} polylog n) query time. We give a new method that achieves simultaneously O(n log n) preprocessing time, O(n) space, and O(n^{1-1/d}) query time with high probability. Our method has several further advantages.

Numerous consequences follow (e.g., improved results for computing spanning trees with low crossing number, ray shooting among line segments, intersection searching, exact nearest neighbor search, linear programming queries, finding extreme points, ...).


Counting inversions, offline orthogonal range counting, and related problems

(with Mihai Patrascu)

We give an O(n sqrt{lg n})-time algorithm for counting the number of inversions in a permutation on n elements. This improves a long-standing previous bound of O(n lg n / lg lg n) that followed from Dietz's data structure [WADS'89], and answers a question of Andersson and Petersson [SODA'95]. As Dietz's result is known to be optimal for the related dynamic rank problem, our result demonstrates a significant improvement in the offline setting. Our new technique is quite simple: we perform a "vertical partitioning" of a trie (akin to van Emde Boas trees), and use ideas from external memory. However, the technique finds numerous applications, for example to offline orthogonal range counting and related problems.
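
For contrast, here is the textbook O(n lg n) merge-sort baseline for counting inversions, as a Python sketch (it is not the vertical-partitioning technique of the paper):

    def count_inversions(a):
        """Return (sorted copy of a, number of pairs i < j with a[i] > a[j]).
        Classic divide-and-conquer via merge sort, O(n log n)."""
        if len(a) <= 1:
            return list(a), 0
        mid = len(a) // 2
        left, inv_l = count_inversions(a[:mid])
        right, inv_r = count_inversions(a[mid:])
        merged, inv, i, j = [], inv_l + inv_r, 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
                inv += len(left) - i          # every remaining left element is larger
        merged += left[i:] + right[j:]
        return merged, inv

For example, count_inversions([3, 1, 2]) returns ([1, 2, 3], 2).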

As a bonus, we also give a simple (1+epsilon)-approximation algorithm for counting inversions that runs in linear time, improving the previous O(n lg lg n) bound by Andersson and Petersson.


Optimal halfspace range reporting in three dimensions

(with Peyman Afshani)

We give the first optimal solution to a standard problem in computational geometry: three-dimensional halfspace range reporting. We show that n points in 3-d can be stored in a linear-space data structure so that all k points inside a query halfspace can be reported in O(log n + k) time. The data structure can be built in O(n log n) expected time. The previous methods with optimal query time required superlinear (O(n log log n)) space.

We also mention consequences, for example, to higher dimensions and to external-memory data structures. As an aside, we partially answer another open question concerning the crossing number in Matousek's shallow partition theorem in the 3-d case (a tool used in many known halfspace range reporting methods).


On approximate range counting and depth

(with Peyman Afshani)

Improving previous methods by Aronov and Har-Peled (SODA'05) and Kaplan and Sharir (SODA'06), we present a randomized data structure of O(n) expected size which can answer 3D approximate halfspace range counting queries in O(log (n/k)) expected time, where k is the actual value of the count. This is the first optimal method for the problem in the standard decision tree model; moreover, unlike previous methods, the new method is Las Vegas instead of Monte Carlo. In addition, we describe new results for several related problems, including approximate Tukey depth queries in 3D, approximate regression depth queries in 2D, and approximate linear programming with violations in low dimensions.


Dynamic connectivity for axis-parallel rectangles

(with Peyman Afshani)

In this paper we give a fully dynamic data structure to maintain the connectivity of the intersection graph of n axis-parallel rectangles. The amortized update time (insertion and deletion of rectangles) is O(n^{10/11} polylog n) and the query time (deciding whether two given rectangles are connected) is O(1). This slightly improves the update time of the previous method (O(n^{0.94})) while drastically reducing its query time (previously near O(n^{1/3})). Our method does not use fast matrix multiplication results and supports a wider range of queries.
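
For reference, the static version of the problem has a simple quadratic Python baseline (pairwise intersection tests plus union-find); it supports neither insertions nor deletions and is shown only to fix the problem statement. The names are illustrative.

    class RectangleConnectivity:
        """Static baseline: O(n^2) intersection tests + union-find;
        connectivity queries in near-constant time. Not the dynamic
        structure of the paper."""

        def __init__(self, rects):
            # each rectangle is (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2
            self.parent = list(range(len(rects)))
            for i in range(len(rects)):
                for j in range(i + 1, len(rects)):
                    if self._intersect(rects[i], rects[j]):
                        self._union(i, j)

        @staticmethod
        def _intersect(r, s):                            # do two closed rectangles overlap?
            return r[0] <= s[2] and s[0] <= r[2] and r[1] <= s[3] and s[1] <= r[3]

        def _find(self, i):
            while self.parent[i] != i:
                self.parent[i] = self.parent[self.parent[i]]   # path halving
                i = self.parent[i]
            return i

        def _union(self, i, j):
            self.parent[self._find(i)] = self._find(j)

        def connected(self, i, j):
            return self._find(i) == self._find(j)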


Random sampling, halfspace range reporting, and construction of (<= k)-levels in three dimensions

Given n points in three dimensions, we show how to answer halfspace range reporting queries in O(log n + k) expected time for an output size k. Our data structure can be preprocessed in optimal O(n log n) expected time. We apply this result to obtain the first optimal randomized algorithm for the construction of the (<= k)-level in an arrangement of n planes in three dimensions. The algorithm runs in O(n log n + nk^2) expected time. Our techniques are based on random sampling. Applications in two dimensions include an improved data structure for "k nearest neighbors" queries, and an algorithm that constructs the order-k Voronoi diagram in O(n log n + nk log k) expected time.


On enumerating and selecting distances

Given an n-point set, the problems of enumerating the k closest pairs and selecting the k-th smallest distance are revisited. For the enumeration problem, we give simpler randomized and deterministic algorithms with O(n log n + k) running time in any fixed-dimensional Euclidean space. For the selection problem, we give a randomized algorithm with running time O(n log n + n^{2/3} k^{1/3} log^{5/3} n) in the Euclidean plane. We also describe output-sensitive results for halfspace range counting that are of use in more general distance selection problems. None of our algorithms requires parametric search.
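
A quadratic brute-force Python reference for the two problems (illustrative names; far from the O(n log n + k) and subquadratic bounds above):

    from heapq import nsmallest
    from itertools import combinations
    from math import dist

    def k_closest_pairs(points, k):
        """Return the k closest pairs of points, brute force in O(n^2 log k)."""
        return nsmallest(k, combinations(points, 2), key=lambda pq: dist(*pq))

    def kth_smallest_distance(points, k):
        """Return the k-th smallest interpoint distance (1-indexed), brute force."""
        return sorted(dist(p, q) for p, q in combinations(points, 2))[k - 1]

For example, with points = [(0, 0), (1, 0), (3, 0)], k_closest_pairs(points, 1) returns [((0, 0), (1, 0))] and kth_smallest_distance(points, 2) returns 2.0.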


Fixed-dimensional linear programming queries made easy

We derive two results from Clarkson's randomized algorithm for linear programming in a fixed dimension d. The first is a simple general method that reduces the problem of answering linear programming queries to the problem of answering halfspace range queries. For example, this yields a randomized data structure with O(n) space and O(n^{1-1/floor(d/2)} 2^{O(log* n)}) query time for linear programming on n halfspaces (d > 3). The second result is a simpler proof of the following: a sequence of q linear programming queries on n halfspaces can be answered in O(n log q) time, if q <= n^{alpha_d} for a certain constant alpha_d > 0. Unlike previous methods, our algorithms do not require parametric searching.
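
To illustrate what a single linear programming query asks for, here is a brute-force 2-D Python sketch that checks all vertices formed by pairs of constraints, O(n^3) per query; it is unrelated to the Clarkson-based reductions above and ignores unbounded or degenerate inputs.

    from itertools import combinations

    def lp_query_2d(halfplanes, c, eps=1e-9):
        """Maximize c . x over {x : a . x <= b for every (a, b) in halfplanes}.
        Brute force over constraint-pair vertices; returns the best feasible
        vertex, or None if no such vertex exists."""
        best, best_val = None, float("-inf")
        for (a1, b1), (a2, b2) in combinations(halfplanes, 2):
            det = a1[0] * a2[1] - a1[1] * a2[0]
            if abs(det) < eps:
                continue                                  # parallel constraint pair
            x = (b1 * a2[1] - b2 * a1[1]) / det           # Cramer's rule
            y = (a1[0] * b2 - a2[0] * b1) / det
            if all(a[0] * x + a[1] * y <= b + eps for a, b in halfplanes):
                val = c[0] * x + c[1] * y
                if val > best_val:
                    best, best_val = (x, y), val
        return best

For example, with the unit square [((1, 0), 1), ((-1, 0), 0), ((0, 1), 1), ((0, -1), 0)] and objective c = (1, 1), the query returns (1.0, 1.0).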


Copyright Notice

The documents contained in this directory are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Timothy M. Chan (Last updated 29 Mar 2014)