
Timothy M. Chan (Department of Computer Science, University of Illinois at Urbana-Champaign, USA, tmc@illinois.edu, https://orcid.org/0000-0002-8093-0675; supported in part by NSF Grant CCF-1814026) and Qizheng He (Department of Computer Science, University of Illinois at Urbana-Champaign, USA, qizheng6@illinois.edu). 37th International Symposium on Computational Geometry (SoCG 2021), June 7–11, 2021, Buffalo, NY, USA.

More Dynamic Data Structures for Geometric Set Cover with Sublinear Update Time

Timothy M. Chan    Qizheng He
Abstract

We study geometric set cover problems in dynamic settings, allowing insertions and deletions of points and objects. We present the first dynamic data structure that can maintain an $O(1)$-approximation in sublinear update time for set cover for axis-aligned squares in 2D. More precisely, we obtain randomized update time $O(n^{2/3+\delta})$ for an arbitrarily small constant $\delta>0$. Previously, a dynamic geometric set cover data structure with sublinear update time was known only for unit squares by Agarwal, Chang, Suri, Xiao, and Xue [SoCG 2020]. If only an approximate size of the solution is needed, then we can also obtain sublinear amortized update time for disks in 2D and halfspaces in 3D. As a byproduct, our techniques for dynamic set cover also yield an optimal randomized $O(n\log n)$-time algorithm for static set cover for 2D disks and 3D halfspaces, improving our earlier $O(n\log n(\log\log n)^{O(1)})$ result [SoCG 2020].

keywords:
Geometric set cover, approximation algorithms, dynamic data structures, sublinear algorithms, random sampling

1 Introduction

Approximation algorithms for NP-hard problems and dynamic data structures are two of the major themes studied by the algorithms community. Recently, problems at the intersection of these two threads have gained much attention, and researchers in computational geometry have also started to systematically explore such problems. For example, at SoCG last year, two papers appeared, one on dynamic geometric set cover by Agarwal et al. [3], and another on dynamic geometric independent set by Henzinger, Neumann, and Wiese [23]. In this paper, we continue the study by Agarwal et al. [3] and investigate dynamic data structures for approximating the minimum set cover in natural geometric instances.

Static geometric set cover.

In the static (unweighted) geometric set cover problem, we are given a set $X$ of $O(n)$ points and a set $S$ of $O(n)$ geometric objects, and we want to find the smallest subset of objects in $S$ that covers all points of $X$. The problem is fundamental and has many applications. Let OPT denote the value (i.e., cardinality) of the optimal solution. As the problem is NP-hard for many classes of geometric objects, we are interested in efficient approximation algorithms.

Many classes of objects, such as squares and disks in 2D, objects with linear union complexity, and halfspaces in 3D, admit polynomial-time $O(1)$-approximation algorithms (i.e., computing a solution of size $O(1)\cdot\mathrm{OPT}$), by using $\varepsilon$-nets and LP rounding or the multiplicative weight update (MWU) method [8, 17, 19]. Some classes of objects, such as disks in 2D and halfspaces in 3D, even have PTASs [30] or quasi-PTASs [29], though with large polynomial running time.

Agarwal and Pan [5] worked towards finding faster approximation algorithms that run in near-linear time. They gave randomized $O(n\log^{4}n)$-time algorithms (based on MWU) for computing $O(1)$-approximations, for example, for disks in 2D and halfspaces in 3D. At last year's SoCG [14], we described further improvements to the running time for 2D disks and 3D halfspaces, including a deterministic $O(n\log^{3}n\log\log n)$-time algorithm and a randomized $O(n\log n(\log\log n)^{O(1)})$-time algorithm.

Dynamic geometric set cover.

It is natural to consider the dynamic setting of the geometric set cover problem. Here, we want to support insertions and deletions of points in $X$ as well as insertions and deletions of objects in $S$, while maintaining an approximate solution. Note that the solution may have linear size in the worst case. In the simplest version of the problem, we may just want to output the value of the solution. More strongly, we may want some representation of the solution itself, so that afterwards, the objects in the solution can be reported in constant time per element when needed.

Agarwal et al. [3] gave a number of results on dynamic geometric set cover. They showed that for intervals in 1D, a $(1+\varepsilon)$-approximation can be maintained in $O((1/\varepsilon)n^{\delta})$ time per insertion and deletion of points and intervals. (Throughout the paper, $\delta>0$ denotes an arbitrarily small constant.) In 2D, they had only one main result: a fully dynamic $O(1)$-approximation algorithm for unit axis-aligned squares, with $O(n^{1/2+\delta})$ update time.

New results.

We present several new results on dynamic geometric set cover. The first is a fully dynamic, randomized, $O(1)$-approximation algorithm for the more general case of arbitrary axis-aligned squares in 2D, with $O(n^{2/3+\delta})$ amortized update time. Though our time bound is a little worse than Agarwal et al.'s for the unit square case, the arbitrary square case is more challenging. (The unit square case reduces to the case of dominance ranges, i.e., quadrants, via a standard grid approach; since the union of such ranges forms a "staircase" sequence of vertices, the problem is in some sense "$1.5$-dimensional". In contrast, the arbitrary square problem requires truly "2-dimensional" ideas.)

We then consider the case of halfspaces in 3D. This case is fundamental, as set cover for 2D disks reduces to set cover for 3D halfspaces by the standard lifting transformation [21]. Also, by duality, hitting set for 3D halfspaces is equivalent to set cover for 3D halfspaces, and hitting set for 2D disks reduces to set cover for 3D halfspaces as well.

For 3D halfspaces, we obtain a fully dynamic, randomized algorithm with $O(n^{12/13+\delta})\leq O(n^{0.924})$ amortized update time. Our result here is slightly weaker: it only finds the value of an $O(1)$-approximate solution (which could be good enough in some applications). If a solution itself is required, we can still get sublinear update time as long as OPT is sublinear (below $n^{1-\delta}$). This assumption seems reasonable, since sublinear reporting time is not possible otherwise. (However, we currently do not know how to obtain a stronger time bound of the form $O(n^{\alpha}+\mathrm{OPT})$ with $\alpha<1$ to report a solution for 3D halfspaces.)

Remarks.

Our results are randomized in the Monte Carlo sense: the computed solution may not always be correct, but it is correct with high probability (w.h.p.), i.e., probability at least $1-1/n^{c}$ for an arbitrarily large constant $c$. The error probability bounds hold even when the user has knowledge of the random choices made by the algorithms. (We do not assume an "oblivious adversary"; in fact, the algorithms make a new set of random choices each time they compute a solution, and the data structures themselves are not randomized.)

The update times are better than stated in many cases, notably, when OPT is small or when OPT is large. More precise OPT-sensitive bounds are given in lemmas and theorems throughout the paper. (The $O(n^{2/3+\delta})$ and $O(n^{12/13+\delta})$ bounds are obtained by "balancing".)

We have assumed that we are required to compute a solution after every update. In some (but not all) cases, the update cost is smaller than the cost of computing a solution (this may be useful if we are executing a batch of updates).

Techniques.

Our algorithms are obtained by handling two cases differently: when OPT is small and when OPT is large. Intuitively, the small OPT case is easier since we are generating fewer objects, but the large OPT case also seems potentially easier since we can tolerate a larger additive error when targeting an $O(1)$-factor approximation; so, we are in a "win-win" situation. For our algorithms for 3D halfspaces, we even find it necessary to handle an intermediate case when OPT is medium (aiming for sublinear time for sublinear OPT).

Our algorithms for the small OPT case are based on the previous static MWU algorithms [8, 5, 14]. The adaptation of these static algorithms is not straightforward, and requires using various known techniques in new ways ($(\leq k)$-levels in arrangements, for our 2D square algorithm in Section 3.1, and "augmented" partition trees, for our 3D halfspace algorithm in Section 4.1). The medium case for 3D halfspaces (in Section 4.2) is technically even more challenging (where we use "shallow" partition trees and other ideas).

Our algorithm for squares in the large OPT case (in Section 3.2) is different (not based on MWU), and interestingly uses quadtrees in a non-obvious way. For 3D halfspaces, our algorithm in the large OPT case (in Section 4.3) can compute only the value of an approximate solution, and is based on random sampling. However, the obvious way to use a random sample (just solving the problem on a random subset of points and objects) does not work. We use sampling in a nontrivial way, combining geometric cuttings with planar graph separators.

In some of our algorithms, notably the small OPT algorithms for squares and 3D halfspaces (in Sections 3.1 and 4.1) and the large OPT algorithm for 3D halfspaces (in Section 4.3), the data structure part is “minimal”: we just assume that points and objects are stored separately in standard range searching data structures. We describe sublinear-time algorithms to compute a solution from scratch, using range searching as oracles. Dynamization becomes trivial, since range searching data structures typically are already known to support insertions and deletions. (It also potentially enables other operations like merging sets, or solving the set cover problem for range-restricted subsets of points and subsets of objects.)

The topic of sublinear-time algorithms has received considerable attention in the algorithms community, due to applications to big data (where we want to solve problems without examining the entire input). A similar model of sublinear-time algorithms where the input is augmented with range searching data structures was proposed by Czumaj et al. [20], who presented results on approximating the weight of the Euclidean minimum spanning tree in any constant dimension under this model.

Application to static geometric set cover.

Although we did not intend to revisit the static problem, our techniques lead to a randomized $O(1)$-approximation algorithm for set cover for 3D halfspaces running in $O(n\log n)$ time, which completely eliminates the extra $\log\log n$ factors in our previous result from SoCG'20 [14] and is optimal (in comparison-based models)! This bonus result is interesting in its own right, and is described in Section 5.

2 Review of an MWU Algorithm

We begin by briefly reviewing a known static approximation algorithm for geometric set cover, based on the multiplicative weight updates (MWU) method. Some of our dynamic algorithms will be built upon this algorithm.

Specifically, we consider the following randomized algorithm from our previous SoCG'20 paper [14], which is a variant of a standard algorithm by Brönnimann and Goodrich [8] or Clarkson [17] (see also Agarwal and Pan [5]). Below, $c_{0}$ is a sufficiently large constant. The depth of a point $p$ in a set $S$ of objects is the number of objects of $S$ containing $p$. A subset of objects $T\subseteq S$ is called an $\varepsilon$-net of $S$ if all points with depth $\geq\varepsilon|S|$ in $S$ are covered by $T$. The size $|\hat{S}|$ of a multiset $\hat{S}$ refers to the sum $\sum_{i\in S}m_{i}$ of the multiplicities of its elements.

Algorithm 1 MWU for set cover
1: Guess a value $t\in[\mathrm{OPT},2\,\mathrm{OPT}]$.
2: Define a multiset $\hat{S}$ where each object $i$ in $S$ initially has multiplicity $m_{i}=1$.
3: loop $\triangleright$ call this the start of a new round
4:     Fix $\rho:=\frac{c_{0}t\log n}{|\hat{S}|}$ and take a random sample $R$ of $\hat{S}$ with sampling probability $\rho$.
5:     while there exists a point $p\in X$ with depth in $R$ at most $\frac{c_{0}}{2}\log n$ do
6:         for each object $i$ containing $p$ do $\triangleright$ call lines 6–8 a multiplicity-doubling step
7:             Double its multiplicity $m_{i}$, i.e., insert $m_{i}$ new copies of object $i$ into $\hat{S}$.
8:             For each copy, independently decide to insert it into $R$ with probability $\rho$.
9:         if the number of multiplicity-doubling steps in this round exceeds $t$ then
10:             Go to line 3 and start a new round.
11: Terminate and return a $\frac{1}{8t}$-net of $R$.

In the standard version of MWU, the "lightness" condition in line 5 was whether the depth of $p$ in $\hat{S}$ is at most $\frac{|\hat{S}|}{2t}$. The main difference in the above randomized version is that lightness is tested with respect to the sample $R$, which is computationally easier to work with, since $R$ has size $O(t\log n)$ with high probability (w.h.p.). Justification of this randomized variant follows from a Chernoff bound, and was shown in [14, Section 4.2].

It is known that the algorithm terminates in $O(\log\frac{n}{t})$ rounds and $O(t\log\frac{n}{t})$ multiplicity-doubling steps [8, 5, 14]. Furthermore, $|\hat{S}|$ increases by at most a factor of 2 in each round; in particular, $|\hat{S}|$ is bounded by $n^{O(1)}$ at the end of the $O(\log\frac{n}{t})$ rounds.

In the standard version of MWU, line 11 returns a $\Theta(\frac{1}{t})$-net of the multiset $\hat{S}$, but a $\Theta(\frac{1}{t})$-net of $R$ works just as well: in the end, the depth in $R$ of all points of $X$ is at least $\frac{c_{0}}{2}\log n>\frac{|R|}{8t}$ w.h.p., and so $X$ will be covered by the net. For objects that are axis-aligned squares in 2D, disks in 2D, or halfspaces in 3D, $\varepsilon$-nets of size $O(\frac{1}{\varepsilon})$ exist, and thus the above algorithm yields a set cover of size $O(t)$, i.e., a constant-factor approximation. By known algorithms (e.g., [15]), the net for $R$ in line 11 can be constructed in $\tilde{O}(|R|)=\tilde{O}(t)$ time.
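To make the moving parts concrete, the following is a minimal Python sketch of this loop on an abstract set system (all names are ours, and everything geometric is simplified away): objects are plain sets of point ids, depths and binomial sampling are done by brute force in place of the range searching structures used later, and the final $\frac{1}{8t}$-net step is replaced by returning the distinct objects of $R$, a cover of size $O(t\log n)$ rather than $O(t)$.

    import math
    import random

    def mwu_set_cover(points, objects, t, c0=4.0):
        """One run of the weight-doubling loop of Algorithm 1 for a guess t.
        points: iterable of hashable point ids; objects: list of frozensets of
        point ids.  Returns object indices covering all points, or None if the
        guess t appears too small (all rounds were exhausted)."""
        points = list(points)
        n = max(len(points) + len(objects), 2)
        threshold = (c0 / 2) * math.log(n)    # lightness threshold of line 5
        log_mult = [0] * len(objects)         # log2 of each multiplicity m_i

        for _ in range(2 * math.ceil(math.log2(n)) + 2):   # rounds (line 3)
            shat = sum(2 ** lm for lm in log_mult)         # |S^|
            rho = min(1.0, c0 * t * math.log(n) / shat)    # line 4

            def draw(copies):             # Binomial(copies, rho), brute force
                return sum(random.random() < rho for _ in range(copies))

            R = [i for i, lm in enumerate(log_mult) for _ in range(draw(2 ** lm))]
            doublings = 0
            while True:
                light = next((p for p in points
                              if sum(p in objects[i] for i in R) <= threshold),
                             None)        # line 5, by brute force
                if light is None:
                    # Every point is heavy in R, so the distinct sampled objects
                    # cover X.  (Algorithm 1 instead returns a 1/(8t)-net of R,
                    # shrinking the cover from O(t log n) to O(t).)
                    return sorted(set(R))
                for i, obj in enumerate(objects):   # multiplicity-doubling step
                    if light in obj:
                        R += [i] * draw(2 ** log_mult[i])   # line 8
                        log_mult[i] += 1                    # line 7 (double m_i)
                doublings += 1
                if doublings > t:         # line 9: start a new round
                    break
        return None                       # t is likely less than OPT

In Sections 3.1 and 4.1, the two brute-force steps inside the loop (finding a light point, and sampling among the objects containing it) are exactly the operations that the range searching structures accelerate.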

Several modifications have been explored in previous work. For example, in the first algorithm of Agarwal and Pan [5], each round examines all points of $X$ in a fixed order and tests for lightness of the points one by one (based on the observation that a point found to have large depth will still have large depth by the end of the round). In our previous paper [14], we also added a step at the beginning of each round, where the multiplicities are rescaled and rounded, so as to keep $|\hat{S}|$ bounded by $O(n)$. These modifications led to a number of different static implementations running in $\tilde{O}(n)$ time.

3 Axis-Aligned Squares

Our first result for dynamic set cover is for axis-aligned squares. Previously, a sublinear-time algorithm for dynamic set cover was known only for unit squares. Our method will be divided into two cases: when OPT is small and when OPT is large. Let $t$ be a guess for OPT. We will run our algorithm for each possible $t=2^{i}$ in parallel. (When the guess is wrong, our algorithm will be able to tell whether $t$ is approximately smaller or larger than OPT w.h.p.) The update time will increase only by a factor of $O(\log n)$.

3.1 Algorithm for Small OPT

Our algorithm for the small OPT case will be based on the randomized MWU algorithm described in Section 2. The key is to realize that this algorithm can actually be implemented to run in sublinear time, assuming that the points and the objects have been preprocessed in standard range searching data structures. Since these structures are dynamizable, we can just re-run the MWU algorithm from scratch after every update.

3.1.1 Data structures

Our data structures are simple. We store the point set $X$ in the standard 2D range tree [21]. For each square $s$ with center $(x,y)$ and side length $2z$, map $s$ to a point $s^{\uparrow}=(x-z,x+z,y-z,y+z)$ in 4D. We also store the lifted point set $S^{\uparrow}=\{s^{\uparrow}:s\in S\}$ in a 4D range tree. Range trees support insertions and deletions in $X$ and $S$ in polylogarithmic time.

3.1.2 Computing a solution

We now show how to compute an $O(1)$-approximate solution in sublinear time when OPT is small by running Algorithm 1 using the above data structures. At first glance, linear time seems unavoidable: (i) the obvious way to find low-depth points in line 5 is to scan through all points of $X$ in each round (as was done in previous algorithms [5, 14]), and (ii) explicitly maintaining the multiplicities of all objects would also require linear time.

To overcome these obstacles, we observe that (i) we can use data structures to find the next low-depth point $p$ without testing each point one by one (recall that there are only $\tilde{O}(t)$ multiplicity-doubling steps), and (ii) multiplicities do not need to be maintained explicitly, so long as in line 8 we can generate a multiplicity-weighted random sample among the objects containing a given point $p$ efficiently (recall that the sample $R$ has size only $\tilde{O}(t)$). The subproblem in (ii) is a weighted range sampling problem.

Finding a low-depth point.

Let $b:=\frac{c_{0}}{2}\log n$. Each time we want to find a low-depth point in line 5, we compute (from scratch) ${\cal L}_{\leq b}(R)$, the $(\leq b)$-level of $R$, i.e., the collection of all cells in the arrangement of the squares with depth at most $b$. It is known [31] that ${\cal L}_{\leq b}(R)$ has $O(|R|b)$ cells and can be constructed in $\tilde{O}(|R|b)$ time, which is $\tilde{O}(t)$ (since $|R|=\tilde{O}(t)$ and $b=\tilde{O}(1)$). To find a point $p$ of $X$ that has depth in $R$ at most $b$, we simply examine each cell of ${\cal L}_{\leq b}(R)$, and perform an orthogonal range query to test if the cell contains a point of $X$. All this takes $\tilde{O}(t)$ time.

As there are $\tilde{O}(t)$ multiplicity-doubling steps, the total cost is $\tilde{O}(t^{2})$.

Weighted range sampling.

For each square $s$ with center $(x,y)$ and side length $2z$, define its dual point $s^{*}$ to be $(x,y,z)$ in 3D. For each point $p=(p_{x},p_{y})$, define its dual region $p^{*}$ to be $\{(x,y,z):z\geq\max\{|x-p_{x}|,|y-p_{y}|\}\}$ in 3D. Then a point $p$ is in the square $s$ iff the point $s^{*}$ is in the region $p^{*}$.
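This duality is easy to verify directly; a minimal sanity check (names ours):

    def square_contains_point(s, p):
        """s = (x, y, z): square with center (x, y) and side length 2z."""
        (x, y, z), (px, py) = s, p
        return abs(px - x) <= z and abs(py - y) <= z

    def dual_region_contains_dual_point(p, s):
        """p* contains s* = (x, y, z) iff z >= max(|x - px|, |y - py|)."""
        (x, y, z), (px, py) = s, p
        return z >= max(abs(x - px), abs(y - py))

    # The two predicates agree on every (square, point) pair, e.g.:
    assert square_contains_point((0, 0, 2), (1, -1)) \
           == dual_region_contains_dual_point((1, -1), (0, 0, 2)) == True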

Let $Q$ be the set of all points $p$ for which we have performed multiplicity-doubling steps thus far. Note that $|Q|=\tilde{O}(t)$. Let $b^{\prime}=c_{0}^{\prime}\log n$ for a sufficiently large constant $c_{0}^{\prime}$. Each time we perform a multiplicity-doubling step, we compute (from scratch) ${\cal L}_{\leq b^{\prime}}(Q^{*})$, the $(\leq b^{\prime})$-level of the dual regions $Q^{*}=\{p^{*}:p\in Q\}$. This structure corresponds to planar order-$(\leq b^{\prime})$ $L_{\infty}$ Voronoi diagrams. By known results [31, 9, 25], ${\cal L}_{\leq b^{\prime}}(Q^{*})$ has $O(|Q|(b^{\prime})^{2})$ cells and can be constructed in $\tilde{O}(|Q|(b^{\prime})^{2})$ time, which is $\tilde{O}(t)$ (since $|Q|=\tilde{O}(t)$ and $b^{\prime}=\tilde{O}(1)$). The multiplicity of a square $s\in S$ is equal to $2^{\text{depth of $s^{*}$ in $Q^{*}$}}$ (since each $p^{*}$ in $Q^{*}$ containing $s^{*}$ doubles the multiplicity of $s$). In particular, since multiplicities are bounded by $n^{O(1)}$, the depth (i.e., level) of $s^{*}$ must be logarithmically bounded. So, each $s^{*}$ is covered by ${\cal L}_{\leq b^{\prime}}(Q^{*})$, and the multiplicity of $s$ is determined by which cell of ${\cal L}_{\leq b^{\prime}}(Q^{*})$ the point $s^{*}$ is in.

To generate a multiplicity-weighted sample of the squares containing $p$ for line 8, after $p$ has been inserted to $Q$, we examine all cells of ${\cal L}_{\leq b^{\prime}}(Q^{*})$ contained in $p^{*}$. For each such cell $\gamma$, we identify the squares $s\in S$ for which $s^{*}\in\gamma$; this reduces to an orthogonal range query in $S^{\uparrow}$, and the answer can be expressed as a disjoint union of $\tilde{O}(1)$ canonical subsets. Knowing the sizes and multiplicities of these canonical subsets for all such $\tilde{O}(t)$ cells, we can then generate the weighted sample in time $\tilde{O}(t)$ plus the size of the sample.

Hence, the total cost of all $\tilde{O}(t)$ multiplicity-doubling steps is $\tilde{O}(t^{2})$.

In addition, we need to generate a new weighted sample $R$ (with a new sampling probability $\rho$) in line 4 at the beginning of each round; this can be done similarly to the above, in $\tilde{O}(t)$ time plus the size of the sample ($\tilde{O}(t)$), for each of the $O(\log n)$ rounds. As mentioned, the final net computation in line 11 takes $\tilde{O}(t)$ time. We have thus obtained:

Lemma 3.1.

There exists a data structure for the dynamic set cover problem for $O(n)$ axis-aligned squares and $O(n)$ points in 2D that supports insertions and deletions in $\tilde{O}(1)$ time and can find an $O(1)$-approximate solution w.h.p. to the set cover problem in $\tilde{O}(\mathrm{OPT}^{2})$ time.

3.2 Algorithm for Large OPT

To complement our solution for the small OPT case, we now show that the problem also gets easier when OPT is large, mainly because we can afford a large additive error. We describe a different, self-contained algorithm for this case (not based on modifying MWU), interestingly by using quadtrees in a novel way.

To allow for both multiplicative and additive error, we use the term $(\alpha,\beta)$-approximation to refer to a solution with cost at most $\alpha\,\mathrm{OPT}+\beta$.

For simplicity, we assume that all coordinates are integers bounded by $U=\mathrm{poly}(n)$. At the end, we will comment on how to remove this assumption.

In the standard quadtree, we start with a bounding square cell and recursively divide a square cell into four square subcells. We define the size of a cell $\Gamma$ to be the number of vertices in $\Gamma$ among the squares of $S$, plus the number of points of $X$ in $\Gamma$. We stop subdividing when a leaf cell has size at most $b$, where $b$ is a parameter to be set later. This yields a subdivision into $O(\frac{n}{b})$ cells per level, and $O(\frac{n}{b}\log U)$ cells in total. The quadtree decomposition can be easily made dynamic under insertions and deletions of points and squares.
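A minimal static sketch of this subdivision rule (names ours; cells are half-open so that each item lands in exactly one child, and coordinates are assumed to be integers as above, so the recursion can safely stop at unit cells):

    def build_quadtree(cell, items, b):
        """cell = (x0, y0, side); items = square vertices and input points,
        as (x, y) pairs lying in [x0, x0+side) x [y0, y0+side).
        Subdivide until a leaf holds at most b items."""
        if len(items) <= b or cell[2] <= 1:   # size-b leaf (or unit cell)
            return {"cell": cell, "items": items, "children": None}
        x0, y0, side = cell
        half = side / 2
        children = []
        for dx in (0, 1):
            for dy in (0, 1):
                cx, cy = x0 + dx * half, y0 + dy * half
                inside = [(x, y) for (x, y) in items
                          if cx <= x < cx + half and cy <= y < cy + half]
                children.append(build_quadtree((cx, cy, half), inside, b))
        return {"cell": cell, "items": items, "children": children}

Since sibling cells are disjoint and every internal cell holds more than $b$ items, each level has at most $\frac{n}{b}$ internal cells, matching the $O(\frac{n}{b}\log U)$ total above.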

For each leaf cell $\Gamma$, a square in $S$ intersecting $\Gamma$ is called short if at least one of its vertices is in the cell, and long otherwise, as shown in Fig. 1. Note that a long square can have at most one side crossing the cell, because the quadtree cell $\Gamma$ is also a square. The union of the long squares within the cell is defined by at most 4 long squares; call these the maximal long squares. (If there is a square containing $\Gamma$, we can designate one such square as the maximal long square.) For each leaf cell $\Gamma$, it suffices to approximate the optimal set cover for the input points in $X\cap\Gamma$ using only the short squares plus the at most 4 maximal long squares in $\Gamma$. By charging each square in the optimal solution to the cells containing its 4 vertices, we see that the sum of the sizes of the optimal covers in the leaf cells is at most $4\,\mathrm{OPT}+O(\frac{n}{b}\log U)$, which is indeed an $O(1)$-approximation if we choose $b\geq\frac{n\log n}{\mathrm{OPT}}$.

Figure 1: Short square (left) and long square (right). The quadtree cell is shaded.

Note that the complement of the union of the at most 4 maximal long squares in a cell $\Gamma$ is a rectangle $r_{\Gamma}$. We will store the short squares in the cell $\Gamma$ in a data structure ${\cal S}_{\Gamma}$ to answer the following type of query:

Given any query rectangle $r$, compute an $O(1)$-approximation to the optimal set cover for the points in $X\cap r$ using only the short squares.

Assuming the availability of such data structures ${\cal S}_{\Gamma}$, we can solve the dynamic set cover problem as follows:

  • An insertion/deletion of a square $s$ in $S$ requires updating 4 of these data structures ${\cal S}_{\Gamma}$ for the leaf cells $\Gamma$ containing the 4 vertices of $s$, and also updating the maximal long squares for all leaf cells in $\tilde{O}(\frac{n}{b})$ time.

  • An insertion/deletion of a point $p$ in $X$ requires updating one data structure ${\cal S}_{\Gamma}$ for the leaf cell $\Gamma$ containing $p$.

  • Whenever we want to compute a set cover solution, we examine all $\tilde{O}(\frac{n}{b})$ cells $\Gamma$ and query the data structure ${\cal S}_{\Gamma}$ for $r_{\Gamma}$, and return the union of the answers.

First implementation of ${\cal S}_{\Gamma}$.

A simple way to implement the data structure ${\cal S}_{\Gamma}$ is as follows: in an update, we just recompute an approximate solution from scratch for every possible query rectangle in $\Gamma$. Since there are only $O(b^{4})$ combinatorially different query rectangles, and static approximate set cover on $O(b)$ squares and points takes $\tilde{O}(b)$ time [5, 14], the update time is $\tilde{O}(b^{5})$, and the query time is trivially $O(1)$.

As a result, the cost of insertion/deletion in the overall method is $\tilde{O}(b^{5}+\frac{n}{b})$.

Improved implementation for ${\cal S}_{\Gamma}$.

We further improve the update time for the data structure ${\cal S}_{\Gamma}$. Instead of recomputing solutions for all $O(b^{4})$ rectangles, the idea is to recompute solutions for a smaller number of "canonical rectangles". More precisely, by using a 2D range tree [4, 21] for the $O(b)$ points in $X\cap\Gamma$, with branching factor $a$ in each dimension, we can form a set of canonical rectangles with total size $O(a^{O(1)}b(\log_{a}b)^{2})$, such that every query rectangle can be decomposed into $O((\log_{a}b)^{2})$ canonical rectangles, ignoring portions that are empty of points. We set $a:=b^{\delta}$ for an arbitrarily small constant $\delta>0$.

For each canonical rectangle $r$ with size $b_{i}$, there are at most $O(b_{i})$ maximal long squares with respect to $r$: the union of the long squares that cut across $r$ horizontally has at most two edges between any two consecutive points/vertices, and a similar statement holds for the long squares that cut across $r$ vertically. These maximal long squares can be found in $\tilde{O}(b_{i})$ time by standard orthogonal range searching. We can thus approximate the optimal set cover for the points in the canonical rectangle $r$, using the $O(b_{i})$ short squares and maximal long squares with respect to $r$, in $\tilde{O}(b_{i})$ time by known static set cover algorithms [5, 14]. The total time over all canonical rectangles is $\tilde{O}(a^{O(1)}b(\log_{a}b)^{2})=\tilde{O}(b^{1+O(\delta)})$.

Given a query rectangle, we can decompose it into $O((\log_{a}b)^{2})=O(1)$ canonical rectangles and return the union of the optimal solutions in the canonical rectangles, which is an $O(1)$-approximation (more precisely, an $O(\delta^{-2})$-approximation).

As a result, the cost of insertion/deletion in the overall method is $\tilde{O}(b^{1+O(\delta)}+\frac{n}{b})$.

Removing the dependency on $U$.

When $U$ may be large, we can reduce the tree depth from $O(\log U)$ to $O(\log n)$ by replacing the quadtree with the BBD tree of Arya et al. [7]. Each cell in the BBD tree is the set difference of two quadtree squares, one contained in the other. The size of each child cell is at most a fraction of the size of the parent cell. As before, we stop subdividing when a leaf cell has size at most $b$. Since any such leaf cell has size $\Theta(b)$ now, the number of leaf cells is $O(\frac{n}{b})$. The BBD tree can be maintained dynamically in polylogarithmic time (for example, by periodically rebuilding when subtrees become unbalanced). Since a leaf cell $\Gamma$ is the difference of two quadtree squares, it is not difficult to see that the number of maximal long squares in $\Gamma$ remains $O(1)$. So, our previous analysis remains valid.

Lemma 3.2.

Given a parameter $b$, there exists a data structure for the dynamic set cover problem for $O(n)$ axis-aligned squares and $O(n)$ points in 2D that maintains an $(O(1),O(\frac{n}{b}))$-approximate solution with $\tilde{O}(b^{1+O(\delta)}+\frac{n}{b})$ insertion and deletion time.

Combining the algorithms.

When $\mathrm{OPT}\leq n^{1/3}$, we use the algorithm for small OPT; the running time is $\tilde{O}(\mathrm{OPT}^{2})\leq\tilde{O}(n^{2/3})$. When $\mathrm{OPT}>n^{1/3}$, we use the algorithm for large OPT with $b=n^{2/3}$, so that an $(O(1),O(\frac{n}{b}))$-approximation is indeed an $O(1)$-approximation; the running time is $\tilde{O}(b^{1+O(\delta)}+\frac{n}{b})=O(n^{2/3+O(\delta)})$.
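The crossover value $n^{1/3}$ comes from balancing the two bounds; a quick back-of-the-envelope check (the threshold $\tau$ is our notation, and polylogarithmic and $n^{O(\delta)}$ factors are suppressed): if we switch strategies at $\mathrm{OPT}=\tau$, the small OPT case costs up to $\tau^{2}$, while the large OPT case needs additive error $\frac{n}{b}\leq\tau$, i.e., cost $b\geq\frac{n}{\tau}$. Equating:
\[
\tau^{2} \;=\; \frac{n}{\tau}
\;\iff\; \tau \;=\; n^{1/3},
\qquad\text{giving update time } n^{2/3} \text{ on both sides.}
\]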

Theorem 3.3.

There exists a data structure for the dynamic set cover problem for $O(n)$ axis-aligned squares and $O(n)$ points in 2D that maintains an $O(1)$-approximate solution w.h.p. with $O(n^{2/3+\delta})$ insertion and deletion time for any constant $\delta>0$.

The case of fat rectangles can be reduced to squares, since such rectangles can be replaced by $O(1)$ squares (increasing the approximation factor by only $O(1)$). Our approach can be modified to work more generally for homothets of a fixed fat convex polygon with a constant number of vertices.

4 Halfspaces in 3D

In this section, we study dynamic geometric set cover for the more challenging case of 3D halfspaces. Using the standard lifting transformation [21], we can transform 2D disks to 3D upper halfspaces. For simplicity, we assume that all halfspaces are upper halfspaces; Section 4.4 discusses how to modify our algorithms when there are both upper and lower halfspaces. Our method will be divided into three cases: small, medium, and large OPT.

4.1 Algorithm for Small OPT

Similar to the small OPT algorithm for axis-aligned squares in Section 3.1, we describe a small OPT algorithm for halfspaces based on the randomized MWU algorithm in Section 2. Although our earlier approach using levels in arrangements could be generalized, we describe a better approach based on augmenting partition trees with counters.

4.1.1 Data structures

We store the 3D point set $X$ in Matoušek's partition tree [26]: The tree has height $O(\log n)$ and degree $r$ for a sufficiently large constant $r$. Each node $v$ stores a simplicial cell $\Gamma_{v}$ and a "canonical subset" $X_{v}\subset\Gamma_{v}$, where $X_{v}=X$ at the root $v$, and $\Gamma_{v}$ is contained in $\Gamma_{\mathrm{parent}(v)}$, $X_{v}$ is the disjoint union of $X_{v^{\prime}}$ over all children $v^{\prime}$ of $v$, and $X_{v}$ has constant size at each leaf $v$. Furthermore, any halfspace crosses $O(n^{2/3+\delta})$ cells of the tree for any arbitrarily small constant $\delta>0$ (depending on $r$). Here a halfspace $h$ crosses a cell $\Gamma$ iff the boundary of $h$ intersects $\Gamma$.

For each upper halfspace $h$, let $h^{*}$ denote its dual point; for each point $p$, let $p^{*}$ denote its dual upper halfspace. (Duality [21] is defined so that $p$ is in $h$ iff $h^{*}$ is in $p^{*}$.)

We also store the 3D dual point set $S^{*}=\{h^{*}:h\in S\}$ in Matoušek's partition tree. Each node $v$ stores a cell $\Gamma_{v}$ and a canonical subset $S^{*}_{v}\subset\Gamma_{v}$ like above.

Matoušek's partition trees can be built in $\tilde{O}(n)$ time and support insertions and deletions in $X$ and $S$ in polylogarithmic time. (In the static case, there are slightly improved partition trees reducing the $n^{\delta}$ factor in the crossing number bound [26, 28, 12], but these will not be important to us.)

4.1.2 Computing a solution

We now show how to compute an $O(1)$-approximate solution in sublinear time when OPT is small by running Algorithm 1 using the above data structures. As in Section 3.1, the main subproblems are (i) finding a low-depth point with respect to $R$, and (ii) weighted range sampling, where the weights are the multiplicities (which are not explicitly stored).

Finding a low-depth point.

We maintain two values at each node $v$ of the partition tree of $X$:

  • $c_{v}$ is the number of halfspaces of $R$ containing $\Gamma_{v}$ but not containing $\Gamma_{\mathrm{parent}(v)}$.

  • $d_{v}$ is the minimum depth among all points in $X_{v}$ with respect to the halfspaces of $R$ crossing $\Gamma_{v}$.

The overall minimum depth with respect to $R$ is given by the value $d_{v}$ at the root $v$. Whenever we insert a halfspace $h$ to $R$, for each of the $O(n^{2/3+\delta})$ cells $\Gamma_{v}$ crossed by $h$, we update the counters $c_{v^{\prime}}$ for the children $v^{\prime}$ of the nodes $v$; we also update the value $d_{v}$ bottom-up according to the formula $d_{v}=\min_{\text{child $v'$ of $v$}}(d_{v^{\prime}}+c_{v^{\prime}})$. Thus, all values can be maintained in $O(n^{2/3+\delta})$ time per insertion to $R$.

As there are $\tilde{O}(t)$ insertions to $R$, the total cost is $\tilde{O}(tn^{2/3+\delta})$. This cost covers the resetting of counters at every round.
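A minimal sketch of this counter maintenance (names ours; the partition tree itself, the list of crossed cells, and the leaf depths are assumed to be supplied by the surrounding machinery):

    class Node:
        def __init__(self, children=(), leaf_depth=0):
            self.children = list(children)
            self.c = 0           # halfspaces containing this cell but not its parent
            self.d = leaf_depth  # at a leaf: min depth of its O(1) points w.r.t.
                                 # the halfspaces crossing the leaf cell

    def refresh(v):
        """Recompute d_v = min over children v' of (d_{v'} + c_{v'})."""
        if v.children:
            v.d = min(w.d + w.c for w in v.children)

    def insert_halfspace(crossed_bottom_up, contained_children):
        """crossed_bottom_up: nodes whose cells the new halfspace crosses,
        deepest first; contained_children: children of crossed nodes whose
        cells lie fully inside the halfspace."""
        for w in contained_children:
            w.c += 1
        for v in crossed_bottom_up:   # children are refreshed before parents
            refresh(v)

    # The global minimum depth w.r.t. R is root.d + root.c
    # (root.c counts halfspaces containing the whole root cell).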

(We remark that the idea of augmenting nodes of partition trees with counters appeared before in at least one prior work on dynamic geometric data structures [10, Theorem 4.1].)

Weighted range sampling.

Let $Q$ be the set of all points $p$ for which we have performed multiplicity-doubling steps thus far. Note that $|Q|=\tilde{O}(t)$. The multiplicity of a halfspace $h\in S$ is $2^{\text{depth of $h^{*}$ in $Q^{*}$}}$. To implicitly represent the multiplicities and their sum, we maintain two values at each node $v$ of the partition tree for $S^{*}$:

  • $c_{v}$ is the number of dual halfspaces of $Q^{*}$ containing $\Gamma_{v}$ but not containing $\Gamma_{\mathrm{parent}(v)}$.

  • $m_{v}$ is the sum of $2^{\text{depth of $h^{*}$ among the halfspaces of $Q^{*}$ crossing $\Gamma_{v}$}}$ over all $h^{*}\in S^{*}_{v}$.

Whenever we insert a point $p$ to $Q$, for each of the $O(n^{2/3+\delta})$ cells $\Gamma_{v}$ crossed by the dual halfspace $p^{*}$, we update the counters $c_{v^{\prime}}$ for the children $v^{\prime}$ of the nodes $v$; we also update the value $m_{v}$ bottom-up according to the formula $m_{v}=\sum_{\text{child $v'$ of $v$}}2^{c_{v^{\prime}}}m_{v^{\prime}}$. Thus, all values can be maintained in $O(n^{2/3+\delta})$ time per insertion to $Q$.

To generate a weighted sample of the halfspaces of $S$ containing $p$ for line 8, we find all $O(n^{2/3+\delta})$ cells $\Gamma_{v}$ crossed by the dual halfspace $p^{*}$, and consider the canonical subsets $S_{v^{\prime}}^{*}$ for the children $v^{\prime}$ of $v$ with $\Gamma_{v^{\prime}}$ contained in $p^{*}$. We can then sample from these canonical subsets, weighted by $m_{v^{\prime}}2^{\sum_{u}c_{u}}$, where the sum is over all ancestors $u$ of $v^{\prime}$. All this takes time $O(n^{2/3+\delta})$ plus the size of the sample.

Hence, the total cost of all $\tilde{O}(t)$ multiplicity-doubling steps is $\tilde{O}(tn^{2/3+\delta})$.

Lemma 4.1.

There exists a data structure for the dynamic set cover problem for $O(n)$ upper halfspaces and $O(n)$ points in 3D that supports insertions and deletions in $\tilde{O}(1)$ time and can find an $O(1)$-approximate solution w.h.p. in $\tilde{O}(\mathrm{OPT}\cdot n^{2/3+\delta})$ time.

4.2 Algorithm for Medium OPT

The preceding algorithm works well only when OPT is smaller than about $n^{1/3}$. We show that a more involved algorithm, also based on MWU, can achieve sublinear time even when OPT approaches $n^{1-\delta}$. The basic approach is to use shallow versions of the partition trees [27].

4.2.1 Data structures

We begin with a lemma that was used before in some of the previous static algorithms by Agarwal and Pan [5] and ours [14]. With this lemma, we can effectively make every input halfspace shallow, i.e., contain at most $\tilde{O}(\frac{n}{t})$ points. The extra condition in the second sentence of the lemma below is new, and is needed in order to make dynamization possible later (difficulty arises when halfspaces in $T_{0}$ get deleted). Because of this extra condition, we describe a construction which is different from the previous algorithms [5, 14].

Lemma 4.2.

Given a set $X_{0}$ of $n$ points, a set $S_{0}$ of $n$ halfspaces in 3D and a parameter $t$, we can construct a subset of halfspaces $T_{0}\subseteq S_{0}$ of size $O(t)$, and a subset of points $A_{0}\subseteq X_{0}$, such that (i) each point in $A_{0}$ is covered by $T_{0}$, and (ii) each halfspace of $S_{0}-T_{0}$ contains $\tilde{O}(\frac{n}{t})$ points of $X_{0}-A_{0}$.

Furthermore, we can decompose $A_{0}=\bigcup_{h\in T_{0}}A_{h}$, where $A_{h}$ has size $O(\frac{n}{t})$ for each $h\in T_{0}$, such that each point in $A_{h}$ is covered by $h$. The construction takes $\tilde{O}(n)$ time.

Proof 4.3.

We use a simple "greedy" approach: We examine each halfspace $h\in S_{0}$ in an arbitrary order, and test whether $h$ contains more than $\frac{n}{t}$ points of $X_{0}$. If so, we add $h$ to $T_{0}$, pick some $\frac{n}{t}$ (but not more) points in $X_{0}\cap h$ to add to $A_{h}$, and delete $A_{h}$ from $X_{0}$. Clearly, the number of halfspaces added to $T_{0}$ is $O(t)$.

To bound the construction time, we maintain $X_{0}$ in Chan's dynamic 3D halfspace range reporting structure with polylogarithmic amortized update time [11]. The cost of all deletions is $\tilde{O}(n)$. Testing each halfspace $h$ requires a halfspace range counting query, but for the above purposes, it suffices to use an approximate count, with $\tilde{O}(1)$ approximation factor. Given a query halfspace $h$ containing $k$ points, in $\tilde{O}(1)$ time, one can find $O(\log n)$ lists of size $O(k)$ from Chan's data structure [11] (see also [13, Section 3]), so that the $k$ points inside $h$ are contained in the union of these lists. The sum of the sizes of these lists is $O(k\log n)$ and $\Omega(k)$, yielding an $O(\log n)$-approximation of the count.
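A minimal sketch of the greedy pass (names ours; a brute-force membership test `contains` stands in for the dynamic reporting structure of [11], so the sketch runs in quadratic rather than $\tilde{O}(n)$ time):

    import math

    def greedy_shallow(points, halfspaces, t, contains):
        """points: list of 3D points (as tuples); halfspaces: list of
        halfspace descriptions; contains(h, p) tests whether p lies in h.
        Returns (T0, A, remaining): indices of chosen halfspaces, the deleted
        batches A_h, and the surviving points X_0 - A_0."""
        cap = math.ceil(len(points) / t)
        remaining = list(points)
        T0, A = [], {}
        for hi, h in enumerate(halfspaces):
            covered = [p for p in remaining if contains(h, p)]
            if len(covered) > cap:
                T0.append(hi)
                A[hi] = covered[:cap]        # pick n/t points, but not more
                batch = set(A[hi])
                remaining = [p for p in remaining if p not in batch]
        return T0, A, remaining

Each halfspace left outside $T_{0}$ contained at most $\frac{n}{t}$ of the then-remaining points when it was examined, and points are only deleted afterwards, which gives property (ii); each addition to $T_{0}$ deletes $\lceil\frac{n}{t}\rceil$ points, so $|T_{0}|\leq t$.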

We divide the update sequence into phases of $g$ updates each, for a parameter $g\ll t$. Our data structure will be rebuilt periodically, after each phase. Let $X_{0}$ and $S_{0}$ be $X$ and $S$ at the beginning of the current phase. Let $X_{I}$ and $S_{I}$ be the current set of points and the set of halfspaces that have been inserted to $X$ and $S$ in the current phase. Let $X_{D}$ and $S_{D}$ be the current set of points and the set of halfspaces deleted from $X$ and $S$ in the current phase. At the start of each phase, we apply Lemma 4.2.

We store the 3D point set $X_{0}-A_{0}$ in a partition tree, like before. However, we will need a bound on the crossing number that is sensitive to shallowness: namely,

  • any halfspace containing $\tilde{O}(\frac{n}{t})$ points of $X_{0}-A_{0}$ crosses at most $O((\frac{n}{t})^{2/3+O(\delta)})$ cells of the tree;

  • any other halfspace crosses at most $O(t\cdot(\frac{n}{t})^{2/3+\delta})$ cells.

This follows by combining Matoušek's shallow version of the partition tree [27] with the original version of the partition tree: Using the shallow version of the partition tree, with $O(t)$ leaf cells containing $O(\frac{n}{t})$ points each, any halfspace that contains $\tilde{O}(\frac{n}{t})$ points is known to cross at most $O(n^{\delta})$ leaf cells [27]. For each leaf cell with $O(\frac{n}{t})$ points, we build the original partition tree [26], which has crossing number bound $O((\frac{n}{t})^{2/3+\delta})$.

In addition, we maintain the following counter at each node $v$ of the partition tree:

  • $c^{\#}_{v}$ is the number of halfspaces of $T_{0}\cup S_{I}-S_{D}$ containing $\Gamma_{v}$ but not containing $\Gamma_{\mathrm{parent}(v)}$.

At the beginning of a phase, we can compute each count $c^{\#}_{v}$ by $O(1)$ 3D simplex range counting queries on the dual points $T_{0}^{*}$ (since halfspaces containing a simplex and not containing another simplex dualize to points in a polyhedral region of constant size); by known results [4, 26], $O(n)$ such queries on $O(t)$ points in 3D take $\tilde{O}(n+(nt)^{3/4})$ time. The amortized cost is $\tilde{O}(\frac{n+(nt)^{3/4}}{g})$.

Afterwards, during each insertion/deletion of a halfspace in $S$, we increment/decrement the counters at $O(t\cdot(\frac{n}{t})^{2/3+\delta})$ nodes of the tree (note that the halfspace may not be shallow). Thus, each update in $S$ costs $O(t^{1/3}n^{2/3+\delta})$ time.

We also store the 3D dual point set $(S_{0}-T_{0}-S_{D})^{*}$ in a partition tree, again with $O((\frac{n}{t})^{2/3+O(\delta)})$ crossing number bound with respect to halfspaces containing $\tilde{O}(\frac{n}{t})$ points. Such a partition tree supports deletions in polylogarithmic time.

4.2.2 Computing a solution

We now show how to compute an $O(1)$-approximate solution in sublinear time when OPT is sublinear using the above data structures. We first take a new unweighted random sample $R_{\mathrm{extra}}\subset S_{0}-T_{0}-S_{D}$ of size $t$. We include $R_{\mathrm{extra}}\cup T_{0}\cup S_{I}-S_{D}$ in the solution. Since this set has size $O(t+g)=O(t)$, this increases the approximation factor only by $O(1)$. Let $E=\bigcup_{h\in T_{0}\cap S_{D}}A_{h}\cup X_{I}-X_{D}$, which has $O(\frac{gn}{t})$ points. It remains to cover the points in $(X_{0}-A_{0}-X_{D})\cup E$, excluding those already covered by $R_{\mathrm{extra}}\cup T_{0}\cup S_{I}-S_{D}$, using halfspaces in $S_{0}-T_{0}-S_{D}$. We will do so by running Algorithm 1 on these points and halfspaces using the above data structures. As before, the main subproblems are (i) finding a low-depth point, and (ii) weighted range sampling.

Finding a low-depth point.

We can find the minimum depth of the points of $X_{0}-A_{0}-X_{D}$ (with respect to $R$) as in the small OPT algorithm, by using counters in the partition tree for $X_{0}-A_{0}$.

Since any halfspace in $S_{0}-T_{0}-S_{D}$ contains at most $\tilde{O}(\frac{n}{t})$ points of $X_{0}-A_{0}-X_{D}$ by Lemma 4.2, the cost per insertion to $R$ is reduced from $O(n^{2/3+\delta})$ to $O((\frac{n}{t})^{2/3+O(\delta)})$. The total cost over all $\tilde{O}(t)$ insertions to $R$ is $O(t(\frac{n}{t})^{2/3+O(\delta)})=O(t^{1/3}n^{2/3+O(\delta)})$.

One technicality is that we should be excluding points already covered by $R_{\mathrm{extra}}\cup T_{0}\cup S_{I}-S_{D}$. To fix this, we add $c_{0}\log n$ copies of $R_{\mathrm{extra}}\cup T_{0}\cup S_{I}-S_{D}$ to $R$ at the beginning of each round. This way, points covered by $R_{\mathrm{extra}}\cup T_{0}\cup S_{I}-S_{D}$ would not be picked as low-depth points. Adding these copies of $T_{0}\cup S_{I}-S_{D}$ requires no extra effort, since we can initialize $c_{v}$ with the already computed $c_{v}^{\#}$ value, times $c_{0}\log n$, for all nodes $v$ encountered. The copies of the $\tilde{O}(t)$ halfspaces of $R_{\mathrm{extra}}$ can be inserted one by one, in $\tilde{O}((\frac{n}{t})^{2/3+O(\delta)})$ time each, as above. The extra cost for these $\tilde{O}(t)$ insertions to $R$ is bounded as above.

Another technicality is that we should exclude the deleted points in $X_{D}$ when defining the minimum depth values $d_{v}$. When we delete a point from $X$, we can update the $d_{v}$ values along a path bottom-up in $\tilde{O}(1)$ time.

We can find low-depth points of $E$ (with respect to $R$) more naively, by examining the points of $E$, excluding those covered by $R_{\mathrm{extra}}\cup T_{0}\cup S_{I}-S_{D}$, and testing them one by one in each round (like in Agarwal and Pan's algorithm [5]). In line 5, it suffices to use an $O(1)$-approximation to the depth (after adjusting constants in the pseudocode), and as noted in our previous paper [14], we can apply known data structures for 3D halfspace approximate range counting [2] to the dual points $R^{*}$; queries and insertions to $R$ take polylogarithmic time. A point that has been found to have depth larger than the threshold will keep large depth for the rest of the round. The total cost over all rounds is $\tilde{O}(|E|)=\tilde{O}(\frac{gn}{t})$.

Weighted range sampling.

During a multiplicity-doubling step for a point $p$, we can generate a multiplicity-weighted sample from the halfspaces containing $p$ in the same way as in the small OPT algorithm, by using counters in the partition tree for $(S_{0}-T_{0}-S_{D})^{*}$. Observe that $p$ has depth in $S_{0}-T_{0}-S_{D}$ at most $O(\frac{n}{t}\log n)$ w.h.p., because otherwise, $p$ would be covered by the random sample $R_{\mathrm{extra}}$ and would have been excluded (in other words, a random sample $R_{\mathrm{extra}}$ of size $t$ is a $\Theta(\frac{\log n}{t})$-net w.h.p.). Since the dual halfspace $p^{*}$ contains at most $\tilde{O}(\frac{n}{t})$ points of $S^{*}$, the cost per multiplicity-doubling step is reduced from $\tilde{O}(n^{2/3+\delta})$ to $\tilde{O}((\frac{n}{t})^{2/3+O(\delta)})$. The total cost over all $\tilde{O}(t)$ multiplicity-doubling steps is $\tilde{O}(t^{1/3}n^{2/3+O(\delta)})$.

In conclusion, the total time for running MWU is $\tilde{O}(\frac{gn}{t}+t^{1/3}n^{2/3+O(\delta)})$. To balance this computation cost with the $\tilde{O}(\frac{n+(nt)^{3/4}}{g})$ update cost, we set $g=\frac{t^{7/8}}{n^{1/8}}+\sqrt{t}$ and get a bound of $\tilde{O}(\frac{n^{7/8}}{t^{1/8}}+\frac{n}{\sqrt{t}}+t^{1/3}n^{2/3+O(\delta)})$.
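The choice of $g$ can be recovered by equating the first term of the computation cost with the update cost (a quick check, suppressing polylogarithmic factors):
\[
\frac{gn}{t} \;=\; \frac{n+(nt)^{3/4}}{g}
\;\iff\; g^{2} \;=\; t+\frac{t^{7/4}}{n^{1/4}}
\;\iff\; g \;=\; \Theta\Big(\sqrt{t}+\frac{t^{7/8}}{n^{1/8}}\Big),
\qquad\text{so}\qquad
\frac{gn}{t} \;=\; \Theta\Big(\frac{n}{\sqrt{t}}+\frac{n^{7/8}}{t^{1/8}}\Big).
\]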

Lemma 4.4.

There exists a data structure for the dynamic set cover problem for $O(n)$ upper halfspaces and $O(n)$ points in 3D that maintains an $O(1)$-approximate solution w.h.p. with $\tilde{O}(\frac{n^{7/8}}{\mathrm{OPT}^{1/8}}+\frac{n}{\sqrt{\mathrm{OPT}}}+\mathrm{OPT}^{1/3}n^{2/3+O(\delta)})$ amortized insertion and deletion time.

4.3 Algorithm for Large OPT

Lastly, we give an algorithm for the large OPT case, which is very different from the algorithms in the previous subsections (and not based on modifying MWU). Here, we can only compute the size of the approximate set cover, not the cover itself. Like before, we will show that the problem gets easier for large OPT, because we can afford a large additive error. The idea is to decompose the problem into subproblems via geometric sampling and planar separators, and then approximate the sum of the subproblems’ answers by sampling again.

4.3.1 Data structures

We just store the dual point set $S^{*}$ in a known 3D halfspace range reporting structure. The data structure by Chan [11] supports queries in $O((\log n+k)\log n)$ time for output size $k$, and insertions and deletions in $S$ in polylogarithmic amortized time.

We store the $xy$-projection of the point set $X$ in a known 2D triangle range searching structure [26] that supports queries in $O(\frac{n^{1/2+\delta}}{z^{1/2}}+k)$ time for output size $k$, and insertions and deletions in $X$ in $\tilde{O}(z)$ time for a given trade-off parameter $z\in[1,n]$.

4.3.2 Approximating the optimal value

Let $b$ and $g$ be parameters to be set later. Take a random sample $R$ of the halfspaces $S$ with size $\frac{n}{b}$. Imagine that $R$ is included in the solution. The remaining uncovered space is the complement of the union of $R$, which is a 3D convex polyhedron. There are $O(|R|)=O(\frac{n}{b})$ cells in the vertical decomposition $\mathrm{VD}(R)$ of this polyhedron (formed by triangulating each face and drawing a vertical wall at each edge of the triangulation). Each cell is crossed by $O(b\log n)$ halfspaces w.h.p., by well-known geometric sampling analysis [16]. The decomposition $\mathrm{VD}(R)$ can be constructed in $\tilde{O}(\frac{n}{b})$ time.

Our key idea is to use planar graph separators to divide into smaller subproblems. The following is a multi-cluster version of the standard planar separator theorem [24] (sometimes known as "$r$-divisions" [22]):

Lemma 4.5 (Planar Separator Theorem, Multi-Cluster Version).

Given a planar graph $G=(V,E)$ with $n$ vertices, and a parameter $g$, we can partition $V$ into $\frac{n}{g}$ subsets $V_{1},\ldots,V_{n/g}$ of size $O(g)$ each, and an extra "boundary set" $B$ of size $O(\frac{n}{\sqrt{g}})$, such that no two vertices from different subsets $V_{i}$ and $V_{j}$ are adjacent. The partition can be constructed in $\tilde{O}(n)$ time.

(We remark that the general idea of combining cuttings/geometric sampling with planar graph separators appeared in some geometric approximation algorithms before, e.g., [1].)

We apply Lemma 4.5 to the dual graph of $\mathrm{VD}(R)$ (which has size $O(\frac{n}{b})$), yielding $O((n/b)/g)$ "clusters" of $O(g)$ cells each, and a set $B$ of $O((n/b)/\sqrt{g})$ "boundary cells", in $\tilde{O}(\frac{n}{b})$ time.

Let $S_{B}$ be the subset of all halfspaces of $S$ that cross boundary cells of $B$. Note that $|S_{B}|=O((n/b)/\sqrt{g}\cdot b\log n)=\tilde{O}(\frac{n}{\sqrt{g}})$ w.h.p.

For each cluster $\gamma$, let $X_{\gamma}$ denote the subset of all points of $X$ whose $xy$-projections lie in the $xy$-projection of the cells of $\gamma$, and let $S_{\gamma}$ denote the subset of all halfspaces of $S$ that cross the cells of $\gamma$. Note that $|S_{\gamma}|=O(g\cdot b\log n)=\tilde{O}(bg)$ w.h.p. Let $\mathrm{OPT}_{\gamma}$ denote the optimal value for the set cover problem for the halfspaces of $S_{\gamma}$ and the points of $X_{\gamma}$ not covered by $R\cup S_{B}$.

Claim 1.

$\sum_{\gamma}\mathrm{OPT}_{\gamma}$ approximates OPT with additive error $\tilde{O}(\frac{n}{b}+\frac{n}{\sqrt{g}})$ w.h.p.

Proof 4.6.

A feasible solution can be formed by taking the union of the solutions corresponding to $\mathrm{OPT}_{\gamma}$, together with $R$ (to cover points not covered by $\mathrm{VD}(R)$) and $S_{B}$ (to cover points inside boundary cells). As $|R|=\frac{n}{b}$ and $|S_{B}|=\tilde{O}(\frac{n}{\sqrt{g}})$ w.h.p., this proves that $\mathrm{OPT}\leq\sum_{\gamma}\mathrm{OPT}_{\gamma}+\tilde{O}(\frac{n}{b}+\frac{n}{\sqrt{g}})$.

In the other direction, observe that if a halfspace $h$ crosses two different clusters $\gamma_{i}$ and $\gamma_{j}$, it must also cross some boundary cell of $B$ by convexity: pick points $p\in h\cap\gamma_{i}$ and $q\in h\cap\gamma_{j}$; then the line segment $\overline{pq}$ must hit the wall of some boundary cell. So, after removing $R\cup S_{B}$ from the global optimal solution, we get disjoint local solutions in the clusters. This proves that $\mathrm{OPT}\geq\sum_{\gamma}\mathrm{OPT}_{\gamma}$.

We use the following known fact about approximating a sum via random sampling (which is of course a standard trick):

Lemma 4.7.

Suppose $a_{1}+\cdots+a_{m}=T$ where $a_{i}\in[0,U]$. Take a random subset $R$ of $r=\frac{(c_{0}/\varepsilon^{2})mU\log n}{T}$ elements from $a_{1},\ldots,a_{m}$, for a sufficiently large constant $c_{0}$. Then $\sum_{a_{i}\in R}a_{i}\cdot\frac{m}{r}$ is a $(1+\varepsilon)$-approximation to $T$ w.h.p.

Proof 4.8.

By rescaling the $a_{i}$'s and $T$ by a factor $U$, we may assume that $U=1$. Define a random variable $Y_{i}$, which is $a_{i}$ with probability $\frac{r}{m}$, and 0 otherwise. Then $E[\sum_{i}Y_{i}]=T\cdot\frac{r}{m}=(c_{0}/\varepsilon^{2})\log n$. The result follows from a standard Chernoff bound on the $Y_{i}$'s.
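A minimal sketch of this estimator (names ours), in the form it is used below, where the values are the cluster optima $\mathrm{OPT}_{\gamma}$ and the guess is $\Theta(t)$:

    import math
    import random

    def estimate_sum(values, U, T_guess, eps=0.5, c0=8.0):
        """Estimate T = sum(values), given values in [0, U] and a guess
        T_guess = Theta(T), by scaling up the sum of a random subset."""
        m = len(values)
        n = max(m, 2)
        r = min(m, math.ceil((c0 / eps**2) * m * U * math.log(n) / T_guess))
        sample = random.sample(values, r)
        return sum(sample) * m / r   # unbiased: E[sum(sample)] = (r/m) * T

Only the $r$ sampled values ever need to be computed, which is the source of the savings below.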

By applying the above lemma with $m=(n/b)/g$, $T=\Theta(t)$, and $U=\tilde{O}(bg)$ (assuming OPT is finite), we can $O(1)$-approximate OPT by summing $\mathrm{OPT}_{\gamma}$ over a random sample of $r=\tilde{O}(\frac{mU}{T})=\tilde{O}(\frac{n}{t})$ clusters $\gamma$.

We can generate the set $S_{B}$ by finding the halfspaces of $S$ that contain the $O((n/b)/\sqrt{g})$ vertices of the cells in $B$; this corresponds to $O((n/b)/\sqrt{g})$ halfspace range reporting queries on the dual 3D point set $S^{*}$, each with output size $\tilde{O}(b)$ w.h.p. and each taking $\tilde{O}(b)$ time [11]. Thus, $S_{B}$ can be found in $\tilde{O}(\frac{n}{\sqrt{g}})$ time. We compute the union of $R\cup S_{B}$, which is the complement of an intersection of halfspaces, by running a 3D convex hull algorithm in dual space [21]. This takes $\tilde{O}(\frac{n}{b}+\frac{n}{\sqrt{g}})$ time.

For each chosen cluster $\gamma$, we can generate $S_{\gamma}$ similarly by $O(g)$ halfspace range reporting queries for $S^{*}$, each with output size $\tilde{O}(b)$ w.h.p. Thus, $S_{\gamma}$ can be found in $\tilde{O}(bg)$ time. We can generate $X_{\gamma}$ by performing $O(g)$ triangle range reporting queries for the 2D $xy$-projection of the point set $X$. Thus, $X_{\gamma}$ can be found in $\tilde{O}(g\frac{n^{1/2+\delta}}{z^{1/2}}+|X_{\gamma}|)$ time. We filter out the points of $X_{\gamma}$ covered by $R\cup S_{B}$, by performing $|X_{\gamma}|$ planar point location queries [21] in the $xy$-projection of the boundary of the union of $R\cup S_{B}$. This takes $\tilde{O}(|X_{\gamma}|)$ time. We can then compute an $O(1)$-approximation to $\mathrm{OPT}_{\gamma}$ by running a known static set cover algorithm [5, 14] in $\tilde{O}(bg+|X_{\gamma}|)$ time.

The expected sum of $|X_{\gamma}|$ over all chosen clusters $\gamma$ is $O(n\cdot\frac{r}{m})=O(rbg)$. The total expected time over the $r$ clusters is $\tilde{O}(rbg+rg\frac{n^{1/2+\delta}}{z^{1/2}})=\tilde{O}(\frac{bgn}{t}+\frac{gn^{3/2+\delta}}{tz^{1/2}})$. The overall expected running time is $\tilde{O}(\frac{n}{b}+\frac{n}{\sqrt{g}}+\frac{bgn}{t}+\frac{gn^{3/2+\delta}}{tz^{1/2}})$, and we obtain an $(O(1),\tilde{O}(\frac{n}{b}+\frac{n}{\sqrt{g}}))$-approximation. (The expected bound can be converted to worst-case by placing a time limit and re-running logarithmically many times.) Choosing $g=b^{2}$ yields the following result:

Lemma 4.9.

Given parameters $b$ and $z\in[1,n]$ and any constant $\varepsilon>0$, there exists a data structure for the dynamic set cover problem for $O(n)$ upper halfspaces and $O(n)$ points in 3D that supports insertions and deletions in $\tilde{O}(z)$ amortized time and can find the value of an $(O(1),\tilde{O}(\frac{n}{b}))$-approximation w.h.p. in $\tilde{O}(\frac{n}{b}+\frac{b^{3}n}{\mathrm{OPT}}+\frac{b^{2}n^{3/2+\delta}}{\mathrm{OPT}\cdot z^{1/2}})$ time for any constant $\delta>0$.

A minor technicality is that when applying Lemma 4.7, we have assumed that the optimal value is finite. The problem of checking whether a solution exists, i.e., whether a point set is covered by a set of halfspaces (or more generally, maintaining the lowest-depth point), subject to insertions and deletions of points and halfspaces, has already been solved before by Chan [10, Theorem 4.1], who gave a fully dynamic algorithm with $\tilde{O}(n^{2/3})$ time per operation (based on augmenting partition trees with counters, similar to what we have done here).

Combining the algorithms.

Finally, we combine all three algorithms; a schematic dispatch is sketched after the list:

  1. When $\mathrm{OPT}\leq n^{2/9}$, we use the algorithm for small OPT; the running time is $\tilde{O}(\mathrm{OPT}\cdot n^{2/3+\delta})\leq\tilde{O}(n^{8/9+\delta})$.

  2. When $n^{2/9}<\mathrm{OPT}\leq n^{10/13}$, we use the algorithm for medium OPT; the running time is $\tilde{O}(\frac{n^{7/8}}{\mathrm{OPT}^{1/8}}+\frac{n}{\sqrt{\mathrm{OPT}}}+\mathrm{OPT}^{1/3}n^{2/3+O(\delta)})\leq O(n^{12/13+O(\delta)})$.

  3. When $\mathrm{OPT}>n^{10/13}$, we use the algorithm for large OPT with $b=\tilde{\Theta}(n^{3/13})$ and $z=n^{7/13}$, so that an $(O(1),\tilde{O}(\frac{n}{b}))$-approximation is indeed an $O(1)$-approximation; the running time is $\tilde{O}(\frac{n}{b}+\frac{b^{3}n}{\mathrm{OPT}}+\frac{b^{2}n^{3/2+\delta}}{\mathrm{OPT}\cdot z^{1/2}})\leq O(n^{12/13+O(\delta)})$.
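The following Python sketch shows the three-way dispatch; it assumes, hypothetically, that a constant-factor estimate of OPT is available (the paper maintains all three structures and queries the appropriate one), with small_alg, medium_alg, and large_alg standing in for their query procedures.

    def query_opt_value(n, opt_estimate, small_alg, medium_alg, large_alg):
        # Dispatch among the three data structures using the thresholds
        # n^{2/9} and n^{10/13}; polylog factors in b are suppressed.
        if opt_estimate <= n ** (2 / 9):
            return small_alg()
        if opt_estimate <= n ** (10 / 13):
            return medium_alg()
        b = n ** (3 / 13)
        z = n ** (7 / 13)
        return large_alg(b, z)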

Theorem 4.10.

There exists a data structure for the dynamic set cover problem for $O(n)$ upper halfspaces and $O(n)$ points in 3D that maintains the value of an $O(1)$-approximate solution w.h.p. with $O(n^{12/13+\delta})$ amortized insertion and deletion time for any constant $\delta>0$.

4.4 Upper and Lower Halfspaces

Small and medium OPT.

It is straightforward to modify the small and medium OPT algorithms of Sections 4.1 and 4.2 to handle the case when $S$ contains both upper and lower halfspaces. For weighted range sampling, we can handle the upper halfspaces and the lower halfspaces separately; for example, we build the partition tree for the dual points of $S$ separately for the upper and the lower halfspaces. To find low-depth points, we use just one partition tree for the points of $X$, where depth is defined relative to the combined set of upper and lower halfspaces.

Large OPT.

In the large OPT algorithm, we can no longer use the vertical decomposition. Instead, we pick a point $p_{0}$ inside the polyhedron and consider a “star” triangulation of the polyhedron in which every tetrahedron has $p_{0}$ as a vertex. Because we can no longer use $xy$-projections, we would naively need to replace 2D triangle range searching with 3D simplex range searching, which would slightly increase the update time.

If $p_{0}$ is fixed, we can replace the orthogonal $xy$-projection with a perspective projection with respect to $p_{0}$, and we can still use 2D triangle range searching. We describe a way to find a point $p_{0}$ that stays fixed for a number of updates. (Note that $p_{0}$ need not be in $X$.)

Specifically, we divide the update sequence into phases of $\frac{n}{b}$ updates each. At the beginning of each phase, we set $p_{0}$ to be a point of minimum depth with respect to $S$, among all points in $\mathbb{R}^{3}$; an $O(1)$-approximation is fine and can be found in $\tilde{O}(n)$ randomized time [2, 6].

If $p_{0}$ has depth at least $\frac{2n}{b}$ at the beginning of the phase, then the minimum depth stays at least $\frac{n}{b}$ during the entire phase, and a $(\frac{1}{b})$-net of size $\tilde{O}(b)$ is a set cover and can be generated by random sampling; trivially, this gives an $(O(1),\tilde{O}(\frac{n}{b}))$-approximation, assuming $b\leq\sqrt{n}$.
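A sketch of the net construction, using the standard fact that a uniform sample of size $O(b\log n)$ is a $(\frac{1}{b})$-net w.h.p.; the constant c below is an assumption controlling the failure probability.

    import math
    import random

    def random_net(halfspaces, b, c=4):
        # A uniform sample of size ~ c * b * log n; w.h.p. every point of
        # depth >= n/b lies in some sampled halfspace, so when the minimum
        # depth is at least n/b the sample is already a set cover.
        n = len(halfspaces)
        size = min(n, int(c * b * math.log(max(n, 2))))
        return random.sample(halfspaces, size)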

Otherwise, $p_{0}$ has depth $O(\frac{n}{b})$ during the entire phase. In the large OPT algorithm, we let $Z_{0}$ be the set of $O(\frac{n}{b})$ halfspaces containing $p_{0}$ (which can be found by halfspace range reporting in the dual), include $Z_{0}$ in the solution, and remove $Z_{0}$ from $S$ before taking the random sample $R$. As a result, the complement of the union of $R$ indeed contains $p_{0}$. The rest of the algorithm is similar, using a perspective projection from $p_{0}$ instead of the $xy$-projection. (Points covered by $Z_{0}$ should be excluded; we can do so by adding $Z_{0}$ to $S_{B}$.) The additive error increases by $O(\frac{n}{b})$ and so remains asymptotically unchanged.
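For concreteness, a perspective (central) projection from $p_{0}$ onto a fixed plane can be computed as in the Python sketch below; the choice of projection plane is our assumption (any plane that all relevant rays from $p_{0}$ cross will do).

    import numpy as np

    def perspective_project(points, p0, plane_point, plane_normal):
        # Map each point q to the intersection of the ray from p0 through q
        # with the plane {x : dot(x - plane_point, plane_normal) = 0}.
        p0 = np.asarray(p0, dtype=float)
        normal = np.asarray(plane_normal, dtype=float)
        offset = np.dot(np.asarray(plane_point, dtype=float) - p0, normal)
        projected = []
        for q in np.asarray(points, dtype=float):
            direction = q - p0
            t = offset / np.dot(direction, normal)  # assumes the ray meets the plane
            projected.append(p0 + t * direction)
        return np.array(projected)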

The data structures have preprocessing time $\tilde{O}(nz)$. Since we rebuild after every $\frac{n}{b}$ updates, the amortized update cost is $\tilde{O}(bz)$. In our application with $b=\tilde{\Theta}(n^{3/13})$ and $z=n^{7/13}$, this cost does not dominate.
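A minimal sketch of the phase-based rebuilding follows; build and apply_update are hypothetical stand-ins for the actual structures. Note that with the stated parameters the amortized rebuild cost is $\tilde{O}(bz)=\tilde{O}(n^{10/13})$, below the $O(n^{12/13+\delta})$ update time.

    class PhasedStructure:
        # Rebuild from scratch every n/b updates: preprocessing costs ~n*z,
        # so spreading it over a phase gives ~b*z amortized per update.
        def __init__(self, build, apply_update, n, b):
            self.build, self.apply_update = build, apply_update
            self.phase_length = max(1, n // b)
            self.updates_seen = 0
            self.state = build()

        def update(self, op):
            self.updates_seen += 1
            if self.updates_seen > self.phase_length:  # start a new phase
                self.state = self.build()
                self.updates_seen = 1
            self.state = self.apply_update(self.state, op)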

Theorem 4.11.

Theorem 4.10 holds even when there are both upper and lower halfspaces.

5 Improving Static Set Cover

In this last section, we show how the techniques we have developed for the dynamic geometric set cover problem lead to a randomized algorithm for static set cover for 3D halfspaces running in $O(n\log n)$ time, which is optimal and improves our previous $O(n\log n(\log\log n)^{O(1)})$ randomized algorithm [14]. The new algorithm combines the medium OPT algorithm of Section 4.2 and the large OPT algorithm of Section 4.3.

Let $N$ be a fixed parameter used to control the error probability.

Case 1: $\mathrm{OPT}\leq n^{5/6+\delta}$.

Here, we modify the medium OPT algorithm of Section 4.2. We first guess a value $t'\in[\mathrm{OPT}/n^{\delta},\mathrm{OPT}]$; a constant ($O(1/\delta)$) number of guesses suffices. The preprocessing algorithm will use this parameter $t'$.

In the static setting, our old version of Lemma 4.2 for constructing $T_{0}$ suffices and takes $O(n\log n)$ time [14]. Specifically, we construct a set $T_{0}\subseteq S$ of $O(t')$ halfspaces so that after removing the points covered by $T_{0}$, every halfspace of $S$ contains at most $O(\frac{n}{t'})$ points. Since we are in the static setting, we can explicitly remove the points covered by $T_{0}$. The shallow versions of the partition trees [27] can be preprocessed in $O(n\log n)$ time. Parts of the algorithm can be simplified: there is no need to divide into phases, and no need for the extra counters $c^{\#}_{v}$ and the extra point set $E$ during the MWU algorithm. The MWU algorithm then runs in $\tilde{O}(t\cdot(\frac{n}{t'})^{2/3+O(\delta)})=\tilde{O}(t^{1/3}n^{2/3+O(\delta)})=O(n^{17/18+O(\delta)})$ time. As we need to run the MWU algorithm for all guesses $t$ that are powers of 2 (up to $n^{5/6+\delta}$), the running time of the MWU algorithm increases by a logarithmic factor but remains $O(n^{17/18+O(\delta)})$, excluding preprocessing.
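A sketch of the doubling search over the MWU guess $t$; here run_mwu is a hypothetical wrapper around one MWU execution, returning a cover or None when the guess fails.

    def mwu_over_guesses(n, delta, run_mwu):
        # Try every guess t that is a power of 2 up to n^{5/6+delta} and keep
        # the smallest cover found; this multiplies the MWU cost by O(log n).
        best = None
        t = 1
        while t <= n ** (5 / 6 + delta):
            cover = run_mwu(t)
            if cover is not None and (best is None or len(cover) < len(best)):
                best = cover
            t *= 2
        return best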

To bound the error probability by $O(\frac{1}{N})$, we can re-run the MWU algorithm $O(\log N)$ times. The total running time including preprocessing is $O(n\log n+n^{17/18+O(\delta)}\log N)$.
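One standard way to implement this amplification, assuming feasibility of a candidate cover can be checked (is_feasible is a hypothetical verifier):

    import math

    def amplify(run_once, is_feasible, N):
        # Re-run a constant-success-probability algorithm O(log N) times and
        # keep the smallest verified cover; by independence, the probability
        # that every run fails is at most 1/N.
        best = None
        for _ in range(int(math.log2(max(N, 2))) + 1):
            cover = run_once()
            if cover is not None and is_feasible(cover):
                if best is None or len(cover) < len(best):
                    best = cover
        return best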

Case 2: $\mathrm{OPT}>n^{5/6+\delta}$.

Here, we modify the large OPT algorithm of Section 4.3. In the static setting, there is no need for the halfspace range reporting and triangle range searching structures. We first generate the conflict lists of the cells of $\mathrm{VD}(R)$ (the lists of halfspaces crossing the cells) in $O(n\log n)$ expected time [18]. We verify that every conflict list indeed has size $O(b\log n)$; if not, we restart, with an $O(1)$ expected number of trials until success. For each point $p\in X$, we locate the cell of $\mathrm{VD}(R)$ that contains $p$ in the $xy$-projection, by planar point location, in $O(n\log n)$ total time [21].
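The restart loop can be sketched as follows; conflict_lists is a hypothetical stand-in for the $O(n\log n)$ expected-time construction of [18], the sample size $\approx n/b$ matches Section 4.3, and the constant c is an assumption.

    import math
    import random

    def sample_with_small_conflicts(halfspaces, b, conflict_lists, c=2):
        # Resample R until every conflict list has size <= c*b*log n; by
        # Clarkson--Shor-type bounds each trial succeeds with constant
        # probability, so the expected number of trials is O(1).
        n = len(halfspaces)
        bound = c * b * math.log(max(n, 2))
        while True:
            R = random.sample(halfspaces, max(1, n // b))
            lists = conflict_lists(R)
            if all(len(lst) <= bound for lst in lists):
                return R, lists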

We refine $\mathrm{VD}(R)$ before applying the planar separator theorem: we subdivide each cell of $\mathrm{VD}(R)$ into subcells, each containing at most $b$ points of $X$ in the $xy$-projection. The number of extra cuts is $O(\frac{n}{b})$, so the new decomposition still has $O(\frac{n}{b})$ cells.
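A one-dimensional sketch of this refinement, sweeping each cell's projected points by $x$-coordinate (the actual subdivision by vertical cuts is analogous):

    def refine_cell(cell_points, b):
        # Split a cell's projected points, in x-order, into groups of at most
        # b; a cell with k points contributes at most ceil(k/b) subcells, so
        # over all cells the refinement adds only O(n/b) extra cuts.
        pts = sorted(cell_points)  # points as (x, y) pairs, sorted by x
        return [pts[i:i + b] for i in range(0, len(pts), b)]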

(As noted before, when there are both upper and lower halfspaces, we replace the vertical decomposition with a “star” triangulation and replace the orthogonal $xy$-projection with a perspective projection.)

The original large OPT algorithm takes a random sample of the clusters to approximate the value of the optimal solution. To compute an actual solution, we instead recurse within every cluster.

Recall that the number of clusters is $O(n/(bg))$. For each cluster $\gamma$, the number of halfspaces in $S_{\gamma}$ is $\tilde{O}(bg)$, and the number of points in $X_{\gamma}$ is also $\tilde{O}(bg)$, because of the above refinement of $\mathrm{VD}(R)$. Recall that after removing the halfspaces in $S_{B}$, the $S_{\gamma}$'s become disjoint; and after removing the points covered by $R\cup S_{B}$, the sum of the subproblems' optimal values $\mathrm{OPT}_{\gamma}$ is upper-bounded by OPT.

We set $b=n^{1/6}$ and $g=n^{1/3}$ so that the additive error is $\tilde{O}(\frac{n}{b}+\frac{n}{\sqrt{g}})=\tilde{O}(n^{5/6})\leq O(\mathrm{OPT}/\log n)$.

Analysis.

The worst-case expected running time of the overall algorithm on an input of size $n$ satisfies the recurrence $T(n)\leq\sum_{i}T(n_{i})+O(n\log n+n^{17/18+O(\delta)}\log N)$, for some $n_{i}$'s with $\sum_{i}n_{i}\leq n$ and $\max_{i}n_{i}=\tilde{O}(\sqrt{n})$. This recurrence solves to $T(n)=O(n\log n+n\log N)$. (Roughly, the reason is that $\log n$ forms a geometric progression as we descend in the recursion tree.)
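To spell out the geometric-progression argument for the $n\log n$ term (a sketch, ignoring polylogarithmic slack in the subproblem sizes): at recursion depth $j$, every subproblem has size at most about $n^{1/2^{j}}$, while the sizes at each depth sum to at most $n$, so
\[
\sum_{\text{nodes at depth }j} n_{i}\log n_{i}\;\le\;\Big(\max_{i}\log n_{i}\Big)\sum_{i}n_{i}\;\le\;\frac{\log n}{2^{j}}\cdot n,
\]
and summing over all depths $j\geq 0$ gives $\sum_{j}\frac{n\log n}{2^{j}}=O(n\log n)$. The error recurrence for $E(n,x)$ below telescopes analogously.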

Let $E(n,x)$ be the worst-case additive error of the computed solution for an input of size $n$ and optimal value $x$. In Case 1, the additive error is $O(x)$. In general, we have the recurrence $E(n,x)\leq\max\{O(x),\;\sum_{i}E(n_{i},x_{i})+O(\frac{x}{\log n})\}$, for some $n_{i}$'s and $x_{i}$'s with $\sum_{i}n_{i}\leq n$, $\max_{i}n_{i}=\tilde{O}(\sqrt{n})$, and $\sum_{i}x_{i}\leq x$. This recurrence solves to $E(n,x)=O(x)$. (Roughly, the reason is that $\frac{1}{\log n}$ forms a geometric progression as we descend in the recursion tree.) Thus, the algorithm yields an $O(1)$-approximation.

The total error probability over the entire recursion is bounded by $O(\frac{n}{N})$. We set $N=n^{c}$ for the global input size $n$ and an arbitrarily large constant $c$.

Theorem 5.1.

Given $O(n)$ halfspaces and $O(n)$ points in 3D, there exists a randomized $O(1)$-approximation algorithm for the set cover problem that runs in $O(n\log n)$ expected time and is correct w.h.p.

It is possible to modify the analysis to obtain an $O(n\log n)$ worst-case time bound instead of an expected one. In Case 2, the conflict lists have size $O(b\log n)$ w.h.p., and the construction time is actually $O(n\log n)$ w.h.p. at the root of the recursion. At subsequent levels of the recursion, we can apply a Chernoff bound to obtain a high-probability bound on the total running time.

References

  • [1] Anna Adamaszek, Sariel Har-Peled, and Andreas Wiese. Approximation schemes for independent set and sparse subsets of polygons. Journal of the ACM, 66(4):29:1–29:40, 2019. doi:10.1145/3326122.
  • [2] Peyman Afshani and Timothy M. Chan. On approximate range counting and depth. Discrete & Computational Geometry, 42(1):3–21, 2009. doi:10.1007/s00454-009-9177-z.
  • [3] Pankaj K. Agarwal, Hsien-Chih Chang, Subhash Suri, Allen Xiao, and Jie Xue. Dynamic geometric set cover and hitting set. In Proceedings of the 36th Symposium on Computational Geometry (SoCG), volume 164, pages 2:1–2:15, 2020. doi:10.4230/LIPIcs.SoCG.2020.2.
  • [4] Pankaj K. Agarwal and Jeff Erickson. Geometric range searching and its relatives. In B. Chazelle, J. E. Goodman, and R. Pollack, editors, Advances in Discrete and Computational Geometry, pages 1–56. AMS Press, 1999. URL: http://jeffe.cs.illinois.edu/pubs/survey.html.
  • [5] Pankaj K. Agarwal and Jiangwei Pan. Near-linear algorithms for geometric hitting sets and set covers. Discrete & Computational Geometry, 63(2):460–482, 2020. Preliminary version in SoCG'14. doi:10.1007/s00454-019-00099-6.
  • [6] Boris Aronov and Sariel Har-Peled. On approximating the depth and related problems. SIAM Journal on Computing, 38(3):899–921, 2008. doi:10.1137/060669474.
  • [7] Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, and Angela Y. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. Journal of the ACM, 45(6):891–923, 1998. doi:10.1145/293347.293348.
  • [8] Hervé Brönnimann and Michael T. Goodrich. Almost optimal set covers in finite VC-dimension. Discrete & Computational Geometry, 14(4):463–479, 1995.
  • [9] Timothy M. Chan. Random sampling, halfspace range reporting, and construction of $(\leq k)$-levels in three dimensions. SIAM Journal on Computing, 30(2):561–575, 2000. doi:10.1137/S0097539798349188.
  • [10] Timothy M. Chan. Semi-online maintenance of geometric optima and measures. SIAM Journal on Computing, 32(3):700–716, 2003. doi:10.1137/S0097539702404389.
  • [11] Timothy M. Chan. A dynamic data structure for 3-d convex hulls and 2-d nearest neighbor queries. Journal of the ACM, 57(3):16:1–16:15, 2010. doi:10.1145/1706591.1706596.
  • [12] Timothy M. Chan. Optimal partition trees. Discrete & Computational Geometry, 47(4):661–690, 2012. doi:10.1007/s00454-012-9410-z.
  • [13] Timothy M. Chan. Three problems about dynamic convex hulls. International Journal of Computational Geometry & Applications, 22(4):341–364, 2012. doi:10.1142/S0218195912600096.
  • [14] Timothy M. Chan and Qizheng He. Faster approximation algorithms for geometric set cover. In Proceedings of the 36th Symposium on Computational Geometry (SoCG), volume 164, pages 27:1–27:14, 2020. doi:10.4230/LIPIcs.SoCG.2020.27.
  • [15] Timothy M. Chan and Konstantinos Tsakalidis. Optimal deterministic algorithms for 2-d and 3-d shallow cuttings. Discrete & Computational Geometry, 56(4):866–881, 2016.
  • [16] Kenneth L. Clarkson. New applications of random sampling in computational geometry. Discrete & Computational Geometry, 2:195–222, 1987. doi:10.1007/BF02187879.
  • [17] Kenneth L. Clarkson. Algorithms for polytope covering and approximation. In Workshop on Algorithms and Data Structures, pages 246–252, 1993.
  • [18] Kenneth L. Clarkson and Peter W. Shor. Applications of random sampling in computational geometry, II. Discrete & Computational Geometry, 4:387–421, 1989. doi:10.1007/BF02187740.
  • [19] Kenneth L. Clarkson and Kasturi Varadarajan. Improved approximation algorithms for geometric set cover. Discrete & Computational Geometry, 37(1):43–58, 2007.
  • [20] Artur Czumaj, Funda Ergün, Lance Fortnow, Avner Magen, Ilan Newman, Ronitt Rubinfeld, and Christian Sohler. Approximating the weight of the Euclidean minimum spanning tree in sublinear time. SIAM Journal on Computing, 35(1):91–109, 2005. doi:10.1137/S0097539703435297.
  • [21] Mark de Berg, Otfried Cheong, Marc J. van Kreveld, and Mark H. Overmars. Computational Geometry: Algorithms and Applications. Springer, 3rd edition, 2008.
  • [22] Greg N. Frederickson. Fast algorithms for shortest paths in planar graphs, with applications. SIAM Journal on Computing, 16(6):1004–1022, 1987.
  • [23] Monika Henzinger, Stefan Neumann, and Andreas Wiese. Dynamic approximate maximum independent set of intervals, hypercubes and hyperrectangles. In Proceedings of the 36th Symposium on Computational Geometry (SoCG), volume 164, pages 51:1–51:14, 2020. doi:10.4230/LIPIcs.SoCG.2020.51.
  • [24] Richard J. Lipton and Robert Endre Tarjan. Applications of a planar separator theorem. SIAM Journal on Computing, 9(3):615–627, 1980. doi:10.1137/0209046.
  • [25] Chih-Hung Liu, Evanthia Papadopoulou, and D. T. Lee. An output-sensitive approach for the $L_1$/$L_\infty$ $k$-nearest-neighbor Voronoi diagram. In Proceedings of the 19th Annual European Symposium on Algorithms (ESA), volume 6942 of Lecture Notes in Computer Science, pages 70–81. Springer, 2011. doi:10.1007/978-3-642-23719-5_7.
  • [26] Jiří Matoušek. Efficient partition trees. Discrete & Computational Geometry, 8(3):315–334, 1992.
  • [27] Jiří Matoušek. Reporting points in halfspaces. Computational Geometry, 2(3):169–186, 1992.
  • [28] Jiří Matoušek. Range searching with efficient hierarchical cutting. Discrete & Computational Geometry, 10:157–182, 1993. doi:10.1007/BF02573972.
  • [29] Nabil H. Mustafa, Rajiv Raman, and Saurabh Ray. Quasi-polynomial time approximation scheme for weighted geometric set cover on pseudodisks and halfspaces. SIAM Journal on Computing, 44(6):1650–1669, 2015. doi:10.1137/14099317X.
  • [30] Nabil H. Mustafa and Saurabh Ray. Improved results on geometric hitting set problems. Discrete & Computational Geometry, 44(4):883–895, 2010. doi:10.1007/s00454-010-9285-9.
  • [31] Micha Sharir. On $k$-sets in arrangements of curves and surfaces. Discrete & Computational Geometry, 6:593–613, 1991. doi:10.1007/BF02574706.