Adapting to Function Difficulty and
Growth Conditions
in Private Optimization
Abstract
We develop algorithms for private stochastic convex optimization that adapt to the hardness of the specific function we wish to optimize. While previous work provides worst-case bounds for arbitrary convex functions, it is often the case that the function at hand belongs to a smaller class that enjoys faster rates. Concretely, we show that for functions exhibiting κ-growth around the optimum, i.e., f(x) ≥ f(x⋆) + λ‖x − x⋆‖₂^κ for κ > 1, our algorithms improve upon the standard √d/(nε) privacy rate to the faster (√d/(nε))^{κ/(κ−1)}. Crucially, they achieve these rates without knowledge of the growth parameters of the function. Our algorithms build upon the inverse sensitivity mechanism, which adapts to instance difficulty [2], and recent localization techniques in private optimization [25]. We complement our algorithms with matching lower bounds for these function classes and demonstrate that our adaptive algorithm is simultaneously (minimax) optimal over all κ ≥ 1 + c whenever c = Θ(1).
1 Introduction
Stochastic convex optimization (SCO) is a central problem in machine learning and statistics, where for a sample space 𝒮, parameter space X ⊂ ℝ^d, and a collection of convex losses {F(·; s)}_{s∈𝒮}, one wishes to solve

minimize_{x ∈ X} f(x) := E_{S∼P}[F(x; S)]    (1)
using an observed dataset S^n = (s_1, …, s_n) ∈ 𝒮^n drawn i.i.d. from P. While the problem as formulated is by now fairly well-understood [12, 38, 29, 10, 37], it is becoming clear that, because of considerations beyond pure statistical accuracy—memory or communication costs [45, 26, 13], fairness [23, 28], personalization or distributed learning [35]—problem (1) is simply insufficient to address modern learning problems. To that end, researchers have revisited SCO under the additional constraint that the solution preserves the privacy of the provided sample [22, 21, 1, 16, 19]. A waypoint is Bassily et al. [7], who provide a private method with optimal convergence rates for the related empirical risk minimization (ERM) problem, with recent papers focusing on SCO and providing (worst-case) optimal rates in various settings: smooth convex functions [8, 25], non-smooth functions [9], non-Euclidean geometry [5, 4] and under more stringent privacy constraints [34].
Yet these works ground their analyses in worst-case scenarios and provide guarantees for the hardest instance of the class of problems they consider. Correspondingly, they argue that their algorithms are optimal in a minimax sense: for any algorithm, there exists a hard instance on which the error achieved by the algorithm is equal to the upper bound. While valuable, these results are pessimistic—the exhibited hard instances are typically pathological—and fail to reflect achievable performance.
In this work, we consider the problem of adaptivity when solving (1) under privacy constraints. Importantly, we wish to provide private algorithms that adapt to the hardness of the objective f. A loss function may belong to multiple problem classes, each exhibiting different achievable rates, so a natural desideratum is to attain the error rate of the easiest sub-class. As a simple vignette, given an arbitrary G-Lipschitz convex loss over a domain of diameter D, the worst-case excess-loss guarantee of any ε-DP algorithm is GD(1/√n + d/(nε)). However, if one learns that f exhibits some growth property—say f is λ-strongly convex—the guarantee improves to the faster rate (G²/λ)(1/n + (d/(nε))²) with the appropriate algorithm. It is thus important to provide algorithms that achieve the rates of the “easiest” class to which the function belongs [32, 46, 18].
To that end, consider the nested classes of functions F_κ for κ ≥ 1 such that, if f ∈ F_κ then there exists λ > 0 such that for all x ∈ X,

f(x) ≥ f(x⋆) + λ‖x − x⋆‖₂^κ.
For example, strong convexity implies growth with parameter κ = 2. This growth assumption closely relates to uniform convexity [32] and the Kurdyka-Łojasiewicz inequality [11], and we make these connections precise in Section 2. Intuitively, smaller κ makes the function much easier to optimize: the function grows quickly around the optimal point. Objectives with growth are widespread in machine learning applications: among others, the regularized hinge loss exhibits sharp growth (i.e. κ = 1), while norm-constrained ℓ_p-norm regression has κ-growth with κ = p [43]. In this work, we provide private algorithms that adapt to the actual growth of the function at hand.
We begin our analysis by examining Asi and Duchi’s inverse sensitivity mechanism [2] on ERM as a motivation. While not a practical algorithm, it achieves instance-optimal rates for any one-dimensional function under mild assumptions, quantifying the best bound one could hope to achieve with an adaptive algorithm, and showing (in principle) that adaptive private algorithms can exist. We first show that for any function with κ-growth, the inverse sensitivity mechanism achieves privacy cost roughly (d/(nε))^{κ/(κ−1)}—importantly, without knowledge of the function class F_κ that f belongs to. This constitutes grounding and motivation for our work in three ways: (i) it validates our choice of sub-classes, as the privacy rate is effectively controlled by the value of κ, (ii) it exhibits the rate we wish to achieve with efficient algorithms on each class and (iii) it showcases that for easier functions, privacy costs shrink significantly—to illustrate, for κ = 3/2 the privacy rate becomes (d/(nε))³.
We continue our treatment of problem (1) under growth in Section 4 and develop practical algorithms that achieve the rates of the inverse sensitivity mechanism. Moreover, for approximate (ε, δ)-differential privacy, our algorithms improve the rates, achieving roughly (√d/(nε))^{κ/(κ−1)}. Our algorithms hinge on a reduction to SCO: we show that by solving a sequence of increasingly constrained SCO problems, one achieves the right rate whenever the function exhibits growth at the optimum. Importantly, our algorithm only requires a lower bound κ̄ ≤ κ (where κ is the actual growth of f).
We provide optimality guarantees for our algorithms in Section 5 and show that both the inverse sensitivity mechanism and the efficient algorithms of Section 4 are simultaneously (minimax) optimal over all classes F_κ whenever κ > 1 and d = 1 for ε-DP algorithms. Finally, we prove that in arbitrary dimension, for both pure- and approximate-DP constraints, our algorithms are also simultaneously optimal for all classes F_κ with κ ≥ 2.
Along the way, we provide results that may be of independent interest to the community. First, we develop optimal algorithms for SCO under pure differential privacy constraints, which, to the best of our knowledge, did not previously exist in the literature. Second, our algorithms and analysis provide high-probability bounds on the loss, whereas existing results only provide (weaker) bounds on the expected loss. Finally, we complete the results of Ramdas and Singh [40] on (non-private) optimization lower bounds for functions with κ-growth by providing information-theoretic lower bounds (in contrast to oracle-based lower bounds that rely on observing only gradient information) and capturing the optimal dependence on all problem parameters (namely λ and κ).
1.1 Related work
Convex optimization is one of the best studied problems in private data analysis [16, 19, 41, 7]. The first papers in this line of work mainly study minimizing the empirical loss, and readily establish that the (minimax) optimal privacy rates are GDd/(nε) for pure ε-DP and GD√(d log(1/δ))/(nε) for (ε, δ)-DP [16, 7]. More recently, several works instead consider the harder problem of privately minimizing the population loss [8, 25]. These papers introduce new algorithmic techniques to obtain the worst-case optimal rate of GD(1/√n + √d/(nε)) for (ε, δ)-DP. They also show how to improve this rate to the faster (G²/λ)(1/n + d/(n²ε²)) in the case of λ-strongly convex functions. Our work subsumes both of these results, as they correspond to κ → ∞ and κ = 2 respectively. To the best of our knowledge, there has been no work in private optimization that investigates the rates under general κ-growth assumptions or adaptivity to such conditions.
In contrast, the optimization community has extensively studied growth assumptions [40, 32, 15], showing that on these problems, carefully crafted algorithms improve upon the standard 1/√n rate for convex functions to the faster (1/√n)^{κ/(κ−1)}. [32] derives worst-case optimal (in the first-order oracle model) gradient algorithms in the uniformly convex case (i.e. κ ≥ 2) and provides techniques to adapt to the growth κ, while [40], drawing connections between growth conditions and active learning, provides upper and lower bounds in the first-order stochastic oracle model. We complete the results of the latter and provide information-theoretic lower bounds that have optimal dependence on λ and κ—their lower bound only holds for a particular value of λ (inversely proportional to the domain radius). Closest to our work is [15], which studies instance-optimality via local minimax complexity [14]. For one-dimensional functions, they develop a bisection-based instance-optimal algorithm and show that on individual functions of the form |x − x⋆|^κ, the local minimax rate is (1/√n)^{κ/(κ−1)}.
2 Preliminaries
We first provide notation that we use throughout this paper, define useful assumptions and present key definitions in convex analysis and differential privacy.
Notation.
n typically denotes the sample size and d the dimension. Throughout this work, x refers to the optimization variable, X ⊂ ℝ^d to the constraint set and s (S when random) to elements of the sample space 𝒮. We usually denote by F the (convex) loss function and, for a dataset S^n = (s_1, …, s_n) ∈ 𝒮^n and distribution P, we define the empirical and population losses

f̂(x; S^n) := (1/n) Σ_{i=1}^n F(x; s_i)  and  f(x; P) := E_{S∼P}[F(x; S)].

We omit the dependence on S^n or P when it is clear from context. We reserve ε, δ for the privacy parameters of Definition 2.1. We always take gradients with respect to the optimization variable x. In the case that F is not differentiable at x, we override notation and let ∇F(x; s) denote an arbitrary element of ∂F(x; s), where ∂F(x; s) is the subdifferential of F(·; s) at x. We use M for (potentially random) mechanisms. For p ≥ 1, ‖·‖_p is the standard ℓ_p-norm, B_p^d(r) is the corresponding d-dimensional ℓ_p-ball of radius r, and q denotes the dual exponent of p, i.e. such that 1/p + 1/q = 1. Finally, we define the Hamming distance d_ham(S, S') := min_σ Σ_{i=1}^n 1{s_i ≠ s'_{σ(i)}}, where σ ranges over the set of permutations over sets of size n.
Assumptions.
We first state standard assumptions for solving (1). We assume that X ⊂ ℝ^d is a closed, convex domain with ℓ₂-diameter at most D. Furthermore, we assume that for any s ∈ 𝒮, F(·; s) is convex and G-Lipschitz with respect to the ℓ₂-norm. Central to our work, we define the following κ-growth assumption.
Assumption 1 ((λ, κ)-growth).
Let κ ≥ 1 and λ > 0. For a loss F and distribution P, we say that (F, P) has (λ, κ)-growth if the population function f(·) = f(·; P) satisfies, for any minimizer x⋆ of f over X,

f(x) ≥ f(x⋆) + λ‖x − x⋆‖₂^κ  for all x ∈ X.

In the case where P is the empirical distribution on a finite dataset S^n, we refer to (λ, κ)-growth of (F, P) as (λ, κ)-growth of the empirical function f̂.
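To make the connection with strong convexity concrete, a one-line check shows that λ-strong convexity yields Assumption 1 with κ = 2 (and constant λ/2): writing the strong-convexity inequality at a minimizer x⋆, where first-order optimality gives ⟨∇f(x⋆), x − x⋆⟩ ≥ 0 for all x ∈ X,

f(x) ≥ f(x⋆) + ⟨∇f(x⋆), x − x⋆⟩ + (λ/2)‖x − x⋆‖₂² ≥ f(x⋆) + (λ/2)‖x − x⋆‖₂².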
Uniform convexity and Kurdyka-Łojasiewicz inequality.
Assumption 1 is closely related to two fundamental notions in convex analysis: uniform convexity and the Kurdyka-Łojasiewicz inequality. Following [39], we say that f is (λ, κ)-uniformly convex with κ ≥ 2 and λ > 0 if for all x, y ∈ X and any g ∈ ∂f(x),

f(y) ≥ f(x) + ⟨g, y − x⟩ + (λ/κ)‖y − x‖₂^κ.
This immediately implies that (i) sums (and expectations) preserve uniform convexity and (ii) if f is (λ, κ)-uniformly convex, then it has (λ/κ, κ)-growth. This will be useful when constructing hard instances, as it suffices to consider (λ, κ)-uniformly convex functions, which are generally more convenient to manipulate. Finally, we point out that, in the general case κ ≥ 1, the literature refers to Assumption 1 as the Kurdyka-Łojasiewicz inequality [11]. Theorem 5-(ii) in [11] says that, under mild conditions, Assumption 1 implies the following inequality between the error and the gradient norm for all x ∈ X:

f(x) − f(x⋆) ≤ (1/λ)^{1/(κ−1)} ‖∇f(x)‖₂^{κ/(κ−1)}.    (2)
This is a key result in our analysis of the inverse sensitivity mechanism of Section 3.
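To see how Assumption 1 yields inequality (2), here is a self-contained derivation for κ > 1 (the constant-free version; [11] tracks the exact constants). Convexity gives f(x) − f(x⋆) ≤ ⟨∇f(x), x − x⋆⟩ ≤ ‖∇f(x)‖₂ ‖x − x⋆‖₂, while the growth condition gives ‖x − x⋆‖₂ ≤ ((f(x) − f(x⋆))/λ)^{1/κ}. Combining the two,

f(x) − f(x⋆) ≤ ‖∇f(x)‖₂ · ((f(x) − f(x⋆))/λ)^{1/κ},

and rearranging yields f(x) − f(x⋆) ≤ λ^{−1/(κ−1)} ‖∇f(x)‖₂^{κ/(κ−1)}, which is exactly (2).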
Differential privacy.
We begin by recalling the definition of -differential privacy.
Definition 2.1 ([22, 21]).
A randomized algorithm M is (ε, δ)-differentially private ((ε, δ)-DP) if, for all datasets S, S' ∈ 𝒮^n that differ in a single data element and for all events O in the output space of M, we have

P(M(S) ∈ O) ≤ e^ε P(M(S') ∈ O) + δ.
We use the following standard results in differential privacy.
Lemma 2.1 (Composition [20, Thm. 3.16]).
If M_1, …, M_k are randomized algorithms such that M_i is (ε_i, δ_i)-DP, then their composition (M_1, …, M_k) is (Σ_i ε_i, Σ_i δ_i)-DP.
Next, we consider the Laplace mechanism. We will let Lap(b)^d denote a d-dimensional vector Z with i.i.d. coordinates Z_i ∼ Lap(b) for i ∈ [d].
Lemma 2.2 (Laplace mechanism [20, Thm. 3.6]).
Let f : 𝒮^n → ℝ^d have ℓ₁-sensitivity Δ₁, that is, sup_{d_ham(S,S') ≤ 1} ‖f(S) − f(S')‖₁ ≤ Δ₁. Then the Laplace mechanism M(S) = f(S) + Lap(Δ₁/ε)^d is ε-DP.
Finally, we need the Gaussian mechanism for approximate (ε, δ)-DP.
Lemma 2.3 (Gaussian mechanism [20, Thm. A.1]).
Let f : 𝒮^n → ℝ^d have ℓ₂-sensitivity Δ₂, that is, sup_{d_ham(S,S') ≤ 1} ‖f(S) − f(S')‖₂ ≤ Δ₂. Then the Gaussian mechanism M(S) = f(S) + N(0, σ²I_d) with σ = Δ₂√(2 log(1.25/δ))/ε is (ε, δ)-DP.
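As an illustration, here is a minimal NumPy sketch of both mechanisms (Lemmas 2.2 and 2.3); the mean-release example and its sensitivity constants are our own toy instantiation, not a construction from the paper.

```python
import numpy as np

def laplace_mechanism(value, l1_sensitivity, epsilon, rng=None):
    """Release a d-dimensional statistic under pure epsilon-DP by adding
    i.i.d. Laplace(l1_sensitivity / epsilon) noise (Lemma 2.2)."""
    rng = rng or np.random.default_rng()
    return value + rng.laplace(scale=l1_sensitivity / epsilon, size=np.shape(value))

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=None):
    """Release a d-dimensional statistic under (epsilon, delta)-DP by adding
    Gaussian noise with sigma = l2_sensitivity * sqrt(2 log(1.25/delta)) / epsilon
    (Lemma 2.3)."""
    rng = rng or np.random.default_rng()
    sigma = l2_sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(scale=sigma, size=np.shape(value))

# Toy example: release the mean of n points in [0, 1]^d. Swapping one point
# moves the mean by at most 1/n per coordinate, so the l1-sensitivity is d/n
# and the l2-sensitivity is sqrt(d)/n.
n, d = 1000, 5
data = np.random.default_rng(0).random((n, d))
mean = data.mean(axis=0)
print(laplace_mechanism(mean, l1_sensitivity=d / n, epsilon=1.0))
print(gaussian_mechanism(mean, l2_sensitivity=np.sqrt(d) / n, epsilon=1.0, delta=1e-6))
```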
Inverse sensitivity mechanism.
Our goal is to design private optimization algorithms that adapt to the difficulty of the underlying function. As a reference point, we turn to the inverse sensitivity mechanism of [2], as it enjoys general instance-optimality guarantees. For a given function f : 𝒮^n → X that we wish to estimate privately, define the inverse sensitivity at x ∈ X

len(x; S) := inf_{S' ∈ 𝒮^n} { d_ham(S, S') : f(S') = x },    (3)

that is, the inverse sensitivity of a target parameter x at instance S is the minimal number of samples one needs to change to reach a new instance S' such that f(S') = x. Having this quantity, the inverse sensitivity mechanism samples an output from the following probability density

π_S(x) ∝ exp(−ε · len(x; S)/2).    (4)
The inverse sensitivity mechanism preserves ε-DP and enjoys instance-optimality guarantees in general settings [2]. In contrast to (worst-case) minimax optimality guarantees, which measure the performance of the algorithm on the hardest instance, these notions of instance-optimality provide stronger per-instance optimality guarantees.
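To make definitions (3) and (4) concrete, the following sketch instantiates the mechanism for the one-dimensional median over a finite output grid (a classical example from [2]); the grid discretization, the counting rule, and the function name are simplifying assumptions of ours for illustration.

```python
import numpy as np

def inverse_sensitivity_median(data, grid, epsilon, rng=None):
    """Illustrative inverse sensitivity mechanism (3)-(4) for the 1-d median
    on a finite output grid. len(t; S) counts how many samples must move past
    t before t becomes a median of the modified dataset."""
    rng = rng or np.random.default_rng()
    s = np.sort(np.asarray(data))
    med = np.median(s)
    # number of samples strictly between the current median and the target t
    lengths = np.array([np.sum((s > min(med, t)) & (s < max(med, t))) for t in grid])
    # exponential-mechanism-style weights from definition (4)
    logw = -epsilon * lengths / 2.0
    w = np.exp(logw - logw.max())
    return rng.choice(grid, p=w / w.sum())

data = np.random.default_rng(0).normal(size=101)
grid = np.linspace(-3, 3, 121)
print(inverse_sensitivity_median(data, grid, epsilon=1.0))
```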
3 Adaptive rates through inverse sensitivity for ε-DP
To understand the achievable rates when privately optimizing functions with growth, we begin our theoretical investigation by examining the inverse sensitivity mechanism in our setting. We show that, for instances that exhibit (λ, κ)-growth of the empirical function, the inverse sensitivity mechanism privately solves ERM with excess loss roughly (1/λ)^{1/(κ−1)} (Gd/(nε))^{κ/(κ−1)}.
In our setting, we use a gradient-based approximation of the inverse sensitivity mechanism to simplify the analysis while attaining similar rates. Following [3], with our function of interest f(S) = argmin_{x∈X} f̂(x; S), we can lower bound the inverse sensitivity under natural assumptions: since changing a single sample moves the empirical gradient by at most 2G/n, we have len(x; S) ≥ n‖∇f̂(x; S)‖₂/(2G). We define a ρ-smoothed version of this quantity, which is more suitable to continuous domains,

len_ρ(x; S) := inf_{x' : ‖x'−x‖₂ ≤ ρ} n‖∇f̂(x'; S)‖₂/(2G),

and define the ρ-smooth gradient-based inverse sensitivity mechanism

π_S^ρ(x) ∝ exp(−ε · len_ρ(x; S)/2).    (5)
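For intuition, here is a minimal one-dimensional sketch of the gradient-based mechanism (5), instantiated for the absolute loss F(x; s) = |x − s| (so G = 1); the grid, the smoothing rule, and the function name are illustrative assumptions rather than the exact construction.

```python
import numpy as np

def smoothed_grad_inverse_sensitivity(data, grid, epsilon, rho, G=1.0, rng=None):
    """Illustrative 1-d version of mechanism (5). Changing one sample moves
    the empirical gradient by at most 2G/n, so n * |grad| / (2G) lower-bounds
    the inverse sensitivity; we take a min over a rho-ball to smooth it."""
    rng = rng or np.random.default_rng()
    n = len(data)
    grads = np.array([np.mean(np.sign(x - data)) for x in grid])  # subgradients of the empirical loss
    lengths = n * np.abs(grads) / (2 * G)
    # rho-smoothing: minimum of the proxy over a window of radius rho
    smoothed = np.array([lengths[np.abs(grid - x) <= rho].min() for x in grid])
    logw = -epsilon * smoothed / 2.0
    w = np.exp(logw - logw.max())
    return rng.choice(grid, p=w / w.sum())

data = np.random.default_rng(1).normal(loc=0.5, size=200)
grid = np.linspace(-2, 3, 251)
print(smoothed_grad_inverse_sensitivity(data, grid, epsilon=2.0, rho=0.05))
```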
Note that while exactly sampling from the un-normalized density (5) is computationally intractable in general, analyzing its performance is an important step towards understanding the optimal rates for the family of functions with growth that we study in this work. The following theorem demonstrates the adaptivity of the inverse sensitivity mechanism to the growth of the underlying instance. We defer the proof to Appendix A.
Theorem 1.
Let F(·; s) be convex and G-Lipschitz for all s ∈ 𝒮 and let ρ > 0 be sufficiently small. The mechanism (5) is ε-DP and, if the empirical function f̂(·; S) has (λ, κ)-growth with κ > 1, then with probability at least 1 − β its output x satisfies

f̂(x; S) − min_{x'∈X} f̂(x'; S) ≲ (1/λ)^{1/(κ−1)} ( G(d + log(1/β))/(nε) )^{κ/(κ−1)}.
The rates of the inverse sensitivity mechanism in Theorem 1 provide two main insights regarding the landscape of the problem with growth conditions. First, these conditions allow us to improve the worst-case rate to (d/(nε))^{κ/(κ−1)} for pure ε-DP and therefore suggest that a rate of (√d/(nε))^{κ/(κ−1)} is possible for approximate (ε, δ)-DP. Moreover, the general instance-optimality guarantees of this mechanism [2] hint that these are the optimal rates for our class of functions. In the sections to come, we validate the correctness of these predictions by developing efficient algorithms that achieve these rates (for pure and approximate privacy), and prove matching lower bounds which demonstrate the optimality of these algorithms.
4 Efficient algorithms with optimal rates
While the previous section demonstrates that there exist algorithms that improve the rates for functions with growth, we pointed out that the mechanism (5) is computationally intractable in the general case. In this section, we develop efficient algorithms—i.e., implementable with gradient-based methods—that achieve the same convergence rates. Our algorithms build on the recent localization techniques that Feldman et al. [25] used to obtain optimal rates for DP-SCO with general convex functions. In Section 4.1, we use these techniques to develop private algorithms that achieve the optimal rates for (pure) DP-SCO with high probability, in contrast to existing results which bound the expected excess loss. These results are of independent interest.
In Section 4.2, we translate these results into convergence guarantees on privately optimizing convex functions with growth by solving a sequence of increasingly constrained SCO problems—the high-probability guarantees of Section 4.1 being crucial to our convergence analysis of these algorithms.
4.1 High-probability guarantees for convex DP-SCO
We first describe our algorithm (Algorithm 1), then analyze its performance under pure-DP (Proposition 1) and approximate-DP constraints (Proposition 2). At a high level, Algorithm 1 runs in stages: each stage solves a regularized ERM problem centered at the previous iterate on a fresh batch of samples, then releases the minimizer with noise calibrated to its sensitivity, with localization radii and noise scales shrinking geometrically across stages. Our analysis builds on novel tight high-probability generalization bounds for uniformly-stable algorithms [24]. We defer the proofs to Appendix B.
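To fix ideas, we include a short Python sketch of the localization template of [25] that Algorithm 1 instantiates; the phase schedule, step sizes, regularization weights, inner solver, and sensitivity calibration below are illustrative choices rather than the exact parameters of Algorithm 1 (which are deferred to Appendix B), and the function names are ours.

```python
import numpy as np

def localized_dp_sco(data, grad_F, x0, D, G, epsilon, rng=None):
    """Illustrative localization scheme: phase i solves a regularized ERM
    problem centered at the previous iterate on a fresh chunk of samples,
    then privatizes the minimizer with Laplace noise scaled to its
    sensitivity (the regularized minimizer moves by at most ~2G/(lam*m)
    when one of the m samples changes [25])."""
    rng = rng or np.random.default_rng()
    n, dim = data.shape
    k = max(1, int(np.log2(n)))
    chunks = np.array_split(rng.permutation(data), k)
    x = np.asarray(x0, dtype=float).copy()
    for i, chunk in enumerate(chunks):
        m = len(chunk)
        eta = D / (G * 2.0 ** i * np.sqrt(m))  # shrinking localization radius
        lam = 1.0 / (eta * m)                  # regularization toward the current center
        y = x.copy()
        for t in range(1, 201):                # subgradient method on the regularized ERM
            g = np.mean([grad_F(y, s) for s in chunk], axis=0) + lam * (y - x)
            y -= g / (lam * (t + 1))
        l2_sens = 2 * G / (lam * m)
        x = y + rng.laplace(scale=np.sqrt(dim) * l2_sens / epsilon, size=dim)
    return x

# usage: DP-SCO for the (1-Lipschitz) loss F(x; s) = ||x - s||_2
rng = np.random.default_rng(1)
data = rng.normal(size=(2000, 3))
grad = lambda x, s: (x - s) / max(np.linalg.norm(x - s), 1e-9)
print(localized_dp_sco(data, grad, x0=np.zeros(3), D=1.0, G=1.0, epsilon=1.0))
```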
Proposition 1.
Let n ∈ ℕ, ε > 0, β ∈ (0, 1) and F(·; s) be convex and G-Lipschitz for all s ∈ 𝒮. Setting the number of stages, sample splits and noise scales appropriately (see Appendix B.1), Algorithm 1 is ε-DP and, with probability at least 1 − β,

f(x_k; P) − min_{x∈X} f(x; P) ≲ GD (1/√n + d/(nε)),

up to logarithmic factors in n and 1/β.
Similarly, by using a different choice for the parameters and noise distribution, we have the following guarantees for approximate -DP.
Proposition 2.
Let n ∈ ℕ, ε ≤ 1, δ ∈ (0, 1), β ∈ (0, 1) and F(·; s) be convex and G-Lipschitz for all s ∈ 𝒮. Setting the parameters and noise distribution as in Appendix B.2, Algorithm 1 is (ε, δ)-DP and, with probability at least 1 − β,

f(x_k; P) − min_{x∈X} f(x; P) ≲ GD (1/√n + √(d log(1/δ))/(nε)),

up to logarithmic factors in n and 1/β.
4.2 Algorithms for DP-SCO with growth
Building on the algorithms of the previous section, we design algorithms that recover the rates of the inverse sensitivity mechanism for functions with growth—importantly, without knowledge of the growth constants. Inspired by epoch-based algorithms from the optimization literature [31, 29], our algorithm iteratively applies the private procedures from the previous section. Crucially, the growth assumption allows us to reduce the diameter of the domain after each run, hence improving the overall excess loss by carefully choosing the hyper-parameters. We provide full details in Algorithm 2.
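Schematically, the epoch structure can be sketched as follows; `dp_sco_solver` stands for an assumed black-box DP-SCO subroutine (e.g., Algorithm 1 constrained to a ball), and the halving schedule is one natural choice rather than the exact tuning of Algorithm 2.

```python
import numpy as np

def epoch_dp_sco_growth(data, dp_sco_solver, x0, D, k=None, rng=None):
    """Illustrative epoch scheme: repeatedly call a DP-SCO subroutine on a
    fresh chunk of samples over a domain whose radius halves at every stage,
    recentering at the previous output. Since each sample is used in exactly
    one stage, the stages compose to a single private algorithm."""
    rng = rng or np.random.default_rng()
    n = data.shape[0]
    k = k or max(1, int(np.log2(n)))
    chunks = np.array_split(rng.permutation(data), k)
    x, radius = np.asarray(x0, dtype=float), float(D)
    for chunk in chunks:
        x = dp_sco_solver(chunk, center=x, radius=radius)
        radius /= 2.0  # the growth condition keeps the optimum inside the shrinking ball
    return x
```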
The following theorem summarizes our main upper bound for DP-SCO with growth in the pure privacy model, recovering the rates of the inverse sensitivity mechanism in Section 3. We defer the proof to Section B.3.
Theorem 2.
Let n ∈ ℕ, ε > 0, β ∈ (0, 1) and F(·; s) be convex and G-Lipschitz for all s ∈ 𝒮. Assume that f has (λ, κ)-growth (Assumption 1) with κ > 1. Setting k = Θ(log n), Algorithm 2 is ε-DP and, with probability at least 1 − β,

f(x_k; P) − min_{x∈X} f(x; P) ≤ Õ( (1/λ)^{1/(κ−1)} (G/√n + Gd/(nε))^{κ/(κ−1)} ),

where Õ hides logarithmic factors depending on n and 1/β.
Sketch of the proof.
The main challenge of the proof is showing that the final iterate achieves good excess loss without knowledge of κ or λ. Let us denote by Err_i the error guarantee of Proposition 1 (or Proposition 2 for approximate-DP) at stage i. At each stage i, as long as x⋆ belongs to the current domain X_i, the excess loss is of order Err_i, which is proportional to the current diameter and thus decreases exponentially fast with i. The challenge is that, without knowledge of κ or λ, we do not know the index i⋆ after which x⋆ ∉ X_i for i > i⋆ and the guarantees become meaningless with respect to the original problem. However, in the stages after i⋆, as the constraint set becomes very small, we upper bound the variations in function values and show that the sub-optimality cannot increase (overall) by more than O(Err_{i⋆}), thus achieving the optimal rate of stage i⋆.
∎
Moreover, we can improve the dependence on the dimension for approximate (ε, δ)-DP, resulting in the following bounds.
Theorem 3.
Let n ∈ ℕ, ε ≤ 1, δ, β ∈ (0, 1) and F(·; s) be convex and G-Lipschitz for all s ∈ 𝒮. Assume that f has (λ, κ)-growth (Assumption 1) with κ > 1. Setting k = Θ(log n) and using Gaussian noise as in Proposition 2, Algorithm 2 is (ε, δ)-DP and, with probability at least 1 − β,

f(x_k; P) − min_{x∈X} f(x; P) ≤ Õ( (1/λ)^{1/(κ−1)} (G/√n + G√(d log(1/δ))/(nε))^{κ/(κ−1)} ),

where Õ hides logarithmic factors depending on n and 1/β.
5 Lower bounds
In this section, we develop (minimax) lower bounds for the problem of SCO with κ-growth under privacy constraints. Note that taking ε = ∞ recovers lower bounds for the unconstrained (non-private) minimax risk. For a sample space 𝒮 and collection 𝒫 of distributions over 𝒮, we define the function class F_{κ,λ}(𝒫, G, X) as the set of convex functions from X to ℝ that are G-Lipschitz and have (λ, κ)-growth (Assumption 1). We define the constrained minimax risk [6]

𝔐_n(F_{κ,λ}, ε, δ) := inf_{M ∈ 𝓜_{ε,δ}} sup_{F ∈ F_{κ,λ}, P ∈ 𝒫} E[ f(M(S^n); P) − min_{x∈X} f(x; P) ],    (6)

where 𝓜_{ε,δ} is the collection of (ε, δ)-DP mechanisms from 𝒮^n to X. When clear from context, we omit the dependency on G, X of the function class and simply write F_{κ,λ}. We also forego the dependence on δ when referring to pure-DP constraints, i.e. δ = 0. We now proceed to prove tight lower bounds for ε-DP in Section 5.1 and (ε, δ)-DP in Section 5.2.
5.1 Lower bounds for pure ε-DP
Although in Section 4 we show that the same algorithm achieves the optimal upper bounds for all values of κ, the landscape of the problem is more subtle for the lower bounds and we need to delineate two different cases to obtain tight lower bounds. We begin with κ ≥ 2, which corresponds to uniform convexity and enjoys properties that make the problem easier (e.g., closure under summation or addition of linear terms). The second case, 1 < κ < 2, corresponds to sharper growth and requires a different hard instance to satisfy the growth condition.
κ-growth with κ ≥ 2.
We begin by developing lower bounds under pure DP for κ ≥ 2.
Theorem 4 (Lower bound for ε-DP, κ ≥ 2).
Let κ ≥ 2, n, d ∈ ℕ, G, D > 0 and ε > 0. Let 𝒫 be the set of distributions on 𝒮 = {−1, 1}^d. Assume that λ ≤ G/D^{κ−1}.
The following lower bound holds:

𝔐_n(F_{κ,λ}, ε) ≥ c_κ (1/λ)^{1/(κ−1)} [ (G/√n)^{κ/(κ−1)} + (Gd/(nε))^{κ/(κ−1)} ],    (7)

where c_κ > 0 is a constant depending only on κ.
First of all, note that λ ≤ G/D^{κ−1} is not an overly-restrictive assumption. Indeed, for an arbitrary (λ, κ)-uniformly convex and G-Lipschitz function over a domain of diameter D, it always holds that λD^{κ−1} ≲ G (see the short derivation below). The assumption is thus equivalent to assuming that the class F_{κ,λ} is non-trivial. Note that when λ exceeds this threshold, no G-Lipschitz function can exhibit (λ, κ)-growth over the whole domain. We present the proof in Section C.1.1 and preview the main ideas here.
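The claimed bound on λ follows by combining growth with Lipschitz continuity: for any x ∈ X,

λ‖x − x⋆‖₂^κ ≤ f(x) − f(x⋆) ≤ G‖x − x⋆‖₂, and hence λ ≤ G/‖x − x⋆‖₂^{κ−1};

taking x at distance of order D from x⋆ gives λ ≲ G/D^{κ−1}.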
Sketch of the proof.
Our lower bound hinges on a collection of uniformly convex functions perturbed by linear terms, indexed by parameters to be chosen later. These functions are [39, Lemma 4] (λ, κ)-uniformly convex and, in turn, so is the population function f. We proceed as follows: we first prove an information-theoretic (non-private) lower bound (Theorem 8 in Appendix C.1.1), which provides the statistical term in (7). With the same family of functions, we exhibit a collection of datasets and prove by contradiction that if an estimator were to optimize below a certain error it would violate ε-DP—this yields a lower bound on ERM for our function class (Theorem 9 in Appendix C.1.1). We conclude by proving a reduction from SCO to ERM in Proposition 4. ∎
κ-growth with 1 < κ < 2.
As the construction of the hard instance is more intricate for κ < 2, we provide a one-dimensional lower bound and leave the high-dimensional case for future work. In this case we directly obtain the result with a private version of Le Cam’s method [44, 42, 6], however with a different family of functions.
The issue with the construction of the previous section is that the function does not exhibit sharp growth for κ < 2. Indeed, the added linear term shifts the minimum away from zero to a point where the function is twice differentiable, and as a result it locally behaves as a quadratic and only achieves growth κ = 2. To establish the lower bound, we consider a different sample function that has growth exactly κ on one side of its minimum. This yields the following result.
Theorem 5 (Lower bound for ε-DP, 1 < κ < 2).
Let d = 1, κ ∈ (1, 2), n ∈ ℕ, G, D > 0, λ > 0 and ε > 0. There exists a collection of distributions 𝒫 such that, whenever λ ≲ G/D^{κ−1}, it holds that

𝔐_n(F_{κ,λ}, ε) ≥ c_κ (1/λ)^{1/(κ−1)} [ (G/√n)^{κ/(κ−1)} + (G/(nε))^{κ/(κ−1)} ].    (8)
5.2 Lower bounds under approximate privacy constraints
We conclude our treatment by providing lower bounds under approximate privacy constraints, demonstrating the optimality of the risk bound of Theorem 3. We prove the result via a reduction: we show that if one solves ERM with κ-growth with error GD·α^{κ/(κ−1)}, this implies that one solves arbitrary convex ERM with error roughly GD·α. Given that a lower bound of GD√d/(nε) holds for ERM, a lower bound of GD(√d/(nε))^{κ/(κ−1)} holds for ERM with κ-growth. However, for this reduction to hold, we require that κ ≥ 2. Furthermore, we consider λ to be roughly a constant—in the case that 1/λ is too large, standard lower bounds on general convex functions apply.
Theorem 6 (Private lower bound for (ε, δ)-DP).
Let κ ≥ 2, n, d ∈ ℕ, ε ≤ 1 and δ ≥ 0. Assume that λ = Θ(1); then for any (ε, δ)-DP mechanism M, there exist a G-Lipschitz loss F and a dataset S ∈ 𝒮^n such that f̂(·; S) has κ-growth and

E[f̂(M(S); S)] − min_{x∈X} f̂(x; S) ≳ GD (√d/(nε))^{κ/(κ−1)},

up to logarithmic factors.
Theorem 6 implies that the same lower bound (up to logarithmic factors) applies to SCO via the reduction of [8, Appendix C]. Before proving the theorem, let us state (and prove in Section C.2) the following reduction: if an (ε, δ)-DP algorithm achieves excess error roughly GD·α^{κ/(κ−1)} on ERM for any function with κ-growth, there exists an (ε, δ)-DP algorithm that achieves error roughly GD·α for any convex function. We construct the latter by iteratively solving ERM problems with geometrically increasing κ-th-power regularization towards the previous iterate, to ensure the objective has κ-growth.
Proposition 3 (Solving ERM with -growth implies solving any convex ERM).
Let κ ≥ 2 and α ∈ (0, 1). Assume there exists an (ε, δ)-DP mechanism M such that for any G-Lipschitz loss F on X and dataset S such that f̂(·; S) exhibits κ-growth, the mechanism achieves excess loss

E[f̂(M(S); S)] − min_{x∈X} f̂(x; S) ≤ GD · α^{κ/(κ−1)}.

Then, we can construct a (kε, kδ)-DP mechanism M' such that for any G-Lipschitz convex loss F, the mechanism achieves excess loss

E[f̂(M'(S); S)] − min_{x∈X} f̂(x; S) ≲ GD · α,

where k is the smallest integer such that 2^{−k} ≤ α.
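To illustrate the construction, here is a hedged sketch of the reduction; `growth_erm_solver` is an assumed black box that privately minimizes the regularized objective f̂(x) + w‖x − c‖₂^κ (which has κ-growth since the regularizer is uniformly convex for κ ≥ 2), the geometric weight schedule is illustrative, and by Lemma 2.1 composing the k calls yields a (kε, kδ)-DP mechanism.

```python
import numpy as np

def convex_erm_via_growth_solver(data, growth_erm_solver, x0, D, k):
    """Sketch of Proposition 3: solve a generic convex ERM with a solver that
    only handles kappa-growth objectives by re-solving with a kappa-th-power
    regularizer centered at the previous iterate, with geometrically
    increasing weight; each call's bias is controlled by the distance the
    iterates travel, which the growth of the regularized objective bounds."""
    x = np.asarray(x0, dtype=float)
    for i in range(k):
        weight = 2.0 ** i / D  # illustrative geometric schedule
        x = growth_erm_solver(data, center=x, weight=weight)
    return x
```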
With this proposition, the proof of the theorem directly follows, as Bassily et al. [7] prove a lower bound of Ω(GD√d/(nε)) for ERM with (ε, δ)-DP.
Discussion
In this work, we develop private algorithms that adapt to the growth of the function at hand, achieving the convergence rate corresponding to the “easiest” sub-class the function belongs to. However, the picture is not yet complete. First, there are still gaps in our theoretical understanding, the most interesting one being the case of sharp growth (κ = 1). On these functions, appropriate optimization algorithms achieve linear convergence [43], raising the question: can we achieve exponentially small privacy cost in our setting? Finally, while our optimality guarantees are more fine-grained than the usual minimax results over convex functions, they are still contingent on some predetermined choice of sub-classes. Studying more general notions of adaptivity is an important future direction in private optimization.
Acknowledgments
The authors would like to thank Karan Chadha and Gary Cheng for comments on an early version of the draft.
References
- Abadi et al. [2016] M. Abadi, A. Chu, I. Goodfellow, B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In 23rd ACM Conference on Computer and Communications Security (ACM CCS), pages 308–318, 2016.
- Asi and Duchi [2020a] H. Asi and J. Duchi. Near instance-optimality in differential privacy. arXiv:2005.10630 [cs.CR], 2020a.
- Asi and Duchi [2020b] H. Asi and J. C. Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. In Advances in Neural Information Processing Systems 33, 2020b.
- Asi et al. [2021a] H. Asi, J. Duchi, A. Fallah, O. Javidbakht, and K. Talwar. Private adaptive gradient methods for convex optimization. arXiv:2106.13756 [cs.LG], 2021a.
- Asi et al. [2021b] H. Asi, V. Feldman, T. Koren, and K. Talwar. Private stochastic convex optimization: Optimal rates in geometry. arXiv:2103.01516 [cs.LG], 2021b.
- Barber and Duchi [2014] R. F. Barber and J. C. Duchi. Privacy and statistical risk: Formalisms and minimax bounds. arXiv:1412.4451 [math.ST], 2014.
- Bassily et al. [2014] R. Bassily, A. Smith, and A. Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In 55th Annual Symposium on Foundations of Computer Science, pages 464–473, 2014.
- Bassily et al. [2019] R. Bassily, V. Feldman, K. Talwar, and A. Thakurta. Private stochastic convex optimization with optimal rates. In Advances in Neural Information Processing Systems 32, 2019.
- Bassily et al. [2020] R. Bassily, V. Feldman, C. Guzmán, and K. Talwar. Stability of stochastic gradient descent on nonsmooth convex losses. In Advances in Neural Information Processing Systems 33, 2020.
- Beck and Teboulle [2003] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167–175, 2003.
- Bolte et al. [2017] J. Bolte, T. P. Nguyen, J. Peypouquet, and B. Suter. From error bounds to the complexity of first-order descent methods for convex functions. Mathematical Programming, 165:471–507, 2017.
- Bottou et al. [2018] L. Bottou, F. Curtis, and J. Nocedal. Optimization methods for large-scale learning. SIAM Review, 60(2):223–311, 2018.
- Braverman et al. [2016] M. Braverman, A. Garg, T. Ma, H. L. Nguyen, and D. P. Woodruff. Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In Proceedings of the Forty-Eighth Annual ACM Symposium on the Theory of Computing, 2016. URL https://arxiv.org/abs/1506.07216.
- Cai and Low [2015] T. Cai and M. Low. A framework for estimating convex functions. Statistica Sinica, 25:423–456, 2015.
- Chatterjee et al. [2016] S. Chatterjee, J. Duchi, J. Lafferty, and Y. Zhu. Local minimax complexity of stochastic convex optimization. In Advances in Neural Information Processing Systems 29, 2016.
- Chaudhuri et al. [2011] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069–1109, 2011.
- Duchi [2019] J. C. Duchi. Information theory and statistics. Lecture Notes for Statistics 311/EE 377, Stanford University, 2019. URL http://web.stanford.edu/class/stats311/lecture-notes.pdf. Accessed May 2019.
- Duchi and Ruan [2021] J. C. Duchi and F. Ruan. Asymptotic optimality in stochastic optimization. Annals of Statistics, 49(1):21–48, 2021.
- Duchi et al. [2013] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. In 54th Annual Symposium on Foundations of Computer Science, pages 429–438, 2013.
- Dwork and Roth [2014] C. Dwork and A. Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3 & 4):211–407, 2014.
- Dwork et al. [2006a] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology (EUROCRYPT 2006), 2006a.
- Dwork et al. [2006b] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the Third Theory of Cryptography Conference, pages 265–284, 2006b.
- Dwork et al. [2012] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel. Fairness through awareness. In Innovations in Theoretical Computer Science (ITCS), pages 214–226, 2012.
- Feldman and Vondrak [2019] V. Feldman and J. Vondrak. High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. In Proceedings of the Thirty Second Annual Conference on Computational Learning Theory, pages 1270–1279, 2019.
- Feldman et al. [2020] V. Feldman, T. Koren, and K. Talwar. Private stochastic convex optimization: Optimal rates in linear time. In Proceedings of the Fifty-Second Annual ACM Symposium on the Theory of Computing, 2020.
- Garg et al. [2014] A. Garg, T. Ma, and H. L. Nguyen. On communication cost of distributed statistical estimation and dimensionality. In Advances in Neural Information Processing Systems 27, 2014.
- Hardt and Talwar [2010] M. Hardt and K. Talwar. On the geometry of differential privacy. In Proceedings of the Forty-Second Annual ACM Symposium on the Theory of Computing, pages 705–714, 2010. URL http://arxiv.org/abs/0907.3754.
- Hashimoto et al. [2018] T. Hashimoto, M. Srivastava, H. Namkoong, and P. Liang. Fairness without demographics in repeated loss minimization. In Proceedings of the 35th International Conference on Machine Learning, 2018.
- Hazan and Kale [2011] E. Hazan and S. Kale. An optimal algorithm for stochastic strongly convex optimization. In Proceedings of the Twenty Fourth Annual Conference on Computational Learning Theory, 2011. URL http://arxiv.org/abs/1006.2425.
- Jin et al. [2019] C. Jin, P. Netrapalli, R. Ge, S. M. Kakade, and M. I. Jordan. A short note on concentration inequalities for random vectors with subgaussian norm. arXiv:1902.03736 [math.PR], 2019.
- Juditsky and Nesterov [2010] A. Juditsky and Y. Nesterov. Primal-dual subgradient methods for minimizing uniformly convex functions. URL http://hal.archives-ouvertes.fr/docs/00/50/89/33/PDF/Strong-hal.pdf, 2010.
- Juditsky and Nesterov [2014] A. Juditsky and Y. Nesterov. Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization. Stochastic Systems, 4(1):44–80, 2014.
- Levy and Duchi [2019] D. Levy and J. C. Duchi. Necessary and sufficient geometries for gradient methods. In Advances in Neural Information Processing Systems 32, 2019.
- Levy et al. [2021] D. Levy, Z. Sun, K. Amin, S. Kale, A. Kulesza, M. Mohri, and A. T. Suresh. Learning with user-level privacy. arXiv:2102.11845 [cs.LG], 2021. URL https://arxiv.org/abs/2102.11845.
- McMahan et al. [2017] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017.
- Mitzenmacher and Upfal [2005] M. Mitzenmacher and E. Upfal. Probability and computing: Randomized algorithms and probabilistic analysis. Cambridge University Press, 2005.
- Nemirovski and Yudin [1983] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, 1983.
- Nemirovski et al. [2009] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
- Nesterov [2008] Y. Nesterov. Accelerating the cubic regularization of newton’s method on convex problems. Mathematical Programming, 112(1):159–181, 2008.
- Ramdas and Singh [2013] A. Ramdas and A. Singh. Optimal rates for stochastic convex optimization under tsybakov noise condition. In Proceedings of the 30th International Conference on Machine Learning, pages 365–373, 2013.
- Smith and Thakurta [2013] A. Smith and A. Thakurta. Differentially private feature selection via stability arguments, and the robustness of the Lasso. In Proceedings of the Twenty Sixth Annual Conference on Computational Learning Theory, pages 819–850, 2013. URL http://proceedings.mlr.press/v30/Guha13.html.
- Wainwright [2019] M. J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press, 2019.
- Xu et al. [2017] Y. Xu, Q. Lin, and T. Yang. Stochastic convex optimization: Faster local growth implies faster global convergence. In Proceedings of the 34th International Conference on Machine Learning, pages 3821–3830, 2017.
- Yu [1997] B. Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pages 423–435. Springer-Verlag, 1997.
- Zhang et al. [2013] Y. Zhang, J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Information-theoretic lower bounds for distributed estimation with communication constraints. In Advances in Neural Information Processing Systems 26, 2013.
- Zhu et al. [2016] Y. Zhu, S. Chatterjee, J. Duchi, and J. Lafferty. Local minimax complexity of stochastic convex optimization. In Advances in Neural Information Processing Systems 29, 2016.
Appendix A Proofs for Section 3
A.1 Proof of Theorem 1
See 1
Let us first prove privacy. The sensitivity of the empirical gradient is 2G/n as F(·; s) is G-Lipschitz; therefore, following the privacy proof of the smooth inverse sensitivity mechanism [2, Prop. 3.2], we get that (5) is ε-DP.
Let us now prove the claim about utility. Denote and with to be chosen presently. We argue that it is enough to show that . Indeed then with probability at least we have , which implies there is such that and , hence using the Kurdyka-Łojasiewicz inequality (2)
It remains to prove that . Let and . Note that for any as is in the interior of which implies . Hence the definition of the smooth inverse sensitivity mechanism (5) implies
where the last inequality follows by choosing .
Appendix B Proofs for Section 4
We need the following result on the generalization properties of uniformly stable algorithms [24].
Theorem 7.
[24, Cor. 4.2] Assume . Let where and is -Lipschitz and -strongly convex for all . Let be the empirical minimizer. For , with probability at least
B.1 Proof of Proposition 1
See 1
We begin by proving the privacy claim. We show that each iterate is ε-DP; post-processing then implies the claim, as each sample is used in exactly one iterate. To this end, note that the regularized empirical minimizer has bounded ℓ₂-sensitivity [25], hence its ℓ₁-sensitivity is at most √d times larger. Standard properties of the Laplace mechanism [20] now imply that each iterate is ε-DP, which gives the claim about privacy.
Now we proceed to prove utility which follows similar arguments to the localization-based proof in [25]. Letting , we have:
First, by using standard properties of Laplace distributions [17], we know that for ,
which implies (as ) that with probability we have for all . Hence
where the last inequality follows since . Now we use high-probability generalization guarantees of uniformly-stable algorithms. We use Theorem 7 with to get that with probability for each
Thus,
where the last inequality follows by choosing
B.2 Proof of Proposition 2
See 2
The proof is similar to the proof of Proposition 1. For privacy, we show in the proof of Proposition 1 that the ℓ₂-sensitivity of each iterate is bounded, hence standard properties of the Gaussian mechanism [20] imply that each iterate is (ε, δ)-DP, which implies the final algorithm is (ε, δ)-DP using post-processing.
The utility proof follows the same arguments as in the proof of Proposition 1, except that for we have [30] (since is -norm-sub-Gaussian)
implying that for all with probability .
B.3 Proofs of Theorems 2 and 3
See 2
See 3
We start by proving privacy. Since each sample is used in exactly one iterate, we only need to show that each iterate is -DP, which will imply the main claim using post-processing. The privacy of each iterate follows directly from the privacy guarantees of Algorithm 1. We proceed to prove utility.
We will prove the utility claim assuming the subroutine used in Algorithm 2 satisfies the following: the output has error
for some . Note that in our setting, Proposition 1 implies that for pure-DP and similarly Proposition 2 gives the corresponding for -DP.
The proof has two stages. In the first stage (Lemma B.1), we prove that as long as x⋆ belongs to the current constraint set, the performance of the algorithm keeps improving. We show that at the end of this stage, the iterate has optimal excess loss. Then, in the second stage (Lemma B.2), we show that the iterates do not move much as the radius of the domain is sufficiently small, hence the final accumulated error along these iterations is small.
Let us begin with the first stage. Let be the largest such that . We prove that for all where we recall that and .
Lemma B.1.
For all we have
Proof.
To prove the first part, we need to show that . Let . First, note that the claim is true for . Now we assume it is correct for and prove correctness for . Note that the growth condition implies
where . Thus we have
where the second inequality holds for that satisfies . This proves the first part of the claim. For the second part, note that the definition of implies that . Therefore, as and the algorithm has error , we have
The claim now follows as . ∎
We now proceed to the second stage. The following lemma shows that the accumulated error along the iterates is small and therefore obtains the same error as (up to constant factors).
Lemma B.2.
Assume the algorithm has error . Let be the largest such that . For all we have
In particular, for we have
Proof.
Note that as , the guarantees of the algorithm give
For the second part of the claim, we have
The claim now follows as and . ∎
Assuming , Theorem 2 and Theorem 3 now follow immediately from Lemma B.2. Indeed, for the case of pure-DP (), the choice of hyper-parameters in Algorithm 2 and the guarantees of Algorithm 1 (Proposition 1) imply that , which proves Theorem 2. Similarly, Theorem 3 follows by using the guarantees of Algorithm 1 for approximate -DP, that is, Proposition 2, which gives . Note that our choice of stepsize at each iterate implies that Theorem 2 guarantees the desired utility with probability at least , hence the final utility guarantee holds with probability at least .
It remains to verify . Note that by choosing , we get that , hence . As we have (non-private error) and in our setting, we get that choosing gives the claim.
Appendix C Proofs of Section 5
In this section, we provide the proofs of our lower bounds under privacy constraints for functions with growth. This section is organized as follows: we prove in Section C.1 the lower bounds under pure-DP, and in Section C.2 the lower bounds under approximate-DP. Within Section C.1, we distinguish between κ ≥ 2 (Section C.1.1) and 1 < κ < 2 (Section C.1.2).
C.1 Proofs of Section 5.1
C.1.1 Proof of Theorem 4
As we preview in the main text, the proof combines the (non-private) information-theoretic lower bounds of Theorem 8 with the (private) lower bound on ERM of Theorem 9. Finally, we show in Proposition 4 that privately solving SCO is harder than privately solving ERM, concluding the proof of the theorem. We restate the theorem and prove these results in sequence.
See 4
Non-private lower bound
We begin the proof of Theorem 4 by proving a (non-private) information-theoretic lower bound for minimizing functions with -growth. We use the standard reduction from estimation to testing [see 33, Appendix A.1] in conjunction with Fano’s method [42, 44].
Theorem 8 (Non-private lower bound).
Let κ ≥ 2, n, d ∈ ℕ and G, D > 0. Let 𝒫 be the set of distributions on 𝒮 = {−1, 1}^d. Assume that λ ≤ G/D^{κ−1}.
The following lower bound holds:

𝔐_n(F_{κ,λ}, ε = ∞) ≥ c_κ (1/λ)^{1/(κ−1)} (G/√n)^{κ/(κ−1)}.
Proof.
For v ∈ {−1, 1}^d, let us consider the following function and distribution
Since the linear term does not affect uniform convexity, Lemma 4 in [39] guarantees that is -uniformly convex. Furthermore, for
by assumption, so the functions are -Lipschitz and satisfy Assumption 1.
Computing the separation. As , we have
Note that for , it holds that
To make sure that , we require . After choosing , we will see that this holds under the assumptions of the theorem. Let us consider the Gilbert-Varshamov packing of the hypercube: there exists such that and for all . Let us compute the separation
Note that . This yields a separation
Lower bounding the testing error. In the case of a multiple hypothesis test, we use Fano’s method and for and , Fano’s inequality guarantees
where is the Shannon mutual information between and . In our case, we have and . In the case , we choose . We handle the one-dimensional case thereafter. For this , we have
For this choice of , the assumption on ensures that the minimum remains in .
One-dimensional lower bound with Le Cam’s method. Since Fano’s method requires d ≥ 2, we finish the proof by providing a lower bound for d = 1 using Le Cam’s method. We use the same family of functions in one dimension and define
As this is the one-dimensional analog of the previous construction, the function remains G-Lipschitz and has κ-growth. A calculation yields that the separation is
where we used that . For and . Le Cam’s lemma in conjunction with Pinsker’s inequality yields that
In our case, we have for . We set , which yields the final result in one dimension
∎
Privatizing the lower bound via a packing argument
We now show how this construction yields a private lower bound via a packing argument. For , considering the ERM problem, the following private lower bound holds.
Theorem 9 (Private lower bound for ERM).
Let κ ≥ 2, n, d ∈ ℕ, G, D > 0 and ε > 0. Let 𝒫 be the set of distributions on 𝒮 = {−1, 1}^d. Assume that λ ≤ G/D^{κ−1}.
Then any ε-DP algorithm M has

sup_S E[f̂(M(S); S)] − min_{x∈X} f̂(x; S) ≥ c_κ (1/λ)^{1/(κ−1)} (Gd/(nε))^{κ/(κ−1)}.
Proof.
First, note that it is enough to prove the following lower bound
(9)
Indeed, this implies that
Let us now prove the lower bound (9). To this end, we consider the function where . We now construct datasets as follows. Let be the Gilbert-Varshamov packing of the hypercube: that is, and for all . We define . Note that and that , hence
Therefore we have
We are now ready to finish the proof using packing-based arguments [27]. Assume by contradiction there is an -DP algorithm such that
Let . Note that the sets are disjoint and that Markov’s inequality implies
Thus, the privacy constraint now gives
where the second inequality follows since . This gives a contradiction for as . For , we can repeat the same arguments with to get the desired lower bound. ∎
Reduction from ε-DP ERM to ε-DP SCO
We conclude the proof of the theorem by proving that SCO under privacy constraints is strictly harder than ERM. This is similar to Appendix C in [8], but we require it for pure-DP constraints. We make this formal here.
We have the following proposition.
Proposition 4.
Let . Assume is an -DP algorithm that for a sample with achieves with probability error
Then there is an -DP algorithm such that for any dataset has with probability ,
Proof.
Given the algorithm , we define as follows. For an input , let be the empirical distribution of . Then, proceeds as follows:
1. Sample a new dataset where ;
2. If there is a sample that was sampled more than times, return ;
3. Else, return .
We need to prove that is -DP and that it has the desired utility. For utility, note that returns at step with probability at most , since we have for every
where with , and the second inequality follows from Chernoff [36, Thm. 4.4] and . Applying a union bound over all samples, we get that step returns with probability at most as . Moreover, Algorithm fails with probability at most . Therefore, as , we have with probability at least ,
Let us now prove privacy. Assume we run algorithm on two neighboring datasets , and let be the datasets produced at step . Let denote the event that there was a sample that was used more than times (note that this does not depend on the input). Then for any measurable ,
where the first inequality follows from group privacy since and is -DP. This completes the proof.
∎
C.1.2 Proof of Theorem 5
See 5
Proof.
We follow the same reduction that we used in the proof of Theorem 8. For , we again consider with probability and otherwise. For to be defined later, we construct the following function
Computing the separation. First, let us compute the separation . We will then choose to ensure has -growth. By symmetry, assume . is increasing on and decreasing on , thus the minimum belongs to and by inspection, is attained at with value . Similarly, the minimum of is attained on with value . This yields
Let us now pick such that has -growth. Again, by symmetry we only treat the case. We have
where the last inequality is because and so for . In the second case, we have
It holds that for all iff . As a result, we set . Finally, for , we define
We wish to prove that . First, note that , whenever . Let us show that is decreasing on , which suffices to conclude the proof. We have
First, note that and ; thus it suffices to show that if has an extremum then it is negative. An extremum of this function is a point such that
which yields that
as . This calculation shows that has -growth. Finally note that the function is -Lipschitz as desired.
Lower bounding the testing error. It remains to choose the value of . Since we require a lower bound under privacy constraints, in contrast to the one-dimensional section of the proof of Theorem 8, we require the following privatized version of Le Cam’s lemma from [6]
Proposition 5.
[6, Thm. 2] Let be an -DP mechanism from . It holds that
With this result, we set and lower bound by for readability, which concludes the proof of the theorem. ∎
C.2 Proof for Section 5.2
See 3
Proof of Proposition 3.
Let us first show how to construct the mechanism . Let be such that and let be a collection of positive scalars. Set , for
Finally, define . Standard composition theorems [20] guarantee that is -DP. Let us analyze its utility; we drop the dependence of on other variables when clear from context. First, since is a constant, note that is -Lipschitz up to a numerical constant. For simplicity, we define and . It holds that is -uniformly-convex and thus the following growth condition holds
Also note that for any point , it holds that
Finally, let us bound the distance to the optimum of at the final iterate. We have
Let us put the pieces together: for to be determined later and , set . After rounds and denoting , we have
Finally, note that
It then holds that
It remains to pick to minimize the upper bound above. A calculation yields that for
Setting yields the regret bound
∎
Proof.
Consider the reduction of Proposition 3. For a parameter to be determined later, assume by contradiction that there exists an (ε, δ)-DP mechanism M such that
Setting , the condition holds and the result of Proposition 3 guarantees that there exists a numerical constant and a mechanism such that
However, Theorem 5.3 in [7] guarantees that there exists such that for any -DP mechanism , it must hold
Setting yields a contradiction and the desired lower bound by noting that consists only of log factors. ∎