Tactics for Improving Least Squares Estimation
Abstract
This paper deals with tactics for fast computation in least squares regression in high dimensions. These tactics include: (a) the majorization-minimization (MM) principle, (b) smoothing by Moreau envelopes, and (c) the proximal distance principle for constrained estimation. In iteratively reweighted least squares, the MM principle can create a surrogate function that trades case weights for adjusted responses. Reduction to ordinary least squares then permits the reuse of the Gram matrix and its Cholesky decomposition across iterations. This tactic is pertinent to estimation in L2E regression and generalized linear models. For problems such as quantile regression, non-smooth terms of an objective function can be replaced by their Moreau envelope approximations and majorized by spherical quadratics. Finally, penalized regression with distance-to-set penalties also benefits from this perspective. Our numerical experiments validate the speed and utility of deweighting and Moreau envelope approximations. Julia software implementing these experiments is available on our web page.
Keywords: MM principle, Moreau envelope, matrix decomposition, distance majorization, Sylvester equation
1 Introduction
It is fair to say that most estimation problems in classical statistics reduce to either least squares or maximum likelihood and that most of maximum likelihood estimation reduces to iteratively reweighted least squares (IRLS). Quantile regression and generalized linear model (GLM) regression are just two of the many examples of IRLS. The modern era of frequentist statistics has added sparsity, robustness, and rank restrictions to the classical inference soup. These additions complicate estimation by introducing nonsmooth terms in the objective function. In practice, smooth approximations can be substituted for nonsmooth terms without substantially impacting estimates or inference. The present paper documents the beneficial effects on model selection and solution speed of several tactics: (a) systematic application of the majorization-minimization (MM) principle, (b) substitution of ordinary least squares for weighted least squares, (c) substitution of Moreau envelopes for nonsmooth losses and penalties, and (d) application of the proximal distance principle in constrained estimation.
The speed of an optimization algorithm is determined by a trade-off between the cost per iteration and the number of iterations until convergence. In our view, this trade-off has not been adequately explored in IRLS. Recall that in IRLS, we minimize the objective

(1)  $f_k(\boldsymbol{\beta}) = \frac{1}{2}\sum_i w_{ki}(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2$

at iteration $k$. Here $\boldsymbol{y}$ is the response vector with entries $y_i$, $\boldsymbol{X}$ is the design matrix with rows $\boldsymbol{x}_i^\top$, and $\boldsymbol{\beta}$ is the vector of regression coefficients. The dominant cost in high-dimensional IRLS is solution of the normal equations $\boldsymbol{X}^\top\boldsymbol{W}_k\boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}^\top\boldsymbol{W}_k\boldsymbol{y}$, where $\boldsymbol{W}_k$ is the diagonal matrix of case weights $w_{ki}$. For dense problems, solution of the normal equations can be achieved via the Cholesky decomposition of the weighted Gram matrix $\boldsymbol{X}^\top\boldsymbol{W}_k\boldsymbol{X}$ or the QR decomposition of the weighted design matrix $\boldsymbol{W}_k^{1/2}\boldsymbol{X}$.
Unfortunately, both decompositions must be recalculated from scratch each time the weight matrix changes. With $n$ cases and $p$ regression coefficients, the complexity of computing the Gram matrix, weighted or unweighted, is $O(np^2)$. Fortunately, matrix multiplication is super fast on modern computers with access to the highly optimized BLAS routines. On large-scale problems with $p$ comparable to $n$, Cholesky and QR decompositions execute more slowly than matrix multiplications even though the former has computational complexity $O(p^3)$ and the latter has computational complexity $O(np^2)$. If it takes $T$ iterations until convergence, these costs must be multiplied by $T$. In repeated unweighted regressions, a single QR decomposition or a single paired Gram matrix and Cholesky decomposition suffices, and the per-iteration burden drops to the $O(np + p^2)$ cost of forming the right-hand side and solving two triangular systems. On the other hand, the process of deweighting may well cause $T$ to increase. The downside of the Cholesky decomposition approach is that the condition number of $\boldsymbol{X}^\top\boldsymbol{X}$ is the square of the condition number of $\boldsymbol{X}$. In our view this danger is overrated. Adding a small ridge penalty overcomes it and is still consistent with rapid extraction of the Cholesky decomposition. In practice, code depending on the Cholesky decomposition is faster than code depending on the QR decomposition, and our numerical experiments take advantage of this fact.
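To make the recycling tactic concrete, the following minimal Julia sketch (our own illustration with invented data and helper names such as weighted_step and recycled_step, not the released package code) contrasts a weighted least squares step, which must refactorize the weighted Gram matrix, with an unweighted step that reuses a single Cholesky factorization together with shifted responses.

```julia
using LinearAlgebra

# Weighted least squares refactorizes X' W X at every iteration: O(n p^2 + p^3).
function weighted_step(X, y, w)
    G = Symmetric(X' * (w .* X))
    return cholesky(G) \ (X' * (w .* y))
end

# The recycled step reuses one factorization: O(n p) plus two O(p^2) triangular solves.
recycled_step(F::Cholesky, X, ytilde) = F \ (X' * ytilde)

n, p = 5_000, 400
X, y = randn(n, p), randn(n)
w = rand(n)                                     # illustrative case weights in [0, 1)
F = cholesky(Symmetric(X' * X))                 # computed once, reused every iteration

b1 = weighted_step(X, y, w)
ytilde = w .* y .+ (1 .- w) .* (X * b1)         # deweighted (shifted) responses at b1
b2 = recycled_step(F, X, ytilde)
```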
The majorization-minimization (MM) principle (Hunter and Lange, 2004; Lange, 2016; Lange et al., 2000) is our engine for leveraging Moreau envelopes and converting weighted to unweighted least squares. In minimization, MM iteratively substitutes a surrogate function $g(\boldsymbol{\theta} \mid \boldsymbol{\theta}_k)$ that majorizes a loss $f(\boldsymbol{\theta})$ around the current iterate $\boldsymbol{\theta}_k$. Majorization is defined by the tangency condition $g(\boldsymbol{\theta}_k \mid \boldsymbol{\theta}_k) = f(\boldsymbol{\theta}_k)$ and the domination condition $g(\boldsymbol{\theta} \mid \boldsymbol{\theta}_k) \ge f(\boldsymbol{\theta})$ for all $\boldsymbol{\theta}$. The surrogate balances two goals, hugging the objective tightly and simplifying minimization. Minimizing the surrogate produces the next iterate $\boldsymbol{\theta}_{k+1}$ and drives the objective downhill owing to the conditions
$f(\boldsymbol{\theta}_{k+1}) \le g(\boldsymbol{\theta}_{k+1} \mid \boldsymbol{\theta}_k) \le g(\boldsymbol{\theta}_k \mid \boldsymbol{\theta}_k) = f(\boldsymbol{\theta}_k).$
In maximization, the surrogate minorizes the objective and instead must be maximized. The tangency condition remains intact, but the domination condition is now reversed. The celebrated EM (expectation-maximization) principle for maximum likelihood estimation with missing data (McLachlan and Krishnan, 2007) is a special case of minorization-maximization. In the EM setting Jensen’s inequality supplies the surrogate as the expectation of the complete data loglikelihood conditional on the observed data.
In practice, many surrogate functions are strictly convex quadratics. When this is the case, minimizing the surrogate is achieved by the Newton update

(2)  $\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k - d^2g(\boldsymbol{\theta}_k \mid \boldsymbol{\theta}_k)^{-1}\nabla g(\boldsymbol{\theta}_k \mid \boldsymbol{\theta}_k) = \boldsymbol{\theta}_k - d^2g(\boldsymbol{\theta}_k \mid \boldsymbol{\theta}_k)^{-1}\nabla f(\boldsymbol{\theta}_k).$

The second form of the update reflects the tangency condition $\nabla g(\boldsymbol{\theta}_k \mid \boldsymbol{\theta}_k) = \nabla f(\boldsymbol{\theta}_k)$. In this version of gradient descent, the step length is exactly $1$. The local rate of convergence is determined by how well the distorted curvature matrix $d^2g(\boldsymbol{\theta}_k \mid \boldsymbol{\theta}_k)$ approximates the actual curvature matrix $d^2f(\boldsymbol{\theta}_k)$.
The Moreau envelope of a function $f(\boldsymbol{x})$ from $\mathbb{R}^p$ to $\mathbb{R} \cup \{+\infty\}$ is defined by
$M_{\mu f}(\boldsymbol{y}) = \inf_{\boldsymbol{x}}\Big[f(\boldsymbol{x}) + \frac{1}{2\mu}\|\boldsymbol{x} - \boldsymbol{y}\|^2\Big]$
for a smoothing constant $\mu > 0$. When $f(\boldsymbol{x})$ is convex, the infimum is attained, and its Moreau envelope is convex and continuously differentiable (Nesterov, 2005). However, $M_{\mu f}(\boldsymbol{y})$ also exists for many nonconvex functions (Polson et al., 2015). In the absence of convexity, the proximal operator

(3)  $\operatorname{prox}_{\mu f}(\boldsymbol{y}) = \operatorname*{argmin}_{\boldsymbol{x}}\Big[f(\boldsymbol{x}) + \frac{1}{2\mu}\|\boldsymbol{x} - \boldsymbol{y}\|^2\Big]$

can map to a set rather than a single point. For a proxable function $f(\boldsymbol{x})$, the definition of the Moreau envelope implies the quadratic majorization

(4)  $M_{\mu f}(\boldsymbol{x}) \le f(\boldsymbol{z}) + \frac{1}{2\mu}\|\boldsymbol{x} - \boldsymbol{z}\|^2, \qquad \boldsymbol{z} = \operatorname{prox}_{\mu f}(\boldsymbol{y}),$

at $\boldsymbol{x} = \boldsymbol{y}$ for any $\boldsymbol{y}$. Generally, $M_{\mu f}(\boldsymbol{x}) \le f(\boldsymbol{x})$, and $M_{\mu f}(\boldsymbol{x}) \to f(\boldsymbol{x})$ as $\mu \to 0$ under mild conditions. In particular, if $f(\boldsymbol{x})$ is Lipschitz with constant $L$, then
$0 \le f(\boldsymbol{x}) - M_{\mu f}(\boldsymbol{x}) \le \frac{L^2\mu}{2}.$
Despite the elementary nature of the Moreau majorization (4), its value in regression has largely gone unnoticed.
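As a small numerical illustration (with our own helper names moreau_abs and prox_abs and an arbitrary anchor point and smoothing constant), the following Julia snippet evaluates the Moreau envelope and proximal map of the absolute value function and checks the tangency and domination properties of the majorization (4).

```julia
# Moreau envelope of |x| is Huber's function; its prox is soft thresholding.
moreau_abs(x, mu) = abs(x) <= mu ? x^2 / (2mu) : abs(x) - mu / 2
prox_abs(x, mu)   = sign(x) * max(abs(x) - mu, 0.0)

mu, anchor = 0.5, 1.3
z = prox_abs(anchor, mu)
surrogate(x) = abs(z) + (x - z)^2 / (2mu)      # right-hand side of majorization (4)

xs = range(-3, 3; length = 601)
@assert all(surrogate(x) >= moreau_abs(x, mu) - 1e-12 for x in xs)         # domination
@assert isapprox(surrogate(anchor), moreau_abs(anchor, mu); atol = 1e-12)  # tangency
```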
In Section 2.5 we rederive the majorization of Heiser (1987) replacing the weighted least squares criterion (1) with the unweighted surrogate

(5)  $g(\boldsymbol{\theta} \mid \boldsymbol{\theta}_k) = \frac{1}{2}\sum_i [\tilde{y}_{ki} - r_i(\boldsymbol{\theta})]^2 + c_k, \qquad \tilde{y}_{ki} = w_{ki}y_i + (1 - w_{ki})r_i(\boldsymbol{\theta}_k).$

Here the constant $c_k$ is irrelevant in the subsequent minimization. In our examples the regression functions $r_i(\boldsymbol{\theta})$ depend linearly on the parameter vector $\boldsymbol{\theta}$. The majorization (5) removes the weights and shifts the responses. It therefore allows the Cholesky decomposition of $\boldsymbol{X}^\top\boldsymbol{X}$ to be recycled. Derivation of the surrogate requires the assumption $w_{ki} \le 1$. Because rescaling the weights does not alter the minimum point, one can divide by $\max_i w_{ki}$ at iteration $k$ to ensure $w_{ki} \le 1$.
To render this discussion more concrete, consider the problem of least absolute deviation (LAD) regression with objective
$f(\boldsymbol{\beta}) = \sum_i |y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}|.$
The nonsmooth absolute value function has Moreau envelope equal to Huber's function
$M_{\mu|\cdot|}(r) = \begin{cases} \dfrac{r^2}{2\mu}, & |r| \le \mu \\ |r| - \dfrac{\mu}{2}, & |r| > \mu \end{cases}$
with proximal map

(6)  $\operatorname{prox}_{\mu|\cdot|}(r) = \operatorname{sign}(r)\max\{|r| - \mu, 0\}.$
The loose majorization (4) generates the least squares surrogate

(7)  $g(\boldsymbol{\beta} \mid \boldsymbol{\beta}_k) = \frac{1}{2\mu}\sum_i (\tilde{y}_{ki} - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2 + c_k,$

where $\tilde{y}_{ki} = y_i - \operatorname{prox}_{\mu|\cdot|}(r_{ki})$ and $r_{ki} = y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}_k$. Minimization of the surrogate invokes the same Cholesky decomposition at every iteration. Algorithm 1 summarizes this simple but effective algorithm for (smoothed) LAD regression.
Alternatively, one can employ the best quadratic majorization (de Leeuw and Lange, 2009)

(8)  $M_{\mu|\cdot|}(r) \le \frac{w_k}{2}r^2 + c_k, \qquad w_k = \frac{1}{\max\{|r_k|, \mu\}},$

anchored at the current residual $r_k$. This majorization translates into a weighted sum of squares plus an irrelevant constant. The resulting weighted sum of squares can be further deweighted by appealing to the surrogate (5). Appendix A shows that these two successive majorizations lead exactly to the Moreau envelope surrogate (7).
Although the primary purpose of this paper is expository, we would like to highlight several likely new contributions. The first is our emphasis on the Moreau envelope majorization noted in equations (3) and (4) and recognized implicitly in the convex case by Chen et al. (2012) and explicitly in general in the book (Lange, 2016). Among its many virtues, this majorization converts smoothed quantile regression into ordinary least squares. The Moreau envelopes of various functions, such as the $\ell_0$-norm, the matrix rank function, and various set indicator functions, are invaluable in inducing sparse and low-rank structures. The spherical majorization of the convolution-smoothed check function in Section 2.3 and the subsequent MM algorithm is new to our knowledge. Our emphasis on the deweighting majorization of Heiser (1987) and Kiers (1997) may help revive this important tactic. We also revisit the quadratic upper bound majorization of Böhning and Lindsay (1988). Application of this principle has long been a staple of logistic and multinomial regression (Böhning, 1992). Inspired by the paper (Xu and Lange, 2022), we demonstrate how to reduce the normal equations of penalized multinomial regression to a Sylvester equation (Bartels and Stewart, 1972). Finally, in addition to mentioning existing theory guaranteeing the convergence of the algorithms studied here, we also construct a new convergence proof based on considerations of fixed points and monotone operators.
As a takeaway message from this paper, we hope readers will better appreciate the potential in high-dimensional estimation of combining the MM principle, smoothing, and numerical tactics such as recycling matrix decompositions and restarted Nesterov acceleration. These advances are ultimately as important as faster hardware. Together the best hardware and the best algorithms will make future statistical analysis even more magical.
2 Methods
In this section we first briefly review majorizations pertinent to least squares. The material covered here is largely standard. Good references are (Bauschke and Combettes, 2011) and (Beck, 2017). After a few preliminaries, we take up: (a) conversion of weighted to unweighted least squares, (b) the quadratic bound principle, (c) integration with the proximal distance method, (d) restarted Nesterov acceleration, and (e) convergence theory.
2.1 Notation
Here are the notational conventions used throughout this article. All vectors and matrices appear in boldface. An entry of a vector or matrix is denoted by the corresponding subscripted lower-case letter, for instance $x_i$ or $m_{ij}$. All entries of the vector $\boldsymbol{1}$ equal $1$; $\boldsymbol{I}$ indicates an identity matrix. The ⊤ superscript indicates a vector or matrix transpose. The symbols $\{\boldsymbol{x}_k\}$ and $\{\boldsymbol{M}_k\}$ denote a sequence of vectors and a sequence of matrices with entries $x_{ki}$ and $m_{kij}$. The Euclidean norm and $\ell_1$ norm of a vector $\boldsymbol{x}$ are denoted by $\|\boldsymbol{x}\|$ and $\|\boldsymbol{x}\|_1$. The spectral, Frobenius, and nuclear norms of a matrix $\boldsymbol{M}$ are denoted by $\|\boldsymbol{M}\|$, $\|\boldsymbol{M}\|_F$, and $\|\boldsymbol{M}\|_*$. For a smooth real-valued function $f(\boldsymbol{x})$, we write its first differential (row vector of partial derivatives) as $df(\boldsymbol{x})$. The gradient $\nabla f(\boldsymbol{x})$ is the transpose of $df(\boldsymbol{x})$. The directional derivative of $f(\boldsymbol{x})$ in the direction $\boldsymbol{v}$ is written $d_{\boldsymbol{v}}f(\boldsymbol{x})$. When the function is differentiable, $d_{\boldsymbol{v}}f(\boldsymbol{x}) = df(\boldsymbol{x})\boldsymbol{v}$. The second differential (Hessian matrix) of $f(\boldsymbol{x})$ is $d^2f(\boldsymbol{x})$.
2.2 M Estimation
In M estimation (de Menezes et al., 2021; Huber and Dutter, 1974; Huber and Ronchetti, 2011) based on residuals, one minimizes a function
$f(\boldsymbol{\theta}) = \sum_i \rho[y_i - r_i(\boldsymbol{\theta})]$
of the residuals with $\rho(r)$ bounded below. We have already dealt with the choice $\rho(r) = |r|$ in LAD regression. In general, one can replace $\rho(r)$ by its Moreau envelope, leverage the Moreau majorization, and immediately invoke ordinary least squares. Many authors advocate estimating $\boldsymbol{\theta}$ by minimizing a function
$f(\boldsymbol{\theta}) = \sum_i \rho[(y_i - r_i(\boldsymbol{\theta}))^2]$
of the squared residuals. Conveniently, this functional form is consistent with loglikelihoods under an elliptically symmetric distribution (Lange and Sinsheimer, 1993). If $\rho(s)$ is increasing, differentiable, and concave, then
$\rho(s) \le \rho(s_k) + \rho'(s_k)(s - s_k)$
for all $s$. This plays out as the majorization
$f(\boldsymbol{\theta}) \le \sum_i w_{ki}[y_i - r_i(\boldsymbol{\theta})]^2 + c_k$
with weights $w_{ki} = \rho'[(y_i - r_i(\boldsymbol{\theta}_k))^2]$ and an irrelevant constant $c_k$ that depends on $\boldsymbol{\theta}_k$ but not on $\boldsymbol{\theta}$. This surrogate can be minimized by deweighting and invoking ordinary least squares. The Geman-McClure choice $\rho(s) = \frac{s}{1+s}$ and the Cauchy choice $\rho(s) = \ln(1+s)$ for $\rho(s)$ show that the intersection of the imposed conditions is non-empty.
A prime example (Chi and Chi, 2022; Heng et al., 2023) of robust M estimation revolves around the L2E objective (Scott, 2001; Liu et al., 2023)
$f(\boldsymbol{\beta}, \tau) = \frac{\tau}{2\sqrt{\pi}} - \sqrt{\frac{2}{\pi}}\,\frac{\tau}{n}\sum_i \exp\Big[-\frac{\tau^2}{2}(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2\Big].$
This loss is particularly attractive because it incorporates a precision parameter $\tau$ as well as the regression coefficient vector $\boldsymbol{\beta}$. Because $-e^{-s}$ is concave in $s$, one can construct the MM surrogate
$g(\boldsymbol{\beta} \mid \boldsymbol{\beta}_k) = \sqrt{\frac{2}{\pi}}\,\frac{\tau^3}{2n}\sum_i w_{ki}(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2 + c_k, \qquad w_{ki} = \exp\Big[-\frac{\tau^2}{2}(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}_k)^2\Big],$
at iteration $k$ with $\tau$ fixed. The resulting weighted least squares surrogate can be further deweighted as explained in Section 2.5. Under deweighting it morphs into the surrogate
$h(\boldsymbol{\beta} \mid \boldsymbol{\beta}_k) = \sqrt{\frac{2}{\pi}}\,\frac{\tau^3}{2n}\sum_i (\tilde{y}_{ki} - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2 + d_k,$
where $\tilde{y}_{ki} = w_{ki}y_i + (1 - w_{ki})\boldsymbol{x}_i^\top\boldsymbol{\beta}_k$ and $d_k$ is an irrelevant constant. This ordinary least squares surrogate allows one to recycle a single Cholesky decomposition across all iterations. For $\boldsymbol{\beta}$ fixed, one can update the precision parameter $\tau$ by gradient descent (Heng et al., 2023) or an approximate Newton's method (Liu et al., 2023). Both methods rely on backtracking.
2.3 Quantile Regression
Quantile regression (Koenker and Bassett, 1978) is a special case of M estimation with the objective $f(\boldsymbol{\beta}) = \sum_i \rho_q(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta})$, where

(9)  $\rho_q(r) = \begin{cases} qr, & r \ge 0 \\ (q-1)r, & r < 0 \end{cases}$

is called the check function. The argument $r$ here suggests a residual. When $q = \frac{1}{2}$, quantile regression reduces to LAD regression. The check function is not differentiable at $r = 0$. To smooth the objective (9), one can take either its Moreau envelope or the Moreau envelope of just the nonsmooth part $\frac{1}{2}|r|$ in the decomposition $\rho_q(r) = \frac{1}{2}|r| + (q - \frac{1}{2})r$. In the former case, elementary calculus shows that the minimum is attained at
$\operatorname{prox}_{\mu\rho_q}(r) = \begin{cases} r - \mu q, & r > \mu q \\ 0, & -\mu(1-q) \le r \le \mu q \\ r + \mu(1-q), & r < -\mu(1-q). \end{cases}$
It follows that
$M_{\mu\rho_q}(r) = \begin{cases} qr - \dfrac{\mu q^2}{2}, & r > \mu q \\ \dfrac{r^2}{2\mu}, & -\mu(1-q) \le r \le \mu q \\ (q-1)r - \dfrac{\mu(1-q)^2}{2}, & r < -\mu(1-q). \end{cases}$
A benefit of replacing $\rho_q(r)$ by $M_{\mu\rho_q}(r)$ is that, by definition, $M_{\mu\rho_q}(r)$ satisfies the Moreau majorization (4) at the anchor point $r_k$. This majorization can be improved, but it is convenient because it induces the uniform majorization

(10)  $\sum_i M_{\mu\rho_q}(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}) \le \frac{1}{2\mu}\sum_i (\tilde{y}_{ki} - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2 + c_k, \qquad \tilde{y}_{ki} = y_i - \operatorname{prox}_{\mu\rho_q}(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}_k),$

which is an unweighted sum of squares in the residuals $\tilde{y}_{ki} - \boldsymbol{x}_i^\top\boldsymbol{\beta}$.
Convolution approximation offers yet another avenue for majorization (Fernandes et al., 2021; He et al., 2023; Kaplan and Sun, 2017; Tan et al., 2022; Whang, 2006). If we select a well-behaved kernel $K_h$ with bounded support and total mass $1$, then the convolution
$\ell_h(r) = \int \rho_q(r - t)K_h(t)\,dt$
approximates $\rho_q(r)$ as the bandwidth $h \to 0$. Given $\rho_q(r)$ and the symmetric uniform kernel $K_h(t) = \frac{1}{2h}1_{\{|t| \le h\}}$, the necessary integral
$\ell_h(r) = \Big(q - \frac{1}{2}\Big)r + \begin{cases} \dfrac{r^2 + h^2}{4h}, & |r| \le h \\ \dfrac{|r|}{2}, & |r| > h \end{cases}$
is straightforward to calculate. This is just the Moreau envelope of the nonsmooth part $\frac{1}{2}|r|$ (with smoothing constant $2h$) plus the linear term $(q - \frac{1}{2})r$, shifted upward by the constant $\frac{h}{4}$. This approximate loss is now optimized via the Moreau majorization

(11)  $\sum_i \ell_h(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}) \le \frac{1}{4h}\sum_i (\tilde{y}_{ki} - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2 + c_k,$

where $c_k$ is an irrelevant constant and $\tilde{y}_{ki}$ denotes the shifted response determined by the proximal map at the current residual $y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}_k$. Although our numerical experiments in quantile estimation feature the convolution-smoothed quantile loss (11), the choice (10) works equally well. What is more important in practice is the transformation of quantile regression into a sequence of ordinary least squares problems with shifted responses.
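For readers who prefer formulas in code, here is a hedged Julia sketch (our own parameterization, based on the Moreau-envelope smoothing (10) rather than the convolution version) of the check-function proximal map and the shifted response it produces for a single case.

```julia
# Proximal map of the check function rho_q with smoothing constant mu.
function prox_check(r, q, mu)
    r >  mu * q       && return r - mu * q
    r < -mu * (1 - q) && return r + mu * (1 - q)
    return 0.0
end

# Shifted response for one case: ytilde_i = y_i minus the prox of the current residual.
shifted_response(y, xb, q, mu) = y - prox_check(y - xb, q, mu)
```

With these shifted responses in hand, the MM loop is identical to the smoothed LAD sketch of Section 1: one Cholesky factorization of the Gram matrix serves every iteration.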
2.4 Sparsity and Low Matrix Rank
The $\ell_0$-norm, the rank function, and their corresponding constraint sets are helpful in inducing sparsity and low matrix rank. Directly imposing these nonconvex penalties or constraint sets can lead to intractable optimization problems. A common middle ground is to use convex surrogates, such as the $\ell_1$-norm and the nuclear norm, but these induce shrinkage bias and a surplus of false positives in support recovery. The Moreau envelopes of these nonconvex penalties offer an attractive alternative that leverages differentiability and majorization.
The $\ell_0$ norm $\|\boldsymbol{x}\|_0$, which counts the nonzero entries of $\boldsymbol{x}$, has Moreau envelope (Beck, 2017)
$M_{\mu\|\cdot\|_0}(\boldsymbol{x}) = \sum_j \min\Big\{\frac{x_j^2}{2\mu}, 1\Big\}.$
The corresponding proximal map sends $x_j$ to itself when $|x_j| > \sqrt{2\mu}$ and to $0$ when $|x_j| \le \sqrt{2\mu}$. The Moreau envelope of the indicator of the sparsity set $S_s = \{\boldsymbol{x} : \|\boldsymbol{x}\|_0 \le s\}$ is determined by projection onto $S_s$. Elementary arguments show that
$M_{\mu\delta_{S_s}}(\boldsymbol{x}) = \frac{1}{2\mu}\sum_{j > s} x_{(j)}^2,$
where the $x_{(j)}$ for $j > s$ denote the entries of $\boldsymbol{x}$ smallest in magnitude (Beck, 2017). The Moreau majorization defined by equation (4) is available in both examples. In general, the Moreau envelope of the indicator $\delta_S(\boldsymbol{x})$ of a set $S$ is $\frac{1}{2\mu}\operatorname{dist}(\boldsymbol{x}, S)^2$, where $\operatorname{dist}(\boldsymbol{x}, S)$ is the Euclidean distance from $\boldsymbol{x}$ to $S$. Other nonconvex sparsity inducing penalties, such as the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010), also have straightforward Moreau envelopes (Polson et al., 2015).
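The following Julia snippet (helper names are ours) illustrates the two sparsity operators just described: hard thresholding, which implements the proximal map of the $\ell_0$ penalty, and projection onto the sparsity set $S_s$, which keeps the $s$ largest entries in magnitude.

```julia
hard_threshold(x, mu) = map(xj -> abs(xj) > sqrt(2mu) ? xj : zero(xj), x)

function project_sparsity(x, s)            # keep the s largest entries in magnitude
    keep = partialsortperm(abs.(x), 1:s; rev = true)
    out = zero(x)
    out[keep] = x[keep]
    return out
end

x = [3.0, -0.2, 0.7, -2.5, 0.1]
hard_threshold(x, 0.5)       # zeros out entries with |x_j| <= 1
project_sparsity(x, 2)       # keeps 3.0 and -2.5
```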
For matrices, low rank is often preferred to sparsity in estimation and inference. Let $R_r$ denote the set of matrices of rank $r$ or less. The Moreau envelope of the indicator of $R_r$ turns out to be $\frac{1}{2\mu}\sum_{j > r}\sigma_j^2$, where the $\sigma_j$ are the singular values of the matrix $\boldsymbol{M}$ in descending order. If $\boldsymbol{M}$ has singular value decomposition $\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^\top$, then the corresponding proximal map takes $\boldsymbol{M}$ to $\boldsymbol{U}\boldsymbol{\Sigma}_r\boldsymbol{V}^\top$, where $\boldsymbol{\Sigma}_r$ sets the singular values of $\boldsymbol{\Sigma}$ smaller than the $r$th largest to $0$. This result is just the content of the Eckart-Young theorem. The function $\operatorname{rank}(\boldsymbol{M})$ has Moreau envelope (Hiriart-Urruty and Le, 2013)
$M_{\mu\operatorname{rank}}(\boldsymbol{M}) = \sum_j \min\Big\{\frac{\sigma_j^2}{2\mu}, 1\Big\}.$
The corresponding proximal map retains $\sigma_j$ when $\sigma_j > \sqrt{2\mu}$ and sets it to $0$ otherwise. In other words, the proximal operator hard thresholds the singular values. In both cases the Moreau majorization (4) is available.
The nuclear norm $\|\boldsymbol{M}\|_*$ (Recht et al., 2010) is a widely adopted convex penalty in low-rank estimation. Its proximal map soft thresholds all singular values $\sigma_j$ by subtracting $\mu$ from each, then replacing negative values by $0$, and finally setting $\operatorname{prox}_{\mu\|\cdot\|_*}(\boldsymbol{M}) = \boldsymbol{U}\boldsymbol{\Sigma}_+\boldsymbol{V}^\top$, where $\boldsymbol{\Sigma}_+$ contains the thresholded values. The Moreau envelope $M_{\mu\|\cdot\|_*}(\boldsymbol{M})$ serves as a smooth approximation to $\|\boldsymbol{M}\|_*$ and benefits from the Moreau majorization (4).
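A brief Julia sketch (our own function names, built on the standard SVD routines) of the corresponding singular value operations follows: the rank-$r$ projection of the Eckart-Young theorem and the nuclear norm proximal map that soft thresholds the singular values.

```julia
using LinearAlgebra

function project_rank(M, r)                       # Eckart-Young projection onto rank r
    U, s, V = svd(M)
    return U[:, 1:r] * Diagonal(s[1:r]) * V[:, 1:r]'
end

function prox_nuclear(M, mu)                      # soft threshold the singular values
    U, s, V = svd(M)
    return U * Diagonal(max.(s .- mu, 0.0)) * V'
end

M = randn(30, 20)
rank(project_rank(M, 3))            # returns 3
sum(svdvals(prox_nuclear(M, 1.0)))  # nuclear norm after shrinkage
```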
2.5 Deweighting Weighted Least Squares
To remove the weights and shift the responses in least squares, Heiser (1987) and Kiers (1997) suggest a crucial deweighting majorization. At iteration $k$, assume that the weights satisfy $0 \le w_{ki} \le 1$ and that $r_{ki} = r_i(\boldsymbol{\theta}_k)$ abbreviates the mean of case $i$. Then we have
$\frac{1}{2}\sum_i w_{ki}[y_i - r_i(\boldsymbol{\theta})]^2 \le \frac{1}{2}\sum_i [\tilde{y}_{ki} - r_i(\boldsymbol{\theta})]^2 + c_k, \qquad \tilde{y}_{ki} = w_{ki}y_i + (1 - w_{ki})r_{ki},$
where the constant $c_k = \frac{1}{2}\sum_i w_{ki}(1 - w_{ki})(y_i - r_{ki})^2$ does not depend on the current $\boldsymbol{\theta}$. Equality holds, as it should, when $\boldsymbol{\theta} = \boldsymbol{\theta}_k$.
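The next snippet (toy data and constants are ours) numerically verifies the deweighting majorization, including the explicit form of the constant $c_k$ given above: the shifted-response surrogate matches the weighted criterion at the current iterate and dominates it elsewhere.

```julia
using LinearAlgebra

n, p = 200, 5
X, y = randn(n, p), randn(n)
w = rand(n)                                   # weights in [0, 1)
beta_k = randn(p)                             # current iterate
ytilde = w .* y .+ (1 .- w) .* (X * beta_k)   # shifted responses

weighted(beta)  = 0.5 * sum(w .* (y .- X * beta) .^ 2)
surrogate(beta) = 0.5 * sum((ytilde .- X * beta) .^ 2) +
                  0.5 * sum(w .* (1 .- w) .* (y .- X * beta_k) .^ 2)

dominates(b) = surrogate(b) >= weighted(b) - 1e-10
@assert isapprox(surrogate(beta_k), weighted(beta_k); rtol = 1e-10)   # tangency
@assert all(dominates(beta_k .+ randn(p)) for _ in 1:100)             # domination
```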
2.6 Quadratic Upper and Lower Bound Principles
The quadratic upper bound principle applies to functions with bounded curvature (Böhning and Lindsay, 1988). Given that $f(\boldsymbol{\theta})$ is twice differentiable, we seek a matrix $\boldsymbol{B}$ satisfying $\boldsymbol{B} \succeq d^2f(\boldsymbol{\theta})$ in the sense that $\boldsymbol{B} - d^2f(\boldsymbol{\theta})$ is positive semidefinite for all $\boldsymbol{\theta}$ and $\boldsymbol{B}$ is positive definite. The quadratic bound principle then amounts to the majorization
$f(\boldsymbol{\theta}) \le f(\boldsymbol{\theta}_k) + df(\boldsymbol{\theta}_k)(\boldsymbol{\theta} - \boldsymbol{\theta}_k) + \frac{1}{2}(\boldsymbol{\theta} - \boldsymbol{\theta}_k)^\top\boldsymbol{B}(\boldsymbol{\theta} - \boldsymbol{\theta}_k).$
The quadratic surrogate is exactly minimized by the update (2) with $d^2g(\boldsymbol{\theta}_k \mid \boldsymbol{\theta}_k)$ replaced by $\boldsymbol{B}$. The quadratic lower bound principle involves minorization and subsequent maximization.
The quadratic upper bound principle applies to logistic regression and its generalization to multinomial regression (Böhning, 1992; Krishnapuram et al., 2005). In multinomial regression, items are drawn from $c$ categories numbered $1$ through $c$. Logistic regression is the special case $c = 2$. If $y_i$ denotes the response of sample $i$, and $\boldsymbol{x}_i$ denotes a corresponding predictor vector, then the probability that $y_i = j$ is
$\pi_j(\boldsymbol{x}_i, \boldsymbol{\Theta}) = \frac{e^{\boldsymbol{x}_i^\top\boldsymbol{\theta}_j}}{\sum_{l=1}^{c} e^{\boldsymbol{x}_i^\top\boldsymbol{\theta}_l}},$
where $\boldsymbol{\theta}_c = \boldsymbol{0}$ for identifiability. To estimate the regression coefficient vectors $\boldsymbol{\theta}_1, \ldots, \boldsymbol{\theta}_{c-1}$ over $n$ independent trials, we must find a quadratic lower bound for the loglikelihood

(12)  $L(\boldsymbol{\Theta}) = \sum_{i=1}^{n}\Big[\boldsymbol{x}_i^\top\boldsymbol{\theta}_{y_i} - \ln\Big(\sum_{l=1}^{c} e^{\boldsymbol{x}_i^\top\boldsymbol{\theta}_l}\Big)\Big].$

Here $\boldsymbol{\Theta}$ is the matrix with $j$th column $\boldsymbol{\theta}_j$. It suffices to find a quadratic lower bound for each sample, so we temporarily drop the subscript $i$ to simplify notation.
The loglikelihood of a single sample has directional derivative
$d_{\boldsymbol{V}}L(\boldsymbol{\Theta}) = (\boldsymbol{y} - \boldsymbol{\pi})^\top\boldsymbol{V}^\top\boldsymbol{x}$
for the perturbation $\boldsymbol{V}$ of $\boldsymbol{\Theta}$ with $j$th column $\boldsymbol{v}_j$. In the Kronecker product representation of the differential, $\boldsymbol{y}$ is now an indicator vector of length $c-1$ with a $1$ in entry $j$ if the sample belongs to category $j$, and $\boldsymbol{\pi}$ is the weight vector with value $\pi_j$ in entry $j$. The quadratic form associated with the second differential is
$d^2L(\boldsymbol{\Theta})[\boldsymbol{V}, \boldsymbol{V}] = -\boldsymbol{x}^\top\boldsymbol{V}\big[\operatorname{diag}(\boldsymbol{\pi}) - \boldsymbol{\pi}\boldsymbol{\pi}^\top\big]\boldsymbol{V}^\top\boldsymbol{x}.$
Across all cases Böhning (1992) derives the lower bound
$d^2L(\boldsymbol{\Theta}) \succeq -\boldsymbol{B}, \qquad \boldsymbol{B} = \frac{1}{2}\Big(\boldsymbol{I}_{c-1} - \frac{1}{c}\boldsymbol{1}\boldsymbol{1}^\top\Big)\otimes\boldsymbol{X}^\top\boldsymbol{X},$
where the matrix $\frac{1}{2}(\boldsymbol{I}_{c-1} - c^{-1}\boldsymbol{1}\boldsymbol{1}^\top)$ has the explicit inverse $2(\boldsymbol{I}_{c-1} + \boldsymbol{1}\boldsymbol{1}^\top)$. The full-sample multinomial loglikelihood is then minorized by the quadratic

(13)  $g(\boldsymbol{\Theta} \mid \boldsymbol{\Theta}_k) = L(\boldsymbol{\Theta}_k) + dL(\boldsymbol{\Theta}_k)\operatorname{vec}(\boldsymbol{\Theta} - \boldsymbol{\Theta}_k) - \frac{1}{2}\operatorname{vec}(\boldsymbol{\Theta} - \boldsymbol{\Theta}_k)^\top\boldsymbol{B}\operatorname{vec}(\boldsymbol{\Theta} - \boldsymbol{\Theta}_k).$
Maximizing the surrogate yields the update
$\operatorname{vec}(\boldsymbol{\Theta}_{k+1}) = \operatorname{vec}(\boldsymbol{\Theta}_k) + \boldsymbol{B}^{-1}\nabla L(\boldsymbol{\Theta}_k).$
Reverting to matrices, this update can be restated as
$\boldsymbol{\Theta}_{k+1} = \boldsymbol{\Theta}_k + 2(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top(\boldsymbol{Y} - \boldsymbol{W}_k)(\boldsymbol{I}_{c-1} + \boldsymbol{1}\boldsymbol{1}^\top),$
where $\boldsymbol{W}_k$ conveys the weights (fitted probabilities) for the first $c-1$ categories and the rows of $\boldsymbol{Y}$ are indicator vectors conveying category membership. In the special case of logistic regression, the solution reduces to
$\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k + 4(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top(\boldsymbol{y} - \boldsymbol{w}_k).$
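The following Julia sketch (a minimal implementation of the logistic update above, with our own variable names and synthetic data) shows how the fixed curvature bound $\frac{1}{4}\boldsymbol{X}^\top\boldsymbol{X}$ lets one Cholesky factorization serve every iteration.

```julia
using LinearAlgebra

function logistic_mm(X, y; maxiter = 200, tol = 1e-10)
    F = cholesky(Symmetric(X' * X))              # factor once; Böhning bound B = X'X/4
    beta = zeros(size(X, 2))
    for _ in 1:maxiter
        prob = 1 ./ (1 .+ exp.(-X * beta))       # fitted probabilities
        step = 4 .* (F \ (X' * (y .- prob)))     # B^{-1} times the score
        beta += step
        norm(step) <= tol && break
    end
    return beta
end

# Synthetic illustration (data-generating values are ours):
X = [ones(500) randn(500, 3)]
truth = [0.5, 1.0, -1.0, 0.25]
y = Float64.(rand(500) .< 1 ./ (1 .+ exp.(-X * truth)))
betahat = logistic_mm(X, y)
```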
Later, when we discuss penalized multinomial regression, we will consider the penalized loglikelihood $L(\boldsymbol{\Theta}) - \frac{\rho}{2}\|\boldsymbol{\Theta} - \boldsymbol{P}(\boldsymbol{\Theta}_k)\|_F^2$, where the projection (or proximal map) $\boldsymbol{P}(\boldsymbol{\Theta}_k)$ depends on the current iterate $\boldsymbol{\Theta}_k$ and the form of the penalty. Redoing the minorization (13) with this extra term leads to the stationary condition
$\nabla L(\boldsymbol{\Theta}_k) - \boldsymbol{B}\operatorname{vec}(\boldsymbol{\Theta} - \boldsymbol{\Theta}_k) - \rho\operatorname{vec}\big[\boldsymbol{\Theta} - \boldsymbol{P}(\boldsymbol{\Theta}_k)\big] = \boldsymbol{0}$
with $\boldsymbol{B} = \boldsymbol{S}\otimes\boldsymbol{X}^\top\boldsymbol{X}$ and $\boldsymbol{S} = \frac{1}{2}(\boldsymbol{I}_{c-1} - c^{-1}\boldsymbol{1}\boldsymbol{1}^\top)$. This condition is the same as the two equivalent matrix equations
$\boldsymbol{X}^\top\boldsymbol{X}(\boldsymbol{\Theta} - \boldsymbol{\Theta}_k)\boldsymbol{S} + \rho\big[\boldsymbol{\Theta} - \boldsymbol{P}(\boldsymbol{\Theta}_k)\big] = \boldsymbol{G}_k
\quad\text{and}\quad
\boldsymbol{X}^\top\boldsymbol{X}\,\boldsymbol{\Theta}\,\boldsymbol{S} + \rho\boldsymbol{\Theta} = \boldsymbol{C}_k,$
where $\boldsymbol{G}_k = \boldsymbol{X}^\top(\boldsymbol{Y} - \boldsymbol{W}_k)$ and $\boldsymbol{C}_k = \boldsymbol{G}_k + \boldsymbol{X}^\top\boldsymbol{X}\,\boldsymbol{\Theta}_k\boldsymbol{S} + \rho\boldsymbol{P}(\boldsymbol{\Theta}_k)$. The second of these is a Sylvester equation (Bartels and Stewart, 1972) for the matrix $\boldsymbol{\Theta}_{k+1}$. It can be solved by extracting the spectral decompositions $\boldsymbol{X}^\top\boldsymbol{X} = \boldsymbol{Q}\boldsymbol{\Lambda}\boldsymbol{Q}^\top$ and $\boldsymbol{S} = \boldsymbol{U}\boldsymbol{D}\boldsymbol{U}^\top$, left multiplying the equation by $\boldsymbol{Q}^\top$, and right multiplying the equation by $\boldsymbol{U}$. If in the result
$\boldsymbol{\Lambda}(\boldsymbol{Q}^\top\boldsymbol{\Theta}\boldsymbol{U})\boldsymbol{D} + \rho(\boldsymbol{Q}^\top\boldsymbol{\Theta}\boldsymbol{U}) = \boldsymbol{Q}^\top\boldsymbol{C}_k\boldsymbol{U}$
we treat $\boldsymbol{Q}^\top\boldsymbol{\Theta}\boldsymbol{U}$ as a new variable $\boldsymbol{\Gamma}$, then this simpler Sylvester equation has the entry-by-entry solution
$\gamma_{ij} = \frac{(\boldsymbol{Q}^\top\boldsymbol{C}_k\boldsymbol{U})_{ij}}{\lambda_i d_j + \rho}.$
Overall, this highly efficient way of updating $\boldsymbol{\Theta}$ requires just two spectral decompositions, which can be recycled across all iterations and all values of $\rho$. Computation of the projection map sending $\boldsymbol{\Theta}_k$ to $\boldsymbol{P}(\boldsymbol{\Theta}_k)$ must still be performed at each iteration.
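A generic Julia sketch of this Sylvester solve follows (our notation; the matrices are random stand-ins rather than real multinomial data). The two spectral decompositions are computed once, and the update reduces to an entry-by-entry division.

```julia
using LinearAlgebra

# Solve A*Theta*S + rho*Theta = C entrywise after two spectral decompositions.
function sylvester_solve(A::Symmetric, S::Symmetric, C, rho)
    EA, ES = eigen(A), eigen(S)                        # recyclable decompositions
    Ct = EA.vectors' * C * ES.vectors
    Gamma = Ct ./ (EA.values .* ES.values' .+ rho)     # entry-by-entry solution
    return EA.vectors * Gamma * ES.vectors'
end

p, q = 6, 4                                            # q plays the role of c - 1
B0 = randn(p, p)
A = Symmetric(B0 * B0' + I)                            # stands in for X'X
S = Symmetric(0.5 * (I - ones(q, q) / (q + 1)))        # Böhning-style curvature factor
C = randn(p, q)
rho = 1.0
Theta = sylvester_solve(A, S, C, rho)
@assert norm(A * Theta * S + rho * Theta - C) < 1e-8
```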
2.7 Integration with the Proximal Distance Principle
The proximal distance principle is designed to minimize a loss function $f(\boldsymbol{\theta})$ subject to $\boldsymbol{\theta} \in C$, where $C$ is a nonempty closed set (Keys et al., 2019; Landeros et al., 2022, 2023). For instance, $C$ could be the sparsity set $S_s$. The general idea of the proximal distance method is to approximate the solution by minimizing the penalized loss $f(\boldsymbol{\theta}) + \frac{\rho}{2}\operatorname{dist}(\boldsymbol{\theta}, C)^2$ for $\rho$ large. The squared Euclidean distance is majorized by the spherical quadratic $\|\boldsymbol{\theta} - P_C(\boldsymbol{\theta}_k)\|^2$, where $P_C(\boldsymbol{\theta}_k)$ denotes the Euclidean projection of $\boldsymbol{\theta}_k$ onto $C$. If $f(\boldsymbol{\theta})$ is an ordinary sum of squares, then the surrogate remains within this realm. Otherwise, it may be that $f(\boldsymbol{\theta})$ is majorized by an ordinary sum of squares $g(\boldsymbol{\theta} \mid \boldsymbol{\theta}_k)$. The overall surrogate is then also an ordinary sum of squares.
Because the penalty constant $\rho$ is gradually sent to $\infty$, whatever matrix factorization is invoked should apply for all choices of $\rho$. The annealing schedule for $\rho$ is crucial in practice. Simply setting $\rho$ equal to a large constant can cause the proximal distance algorithm to converge prematurely and prevent the algorithm from adequately exploring parameter space. Although the converged value of $\boldsymbol{\theta}$ will be close to the constraint set $C$, the influence of the loss will be slight. The remedy is to start with a small value of $\rho$ and gradually increase it at a slow geometric rate to a final desired level. Our previous papers (Keys et al., 2019; Landeros et al., 2022) provide essential details about annealing schedules and stopping criteria. In any event, the penalty parameter $\rho$ should not be confused with the smoothing parameter $\mu$ determining a Moreau envelope. Our examples always fix the value of $\mu$.
Sparse quantile regression provides a concrete example of the proximal distance principle in action. After distance penalization, the smoothed quantile loss (11) is majorized by the surrogate function

(14)  $g(\boldsymbol{\beta} \mid \boldsymbol{\beta}_k) = \frac{1}{4h}\sum_i (\tilde{y}_{ki} - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2 + \frac{\rho}{2}\|\boldsymbol{\beta} - P_{S_s}(\boldsymbol{\beta}_k)\|^2 + c_k.$

The stationary condition is then
$\Big(\frac{1}{2h}\boldsymbol{X}^\top\boldsymbol{X} + \rho\boldsymbol{I}\Big)\boldsymbol{\beta} = \frac{1}{2h}\boldsymbol{X}^\top\tilde{\boldsymbol{y}}_k + \rho P_{S_s}(\boldsymbol{\beta}_k),$
where the $\tilde{y}_{ki}$ are the shifted responses. Given the pre-computed spectral decomposition $\boldsymbol{X}^\top\boldsymbol{X} = \boldsymbol{Q}\boldsymbol{\Lambda}\boldsymbol{Q}^\top$ of the Gram matrix, the normal equations have the explicit solution
$\boldsymbol{\beta}_{k+1} = \boldsymbol{Q}\Big(\frac{1}{2h}\boldsymbol{\Lambda} + \rho\boldsymbol{I}\Big)^{-1}\boldsymbol{Q}^\top\Big[\frac{1}{2h}\boldsymbol{X}^\top\tilde{\boldsymbol{y}}_k + \rho P_{S_s}(\boldsymbol{\beta}_k)\Big].$
Although a spectral decomposition is more expensive to compute than a Cholesky decomposition, a single spectral decomposition suffices across all $\rho$. In contrast, each time $\rho$ changes, the Cholesky decomposition must be completely recalculated.
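The following Julia fragment (with arbitrary data and an illustrative annealing schedule) shows how a single spectral decomposition of the Gram matrix serves every value of $\rho$; in the actual proximal distance update the right-hand side would also include the $\rho$-scaled projection of the current iterate.

```julia
using LinearAlgebra

X = randn(1_000, 50)
E = eigen(Symmetric(X' * X))                  # one spectral decomposition: Q, Lambda
ridge_solve(b, rho) = E.vectors * ((E.vectors' * b) ./ (E.values .+ rho))

b = X' * randn(1_000)                         # stands in for X' times shifted responses
rhos = 1.2 .^ (0:30)                          # slow geometric annealing schedule
path = [ridge_solve(b, rho) for rho in rhos]  # O(p^2) work per rho, no refactorization
```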
Sparse quantile regression can also be achieved by adding the Moreau envelope penalty $\lambda M_{\mu\|\cdot\|_0}(\boldsymbol{\beta})$ to the loss. (Observe that $\mu$ and $h$ are distinct smoothing parameters; nothing prevents us from equating them.) The surrogate function (14) remains the same, provided we substitute the proximal map $\operatorname{prox}_{\mu\|\cdot\|_0}(\boldsymbol{\beta}_k)$ for the projection $P_{S_s}(\boldsymbol{\beta}_k)$ and the constant $\frac{\lambda}{\mu}$ for the constant $\rho$. If $\hat{\boldsymbol{\beta}}(\lambda)$ denotes the minimum point of the penalized loss, then the curve $\lambda \mapsto M_{\mu\|\cdot\|_0}[\hat{\boldsymbol{\beta}}(\lambda)]$ now becomes an object of interest. This curve should smoothly approximate a jump function with downward jumps of $1$ as $\lambda$ increases.
2.8 Nesterov Acceleration
Conversion of weighted to ordinary least squares comes at the expense of an increased number of iterations until convergence. This adverse consequence can be mitigated by Nesterov acceleration (Nesterov, 1983), which is easier to implement than to understand. In practice one maintains both the current iterate $\boldsymbol{\theta}_k$ and the previous iterate $\boldsymbol{\theta}_{k-1}$. The shifted point $\boldsymbol{\theta}_k + \frac{k-1}{k+2}(\boldsymbol{\theta}_k - \boldsymbol{\theta}_{k-1})$ is then taken as the base point in majorization-minimization. The next point is no longer guaranteed to reduce the objective. If the descent property fails, we take the standard precaution of restarting the Nesterov acceleration at $\boldsymbol{\theta}_k$. Some of our numerical examples benefit from acceleration while others do not. We also explored the application of SQUAREM (Varadhan and Roland, 2008) and Anderson acceleration (Anderson, 1965), but they do not accelerate convergence as satisfactorily as Nesterov acceleration.
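The following Julia sketch (our own generic wrapper, not the released code) implements this restarted scheme around an arbitrary MM map. For example, mm_step could be the shifted-response least squares update of smoothed LAD or quantile regression, and objective the corresponding smoothed loss.

```julia
# Nesterov acceleration with restarts around a generic MM map `mm_step`.
function accelerated_mm(mm_step, objective, theta0; maxiter = 1000, tol = 1e-8)
    theta_prev, theta = theta0, mm_step(theta0)
    obj = objective(theta)
    k = 1
    for _ in 1:maxiter
        base = theta .+ (k - 1) / (k + 2) .* (theta .- theta_prev)   # shifted base point
        candidate = mm_step(base)
        if objective(candidate) <= obj                 # accept the accelerated step
            theta_prev, theta = theta, candidate
            k += 1
        else                                           # restart at the current iterate
            theta_prev, theta = theta, mm_step(theta)
            k = 1
        end
        newobj = objective(theta)
        abs(obj - newobj) <= tol * (abs(obj) + 1) && break
        obj = newobj
    end
    return theta
end
```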
2.9 Convergence Theory
As a prelude to establishing the convergence of our MM algorithms, one needs to prove that an optimal point actually exists. This issue is usually tackled in minimization by showing that the objective $f(\boldsymbol{\theta})$ has compact sublevel sets $\{\boldsymbol{\theta} : f(\boldsymbol{\theta}) \le c\}$. This notion is implied by the coerciveness condition $\lim_{\|\boldsymbol{\theta}\|\to\infty} f(\boldsymbol{\theta}) = \infty$, which in turn is implied by strong convexity. In the absence of strong convexity, one can check that a finite convex function is coercive by checking that it is coercive along all nontrivial rays $\{\boldsymbol{\theta} + t\boldsymbol{v} : t \ge 0\}$ emanating from some arbitrarily chosen base point $\boldsymbol{\theta}$ (Lange, 2016). For instance in quantile regression, the objective is convex and
$f(\boldsymbol{\theta} + t\boldsymbol{v}) = \sum_i \rho_q(y_i - \boldsymbol{x}_i^\top\boldsymbol{\theta} - t\boldsymbol{x}_i^\top\boldsymbol{v}).$
If $\boldsymbol{X}$ has full column rank and $\boldsymbol{v} \ne \boldsymbol{0}$, then at least one inner product $\boldsymbol{x}_i^\top\boldsymbol{v}$ does not vanish. The corresponding term $\rho_q(y_i - \boldsymbol{x}_i^\top\boldsymbol{\theta} - t\boldsymbol{x}_i^\top\boldsymbol{v})$ then tends to $\infty$ as $t$ tends to $\infty$. Given that all other terms are nonnegative, $f(\boldsymbol{\theta})$ is coercive.
As pointed out earlier, our algorithm maps assume the form
$\boldsymbol{\theta}_{k+1} = M(\boldsymbol{\theta}_k) = \boldsymbol{\theta}_k - \gamma_k\boldsymbol{H}(\boldsymbol{\theta}_k)^{-1}\nabla f(\boldsymbol{\theta}_k),$
where the matrix $\boldsymbol{H}(\boldsymbol{\theta}_k)$ is positive definite and $\gamma_k$ is a step-size multiplier chosen by backtracking. For our MM algorithms, the choice $\gamma_k = 1$ always decreases the objective. Lange (2013) and Xu et al. (2017) prove the following proposition pertinent to our problems.
Proposition 2.1.
Suppose the objective $f(\boldsymbol{\theta})$ has compact sublevel sets and $\boldsymbol{H}(\boldsymbol{\theta})$ is continuous and positive definite. If the step size $\gamma_k$ is selected by Armijo backtracking, then the limit points of the sequence $\boldsymbol{\theta}_{k+1} = M(\boldsymbol{\theta}_k)$ are stationary points of $f(\boldsymbol{\theta})$. Moreover, the set of limit points is compact and connected. If all stationary points are isolated, then the sequence converges to one of them.
Our examples omit backtracking and involve surrogates satisfying the uniform Lipschitz gradient condition

(15)  $\|\nabla g(\boldsymbol{\theta} \mid \boldsymbol{\theta}_k) - \nabla g(\boldsymbol{\xi} \mid \boldsymbol{\theta}_k)\| \le L\|\boldsymbol{\theta} - \boldsymbol{\xi}\|$

for all $\boldsymbol{\theta}$, $\boldsymbol{\xi}$, and $\boldsymbol{\theta}_k$. Proposition 8 of (Lange et al., 2021) then offers the following simpler but more precise result.
Proposition 2.2.
Let $f(\boldsymbol{\theta})$ be a coercive differentiable function majorized by a surrogate $g(\boldsymbol{\theta} \mid \boldsymbol{\theta}_k)$ satisfying condition (15). If $\boldsymbol{y}$ denotes a minimum point of $f(\boldsymbol{\theta})$, then the iterates delivered by the corresponding MM algorithm satisfy the bound
$\min_{1 \le j \le k}\|\nabla f(\boldsymbol{\theta}_j)\|^2 \le \frac{2L}{k}\big[f(\boldsymbol{\theta}_0) - f(\boldsymbol{y})\big].$
It follows that $\lim_{k\to\infty}\nabla f(\boldsymbol{\theta}_k) = \boldsymbol{0}$. Furthermore, when $f(\boldsymbol{\theta})$ is continuously differentiable, any limit point of the sequence $\boldsymbol{\theta}_k$ is a stationary point of $f(\boldsymbol{\theta})$.
Proposition 12 of (Lange et al., 2021) dispenses with the isolated stationary point assumption of Proposition 2.1 and proves convergence when $f(\boldsymbol{\theta})$ is coercive, continuous, and subanalytic and all surrogates $g(\boldsymbol{\theta} \mid \boldsymbol{\theta}_k)$ are continuous, $\nu$-strongly convex, and satisfy condition (15) on the compact sublevel set $\{\boldsymbol{\theta} : f(\boldsymbol{\theta}) \le f(\boldsymbol{\theta}_0)\}$. Virtually all well-behaved functions are subanalytic. For instance, the Moreau envelope of a convex lower-semicontinuous subanalytic function is subanalytic (Bolte et al., 2007); the function $\operatorname{dist}(\boldsymbol{\theta}, S)^2$ is semialgebraic and therefore subanalytic whenever $S$ is a semialgebraic set (Landeros et al., 2022).
For the convex problems, one can also attack global convergence by invoking monotone operator theory (Bauschke and Combettes, 2011).
Proposition 2.3.
For a convex problem with $\boldsymbol{H}$ a fixed positive definite matrix, suppose that the set of stationary points is nonempty. Then the MM iterates
$\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k - \boldsymbol{H}^{-1}\nabla f(\boldsymbol{\theta}_k)$
converge to a minimum point.
Proof.
See Appendix D. ∎
Proposition 10 of (Lange et al., 2021) proves that MM iterates converge at a linear rate in the best circumstances.
Proposition 2.4.
Let $f(\boldsymbol{\theta})$ be $\nu$-strongly convex and differentiable, and assume $g(\boldsymbol{\theta} \mid \boldsymbol{\theta}_k)$ satisfies the Lipschitz condition (15). If the global minimum occurs at $\boldsymbol{y}$, then the MM iterates satisfy
$f(\boldsymbol{\theta}_k) - f(\boldsymbol{y}) \le \Big(1 - \frac{\nu}{L}\Big)^k\big[f(\boldsymbol{\theta}_0) - f(\boldsymbol{y})\big],$
establishing linear convergence of $f(\boldsymbol{\theta}_k)$ to $f(\boldsymbol{y})$.
Proof.
See Appendix D for a slightly corrected proof. ∎
In many of our examples, the Hessian of the surrogate is $d^2g(\boldsymbol{\theta} \mid \boldsymbol{\theta}_k) = \alpha\boldsymbol{X}^\top\boldsymbol{X}$ for some constant $\alpha > 0$. The constants of Proposition 2.4 satisfy $\nu = \alpha\sigma_p^2$ and $L = \alpha\sigma_1^2$, where the $\sigma_j$ are the ordered singular values of $\boldsymbol{X}$. The ratio $\sigma_1^2/\sigma_p^2$ is the condition number of $\boldsymbol{X}^\top\boldsymbol{X}$ relevant to the rate of convergence. Note that $\alpha$ disappears in the condition number. In the proximal distance method, we encounter the Hessian $\alpha\boldsymbol{X}^\top\boldsymbol{X} + \rho\boldsymbol{I}$, where the constant $\alpha$ no longer cancels in the condition number. Although addition of $\rho\boldsymbol{I}$ improves the condition number pertinent to inner iterations, annealing of $\rho$ towards $\infty$ in outer iterations slows overall convergence to the constrained minimum. Multinomial regression replaces the Gram matrix by the Kronecker product $\frac{1}{2}(\boldsymbol{I}_{c-1} - c^{-1}\boldsymbol{1}\boldsymbol{1}^\top)\otimes\boldsymbol{X}^\top\boldsymbol{X}$, whose singular values are products of the eigenvalues of the matrices $\frac{1}{2}(\boldsymbol{I}_{c-1} - c^{-1}\boldsymbol{1}\boldsymbol{1}^\top)$ and $\boldsymbol{X}^\top\boldsymbol{X}$. Because these are $\frac{1}{2}$ and $\frac{1}{2c}$ and the $\sigma_j^2$, respectively, the revised condition number turns out to equal $c\sigma_1^2/\sigma_p^2$.
3 Numerical Experiments
The numerical examples of this section demonstrate the versatility of the MM principle in the derivation of fast estimation algorithms. We consider quantile regression (ordinary and sparse), L2E regression (ordinary and isotonic), logistic regression, and multinomial regression (reduced rank). In sparse and low-rank regression our penalties ignore intercept parameters. All computations were conducted on the UCLA Hoffman2 cluster with 4 CPU cores.
3.1 Unpenalized Models
We now compare the performance of our MM algorithms and popular competing algorithms for smoothed quantile regression, L2E regression, and logistic/multinomial regression. No constraints or penalties are imposed at this stage. We adopt a common protocol for generating the design matrix . The aim here is to demonstrate the computational efficiency of the MM algorithms on large-scale data. Each row of is sampled from a multivariate normal distribution with mean and covariance matrix with entries . An intercept column is then left appended to to obtain a full design matrix . Generally, we set the true coefficient vector to . The responses require a different generation protocol for each problem class. Unless stated otherwise, the number of observations equals , where the number of predictors ranges over the set . We terminate our MM algorithms when the relative change in objective falls below . In the following paragraphs, we describe competing methods, their software implementations, and the further protocols for generating the response vector .
Quantile Regression
For convolution-smoothed quantile regression, we compare MM to the conquer R package (Tan et al., 2022; He et al., 2023). The conquer package implements gradient descent with a Barzilai-Borwein stepsize. In calling conquer, we invoke the uniform kernel with the default bandwidth . Our MM software shares this kernel and bandwidth and therefore optimizes the same objective. The response is generated as
(16)
where the noise follows a $t$-distribution with 1.5 degrees of freedom and the adjustment term involves the distribution function of the $t$-distribution. This term creates quantile drift, which is a common practice in simulation studies of quantile regression. A covariate-dependent multiplier encourages heteroskedasticity. We consider two quantile levels in our experiments.
[Figure 1 about here: timing and objective comparisons for smoothed quantile regression.]
L2E Regression
For L2E regression, we compare the proposed double majorization technique described in Section 2.2 to iteratively reweighted least squares as proposed by Liu et al. (2023). Their implementation in R executes poorly on large-scale data. We re-implemented their IRLS approach in Julia and used it as our baseline competition. Responses are generated as
$y_i = \boldsymbol{x}_i^\top\boldsymbol{\beta} + \epsilon_i,$
where the $\epsilon_i$ are standard normal random deviates. To induce outlier contamination, we add 10 to $y_i$ for the first 10% of the samples and 10 to the second column of $\boldsymbol{X}$ for the last 10% of samples. This simulation choice demonstrates that the L2E estimator is also robust to contamination in the covariates $\boldsymbol{x}_i$, a property not enjoyed by Huber regression or quantile regression.
[Figure 2 about here: timing and estimation error comparisons for L2E regression.]
Logistic and Multinomial Regression
For logistic and multinomial (logistic) regression, we compare our MM algorithm to the glmnet R package (Friedman et al., 2010) with penalty parameter 0. For logistic regression, the responses are generated as independent Bernoulli draws with success probabilities
$\Pr(y_i = 1) = \frac{1}{1 + e^{-\boldsymbol{x}_i^\top\boldsymbol{\beta}}}.$
For multinomial regression, the responses are generated as multinomial draws with category probabilities
$\Pr(y_i = j) = \frac{e^{\boldsymbol{x}_i^\top\boldsymbol{\theta}_j}}{\sum_l e^{\boldsymbol{x}_i^\top\boldsymbol{\theta}_l}},$
where $\boldsymbol{\theta}_j$ is the $j$-th column of the coefficient matrix $\boldsymbol{\Theta}$. The entries of $\boldsymbol{\Theta}$ are populated with random values. In this simulation there are several categories. In contrast to the two previous examples, the number of predictors ranges over a smaller grid. This adjustment is made because glmnet often fails to converge when the number of predictors is too large.
[Figure 3 about here: timing and loglikelihood comparisons for logistic and multinomial regression.]
Figures 1, 2, and 3 summarize our computational experiments. All experimental results are averaged over 50 random replicates. Figure 1 shows that MM converges to the same objective as conquer, certifying the correctness of our implementation. However, as demonstrated in the left panel of Figure 1, MM can be 3 to 7 times faster than conquer. In L2E regression, Figure 2 demonstrates that double majorization is nearly 10 times faster than weighted least squares. The right panel of Figure 2 shows that the estimation error of the L2E estimator is much smaller than that of least squares, confirming the robustness of L2E estimation. Figure 3 shows that MM converges to the same loglikelihood as glmnet, but can be 3 times faster for logistic regression and 10 times faster for multinomial regression. To our surprise, glmnet often fails to converge for the larger numbers of predictors in multinomial regression. Figure 3 consequently only shows results for the smaller numbers of predictors. When glmnet does converge, our converged loglikelihoods match its converged loglikelihoods. These results highlight MM's efficiency in computing common statistical estimators with no compromise in accuracy.
3.2 Sparse Quantile Regression
We next compare sparse quantile regression under $\ell_0$-norm and distance-to-set penalties to quantile regression under Lasso, SCAD, and MCP penalties. The conquer.cv.reg function of conquer supplies the Lasso, SCAD, and MCP results. In our comparisons, the design matrix is generated as described in Section 3.1. However, following Tan et al. (2022) and Man et al. (2024), the true coefficient vector has , , , , , , , , , , and . All remaining elements of are 0. In other words, there are 10 nonzero predictors besides the intercept. The responses are generated by the protocol (16) except that we now consider Gaussian noise in addition to the heavy-tailed noise. Our experiments cover both the over-determined case and the under-determined case . Quantile levels are set at and . We set the Moreau envelope parameter for the quantile loss to be the same as the bandwidth parameter returned by conquer, namely .
The following metrics measure model performance: (a) true positive rate (TPR), (b) false positive rate (FPR), (c) estimation error (EE), defined as , and (d) prediction error (PE), defined as . Penalty parameters for all models are selected via 5-fold cross-validation, with quantile loss serving as the validation error. For the $\ell_0$-norm Moreau envelope penalty with penalty constant , we set and search over an exponentially spaced grid of 50 different points. For the proximal distance model, we search over the grid to identify the optimal sparsity level. While the optimal solution is not sparse in the $\ell_0$-norm model, its proximal projection is sparse, and we use this projection to assess variable selection performance.
Method | TPR | FPR | EE | PE | Time (s) |
SQR-Lasso | 1.00 (0.00) | 0.09 (0.04) | 0.47 (0.10) | 7.92 (1.41) | 2.52 |
SQR-SCAD | 0.93 (0.13) | 0.00 (0.05) | 0.86 (1.23) | 11.2 (13.6) | 3.02 |
SQR-MCP | 0.93 (0.14) | 0.02 (0.10) | 1.49 (0.66) | 15.4 (43.9) | 2.92 |
SQR-$\ell_0$ | 1.00 (0.00) | 0.00 (0.00) | 0.24 (0.08) | 4.54 (1.13) | 0.73
SQR-PD | 1.00 (0.00) | 0.00 (0.00) | 0.21 (0.07) | 3.88 (1.16) | 6.40 |
SQR-Lasso | 1.00 (0.00) | 0.08 (0.04) | 0.57 (0.13) | 9.72 (2.04) | 2.69 |
SQR-SCAD | 0.90 (0.15) | 0.00 (0.01) | 1.22 (1.19) | 16.4 (14.9) | 2.69 |
SQR-MCP | 0.90 (0.13) | 0.02 (0.11) | 0.88 (0.20) | 40.9 (182.1) | 2.69 |
SQR-$\ell_0$ | 1.00 (0.00) | 0.00 (0.00) | 0.27 (0.07) | 5.21 (1.19) | 0.66
SQR-PD | 1.00 (0.00) | 0.00 (0.00) | 0.26 (0.08) | 4.96 (1.40) | 6.95 |
SQR-Lasso | 1.00 (0.00) | 0.05 (0.02) | 0.83 (0.18) | 9.58 (1.71) | 3.20 |
SQR-SCAD | 0.76 (0.18) | 0.10 (0.28) | 6.43 (14.3) | 66.0 (152) | 3.81 |
SQR-MCP | 0.82 (0.16) | 0.15 (0.27) | 4.45 (6.98) | 45.1 (75.3) | 4.10 |
SQR-$\ell_0$ | 0.97 (0.07) | 0.00 (0.00) | 0.71 (0.49) | 8.00 (4.03) | 0.56
SQR-PD | 0.99 (0.02) | 0.01 (0.01) | 0.61 (0.27) | 7.68 (2.83) | 16.9 |
SQR-Lasso | 1.00 (0.02) | 0.05 (0.02) | 1.15 (0.30) | 12.7 (3.16) | 3.50 |
SQR-SCAD | 0.68 (0.17) | 0.06 (0.22) | 4.82 (8.88) | 48.9 (91.1) | 3.41 |
SQR-MCP | 0.78 (0.17) | 0.10 (0.28) | 3.91 (6.22) | 39.2 (61.9) | 2.69 |
SQR-$\ell_0$ | 0.94 (0.11) | 0.00 (0.00) | 1.04 (0.73) | 11.3 (5.56) | 0.63
SQR-PD | 0.97 (0.07) | 0.01 (0.01) | 0.97 (0.58) | 11.5 (4.28) | 14.2 |
Table 1 reports simulation results averaged over 50 random replicates under heavy-tailed noise. In general, Lasso-penalized quantile regression tends to produce an excess of false positives. SCAD and MCP penalties significantly reduce the number of false positives, but often at the expense of missing some true positives. In the under-determined case with heavy-tailed noise, SCAD and MCP perform poorly, missing many true positives while including many false positives. SCAD and MCP do perform better when the noise is normal, as the results tabulated in Appendix E indicate. The $\ell_0$-norm and distance-to-set penalties consistently achieve high TPR, low FPR, and low prediction error. The $\ell_0$-norm penalty delivers high precision and efficient computation, while the distance-to-set penalty offers the best overall estimation performance. Given its dense grid search and annealing of $\rho$, distance-to-set estimation is generally slower than $\ell_0$-norm estimation.
3.3 Robust Isotonic Regression
To illustrate the application of robust isotonic regression, we consider global warming. The dataset Climate at a Glance records December land and ocean temperature anomalies from 1850 to 2023 relative to the 1901-2000 mean. This dataset is similar to one analyzed by Wu et al. (2001) and Tibshirani et al. (2011). Appendix B records the implementation details of our method. The L2E method not only enables robust estimation, but it also quantifies the outlyingness of each observation through a converged case weight. We compare the L2E method, which involves recycled matrix decompositions, to the weighted least squares method of Liu et al. (2023). As a benchmark, we include the non-robust isotonic fit delivered by the pooled adjacent violators algorithm (Barlow and Brunk, 1972).
[Figure 4 about here: robust and non-robust isotonic fits (left) and outlyingness values (right).]
The two robust methods give almost identical fits, as depicted in the left panel of Figure 4. However, our L2E method takes 0.97 seconds, while IRLS takes 13.15 seconds. These results echo the findings in Section 3.1 about efficient computation. Compared to the non-robust fit, our robust isotonic estimate appears to be less influenced by the unusually low temperatures near 1920 and the unusually high ones near 1940. The right panel of Figure 4 depicts the values quantifying the outlyingness of each observation. The two spikes near 1920 and 1940 are clearly visible.
3.4 Low-Rank Multinomial Regression
Simulated data on smoothed-nuclear-norm penalized multinomial regression allow us to draw comparisons with the R package npmr (Powers et al., 2018), which is powered by accelerated proximal gradient descent. Appendix C records the details of our multinomial model under the smoothed-nuclear-norm penalty. The coefficient matrix is now populated with random deviates and then projected to the closest matrix of rank 3. To avoid penalizing intercepts, this projection ignores the first row of the coefficient matrix. The design matrix and response are again generated according to the protocols of Section 3.1. We consider two settings of the problem dimensions and carry out a grid search on the penalty constant over 50 exponentially spaced points. Because we divide the overall loglikelihood by the sample size while Powers et al. (2018) does not, the grid for npmr must be multiplied by the sample size. The two adopted values of the Moreau smoothing constant illustrate the impact of smoothing on inference. For each random replicate, we generate a separate test set and evaluate estimation performance by the loglikelihood of both the training set and the test set.
[Figure 5 about here: training and test loglikelihoods along the penalty grid for low-rank multinomial regression.]
Figure 5 displays training and test loglikelihoods averaged over 50 replicates. All methods yield similar best test loglikelihoods over the grid. Our smoothed nuclear norm MM algorithm executes over 20 times faster than npmr in one setting and around 30 times faster in the other. Several factors may be at play here. Although proximal gradient descent benefits from backtracking, the starting step size still has a major impact on speed and can be tricky to specify without careful hand tuning. Our experiments adopt the default initial step size of npmr. In contrast, step size is not an issue with an MM algorithm since majorization automatically guarantees the descent property. Our MM algorithm also incorporates curvature information through the quadratic upper bound majorization. This translates into faster convergence. Finally, our MM algorithm bypasses the bottleneck imposed by repeated matrix decompositions. Reduction of the normal equations to a Sylvester equation is a key tactic for reducing computational complexity.
We also applied MM based low-rank multinomial regression to the Vowel data set available in the npmr package. The 990 observations on 10 independent predictors record speaker independent recognition of the steady state vowels of British English. After adding an intercept, there are 11 predictors. The 10 original predictors are log-area ratios under linear predictive coding. The data are divided into a training set of 528 observations and a testing set of 462 observations. We again compute the whole solution path on the training set with the penalty constant ranging over 50 exponentially spaced points. We report the best test loglikelihood and total computation time for MM low-rank multinomial regression, npmr low-rank multinomial regression, and ordinary multinomial regression without low-rank regularization. Because glmnet fails to converge without regularization on these data, we employ our own implementation of ordinary multinomial regression.
Method | Best Test Loglikelihood | Computation Time |
---|---|---|
Low-rank Multinomial | -579.1 | 1.0 s |
package npmr | -581.5 | 22.9 s |
Multinomial | -1216.9 | 0.04 s |
Table 2 summarizes the experimental results. The table shows that regularized multinomial regression yields much higher test loglikelihoods than unregularized multinomial regression, demonstrating the benefits in inference of low-rank regularization. Despite its reliance on nuclear norm smoothing, our MM algorithm also yields similar and even slightly higher test loglikelihoods than npmr. Our code executes over 20 times faster than npmr, reinforcing our previous claims about computational efficiency.
4 Discussion
The current paper stresses the MM principle, smoothing by Moreau envelopes, and the recycling of matrix decompositions. The MM principle simplifies optimization by: (a) separating the variables of a problem, (b) avoiding large matrix inversions, (c) linearizing a problem, (d) restoring symmetry, (e) dealing with equality and inequality constraints gracefully, and (f) turning a nondifferentiable problem into a smooth problem. Our examples feature quadratic surrogates, by far the most common surrogates and the easiest to implement in practice. Quadratic surrogates are not necessarily parameter separated, and their stationary conditions require solving linear equations. This is where recycling of matrix decompositions comes into play. Cholesky, QR, and spectral decompositions are expensive to extract in high dimensions, so there is strong motivation to employ quadratic surrogates with the same curvature across all iterations. Deweighting and Moreau majorization are two tactics achieving this goal. Although Cholesky decompositions are more likely to suffer from ill conditioning than QR decompositions, Cholesky decompositions are faster to extract and feature exclusively in our numerical examples.
Many loss and penalty functions are nondifferentiable. Smoothing by Moreau envelopes and other approximation methods removes the kinks and allows the standard rules of calculus to apply in optimization. A Moreau envelope can be majorized by a spherical quadratic with parameters separated. This advantage extends to squared distance-to-set penalties in constrained optimization. These majorizations are not especially tight, so it often takes many iterations until convergence. Fortunately, Nesterov acceleration takes most of the sting out of this drawback. Our numerical examples illustrate the virtues of these tactics. Computational speed is often an order of magnitude better than that delivered by popular competing methods. More subtly, statistical inference is improved through the parsimony imposed by regularization and the flagging of outliers.
In any event, it is the combination of tactics that produces the fastest and most reliable algorithms. Techniques such as alternating minimization (block descent and ascent), profile loglikelihoods, penalty constant annealing (Zhou and Lange, 2010), and quasi-Newton methods (Zhou et al., 2011) have also proved valuable in many settings. Readers should keep in mind that the MM principle can be invoked on the subproblems of alternating minimization. Progress, not perfection, is usually adequate in solving the subproblems. Optimization requires both art and science. Although there is no panacea, over-arching themes can guide the construction of good algorithms. Our examples vividly demonstrate the speed improvements possible with thoughtful algorithm construction. There are doubtless many advances yet to be made in accelerating the optimization algorithms so vital to the progress of statistics. Readers who want to replicate our experiments and extend our Julia code can visit the web site https://github.com/qhengncsu/MMDeweighting.jl.
Acknowledgments
This research was partially funded by grants from the National Institute of General Medical Sciences (R35GM141798, HZ and KL) and the National Science Foundation (DMS-2054253 and IIS-2205441, HZ).
References
- Anderson (1965) Anderson, D. G. (1965). Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560.
- Barlow and Brunk (1972) Barlow, R. E. and H. D. Brunk (1972). The isotonic regression problem and its dual. Journal of the American Statistical Association 67(337), 140–147.
- Bartels and Stewart (1972) Bartels, R. H. and G. W. Stewart (1972). Algorithm 432 [C2]: solution of the matrix equation AX+ XB= C [F4]. Communications of the ACM 15(9), 820–826.
- Bauschke and Combettes (2011) Bauschke, H. H. and P. L. Combettes (2011). Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer.
- Bauschke and Moursi (2023) Bauschke, H. H. and W. M. Moursi (2023). An Introduction to Convexity, Optimization, and Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics.
- Beck (2017) Beck, A. (2017). First-Order Methods in Optimization. SIAM.
- Böhning (1992) Böhning, D. (1992). Multinomial logistic regression algorithm. Annals of the Institute of Statistical Mathematics 44(1), 197–200.
- Böhning and Lindsay (1988) Böhning, D. and B. G. Lindsay (1988). Monotonicity of quadratic-approximation algorithms. Annals of the Institute of Statistical Mathematics 40(4), 641–663.
- Bolte et al. (2007) Bolte, J., A. Daniilidis, and A. Lewis (2007). The Lojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM Journal on Optimization 17(4), 1205–1223.
- Chen et al. (2012) Chen, X., Q. Lin, S. Kim, J. G. Carbonell, and E. P. Xing (2012). Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics 6(2), 719 – 752.
- Chi and Chi (2022) Chi, J. T. and E. C. Chi (2022). A user-friendly computational framework for robust structured regression with the $L_2$ criterion. Journal of Computational and Graphical Statistics 31(4), 1051–1062.
- de Leeuw and Lange (2009) de Leeuw, J. and K. Lange (2009). Sharp quadratic majorization in one dimension. Computational Statistics & Data Analysis 53(7), 2471–2484.
- de Menezes et al. (2021) de Menezes, D., D. Prata, A. Secchi, and J. Pinto (2021). A review on robust M-estimators for regression analysis. Computers & Chemical Engineering 147, 107254.
- Fan and Li (2001) Fan, J. and R. Li (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360.
- Fernandes et al. (2021) Fernandes, M., E. Guerre, and E. Horta (2021). Smoothing quantile regressions. Journal of Business & Economic Statistics 39(1), 338–357.
- Friedman et al. (2010) Friedman, J., T. Hastie, and R. Tibshirani (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software 33(1), 1.
- He et al. (2023) He, X., X. Pan, K. M. Tan, and W.-X. Zhou (2023). Smoothed quantile regression with large-scale inference. Journal of Econometrics 232(2), 367–388.
- Heiser (1987) Heiser, W. J. (1987). Correspondence analysis with least absolute residuals. Computational Statistics & Data Analysis 5(4), 337–356.
- Heng et al. (2023) Heng, Q., E. C. Chi, and Y. Liu (2023). Robust low-rank tensor decomposition with the $L_2$ criterion. Technometrics 65(4), 537–552.
- Hiriart-Urruty and Le (2013) Hiriart-Urruty, J.-B. and H. Y. Le (2013). From Eckart and Young approximation to Moreau envelopes and vice versa. RAIRO-Operations Research-Recherche Opérationnelle 47(3), 299–310.
- Huber and Dutter (1974) Huber, P. J. and R. Dutter (1974). Numerical solution of robust regression problems. In COMPSTAT 1974, Proc. Symposium on Computational Statistics. Physica Verlag.
- Huber and Ronchetti (2011) Huber, P. J. and E. M. Ronchetti (2011). Robust Statistics. John Wiley & Sons.
- Hunter and Lange (2004) Hunter, D. R. and K. Lange (2004). A tutorial on MM algorithms. The American Statistician 58, 30–37.
- Kaplan and Sun (2017) Kaplan, D. M. and Y. Sun (2017). Smoothed estimating equations for instrumental variables quantile regression. Econometric Theory 33(1), 105–157.
- Keys et al. (2019) Keys, K. L., H. Zhou, and K. Lange (2019). Proximal distance algorithms: theory and practice. Journal of Machine Learning Research 20(66), 1–38.
- Kiers (1997) Kiers, H. A. (1997). Weighted least squares fitting using ordinary least squares algorithms. Psychometrika 62, 251–266.
- Koenker and Bassett (1978) Koenker, R. and G. Bassett (1978). Regression quantiles. Econometrica 46(1), 33–50.
- Krishnapuram et al. (2005) Krishnapuram, B., L. Carin, M. A. Figueiredo, and A. J. Hartemink (2005). Sparse multinomial logistic regression: fast algorithms and generalization bounds. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(6), 957–968.
- Landeros et al. (2022) Landeros, A., O. H. M. Padilla, H. Zhou, and K. Lange (2022). Extensions to the proximal distance method of constrained optimization. Journal of Machine Learning Research 23(182), 1–45.
- Landeros et al. (2023) Landeros, A., J. Xu, and K. Lange (2023). MM optimization: proximal distance algorithms, path following, and trust regions. Proceedings of the National Academy of Sciences of the United States of America 120(27), e2303168120.
- Lange (2013) Lange, K. (2013). Optimization, Volume 95. Springer Science & Business Media.
- Lange (2016) Lange, K. (2016). MM Optimization Algorithms. SIAM.
- Lange et al. (2000) Lange, K., D. R. Hunter, and I. Yang (2000). Optimization transfer using surrogate objective functions. Journal of Computational and Graphical Statistics 9, 1–20.
- Lange and Sinsheimer (1993) Lange, K. and J. S. Sinsheimer (1993). Normal/independent distributions and their applications in robust regression. Journal of Computational and Graphical Statistics 2(2), 175–198.
- Lange et al. (2021) Lange, K., J.-H. Won, A. Landeros, and H. Zhou (2021). Nonconvex Optimization via MM Algorithms: Convergence Theory, pp. 1–22. John Wiley & Sons, Ltd.
- Liu et al. (2023) Liu, X., E. C. Chi, and K. Lange (2023). A sharper computational tool for L2E regression. Technometrics 65(1), 117–126.
- Man et al. (2024) Man, R., X. Pan, K. M. Tan, and W.-X. Zhou (2024). A unified algorithm for penalized convolution smoothed quantile regression. Journal of Computational and Graphical Statistics 33(2), 625–637.
- McLachlan and Krishnan (2007) McLachlan, G. J. and T. Krishnan (2007). The EM Algorithm and Extensions. John Wiley & Sons.
- Nesterov (1983) Nesterov, Y. (1983). A method for solving the convex programming problem with convergence rate $O(1/k^2)$. Soviet Mathematics Doklady 27, 372–376.
- Nesterov (2005) Nesterov, Y. (2005). Smooth minimization of non-smooth functions. Mathematical Programming 103, 127–152.
- Polson et al. (2015) Polson, N. G., J. G. Scott, and B. T. Willard (2015). Proximal Algorithms in Statistics and Machine Learning. Statistical Science 30(4), 559 – 581.
- Powers et al. (2018) Powers, S., T. Hastie, and R. Tibshirani (2018). Nuclear penalized multinomial regression with an application to predicting at bat outcomes in baseball. Statistical Modelling 18(5-6), 388–410.
- Recht et al. (2010) Recht, B., M. Fazel, and P. A. Parrilo (2010). Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review 52(3), 471–501.
- Scott (2001) Scott, D. W. (2001). Parametric statistical modeling by minimum integrated square error. Technometrics 43(3), 274–285.
- Tan et al. (2022) Tan, K. M., L. Wang, and W.-X. Zhou (2022). High-dimensional quantile regression: Convolution smoothing and concave regularization. Journal of the Royal Statistical Society Series B: Statistical Methodology 84(1), 205–233.
- Tibshirani et al. (2011) Tibshirani, R. J., H. Hoefling, and R. Tibshirani (2011). Nearly-isotonic regression. Technometrics 53(1), 54–61.
- Varadhan and Roland (2008) Varadhan, R. and C. Roland (2008). Simple and globally convergent methods for accelerating the convergence of any EM algorithm. Scandinavian Journal of Statistics 35(2), 335–353.
- Whang (2006) Whang, Y.-J. (2006). Smoothed empirical likelihood methods for quantile regression models. Econometric Theory 22(2), 173–205.
- Wu et al. (2001) Wu, W. B., M. Woodroofe, and G. Mentz (2001). Isotonic regression: another look at the changepoint problem. Biometrika 88(3), 793–804.
- Xu et al. (2017) Xu, J., E. Chi, and K. Lange (2017). Generalized linear model regression under distance-to-set penalties. Advances in Neural Information Processing Systems 30, 1–11.
- Xu and Lange (2022) Xu, J. and K. Lange (2022). A proximal distance algorithm for likelihood-based sparse covariance estimation. Biometrika 109(4), 1047–1066.
- Zhang (2010) Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics 38(2), 894 – 942.
- Zhou et al. (2011) Zhou, H., D. Alexander, and K. Lange (2011). A quasi-Newton acceleration for high-dimensional optimization algorithms. Statistics and Computing 21, 261–273.
- Zhou and Lange (2010) Zhou, H. and K. L. Lange (2010). On the bumpy road to the dominant mode. Scandinavian Journal of Statistics 37(4), 612–631.
Appendix A Deweighting the Sharpest LAD Majorization
In LAD regression deweighting the sharpest quadratic majorization (8) leads to the surrogate (7) derived from the Moreau majorization (4). To prove this assertion, recall that the Moreau surrogate (7) is
$\frac{1}{2\mu}\sum_i (\tilde{y}_{ki} - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2 + c_k,$
where $\tilde{y}_{ki} = y_i - \operatorname{prox}_{\mu|\cdot|}(r_{ki})$ and $r_{ki} = y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta}_k$. On the other hand, the best quadratic majorization (8) in LAD is
$\sum_i \frac{w_{ki}}{2}(y_i - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2 + \text{const},$
where $w_{ki} = \frac{1}{\mu}$ if $|r_{ki}| \le \mu$ and $w_{ki} = \frac{1}{|r_{ki}|}$ if $|r_{ki}| > \mu$. In both cases the weights satisfy $\mu w_{ki} \le 1$. Deweighting with the rescaled weights $\mu w_{ki}$ yields the new surrogate
$\frac{1}{2\mu}\sum_i (\hat{y}_{ki} - \boldsymbol{x}_i^\top\boldsymbol{\beta})^2 + \text{const}, \qquad \hat{y}_{ki} = \mu w_{ki}y_i + (1 - \mu w_{ki})\boldsymbol{x}_i^\top\boldsymbol{\beta}_k.$
When $|r_{ki}| \le \mu$, $\mu w_{ki} = 1$ and $\operatorname{prox}_{\mu|\cdot|}(r_{ki}) = 0$, so that $\hat{y}_{ki} = y_i = \tilde{y}_{ki}$, and when $|r_{ki}| > \mu$, $\mu w_{ki} = \mu/|r_{ki}|$, so that
$\hat{y}_{ki} = \boldsymbol{x}_i^\top\boldsymbol{\beta}_k + \mu\operatorname{sign}(r_{ki}) = y_i - \big[r_{ki} - \mu\operatorname{sign}(r_{ki})\big] = y_i - \operatorname{prox}_{\mu|\cdot|}(r_{ki}) = \tilde{y}_{ki}.$
In both cases the shifted responses, and hence the surrogates, coincide.
Appendix B Robust Isotonic Regression
The L2E version of robust isotonic regression is driven by the penalized loss
$f(\boldsymbol{\beta}, \tau) + \frac{\rho}{2}\operatorname{dist}(\boldsymbol{D}\boldsymbol{\beta}, \mathbb{R}_+^{n-1})^2,$
where $\boldsymbol{D}$ is a matrix that generates the differences of adjacent components of $\boldsymbol{\beta}$, and $\mathbb{R}_+^{n-1}$ is the $(n-1)$-dimensional nonnegative orthant. Our double majorization of the L2E loss and distance majorization of the set penalty together produce the MM surrogate
$g(\boldsymbol{\beta} \mid \boldsymbol{\beta}_k) = a_k\sum_i (\tilde{y}_{ki} - \beta_i)^2 + \frac{\rho}{2}\big\|\boldsymbol{D}\boldsymbol{\beta} - P_{\mathbb{R}_+^{n-1}}(\boldsymbol{D}\boldsymbol{\beta}_k)\big\|^2 + c_k,$
where $a_k > 0$ is a constant determined by the current precision $\tau$. The stationary condition for updating $\boldsymbol{\beta}$ is
$\big(2a_k\boldsymbol{I} + \rho\boldsymbol{D}^\top\boldsymbol{D}\big)\boldsymbol{\beta} = 2a_k\tilde{\boldsymbol{y}}_k + \rho\boldsymbol{D}^\top P_{\mathbb{R}_+^{n-1}}(\boldsymbol{D}\boldsymbol{\beta}_k),$
where $\tilde{\boldsymbol{y}}_k$ is the vector of shifted responses with components $\tilde{y}_{ki} = w_{ki}y_i + (1 - w_{ki})\beta_{ki}$. Block descent alternates the updates of $\boldsymbol{\beta}$ and $\tau$. The precision parameter $\tau$ can be updated by gradient descent (Heng et al., 2023) or an approximate Newton's method (Liu et al., 2023).
Appendix C Low-Rank Multinomial Regression
In multinomial regression, imposing a rank penalty on the regression coefficient matrix $\boldsymbol{\Theta}$ encourages information sharing and improves prediction. Following Powers et al. (2018) we start with a nuclear norm penalty $\lambda\|\boldsymbol{\Theta}\|_*$. Because the nuclear norm is hard to majorize, we pass to its Moreau envelope and minimize the penalized loss
$-L(\boldsymbol{\Theta}) + \lambda M_{\mu\|\cdot\|_*}(\boldsymbol{\Theta})$
involving the loglikelihood defined by equation (12). Invoking the quadratic upper bound majorization for the negative loglikelihood and the Moreau envelope majorization for the penalty yields an MM surrogate that combines the quadratic (13) with the spherical penalty $\frac{\lambda}{2\mu}\|\boldsymbol{\Theta} - \operatorname{prox}_{\mu\|\cdot\|_*}(\boldsymbol{\Theta}_k)\|_F^2$, where the quantities $\boldsymbol{B}$ and $\boldsymbol{W}_k$ entering this surrogate are described in the last paragraph of Section 2.6. The stationary condition and the Sylvester equation method for solving it are found there as well.
Appendix D Proofs
D.1 Proof of Proposition 2.3
Proof.
Our attack exploits the inner product and corresponding norm associated with the positive definite second differential . Taking in the majorization
(17)
leads to the conclusion
(18)
Now consider the function . It is convex, achieves its minimum at the stationary point , and satisfies the analogues of the inequalities (17) and (18). Therefore,
By symmetry,
and adding the last two inequalities gives
We are now in a position to prove that the operator is non-expansive in the norm . Indeed,
The related operator
is -averaged with fixed points equal to the stationary points of . To complete the proof, we note that the sequence defined by an averaged operator is known to converge to a fixed point. For example, see Proposition 7.5.2 of (Lange, 2016) or Corollary 22.20 of (Bauschke and Moursi, 2023). ∎
D.2 Proof of Proposition 2.4
Proof.
Existence and uniqueness of follow from the strong convexity of . Because , the -smoothness of gives the quadratic upper bound
which incidentally implies . In view of the strong convexity assumption, we have the lower bound
Therefore, . Combining the two preceding inequalities yields the Polyak-Łojasiewicz (PL) bound
We now turn to the MM iterates and take . The PL inequality implies
Subtracting from both sides of the previous inequality and rearranging gives
Iteration of this inequality yields the claimed linear convergence. ∎
Appendix E Sparse Quantile Regression under Gaussian Noise
Method | TPR | FPR | EE | PE | Time (s) |
SQR-Lasso | 1.00 (0.00) | 0.09 (0.04) | 0.45 (0.10) | 8.02 (1.46) | 1.56 |
SQR-SCAD | 0.99 (0.04) | 0.00 (0.00) | 0.30 (0.30) | 5.02 (4.21) | 1.33 |
SQR-MCP | 1.00 (0.00) | 0.00 (0.00) | 0.22 (0.06) | 4.01 (0.93) | 1.30 |
SQR-$\ell_0$ | 1.00 (0.00) | 0.00 (0.00) | 0.26 (0.08) | 4.79 (1.15) | 0.48
SQR-PD | 1.00 (0.00) | 0.00 (0.00) | 0.23 (0.08) | 4.34 (1.28) | 6.52 |
SQR-Lasso | 1.00 (0.00) | 0.09 (0.04) | 0.54 (0.10) | 9.35 (1.56) | 1.51 |
SQR-SCAD | 1.00 (0.01) | 0.00 (0.00) | 0.26 (0.15) | 4.55 (2.14) | 1.09 |
SQR-MCP | 1.00 (0.00) | 0.00 (0.00) | 0.22 (0.05) | 3.97 (0.85) | 1.11 |
SQR-$\ell_0$ | 1.00 (0.00) | 0.00 (0.00) | 0.29 (0.08) | 5.33 (1.40) | 0.55
SQR-PD | 1.00 (0.00) | 0.00 (0.00) | 0.26 (0.08) | 4.76 (1.41) | 7.15 |
SQR-Lasso | 1.00 (0.00) | 0.06 (0.03) | 0.83 (0.16) | 9.69 (1.66) | 2.34 |
SQR-SCAD | 0.94 (0.09) | 0.00 (0.00) | 0.69 (0.54) | 7.47 (5.18) | 1.99 |
SQR-MCP | 0.95 (0.09) | 0.00 (0.00) | 0.67 (0.53) | 7.42 (4.93) | 1.99 |
SQR-$\ell_0$ | 1.00 (0.00) | 0.00 (0.00) | 0.48 (0.19) | 5.95 (2.13) | 0.46
SQR-PD | 1.00 (0.00) | 0.00 (0.00) | 0.50 (0.29) | 6.47 (2.59) | 13.99 |
SQR-Lasso | 1.00 (0.00) | 0.06 (0.03) | 0.87 (0.18) | 10.33 (1.73) | 2.50 |
SQR-SCAD | 0.93 (0.10) | 0.00 (0.00) | 0.79 (0.61) | 8.45 (5.59) | 2.04 |
SQR-MCP | 0.92 (0.09) | 0.00 (0.00) | 0.90 (0.58) | 9.36 (5.37) | 2.09 |
SQR-$\ell_0$ | 0.99 (0.00) | 0.00 (0.00) | 0.57 (0.24) | 7.53 (2.54) | 0.50
SQR-PD | 0.99 (0.05) | 0.01 (0.01) | 0.64 (0.44) | 7.71 (3.61) | 15.55 |