
A New Crossover Algorithm for LP Inspired by the Spiral Dynamic of PDHG

Tianhao Liu Shanghai Jiao Tong University, Antai College of Economics and Management (tianhao.liu@sjtu.edu.cn).    Haihao Lu MIT, Sloan School of Management (haihao@mit.edu).
Abstract

Motivated by large-scale applications, there is a recent trend of research on using first-order methods for solving LP. Among them, PDLP, which is based on a primal-dual hybrid gradient (PDHG) algorithm, may be the most promising one. In this paper, we present a geometric viewpoint on the behavior of PDHG for LP. We demonstrate that PDHG iterates exhibit a spiral pattern with a closed-form solution when the variable basis remains unchanged. This spiral pattern consists of two orthogonal components: rotation and forward movement, where rotation improves primal and dual feasibility, while forward movement advances the duality gap. We also characterize the different situations in which basis change events occur. Inspired by the spiral behavior of PDHG, we design a new crossover algorithm to obtain a vertex solution from any optimal LP solution. This approach differs from traditional simplex-based crossover methods. Our numerical experiments demonstrate the effectiveness of the proposed algorithm, showcasing its potential as an alternative option for crossover.

1 Introduction

Linear programming (LP), which refers to optimizing a linear function over a polyhedron, is one of the most fundamental classes of optimization problems. It has been extensively studied in both academia and industry and has been applied to real-world problems in transportation, scheduling, economics, resource allocation, etc.

Two classic methods for solving LP are simplex methods and interior point methods (IPMs). The simplex methods, proposed by Dantzig in the late 1940s (Dantzig, 1951), start from a vertex and pivot between vertices to monotonically improve the objective (Dantzig, 1963). IPMs (Karmarkar, 1984), on the other hand, utilize a self-concordant barrier function to ensure that solutions remain inside the polyhedron, approaching an optimal solution by following a central path (Renegar, 1988).

Motivated by applications of large-scale LP, there has been a recent trend towards developing first-order methods (FOMs) for LP, such as PDLP (Applegate et al., 2021; Lu and Yang, 2023a; Lu et al., 2023), SCS (O’Donoghue et al., 2016; O’Donoghue, 2021), and ABIP (Lin et al., 2021; Deng et al., 2024). The key advantage of FOMs is their low iteration cost, primarily involving matrix-vector multiplications, which avoids the need for solving linear equations as required in the simplex methods and IPMs. This advantage makes them particularly well-suited for GPU implementation. Among these, PDLP stands out as the most promising, with its GPU implementation demonstrating numerical performance on par with state-of-the-art LP solvers on standard benchmark sets and demonstrating superior performance on large-scale instances (Lu and Yang, 2023a; Lu et al., 2023). PDLP is based on the primal-dual hybrid gradient (PDHG) method, enhanced by various numerical improvements (Applegate et al., 2021). Variants of PDLP have been implemented in both commercial and open-source optimization solvers, such as COPT (Ge et al., 2024), Xpress (Xpress, 2014), Google OR-Tools (Perron and Furnon, 2024), HiGHS (Huangfu and Hall, 2018), and Nvidia cuOpt (Fender, 2024).

While the behaviors of simplex methods and IPMs are well-studied—simplex methods move over vertices, and IPMs follow the central path—the convergence behaviors of PDLP and other FOMs toward an optimal solution are less well-understood. For example, we plot the 2-dimensional primal trajectories of different LP methods in Figure 1. (For the LP $\min_{\mathbf{x}\in\mathbb{R}_{+}^{2}}\{\mathbf{c}^{\top}\mathbf{x}:\mathbf{A}\mathbf{x}\leq\mathbf{b}\}$, we reformulate it as $\min_{(\mathbf{x},\mathbf{w})\in\mathbb{R}_{+}^{2}\times\mathbb{R}_{+}^{m}}\{\mathbf{c}^{\top}\mathbf{x}:\mathbf{A}\mathbf{x}+\mathbf{w}=\mathbf{b}\}$ and plot the trajectories of $\mathbf{x}$ from different methods solving the reformulated problem.) PDHG seems to progress in a more chaotic and winding manner compared to other classic methods.

Figure 1: Primal trajectories of different LP methods.

The first contribution of this paper is to present a geometric viewpoint on the convergence behavior of the PDHG algorithm when applied to LP problems (Section 2). Similar to simplex methods, we can define basic and non-basic variables for PDHG iterates. We define a phase as a series of consecutive PDHG iterates that share the same set of basic variables. Within each phase, we demonstrate that PDHG behaves like a spiral, consisting of two orthogonal components: rotation and forward movement. The rotation improves primal and dual feasibility, while the forward movement advances the duality gap. We also characterize the different situations in which basis change events occur.

A fundamental distinction among different LP methods is that simplex methods identify an optimal vertex (i.e., an optimal basis), whereas IPMs and FOMs may find an interior point within the optimal solution set. Specifically, IPMs converge to the analytic center of the optimal solution set, while FOMs can reach any optimal solution depending on the initial solution. On the other hand, vertex solutions obtained via simplex methods are generally more precise, sparse, and informative. For example, vertex solutions are particularly useful for LP subroutines in branch-and-bound MIP solvers, as they enhance integrality and also benefit warm starts. Consequently, the process of “crossover,” which involves deriving an optimal vertex from any arbitrary optimal solution, has been extensively studied for IPMs. Most modern LP solvers include a crossover step following IPM solving. The algorithmic framework for contemporary crossover was introduced by Megiddo (1991) and has been further refined by Bixby and Saltzman (1994) and Andersen and Ye (1996). These approaches, which rely on simplex methods to adjust variables to their bounds, are referred to as simplex-based crossover.

Inspired by the spiral behavior of PDHG, our second contribution is the design of a new crossover method to generate a vertex solution from a PDLP solution. This crossover leverages PDLP itself and its spiral axes (referred to as spiral rays in this paper), eliminating the need for any simplex steps. We present a numerical study that demonstrates the effectiveness of our proposed crossover algorithm, which may provide a practical alternative to simplex-based crossover methods.

Organization of the paper.

The paper is organized as follows. Section 1.1 reviews related studies in PDHG and crossover approaches. Section 2 presents the spiral behavior of PDHG for LP. Section 3 proposes a new crossover algorithm inspired by the spiral structure of PDHG without the need for simplex methods. Section 4 presents a numerical study of our crossover approach.

Notations.

We use bold letters for vectors and matrices. Let $\mathbf{x}_{j}$ be the $j$-th component of vector $\mathbf{x}$, and $\mathbf{x}_{\mathcal{J}}$ be the vector formed by components $\mathbf{x}_{j}$ for $j\in\mathcal{J}$. Let $\mathbf{A}_{j}$ be the $j$-th column of matrix $\mathbf{A}$, and $\mathbf{A}_{\mathcal{J}}$ be the submatrix formed by columns $\mathbf{A}_{j}$ for $j\in\mathcal{J}$. Let $\mathbf{A}_{i,j}$ be the entry in the $i$-th row and $j$-th column of matrix $\mathbf{A}$, and $\mathbf{A}_{\mathcal{I},\mathcal{J}}$ be the submatrix formed by entries $\mathbf{A}_{i,j}$ for $i\in\mathcal{I},j\in\mathcal{J}$. We use $\mathbf{x}\geq\mathbf{y}$ to express the element-wise inequality $\mathbf{x}_{i}\geq\mathbf{y}_{i}$ for all $i$. Let $\mathbf{0}$ and $\mathbf{1}$ be a vector or matrix of zeros and ones, respectively, and let $\mathbf{I}$ be the identity matrix. The dimension of a vector or matrix is left unspecified whenever it is clear from the context. Let $\mathbb{R}^{n}$ denote the $n$-dimensional Euclidean space and $\mathbb{R}_{+}^{n}=\{\mathbf{x}\in\mathbb{R}^{n}:\mathbf{x}\geq\mathbf{0}\}$. The vector norm $\|\cdot\|_{\ell}$ is the $\ell$-norm. The $\mathbf{M}$-norm is $\|\mathbf{v}\|_{\mathbf{M}}=\sqrt{\mathbf{v}^{\top}\mathbf{M}\mathbf{v}}$, where $\mathbf{M}$ is a positive definite matrix. The matrix norm $\|\cdot\|_{2}$ is the spectral norm. $\mathbf{x}^{(k)}$ denotes the $k$-th point $\mathbf{x}$, while $\mathbf{P}^{k}$ without the parentheses denotes $\mathbf{P}$ to the power of $k$. $\mathbf{A}^{\dagger}$ is the Moore–Penrose inverse of $\mathbf{A}$. The operator $\operatorname{Proj}_{\bm{\Omega}}(\mathbf{x})$ projects $\mathbf{x}$ onto the set $\bm{\Omega}$ under the 2-norm. The range of an operator $\mathbf{T}$ is denoted $\operatorname{range}(\mathbf{T})$, and the closure of a set $S$ is denoted $\operatorname{cl}(S)$.

1.1 Related Work

PDLP and related.

PDHG was studied in Chambolle and Pock (2011) for image processing applications. Applegate et al. (2021) developed a restarted PDHG algorithm for solving linear programs (LPs), which, together with a few practical enhancements, leads to the PDLP solver. Compared with other popular FOM solvers such as SCS and ABIP, there is no linear system to solve in PDLP, making it a more suitable choice for large-scale LPs. cuPDLP, the GPU implementation of PDLP, demonstrated strong numerical performance on par with commercial LP solvers and has attracted attention from both academia and the solver industry (Lu and Yang, 2023a; Lu et al., 2023). Recently, additional numerical enhancements have been proposed to speed up the algorithm further. Xiong and Freund (2024) pointed out that central-path-based scaling can markedly improve the convergence rate of PDHG. Lu and Yang (2024a) embedded Halpern iteration (Halpern, 1967) in PDLP, achieving accelerated theoretical guarantees and improved computational results.

The practical success of PDLP motivates a stream of theoretical analysis. Lu and Yang (2022) showed that PDHG has a linear convergence rate when solving LP. Applegate et al. (2023) revealed that an accelerated linear convergence rate can be achieved with restarts, and such an accelerated linear rate matches the lower bound. Applegate et al. (2024) illustrated that for infeasible/unbounded LP, PDHG iterates diverge toward the direction of the infeasibility/unboundedness certificate; thus, one can detect the infeasibility/unboundedness of an LP using PDHG without additional effort. Lu and Yang (2024b) discovered that PDHG has a two-stage behavior when solving LP: in the first stage, PDHG uses finitely many iterations to identify the optimal active variables; in the second stage, PDHG solves a homogeneous linear inequality system with a significantly improved linear rate. This explains the slow behavior of PDHG when the LP is near-degenerate. Xiong and Freund (2024) introduced a level-set geometry that characterizes the difficulty of PDHG for LP, which shows that PDHG converges faster for LPs with a regular, non-flat level set around the optimal solution set.

For more general fixed point problems, trajectories of various FOMs were studied in Poon and Liang (2019, 2020), which showed that eventually (after a finite number of iterations), these algorithms follow a straight line or a spiral structure if the problem is non-degenerate. Unfortunately, this non-degeneracy assumption is rarely satisfied for real-world LP instances.

Crossover.

Crossover, or purification, refers to the process that moves a feasible or optimal solution to a vertex solution, also known as an extreme point, basic feasible solution, or corner solution. Purification methods (Charnes and Kortanek, 1963; Charnes et al., 1965) were proposed even earlier than non-vertex LP methods such as the ellipsoid method (Khachiyan, 1979) and IPMs (Karmarkar, 1984).

Given a basis $B$ and a feasible solution $\mathbf{x}$, the purification algorithm (Charnes and Kortanek, 1963; Charnes et al., 1965) generates a null-space step $\bm{\alpha}$, the same direction used in simplex methods, and then tries to move $\mathbf{x}$ along it without worsening the objective, updating $B$ when a feasible direction exists; in finitely many steps it returns a feasible vertex with an objective value no worse than that of $\mathbf{x}$. Given a basis $B$ and a strictly interior feasible point, Kortanek and Jishan (1988) improved Charnes's method by drawing the vertex back to the interior if it is not optimal; an optimal extreme point is reached in a finite number of steps. For feasible points not entirely in the interior, Kortanek and Jishan (1988) also provided another purification method that accounts for dual feasibility, and combining this second method with a proper cleanup returns an optimal basis. Given a dual optimal solution $\bm{\pi}$, Amor et al. (2006) constructed an auxiliary dual LP by adding box constraints around $\bm{\pi}$ to the original dual problem; dualizing back yields an auxiliary primal LP, which Amor claimed can be solved efficiently by simplex methods, after which an optimal vertex is easily recovered.

Starting from an optimal primal-dual solution pair, Megiddo (1991) designed a strongly polynomial crossover approach to an optimal vertex. Megiddo also pointed out that given only the optimal primal or dual solution, solving a general LP is theoretically as hard as solving it from scratch. Megiddo's crossover method lays the algorithmic and theoretical foundation for crossover algorithms in modern LP solvers. Our randomized crossover modifies Megiddo's crossover framework, which will be introduced in Section 3.

With the boom of IPMs and MIP applications, crossover has gradually come to play a significant role in LP solvers, where the key task is refining IPM solutions to optimal vertices. Mehrotra (1991) added a controlled random cost perturbation to eliminate dual degeneracy at the sacrifice of only a small loss of optimality. For nondegenerate LPs in particular, an indicator (Tapia and Zhang, 1991) can be calculated from the interior solution to identify the basis. Bixby and Saltzman (1994) implemented a modified simplex method that accepts an interior feasible point as a super-basic solution; then, similar to a normal simplex method, their algorithm uses the primal (or dual) ratio test to push primal (or dual) variables to a bound where possible, or to pivot them in (or out). Andersen and Ye (1996) used the strict complementarity of IPM solutions to define a perturbed LP problem; their method obtains a basis and then recovers feasibility by pivoting and taking a weighted average with the given IPM solution. Ge et al. (2021) also proposed a perturbation crossover method that combines variable fixing and random cost perturbation, proving that as long as the perturbation is sufficiently small, their crossover incurs no objective loss.

Our crossover modifies Megiddo’s framework and adopts random perturbation, together with PDLP and ordinary least squares (OLS) solver as subroutines. More details will be discussed in Section 3.

2 Spiral Behavior of PDHG for LP

In this section, we present a geometric viewpoint on the spiral behavior of PDHG on LPs. For ease of presentation, we consider PDHG based on the standard form of LP, while most of the results can be naturally extended to other formulations. More formally, we consider

$$\begin{array}{rl}\min & \mathbf{c}^{\top}\mathbf{x}\\ \text{s.t.} & \mathbf{A}\mathbf{x}=\mathbf{b}\\ & \mathbf{x}\geq\mathbf{0},\end{array}\qquad(1)$$

where $\mathbf{A}\in\mathbb{R}^{m\times n}$, $\mathbf{c}\in\mathbb{R}^{n}$, $\mathbf{b}\in\mathbb{R}^{m}$. The dual problem associated with (1) is

$$\begin{array}{rl}\max & \mathbf{b}^{\top}\mathbf{y}\\ \text{s.t.} & \mathbf{A}^{\top}\mathbf{y}\leq\mathbf{c}.\end{array}\qquad(2)$$

2.1 Preliminaries

2.1.1 PDHG for LP

PDHG solves the following saddle-point formulation of (1) and (2)

$$\min_{\mathbf{x}\geq\mathbf{0}}\max_{\mathbf{y}}\ \mathcal{L}(\mathbf{x},\mathbf{y})=\mathbf{c}^{\top}\mathbf{x}-\mathbf{y}^{\top}\mathbf{A}\mathbf{x}+\mathbf{b}^{\top}\mathbf{y},$$

and the detail is presented in Algorithm 1. Without loss of generality, we assume the primal and the dual step sizes are the same throughout the paper (otherwise, we can rescale the primal or dual variables correspondingly, see Applegate et al. (2023)).

Data: The standard LP problem with $\mathbf{c},\mathbf{A},\mathbf{b}$.
Input: Initial point $(\mathbf{x}^{(0)},\mathbf{y}^{(0)})\in\mathbb{R}^{n}_{+}\times\mathbb{R}^{m}$, step size $\eta\in(0,\frac{1}{\|\mathbf{A}\|_{2}})$, tolerance $\epsilon\in(0,+\infty)$.
Output: The $\epsilon$-optimal solution $(\mathbf{x},\mathbf{y})$.
1  $k\leftarrow 0$;
2  while not converged within $\epsilon$ accuracy do
3      $\mathbf{x}^{(k+1)}=\operatorname{Proj}_{\mathbb{R}^{n}_{+}}\left(\mathbf{x}^{(k)}-\eta(\mathbf{c}-\mathbf{A}^{\top}\mathbf{y}^{(k)})\right)$;
4      $\mathbf{y}^{(k+1)}=\mathbf{y}^{(k)}+\eta\left(\mathbf{b}-\mathbf{A}(2\mathbf{x}^{(k+1)}-\mathbf{x}^{(k)})\right)$;
5      $k\leftarrow k+1$;
6  end while
Algorithm 1: Primal-Dual Hybrid Gradient for LP

PDHG can be viewed as alternating between a projected primal gradient descent step and a dual gradient ascent step with extrapolation; for primal-dual feasible LPs, it is guaranteed to converge to an optimal solution (see, for example, Lu and Yang (2022, 2023b) for intuitions on this extrapolation step). Overall, updates in PDHG are simple, with matrix-vector multiplication as the only bottleneck, while converging to an optimal primal-dual pair at a linear rate.
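The two updates above are compact enough to sketch directly. The following minimal NumPy implementation of Algorithm 1 is our own (the function name and the simple iterate-difference stopping rule are ours, not from the paper), shown on a small one-constraint LP (the same instance as Example 6 below):

```python
import numpy as np

def pdhg_lp(c, A, b, x0, y0, eta, tol=1e-8, max_iter=100000):
    """Minimal sketch of Algorithm 1 for min c'x s.t. Ax = b, x >= 0.
    The step size should satisfy 0 < eta < 1 / ||A||_2."""
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(max_iter):
        x_new = np.maximum(x - eta * (c - A.T @ y), 0.0)   # projected primal step
        y_new = y + eta * (b - A @ (2 * x_new - x))        # extrapolated dual step
        done = max(np.linalg.norm(x_new - x), np.linalg.norm(y_new - y)) < tol
        x, y = x_new, y_new
        if done:
            break
    return x, y

# min 2x1 + 3x2  s.t.  x1 + 2x2 = 1, x >= 0; optimum is x* = (0, 0.5), y* = 1.5.
x, y = pdhg_lp(np.array([2., 3.]), np.array([[1., 2.]]), np.array([1.]),
               np.array([1., 2.]), np.array([2.]), eta=0.05)
print(x, y)
```

The only per-iteration work is the two matrix-vector products with $\mathbf{A}$ and $\mathbf{A}^{\top}$, which is what makes the method GPU-friendly.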

PDHG, together with other FOMs, has been observed to spiral in practice. For example, Applegate et al. (2023) plotted a spiral trajectory of PDHG for a 2-dimensional bilinear saddle-point problem. Deng et al. (2024) visualized the first 3 dimensions of ABIP iterations when solving an LP from NETLIB (Gay, 1985), which also exhibit oscillation. These observations motivate restarting FOMs from the average of previous iterates, which quickly approaches the spiral center. From the theoretical aspect, restarting accelerates PDHG to a faster linear convergence rate (Applegate et al., 2023).

2.1.2 Infimal Displacement Vector

For infeasible LPs, PDHG iterates do not converge but rather diverge along a vector called the infimal displacement vector (IDV) of the PDHG operator (Applegate et al., 2024). Formally, we denote by $\mathbf{T}$ the PDHG operator, i.e.,

$$\mathbf{T}(\mathbf{x}^{(k)},\mathbf{y}^{(k)})=\begin{bmatrix}\operatorname{Proj}_{\mathbb{R}^{n}_{+}}\left(\mathbf{x}^{(k)}-\eta(\mathbf{c}-\mathbf{A}^{\top}\mathbf{y}^{(k)})\right)\\ \mathbf{y}^{(k)}+\eta\left(\mathbf{b}-\mathbf{A}\left(2\operatorname{Proj}_{\mathbb{R}^{n}_{+}}\left(\mathbf{x}^{(k)}-\eta(\mathbf{c}-\mathbf{A}^{\top}\mathbf{y}^{(k)})\right)-\mathbf{x}^{(k)}\right)\right)\end{bmatrix}.$$

If $\eta\in(0,\frac{1}{\|\mathbf{A}\|_{2}})$, then $\mathbf{T}$ is firmly non-expansive with respect to the norm $\|\cdot\|_{\mathbf{M}}$, where the positive definite matrix $\mathbf{M}=\begin{bmatrix}\mathbf{I}_{n}&\eta\mathbf{A}^{\top}\\ \eta\mathbf{A}&\mathbf{I}_{m}\end{bmatrix}$. Its IDV is defined as the minimum perturbation that must be subtracted from $\mathbf{T}$ to ensure it has a fixed point:

Definition 1 (Infimal displacement vector (Pazy, 1971)).

The unique infimal displacement vector for PDHG with step size $\eta\in(0,\frac{1}{\|\mathbf{A}\|_{2}})$ is defined as

$$\begin{array}{rl}\mathbf{v}_{\mathbf{T}}=\operatorname{argmin} & \frac{1}{2}\|\mathbf{v}\|^{2}_{\mathbf{M}}\\ \text{s.t.} & \mathbf{v}\in\operatorname{cl}(\operatorname{range}(\mathbf{T}-\mathbf{I}))\end{array}$$

where 𝐓\mathbf{T} is the operator corresponding to PDHG.

Proposition 2 ((Pazy, 1971; Baillon, 1978)).

Let $\mathbf{T}$ be a non-expansive operator and let $\mathbf{z}$ be the initial point. Then the infimal displacement vector $\mathbf{v}$ of the operator $\mathbf{T}$ satisfies

$$\lim_{k\to\infty}\frac{\mathbf{T}^{k}\mathbf{z}}{k}=\mathbf{v}.$$

If further $\mathbf{T}$ is firmly non-expansive, then

$$\lim_{k\to\infty}\left(\mathbf{T}^{k+1}\mathbf{z}-\mathbf{T}^{k}\mathbf{z}\right)=\mathbf{v}.$$

Proposition 2 implies that PDHG will diverge along $\mathbf{v}\neq\mathbf{0}$ if the LP is primal or dual infeasible, and will converge ($\mathbf{v}=\mathbf{0}$) if the LP is primal-dual feasible.
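As an illustration of Proposition 2, the sketch below (our own toy instance, not from the paper) runs PDHG on a primal-infeasible LP and watches the successive differences $\mathbf{T}^{k+1}\mathbf{z}-\mathbf{T}^{k}\mathbf{z}$ settle at the IDV:

```python
import numpy as np

# A primal-infeasible toy LP: min 0  s.t.  x = -1, x >= 0 (our own example).
c, A, b = np.array([0.]), np.array([[1.]]), np.array([-1.])
eta = 0.1                       # satisfies 0 < eta < 1 / ||A||_2 = 1
x, y = np.array([1.]), np.array([1.])
for _ in range(500):
    x_new = np.maximum(x - eta * (c - A.T @ y), 0.0)
    y_new = y + eta * (b - A @ (2 * x_new - x))
    dx, dy = x_new - x, y_new - y          # successive difference T^{k+1}z - T^k z
    x, y = x_new, y_new
# Numerically the differences settle at (0, eta * b) = (0, -0.1): the primal
# part vanishes while the dual part diverges at a constant rate per iteration.
print(dx, dy)
```

The primal iterate gets stuck at the bound while the dual iterate diverges linearly, which is exactly the certificate-of-infeasibility behavior described above.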

2.2 PDHG Spiral Behavior

In this part, we formally present the PDHG spiral behavior when solving LPs. At a high level, PDHG identifies different bases from phase to phase. During each phase, the iterates follow a spiral ray, where the ray direction and the spiral center are derived later.

First, we define the basis of PDHG iterates, which characterizes the active primal variables. This definition is very similar to the basis defined for the simplex method. The fundamental difference is that the corresponding columns of the basis in the constraint matrix may not form a nonsingular square matrix.

Definition 3 (PDHG basis).

For a PDHG iterate $(\mathbf{x},\mathbf{y})$, the basis $B$ and the non-basis $N$ form a partition of the primal variables, defined as

$$\begin{aligned}B&=\{i:\mathbf{x}_{i}>0\}\cup\{i:\mathbf{x}_{i}=0,\ \mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}\leq 0\}\\ N&=\{i:\mathbf{x}_{i}=0,\ \mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}>0\}.\end{aligned}$$

We further call $\mathbf{x}_{i}$ a basic variable if $i\in B$ and a non-basic variable if $i\in N$.
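Definition 3 is mechanical to evaluate at any iterate. A small helper (our own naming), checked at the optimum of the two-variable LP used in Example 6 below:

```python
import numpy as np

def pdhg_basis(x, y, c, A):
    """Partition primal indices into the PDHG basis B and non-basis N
    (Definition 3), using 0-based indices; assumes x >= 0."""
    reduced = c - A.T @ y                        # reduced costs c_i - A_i^T y
    B = [i for i in range(len(x)) if x[i] > 0 or reduced[i] <= 0]
    N = [i for i in range(len(x)) if x[i] == 0 and reduced[i] > 0]
    return B, N

# At the optimum x* = (0, 0.5), y* = 1.5 of min 2x1 + 3x2 s.t. x1 + 2x2 = 1:
B, N = pdhg_basis(np.array([0., 0.5]), np.array([1.5]),
                  np.array([2., 3.]), np.array([[1., 2.]]))
print(B, N)   # [1] [0]  (in the paper's 1-based notation, B = {2}, N = {1})
```

Unlike a simplex basis, nothing forces the columns $\mathbf{A}_{B}$ to form a nonsingular square matrix.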

Then according to changes in the PDHG basis, we can divide the PDHG solving process into multiple phases.

Definition 4 (Phase and basis change event).

We say a sequence of consecutive PDHG iterates forms a phase if they share the same PDHG basis. A basis change event occurs when two sequential PDHG iterates have different PDHG bases.

Within one phase, the spiral ray—including the spiral center and the ray direction, which are key to describing the PDHG spiral behavior—is defined as follows.

Definition 5 (Spiral ray).

A spiral behavior along the spiral ray is a trajectory $\{\mathbf{z}^{(k)}\}$ of the form

$$\mathbf{z}^{(k)}-\mathbf{z}_{\mathbf{v}}=\mathbf{P}^{k}(\mathbf{z}^{(0)}-\mathbf{z}_{\mathbf{v}})+k\mathbf{v},$$

where the diagonalizable matrix $\mathbf{P}$ has eigenvalues consisting of (possibly) the eigenvalue one and non-real complex eigenvalues with modulus strictly less than one, $\mathbf{z}^{(0)}-\mathbf{z}_{\mathbf{v}}$ is orthogonal to all eigenvectors associated with the eigenvalue one, and $\mathbf{v}^{\top}\mathbf{P}^{k}(\mathbf{z}^{(0)}-\mathbf{z}_{\mathbf{v}})=0$ for all $k$. We refer to the vector $\mathbf{v}$ as the ray direction, the vector $\mathbf{z}_{\mathbf{v}}$ as the spiral center, and together $\mathbf{z}_{\mathbf{v}}+\theta\mathbf{v},\ \theta\geq 0$, as the spiral ray.

The term $\mathbf{P}^{k}(\mathbf{z}^{(0)}-\mathbf{z}_{\mathbf{v}})$ captures the rotation, with the radius decreasing at a linear rate along the directions corresponding to the complex eigenvalues, while the term $k\mathbf{v}$ captures the forward movement. The orthogonality between rotation and forward movement allows us to analyze their effects independently.

To gain intuition about the above definition of PDHG basis, phase, basis change event, and spiral ray, we illustrate them with the following example:

Example 6 (PDHG spiral and basis change events).

We apply PDHG to solve a toy LP with two primal variables and one constraint.

$$\begin{array}{rl}\min & 2\mathbf{x}_{1}+3\mathbf{x}_{2}\\ \text{s.t.} & \mathbf{x}_{1}+2\mathbf{x}_{2}=1\\ & \mathbf{x}_{1},\mathbf{x}_{2}\geq 0.\end{array}\qquad(3)$$

PDHG uses the step size $\eta=0.05$ and relative tolerance $\epsilon=10^{-8}$, and starts from $\mathbf{x}^{(0)}=(1,2)$, $\mathbf{y}^{(0)}=2$. It takes 3235 steps to reach the unique optimal solution $\mathbf{x}^{*}=(0,0.5)$, $\mathbf{y}^{*}=1.5$.

(a) Four phases of PDHG on the toy LP (3).
(b) The trajectory of PDHG for $\min_{\mathbf{x}}\ 2\mathbf{x}_{1}+3\mathbf{x}_{2}\ \text{s.t.}\ \mathbf{x}_{1}+2\mathbf{x}_{2}=1$.
Figure 2: The spiral behavior of PDHG.

The whole trajectory of the primal-dual solution $(\mathbf{x},\mathbf{y})$ is shown in Figure 2(a), which can be divided into four phases.

  • Phase 1 (in purple)

    • The first 16 steps form a huge arc while moving forward. Then PDHG hits the plane $\mathbf{x}_{1}=0$ and a basis change event happens.

    • During Phase 1, we have $B=\{1,2\}$ and $N=\emptyset$.

  • Phase 2 (in green)

    • During steps 17 to 21, PDHG arcs around within the plane $\mathbf{x}_{1}=0$ until it reaches another plane, $\mathbf{x}_{2}=0$.

    • During Phase 2, we have $B=\{2\}$ and $N=\{1\}$.

  • Phase 3 (in khaki)

    • Pressed against both $\mathbf{x}_{1}=0$ and $\mathbf{x}_{2}=0$, the trajectory degenerates to a ray along the $\mathbf{y}_{1}$ axis.

    • During Phase 3, we have $B=\emptyset$ and $N=\{1,2\}$.

  • Phase 4 (in blue)

    • Finally, $\mathbf{x}_{2}$ leaves its bound and the optimal PDHG basis is identified. There are no more basis change events, and PDHG spirals towards the optimal solution.

    • During Phase 4, we have $B=\{2\}$ and $N=\{1\}$.

Notice that PDHG has already found the optimal PDHG basis in Phase 2, but changes the basis two more times due to the obstruction from the lower bound of $\mathbf{x}_{2}$. In fact, Phase 2 and Phase 4 share the same spiral center, and the directions of their spiral rays are both zero vectors.

Another interesting fact is that no basis change event happens if we remove the bounds $\mathbf{x}_{1},\mathbf{x}_{2}\geq 0$. Phase 1 would then continue indefinitely and be fully observed as a combination of rotation and forward movement, as displayed in Figure 2(b). Moreover, the rotation converges rapidly; thus, PDHG diverges almost straight along the IDV, which is an instance of the spiral ray.
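The phase structure above can be reproduced by running Algorithm 1 on the toy LP (3) and recording the PDHG basis of Definition 3 at each iterate. A sketch with 0-based variable indices (so the paper's $B=\{2\}$ appears as {1}):

```python
import numpy as np

# Toy LP (3) with the same step size and starting point as Example 6.
c, A, b = np.array([2., 3.]), np.array([[1., 2.]]), np.array([1.])
eta = 0.05
x, y = np.array([1., 2.]), np.array([2.])

phases = []   # distinct consecutive PDHG bases along the trajectory
for _ in range(5000):
    basis = frozenset(i for i in range(2)
                      if x[i] > 0 or c[i] - A[:, i] @ y <= 0)
    if not phases or phases[-1] != basis:
        phases.append(basis)
    x_new = np.maximum(x - eta * (c - A.T @ y), 0.0)
    y = y + eta * (b - A @ (2 * x_new - x))
    x = x_new
print(phases)
```

Per the description in Example 6, the recorded sequence should start from the full basis, pass through the empty basis of Phase 3, and end at the optimal basis (the paper's $B=\{2\}$).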

Note that the basis change events split the PDHG trajectory into multiple similar spiral phases. Our next goal is to analyze the spiral behavior within each phase, when the PDHG basis remains unchanged.

2.2.1 PDHG Spiral Behavior within One Phase

Within one phase, the basis of the PDHG iterates does not change, and it turns out the PDHG iterates follow a spiral ray, as observed in the example stated in the previous section. This can be formalized in the next theorem:

Theorem 7 (The spiral behavior of PDHG).

Within one phase given the PDHG basis $(B,N)$, PDHG iterates as

$$\begin{bmatrix}\mathbf{x}_{B}^{(k)}-(\mathbf{x}_{\mathbf{v}})_{B}\\ \mathbf{y}^{(k)}-\mathbf{y}_{\mathbf{v}}\end{bmatrix}=\mathbf{P}_{B}^{k}\begin{bmatrix}\mathbf{x}_{B}^{(0)}-(\mathbf{x}_{\mathbf{v}})_{B}\\ \mathbf{y}^{(0)}-\mathbf{y}_{\mathbf{v}}\end{bmatrix}+k\begin{bmatrix}(\mathbf{v}_{\mathbf{x}})_{B}\\ \mathbf{v}_{\mathbf{y}}\end{bmatrix},\quad \mathbf{x}_{N}^{(k)}=\mathbf{0},\qquad(4)$$

where $\mathbf{z}^{(0)}=(\mathbf{x}^{(0)},\mathbf{y}^{(0)})$ is the initial point of the phase, the matrix

$$\mathbf{P}_{B}=\begin{bmatrix}\mathbf{I}_{|B|}&\eta\mathbf{A}_{B}^{\top}\\ -\eta\mathbf{A}_{B}&\mathbf{I}_{m}-2\eta^{2}\mathbf{A}_{B}\mathbf{A}_{B}^{\top}\end{bmatrix},\qquad(5)$$

the spiral center $\mathbf{z}_{\mathbf{v}}=(\mathbf{x}_{\mathbf{v}},\mathbf{y}_{\mathbf{v}})$ is given by

$$\begin{aligned}(\mathbf{x}_{\mathbf{v}})_{B}&=(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}\mathbf{A}_{B}^{\top}\mathbf{b}+\operatorname{Proj}_{\mathbf{A}_{B}\mathbf{x}=\mathbf{0}}(\mathbf{x}_{B}^{(0)}),\quad(\mathbf{x}_{\mathbf{v}})_{N}=\mathbf{0}\\ \mathbf{y}_{\mathbf{v}}&=(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\mathbf{c}_{B}+\operatorname{Proj}_{\mathbf{A}_{B}^{\top}\mathbf{y}=\mathbf{0}}(\mathbf{y}^{(0)}),\end{aligned}\qquad(6)$$

and the ray direction $\mathbf{v}=(\mathbf{v}_{\mathbf{x}},\mathbf{v}_{\mathbf{y}})$ is given by

$$\begin{aligned}(\mathbf{v}_{\mathbf{x}})_{B}&=-\eta\left[\mathbf{c}_{B}-\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\mathbf{c}_{B}\right],\quad(\mathbf{v}_{\mathbf{x}})_{N}=\mathbf{0}\\ \mathbf{v}_{\mathbf{y}}&=\eta\left[\mathbf{b}-\mathbf{A}_{B}(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}\mathbf{A}_{B}^{\top}\mathbf{b}\right].\end{aligned}\qquad(7)$$

Moreover, the rotation part converges to $\mathbf{0}$, i.e.,

$$\lim_{k\to\infty}\mathbf{P}_{B}^{k}\begin{bmatrix}\mathbf{x}_{B}^{(0)}-(\mathbf{x}_{\mathbf{v}})_{B}\\ \mathbf{y}^{(0)}-\mathbf{y}_{\mathbf{v}}\end{bmatrix}=\mathbf{0},\qquad(8)$$

and the forward movement is orthogonal to the rotation, i.e.,

$$\mathbf{v}_{B}^{\top}\mathbf{P}_{B}^{k}\begin{bmatrix}\mathbf{x}_{B}^{(0)}-(\mathbf{x}_{\mathbf{v}})_{B}\\ \mathbf{y}^{(0)}-\mathbf{y}_{\mathbf{v}}\end{bmatrix}=0,\quad\forall k,\qquad(9)$$

where $\mathbf{v}_{B}=((\mathbf{v}_{\mathbf{x}})_{B},\mathbf{v}_{\mathbf{y}})$.

The proof is given in Appendix A.

Equation (4) presents a closed-form solution of the iterates within one phase. Intuitively, PDHG spirals by rotating around $\mathbf{z}_{\mathbf{v}}$ and moving forward along $\mathbf{v}$. The rotation and forward movement are orthogonal to each other. No matter how PDHG rotates in the hyperplane $\mathbf{v}^{\top}\mathbf{z}=0$, it firmly steps one unit of $\mathbf{v}$ along the spiral ray at each iteration. The ray direction is unique for one phase regardless of the initial solution $\mathbf{z}^{(0)}$, but the spiral center may depend on the initial solution (i.e., the projection term in (6)).

Furthermore, regarding the influence of the step size $\eta$: in general, a larger step size not only helps the rotation converge faster by reducing the moduli of the eigenvalues of $\mathbf{P}_{B}$, but also accelerates the forward movement along the ray direction.
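One can check from (5)-(7) that $\mathbf{P}_{B}\mathbf{v}_{B}=\mathbf{v}_{B}$, so the closed form (4) is equivalent to the one-step recursion $\mathbf{z}^{(k+1)}-\mathbf{z}_{\mathbf{v}}=\mathbf{P}_{B}(\mathbf{z}^{(k)}-\mathbf{z}_{\mathbf{v}})+\mathbf{v}$ within a phase. The sketch below (our own code, with projections onto null spaces computed via pseudoinverses) verifies this numerically on the toy LP (3) in a phase where both variables are basic:

```python
import numpy as np

# Toy LP (3): min 2x1 + 3x2  s.t.  x1 + 2x2 = 1, with basis B = {1, 2}.
c, A, b, eta = np.array([2., 3.]), np.array([[1., 2.]]), np.array([1.]), 0.05
AB = A  # all primal variables are basic in this phase
x0, y0 = np.array([1., 2.]), np.array([2.])

# Spiral center (6) and ray direction (7).
P_null_x = np.eye(2) - np.linalg.pinv(AB) @ AB     # projector onto null(A_B)
P_null_y = np.eye(1) - AB @ np.linalg.pinv(AB)     # projector onto null(A_B^T)
x_v = np.linalg.pinv(AB.T @ AB) @ AB.T @ b + P_null_x @ x0
y_v = np.linalg.pinv(AB @ AB.T) @ AB @ c + P_null_y @ y0
v_x = -eta * (c - AB.T @ np.linalg.pinv(AB @ AB.T) @ AB @ c)
v_y = eta * (b - AB @ np.linalg.pinv(AB.T @ AB) @ AB.T @ b)

# Matrix P_B from (5).
P_B = np.block([[np.eye(2), eta * AB.T],
                [-eta * AB, np.eye(1) - 2 * eta**2 * AB @ AB.T]])

# One raw PDHG step (the projection is inactive: x stays positive here).
x1 = x0 - eta * (c - AB.T @ y0)
y1 = y0 + eta * (b - AB @ (2 * x1 - x0))

# Closed-form prediction of the same step: z1 = z_v + P_B (z0 - z_v) + v.
d0 = np.concatenate([x0 - x_v, y0 - y_v])
z1_pred = np.concatenate([x_v, y_v]) + P_B @ d0 + np.concatenate([v_x, v_y])
print(z1_pred, np.concatenate([x1, y1]))
```

The two printed vectors coincide, matching the per-step reading of Theorem 7.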

2.2.2 Understanding How the Spiral Behavior Helps the Convergence

Theorem 7 orthogonally decomposes the spiral behavior of PDHG into forward movement along the ray direction $\mathbf{v}$ and rotation within the hyperplane $\mathbf{v}^{\top}\mathbf{z}=0$. Together, these results give a fairly complete description of what PDHG does in one phase. Here we further discuss how the spiral trajectory leads PDHG to the optimum.

The optimality conditions of an LP comprise primal feasibility, dual feasibility, and a zero duality gap. Considering how the PDHG spiral affects all three aspects, we have the following proposition:

Proposition 8.

Within one phase given the PDHG basis $(B,N)$, the rotation improves the primal and dual feasibility by approaching the spiral center, which attains the least-squares primal error $\|\mathbf{A}_{B}\mathbf{x}_{B}-\mathbf{b}\|_{2}$ and dual error $\|\mathbf{A}_{B}^{\top}\mathbf{y}-\mathbf{c}_{B}\|_{2}$. The forward movement improves the duality gap at the spiral center in the sense that the duality gap monotonically decays along the spiral ray, i.e., $\mathbf{c}^{\top}\mathbf{v}_{\mathbf{x}}-\mathbf{b}^{\top}\mathbf{v}_{\mathbf{y}}\leq 0$. Furthermore, if the forward movement is nonzero (i.e., $\mathbf{v}\neq\mathbf{0}$), the duality gap strictly decays when moving along the direction $\mathbf{v}$, i.e., $\mathbf{c}^{\top}\mathbf{v}_{\mathbf{x}}-\mathbf{b}^{\top}\mathbf{v}_{\mathbf{y}}<0$.

The proof is provided in Appendix B.

The primal feasibility 𝐀𝐱=𝐛,𝐱𝟎\mathbf{A}\mathbf{x}=\mathbf{b},\mathbf{x}\geq\mathbf{0} is naturally improved via rotating to minimize the difference between 𝐀𝐱\mathbf{A}\mathbf{x} and 𝐛\mathbf{b}. In contrast, when optimizing the dual feasibility, PDHG not only aims to satisfy 𝐀𝐲𝐜\mathbf{A}^{\top}\mathbf{y}\leq\mathbf{c}, but also prefers to activate 𝐀B𝐲=𝐜B\mathbf{A}_{B}^{\top}\mathbf{y}=\mathbf{c}_{B}. This can be explained from the perspective of complementary slackness. In a primal-dual optimal solution (𝐱,𝐲)(\mathbf{x},\mathbf{y}) to (1), the complementary slackness condition requires 𝐱i(𝐜i𝐀i𝐲)=0,i\mathbf{x}_{i}(\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y})=0,\forall i. Therefore, for those primal variables in BB, it is better to activate the corresponding dual constraints for optimality.

As for the effect of rotation on the duality gap, the primal and dual objectives undergo damped oscillation around their values at the spiral center, so the total duality gap exhibits an oscillating improvement.

2.2.3 Basis Change Event

Basis change events happen between two adjacent phases, when the spiral hits new bounds or escapes from the current bounds; i.e., for the phase with PDHG basis (B,N)(B,N), the next step (𝐱+,𝐲+)(\mathbf{x}^{+},\mathbf{y}^{+}) satisfies

𝐱i+=0,𝐜i𝐀i𝐲+>0,iB,\mathbf{x}_{i}^{+}=0,\ \mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}^{+}>0,\ \exists i\in B, (10)

or

𝐜i𝐀i𝐲+0,iN.\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}^{+}\leq 0,\ \exists i\in N. (11)

We call the condition (10) the “leaving basis” condition, since the basic coordinate iBi\in B satisfying (10) leaves the basis in the next iteration. Similarly, we call condition (11) the “entering basis” condition, since the non-basic coordinate iNi\in N satisfying (11) enters the basis in the next iteration.

To be more specific, the leaving event corresponds to the activation of a bound in 𝐱B𝟎\mathbf{x}_{B}\geq\mathbf{0}, and the entering event corresponds to the violation of a strict inequality in 𝐀N𝐲<𝐜N\mathbf{A}_{N}^{\top}\mathbf{y}<\mathbf{c}_{N}. From the perspective of the spiral, as shown in Figure 3, three typical reasons cause a basic variable to leave the basis:

  • First, the spiral center is located outside the bounds (and forward movement is ignored for simplicity). Since rotation converges to the center, PDHG will meet bounds before converging, yielding a basis change event.

  • Second, although the spiral center is located inside the bounds (and forward movement is ignored for simplicity), the spiral radius, i.e., the distance between the current point and the spiral center, is too large.

  • Third, the spiral ray direction has a strictly negative component, so the forward movement eventually hits the bound.

For the entering basis event, when 𝐲+\mathbf{y}^{+} violates the dual constraints 𝐀N𝐲<𝐜N\mathbf{A}_{N}^{\top}\mathbf{y}<\mathbf{c}_{N}, the corresponding components of 𝐱N+\mathbf{x}^{+}_{N} tend to increase, making the projection operator inactive for those components in the next step.

It is worth highlighting that, unlike the optimality improvement within one phase, the basis change event is usually triggered by the combined action of rotation and forward movement rather than by either alone. For example, though the spiral center may lie outside the bounds as shown in Figure 3(a), the spiral ray is likely to intersect the bounds, leading to a basis change event similar to the second case in Figure 3(b).

[Figure 3: Three typical cases for the basis change in the first quadrant. (a) The spiral center is outside the bounds. (b) The spiral center is inside the bounds, but the radius is too large. (c) The ray hits the bounds.]
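The first case can be simulated with a toy damped rotation whose center has a negative component (all constants below are made up, and the forward movement is ignored as in the discussion above):

```python
import math

# Toy simulation of a "leaving basis" event, case (a): a damped 2-D
# rotation toward a spiral center outside the bound x2 >= 0.  Because the
# rotation contracts toward the center, the iterate must cross the bound
# before converging, which triggers the basis change.
center = (1.0, -0.5)             # assumed center with a negative component
rho, theta = 0.85, 0.7           # assumed damping and rotation angle
c, s = math.cos(theta), math.sin(theta)

x = (3.0, 2.0)                   # made-up starting point in the interior
hit_at = None
for k in range(200):
    dx, dy = x[0] - center[0], x[1] - center[1]
    x = (center[0] + rho * (c * dx - s * dy),
         center[1] + rho * (s * dx + c * dy))
    if x[1] <= 0.0:              # the bound x2 >= 0 is activated
        hit_at = k
        break

print(hit_at)                    # the bound is reached after finitely many steps
```

Since the distance to the center shrinks by the factor rho each step, the iterate is eventually within 0.5 of the center and the second coordinate must become nonpositive, so the event always fires in this setting.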

3 Crossover inspired by PDHG

Without loss of generality, we assume 𝐀\mathbf{A} is full row rank in this section. For a primal-dual feasible LP, PDLP returns only a primal-dual optimal solution pair with PDHG basis (B,N)(B,N) satisfying (12)

𝐀B𝐱B=𝐛,𝐱B𝟎,𝐱N=𝟎\displaystyle\mathbf{A}_{B}\mathbf{x}_{B}=\mathbf{b},\ \mathbf{x}_{B}\geq\mathbf{0},\ \mathbf{x}_{N}=\mathbf{0} (12)
𝐜B𝐀B𝐲=𝟎,𝐜N𝐀N𝐲>𝟎,\displaystyle\mathbf{c}_{B}-\mathbf{A}_{B}^{\top}\mathbf{y}=\mathbf{0},\ \mathbf{c}_{N}-\mathbf{A}_{N}^{\top}\mathbf{y}>\mathbf{0},

rather than the optimal vertex with basis (B,N)(B,N) satisfying (13)

𝐱B\displaystyle\mathbf{x}_{B} =(𝐀B)1𝐛𝟎,𝐱N=𝟎\displaystyle=(\mathbf{A}_{B})^{-1}\mathbf{b}\geq\mathbf{0},\ \mathbf{x}_{N}=\mathbf{0} (13)
𝐲\displaystyle\mathbf{y} =(𝐀B)1𝐜B,𝐜N𝐀N𝐲𝟎.\displaystyle=(\mathbf{A}_{B}^{\top})^{-1}\mathbf{c}_{B},\ \mathbf{c}_{N}-\mathbf{A}_{N}^{\top}\mathbf{y}\geq\mathbf{0}.

Compared with (12), here a vertex must be determined by |B|=m|B|=m linearly independent columns of 𝐀\mathbf{A}, and there may also be zero components in 𝐜N𝐀N𝐲\mathbf{c}_{N}-\mathbf{A}_{N}^{\top}\mathbf{y}. Crossover for PDLP refers to obtaining a solution that satisfies (13) from a solution satisfying (12).

One important observation is that if we properly fix variables at their bounds, then any feasible solution to the restricted LP is also an optimal solution to the original LP, as formally stated below in Proposition 9.

Proposition 9.

Given the primal-dual optimal solution (𝐱,𝐲)(\mathbf{x},\mathbf{y}) to (1), denote B={i:𝐱i>0}B=\{i:\mathbf{x}_{i}>0\}, E={i:𝐱i=0}E=\{i:\mathbf{x}_{i}=0\}, D={i:𝐜i𝐀i𝐲=0}D=\{i:\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}=0\}, and N={i:𝐜i𝐀i𝐲>0}N=\{i:\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}>0\}. Any feasible solution (𝐱~,𝐲~)(\tilde{\mathbf{x}},\tilde{\mathbf{y}}) satisfying

𝐀B𝐱~B=𝐛,𝐱~B𝟎,𝐱~E=𝟎\displaystyle\mathbf{A}_{B}\tilde{\mathbf{x}}_{B}=\mathbf{b},\ \tilde{\mathbf{x}}_{B}\geq\mathbf{0},\ \tilde{\mathbf{x}}_{E}=\mathbf{0} (14)
𝐜D𝐀D𝐲~=𝟎,𝐜N𝐀N𝐲~𝟎\displaystyle\mathbf{c}_{D}-\mathbf{A}_{D}^{\top}\tilde{\mathbf{y}}=\mathbf{0},\ \mathbf{c}_{N}-\mathbf{A}_{N}^{\top}\tilde{\mathbf{y}}\geq\mathbf{0}

also form a primal-dual optimal solution to the original LP.

Proof.

The KKT condition of (1) is

𝐀𝐱=𝐛,𝐱𝟎\displaystyle\mathbf{A}\mathbf{x}=\mathbf{b},\ \mathbf{x}\geq\mathbf{0} (15)
𝐜𝐀𝐲𝟎\displaystyle\mathbf{c}-\mathbf{A}^{\top}\mathbf{y}\geq\mathbf{0}
𝐜𝐱𝐛𝐲=0.\displaystyle\mathbf{c}^{\top}\mathbf{x}-\mathbf{b}^{\top}\mathbf{y}=0.

Since (𝐱,𝐲)(\mathbf{x},\mathbf{y}) is optimal, we have BDB\subseteq D and NEN\subseteq E. The duality gap of (𝐱~,𝐲~)(\tilde{\mathbf{x}},\tilde{\mathbf{y}}) is

𝐜𝐱~𝐛𝐲~=𝐜B𝐱~B𝐱~B𝐀B𝐲~=𝐜D𝐱~D𝐱~D𝐀D𝐲~=0.\mathbf{c}^{\top}\tilde{\mathbf{x}}-\mathbf{b}^{\top}\tilde{\mathbf{y}}=\mathbf{c}_{B}^{\top}\tilde{\mathbf{x}}_{B}-\tilde{\mathbf{x}}_{B}^{\top}\mathbf{A}_{B}^{\top}\tilde{\mathbf{y}}=\mathbf{c}_{D}^{\top}\tilde{\mathbf{x}}_{D}-\tilde{\mathbf{x}}_{D}^{\top}\mathbf{A}_{D}^{\top}\tilde{\mathbf{y}}=0.

The last equality is from 𝐜D𝐀D𝐲~=𝟎\mathbf{c}_{D}-\mathbf{A}_{D}^{\top}\tilde{\mathbf{y}}=\mathbf{0}. Together with (14), (𝐱~,𝐲~)(\tilde{\mathbf{x}},\tilde{\mathbf{y}}) satisfies (15). ∎

Therefore, we can iteratively push 𝐱~B\tilde{\mathbf{x}}_{B} and 𝐀N𝐲~\mathbf{A}_{N}^{\top}\tilde{\mathbf{y}} in (14) to their bounds, fixing those that reach their bounds, until a vertex is found; this is precisely the idea of Megiddo (1991).
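A minimal sketch of one such push step, with made-up data: delta is a vector in the kernel of A_B, so the step preserves A_B x_B = b, and the ratio test picks the largest step keeping x_B >= 0.

```python
# One push step of a Megiddo-style crossover on a tiny made-up instance.
A_B = [[1.0, 1.0, 2.0],
       [0.0, 1.0, 1.0]]          # 2 x 3, so its kernel is nontrivial
x_B = [1.0, 2.0, 3.0]
delta = [-1.0, -1.0, 1.0]        # kernel vector of A_B (A_B @ delta = 0)

# ratio test: theta = min over {i : delta_i < 0} of -x_i / delta_i
theta = min(-x_B[i] / delta[i] for i in range(3) if delta[i] < 0)
x_new = [x_B[i] + theta * delta[i] for i in range(3)]
support = [i for i in range(3) if x_new[i] > 1e-12]

print(x_new)     # [0.0, 1.0, 4.0]: one component is pushed exactly to its bound
print(support)   # [1, 2]: the support shrinks by at least one index
```

Because delta lies in the kernel of A_B, the product A_B x is unchanged by the move, so primal feasibility is maintained while the support strictly shrinks.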

Another important phenomenon, which is the key to our crossover method, is that when PDHG starts a phase from a primal (or dual) feasible solution, that solution is also the base point of the primal (or dual) PDHG spiral ray:

Proposition 10.

Within one phase with PDHG basis (B,N)(B,N) and the initial solution (𝐱(0),𝐲(0))(\mathbf{x}^{(0)},\mathbf{y}^{(0)}), if 𝐀B𝐱B(0)=𝐛\mathbf{A}_{B}\mathbf{x}^{(0)}_{B}=\mathbf{b}, then we have 𝐱𝐯=𝐱(0)\mathbf{x}_{\mathbf{v}}=\mathbf{x}^{(0)}. Similarly, if 𝐀B𝐲(0)=𝐜B\mathbf{A}_{B}^{\top}\mathbf{y}^{(0)}=\mathbf{c}_{B}, then we have 𝐲𝐯=𝐲(0)\mathbf{y}_{\mathbf{v}}=\mathbf{y}^{(0)}.

Proof.

To prove 𝐱𝐯=𝐱(0)\mathbf{x}_{\mathbf{v}}=\mathbf{x}^{(0)}, we have (𝐱𝐯)N=𝐱N(0)=𝟎(\mathbf{x}_{\mathbf{v}})_{N}=\mathbf{x}^{(0)}_{N}=\mathbf{0} and

(𝐱𝐯)B\displaystyle(\mathbf{x}_{\mathbf{v}})_{B} =(𝐀B𝐀B)𝐀B𝐛+Proj𝐀B𝐱=𝟎(𝐱B(0))\displaystyle=(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}\mathbf{A}_{B}^{\top}\mathbf{b}+\operatorname{Proj}_{\mathbf{A}_{B}\mathbf{x}=\mathbf{0}}(\mathbf{x}_{B}^{(0)})
=𝐀B(𝐀B𝐀B)𝐛+Proj𝐀B𝐱=𝟎(𝐱B(0))\displaystyle=\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{b}+\operatorname{Proj}_{\mathbf{A}_{B}\mathbf{x}=\mathbf{0}}(\mathbf{x}_{B}^{(0)})
=𝐀B(𝐀B𝐀B)𝐀B𝐱B(0)+[𝐈𝐀B(𝐀B𝐀B)𝐀B]𝐱B(0)\displaystyle=\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\mathbf{x}^{(0)}_{B}+\left[\mathbf{I}-\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\right]\mathbf{x}^{(0)}_{B}
=𝐱B(0).\displaystyle=\mathbf{x}^{(0)}_{B}.

The second equality utilizes the property of the Moore–Penrose inverse that (𝐀B𝐀B)𝐀B=𝐀B=𝐀B(𝐀B𝐀B)(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}\mathbf{A}_{B}^{\top}=\mathbf{A}_{B}^{\dagger}=\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}. Analogously, we also have 𝐲𝐯=𝐲(0)\mathbf{y}_{\mathbf{v}}=\mathbf{y}^{(0)}. ∎
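The identity chain in the proof can be checked numerically on a small made-up instance (plain Python, full-row-rank case, where the pseudoinverse reduces to A^T (A A^T)^{-1}): the least-squares part plus the kernel projection of x0 recovers x0 whenever A x0 = b.

```python
# Numerical check of the decomposition used in the proof of Proposition 10:
# if A x0 = b, then  A^+ b + Proj_{A x = 0}(x0) = x0,
# with A^+ = A^T (A A^T)^{-1} for a full-row-rank A (made-up 2 x 3 instance).
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
x0 = [1.0, 2.0, 0.0]
b = [sum(A[r][j] * x0[j] for j in range(3)) for r in range(2)]   # b = A x0

# G = A A^T (2 x 2) and its explicit inverse
G = [[sum(A[r][j] * A[s][j] for j in range(3)) for s in range(2)] for r in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det], [-G[1][0] / det, G[0][0] / det]]

# least-squares part: A^T (A A^T)^{-1} b
u = [sum(Ginv[r][s] * b[s] for s in range(2)) for r in range(2)]
ls = [sum(A[r][j] * u[r] for r in range(2)) for j in range(3)]

# kernel projection part: x0 - A^T (A A^T)^{-1} A x0  (= x0 - ls, since b = A x0)
proj = [x0[j] - ls[j] for j in range(3)]

recovered = [ls[j] + proj[j] for j in range(3)]
print([round(v, 9) for v in recovered])   # [1.0, 2.0, 0.0], i.e., x0
```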

3.1 Crossover Framework

Many crossover frameworks have been designed (Megiddo, 1991; Bixby and Saltzman, 1994; Andersen and Ye, 1996), most of which combine simplex methods and IPMs. Following Megiddo's idea, our crossover consists of three main parts.

  1. Primal push moves 𝐱\mathbf{x} towards the bounds while maintaining primal feasibility. After this process, the primal part of an optimal vertex solution is found, although degeneracy may exist.

  2. Dual push moves 𝐲\mathbf{y} to activate as many dual constraints as possible while maintaining dual feasibility. After this process, the dual part of an optimal vertex solution is obtained.

  3. Linear independence check no longer changes the solution values; it selects basic columns to construct the nonsingular square matrix 𝐀B\mathbf{A}_{B}.

Data: The standard LP problem with 𝐜,𝐀,𝐛\mathbf{c},\mathbf{A},\mathbf{b}.
Input: Initial optimal solution (𝐱,𝐲)+n×m(\mathbf{x},\mathbf{y})\in\mathbb{R}^{n}_{+}\times\mathbb{R}^{m}.
Output: The optimal vertex solution (𝐱,𝐲)(\mathbf{x},\mathbf{y}) and the basis (B,N)(B,N).
/* Primal Push */
B={i:𝐱i>0}B=\{i:\mathbf{x}_{i}>0\} ;  // initialize basis set
Solve the auxiliary LP (16) with cost perturbation ;  // solve auxiliary LP
while primal support can be further reduced do
      Generate a primal direction (𝜹𝐱)B𝟎(\bm{\delta}_{\mathbf{x}})_{B}\neq\mathbf{0} in the kernel of 𝐀B\mathbf{A}_{B} by solving (17) and calculating the primal spiral ray in (7) with cost perturbation ;
      if (𝛅𝐱)B𝟎(\bm{\delta}_{\mathbf{x}})_{B}\neq\mathbf{0} then
            if (𝛅𝐱)B𝟎(\bm{\delta}_{\mathbf{x}})_{B}\geq\mathbf{0} then
                  (𝜹𝐱)B=(𝜹𝐱)B(\bm{\delta}_{\mathbf{x}})_{B}=-(\bm{\delta}_{\mathbf{x}})_{B} ;  // make sure to push basic variables to bounds
            end if
            θmin{iB:(𝜹𝐱)i<0}{𝐱i(𝜹𝐱)i}\theta\leftarrow\min_{\{i\in B:(\bm{\delta}_{\mathbf{x}})_{i}<0\}}\{-\frac{\mathbf{x}_{i}}{(\bm{\delta}_{\mathbf{x}})_{i}}\} ;  // primal ratio test
            𝐱B𝐱B+θ(𝜹𝐱)B\mathbf{x}_{B}\leftarrow\mathbf{x}_{B}+\theta(\bm{\delta}_{\mathbf{x}})_{B} ;  // move 𝐱B\mathbf{x}_{B} to reduce its support
            BB{iB:𝐱i=0}B\leftarrow B\setminus\{i\in B:\mathbf{x}_{i}=0\} ;  // update basis set
      end if
end while
/* Dual Push */
D={i:𝐜i𝐀i𝐲=0},N={i:𝐜i𝐀i𝐲>0}D=\{i:\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}=0\},N=\{i:\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}>0\} ;  // initialize dual activation set
Solve the auxiliary LP (18) with penalty ;  // solve auxiliary LP
while dual constraints can be further activated do
      Generate a dual direction 𝜹𝐲𝟎\bm{\delta}_{\mathbf{y}}\neq\mathbf{0} in the kernel of 𝐀D\mathbf{A}_{D}^{\top} by solving (19) and calculating the dual spiral ray in (7) with right-hand-side perturbation ;
      if 𝐀N𝛅𝐲𝟎\mathbf{A}_{N}^{\top}\bm{\delta}_{\mathbf{y}}\neq\mathbf{0} then
            if 𝐀N𝛅𝐲𝟎\mathbf{A}_{N}^{\top}\bm{\delta}_{\mathbf{y}}\leq\mathbf{0} then
                  𝜹𝐲=𝜹𝐲\bm{\delta}_{\mathbf{y}}=-\bm{\delta}_{\mathbf{y}} ;  // make sure to activate constraints
            end if
            θmin{iN:𝐀i𝜹𝐲>0}{𝐜i𝐀i𝐲𝐀i𝜹𝐲}\theta\leftarrow\min_{\{i\in N:\mathbf{A}^{\top}_{i}\bm{\delta}_{\mathbf{y}}>0\}}\{\frac{\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}}{\mathbf{A}^{\top}_{i}\bm{\delta}_{\mathbf{y}}}\} ;  // dual ratio test
            𝐲𝐲+θ𝜹𝐲\mathbf{y}\leftarrow\mathbf{y}+\theta\bm{\delta}_{\mathbf{y}} ;  // move 𝐲\mathbf{y} to activate dual constraints
            DD{iN:𝐜i𝐀i𝐲=0}D\leftarrow D\cup\{i\in N:\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}=0\}, NN{iN:𝐜i𝐀i𝐲=0}N\leftarrow N\setminus\{i\in N:\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}=0\} ;  // update dual activation set
      end if
end while
/* Linear Independence Check */
Select m|B|m-|B| columns 𝐀C\mathbf{A}_{C} from 𝐀DB\mathbf{A}_{D\setminus B} via LU decomposition and combine them with columns of 𝐀B\mathbf{A}_{B} to form the entire basis ;
BBCB\leftarrow B\cup C, N{1,,n}BN\leftarrow\{1,\cdots,n\}\setminus B ;  // construct the final basis set
Algorithm 2 Crossover Framework

The fundamental difference between our crossover algorithm and Megiddo's scheme is that we use the directions (𝜹𝐱)B(\bm{\delta}_{\mathbf{x}})_{B} and 𝜹𝐲\bm{\delta}_{\mathbf{y}} inspired by PDHG, rather than the pivot directions of simplex methods, to move the variables. Another difference from Megiddo's framework is that we check linear independence only after the dual push is completed, keeping the two processes more independent, whereas Megiddo checks linear independence immediately once the dual push identifies newly activated dual constraints. In terms of theoretical guarantees, following the same analysis as Megiddo (1991), we can show that this PDHG-inspired crossover succeeds with probability (w.p.) 1.

We formally present the crossover scheme in Algorithm 2, which includes three major components:

Primal push.

Our primal push blends two approaches: solving one auxiliary LP and solving a few sequential ordinary least squares (OLS) problems that identify the ray of the PDHG iterates.

After recognizing the basic set BB, we build an auxiliary LP with random cost perturbation 𝐜~B\tilde{\mathbf{c}}_{B},

min\displaystyle\min 𝐜~B𝐱B\displaystyle\tilde{\mathbf{c}}_{B}^{\top}\mathbf{x}_{B} (16)
s.t.\displaystyle\operatornamewithlimits{s.t.} 𝐀B𝐱B=𝐛\displaystyle\mathbf{A}_{B}\mathbf{x}_{B}=\mathbf{b}
𝐱B𝟎,\displaystyle\mathbf{x}_{B}\geq\mathbf{0},

and solve (16) with PDLP. This auxiliary LP has multiple optimal solutions w.p. 0 (see Lemma 1 in Ge et al. (2021)), so with proper perturbation to avoid the unbounded case, we have the unique optimal solution 𝐱B\mathbf{x}_{B} to (16). Then (𝐱B,𝟎E)(\mathbf{x}_{B},\mathbf{0}_{E}) can be the primal part of an optimal vertex of (1).

Unfortunately, due to numerical residuals in practice, we may not correctly fix all the variables at their bounds at once. To mitigate this issue, one should first control the magnitude of the perturbation; see Mehrotra (1991) and Ge et al. (2021) for more specific discussions. In addition, a verification scheme needs to be designed.

Therefore, after solving (16), we solve a series of OLS problems (17) with various randomly generated costs 𝐜~B\tilde{\mathbf{c}}_{B} to calculate the PDHG primal ray directions (7).

min12𝐀B𝐲𝐜~B22.\min\ \frac{1}{2}\|\mathbf{A}_{B}^{\top}\mathbf{y}-\tilde{\mathbf{c}}_{B}\|_{2}^{2}. (17)

From Proposition 10, since the current primal solution 𝐱B\mathbf{x}_{B} is still feasible for (16) with cost perturbation, it is located at the primal spiral center of the rotation in the current phase of PDHG. We can then start from 𝐱B\mathbf{x}_{B} and move along (𝜹𝐱)B(\bm{\delta}_{\mathbf{x}})_{B} until reaching new boundaries, mirroring how PDHG solves the auxiliary LPs. The primal push terminates once the columns of 𝐀B\mathbf{A}_{B} are linearly independent, at which point no more primal variables can be pushed to their bounds. The OLS problems can be solved by direct QR factorization, LSMR (Fong and Saunders, 2011), or the conjugate gradient (CG) method.

The reason we solve only one auxiliary LP is that (16) is in most cases enough to obtain a primal solution close to a vertex, and the OLS problems have lower complexity for reducing the support and verifying the termination of the primal push. Directly following the primal spiral ray avoids unnecessary oscillation. However, when the original LP is too large, or the resulting OLS is too hard to solve, using multiple auxiliary LPs can be beneficial for the primal push.
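A stdlib-only sketch of one OLS solve of the form (17) via the normal equations (the matrix and perturbed cost are made up). A standard OLS fact makes this useful here: the residual c~_B - A_B^T y* is orthogonal to the range of A_B^T, i.e., it lies in the kernel of A_B, which is exactly the kind of push direction the primal push needs.

```python
# Solve min ||A_B^T y - c~_B||^2 by the normal equations A_B A_B^T y = A_B c~_B
# (explicit 2 x 2 solve), then verify the residual lies in ker(A_B).
A_B = [[1.0, 0.0, 1.0],
       [0.0, 1.0, 1.0]]          # m = 2 rows, |B| = 3 columns
c_t = [1.0, 2.0, 7.0]            # made-up perturbed cost c~_B

G = [[sum(A_B[r][j] * A_B[s][j] for j in range(3)) for s in range(2)] for r in range(2)]
rhs = [sum(A_B[r][j] * c_t[j] for j in range(3)) for r in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
y = [(G[1][1] * rhs[0] - G[0][1] * rhs[1]) / det,
     (G[0][0] * rhs[1] - G[1][0] * rhs[0]) / det]

# residual r = c~_B - A_B^T y is orthogonal to range(A_B^T), so A_B r = 0
r = [c_t[j] - sum(A_B[i][j] * y[i] for i in range(2)) for j in range(3)]
print(max(abs(sum(A_B[i][j] * r[j] for j in range(3))) for i in range(2)))  # ~0
```

In a real implementation this dense solve would be replaced by QR, LSMR, or CG as discussed above; the kernel property of the residual is what carries over.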

Dual push.

In the dual push, we still combine the auxiliary LP and sequential OLS.

For the auxiliary LP, we sum up the non-negative dual slacks as a penalty to activate more dual constraints.

min\displaystyle\min 𝟏(𝐜N𝐀N𝐲)\displaystyle\mathbf{1}^{\top}(\mathbf{c}_{N}-\mathbf{A}_{N}^{\top}\mathbf{y}) (18)
s.t.\displaystyle\operatornamewithlimits{s.t.} 𝐀D𝐲=𝐜D\displaystyle\mathbf{A}_{D}^{\top}\mathbf{y}=\mathbf{c}_{D}
𝐀N𝐲𝐜N.\displaystyle\mathbf{A}_{N}^{\top}\mathbf{y}\leq\mathbf{c}_{N}.

Then similarly, we compute a series of OLS (19) with perturbed right-hand side 𝐛~\tilde{\mathbf{b}} to calculate the dual ray directions of PDHG (7).

min12𝐀D𝐱D𝐛~22.\min\ \frac{1}{2}\|\mathbf{A}_{D}\mathbf{x}_{D}-\tilde{\mathbf{b}}\|_{2}^{2}\ . (19)

More constraints are fixed to be active step by step until the rows of 𝐀D\mathbf{A}_{D} are linearly independent, at which point 𝐲\mathbf{y} is uniquely determined by 𝐀D𝐲=𝐜D\mathbf{A}_{D}^{\top}\mathbf{y}=\mathbf{c}_{D}.
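The dual ratio test of Algorithm 2 can be sketched with made-up data; delta_y is assumed to lie in the kernel of A_D^T, so the constraints already in D stay active while the step activates a new one in N.

```python
# One dual-push step: move y along delta_y and take the largest step that
# keeps all slacks c_N - A_N^T y nonnegative.
A_N = [[1.0, 0.0],
       [1.0, 1.0]]               # columns of A indexed by N (m = 2, |N| = 2)
c_N = [3.0, 5.0]
y = [1.0, 1.0]
delta_y = [1.0, 0.5]             # assumed direction in ker(A_D^T)

# slacks and directional rates A_i^T delta_y
slack = [c_N[i] - sum(A_N[r][i] * y[r] for r in range(2)) for i in range(2)]
rate = [sum(A_N[r][i] * delta_y[r] for r in range(2)) for i in range(2)]

# dual ratio test over {i in N : A_i^T delta_y > 0}
theta = min(slack[i] / rate[i] for i in range(2) if rate[i] > 1e-12)
y = [y[r] + theta * delta_y[r] for r in range(2)]
new_slack = [c_N[i] - sum(A_N[r][i] * y[r] for r in range(2)) for i in range(2)]
newly_active = [i for i in range(2) if abs(new_slack[i]) < 1e-12]

print(newly_active)              # [0]: one dual constraint becomes active
```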

Linear independence check.

After the primal and dual push, (𝐱,𝐲)(\mathbf{x},\mathbf{y}) should have reached some vertex. All that remains is to identify m|B|m-|B| columns from 𝐀C=𝐀DB\mathbf{A}_{C}=\mathbf{A}_{D\setminus B} that form an optimal basis together with 𝐀B\mathbf{A}_{B}. Here we apply LU factorization twice. We first decompose 𝐀B\mathbf{A}_{B} as

𝐀B=𝐋B𝐔B=(𝐋R1,B𝐋R2,B𝐈m|B|)(𝐔𝟎),\mathbf{A}_{B}=\mathbf{L}_{B}\mathbf{U}_{B}=\begin{pmatrix}\mathbf{L}_{R_{1},B}&\phantom{\mathbf{0}}\\ \mathbf{L}_{R_{2},B}&\mathbf{I}_{m-|B|}\end{pmatrix}\begin{pmatrix}\mathbf{U}\\ \mathbf{0}\end{pmatrix},

where (R1,R2)(R_{1},R_{2}) divides the rows according to the decomposition. The basic candidates become

𝐀BC\displaystyle\mathbf{A}_{B\cup C} =[𝐀R1,B𝐀R1,C𝐀R2,B𝐀R2,C]=[(𝐋R1,B𝐋R2,B𝐈)(𝐔𝟎)𝐀R1,C𝐀R2,C]\displaystyle=\begin{bmatrix}\mathbf{A}_{R_{1},B}&\mathbf{A}_{R_{1},C}\\ \mathbf{A}_{R_{2},B}&\mathbf{A}_{R_{2},C}\end{bmatrix}=\begin{bmatrix}\begin{pmatrix}\mathbf{L}_{R_{1},B}&\phantom{\mathbf{0}}\\ \mathbf{L}_{R_{2},B}&\mathbf{I}\end{pmatrix}\begin{pmatrix}\mathbf{U}\\ \mathbf{0}\end{pmatrix}&\begin{matrix}\mathbf{A}_{R_{1},C}\\ \mathbf{A}_{R_{2},C}\end{matrix}\end{bmatrix}
=[𝐋R1,B𝐋R2,B𝐈][𝐔𝟎(𝐋R1,B𝐋R2,B𝐈)1(𝐀R1,C𝐀R2,C)]\displaystyle=\begin{bmatrix}\mathbf{L}_{R_{1},B}&\phantom{\mathbf{0}}\\ \mathbf{L}_{R_{2},B}&\mathbf{I}\end{bmatrix}\begin{bmatrix}\begin{matrix}\mathbf{U}\\ \mathbf{0}\end{matrix}&\begin{pmatrix}\mathbf{L}_{R_{1},B}&\phantom{\mathbf{0}}\\ \mathbf{L}_{R_{2},B}&\mathbf{I}\end{pmatrix}^{-1}\begin{pmatrix}\mathbf{A}_{R_{1},C}\\ \mathbf{A}_{R_{2},C}\end{pmatrix}\end{bmatrix}
=[𝐋R1,B𝐋R2,B𝐈][𝐔𝟎(𝐋R1,B1𝐋R2,B𝐋R1,B1𝐈)(𝐀R1,C𝐀R2,C)]\displaystyle=\begin{bmatrix}\mathbf{L}_{R_{1},B}&\phantom{\mathbf{0}}\\ \mathbf{L}_{R_{2},B}&\mathbf{I}\end{bmatrix}\begin{bmatrix}\begin{matrix}\mathbf{U}\\ \mathbf{0}\end{matrix}&\begin{pmatrix}\mathbf{L}_{R_{1},B}^{-1}&\phantom{\mathbf{0}}\\ -\mathbf{L}_{R_{2},B}\mathbf{L}_{R_{1},B}^{-1}&\mathbf{I}\end{pmatrix}\begin{pmatrix}\mathbf{A}_{R_{1},C}\\ \mathbf{A}_{R_{2},C}\end{pmatrix}\end{bmatrix}
=[𝐋R1,B𝐋R2,B𝐈][𝐔𝐋R1,B1𝐀R1,C𝟎𝐀R2,C𝐋R2,B𝐋R1,B1𝐀R1,C].\displaystyle=\begin{bmatrix}\mathbf{L}_{R_{1},B}&\phantom{\mathbf{0}}\\ \mathbf{L}_{R_{2},B}&\mathbf{I}\end{bmatrix}\begin{bmatrix}\mathbf{U}&\mathbf{L}_{R_{1},B}^{-1}\mathbf{A}_{R_{1},C}\\ \mathbf{0}&\mathbf{A}_{R_{2},C}-\mathbf{L}_{R_{2},B}\mathbf{L}_{R_{1},B}^{-1}\mathbf{A}_{R_{1},C}\end{bmatrix}.

Then, we only need to select m|B|m-|B| linearly independent columns from 𝐀R2,C𝐋R2,B𝐋R1,B1𝐀R1,C\mathbf{A}_{R_{2},C}-\mathbf{L}_{R_{2},B}\mathbf{L}_{R_{1},B}^{-1}\mathbf{A}_{R_{1},C}, where the second LU decomposition is involved.
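A possible stdlib-only stand-in for this column selection is greedy Gaussian elimination with partial pivoting (a production implementation would use the sparse LU factorizations described above; the helper name below is ours):

```python
# Greedily select a maximal set of linearly independent columns of M.
def select_independent_columns(M, tol=1e-10):
    m = len(M)
    ncols = len(M[0])
    rows = [row[:] for row in M]   # working copy, eliminated in place
    chosen, pivot_rows = [], []
    for j in range(ncols):
        # find the largest remaining pivot in column j among unused rows
        best, best_r = tol, None
        for r in range(m):
            if r not in pivot_rows and abs(rows[r][j]) > best:
                best, best_r = abs(rows[r][j]), r
        if best_r is None:
            continue               # column j depends on the chosen columns
        chosen.append(j)
        pivot_rows.append(best_r)
        p = rows[best_r]
        for r in range(m):         # eliminate column j from all other rows
            if r != best_r and abs(rows[r][j]) > 0.0:
                f = rows[r][j] / p[j]
                rows[r] = [rows[r][k] - f * p[k] for k in range(ncols)]
    return chosen

# column 2 = column 0 + column 1, so only two columns can be selected
M = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
print(select_independent_columns(M))   # [0, 1]
```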

3.2 Practical Consideration

We discuss several practical considerations when implementing our proposed crossover scheme.

LP formulation.

So far, we have built our theory and algorithm design on the standard LP (1). In practice, the general LP formulation (20) is more convenient and thus widely used.

min\displaystyle\min 𝐜𝐱\displaystyle\mathbf{c}^{\top}\mathbf{x} (20)
s.t.\displaystyle\operatornamewithlimits{s.t.} 𝐥𝐰𝐀𝐱𝐮𝐰\displaystyle\mathbf{l}_{\mathbf{w}}\leq\mathbf{A}\mathbf{x}\leq\mathbf{u}_{\mathbf{w}}
𝐥𝐱𝐱𝐮𝐱.\displaystyle\mathbf{l}_{\mathbf{x}}\leq\mathbf{x}\leq\mathbf{u}_{\mathbf{x}}.

To extend the concept of the basis on the general form, we reformulate it as (21) via introducing the artificial variable 𝐰\mathbf{w}.

min\displaystyle\min 𝐜𝐱\displaystyle\mathbf{c}^{\top}\mathbf{x} (21)
s.t.\displaystyle\operatornamewithlimits{s.t.} 𝐀𝐱𝐰=δ𝟏\displaystyle\mathbf{A}\mathbf{x}-\mathbf{w}=-\delta\mathbf{1}
𝐥𝐱𝐱𝐮𝐱\displaystyle\mathbf{l}_{\mathbf{x}}\leq\mathbf{x}\leq\mathbf{u}_{\mathbf{x}}
𝐥𝐰+δ𝟏𝐰𝐮𝐰+δ𝟏,\displaystyle\mathbf{l}_{\mathbf{w}}+\delta\mathbf{1}\leq\mathbf{w}\leq\mathbf{u}_{\mathbf{w}}+\delta\mathbf{1},

where we set δ=10\delta=10 to prevent the right-hand-side term from becoming 𝟎\mathbf{0}, since a zero right-hand side may render the LP highly sensitive to perturbations, causing the optimal solution to change drastically. This reformulation also guarantees that the constraint matrix [𝐀𝐈]\begin{bmatrix}\mathbf{A}&-\mathbf{I}\end{bmatrix} has full row rank, so the definition of a basic solution applies directly.

Non-basis identification.

At the beginning of primal and dual push, we need to determine BB and (D,N)(D,N) to construct the auxiliary LP and OLS. For an IPM solution, which tends to stay away from bounds, the central path and strict complementarity are crucial for identifying non-basic variables. Identification is typically done by comparing the orders of magnitude of 𝐱\mathbf{x} and 𝐜𝐀𝐲\mathbf{c}-\mathbf{A}^{\top}\mathbf{y}. In contrast, PDHG can benefit from the sparsity of its solution, induced by the projection, to more aggressively fix variables to their bounds.

We take

B={i:𝐱i>max(γ(𝐜i𝐀i𝐲),ϵ)},B=\{i:\mathbf{x}_{i}>\max(\gamma(\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}),\epsilon)\},

and after BB is updated in primal push

D\displaystyle D ={i:𝐜i𝐀i𝐲ϵ}B\displaystyle=\{i:\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}\leq\epsilon\}\cup B
N\displaystyle N ={i:𝐜i𝐀i𝐲>ϵ}B,\displaystyle=\{i:\mathbf{c}_{i}-\mathbf{A}_{i}^{\top}\mathbf{y}>\epsilon\}\setminus B,

with γ=1\gamma=1 and ϵ=1e-8\epsilon=1e\text{-}8. This criterion works well in our experiments for purifying the PDLP solutions.
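The identification rule can be sketched as follows, with made-up values for x and the dual slacks s = c - A^T y containing typical near-zero residuals:

```python
# Classify indices into B, then D and N, from an approximate primal-dual
# solution; gamma and eps follow the values in the text.
gamma, eps = 1.0, 1e-8
x = [2.0, 1e-12, 0.5, 0.0]
s = [3e-13, 4.0, 1e-10, 2.5]        # s_i = c_i - A_i^T y (made up)

B = [i for i in range(4) if x[i] > max(gamma * s[i], eps)]
D = sorted(set(i for i in range(4) if s[i] <= eps) | set(B))
N = [i for i in range(4) if s[i] > eps and i not in B]

print(B, D, N)                      # [0, 2] [0, 2] [1, 3]
```

Indices 0 and 2 have clearly positive primal values and tiny slacks, so they enter B (and hence D); indices 1 and 3 have large slacks and negligible primal values, so they fall into N.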

Perturbation.

The cost and right-hand-side perturbations guide the direction of crossover. They are perturbed as

𝐜~B\displaystyle\tilde{\mathbf{c}}_{B} =1𝐜B+1𝐜B+𝜹𝐜\displaystyle=\frac{1}{\|\mathbf{c}_{B}\|_{\infty}+1}\mathbf{c}_{B}+\bm{\delta}_{\mathbf{c}}
𝐛~\displaystyle\tilde{\mathbf{b}} =1𝐛+1𝐛+𝜹𝐛,\displaystyle=\frac{1}{\|\mathbf{b}\|_{\infty}+1}\mathbf{b}+\bm{\delta}_{\mathbf{b}},

where the entries of 𝜹𝐜,𝜹𝐛\bm{\delta}_{\mathbf{c}},\bm{\delta}_{\mathbf{b}} are independently and uniformly distributed over the interval [0,1][0,1]. The normalized first terms act as a center that stabilizes the disturbance, preventing the unfixed non-basic variables from occasionally escaping their bounds.
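A sketch of the cost perturbation above (the cost vector is made up; the seed is fixed only for reproducibility, mirroring the experimental setup later in the paper):

```python
import random

# c~_B = c_B / (||c_B||_inf + 1) + delta_c,  delta_c ~ Uniform[0, 1]^|B|
random.seed(0)
c_B = [5.0, -2.0, 0.0]
norm_inf = max(abs(v) for v in c_B)
c_tilde = [v / (norm_inf + 1.0) + random.random() for v in c_B]
print(c_tilde)
```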

Efficiency of OLS.

Currently, all the OLS problems are solved independently from scratch. Note that the coefficient matrices of two adjacent OLS problems differ in only a few columns or rows; thus, reusing the factorization (e.g., via low-rank updates) may substantially enhance the efficiency of our crossover, bringing it closer to the level of the classic simplex-based crossover.

Understanding solution structures between PDLP and IPMs.

This crossover scheme can be applied to solutions obtained by IPMs or PDLP. However, their structures differ as discussed below.

IPMs find the analytical center of the optimal solution face, and the obtained solutions tend to be denser, while PDLP solutions are usually much sparser because of the projection operator.

IPMs and PDHG both have a strong ability to distinguish non-basic variables. IPMs can identify a strictly complementary solution pair once the duality gap is sufficiently small (Andersen and Ye, 1996). PDHG correctly classifies variables as basic or non-basic, in the sense of the PDHG basis, within finitely many iterations (Lu and Yang, 2024b).

Therefore, even though a PDLP solution may be less accurate than an IPM solution, it can still be refined to a vertex successfully in practice.

4 Experiments

In this section, we present experimental results demonstrating the competence of our crossover algorithm. Our experiments are not intended to surpass the state-of-the-art crossover codes implemented in commercial solvers, but rather to verify the effectiveness of the proposed algorithm. Further practical improvements for performance enhancement are beyond the scope of this paper.

Benchmark dataset.

We conduct our crossover experiment on the NETLIB collection (Gay, 1985). Considering the efficiency of the experiment, we select 100 instances from NETLIB according to two rules:

  • The number of rows or columns of 𝐀\mathbf{A} in the general form (20) is smaller than 5000.

  • cuPDLP-C (Lu et al., 2023) can solve it to a relative tolerance of 1e-8 within 600 seconds.

Software and hardware.

Tests are run on a MacBook Pro with an 8-core Apple M2 CPU and 24GB unified memory. Algorithm 2 is implemented in Julia and can be accessed at https://github.com/MIT-Lu-Lab/crossover.

Initialization and time limit.

We first solve the NETLIB LPs using cuPDLP-C inside COPT 7.1 (Ge et al., 2024), without GPU acceleration, to a relative tolerance of 1e-8. The optimal primal-dual solution is then passed to the Julia implementation of our crossover algorithm. The crossover time limit is 300 seconds, and the random seed is set to 0 for reproducibility.

OLS solving.

Solving OLS problems to identify the spiral ray direction is a critical step in our crossover algorithm. For all instances aside from six exceptions, the OLS is solved via an automatic selection between a direct solver and LSMR based on the dimensions: if the matrix 𝐀B\mathbf{A}_{B}^{\top} or 𝐀D\mathbf{A}_{D} in the OLS has fewer than 1000 rows or columns, we solve the OLS by direct factorization; otherwise, we use the iterative solver LSMR with a relative tolerance of 1e-16. The auxiliary LPs are again solved by cuPDLP-C.

For the six exceptions marked in blue, custom methods for the OLS or auxiliary LPs are chosen. Due to occasional numerical issues, we always use the direct solver (labeled “d”) or LSMR (labeled “*”) for these instances. The label “i” indicates that the auxiliary LPs are solved by the IPM in COPT to avoid a timeout of cuPDLP-C.

Results.

The numerical results are shown in Table 1, where “# supp. PDLP” is the support size (i.e., the number of components not at their bounds) of the initial solution, and “# supp. COPT” and “# supp. cross” are the support sizes of the vertices produced by the crossover in the COPT commercial solver and by our crossover, respectively. We also report the running time of our algorithm in the “time (sec)” column.

Overall, our crossover succeeds on 93 out of 100 instances. The instance “cycle” recovers only a primal optimal vertex, but this vertex is much sparser than the optimal vertex from the COPT crossover; we still count it as a success among the 93 successful instances.

Significant enhancement of the solution sparsity is observed in both COPT and our crossover methods, with comparable levels of improvement. In several instances, our crossover performs differently from the classic simplex-based crossover in COPT, implying that our approach has the potential to serve as a viable alternative to the traditional method.

Due to the instability of FOMs, the auxiliary LPs take a long time to solve for certain instances, sometimes even causing a timeout. Numerical difficulty with the LPs and OLS is another factor contributing to failures. However, despite failing to recover an optimal basis for these problems, the support size of most primal solutions is still successfully reduced.

Table 1: Crossover results for 100 NETLIB instances. “a” means the basis is only primal optimal. “t” means crossover times out. “f” means crossover fails. “i” means the auxiliary LPs are solved by the IPM. For the OLS methods, “*” means always using LSMR, “d” means always using the direct solver, while other unmarked records automatically switch between direct solver and LSMR.
prob nRows nCols # supp. PDLP # supp. COPT # supp. cross time (sec)
25fv47 821 1571 600 583 584 11.63
80bau3b 2262 9799 1851 1758 1753 1.30
adlittle 56 97 61 45 44 0.01
afiro 27 32 14 13 14 0.01
agg 488 163 62 57 57 0.39
agg2 516 302 141 120 121 5.32
agg3 516 302 144 124 125 5.71
bandm 305 472 306 294 294 0.54
beaconfd 173 262 89 89 89 0.04
blend 74 83 56 54 54 0.01
bnl1 643 1175 689 451 448 11.33
bnl2 2324 3489 1519 1171 1168 40.05
boeing1 351 384 207 196 195 0.78
boeing2 166 143 69 55 55 0.04
bore3d 233 315 126 126 126 0.06
brandy 220 249 135 134 134 0.12
capri 271 353 246 220 225 0.26
cre-a 3516 4067 579 492 484 23.35d
cre-c 3068 3678 590 508 503 43.49d
cycle 1903 2857 994 224 105 36.27a
czprob 929 3523 924 866 866 27.00
d2q06c 2171 5167 1612 1543 1541 t
d6cube 415 6184 136 115 101 13.85
degen2 444 534 207 207 207 0.84
degen3 1503 1818 640 637 637 153.40
e226 223 282 127 127 127 0.17
etamacro 400 688 292 271 272 0.59
fffff800 524 854 357 312 315 2.59
finnis 497 614 259 227 229 1.61
fit1d 24 1026 12 12 12 0.01
fit1p 627 1677 634 627 627 7.79
fit2d 25 10500 22 20 20 0.01
fit2p 3000 13525 3004 2997 2997 5.47
forplan 161 421 83 83 83 1.69
ganges 1309 1681 1278 1175 1176 2.89i
gfrd-pnc 616 1092 349 336 336 1.32
greenbeb 2392 5405 1048 937 942 135.27
grow15 300 645 533 299 300 0.31
grow22 440 946 849 440 440 0.76
grow7 140 301 237 140 140 0.04
israel 174 142 80 70 69 0.63
kb2 43 41 27 27 27 0.01
ken-07 2426 3602 2236 2234 2234 1.44
lotfi 153 308 126 99 98 0.02
maros-r7 3136 9408 3136 3136 3136 0.39
maros 846 1443 345 340 270 5.58
modszk1 687 1620 666 666 666 10.68
nesm 662 2923 726 550 545 46.11i*
osa-07 1118 23949 357 355 355 0.20
osa-14 2337 52460 873 781 781 f
osa-30 4350 100024 1733 1536 1536 f
pds-02 2953 7535 1379 332 313 15.32
perold 625 1376 580 546 544 61.55
pilot.ja 940 1988 789 681 682 t
pilot 1441 3652 1299 1287 1298 t
pilot4 410 1000 374 364 362 104.69
pilot87 2030 4883 1856 1840 1839 t
pilotnov 975 2172 1809 708 680 f
qap12 3192 8856 2940 2462 2446 93.34
qap8 912 1632 656 466 418 9.54
recipe 91 180 24 24 24 0.01
sc105 105 103 85 85 85 0.02
sc205 205 203 184 184 184 0.09
sc50a 50 48 42 42 42 0.01
sc50b 50 48 48 48 48 0.07
scagr25 471 500 317 307 307 0.57
scagr7 129 140 98 97 97 0.01
scfxm1 330 457 248 231 231 0.15*
scfxm2 660 914 501 473 471 3.04
scfxm3 990 1371 754 711 711 12.42
scorpion 388 358 245 245 245 0.17
scrs8 490 1169 276 276 276 0.29
scsd1 77 760 31 7 11 0.02
scsd6 147 1350 182 63 56 0.03
scsd8 397 2750 551 147 143 0.21
sctap1 300 480 263 164 168 0.20*
sctap2 1090 1880 811 562 562 17.23
sctap3 1480 2480 1038 731 731 50.74
seba 515 1028 438 438 438 2.32
share1b 117 225 95 94 94 0.03
share2b 96 79 52 48 48 0.05
shell 536 1775 391 383 383 0.57
ship04l 402 2118 261 260 260 0.24
ship04s 402 1458 281 280 280 0.27
ship08l 778 4283 422 422 422 1.42
ship08s 778 2387 447 447 447 1.69
ship12l 1151 5427 707 706 706 4.67
ship12s 1151 2763 728 728 728 4.82
sierra 1227 2036 373 361 361 6.61
stair 356 467 350 349 349 0.76
standata 359 1075 71 50 50 0.03
standgub 361 1184 71 50 50 0.03
standmps 467 1075 190 174 174 0.26
stocfor1 117 111 69 69 69 0.01
stocfor2 2157 2031 1267 1267 1267 15.84
truss 1000 8806 802 691 689 17.45
tuff 333 587 165 122 113 0.76
vtp.base 198 203 55 55 55 0.03
wood1p 244 2594 39 39 39 1.49
woodw 1098 8405 696 550 550 3.31

5 Conclusions and Discussions

In this paper, we formally derive the spiral behavior of PDHG for solving LP. The spiral behavior can be decomposed into a rotation and a forward movement that are orthogonal to each other. This explicit formula and geometric perspective provide insight into recent findings about the algorithm, including infeasibility detection, the two-stage behavior, the local linear rate, etc.

Inspired by the spiral of PDHG, we propose a new randomized crossover approach that is distinct from the traditional crossover approach based on the simplex algorithm. Our crossover moves along the PDHG spiral ray, which can be computed by solving OLS problems. Our numerical experiments demonstrate the effectiveness of the proposed approach.

Looking ahead, several directions are worth exploring to enhance this crossover scheme. First, the factorization used in the OLS solves can be reused, potentially speeding up the algorithm significantly. Another potential enhancement is warm-starting PDHG, which could speed up the auxiliary LPs, although a theoretical guarantee for this remains to be established. A further promising direction is developing a GPU-friendly OLS solver, which would allow the crossover method to leverage GPU parallelization. Additionally, a strategic perturbation could steer the solution toward a desired vertex more efficiently. If such a perturbation is well-calibrated to avoid making the LP unbounded or infeasible, it could enable a purely PDHG-based, matrix-free crossover, making it particularly suitable for handling large-scale LPs.

Acknowledgement

This work was mostly done when the first author visited the second author at the University of Chicago, Booth School of Business. The authors thank the Booth School of Business for funding this visit. The authors would also like to thank Qi Huangfu for helpful discussions on the crossover algorithms in commercial LP solvers. The first author is partially supported by NSFC [Grant NSFC-72225009, 72394360, 72394365]. The second author is partially supported by AFOSR FA9550-24-1-0051.

References

  • Dantzig [1951] George B Dantzig. Maximization of a linear function of variables subject to linear inequalities. Activity analysis of production and allocation, 13:339–347, 1951.
  • Dantzig [1963] George B Dantzig. Linear programming and extensions. Princeton University Press, 1963.
  • Karmarkar [1984] Narendra Karmarkar. A new polynomial-time algorithm for linear programming. In Proceedings of the sixteenth annual ACM symposium on Theory of computing, pages 302–311, 1984.
  • Renegar [1988] James Renegar. A polynomial-time algorithm, based on Newton’s method, for linear programming. Mathematical Programming, 40(1):59–93, 1988.
  • Applegate et al. [2021] David Applegate, Mateo Díaz, Oliver Hinder, Haihao Lu, Miles Lubin, Brendan O’Donoghue, and Warren Schudy. Practical large-scale linear programming using primal-dual hybrid gradient. Advances in Neural Information Processing Systems, 34:20243–20257, 2021.
  • Lu and Yang [2023a] Haihao Lu and Jinwen Yang. cuPDLP.jl: A GPU implementation of restarted primal-dual hybrid gradient for linear programming in Julia. arXiv preprint arXiv:2311.12180, 2023a.
  • Lu et al. [2023] Haihao Lu, Jinwen Yang, Haodong Hu, Qi Huangfu, Jinsong Liu, Tianhao Liu, Yinyu Ye, Chuwen Zhang, and Dongdong Ge. cuPDLP-C: A strengthened implementation of cuPDLP for linear programming by C language. arXiv preprint arXiv:2312.14832, 2023.
  • O’Donoghue et al. [2016] Brendan O’Donoghue, Eric Chu, Neal Parikh, and Stephen Boyd. Conic optimization via operator splitting and homogeneous self-dual embedding. Journal of Optimization Theory and Applications, 169(3):1042–1068, June 2016. URL http://stanford.edu/~boyd/papers/scs.html.
  • O’Donoghue [2021] Brendan O’Donoghue. Operator splitting for a homogeneous embedding of the linear complementarity problem. SIAM Journal on Optimization, 31:1999–2023, August 2021.
  • Lin et al. [2021] Tianyi Lin, Shiqian Ma, Yinyu Ye, and Shuzhong Zhang. An ADMM-based interior-point method for large-scale linear programming. Optimization Methods and Software, 36(2-3):389–424, 2021.
  • Deng et al. [2024] Qi Deng, Qing Feng, Wenzhi Gao, Dongdong Ge, Bo Jiang, Yuntian Jiang, Jingsong Liu, Tianhao Liu, Chenyu Xue, Yinyu Ye, et al. An enhanced alternating direction method of multipliers-based interior point method for linear and conic optimization. INFORMS Journal on Computing, 2024.
  • Ge et al. [2024] Dongdong Ge, Qi Huangfu, Zizhuo Wang, Jian Wu, and Yinyu Ye. Cardinal Optimizer (COPT) user guide. https://guide.coap.online/copt/en-doc, 2024.
  • Xpress [2014] FICO Xpress. FICO Xpress Optimization Suite. 2014.
  • Perron and Furnon [2024] Laurent Perron and Vincent Furnon. OR-Tools, 2024. URL https://developers.google.com/optimization/.
  • Huangfu and Hall [2018] Qi Huangfu and JA Julian Hall. Parallelizing the dual revised simplex method. Mathematical Programming Computation, 10(1):119–142, 2018.
  • Fender [2024] Alex Fender. Advances in optimization AI. https://www.nvidia.com/en-us/on-demand/session/gtc24-s62495/, 2024.
  • Megiddo [1991] Nimrod Megiddo. On finding primal- and dual-optimal bases. ORSA Journal on Computing, 3(1):63–65, 1991.
  • Bixby and Saltzman [1994] Robert E Bixby and Matthew J Saltzman. Recovering an optimal LP basis from an interior point solution. Operations Research Letters, 15(4):169–178, 1994.
  • Andersen and Ye [1996] Erling D Andersen and Yinyu Ye. Combining interior-point and pivoting algorithms for linear programming. Management Science, 42(12):1719–1731, 1996.
  • Chambolle and Pock [2011] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40:120–145, 2011.
  • Xiong and Freund [2024] Zikai Xiong and Robert M Freund. The role of level-set geometry on the performance of PDHG for conic linear optimization. arXiv preprint arXiv:2406.01942, 2024.
  • Lu and Yang [2024a] Haihao Lu and Jinwen Yang. Restarted Halpern PDHG for linear programming. arXiv preprint arXiv:2407.16144, 2024a.
  • Halpern [1967] Benjamin Halpern. Fixed points of nonexpanding maps. 1967.
  • Lu and Yang [2022] Haihao Lu and Jinwen Yang. On the infimal sub-differential size of primal-dual hybrid gradient method and beyond. arXiv preprint arXiv:2206.12061, 2022.
  • Applegate et al. [2023] David Applegate, Oliver Hinder, Haihao Lu, and Miles Lubin. Faster first-order primal-dual methods for linear programming using restarts and sharpness. Mathematical Programming, 201(1):133–184, 2023.
  • Applegate et al. [2024] David Applegate, Mateo Díaz, Haihao Lu, and Miles Lubin. Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming. SIAM Journal on Optimization, 34(1):459–484, 2024.
  • Lu and Yang [2024b] Haihao Lu and Jinwen Yang. On the geometry and refined rate of primal–dual hybrid gradient for linear programming. Mathematical Programming, pages 1–39, 2024b.
  • Poon and Liang [2019] Clarice Poon and Jingwei Liang. Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems, 32, 2019.
  • Poon and Liang [2020] Clarice Poon and Jingwei Liang. Geometry of first-order methods and adaptive acceleration. arXiv preprint arXiv:2003.03910, 2020.
  • Charnes and Kortanek [1963] A Charnes and KO Kortanek. An opposite sign algorithm for purification to an extreme point solution. Office of Naval Research, Memorandum, (129), 1963.
  • Charnes et al. [1965] A Charnes, KO Kortanek, and W Raike. Extreme point solutions in mathematical programming: An opposite sign algorithm. Systems Research Memorandum, (129):97–98, 1965.
  • Khachiyan [1979] Leonid Genrikhovich Khachiyan. A polynomial algorithm in linear programming. In Doklady Akademii Nauk, volume 244, pages 1093–1096. Russian Academy of Sciences, 1979.
  • Kortanek and Jishan [1988] KO Kortanek and Zhu Jishan. New purification algorithms for linear programming. Naval Research Logistics (NRL), 35(4):571–583, 1988.
  • Amor et al. [2006] Hatem Ben Amor, Jacques Desrosiers, and François Soumis. Recovering an optimal LP basis from an optimal dual solution. Operations Research Letters, 34(5):569–576, 2006.
  • Mehrotra [1991] Sanjay Mehrotra. On finding a vertex solution using interior point methods. Linear Algebra and its Applications, 152:233–253, 1991.
  • Tapia and Zhang [1991] RA Tapia and Yin Zhang. An optimal-basis identification technique for interior-point linear programming algorithms. Linear Algebra and its Applications, 152:343–363, 1991.
  • Ge et al. [2021] Dongdong Ge, Chengwenjian Wang, Zikai Xiong, and Yinyu Ye. From an interior point to a corner point: smart crossover. arXiv preprint arXiv:2102.09420, 2021.
  • Lu and Yang [2023b] Haihao Lu and Jinwen Yang. On a unified and simplified proof for the ergodic convergence rates of PPM, PDHG and ADMM. arXiv preprint arXiv:2305.02165, 2023b.
  • Gay [1985] David M Gay. Electronic mail distribution of linear programming test problems. Mathematical Programming Society COAL Newsletter, 13:10–12, 1985.
  • Pazy [1971] Amnon Pazy. Asymptotic behavior of contractions in Hilbert space. Israel Journal of Mathematics, 9:235–240, 1971.
  • Baillon [1978] Jean-Bernard Baillon. On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houston J. Math., 4:1–9, 1978.
  • Fong and Saunders [2011] David Chin-Lung Fong and Michael Saunders. LSMR: An iterative algorithm for sparse least-squares problems. SIAM Journal on Scientific Computing, 33(5):2950–2971, 2011.

Appendix A Proof of Theorem 7

Proof of Theorem 7.

For each phase during which PDHG keeps the basis (B,N)(B,N) unchanged, the PDHG operator can be rewritten as

[𝐈|B|𝐈|N|2η𝐀B𝐈m][𝐱B(k+1)𝐱N(k+1)𝐲(k+1)]=[𝐈|B|η𝐀B𝐈|N|η𝐀B𝐈m][𝐱B(k)𝐱N(k)𝐲(k)]+[η𝐜B𝟎η𝐛].\begin{bmatrix}\mathbf{I}_{|B|}&&\\ &\mathbf{I}_{|N|}&\\ 2\eta\mathbf{A}_{B}&&\mathbf{I}_{m}\end{bmatrix}\begin{bmatrix}\mathbf{x}_{B}^{(k+1)}\\ \mathbf{x}_{N}^{(k+1)}\\ \mathbf{y}^{(k+1)}\end{bmatrix}=\begin{bmatrix}\mathbf{I}_{|B|}&&\eta\mathbf{A}_{B}^{\top}\\ &\mathbf{I}_{|N|}&\\ \eta\mathbf{A}_{B}&&\mathbf{I}_{m}\end{bmatrix}\begin{bmatrix}\mathbf{x}_{B}^{(k)}\\ \mathbf{x}_{N}^{(k)}\\ \mathbf{y}^{(k)}\end{bmatrix}+\begin{bmatrix}-\eta\mathbf{c}_{B}\\ \mathbf{0}\\ \eta\mathbf{b}\end{bmatrix}.

Since the non-basic part 𝐱N\mathbf{x}_{N} is always 𝟎\mathbf{0}, we focus on the basic part of PDHG iterates

[𝐱B(k+1)𝐲(k+1)]=[𝐈|B|η𝐀Bη𝐀B𝐈m2η2𝐀B𝐀B][𝐱B(k)𝐲(k)]+[η𝐜B2η2𝐀B𝐜B+η𝐛].\begin{bmatrix}\mathbf{x}_{B}^{(k+1)}\\ \mathbf{y}^{(k+1)}\end{bmatrix}=\begin{bmatrix}\mathbf{I}_{|B|}&\eta\mathbf{A}_{B}^{\top}\\ -\eta\mathbf{A}_{B}&\mathbf{I}_{m}-2\eta^{2}\mathbf{A}_{B}\mathbf{A}_{B}^{\top}\end{bmatrix}\begin{bmatrix}\mathbf{x}_{B}^{(k)}\\ \mathbf{y}^{(k)}\end{bmatrix}+\begin{bmatrix}-\eta\mathbf{c}_{B}\\ 2\eta^{2}\mathbf{A}_{B}\mathbf{c}_{B}+\eta\mathbf{b}\end{bmatrix}.
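As a sanity check on this reduction, the closed-form affine recursion can be compared with a direct PDHG update restricted to the basic variables. The following NumPy sketch is an illustration only (not from the paper), with randomly generated data standing in for 𝐀B\mathbf{A}_{B}, 𝐛\mathbf{b}, and 𝐜B\mathbf{c}_{B}:

```python
import numpy as np

rng = np.random.default_rng(0)
m, nB = 3, 5                              # hypothetical sizes for A_B
A_B = rng.standard_normal((m, nB))
b = rng.standard_normal(m)
c_B = rng.standard_normal(nB)
eta = 0.9 / np.linalg.norm(A_B, 2)        # ensures eta * ||A_B||_2 < 1

x, y = rng.standard_normal(nB), rng.standard_normal(m)

# Plain PDHG update on the basic part (no projection active within the phase):
x_new = x - eta * (c_B - A_B.T @ y)
y_new = y + eta * (b - A_B @ (2 * x_new - x))

# Closed-form affine recursion from the displayed equation:
P_B = np.block([[np.eye(nB),  eta * A_B.T],
                [-eta * A_B,  np.eye(m) - 2 * eta**2 * A_B @ A_B.T]])
q = np.concatenate([-eta * c_B, 2 * eta**2 * A_B @ c_B + eta * b])
z_new = P_B @ np.concatenate([x, y]) + q

assert np.allclose(z_new, np.concatenate([x_new, y_new]))
```

The two updates coincide exactly, confirming that eliminating 𝐲(k+1)\mathbf{y}^{(k+1)} from the implicit form yields the stated matrix 𝐏B\mathbf{P}_{B} and offset.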

From (6) and (7), we have

[𝐱B(k+1)(𝐱𝐯)B𝐲(k+1)𝐲𝐯]=𝐏B[𝐱B(k)(𝐱𝐯)B𝐲(k)𝐲𝐯]+[(𝐯𝐱)B𝐯𝐲]\begin{bmatrix}\mathbf{x}_{B}^{(k+1)}-(\mathbf{x}_{\mathbf{v}})_{B}\\ \mathbf{y}^{(k+1)}-\mathbf{y}_{\mathbf{v}}\end{bmatrix}=\mathbf{P}_{B}\begin{bmatrix}\mathbf{x}_{B}^{(k)}-(\mathbf{x}_{\mathbf{v}})_{B}\\ \mathbf{y}^{(k)}-\mathbf{y}_{\mathbf{v}}\end{bmatrix}+\begin{bmatrix}(\mathbf{v}_{\mathbf{x}})_{B}\\ \mathbf{v}_{\mathbf{y}}\end{bmatrix}

where 𝐏B\mathbf{P}_{B} is defined in (5).

Since

𝐏B𝐯B=𝐯B,𝐯B𝐏B=𝐯B,\mathbf{P}_{B}\mathbf{v}_{B}=\mathbf{v}_{B},\ \mathbf{v}_{B}^{\top}\mathbf{P}_{B}=\mathbf{v}_{B}^{\top}, (22)

we obtain (4) by induction.

Suppose we have the singular value decomposition (SVD) of 𝐀B\mathbf{A}_{B}

𝐀B=𝐔𝚺𝐕=[𝐔r𝐔mr][σ1σr][𝐕r𝐕|B|r],\mathbf{A}_{B}=\mathbf{U}\bm{\Sigma}\mathbf{V}^{\top}=\begin{bmatrix}\mathbf{U}_{r}&\mathbf{U}_{m-r}\end{bmatrix}\begin{bmatrix}\sigma_{1}\\ &\ddots\\ &&\sigma_{r}\\ &&&&&\end{bmatrix}\begin{bmatrix}\mathbf{V}_{r}&\mathbf{V}_{|B|-r}\end{bmatrix}^{\top}, (23)

where 𝚺\bm{\Sigma} has r=rank(𝐀B)r=\text{rank}(\mathbf{A}_{B}) non-zero diagonal elements, and 𝐔,𝐕\mathbf{U},\mathbf{V} are two orthogonal matrices. The first rr columns of 𝐔\mathbf{U} and the remaining columns respectively form bases of the image of 𝐀B\mathbf{A}_{B} and the kernel of 𝐀B\mathbf{A}_{B}^{\top}. Analogously, the first rr columns of 𝐕\mathbf{V} and the remaining columns respectively form bases of the image of 𝐀B\mathbf{A}_{B}^{\top} and the kernel of 𝐀B\mathbf{A}_{B}. In particular,

𝐀B𝐕|B|r=𝟎,𝐀B𝐔mr=𝟎.\mathbf{A}_{B}\mathbf{V}_{|B|-r}=\mathbf{0},\ \mathbf{A}_{B}^{\top}\mathbf{U}_{m-r}=\mathbf{0}. (24)

Replacing 𝐀B\mathbf{A}_{B} in (5) by its SVD (23), 𝐏Bk\mathbf{P}_{B}^{k} equivalently becomes

𝐏Bk=[𝐕𝐔][𝐈|B|η𝚺η𝚺2η2𝚺𝚺+𝐈m]k[𝐕𝐔].\mathbf{P}_{B}^{k}=\begin{bmatrix}\mathbf{V}\\ &\mathbf{U}\end{bmatrix}\begin{bmatrix}\mathbf{I}_{|B|}&\eta\bm{\Sigma}^{\top}\\ -\eta\bm{\Sigma}&-2\eta^{2}\bm{\Sigma}\bm{\Sigma}^{\top}+\mathbf{I}_{m}\end{bmatrix}^{k}\begin{bmatrix}\mathbf{V}^{\top}\\ &\mathbf{U}^{\top}\end{bmatrix}.

We can reorder the middle matrix as

[[1][ησ1](1)(ησ2)1[ησ1][12η2σ12](ησ2)(12η2σ22)1][[1ησ1ησ112η2σ12](1ησ2ησ212η2σ22)11]\begin{bmatrix}[1]&&&&[\eta\sigma_{1}]\\ &(1)&&&&(\eta\sigma_{2})\\ &&1&&&&\\ [-\eta\sigma_{1}]&&&&[1-2\eta^{2}\sigma^{2}_{1}]\\ &(-\eta\sigma_{2})&&&&(1-2\eta^{2}\sigma^{2}_{2})\\ &&&&&&1\\ \end{bmatrix}\rightarrow\begin{bmatrix}\begin{bmatrix}1&\eta\sigma_{1}\\ -\eta\sigma_{1}&1-2\eta^{2}\sigma^{2}_{1}\end{bmatrix}\\ &\begin{pmatrix}1&\eta\sigma_{2}\\ -\eta\sigma_{2}&1-2\eta^{2}\sigma^{2}_{2}\end{pmatrix}\\ &&1\\ &&&1\\ \end{bmatrix}

to obtain several 2×22\times 2 blocks and trailing ones on the diagonal. Moreover, since 𝐀B\mathbf{A}_{B} is a submatrix of 𝐀\mathbf{A} and η𝐀2<1\eta\|\mathbf{A}\|_{2}<1, we still have

0<ησiη𝐀B2η𝐀2<1,i=1,,r.0<\eta\sigma_{i}\leq\eta\|\mathbf{A}_{B}\|_{2}\leq\eta\|\mathbf{A}\|_{2}<1,\ i=1,\cdots,r.

For one 2×22\times 2 block 𝐉=[1ησησ12η2σ2]\mathbf{J}=\begin{bmatrix}1&\eta\sigma\\ -\eta\sigma&1-2\eta^{2}\sigma^{2}\end{bmatrix} with ησ(0,1)\eta\sigma\in(0,1), we have limk+𝐉k=𝟎\lim_{k\to+\infty}\mathbf{J}^{k}=\mathbf{0}: its eigenvalues form the complex-conjugate pair (1-\eta^{2}\sigma^{2})\pm\mathrm{i}\,\eta\sigma\sqrt{1-\eta^{2}\sigma^{2}}, whose common modulus \sqrt{1-\eta^{2}\sigma^{2}} is strictly less than 11. Therefore, we have

limk𝐏Bk=[𝐕|B|r𝐕|B|r𝐔mr𝐔mr].\lim_{k\to\infty}\mathbf{P}_{B}^{k}=\begin{bmatrix}\mathbf{V}_{|B|-r}\mathbf{V}_{|B|-r}^{\top}\\ &\mathbf{U}_{m-r}\mathbf{U}_{m-r}^{\top}\end{bmatrix}. (25)
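Both the spiral structure of the 2×22\times 2 blocks and the limit (25) can be checked numerically. The NumPy sketch below is an illustration only (not from the paper): it builds a hypothetical 𝐀B\mathbf{A}_{B} with prescribed singular values, verifies the eigenvalue moduli, and compares 𝐏Bk\mathbf{P}_{B}^{k} for large kk with the block projector onto the kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
m, nB = 3, 5
# Hypothetical A_B with known singular values 1.0, 0.7, 0.4 (so ||A_B||_2 = 1)
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((nB, nB)))
s = np.array([1.0, 0.7, 0.4])
A_B = U @ np.diag(s) @ V[:, :m].T
eta = 0.9                                  # eta * ||A_B||_2 = 0.9 < 1

# Each 2x2 block J spirals inward: complex eigenvalue pair of
# modulus sqrt(1 - (eta*sigma)^2) < 1
for sigma in s:
    t = eta * sigma
    J = np.array([[1.0, t], [-t, 1.0 - 2.0 * t**2]])
    assert np.allclose(np.abs(np.linalg.eigvals(J)), np.sqrt(1.0 - t**2))

# Hence P_B^k converges to the block projector onto ker(A_B) x ker(A_B^T), as in (25)
P_B = np.block([[np.eye(nB),  eta * A_B.T],
                [-eta * A_B,  np.eye(m) - 2 * eta**2 * A_B @ A_B.T]])
pinv = np.linalg.pinv(A_B)
limit = np.block([[np.eye(nB) - pinv @ A_B, np.zeros((nB, m))],
                  [np.zeros((m, nB)),       np.eye(m) - A_B @ pinv]])
assert np.allclose(np.linalg.matrix_power(P_B, 2000), limit, atol=1e-9)
```

Here the projectors 𝐕|B|−r𝐕|B|−r⊤ and 𝐔m−r𝐔m−r⊤ are expressed via the Moore–Penrose inverse as I−𝐀B†𝐀B and I−𝐀B𝐀B†, which avoids extracting the SVD blocks explicitly.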

For the vector 𝐳(0)𝐳𝐯\mathbf{z}^{(0)}-\mathbf{z}_{\mathbf{v}}, we have

𝐱B(0)\displaystyle\mathbf{x}_{B}^{(0)} =[𝐈𝐀B(𝐀B𝐀B)𝐀B]𝐱B(0)+𝐀B(𝐀B𝐀B)𝐀B𝐱B(0)\displaystyle=\left[\mathbf{I}-\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\right]\mathbf{x}_{B}^{(0)}+\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\mathbf{x}_{B}^{(0)}
=Proj𝐀B𝐱=𝟎(𝐱B(0))+𝐀B(𝐀B𝐀B)𝐀B𝐱B(0).\displaystyle=\operatorname{Proj}_{\mathbf{A}_{B}\mathbf{x}=\mathbf{0}}(\mathbf{x}_{B}^{(0)})+\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\mathbf{x}_{B}^{(0)}.

and

(𝐱𝐯)B\displaystyle(\mathbf{x}_{\mathbf{v}})_{B} =(𝐀B𝐀B)𝐀B𝐛+Proj𝐀B𝐱=𝟎(𝐱B(0))\displaystyle=(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}\mathbf{A}_{B}^{\top}\mathbf{b}+\operatorname{Proj}_{\mathbf{A}_{B}\mathbf{x}=\mathbf{0}}(\mathbf{x}_{B}^{(0)})
=𝐀B(𝐀B𝐀B)𝐛+Proj𝐀B𝐱=𝟎(𝐱B(0))\displaystyle=\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{b}+\operatorname{Proj}_{\mathbf{A}_{B}\mathbf{x}=\mathbf{0}}(\mathbf{x}_{B}^{(0)})

Here we utilize the property of the Moore–Penrose inverse that (𝐀B𝐀B)𝐀B=𝐀B=𝐀B(𝐀B𝐀B)(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}\mathbf{A}_{B}^{\top}=\mathbf{A}_{B}^{\dagger}=\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}. Then we have

𝐱B(0)(𝐱𝐯)B\displaystyle\mathbf{x}_{B}^{(0)}-(\mathbf{x}_{\mathbf{v}})_{B} =𝐀B(𝐀B𝐀B)(𝐀B𝐱B(0)𝐛)\displaystyle=\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}(\mathbf{A}_{B}\mathbf{x}_{B}^{(0)}-\mathbf{b})
𝐲(0)𝐲𝐯\displaystyle\mathbf{y}^{(0)}-\mathbf{y}_{\mathbf{v}} =𝐀B(𝐀B𝐀B)(𝐀B𝐲(0)𝐜B),\displaystyle=\mathbf{A}_{B}(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}(\mathbf{A}_{B}^{\top}\mathbf{y}^{(0)}-\mathbf{c}_{B}),

where the 𝐲(0)𝐲𝐯\mathbf{y}^{(0)}-\mathbf{y}_{\mathbf{v}} part can be verified in a similar way.
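These displacement formulas follow from the Moore–Penrose identities and can be checked directly. The NumPy sketch below is an illustration only (random data standing in for 𝐀B\mathbf{A}_{B}, 𝐛\mathbf{b}, and 𝐱B(0)\mathbf{x}_{B}^{(0)}); it verifies the primal identity, and the dual one is symmetric:

```python
import numpy as np

rng = np.random.default_rng(1)
m, nB = 3, 5                               # hypothetical sizes for A_B
A_B = rng.standard_normal((m, nB))
b = rng.standard_normal(m)
x0 = rng.standard_normal(nB)               # plays the role of x_B^(0)

pinv = np.linalg.pinv(A_B)                 # Moore-Penrose inverse A_B^dagger
proj_ker = np.eye(nB) - pinv @ A_B         # orthogonal projector onto ker(A_B)

# (x_v)_B = A_B^dagger b + Proj_{A_B x = 0}(x_B^(0)), as derived above
x_v = pinv @ b + proj_ker @ x0

# Displacement formula: x_B^(0) - (x_v)_B = A_B^T (A_B A_B^T)^dagger (A_B x_B^(0) - b)
lhs = x0 - x_v
rhs = A_B.T @ np.linalg.pinv(A_B @ A_B.T) @ (A_B @ x0 - b)
assert np.allclose(lhs, rhs)
```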

To prove (8), from (25) we have

limk𝐏Bk[𝐱B(0)(𝐱𝐯)B𝐲(0)𝐲𝐯]=[𝐕|B|r𝐕|B|r𝐀B(𝐀B𝐀B)(𝐀B𝐱B(0)𝐛)𝐔mr𝐔mr𝐀B(𝐀B𝐀B)(𝐀B𝐲(0)𝐜B)]=𝟎,\lim_{k\to\infty}\mathbf{P}_{B}^{k}\begin{bmatrix}\mathbf{x}_{B}^{(0)}-(\mathbf{x}_{\mathbf{v}})_{B}\\ \mathbf{y}^{(0)}-\mathbf{y}_{\mathbf{v}}\end{bmatrix}\\ =\begin{bmatrix}\mathbf{V}_{|B|-r}\mathbf{V}_{|B|-r}^{\top}\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}(\mathbf{A}_{B}\mathbf{x}_{B}^{(0)}-\mathbf{b})\\ \mathbf{U}_{m-r}\mathbf{U}_{m-r}^{\top}\mathbf{A}_{B}(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}(\mathbf{A}_{B}^{\top}\mathbf{y}^{(0)}-\mathbf{c}_{B})\end{bmatrix}=\mathbf{0}, (26)

where the second equation comes from (24).

To prove (9), for all kk, we use (22) to obtain

𝐯B𝐏Bk[𝐱B(0)(𝐱𝐯)B𝐲(0)𝐲𝐯]=𝐯B[𝐱B(0)(𝐱𝐯)B𝐲(0)𝐲𝐯]\displaystyle\mathbf{v}_{B}^{\top}\mathbf{P}_{B}^{k}\begin{bmatrix}\mathbf{x}_{B}^{(0)}-(\mathbf{x}_{\mathbf{v}})_{B}\\ \mathbf{y}^{(0)}-\mathbf{y}_{\mathbf{v}}\end{bmatrix}=\mathbf{v}_{B}^{\top}\begin{bmatrix}\mathbf{x}_{B}^{(0)}-(\mathbf{x}_{\mathbf{v}})_{B}\\ \mathbf{y}^{(0)}-\mathbf{y}_{\mathbf{v}}\end{bmatrix}
=\displaystyle= [η[𝐜B𝐀B(𝐀B𝐀B)𝐀B𝐜B]η[𝐛𝐀B(𝐀B𝐀B)𝐀B𝐛]][𝐀B(𝐀B𝐀B)(𝐀B𝐱B(0)𝐛)𝐀B(𝐀B𝐀B)(𝐀B𝐲(0)𝐜B)]\displaystyle\begin{bmatrix}-\eta\left[\mathbf{c}_{B}-\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\mathbf{c}_{B}\right]\\ \eta\left[\mathbf{b}-\mathbf{A}_{B}(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}\mathbf{A}_{B}^{\top}\mathbf{b}\right]\end{bmatrix}^{\top}\begin{bmatrix}\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}(\mathbf{A}_{B}\mathbf{x}_{B}^{(0)}-\mathbf{b})\\ \mathbf{A}_{B}(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}(\mathbf{A}_{B}^{\top}\mathbf{y}^{(0)}-\mathbf{c}_{B})\end{bmatrix}
=\displaystyle= [−η𝐜Bη𝐛][𝐈|B|𝐀B(𝐀B𝐀B)𝐀B𝐈m𝐀B(𝐀B𝐀B)𝐀B][𝐀B(𝐀B𝐀B)(𝐀B𝐱B(0)𝐛)𝐀B(𝐀B𝐀B)(𝐀B𝐲(0)𝐜B)]\begin{bmatrix}-\eta\mathbf{c}_{B}\\ \eta\mathbf{b}\end{bmatrix}^{\top}\begin{bmatrix}\mathbf{I}_{|B|}-\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\\ &\mathbf{I}_{m}-\mathbf{A}_{B}(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}\mathbf{A}_{B}^{\top}\end{bmatrix}\begin{bmatrix}\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}(\mathbf{A}_{B}\mathbf{x}_{B}^{(0)}-\mathbf{b})\\ \mathbf{A}_{B}(\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}(\mathbf{A}_{B}^{\top}\mathbf{y}^{(0)}-\mathbf{c}_{B})\end{bmatrix}
=\displaystyle= [−η𝐜Bη𝐛][𝐀B𝐀B(𝐀B)𝐀B𝐀B𝐀B𝐀B𝐀B][(𝐀B𝐀B)(𝐀B𝐱B(0)𝐛)(𝐀B𝐀B)(𝐀B𝐲(0)𝐜B)]\begin{bmatrix}-\eta\mathbf{c}_{B}\\ \eta\mathbf{b}\end{bmatrix}^{\top}\begin{bmatrix}\mathbf{A}_{B}^{\top}-\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}^{\top}\\ &\mathbf{A}_{B}-\mathbf{A}_{B}\mathbf{A}_{B}^{\dagger}\mathbf{A}_{B}\end{bmatrix}\begin{bmatrix}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}(\mathbf{A}_{B}\mathbf{x}_{B}^{(0)}-\mathbf{b})\\ (\mathbf{A}_{B}^{\top}\mathbf{A}_{B})^{\dagger}(\mathbf{A}_{B}^{\top}\mathbf{y}^{(0)}-\mathbf{c}_{B})\end{bmatrix}
=\displaystyle= 𝟎,\displaystyle\mathbf{0},

where the last equality uses the Moore–Penrose inverse property 𝐀B=𝐀B𝐀B𝐀B\mathbf{A}_{B}=\mathbf{A}_{B}\mathbf{A}_{B}^{\dagger}\mathbf{A}_{B}. ∎

Appendix B Proof of Proposition 8

Proof of Proposition 8.

From (6), we have

𝐀B𝐀B(𝐱𝐯)B\displaystyle\mathbf{A}_{B}^{\top}\mathbf{A}_{B}(\mathbf{x}_{\mathbf{v}})_{B} =𝐀B𝐛\displaystyle=\mathbf{A}_{B}^{\top}\mathbf{b} (27)
𝐀B𝐀B𝐲𝐯\displaystyle\mathbf{A}_{B}\mathbf{A}_{B}^{\top}\mathbf{y}_{\mathbf{v}} =𝐀B𝐜B.\displaystyle=\mathbf{A}_{B}\mathbf{c}_{B}.

(𝐱𝐯)B(\mathbf{x}_{\mathbf{v}})_{B} is the optimal solution to the OLS problem min12𝐀B𝐱B𝐛22\min\frac{1}{2}\|\mathbf{A}_{B}\mathbf{x}_{B}-\mathbf{b}\|_{2}^{2}, while 𝐲𝐯\mathbf{y}_{\mathbf{v}} is the optimal solution to the OLS problem min12𝐀B𝐲𝐜B22\min\frac{1}{2}\|\mathbf{A}_{B}^{\top}\mathbf{y}-\mathbf{c}_{B}\|_{2}^{2}.
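This OLS characterization is easy to confirm numerically: the least-squares solutions of the two problems satisfy the normal equations (27). A minimal NumPy sketch with random stand-in data (an illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
m, nB = 4, 6                               # hypothetical sizes for A_B
A_B = rng.standard_normal((m, nB))
b = rng.standard_normal(m)
c_B = rng.standard_normal(nB)

# Least-squares solutions of the two OLS problems
x_v, *_ = np.linalg.lstsq(A_B, b, rcond=None)       # argmin ||A_B x - b||_2
y_v, *_ = np.linalg.lstsq(A_B.T, c_B, rcond=None)   # argmin ||A_B^T y - c_B||_2

# They satisfy the normal equations (27)
assert np.allclose(A_B.T @ (A_B @ x_v), A_B.T @ b)
assert np.allclose(A_B @ (A_B.T @ y_v), A_B @ c_B)
```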

From (7), we have

𝐀B(𝐯𝐱)B\displaystyle\mathbf{A}_{B}(\mathbf{v}_{\mathbf{x}})_{B} =𝟎\displaystyle=\mathbf{0} (28)
𝐀B𝐯𝐲\displaystyle\mathbf{A}_{B}^{\top}\mathbf{v}_{\mathbf{y}} =𝟎,\displaystyle=\mathbf{0},

implying that the forward movement will not improve the primal or dual feasibility.

Suppose we have the SVD of 𝐀B\mathbf{A}_{B} in (23), and let 𝚺r\bm{\Sigma}_{r} denote the r×rr\times r leading diagonal block of 𝚺\bm{\Sigma} containing the nonzero singular values. Then

𝐜𝐯𝐱\displaystyle\mathbf{c}^{\top}\mathbf{v}_{\mathbf{x}} =η𝐜B[𝐜B𝐀B(𝐀B𝐀B)𝐀B𝐜B]\displaystyle=-\eta\mathbf{c}_{B}^{\top}\left[\mathbf{c}_{B}-\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\mathbf{c}_{B}\right]
=η𝐜B[𝐈|B|𝐀B(𝐀B𝐀B)𝐀B]𝐜B\displaystyle=-\eta\mathbf{c}_{B}^{\top}\left[\mathbf{I}_{|B|}-\mathbf{A}_{B}^{\top}(\mathbf{A}_{B}\mathbf{A}_{B}^{\top})^{\dagger}\mathbf{A}_{B}\right]\mathbf{c}_{B}
=η𝐜B[𝐈|B|𝐕𝚺𝐔(𝐔𝚺𝚺𝐔)𝐔𝚺𝐕]𝐜B\displaystyle=-\eta\mathbf{c}_{B}^{\top}\left[\mathbf{I}_{|B|}-\mathbf{V}\bm{\Sigma}^{\top}\mathbf{U}^{\top}(\mathbf{U}\bm{\Sigma}\bm{\Sigma}^{\top}\mathbf{U}^{\top})^{\dagger}\mathbf{U}\bm{\Sigma}\mathbf{V}^{\top}\right]\mathbf{c}_{B}
=η𝐜B[𝐈|B|𝐕𝚺(𝚺𝚺)𝚺𝐕]𝐜B\displaystyle=-\eta\mathbf{c}_{B}^{\top}\left[\mathbf{I}_{|B|}-\mathbf{V}\bm{\Sigma}^{\top}(\bm{\Sigma}\bm{\Sigma}^{\top})^{\dagger}\bm{\Sigma}\mathbf{V}^{\top}\right]\mathbf{c}_{B}
=η𝐜B[𝐈|B|𝐕r𝚺r𝚺r2𝚺r𝐕r]𝐜B\displaystyle=-\eta\mathbf{c}_{B}^{\top}\left[\mathbf{I}_{|B|}-\mathbf{V}_{r}\bm{\Sigma}_{r}\bm{\Sigma}_{r}^{-2}\bm{\Sigma}_{r}\mathbf{V}_{r}^{\top}\right]\mathbf{c}_{B}
=η𝐜B[𝐈|B|𝐕r𝐕r]𝐜B\displaystyle=-\eta\mathbf{c}_{B}^{\top}\left[\mathbf{I}_{|B|}-\mathbf{V}_{r}\mathbf{V}_{r}^{\top}\right]\mathbf{c}_{B}
=η𝐜B[𝐕|B|r𝐕|B|r]𝐜B\displaystyle=-\eta\mathbf{c}_{B}^{\top}\left[\mathbf{V}_{|B|-r}\mathbf{V}_{|B|-r}^{\top}\right]\mathbf{c}_{B}
0,\displaystyle\leq 0,

and similarly

𝐛𝐯𝐲=η𝐛[𝐔mr𝐔mr]𝐛0.\mathbf{b}^{\top}\mathbf{v}_{\mathbf{y}}=\eta\mathbf{b}^{\top}\left[\mathbf{U}_{m-r}\mathbf{U}_{m-r}^{\top}\right]\mathbf{b}\geq 0.

The inequality 𝐜𝐯𝐱0\mathbf{c}^{\top}\mathbf{v}_{\mathbf{x}}\leq 0 holds with equality if and only if 𝐕|B|r𝐜B=𝟎\mathbf{V}_{|B|-r}^{\top}\mathbf{c}_{B}=\mathbf{0}, i.e., 𝐜B\mathbf{c}_{B} belongs to the image of 𝐀B\mathbf{A}_{B}^{\top}. From (7), 𝐜B\mathbf{c}_{B} belongs to the image of 𝐀B\mathbf{A}_{B}^{\top} if and only if (𝐯𝐱)B(\mathbf{v}_{\mathbf{x}})_{B} belongs to the image of 𝐀B\mathbf{A}_{B}^{\top}. Moreover, (28) tells us that (𝐯𝐱)B(\mathbf{v}_{\mathbf{x}})_{B} belongs to the kernel of 𝐀B\mathbf{A}_{B}, so (𝐯𝐱)B(\mathbf{v}_{\mathbf{x}})_{B} belongs to the image of 𝐀B\mathbf{A}_{B}^{\top} if and only if (𝐯𝐱)B=𝟎(\mathbf{v}_{\mathbf{x}})_{B}=\mathbf{0}. Similarly, 𝐛𝐯𝐲0\mathbf{b}^{\top}\mathbf{v}_{\mathbf{y}}\geq 0 holds with equality if and only if 𝐔mr𝐛=𝟎\mathbf{U}_{m-r}^{\top}\mathbf{b}=\mathbf{0}, i.e., 𝐛\mathbf{b} belongs to the image of 𝐀B\mathbf{A}_{B}, or equivalently if and only if 𝐯𝐲=𝟎\mathbf{v}_{\mathbf{y}}=\mathbf{0}. Combining the primal and dual parts completes the proof. ∎
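As an illustration of the sign pattern established in this proof (a NumPy sketch with random data standing in for 𝐀B\mathbf{A}_{B}, 𝐛\mathbf{b}, and 𝐜B\mathbf{c}_{B}; not from the paper), the forward-movement directions from (7) satisfy 𝐜𝐯𝐱0\mathbf{c}^{\top}\mathbf{v}_{\mathbf{x}}\leq 0 and 𝐛𝐯𝐲0\mathbf{b}^{\top}\mathbf{v}_{\mathbf{y}}\geq 0:

```python
import numpy as np

rng = np.random.default_rng(3)
m, nB = 3, 5                               # hypothetical sizes for A_B
A_B = rng.standard_normal((m, nB))
b = rng.standard_normal(m)
c_B = rng.standard_normal(nB)
eta = 0.9 / np.linalg.norm(A_B, 2)

pinv = np.linalg.pinv(A_B)
v_x = -eta * (c_B - pinv @ (A_B @ c_B))    # -eta times projection of c_B onto ker(A_B)
v_y = eta * (b - A_B @ (pinv @ b))         # eta times projection of b onto ker(A_B^T)

assert c_B @ v_x <= 1e-12                  # c^T v_x <= 0: the gap moves forward
assert b @ v_y >= -1e-12                   # b^T v_y >= 0
# Here m < |B| and A_B has full row rank, so ker(A_B^T) = {0} and v_y vanishes,
# matching the equality case characterized above
assert np.allclose(v_y, 0.0)
```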