Iteration Complexity of an Infeasible Interior Point Method for Second-Order Cone Programming and its Warmstarting
Abstract
This paper studies the worst-case iteration complexity of an infeasible interior point method (IPM) for second-order cone programming (SOCP), which is more convenient for warmstarting than feasible IPMs. The method studied is based on the homogeneous and self-dual model and the Monteiro-Zhang family of searching directions. Its worst-case iteration complexity is $\mathcal{O}(\sqrt{r}\log(\epsilon^{-1}))$, to reduce the primal residual, dual residual, and complementarity gap by a factor of $\epsilon$, where $r$ is the number of cone constraints. The result matches the best known result for feasible IPMs. The condition under which warmstarting improves the complexity bound is also studied.
1 Introduction
Warm starting of IPMs is widely perceived to be difficult [Potra and Wright, 2000]. When the initial point is close to the boundary and not well-centered, IPMs usually converge very slowly. Since infeasible IPMs can be initialized trivially inside the cone constraints without satisfying the equality constraints, they are more convenient for warm starting than feasible IPMs [Skajaa et al., 2013]. In order to study the convergence of warm starting schemes for SOCP, a better understanding of the iteration complexity of infeasible IPMs is required. This paper studies the iteration complexity of an infeasible interior point method for SOCP, and presents the condition under which warmstarting improves the complexity.
Most of the existing works on the worst-case iteration complexity of IPMs for SOCP focus on feasible IPMs, requiring that the iterates satisfy the equality constraints. Among these works, Nesterov and Todd [1997, 1998] present the Nesterov-Todd (NT) searching direction and show that the short-step path-following algorithm [Kojima et al., 1989] using this direction has $\mathcal{O}(\sqrt{r}\log(\epsilon^{-1}))$ iteration complexity, where $r$ is the number of cones. Tsuchiya [1999] further studies the convergence of path-following algorithms for SOCP, and shows that: (1) using the HRVW/KSH/M directions [Helmberg et al., 1996, Kojima et al., 1997, Monteiro, 1997], the iteration complexities of the short-step, semi-long-step, and long-step methods are , , and , respectively; (2) using the NT directions, the iteration complexities of the methods are , , and , respectively. Monteiro and Tsuchiya [2000] extend the $\mathcal{O}(\sqrt{r}\log(\epsilon^{-1}))$ complexity bound to the short-step method and the Mizuno-Todd-Ye predictor-corrector method using the Monteiro-Zhang (MZ) family of directions [Monteiro, 1997, Monteiro and Zhang, 1998, Zhang, 1998], which includes the HRVW/KSH/M directions and NT directions as special cases. Mohammad-Nezhad and Terlaky [2019] show that a Newton method converges quadratically when initialized with an IPM solution with a sufficiently small complementarity gap.
Compared with feasible IPMs, the iteration complexity of infeasible IPMs for SOCP has not been thoroughly investigated. Rangarajan [2006] studies the convergence of infeasible IPMs for conic programming, which includes SOCP as a special case, and shows that, using the central path neighborhood, the method using the NT directions takes iterations, and the method using the XS or SX directions [Zhang, 1998] takes iterations.
In this paper, we show that an infeasible IPM for SOCP has $\mathcal{O}(\sqrt{r}\log(\epsilon^{-1}))$ iteration complexity, which is the same as the best known result for feasible IPMs. We also present the condition under which warmstarting improves the bound. The method studied is an extension of the well-known short-step path-following algorithm for linear programming, carried over to the context of SOCP. The modifications are as follows: firstly, the iterates are not required to be feasible with respect to the equality constraints, but only to lie in the interior of the cone constraints, so that the method is convenient for warmstarting; secondly, a homogeneous and self-dual (HSD) model [Nemirovskii and Nesterov, 1993, Peng et al., 2003] is applied, so that an infeasibility certificate can be obtained when either the primal or the dual problem is infeasible; thirdly, the MZ family of searching directions is employed for acceleration. The warmstarting scheme analyzed is presented in a companion paper [Chen et al., 2022], which uses initial values modified from inexact solutions of previous problems to save computation.
The paper is organized as follows. Section 2 introduces the SOCP problem and the IPM for solving it. Section 3 studies the iteration complexity for the IPM to reduce the primal residual, dual residual, and complementarity gap. Section 4 studies the conditions under which warm starting can improve the worst-case complexity compared with cold starting.
1.1 Notation and terminology
The following notation is used throughout the paper. denotes the set of positive real numbers, denotes the -dimensional Euclidean space, and denotes the set of all matrices with real entries. denotes the identity matrix of order . means is symmetric and positive definite, and means is symmetric and positive semi-definite. For a square matrix with all real eigenvalues, its largest and smallest eigenvalues are denoted by and . For a set , denotes its interior. denotes a diagonal matrix with a given vector as its diagonal, and denotes a block diagonal matrix with given matrix blocks. without specification is used instead of for simplicity. With a slight abuse of notation, when concatenating column vectors and , we write instead of for simplicity.
2 The SOCP problem and the infeasible IPM
In this section, we introduce the second-order cone programming (SOCP) problem, some important concepts about the problem, and the infeasible IPM for SOCP studied in this paper.
2.1 The SOCP problem
SOCP minimizes a linear function over the intersection of an affine set and the direct product of linear cones (LCs) and second-order cones (SOCs).
The linear cones and second-order cones are defined as
(1) $\mathcal{K}_{\mathrm{L}}^{n}=\{x\in\mathbb{R}^{n}: x\ge 0\}, \qquad \mathcal{Q}^{n}=\{x=(x_{0};\bar{x})\in\mathbb{R}\times\mathbb{R}^{n-1}: x_{0}\ge\|\bar{x}\|\}.$
The 1-dimensional SOC is defined as $\mathcal{Q}^{1}=\{x\in\mathbb{R}: x\ge 0\}$.
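As a concrete illustration of these cone definitions, a minimal membership check can be sketched as follows (the function names are ours, for illustration only):

```python
import math

def in_linear_cone(x):
    """Membership test for the linear cone: all components nonnegative."""
    return all(xi >= 0 for xi in x)

def in_second_order_cone(x):
    """Membership test for the second-order cone: x = (x0; x_bar)
    with x0 >= ||x_bar||_2. A 1-dimensional input reduces to x0 >= 0."""
    x0, x_bar = x[0], x[1:]
    return x0 >= math.sqrt(sum(v * v for v in x_bar))
```

For example, `(2, 1, 1)` lies in the SOC since $2 \ge \sqrt{2}$, while `(1, 1, 1)` does not.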
Then, the standard form of SOCP is defined as
(2) | ||||
s.t. | ||||
where , . Without loss of generality, we assume that in the solution variable the linear cone comes first, followed by the SOCs. The number of cones is
(3) |
where the -dimensional LC is viewed as 1-dimensional LCs, so that for . The -th cone is also denoted by for simplicity, and correspondingly . For a vector , we use to denote the subvector in the cone . For , also denotes the subvector corresponding to the position of .
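The block structure above can be made concrete with a small helper that splits a stacked variable into its per-cone subvectors (a sketch; the name and interface are ours):

```python
def split_into_cones(x, dims):
    """Split the stacked variable x into per-cone subvectors x^(i);
    dims[i] is the dimension n_i of the i-th cone (each linear-cone
    component counts as a 1-dimensional cone)."""
    blocks, start = [], 0
    for n in dims:
        blocks.append(x[start:start + n])
        start += n
    assert start == len(x), "cone dimensions must sum to len(x)"
    return blocks
```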
The dual problem of SOCP Eq. (2) is
(4) | ||||
s.t. | ||||
2.2 Important concepts about SOCP
This subsection introduces some important concepts about SOCP for later use.
Firstly, we introduce the Euclidean Jordan algebra associated with the SOC (see e.g. [Faraut and Koranyi, 1994, Faybusovich, 1997a, b]), which is crucial in the IPMs for SOCP. The algebra for the SOC is defined as
(5) $x\circ y=\big(x^{\top}y;\; x_{0}\bar{y}+y_{0}\bar{x}\big)$
The Euclidean Jordan algebra for is defined as
(6) $x\circ y=\big(x^{(1)}\circ y^{(1)};\;\ldots;\;x^{(r)}\circ y^{(r)}\big)$
From now on, we assume that the space is endowed with the Euclidean Jordan algebra.
The unit element of the algebra is $e=(e^{(1)};\ldots;e^{(r)})$, where $e^{(i)}=(1;0)\in\mathcal{Q}^{n_i}$, because $e\circ x=x\circ e=x$. The Jordan product can then be written as the multiplication of an arrow-head matrix with a vector, as
(7) $x\circ y=\mathrm{Arw}(x)\,y,$
where the arrow-head matrix for $x=(x_{0};\bar{x})$ is defined as
(8) $\mathrm{Arw}(x)=\begin{pmatrix} x_{0} & \bar{x}^{\top}\\ \bar{x} & x_{0}I \end{pmatrix}.$
It is easy to verify that the eigenvalues of the arrow-head matrix satisfy
(9) $\lambda_{\max}(\mathrm{Arw}(x))=x_{0}+\|\bar{x}\|, \qquad \lambda_{\min}(\mathrm{Arw}(x))=x_{0}-\|\bar{x}\|.$
Consequently, we define the notation
(10) $\lambda_{\max}(x):=x_{0}+\|\bar{x}\|, \qquad \lambda_{\min}(x):=x_{0}-\|\bar{x}\|.$
For a vector , and are used as abbreviations of and , where denotes the part of in the -th cone.
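A minimal numpy sketch of the arrow-head matrix, the Jordan product, and the eigenvalue formulas for a single SOC block (assuming the standard definitions above; the variable names are ours):

```python
import numpy as np

def arrow(x):
    """Arrow-head matrix Arw(x) = [[x0, x_bar^T], [x_bar, x0*I]]."""
    x = np.asarray(x, dtype=float)
    A = x[0] * np.eye(len(x))
    A[0, 1:] = x[1:]
    A[1:, 0] = x[1:]
    return A

def jordan_product(x, y):
    """Jordan product x o y = Arw(x) @ y for a single SOC block."""
    return arrow(x) @ np.asarray(y, dtype=float)

x = np.array([2.0, 1.0, 0.0])
e = np.array([1.0, 0.0, 0.0])            # unit element of the algebra
lam_max = x[0] + np.linalg.norm(x[1:])   # largest eigenvalue of Arw(x)
lam_min = x[0] - np.linalg.norm(x[1:])   # smallest eigenvalue of Arw(x)
```

One can check numerically that `x o e = x` and that the spectrum of `arrow(x)` indeed contains $x_0 \pm \|\bar{x}\|$.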
Secondly, we introduce the homogeneous and self-dual (HSD) model and the central path. The HSD model combines the primal and dual problems and enables them to be solved simultaneously. The model is given by
(11) | ||||
s.t. | ||||
where is free, and , , , are residuals defined by
(12) | ||||
is typically set to 1.
The solution of the HSD model fully characterizes the primal and dual problems, as summarized by the following theorem.
Theorem 1. Let be an optimal solution of the HSD model Eq. (11). Then, the following relations hold [Terlaky et al., 2002, Wang, 2003]:
a) and ;
b) If , then is the optimal solution of the primal problem Eq. (2) and the dual problem Eq. (4);
c) If , then at least one of the inequalities and holds. If the first inequality holds, then the primal problem Eq. (2) is infeasible. If the second inequality holds, then the dual problem Eq. (4) is infeasible.
Proof. See Corollary 1.3.1 of [Wang, 2003].
The central path is defined by
(13) | ||||
The central path approaches the optimal solution of the HSD model as , which corresponds to the solution or an infeasibility certificate of the SOCP problem.
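As a sketch of what such a parametrization typically looks like, under the usual homogeneous-IPM convention (reference point at the initial iterate, initial parameter $\mu_0 = 1$; this is an assumption on our part, not necessarily the paper's exact display), the central path is the set of points satisfying

```latex
x \circ s = \mu e, \qquad \tau\kappa = \mu, \qquad
(r_p,\; r_d,\; r_g) = \mu\,(r_p^{0},\; r_d^{0},\; r_g^{0}), \qquad \mu \in (0, 1],
```

i.e., the complementarity conditions are perturbed by $\mu$ while the linear residuals shrink proportionally to $\mu$.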
Thirdly, we introduce the scaling matrix, which can be applied to scale the searching directions. Monteiro and Tsuchiya [2000] introduce a group of scaling automorphisms that map the cone onto itself, given by the following group of matrices
(14) |
where is a positive scaling value, and
(15) |
The set of scaling matrices for a vector in is defined as
(16) |
where and
(17) |
The solution variables and are scaled by
(18) |
For convenience, we denote a scaling matrix by
(19) |
The following proposition gives several important properties of the scaling matrix.
Proposition 1 [Andersen et al., 2003]. Let , then
(20) | ||||
Proof. See Lemma 3 of [Andersen et al., 2003].
Proposition 2 [Tsuchiya, 1999]. For a vector , is the unique scaling matrix which maps to . Moreover, .
Proof. See Proposition 2.1 of [Tsuchiya, 1999].
Finally, we are ready to define the scaling-invariant neighborhoods of the central path. For , define the central path coefficient
(23) |
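A plausible concrete form of this coefficient, following the standard normalization in HSD-based IPMs (an assumption on our part, consistent with the complementarity gap $x^{\top}s + \tau\kappa$ used later), is

```latex
\mu(x, \tau, s, \kappa) \;=\; \frac{x^{\top} s \;+\; \tau\kappa}{r + 1},
```

so that $\mu = 1$ at the conventional starting point $x = s = e$, $\tau = \kappa = 1$.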
With a little abuse of notation, we also use instead of for simplicity. The central path distances are defined as
(24) | ||||
where
(25) | ||||
for .
The neighborhoods of the central path determined by the above distance are
(26) | ||||
where the parameter . For , it is easy to show , so that .
The following proposition shows the invariance property of .
Proposition 3 [Tsuchiya, 1999]. Suppose that . Let and , . Then:
a) and ;
b) and ;
c) and .
Proof. See Proposition 2.4 of Tsuchiya [1999].
Using Proposition 3, it is easy to verify that for a point on the central path defined by Eq. (13), the distances .
2.3 The infeasible IPM for SOCP
In this subsection, we briefly introduce the infeasible IPM for SOCP studied in the paper.
The SOCP problem can be solved by loosely tracking the central path toward the optimal solution of the HSD model. To this end, the updating direction in each iteration is computed by solving
(27) | ||||
where , . is defined in Eq. (23), which is proportional to the complementarity gap .
Instead of using the searching direction obtained by solving Eq. (27) directly, the MZ family of directions is applied to speed up the convergence. The directions are calculated by solving a scaled linear system, as
(28) | ||||
where
(29) |
The scaled linear system Eq. (28) is obtained by using the scaled variable instead of and in the HSD model Eq. (11).
The MZ family of directions includes the HRVW/KSH/M directions and NT directions as special cases. For example, in the NT scaling, the scaling matrix is symmetric and satisfies , and the multiplication with or can be calculated in $\mathcal{O}(n)$ operations.
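As an illustration of the $\mathcal{O}(n)$ property, conic-solver implementations often store the SOC block of the NT scaling in the low-rank-plus-diagonal form $W = \eta\,(2\bar{w}\bar{w}^{\top} - J)$ with $J = \mathrm{diag}(1, -1, \ldots, -1)$; the sketch below applies such a matrix to a vector without ever forming it. This representation is borrowed from solver practice and is an illustrative assumption, not necessarily the construction used in this paper.

```python
import numpy as np

def nt_scale_apply(eta, w_bar, x):
    """Apply W = eta * (2 * w_bar w_bar^T - J) to x in O(n) operations,
    where J = diag(1, -1, ..., -1); W is never formed explicitly."""
    Jx = x.copy()
    Jx[1:] = -Jx[1:]
    return eta * (2.0 * w_bar * (w_bar @ x) - Jx)

# Sanity check: at the scaling point w_bar = e1 (and eta = 1),
# the scaling reduces to the identity.
e1 = np.array([1.0, 0.0, 0.0])
x = np.array([3.0, 1.0, -2.0])
```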
Then, the infeasible IPM for SOCP is presented as Algorithm 1.
Input: Constant coefficients , a small positive threshold ; the SOCP problem parameters ; initial value .
Output: the solution and final status.
In Algorithm 1, the stopping criterion is as follows:
(30) | ||||
where the primal residual and the dual residual are defined as
(31) |
For convenience, we also define
(32) |
The criterion indicates that the primal residual, dual residual, and complementarity gap are reduced by a certain ratio . When the criterion is satisfied, the final status is decided using an approximate version of Theorem 1: if , then the problem is solved; if , the primal problem is infeasible if , and the dual problem is infeasible if . Some important implementation issues, such as the maximum iteration number limit and rounding errors, are not considered in the theoretical analysis of this paper.
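A simplified sketch of such a relative-reduction test (the function and variable names are ours; the paper's exact criterion involves the quantities in Eq. (30)-(32)):

```python
def should_stop(res_p, res_d, mu, res_p0, res_d0, mu0, eps):
    """Stop once the primal residual, dual residual, and complementarity
    measure have all been reduced by the factor eps relative to their
    initial values (simplified, illustrative form)."""
    return (res_p <= eps * res_p0 and
            res_d <= eps * res_d0 and
            mu <= eps * mu0)
```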
Algorithm 1 obtains the solution of the HSD model. If the problem is solved, the solution of the SOCP problem is . In order to guarantee the convergence of Algorithm 1, the parameters should be chosen to satisfy certain conditions, which will be presented in Proposition 4 in the next section.
3 Analysis of the iteration complexity
In order to prove polynomial convergence in the worst case, it is crucial to show two points: (1) the update keeps the solution in a central path neighborhood, which is a subset of the interior of the cone constraints; (2) when the solution is bounded in the central path neighborhood, the primal residual, dual residual, and complementarity gap can be reduced by a certain ratio in each iteration.
This section is organized as follows: firstly, an equivalent form of the HSD model is proposed to simplify the subsequent analysis; secondly, the well-definedness of the searching directions is proved; thirdly, the improvement of the primal residual, dual residual, and complementarity gap is investigated; fourthly, several important results are presented to bound the update within the central path neighborhood for the unscaled search directions; fifthly, polynomial convergence for the unscaled search directions is proved; finally, the proof is extended to the MZ family of search directions.
3.1 An equivalent form of the HSD model
In the HSD model, and are variables in , but they act differently from the other variables in LCs. Although it is theoretically not difficult to treat them as additional terms, the resulting analysis would be long and tedious. In order to simplify the analysis, we propose an equivalent form of the HSD model in this subsection.
Consider the following change of variables
(33) |
where and .
It is easy to verify that the linear system Eq. (27) can be written as
(34) | ||||
where , . It is worth noting that the second and third equations in Eq. (27) are combined into the second equation in Eq. (34).
The central path coefficient is also simplified as
(37) |
Because the two linear systems are identical, solving the HSD model by Eq. (28) is identical to solving the model by Eq. (35). Without loss of generality, the last LC can be viewed as an SOC . Then, with a slight abuse of notation, Eq. (35) is a special case of the following generalized linear system
(38) | ||||
where the variables are redefined for simplicity, as
(39) | ||||
The central path coefficient for the generalized linear system Eq. (38) is defined as
(40) |
where is the complementarity gap . The residuals are defined as
(41) |
For the special case Eq. (35), the generalized linear system Eq. (38) has one additional dimension in both and compared with the original form Eq. (28), to substitute and . Meanwhile, its cone constraint is also one dimension higher. The scaling matrix generates the MZ family of searching directions. The residual is identical to the primal residual of the HSD model defined in Eq. (12), is a combination of the dual residual and gap as , and the central path coefficient is identical to defined in Eq. (23).
In the following, it is sufficient to study the convergence properties of reducing the residuals , , and the complementarity gap, which is proportional to , by solving the linear system Eq. (38); the results then apply to the process of solving Eq. (28).
For the generalized linear system Eq. (38), the central path distances are
(42) | ||||
The central path neighborhoods are
(43) | ||||
where the parameter . It is easy to verify the central path distances and neighborhoods are identical to the definition Eq. (24) and Eq. (26) for the special case Eq. (35). We also have , and .
3.2 The well-definedness of the searching directions
Tsuchiya [1999] and Monteiro and Tsuchiya [2000] provide valuable tools for our study. For the convenience of analysis, we introduce Lemmas 1-4 of [Monteiro and Tsuchiya, 2000] as Lemmas 1-4 (see that paper for the proofs).
Define the following notations as
(44) |
Lemma 1 [Monteiro and Tsuchiya, 2000]. For any , the matrices and satisfy:
a)
(45) |
where
(46) |
for , and is the orthogonal projection matrix onto the subspace orthogonal to , as
(47) |
We also use the notation
(48) |
b)
(49) | ||||
c) and commute and .
Lemma 3 [Monteiro and Tsuchiya, 2000]. Let satisfy
(53) |
for some scalars and . Then
(54) | ||||
Lemma 4 [Monteiro and Tsuchiya, 2000]. Let satisfy
(55) |
for some scalars and . Assume that and satisfy
(56) |
and define and . Then,
(57) |
Using the above lemmas, we obtain the following result that shows the well-definedness of the search directions.
Proof. In order to prove that the linear system Eq. (38) has a unique solution, it suffices to show that the associated homogeneous system has only the zero solution. The homogeneous system is
(59) | ||||
where the solution is denoted by .
Because , it is invertible. Since by definition Eq. (39), multiplying the first and second equations in Eq. (59) by and respectively, we obtain
(60) |
By proposition 3 b),
(61) |
By Lemma 3, Eq. (55) holds for . By the last equation in Eq. (59), using and as and in Lemma 4 with , we obtain
(62) |
Then, by the second equation in Eq. (59),
(63) |
Since A has full row rank, we obtain
(64) |
Consequently, the linear system Eq. (59) has a unique solution. Then, the linear system Eq. (38) also has a unique solution. Q.E.D.
3.3 The improvements in each iteration
Now we investigate the improvement of the primal residual, dual residual and complementarity gap in each iteration. For a step size , define
(65) |
The following theorem states the improvement in each iteration to solve the linear system Eq. (38). It also shows that the updating directions of and are orthogonal.
Theorem 3. Assume that the updating direction satisfies the linear system Eq. (38). Then
(66) | ||||
Proof. The first two equations in Eq. (66) can be proved easily using elementary linear algebra.
Then, we prove the last two equations in Eq. (66).
Since by definition Eq. (39), multiplying the first two equations in Eq. (38) by , and respectively, and adding them together, we obtain
(67) |
Multiplying the first two equations in Eq. (38) by , respectively, and adding them together, we get
(68) |
Using the definition Eq. (39) and Eq. (40), summing the first element of each cone in the third equation in Eq. (38) yields
(69) |
Subtracting Eq. (68) and Eq. (69), we obtain
(70) |
From Eq. (67) and Eq. (70), we get
(71) |
Combining Eq. (71) with Eq. (69) and the definition Eq. (65) yields
(72) |
Q.E.D.
The first three equations in Eq. (66) can also be written as
(73) | ||||
where , , and are , , and for the point . The relation shows that the improvement in each iteration depends on and . If there are constants and that keep in a central path neighborhood or , where is also a constant, then polynomial convergence is established.
3.4 Results to bound the updating in the central path neighborhood
This subsection presents results to bound the updated solution in a central path neighborhood for some , when the updating directions are unscaled.
By the definition Eq. (43), a point lying in should satisfy two conditions: (1) the distance is bounded by ; (2) the point is in the interior of the cone constraints. The following lemmas provide means to guarantee the two conditions. They are modifications of the results for feasible IPMs [Monteiro and Tsuchiya, 2000], obtained by including the HSD model and removing the equality constraints, so as to be applicable to infeasible IPMs and to hold in cases where the primal or the dual problem is infeasible. The equivalent form of the HSD model and the orthogonality of the search directions provided by Theorem 3 allow them to be proved in roughly the same way as their counterparts for feasible IPMs.
Firstly, we study the conditions to bound the distance .
The following lemma simplifies the expression of , which is useful for analyzing .
Lemma 5. Let , and let the updating direction satisfy the linear system Eq. (38) with and
(74) |
Then
(75) |
where
(76) |
Proof. Due to the assumption, the definition Eq. (25), Eq. (44), and Lemma 1 b), multiplying the third equation of the linear system Eq. (38) by yields
(77) |
Using the above result and definition Eq. (65), we have
(78) | ||||
Lemmas 6 and 7 are introduced to bound the norm of in Eq. (75).
Lemma 6. Assume that , , and is the solution of the linear system Eq. (38) with . Then, the scaled increments satisfy
(79) |
where
(80) |
Proof.
The assumption and the definition of the neighborhood Eq. (43) show
(81) |
Because
(82) |
we obtain
(83) |
Then,
(84) | ||||
Because , using Lemma 3 with , Eq. (55) holds with and .
By Theorem 3, we also have . Hence, using Lemma 4 with and , we have
(85) | ||||
Now we are ready to bound , as in the following lemma.
Lemma 8. Assume that for some scalar , and let be the unique solution of the linear system Eq. (38) with for some , and any . Then,
(87) |
Proof. Due to Lemma 6 and Lemma 7, we have
(88) |
The following lemma shows that is a lower bound of the approximation .
Secondly, we introduce the next lemma to facilitate the proof of the interior point condition.
Lemma 10 [Monteiro and Tsuchiya, 2000]. Let . If , then . In particular, if for some , and , then .
Proof. See Lemma 10 of [Monteiro and Tsuchiya, 2000].
3.5 Polynomial convergence for unscaled searching directions
This subsection establishes iteration-complexity bounds of the infeasible IPM Algorithm 1 to reduce the residuals , , and complementarity gap , by solving the linear system Eq. (38) with the unscaled searching directions.
Theorem 4. Let , , satisfy
(91) | ||||
Assume that , and is the solution of the linear system Eq. (38). Then,
(92) |
and
(93) | ||||
Proof.
Due to the assumption, Lemma 8, Eq. (73), and , we obtain
(94) | ||||
Denote . In order to prove Eq. (92), we have to show . Suppose that for some step size . Then, at least one of the cone constraints is not satisfied. Denote the part of corresponding to an unsatisfied cone constraint by . Then, and . Since is continuous in , there exists satisfying . We use to denote the minimum over all the cone constraints that are unsatisfied at . Then, . However, by Lemma 10 and Eq. (94), . The contradiction shows that does not exist. Consequently, . Using Lemma 10 and Eq. (94) again, we conclude that for .
Algorithm 1 computes the search direction by solving the linear system Eq. (28), which is identical to Eq. (35), and updates with step size . Because Eq. (35) is a special case of Eq. (38), satisfies the condition of Theorem 4, the residuals are subvectors of , and is unchanged, we obtain the following proposition.
Proposition 4. Let the parameters satisfy
(96) |
Assume that , . Then, in Algorithm 1,
(97) | ||||
where is the iteration number.
Proposition 4 shows that the worst case iteration complexity of Algorithm 1 is $\mathcal{O}(\sqrt{r}\log(\epsilon^{-1}))$ when and the parameters and satisfy Eq. (96).
At the end of this subsection, we show that the centrality can be improved when the parameters are set properly. By setting in Eq. (94) and using Lemma 9, we can also obtain
(98) |
Combining the result with Eq. (92),
(99) |
which shows that the centrality can be improved since . For example, in Algorithm 1, if the parameters satisfy , the central path neighborhood coefficient improves by in each iteration.
3.6 Polynomial convergence for the MZ family of directions
The MZ family of searching directions are generated by using in the linear system Eq. (38).
For a scaling matrix , consider the following change of variables
(100) |
Linear system Eq. (38) is transformed to
(101) | ||||
which is identical to Eq. (38) with instead of and , and . Using Theorem 4 and Proposition 3 c), which shows that the central path neighborhood is unchanged under the scaling, we conclude that Theorem 4 also holds with instead of .
Then, similar to Proposition 4, we obtain the next proposition immediately.
Proposition 5. Let the parameters satisfy
(102) |
Assume that , . Then, in Algorithm 1,
(103) | ||||
where is the iteration number.
Proposition 5 shows that the worst case iteration complexity of Algorithm 1 is $\mathcal{O}(\sqrt{r}\log(\epsilon^{-1}))$ for the MZ family of searching directions, when the parameters and satisfy Eq. (102). This equals the best known complexity of feasible IPMs. More precisely, the algorithm takes $\mathcal{O}(\sqrt{r}\log(\epsilon^{-1}))$ iterations to reduce the primal residual, dual residual, and complementarity gap by a factor of $\epsilon$.
4 Analysis of warm starting
In this section, we study the conditions that warm starting can improve the worst case iteration bound compared with cold starting for IPMs of SOCP.
The warm starting scheme investigated is modified from a scheme presented by Skajaa et al. [2013], which initializes infeasible IPMs for SOCP with a linear combination of the optimal solution of a similar, previously solved problem and the cold starting point. They show that the scheme can improve the worst case iteration complexity under certain conditions in linear programming, and generalize the scheme to SOCP. We extend the scheme by using an inexact solution of a previous problem instead of the optimal solution, which allows the previous problem to be solved for only a few iterations in order to save computation in scenarios such as successive convexification, e.g., [Szmuk et al., 2020, Miao et al., 2021].
We firstly introduce the warm starting scheme.
Consider two related SOCP problems defined by Eq. (2), where the problem dimensions and cone constraints are unchanged. Denote the current SOCP problem by and its parameters by . The previous problem is and its parameters are , an inexact solution of which is . The differences of the problem parameters are defined as
(104) |
The cold starting point for an infeasible IPMs based on the HSD model (e.g. Algorithm 1) is defined as
(105) |
where denotes the solution as a whole for simplicity. We also have .
Then, the warm starting point is computed as
(106) | ||||
where the coefficient .
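A sketch of this construction, assuming the conventional cold start $x = s = e$, $y = 0$, $\tau = \kappa = 1$ and a simple convex combination (for illustration we treat $e$ as the all-ones vector, which is exact only for the linear-cone part; names are ours):

```python
import numpy as np

def cold_start(n, m):
    """Conventional cold-starting point for the HSD model."""
    e = np.ones(n)   # unit element; all-ones is exact for linear cones
    return e, np.zeros(m), e.copy(), 1.0, 1.0

def warm_start(x_hat, y_hat, s_hat, lam, n, m):
    """Convex combination of a previous inexact solution and the cold start."""
    xc, yc, sc, tau_c, kappa_c = cold_start(n, m)
    x0 = lam * x_hat + (1.0 - lam) * xc
    y0 = lam * y_hat + (1.0 - lam) * yc
    s0 = lam * s_hat + (1.0 - lam) * sc
    return x0, y0, s0, tau_c, kappa_c
```

Setting the coefficient to 0 recovers the cold start, while values close to 1 stay near the previous solution.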
Then, we investigate the iterations required when using the warm starting scheme in Algorithm 1, which has the best known worst case iteration complexity among IPMs for SOCP. Considering that the initial primal residual, dual residual, and complementarity gap are different for cold and warm starting, we use a unified stopping criterion, as
(107) |
Then, according to Proposition 5, when the parameters and satisfy Eq. (102), the number of iterations required is
(108) |
Obviously, a sufficient condition for the warm starting to improve the iteration bound is
(109) |
where is a constant. Under this condition, the number of iterations can be reduced by compared with cold starting.
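To make the saving concrete, suppose the bound has the form $K = C\sqrt{r}\log(\psi_0/\epsilon)$, where $\psi_0$ aggregates the initial residuals and gap ($C$, $\psi_0$, and the numbers below are illustrative assumptions, not the paper's constants):

```python
import math

def iteration_bound(r, psi0, eps, C=1.0):
    """Worst-case bound of the form C * sqrt(r) * log(psi0 / eps)."""
    return C * math.sqrt(r) * math.log(psi0 / eps)

r, eps = 100, 1e-8
cold = iteration_bound(r, 1.0, eps)    # cold start: psi0 = 1
warm = iteration_bound(r, 0.01, eps)   # warm start improves psi0 by delta = 0.01
saving = cold - warm                   # = C * sqrt(r) * log(1 / delta)
```

The saving grows like $\sqrt{r}\log(1/\delta)$, so improving the initial quantities by a constant factor $\delta$ removes a fixed fraction of the iterations.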
In the rest of this section, we study the conditions for the warm starting to improve the initial primal residual, dual residual, and complementarity gap by at least a constant factor , and to keep the initial point in the central path neighborhood.
4.1 The conditions to improve the primal residual
In this subsection, we analyze the primal residual .
Assume that
(110) | ||||
Combining the assumption and the definition Eq. (106), the primal residual satisfies
(111) | ||||
Consequently,
(112) |
Then, we obtain
(113) |
The result shows that when the previous primal residual and the differences of the problem parameters are small, warm starting with the coefficient close to 1 can reduce the primal residual compared with cold starting.
4.2 The conditions to improve the dual residual
In this subsection, we analyze the dual residual .
Assume that
(114) | ||||
Then, using the definition Eq. (106),
(115) | ||||
Then, we obtain
(116) |
The result shows that when the previous dual residual and the differences of the problem parameters are small, warm starting with the coefficient close to 1 can reduce the dual residual compared with cold starting.
4.3 The conditions to improve the complementary gap
In this subsection, we analyze the central path coefficient , which is proportional to the complementarity gap.
The result shows that when the previous complementarity gap is small, warm starting with the coefficient close to 1 can reduce the complementarity gap compared with cold starting.
4.4 The conditions to maintain centrality
Finally, we study the condition to ensure , which is required for the iteration number relation Eq. (108) to hold.
Because Eq. (99) shows that the centrality can be improved during the solution process of Algorithm 1, we can assume that and . Besides, the cold starting point is perfectly centered, i.e., and . Since is convex, by the definition Eq. (106), .
Then, in order to ensure , we have to ensure . To investigate , we first study
(122) |
Due to Eq. (45),
(123) |
Denote
(124) |
Due to Eq. (22), Eq. (45), and Eq. (106), we have
(125) |
where
(126) | ||||
Because Eq. (47) shows , we obtain
(127) |
Then, using the definition Eq. (106),
(128) | ||||
Combining the result with Eq. (119), we obtain
(129) | ||||
where
(130) |
Then, we obtain a sufficient condition for by the following derivation:
(131) | ||||
where
(132) |
Then, the sufficient condition of is
(133) | ||||
The result shows that when is close to the boundary of the cone constraints, the coefficient ensuring may need to be very close to 1. The reason is twofold: firstly, near the boundary, the corresponding elements in and may differ by many orders of magnitude, making ; secondly, by Eq. (127), when is close to the boundary, may be very large. In that case, only a small correction to the previous solution can be added. Typically, if the exact solution is on the boundary, the iterates get closer to the boundary as the number of iterations increases. Consequently, an early-stage inexact solution of the previous problem is preferred in the warm starting scheme, because less computation is spent solving the previous problem, and it is easier to maintain centrality. This is similar to [Yildirim and Wright, 2002], which also prefers early-stage inexact solutions for warm starting of feasible IPMs for linear programming.
However, when the differences of the problem parameters defined in Eq. (104) are very small and the previous solution is well centered, little correction is required. Consequently, the warm starting scheme with very close to 1 is applicable in that case. For example, in some successive convexification applications, the increments of , , and between problems depend on the increment of , and the Jacobian matrix is bounded. In the late stage of these applications, the differences of the problem parameters are small since the increment of is small. As a result, the warm starting scheme with very close to 1 may also work well.
In summary, when the differences of the problem parameters are small, and the previous solution is well centered with small primal residual, dual residual, and complementarity gap, the warm starting scheme with close to 1 can improve the worst case iteration complexity compared with cold starting.
5 Conclusion
This paper shows that an infeasible IPM for SOCP has $\mathcal{O}(\sqrt{r}\log(\epsilon^{-1}))$ iteration complexity, which is the same as the best known result for feasible IPMs. Compared with cold starting, a warm starting scheme can reduce the number of iterations required when the differences of the problem parameters are small and the previous solution is well centered with small primal residual, dual residual, and complementarity gap.
References
- Andersen et al. [2003] E. D. Andersen, C. Roos, and T. Terlaky. On implementing a primal-dual interior-point method for conic quadratic optimization. Mathematical Programming, 95(2):249–277, 2003.
- Chen et al. [2022] Yushu Chen, Guangwen Yang, Lu Wang, Qingzhong Gan, Haipeng Chen, and Quanyong Xu. A fast algorithm for onboard atmospheric powered descent guidance. ArXiv, abs/2209.04157, 2022.
- Faraut and Koranyi [1994] J. Faraut and A. Koranyi. Analysis on Symmetric Cones. Oxford Mathematical Monographs, 1994.
- Faybusovich [1997a] Leonid Faybusovich. Linear systems in Jordan algebras and primal-dual interior-point algorithms. Journal of Computational and Applied Mathematics, 86(1):149–175, 1997a. ISSN 0377-0427. doi: 10.1016/S0377-0427(97)00153-2.
- Faybusovich [1997b] Leonid Faybusovich. Euclidean Jordan algebras and interior-point algorithms. Positivity, 1:331–357, 12 1997b. doi: 10.1023/A:1009701824047.
- Helmberg et al. [1996] Christoph Helmberg, Franz Rendl, Robert J. Vanderbei, and Henry Wolkowicz. An interior-point method for semidefinite programming. SIAM Journal on Optimization, 6(2):342–361, 1996. doi: 10.1137/0806020. URL https://doi.org/10.1137/0806020.
- Kojima et al. [1989] Masakazu Kojima, Shinji Mizuno, and Akiko Yoshise. A polynomial-time algorithm for a class of linear complementarity problems. Mathematical programming, 44(1):1–26, 1989.
- Kojima et al. [1997] Masakazu Kojima, Susumu Shindoh, and Shinji Hara. Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices. SIAM Journal on Optimization, 7(1):86–125, 1997. doi: 10.1137/S1052623494269035. URL https://doi.org/10.1137/S1052623494269035.
- Miao et al. [2021] Xinyuan Miao, Yu Song, Zhiguo Zhang, and Shengping Gong. Successive convexification for ascent trajectory replanning of a multi-stage launch vehicle experiencing non-fatal dynamic faults. IEEE Transactions on Aerospace and Electronic Systems, pages 1–1, 2021. doi: 10.1109/TAES.2021.3133310.
- Mohammad-Nezhad and Terlaky [2019] Ali Mohammad-Nezhad and Tamás Terlaky. Quadratic convergence to the optimal solution of second-order conic optimization without strict complementarity. Optimization Methods and Software, 34(5):960–990, 2019. doi: 10.1080/10556788.2018.1528249. URL https://doi.org/10.1080/10556788.2018.1528249.
- Monteiro [1997] Renato D. C. Monteiro. Primal–dual path-following algorithms for semidefinite programming. SIAM Journal on Optimization, 7(3):663–678, 1997. doi: 10.1137/S1052623495293056.
- Monteiro and Tsuchiya [2000] Renato D. C. Monteiro and T. Tsuchiya. Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions. Mathematical Programming, 88(1):61–83, 2000.
- Monteiro and Zhang [1998] Renato DC Monteiro and Yin Zhang. A unified analysis for a class of long-step primal-dual path-following interior-point algorithms for semidefinite programming. Mathematical Programming, 81(3):281–299, 1998. doi: 10.1007/BF01580085.
- Nemirovskii and Nesterov [1993] A. S. Nemirovskii and Y. E. Nesterov. Interior-point polynomial algorithms in convex programming. SIAM Studies in Applied Mathematics. SIAM, Philadelphia, 1993.
- Nesterov and Todd [1998] Yu. E. Nesterov and M. J. Todd. Primal-dual interior-point methods for self-scaled cones. SIAM Journal on Optimization, 8(2):324–364, 1998. doi: 10.1137/S1052623495290209. URL https://doi.org/10.1137/S1052623495290209.
- Nesterov and Todd [1997] Yurii E Nesterov and Michael Jeremy Todd. Self-scaled barriers and interior-point methods for convex programming. Mathematics of Operations Research, 22(1):1–42, 1997.
- Peng et al. [2003] Jiming Peng, Cornelis Roos, and Tamás Terlaky. Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, 01 2003. ISBN 9781400825134. doi: 10.1515/9781400825134.
- Potra and Wright [2000] Florian A. Potra and Stephen J. Wright. Interior-point methods. Journal of Computational and Applied Mathematics, 124(1):281–302, 2000. ISSN 0377-0427. doi: 10.1016/S0377-0427(00)00433-7. URL https://www.sciencedirect.com/science/article/pii/S0377042700004337. Numerical Analysis 2000. Vol. IV: Optimization and Nonlinear Equations.
- Rangarajan [2006] Bharath Kumar Rangarajan. Polynomial convergence of infeasible-interior-point methods over symmetric cones. SIAM Journal on Optimization, 16(4):1211–1229, 2006. doi: 10.1137/040606557. URL https://doi.org/10.1137/040606557.
- Skajaa et al. [2013] Anders Skajaa, Erling D Andersen, and Yinyu Ye. Warmstarting the homogeneous and self-dual interior point method for linear and conic quadratic problems. Mathematical Programming Computation, 5(1):1–25, 2013.
- Szmuk et al. [2020] Michael Szmuk, Taylor P. Reynolds, and Behçet Açıkmeşe. Successive convexification for real-time six-degree-of-freedom powered descent guidance with state-triggered constraints. Journal of Guidance, Control, and Dynamics, 43(8):1399–1413, 2020. doi: 10.2514/1.G004549. URL https://doi.org/10.2514/1.G004549.
- Terlaky et al. [2002] Tamás Terlaky, Cornelis Roos, and Jiming Peng. Self-regularity: A New Paradigm for Primal-dual Interior-point Algorithms (Princeton Series in Applied Mathematics). Princeton University Press, 2002.
- Tsuchiya [1999] Takashi Tsuchiya. A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming. Optimization Methods and Software, 11(1-4):141–182, 1999. doi: 10.1080/10556789908805750.
- Wang [2003] Bixiang Wang. Implementation of interior point methods for second order conic optimization. PhD thesis, McMaster University, 2003.
- Yildirim and Wright [2002] E. Alper Yildirim and Stephen J. Wright. Warm-start strategies in interior-point methods for linear programming. SIAM Journal on Optimization, 12(3):782–810, 2002. doi: 10.1137/S1052623400369235.
- Zhang [1998] Yin Zhang. On extending some primal–dual interior-point algorithms from linear programming to semidefinite programming. SIAM Journal on Optimization, 8(2):365–386, 1998. doi: 10.1137/S1052623495296115. URL https://doi.org/10.1137/S1052623495296115.