
A feasible adaptive refinement algorithm for linear semi-infinite optimization

Shuxiong Wang, Department of Mathematics, University of California, Irvine, CA, US. Email: shuxionw@uci.edu
Abstract

A numerical method is developed to solve the linear semi-infinite programming problem (LSIP) in which the iterates produced by the algorithm are feasible for the original problem. This is achieved by constructing a sequence of standard linear programming problems with respect to successive discretizations of the index set such that the approximate feasible regions are included in the original feasible region. The convergence of the approximate solutions to a solution of the original problem is proved, and the optimal objective values of the approximate problems are monotonically decreasing and converge to the optimal value of LSIP. An adaptive refinement procedure is designed to discretize the index set and update the constraints of the approximate problems. Numerical experiments demonstrate the performance of the proposed algorithm.

Keywords: Linear semi-infinite optimization, feasible iteration, concavification, adaptive refinement

1 Introduction

Linear semi-infinite programming problem (LSIP) refers to the optimization problem with finitely many decision variables and infinitely many linear constraints associated with some parameters, which can be formulated as

\begin{split}\min_{x\in\mathbb{R}^{n}}\quad&c^{\top}x\\ \textrm{s.t.}\quad&a(y)^{\top}x+a_{0}(y)\geq 0\ \ \forall y\in Y,\\ &x_{i}\geq 0,\ i=1,2,...,n,\end{split}   (LSIP)

where c\in\mathbb{R}^{n}, a(y)=[a_{1}(y),...,a_{n}(y)]^{\top}, and a_{i}:\mathbb{R}^{m}\mapsto\mathbb{R}, for i=0,1,...,n, are real-valued coefficient functions, and Y\subseteq\mathbb{R}^{m} is the index set. In this paper, we assume that Y=[a,b] is an interval with a<b. Denote by F the feasible set of (LSIP):

F=\{x\in\mathbb{R}^{n}_{+}\ |\ a(y)^{\top}x+a_{0}(y)\geq 0,\ \forall y\in Y\},

where \mathbb{R}^{n}_{+}=\{x\in\mathbb{R}^{n}\ |\ x_{i}\geq 0,\ i=1,2,...,n\}.

Linear semi-infinite programming has wide applications in economics, robust optimization, and numerous engineering problems; more details can be found in [1, 2, 3] and the references therein.

Numerical methods such as discretization methods, local reduction methods, and descent direction methods have been proposed for solving linear semi-infinite programming problems (see [4, 5, 6, 7] for an overview). The main idea of discretization methods is to solve the following linear program

\min_{x\in\mathbb{R}^{n}_{+}}\quad c^{\top}x\quad\textrm{s.t.}\quad a(y)^{\top}x+a_{0}(y)\geq 0\ \ \forall y\in T,

in which the original index set Y in (LSIP) is replaced by a finite subset T. The iterates generated by discretization methods converge to a solution of the original problem as the distance between T and Y tends to zero (see [8, 2, 4]). Reduction methods solve nonlinear equations by quasi-Newton methods, which requires smoothness conditions on the functions defining the constraints [9]. Feasible descent direction methods generate a feasible direction at the current iterate and obtain the next iterate along this direction [10].

The purification methods proposed in [11, 12] generate a finite feasible sequence along which the objective function value decreases at each iterate. The method proposed in [11] requires that the feasible set of (LSIP) is locally polyhedral, and the method proposed in [12] requires that the coefficient functions a_{i}, i=0,1,...,n, are analytic.

Feasible iterative methods for nonlinear semi-infinite optimization problems have been developed via convexification or concavification techniques [13, 14, 15]. These methods can, in principle, be applied to (LSIP) directly; however, they are not designed specifically for (LSIP), and the computational cost can be reduced if the algorithm is adapted to the linear case.

In this paper, we develop a feasible iterative algorithm to solve (LSIP). The basic idea is to construct a sequence of standard linear optimization problems with respect to the discretized subsets of the index set such that the feasible region of each linear optimization problem is included in the feasible region of (LSIP). The proposed method consists of two stages. The first stage is based on a restriction of the semi-infinite constraint. The second stage is based on estimating lower bounds of the coefficient functions using a concavification or interval method.

The rest of the paper is organized as follows. In section 2, we propose the methods to construct inner approximate regions for the feasible region of (LSIP). A numerical method to solve the original linear semi-infinite programming problem is proposed in section 3. In section 4, we apply our algorithm to some numerical examples to show the performance of the method. Finally, we conclude the paper in section 5.

2 Restriction of the lower level problem

The restriction of the lower level problem leads to an inner approximation of the feasible region of (LSIP), and thus to feasible iterates. A two-stage procedure is performed to achieve the restriction for (LSIP). In the first stage, we construct a uniform lower-bound function, with respect to the decision variables, for the function defining the constraint in (LSIP). This step requires solving global optimization problems associated with the coefficient functions over the index set. The second stage estimates lower bounds of the coefficient functions over the index set instead of solving these optimization problems globally, which significantly reduces the computational cost.

2.1 Construction of the lower-bound function

The semi-infinite constraint of (LSIP) can be reformulated as

\min_{y\in Y}\{a(y)^{\top}x+a_{0}(y)\}\geq 0.   (1)

Since a(y)^{\top}x=\sum_{i=1}^{n}a_{i}(y)x_{i}, (1) is equivalent to

\min_{y\in Y}\{\sum_{i=1}^{n}a_{i}(y)x_{i}+a_{0}(y)\}\geq 0.

By exchanging the minimization and summation on the left side of the inequality, we obtain a new linear inequality

\sum_{i=1}^{n}\{\min_{y\in Y}a_{i}(y)\}x_{i}+\min_{y\in Y}a_{0}(y)\geq 0.   (2)

Since the decision variables satisfy x_{i}\geq 0, i=1,2,...,n, we have

\sum_{i=1}^{n}\{\min_{y\in Y}a_{i}(y)\}x_{i}+\min_{y\in Y}a_{0}(y)\leq\min_{y\in Y}\{\sum_{i=1}^{n}a_{i}(y)x_{i}+a_{0}(y)\}.

Thus, we obtain a uniform lower-bound function for \min_{y\in Y}\{a(y)^{\top}x+a_{0}(y)\}, and any point x satisfying (2) is a feasible point for (LSIP). Let \bar{F} be the feasible region defined by inequality (2), i.e.,

\bar{F}=\{x\in\mathbb{R}^{n}_{+}\ |\ \sum_{i=1}^{n}\{\min_{y\in Y}a_{i}(y)\}x_{i}+\min_{y\in Y}a_{0}(y)\geq 0\}.

From the above analysis, we conclude that \bar{F}\subseteq F.

The main difference between the original constraint (1) and the restricted constraint (2) is that the minimization is independent of the decision variable x in the latter case. In order to compute \bar{F}, we need to solve a series of problems of the form

\min_{y}\ a_{i}(y)\quad\textrm{s.t.}\quad y\in Y   (3)

for i=0,1,...,n. Based on \bar{F}, we can construct a linear program with a single linear inequality constraint that has the same objective function as (LSIP) and whose feasible points are all feasible for (LSIP). Such a problem is defined as

\min_{x\in\mathbb{R}^{n}}\quad c^{\top}x\quad\textrm{s.t.}\quad x\in\bar{F}.   (R-LSIP)
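
To make the construction concrete, the following sketch (an illustration, not the authors' code) builds and solves (R-LSIP) for a small instance with a_{i}(y)=y^{i-1}, a_{0}(y)=-\tan(y) and Y=[0,1]; a dense grid stands in for an exact global solver of problem (3).

import numpy as np
from scipy.optimize import linprog

# Illustrative sketch: solve (R-LSIP) for a small instance with n = 3,
# a_i(y) = y^(i-1), a_0(y) = -tan(y), Y = [0, 1].  The inner minima
# min_y a_i(y) are approximated on a dense grid (a stand-in for an exact
# global solver of problem (3)).
n = 3
c = np.array([1.0 / i for i in range(1, n + 1)])
grid = np.linspace(0.0, 1.0, 10001)
a0_min = np.min(-np.tan(grid))                              # min_y a_0(y)
ai_min = np.array([np.min(grid ** (i - 1)) for i in range(1, n + 1)])

# Constraint (2):  sum_i (min_y a_i(y)) x_i + min_y a_0(y) >= 0,  x >= 0.
# linprog expects A_ub x <= b_ub, so the inequality is negated.
res = linprog(c, A_ub=-ai_min.reshape(1, -1), b_ub=[a0_min],
              bounds=[(0, None)] * n)
print(res.x, res.fun)   # a feasible point of (LSIP) and an upper bound

The resulting point is feasible for (LSIP) but typically far from optimal, which motivates the subdivision of the index set discussed below.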

To characterize how well (R-LSIP) approximates (LSIP), we can estimate the distance between g(x)=\min_{y\in Y}\{a(y)^{\top}x+a_{0}(y)\} and \bar{g}(x)=\sum_{i=1}^{n}\{\min_{y\in Y}a_{i}(y)\}x_{i}+\min_{y\in Y}a_{0}(y), which define the constraints of (LSIP) and (R-LSIP), respectively. Assume that each function a_{i}(y) is Lipschitz continuous on Y, i.e., there exist constants L_{i}\geq 0 such that |a_{i}(y)-a_{i}(z)|\leq L_{i}|y-z| holds for any y,z\in Y, i=0,1,2,...,n. By direct computation, we have

|g(x)-\bar{g}(x)|\leq(\sum_{i=1}^{n}L_{i}x_{i}+L_{0})(b-a).

It turns out that for any fixed x, the error between g(x) and \bar{g}(x) is bounded linearly with respect to (b-a). Furthermore, if we assume that the decision variables are bounded above (e.g., 0\leq x_{i}\leq U_{i} for some constants U_{i}>0, i=1,2,...,n), we have

|g(x)-\bar{g}(x)|\leq(\sum_{i=1}^{n}L_{i}U_{i}+L_{0})(b-a).

This indicates that the error between g(x) and \bar{g}(x) goes to zero uniformly as |b-a| tends to zero. By dividing the index set Y=[a,b] into subintervals, one can construct a sequence of linear programs that approximate (LSIP) arbitrarily well as the size of the subdivision (formally defined in section 3.1) tends to zero. Given a subdivision, constructing the restricted constraint on each subinterval requires solving problem (3) globally, which becomes computationally expensive due to the increasing number of subintervals and the non-convexity of the coefficient functions in general. In fact, it is not necessary to solve (3) exactly. In the next section, we discuss how to estimate a good lower bound for (3) and use it to construct feasible approximation problems for (LSIP).

2.2 Construction of the inner approximation region

In order to guarantee that the feasible region \bar{F} derived from inequality (2) is an inner approximation of the feasible region of (LSIP), optimization problem (3) needs to be solved globally. However, computing a lower bound for (3) is enough to generate a restriction of (LSIP). In this section, we present two alternative approaches to approximate problem (3). The idea of the first approach comes from interval methods [16, 21]. Given an interval Y=[a,b], the range of a_{i}(y) on Y is defined as R(a_{i},Y)=[R_{i}^{l},R_{i}^{u}]=\{a_{i}(y)\ |\ y\in Y\}. An interval function A_{i}(Y)=[A_{i}^{l},A_{i}^{u}] is called an inclusion function for a_{i}(y) on Y if R(a_{i},Y)\subseteq A_{i}(Y). A natural inclusion function can be obtained by replacing the variable y in a_{i}(y) with the corresponding interval and evaluating the resulting expression using the rules of interval arithmetic [21]. In some special cases, the natural inclusion function is tight (i.e., R(a_{i},Y)=A_{i}(Y)). In general, however, the natural inclusion function overestimates the range of a_{i}(y) on Y, which implies that A_{i}^{l}<\min_{y\in Y}a_{i}(y). In such cases, the tightness of the inclusion can be measured by

\max\{|R_{i}^{l}-A_{i}^{l}|,|R_{i}^{u}-A_{i}^{u}|\}\leq\gamma|b-a|^{p}\quad\textrm{and}\quad|A_{i}^{l}-A_{i}^{u}|\leq\delta|b-a|^{p},   (4)

where p\geq 1 is the convergence order, and \gamma\geq 0 and \delta\geq 0 are constants which depend on the expression of a_{i}(y) and the interval [a,b]. By replacing \min_{y\in Y}a_{i}(y) in (2) with A_{i}^{l} for i=0,1,...,n, we obtain the new linear inequality

\sum_{i=1}^{n}A_{i}^{l}x_{i}+A_{0}^{l}\geq 0.   (5)

It is obvious that any x\in\mathbb{R}^{n}_{+} satisfying (5) is a feasible point for (LSIP).
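
As a minimal illustration of evaluating a natural inclusion function (the test function a(y)=y^{2}-y and the rounding-free Interval class below are assumptions made for this sketch, not part of the paper):

# Minimal interval-arithmetic sketch (no outward rounding), showing a natural
# inclusion function and its overestimation for a(y) = y^2 - y on Y = [0, 1].
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

Y = Interval(0.0, 1.0)
A = Y * Y - Y              # natural inclusion of y^2 - y on [0, 1]
print(A.lo, A.hi)          # [-1, 1]; the true range is [-1/4, 0]
# A.lo = -1 is the value A_i^l that would enter the restricted constraint (5).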

The second approach to estimating a lower bound for problem (3) is to construct a lower bound function \bar{a}_{i}(y) such that \bar{a}_{i}(y)\leq a_{i}(y) holds for all y\in Y. In addition, we require that the optimal solution of

\min_{y}\ \bar{a}_{i}(y)\quad\textrm{s.t.}\quad y\in Y

is easy to identify. Here, we construct a concave lower bound function for a_{i}(y) by adding a negative quadratic term to it, i.e.,

\bar{a}_{i}(y)=a_{i}(y)-\frac{\alpha_{i}}{2}(y-\frac{a+b}{2})^{2},

where \alpha_{i}\geq 0 is a parameter. It follows that \bar{a}_{i}(y)\leq a_{i}(y) for all y\in Y. Furthermore, \bar{a}_{i}(y) is twice continuously differentiable if and only if a_{i}(y) is twice continuously differentiable, and in this case the second derivative of \bar{a}_{i}(y) is \bar{a}^{\prime\prime}_{i}(y)=a^{\prime\prime}_{i}(y)-\alpha_{i}. Thus \bar{a}_{i}(y) is concave on Y if the parameter \alpha_{i} satisfies \alpha_{i}\geq\max_{y\in Y}a^{\prime\prime}_{i}(y). To sum up, we select the parameter \alpha_{i} such that

\alpha_{i}\geq\max\{0,\max_{y\in Y}a^{\prime\prime}_{i}(y)\}.   (6)

This guarantees that \bar{a}_{i}(y) is a concave lower bound function of a_{i}(y) on the index set Y. The computation of \alpha_{i} in (6) involves a global optimization; however, we can use any upper bound of the right-hand side of (6), and such an upper bound can be obtained by the interval methods described above. On the other hand, the distance between \bar{a}_{i}(y) and a_{i}(y) on [a,b] is

\max_{y\in Y}|a_{i}(y)-\bar{a}_{i}(y)|=\frac{\alpha_{i}}{8}(b-a)^{2}.

Since \bar{a}_{i}(y) is concave on Y, the minimum of \bar{a}_{i}(y) on Y is attained on the boundary of Y (see [22]), i.e., \min_{y\in Y}\bar{a}_{i}(y)=\min\{\bar{a}_{i}(a),\bar{a}_{i}(b)\}. By replacing \min_{y\in Y}a_{i}(y) in (2) with \min_{y\in Y}\bar{a}_{i}(y), we get the second type of restricted constraint as follows

\sum_{i=1}^{n}\min\{\bar{a}_{i}(a),\bar{a}_{i}(b)\}x_{i}+\min\{\bar{a}_{0}(a),\bar{a}_{0}(b)\}\geq 0.   (7)
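
A small sketch of this construction (the test function a(y)=y^{2} and the bound \alpha=2 are illustrative choices, not taken from the paper):

# Sketch of the concavification bound used in (7) on one interval [a, b]:
# abar(y) = a(y) - alpha/2 * (y - (a+b)/2)^2 is concave when alpha satisfies
# (6), so its minimum over [a, b] is attained at an endpoint.
def concave_lower_coeff(f, alpha, a, b):
    mid = 0.5 * (a + b)
    abar = lambda y: f(y) - 0.5 * alpha * (y - mid) ** 2
    return min(abar(a), abar(b))

# Example: a(y) = y^2 on [0, 1] with a''(y) = 2, so alpha = 2 satisfies (6).
val = concave_lower_coeff(lambda y: y ** 2, 2.0, 0.0, 1.0)
print(val)   # -0.25, a lower bound on min_{y in [0,1]} y^2 = 0;
             # the gap equals alpha/8 * (b-a)^2 = 0.25, as stated above.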

The two approaches are distinct in the sense that the interval method requires milder assumptions on the coefficient functions, while the concavification-based method admits a better approximation rate.

3 Numerical method

Based on the restriction approaches developed in the previous section, we are able to construct a sequence of approximations for (LSIP) by dividing the original index set into subsets successively and constructing linear optimization problems associated with restricted constraints on the subsets.

Definition 3.1.

We call T=\{\tau_{0},...,\tau_{N}\} a subdivision of the interval [a,b] if

a=\tau_{0}\leq\tau_{1}\leq...\leq\tau_{N}=b.

Let Y_{k}=[\tau_{k-1},\tau_{k}] for k=1,2,...,N. The length of Y_{k} is defined by |Y_{k}|=|\tau_{k}-\tau_{k-1}| and the size of the subdivision T is defined by |T|=\max_{1\leq k\leq N}|Y_{k}|. It follows that Y=\cup_{k=1}^{N}Y_{k}.

The intuition behind the approximation of (LSIP) through subdivision comes from the observation that the original semi-infinite constraint in (LSIP)

a(y)^{\top}x+a_{0}(y)\geq 0,\ \forall y\in Y

can be reformulated equivalently as finitely many semi-infinite constraints

a(y)^{\top}x+a_{0}(y)\geq 0,\ \forall y\in Y_{k},\ k=1,2,...,N.

Given a subdivision, we can construct an approximate constraint on each subinterval and combine them to formulate an inner approximation of the original feasible region. The corresponding optimization problem provides a restriction of (LSIP). The solutions of the approximate problems approach the optimal solution of (LSIP) as the size of the subdivision tends to zero.

Two different approaches (the interval method and the concavification method) were introduced in section 2 to construct approximate regions that lie inside the original feasible region. They induce two different types of approximation problems when applied to a particular subdivision. We only state the main results for the first type (the interval method) and focus on the convergence analysis and the algorithm for the second.

We introduce the Slater condition and a lemma derived from it which will be used in what follows. We say the Slater condition holds for (LSIP) if there exists a point \bar{x}\in\mathbb{R}^{n}_{+} such that

a(y)^{\top}\bar{x}+a_{0}(y)>0,\ \forall y\in Y.

Let F^{o}=\{x\in F\ |\ a(y)^{\top}x+a_{0}(y)>0,\ \forall y\in Y\} be the set of all Slater points in F. It is known that, under the Slater condition, the feasible region F is exactly the closure of F^{o} [4]. We present this result as a lemma and give a direct proof in the appendix.

Lemma 3.2.

Assume that the Slater condition holds for (LSIP) and the index set Y is compact. Then we have

F=cl(F^{o}),

where cl(F^{o}) denotes the closure of the set F^{o}.

3.1 Restriction based on interval method

Let A_{i}(Y_{k})=[A_{i,k}^{l},A_{i,k}^{u}] be an inclusion function of a_{i}(y) on Y_{k}. By estimating a lower bound of \min_{y\in Y_{k}}a_{i}(y) via the interval method, we can construct the following linear constraints

\sum_{i=1}^{n}A_{i,k}^{l}x_{i}+A_{0,k}^{l}\geq 0,\quad k=1,2,...,N,

corresponding to the original constraints a(y)^{\top}x+a_{0}(y)\geq 0,\ \forall y\in Y_{k},\ k=1,2,...,N. For simplicity, we rewrite these inequalities as

A_{T}^{\top}x+b_{T}\geq 0,   (8)

where A_{T}(i,k)=A_{i,k}^{l} and b_{T}(k)=A_{0,k}^{l} for i=1,2,...,n, k=1,2,...,N. The approximation problem for (LSIP) in this case is formulated as

\min_{x\in\mathbb{R}^{n}_{+}}\ c^{\top}x\quad\textrm{s.t.}\quad A_{T}^{\top}x+b_{T}\geq 0.   (R1-LSIP(T))

Following the analysis in section 2, we know that \{x\in\mathbb{R}^{n}_{+}\ |\ A_{T}^{\top}x+b_{T}\geq 0\}\subseteq F. Therefore, any feasible point of R1-LSIP(T) is feasible for (LSIP) provided that the feasible region of R1-LSIP(T) is non-empty. By solving R1-LSIP(T), we obtain a feasible approximate solution for (LSIP), and the corresponding optimal value of R1-LSIP(T) provides an upper bound for the optimal value of (LSIP).

Let F(T)=\{x\in\mathbb{R}^{n}_{+}\ |\ A_{T}^{\top}x+b_{T}\geq 0\} be the feasible region of R1-LSIP(T). We say that F(T) is consistent if F(T)\neq\emptyset; in this case, the corresponding problem R1-LSIP(T) is also called consistent. The following lemma shows that the approximate problem R1-LSIP(T) is consistent for all |T| small enough if the Slater condition holds for (LSIP).

Lemma 3.3.

Assume that the Slater condition holds for (LSIP) and the coefficient functions a_{i}(y), i=0,1,...,n, are Lipschitz continuous on Y. Then F(T) is nonempty for all |T| small enough.

In the following theorem, we show that any accumulation point of the solutions of the approximate problems R1-LSIP(T) is a solution to (LSIP) as the size of the subdivision tends to zero.

Theorem 3.4.

Assume the Slater condition holds for (LSIP) and the level set L(\bar{x})=\{x\in F\ |\ c^{\top}x\leq c^{\top}\bar{x}\} is bounded, where \bar{x} is a Slater point. Let \{T_{k}\} be a sequence of subdivisions of Y such that T_{0} is consistent, T_{k}\subseteq T_{k+1}, and \lim_{k\to\infty}|T_{k}|=0. Let x_{k}^{*} be a solution of R1-LSIP(T_{k}). Then any accumulation point of the sequence \{x_{k}^{*}\} is an optimal solution to (LSIP).

3.2 Restriction based on concavification

Given a subdivision T=\{\tau_{0},...,\tau_{N}\} and Y_{k}=[\tau_{k-1},\tau_{k}], k=1,2,...,N, by applying the concavification method of section 2 to each of the finitely many semi-infinite constraints

a(y)^{\top}x+a_{0}(y)\geq 0,\ \forall y\in Y_{k},\ k=1,2,...,N,

we can construct the linear constraints as follows

\sum_{i=1}^{n}\min\{\bar{a}_{i}(\tau_{k-1}),\bar{a}_{i}(\tau_{k})\}x_{i}+\min\{\bar{a}_{0}(\tau_{k-1}),\bar{a}_{0}(\tau_{k})\}\geq 0,\ k=1,2,...,N,

where \bar{a}_{i}(\cdot) is the concavification function defined on Y_{k} when we calculate \bar{a}_{i}(\tau_{k-1}) or \bar{a}_{i}(\tau_{k}), i.e., \bar{a}_{i}(y)=a_{i}(y)-\frac{\alpha_{i,k}}{2}(y-\frac{\tau_{k-1}+\tau_{k}}{2})^{2}. We rewrite the above inequalities as

\bar{A}^{\top}_{T}x+\bar{b}_{T}\geq 0,

where \bar{A}_{T}(i,k)=\min\{\bar{a}_{i}(\tau_{k-1}),\bar{a}_{i}(\tau_{k})\} and \bar{b}_{T}(k)=\min\{\bar{a}_{0}(\tau_{k-1}),\bar{a}_{0}(\tau_{k})\}. The corresponding approximate problem for (LSIP) is defined by

\min_{x\in\mathbb{R}^{n}_{+}}\ c^{\top}x\quad\textrm{s.t.}\quad\bar{A}_{T}^{\top}x+\bar{b}_{T}\geq 0.   (R2-LSIP(T))

Let \bar{F}(T)=\{x\in\mathbb{R}^{n}_{+}\ |\ \bar{A}^{\top}_{T}x+\bar{b}_{T}\geq 0\} be the feasible set of the problem R2-LSIP(T). We can conclude that \bar{F}(T)\subseteq F.

The approximate problem R2-LSIP(T) is similar to R1-LSIP(T) in the sense that both problems induce restrictions of (LSIP). Therefore, any feasible solution of R2-LSIP(T) is feasible for (LSIP), and the corresponding optimal value provides an upper bound for the optimal value of (LSIP).
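
The following sketch assembles and solves R2-LSIP(T) for a small instance (n=3 with a_{i}(y)=y^{i-1}, a_{0}(y)=-\tan(y) on Y=[0,1]); the curvature bounds alpha used here are simple closed-form choices made for this illustration and are assumed to satisfy (6).

import numpy as np
from scipy.optimize import linprog

# Sketch of R2-LSIP(T): n = 3, a_i(y) = y^(i-1), a_0(y) = -tan(y), Y = [0, 1],
# uniform subdivision.  alpha[i] is an upper bound on max(0, max_y a_i''(y)),
# chosen in closed form for this illustration (cf. (6)).
n = 3
c = np.array([1.0 / i for i in range(1, n + 1)])
a = [lambda y: -np.tan(y)] + [lambda y, i=i: y ** (i - 1) for i in range(1, n + 1)]
alpha = [0.0, 0.0, 0.0, 2.0]        # bounds for a_0'', a_1'', a_2'', a_3''

T = np.linspace(0.0, 1.0, 21)       # subdivision with |T| = 0.05
A_rows, b_rows = [], []
for t0, t1 in zip(T[:-1], T[1:]):
    mid = 0.5 * (t0 + t1)
    abar = lambda i, y: a[i](y) - 0.5 * alpha[i] * (y - mid) ** 2
    A_rows.append([min(abar(i, t0), abar(i, t1)) for i in range(1, n + 1)])
    b_rows.append(min(abar(0, t0), abar(0, t1)))

# Constraints  bar{A}_T^T x + bar{b}_T >= 0  become  -A x <= b for linprog.
res = linprog(c, A_ub=-np.array(A_rows), b_ub=np.array(b_rows),
              bounds=[(0, None)] * n)
print(res.fun, res.x)    # feasible for (LSIP); an upper bound on its optimal value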

The following lemma shows that if the Slater condition holds for (LSIP), then R2-LSIP(T) is consistent (i.e., \bar{F}(T)\neq\emptyset) for all |T| small enough. The proof can be found in the appendix.

Lemma 3.5.

Assume the Slater condition holds for (LSIP) and a_{i}(y), i=1,2,...,n, are twice continuously differentiable. Then R2-LSIP(T) is consistent for all |T| small enough.

In order to find a good approximate solution for (LSIP), R2-LSIP(T) needs to be solved iteratively while the subdivision is refined. We present a particular refinement strategy here such that the approximate regions of R2-LSIP(T) are monotonically enlarging from the inside of the feasible region F. Consequently, the corresponding optimal values of the approximation problems are monotonically decreasing and converge to the optimal value of the original linear semi-infinite problem. Note that such a refinement procedure cannot guarantee this monotonicity when applied to R1-LSIP(T).

Let T=\{\tau_{k}\ |\ k=0,1,...,N\} be a subdivision of Y. Assume Y_{k}=[\tau_{k-1},\tau_{k}] is the subinterval to be refined. Denote by \tau_{k,1} and \tau_{k,2} the trisection points of Y_{k}:

\tau_{k,1}=\tau_{k-1}+\frac{1}{3}(\tau_{k}-\tau_{k-1}),\quad\tau_{k,2}=\tau_{k-1}+\frac{2}{3}(\tau_{k}-\tau_{k-1}).

The constraint in R2-LSIP(T) on the subset YkY_{k} is

\sum_{i=1}^{n}\min[\bar{a}_{i}(\tau_{k-1}),\bar{a}_{i}(\tau_{k})]x_{i}+\min[\bar{a}_{0}(\tau_{k-1}),\bar{a}_{0}(\tau_{k})]\geq 0,   (9)

where \bar{a}_{i}(y)=a_{i}(y)-\frac{\alpha_{i,k}}{2}(y-\frac{\tau_{k-1}+\tau_{k}}{2})^{2} and the parameter \alpha_{i,k} is calculated in the manner of (6). The lower bounding functions on each subset after refinement are defined by

\bar{a}_{i}^{1}(y)=a_{i}(y)-\frac{\alpha^{1}_{i,k}}{2}(y-\frac{\tau_{k-1}+\tau_{k,1}}{2})^{2},\ y\in Y_{k,1}=[\tau_{k-1},\tau_{k,1}],
\bar{a}_{i}^{2}(y)=a_{i}(y)-\frac{\alpha^{2}_{i,k}}{2}(y-\frac{\tau_{k,1}+\tau_{k,2}}{2})^{2},\ y\in Y_{k,2}=[\tau_{k,1},\tau_{k,2}],
\bar{a}_{i}^{3}(y)=a_{i}(y)-\frac{\alpha^{3}_{i,k}}{2}(y-\frac{\tau_{k,2}+\tau_{k}}{2})^{2},\ y\in Y_{k,3}=[\tau_{k,2},\tau_{k}],

where \alpha_{i,k}^{j}, j=1,2,3, are selected such that \alpha_{i,k}^{j}\geq\max\{0,\max_{y\in Y_{k,j}}a^{\prime\prime}_{i}(y)\} and \alpha_{i,k}^{j}\leq\alpha_{i,k} for j=1,2,3. The refined approximate region \bar{F}(T\cup\{\tau_{k,1},\tau_{k,2}\}) is obtained by replacing the constraint (9) in \bar{F}(T) with

\sum_{i=1}^{n}\min[\bar{a}_{i}^{1}(\tau_{k-1}),\bar{a}_{i}^{1}(\tau_{k,1})]x_{i}+\min[\bar{a}_{0}^{1}(\tau_{k-1}),\bar{a}_{0}^{1}(\tau_{k,1})]\geq 0,
\sum_{i=1}^{n}\min[\bar{a}_{i}^{2}(\tau_{k,1}),\bar{a}_{i}^{2}(\tau_{k,2})]x_{i}+\min[\bar{a}_{0}^{2}(\tau_{k,1}),\bar{a}_{0}^{2}(\tau_{k,2})]\geq 0,
\sum_{i=1}^{n}\min[\bar{a}_{i}^{3}(\tau_{k,2}),\bar{a}_{i}^{3}(\tau_{k})]x_{i}+\min[\bar{a}_{0}^{3}(\tau_{k,2}),\bar{a}_{0}^{3}(\tau_{k})]\geq 0.
Lemma 3.6.

Let T be a consistent subdivision of Y. Assume that \bar{F}(T\cup\{\tau_{k,1},\tau_{k,2}\}) is obtained by the trisection refinement procedure above. Then we have

\bar{F}(T)\subseteq\bar{F}(T\cup\{\tau_{k,1},\tau_{k,2}\})\subseteq F.
Proof.

Since x\in\mathbb{R}^{n}_{+}, it suffices to prove that, for i=0,1,2,...,n,

\min[\bar{a}_{i}^{1}(\tau_{k-1}),\bar{a}_{i}^{1}(\tau_{k,1}),\bar{a}_{i}^{2}(\tau_{k,1}),\bar{a}_{i}^{2}(\tau_{k,2}),\bar{a}_{i}^{3}(\tau_{k,2}),\bar{a}_{i}^{3}(\tau_{k})]\geq\min[\bar{a}_{i}(\tau_{k-1}),\bar{a}_{i}(\tau_{k})].

By direct computation, we know \bar{a}_{i}^{1}(\tau_{k-1})\geq\bar{a}_{i}(\tau_{k-1}) and \bar{a}_{i}^{3}(\tau_{k})\geq\bar{a}_{i}(\tau_{k}). Since \bar{a}_{i}(y) is concave on Y_{k}=[\tau_{k-1},\tau_{k}], we have \bar{a}_{i}(\tau_{k,j})\geq\min[\bar{a}_{i}(\tau_{k-1}),\bar{a}_{i}(\tau_{k})] for j=1,2. In addition, direct calculation implies

\min[\bar{a}_{i}^{1}(\tau_{k,1}),\bar{a}_{i}^{2}(\tau_{k,1})]\geq\bar{a}_{i}(\tau_{k,1}),\quad\min[\bar{a}_{i}^{2}(\tau_{k,2}),\bar{a}_{i}^{3}(\tau_{k,2})]\geq\bar{a}_{i}(\tau_{k,2}).

The last two statements indicate that \min[\bar{a}_{i}^{1}(\tau_{k,1}),\bar{a}_{i}^{2}(\tau_{k,1})]\geq\min[\bar{a}_{i}(\tau_{k-1}),\bar{a}_{i}(\tau_{k})] and \min[\bar{a}_{i}^{2}(\tau_{k,2}),\bar{a}_{i}^{3}(\tau_{k,2})]\geq\min[\bar{a}_{i}(\tau_{k-1}),\bar{a}_{i}(\tau_{k})].

This proves our statement.   ∎

We present in the following theorem the general convergence results for approximating (LSIP) via a sequence of restriction problems.

Theorem 3.7.

Assume that the assumptions in Theorem 3.4 hold. Let \{T_{k}\} be a sequence of subdivisions of the index set Y, obtained by applying the trisection refinement recursively, such that T_{0} is consistent and \lim_{k\to\infty}|T_{k}|=0. Denote by x^{*}_{k} the optimal solution to R2-LSIP(T_{k}). Then we have:
(1) x^{*}_{k} is feasible for (LSIP) and any accumulation point of the sequence \{x^{*}_{k}\} is a feasible solution to (LSIP).
(2) \{f(x^{*}_{k})\}, where f(x^{*}_{k})=c^{\top}x^{*}_{k}, is a decreasing sequence and v^{*}=\lim_{k\to\infty}f(x^{*}_{k}) is the optimal value of (LSIP).

Proof.

The proof of the first statement is similar to the proof of Theorem 3.4.

From Lemma 3.6, we know that \bar{F}(T_{k-1})\subseteq\bar{F}(T_{k}) holds for k\in\mathbb{N}, which implies that the sequence \{f(x^{*}_{k})\} is decreasing. Since the level set L(\bar{x}) is bounded, the sequence \{f(x^{*}_{k})\} is bounded. Therefore, the limit of the sequence exists; denote it by v^{*}. From (1), we know that v^{*} is the optimal value of (LSIP).

This completes our proof.   ∎

3.3 Adaptive refinement algorithm

In this section, we present a specific algorithm to solve (LSIP). The algorithm is based on solving the approximate linear problem R2-LSIP(T) (or R1-LSIP(T)) for a given subdivision T and then refining the subdivision to improve the solution. The key idea of the algorithm is to select the candidate subsets of T to be refined in an adaptive manner rather than refining exhaustively.

We introduce the optimality condition for (LSIP) before presenting the details of the algorithm. Given a point x\in F, let A(x)=\{y\in Y\ |\ a(y)^{\top}x+a_{0}(y)=0\} be the active index set of (LSIP) at x. If some constraint qualification (e.g., the Slater condition) holds for (LSIP), a feasible point x^{*}\in F is an optimal solution if and only if x^{*} satisfies the KKT system ([4]), i.e.,

c-\sum_{y\in A(x^{*})}\lambda_{y}a(y)=0

for some \lambda_{y}\geq 0, y\in A(x^{*}).

Definition 3.8.

We say that x^{*}\in F is an (\epsilon,\delta) optimal solution to (LSIP) if there exist indices y\in Y and multipliers \lambda_{y}\geq 0 such that

||c-\sum_{y\in A(x^{*},\delta)}\lambda_{y}a(y)||\leq\epsilon,

where A(x^{*},\delta)=\{y\in Y\ |\ 0\leq a(y)^{\top}x^{*}+a_{0}(y)\leq\delta\}.
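
One possible way to implement this test is sketched below (the helper name and the use of nonnegative least squares are assumptions of this sketch; the candidate indices ys would be the subdivision points maintained by the algorithm).

import numpy as np
from scipy.optimize import nnls

# Sketch of the (eps, delta)-optimality test of Definition 3.8.  Existence of
# multipliers lambda_y >= 0 with ||c - sum_y lambda_y a(y)|| <= eps over the
# delta-active indices is checked via nonnegative least squares.
def is_eps_delta_optimal(x, c, a_func, a0_func, ys, eps, delta):
    # a_func(y): vector a(y) in R^n as a numpy array; a0_func(y): scalar a_0(y)
    slack = np.array([a_func(y) @ x + a0_func(y) for y in ys])
    active = [y for y, s in zip(ys, slack) if 0.0 <= s <= delta]
    if not active:
        return False
    A = np.column_stack([a_func(y) for y in active])   # columns a(y), y active
    lam, residual = nnls(A, c)                         # min ||A lam - c||, lam >= 0
    return residual <= eps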

————————————————————————————————
Algorithm 1
(Adaptive Refinement Algorithm for LSIP)
————————————————————————————————

  • S1.

    Find an initial subdivision T_{0} such that R2-LSIP(T_{0}) is consistent. Choose an initial point x_{0} and tolerances \epsilon and \delta. Set k=0.

  • S2.

    Solve R2-LSIP(T_{k}) to obtain a solution x^{*}_{k} and the active index set A(x^{*}_{k}).

  • S3.

    Terminate if x^{*}_{k} is an (\epsilon,\delta) optimal solution to (LSIP). Otherwise obtain T_{k+1} and \bar{F}(T_{k+1}) by the trisection refinement procedure applied to the subintervals of T_{k} that correspond to A(x^{*}_{k}).

  • S4.

    Let k=k+1k=k+1 and go to step 2.

————————————————————————————————

To obtain a consistent subdivision in the first step of Algorithm 1, we apply the adaptive refinement algorithm to the following problem

\min_{(x,z)\in\mathbb{R}^{n}_{+}\times\mathbb{R}}\ z\quad\textrm{s.t.}\quad a(y)^{\top}x+a_{0}(y)\geq z\ \ \forall y\in Y   (LSIP0)

until a feasible solution (x_{0},z_{0}), with z_{0}\geq 0, of the restricted problem LSIP0(T_{0}) is found for some subdivision T_{0}. The resulting subdivision T_{0} is consistent and is chosen as the initial subdivision of Algorithm 1. In addition, x_{0} is feasible for the original problem and is selected as the initial point for the algorithm.

The refinement procedure in step S3 of the algorithm proceeds as follows; a sketch is given below. In the kth iteration, each subinterval [\tau^{k}_{i-1},\tau^{k}_{i}] with i\in A(x^{*}_{k}) is divided into three subintervals of equal length. New constraints are constructed on these subintervals and used to replace the constraint corresponding to [\tau^{k}_{i-1},\tau^{k}_{i}] for each index i\in A(x^{*}_{k}). This yields \bar{F}(T_{k+1}) and the associated approximation problem R2-LSIP(T_{k+1}).
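
A minimal sketch of this refinement step (function and variable names are illustrative, not from the paper):

# Trisection refinement used in step S3: every subinterval [T[k-1], T[k]] whose
# constraint is active at the current solution is split into three equal parts.
def refine(T, active):
    # T: sorted list of subdivision points; active: indices k of active subintervals.
    new_points = []
    for k in active:
        t0, t1 = T[k - 1], T[k]
        new_points += [t0 + (t1 - t0) / 3.0, t0 + 2.0 * (t1 - t0) / 3.0]
    return sorted(set(T) | set(new_points))

After the refinement, the constraints on the new subintervals are rebuilt as in section 3.2 and replace the single constraint of each refined subinterval.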

Theorem 3.9 (Convergence of Algorithm 1).

Assume the Slater condition holds for (LSIP) and the coefficient functions a_{i}(y), i=0,1,...,n, are twice continuously differentiable. Then Algorithm 1 terminates in finitely many iterations for any positive tolerances \epsilon and \delta.

Proof.

Let x^{*}_{k} be a solution to the approximate subproblem R2-LSIP(T_{k}) with T_{k}=\{\tau^{k}_{j}\ |\ j=0,1,...,N_{k}\}. Then there exist \lambda_{j}^{k}\geq 0 for j\in A(x^{*}_{k}) such that

c-\sum_{j\in A(x^{*}_{k})}\lambda_{j}^{k}\min[\bar{a}(\tau^{k}_{j-1}),\bar{a}(\tau_{j}^{k})]=0,   (10)

where A(x^{*}_{k})=\{j\ |\ \min[\bar{a}(\tau^{k}_{j-1}),\bar{a}(\tau_{j}^{k})]x^{*}_{k}+\min[\bar{a}_{0}(\tau^{k}_{j-1}),\bar{a}_{0}(\tau_{j}^{k})]=0\} is the active index set of R2-LSIP(T_{k}) at x^{*}_{k} and \min[\bar{a}(\tau^{k}_{j-1}),\bar{a}(\tau_{j}^{k})] represents the vector in \mathbb{R}^{n} whose ith element is \min[\bar{a}_{i}(\tau^{k}_{j-1}),\bar{a}_{i}(\tau_{j}^{k})]. Since \bar{a}_{i}(y)=a_{i}(y)-\frac{\alpha_{i,k}}{2}(y-\frac{\tau_{j-1}^{k}+\tau_{j}^{k}}{2})^{2} for y\in[\tau^{k}_{j-1},\tau^{k}_{j}], we have

\min[\bar{a}(\tau^{k}_{j-1}),\bar{a}(\tau_{j}^{k})]=\min[a(\tau^{k}_{j-1}),a(\tau_{j}^{k})]-\frac{1}{8}(\tau^{k}_{j}-\tau^{k}_{j-1})^{2}\alpha^{k},

where \alpha^{k}=(\alpha^{k}_{1,j},\alpha^{k}_{2,j},...,\alpha^{k}_{n,j})^{\top} is the parameter vector on the subset [\tau^{k}_{j-1},\tau^{k}_{j}], whose elements are uniformly bounded. On the other hand, since a_{i}(y) is twice continuously differentiable, there exists \bar{\tau}^{k}_{j-1} such that

a_{i}(\tau^{k}_{j})=a_{i}(\tau^{k}_{j-1})+a_{i}^{\prime}(\bar{\tau}^{k}_{j-1})(\tau^{k}_{j}-\tau^{k}_{j-1}),\quad 1\leq i\leq n,

which implies that \min[a(\tau^{k}_{j-1}),a(\tau_{j}^{k})]=a(\tau_{j-1}^{k})+(\tau^{k}_{j}-\tau^{k}_{j-1})\beta^{k}, where \beta^{k}\in\mathbb{R}^{n} is a constant vector (e.g., \beta^{k}_{i}=a_{i}^{\prime}(\bar{\tau}^{k}_{j-1}) if \min[a_{i}(\tau^{k}_{j-1}),a_{i}(\tau_{j}^{k})]=a_{i}(\tau_{j}^{k}) and \beta^{k}_{i}=0 otherwise). It follows that

\min[\bar{a}(\tau^{k}_{j-1}),\bar{a}(\tau_{j}^{k})]=a(\tau^{k}_{j-1})+(\tau^{k}_{j}-\tau^{k}_{j-1})\beta^{k}-\frac{1}{8}(\tau^{k}_{j}-\tau^{k}_{j-1})^{2}\alpha^{k}.   (11)

Substituting \min[\bar{a}(\tau^{k}_{j-1}),\bar{a}(\tau_{j}^{k})] from (11) into (10) and into A(x^{*}_{k}), we see that it suffices to prove that the lengths of all the subsets [\tau^{k}_{j-1},\tau^{k}_{j}] for j\in A(x^{*}_{k}) converge to zero as the iteration counter k tends to infinity. From the algorithm, we know that in each iteration at least one subset [\tau^{k}_{j-1},\tau^{k}_{j}] is divided into three equal subintervals, the length of each being bounded above by \frac{1}{3}(\tau^{k}_{j}-\tau^{k}_{j-1})\leq\frac{1}{3}(b-a). Hence, for each integer p\in\mathbb{N}, subintervals with length at most \frac{1}{3^{p}}(b-a) are eventually generated. Furthermore, all the subintervals [\tau^{k}_{j-1},\tau^{k}_{j}], j\in A(x^{*}_{k}), are different for all k\in\mathbb{N}, and for each p\in\mathbb{N} only finitely many subintervals with length greater than \frac{1}{3^{p}}(b-a) exist. This implies that the lengths of the subsets [\tau^{k}_{j-1},\tau^{k}_{j}], j\in A(x^{*}_{k}), k\in\mathbb{N}, must tend to zero.   ∎

We can conclude from Theorem 3.9 that if the tolerances \epsilon and \delta decrease to zero, then any accumulation point of the sequence generated by Algorithm 1 is a solution to the original linear semi-infinite programming problem.

Corollary 3.10.

Let the assumptions in Theorem 3.9 be satisfied and let the tolerances (\epsilon_{k},\delta_{k}) be chosen such that (\epsilon_{k},\delta_{k})\searrow(0,0). If x^{*}_{k} is an (\epsilon_{k},\delta_{k}) KKT point for (LSIP) generated by Algorithm 1, then any accumulation point x^{*} of the sequence \{x^{*}_{k}\} is a solution to (LSIP).

It follows from Corollary 3.10 that the sequence \{c^{\top}x^{*}_{k}\} decreases monotonically to the optimal value of (LSIP) as k tends to infinity. In the implementation of our algorithm, the termination criterion is set as

|c^{\top}x^{*}_{k}-c^{\top}x^{*}_{k-1}|\leq\epsilon.

The convergence of Algorithm 1 also applies when the approximate problem R1-LSIP(T_{k}) is used in step S2. The proof is similar to that of Theorem 3.9, as explained in the appendix. However, in this case we cannot guarantee that the sequence \{c^{\top}x^{*}_{k}\} is monotonically decreasing.

3.4 Remarks

The proposed algorithm can be applied to solve linear semi-infinite optimization problems with finitely many semi-infinite constraints and additional linear constraints, i.e.,

\min_{x\in X}c^{\top}x\quad\textrm{s.t.}\quad a^{j}(y)^{\top}x+a^{j}_{0}(y)\geq 0,\ \forall y\in Y,\ j=1,2,...,m,

where X=\{x\in\mathbb{R}^{n}\ |\ Dx\geq d\} and a^{j}(\cdot):\mathbb{R}\mapsto\mathbb{R}^{n}. In this case, we split each decision variable x_{i} into two non-negative variables y_{i}\geq 0 and z_{i}\geq 0 such that x_{i}=y_{i}-z_{i}, and then substitute x_{i} into the above problem. The problem is thereby reformulated as a linear semi-infinite programming problem with non-negative decision variables, to which Algorithm 1 can be applied; a sketch of this reformulation is given below. This technique is used in the numerical experiments.
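
A minimal sketch of this variable splitting (u and v play the roles of the non-negative parts of x; the helper name is illustrative):

import numpy as np

# Split a free variable x into x = u - v with u, v >= 0: the objective becomes
# (c, -c)^T (u, v) and every coefficient vector a^j(y) becomes (a^j(y), -a^j(y)).
def split_free_variables(c, a_func):
    c_split = np.concatenate([c, -c])
    a_split = lambda y: np.concatenate([a_func(y), -a_func(y)])
    return c_split, a_split

The extra linear constraints Dx\geq d are handled in the same way and simply appear as additional rows in the linear subproblems.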

In the case that X=[X_{l},X_{u}] is a box in \mathbb{R}^{n}, we can instead use the variable transformation x=z+X_{l} with z\geq 0. The advantage of reformulating the original problem via this translation is that the dimension of the new variables is the same as that of the original decision variables.

4 Numerical experiments

We present numerical experiments for several optimization problems selected from the literature. The algorithm is implemented in Matlab 8.1 and the LP subproblems are solved using linprog from Optimization Toolbox 6.3 with default tolerances and the active-set algorithm. All the following experiments were run on a 3.2 GHz Intel(R) Core(TM) processor.

The bounds for the coefficient functions and the parameter \alpha in the second approach are obtained in closed form when this is possible. Otherwise, we use the Matlab toolbox Intlab 6.0 [23] to obtain the corresponding bounding values. The test problems from the literature are listed as follows.
Problem 1.

\min\quad\sum_{i=1}^{n}i^{-1}x_{i}\quad\textrm{s.t.}\quad\sum_{i=1}^{n}y^{i-1}x_{i}\geq\tan(y),\ \forall y\in[0,1].

This problem is taken from [4] and also tested in [8] for n=8. For 1\leq n\leq 7, the problem has a unique optimal solution and the strong Slater condition holds. The problem for n=8 is hard to solve and is thus a good test of the performance of our algorithm.
Problem 2. This problem has the same formulation as Problem 1 with n=9; it is also tested in [8].
Problem 3.

\min\quad\sum_{i=1}^{8}i^{-1}x_{i}\quad\textrm{s.t.}\quad\sum_{i=1}^{8}y^{i-1}x_{i}\geq\frac{1}{2-y},\ \forall y\in[0,1].

This problem is taken from [19] and also tested in [8].
Problem 4.

\min\quad\sum_{i=1}^{7}i^{-1}x_{i}\quad\textrm{s.t.}\quad\sum_{i=1}^{7}y^{i-1}x_{i}\geq-\sum_{i=0}^{4}y^{2i},\ \forall y\in[0,1].

Problem 5.

\min\quad\sum_{i=1}^{9}i^{-1}x_{i}\quad\textrm{s.t.}\quad\sum_{i=1}^{9}y^{i-1}x_{i}\geq\frac{1}{1+y^{2}},\ \forall y\in[0,1].

Problems 4 and 5 are taken from [20] and also tested in [8].

The following problems, as noted in [8], arise in the design of finite impulse response (FIR) filters and are more computationally demanding than the previous ones (see, e.g., [7, 8]).

Problem 6.

\min\quad-\sum_{i=1}^{10}r_{2i-1}x_{i}\quad\textrm{s.t.}\quad 2\sum_{i=1}^{10}\cos((2i-1)2\pi y)x_{i}\geq-1,\ \forall y\in[0,0.5],

where r_{i}=0.95^{i}.

Problem 7. This problem is formulated as Problem 6 with r_{i}=2\rho\cos(\theta)r_{i-1}-\rho^{2}r_{i-2}, where \rho=0.975, \theta=\pi/3, r_{0}=1, r_{1}=2\rho\cos(\theta)/(1+\rho^{2}).

Problem 8. This problem is also formulated as Problem 6, with r_{i}=\frac{\sin(2\pi f_{s}i)}{2\pi f_{s}i} and f_{s}=0.225.

The numerical results are summarized in the table below, where CPU Time is the time cost when the algorithm terminates, Objective Value is the objective function value at the final iterate, No. of Iterations is the number of iterations when the algorithm terminates for each problem, and Violation measures the feasibility of the solution x^{*} obtained by the algorithm, defined by \min_{y\in\bar{Y}}(a(y)^{\top}x^{*}+a_{0}(y)) with \bar{Y}=a:10^{-6}:b. We also list the numerical results obtained with the MATLAB function fseminf as a reference. We can see that the algorithm proposed in this paper generates feasible solutions for all the problems tested, which coincides with the theoretical results. Furthermore, Algorithm 1 works well for the computationally demanding Problems 6-8. The solver fseminf is faster than our method; however, feasibility is not guaranteed for that kind of method.

Table: Summary of numerical results for the proposed algorithm in this paper.

Problem     Algorithm    CPU Time (sec)   Objective Value   No. of Iterations   Violation
Problem 1   Approach 1   1.8382           0.6174            169                 2.3558e-04
            Approach 2   1.8910           0.6174            172                 1.5835e-04
            fseminf      0.2109           0.6163            33                  -1.2710e-04
Problem 2   Approach 1   5.2691           0.6163            273                 4.1441e-04
            Approach 2   4.0928           0.6166            266                 1.8372e-04
            fseminf      0.3188           0.6157            46                  -7.6194e-04
Problem 3   Approach 1   0.1646           0.6988            12                  2.7969e-03
            Approach 2   0.1538           0.6988            13                  2.8014e-03
            fseminf      0.2387           0.6932            35                  -5.8802e-07
Problem 4   Approach 1   4.1606           -1.7841           354                 1.9689e-05
            Approach 2   4.1928           -1.7841           356                 1.9646e-05
            fseminf      0.4794           -1.7869           70                  -3.4649e-09
Problem 5   Approach 1   4.2124           0.7861            300                 1.9829e-05
            Approach 2   4.7892           0.7861            302                 1.9243e-05
            fseminf      0.3642           0.7855            32                  -8.5507e-07
Problem 6   Approach 1   1.7290           -0.4832           137                 5.0697e-06
            Approach 2   1.5302           -0.4832           132                 5.0914e-06
            fseminf      1.1476           -0.4754           86                  -1.2219e-04
Problem 7   Approach 1   2.5183           -0.4889           170                 2.8510e-04
            Approach 2   3.2521           -0.4890           219                 2.8861e-04
            fseminf      1.0480           -0.4883           86                  -1.5211e-03
Problem 8   Approach 1   4.4262           -0.4972           252                 4.5808e-05
            Approach 2   4.0216           -0.4972           252                 5.0055e-05
            fseminf      0.4324           -0.4973           45                  -4.3322e-07

Note: Approach 1 represents Algorithm 1 with R1-LSIP and Approach 2 represents Algorithm 1 with R2-LSIP.

5 Conclusion

A new numerical method for solving linear semi-infinite programming problems is proposed which guarantees that each iterate is feasible for the original problem. The approach is based on a two-stage restriction of the original semi-infinite constraint. The first-stage restriction allows us to treat the semi-infinite constraint independently of the decision variables on subsets of the index set. In the second stage, lower bounds for the optimal values of the optimization problems associated with the coefficient functions are estimated using two different approaches. The approximation error goes to zero as the size of the subdivision tends to zero.

Approximate problems with finitely many linear constraints are constructed such that the corresponding feasible regions are included in the feasible region of (LSIP). It follows that any feasible solution of an approximate problem is feasible for (LSIP), and the corresponding objective function value provides an upper bound for the optimal value of (LSIP). It is proved that the solutions of the approximate problems converge to a solution of the original problem, and the sequence of optimal values of the approximate problems converges monotonically to the optimal value of (LSIP).

An adaptive refinement algorithm is developed to obtain an approximate solution to (LSIP), which is proved to terminate in finitely many iterations for arbitrary positive tolerances. Numerical results show that the algorithm works well in finding feasible solutions for (LSIP).

References

  • [1] S. Christensen, A method for pricing American options using Semi-infinite linear programming, Mathematical Finance, 24 (2014), pp. 156–172.
  • [2] S. Daum and R. Werner, A novel feasible discretization method for linear semi-infinite programming applied to basket option pricing, Optimization, 60 (2011), pp. 1379–1398.
  • [3] S. Özöğür-Akyüz and G. W. Weber, Infinite kernel learning via infinite and semi-infinite programming, Optimisation Methods & Software, 25 (2010), pp. 937–970.
  • [4] M. A. Goberna and M. A. López, Linear Semi-Infinite Optimization, Wiley, New York, 1998.
  • [5] M. A. Goberna and M. A. López, Linear semi-infinite programming theory: an updated survey, European Journal of Operational Research, 143 (2002), pp. 390–405.
  • [6] R. Hettich and K. O. Kortanek, Semi-infinite programming: theory, methods, and applications, SIAM Rev., 35 (1993), pp. 380–429.
  • [7] R. Reemtsen and S. Görner, Numerical methods for semi-infinite programming: A survey, in Semi-Infinite Programming, R. Reemtsen and J.-J. Rückmann, eds., Kluwer, Boston, 1998, pp. 195–275.
  • [8] B. Betrò, An accelerated central cutting plane algorithm for linear semi-infinite programming, Mathematical programming, 101 (2004), pp. 479–495.
  • [9] S. Å. Gustafson, On the computational solution of a class of generalized moment problems, SIAM Journal on Numerical Analysis, 7 (1970), pp. 343–357.
  • [10] T. León, S. Sanmatias and E. Vercher, On the numerical treatment of linearly constrained semi-infinite optimization problems, European Journal of Operational Research, 121 (2000), pp. 78–91.
  • [11] E. J. Anderson and A. S. Lewis, An extension of the simplex algorithm for semi-infinite linear programming, Mathematical Programming, 44 (1989), pp. 247–269.
  • [12] E. J. Anderson and M. A. Goberna, Simplex-like trajectories on quasi-polyhedral sets, Mathematics of Operations Research, 26 (2001), pp. 147–162.
  • [13] C. A. Floudas and O. Stein, The adaptive convexification algorithm: a feasible point method for semi-infinite programming, SIAM Journal on Optimization, 18 (2007), pp. 1187–1208.
  • [14] S. Wang and Y. Yuan, Feasible method for semi-infinite programs, SIAM Journal on Optimization, 25 (2015), pp. 2537–2560.
  • [15] A. Mitsos, Global optimization of semi-infinite programs via restriction of the right-hand side, Optimization, 60 (2011), pp. 1291–1308.
  • [16] G. Alefeld and G. Mayer, Interval analysis: theory and applications, J. Comput. Appl. Math., 121 (2000), pp. 421–464.
  • [17] M. A. Goberna, Post-optimal analysis of linear semi-infinite programs, Optimization and Optimal Control, Springer New York, 2010, pp. 23–53.
  • [18] M. A. Goberna, Linear semi-infinite optimization: recent advances, In Continuous Optimization, Springer US, 2005, pp. 3–22.
  • [19] K. Glashoff and S. A. Gustafson, Linear optimization and approximation, Springer-Verlag, Berlin, 1983.
  • [20] T. Leon and E. Vercher, A purification algorithm for semi-infinite programming, European Journal of Operational Research, 57 (1992), pp. 412–420.
  • [21] R. Moore, Methods and applications of interval analysis, SIAM, Stud. Appl. Math. 2, Philadelphia, 1979.
  • [22] R. T. Rockafellar, Convex analysis, Princeton University Press, New Jersey, 1970.
  • [23] S. M. Rump, INTLAB - INTerval LABoratory, Institute for Reliable Computing, Hamburg University of Technology, 1999, http://www.ti3.tu-harburg.de/rump/intlab.

6 Appendices

Proof of Lemma 3.2

Since the Slater condition holds, there exists a point \bar{x}\in\mathbb{R}^{n} such that

a(y)^{\top}\bar{x}+a_{0}(y)>0,\ \forall y\in Y.

It has been shown in [5] that the boundary of F is

\partial F=\{x\in F\ |\ \min_{y\in Y}\{a(y)^{\top}x+a_{0}(y)\}=0\}.

It follows that F=F^{o}\cup\partial F. The compactness of the index set Y implies that the function g(x)=\min_{y\in Y}\{a(y)^{\top}x+a_{0}(y)\} is continuous. Thus, F is closed. It suffices to prove that

\partial F\subseteq cl(F^{o}).

For any \tilde{x}\in\partial F, we have a(y)^{\top}\tilde{x}+a_{0}(y)=0 for all y\in A(\tilde{x}), where A(\tilde{x})=\{y\in Y\ |\ a(y)^{\top}\tilde{x}+a_{0}(y)=0\}. Then

a(y)^{\top}(\bar{x}-\tilde{x})>0,\ \forall y\in A(\tilde{x}).

This indicates that for any \tau>0, we have

a(y)^{\top}(\tilde{x}+\tau(\bar{x}-\tilde{x}))+a_{0}(y)>0,\ \forall y\in A(\tilde{x}).

For a point y\in Y with y\notin A(\tilde{x}), we have a(y)^{\top}\tilde{x}+a_{0}(y)>0. Therefore, a(y)^{\top}(\tilde{x}+\tau(\bar{x}-\tilde{x}))+a_{0}(y)>0 for \tau small enough. Since Y is compact, we can choose a uniform \tau small enough such that a(y)^{\top}(\tilde{x}+\tau(\bar{x}-\tilde{x}))+a_{0}(y)>0,\ \forall y\in Y. It follows that we can choose a sequence \tau_{k}>0 with \lim_{k\to\infty}\tau_{k}=0 such that

a(y)^{\top}(\tilde{x}+\tau_{k}(\bar{x}-\tilde{x}))+a_{0}(y)>0,\ \forall y\in Y,\ k\in\mathbb{N}.

Hence x_{k}=\tilde{x}+\tau_{k}(\bar{x}-\tilde{x})\in F^{o} and \lim_{k\to\infty}x_{k}=\tilde{x}, which implies that \tilde{x}\in cl(F^{o}).

This completes our proof.

Proof of Lemma 3.3

The Slater condition implies that there exists a point \bar{x}\in\mathbb{R}^{n}_{+} such that

a(y)^{\top}\bar{x}+a_{0}(y)>0,\ \forall y\in Y.

Let T=\{\tau_{k}\ |\ k=0,1,...,N\}. From (4) we know that for each Y_{k}=[\tau_{k-1},\tau_{k}], k=1,2,...,N, there holds

\min_{y\in Y_{k}}a_{i}(y)-A_{i,k}^{l}\leq\gamma_{i}|Y_{k}|^{p}\leq\gamma_{i}|T|^{p},\quad i=0,1,...,n,\ k=1,2,...,N,

with p\geq 1. By direct computation, we have

\sum_{i=1}^{n}[\min_{y\in Y_{k}}a_{i}(y)]\bar{x}_{i}+\min_{y\in Y_{k}}a_{0}(y)-[\sum_{i=1}^{n}A_{i,k}^{l}\bar{x}_{i}+A_{0,k}^{l}]\leq[\sum_{i=1}^{n}\gamma_{i}\bar{x}_{i}+\gamma_{0}]|Y_{k}|^{p}.

The Lipschitz continuity of a_{i}(y), i=0,1,...,n, implies that

\min_{y\in Y_{k}}[\sum_{i=1}^{n}a_{i}(y)\bar{x}_{i}+a_{0}(y)]-[\sum_{i=1}^{n}[\min_{y\in Y_{k}}a_{i}(y)]\bar{x}_{i}+\min_{y\in Y_{k}}a_{0}(y)]\leq[\sum_{i=1}^{n}L_{i}\bar{x}_{i}+L_{0}]|Y_{k}|.

It follows from the last two inequalities that

\sum_{i=1}^{n}A_{i,k}^{l}\bar{x}_{i}+A_{0,k}^{l}\geq\min_{y\in Y_{k}}[\sum_{i=1}^{n}a_{i}(y)\bar{x}_{i}+a_{0}(y)]-\{[\sum_{i=1}^{n}\gamma_{i}\bar{x}_{i}+\gamma_{0}]|Y_{k}|^{p}+[\sum_{i=1}^{n}L_{i}\bar{x}_{i}+L_{0}]|Y_{k}|\},

which implies that \sum_{i=1}^{n}A_{i,k}^{l}\bar{x}_{i}+A_{0,k}^{l}\geq 0, k=1,2,...,N, for |T| small enough. Hence \bar{x} is a feasible point of the approximate region F(T).

This completes our proof.

Proof of Theorem 3.4

By the construction of the approximate regions, we know that F(T_{k}) is included in the original feasible set, i.e., F(T_{k})\subseteq F for all k\in\mathbb{N}. Hence, we have \{x_{k}^{*}\}\subseteq F.

Let \bar{x} be any Slater point. We conclude from Lemma 3.3 that \bar{x} is contained in F(T_{k}) for k large enough. Thus c^{\top}x_{k}^{*}\leq c^{\top}\bar{x}, which indicates that x_{k}^{*}\in L(\bar{x}) for sufficiently large k. Since the level set L(\bar{x}) is compact, there exists at least one accumulation point x^{*} of the sequence \{x_{k}^{*}\}. Assume without loss of generality that the sequence \{x_{k}^{*}\} itself converges to x^{*}, i.e., \lim_{k\to\infty}x_{k}^{*}=x^{*}. It suffices to prove that x^{*} is an optimal solution to (LSIP). It is obvious that x^{*} is feasible for (LSIP).

Let x_{opt} be an optimal solution to (LSIP). If x_{opt}\in F^{o}, then x_{opt}\in F(T_{k}) for all k large enough. This indicates that f(x_{k}^{*})\leq f(x_{opt}) for k large enough and thus

f(x^{*})=\lim_{k\to\infty}f(x_{k}^{*})\leq f(x_{opt}),

where f(x)=c^{\top}x. If x_{opt} lies on the boundary of the feasible set F, there exists a sequence of Slater points \{\bar{x}_{j}\ |\ \bar{x}_{j}\in F^{o}\} such that \lim_{j\to\infty}\bar{x}_{j}=x_{opt}. For each \bar{x}_{j}\in F^{o} there exists at least one index k=k(j) such that \bar{x}_{j}\in F(T_{k(j)}), which implies that f(x_{k(j)}^{*})\leq f(\bar{x}_{j}) for j\in\mathbb{N}. Since \{x_{k}^{*}\} converges to x^{*} and \{x_{k(j)}^{*}\} is a subsequence of \{x_{k}^{*}\}, the sequence \{x_{k(j)}^{*}\} is convergent and \lim_{j\to\infty}f(x_{k(j)}^{*})=f(x^{*}). By the continuity of f we have

f(x^{*})=\lim_{j\to\infty}f(x_{k(j)}^{*})\leq\lim_{j\to\infty}f(\bar{x}_{j})=f(x_{opt}).

To sum up, we have x^{*}\in F and f(x^{*})\leq f(x_{opt}).

This completes our proof.

Proof of Lemma 3.5

Let \bar{x}\in F be a Slater point. Then we have

a(y)^{\top}\bar{x}+a_{0}(y)>0,\ \forall y\in Y_{k},\ k=1,2,...,N.

Since a_{i}(\cdot), i=1,2,...,n, are twice continuously differentiable, they are Lipschitz continuous, i.e., there exists a constant L such that

|a_{i}(y)-a_{i}(z)|\leq L|y-z|,\ \forall y,z\in Y.

Let \bar{g}_{k}(x)=\sum_{i=1}^{n}\min\{\bar{a}_{i}(\tau_{k-1}),\bar{a}_{i}(\tau_{k})\}x_{i}+\min\{\bar{a}_{0}(\tau_{k-1}),\bar{a}_{0}(\tau_{k})\}. Then we have

a(y)^{\top}\bar{x}+a_{0}(y)-\bar{g}_{k}(\bar{x})
=\sum_{i=1}^{n}a_{i}(y)\bar{x}_{i}+a_{0}(y)-[\sum_{i=1}^{n}\min\{\bar{a}_{i}(\tau_{k-1}),\bar{a}_{i}(\tau_{k})\}\bar{x}_{i}+\min\{\bar{a}_{0}(\tau_{k-1}),\bar{a}_{0}(\tau_{k})\}]
\leq(L\sum_{i=1}^{n}(\bar{x}_{i}+1))|Y_{k}|,\ \forall y\in Y_{k},\ k=1,2,...,N.

It follows that \bar{g}_{k}(\bar{x})\geq 0 if |T| is sufficiently small, which implies \bar{x}\in\bar{F}(T).

This completes our proof.

Convergence of Algorithm 1 for R1-LSIP

Since x^{*}_{k} is a solution of R1-LSIP(T_{k}) for a consistent subdivision T_{k}=\{\tau^{k}_{j}\ |\ j=0,1,...,N_{k}\} in the kth iteration, it must satisfy the following KKT condition:

c-\sum_{j\in A(x^{*}_{k})}\lambda^{k}_{j}A_{T_{k}}(:,j)=0,   (12)

where A(x^{*}_{k})=\{j\ |\ A_{T_{k}}(:,j)^{\top}x^{*}_{k}+b_{T_{k}}(j)=0\}, and A_{T_{k}}(i,j)=A^{l}_{i,j} and b_{T_{k}}(j)=A^{l}_{0,j} are the corresponding lower bounds of a_{i}(y) and a_{0}(y) on [\tau^{k}_{j-1},\tau^{k}_{j}]. By (4) we know that for any \bar{\tau}^{k}_{j}\in[\tau^{k}_{j-1},\tau^{k}_{j}] there holds

|a_{i}(\bar{\tau}^{k}_{j})-A_{T_{k}}(i,j)|\leq\gamma^{k}_{i}|\tau^{k}_{j}-\tau^{k}_{j-1}|^{p},\quad 0\leq i\leq n,\ j\in A(x^{*}_{k}),

where \gamma^{k}_{i}, 0\leq i\leq n, and p\geq 1 are constants. Thus, there exist constants 0\leq\beta^{k}_{i}\leq\gamma_{i}^{k}, i=0,1,2,...,n, such that A_{T_{k}}(i,j)=a_{i}(\bar{\tau}^{k}_{j})+\beta^{k}_{i}|\tau^{k}_{j}-\tau^{k}_{j-1}|^{p}. Substituting this into (12) and A(x^{*}_{k}), we have

c-\sum_{j\in A(x^{*}_{k})}\lambda^{k}_{j}[a(\bar{\tau}^{k}_{j})+(|\tau^{k}_{j}-\tau^{k}_{j-1}|^{p})\beta^{k}]=0,
A(x^{*}_{k})=\{j\ |\ [a(\bar{\tau}^{k}_{j})+(|\tau^{k}_{j}-\tau^{k}_{j-1}|^{p})\beta^{k}]x^{*}_{k}+a_{0}(\bar{\tau}^{k}_{j})+(|\tau^{k}_{j}-\tau^{k}_{j-1}|^{p})\beta_{0}^{k}=0\},

where a(\bar{\tau}^{k}_{j})=(a_{1}(\bar{\tau}^{k}_{j}),a_{2}(\bar{\tau}^{k}_{j}),...,a_{n}(\bar{\tau}^{k}_{j}))^{\top}. It follows that x^{*}_{k} is an (\epsilon,\delta) KKT point of (LSIP) if the lengths of the subsets [\tau^{k}_{j-1},\tau^{k}_{j}] for j\in A(x^{*}_{k}) converge to zero as k goes to infinity. This holds by an argument similar to the one in the proof of Theorem 3.9.