
Robust production planning with budgeted cumulative demand uncertainty

Romain Guillaume Université de Toulouse-IRIT, Toulouse, France, romain.guillaume@irit.fr Adam Kasperski Wrocław University of Science and Technology, Wrocław, Poland, {adam.kasperski,pawel.zielinski}@pwr.edu.pl Paweł Zieliński Wrocław University of Science and Technology, Wrocław, Poland, {adam.kasperski,pawel.zielinski}@pwr.edu.pl
Abstract

This paper deals with a production planning problem, which is a version of the capacitated single-item lot sizing problem with backordering under demand uncertainty, modeled by uncertain cumulative demands. The well-known interval budgeted uncertainty representation is assumed and two of its variants are considered. The first one is the discrete budgeted uncertainty, in which at most a specified number of cumulative demands can deviate from their nominal values at the same time. The second one is the continuous budgeted uncertainty, in which the total deviation of the cumulative demands from their nominal values is at most a given bound. For both cases, in order to choose a production plan that hedges against the cumulative demand uncertainty, the robust minmax criterion is used. Polynomial algorithms are constructed for evaluating the impact of demand uncertainty on the cost of a given production plan, called the adversarial problem, and for finding robust production plans under the discrete budgeted uncertainty. Hence, in this case, the problems under consideration are not much computationally harder than their deterministic counterparts. For the continuous budgeted uncertainty, it is shown that the adversarial problem and the problem of computing a robust production plan along with its worst-case cost are NP-hard. When the uncertainty intervals are non-overlapping, both problems can be solved in pseudopolynomial time and admit fully polynomial time approximation schemes. In the general case, a decomposition algorithm for finding a robust plan is proposed.

Keywords: robustness and sensitivity analysis; production planning; demand uncertainty; robust optimization

1 Introduction

Production planning under uncertainty is a fundamental and important managerial decision making problem in various industries, among others agriculture, manufacturing, food and entertainment (see, e.g., [26]). Uncertainty arises due to the volatility of markets and the bullwhip effect, which amplifies uncertainty along a supply chain (see [29]). It induces supply chain risks such as backordering and obsolete inventory. Accordingly, uncertainty has to be faced in planning processes in order to manage these risks.

Nowadays most companies use manufacturing resource planning (MRP II) for manufacturing planning and control, which is composed of three levels [6]: the strategic level (production plan/resource plan), the tactical level (master production schedule/rough-cut capacity plan and material requirements planning/capacity requirements plan) and the operational level (production activity control/capacity control). In this paper we study the impact of uncertainty in models that are applicable to production planning at the strategic level and/or to master production scheduling at the first level of the tactical one. More specifically, we look more closely at a capacitated lot sizing problem with backordering under uncertainty. In the literature on production planning (see, e.g., [5, 16, 34, 36, 43]) three different sources of uncertainty are distinguished: demand, process and supply. For the aforementioned strategic/tactical planning processes, taking demand uncertainty into account plays a crucial role. Therefore, in this paper we focus on uncertainty in the demand. Namely, we deal with a version of the capacitated lot sizing problem with backordering under demand uncertainty.

Various models of demand uncertainty in lot sizing problems have been discussed in the literature so far, each of which has its pros and cons. Typically, uncertainty in demands (parameters) is modeled by specifying a set 𝒰\mathcal{U} of all possible realizations of the demands (parameters), called scenarios. In stochastic models, demands are random variables with known probability distributions that induce a probability distribution in set 𝒰\mathcal{U} and the expected solution performance is commonly optimized [39, 31, 24, 40]. In fuzzy (possibilistic) models, demands are modeled by fuzzy intervals, regarded as possibility distributions, describing the sets of more or less plausible values of demands. These fuzzy intervals induce a possibility distribution in scenario set 𝒰\mathcal{U} and some criteria based on possibility measure are optimized [20, 33, 41].

When no historical data or no information about plausible demand values are available, which would be required to derive probability or possibility distributions for demands, an alternative way of handling demand uncertainty is the robust model. There are two common methods of defining the scenario (uncertainty) set \mathcal{U} in this model, namely the discrete and interval uncertainty representations. Under the discrete uncertainty representation, the set \mathcal{U} is defined by explicitly listing all possible demand scenarios. Under the interval uncertainty representation, a closed interval is assigned to each demand, which means that it will take some value within this interval, but it is not possible to predict which one; thus \mathcal{U} is the Cartesian product of the intervals. In order to choose a solution, the minmax or minmax regret criteria are usually applied. As a result, a solution minimizing its cost or opportunity loss under a worst scenario which may occur is computed (see, e.g., [28]). Under the discrete uncertainty representation of demands, the minmax version of a capacitated lot sizing problem turned out to be NP-hard even for two demand scenarios, but it can be solved in pseudopolynomial time if the number of scenarios is constant [28]. When the number of scenarios is a part of the input, the problem is strongly NP-hard and no algorithm for solving it is known. The situation is computationally much better for the interval representation of demands. It was shown in [20] that the minmax version of the lot sizing problem with backorders can be solved efficiently. Indeed, the problem of computing a robust production plan with no capacity limits can be solved in O(T) time, where T is the number of periods. For its capacitated version an effective iterative algorithm based on Benders' decomposition [8] was provided. At each iteration of the algorithm a worst-case scenario for a feasible production plan is computed by a dynamic programming algorithm. Such a problem of evaluating a given production plan in terms of its worst-case cost is called the adversarial problem. The minmax regret version of two-stage uncapacitated lot sizing problems, studied in [45], can be solved in O(T^{6}\log T) time.

The minmax (regret) approach, commonly used in robust optimization, can lead to very conservative solutions. In [9, 10] a budgeted uncertainty representation was proposed, which addresses this drawback of the interval uncertainty representation. It allows a decision maker to flexibly control the level of conservatism of the computed solutions by specifying a parameter \Gamma, called a budget or protection level. The intuition behind this is that it is unlikely that all parameters deviate from their nominal values at the same time. The first application of the budgeted uncertainty representation to lot sizing problems under demand uncertainty was proposed in [11], where the resulting true minmax counterparts were approximated by a linear programming problem (LP) and a mixed integer programming one (MIP). It is worth pointing out that the authors in [11] assumed a budgeted model, in which \Gamma is an upper bound on the total scaled deviation of the demands (parameters) from their nominal values under any demand scenario. Along the same lines as in [11], robust optimization was adapted to periodic inventory control and production planning with uncertain product returns and demand in [42], to lot sizing combined with cutting stock problems under uncertain cost and demand in [4], and to production planning under make-to-stock policies in [1]. In [2, 7, 12, 38] the minmax lot sizing problems under demand uncertainty were solved exactly by algorithms being variants of Benders' decomposition. They consist in the iterative inclusion of rows and columns, resulting from solving an adversarial problem (see also [44]). Hence their effectiveness heavily relies on the computational complexity of adversarial problems. In [12] the adversarial problems, related to computing basestock levels for a specific lot sizing problem, were solved by a dynamic programming algorithm and by a MIP formulation. MIPs also model the adversarial problems corresponding to lot sizing problems under demand uncertainty in [7, 38]. In [2] a more general problem under parameter uncertainty, containing, among others, a capacitated lot sizing problem under demand uncertainty, was examined and its corresponding adversarial problem was solved by a dynamic programming algorithm and by a fully polynomial time approximation scheme (FPTAS). Therefore, a pseudopolynomial algorithm and an FPTAS were proposed for a robust version of the original general problem under consideration.

In most of the literature devoted to robust lot sizing problems the interval uncertainty representation is used to model uncertainty in demands (see, e.g., [1, 4, 2, 7, 12, 20, 38, 42, 45]). This is not surprising, as it is one of the simplest and most natural ways of handling the uncertainty. In the majority of papers the demand uncertainty is interpreted as the uncertainty in actual demands in periods. In this case, however, the uncertainty cumulates in the subsequent periods due to the interval addition, which may be unrealistic in applications. In [22] the uncertainty in cumulative demands was modeled, which resolves this problem. Indeed, it is able to take uncertain demands in periods as well as dependencies between the periods into account [21].

In this paper a capacitated lot sizing problem with backordering under cumulative demand uncertainty is discussed. The uncertainty in cumulative demands is modeled by using two variants of the interval budgeted uncertainty. In the first one, called the discrete budgeted uncertainty [9, 10], at most a specified number \Gamma^{d}\in\mathbb{Z}_{+} of cumulative demands can deviate from their nominal values at the same time. In the second variant, called the continuous budgeted uncertainty [35], the total deviation of the cumulative demands from their nominal values is at most \Gamma^{c}\in\mathbb{R}_{+}. The latter variant is similar to that considered in [11], where the total scaled deviation is upper bounded by \Gamma\in\mathbb{R}_{+}, but it is different from the computational point of view. To the best of our knowledge, the above lot sizing problem with uncertain cumulative demands under the discrete and continuous budgeted uncertainty has not been investigated in the literature so far.

Our contribution: The purpose of this paper is not to motivate the robust approach to lot sizing problems (this was well done in other papers), but rather to provide a complexity characterization, containing both positive and negative results, for the problem under consideration. For both variants of the budgeted uncertainty, we analyze the adversarial problem, denoted by Adv, and the problem of finding a minmax (robust) production plan along with its worst-case cost, denoted by MinMax. We first consider the restrictive case, in which the cumulative demand intervals are non-overlapping (this actually occurs in master production scheduling), and then study the general case in which the intervals can overlap. Under the discrete budgeted uncertainty, we provide polynomial algorithms for the Adv problem and polynomial linear programming based methods for the MinMax problem in the non-overlapping case. We then extend these results to the general case by showing a characterization of optimal cumulative demand scenarios for Adv. In consequence, under the discrete budgeted model the Adv and MinMax problems can be solved efficiently. Under the continuous budgeted uncertainty the situation is different. In particular, we prove that the Adv problem and, in consequence, the MinMax one are NP-hard even in the non-overlapping case. For the non-overlapping case, we construct pseudopolynomial algorithms for the Adv problem and propose a pseudopolynomial ellipsoidal algorithm and a linear program with a pseudopolynomial number of constraints and variables for the MinMax problem. Moreover, we show that both problems admit an FPTAS. In the general case, the Adv and MinMax problems still remain NP-hard. Unfortunately, in this case there is no easy characterization of vertex cumulative demand scenarios. Accordingly, we propose a MIP based approach for Adv and an exact solution algorithm, being a variant of Benders' decomposition, for MinMax.

This paper is organized as follows. In Section 2 we formulate a deterministic capacitated lot sizing problem with backordering and present a model of the cumulative demand uncertainty, i.e. the discrete and continuous budgeted uncertainty representations, together with the Adv and MinMax problems corresponding to the lot sizing problem. In Sections 3 and 4 we study the problem under both budgeted uncertainty representations providing positive and negative results. We finish the paper with some conclusions in Section 5.

2 Problem formulation

In this section we first recall a formulation of the deterministic capacitated single-item lot sizing problem with backordering. Then we assume that demands are subject to uncertainty and present a model of uncertainty, called the budgeted uncertainty representation. In order to choose a robust production plan we apply the minmax criterion, commonly used in robust optimization.

2.1 Deterministic production planning problem

We are given T periods, a demand d_{t}\geq 0 in each period t, t\in[T] ([T] denotes the set \{1,\ldots,T\}), production, inventory and backordering costs and a selling price, denoted by c^{P}, c^{I}, c^{B} and b^{P}, respectively, which do not depend on period t. Let x_{t}\geq 0 be the production amount in period t, t\in[T]. We assume that the production amounts x_{1},\ldots,x_{T}, called the production plan, can be subject to some linear constraints. Namely, let \mathbb{X}\subseteq\mathbb{R}^{T}_{+} be a set of production plans, specified by linear constraints, for instance:

\mathbb{X}=\{\boldsymbol{x}=(x_{1},\ldots,x_{T}):\;x_{t}\geq 0,\,l_{t}\leq x_{t}\leq u_{t},\,L_{t}\leq\sum_{i\in[t]}x_{i}\leq U_{t},\,t\in[T]\}\subseteq\mathbb{R}^{T}_{+},

where ltl_{t}, utu_{t} and LtL_{t}, UtU_{t} are prescribed capacity and cumulative capacity limits, respectively. Accordingly, we wish to find a feasible production plan 𝒙𝕏\boldsymbol{x}\in\mathbb{X}, subject to the conditions of satisfying each demand and the capacity limits, which minimizes the total production, storage and backordering costs minus the benefit from selling the product.

Set Dt=i[t]diD_{t}=\sum_{i\in[t]}d_{i} and Xt=i[t]xiX_{t}=\sum_{i\in[t]}x_{i}, i.e. DtD_{t} and XtX_{t} stand for the cumulative demand up to period tt and the cumulative production up to period tt, respectively. We do not examine additional production processes, for example with setup times and costs, which lead to NP-hard problems even for special cases (see, e.g., [17, 14]). The problem under consideration is a version of the capacitated single-item lot sizing problem with backordering (see, e.g., [13, 37]). It can be represented by the following linear program:

\min\ \sum_{t\in[T]}(c^{I}I_{t}+c^{B}B_{t}+c^{P}x_{t}-b^{P}s_{t}) \quad (1)
\text{s.t.}\ B_{t}-I_{t}=D_{t}-X_{t},\quad t\in[T], \quad (2)
\sum_{i\in[t]}s_{i}=D_{t}-B_{t},\quad t\in[T], \quad (3)
X_{t}=\sum_{i\in[t]}x_{i},\quad t\in[T], \quad (4)
B_{t},I_{t},s_{t}\geq 0,\quad t\in[T], \quad (5)
\boldsymbol{x}\in\mathbb{X}\subseteq\mathbb{R}^{T}_{+}, \quad (6)

where I_{t}, B_{t} and s_{t} are the inventory level, backordering level and sales of the product at the end of period t\in[T], respectively. We assume that the initial inventory and backorder levels are equal to 0. There is an optimal solution to (1)-(6) which satisfies B_{t}I_{t}=0 for each t\in[T], so inventory storage from period t to period t+1 and backordering from period t+1 to period t are not performed simultaneously. Indeed, if B_{t}>0 and I_{t}>0, then we can modify the solution so that B_{t}=0 or I_{t}=0 without violating the constraints (2)-(3) and without increasing the objective value. Using this observation, we can rewrite (1)-(6) in the following equivalent compact form, which is more convenient to analyze:

\min_{\boldsymbol{x}\in\mathbb{X}}\left(\sum_{t\in[T]}\max\{c^{I}(X_{t}-D_{t}),c^{B}(D_{t}-X_{t})\}+c^{P}X_{T}-b^{P}\min\{X_{T},D_{T}\}\right). \quad (7)

Define

f_{I}(X_{t},D_{t})=\left\{\begin{array}{ll}c^{I}(X_{t}-D_{t})&\text{if }t\in[T-1],\\ c^{I}(X_{t}-D_{t})+c^{P}X_{T}-b^{P}D_{t}&\text{if }t=T,\end{array}\right.
f_{B}(X_{t},D_{t})=\left\{\begin{array}{ll}c^{B}(D_{t}-X_{t})&\text{if }t\in[T-1],\\ c^{B}(D_{t}-X_{t})+c^{P}X_{T}-b^{P}X_{t}&\text{if }t=T.\end{array}\right.

Hence, after considering two cases, namely XtDtX_{t}\leq D_{t} and Xt>DtX_{t}>D_{t}, for each t[T]t\in[T], problem (7) can be represented as follows

\min_{\boldsymbol{x}\in\mathbb{X}}\sum_{t\in[T]}\max\{f_{I}(X_{t},D_{t}),f_{B}(X_{t},D_{t})\}. \quad (8)

From now on, we will refer to (8) instead of (1)-(6). Observe that f_{I}(X_{t},D_{t}) is a nonincreasing function of D_{t} and f_{B}(X_{t},D_{t}) is a nondecreasing function of D_{t}. Furthermore, the function \max\{f_{I}(X_{t},D_{t}),f_{B}(X_{t},D_{t})\} is convex in X_{t} and D_{t}, since both functions f_{I} and f_{B} are linear in X_{t} and D_{t}.
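For concreteness, a small Python sketch of the compact cost (7) of a fixed plan under a fixed cumulative-demand scenario (the function name and argument layout are ours, not part of the model):

```python
def plan_cost(x, D, cI, cB, cP, bP):
    """Total cost (7) of a production plan x under a cumulative-demand scenario D."""
    T = len(x)
    X = [sum(x[:t + 1]) for t in range(T)]   # cumulative production X_t
    holding_backorder = sum(max(cI * (X[t] - D[t]), cB * (D[t] - X[t])) for t in range(T))
    return holding_backorder + cP * X[T - 1] - bP * min(X[T - 1], D[T - 1])

# Example: plan_cost([10, 10, 10], [8, 18, 28], cI=1, cB=2, cP=3, bP=5) = -44
```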

2.2 Robust production planning problem

We now admit that demands in problem (8) are subject to uncertainty. In practice, a knowledge about uncertainty in a demand is often expressed as ±Δ\pm\Delta, where Δ\Delta is a possible deviation from a nominal demand value. This means that the actual demand will take some value within the interval determined by ±Δ\pm\Delta, but it is not possible to predict at present which one. In consequence, a simple and natural interval uncertainty representation is induced.

A demand has a twofold interpretation, namely an actual demand d_{t} in period t or a cumulative demand D_{t} up to period t, t\in[T]. The former interpretation is often considered in the literature. However, in this case the deviations \Delta cumulate in subsequent periods, due to the interval addition \widetilde{D}_{t}=\sum_{i\in[t]}\tilde{d}_{i}, \tilde{d}_{i}=[\hat{d}_{i}-\Delta,\hat{d}_{i}+\Delta], where \hat{d}_{i} is the nominal value of the demand in period i. This may be unrealistic in practice, because the deviation of the cumulative demand in period t becomes t\Delta (see Figure 1a). Therefore, in this paper we use a model of uncertainty in the cumulative demands rather than in actual demands, expressed by prescribed symmetric intervals \widetilde{D}_{t}=[\widehat{D}_{t}-\Delta_{t},\widehat{D}_{t}+\Delta_{t}], t\in[T], where \widehat{D}_{t}\geq 0 is the nominal value of the cumulative demand up to period t and 0\leq\Delta_{t}\leq\widehat{D}_{t} is the maximum deviation of the cumulative demand from its nominal value; the deviations may differ from period to period (see Figure 1b). Such a model is able to take the uncertain demands in periods into account (\tilde{d}_{t} lies in \widetilde{D}_{t}) as well as dependencies between periods. Each feasible vector \boldsymbol{D}=(D_{1},\ldots,D_{T}) of cumulative demands, called a scenario, must satisfy the following two reasonable constraints: D_{t}\geq 0 for every t\in[T] and D_{t}\leq D_{t+1} for every t\in[T-1]. Since the vector of nominal cumulative demands should be feasible, we also assume that \widehat{D}_{t}\leq\widehat{D}_{t+1}, t\in[T-1]. Notice that scenario (D_{1},\ldots,D_{T}) induces a vector of actual demands in periods, i.e. d_{1}=D_{1}, d_{t}=D_{t}-D_{t-1}, t=2,\ldots,T.

Refer to caption
Figure 1: (a) Demands in periods under the interval uncertainty. (b) Cumulative demands under the interval uncertainty.

In this paper we study the following two special cases of the interval, symmetric uncertainty representation, common in robust optimization, in which the scenario sets are defined as follows (see, e.g., [9, 10, 35]):

\mathcal{U}^{d}=\{\boldsymbol{D}=(D_{t})_{t\in[T]}\,:\,D_{t}\leq D_{t+1},\;D_{t}\in[\widehat{D}_{t}-\Delta_{t},\widehat{D}_{t}+\Delta_{t}],\;|\{t\,:\,D_{t}\neq\widehat{D}_{t}\}|\leq\Gamma^{d}\}, \quad (9)
\mathcal{U}^{c}=\{\boldsymbol{D}=(D_{t})_{t\in[T]}\,:\,D_{t}\leq D_{t+1},\;D_{t}\in[\widehat{D}_{t}-\Delta_{t},\widehat{D}_{t}+\Delta_{t}],\;\|\boldsymbol{D}-\widehat{\boldsymbol{D}}\|_{1}\leq\Gamma^{c}\}. \quad (10)

The first representation (9) is called the discrete budgeted uncertainty, where \Gamma^{d}\in\{0\}\cup[T], and the second one (10) is called the continuous budgeted uncertainty, where \Gamma^{c}\in\mathbb{R}_{+}. The parameters \Gamma^{d} and \Gamma^{c}, called budgets or protection levels, control the amount of uncertainty in \mathcal{U}^{d} and \mathcal{U}^{c}, respectively. If \Gamma^{d}=\Gamma^{c}=0, then all cumulative demands take their nominal values (there is only one scenario). On the other hand, for sufficiently large \Gamma^{d} and \Gamma^{c} the uncertainty sets \mathcal{U}^{d} and \mathcal{U}^{c} are the Cartesian products of the intervals [\widehat{D}_{t}-\Delta_{t},\widehat{D}_{t}+\Delta_{t}], t\in[T], which yields the interval uncertainty representation discussed in [28].
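Definitions (9) and (10) can be read directly as feasibility checks; a minimal sketch (the function names are ours):

```python
def in_U_discrete(D, D_hat, Delta, Gamma_d, tol=1e-9):
    """Membership test for the discrete budgeted set U^d in (9)."""
    T = len(D)
    monotone = all(D[t] <= D[t + 1] + tol for t in range(T - 1))
    in_boxes = all(abs(D[t] - D_hat[t]) <= Delta[t] + tol for t in range(T))
    deviations = sum(1 for t in range(T) if abs(D[t] - D_hat[t]) > tol)
    return monotone and in_boxes and deviations <= Gamma_d

def in_U_continuous(D, D_hat, Delta, Gamma_c, tol=1e-9):
    """Membership test for the continuous budgeted set U^c in (10)."""
    T = len(D)
    monotone = all(D[t] <= D[t + 1] + tol for t in range(T - 1))
    in_boxes = all(abs(D[t] - D_hat[t]) <= Delta[t] + tol for t in range(T))
    return monotone and in_boxes and sum(abs(D[t] - D_hat[t]) for t in range(T)) <= Gamma_c + tol
```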

In order to compute a robust production plan, we adopt the minmax approach (see, e.g., [28]), in which we seek a plan minimizing the maximal total cost over all cumulative demand scenarios. This leads to the following minmax problem:

\textsc{MinMax}:\;OPT=\min_{\boldsymbol{x}\in\mathbb{X}}\max_{\boldsymbol{D}\in\mathcal{U}}\sum_{t\in[T]}\max\{f_{I}(X_{t},D_{t}),f_{B}(X_{t},D_{t})\}, \quad (11)

where \mathcal{U}\in\{\mathcal{U}^{d},\mathcal{U}^{c}\}, i.e. the problem of computing an optimal production plan \boldsymbol{x}^{*} of (11) along with its worst-case cost OPT. Indeed, \boldsymbol{x}^{*} is a robust choice, because it optimizes against all scenarios in \mathcal{U} in which the amount of uncertainty allocated by an adversary to the cumulative demands is bounded by the provided budget. Furthermore, the budget makes it possible to control the level of robustness of the computed production plan. More specifically, an optimal production plan to the MinMax problem under \mathcal{U}^{d} optimizes against all scenarios in which at most \Gamma^{d} cumulative demands take values different from their nominal ones at the same time; by changing the value of \Gamma^{d} from 0 to T, one can flexibly control the level of robustness of the computed plan. An optimal production plan to the MinMax problem under \mathcal{U}^{c} optimizes against all scenarios in \mathcal{U}^{c} in which the total deviation of the cumulative demands from their nominal values is at most \Gamma^{c}. In this case one can also flexibly control the level of robustness of the computed plan by changing the value of \Gamma^{c} from 0 to a sufficiently large number, say \sum_{t\in[T]}\Delta_{t}.

The MinMax problem contains the inner adversarial problem, i.e.

\textsc{Adv}:\;\max_{\boldsymbol{D}\in\mathcal{U}}\sum_{t\in[T]}\max\{f_{I}(X_{t},D_{t}),f_{B}(X_{t},D_{t})\}. \quad (12)

The Adv problem consists in finding a cumulative demand scenario 𝑫𝒰\boldsymbol{D}\in\mathcal{U} that maximizes the cost of a given production plan 𝒙𝕏\boldsymbol{x}\in\mathbb{X} over scenario set 𝒰\mathcal{U}. In other words, an adversary maliciously wants to increase the cost of  𝒙\boldsymbol{x}.

Throughout this paper, we study the Adv and MinMax problems under two standing assumptions about the cumulative demand uncertainty intervals \widetilde{D}_{t}, t\in[T]. Under the first one, the intervals are non-overlapping, i.e. \widehat{D}_{t}+\Delta_{t}\leq\widehat{D}_{t+1}-\Delta_{t+1}, t\in[T-1]. This assumption is realistic, in particular at the tactical level of planning, for instance in master production scheduling (MPS) (see, e.g., [6]), where the periods are long enough (for example, one month). This restriction leads to more efficient methods of solving the problems under consideration. Notice that it allows us to drop the constraints D_{t}\leq D_{t+1}, t\in[T-1], in the definition of the scenario sets \mathcal{U}^{d} and \mathcal{U}^{c}. Under the second assumption, called the general case, we impose no restrictions on the interval bounds, i.e. they can overlap.
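The non-overlapping assumption is straightforward to verify for given data; a one-line check (the helper name is ours):

```python
def non_overlapping(D_hat, Delta):
    """True iff D_hat[t] + Delta[t] <= D_hat[t+1] - Delta[t+1] for all consecutive periods."""
    return all(D_hat[t] + Delta[t] <= D_hat[t + 1] - Delta[t + 1]
               for t in range(len(D_hat) - 1))
```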

3 Discrete budgeted uncertainty

In this section we consider the problem with uncertainty set 𝒰d\mathcal{U}^{d} defined as (9). We will discuss the Adv and the MinMax problems, i.e. the problems of evaluating a given production plan and computing the robust production plan, respectively. We provide polynomial algorithms for solving both problems.

3.1 Non-overlapping case

In this section we consider the non-overlapping case, i.e. we assume that \widehat{D}_{t}+\Delta_{t}\leq\widehat{D}_{t+1}-\Delta_{t+1} for each t\in[T-1]. We first focus on the Adv problem, i.e. the inner problem of MinMax.

Lemma 1.

The Adv problem under 𝒰d\mathcal{U}^{d}, for the non-overlapping case, boils down to the following problem:

\max\ \sum_{t\in[T]}\max\{f_{I}(X_{t},\widehat{D}_{t}-\delta_{t}\Delta_{t}),f_{B}(X_{t},\widehat{D}_{t}+\delta_{t}\Delta_{t})\} \quad (13)
\text{s.t.}\ \sum_{t\in[T]}\delta_{t}\leq\Gamma^{d}, \quad (14)
0\leq\delta_{t}\leq 1,\;\;t\in[T]. \quad (15)

Moreover, (13)-(15) has an integral optimal solution \boldsymbol{\delta}^{*} such that \sum_{t\in[T]}\delta^{*}_{t}=\Gamma^{d}.

Proof.

The objective (13) is a convex function with respect to \boldsymbol{\delta}\in[0,1]^{T}. Hence, it attains its maximum value at a vertex of the convex polytope (14)-(15) (see, e.g., [32]). Moreover, since the constraint matrix of (14)-(15) is totally unimodular and \Gamma^{d}\in\mathbb{Z}_{+}, there is an integral optimal solution \boldsymbol{\delta}^{*} to (13)-(15) with exactly \Gamma^{d} components equal to 1.

Let \boldsymbol{D}^{\prime}\in\mathcal{U}^{d} be an optimal solution of the Adv problem. This solution can be expressed by a feasible solution \boldsymbol{\delta}^{\prime} to (13)-(15). Indeed, D^{\prime}_{t}=\widehat{D}_{t}\pm\delta^{\prime}_{t}\Delta_{t}, t\in[T], \boldsymbol{\delta}^{\prime}\in[0,1]^{T}. Obviously \sum_{t\in[T]}\delta^{\prime}_{t}\leq\Gamma^{d}, since |\{t\,:\,D^{\prime}_{t}\neq\widehat{D}_{t}\}|\leq\Gamma^{d}. The value of the objective function of (12) for \boldsymbol{D}^{\prime} can be bounded from above by the value of (13) for \boldsymbol{\delta}^{\prime} and, in consequence, by the value of (13) for \boldsymbol{\delta}^{*}. Let us form the cumulative demand scenario \boldsymbol{D}^{*} that corresponds to \boldsymbol{\delta}^{*} by setting

D^{*}_{t}=\begin{cases}\widehat{D}_{t}-\delta^{*}_{t}\Delta_{t}&\text{if }f_{I}(X_{t},\widehat{D}_{t}-\delta^{*}_{t}\Delta_{t})>f_{B}(X_{t},\widehat{D}_{t}+\delta^{*}_{t}\Delta_{t}),\\ \widehat{D}_{t}+\delta^{*}_{t}\Delta_{t}&\text{otherwise}.\end{cases} \quad (16)

Since D^t+ΔtD^t+1Δt+1\widehat{D}_{t}+\Delta_{t}\leq\widehat{D}_{t+1}-\Delta_{t+1} for each t[T1]t\in[T-1] (by the assumption that we consider the non-overlapping case), we get DtDt+1D^{*}_{t}\leq D^{*}_{t+1} and thus 𝑫𝒰d\boldsymbol{D}^{*}\in\mathcal{U}^{d}. We see at once that the optimal value of (13) for 𝜹\boldsymbol{\delta}^{*} is equal to the value of the objective function of (12) for 𝑫\boldsymbol{D}^{*}. By the optimality of 𝑫\boldsymbol{D}^{{}^{\prime}}, this value is bounded from above by the value of the objective function of (12) for 𝑫\boldsymbol{D}^{{}^{\prime}}. Hence 𝑫\boldsymbol{D}^{*} is an optimal solution to (12) as well, which proves the lemma. ∎

From Lemma 1 and the integrality of 𝜹\boldsymbol{\delta}^{*}, it follows that an optimal solution 𝑫\boldsymbol{D}^{*} to (12) is such that Dt{D^tΔt,D^t,D^t+Δt}D_{t}^{*}\in\{\widehat{D}_{t}-\Delta_{t},\widehat{D}_{t},\widehat{D}_{t}+\Delta_{t}\} for every t[T]t\in[T]. Accordingly, we can provide an algorithm for finding an optimal solution to (13)-(15). An easy computation shows that the objective function (13) can be rewritten as follows:

\sum_{t\in[T]}\max\{f_{I}(X_{t},\widehat{D}_{t}),f_{B}(X_{t},\widehat{D}_{t})\}+\sum_{t\in[T]}c_{t}\delta_{t}, \quad (17)

where

c_{t}=\max\{f_{I}(X_{t},\widehat{D}_{t}-\Delta_{t}),f_{B}(X_{t},\widehat{D}_{t}+\Delta_{t})\}-\max\{f_{I}(X_{t},\widehat{D}_{t}),f_{B}(X_{t},\widehat{D}_{t})\},\;t\in[T],

are fixed coefficients. The first sum in (17) is constant. Therefore, in order to solve Adv, we need to solve the following problem:

\max\ \sum_{t\in[T]}c_{t}\delta_{t} \quad (18)
\text{s.t.}\ \sum_{t\in[T]}\delta_{t}\leq\Gamma^{d}, \quad (19)
0\leq\delta_{t}\leq 1,\;\;t\in[T]. \quad (20)

Problem (18)-(20) can be solved in O(T)O(T) time. Indeed, we first find, in O(T)O(T) time (see, e.g., [27]), the Γd\Gamma^{d}th largest coefficient, denoted by cσ(Γd)c_{\sigma(\Gamma^{d})}, such that cσ(1)cσ(Γd)cσ(T)c_{\sigma(1)}\geq\cdots\geq c_{\sigma(\Gamma^{d})}\geq\cdots\geq c_{\sigma(T)}, where σ\sigma is a permutation of [T][T]. Then having cσ(Γd)c_{\sigma(\Gamma^{d})} we can choose Γd\Gamma^{d} coefficients cσ(i)c_{\sigma(i)}, i[Γd]i\in[\Gamma^{d}], and set δσ(i)=1\delta^{*}_{\sigma(i)}=1. Having the optimal solution 𝜹\boldsymbol{\delta}^{*}, we can construct the corresponding scenario 𝑫\boldsymbol{D}^{*} as in the proof of Lemma 1 (see (16)). Hence, we get the following theorem.

Theorem 1.

The Adv problem under \mathcal{U}^{d}, for the non-overlapping case, can be solved in O(T) time.
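For illustration, a sketch of this procedure in Python (it sorts the coefficients, so it runs in O(T log T) rather than the O(T) obtained with selection; all names are ours):

```python
def adv_discrete_nonoverlap(X, D_hat, Delta, Gamma_d, cI, cB, cP, bP):
    """Adv under U^d, non-overlapping intervals: solve (18)-(20) and build the scenario (16)."""
    T = len(X)

    def f_I(t, D):  # inventory branch of the per-period cost
        v = cI * (X[t] - D)
        return v + cP * X[T - 1] - bP * D if t == T - 1 else v

    def f_B(t, D):  # backordering branch of the per-period cost
        v = cB * (D - X[t])
        return v + cP * X[T - 1] - bP * X[t] if t == T - 1 else v

    nominal = [max(f_I(t, D_hat[t]), f_B(t, D_hat[t])) for t in range(T)]
    c = [max(f_I(t, D_hat[t] - Delta[t]), f_B(t, D_hat[t] + Delta[t])) - nominal[t]
         for t in range(T)]
    chosen = sorted(range(T), key=lambda t: c[t], reverse=True)[:Gamma_d]

    D_star = list(D_hat)                    # scenario (16): deviate only in the chosen periods
    for t in chosen:
        if f_I(t, D_hat[t] - Delta[t]) > f_B(t, D_hat[t] + Delta[t]):
            D_star[t] = D_hat[t] - Delta[t]
        else:
            D_star[t] = D_hat[t] + Delta[t]
    return D_star, sum(nominal) + sum(c[t] for t in chosen)
```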

We now show how to solve the MinMax problem for the non-overlapping case in polynomial time. We will first reformulate Adv as a linear programming problem with respect to XtX_{t}. Then, the linearity will be preserved by adding linear constraints for XtX_{t}. Writing the dual to (18)-(20), we obtain:

\min\ \Gamma^{d}\alpha+\sum_{t\in[T]}\gamma_{t} \quad (21)
\text{s.t.}\ \alpha+\gamma_{t}\geq c_{t},\quad t\in[T], \quad (22)
\gamma_{t}\geq 0,\quad t\in[T], \quad (23)
\alpha\geq 0, \quad (24)

where α\alpha and γt\gamma_{t} are dual variables. Using (21)-(24), equality (17), and the definition of ctc_{t}, we can rewrite (13)-(15) as:

\min\ \sum_{t\in[T]}\pi_{t}+\Gamma^{d}\alpha+\sum_{t\in[T]}\gamma_{t} \quad (25)
\text{s.t.}\ \pi_{t}\geq f_{I}(X_{t},\widehat{D}_{t}),\quad t\in[T], \quad (26)
\pi_{t}\geq f_{B}(X_{t},\widehat{D}_{t}),\quad t\in[T], \quad (27)
\alpha+\gamma_{t}\geq f_{I}(X_{t},\widehat{D}_{t}-\Delta_{t})-\pi_{t},\quad t\in[T], \quad (28)
\alpha+\gamma_{t}\geq f_{B}(X_{t},\widehat{D}_{t}+\Delta_{t})-\pi_{t},\quad t\in[T], \quad (29)
\alpha,\gamma_{t}\geq 0,\quad t\in[T], \quad (30)
\pi_{t}\text{ unrestricted},\quad t\in[T], \quad (31)

where constraints (26)-(27) specify the maximum operators in the first sum of (17) and the remaining constraints model (22)-(24) and the coefficients ctc_{t}, t[T]t\in[T]. We now prove that (25)-(31) solves the Adv problem.

Lemma 2.

The program (25)-(31) solves the Adv problem.

Proof.

It is enough to show that there is an optimal solution (𝝅,𝜸,α)(\boldsymbol{\pi}^{*},\boldsymbol{\gamma}^{*},\alpha^{*}) to (25)-(31) such that πt=max{fI(Xt,D^t),fB(Xt,D^t)}\pi^{*}_{t}=\max\{f_{I}(X_{t},\widehat{D}_{t}),f_{B}(X_{t},\widehat{D}_{t})\} for each t[T]t\in[T]. Let us fix t[T]t\in[T] and consider the term πt+γt\pi^{*}_{t}+\gamma^{*}_{t} in the objective function. In an optimal solution, we have

\gamma^{*}_{t}=\max\{[f_{I}(X_{t},\widehat{D}_{t}-\Delta_{t})-\pi^{*}_{t}-\alpha^{*}]_{+},[f_{B}(X_{t},\widehat{D}_{t}+\Delta_{t})-\pi^{*}_{t}-\alpha^{*}]_{+}\}, \quad (32)

where [y]+=max{0,y}[y]_{+}=\max\{0,y\}. Suppose πt>fI(Xt,D^t)\pi^{*}_{t}>f_{I}(X_{t},\widehat{D}_{t}) and πt>fB(Xt,D^t)\pi_{t}^{*}>f_{B}(X_{t},\widehat{D}_{t}). We can then fix πt:=πtϵ\pi^{*}_{t}:=\pi^{*}_{t}-\epsilon, for some ϵ>0\epsilon>0, so that πt=max{fI(Xt,D^t),fB(Xt,D^t)}\pi^{*}_{t}=\max\{f_{I}(X_{t},\widehat{D}_{t}),f_{B}(X_{t},\widehat{D}_{t})\}. Accordingly, we modify γt\gamma^{*}_{t} using (32), preserving the feasibility of (25)-(31) . The new value of γt\gamma_{t}^{*} is increased by at most ϵ\epsilon. In consequence, the objective value (25) does not increase. ∎

Adding linear constraints Xt=i[t]xiX_{t}=\sum_{i\in[t]}x_{i}, t[T]t\in[T], and 𝒙𝕏\boldsymbol{x}\in\mathbb{X} to (25)-(31) and using the fact that fIf_{I} and fBf_{B} are linear with respect to XtX_{t} yield a linear program for the MinMax problem. This leads to the following theorem.

Theorem 2.

The MinMax problem under \mathcal{U}^{d}, for the non-overlapping case, can be solved in polynomial time.
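To make the construction concrete, here is a minimal sketch of this linear program with the PuLP modeler. We assume the simple capacity set \mathbb{X}=\{0\leq x_{t}\leq u_{t}\}; the function and parameter names are ours and the CBC call is only illustrative:

```python
import pulp

def minmax_discrete_nonoverlap(D_hat, Delta, Gamma_d, cI, cB, cP, bP, u):
    """LP (25)-(31) extended with X_t = sum_i x_i and simple bounds 0 <= x_t <= u_t."""
    T = len(D_hat)
    prob = pulp.LpProblem("MinMax_Ud_nonoverlap", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{t}", lowBound=0, upBound=u[t]) for t in range(T)]
    pi = [pulp.LpVariable(f"pi_{t}") for t in range(T)]            # unrestricted
    gamma = [pulp.LpVariable(f"gamma_{t}", lowBound=0) for t in range(T)]
    alpha = pulp.LpVariable("alpha", lowBound=0)
    X = [pulp.lpSum(x[:t + 1]) for t in range(T)]                  # cumulative production

    def f_I(t, D):                                                 # inventory branch of the cost
        e = cI * (X[t] - D)
        return e + cP * X[T - 1] - bP * D if t == T - 1 else e

    def f_B(t, D):                                                 # backordering branch
        e = cB * (D - X[t])
        return e + cP * X[T - 1] - bP * X[t] if t == T - 1 else e

    prob += pulp.lpSum(pi) + Gamma_d * alpha + pulp.lpSum(gamma)   # objective (25)
    for t in range(T):
        prob += pi[t] >= f_I(t, D_hat[t])                          # (26)
        prob += pi[t] >= f_B(t, D_hat[t])                          # (27)
        prob += alpha + gamma[t] + pi[t] >= f_I(t, D_hat[t] - Delta[t])   # (28)
        prob += alpha + gamma[t] + pi[t] >= f_B(t, D_hat[t] + Delta[t])   # (29)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v.value() for v in x], pulp.value(prob.objective)
```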

3.2 General case

In this section we drop the assumption that the cumulative demand intervals are non-overlapping. We first consider the Adv problem. The following lemma is the key to constructing algorithms in this section, namely it shows that it is enough to consider only O(T)O(T) values of the cumulative demand in each interval.

Lemma 3.

There is an optimal solution 𝐃𝒰d\boldsymbol{D}^{*}\in\mathcal{U}^{d} to the Adv problem such that

D^{*}_{k}\in\mathcal{D}_{k}=[\widehat{D}_{k}-\Delta_{k},\widehat{D}_{k}+\Delta_{k}]\cap\bigcup_{t\in[T]}\{\widehat{D}_{t}-\Delta_{t},\widehat{D}_{t},\widehat{D}_{t}+\Delta_{t}\},\;k\in[T]. \quad (33)
Proof.

Scenario set 𝒰d\mathcal{U}^{d} is not convex. However, it can be decomposed into a union of convex sets in the following way. Let 𝒯(Γd)={[T]:||=Γd}\mathcal{T}(\Gamma^{d})=\{\mathcal{I}\subseteq[T]:\,|\mathcal{I}|=\Gamma^{d}\} and define

\mathcal{U}^{d}(\mathcal{I})=\{\boldsymbol{D}=(D_{t})_{t\in[T]}\,:\,(\forall t\in[T-1])(D_{t}\leq D_{t+1}),\;(\forall t\in\mathcal{I})(D_{t}\in[\widehat{D}_{t}-\Delta_{t},\widehat{D}_{t}+\Delta_{t}]),
(\forall t\in[T]\setminus\mathcal{I})(D_{t}=\widehat{D}_{t})\}\subset\mathcal{U}^{d}, \quad (34)

where \mathcal{I}\in\mathcal{T}(\Gamma^{d}). Obviously, \mathcal{U}^{d}(\mathcal{I}) is a convex polytope. Furthermore, it is easy to check that \mathcal{U}^{d}=\bigcup_{\mathcal{I}\in\mathcal{T}(\Gamma^{d})}\mathcal{U}^{d}(\mathcal{I}). The objective function in (12) is convex with respect to \boldsymbol{D}\in\mathbb{R}^{T}_{+} and attains its maximum value at a vertex of a convex polytope (see, e.g., [32]). Hence, since \mathcal{U}^{d}\neq\emptyset, there exists \mathcal{I}^{*}\in\mathcal{T}(\Gamma^{d}) such that some vertex of the convex polytope \mathcal{U}^{d}(\mathcal{I}^{*}) is an optimal solution to the Adv problem.

To simplify notation, let us write \underline{D}_{t}=\widehat{D}_{t}-\Delta_{t}, \overline{D}_{t}=\widehat{D}_{t}+\Delta_{t} if t\in\mathcal{I}^{*}; and \underline{D}_{t}=\overline{D}_{t}=\widehat{D}_{t} if t\in[T]\setminus\mathcal{I}^{*}. The constraints D_{t}\leq D_{t+1}, t\in[T-1], imply that the polytope \mathcal{U}^{d}(\mathcal{I}^{*}) does not change if we narrow the intervals so that

\underline{D}_{t}\leq\underline{D}_{t+1}\text{ and }\overline{D}_{t}\leq\overline{D}_{t+1},\;t\in[T-1]. \quad (35)

From now on, we assume that the modified bounds D¯t\underline{D}_{t} and D¯t\overline{D}_{t} fulfill (35). Observe that after the narrowing, we get D¯k,D¯k𝒟k\underline{D}_{k},\overline{D}_{k}\in\mathcal{D}_{k}.

Assume that \boldsymbol{D}=(D_{t})_{t\in[T]} is a vertex of \mathcal{U}^{d}(\mathcal{I}^{*}), so D_{t}\in[\underline{D}_{t},\overline{D}_{t}], t\in[T]. Suppose that D_{k}\in(\underline{D}_{k},\overline{D}_{k}) for some k\in\mathcal{I}^{*}. Let (D_{q},\dots,D_{k},\dots,D_{r}), q\leq k\leq r, be the subsequence of (D_{1},\dots,D_{T}) such that q=\min\{t\in[T]:D_{t}=D_{k}\} and r=\max\{t\in[T]:D_{t}=D_{k}\}. We now claim that D_{q}=\overline{D}_{q} or D_{r}=\underline{D}_{r}, and in consequence D_{k}=\overline{D}_{q} or D_{k}=\underline{D}_{r}, which completes the proof.

Suppose, contrary to our claim, that the first and the last components of the subsequence are such that Dq[D¯q,D¯q)D_{q}\in[\underline{D}_{q},\overline{D}_{q}) and Dr(D¯r,D¯r]D_{r}\in(\underline{D}_{r},\overline{D}_{r}]. Observe that DqD¯qD_{q}\neq\underline{D}_{q}, because otherwise D¯q=Dq=Dk>D¯k\underline{D}_{q}=D_{q}=D_{k}>\underline{D}_{k}, which contradicts (35). Similarly, DrD¯rD_{r}\neq\overline{D}_{r}. Therefore, Dq(D¯q,D¯q)D_{q}\in(\underline{D}_{q},\overline{D}_{q}) and Dr(D¯r,D¯r)D_{r}\in(\underline{D}_{r},\overline{D}_{r}). We thus have

D_{1}\leq D_{2}\leq\dots\leq D_{q-1}<D_{q}=D_{q+1}=\dots=D_{k}=\dots=D_{r-1}=D_{r}<D_{r+1}\leq\dots\leq D_{T}.

Let \boldsymbol{\epsilon}\in\mathbb{R}^{T}\setminus\{\boldsymbol{0}\} be such that \epsilon_{t}=0 for each t\in[T]\setminus\{q,\dots,r\} and \epsilon_{t}=\sigma>0 for t\in\{q,\dots,r\}. Since D_{i}\in(\underline{D}_{i},\overline{D}_{i}) for each i=q,\dots,r, there is a sufficiently small, but positive, \sigma such that \boldsymbol{D}-\boldsymbol{\epsilon}\in\mathcal{U}^{d}(\mathcal{I}^{*}) and \boldsymbol{D}+\boldsymbol{\epsilon}\in\mathcal{U}^{d}(\mathcal{I}^{*}). Hence \boldsymbol{D} is a strict convex combination of two solutions from \mathcal{U}^{d}(\mathcal{I}^{*}) and thus is not a vertex, a contradiction. ∎

Equality (33) means that, in order to solve Adv, it is enough to examine only O(T) values for every cumulative demand D_{k}, k\in[T]. This fact allows us to transform, in polynomial time, the Adv problem to a version of the following restricted longest path problem (RLP for short). We are given a layered directed acyclic graph G=(V,A). Two nodes \mathfrak{s}\in V and \mathfrak{t}\in V are distinguished as the source node (no arc enters \mathfrak{s}) and the sink node (no arc leaves \mathfrak{t}). Two attributes (c_{uw},\delta_{uw}) are associated with each arc (u,w)\in A, namely c_{uw} is the length (cost) and \delta_{uw} is the weight of (u,w); moreover, a bound W on the total weight of a path is given. In the RLP problem we seek a path p in G from \mathfrak{s} to \mathfrak{t} whose total weight is at most W (\sum_{(u,w)\in p}\delta_{uw}\leq W) and whose length (cost) \sum_{(u,w)\in p}c_{uw} is maximal.

Given an instance of the Adv problem, the corresponding instance of RLP is constructed as follows. We first build a layered directed acyclic graph G=(V,A)G=(V,A). The node set VV is partitioned into T+2T+2 disjoint layers V0,V1,,VT,VT+1V_{0},V_{1},\ldots,V_{T},V_{T+1} in which V0={𝔰}V_{0}=\{\mathfrak{s}\} and VT+1={𝔱}V_{T+1}=\{\mathfrak{t}\} contain the source and the sink node, respectively. Each node uVku\in V_{k}, k[T]k\in[T], corresponds to exactly one possible value of the kkth component of a cumulative demand vertex scenario (see (33)), denoted by DuD_{u}, Du𝒟kD_{u}\in\mathcal{D}_{k}, |Vk|=|𝒟k||V_{k}|=|\mathcal{D}_{k}|. We also partition the arc set AA into T+1T+1 disjoint subsets, A=A1ATAT+1A=A_{1}\cup\cdots\cup A_{T}\cup A_{T+1}. Arc (u,w)A1(u,w)\in A_{1} if uV0u\in V_{0} and wV1w\in V_{1}; (u,w)AT+1(u,w)\in A_{T+1} if uVTu\in V_{T} and wVT+1w\in V_{T+1}; and arc (u,w)Ak(u,w)\in A_{k}, k=2,,Tk=2,\ldots,T, if uVk1u\in V_{k-1}, wVkw\in V_{k} and DuDwD_{u}\leq D_{w}. Set W=ΓdW=\Gamma^{d}. Finally two attributes are associated with each arc (u,w)A(u,w)\in A: cuwc_{uw} and δuw{0,1}\delta_{uw}\in\{0,1\}, whose values are determined as follows:

(c_{uw},\delta_{uw})=\begin{cases}(\max\{f_{I}(X_{k},D_{w}),f_{B}(X_{k},D_{w})\},1)&\text{if }(u,w)\in A_{k},\ D_{w}\neq\widehat{D}_{k},\ k\in[T],\\ (\max\{f_{I}(X_{k},D_{w}),f_{B}(X_{k},D_{w})\},0)&\text{if }(u,w)\in A_{k},\ D_{w}=\widehat{D}_{k},\ k\in[T],\\ (0,0)&\text{if }(u,w)\in A_{T+1}.\end{cases} \quad (36)

The second attribute \delta_{uw} is equal to 1 if the value of D_{w} differs from the nominal value \widehat{D}_{k}, and 0 otherwise, k\in[T]. The transformation can be done in O(T^{3}) time, since A has O(T^{3}) arcs. An example is shown in Figure 2. At the nodes other than \mathfrak{s} and \mathfrak{t}, the possible values of the cumulative demands D_{u} are shown. Observe that \bigcup_{t\in[T]}\{\widehat{D}_{t}-\Delta_{t},\widehat{D}_{t},\widehat{D}_{t}+\Delta_{t}\}=\{1,2,3,5,6,7,9\}. Hence D_{1}\in\mathcal{D}_{1}=\{1,3,5\}, D_{2}\in\mathcal{D}_{2}=\{3,5,6,7,9\} and D_{3}\in\mathcal{D}_{3}=\{5,6,7\}. We can further assume that D_{2}\neq 9, due to the constraint D_{2}\leq D_{3}.

Refer to caption
Figure 2: Graph for T=3T=3 and D1[3^2,3^+2]D_{1}\in[\hat{3}-2,\hat{3}+2], D2[6^3,6^+3]D_{2}\in[\hat{6}-3,\hat{6}+3], D3[6^1,6^+1]D_{3}\in[\hat{6}-1,\hat{6}+1]. Notice that D27D_{2}\leq 7 for any feasible cumulative demand scenario. If Γd=2\Gamma^{d}=2, then we seek a longest 𝔰\mathfrak{s}-𝔱\mathfrak{t} path using at most 2 solid arcs.
Proposition 1.

A cumulative demand scenario with the cost of CC^{*} is optimal for an instance of the Adv problem if and only if there is an optimal 𝔰\mathfrak{s}-𝔱\mathfrak{t} path with the length (cost) of CC^{*} for the constructed instance of the RLP problem.

Proof.

Suppose that 𝑫=(Dk)k[T]𝒰d\boldsymbol{D}^{*}=(D^{*}_{k})_{k\in[T]}\in\mathcal{U}^{d} with the cost of CC^{*} is an optimal cumulative demand scenario for an instance of Adv for 𝒰d\mathcal{U}^{d}. By Lemma 3, without loss of generality, we can assume that 𝑫\boldsymbol{D}^{*} is a vertex of 𝒰d()\mathcal{U}^{d}(\mathcal{I}) for some 𝒯(Γd)\mathcal{I}\in\mathcal{T}(\Gamma^{d}). Thus Dk𝒟kD^{*}_{k}\in\mathcal{D}_{k}, k[T]k\in[T], (see (33)). From the construction of GG and the definition of 𝒯(Γd)\mathcal{T}(\Gamma^{d}) it follows that 𝑫\boldsymbol{D}^{*} corresponds to an 𝔰\mathfrak{s}-𝔱\mathfrak{t} path in GG, say pp^{*}, which satisfies the budget constraint (u,w)pδuwΓd\sum_{(u,w)\in p^{*}}\delta_{uw}\leq\Gamma^{d} and with the length of CC^{*}. We claim that pp^{*} is an optimal path for RLP in GG. Suppose, contrary to our claim, that there exists a feasible 𝔰\mathfrak{s}-𝔱\mathfrak{t} path pp^{\prime} in GG of length (cost) greater than CC^{*}. By the construction of GG, the first TT arcs of pp^{\prime} correspond to scenario 𝑫=(Dk)k[T]\boldsymbol{D}^{\prime}=(D^{\prime}_{k})_{k\in[T]} such that Dk𝒟kD^{\prime}_{k}\in\mathcal{D}_{k}, k[T]k\in[T], and DtDt+1D^{\prime}_{t}\leq D^{\prime}_{t+1}. Obviously, 𝑫\boldsymbol{D}^{\prime} has the same cost as pp^{\prime}. Since (u,w)pδuwΓd\sum_{(u,w)\in p^{\prime}}\delta_{uw}\leq\Gamma^{d}, 𝑫𝒰d()\boldsymbol{D}^{\prime}\in\mathcal{U}^{d}(\mathcal{I}) for some 𝒯(Γd)\mathcal{I}\in\mathcal{T}(\Gamma^{d}) and in consequence 𝑫𝒰d\boldsymbol{D}^{\prime}\in\mathcal{U}^{d}. This contradicts the optimality of 𝑫\boldsymbol{D}^{*} over set 𝒰d\mathcal{U}^{d}.

Assume that p^{*} is an optimal \mathfrak{s}-\mathfrak{t} path with the length (cost) of C^{*} in G. Similarly, from the construction of G and the feasibility of p^{*} it follows that its first T arcs correspond to a scenario \boldsymbol{D}^{*} from \mathcal{U}^{d}(\mathcal{I}) for some \mathcal{I}\in\mathcal{T}(\Gamma^{d}), and thus from \mathcal{U}^{d}. The cost of \boldsymbol{D}^{*} is equal to the cost of p^{*}. Furthermore, (33) and again the construction of G show that each vertex scenario of \mathcal{U}^{d}(\mathcal{I}) for every \mathcal{I}\in\mathcal{T}(\Gamma^{d}) corresponds to a feasible \mathfrak{s}-\mathfrak{t} path in G with the same cost, which is not greater than C^{*} due to the optimality of p^{*}. Lemma 3 now leads to the optimality of \boldsymbol{D}^{*} for the instance of Adv. ∎

In general, the restricted longest (shortest) path problem is weakly NP-hard, even for series-parallel graphs (see, e.g., [18]). However, it can be solved in O(|A|W) pseudopolynomial time in directed acyclic graphs, if the bound W\in\mathbb{Z}_{+} and the weights \delta_{uw}\in\mathbb{Z}_{+}, (u,w)\in A (see, e.g., [25]). Fortunately, in our case W=\Gamma^{d}\leq T and A has O(T^{3}) arcs. We are thus led to the following theorem.

Theorem 3.

The Adv problem for \mathcal{U}^{d} can be solved in O(T^{4}) time.
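For illustration, a direct dynamic program over the candidate values \mathcal{D}_{k} and the number of periods deviating from their nominal values; the sketch below avoids building G explicitly and runs in O(T^{4}) time for \Gamma^{d}\leq T (all names are ours):

```python
def adv_discrete_general(X, D_hat, Delta, Gamma_d, cI, cB, cP, bP):
    """DP for Adv under U^d with possibly overlapping intervals, based on Lemma 3."""
    T = len(X)

    def f_I(t, D):
        v = cI * (X[t] - D)
        return v + cP * X[T - 1] - bP * D if t == T - 1 else v

    def f_B(t, D):
        v = cB * (D - X[t])
        return v + cP * X[T - 1] - bP * X[t] if t == T - 1 else v

    breakpoints = sorted({D_hat[t] - Delta[t] for t in range(T)}
                         | {D_hat[t] for t in range(T)}
                         | {D_hat[t] + Delta[t] for t in range(T)})
    cand = [[v for v in breakpoints
             if D_hat[k] - Delta[k] <= v <= D_hat[k] + Delta[k]] for k in range(T)]

    # dp maps (current cumulative demand, number of deviated periods) -> best cost so far
    dp = {(None, 0): 0.0}
    for k in range(T):
        new = {}
        for (prev, g), cost in dp.items():
            for v in cand[k]:
                if prev is not None and v < prev:        # enforce D_{k-1} <= D_k
                    continue
                dg = g + (1 if v != D_hat[k] else 0)     # budget of deviated periods
                if dg > Gamma_d:
                    continue
                val = cost + max(f_I(k, v), f_B(k, v))
                if new.get((v, dg), float("-inf")) < val:
                    new[(v, dg)] = val
        dp = new
    return max(dp.values())
```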

We now deal with the problem of computing an optimal production plan. Our goal is to construct a linear programming formulation for the MinMax problem. Unfortunately, a direct approach based on the network flow theory leads to a linear program whose associated polytope does not have the integrality property (see [15]), i.e. the polytope being the intersection of the path polytope, defined as the convex hull of the characteristic vectors of \mathfrak{s}-\mathfrak{t} paths in G, and the half-space defined by the budget constraint \sum_{(u,w)\in p}\delta_{uw}\leq W, even if W=\Gamma^{d} is bounded by the integer T and the weights \delta_{uw}\in\{0,1\}, (u,w)\in A. In consequence, such a restricted longest path problem in G cannot be solved as a flow based linear program. However, using the fact that W=\Gamma^{d}\leq T, we will transform in polynomial time our RLP problem to the longest path problem in an acyclic graph, for which a compact linear program can be built. The idea consists in transforming G=(V,A) into G^{\prime}=(V^{\prime},A^{\prime}) by splitting each node u of G, different from \mathfrak{t}, into at most \Gamma^{d}+1 nodes labeled u^{0},\dots,u^{\Gamma^{d}}. Each arc (u,w)\in A with the attributes (c_{uw},\delta_{uw}) (see (36)) induces the set of arcs (u^{j},w^{j+\delta_{uw}}), j=0,\dots,\Gamma^{d}, j+\delta_{uw}\leq\Gamma^{d}, in A^{\prime} with the same cost c_{uw}. The nodes from the Tth layer are connected to \mathfrak{t} by arcs with 0 cost. We remove from G^{\prime} all nodes which are not connected to \mathfrak{s}^{0} or \mathfrak{t}, obtaining a reduced graph G^{\prime}. The resulting graph G^{\prime}, for the graph G presented in Figure 2, is shown in Figure 3.

Refer to caption
Figure 3: Graph GG^{\prime} corresponding to GG presented in Figure 2. For readability not all arcs from the A3A^{\prime}_{3} are shown.

Graph G^{\prime} has O(T^{2}\Gamma^{d}) nodes and O(T^{3}\Gamma^{d}) arcs. Since \Gamma^{d}\leq T, its size is polynomial in the input size of the MinMax problem. We now use the dual linear programming formulation of the longest path problem in G^{\prime}. Let us associate an unrestricted variable \pi_{u}^{j} with each node u^{j} of G^{\prime} other than \mathfrak{t}, and an unrestricted variable \pi_{\mathfrak{t}} with node \mathfrak{t}. The corresponding linear programming formulation for the longest path problem in G^{\prime} (and for Adv) is as follows:

\begin{array}{lll}\min&\pi_{\mathfrak{t}}\\ \text{s.t.}&\pi_{\mathfrak{s}}^{0}=0,\\ &\pi_{w}^{j}-\pi_{u}^{i}\geq c_{uw},&(u^{i},w^{j})\in A^{\prime},\\ &\pi_{\mathfrak{t}}-\pi_{w}^{j}\geq 0,&(w^{j},\mathfrak{t})\in A^{\prime}.\end{array}

Using the definition of cuwc_{uw} (see (36)), and adding linear constraints Xt=i[t]xiX_{t}=\sum_{i\in[t]}x_{i}, t[T]t\in[T], and 𝒙𝕏\boldsymbol{x}\in\mathbb{X}, we can rewrite this program as follows:

\begin{array}{lll}\min&\pi_{\mathfrak{t}}\\ \text{s.t.}&\pi_{\mathfrak{s}}^{0}=0,\\ &\pi_{w}^{j}-\pi_{u}^{i}\geq f_{I}(X_{k},D_{w}),&(u^{i},w^{j})\in A^{\prime},\\ &\pi_{w}^{j}-\pi_{u}^{i}\geq f_{B}(X_{k},D_{w}),&(u^{i},w^{j})\in A^{\prime},\\ &\pi_{\mathfrak{t}}-\pi_{w}^{j}\geq 0,&(w^{j},\mathfrak{t})\in A^{\prime},\\ &X_{t}=\sum_{i\in[t]}x_{i},&t\in[T],\\ &\boldsymbol{x}\in\mathbb{X}.\end{array} \quad (37)

Formulation (37) is a linear programming formulation for the MinMax problem, with O(T3)O(T^{3}) variables and O(T4)O(T^{4}) constraints. We thus get the following result:

Theorem 4.

The MinMax problem under 𝒰d\mathcal{U}^{d} in the general case is polynomially solvable.

4 Continuous budgeted uncertainty

In this section we discuss the MinMax and Adv problems under the continuous budgeted cumulative demand uncertainty \mathcal{U}^{c} defined in (10). Similarly as in Section 3, we first study the non-overlapping case and then consider the general one. We provide negative and positive complexity results for the Adv and MinMax problems.

4.1 Non-overlapping case

We start by analyzing the properties of the Adv problem.

Lemma 4.

The Adv problem under 𝒰c\mathcal{U}^{c}, for the non-overlapping case, boils down to the following problem:

\max\ \sum_{t\in[T]}\max\{f_{I}(X_{t},\widehat{D}_{t}-\delta_{t}),f_{B}(X_{t},\widehat{D}_{t}+\delta_{t})\} \quad (38)
\text{s.t.}\ \sum_{t\in[T]}\delta_{t}\leq\Gamma^{c}, \quad (39)
0\leq\delta_{t}\leq\Delta_{t},\;\;t\in[T]. \quad (40)
Proof.

The existence of an optimal vertex solution 𝜹\boldsymbol{\delta}^{*} to (38)-(40) follows from the convexity of the objective function (38) (see, e.g., [32]). Let 𝑫𝒰c\boldsymbol{D}^{{}^{\prime}}\in\mathcal{U}^{c} be an optimal solution of the Adv problem. Since 𝑫𝑫^1Γc\parallel\boldsymbol{D}^{{}^{\prime}}-\widehat{\boldsymbol{D}}\parallel_{1}\leq\Gamma^{c}, Dt=D^t±δtD^{{}^{\prime}}_{t}=\widehat{D}_{t}\pm\delta^{{}^{\prime}}_{t}, where δt[0,Δt]\delta_{t}^{{}^{\prime}}\in[0,\Delta_{t}], t[T]t\in[T], such that t[T]δtΓc\sum_{t\in[T]}\delta^{{}^{\prime}}_{t}\leq\Gamma^{c}. Thus 𝜹\boldsymbol{\delta}^{{}^{\prime}} is a feasible solution to (39)-(40). The value of the objective function of (12) for 𝑫\boldsymbol{D}^{{}^{\prime}} can be bounded from above by the value of (38) for 𝜹\boldsymbol{\delta}^{{}^{\prime}}, and so by the value of (38) for 𝜹\boldsymbol{\delta}^{*}. On the other hand, the cumulative demand scenario 𝑫𝒰c\boldsymbol{D}^{*}\in\mathcal{U}^{c} corresponding to 𝜹\boldsymbol{\delta}^{*} can be built as follows:

D^{*}_{t}=\begin{cases}\widehat{D}_{t}-\delta^{*}_{t}&\text{if }f_{I}(X_{t},\widehat{D}_{t}-\delta^{*}_{t})>f_{B}(X_{t},\widehat{D}_{t}+\delta^{*}_{t}),\\ \widehat{D}_{t}+\delta^{*}_{t}&\text{otherwise},\end{cases} \quad (41)

and by the optimality of \boldsymbol{D}^{\prime}, the value of the objective function of (12) for \boldsymbol{D}^{*}, which is equal to the value of (38) for \boldsymbol{\delta}^{*}, is bounded from above by the value of (12) for \boldsymbol{D}^{\prime}. Therefore \boldsymbol{D}^{*} is also an optimal solution to (12) and the lemma follows. ∎

Lemma 4 now shows that solving the Adv problem is equivalent to solving (38)-(40). An optimal solution to Adv can be formed according to (41). The following problem is an equivalent reformulation of (38)-(40):

\max\ A+\sum_{t\in[T]}c_{t}(\delta_{t}) \quad (42)
\text{s.t.}\ \sum_{t\in[T]}\delta_{t}\leq\Gamma^{c}, \quad (43)
0\leq\delta_{t}\leq\Delta_{t},\;\;t\in[T], \quad (44)

where

A=\sum_{t\in[T]}\max\{f_{I}(X_{t},\widehat{D}_{t}),f_{B}(X_{t},\widehat{D}_{t})\}

and

c_{t}(\delta)=\max\{f_{I}(X_{t},\widehat{D}_{t}-\delta),f_{B}(X_{t},\widehat{D}_{t}+\delta)\}-\max\{f_{I}(X_{t},\widehat{D}_{t}),f_{B}(X_{t},\widehat{D}_{t})\},\;t\in[T],

is a linear or piecewise linear nonnegative convex function in [0,Δt][0,\Delta_{t}], t[T]t\in[T], (see Figure 4). Observe that AA is constant. Thus in order to solve the Adv problem we need to solve the inner optimization problem of (42)-(44), i.e.

\max\ \sum_{t\in[T]}c_{t}(\delta_{t}) \quad (45)
\text{s.t.}\ \sum_{t\in[T]}\delta_{t}\leq\Gamma^{c}, \quad (46)
0\leq\delta_{t}\leq\Delta_{t},\;\;t\in[T]. \quad (47)
Refer to caption
Figure 4: Functions ct(δ)c_{t}(\delta), t[T]t\in[T], in the problem (42)-(44).

The above problem is a special case of the continuous knapsack problem with separable convex utilities, which is weakly NP-hard in general (see [30]). In (45)-(47) the separable convex utilities c_{t}(\delta) have simpler, piecewise linear forms, depicted in Figure 4. The next theorem shows that (45)-(47) is weakly NP-hard even for the restricted separable convex utilities c_{t}(\delta) shown in Figure 4b, and so Adv is weakly NP-hard (the proof is similar in spirit to that in [30]).

Theorem 5.

The Adv problem under 𝒰c\mathcal{U}^{c}, for the non-overlapping case, is weakly NP-hard.

Proof.

Consider the following weakly NP-hard Subset Sum problem (see, e.g., [18]), in which we are given a collection {a1,,an}\{a_{1},\ldots,a_{n}\} of nn positive integers and an integer bb. We ask if there is a subset S[n]S\subseteq[n] such that iSai=b\sum_{i\in S}a_{i}=b.

We first show a polynomial time reduction from Subset Sum to (45)-(47). Given an instance of Subset Sum, a corresponding instance of (45)-(47) is built by setting the following parameters: the number of periods T=n; the costs c^{P}=0, c^{I}=0, c^{B}=2; the selling price b^{P}=0; the nominal value of the cumulative demand \widehat{D}_{t}=tA for every t\in[T], where A=\sum_{i\in[n]}a_{i}; the upper bound \Delta_{t}=a_{t} for every t\in[T]; the cumulative production X_{t}=tA+\frac{1}{2}a_{t} for every t\in[T]; the budget \Gamma^{c}=b. Therefore c_{t}(\delta)=\max\{0,2\delta-a_{t}\}, \delta\in[0,a_{t}], is a piecewise linear convex utility function, t\in[T]. Now the problem (45)-(47) has the following form:

\max\; z=\sum_{t\in[T]}\max\{0,2\delta_{t}-a_{t}\}  (48)
s.t. \sum_{t\in[T]}\delta_{t}\leq b,  (49)
0\leq\delta_{t}\leq a_{t},\;\;t\in[T].  (50)

Note that $c_{t}(\delta)=\max\{0,2\delta-a_{t}\}=a_{t}$ if and only if $\delta=a_{t}$, and $c_{t}(\delta)=\max\{0,2\delta-a_{t}\}<\delta$ for $\delta\in(0,a_{t})$. Thus, by the constraint (49), the optimal value of (48) is bounded from above by $b$. Accordingly, it is easily seen that $z=b$ if and only if the instance of Subset Sum is positive. Indeed, $z=b$ if and only if $\delta_{t}=a_{t}$ holds for each $\delta_{t}>0$, which means that the answer to Subset Sum is yes. This completes the proof that (45)-(47) is weakly NP-hard. Furthermore, a trivial verification shows that (48)-(50) is an instance of (38)-(40). Hence, the Adv problem under $\mathcal{U}^{c}$ is weakly NP-hard as well. ∎
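For completeness, the reduction used in the proof can be written down directly. The sketch below (in Python, with names of our choosing) maps a Subset Sum instance to the parameters of the instance of (45)-(47) constructed above.

```python
def subset_sum_to_adv_instance(a, b):
    """Build the instance of (45)-(47) used in the proof of Theorem 5 from a
    Subset Sum instance given by the list a = [a_1, ..., a_n] and the target b."""
    n = len(a)
    A = sum(a)
    return {
        "T": n,
        "cP": 0, "cI": 0, "cB": 2, "bP": 0,
        "D_hat": [t * A for t in range(1, n + 1)],            # nominal cumulative demands
        "Delta": list(a),                                      # deviation bounds Delta_t = a_t
        "X": [t * A + a[t - 1] / 2 for t in range(1, n + 1)],  # cumulative production X_t
        "Gamma_c": b,                                          # continuous budget
    }
```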

Before proposing a pseudopolynomial algorithm for the Adv problem, we show an integrality property of an optimal solution to (45)-(47), or equivalently to (38)-(40). The following lemma is the key step.

Lemma 5.

Suppose that $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$. Then there exists an optimal solution $\boldsymbol{\delta}^{*}$ to (45)-(47) such that $\delta^{*}_{t}\in\{0,1,\ldots,\Delta_{t}\}$, $t\in[T]$.

Proof.

Substituting $\sum_{i\in[\Delta_{t}]}\delta^{i}_{t}$ for $\delta_{t}$, where $\delta^{i}_{t}\in[0,1]$, $t\in[T]$, we can rewrite (45)-(47) as:

\max \sum_{t\in[T]}c_{t}\left(\sum_{i\in[\Delta_{t}]}\delta^{i}_{t}\right)
s.t. \sum_{t\in[T]}\sum_{i\in[\Delta_{t}]}\delta^{i}_{t}\leq\Gamma^{c},
0\leq\delta^{i}_{t}\leq 1,\;\;t\in[T],\;i\in[\Delta_{t}].

The constraint matrix of the resulting problem is totally unimodular. Hence, by the assumption $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$, each vertex solution is integral, i.e. $\delta^{i}_{t}\in\{0,1\}$, $t\in[T]$, $i\in[\Delta_{t}]$ (see, e.g., [27]). Moreover, since the objective function is convex, it attains its maximum value at a vertex solution (see, e.g., [32]), which is integral. Thus an optimal solution $\boldsymbol{\delta}^{*}$ to the original problem is integral as well and the lemma follows. ∎

Figure 5: Graph $G$ for $T=3$, $\Gamma^{c}=2$ and $\Delta_{t}=2$, $t\in[T]$.

Now we are ready to give a pseudopolynomial transformation of problem (38)-(40) to a longest path problem in a layered directed acyclic graph $G=(V,A)$. We are given an instance of (38)-(40), where $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$. The graph $G=(V,A)$ is built as follows: the set $V$ is partitioned into $T+2$ disjoint layers $V_{0},V_{1},\ldots,V_{T},V_{T+1}$, in which each layer $V_{t}$ corresponding to period $t$, $t\in[T]$, has $\Gamma^{c}+1$ nodes denoted by $t^{0},\ldots,t^{\Gamma^{c}}$; the sets $V_{0}=\{\mathfrak{s}=0^{0}\}$ and $V_{T+1}=\{\mathfrak{t}\}$ contain two distinguished nodes, $\mathfrak{s}$ and $\mathfrak{t}$. The notation $t^{\delta}$, $\delta=0,\ldots,\Gamma^{c}$, means that $\delta$ units of the available uncertainty $\Gamma^{c}$ have been allocated by an adversary to the cumulative demands in periods $1$ to $t$. Each node $(t-1)^{\delta}\in V_{t-1}$, $t\in[T]$ (including the source node $\mathfrak{s}=0^{0}$ in $V_{0}$) has at most $\Delta_{t}+1$ arcs that go to nodes in layer $V_{t}$, namely the arc $((t-1)^{\delta},t^{\delta+\delta_{t}})$ exists and is included in the set of arcs $A$ if $\delta+\delta_{t}\leq\Gamma^{c}$, where $\delta_{t}=0,\ldots,\Delta_{t}$. Moreover, we associate with each such arc $((t-1)^{\delta},t^{\delta+\delta_{t}})\in A$ the cost $c_{(t-1)^{\delta},\,t^{\delta+\delta_{t}}}$ defined in the following way (see also (38)):

c_{(t-1)^{\delta},\,t^{\delta+\delta_{t}}}=\max\{f_{I}(X_{t},\widehat{D}_{t}-\delta_{t}),f_{B}(X_{t},\widehat{D}_{t}+\delta_{t})\}.  (51)

Notice that the costs are constants, because $\boldsymbol{x}\in\mathbb{X}$ is fixed. We finish by connecting each node from $V_{T}$ with the sink node $\mathfrak{t}$ by an arc of zero cost. The transformation can be done in $O(T\Gamma^{c}\Delta_{\max})$ time, where $\Delta_{\max}=\max_{t\in[T]}\Delta_{t}$. An example for $T=3$, $\Gamma^{c}=2$ and $\Delta_{t}=2$, $t\in[T]$, is shown in Figure 5.

Proposition 2.

A solution with cost $C^{*}$ is optimal for an instance of problem (38)-(40) if and only if there is a longest $\mathfrak{s}$-$\mathfrak{t}$ path of length $C^{*}$ in the constructed graph $G$.

Proof.

A trivial verification shows that each path from $\mathfrak{s}$ to $\mathfrak{t}$ in $G$ (its first $T$ arcs) models an integral feasible solution $\boldsymbol{\delta}=(\delta_{t})_{t\in[T]}$ to (38)-(40) and vice versa, if $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$ (see Lemma 5). Indeed, consider any $\mathfrak{s}$-$\mathfrak{t}$ path in $G$. By the construction of $G$, it has the form $\mathfrak{s}=0^{0}\leadsto 1^{\delta_{1}}\leadsto 2^{\delta_{1}+\delta_{2}}\leadsto\cdots\leadsto(t-1)^{\sum_{k\in[t-1]}\delta_{k}}\leadsto t^{\sum_{k\in[t]}\delta_{k}}\leadsto\cdots\leadsto T^{\sum_{k\in[T]}\delta_{k}}\leadsto\mathfrak{t}$, where $\delta_{t}\leq\Delta_{t}$ and $\delta_{t}\in\mathbb{Z}_{+}$ for every $t\in[T]$, and its arcs $((t-1)^{\sum_{k\in[t-1]}\delta_{k}},t^{\sum_{k\in[t]}\delta_{k}})\in A$, since $\sum_{k\in[t]}\delta_{k}\leq\Gamma^{c}$, $t\in[T]$. Thus the total amount of uncertainty allocated to the cumulative demands along this path (along its first $T$ arcs) is at most $\Gamma^{c}$, i.e. $\sum_{t\in[T]}\delta_{t}\leq\Gamma^{c}$. Furthermore, it follows from (51) that the cost of this path is equal to the value of the objective function (38) for $\boldsymbol{\delta}$. Accordingly, the cost of an optimal solution to (38)-(40) and the length of a longest path in $G$ are the same, equal to $C^{*}$. ∎

From Lemma 4 and Proposition 2 it follows that solving the Adv problem boils down to finding a longest path from $\mathfrak{s}$ to $\mathfrak{t}$ in the graph $G$ built above, which can be done in $O(|A|+|V|)$ time (see, e.g., [3]). Taking into account the running time required to construct $G$ and to find a longest path in $G$, we obtain the following theorem:

Theorem 6.

Suppose that $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$. Then the Adv problem under $\mathcal{U}^{c}$, for the non-overlapping case, can be solved in $O(T\Gamma^{c}\Delta_{\max})$ time.
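The layered graph $G$ need not be stored explicitly: a forward pass over its layers computes the longest-path value within the time bound of Theorem 6. The following Python sketch is ours; $f_{I}$ and $f_{B}$ are again supplied as callables, and the function returns the optimal value of (38)-(40), which by Lemma 4 is also the optimal value of the Adv problem.

```python
def adv_nonoverlapping(X, D_hat, Delta, Gamma_c, f_I, f_B):
    """Forward DP over the layers of the graph G from Proposition 2.
    val[g] = largest total cost of the first t periods when g units of the
    budget Gamma_c have been allocated so far; runs in O(T*Gamma_c*Delta_max)."""
    T = len(X)
    NEG = float("-inf")
    val = [NEG] * (Gamma_c + 1)
    val[0] = 0.0                                  # source node s = 0^0
    for t in range(T):                            # layer V_{t+1}, i.e. period t+1
        nxt = [NEG] * (Gamma_c + 1)
        for g in range(Gamma_c + 1):
            if val[g] == NEG:
                continue
            for d in range(min(Delta[t], Gamma_c - g) + 1):
                # arc from node t^g to node (t+1)^{g+d} with the cost (51)
                arc = max(f_I(X[t], D_hat[t] - d), f_B(X[t], D_hat[t] + d))
                if val[g] + arc > nxt[g + d]:
                    nxt[g + d] = val[g] + arc
        val = nxt
    return max(val)                               # zero-cost arcs into the sink
```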

There are some polynomially solvable cases of the Adv problem (problem (38)-(40)). The first one is obvious, namely when $\Gamma^{c}$ is bounded by a polynomial in the problem size (notice that $\Delta_{\max}\leq\Gamma^{c}$). The second one is the uniform case, i.e. the bounds $\Delta_{t}=\Delta$ for every $t\in[T]$; then the inner problem (45)-(47) can be solved in $O(T^{2})$ time [30]. The last one is the case when $c_{t}(\delta)$ in the objective function (45) is linear (see Figure 4a) for every $t\in[T]$. Then the problem (45)-(47) becomes a continuous knapsack problem with separable linear utilities, which can be solved in $O(T)$ time (see, e.g., [27]).
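For the last of these cases a fractional-knapsack greedy suffices. The sketch below (names ours) sorts the periods by the slope of $c_{t}$, which already gives an $O(T\log T)$ procedure; the $O(T)$ bound cited from [27] additionally requires replacing the sort by median-based selection.

```python
def continuous_knapsack_linear(slopes, Delta, Gamma_c):
    """Solve (45)-(47) when c_t(delta) = slopes[t] * delta with slopes[t] >= 0:
    spend the budget on the periods with the largest per-unit gain first."""
    remaining = Gamma_c
    value = 0.0
    for t in sorted(range(len(slopes)), key=lambda i: slopes[i], reverse=True):
        if remaining <= 0:
            break
        take = min(Delta[t], remaining)
        value += slopes[t] * take
        remaining -= take
    return value
```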

It turns out that if $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$, then the Adv problem admits a fully polynomial time approximation scheme (FPTAS)¹. Indeed, the existence of the FPTAS follows from the fact that problem (45)-(47) with the integrality property is a special case of the nonlinear knapsack problem with a separable nondecreasing objective function, a separable nondecreasing packing (budget) constraint and integer variables, which admits an FPTAS (see [23]).

¹A maximization (resp. minimization) problem admits an FPTAS if for each $\epsilon>0$ and every instance $I$ the inequality $OPT(I)\leq(1+\epsilon)c(I)$ (resp. $c(I)\leq(1+\epsilon)OPT(I)$) holds, where $OPT(I)$ is the optimal cost of $I$ and $c(I)$ is the cost returned by an approximation algorithm whose running time is polynomial in both $1/\epsilon$ and the size of $I$. It is assumed that the cost of each possible solution of the problem is nonnegative.

Corollary 1.

Suppose that $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$. Then the Adv problem under $\mathcal{U}^{c}$, for the non-overlapping case, admits an FPTAS.

Proof.

Let $\boldsymbol{\delta}^{*}$ be an optimal solution to (42)-(44) (equivalently, to the Adv problem) and let $\boldsymbol{\delta}^{\prime}$ be a solution to (45)-(47) returned by the FPTAS proposed in [23]. Obviously, the running time of the FPTAS for (42)-(44) is the same as the one for (45)-(47). We only need to show that the inequality $A+\sum_{t\in[T]}c_{t}(\delta^{*}_{t})\leq(1+\epsilon)(A+\sum_{t\in[T]}c_{t}(\delta^{\prime}_{t}))$ holds for every $\epsilon>0$. There is no loss of generality in assuming $A\geq 0$. Hence, and from the fact that $\boldsymbol{\delta}^{\prime}$ is an approximate solution to (45)-(47), we get $A+\sum_{t\in[T]}c_{t}(\delta^{*}_{t})\leq A+(1+\epsilon)\sum_{t\in[T]}c_{t}(\delta^{\prime}_{t})\leq(1+\epsilon)(A+\sum_{t\in[T]}c_{t}(\delta^{\prime}_{t}))$. ∎

We now deal with the MinMax problem. Theorem 5 immediately yields the following corollary.

Corollary 2.

The MinMax problem under $\mathcal{U}^{c}$, for the non-overlapping case, is weakly NP-hard.

We now provide some positive results for the MinMax problem. We propose an ellipsoid algorithm based approach, adapted from [2], where a similar class of robust problems (with a different scenario set) has been studied. The MinMax problem (see (11)) can be formulated as the following convex programming model:

\min\; \alpha  (52)
s.t. F(\boldsymbol{x},\boldsymbol{D})\leq\alpha, \quad \boldsymbol{D}\in\mathcal{U}^{c},  (53)
\boldsymbol{x}\in\mathbb{X},  (54)

where $F(\boldsymbol{x},\boldsymbol{D})=\sum_{t\in[T]}\max\{f_{I}(X_{t},D_{t}),f_{B}(X_{t},D_{t})\}$ is a convex function (recall that $X_{t}=\sum_{i\in[t]}x_{i}$). Thus the above program has infinitely many convex constraints of the form (53), which together with the linear constraints (54) describe a convex set. One can solve (52)-(54) by the ellipsoid algorithm (see, e.g., [19]). By the equivalence of optimization and separation (see, e.g., [19]), we only need a separation oracle for the convex set $\mathbb{P}$ determined by the constraints (53) and (54), i.e. a procedure which, for a given $(\boldsymbol{x}^{*},\alpha^{*})\in\mathbb{R}^{T+1}$, either decides that $(\boldsymbol{x}^{*},\alpha^{*})\in\mathbb{P}$ or returns a separating hyperplane between $\mathbb{P}$ and $(\boldsymbol{x}^{*},\alpha^{*})$. Write $\mathbb{P}=\mathbb{P}_{\text{Adv}}\cap\mathbb{X}$, where $\mathbb{P}_{\text{Adv}}$ is the convex set corresponding to the constraints (53). Clearly, checking if $(\boldsymbol{x}^{*},\alpha^{*})\in\mathbb{X}$, or forming a separating hyperplane if $(\boldsymbol{x}^{*},\alpha^{*})\not\in\mathbb{X}$, boils down to detecting a constraint violated by $(\boldsymbol{x}^{*},\alpha^{*})$ and can be trivially done in polynomial time, since $\mathbb{X}$ is explicitly given by a polynomial number of linear constraints. On the other hand, either deciding that $(\boldsymbol{x}^{*},\alpha^{*})\in\mathbb{P}_{\text{Adv}}$ or forming a separating hyperplane relies on solving the Adv problem for $\boldsymbol{x}^{*}\in\mathbb{X}$. Indeed, if $F(\boldsymbol{x}^{*},\boldsymbol{D}^{*})=\max_{\boldsymbol{D}\in\mathcal{U}^{c}}F(\boldsymbol{x}^{*},\boldsymbol{D})\leq\alpha^{*}$, then $(\boldsymbol{x}^{*},\alpha^{*})\in\mathbb{P}_{\text{Adv}}$. Otherwise, a separating hyperplane is of the form $\sum_{t\in[T]}F_{t}(\boldsymbol{x},\boldsymbol{D}^{*})-\alpha=0$, where $F_{t}(\boldsymbol{x},\boldsymbol{D}^{*})=f_{I}(X_{t},D^{*}_{t})$ if $f_{I}(X^{*}_{t},D^{*}_{t})>f_{B}(X^{*}_{t},D^{*}_{t})$, and $F_{t}(\boldsymbol{x},\boldsymbol{D}^{*})=f_{B}(X_{t},D^{*}_{t})$ otherwise (a sketch of this separation step is given after Theorem 7). The overall running time of the algorithm for solving (52)-(54) depends on the running time of the algorithm applied to the Adv problem, since the ellipsoid algorithm performs a polynomial number of operations and calls to the separation oracle. On account of the above remark and by Theorem 6, we get the following result:

Theorem 7.

Suppose that $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$. Then the MinMax problem under $\mathcal{U}^{c}$, for the non-overlapping case, can be solved in pseudopolynomial time.
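The separation step for $\mathbb{P}_{\text{Adv}}$ described before Theorem 7 can be sketched as follows. Here `solve_adv` stands for any exact procedure for the Adv problem (for instance, the dynamic program sketched after Theorem 6 in the non-overlapping case) that returns a worst-case scenario together with its cost; all names are ours, and the snippet only returns the data that determine the cut.

```python
from itertools import accumulate

def separate_adv(x_star, alpha_star, solve_adv, f_I, f_B):
    """Separation oracle for the set P_Adv given by (53): if the worst-case cost
    of x* exceeds alpha*, report the scenario D* and, for every period, which of
    f_I/f_B is active; this pair defines the cut sum_t F_t(x, D*) <= alpha."""
    X_star = list(accumulate(x_star))            # cumulative productions X*_t
    D_star, worst_cost = solve_adv(x_star)       # hypothetical exact Adv solver
    if worst_cost <= alpha_star:
        return None                              # (x*, alpha*) belongs to P_Adv
    active = ["I" if f_I(X_star[t], D_star[t]) > f_B(X_star[t], D_star[t]) else "B"
              for t in range(len(X_star))]
    return D_star, active
```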

Accordingly, for all the polynomially solvable cases of the Adv problem mentioned above in this section, one can obtain polynomial algorithms for the MinMax problem. An alternative approach to solving the MinMax problem is a linear programming formulation with a pseudopolynomial number of constraints and variables, which constitutes a constructive proof of Theorem 7. Assuming that $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$, we can reduce problem (38)-(40), which corresponds to Adv, to finding a longest path in the layered weighted graph $G=(V,A)$ (see Proposition 2). We can use the same reasoning as in the previous section and build the following linear programming problem for MinMax:

\min\; \pi_{\mathfrak{t}}  (55)
s.t. \pi^{\delta+\delta_{t}}_{t}-\pi^{\delta}_{t-1}\geq f_{I}(X_{t},\widehat{D}_{t}-\delta_{t}), \quad ((t-1)^{\delta},t^{\delta+\delta_{t}})\in A,  (56)
\pi^{\delta+\delta_{t}}_{t}-\pi^{\delta}_{t-1}\geq f_{B}(X_{t},\widehat{D}_{t}+\delta_{t}), \quad ((t-1)^{\delta},t^{\delta+\delta_{t}})\in A,  (57)
\pi_{\mathfrak{t}}-\pi^{\delta}_{T}\geq 0, \quad (T^{\delta},\mathfrak{t})\in A,  (58)
\pi^{0}_{0}=0,  (59)

where $\pi^{\delta}_{t}$ is an unrestricted variable associated with node $t^{\delta}$ and $\pi_{\mathfrak{t}}$ is an unrestricted variable associated with node $\mathfrak{t}$ of $G$. The number of constraints and variables in (55)-(59) is $O(T\Gamma^{c}\Delta_{\max})$. Now adding the linear constraints $X_{t}=\sum_{i\in[t]}x_{i}$, $t\in[T]$, and $\boldsymbol{x}\in\mathbb{X}$ to (55)-(59) gives a linear program for the MinMax problem with a pseudopolynomial number of constraints and variables. It is worth pointing out that all the polynomially solvable cases of the Adv problem presented in this section can be modeled by linear programs with polynomial numbers of constraints and variables, and thus they apply to the MinMax problem as well.

We now show that there exists an FPTAS for the MinMax problem. It turns out that the formulation (52)-(54) admits an FPTAS if there exists an FPTAS for $\max_{\boldsymbol{D}\in\mathcal{U}^{c}}F(\boldsymbol{x},\boldsymbol{D})$ for a given $\boldsymbol{x}\in\mathbb{X}$ (the Adv problem). This result can easily be adapted from [2, Lemma 3.5]. Corollary 1 now implies:

Corollary 3.

Suppose that $\Gamma^{c},\Delta_{t}\in\mathbb{Z}_{+}$, $t\in[T]$. Then the MinMax problem under $\mathcal{U}^{c}$, for the non-overlapping case, admits an FPTAS.

4.2 General case

We now drop the assumption $\widehat{D}_{t}+\Delta_{t}\leq\widehat{D}_{t+1}-\Delta_{t+1}$. Theorem 5 and Corollary 2 now imply the following hardness results for both problems under consideration.

Corollary 4.

The Adv and MinMax problems for $\mathcal{U}^{c}$ in the general case are weakly NP-hard.

From an algorithmic point of view, the situation in the general case is more difficult. Recall that for the non-overlapping case the model for the Adv problem has the integrality property (see Lemma 5), which allowed us to build pseudopolynomial algorithms. Now, in order to ensure that $\boldsymbol{D}\in\mathcal{U}^{c}$, we have to add the additional constraints $D_{t}\leq D_{t+1}$, $t\in[T-1]$, and the resulting model for the Adv problem takes the following form:

\max \sum_{t\in[T]}\max\{f_{I}(X_{t},\widehat{D}_{t}+\delta_{t}),f_{B}(X_{t},\widehat{D}_{t}+\delta_{t})\}  (60)
s.t. \sum_{t\in[T]}|\delta_{t}|\leq\Gamma^{c},  (61)
\widehat{D}_{t}+\delta_{t}\leq\widehat{D}_{t+1}+\delta_{t+1}, \quad t\in[T-1],  (62)
-\Delta_{t}\leq\delta_{t}\leq\Delta_{t}, \quad t\in[T].  (63)

The constraints (61)-(63) ensure that the cumulative demand scenario $\boldsymbol{D}$, induced by $\boldsymbol{\delta}$, belongs to $\mathcal{U}^{c}$. These constraints determine a convex polytope. Since the objective function is convex, its maximum value is attained at a vertex of this polytope. However, we show an instance of the Adv problem which has no integral optimal solution, even when $\Gamma^{c},\Delta_{t},\widehat{D}_{t}\in\mathbb{Z}_{+}$, $t\in[T]$. Let $T=3$, $x_{1}=x_{2}=0$, $x_{3}=5$, $c^{B}=2$, $c^{I}=1$, $c^{P}=b^{P}=0$, $\widehat{D}_{1}=3$, $\widehat{D}_{2}=4$, $\widehat{D}_{3}=5$, $\Delta_{1}=3$, $\Delta_{2}=2$, $\Delta_{3}=1$ and $\Gamma^{c}=4$. Now the model (60)-(63) has the following form:

\max\{14+2(\delta_{1}+\delta_{2})-\delta_{3}\,:\, \delta_{1}-\delta_{2}\leq 1,\;\delta_{2}-\delta_{3}\leq 1,\;|\delta_{1}|+|\delta_{2}|+|\delta_{3}|\leq 4,
\delta_{1}\in[-3,3],\;\delta_{2}\in[-2,2],\;\delta_{3}\in[-1,1]\}.  (64)

An easy computation shows that $\delta^{*}_{1}=2\frac{1}{3}$, $\delta^{*}_{2}=1\frac{1}{3}$ and $\delta^{*}_{3}=\frac{1}{3}$ is an optimal vertex solution to (64) and that there is no integral optimal solution.
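The lack of an integral optimum can also be checked by a short enumeration: the fractional vertex above attains the objective value $21$ in (64), while the brute force below over all integer points of (64) returns $20$.

```python
from itertools import product

def best_integer_value_of_64():
    """Enumerate all integer feasible points of (64) and return the best
    objective value; it equals 20, strictly below the fractional optimum 21."""
    best = float("-inf")
    for d1, d2, d3 in product(range(-3, 4), range(-2, 3), range(-1, 2)):
        if d1 - d2 <= 1 and d2 - d3 <= 1 and abs(d1) + abs(d2) + abs(d3) <= 4:
            best = max(best, 14 + 2 * (d1 + d2) - d3)
    return best

print(best_integer_value_of_64())   # prints 20
```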

One can easily construct a mixed integer programming (MIP) counterpart of (60)-(63) for the Adv problem, by linearizing (60) and (61), namely

\max \sum_{t\in[T]}\pi_{t}  (65)
s.t. \pi_{t}\leq f_{I}(X_{t},\widehat{D}_{t}+\delta_{t})+M_{t}\gamma_{t}, \quad t\in[T],  (66)
\pi_{t}\leq f_{B}(X_{t},\widehat{D}_{t}+\delta_{t})+M_{t}(1-\gamma_{t}), \quad t\in[T],  (67)
\sum_{t\in[T]}\beta_{t}\leq\Gamma^{c},  (68)
\delta_{t}\leq\beta_{t}, \quad t\in[T],  (69)
-\delta_{t}\leq\beta_{t}, \quad t\in[T],  (70)
\widehat{D}_{t}+\delta_{t}\leq\widehat{D}_{t+1}+\delta_{t+1}, \quad t\in[T-1],  (71)
-\Delta_{t}\leq\delta_{t}\leq\Delta_{t}, \quad t\in[T],  (72)
\beta_{t}\geq 0,\;\gamma_{t}\in\{0,1\}, \quad t\in[T],  (73)

where MtM_{t}, t[T]t\in[T], are suitably chosen large numbers. Unfortunately (65)-(73) cannot be extended to a compact MIP for the MinMax problem by using dualization.

In order to cope with the MinMax problem we construct a decomposition algorithm that can be seen as a version of Benders' decomposition (similar algorithms have been previously used in [2, 7, 12, 20, 38, 44]). The idea consists in solving a certain restricted MinMax problem iteratively, which provides an exact or an approximate solution. At each iteration an approximate production plan is computed. It is then evaluated by solving the Adv problem, and the lower and upper bounds on the cost of an optimal production plan for the original MinMax problem are improved. Consider the following linear program, called the master problem, with $\mathbb{U}\subseteq\mathcal{U}^{c}$:

\min\; \alpha  (74)
s.t. \sum_{t\in[T]}\pi^{\boldsymbol{D}}_{t}\leq\alpha, \quad \boldsymbol{D}\in\mathbb{U},  (75)
f_{I}(X_{t},D_{t})\leq\pi^{\boldsymbol{D}}_{t}, \quad t\in[T],\;\boldsymbol{D}\in\mathbb{U},  (76)
f_{B}(X_{t},D_{t})\leq\pi^{\boldsymbol{D}}_{t}, \quad t\in[T],\;\boldsymbol{D}\in\mathbb{U},  (77)
X_{t}=\sum_{i\in[t]}x_{i}, \quad t\in[T],  (78)
\boldsymbol{x}\in\mathbb{X}.  (79)

The constraints (75)-(77) are the linearization of (53). Thus (74)-(79) is a relaxation of the MinMax problem (for $\mathbb{U}=\mathcal{U}^{c}$ we get the MinMax problem itself). An optimal solution $\boldsymbol{x}^{*}$ to (74)-(79) is an approximate plan for MinMax and its quality is evaluated by solving the MIP model (65)-(73). In this way we get a lower and an upper bound on the optimal cost. The formal description of the above decomposition procedure is presented as Algorithm 1.

Step 0. $LB:=-\infty$, $UB:=+\infty$, $\mathbb{U}:=\{\widehat{\boldsymbol{D}}\}$.
Step 1. Solve the master problem (74)-(79) with $\mathbb{U}$, derive an optimal solution $(\boldsymbol{x}^{*},\alpha^{*})$ and update $LB:=\alpha^{*}$.
Step 2. Solve the Adv problem (65)-(73) for $\boldsymbol{x}^{*}$, derive an optimal (worst-case) cumulative demand scenario $\widehat{\boldsymbol{D}}+\boldsymbol{\delta}^{*}$ and update $UB:=\min\{UB,\sum_{t\in[T]}\max\{f_{I}(X^{*}_{t},\widehat{D}_{t}+\delta^{*}_{t}),f_{B}(X^{*}_{t},\widehat{D}_{t}+\delta^{*}_{t})\}\}$.
Step 3. If $UB-LB\leq\epsilon$, then output $\boldsymbol{x}^{*}$ and EXIT.
Step 4. Update $\mathbb{U}:=\mathbb{U}\cup\{\widehat{\boldsymbol{D}}+\boldsymbol{\delta}^{*}\}$ and go to Step 1.
Algorithm 1: A decomposition algorithm for the MinMax problem.

Without loss of generality we can assume that $\mathcal{U}^{c}$ is a finite set containing only vertex scenarios, since the Adv problem attains its optimum at a vertex scenario. This assumption ensures the convergence of Algorithm 1 in a finite number of iterations. More precisely, Algorithm 1 terminates after at most $O(|\mathcal{U}^{c}|)$ iterations [44, Proposition 2]. However, in practice, decomposition based algorithms perform a rather small number of iterations (see, e.g., [7, 12, 20, 38, 44]).
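For illustration, the control flow of Algorithm 1 can be sketched in Python as follows. Here `solve_master` and `solve_adv_mip` are hypothetical wrappers around an LP solver for (74)-(79) and a MIP solver for (65)-(73), respectively, and $f_{I}$, $f_{B}$ are again passed as callables; only the bookkeeping of the bounds and of the scenario set is shown.

```python
from itertools import accumulate

def decomposition(D_hat, solve_master, solve_adv_mip, f_I, f_B, eps=1e-6):
    """Algorithm 1: alternate between the master problem (74)-(79) restricted to
    the scenario set U and the adversarial MIP (65)-(73) until UB - LB <= eps."""
    U = [list(D_hat)]                                          # Step 0
    LB, UB = float("-inf"), float("inf")
    while True:
        x_star, alpha_star = solve_master(U)                   # Step 1
        LB = alpha_star
        X_star = list(accumulate(x_star))                      # X*_t = x*_1 + ... + x*_t
        delta_star = solve_adv_mip(x_star)                     # Step 2
        scenario = [D_hat[t] + delta_star[t] for t in range(len(D_hat))]
        worst = sum(max(f_I(X_star[t], scenario[t]), f_B(X_star[t], scenario[t]))
                    for t in range(len(D_hat)))
        UB = min(UB, worst)
        if UB - LB <= eps:                                     # Step 3
            return x_star, LB, UB
        U.append(scenario)                                     # Step 4
```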

5 Conclusions

In this paper we have discussed capacitated production planning under uncertainty. More specifically, we have studied a version of the capacitated single-item lot sizing problem with backordering under the budgeted cumulative demand uncertainty. We have considered two variants of the interval budgeted uncertainty representation and used the minmax criterion to choose a best robust production plan. For both variants, we have examined the problem of evaluating a given production plan in terms of its worst-case cost (the Adv problem) and the problem of finding a robust production plan along with its worst-case cost (the MinMax problem). Under the discrete budgeted uncertainty, we have provided polynomial algorithms for the Adv problem and polynomial linear programming based methods for the MinMax problem, in the non-overlapping case as well as in the general case. We have shown in this way that introducing uncertainty under the discrete budgeted model does not make the problems much computationally harder than their deterministic counterparts. Under the continuous budgeted uncertainty, the problems under consideration have different properties than under the discrete budgeted one. In particular, the Adv problem, and in consequence the MinMax one, have turned out to be weakly NP-hard even in the non-overlapping case. For the non-overlapping case we have constructed pseudopolynomial algorithms for the Adv problem and proposed a pseudopolynomial ellipsoid-based algorithm and a linear program with a pseudopolynomial number of constraints and variables for the MinMax problem. Furthermore, we have shown that both problems admit an FPTAS. In the general case the problems still remain weakly NP-hard and, unfortunately, there is no easy characterization of vertex cumulative demand scenarios, namely the integrality property does not hold. We recall that this property has allowed us to build the pseudopolynomial methods in the non-overlapping case. Accordingly, we have proposed a MIP model for the Adv problem and a constraint generation algorithm for the MinMax problem in the general case.

There is still a number of open questions concerning the examined problems, in particular under the continuous budgeted uncertainty in the general case. The Adv and MinMax problems are then weakly NP-hard. Thus a full-fledged complexity analysis of these problems still has to be carried out, i.e. it is interesting to check the existence of pseudopolynomial algorithms, FPTASs or approximation algorithms for them. Furthermore, proposing a compact MIP model for the MinMax problem is an interesting open problem.

Acknowledgements

Romain Guillaume was partially supported by the project caasc ANR-18-CE10-0012 of the French National Agency for Research. Adam Kasperski and Paweł Zieliński were supported by the National Science Centre, Poland, grant 2017/25/B/ST6/00486.

References

  • [1] A. Agra, M. Poss, and M. Santos. Optimizing make-to-stock policies through a robust lot-sizing model. International Journal of Production Economics, 200:302–310, 2018.
  • [2] A. Agra, M. C. Santos, D. Nace, and M. Poss. A dynamic programming approach for a class of robust optimization problems. SIAM Journal on Optimization, 26:1799–1823, 2016.
  • [3] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: theory, algorithms, and applications. Prentice Hall, Englewood Cliffs, New Jersey, 1993.
  • [4] D. J. Alem and R. Morabito. Production planning in furniture settings via robust optimization. Computers & Operations Research, 39:139–150, 2012.
  • [5] M. A. Aloulou, A. Dolgui, and M. Y. Kovalyov. A bibliography of non-deterministic lot-sizing models. International Journal of Production Research, 52:2293–2310, 2014.
  • [6] J. R. T. Arnold, S. N. Chapman, and L. M. Clive. Introduction to Materials Management. Prentice Hall, 7-th edition, 2011.
  • [7] Ö. N. Attila, A. Agra, K. Akartunali, and A. Arulselvan. A decomposition algorithm for robust lot sizing problem with remanufacturing option. In O. Gervasi, B. Murgante, S. Misra, G. Borruso, C. M. Torre, A. M. A. C. Rocha, D. Taniar, B. O. Apduhan, E. N. Stankova, and A. Cuzzocrea, editors, Computational Science and Its Applications - ICCSA 2017, Part II, volume 10405 of Lecture Notes in Computer Science, pages 684–695. Springer, 2017.
  • [8] J. F. Benders. Partitioning procedures for solving mixed-variables programming problems. Numerische Mathematik, 4:238–252, 1962.
  • [9] D. Bertsimas and M. Sim. Robust discrete optimization and network flows. Mathematical Programming, 98:49–71, 2003.
  • [10] D. Bertsimas and M. Sim. The price of robustness. Operations Research, 52:35–53, 2004.
  • [11] D. Bertsimas and A. Thiele. A robust optimization approach to inventory theory. Operations Research, 54:150–168, 2006.
  • [12] D. Bienstock and N. Özbay. Computing robust basestock levels. Discrete Optimization, 5:389–414, 2008.
  • [13] N. Brahimi, N. Absi, S. Dauzère-Pérès, and A. Nordli. Single-item dynamic lot-sizing problems: An updated survey. European Journal of Operational Research, 263:838–863, 2017.
  • [14] W. H. Chen and J. M. Thizy. Analysis of relaxations for the multi-item capacitated lot-sizing problem. Annals of Operations Research, 26:29–72, 1990.
  • [15] G. Dahl and B. Realfsen. The cardinality-constrained shortest path problem in 2-graphs. Networks, 36:1–8, 2000.
  • [16] A. Dolgui and C. Prodhon. Supply planning under uncertainties in MRP environments: A state of the art. Annual Reviews in Control, 31:269–279, 2007.
  • [17] M. Florian, K. J. Lenstra, and A. H. G. Rinnooy Kan. Deterministic Production Planning: Algorithms and Complexity. Management Science, 26:669–679, 1980.
  • [18] M. R. Garey and D. S. Johnson. Computers and Intractability. A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, 1979.
  • [19] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, 1993.
  • [20] R. Guillaume, P. Kobylański, and P. Zieliński. A robust lot sizing problem with ill-known demands. Fuzzy Sets and Systems, 206:39–57, 2012.
  • [21] R. Guillaume, C. Thierry, and B. Grabot. Modelling of ill-known requirements and integration in production planning. Production Planning and Control, 22:336–352, 2011.
  • [22] R. Guillaume, C. Thierry, and P. Zieliński. Robust material requirement planning with cumulative demand under uncertainty. International Journal of Production Research, 55:6824–6845, 2017.
  • [23] N. Halman, D. Klabjan, C. Li, J. B. Orlin, and D. Simchi-Levi. Fully Polynomial Time Approximation Schemes for Stochastic Dynamic Programs. SIAM Journal on Discrete Mathematics, 28:1725–1796, 2014.
  • [24] N. Halman, D. Klabjan, M. Mostagir, J. B. Orlin, and D. Simchi-Levi. A fully polynomial-time approximation scheme for single-item stochastic inventory control with discrete demand. Mathematics of Operations Research, 34:674–685, 2009.
  • [25] R. Hassin. Approximation Schemes for the Restricted Shortest Path Problem. Mathematics of Operations Research, 17:36–42, 1992.
  • [26] A. Jamalnia, J.-B. Yang, A. Feili, D.-L. Xu, and G. Jamali. Aggregate production planning under uncertainty: a comprehensive literature survey and future research directions. The International Journal of Advanced Manufacturing Technology, 102:159–181, 2019.
  • [27] B. Korte and J. Vygen. Combinatorial Optimization: Theory and Algorithms, Algorithms and Combinatorics. Springer-Verlag, 2012.
  • [28] P. Kouvelis and G. Yu. Robust Discrete Optimization and its Applications. Kluwer Academic Publishers, 1997.
  • [29] H. L. Lee, V. Padmanabhan, and S. Whang. Information Distortion in a Supply Chain: The Bullwhip Effect. Management Science, 43:546–558, 1997.
  • [30] R. Levi, G. Perakis, and G. Romero. A continuous knapsack problem with separable convex utilities: Approximation algorithms and applications. Operations Research Letters, 42:367–373, 2014.
  • [31] R. Levi, R. Roundy, and D. B. Shmoys. Provably near-optimal sampling-based policies for stochastic inventory control models. Mathematical Methods of Operations Research, 32:821–839, 2007.
  • [32] B. Martos. Nonlinear programming theory and methods. Akadémiai Kiadó, Budapest, 1975.
  • [33] J. Mula, D. Peidro, and R. Poler. The effectiveness of a fuzzy mathematical programming approach for supply chain production planning with fuzzy demand. International Journal of Production Economics, 128:136–143, 2010.
  • [34] J. Mula, R. Poler, J. Garcia-Sabater, and F. C. Lario. Models for production planning under uncertainty: A review. International Journal of Production Economics, 103:271–285, 2006.
  • [35] E. Nasrabadi and J. B. Orlin. Robust optimization with incremental recourse. CoRR, abs/1312.4075, 2013.
  • [36] D. Peidro, J. Mula, R. Poler, and F. C. Lario. Quantitative models for supply chain planning under uncertainty: a review. The International Journal of Advanced Manufacturing Technology, 43:400–420, 2009.
  • [37] Y. Pochet and L. A. Wolsey. Production Planning by Mixed Integer Programming. Springer-Verlag, 2006.
  • [38] M. C. Santos, A. Agra, and M. Poss. Robust inventory theory with perishable products. Annals of Operations Research, 289:473–494, 2020.
  • [39] C. R. Sox. Dynamic lot sizing with random demand and non-stationary costs. Operations Research Letters, 20(4):155–164, 1997.
  • [40] H. Tempelmeier. Stochastic lot sizing problems. In J. M. Smith and B. Tan, editors, Handbook of Stochastic Models and Analysis of Manufacturing System Operations, pages 313–344. Springer New York, 2013.
  • [41] R.-C. Wang and T.-F. Liang. Applying possibilistic linear programming to aggregate production planning. International Journal of Production Economics, 98:328–341, 2005.
  • [42] C. Wei, Y. Li, and X. Cai. Robust optimal policies of production and inventory with uncertain returns and demand. International Journal of Production Economics, 134:357–367, 2011.
  • [43] J. H. Yeung, W. C. K. Wong, and L. Ma. Parameters affecting the effectiveness of MRP systems: A review. International Journal of Production Research, 36:313–332, 1998.
  • [44] B. Zeng and L. Zhao. Solving two-stage robust optimization problems using a column and constraint generation method. Operations Research Letters, 41:457–461, 2013.
  • [45] M. Zhang. Two-stage minimax regret robust uncapacitated lot-sizing problems with demand uncertainty. Operations Research Letters, 39:342–345, 2011.