
Kameng Nip, Department of Mathematical Sciences, Tsinghua University, Beijing, 100084, China. E-mail: njm11@mails.tsinghua.edu.cn
Zhenbo Wang (corresponding author), Department of Mathematical Sciences, Tsinghua University, Beijing, 100084, China. E-mail: zwang@math.tsinghua.edu.cn
Fabrice Talla Nobibon, Postdoctoral researcher for Research Foundation–Flanders; ORSTAT, Faculty of Economics and Business, KU Leuven, Belgium. E-mail: Fabrice.TallaNobibon@kuleuven.be
Roel Leus, ORSTAT, Faculty of Economics and Business, KU Leuven, Belgium. E-mail: Roel.Leus@kuleuven.be

A Combination of Flow Shop Scheduling and the Shortest Path Problem
(A preliminary version of this paper has appeared in the Proceedings of the 19th Annual International Computing and Combinatorics Conference (COCOON'13), LNCS, vol. 7936, pp. 680–687.)

Kameng Nip    Zhenbo Wang    Fabrice Talla Nobibon    Roel Leus
(Received: date / Accepted: date)
Abstract

This paper studies a combinatorial optimization problem which is obtained by combining the flow shop scheduling problem and the shortest path problem. The objective of the obtained problem is to select a subset of jobs that constitutes a feasible solution to the shortest path problem, and to execute the selected jobs on the flow shop machines to minimize the makespan. We argue that this problem is NP-hard even if the number of machines is two, and is NP-hard in the strong sense for the general case. We propose an intuitive approximation algorithm for the case where the number of machines is an input, and an improved approximation algorithm for a fixed number of machines.

Keywords:
approximation algorithm; combination of optimization problems; flow shop scheduling; shortest path

1 Introduction

Combinatorial optimization is an active field in operations research and theoretical computer science. Historically, independent lines of research, such as machine scheduling, bin packing, the travelling salesman problem and network flows, developed separately. With the rapid development of science and technology, manufacturing, service and management are often integrated, and decision-makers have to deal with systems involving characteristics of more than one well-known combinatorial optimization problem. To the best of our knowledge, the combination of optimization problems has received little attention in the literature.

Bodlaender et al. (1994) studied parallel machine scheduling with incompatible jobs, in which two incompatible jobs cannot be processed by the same machine, and the objective is to minimize the makespan. This problem can be considered as a combination of parallel machine scheduling and the coloring problem. Wang and Cui (2012) studied a combination of parallel machine scheduling and the vertex cover problem. The goal is to select a subset of jobs that forms a vertex cover of a given graph and to execute these jobs on m identical parallel machines to minimize the makespan. They proposed a (3-\frac{2}{m+1})-approximation algorithm for this problem. Wang et al. (2013) have investigated a generalization of the above problem that combines the uniformly related parallel machine scheduling problem and a generalized covering problem. They proposed several approximation algorithms and mentioned as future research other combinations of well-known combinatorial optimization problems. This is the core motivation for this work.

Let us consider the following scenario. We aim at building a railway between two specific cities. The railway needs to cross several adjacent cities, which is determined by a map (a graph). The processing time of manufacturing the rail track for each pair of cities varies between the pairs. Manufacturing a rail track between two cities in the graph is associated with a job. The decision-maker needs to make two main decisions: (1) choosing a path to connect the two cities, and (2) deciding the schedule of manufacturing the rail tracks on this path in the factory. In addition, the manufacturing of rail tracks follows several working stages, each stage must start after the completion of the preceding stages, and we assume that there is only one machine for each stage. We wish to accomplish the manufacturing as early as possible, i.e. minimize the last completion time; this is a standard flow shop scheduling problem. How should a decision-maker choose a feasible path such that the corresponding jobs can be manufactured as early as possible? This problem combines the structure of flow shop scheduling and the shortest path problem. Following the framework introduced by Wang et al. (2013), we can regard our problem as a combination of those two problems.

Finding a simple path between two vertices in a directed graph is a basic problem that can be solved in polynomial time (Ahuja et al., 1993). Furthermore, if we want to find a path under a certain objective, various optimization problems arise. The most famous one is the classic shortest path problem, which can be solved in polynomial time if the graph contains no negative cycle, and is otherwise NP-hard (Ahuja et al., 1993). Moreover, many optimization problems have a similar structure. For instance, the min-max shortest path problem (Kouvelis and Yu, 1997) considers multiple weights associated with each arc, and the objective is to find a directed path between two specific vertices such that the maximum among all its total weights is minimized. The multi-objective shortest path problem (Warburton, 1987) also has multiple weights, but the objective is to find a Pareto optimal path between two specific vertices with respect to the given objectives.

Flow shop scheduling is one of the three basic models of multi-stage scheduling (the others are open shop scheduling and job shop scheduling). Flow shop scheduling with the objective of minimizing the makespan is usually denoted by Fm||C_{max}, where m is the number of machines. In one of the earliest papers on scheduling problems, Johnson (1954) showed that F2||C_{max} can be solved in O(n\log n) time, where n is the number of jobs. On the other hand, Garey et al. (1976) proved that Fm||C_{max} is strongly NP-hard for m\geq 3.

The contributions of this paper include: (1) a formal description of the considered problem, (2) the argument that the considered problem is NP-hard even if m=2, and NP-hard in the strong sense if m\geq 3, and (3) several approximation algorithms.

The rest of the paper is organized as follows. In Section 2, we first give a formal definition of the problem stated above, then we briefly review flow shop scheduling and some shortest path problems, and introduce some related algorithms that will be used in the subsequent sections. In Section 3, we study the computational complexity of the combined problem. Section 4 provides several approximation algorithms for this problem. Some concluding remarks are provided in Section 5.

2 Preliminaries

2.1 Problem Description

We first give a formal definition of our problem, which is a combination of the flow shop scheduling problem and the shortest path problem.

Definition 1

Given a directed graph G=(V,A) with two distinguished vertices s,t\in V and m flow shop machines, each arc a_{j}\in A corresponds with a job J_{j}\in J with processing times (p_{1j}, p_{2j}, \cdots, p_{mj}). The Fm|\mathrm{shortest}~\mathrm{path}|C_{max} problem is to find an s-t directed path P, and to schedule the jobs of J_{P} on the flow shop machines to yield the minimum makespan over all P, where J_{P} denotes the set of jobs corresponding to the arcs in P.

The considered problem is a combination of flow shop scheduling and the classic shortest path problem, mainly because the two optimization problems are special cases of this problem. For example, consider the following instances with m=2. If there is a unique path from s to t in G, as shown in the left of Fig. 1, our problem is the two-machine flow shop scheduling problem. If all the processing times on the second machine are zero, as shown in the right of Fig. 1, then our problem is equivalent to the classic shortest path problem with respect to the processing times on the first machine. These examples illustrate that these two optimization problems are inherent in the considered problem.

Figure 1: Special cases of our problem

In this paper, we will use the results of some optimization problems that have a similar structure with the classic shortest path problem. We introduce the following generalized shortest path problem.

Definition 2

Given a directed graph G=(V,A,w^{1},\cdots,w^{K}) and two distinguished vertices s,t\in V with |A|=n. Each arc a_{j}\in A, j=1,\cdots,n, is associated with K weights w^{1}_{j},\cdots,w^{K}_{j}, and we define the vector w^{k}=(w^{k}_{1},w^{k}_{2},\cdots,w^{k}_{n}) for k=1,2,\cdots,K. The goal of our shortest path problem SP(G,s,t,f) is to find an s-t directed path P that minimizes f(w^{1},w^{2},\cdots,w^{K},x), in which f is a given objective function and x\in\{0,1\}^{n} contains the decision variables such that x_{j}=1 if and only if a_{j}\in P.

For ease of exposition, we use SP instead of SP(G,s,t,f) when there is no danger of confusion. Notice that SP is a generalization of various shortest path problems. For instance, if we consider K=1 and f(w^{1},x)=w^{1}\cdot x, where \cdot is the dot product, this problem is the classic shortest path problem. If K=2 and f(w^{1},w^{2},x)=\min\{w^{1}\cdot x:w^{2}\cdot x\leq W\}, where W is a given number, this problem is the shortest weight-constrained path problem (Garey and Johnson, 1979). If f(w^{1},w^{2},\cdots,w^{K},x)=\max\{w^{1}\cdot x,w^{2}\cdot x,\cdots,w^{K}\cdot x\}, the problem is the min-max shortest path problem (Kouvelis and Yu, 1997). In the following sections, we will analyze our combined problem by setting appropriate weights and an objective function in SP.
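To make the role of f concrete, the following small Python snippet spells out the classic and the min-max objectives; the encoding of the weight vectors and of the incidence vector x as Python lists is our own illustrative assumption, not part of the formal model.

```python
# Illustrative objective functions for SP(G, s, t, f); w is a list of K weight
# vectors and x is the 0/1 incidence vector of a path (sketch only).
def dot(w_k, x):
    return sum(wi * xi for wi, xi in zip(w_k, x))

def f_classic(w, x):          # K = 1: classic shortest path objective
    return dot(w[0], x)

def f_minmax(w, x):           # min-max shortest path objective
    return max(dot(w_k, x) for w_k in w)

# Example: two arcs with two weight vectors; the path uses only the first arc.
w = [[2, 5], [4, 1]]
x = [1, 0]
print(f_classic(w, x), f_minmax(w, x))   # 2 and 4
```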

2.2 Algorithms for Flow Shop Scheduling Problems

First, we introduce some trivial bounds for flow shop scheduling. Denote by C_{max} the makespan of an arbitrary flow shop schedule with job set J. A feasible shop schedule is called dense when any machine is idle if and only if there is no job that can be processed at that time on that machine. For an arbitrary dense flow shop schedule, we have

C_{max}\geq\max_{i\in\{1,\cdots,m\}}\left\{\sum_{J_{j}\in J}p_{ij}\right\}, (1)

and

C_{max}\leq\sum_{J_{j}\in J}\sum^{m}_{i=1}p_{ij}. (2)

For each job, we have

C_{max}\geq\sum^{m}_{i=1}p_{ij},\qquad\forall J_{j}\in J. (3)

In flow shop scheduling problems, a schedule is called a permutation schedule if all jobs are processed in the same order on each machine. Conway et al. (1967) proved that there always exists a permutation schedule which is optimal for F2||C_{max} and F3||C_{max}. In a permutation schedule, the critical job and the critical path are important concepts for the analysis of the related algorithms.

Suppose we are given a job set J with n jobs. Let \sigma=(\sigma(1),\cdots,\sigma(n)) be a permutation of (1,\cdots,n) for a three-machine (or two-machine) flow shop, and let \{J_{\sigma(1)},J_{\sigma(2)},\cdots,J_{\sigma(n)}\} be the corresponding schedule. For simplicity of notation, we denote the permutation and the schedule by (1,2,\cdots,n) and \{J_{1},J_{2},\cdots,J_{n}\} respectively. A directed graph is defined as follows. We define a vertex (i,j) with an associated weight p_{ij} for each job J_{j} and each machine M_{i}, for i=1,2,3 (or i=1,2) and j=1,2,\cdots,n. We include arcs leading from each vertex (i,j) towards (i+1,j), and from (i,j) towards (i,j+1) for j=1,\cdots,n-1. The total weight of a maximum-weight path from (1,1) to (3,n) (or (2,n)), which is called a critical path, is equal to the makespan of the corresponding permutation schedule. For the three-machine case, the critical jobs with respect to \sigma are defined as the jobs J_{u} and J_{v} such that (1,u), (2,u), (2,v) and (3,v) appear in the critical path, i.e. the jobs J_{u} and J_{v} satisfy

C_{max}=\sum^{u}_{j=1}p_{1j}+\sum^{v}_{j=u}p_{2j}+\sum^{n}_{j=v}p_{3j}. (4)

For the two-machine case, the critical job with respect to \sigma is defined as the job J_{\nu} such that (1,\nu) and (2,\nu) appear in the critical path, i.e. the job J_{\nu} satisfies

C_{max}=\sum^{\nu}_{j=1}p_{1j}+\sum^{n}_{j=\nu}p_{2j}. (5)
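For concreteness, the makespan of a permutation schedule, and hence the critical-path value above, can be computed by the standard completion-time recursion C(i,j)=\max\{C(i,j-1),C(i-1,j)\}+p_{ij}. The following Python sketch assumes jobs are given as tuples of per-machine processing times; it is an illustration under that assumption, not one of the paper's algorithms.

```python
# Makespan of a permutation flow shop schedule; jobs[j][i] is the processing
# time of job j on machine i (sketch with an assumed job encoding).
def permutation_makespan(jobs, order=None):
    if not jobs:
        return 0
    if order is None:
        order = range(len(jobs))
    m = len(jobs[0])
    completion = [0] * m            # completion time of the last job on each machine
    for j in order:
        for i in range(m):
            start = max(completion[i], completion[i - 1] if i > 0 else 0)
            completion[i] = start + jobs[j][i]
    return completion[-1]

# Example: two jobs on three machines, processed in the given order.
print(permutation_makespan([(2, 1, 3), (1, 2, 1)]))   # -> 7
```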

Johnson (1954) proposed a sequencing rule for F2||C_{max}, which is one of the oldest results in the scheduling literature and is commonly referred to as Johnson's rule.

Algorithm 1 Johnson’s rule
1: Set S_{1}=\{J_{j}\in J\,|\,p_{1j}\leq p_{2j}\} and S_{2}=\{J_{j}\in J\,|\,p_{1j}>p_{2j}\}.
2: Process the jobs in S_{1} first in non-decreasing order of p_{1j}, and then schedule the jobs in S_{2} in non-increasing order of p_{2j}; ties may be broken arbitrarily.

In Johnson's rule, jobs are scheduled as early as possible. This rule produces a permutation schedule, and Johnson showed that this schedule is optimal. Notice that this schedule is obtained in O(n\log n) time.
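A minimal Python sketch of Algorithm 1 is given below; the representation of jobs as (p_{1j}, p_{2j}) pairs is our own assumption.

```python
# Johnson's rule for F2||Cmax (Algorithm 1); each job is a pair (p1, p2) of
# processing times on the two machines (assumed encoding).
def johnson_order(jobs):
    """Return job indices (0-based) in the order produced by Johnson's rule."""
    s1 = [j for j, (p1, p2) in enumerate(jobs) if p1 <= p2]
    s2 = [j for j, (p1, p2) in enumerate(jobs) if p1 > p2]
    s1.sort(key=lambda j: jobs[j][0])                    # non-decreasing p1
    s2.sort(key=lambda j: jobs[j][1], reverse=True)      # non-increasing p2
    return s1 + s2

# Example: three jobs; the returned order is optimal for F2||Cmax.
jobs = [(4, 1), (1, 4), (3, 3)]
print(johnson_order(jobs))   # [1, 2, 0]
```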

For the general problem Fm||C_{max}, Gonzalez and Sahni (1978) first presented an \lceil\frac{m}{2}\rceil-approximation algorithm that runs in O(mn\log{n}) time by solving \lceil\frac{m}{2}\rceil two-machine flow shop scheduling problems. Röck and Schmidt (1982) proposed an alternative approach by reducing the original problem to an artificial two-machine flow shop problem; this approach is called the machine aggregation heuristic. They obtained a permutation by solving the artificial problem in O(mn+n\log n) time, and proved that it has the same performance guarantee of \lceil\frac{m}{2}\rceil. Based on the machine aggregation heuristic, Chen et al. (1996) proposed an algorithm for F3||C_{max} with an improved performance guarantee of \frac{5}{3}. In the same paper, they also modified Gonzalez and Sahni's algorithm for the case where m is odd, by partitioning the machines into \frac{m-3}{2} two-machine flow shop scheduling problems and one three-machine flow shop scheduling problem, which is solved by their \frac{5}{3}-approximation algorithm. The modified algorithm has the same performance ratio \frac{m}{2} if m is even, and an improved ratio \frac{m}{2}+\frac{1}{6} if m is odd. It is known that a PTAS exists for Fm||C_{max} (Hall, 1998).

We refer to the aggregation heuristic of Röck and Schmidt (1982) as the RS algorithm, and we will use it later to derive an algorithm for our combined problem. The RS algorithm can be described as follows for the three-machine case.

Algorithm 2 The RS algorithm for F3||C_{max}
1: Construct an artificial two-machine flow shop scheduling problem with processing times a_{j}=p_{1j}+p_{2j} on the first machine and b_{j}=p_{2j}+p_{3j} on the second machine for J_{j}\in J. Implement Johnson's rule to obtain an optimal permutation \sigma for the two-machine problem.
2: Assign the jobs on the three machines according to \sigma as early as possible. Denote the makespan of this permutation schedule as C_{max}.
3: return \sigma, C_{max}.

The running time of this algorithm is O(n\log n), which is the same as Johnson's rule. Notice that the algorithm returns a permutation schedule, and hence the resulting makespan C_{max} satisfies equality (4).
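The following Python sketch illustrates Algorithm 2, assuming jobs are given as triples (p_{1j}, p_{2j}, p_{3j}); Johnson's rule on the artificial two-machine jobs is restated inline, and the makespan is evaluated with the completion-time recursion from above. It is a hedged illustration of the heuristic, not the authors' code.

```python
# The RS machine-aggregation heuristic for F3||Cmax (Algorithm 2).
def rs_order(jobs):
    """Aggregate machines 1+2 and 2+3, then apply Johnson's rule."""
    agg = [(p1 + p2, p2 + p3) for (p1, p2, p3) in jobs]   # artificial 2-machine jobs
    s1 = sorted((j for j, (a, b) in enumerate(agg) if a <= b), key=lambda j: agg[j][0])
    s2 = sorted((j for j, (a, b) in enumerate(agg) if a > b),
                key=lambda j: agg[j][1], reverse=True)
    return s1 + s2

def makespan(jobs, order):
    """Completion-time recursion for the permutation schedule `order`."""
    m = len(jobs[0])
    c = [0] * m
    for j in order:
        for i in range(m):
            c[i] = max(c[i], c[i - 1] if i > 0 else 0) + jobs[j][i]
    return c[-1]

jobs = [(3, 1, 2), (1, 2, 3), (2, 2, 1)]
sigma = rs_order(jobs)
print(sigma, makespan(jobs, sigma))   # e.g. [1, 0, 2] 9
```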

2.3 Algorithms for Shortest Path Problems

In this paper, we will use the following two results on shortest path problems. The first one is the well-known Dijkstra's algorithm, which solves the classic shortest path problem with nonnegative edge weights in O(|V|^{2}) time (Dijkstra, 1959). The second one is an FPTAS for the min-max shortest path problem, presented by Aissi, Bazgan and Vanderpooten (2006). Kouvelis and Yu (1997) first proposed min-max criteria for several problems, including the shortest path problem. Aissi, Bazgan and Vanderpooten (2006) studied the computational complexity and proposed several approximation schemes. The min-max shortest path problem, with K weights w^{1}_{j},\cdots,w^{K}_{j} associated with each arc a_{j}, is to find a path P between two specific vertices that minimizes \max_{k\in\{1,\cdots,K\}}\sum_{a_{j}\in P}w^{k}_{j}. It was shown that this problem is NP-hard even for K=2, and that an FPTAS exists if K is a fixed number (Warburton, 1987; Aissi, Bazgan and Vanderpooten, 2006). The algorithm of Aissi, Bazgan and Vanderpooten (2006), referred to as the ABV algorithm in this paper, is based on dynamic programming and scaling techniques. The following result implies that the ABV algorithm is an FPTAS if K is a constant.

Theorem 2.1 (Aissi, Bazgan and Vanderpooten (2006))

Given an arbitrary positive value \epsilon>0, in a given directed graph with K nonnegative weights associated with each arc, a directed path P between two specific vertices can be found by the ABV algorithm with the property

\max_{i\in\{1,2,\cdots,K\}}\left\{\sum_{a_{j}\in P}w^{i}_{j}\right\}\leq(1+\epsilon)\max_{i\in\{1,2,\cdots,K\}}\left\{\sum_{a_{j}\in P^{\prime}}w^{i}_{j}\right\}

for any other path P^{\prime} between the two specific vertices, and the running time is O(|A||V|^{K+1}/\epsilon^{K}).
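For intuition about the objective that the ABV algorithm approximates, the following exhaustive Python sketch enumerates all simple s-t paths and returns a min-max optimal one. It runs in exponential time and is only a stand-in for tiny illustrative instances; it is not the ABV algorithm, and the arc-list encoding is our own assumption.

```python
# Exhaustive min-max s-t path search (exponential time, illustration only).
def minmax_path_bruteforce(n_vertices, arcs, s, t):
    adj = [[] for _ in range(n_vertices)]
    for idx, (u, v, w) in enumerate(arcs):      # w = (w^1_j, ..., w^K_j)
        adj[u].append((v, idx))
    best = (float('inf'), None)                 # (max total weight, arc indices)
    def dfs(u, visited, chosen):
        nonlocal best
        if u == t:
            K = len(arcs[0][2])
            totals = [sum(arcs[j][2][k] for j in chosen) for k in range(K)]
            best = min(best, (max(totals), list(chosen)))
            return
        for v, idx in adj[u]:
            if v not in visited:
                visited.add(v)
                chosen.append(idx)
                dfs(v, visited, chosen)
                chosen.pop()
                visited.remove(v)
    dfs(s, {s}, [])
    return best

# Example with K = 2 weights per arc: the two-arc path has max total 2.
print(minmax_path_bruteforce(3, [(0, 1, (2, 0)), (1, 2, (0, 2)), (0, 2, (3, 3))], 0, 2))
```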

3 Computational Complexity of Fm|\mathrm{shortest}~\mathrm{path}|C_{max}

In this section, we study the computational complexity of our problem. First, it is straightforward that Fm|\mathrm{shortest}~\mathrm{path}|C_{max} is NP-hard in the strong sense if m\geq 3, as a consequence of the fact that Fm||C_{max} is a special case of our problem.

On the other hand, although F2||C_{max} and the classic shortest path problem are polynomially solvable, we argue that F2|\mathrm{shortest}~\mathrm{path}|C_{max} is NP-hard. We prove this result by a reduction from the NP-complete problem partition (Garey and Johnson, 1979). Our proof is similar to the well-known NP-hardness proof for the shortest weight-constrained path problem (Batagelj et al., 2000).

Theorem 3.1

Fm|\mathrm{shortest}~\mathrm{path}|C_{max} is NP-hard even if m=2, and is NP-hard in the strong sense for m\geq 3.

Figure 2: The reduction from partition to F2|\mathrm{shortest}~\mathrm{path}|C_{max}
Proof

We only need to prove the first part. It is easy to see that the decision version of F2|\mathrm{shortest}~\mathrm{path}|C_{max} belongs to NP. Consider an arbitrary instance of partition with set S=\{a_{1},\cdots,a_{n}\}, where each element a_{k} has size s(a_{k})\in\mathbb{Z}^{+}, and let C=\sum_{a\in S}s(a)/2. We now construct the directed graph G=(V,A,W) and the corresponding jobs. The graph has n+1 vertices v_{0},v_{1},\cdots,v_{n}, and each pair (v_{k},v_{k+1}), k=0,1,\cdots,n-1, is joined by two parallel arcs (jobs) with processing times (s(a_{k+1}),0) and (0,s(a_{k+1})) respectively, both leading from vertex v_{k} towards v_{k+1} (see the left of Fig. 2). We wish to find the jobs corresponding to a path from v_{0} to v_{n}. It is not difficult to check that there is a feasible schedule with makespan not more than C if and only if there is a partition of set S (see the right of Fig. 2). Therefore, the decision version of F2|\mathrm{shortest}~\mathrm{path}|C_{max} is NP-complete. \Box
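The construction used in the proof can be sketched as follows; the Python encoding of arcs and jobs is hypothetical and only meant to make the reduction concrete.

```python
# Sketch of the reduction in Theorem 3.1: from a partition instance
# {s_1, ..., s_n} build the graph with two parallel jobs per arc.
def build_instance(sizes):
    arcs = []   # (tail, head, (p1, p2))
    for k, s in enumerate(sizes):
        arcs.append((k, k + 1, (s, 0)))   # all work on the first machine
        arcs.append((k, k + 1, (0, s)))   # all work on the second machine
    return arcs   # a feasible solution is any path from vertex 0 to vertex len(sizes)

# A partition of {3, 1, 2, 2} with target C = 4 exists (e.g. {3,1} / {2,2}),
# hence some v0-vn path admits a schedule with makespan 4.
print(build_instance([3, 1, 2, 2]))
```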

4 Approximation Algorithms

4.1 An Intuitive Algorithm

To start off, we propose an intuitive algorithm for Fm|\mathrm{shortest}~\mathrm{path}|C_{max}. The main idea of this algorithm is to set K=1 and f=w^{1}\cdot x in SP, i.e. to find a classic shortest path with one specific set of weights. An intuitive setting is w^{1}_{j}=\sum^{m}_{i=1}p_{ij} for each arc. We find the shortest path with respect to w^{1} by Dijkstra's algorithm, and then schedule the corresponding jobs on the flow shop machines. We refer to this algorithm as the FD algorithm. The subsequent analysis will show that the performance ratio of the FD algorithm remains the same for an arbitrarily selected flow shop scheduling algorithm that provides a dense schedule, regardless of the performance ratio of that algorithm.

Algorithm 3 The FD algorithm
1: Find a shortest path in G with weights w^{1}_{j}:=\sum^{m}_{i=1}p_{ij} by Dijkstra's algorithm. For the returned path P, construct the job set J_{P}.
2: Obtain a dense schedule of the jobs of J_{P} by an arbitrary flow shop scheduling algorithm. Let \sigma be the returned job schedule and C_{max} the returned makespan, and denote the job set J_{P} by S.
3: return S, \sigma and C_{max}
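A self-contained Python sketch of the FD algorithm is given below. The arc-list encoding of the graph and the helper names are our own assumptions; any dense flow shop scheduling routine could replace the simple completion-time recursion used here for step 2.

```python
import heapq

# Sketch of the FD algorithm (Algorithm 3); arcs are given as (tail, head, job)
# with job = (p_1j, ..., p_mj). Assumed encoding, illustration only.
def fd_algorithm(n_vertices, arcs, s, t):
    # Step 1: Dijkstra with weights w_j = sum_i p_ij.
    adj = [[] for _ in range(n_vertices)]
    for idx, (u, v, job) in enumerate(arcs):
        adj[u].append((v, sum(job), idx))
    dist = [float('inf')] * n_vertices
    pred = [None] * n_vertices             # index of the arc entering each vertex
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w, idx in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                pred[v] = idx
                heapq.heappush(heap, (d + w, v))
    # Recover the jobs on the returned s-t path.
    path_jobs, v = [], t
    while pred[v] is not None:
        u, _, job = arcs[pred[v]]
        path_jobs.append(job)
        v = u
    # Step 2: any dense schedule suffices; here we keep the recovered order.
    m = len(path_jobs[0])
    c = [0] * m
    for job in path_jobs:
        for i in range(m):
            c[i] = max(c[i], c[i - 1] if i > 0 else 0) + job[i]
    return path_jobs, c[-1]

# Example: two parallel s-t arcs on two machines.
arcs = [(0, 1, (2, 1)), (0, 1, (1, 1))]
print(fd_algorithm(2, arcs, 0, 1))   # picks the (1, 1) job, makespan 2
```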

It is straightforward that the total running time of the FD algorithm is O(|V|^{2}+T(m,n)), where T(m,n) is the running time of the flow shop scheduling algorithm. Therefore, if the flow shop scheduling algorithm we use runs in polynomial time, then the FD algorithm is a polynomial-time algorithm even if m is part of the input. Before we analyze the performance of this algorithm, we first introduce some notation. Let J^{*} be the set of jobs in an optimal solution and C^{*}_{max} the corresponding makespan, and let S and C_{max} be those returned by the FD algorithm respectively.

Theorem 4.1

The FD algorithm is m-approximate, and this bound is tight.

Proof

By the lower bound (1) introduced in Section 2.2, we have

mC^{*}_{max}\geq\sum_{J_{j}\in J^{*}}\sum^{m}_{i=1}p_{ij}. (6)

Since the returned path is a shortest path with respect to w^{1}, by (2) we have

C_{max}\leq\sum_{J_{j}\in S}\sum^{m}_{i=1}p_{ij}=\sum_{J_{j}\in S}w^{1}_{j}\leq\sum_{J_{j}\in J^{*}}w^{1}_{j}=\sum_{J_{j}\in J^{*}}\sum^{m}_{i=1}p_{ij}. (7)

By combining (6) with (7), it follows that C_{max}\leq mC^{*}_{max}.

Figure 3: Tight example for the FD algorithm

Consider the instance shown in Fig. 3. We wish to find a path from vertex v_{0} to v_{m}. The makespan returned by the FD algorithm is C_{max}=m with the arc (v_{0},v_{m}), whereas the makespan of an optimal schedule is C^{*}_{max}=1+\epsilon with the other arcs. Notice that there is only one job in the returned solution, hence the returned makespan remains m regardless of the algorithm used for the flow shop scheduling. The bound is tight because \frac{C_{max}}{C^{*}_{max}}\rightarrow m when \epsilon\rightarrow 0. \Box

4.2 An Improved Algorithm for Fixed m

In this subsection, we assume that m, the number of flow shop machines, is a constant. Instead of finding an optimal shortest path from s to t with respect to specific weights, we implement the ABV algorithm mentioned in Section 2.3, which returns a (1+\epsilon)-approximate solution for the min-max shortest path problem. In other words, we will set K=m and use the objective function f=\max\{w^{1}\cdot x,w^{2}\cdot x,\cdots,w^{K}\cdot x\} in SP, where the weights w^{1},w^{2},\cdots,w^{K} will be decided later.

Inspired by the work of Gonzalez and Sahni (1978) and Chen et al. (1996), we proceed as follows: after obtaining a feasible path by the ABV algorithm, we schedule the corresponding jobs by partitioning the m machines into several groups. Denote the machines by M_{i}, i=1,\cdots,m (indexed following the routing of the flow shop). More specifically, we partition the m machines into m_{3} groups of three consecutive machines in the routing M_{3i-2}, M_{3i-1}, M_{3i} (i=1,\cdots,m_{3}), m_{2} groups of two consecutive machines in the routing M_{3m_{3}+2i-1}, M_{3m_{3}+2i} (i=1,\cdots,m_{2}), and m_{1} individual machines M_{3m_{3}+2m_{2}+i} (i=1,\cdots,m_{1}), in which the values of m_{1}, m_{2}, m_{3} will be derived later. For the three-machine subproblems on M_{3i-2}, M_{3i-1} and M_{3i} (i=1,\cdots,m_{3}), we implement the RS algorithm to obtain the permutations. For the two-machine subproblems on M_{3m_{3}+2i-1} and M_{3m_{3}+2i} (i=1,\cdots,m_{2}), we implement Johnson's rule to obtain the permutations. The permutations for the single-machine subproblems are arbitrary. Then we form a schedule for the original m-machine problem, in which the sequence of jobs on each machine M_{i} is the permutation obtained above, and the jobs are executed as early as possible. Notice that the property that an optimal schedule is always a permutation schedule only holds for F2||C_{max} and F3||C_{max}, and the performance guarantee relies on the properties of critical jobs, as we will see in the subsequent analysis. The reason why we partition the m machines in this particular fashion is related to this fact, as will be explained below.

The main idea of our algorithm is described as follows. We initially set the weights (w^{1}_{j},w^{2}_{j},\cdots,w^{m}_{j})=(p_{1j},p_{2j},\cdots,p_{mj}). The algorithm iteratively runs the ABV algorithm and the above partition scheduling algorithm (the values of m_{1},m_{2},m_{3} will be decided later) and adopts the following revision policy: if the current schedule contains a job whose total processing time is large enough with respect to the current makespan, we modify the weights of the arcs corresponding to such large jobs to (M,M,\cdots,M), where M is a sufficiently large number, and then mark these jobs. The algorithm terminates if no such job exists. Another termination condition is that a marked job appears in a current schedule. We return the schedule with the minimum makespan among all current schedules as the solution of the algorithm. We refer to this algorithm as the PAR algorithm. Notice that the weights of the arcs may vary in each iteration, whereas the processing times of the jobs remain the same throughout the algorithm.

Before we formally state the PAR algorithm, we first provide more details about the parameter choices. For m=2 and m=3, by following the subsequent analysis of the performance of this algorithm, one can verify that the best possible performance ratio is \frac{3}{2} and 2 respectively. An intuitive argument is that the best possible performance ratio for the general case of the PAR algorithm is \rho=m_{1}+\frac{3}{2}m_{2}+2m_{3}. For a given m, as m_{1},m_{2},m_{3} are nonnegative integers, our task is to minimize m_{1}+\frac{3}{2}m_{2}+2m_{3} such that m_{1}+2m_{2}+3m_{3}=m. A simple calculation yields the following result:

(m_{1},m_{2},m_{3})=\left\{\begin{array}{ll}(0,0,\frac{m}{3})&\mathrm{~if~}m\equiv 0\pmod{3},\\ (1,0,\frac{m-1}{3})&\mathrm{~if~}m\equiv 1\pmod{3},\\ (0,1,\frac{m-2}{3})&\mathrm{~if~}m\equiv 2\pmod{3},\end{array}\right. (11)

and

\rho=\left\{\begin{array}{ll}\frac{2m}{3}&\mathrm{~if~}m\equiv 0\pmod{3},\\ \frac{2m+1}{3}&\mathrm{~if~}m\equiv 1\pmod{3},\\ \frac{4m+1}{6}&\mathrm{~if~}m\equiv 2\pmod{3}.\end{array}\right. (15)
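As a sanity check, the partition (11) and the ratio (15) can be recovered by direct enumeration; the short Python sketch below is only illustrative.

```python
from math import inf

# Minimize m1 + 1.5*m2 + 2*m3 subject to m1 + 2*m2 + 3*m3 = m, m_i >= 0 integer.
def best_partition(m):
    best = (inf, None)
    for m3 in range(m // 3 + 1):
        for m2 in range((m - 3 * m3) // 2 + 1):
            m1 = m - 3 * m3 - 2 * m2
            rho = m1 + 1.5 * m2 + 2 * m3
            if rho < best[0]:
                best = (rho, (m1, m2, m3))
    return best

for m in (2, 3, 4, 5, 6):
    print(m, best_partition(m))
# m=2 -> rho=1.5 with (0,1,0); m=3 -> 2 with (0,0,1); m=5 -> 3.5 with (0,1,1), etc.
```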

In other words, the best way is to partition the machines in such a way that we have a maximum number of three-machine subsets. The pseudocode of the PAR algorithm is described by Algorithm 4.

Algorithm 4 The PAR algorithm
1: Derive (m_{1},m_{2},m_{3}) and \rho using (11) and (15).
2: Initially, (w^{1}_{j},w^{2}_{j},\cdots,w^{m}_{j}):=(p_{1j},p_{2j},\cdots,p_{mj}) for each arc a_{j}\in A corresponding to J_{j}\in J.
3: Given \epsilon>0, implement the ABV algorithm to obtain a path P for SP, and construct the corresponding job set J_{P}.
4: Partition the m machines: m_{3} three-machine subsets (M_{3i-2},M_{3i-1},M_{3i}, i=1,\cdots,m_{3}); one two-machine subset (M_{m-1} and M_{m}) if m_{2}=1; one single-machine subset (M_{m}) if m_{1}=1.
5: Run the RS algorithm to obtain the permutations for the three-machine flow shops, and Johnson's rule to obtain the permutation for the two-machine flow shop. Let the sequence of the single-machine problem be arbitrary.
6: For the original problem, schedule the jobs of J_{P} according to those permutations on each machine as early as possible. Denote the returned makespan as C^{\prime}_{max}, and the job schedule as \sigma^{\prime}.
7: S:=J_{P}, \sigma:=\sigma^{\prime}, C_{max}:=C^{\prime}_{max}, D:=\emptyset, M:=(1+\epsilon)\sum_{J_{j}\in J}\sum_{i=1}^{m}p_{ij}+1.
8: while J_{P}\cap D=\emptyset and there exists a job J_{j} in J_{P} such that \sum_{i=1}^{m}p_{ij}>\frac{C^{\prime}_{max}}{\rho} do
9:   for all jobs with \sum_{i=1}^{m}p_{ij}>\frac{C^{\prime}_{max}}{\rho} in J\backslash D do
10:    (w^{1}_{j},w^{2}_{j},\cdots,w^{m}_{j}):=(M,M,\cdots,M), D:=D\cup\{J_{j}\}.
11:  end for
12:  Implement the ABV algorithm to obtain a path P for SP, and construct the corresponding job set J_{P}.
13:  Schedule the jobs of J_{P} by the rule described in lines 4–6.
14:  if C^{\prime}_{max}<C_{max} then
15:    S:=J_{P}, \sigma:=\sigma^{\prime}, C_{max}:=C^{\prime}_{max}.
16:  end if
17: end while
18: return S, \sigma and C_{max}.

It is easy to see that the PAR algorithm returns a feasible solution of Fm|\mathrm{shortest}~\mathrm{path}|C_{max}. We now discuss its computational complexity. Let the total number of jobs be |A|=n. Notice that the weights of the arcs can be revised at most n times. It is straightforward that the total running time of the PAR algorithm is O(n^{2}|V|^{m+1}/\epsilon^{m}+mn^{2}\log n), since there are at most n iterations, in each of which the running time of the ABV algorithm is O(n|V|^{m+1}/\epsilon^{m}) and the scheduling takes O(mn\log n) time. If m and \epsilon are fixed, then the PAR algorithm runs in polynomial time.

Let J^{*} be the set of jobs in an optimal solution and C^{*}_{max} the corresponding makespan, and let S and C_{max} be those returned by the PAR algorithm respectively. The following theorem shows the performance of the PAR algorithm.

Theorem 4.2

Given \epsilon>0, the worst-case ratio of the PAR algorithm for Fm|\mathrm{shortest}~\mathrm{path}|C_{max} is

(1+\epsilon)\rho=\left\{\begin{array}{ll}(1+\epsilon)\frac{2m}{3}&\mathrm{~if~}m\equiv 0\pmod{3},\\ (1+\epsilon)\frac{2m+1}{3}&\mathrm{~if~}m\equiv 1\pmod{3},\\ (1+\epsilon)\frac{4m+1}{6}&\mathrm{~if~}m\equiv 2\pmod{3}.\end{array}\right. (19)
Proof

We will distinguish two different cases: J^{*}\cap D\neq\emptyset and J^{*}\cap D=\emptyset.

Case 1

J^{*}\cap D\neq\emptyset

In this case, there is at least one job in the optimal solution, say J_{j}, such that C^{\prime}_{max}<\rho\sum_{i=1}^{m}p_{ij} holds for a current schedule with makespan C^{\prime}_{max} during the execution. Notice that the schedule returned by the PAR algorithm is the schedule with the minimum makespan among all current schedules, and hence we have C_{max}\leq C^{\prime}_{max}. It follows from (3) that

C_{max}\leq C^{\prime}_{max}<\rho\sum_{i=1}^{m}p_{ij}\leq\rho C^{*}_{max}. (20)
Case 2

J^{*}\cap D=\emptyset

Consider the last current schedule during the execution of the algorithm. Denote the corresponding job set and makespan as J^{\prime} and C^{\prime}_{max} respectively.

In this case, we first argue that J^{\prime}\cap D=\emptyset. Suppose that this is not the case, i.e. J^{\prime}\cap D\neq\emptyset. Since J^{*}\cap D=\emptyset, we know the weights of the arcs corresponding to the jobs in J^{*} have not been revised. Hence we have (1+\epsilon)\max_{i\in\{1,\cdots,m\}}\left\{\sum_{J_{j}\in J^{*}}w^{i}_{j}\right\}<M. Moreover, by the assumption J^{\prime}\cap D\neq\emptyset, we have \max_{i\in\{1,\cdots,m\}}\left\{\sum_{J_{j}\in J^{\prime}}w^{i}_{j}\right\}\geq M. By Theorem 2.1, the solution returned by the ABV algorithm satisfies

M\leq\max_{i\in\{1,\cdots,m\}}\left\{\sum_{J_{j}\in J^{\prime}}w^{i}_{j}\right\}\leq(1+\epsilon)\max_{i\in\{1,\cdots,m\}}\left\{\sum_{J_{j}\in J^{*}}w^{i}_{j}\right\}<M,

which leads to a contradiction.

Remember that in the PAR algorithm, the machines are divided into three kinds of groups, namely three-machine subsets together with at most one two-machine subset or one single machine. We solve these subproblems by the RS algorithm, Johnson's rule and an arbitrary algorithm respectively. It is clear that the sum of the makespans of those schedules is an upper bound for C^{\prime}_{max}. Denote C^{2}_{max} and J^{2}_{\nu} as the makespan and the critical job of the two-machine subproblem returned by Johnson's rule, and let the corresponding machines be M_{i_{2}},M_{i_{2}+1}. Denote C^{3}_{max} and J^{3}_{u}, J^{3}_{v} as the makespan and the critical jobs returned by the RS algorithm for the three-machine subproblem with the largest makespan, and let the corresponding machines be M_{i_{3}},M_{i_{3}+1},M_{i_{3}+2}. Denote the single machine as M_{i_{1}}, on which the total processing time is \sum_{J_{j}\in J^{\prime}}p_{i_{1}j}.

For the two-machine case, suppose that p_{i_{2},\nu}\geq p_{i_{2}+1,\nu}. Noticing that p_{i_{2},j}\geq p_{i_{2}+1,j} for the jobs scheduled after J_{\nu} in the schedule returned by Johnson's rule, it follows from (5) that

C^{2}_{max}\leq\sum_{J_{j}\in J^{\prime}}p_{i_{2},j}+p_{i_{2}+1,\nu}\leq\sum_{J_{j}\in J^{\prime}}p_{i_{2},j}+\frac{1}{2}(p_{i_{2},\nu}+p_{i_{2}+1,\nu}). (21)

For the three-machine case, we study two subcases corresponding with u=v and u<v for the critical jobs.

Subcase 2.1

u=v.

Consider the schedule with respect to C^{3}_{max}. We can rewrite (4) as

C^{3}_{max}=\sum^{u-1}_{j=1}p_{i_{3},j}+p_{i_{3},u}+p_{i_{3}+1,u}+p_{i_{3}+2,u}+\sum^{n}_{j=u+1}p_{i_{3}+2,j}. (22)

Suppose that the processing times of the critical job of the three-machine subproblem satisfy p_{i_{3},u}\geq p_{i_{3}+2,u}; thus we have p_{i_{3},u}+p_{i_{3}+1,u}\geq p_{i_{3}+1,u}+p_{i_{3}+2,u}, i.e. a_{u}\geq b_{u} for the artificial two-machine flow shop in the RS algorithm. Since the RS algorithm schedules the jobs by Johnson's rule, we have a_{j}\geq b_{j} for the jobs scheduled after J_{u}, i.e. p_{i_{3},j}\geq p_{i_{3}+2,j}. From (22), we have

C^{3}_{max}\leq\sum_{J_{j}\in J^{\prime}}p_{i_{3},j}+p_{i_{3},u}+p_{i_{3}+1,u}+p_{i_{3}+2,u}. (23)

Since J^{\prime}\cap D=\emptyset, we know the weights of the arcs corresponding to the jobs in the last current schedule have not been revised, and \sum_{i=1}^{m}p_{ij}\leq\frac{C^{\prime}_{max}}{\rho} for each job J_{j}\in J^{\prime}, since otherwise the algorithm would continue. Since J^{*}\cap D=\emptyset, the weights of the arcs corresponding to the jobs in the optimal schedule have not been revised either. Thus, it follows from (1), (3), Theorem 2.1, (21), (23) and the fact that the schedule returned by the PAR algorithm is the schedule with the minimum makespan among all current schedules that

\begin{split}C_{max}\leq C^{\prime}_{max}&\leq m_{3}C^{3}_{max}+m_{2}C^{2}_{max}+m_{1}\sum_{J_{j}\in J^{\prime}}p_{i_{1},j}\\ &\leq m_{3}\left(\sum_{J_{j}\in J^{\prime}}p_{i_{3},j}+p_{i_{3},u}+p_{i_{3}+1,u}+p_{i_{3}+2,u}\right)+m_{2}\left(\sum_{J_{j}\in J^{\prime}}p_{i_{2},j}+\frac{1}{2}(p_{i_{2},\nu}+p_{i_{2}+1,\nu})\right)+m_{1}\sum_{J_{j}\in J^{\prime}}p_{i_{1},j}\\ &\leq m_{3}\left((1+\epsilon)\max_{i\in\{1,\cdots,m\}}\left\{\sum_{J_{j}\in J^{*}}p_{ij}\right\}+\frac{C^{\prime}_{max}}{\rho}\right)+m_{2}\left((1+\epsilon)\max_{i\in\{1,\cdots,m\}}\left\{\sum_{J_{j}\in J^{*}}p_{ij}\right\}+\frac{C^{\prime}_{max}}{2\rho}\right)+m_{1}(1+\epsilon)\max_{i\in\{1,\cdots,m\}}\left\{\sum_{J_{j}\in J^{*}}p_{ij}\right\}\\ &\leq(m_{1}+m_{2}+m_{3})(1+\epsilon)C^{*}_{max}+\left(\frac{m_{2}}{2\rho}+\frac{m_{3}}{\rho}\right)C^{\prime}_{max}.\end{split}

Substituting (11) and (15) for m_{1}, m_{2}, m_{3} and \rho, a simple calculation yields

C_{max}\leq C^{\prime}_{max}\leq(1+\epsilon)\rho C^{*}_{max}. (24)
Subcase 2.2

u<v.

We also assume that p_{i_{3},u}\geq p_{i_{3}+2,u} and p_{i_{2},\nu}\geq p_{i_{2}+1,\nu}; an argument similar to that of the previous case shows that the jobs scheduled after J_{v} satisfy p_{i_{3},j}\geq p_{i_{3}+2,j}. Since u<v, it follows from (4) that

C^{3}_{max}\leq\sum_{J_{j}\in J^{\prime}}p_{i_{3},j}+\sum^{v}_{j=u}p_{i_{3}+1,j}\leq\sum_{J_{j}\in J^{\prime}}p_{i_{3},j}+\sum_{J_{j}\in J^{\prime}}p_{i_{3}+1,j}. (25)

Similarly, it is not difficult to show that

\begin{split}C_{max}\leq C^{\prime}_{max}&\leq m_{3}C^{3}_{max}+m_{2}C^{2}_{max}+m_{1}\sum_{J_{j}\in J^{\prime}}p_{i_{1},j}\\ &\leq m_{3}\left(\sum_{J_{j}\in J^{\prime}}p_{i_{3},j}+\sum_{J_{j}\in J^{\prime}}p_{i_{3}+1,j}\right)+m_{2}\left(\sum_{J_{j}\in J^{\prime}}p_{i_{2},j}+\frac{1}{2}(p_{i_{2},\nu}+p_{i_{2}+1,\nu})\right)+m_{1}\sum_{J_{j}\in J^{\prime}}p_{i_{1},j}\\ &\leq(m_{1}+m_{2}+2m_{3})(1+\epsilon)C^{*}_{max}+\frac{m_{2}}{2\rho}C^{\prime}_{max}.\end{split}

Substituting (11) and (15) for m_{1}, m_{2}, m_{3} and \rho, a simple calculation yields

C_{max}\leq C^{\prime}_{max}\leq(1+\epsilon)\rho C^{*}_{max}. (26)

For the cases where the last current schedule has critical jobs satisfying p_{i_{2},\nu}<p_{i_{2}+1,\nu} or p_{i_{3},u}<p_{i_{3}+2,u}, analogous arguments yield the same result.

Now we show that the performance ratio of the PAR algorithm cannot be less than \rho. First, we propose two instances for m=2 and m=3.

Figure 4: Example for m=2

If m=2, the performance ratio of the PAR algorithm is \frac{3}{2}(1+\epsilon). Consider the instance shown in Fig. 4. We wish to find a path from v_{1} to v_{4}. Notice that the ABV algorithm returns the path with arcs (v_{1},v_{2}) and (v_{2},v_{4}), and the corresponding makespan C^{\prime}_{max} by Johnson's rule is 3. All the corresponding jobs satisfy p_{1j}+p_{2j}=2\leq\frac{2}{3}C^{\prime}_{max}, and thus the algorithm terminates. Therefore, the makespan of the schedule returned by the PAR algorithm is C_{max}=3 (see the right schedule of Fig. 4). On the other hand, the optimal makespan is C^{*}_{max}=2+4\epsilon with arcs (v_{1},v_{3}), (v_{3},v_{2}) and (v_{2},v_{4}) (see the left schedule of Fig. 4). The worst-case ratio of the PAR algorithm cannot be less than \frac{3}{2}, as \frac{C_{max}}{C^{*}_{max}}\rightarrow 3/2 when \epsilon\rightarrow 0 for this instance.

Figure 5: Example for m=3

For the case where m=3, the performance ratio of the PAR algorithm is 2(1+\epsilon). Consider the instance shown in Fig. 5. We wish to find a path from vertex v_{1} to v_{6}. Notice that the ABV algorithm returns the path with arcs (v_{1},v_{4})\rightarrow(v_{4},v_{5})\rightarrow(v_{5},v_{6}). The makespan of the schedule returned by the RS algorithm is C^{\prime}_{max}=4. All the corresponding jobs satisfy p_{1j}+p_{2j}=2\leq\frac{1}{2}C^{\prime}_{max}, and thus the algorithm terminates. Therefore, the makespan of the schedule returned by the PAR algorithm is C_{max}=4 (see the right schedule of Fig. 5). On the other hand, the makespan of an optimal job schedule is C^{*}_{max}=2(1+\epsilon)^{2}, obtained by selecting the arcs (v_{1},v_{2}), (v_{2},v_{3}) and (v_{3},v_{6}) (see the left schedule of Fig. 5). The worst-case ratio of the PAR algorithm cannot be less than 2, as \frac{C_{max}}{C^{*}_{max}}\rightarrow 2 when \epsilon\rightarrow 0 for this instance.

Figure 6: Example for fixed m

By extending and modifying the above examples to the general case, the instance described in Fig. 6 can be used to show that the performance ratio of the PAR algorithm cannot be less than \rho. \Box

5 Conclusions

This paper has studied a problem combining flow shop scheduling and the shortest path problem. We have shown the hardness of this problem and presented several approximation algorithms. For future research, it would be interesting to find an approximation algorithm with a better performance ratio for this problem. The question whether F2|\mathrm{shortest}~\mathrm{path}|C_{max} is NP-hard in the strong sense is still open. One can also consider combinations of other combinatorial optimization problems. All these questions deserve further investigation.

Acknowledgments

This work has been supported by the Bilateral Scientific Cooperation Project BIL10/10 between Tsinghua University and KU Leuven.

References

  • Ahuja et al. (1993) Ahuja RK, Magnanti TL, Orlin JB (1993) Network Flows: Theory, Algorithms, and Applications. Prentice Hall, New Jersey
  • Aissi, Bazgan and Vanderpooten (2006) Aissi H, Bazgan C, Vanderpooten D (2006) Approximating min-max (regret) versions of some polynomial problems. In: Chen D, Pardalos PM (eds) COCOON 2006, LNCS, vol 4112, pp 428–438. Springer, Heidelberg
  • Batagelj et al. (2000) Batagelj V, Brandenburg FJ, Mendez P, Sen A (2000) The generalized shortest path problem. CiteSeer Archives
  • Bodlaender et al. (1994) Bodlaender HL, Jansen K, Woeginger GJ (1994) Scheduling with incompatible jobs. Disc Appl Math, 55: 219–232
  • Chen et al. (1996) Chen B, Glass CA, Potts CN, Strusevich VA (1996) A new heuristic for three-machine flow shop scheduling. Oper Res 44: 891–898
  • Conway et al. (1967) Conway RW, Maxwell WL, Miller LW (1967) Theory of Scheduling. Addison-Wesley, Reading
  • Dijkstra (1959) Dijkstra EW (1959) A note on two problems in connexion with graphs. Numer Math 1: 269–271
  • Garey et al. (1976) Garey MR, Johnson DS, Sethi R (1976) The complexity of flowshop and jobshop scheduling. Math Oper Res 1: 117–129
  • Garey and Johnson (1979) Garey MR, Johnson DS (1979) Computers and Intractability: A Guide to the Theory of NP-completeness. Freeman, San Francisco
  • Gonzalez and Sahni (1978) Gonzalez T, Sahni S (1978) Flowshop and jobshop schedules: complexity and approximation. Oper Res 26: 36–52
  • Hall (1998) Hall LA (1998) Approximability of flow shop scheduling. Math Program 82: 175–190
  • Johnson (1954) Johnson SM (1954) Optimal two- and three-stage production schedules with setup times included. Nav Res Logist Q 1: 61–68
  • Kouvelis and Yu (1997) Kouvelis P, Yu G (1997) Robust discrete optimization and its applications. Kluwer Academic Publishers, Boston
  • Röck and Schmidt (1982) Röck H, Schmidt G (1982) Machine aggregation heuristics in shop scheduling. Method Oper Res 45: 303–314
  • Wang and Cui (2012) Wang Z, Cui Z (2012) Combination of parallel machine scheduling and vertex cover. Theor Comput Sci 460: 10–15
  • Wang et al. (2013) Wang Z, Hong W, He D (2013) Combination of parallel machine scheduling and covering problem. Working paper, Tsinghua University
  • Warburton (1987) Warburton A (1987) Approximation of pareto optima in multiple-objective, shortest-path problems. Oper Res 35: 70–79