Streaming Approximation Scheme for Minimizing Total Completion Time on Parallel Machines Subject to Varying Processing Capacity
Abstract
We study the problem of minimizing total completion time on parallel machines subject to varying processing capacity. In this paper, we develop an approximation scheme for the problem under the data stream model, where the input data is massive, cannot fit into memory, and thus can only be scanned in a few passes. Our algorithm can compute the approximate value of the optimal total completion time in one pass and output a schedule achieving that value in two passes.
keywords:
streaming algorithms, scheduling, parallel machines, total completion time, varying processing capacity
Department of Computer Science, University of Texas Rio Grande Valley, Edinburg, TX 78539, USA
Department of Computer Science, College of Staten Island, CUNY, Staten Island, NY 10314, USA
Department of Computer Science, Purdue University Northwest, Hammond, IN 46323, USA
1 Introduction
In 1980, Baker and Nuttle [4] studied the problem of scheduling jobs that require a single resource whose availability varies over time. This model was motivated by situations in which machine availability may be temporarily reduced to conserve energy or interrupted for scheduled maintenance. It also applies to situations in which processing requirements are stated in terms of man-hours and labor availability varies over time. One example application is rotating Saturday shifts, where a company maintains only a fraction, for example 33%, of its workforce every Saturday.
In 1987, Adiri and Yehudai [1] studied the scheduling problem on single and parallel machines where the service rate of a machine remains constant while a job is being processed and may only be changed upon its completion. A simple application example is a machine tool whose performance is a function of the quality of its cutters which can be replaced only upon completion of one job.
In 2016, Hall et al. [12] proposed a new model of multitasking via shared processing which allows a team to continuously work on its main, or primary, tasks while a fixed percentage of its processing capacity may be allocated to routinely scheduled activities such as administrative meetings, maintenance work, or meal breaks. In these scenarios, a working team can be viewed as a machine with reduced capacity for processing primary jobs in some periods. A manager needs to decide how to schedule the primary jobs on these shared-processing machines so as to optimize some performance criteria. In [12], the authors studied the single machine environment only, and they assumed that the processing capacity allocated to all the routine jobs is a constant that is strictly less than 1.
Similar models also occur in queueing systems where the number of servers can change, or where the service rate of each server can change. In [20], Teghem defined these models as vacation models and variable service rate models, respectively. As Doshi pointed out in [7], queueing systems where the server works on primary and secondary customers arise naturally in many computer, communication, and production systems. From the primary customers' point of view, the server working on the secondary customers is equivalent to the server taking a vacation and not being available to the primary customers during this period.
In this paper, we extend the research on scheduling subject to varying machine processing capacity and study the problems in the parallel machine environment so as to optimize some objectives. In our model, we allow different processing capacities during different periods, and the change of the processing capacity is independent of the jobs. Although some related problems, such as scheduling subject to unavailability constraints, have been intensively studied, this work targets a more general model than the research mentioned above. This generalized model has many applications, as discussed in the literature cited above. For historical reasons and because of different application contexts, different terms have been used in the literature to refer to similar concepts, including service rate [1][20], processing capacity [2][5][14], machine capacity [12], sharing ratio [12], etc. In this paper, we adopt the term processing capacity to refer to the availability of a machine for processing the jobs.
For the proposed general model, we have studied some problems in [9] under the traditional data model, where all data can be stored locally and accessed in constant time. In this paper, we study the proposed model in the data stream environment, where the input data is massive and cannot be read into memory. Specifically, we study the data stream model of our problem, in which the number of jobs is so large that the jobs' information cannot be stored but can only be scanned in one or more passes. This research responds to the need, arising in scheduling as in many other areas, for solutions that can cope with big data.
As Muthukrishnan wrote in [18], with more and more data generated in the modern world, automatic data feeds are needed for many monitoring tasks in atmospheric, astronomical, networking, financial, sensor-related, and other applications. These tasks are time critical, and it is therefore important to process them in near real time so as to keep pace with the rate of stream updates and reflect rapidly changing trends in the data. With the current and future growth in demand for data stream processing, researchers face the question: what can we (not) do if we are given a certain amount of resources, a data stream rate, and a particular analysis task?
A natural approach to dealing with the large amounts of data in these time-critical tasks involves approximation and the development of data stream algorithms. Streaming algorithms were initially studied by Munro and Paterson in 1978 ([17]), and then by Flajolet and Martin in the 1980s ([8]). The model was formally established by Alon, Matias, and Szegedy in [3] and has received a lot of attention since then.
In this work our goal is to design streaming algorithms for the proposed problem that approximate the optimal solution in a limited number of passes over the data and using limited space. Formally, streaming algorithms are algorithms for processing input where some or all of the data is not available for random access but rather arrives as a sequence of items and can be examined in only a few passes (typically just one). The performance of streaming algorithms is measured by three factors: the number of passes the algorithm must make over the stream, the space needed, and the update time of the algorithm.
1.1 Problem Definition
Formally, our scheduling model can be defined as follows. There is a set of jobs and a set of parallel machines whose processing capacity varies over time. Each job has a processing time and can be processed, without interruption, by any one of the machines. Associated with each machine is a sequence of contiguous intervals, during each of which the machine has a fixed processing capacity; see Figure 1 for an example.

If the machine has full availability during an interval, we say the machine's processing capacity is 1, and a job with processing time p can be completed in p time units; otherwise, if the machine's processing capacity is less than 1, say a < 1, then the job can be completed in p/a time units.
The objective is to minimize the total completion time of all jobs. For any schedule S, let C_j(S) be the completion time of job j in S; when the context is clear, we write C_j for short. The total completion time of the schedule is then the sum of the C_j over all jobs.
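As a small illustration, consider three jobs with processing times 1, 2, and 3 scheduled consecutively in SPT order on a single machine that has full capacity at all times:
\[
C_1 = 1,\qquad C_2 = 1 + 2 = 3,\qquad C_3 = 3 + 3 = 6,\qquad \sum_j C_j = 1 + 3 + 6 = 10.
\]
If instead the machine's processing capacity were 1/2 throughout, every job would take twice as long; the completion times would become 2, 6, and 12, and the total completion time would be 20.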
In this paper, we study our scheduling problem under the data stream model. The goal is to design streaming algorithms that approximate the optimal solution in a limited number of passes over the data and using limited space. Using the three-field notation introduced by Graham et al. [11], our problem can be written in the standard three-field form; the restricted version, in which the processing capacity in every interval is greater than or equal to a positive constant, is the one for which we give a streaming approximation scheme in Section 2.
1.2 Literature Review
We first review the related work under the traditional data model. The first related work was done by Baker and Nuttle [4] in 1980. They studied the problems of sequencing jobs for processing by a single resource to minimize a function of the job completion times, subject to the constraint that the availability of the resource varies over time. Their work showed that a number of well-known results for classical single-machine problems can be applied with little or no modification to the corresponding variable-resource problems. Adiri and Yehudai [1] studied the problem on single and parallel machines with the restriction that the service rate of a machine remains constant while a job is being processed and can be changed only when the job is completed. In 1992, Hirayama and Kijima [13] studied this problem on a single machine where the machine capacity varies stochastically over time.
In 2016, Hall et al. [12] studied similar problems in a multitasking environment, where a machine does not always have its full capacity available for processing primary jobs due to previously scheduled routine jobs. In their work, they assumed a single machine whose processing capacity is either 1 or a constant during an interval. They showed that the total completion time can be minimized by scheduling the jobs in non-decreasing order of processing time, but the problem is strongly NP-hard for the objective of total weighted completion time.
Another widely studied model is scheduling subject to machine unavailability constraint where the machine has either full capacity, or zero capacity so no jobs can be processed. Various performance criteria and machine environments have been studied under this model, see the survey papers [19] and [15] and the references therein. Among the results, for the objective of minimizing total completion time on parallel machines, the problem is NP-hard.
Other scheduling models with varying processing capacity have also been studied in the literature, where the variation of machine availability is related to the jobs that have been scheduled. These models include scheduling with learning effects (see the survey paper [14] by Janiak et al. and the references therein), scheduling with deteriorating effects (see the paper [5] by Cheng et al.), and interdependent processing rates (see the paper [2] by Alidaee et al. and the references therein), etc.
In our model, there are multiple machines, the processing capacity of each machine can change between 0 and 1 from interval to interval, and a capacity change can happen while a job is being processed. The goal is to find a schedule that minimizes the total completion time. In [9] we showed that there is no polynomial time approximation algorithm unless P = NP if the processing capacities are arbitrary on all machines. For the problem where the sharing ratios on some machines have a constant lower bound, we analyzed the performance of some classical scheduling algorithms and developed a polynomial time approximation scheme for the case where the number of machines is a constant.
We now review the related work under the data stream model. Since the streaming algorithm model was formally established by Alon, Matias, and Szegedy in [3], it has received a lot of attention. While streaming algorithms have been studied in the fields of statistics, optimization, and graph algorithms (see the surveys by Muthukrishnan [18] and McGregor [16]), very limited research on streaming algorithms ([6] [10]) has been done in the field of sequencing and scheduling so far. For the proposed model, in [10] we developed a streaming algorithm for the problem with the objective of makespan minimization.
1.3 New Contribution
In this paper, we present the first efficient streaming algorithm for this problem. Our streaming algorithm can compute an approximation of the optimal total completion time in one pass, and output a schedule achieving the approximate value in two passes.
2 A PTAS for
In this section, we develop a streaming algorithm for our problem when the processing capacities on all machines have a positive constant lower bound. The algorithm is a PTAS and uses sublinear space when the number of machines is a constant.
At the conceptual level, our algorithm has the following two stages:
- Stage 1: Generate a sketch of the jobs while reading the input stream.
- Stage 2: Compute an approximation of the optimal value based on the sketch.
In the following two subsections, we will describe each stage in detail.
2.1 Stage 1: Sketch Generation from the Input Stream
In this stage we read the job input stream and generate a sketch using a rounding technique. The idea is to split the jobs into large jobs and small jobs and round the processing times of the large jobs so that the number of distinct processing times is reduced. Specifically, the sketch is a set of pairs, where each pair consists of a rounded processing time and the number of jobs with this rounded processing time. The sketch can be formally defined as follows:
Definition 1.
Given the error parameter and the lower bound of the processing capacity of the machines, let the set of large jobs be those with sufficiently large processing time. For each job of this set, we round up its processing time: if it falls into a given range, its rounded processing time is the upper end of that range. We denote the sketch by the set of pairs in which each pair consists of a rounded processing time and the number of jobs with that rounded processing time, where the smallest and the largest rounded processing times are determined by the large-job threshold and the maximum processing time, respectively.
When the number of jobs and the largest processing time are known before reading the job stream, one can generate the sketch with the following simple procedure: whenever a job is read, if it is a small job, skip it and continue; otherwise, it is a large job, and we update the pair corresponding to its rounded processing time by increasing the count by 1.
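To make this concrete, the following Python sketch is an illustrative reconstruction: the exact large-job threshold and rounding constants of Definition 1 are replaced here by assumed parameters threshold and eps, and rounding is assumed to be to powers of 1 + eps.

```python
import math

def make_sketch(jobs, eps, threshold):
    """One-pass sketch: count the large jobs per rounded processing time.

    Illustrative assumptions (not the paper's exact constants): a job is
    "large" if its processing time is at least `threshold`, and a large job
    with processing time p is rounded up to the next power of (1 + eps).
    """
    counts = {}                                # bucket exponent -> job count
    for p in jobs:                             # jobs arrive as a stream
        if p < threshold:                      # small job: not stored
            continue
        k = math.ceil(math.log(p, 1 + eps))    # rounded value is (1+eps)**k
        counts[k] = counts.get(k, 0) + 1
    return {(1 + eps) ** k: c for k, c in counts.items()}

# e.g. with eps = 0.5 and threshold 4, jobs 5 and 9 fall into separate buckets
print(make_sketch([1, 3, 5, 9, 20, 20], eps=0.5, threshold=4))
```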
In reality, however, the number of jobs and the largest processing time may not be known accurately without scanning all the jobs. Meanwhile, in many practical scenarios, estimates of these quantities can be obtained from prior knowledge. Specifically, an upper bound on the number of jobs could be given that is within a constant factor of the true value, and a lower bound on the largest processing time could be given that is within a constant factor of the true value. Depending on whether these two estimates are known beforehand, we have four cases: (1) both are known; (2) only the lower bound on the largest processing time is known; (3) only the upper bound on the number of jobs is known; (4) neither is known.
In all four cases, we can follow the same procedure below to get the sketch of the job input stream. The main idea is that, as we read each job, we dynamically update the maximum processing time, the total number of jobs read so far, the threshold of processing time for large jobs, and the sketch if needed.
Algorithm for constructing sketch
1. Read the error parameter and the given estimates.
2. If an upper bound on the number of jobs is not given, set it to a default value.
3. If a lower bound on the largest processing time is not given, set it to a default value.
4. Initialize the job counter and the current maximum processing time.
5. Initialize the current threshold of processing time for large jobs and an empty sketch.
6. Construct the sketch while repeatedly reading the next job:
6.a. Increase the job counter.
6.b. If the job's processing time exceeds the current maximum processing time, update the maximum and, if necessary, the threshold of processing time for large jobs.
6.c. If the job is large, that is, its processing time is at least the current threshold:
6.c.1. Compute its rounded processing time.
6.c.2. If the sketch contains a tuple with this rounded processing time,
6.c.3. then increase its count;
6.c.4. else insert a new tuple with count 1.
7. Remove any tuple whose rounded processing time is below the final threshold.
8. Output the resulting sketch.
While the above procedure can be used in all four cases, we need different data structures and implementations in each case to achieve time and space efficiency. For cases (1) and (2), where a lower bound on the largest processing time is known, the largest processing time is within a constant factor of this bound, so there is only a limited number of distinct rounded processing times and we can use an array to store the sketch. For cases (3) and (4), where no such information is known, we can use a B-tree to store the elements of the sketch. Each node in the tree corresponds to an element, with the rounded processing time as the key. Since the threshold is dynamically updated, the number of distinct rounded processing times stored at any time, and thus the number of nodes in the B-tree, remains bounded. As each job is read in, we may need to insert a new node into the B-tree. If the job's processing time exceeds the current maximum, the maximum needs to be updated and so does the threshold of processing time for large jobs; hence the nodes with keys less than the new threshold should be removed. To minimize the worst-case update time for each job, each time a new node is inserted we delete the node with the smallest key if that key is less than the threshold.
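The following Python sketch illustrates the dynamic variant schematically; it assumes, for illustration only, that a job is large when its processing time is at least a fraction theta of the current maximum processing time, that rounding is to powers of 1 + eps, and it uses a binary min-heap in place of the B-tree for locating the smallest key.

```python
import heapq
import math

class DynamicSketch:
    """Streaming sketch when neither the number of jobs nor the largest
    processing time is known in advance (illustrative reconstruction).

    Assumptions: a job is "large" if its processing time is at least
    theta * p_max for the current maximum p_max (the paper's threshold also
    involves the error parameter, the capacity lower bound, and the number
    of jobs); rounding is to powers of (1 + eps); at most one bucket is
    evicted per update, as in the text, to bound the worst-case update cost.
    """

    def __init__(self, eps, theta):
        self.eps, self.theta = eps, theta
        self.p_max = 0.0
        self.counts = {}       # bucket exponent -> number of large jobs
        self.keys = []         # min-heap of the bucket exponents

    def _bucket(self, p):
        return math.ceil(math.log(p, 1 + self.eps))

    def update(self, p):
        self.p_max = max(self.p_max, p)
        threshold = self.theta * self.p_max
        if p >= threshold:
            k = self._bucket(p)
            if k not in self.counts:
                self.counts[k] = 0
                heapq.heappush(self.keys, k)
            self.counts[k] += 1
        # evict at most one bucket whose rounded value fell below the threshold
        if self.keys and (1 + self.eps) ** self.keys[0] < threshold:
            del self.counts[heapq.heappop(self.keys)]

    def sketch(self):
        return {(1 + self.eps) ** k: c for k, c in self.counts.items()}

# usage: feed the stream one processing time at a time
sk = DynamicSketch(eps=0.5, theta=0.25)
for p in [1, 3, 5, 9, 20, 20]:
    sk.update(p)
print(sk.sketch())
```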
The following lemma gives the space and time complexity of computing the sketch from the job input stream in all four cases.
Lemma 2.
Suppose the constants in the estimates above are fixed. We can compute the sketch of the job input stream in one pass with the following performance:
1. Given both an upper bound for the number of jobs and a lower bound for the largest processing time, it takes update time and space to process each job from the stream.
2. Given only a lower bound for the largest processing time, it takes update time and space to process each job in the stream.
3. Given only an upper bound for the number of jobs, it takes update time and space to process each job in the stream.
4. Given no information about the number of jobs or the largest processing time, it takes update time and space to process each job in the stream.
Proof.
We consider the four cases separately.
Case 1: Both an upper bound on the number of jobs and a lower bound on the largest processing time are given.
From the algorithm, the processing time of a large job is bounded above by the maximum processing time and below by the large-job threshold. Thus, the number of distinct processing times after rounding is bounded, and we use an array of this size to store the elements of the sketch, which gives the claimed space bound. It is easy to see that the per-job update time is also as claimed.
Case 2: Only a lower bound on the largest processing time is given.
From the algorithm, the processing time of a large job is bounded above by the maximum processing time and below by the large-job threshold, so the number of distinct processing times in the sketch is bounded. With an array of this size storing the elements of the sketch, the update time for each job is as claimed.
Case 3: Only an upper bound on the number of jobs is given.
We use a B-tree to store the elements of the sketch. Since the upper bound on the number of jobs is given, we can calculate the large-job threshold based on the updated maximum processing time, so the number of nodes in the B-tree is bounded as claimed.
For each large job, we need to perform at most three operations: a search, possibly an insertion, and a deletion. The time for each operation is at most the height of the tree.
Case 4: No information about the number of jobs or the largest processing time is known beforehand. We still use a B-tree as in Case 3. However, without information about the number of jobs, the bound on the total number of nodes stored in the B-tree is weaker, and the update time changes accordingly. ∎
2.2 Stage 2: Approximation Computation based on the Sketch
In this stage, we find an approximate value of the minimum total completion time for our scheduling problem based on the sketch obtained in the first stage. The idea is to assign the large jobs in the sketch to machines group by group in SPT order, where each group corresponds to one pair (rounded processing time, count) in the sketch. After all groups of jobs are scheduled, we find the minimum total completion time among the remaining schedules and return an approximate value.
To schedule each group of jobs, we perform two operations:
- Enumerate: enumerate all assignments of the jobs in the group to the machines that satisfy a certain property, and
- Prune: prune the schedules so that only a limited number of schedules are kept.
During the Enumerate operation, we enumerate the assignments of jobs from the current group to the machines using the partitions defined below.
Definition 3.
For two positive integers and a given real parameter, a partition of the first integer over the machines is an ordered tuple of non-negative integers that sum to the first integer, such that all but at most one of the components take values from a restricted set determined by the parameter.
For example, to schedule a group of 9 jobs to 3 machines, we enumerate the assignments corresponding to such partitions of 9. One such partition is (2, 2, 5): we schedule 2 jobs on the first machine, 2 jobs on the second machine, and the remaining 5 jobs on the last machine. Since partitions are ordered tuples, two partitions that differ only in the order of their components are different and correspond to two different schedules.
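The following Python sketch enumerates such partitions under one possible reading of Definition 3 (assumed here for illustration, since the exact value set is parameterized): each of the first m - 1 components is either 0 or the floor of a power of 1 + delta, and the last machine receives the remaining jobs.

```python
from itertools import product

def restricted_partitions(n, m, delta):
    """Enumerate ordered partitions of n over m machines in which (as an
    illustrative reading of Definition 3) each of the first m - 1 parts is
    either 0 or floor((1 + delta)**k) for some integer k >= 0, and the last
    machine receives all remaining jobs."""
    sizes, k = {0}, 0
    while int((1 + delta) ** k) <= n:          # candidate sizes 0, 1, 1+delta, ...
        sizes.add(int((1 + delta) ** k))
        k += 1
    for head in product(sorted(sizes), repeat=m - 1):
        rest = n - sum(head)
        if rest >= 0:
            yield head + (rest,)

# With delta = 1 the candidate sizes are {0, 1, 2, 4, 8}; (2, 2, 5) is one of
# the partitions of 9 jobs over 3 machines, matching the example in the text.
print((2, 2, 5) in set(restricted_partitions(9, 3, delta=1)))   # True
```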
During the Prune operation, we remove some schedules so that only a limited number of schedules are kept. For a schedule of the jobs on the machines, we consider, for each machine, the total processing time of the jobs assigned to that machine and the total completion time of the jobs scheduled on that machine. The schedules are pruned so that no two kept schedules are "similar", where "similar" schedules are defined as follows.
Definition 4.
Two schedules are "similar" with respect to a given parameter if, for every machine, the total processing times of the two schedules on that machine both lie in the same interval of a family of intervals indexed by the integers and determined by the parameter, and the total completion times of the two schedules on that machine likewise both lie in the same interval of that family.
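A concrete instantiation of this test, assuming for illustration that the interval family is the geometric family [(1+delta)^k, (1+delta)^(k+1)), can be sketched in Python as follows, where P1, C1 and P2, C2 hold the per-machine loads and per-machine total completion times of the two schedules.

```python
import math

def bucket(x, delta):
    """Index k with x in [(1 + delta)**k, (1 + delta)**(k + 1))."""
    return math.floor(math.log(x, 1 + delta)) if x > 0 else None

def similar(P1, C1, P2, C2, delta):
    """Illustrative reading of Definition 4: two schedules are "similar" if,
    on every machine, their total processing times fall in the same geometric
    interval and their total completion times fall in the same geometric
    interval."""
    return all(bucket(p1, delta) == bucket(p2, delta) and
               bucket(c1, delta) == bucket(c2, delta)
               for (p1, c1, p2, c2) in zip(P1, C1, P2, C2))

# two 2-machine schedules whose per-machine loads and objectives fall in the
# same (1 + delta)-intervals are treated as interchangeable, and one is pruned
print(similar([10, 7], [29, 18], [10.5, 7.2], [30, 19], delta=0.1))   # True
```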
Our complete algorithm for Stage 2 is formally presented below; the Enumerate and Prune operations are performed in Steps (3.b) and (3.c), respectively.
Algorithm for computing the approximate value
Input: the sketch obtained in Stage 1
Output: an approximate value of the minimum total completion time for the jobs in the original input
Steps
1. Let the pruning parameter be a positive real chosen with respect to the error parameter.
2. Initialize the current set of schedules to contain only the empty schedule on the machines.
3. For each group of jobs in the sketch, in SPT order, compute the new set of schedules of the jobs from the groups considered so far:
3.a Let the new set of schedules be empty.
3.b For each partition (Definition 3) of the number of jobs in the current group, and for each schedule in the current set of schedules: schedule the jobs of the group at the end of that schedule based on the partition, and add the resulting schedule to the new set.
3.c Prune the new set by repeating the following until it cannot be reduced: if there are two schedules in the set that are "similar", remove one of them.
4. Let the selected schedule be the one with the minimum total completion time among the schedules kept after the last group.
5. Return the corresponding value as an approximate value of the minimum total completion time of the jobs in the original input.
Before we analyze the performance of the above procedure, we first consider a special case of our scheduling problem: all jobs have equal processing time and there is a single machine whose processing capacity is at least a given constant at any time. Suppose that the jobs are scheduled continuously on the machine. The following lemma shows how the total completion time of these jobs changes if we shift the starting time of these jobs and/or insert a small number of additional identical jobs at the end.
Lemma 5.
Let S be a schedule of identical jobs with equal processing time, starting from a given time, on a single machine whose processing capacity is at least a given constant at any time. Then we have the following cases:
(1) The total completion time of S is at least its value in the case where the machine always has full capacity, and at most the value obtained when every job is slowed down by the reciprocal of the capacity lower bound.
(2) If we shift all jobs in S so that the first job starts at a later time and obtain a new schedule, then the total completion time of the new schedule exceeds that of S by an amount bounded in terms of the shift and the number of jobs.
(3) If we add additional identical jobs at the end of S and obtain a new schedule, then its total completion time exceeds that of S by the total completion time of the added jobs, which can be bounded using (1).
(4) For a schedule of identical jobs with the same processing time starting from a later time, the corresponding bound on its total completion time holds.
Proof.
We will prove (1)-(4) one by one in order.
(1) First, it is easy to see that the total completion time of the jobs when the machine's processing capacity is always 1 is a lower bound, since the capacity never exceeds 1. When the machine's processing capacity is at least the given constant at any time, each job takes at most its processing time divided by that constant to complete, and thus the total completion time is at most the corresponding upper bound.
(2) When we shift the jobs so that the first job starts later, the completion time of each job is increased by a bounded amount; summing over the jobs gives the claimed bound.
(3) Suppose the last job in S completes at some time. When we add the additional jobs starting from that time, by (1) the total completion time of the additional jobs is bounded as claimed, and the bound for the new schedule follows.
(4) The last case follows by combining the previous cases. ∎
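As an illustration of case (1), with symbols introduced here only for the example, suppose k identical jobs of processing time p start at time t_0 on a machine whose capacity is always at least a constant lambda; the argument above then gives
\[
k\,t_0 + \frac{k(k+1)}{2}\,p \;\le\; C(S) \;\le\; k\,t_0 + \frac{k(k+1)}{2}\cdot\frac{p}{\lambda}.
\]
For example, k = 3, p = 2, t_0 = 1, and lambda = 1/2 give bounds of 15 and 27 on the total completion time.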
Now we analyze the performance of our algorithm. In step 3.b, we only consider the schedules of the jobs corresponding to partitions as in Definition 3. For any schedule of the jobs in the sketch, we will show that at the end of step 3.b there is a schedule in the kept set that is close to it. For a given schedule, consider the number of jobs from each group that are scheduled on each machine; the notion of a close schedule is defined as follows.
Definition 6.
Let be an integer such that . We say a schedule is a -close schedule to if for the jobs in group , the following conditions hold: (1) In , the schedule of jobs from the group forms a partition of ; (2) For at least machines, either or .
By definition, if is -close to , then for all , .
The following lemma shows that there is always a schedule at the end of step 3.b that is -close to .
Lemma 7.
For any schedule , there exists a schedule at the end of step 3.b that is -close to .
Proof.
The existence of such a schedule can be shown by construction. We initialize it to be any schedule from the previous set. Then we schedule the jobs from the group machine by machine, starting from the first machine. For each machine, if fewer jobs remain unscheduled in this group than the target amount, assign all the remaining jobs to this machine; otherwise, assign the target number of jobs to this machine and continue to schedule the jobs of this group on the next machine.
It is easy to see that the constructed assignment of jobs from the group forms a partition that would be added to the set in step 3.b. By definition, the constructed schedule is -close to the given schedule. ∎
We now analyze step 3.c of our algorithm, where "similar" schedules are pruned after a group of jobs in the sketch is scheduled. We will need the following notation:
: the total completion time of jobs from group that are scheduled on in .
: the largest completion time of the jobs from group that are scheduled on in .
: the total processing time of jobs from group that are scheduled to machine in .
For any optimal schedule of the jobs in , let be the partial schedule of jobs from groups to with the processing time at most in . Our next lemma shows that there is a schedule that approximates the partial schedule .
Lemma 8.
For any optimal schedule of the jobs in the sketch, let the partial schedule consist of the jobs from the first groups up to the current one. Then, after some schedules are pruned at step (3.c), there exists a kept schedule such that
(1) for every machine, the total processing time of the jobs assigned to it is bounded in terms of that in the partial schedule, and
(2) for every machine, the total completion time of the jobs scheduled on it is bounded in terms of that in the partial schedule.
Proof.
We prove the claim by induction on the number of groups. First consider the first group. By Lemma 7, at the end of step 3.b there is a schedule that is -close to the partial schedule, which bounds the number of jobs on each machine and hence the total processing time on each machine. In both schedules, the jobs are scheduled from time 0 on each machine, so by Lemma 5 Case (3), for each machine the total completion times are related accordingly. If this schedule is pruned at step 3.c, then there must be a kept schedule that is "similar" to it, so for each machine we have
and
Assume the induction hypothesis holds for some . So after schedules in are pruned, there is a schedule in that satisfies the inequalities (1) and (2). Now by the way that we construct schedules and by Lemma 7, in , there must be a schedule that is the same as for the jobs from groups to , and is -close to for the jobs of group .
Then for each machine we have
Compared with the partial schedule, on each machine the first job from the current group is delayed by a bounded amount. By Lemma 5 Case (4), for the jobs from the current group on each machine, the corresponding bounds on the total processing time and the total completion time hold.
Then after the “similar” schedules are pruned in our procedure, there is a schedule that is “similar” to , so for each machines we have
and
This completes the proof. ∎
After all groups of jobs are scheduled, our algorithm finds the schedule that has the smallest total completion time among all generated schedules and then returns the corresponding value. In the following we show that the returned value approximates the optimal total completion time for the job set.
Lemma 9.
Let be the optimal schedule for jobs in , .
Proof.
We first construct a schedule of the original jobs based on the schedule of the jobs in the sketch using the following two steps:
1. Replace each job in the schedule with the corresponding job from the original job set, and let the resulting schedule be the intermediate schedule.
2. Insert all small jobs from the original job set, starting at time 0, and let the resulting schedule be the final schedule.
For each job, its rounded processing time is at least its original processing time, so when we replace the rounded jobs with the original jobs to obtain the intermediate schedule, the completion time of each job does not increase, and thus the total completion time of the intermediate schedule is at most that of the schedule of the sketch.
All the small jobs have small processing times, so their total length is bounded. Inserting them into the intermediate schedule, the completion time of the last small job is bounded, and every other job's completion time is increased by at most the total length of the small jobs. The total completion time of all the jobs is therefore at most
By Lemma 8, there is a schedule that corresponds to the schedule of the large jobs obtained from . Furthermore, where . For schedule , we have . Thus,
∎
Lemma 10.
The algorithm in Stage 2 runs in
time using space.
Proof.
For each rounded processing time and for each of the machines, the number of possible jobs assigned is restricted as in Definition 3, so there is a bounded number of values; the remaining jobs are assigned to the last machine. Hence there is a bounded number of ways to assign the jobs of a group to the machines.
For each schedule in , the largest completion time is bounded by , and its total completion time is bounded by . Since we only keep non-similar schedules, there are at most schedules. ∎
Theorem 11.
For the studied problem, there is a one-pass approximation scheme with the following time and space complexity:
1. Given both an upper bound for the number of jobs and a lower bound for the largest processing time of a job, it takes update time and space to process each job from the stream.
2. Given only a lower bound for the largest processing time, it takes update time and space to process each job in the stream.
3. Given only an upper bound for the number of jobs, it takes update time and space to process each job in the stream.
4. Given no information about the number of jobs or the largest processing time, it takes update time and space to process each job in the stream.
5. After processing the input stream, computing the approximate value takes the time and space given in Lemma 10.
Note that our algorithm only finds an approximate value of the optimal total completion time; it does not generate the schedule of all jobs. If the jobs can be read in a second pass, we can return a schedule of all jobs whose total completion time is within the same factor of the optimum. Specifically, after the first pass, we store, for every group and every machine, the number of large jobs from that group that are assigned to that machine in the selected schedule. Based on the selected schedule, we also obtain the starting time of the jobs of each group on each machine. We add one more group that includes all the small jobs; it is scheduled at the beginning on one fixed machine, and the starting times of the large-job groups on that machine are shifted by the total processing time of the small jobs. In the second pass, for each job scanned, if it is a large job, we schedule it on a machine that still has remaining quota for its rounded processing time at the stored starting time, then decrease the corresponding count by 1 and update the starting time accordingly; otherwise, the job is a small job, and we schedule it at the current start position of the small-job group on the designated machine and update that position accordingly. The total space needed in the second pass is the space for storing the counts and the starting times for all groups and machines.
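The following Python sketch illustrates the second-pass bookkeeping; it is a simplified reconstruction in which quota[k][i] stands for the stored counts of rounded class k on machine i, all small jobs are placed at the front of one fixed machine, and start and completion times are omitted because they depend on the machines' capacity profiles and on the stored per-group starting times.

```python
import math
from collections import defaultdict

def second_pass(jobs, quota, eps, threshold):
    """Second pass (schematic reconstruction): turn the stored counts into an
    explicit job-to-machine assignment.

    Illustrative assumptions: quota[k][i] holds how many large jobs of rounded
    class k the selected schedule places on machine i; rounding is to powers
    of (1 + eps); all small jobs are placed at the front of machine 0.
    """
    large = defaultdict(list)      # machine -> large jobs in scanned order
    small = []                     # small jobs, scheduled first on machine 0
    for p in jobs:
        if p < threshold:
            small.append(p)
            continue
        k = math.ceil(math.log(p, 1 + eps))
        machine = next(i for i, left in quota[k].items() if left > 0)
        quota[k][machine] -= 1     # one unit of this class's quota is consumed
        large[machine].append(p)
    schedule = {i: list(js) for i, js in large.items()}
    schedule[0] = small + schedule.get(0, [])
    return schedule

# toy usage: class 4 (rounded time 1.5**4) has quota 1 on machine 0 and 1 on machine 1
quota = {4: {0: 1, 1: 1}}
print(second_pass([0.5, 5, 5, 0.2], quota, eps=0.5, threshold=4))
```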
Theorem 12.
There is a two-pass -approximation streaming algorithm for the studied problem. In the first pass, the approximate value can be obtained with the same performance as in Theorem 11; in the second pass, a schedule achieving the approximate value can be returned with bounded processing time and space per job.
3 Conclusions
In this paper we studied a generalization of the classical identical parallel machine scheduling model, where the processing capacity of machines varies over time. This model is motivated by situations in which machine availability is temporarily reduced to conserve energy or interrupted for scheduled maintenance or varies over time due to the varying labor availability. The goal is to minimize the total completion time.
We studied the problem under the data stream model and presented the first streaming algorithm for it. Our work follows the study of streaming algorithms in areas such as statistics and graph theory, and opens the research direction of streaming algorithms in the area of scheduling. It is expected that more big data solutions based on streaming algorithms will be developed in the future.
Our research leaves one unsolved case for the studied problem: is there a streaming approximation scheme when one of the machines has arbitrary processing capacities? For future work, it would also be interesting to study other performance criteria under the data stream model, including maximum tardiness and the number of tardy jobs, as well as other machine environments such as uniform machines, flow shops, etc.
References
- [1] Adiri, I., and Yehudai, Z. Scheduling on machines with variable service rates. Computers & Operations Research 14, 4 (1987), 289–297.
- [2] Alidaee, B., Wang, H., Kethley, B., and Landram, F. G. A unified view of parallel machine scheduling with interdependent processing rates. Journal of Scheduling (2019), 1–17.
- [3] Alon, N., Matias, Y., and Szegedy, M. The space complexity of approximating the frequency moments. Journal of Computer and System Sciences 58, 1 (1999), 137–147.
- [4] Baker, K. R., and Nuttle, H. L. W. Sequencing independent jobs with a single resource. Naval Research Logistics Quarterly 27 (1980), 499–510.
- [5] Cheng, T. C. E., Lee, W.-C., and Wu, C.-C. Single-machine scheduling with deteriorating functions for job processing times. Applied Mathematical Modelling 34 (2010), 4171–4178.
- [6] Cormode, G., and Veselý, P. Streaming algorithms for bin packing and vector scheduling. Theory of Computing Systems 65 (2021), 916–942.
- [7] Doshi, B. T. Queueing systems with vacations—a survey. Queueing Syst. Theory Appl. 1, 1 (jan 1986), 29–66.
- [8] Flajolet, P., and Martin, G. N. Probabilistic counting algorithms for data base applications. Journal of Computer and System Sciences 31, 2 (1985), 182–209.
- [9] Fu, B., Huo, Y., and Zhao, H. Multitasking scheduling with shared processing, 2022. Manuscript under review.
- [10] Fu, B., Huo, Y., and Zhao, H. Streaming algorithms for multitasking scheduling with shared processing, 2022. Manuscript under revision.
- [11] Graham, R., Lawler, E., Lenstra, J., and Kan, A. Optimization and approximation in deterministic sequencing and scheduling: a survey. In Discrete Optimization II, P. Hammer, E. Johnson, and B. Korte, Eds., vol. 5 of Annals of Discrete Mathematics. Elsevier, 1979, pp. 287–326.
- [12] Hall, N. G., Leung, J. Y.-T., and Li, C.-L. Multitasking via alternate and shared processing: Algorithms and complexity. Discrete Applied Mathematics 208 (2016), 41–58.
- [13] Hirayama, T., and Kijima, M. Single machine scheduling problem when the machine capacity varies stochastically. Operations Research 40 (1992), 376–383.
- [14] Janiak, A., Krysiak, T., and Trela, R. Scheduling problems with learning and ageing effects: A survey. Decision Making in Manufacturing and Services 5, 1 (Oct. 2011), 19–36.
- [15] Ma, Y., Chu, C., and Zuo, C. A survey of scheduling with deterministic machine availability constraints. Computers & Industrial Engineering 58, 2 (2010), 199–211. Scheduling in Healthcare and Industrial Systems.
- [16] McGregor, A. Graph stream algorithms: a survey. SIGMOD Record 43 (2014), 9–20.
- [17] Munro, J., and Paterson, M. Selection and sorting with limited storage. Theoretical Computer Science 12, 3 (1980), 315–323.
- [18] Muthukrishnan, S. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science 1, 2 (aug 2005), 117–236.
- [19] Schmidt, G. Scheduling with limited machine availability. European Journal of Operational Research 121, 1 (2000), 1–15.
- [20] Teghem, J. Control of the service process in a queueing system. European Journal of Operational Research 23, 2 (1986), 141–158.