2.3 Arbitrary number of machines
2.3.1 A 6.6623-competitive algorithm
We design an online algorithm for P|online-list, m_j|C_max such that the constructed schedules can be partitioned into an X part and a Y part, as in Lemma 2.1, resulting in a small x + y value. To do this, we distinguish between two types of jobs: jobs with a large machine requirement and jobs that require only a few machines for processing. A job j is called big if it requires at least half of the machines, i.e. it has machine requirement m_j ≥ ⌈m/2⌉, and is called small otherwise. Furthermore, the small jobs
are classified according to their length. A small job j belongs to job class J_k if β^k ≤ p_j < β^{k+1}, where β = 1 + √10/5 (≈ 1.6325). Note that k may be negative.
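To make the classification concrete, the class index of a small job is k = ⌊log_β p_j⌋. The following sketch (the function name is ours, not from the text) illustrates this, including a negative class index:

```python
import math

BETA = 1 + math.sqrt(10) / 5  # beta ~ 1.6325, as defined above

def job_class(p):
    """Return k such that BETA**k <= p < BETA**(k+1).

    Illustration only: very close to a class boundary, floating-point
    rounding of the logarithm may shift the result by one.
    """
    return math.floor(math.log(p) / math.log(BETA))
```

For example, job_class(2.0) is 1 since β ≤ 2 < β², and job_class(0.5) is −2, showing that k can indeed be negative.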
Similar classifications can be found in shelf algorithms for strip packing [2], where they are used to group rectangles of similar height.
In the schedules created by the online algorithm, big jobs are never scheduled in parallel to other jobs, and (where possible) small jobs are put in parallel to other small jobs of the same job class. The intuition behind the online algorithm is the following. Scheduling big jobs results in a relatively high average load in the corresponding intervals, and small jobs are either grouped together to a high average load or there is a small job with a relatively long processing time. In the proof of 6.6623-competitiveness, the intervals with many small jobs, together with the intervals with big jobs, are compared to the load bound for C*(σ) (the Y part in Lemma 2.1), and the intervals with only a few small jobs are compared to the length bound for C*(σ) (the X part in Lemma 2.1).
In the following, a precise description of Algorithm PJ (Parallel Jobs) is given. Algorithm PJ creates schedules in which the small jobs of class J_k are scheduled either in a sparse interval S_k or in dense intervals D_k^i. With n_k we count the number of dense intervals created for job class J_k. All small jobs scheduled in an interval [a, b) start at a. As a consequence, job j fits in interval [a, b) if the machine requirement of the jobs already in [a, b) plus m_j is at most m.
Algorithm PJ
Schedule job j as follows:
if job j is small, i.e. m_j < ⌈m/2⌉, and belongs to job class J_k, then try in the given order:
• Schedule job j in the first interval D_k^i where it fits.
• Schedule job j in S_k.
• Set n_k := n_k + 1 and let S_k become D_k^{n_k}. Create a new interval S_k at the end of the current schedule with length β^{k+1}. Schedule job j in S_k.
if job j is big, i.e. m_j ≥ ⌈m/2⌉, then
• Schedule job j at the end of the current schedule.
[Figure: part of a schedule on m machines, showing a big job, small jobs with length in [β^k, β^{k+1}) packed into a dense interval D_k^{n_k} and a sparse interval S_k, each of length β^{k+1}, followed by another big job.]
Figure 2.3: Part of a schedule created by Algorithm PJ.
The structure of a schedule created by Algorithm PJ is illustrated by Figure 2.3. It is important to note that at any time there is at most one sparse interval S_k for each job class J_k.
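The bookkeeping of Algorithm PJ can be sketched as follows. This is a minimal Python model of our own, not the authors' implementation: it tracks each interval only by its remaining machine capacity, and the makespan only as the total length of big jobs and sparse intervals created.

```python
import math

BETA = 1 + math.sqrt(10) / 5  # beta ~ 1.6325

class PJ:
    """Minimal sketch of Algorithm PJ on m machines (illustration only)."""

    def __init__(self, m):
        self.m = m
        self.makespan = 0.0   # current end of the online schedule
        self.dense = {}       # job class k -> free capacity of each D_k^i
        self.sparse = {}      # job class k -> free capacity of S_k

    def schedule(self, mj, pj):
        if mj >= math.ceil(self.m / 2):     # big job: never in parallel,
            self.makespan += pj             # appended at the end
            return
        k = math.floor(math.log(pj) / math.log(BETA))  # job class J_k
        for i, free in enumerate(self.dense.setdefault(k, [])):
            if mj <= free:                  # First-Fit over dense intervals
                self.dense[k][i] -= mj
                return
        if k in self.sparse and mj <= self.sparse[k]:
            self.sparse[k] -= mj            # fits in the sparse interval S_k
            return
        if k in self.sparse:                # S_k becomes the next dense interval
            self.dense[k].append(self.sparse[k])
        self.sparse[k] = self.m - mj        # new S_k of length BETA**(k + 1)
        self.makespan += BETA ** (k + 1)
```

Note that a small job opening a new sparse interval extends the schedule by β^{k+1} regardless of its actual processing time; this slack is exactly what the analysis below has to pay for.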
The way Algorithm PJ schedules jobs of a specific job class in the dense and sparse intervals strongly resembles bin packing [26]. We can look at each of these intervals as a bin in which we pack the jobs (items). Since all jobs are scheduled to start at the beginning of the interval, only the machine requirement matters; it corresponds to the size of the item to be packed. Algorithm PJ packs with a First-Fit strategy, i.e. a small job (an item) is scheduled (packed) in the first interval (bin) in which it fits.
To bound the competitive ratio of Algorithm PJ, we use the fact that the dense intervals D_k^i contain a substantial load: for each dense interval there is a small job that did not fit in it and had to be scheduled in a newly created sparse interval. In terms of bin packing we have the following lemma.
Lemma 2.3. If items with size less than 1/2 are packed First-Fit and this packing uses b bins, the total size of the items packed is at least 2(b − 1)/3.
Proof. Consider an arbitrary sequence of items with size less than 1/2 which results in the use of b bins when packed with First-Fit. Let b̃ be the first bin which is filled less than 2/3. By definition of First-Fit, all items in successive bins have size at least 1/3. This implies that all successive bins, except possibly the last, are filled for at least 2/3; more precisely, they contain precisely two items with size larger than 1/3. This need not hold for the last bin. However, the existence of b̃ implies that the total item size in the last bin and bin b̃ together is at least 1. So, the total size of the items packed is at least (2/3)(b − 2) + 1 ≥ 2(b − 1)/3. If no b̃ exists or if b̃ is the last bin, the lemma trivially holds.
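The bound of Lemma 2.3 can be checked numerically with a small First-Fit simulation. This is a sanity check of our own, not part of the proof:

```python
import random

def first_fit(sizes):
    """Pack items into unit-capacity bins with First-Fit; return bin fill levels."""
    bins = []
    for s in sizes:
        for i, fill in enumerate(bins):
            if fill + s <= 1.0:   # item fits in the first open bin with room
                bins[i] += s
                break
        else:
            bins.append(s)        # no bin fits: open a new one
    return bins

# Random instances with item sizes < 1/2, as in Lemma 2.3.
random.seed(1)
for _ in range(500):
    items = [random.uniform(0.01, 0.499) for _ in range(random.randint(1, 60))]
    b = len(first_fit(items))
    assert sum(items) >= 2 * (b - 1) / 3 - 1e-9  # total size >= 2(b-1)/3
```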
Taking this bin packing view on the dense and sparse intervals, we get the following.
Lemma 2.4. The total work load in the dense and sparse intervals of the schedule created by Algorithm PJ is at least 2m/(3β) times the length of all dense intervals.

Proof. Consider all dense and sparse intervals in the schedule created by Algorithm PJ corresponding to one job class J_k. There are in total n_k dense intervals and 1 sparse interval, each with length β^{k+1}. By Lemma 2.3 and the definition of job classes, we know that the total work load of the jobs in job class J_k is at least (2/3) n_k m β^k, which equals 2m/(3β) times the length of all dense intervals of job class J_k.
Using Lemma 2.4 we can connect the length of the online schedule with the load bound on the optimal offline schedule. This gives the necessary tools to prove the upper bound on the performance guarantee of online Algorithm PJ.
Theorem 2.5. The competitive ratio of Algorithm PJ is at most 7/2 + √10 (≈ 6.6623).
Proof. Let σ be an arbitrary sequence of jobs. We partition the schedule [0, C_{PJ}(σ)] created by the online Algorithm PJ into three parts: the first part B consists of the intervals in which big jobs are scheduled, the second part D consists of the dense intervals, and the third part S contains the sparse intervals.
Since part B contains only jobs with machine requirement m_j ≥ ⌈m/2⌉, the total work load in B is at least (m/2) · |B|. According to Lemma 2.4, the total work load in D and S is at least (2m/(3β)) · |D|. Since this work load also has to be scheduled in the optimal offline solution, we get min{m/2, 2m/(3β)} · (|B| + |D|) ≤ m · C*(σ). For β ≥ 4/3, this results in

|B| + |D| ≤ (3β/2) · C*(σ) .  (2.3)

To simplify the arguments for bounding |S|, we normalize the processing times of the jobs such that J_0 is the smallest job class, i.e. the smallest processing time of the small jobs is between 1 and β. Then |S_k| = β^{k+1}. Let k̄ be the largest k for which there is a sparse interval in the online schedule. Since there is at most one sparse interval for each job class J_k, the length of S is bounded by

|S| ≤ Σ_{k=0}^{k̄} |S_k| = Σ_{k=0}^{k̄} β^{k+1} = (β^{k̄+2} − β)/(β − 1) .
On the other hand, since interval S_{k̄} is non-empty, we know that there is a job in the sequence σ with processing time at least |S_{k̄}|/β = β^{k̄}. Thus, using the length bound we get

|S| ≤ (β²/(β − 1)) · C*(σ) .  (2.4)

Lemma 2.1, (2.3) and (2.4) lead to the following bound on the makespan of the schedule created by online Algorithm PJ:
C_{PJ}(σ) = |B| + |D| + |S| ≤ (3β/2 + β²/(β − 1)) · C*(σ) .

Choosing β = 1 + √10/5 (which is larger than 4/3), Algorithm PJ has a competitive ratio of at most 7/2 + √10 (≈ 6.6623).
It is interesting to note that for other values of β the analysis only results in bounds worse than 6.6623. However, defining big jobs as jobs with machine requirement at least ⌈αm⌉ results, for all α ∈ [10/(3(5 + √10)), 1/2] (≈ [0.4084, 0.5]), in 6.6623-competitiveness of PJ.
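The choice of β can be verified numerically: the expression 3β/2 + β²/(β − 1) from the proof of Theorem 2.5 is minimized exactly at β = 1 + √10/5. A quick check (the function name is ours):

```python
import math

def ratio_bound(beta):
    """The upper bound 3*beta/2 + beta**2 / (beta - 1) from Theorem 2.5."""
    return 3 * beta / 2 + beta ** 2 / (beta - 1)

beta_star = 1 + math.sqrt(10) / 5

# At beta*, the bound equals 7/2 + sqrt(10) ~ 6.6623.
assert abs(ratio_bound(beta_star) - (3.5 + math.sqrt(10))) < 1e-9
# Nearby values of beta (still >= 4/3) give strictly worse bounds.
assert ratio_bound(beta_star - 0.05) > ratio_bound(beta_star)
assert ratio_bound(beta_star + 0.05) > ratio_bound(beta_star)
```

Setting the derivative of ratio_bound to zero gives 5β² − 10β + 3 = 0, whose root larger than 1 is precisely 1 + √10/5.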
Note. In independent work, Ye et al. [80] obtain the 6.6623-competitive algorithm in the setting of online orthogonal strip packing. They also show that the analysis is tight, i.e. there exists an instance for which Algorithm PJ is no better than 6.6623-competitive.