Dimension Reduction and Relaxation of Johnson’s Method for Two Machines Flow Shop Scheduling Problem

Traditional dimensionality reduction methods are generally classified into Feature Extraction (FE) and Feature Selection (FS) approaches. Classical FE algorithms are further divided into linear and nonlinear algorithms. Linear algorithms such as Principal Component Analysis (PCA) project high-dimensional data onto a lower-dimensional space by linear transformations according to certain criteria. The central idea of PCA is to reduce the dimensionality of a data set consisting of a large number of variables. In this paper, PCA is used to reduce the dimension of flow shop scheduling problems. This mathematical procedure transforms a number of (possibly) correlated jobs into a smaller number of uncorrelated jobs called principal components, which are linear combinations of the original jobs. These components are determined so that, from the solution of the reduced problem, multiple solutions of the original high-dimensional problem can readily be obtained, or completely characterized, without actually listing the optimal solution(s). The results show that after fixing only some critical jobs at the beginning and end of the sequence using Johnson's method, the remaining jobs can be arranged in arbitrary order in the remaining gap without violating the optimality condition that guarantees minimum completion time. Accordingly, Johnson's method is relaxed by terminating the listing of jobs at the first/last available positions when the job with minimum processing time on either machine attains, for the first time, the highest processing time on the other machine. By terminating Johnson's algorithm at this early stage, the method saves the time otherwise needed for sequencing the jobs that can be left arbitrary. Allowing these jobs to be arranged in arbitrary order gives job operators sequencing freedom without affecting minimum completion time.
The results of the study were originally obtained for deterministic scheduling problems; they are further illustrated on test problems randomly generated from a uniform distribution U[a, b] with lower bound a and upper bound b and from a normal distribution N[μ, σ] with mean μ and standard deviation σ.


Introduction
The two machines flow shop scheduling problem has been considered a major problem in machine sequencing because it appears both independently and as a sub-problem in the 'n-Jobs, m-Machines Problem'. The criterion of optimality in a flow shop scheduling problem is usually specified as minimization of makespan, which is defined as the time gap between the beginning of the first job on the first machine and the finishing of the last job on the last machine, ensuring that all jobs are completed on all machines. The objective of minimizing makespan in the Two Machines Flow Shop Scheduling model is also known as Johnson's problem.
Johnson's algorithm is an exact solution method for the two-machine, one-way, no-passing scheduling problem, which serves as a basis for many heuristic algorithms. The rule orders the jobs completely, moving them from the waiting list into the optimal list by filling the first or the last available position based on the minimum operation times on the two machines, until only one free position in the optimal list and one last job in the waiting list remain. In this procedure, each of the n positions requires a scan of the remaining waiting list, which demands large computation time for large-size problems. To overcome this limitation, the current study identified a relaxation of Johnson's algorithm by developing an early stopover criterion, based on the fact that, after listing only some important jobs at the beginning and end of the optimal sequence using Johnson's method, it does not matter in what order the remaining jobs are operated as far as makespan is concerned. This criterion saves the time needed for further ordering of the remaining jobs in either direction (first/last available positions) and reduces the dimension of the problem. By allowing the remaining jobs to be arranged in arbitrary order, irrespective of Johnson's method and without violating the optimality condition, the study creates sequencing freedom for job operators to set priorities without affecting minimum completion time.

Literature Review
Johnson [1] produced a pioneering work in the machine sequencing literature, and great advancements have been made in the field after other researchers started to investigate solutions to many related problems. The studies discussed in this literature review are mainly concerned with relaxations oriented toward producing alternate optimal solutions different from the strict Johnson's rule. In this regard, an early work on the relaxation of Johnson's method for the Two Machines Flow Shop Scheduling Problem was that of Ikram [2]. That study produced alternate ways of performing jobs, different from the one specified by Johnson's method, without affecting minimum completion time, by interchanging two jobs at a time in the optimal Johnson's sequence subject to certain conditions. In [3] also, the concept of [2] was used to find alternate optimal solutions for the 'n-Jobs, 2-Machines Flow Shop Scheduling Problem with transportation time and equivalent job-block'. In [4] the idea was extended to the 'n-Jobs, 3-Machines Flow Shop Scheduling Problem' based on the work of [2]. These studies indicate increasing interest in alternate ways of performing the same set of jobs without affecting minimum completion time. Alternate sequences enable the assignment of priorities between jobs and give freedom to job operators. For those studies that generate alternate optimal sequences by interchanging two jobs at a time in the optimal Johnson's sequence, the total number of such alternate sequences is 2^k, where k is the number of all interchangeable pairs.
In the original paper [1], Johnson also solved the 'n-Jobs, 3-Machines Flow Shop Scheduling Problem', in which the processing order for all jobs on three machines A, B and C is A → B → C, for two particular cases in which all jobs J_i, i = 1, 2, 3, …, n, satisfy max{B} ≤ min{A} or max{B} ≤ min{C}. A few relaxations have been made to this condition. The same assumption was made in [4] in the formulation of their alternate solution for the three-machine problem. Conway, Maxwell and Miller [5] have shown that the same rule works if B is a non-bottleneck machine, i.e., a machine that can process any number of jobs at the same time. Maggu, Alam and Ikram [6] also developed an algorithm for a special type of 'n-Jobs, m-Machines Scheduling Problem' which is an extension of Johnson's sequencing rule.
The general 'n-Jobs, m-Machines Problem' becomes NP-complete [7] for all m ≥ 3 (it cannot be solved optimally in polynomial time), and Johnson's algorithm can be applied only in a few particular cases that obey certain conditions. The general 'n-Jobs, m-Machines' flow shop scheduling problems are NP-Hard, so exact optimization techniques are impractical for large-size problems. In other words, classical optimization methods such as branch and bound, dynamic programming, etc., can be used only for small-size problems. Problem size has been the main challenge for the development of solution methods for these problems, because the solution space in its original form grows combinatorially (of order n!) with the number of jobs n, which makes it difficult to solve the problems in polynomial time for large n. Therefore, large-size problems are solved by heuristic methods.
Many heuristic methods reduce the machines into two virtual machines and apply Johnson's algorithm based on some specific rules or decisions. Important early heuristic algorithms are due to Palmer [8], Gupta [9] and Campbell, Dudek, and Smith (the CDS heuristic) [10]. Another heuristic is due to Nawaz, Enscore, and Ham [11], and is known as the NEH heuristic. Both CDS and NEH are constructive heuristics; CDS, for instance, produces a sequence of at most m − 1 candidate solutions from which the best sequence is chosen. In these methods, all machines are first divided into two groups which are considered as two virtual machines, and the problem is solved by applying Johnson's algorithm. The HFC heuristic of Koulamas [12], on the other hand, is an improvement heuristic. An improvement heuristic starts with a given sequence and searches for improvement, but the computational effort is unpredictable. Improvement heuristics are usually based on generic methods such as neighborhood search. They also use Johnson's algorithm in a first phase, and then, by specific rules, construct better solutions, starting from an existing feasible solution, in a sequence of steps.
In another heuristic, the Rapid Access (RA) heuristic [13], two virtual machines are defined and, as in the CDS heuristic, weights are assigned, one for each virtual machine. Finally, the flow shop scheduling problem is solved by applying Johnson's algorithm. In [7] also, two variants of heuristic algorithms were developed to solve the classic flow shop scheduling problem. The first algorithm is a constructive heuristic, in which each job is placed in the schedule based on a greedy-type selection. The second algorithm modifies the construction of the schedule in a stochastic manner. In [14] also, genetic algorithms were used to solve the Two Machines Flow Shop Problem with the objective of minimizing makespan.
The multi-objective flow shop scheduling problems have been the subject of extensive studies. The majority of bicriteria flow shop investigations consider the combination of makespan with other performance measures [15][16][17][18][19]. In multi-objective problems, creating alternate solutions for the makespan objective by applying the results of this study will relax the other objectives and open more space for the applicability of multi-criteria decision making.
All the above works emphasize the importance of the Two Machines Flow Shop Scheduling Problem and the high reliance of solution methods on Johnson's algorithm. According to [7], even though various studies have suggested many approaches, it is difficult to find the simplest approach for obtaining an optimal sequence for the n-Jobs, m-Machines Flow Shop Scheduling Problems. Problem size has been the major challenge for all these heuristic methods. Researchers point out the need for scheduling algorithms that minimize makespan for the 'n-Jobs, m-Machines' flow shop scheduling problems with the simplest steps. However, little attempt has been made in previous studies to decrease problem size so that the exact and heuristic methods developed so far become applicable. Dimension reduction would create enough room for the application of these methods. Once the optimal sequence for the reduced problem is identified, it enables the complete characterization of all alternative optimal solutions of the original high-dimensional problem. This research therefore considers dimension reduction for the Two Machine Flow Shop Scheduling Problem an important first step to decrease problem size, while at the same time creating alternative ways of sequencing jobs that guarantee the non-increase of total elapsed time on the fictitious machines formed by reducing the machines into two virtual machines.

Problem Statement and Basic Assumptions
In this problem two machines (A and B) of high automation and unlimited buffer size work together in such a way that machine A is always available to start the next job as soon as it finishes the current one. Jobs finished on machine A are immediately transferred to the queue of machine B, and machine B operates the jobs in the same order in which they were executed on machine A. If there is no job in the queue, machine B has to wait until machine A finishes the current job. In this case an idle time occurs on machine B between finishing its previous job and machine A finishing its current one. The objective of the problem is therefore to minimize the sum of all these idle times on machine B over all jobs from start to end. The criterion of optimality in a flow shop scheduling problem is usually specified as minimization of makespan, defined as the time gap between the beginning of the first job on the first machine and the finishing of the last job on the last machine, ensuring that all jobs are completed on both machines. Many variants of the problem have evolved since its formulation. As an objective function, mean flowtime, completion time variance and total tardiness can also be used.
The results originally obtained in [1] are among the very first formal results in the theory of scheduling. The objective of minimizing makespan in the Two Machines Flow Shop Scheduling model is also known as Johnson's problem. Johnson's algorithm is an exact solution method for the two-machine, one-way, no-passing scheduling problem and serves as a basis for many heuristic algorithms [20]. The processing times of job J_i on machines A and B are denoted by a_i and b_i respectively for i = 1, 2, …, n; they are deterministically known, constant and positive, and include all the necessary auxiliary times involved in the technological process. The following important assumptions are made in the problem.
Assumption 1: A set of unrelated, multiple-operation jobs is available for processing at time zero. (Each job requires two operations, and each operation requires a different machine.)
Assumption 2: Both machines are continuously available.
Assumption 3: Only one operation is carried out on a machine at a time.
Assumption 4: Once an operation begins on a machine, it proceeds without interruption.
Assumption 5: Processing times are known in advance and are deterministic, constant and positive.
Assumption 6: Setup times for the operations are sequence-independent and are included in the processing times.
Assumption 7: The time required to move jobs from one machine to another is negligibly small.
Assumption 8: The same job sequence is maintained on each machine; in other words, no passing is allowed.
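Under these assumptions, the makespan of a fixed sequence follows a simple recursion: machine A is never idle, while machine B starts each job only after machine A has released it and B has finished the previous job. The following minimal Python sketch illustrates this; the function name and the (a_i, b_i) list layout are illustrative choices, not the paper's notation.

```python
def makespan(jobs):
    """Total elapsed time for a fixed job sequence on two machines A -> B.

    `jobs` is a list of (a_i, b_i) processing-time pairs, already in the
    order in which they will be run (the same order on both machines,
    per the no-passing assumption).
    """
    finish_a = 0  # time at which machine A finishes its current job
    finish_b = 0  # time at which machine B finishes its current job
    for a, b in jobs:
        finish_a += a                      # machine A is never idle
        start_b = max(finish_a, finish_b)  # B waits for the job and for itself
        finish_b = start_b + b
    return finish_b

# Example: B idles until A hands over the first job.
print(makespan([(3, 2), (1, 4)]))  # -> 9
```

Any idle time on machine B appears as the gap between `finish_b` and `finish_a + b` accumulated over the sequence.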

Algorithm: Johnson's Rule
Johnson's Rule: Job J_i = (a_i, b_i) precedes job J_j = (a_j, b_j) in an optimal sequence {J*_i; i = 1, 2, …, n} if min{a_i, b_j} ≤ min{a_j, b_i}. In practice, an optimal sequence is directly constructed with an adaptation of Johnson's Rule. The positions in the sequence are filled by a one-pass mechanism that, at each stage, identifies a job that should fill either the first [last] available position.
 Step 1 Examine the processing times a_i and b_i on machines A and B and find the smallest processing time among the unscheduled jobs (waiting list).
 Step 2a If the smallest processing time occurs on the first machine, place the corresponding job in the first available position of the sequence (optimal list). (Ties may be broken arbitrarily.) Go to Step 3.
 Step 2b If the smallest processing time occurs on the second machine, place the corresponding job in the last available position of the sequence (optimal list). (Ties may be broken arbitrarily.) Go to Step 3.
 Step 3 Remove the assigned job from consideration (waiting list) and return to Step 1 until all sequence positions are filled.
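The steps above can be sketched in Python as follows; `johnson_sequence` and the dict-based job representation are illustrative choices, not the paper's notation.

```python
def johnson_sequence(jobs):
    """Johnson's rule by filling first/last available positions.

    `jobs` maps a job id to its (a, b) processing-time pair; returns the
    job ids in an optimal order.  Ties are broken arbitrarily (here, by
    whichever job `min` happens to find first).
    """
    waiting = dict(jobs)       # Step 1 input: the waiting list
    n = len(waiting)
    seq = [None] * n           # the optimal list being filled
    first, last = 0, n - 1
    while waiting:
        # Step 1: smallest processing time over both machines.
        job, (a, b) = min(waiting.items(), key=lambda kv: min(kv[1]))
        if a <= b:
            seq[first] = job   # Step 2a: first available position
            first += 1
        else:
            seq[last] = job    # Step 2b: last available position
            last -= 1
        del waiting[job]       # Step 3: remove from the waiting list
    return seq

print(johnson_sequence({1: (3, 6), 2: (5, 2), 3: (1, 2)}))  # -> [3, 1, 2]
```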

Algorithm: Alternate formulation of Johnson's Rule
An alternative way to describe Johnson's Rule [21], which provides a different perspective on the structure of optimal schedules, is used for this study. In this formulation the problem is considered as a sequencing problem, put mathematically as P = {J_i = (a_i, b_i); i = 1, 2, 3, …, n}, where a_i and b_i are the processing times of job J_i on machines A and B respectively, with no passing of jobs on the two machines in the order A → B. The jobs in P are then partitioned into two disjoint clusters S_1 and S_2, where S_1 = {J_i = (a_i, b_i) ∈ P : a_i ≤ b_i; i = 1, 2, 3, …, n} and S_2 = {J_i = (a_i, b_i) ∈ P : a_i > b_i; i = 1, 2, 3, …, n}.
We call jobs in S_1 jobs of the first kind and jobs in S_2 jobs of the second kind. It is important to note that jobs in S_1 have longer (or at least equal) processing time on the second machine, while jobs in S_2 have strictly longer processing time on the first machine. Then, in the optimal Johnson's sequence, jobs in S_1 are first arranged in non-decreasing order of their processing times on the first machine, and then jobs in S_2 are arranged in non-increasing order of their processing times on the second machine. This approach is often helpful and easy to apply and implement; therefore, we follow this procedure to describe the optimal Johnson's sequence for the Two Machines Flow Shop Scheduling Problem.
 Step 1: Arrange the members of set S_1 in non-decreasing order of a_i to get an ordered set S*_1.
 Step 2: Arrange the members of set S_2 in non-increasing order of b_i to get an ordered set S*_2.
 Step 3: An optimal sequence is the ordered set S*_1 followed by the ordered set S*_2.
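A sketch of this sort-based formulation, assuming jobs are given as (id, a, b) triples (an illustrative layout, not the paper's):

```python
def johnson_by_partition(jobs):
    """Alternate formulation: partition, sort each part, concatenate.

    `jobs` is a list of (job_id, a, b) triples.  Jobs of the first kind
    (a <= b) are sorted by non-decreasing a; jobs of the second kind
    (a > b) by non-increasing b; the optimal sequence is the first list
    followed by the second.
    """
    first_kind = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    second_kind = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return [j[0] for j in first_kind + second_kind]

print(johnson_by_partition([(1, 3, 6), (2, 5, 2), (3, 1, 2)]))  # -> [3, 1, 2]
```

Both formulations produce an optimal sequence; the sort-based one makes the block structure (first kind, then second kind) explicit.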

Data Mining and Dimension Reduction
Real-world data are typically noisy, enormous in volume, and may originate from a jumble of heterogeneous sources [22]. Powerful and versatile tools are very much needed to automatically uncover valuable information from the tremendous amounts of data and to transform such data into organized knowledge. This necessity has led to the birth of data mining. Data mining is the process of discovering interesting patterns and knowledge from large amounts of data, after a closer investigation of attributes and data values. As a general technology, data mining can be applied to any kind of data as long as the data are meaningful for a target application.
State-of-the-art data mining tools could be employed for Flow Shop Scheduling Problems to overcome the limitations of the solution methods for large-size problems. Given the limited number of studies that have applied state-of-the-art dimension reduction methods to Flow Shop Scheduling Problems, this paper examines the Two Machines Flow Shop Scheduling Problem more closely because of its significant contribution to other Flow Shop Problems.
In the absence of real-world data for a typical study, generated data play an important role. For this study, data generated from a uniform distribution U[a, b] with lower bound a and upper bound b and from a normal distribution N[μ, σ²] with mean μ and variance σ² were used for different parameter values, and alternatives were analysed to describe a large-size problem in terms of a small number of parameters. The effort was to discover important features of Flow Shop Scheduling Problems. This knowledge discovery process involves a sequence of logical understanding of the basic features of the high-dimensional problem to extract a low-dimensional representation of its key features.
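A hypothetical generator for such test instances is sketched below. The parameter defaults are illustrative, not the values used in the study, and clipping normal draws below at 1 to keep processing times positive is an assumption this sketch adds.

```python
import random

def generate_jobs(n, dist="uniform", a=1, b=20, mu=10, sigma=3, seed=0):
    """Generate n random (a_i, b_i) processing-time pairs.

    Parameter names mirror the U[a, b] and N[mu, sigma] test problems
    described in the text; the ranges here are illustrative only.
    Normal draws are clipped below at 1 so that times stay positive.
    """
    rng = random.Random(seed)  # seeded for reproducible experiments
    if dist == "uniform":
        draw = lambda: rng.uniform(a, b)
    else:
        draw = lambda: max(1.0, rng.gauss(mu, sigma))
    return [(draw(), draw()) for _ in range(n)]
```

Instances of varying size n can then be fed to any of the sequencing procedures to compare behaviour across distributions.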
As was suggested in [22], data mining involves a sequence of procedures that require the planner's understanding of the problem at hand. We further elaborate these steps in the context in which they were applied in this study. Some of these procedures were applied in earlier studies of Flow Shop Scheduling Problems in a different context. For example, the first step, data integration (where multiple data items may be combined), was previously studied through equivalent job blocks. The concept of the equivalent job block was introduced in [23] in the theory of scheduling. In the context of this study, however, multiple jobs are, for simplicity, represented by a single representative job, which differs from the setting of equivalent job blocks.
In the second step of data mining, data selection, data relevant to the analysis task are retrieved from the database.
The subsequent steps of data mining were used contextually for Flow Shop Scheduling Problems in the proposed study, to fill the research gap in applying these techniques for dimension reduction.
In the third step, data transformation, data are transformed and consolidated into forms appropriate for mining. This step was used in this study to group jobs with identical processing times that hold equal priority in the optimal sequence. Thus it is enough to represent jobs with equal priority by a single representative job to reduce the dimension of the problem. This step was used in the principal component analysis (see Section 4.1).
The fourth step, the data mining step, is an essential process where intelligent methods are applied to extract data patterns and to identify which important patterns are exhibited. This step was used to develop the mini-max criteria (see Section 4.2) to reduce the dimension of jobs of the first kind (see Section 4.2.1) and jobs of the second kind (see Section 4.2.2).
The fifth step, the pattern evaluation step, consists of identifying the truly interesting patterns representing knowledge based on interestingness measures, and checking whether further conclusions could be reached about the high dimensional problem from its low dimensional representation. This step was carried out in this research by formulating the results in the form of theorems and giving analytic mathematical proofs (see Section 4.2).
In the sixth step, the knowledge presentation step, illustrations are made to confirm the findings, aiming to present the mined knowledge convincingly to other users. This involves organizing this study in the form of a publication with all the necessary background information. In particular, the illustrative example in Section 5 also plays this role.
In the next sub-sections the main findings of the study are organized as theorems and algorithms. In Section 4.1, Principal Component Analysis (PCA) is discussed in the sense of its application to dimension reduction of Flow Shop Scheduling Problems. In Section 4.2, further dimension reduction is carried out using the mini-max criteria. Here, two investigations are made independently for the two kinds of jobs discussed earlier.
For jobs with more processing time on the second machine, the early stopover criterion is reached when the job assigned to the first available position by the minimum-processing-time rule attains, for the first time, the highest processing time on the second machine among all jobs of the first kind (see Section 4.2.1). Similarly, for jobs with more processing time on the first machine, the early stopover criterion is reached when the job assigned to the last available position by the minimum-processing-time rule attains, for the first time, the highest processing time on the first machine among all jobs of the second kind (see Section 4.2.2). Finally, these results are formulated as algorithms. The first is a dimension reduction algorithm, combining the dimension reductions carried out in Section 4.1 and Section 4.2. The second is a relaxation of Johnson's algorithm, using the early stopover criteria achieved in Section 4.1 and Section 4.2 as termination criteria for Johnson's algorithm.

Dimension Reduction by Principal Component Analysis
The state-of-the-art Dimension Reduction (DR) methods are divided into projective methods and methods that model the manifold on which the data lie. Perhaps the simplest approach is to attempt to find low-dimensional projections that extract useful information from the data by maximizing a suitable objective function [25]. Cluster analysis is one of the major data analysis methods and is widely used in many practical applications. The purpose of clustering is to group together data points which are close to one another [26].
Let problem P = {J_i = (a_i, b_i); i = 1, 2, 3, …, n} be the original high-dimensional problem, where the notation J_i = (a_i, b_i) means job J_i has processing time a_i units on machine A and b_i units on machine B, and the aim is to find the optimal sequence S = {J*_i; i = 1, 2, 3, …, n} that minimizes the total elapsed time of operating all jobs in the same order on the two machines, uninterrupted, with no passing of jobs on the two machines in the order A → B.
Define the map φ on P by φ(J_i) = (1, a_i) if J_i ∈ S_1 and φ(J_i) = (2, b_i) if J_i ∈ S_2. This map specifies the kind of job and the corresponding processing time that is relevant for assigning the job to the optimal Johnson's sequence. Let the image of P under φ be given by φ(P) = {x : x = φ(J_i), J_i ∈ P}. We call the elements of φ(P) the Principal Components of P.
Then the inverse image of φ is the map φ⁻¹ : φ(P) → 2^P given by formula (3), φ⁻¹(x) = {J_i ∈ P : φ(J_i) = x}.
Equation (3) clusters all jobs into disjoint equivalence classes consisting of exactly those jobs in S_1 that have equal processing times on the first machine, or those jobs in S_2 that have equal processing times on the second machine. In the optimal Johnson's sequence the jobs in each cluster are arranged successively. Let there be k principal components. Then all jobs in the same cluster can be represented by a single job from the group, and the numbers of reserved positions for these jobs by n_1, n_2, …, n_k. In particular, if we choose to represent each cluster by the job that has the longest total processing time on the two machines, then a unique identifier is assigned to each cluster.
Thus, the original high-dimensional Flow Shop Scheduling Problem is expressed in terms of a low-dimensional sub-problem, from which the solution of the original Two Machines Flow Shop Scheduling Problem can be obtained readily by replacing each representative job by the block of jobs it corresponds to. More specifically, the original 'n-Job, 2-Machine Problem' is reduced to k clusters with n_1, n_2, …, n_k jobs respectively, where n_1 + n_2 + ⋯ + n_k = n. At this stage of dimension reduction there are at least n_1!·n_2!·…·n_k! alternative optimal sequences to this problem.
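The clustering described here can be sketched as follows, assuming jobs are given as a dict from job id to (a, b). The key (kind, relevant time) plays the role of the map φ as reconstructed above, and the choice of representative (longest total processing time) follows the text; the function name and return layout are illustrative.

```python
from collections import defaultdict

def reduce_by_components(jobs):
    """Cluster jobs by the key phi(J) = (kind, relevant processing time).

    `jobs` maps job ids to (a, b).  Jobs of the first kind (a <= b) share
    a cluster when their a-times are equal; jobs of the second kind
    (a > b) when their b-times are equal.  Each cluster is represented by
    the member with the longest total processing time.
    Returns {key: (representative_id, [member ids])}.
    """
    clusters = defaultdict(list)
    for jid, (a, b) in jobs.items():
        key = (1, a) if a <= b else (2, b)   # the principal component
        clusters[key].append(jid)
    return {
        key: (max(members, key=lambda j: sum(jobs[j])), members)
        for key, members in clusters.items()
    }
```

Sequencing the representatives and then expanding each back into its block (in any internal order) yields the n_1!·n_2!·…·n_k! alternative optima noted in the text.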

Dimension Reduction of Jobs by Means of Mini-Max Criteria
The traditional and the state-of-the-art dimension reduction methods can be generally classified into Feature Extraction (FE) and Feature Selection (FS) approaches. FE algorithms aim to extract features by projecting the original high-dimensional data to a lower-dimensional space through algebraic transformations [26]. The classical FE algorithms are generally classified into linear and nonlinear approaches. In contrast to FE algorithms, FS algorithms have been widely used on large-scale data and aim at finding a subset of the most representative features according to some objective function. It is optional to assume that the dimension reduction by the PCA method discussed in the previous section has already been carried out for the problem before we apply the Mini-Max Criteria in this section.

Principle of Mathematical Induction (PMI)
We use the Principle of Mathematical Induction (PMI) to prove the results of the next section. It states as follows: Let P(n) be a property that depends on a natural number n and satisfies the following two conditions: i. P(n_0) holds true, and ii. P(n + 1) holds true whenever P(n) holds true. Then P(n) holds true for all natural numbers n ≥ n_0. Moreover, these jobs do not create any idle time for machine B, and the completion time of job J*_{r+m} on machine B is given by formula (5) for r ≤ r + m ≤ n_1.

Dimension Reduction of Jobs with a_i ≤ b_i
Consequently, the completion time of all jobs of the first kind on machine B is given by formula (6), irrespective of the order of operation of the jobs J*_{r+1} → J*_{r+2} → J*_{r+3} → ⋯ → J*_{n_1}.
Let us consider the starting and finishing times of the jobs in {J_{r+m}; m = 1, 2, …, n_1 − r} on the two machines A and B. It is important to note that, at any stage of machine sequencing, if machine B finishes job J_m at t = T(J_m) and machine A finishes the next job J_{m+1} at t = T_1(J_{m+1}), then machine B starts job J_{m+1} at max{T_1(J_{m+1}), T(J_m)}.
The right side of (13) is the finishing time of the preceding job on machine B. Thus, the theorem holds true for m = 1.
The right side of (19) is the finishing time of job J_{r+1} on machine B. Hence machine B starts job J_{r+2} at the larger of the two times. Thus, the theorem holds true for m = 2.
Then machine A finishes job J_{r+m+1} at time t = T_1(J_{r+m+1}) given by equation (24): T_1(J_{r+m+1}) = T_1(J*_r) + a_{r+1} + a_{r+2} + ⋯ + a_{r+m+1}. Using equations (7) onward, the right side of (29) is the finishing time of job J_{r+m} on machine B; hence machine B starts job J_{r+m+1} at the larger of the two times. The optimal sequence is also obtained by one of these permutations, and the total elapsed time in the optimal sequence to finish the n_1 jobs of the first kind is T(J*_r) + b*_{r+1} + b*_{r+2} + ⋯ + b*_{n_1}. If, further, we keep all jobs in S_2 fixed in the optimal sequence, the total elapsed time to finish all jobs with the above freedom will equal the optimal value. Hence any sequence obtained by the above sequencing freedom is optimal. Since we are free to arrange the jobs J*_{r+1}, J*_{r+2}, J*_{r+3}, …, J*_{n_1} in arbitrary order, the above machine sequencing freedom creates (n_1 − r)! sequences, all of which are optimal. For reference, we call the job J*_r in the above theorem formulation the minimal job of {J*_i; i = 1, 2, 3, …, n} = J*_1 → J*_2 → J*_3 → ⋯ → J*_n, and we call the block of jobs J*_{r+1} → J*_{r+2} → J*_{r+3} → ⋯ → J*_{n_1} the free jobs of the first kind.
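The sequencing freedom claimed here can be checked numerically on a small hypothetical instance: in the data below (illustrative, not from the paper), the minimal job (1, 9) has the largest b among jobs of the first kind, so every permutation of the remaining first-kind jobs should yield the same makespan.

```python
from itertools import permutations

def makespan(seq):
    """Completion-time recursion for a fixed sequence of (a, b) jobs."""
    fa = fb = 0
    for a, b in seq:
        fa += a
        fb = max(fa, fb) + b
    return fb

# Hypothetical data: minimal job, free jobs of the first kind,
# and second-kind jobs kept fixed at the end in Johnson order.
minimal = (1, 9)
free = [(2, 3), (3, 5), (4, 8)]
second_kind = [(7, 4), (6, 2)]

values = {makespan([minimal] + list(p) + second_kind)
          for p in permutations(free)}
assert len(values) == 1  # every permutation of the free jobs is optimal
```

Since machine B never idles after the minimal job within the first-kind block, its finishing time there is the minimal job's completion plus the sum of the free jobs' b-times, independent of their order.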

Theorem 2
Let the 'n-Jobs, 2-Machines' sequencing problem P = {J_i = (a_i, b_i); i = 1, 2, 3, …, n} be in the reduced form after dimension reduction using PCA has been performed, where the notation J_i = (a_i, b_i) means a_i and b_i are the processing times of job J_i on machines A and B respectively. More specifically, the completion time of the job J*_n on machine B is independent of the order of the jobs in between J*_{n_1} and J*_n. We use induction on the number of jobs in between n_1 and n. 1. We first show the theorem holds for the case when there is no job in between n_1 and n, i.e. n − 1 = n_1. In this case, machine A starts job J*_n at T_1(J*_{n_1}) and completes it at T_1(J*_n), given by equation (37).
Thus machine B starts job J*_n at the larger of the two times, and the theorem holds true for the case n − 1 = n_1. 2. We next show the theorem holds for the case when there is only one job in between n_1 and n, i.e. n − 2 = n_1. In this case, machine A starts job J_{n_1+1} at T_1(J*_{n_1}) and completes it at T_1(J_{n_1+1}) = T_1(J*_{n_1}) + a_{n_1+1}. Thus the formula works for m = 1, and the theorem holds for the case n − 2 = n_1.

Induction assumption
3. Suppose for any m such that n_1 + 1 ≤ n − m ≤ n, the finishing time of the job J*_n on machine B is independent of the order of the jobs J_{n−m}, …, J_{n−2}, J_{n−1} and is given by equation (50). Hence we have shown that the theorem also holds for m + 1. Thus, by the principle of mathematical induction, the theorem holds for all m such that n_1 + 1 ≤ n − m ≤ n − 1. The resulting expression is symmetric with respect to the jobs in between, and so it is independent of the order of operations. Thus, the theorem is proved.
Since the jobs J_{n_1+1}, …, J_{n−2}, J_{n−1} are permutations of the jobs J*_{n_1+1}, …, J*_{n−2}, J*_{n−1}, the optimal sequence is also obtained by one of these permutations, and the total elapsed time in the optimal sequence to finish the jobs, T(J*_n), given by formula (69), reduces to formula (70).
If, further, we keep all jobs in S_1 fixed in the optimal sequence, the total elapsed time to finish all jobs with the above freedom will equal the optimal value. Hence any sequence obtained by the above sequencing freedom is optimal. Since we are free to arrange the jobs J*_{n_1+1} → ⋯ → J*_{n−2} → J*_{n−1} in arbitrary order, the above machine sequencing freedom creates (n − n_1 − 1)! sequences, all of which are optimal. For reference, we call the job J*_n in the above formulation the maximal job of {J*_i; i = 1, 2, 3, …, n} = J*_1 → J*_2 → J*_3 → ⋯ → J*_n, and we call the block of jobs J*_{n_1+1} → ⋯ → J*_{n−3} → J*_{n−2} → J*_{n−1} the free jobs of the second kind.

Concluding Remark
Combining the results of Section 4.2.1 and Section 4.2.2 yields formula (72). At this point, looking at formula (72) only, we caution against concluding that the union of the free jobs of the first and second kind could also be arranged in arbitrary order, because there are a few particular cases for which this conclusion does not hold true. Thus, we need to compute the sequence-dependent starting and completion times of the remaining jobs J*_{r+1} → J*_{r+2} → J*_{r+3} → ⋯ → J*_{n−1} on the two machines to find the total elapsed time.

Algorithm: Dimension Reduction
Thus we have proved the validity of the following algorithm. Let problem P = {J_i = (a_i, b_i); i = 1, 2, 3, …, n} with no passing of jobs on the two machines in the order A → B be given, where a_i and b_i are the processing times of job J_i on machines A and B respectively, for i = 1, 2, 3, …, n.  Step 8 Therefore, the reduced problem at the end of the second phase of dimension reduction becomes sequencing the jobs in the reduced set.

Algorithm: Relaxation of Johnson's Algorithm
Johnson's algorithm is an exact solution method for the two-machine, one-way, no-passing scheduling problem, and it serves as the basis for many heuristic algorithms. The rule produces a complete ordering of the jobs by filling the first or the last available position, based on the minimum operation times on the two machines, from the waiting list until only one free position and one last unassigned job remain; a complete enumeration, by contrast, would require comparing $n!$ sequences to obtain the optimal one. To reduce computation time, the current study identified a relaxation of Johnson's algorithm by developing an early stopping criterion, based on the fact that after listing only some critical jobs at the beginning and end of the optimal sequence using Johnson's procedure, the order in which the remaining jobs are operated does not affect the makespan.

 Step 2 (Johnson's algorithm main)
Examine the columns of $A_j$ and $B_j$ for the processing times on machines A and B and find the smallest processing time among the unscheduled jobs (the waiting list). Apply Johnson's algorithm on $P$ and remove each scheduled job from $P$, waiting list 1 and waiting list 2 until either the minimal job or the maximal job is assigned. Go to Step 3.

 Step 3a (Mini-max criterion 1)
If the minimal job is scheduled, then remove all unscheduled jobs in waiting list 1 from $P$ and terminate this step. Go to Step 2.

 Step 3b (Mini-max criterion 2)
If the maximal job is scheduled, then remove all unscheduled jobs in waiting list 2 from $P$ and terminate this step. Go to Step 2.

 Step 4 (Relaxation of Johnson's algorithm)
If both the minimal job and the maximal job are assigned, then terminate Johnson's algorithm main. Go to Step 5.

 Step 5a
If waiting list 1 is non-empty, then choose at random a job in waiting list 1, assign it to the first available position in the sequence, and remove it from waiting list 1. Repeat this step until waiting list 1 is empty.

 Step 5b
If waiting list 2 is non-empty, then choose at random a job in waiting list 2, assign it to the last available position in the sequence, and remove it from waiting list 2. Repeat this step until waiting list 2 is empty.

 Step 6
Repeat Steps 5a and 5b until $P$, waiting list 1 and waiting list 2 are all empty.

In the relaxation algorithm above, we did not alter Johnson's algorithm except for the termination criteria. Thus, ties for jobs with equal processing times on the two machines may be broken arbitrarily.
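The relaxed procedure can be sketched compactly as follows (Python; the function and variable names are ours, and the five-job instance is hypothetical). It classifies jobs into the two waiting lists, applies the two mini-max stopping criteria, and leaves the free jobs of each kind in random order:

```python
import random

def makespan(jobs):
    """Two-machine flow shop makespan for (A_j, B_j) pairs."""
    finish_a = finish_b = 0
    for a, b in jobs:
        finish_a += a
        finish_b = max(finish_a, finish_b) + b
    return finish_b

def relaxed_johnson(jobs, rng=random):
    """Return an optimal sequence, shuffling the free jobs of both kinds."""
    # Waiting list 1: first-kind jobs (A <= B), ascending A (front of sequence).
    list1 = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    # Waiting list 2: second-kind jobs (A > B), descending B (back of sequence).
    list2 = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: j[1], reverse=True)

    # Mini-max criterion 1: fix front jobs up to the first one whose B attains
    # the maximum B among first-kind jobs (the minimal job); the rest are free.
    head, free1 = list1, []
    if list1:
        max_b = max(b for _, b in list1)
        stop = next(i for i, (_, b) in enumerate(list1) if b == max_b)
        head, free1 = list1[:stop + 1], list1[stop + 1:]

    # Mini-max criterion 2: fix tail jobs from the last one whose A attains the
    # maximum A among second-kind jobs (the maximal job); earlier ones are free.
    tail, free2 = list2, []
    if list2:
        max_a = max(a for a, _ in list2)
        stop = max(i for i, (a, _) in enumerate(list2) if a == max_a)
        free2, tail = list2[:stop], list2[stop:]

    rng.shuffle(free1)   # Steps 5a/5b: arbitrary order is still optimal
    rng.shuffle(free2)
    return head + free1 + free2 + tail

jobs = [(1, 5), (2, 9), (3, 7), (4, 6), (9, 2)]
seq = relaxed_johnson(jobs)
print(seq, makespan(seq))
```

Note that the free jobs of the first kind are kept ahead of those of the second kind, in line with the concluding remark above that the two blocks cannot in general be intermixed.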

Illustrative Example
In the following example, a two-machine flow shop sequencing problem of 100 jobs was considered, with jobs of the first kind indexed before jobs of the second kind. Next, all jobs of the first kind were selected and sorted in ascending order of the values in column 4, arranging them in non-decreasing order of processing time on the first machine. Similarly, all jobs of the second kind were selected and sorted in descending order of the values in column 5, arranging them in non-increasing order of processing time on the second machine. The resulting sequence was therefore an optimal Johnson's sequence, and the corresponding sequence positions of all jobs were entered in column 1, from beginning to end. Table 1 below gives the results.
The formulas for starting and completion of jobs on the first machine A were entered in columns 7 and 8. Similarly, the formulas for starting and completion of jobs on second machine B were entered in columns 9 and 10. The formula to calculate the idle time (slack time) due to each job was entered in column 11 so that the formulas automatically run for any other permutations of the jobs.
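In our notation (writing $S$ for start times, $C$ for completion times, and $I_j$ for the idle time on machine B ahead of the $j$-th job; the symbols are ours), these spreadsheet formulas amount to the recurrence

```latex
\begin{aligned}
S_{A,j} &= C_{A,j-1}, & C_{A,j} &= S_{A,j} + A_j,\\
S_{B,j} &= \max\bigl(C_{A,j},\, C_{B,j-1}\bigr), & C_{B,j} &= S_{B,j} + B_j,\\
I_j &= S_{B,j} - C_{B,j-1}, & C_{A,0} &= C_{B,0} = 0,
\end{aligned}
```

with the total elapsed time given by $C_{B,n}$, the completion time of the last job on machine B.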
After identifying the optimal Johnson's sequence and applying the mini-max criteria discussed earlier, the maximum processing time on the second machine over all jobs of the first kind was identified to be 65, and its first occurrence was at job $J^*_8$ in the optimal Johnson's sequence (the minimal job). Similarly, the maximum processing time on the first machine over all jobs of the second kind was identified to be 63, and its last occurrence was at job $J^*_{82}$ in the optimal Johnson's sequence (the maximal job).
In the next step, random numbers were used to generate multiple alternate optimal sequences and to verify the findings of this study. In column 12, a new sequence key was defined for every job as follows. For the jobs up to and including the minimal job, the same order as in the optimal Johnson's sequence was maintained; likewise for the jobs from the maximal job onward. For each free job of the first kind, a random number between 0 and 1 was added to the position number of the minimal job, and for each free job of the second kind a random number between 0 and 1 was subtracted from the position number of the maximal job. All jobs were then sorted in increasing order of the values in column 12, and the resulting sequence was recorded in column 13 to obtain an alternate optimal sequence. Comparisons were then made between Johnson's sequence in Table 1 and the alternate sequence in Table 2.

It remains to verify Theorems 1 and 2. We show this by sorting the jobs of the optimal Johnson's sequence in ascending order of the values assigned in column 12 and comparing the finishing times of the jobs of the first kind as well as of the maximal job. To do this, we copied all values in the bold cells for Johnson's sequence (Table 1) and sorted the jobs in ascending order of column 12. This gives a different sequence of jobs. To check that it is also optimal, we compared the values in the bold cells with those recorded for Johnson's sequence (Table 1); in particular, we checked the total elapsed time, the finishing time of job $J^*_{100}$, after each "sort ascending on column 12" instruction. It can easily be verified that Table 2 also gives an optimal sequence irrespective of the order of the free jobs of the first kind and of the free jobs of the second kind when they are scheduled together. Thus, the results of the two theorems were verified with this example.
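The column-12 construction can be sketched as follows (Python; the five-job instance and the positions of the minimal and maximal jobs are hypothetical, standing in for the paper's 100-job table). Fixed jobs keep their integer Johnson position as the key; free jobs of the first kind get the minimal job's position plus a uniform random number in (0, 1), and free jobs of the second kind get the maximal job's position minus one; sorting on the keys yields an alternate optimal sequence:

```python
import random

def makespan(jobs):
    finish_a = finish_b = 0
    for a, b in jobs:
        finish_a += a
        finish_b = max(finish_a, finish_b) + b
    return finish_b

# Hypothetical Johnson-ordered instance: position 2 holds the minimal job and
# position 5 the maximal job, so positions 3 and 4 hold free jobs (first kind).
johnson = [(1, 5), (2, 9), (3, 7), (4, 6), (9, 2)]
k_min, k_max = 2, 5
free_first, free_second = {3, 4}, set()     # positions of free jobs of each kind

keys = []
for pos, job in enumerate(johnson, start=1):
    if pos in free_first:
        key = k_min + random.random()       # column 12: k_min + U(0, 1)
    elif pos in free_second:
        key = k_max - random.random()       # column 12: k_max - U(0, 1)
    else:
        key = pos                           # fixed jobs keep their position
    keys.append((key, job))

alternate = [job for _, job in sorted(keys, key=lambda kj: kj[0])]
print(alternate, makespan(alternate))   # same makespan as Johnson's sequence
```

Re-running the block regenerates the random keys and thus samples a fresh alternate optimal sequence each time.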
Our final conclusion about dimension reduction is made with reference to the above example. At the termination of Johnson's algorithm, the problem size was reduced to those jobs that hold a fixed position in all the alternate optimal sequences. In the above example, the problem size was reduced from 100 to only 29 jobs (8 jobs at the beginning, 19 jobs at the end, and one representative job for each of the two kinds of free jobs). This is a significant dimension reduction at very low computational cost, and may be expressed as a 71% decrease in problem size. There are at least $15! \times 57!$ alternate optimal sequences for the illustrative example above obtained by this procedure alone.
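The figures quoted above follow directly from the example's block sizes; a quick arithmetic check (Python):

```python
import math

n_jobs = 100
fixed_head, fixed_tail = 8, 19   # jobs fixed at the beginning and at the end
representatives = 2              # one stand-in job per kind of free block

reduced = fixed_head + fixed_tail + representatives
percent_decrease = 100 * (n_jobs - reduced) // n_jobs
print(reduced, percent_decrease)         # 29 jobs remain, a 71% decrease

# At least 15! * 57! alternate optima arise from the 15 free jobs of the
# first kind and the 57 free jobs of the second kind.
alternates = math.factorial(15) * math.factorial(57)
print(alternates)
```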

Summary, Conclusion and Recommendation
The purpose of this study was to apply dimension reduction methods to Flow Shop Scheduling Problems in order to decrease problem size. The development of solution methods for the 'n-jobs, m-machines' Flow Shop Scheduling Problem has been limited by the number of jobs n and the number of machines m. Due to these difficulties, solutions have been developed either for a small number of jobs or for a small number of machines. To make solution methods applicable to a large number of jobs, it was important to cluster jobs into principal components by defining a projective mapping.
This removes the redundant information in the original problem. Solution methods are then needed only for the targeted jobs, and once the sequence positions of the targeted jobs in the optimal sequence are identified, a number of alternate optimal solutions can be obtained by simple enumeration techniques. Alternate solutions give sequencing freedom to job operators to decide on the relative priority of different jobs. In the optimal sequence it is not necessary to list all the jobs in order; rather, a few targeted jobs at the beginning and end of the sequence completely determine the completion time. More specifically, the first occurrences of the two jobs, at the beginning and end of the optimal sequence, whose processing time on one machine attains the highest processing time on the other machine are very important, because they determine the point beyond which further assignment of jobs to the remaining sequence positions no longer matters as far as the makespan is concerned. Therefore, Johnson's method was relaxed by terminating the listing of jobs at the first available positions when the job selected by the minimum-processing-time criterion on the first machine attains the highest processing time on the second machine for the first time, and by terminating the listing of jobs at the last available positions when the job selected by the minimum-processing-time criterion on the second machine attains the highest processing time on the first machine for the first time. The remaining jobs can be arranged in any convenient way in the remaining gaps without affecting the minimum completion time.