Limited Search Space-Based Algorithm for Dual Resource Constrained Scheduling Problem with Multilevel Product Structure

Abstract: This study addresses the dual resource constrained flexible job shop scheduling problem (DRCFJSP) with a multilevel product structure. The DRCFJSP is strongly NP-hard, and an efficient algorithm is essential for solving it. In this study, we propose an algorithm for the DRCFJSP with a multilevel product structure to minimize the lateness, makespan, and deviation of the workload with preemptive priorities. To efficiently solve the problem within a limited time, the search space is limited based on the possible start and end times, and the focus is placed on intensification rather than diversification, which helps the algorithm spend more time finding an optimal solution in a promising region of the solution space. The performance of the proposed algorithm is compared with those of a genetic algorithm (GA) and a hybrid genetic algorithm with variable neighborhood search (GA-VNS). The numerical experiments demonstrate that the strategy of limiting the search space is effective for large and complex problems.


Introduction
Several changes have occurred with the onset of the mass customization era following the traditional mass production era. Additionally, multipurpose machines and manufacturing processes have been adapted to flexible production systems that produce small batches [1]. In response to this trend, scheduling studies have focused on flexible manufacturing processes and have actively been conducted to solve the flexible job-shop scheduling problem (FJSP). In FJSP research, various constraints, such as sequence-dependent setup times [2][3][4], the learning effect [5], and the dual resource constraint [6][7][8], are considered depending on the manufacturing environment. This study deals with the dual resource constrained flexible job-shop scheduling problem (DRCFJSP) under the consideration of the workers' skill levels for machines and a multilevel product structure (MLPS).
Research on the complexity of processes has been conducted, but little research has reflected the complexity of product structures. The majority of the literature assumes a product structure composed of simple sequential operations for a job, which is very different from the bill of materials (BOM) structure widely used in manufacturing industries, especially for products requiring assembly processes. In several advanced planning and scheduling (APS) studies, product complexity was reflected through the concept of the MLPS. The MLPS is a hierarchical tree that expresses the product structure and is essential for products requiring assembly processes. The top node of the tree is a final product, and each child node represents a component or part of the product. With the MLPS, the operations for all the child nodes associated with a parent node must be completed before the parent node can be processed; this precedence constraint enlarges the problem, and the resulting increase in computation time weakens the algorithm's performance. Previous studies on the DRCFJSP were conducted by mitigating some constraints, such as worker skill, batch size, and complex product structure. In particular, the majority of the studies have developed algorithms for a simple product structure, and the metaheuristic algorithms demonstrated good performances. In this study, to deal with more complex problems with additional constraints, we propose a new local search algorithm that identifies a good search space and focuses on the intensification of a promising region to compensate for the cost of diversification. In addition, the simple exchange of operation sequences used in most scheduling algorithms requires, under the MLPS, a solution repair process to maintain feasibility. This results in heavy modification of the solution and increased computation time.
Therefore, rather than using the existing operation sequence representation, we use priority rules in a time-based integrated initial solution algorithm to create a reliable initial solution, and we derive an optimal schedule through a local search algorithm that limits the search space. To the best of our knowledge, our study is the first to propose an algorithm to solve the DRCFJSP that reflects the MLPS.
The remainder of this paper is organized as follows. Section 2 describes the environment and assumptions for the problem. Section 3 explains the proposed algorithm. In Section 4, we discuss the results based on the numerical experiments. Finally, some insights are discussed along with the conclusion and future research directions in Section 5.

Problem Description
The DRCFJSP can be applied in various industries, including the automobile industry, equipment industry [6], and electromechanical industry [17]. We consider the DRCFJSP with an MLPS to deal with the scheduling problem of manufacturing systems with assembly lines. There is a set of orders, O = {O1, O2, ..., On}; a set of final products, F = {F1, F2, ..., Fl}; a set of machines, M = {M1, M2, ..., Mm}; and a set of workers, W = {W1, W2, ..., Wk}. An order, Oi, denotes a production request for the final product, Fa, within the due date, DDi. The MLPS is a hierarchical tree that expresses the BOM structure for a final product and its subcomponents. To produce each upper element, various types of child components are required. In Figure 1, the numbers on the arcs connecting the nodes represent the number of child components required to produce the parent node. Figure 1 illustrates an example of the MLPS for a final product, F1, from Chen et al. [9]. F1 is produced by assembling one component S1 and two C1s, and C3 requires three successive processes, namely, P1, P2, and P3. When more than one process is required to produce a component, the process name is written along with the component name in the node. For example, in Figure 1, C3P1, C3P2, and C3P3 imply that the component C3 is processed in P1, P2, and P3, respectively. Several constraints are considered for the environment of the DRCFJSP with an MLPS. First, the MLPS must be considered to determine the schedule, i.e., the sequence of operations must follow the MLPS. For example, in Figure 1, the operations for C1 and C2 must be executed before S1. Second, each item and subitem can be processed only on its designated machines, and each worker can work on all machines. The processing time of an item depends on the assigned machine and the skill level of the worker.
Third, no machine or worker can perform more than one operation simultaneously. The same type of item cannot be processed on multiple machines at the same time. Finally, the production quantity is determined based on the lot size and the total demand: it must cover the demand and must be an integer multiple of the lot size.
In addition to the constraints, the following assumptions are made for the parameter values.

• The raw materials required for the operation of the items are sufficient.

• The operation setup time is small enough to be ignored.

• The lot size of each operation is fixed.

• There is a difference in the skill level of each worker according to the machine; however, the improvement in the worker's skill level due to the learning effect is small enough to be ignored.
This study aims to minimize the lateness, the makespan, and the maximum deviation of the workload among the machines. Among these objective functions, lateness minimization has the highest priority, followed by makespan minimization and workload balance; an objective with a higher priority always takes precedence over those below it. Equation (1) represents the objective function for lateness. Delays in delivery are among the most important factors in a manufacturing environment, because frequent delivery violations and lateness cause financial damage by lowering corporate trust, resulting in a long-term decline in orders. Equation (2) minimizes the makespan, which is used in many scheduling studies as a measure of process productivity. Equation (3) represents the objective function for minimizing the deviation of workload between machines and is used to prevent workload imbalance. The notations used in this paper are summarized in Table A1.

(G1) min (total lateness)
(G2) min (makespan)
(G3) min (maximum workload deviation between machines)
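The preemptive (lexicographic) priority among the three objectives can be realized by comparing objective vectors element by element. A minimal sketch, assuming each schedule is summarized by its (lateness, makespan, workload deviation) values; the function names are illustrative, not from the paper:

```python
def objective_vector(lateness, makespan, workload_dev):
    # Python compares tuples element-wise, so lateness dominates makespan,
    # which in turn dominates the workload deviation, matching G1 > G2 > G3.
    return (lateness, makespan, workload_dev)

def better(a, b):
    """True if objective values a = (G1, G2, G3) are preferred over b."""
    return objective_vector(*a) < objective_vector(*b)
```

For example, a schedule with zero lateness is preferred over any late schedule, regardless of makespan.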

Algorithm Process
Considering the MLPS, the DRCFJSP is difficult to solve because of many complicated constraints, especially the complex product structure and dual resources. The development of an algorithm tailored to the problem is very important for finding an optimal solution within a limited time. The rationale behind the proposed algorithm is that spending more time within a good solution space is better than randomly searching the entire solution space. That is, we want to begin with a good initial solution and not spend much time checking the feasibility of solutions generated by the local search algorithm. Therefore, we propose a time-based integrated initial solution algorithm in Section 3.2 and a local search algorithm in Section 3.3. Figure 2a is a flow chart of the proposed algorithm, and Figure 2b,c shows GA and GA-VNS, respectively. The proposed algorithm has three unique features compared to GA and GA-VNS. First, the time-based integrated initial solution algorithm is used to find a good initial solution. In contrast to randomly selecting an initial solution, the three priority rules improve the performance of the algorithm and reduce the computation time. Second, the limited search space algorithm is used to efficiently find candidate solutions. A problem with many constraints has a large infeasible region, and it is not easy to generate candidate solutions that differ little from the existing solution while maintaining feasibility. Based on the concepts of EPST and LPST, which guarantee solution feasibility, we spend more time generating candidate solutions within the limited search space. Third, we do not use a mutation operator, so as to focus on intensification. A mutation operator can be introduced for diversification, but it requires additional computation time.

Initial Solution
This section explains the time-based integrated initial solution algorithm. Using this algorithm, an initial solution is generated by assigning appropriate workers and operations to the available machines over time based on several priority rules. Before applying the priority rules, the operations are generated based on the inventory level and batch size as follows. First, the required production quantity is determined by comparing the order quantity and initial inventory level. Second, the operation quantity is determined as a multiple of the batch size at the time bucket with a positive production quantity. Third, the production quantity at the next time bucket is determined when the operation quantity is greater than the production quantity. The second and third steps are repeated as needed, and the same process is applied to the subitems according to the MLPS.
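The second step above, determining the operation quantity as the smallest multiple of the batch size that covers the net requirement, can be sketched as follows; the function and argument names are illustrative assumptions, not from the paper:

```python
import math

def operation_quantity(order_qty, inventory, lot_size):
    """Smallest multiple of lot_size covering the net requirement.

    Step 1: net requirement = order quantity minus initial inventory.
    Step 2: round the net requirement up to a multiple of the lot size.
    """
    net = max(0, order_qty - inventory)
    if net == 0:
        return 0  # inventory already covers the order
    return math.ceil(net / lot_size) * lot_size
```

With a lot size of 30 (as in the experiments of Section 4), an order of 100 units against 10 units of inventory yields a net requirement of 90, which is already a multiple of the lot size.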
After generating the list of operations, three types of priority rules are applied for the initial assignment: the operation rule, the machine rule, and the worker rule. First, in the case of the operation rule, a higher priority is randomly assigned to the orders that need to be fulfilled within a day, and a lower priority is assigned to the other orders. Second, according to the machine rule, an available machine with a smaller number of executable items has a higher priority. Figure 3 illustrates an example of the machine rule. There are three machines (M1, M2, and M3) and five items (I1,1,1, I1,1,2, I1,1,3, I2,0,1, and I3,0,1). All five items are executable on M1; I1,1,1, I1,1,2, and I2,0,1 are executable on M2; and M3 only produces item I1,1,1. Therefore, the numbers of executable items of M1, M2, and M3 are 5, 3, and 1, respectively. According to the machine rule, M3 has the highest priority; thus, item I1,1,1 is assigned to M3, as depicted in Figure 3a. Then, M2 becomes the available machine with the smallest number of executable items; thus, an item is assigned to M2. In this manner, all the items are assigned to the machines. In contrast, Figure 3b illustrates an example where an available machine with a larger number of executable items has a higher priority. In this case, most of the items are assigned to M1, and M3 becomes idle. This example demonstrates that the proposed machine rule is highly likely to derive a better solution in terms of the idle time, makespan, and workload of the machines. Finally, the worker rule selects the worker with the highest skill for a machine among the workers available at the beginning of production. To efficiently derive an initial solution, an available machine is selected based on the machine rule, and an appropriate worker with the highest priority is selected. Then, the item with the highest priority is assigned to the selected machine.
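The machine rule amounts to a least-flexible-machine-first selection. A minimal sketch, assuming machine compatibility is given as a mapping from machine to the set of items it can process (names are illustrative):

```python
def pick_machine(available_machines, executable_items):
    """Machine rule sketch: among the available machines, prefer the one
    that can process the fewest item types, so flexible machines are kept
    free for items that have no alternative."""
    return min(available_machines, key=lambda m: len(executable_items[m]))
```

Reproducing the Figure 3 example, where M1, M2, and M3 can process 5, 3, and 1 items, the rule selects M3 first.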
Figures 4 and A1 depict the flowchart and pseudocode for the time-based integrated initial solution algorithm.

Local Search
In this section, we propose a local search algorithm, named the limited search space (LSS) algorithm, to avoid fluctuation of the solution performance. Candidate solutions are generated through the exchange of operations between different machines. First, after randomly selecting an operation, a list of exchangeable operations is generated by checking the machine compatibility and available time buckets. Machine compatibility can be easily checked by using the given data on the designated machines. The available time buckets in which an item can be processed are determined based on the EPST and LPST. The EPST refers to the earliest time at which all the subitems or raw materials are prepared so that the item is ready for the operation. The LPST refers to the latest time at which an item must be processed to avoid delaying the finished goods compared to the previous solution. This process of checking the machine compatibility and the available time buckets limits the search space; therefore, solution feasibility is always guaranteed even though there are many other constraints.
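The filtering of exchangeable operations can be sketched as follows. The `Op` tuple, the `compatible` mapping (item to its set of designated machines), and the `epst`/`lpst` dictionaries are illustrative assumptions; the paper does not prescribe these data structures:

```python
from collections import namedtuple

Op = namedtuple("Op", ["name", "item", "machine"])

def exchangeable_operations(op, ops, compatible, epst, lpst):
    """LSS sketch: operation q is exchangeable with op only if each item can
    run on the other's machine and their [EPST, LPST] windows overlap, so an
    exchange cannot violate precedence or machine-designation constraints."""
    candidates = []
    for q in ops:
        if q is op:
            continue
        # machine compatibility must hold in both directions of the swap
        if op.machine not in compatible[q.item] or q.machine not in compatible[op.item]:
            continue
        # the available time windows must overlap
        if epst[q] <= lpst[op] and epst[op] <= lpst[q]:
            candidates.append(q)
    return candidates
```

Only operations passing both checks enter the exchange list, which is what keeps every generated candidate feasible.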
Next, the randomly selected operation is exchanged with an operation in the list of exchangeable operations to generate a candidate solution. The processing times of the exchanged operations can change depending on the skill level of the worker assigned to the machine. When the completion time of an operation is delayed compared to the previous solution, it affects the schedules of the subsequent operations. To maintain the efficiency of the algorithm, the same amount of delay is applied to all the operations that begin after the operation causing the delay. Then, the candidate is generated by moving the operation's start time as early as possible based on the updated EPST and the availability of the worker and machine. Figure 5 depicts the pseudocode for the candidate generation.
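The uniform right-shift described above can be sketched as follows, assuming the schedule is kept as a mapping from operation to start time (an illustrative representation, not the paper's):

```python
def propagate_delay(schedule, delayed_op, delay):
    """Shift every operation that starts after the delayed operation by the
    same amount, preserving relative order (and hence feasibility) of the
    remaining schedule without recomputing it operation by operation."""
    pivot = schedule[delayed_op]
    return {op: (start + delay if start > pivot else start)
            for op, start in schedule.items()}
```

This keeps candidate generation cheap: one pass over the schedule instead of a full rescheduling of the successors.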
Finally, the solution is updated based on the performance improvement in terms of the objective functions. Two approaches are employed for the solution update: limited space full search (LSFS) and limited space interactive search (LSIS). LSFS generates all possible candidate solutions using the list of exchangeable operations and determines the best one. When a better solution is obtained, the solution is updated and the algorithm returns to its first step; otherwise, the algorithm terminates. In contrast, LSIS generates only one candidate solution and calculates its objective function value. When the objective function is improved, the solution is updated, the algorithm immediately returns to its first step, and a different part of the solution space is searched; otherwise, the next candidate solution is generated. After generating the candidate solutions one by one, the algorithm stops when the solution is no longer improved.
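The two update strategies correspond to the classic best-improvement and first-improvement local search schemes. A minimal sketch under that reading; `neighbors` (yielding candidates from the exchange list) and `score` (returning the lexicographic objective tuple, lower is better) are illustrative assumptions:

```python
def lsfs(solution, neighbors, score):
    """LSFS-style (best-improvement) sketch: evaluate all candidates,
    move to the best one if it improves, otherwise stop."""
    while True:
        cands = list(neighbors(solution))
        if not cands:
            return solution
        best = min(cands, key=score)
        if score(best) < score(solution):
            solution = best
        else:
            return solution

def lsis(solution, neighbors, score):
    """LSIS-style (first-improvement) sketch: accept the first improving
    candidate and restart; stop when no candidate improves."""
    improved = True
    while improved:
        improved = False
        for cand in neighbors(solution):
            if score(cand) < score(solution):
                solution = cand  # accept immediately and restart the search
                improved = True
                break
    return solution
```

First-improvement avoids evaluating the whole exchange list at each step, which is consistent with LSIS's shorter computation times reported in Section 4.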

Numerical Experiments
The superiority of the proposed algorithm is demonstrated through its comparison with GA and GA-VNS. The overall structures of GA and GA-VNS are based on the previous study by Wu et al. [6]. As shown in Figure 2, GA proceeds through the procedures of initialization, selection, crossover, and mutation. The algorithm proposed by Wu et al. [6] was partially modified because it is not suitable for the MLPS. In their problem, a final item requires a simple sequence of operations, and the order for a final item occurs only once. Therefore, it is easy to construct an operation sequence chromosome based on the final item. However, this is not appropriate when we consider the MLPS and multiple orders for each item, because an operation sequence chromosome based on the final item no longer represents a schedule uniquely. Therefore, the decoding process was modified to consider the MLPS: the feasible suboperations required for the final product are selected based on the MLPS, and the sequence of suboperations is randomly selected if there are multiple feasible suboperations.
The encoding and other parameter settings, such as crossover, mutation, elite, and tournament, are the same as those in the algorithm proposed by Wu et al. [6]. In particular, selection is performed through the elite tournament, and VNS is applied to the top 50% of the candidates in the case of GA-VNS.

Evaluation of the Initial Solution Algorithm
To verify the effectiveness of the initial solution algorithm, we solve small-scale problems. The experimental environment is described as follows.

• The number of final items is five, and they have the BOM structure shown in Figure 6. In cases 1 and 2, a BOM can have up to two subparts for the upper part. In cases 3 and 4, the BOM structure is extended to up to four levels. The processing time of each operation is randomly generated within a range from 1 to 15 time units.

• The number of machines is five. On average, each machine can process 30% of the items, and the designated machines are randomly assigned.
• Capacity is either low or high. In the case of low capacity, the available time units per day are 2000; in the case of high capacity, 4500 units per day are available.

• The number of workers is three. The skill level of each worker is randomly generated within a range from 50 to 100%.
• The initial inventory level is zero.

• The lot size is 30.

Tables 1 and 2 show the numerical results with a simple BOM structure, as shown in Figure 6a. Order data are randomly generated within a range from 1 to 30, and the other data remain the same. The performances of LSFS and LSIS are compared with those of GA, GA-VNS, LSFS without the initial solution algorithm, and LSIS without the initial solution algorithm. The performance measures used are the objective function values (G1, G2, and G3) explained in Section 2 and the computation time. The results show that LSFS and LSIS with the initial solution algorithm are better than the other algorithms. In particular, the performance gap between the algorithms with and without the initial solution algorithm is quite large. As mentioned in the previous section, the main concept of the proposed algorithm is to begin at a relatively good starting point and improve the solution within the limited search space. It is also interesting that the computation times of LSIS and LSFS become shorter when the initial solution algorithm is applied. In other words, the performance of the LSS-based algorithms is heavily dependent on the initial starting point, and the proposed initial solution algorithm plays a critically important role in achieving good performance. In addition, the performances of LSFS and LSIS are even worse than those of the GA-based algorithms when the initial solution algorithm is not applied; unlike the GA-based algorithms, the proposed algorithm does not use a mutation operator and can therefore fall behind without a good starting point. The difference between the performances of LSFS and LSIS is not large. However, in terms of computation time, LSIS tends to be slightly superior to LSFS.
Next, we consider the more complex BOM structure shown in Figure 6b. We note that the proposed algorithm is terminated after 7200 s because the complex BOM structure requires more computation time. Tables 3 and 4 summarize the results. In most cases, LSFS and LSIS perform better than the other algorithms, but GA performs well in case 3 with low capacity. Because of the nature of a small-sized problem, in some cases, GA can be better than the strategy of limiting the search space and concentrating effort on the local search, which is then less efficient than randomly searching a larger solution space. In both cases, the initial solution algorithm improves the performance of the algorithm.

Extended Experiments
The FJSP can be categorized into total FJSP and partial FJSP according to the machine compatibility. Cases 5 and 6 represent the partial FJSP, where each operation can be processed only on some machines [18], whereas case 7 represents the total FJSP, where each operation can be processed on all the machines. For extended experiments, we used a simple BOM structure, as in Figure 6a, and increased the problem size including the numbers of final items, machines, and workers.
In case 5, the data used for the experiments are the same as those used in the previous section except for the order data. In the cases of GA and GA-VNS, 10 iterations were performed with 20 populations due to limitations in the computation time, unlike in previous studies that performed more than 20 iterations with over 100 populations. The other parameters are similar to those in the study conducted by Wu et al. [6] (ratio of crossover: 0.5; ratio of mutation: 0.5; ratio of elite: 0.2; ratio of VNS: 0.5). Table 5 presents the results of the numerical experiment in case 5. In this case, GA exhibits the best performance, as presented in Table 5. Case 6 represents a larger problem in comparison with case 5, with 20 final items, 10 workers, and 10 machines. The environment contains more flexible processes, and the machines can process an average of 50% of the items. Due to the practical constraints on the computation time, the experiment was conducted by reducing the scale to 10 populations and 10 iterations. Additionally, the proposed algorithm was terminated after 7200 s. Table 6 summarizes the results of the numerical experiment in case 6. As the problem becomes larger, the strategy of limiting the search space becomes more efficient, which is different from the results of case 5. The LSS algorithm with LSFS and the LSS algorithm with LSIS are better than the GA-based algorithms in terms of all performance measures. The results demonstrate a good performance with a significant difference in terms of lateness in a shorter computation time. For example, at low capacity, the total delayed time units were 9 for LSFS and LSIS with a computation time of 7200 s, whereas they were 70 and 52 for GA and GA-VNS, respectively, with computation times of over 9000 s. At high capacity, the lateness is reduced in all the algorithms; however, the LSS-based algorithms perform better than the GA-based algorithms.
Case 7 assumes the most flexible environment, in which all the machines can process all the items. The numbers of final items, workers, and machines are 30, 10, and 10, respectively. Experiments are conducted under the same conditions as those of case 6. Table 7 presents the results of the numerical experiment in case 7, which are similar to those of case 6. Both LSS-based algorithms outperform the GA-based algorithms in a shorter time. LSIS, which explores more of the search space in a shorter time as the problem becomes more complicated, is more efficient than LSFS even in an environment with low capacity. Although the search space is limited, LSFS evaluates all the possible candidate solutions at each iteration; therefore, it takes a longer time and does not perform well for more complex problems. Figures 7 and 8 illustrate the relative performance in comparison with the best solution as the problem size increases under each capacity condition. The relative performance is calculated using Equation (4). The objective function values are normalized so that the best and worst solutions become 1 and 0, respectively. In terms of lateness, the objective function with the highest priority, GA generates the best solution when the problem size is small, and the performance of LSIS is not the worst. However, as the problem size increases, LSIS generates the best solution or one that is very close to the best solution, and the performance of GA becomes the worst. The performance of GA-VNS is similar to that of GA for cases 6 and 7. With respect to the makespan, the trend of the result is similar to that of the lateness when the capacity is low. At high capacity, the LSS-based algorithms generate the best solutions, or ones very close to the best solution, in all the cases.
As for the third objective function, none of the algorithms outperform the others because this study considers a preemptive multiobjective approach, wherein the performance depends on the first and second objective functions.
Relative performance = (Current value − Minimum value) / (Maximum value − Minimum value)    (4)

Figure 7. Comparison of optimal performance at low capacity.
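Equation (4), as printed, can be sketched directly; the function name and the list-of-values interface are illustrative assumptions:

```python
def relative_performance(current, values):
    """Equation (4) as stated: (current - min) / (max - min), where values
    holds the objective values of all compared algorithms for one case."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return 0.0  # all algorithms tie; the normalization is undefined
    return (current - lo) / (hi - lo)
```

The result lies in [0, 1], so algorithms can be compared across cases with very different objective magnitudes.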

Conclusions
This study addressed the DRCFJSP under the consideration of workers' skill levels and the MLPS. Due to the complexity of the problem, we proposed an efficient algorithm that limits the search space to solve the complex DRCFJSP with three objective functions, minimizing the lateness, makespan, and machine workload deviation, which were optimized sequentially. The algorithm is composed of a time-based integrated initial solution algorithm and the LSS algorithm. The time-based integrated initial solution algorithm was tailored to the problem domain and modified the existing encoding method due to the unique characteristics of the MLPS. The LSS algorithm uses the concepts of EPST and LPST to restrict movement in the solution space, such that the previous solution is not changed considerably. In the LSS algorithm, the solution is updated using LSFS or LSIS.
Numerical experiments demonstrated that the proposed algorithm has an advantage for a complex problem when compared to the GA-based algorithms in terms of time and performance. In particular, when the problem size was small, the proposed algorithm was not always better than GA. In some cases, GA performed better than the proposed algorithm. However, for large-sized problems, we obtained better solutions in a shorter computation time by limiting the search space.
Further research is needed to develop extended models and algorithms considering additional constraints and uncertainties in the production system, such as the learning effect. The workers' skill levels can be improved as the same work is repeated, and thus the efficiency can be changed. Various types of uncertainties must also be considered in future works. Because the actual manufacturing environment has various uncertainties, this research can be expanded to reflect the uncertain processing time, demand fluctuation, and machine failure. To overcome these issues, robust optimization and stochastic programming approaches can be applied.

Conflicts of Interest:
The authors declare no conflicts of interest.

Abbreviations
The following notations are used in this manuscript:

inventory of the x-th sub item of I_α
N_α,x — the amount of the x-th sub item required to produce I_α
LT — lot size
s_β — original start time of operation β
s_β — modified start time of operation β

Figure A1. Initial allocation process.