A Novel Metaheuristic Hybrid Parthenogenetic Algorithm for Job Shop Scheduling Problems: Applying an Optimization Model

Metaheuristics are computational procedures for finding optimal or near-optimal solutions to optimization problems and are applied across many practical areas. Due to the increasing number of optimization problems with large-scale data, there is ongoing demand for new metaheuristic algorithms with greater efficiency and improved convergence speed, implemented via a mathematical model. One of the most popular optimization problems is the job shop scheduling problem. This paper develops a novel metaheuristic hybrid parthenogenetic algorithm (NMHPGA) to optimize flexible job shop scheduling problems for single-machine and multi-machine job shops and a furnace model. The method is based on the principles of the genetic algorithm (GA), combining different types of selection, the proposed ethnic GA, and a hybrid parthenogenetic algorithm. A parthenogenetic algorithm (PGA) combined with ethnic selection GA is tested; the PGA variant includes the parthenogenetic operators swap, reverse, and insert. The ethnic selection uses different selection operators, namely stochastic, roulette, sexual, and aging; the top individuals from each procedure are then combined to form an ethnic population. The ethnic selection procedure is tested with the PGA variants on a furnace model, single-machine job shops, and multi-machine job shops with tardiness, earliness, and due-date penalties. A comparison of the results of the established algorithm with other selection procedures indicates that the NMHPGA achieves better objective function values with faster convergence.


I. INTRODUCTION
Optimization is a design problem that demands appropriate techniques and methods to provide satisfactory results within a reasonable time and at reduced cost. In practice, many design problems are complex, and classical optimization methods based on mathematical properties cannot find the best results in a limited time. The most common mathematical methods for optimization are gradient-based methods, which utilize the gradient of the objective function. Recent research shows a growing interest in optimization methods with enhanced efficiency, accuracy, and speed. One family of such methods is ''metaheuristics'' [1], [2]. This paper focuses on job shop scheduling problems, which are prominent examples of optimization problems in industry. In the following sections, scheduling and job shop scheduling problems are discussed. Scheduling refers to controlling workloads in a production process to allocate machinery and human resources and plan production processes optimally. Scheduling has a vital role in manufacturing since it affects the productivity of the production line by reducing the time and energy consumed in the production process; scheduling problems do not have fixed, efficient solution algorithms due to their complexity [3], [4].
Scheduling aims to find an optimal processing order to reduce the total makespan of the job shop. A job shop refers to a place where each job is processed on a machine in a limited time. Based on the arrival pattern of the jobs, there are different types of job shops, namely static and dynamic scheduling problems. In static job shops, the jobs need to arrive at the idle shop and be scheduled; in contrast, in the dynamic model, the jobs arrive randomly, and job arrivals are intermittent [5], [6].
Job shop scheduling problems are classified as nondeterministic polynomial-time hard (NP-hard) problems; such problems are known as 'hard' problems to solve because as the size of the problem increases linearly, the computation time increases exponentially [3]. In order to solve such complex production scheduling problems, most studies focus on artificial intelligence, heuristic, and metaheuristic techniques such as neural networks, fuzzy logic, genetic algorithms (GA), particle swarm optimization, simulated annealing, etc. One of the most common techniques for dealing with job shop scheduling problems is the genetic algorithm. GA has been the most popular technique in evolutionary computation research. In the traditional GA, the representation used is a fixed-length bit string. Each position in the string represents a particular feature of an individual. Usually, the string is a collection of structural features of a solution with little or no interactions [5], [6].
This paper establishes an intelligence algorithm to deal with flexible job shop scheduling problems via genetic algorithm methodologies based on a parthenogenetic algorithm replacing the crossover operator with three functions: swap, reverse, and insert. The established GA, named novel metaheuristic hybrid parthenogenetic algorithm (NMHPGA), is a combination of ethnic selection GA in which four types of selection, namely stochastic, roulette, sexual, and aging, are combined with the parthenogenetic algorithm applying three functions of PGA, namely swap, reverse, and insert.
A summary of this paper is as follows. Section II discusses the literature and the background of metaheuristics, job shop scheduling, and the genetic algorithm. In Section III, the ethnic selection outline is presented. In Section IV, the established NMHPGA is discussed. Section V presents some mathematical functions with different characteristics for further use in evaluating the developed metaheuristic algorithm, along with other alternative approaches. In Section VI, a comprehensive statistical analysis is conducted to compare the results of the new algorithm with the different metaheuristic approaches. Section VII discusses the NMHPGA results tested on a case study of the furnace model. Section VIII presents this paper's main findings, including the conclusions and suggestions for future challenges.

A. METAHEURISTIC AND OPTIMIZATION
The term ''metaheuristic'' was introduced by Glover [1] in 1986; it comprises two parts. ''Heuristic'' comes from the Greek word heuriskein, meaning to find or discover, while ''meta'' means beyond the ordinary or natural limits of something. Metaheuristics are optimization solution techniques that apply higher-level strategies to the search processes of design problems to find optimal solutions while avoiding local optima [1], [4], [7], [8].
The history of metaheuristics can be divided into five distinct periods [1], [9]. In the first period, there was no formal presentation of metaheuristic methods, although such methods were used for simple optimization problems. The second period, from 1940 to 1980, saw the first formal introduction of metaheuristics. In the third period (1980 to 2000), multiple metaheuristics were proposed for specific applications. The metaheuristic methodology matured in the fourth period, which runs from 2000 until now. In the fifth period, called the ''scientific'' or ''future'' period, the design of new metaheuristics is expected to become a matter of science [1], [9].
Metaheuristics are commonly grouped into four categories; the first consists of evolution-based algorithms such as the GA. The second category is swarm intelligence-based algorithms, which are based on the cooperative behavior of decentralized, self-organized natural or artificial systems. Some examples of this category are Ant Colony Optimization [17], Particle Swarm Optimization [18], [19], [20], Cat Swarm Optimization [21], Artificial Bee Colony [22], the firefly algorithm [23], and Cuckoo Search [24].
The third category of algorithms is motivated by physical laws; moreover, some methods are based on the lifestyle of humans and animals, which are categorized in the fourth category [9], [25].
The application of GA to the job shop scheduling problem is discussed in the forthcoming sections.

B. SCHEDULING AND JOB SHOP SCHEDULING PROBLEMS
In today's complex manufacturing environment with multiple product lines, each process requires numerous steps and machines for completion; the manufacturing plant's decision-maker must find a way to manage resources to produce products as efficiently and effectively as possible. The decision-maker would create a production schedule that prioritizes on-time delivery and minimizes objectives such as a product's flow time. As a result of increased demand, a field of study known as scheduling problems has developed [26], [27], [28].
Scheduling problems involve finding the optimal schedule under different objectives, machine environments, and job characteristics. Numerous manufacturing processes are complex and extremely difficult to solve with conventional optimization techniques. They are NP-hard problems, which has set the stage for the application of genetic algorithms to them. Among the various scheduling problems are (1) job shop scheduling, (2) multiprocessor scheduling, (3) multitask scheduling, (4) parallel machine scheduling, (5) group job scheduling, and (6) resource-constrained project scheduling and dynamic tasks [5].

C. TYPES OF SCHEDULES
Scheduling is the process of sequencing actions to make their execution optimal; scheduling is classified as a nondeterministic polynomial-time hard (NP-hard) problem, i.e., a difficult optimization problem to solve [29], [30], [31], [32]. In this context, job shop scheduling problems (JSSP) are considered among the most popular machine scheduling problems. They have received considerable attention since they involve a challenging optimization problem with many real-world applications. The production schedule has been the subject of many studies in recent years due to the importance of productivity and sustainability in manufacturing systems [27]. Generally, in a classical n × m job shop, there are n jobs J_1, J_2, …, J_n with different processing times to be scheduled on m machines. In the variant of job shop scheduling with precedence constraints, each job has a set of operations O_1, O_2, …, O_n that must be processed in a particular order. A common type is the flexible job shop, where each operation can be processed on any machine.
Furthermore, there is another classification of job shop scheduling, including single machines and flexible multimachines [29], [30], [31], [32]. This paper focuses on single-machine job shops and multi-machine job shops.

1) SINGLE-MACHINE SCHEDULING PROBLEM WITH TARDINESS AND EARLINESS
Single-resource scheduling with tardiness and earliness penalties is a particular scheduling problem in which each job has a single operation. In this model, several jobs are available for processing at time zero on a single resource [5].
The single-machine model against a common due date is developed, considering that several jobs have to be processed on a single machine where each job has only one operation. All jobs must be ready for processing at time zero, and for any job finished before the due date, an earliness penalty is applied [5]. The maximum lateness (L_max) is the largest lateness over all jobs, calculated as max(L_1, …, L_n). The total weighted completion time (Σ w_j C_j) is the sum of the weighted completion times of all n jobs, providing an estimate of the total holding or inventory expenses incurred by the schedule. The sum of completion times is often referred to as flow time, while the weighted sum of completion times is called weighted flow time.
A more general cost function is the discounted total weighted completion time, Σ w_j (1 − e^{−r C_j}), which discounts costs at a rate r (0 < r < 1) per unit time: if job j is not completed by time t, an additional cost is incurred over the period [t, t + dt]; if job j is completed at time t, the total cost incurred over the period [0, t] is w_j (1 − e^{−rt}). The total weighted tardiness (Σ w_j T_j) is another cost function, more general than the total weighted completion time. The weighted number of tardy jobs (Σ w_j U_j) is a metric of interest as its objective function is simple to record. However, recent research has also focused on objective functions that are not regular, such as earliness penalties: the earliness of job j with due date d_j is E_j = max(d_j − C_j, 0), a penalty that decreases as C_j increases. An example of a non-regular objective is the total earliness plus total tardiness, Σ_{j=1}^{n} E_j + Σ_{j=1}^{n} T_j, and a more general non-regular objective is the total weighted earliness plus total weighted tardiness, Σ_{j=1}^{n} w′_j E_j + Σ_{j=1}^{n} w″_j T_j, where the weight associated with earliness (w′_j) may differ from the weight associated with tardiness (w″_j).
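As a concrete illustration of the non-regular objective above, the following sketch computes total weighted earliness plus total weighted tardiness for a small, invented set of jobs (all numbers are hypothetical, not data from this paper):

```python
# Hedged illustration: total weighted earliness + tardiness on one machine.
# Job data (completion times, due dates, weights) are invented for the example.

def earliness(C, d):
    """E_j = max(d_j - C_j, 0)."""
    return max(d - C, 0)

def tardiness(C, d):
    """T_j = max(C_j - d_j, 0)."""
    return max(C - d, 0)

def weighted_e_t(jobs):
    """Sum of w'_j * E_j + w''_j * T_j over all jobs.

    jobs: list of (C_j, d_j, w_early, w_tardy) tuples."""
    return sum(we * earliness(C, d) + wt * tardiness(C, d)
               for C, d, we, wt in jobs)

# Three jobs against a common due date d_j = 5: one early, one on time, one tardy.
jobs = [(3, 5, 1.0, 2.0),   # early by 2  -> penalty 1.0 * 2 = 2.0
        (5, 5, 1.0, 2.0),   # on time     -> penalty 0.0
        (9, 5, 1.0, 2.0)]   # tardy by 4  -> penalty 2.0 * 4 = 8.0
print(weighted_e_t(jobs))   # 10.0
```

Note that the earliness weight w′_j and tardiness weight w″_j are carried per job, matching the general form of the objective.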

2) FLEXIBLE MULTI-MACHINE JOB SHOP SCHEDULING PROBLEM WITH TARDINESS AND EARLINESS PENALTIES
In this type of job shop, operations can be executed on any of the available machines; however, the flexible JSSP is more complicated than the classical JSSP because it introduces an additional decision level: determining the job routes, i.e., which of the eligible machines processes each operation [5].
Parameters:
• n: number of jobs
• m: number of machines
• o_j: number of operations for job j
• M_ij: set of machines capable of processing operation i of job j
• p_ijm: processing time of operation i of job j on machine m.
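One possible encoding of these parameters is sketched below; the instance data and helper name `machines_for` are hypothetical, chosen only to show how M_ij falls out of the p_ijm table:

```python
# Hedged sketch: one possible encoding of a flexible job shop instance.
# All numbers are invented; p[(i, j)] maps machine -> processing time,
# so M_ij is simply the key set of that dict.

n = 2                       # number of jobs
m = 3                       # number of machines
o = {1: 2, 2: 1}            # o_j: operations per job

# p_ijm: processing time of operation i of job j on machine m
p = {
    (1, 1): {1: 4, 3: 6},   # op 1 of job 1 can run on machines 1 or 3
    (2, 1): {2: 3},         # op 2 of job 1 only on machine 2
    (1, 2): {1: 5, 2: 2, 3: 7},
}

def machines_for(i, j):
    """M_ij: machines capable of processing operation i of job j."""
    return sorted(p[(i, j)])

print(machines_for(1, 1))   # [1, 3]
```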

3) OBJECTIVE FUNCTION MINIMIZING THE MAKESPAN
In equation (1), p_ij refers to the processing time of an operation, where i refers to the machine number and j refers to the job number. The objective function of open shop scheduling is designed to minimize the upper and lower bounds by producing a suitable schedule for ordering operations and jobs [33], [34], [35].

minimize Total Machine Time(i) = machine waiting time + Σ_{j=1}^{n} p_ij   (1)
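Equation (1) can be evaluated directly; the sketch below uses invented numbers purely to show the two terms (idle time plus the processing times of the jobs assigned to machine i):

```python
# Hedged sketch of equation (1): total time on machine i is its waiting
# (idle) time plus the processing times of the jobs it executes.
# The data are illustrative, not taken from the paper.

def total_machine_time(waiting_time, processing_times):
    """waiting_time + sum of p_ij over jobs j processed on machine i."""
    return waiting_time + sum(processing_times)

# Machine i waits 2 time units and processes jobs taking 4, 3, and 5 units.
print(total_machine_time(2, [4, 3, 5]))   # 14
```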

1) STANDARD GENETIC ALGORITHM APPROACH
A GA is a global search technique used in computing to solve optimization problems, and it can deal with challenging scheduling problems. John Holland of the University of Michigan developed the GA in 1975 based on the idea of simulating natural evolution [6], [46], [47], [48]. Genetic algorithms (GAs) mimic evolution and improve the performance of a population through reproduction: each individual contributes its genetic data to produce offspring that are better adapted to their environment and have a higher chance of survival. This is a fundamental aspect of genetic algorithms and genetic programming. The theoretical basis of the GA can be described in terms of state transitions and search procedures using specialized Markov chains [6]. Figure 1 depicts a generic cycle of evolution by natural selection in which the best individuals are continually selected and operated on by mutation and crossover. After several generations, the population converges on the best-performing solution [6], [46], [47], [48].
A GA is developed for solving job shop scheduling by trying to represent the ability of crossover operators to generate feasible schedules without affecting the performance [6], [46], [47], [48].
GA operates on a population of potential solutions, each represented by a chromosome. The initial step involves encoding all possible solutions into chromosomes. A set of reproduction operators is applied directly to the chromosomes to perform mutation and recombination over solutions; selection compares individuals within a population using a fitness function. The fitness of a solution corresponds to the value of its chromosome. The main objective of the GA is to maximize the fitness function; if the objective is instead to minimize a cost function, individuals with lower cost values are assigned higher fitness. GA begins by generating an initial chromosome population, typically at random. Then, GA loops through an iterative procedure to find the optimum solutions. GA iterations consist of the steps outlined below.
• INITIALISATION AND SELECTION: The first step is the selection of the individuals; it is made randomly.
• REPRODUCTION: In the next step, selected individuals breed offspring in order to generate new chromosomes; the GA can use both crossover and mutation.
• EVALUATION: In this stage, the fitness of the new chromosomes is evaluated.
• REPLACEMENT: In the last step, individuals from the old population are replaced by the new ones; while the population converges toward the optimal solution, the algorithm will be stopped. In this context, the breeding process is the main part of the genetic algorithm. The breeding process creates new and fitter individuals. The breeding process includes three steps, selecting parents, crossing the parents to create new individuals, and replacing old individuals with new ones [46], [47], [48].
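The four steps above can be sketched as a minimal loop. This is an illustrative toy, not the paper's algorithm: the problem (minimizing x² over 8-bit strings), binary tournament selection, single-point crossover, and the mutation rate are all assumptions made for the example.

```python
# Hedged sketch of the SELECTION -> REPRODUCTION -> EVALUATION -> REPLACEMENT
# loop, minimizing f(x) = x^2 over bit strings. All operator choices and
# rates are illustrative assumptions.
import random

def fitness(bits):
    x = int("".join(map(str, bits)), 2)
    return -x * x                        # maximize fitness <=> minimize x^2

def evolve(pop_size=20, length=8, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # SELECTION: binary tournament -- the fitter of two random individuals
        parents = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        # REPRODUCTION: single-point crossover plus bit-flip mutation
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, length)
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                children.append([g ^ 1 if rng.random() < 0.05 else g
                                 for g in child])
        # EVALUATION and REPLACEMENT: offspring replace the old population
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(best)      # tends toward the all-zero string (x = 0)
```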

a: SELECTION
Selection refers to choosing two parents for crossing from the population. After determining an encoding, the next step is to determine how to perform selection, i.e., how to select individuals from the population that will produce offspring for the next generation and how many offspring each individual will produce. The aim of selection is to involve fitter individuals with the expectation that their offspring will also be fitter. Parents for reproduction are selected from the initial population of chromosomes. The problem is selecting these chromosomes. According to Darwin's theory of evolution, only the fittest survive to reproduce.
Selection is a technique that randomly selects chromosomes from a population based on their evaluation function. The greater the fitness value, the greater the chance of selection. Selection pressure is the extent to which superior individuals are preferred: the greater the selection pressure, the more strongly fitter individuals are favored. This selection pressure drives the GA to enhance the population's fitness over successive generations, and higher selection pressures result in higher convergence rates. A GA should be able to identify optimal or nearly optimal solutions across a broad range of selection pressures.
Nevertheless, if the selection pressure is too low, the convergence rate will be slow, and the GA will take excessive time to find the optimal solution. If the selection pressure is too high, there is a higher probability that the GA will prematurely converge on a suboptimal solution. In addition to providing selection pressure, selection schemes should maintain population diversity, as this helps to prevent premature convergence [46], [47], [48].
Selection needs to be balanced with the variation introduced by mutation and crossover. Excessive selection causes highly fit but suboptimal individuals to dominate the population, reducing the diversity required for change and progress; insufficient selection causes evolution to proceed too slowly [46], [47], [48].
Following is a discussion of the various selection methods used in this paper to generate new algorithms.

b: STOCHASTIC SELECTION
Stochastic scheduling is considered a more practical and realistic setting than the deterministic JSSP. In this work, the GA is modified for the JSSP so that the fitness function can fluctuate under stochastic circumstances [6].

c: ROULETTE SELECTION
The roulette strategy selects solutions according to their expected value, so that each individual may be selected multiple times during the selection operation. The roulette wheel is segmented, and the individuals with the highest fitness are given larger segments and thus a higher probability of being selected [6].
One of the traditional GA selection methods is roulette selection. In this proportionate reproduction operator, a string is selected from the mating pool with a probability proportional to its fitness. Conceptually, roulette selection is a linear search through a roulette wheel whose slots are weighted according to the individuals' fitness values. A target value, a random proportion of the sum of the population's fitnesses, is determined, and the population is iterated over until the target value is reached. This is a moderately effective method of selection: it is not guaranteed that fit individuals will be chosen, but they are more likely to be selected. A fit individual contributes more toward the target value, but if it does not surpass it, the next chromosome in line, which may be weak, has a chance. It is important that the population is not sorted by fitness, as this would significantly bias the selection process. In roulette wheel selection, the expected value of an individual is the individual's fitness divided by the average population fitness. Each individual is assigned a portion of the roulette wheel proportional to its fitness level. The wheel is spun N times, where N is the size of the population to be selected; each time the wheel is spun, the individual under the marker is chosen as a parent for the next generation [6], [46]. The remaining steps are similar to those of the classic GA [49], [50].
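The cumulative-sum walk described above can be sketched as follows; the population and fitness values are invented, and fitnesses are assumed non-negative:

```python
# Hedged sketch of roulette-wheel selection: each individual gets a wheel
# slice proportional to its fitness, and a random target value walks the
# cumulative sum until it is reached. Data are illustrative.
import random

def roulette_select(population, fitnesses, rng):
    target = rng.uniform(0, sum(fitnesses))
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit
        if cumulative >= target:
            return individual
    return population[-1]            # guard against float round-off

rng = random.Random(0)
pop = ["a", "b", "c"]
fits = [1.0, 1.0, 8.0]               # "c" holds 80% of the wheel
picks = [roulette_select(pop, fits, rng) for _ in range(1000)]
print(picks.count("c") / 1000)       # roughly 0.8
```

As the paper notes, fit individuals are likely but not guaranteed to be chosen: "a" and "b" still win about 20% of the spins between them.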

e: AGING SELECTION
Aging GA is a modified version of the traditional GA in which the age of individuals affects their performance. When a new individual is generated, its age is set to zero, and it increases with every iteration; young individuals are considered less fit than adult individuals. The effectiveness of individuals is measured by considering both the objective function value and their age [51], [52].
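One way to realize this idea is an age-dependent scaling of the raw objective value. The weighting function below is an assumption made for illustration, not the formula from [51], [52]:

```python
# Hedged sketch of aging selection: effective fitness combines the raw
# objective value with age, so newborns (age 0) start with low effective
# fitness that grows toward full adult fitness. The ramp function is an
# illustrative assumption.

def effective_fitness(raw_fitness, age, adult_age=5):
    """Scale raw fitness by a factor that ramps from 0 at age 0 to 1 at adult_age."""
    factor = min(age, adult_age) / adult_age
    return raw_fitness * factor

print(effective_fitness(10.0, 0))   # 0.0  (newborn)
print(effective_fitness(10.0, 5))   # 10.0 (adult)
```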

f: CROSSOVER (RECOMBINATION)
Crossover is the process of combining two parent solutions to produce offspring. Following selection (reproduction), the population is enriched with superior individuals. Reproduction duplicates good strings but does not generate new ones; the mating pool is therefore treated with a crossover operator in the expectation that it will produce superior offspring. There are different types of crossover, including single-point crossover, two-point crossover, multipoint crossover, uniform crossover, and so on. The fundamental parameter in the crossover step is the crossover probability (Pc), which describes how often crossover is performed. If there is no crossover, offspring are identical copies of their parents; if crossover occurs, the offspring contain portions of both parents' chromosomes. If the crossover probability is 100%, all offspring are produced through crossover; if it is 0%, the entire new generation consists of exact copies of chromosomes from the old population. Crossover is performed in the hope that new chromosomes will combine beneficial portions of old chromosomes and therefore be superior. However, it is also beneficial to allow a portion of the old population to survive into the next generation.
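The role of Pc can be made concrete with single-point crossover; the parent strings are invented for the example:

```python
# Hedged sketch of single-point crossover governed by crossover probability
# Pc: with probability Pc the parents exchange tails at a random cut point,
# otherwise the offspring are exact copies.
import random

def crossover(parent_a, parent_b, pc, rng):
    if rng.random() < pc:
        cut = rng.randrange(1, len(parent_a))
        return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]
    return parent_a[:], parent_b[:]

rng = random.Random(3)
a, b = [0, 0, 0, 0], [1, 1, 1, 1]
c1, c2 = crossover(a, b, pc=1.0, rng=rng)   # Pc = 100%: always crossed
print(c1, c2)
c3, c4 = crossover(a, b, pc=0.0, rng=rng)   # Pc = 0%: exact copies
print(c3, c4)
```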
Instead of using fixed crossover and mutation probabilities, they are adjusted adaptively based on the following quantities [46]:
• f_max refers to the highest fitness value in the population;
• f_avg is the average fitness value of the population;
• f′ refers to the higher fitness value of the two individuals undergoing the operator.

g: MUTATION
Following crossover, the strings undergo mutation, which prevents the algorithm from becoming trapped in a local minimum. Mutation serves the dual purpose of recovering lost genetic material and randomly perturbing genetic information.
Mutation has traditionally been regarded as a straightforward search operator. The mutation explores the entire search space, whereas crossover is meant to exploit the current solution to find better alternatives. The mutation is viewed as a background process to maintain genetic diversity in a population. It introduces new genetic structures into the population by randomly altering some constituents. Mutation aids in escaping the trap of local minima and maintains population diversity. A search space is ergodic if there is a probability greater than zero of producing any solution from any population state [39], [40], [41], [42], [43], [44], [45], [46], [47], [48].
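The background role of mutation is easy to see in code; a per-gene bit-flip with illustrative probability pm:

```python
# Hedged sketch of bit-flip mutation with a per-gene mutation probability pm,
# the "background" operator described above; pm and the data are illustrative.
import random

def mutate(chromosome, pm, rng):
    return [gene ^ 1 if rng.random() < pm else gene for gene in chromosome]

rng = random.Random(7)
original = [0, 0, 0, 0, 0, 0, 0, 0]
print(mutate(original, pm=1.0, rng=rng))   # every bit flips
print(mutate(original, pm=0.0, rng=rng))   # unchanged
```

Because any bit can flip with probability pm > 0, any state of the search space remains reachable, which is the ergodicity property mentioned above.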

2) GENETIC ALGORITHM FOR JOB SHOP SCHEDULING PROBLEMS (JSSP)
Scheduling, particularly job shop scheduling, has been studied for a substantial period. Due to the NP-hard nature of the problem, meta-heuristics such as Simulated Annealing, Tabu Search, and Genetic Algorithms have been implemented both as pure methods and as hybrids of different methods, with hybrid methods generally superior to pure methods. The primary issue is how to escape local minima in a timely manner. GA has been studied and successfully applied to this and other problems [46], [47], [48]. The JSSP comprises several machines, denoted by M, and a number of jobs, denoted by J. Each job entails M tasks, each with a predetermined duration. Each task must be performed on a single machine, and each job must visit each machine exactly once. There is a predetermined order to the operations that comprise a job, and a machine can only perform a single task at a time. There are no setup times, release dates, or due dates. The makespan is the time between the start of the first task and the completion of the last task, and the objective is to find start times for each task that minimize the makespan [46].

E. GENETIC ALGORITHM IMPLEMENTATION USING MATLAB
MathWorks's MATLAB (Matrix Laboratory) is a scientific software package designed to provide numerical computation and graphics visualization in an advanced programming language. Dr. Moler, Chief Scientist at MathWorks, Inc., developed MATLAB to facilitate access to the matrix software created for the LINPACK and EISPACK projects. The initial version was written in the late 1970s for matrix theory, linear algebra, and numerical analysis courses. MATLAB is therefore built on a foundation of advanced matrix software in which the fundamental data element is the matrix [46].
MATLAB offers a vast array of useful functions for genetic algorithm practitioners as well as those wishing to experiment with the algorithm for the first time. Given the versatility of MATLAB's high-level language, problems can be coded in m-files in a fraction of the time it would take to write C or Fortran programs for the same purpose. Combined with MATLAB's advanced data analysis, visualization, and application-domain toolboxes, the user is provided with a uniform environment in which to investigate the potential of GAs.
The GA Toolbox is a set of flexible tools for implementing various genetic algorithm methods. It contains a collection of procedures, written primarily as m-files, which implement the essential functions of genetic algorithms. In this context, due to the low convergence speed of the standard GA, an improved version of the GA is established as the novel metaheuristic hybrid parthenogenetic algorithm (NMHPGA), coded in MATLAB.

1) DATA STRUCTURES
The only data type supported by MATLAB is a rectangular matrix of real or complex numeric elements. The primary data structures contained within the Genetic Algorithm toolbox are (1) chromosomes, (2) objective function values, and (3) fitness values. The following subsections discuss these data structures [46].

2) CHROMOSOMES
The chromosome data structure stores the whole population in a single matrix of size N_ind × L_ind, where N_ind represents the number of individuals in the population and L_ind represents the length of the genotypic representation of those individuals. Each row represents an individual's genotype, consisting of base-n, ordinarily binary, values, as in the matrix Chrom in (4) [46]. This data representation does not impose a structure on the chromosome; all that is required is that all chromosomes have equal length. Consequently, structured populations or populations with varied genotypic bases can be utilized with the Genetic Algorithm Toolbox, provided a suitable decoding function mapping chromosomes to phenotypes is implemented [46].

3) PHENOTYPES
The decision variables, or phenotypes, are obtained in GA by mapping the chromosome representation into the variable decision space. Each string in the chromosomal structure is decoded to a row vector of order N var , according to the number of dimensions in the search space and the value of the decision variable vector. The decision variables are kept in a matrix with the dimensions N ind by N var . Again, each row corresponds to the phenotype of a specific individual [46].

4) OBJECTIVE FUNCTION VALUES
The performance of phenotypes within the problem domain is evaluated using an objective function. Objective function values may be scalar or vectorial in multi-objective problems. Note that objective function values and fitness values are not necessarily the same. The objective function values are stored in a N ind × N obj matrix, where N obj is the number of objectives. Each row corresponds to the objective vector of an individual [46].

5) FITNESS VALUES
Objective function values are converted into fitness values using a scaling or ranking function. Fitnesses are non-negative scalars stored in column vectors of length N_ind; they can be obtained, for example, by ranking the objective values using an arbitrary fitness function [46].
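The ranking step can be sketched as follows; the linear ranking formula with selection pressure sp = 2 and the cost values are assumptions for illustration, not the toolbox's exact implementation:

```python
# Hedged sketch of rank-based fitness assignment: raw objective values
# (costs to be minimized) are replaced by non-negative fitnesses derived
# from their rank. Linear ranking with selection pressure sp = 2 is an
# illustrative assumption; it needs at least two individuals.

def ranking(costs, sp=2.0):
    """Map costs (lower is better) to fitnesses in [2 - sp, sp]."""
    n = len(costs)
    order = sorted(range(n), key=lambda i: costs[i], reverse=True)  # worst first
    fitness = [0.0] * n
    for rank, i in enumerate(order):            # rank 0 = worst individual
        fitness[i] = 2 - sp + 2 * (sp - 1) * rank / (n - 1)
    return fitness

print(ranking([7.0, 1.0, 4.0]))   # best cost gets fitness 2.0, worst gets 0.0
```

This separation of objective value and fitness is exactly the distinction drawn in the objective-function section above.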

F. FRAMEWORK OF THE PARTHENOGENETIC ALGORITHM
The GA is based on the evolutionary concept of survival of the fittest and has been extensively used to solve NP-hard problems. In GAs, the candidate solutions are represented as a population of chromosomes (individuals), each consisting of a string of genes.
The crossover operator is the primary genetic operator to generate new offspring by mixing two parents. Unique individuals can inherit some features from their parents. There are different types of traditional crossover operators: one-point crossover, two-point crossover, scattered crossover, etc. [46].
The parthenogenetic algorithm (PGA) is a variant of the GA that employs gene recombination within a single chromosome, instead of the traditional crossover operator, to produce offspring. PGA deals with the issue above by removing the crossover operator, improving the genetic algorithm's effectiveness and performance: because each operator is performed on a single chromosome, the offspring cannot jump into the invalid-solution area, as offspring of the crossover operator can. There are three partheno-genetic operators, swap, reverse, and insert, which change the order of genes in a chromosome to generate a new chromosome [53], [54], [55], [56].
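The three partheno-genetic operators can be sketched directly; each acts on a single permutation chromosome, so no duplicate genes can be produced (the 0-based positions and the example tour are illustrative):

```python
# Hedged sketch of the three partheno-genetic operators. Each takes one
# permutation chromosome and returns a new ordering of the same genes.

def swap_op(chrom, i, j):
    """Exchange the genes at positions i and j."""
    c = chrom[:]
    c[i], c[j] = c[j], c[i]
    return c

def reverse_op(chrom, i, j):
    """Reverse the gene segment between positions i and j inclusive."""
    return chrom[:i] + chrom[i:j + 1][::-1] + chrom[j + 1:]

def insert_op(chrom, i, j):
    """Remove the gene at position i and insert it at position j."""
    c = chrom[:]
    gene = c.pop(i)
    c.insert(j, gene)
    return c

tour = [1, 2, 3, 4, 5]
print(swap_op(tour, 0, 4))      # [5, 2, 3, 4, 1]
print(reverse_op(tour, 1, 3))   # [1, 4, 3, 2, 5]
print(insert_op(tour, 0, 3))    # [2, 3, 4, 1, 5]
```

Because every output is a rearrangement of the input, the feasibility checks that a permutation-preserving crossover would need are unnecessary, which is the efficiency argument made above.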

III. ETHNIC SELECTION GENETIC ALGORITHM
The ethnic GA (EGA) is based on combining different populations generated using various selection methods [57]. Some ethnic groups allow heterosexual partners (SGA), some prefer middle-aged people (AGA), others do not interfere with partner selection (stochastic GA), and lastly, some prefer strong and wealthy partners; these choices affect the speed of convergence and the global solution.
In this paper, an ethnic selection GA combines four types of selections, including stochastic, aging, sexual, and roulette selections, to test the convergence speed; Moreover, the ethnic selection is combined with a parthenogenetic algorithm in order to propose a novel metaheuristic hybrid parthenogenetic algorithm (NMHPGA) to test and compare results with the standard parthenogenetic algorithms.
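The combination step described above can be sketched as follows. The four selection schemes are stubbed with simple rules purely for illustration (the roulette and sexual stubs here just take the fittest individuals), and the population, fitness values, and ages are invented:

```python
# Hedged sketch of ethnic selection: run several selection schemes on the
# same population, take the top k individuals from each, and merge them
# into one "ethnic" population. The scheme implementations are stubs.
import random

def top_k(population, key, k):
    return sorted(population, key=key, reverse=True)[:k]

def ethnic_population(population, fitness, ages, k=2, seed=0):
    rng = random.Random(seed)
    stochastic = rng.sample(population, k)                 # random picks
    roulette = top_k(population, fitness, k)               # fitness-biased (stub)
    sexual = top_k(population, fitness, k)                 # pairing-based (stub)
    aging = top_k(population,
                  lambda x: fitness(x) * min(ages[x], 5) / 5, k)
    merged = list(dict.fromkeys(stochastic + roulette + sexual + aging))
    return top_k(merged, fitness, len(population))         # best of the union

pop = ["p0", "p1", "p2", "p3"]
fit = {"p0": 1.0, "p1": 4.0, "p2": 2.0, "p3": 3.0}.__getitem__
ages = {"p0": 5, "p1": 1, "p2": 5, "p3": 4}
print(ethnic_population(pop, fit, ages))
```

The merged population keeps the best individuals found by every scheme, which is the mechanism intended to speed up convergence relative to any single selection method.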

IV. NOVEL METAHEURISTIC HYBRID PARTHENOGENETIC ALGORITHM
A novel metaheuristic hybrid parthenogenetic algorithm is developed by combining variants of existing selection methods with the parthenogenetic algorithm. Different selection schemes mixed with the parthenogenetic algorithm (PGA) are established to provide a valid comparative study and to evaluate the overall performance of the novel metaheuristic hybrid parthenogenetic algorithm. The metaheuristic algorithms used for this purpose are the aging PGA, sexual-selection PGA, roulette-selection PGA, stochastic-selection PGA, and ethnic-selection PGA, tested on job shops from industry and on mathematical benchmark functions to assess the accuracy of the algorithm. The NMHPGA consists of ethnic selection combined with the three parthenogenetic operators, namely swap, reverse, and insert. The chromosome matrix referenced in (4) has the form:

Chrom =
[ g_{1,1}     g_{1,2}     g_{1,3}     …  g_{1,Lind}
  g_{2,1}     g_{2,2}     g_{2,3}     …  g_{2,Lind}
  g_{3,1}     g_{3,2}     g_{3,3}     …  g_{3,Lind}
  …
  g_{Nind,1}  g_{Nind,2}  g_{Nind,3}  …  g_{Nind,Lind} ]   (4)

The algorithm's initial population is produced, and the stages of the algorithm then begin with the removal of the crossover operators. In order to test the accuracy of the NMHPGA, five different algorithms, each using mutation only and replacing the crossover operators with the swap, reverse, and insert functions, are tested; the algorithms differ only in selection type. In the stochastic-selection parthenogenetic algorithm (STPGA), selection is stochastic, based on random choice. The roulette parthenogenetic algorithm (RPGA) is based on roulette-wheel selection; in the sexual parthenogenetic algorithm (SPGA), selection is sexual; APGA, the aging parthenogenetic algorithm, is based on aging selection; lastly, the novel metaheuristic hybrid parthenogenetic algorithm is based on ethnic selection, which combines stochastic, roulette, sexual, and aging selection and finds the best fitness function from the combination of the selections. The procedure of the NMHPGA is as follows: the first testing stage uses the standard selection procedures, while the second stage combines the best individuals selected by the different methods into a single population, known as ethnic selection. The NMHPGA does not utilize the crossover function, which is very time-consuming due to the checks needed to avoid replicated genes in the chromosomes. Figure 2 illustrates the flowchart of the NMHPGA.
The most recent and improved versions of the selection operators are chosen to increase the algorithm's accuracy. Moreover, the algorithm's internal parameters are critical to its convergence speed; accordingly, the parameter values follow the most successful configurations reported in the literature.
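The overall loop described above can be sketched as follows, under simplifying assumptions: permutation chromosomes, only two stand-in selection operators instead of the paper's four, and illustrative population parameters. This is a minimal illustration of "ethnic selection plus parthenogenetic regeneration instead of crossover", not the authors' implementation:

```python
import random

def pga_offspring(chrom):
    """Produce a child with one parthenogenetic operator (no crossover)."""
    c = chrom[:]
    i, j = sorted(random.sample(range(len(c)), 2))
    op = random.choice(("swap", "reverse", "insert"))
    if op == "swap":
        c[i], c[j] = c[j], c[i]
    elif op == "reverse":
        c[i:j + 1] = reversed(c[i:j + 1])
    else:
        c.insert(j, c.pop(i))
    return c

def nmhpga(cost, n_genes, pop_size=40, generations=200):
    # initial population of random permutation chromosomes
    pop = []
    for _ in range(pop_size):
        p = list(range(n_genes))
        random.shuffle(p)
        pop.append(p)
    # two stand-ins for the paper's four selection operators
    selectors = [
        lambda pool, k: random.sample(pool, k),       # stochastic selection
        lambda pool, k: sorted(pool, key=cost)[:k],   # fitness-ranked stand-in
    ]
    share = pop_size // (2 * len(selectors))
    for _ in range(generations):
        # ethnic selection: merge the best individuals of every group
        ethnic = []
        for sel in selectors:
            group = sorted(sel(pop, 2 * share), key=cost)
            ethnic.extend(group[:share])
        # parthenogenetic regeneration replaces the crossover step
        pop = ethnic + [pga_offspring(random.choice(ethnic))
                        for _ in range(pop_size - len(ethnic))]
    return min(pop, key=cost)
```

Because the offspring are produced from single chromosomes, no duplicate-gene repair is needed, which is where the speed advantage over crossover comes from.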

V. MATHEMATICAL BENCHMARK TEST FUNCTIONS
Before testing and solving an optimization problem, it is essential to identify the functional characteristics that can make the optimization process difficult. In applied mathematics, benchmark functions, also known as artificial landscapes, are primarily employed to evaluate the precision, convergence rate, and robustness of optimization methods. When developing a new optimization algorithm, benchmark functions are essential for testing how the new algorithm performs compared to others. In this research, various unimodal and multimodal benchmark functions are used to demonstrate the efficacy of the established algorithm. Numerous test or benchmark functions are reported in the literature, but no standard set exists; ideally, test functions should have diverse characteristics to be useful for evaluating new algorithms. A metaheuristic method that reliably locates the optimal points of such functions makes solving optimization problems more efficient [58].
The benchmark functions used for testing the metaheuristic NMHPGA include the Rastrigin, Ackley, Sphere, Rosenbrock, Levy, Griewank, Sum Squares, Sum of Different Powers, Rotated Hyper-Ellipsoid, and Zakharov functions [59], [60], [61]. Figure 3 and Table 1 present these functions in more detail. Each function has a unique equation, and their three-dimensional plots show the difficulty of locating the optimal positions. As demonstrated, the Rastrigin, Ackley, and Griewank functions are more complex than the Sphere function due to the presence of both local and global optima. Each of the benchmark functions has a global optimum (minimum) at x = 0, y = 0, with f(0, 0) = 0 [62]. These functions are chosen for features such as modality, basins, and dimensionality. All test functions are non-separable, which increases the difficulty of optimization. Any technique that reduces the error in locating optimal points is better equipped to handle optimization problems, so NMHPGA is tested on these benchmark functions.
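For reference, a few of these benchmark functions can be written directly from their standard definitions (each has its global minimum f = 0 at the origin):

```python
import math

def sphere(x):
    """Sphere: simple unimodal bowl."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Rastrigin: highly multimodal, regular grid of local minima."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ackley(x):
    """Ackley: nearly flat outer region with a large central basin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def griewank(x):
    """Griewank: many widespread, regularly distributed local minima."""
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1
```

These definitions work for any dimension, which is how the 2-, 10-, and 50-dimensional tests reported later can be run with the same code.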

A. NUMERICAL RESULTS FOR THE BENCHMARK FUNCTIONS
In the following section, the numerical results of running the established algorithms on the benchmark functions are presented; based on these results, the novel algorithm achieves satisfactory objective function values. Table 2 presents the 2D global minima of the objective function for the different algorithms tested on the benchmark functions. Table 3 compares the convergence speed on the benchmark functions: NMHPGA improves the convergence speed, i.e., it reaches the best solution in fewer generations. Table 4 similarly shows the results of testing NMHPGA on the benchmark functions for 2, 10, and 50 dimensions; in all three cases, the results are satisfactory. In Table 4, the objective function of NMHPGA is measured against the known objective function of each benchmark equation; the obtained values are close to the defined optima, demonstrating the algorithm's good performance.

VI. STATISTICAL ANALYSIS OF THE NOVEL METAHEURISTIC HYBRID PARTHENOGENETIC ALGORITHM
The objective function values and convergence speeds of the different algorithms are calculated and used for statistical analysis. To this end, the following section compares the convergence speed of the different algorithms tested on the various job shops.

1) JOB SHOP SCHEDULES
In this paper, three categories of simple benchmarks are tested. The benchmarks vary only two elements, the number of jobs and the arrival pattern, to keep the job shops as simple as possible and focus the comparison on the optimization results. The job shops model a simple production line, and the schedules are generated randomly using a constrained open-shop algorithm. The first category, category A (SM), consists of 10 single-machine job shops with earliness, tardiness, and due date penalties (Table 5). The second category, category B (MM), consists of 10 multi-machine job shops with 4 machines and eight jobs, with earliness, tardiness, and due date penalties, and the last category, category C, consists of 9 multi-machine job shops with earliness, tardiness, and due date penalties (Table 8). Tables 5 and 8 list the job shop types used in this paper. Besides, the initial random selection state is kept identical across algorithms to form a comparative model.
The scheduling problem consists of finding the best schedule, with the objective of minimizing the execution time and the penalties. The algorithm is set to a population size of 300 and 1000 generations. Testing aims to investigate the performance of NMHPGA with simple mutation and advanced regeneration, and the effect of the selection types: roulette selection, sexual selection, aging selection, and ethnic selection. In this research, MATLAB R2021a has been used.
VOLUME 11, 2023. Authorized licensed use limited to the terms of the applicable license agreement with IEEE. Restrictions apply.
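As an illustration of such an objective, a minimal single-machine cost function with earliness and tardiness penalties might look as follows; the penalty weights and the back-to-back processing assumption are illustrative, not the paper's exact model:

```python
def schedule_cost(sequence, proc_times, due_dates,
                  w_early=1.0, w_tardy=2.0):
    """Total earliness/tardiness penalty of a job sequence
    on a single machine (jobs processed back to back)."""
    t, cost = 0.0, 0.0
    for job in sequence:
        t += proc_times[job]                # completion time of this job
        lateness = t - due_dates[job]
        if lateness > 0:
            cost += w_tardy * lateness      # tardiness penalty
        else:
            cost += w_early * (-lateness)   # earliness penalty
    return cost
```

A chromosome in the job shop setting is simply a job sequence, so a function like this would serve as the fitness evaluated by the selection operators.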

2) JOB SHOP SCHEDULING AND RELIABILITY TEST RESULTS
Simulation results for the single-machine and multi-machine job shops are shown in Appendix A and Appendix B, respectively. Besides, three simulation results, for SM3, MM3, and MM11 (selected at random), are shown in Figure 4. Each figure plots the objective function against the generation number; the objective function represents the time taken to finish the job shop schedule. Table 6 compares the objective functions of the different algorithms tested on the single-machine job shops of category A: NMHPGA decreases the objective function in most cases, indicating better performance than the other algorithms tested in this research. Besides, Table 7 compares the convergence speed of the different algorithms on the category-A single-machine job shops. As illustrated, NMHPGA attains the lowest objective function value with the highest convergence speed; it requires fewer generations to reach the best solution.
Moving to the next set of results, Table 9 compares the objective functions of the different algorithms tested on the multi-machine job shops of category B, and Table 10 compares their convergence speed. Similar to the category-A results, NMHPGA reduces the objective functions with higher speed. Simulation results for category C are presented in Tables 11 and 12; in this category, too, the objective functions of NMHPGA are lower than those of the other algorithms, with faster convergence.
The convergence speed comparison on the benchmark functions is shown in Table 3: NMHPGA reaches the best solution in fewer generations. Table 4 similarly shows the results of testing NMHPGA on the benchmark functions for 2, 10, and 50 dimensions; in all three cases, the results are satisfactory, as the obtained values are close to the known optimum of zero.
In general, the NMHPGA objective function results are better (lower), which means that the parthenogenetic algorithm combined with ethnic selection leads to better results in job shop scheduling.

VII. FURNACE MODEL
Three objectives are tested on the furnace model: objective one is primarily focused on reducing time, objective two on reducing energy, and objective three on reducing time and energy simultaneously. Before applying the three objectives, the furnace takes 139.2167 h and consumes 86.8673E6 m^3/h of fuel. Table 13 lists the energy consumed by the different algorithms for the three objectives; objective three is the most efficient because it targets time and energy consumption simultaneously. As illustrated, NMHPGA consumed 83.9666E6 m^3/h of fuel, with more efficient and faster results than the other algorithms. Table 14 shows the elapsed time after optimizing the schedules, where applying NMHPGA achieves the most efficient results in this research. Figure 5 illustrates the furnace model objective function for the three objectives using NMHPGA.

VIII. CONCLUSION AND FUTURE WORK
This paper established a novel hybrid metaheuristic method based on combining different selection types of genetic algorithms.
Ten groups of mathematical benchmark functions, three categories of job shop benchmarks, and a furnace model from industry were selected to evaluate the established algorithm's performance, and the algorithm was compared with four other algorithms. The most important findings of this paper are as follows: (i) This paper considers different types of GAs with varying kinds of selection. (ii) The PGAs are tested with different selection procedures, which shows that a combined selection is better than an individual one. (iii) The ethnic selection procedure performs best, as it combines the best individuals from different groups; combining ethnic selection with the PGA shows that better results can be achieved without lengthy crossover procedures. (iv) The advantage of this approach is an improvement in convergence speed and in the global search. (v) The NMHPGA, which removes the crossover function and replaces it with the swap, insert, and reverse functions, combined with ethnic selection, improves effectiveness and performance because these operators act on a single chromosome.
Three categories of job shop benchmarks have been applied to test the established NMHPGA. In future work, other selection and combination functions can be integrated to improve efficiency with fewer genes, and more complex benchmarks and industrial case studies can be applied to test the algorithm. As for the convergence speed, it is a matter of finding the iteration at which the error (cost) reaches a steady state; at that point, the best solution has been found and there is no need to keep the algorithm running, since the error will not change. This is useful for finding the best solution within a few generations, using less time and fewer resources.