1 Introduction

Much of the research in evolutionary optimization has focused on nature-based algorithms inspired by the interactions of living and non-living objects. The main idea is that nature solves problems instinctively, such as ants and bees finding the shortest path between food sources and their nests. Many algorithms in the state of the art imitate these behaviors and interactions to solve optimization problems. Especially in the last decade, the number of new metaheuristic algorithms based on metaphors has exploded. Consequently, many researchers [1,2,3,4,5,6] have criticized that plenty of metaheuristics are essentially similar even though they introduce different metaphors. According to them, these algorithms should not be regarded as novel contributions to the literature. However, as Wolpert and Macready [7] showed, no single algorithm can be best suited for all problems. The possibility that there may always be a better algorithm motivates researchers to develop new algorithms regardless of metaphors. For this reason, new metaheuristics will continue to be introduced [8]. It is worth mentioning that the literature expects algorithms that provide more “optimal-like” solutions without getting trapped in the “novelty” debate.

The term “metaheuristics” covers a wide range of techniques. It is therefore useful to classify them by considering their solution time, complexity, optimality, and the trade-off between diversification and intensification. Fister et al. [9] noted that the classification of algorithms may depend on various criteria such as main principles, sources of inspiration, perspectives, and motivations. They classified nature-inspired algorithms according to the origin of inspiration (swarm-intelligence-based, bio-inspired, physics-based, and chemistry-based).

On the other hand, Blum and Roli [10] summarized the most important classifications as nature-inspired vs. non-nature-inspired, population-based vs. single-point search, dynamic vs. static objective function, one vs. various neighborhood structures, and memory-based vs. memory-less methods. Echevarría et al. [11] classified metaheuristics in terms of the number of solutions and sources of inspiration. Beheshti and Shamsuddin [12] categorized metaheuristic algorithms regarding inspiration, number of solutions, objective function, neighborhood structure, and memory usage. Sotoudeh-Anvari and Hafezalkotob [13] classified the origins of inspiration as animals, physics, humans, plants, nature, and biology, and demonstrated that the most popular sources of inspiration are animals and physics. In addition, Hussain et al. [14] classified all metaheuristics in terms of their metaphor disciplines; biology and physics took the first two places, respectively. Molina et al. [15] proposed two taxonomies: the source of inspiration and the behavior of each algorithm.

This paper proposes an optimization algorithm called the Penalty-based Algorithm (PbA) for continuous optimization problems that cannot be solved in polynomial time (i.e., NP-hard problems). PbA takes some natural facts into account and integrates them into an algorithm. Moreover, as the constraint-handling ability of the PbA, we present a novel multiplicative penalty approach that combines the constraint satisfaction rate and the constraint deviations with the objective function. Furthermore, the PbA is inspired by Tabu Search and elitism selection in terms of memory-based operations. However, in the PbA, memory is used not as a banned list but to eliminate unnecessary repetitive function evaluations. In addition, best-so-far solutions, i.e., the best solutions found in a given run, are stored to avoid losing them during offsets.

It should be noted that the primary purpose of this study is not to duplicate existing algorithms through a new metaphor but to provide an algorithm that reaches the best-known solution values. To obtain better solutions, we constructed the PbA with the best combination of features. Therefore, the primary goal is achieving better results in a specific problem type, regardless of metaphors and similarities.

Moreover, it is seen in the literature that similar algorithms are generally applied to unconstrained problems. In contrast, constrained engineering problems are handled within the scope of this study. Thus, this study aims to show that the PbA gives better results than similar algorithms and other algorithms developed in the literature, which constitutes its contribution.

In the following sections, a literature review and the theoretical background of the PbA are presented, respectively. In the fourth section, the main steps of the PbA are illustrated through an example. After that, the experimental results for specific constrained benchmark problems are given in the fifth section. Finally, the last section concludes and discusses the findings.

2 Literature review

“Metaheuristics,” a term first used by Glover in 1986, denotes a search framework that uses heuristic strategies [16]. All metaheuristics incorporate randomness and thus may produce different solutions in different runs. The outstanding characteristic of metaheuristics is that they explore several regions of the search space and escape from local optima [17].

By the 1960s, the optimization literature had broadened and taken a new direction called “evolution” [18]. Genetic Algorithm, Evolutionary Programming, Evolution Strategies, and Genetic Programming belong to the field of Evolutionary Computation, which relies on computational methods inspired by evolutionary processes [19]. Since 2000, researchers have focused on developing new algorithms based on various metaphors [18]. The behavior of animals, laws of science, facts of nature, and social behaviors are some of the inspirations used in the literature. Even the way the coronavirus hijacks human cells inspired a newly introduced algorithm called COVIDOA [20]. Figure 1 shows various metaheuristic algorithms published between 2000 and 2021.

Fig. 1

Various Metaheuristic Algorithms Developed in the Recent Past

The algorithms given in Fig. 1 can be classified in terms of inspiration, and representative examples of each class are mentioned here. Cat Swarm Optimization [21], Artificial Bee Colony [22], Wolf Search Algorithm [23], Dolphin Echolocation [24], Ant Lion Optimizer [25], and Hunger Games Search [26] are animal-inspired algorithms; Central Force Optimization [27], Gravitational Search Algorithm [28], Charged System Search [29], Chemical Reaction Optimization [30], Curved Space Optimization [31], Henry Gas Solubility Optimization [32], Weighted Vertices Optimizer [33], and Simulated Kalman Filter [34] are science-inspired (physics, chemistry, mathematics) algorithms; Harmony Search [35] and Melody Search [36] are music-inspired algorithms; and Anarchic Society Optimization [37], Brain Storm Optimization [38, 39], Election Algorithm [40], Ideology Algorithm [41], and Human Urbanization Algorithm [42] are socially inspired algorithms.

Similar algorithms in the literature have also been examined in this study. Hysteretic Optimization [43] and Electromagnetism-like Optimization [44] are the first algorithms that share common concepts with the proposed algorithm. Biswas et al. [45], Siddique and Adeli [17], and Salcedo-Sanz [46] published articles focusing specifically on algorithms inspired by natural phenomena and provided comprehensive literature surveys. Algorithms similar to the PbA are listed chronologically in Table 1.

Table 1 Similar algorithms in the literature

As mentioned above, algorithms can be classified in many ways. However, when the performances of algorithms are considered, the main point is their ability to converge to near-optimal solutions. Although the algorithms mentioned above share common features with the PbA as well as distinctive ones, the primary purpose of this study is to show that an algorithm built from the best combination of features provides better results.

3 Proposed algorithm

The foundation of the PbA was first laid by Erdem [52]. It considers the offset factors that cause changes in fitness values together with the minimum objective function value. PbA is a population-based algorithm structured for solving continuous constrained optimization problems, and various approaches in the literature have inspired it. The global optimum is sought by considering the interactions of solutions in terms of their fitness values and the distances between potential solutions.

The assumptions considered in the PbA are summarized below:

  • The changes in fitness values are computed by considering both the fitness values of the solutions and the distances between them.

  • Fitness values are calculated according to the solution point and its selected neighbors.

  • No negative improvements are allowed.

  • The multiplicative penalty-based method is used for constraint handling.

  • No solution can have the same fitness value up to a certain degree of precision.

  • The best solution in the population is maintained, and with each offsetting, an attempt is made to reach a solution better than the “best-so-far” one.

  • Each solution produced in the related run, whether it is a better solution or not, is saved in a database. Thus, no extra function evaluation is required for a previously evaluated solution.

Before explaining each step of the PbA, the pseudocode of the algorithm is given below:

Algorithm 1

PbA will be discussed under Determine Intervals, Initialization, Multiplicative Penalty-based Method, Repulsive Forces, Neighborhood, Offsetting, Duplication, and Stopping Condition sub-sections, respectively.

3.1 Determine intervals

The general pseudocode starts with determining intervals. This stage aims to determine the search space by considering boundaries and constraints. Boundaries are initially taken as the default lower and upper limits. Then, the lower and upper limits are updated by checking all constraints iteratively. The pseudocode is given below:

Algorithm 2

In essence, this approach searches for the roots of each constraint in light of the existing boundaries. The process continues until all constraints have been evaluated, producing many candidate boundaries along the way. At the end, for each variable, the minimum of the candidate upper limits and the maximum of the candidate lower limits that satisfy all constraints are taken as the final interval.
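The interval-tightening idea above can be sketched as follows. This is a minimal Python sketch, not the paper’s Algorithm 2: a grid scan stands in for the root search of each single-variable constraint, and the function name and parameters are our own illustrative choices.

```python
def tighten_interval(lower, upper, constraints, samples=1000):
    """Hypothetical sketch: shrink [lower, upper] for one variable so that
    the kept range satisfies all single-variable constraints g(x) <= 0.
    The final lower bound is the maximum over per-constraint lower roots,
    and the final upper bound the minimum over per-constraint upper roots."""
    step = (upper - lower) / samples
    xs = [lower + i * step for i in range(samples + 1)]
    new_lo, new_hi = lower, upper
    for g in constraints:                      # evaluate each constraint in turn
        feasible = [x for x in xs if g(x) <= 0]
        if feasible:                           # keep the tightest bounds satisfying g
            new_lo = max(new_lo, min(feasible))
            new_hi = min(new_hi, max(feasible))
    return new_lo, new_hi
```

For example, with the constraints x − 6 ≤ 0 and 2 − x ≤ 0 on the default interval [0, 10], the sketch tightens the interval to approximately [2, 6].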

3.2 Initialization

In this step, an initial solution set is generated uniformly at random (giving equal chances) within the determined intervals of the variables, in a hyperspace with as many dimensions as the number of variables in the optimization model. To some extent, this method is inspired by the “Scatter Search Algorithm” by Glover [53]. The approach for generating random numbers was proposed by Erdem [52] and recently applied in [54]. The pseudocode is given below:

Algorithm 3

where \({\delta }_{i}^{-}\) and \({\delta }_{i}^{+}\) are the lower and upper limits, respectively; \({\mathrm{x}}_{i}\) is the generated random number for the ith variable; and random() is a function that generates a random number in [0, 1].
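The generation rule above, \({x}_{i}={\delta }_{i}^{-}+\mathrm{random}()({\delta }_{i}^{+}-{\delta }_{i}^{-})\), can be sketched for a whole population as follows (the function name and the optional seed are our own; they are not part of Algorithm 3):

```python
import random

def initialize_population(bounds, pop_size, seed=None):
    """Sketch of the initialization step: each variable receives an
    equal-chance (uniform) draw within its determined interval."""
    rng = random.Random(seed)
    population = []
    for _ in range(pop_size):
        # x_i = lower_i + random() * (upper_i - lower_i) for each variable
        population.append([lo + rng.random() * (hi - lo) for lo, hi in bounds])
    return population
```

Every generated solution is guaranteed to lie within the intervals produced by the Determine Intervals stage.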

The initialization step also includes the evaluation of the constraints for each candidate solution vector. The randomly generated solutions are tested for whether they satisfy each constraint. This evaluation yields the constraint satisfaction rates and the total deviations; a low satisfaction rate and large deviations result in heavy penalties in the fitness values. This part is handled in detail under the title of the Multiplicative Penalty-based Method (MUPE).

PbA is structured as a memory-based algorithm. Although there are different approaches to using memory in the literature, “Tabu Search” and the “elitism principle” are the pioneers among them, and they inspired the memory feature of the PbA. The PbA utilizes memory for two purposes. The first is creating a database in which each solution is recorded along with its evaluation scores (constraint satisfaction rate, total deviation). This procedure is inspired by the “history” mechanism of Tabu Search, in which each step is stored and the repetition of previous solutions is prohibited by consulting this memory [55]. In the PbA, however, memory is used not as a banned list but to eliminate unnecessary repetitive function evaluations. The second purpose is recording the best-so-far positions of the solutions, inspired by the elitism principle, a selection technique that saves the best solution in the population to eliminate the risk of losing it between iterations [56]. In the PbA, best-so-far solutions are stored to avoid losing them during offsets. Unlike the original elitism principle, however, there is no limit on the number of elitist solutions; the best-so-far solutions are kept separately from the relocated solution set.
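The first memory purpose, a database that spares re-evaluating previously seen solutions, can be sketched as a memoization wrapper. The class name, the rounding precision, and the single-value cache are our own simplifications; the paper’s database also stores satisfaction rates and deviations.

```python
class EvaluationMemory:
    """Sketch of PbA's evaluation database (our naming): every evaluated
    solution is recorded, so a repeated solution costs no extra evaluation."""

    def __init__(self, objective, precision=6):
        self.objective = objective
        self.precision = precision
        self.cache = {}          # solution key -> stored fitness value
        self.evaluations = 0     # count of actual function evaluations

    def fitness(self, x):
        key = tuple(round(v, self.precision) for v in x)
        if key not in self.cache:          # only evaluate unseen solutions
            self.cache[key] = self.objective(x)
            self.evaluations += 1
        return self.cache[key]
```

Asking for the fitness of the same vector twice triggers only one real objective-function call, which is exactly how the PbA eliminates repetitive evaluations.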

3.3 Multiplicative penalty-based method (MUPE)

Penalty functions have been the most common approach in the constrained optimization literature for decades [57]. Penalty approaches keep violations and feasibility under control by penalizing infeasible solutions [58]. The penalty approach developed for the PbA is motivated by the studies [59,60,61,62,63]. The multiplicative penalty-based constraint handling (MUPE) method, first proposed by Erdem [52], calculates the fitness function by combining the rate of satisfied constraints and the deviations from constraints with the objective function in a multiplicative manner. For unconstrained or box-constrained problems, the goal function is the objective function itself. The pseudocode for the calculation of the heuristic fitness function is given below:

Algorithm 4

Herein, the goal function combines both the objective function and all constraints. It acts as a heuristic function, as shown below:

$$H\left(\overrightarrow{x}\right)=H(f\left(\overrightarrow{x}\right), d\left(\overrightarrow{x}\right),t\left(\overrightarrow{x}\right))$$
(1)

where \(f\left(\overrightarrow{x}\right)\) represents Goal 1, the objective function to be minimized/maximized; \(d\left(\overrightarrow{x}\right)\) is the total amount of constraint violation, controlled by Goal 2; \(t\left(\overrightarrow{x}\right)\) is the ratio of satisfied constraints, represented by Goal 3; and \(H\left(\overrightarrow{x}\right)\) is the fitness value of a candidate solution. Goals 2 and 3 provide a basis for comparison because two infeasible solution points may have different constraint violation levels: Goal 2 measures the total amount of violation, while Goal 3 measures the ratio of satisfied constraints among all of them. Thus, the method can trade off “a solution point with a large violation of a single constraint but all others satisfied” against “a solution point with small violations of all the constraints.” If all constraints were satisfied for two solution points, Goal 1 would be the only criterion for comparing them.

This multi-objective function structure can achieve acceptable convergence by allowing constraint violations within a defined precision. It is worth emphasizing that the goals are weighted by different importance scores (c1, c2, c3). The sign of the objective function is also handled multiplicatively, depending on whether the objective function value is greater than zero or not.
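Since the exact multiplicative combination in Algorithm 4 is not reproduced here, the following Python sketch is only our illustrative assumption of a MUPE-style fitness: Goal 1 (objective f), Goal 2 (total deviation), and Goal 3 (satisfaction rate) are combined multiplicatively with importance scores c1, c2, c3, and the sign of the objective is handled multiplicatively so that infeasibility always worsens the fitness under minimization.

```python
def mupe_fitness(f, deviation, sat_rate, c1=1.0, c2=1.0, c3=1.0):
    """Illustrative MUPE-style combination (our assumption, not Algorithm 4):
    f        -- objective value (Goal 1)
    deviation-- total constraint violation (Goal 2), 0 when feasible
    sat_rate -- ratio of satisfied constraints in [0, 1] (Goal 3)"""
    sign = 1.0 if f > 0 else -1.0
    penalty = (1.0 + deviation) ** c2 * (2.0 - sat_rate) ** c3
    # multiply for positive objectives, divide for negative ones, so that
    # infeasibility always increases H in a minimization setting
    return sign * abs(f) ** c1 * penalty ** sign
```

A fully feasible point (deviation 0, satisfaction rate 1) keeps its raw objective value, while any infeasibility inflates the fitness regardless of the objective’s sign.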

3.4 Offset factor

In the initialization part, the solutions are assigned random values; they then start moving toward better solutions. In PbA, an offset factor represents the relationship between a candidate solution vector and its neighbors. The offset factor exerted on each solution is computed by Eq. (2), where \(H(\overrightarrow{{x}_{i}})\) denotes the fitness value of a candidate solution:

$$F=C\frac{H(\overrightarrow{{x}_{i}})H(\overrightarrow{{x}_{l}})}{{d}_{il}^{2}}$$
(2)

where F is the offset factor between solution vectors i and l; C is a constant; \(H\left(\overrightarrow{{x}_{i}}\right), H(\overrightarrow{{x}_{l}})\) are the fitness values of solutions i and l, respectively; and dil is the Euclidean distance between the solution vectors, calculated as follows:

$${d}_{il}=\sqrt{\sum_{j=1}^{n}{({x}_{ij}-{x}_{lj})}^{2}}=\sqrt{{({x}_{i1}-{x}_{l1})}^{2}+{({x}_{i2}-{x}_{l2})}^{2}+\dots +{({x}_{in}-{x}_{ln})}^{2}}$$
(3)

After the fitness values of all solution vectors are calculated, the offset factor exerted on each solution by its neighbors is used to find the new solution point. In the PbA, each solution is exposed to this factor from the closest solution vectors in the population according to the neighborhood principle.
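Equations (2) and (3) translate directly into code; only the function names are our own. The gravitation-like form of Eq. (2) makes the factor grow with the fitness product and shrink with the squared distance.

```python
import math

def euclidean(xi, xl):
    """Eq. (3): Euclidean distance between two solution vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xl)))

def offset_factor(H_i, H_l, xi, xl, C=1.0):
    """Eq. (2): F = C * H(x_i) * H(x_l) / d_il^2."""
    d = euclidean(xi, xl)
    return C * H_i * H_l / d ** 2
```

For instance, two solutions at distance 5 with fitness values 2 and 3 yield F = 1 · 2 · 3 / 25 = 0.24.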

3.5 Neighborhood

The number of neighbors is another critical issue to be addressed. According to Pareto’s principle, roughly 80% of the effects come from 20% of the causes [63]. In this context, when the neighboring vectors are sorted by offset-factor magnitude in descending order, the magnitudes decrease sharply after the 2nd–5th vectors, so the remaining solutions can be ignored in terms of their effect on fitness values. In the PbA, the two closest neighbors are assumed to carry the potential power to change a solution’s fitness value, and these neighbors are used in the displacement procedure of the potential solution.
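Selecting the two closest neighbors can be sketched as a distance sort (the function name and the index-based return are our own conventions):

```python
import math

def closest_neighbors(index, population, k=2):
    """Return the indices of the k nearest solutions (by Euclidean distance)
    to population[index]; PbA uses k = 2, motivated by the Pareto-style
    drop-off in offset-factor magnitudes."""
    target = population[index]
    dist = lambda p: math.sqrt(sum((a - b) ** 2 for a, b in zip(target, p)))
    others = [i for i in range(len(population)) if i != index]
    return sorted(others, key=lambda i: dist(population[i]))[:k]
```

The sort keeps the selection simple; for large populations a partial selection would be cheaper, but with the small population sizes used here this is not a concern.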

3.6 Offsetting

After identifying the neighbors, the change in each solution resulting from the offset factor must be calculated. Once all the factors exerted on the selected solution \(\overrightarrow{{x}_{i}}\) are combined, the unified (or compound) net change is calculated and the new solution is determined. The first iteration is completed after the factors from the neighbors have been considered for every solution vector. Algorithm 5 shows how the neighbors affect a solution until the net force is balanced.

Algorithm 5

After the offset factors are exerted on a solution, the unit offsetting can be found by Eq. (4). The amount of change for a solution is proportional to the related dimensional factor and inversely proportional to its fitness function value, as shown below:

$$\Delta {x}_{i}^{^{\prime}}=\frac{F\frac{\Delta {x}_{i}^{2}}{{d}^{2}}}{H({x}_{i})}=\frac{F}{H({x}_{i})} \frac{\Delta {x}_{i}^{2}}{{d}^{2}}$$
(4)

After all offset factors are considered, each solution receives a net change with both magnitude and direction. In addition to the neighbors’ effects, the best-so-far solution also affects the corresponding solution. The new value is determined by the weighted impact of the neighbors and the best-so-far solution, as given in Eq. (5).

$${x}_{i}^{k}= {x}_{i}^{k-1}+[\Delta {x}_{i}^{k-1}\alpha +{x}_{b}^{k-1}\left(1-\alpha \right)], \quad i=1, 2,.., n$$
(5)

where n is the number of dimensions, k is the iteration number, \(\alpha \epsilon \left[0, 1\right]\) is a dynamic parameter based on the improvement within loops, and \({x}_{b}^{k-1}\) is the best-so-far solution. Here, \(\alpha\) controls the balance between the influence of the neighbors and that of the best-so-far solution.

In line with the net offset factor to which the solutions are exposed, a solution changes its position only if the move yields a better fitness value, as shown in Eq. (6) for minimization problems. If the fitness value would worsen, the solution retains its current position, preserving the best-so-far solution in the population as in elitism selection. It should be clarified that a solution can only move within the allowed space determined by the intervals of the variables.

$$H\left({x}_{i}^{k}\right)<H\left({x}_{i}^{k-1}\right)$$
(6)

Furthermore, if a new solution falls outside the boundaries, it is clipped to the boundary. When all offset factors have been determined for each solution in the population, the first iteration is completed. This process chain is repeated until no remarkable factor from the neighbors occurs. However, if a candidate solution is still under the effect of the offset factor, its offset amount is slightly increased, as shown in Eq. (7).

$$\Delta {x}_{i}^{k}=\Psi\Delta {x}_{i}^{k-1}$$
(7)

where \(\Psi \in [1.01, 1.1]\) is a subjective parameter. In case of improvement, \(\Delta {x}_{i}^{k-1}\) continues to be multiplied by \(\Psi\). After all the factors caused by the neighbors are considered, the unified net change is checked; in case of a remarkable change, new solution vectors are determined.
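The update of Eq. (5) and the acceptance rule of Eq. (6) can be sketched per dimension as follows. The code follows Eq. (5) literally (adding the weighted best-so-far coordinates to the increment); the function names are our own, and boundary clipping and the Ψ amplification of Eq. (7) are omitted for brevity.

```python
def offset_step(x_prev, delta, x_best, alpha):
    """Eq. (5), per dimension: x_k = x_{k-1} + [delta*alpha + x_best*(1-alpha)].
    alpha balances neighbor influence against the best-so-far pull."""
    return [xp + (d * alpha + xb * (1 - alpha))
            for xp, d, xb in zip(x_prev, delta, x_best)]

def accept_if_better(H, x_old, x_new):
    """Eq. (6): keep the move only when the new fitness improves
    (minimization); otherwise retain the current position, as in elitism."""
    return x_new if H(x_new) < H(x_old) else x_old
```

With α = 1 the best-so-far pull vanishes and the move is driven purely by the neighbors’ offset, while α = 0 moves the solution entirely under the best-so-far influence.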

3.7 Duplication

As a diversification procedure, the PbA ensures that no two solution vectors can occupy the same fitness value in a closed system. Thus, solutions diversify without collapsing into a single point. In our experiments, two fitness values are considered identical when their first six decimal places are equal. In this step, the solution set is checked for duplicated solutions so that the algorithm does not get trapped in a local optimum, increasing the variety of solutions in the population.

While the duplication check provides diversification, offsetting over a wide range can create too much diversity and disrupt the balance. For this reason, offsetting around the best-known solution serves to balance the exploration–exploitation abilities of the PbA.
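The six-decimal duplication check can be sketched as follows. This minimal sketch only detects and drops duplicates; how the PbA re-diversifies the removed solutions is not specified here, so that part is omitted, and the function name is our own.

```python
def remove_duplicates(population, fitness, decimals=6):
    """Duplication check: two fitness values are treated as identical when
    they agree to the first `decimals` decimal places; only the first
    occurrence of each such value is kept."""
    seen, kept = set(), []
    for x in population:
        key = round(fitness(x), decimals)
        if key not in seen:      # keep only the first solution per fitness key
            seen.add(key)
            kept.append(x)
    return kept
```

Two solutions whose fitness values differ only beyond the sixth decimal place are treated as a single point, which is exactly the precision assumed in the experiments.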

3.8 Stopping condition

The stopping condition is based on the number of function evaluations (FES). The pseudocode for the stopping condition is given below. In most cases, the algorithm stops when the main stopping condition is met. The minimum, maximum, and average FES values are also reported for each problem.

Algorithm 6

4 An illustrative example

In this section, an example is presented to explain the procedure of the proposed algorithm step by step, including the most critical issues. The visualization uses fitness values and a minimization problem, as shown in Fig. 2. The demonstration covers four crucial steps: A. Initial Solution, B. Neighborhood, C. Offsetting, and D. Best-so-far effect in the run. Each step is explained in the following.

Fig. 2

An illustration of the PbA

The determination of the search space is the first step of PbA. At this stage, the boundaries of the variables are first taken as defaults; if there are constraints, the boundaries are then updated according to the root values of the constraints, and the final search space is revealed. After this preliminary preparation is completed, a random initial solution set is generated, as shown in Algorithm 3 and section A of Fig. 2. The next steps aim to reach a better point based on the neighborhood relations among the population elements and the relationship with the best-so-far solution (denoted in black). In Fig. 2, all steps are visualized for a single solution, denoted in red. The neighborhood step identifies the two closest neighbors (denoted in yellow) that may have the potential power to change the red solution’s fitness value; closeness is measured by Euclidean distance. Through the offset factors of the neighbors, the red solution tries to reach a better point, as shown in section C; the amount of change in its position is calculated with the help of Algorithm 5. Improvement efforts are not limited to the offset factors of the neighbors: the best-so-far solution in that run also has a certain effect (1 − α), complementing the neighbors’ effect (α), as given in Eq. (5). Consequently, the red solution takes its new position, as shown in section D. It should be clarified that these moves are applied only when they yield a better fitness value; otherwise, the solution retains its current position. The first iteration is completed when a new value has been calculated for each solution in the population. These chains of processes are repeated until there is no appreciable offsetting from the neighbors and the best-so-far solution.

5 Constrained engineering optimization problems

According to Ezugwu et al. [65], the strength of metaheuristics can be shown by applications in engineering design, management, industry, finance, and scheduling, which attracts both the scientific community and industry practitioners. In general, before proposing a new algorithm, its performance must be tested on appropriate benchmarks [66].

The highlight of the PbA is its ability to deal with constraints very well thanks to the multiplicative penalty method. For this reason, the performance of the algorithm is evaluated on the most frequently used engineering benchmark problems (Pressure Vessel, Welded Beam, Tension/Compression Spring Design, and Himmelblau’s Function), which may have complex constraints. Although these benchmarks are referred to as engineering design problems in the literature, they are essentially “cost minimization” problems. The parameters used in the PbA were determined through trial and error and are presented in Table 2.

Table 2 Parameter Settings

In many studies, the algorithms against which a newly developed algorithm is compared are re-run and reported by authors other than the scholars who originally developed them. However, this practice may introduce bias through different parameters, different software or hardware, or even different programming languages. For this reason, it is better to compare the developed algorithm with the results of other algorithms as reported in the literature. Where specified, the population size, the number of iterations, or the function evaluation count are good indicators for comparison. Since the main purpose is to reach the best-known solutions or improve upon them, the recently developed algorithms listed in Table 3 were preferred for comparison.

Table 3 Parameters used for algorithms published in the literature

In the comparison tables for each problem, six decimal places are documented where reported, and the results obtained by the PbA are presented with the same precision. The PbA was coded in Python, utilizing Python libraries and PyCharm [104]. The experiments were executed on an Intel Core i7 computer with a 2.60 GHz CPU and 12 GB RAM under the Windows operating system.

In the following sections, the experimental results for Pressure Vessel, Welded Beam, Tension/Compression Spring Design, and Himmelblau’s Function are presented in detail.

5.1 Pressure Vessel

The design of a Pressure Vessel is one of the most frequently used cost optimization problems. The design of the vessel is shown in Fig. 3. The problem has four variables: x3 and x4 are continuous, while x1 and x2 are integer multiples of 0.0625 in., the available thicknesses of the material.

Fig. 3

Pressure Vessel [105]

The model of the Pressure Vessel is given below:

$$\begin{gathered} {\text{Minimize}}\;f\left( {\vec{x}} \right) = 0.6224x_{1} x_{3} x_{4} + 1.7781x_{2} x_{3}^{2} + 3.1661x_{1}^{2} x_{4} + 19.84x_{1}^{2} x_{3} \hfill \\ {\text{Subject}}\;{\text{to}} \hfill \\ g_{1} \left( {\vec{x}} \right) = - x_{1} + 0.0193x_{3} \le 0 \hfill \\ g_{2} \left( {\vec{x}} \right) = - x_{2} + 0.00954x_{3} \le 0 \hfill \\ g_{3} \left( {\vec{x}} \right) = - \pi x_{3}^{2} x_{4} - \frac{4}{3}\pi x_{3}^{3} + 1296000 \le 0 \hfill \\ x_{1} ,x_{2} \in \left[ {0.0625, 10} \right];x_{3} \in \left[ {0, 100} \right];x_{4} \in \left[ {0, 240} \right] \hfill \\ \end{gathered}$$
(8)
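The model in Eq. (8) can be transcribed directly into code; the function name and the (cost, constraints) return convention are our own. Feasibility requires every constraint value to be at most zero.

```python
import math

def pressure_vessel(x):
    """Objective and constraints of Eq. (8); returns (cost, [g1, g2, g3])."""
    x1, x2, x3, x4 = x
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = [-x1 + 0.0193 * x3,
         -x2 + 0.00954 * x3,
         -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000]
    return cost, g
```

For example, the (non-optimal but feasible) design x = (1.0, 0.5, 50.0, 200.0) satisfies all three constraints with a cost of about 10071.845; an optimizer would then drive this cost down while keeping every g non-positive.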

Before reporting the final solutions for each problem, it is useful to show how the proposed algorithm handles a problem step by step and arrives at a near-optimal result. The convergence process of the PbA is exemplified for the Pressure Vessel in Table 4; the other engineering design problems converge similarly.

Table 4 Convergence of the proposed algorithm

The experimental results for the Pressure Vessel are summarized in Table 5 as descriptive statistics. Although the maximum FES is limited to 30,000, the actual FES values are reported as well.

Table 5 Experimental results of Pressure Vessel

The best-so-far solution for the Pressure Vessel obtained by the PbA is given in Table 6. Moreover, the left-hand side values of each constraint are also provided in Table 6 to show the feasibility of the solution.

Table 6 Best-so-far solution for Pressure Vessel

Figure 4 visualizes the algorithm’s convergence to the global minimum. It shows the objective function values against the FES value in the run that reached the global minimum among the 30 trials. As can be seen, the best-so-far solution is reached after approximately 5000 function evaluations.

Fig. 4

The convergence to the best-so-far solution (pressure vessel)

The Pressure Vessel problem has previously been handled with many other metaheuristic algorithms. Since there are too many algorithms in the literature, only those developed after 2010 are presented chronologically in Table 7. Most of the reported solutions do not satisfy the requirement that x1 and x2 be integer multiples of 0.0625. The feasible best-known solution for the Pressure Vessel was first obtained by FA, and no other algorithm had reached this solution since. The PbA provides the same best-known solution with a smaller population size than FA.

Table 7 Comparisons for pressure vessel (Best-so-far solution)

The descriptive statistics of the experiments are shown in Table 8. As can be seen, the performance of the PbA is better than that of the other algorithms. Although the worst value and the standard deviation are relatively higher than those of the others, the feasible best-known solution is obtained by the PbA.

Table 8 Comparisons for pressure vessel (descriptive statistics)

5.2 Welded Beam

The Welded Beam is a structural optimization problem generally used as a benchmark [106]. The structure consists of a beam and its weld for mass production, as shown in Fig. 5. This cost optimization problem has four design variables and six constraints. The model is given in Eq. (9).

$$\begin{gathered} {\text{Minimize}}\;f\left( {\vec{x}} \right) = \left( {1 + c_{1} } \right)x_{1}^{2} x_{2} + c_{2} x_{3} x_{4} \left( {L + x_{2} } \right) \hfill \\ {\text{Subject}}\;{\text{to}} \hfill \\ g_{1} \left( {\vec{x}} \right) = \tau \left( {\vec{x}} \right) - \tau_{\max } \le 0 \hfill \\ g_{2} \left( {\vec{x}} \right) = \sigma \left( {\vec{x}} \right) - \sigma_{\max } \le 0 \hfill \\ g_{3} \left( {\vec{x}} \right) = x_{1} - x_{4} \le 0 \hfill \\ g_{4} \left( {\vec{x}} \right) = c_{1} x_{1}^{2} + c_{2} x_{3} x_{4} \left( {L + x_{2} } \right) - 5 \le 0 \hfill \\ g_{5} \left( {\vec{x}} \right) = \delta \left( {\vec{x}} \right) - \delta_{\max } \le 0 \hfill \\ g_{6} \left( {\vec{x}} \right) = P - P_{c} \left( {\vec{x}} \right) \le 0 \hfill \\ \tau \left( {\vec{x}} \right) = \sqrt {\left( {\tau^{\prime } } \right)^{2} + 2\tau^{\prime } \tau^{\prime \prime } \frac{{x_{2} }}{2R} + \left( {\tau^{\prime \prime } } \right)^{2} } \hfill \\ \tau^{\prime } = \frac{P}{{\sqrt 2 x_{1} x_{2} }};\;\tau^{\prime \prime } = \frac{MR}{J};\;M = P\left( {L + \frac{{x_{2} }}{2}} \right);\;R = \sqrt {\frac{{x_{2}^{2} }}{4} + \left( {\frac{{x_{1} + x_{3} }}{2}} \right)^{2} } \hfill \\ J = 2\left\{ {\sqrt 2 x_{1} x_{2} \left[ {\frac{{x_{2}^{2} }}{12} + \left( {\frac{{x_{1} + x_{3} }}{2}} \right)^{2} } \right]} \right\} \hfill \\ \sigma \left( {\vec{x}} \right) = \frac{6PL}{{x_{4} x_{3}^{2} }};\;\delta \left( {\vec{x}} \right) = \frac{{4PL^{3} }}{{Ex_{3}^{3} x_{4} }} \hfill \\ P_{c} \left( {\vec{x}} \right) = \frac{{4.013E\sqrt {\frac{{x_{3}^{2} x_{4}^{6} }}{36}} }}{{L^{2} }}\left( {1 - \frac{{x_{3} }}{2L}\sqrt {\frac{E}{4G}} } \right) \hfill \\ x_{1} \in \left[ {0.125, 5} \right];\;x_{2} ,x_{3} \in \left[ {0.1, 10} \right];\;x_{4} \in \left[ {0.1, 5} \right] \hfill \\ c_{1} = 0.10471;\;c_{2} = 0.04811;\;P = 6000;\;L = 14;\;E = 30000000;\;G = 12000000 \hfill \\ \delta_{\max } = 0.25;\;\tau_{\max } = 13600;\;\sigma_{\max } = 30000 \hfill \\ \end{gathered}$$
(9)
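As a sanity check, Eq. 9 can be transcribed directly. The sketch below is an illustrative helper (not part of the PbA itself) that evaluates the cost and the six constraint values; the design vector in the usage line is the best-known solution widely reported for this benchmark.

```python
import math

# Constants of Eq. 9
C1, C2 = 0.10471, 0.04811
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam(x1, x2, x3, x4):
    """Return the cost f and constraint values g_i (feasible when g_i <= 0)."""
    tau_p = P / (math.sqrt(2) * x1 * x2)                     # tau'
    M = P * (L + x2 / 2)
    R = math.sqrt(x2 ** 2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * math.sqrt(2) * x1 * x2 * (x2 ** 2 / 12 + ((x1 + x3) / 2) ** 2)
    tau_pp = M * R / J                                       # tau''
    tau = math.sqrt(tau_p ** 2 + 2 * tau_p * tau_pp * x2 / (2 * R) + tau_pp ** 2)
    sigma = 6 * P * L / (x4 * x3 ** 2)
    delta = 4 * P * L ** 3 / (E * x3 ** 3 * x4)
    p_c = (4.013 * E * math.sqrt(x3 ** 2 * x4 ** 6 / 36) / L ** 2
           * (1 - x3 / (2 * L) * math.sqrt(E / (4 * G))))
    f = (1 + C1) * x1 ** 2 * x2 + C2 * x3 * x4 * (L + x2)
    g = [tau - TAU_MAX,                                      # g1: shear stress
         sigma - SIGMA_MAX,                                  # g2: bending stress
         x1 - x4,                                            # g3: geometry
         C1 * x1 ** 2 + C2 * x3 * x4 * (L + x2) - 5,         # g4: cost bound
         delta - DELTA_MAX,                                  # g5: deflection
         P - p_c]                                            # g6: buckling load
    return f, g

# Best-known design from the literature: f ≈ 1.724852
f, g = welded_beam(0.205730, 3.470489, 9.036624, 0.205730)
```

At this design, the stress and buckling constraints are active (close to zero), which is why solutions with even slightly smaller objective values become infeasible.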
Fig. 5 Welded beam [105]

The experimental results for Welded Beam are given in Table 9 as descriptive statistics. For the Welded Beam problem only, the maximum FES is limited to 100,000, and the actual FES values are also reported.

Table 9 Experimental results of Welded Beam

The best-so-far solution for Welded Beam obtained by the PbA is given in Table 10. To show the feasibility of the solution, each constraint's left-hand-side value is also provided. As seen from these values, only one constraint shows a violation of 2.96612E-07, which is negligible. Therefore, the solution can be considered feasible.

Table 10 Best-so-far solution for welded beam

In Fig. 6, the convergence of the algorithm to the global minimum is shown. Figure 6 plots the objective function values against the FES value for the run that reached the global minimum among the 30 trials. According to the graph, the algorithm converged to the best-so-far solution after 30,000 FES.

Fig. 6 The convergence to the best-so-far solution (Welded Beam)

The feasible best-known solution for Welded Beam reported in the literature is 1.724852. Most of the algorithms have reached this value; however, the obtained solutions differ after the fifth decimal place, as shown in Table 11. It is worth noting that solutions below 1.724852 violate at least one constraint.

Table 11 Comparisons for Welded Beam (Best-so-far solution)

In Table 12, the descriptive statistics for Welded Beam are reported. Table 12 indicates that the PbA nearly achieves the best-known solution, with a negligible deviation of 2.04E-05.

Table 12 Comparisons for Welded Beam (Descriptive Statistics)

5.3 Tension/Compression Spring Design

The Tension/Compression Spring Design problem is an optimization benchmark that minimizes the spring's weight [107]. There are three variables and four constraints. The structure and the model are given in Fig. 7 and Eq. 10, respectively.

$$\begin{gathered} {\text{Minimize}}\;f\left( {\vec{x}} \right) = \left( {x_{3} + 2} \right)x_{2} x_{1}^{2} \hfill \\ {\text{Subject}}\;{\text{to}} \hfill \\ g_{1} \left( {\vec{x}} \right) = 1 - \frac{{x_{2}^{3} x_{3} }}{{71785x_{1}^{4} }} \le 0 \hfill \\ g_{2} \left( {\vec{x}} \right) = \frac{{4x_{2}^{2} - x_{1} x_{2} }}{{12566\left( {x_{2} x_{1}^{3} - x_{1}^{4} } \right)}} + \frac{1}{{5108x_{1}^{2} }} - 1 \le 0 \hfill \\ g_{3} \left( {\vec{x}} \right) = 1 - \frac{{140.45x_{1} }}{{x_{2}^{2} x_{3} }} \le 0 \hfill \\ g_{4} \left( {\vec{x}} \right) = \frac{{x_{1} + x_{2} }}{1.5} - 1 \le 0 \hfill \\ x_{1} \in \left[ {0.05, 1} \right];x_{2} \in \left[ {0.25, 1.3} \right];x_{3} \in \left[ {2, 15} \right] \hfill \\ \end{gathered}$$
(10)
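Eq. 10 is compact enough to transcribe directly. The illustrative helper below (an assumption for demonstration, not the PbA) returns the weight and the four constraint values; the design vector in the usage line is the best-known solution commonly reported for this benchmark.

```python
def spring(x1, x2, x3):
    """Weight f and constraints g_i of Eq. 10 (satisfied when g_i <= 0)."""
    f = (x3 + 2) * x2 * x1 ** 2
    g = [1 - x2 ** 3 * x3 / (71785 * x1 ** 4),               # g1: deflection
         (4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4))
             + 1 / (5108 * x1 ** 2) - 1,                     # g2: shear stress
         1 - 140.45 * x1 / (x2 ** 2 * x3),                   # g3: surge frequency
         (x1 + x2) / 1.5 - 1]                                # g4: outer diameter
    return f, g

# Best-known design from the literature: f ≈ 0.012665
f, g = spring(0.051690, 0.356718, 11.288966)
```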
Fig. 7 Tension/Compression Spring Design [105]

The experimental results for Tension/Compression Spring Design are given in Table 13 as descriptive statistics. The maximum FES is limited to 30,000, and the minimum and average FES values are also reported.

Table 13 Experimental results of Tension/Compression Spring Design

The best-so-far solution for Tension/Compression Spring Design obtained by the PbA is given in Table 14. As seen in Table 14, all constraints are satisfied.

Table 14 Best-so-far solution for Tension/Compression Spring Design

Figure 8 visualizes the algorithm's convergence to the global minimum. It plots the objective function values against the FES value for the run that reached the global minimum among the 30 trials. The best-so-far solution was obtained very early in the FES budget.

Fig. 8 The convergence to the best-so-far solution (Tension/Compression Spring Design)

According to Table 15, the feasible best-known solution reported in the literature is 0.012665. A digit-by-digit comparison of the solutions reaching 0.012665 shows that the PbA also attains the best-known value. It is worth noting that solutions below 0.012665 violate at least one constraint.

Table 15 Comparisons for Tension/Compression Spring Design (best-so-far solution)

The descriptive statistics for Tension/Compression Spring Design are listed in Table 16. As seen, the PbA outperforms the solutions reached by the other algorithms.

Table 16 Comparisons for Tension/Compression Spring Design (Descriptive Statistics)

5.4 Himmelblau’s Function

Himmelblau proposed this nonlinear constrained optimization problem in 1972, and it is commonly treated as a mechanical engineering benchmark [108]. This well-known problem has five variables and six constraints (three constraint functions, each bounded from below and above). The model of the problem is given in Eq. 11.

$$\begin{gathered} {\text{Minimize}}\;f\left( {\vec{x}} \right) = 5.3578547x_{3}^{2} + 0.8356891x_{1} x_{5} + 37.293239x_{1} - 40792.141 \hfill \\ {\text{Subject}}\;{\text{to}} \hfill \\ g_{1} \left( {\vec{x}} \right) = 85.334407 + 0.0056858x_{2} x_{5} + 0.00026x_{1} x_{4} - 0.0022053x_{3} x_{5} \hfill \\ g_{2} \left( {\vec{x}} \right) = 80.51249 + 0.0071317x_{2} x_{5} + 0.0029955x_{1} x_{2} + 0.0021813x_{3}^{2} \hfill \\ g_{3} \left( {\vec{x}} \right) = 9.300961 + 0.0047026x_{3} x_{5} + 0.0012547x_{1} x_{3} + 0.0019085x_{3} x_{4} \hfill \\ 0 \le g_{1} \left( {\vec{x}} \right) \le 92 \hfill \\ 90 \le g_{2} \left( {\vec{x}} \right) \le 110 \hfill \\ 20 \le g_{3} \left( {\vec{x}} \right) \le 25 \hfill \\ x_{1} \in \left[ {78,102} \right];x_{2} \in \left[ {33,45} \right];x_{3} ,x_{4} ,x_{5} \in \left[ {27,45} \right] \hfill \\ \end{gathered}$$
(11)
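Eq. 11 differs from the previous problems in that its constraints are double-bounded intervals. The illustrative helper below (an assumption for demonstration, not the PbA) evaluates the objective, the three constraint functions, and the interval checks; the corner point in the usage line attains a low objective value yet turns out to be infeasible, because g1 and g3 fall just outside their intervals.

```python
def himmelblau(x1, x2, x3, x4, x5):
    """Objective f, constraint values (g1, g2, g3), and feasibility for Eq. 11."""
    f = (5.3578547 * x3 ** 2 + 0.8356891 * x1 * x5
         + 37.293239 * x1 - 40792.141)
    g1 = 85.334407 + 0.0056858 * x2 * x5 + 0.00026 * x1 * x4 - 0.0022053 * x3 * x5
    g2 = 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 + 0.0021813 * x3 ** 2
    g3 = 9.300961 + 0.0047026 * x3 * x5 + 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4
    feasible = (0 <= g1 <= 92) and (90 <= g2 <= 110) and (20 <= g3 <= 25)
    return f, (g1, g2, g3), feasible

# A corner of the box: low objective value, but infeasible (g1 > 92 and g3 < 20)
f, g, ok = himmelblau(78, 33, 27, 45, 45)
```

This illustrates why a constraint-handling mechanism matters here: simply pushing variables to the bounds that minimize f produces interval violations.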

The experimental results for Himmelblau’s Function are given in Table 17 as descriptive statistics. The maximum FES is limited to 30,000, and the minimum and average FES values are also reported.

Table 17 Experimental results of Himmelblau’s Function

The solution for Himmelblau’s Function obtained by the PbA is given in Table 18. As seen, all constraint values lie within their defined ranges, meaning there are no violations.

Table 18 Best-so-far solution for Himmelblau’s Function

In Fig. 9, the convergence of the algorithm to the global minimum is shown. It plots the objective function values against the FES value for the run that reached the global minimum among the 30 trials. According to Fig. 9, the algorithm converged to the best-so-far solution in fewer than 10,000 FES.

Fig. 9 The convergence to the best-so-far solution (Himmelblau)

In Table 19, the feasible best-known solutions obtained for Himmelblau’s Function are given. H-GSA-GA reaches the best-known solution of -31,027.64076. However, it should be noted that H-GSA-GA uses a population size of 20 × Dimension, which amounts to 100.

Table 19 Comparisons for Himmelblau’s Function (Best-so-far solution)

The descriptive statistics for Himmelblau’s Function are listed in Table 20. As seen, the performance of the PbA on Himmelblau’s Function is highly competitive.

Table 20 Comparisons for Himmelblau’s Function (Descriptive Statistics)

6 Discussion and conclusion

This study presents a new population-based evolutionary computation model for solving continuous constrained nonlinear optimization problems. The proposed algorithm assumes that candidate solutions in a defined search space interact in order to survive by attaining better fitness values. The interaction between candidate solutions is limited to the closest neighbors, measured by Euclidean distance. The outstanding feature of the PbA is the MUPE, which combines the rate of satisfied constraints and the deviations from violated constraints with the objective function in a multiplicative manner. Another prominent feature of the PbA is the control mechanism for duplication, which enforces the Pauli Exclusion Principle from physics. The PbA is structured as a memory-based algorithm, drawing to some extent on Tabu Search and elitism. A database structure has been created to reduce the function evaluation load of the algorithm; this database can be regarded as the memory capability of the PbA. However, unlike in Tabu Search, the database is designed as a particle repository rather than a banned list. In addition, the best solutions achieved are recorded and kept separately. This does not rely on a fixed ratio as in the elitism approach; instead, the best set is used up to the population size. Although the PbA is inspired by natural phenomena and other algorithms in the literature, the primary goal is to achieve better results on a specific problem type, regardless of metaphors and similarities.
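The MUPE idea sketched above can be illustrated in code. The multiplicative form, the weights `w_rate` and `w_dev`, and the helper name below are assumptions for illustration only (the paper's actual importance scores are tuned by trial and error and are not reproduced here); the sketch also assumes a non-negative objective to be minimized.

```python
def mupe_fitness(f_value, g_values, w_rate=1.0, w_dev=1.0):
    """Illustrative multiplicative penalty (not the paper's exact MUPE):
    combine the objective with the rate of unsatisfied constraints and
    the total deviation.  A constraint g_i <= 0 counts as satisfied."""
    violations = [max(0.0, g) for g in g_values]
    unsatisfied_rate = sum(v > 0.0 for v in violations) / len(g_values)
    total_dev = sum(violations)
    # Feasible points keep the raw objective; infeasible points are inflated
    # both by how many constraints they break and by how much.
    return f_value * (1 + w_rate * unsatisfied_rate + w_dev * total_dev)
```

Under this form, a feasible point keeps its objective value unchanged, while an infeasible one is penalized in proportion to both the fraction of violated constraints and the magnitude of the violations.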

To show the performance of the PbA, it is applied to the most common engineering design problems, namely Pressure Vessel, Welded Beam, Tension/Compression Spring Design, and Himmelblau’s Function. The experimental studies have shown that the PbA handles constraints well while minimizing the objective functions and has reached the best-known solutions in the literature. It should be pointed out that the solution set size significantly impacts the results. For this reason, the PbA has been run with the minimum set size listed in the comparison tables. With a larger set size, the best-known solutions could likely be reached even more quickly.

Limitations of this study should be noted as well. The evaluation of the proposed algorithm is limited to the real-world engineering design problems considered here. Furthermore, the parameter setting for the importance scores in the MUPE approach has been determined by trial and error, which can be considered a disadvantage of the proposed algorithm. Apart from this, hardware characteristics such as the central processing unit (CPU), data storage, and motherboard can also be considered limitations.

For further studies, different benchmark problem groups (unconstrained, multi-objective, combinatorial, etc.) can be solved after some modifications of the PbA. Furthermore, the PbA can be integrated with other constraint-handling methods, and such modifications may achieve the desired results with fewer function evaluations. It can also be combined with various algorithms to form hybrid algorithms. Moreover, as stated by Gambella et al. [109], framing optimization problems for machine learning opens novel research directions by treating machine learning models as complementary approaches to existing optimization algorithms. For instance, the proposed algorithm can be implemented wherever other evolutionary algorithms, such as the genetic algorithm, have been utilized for itemset mining, as shown in [110, 111]. Besides, it can be integrated with deep learning algorithms for disease diagnosis, as handled in [112].