Abstract

Circuit design plays a pivotal role in engineering, ensuring the creation of efficient, reliable, and cost-effective electronic devices. The complexity of modern circuit design problems has led to the exploration of multi-objective optimization techniques for circuit design optimization, as traditional optimization tools fall short in handling such problems. While metaheuristic algorithms, especially genetic algorithms, have demonstrated promise, their susceptibility to premature convergence poses challenges. This paper proposes a pioneering approach, the chaotic multi-objective Runge–Kutta algorithm (CMRUN), for circuit design optimization, building upon the Runge–Kutta optimization algorithm (RUN). By infusing chaos into the core RUN structure, a refined balance between exploration and exploitation is obtained, critical for addressing complex optimization landscapes and enabling the algorithm to navigate nonlinear and nonconvex optimization challenges effectively. This approach is extended to accommodate multiple objectives, ultimately generating Pareto Fronts for the multiple circuit design goals. The performance of the CMRUN is rigorously evaluated against 11 multi-objective algorithms on 15 benchmark test functions and practical circuit design scenarios. The findings of this study underscore the efficiency and real-world applicability of the CMRUN, offering valuable insights for tailoring optimization algorithms to real-world circuit design challenges.

1. Introduction

Designing analog circuits can be frustrating due to the many constraints that must be satisfied for a circuit to function correctly. An engineer aims to develop a circuit that meets certain specifications and the International Electrotechnical Commission (IEC) standards [1]. For most integrated circuits, the design of the analog part takes the larger portion of the overall design time. This is because analog circuit design involves tuning the circuit parameters through trial and error until the required output is met. This process is time-consuming and may not produce the desired results.

Analog circuit design optimization refers to the improvement of the performance of an analog circuit by adjusting its parameters or components to meet specific performance goals, such as increased accuracy, stability, power efficiency, or reduced noise [2]. This process involves using mathematical models, simulation tools, and optimization algorithms to identify the best combination of circuit components and parameters to achieve the desired performance characteristics. Optimization may be iterative, with multiple design iterations being evaluated until the desired performance criteria are met [2].

Circuit design requires the engineer to know the components used in the design, and it generally involves three steps: topology selection, component sizing, and layout generation [1]. During topology selection, power loss, power gain, current, temperature, circuit stability, and noise are specified, and constraints are set. Component sizing ensures the circuit design is not bulky and uses the fewest components possible to avoid redundancy [1]. Many circuit simulation packages, such as Autodesk Eagle, NI Multisim, and LTspice, allow testing of the circuit parameters to see whether the desired results are obtained. This iterative trial-and-error tuning is the most widely used method. Since the above process is time-consuming, there have been several efforts to develop optimization tools to speed up analog circuit design [1]. However, traditional optimization tools use direct or gradient search methods that rely on initial guesses. These tools struggle with discrete variables and nonlinear constraints and get stuck in local minima when optimizing complex, nonlinear, or nonconvex circuit design problems [1].

Metaheuristic optimization algorithms (MOAs), divided into three classes, namely physics-based, evolutionary, and swarm-based algorithms, have replaced traditional optimization tools for circuit design optimization, especially in handling complex design problems, saving time and resources. Metaheuristics are guidelines used to establish rules for solving optimization problems. These rules are referred to as heuristics and are the basic foundations of the algorithms. Despite their high performance in producing optimal solutions, MOAs suffer several limitations, such as high parameter sensitivity and difficulty in setting control parameters.

The main problem for all MOAs is balancing the exploration and exploitation phases for optimum performance [3]. The exploration phase involves searching for the global optimal solutions, while the exploitation phase is when the algorithm searches for the local optimal solutions. Algorithms stuck in local optima experience premature convergence, leading to inaccurate results [4]. The convergence rate is the rate at which an algorithm converges to the global optimum solution. MOAs are stochastic since they use randomly generated components in their optimization process; hence, achieving the appropriate balance between the two phases is difficult [3]. Most researchers try to improve this balance by combining MOAs with other optimizers or by choosing control parameters appropriately [4].

In 2021, Ahmadianfar et al. [3] proposed the Runge–Kutta optimization algorithm (RUN) as a more robust alternative to metaphor-based optimization algorithms. The RUN maintains a better balance between the exploration and exploitation phases [3]. This algorithm is divided into two parts. The first part is the search mechanism that utilizes slope calculations of the Runge–Kutta (RK) method to provide a more powerful search space for optimization, and the second part is the enhanced solution quality (ESQ), which improves the algorithm’s efficiency by producing more quality solutions than the initially obtained solutions [3].

Despite the RUN showing superior performance compared to metaphor-based algorithms, it still suffers from premature convergence on some problems; thus, Yıldız et al. [5] proposed using chaos to enhance the base RUN, making it chaotic (CRUN) and improving its performance by increasing the convergence rate while maintaining the balance between the exploration and exploitation phases of the algorithm. The CRUN was applied to real-world design problems and showed superior performance regarding the diversity of solutions and convergence speed [5]. However, both the RUN and CRUN algorithms are single-objective, and circuit design involves optimizing multiple objectives, such as increasing power efficiency while minimizing cost.

This paper proposes a chaotic multi-objective Runge–Kutta optimization algorithm (CMRUN) for circuit design optimization. Introducing chaos into the base RUN enhances its performance by improving the balance between the exploitation and exploration phases and, thus, a high-convergence rate while avoiding local optima. Furthermore, multi-objective optimization techniques are incorporated into this chaotic RUN, allowing the optimization of multiple circuit performance metrics simultaneously [6].

The organization of this paper is as follows: Section 2 explains the formulation and optimization methods of the RUN algorithm, its limitations, and the methods undertaken to improve it. Section 3 gives the background of the concepts (chaos, multi-objective optimization, and circuit design) used to complete this work. Section 4 describes the methods used to design the CMRUN, the performance measurement criteria, and the circuit parameters to be optimized. Section 5 evaluates the performance of the CMRUN in optimizing benchmark functions and circuit parameters. Section 6 provides conclusions and suggestions for future research.

2. Literature Review

2.1. The Runge–Kutta Optimization Algorithm (RUN)
2.1.1. Inspiration behind the Runge–Kutta Optimization Algorithm

The RK method is one of the techniques in numerical methods used to find solutions for first-order ordinary differential equations of the form

dy/dx = f(x, y). (1)

The slope of the solution curve at a point (x_n, y_n) is f(x_n, y_n), as defined in the equation above, giving the main idea of what the algorithm will base its search space on. Figure 1 shows the slopes utilized in the RK method.

As shown below, the RK method can be derived using the Taylor series and ignoring the higher-order terms:

y_{n+1} = y_n + Δx·y'_n + (Δx²/2!)·y''_n + (Δx³/3!)·y'''_n + ⋯ (2)

The equation below is the approximate solution for the Taylor series derivation of the RK method after dropping the higher-order terms:

y_{n+1} ≈ y_n + Δx·y'_n + (Δx²/2)·y''_n. (3)

From Equation (3), we can rewrite the equation as shown below to begin finding the formula for the RK fourth-order method:

y_{n+1} = y_n + (Δx/2)·(k1 + k2), (4)

where k1 = f(x_n, y_n) and k2 = f(x_n + Δx, y_n + Δx·k1).

The first-order derivative, which is equivalent to k1 as seen above, can be determined using the central-difference approximation shown in the expression below:

y' ≈ (y_{x+Δx} − y_{x−Δx}) / (2Δx). (5)

Therefore, we can use Equation (5) to rewrite Equation (4), replacing the derivative terms with their central-difference approximations.

The solutions using this method are formulated as shown below:

y_{n+1} = y_n + (Δx/6)·(k1 + 2k2 + 2k3 + k4),

where
k1 = f(x_n, y_n),
k2 = f(x_n + Δx/2, y_n + (Δx/2)·k1),
k3 = f(x_n + Δx/2, y_n + (Δx/2)·k2),
k4 = f(x_n + Δx, y_n + Δx·k3).

From the equations above, k1 uses y_n to give the slope at the beginning of the interval [x_n, x_n + Δx]. k2 is defined by the midpoint slope using y_n and k1 [3]. k3 is defined by the midpoint slope using y_n and k2, and k4 is determined by the slope at the end using y_n and k3. Remember, from the Runge–Kutta method, the value y_{n+1} is given by the initial value y_n and the weighted average of k1, k2, k3, and k4. This is depicted in Figure 2. Figures 1 and 2 are obtained from the study of Ahmadianfar et al. [3].
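To make the mechanics concrete, the following minimal sketch applies one RK4 step to the test problem dy/dx = y (Python is used here purely for illustration; the paper's implementation is in MATLAB):

```python
import math

def rk4_step(f, x_n, y_n, dx):
    """One fourth-order Runge-Kutta step for dy/dx = f(x, y)."""
    k1 = f(x_n, y_n)                          # slope at the start of [x_n, x_n + dx]
    k2 = f(x_n + dx / 2, y_n + (dx / 2) * k1) # midpoint slope using k1
    k3 = f(x_n + dx / 2, y_n + (dx / 2) * k2) # midpoint slope using k2
    k4 = f(x_n + dx, y_n + dx * k3)           # slope at the end using k3
    return y_n + (dx / 6) * (k1 + 2 * k2 + 2 * k3 + k4)  # weighted average of slopes

# Example: dy/dx = y with y(0) = 1, so y(1) should approach e.
f = lambda x, y: y
x, y = 0.0, 1.0
for _ in range(10):
    y = rk4_step(f, x, y, 0.1)
    x += 0.1
print(round(y, 5), round(math.e, 5))  # 2.71828 2.71828
```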

2.1.2. Optimization Steps of the RUN

This algorithm combines random components within a population-based (swarm) model with the RK logic, using slope calculations from ordinary differential equations [3]. This slope is used as the search mechanism to explore and find the best solutions while observing the rules of evolution of the swarm-based model. The RK optimizer is mathematically formulated in the sections below.

(1) Initialization of the RUN Algorithm. An initial swarm will undergo evolution for several iterations to start the algorithm. Therefore, initializing a size N population means N random positions are generated [3, 5]. The solutions of the optimization problem are the members of the population with a dimension D. The idea below is used in generating the random initial solutions in the RUN algorithm.

x_{n,l} = X_l^min + rand × (X_l^max − X_l^min),

where X_l^min is the lower limit of the l-th parameter, X_l^max is the upper limit of the l-th parameter, and rand is a random number in [0, 1].
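A minimal sketch of this initialization (illustrative Python; an N × D array with per-parameter bounds):

```python
import numpy as np

def initialize_population(n_pop, dim, lower, upper, seed=None):
    """x_{n,l} = X_min(l) + rand * (X_max(l) - X_min(l)), rand in [0, 1]."""
    rng = np.random.default_rng(seed)
    lower = np.broadcast_to(np.asarray(lower, dtype=float), (dim,))
    upper = np.broadcast_to(np.asarray(upper, dtype=float), (dim,))
    return lower + rng.random((n_pop, dim)) * (upper - lower)

pop = initialize_population(n_pop=50, dim=30, lower=-100, upper=100, seed=0)
print(pop.shape, pop.min() >= -100, pop.max() <= 100)  # (50, 30) True True
```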

(2) Development of a Searching Mechanism Using the Runge–Kutta Method. The RUN algorithm's searching mechanism is centered on the RK method and uses random solutions to perform searches in the search space, both globally and locally [3, 5, 7]. Like every other optimizer, this is the core of the program. From Equation (5) of the RK method, the two neighbors of the position x_n, namely x_n + Δx and x_n − Δx, are replaced by:

x_b, the best position, and x_w, the worst position.

We can use Equation (5) to come up with the first coefficient, k1, defining it as follows:

k1 = (x_w − x_b) / (2Δx),

where x_w is the worst solution at each iteration and x_b is the best solution at each iteration. x_b and x_w are determined by comparing the current solution with the best of three random solutions selected from the population. These three solutions are x_r1, x_r2, and x_r3.

From the equation of k1, we can see that it lacks stochastic behavior. Therefore, to improve the exploration search phase, we introduce randomness and rewrite the equation as follows:

k1 = (1/(2Δx)) × (rand × x_w − u × x_b),

where rand is a random number in the range [0, 1].

The best solution (x_b) is essential in enhancing the global search to find the global best solution. Thus, the parameter u is introduced to increase the importance of the best solution [3].

The range between adjacent positions is given by:

Δx = 2 × rand × |Stp|. (13)

The step size (Stp) is given by the equation:

Stp = rand × ((x_b − rand × x_avg) + γ), (14)

where γ is the scale factor determined by:

γ = rand × (x_n − rand × (x_max − x_min)) × exp(−4 × it/MaxIt). (15)

The scale factor γ decreases exponentially during optimization and is determined by the size of the solution space. The randomness of the numbers in Equations (13)–(15) provides diversification in the searches [3].

The rest of the coefficients, k2, k3, and k4, are written below:

k2 = (1/(2Δx)) × (rand × (x_w + rand1 × k1 × Δx) − (u × x_b + rand2 × k1 × Δx)),
k3 = (1/(2Δx)) × (rand × (x_w + rand1 × (k2/2) × Δx) − (u × x_b + rand2 × (k2/2) × Δx)),
k4 = (1/(2Δx)) × (rand × (x_w + rand1 × k3 × Δx) − (u × x_b + rand2 × k3 × Δx)).

The numbers rand1 and rand2 are random numbers in the range [0, 1]. The following rule determines the worst and best solutions, x_w and x_b:

if f(x_n) < f(x_bi) then
 x_b = x_n, x_w = x_bi
else
 x_b = x_bi, x_w = x_n
end

where x_bi is the best random solution from the random selections (x_r1, x_r2, and x_r3).

The searching mechanism (SM) of the RUN is shown in Figure 3 and is given by:

SM = (1/6) × x_RK × Δx,

in which

x_RK = k1 + 2 × k2 + 2 × k3 + k4.

(3) Updating the Solutions. During optimization, the RUN algorithm updates the positions of the solutions at each iteration using the RK method, since the initial solutions are random selections [3]. The following rule shows how the solutions are updated in the exploration and exploitation phases:

if rand < 0.5 then
 x_{n+1} = (x_c + r × SF × g × x_c) + SF × SM + μ × x_s (exploration)
else
 x_{n+1} = (x_m + r × SF × g × x_m) + SF × SM + μ × x_s′ (exploitation)
end

where μ = 0.5 + 0.1 × randn, rand is a random number, and randn is a random number with a normal distribution.

The expressions for x_s and x_s′ are:

x_s = randn × (x_m − x_c),
x_s′ = randn × (x_r1 − x_r2).

The expressions for x_c and x_m are as follows:

x_c = φ × x_n + (1 − φ) × x_r1,
x_m = φ × x_best + (1 − φ) × x_lbest,

where φ is a random number in [0, 1], x_best is the current best solution, and x_lbest is the best solution at each iteration.

The adaptive factor, SF, is the one that provides the balance needed between exploitation and exploration [3, 5]. Initially, SF is large to enhance exploration by increasing diversity; its value then decreases as the iterations increase, enhancing exploitation.

The adaptive factor is computed as:

SF = 2 × (0.5 − rand) × f, with f = a × exp(−b × rand × (it/MaxIt)),

where a and b are the main control parameters of the SF and are constants, it is the number of iterations, and MaxIt is the maximum number of iterations.

From Equation (22), when rand < 0.5, the algorithm conducts a global search in the solution space and, simultaneously, a local search around the solution x_c. The exploration phase ensures all high-quality solutions are searched. When rand ≥ 0.5, the RUN applies a local search around x_m. Exploitation increases the speed of convergence, focusing on promising solutions. This is done by rewriting Equation (22) for the exploitation phase as shown above, where r is an integer number (either 1 or −1) and g is a random number in [0, 2]; r increases diversity by changing the search directions, and exploitation around x_m reduces with each iteration.
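The decay of the adaptive factor can be sketched as follows (illustrative Python; the closed form and the default constants a = 20, b = 12 are our reading of [3] and should be treated as assumptions):

```python
import numpy as np

def adaptive_factor(it, max_it, a=20.0, b=12.0, rng=None):
    """SF = 2 * (0.5 - rand) * f with f = a * exp(-b * rand * it / max_it).
    |SF| is large early (exploration) and shrinks with iterations (exploitation)."""
    rng = rng or np.random.default_rng()
    f = a * np.exp(-b * rng.random() * it / max_it)
    return 2.0 * (0.5 - rng.random()) * f

rng = np.random.default_rng(0)
early = np.mean([abs(adaptive_factor(1, 200, rng=rng)) for _ in range(1000)])
late = np.mean([abs(adaptive_factor(200, 200, rng=rng)) for _ in range(1000)])
print(early > late)  # True: the factor decays as the iterations progress
```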

(4) Enhanced Solution Quality. The ESQ ensures solutions after each iteration head toward better positions. It operates using the following pseudocode:

if rand < 0.5 then
 if w < 1 then
  x_new2 = x_new1 + r × w × |(x_new1 − x_avg) + randn|
 else
  x_new2 = (x_new1 − x_avg) + r × w × |(u × x_new1 − x_avg) + randn|
 end
end

in which

x_avg = (x_r1 + x_r2 + x_r3)/3,
x_new1 = β × x_avg + (1 − β) × x_best,
w = rand(0, 2) × exp(−c × (it/MaxIt)),

where β is a random number in [0, 1]; c is a random number equal to 5 × rand; w is a random number that decreases as the iterations increase; and r is an integer number (either 1, 0, or −1).

The solution (x_new2) may not be better than the current solution; hence, the following equation gives the algorithm another chance to find a better solution than the two [3]:

x_new3 = (x_new2 − rand × x_new2) + SF × (rand × x_RK + (v × x_b − x_new2)), (35)

where v = 2 × rand.

The computation of (x_new3) occurs when f(x_new2) > f(x_n), giving (x_new2) a chance to move to a better position. The parameter v enhances the importance of the best solution. The flowchart of the RUN algorithm is shown in Figure 4 [3].

(5) The Pseudocode for the RUN Algorithm.

Part 1. Initialization
 Randomly generate the initial population of the RUN
 Evaluate the objective function for each population member
 Obtain x_w, x_b, and x_best
Part 2. RUN operators
 for it = 1 : MaxIt
  for i = 1 : np
   for j = 1 : dim
    Evaluate position x_{n+1} (Equation 30)
   end for
   ESQ
   if rand < 0.5
    Evaluate position x_new2 (Equation 31)
    if f(x_n) < f(x_new2)
     if rand < w
      Evaluate position x_new3 (Equation 35)
     end
    end
   end
   Update positions x_w and x_b
  end for
  Update position x_best
  it = it + 1
 end
Part 3. return x_best
2.1.3. Limitations of Runge–Kutta Optimizer

Despite the RUN algorithm having better convergence speeds due to appropriate balancing between the local and global searches, it still suffers from some limitations, similar to the metaphor-based optimizers, as mentioned below:
(i) The algorithm is single-objective and cannot solve multi-objective optimization problems [3].
(ii) The algorithm can be trapped in the local search during ESQ, confining it to only locally optimal solutions [3].

2.2. Methods Undertaken to Improve the RUN’s Performance

The RUN is a fairly new optimizer, designed in 2021. This algorithm has demonstrated performance superior to traditional metaphor-based algorithms, and thus, several researchers have proposed ways to improve its performance even further [7]. The RUN is a stochastic population-based algorithm [5]. Unlike single-solution-based optimizers, population-based algorithms randomly generate solutions at each iteration to avoid being trapped in local optima. Therefore, they are superior to single-solution-based algorithms, as they attain quality solutions and have increased convergence speed. The randomly generated solutions can share information, enabling convenient search in complex feature-space landscapes [5]. The pseudocode for the RUN algorithm is given in Section 2.1.2.

The RUN algorithm outperforms nature-inspired metaheuristic algorithms. Generally, metaheuristic algorithms are divided into three classes: evolutionary algorithms (EAs), swarm intelligence algorithms (SIAs), and physics-based algorithms (PBAs) [3]. These algorithms have shown a high capability of obtaining optimum solutions at considerable speed while avoiding getting trapped in local optima. However, when solving complex design problems, they tend to suffer from premature convergence and mismatch between the design variables. Over the years, researchers have developed hybrid optimizers to fix the issues encountered by metaheuristics by combining the dominant features of individual algorithms. An example is the hybrid of the whale optimization algorithm and the grey wolf optimizer, which showed better performance than the individual algorithms [8]. Figure 5 shows the general flowchart of metaheuristic algorithms [9].

The RUN algorithm was designed to be a powerful and more accurate optimizer that avoids local optima. The RUN finds the global best in solving optimization problems using the ESQ to increase the convergence speed and avoid premature convergence [3]. Although the RUN performs better than other metaheuristic algorithms, it suffers limitations when solving multimodal problems [3]. To solve this problem, researchers have proposed several methods of improving the RUN’s performance. Some of these methods are discussed below.

Cengiz et al. [9] proposed a method to improve the RUN algorithm using the fitness-distance balance (FDB) method. In the RUN algorithm, local minima traps can occur during execution; hence, the global optimal solutions are not attained. The FDB method was proposed to enhance the exploration phase, guiding the algorithm toward global solutions. Cengiz et al. [9] presented 10 cases that modify the base RUN by incorporating FDB to enhance its performance. Some of these cases modify Equation (33) by replacing one of its guiding solutions with the solution selected by the FDB operator.

The FDBRUN produced better results than the base algorithm in the study conducted by Cengiz et al. [9]. Readers are encouraged to consult that paper [9] for the other cases and how they are implemented.

Devi et al. [10] proposed another way to improve the RUN algorithm to avoid premature convergence and enhance the accuracy of solutions. They proposed integrating a local escaping operator (LEO) in the RUN [10]. The LEO improves the RUN by bypassing local minima and thus increasing its convergence speed. Readers are encouraged to read the paper by Devi et al. [10] to better understand the formulation of the LEO and its integration into the base RUN.

The above methods improve the performance of the RUN algorithm, but the most widely used method of improving metaheuristic algorithms is enhancing them with chaotic maps. Several optimizers have been improved by introducing chaos, such as the chaotic mayfly algorithm, the chaotic whale optimizer, and the chaotic bat algorithm [11–15]. These metaheuristic algorithms' performance and quality parameters have been tested by applying them to real-world design problems. For example, the chaotic Harris Hawks optimizer developed by Gezici and Livatyali [12] provided better performance and results than the traditional Harris Hawks optimizer [16].

Therefore, building on existing research, Yıldız et al. [5] proposed using chaos to enhance the base RUN, making it chaotic (CRUN) and increasing its ability to avoid local optima. The CRUN was applied to real-world design problems and showed superior performance regarding the diversity of solutions and convergence speed compared to the original RUN [5].

In this paper, the approach used to design the CMRUN is similar to that used by Yıldız et al. [5] to design the CRUN. Chaotic maps are integrated into the core RUN structure to strike a refined balance between exploration and exploitation, enabling the algorithm to navigate nonconvex and complex optimization problems. Furthermore, utilizing chaos in the ESQ improves the convergence speed while navigating the algorithm toward global solutions [5]. The algorithm is then modified to generate Pareto Fronts to handle complex problems with multiple objectives [6].

3. Theoretical Background

3.1. Chaotic Maps

In essence, chaotic maps are mathematical functions that behave chaotically, such that slight variations in the initial conditions produce radically different outcomes. Chaotic maps are typically utilized in control and optimization algorithms to broaden the range of potential solutions and avoid local minima traps [11]. The Logistic, Tent, Circle, Gaussian, and Chebyshev maps are examples of chaotic maps.

Due to their tendency to get stuck in local optima, traditional algorithms struggle to optimize nonconvex and multimodal objective functions. By adding unpredictability to the optimization process, chaotic maps offer a solution to this issue [12–15]. The optimization algorithm can better traverse the search space and avoid local optima by using chaotic maps to produce random solutions or perturb the current solution. This is especially helpful in high-dimensional search spaces, where many local optima may exist [5].

Chaotic maps can be incorporated in several ways into optimization algorithms. One method is to use chaotic maps to produce the initial population of solutions in an evolutionary algorithm. A group of random solutions can be produced using the chaotic map, and these solutions can then be evolved by crossover, mutation, and selection processes [15].

Another approach is when a local search algorithm like simulated annealing or tabu search uses chaotic maps to alter the existing solution. The chaotic map can produce a minor perturbation to the present solution, which is accepted or rejected according to a probability distribution [11].

Finally, the parameters of a chaotic system itself can also be optimized using chaotic maps. The chaotic map changes the system’s parameters and alters the behavior of the objective function as it determines the system’s complexity or randomness [5].

Overall, chaotic maps provide an effective tool for exploring complex, nonlinear search spaces and avoiding local optima in the optimization algorithms. Chaotic maps can enhance the effectiveness and efficiency of optimization algorithms in various applications by bringing randomness and diversity into the optimization process [16]. Randomization helps to balance the exploitation and exploration phases for optimal performance [15].

In this paper, 10 chaotic maps are integrated to determine the best chaotic map in circuit design. They are obtained from [5] and are shown in Table 1.

Figure 6 shows each map’s characteristics when implemented in MATLAB.
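As an illustration, a few of these maps can be written directly (Python used here for illustration; all maps are iterated from the initial value 0.7, and maps whose range is [−1, 1], such as Chebyshev, are rescaled to [0, 1] as described in Section 4):

```python
import math

def logistic(x):  return 4.0 * x * (1.0 - x)                       # range [0, 1]
def tent(x):      return x / 0.7 if x < 0.7 else (1.0 - x) / 0.3   # range [0, 1]
def sine(x):      return math.sin(math.pi * x)                     # range [0, 1]
def chebyshev(x): return math.cos(4.0 * math.acos(x))              # range [-1, 1]

def chaotic_sequence(step, x0=0.7, length=5, signed_range=False):
    """Iterate a map; if its range is [-1, 1], rescale each value into [0, 1]."""
    seq, x = [], x0
    for _ in range(length):
        x = step(x)                                  # raw chaotic state
        seq.append((x + 1.0) / 2.0 if signed_range else x)
    return seq

print(chaotic_sequence(logistic))                      # stays in [0, 1]
print(chaotic_sequence(chebyshev, signed_range=True))  # rescaled into [0, 1]
```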

3.2. Multi-Objective Optimization Techniques

In the engineering world, engineers often encounter problems with multiple objectives that must be optimized simultaneously. In most cases, these objectives conflict, such that improving one objective deteriorates another. Additionally, the objectives often have different units of measurement. Such problems are called multi-objective optimization problems (MOPs). Unlike single-objective problems, where a single optimal result is obtained by determining which solution is better, MOPs have no clear-cut method of determining which solution is better due to conflicts and different measurement units [6].

Circuit design often involves the optimization of multiple objectives, such as the gain output of a system and the cost of components. Thus, in this paper, the RUN algorithm is made multi-objective to handle several objectives simultaneously. There are several ways of making an algorithm multi-objective for solving MOPs, as described below.

3.2.1. Weighted Sum Method

This technique involves the combination of several objectives into one objective in MOPs. Each objective is assigned a weight, and then a linear combination is applied to all weighted objectives [6]. The weights reflect the relative importance of each objective and hence can be used to balance the tradeoffs between objectives. For example, consider a MOP with two objectives, f_1 and f_2. The objectives can be combined using the weighted sum method as shown below:

F(x) = w_1 × f_1(x) + w_2 × f_2(x),

where w_1 and w_2 are the weights of f_1 and f_2, respectively.

The optimal solution can be found by minimizing F(x) and will depend on the values of w_1 and w_2, which can be tuned to reflect the relative importance of each objective.
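A minimal sketch of the method (illustrative Python; the two toy objectives and the random-search minimizer are ours, not the paper's):

```python
import numpy as np

def weighted_sum(f1, f2, w1=0.5, w2=0.5):
    """Scalarize two objectives into F(x) = w1*f1(x) + w2*f2(x)."""
    return lambda x: w1 * f1(x) + w2 * f2(x)

# Conflicting toy objectives: f1 pulls x toward 0, f2 pulls x toward 1.
f1 = lambda x: float(np.sum(x ** 2))
f2 = lambda x: float(np.sum((x - 1.0) ** 2))
F = weighted_sum(f1, f2, w1=0.5, w2=0.5)

# Crude random search over [0, 1]^2 on the scalarized objective.
rng = np.random.default_rng(0)
best = min((rng.random(2) for _ in range(5000)), key=F)
print(best)  # close to [0.5, 0.5]: the equal-weight tradeoff point
```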

3.2.2. Scalarizing Method

This technique involves converting an MOP into a single-objective optimization problem by defining a scalarizing function that maps several objectives into one objective. For example, consider a MOP with two objectives, f_1 and f_2. The problem is modified to contain one objective, as shown below:

F(x) = g(f_1(x), f_2(x)),

where g is the scalarizing function.

Based on the specified needs of the problem, the scalarizing function can be defined in several ways. For example, it could be a function that reflects the tradeoffs between objectives or a weighted sum of the objectives [6]. Once an optimal solution of the scalarized problem is found, it is mapped back to the original MOP.

3.2.3. Pareto Optimization

This technique involves finding a set of nondominated solutions, that is, solutions for which improving one objective deteriorates one or more of the other objectives. A solution is considered Pareto optimal if no other solution exists that is better in at least one objective without being worse in another [17, 18]. These nondominated solutions form the Pareto front, which is a representation of all Pareto optimal solutions.
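The dominance test and the resulting front can be computed directly; a minimal sketch (illustrative Python, minimization assumed):

```python
import numpy as np

def dominates(a, b):
    """True if a is no worse than b in every objective and strictly better in one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(costs):
    """Return the nondominated subset of a list of objective vectors."""
    return [c for c in costs
            if not any(dominates(other, c) for other in costs if other is not c)]

# Toy (noise, distortion)-style cost pairs:
costs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(costs))  # (3.0, 4.0) drops out: dominated by (2.0, 3.0)
```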

3.2.4. Evolutionary Algorithms

These algorithms are inspired by natural evolution processes and come in handy for solving complex MOPs with conflicting objectives. Evolutionary algorithms operate by iteratively generating populations of solutions, and the fitness function guides the search by evaluating the quality of each solution [6]. The selected solutions are modified through crossover and mutation, creating a fresh set of solutions. These processes are repeated until the optimal solution is attained or a set condition is achieved [17]. Evolutionary algorithms are best suited to problems with wide search spaces and complex nonlinear objectives.

3.2.5. Multi-Objective Gradient Descent Methods

This technique optimizes multiple objectives simultaneously using modified gradient descent algorithms. Here, a multi-objective loss function is defined, which combines all the objectives into a single objective. The loss function reflects the tradeoffs between objectives and should be differentiable to make the gradient computation possible. The multi-objective gradient descent algorithm then adjusts the model parameters to minimize the loss function iteratively, whereby at each iteration, the gradient of the loss function is computed. Then, the model parameters are updated in a way that minimizes loss. This process is repeated until a preset condition is achieved or the loss function converges to a minimum.
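This procedure can be sketched on two differentiable toy objectives (illustrative Python; the fixed weights and hand-coded gradients are assumptions for the example, not the paper's method):

```python
import numpy as np

# Toy objectives f1(x) = ||x||^2 and f2(x) = ||x - 1||^2 with their gradients.
grad_f1 = lambda x: 2.0 * x
grad_f2 = lambda x: 2.0 * (x - 1.0)

def mo_gradient_descent(w1=0.5, w2=0.5, lr=0.1, steps=200):
    """Minimize the combined loss L = w1*f1 + w2*f2 by iterating x -= lr * grad L."""
    x = np.zeros(2)
    for _ in range(steps):
        x -= lr * (w1 * grad_f1(x) + w2 * grad_f2(x))
    return x

print(mo_gradient_descent())  # approaches [0.5, 0.5], the equal-weight optimum
```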

3.2.6. Decomposition-Based Methods

These are algorithms that divide MOPs into a series of subproblems and optimize each subproblem separately. After obtaining individual solutions to each subproblem, these solutions are combined into one final solution to the original MOP.

The choice of a multi-objective optimization technique typically depends on the unique aspects of the problem and the optimization’s objectives. The choice of technique can significantly affect the effectiveness and quality of the solution since each technique has advantages and disadvantages.

3.3. Optimized Circuit Design

Circuit design is the process of creating electronic circuits by choosing parameters that meet specific performance criteria based on theoretical expectations. The performance of a circuit is measured using criteria such as noise, stability, speed, and power consumption. Optimized circuit design aims to achieve the desired performance while minimizing the circuit's cost, size, and complexity [1].

There are several approaches to optimized circuit design, each with advantages and limitations. Here are some of the most commonly used techniques:
(i) Top-down design: this approach involves starting with the overall system requirements and working downwards to the circuit level. The design is broken down into a hierarchy of subcircuits, each designed to meet specific requirements. The advantage of this approach is that it ensures that the design meets the overall system requirements, but it can be time-consuming and may not always result in the most efficient circuit.
(ii) Bottom-up design: this approach involves starting with individual circuit components and building up the overall system. The advantage of this approach is that it can be faster and more efficient than top-down design, but it may not always result in the best overall performance.
(iii) Simulation-based design: this approach uses computer simulations to model the circuit's behavior and optimize its performance. Simulation-based design can be very effective at identifying design flaws and optimizing circuit performance, but it can be time-consuming and computationally expensive.
(iv) Optimization algorithms: optimization algorithms such as genetic algorithms can optimize circuit designs by generating a population of candidate solutions and evaluating their performance using simulations [2].

Of all the methods, optimization algorithms help to save time by giving the optimal solution based on the range of input parameters, eliminating the need for trial-and-error, which is time-consuming.

The circuit design proposed in this paper is a single-stage amplifier. Amplifiers are used to improve the strength of weak signals. Multistage amplifiers provide superior performance by allowing more flexibility in the input and output impedances. Understanding the workings of a single-stage amplifier is crucial, since single-stage amplifiers are connected in a cascade to form multistage amplifiers that provide even higher gain [19].

A single-stage amplifier comprises a single transistor with a bias circuit and other components to facilitate the desired gain output for quantities such as current, voltage, and power [19]. The following components and parts are included in a typical single-stage amplifier design:
(i) Transistor: the transistor is the active device in the amplifier circuit and is responsible for amplifying the input signal. Different types of transistors can be used depending on the specific requirements of the circuit.
(ii) Biasing circuit: the biasing circuit sets the operating point of the transistor, which is important for ensuring that the amplifier operates in the linear region and avoids distortion [19]. The biasing circuit typically consists of resistors and capacitors connected to the transistor.
(iii) Input coupling capacitor: used to isolate the input signal from the DC bias of the amplifier circuit.
(iv) Output coupling capacitor: the output coupling capacitor isolates the amplifier circuit from the load or the next stage in the circuit by blocking DC signals [19].
(v) Load resistor: converts the output current into an output voltage. It determines the gain of the amplifier and is typically chosen to match the load's impedance.
(vi) Feedback network: the feedback network provides negative feedback to the amplifier, which helps reduce distortion and improve stability. The feedback network typically consists of resistors and capacitors connected between the amplifier's output and input.
(vii) Power supply: it powers the amplifier circuit. It typically consists of a DC voltage source and filtering capacitors to remove any AC noise or ripple from the supply voltage.

Several parameters can be optimized for optimum performance in designing a single-stage amplifier. These are:
(i) Gain: increasing the gain of the amplifier can improve its sensitivity and ability to amplify weak signals.
(ii) Bandwidth: increasing the amplifier's bandwidth can improve its ability to amplify high-frequency signals.
(iii) Input and output impedance: optimizing the input and output impedance can improve the matching between the amplifier and the signal source and load, respectively.
(iv) Distortion: distortion is any unwanted modification of the input signal, and it can be caused by nonlinearity in the amplifier circuit. Minimizing distortion can improve the fidelity and accuracy of the output signal.
(v) Noise: noise is any unwanted signal that the amplifier circuit adds to the output signal. Minimizing noise can improve the output signal-to-noise ratio (SNR).
(vi) Power consumption: minimizing power consumption can improve the amplifier's efficiency and prolong battery life in portable devices.
(vii) Stability: stability refers to the ability of the amplifier to operate reliably and predictably over time without oscillations or instability. An unstable amplifier can produce unwanted oscillations, which can cause distortion, noise, and other performance issues.

Figure 7 shows a typical single-stage amplifier circuit. Single-stage amplifiers are used in tape recorders, radio, television receivers, CD players, and public address systems. Therefore, the design of optimal single-stage amplifiers is essential in electronics.

4. Methodology

4.1. Chaotic Multi-Objective Runge–Kutta Optimization Algorithm (CMRUN)

The following steps were undertaken to improve the base RUN algorithm to handle multiple objectives while incorporating chaos to improve the balance between exploitation and exploration searches.

4.1.1. Selecting What Part of the Base RUN to Improve by Chaos

Many optimization algorithms fall short due to local minima traps, especially when solving nonconvex and multimodal problems. Researchers, therefore, came up with a solution to enhance their performance by increasing the diversity of solutions and thus avoiding local minima traps [12–15]. As mentioned in Section 3, several ways to implement chaos in optimization algorithms exist. One way is to use chaotic maps to generate the initial population of solutions, which then undergoes selection, crossover, and mutation operations.

Another way is to use the chaotic map to generate a small perturbation to the current solution, which is then accepted or rejected based on a probability distribution. The final way is to use chaotic maps to optimize the parameters of a chaotic system itself [12]. Implementing the CMRUN could take any of the above approaches, but this paper uses the approach of creating a perturbation to the current solutions.

From Section 2, the best solution (x_b) is essential in enhancing the global search to find the optimal solution in the original RUN algorithm. The best and worst solutions obtained at each iteration determine the searching mechanism of the RUN algorithm. Ahmadianfar et al. [3] used the rand parameter to introduce randomness and enhance the exploration search. Furthermore, Equations (13)–(15) provide search diversification to avoid local optima traps.

The range between adjacent positions:

Δx = 2 × rand × |Stp|. (40)

The step size:

Stp = rand × ((x_b − rand × x_avg) + γ). (41)

The scale factor:

γ = rand × (x_n − rand × (x_max − x_min)) × exp(−4 × it/MaxIt). (42)

Owing to their contribution to the global search of the RUN algorithm, these equations were selected to be improved by chaos, thus improving the overall performance of the original RUN.

The 10 chaotic maps used are represented as a vector, and each map is selected by inputting its index value as in Table 1. The initial value chosen for all the maps is 0.7. The final values of all the maps should lie in the range [0, 1], and thus all values in [−1, 1] are normalized so that they are within the acceptable [0, 1] range. Usually, chaotic parameters are used in the parts of algorithms that require random parameters. In this paper, the rand parameter in Equations (40)–(42) was replaced with a chaotic parameter ch.

The chaos vector is first calculated, and then the index of the chaotic map to be integrated determines which vector values are selected to introduce chaos into the RUN algorithm. The selected parameters are updated at each iteration to ensure new chaotic values are produced with each iteration. These values are then used to introduce chaos into the RUN through Equations (40)–(42), which are modified as follows:

The range between adjacent positions is as follows:

Δx = 2 × ch × |Stp|. (43)

The step size is as follows:

Stp = ch × ((x_b − ch × x_avg) + γ). (44)

The scale factor is as follows:

γ = ch × (x_n − ch × (x_max − x_min)) × exp(−4 × it/MaxIt), (45)

where x_n is the current position at each iteration.

The above parameters are used in the searching mechanism, and thus, they enhance the quality of solutions by increasing the search space. Furthermore, the algorithm begins optimization using a set of random initial solutions, which are updated after each iteration using the RK method, which employs the searching mechanism. This implies that the solutions after each iteration have chaotic behavior, and thus, the global search is enhanced. Equation (30) is further improved using the chaotic maps to enhance the exploration and exploitation phases by replacing its random parameters rand and randn with ch, as shown in Equation (46).

The same is done in the code’s ESQ section, as shown in Equation (47).
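The substitution can be sketched as follows (illustrative Python; the forms of Equations (43)–(45) follow the reconstruction above, which is our reading of [3, 5] and should be treated as an assumption):

```python
import numpy as np

def chebyshev_next(c):
    """Advance the Chebyshev state and return (new state, value scaled to [0, 1])."""
    c = np.cos(4.0 * np.arccos(c))
    return c, (c + 1.0) / 2.0

def chaotic_search_params(x_n, x_b, x_avg, lo, hi, it, max_it, state):
    """Chaotic counterparts of Equations (43)-(45): ch replaces rand."""
    state, ch = chebyshev_next(state)
    gamma = ch * (x_n - ch * (hi - lo)) * np.exp(-4.0 * it / max_it)  # scale factor
    stp = ch * ((x_b - ch * x_avg) + gamma)                            # step size
    dx = 2.0 * ch * np.abs(stp)                                        # adjacent range
    return dx, state

dx, state = chaotic_search_params(x_n=np.zeros(3), x_b=np.ones(3),
                                  x_avg=0.5 * np.ones(3), lo=-100.0, hi=100.0,
                                  it=1, max_it=200, state=0.7)
print(dx.shape, state)  # (3,) and the updated chaotic state
```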

The original RUN is modified to utilize chaos in updating solutions and converging to the global optimum solution by performing the above changes. The pseudocode for the CMRUN algorithm is given below.

4.1.2. Generating Pareto Fronts for the CMRUN

The original RUN and the chaotic RUN described above can only optimize single-objective problems. The first step to making the chaotic RUN multi-objective is modifying the objective function to take more than one objective. In this case, the objective function was modified to take two objectives and generate a set of solutions for each objective. The fitness function was also modified to record the solutions for two objectives, generating two convergence curves that form the Pareto Front [5]. A minimal sketch of this bi-objective bookkeeping follows the pseudocode below.

(1) The Pseudocode for the CMRUN Algorithm.

Part 1. Initialization
 Define the number of objective functions (nObj = 2)
 Initialize the fitness function for both objective functions
 Randomly generate the initial population for the CMRUN
 Evaluate the objective function values of each population member
 Sort the costs obtained from the objective function
 Initialize the chaos parameters
 Update the convergence curves of both objectives with the first best costs
Part 2. CMRUN operations
 for it = 1 : MaxIt
  Update the chaotic parameters
  for n = 1 : N
   Apply chaotic parameters in updating the algorithm's equations
   Determine the solutions x_w, x_b, and x_best for each objective function
   Perform operations to improve and update the solutions
   Update best costs for the objective functions
   Check if solutions go outside the search space and bring them back
   Update chaos parameters for ESQ
   Enhance the solution quality
   for j = 1 : dim
    Determine x_{n+1} from Equation 30
   end for
   Perform boundary check for solutions again
   if rand < 0.5
    Evaluate position x_new2
    if f(x_n) < f(x_new2)
     if rand < w
      Determine position x_new3
     end
    end
   end
   Update positions x_w and x_b
  end for
  Update position x_best
  it = it + 1
 end
Part 3. Return x_best and the best costs
 Update the convergence curves
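A minimal sketch of the bi-objective bookkeeping described in Section 4.1.2 (illustrative Python; the stand-in objectives and the simple nondominated archive are ours, and the CMRUN update itself is abstracted away as a random placeholder):

```python
import numpy as np

def evaluate(pop, objectives):
    """Evaluate each member on all objectives -> one cost tuple per member."""
    return [tuple(obj(x) for obj in objectives) for x in pop]

def update_archive(archive, new_costs):
    """Merge and keep only nondominated cost vectors (minimization)."""
    merged = list(dict.fromkeys(archive + new_costs))  # drop exact duplicates
    def dominated(c):
        return any(all(o[k] <= c[k] for k in range(len(c))) and
                   any(o[k] < c[k] for k in range(len(c)))
                   for o in merged if o != c)
    return [c for c in merged if not dominated(c)]

# Hypothetical stand-ins for the paper's noise and distortion objectives.
noise = lambda x: float(np.sum(x ** 2))
distortion = lambda x: float(np.sum((x - 1.0) ** 2))

rng = np.random.default_rng(1)
archive = []
for it in range(50):              # outer iteration loop (CMRUN update abstracted)
    pop = rng.random((20, 4))     # placeholder for the chaotically updated population
    archive = update_archive(archive, evaluate(pop, [noise, distortion]))
print(len(archive), "nondominated points approximate the Pareto Front")
```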
4.2. Evaluation of the Algorithm’s Performance

The 10 chaotic maps were integrated one by one into the CMRUN, and their performances were evaluated using 15 selected benchmark functions. The results of the best version of the CMRUN were compared with those of 11 known multi-objective optimization algorithms.

The benchmark test functions are of two kinds [3]:
(i) Unimodal functions: have one global optimum and test an algorithm's exploitation and convergence to the global optimum.
(ii) Multimodal functions: have multiple local optima and are used to test an algorithm's exploration and ability to avoid premature convergence.
Classic instances of the two kinds are sketched below.
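(Illustrative Python; these are generic examples, not necessarily the exact functions in Table 2.)

```python
import numpy as np

def sphere(x):
    """Unimodal: one global optimum f(0) = 0; probes exploitation/convergence."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal: many local optima around f(0) = 0; probes exploration and
    resistance to premature convergence."""
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))

x = np.zeros(30)  # dimension 30, as in the experimental setup
print(sphere(x), rastrigin(x))  # 0.0 0.0 at the shared global optimum
```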

The following parameters were kept constant to test the performance of the CMRUN and the other 11 optimizers:
(i) Population number = 50
(ii) Dimension = 30
(iii) Maximum iterations = 200
(iv) Number of runs = 10.

The benchmark functions used are given in Table 2. The following 11 algorithms were tested to compare their performance with the CMRUN:
(1) Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) [20]
(2) Multi-Objective Particle Swarm Optimization (MOPSO) [21]
(3) Non-dominated Sorting Genetic Algorithm II (NSGA-II) [22]
(4) Multi-Objective Firefly Algorithm (MOFA) [23]
(5) Multi-Objective Bat Algorithm (MOBA) [24]
(6) Strength Pareto Evolutionary Algorithm 2 (SPEA2) [25]
(7) Multi-Objective Cuckoo Search (MOCS) [26]
(8) Multi-Objective Flower Pollination Algorithm (MOFPA) [27]
(9) Multi-Objective Mayfly Optimization Algorithm (MOMA) [28]
(10) Hybrid NSGA-II/MOPSO Algorithm [29]
(11) Multi-Objective Non-Sorted Moth Flame Optimization (NSMFO) [30]

To evaluate the performance, the minimum score of each optimizer on each test function was recorded over 10 runs. The averages and standard deviations of these scores were then calculated as the criteria for measuring performance.

4.3. Circuit Design

In this paper, a simple circuit for a single-stage amplifier was designed. As discussed in Section 3, a single-stage amplifier has many practical applications in the real world. This paper proposes an algorithm that could be used to optimize its circuit parameters. A simple circuit was chosen to test the CMRUN's convergence capability: if the algorithm can optimize a simple circuit and return the best parameters, it is powerful enough to be applied to more complex circuit design problems.

The objective functions chosen in this paper are:
(i) Noise
(ii) Distortion

Noise is any unwanted signal that arises at the output of a circuit. Noise reduces the performance of an electronic device through errors and reduced sensitivity. Distortion, on the other hand, occurs when the output signal has been changed by nonlinearity; hence, the circuit does not produce an output consistent with the input.

For the circuit selected, the parameters to be optimized are shown in Table 3.

From the values of resistances and frequencies obtained, the values of capacitances can easily be calculated from the standard formulae, as sketched below. This part of the paper aims to minimize noise and distortion, obtaining the values of frequencies and resistances at which the two objective functions are minimal and, hence, finding the optimum performance of the circuit from the given ranges of inputs.
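For instance, a coupling capacitor is commonly sized from the first-order RC corner relation f_c = 1/(2πRC), a standard formula (the specific R and f_c values below are illustrative, not the paper's):

```python
import math

def capacitance_from_corner(resistance_ohms, corner_freq_hz):
    """C = 1 / (2 * pi * R * f_c) from the first-order corner f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * resistance_ohms * corner_freq_hz)

# Example: 10 kOhm seen by the coupling capacitor, 20 Hz lower corner frequency.
C = capacitance_from_corner(10e3, 20.0)
print(f"{C * 1e6:.2f} uF")  # ~0.80 uF
```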

5. Results, Analysis, and Discussion

An HP ProBook G3 with 8 GB RAM and an Intel(R) Core(TM) i5-6200U CPU @ 2.30 GHz was used to run and test the algorithms using MATLAB R2021a.

5.1. Benchmark Test Results of the Chaotic Multi-Objective Runge–Kutta Optimization Algorithm

The CMRUN was implemented using 10 chaotic maps and run 10 times for each benchmark function. The mean scores were recorded to determine the best chaotic map out of the 10. The best CMRUN version was selected for comparison with the other 11 multi-objective algorithms. The other algorithms were also run 10 times, and their average scores and standard deviations were recorded. The benchmark functions are categorized into two:
(i) Unimodal test functions (F1–F5)
(ii) Multimodal test functions (F6–F15)

The comparison results of the chaotic maps are shown in Table 4 (the bold numbers are the best results obtained for each function); lower values indicate better results. The CMRUN with the Chebyshev map had the best averages for functions F2, F3, F5, F6, F11, F12, and F15. The CMRUN with the Circle map realized the best averages for functions F2, F3, F4, F5, F14, and F15. The CMRUN with the Gauss map performed best in functions F2, F3, F4, F5, F7, F8, and F11. The CMRUN with the Iterative map attained the best averages for functions F1, F2, F3, and F5. The CMRUN with the Logistic map, the Piecewise map, the Singer map, and the Tent map realized the best averages for functions F2, F3, and F5.

With the Sine map, the CMRUN attained the best averages for functions F2, F3, F5, F9, and F10. Lastly, with the Sinusoidal map, the CMRUN achieved the best scores for functions F2, F3, F5, and F13. All the maps realized the global minima for functions F2 and F5. For function F3, the CMRUN achieved the global minima with all maps except the Tent map. From these results, the best chaotic maps for the CMRUN are the Chebyshev and Gauss maps, as they both reached the best scores for 7 of the 15 benchmark functions. In this paper, the CMRUN with the Chebyshev map was chosen for comparison with the 11 multi-objective algorithms.

From Table 5, the averages for each algorithm were compared for every benchmark function (the best results are in bold). In this paper, the CMRUN with the Chebyshev map is considered for comparison with the other algorithms.

For the unimodal functions, CMRUN’s best performance is in F3, where it ranks joint first with other algorithms, and in F2 and F5, it ranks second. The CMRUN has the best performance and edges other algorithms in only one of the ten multimodal functions (F15). The CMRUN attains the global optimum in one benchmark function (F3) and its lowest standard deviation in function F5, which is zero.

An algorithm’s convergence rate and solution quality should be considered to determine whether the algorithm is stuck in local optima. The CMRUN struggles in optimizing multimodal functions and ranks position 11 twice. This indicates the CMRUN has difficulty maintaining the balance between exploration and exploitation. The success of an algorithm is linked to its balance between the two phases.

Exploitation focuses on utilizing the current knowledge by concentrating the algorithm’s search around areas likely to contain the best solutions. It refines the search around high-quality solutions and hence finds the local optimum. On the other hand, exploration ventures toward unexplored areas in the search space to find potentially better solutions, thus maintaining diversity. As the optimization progresses, exploration finds potential global solutions. Then, the search space focus shifts toward exploitation, which refines and improves these solutions to find the best global solutions.

The unimodal functions test for an algorithm’s exploitative behavior, and the CMRUN shows good exploitation capabilities from the results. The multimodal functions test an algorithm’s explorative behavior, and the CMRUN struggles to explore the global search space depending on the problem.

From the comparison of the average scores, the CMRUN ranks as shown in Table 6.

5.2. Optimized Circuit Design

The CMRUN’s performance in optimizing circuit design was compared to the performance of the other 11 algorithms. The following parameters were kept constant to conduct this part of the paper.(i)Population number = 100(ii)Dimension = 50(iii)Maximum iterations = 1,000(iv)Number of runs = 10

Table 7 shows the comparison results of optimizing circuit parameters for all 10 chaotic maps. The CMRUN with the Iterative map achieves the lowest noise value, while the CMRUN with the Gauss map yields the highest noise value. The maps rank as follows: Iterative, Logistic, Chebyshev, Sinusoidal, Sine, Tent, Singer, Circle, Piecewise, and Gauss.

The CMRUN with the Circle map attains the lowest distortion value, while the CMRUN with the Logistic map has the highest. The maps rank as follows: Circle, Sinusoidal, Sine, Tent, Singer, Chebyshev, Piecewise, Iterative, Gauss, and Logistic. The CMRUN with the Iterative map offers the best overall performance, as it provides the lowest Pareto sum of all the maps, showing a balanced performance between the two objectives.

Table 8 shows the results for the optimized circuit design parameters of the CMRUN compared to the other 11 algorithms. For consistency, the results of the CMRUN with the Chebyshev map were chosen for comparison. From the results, the CMRUN outperforms the other algorithms in optimizing the circuit parameters. It has the lowest Pareto sum fitness of all the algorithms with weights of [0.5, 0.5] for the objective functions.

The equal weights mean the objective functions are considered equally important. From the results, the CMRUN has the lowest noise value of 227.202, followed closely by MOMA. The other algorithms have slightly higher noise values, with MOPSO having the highest. For the distortion values, the algorithms MOFA, MOBA, SPEA2, MOCS, MOFPA, NSGA-II/MOPSO, and NSMFO achieved the lowest values of 0.0002 or less. The CMRUN has a slightly higher distortion value, the sixth highest. The CMRUN also attains the lowest Pareto sum, followed closely by MOMA. Therefore, the CMRUN performs well overall compared to the other algorithms: despite its higher distortion value, it offers the lowest noise value and the lowest Pareto sum, showing a better balance in optimizing both objectives.

The CMRUN’s performance in circuit optimization indicates that it performs well in minimizing the fluctuations in the noise values. Still, there are deviations in achieving the lowest value possible for distortion values. This might be due to conflicts or tradeoffs between distortion and noise. Although the CMRUN shows excellence in reducing noise, it sacrifices distortion as improving noise performance deteriorates distortion. Overall, the performance of CMRUN in circuit optimization is superior as it defeats all the algorithms in Pareto sum and offers the lowest noise value. The individual circuit parameters were also recorded and tabulated.

The best parameters obtained are shown in Table 9.

The convergence curves for CMRUN and the 11 algorithms are shown in Figure 8. These curves show the algorithms’ convergence rate when optimizing circuit parameters.

The convergence curves show that the CMRUN has a very fast convergence rate. This indicates the algorithm's superiority in obtaining optimal solutions within few iterations. The CMRUN outperforms the other algorithms in search exploration, quickly narrowing the feasible region and converging to the global solution when optimizing circuit parameters. Although fast convergence could also indicate that an algorithm has converged prematurely to suboptimal solutions, this is not the case for the CMRUN, according to the results.

Therefore, it is essential to consider both the convergence rate and the quality of solutions. The solutions obtained by the CMRUN are of high quality; thus, its fast convergence rate is desirable, indicating that the algorithm requires fewer iterations to achieve a satisfactory solution. This can lead to significant savings in time and computation, especially in complex optimization problems with large search spaces or computationally expensive objective functions.

6. Conclusion

This paper proposed a CMRUN to optimize circuit design. Chaotic maps and Pareto Front were incorporated to enhance the RUN to improve its exploration capability and handle multiple objectives. The CMRUN excelled in exploitation but struggled in exploration searches when optimizing multimodal functions.

From the results, the CMRUN is susceptible to getting stuck in local optima and thus converging prematurely, especially when optimizing multimodal functions. This suggests that it may not perform well for some circuit design optimization problems. This work can be further extended by implementing standard operators in the CMRUN, such as Lévy walks (LWs), crossover operators, mutation operators, or opposition-based learning, to improve the solution quality [3].

Further research should be done to develop a more practical implementation of the CMRUN that can optimize complex problems, especially multimodal ones. It would also be worthwhile to test whether other chaotic mapping strategies offer better randomization and, thus, a better balance between exploitation and exploration. The CMRUN could further be upgraded to handle many-objective optimization problems.

Data Availability

The data used to derive the results of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was sanctioned and fully funded by the University of Nairobi, Department of Electrical and Information Engineering.