Transforming Controlled Duffing Oscillator to Optimization Schemes Using New Symmetry-Shifted G ( t )-Polynomials

This work introduces and studies the important properties of a special class of new symmetry-shifted G(t)-polynomials (NSSG). Such polynomials have a symmetry property over the interval [−2, 0], with G_n^{[−2,0]}(0) = (−1)^n G_n^{[−2,0]}(−2). An explicit formulation of an NSSG operational matrix was constructed, which served as a powerful tool for obtaining the desired numerical solutions. Then, a modified direct computational algorithm was suggested for solving the controlled Duffing oscillator problem. The idea behind the proposed algorithm is based on using symmetry basis functions, which are important and have real-world applications in physics and engineering. The original controlled Duffing oscillator problem was transformed into a nonlinear quadratic programming problem. Finally, numerical experiments are presented to validate our theoretical results. The numerical results emphasize that the modified approach reaches the desired value of the performance index with few computations and with a minimum order of the NSSG basis functions when compared with the other existing method, which is an important factor to consider when choosing the appropriate method in other mathematical and engineering applications.


Introduction
Optimal control is a mathematically challenging and practically significant field. There have been many successful practical applications in a wide range of disciplines, including engineering [1][2][3][4], physics [5,6] and fluid dynamics [7]. Recently, the controlled Duffing oscillator problem, which is known to describe many important oscillating phenomena in nonlinear engineering systems, has received considerable attention. The classical Duffing equation was introduced to study electronics [8], signal processing [9], fuzzy modeling and the adaptive control of uncertain chaotic systems [10,11]. Since most controlled Duffing oscillator problems cannot be solved explicitly, it is often necessary to resort to numerical techniques, which consist of appropriate combinations of numerical and optimization methods.
The study of numerical methods has provided an attractive field for researchers in the mathematical sciences. This field of study has seen the appearance of different numerical computational methods and efficient algorithms to solve the controlled Duffing oscillator problem, each one having advantages and disadvantages. For example, a direct method was presented in [12] to treat the controlled Duffing oscillator numerically. This method requires that the state and control variables, the constrained dynamic system, the boundary conditions and the cost function value all be expanded in Chebyshev series with unknown coefficients. The unknowns that arise from the Chebyshev series expansion of the cost function value have to be determined at each iteration step. As a result, a large and complicated nonlinear system of equations must be solved to obtain accuracy of a suitable order. A cell-averaging Chebyshev spectral method was presented in [13]; it is based on constructing an interpolation polynomial of degree n using Chebyshev nodes to approximate both the state and control vectors. The integral and differential expressions that arise from the system dynamics and the cost function are transformed into a nonlinear programming problem. The work presented in [14] contains a pseudospectral approximate numerical solution for Duffing oscillators that uses a differentiation matrix at Chebyshev points in order to impose the boundary conditions over the interval [-1, 1]. The properties of hybrid functions [15], which consist of block-pulse functions combined with Legendre polynomials, are studied in [16] for the numerical treatment of Duffing oscillators; the operational matrix of integration together with hybrid functions have been utilized to reduce the solution of a controlled Duffing oscillator to the solution of algebraic equations. Other numerical treatments for solving Duffing oscillators include the interpolating scaling functions method [17], state parameterization based on a linear combination of Chebyshev polynomials [18] and the Chebyshev spectral method [19] applied to the control and state variables.
In this paper, we propose new symmetry-shifted G(t)-polynomials (NSSG) as basis functions. These novel polynomials are used to solve the controlled Duffing oscillator problem via a direct state parameterization technique. The idea of this method consists of reducing the controlled Duffing oscillator problem to an optimization problem by expanding the second derivative of the state vector in NSSG polynomials with unknown coefficients, with the aid of an operational matrix of derivatives. The operational matrix of products is also introduced, and this matrix together with the operational matrix of derivatives is then used to transform the original problem into an optimization problem.
The paper is organized as follows: In Section 2, the definition of the new symmetry-shifted G(t)-polynomial over the interval [a, b] is presented together with some important properties. The main NSSG properties are developed in Section 3. The controlled Duffing oscillator problem is presented in Section 4, while the proposed algorithm for solving such a problem is illustrated in Section 5, where it is converted into a quadratic programming problem. In Section 6, approximate findings and a comparison with an existing method in the literature are presented to demonstrate the efficiency and the accuracy of the proposed numerical scheme.

New Symmetry-Shifted G(t)-Polynomial Over [a, b]
The definition of the new symmetry-shifted polynomials G_n(t) over [a, b] is given by the recursive formula in Equation (1). It is mentioned here that, for special values of a, b, p and q, one can obtain some of the well-known classical polynomials from G_n(t). Certain cases of these values are reported in Table 1.
With the given initial conditions, G_n(t) can be obtained through a recursive expression. The polynomials G_n(t) have the orthogonality property with respect to the inner product ⟨f, g⟩ = ∫_a^b f(t) g(t) ω(t) dt, where ω(t) is the weight function.
Note that G_n(t) admits a general matrix form: the vector of NSSG polynomials can be written as the product of a triangular matrix H with the power-basis vector [1, t, …, t^n]^T, where the entries h_ij of H are evaluated by Equation (5).

The Convergence Analysis
Theorem 1. Let u(t) be a function that is continuous on [a, b] and satisfies |u(t)| < M, where M is a constant. Then u(t) may be expanded in the NSSG series defined in Equation (6), and this series converges to u(t).
where u(t) is defined in Equation (6). From Equation (7), one can obtain a bound on the partial-sum differences. This means that the sequence u_n(t) converges, since it is a Cauchy sequence in the complete Hilbert space L²[a, b].
In order to prove that the series in Equation (6) converges to u(t), consider the limit function of the partial sums. Hence, Equation (10) shows that this limit coincides with u(t), which is the required result. □
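Since the intermediate displays of this proof are not reproduced above, the standard Cauchy-sequence argument for orthogonal expansions can be sketched as follows (notation assumed: c_i denotes the generalized Fourier coefficient of u with respect to G_i under the weighted inner product):

```latex
% Partial sums u_n(t) = \sum_{i=0}^{n} c_i G_i(t), with
% c_i = \langle u, G_i \rangle_\omega / \|G_i\|_\omega^2.
% By orthogonality of the G_i, for n > m:
\| u_n - u_m \|_\omega^2
  = \Big\| \sum_{i=m+1}^{n} c_i G_i \Big\|_\omega^2
  = \sum_{i=m+1}^{n} |c_i|^2 \, \|G_i\|_\omega^2 .
% Bessel's inequality, \sum_{i} |c_i|^2 \|G_i\|_\omega^2 \le \|u\|_\omega^2,
% forces this tail to vanish as m, n \to \infty, so (u_n) is Cauchy in
% L_\omega^2[a,b] and therefore converges there.
```

That the limit is u(t) itself then follows from the completeness of the orthogonal system, as the proof above indicates.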

The Operational NSSG Matrix of Products
The product of two NSSGs can be expressed in the following theorem.
Proof. This theorem is proved by mathematical induction. Note that the base case holds directly, so Theorem 2 is valid when m = 0. Let Equation (13) be valid for m − 1. Multiplying both sides of Equation (15) by the first-degree polynomial and then using Equation (15) together with Equation (1), one can obtain the required expansion; applying Equation (15) once more yields the term G_{|m−n|}(t). This is the required result. □
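The explicit NSSG linearization coefficients are not reproduced in this excerpt. As a concrete analogue of Theorem 2, the same kind of product-linearization rule can be checked numerically for Chebyshev polynomials (a stand-in basis, not the NSSG family), for which T_m·T_n = (T_{m+n} + T_{|m−n|})/2:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Product linearization (Chebyshev analogue of Theorem 2):
#   T_m(t) * T_n(t) = 0.5 * (T_{m+n}(t) + T_{|m-n|}(t))
m, n = 5, 3
product = C.Chebyshev.basis(m) * C.Chebyshev.basis(n)
expected = 0.5 * (C.Chebyshev.basis(m + n) + C.Chebyshev.basis(abs(m - n)))
assert np.allclose(product.coef, expected.coef)
print("coefficients of T_5 * T_3 in the Chebyshev basis:", product.coef)
```

Any orthogonal polynomial family admits such a linearization of products into a finite sum of basis members, which is exactly the structure the theorem establishes for the NSSG polynomials.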

The Operational NSSG Matrix of Derivatives
In this part, the first derivative of G_n^{[a,b]}(t) is determined in terms of the NSSG polynomials themselves. Based on that, the first-derivative operational matrix of the NSSG will be constructed.

Proof. Consider the odd case, where the induction proceeds on n.
Assume that the relation in Equation (18) holds for n − 1, with n odd. The validity for n will now be proved.
If one differentiates Equation (1), the following is obtained. Applying the principle of induction to the lower-order polynomial will then lead to the next expression.
Now, using the identity below, as well as Equation (13) in Theorem 2, one can obtain the following result:
After performing some manipulation, Equation (22) can be written as below:
This is equivalent to the result in Equation (18). In a similar way, one can prove the case of even order.
This completes the proof. □

On the other hand, the derivative of G_n(t) can be written in matrix form, as illustrated in the following result.

Corollary 1. Let Φ(t) be the NSSG polynomial vector defined as below. Then, for n ≥ 1, the derivative of Φ(t) can be explicitly constructed by

Φ′(t) = DΦ(t)

where D = (d_ij) is the (n + 1) × (n + 1) lower triangular NSSG polynomial operational matrix of derivatives. For odd n, the matrix D is obtained as below. Meanwhile, for even n, the last row of the matrix D is constructed as below. Moreover, the elements of the matrix can be obtained explicitly in the following form, where ϵ_j = 1/2 for j = 0 and ϵ_j = 1 otherwise.
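Because the NSSG recurrence coefficients are not reproduced in this excerpt, the construction of D can be illustrated with a hypothetical stand-in basis: Chebyshev polynomials shifted to [−2, 0]. The sketch below builds D row by row, by re-expanding each derivative φ_i′ in the same basis, and verifies the defining property Φ′(t) = DΦ(t) pointwise:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def derivative_operational_matrix(n, domain=(-2.0, 0.0)):
    """Build D such that phi'(t) = D @ phi(t), where phi_i is the degree-i
    Chebyshev polynomial shifted to `domain` (a stand-in for the NSSG basis)."""
    D = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        dphi = C.Chebyshev.basis(i, domain=list(domain)).deriv()
        D[i, :len(dphi.coef)] = dphi.coef   # expansion of phi_i' in the basis
    return D

n = 5
D = derivative_operational_matrix(n)
t = np.linspace(-2.0, 0.0, 9)
phi = np.array([C.Chebyshev.basis(i, domain=[-2, 0])(t) for i in range(n + 1)])
dphi = np.array([C.Chebyshev.basis(i, domain=[-2, 0]).deriv()(t) for i in range(n + 1)])
assert np.allclose(D @ phi, dphi)   # phi'(t) = D phi(t) at the sample points
assert np.allclose(D, np.tril(D))   # D is lower triangular, as in Corollary 1
```

The lower triangularity simply reflects that differentiating a degree-i polynomial produces a polynomial of degree i − 1, expressible in basis members of lower index; the same structural property holds for the NSSG matrix D.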
The Relation between the NSSG over [−2, 0] and the Power Function t^n

The first six NSSG polynomials over the interval [−2, 0] are given explicitly, and they can be rewritten in the following form.
This means that the powers of t can be expressed in terms of the NSSG polynomials of degrees up to n. The general explicit formula is given by the following result.

Theorem 4. For every integer n ≥ 0, the power t^n can be expanded in a unique way as a linear combination of G_n^{[−2,0]}(t) as follows:

Proof. Mathematical induction is used to prove Equation (23). First, Equation (23) is verified for n = 1. Assume Equation (23) is valid for n − 1. Multiplying both sides of Equation (24) by (2t + 2) and then using Equation (24) again, the required result can be obtained after applying the following identities:
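Theorem 4's change of basis can be mirrored numerically. The sketch below uses Chebyshev polynomials shifted to [−2, 0] as a hypothetical stand-in for the NSSG family (whose explicit coefficients are not reproduced here); it expands t^n uniquely in the shifted basis and checks the expansion pointwise:

```python
import numpy as np
from numpy.polynomial import polynomial as P
from numpy.polynomial import chebyshev as C

n = 4
p = P.Polynomial([0.0] * n + [1.0])            # the power function t^n
# Unique expansion of t^n in a degree-graded basis shifted to [-2, 0]
# (Chebyshev here; the NSSG expansion in Theorem 4 has the same structure).
q = p.convert(kind=C.Chebyshev, domain=[-2, 0])
t = np.linspace(-2.0, 0.0, 11)
assert np.allclose(p(t), q(t))                 # same function, new basis
assert len(q.coef) == n + 1                    # only degrees 0..n appear
print("expansion coefficients of t^4:", q.coef)
```

Uniqueness follows because any degree-graded polynomial basis of degrees 0 through n spans the same space as {1, t, …, t^n}, so the coefficient vector of t^n is determined by inverting a triangular change-of-basis matrix.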

The NSSG Technique for Solving the Controlled Duffing Oscillator Problem
Consider the following controlled Duffing oscillator problem for a linear oscillator, as shown in [6], which is subject to
together with the boundary conditions in Equation (27). The problem consists of finding the control vector u(t) that minimizes (25) subject to (26) and (27). The exact solution of the controlled linear oscillator can be obtained by applying Pontryagin's maximum principle. Consider an approximation of the state variable x(t) using NSSG polynomials of order n, as in Equation (28). The boundary conditions in Equation (27) must be satisfied by the approximate solution in Equation (28):
Then, the control variable u(t) is obtained from Equation (26), as in Equation (33). Substituting Equation (33) into Equation (25) gives the discretized performance index. As a result, the controlled Duffing oscillator problem (25)-(27) is transformed into a nonlinear quadratic programming problem: minimize the performance index with respect to the unknown parameter vector a, subject to the equality constraints Fa − c = 0 arising from the boundary conditions. The advantages of the proposed NSSG polynomial technique in solving the controlled Duffing oscillator problem include the following points: (1) It deals directly with the second-order derivative in the constrained differential equation, Equation (26), without reducing it to a first-order system; as a result, the number of unknown parameters is reduced, unlike in the technique of [25]. (2) It deals directly with the interval t ∈ [−T, 0], while other methods must introduce a suitable transformation according to their basis functions [18]. (3) The problem is reduced to a quadratic programming problem, which is much easier to handle than the numerical integration of a nonlinear TPBVP derived from Pontryagin's maximum principle [26].
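As an illustration of this transformation, the sketch below applies the same direct-parameterization idea to an assumed benchmark instance: minimize J = ½∫_{−T}^{0} u² dt subject to ẍ + x = u with x(−2) = 0.5, ẋ(−2) = −0.5 and x(0) = ẋ(0) = 0. These problem data, and the ordinary power basis used in place of the NSSG polynomials, are assumptions, since Equations (25)-(27) are not reproduced in full here:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance of the direct parameterization scheme: drive the
# linear oscillator  x'' + x = u  from x(-2)=0.5, x'(-2)=-0.5 to rest at t=0
# while minimizing J = 0.5 * integral of u^2 over [-2, 0].  (Problem data
# assumed; a power basis stands in for the NSSG polynomials, so the optimal
# coefficients differ from the paper's vector a.)
T, alpha, rho, n = 2.0, 0.5, -0.5, 7
nodes, weights = np.polynomial.legendre.leggauss(30)  # Gauss nodes on [-1, 1]
tq = -T / 2 * (1 - nodes)                             # map nodes to [-T, 0]
wq = weights * T / 2

def x(a, t):   return np.polyval(a[::-1], t)          # state x(t)
def dx(a, t):  return np.polyval(np.polyder(a[::-1]), t)
def ddx(a, t): return np.polyval(np.polyder(a[::-1], 2), t)

def J(a):
    u = ddx(a, tq) + x(a, tq)                         # u from the dynamics
    return 0.5 * np.sum(wq * u**2)                    # performance index

constraints = [{'type': 'eq', 'fun': lambda a: np.array(
    [x(a, -T) - alpha, dx(a, -T) - rho, x(a, 0.0), dx(a, 0.0)])}]
res = minimize(J, np.zeros(n + 1), constraints=constraints, method='SLSQP')
print("approximate optimal J:", res.fun)
```

Because J is quadratic in the parameters and the boundary conditions are linear in them, this is precisely a quadratic programming problem, which standard NLP solvers handle efficiently.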

Results and Discussion
Problem (25)-(27) can be solved in the standard case ω = 1, T = 2, α = 0.5 and ρ = −0.5. Applying Equations (28)-(39), the problem is subject to equality constraints that can be written in matrix form. Using Equation (31) to obtain the optimal unknown vector, we have a^T = [0.0877800707547, −0.125, 0.0392099056603, 0, −0.0019899764150].
The NSSG polynomial coefficients for the state function x(t) and the control function u(t) are listed in Table 2 for different values of n. In Tables 3 and 4, the approximate values of the state and control functions are given for different values of t ∈ [−2, 0], using basis functions of orders n = 4, 5, 6 and 7. These solutions are also plotted for the same values in Figure 1. As shown in Table 5, a comparison was made between the present method and the solution obtained by radial basis functions [25] for different values of n; that method approximates the solution of the optimal control problem by radial basis functions using collocation. Table 6 compares the values of x(t) and u(t) obtained by our method with the existing findings in [25]. The approach of [25] requires three numerical methods, which increases the computational time and effort. As n increases, the absolute error |J_exact − J_G(t)| decreases significantly, and the results rapidly tend to the exact values. This table illustrates that the NSSG polynomials have a good convergence rate.
The absolute errors |x_exact − x_G(t)| and |u_exact − u_G(t)|, as well as the corresponding relative errors, are listed in Table 7 for n = 7, and they are also presented graphically in Figure 2.
Figure 2 shows the decrease of the absolute error achieved by the NSSG polynomials with the state parameterization technique. Figure 3 displays the graph of the absolute error of J for different orders of the NSSG polynomials. All the tables and figures show that the suggested methodology is capable of providing numerical solutions for the controlled oscillation problem with high accuracy. The solution obtained from the proposed approach is in good agreement with the existing results, thus demonstrating the reliability of the proposed scheme. The optimal performance index value J obtained by the proposed method for n = 4, 5, 6 and 7 is a good approximation. The approximate state and control variables are, respectively, as below: x_6(t) = 0.001738299522379t^6 + 0.012706287200501t^5 + 0.001839575251112t^4 − 0.089491560685423t
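The exact solution referenced in Section 4 comes from Pontryagin's maximum principle, which leads to a two-point boundary value problem (TPBVP). As a cross-check, the sketch below solves that TPBVP numerically for the same assumed benchmark data used above (ω = 1, T = 2, α = 0.5, ρ = −0.5, minimizing ½∫u²; these data are assumptions, since Equations (25)-(27) are not reproduced in full here):

```python
import numpy as np
from scipy.integrate import solve_bvp

# TPBVP from Pontryagin's maximum principle for the assumed benchmark
#   min J = 0.5 * int_{-T}^{0} u^2 dt,   x'' + x = u,
#   x(-T)=0.5, x'(-T)=-0.5, x(0)=x'(0)=0.
# Stationarity of the Hamiltonian gives u = -p2; the costates satisfy
# p1' = p2 and p2' = -p1.
T, alpha, rho = 2.0, 0.5, -0.5

def odes(t, y):
    x1, x2, p1, p2 = y
    return np.vstack([x2, -x1 - p2, p2, -p1])

def bc(ya, yb):
    return np.array([ya[0] - alpha, ya[1] - rho, yb[0], yb[1]])

t = np.linspace(-T, 0.0, 50)
sol = solve_bvp(odes, bc, t, np.zeros((4, t.size)))
u = -sol.sol(t)[3]                               # optimal control u(t)
dt = np.diff(t)
J = 0.5 * np.sum(0.5 * (u[:-1]**2 + u[1:]**2) * dt)   # trapezoidal estimate
print("BVP status:", sol.status, " J approx:", J)
```

This indirect route requires integrating a coupled state-costate system with split boundary conditions, which is exactly the extra numerical effort that the direct quadratic programming formulation of Section 5 avoids.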

Conclusions
A special optimal control problem, the controlled Duffing oscillator, was treated numerically in the present work using NSSG polynomials. The suggested technique uses the constructed NSSG polynomial operational matrix of first-order derivatives in combination with an appropriate direct parameterization scheme. The controlled Duffing oscillator problem was solved approximately for special values of the unknown parameters utilizing various orders of NSSG polynomials. The outcomes of the approximate solutions demonstrate the simplicity and the accuracy of the presented direct technique. Additionally, the proposed methodology can be used in a number of applications to numerically treat different classes of optimal control problems. The presented results support the satisfactory accuracy and efficiency of the recommended method. It can deal directly with the highest-order derivatives in a constrained differential equation without reducing it to a first-order system; as a result, the number of unknown parameters can be reduced. This fact was shown when applying the presented NSSG polynomials to the controlled Duffing oscillator. The suggested direct technique is much easier to use than the numerical integration of the nonlinear TPBVP derived from Pontryagin's maximum principle. The proposed method can be extended to nonlinear calculus of variations and optimal control problems.

The leading coefficient of G_n^{[a,b]}(t) is equal to 2^{n+1}/(b − a) for n = 1, 2, 3, 4, …, as can be noticed from the recursive formula shown above. Moreover, the explicit analytical formula for G_n^{[a,b]}(t) is given below.

Theorem 2 .
The product of two NSSG polynomials satisfies the following relationship:

Theorem 3 .
For n ≥ 1, the following relation can be employed to relate the original NSSG polynomials with their first derivatives.

Figure 1 .
Figure 1. Graphs of the approximate solutions x(t) and u(t) using NSSG polynomials and the exact solution with n = 4, 5, 6 and 7.

Figure 2 .
Figure 2. Graphs of the absolute errors for state x(t) and control u(t) with n = 7.

Figure 3 .
Figure 3. Graph of the absolute error between the NSSG polynomial solution and the exact solution of J with a different n.

Table 2 .
The optimal values of unknown parameters a * .

Table 3 .
The approximate and exact values of x(t).

Table 4 .
The approximate and exact values of u(t).

Table 5 .
Absolute and relative errors.

Table 6 .
The approximate values of x(t) and u(t).

Table 7 .
The absolute and relative errors with n = 7.