Robust Control Using LMI Transformation and Neural-Based Identification for Regulating Singularly-Perturbed Reduced Order Eigenvalue-Preserved Dynamic Systems

In control engineering, robust control is an area that explicitly deals with uncertainty in its approach to the design of the system controller [7,10,24]. Robust control methods are designed to operate properly as long as disturbances or uncertain parameters remain within a compact set, and they aim to accomplish robust performance and/or stability in the presence of bounded modeling errors. A robust control policy is static, in contrast to an adaptive (dynamic) control policy: rather than adapting to measurements of variations, the controller is designed to function under the assumption that certain variables are unknown but, for example, bounded. An early example of a robust control method is high-gain feedback control, where the effect of any parameter variations is rendered negligible by using a sufficiently high gain. The overall goal of a control system is to cause the output variable of a dynamic process to follow a desired reference variable accurately. This complex objective is achieved through a number of steps, a major one of which is to develop a mathematical description, called a dynamical model, of the process to be controlled [7,10,24]. This dynamical model is usually expressed as a set of differential equations that describe the dynamic behavior of the system, which can be further represented in state-space using system matrices or in transform-space using transfer functions [7,10,24]. In system modeling, it is sometimes required to identify some of the system parameters. This objective may be achieved by the use of artificial neural networks (ANN), which are considered the new generation of information processing networks [5,15,17,28,29].
Artificial neural systems can be defined as physical cellular systems which have the capability of acquiring, storing and utilizing experiential knowledge [15,29], where an ANN consists of an interconnected group of basic processing elements called neurons that perform summing operations and nonlinear function computations. Neurons are usually organized in layers and forward connections, and computations are performed in a parallel mode at all nodes and connections. Each connection is expressed by a numerical value called the weight, where the conducted learning process of a neuron corresponds to the changing of its corresponding weights.


When dealing with system modeling and control analysis, there exist equations and inequalities that require optimized solutions. An important expression used in robust control is the linear matrix inequality (LMI), which expresses specific convex optimization problems for which powerful numerical solvers exist [1,2,6]. The LMI optimization technique originated with Lyapunov theory, which shows that the differential equation x'(t) = A x(t) is stable if and only if there exists a positive definite matrix [P] such that A^T P + P A < 0. The requirement {P > 0, A^T P + P A < 0} is known as the Lyapunov inequality on [P], which is a special case of an LMI. By picking any Q = Q^T > 0 and then solving the linear equation A^T P + P A = -Q for the matrix [P], the resulting [P] is guaranteed to be positive definite if the given system is stable. The linear matrix inequalities that arise in system and control theory can generally be formulated as convex optimization problems that are amenable to computer solution and can be solved using algorithms such as the ellipsoid algorithm [6].
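As a quick numerical check of this stability test, the Lyapunov equation can be solved with standard linear-algebra tools. The sketch below uses SciPy with a hypothetical stable matrix A (not a system from this chapter):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable system matrix (eigenvalues -1 and -2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)  # any Q = Q^T > 0

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so passing A^T and -Q
# solves A^T P + P A = -Q for P.
P = solve_continuous_lyapunov(A.T, -Q)

# Since A is stable, P is symmetric positive definite.
eigs_P = np.linalg.eigvalsh(P)
```

Because A is stable here, the computed P must come out positive definite, which is exactly the Lyapunov certificate described above.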
In practical control design problems, the first step is to obtain a proper mathematical model in order to examine the behavior of the system for the purpose of designing an appropriate controller [1,2,3,4,5,7,8,9,10,11,12,13,14,16,17,19,20,21,22,24,25,26,27]. Sometimes, this mathematical description involves a certain small parameter (i.e., perturbation). Neglecting this small parameter results in simplifying the designed controller by reducing the order of the corresponding system [1,3,4,5,8,9,11,12,13,14,17,19,20,21,22,25,26]. A reduced model can be obtained by neglecting the fast dynamics (i.e., the non-dominant eigenvalues) of the system and focusing on the slow dynamics (i.e., the dominant eigenvalues). This simplification and reduction of the system model leads to controller cost minimization [7,10,13]. An example is modern integrated circuits (ICs), where increasing package density forces developers to include side effects; since these ICs are often modeled by complex RLC-based circuits and systems, detailed modeling of the original system would be very demanding computationally [16]. In control systems, model reduction is an important issue because feedback controllers do not usually consider all of the dynamics of the functioning system [4,5,17]. The main results in this research include the introduction of a new layered method of intelligent control that can be used to robustly control the required system dynamics, where the new control hierarchy uses a recurrent supervised neural network to identify certain parameters of the transformed system matrix [Â], and the corresponding LMI is used to determine the permutation matrix [P] so that a complete system transformation {[B̂], [Ĉ], [D̂]} is performed. The transformed model is then reduced using the method of singular perturbation, and various feedback control schemes are applied to enhance the corresponding system performance. It is shown that the new hierarchical control method simplifies the model of the dynamical systems and therefore uses simpler controllers that produce the needed system response for specific performance enhancements. Figure 1 illustrates the layout of the utilized new control method. Layer 1 shows the continuous modeling of the dynamical system. Layer 2 shows the discrete system model. Layer 3 illustrates the neural network identification step. Layer 4 presents the control stage.
Fig. 1. The newly utilized hierarchical control method.
While a similar hierarchical method of ANN-based identification and LMI-based transformation has been previously utilized in several applications, such as the reduced-order electronic Buck switching-mode power converter [1] and reduced-order quantum computation systems [2] with relatively simple state feedback controller implementations, the method presented in this work further shows the wide applicability of the introduced intelligent control technique for dynamical systems using a broad spectrum of control methods: (a) PID-based control, (b) state feedback control using (1) pole placement-based control and (2) linear quadratic regulator (LQR) optimal control, and (c) output feedback control. Section 2 presents background on recurrent supervised neural networks, linear matrix inequalities, system model transformation using neural identification, and model order reduction. Section 3 presents a detailed illustration of the recurrent neural network identification with the LMI optimization techniques for system model order reduction. A practical implementation of the neural network identification, with comparative results with and without the use of LMI optimization for the dynamical system model order reduction, is presented in Section 4. Section 5 presents the application of feedback control to the reduced model using PID control, state feedback control using pole assignment, state feedback control using LQR optimal control, and output feedback control. Conclusions and future work are presented in Section 6.

Background
The following sub-sections provide important background on artificial supervised recurrent neural networks, system transformation without using LMI, state transformation using LMI, and model order reduction, which can be used for the robust control of dynamic systems and will be used in Sections 3-5.

Artificial recurrent supervised neural networks
The ANN is an emulation of the biological neural system [15,29]. The basic model of the neuron is established by emulating the functionality of a biological neuron, which is the basic signaling unit of the nervous system. The internal process of a neuron may be mathematically modeled as shown in Figure 2 [15,29].
Fig. 2. A mathematical model of the artificial neuron.
As seen in Figure 2, the internal activity of the neuron is produced as v = Σ_i w_i x_i, and the neuron output is obtained by passing v through the nonlinearity φ(.). In supervised learning, it is assumed that at each instant of time when the input is applied, the desired response of the system is available [15,29]. The difference between the actual and the desired response represents an error measure which is used to correct the network parameters externally. Since the adjustable weights are initially assumed, the error measure may be used to adapt the network's weight matrix [W]. A set of input and output patterns, called a training set, is required for this learning mode, where the training algorithm typically identifies the direction of the negative error gradient and reduces the error accordingly [15,29].
The supervised recurrent neural network used for the identification in this research is based on an approximation of the method of steepest descent [15,28,29]. The network tries to match the output of certain neurons to the desired values of the system output at a specific instant of time. Consider a network consisting of a total of N neurons with M external input connections, as shown in Figure 3, for a 2nd order system with two neurons and one external input. The variable g(k) denotes the (M x 1) external input vector applied to the network at discrete time k, and the variable y(k + 1) denotes the corresponding (N x 1) vector of individual neuron outputs produced one step later at time (k + 1). The input vector g(k) and the one-step delayed output vector y(k) are concatenated to form the ((M + N) x 1) vector u(k), whose i-th element is denoted by u_i(k). Letting Λ denote the set of indices i for which g_i(k) is an external input, and ß denote the set of indices i for which u_i(k) is the output of a neuron (which is y_i(k)), the elements of u(k) are given by:

u_i(k) = g_i(k) if i ∈ Λ, and u_i(k) = y_i(k) if i ∈ ß

The (N x (M + N)) recurrent weight matrix of the network is represented by the variable [W]. The net internal activity of neuron j at time k is given by:

v_j(k) = Σ_{i ∈ Λ∪ß} w_ji(k) u_i(k)

where Λ ∪ ß is the union of the sets Λ and ß. At the next time step (k + 1), the output of neuron j is computed by passing v_j(k) through the nonlinearity φ(.), thus obtaining:

y_j(k+1) = φ(v_j(k))

The derivation of the recurrent algorithm starts by using d_j(k) to denote the desired (target) response of neuron j at time k, and ς(k) to denote the set of neurons that are chosen to provide externally reachable outputs. A time-varying (N x 1) error vector e(k) is defined whose j-th element is given by the following relationship:

e_j(k) = d_j(k) - y_j(k)

Fig. 3. The recurrent neural network structure, with the system state fed back as an internal input through a one-step neuron delay.

The objective is to minimize the cost function E_total obtained by:

E_total = Σ_k E(k),  where  E(k) = (1/2) Σ_{j ∈ ς} e_j^2(k)

To accomplish this objective, the method of steepest descent, which requires knowledge of the gradient of E(k) with respect to the weight matrix [W], is used. In order to train the recurrent network in real time, the instantaneous estimate of this gradient is used. For the case of a particular weight w_mℓ(k), the incremental change is:

Δw_mℓ(k) = -η ∂E(k)/∂w_mℓ(k)

where η is the learning-rate parameter. Therefore:

∂E(k)/∂w_mℓ(k) = Σ_{j ∈ ς} e_j(k) ∂e_j(k)/∂w_mℓ(k) = -Σ_{j ∈ ς} e_j(k) ∂y_j(k)/∂w_mℓ(k)

Differentiating the net internal activity of neuron j with respect to w_mℓ(k) yields:

∂v_j(k)/∂w_mℓ(k) = Σ_{i ∈ Λ∪ß} [ w_ji(k) ∂u_i(k)/∂w_mℓ(k) + (∂w_ji(k)/∂w_mℓ(k)) u_i(k) ]

where ∂w_ji(k)/∂w_mℓ(k) equals "1" only when j = m and i = ℓ, and "0" otherwise. Thus:

∂v_j(k)/∂w_mℓ(k) = Σ_{i ∈ ß} w_ji(k) ∂y_i(k)/∂w_mℓ(k) + δ_mj u_ℓ(k)

where δ_mj is a Kronecker delta equal to "1" when j = m and "0" otherwise. Having these equations provides that:

∂y_j(k+1)/∂w_mℓ(k) = φ'(v_j(k)) [ Σ_{i ∈ ß} w_ji(k) ∂y_i(k)/∂w_mℓ(k) + δ_mj u_ℓ(k) ]

The initial state of the network at time (k = 0) is assumed to be zero, so that:

∂y_j(0)/∂w_mℓ = 0

The dynamical system is described by the following triply-indexed set of variables:

π_mℓ^j(k) = ∂y_j(k)/∂w_mℓ(k)

For every time step k and all appropriate j, m and ℓ, the system dynamics are controlled by:

π_mℓ^j(k+1) = φ'(v_j(k)) [ Σ_{i ∈ ß} w_ji(k) π_mℓ^i(k) + δ_mj u_ℓ(k) ]

The values of π_mℓ^j(k) and the error signal e_j(k) are used to compute the corresponding weight changes:

Δw_mℓ(k) = η Σ_{j ∈ ς} e_j(k) π_mℓ^j(k)

Using the weight changes, the updated weight w_mℓ(k + 1) is calculated as follows:

w_mℓ(k+1) = w_mℓ(k) + Δw_mℓ(k)

Repeating this computation procedure provides the minimization of the cost function, and the objective is thus achieved. With the many advantages that the neural network has, it is used for the important step of parameter identification in model transformation for the purpose of model order reduction, as will be shown in the following section.

Model transformation and linear matrix inequality
In this section, the detailed illustration of system transformation using LMI optimization is presented. Consider the discrete dynamical system:

x(k+1) = A x(k) + B u(k)     (4)
y(k) = C x(k) + D u(k)       (5)

The state space representation of Equations (4)-(5) may be described by the block diagram shown in Figure 4. In order to determine the transformed [A] matrix, which is [Â], the discrete zero input response is obtained. This is achieved by providing the system with some initial state values and setting the system input to zero (u(k) = 0). Hence, the discrete system of Equations (4)-(5), with the initial condition x(0) = x0, becomes:

x(k+1) = A x(k)     (6)

We need x(k) as an ANN target to train the network to obtain the needed parameters of [Â] in the corresponding transformed zero input response:

x̂(k+1) = Â x̂(k)     (7)

where [Â] takes the upper triangular form:

     | λ1  Â12 ...  Â1n |
Â =  | 0   λ2  ...  Â2n |     (8)
     | :            :   |
     | 0   0   ...  λn  |

where λi represents the system eigenvalues. This is an upper triangular matrix that preserves the eigenvalues by (1) placing the original eigenvalues on the diagonal and (2) finding the elements Âij in the upper triangle. This upper triangular form produces the same eigenvalues for the purpose of eliminating the fast dynamics and sustaining the slow-dynamics eigenvalues through model order reduction, as will be shown in later sections.
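An upper triangular matrix with the original eigenvalues on the diagonal always exists and can be computed directly, for instance by a Schur decomposition; the sketch below (with a hypothetical matrix A, not one from this chapter) shows the triangular form that the identification targets:

```python
import numpy as np
from scipy.linalg import schur

# Hypothetical system matrix; its Gershgorin discs are disjoint and centered
# on the real axis, so all eigenvalues are real and the Schur form is
# genuinely triangular (no 2x2 blocks).
A = np.array([[-3.0, 1.0, 0.0],
              [0.5, -1.0, 0.2],
              [0.0, 0.3, -5.0]])

# Real Schur form: A = Q T Q^T with T upper triangular and the eigenvalues
# of A on the diagonal of T.
T, Qmat = schur(A, output='real')
```

Note that the chapter's method identifies such a triangular form with an ANN rather than computing it in closed form; the decomposition here only illustrates that the target structure of Equation (8) is always attainable.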

Having the [A] and [Â] matrices, the permutation matrix [P] is determined using the LMI optimization technique, as will be illustrated in later sections. The complete system transformation can then be achieved as follows. Assuming x(k) = P x̂(k), where x̂(k) is the transformed state vector, the system of Equations (4)-(5) can be re-written as:

P x̂(k+1) = A P x̂(k) + B u(k)
y(k) = C P x̂(k) + D u(k)

Pre-multiplying the first equation above by [P^-1], one obtains:

x̂(k+1) = P^-1 A P x̂(k) + P^-1 B u(k)

which yields the following transformed model:

x̂(k+1) = Â x̂(k) + B̂ u(k)     (9)
y(k) = Ĉ x̂(k) + D̂ u(k)       (10)

where the transformed system matrices are given by:

Â = P^-1 A P,  B̂ = P^-1 B,  Ĉ = C P,  D̂ = D

Transforming the system matrix [A] into the form shown in Equation (8) can be achieved based on the following definition [18].
A matrix [A] is reducible if there exists a permutation matrix [P], and some integer r with 1 ≤ r ≤ n-1, such that:

P^T A P = | X  Y |
          | 0  Z |     (15)

where [X] is an (r x r) matrix, [Z] is an ((n-r) x (n-r)) matrix, [Y] is an (r x (n-r)) matrix, and [0] is an ((n-r) x r) zero matrix. The attractive features of the permutation matrix [P], namely being (1) orthogonal and (2) invertible, have made this transformation easy to carry out. However, the permutation matrix structure narrows the applicability of this method to a limited category of applications. A form of a similarity transformation, Â = P^-1 A P with a general invertible [P], can be used to overcome this limitation.

Hence, based on [A] and [Â], the corresponding LMI is used to obtain the transformation matrix [P], and the optimization problem is cast as follows:

minimize || P Â - A P ||   subject to   P > 0

which can be written in an LMI equivalent form as:

minimize trace(S)   subject to   | S            (P Â - A P)^T |
                                 | (P Â - A P)  I             |  ≥ 0,   P > 0

where S is a symmetric slack matrix [6].
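The transformation matrix must satisfy P Â = A P (equivalently Â = P^-1 A P), and this constraint is linear in P. Short of a full LMI solver, one illustrative way to search for a candidate [P] is to vectorize the constraint with Kronecker products and extract a null-space solution, as sketched below with a hypothetical A; a dedicated LMI solver would additionally enforce the definiteness constraint, and invertibility of the result should be checked:

```python
import numpy as np
from scipy.linalg import schur, null_space

# Hypothetical system matrix; A_hat is its upper triangular (Schur) form.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
A_hat, _ = schur(A)

# P A_hat = A P  <=>  (A_hat^T kron I - I kron A) vec(P) = 0  (column-major vec)
n = A.shape[0]
M = np.kron(A_hat.T, np.eye(n)) - np.kron(np.eye(n), A)
basis = null_space(M)                  # columns span all vec(P) solutions

# A generic combination of the basis vectors gives one candidate P; it must
# still be checked for invertibility before use as a similarity transformation.
P = basis.sum(axis=1).reshape(n, n, order='F')
```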

System transformation using neural identification
A different transformation can be performed based on the use of the recurrent ANN, while preserving the eigenvalues of the reduced model as a subset of the original system. To achieve this goal, the upper triangular block structure produced by the permutation matrix, as shown in Equation (15), is used. However, based on the implementation of the ANN, finding the permutation matrix [P] does not have to be performed; instead, [X] and [Z] in Equation (15) will contain the system eigenvalues, and [Y] in Equation (15) will be estimated directly using the corresponding ANN techniques. Hence, the transformation is obtained and the reduction is then achieved. Therefore, another way to obtain a transformed model that preserves the eigenvalues of the reduced model as a subset of the original system is to use ANN training without the LMI optimization technique. This may be achieved based on the assumption that the states are reachable and measurable. Hence, the recurrent ANN can directly identify the transformed system matrix, where the eigenvalues are selected as a subset of the original system eigenvalues.

Model order reduction
Linear time-invariant (LTI) models of many physical systems have fast and slow dynamics, and may be referred to as singularly perturbed systems [19]. Neglecting the fast dynamics of a singularly perturbed system provides a reduced (i.e., slow) model. This gives the advantage of designing simpler, lower-dimensional reduced-order controllers based on the reduced-model information.
To show the formulation of a reduced order system model, consider the singularly perturbed system [9]:

x'(t) = A11 x(t) + A12 ξ(t) + B1 u(t)     (18)
ε ξ'(t) = A21 x(t) + A22 ξ(t) + B2 u(t)    (19)
y(t) = C1 x(t) + C2 ξ(t) + D u(t)          (20)

where x and ξ are the slow and fast state vectors, respectively, and ε is a small positive parameter. Setting ε = 0 and solving Equation (19) for the fast states yields:

ξ(t) = -A22^-1 [ A21 x_r(t) + B2 u(t) ]     (21)

where the index r denotes the remained or reduced model. Substituting Equation (21) in Equations (18)-(20) yields the following reduced order model:

x_r'(t) = A_r x_r(t) + B_r u(t)
y_r(t) = C_r x_r(t) + D_r u(t)

where:

A_r = A11 - A12 A22^-1 A21,  B_r = B1 - A12 A22^-1 B2,
C_r = C1 - C2 A22^-1 A21,    D_r = D - C2 A22^-1 B2
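This residualization step can be sketched as a small helper function (the partitioning convention, with the slow block listed first, and the example matrices are hypothetical):

```python
import numpy as np

def singular_perturbation_reduce(A, B, C, D, n_slow):
    """Residualize the fast states: set their derivative to zero, solve them
    out, and keep the first n_slow (slow/dominant) states."""
    A11, A12 = A[:n_slow, :n_slow], A[:n_slow, n_slow:]
    A21, A22 = A[n_slow:, :n_slow], A[n_slow:, n_slow:]
    B1, B2 = B[:n_slow, :], B[n_slow:, :]
    C1, C2 = C[:, :n_slow], C[:, n_slow:]
    A22inv = np.linalg.inv(A22)          # requires an invertible fast block
    Ar = A11 - A12 @ A22inv @ A21
    Br = B1 - A12 @ A22inv @ B2
    Cr = C1 - C2 @ A22inv @ A21
    Dr = D - C2 @ A22inv @ B2
    return Ar, Br, Cr, Dr

# Hypothetical system with one slow state and one fast state
A = np.array([[-1.0, 0.5], [0.2, -10.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
Ar, Br, Cr, Dr = singular_perturbation_reduce(A, B, C, D, n_slow=1)
```

One property worth noting is that this residualization preserves the DC (steady-state) gain of the full model exactly, which is why the reduced models later in the chapter track the original step responses well at low frequencies.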

Neural network identification with LMI optimization for the system model order reduction
In this work, it is our objective to search for a similarity transformation that can be used to decouple a pre-selected eigenvalue set from the system matrix [A]. To achieve this objective, the neural network is trained to identify the transformed discrete system matrix [Â] [1,2,15,29]. For the system of Equations (18)-(20), the discrete model of the dynamical system is obtained as:

x(k+1) = A_d x(k) + B_d u(k)     (26)
y(k) = C_d x(k) + D_d u(k)       (27)

where k is the time index. The identified discrete model can be written in a detailed element-wise form, and the detailed matrix elements of Equations (26)-(27) were shown in Figure 3 in the previous section.
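The continuous model is discretized (e.g., with a zero-order hold) before ANN training; a sketch with hypothetical matrices and the 0.1 sec. sampling time used in the examples below:

```python
import numpy as np
from scipy.signal import cont2discrete

# Hypothetical continuous-time state-space model
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
Ts = 0.1  # sampling time in seconds

# Zero-order-hold discretization: x(k+1) = Ad x(k) + Bd u(k)
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts, method='zoh')
```

Under a zero-order hold, Ad = exp(A Ts) and Bd = A^-1 (Ad - I) B for invertible A, while the output matrices are unchanged.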
The recurrent ANN presented in Section 2.1 can be summarized by defining Λ as the set of indices i for which g_i(k) is an external input, defining ß as the set of indices i for which y_i(k) is an internal input or a neuron output, and defining u_i(k) as the combination of the internal and external inputs for which i ∈ Λ ∪ ß. Using this setting, training the ANN depends on the internal activity of each neuron, which is given by:

v_j(k) = Σ_{i ∈ Λ∪ß} w_ji(k) u_i(k)     (28)

where w_ji is the weight representing an element in the identified system or input matrix. At the next time step (k + 1), the output (internal input) of neuron j is computed by passing the activity through the nonlinearity φ(.) as follows:

y_j(k+1) = φ(v_j(k))     (29)

With these equations, based on an approximation of the method of steepest descent, the ANN identifies the system matrix [A_d], as illustrated in Equation (6) for the zero input response. That is, an error can be obtained by matching a true state output with a neuron output as follows:

e_j(k) = x_j(k) - y_j(k)

Now, the objective is to minimize the cost function given by:

E_total = Σ_k E(k),  where  E(k) = (1/2) Σ_{j ∈ ς} e_j^2(k)

and ς denotes the set of indices j for the output of the neuron structure. This cost function is minimized by estimating the instantaneous gradient of E(k) with respect to the weight matrix [W] and then updating [W] in the negative direction of this gradient [15,29].
In steps, this may proceed as follows:
- Initialize the weights [W] with a set of uniformly distributed random numbers. Starting at the instant (k = 0), use Equations (28)-(29) to compute the output values of the N neurons (where N = |ß|).
- For every time step k and all j ∈ ß, m ∈ ß and ℓ ∈ Λ ∪ ß, compute the dynamics of the system, which are governed by the triply-indexed set of variables:

π_mℓ^j(k+1) = φ'(v_j(k)) [ Σ_{i ∈ ß} w_ji(k) π_mℓ^i(k) + δ_mj u_ℓ(k) ]

with initial conditions π_mℓ^j(0) = 0, where δ_mj is given by ∂w_ji(k)/∂w_mℓ(k), which is equal to "1" only when j = m and i = ℓ and is "0" otherwise. Notice that, for the special case of a sigmoidal nonlinearity in the form of a logistic function, the derivative is φ'(v_j(k)) = y_j(k+1) [1 - y_j(k+1)].
- Compute the weight changes corresponding to the error signal and system dynamics:

Δw_mℓ(k) = η Σ_{j ∈ ς} e_j(k) π_mℓ^j(k)

- Update the weights in accordance with:

w_mℓ(k+1) = w_mℓ(k) + Δw_mℓ(k)

- Repeat the computation until the desired identification is achieved.
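The steps above can be sketched compactly for the special case of a linear activation φ(v) = v (so φ' = 1) and a zero-input identification run, where the network output feeds back as the only input. All matrices, rates, and iteration counts below are hypothetical illustrations, not the chapter's tape-drive values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true discrete system, identified from its zero-input response
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
traj = [np.array([1.0, -1.0])]
for _ in range(30):
    traj.append(A_true @ traj[-1])

N = 2                                    # neurons = states; no external input
W = 0.1 * rng.standard_normal((N, N))    # weight matrix playing the role of A
eta = 0.005                              # learning rate
errs = []

for epoch in range(600):
    y = traj[0].copy()                   # network starts at the true x(0)
    pi = np.zeros((N, N, N))             # pi[j, m, l] = dy_j / dw_ml
    total = 0.0
    for k in range(30):
        # sensitivity recursion (phi' = 1):
        # pi(k+1)[j,m,l] = sum_i W[j,i] pi(k)[i,m,l] + delta_{jm} u_l(k)
        pi = np.einsum('ji,iml->jml', W, pi)
        pi[np.arange(N), np.arange(N), :] += y
        y = W @ y                        # network output at time k+1
        e = traj[k + 1] - y              # error against the true next state
        # weight change: dW[m,l] = eta * sum_j e_j * pi[j,m,l]
        W = W + eta * np.einsum('j,jml->ml', e, pi)
        total += 0.5 * float(e @ e)
    errs.append(total)
```

With these hypothetical settings the per-epoch cost decreases as [W] approaches the true system matrix, which is the identification behavior the chapter relies on.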

As illustrated in Equations (6)-(7), for the purpose of estimating only the transformed system matrix [Â], the zero input response is used. Using the LMI optimization technique, which was illustrated in Section 2.2, the permutation matrix [P] is then determined. Hence, a complete system transformation, as shown in Equations (9)-(10), is achieved. For the model order reduction, the system in Equations (9)-(10) can be written in partitioned form:

| x̂_r(k+1) |   | A_r  A_c | | x̂_r(k) |   | B_r |
|          | = |          | |        | + |     | u(k)     (32)
| x̂_o(k+1) |   | 0    A_o | | x̂_o(k) |   | B_o |

This transformation enables us to decouple the original system into retained (r) and omitted (o) eigenvalues. The retained eigenvalues are the dominant eigenvalues that produce the slow dynamics, and the omitted eigenvalues are the non-dominant eigenvalues that produce the fast dynamics. Equation (32) may then be reduced using the singular perturbation technique described earlier.

Examples for the dynamic system order reduction using neural identification
The following subsections present the implementation of the newly proposed method of system modeling using a supervised ANN, with and without using LMI, together with model order reduction, which can be directly utilized for the robust control of dynamic systems. The presented simulations were tested on a PC platform with hardware specifications of an Intel Pentium 4 CPU at 2.40 GHz and 504 MB of RAM, and software specifications of the MS Windows XP 2002 OS and the Matlab 6.5 simulator.

Model reduction using neural-based state transformation and LMI-based complete system transformation
The following example illustrates the idea of dynamic system model order reduction using LMI, with a comparison to model order reduction without using LMI. Let us consider the system of a high-performance tape transport, which is illustrated in Figure 5. As seen in Figure 5, the system is designed with a small capstan to pull the tape past the read/write heads, with the take-up reels turned by DC motors [10]. In static equilibrium, the tape tension equals the vacuum force (T_o = F) and the torque from the motor equals the torque on the capstan, where T_o is the tape tension at the read/write head at equilibrium, F is the constant force (i.e., the tape tension for the vacuum column), K is the motor torque constant, i_o is the equilibrium motor current, and r_1 is the radius of the capstan take-up wheel. The system variables are defined as deviations from this equilibrium, and the system equations of motion are written in terms of the following quantities: D is the damping in the tape-stretch motion, e is the applied input voltage (V), i is the current into the capstan motor, J_1 is the combined inertia of the wheel and take-up motor, J_2 is the inertia of the idler, K_1,2 is the spring constant in the tape-stretch motion, K_e is the electric constant of the motor, K_t is the torque constant of the motor, L is the armature inductance, R is the armature resistance, r_1 is the radius of the take-up wheel, r_2 is the radius of the tape on the idler, T is the tape tension at the read/write head, x_3 is the position of the tape at the head, x_3' is the velocity of the tape at the head, β_1 is the viscous friction at the take-up wheel, β_2 is the viscous friction at the wheel, θ_1 is the angular displacement of the capstan, θ_2 is the tachometer shaft angle, ω_1 is the speed of the drive wheel (θ_1'), and ω_2 is the output speed measured by the tachometer output (θ_2').
The state space form is derived from the system equations, where there is one input, which is the applied voltage, three outputs which are (1) tape position at the head, (2) tape tension, and (3) tape position at the wheel, and five states which are (1) tape position at the air bearing, (2) drive wheel speed, (3) tape position at the wheel, (4) tachometer output speed, and (5) capstan motor speed.The following sub-sections will present the simulation results for the investigation of different system cases using transformations with and without utilizing the LMI optimization technique.

System transformation using neural identification without utilizing linear matrix inequality
This sub-section presents simulation results for system transformation using ANN-based identification and without using LMI.
Case #1. Let us consider the following case of the tape transport. The five eigenvalues are {-10.5772, -9.999, -0.9814, -0.5962 ± j0.8702}, where two eigenvalues are complex and three are real; since (1) not all of the eigenvalues are complex and (2) the existing real eigenvalues produce the fast dynamics that we need to eliminate, model order reduction can be applied. As can be seen, two real eigenvalues produce fast dynamics {-10.5772, -9.999} and one real eigenvalue produces slow dynamics {-0.9814}. In order to obtain the reduced model, the reduction based on the identification of the input matrix [B] and the transformed system matrix [Â] was performed. This identification is achieved utilizing the recurrent ANN. The system was discretized with a sampling time Ts = 0.1 sec., a step input with learning time Tl = 300 sec. was used, and the ANN was trained on the input/output data with a learning rate η = 0.005 and with the initial weights [W] given as:

      | -0.0059  -0.0360   0.0003  -0.0204  -0.0307   0.0499 |
      | -0.0283   0.0243   0.0445  -0.0302  -0.0257  -0.0482 |
[W] = |  0.0359   0.0222   0.0309   0.0294  -0.0405   0.0088 |
      | -0.0058   0.0212  -0.0225  -0.0273   0.0079   0.0152 |
      |  0.0295  -0.0235  -0.0474  -0.0373  -0.0158  -0.016  |

As observed, all of the system eigenvalues have been preserved in the transformed model, with a small difference due to discretization. Using the singular perturbation technique, the following reduced 3rd order model is obtained:

           | -0.5967   0.8701  -0.1041 |            | 0.1021 |
x̂_r'(t) =  | -0.8701  -0.5967   0.8034 | x̂_r(t)  +  | 0.0652 | u(t)
           |  0        0       -0.9809 |            | 0.0860 |

It is also observed that the reduced order model has preserved the eigenvalues {-0.9809, -0.5967 ± j0.8701}, which are a subset of the original system, while the reduced order model obtained using singular perturbation without system transformation yields different eigenvalues {-0.8283, -0.5980 ± j0.9304}. Evaluations of the reduced order models (transformed and non-transformed) were obtained by simulating both systems for a step input. Simulation results are shown in Figure 6. Based on Figure 6, it is seen that the non-transformed reduced model provides a response that is better than the transformed reduced model. The cause of this is that the transformation at this point is performed only for the [A] and [B] system matrices, leaving the [C] matrix unchanged. Therefore, complete system transformation using LMI (for {[Â], [B̂], [Ĉ], [D̂]}) is considered in sub-section 4.1.2, where the LMI-based transformation will produce better reduction-based responses than both the non-transformed and the transformed-without-LMI models.
Case #2.Consider now the following case: The five eigenvalues are {-9.9973,-2.0002, -0.3696, -0.6912 ± j1.3082}, where two eigenvalues are complex, three are real, and only one eigenvalue is considered to produce fast dynamics {-9.9973}.Using the discretized model with T s = 0.071 sec.for a step input with learning time T l = 70 sec., and through training the ANN for the input/output data with = 3.5 x 10 -5 and initial weight matrix given by: www.intechopen.com

      | -0.0195   0.0194  -0.0130   0.0071  -0.0048   0.0029 |
      | -0.0189   0.0055   0.0196  -0.0025  -0.0053   0.0120 |
[W] = | -0.0091   0.0168   0.0031   0.0031   0.0134  -0.0038 |
      | -0.0061   0.0068   0.0193   0.0145   0.0038  -0.0139 |
      | -0.0150   0.0204  -0.0073   0.0180  -0.0085  -0.0161 |

By applying the singular perturbation reduction technique, a reduced 4th order model is obtained in which all of the eigenvalues {-2.0002, -0.3696, -0.6912 ± j1.3081} are preserved as a subset of the original system. This reduced 4th order model was simulated for a step input and then compared to both the reduced model without transformation and the original system response. Simulation results are shown in Figure 7, where again the non-transformed reduced order model provides a response that is better than the transformed reduced model. The reason for this follows closely the explanation provided for the previous case.
In Figure 8, it is also seen that the response of the non-transformed reduced model is better than that of the transformed reduced model, which is again caused by leaving the output [C] matrix without transformation.

LMI-based state transformation using neural identification
As observed in the previous subsection, the system transformation without using the LMI optimization method, whose objective was to preserve the system eigenvalues in the reduced model, did not provide an acceptable response as compared with either the reduced non-transformed or the original responses.

Based on the identified transformed matrix [Â], using the LMI technique, the permutation matrix [P] was computed and then used for the complete system transformation. Therefore, the transformed system {[Â], [B̂], [Ĉ], [D̂]} was obtained, and the reduction produced the following transformed reduced-order output equation:

       | -0.0019   0       -0.0139 |            | -0.0025 |
y(t) = | -0.0024  -0.0009  -0.0088 | x̂_r(t)  +  | -0.0025 | u(t)
       | -0.0001   0.0004  -0.0021 |            |  0.0006 |

where the objective of eigenvalue preservation is clearly achieved. Investigating the performance of this new LMI-based reduced order model shows that the new completely transformed system is better than all of the previous reduced models (transformed and non-transformed). This is clearly shown in Figure 9, where the 3rd order reduced model, based on the LMI optimization transformation, provided a response that is almost the same as the 5th order original system response.
Similarly, the transformed [Â] was obtained and used to calculate the permutation matrix [P]. The complete system transformation was then performed, and the reduction technique produced the following 3rd order reduced model:

           | -0.6910   1.3088  -3.8578 |            | -0.7621 |
x̂_r'(t) =  | -1.3088  -0.6910  -1.5719 | x̂_r(t)  +  | -0.1118 | u(t)
           |  0        0       -0.3697 |            |  0.4466 |

       |  0.0061   0.0261   0.0111 |            | 0.0015 |
y(t) = | -0.0459   0.0187  -0.0946 | x̂_r(t)  +  | 0.0015 | u(t)
       |  0.0117   0.0155  -0.0080 |            | 0.0014 |

with the eigenvalues preserved as desired. Simulating this reduced order model for a step input, as done previously, provided the response shown in Figure 10. Here, the LMI-reduction-based technique has provided a response that is better than both the reduced non-transformed and the non-LMI-reduced transformed responses, and it is almost identical to the original system response.
For the next case (case #3), the LMI-based transformation and then the order reduction were performed. Simulation results of the reduced order models and the original system are shown in Figure 11. Again, the response of the reduced order model using the complete LMI-based transformation is the best as compared to the other reduction techniques.

The application of closed-loop feedback control on the reduced models
Utilizing the LMI-based reduced system models that were presented in the previous section, various control techniques, which can be utilized for the robust control of dynamic systems, are considered in this section to achieve the desired system performance. These control methods include (a) PID control, (b) state feedback control using (1) pole placement for the desired eigenvalue locations and (2) linear quadratic regulator (LQR) optimal control, and (c) output feedback control.

Proportional-Integral-Derivative (PID) control
A PID controller is a generic control-loop feedback mechanism which is widely used in industrial control systems [7,10,24]. It attempts to correct the error between a measured process variable (output) and a desired set-point (input) by calculating and then providing a corrective signal that adjusts the process accordingly, as shown in Figure 12. In the control design process, the three parameters of the PID controller {Kp, Ki, Kd} have to be calculated for specific process requirements such as system overshoot and settling time. Typically, once they are calculated and implemented, the response of the system is not exactly as desired, and further tuning of these parameters is needed to provide the desired control action. Focusing on one output of the tape-drive machine, the PID controller using the reduced order model for the desired output was investigated. Hence, the identified reduced 3rd order model is now considered for the output of the tape position at the head. Simulating the new PID-controlled system for a step input provided the results shown in Figure 13, where the settling time is almost 1.5 sec., while without the controller it was greater than 6 sec. As also observed, the overshoot has decreased considerably after using the PID controller.
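For intuition, a discrete-time PID loop can be sketched around a simple plant; the first-order plant and the gains below are hypothetical illustrations, not the tape-drive model or its tuned parameters:

```python
import numpy as np

def simulate_pid(Kp, Ki, Kd, Ts=0.01, T_end=10.0):
    """Discrete PID loop around a hypothetical first-order plant x' = -x + u,
    tracking a unit step reference."""
    x, integ, e_prev = 0.0, 0.0, 0.0
    ys = []
    for _ in range(int(T_end / Ts)):
        e = 1.0 - x                        # error against the step set-point
        integ += e * Ts                    # integral term accumulator
        u = Kp * e + Ki * integ + Kd * (e - e_prev) / Ts
        e_prev = e
        x += Ts * (-x + u)                 # forward-Euler plant update
        ys.append(x)
    return np.array(ys)

y = simulate_pid(Kp=5.0, Ki=3.0, Kd=0.1)
```

The integral term drives the steady-state error to zero, while Kp and Kd shape the transient, which is the same trade-off being tuned for the tape-drive outputs.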
On the other hand, the other system outputs can be PID-controlled by cascading the current process PID with new tuning-based PIDs for each output. For the PID-controlled output of the tachometer shaft angle, the controlling scheme would be as shown in Figure 14. As seen in Figure 14, the output of interest (i.e., the 2nd output) is controlled as desired using the PID controller. However, this will affect the performance of the other outputs, and therefore a further PID-based tuning operation must be applied. As shown in Figure 14, the tuning process is accomplished using G_1T and G_3T. For example, for the 1st output, the tuning transfer function G_1T is expressed in terms of the PID transfer function and Y_2, where Y_2 is the Laplace transform of the 2nd output. Similarly, G_3T can be obtained.

State feedback control
In this section, we investigate the state feedback control techniques of pole placement and LQR optimal control for the enhancement of the system performance.

Pole placement for the state feedback control
For the reduced order model in the system of Equations (37)-(38), a simple pole placement-based state feedback controller can be designed. For example, assuming that a controller is needed to provide the system with enhanced performance by relocating the eigenvalues, the objective can be achieved using the control input given by:

u = r - [K]x

where [K] is the state feedback gain designed based on the desired system eigenvalues. A state feedback control for pole placement can be illustrated by the block diagram shown in Figure 15, with the overall configuration illustrated in Figure 16. The overall closed-loop system model may then be written as:

xdot = [A_cl]x + [B]r, where [A_cl] = [A] - [B][K]

such that the closed-loop system matrix [A_cl] will provide the new desired system eigenvalues.
For example, for the system of case #3, the state feedback was used to re-assign the eigenvalues to {-1.89, -1.5, -1}. The state feedback control gain was then found to be K = [-1.2098 0.3507 0.0184], which placed the eigenvalues as desired and enhanced the system performance as shown in Figure 17.
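As a sketch of the pole-placement computation, for a single-input system already in controllable canonical form the gain follows directly from matching characteristic-polynomial coefficients; the 2nd order system below is illustrative, not the tape-drive model.

```python
# For A = [[0,1],[-a0,-a1]] and B = [[0],[1]], the closed-loop matrix A - B*K
# has characteristic polynomial s^2 + (a1 + k1)s + (a0 + k0), so K is just the
# difference between desired and open-loop coefficients.

def poly_from_roots(roots):
    """Expand prod(s - r) into monic coefficients, highest power first."""
    coeffs = [1.0]
    for r in roots:
        new = [0.0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += c          # multiply existing term by s
            new[i + 1] -= r * c  # multiply existing term by -r
        coeffs = new
    return coeffs

def place_canonical(a_coeffs, desired_poles):
    """a_coeffs[i] is the s^i coefficient of the open-loop characteristic
    polynomial; returns K so that u = -K x assigns the desired poles."""
    n = len(a_coeffs)
    d = poly_from_roots(desired_poles)  # [1, d_{n-1}, ..., d_0]
    return [d[n - i] - a_coeffs[i] for i in range(n)]

# Open loop: s^2 + 3s + 2 (poles -1, -2); move the poles to -4 and -5.
K = place_canonical([2.0, 3.0], [-4.0, -5.0])
```

For a general (non-canonical) state-space model, the same coefficient-matching idea is applied after a similarity transformation, which is what packaged pole-placement routines do internally.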

Linear-Quadratic Regulator (LQR) optimal control for the state feedback control
Another method for designing a state feedback control for system performance enhancement may be achieved based on minimizing the cost function given by [10]:

J = ∫ (x^T [Q] x + u^T [R] u) dt

which is defined for the system xdot = [A]x + [B]u, where [Q] and [R] are weight matrices for the states and input commands. This is known as the LQR problem, which has received special attention due to the fact that it can be solved analytically and that the resulting optimal controller is expressed in an easy-to-implement state feedback form [7,10]. The feedback control law that minimizes the cost is given by:

u = -[K]x

where [K] = [R]^-1 [B]^T [q] and [q] is found by solving the algebraic Riccati equation described by:

[A]^T [q] + [q][A] - [q][B][R]^-1 [B]^T [q] + [Q] = 0

where [Q] is the state weighting matrix and [R] is the input weighting matrix. A direct solution for the optimal control gain may be obtained using the MATLAB statement K = lqr(A, B, Q, R), where in our example R = 1, and the [Q] matrix was found using the output [C] matrix as [Q] = [C]^T [C]. The LQR optimization technique is applied to the reduced 3rd order model in case #3 of subsection 4.1.2 for system behavior enhancement. The state feedback optimal control gain was found to be K = [-0.0967 -0.0192 0.0027], which, when simulating the complete system for a step input, provided the normalized output response (with a normalization factor of 1.934) shown in Figure 18. As seen in Figure 18, the optimal state feedback control has enhanced the system performance, which is basically based on selecting new proper locations for the system eigenvalues.
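In the scalar case the Riccati equation above reduces to a quadratic in q, so the LQR recipe can be verified by hand; the system below is illustrative, not the reduced tape-drive model.

```python
import math

def lqr_scalar(a, b, Q, R):
    """Solve the scalar Riccati equation 2*a*q - (b*b/R)*q*q + Q = 0 for its
    positive root, then return the optimal gain K = q*b/R (so u = -K*x)."""
    q = R * (a + math.sqrt(a * a + (b * b) * Q / R)) / (b * b)
    return q * b / R

# Illustrative unstable scalar plant xdot = x + u with Q = R = 1.
K = lqr_scalar(a=1.0, b=1.0, Q=1.0, R=1.0)
closed_loop_pole = 1.0 - 1.0 * K   # a - b*K; negative means stable
```

For this example K = 1 + sqrt(2), moving the open-loop pole from +1 to -sqrt(2); matrix-valued problems require a numerical Riccati solver, which is what the MATLAB lqr statement provides.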

Output feedback control
The output feedback control is another way of controlling the system for a certain desired system performance, as shown in Figure 19, where the feedback is taken directly from the output. Considering the reduced 3rd order model in case #3 of subsection 4.1.2 for system behavior enhancement using the output feedback control, the feedback control gain is found to be K = [0.5799 -2.6276 -11]. The normalized controlled system step response is shown in Figure 21, where one can observe that the system behavior is enhanced as desired.
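With output feedback u = -K y and y = [C]x, the closed-loop matrix becomes [A] - [B]K[C]. The sketch below simulates a hypothetical two-state plant (not the chapter's model) to show an unstable open loop being stabilized through the measured output alone.

```python
def simulate_output_feedback(A, B, C, K, x0, dt=0.001, t_end=10.0):
    """Euler simulation of xdot = (A - B*K*C) x for single-input/single-output
    output feedback u = -K*y with y = C x; returns the final state."""
    x = list(x0)
    n = len(x)
    for _ in range(int(t_end / dt)):
        y = sum(C[j] * x[j] for j in range(n))   # measured output
        u = -K * y                               # feedback taken from the output
        xdot = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * u
                for i in range(n)]
        x = [x[i] + dt * xdot[i] for i in range(n)]
    return x

# Hypothetical plant with an unstable open-loop pole at s = +1.
A = [[0.0, 1.0], [2.0, -1.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
x_final = simulate_output_feedback(A, B, C, K=5.0, x0=[1.0, 0.0])
```

Here the closed-loop matrix is [[0, 1], [-3, -1]], whose eigenvalues have negative real parts, so the state decays toward zero; unlike full state feedback, only gains of the form [A] - [B]K[C] are reachable, which limits where the eigenvalues can be placed.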

Conclusions and future work
In control engineering, robust control is an area that explicitly deals with uncertainty in its approach to the design of the system controller. The methods of robust control are designed to operate properly as long as disturbances or uncertain parameters are within a compact set, where robust methods aim to accomplish robust performance and/or stability in the presence of bounded modeling errors. A robust control policy is static - in contrast to the adaptive (dynamic) control policy - where, rather than adapting to measurements of variations, the system controller is designed to function assuming that certain variables will be unknown but, for example, bounded. This research introduces a new method of hierarchical intelligent robust control for dynamic systems. In order to implement this control method, the order of the dynamic system was reduced. This reduction was performed by the implementation of a recurrent supervised neural network to identify certain elements [A_c] of the transformed system matrix [Â], while the other elements [A_r] and [A_o] are set based on the system eigenvalues such that [A_r] contains the dominant eigenvalues (i.e., slow dynamics) and [A_o] contains the non-dominant eigenvalues (i.e., fast dynamics). To obtain the transformed matrix [Â], the zero-input response was used in order to obtain output data related to the state dynamics, based only on the system matrix [A]. After the transformed system matrix was obtained, the optimization algorithm of linear matrix inequality was utilized to determine the permutation matrix [P], which is required to complete the system transformation matrices {[B̂], [Ĉ], [D̂]}. The reduction process was then applied using the singular perturbation method, which operates on neglecting the faster-dynamics eigenvalues and leaving the dominant slow-dynamics eigenvalues to control the system. The comparison simulation results show clearly that modeling and control of the dynamic system using LMI is superior.

Fig. 3. The utilized 2nd order recurrent neural network architecture, where the identified matrices are given by {[Â_d], [B_d]}.

Fig. 4. Block diagram for the state-space system representation.

[Â_d] such that the system output will be the same for [A_d] and [Â_d]. Hence, simulating this system provides the state response corresponding to the initial values with only the [A_d] matrix being used. Once the input-output data is obtained, transforming the [A_d] matrix is achieved using the ANN training, as will be explained in Section 3. The identified transformed [Â_d] matrix is then converted back to the continuous form, which in general (with all real eigenvalues) takes the following form:

The ANN identifies the [Â_d] and [B_d] matrices for a given input signal as illustrated in Figure 3. The ANN identification would lead to the [Â_d] and [B_d] transformations which (in the case of all real eigenvalues) construct the weight matrix [W] as follows:

For the identification of [Â_d], the training is based on the zero-input response. Once the training is completed, the obtained weight matrix [W] will be the discrete identified transformed system matrix [Â_d]. Transforming the identified system back to the continuous form yields the desired continuous transformed system matrix [Â].
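The zero-input identification step can be sketched with plain least-mean-squares weight updates standing in for the chapter's recurrent network training; the 2x2 discrete matrix [A_d] below is illustrative.

```python
# Fit a weight matrix W so that x[k+1] ≈ W x[k] on a zero-input state
# trajectory generated by a known discrete matrix Ad; with noise-free linear
# data, W converges to Ad itself.

def identify_from_zero_input(Ad, x0, steps=60, epochs=2000, lr=0.05):
    n = len(Ad)
    xs = [list(x0)]                      # zero-input trajectory x[k+1] = Ad x[k]
    for _ in range(steps):
        prev = xs[-1]
        xs.append([sum(Ad[i][j] * prev[j] for j in range(n)) for i in range(n)])
    W = [[0.0] * n for _ in range(n)]
    for _ in range(epochs):
        for k in range(steps):
            pred = [sum(W[i][j] * xs[k][j] for j in range(n)) for i in range(n)]
            err = [xs[k + 1][i] - pred[i] for i in range(n)]
            for i in range(n):
                for j in range(n):
                    W[i][j] += lr * err[i] * xs[k][j]   # LMS weight update
    return W

Ad = [[0.9, 0.1], [0.0, 0.8]]
W = identify_from_zero_input(Ad, x0=[1.0, 0.5])
```

The initial state must excite both modes of [A_d] (here it does); a trajectory lying along a single eigenvector would leave [W] under-determined, which is the usual persistency-of-excitation caveat in identification.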

Fig. 5. The used tape drive system: (a) a front view of a typical tape drive mechanism, and (b) a schematic control model.

Fig. 8. Reduced 3rd order models (…. transformed, -.-.-.- non-transformed) output responses to a step input along with the non-reduced (____ original) 5th order system output response, Case #1.

For the example of case #1 in subsection 4.1.1, the ANN identification is used now to identify only the transformed [Â_d] matrix. Discretizing the system with T_s = 0.1 sec., using a step input with learning time T_l = 15 sec., and training the ANN for the input/output data with a learning rate of 0.001 and initial weights for the [Â_d] matrix as follows: the transformed system matrices were then obtained. Performing model order reduction provided the following reduced 3rd order model:

Fig. 15. Block diagram of a state feedback control with {[A_or], [B_or], [C_or], [D_or]} overall reduced order system matrices.

Fig. 16. Block diagram of the overall state feedback control for pole placement.

Fig. 17. Reduced 3rd order state feedback control (for pole placement) output step response (-.-.-.-) compared with the original (____) full order system output step response.

Fig. 18. Reduced 3rd order LQR state feedback control output step response (-.-.-.-) compared with the original (____) full order system output step response.

Fig. 19. Block diagram of an output feedback control.

the overall block diagram as seen in Figure 20.

Fig. 20. An overall block diagram of an output feedback control.