A Simulative Study on Active Disturbance Rejection Control (ADRC) as a Control Tool for Practitioners

As an alternative to both classical PID-type and modern model-based approaches to solving control problems, active disturbance rejection control (ADRC) has gained significant traction in recent years. With its simple tuning method and robustness against process parameter variations, it puts itself forward as a valuable addition to the toolbox of control engineering practitioners. This article aims at providing a single-source introduction and reference to linear ADRC with this audience in mind. A simulative study is carried out using generic first- and second-order plants to enable a quick visual assessment of the abilities of ADRC. Finally, a modified form of the discrete-time case is introduced to speed up real-time implementations as necessary in applications with high dynamic requirements.


Introduction
Active disturbance rejection control (ADRC) [9,8,6,7] has emerged as an alternative that combines the easy applicability known from classical PID-type control methods with the power of modern model-based approaches. The foundation of ADRC is an observer that jointly treats actual disturbances and modeling uncertainties, such that only a very coarse process model is necessary to design a control loop. This makes ADRC an attractive choice for practitioners and promises good robustness against process variations. Present applications range from power electronics [15], motion control [14] and superconducting radio frequency cavities [16] to tension and temperature control [17].
While it can be shown (cf. Section 2.3) that the linear case of ADRC is equivalent to a special case of classical state space control with disturbance estimation and compensation based on the internal model principle [4], there is an important difference to model-based approaches such as model predictive control [12] or embedded model control [2]: for the latter, an explicit model of the process to be controlled is necessary. ADRC, on the other hand, only assumes a certain canonical model regardless of the actual process dynamics and leaves all modeling errors to be handled as a disturbance. Of course, this may come at the price of performance losses compared to a controller built around a precise process model or a model of the reference trajectory. Therefore, while employing the same mathematical tools, ADRC's unified view and treatment of disturbances can be seen as a certain departure from the model-based control school [13], shifting the focus back from modeling to control. This may be key to its appeal for practitioners.
The remainder of this article is organized as follows: After providing a step-by-step introduction to the linear case of ADRC in the following section, a series of simulative experiments is carried out to demonstrate the abilities of ADRC when being faced with varying process parameters or structural uncertainties and to visually provide insights into the effect of its tuning parameters. For the discrete-time case, which is introduced afterwards, an optimized formulation is presented, which enables a controller implementation with very low input-output delay.

Linear Active Disturbance Rejection Control
The aim of this section is to present the linear case of ADRC in a self-contained manner, following [6,3]. While most articles introduce ADRC using a second-order process, here the first-order case will be considered first and explicitly, due to its practical importance: there are many systems (albeit technically nonlinear and of higher order) that exhibit a dominating first-order-like behavior, at least around certain operating points. The second-order case will be developed subsequently with a similar use case in mind. Additionally, it will be shown in Section 2.3 that linear ADRC can be seen as a special case from the perspective of classical state space control with disturbance compensation based on the internal model principle.

First-Order ADRC
Consider a simple first-order process, P(s), with a DC gain, K, and a time constant, T:

$$P(s) = \frac{y(s)}{u(s)} = \frac{K}{Ts + 1} \quad \Longleftrightarrow \quad T \cdot \dot{y}(t) + y(t) = K \cdot u(t)$$

We add an input disturbance, d(t), to the process, abbreviate $b = \frac{K}{T}$ and rearrange:

$$\dot{y}(t) = -\frac{1}{T} \cdot y(t) + b \cdot d(t) + b \cdot u(t) \quad (1)$$

As our last modeling step, we substitute $b = b_0 + \Delta b$, where $b_0$ shall represent the known part of $b = \frac{K}{T}$ and $\Delta b$ an (unknown) modeling error, and finally obtain Equation (2):

$$\dot{y}(t) = \underbrace{-\frac{1}{T} \cdot y(t) + b \cdot d(t) + \Delta b \cdot u(t)}_{f(t)} + b_0 \cdot u(t) = f(t) + b_0 \cdot u(t) \quad (2)$$

We will see soon that all we need to know about our first-order process in order to design an ADRC is $b_0 \approx b$, i.e., an approximate value of $\frac{K}{T}$. Modeling errors or varying process parameters are represented by $\Delta b$ and will be handled internally.
By combining $-\frac{1}{T} \cdot y(t)$, the disturbance $d(t)$, and the unknown part $\Delta b \cdot u(t)$ into a so-called generalized disturbance, $f(t)$, the model of our process has changed from a first-order low-pass type to an integrator. The fundamental idea of ADRC is to implement an extended state observer (ESO) that provides an estimate, $\hat{f}(t)$, such that we can compensate the impact of $f(t)$ on our process (model) by means of disturbance rejection. All that remains to be handled by the actual controller is then a process with approximately integrating behavior, which can easily be done, e.g., by means of a simple proportional controller.
In order to derive the estimator, a state space description of the disturbed process in Equation (2) is necessary:

$$\dot{\boldsymbol{x}}(t) = \boldsymbol{A} \boldsymbol{x}(t) + \boldsymbol{B} u(t) + \boldsymbol{E} \dot{f}(t), \quad y(t) = \boldsymbol{C} \boldsymbol{x}(t) \quad (3)$$

$$\text{with} \quad \boldsymbol{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \begin{pmatrix} y(t) \\ f(t) \end{pmatrix}, \quad \boldsymbol{A} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \boldsymbol{B} = \begin{pmatrix} b_0 \\ 0 \end{pmatrix}, \quad \boldsymbol{C} = \begin{pmatrix} 1 & 0 \end{pmatrix}, \quad \boldsymbol{E} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

Since the "virtual" input, f(t), cannot be measured, a state observer for this kind of process can, of course, only be built using the input, u(t), and output, y(t), of the process. An estimated state, $\hat{x}_2(t)$, will, however, provide an approximate value of f(t), i.e., $\hat{f}(t)$, if the actual generalized disturbance, f(t), can be considered piecewise constant. The equations of the extended state observer (integrator process extended by a generalized disturbance) are given in Equation (4):

$$\dot{\hat{\boldsymbol{x}}}(t) = \boldsymbol{A} \hat{\boldsymbol{x}}(t) + \boldsymbol{B} u(t) + \boldsymbol{L} \cdot \left( y(t) - \hat{y}(t) \right) \quad \text{with} \quad \boldsymbol{L} = \begin{pmatrix} l_1 \\ l_2 \end{pmatrix} \quad (4)$$

Note that for linear ADRC, a Luenberger observer is used, while the original form of ADRC employed a nonlinear observer [7].
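The observer in Equation (4) can be integrated numerically in a few lines. The following sketch uses explicit Euler integration; all names, the value $b_0 = 1$ and the gains (placing both observer poles at $s^{ESO} = -40$, cf. Equation (8)) are choices made here for illustration, not prescribed by the text:

```python
import numpy as np

# Continuous-time ESO for the first-order case, integrated with explicit
# Euler; a sketch only, with b0 = 1 and observer gains placing both poles
# at s_ESO = -40 (cf. Equation (8)). All names are illustrative choices.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([1.0, 0.0])          # (b0, 0)^T with b0 = 1
C = np.array([1.0, 0.0])
L = np.array([80.0, 1600.0])      # l1 = -2*s_ESO, l2 = s_ESO^2

def eso_step(xhat, u, y, dt):
    """One Euler step of x_hat' = A x_hat + B u + L (y - y_hat)."""
    return xhat + dt * (A @ xhat + B * u + L * (y - C @ xhat))
```

If the true generalized disturbance is (piecewise) constant, $\hat{x}_2$ converges to it, which is exactly the working assumption stated above.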
One can now use the estimated variables, $\hat{x}_1(t) = \hat{y}(t)$ and $\hat{x}_2(t) = \hat{f}(t)$, to implement the disturbance rejection and the actual controller.
The resulting structure of the control loop is presented in Figure 1. Since $K_P$ acts on $\hat{y}(t)$ rather than on the actual output, y(t), we actually have an estimation-based state feedback controller, but the resemblance to a classical proportional controller will be striking to practitioners. In Equation (5), $u_0(t)$ represents the output of a linear proportional controller:

$$u(t) = \frac{u_0(t) - \hat{f}(t)}{b_0} \quad \text{with} \quad u_0(t) = K_P \cdot \left( r(t) - \hat{y}(t) \right) \quad (5)$$
Figure 1. Control loop structure with ADRC for a first-order process.
The remainder of the control law, u(t), is chosen such that the linear controller acts on a normalized integrator process if $\hat{f}(t) \approx f(t)$ holds. The effect can be seen by inserting Equation (5) into Equation (2):

$$\dot{y}(t) = f(t) + b_0 \cdot \frac{u_0(t) - \hat{f}(t)}{b_0} = f(t) - \hat{f}(t) + u_0(t) \approx u_0(t)$$

If $\hat{y}(t) \approx y(t)$ holds as well, we obtain a first-order closed loop behavior with a pole, $s^{CL} = -K_P$:

$$\frac{y(s)}{r(s)} = \frac{K_P}{s + K_P}$$

If the state estimator and disturbance rejection work properly, one therefore has to design the proportional controller only one single time to obtain the same closed loop behavior, regardless of the parameters of the actual process. For example, one can calculate $K_P$ from a desired first-order system with a 2% settling time:

$$K_P = -s^{CL} \approx \frac{4}{T_{settle}} \quad (6)$$

In order for the observer to work properly, its parameters, $l_1$ and $l_2$, in Equation (4) still have to be determined. Since the observer dynamics must be fast enough, the observer poles, $s^{ESO}_{1/2}$, must be placed to the left of the closed loop pole, $s^{CL}$. A simple rule of thumb for both poles is:

$$s^{ESO}_{1/2} = s^{ESO} \approx (3 \ldots 10) \cdot s^{CL} \quad (7)$$

Placing all observer poles at one common location is also known as "bandwidth parameterization" [6]. Since the matrix $(\boldsymbol{A} - \boldsymbol{L}\boldsymbol{C})$ determines the error dynamics of the observer, we can compute the necessary observer gains for the common pole location, $s^{ESO}$, from its characteristic polynomial:

$$\det\left( s\boldsymbol{I} - (\boldsymbol{A} - \boldsymbol{L}\boldsymbol{C}) \right) = s^2 + l_1 \cdot s + l_2 \overset{!}{=} \left( s - s^{ESO} \right)^2$$

From this equation, the solutions for $l_1$ and $l_2$ can immediately be read off:

$$l_1 = -2 \cdot s^{ESO}, \quad l_2 = \left( s^{ESO} \right)^2 \quad (8)$$

To summarize, four steps are necessary in order to implement a linear ADRC for a first-order system:

1. Modeling: For a process with (dominating) first-order behavior, $P(s) = \frac{K}{Ts + 1}$, all that needs to be known is an estimate $b_0 \approx \frac{K}{T}$.
2. Control structure: Implement a proportional controller with disturbance rejection and an extended state observer, as given in Equations (4) and (5).
3. Closed loop dynamics: Choose $K_P$, e.g., according to a desired settling time via Equation (6).
4. Observer dynamics: Place the observer poles to the left of the closed loop pole via Equations (7) and (8).

It should be noted that the same control structure can be applied to a first-order integrating process, $P(s) = \frac{K_I}{s}$. With an input disturbance, d(t), and a substitution, $K_I = b = b_0 + \Delta b$, with $\Delta b$ representing the unknown part of $K_I$, we can model the process in a manner identical to Equation (2), with all differences hidden in the generalized disturbance, f(t):

$$\dot{y}(t) = \underbrace{b \cdot d(t) + \Delta b \cdot u(t)}_{f(t)} + b_0 \cdot u(t) = f(t) + b_0 \cdot u(t) \quad (9)$$

Therefore, the design of an ADRC for a first-order integrating process can follow the same four design steps given above, with the only distinction that $b_0 \approx K_I$ must be used in step 1.
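The four design steps can be condensed into a few lines of code. The following Python sketch (the function name and the default pole-placement factor are choices made here, not taken from the text) computes the complete tuning from $T_{settle}$ and $b_0$ alone:

```python
# Tuning of a first-order linear ADRC following the four design steps.
# A sketch; function name and default pole factor are illustrative choices.
def design_adrc_1(b0, T_settle, k_eso=10.0):
    """Return (K_P, l1, l2); b0 is only needed later in the control law
    u = (u0 - f_hat) / b0, not for computing the gains themselves."""
    K_P = 4.0 / T_settle        # step 3: s_CL = -K_P, 2% settling time, Eq. (6)
    s_eso = k_eso * (-K_P)      # step 4: common observer pole, Eq. (7)
    l1 = -2.0 * s_eso           # Eq. (8)
    l2 = s_eso ** 2
    return K_P, l1, l2

K_P, l1, l2 = design_adrc_1(b0=1.0, T_settle=1.0)   # gives 4.0, 80.0, 1600.0
```

Note how little process knowledge enters: only $b_0$ (used in the control law) and the two design choices $T_{settle}$ and the observer pole factor.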

Second-Order ADRC
Following the previous section, we now consider a second-order process, P(s), with a DC gain, K, a damping factor, D, and a time constant, T:

$$P(s) = \frac{y(s)}{u(s)} = \frac{K}{T^2 s^2 + 2DT s + 1} \quad \Longleftrightarrow \quad T^2 \cdot \ddot{y}(t) + 2DT \cdot \dot{y}(t) + y(t) = K \cdot u(t)$$
As in the first-order case, we add an input disturbance, d(t), abbreviate $b = \frac{K}{T^2}$ and split b into a known and an unknown part, $b = b_0 + \Delta b$:

$$\ddot{y}(t) = \underbrace{-\frac{2D}{T} \cdot \dot{y}(t) - \frac{1}{T^2} \cdot y(t) + b \cdot d(t) + \Delta b \cdot u(t)}_{f(t)} + b_0 \cdot u(t) = f(t) + b_0 \cdot u(t) \quad (10)$$

With everything else combined into the generalized disturbance, f(t), all that remains of the process model is a double integrator. The state space representation of the disturbed double integrator is:

$$\dot{\boldsymbol{x}}(t) = \boldsymbol{A} \boldsymbol{x}(t) + \boldsymbol{B} u(t) + \boldsymbol{E} \dot{f}(t), \quad y(t) = \boldsymbol{C} \boldsymbol{x}(t) \quad (11)$$

$$\text{with} \quad \boldsymbol{x}(t) = \begin{pmatrix} y(t) \\ \dot{y}(t) \\ f(t) \end{pmatrix}, \quad \boldsymbol{A} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \quad \boldsymbol{B} = \begin{pmatrix} 0 \\ b_0 \\ 0 \end{pmatrix}, \quad \boldsymbol{C} = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}, \quad \boldsymbol{E} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

In order to employ a control law similar to the first-order case, an extended state observer is needed that provides estimates $\hat{x}_1(t) = \hat{y}(t)$, $\hat{x}_2(t) = \hat{\dot{y}}(t)$ and $\hat{x}_3(t) = \hat{f}(t)$:

$$\dot{\hat{\boldsymbol{x}}}(t) = \boldsymbol{A} \hat{\boldsymbol{x}}(t) + \boldsymbol{B} u(t) + \boldsymbol{L} \cdot \left( y(t) - \hat{y}(t) \right) \quad \text{with} \quad \boldsymbol{L} = \begin{pmatrix} l_1 \\ l_2 \\ l_3 \end{pmatrix} \quad (12)$$

Figure 2. Control loop structure with active disturbance rejection control (ADRC) for a second-order process.
Using the estimated variables, one can implement the disturbance rejection and a linear controller for the remaining double integrator behavior, as shown in Figure 2. A modified PD controller (without the derivative part for the reference value r(t)) will lead to a second-order closed loop behavior with adjustable dynamics. Again, this actually is an estimation-based state feedback controller.
The control law with the modified PD controller is given in Equation (13):

$$u(t) = \frac{u_0(t) - \hat{f}(t)}{b_0} \quad \text{with} \quad u_0(t) = K_P \cdot \left( r(t) - \hat{y}(t) \right) - K_D \cdot \hat{\dot{y}}(t) \quad (13)$$

Provided the estimator delivers good estimates, $\hat{x}_1(t) \approx y(t)$, $\hat{x}_2(t) \approx \dot{y}(t)$ and $\hat{x}_3(t) \approx f(t)$, one obtains after inserting Equation (13) into Equation (10):

$$\ddot{y}(t) = f(t) - \hat{f}(t) + u_0(t) \approx u_0(t)$$

Under ideal conditions, this leads to:

$$\frac{y(s)}{r(s)} = \frac{K_P}{s^2 + K_D \cdot s + K_P}$$

While any second-order dynamics can be set using $K_P$ and $K_D$, one practical approach is to tune the closed loop to a critically damped behavior and a desired 2% settling time, $T_{settle}$, i.e., to choose $K_P$ and $K_D$ such that a negative-real double pole, $s^{CL}_{1/2} = s^{CL}$, results:

$$K_P = \left( s^{CL} \right)^2 \quad \text{and} \quad K_D = -2 \cdot s^{CL} \quad \text{with} \quad s^{CL} \approx -\frac{6}{T_{settle}} \quad (14)$$

Similar to the first-order case, the placement of the observer poles can follow a common rule of thumb:

$$s^{ESO}_{1/2/3} = s^{ESO} \approx (3 \ldots 10) \cdot s^{CL} \quad (15)$$

Once the pole locations are chosen in this manner, the observer gains are computed from the characteristic polynomial of $(\boldsymbol{A} - \boldsymbol{L}\boldsymbol{C})$:

$$\det\left( s\boldsymbol{I} - (\boldsymbol{A} - \boldsymbol{L}\boldsymbol{C}) \right) = s^3 + l_1 \cdot s^2 + l_2 \cdot s + l_3 \overset{!}{=} \left( s - s^{ESO} \right)^3$$

The respective solutions for $l_1$, $l_2$ and $l_3$ are:

$$l_1 = -3 \cdot s^{ESO}, \quad l_2 = 3 \cdot \left( s^{ESO} \right)^2, \quad l_3 = -\left( s^{ESO} \right)^3 \quad (16)$$

To summarize, an ADRC for a second-order system is designed and implemented as follows:

1. Modeling: For a process with (dominating) second-order behavior, $P(s) = \frac{K}{T^2 s^2 + 2DT s + 1}$, one only needs to know an approximate value $b_0 \approx \frac{K}{T^2}$.
2. Control structure: Implement a modified PD controller with disturbance rejection and an extended state observer, as given in Equations (12) and (13).
3. Closed loop dynamics: Choose $K_P$ and $K_D$, e.g., according to a desired settling time as given in Equation (14).
4. Observer dynamics: Place the observer poles to the left of the closed loop poles via Equations (15) and (16).
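As in the first-order case, the design steps reduce to a few lines of code; the following sketch (names chosen here for illustration) computes the critically damped tuning of Equations (14)-(16):

```python
# Tuning of a second-order linear ADRC (critically damped closed loop).
# A sketch; function name and default pole factor are illustrative choices.
def design_adrc_2(T_settle, k_eso=10.0):
    """Return (K_P, K_D, (l1, l2, l3)) per Equations (14)-(16)."""
    s_cl = -6.0 / T_settle            # Equation (14): real double pole
    K_P, K_D = s_cl ** 2, -2.0 * s_cl
    s_eso = k_eso * s_cl              # Equation (15): triple observer pole
    l1 = -3.0 * s_eso                 # Equation (16)
    l2 = 3.0 * s_eso ** 2
    l3 = -s_eso ** 3
    return K_P, K_D, (l1, l2, l3)

K_P, K_D, eso_gains = design_adrc_2(T_settle=5.0)
```

For $T_{settle} = 5$, this yields $s^{CL} = -1.2$, $K_P = 1.44$, $K_D = 2.4$ and (with the factor 10) $s^{ESO} = -12$.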

Relation to Linear State Space Control with Disturbance Estimation and Compensation
Given that linear ADRC only employs tools known from classical linear state space control, how does it compare to existing approaches? In this section, we will demonstrate that linear ADRC can be related to state space control with disturbance estimation and compensation based on the internal model principle [4]. We start with a linear state space model of a process disturbed by d(t):

$$\dot{\boldsymbol{x}}(t) = \boldsymbol{A} \boldsymbol{x}(t) + \boldsymbol{B} u(t) + \boldsymbol{E} d(t), \quad y(t) = \boldsymbol{C} \boldsymbol{x}(t) \quad (17)$$

Further, we assume to possess a model of the generator of the disturbance, d(t):

$$\dot{\boldsymbol{x}}_d(t) = \boldsymbol{A}_d \boldsymbol{x}_d(t), \quad d(t) = \boldsymbol{C}_d \boldsymbol{x}_d(t) \quad (18)$$

Note that $\boldsymbol{A}_d$ and $\boldsymbol{C}_d$ in Equation (18) refer to the model of the disturbance generator in this section only and should not be mistaken for the discrete-time versions of $\boldsymbol{A}$ and $\boldsymbol{C}$ used in Section 4. The process model is now extended by incorporating the internal state variables, $\boldsymbol{x}_d(t)$, of the disturbance generator, resulting in the augmented process model in Equation (19), for which the observer given in Equation (20) can be set up [11]:

$$\begin{pmatrix} \dot{\boldsymbol{x}}(t) \\ \dot{\boldsymbol{x}}_d(t) \end{pmatrix} = \begin{pmatrix} \boldsymbol{A} & \boldsymbol{E}\boldsymbol{C}_d \\ \boldsymbol{0} & \boldsymbol{A}_d \end{pmatrix} \begin{pmatrix} \boldsymbol{x}(t) \\ \boldsymbol{x}_d(t) \end{pmatrix} + \begin{pmatrix} \boldsymbol{B} \\ \boldsymbol{0} \end{pmatrix} u(t), \quad y(t) = \begin{pmatrix} \boldsymbol{C} & \boldsymbol{0} \end{pmatrix} \begin{pmatrix} \boldsymbol{x}(t) \\ \boldsymbol{x}_d(t) \end{pmatrix} \quad (19)$$

$$\begin{pmatrix} \dot{\hat{\boldsymbol{x}}}(t) \\ \dot{\hat{\boldsymbol{x}}}_d(t) \end{pmatrix} = \begin{pmatrix} \boldsymbol{A} & \boldsymbol{E}\boldsymbol{C}_d \\ \boldsymbol{0} & \boldsymbol{A}_d \end{pmatrix} \begin{pmatrix} \hat{\boldsymbol{x}}(t) \\ \hat{\boldsymbol{x}}_d(t) \end{pmatrix} + \begin{pmatrix} \boldsymbol{B} \\ \boldsymbol{0} \end{pmatrix} u(t) + \boldsymbol{L} \cdot \left( y(t) - \boldsymbol{C} \hat{\boldsymbol{x}}(t) \right) \quad (20)$$

Accordingly, the standard state space control law can now be enhanced by the estimated state variables of the disturbance generator in order to compensate or minimize the impact of the disturbance on the process, provided a suitable feedback matrix, $\boldsymbol{K}_d$, can be found:

$$u(t) = G \cdot r(t) - \boldsymbol{K} \hat{\boldsymbol{x}}(t) - \boldsymbol{K}_d \hat{\boldsymbol{x}}_d(t) \quad (21)$$

After inserting Equation (21) into Equation (17), it becomes apparent that, provided an accurate estimate, $\hat{\boldsymbol{x}}_d(t)$, is available, the disturbance may be compensated to the extent that $\boldsymbol{B}\boldsymbol{K}_d = \boldsymbol{E}\boldsymbol{C}_d$ can be satisfied [11].
We will now compare the combined state and disturbance observer based on the augmented process model, as well as the control law, to the linear ADRC presented before; the first- and second-order cases are distinguished by (a) and (b):

• Process model and disturbance generator: When comparing Equation (20) to Equations (4) and (12), respectively, one recognizes the (single or double) integrator process with a constant disturbance model, i.e., $\boldsymbol{A}_d = 0$ and $\boldsymbol{C}_d = 1$, such that $d(t) = f(t)$ with $\dot{f}(t) = 0$ is being assumed. The respective matrices $\boldsymbol{E}$ follow from Equations (2) and (10): (a) $\boldsymbol{E} = 1$ and (b) $\boldsymbol{E} = \begin{pmatrix} 0 & 1 \end{pmatrix}^T$.
• Control law: Comparing Equation (21) to Equations (5) and (13) gives (a) $\boldsymbol{K} = \frac{K_P}{b_0}$ and (b) $\boldsymbol{K} = \frac{1}{b_0} \begin{pmatrix} K_P & K_D \end{pmatrix}$, with $G = \frac{K_P}{b_0}$ and $\boldsymbol{K}_d = \frac{1}{b_0}$ in both cases.

One can see that the observer and control law of linear ADRC and of a state space approach based on the internal model principle are equivalent in structure, and this comparison makes the model of the disturbance generator implicit in ADRC more visible. Whether the standard design procedure of a state space observer and controller with disturbance compensation also leads to the same parameter values as ADRC will be verified subsequently by following the necessary steps to design $\boldsymbol{K}$, $\boldsymbol{K}_d$, $G$ and $\boldsymbol{L}$ based on the same design goals used for linear ADRC before:

• Feedback gain $\boldsymbol{K}$: The closed loop dynamics are determined by the eigenvalues of $(\boldsymbol{A} - \boldsymbol{B}\boldsymbol{K})$. For ADRC, all poles were placed at one location, $s^{CL}$. With this design goal, one obtains (a) $\boldsymbol{K} = \frac{-s^{CL}}{b_0}$ and (b) $\boldsymbol{K} = \frac{1}{b_0} \begin{pmatrix} \left( s^{CL} \right)^2 & -2 s^{CL} \end{pmatrix}$, matching $K_P$ and $K_D$ from Equations (6) and (14).
• Gain compensation G: In order to eliminate steady-state tracking errors, G must be chosen such that the DC gain from r to y becomes one, which gives $G = K_1$, the first element of $\boldsymbol{K}$, for both the first- and second-order case.
• Observer gain $\boldsymbol{L}$: The dynamics of the observer for the augmented system are determined by the eigenvalues of the augmented system matrix $(\boldsymbol{A} - \boldsymbol{L}\boldsymbol{C})$, which are placed as desired; this is the identical procedure as in Equations (8) and (16).
• Disturbance compensation gain $\boldsymbol{K}_d$: As mentioned above, $\boldsymbol{K}_d$ should be chosen to satisfy $\boldsymbol{B}\boldsymbol{K}_d \overset{!}{=} \boldsymbol{E}\boldsymbol{C}_d$ if possible, which yields $\boldsymbol{K}_d = \frac{1}{b_0}$ in both cases.

Obviously, both designs deliver the same parameters. Based on this comparison, one may view linear ADRC and its controller design as a special case of classical state space control with an observer using a system model augmented by a certain disturbance generator model (following the internal model principle) and disturbance compensation. However, a subtle but important distinction has to be made: while the latter relies on a model of the plant (as do all modern model-based control approaches), ADRC always deliberately assumes an integrator model for the plant and leaves all modeling errors to be handled by the disturbance estimation. Therefore, ADRC can be applied without accurately modeling the process, which presents a departure from the model-based control school [13].
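The pole-placement part of this equivalence can be spot-checked numerically. The snippet below is a check added here for illustration (not part of the original study): it verifies that the gains of Equation (16) indeed place all three eigenvalues of $(\boldsymbol{A} - \boldsymbol{L}\boldsymbol{C})$ at the chosen $s^{ESO}$:

```python
import numpy as np

# Numerical spot check (illustrative, not from the study): the second-order
# ESO gains of Equation (16) place all eigenvalues of (A - L C) at s_ESO.
s_eso = -12.0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0]])
L = np.array([[-3.0 * s_eso],          # l1
              [3.0 * s_eso ** 2],      # l2
              [-s_eso ** 3]])          # l3
eigenvalues = np.linalg.eigvals(A - L @ C)
```

Note that a triple eigenvalue is numerically sensitive, so the computed values will match $s^{ESO}$ only up to a small tolerance.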

Simulative Experiments
In the ideal case-with noise-free measurements and unlimited, ideal actuators in the control loop-ADRC would be able to suppress basically all effects of disturbances and parameter variations of the process. In practice, however, we have to live with constraints, such as limited observer dynamics or saturated controller outputs, leading to a compromise when choosing the parameters of the controller.
This section is meant to provide insights into the abilities and limitations of continuous-time ADRC in a visual manner. To that end, the controller, designed once and then left unchanged, will be confronted with a heavily varying process. The influence of the ESO pole placement (relative to the closed loop poles) will be examined as well, along with limitations of the actuator. The experiments are carried out by means of Matlab/Simulink-based simulations.
In Section 4.2, further simulations will be performed for the discrete-time implementation of ADRC, also addressing the effect of noise and sampling time.

First-Order ADRC with a First-Order Process
In a first series of experiments, a continuous-time ADRC (as introduced in Section 2.1) operating on a first-order system will be examined. The process structure and nominal parameters used to design the controller are:

$$P(s) = \frac{K}{Ts + 1} \quad \text{with} \quad K = 1 \quad \text{and} \quad T = 1$$

The ADRC is designed following the four steps described in Section 2.1. For now, we assume perfect knowledge of our process and set $b_0 = \frac{K}{T} = 1$, but we will leave this value unchanged for the rest of the experiments in this section. The desired closed loop settling time is chosen as $T_{settle} = 1$. To this end, the proportional gain of the linear controller is, according to Equation (6), set to $K_P = \frac{4}{T_{settle}} = 4$. Unless otherwise noted in individual experiments, the observer poles are chosen as $s^{ESO}_{1/2} = s^{ESO} = 10 \cdot s^{CL} = -40$. The respective values of the observer gains are obtained via Equation (8).
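This nominal setup can also be reproduced outside of Simulink. The following Euler-based sketch is a simplified stand-in for the simulation model (all names and the step size are choices made here) that closes the loop around the nominal plant:

```python
import numpy as np

# Euler simulation of the nominal first-order loop (K = T = 1, T_settle = 1)
# under the continuous-time ADRC of Section 2.1; a simplified stand-in for
# the Simulink model, with illustrative names and step size.
b0, K_P = 1.0, 4.0
L = np.array([80.0, 1600.0])           # s_ESO = 10 * s_CL = -40, Equation (8)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([b0, 0.0])
dt, r = 1e-4, 1.0
y, xhat = 0.0, np.zeros(2)
for _ in range(int(2.0 / dt)):         # 2 s of a unit step response
    u = (K_P * (r - xhat[0]) - xhat[1]) / b0               # Equation (5)
    xhat += dt * (A @ xhat + B * u + L * (y - xhat[0]))    # ESO, Equation (4)
    y += dt * (-y + u)                 # plant: T*y' + y = K*u with K = T = 1
```

After two seconds (twice the design settling time), the output has settled at the reference, as expected from the closed loop pole at $-K_P$.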
Throughout Section 3.1, noise-free control variables and ideal measurements will be assumed. Reaction on noisy measurements will be part of further experiments in Section 4.2.

Sensitivity to Process Parameter Variations
Given the explicit feature of ADRC to cope with modeling errors, our first goal will be to provide visual insights into the control loop behavior under (heavy) variations of process parameters. A series of simulations was run with fixed ADRC parameters as given above and a first-order process with varying parameters, K (DC gain) and T (time constant). In Figure 3, the closed loop step responses are displayed on the left-hand side and the corresponding controller outputs on the right-hand side. For both K and T, values were chosen from an interval reaching from 10% to 1000% of the nominal value, i.e., a decrease and increase by factors of up to ten.
The results are quite impressive. One can see in Figure 3 that the closed loop step response remains similar or nearly identical to the desired behavior (settling time of one second) for most process parameter settings. Noticeable overshoots appear only for the five- and ten-fold increased time constant. In theory, ideal behavior (i.e., almost complete insensitivity to parameter variations) can be obtained by placing the observer poles far enough to the left of the closed loop poles, as will be shown in Section 3.1.2. Note that we are not constrained by actuator saturation here; this case will be examined in Section 3.1.3.

Effect of Observer Pole Locations
In the previous Section 3.1.1, the observer poles were placed ten times faster than the closed loop pole ($s^{ESO}_{1/2} = s^{ESO} = 10 \cdot s^{CL}$). How does the choice of this factor influence the behavior and the abilities regarding process parameter variations? We will repeat the simulations with a varying time constant, T, of the process, both with slower and faster observers. To demonstrate a rather extreme case as well, we will apply a factor of 100 for the faster setting, $s^{ESO} = 100 \cdot s^{CL}$.
The results in Figure 4 are ordered from fastest (top) to slowest observer setting (bottom). For the fastest setting, the theoretical ability to almost completely ignore any modeling error is confirmed. In order to achieve this behavior in practice, the actuator must be fast enough and must not saturate within the desired range of parameter variations. For slower observer settings, the actual closed loop dynamics increasingly differ from the desired dynamics, especially noticeable as larger overshoots for process variants with stronger low-pass character.

Effect of Actuator Saturation
In practice, the abilities of any controller are tied to limitations of the actuator, i.e., its dynamics and the realizable range of values of the actuating variable. While the actuator dynamics can be viewed as part of the process dynamics during controller design, one has to take possible effects of actuator saturation into account.
If parameters of our example process change, the control loop behavior may be influenced or limited by actuator saturation in different ways: if the process becomes slower (i.e., the time constant T increases), actuator saturation will increase the settling time. If, on the other hand, the DC gain of the process decreases, the control loop may not be able to reach the reference value under actuator saturation.
In classical PID-type control, some sort of anti-windup strategy would be required to prevent the side effects of actuator saturation. We will see in the experiments of this section that for ADRC, those effects can be overcome very simply by feeding the state observer with the limited actuating variable, $u_{lim}(t)$, instead of u(t), obtained either from a measurement or from an actuator model (cf. Figure 5). To demonstrate the behavior of ADRC under actuator saturation, the experiments of Section 3.1.1 will now be repeated under an (arbitrarily chosen) limitation of the actuating variable, $|u_{lim}| \le 5$. Figure 6 shows the control loop behavior with varying process parameters, K and T. On the right-hand side of Figure 6, the controller outputs are shown before being fed into the actuator model, i.e., before the limitation becomes effective. One can see that for reduced process gains, K ≤ 0.2, the reference value cannot be reached anymore. From the respective controller outputs, it becomes apparent that u(t) does not wind up when actuator saturation takes effect, but converges to a steady-state value.
For slower process dynamics, T = 5 and T = 10, the settling time increases considerably, yet there is almost no overshoot visible. Since the actuator is saturated, this already is the fastest possible step response. Obviously, the controller recovers very well from periods of actuator saturation. To summarize, for practical implementations of ADRC, this means that apart from the small modification shown in Figure 5, there are no further anti-windup measures necessary.
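The modification amounts to a single changed signal path: the observer receives $u_{lim}$ instead of $u$. One way to sketch this in code (function and parameter names are choices made here; the limit $|u_{lim}| \le 5$ matches the experiments above):

```python
import numpy as np

# One controller/observer update with actuator saturation; the ESO is fed
# the *limited* value u_lim, which is the only anti-windup measure needed.
# A sketch with illustrative names; |u_lim| <= 5 as in the experiments.
A = np.array([[0.0, 1.0], [0.0, 0.0]])

def adrc_sat_step(xhat, y, r, dt, b0=1.0, K_P=4.0,
                  L=np.array([80.0, 1600.0]), u_max=5.0):
    u = (K_P * (r - xhat[0]) - xhat[1]) / b0
    u_lim = max(-u_max, min(u_max, u))            # actuator model: saturation
    B = np.array([b0, 0.0])
    xhat = xhat + dt * (A @ xhat + B * u_lim + L * (y - xhat[0]))
    return u, u_lim, xhat
```

Because the ESO only ever sees what the actuator actually delivered, its disturbance estimate stays consistent with the plant, and the unlimited output u settles instead of winding up.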

Effect of Dead Time
Many practical processes with dominating first-order behavior do exhibit a dead time. While there are many specialized model-based approaches to control such processes, we are interested in how ADRC will handle an unknown-albeit small-amount of dead time in the control loop.
In the simulations of Figure 7, a dead time, $T_{dead}$, with $T_{dead} \le 0.1$ was added to the process as in the previous experiments, i.e., up to 10% of the time constant, T, of the process. As expected, oscillations inevitably start to appear with increasing $T_{dead}$, especially in the controller output. This situation can, however, be improved if the dead time of the process is, at least approximately, known. An easy way of incorporating small dead times into ADRC is to delay the controller output fed into the observer by $T^{ESO}_{dead}$, i.e., to use $u(t - T^{ESO}_{dead})$ instead of u(t) in Equation (4). In Figure 7c,d, this approach was implemented using $T^{ESO}_{dead} = 0.05$. Clearly, the oscillations in the controller output are less prominent, even though $T^{ESO}_{dead}$ does not exactly match the actual dead time.
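In a discrete implementation, feeding $u(t - T^{ESO}_{dead})$ to the observer requires nothing more than a short FIFO buffer on the observer input. A minimal sketch (the class name and interface are choices made here):

```python
from collections import deque

# FIFO delay line for feeding u(t - T_dead_eso) to the observer instead of
# u(t); a minimal sketch with an illustrative class name.
class DelayLine:
    def __init__(self, delay, dt):
        n = max(1, round(delay / dt))     # delay expressed in samples
        self.buf = deque([0.0] * n)
    def step(self, u):
        """Push the current u, return the value delayed by n samples."""
        self.buf.append(u)
        return self.buf.popleft()
```

With delay = 0.05 and a sample time dt = 0.01, for example, the line holds five samples, so the first five outputs are the zero initial contents.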

Effect of Structural Uncertainties
In the experiments carried out so far in this section, it was assumed that our process could be reasonably well described by a first-order model. In practice, such a first-order model almost always results from neglecting higher-order dynamics, e.g., of the actuator. While it could already be seen that ADRC can handle parameter variations of a first-order process very well, its behavior under structural uncertainties remains to be examined. To demonstrate this behavior, a second pole was added to the process in the simulations of Figure 8, resulting in a second time constant, $T_2 \le 0.1$, i.e., up to 10% of the dominant time constant, T. One can see that some oscillations start to appear during the transient as the higher-order pole approaches the dominant pole. While these results are acceptable, further simulations showed that ADRC did not provide an advantage over standard PI controllers as large as it does in the case of parameter robustness, cf. Section 3.1.6.

Comparison to PI Control
Given the ubiquity of PID-type controllers in industrial practice, how does the standard approach keep up against ADRC? To that end, we will repeat the experiment regarding sensitivity towards process parameter variations and compare the ADRC results to a PI controller.
For the first-order process, a PI controller is sufficient to achieve any desired first-order closed loop behavior. In order to obtain comparable results, the PI controller is designed for nearly identical closed loop dynamics as the ADRC by aiming for the same settling time and placing the controller zero on the pole of the first-order process:

$$C_{PI}(s) = K_P + \frac{K_I}{s} \quad \text{with} \quad K_I = \frac{4}{K \cdot T_{settle}} \quad \text{and} \quad K_P = K_I \cdot T$$

The simulation results in Figure 9 clearly demonstrate the ability of the ADRC approach to keep the closed loop dynamics similar even under major parameter variations, whereas the dynamics delivered by the PI controller vary as heavily as the process parameters do.
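The pole-zero-cancellation design rule can be written down in two lines; the following sketch is a reconstruction of that rule (function name chosen here), evaluated for the nominal plant:

```python
# PI tuning for the comparison: cancel the plant pole with the controller
# zero, then match the 2% settling time. A reconstruction of the design
# rule stated above, with an illustrative function name.
def design_pi(K, T, T_settle):
    K_I = 4.0 / (K * T_settle)   # remaining loop K*K_I/s -> pole at -4/T_settle
    K_P = K_I * T                # zero at -K_I/K_P = -1/T cancels the plant pole
    return K_P, K_I

K_P_pi, K_I_pi = design_pi(K=1.0, T=1.0, T_settle=1.0)   # gives (4.0, 4.0)
```

Note that this tuning is tied to the nominal K and T, which is exactly why the PI loop dynamics drift when the process parameters change.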

Disturbance Rejection of ADRC and PI
As a final experiment for the continuous-time first-order ADRC, we want to examine the disturbance rejection abilities by injecting an input disturbance into the process for both ADRC and the PI controller from Section 3.1.6.
On the left-hand side of Figure 10, a closed loop step response is shown; the input disturbance is effective during the period from t = 2 until t = 4. While both controllers were tuned for the same reaction to setpoint changes, the impact of the disturbance is compensated much faster by ADRC than by the PI controller. In classical control, one would need to tune the PI controller much more aggressively and add a setpoint filter or follow another two-degrees-of-freedom approach in order to obtain similar results [1].

Figure 10. Process and ADRC parameters are as throughout Section 3.1. Input disturbance, d = 1, was effective from t = 2 until t = 4. (a) Step response and reaction to the disturbance; (b) controller output u for (a).

Second-Order ADRC with a Second-Order Process
The second-order ADRC will be examined using a second-order process with the following nominal parameters:

$$P(s) = \frac{K}{T^2 s^2 + 2DT s + 1} \quad \text{with} \quad K = 1, \quad D = 1 \quad \text{and} \quad T = 1$$

Again, perfect knowledge of the process is assumed, $b_0 = \frac{K}{T^2} = 1$, but $b_0$ is then left unchanged throughout Section 3. The controller is designed for a desired settling time, $T_{settle} = 5$, via Equation (14); unless otherwise noted, the observer poles are placed at $s^{ESO}_{1/2/3} = s^{ESO} = 10 \cdot s^{CL}$. The respective values of the observer gains, $l_{1/2/3}$, are obtained via Equation (16). As in Section 3.1, noise-free control variables and ideal measurements will be assumed throughout this section.
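As before, the nominal loop can be reproduced with a simple Euler simulation; the following sketch is a simplified stand-in for the simulation model (names and step size are choices made here), using the design values just stated:

```python
import numpy as np

# Euler simulation of the nominal second-order loop (K = D = T = 1) with
# T_settle = 5; a simplified stand-in for the Simulink experiments.
b0, s_cl = 1.0, -6.0 / 5.0                  # s_CL = -6/T_settle, Equation (14)
K_P, K_D = s_cl ** 2, -2.0 * s_cl
s_eso = 10.0 * s_cl
L = np.array([-3.0 * s_eso, 3.0 * s_eso ** 2, -s_eso ** 3])   # Equation (16)
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
B = np.array([0.0, b0, 0.0])
dt, r = 1e-4, 1.0
y, yd = 0.0, 0.0                            # plant output and its derivative
xhat = np.zeros(3)
for _ in range(int(8.0 / dt)):
    u = (K_P * (r - xhat[0]) - K_D * xhat[1] - xhat[2]) / b0   # Equation (13)
    xhat += dt * (A @ xhat + B * u + L * (y - xhat[0]))        # Equation (12)
    ydd = u - 2.0 * yd - y                  # T^2 y'' + 2DT y' + y = K u
    y, yd = y + dt * yd, yd + dt * ydd
```

With the critically damped double pole at $s^{CL} = -1.2$, the step response settles within the designed five seconds and without overshoot.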

Sensitivity to Process Parameter Variations
For the second-order ADRC, the sensitivity to variations of the process parameters will be examined first. Deviations from the original DC gain (K = 1) were made in a range of 10% to 500% by setting K to one of the values 0.1, 0.2, 0.5, 2 and 5. The damping factor, D, of the process was varied within a range of 10% to 1000% of the original value, D = 1, using the settings 0.1, 0.2, 0.5, 2, 5 and 10. Finally, the time constant, T, of the second-order process was varied in a range of 50% to 350% of the original value, T = 1, using the values 0.5, 2, 3 and 3.5.
From the results in Figure 11, one can see that the closed loop behavior is almost not at all affected, even by relatively large changes of K and D. Only for very small values of K, the step response becomes slower; and for very large values of the damping, D, some overshoot becomes visible. Changes in the time constant, T, do, for larger values, increase overshoot and oscillations more than in the first-order case.
In order to fully appreciate the robustness towards parameter changes of the process, one has to compare these results against standard controllers, such as PID, as will be done in Section 3.2.6.

Effect of Observer Pole Locations
In the previous section, we saw that changes in the process time constant, T, affected the closed loop behavior more strongly than changes in K and D. We will therefore demonstrate the influence of the observer poles on the sensitivity by repeating the experiments with varying T for different observer pole locations.
In the previous experiments, the observer poles were placed ten times farther to the left than the closed loop poles in the s-plane, i.e., $s^{ESO} = 10 \cdot s^{CL}$. Here, both slower and faster observers will be examined as well, by setting $s^{ESO} = 100 \cdot s^{CL}$ and $s^{ESO} = 5 \cdot s^{CL}$.
As visible in the simulation results in Figure 12, almost ideal behavior can-at least in theory-be obtained by placing the observer poles far enough left in the s-plane, i.e., choosing large factors k in s ESO = k · s CL . Of course, this comes at the price of the need for faster actuators and larger controller outputs. Furthermore-as we will see in Section 4.2-faster observers are more sensitive to measurement noise and will be limited by sample time restrictions in discrete time implementations.

Effect of Actuator Saturation
Analogously to Section 3.1.3, the experiments were carried out by extending the controller structure such that the saturated controller output, $u_{lim}(t)$, was fed back to the observer instead of u(t) (compare Figure 5 for the first-order case). The resulting step responses and controller outputs for varying process parameters can be found in Figure 13. As expected, lowering the DC gain, K, of the process forced the controller into saturation, such that the desired process output could not be reached anymore. It has to be stressed again that the controller output did not wind up, as visible in Figure 13.
When decelerating the process (increased damping D), the closed loop dynamics suffered when the controller output ran into saturation (increased settling time), but there were no additional oscillations after recovering from saturation, as known from classical PID control without anti-windup measures.

Effect of Dead Time
Similar to Section 3.1.4, the second-order ADRC was confronted with an unknown dead time in the process, with $T_{dead} \le 0.3$. One can see from Figure 14a that the controller and observer work very well together, such that the output is hardly affected by the dead time, which is also an improvement over the first-order case in Section 3.1.4. In the controller output in Figure 14b, however, oscillations become increasingly visible with larger values of $T_{dead}$. Again, this situation can be improved by delaying the input of the observer, i.e., using $u(t - T^{ESO}_{dead})$ instead of u(t) in Equation (12). For a fixed value, $T^{ESO}_{dead} = 0.1$, the controller behavior is shown in Figure 14c,d, where the controller oscillations are already significantly reduced, even though $T^{ESO}_{dead}$ does not exactly match the actual dead time.

Effect of Structural Uncertainties
Besides unknown dead times, the controller may be faced with higher-order dynamics in the process. Therefore, the effect of an unknown third pole in the process was examined in Figure 15. The time constant, $T_3$, of the third pole was varied within $0.001 \le T_3 \le 1$, i.e., in the extreme case, the third pole was identical to the two known poles of the plant. In contrast to the first-order ADRC in Section 3.1.5, the second-order ADRC proved to have very good robustness against unknown higher-order dynamics, even in the more challenging cases of Figure 15.

Comparison to PI and PID Control
To view and assess the abilities of the second-order ADRC from the perspective of classical control, we will now employ standard PI and PID controllers and, after fixing their parameters, expose them to the same process parameter variations as in Section 3.2.1. To ensure comparability, all controllers are designed for the same closed loop dynamics (settling time $T_{settle} = 5$, no overshoot) using the nominal process parameters, K = 1, D = 1 and T = 1. The PI controller is given and parameterized as follows:

$$C_{PI}(s) = K_P + \frac{K_I}{s} \quad \text{with} \quad K_I = \frac{2.55}{K \cdot T_{settle}} = 0.51 \quad \text{and} \quad K_P = K_I \cdot 1.5 \cdot D \cdot T = 0.765$$

For the PID controller, we will employ a PIDT1-type controller in two-pole-two-zero form, designed such that the zeros of the controller cancel the process poles. The controller gain is chosen to match the closed loop dynamics of the PI and ADRC cases. The simulation results are compiled in Figure 16, where each column represents one of the three controller types and each row a different process parameter being varied. To summarize, in each case the ADRC approach surpasses the results of PI and PID control by a large margin with respect to sensitivity towards parameter variations, with a slight advantage of PID over PI control.
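The stated PI gains follow directly from the given formulas; as a quick arithmetic check (added here, using only the nominal parameters from above):

```python
# Reproducing the stated PI gains for the second-order comparison from the
# nominal parameters; plain arithmetic, nothing more.
K, D, T, T_settle = 1.0, 1.0, 1.0, 5.0
K_I = 2.55 / (K * T_settle)        # = 0.51
K_P = K_I * 1.5 * D * T            # = 0.765
```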

Disturbance Rejection of ADRC, PI and PID
As our final experiment for the second-order case, we will evaluate the disturbance rejection capabilities of ADRC, PI and PID control using the controller settings from the previous section. All of them are tuned for the same closed loop response.
For each control loop, an input disturbance, d = 0.5, is applied during a ten-second interval after reaching steady state; compare Figure 17. One can easily recognize that ADRC compensates the disturbance much faster, such that its effect on the control variable remains very low. While both PI and PID controllers could also be tuned more aggressively for a similar disturbance rejection behavior, one would have to employ additional measures, such as setpoint filters, to retain a non-oscillating reference tracking behavior.

Discrete Time ADRC
Practical implementations of a controller with a state observer, such as the ADRC approach, will most likely be realized in discrete time form, e.g., on a microcontroller. Since the actual control law for linear ADRC is based on proportional state feedback, a discrete time version can already be obtained by discretizing only the extended state observer, which will be done in Section 4.1. This quasi-continuous approach will, of course, be valid only for sufficiently fast sampling. Otherwise, the state feedback should be designed explicitly for a discretized process model incorporating sampling delays.
In Section 4.2, simulative experiments will be carried out in order to visually assess the influence of the discretisation process and measurement noise on the control loop performance.

Discretisation of the State Observer
For a system without a feed-through term, the standard approach for a discrete-time observer is:

x̂(k + 1) = A_d · x̂(k) + B_d · u(k) + L_p · (y(k) − C_d · x̂(k))    (22)

If we look at the estimation error, e(k) = x(k) − x̂(k), of Equation (22), we can see that, just as in the continuous case, the dynamics of the error decay are determined by the matrix A_d − L_p · C_d:

e(k + 1) = (A_d − L_p · C_d) · e(k)

Since the observer gains in L_p influence the pole placement for the matrix A_d − L_p · C_d, they can be chosen such that the estimation settles within a desired time.
Here, A_d, B_d and C_d refer to discrete-time versions of the respective matrices in the state space process models, Equations (3) and (11), obtained by a discretisation method. Since there is no matrix D_d in the observer equations, the discretisation of the model must deliver D_d = 0, for example, via zero order hold (ZOH) sampling. This observer approach is also called a "prediction observer", since the current measurement, y(k), is used as a correction of the estimate only in the subsequent time step, x̂(k + 1).
In order to reduce unnecessary time delays (which may even destabilize the control loop), it is advisable [10] to employ a different observer approach called "filtered observer" or "current observer" [5]. The underlying idea is, similar to Kalman filtering, to split the update equation into two steps: a prediction step to obtain x̂(k|k − 1) based on measurements up to the previous time step, k − 1, and a correction step to obtain the final estimate, x̂(k|k), incorporating the most recent measurement, y(k):

x̂(k|k − 1) = A_d · x̂(k − 1) + B_d · u(k − 1)
x̂(k|k) = x̂(k|k − 1) + L_c · (y(k) − C_d · x̂(k|k − 1))

If we put the prediction into the correction equation and abbreviate x̂(k|k) = x̂(k), we obtain one update equation for the observer:

x̂(k) = (A_d − L_c · C_d · A_d) · x̂(k − 1) + (B_d − L_c · C_d · B_d) · u(k − 1) + L_c · y(k)    (23)

From the estimation error, we can see that, in contrast to the prediction observer, the error dynamics are determined by the matrix (A_d − L_c · C_d · A_d). When computing the observer gains in L_c, this has to be taken into account, i.e., one must choose L_c such that the eigenvalues of (A_d − L_c · C_d · A_d) match the desired observer pole locations.
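The equivalence of the two-step and combined forms of the current observer can be checked numerically. The sketch below uses the ZOH matrices of the second-order ESO; the observer gain values are purely illustrative, not a tuned design.

```python
import numpy as np

# Sketch of the "current observer" update in both its two-step and
# combined forms, for the second-order ESO (gain values illustrative).
T_s = 0.01
A_d = np.array([[1.0, T_s, T_s**2 / 2], [0.0, 1.0, T_s], [0.0, 0.0, 1.0]])
b0  = 1.0
B_d = np.array([[b0 * T_s**2 / 2], [b0 * T_s], [0.0]])
C_d = np.array([[1.0, 0.0, 0.0]])
L_c = np.array([[0.5], [5.0], [20.0]])    # illustrative observer gains

x_prev = np.array([[0.1], [0.0], [0.0]])  # x̂(k-1)
u_prev, y_k = 0.2, 0.15                   # u(k-1), y(k)

# Two-step form: predict with data up to k-1, then correct with y(k).
x_pred = A_d @ x_prev + B_d * u_prev                  # x̂(k|k-1)
x_curr = x_pred + L_c * (y_k - C_d @ x_pred)          # x̂(k|k)

# Combined form with error dynamics matrix (A_d - L_c·C_d·A_d).
x_comb = ((A_d - L_c @ C_d @ A_d) @ x_prev
          + (B_d - L_c @ C_d @ B_d) * u_prev + L_c * y_k)

print(np.allclose(x_curr, x_comb))  # True
```

Substituting the prediction into the correction step yields exactly the combined matrices, which is what the check confirms.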
The discrete-time versions of the matrices A, B and C from the state space process models, which are necessary for the observer equations, can be obtained by ZOH discretisation [5]:

A_d = e^(A · T_sample) = Σ_{i=0}^{∞} (A^i · T_sample^i) / i!,   B_d = ( Σ_{i=1}^{∞} (A^(i−1) · T_sample^i) / i! ) · B    (24)

For the first-order process in Equation (3), this can easily be computed via Equation (24), since A^i = 0 for i ≥ 2, while C_d = C = (1 0) and D_d = D = 0 remain unchanged:

A_d = (1  T_sample; 0  1),   B_d = (b_0 · T_sample; 0)

Following the same procedure for the second-order case in Equation (11), C_d = C = (1 0 0) and D_d = D = 0 remain unchanged, and one obtains for A_d and B_d (since A^i = 0 for i ≥ 3):

A_d = (1  T_sample  T_sample²/2; 0  1  T_sample; 0  0  1),   B_d = (b_0 · T_sample²/2; b_0 · T_sample; 0)

Now, one can compute the observer gain, L_c = (l_1 l_2)^T (first-order case) or L_c = (l_1 l_2 l_3)^T (for the second-order ADRC), to obtain the desired observer dynamics. The desired pole locations can, in a first step, be formulated in the s-plane, as in Sections 2.1 and 2.2, and then be mapped to the z-plane: z_ESO = e^(s_ESO · T_sample).

Simulative Experiments
As in the continuous-time experiments in Section 3.1, a first-order process, P(s) = K/(T · s + 1), will be examined in this section. Unless otherwise noted in individual experiments, the ADRC will be designed assuming perfect process knowledge, b_0 = K/T = 1, with k_ESO = 5 and a desired 2% settling time, T_settle = 1. The discretisation will be based on a sampling time T_sample = 0.01, and Gaussian measurement noise with variance σ²_noise = 0.0001 will be added.
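A complete discrete-time ADRC loop for this setup can be sketched in a few lines. The following is a minimal simulation, not the article's exact implementation: it assumes the closed loop pole s_CL = −4/T_settle (a common 2% settling-time rule), places a double observer pole at z_ESO via closed-form gains for the current observer, uses a simple Euler step for the plant and omits the measurement noise for a clean convergence check.

```python
import numpy as np

# Sketch: discrete-time linear ADRC (first-order case) on the plant
# P(s) = K/(T·s + 1). Settings follow the text: b0 = K/T = 1,
# T_settle = 1, k_ESO = 5, T_sample = 0.01 (noise omitted here).
K, T = 1.0, 1.0
b0 = K / T
T_settle, k_ESO, T_s = 1.0, 5.0, 0.01

K_P = 4.0 / T_settle                  # assumed rule: s_CL = -4/T_settle
z_ESO = np.exp(-k_ESO * K_P * T_s)    # observer poles mapped to the z-plane

# Current-observer gains for a double pole at z_ESO (closed-form pole
# placement on A_d - L_c·C_d·A_d for this 2x2 case).
l1 = 1.0 - z_ESO**2
l2 = (1.0 - z_ESO)**2 / T_s
A_d = np.array([[1.0, T_s], [0.0, 1.0]])
B_d = np.array([b0 * T_s, 0.0])
L_c = np.array([l1, l2])

x_hat = np.zeros(2)                   # ESO state: [output est., disturbance est.]
y, u, r = 0.0, 0.0, 1.0
for k in range(int(10.0 / T_s)):
    # observer: predict with u(k-1), correct with the current measurement
    x_pred = A_d @ x_hat + B_d * u
    x_hat = x_pred + L_c * (y - x_pred[0])
    # control law: state feedback plus disturbance compensation
    u = (K_P * (r - x_hat[0]) - x_hat[1]) / b0
    # Euler step of the plant dy/dt = (K·u - y)/T
    y += T_s * (K * u - y) / T

print(round(y, 3))  # 1.0 (reference reached)
```

Despite the plant model entering only through b_0, the disturbance estimate x̂_2 absorbs the remaining dynamics, which is why the loop settles at the reference without steady-state error.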

Effect of Sample Time
When choosing the sampling time for a discrete-time implementation of a controller with an observer, one has to consider not only the process dynamics, but also the faster dynamics of the observer. If, on the other hand, restrictions on the sampling time are present, the achievable dynamics of the observer will be limited. In the case of ADRC, this means that the desired closed loop behavior may not be achieved under all circumstances. In Figure 18, simulations were performed using sample times for the ADRC ranging from T_sample = 0.01 to T_sample = 0.20. As expected, oscillations in the controller output become increasingly visible for large sample times, and the closed loop dynamics deviate from the desired first-order behavior as the sampling interval becomes too coarse.

Effect of Measurement Noise
While state feedback controllers based on observers rather than direct measurements are less susceptible to measurement noise, oscillations in the controller output will still occur at higher noise levels. In the simulations presented in Figure 19, normally distributed noise with variance increasing from σ²_noise = 0 to σ²_noise = 0.001 was added to the process output. The results confirm these expectations. The effect of measurement noise on the controller output can, however, be mitigated by designing an observer with slower dynamics, as demonstrated in the following experiment.

Effect of Observer Pole Locations
In the simulations with varying observer dynamics presented in Figure 20, the typical trade-off between fast setpoint tracking and noise rejection becomes visible. If the observer poles are placed far enough to the left of the closed loop poles in the s-plane via k_ESO, the closed loop dynamics can attain the desired first-order behavior, but the control variable becomes more sensitive to measurement noise and increasingly exhibits unwanted oscillations. Placing the observer poles near the closed loop poles, on the other hand, suppresses the effect of measurement noise on the control action well, but the closed loop dynamics will suffer, especially under process parameter changes.

Optimized Discrete-Time Implementation
In a practical discrete-time implementation with tight timing constraints, the lag between sensor input and controller output should be as small as possible, which means that the computational effort of the controller should be reduced. Due to its observer-based approach, ADRC has a bigger computational footprint than a classical PID-type controller. In this section, we will reduce the number of computations, on the one hand, and present an implementation focused on low input-output delay, on the other.

State Variable Transformation
We start with the discrete-time observer from Equation (23) in the following abbreviated form:

x̂(k) = A_ESO · x̂(k − 1) + B_ESO · u(k − 1) + L_ESO · y(k)

and with the control law for the first-order or second-order case:

u(k) = (K_P · (r(k) − x̂_1(k)) − x̂_2(k)) / b_0   or   u(k) = (K_P · (r(k) − x̂_1(k)) − K_D · x̂_2(k) − x̂_3(k)) / b_0

One can simplify the controller structure by scaling the outputs of the observer, such that the multiplications by b_0, K_P and K_D (the latter only for the second-order case) can be omitted. The desired scaling of the new state variables, x̄_i, is achieved via:

x̄_1 = (K_P/b_0) · x̂_1, x̄_2 = (1/b_0) · x̂_2 (first-order case);   x̄_1 = (K_P/b_0) · x̂_1, x̄_2 = (K_D/b_0) · x̂_2, x̄_3 = (1/b_0) · x̂_3 (second-order case)

The ADRC structure can then be modified as given in Figure 21. Generally speaking, we perform a coordinate transformation from the old estimated variables x̂ to x̄ by means of a transformation matrix T, with x̄ = T^(−1) · x̂. Using this matrix T, the state space equations of the extended state observer must be transformed in order to obtain an ESO working with our new state variables:

x̄(k) = (T^(−1) · A_ESO · T) · x̄(k − 1) + (T^(−1) · B_ESO) · u(k − 1) + (T^(−1) · L_ESO) · y(k)    (28)

In this manner, multiplications of the state variables by b_0, K_P and K_D are avoided. Provided the reference, r(k), does not change at each point in time, one can precompute (K_P/b_0) · r, as well. The control laws simplify to the following form:

u(k) = (K_P/b_0) · r(k) − (x̄_1(k) + x̄_2(k))   or   u(k) = (K_P/b_0) · r(k) − (x̄_1(k) + x̄_2(k) + x̄_3(k))    (29)
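That the transformation leaves the control signal unchanged can be verified numerically. In the sketch below, the diagonal transformation matrix follows the scaling described above; all observer matrix and gain values are illustrative placeholders, not a designed ESO.

```python
import numpy as np

# Sketch: verifying that the state variable transformation
# x̄ = T⁻¹·x̂ with T⁻¹ = diag(K_P/b0, K_D/b0, 1/b0) leaves the control
# signal unchanged while removing the per-step multiplications by
# b0, K_P and K_D (all matrix/gain values are illustrative).
b0, K_P, K_D = 2.0, 16.0, 8.0
T_inv = np.diag([K_P / b0, K_D / b0, 1.0 / b0])
T_mat = np.linalg.inv(T_inv)

A_ESO = np.array([[0.9, 0.01, 0.0], [-0.1, 1.0, 0.01], [-2.0, 0.0, 1.0]])
B_ESO = np.array([0.0001, 0.02, 0.0])
L_ESO = np.array([0.3, 3.0, 10.0])

A_bar = T_inv @ A_ESO @ T_mat           # transformed observer matrices
B_bar = T_inv @ B_ESO
L_bar = T_inv @ L_ESO

# run both observers one step from the same initial condition
x_hat = np.array([0.2, -0.1, 0.05])
x_bar = T_inv @ x_hat
u_prev, y, r = 0.4, 0.25, 1.0

x_hat = A_ESO @ x_hat + B_ESO * u_prev + L_ESO * y
x_bar = A_bar @ x_bar + B_bar * u_prev + L_bar * y

u_orig = (K_P * (r - x_hat[0]) - K_D * x_hat[1] - x_hat[2]) / b0
u_trans = (K_P / b0) * r - x_bar.sum()  # only one precomputable product left

print(np.isclose(u_orig, u_trans))  # True
```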

Minimizing Latency by Precomputation
In an application with a fixed sampling frequency, the controller performance can be improved by minimizing the latency between the acquisition of input signals and the output of the updated controller action within a sampling period. This input-output lag does not necessarily depend on the overall number of computations of a control law, but rather on the computations necessary to deliver the controller output; the remaining computations can be performed afterwards during the rest of the sampling period. We will optimize the discrete-time implementation of ADRC in this regard and derive the equations in detail for the second-order case.

Following the ideas of the previous section, the estimated state variables must be updated according to Equation (28) and used to compute the controller output in Equation (29) at each point in time k:

x̄(k) = A_ESO · x̄(k − 1) + B_ESO · u(k − 1) + L_ESO · y(k)

with x̄ = (x̄_1 x̄_2 x̄_3)^T, A_ESO = (a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33) and L_ESO = (l_1 l_2 l_3)^T.

When inserting Equation (28) into Equation (29), one can see that u(k) depends both on variables from time k, y(k) and r(k), and from time k − 1, u(k − 1) and x̄(k − 1). An idea for providing u(k) with lower latency is therefore to precompute all terms stemming from time k − 1 already at k − 1 and only update this precomputed value, u(k|k − 1), at time k to obtain and output u(k) = u(k|k) as soon as the necessary measurements become available. Further optimization is possible if the reference value, r, does not change rapidly, remains constant or is known in advance. We will denote this by r(k + 1|k), i.e., r(k + 1) is already known at time k for any of the reasons mentioned.
Then, only the measured process output, y(k), has to be included in the update step to obtain u(k) = u(k|k):

u(k) = u(k|k − 1) − (1 1 1) · L_ESO · y(k) = u(k|k − 1) − (l_1 + l_2 + l_3) · y(k)    (30)

In Equation (30), the sum (l_1 + l_2 + l_3) can be precomputed as well, such that only one multiplication and one addition are necessary to calculate the controller output, u(k). After that, the remaining time of the sampling period can be used to update the observer states via Equation (31) and precompute the controller output via Equation (32) for the next point in time, k + 1:

x̄(k + 1|k) = A_ESO · (x̄(k|k − 1) + L_ESO · y(k)) + B_ESO · u(k)    (31)
u(k + 1|k) = (K_P/b_0) · r(k + 1|k) − (1 1 1) · x̄(k + 1|k)    (32)

Note that the actual observer states, x̄(k), are no longer explicitly present and updated in the equations, since only the precomputed values, x̄(k + 1|k), are needed. To summarize, a latency-optimized discrete-time implementation has to perform the following four steps at each point in time k, with the new controller output, u(k), being available already after the second step:

1. Acquire the current measurement of the process output, y(k).
2. Compute and output the controller action, u(k) = u(k|k − 1) − (l_1 + l_2 + l_3) · y(k).
3. Update the precomputed observer states, x̄(k + 1|k).
4. Precompute the controller output for the next time step, u(k + 1|k).
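The precomputation scheme can be checked against a direct implementation: the latency-optimized path must output exactly the same u(k). The sketch below does this for one cycle; the observer matrices and gains are illustrative placeholders.

```python
import numpy as np

# Sketch: latency-optimized update order for the transformed second-order
# ADRC. u(k|k-1) is precomputed in the previous cycle, so only one
# multiplication and one addition separate the arrival of y(k) from the
# output of u(k). All matrix/gain values are illustrative.
A_ESO = np.array([[0.9, 0.01, 0.0], [-0.1, 1.0, 0.01], [-2.0, 0.0, 1.0]])
B_ESO = np.array([0.0001, 0.02, 0.0])
L_ESO = np.array([0.3, 3.0, 10.0])
KP_b0 = 8.0                              # precomputed K_P/b0
ones = np.ones(3)
l_sum = ones @ L_ESO                     # precomputed l1 + l2 + l3

def direct(x_prev, u_prev, y, r):
    """Reference implementation: full observer update, then control law."""
    x = A_ESO @ x_prev + B_ESO * u_prev + L_ESO * y
    return KP_b0 * r - ones @ x

# cycle k-1: precompute everything that does not need y(k)
x_prev = np.array([0.2, -0.1, 0.05])
u_prev, r_next = 0.4, 1.0
x_pre = A_ESO @ x_prev + B_ESO * u_prev  # x̄(k|k-1)
u_pre = KP_b0 * r_next - ones @ x_pre    # u(k|k-1)

# cycle k: y(k) arrives; output u(k) after one multiply and one add
y = 0.25
u_fast = u_pre - l_sum * y

print(np.isclose(u_fast, direct(x_prev, u_prev, y, r_next)))  # True
```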

Conclusions
By means of simulative experiments using generic first-order and second-order plants, it could be demonstrated that ADRC, of which the linear case was examined in this article, can be a powerful control tool. It can adapt even to heavily varying process parameters and, in contrast to "classical" adaptive controllers, does so without having to maintain an explicit model of the process. Since only little knowledge about the process has to be provided along with the desired closed loop dynamics, the parameterization is easy in both the continuous-time and discrete-time cases and, therefore, appealing to practitioners.
For control problems with high dynamic requirements, an optimized formulation of the discrete-time linear ADRC equations was derived, which enables the controller output to be computed with only one addition and one multiplication once the sensor input becomes available.
Whether ADRC can show its full potential in a specific application does, however, depend on the relation between process dynamics, observer dynamics, sampling time and measurement noise. To provide the adaptability, the observer has to be fast compared to the process and closed loop dynamics, on the one hand. On the other hand, the placement of the observer poles is limited by the sampling frequency and the increasing effect of noise on the control action. As long as a good compromise can be found in this regard, ADRC has to be considered a strong and welcome alternative for solving practical control problems.