Robust exact differentiators with predefined convergence time

The problem of exactly differentiating a signal with bounded second derivative is considered. A class of differentiators is proposed that converge to the derivative of such a signal within a fixed time, i.e., within a finite and uniformly bounded convergence time. A tuning procedure is derived that makes it possible to assign an arbitrary, predefined upper bound for this convergence time. It is furthermore shown that this bound can be made arbitrarily tight by appropriate tuning. The usefulness of the procedure is demonstrated by applying it to the well-known uniform robust exact differentiator, which is contained in the considered class of differentiators as a special case.


Introduction
The real-time computation of the exact derivative of a measured signal is an important problem with many useful applications, such as robust state estimation and fault detection. A variety of successful approaches to real-time signal differentiation is based on homogeneous sliding-mode algorithms, see, e.g., Levant (1998, 2003) and Levant et al. (2017). Due to homogeneity, these approaches have several useful features: robustness with respect to measurement noise, convergence to the derivative in finite time, and the existence of appropriate discrete-time implementations (Livne and Levant, 2014; Koch and Reichhartinger, 2019; Koch et al., 2020). Among these features, finite-time convergence (see Roxin, 1966; Bhat and Bernstein, 2000) is one of the most important advantages because, in theory, it allows states and even perturbations to be reconstructed exactly within a finite time (Floquet and Barbot, 2007; Bejarano et al., 2007; Li et al., 2014). In a control context, this advantage permits the design and stability analysis of the differentiator (i.e., the observer) to be separated from that of a nonlinear state-feedback controller. This separation becomes possible because the controller can be switched on after the differentiator has converged (Angulo et al., 2013a). In practice, an upper bound for the finite convergence time is required. While such a bound always exists, it may grow infinitely large with increasingly large initial conditions. To overcome this disadvantage, a stronger stability notion was introduced by Cruz-Zavala et al. (2011) (see also Angulo et al., 2013b) that is uniform with respect to the initial condition and was later called fixed-time convergence (Polyakov, 2011; Polyakov and Fridman, 2014). This property guarantees that the convergence time is uniformly upper bounded by a fixed finite time irrespective of the initial condition.
While fixed-time convergent differentiators are not homogeneous, they are typically designed to retain most of the corresponding useful features.
One of the most successful approaches for differentiation in finite time is the first-order robust exact differentiator (Levant, 1998). This differentiator is based on the so-called super-twisting algorithm (STA) originally proposed by Levant (1993), and can differentiate signals with Lipschitz continuous time derivative. Higher-order derivatives can be obtained using its arbitrary-order generalization (Levant, 2003) or by cascading first-order differentiators in a step-by-step manner (Floquet and Barbot, 2007; Bejarano et al., 2007).
Several extensions of STA-based differentiators with fixed-time convergence exist. Examples are the uniformly convergent arbitrary-order differentiator (Angulo et al., 2013b) and the first-order uniform robust exact differentiator proposed by Cruz-Zavala et al. (2011). For the latter, a generalization was presented by Moreno (2011) in the form of the generic second-order algorithm, which itself is further generalized as the disturbance-tailored STA by Haimovich and De Battista (2019).
An important practical problem is how to make the convergence time satisfy a desired global upper bound. In combination with state-feedback control, this allows the designer to fix in advance the time after which the controller is turned on. For the uniform robust exact differentiator, this problem is studied empirically in Fraguela et al. (2012). More recently, the related concepts of prescribed-time stability by Holloway and Krstic (2019) and predefined-time stability by Sánchez-Torres et al. (2015, 2017) have been proposed in this regard. The former concept studies ways to prescribe the actual convergence time, rather than an upper bound for it, typically by employing time-varying gains that tend to infinity at the desired convergence time instant (Holloway and Krstic, 2019) or by varying the homogeneity degree (Chitour et al., 2020). The latter concept studies systems for which an arbitrary convergence time bound can be specified by a suitable choice of some free parameters (Sánchez-Torres et al., 2015) or by means of a time-varying redesign of given fixed-time stable systems (Gómez-Gutiérrez, 2020). Algorithms studied in this regard typically focus on closed loops obtained with controllers, however, rather than on differentiators.
Causing a sliding-mode differentiator to satisfy a prescribed convergence time is a challenging problem that requires reasonably tight convergence time bounds. For fixed-time differentiators based on the STA, convergence time bounds were obtained in the course of a Lyapunov-based stability analysis (see, e.g., Moreno, 2011). Simple convergence time bounds for another differentiator approach are given by Basin (2019) (see also references therein). This approach, however, is suitable only for functions with constant time derivative, as shown by Seeber (2020b). For the STA, i.e., the first-order robust exact differentiator, convergence time bounds have been studied more extensively, see, e.g., the recent tutorial (Seeber, 2020a) and references therein. Notably, asymptotically exact bounds have been obtained in Seeber et al. (2018) by computing the STA's convergence time in the form of an improper integral.
The present paper proposes a new class of first-order fixed-time convergent differentiators, along with a tuning paradigm for assigning an upper bound to the global uniform convergence time. The considered differentiator is based on the disturbance-tailored STA from Haimovich and De Battista (2019), and thus includes the uniform robust exact differentiator as a special case. The differentiator's convergence time is studied by extending the convergence time computation and the corresponding asymptotically exact bounds from Seeber et al. (2018). The tuning paradigm is obtained by combining these bounds with a homogeneity-like scaling property of the global convergence time.
The paper is structured as follows. Section 2 states the considered problem of obtaining a signal's time derivative within a given finite time, and Section 3 discusses the differentiator structure used for this purpose and its properties. Section 4 proposes a tuning procedure to solve the problem. The main result, Theorem 4.8, makes it possible to specify a bound for the differentiator's convergence time based on the computation of only a single convergence time bound for an arbitrary parameter setting. After illustrating the proposed tuning procedure in Section 5 by means of examples, Sections 6 and 7 compute the convergence time function and study the global convergence time of the proposed differentiator, thereby providing the tools required for the presented tuning paradigm. Conclusions are given in Section 8. Section 9 forms an appendix that provides proofs for the technical lemmas used in the paper.
Notation: R, R_{≥0}, R_{>0} denote the reals, nonnegative reals, and positive reals, respectively. If α ∈ R, then |α| denotes its absolute value. For x ∈ R^n, ‖x‖ denotes its Euclidean norm. For y, p ∈ R, the abbreviations ⌊y⌉^p = |y|^p sign(y) and ⌊y⌉^0 = sign(y) are used. The symbol '∘' denotes function composition. If ν : D ⊆ R → R, then ν′ denotes the derivative of ν. A superscript T denotes matrix or vector transposition. For a function γ : R_{>0} → R, define γ(0) := lim_{s→0+} γ(s), provided the limit exists. Furthermore, γ^{−1} denotes the inverse function, if it exists, and γ′ is written for the derivative of the function with respect to its argument, unless it is a function of time, in which case γ̇ denotes its time derivative. The convention inf A = ∞ is used if A is the empty set. λ_min(·) and λ_max(·) denote the minimum and maximum eigenvalues, respectively, of a symmetric matrix, and diag(d_1, ..., d_n) ∈ R^{n×n} denotes a diagonal matrix with diagonal entries d_1, ..., d_n.
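The signed-power abbreviation translates directly into code; the following helper is a small sketch of the convention (the zero case uses sign(0) = 0):

```python
import math

def sgnpow(y, p):
    # Signed power: |y|**p * sign(y); by the convention in the text,
    # the exponent p = 0 yields sign(y) (taken as 0 for y = 0 here).
    if y == 0:
        return 0.0
    return math.copysign(abs(y) ** p, y)

# A few sanity checks of the notation:
assert sgnpow(-4.0, 0.5) == -2.0   # |-4|^{1/2} * sign(-4)
assert sgnpow(-3.0, 0.0) == -1.0   # sign(-3)
assert sgnpow(2.0, 2.0) == 4.0
```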

Problem Statement
Let f : R_{≥0} → R be a continuously differentiable function whose time derivative ḟ is globally Lipschitz continuous, i.e., whose second time derivative f̈ satisfies

|f̈(t)| ≤ L    (1)

for almost all t, with a nonnegative Lipschitz constant L. The goal is to exactly reconstruct the time derivative ḟ within a desired, fixed time T > 0, i.e., to obtain ḟ(t) for t ≥ T from f(t) for t ≥ 0. Thus, the following problem is addressed.
Problem 1 Given a Lipschitz constant L ≥ 0 and a time T > 0, construct a dynamic system (a differentiator) that takes a function f as input and generates an output y satisfying y(t) = ḟ(t) for all t ≥ T, for every function f satisfying (1).

Differentiator Structure
Problem 1 will be solved by employing a differentiator with the structure (2), with state variables y1, y2, functions ν1, ν2 : R → R given by (3), and parameters k1, k2, k3 > 0. The function Φ : R → R is called the Differentiator Generating Function (DGF) and is defined next. Afterwards, the convergence time function of the differentiator and its uniform upper bound, the global convergence time, are introduced. The differentiator's asymptotic robustness properties are also discussed.

Differentiator Generating Function
The DGF Φ has to satisfy the following definition.
Definition 1 (Differentiator Generating Function) A locally absolutely continuous function Φ : R → R is called a Differentiator Generating Function (DGF) if it has properties (i)–(v). According to conditions (i) and (ii), ν1 and ν2 are both odd functions that are continuous and locally Lipschitz on R \ {0}.
The remaining conditions-in particular the limits-yield the following properties of the functions ν 1 and ν 2 , which are formally proven in Section 9.
As a consequence of (4), and since the function ν2 is odd, it is discontinuous at the origin. Hence, the right-hand side of (2b) is discontinuous, and solutions of (2) are understood in the sense of Filippov (1988). When y1 = f, specifically, (2b) is to be read as the differential inclusion (5). From this relation, one can see that L ≤ k2 is a necessary condition for the differentiator to converge, because maintaining y2(t) = ḟ(t) requires |f̈(t)| ≤ k2.
Remark 3.2 Functions that satisfy the conditions of Definition 1 are Φ(x) = ⌊x⌉^{1/2} and Φ(x) = ⌊x⌉^{1/2} + ⌊x⌉^{3/2}, for example. These yield the robust exact differentiator proposed in Levant (1998) and the uniform robust exact differentiator (Cruz-Zavala et al., 2011), respectively. By means of the selected structure, the more specific problem that will be solved can now be stated.
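To illustrate Remark 3.2 numerically, the sketch below builds ν1 and ν2 from a DGF via the relations ν1 = Φ and ν2 = 2ΦΦ′ used in Section 3 (the k3-dependent scaling of (3) is omitted here for simplicity, so this is a hedged sketch rather than (3) verbatim), and checks the resulting ν2 against closed forms obtained by elementary calculus:

```python
import math

def sgnpow(y, p):
    # <y>^p := |y|**p * sign(y)
    return math.copysign(abs(y) ** p, y) if y != 0 else 0.0

def make_nus(phi, dphi):
    # nu1 = Phi and nu2 = 2 * Phi * Phi' (k3-scaling omitted, see lead-in)
    return phi, (lambda x: 2.0 * phi(x) * dphi(x))

# DGF of the robust exact differentiator: Phi(x) = <x>^{1/2}
phi_red  = lambda x: sgnpow(x, 0.5)
dphi_red = lambda x: 0.5 * abs(x) ** -0.5

# DGF of the uniform robust exact differentiator: Phi(x) = <x>^{1/2} + <x>^{3/2}
phi_ured  = lambda x: sgnpow(x, 0.5) + sgnpow(x, 1.5)
dphi_ured = lambda x: 0.5 * abs(x) ** -0.5 + 1.5 * abs(x) ** 0.5

nu1_red,  nu2_red  = make_nus(phi_red, dphi_red)
nu1_ured, nu2_ured = make_nus(phi_ured, dphi_ured)

# Closed forms by elementary calculus:
#   Phi = <x>^{1/2}             =>  nu2(x) = sign(x)
#   Phi = <x>^{1/2} + <x>^{3/2} =>  nu2(x) = sign(x) + 4x + 3<x>^2
for x in (0.25, 1.0, -2.0):
    assert abs(nu2_red(x) - math.copysign(1.0, x)) < 1e-12
    assert abs(nu2_ured(x) - (math.copysign(1.0, x) + 4 * x + 3 * sgnpow(x, 2))) < 1e-12
```

For Φ(x) = ⌊x⌉^{1/2}, this recovers ν2(x) = sign(x), i.e., the switching term of the super-twisting algorithm.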

Differentiator Error and Convergence Time
To study the differentiator's convergence behavior, the error variables x1 := f − y1 and x2 := ḟ − y2 are introduced and aggregated in the vector x = [x1 x2]^T in the following. For notational convenience, the family of functions Φ_ε : R → R is furthermore defined as in (6). Using this notation, one has ν1 = Φ_{k3} and ν2 = 2 Φ_{k3} Φ′_{k3}. The corresponding error dynamics are then given by (7). One can see that f̈ enters this system as a perturbation. The case f̈ = 0, i.e., L = 0, is hence called the unperturbed case in the following, while L ≥ 0 is referred to as the perturbed case.
If Φ satisfies Definition 1, then (7) is a well-defined Filippov inclusion, whose solutions exist and are continuable in forward time. Let x(·, x0, f) denote the solution to (7) satisfying x(0, x0, f) = x0, and let τ(x0, f) denote the minimum time t that the trajectory x(·, x0, f) takes to converge to the origin, as defined in (8). Depending on the DGF Φ, the parameters k, the initial state x0, and the function f, the convergence time τ(x0, f) may be finite or infinite.
The worst-case convergence time obtained for any f satisfying (1), as a function of the initial state x0, is called the convergence time function of system (7). It is denoted by T_{Φ,k,L} : R² → R and is given by (9). The smallest uniform upper bound T̂(Φ, k, L) of this function with respect to the initial state, defined in (10), is called the differentiator's global convergence time. Using this notation, Problem 2 may be restated as follows.
Problem 2 (restated) Given L ≥ 0 and T > 0, select a suitable DGF Φ and positive design parameters k1, k2, k3 so that the differentiator's global convergence time satisfies T̂(Φ, k, L) ≤ T. Note that, depending on Φ and k, the global convergence time may be infinite even when the convergence time function yields finite values for all initial states. Fixed-time convergence refers to the case of finite T̂(Φ, k, L), whereas (global) finite-time convergence just indicates that T_{Φ,k,L}(x0) is finite for every x0 ∈ R² but does not guarantee that T̂(Φ, k, L) is also finite. For these cases, Lyapunov stability of the origin will additionally be shown, which guarantees that solutions of (7) are also unique in forward time.

Asymptotic Properties and Robustness
Homogeneously approximating the inclusion (7) at the origin in the sense of (Andrieu et al., 2008, Definition 2.1) with degree −1 and weights (2, 1) yields the super-twisting algorithm, i.e., the first-order robust exact differentiator:

Theorem 3.3 (Asymptotic Behavior) For every DGF Φ and every k3 > 0, the STA system (12) is a homogeneous approximation 1 of (7) at the origin.
Remark 3.4 As a consequence of this theorem, the asymptotic (small-signal) measurement noise gains and robustness properties of the proposed differentiator are the same as for the first-order robust exact differentiator, see Levant (1998), provided that k1, k2 are chosen such that (7) and (12) both converge in finite time for all f̈ satisfying (7c).

Design Procedure
This section provides a solution to Problem 2. First, requirements on the DGF Φ are formulated, which provide a guideline for its design. Then, a tuning procedure for the parameters k 1 , k 2 , k 3 is proposed, which is based on a single upper bound of the global convergence time for one set of parameter values. Thus, a complete design procedure is developed.

Design of the Differentiator Generating Function
Selecting a DGF according to Definition 1 ensures a well-posed Filippov inclusion and will be shown in Section 6 to yield a finite convergence time T_{Φ,k,0}(x0) for L = 0. This does not guarantee, however, that the global convergence time T̂(Φ, k, L) is finite, especially for L > 0. In the following, conditions on the DGF are therefore formulated that make it possible to establish a finite global convergence time bound. They are motivated by the following result, which is proven in Section 7.2.
1 Note that (Andrieu et al., 2008, Definition 2.1) is applicable only to autonomous vector fields (i.e., for f̈ = 0 here), but its extension to the non-autonomous case considered here is straightforward, provided that uniformity with respect to the input is also imposed.
One can see that under the conditions of this proposition, the DGF Φ has to be selected such that the integral on the right-hand side of (13) exists and is finite. The concept of an admissible DGF is thus introduced, which satisfies this condition along with some additional technical assumptions.
Condition (i) is required for fixed-time convergence, i.e., for the global convergence time to be finite. The other two conditions (ii) and (iii) can be interpreted as a uniform variant of item (iii) and a global variant of item (v) of Definition 1, respectively. They will be used to establish a convergence time bound when L > 0. In particular, condition (ii) guarantees that Φ(x) grows without bound as x → ∞, and condition (iii) implies the existence of a uniform lower bound, i.e., that Φ grows faster than Φ′ decays. Such a lower bound is necessary for finite-time stability with L > 0, because otherwise system (7) exhibits an additional non-zero equilibrium for an arbitrarily small (and constant) f̈.
Both of these functions satisfy item (iii) of Definition 2 with D = 1. Only the latter is actually admissible, however, due to item (i).

Tuning of the Differentiator's Parameters
The proposed tuning procedure, i.e., the procedure for selecting appropriate values for the differentiator's parameters k 1 , k 2 , k 3 , is based on the following proposition, which in part has also been noted empirically by Fraguela et al. (2012). A formal proof is provided in Section 7.1.
Proposition 4.4 shows how the global convergence time depends on specific scalings of the parameters k and the Lipschitz constant L. The following result additionally allows the worst-case convergence time corresponding to a positive L to be bounded in terms of that corresponding to L = 0. Its proof is also given in Section 7.1.
If L < L̄, then the bound (17) holds.

Remark 4.6 Note that k2 > LD is necessary for the conditions of Proposition 4.5 to be satisfied, while k2 > L is a necessary condition for the finite-time stability of (7). Thus, it is desirable to keep D ≥ 1 as small as possible when designing or choosing an admissible DGF Φ.
In order to simplify tuning, the following notion of a normalized parameter triple is introduced.
Since L̄ can be interpreted as an upper bound for the Lipschitz constant L, a normalized parameter triple ensures that fixed-time convergence can be guaranteed for all L ≤ 1.
Proposition 4.5 shows that it is sufficient to consider the case L = 0 when studying upper bounds for the global convergence time. As the final prerequisite for the proposed tuning procedure, the following proposition gives such a bound. It is proven in Section 7.3.
Proposition 4.7 (Global Convergence Time Bound) Let k = (k1, k2, k3) ∈ R³_{>0} and consider an admissible DGF Φ with B, C ∈ R_{>0} as in Definition 2. If k1² ≥ 8 k2, then the bound (19) holds.

By combining Propositions 4.4 and 4.5, the scalar parameters k1, k2, k3 may be computed from a single, arbitrary convergence time bound obtained using Proposition 4.7. This is stated as the following theorem.
Theorem 4.8 (Tuning) Let a Lipschitz constant L ≥ 0 and a desired convergence time T > 0 be given. Consider an admissible DGF Φ, let k̄ = (k̄1, k̄2, k̄3) ∈ R³_{>0} be a normalized parameter triple with respect to Φ, and suppose that (20) holds for some γ > L. Then, the global convergence time of the differentiator (2) with parameters selected according to (21) satisfies T̂(Φ, k, L) ≤ T.

Remark 4.9 Note that γ acts as a tradeoff parameter that determines the relative magnitudes of the parameters k1, k2, k3 without influencing the convergence time bound. With increasing γ, the parameter k3 decreases, while the parameters k1, k2, which according to Theorem 3.3 determine the behavior of system (7) close to the origin, increase.
Remark 4.10 Table 1 lists normalized parameter triples for two admissible DGFs, along with the bound T̄ in (20) that may be used for differentiator tuning with this theorem. The table's contents are derived in Section 5.
PROOF. According to Proposition 4.4, relation (23) holds. Furthermore, since k̄ is normalized, Proposition 4.5 may be applied with L̄ ≥ 1 to obtain (24). Combining the two relations completes the proof.

Tightness of Predefined Convergence Time
By using the lower bound for T̂(Φ, k, L) from Proposition 4.1, the conservatism of the predefined convergence time may be bounded from above, provided that the value of the improper integral in Definition 2 is known exactly:

Theorem 4.11 (Tightness of Assigned Bound) Let T > 0 and γ > L ≥ 0, consider an admissible DGF Φ, and suppose that B is such that equality holds in item (i) of Definition 2. Suppose that k̄1, k̄2, k̄3, and T̄ satisfy the conditions of Theorem 4.8 and that k̄1² ≥ 8 k̄2. If the parameters k1, k2, k3 are selected using (21), then the ratio of predefined and actual global convergence time is bounded from above as in (25). Moreover, if T̄ = T̄(Φ, k̄) from (19), then this upper bound tends to γ/(γ − L) for k̄1 → ∞.
Remark 4.12 As a consequence, provided that B is tight and (19) is used, the assigned convergence time bound can be made arbitrarily tight by increasing k̄1 and γ.

Uniform Robust Exact Differentiator
As an application of the proposed tuning procedure, consider the uniform robust exact differentiator (Cruz-Zavala et al., 2011). It is generated by the DGF Φ(x) = ⌊x⌉^{1/2} + ⌊x⌉^{3/2} and is obtained from (2) and (3) as (28). This DGF is admissible with the constants given in (29). Using (29) and Proposition 4.7, one obtains the normalized parameter triples and associated convergence time bounds in the first row of Table 1. Choosing, for example, the first of these entries, a predefined convergence time T may be assigned to the uniform robust exact differentiator by selecting k1, k2, k3 according to Theorem 4.8 as in (30).

Fig. 1 shows simulation results obtained by applying (28) with initial values y1(0) = y2(0) = 0 to the signal f(t) = 0.75 cos(t) + 0.0025 sin(10t) + t with L = 1 and desired convergence time bound T = 1. The parameters are chosen according to (30) with γ = 4.5L as k1 = 6, k2 = 4.5, and k3 ≈ 4.182. For simplicity, the simulation is performed using a forward Euler discretization with a sufficiently small step size Ts = 10⁻⁴. Note, however, that such a simple discretization scheme in general does not achieve global stability, as shown by Levant (2013). In practice, more advanced schemes, as proposed by Wetzlinger et al. (2019) (cf. also Rüdiger-Wetzlinger et al. (2021)), for example, should therefore be employed.
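The simulation just described can be sketched as follows. Since the exact k3-scaled form (28) and the tuning rule (30) are stated only in the displayed equations, the sketch below uses the plain uniform robust exact differentiator structure ẏ1 = y2 + k1 ν1(f − y1), ẏ2 = k2 ν2(f − y1) with ν1, ν2 generated by Φ(x) = ⌊x⌉^{1/2} + ⌊x⌉^{3/2}; the gains k1 = 6, k2 = 4.5, the signal, the initial values, and the step size are taken from the text, but without the k3-scaling the convergence time will not exactly match the reported τ ≈ 0.32.

```python
import math

def sgnpow(y, p):
    return math.copysign(abs(y) ** p, y) if y != 0 else 0.0

def nu1(e):  # Phi(e) for Phi(x) = <x>^{1/2} + <x>^{3/2}
    return sgnpow(e, 0.5) + sgnpow(e, 1.5)

def nu2(e):  # 2 * Phi(e) * Phi'(e) = sign(e) + 4e + 3<e>^2
    if e == 0:
        return 0.0
    return math.copysign(1.0, e) + 4.0 * e + 3.0 * sgnpow(e, 2.0)

f    = lambda t: 0.75 * math.cos(t) + 0.0025 * math.sin(10.0 * t) + t
fdot = lambda t: -0.75 * math.sin(t) + 0.025 * math.cos(10.0 * t) + 1.0

k1, k2 = 6.0, 4.5        # gains from the text (gamma = 4.5 L, L = 1)
Ts, t_end = 1e-4, 3.0    # forward Euler step size from the text
y1, y2, t = 0.0, 0.0, 0.0

while t < t_end:
    e = f(t) - y1                       # measured signal minus estimate
    y1 += Ts * (y2 + k1 * nu1(e))       # forward Euler step of (2a)-like dynamics
    y2 += Ts * (k2 * nu2(e))
    t += Ts

# After the transient, y2 tracks the exact derivative fdot(t).
assert abs(y2 - fdot(t)) < 0.2
```

As noted in the text, such a plain Euler scheme is used here only for simplicity and does not guarantee global stability in general.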
Since B is computed in (29a) by solving the integral exactly, Theorem 4.11 may be applied to show that the assigned bound exceeds the worst-case convergence time by no more than a factor of four; this can also be seen in the simulation.

Fig. 1. Differentiation of f(t) = 0.75 cos(t) + 0.0025 sin(10t) + t with L = 1 and desired convergence time bound T = 1 by using the proposed differentiator with parameters k1 = 6, k2 = 4.5, k3 ≈ 4.182, initial values y1(0) = y2(0) = 0, and DGF Φ(x) = ⌊x⌉^{1/2} + ⌊x⌉^{3/2}, along with the actual convergence time τ ≈ 0.32, obtained using forward Euler discretization with discretization step size Ts = 10⁻⁴.

Fig. 2. Comparison of the differentiator of Fig. 1 to the STA (i.e., the robust exact differentiator), with additional, uniformly bounded random measurement noise.

Fig. 2 compares the steady-state differentiation error magnitude achieved using the proposed approach to that of the robust exact differentiator (i.e., the STA), using the same simulation settings with additional, uniformly bounded measurement noise. As noise, a uniform random number independently sampled with step size Ts is used. For small noise, the behaviors coincide, as shown in Theorem 3.3, and for vanishing noise they are determined by the discretization. Only for very large noise does the performance of the fixed-time differentiator eventually deteriorate, due to the effectively larger gain that is necessary for achieving fixed-time convergence.

Exponential Differentiator Generating Function
As another example, consider an exponential DGF, which generates the corresponding differentiator via (2) and (3). For this DGF, tight admissibility constants are obtained, and using these constants and Proposition 4.7, the second row of Table 1 is obtained. Fig. 3 depicts convergence times obtained from a simulation for both DGFs when differentiating functions f(t) with different initial slopes and sinusoidal frequencies. Simulation settings and parameters were selected as before, but with the re-tuned parameter k3 ≈ 4.303 for the exponential DGF to maintain T = 1 as a global convergence time bound. The variation of the initial slope corresponds to a variation of the error system's initial condition x2(0) = ḟ(0), while x1(0) = 0.
One can see that the convergence time remains bounded by T for all considered functions.

Convergence Time Function
This section studies the convergence time of system (7) as a function of the initial state. First, this function is derived for the unperturbed case and for any DGF (even non-admissible ones) in the form of a convergent improper integral. For admissible DGFs, a convergence time bound for the perturbed case is then shown.
Lemma 6.1 Consider a DGF Φ and let k3 ∈ R_{>0}. Then, for every ε > 0 there exist constants C, D ∈ R_{>0} such that Ψ as defined in (35) satisfies the inequalities (36) for all z ∈ [−ε, ε]. Moreover, if Φ is an admissible DGF, then (36) holds for all z, with C, D as in Definition 2.

Computation for the Unperturbed Case
Consider system (7) without perturbation, i.e., with f̈ = 0. It is first shown that the corresponding convergence time function may be represented as an improper integral, which is finite for all positive parameters k and all DGFs Φ that satisfy Definition 1.
Following the ideas presented in Cruz-Zavala et al. (2011), system (7) with f̈ = 0 and z = g(x) may be written as (37). As pointed out in Seeber et al. (2018), a linear system dz/dτ = Az is obtained with respect to a new time variable τ satisfying Ψ′(z1) dτ = 2 dt (this time-scaling idea was first employed for the STA in Moreno and Guzman, 2011). Therefore, one may intuitively expect to obtain the convergence time by integrating dt along the trajectories of this linear system.
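For the special case Φ(x) = ⌊x⌉^{1/2} with the k3-scaling suppressed, this time-scaling argument can be written out explicitly (a hedged reconstruction following Moreno and Guzman (2011) and Seeber et al. (2018); constant factors in A may differ from the convention Ψ′(z1) dτ = 2 dt used here):

```latex
\dot x_1 = -k_1 \lfloor x_1 \rceil^{1/2} + x_2, \qquad
\dot x_2 = -k_2 \operatorname{sign}(x_1), \\[4pt]
z_1 = \lfloor x_1 \rceil^{1/2}, \qquad z_2 = x_2, \qquad
\mathrm{d}\tau = \frac{\mathrm{d}t}{2\,|x_1|^{1/2}}
\quad\Longrightarrow\quad
\frac{\mathrm{d}z}{\mathrm{d}\tau}
  = \underbrace{\begin{bmatrix} -k_1 & 1 \\ -2k_2 & 0 \end{bmatrix}}_{=:A}\, z, \\[4pt]
\det(\lambda I - A) = \lambda^2 + k_1 \lambda + 2 k_2 .
```

The eigenvalues of this A are real exactly when k1² ≥ 8 k2, which is the condition appearing in Propositions 4.1 and 4.7.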
The following technical result, which is proven in Section 9, suggests that this is indeed the case.
Lemma 6.2 Let k = (k1, k2, k3) ∈ R³_{>0}, and consider a DGF Φ. Then, the function V : R² → R_{≥0} given by (38) is locally bounded, continuous, and positive definite. For x1 ≠ 0, it is furthermore differentiable, and its time derivative V̇ along the trajectories of system (7) is given by (39).

The fact that V̇ is equal to minus one for f̈ = 0 suggests that the function V is the unperturbed system's convergence time function. Since V is not everywhere differentiable, however, the following technical lemma is required, which is obtained following ideas presented in Seeber et al. (2018) and Haimovich and De Battista (2019), and is proven in Section 9.
Lemma 6.3 Let k ∈ R³_{>0}, L ∈ R_{≥0}, c ∈ R_{>0}, and consider a DGF Φ. Let V : R² → R be continuous and positive definite, and suppose that the time derivative V̇ along the trajectories of system (7) is well-defined and bounded by V̇(t, x) ≤ −c for all t, x with x1 ≠ 0. Then, the system's convergence time T_{Φ,k,L} is bounded by (40), and equality holds if V̇(t, x) = −c for all t, x with x1 ≠ 0.
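The essence of Lemma 6.3 is the standard comparison argument; ignoring the technicality that V fails to be differentiable on the set x1 = 0 (which is precisely what the lemma takes care of), integrating the differential inequality along a trajectory gives:

```latex
\dot V(t, x(t)) \le -c
\;\Longrightarrow\;
0 \le V\bigl(x(t)\bigr) \le V(x_0) - c\,t \quad \text{while } x(t) \neq 0
\;\Longrightarrow\;
T_{\Phi,k,L}(x_0) \le \frac{V(x_0)}{c},
```

with equality when V̇(t, x) = −c holds exactly, since V then decreases at the constant rate c until the origin is reached.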
Using this result, the following proposition may now be proven.

Bound for the Perturbed Case
Focusing on admissible DGFs, an upper bound for the convergence time function in the perturbed case is now derived. To that end, the following lemma will be used, which expresses the maximum Lipschitz constant L̄ defined in Proposition 4.5 in the form of an improper integral.

Lemma 6.5 (Seeber (2020a, Theorem 3)) Let k1, k2, D ∈ R_{>0}. Then, L̄ defined in (16) is given by (41).

Using these auxiliary lemmas, an upper bound for the convergence time function in the perturbed case is obtained.
PROOF. Let V be the continuous, positive definite function defined in Lemma 6.2. For x1 ≠ 0, this function is differentiable, and using Lemmas 6.2, 6.1, and 6.5, one finds that its time derivative along the trajectories of system (7) is bounded as in (42); using Proposition 6.4 to see that V(x) = T_{Φ,k,0}(x) then concludes the proof.

Global Convergence Time
This section investigates properties and bounds of the convergence time function's smallest upper bound, the global convergence time T̂(Φ, k, L). First, two scaling properties are shown. Then, lower and upper bounds are derived. As before, the quantities introduced in (35) are used throughout this section.

Scaling Properties
In the following, the scaling properties in Propositions 4.4 and 4.5 are shown. The former utilizes a homogeneity-like scaling property of state, time, and parameters of system (7). The latter is an immediate consequence of Proposition 6.6.
PROOF of Proposition 4.5 Taking the supremum with respect to x on each side of (42) yields relation (17), which proves the proposition.

Lower Bound
Explicitly solving the integral in (40) is not possible in general. An interesting special case is obtained if g(x) is an eigenvector of A with eigenvalue λ. In this case, one has e1^T e^{Aτ} g(x) = c e^{λτ} for some c ∈ R. As the following lemma shows, the integral in (40) may then be simplified further. Its proof is given in Section 9.
Lemma 7.1 Let k3 ∈ R_{>0}, λ ∈ R_{<0}, c ∈ R, and consider a DGF Φ. Then, the identity (44) holds.

It should be highlighted that the use of this lemma does not require knowledge of the DGF's inverse Φ^{−1}. Proposition 4.1, which provided the original motivation for the definition of an admissible DGF, may now be proven.
PROOF of Proposition 4.1 Since k1² ≥ 8 k2, the matrix A has real eigenvalues. Denote by D the set of all x ∈ R² such that g(x) is an eigenvector of A corresponding to the largest eigenvalue. Using Proposition 6.4 and Lemma 7.1, one then obtains the lower bound (13). This concludes the proof.
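As a hedged numerical companion to this proof, the sketch below computes the eigenvalues of the matrix A = [[−k1, 1], [−2k2, 0]] that arises when the time-scaling construction is carried out for the STA special case Φ(x) = ⌊x⌉^{1/2} (the paper's A may carry different constant factors or a k3-scaling). Notably, for the gains k1 = 6, k2 = 4.5 from Section 5, the condition k1² ≥ 8 k2 holds with equality, and A has a double eigenvalue:

```python
import math

def eigenvalues(k1, k2):
    # Eigenvalues of A = [[-k1, 1], [-2*k2, 0]]: roots of
    # lambda**2 + k1*lambda + 2*k2 = 0, real iff k1**2 >= 8*k2
    # (the condition assumed in Propositions 4.1 and 4.7).
    disc = k1 * k1 - 8.0 * k2
    if disc < 0:
        raise ValueError("k1**2 >= 8*k2 required for real eigenvalues")
    r = math.sqrt(disc)
    # returned as (lambda_2, lambda_1) with lambda_2 <= lambda_1 < 0
    return (-k1 - r) / 2.0, (-k1 + r) / 2.0

lam2, lam1 = eigenvalues(6.0, 4.5)   # gains from Section 5: k1**2 = 8*k2 exactly
assert lam1 == lam2 == -3.0
```

The largest eigenvalue λ1 governs the slowest decay in the z-coordinates and is hence the relevant one for the lower convergence time bound of Proposition 4.1.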

Upper Bound
Upper bounds for the global convergence time are now studied. According to the results in the previous sections, it is sufficient to consider such bounds for L = 0, since bounds for L > 0 can then be obtained using Proposition 4.5. Using Proposition 6.4, the expression T̂(Φ, k, 0) to be bounded from above is obtained as a supremum of improper integrals. As a first step towards finding this supremum, the following lemma, which is proven in Section 9, is given. It allows the range of z to be restricted to a compact subset of R² by extending the domain of integration. If the eigenvalues of A are real-valued and distinct, the integrand may furthermore be simplified.
In order to obtain global bounds, the following lemma is used, which is essentially an extension of Lemma 7.1 and is proven in Section 9.
Lemma 7.3 Let k3 > 0, consider an admissible DGF Φ and a (possibly unbounded) interval (a, b) ⊆ R, and let h : (a, b) → R be a continuously differentiable function that satisfies suitable sign conditions for some α ∈ R.

Using these results, Proposition 4.7 may now be proven.
PROOF of Proposition 4.7: The second case, k1² = 8 k2, in (19) is obtained as the limit of the first case as k1² tends to 8 k2. Hence, it is sufficient to consider the case k1² > 8 k2 in the following. In this case, one has T̂(Φ, k, 0) = sup_{a∈R} T_a according to Lemma 7.2, with λ2 < λ1 as in (57). Consider first the case a ≥ 0.
Since Ψ′(−z) = Ψ′(z), one may apply Lemma 7.3 with h(τ) = −a e^{λ1 τ} − a e^{λ2 τ}. This function satisfies h(τ) < 0 and the conditions of Lemma 7.3 for all τ ∈ R, because λ2 < λ1 < 0; one thus obtains an upper bound for T_a in this case. For the case a < 0, let b = −a > 0 and consider the function h̃_b. On the interval I1 = (−∞, 0), this function is strictly increasing and satisfies, in particular, h̃_b(τ) < 0 for all τ ∈ I1. It furthermore has a single inflection point τ_inf, i.e., h̃″_b(τ_inf) = 0. On the interval I3 = (τ_inf, ∞), one has h̃″_b(τ) > 0, and consequently the required inequalities hold for all τ ∈ I3. Therefore, Lemma 7.3 may be used with h = h̃_b or h = −h̃_b on the intervals I1 or I3, respectively, to bound the integral in (60) from above. On the remaining interval I2 = (0, τ_inf), no suitable inequality may be found, because h̃_b is positive while h̃′_b changes sign. According to Lemma 6.1, the integrand is bounded by |Ψ′(z)| ≤ C k3^{−1} on this finite interval, however, which together with the use of Lemma 7.3 on I1 and I3 yields the bound (62). The proof is concluded by noting that (62) implies (68).

Conclusions and Outlook
A class of fixed-time convergent differentiators was proposed for differentiating an arbitrary signal with Lipschitz continuous time derivative in a predefined finite time. The differentiators are parameterized using a scalar differentiator generating function (DGF) and three scalar parameters.
Admissibility conditions for the DGF were given, and proper selection of the DGF was shown to yield existing differentiators, such as the uniform robust exact differentiator, as special cases. The proposed tuning procedure allows any predefined convergence time bound to be assigned by computing the three scalar differentiator parameters using a simple tuning rule. The assigned bound can furthermore be made arbitrarily tight by appropriate selection of two tradeoff parameters that appear in the tuning rule.
For functions with constant derivative, the differentiator's convergence time was computed in the form of an improper integral. It was shown that maximizing this integral over a compact set yields the actual global convergence time, and an upper bound for it was derived in analytic form.
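To illustrate the structure of this computation numerically (not the paper's actual formulas), the following sketch approximates an improper-integral convergence time T_a by truncated trapezoidal quadrature and maximizes it over a compact parameter set by grid search; the integrand exp(−(2 + a)τ) is a hypothetical placeholder whose exact supremum over a ∈ [−1, 1] equals 1.

```python
import math

# Hedged numerical sketch: the convergence time is written as an improper
# integral T_a, and the global convergence time is sup of T_a over a compact
# parameter set. The integrand below is an illustrative placeholder only.

def improper_integral(integrand, step=1e-2, tail_tol=1e-12, t_max=100.0):
    """Truncated trapezoidal approximation of the integral of integrand over [0, inf)."""
    total, t = 0.0, 0.0
    while t < t_max:
        f0, f1 = integrand(t), integrand(t + step)
        total += 0.5 * (f0 + f1) * step
        if abs(f1) < tail_tol:  # integrand has decayed; truncate the tail
            break
        t += step
    return total

def T_a(a):
    # placeholder integrand standing in for the paper's T_a
    return improper_integral(lambda t: math.exp(-(2.0 + a) * t))

# maximize over the compact set a in [-1, 1] by grid search
grid = [-1.0 + 2.0 * i / 40 for i in range(41)]
T_bound = max(T_a(a) for a in grid)
print(T_bound)  # close to 1.0, attained at a = -1
```

In practice one would replace the placeholder integrand by the actual expression derived in the paper and, if needed, refine the grid search by a scalar optimization routine.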
Future research may study the discrete-time implementation of the differentiator and further investigate its properties in the presence of large measurement noise. Furthermore, possibilities may be explored for solving the obtained optimization problem in closed form and for extending the proposed differentiator to higher-order derivatives.

9.1 Proof of Lemma 3.1
To see that the limits hold pointwise, first note that L'Hôpital's rule may be applied because, due to items (ii), (iii), and (iv) of Definition 1, Φ(x) and Φ^{-1}(x) are continuously differentiable and tend to zero at the origin. Using this relation and applying L'Hôpital's rule again yields the claimed limits, where the fact that ν_2 is odd is also used.
Uniformity of the limit (70), i.e., (72), follows from pointwise convergence. To see uniformity of the other limit, consider the derivative g_α' of the relevant function family g_α(x) := α^{-2} ν_1(α^2 x)^2. According to item (ii) of Definition 1, ν_2 is continuously differentiable on R \ {0}. Since ν_2 also stays bounded near the origin due to (70), g_α'(x) is uniformly bounded with respect to x and α on all compact subsets of R × R. For every x, g_α(x) is moreover uniformly bounded with respect to α, due to continuity for α ∈ R \ {0} and (71). Hence, the function family g_α is uniformly equicontinuous and bounded on compact sets, which implies uniform convergence of g_α to its pointwise limit.
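The final step uses a standard fact; a sketch under the stated equicontinuity and boundedness assumptions: given ε > 0, choose a finite δ-net x_1, …, x_m of the compact set such that |g_α(x) − g_α(x_i)| < ε whenever |x − x_i| < δ, uniformly in α. Then, denoting the pointwise limit by g,

```latex
|g_\alpha(x) - g(x)|
\;\le\; |g_\alpha(x) - g_\alpha(x_i)|
      + |g_\alpha(x_i) - g(x_i)|
      + |g(x_i) - g(x)|
\;\le\; 3\varepsilon
```

for α sufficiently close to its limit value, because only the finitely many points x_i of the net need to be controlled by pointwise convergence.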
9.2 Proof of Lemma 6.1
By differentiating Ψ = Φ_{k_3}^{-1} twice, one obtains expressions for Ψ' and Ψ''. Due to items (ii) and (iii) of Definition 1, these functions are continuous on R \ {0}. Define Ψ'(0) = 0 and Ψ''(0) = 2; then, due to items (iv) and (v) of Definition 1, respectively, it follows that Ψ' and Ψ'' are continuous at 0 and hence on R. In consequence, Ψ' and Ψ'' are both uniformly bounded on every compact subset of R.
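For reference, with Ψ = Φ_{k_3}^{-1}, the two differentiations referred to above follow the standard inverse-function rules (a sketch; the scaling by k_3 is absorbed into Φ_{k_3}):

```latex
\Psi'(z) = \frac{1}{\Phi_{k_3}'\bigl(\Psi(z)\bigr)},
\qquad
\Psi''(z) = -\frac{\Phi_{k_3}''\bigl(\Psi(z)\bigr)}{\Phi_{k_3}'\bigl(\Psi(z)\bigr)^{3}},
\qquad z \neq 0.
```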

9.3 Proof of Lemma 6.2
Consider the function h(x, τ) = e_1^T e^{Aτ} g(x). The positive definiteness of V follows from the facts that Ψ is positive definite and that h(x, τ) cannot be zero for all τ ≥ 0 unless x = 0. To show that V(x) is well-defined and locally bounded, consider any x ∈ R^2. Since A is a Hurwitz matrix, the function h(x, ·) is uniformly bounded and converges to zero. By item (v) of Definition 1, for every ε > 0 one has |Ψ'(z)| ≤ (2 + ε)|z| for sufficiently small values of |z|; in particular, let δ_0 > 0 be such that this bound holds for all |z| ≤ δ_0. Since the integral ∫_0^∞ |h(x, τ)| dτ converges, V(x) is finite.
Continuity of V on R^2 will next be established by showing uniform continuity on every closed ball B_r = {x ∈ R^2 : ‖x‖ ≤ r}. Note that (77) actually shows that lim_{z→0} Ψ'(z) = 0 = Ψ'(0), so that Ψ' is continuous at 0. In addition, from Definition 1 and (35b), it then follows that Ψ' is continuous everywhere. Since A is Hurwitz, the function h has the following property: there exists λ > 0 such that for every r ≥ 0 there exists M = M(r) ≥ 0 with |h(x, τ)| ≤ M e^{−λτ} for all x ∈ B_r and all τ ≥ 0.
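The exponential bound above can be illustrated numerically; the following sketch uses an assumed, illustrative Hurwitz matrix A = [[0, 1], [−2, −3]] (eigenvalues −1 and −2) and g(x) = x, neither of which is taken from the paper, and verifies |h(x, τ)| ≤ M e^{−λτ} on a grid of initial conditions and times.

```python
import math

# Hedged illustration of the decay property used in the proof: for a Hurwitz
# matrix A there exist lambda > 0 and M(r) >= 0 such that
# |h(x, tau)| = |e_1^T e^{A tau} x| <= M e^{-lambda tau} for all x in B_r.
# A = [[0, 1], [-2, -3]] is an assumed example; g(x) = x for simplicity.

def h(x, tau):
    """e_1^T e^{A tau} x, computed from the modal decomposition of A."""
    c1 = 2.0 * x[0] + x[1]      # coefficient of the e^{-tau} mode
    c2 = -(x[0] + x[1])         # coefficient of the e^{-2 tau} mode
    return c1 * math.exp(-tau) + c2 * math.exp(-2.0 * tau)

r = 2.0        # radius of the ball B_r
lam = 1.0      # decay rate: magnitude of the slowest eigenvalue
M = 5.0 * r    # |c1| + |c2| <= 3r + 2r = 5r for x in B_r

# check the bound on a grid of initial conditions on the sphere of radius r
ok = all(
    abs(h((r * math.cos(phi), r * math.sin(phi)), tau)) <= M * math.exp(-lam * tau)
    for phi in [2 * math.pi * i / 36 for i in range(36)]
    for tau in [0.1 * j for j in range(100)]
)
print(ok)  # True
```

The constants follow from the modal decomposition: |h(x, τ)| ≤ (|c1| + |c2|) e^{−τ}, and both modal coefficients are bounded linearly in ‖x‖.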
To show differentiability for x_1 ≠ 0, note that Ψ' is uniformly bounded on any compact subset of R according to Lemma 6.1, for any DGF Φ. Since, for any given x ∈ R^2, h(x, τ) stays in such a compact subset, one obtains ∂V/∂x = ∫_0^∞ Ψ'(e_1^T e^{Aτ} g(x)) e_1^T e^{Aτ} (∂g/∂x) dτ, where Lebesgue's dominated convergence theorem may be used to show that differentiation and improper integration may be interchanged, because Ψ'(h(x, τ)) is uniformly bounded with respect to τ. Using this relation to compute the time derivative V̇ of V along the trajectories of (7) for f = 0 yields V̇ = ∫_0^∞ [Ψ'(e_1^T e^{Aτ} g(x)) / Ψ(e_1^T g(x))] e_1^T e^{Aτ} A g(x) dτ = ∫_0^∞ d/dτ [Ψ(e_1^T e^{Aτ} g(x)) / Ψ(e_1^T g(x))] dτ = −1.
Using this result, the claim follows by noting that V̇ depends affinely on f and by using (7) and (80) to compute it.
9.4 Proof of Lemma 6.3
Consider any x_0 ∈ R^2 and denote by τ := T^L_{Φ,k}(x_0) the corresponding convergence time. Assume to the contrary that τ > c^{−1} V(x_0). Then, there exists ε > 0 such that also τ − ε > c^{−1} V(x_0). Let x(t) be a trajectory of the system that satisfies x(t) ≠ 0 for all t ≤ τ − ε, and consider this trajectory on the interval I = [0, τ − ε]. Since x_1(t) = 0 implies ẋ_1(t) = x_2(t) ≠ 0 for all t ∈ I, there is only a finite number of zero crossings of x_1 on the interval I, and V̇(x(t)) exists for all but finitely many t ∈ I. It is therefore Henstock–Kurzweil integrable (Bogachev, 2007, Theorem 5.7.7) and V(x(t)) = V(x_0) + ∫_0^t V̇(x(s)) ds for all t ∈ I. Since V̇ is furthermore bounded from above, it is also Lebesgue integrable (Bogachev, 2007, Corollary 5.7.11) and the integrals' values coincide (cf. Bogachev, 2007, Theorem 5.7.14). Therefore, one has V(x(τ − ε)) ≤ V(x_0) − c(τ − ε) < 0, which contradicts the fact that V is positive definite. Therefore, τ ≤ c^{−1} V(x_0).

9.5 Proof of Lemma 7.2
To show (55), note that A is Hurwitz and, thus, for every z ∈ R^2 \ {0} there exists σ ∈ R (depending on z) such that v = e^{−Aσ} z satisfies ‖v‖ = 1, i.e., v ∈ D. Furthermore, the function mapping z to σ is surjective. Hence, (54)
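The existence of such a σ can be illustrated numerically: since A is Hurwitz, ‖e^{−Aσ} z‖ tends to 0 as σ → −∞ and grows without bound as σ → +∞, so a σ with ‖e^{−Aσ} z‖ = 1 exists by continuity and can be located by bisection. The matrix A = [[0, 1], [−2, −3]] below is an assumed, illustrative Hurwitz example, not taken from the paper.

```python
import math

# Hedged sketch: find sigma with |e^{-A sigma} z| = 1, i.e. rescale the
# trajectory through z onto the unit sphere D. A = [[0, 1], [-2, -3]] is an
# assumed Hurwitz matrix with eigenvalues -1 and -2.

def exp_At_z(t, z):
    """e^{A t} z via the modal decomposition of A."""
    c1 = 2.0 * z[0] + z[1]
    c2 = -(z[0] + z[1])
    return (c1 * math.exp(-t) + c2 * math.exp(-2.0 * t),
            -c1 * math.exp(-t) - 2.0 * c2 * math.exp(-2.0 * t))

def norm_after(sigma, z):
    # v = e^{-A sigma} z corresponds to flowing for time -sigma
    v = exp_At_z(-sigma, z)
    return math.hypot(v[0], v[1])

def find_sigma(z):
    """Bisection for sigma with |e^{-A sigma} z| = 1."""
    lo, hi = -50.0, 50.0  # norm < 1 at lo, norm > 1 at hi for generic z
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_after(mid, z) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

z = (3.0, -1.0)
sigma = find_sigma(z)
print(round(norm_after(sigma, z), 6))  # 1.0
```

Bisection converges here because the bracket endpoints keep norms on opposite sides of 1; continuity of σ ↦ ‖e^{−Aσ} z‖ then forces the limit point onto the unit sphere.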