Guaranteed Trajectory Tracking under Learned Dynamics with Contraction Metrics and Disturbance Estimation

This paper presents an approach to trajectory-centric learning control based on contraction metrics and disturbance estimation for nonlinear systems subject to matched uncertainties. The approach uses deep neural networks to learn uncertain dynamics while still providing guarantees of transient tracking performance throughout the learning phase. Within the proposed approach, a disturbance estimation law is adopted to estimate the pointwise value of the uncertainty, with pre-computable estimation error bounds (EEBs). The learned dynamics, the estimated disturbances, and the EEBs are then incorporated in a robust Riemann energy condition to compute the control law that guarantees exponential convergence of actual trajectories to desired ones throughout the learning phase, even when the learned model is poor. On the other hand, with improved accuracy, the learned model can help improve the robustness of the tracking controller, e.g., against input delays, and can be incorporated to plan better trajectories with improved performance, e.g., lower energy consumption and shorter travel time. The proposed framework is validated on a planar quadrotor example.


I. INTRODUCTION
Robotic and autonomous systems often exhibit nonlinear dynamics and operate in uncertain and disturbed environments. Planning and executing a trajectory is one of the most common ways for an autonomous system to achieve a mission. However, the presence of uncertainties and disturbances, together with the nonlinear dynamics, brings significant challenges to the safe planning and execution of a trajectory. Built upon contraction theory and disturbance estimation, this paper presents a trajectory-centric learning control approach that allows for using different model learning tools to learn uncertain dynamics while providing guaranteed tracking performance, in the form of exponential trajectory convergence, throughout the learning phase. Our approach hinges on control contraction metrics (CCMs) and uncertainty estimation.

A. Related Work
Control design methods for uncertain systems can be roughly classified into adaptive/robust approaches and learning-based approaches. Robust approaches such as H∞ control [1], sliding mode control [2], and robust/tube model predictive control (MPC) [3], [4] usually consider parametric uncertainties or bounded disturbances and design controllers to achieve certain performance despite the presence of such uncertainties. Disturbance-observer (DOB) based control and related methods, such as active disturbance rejection control (ADRC) [5], usually lump parametric uncertainties, unmodeled dynamics, and external disturbances together as a "total disturbance", estimate it via an observer such as a DOB or an extended state observer (ESO) [5], and then compute control actions to compensate for the estimated disturbance [6]. On the other hand, adaptive control methods such as model reference adaptive control (MRAC) [7] and L1 adaptive control [8] rely on online adaptation to estimate parametric or nonparametric uncertainties and use the estimated values in the control design to provide stability and performance guarantees. While significant progress has been made in the linear setting, transient performance guarantees for trajectory tracking of nonlinear uncertain systems remain difficult to quantify analytically, yet such quantification is required for the safety guarantees of robotic and autonomous systems.
Contraction theory [9] provides a powerful tool for analyzing general nonlinear systems in a differential framework and is focused on studying the convergence between pairs of state trajectories towards each other, i.e., incremental stability. It has recently been extended for constructive control design, e.g., via control contraction metrics (CCMs) [10], [11]. Compared to incremental Lyapunov function approaches for studying incremental stability, contraction metrics present an intrinsic characterization of incremental stability (i.e., invariant under change of coordinates); additionally, the search for a CCM can be achieved using sum of squares (SOS) optimization or semidefinite programming [12], and DNN optimization [13], [14]. For nonlinear uncertain systems, CCM has been integrated with adaptive and robust control to address parametric [15] and non-parametric uncertainties [16], [17]. The issue of bounded disturbances in contraction-based control has been tackled through robust analysis [12] or synthesis [18], [19]. For more work related to contraction theory and CCM for nonlinear stability analysis and control synthesis, readers can refer to a recent tutorial paper [20] and the references therein.

[Fig. 1: Proposed control architecture incorporating learned dynamics.]
On the other hand, recent years have witnessed an increased use of machine learning (ML) to learn dynamics models, which are then incorporated into control-theoretic approaches to generate the control law. For model-based learning control with safety and/or transient performance guarantees, most of the existing research relies on quantifying the learned model error and robustly handling such an error in the controller design or analyzing its effect on the control performance [21], [22], [23]. As a result, researchers have largely relied on Gaussian process regression (GPR) to learn uncertain dynamics, due to its inherent ability to quantify the learned model error [21], [22]. Additionally, in almost all the existing research, the control performance is directly determined by the quality of the learned model, i.e., a poorly learned model naturally leads to poor control performance. Deep neural networks (DNNs) were used to approximate state-dependent uncertainties in adaptive control design in [24], [25]. However, these results provide at most asymptotic (i.e., no transient) performance guarantees and investigate pure control problems without considering planning. Moreover, they either consider linear nominal systems or leverage feedback linearization to cancel the (estimated) nonlinear dynamics, which can only be done for fully actuated systems. In contrast, this paper considers the planning-control pipeline and does not try to cancel the nonlinearity, thereby allowing the systems to be underactuated. In [26], [23], the authors used DNNs for batch-wise learning of uncertain dynamics from scratch; however, good tracking performance cannot be achieved when the learned uncertainty model is poor.

Statement of Contributions:
We propose a contraction-based learning control architecture for nonlinear systems with matched uncertainties (depicted in Fig. 1). The proposed architecture allows for using different ML tools, e.g., DNNs, for learning the uncertainties while guaranteeing exponential trajectory convergence under certain conditions throughout the learning phase. It leverages a disturbance estimator with a pre-computable estimation error bound (EEB) and a robust Riemann energy condition to compute the control signal. It is empirically shown that learning can help improve the robustness of the controller and facilitate better trajectory planning. We demonstrate the efficacy of the proposed approach using a planar quadrotor example.
This work builds on [16] with several key differences. The authors of [16] introduce a robust tracking controller utilizing CCMs and disturbance estimation without involving model learning. In contrast, this work adapts the controller to handle scenarios where machine learning tools are used to learn the unknown dynamics, offering tracking performance guarantees throughout the learning phase. Additionally, this work empirically showcases the advantages of integrating learning to improve trajectory planning and strengthen the robustness of the closed-loop system, aspects that are not explored in [16].
Notations. Let R^n and R^{m×n} denote the n-dimensional real vector space and the space of real m by n matrices, respectively. 0 and I denote a zero matrix and an identity matrix of compatible dimensions, respectively. ∥•∥_∞ and ∥•∥ denote the ∞-norm and 2-norm of a vector/matrix, respectively. Given a vector y, let y_i denote its ith element. For a vector y ∈ R^n and a matrix-valued function M(x), let ∂_y M ≜ Σ_{i=1}^n (∂M/∂x_i) y_i denote the directional derivative of M(x) along y. For symmetric matrices P and Q, P > Q (P ≥ Q) means P − Q is positive definite (semidefinite). ⟨X⟩ stands for X + X^⊤. Finally, we use ⊖ to denote the Minkowski set difference.

II. PRELIMINARIES AND PROBLEM SETTING
Consider a nonlinear control-affine system

ẋ = f(x) + B(x)(u + d(x)), (1)

where x(t) ∈ X ⊂ R^n and u(t) ∈ U ⊂ R^m are the state and input vectors, respectively, f : R^n → R^n and B : R^n → R^{n×m} are known functions that are locally Lipschitz, and d : R^n → R^m is an unknown function denoting the matched model uncertainties. B(x) is assumed to have full column rank for all x ∈ X. Additionally, X is a compact set that includes the origin, and U is the control constraint set defined as

U ≜ {u ∈ R^m : u̲_i ≤ u_i ≤ ū_i, i = 1, ..., m},

where u̲_i and ū_i represent the lower and upper bounds of the ith control channel, respectively.
Assumption 1. There exist known positive constants L_B, L_d and b_d such that for any x, y ∈ X, the following holds:

∥B(x) − B(y)∥ ≤ L_B ∥x − y∥, ∥d(x) − d(y)∥ ≤ L_d ∥x − y∥, ∥d(x)∥ ≤ b_d.

Remark 2. Assumption 1 does not assume that the system states stay in X (and thus are bounded). We will prove the boundedness of x later in Theorem 1. Assumption 1 merely indicates that d(x) is locally Lipschitz with a known bound on the Lipschitz constant and is bounded by an a priori known constant in the compact set X.
Assumption 1 is not very restrictive, as the local Lipschitz bound of d(x) in X can be conservatively estimated from prior knowledge. Additionally, given the local Lipschitz constant bound L_d and the compact set X, we can always derive a uniform bound using the Lipschitz property if a bound on d(x) at some point in X is known. For example, supposing ∥d(0)∥ ≤ b⁰_d, we have ∥d(x)∥ ≤ b⁰_d + L_d max_{x∈X} ∥x∥. In practice, leveraging prior knowledge about the system can result in a tighter bound than the one based on Lipschitz continuity. Thus, we assume a uniform bound. Under Assumption 1, it will be shown later (in Section II-D) that the pointwise value of d(x(t)) at any time t can be estimated with pre-computable estimation error bounds (EEBs).
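The Lipschitz-based uniform bound above is simple to evaluate numerically. The sketch below computes b_d = b⁰_d + L_d max_{x∈X} ∥x∥ for a hypothetical box-shaped X; all numerical values are illustrative, not the paper's.

```python
import numpy as np

# Sketch: uniform bound ||d(x)|| <= b_d0 + L_d * max_{x in X} ||x|| over a
# compact box X. All numerical values here are illustrative assumptions.
L_d = 4.0                         # Lipschitz-constant bound for d on X
b_d0 = 0.5                        # assumed bound ||d(0)|| <= b_d0
X_lo, X_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])   # box X = [-1,1]^2

# for a box, max ||x|| is attained at one of the vertices
vertices = [np.array([vx, vy]) for vx in (X_lo[0], X_hi[0]) for vy in (X_lo[1], X_hi[1])]
max_norm = max(np.linalg.norm(v) for v in vertices)

b_d = b_d0 + L_d * max_norm       # uniform bound on ||d(x)|| over X
```

In practice a tighter bound from prior system knowledge, as noted above, would replace this worst-case computation.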

A. Learning Uncertain Dynamics
Given a collection of data points {(x_i, d_i)}_{i=1}^N, with N denoting the number of data points, the uncertain function d(x) can be learned using ML tools. For demonstration purposes, we choose to use DNNs, due to their significant potential in dynamics learning attributed to their expressive power, and the fact that they have rarely been explored for dynamics learning with safety and/or performance guarantees. Denoting the learned function as d̂(x) and the model error as d̃(x) ≜ d(x) − d̂(x), the actual dynamics (1) can be rewritten as

ẋ = f(x) + B(x)(u + d̂(x) + d̃(x)) = F_l(x, u) + B(x)d̃(x), (2)

where

F_l(x, u) ≜ f(x) + B(x)(u + d̂(x)). (3)

The learned dynamics can now be represented as

ẋ = F_l(x, u). (4)

Remark 3. The above setting includes the special case of no learning, corresponding to d̂(x) ≜ 0.
Note that the performance guarantees provided by the proposed framework are agnostic to the model learning tools used, as long as the following assumption can be satisfied.

Assumption 2. We are able to obtain a uniform error bound for the learned function d̂(x), i.e., we can compute a constant δ_d̃ such that

max_{x∈X} ∥d̃(x)∥ ≤ δ_d̃. (5)

Remark 4. Assumption 2 can be easily satisfied when using a broad class of ML tools. For instance, when Gaussian processes are used, a uniform error bound (UEB) can be computed using the approach in [27]. When using DNNs, we could use spectral-normalized DNNs (SN-DNNs) [28], which guarantee a known Lipschitz bound L_d̂ for the learned function d̂(x); together with Assumption 1, this yields, for any x* ∈ X,

∥d̃(x*)∥ ≤ ∥d̃(x_i)∥ + (L_d + L_d̂)∥x* − x_i∥,

where x_i is one of the N data points. The preceding inequality implies that (5) holds with δ_d̃ = max_{x*∈X} min_{i∈{1,...,N}} (∥d̃(x_i)∥ + (L_d + L_d̂)∥x* − x_i∥).
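The data-dependent bound in Remark 4 can be evaluated directly. The sketch below computes δ_d̃ = max over test points of min over data points of ∥d̃(x_i)∥ + (L_d + L_d̂)∥x* − x_i∥ on a 1-D grid; the data points, error values, and Lipschitz constants are hypothetical.

```python
import numpy as np

# Sketch of the uniform error bound from Remark 4 on a hypothetical 1-D
# example: delta = max_{x*} min_i ( err_i + (L_d + L_dhat) * |x* - x_i| ).
L_d, L_dhat = 4.0, 2.0                   # Lipschitz bounds for d and SN-DNN dhat
x_data = np.array([0.0, 0.5, 1.0])       # training inputs x_i
err_data = np.array([0.05, 0.02, 0.04])  # ||d(x_i) - dhat(x_i)|| at data points

x_star = np.linspace(0.0, 1.0, 101)      # grid over the compact set X
# pairwise bound: err_i + (L_d + L_dhat) * |x* - x_i|
pairwise = err_data[None, :] + (L_d + L_dhat) * np.abs(x_star[:, None] - x_data[None, :])
delta = pairwise.min(axis=1).max()       # uniform error bound over the grid
```

The bound is largest midway between data points, which makes the dependence on data coverage of X explicit.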

B. Problem Setting
The learned dynamics (4) (including the special case of d̂ = 0) can be incorporated in a motion planner or trajectory optimizer to plan a desired trajectory (x⋆(•), u⋆(•)) that minimizes a specific cost function. Suppose Assumptions 1, 2, and 3 hold. The focus of the paper is (i) designing a feedback controller to track the desired state trajectory x⋆(•) with guaranteed tracking performance despite the presence of the model error d̃(x), and (ii) empirically demonstrating the benefits of learning in improving robustness and reducing the cost associated with the actual trajectory. In the following, we present some preliminaries on CCMs and uncertainty estimation used to build our solution.

C. CCM for the Nominal Dynamics
CCM extends contraction analysis [9] to controlled dynamic systems, where the analysis simultaneously seeks a controller and a metric that characterizes the contraction properties of the closed-loop system [10]. Consider the nominal (uncertainty-free) system

ẋ = f(x) + B(x)u. (6)

According to [10], a symmetric matrix-valued function M(x) serves as a strong CCM for (6) in X if there exist positive constants α_1, α_2 and λ such that

α_1 I ≤ M(x) ≤ α_2 I, and δ_x^⊤ M(x)B(x) = 0 ⟹ δ_x^⊤ (∂_f M + ⟨M ∂f/∂x⟩ + 2λM) δ_x < 0, (7)

hold for all δ_x ≠ 0 and x ∈ X.

Assumption 3. There exists a strong CCM M(x) for the nominal system (6) in X, i.e., (7) holds with positive constants α_1, α_2 and λ.
Remark 5. Similar to the synthesis of Lyapunov functions, given the dynamics, a strong CCM can be systematically synthesized using convex optimization, more specifically, sum of squares programming [10], [12], [18].
Given a CCM M(x), a feasible trajectory (x⋆(•), u⋆(•)) satisfying the nominal dynamics (6), and the actual state x(t) at time t, the control signal can be constructed as follows [10], [12]. At any t > 0, compute a minimal-energy path (termed a geodesic) γ(•, t) connecting x(t) and x⋆(t), e.g., using the pseudospectral method [29]. Note that the geodesic is always a straight line segment if the metric is constant. Next, compute the Riemann energy of the geodesic, defined as E(x⋆(t), x(t)) = ∫_0^1 γ_s(s, t)^⊤ M(γ(s, t)) γ_s(s, t) ds, where γ_s ≜ ∂γ/∂s. Finally, by interpreting the Riemann energy as an incremental control Lyapunov function, we can construct a control signal u(t) such that

Ė(x⋆, x) ≤ −2λ E(x⋆, x), (8)

where Ė(x⋆, x) = 2γ_s(1)^⊤ M(x) ẋ − 2γ_s(0)^⊤ M(x⋆) ẋ⋆ with the dependence on t omitted for brevity, ẋ(t) is defined in (6), and ẋ⋆(t) = f(x⋆(t)) + B(x⋆(t))u⋆(t). In practice, one may want to compute u(t) with a minimal ∥u(t) − u⋆(t)∥ such that (8) holds, which can be achieved by setting u(t) = u⋆(t) + k(t, x⋆, x) with k(t, x⋆, x) obtained via solving a quadratic programming (QP) problem [10], [12]:

k(t, x⋆, x) = argmin_{k∈R^m} ∥k∥² subject to (8) with u = u⋆ + k, (9)

at each time t. The problem (9) is commonly referred to as the pointwise minimum-norm control problem and possesses an analytic solution [30]. The performance guarantees provided by the CCM-based controller can be summarized in the following lemma.
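For intuition, when the metric M is constant the geodesic is the straight line from x⋆ to x and the Riemann energy reduces to the quadratic form (x − x⋆)^⊤ M (x − x⋆). The sketch below (with a made-up metric and states) checks this against a discretization of the energy integral.

```python
import numpy as np

# Sketch: with a constant metric M, the geodesic between x* and x is the
# straight line gamma(s) = x* + s (x - x*), so gamma_s(s) = x - x* and the
# Riemann energy reduces to E = (x - x*)^T M (x - x*). M, x*, x are made up.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])            # constant positive-definite metric
x_star = np.array([0.0, 0.0])
x = np.array([1.0, -0.5])

gamma_s = x - x_star                  # constant tangent along the geodesic
s_grid = np.linspace(0.0, 1.0, 1001)
integrand = np.array([gamma_s @ M @ gamma_s for _ in s_grid])
# trapezoidal approximation of the energy integral over s in [0, 1]
E_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s_grid))
E_closed = gamma_s @ M @ gamma_s      # closed-form quadratic energy
```

For a state-dependent metric the geodesic must instead be computed numerically, e.g., via the pseudospectral method cited above.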
Lemma 1. [10] Suppose Assumption 3 holds for the nominal system (6) with positive constants α_1, α_2 and λ. Then, a control law satisfying (8) universally exponentially stabilizes the system (6), which can be expressed mathematically as

E(x⋆(t), x(t)) ≤ E(x⋆(0), x(0)) e^{−2λt}, (10)

which further implies

∥x(t) − x⋆(t)∥ ≤ √(α_2/α_1) ∥x(0) − x⋆(0)∥ e^{−λt}. (11)

The following lemma from [12] establishes a bound on the tracking control effort from solving (9), expressed in terms of λ̄(•), the largest eigenvalue of a matrix, and σ_{>0}(•), the smallest non-zero singular value.

D. Uncertainty Estimation with Computable Error Bounds
We leverage a disturbance estimator described in [16] to estimate the value of the uncertainty d(x(t)) at each time instant. More importantly, an estimation error bound can be pre-computed and systematically reduced by tuning a parameter of the estimator. The estimator comprises a state predictor and an update law. The state predictor is defined by

x̂̇(t) = f(x(t)) + B(x(t))u(t) + σ̂(t) − a x̃(t), (13)

where x̃(t) ≜ x̂(t) − x(t) denotes the prediction error and a > 0 is a scalar constant. The estimate σ̂(t) is given by a piecewise-constant update law:

σ̂(t) = σ̂(iT), t ∈ [iT, (i+1)T), σ̂(iT) = −(a/(e^{aT} − 1)) x̃(iT), (14)

where T is the estimation sampling time and i = 0, 1, 2, .... Finally, the value of d(x) at time t is estimated as

d̄(t) ≜ B†(x(t)) σ̂(t), (15)

where B†(x(t)) is the pseudoinverse of B(x(t)). The following lemma establishes the estimation error bound associated with the disturbance estimator defined by (13) and (14). Lemma 3 can be considered a simplified version of [16, Lemma 4] in the sense that the uncertainty considered in [16] can explicitly depend on both time and states, i.e., be represented as d(t, x), while the uncertainty in this paper depends on states only. The proof is similar to that in [16] and is omitted for brevity.
Lemma 3. [16] Given the dynamics (1) subject to Assumption 1 and the disturbance estimator in (13) and (14), the estimation error can be bounded as

∥d̄(t) − d(x(t))∥ ≤ δ⁰_de(t) for all t in [0, ξ], (16)

where δ⁰_de(t) is defined in (17) in terms of the constants L_B, L_d and b_d from Assumption 1. Moreover, lim_{T→0} δ⁰_de(t) = 0 for any t ≥ T.

Remark 6. According to Lemma 3, the estimation error after a single sampling interval can be arbitrarily reduced by decreasing T.

Remark 7. As explained in [16, Remark 5], the error bound δ⁰_de(t) can be quite conservative, primarily due to the conservatism introduced in computing ϕ and α(T). For practical implementation, one could benefit from empirical studies, such as conducting simulations with a selection of user-defined functions d(x), to identify a more refined bound than δ⁰_de(t) defined in (17).
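To make the estimator concrete, the sketch below runs the predictor (13) and the piecewise-constant update law (14) on a hypothetical scalar system ẋ = u + d(x) with B = 1 (so the estimate of d(x) is σ̂ itself); the gain, sampling time, and disturbance are illustrative, not the paper's values.

```python
import numpy as np

# Sketch of the predictor (13) + piecewise-constant update law (14) on a
# hypothetical scalar system xdot = u + d(x) with B = 1. Values illustrative.
a, T, dt = 10.0, 0.002, 1e-4          # observer gain, sampling time, Euler step
d = lambda x: 0.5 * np.sin(x) + 1.0   # "unknown" disturbance (simulation only)
u = 0.1                               # arbitrary constant input

x, x_hat, sigma = 0.3, 0.3, 0.0
steps_per_T = int(round(T / dt))
for i in range(5000):                 # simulate 0.5 s
    if i % steps_per_T == 0:          # update law (14), applied every T seconds
        sigma = -a / (np.exp(a * T) - 1.0) * (x_hat - x)
    x_dot = u + d(x)                          # true dynamics
    xh_dot = u + sigma - a * (x_hat - x)      # predictor (13)
    x, x_hat = x + dt * x_dot, x_hat + dt * xh_dot
err = abs(sigma - d(x))               # pointwise estimation error at t = 0.5 s
```

After the first sampling interval the estimate tracks d(x(t)) up to an error that shrinks with T, consistent with Lemma 3 and Remark 6.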

III. DISTURBANCE ESTIMATION-BASED CONTRACTION CONTROL UNDER LEARNED DYNAMICS
This section introduces an approach based on CCMs and disturbance estimation to ensure the universal exponential stability (UES) of the uncertain system (2) even when the learned model d̂(x) is poor.

A. CCMs and Feasible Trajectories for the True System
The first step is to search for a valid CCM for the uncertain system (2). Due to the special structure of (2), attributed to the matched uncertainty assumption, we have the following lemma. The proof is straightforward by following [15], which considers matched parametric uncertainties, and is thus omitted.

Lemma 4. If a contraction metric M(x) satisfies the strong CCM condition (7) for the nominal system (6), then the same metric satisfies the strong CCM condition for the learned dynamics (4), as well as for the uncertain system (2) with the learned dynamics d̂(x).
Equation (20) and Assumption 1 imply that d(x) ∈ D for any x ∈ X. As discussed in Section II-C, when provided with a CCM and a feasible desired trajectory (x⋆(t), u⋆(t)), a controller can be designed to exponentially stabilize the actual state trajectory x(t) to the desired trajectory x⋆(t). In practice, we only have access to the learned dynamics to plan a trajectory. We now present a lemma providing the condition under which x⋆(t), planned using the learned dynamics (4), also represents a feasible state trajectory for the true system.
Lemma 5. Consider a trajectory (x⋆(•), u⋆(•)) planned using the learned dynamics (4). If condition (21) and Assumption 1 hold, then x⋆(•) is a feasible state trajectory for the true system (1).

B. Filtered Disturbance Estimate
Using a small T in the estimation law (14) may introduce high-frequency components into the control signal, which could compromise the robustness of the closed-loop system, e.g., against input delay. This has been demonstrated in the adaptive control literature [8, Sections 1.3 and 2.1.4], which shows that a high adaptation rate (corresponding to a small T here) leads to a reduced time-delay margin. To prevent the high-frequency signal in the estimation loop induced by a small T from entering the control loop, we can use a low-pass filter to smooth the estimated uncertainty before using it to compute control signals, as inspired by the L1 adaptive control theory [8]. More specifically, we define the filtered disturbance estimate ď(t) as

ď(t) ≜ L⁻¹[C(s) L[d̄(t)]], (22)

where L[•] and L⁻¹[•] denote the Laplace transform and its inverse, respectively, and C(s) is an m×m transfer-function matrix of low-pass filters. For simplicity, we select C(s) to be

C(s) = diag(k¹_f/(s + k¹_f), ..., k^m_f/(s + k^m_f)), (23)

where k^j_f (j = 1, ..., m) is the filter bandwidth for the jth uncertainty channel. C(s) in (23) can be described by a state-space model with A_f = −B_f, where B_f is an m×m matrix with all elements equal to 0 except the (j, j) element, which equals k^j_f. Leveraging the bound in (17) and the solution of state-space systems, we can straightforwardly derive an EEB on ď(t) − d(x(t)), formally stated in the following lemma.
Lemma 6. Consider the filtered disturbance estimate given in (22). If Assumption 2 holds and the bound (17) holds for all t in [0, ξ], then, for all t in [0, ξ],

∥ď(t) − d(x(t))∥ ≤ δ_de(t), (25)

where δ_de(t) is the sum of the bounds ψ_1(t) and ψ_2 defined in (26).

Proof. Equation (22) implies

ď(t) − d(x(t)) = ∆_1(t) + ∆_2(t), (27)

where

∆_1(t) ≜ L⁻¹[C(s) L[d̄(t) − d(x(t))]], ∆_2(t) ≜ L⁻¹[(C(s) − I) L[d(x(t))]]. (28)

Notice that ∆_1 in (28) can be represented with a state-space model:

ẋ_f(t) = A_f x_f(t) + B_f (d̄(t) − d(x(t))), ∆_1(t) = x_f(t), (30)

where x_f(t) ∈ R^m is the state vector of the filter. From (17) and (30), we have

∥∆_1(t)∥ ≤ ψ_1(t), (31)

for any t in [0, ξ], where the derivation uses the fact that e^{A_f(t−τ)} has non-negative entries for the diagonal A_f associated with (23). On the other hand, ∆_2(t) can be bounded, using the L1 norm of C(s) − I and the uniform bound on d(x) from Assumption 1, as ∥∆_2(t)∥ ≤ ψ_2. Combining (27) and (31) with this bound, we obtain (25). ■

The L1 norm of a linear time-invariant system can be easily computed using the impulse response [8, Section A.7.1].
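As a small illustration of the last point, for a first-order low-pass channel C(s) = k_f/(s + k_f) of the form selected in (23), the impulse response k_f e^{−k_f t} is non-negative, so the L1 norm equals the DC gain of 1. The sketch below (with a hypothetical bandwidth) computes the norm numerically from the impulse response.

```python
import numpy as np

# Sketch: the L1 norm of a stable LTI system is the integral of |h(t)|, where
# h is the impulse response. For the low-pass channel C(s) = kf/(s + kf),
# h(t) = kf * exp(-kf t) >= 0, so ||C||_L1 equals the DC gain, i.e., 1.
kf = 20.0                                 # hypothetical filter bandwidth
t = np.linspace(0.0, 2.0, 200001)         # horizon much longer than 1/kf
h = kf * np.exp(-kf * t)                  # impulse response of C(s)
dt = t[1] - t[0]
L1_norm = np.sum(0.5 * (np.abs(h[1:]) + np.abs(h[:-1]))) * dt   # trapezoid rule
```

The same impulse-response integral applies channel by channel for the diagonal C(s) in (23), and to C(s) − I when bounding ∆_2.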
Remark 8. From the definitions of δ⁰_de(t) in (17) and of ψ_1(t) in (26), one can see that lim_{T→0} ψ_1(t) = 0 for all t ≥ T. On the other hand, δ_d̃ is expected to decrease with the improved accuracy of the learned model d̂(x).
Remark 9. As explained in Remark 7, the bound δ⁰_de(t) could be quite conservative, which leads to a conservative bound, i.e., ψ_1(t), for ∆_1(t) (defined in (28)). Additionally, the bound on ∆_2(t) can also be quite conservative due to the use of the L1 norm of a system and the L∞ norms of inputs and outputs (e.g., as characterized by [8, Lemma A.7.1]). As a result, the bound δ_de(t) is most likely rather conservative. For practical implementation, one could leverage an empirical study, e.g., running simulations under a few user-selected functions d(x) and d̂(x) and determining a tighter bound based on the simulation results.
Remark 10. For many control techniques, including a low-pass filter will induce phase delay and may harm system performance and/or robustness. However, this is not true for adaptive or uncertainty estimation-based control. As demonstrated in the adaptive control literature [8], including a low-pass filter with properly designed bandwidth can actually improve system robustness, e.g., against input delay. Simulation tests in Appendix A indicate that this is also true for the proposed control technique. Additionally, the low-pass filter is only applied to the estimate of the learned model error (i.e., d̃(x(t))) and influences the estimation error bound. Such influence will decrease as the learned model improves.

C. Robust Riemann Energy Condition under Learned Dynamics
Section II-C demonstrates that, given a nominal system and a CCM established for it, one can devise a controller to limit the rate of decrease of the Riemann energy, as described by (8). Now, given the uncertain dynamics with the learned model in (2) and the trajectory (x⋆(•), u⋆(•)) planned using the learned model, under the condition (21), the condition (8) becomes

2γ_s(1)^⊤ M(x) ẋ − 2γ_s(0)^⊤ M(x⋆) F_l(x⋆, u⋆) ≤ −2λ E(x⋆, x), (32)

where ẋ(t) = f(x(t)) + B(x(t))(u(t) + d(x(t))) denotes the true dynamics evaluated at x(t), and F_l(x, u) denotes the learned dynamics defined in (3). Obviously, (32) is not implementable since it depends on the uncertainty d(x(t)). However, by estimating the value of d(x(t)) at each time t with a computable estimation error bound, we can establish a robust condition for (32) [16]. In particular, consider the filtered disturbance estimate introduced in Section III-B, which estimates d(x(t)) as ď(t) at each time t with an error bound δ_de(t) defined in (25). Then, a sufficient condition for (32) can be obtained as

2γ_s(1)^⊤ M(x) ẋ̌ + 2∥B(x)^⊤ M(x) γ_s(1)∥ δ_de(t) − 2γ_s(0)^⊤ M(x⋆) F_l(x⋆, u⋆) ≤ −2λ E(x⋆, x), (33)

where ẋ̌(t) ≜ f(x) + B(x)(u(t) + ď(t)). Moreover, since M(x) satisfies the CCM condition (7), a control input u(t) satisfying (33) exists for any t ≥ 0, irrespective of the size of δ_de(t), in the absence of control limits, i.e., when U = R^m. We call condition (33) the robust Riemann energy (RRE) condition.
Remark 11. From Sections II-D and III-C, we see that the disturbance estimate provided by (13) and (14) and incorporated in the RRE condition (33) is the discrepancy between the true dynamics and the nominal dynamics without the learned model (i.e., d(x)). Alternatively, we can easily adapt the estimation law (13) and (14) to estimate d̃(x) (i.e., the discrepancy between the learned dynamics (4) and the true dynamics) and adjust the RRE condition accordingly. However, as characterized in (17), the EEB depends on the local Lipschitz bound of the uncertainty to be estimated, which indicates that a Lipschitz bound for d̃(x) would be needed to establish the EEB in that case.

D. Guaranteed Trajectory Tracking Under Learned Dynamics
Similar to Section II-C, we can compute a control input at each time t to satisfy the RRE condition (33). In practice, one may want to compute u(t) with a minimal ∥u(t) − u⋆(t)∥, which can be achieved by setting u(t) = u⋆(t) + k*(t, x⋆, x) with k*(t, x⋆, x) obtained via solving a QP problem:

k*(t, x⋆, x) = argmin_{k∈R^m} ∥k∥² subject to (35), (34)

at each t ≥ 0, where (35) is an equivalent representation of (33) with u = u⋆ + k, ď(t) is the filtered disturbance estimate via (13)-(15) and (22), δ_de(t) is defined in (25), and F_l(•, •) is defined in (3). The problem (34) is commonly referred to as a min-norm problem and can be solved analytically [30].

Remark 12. The proposed controller is inspired by the L1 adaptive control theory [8]. In fact, we adopt the estimation mechanism (the PWCE law in (13) and (14)) used within an L1 controller. However, instead of directly canceling the estimated disturbance as one would do with an L1 controller, the proposed approach incorporates the estimated uncertainty and the EEBs into the robust Riemann energy condition (35) to compute the control signal, which ensures exponential convergence of actual trajectories to desired ones.
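The analytic solution of the min-norm problem takes the standard single-constraint form: writing the affine constraint (35) generically as a^⊤ k + c ≤ 0, the minimizer is k* = 0 when the constraint is inactive and the projection −c a/∥a∥² otherwise. A sketch with hypothetical a and c:

```python
import numpy as np

# Sketch of the analytic solution of the pointwise min-norm problem
#   min_k ||k||^2  s.t.  a^T k + c <= 0,
# the generic single-affine-constraint form of the QP (34). a, c hypothetical.
def min_norm_k(a, c):
    if c <= 0.0:              # constraint inactive: k = 0 is optimal
        return np.zeros_like(a)
    return -c * a / (a @ a)   # active constraint: projection onto a^T k = -c

a = np.array([1.0, 2.0])
c = 3.0
k = min_norm_k(a, c)          # -> [-0.6, -1.2], which makes a^T k + c = 0
```

This closed form is what makes the controller cheap to evaluate at every time step, avoiding a general-purpose QP solver.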
The following lemma establishes a bound on the control input given by solving (34).
Lemma 7. Suppose Assumptions 2-3 hold and the uncertainty estimation error is bounded according to (25). Moreover, suppose x⋆, x ∈ X are such that d(x⋆, x) ≤ d̄ for a scalar constant d̄ ≥ 0, where d(x⋆, x) denotes the Riemann distance between x⋆ and x. Then, the control effort from solving (34) is bounded as

∥k*(t, x⋆, x)∥ ≤ δ̄_u^L, (38)

where δ̄_u^L is defined in terms of Ĝ(x) ≜ ∂_f̂ M + ⟨M ∂f̂/∂x⟩ + 2λM, with f̂ defined via (3).

Proof. According to Lemma 4 and Assumption 2, M(x) is a valid CCM for the learned dynamics (4). By applying Lemma 2 to the learned dynamics (4), we obtain that at any t ≥ 0, for any feasible x⋆(t) and x(t) satisfying d(x⋆, x) ≤ d̄, there always exists a k_0 satisfying (39). Setting k as in (40), with ϕ_1 defined in (37), we have LHS of (35) ≤ LHS of (39) ≤ 0, which implies that k defined in (40) is a feasible solution for (34). Therefore, the optimal solution of (34) satisfies ∥k*(t, x⋆, x)∥ ≤ ∥k∥ ≤ δ̄_u^L, which proves (38). ■

The main theoretical result of the paper is stated below.
Theorem 1. Consider the uncertain system (2) with the learned dynamics d̂(x). Suppose Assumptions 1-2 hold and Assumption 3 holds with positive constants α_1, α_2 and λ. Furthermore, suppose the initial state vector x(0) ∈ X and a continuous trajectory (x⋆(•), u⋆(•)) planned using the learned dynamics (4) satisfy (21), (41) and (42) for any t ≥ 0, where int(•) denotes the interior of a set, δ̄_u^L is defined in (38), and d̄ is defined in (43) for an arbitrary ε > 0. Then, the control law u(t) = u⋆(t) + k*(t, x⋆, x), with k*(t, x⋆, x) solving (34), ensures that u(t) ∈ U and x(t) ∈ X for all t ≥ 0, and universally exponentially stabilizes the system (2) in the sense that (11) holds.

We next prove x(t) ∈ X and u(t) ∈ U for all t ≥ 0 by contradiction. Assume this is not true. Since x(t) and u(t) are continuous, there must exist a time τ such that

x(t) ∈ X and u(t) ∈ U, ∀t ∈ [0, τ), (44)

while

x(τ) ∉ X or u(τ) ∉ U. (45)

Now, examine the evolution of the system within the interval [0, τ). Due to (44), the error bound in (17) holds within [0, τ). Also, notice that condition (21) ensures that x⋆(•) represents a feasible trajectory for the uncertain system (2) with input constraints, according to Lemma 5.
Therefore, the control law given by (34) guarantees the fulfillment of the RRE condition (33) and, consequently, of condition (32), which further implies (8), or equivalently,

E(x⋆(t), x(t)) ≤ E(x⋆(0), x(0)) e^{−2λt}, (46)

for all t in [0, τ). Lemma 1 and (46) indicate that ∥x(t)∥ ≤ ∥x⋆(t)∥ + √(α_2/α_1) ∥x(0) − x⋆(0)∥ e^{−λt} for all t ∈ [0, τ), which, together with (42), implies that x(t) stays in the interior of X for all t ∈ [0, τ). Considering that x(t) is continuous, we have x(τ) ∈ X, contradicting the first condition in (45). As a result, we have

x(τ) ∈ X. (47)

Now consider the second condition in (45). Condition (46) implies d(x⋆(t), x(t)) ≤ d(x⋆(0), x(0)) e^{−λt} < d̄ for any t in [0, τ), where d(x⋆, x) denotes the Riemann distance between x⋆ and x and d̄ is defined in (43). Considering the continuity of x⋆(t) and x(t), we have

d(x⋆(τ), x(τ)) ≤ d̄. (48)

Due to (47) and u(t) ∈ U for all t ∈ [0, τ) (from (44)), it follows from Lemma 3 that the error bound in (17) holds in [0, τ], which, together with (47), implies that the filter-dependent error bound in (25) holds in [0, τ]. This fact, along with (48), indicates that (38) holds, i.e., ∥k*(t, x⋆, x)∥ ≤ δ̄_u^L, for all t in [0, τ]. Further considering (41), we have u(t) = u⋆(t) + k*(t, x⋆, x) ∈ U for all t in [0, τ], which, together with (47), contradicts (45). Therefore, we conclude that x(t) ∈ X and u(t) ∈ U for all t ≥ 0. From the development of the proof, it is clear that the UES of the closed-loop system in X under the control law given by the solution of (34) is achieved. ■

Remark 13. Theorem 1 asserts that, under specific conditions, the proposed controller ensures exponential convergence of the actual state trajectory to a desired trajectory that may be planned using a potentially poorly learned model. On the other hand, improved accuracy of the learned model reduces the error bound δ_d̃ and thus the EEB δ_de(t), and improves the robustness of the controller, as will be demonstrated by simulation results in Section IV.
Remark 14. The exponential convergence assurance described in Theorem 1 relies on a continuous-time implementation of the controller. However, in practical applications, controllers are typically implemented on digital processors with fixed sampling times. Consequently, the property of exponential convergence may be slightly compromised, as noted in Section IV.

E. Discussions
In view of (38), condition (41) in Theorem 1 requires that, when planning nominal input trajectories, enough control authority be left for the control law defined by (34). Additionally, the control effort bound δ̄_u^L depends on the error bound of the learned model, δ_d̃; a poorly learned model with a large δ_d̃ will lead to a large δ̄_u^L, making it challenging to satisfy (41).
While Theorem 1 only mentions the trajectory tracking performance, we will empirically show the benefits of learning in facilitating better trajectory planning and improving the robustness of the controller in Section IV.

IV. SIMULATION RESULTS
We validate the proposed learning control approach on a 2D quadrotor introduced in [12]. We selected this example because the 2D quadrotor, while simpler than a 3D quadrotor, presents significant control challenges due to its nonlinear and unstable dynamics. Additionally, it offers a suitable scenario for demonstrating the applicability of the proposed control architecture, specifically, maintaining safety and reducing energy consumption in the presence of disturbances. The dynamics of the vehicle are given by

ṗ_x = v_x cos ϕ − v_z sin ϕ, ṗ_z = v_x sin ϕ + v_z cos ϕ,
v̇_x = v_z ϕ̇ − g sin ϕ, v̇_z = −v_x ϕ̇ − g cos ϕ + (u_1 + u_2 + d_1(x) + d_2(x))/m,
ϕ̈ = l (u_1 + d_1(x) − u_2 − d_2(x))/J,

where p_x and p_z represent the positions in the x and z directions, respectively, and v_x and v_z denote the lateral velocity and the velocity along the thrust axis in the body frame. ϕ is the angle between the x direction of the body frame and the x direction of the inertial frame. The input vector u = [u_1, u_2]^⊤ contains the thrust forces produced by the two propellers. m and J represent the mass and the moment of inertia about the out-of-plane axis, respectively, l denotes the distance between each propeller and the vehicle center, and d(x) = [d_1(x), d_2(x)]^⊤ signifies the unknown disturbances exerted on the propellers. Specific parameter values are assigned as follows: m = 0.486 kg, J = 0.00383 kg·m², and l = 0.25 m. We choose d(x) to be d(x) = ρ(p_x, p_z) • 0.5(v_x² + v_z²)[1, 1]^⊤, where ρ(p_x, p_z) = 1/((p_x − 5)² + (p_z − 5)² + 1) represents the disturbance intensity, whose value at a specific location is denoted by the color at that location in Fig. 2. We consider three navigation tasks with different start and target points, while avoiding the three circular obstacles illustrated in Fig. 2. The planned trajectories were computed using OptimTraj [31] to minimize the cost function J = ∫_0^{T_a} ∥u(t)∥² dt + 5T_a, where T_a denotes the arrival time. The actual start points for Tasks 1-3 were intentionally set to be different from the desired ones used for trajectory planning.
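For reference, a minimal simulation-ready sketch of the dynamics above; the state ordering [p_x, p_z, ϕ, v_x, v_z, ϕ̇] and g = 9.81 are assumptions, while the parameter values are those given in the text.

```python
import numpy as np

# Sketch of the planar (2D) quadrotor dynamics from the example. The state
# ordering [px, pz, phi, vx, vz, phidot] and g = 9.81 are assumptions; the
# parameter values are taken from the text. Input: u = [u1, u2].
m, J, l, g = 0.486, 0.00383, 0.25, 9.81

def rho(px, pz):
    """Disturbance intensity field from the text."""
    return 1.0 / ((px - 5.0) ** 2 + (pz - 5.0) ** 2 + 1.0)

def disturbance(x):
    px, pz, phi, vx, vz, phidot = x
    return rho(px, pz) * 0.5 * (vx ** 2 + vz ** 2) * np.ones(2)

def dynamics(x, u):
    px, pz, phi, vx, vz, phidot = x
    u1, u2 = u + disturbance(x)        # disturbances act on the propellers
    return np.array([
        vx * np.cos(phi) - vz * np.sin(phi),             # pxdot
        vx * np.sin(phi) + vz * np.cos(phi),             # pzdot
        phidot,                                          # phidot
        vz * phidot - g * np.sin(phi),                   # vxdot
        -vx * phidot - g * np.cos(phi) + (u1 + u2) / m,  # vzdot
        l * (u1 - u2) / J,                               # phiddot
    ])

# hover check far from the disturbance region: equal thrusts balance gravity
x_hover = np.zeros(6)
u_hover = np.array([m * g / 2.0, m * g / 2.0])
```

At the origin the velocities are zero, so the disturbance vanishes and the hover input yields zero state derivatives, which is a quick sanity check on the model.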

A. Control Design
For computing a CCM, we parameterized the CCM W by ϕ and v_x and set the convergence rate λ to 0.8. Additionally, we enforced additional constraints in the CCM synthesis; more details about synthesizing the CCM and computing the geodesic can be found in [18]. All the subsequent computations and simulations, except DNN training (which was done in Python using PyTorch), were done in Matlab R2021b. For estimating the disturbance using (13)-(15), we set a = 10. It is straightforward to confirm that L_d = 4, b_d = 3.54, and L_B = 0 (due to the constant nature of B) satisfy Assumption 1. By discretizing the space X into a grid, one can determine the constant ϕ in Lemma 3 to be ϕ = 783.96. Based on (17), to achieve an error bound δ⁰_de = 0.1, the estimation sampling time needs to satisfy T_s ≤ 2.04 × 10⁻⁷ s. However, as mentioned in Remark 7, the error bound calculated from (17) might be overly conservative. Through simulations, we determined that a sampling time of 0.002 s was sufficient to achieve the desired error bound and thus set T_s = 0.002 s.

B. Performance and Robustness Across Learning Transients
Figure 2 (top) illustrates the planned and actual trajectories under the proposed controller utilizing the RRE condition and disturbance estimation (referred to as DE-CCM), in the presence of no, moderate, and good learned models for the uncertain dynamics. For these results, we did not low-pass filter the estimated uncertainty, which is equivalent to setting C(s) = I in (22). Spectral-normalized DNNs [28] (Remark 4) with four inputs, two outputs, and four hidden layers were used for model learning. For training data collection, we planned and executed nine trajectories with different start and end points, as shown in Fig. 3. The data collected during the execution of these trajectories were used to train the moderate model. However, these nine trajectories were still not enough to fully explore the state space, and sufficient exploration is necessary to learn a good uncertainty model. Thanks to the performance guarantee, the DE-CCM controller facilitates safe exploration, as demonstrated in Fig. 3. For illustration purposes, we directly used the true uncertainty model to generate training data, which yielded the good model. As depicted in Fig.
2, the actual trajectories generated by DE-CCM exhibited the expected convergence towards the desired trajectories during the learning phase across all three tasks. The minor deviations observed between the actual and desired trajectories under DE-CCM can be attributed to the finite step size used in the ODE solver employed for the simulations (refer to Remark 14). The planned trajectory for Task 2 in the moderate learning case appeared distorted near the end point because the learned model was not accurate in that area due to a lack of sufficient exploration; nevertheless, with the DE-CCM controller, the quadrotor was still able to track the trajectory. Figure 4 depicts the trajectories of the true, learned, and estimated disturbances in the presence of no and good learning for Task 1; the trajectories for Tasks 2 and 3 are similar and thus omitted. One can see that the estimated disturbances were always fairly close to the true disturbances. Also, the area with high disturbance intensity was avoided under good learning, which explains the smaller disturbances encountered. Figure 5 shows the trajectories of the Riemann energy E(x⋆, x) in the presence of no and good learning for Task 1; again, the trajectories for Tasks 2 and 3 are similar and thus omitted. It is evident that E(x⋆, x) under DE-CCM decreased exponentially across all scenarios, irrespective of the quality of the learned model. To compare, we implemented three additional controllers: a vanilla CCM controller that disregards the uncertainty or learned model error, adaptive CCM (Ad-CCM) controllers from [15] with true and polynomial regressors, and a robust CCM (RCCM) controller that can be seen as a special case of a DE-CCM controller with the uncertainty estimate equal to zero and the EEB equal to the disturbance bound b_d. The Ad-CCM controller [15] needs a parametric structure for the uncertainty in the form of Φ(x)θ, with Φ(x) being a known regressor matrix and θ being the vector of unknown parameters. For the
no-learning case, we assume that we know the regressor matrix for the original uncertainty d(x). For the learning cases, since we do not know the regressor matrix for the learned model error d̃(x), we used a second-order polynomial regressor matrix. Figure 2 (bottom) shows the tracking performance yielded by these additional controllers for Task 1 under different learning scenarios. The trajectories resulting from the CCM controller showed significant deviations from the planned trajectories and occasionally encountered collisions with obstacles, except in the case of good learning. Additionally, Ad-CCM yielded poor tracking performance in the moderate learning case. The poor performance could be attributed to the fact that the uncertainty d(x) may not have a parametric structure, or, even if it does, the selected regressor may not be sufficient to represent it. RCCM achieved performance similar to that of the proposed method, but showed weaker robustness against control input delays, as demonstrated later.
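A second-order polynomial regressor matrix of the kind used above can be constructed, for example, by stacking all monomials of the state up to degree two and sharing the feature vector across both output channels via a Kronecker product. This construction is one plausible choice, not necessarily the exact regressor used in the paper.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(x, degree=2):
    """All monomials of the entries of x up to the given degree (including 1)."""
    feats = [1.0]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), deg):
            feats.append(np.prod([x[i] for i in idx]))
    return np.array(feats)

def regressor(x, n_out=2):
    """Phi(x) such that the uncertainty model is Phi(x) @ theta."""
    phi = poly_features(x)
    return np.kron(np.eye(n_out), phi.reshape(1, -1))

Phi = regressor(np.ones(6))
print(Phi.shape)   # (2, 56): 28 monomials per output channel
```

For a 6-dimensional state, degree two gives 1 + 6 + 21 = 28 monomials per channel, so θ has 56 unknown parameters.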
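The spectral-normalized DNN used for model learning (four inputs, two outputs, four hidden layers) can be sketched in PyTorch as follows; the hidden width (128), the ReLU activation, and the choice of (p_x, p_z, v_x, v_z) as the four inputs are assumptions not stated in the section.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

def make_sn_mlp(n_in=4, n_out=2, n_hidden=4, width=128):
    """MLP whose every linear layer has spectral norm <= 1, so the network's
    overall Lipschitz constant is bounded -- the property exploited in [28]."""
    dims = [n_in] + [width] * n_hidden + [n_out]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(spectral_norm(nn.Linear(dims[i], dims[i + 1])))
        if i < len(dims) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

net = make_sn_mlp()
x = torch.randn(16, 4)   # batch of (p_x, p_z, v_x, v_z) samples (assumed inputs)
print(net(x).shape)      # torch.Size([16, 2])
```

Bounding each layer's spectral norm is what makes the learned model's Lipschitz constant certifiable, which in turn is what allows a known Lipschitz bound such as L_d to be enforced on the learned dynamics.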

C. Improved Planning and System Robustness With Learning
Fig. 6 shows the costs J associated with the actual trajectories achieved by DE-CCM under different learning qualities. As expected, the good model helped plan better trajectories, leading to reduced costs for all three tasks. It is not a surprise that the poor and moderate models led to a temporary increase in the costs for some tasks. In practice, one may not use a poorly learned model directly for planning trajectories; DE-CCM ensures that, in case one really does so, the planned trajectories can still be tracked well.
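The reported costs can be computed from logged input trajectories by numerically integrating ‖u(t)‖² and adding the time penalty; a minimal sketch using the trapezoidal rule, assuming the inputs are logged row-wise at sample times t:

```python
import numpy as np

def trajectory_cost(t, u):
    """J = int_0^Ta ||u(t)||^2 dt + 5*Ta, trapezoidal rule on logged samples."""
    integrand = np.sum(np.asarray(u)**2, axis=1)   # ||u(t_k)||^2 at each sample
    Ta = t[-1] - t[0]                              # arrival time
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
    return integral + 5.0 * Ta

t = np.linspace(0.0, 2.0, 201)
u = np.tile([1.0, 0.0], (t.size, 1))   # constant input, ||u||^2 = 1
print(trajectory_cost(t, u))           # 1*2 + 5*2 = 12.0
```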
Fig. 6: Costs of actual trajectories achieved by DE-CCM throughout the learning phase.

The role of the low-pass filter in protecting system robustness in the no-learning case is illustrated in Appendix A. We next tested the robustness of RCCM and DE-CCM against input delays under different learning scenarios. Note that, unlike linear systems, for which gain and phase margins are commonly used as robustness criteria, the robustness of nonlinear systems is often evaluated in terms of their tolerance of input delays. Under an input delay of ∆t, the plant dynamics (1) become ẋ = f(x) + B(x)(u(t − ∆t) + d(x)). For these experiments, we leveraged a low-pass filter C(s) = 30/(s + 30) I_2 to filter the estimated disturbance following (22). In principle, the EEB of 0.1 used in the previous experiments would not hold anymore, as the presence of the filter would lead to a larger error bound according to (25). However, we kept using the same EEB of 0.1, as the theoretical bound from (25) could be quite conservative. The Riemann energy, which indicates the tracking performance for all states, is shown in Fig.
7. One can see that under both delay cases, DE-CCM achieved smaller and less oscillatory Riemann energy, indicating better robustness and tracking performance, compared to RCCM. Additionally, the robustness of DE-CCM against input delays in the presence of good learning is significantly improved compared to the no-learning case, which illustrates the benefits of incorporating learning. This can be explained as follows. The input delay may cause the disturbance estimate ď_0(t) to be highly oscillatory and induce a large discrepancy between ď_0(t) and d(x(t)). The low-pass filter C(s) can filter out the high-frequency oscillatory component of ď_0(t). Under good learning, the learned model d̂(x(t)) approaches the true uncertainty d(x(t)); as a result, according to (22), the filtered disturbance estimate ď(t) can be much closer to d(x(t)), leading to improved robustness and performance compared to the no- and moderate-learning cases.
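The filter C(s) = 30/(s + 30) I_2 applied to the disturbance estimate can be realized in discrete time with an exact zero-order-hold discretization of the first-order lag; a minimal sketch, using T_s = 0.002 s as in the experiments:

```python
import numpy as np

class LowPassFilter:
    """Discrete realization of C(s) = (w / (s + w)) * I, applied channel-wise."""
    def __init__(self, w=30.0, Ts=0.002, n_ch=2):
        self.alpha = np.exp(-w * Ts)   # exact ZOH pole mapping
        self.y = np.zeros(n_ch)

    def step(self, u):
        # y_{k+1} = alpha * y_k + (1 - alpha) * u_k  (unit DC gain)
        self.y = self.alpha * self.y + (1.0 - self.alpha) * np.asarray(u)
        return self.y

lpf = LowPassFilter()
for _ in range(2000):                  # 4 s of constant input
    y = lpf.step([1.0, -0.5])
print(y)                               # converges to the DC value [1.0, -0.5]
```

The unit DC gain matters here: a slowly varying disturbance estimate passes through unattenuated, while the high-frequency oscillation induced by the input delay is suppressed.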

V. CONCLUSIONS
This paper presents a disturbance estimation-based contraction control architecture that allows for using model learning tools (e.g., a neural network) to learn uncertain dynamics while guaranteeing exponential trajectory convergence during learning transients under certain conditions. The architecture uses a disturbance estimator to estimate the pointwise value of the uncertainty, i.e., the discrepancy between the nominal and actual dynamics, with pre-computable estimation error bounds (EEBs). The learned dynamics, the estimated disturbances, and the EEBs are then incorporated into a robust Riemann energy condition, which is used to compute the control signal that guarantees exponential convergence to the desired trajectory throughout the learning phase. On the other hand, we show that learning can facilitate better trajectory planning and improve the robustness of the closed-loop system, e.g., against input delays. The proposed framework is validated on a planar quadrotor example.
Future directions could involve addressing broader uncertainties, especially unmatched uncertainties prevalent in practical systems, minimizing the conservatism of the estimation error bound, and demonstrating the efficacy of the proposed control framework with alternative model learning tools.

Fig. 2 :
Fig. 2: Top: Planned and executed trajectories under the proposed DE-CCM for Tasks 1∼3 under different learning scenarios. Bottom: Tracking performance of DE-CCM, Ad-CCM [15], RCCM (which can be seen as a special case of DE-CCM with the uncertainty estimate set to 0), and CCM for Task 1 under different learning scenarios. Start points of the planned and actual trajectories are intentionally set to be different to reveal the transient response under different scenarios.

Fig. 3 :
Fig. 3: Safe exploration of the state space for learning uncertain dynamics under DE-CCM

Fig. 4 :
Fig. 4: Trajectories of the true, learned, and estimated disturbances in the presence of no and good learning for Task 1. The notations d_1, d̂_1, and ď_1 denote the first elements of d, d̂, and ď, respectively.

Fig. 5 :
Fig. 5: Trajectories of the Riemann energy E(x⋆, x) (left) and the first element of the control input, i.e., u_1 (right), in the presence of no, poor, and good learning for Task 1

Fig. 7 :
Fig. 7: Riemann energy yielded by DE-CCM and RCCM under different learning cases with an input delay of 10 ms (top) and 30 ms (bottom). Note that the plots on the right are zoomed-in versions of the corresponding plots on the left. Learning improves the robustness of both DE-CCM and RCCM against input delays.
(to enforce that d̂(x) has a Lipschitz bound L_d in X) and compute the UEB as follows. Obviously, since both d(x) and d̂(x) have a local Lipschitz bound L_d in X, the model error d̃(x) = d(x) − d̂(x) has a local Lipschitz bound 2L_d in X. As a result, given any point x* ∈ X, we have ‖d̃(x) − d̃(x*)‖ ≤ 2L_d‖x − x*‖ for all x ∈ X. Here, L[•] and L⁻¹[•] denote the Laplace transform and inverse Laplace transform operators, respectively, and C(s) is an m × m strictly proper transfer function matrix denoting a stable low-pass filter. Notice that ď_0(t) − d̂(x(t)) is an estimate of the learned model error d̃(x(t)) = d(x(t)) − d̂(x(t)). Filtering d̂(x(t)) is not necessary because it will not induce high-frequency signals into the control loop, unlike ď_0(t).