Nearest Neighbor Control For Practical Stabilization of Passive Nonlinear Systems

This paper studies static output feedback stabilization of continuous-time (incrementally) passive nonlinear systems in which the control actions can only be chosen from a discrete (and possibly finite) set of points. For this purpose, we work under the assumption that the system under consideration is large-time norm observable and that the convex hull of the realizable control actions contains the target constant input (which corresponds to the equilibrium point) in its interior. We propose a nearest-neighbor-based static feedback mapping from the output space to the finite set of control actions that is able to practically stabilize the closed-loop system. Consequently, we show that for such systems with an $m$-dimensional input space, it is sufficient to have $m+1$ discrete input points (other than zero for general passive systems, or other than the target constant input for incrementally passive systems). Furthermore, we present a constructive algorithm to design such $m+1$ nonzero input points that satisfy the conditions for practical stability under our proposed nearest-neighbor control.


Introduction
In several applications, ranging from control of physical systems to networked control, exact implementation of a feedback control law is not possible due to constraints at the level of sensors/actuators, or constraints at the level of communication channels. Problems related to the analysis, or the design of control laws, in the presence of such constraints have attracted considerable attention in the literature (De Persis and Jayawardhana, 2012; Delchamps, 1990; Elia and Mitter, 2001; Hayakawa et al., 2009; Jafarian and De Persis, 2015). In this paper, we focus our attention on continuous-time dynamical systems where the input space is constrained to finite discrete sets. An example of such a system with actuation constraints is the design of the power take-off systems of the Ocean Grazer wave energy converter (WEC), where the device can only activate a constant actuator system from a pre-specified finite set (Barradas-Berglind et al., 2016; Wei et al., 2017). Another example is the use of a fixed set of pulse attitude control thrusters for inducing 6 degree-of-freedom motion in spacecraft or landers (Aretskin-Hariton et al., 2018; Kienitz and Bals, 2005).
Control design methods with appropriate analysis techniques, where binary input or minimal information is considered, have been discussed, among many others, in (Elia and Mitter, 2001; Kao and Venkatesh, 2002) for linear systems, and in (Cortés, 2006; De Persis and Jayawardhana, 2012; Jafarian and De Persis, 2015) for the networked control systems setting. As these papers consider the use of binary input values per input dimension, the stabilization of an m-dimensional input-output system implies that there should be at least 2^m admissible input values, and the stabilizing control law must dynamically assign one of these values as the control input at every time instance. In this paper, we shall focus on designing control laws with a minimal set of discrete control values whose cardinality is at most m + 1, if we exclude the origin of the input space.
We consider nonlinear systems described by

Σ : ẋ = f(x) + g(x)u, y = h(x), (1)

where the state x(t) ∈ R^n and the input and output signals u(t), y(t) ∈ R^m. The functions f, g, and h are assumed to be continuously differentiable, f(0) = 0, g(x) is full-rank for all x, and h(0) = 0. For the developments carried out in this paper, the underlying assumption is that the input-output system Σ is passive (in an appropriate sense). The basic problem studied in this paper is the stabilization of Σ under limited actuation/information transmission; that is, the control input u can only take values from a finite discrete set U := {u_0, u_1, u_2, ..., u_p} with u_i ∈ R^m for each i = 0, ..., p.
Passive systems have received attention in different research fields as they are able to model physical phenomena exhibited by almost all thermo-chemo-electromechanical systems (Ortega et al., 2013; van der Schaft et al., 2013). In this regard, most of the aforementioned systems carry natural energy properties that can be related to passivity. In particular, such systems are said to be passive if the rate of change of the systems' "stored energy" never exceeds the power supplied by the environment through their external ports. There are different classes of passivity, for example, incremental passivity and differential passivity. These variations, along with the "original" passivity notion, have been shown to be useful for control design purposes (Jayawardhana et al., 2007; Kosaraju et al., 2019). We refer interested readers to the various expositions on passive systems in (Khalil, 2014; Ortega et al., 2013; Sepulchre et al., 2012; van der Schaft, 2016). Our results are also applicable to a class of nonlinear systems that can be made passive by feedback control, as investigated, for instance, in (Byrnes et al., 1991; Fradkov, 2008; Fradkov and Hill, 1998).
For the stabilization problem studied in this paper, it is assumed that we have a stabilizing output feedback law y → F(y) (when U is a continuum). When we impose the constraint that the actuation set U is finite, two relevant questions for stabilization are: a) how to map F(y) to an element in U; and b) how to determine the minimal cardinality of U. To address these questions for the system class Σ, we design a mapping φ : R^m → U, with U being discrete (and possibly minimal), such that u = φ(F(y)) ∈ U practically stabilizes Σ.
The question of designing the quantization mapping φ : R^m → U has been addressed in various forms in the literature. Since the input can only take the available values in the discrete set U, the quantizer φ, in some sense, defines a partition of the input space with respect to U, where each cell of the partition is associated to an element of the set U. In most of the existing works, the input set U is chosen such that the resulting partition has some structure. For instance, when U := {−N, −N + 1, ..., N − 1, N}^m, a partition in the form of a regular grid facilitates design and analysis (Ceragioli and De Persis, 2007; De Persis and Jayawardhana, 2012; Delchamps, 1990; Jafarian and De Persis, 2015; Liberzon and Hespanha, 2005; Tatikonda, 2000). Other examples include logarithmic quantizers (Elia and Mitter, 2001; Fu and de Souza, 2009), which are optimal with respect to a certain density metric. However, if we fix the discrete set U, then the question of finding the best possible partition for this given set has not received much attention in the literature. In this paper, we address the latter viewpoint by defining a simple static mapping that maps F(y) to the nearest element in U and (practically) stabilizes the system. If certain stability conditions are satisfied, the resulting partition is described by convex polytopes, and to the best of the authors' knowledge, such structures have not appeared in the literature on quantized control of nonlinear systems.
The second question of finding the minimal set U for feedback stabilization has also received considerable attention. One question regarding this matter concerns the minimal cardinality of the set U. As an example, consider the work of (Nair et al., 2007), where a discrete-time linear system, under an appropriate setting, is shown to be stabilizable if the number of bits per sample (the rate of communication) is greater than the intrinsic entropy of the system. Similar results are available for the continuous-time setting in (Colonius, 2012; Colonius and Kawan, 2009). To the best of the authors' knowledge, there has not been a dedicated study on computing the entropy of passive nonlinear systems. Therefore, the question of how many symbols are necessary or sufficient for the stabilization of a passive nonlinear system has not been addressed. However, we do find some results on quantized control of passive systems. In (Cortés, 2006; Jafarian and De Persis, 2015), under a certain passivity structure in the dynamics, Σ is shown to be practically stabilizable by using binary control for each input dimension, which directly translates to 2^m + 1 elements in U (including the zero element). As a relaxation of the aforementioned results, and dealing with a rather generic class of multi-input multi-output passive nonlinear systems, we show in this paper that such practical stabilization can be achieved by simply using m + 1 elements in U, in addition to {0}, or in addition to the required constant input u* when the system is required to track a desired constant reference y*. We do so by proposing nearest-neighbor-based control laws and analyzing the stability of the closed-loop system when the input u can only be taken from the finite discrete set U. Moreover, we provide an algorithmic procedure to construct minimal discrete sets that are able to practically stabilize the system by means of the nearest-neighbor-based control law. Our design methodology is such that the overall closed-loop system is an interconnection of a passive system with an optimization-based selection rule for the input. Dynamical systems where the inputs are computed by solving an optimization problem, and are hence discontinuous, appear in different applications (Brogliato and Tanwani, 2020). Passivity of the open-loop system is an important structural property that helps us analyze the overall system in such cases. When the quantization effect is of particular concern, the interconnection of passive systems and quantizers has been studied for the past decade in various contexts. For instance, a practical stability analysis of passive systems in a feedback loop with a quantizer, using an adapted circle criterion for nonsmooth systems, is presented in (Jayawardhana et al., 2011).
The rest of the paper is organized as follows. In Section 2, we provide some preliminaries on set-valued dynamics resulting from the use of nonsmooth control laws and on convex polytopes, and formulate the control problem. In Section 3, we describe our nearest neighbor control (NNC) approach, and present results showing practical convergence for passive systems. Using a similar approach, we generalize the results in Section 4 by considering the practical stabilization of a nonzero equilibrium for incrementally passive nonlinear systems. Some simple designs of the minimal action set, along with their construction procedures and properties associated to the NNC approach, are provided and analyzed in Section 5. Finally, some concluding remarks are provided in Section 6.
A concise version of the results presented in Section 3 has also appeared in the conference version of our paper (Jayawardhana et al., 2019). However, in this article, we carry out the proofs differently and with more rigor, which allows us to tackle higher-dimensional systems. The generalizations studied in Section 4, and the design methods proposed in Section 5, have not been addressed in any of the authors' previous works.

Preliminaries and Problem Formulation
Notation: For a vector in R^n, or a matrix in R^{m×n}, we denote the Euclidean norm and the corresponding induced norm by ‖·‖. For a signal z : R_{≥0} → R^n, the essential supremum norm of z over an interval [t_0, t_1] is denoted by ‖z‖_{[t_0,t_1]}. The closed ball of radius ε > 0 centered at the origin is denoted by B_ε(0); for simplicity, we write B_ε(0) as B_ε. The inner product of two vectors µ, ν ∈ R^m is denoted by ⟨µ, ν⟩. For a given set S ⊂ R^m and a vector µ ∈ R^m, we let ⟨µ, S⟩ := {⟨µ, ν⟩ | ν ∈ S}. For a discrete set U, its cardinality is denoted by card(U). The convex hull of vertices from a discrete set U is denoted by conv(U). The interior of a set S ⊂ R^n is denoted by int(S). A unit vector whose i-th element is 1 and whose other elements are 0 is denoted by e_i. A vector whose entries are all 1 is denoted by 1. A function γ : R_{≥0} → R_{≥0} is of class K if it is continuous, strictly increasing, and γ(0) = 0. We say that γ : R_{≥0} → R_{≥0} is of class K_∞ if γ is of class K and unbounded.

Passive systems and observability notions
The central object of this paper is the nonlinear control system Σ given in (1). The fundamental property that we associate with Σ is that it is passive, i.e., for all pairs of input and output signals u, y, we have ∫_0^T ⟨y(t), u(t)⟩ dt > −∞ for all T > 0; see (Willems, 1972; van der Schaft, 2016; Ortega et al., 2013) for some primary references on passive systems. By the well-known Hill–Moylan conditions, the passivity of Σ implies that there exists a positive definite storage function H : R^n → R_{≥0} such that ⟨∇H(x), f(x)⟩ ≤ 0 and g(x)^T ∇H(x) = h(x). Without loss of generality, we assume that the storage function H is proper, i.e., all level sets of H are compact.
Using the passivity assumption on Σ, it is immediate to see that u ≡ 0 implies that all level sets of H are positively invariant. More precisely, for any c > 0, if H(x(0)) ≤ c then H(x(t)) ≤ c for all t ≥ 0. In other words, if we initialize the state of Σ such that x(0) ∈ Ω_c := {ξ | H(ξ) ≤ c} with u ≡ 0, then x(t) ∈ Ω_c for all t ≥ 0. We will use this property later to establish the practical stability of our closed-loop systems, in conjunction with the following observability notion from (Hespanha et al., 2005).
Definition 1. The system (1) is large-time initial-state norm observable if there exist τ > 0 and γ, χ ∈ K_∞ such that the solution x of (1) satisfies

‖x(t)‖ ≤ γ(‖y‖_{[t,t+τ]}) + χ(‖u‖_{[t,t+τ]}) for all t ≥ 0,

for all x(0) ∈ R^n and all locally essentially bounded and measurable inputs u : R_{≥0} → R^m.
In this work, we will use the large-time initial-state norm observability property for the autonomous system (with u ≡ 0)

ẋ = f(x), y = h(x). (2)

In this case, large-time initial-state norm observability of (2) implies

‖x(t)‖ ≤ γ(‖y‖_{[t,t+τ]}) for all t ≥ 0. (3)

We note that in the standard passivity-based control literature, the notion of zero-state observability or zero-state detectability is typically assumed for establishing the convergence of the state to zero in the Ω-limit set. However, these notions cannot be used to conclude the boundedness of the state trajectories given a bound on the output trajectories. Therefore, instead of using these notions, we will use the above large-time initial-state norm observability for deducing practical stability based on the information on y in the Ω-limit set.
Remark 1. If the dynamics in system (2) are linear, that is, ẋ = Ax, y = Cx, and the pair (A, C) is observable, then one can quantify γ in (3) using the observability Gramian W(τ) := ∫_0^τ e^{A^T s} C^T C e^{As} ds, which is invertible for τ > 0. In particular,

x(t) = W(τ)^{−1} ∫_t^{t+τ} e^{A^T(s−t)} C^T y(s) ds, for each t ≥ 0 and τ > 0,

which in particular yields

‖x(t)‖ ≤ ‖W(τ)^{−1}‖ ∫_t^{t+τ} ‖e^{A^T(s−t)} C^T‖ ‖y(s)‖ ds

for each t ≥ 0 and any τ > 0.
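Remark 1 can be checked numerically. The sketch below is our own illustration (not from the paper): it takes the rotation pair A = [[0, −1], [1, 0]], C = [1, 0], for which e^{As} is a rotation matrix in closed form, builds the observability Gramian by trapezoidal quadrature, and recovers the initial state from the measured output via the reconstruction formula in the remark. The matrices, the window τ, and the grid size are all illustrative choices.

```python
import numpy as np

def expA(s):
    # e^{As} for A = [[0, -1], [1, 0]] (a rotation by angle s)
    return np.array([[np.cos(s), -np.sin(s)],
                     [np.sin(s),  np.cos(s)]])

C = np.array([[1.0, 0.0]])
x0 = np.array([2.0, -1.0])            # "unknown" initial state to recover
tau, N = np.pi, 2000                  # observation window and grid size
s = np.linspace(0.0, tau, N)
w = np.full(N, tau / (N - 1))         # trapezoidal quadrature weights
w[0] *= 0.5; w[-1] *= 0.5

# Observability Gramian W(tau) = int_0^tau e^{A's} C'C e^{As} ds
W = sum(wk * expA(sk).T @ C.T @ C @ expA(sk) for sk, wk in zip(s, w))

# Measured output y(s) = C e^{As} x0, then the reconstruction integral
y = [C @ expA(sk) @ x0 for sk in s]
v = sum(wk * expA(sk).T @ C.T @ yk for sk, yk, wk in zip(s, y, w))

x0_hat = np.linalg.solve(W, v)        # x(0) = W(tau)^{-1} * integral
```

Because the Gramian and the reconstruction integral are evaluated on the same quadrature grid, the discretization errors cancel and `x0_hat` matches `x0` to machine precision.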

Stabilization problem with limited control
We are interested in feedback stabilization of the system Σ described in (1) using the output measurements. The key element of our problem is that the input u can only take values in a discrete set, which is finite. Thus, the objective is to find a reasonable way to map the outputs (taking values in R^m) to a finite set such that the closed-loop system is stable in an appropriate sense. More formally, we address the following problem: Practical output-feedback stabilization with limited control (POS-LC): For a given system Σ as in (1) and for a given ball B_ε with ε > 0, determine the finite set U := {u_0, u_1, ..., u_p} ⊂ R^m with minimal cardinality, and describe the mapping φ : R^m → U, such that the closed-loop system of (1) with u = φ(y) satisfies x(t) → B_ε as t → ∞ for all initial conditions x(0) ∈ R^n.
In our problem formulation, both the construction of a discrete set U and the design of the stabilizing map φ constitute our control problem. Compared to the numerous works in the literature on quantized control, our task in solving the POS-LC problem is facilitated by the passivity structure, along with the appropriate observability notion. In particular, for our first results, we will work under the following basic assumption for solving POS-LC: (A0) The system Σ in (1) is passive with a proper and positive definite storage function H, and the corresponding autonomous system (2) is large-time initial-state norm-observable for some τ > 0 and γ ∈ K_∞.
Remark 2. In (A0), we require the storage function to be positive definite. In general, passivity of system (1) only implies the existence of a positive semidefinite storage function. However, if we add a zero-state-observability condition, then the resulting storage function is positive definite (Hill and Moylan, 1976, Lemma 1). In our setup, inequality (3) implies such an observability notion.

Set-valued analysis: Basic notions
In studying the aforementioned control problem, we recall some fundamental definitions from the literature on differential inclusions and convex polytopes, which will be useful for the analysis in later sections.

Regularized differential inclusions
It turns out that a mapping which maps outputs from a continuum to a discrete set of control actions is essentially discontinuous (with respect to the usual topology on R^m). Differential equations with such state-dependent discontinuities need regularization so that the solutions are properly defined. For a discontinuous map F : R^n → R^n, we can define a set-valued map K(F) by convexifying F as follows:

K(F)(x) := ∩_{δ>0} co(F(x + B_δ)),

where co(S) is the convex closure of S. The set-valued mapping K(F) is the Krasovskii regularization of F, and under certain regularity assumptions on F, K(F) is compact- and convex-valued, and moreover it is upper semicontinuous. For an upper semicontinuous mapping Φ : R^n ⇒ R^n, consider the differential inclusion

ẋ ∈ Φ(x). (4)

A Krasovskii solution x(·) on an interval I = [0, T), T > 0, is an absolutely continuous function x : I → R^n such that (4) holds almost everywhere on I. It is maximal if it has no right extension, and it is a global solution if I = R_{≥0}. For any upper semicontinuous set-valued map Φ such that Φ(ξ) is compact and convex for every ξ ∈ R^n, the following properties have been established (see, e.g., (Jayawardhana et al., 2011, Lemma 1)): (i) the differential inclusion (4) has a solution on an interval I; (ii) every solution can be extended to a maximal one; and (iii) if the maximal solution is bounded then it is global.

Convex polytopes
Next, we present the definition of convex polytopes and some of their notable examples that are related to our problem. We refer to (Okabe et al., 2009) and (Toth et al., 2017) for additional material on this topic. In general, there are two basic representations of convex polytopes. Firstly, the vertex representation of a convex polytope in R^m, commonly referred to as the V-representation, is an m-polytope defined by the convex hull of a finite set of points in R^m; i.e., for any set of points U ⊂ R^m, the V-representation of a convex polytope defined by U is given by P_V(U) := conv(U). Another way to define an m-polytope is by intersecting a finite number of half-spaces, commonly referred to as the H-representation, given by

P_H(A, b) := {ξ ∈ R^m | Aξ ≤ b},

for some A ∈ R^{q×m} and b ∈ R^q. When it is clear from the context, we will omit the arguments in P_V and P_H in the rest of this paper.
One simple example of an m-polytope is the m-dimensional simplex, commonly referred to as the m-simplex. For particular examples, a 1-simplex is a line segment, a 2-simplex is a triangle, and a 3-simplex is a tetrahedron. The formal definition of m-simplices is given by:

Definition 2 (m-simplex). Let S := {s_0, s_1, ..., s_m} with s_i ∈ R^m, i = 0, 1, ..., m, be an affinely independent set, i.e., for any s_i ∈ S, the set of vectors {s_j − s_i | s_j ∈ S, j ≠ i} is linearly independent. The m-simplex defined by S is the convex polytope conv(S).
Other notable examples of m-polytopes are the m-dimensional cubes (m-cubes) and the m-cross-polytopes. For a given λ ∈ R_{>0}, an m-cube C_m is given by C_m := {ξ ∈ R^m | ‖ξ‖_∞ ≤ λ}, while an m-cross-polytope is given by conv{±λe_1, ..., ±λe_m}. For our purposes, the utility of convex polytopes is seen in partitioning the output space R^m into a finite number of cells, each of which can then be associated to a control action. In particular, given a finite set S ⊂ R^m with card(S) = q, the space R^m can be partitioned into q cells, where every cell contains all points in R^m that are closer to one element of S than to any other element. Such cells are commonly referred to as Voronoi cells and are defined as follows.
Definition 3. Consider a countable set S ⊂ R^m. The Voronoi cell of a point s ∈ S is defined by

V_S(s) := {ξ ∈ R^m | ‖ξ − s‖ ≤ ‖ξ − s′‖ for all s′ ∈ S}.

Remark 3. Note that every Voronoi cell is a closed and convex polyhedron, since it can always be represented by the solution set of a system of linear inequalities.
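As a small self-contained sketch (our own helper, not part of the paper), membership in a Voronoi cell can be tested directly from Definition 3 by pairwise distance comparisons; each comparison corresponds to one half-space inequality, which is why the cell is a closed convex polyhedron, as noted in Remark 3.

```python
import numpy as np

def in_voronoi_cell(xi, s, S, tol=1e-12):
    """True if the point xi lies in the Voronoi cell V_S(s):
    xi is at least as close to s as to every other element of S."""
    d = np.linalg.norm(xi - s)
    return all(d <= np.linalg.norm(xi - sp) + tol for sp in S)

# Illustrative set S of three sites in R^2
S = [np.array(p) for p in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]]
print(in_voronoi_cell(np.array([0.2, 0.2]), S[0], S))   # point near the origin
```

Each call performs card(S) distance checks, so a point can be assigned to its cell (equivalently, to its nearest site) in linear time in the size of S.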

Nearest-Neighbor Control for Passive Systems
In this section, we provide our first solution for general passive systems when practical stabilization of the origin is required. The motivation behind our design is to work with a minimal number of elements in the set U that yield the desired performance using static output feedback only. Toward this end, the only assumption we associate with the set U is the following: (A1) For a given set U := {u_0, u_1, u_2, ..., u_p}, with u_0 = 0, there exists an index set I ⊂ {1, ..., p} such that the set V := {u_i}_{i∈I} ⊂ U defines the vertices of a convex polytope satisfying 0 ∈ int(conv(V)).
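Assumption (A1) can be verified computationally in the simplex case. The helper below is a sketch of our own (not from the paper): for m + 1 affinely independent points in R^m, the origin lies in the interior of their convex hull exactly when the (unique) barycentric coordinates of 0 are all strictly positive.

```python
import numpy as np

def origin_in_interior(V):
    """V: list of m+1 affinely independent points in R^m (a simplex).
    Returns True iff 0 lies in the interior of conv(V)."""
    V = np.asarray(V, dtype=float)            # shape (m+1, m)
    # Barycentric coordinates lam of the origin solve:
    #   sum_i lam_i * v_i = 0  and  sum_i lam_i = 1
    A = np.vstack([V.T, np.ones(len(V))])     # coordinates plus sum-to-1 row
    b = np.append(np.zeros(V.shape[1]), 1.0)
    lam = np.linalg.solve(A, b)
    return bool(np.all(lam > 0))

# 2-D example: three points placed symmetrically around the origin
V = [(1.0, 0.0), (-0.5, np.sqrt(3) / 2), (-0.5, -np.sqrt(3) / 2)]
print(origin_in_interior(V))   # True: the origin is the centroid of V
```

This is exactly the minimal configuration discussed in the paper: m + 1 = 3 nonzero actions in R^2 whose convex hull contains the origin in its interior.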
An immediate consequence of (A1) is the following lemma, which is used in the derivation of our forthcoming main result.
Lemma 1. Consider a discrete set U ⊂ R^m that satisfies (A1). Then there exists δ > 0 such that

V_U(0) ⊆ B_δ, (6)

that is, the following implication holds for each y ∈ R^m:

y ∈ V_U(0) ⟹ ‖y‖ ≤ δ. (7)

Proof. Based on Assumption (A1), consider the sets I := {1, ..., q} and V := {v_1, ..., v_q} ⊂ U such that q ≤ p and 0 ∈ int(conv(V)). Let S = V ∪ {0}. From the definition of Voronoi cells, it readily follows that V_U(0) ⊆ V_S(0), and therefore it suffices to show that V_S(0) ⊂ B_δ. Toward that end, we first observe that the Voronoi cell V_S(0) can be described as

V_S(0) = ∩_{i=1}^q {ξ ∈ R^m | ⟨v_i, ξ⟩ ≤ ½‖v_i‖²}. (8)

Thus, from (8), we know that V_S(0) is a closed convex polyhedron. It remains to show that V_S(0) is bounded; this follows from 0 ∈ int(conv(V)), since along any direction there is some v_i making a positive inner product, so the half-space constraints in (8) bound V_S(0) in every direction. Boundedness then implies that we can choose δ = max_{ξ∈V_S(0)} ‖ξ‖, so that B_δ is the smallest ball containing the set V_S(0), which by the definition of the Voronoi cell is equivalent to (7).

Unity output feedback
Using the result of Lemma 1 and the assumptions introduced thus far, we can define a feedback mapping φ which maps the measured outputs to the discrete set U to achieve practical stabilization. In this regard, we first consider the mapping φ : R^m ⇒ U, defined as

φ(y) := argmin_{u∈U} ‖u + y‖. (10)

The feedback control u = φ(y), with φ given in (10), can be seen as a quantized version of the unity output feedback. That is, when U is the continuum space R^m, the solution to the optimization problem (10) is none other than u = φ(y) = −y. This quantization rule φ maps −y to the nearest element in the set U with respect to the Euclidean distance. The partitions in the output space induced by such a quantization rule indeed result in Voronoi cells, and the resulting control law is hence discontinuous, taking a constant value in each of the Voronoi cells; see Fig. 1. By choosing u = φ(y), the closed-loop system is thus given by

ẋ = f(x) + g(x)φ(h(x)). (11)

As φ(y) is a non-smooth operator, we consider instead the following regularized differential inclusion

ẋ ∈ f(x) + g(x) K(φ ∘ h)(x). (12)

We note that the solution of (11) is basically interpreted in the sense of (12). In the following result, we analyze the asymptotic behavior of the solutions of (12) and show that they converge to B_ε, for a given ε > 0, if the elements of the set U satisfy certain conditions.

Proposition 1. Consider a nonlinear system Σ as in (1) satisfying (A0), and a discrete set U ⊂ R^m satisfying (A1) so that (6) holds for some δ > 0. If, for a given ε > 0,

γ(δ) ≤ ε, (13)

then the control law u = φ(y), with φ as in (10), globally practically stabilizes Σ with respect to B_ε.
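The quantized unity feedback in (10) can be illustrated on a toy passive system of our own choosing: the scalar integrator ẋ = u, y = x, which is passive with storage H = x²/2. The set U, the step size, and the horizon below are illustrative assumptions, and the simulation uses a simple forward-Euler discretization rather than the Krasovskii solutions analyzed in the text.

```python
def phi(y, U):
    """Nearest-neighbor law (10): map -y to its closest element of U."""
    return min(U, key=lambda u: abs(u + y))

# u0 = 0 plus m + 1 = 2 nonzero actions for m = 1, with 0 in int conv{-1, 1}
U = [0.0, 1.0, -1.0]

x, dt = 3.0, 0.01
for _ in range(1000):
    x += dt * phi(x, U)      # y = x, so the applied input is u = phi(y)

print(x)   # the state settles near the boundary of the zero cell (|y| <= 0.5)
```

Once the output enters the Voronoi cell of u_0 = 0 (here, |y| ≤ 0.5), the selected input is zero and the state stops moving, which is the practical-stability behavior established by Proposition 1.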
Proof. For a fixed y ∈ R^m, suppose that φ(y) = {u_i}_{i∈J_y} for some J_y ⊂ {0, 1, ..., p}. It follows from (10) that {u_i}_{i∈J_y} are the closest points to −y. Now, for each i ∈ J_y, we have that² (see also (8) for the construction of the following inequality using the definition of a half-space bounded by a hyperplane)

⟨u_i, y⟩ ≤ −½‖u_i‖².

Therefore, for each y ∈ R^m and u_i ∈ φ(y), i ∈ J_y, every element of ⟨φ(y), y⟩ is nonpositive, and strictly negative whenever u_i ≠ 0. Based on this property of ⟨φ(y), y⟩, we can now analyze the behavior of the closed-loop system given by (12).

² When u_i is the closest point to −y, we know that the inequality ‖u_i + y‖² ≤ ‖u_j + y‖² holds for all j ∈ {0, 1, ..., p}. By taking u_j = 0, and noting that ‖u_i + y‖² = ⟨u_i + y, u_i + y⟩ = ‖u_i‖² + 2⟨u_i, y⟩ + ‖y‖², we can conclude that ⟨u_i, y⟩ ≤ −½‖u_i‖².
For the storage function H associated with the open-loop system, we evaluate its derivative along the solutions of (12) in the following two cases, where W_y := φ(y) denotes the set of active control values at y. (i): 0 ∉ φ(y). Based on the computation of ⟨φ(y), y⟩ above, with nonzero elements in φ(y), it follows that

⟨y, conv(W_y)⟩ ⊆ [−u_{y,max}‖y‖, −½u_{y,min}²],

where we let u_{y,max} := max_{w∈W_y} ‖w‖ and u_{y,min} := min_{w∈W_y} ‖w‖. Therefore, Ḣ(x) ≤ −½u_{y,min}² when 0 ∉ φ(y). (ii): The other possibility is that 0 ∈ φ(y) = {u_0} = W_y, so that J_y = {0}. In this case, following the same arguments as in case (i), since {0} is the only element of W_y, we have ⟨y, conv(W_y)⟩ = {0}.
This implies that, for the case when φ(y) = {0}, we have Ḣ(x) = 0. Combining the two cases, it holds that for J_y ⊂ {0, 1, ..., p}, we have Ḣ(x) ≤ 0, and Ḣ(x) = 0 if and only if 0 ∈ φ(y). As H(x) is non-increasing along system trajectories in both cases (i) and (ii), and since H is proper, all system trajectories are bounded and contained in the compact set Ω_0 := {ξ ∈ R^n | H(ξ) ≤ H(x(0))}. Let Z_x := {ξ ∈ Ω_0 | 0 ∈ φ(h(ξ))}, and let M be the largest invariant set (with respect to system (12)) contained in Z_x. By the LaSalle invariance principle, all trajectories belonging to the compact set Ω_0 converge to the set M; see for example (Brogliato and Tanwani, 2020, Theorem 6.5). We next show that, because of the large-time norm observability and Lemma 1, it holds that M ⊂ B_ε. To see this, take an arbitrary point z ∈ M, and consider a solution of system (12) over an interval [s, s + τ] starting from z; that is, consider x : [s, s + τ] → R^n which solves (12) with x(s) = z ∈ M. Due to the forward invariance of the set M, the corresponding solution satisfies x(t) ∈ M for each t ∈ [s, s + τ]. Consequently, 0 ∈ φ(h(x(t))), and because of Lemma 1, ‖h(x(t))‖ ≤ δ for each t ∈ [s, s + τ]. Invoking the large-time initial-state norm-observability assumption, it holds that ‖x(s)‖ = ‖z‖ ≤ γ(δ) ≤ ε, where the last inequality is a consequence of (13). Since z ∈ M is arbitrary, it holds that M ⊂ B_ε.
In summary, we have shown that x(t) → B_ε as t → ∞ for all initial conditions x(0) ∈ R^n, and hence the desired assertion holds.
As a first application of Proposition 1, we are interested in specifying the invariant set when the set of control actions is described by a set of equidistant points along each axis of the output space.
Remark 4. In contrast to the choice of U in Example 2, where we used (9) to construct the discrete set U in R², the constant δ in Corollary 1 is less than max_{ṽ∈V} ‖ṽ‖. This is due to the choice of the set V in the proof of Corollary 1, which is dense enough that {y | φ(y) = {0}} ⊂ conv(V). From this corollary, one can conclude that two-level quantization with N = 1 suffices to obtain a global practical stabilization property for passive nonlinear systems. This binary control law, however, restricts the convergence rate of the closed-loop system: it converges to the desired compact ball in a linear fashion, which may not be desirable when the initial condition is very far from the origin. The use of a higher quantization level (e.g., N ≫ 1) can provide a better convergence rate when the system is initialized within the quantization range.

Sector bounded feedback
We next present a generalization of the result in Proposition 1, showing how the nearest-neighbor rule can be used to quantize more generic nonlinear feedback laws. In Proposition 1, when U is the continuum space R^m, the resulting control law is simply given by u = −y, i.e., it is a unity output feedback law. Using standard results in passive systems theory, the closed-loop system will satisfy Ḣ ≤ −‖y‖². Furthermore, the application of the LaSalle invariance principle with zero-state detectability allows us to conclude that x(t) → 0 asymptotically. As the underlying system is passive, we can in fact stabilize it with any sector-bounded nonlinear feedback of the form y ↦ −F(y), where

k_1‖y‖² ≤ ⟨F(y), y⟩ ≤ k_2‖y‖², ‖F(y)‖ ≤ k_3‖y‖, (15)

for all y ∈ R^m. There are a number of reasons for considering such feedback laws rather than the unity output feedback law. For instance, we can attain a prescribed L_2-gain disturbance attenuation level, or we can shape the transient behavior by adjusting the gains on different domains of y. In the following proposition, we consider such a sector-bounded output feedback law F(y), and show how the nearest-neighbor rule can be used to map such feedback into the limited control input set U so as to guarantee practical stabilization.
Proposition 2. Consider a nonlinear system Σ described by (1) that satisfies (A0), and a discrete set U ⊂ R^m satisfying (A1) so that (6) holds for some δ > 0. For the mapping φ given in (10), let µ_{min,1} ∈ (0, 1] be such that³ condition (16) holds for all z ∈ R^m. Assume that the constants k_1, k_2, k_3 describing the function F, as in (15), satisfy (17) for a given ε > 0. Then the control law u = φ(F(y)) globally practically stabilizes Σ with respect to B_ε.
Proof. We basically show that, for any y ∈ R^m, inequality (18) holds for some J_y ⊂ {0, 1, ..., p} such that φ(F(y)) = {u_i}_{i∈J_y} and κ_{i,y} > 0. The rest of the proof follows a pattern similar to that of Proposition 1. First, with φ(F(y)) = {u_i}_{i∈J_y}, suppose that 0 ∉ φ(F(y)), so that J_y ⊂ {1, ..., p}. It follows from (10) that {u_i}_{i∈J_y} are the closest points to −F(y), which yields a lower bound on ⟨u_i, F(y)⟩ analogous to the one in the proof of Proposition 1, with constants µ_{i,1} as in (20). Under the given hypothesis, µ_{min,1} ≤ µ_{i,1} for each i ∈ J_y.
On the other hand, since k_1‖y‖² ≤ ⟨F(y), y⟩ and ‖F(y)‖ ≤ k_3‖y‖, the minimum value of µ_2 (with respect to all choices of F that satisfy (15)) is given by µ_{min,2} = k_1/k_3. Now note that, in general, κ_{i,y} ∈ [−1, 1]. It can be shown that if (16), (17a), and (20) hold with µ_2 ∈ [µ_{min,2}, 1], then there exists κ_min > 0 such that κ_{i,y} ∈ [κ_min, 1]. For each y ∈ R^m and i ∈ J_y, we introduce the Gram matrix G_{i,y} of the vectors y, F(y), and u_i, which has the property that G_{i,y} ⪰ 0 and thus det(G_{i,y}) ≥ 0 (see also (Castano et al., 2016)). By rewriting this determinant inequality in terms of the respective norms and the constants µ_{i,1}, µ_2, and κ_{i,y}, we can establish that κ_{i,y} > 0 whenever condition (17a) is satisfied, by only investigating the case where κ_{i,y} ≤ µ_{i,1}µ_2. Note that the above arguments hold for all i ∈ J_y, and (18) holds for some κ_{i,y} > 0. Secondly, in case J_y = {0}, we have φ(F(y)) = {0} and ⟨φ(F(y)), y⟩ = {0}. Thus, (18) holds trivially since u_0 = 0.
Combining the two cases, we see that (18) holds for J_y ⊂ {0, 1, ..., p}. Following the same line of arguments as in the proof of Proposition 1, (18) implies that the storage function is non-increasing along the solutions of the closed-loop system, and the solutions converge to a set M, defined analogously to the proof of Proposition 1. Hence, for any trajectory starting with initial condition x(s) = z ∈ M, the corresponding output satisfies ‖F(y(t))‖ ≤ δ for all t ≥ s.
Since k_1‖y‖² ≤ ⟨F(y), y⟩ ≤ ‖F(y)‖‖y‖, this implies ‖y(t)‖ ≤ δ/k_1 for all t ≥ s. By the property of large-time initial-state norm-observability of (2), it holds that ‖z‖ ≤ γ(δ/k_1) ≤ ε, and this holds for each z ∈ M. Hence, M ⊆ B_ε and, in particular, each trajectory converges to B_ε as t → ∞.
Remark 5. The condition (17a) requires that the nonlinearity lie in a relatively thin sector bound. When F(y) = ky, i.e., a proportional controller with a scalar gain k > 0, the condition (17a) holds trivially, since µ_{min,1} > 0 and k_1/k_3 = k/k = 1. Consequently, it follows from this proposition that we can make the practical stabilization ball arbitrarily small by assigning a large gain k.
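Remark 5 can be illustrated numerically on a toy example of our own (not from the paper): for a scalar integrator with u = φ(ky) and U = {0, 1, −1}, the zero cell in the output space is |y| ≤ 1/(2k), so a larger proportional gain k leaves a smaller residual ball. The initial condition, step size, and horizon are illustrative choices.

```python
def phi(z, U=(0.0, 1.0, -1.0)):
    """Nearest-neighbor law (10): map -z to its closest element of U."""
    return min(U, key=lambda u: abs(u + z))

def residual(k, x0=2.0, dt=0.001, steps=5000):
    """Forward-Euler run of dx/dt = phi(k*x) on the integrator dx/dt = u,
    y = x; returns the residual |x| once the trajectory has settled."""
    x = x0
    for _ in range(steps):
        x += dt * phi(k * x)
    return abs(x)

print(residual(1.0), residual(10.0))   # residual shrinks roughly as 1/(2k)
```

With k = 1 the state settles near 0.5, and with k = 10 near 0.05, matching the shrinking zero cell |y| ≤ 1/(2k) predicted by the remark.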

An illustrative example
Example 3. Consider the following nonlinear system where x := x 1 x 2 x 3 ∈ R 3 and y := y 1 y 2 , u := It can be checked that by using the proper storage function H(x) = 1 2 x 2 1 + 1 2 x 2 2 + 1 4 x 4 3 , the system Σ ex is passive.Indeed, a straightforward computation gives us Ḣ = y, u .Note that the above system can be written as a nonlinear port-Hamiltonian system, describing a nonlinear RLC circuit (Castanos et al., 2009): We will now show that Σ ex satisfies the large-time initial-state norm observability condition.As the bound on x 3 for the large-time norm observability can directly be obtained from the output y, we need to compute the bound on [ x1  x2 ].If we consider the sub-system of [ x1 x2 ] with x 1 as its output (and is equal to y 1 ), it is a linear system with A = 0 −1 1 0 , B = [ 1 0 ], C = 1 0 and its input is x 3 3 = y 2 .Thus as (A, C) is observable, the observability Gramian is given by Simulation results of Σex using the control approach proposed in the Proposition 1 with discrete input set Uex as in ( 9) and fixed parameters θex = 0 and α = 0.1.It can be seen that once both the state x and the output y enters their respective convergence ball, the control input is zero.
where ∗ denotes the convolution operation and H is the convolution matrix kernel given by H(t) = Ce^{At}. Since ‖e^{At}‖ = 1 for all t, the bound on [x₁ x₂]ᵀ follows. Since, by the definition of y, ‖x₃‖_{[t,t+π]} = ‖y₂^{1/3}‖_{[t,t+π]} ≤ ‖y‖^{1/3}_{[t,t+π]}, it follows from the inequality above that, in other words, the function γ in (3) is given by γ(s) = 4s + s^{1/3}. We can now use the results in Proposition 1 to practically stabilize Σ_ex. We choose the control set to be U_ex given in (9), and the desired stability margin to be ε = 1. Then, based on the function γ computed for the system Σ_ex, we get γ(δ) < 1 if δ ∈ (0, 1/8). Using the same discrete set as in (9) along with the function φ as in (10), we can fix θ_ex = 0 and α = 0.1 such that the system Σ_ex is globally practically stable with respect to B_ε, with ε = 1, as shown in the simulation results in Figure 2.
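The claim that γ(δ) < 1 exactly on δ ∈ (0, 1/8) can be checked numerically. The sketch below evaluates the gain γ(s) = 4s + s^{1/3} computed for Σ_ex and verifies the boundary value (the function name is ours).

```python
# Numerical check of the norm-observability gain computed for Sigma_ex:
# gamma(s) = 4*s + s**(1/3).  The margin eps = 1 is reached exactly at
# delta = 1/8, since 4/8 + (1/8)**(1/3) = 0.5 + 0.5 = 1.

def gamma(s: float) -> float:
    """Output-to-state gain gamma in (3) for the example system."""
    return 4.0 * s + s ** (1.0 / 3.0)

# Boundary value: gamma(1/8) = 1.
assert abs(gamma(0.125) - 1.0) < 1e-9

# gamma is strictly increasing, so any delta below 1/8 stays below eps = 1.
for delta in (0.01, 0.05, 0.1, 0.124):
    assert gamma(delta) < 1.0
print("gamma(1/8) =", gamma(0.125))
```

Monotonicity of γ is what lets us pick the largest admissible δ for a given margin ε, which is how δ is chosen again in the later sections.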

Nearest-Neighbor Control for Incrementally Passive Systems With Constant Inputs
In many cases, the desired equilibrium point of the passive nonlinear system Σ as in (1) is not the minimum of the associated storage function H. Instead, it may correspond to an arbitrary constant input. For these cases, a constant input u* ∈ R^m with its corresponding steady-state solution x* ∈ R^n defines the steady-state relation E. The problem of practically stabilizing the system Σ around x* ∈ R^n is equivalent to practically stabilizing x̃ := x − x* around the origin, with (·)̃ := (·) − (·)* denoting the incremental variable; the resulting incremental system Σ̃ is given in (23). For this matter, the passivity of the mapping ũ → ỹ is, in the original system Σ, referred to as incremental passivity with respect to constant inputs, and is defined as follows (Jayawardhana et al., 2007).
Definition 4 (Constant Incremental Passivity). Consider the nonlinear system Σ as in (1). The system Σ is said to be incrementally passive with respect to constant inputs if, for every (x*, u*) ∈ E, the corresponding incremental system Σ̃ in (23), with input ũ and output ỹ, is passive; that is, there exists a storage function H₀ : R^n → R≥0 such that Ḣ₀ = ⟨∇H₀, ẋ⟩ ≤ ⟨ũ, ỹ⟩.
Note that incremental passivity is a stronger requirement than the passivity notion considered in Section 3; in particular, one can find examples of systems that are passive but not incrementally passive. Also, the constant incremental passivity defined above is equivalent to shifted passivity as in (Monshizadeh et al., 2019; van der Schaft, 2016) and to equilibrium-independent passivity as in (Hines et al., 2011). Nevertheless, the term constant incremental passivity is preferred in this paper because the pair (x*, u*) can be arbitrary and, most importantly, the incremental function is used in the definition. In the remainder of this section, we study stabilization of incrementally passive systems with a finite set of control actions.

Steady-state u * ∈ U
In the case of constant incremental passivity, the corresponding constant input u* is often known from knowledge of the nominal system (1). We can then simply design the finite input set U such that it contains u*. It is thus natural to adapt assumption (A1) to the current setting, which brings us to the following proposition.
Proposition 3. Consider the system Σ as in (1) and a finite set of control actions U = {u₀, u₁, . . ., u_p} ⊂ R^m. Assume that: (A2) Σ is constant-incrementally passive with a proper storage function H₀(x, x*) for every pair (x*, u*) ∈ E; (A3) u* ∈ U, with u₀ = u*, and there exists a subset V of U such that u* ∈ int(conv(V)); and (A4) the autonomous incremental system Σ̃ with u = u* is large-time initial-state norm-observable, i.e., there exist τ > 0 and γ ∈ K∞ such that the solution of the autonomous incremental system Σ̃|_{u=u*} satisfies ‖x̃(t)‖ ≤ γ(‖ỹ‖_{[t,t+τ]}) for all x̃(0) ∈ R^n and t ≥ 0.
The proof of Proposition 3 can be developed similarly to the proof of Proposition 1, by considering the set Ũ := U − u*, obtained by shifting the original input set U so that u* becomes the origin of the input/output space of the constant incremental system. This means that we can use the constant-incremental nearest-neighbor map φ so that the constant incremental system has the same structure as (1). The rest of the proof then follows from the proof of Proposition 1. Finally, since the output and state variables of the constant incremental system converge to B_δ and B_ε, respectively, as t → ∞, we can conclude practical stability, i.e., ỹ → B_δ and x → B_ε(x*) as t → ∞.
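The shifted-set idea above can be sketched in code. The snippet below is a minimal, hypothetical selector: the paper's actual map φ in (10) also involves the parameters θ and α, which are not reproduced here, and the choice of −y as the "ideal" continuous feedback is our assumption for illustration.

```python
import numpy as np

# Hypothetical constant-incremental nearest-neighbor selector.  Given the
# measured output y and a finite input set U containing u_star, we work in
# shifted coordinates u_tilde = u - u_star and pick the realizable action
# whose shifted value is nearest to the ideal passivity-based feedback -y.

def nearest_neighbor_input(y: np.ndarray, U: np.ndarray,
                           u_star: np.ndarray) -> np.ndarray:
    U_shifted = U - u_star                  # shift so u_star becomes the origin
    target = -np.asarray(y, dtype=float)    # ideal continuous feedback (assumption)
    idx = np.argmin(np.linalg.norm(U_shifted - target, axis=1))
    return U[idx]                           # realizable action, original coordinates

# Usage: 2-D input space, u_star plus three shifted points surrounding it.
u_star = np.array([1.0, -1.0])
U = u_star + np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
print(nearest_neighbor_input(np.array([0.0, 0.0]), U, u_star))  # picks u_star itself
```

Note that when the output has converged (y = 0), the selector returns u* itself, matching the switching behavior seen in the simulations of Figure 3.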

Sector bounded feedback
Similar to the results in the previous section, a sector-bounded nonlinear mapping F satisfying (15) can easily be included in the constant-incrementally passive case; this is due to the relation given by (26). Then the following proposition holds.

Revisiting an illustrative example
Example 3 (Continued). Consider the nonlinear system Σ_ex along with the associated storage function H(x) as in Example 3. It can be shown easily (following the main results in (Jayawardhana et al., 2007)) that Σ_ex is constant-incrementally passive. Indeed, for any (x*, u*) ∈ E, we can define a storage function H₀ which has a unique global minimum at x* and is related to the original storage function H(x) as in (Jayawardhana et al., 2007). It follows immediately that Ḣ₀ = ⟨ỹ, ũ⟩.
We will now show that the autonomous incremental system of Σ_ex satisfies the large-time initial-state norm-observability condition. The function γ is computed by considering u = u* for all t ≥ 0, as follows. Consider the incremental system of Σ_ex with u = u* for all t ≥ 0, i.e., Σ̃_ex|_{u=u*}.
Following the computation in Example 3, we first compute the bound on the subsystem [x̃₁ x̃₂]ᵀ by considering ỹ₂ as the input and ỹ₁ = x̃₁ as the output. Then we have a linear system with Ã = A = [0 −1; 1 0], B̃ = B = [1 0]ᵀ, C̃ = C = [1 0]. Hence, a routine computation similar to the one before gives the corresponding bound. Accordingly, for x̃₃, we have that x₃² + x₃x₃* + (x₃*)² ≥ (3/4)(x₃*)² for all x₃. Hence, |x̃₃| ≤ (4/(3(x₃*)²))|ỹ₂|. In other words, the large-time initial-state norm-observability function is given by γ(s, x₃*) = 4s + (4/(3(x₃*)²))s. We can now use the results in Proposition 3 to practically stabilize Σ_ex around an arbitrary steady-state relation (x*, u*) ∈ E, with ε = 0.5. Then, by the large-time initial-state norm-observability property of the incremental system, we can choose δ = 0.1 to generate the discrete set of control actions. In this case, we can translate the previously used discrete set such that u* is among the realizable control actions, i.e., Ū_ex := U_ex + u*, with U_ex the discrete input set used in Example 3. The resulting control law with the mapping φ is demonstrated in Figure 3.

Figure 3: Simulation results of Σ_ex using the control approach proposed in Proposition 3 with discrete input set Ū_ex := U_ex + u*, where U_ex is the input set used in Example 3. Here we have that u* ∈ Ū_ex. Similar to before, in this simulation, once both the state x and the output y enter their respective convergence balls, the control action is switched to u* for the rest of the simulation.
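The bound on the cubic nonlinearity used for x̃₃ (our reconstruction of the garbled inequality in the source) can be verified numerically: since x² + xa + a² = (x + a/2)² + (3/4)a² ≥ (3/4)a², factoring x³ − a³ = (x − a)(x² + xa + a²) yields |x − a| ≤ (4/(3a²))|x³ − a³| for a ≠ 0.

```python
import numpy as np

# Numerical check of the cubic-nonlinearity bound with a = x3_star:
# |x - a| <= 4/(3*a**2) * |x**3 - a**3| for all x, whenever a != 0.

rng = np.random.default_rng(0)
a = 2.0                                    # sample steady-state value x3_star
for x in rng.uniform(-5.0, 5.0, size=1000):
    lhs = abs(x - a)
    rhs = 4.0 / (3.0 * a**2) * abs(x**3 - a**3)
    assert lhs <= rhs + 1e-12              # bound holds at every sample
print("cubic bound verified on 1000 samples")
```

This is the step that makes the incremental observability gain depend on the steady state x₃*, unlike the gain γ(s) = 4s + s^{1/3} of the non-incremental case.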

Minimal Control Actions: Constructions and Bounds
In the earlier sections, we have shown that the nearest-neighbor approach is a powerful tool for global practical stabilization of passive nonlinear systems. Indeed, when we are given a limited choice of static control inputs, assumptions (A1) and (A3) provide a way to check whether the given set of inputs is applicable, by means of the nearest-neighbor approach, for the practical stabilization problem. If these assumptions hold for a finite set U, then it is of interest to compute the smallest number δ > 0 associated with the Voronoi cell V_U(u*) such that V_U(u*) ⊂ B_δ(u*). Since our control design achieves convergence up to a ball of radius γ(δ), with γ(·) being the output-to-state gain in the large-time initial-state norm-observability assumption, the knowledge of δ determines how close the trajectories can get to the desired equilibrium with our proposed controller. Let us recall assumption (A1) and its generalization (A3), where we assume that, for a finite set U, the desired equilibrium satisfies u* ∈ int(conv(V)). To obtain U of minimal cardinality, the following result, borrowed from (Brondsted, 1983, Corollary 9.5), is of interest: Lemma 2. For a finite set S ⊂ R^m, the minimal cardinality of S such that int(conv(S)) ≠ ∅ is equal to m + 1.
An immediate consequence is that, for practical stabilization of passive systems, it suffices to consider a control set U with m + 2 elements (including u * ), provided they satisfy a certain geometric configuration.
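Lemma 2 can be illustrated with an explicit family of m + 1 points whose convex hull has the origin in its interior (the particular choice of points below is ours): take the m standard basis vectors together with −(1, . . ., 1). The origin is then a strictly positive convex combination of these points.

```python
import numpy as np

# Illustration of Lemma 2: in R^m, m + 1 points suffice for a convex hull
# with nonempty interior.  For S = {e_1, ..., e_m, -(1,...,1)} we have
# 0 = sum_i (1/(m+1)) * s_i with all coefficients strictly positive, so the
# origin lies in int(conv(S)).  Shifting by u_star gives the geometric
# content of assumption (A3).

m = 4
S = np.vstack([np.eye(m), -np.ones((1, m))])   # (m+1) x m array of vertices
coeffs = np.full(m + 1, 1.0 / (m + 1))         # strict convex combination
assert np.allclose(coeffs @ S, np.zeros(m))    # origin in barycentric form
assert np.isclose(coeffs.sum(), 1.0) and np.all(coeffs > 0)
print("0 is a strictly positive convex combination of", m + 1, "points in R^", m)
```

Adding u* itself to this set gives the m + 2 control actions mentioned above.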
Since m ≥ 1, the inequality (33) can simply be rewritten accordingly. Next, we observe that each of the vertices of the Voronoi cell V_{S_reg∪{0}}(0) can be obtained by solving m equations taken from (32) and/or (33). Let V be the set of all vertices of V_{S_reg∪{0}}(0). Then V = {(λ/2)ṽᵢ}, with ṽᵢ being a column vector whose i-th element is 2 − m − √(m+1) and whose other m − 1 elements are 1. Therefore, the minimum value of δ for which V_{S_reg∪{0}}(0) ⊂ B_δ is given by the norm of these vertices, which is the desired expression.
Next, let us consider the regular m-simplex centered at the origin with vertices S⁰_reg. Proof. Let us denote the set S := S⁰_reg ∪ {0}. Then, by following the same proof as before, the set V_S(0) is equal to the solution set of a system of inequalities. Since all points in S⁰_reg have the same distance from the origin, we can pick any set of m equations from these inequalities in order to obtain one of the vertices of V_S(0). Let us choose all m equations from (34), because they have a nice symmetric structure, with a = m + √(m+1). Since A is symmetric, we can find a symmetric matrix A⁻¹ such that AA⁻¹ = A⁻¹A = I_{m×m}. Via a routine computation, we obtain A⁻¹, and therefore the minimum bound on the set V_S(0), which completes the proof.
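The routine computation of A⁻¹ exploits the fact that the inverse of a symmetric matrix of the form pI + q(11ᵀ − I) has the same structure, bI + c(11ᵀ − I); this follows from the Sherman-Morrison formula. The sketch below only verifies the structure numerically for sample values (p and q are placeholders, not the paper's constants).

```python
import numpy as np

# Structured inverse check: A = p*I + q*(11^T - I) has an inverse of the
# same form, A^{-1} = b*I + c*(11^T - I), with constant diagonal b and
# constant off-diagonal c (valid when p != q and p + (m-1)*q != 0).

m, p, q = 3, 5.0, 1.5
ones = np.ones((m, m))
A = p * np.eye(m) + q * (ones - np.eye(m))
Ainv = np.linalg.inv(A)

b, c = Ainv[0, 0], Ainv[0, 1]                  # read off diagonal/off-diagonal
assert np.allclose(Ainv, b * np.eye(m) + c * (ones - np.eye(m)))
assert np.allclose(A @ Ainv, np.eye(m))
print("A^-1 has the structured form with b =", b, "and c =", c)
```

This structure is what allows the vertex of V_S(0), and hence the bound δ, to be written in closed form.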
We have shown in Lemma 3 and Lemma 4 above that, for the two types of discrete sets whose elements form the vertices of regular m-simplices, the minimum bounds of the Voronoi cell of the origin can be computed in closed form. Now, for a given incrementally passive system Σ, an admissible reference input u* with large-time norm-observability function γ ∈ K∞ when u = u*, and a given stability margin ε > 0, the value of the bound δ can be chosen as large as possible such that γ(δ) ≤ ε. Thus, for a given ε > 0, norm-observability function γ of the system Σ, and a desired rotation matrix R, we can choose δ > 0 that satisfies γ(δ) ≤ ε and construct a minimal set U ⊂ R^m that satisfies (A3) as U := (R S⁰_reg ∪ {0}) + u*. The same discrete set can be constructed by using U := (R S_reg(0) ∪ {0}) + u*, by fixing α = δ.

In summary, we have shown that our proposed control laws are able to stabilize the systems up to a desirable distance from the equilibrium. In addition, our results provide insight into the lower bound on the number of control elements that guarantees practical stability. We have also provided methods to design the finite set of control actions with minimal cardinality. Questions related to improving the convergence rate with more (than necessary) control elements and/or to eliminating the chattering effects are being investigated as further directions of research.
and we say that b_{S_m} = (1/(m+1)) Σᵢ₌₀^m sᵢ is its barycenter. Example 1. One special case of m-simplices is a regular m-simplex S_{m,reg}, where all vertices have equal distances to its barycenter; one possibly simple choice for such a simplex is S_{m,reg} := conv{λ e₁, . . ., e_m, …}.
Figure 2: Simulation results of Σ_ex using the control approach proposed in Proposition 1 with discrete input set U_ex as in (9) and fixed parameters θ_ex = 0 and α = 0.1. It can be seen that once both the state x and the output y enter their respective convergence balls, the control input is zero.

Lemma 4. Consider the vertices of a regular m-simplex centered at the origin, S⁰_reg = S_reg − b_{S_reg}, where S_reg and b_{S_reg} are defined in (30) and (31). Then the bound δ > 0 such that V_{S⁰_reg∪{0}}(0) ⊂ B_δ is given in closed form via A⁻¹ = bI + c(11ᵀ − I), whose main diagonal elements are b.
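The construction U := (R S⁰_reg ∪ {0}) + u* described above can be sketched as follows. The helper name and the particular simplex used are ours: for simplicity we center the standard-basis simplex at its barycenter rather than using an exactly regular one, which still places u* strictly inside the convex hull of the other m + 1 points.

```python
import numpy as np

# Sketch of the minimal-set construction: take a centered m-simplex, rotate
# it by R, append the origin, and shift everything by u_star.  The result
# has m + 2 elements, and u_star is the barycenter of the m + 1 rotated
# vertices, hence u_star lies in the interior of their convex hull.

def minimal_input_set(u_star: np.ndarray, R: np.ndarray,
                      scale: float = 1.0) -> np.ndarray:
    m = u_star.shape[0]
    S = np.vstack([np.eye(m), -np.ones((1, m)) / np.sqrt(m)]) * scale
    S0 = S - S.mean(axis=0)                 # center the simplex at its barycenter
    return np.vstack([S0 @ R.T, np.zeros(m)]) + u_star

theta = np.pi / 6                            # sample rotation in R^2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
U = minimal_input_set(np.array([1.0, 2.0]), R, scale=0.1)
print(U.shape)                               # (m + 2, m)
```

Scaling the simplex by δ (i.e., fixing α = δ) keeps the Voronoi cell of u* inside B_δ(u*), so the practical-stability radius γ(δ) ≤ ε is preserved.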