The Collective Dynamics of a Stochastic Port-Hamiltonian Self-Driven Agent Model in One Dimension

The collective motion of self-driven agents is a phenomenon of great interest in interacting particle systems. In this paper, we develop and analyze a model of agent motion in one dimension with periodic boundaries using a stochastic port-Hamiltonian system (PHS). The interaction model is symmetric and based on nearest neighbors. The distance-based terms and kinematic relations correspond to the skew-symmetric Hamiltonian structure of the PHS, while the velocity difference terms make the system dissipative. The originality of the approach lies in the stochastic noise that plays the role of an external input. It turns out that the stochastic PHS with a quadratic potential is an Ornstein-Uhlenbeck process, for which we explicitly determine the distribution for any time $t\ge 0$ and in the limit $t\to\infty$. We characterize the collective motion by showing that the agents' mean velocity is Brownian noise whose fluctuations increase with time, while the variances of the agents' velocities and distances, which quantify the coordination of the agents' motion, converge. The motion model does not specify a preferred direction of motion. However, assuming an equilibrium uniform starting configuration, the results show that the noise rapidly triggers coordinated agent motion determined by the Brownian behavior of the mean velocity. Interestingly, simulation results show that some theoretical properties obtained with the Ornstein-Uhlenbeck process also hold for the nonlinear model with general interaction potential.


Introduction
Collective motion of self-driven agents is the spontaneous formation of coordinated behavior in a common direction of motion. Collective motion and swarming behavior can be observed in various systems of living organisms (schools of fish, flocks of birds, herds of animals, colonies of bacteria), in crowds and road traffic, or in non-living active systems, such as microswimmers and other self-driven particle systems (see, e.g., the reviews [50,32,43]). The large variety of systems in which collective motion can be observed is fascinating. It suggests that the phenomenon is universal. It is then interesting to formulate parsimonious motion models that reproduce the spontaneous coordination of the dynamics. This field of research is largely developed in statistical physics and the physics of active matter [1,2,22,38,43,27].
The collective motion of a multi-agent system is generally characterized using the ensemble mean velocity of the agents as an order parameter [50]. One of the most famous microscopic models describing collective motion is the Vicsek model introduced by Tamás Vicsek et al. in the 1990s [49] and its extensions [6,15,34]. This model shows a phase transition from a disordered state to large-scale ordered motion in one [12] and two dimensions [49]. Nowadays, many studies report on phase transitions to collective motion using multi-agent and self-driven particle systems, including various topological and metric interaction fields and noise models [3,35,24,33,14]. In traffic engineering, the study of long-term dynamics, the so-called collective behavior, and its control is a current field of research. Initial work from the 1950s [5,23,26,37] showed that nonlinear follow-the-leader models can exhibit severe stability problems.
However, this topic has recently returned to the focus of public interest in connection with automated driving. For example, driving assistance systems with adaptive cruise control (ACC) can lead to unstable group dynamics of vehicles [7,25,31,44]. Thus, it is of great interest to make future driving assistance systems both efficient and robust to disturbances using special stabilization techniques, based mostly on anticipation approaches [28,46,51].
In the 1980s, pioneering work by Arjan van der Schaft introduced port-Hamiltonian systems (PHS) for modeling nonlinear physical systems [47,48]. In contrast to usual Hamiltonian systems, PHS, in addition to describing the Hamiltonian dynamics, also allow control and external factors to be taken into account through input/output ports and enable direct calculation of the system output. Similarly, systems from different physical domains (so-called multi-physics systems) can be formulated as PHS, and the proper coupling of PHS again yields a PHS [39]. It is this functional structure of the PHS, which separates the modeling into conserved quantities, dissipation, input, and output, that makes it an extremely useful representation of many systems.
There are several ways to impart randomness to PHS. Several works in the literature use explicit, finite-dimensional input-state-output PHS and study the impact of white noise perturbations, resulting in the state dynamics
$$\mathrm{d}X(t) = \big[\big(J(X(t)) - R(X(t))\big)\nabla H(X(t)) + g(X(t))\,u(t)\big]\,\mathrm{d}t + \sigma(X(t))\,\mathrm{d}W(t)$$
and output $y(t) = g^{\top}(X(t))\nabla H(X(t))$. This approach was put forward in [42] and was subsequently extended in, e.g., [8,19,30,41]. A key observation for this approach is that, although the infinitesimal influence of the noise on the state is zero in expectation, it nevertheless increases the mean energy of the system at the rate $\operatorname{tr}(\sigma^{\top}(\operatorname{Hess}H)\sigma)/2$, and therefore these stochastic extensions are no longer inherently passive. First proposals for incorporating randomness into PHS through the implicit formalism of Dirac structures were recently made in [9] and [29].
In this paper, we propose a stochastic motion model of agents that can be formulated as a stochastic port-Hamiltonian system. More precisely, we consider the motion of N ∈ N agents on a ring of length L > 0. In our model the infinitesimal change of velocity of agent n ∈ {1, ..., N} depends linearly on the velocity differences to the two direct neighbors. In particular, agent n accelerates if her speed is smaller than the mean speed of the two direct neighbors and slows down otherwise. Moreover, her infinitesimal change of velocity depends on the distances to the two direct neighbors. We introduce a convex potential U : R → [0, ∞) that dictates how agent n reacts to the distances. The bigger the distance to the agent in front and the smaller the distance to the follower, the higher the acceleration, and vice versa. In particular, the agent dynamics are symmetric, with no preferred direction of motion. This deterministic law of motion is perturbed by agent-individual white noise processes.
We show that our model fits into the stochastic PHS framework outlined above and analyze its Hamiltonian behavior. In the case of a quadratic potential we perform an explicit analysis of the distributional properties of the system. In particular, we characterize its long-time limit in closed form using an eigendecomposition of the system matrix and results on so-called Dowker's sums. We find that the ensemble mean velocity diverges while the ensemble variances of the agent velocities and distances converge to steady-state distributions, thereby establishing collective motion. In contrast to classical approaches, this collective motion is purely noise-induced and does not result from a phase transition. More precisely, since the motion model is linear in the case of a quadratic potential, the collective motion can be analyzed explicitly. Interestingly, the simulation experiments show that some of the theoretical results seem to remain valid for general potentials and nonlinear interaction terms.
The paper is organized as follows. In Section 2 we introduce the agent motion model under consideration jointly with its formulation as a port-Hamiltonian system. In Section 3 we consider the case of a quadratic potential for which the system is an Ornstein-Uhlenbeck process. Finally, we present in Section 4 some illustrative simulation results that support our theoretical findings.
General notations: Throughout this article we use the following notations: For N ∈ N the vector $\mathbf{1} = (1, 1, \dots, 1)^{\top} \in \mathbb{R}^N$ denotes the vector consisting of ones and $I \in \mathbb{R}^{N \times N}$ denotes the N-dimensional identity matrix. We denote by $i \in \mathbb{C}$ the imaginary unit. For $a, b \in \mathbb{R}$ we denote by $\overline{a + ib} = a - ib$ the complex conjugate and by $\operatorname{Re}(a + ib) = a$ the real part of the complex number $a + ib \in \mathbb{C}$. For a matrix $A \in \mathbb{C}^{N \times N}$ we denote by $A^* = \bar{A}^{\top}$ its conjugate transpose.

Definition of the Model and its Hamiltonian Behavior
Let us first start with defining the model under consideration.

Notations
We consider $N \in \{3, 4, \dots\}$ agents on a segment of length L > 0 with periodic boundaries. We denote by $q(t) = (q_n(t))_{n=1}^N$ and $p(t) = (p_n(t))_{n=1}^N$ the positions and velocities of the agents at time t, respectively. We assume that the positions q(0) and the velocities p(0) of the N agents at time t = 0 are known, and that the positions of the agents are initially ordered by their indices, i.e.,
$$q_1(0) < q_2(0) < \dots < q_N(0) < q_1(0) + L. \qquad (1)$$
Note. We systematically use in the following the index n + 1 for the nearest neighbor on the right and n − 1 for the nearest neighbor on the left. Note that the right neighbor of the N-th agent is the first agent, i.e., n + 1 = 1 if n = N, and conversely, the left neighbor of the first agent is the N-th agent, i.e., n − 1 = N if n = 1.

The distances of the agents to their immediate right neighbors are
$$Q_n(t) = q_{n+1}(t) - q_n(t) \pmod L, \qquad n \in \{1, \dots, N\}. \qquad (2)$$
The distance to the left is $Q_N$ for the first agent and $Q_{n-1}$ for the n-th agent, $n \in \{2, \dots, N\}$. The index order of the agents at time zero makes the initial distances positive.

Agent Motion Model
To formulate our stochastic motion model, we introduce a probability space (Ω, F, P) on which there exists an N-dimensional standard Brownian motion $W = (W_n)_{n=1}^N$. The motion model reads for the n-th agent at time t ∈ [0, ∞)
$$\mathrm{d}Q_n(t) = \big(p_{n+1}(t) - p_n(t)\big)\,\mathrm{d}t, \qquad \mathrm{d}p_n(t) = \Big[\beta\big(p_{n+1}(t) - 2p_n(t) + p_{n-1}(t)\big) + U'(Q_n(t)) - U'(Q_{n-1}(t))\Big]\,\mathrm{d}t + \sigma\,\mathrm{d}W_n(t), \qquad (3)$$
with β ∈ (0, ∞) a dissipation rate, σ ∈ R the noise volatility and U′ the derivative of a convex potential U : R → [0, ∞). Stronger regularity assumptions may be necessary to guarantee the existence of the solution. A common choice is the quadratic function $U(x) = (\alpha x)^2/2$, $x \in \mathbb{R}$, for some α ∈ (0, ∞).
Remark 1. Equation (3) describes the motion model in a Hamiltonian fashion. An asymmetric model with the same characteristics has recently been introduced for modeling stop-and-go waves in traffic flow [40]. The first line in (3) is simply the differential version of (2). The second line in (3) models the stochastic law of motion of the N agents. The acceleration $\mathrm{d}p_n$ of agent n depends on the velocities of her direct neighbors and the distances to her direct neighbors. Let us first consider the dependence on the velocities. The acceleration $\mathrm{d}p_n$ of agent n increases linearly in the difference $p_{n+1} - p_n$ between her right neighbor's velocity and her own velocity and decreases linearly in the difference $p_n - p_{n-1}$ between her own velocity and her left neighbor's velocity (both with slope β > 0). In particular, if her velocity coincides with the mean velocity of her two neighbors (i.e., $p_n(t) = (p_{n+1}(t) + p_{n-1}(t))/2$), then the velocity term does not contribute to the acceleration of agent n. Concerning the dependence on the distances to the direct neighbors, we remark that the acceleration increases the higher the distance $Q_n$ to the right neighbor and decreases the higher the distance $Q_{n-1}$ to the left neighbor. The precise dependence on the distances is dictated by the derivative of the convex potential U. If agent n is located exactly in the middle between its neighbors (i.e., $Q_n(t) = Q_{n-1}(t)$), then the distance term does not contribute to the acceleration of agent n. From the description so far, we see that this deterministic system is in equilibrium if all agents move at the same speed and have equidistant positions. The last term in the second line of (3), however, introduces a stochastic perturbation by white noise which brings the system out of equilibrium.
Remark 2. Note that the motion model is symmetric: the interaction model with the neighbors is identical whether the agent moves to the right (velocity positive) or to the left (velocity negative). There is no preferred direction of motion.
Remark 3. The velocity dynamics depend only on the relative positions (distances) and the relative velocities of the two nearest neighbors. It is therefore more convenient to represent the system by the distance and velocity variables (Q, p) instead of (q, p). Note that in the (Q, p) representation, the relation $\sum_{n=1}^N Q_n(t) = L$ holds at any time t ∈ [0, ∞) because of the periodic boundaries. In particular, for any n ∈ {1, ..., N} the distance $Q_n$ can be derived from the other distances $Q_l$, l ∈ {1, ..., N} \ {n}, and thus the linear equations describing the dynamics of Q in (3) are linearly dependent.
Remark 4. The ensemble's mean velocity $\bar p(t) = \frac1N\sum_{n=1}^N p_n(t)$ is a Brownian motion with variance parameter σ²/N for any potential function U. In fact, thanks to the telescopic form of the model (3) and the periodic boundaries, we have $\mathrm{d}\bar p(t) = \frac{\sigma}{N}\sum_{n=1}^N \mathrm{d}W_n(t)$.

Remark 5. We note that within our model it is possible that the initial ordering (1) is not preserved at all times and that collisions happen, i.e., for each t ≥ 0 there exists with positive probability n ∈ {1, ..., N} such that $Q_n(t) \le 0$. This is due to the fact that each agent's velocity is driven by an individual Brownian motion. Incorporating non-collision measures makes the system analysis more challenging and constitutes an interesting question for future research. We remark that the probability of collisions and overtaking shrinks as α and β increase and as σ decreases. Our explicit analysis in Section 3 in the case of a quadratic potential allows for a quantification of these probabilities in the steady-state distribution.
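The telescoping computation behind Remark 4 can be written out as follows (a sketch consistent with the drift of model (3)):

```latex
\mathrm{d}\bar p(t)
  = \frac{1}{N}\sum_{n=1}^{N}\mathrm{d}p_n(t)
  = \frac{1}{N}\sum_{n=1}^{N}\Big[\beta\big(p_{n+1}(t)-2p_n(t)+p_{n-1}(t)\big)
      + U'(Q_n(t)) - U'(Q_{n-1}(t))\Big]\,\mathrm{d}t
    + \frac{\sigma}{N}\sum_{n=1}^{N}\mathrm{d}W_n(t)
  = \frac{\sigma}{N}\sum_{n=1}^{N}\mathrm{d}W_n(t),
```

since both sums in the drift telescope over the ring. Hence $\bar p(t) = \bar p(0) + \frac{\sigma}{N}\sum_{n=1}^N W_n(t)$, which is a Brownian motion with variance $\sigma^2 t/N$.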

Port-Hamiltonian Formulation
Next, we rewrite the system (3) in matrix form and identify a port-Hamiltonian structure.

Proposition 2.1. Setting $Z = (Q, p) \in \mathbb{R}^{2N}$, the dynamics of the periodic system (3) are given by
$$\mathrm{d}Z(t) = (J - R)\nabla H(Z(t))\,\mathrm{d}t + G\,\mathrm{d}W(t)$$
with
$$J = \begin{pmatrix} 0 & A \\ -A^{\top} & 0 \end{pmatrix}, \qquad R = \begin{pmatrix} 0 & 0 \\ 0 & \beta A^{\top}A \end{pmatrix}, \qquad G = \begin{pmatrix} 0 \\ \sigma I \end{pmatrix}, \qquad A_{n,m} = \begin{cases} -1, & m = n,\\ 1, & m = n + 1 \ (\mathrm{mod}\ N),\\ 0, & \text{otherwise}, \end{cases} \qquad (6)$$
and the Hamiltonian operator $H : \mathbb{R}^{2N} \to \mathbb{R}$,
$$H(Q, p) = \sum_{n=1}^N \Big(U(Q_n) + \frac12 p_n^2\Big). \qquad (7)$$
Moreover, the matrix J is skew-symmetric (by N × N blocks) while R is symmetric positive semi-definite.
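To make the block structure concrete, the following sketch builds candidate matrices J, R, and G for small N and checks the claimed properties numerically (skew-symmetry of J, symmetry and positive semi-definiteness of R). The matrix A is taken to be the cyclic first-order difference matrix, consistent with the eigenvalues $\kappa_j = \omega^j - 1$ of Lemma 3.2; the helper names are ours, not the paper's.

```python
def eye(N):
    return [[1.0 if r == c else 0.0 for c in range(N)] for r in range(N)]

def zeros(N):
    return [[0.0] * N for _ in range(N)]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def scale(c, M):
    return [[c * x for x in row] for row in M]

def block2x2(M11, M12, M21, M22):
    # assemble a 2x2 block matrix from four N x N blocks
    return ([r1 + r2 for r1, r2 in zip(M11, M12)]
            + [r1 + r2 for r1, r2 in zip(M21, M22)])

def phs_matrices(N, beta, sigma):
    # A is the cyclic difference matrix: (A p)_n = p_{n+1} - p_n
    A = [[(1.0 if c == (r + 1) % N else 0.0) - (1.0 if c == r else 0.0)
          for c in range(N)] for r in range(N)]
    At = transpose(A)
    J = block2x2(zeros(N), A, scale(-1.0, At), zeros(N))                 # conservative part
    R = block2x2(zeros(N), zeros(N), zeros(N), scale(beta, matmul(At, A)))  # dissipative part
    G = [[0.0] for _ in range(N)] + [[sigma] for _ in range(N)]          # noise enters velocities only
    return A, J, R, G
```

A quick check that $J^{\top} = -J$ and that $x^{\top}Rx = \beta\|Ax_p\|^2 \ge 0$ for random vectors confirms the conservative/dissipative split claimed above.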
Proof. First note that for all $(Q, p) \in \mathbb{R}^{2N}$ we have $\nabla H(Q, p) = \big(U'(Q_1), \dots, U'(Q_N), p_1, \dots, p_N\big)^{\top}$. Moreover, we have $(Ap)_n = p_{n+1} - p_n$ and $(A^{\top}x)_n = x_{n-1} - x_n$ (indices modulo N). It follows directly from the model (3) that the first block row of $(J - R)\nabla H$ reproduces the distance dynamics and the second block row the velocity dynamics, and hence that Z solves the stated equation. Clearly, J is skew-symmetric and R is symmetric. Moreover, it holds for all $p \in \mathbb{R}^N$ that $p^{\top}\beta A^{\top}A p = \beta\|Ap\|^2 \ge 0$ and hence R is positive semidefinite.
Remark 6. In the port-Hamiltonian formulation of the model, both the kinematics between the distances and the relative velocities and the kinematics between the relative velocities and the distance potential terms are part of the Hamiltonian and the skew-symmetric matrix J of the port-Hamiltonian system. These components represent the conservative part of the system. The velocity difference terms in the motion model correspond to the dissipation matrix R in the port-Hamiltonian system, while the noise plays the role of an external input (disturbances). The port-Hamiltonian system is linear in the sense that J, R and G are constant. The nonlinear components arise only from the distance-based potential U in the Hamiltonian.

Hamiltonian Behavior
Put differently, the generator $\mathcal{L}$ of $(Z(t))_{t\in[0,\infty)}$ satisfies for all sufficiently regular $f : \mathbb{R}^{2N} \to \mathbb{R}$
$$\mathcal{L}f(z) = \nabla f(z)^{\top}(J - R)\nabla H(z) + \frac{\sigma^2}{2}\sum_{n=1}^{N}\frac{\partial^2 f}{\partial p_n^2}(z).$$
Using the skew-symmetry of J this implies for the Hamiltonian H (given by (7)) that
$$H(Z(t)) = H(Z(0)) + \int_0^t \Big(-\beta\|Ap(s)\|^2 + \frac{N\sigma^2}{2}\Big)\,\mathrm{d}s + \sigma\int_0^t p(s)^{\top}\,\mathrm{d}W(s). \qquad (9)$$
Note that the Hamiltonian behavior in time does not depend explicitly on the distance Q and the potential U, thanks to the skew-symmetry. Further remarks on the Hamiltonian behavior can be found below.
Remark 7. The deterministic system (σ = 0) is stable, i.e., the Hamiltonian is non-increasing over time. Indeed, in the case σ = 0 equation (9) reads for all t ∈ [0, ∞)
$$H(Z(t)) = H(Z(0)) - \beta\int_0^t \|Ap(s)\|^2\,\mathrm{d}s. \qquad (10)$$
Recall that according to Remark 3 and Remark 4 the deterministic system (3) is always in a state where the ensemble's mean distance and the ensemble's mean velocity satisfy $\bar Q(t) = L/N$ and $\bar p(t) = \bar p(0)$. By Jensen's inequality (using the convexity of U) it follows that the unique minimum of H over all such states is attained at the uniform configuration $(Q^*, p^*)$ with $Q^* = \frac LN\mathbf{1}$ and $p^* = \bar p(0)\mathbf{1}$. Since $\ker(A) = \{\lambda\mathbf{1} \mid \lambda \in \mathbb{R}\}$, we see in the case β > 0 from (10) that H(Z(t)) is strictly decreasing whenever $p(t) \ne p^*$. Moreover, whenever $p(t) = p^*$ but $Q(t) \ne Q^*$ the dynamics (3) ensure that p is moved away from $p^*$ and thus H is also decreasing in this situation. This indicates the convergence of the deterministic system to the uniform configuration $(Q^*, p^*)$ as t → ∞. We make this statement rigorous in the special case of a quadratic potential U in Section 3.
Remark 8. In contrast, the Hamiltonian of the stochastic system can increase in expectation over time. Indeed, if we start with uniform velocities $p(0) = p^*$, then it holds that
$$\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}\big[H(Z(t))\big]\Big|_{t=0} = \frac{N\sigma^2}{2} > 0,$$
provided that σ > 0 (here we tacitly assume sufficient regularity for the stochastic integral in (9) to vanish in expectation). For this reason, we cannot directly analyze the stability of the system using Lyapunov-style arguments in combination with the Hamiltonian. Indeed, in Remark 9 we see that in general the stochastic system does not converge to a limiting distribution.
Remark 9. Note that the ensemble's mean velocity satisfies $\bar p(t) = \frac1N \mathbf{1}^{\top}p(t)$. If Z(t) converged weakly as t → ∞, then by the continuous mapping theorem also $\bar p(t)$ would converge weakly. However, by Remark 4 the ensemble's mean velocity $\bar p$ is a Brownian motion with variance parameter σ²/N, which does not converge weakly as t → ∞ in the stochastic case σ > 0. Hence Z(t) does not converge weakly as t → ∞ if σ > 0.

Explicit Collective Motion Analysis in the Case of a Quadratic Potential
For α ∈ (0, ∞) we consider as the distance-based potential the quadratic function $U(x) = (\alpha x)^2/2$, $x \in \mathbb{R}$. In this case the system (3) is linear. Indeed, the gradient of the Hamiltonian reads $\nabla H(Q, p) = (\alpha^2 Q^{\top}, p^{\top})^{\top}$ and the system is an Ornstein-Uhlenbeck linear stochastic process
$$\mathrm{d}Z(t) = BZ(t)\,\mathrm{d}t + G\,\mathrm{d}W(t), \qquad B = \begin{pmatrix} 0 & A \\ -\alpha^2 A^{\top} & -\beta A^{\top}A \end{pmatrix}, \qquad (11)$$
(recall the definition of A in (6)). The Hamiltonian for the quadratic potential is given by $H(Q, p) = \frac{\alpha^2}{2}\|Q\|^2 + \frac12\|p\|^2$. The Ornstein-Uhlenbeck system (11) can be explicitly solved. We obtain using Duhamel's formula
$$Z(t) = e^{tB}Z(0) + \int_0^t e^{(t-s)B}G\,\mathrm{d}W(s)$$
(see, e.g., [21, Section 4.4.6] or [36, Section 3.7] for this and the further results on multivariate Ornstein-Uhlenbeck processes that we use in the sequel). Furthermore, $(Z(t))_{t\in[0,\infty)}$ is a Gaussian process. In particular, for all t ∈ [0, ∞) the random variable Z(t) is normal with expectation
$$\mu_Z(t) = e^{tB}\mu_Z(0) \qquad (13)$$
and covariance matrix
$$\Sigma_Z(t) = \int_0^t e^{sB}GG^{\top}e^{sB^{\top}}\,\mathrm{d}s. \qquad (14)$$
We aim at describing the system's limit behavior as t → ∞. As we have seen in Remark 9, in the stochastic case σ > 0 the original system in Z = (Q, p)-coordinates does not converge weakly as t → ∞. However, in Subsection 3.4 we show that passing from the velocity coordinates p to the deviations D from the ensemble's mean velocity leads to stable dynamics. We formally introduce the process D in the next subsection.

Deviation from Ensemble Mean Velocity
In the sequel we analyze the agents' deviation from the ensemble's mean velocity. To do so, recall that $\bar p(t) = \frac1N\sum_{n=1}^N p_n(t)$ is the ensemble's mean velocity and recall from Remark 4 that $\bar p$ is a Brownian motion with variance parameter σ²/N. We introduce the deviation of agent n from the ensemble's mean velocity as $D_n(t) = p_n(t) - \bar p(t)$. Then, the deviation vector $D(t) = (D_n(t))_{n=1}^N$ at time t ≥ 0 is given by $D(t) = \tilde M p(t)$ with $\tilde M = I - \frac1N\mathbf{1}\mathbf{1}^{\top}$. Next, we introduce the new processes
$$X(t) = (Q(t), D(t)) = MZ(t), \qquad M = \begin{pmatrix} I & 0 \\ 0 & \tilde M \end{pmatrix} \in \mathbb{R}^{2N \times 2N}.$$
Since for every t ∈ [0, ∞) the random variable Z(t) is normally distributed with expectation $\mu_Z(t)$ (given by (13)) and covariance matrix $\Sigma_Z(t)$ (given by (14)), we obtain that for every t ∈ [0, ∞) the random variable X(t) is normally distributed with expectation
$$\mu_X(t) = M\mu_Z(t) = Me^{tB}\mu_Z(0) \qquad (18)$$
and covariance matrix
$$\Sigma_X(t) = M\Sigma_Z(t)M^{\top}. \qquad (19)$$
Moreover, note that $A\tilde M = A$ and $\tilde M A = A$. Hence $A^{\top}\tilde M = A^{\top}$ and $\tilde M A^{\top} = A^{\top}$. This implies $MB = BM = B$ and hence $(X(t))_{t\ge0}$ satisfies the dynamics
$$\mathrm{d}X(t) = BX(t)\,\mathrm{d}t + MG\,\mathrm{d}W(t). \qquad (20)$$
We aim at explicitly describing the limit distribution of $(X(t))_{t\ge0}$ as t → ∞ (see Theorem 3.8 for the main result in this regard). To this end, we follow the instructive route to first compute $\mu_X(t)$ and $\Sigma_X(t)$ (and also $\mu_Z(t)$ and $\Sigma_Z(t)$) for fixed t ≥ 0 explicitly and then determine the limits $\mu_X(\infty)$ and $\Sigma_X(\infty)$ as t → ∞ (see also Remark 15 for a verification of $\Sigma_X(\infty)$ via the Lyapunov equation associated with (20)). To compute the matrix exponentials $e^{tB}$, t ≥ 0, appearing in (18) and (19), we first provide an eigendecomposition of the matrix B.
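The algebraic identities used in this subsection, namely that the centering matrix $\tilde M = I - \frac1N\mathbf{1}\mathbf{1}^{\top}$ is a projector and that $A\tilde M = \tilde M A = A$, hold because the row and column sums of the cyclic difference matrix A vanish. A quick numerical check (function names are ours, for illustration only):

```python
def centering(N):
    # M = I - (1/N) 1 1^T removes the ensemble mean
    return [[(1.0 if r == c else 0.0) - 1.0 / N for c in range(N)] for r in range(N)]

def difference(N):
    # cyclic difference matrix: (A p)_n = p_{n+1} - p_n
    return [[(1.0 if c == (r + 1) % N else 0.0) - (1.0 if c == r else 0.0)
             for c in range(N)] for r in range(N)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def close(X, Y, tol=1e-12):
    # entrywise comparison of two matrices
    return all(abs(x - y) < tol for rx, ry in zip(X, Y) for x, y in zip(rx, ry))
```

With these helpers, `close(matmul(A, M), A)` and `close(matmul(M, A), A)` hold for any N, since $A\mathbf{1} = 0$ and $\mathbf{1}^{\top}A = 0$.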

Eigenanalysis for the Matrices A and B
In this subsection we provide the complex eigendecomposition $W\Lambda W^{-1}$ of B, which allows us to compute the matrix exponentials $e^{tB}$, t ≥ 0, that characterize the expectation vectors $\mu_Z(t)$ and covariance matrices $\Sigma_Z(t)$. Note that the matrix B is not symmetric and not even normal. Using the eigendecomposition of the circulant matrix A, we show that B still admits an eigendecomposition if for all j ∈ {1, 2, ..., N − 1} we have $\beta^2\mu_j - 4\alpha^2 \ne 0$. To this end, we first fix some notation that we use throughout the section.
Setting 3.1. Let $\omega = e^{2\pi i/N}$ and for $j \in \{0, 1, \dots, N-1\}$ let
$$v_j = \frac{1}{\sqrt N}\big(\omega^{0j}, \omega^{1j}, \dots, \omega^{(N-1)j}\big)^{\top} \in \mathbb{C}^N, \qquad \kappa_j = \omega^j - 1, \qquad \mu_j = 2\Big(1 - \cos\frac{2\pi j}{N}\Big).$$
For all j ∈ {1, 2, ..., N − 1}, k ∈ {1, 2} let
$$\lambda_{j,k} = \frac{-\beta\mu_j + (-1)^k\sqrt{\beta^2\mu_j^2 - 4\alpha^2\mu_j}}{2}$$
denote the two complex roots of $z \mapsto z^2 + \beta\mu_j z + \alpha^2\mu_j$, and set $\lambda_{0,1} = \lambda_{0,2} = 0$ as well as
$$w_{0,1} = \begin{pmatrix} v_0 \\ 0 \end{pmatrix}, \qquad w_{0,2} = \begin{pmatrix} 0 \\ v_0 \end{pmatrix}, \qquad w_{j,k} = \begin{pmatrix} \kappa_j v_j \\ \lambda_{j,k} v_j \end{pmatrix}.$$
By $W \in \mathbb{C}^{2N \times 2N}$ we denote the matrix whose columns are given by the vectors $w_{0,1}, w_{0,2}, w_{1,1}, w_{1,2}, \dots, w_{N-1,1}, w_{N-1,2}$. If for all j ∈ {1, 2, ..., N − 1} it holds that $\beta^2\mu_j - 4\alpha^2 \ne 0$, then for all j ∈ {1, 2, ..., N − 1} the two roots $\lambda_{j,1}$ and $\lambda_{j,2}$ are distinct. Finally, we introduce the matrix
$$K = \sum_{j=1}^{N-1} \frac{1}{\mu_j}\, v_j v_j^* \in \mathbb{C}^{N \times N}.$$
The first result of this subsection provides N vectors that are eigenvectors of both A and $A^{\top}$. Moreover, it presents the corresponding eigenvalues. Since these eigenvalues are distinct, it follows that the N eigenvectors are linearly independent.
Lemma 3.2. The family $(v_j)_{j\in\{0,1,\dots,N-1\}}$ is an orthonormal basis of $\mathbb{C}^N$ (with respect to the standard inner product $\langle u, v\rangle = u^*v = \bar u^{\top}v$ on $\mathbb{C}^N$) and for every j ∈ {0, 1, ..., N − 1} it holds that $v_j$ is an eigenvector of A with eigenvalue $\kappa_j = \omega^j - 1$ and $v_j$ is an eigenvector of $A^{\top}$ with eigenvalue $\bar\kappa_j = \omega^{(N-1)j} - 1$. Moreover, it holds for all j ∈ {0, 1, ..., N − 1} that $\kappa_j\bar\kappa_j = \mu_j$.

Proof. For the first part we refer the reader to [20, Chapter 4]. Next, note that we have the following circular property of the unit roots: $\omega^{(N-1)j} = \omega^{-j} = \overline{\omega^j}$. This implies that $\kappa_j\bar\kappa_j = (\omega^j - 1)(\omega^{-j} - 1) = 2 - \omega^j - \omega^{-j} = 2\big(1 - \cos\tfrac{2\pi j}{N}\big) = \mu_j$.

The next lemma shows that from each eigenvector v of A that is also an eigenvector of $A^{\top}$ we can create (up to) two eigenvectors of B.
Lemma 3.3. Let $v \in \mathbb{C}^N \setminus \{0\}$ and assume that v is an eigenvector of A and $A^{\top}$ with eigenvalues κ and $\bar\kappa$, respectively, i.e., $Av = \kappa v$ and $A^{\top}v = \bar\kappa v$. Let $\lambda_1, \lambda_2 \in \mathbb{C}$ be the complex roots of $z \mapsto z^2 + \beta\kappa\bar\kappa z + \alpha^2\kappa\bar\kappa$ and for j ∈ {1, 2} let $w_j = (\kappa v^{\top}, \lambda_j v^{\top})^{\top} \in \mathbb{C}^{2N}$. Then for all j ∈ {1, 2} we have that $Bw_j = \lambda_j w_j$.
Note. In the case κ = 0 we have $w_j = 0$ for j ∈ {1, 2} and hence $w_j$ is not an eigenvector of B.
Proof. For all j ∈ {1, 2} it holds that
$$Bw_j = \begin{pmatrix} A(\lambda_j v) \\ -\alpha^2 A^{\top}(\kappa v) - \beta A^{\top}A(\lambda_j v) \end{pmatrix} = \begin{pmatrix} \lambda_j\kappa v \\ \big(-\alpha^2\kappa\bar\kappa - \beta\lambda_j\kappa\bar\kappa\big)v \end{pmatrix} = \lambda_j\begin{pmatrix} \kappa v \\ \lambda_j v \end{pmatrix} = \lambda_j w_j,$$
where the last step uses $\lambda_j^2 + \beta\kappa\bar\kappa\lambda_j + \alpha^2\kappa\bar\kappa = 0$.

The next result shows that, under the condition that α ≠ 0 and that for all j ∈ {1, 2, ..., N − 1} it holds that $\beta^2(1 - \cos\frac{2\pi j}{N}) \ne 2\alpha^2$, the matrix B is diagonalizable, and it provides explicit representations of its eigenvalues and eigenvectors.

Proposition 3.4. Assume that for all j ∈ {1, 2, ..., N − 1} it holds that $\beta^2(1 - \cos\frac{2\pi j}{N}) \ne 2\alpha^2$. Then B is diagonalizable with eigenvalues $\lambda_{j,k}$, j ∈ {0, 1, ..., N − 1}, k ∈ {1, 2}, and for all $j \in \{1, 2, \dots, \lfloor N/2\rfloor\}$, k ∈ {1, 2} it holds that $\lambda_{j,k} = \lambda_{N-j,k}$. In particular, B has exactly $1 + 2\lfloor N/2\rfloor$ different eigenvalues.

Note. Before proceeding with the proof of Proposition 3.4, we remark that if there exists j ∈ {1, 2, ..., N − 1} such that $\beta^2(1 - \cos\frac{2\pi j}{N}) = 2\alpha^2$, then $\lambda_{j,1} = \lambda_{j,2}$. In this case the geometric multiplicity of $\lambda_{j,1} = \lambda_{j,2}$ is one and hence smaller than the algebraic multiplicity (two). Thus B is not diagonalizable in this situation. We refer to Remark 11 and Remark 12 on how to dispense with this condition in the computation of the expectations and covariance matrices of X and Z.
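The eigenpair relation of Lemma 3.3 can be verified numerically. The sketch below assumes the explicit eigenvector form $w = (\kappa v^{\top}, \lambda v^{\top})^{\top}$ and the drift matrix $B$ of the quadratic-potential system, applied matrix-free via the index shifts $(Av)_n = v_{n+1} - v_n$ and $(A^{\top}v)_n = v_{n-1} - v_n$; function names are illustrative.

```python
import cmath
import math

def apply_A(v):
    # (A v)_n = v_{n+1} - v_n on the ring
    N = len(v)
    return [v[(n + 1) % N] - v[n] for n in range(N)]

def apply_At(v):
    # (A^T v)_n = v_{n-1} - v_n on the ring
    N = len(v)
    return [v[(n - 1) % N] - v[n] for n in range(N)]

def apply_B(x, y, alpha, beta):
    # B (x, y) = (A y, -alpha^2 A^T x - beta A^T A y)
    top = apply_A(y)
    bottom = [-alpha**2 * a - beta * b
              for a, b in zip(apply_At(x), apply_At(apply_A(y)))]
    return top, bottom

def eigen_residual(N, alpha, beta, j, k):
    # candidate eigenpair (lambda_{j,k}, w_{j,k}) and the residual |B w - lambda w|
    omega = cmath.exp(2j * cmath.pi / N)
    mu = 2.0 * (1.0 - math.cos(2.0 * math.pi * j / N))
    disc = cmath.sqrt(beta**2 * mu**2 - 4.0 * alpha**2 * mu)
    lam = (-beta * mu + (1 if k == 1 else -1) * disc) / 2.0
    kappa = omega**j - 1.0
    v = [omega**(n * j) / math.sqrt(N) for n in range(N)]
    x = [kappa * vn for vn in v]    # Q-part of w
    y = [lam * vn for vn in v]      # p-part of w
    top, bottom = apply_B(x, y, alpha, beta)
    res = max(max(abs(t - lam * a) for t, a in zip(top, x)),
              max(abs(b - lam * c) for b, c in zip(bottom, y)))
    return lam, res
```

For example, with N = 6, α = 1, β = 0.5 the residuals are at machine precision and every $\lambda_{j,k}$ with j ≥ 1 has a strictly negative real part, as used in the proof of Theorem 3.8.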
To sum up, we see that under our assumption α, β > 0 we have two zero eigenvalues and 2N − 2 eigenvalues with strictly negative real parts. As we will see in Proposition 3.7, the two zero eigenvalues $\lambda_{0,1}$ and $\lambda_{0,2}$ lead to the divergence of $\Sigma_Z(t)$ as t → ∞ (cf. Remark 9). For the process X, however, we will see that in the direction of the associated eigenvectors $w_{0,1}$ and $w_{0,2}$ there is also no noise component. This, together with the negativity of the real parts of the remaining eigenvalues, ensures the convergence of X.
To conclude the proof we show that the dimension of this direct sum is 2N. Since $w_{0,1}$ and $w_{0,2}$ are linearly independent, the eigenspace associated with 0 has at least dimension 2. Next, for every $j \in \{1, 2, \dots, \lfloor\frac{N-1}{2}\rfloor\}$, k ∈ {1, 2}, there are the eigenvectors $w_{j,k}$ and $w_{N-j,k}$ associated with the eigenvalue $\lambda_{j,k}$. Since $v_j$ and $v_{N-j}$ are eigenvectors of A with distinct eigenvalues $\kappa_j$ and $\kappa_{N-j}$, it follows that $w_{j,k}$ and $w_{N-j,k}$ are linearly independent. This implies that for every $j \in \{1, 2, \dots, \lfloor\frac{N-1}{2}\rfloor\}$, k ∈ {1, 2} the eigenspace associated with $\lambda_{j,k}$ has at least dimension 2.
If N is odd, then there are no further eigenvalues and the sum of the dimensions of all these eigenspaces is at least $2 + 4\cdot\frac{N-1}{2} = 2N$. If N is even, then we additionally have the eigenspaces associated with $\lambda_{N/2,k}$ for k ∈ {1, 2}, which have at least dimension 1 (each of them contains the vector $w_{N/2,k}$, respectively). Hence, in the case where N is even, the sum of the dimensions of all these eigenspaces is at least $2 + 4\cdot\frac{N-2}{2} + 2 = 2N$. This completes the proof.
Proposition 3.4 provides the matrix of eigenvectors W of B. Lemma 3.5 presents its inverse $W^{-1}$.
Next, denote by U the right-hand side of (24). We first consider the first N rows of the product UW. The fact that $v_j$, j ∈ {0, ..., N − 1}, forms an orthonormal basis of $\mathbb{C}^N$ (with respect to the standard inner product) implies the corresponding identities for all l, m ∈ {2, ..., N − 1}. Moreover, for all l ∈ {2, ..., N − 1}, m ∈ {N + 1, ..., 2N} the mixed terms vanish. Thus the claim for the first N rows of UW is established. The claim for the last N rows follows in a similar way.

Explicit Representations of the Time-Marginal Distributions of Z and X
In this subsection we use the eigendecomposition of B established in Subsection 3.2 to obtain for every t ≥ 0 explicit representations of the expectation vectors $\mu_X(t)$ and $\mu_Z(t)$ and the covariance matrices $\Sigma_X(t)$ and $\Sigma_Z(t)$. Throughout this subsection we use the notation of Setting 3.1. Let $\Lambda \in \mathbb{C}^{2N \times 2N}$ be the diagonal matrix with the vector $(\lambda_{0,1}, \lambda_{0,2}, \lambda_{1,1}, \lambda_{1,2}, \dots, \lambda_{N-1,1}, \lambda_{N-1,2})$ on its diagonal. Then we have that $B = W\Lambda W^{-1}$. This implies for all t ≥ 0 that $e^{tB} = We^{t\Lambda}W^{-1}$.

Limit Distribution of X
In this subsection we take the limits in Lemma 3.6 and Proposition 3.7 to obtain an explicit representation of the limit distribution of $(X(t))_{t\ge0}$ as t → ∞. To this end, recall the definition of the matrix $K = \sum_{j=1}^{N-1}\frac{1}{\mu_j}v_jv_j^*$ from Setting 3.1.

Theorem 3.8. The process $(X(t))_{t\ge0}$ given by (20) converges for t → ∞ in distribution to a normal distribution with expectation
$$\mu_X(\infty) = \Big(\tfrac LN\mathbf{1}^{\top}, 0^{\top}\Big)^{\top}$$
and covariance matrix
$$\Sigma_X(\infty) = \frac{\sigma^2}{2\beta}\begin{pmatrix} \frac{1}{\alpha^2}K & 0 \\ 0 & K \end{pmatrix}.$$

Proof. For all j ∈ {1, ..., N − 1}, k ∈ {1, 2}, we have that $\operatorname{Re}(\lambda_{j,k}) < 0$ (see also Remark 10). Under the assumption that for all j ∈ {1, 2, ..., N − 1} it holds that $\beta^2(1 - \cos\frac{2\pi j}{N}) \ne 2\alpha^2$, this together with Lemma 3.6 and Proposition 3.7 yields that $\lim_{t\to\infty}\mu_X(t) = \mu_X(\infty)$ and $\lim_{t\to\infty}\Sigma_X(t) = \Sigma_X(\infty)$. As outlined in Remark 11 and Remark 12, this convergence also holds in the case where the assumption is not satisfied.
Since for every t ≥ 0 the random variable X(t) is normal with expectation vector $\mu_X(t)$ and covariance matrix $\Sigma_X(t)$, it follows with Lévy's continuity theorem that $(X(t))_{t\ge0}$ converges weakly to a normal distribution with expectation $\mu_X(\infty)$ and covariance matrix $\Sigma_X(\infty)$ as t → ∞.
Remark 13. By Lemma 3.6 the expectation vectors $\mu_Z(t)$ of Z(t) also converge, namely to $\mu_Z(\infty) = \big(\frac LN\mathbf{1}^{\top}, \bar p(0)\mathbf{1}^{\top}\big)^{\top}$ as t → ∞. However, the covariance matrices $\Sigma_Z(t)$, t ≥ 0, do not converge in the case σ ≠ 0 and hence $(Z(t))_{t\ge0}$ cannot converge, as we have already observed in Remark 9.
Remark 14. Note that the matrix K is singular. Indeed, the fact that $v_0 = \frac{1}{\sqrt N}\mathbf{1}$ is orthogonal to all $v_j$, j ∈ {1, ..., N − 1}, implies that $Kv_0 = 0$. The reason for this is, as already outlined in Remark 3, that our system is overdetermined. For example, we have for all t ≥ 0 that $Q_N(t) = L - \sum_{n=1}^{N-1}Q_n(t)$. It follows that the limit distribution is degenerate. By eliminating for example the last row and last column of K one obtains a non-degenerate normal distribution and can reconstruct the last component as outlined above.
Remark 15. The steady-state covariance matrix $\Sigma_X(\infty)$ is in general also characterized by the matrix Lyapunov equation associated with the system (20) (see, e.g., [21, Section 4.4.6] or [36, Section 3.7]). In our setting this equation reads
$$B\,\Sigma_X(\infty) + \Sigma_X(\infty)\,B^{\top} + MGG^{\top}M^{\top} = 0.$$
Using the representation of $\Sigma_X(\infty)$ from Theorem 3.8, this is equivalent to the conditions $AK = KA$, $KA^{\top} = A^{\top}K$ and $\tfrac12\big(A^{\top}AK + KA^{\top}A\big) = \tilde M$. Next, note that by Lemma 3.2 we have for all j ∈ {1, ..., N − 1} that $Av_j = \kappa_jv_j$ and $A^{\top}v_j = \bar\kappa_jv_j$. This implies $AK = \sum_{j=1}^{N-1}\frac{\kappa_j}{\mu_j}v_jv_j^* = KA$. Similarly, we obtain $KA^{\top} = A^{\top}K$. Again by Lemma 3.2 we have for all j ∈ {1, ..., N − 1} that $\kappa_j\bar\kappa_j = \mu_j$. This implies that
$$A^{\top}AK = KA^{\top}A = \sum_{j=1}^{N-1}\frac{\kappa_j\bar\kappa_j}{\mu_j}\,v_jv_j^* = \sum_{j=1}^{N-1}v_jv_j^* = I - \frac1N\mathbf{1}\mathbf{1}^{\top} = \tilde M.$$
Hence, we see that $\Sigma_X(\infty)$ indeed satisfies the matrix Lyapunov equation.
Before discussing Theorem 3.8 in detail, we derive in the next result a closed-form representation of the matrix K. To do so, we realize that the entries of K are instances of so-called Dowker's sums in twisted form (the diagonal elements of K have the untwisted form) [10]. This family of cosecant sums having closed forms that involve higher-order Bernoulli polynomials was first presented in a series of papers by Dowker describing his theory of the Casimir effect [16], the asymptotic expansion of the integrated heat kernel on cones [17], and in connection with the celebrated Verlinde formula for the dimensions of vector bundles on moduli spaces [18].

Proposition 3.9. For all l, m ∈ {1, ..., N} the (l, m)-entry of K satisfies
$$K_{l,m} = \frac{N^2 - 1}{12N} - \frac{d(N - d)}{2N}, \qquad d = (l - m) \bmod N.$$

Proof. First note that for j ∈ {1, ..., N − 1} and l, m ∈ {1, ..., N} the (l, m)-entry of the matrix $v_jv_j^*$ is given by $\frac1N\omega^{(l-m)j}$. This implies that for all l, m ∈ {1, ..., N} the (l, m)-entry of K satisfies $K_{l,m} = \frac1N\sum_{j=1}^{N-1}\frac{\omega^{(l-m)j}}{\mu_j}$. The symmetries of sin and cos then imply that $K_{l,m} = \frac1N\sum_{j=1}^{N-1}\frac{\cos(2\pi j(l-m)/N)}{2(1 - \cos(2\pi j/N))}$, from which the claimed closed form follows.

Theorem 3.8 and Proposition 3.9 show that the limit distribution of $(X(t))_{t\ge0}$ is Gaussian with expectation vector $\mu_X(\infty) = \big(\frac LN\mathbf{1}^{\top}, 0^{\top}\big)^{\top}$ and covariance matrix
$$\Sigma_X(\infty) = \frac{\sigma^2}{2\beta}\begin{pmatrix} \frac{1}{\alpha^2}K & 0 \\ 0 & K \end{pmatrix}, \qquad (41)$$
where the matrix K = K(N) only depends on N and satisfies the closed form of Proposition 3.9. We denote by $X(\infty) = (Q(\infty), D(\infty)) \in \mathbb{R}^{2N}$ a random vector with this distribution. Then we see that in the limit t → ∞:
• the agents' positions are distributed equidistantly in expectation,
• the deviations $D_n(\infty)$, n ∈ {1, ..., N}, from the ensemble's mean velocity are zero in expectation,
• the covariances of the deviations $D_n(\infty)$, n ∈ {1, ..., N}, from the ensemble's mean velocity are independent of α and proportional to 1/β,
• the covariance matrix of the distances $Q_n(\infty)$, n ∈ {1, ..., N}, is equal to $\frac{1}{\alpha^2}$ times the covariance matrix of the deviations from the mean velocity, and
• the covariances between distances $Q_n(\infty)$, n ∈ {1, ..., N}, and velocity deviations $D_n(\infty)$, n ∈ {1, ..., N}, are zero, and hence distances and velocity deviations are independent (since they are Gaussian).
The matrix K is circulant. This reflects the interchangeability of the agents and the symmetry of the problem. To further discuss the implications of (41), we take the first agent as representative agent and only consider the first row of K. The vector $(c(m))_{m=1}^N$ with $c(m) = K_{1,m}$ describes the first row of K and thus the block covariance matrices in (41). In particular, the vector $(c(m))_{m=1}^N$ specifies up to the factor $\frac{\sigma^2}{2\alpha^2\beta}$ the covariances between the distance $Q_1(\infty)$ between the first and the second agent and the distances $Q_n(\infty)$, n ∈ {1, ..., N}, between the other neighboring agents. Moreover, it also describes up to the factor $\frac{\sigma^2}{2\beta}$ the covariances of the first agent's deviation $D_1(\infty)$ from the ensemble's mean velocity with the velocity deviations $D_n(\infty)$, n ∈ {1, ..., N}, of the other agents. Note that
$$c(m) = \frac{N^2 - 1}{12N} - \frac{(m-1)(N+1-m)}{2N}.$$
So c describes a convex parabola with vertex at $\big(\frac N2 + 1, -\frac{N^2+2}{24N}\big)$. In particular, $c(1) = \frac{N^2-1}{12N}$, the variance of the first agent's coordinates, is the maximal entry of $(c(m))_{m=1}^N$. As m increases, c(m) first decreases and crosses zero before it reaches its minimal negative value at $m = \lfloor\frac N2\rfloor + 1$ and $m = \lceil\frac N2\rceil + 1$ (which are the same if N is even). Note that the agents $m = \lfloor\frac N2\rfloor + 1$ and $m = \lceil\frac N2\rceil + 1$ are the agents with the largest distance to the first agent. As m further increases, c(m) increases as well. This discussion implies that in the steady state there is with high probability at most one wave/cluster of agents. For example, if $Q_1$ is smaller than the expectation L/N, then the immediate neighbors (with large probability) also have small distances. The more distant agents (in particular $m = \lfloor\frac N2\rfloor + 1$ and $m = \lceil\frac N2\rceil + 1$) have negative correlation and thus large distances with higher probability. This means, roughly speaking, that the agents agglomerate around the first agent. A similar observation applies to the velocity coordinates. The entries of K are depicted in Figure 1.
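The closed form of Proposition 3.9 and the parabola c(m) can be cross-checked numerically. Assuming, as in the proof sketched above, that the entries of K are the twisted cosecant sums $\frac1N\sum_{j=1}^{N-1}\cos(2\pi j(l-m)/N)/\mu_j$, they agree with the quadratic expression in the cyclic index distance d = (l − m) mod N (function names are illustrative):

```python
import math

def K_entry_sum(N, l, m):
    # twisted Dowker-type cosecant sum for the (l, m)-entry of K
    total = 0.0
    for j in range(1, N):
        mu = 2.0 * (1.0 - math.cos(2.0 * math.pi * j / N))
        total += math.cos(2.0 * math.pi * j * (l - m) / N) / mu
    return total / N

def K_entry_closed(N, l, m):
    # closed form: parabola in the cyclic index distance d = (l - m) mod N
    d = (l - m) % N
    return (N * N - 1) / (12.0 * N) - d * (N - d) / (2.0 * N)
```

In particular $K_{l,l} = \frac{N^2-1}{12N}$ (the maximal entry, the variance term c(1)) and each row of K sums to zero, reflecting the singularity $Kv_0 = 0$ noted in Remark 14.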

Numerical Experiments
We present in this section some simulation results of agents on a segment with periodic boundaries. The code can be downloaded and the simulations can be computed in real time on the online platform at https://www.vzu.uni-wuppertal.de/fileadmin/site/vzu/SimulatingCollectiveMotion.html.

Simulation Setup
We simulate the trajectories of N = 20 agents on a segment of length L = 501 with periodic boundary conditions. The initial condition is uniform with velocity zero, i.e., $q_n(0) = (n-1)\frac LN$ and $p_n(0) = 0$ for all n ∈ {1, ..., N}.

Numerical solver
The simulations are computed using an implicit/explicit Euler-Maruyama scheme with time step $\delta t$. We denote in the following the discretization times by $t_k = k\,\delta t$, $k \in \{0, 1, 2, \ldots\}$. The numerical scheme for the $n$-th agent at time $t_k$ uses independent one-dimensional standard normal random variables $(\xi_n(k))_{n=1}^{N}$, $k \in \{0, 1, 2, \ldots\}$, for the Brownian increments. Note that different simulation schemes for deterministic port-Hamiltonian pedestrian models are compared in [45]. The implicit/explicit Euler schemes prove to be efficient solvers. In the following, we set the time step to $\delta t = 0.001$, which seems to be a good compromise between accurate numerical approximations and reasonable run times.
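A scheme of this type can be sketched in a few lines of NumPy. The drift used below is an assumption, not the paper's exact equation (3): symmetric nearest-neighbor forces derived from a potential $U(x) = \frac{\alpha}{\kappa}|x|^{\kappa}$ acting on the deviations of the distances from the equilibrium spacing $L/N$, plus a dissipative velocity-difference term; the momenta are updated explicitly, and the positions then use the updated momenta (the semi-implicit step).

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter values of the first scenario; L is illustrative. The drift is an
# assumed symmetric nearest-neighbor port-Hamiltonian form, not equation (3).
N, L = 20, 50.0
alpha, beta, kappa, sigma = 0.1, 1.0, 2.0, 1.0
dt, n_steps = 1e-3, 10_000

q = np.arange(N) * L / N   # uniform (equidistant) initial positions
p = np.zeros(N)            # zero initial velocities

def u_prime(x, alpha=alpha, kappa=kappa):
    """Derivative of the assumed potential U(x) = (alpha / kappa) |x|^kappa."""
    return alpha * np.sign(x) * np.abs(x) ** (kappa - 1)

for _ in range(n_steps):
    # Distances to the right neighbor on the periodic segment [0, L)
    Q = (np.roll(q, -1) - q) % L
    # Skew-symmetric (Hamiltonian) part: forces from both neighbors,
    # measured relative to the equilibrium spacing L / N
    force = u_prime(Q - L / N) - u_prime(np.roll(Q, 1) - L / N)
    # Dissipative part: relaxation towards the neighbors' velocities
    damp = beta * (np.roll(p, -1) + np.roll(p, 1) - 2 * p)
    xi = rng.standard_normal(N)
    p = p + dt * (force + damp) + sigma * np.sqrt(dt) * xi  # explicit momentum step
    q = (q + dt * p) % L                                    # semi-implicit position step
```

Setting $\kappa = 4$ in this sketch reproduces the cubic interaction term of the third scenario without any other change to the integrator.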

Simulation scenario
In the motion model (3), the parameters $\alpha$ and $\kappa$ control the distances, while the parameter $\beta$ controls the relative velocities with the neighbors. Three simulation scenarios are examined in the following, focusing on the role of the parameters $\alpha$ and $\kappa$. We consider the quadratic potential ($\kappa = 2$) in the first scenario with $\alpha = 0.1$ and in the second scenario with $\alpha = 1$, and a quartic potential in the third simulation scenario with $\alpha = 1$ and $\kappa = 4$. The system is no longer an Ornstein-Uhlenbeck process in this third scenario, since the distance-based interaction term $U'(Q) = \alpha Q^3$ is no longer linear. The hard regulation of the distance induced by the nonlinearity of the interaction term makes the trajectories almost equi-distant, although fluctuating according to the Brownian motion of the mean velocity. Indeed, the ensemble variance of the agents' distance presents large fluctuations in the first scenario, while it is reduced in the second scenario and close to zero in the third scenario (see, respectively, Figures 2, 3, and 4, bottom panels).

Figure 2: First simulation scenario with $\alpha = 0.1$ and $\kappa = 2$ (Ornstein-Uhlenbeck process). Top panel: Agents' trajectories. Middle panel: Agents' velocity features. Bottom panel: Agents' distance features. The agents appear to move to the left before moving in a coordinated manner to the right (top panel). Indeed, the mean velocity, being a Brownian motion, fluctuates towards positive values before visiting negative values (middle panel). The ensemble variance of the agents' velocities seems to become quickly stationary. However, the dynamics are disordered and even show jamming waves. The ensemble variance of the agents' distance presents large fluctuations (bottom panel).

Autocorrelations of the Ensemble Variances
The trajectories of the velocity and distance ensemble variances with $\alpha = 0.01$ are more regular than the trajectories with $\alpha = 1$ in the first and second linear scenarios, for which $\kappa = 2$. The trajectories of the nonlinear model with $\alpha = 1$ and $\kappa = 4$ are much less regular again (see Figure 4, middle and bottom panels). Figure 5 presents the empirical autocorrelation for the ensemble variances of the agents' velocities and distances for the three scenarios. The estimates are obtained from simulation histories of $K_{\max} = 10\,000\,000$ iterations. A rich variety of dynamics can be observed. The first scenario with $\alpha = 0.1$ presents overdamped features for the autocorrelation function (gray curves in Figure 5). In fact, for this scenario ($N = 20$, $\beta = 1$, and $\frac{1}{2}\beta^2(1 - \cos(2\pi j/N)) \approx 0.02$ for $j = 1$), the system eigenvalues (21) are all purely real (see also Remark 10), making the dynamics oscillation-free. The overdamped stability does not hold for $\alpha = 1$ (second scenario). The eigenvalues are complex numbers and the velocity and distance ensemble variances describe damped oscillatory behaviors (blue curves in Figure 5). For the nonlinear model with $\kappa = 4$, we observe extremely reduced oscillations (third scenario, orange curves in Figure 5). Indeed, the trajectories show deterministic features, even if they collectively fluctuate according to the Brownian motion of the mean velocity. This reduced oscillatory behavior explains why the ensemble variance trajectories present irregular characteristics (see Figure 4, bottom and middle panels).
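The empirical autocorrelation functions shown in Figure 5 can be computed from the simulated ensemble-variance time series with the standard sample estimator. A minimal sketch follows; the AR(1) series here is a synthetic stand-in (not the paper's data) mimicking an overdamped, oscillation-free variance trajectory, and would be replaced by the collected samples of $V_p(t_k)$ or $V_Q(t_k)$.

```python
import numpy as np

def empirical_acf(x, max_lag):
    """Standard (biased) sample autocorrelation of a stationary series x."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x) / n  # sample variance
    return np.array([np.dot(x[: n - k], x[k:]) / n / denom
                     for k in range(max_lag + 1)])

# Synthetic stand-in: an AR(1) process whose ACF decays monotonically,
# as expected in the overdamped regime; damped oscillatory dynamics would
# instead produce sign changes in the ACF.
rng = np.random.default_rng(1)
x = np.zeros(5000)
for k in range(1, len(x)):
    x[k] = 0.9 * x[k - 1] + rng.standard_normal()

acf = empirical_acf(x, 50)
# acf[0] == 1 by construction; monotone decay at small lags is the
# overdamped signature discussed in the text.
```

The distinction drawn in the text then reads off directly: a monotone ACF indicates the overdamped (purely real eigenvalue) regime, while zero crossings with a characteristic period indicate damped oscillations.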

Distributions of the Ensemble Variances
In this section, we analyze the distribution of the ensemble variances of agents' velocities and distances for long simulation times. For the quadratic potential $U$ with $\kappa = 2$, the random variables $D(t_k)$ and $E(t_k)$ are normally distributed for every $k \in \mathbb{N}$. This implies that for every $k \in \mathbb{N}$ the ensemble variance of velocities follows a generalized chi-squared distribution. For large $k \in \mathbb{N}$, the random variables $D(t_k)$ and $E(t_k)$ are asymptotically centered normal with covariance matrices $\Sigma_D(\infty)$ and $\Sigma_E(\infty)$, respectively (see Theorem 3.8). Moreover, they are asymptotically independent. This implies that for large $k \in \mathbb{N}$ the ensemble variances $V_p(t_k)$ and $V_Q(t_k)$ have asymptotically independent generalized chi-squared distributions whose parameters are determined through (43). Note that the distribution of the ensemble's velocity variance is independent of $\alpha$. In particular, still in the case $\kappa = 2$, the asymptotic expected ensemble variance of velocities is given by $\frac{\sigma^2(N^2-1)}{24N\beta} = 0.83125$, as $K_{n,n} = (N^2-1)/(12N)$ for all $n \in \{1, \ldots, N\}$ (see (34)), $N = 20$, and $\beta = \sigma = 1$. Similarly, the asymptotic expected ensemble variance of distances reads $\frac{\sigma^2(N^2-1)}{24N\alpha^2\beta}$.

The trajectories of the ensemble variances of the agents' velocities and distances exhibit different characteristics. However, similar ranges of variation for the velocity variance appear in all three scenarios, including the nonlinear model with $\kappa = 4$ (third scenario); see Figures 2, 3, and 4, middle panels. We simulate the three scenarios over $K_{\max} = 200\,000\,000$ iterations and collect the ensemble variances of agents' velocities and distances every $50\,000$ iterations (samples of $4\,000$ observations). We estimate the theoretical chi-squared distribution using Monte Carlo simulation of random variables $\|CU\|^2/N$, with $U : \Omega \to \mathbb{R}^N$ a random vector of independent standard normal random variables and $C \in \mathbb{R}^{N \times N}$ the Cholesky decomposition of the covariance matrix $\Sigma_D(\infty)$ and $\Sigma_E(\infty)$, respectively, see (43) (i.e., $CC^\top = \Sigma_D(\infty)$ for the chi-squared distribution of the ensemble's velocity variance, while $CC^\top = \Sigma_E(\infty)$ for the ensemble's distance variance). The histograms of simulated measurements and the theoretical chi-squared distributions are presented in Figure 6. Interestingly, it turns out that the chi-squared distribution of the agents' asymptotic ensemble velocity variance also fits the histograms of the simulation of the nonlinear model with $\kappa = 4$ (third scenario). This is surprising since the system is no longer an Ornstein-Uhlenbeck process if $\kappa = 4$. Different distance-based interaction terms (e.g., with $\kappa = 6$ or $U(x) = \exp(\alpha x)$) presented the same characteristic in further numerical experiments.
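The Monte-Carlo estimation of the generalized chi-squared reference distribution can be sketched as follows. The covariance matrix used here is a generic positive-definite placeholder standing in for $\Sigma_D(\infty)$ or $\Sigma_E(\infty)$ from (43), which are not reproduced in this section.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20

# Placeholder covariance: any symmetric positive-definite matrix stands in
# here for Sigma_D(infty) or Sigma_E(infty) from (43).
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N + np.eye(N)

C = np.linalg.cholesky(Sigma)  # lower-triangular factor with C @ C.T == Sigma

# Monte Carlo samples of ||C U||^2 / N, with U a vector of independent
# standard normal random variables: the generalized chi-squared target.
n_samples = 100_000
U = rng.standard_normal((n_samples, N))
gchi2 = ((U @ C.T) ** 2).sum(axis=1) / N

# Sanity check: E[||C U||^2 / N] = trace(Sigma) / N
print(gchi2.mean(), np.trace(Sigma) / N)
```

Comparing a histogram of `gchi2` against the histogram of collected ensemble variances reproduces the comparison displayed in Figure 6.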
We have thus seen that the distribution of the agents' asymptotic ensemble velocity variance does not depend on $\alpha$ and $\kappa$. In contrast, the asymptotic distribution of the ensemble variance of the agents' distances is directly impacted by these parameters. The ensemble variance of the agents' distances is asymptotically proportional to the inverse of the square of the parameter $\alpha$ in the linear case for which $\kappa = 2$ (see Theorem 3.8 and Figure 6, bottom panels). The ensemble variance of the distances is thus on average a factor of 100 larger in the first scenario than in the second. In the third scenario, where the distance-based interaction term is cubic, the distance variance is even more reduced and close to zero. This is not surprising, since the non-linearity of the interaction term makes the model extremely sensitive to fluctuations in the distances to the nearest neighbors. As a result, the distribution of the agents in space becomes increasingly uniform in the scenarios 1, 2, and 3 (see the trajectories in Figures 2, 3, and 4).

Figure 6: Histograms of the ensemble variance of the agents' velocities (top panels) and the ensemble variance of the agents' distances (bottom panels) obtained by simulation, together with the theoretical generalized chi-squared distributions for, from left to right, the first, second, and third scenarios. As expected, the theoretical results match the simulation in the first and second scenarios with $\kappa = 2$ (left and middle panels). It is interesting to note that the ensemble variance of velocities for the nonlinear model also appears to follow a chi-squared distribution (third scenario with $\kappa = 4$, top right panel).
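The factor of 100 follows directly from the $1/\alpha^2$ proportionality stated above. A one-line check, using the asymptotic expectation in the form reconstructed earlier in this section (an assumption of this sketch, not a quoted formula):

```python
def expected_distance_variance(N, alpha, beta=1.0, sigma=1.0):
    """Asymptotic expected ensemble variance of distances for kappa = 2,
    in the reconstructed form sigma^2 (N^2 - 1) / (24 N alpha^2 beta)."""
    return sigma ** 2 * (N ** 2 - 1) / (24 * N * alpha ** 2 * beta)

# First scenario (alpha = 0.1) versus second scenario (alpha = 1)
ratio = expected_distance_variance(20, 0.1) / expected_distance_variance(20, 1.0)
print(ratio)  # 100.0
```

For $\alpha = \beta = \sigma = 1$ and $N = 20$ this expression also coincides with the velocity value $0.83125$ quoted above, as the two formulas differ only by the factor $1/\alpha^2$.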

Figure 3: Second simulation scenario with $\alpha = 1$ and $\kappa = 2$ (Ornstein-Uhlenbeck process). Top panel: Agents' trajectories. Middle panel: Agents' velocity features. Bottom panel: Agents' distance features. The collective motion of the agents is more uniform when $\alpha = 1$ than when $\alpha = 0.1$ (compare with the first scenario, Figure 2). The variability of the distance is much more reduced for $\alpha = 1$, but the variability of the velocity shows qualitatively comparable characteristics.

Figure 4: Third simulation scenario with $\alpha = 1$ and $\kappa = 4$ (non-linear model). Top panel: Agents' trajectories. Middle panel: Agents' velocity features. Bottom panel: Agents' distance features. The collective motion of the agents is more regular again in the case of the hard (non-linear) distance-based interaction term (compare with the first and second scenarios, Figures 2 and 3). The distance's variability is close to zero, and the agents move synchronously according to the Brownian motion of the mean velocity. Interestingly, even in the non-linear case, the velocity variability shows similar distribution characteristics to those obtained with the linear model.

Figure 5: Autocorrelation functions (ACF) for the ensemble variances of velocities (left panel) and distances (right panel). The dynamics being overdamped in the first scenario, where $\alpha = 0.01$ and $\kappa = 2$, the ACFs show no oscillation. The dynamics are solely damped when $\alpha = 1$ and $\kappa = 2$, and the ACFs present oscillations with a period close to 10. For the nonlinear model with $\alpha = 1$ and $\kappa = 4$, the dynamics are again much more oscillatory (period close to 0.2).