Learning mean-field equations from particle data using WSINDy

We develop a weak-form sparse identification method for interacting particle systems (IPS) with the primary goals of reducing computational complexity for large particle number N and offering robustness to either intrinsic or extrinsic noise. In particular, we use concepts from mean-field theory of IPS in combination with the weak-form sparse identification of nonlinear dynamics algorithm (WSINDy) to provide a fast and reliable system identification scheme for recovering the governing stochastic differential equations for an IPS when the number of particles per experiment N is on the order of several thousands and the number of experiments M is less than 100. This is in contrast to existing work showing that system identification for N less than 100 and M on the order of several thousand is feasible using strong-form methods. We prove that under some standard regularity assumptions the scheme converges with rate O(N^{-1/2}) in the ordinary least squares setting and we demonstrate the convergence rate numerically on several systems in one and two spatial dimensions. Our examples include a canonical problem from homogenization theory (as a first step towards learning coarse-grained models), the dynamics of an attractive-repulsive swarm, and the IPS description of the parabolic-elliptic Keller-Segel model for chemotaxis. Code is available at https://github.com/MathBioCU/WSINDy_IPS.

Building on the sparse identification of nonlinear dynamics algorithm (SINDy) [1], we developed a weak-form version (WSINDy) for ODEs [2] and for PDEs [3]. In this work, we develop a formulation for discovering governing stochastic differential equations (SDEs) for interacting particle systems (IPS). To promote clarity and for reference later in the article, we first state the problem of interest; subsequently, we provide a discussion of background concepts and current results in the literature. Consider a particle system X_t = (X_t^{(1)}, …, X_t^{(N)}) ∈ ℝ^{Nd} where, on some fixed time window t ∈ [0, T], each particle X_t^{(i)} ∈ ℝ^d evolves according to the overdamped dynamics

dX_t^{(i)} = (−∇K * μ_t^N(X_t^{(i)}) − ∇V(X_t^{(i)})) dt + σ(X_t^{(i)}) dB_t^{(i)}, (1.1)

with initial data X_0^{(i)} drawn independently from some probability measure μ_0 ∈ P_p(ℝ^d), where P_p(ℝ^d) is the space of probability measures on ℝ^d with finite pth moment. Here, K is the interaction potential defining the pairwise forces between particles, V is the local potential containing all exogenous forces, σ is the diffusivity, and B_t^{(i)}, i = 1, …, N, are independent Brownian motions, each adapted to the same filtered probability space (Ω, ℬ, (ℱ_t)_{t≥0}, ℙ). The empirical measure is defined by

μ_t^N ≔ (1/N) Σ_{i=1}^N δ_{X_t^{(i)}},

where we set ∇K(0) = 0 whenever ∇K(0) is undefined. The recovery problem we wish to solve is the following.
(P) Let X = (X_t^{(1)}, …, X_t^{(M)}) be discrete-time data at L timepoints t ≔ (t_1, …, t_L) for M i.i.d. trials of the process (1.1) with K = K⋆, V = V⋆, and σ = σ⋆, and let Y = X + ε be a corrupted dataset. For some fixed compact domain D ⊂ ℝ^d containing supp(Y), and finite-dimensional hypothesis spaces ℋ_K ⊂ L²(D − D), ℋ_V ⊂ L²(D), and ℋ_σ ⊂ L²(D), recover (K⋆, V⋆, σ⋆) from Y; that is, find (K̂, V̂, σ̂) ∈ ℋ_K × ℋ_V × ℋ_σ approximating the true model. In general this problem is ill-posed. We consider two cases: (i) ε ≠ 0 and σ⋆ = 0, corresponding to purely extrinsic noise, and (ii) ε = 0 and σ⋆ ≠ 0, corresponding to purely intrinsic noise. The extrinsic noise case is important for many applications, such as cell tracking, where uncertainty is present in the position measurements. In this case we examine ε representing i.i.d. Gaussian noise with mean zero and variance ϵ²I_d added to each particle position in X. In the case of purely intrinsic noise, identification of the diffusivity σ⋆ is required, as well as the deterministic forces on each particle as defined by K⋆ and V⋆. A natural next step is to consider the case with both extrinsic and intrinsic noise; however, the combined noise case is sufficiently nuanced as to render it beyond the scope of this article, and we leave it for future work.
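For concreteness, the data-generation step in (P) can be sketched as follows — a minimal Python illustration (not the authors' MATLAB implementation) of Euler-Maruyama time stepping for (1.1) in d = 1, with the illustrative choices K(x) = V(x) = x²/2 so that ∇K * μ_t^N(x) = x − mean(X_t); all names are ours:

```python
import numpy as np

def simulate_ips(N=2000, L=50, dt=1e-3, sigma=0.1, seed=0):
    """Euler-Maruyama simulation of the IPS (1.1) in d = 1 with the
    illustrative choices K(x) = x^2/2 and V(x) = x^2/2, so that
    grad K * mu_t^N (x) = x - mean(X_t) and grad V(x) = x."""
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, 1.0, size=N)          # X_0^(i) ~ mu_0 i.i.d.
    traj = np.empty((L, N))
    for ell in range(L):
        traj[ell] = X                          # record positions at t_ell
        drift = -(X - X.mean()) - X            # -grad K * mu^N - grad V
        X = X + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
    return traj
```

An L × N array of this form, possibly perturbed by additive Gaussian noise ε, plays the role of the dataset Y in the examples below.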

Background
Interacting particle systems (IPS) such as (1.1) are used to describe physical and artificial phenomena in a range of fields including astrophysics [4,5], molecular dynamics [6], cellular biology [7-9], and opinion dynamics [10]. In many cases the number of particles N is large, with cell migration experiments often tracking 10^3-10^6 cells and simulations in physics (molecular dynamics, particle-in-cell, etc.) requiring N in the range 10^6-10^12.
Inference of such systems from particle data thus requires efficient means of computing pairwise forces from O(N^2) interactions at each timestep for multiple candidate interaction potentials K. Frequently, so-called mean-field equations at the continuum level are sufficient to describe the evolution of the system; however, in many cases (e.g. chemotaxis in biology [11]) only phenomenological mean-field equations are available. Moreover, it is often unclear how many particles N are needed for a mean-field description to suffice. Many disciplines are now developing machine learning techniques to extract coarse-grained dynamics from high-fidelity simulations (see [12] for a recent review in molecular dynamics). In this work we provide a means for inferring governing mean-field equations from particle data assumed to follow the dynamics (1.1) that is highly efficient for large N, and is effective in learning mean-field equations when N is in the range 10^3-10^5.
Inference of the drift and diffusion terms for stochastic differential equations (SDEs) is by now a mature field, with the primary method being maximum-likelihood estimation, which uses Girsanov's theorem together with the Radon-Nikodym derivative to arrive at a log-likelihood function for regression. See [13,14] for some early works and [15] for a textbook on this approach. More recently, sparse regression approaches using the Kramers-Moyal expansion have been developed [16][17][18] and the authors of [19] use sparse regression to learn population level ODEs from agent-based modeling simulations. The authors of [20] also derived a bias-correcting regression framework for inferring the drift and diffusion in underdamped Langevin dynamics, and in [21] a neural network-based algorithm for inferring SDEs was developed.
Aside from a Gaussian process regression algorithm recently developed in [22], applications of maximum likelihood theory are by far the most frequently studied. An early but often overlooked work by Kasonga [23] extends the maximum-likelihood approach to inference of the interaction potential K, assuming full availability of the continuous particle trajectories and the diffusivity σ. Two decades later, Bishwal [24] further extended this approach to discrete particle observations in the specific context of linear particle interactions. In both cases, a sequence of finite-dimensional subspaces is used to approximate the interaction function, and convergence is shown as the dimension of the subspace J and number of particles N both approach infinity. More recently, the maximum likelihood approach has been carried out in [25,26] in the case of radial interactions and in [27] in the case of linear particle interactions and single-trajectory data (i.e. one instance of the particle system). The authors of [28] recently developed an online maximum likelihood method for inference of IPS, and in [29] maximum likelihood is applied to parameter estimation in an IPS for pedestrian flow. It should also be noted that parameter estimation for IPS is common in the biological sciences, with the most frequently used technique being nonlinear least squares with a cost function comprised of summary statistics [7,30].
Problem (P) is made challenging by the coupled effects of K, V, and σ. In each of the previously mentioned algorithms, the assumption is made that σ is known and/or that K takes a specific form (radial or linear). In addition, the maximum likelihood-based approach approximates the differential dX_t^{(i)} of particle i using a 1st-order finite difference, dX_{t_ℓ}^{(i)} ≈ X_{t_{ℓ+1}}^{(i)} − X_{t_ℓ}^{(i)}, which is especially ill-suited to problems involving extrinsic noise in the particle positions. Our primary goal is to show that the weak-form sparse regression framework allows for identification of the full model (K, V, σ), with significantly reduced computational complexity, when N is on the order of several thousands or more. We use a two-step process: the density of particles is approximated using a density kernel G, and then the WSINDy algorithm (weak-form sparse identification of nonlinear dynamics) is applied in the PDE setting [2,3]. WSINDy is a modified version of the original SINDy algorithm [1,31] where the weak formulation of the dynamics is enforced using a family of test functions, which offers reduced computational complexity, high-accuracy recovery in low-noise regimes, and increased robustness to high-noise scenarios. The feasibility of this approach for IPS is grounded in the convergence of IPS to associated mean-field equations. The reduction in computational complexity follows from the reduction in evaluations of candidate potentials (as discussed in Section 4.2), as well as the convolutional nature of the weak-form algorithm.
To the best of our knowledge, we present here the first weak-form sparse regression approach for inference of interacting particle systems; we now review several related approaches that have recently been developed. In [32], the authors learn local hydrodynamic equations from active matter particle systems using the SINDy algorithm in the strong-form PDE setting. In contrast to [32], our approach learns nonlocal equations using the weak form; similarly to [32], however, we perform model selection and inference of parameters using sparse regression at the continuum level. The weak form provides an advantage because no smoothness is required on the particle density (for the requisite smoothness the authors of [32] use a Gaussian kernel, which is more expensive to compute than the simple particle binning done here). The authors of [33] developed an integral formulation for inference of plasma physics models from PIC data using SINDy; however, their method involves first computing strong-form derivatives and then averaging, rather than integration by parts against test functions as done here, and, as in [32], the learned models are local. In [34], the authors apply the maximum likelihood approach in the continuum setting on the underlying nonlocal Fokker-Planck equation and learn the nonlocal PDE directly using strong-form discretizations of the dynamics. While we similarly use the continuum setting for inference (albeit in weak form), our approach differs from [34] in that it is designed for the more realistic setting of discrete-time particle data, rather than pointwise data on the particle density (assumed to be smooth in [34]).

Contributions
The purpose of the present article is to show that the weak form provides an advantage in speed and accuracy compared with existing inference methods for particle systems when the number of particles is sufficiently large (on the order of several thousand or more). The key points of this article include:

I.
Formulation of a weak-form sparse recovery algorithm for simultaneous identification of the particle interaction force K, local potential V , and diffusivity σ from discrete-time particle data.

II.
Convergence with rate O(N^{-1/2}) of the resulting full-rank least-squares solution as the number of particles N → ∞ and timestep Δt → 0.
III. Numerical illustration of (II) along with robustness to either intrinsic randomness (e.g. Brownian motion) or extrinsic randomness (e.g. additive measurement noise).

Paper organization
In Section 3 we review results from mean-field theory used to show convergence of the weak-form method. In Section 4 we introduce the WSINDy algorithm applied to interacting particles, including hyperparameter selection, computational complexity, and convergence of the method under suitable assumptions in the limit of large N. Section 5 contains numerical examples exhibiting the convergence rates of the previous section and examining the robustness of the algorithm to various sources of corruption, and Section 6 contains a discussion of extensions and future directions. In the Appendix we provide information on the hyperparameters used (A.1), a derivation of the homogenized equation of Section 5, and proofs of the results below (A.4).

Mean-field theory

As N → ∞, the empirical measure μ_t^N converges (in the sense made precise below) to a limit μ_t, where μ_t is a weak-measure solution to the mean-field dynamics

∂_t μ_t = ∇ ⋅ (μ_t ∇K * μ_t) + ∇ ⋅ (μ_t ∇V) + (1/2) Σ_{i,j=1}^d ∂_{x_i}∂_{x_j}((σσ^T)_{ij} μ_t). (3.1)

Eq. (3.1) describes the evolution of the distribution of the McKean-Vlasov process

dX_t = −∇K * μ_t(X_t) dt − ∇V(X_t) dt + σ(X_t) dB_t. (3.2)

This implies that as N → ∞, an initially correlated particle system driven by pairwise interactions decorrelates: each particle becomes independent and interacts only with its mean-field distribution μ_t. In particular, the following theorem summarizes several mean-field results taken from the review article [35], with proofs in [36,37].

Theorem ([35-37]). Assume that ∇K is globally Lipschitz, V = 0, and σ(x) = σ = const. In addition assume that μ_0 ∈ P_2(ℝ^d). Then for any T > 0 and all t ≤ T it holds that:

i.
There exists a unique solution (X t , μ t ) where X t is a strong solution to (3.2) and μ t is a weak-measure solution to (3.1).

ii.
For any ϕ ∈ C_b, E|⟨ϕ, μ_t^N⟩ − ⟨ϕ, μ_t⟩| ≤ C N^{−1/2}, with C depending on Lip(∇K) and T.

iii.
For any k ∈ ℕ and a.e. t < T, the k-particle marginal of the joint distribution of (X_t^{(1)}, …, X_t^{(N)}) converges weakly to the product measure μ_t^{⊗k} as N → ∞ (propagation of chaos).

4 We use the notation t ↦ μ_t to denote the evolution of probability measures. Subscripts will not be used to denote differentiation.
The previous result immediately extends to the case of ∇V and σ both globally Lipschitz, and has been extended to ∇K only locally Lipschitz in [38], to ∇K with a Coulomb-type singularity at the origin in [39], and to domains with boundaries in [40,41]. Analysis of the model (3.1) continues to evolve in various contexts, including analysis of equilibria [42-44] and connections to deep learning [45]. For our convergence result below we simply assume that K⋆, V⋆, σ⋆, and μ_0 are such that (i) and (ii) from the above theorem hold.

Weak form
Despite the O(N^{−1/2}) convergence of the empirical measure in the previous theorem, it is unclear at what particle number N the mean-field equations become a suitable framework for inference using particle data, due to the complex variance structure at any finite N. A key piece of the present work is to show that the weak form of the mean-field equations does indeed provide a suitable setting when N is at least several thousand. Moreover, since in many cases (3.1) can only be understood in a weak sense, the weak form is the natural framework for identification. We say that μ_t is a weak solution to (3.1) if for any compactly supported ψ ∈ C²(ℝ^d × (0, T)) it holds that

∫_0^T ∫_{ℝ^d} ∂_t ψ(x, t) dμ_t(x) dt = ∫_0^T ∫_{ℝ^d} ∇ψ(x, t) ⋅ (∇K * μ_t(x) + ∇V(x)) dμ_t(x) dt − (1/2) ∫_0^T ∫_{ℝ^d} Tr(∇²ψ(x, t) σ(x)σ^T(x)) dμ_t(x) dt, (3.4)

where ∇²ψ denotes the Hessian of ψ and Tr(A) is the trace of the matrix A. Our method requires discretizing (3.4) for all ψ ∈ Ψ, where Ψ = (ψ_1, …, ψ_n) is a suitable test function basis, and approximating the mean-field distribution μ_t with a density U_t constructed from discrete particle data at time t. We then find K, V, and σ within specified finite-dimensional function spaces.
Implementation requires choosing several hyperparameters, including the computational domain D and grid C, the density kernel G, the test function basis Ψ, the trial libraries and inner product ⟨·, ·⟩, and the sparsity factors λ for the modified sequential thresholding least-squares Algorithm 4.2 (MSTLS) reviewed below. We discuss choices of these hyperparameters in Section 4.1, computational complexity of the algorithm in Section 4.2, and convergence of the algorithm in Section 4.3. In Section 4.4 we briefly discuss gaps between theory and practice. Table 1 includes a list of notations used throughout.
In the present work we adopt the scheme used in the application of WSINDy to local PDEs [3], which includes the trapezoidal rule in space and time with test functions ψ compactly supported in D × (0, T). We take D to be a rectangular domain enclosing supp(Y) and C ⊂ D to be equally spaced in order to efficiently evaluate convolution terms. In what follows we denote by ⟨·, ·⟩ the continuous inner product over D × [0, T], by ⟨·, ·⟩_ℎ the inner product evaluated using the composite trapezoidal rule in space with meshwidth ℎ and Lebesgue integration in time, and by ⟨·, ·⟩_{ℎ,Δt} the inner product evaluated using the trapezoidal rule in both space and time, with meshwidth ℎ in space and Δt in time. With some abuse of notation, f * g will denote the convolution of f and g, understood to be discrete or continuous from context, and μ^N and μ will denote the empirical and mean-field measures, respectively. The density kernel G defines the binning of the particles: the spatial grid C = (c_k)_k is the set of center-points of the bins (B_k)_k, and the discrete histogram data is

U_t(c_k) ≔ (1/(N ℎ^d)) Σ_{i=1}^N 1_{B_k}(X_t^{(i)}). (4.1)

The discrete histogram U_t = U_t(C) then serves as an approximation to the mean-field distribution μ_t.
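The binning step can be sketched as follows — a minimal Python illustration of forming the normalized histogram from particle positions at one timepoint (the function name and interface are ours, not the reference implementation):

```python
import numpy as np

def histogram_density(X, edges):
    """Approximate the mean-field density mu_t on a uniform grid by binning.

    X     : (N, d) array of particle positions at one timepoint.
    edges : list of d arrays of uniform bin edges covering the domain D.
    Returns U, the histogram normalized to integrate to one, whose values
    live at the bin centers c_k.
    """
    U, _ = np.histogramdd(X, bins=edges)
    widths = [e[1] - e[0] for e in edges]
    U /= X.shape[0] * np.prod(widths)   # normalize so that sum(U) * h^d = 1
    return U
```

Applying this at each of the L timepoints yields the space-time array U = (U_t)_{t ≤ T} used in the weak-form linear system.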
Pointwise estimation of densities from samples of particles usually requires large numbers of particles to achieve reasonably low variance, and in general the variance grows inversely proportional to the bin width ℎ. One benefit of the weak form is that integrating against a histogram U does not suffer from the same increase in variance with small ℎ. In particular, we have the following.

Lemma 1. Let μ ∈ P(ℝ^d) and let Y = (Y^{(1)}, …, Y^{(N)}) with each Y^{(i)} ∼ μ. Let U be the histogram computed with kernel G using (4.1) with bins of equal sidelength ℎ. Then for any ψ ∈ C¹(ℝ^d) compactly supported in D, we have the mean-squared error bound (for C depending on D)

E[(⟨ψ, U⟩_ℎ − ⟨ψ, μ⟩)²] ≤ C ‖ψ‖²_{C¹} (ℎ² + N^{−1}). (4.2)

Remark 1.
We note that (4.2) follows immediately for Y^{(i)} ∼ μ i.i.d., 8 and also for Y = X_t a solution to (1.1) at time t with mean-field distribution μ = μ_t according to (3.3) (for suitable K, V, and σ), which is the setting of the current article.
Proof of Lemma 1. First we note that by compact support of ψ, the trapezoidal sum ⟨ψ, U⟩_ℎ reduces to a finite sum over the bins B_k intersecting supp(ψ). (The indicator function is defined 1_A(x) ≔ 1 if x ∈ A and 0 if x ∉ A.) 8 In this case (4.2) is the variance of a Monte Carlo estimator for ∫ ψ(x) dμ(x).

Messenger and Bortz
Page 9 Physica D. Author manuscript; available in PMC 2023 July 20.
where the midpoint approximation ψ_C of ψ is given by the values of ψ at the bin centers C. Hence we simply split the error into a quadrature term and a sampling term and bound each separately. □

The previous lemma in particular shows that a small bin width ℎ does not negatively impact ⟨ψ, U⟩_ℎ as an estimator of ⟨ψ, μ⟩, which is in contrast to U(x) as a pointwise estimator of μ(x).
For example, if we assume that Y is sampled from a C¹ density μ, it is well known that the mean-square optimal bin width scales like N^{−1/3} [46]. Summarizing this result, an elementary computation bounds the pointwise bias |E[U(x)] − μ(x)| by L_k ℎ, where L_k ≔ max_{x∈B_k} |∇μ(x)|, while the variance is bounded by C/(Nℎ); hence the mean-squared error satisfies E[(U(x) − μ(x))²] ≤ L_k² ℎ² + C/(Nℎ). Minimizing this bound over ℎ we find an approximately optimal bin width ℎ* = O(N^{−1/3}),

which provides an overall pointwise root-mean-squared error of O(N^{−1/3}). Hence, not only does the weak form remove the inverse dependence of the variance on the bin width ℎ, but fewer particles are needed to accurately approximate integrals of the density μ.
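The bin-width trade-off above can be summarized as follows (standard one-dimensional histogram asymptotics; the constants are illustrative):

```latex
\mathrm{MSE}(h) \;\le\; L_k^2 h^2 + \frac{C}{N h},
\qquad
h^{*} = \left(\frac{C}{2 L_k^2 N}\right)^{1/3} = O\!\left(N^{-1/3}\right),
\qquad
\sqrt{\mathrm{MSE}(h^{*})} = O\!\left(N^{-1/3}\right).
```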

Test function basis-For the test functions ψ we use the same approach as in the PDE setting [3]: namely, we fix a reference test function ψ and set ψ_k(x, t) ≔ ψ(x − x_k, t − t_k), 1 ≤ k ≤ n, where Q ≔ {(x_k, t_k)}_{1≤k≤n} is a fixed set of query points. This, together with a separable representation of ψ, enables construction of the linear system (G, b) using the FFT. We choose each factor ϕ_j, 1 ≤ j ≤ d + 1, of the form ϕ_{m,p}, where m is the integer support parameter such that ϕ_{m,p} is supported on 2m + 1 points of spacing Δ ∈ {ℎ, Δt}, and p ≥ 1 is the polynomial degree. For simplicity we set ϕ_j = ϕ_{m_x,p_x} for 1 ≤ j ≤ d and ϕ_{d+1} = ϕ_{m_t,p_t}, so that only the numbers m_x, p_x, m_t, p_t need to be specified.
Since ϕ_{m,p} has exactly p weak derivatives, p_x and p_t must be at least as large as the maximum spatial and temporal derivative orders appearing in the library L, i.e. p_x ≥ 2 and p_t ≥ 1. Larger p results in higher-accuracy enforcement of the weak form (3.4) in low-noise situations (see Lemma 2 of [2] for details); however, the convergence analysis below indicates that smaller Lip(∂^α ψ), |α| ≤ 2, may reduce variance. The support parameter m determines the length and time scales of interest and must be chosen small enough to extract relevant scales yet large enough to sufficiently reduce variance.
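For concreteness, assuming the piecewise-polynomial form ϕ_{m,p}(x) = (1 − (x/(mΔ))²)^p common to WSINDy implementations (our reconstruction; normalization constants omitted), the discrete test function values and their derivatives can be generated as follows in Python:

```python
import numpy as np

def phi(m, p):
    """Values of the bump phi_{m,p}(x) = (1 - (x/(m*Delta))^2)^p on its
    support of 2m+1 grid points (assumed form; normalization omitted)."""
    x = np.arange(-m, m + 1) / m          # stencil scaled to [-1, 1]
    return (1.0 - x**2) ** p

def phi_prime(m, p, delta):
    """Derivative of phi_{m,p} on the same stencil, used to move
    derivatives off the data in the weak-form columns."""
    x = np.arange(-m, m + 1) / m
    return (-2.0 * p * x / (m * delta)) * (1.0 - x**2) ** (p - 1)

# separable space-time test function on a (2*m_x+1) x (2*m_t+1) stencil
psi = np.outer(phi(m=20, p=5), phi(m=10, p=3))
```

The separability of psi is what later allows the linear system to be assembled with a series of one-dimensional FFTs.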
In [3, Appendix A] the authors developed a changepoint algorithm to choose m_x, m_t, p_x, p_t automatically from the Fourier spectrum of the data U. Here, for each of the three examples in Section 5, we fix ψ across all particle numbers N, extrinsic noises ε, and intrinsic noises σ, in order to instead focus on convergence in N. To strike a balance between accuracy and small Lip(ψ) we choose p_x = 5 and p_t = 3 throughout. We used a combination of the changepoint algorithm and manual tuning to arrive at m_x and m_t which work well across all noise levels and numbers of particles examined. Query points Q are taken to be an equally-spaced subgrid of C with spacings s_x and s_t for the spatial and temporal coordinates. The resulting values p_x, p_t, m_x, m_t, s_x, and s_t determine the weak discretization scheme and can be found in Appendix A.1 for each example below.
The results in Section 5 appear robust to 3 ≤ p_x, p_t ≤ 9. In addition, choosing m_x and m_t specific to each dataset using the changepoint method often improves results. Although this selection is automated in the changepoint algorithm, we recommend visualizing the overlap between the Fourier spectra of the data U and the test function ψ when choosing p_x, p_t, m_x, m_t, in order to directly observe which modes in the data will experience filtering under convolution with ψ. In general, there is much flexibility in the choice of ψ, and optimizing ψ continues to be an active area of research.

Trial function library-
The general Algorithm 4.1 does not impose a radial structure on the interaction potential K, nor does it assume any prior knowledge that the particle system is in fact interacting.
In the examples below, the libraries L_K, L_V, L_σ are composed of monomial and/or trigonometric terms to demonstrate that sparse regression is effective in selecting the correct combination of nonlocal drift, local drift, and diffusion terms. Rank deficiency can result, however, from naive choices of nonlocal and local bases. Consider the quadratic kernel K(x) = |x|²/2, which satisfies 10 ∇K * μ_t(x) = x − M_1(μ_t), where M_1(μ_t) denotes the first moment of μ_t: including the same power-law terms in both libraries L_K and L_V will then lead to rank deficiency. This is easily avoided by incorporating known symmetries of the model (3.2); however, in general we recommend that the user builds the library L incrementally and monitors the condition number of G while selecting terms.

Sparse regression-As in [3], we enforce sparsity using a modified sequential thresholding least-squares algorithm (MSTLS), included as Algorithm 4.2, where the "modifications" are two-fold. First, we incorporate into the thresholding step the magnitude of the overall term ‖w_j G_j‖_2 as well as the coefficient magnitude |w_j|, by defining non-uniform lower and upper thresholds (4.5), where J = J_K + J_V + J_σ is the number of columns in G. Second, we perform a grid search 11 over candidate sparsity parameters λ and choose the parameter λ̂ that is the smallest minimizer over λ of the cost function ℒ(λ), where w^λ is the output of the sequential thresholding algorithm with non-uniform thresholds (4.5) and w^0 is the least-squares solution. 12 The final coefficient vector is then set to w ≔ w^λ̂. We now review some aspects of Algorithm 4.2. Results from [47] on the convergence of STLS carry over to the inner loop of Algorithm 4.2: namely, if G is full-rank, the inner loop terminates in at most J iterations with a resulting coefficient vector w^λ that is a local minimizer of the cost function F(w) = ‖Gw − b‖_2² + λ²‖w‖_0. This implies that the full algorithm terminates in at most mJ least-squares solves (each on a subset of columns of G), where m is the number of candidate sparsity parameters.

10 This is not true in domains with boundaries, where nonlocalities can be seen to impart mean translation [42]. 11 Note that this is feasible because the STLS algorithm terminates in finitely many iterations.
When considering recovery of the true weight vector w⋆, Theorem 1 implies convergence of w to w⋆ in the particle number N when G is full-rank. The rate of convergence depends implicitly on the condition number of G; hence it is recommended that one builds the library L incrementally, stopping before the condition number κ(G) grows too large. If G is rank deficient, classical recovery guarantees from compressive sensing do not necessarily apply, due to high correlations between the columns of G (recall each column is constructed from the same dataset Y). 13 One may employ additional regularization (e.g. Tikhonov regularization as in [31]); however, in general, improving sparse regression algorithms for rank-deficient, noisy, and highly-correlated matrices remains an active area of research.
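To make the procedure concrete, here is a minimal Python sketch of sequential thresholding with non-uniform bounds and a grid search over λ; the threshold and cost formulas are our paraphrase of MSTLS, not a verbatim transcription of Algorithm 4.2:

```python
import numpy as np

def stls(G, b, lam, maxit=25):
    """Sequential thresholding with non-uniform bounds in the spirit of
    (4.5): a term j survives if Lb_j <= |w_j| <= Ub_j, where the bounds
    compare |w_j| and ||G_j|| against ||b|| (our paraphrase)."""
    normb = np.linalg.norm(b)
    norms = np.linalg.norm(G, axis=0)
    Lb = lam * np.maximum(1.0, normb / norms)
    Ub = (1.0 / lam) * np.minimum(1.0, normb / norms)
    w = np.linalg.lstsq(G, b, rcond=None)[0]
    for _ in range(maxit):
        keep = (np.abs(w) >= Lb) & (np.abs(w) <= Ub)
        w_new = np.zeros_like(w)
        if keep.any():                      # refit on the surviving columns
            w_new[keep] = np.linalg.lstsq(G[:, keep], b, rcond=None)[0]
        if np.array_equal(w_new != 0, w != 0):
            return w_new                    # support has stabilized
        w = w_new
    return w

def mstls(G, b, lams):
    """Grid search over an ascending lambda grid, keeping the smallest
    minimizer of a cost balancing residual and sparsity (sketch)."""
    best, best_w = np.inf, np.linalg.lstsq(G, b, rcond=None)[0]
    for lam in lams:
        w = stls(G, b, lam)
        cost = (np.linalg.norm(G @ w - b) / max(np.linalg.norm(b), 1e-14)
                + np.count_nonzero(w) / G.shape[1])
        if cost < best:                     # strict: keeps smallest minimizer
            best, best_w = cost, w
    return best_w
```

On noiseless synthetic data with a sparse true coefficient vector, this sketch recovers the correct support across a wide range of λ grids.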

The bounds (4.5) enforce a quasi-dominant balance rule, such that ‖w_j G_j‖_2 is within log_10(λ) orders of magnitude of ‖b‖_2 and |w_j| is within log_10(λ) orders of magnitude of 1 (the coefficient of the time derivative ∂_t μ_t). This is specifically designed to handle poorly-scaled data (see the Burgers and Korteweg-de Vries examples in [3]); however, we leave a more thorough examination of the thresholding requirements necessary for models with multiple scales to future work.
As the sum of two relative errors, minimizers of the cost function ℒ weight equally the accuracy and sparsity of w^λ. By choosing λ̂ to be the smallest minimizer of ℒ over λ, we identify the thresholds λ < λ̂ as those resulting in overfit models. We commonly choose the grid λ to be log-equally spaced (e.g. 50 points from 10^{−4} to 1) and, starting from a coarse grid, refine until the minimum of ℒ is stationary. 12 The Moore-Penrose inverse A† is defined for a rank-r matrix A using the reduced SVD A = U_r Σ_r V_r* as A† ≔ V_r Σ_r^{−1} U_r*. The subscript r denotes restriction to the first r columns. 13 In particular, correlations result in large mutual coherence, which renders algorithms such as Basis Pursuit, Orthogonal Matching Pursuit, and Hard Thresholding Pursuit useless (see [48, Chapter 5] for details).

Computational complexity
To compute convolutions against ∇K for each K ∈ L_K, we first evaluate (∂_{x_i}K)_{1≤i≤d} at the difference grid C − C ≔ {c − c′ : c, c′ ∈ C}, which along the ℓth coordinate consists of 2n_ℓ − 1 equally spaced points of spacing ℎ, where ℎ is the spacing of C and n_ℓ, 1 ≤ ℓ ≤ d, is the number of points in C along the ℓth coordinate. We then use the d-dimensional FFT to compute the convolutions ∂_{x_i}K * U_t, where only entries corresponding to particle interactions within C are retained. For d = 1 this amounts to O(|C| log |C|) flops per timestep. For d = 2 and higher dimensions, the d-dimensional FFT is considerably slower unless one of the arrays is separable. To enforce separability, trial interaction potentials in L_K can be chosen to be sums of separable functions, in which case only a series of one-dimensional FFTs are needed to compute ∂_{x_i}K * U_t, and again the cost is O(|C| log |C|) per timestep. When K is not separable, a low-rank approximation can be computed from ∂_{x_i}K, which again reduces convolutions to a series of one-dimensional FFTs. For d = 2, this is accomplished using the truncated SVD, while for higher dimensions there does not exist a unique best rank-Q tensor approximation, although several efficient algorithms are available to compute a sufficiently accurate decomposition [49-51] (and the field of fast tensor decompositions is advancing rapidly).
We propose to compute convolutions by first computing a low-rank decomposition of ∂_{x_i}K using the randomized truncated SVD [52] or a suitable randomized tensor decomposition, and then applying the d-dimensional FFT as a series of one-dimensional FFTs. In the examples below we consider only d = 1 and d = 2, and leave extension to higher dimensions to future work.
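In d = 1, the FFT-based evaluation of ∂_x K * U_t on the grid C can be sketched as follows (a minimal Python illustration; the naming is ours):

```python
import numpy as np

def conv_force(Kprime_vals, U, h):
    """Evaluate (K' * U)(c) on the grid C by discrete convolution.

    Kprime_vals : K' sampled on the difference grid C - C (2n - 1 points).
    U           : histogram density on the n grid points of C.
    h           : grid spacing (trapezoid-style quadrature weight).
    The FFT-based convolution costs O(|C| log |C|) per timestep.
    """
    n = U.size
    m = n + Kprime_vals.size - 1                 # full linear-convolution length
    F = np.fft.irfft(np.fft.rfft(Kprime_vals, m) * np.fft.rfft(U, m), m)
    return F[n - 1:2 * n - 1] * h                # retain the n entries on C
```

For K(x) = x²/2 (so K'(x) = x) and a density U integrating to one, this returns c − mean(U), matching the exact nonlocal force for the quadratic kernel.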
Using low-rank approximations, the mean-field approach provides a significant reduction in computational complexity compared to direct evaluation of particle trajectories when N is sufficiently large. A particle-level computation of the nonlocal force in weak form requires evaluating terms of the form

(1/N) Σ_{i=1}^N ∇ψ_k(X_t^{(i)}, t) ⋅ (1/N) Σ_{j=1}^N ∇K(X_t^{(i)} − X_t^{(j)}).

For a single candidate interaction potential K, a collection of J test functions ψ, and M experiments, this amounts to MLN² + MLNJ function evaluations in ℝ^d and O(MLN²J) flops. If we use the proposed method, employing the convolutional weak form with a separable reference test function ψ (as in WSINDy for PDEs [3]) and exploiting a rank-Q approximation of ∂_x K when computing convolutions against the interaction potential, we instead evaluate the weak-form terms using O(LQ|C| log(|C|)) flops and only 2^d |C| evaluations of ∂_x K, reused at each of the L timepoints. 15

Convergence
We now show that the estimators K̂, V̂, and σ̂ of the weak-form method converge with rate O(ℎ + N^{−1/2} + Δt^η) when ordinary least squares is used (i.e. λ = 0) and only M = 1 experiment is available. Here η > 0 is the Hölder exponent of the sample paths of the process X_t. We assume that D, C, G, ℎ, and the resulting histogram U = (U_t)_{t≤T} are as in Section 4.1.2. We make the following assumptions on the true model and resulting linear system throughout this section.
Assumption H. Let p ≥ 1 be fixed.

15 We neglect the cost of computing the histogram U and evaluating ψ(C), together amounting to an additional O(NML + |C|) flops, as these terms are lower order and reused in each column of G and b.

(H.1) For each N ≥ 2, X_t = (X_t^{(1)}, …, X_t^{(N)}) is a strong solution to (1.1) for t ∈ [0, T], and for some η > 0 the sample paths t ↦ X_t^{(i)}(ω) are almost surely η-Hölder continuous, i.e.
for some C_η > 0, |X_t^{(i)}(ω) − X_s^{(i)}(ω)| ≤ C_η |t − s|^η for all s, t ∈ [0, T].

(H.2) The initial particle distribution μ_0 satisfies a suitable moment bound.

(H.3) ∇K⋆ and ∇V⋆ satisfy a polynomial growth bound with some constant C_p > 0.

(H.4) An analogous growth bound, with the same constant C_p > 0, holds for the terms of the library L.

(H.5) The test functions (ψ_k)_{1≤k≤n} ⊂ C²(ℝ^d × (0, T)) are compactly supported and, together with the library L, are such that G has full column rank with ‖G†‖_1 ≤ C_G almost surely for some constant C_G > 0.
(H.6) The true functions K ⋆ , V ⋆ , and σ ⋆ are in the span of L.

Proof. See Appendix A.4.
With the following lemma, we can relate the histogram U to the empirical measure μ^N using the inner product ⟨·, ·⟩_ℎ defined by trapezoidal-rule integration in space and continuous integration in time.

Proof. See Appendix A.4.
The previous estimates directly lead to the following bound on the model coefficients w.

Theorem 1. Assume that Assumption H holds. Let w be the learned model coefficients and w⋆ the true model coefficients. For C independent of N, ℎ, and Δt it holds that

E‖w − w⋆‖_1 ≤ C (ℎ + N^{−1/2} + Δt^η).
Proof. Using that K⋆, V⋆, and σ⋆ are in the span of L (H.6), we may write b = Gw⋆ + r, where the kth entry of the residual r collects the discretization and sampling errors against the kth row G_k^T of G. From Lemmas 2-4 we have E‖r‖_1 ≤ C(ℎ + N^{−1/2} + Δt^η). Using that G is full rank, it holds that w = G†b = w⋆ + G†r, hence the result follows for K from the uniform bound on ‖G†‖_1 (H.5), and similarly for V and σ. Finally, setting ℎ = N^{−α} for α > 0 will ensure convergence as N → ∞ and Δt → 0. □

Theory vs. Practice
We now make several remarks about the practical performance of Algorithm 4.1 with respect to the theoretical convergence of Theorem 1.

Remark 2.
An important case of Theorem 1 is σ ⋆ = 0, in which case μ t N itself is a weak-measure solution to the mean-field Eq. (3.1) and the algorithm returns, for η ≥ 2, ‖w − w ⋆ ‖ 1 ≤ C(ℎ + Δt η ). This partially explains the accuracy observed for purely-extrinsic noise examples in Figs. 5 and 9. We note further that in the absence of noise (ε = 0 and σ ⋆ = 0, not included in this work) Algorithm 4.1 recovers systems to high accuracy similarly to WSINDy applied to local dynamical systems [2,3].
Remark 3. Algorithm 4.1 in general implements sparse regression, yet Theorem 1 deals with ordinary least squares. Since least squares is a common subroutine of many sparse regression algorithms (including the MSTLS algorithm used here), the result is still relevant to sparse regression. Moreover, the full-rank assumption on G implies that as N → ∞ sequential thresholding reduces to least squares. For any fixed M > 1, the N → ∞ limit yields convergence; however, the limit M → ∞ with N fixed does not, as this limit does not lead to the mean-field equations.¹⁸ The examples below show that using M > 1 has a practical advantage, and in Appendix A.3 we demonstrate that even for small particle systems (N = 10) the large-M regime yields satisfactory results.
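For reference, the sequential-thresholding idea discussed above can be sketched as follows. This toy `stls` routine is an illustrative stand-in, not the paper's MSTLS (which additionally rescales thresholds by column norms and scans over a range of λ):

```python
import numpy as np

def stls(G, b, lam, maxit=10):
    """Sequentially thresholded least squares: alternate a least
    squares solve with zeroing of coefficients below lam."""
    w = np.linalg.lstsq(G, b, rcond=None)[0]
    for _ in range(maxit):
        small = np.abs(w) < lam
        w[small] = 0.0
        big = ~small
        if not big.any():
            break
        # refit the surviving coefficients by least squares
        w[big] = np.linalg.lstsq(G[:, big], b, rcond=None)[0]
    return w

# toy example: b is generated by columns 0 and 2 only
rng = np.random.default_rng(0)
G = rng.standard_normal((200, 5))
w_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
w = stls(G, G @ w_true, lam=0.1)
```

With noiseless data and full-column-rank G, the initial least-squares solve is already exact and thresholding removes nothing, consistent with the remark that sequential thresholding reduces to least squares in the well-resolved limit.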

Examples
We now demonstrate the successful identification of several particle systems in one and two spatial dimensions, as well as the O(N^{−1∕2}) convergence predicted in Theorem 1. In each case we use Algorithm 4.1 to discover a mean-field equation of the form (3.1) from discrete-time particle data. For each dataset we simulate the associated interacting particle system X_t given by (1.1) using the Euler–Maruyama scheme (initial conditions and timestep are given in each example). We assess the ability of WSINDy to select the correct model using the true positivity ratio¹⁹

TPR(w) = TP ∕ (TP + FN + FP),  (5.1)

where TP is the number of correctly identified nonzero coefficients, FN is the number of coefficients falsely identified as zero, and FP is the number of coefficients falsely identified as nonzero [53]. To demonstrate the O(N^{−1∕2}) convergence given by (4.10), for correctly identified models (i.e. TPR(w) = 1) we compute the relative ℓ₂-error of the recovered interaction force ∇K, local force ∇V, and diffusivity σ over C − C and C, respectively, denoting this by ‖·‖ in the plots below. Results are averaged over 100 trials.
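The true positivity ratio (5.1) depends only on the supports of the learned and true coefficient vectors; a minimal sketch:

```python
def tpr(w, w_true):
    """True positivity ratio TPR = TP / (TP + FN + FP) comparing the
    support of learned coefficients w with true coefficients w_true."""
    tp = sum(1 for a, b in zip(w, w_true) if a != 0 and b != 0)
    fn = sum(1 for a, b in zip(w, w_true) if a == 0 and b != 0)
    fp = sum(1 for a, b in zip(w, w_true) if a != 0 and b == 0)
    return tp / (tp + fn + fp)
```

TPR(w) = 1 exactly when the learned support matches the true support, regardless of the coefficient values themselves.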
For the computational grid C, we first compute the sample standard deviation s of Y and choose D to be the rectangular grid extending 3s from the mean of Y in each spatial dimension. We then set C to have 128 points in x and y for d = 2 dimensions, and 256 points in x for d = 1, noting that these numbers are fairly arbitrary and are used to show that the grid need not be too large. We set the sparsity factors so that log₁₀(λ) contains 100 equally spaced points from −4 to 0. More information on the specifications of each example can be found in Appendix A.1. (MATLAB code used to generate the examples is available at https://github.com/MathBioCU/WSINDy_IPS.)
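The grid construction above can be sketched as follows (the helper name `make_grid` and its defaults are ours):

```python
import numpy as np

def make_grid(Y, width=3.0, npts=256):
    """Rectangular domain D extending `width` sample standard
    deviations from the mean of the particle data Y in each spatial
    dimension, discretized uniformly with npts points per dimension.
    Y has shape (num_samples, d)."""
    mean = Y.mean(axis=0)
    s = Y.std(axis=0, ddof=1)
    return [np.linspace(m - width * sd, m + width * sd, npts)
            for m, sd in zip(mean, s)]

rng = np.random.default_rng(1)
Y = rng.standard_normal((10_000, 1))   # toy 1D particle data
(gx,) = make_grid(Y, npts=256)
```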

Two-dimensional local model and homogenization
The first system we examine is a local model (K⋆(x, y) = 0) defined by the local potential V⋆(x, y) = −x − y and diffusivity σ⋆(x, y) = √(2(1 + 0.95 cos(ωx) cos(ωy))) I₂, where I₂ is the identity in ℝ². This results in a constant-advection, variable-diffusivity mean-field model²⁰

∂_t μ_t = −∂_x μ_t − ∂_y μ_t + Δ[(1 + 0.95 cos(ωx) cos(ωy)) μ_t].  (5.2)

The purpose of this example is three-fold. First, we are interested in the ability of Algorithm 4.1 to correctly identify a local model from a library containing both local and nonlocal terms. Next, we evaluate whether the O(N^{−1∕2}) convergence is realized. Lastly, we investigate whether for large ω the weak form identifies the associated homogenized equation (derived in Appendix A.2)

∂_t μ_t = −∂_x μ_t − ∂_y μ_t + σ̄ Δμ_t,  (5.3)

where σ̄ is given by the harmonic mean of the diffusion coefficient. For ω ∈ {1, 20} we evolve the particles from an initial Gaussian distribution with mean zero and covariance I₂ and record particle positions for 100 timesteps with Δt = 0.02 (subsampled from a simulation with timestep 10^{−4}). We use a rectangular domain D of approximate sidelength 10 and compute histograms with 128 bins in x and y for a spatial resolution of Δx ≈ 0.078 (see Fig. 2 for solution snapshots), over which σ̄ ≈ 0.62. For ω = 1 we compare recovered equations with the full model (5.2), while for ω = 20 we compare with (5.3), in the latter case computing σ̄ over each domain D using MATLAB's integral2. Fig. 3 shows that as the particle number increases, we do in fact recover the desired equations, with TPR(w) approaching one as N increases. For ω = 1 we observe O(N^{−1∕2}) convergence of the local potential V and the diffusivity σ. For ω = 20, we observe approximate O(N^{−1∕2}) convergence of V, while σ converges to within 2% of √(2σ̄), the homogenized diffusivity (higher accuracy can hardly be expected for ω = 20, since (5.3) is itself an approximation in the limit of infinite ω).
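For readers wishing to generate data of this type, a minimal Euler–Maruyama sketch of the local model follows. Parameter values are illustrative, and the diffusivity is taken as σ⋆ = √(2(1 + 0.95 cos(ωx) cos(ωy))) I₂, an assumption chosen so that the Fokker–Planck diffusion term matches (5.2):

```python
import numpy as np

def simulate(N, omega=1.0, dt=1e-4, steps=1000, seed=0):
    """Euler-Maruyama for dX = (1, 1) dt + sigma(X) dB: the local
    model with V(x, y) = -x - y (so -grad V = (1, 1)) and scalar
    diffusivity sigma(x, y) = sqrt(2 (1 + 0.95 cos(w x) cos(w y)))."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, 2))  # mu_0 = N(0, I_2)
    for _ in range(steps):
        sigma = np.sqrt(2.0 * (1.0 + 0.95 * np.cos(omega * X[:, 0])
                                     * np.cos(omega * X[:, 1])))
        # drift (1, 1) advances both components by dt per step
        X += dt + sigma[:, None] * np.sqrt(dt) * rng.standard_normal((N, 2))
    return X

X = simulate(N=2000)  # final time T = steps * dt = 0.1
```

After time T = 0.1 the empirical mean drifts to approximately (0.1, 0.1), consistent with the constant advection in (5.2).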

One-dimensional nonlocal model
We simulate the evolution of particle systems under the quadratic attraction/Newtonian repulsion potential K_QANR(x) = ½|x|² − |x|,

²⁰ Since the model is local, (5.2) is the Fokker–Planck equation for the distribution of each particle, rather than holding only in the limit of infinitely many particles.
Messenger and Bortz, Physica D. Author manuscript; available in PMC 2023 July 20.

with no external potential (V = 0). The −|x| portion of K_QANR, which leads to a discontinuity in ∇K, is the one-dimensional free-space Green's function for −Δ. For d ≥ 1, when −|x| is replaced by the corresponding Green's function in d dimensions, the distribution of particles evolves under K_QANR into the characteristic function of the unit ball in ℝ^d, which has implications for the design and control of autonomous systems [54]. We compare three diffusivity profiles: σ(x) = 0, corresponding to zero intrinsic noise; σ(x) = √(2(0.1)), leading to constant-diffusivity intrinsic noise; and σ(x) = √(2(0.1))|x − 2|, leading to variable-diffusivity intrinsic noise. With zero intrinsic noise (σ(x) = 0), we examine the effect of extrinsic noise on recovery, assuming uncertainty in the particle positions due to measurement noise at each timestep, Y = X + ε. For the case of extrinsic noise (Fig. 5), we use only one experiment (M = 1) and vary the number of particles N and the noise ratio ϵ. We find that recovery is accurate and reliable for ϵ ≤ 0.1, yielding correct identification of K_QANR with less than 1% relative error in at least 98/100 trials. Increasing N from 500 to 8000 leads to minor improvements in accuracy for ϵ ≤ 0.1, but otherwise has little effect, implying that for low to moderate noise levels the mean-field equations are readily identifiable even from smaller particle systems.
For ϵ = 10^{−1∕2} ≈ 0.3162 (see Fig. 4, bottom right, for an example histogram), we observe a decrease in TPR(w) (Fig. 5, middle panel) resulting from the generic identification of a linear diffusion term ν ∂_xx u with ν ≈ 0.05. Using that 2ν ≈ 2(0.05) = ϵ², we can identify this as the best-fit intrinsic noise model. Furthermore, increases in N lead to reliable identification of the drift term, as measured by TPR(w_drift) (rightmost panel of Fig. 5), which is the restriction of TPR to the drift terms L_K and L_V.
For constant diffusivity σ(x) = √(2(0.1)) (Fig. 6), the full model is recovered with less than 3% error in K and σ in at least 98/100 trials when the total particle count NM is at least 8000, and with less than 1% error for NM ≥ 16,000. The error trends for K and σ in this case both strongly agree with the predicted O(N^{−1∕2}) rate. For non-constant diffusivity σ(x) = √(2(0.1))|x − 2| (Fig. 7), we also observe robust recovery (TPR(w) ≥ 0.95) for NM ≥ 8000 with error trends close to O(N^{−1∕2}), although the accuracy in K and σ is diminished due to the strong order-Δt^{1∕2} convergence of Euler–Maruyama applied to diffusivities σ that are unbounded in x [55].
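The computational advantage of the mean-field viewpoint can be sketched in one dimension for K_QANR: the O(N²) pairwise force sum is replaced by a force evaluated at histogram bin centers and looked up per particle. The bin count and nearest-bin lookup below are our own illustrative choices; the paper evaluates such sums with FFT-based convolutions.

```python
import numpy as np

def gradK(r):
    # K_QANR(x) = x^2/2 - |x|  =>  K'(r) = r - sign(r), with the
    # convention K'(0) = 0 (np.sign(0) = 0)
    return r - np.sign(r)

def force_direct(x):
    """O(N^2) pairwise mean-field force (1/N) sum_j K'(x_i - x_j)."""
    return gradK(x[:, None] - x[None, :]).mean(axis=1)

def force_histogram(x, bins=400):
    """Approximate the same force using binned particle counts."""
    n, edges = np.histogram(x, bins=bins)
    c = 0.5 * (edges[:-1] + edges[1:])               # bin centers
    F = gradK(c[:, None] - c[None, :]) @ (n / x.size)  # force at centers
    j = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    return F[j]                                       # nearest-bin lookup

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 2000)
err = np.max(np.abs(force_direct(x) - force_histogram(x)))
```

The histogram route costs O(N + n²) for n bins (or O(N + n log n) with the FFT) rather than O(N²), at the price of an O(ℎ) discretization error in the force.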

Two-dimensional nonlocal model
We now discuss an example of singular interaction in two spatial dimensions using the logarithmic potential. Let σ_c denote the critical diffusivity such that σ > σ_c leads to diffusion-dominated spreading of particles throughout the domain (vanishing particle density at every point of ℝ²) and σ < σ_c leads to aggregation-dominated concentration of the particle density onto the Dirac delta located at the center of mass of the initial particle density [44,56]. For σ = 0 we examine the effect of additive i.i.d. measurement noise ε ∼ N(0, ϵ²‖X_t‖²_RMS) for ϵ ∈ {0.01, 0.0316, 0.1, 0.316, 1}.
We simulate the particle system with a cutoff potential K_δ with δ = 0.01, chosen so that K_δ is Lipschitz and ∇K_δ has a jump discontinuity at the origin. Initial particle positions are uniformly distributed on a disk of radius 2, and the particle position data consist of 81 timepoints recorded at a resolution Δt = 0.1, coarsened from 0.0025. Histograms are created with 128 × 128 bins in x and y of sidelength ℎ = 0.0469 (see Fig. 8 for histogram snapshots over time). We examine M = 2⁰, …, 2⁶ experiments with N = 2000 or N = 4000 particles.
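The exact cutoff K_δ used in the paper is not reproduced in this excerpt; as one hypothetical cutoff with the stated properties (a Lipschitz potential whose gradient is bounded but jumps in direction at the origin), consider K_δ(x) = (2π)^{−1} ln(|x| + δ):

```python
import numpy as np

delta = 0.01  # cutoff scale from the text

def K_delta(x):
    """Hypothetical Lipschitz cutoff of the logarithmic potential
    (illustrative only; not necessarily the paper's K_delta)."""
    r = np.linalg.norm(x, axis=-1)
    return np.log(r + delta) / (2.0 * np.pi)

def gradK_delta(x):
    """grad K_delta = x / (2 pi |x| (|x| + delta)): magnitude bounded
    by 1 / (2 pi delta), with a directional jump at the origin."""
    r = np.linalg.norm(x, axis=-1, keepdims=True)
    return x / (2.0 * np.pi * np.maximum(r, 1e-300) * (r + delta))

pts = np.random.default_rng(3).standard_normal((1000, 2)) * 0.1
g = np.linalg.norm(gradK_delta(pts), axis=-1)
```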
In Fig. 9 we observe in the σ = 0 case a trend similar to the 1D nonlocal example, namely that recovery for ϵ ≤ 0.01 is robust with low errors in K (on the order of 0.0032), except that here the full model is robustly recovered up to ϵ = 0.316. At ϵ = 1 with N = 4000, the method frequently identifies a diffusion term νΔu with ν ≈ 0.5 = ϵ²∕2, and for N = 2000 the method occasionally identifies the backwards diffusion equation ∂_t μ_t = −αΔμ_t, α > 0. The latter is easily prevented by enforcing positivity of σ; however, we leave this and other constraints as an extension for future work.
With diffusivity σ = 1∕(4π), we obtain TPR(w) greater than approximately 0.95 for NM ≥ 16,000 (Fig. 10, right), with an error trend in K following an O(N^{−1∕2}) rate and a trend in σ of roughly O(N^{−2∕3}). Since convergence in M for fixed N is not covered by Theorem 1, this shows that combining multiple experiments may yield similar accuracy trends for moderately-sized particle systems.

Discussion
We have developed a weak-form method for sparse identification of governing equations for interacting particle systems using the formalism of mean-field equations. In particular, we have investigated two lines of inquiry: (1) is the mean-field setting applicable for inference from medium-size batches of particles? and (2) can a low-cost, low-regularity density approximation such as a histogram be used to enforce weak-form agreement with the mean-field PDE? We have demonstrated on several examples that the answer to both questions is yes, despite the fact that the mean-field equations are only valid in the limit of infinitely many particles (N → ∞). This framework is suitable for systems of several thousand particles in one and two spatial dimensions, and we have proved convergence in N for the associated least-squares problem using simple histograms as approximate particle densities. In addition, the sparse regression approach allows one to identify the full system, including the interaction potential K, local potential V, and diffusivity σ.
It was initially unclear whether the mean-field setting could be utilized in weak form for finite particle batches, hence this work can be seen as a proof of concept for particle systems with N in the range 10³ to 10⁵. With convergence in N and low computational complexity, our weak-form approach is well-suited as is for much larger particle systems. In the opposite regime, for small fixed N, the authors of [26] show that their maximum likelihood-based method converges as M → ∞ (i.e., in the limit of infinite experiments). While the same convergence does not hold for our weak-form method, the results in Section 5 suggest that, in practice, combining M independent experiments each with N particles improves results. Furthermore, we include evidence in Appendix A.3 that even for small N, our method correctly identifies the mean-field model when M is large enough, with performance similar to that in [26]. We leave a full investigation of the interplay between M and N to future work.
In the operable regime of N > 10³, there is potential for improvements and extensions in many directions. On the subject of density estimation, histograms are highly efficient, yet they lead to piecewise-constant approximations of μ_t and hence O(ℎ) errors. Choosing a density kernel to achieve high-accuracy quadrature without sacrificing the O(N) runtime of histogram computation seems prudent, although one must be cautious about making assumptions on the smoothness of the mean-field distribution μ_t. For instance, in the 1D nonlocal example of Section 5.2, discontinuities develop in μ_t in the case σ = 0, hence a histogram approximation is more appropriate than, e.g., a Gaussian kernel.
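For concreteness, the histogram density estimate is O(N) and only a few lines of NumPy (grid choices below are illustrative):

```python
import numpy as np

def histogram_density(Y, edges):
    """O(N) piecewise-constant density estimate: bin counts divided
    by (sample size * bin width), so the estimate integrates to one
    over the binned domain."""
    counts, _ = np.histogram(Y, bins=edges)
    return counts / (Y.size * np.diff(edges))

rng = np.random.default_rng(4)
Y = rng.standard_normal(50_000)          # toy 1D particle positions
edges = np.linspace(-5.0, 5.0, 257)      # 256 bins
u = histogram_density(Y, edges)
mass = float(np.sum(u * np.diff(edges))) # should be (nearly) 1
```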
The computational grid C, the quadrature method defining 〈·,·〉_{ℎ,Δt}, and the reference test function ψ may also be optimized further or adapted to specific problems. The approach chosen here, with C equally spaced and ψ a separable piecewise polynomial, along with trapezoidal quadrature, has several advantages, including high accuracy and fast computation using convolutions. However, this may need adjustment in higher dimensions. It might be advantageous to adapt C to the data Y; however, if a non-uniform grid results, this may prevent one from evaluating (G, b) using the FFT and hence increase the overall computational complexity. One could also use multiple reference test functions ψ. The possibilities of varying the test functions (within the smoothness requirements of the library L) remain largely unexplored in weak-form identification methods.
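The convolutional structure mentioned above is what enables FFT acceleration: on a uniform grid, inner products of the data against all translates of a single reference test function ψ form one discrete convolution. A one-dimensional sketch (the particular window standing in for ψ is ours; since it is symmetric, convolution and correlation coincide):

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal(256)                   # stand-in for gridded data
psi = np.sin(np.linspace(0.0, np.pi, 31))**2   # compactly supported bump

# direct evaluation: one inner product per translate of psi
direct = np.convolve(u, psi, mode="same")

# identical result via the FFT, zero-padded to avoid wrap-around:
# O(n log n) instead of O(n m) for m-point test functions
n = u.size + psi.size - 1
fft_full = np.fft.irfft(np.fft.rfft(u, n) * np.fft.rfft(psi, n), n)
start = (psi.size - 1) // 2
fft_same = fft_full[start:start + u.size]
```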
Several theoretical questions remain unanswered, chief among them model recovery statistics for finite N. As a consequence of Theorem 1, together with convergence results on sequential thresholding [47], we have that G being full rank and L containing the true model are sufficient to guarantee convergence w → w⋆ as N → ∞ at the rate O(N^{−1∕2}). For finite N, noise, whether extrinsic or intrinsic, may result in identification of an incorrect model when G is poorly conditioned. The effect is more severe if the true model has a small coefficient, which requires a small threshold λ and correspondingly may lead to a non-sparse solution. These sensitivities are shared by any sparse regression algorithm (see e.g. [57]), and accounting for the effects of noise and poor conditioning is an active area of research in equation discovery.
We also note that several researchers have focused on uniqueness in kernel identifiability [34,58]. This issue does not directly apply to our scenario²¹ of identifying the triple (K, V, σ). Moreover, in the cases we considered, we did not observe identifiability issues (e.g. rank deficiency), even in the high-noise, low-particle-number case. Quantifying the transition to identifiability as N → ∞ as a function of the condition number κ(G) is an important subject for future work.
As for extensions, the example system (5.2) and the resulting homogenization motivate further study of effective equations for systems with complex microstructure; in other fields this is described as coarse-graining. A related line of study is the inference of second-order particle systems, as explored in [32], which often lead to an infinite hierarchy of mean-field equations. Our weak-form approach may provide a principled method for truncating and closing such hierarchies using particle data. Another extension is to enforce convex constraints in the regression problem, such as lower bounds on the diffusivity, or interaction potentials K with long-range attraction depending on the distribution ρ_π ∈ P([0, ∞)) of pairwise distances (see [26] for further use of ρ_π). Finally, the framework we have introduced can easily be used to find nonlocal models from continuous solution data (e.g. given U instead of Y), whereby questions of nonlocal representations of models can be investigated.
where G is the Green's function for −Δ with homogeneous Dirichlet boundary conditions on ∂Ω. By the coercivity of a, we have that ‖u_ϵ‖_{L²(Ω)} is uniformly bounded in ϵ. By the lemma in [59, Section 2.4], up to a subsequence {ϵ_j}_{j∈ℕ}, there exists a function u(x, y), periodic in its second variable, such that for any continuous function ϕ(x, x∕ϵ) we have

lim_{ϵ→0} ∫ u_ϵ(x) ϕ(x, x∕ϵ) dx = ∬ u(x, y) ϕ(x, y) dy dx.
Setting ϕ(x, y) = ϕ(x), we see that, on the same subsequence, u_ϵ ⇀ ∫ u(x, y) dy. Applying the same lemma to the constant sequence u_ϵ ≡ 1 with ϕ(x, x∕ϵ) = ϕ(x) a^{−1}(x, x∕ϵ), we see that (up to possibly a further subsequence) the corresponding weak limit holds. Letting a⋆(x) ≔ (∫ a(x, y)^{−1} dy)^{−1} and combining the previous limits, we see that u⋆ solves the homogenized equation Δ(a⋆ u⋆) = f.
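As a numerical sanity check of this harmonic-mean formula, one can average a^{−1} over one periodic cell for the coefficient a(x, y) = 1 + 0.95 cos(x) cos(y) of Section 5.1; the midpoint rule is spectrally accurate for periodic integrands. The resulting a⋆ ≈ 0.61 is comparable to the value σ̄ ≈ 0.62 computed over the (non-periodic) domain D in Section 5.1; the grid size below is illustrative.

```python
import numpy as np

# harmonic mean a* = (cell average of 1/a)^(-1) for
# a(x, y) = 1 + 0.95 cos(x) cos(y) over [0, 2 pi)^2
n = 2000
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
cx = np.cos(x)
inv_avg = np.mean(1.0 / (1.0 + 0.95 * cx[:, None] * cx[None, :]))
a_star = 1.0 / inv_avg
```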

A.3. Recovery for small N and large M
The related maximum-likelihood approach of [26] is shown to be suitable for small N and large M, a regime in which the mean-field approximation holds only weakly and recovery depends on the 2-particle marginal ρ_t^{(2),N} [35]. Nevertheless, using the 1D nonlocal example of Section 5.2 with σ = √(2(0.1)) ≈ 0.45, we observe in Fig. 11²² that the error in K is less than 1% and the runtime is approximately 0.9 s. The lack of convergence in M is reflected in the diffusivity (middle panel of Fig. 11), where the error appears to plateau at around 1.7% for ℎ ≈ 0.0468 and at 3.5% for ℎ ≈ 0.0234. The lower resolution (larger binwidth ℎ) appears to yield slightly better results, possibly indicating that larger ℎ produces a coarse-graining effect such that ρ^{(2),N} ≈ ρ^{(1),N} ⊗ ρ^{(1),N} over larger distances, although this effect deserves more thorough study in future work.

A.4. Technical lemmas
Together with the pth moment bound on μ₀ (H.2), this implies the claim, where ‖·‖_F denotes the Frobenius norm.

FIG. 11. Recovery of (3.1) in one spatial dimension with only N = 10 particles per experiment.
Proof of Lemma 3. Using the notation f_C from Lemma 1 to denote the piecewise-constant approximation of a function f over the domain D using the grid C, we decompose the difference accordingly. The right-hand side consists of an interaction error E_interact followed by a sum E_linear of terms that are linear in the difference between a locally Lipschitz function and its piecewise-constant approximation. Hence, we can bound E_linear using the smoothness of ψ (H.5), the moment assumptions on μ_t^N (H.2), and the growth assumptions on V and σ (H.3)–(H.4).
Specifically, for x ∈ B_k with center c_k, the growth assumptions imply the following bound.

²³ ‖f‖_{p,q} for vector-valued functions f : ℝ^d → ℝ^d denotes the L^q norm over x of the ℓ^p norm of f(x). Also recall that ρ_t^{(1)} is the X_t^{(1)}-marginal of the process X_t ∈ ℝ^{dN}.

We get the same bound for I₂. Summing over ℓ and averaging in i, we obtain the desired bound on the difference (A.5).
²² For comparison, in [26, Fig. 4] the error in recovering K using the maximum-likelihood approach on an opinion dynamics example for M = 10^{3.6}, N = 10, and σ = 0.5 is approximately 100 × 10^{−1.2}% ≈ 6.3%.

FIG. 1. Factor by which the mean-field evaluation of interaction forces using histograms reduces total function evaluations, as a function of particle number N and average gridpoints per …

FIG. 3. Convergence of σ (left) and ∇V (middle) for (5.2) with ω ∈ {1, 20}, as well as TPR(w) (right); recall ‖·‖ denotes the ℓ₂ norm. For ω = 1, results are compared to the exact model …

FIG. 6. Recovery of (3.1) in one spatial dimension for K⋆ = K_QANR and σ⋆ = √(2(0.1)).

FIG. 9. Recovery of (3.1) in two spatial dimensions with K⋆ given by (5.5) from deterministic particles (σ⋆ = 0) with extrinsic noise ϵ.

Table 1. Notations used throughout.

Table 2. Trial function library for the local 2D example (Section 5.1).

Table 3. Trial function library for the nonlocal 1D example (Section 5.2).