Statistical Inference using the Morse-Smale Complex

The Morse-Smale complex of a function $f$ decomposes the sample space into cells where $f$ is increasing or decreasing. When applied to nonparametric density estimation and regression, it provides a way to represent, visualize, and compare multivariate functions. In this paper, we present some statistical results on estimating Morse-Smale complexes. This allows us to derive new results for two existing methods: mode clustering and Morse-Smale regression. We also develop two new methods based on the Morse-Smale complex: a visualization technique for multivariate functions and a two-sample, multivariate hypothesis test.


Introduction
Let f be a smooth, real-valued function defined on a compact set K ⊂ R^d. In this paper, f will be a regression function or a density function. The Morse-Smale complex of f is a partition of K based on the gradient flow induced by f. Roughly speaking, the complex consists of sets, called crystals or cells, comprising regions where f is increasing or decreasing. Figure 1 shows the Morse-Smale complex for a two-dimensional function. The cells are the intersections of the basins of attraction (under the gradient flow) of the function's maxima and minima. Within each cell, f is monotonic with respect to certain directions. In this sense, the Morse-Smale complex provides a generalization of isotonic regression.
Because the Morse-Smale complex represents a multivariate function in terms of regions on which the function has simple behavior, it has useful applications in statistics, including clustering, regression, testing, and visualization. For instance, when f is a density function, the basins of attraction of f's modes are the (population) clusters for density-mode clustering (also known as mean shift clustering (Fukunaga and Hostetler, 1975; Chacón et al., 2015)), each of which is a union of cells from the Morse-Smale complex.

[Figure 1: (a) descending manifolds; (b) ascending manifolds; (c) a d-cell; (d) all the d-cells.]
Similarly, when f is a regression function, the cells of the Morse-Smale complex give regions on which f has simple behavior. Fitting f over the Morse-Smale cells provides a generalization of nonparametric, isotone regression; Gerber et al. (2013) propose such a method. The Morse-Smale representation of a multivariate function f is a useful tool for visualizing f's structure, as shown by Gerber et al. (2010). In addition, suppose we want to compare two multi-dimensional datasets X = (X_1, ..., X_n) and Y = (Y_1, ..., Y_m). We start by forming the Morse-Smale complex of p − q, where p is a density estimate from X and q is a density estimate from Y. Figure 2 shows a visualization built from this complex.
The circles represent cells of the Morse-Smale complex. Attached to each cell is a pie chart showing what fraction of the cell has p significantly larger than q. This visualization is a multi-dimensional extension of the method proposed for two or three dimensions in Duong (2013). For all these applications, the Morse-Smale complex needs to be estimated. To the best of our knowledge, no theory has been developed for this estimation problem prior to this paper. We have three goals in this paper: to show that many existing problems can be cast in terms of the Morse-Smale complex, to develop some new statistical methods based on the Morse-Smale complex, and to develop the statistical theory for estimating the complex.

[Fig 2. Graft-versus-Host Disease (GvHD) dataset (Brinkman et al., 2007). This is a d = 4 dimensional dataset. We estimate the density difference based on the kernel density estimator and find regions where the two densities are significantly different. Then we visualize the density difference using the Morse-Smale complex. Each green circle denotes a d-cell, an element of a partition of the support K. The size of each circle is proportional to the size of the cell. If two cells are neighbors, we add a line connecting them; the thickness of the line denotes the amount of boundary they share. The pie charts show the ratio of the regions within each cell where the two densities are significantly different from each other. See Section 3.4 for more details.]
Main results. The main results of this paper are:

1. Consistency of the Morse-Smale complex. We prove the stability of the Morse-Smale complex (Theorem 1) in the following sense: if B and B̃ are the boundaries of the descending d-manifolds (or ascending 0-manifolds) of p and p̃ (defined in Section 2), then Haus(B, B̃) = O(‖p − p̃‖_{1,max}).
2. Risk bound for mode clustering (mean-shift clustering; Section 3.1). We bound the risk of mode clustering in Theorem 2.
3. Morse-Smale regression (Section 3.2). In Theorems 4 and 5, we bound the risk of Morse-Smale regression, a multivariate regression method proposed in Gerber et al. (2010); Gerber and Potter (2011); Gerber et al. (2013) that synthesizes nonparametric regression and linear regression.
4. Morse-Smale signatures (Section 3.3). We introduce a new visualization method for densities and regression functions.
5. Morse-Smale two-sample testing (Section 3.4). We develop a new method for multivariate two-sample testing that can have good power.
Related work. The mathematical foundations of the Morse-Smale complex come from Morse theory (Morse, 1925, 1930; Milnor, 1963). Morse theory has many applications, including computer vision (Paris and Durand, 2007), computational geometry (Cohen-Steiner et al., 2007), and topological data analysis (Chazal et al., 2014). Previous work on the stability of the Morse-Smale complex can be found in Chen et al. (2016) and Chazal et al. (2014), but they only consider critical points rather than the whole Morse-Smale complex. Arias-Castro et al. (2016) prove pointwise convergence for the gradient ascent curves, but this is not sufficient for proving the stability of the complex: convergence of complexes requires convergence of multiple curves, the constants in the convergence rate derived from Arias-Castro et al. (2016) vary from point to point, and some constants diverge as we approach the boundaries of the complexes. Thus, we cannot obtain uniform convergence of gradient ascent curves directly from their results. Morse-Smale regression and visualization were proposed in Gerber et al. (2010); Gerber and Potter (2011); Gerber et al. (2013).

Morse Theory
To motivate the formal definitions, we start with the simple, one-dimensional example depicted in Figure 3. The left panel shows the sets associated with each local maximum (i.e., the basins of attraction of the maxima). The middle panel shows the sets associated with each local minimum. The right panel shows the intersections of these basins, which give the Morse-Smale complex defined by the function. Each interval in the complex, called a cell, is a region where the function is increasing or decreasing. Now we give a formal definition. Let f : K ⊂ R^d → R be a function with bounded third derivatives that is defined on a compact set K. Let g(x) = ∇f(x) and H(x) = ∇∇f(x) be the gradient and Hessian matrix of f, respectively, and let λ_j(x) be the j-th largest eigenvalue of H(x). Define C = {x ∈ K : g(x) = 0} to be the set of all of f's critical points, which we call the critical set. Using the signs of the eigenvalues of the Hessian, the critical set C can be partitioned into

C = C_0 ∪ C_1 ∪ ... ∪ C_d,   (1)

where C_k is the set of critical points at which the Hessian has exactly k positive eigenvalues. In particular, C_0 and C_d are the sets of all local maxima and minima (corresponding to all eigenvalues being negative and positive, respectively). The set C_k is called the k-th order critical set.
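As a small numerical illustration of this classification (our own sketch, not from the paper; the toy function f below is an assumption), one can count the positive Hessian eigenvalues at each critical point:

```python
import numpy as np

# Classify the critical points of the Morse function
# f(x, y) = (x^2 - 1)^2 + y^2 by the signs of the Hessian eigenvalues:
# k positive eigenvalues places the point in C_k, with C_0 the local
# maxima and C_d the local minima, matching equation (1).

def hessian(x, y):
    # Analytic Hessian of f(x, y) = (x^2 - 1)^2 + y^2.
    return np.array([[12.0 * x ** 2 - 4.0, 0.0],
                     [0.0, 2.0]])

def critical_order(x, y):
    """Return k such that (x, y) belongs to C_k."""
    eigvals = np.linalg.eigvalsh(hessian(x, y))
    if np.any(np.isclose(eigvals, 0.0)):
        raise ValueError("degenerate critical point: f is not Morse here")
    return int(np.sum(eigvals > 0))

# The critical points of f are (0, 0) (a saddle) and (+-1, 0) (minima).
print(critical_order(0.0, 0.0))   # saddle: one positive eigenvalue -> C_1
print(critical_order(1.0, 0.0))   # minimum: both positive -> C_2
```

Here d = 2, so the minima land in C_2 and the saddle in C_1, as the partition (1) prescribes.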
A smooth function f is called a Morse function (Morse, 1925; Milnor, 1963) if its Hessian matrix is non-degenerate at each critical point. That is, det(H(x)) ≠ 0 for all x ∈ C. In what follows we assume f is a Morse function (in fact, later we will assume further that f is a Morse-Smale function).
Given any point x ∈ K, we define the gradient ascent flow starting at x, π_x : R_+ → K, by

π_x(0) = x,   π'_x(t) = g(π_x(t)).   (2)

A particle on this flow moves along the gradient from x towards a "destination" given by dest(x) ≡ lim_{t→∞} π_x(t).
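A minimal numerical sketch of dest(x) (our own illustration; the explicit Euler step size and stopping tolerance are arbitrary choices, not part of the paper):

```python
import numpy as np

def dest(x0, grad, step=0.01, tol=1e-8, max_iter=100000):
    # Follow the gradient ascent flow pi_x numerically (explicit Euler)
    # until the gradient is essentially zero; the limit approximates
    # dest(x) = lim_{t -> inf} pi_x(t).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x + step * g
    return x

# Example: f(x, y) = -((x - 1)^2 + y^2) has a single maximum at (1, 0),
# so every starting point has destination (1, 0).
grad = lambda x: -2.0 * (x - np.array([1.0, 0.0]))
print(np.round(dest([3.0, -2.0], grad), 3))
```

For a Morse function, dest(x) lands in the critical set C, which is what the descending manifolds below are built from.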
It can be shown that dest(x) ∈ C for all x ∈ K. We can thus partition K based on the value of dest(x). These partitions are called descending manifolds in Morse theory (Morse, 1925; Milnor, 1963). Recalling that C_k is the set of k-th order critical points, we define

D_k = {x ∈ K : dest(x) ∈ C_{d−k}},   D_{k,j} = {x ∈ K : dest(x) = c_{d−k,j}},   (3)

where c_{d−k,j} denotes the j-th element of C_{d−k}. That is, D_k is the collection of all points whose gradient ascent flow converges to a (d−k)-th order critical point, and D_{k,j} is the collection of points whose gradient ascent flow converges to the j-th element of C_{d−k}. By Banyaga and Hurtubise (2004), each D_k is a disjoint union of k-dimensional manifolds (each D_{k,j} is a k-dimensional manifold). We call D_{k,j} a descending k-manifold of f. Each descending k-manifold is a k-dimensional manifold such that the gradient flow from every point converges to the same critical point. The top panels of Figure 4 give an example of the descending manifolds for a two-dimensional case.
The ascending manifolds are similar to the descending manifolds but are defined through the gradient descent flow. More precisely, given any x ∈ K, the gradient descent flow γ_x : R_+ → K starting from x is given by

γ_x(0) = x,   γ'_x(t) = −g(γ_x(t)).   (4)

Unlike the ascent flow defined in (2), γ_x is a flow that moves along the gradient descent direction. The descent flow γ_x shares similar properties with the ascent flow π_x; the limiting point lim_{t→∞} γ_x(t) ∈ C is also in the critical set when f is a Morse function. Thus, analogously to D_k and D_{k,j}, we define

A_k = {x ∈ K : lim_{t→∞} γ_x(t) ∈ C_{d−k}},   A_{k,j} = {x ∈ K : lim_{t→∞} γ_x(t) = c_{d−k,j}}.   (5)

Each A_k and A_{k,j} has dimension d − k, the sets A_{k,j} partition A_k, and {A_0, ..., A_d} constitutes a partition of K. We call each A_{k,j} an ascending k-manifold of f. A smooth function f is called a Morse-Smale function if it is a Morse function and any pair of the ascending and descending manifolds of f intersect each other transversally (which means that pairs of manifolds are not parallel at their intersections); see, e.g., Banyaga and Hurtubise (2004) for more details. In this paper, we also assume that f is a Morse-Smale function. Note that by the Kupka-Smale Theorem (see, e.g., Theorem 6.6 in Banyaga and Hurtubise (2004)), Morse-Smale functions are generic (dense) in the collection of smooth functions. For more details, we refer to Section 6.1 in Banyaga and Hurtubise (2004).
A k-cell (also called a Morse-Smale cell or crystal) is the non-empty intersection between any descending k_1-manifold and any ascending (d − k_2)-manifold such that k = min{k_1, k_2} (the ascending (d − k_2)-manifold has dimension k_2). When we simply say a cell, we are referring to a d-cell, since the d-cells make up the majority of K (the totality of k-cells with k < d has Lebesgue measure 0). The Morse-Smale complex of f is the collection of all k-cells for k = 0, ..., d. The bottom panels of Figure 4 give examples of the ascending manifolds and the d-cells for d = 2. Another example is given in Figure 1.
The cells of a smooth function can be used to construct an additive decomposition that is useful in data analysis. For a Morse-Smale function f, let E_1, ..., E_L be its associated cells. Then we can decompose f into

f(x) = Σ_{ℓ=1}^{L} f_ℓ(x) 1(x ∈ E_ℓ),   (6)

where each f_ℓ(x) behaves like a multivariate isotonic function (Barlow et al., 1972; Bacchetti, 1989). Namely, f_ℓ(x) = f(x) when x ∈ E_ℓ. This decomposition holds because within each E_ℓ, f has exactly one local mode and one local minimum on the boundary of E_ℓ. The fact that f admits such a decomposition will be used frequently in Sections 3.2 and 3.3. Among all descending/ascending manifolds, the descending d-manifolds and the ascending 0-manifolds are often of the greatest interest. For instance, mode clustering (Li et al., 2007; Azzalini and Torelli, 2007) uses the descending d-manifolds to partition the domain K into clusters. Morse-Smale regression (Gerber and Potter, 2011; Gerber et al., 2013) uses the d-cells (nonempty intersections of descending d-manifolds and ascending 0-manifolds). Regions outside the descending d-manifolds or ascending 0-manifolds have Lebesgue measure 0. Thus, later in our theoretical analysis, we will focus on the stability of the sets D_d and A_0 (see Section 4.1). We define the boundary B = ∂D_d of the descending d-manifolds. The set B will be used frequently in Section 4.

Mode Clustering
Mode clustering (Li et al., 2007; Azzalini and Torelli, 2007; Chacón and Duong, 2013; Arias-Castro et al., 2016; Chacón et al., 2015; Chen et al., 2016) is a clustering technique based on the Morse-Smale complex and is also known as mean-shift clustering (Fukunaga and Hostetler, 1975; Cheng, 1995; Comaniciu and Meer, 2002). Mode clustering uses the descending d-manifolds of the density function p to partition the whole space K. (Although the d-manifolds do not contain all points of K, the regions outside the d-manifolds have Lebesgue measure 0.) See Figure 5 for an example. We now briefly describe the procedure of mode clustering. Let X = {X_1, ..., X_n} be a random sample from a density p that is defined on a compact set K and assumed to be a Morse function. Recall that dest(x) is the destination of the gradient ascent flow starting from x. Mode clustering partitions the sample based on dest(x) at each point; specifically, it partitions X into clusters

X_ℓ = {X_i : dest(X_i) = m_ℓ},   ℓ = 1, ..., L,

where each m_ℓ is a local mode of p and L is the number of local modes. We can also view mode clustering as a clustering technique based on the descending d-manifolds. Let D_{d,1}, ..., D_{d,L} be the descending d-manifolds of p. Then each cluster is X_ℓ = X ∩ D_{d,ℓ}.
In practice, however, we do not know p, so we have to use a density estimator p̂_n. A common density estimator is the kernel density estimator (KDE):

p̂_n(x) = (1 / (n h^d)) Σ_{i=1}^{n} K((x − X_i) / h),

where K is a smooth kernel function and h > 0 is the smoothing parameter.
Note that mode clustering is not limited to the KDE; other density estimators also give us a sample-based mode clustering. Based on the KDE, we are able to estimate the gradient ĝ_n(x), the gradient flows π̂_x(t), and the destination dest_n(x) (the mean shift algorithm performs these tasks). Thus, we can estimate the descending d-manifolds by plugging in p̂_n. Let D̂_{d,1}, ..., D̂_{d,L̂} be the descending d-manifolds of p̂_n, where L̂ is the number of local modes of p̂_n. The estimated clusters are then X̂_1, ..., X̂_{L̂}, where each X̂_ℓ = X ∩ D̂_{d,ℓ}. Figure 5 displays an example of mode clustering using the KDE.
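The plug-in procedure above can be sketched with a plain mean-shift iteration (our own minimal sketch with a Gaussian kernel; the mode-merging tolerance and the toy two-blob data are assumptions, not from the paper):

```python
import numpy as np

def mean_shift_labels(X, h, tol=1e-6, max_iter=500):
    """Cluster each point by the KDE mode its ascent path reaches:
    a plain mean-shift sketch, where h is the smoothing bandwidth."""
    modes, labels = [], []
    for x in X:
        y = x.copy()
        for _ in range(max_iter):
            # Gaussian-kernel weighted mean = one mean-shift step.
            w = np.exp(-np.sum((X - y) ** 2, axis=1) / (2 * h * h))
            y_new = (w[:, None] * X).sum(axis=0) / w.sum()
            if np.linalg.norm(y_new - y) < tol:
                break
            y = y_new
        # Merge destinations within h of an existing mode into one cluster.
        for k, m in enumerate(modes):
            if np.linalg.norm(m - y) < h:
                labels.append(k)
                break
        else:
            modes.append(y)
            labels.append(len(modes) - 1)
    return np.array(labels), np.array(modes)

# Two well-separated blobs should yield two estimated modes/clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, size=(30, 2)),
               rng.normal(3.0, 0.1, size=(30, 2))])
labels, modes = mean_shift_labels(X, h=0.5)
print(len(modes))  # 2
```

Each label corresponds to an estimated descending d-manifold D̂_{d,ℓ}; the returned modes approximate the local modes of p̂_n.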
A nice property of mode clustering is that there is a clear population quantity that our estimator (clusters based on the given sample) is estimating: the population partition of the data points.Thus we can consider properties of the procedure such as consistency, which we discuss in detail in Section 4.2.

Morse-Smale Regression
Let (X, Y) be a random pair, where Y ∈ R is the response and X ∈ R^d is the covariate. Estimating the regression function m(x) = E(Y | X = x) is challenging for d of even moderate size. A common way to address this problem is to use a simple regression function that can be estimated with low variance. For example, one might use an additive regression of the form m(x) = Σ_j m_j(x_j), which is a sum of one-dimensional smooth functions. Although the true regression function is unlikely to be of this form, it is often the case that the resulting estimator is useful.
A different approach, Morse-Smale regression (MSR), is suggested in Gerber et al. (2013).This takes advantage of the (relatively) simple structure of the Morse-Smale complex and the isotone behavior of the function on each cell.Specifically, MSR constructs a piecewise linear approximation to m(x) over the cells of the Morse-Smale complex.
We first define the population version of MSR. Let m(x) = E(Y | X = x) be the regression function, assumed to be a Morse-Smale function. Let E_1, ..., E_L be the d-cells of m. The Morse-Smale regression of m is the piecewise linear function

m_MSR(x) = Σ_{ℓ=1}^{L} (μ_ℓ + β_ℓ^T x) 1(x ∈ E_ℓ),

where the parameters (μ_ℓ, β_ℓ) are obtained by minimizing the mean squared error:

(μ_ℓ, β_ℓ) = argmin_{μ,β} E[(Y − μ − β^T X)^2 1(X ∈ E_ℓ)].

That is, m_MSR is the best piecewise linear predictor given the d-cells. One can also view MSR as using a linear function to approximate each component in the additive model (6). Note that m_MSR is well defined except on the boundaries of the E_ℓ, which have Lebesgue measure 0. We now define the sample version of MSR.
Throughout Section 3.2, we assume that the density of the covariates X is bounded and positive with compact support K, and that the response Y has a finite second moment.
Let m̂_n be a smooth nonparametric regression estimator of m, which we call the pilot estimator. For instance, one may use the kernel regression estimator (Nadaraya, 1964) as the pilot estimator. We define the d-cells Ê_1, ..., Ê_{L̂} of m̂_n, and the sample MSR is

m̂_MSR(x) = Σ_{ℓ=1}^{L̂} (μ̂_ℓ + β̂_ℓ^T x) 1(x ∈ Ê_ℓ),

where (μ̂_ℓ, β̂_ℓ) are obtained by minimizing the empirical squared error:

(μ̂_ℓ, β̂_ℓ) = argmin_{μ,β} Σ_{i=1}^{n} (Y_i − μ − β^T X_i)^2 1(X_i ∈ Ê_ℓ).

This MSR is slightly different from the original version in Gerber et al. (2013).
We will discuss the difference in Remark 1. Computing the parameters of MSR is not very difficult: we only need to compute the cell label of each observation (this can be done by the mean shift algorithm or a fast variant such as the quick-shift algorithm, Vedaldi and Soatto 2008) and then fit a linear regression within each cell. MSR may give low prediction error in some cases; see Gerber et al. (2013) for some concrete examples. In Theorem 5, we prove that we can estimate m_MSR at a fast rate. Moreover, the regression function may be visualized by the methods discussed later. Gerber et al. (2013) do not use the d-cells of a pilot nonparametric estimate m̂_n. Instead, they directly find local modes and minima using the original data points (X_i, Y_i). This saves computational effort but comes at a price: there is no clear population quantity being estimated by their approach. That is, as the sample size increases to infinity, there is no guarantee that their method converges. In our case, we apply a consistent pilot estimate of m and construct the d-cells on this pilot estimate. As shown in Theorem 4, our method is consistent for this population quantity.
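The per-cell least-squares step can be sketched as follows (our own illustration: the cell labels are handed in directly, whereas in practice they would come from mean shift on a pilot estimate; the piecewise-linear toy data are assumptions):

```python
import numpy as np

def fit_msr(X, y, cells):
    """Sample Morse-Smale regression sketch: one least-squares linear
    model (intercept + slopes) per estimated d-cell; `cells` holds the
    cell label of each observation."""
    models = {}
    for c in np.unique(cells):
        idx = cells == c
        A = np.hstack([np.ones((idx.sum(), 1)), X[idx]])
        coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        models[c] = coef                      # (mu_l, beta_l)
    return models

def predict_msr(models, X, cells):
    A = np.hstack([np.ones((len(X), 1)), X])
    return np.array([A[i] @ models[c] for i, c in enumerate(cells)])

# Toy example: a piecewise-linear m with two "cells" split at x = 0.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
cells = (X[:, 0] > 0).astype(int)             # cell labels assumed known
y = np.where(cells == 1, 2 * X[:, 0], -3 * X[:, 0]) + rng.normal(0, 0.01, 200)
models = fit_msr(X, y, cells)                 # models[1] close to (0, 2)
```

Because the target is exactly linear within each cell, the fitted coefficients recover the slopes 2 and −3 up to noise.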

Morse-Smale Signatures and Visualization
In this section we define a new method for visualizing multivariate functions based on the Morse-Smale complex, called Morse-Smale signatures.The idea is very similar to the Morse-Smale regression but the signatures can be applied to any Morse-Smale function.
Let E_1, ..., E_K be the d-cells (nonempty intersections of a descending d-manifold and an ascending 0-manifold) of a Morse-Smale function f with compact support K. The function f depends on the context of the problem. For density estimation, f is the density p or its estimator p̂_n. For regression, f is the regression function m or a nonparametric estimator m̂_n. For the two-sample test, f is the density difference p_1 − p_2 or the estimated density difference p̂_1 − p̂_2. Note that E_1, ..., E_K form a partition of K except for a set of Lebesgue measure 0. Each cell corresponds to a unique pair of a local mode and a local minimum. Thus, the local modes and minima, together with the d-cells, form a bipartite graph, which we call the signature graph. The signature graph contains geometric information about f. See Figures 6 and 7 for examples.
The signature is defined as follows. We project the maxima and minima of the function into R^2 using multidimensional scaling. We connect a maximum and a minimum by an edge if there exists a cell that connects them. The width of the edge is proportional to the norm of the linear coefficients of the linear approximation to the function within that cell. The linear approximation is

f_MS(x) = Σ_{ℓ=1}^{K} (η_ℓ^† + (γ_ℓ^†)^T x) 1(x ∈ E_ℓ),

where

(η_ℓ^†, γ_ℓ^†) = argmin_{η,γ} ∫_{E_ℓ} (f(x) − η − γ^T x)^2 dx.

This is again a linear approximation of f in the additive model (6). Note that f_MS may not be continuous when we move from one cell to another. The summary statistics for the edge associated with cell E_ℓ are the parameters (η_ℓ^†, γ_ℓ^†). We call the function f_MS the (Morse-Smale) approximation function; it is the best piecewise-linear representation of f (linear within each cell) under L_2 error given the d-cells. This function is well defined except on a set of Lebesgue measure 0 (the boundaries of the cells). See Figure 6 for an example of the approximation function. The details are in Algorithm 1.
Example. Figure 7 is an example using the GvHD dataset. We first conduct multidimensional scaling (Kruskal, 1964) on the local modes and minima of f and plot them on the 2-D plane. In Figure 7, the blue dots are local modes and the green dots are local minima. These dots act as the nodes of the signature graph. Then we add edges, representing the cells of f that connect pairs of local modes and minima, to form the signature graph. Lastly, we adjust the width of each edge according to the strength (L_2 norm) of the linear coefficients within the corresponding cell (i.e., γ_ℓ^†). Algorithm 1 summarizes this visualization procedure.

Algorithm 1 Visualization using Morse-Smale Signatures
1. Find the local modes and minima of f on a grid of discretized points.
2. Partition the grid points into cells according to the associated pair of local mode and minimum (steps 1 and 2 can be done using a k-nearest-neighbor gradient ascent/descent method; see Algorithm 1 in Gerber et al. (2013)).
3. For each cell X_ℓ, fit a linear regression of f(x_i) on x_i over the grid points x_i ∈ X_ℓ. Let the regression coefficients (without intercept) be β_ℓ.
4. Apply multidimensional scaling to the modes and minima jointly and plot their 2-dimensional representation points. Add an edge between a mode and a minimum if there exists a cell that connects them. The width of the edge is proportional to ‖β_ℓ‖ (for cell X_ℓ).

[Fig 7. Morse-Smale signature visualization (Algorithm 1) of the density difference for the GvHD dataset (see Figure 2). The blue dots are local modes; the green dots are local minima; the brown lines are d-cells. These dots and lines form the signature graph. The width of each line indicates the L_2 norm of the slope of the regression coefficients, i.e., γ_ℓ^†. The locations of the modes and minima are obtained by multidimensional scaling so that their relative distances are preserved.]
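Step 4's 2-D layout can be sketched with classical (Torgerson) multidimensional scaling, used here as a stand-in for the Kruskal (1964) scaling cited in the paper (our own sketch; the unit-square test points are an assumption):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed points into R^dim from a matrix
    of pairwise Euclidean distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dim]         # top `dim` eigenpairs
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# Sanity check: pairwise distances among the corners of a unit square
# are reproduced exactly by a 2-d embedding.
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
emb = classical_mds(D)
D2 = np.linalg.norm(emb[:, None] - emb[None, :], axis=2)
print(np.allclose(D, D2))  # True
```

In Algorithm 1, D would hold the pairwise distances among the local modes and minima of f, and the embedding gives the node positions of the signature graph.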

Two Sample Comparison
The Morse-Smale complex can be used to compare two samples.There are two ways to do this.The first one is to test the difference in two density functions locally and then use the Morse-Smale signatures to visualize regions where the two samples are different.The second approach is to conduct a nonparametric two sample test within each Morse-Smale cell.The advantage of the first approach is that we obtain a visual display on where the two densities are different.The merit of the second method is that we gain additional power in testing the density difference by using the shape information.

Visualizing the Density Difference
Let X_1, ..., X_n and Y_1, ..., Y_m be two random samples with densities p_X and p_Y. In a two-sample comparison, we not only want to know whether p_X = p_Y, but we also want to find the regions where they significantly disagree. That is, we conduct the local tests

H_0(x) : p_X(x) = p_Y(x)

simultaneously for all x ∈ K, and we are interested in the regions where we reject H_0(x). A common approach is to estimate the density of both samples by the KDE and set a threshold to pick out the regions where the density difference is large. Namely, we first construct density estimates p̂_X and p̂_Y and define

Γ(λ) = {x : |p̂_X(x) − p̂_Y(x)| > λ},

the region where we have strong evidence to reject H_0(x). The threshold λ can be picked as a quantile of the bootstrapped L_∞ density deviation to control the type 1 error, or can be chosen by controlling the false discovery rate (Duong, 2013).
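A minimal one-dimensional sketch of Γ(λ) (our own illustration; the fixed threshold λ and the Gaussian toy samples are assumptions, since in practice λ comes from the bootstrap as described above):

```python
import numpy as np

def kde(data, grid, h):
    # 1-d Gaussian kernel density estimate evaluated on `grid`.
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, 500)                 # sample 1
Y = rng.normal(1.0, 1.0, 500)                 # sample 2: shifted mean
grid = np.linspace(-4.0, 5.0, 400)
f_hat = kde(X, grid, h=0.3) - kde(Y, grid, h=0.3)   # density difference
lam = 0.05                                    # fixed threshold for the sketch
gamma = grid[np.abs(f_hat) > lam]             # estimated region Gamma(lam)
print(gamma.size > 0)                         # True: the shift is detected
```

As expected, p̂_X − p̂_Y is positive around x = 0 and negative around x = 1, so Γ(λ) is nonempty on both sides.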
Unfortunately, Γ(λ) is hard to visualize when d > 3. So we use the Morse-Smale complex of the estimated density difference f̂ = p̂_X − p̂_Y and visualize Γ(λ) by its behavior on the d-cells of the complex. Algorithm 2 gives a method for visualizing density differences such as Γ(λ) in the context of comparing two independent samples.

Algorithm 2 Visualization For Two Sample Test
Input: Sample 1: {X_1, ..., X_n}; Sample 2: {Y_1, ..., Y_m}; threshold λ; radius constant r_0.
1. Compute the density estimates p̂_X and p̂_Y.
2. Compute the difference function f̂ = p̂_X − p̂_Y and the significant regions Γ_+(λ) = {x : f̂(x) > λ} and Γ_−(λ) = {x : f̂(x) < −λ}.
3. Compute the d-cells E_1, ..., E_L of f̂.
4. For each cell E_ℓ, do (4-1) and (4-2):
4-1. Compute the cell center e_ℓ and cell size V_ℓ = Vol(E_ℓ).
4-2. Compute the positive significant ratio r_{+,ℓ} = Vol(E_ℓ ∩ Γ_+(λ)) / V_ℓ and the negative significant ratio r_{−,ℓ} = Vol(E_ℓ ∩ Γ_−(λ)) / V_ℓ.
5. For every pair of cells E_j and E_ℓ (j ≠ ℓ), compute the shared boundary size B_{jℓ} = Vol_{d−1}(∂E_j ∩ ∂E_ℓ), where Vol_{d−1} is the (d − 1)-dimensional Lebesgue measure.
6. Apply multidimensional scaling (Kruskal, 1964) to the cell centers e_1, ..., e_L.
7. Place a ball centered at each e_ℓ with radius r_0 × √V_ℓ.
8. If r_{+,ℓ} + r_{−,ℓ} > 0, add a pie chart centered at e_ℓ with radius r_0 × √V_ℓ × (r_{+,ℓ} + r_{−,ℓ}). The pie chart contains two groups with ratios r_{+,ℓ}/(r_{+,ℓ} + r_{−,ℓ}) and r_{−,ℓ}/(r_{+,ℓ} + r_{−,ℓ}).
9. Add a line connecting the two nodes e_j and e_ℓ if B_{jℓ} > 0. We may adjust the thickness of the line according to B_{jℓ}.
An example of Algorithm 2 is given in Figure 2, in which we apply the visualization algorithm to the GvHD dataset using the kernel density estimator. We choose the threshold λ by bootstrapping the L_∞ difference of f̂, i.e., sup_x |f̂*(x) − f̂(x)|, where f̂* is the density difference for the bootstrap sample. We take the 95% upper quantile of the bootstrap deviation as the threshold.
The radius constant r_0 is defined by the user. It is a constant for visualization and does not affect the analysis. Algorithm 2 preserves the relative position of each cell and visualizes each cell according to its size. The pie charts give the ratio of regions where the two densities are significantly different. The lines connecting cells provide geometric information about how the cells are connected to each other.
Applying Algorithm 2 to the GvHD dataset (Figure 2), we find that there are 6 cells, one of which is much larger than the others. Moreover, in most cells, the blue regions are larger than the red regions. This indicates that, compared to the density of the control group, the density of the GvHD group seems to concentrate more, so that the regions above the threshold are larger.

Morse-Smale Two-Sample Test
Here we introduce a technique combining the energy test (Baringhaus and Franz, 2004; Székely and Rizzo, 2004, 2013) and the Morse-Smale complex to conduct a two-sample test. We call our method the Morse-Smale Energy test (MSE test). The advantage of the MSE test is that it is a nonparametric test whose power can be higher than that of the energy test; see Figure 8. Moreover, we can combine our test with the visualization tool proposed in the previous section (Algorithm 2); see Figure 9 for an example of displaying p-values from the MSE test when visualizing the density difference.
Before we introduce our method, we first review the ordinary energy test. Given two random variables X ∈ R^d and Y ∈ R^d, the energy distance is defined as

E(X, Y) = 2 E‖X − Y‖ − E‖X − X'‖ − E‖Y − Y'‖,

where X' and Y' are iid copies of X and Y, respectively. The energy distance has several useful applications, such as goodness-of-fit testing (Székely and Rizzo, 2005), two-sample testing (Baringhaus and Franz, 2004; Székely and Rizzo, 2004, 2013), clustering (Szekely and Rizzo, 2005), and distance components (Rizzo et al., 2010), to name but a few. We recommend the excellent review paper by Székely and Rizzo (2013).
For the two-sample test, let X_1, ..., X_n and Y_1, ..., Y_m be the two samples we want to compare. The sample version of the energy distance is

Ê(X, Y) = (2/(nm)) Σ_{i=1}^{n} Σ_{j=1}^{m} ‖X_i − Y_j‖ − (1/n^2) Σ_{i=1}^{n} Σ_{j=1}^{n} ‖X_i − X_j‖ − (1/m^2) Σ_{i=1}^{m} Σ_{j=1}^{m} ‖Y_i − Y_j‖.   (22)

If X and Y are from the same population (the same density), Ê(X, Y) converges to 0 in probability. Numerically, we use the permutation test to compute the p-value for Ê(X, Y). This can be done quickly with the R package 'energy' (Rizzo and Szekely, 2008). We now formally introduce our testing procedure, the MSE test (see Algorithm 3 for a summary). Our test consists of three steps. First, we split the data into two halves. Second, we use one half of the data (containing both samples) to form a nonparametric density estimate (e.g., the KDE) and then compute the Morse-Smale complex (d-cells). Last, we use the other half of the data to conduct the energy-distance two-sample test within each d-cell. That is, we partition the second half of the data by the d-cells and run the energy distance test within each cell. If we have L cells, we obtain L p-values from the energy distance test. We reject H_0 if any one of the L p-values is smaller than α/L (a Bonferroni correction). Figure 9 provides an example of using this procedure (Algorithm 3) together with the visualization method proposed in Algorithm 2. Data splitting avoids using the same data twice, which ensures that we have a valid test.
Algorithm 3 Morse-Smale Energy Test
Input: Sample 1: {X_1, ..., X_n}; Sample 2: {Y_1, ..., Y_m}; significance level α.
1. Randomly split the data into two halves D_1 and D_2, each containing points from both samples.
2. Using D_1, compute a nonparametric density estimate and its d-cells E_1, ..., E_L.
3. Partition the second half D_2 by the d-cells.
4. For each cell E_ℓ, do 4-1 and 4-2:
4-1. Find the points X_ℓ and Y_ℓ of the second half D_2 that fall in E_ℓ.
4-2. Conduct the energy two-sample test on (X_ℓ, Y_ℓ); let the p-value be p(ℓ).
5. Reject H_0 if p(ℓ) < α/L for some ℓ.
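The per-cell test in step 4-2 relies on the energy distance (22) with a permutation p-value. A self-contained sketch (our own implementation of what the R package 'energy' computes; the Gaussian toy samples and permutation count are assumptions):

```python
import numpy as np

def energy_distance(X, Y):
    """Sample energy distance, equation (22)."""
    d = lambda A, B: np.linalg.norm(A[:, None] - B[None, :], axis=2).mean()
    return 2 * d(X, Y) - d(X, X) - d(Y, Y)

def energy_test(X, Y, n_perm=200, seed=0):
    """Permutation p-value for H0: X and Y share a distribution."""
    rng = np.random.default_rng(seed)
    Z, n = np.vstack([X, Y]), len(X)
    stat = energy_distance(X, Y)
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(len(Z))
        hits += energy_distance(Z[p[:n]], Z[p[n:]]) >= stat
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, size=(60, 2))
Y = rng.normal(1.5, 1.0, size=(60, 2))        # location shift: H0 is false
print(energy_test(X, Y) < 0.05)               # True: H0 rejected
```

In the MSE test, this routine would be applied separately to the points of D_2 falling in each cell E_ℓ, with the Bonferroni cutoff α/L.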
Example. Figure 8 shows a simple comparison of the proposed MSE test with the usual energy test. We consider a K = 4 Gaussian mixture model in d = 2, with the standard deviation of each component being the same, σ = 0.2, and component proportions (0.2, 0.5, 0.2, 0.1). The left panel displays a sample of size N = 500 from this mixture distribution. We draw the first sample from this Gaussian mixture model. For the second sample, we draw from a similar Gaussian mixture model except that we change the standard deviation of one component. In the middle panel, we change the deviation of the third component (C3 in the left panel, which contains 20% of the data points). In the right panel, we change the deviation of the fourth component (C4 in the left panel, which contains 10% of the data points). We use significance level α = 0.05; for the MSE test, we use the Bonferroni correction, and the smoothing bandwidth is chosen using Silverman's rule of thumb (Silverman, 1986).
Note that in both the middle and right panels, the leftmost case (added deviation equal to 0) is where H_0 should not be rejected. As can be seen from Figure 8, the MSE test has much stronger power than the usual energy test.
The original energy test has low power, while the MSE test has higher power. This is because the two distributions differ only in a small portion of the region, so a global test like the energy test requires large sample sizes to detect the difference. The MSE test, on the other hand, partitions the space according to the density difference, so it is capable of detecting the local difference.
[Figure 8: power curves comparing the MSE test and the energy test (N = 500 and N = 1000) as the standard deviation of component C3 (middle panel) or C4 (right panel) is changed.]

Example. In addition to its higher power, the MSE test can be combined with the visualization tool in Algorithm 2. Figure 9 displays an example where we visualize the density difference and simultaneously indicate the p-values from the energy test within each cell, using the GvHD dataset. This provides more information about how the two distributions differ from each other.

Theoretical Analysis
We first define some notation for the theoretical analysis. Let f be a smooth function. We define ‖f‖_∞ = sup_x |f(x)| to be the L_∞ norm of f. In addition, let ‖f‖_{j,max} denote the elementwise L_∞ norm of the j-th derivatives of f. For instance, ‖f‖_{1,max} = max_j sup_x |∂f(x)/∂x_j|. We also define ‖f‖_{0,max} = ‖f‖_∞. We further define

‖f − h‖*_{ℓ,max} = max_{j=0,...,ℓ} ‖f − h‖_{j,max}.

The quantity ‖f − h‖*_{ℓ,max} measures the difference between two functions f and h up to the ℓ-th order derivatives.
For two sets A and B, the Hausdorff distance is

Haus(A, B) = inf{r > 0 : A ⊂ B ⊕ r, B ⊂ A ⊕ r},

where A ⊕ r = {y : min_{x∈A} ‖x − y‖ ≤ r}. The Hausdorff distance is like the L_∞ distance for sets. Let f̃ : K ⊂ R^d → R be a smooth function with bounded third derivatives. Note that as long as ‖f − f̃‖*_{3,max} is small, f̃ is also a Morse function, by Lemma 9. Let D and D̃ denote the boundaries of the descending d-manifolds of f and f̃. We will show that if ‖f − f̃‖*_{3,max} is sufficiently small, then Haus(D̃, D) = O(‖f − f̃‖_{1,max}).

[Fig 9. An example applying both Algorithms 2 and 3 to the GvHD dataset introduced in Figure 2; each cell is annotated with p < 0.001. We use data splitting as described in Algorithm 3. From the first part of the data, we compute the cells and visualize them using Algorithm 2. Then we apply the energy-distance two-sample test within each cell as described in Algorithm 3 and annotate each cell with a p-value. Note that the visualization is slightly different from Figure 2 since only half of the original dataset is used here.]
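For finite point sets, the Hausdorff distance reduces to a max-min computation (our own sketch; the two small point sets are illustrative assumptions):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets: for discrete
    sets, inf{r : A in B + r and B in A + r} equals the larger of the
    two directed max-min distances."""
    D = np.linalg.norm(A[:, None] - B[None, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.5], [1.0, 0.0], [2.0, 0.0]])
print(hausdorff(A, B))  # 1.0: the point (2, 0) of B is 1 away from A
```

In the stability analysis, A and B would be discretizations of the boundaries D and D̃, and the theorem bounds this distance by the first-derivative difference of f and f̃.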

Stability of the Morse-Smale Complex
Before we state our theorem, we first derive some properties of descending manifolds. Recall that we are interested in $B = \partial D_d$, the boundary of the descending $d$-manifolds ($B$ is also the union of all $j$-descending manifolds for $j < d$).
Since each $D_j$ is a collection of smooth $j$-dimensional manifolds embedded in $\mathbb{R}^d$, for every $x \in D_j$ there exists a basis $v_1(x), \dots, v_{d-j}(x)$ for the normal space of $D_j$ at $x$ (Bredon, 1993; Helgason, 1979). That is, $v_1(x), \dots, v_{d-j}(x)$ span the normal space to $D_j$ at $x$. For simplicity, we write $V(x) = [v_1(x), \dots, v_{d-j}(x)] \in \mathbb{R}^{d \times (d-j)}$. Note that the number of columns $d - j \equiv d - j(x)$ in $V(x)$ depends on which $D_j$ the point $x$ belongs to; we write $j$ rather than $j(x)$ to simplify the notation. For $x \in B$, let $\mathcal{V}(x) = \mathrm{span}\{v_1(x), \dots, v_{d-j}(x)\}$ denote the normal space to $B$ at $x$. One can view $\mathcal{V}(x)$ as the normal map of the manifold $D_j$ at $x \in D_j$.
For each $x \in B$, define the projected Hessian $H_V(x) = V(x)^T \nabla\nabla f(x)\, V(x)$, which is the Hessian matrix obtained by taking derivatives along the column space of $V(x)$. Its eigenvalues determine how the gradient flows move away from $B$. We let $\lambda_{\min}(M)$ be the smallest eigenvalue of a symmetric matrix $M$; if $M$ is a scalar (just one number), then $\lambda_{\min}(M) = M$.
This assumption is very mild; it requires only that the gradient flow moves away from the boundaries of the ascending manifolds. In terms of mode clustering, it requires the gradient flow to move away from the boundaries of clusters. For a point $x \in D_{d-1}$, let $v_1(x)$ be the corresponding normal direction. Then the gradient $g(x)$ is normal to $v_1(x)$ by definition; that is, $v_1(x)^T g(x) = v_1(x)^T \nabla f(x) = 0$, which means that the gradient along $v_1(x)$ is $0$. Assumption (D) means that the second derivative along $v_1(x)$ is positive, which implies that the density along the direction $v_1(x)$ behaves like a local minimum at the point $x$. Intuitively, this is how we expect the density to behave around the boundaries: gradient flows move away from the boundaries (except for those flows that are already on the boundaries).
Theorem 1 (Stability of descending $d$-manifolds) Let $f, \tilde f : K \subset \mathbb{R}^d \to \mathbb{R}$ be two smooth functions with bounded third derivatives defined as above, and let $B, \tilde B$ be the boundaries of the associated descending $d$-manifolds. Assume $f$ is a Morse function satisfying condition (D). When $\|f - \tilde f\|^*_{3,\max}$ is sufficiently small, $\mathrm{Haus}(B, \tilde B) = O(\|f - \tilde f\|_{1,\max})$. This theorem shows that the boundaries of the descending $d$-manifolds of two nearby Morse functions are close to each other, and that the difference between the boundaries is controlled by the difference of the first derivatives.
Similarly to descending manifolds, we can define all the analogous quantities for ascending manifolds. We introduce the following assumption: (A) the largest eigenvalue of the projected Hessian is negative and bounded away from $0$ on the boundaries of the ascending manifolds. Note that $\lambda_{\max}(M)$ denotes the largest eigenvalue of a matrix $M$; if $M$ is a scalar, $\lambda_{\max}(M) = M$. Under assumption (A), we have a stability result for ascending manifolds analogous to Theorem 1. Assumptions (A) and (D) together imply the stability of the $d$-cells.
Theorem 1 can be applied to nonparametric density estimation. Our goal is to estimate $B$, the boundary of the descending $d$-manifolds of the unknown population density function $p$. Our estimator is $\hat B_n$, the boundary of the descending $d$-manifolds of a nonparametric density estimator, e.g., the kernel density estimator $\hat p_n$. Then, under certain regularity conditions, $\mathrm{Haus}(\hat B_n, B) = O(h^2) + O_P\left(\sqrt{\frac{\log n}{nh^{d+2}}}\right)$. We will see this result in the next section when we discuss mode clustering.
Similar reasoning works for the nonparametric regression case. Assume that we are interested in $B$, the boundary of the descending $d$-manifolds of the regression function $m(x) = \mathbb{E}(Y \mid X = x)$, and that our estimator $\hat B$ is again a plug-in estimate based on $\hat m_n(x)$, a nonparametric regression estimator (e.g., a kernel estimator). Then, under mild regularity conditions, $\mathrm{Haus}(\hat B, B) = O(\|\hat m_n - m\|_{1,\max})$.

Consistency of Mode Clustering
A direct application of Theorem 1 is the consistency of mode clustering. Let $K^{(\alpha)}$ be the $\alpha$-th derivative of the kernel $K$, and let $\mathbf{BC}^r$ denote the collection of functions with bounded, continuous derivatives up to the $r$-th order. We consider the following two assumptions on the kernel function:

(K1) The kernel function $K \in \mathbf{BC}^3$ and is symmetric and non-negative, with $\int \left(K^{(\alpha)}(x)\right)^2 dx < \infty$ and $\int x^2 K^{(\alpha)}(x)\, dx < \infty$ for all $\alpha = 0, 1, 2, 3$.

(K2) The kernel function satisfies condition $K_1$ of Giné and Guillou (2002). That is, there exist some $A, v > 0$ such that for all $0 < \epsilon < 1$, $\sup_Q N\!\left(\mathcal{K}, L_2(Q), \epsilon\right) \le \left(\frac{A}{\epsilon}\right)^{v}$, where $N(T, d, \epsilon)$ is the $\epsilon$-covering number of a semi-metric space $(T, d)$ and $\mathcal{K}$ is the class of functions generated by $K$ and its derivatives under rescaling.

(K1) is a common assumption; see Wasserman (2006). (K2) is a weak assumption that guarantees the consistency of the KDE under the $L_\infty$ norm; it first appeared in Giné and Guillou (2002) and has been widely assumed (Einmahl and Mason, 2005; Rinaldo et al., 2010; Genovese et al., 2012; Rinaldo et al., 2012; Genovese et al., 2014; Chen et al., 2015).
Theorem 2 (Consistency for mode clustering) Let $p$ and $\hat p_n$ be the density function and the KDE, and let $B$ and $\hat B_n$ be the boundaries of the clusters from mode clustering on $p$ and $\hat p_n$, respectively. Assume (D) for $p$ and (K1-2). Then when $\frac{\log n}{nh^{d+6}} \to 0$ and $h \to 0$, $\mathrm{Haus}(\hat B_n, B) = O(h^2) + O_P\left(\sqrt{\frac{\log n}{nh^{d+2}}}\right)$. The proof simply combines Theorem 1 with the rate of convergence for estimating the gradient of a density by the KDE (Theorem 8), so we omit it. Theorem 2 gives a bound on the rate of convergence of the cluster boundaries in mode clustering. The rate decomposes into two parts: the bias $O(h^2)$ and the (square root of the) variance $O_P\left(\sqrt{\frac{\log n}{nh^{d+2}}}\right)$. This rate is the same as the $L_\infty$-loss for estimating the gradient of a density function, which makes sense since mode clustering is completely determined by the gradient of the density.
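In practice, mode clustering with a Gaussian KDE is carried out by the mean-shift iteration cited in the introduction (Fukunaga and Hostetler, 1975), which ascends the estimated density gradient. A minimal sketch (the grouping tolerance `h / 2` is an arbitrary illustrative choice, not from the paper):

```python
import numpy as np

def mean_shift(X, h, iters=200):
    """Mode clustering via mean shift with a Gaussian kernel: each point
    is moved along the KDE gradient until it reaches a local mode; points
    sharing a mode form a cluster."""
    Z = X.copy()
    for _ in range(iters):
        for i, z in enumerate(Z):
            w = np.exp(-np.sum((X - z) ** 2, axis=1) / (2 * h ** 2))
            Z[i] = w @ X / w.sum()            # mean-shift update
    # group points whose destinations coincide (up to a tolerance)
    labels, modes = -np.ones(len(X), dtype=int), []
    for i, z in enumerate(Z):
        for k, m in enumerate(modes):
            if np.linalg.norm(z - m) < h / 2:
                labels[i] = k
                break
        else:
            modes.append(z)
            labels[i] = len(modes) - 1
    return labels, np.array(modes)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
labels, modes = mean_shift(X, h=0.5)
print(len(modes))   # 2: one estimated mode per well-separated component
```

This is the plug-in estimator described above: the cluster boundaries of $\hat p_n$ are implicitly those of the mean-shift partition.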
Another way to describe the consistency of mode clustering is to show that the proportion of data points that are incorrectly clustered (mis-clustered) converges to 0. This can be quantified by the Rand index (Rand, 1971; Hubert and Arabie, 1985; Vinh et al., 2009), which measures the similarity between two partitions of the data points. Let $\mathrm{dest}(x)$ and $\mathrm{dest}_n(x)$ be the destinations of the gradient flows starting at $x$ under the true density $p$ and under the KDE $\hat p_n$, respectively. For a pair of points $x, y$, we define $\Psi(x, y) = \mathbf{1}\{\mathrm{dest}(x) = \mathrm{dest}(y)\}$ and $\Psi_n(x, y) = \mathbf{1}\{\mathrm{dest}_n(x) = \mathrm{dest}_n(y)\}$. Thus, $\Psi(x, y) = 1$ if $x, y$ are in the same cluster and $0$ if they are not. The Rand index for mode clustering using $p$ versus using $\hat p_n$ is $\mathrm{rand}(\hat p_n, p) = \binom{n}{2}^{-1} \sum_{i < j} \mathbf{1}\{\Psi(X_i, X_j) = \Psi_n(X_i, X_j)\}$, which is the proportion of pairs of data points on which the two clustering results agree. If the two clusterings output the same partition, the Rand index is 1.
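The Rand index above can be computed directly from its pairwise definition; a small sketch:

```python
from itertools import combinations

def rand_index(labels1, labels2):
    """Rand index: proportion of point pairs on which two clusterings agree
    (both put the pair in the same cluster, or both in different clusters)."""
    n = len(labels1)
    agree = sum(
        (labels1[i] == labels1[j]) == (labels2[i] == labels2[j])
        for i, j in combinations(range(n), 2)
    )
    return agree / (n * (n - 1) / 2)

a = [0, 0, 1, 1]
b = [1, 1, 0, 0]   # same partition, relabeled
print(rand_index(a, b))   # 1.0
c = [0, 1, 0, 1]
print(rand_index(a, c))   # 0.333...: the two partitions agree on 2 of 6 pairs
```

Note that the index is invariant to relabeling of clusters, which is why relabeling of the estimated modes plays no role in its definition.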
Theorem 3 (Bound on the Rand index) Assume (D) for $p$ and (K1-2). Then when $\frac{\log n}{nh^{d+6}} \to 0$ and $h \to 0$, the Rand index satisfies $1 - \mathrm{rand}(\hat p_n, p) = O(h^2) + O_P\left(\sqrt{\frac{\log n}{nh^{d+2}}}\right)$. Theorem 3 shows that the Rand index converges to 1 in probability, which establishes the consistency of mode clustering in an alternative way: the proportion of data points that are incorrectly assigned (compared with mode clustering using the population density $p$) is asymptotically bounded by the above rate. Azizyan et al. (2015) also derived a convergence rate for the Rand index of mode clustering; here we briefly compare our results to theirs. Azizyan et al. (2015) consider a low-noise condition that leads to a fast convergence rate when the clusters are well separated, and their approach can even be applied to the case of increasing dimension. In our case (Theorem 3), we consider a fixed-dimension scenario but do not assume the low-noise condition. Thus, the main difference between Theorem 3 and the result in Azizyan et al. (2015) lies in the assumptions being made, so our result complements their findings.

Consistency of Morse-Smale Regression
In what follows, we will show that $\hat m_{n,\mathrm{MSR}}(x)$ is a consistent estimator of $m_{\mathrm{MSR}}(x)$. Recall that $m_{\mathrm{MSR}}(x) = \sum_{\ell=1}^{L} \left(\mu_\ell + \beta_\ell^T x\right) \mathbf{1}\{x \in E_\ell\}$, where $E_\ell$ is the $\ell$-th $d$-cell defined on $m$ and the parameters $(\mu_\ell, \beta_\ell)$ give the best linear approximation to $m$ within $E_\ell$. And $\hat m_{n,\mathrm{MSR}}$ is the two-stage estimator of $m_{\mathrm{MSR}}(x)$ defined by $\hat m_{n,\mathrm{MSR}}(x) = \sum_{\ell=1}^{\hat L} \left(\hat\mu_\ell + \hat\beta_\ell^T x\right) \mathbf{1}\{x \in \hat E_\ell\}$, where $\{\hat E_\ell : \ell = 1, \dots, \hat L\}$ is the collection of cells of the pilot nonparametric regression estimator $\hat m_n$ and $\hat\mu_\ell, \hat\beta_\ell$ are the regression parameters from equation (12).

Theorem 4 (Consistency of Morse-Smale regression) Assume (A) and (D) for $m$ and assume $m$ is a Morse-Smale function. Then when $\frac{\log n}{nh^{d+6}} \to 0$ and $h \to 0$, we have $|\hat m_{n,\mathrm{MSR}}(x) - m_{\mathrm{MSR}}(x)| \overset{P}{\to} 0$ uniformly for all $x$ except for a set $N_n$ with Lebesgue measure $O_P(\|\hat m_n - m\|_{1,\max})$. Theorem 4 states that when we have a consistent pilot nonparametric regression estimator (such as kernel regression), the proposed MSR estimator converges to the population MSR. Similarly to Theorem 6, the set $N_n$ consists of regions around the boundaries of the cells, where we cannot determine the host cell. Note that when we use kernel regression as the pilot estimator $\hat m_n$, the exceptional set in Theorem 4 has Lebesgue measure $O(h^2) + O_P\left(\sqrt{\frac{\log n}{nh^{d+2}}}\right)$ under regular smoothness conditions.

Now we consider a special case where we may obtain a parametric rate of convergence for estimating $m_{\mathrm{MSR}}$. Let $E = \partial(E_1 \cup \cdots \cup E_L)$ be the boundaries of all cells. We consider the following low-noise condition: $\mathbb{P}\left(d(X, E) \le \epsilon\right) \le A\epsilon^{\beta}$ (36) for some $A, \beta > 0$. Equation (36) is Tsybakov's low-noise condition (Audibert et al., 2007) applied to the boundaries of the cells. Namely, (36) states that it is unlikely to have many observations near the boundaries of the cells of $m$. Under this low-noise condition, we obtain the following result using kernel regression.
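The second stage of the two-stage construction (per-cell linear least squares) can be sketched as follows. Here the cell labels are taken as given: a toy partition stands in for the estimated Morse-Smale cells, and the helpers `msr_fit`/`msr_predict` are illustrative names, not the authors' code.

```python
import numpy as np

def msr_fit(X, y, cells):
    """Per-cell least-squares fits (mu_l, beta_l) for the MSR approximation."""
    params = {}
    for c in np.unique(cells):
        mask = cells == c
        A = np.c_[np.ones(mask.sum()), X[mask]]           # intercept + covariates
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        params[c] = (coef[0], coef[1:])                    # (mu_l, beta_l)
    return params

def msr_predict(X, cells, params):
    """Evaluate the piecewise-linear MSR approximation."""
    yhat = np.empty(len(X))
    for c, (mu, beta) in params.items():
        mask = cells == c
        yhat[mask] = mu + X[mask] @ beta
    return yhat

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
cells = (X[:, 0] > 0).astype(int)      # toy partition into two "cells"
y = np.where(cells == 1, 1 + X @ np.array([2.0, 0.0]),
                         -1 + X @ np.array([-2.0, 0.0]))
params = msr_fit(X, y, cells)
err = np.max(np.abs(msr_predict(X, cells, params) - y))
print(err)   # ~0: a noise-free piecewise-linear function is fit exactly
```

In the actual method the partition comes from the Morse-Smale complex of the pilot estimator $\hat m_n$ rather than from a fixed split of the covariate space.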
Theorem 5 (Fast rate of convergence for Morse-Smale regression) Let the pilot estimator $\hat m_n$ be the kernel regression estimator. Assume (A) and (D) for $m$ and assume $m$ is a Morse-Smale function. Assume also that (36) holds for the covariate $X$ and (K1-2) for the kernel function. Then, uniformly for all $x$ except for a set $N_n$ of small Lebesgue measure, the error of $\hat m_{n,\mathrm{MSR}}$ decays faster than the nonparametric rate; in particular, when $\beta > \frac{6+d}{4}$, the parametric rate $O_P(n^{-1/2})$ is attained. Theorem 5 shows that when the low-noise condition holds, we obtain a fast rate of convergence for estimating $m_{\mathrm{MSR}}$. Note that the pilot estimator $\hat m_n$ does not have to be a kernel estimator; other approaches, such as local polynomial regression, also work.

Consistency of the Morse-Smale Signature
Another application of Theorem 1 is to bound the difference of two Morse-Smale signatures. Let $f$ be a Morse-Smale function with cells $E_1, \dots, E_L$. Recall that the Morse-Smale signatures are the bipartite graph and the summary statistics (locations, density values) for the local modes, local minima, and cells. It is known in the literature (see, e.g., Lemma 9) that when two functions $f, \tilde f$ are sufficiently close, then, after relabeling, the corresponding critical points $c_j, \tilde c_j$ of $f$ and $\tilde f$ are close. This implies the stability of the local modes and minima.
So what we need is the stability of the summary statistics $(\eta^\dagger_\ell, \gamma^\dagger_\ell)$ associated with the edges (cells). Recall that these summaries are defined through (14). For another function $\tilde f$, let $(\tilde\eta^\dagger_\ell, \tilde\gamma^\dagger_\ell)$ be its signatures for the cell $\tilde E_\ell$. The following theorem shows that if two functions are close, their corresponding Morse-Smale signatures are also close.
Theorem 6 Let $f$ be a Morse-Smale function satisfying assumptions (A) and (D), and let $\tilde f$ be a smooth function. Then, when $\|f - \tilde f\|^*_{3,\max}$ is sufficiently small, after relabeling the indices of the cells of $\tilde f$, $\max_\ell \left(\|\eta^\dagger_\ell - \tilde\eta^\dagger_\ell\| + \|\gamma^\dagger_\ell - \tilde\gamma^\dagger_\ell\|\right) = O\left(\|f - \tilde f\|^*_{1,\max}\right)$. Theorem 6 shows the stability of the signatures $(\eta^\dagger_\ell, \gamma^\dagger_\ell)$. Note that Theorem 6 also implies the stability of the piecewise approximation built from the signatures. Together with the stability of the critical points (39), Theorem 6 proves the stability of the Morse-Smale signatures.

Example: Morse-Smale Density Estimation
As an example of Theorem 6, we consider density estimation. Let $p$ be the density of the random sample $X_1, \dots, X_n$ and recall that $\hat p_n$ is the kernel density estimator. Let $(\eta^\dagger_\ell, \gamma^\dagger_\ell)$ be the signature of $p$ on the cell $E_\ell$ and $(\hat\eta^\dagger_\ell, \hat\gamma^\dagger_\ell)$ be the signature of $\hat p_n$ on the cell $\hat E_\ell$. The following corollary guarantees the consistency of the Morse-Smale signatures of the KDE.
Corollary 7 Assume (A) and (D) hold for $p$ and that the kernel function satisfies (K1-2). Then when $\frac{\log n}{nh^{d+6}} \to 0$ and $h \to 0$, after relabeling we have $\max_\ell \left(\|\eta^\dagger_\ell - \hat\eta^\dagger_\ell\| + \|\gamma^\dagger_\ell - \hat\gamma^\dagger_\ell\|\right) = O(h^2) + O_P\left(\sqrt{\frac{\log n}{nh^{d+2}}}\right)$. The proof of Corollary 7 is a simple application of Theorem 6 together with the rate of convergence of the first derivative of the KDE (Theorem 8), so we omit it. The optimal rate in Corollary 7, $O_P\left(\left(\frac{\log n}{n}\right)^{2/(d+6)}\right)$, is attained when we choose $h$ to be of order $\left(\frac{\log n}{n}\right)^{1/(d+6)}$.
Remark 2 When we compute the Morse-Smale approximation function, we may run into numerical problems in low-density regions because the density estimate $\hat p_n$ may have unbounded support. In this case, some cells may be unbounded, and most of these cells have extremely low density values, which pushes the approximation function toward 0. Thus, in practice, we restrict ourselves to the regions whose density is above a pre-defined threshold $\lambda$, so that every cell is bounded. A simple data-driven threshold is $\lambda = 0.05 \sup_x \hat p_n(x)$. Note that Corollary 7 still works in this case with a slight modification: the cells are defined on the region $\{x : \hat p_n(x) \ge 0.05 \times \sup_x \hat p_n(x)\}$.
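The thresholding rule in this remark is straightforward to apply on a grid; a sketch using scipy's Gaussian KDE (the grid and sample here are illustrative, and the estimated maximum over the grid stands in for $\sup_x \hat p_n(x)$):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
kde = gaussian_kde(X.T)                  # gaussian_kde expects (d, n) data

xx, yy = np.meshgrid(np.linspace(-4, 4, 80), np.linspace(-4, 4, 80))
grid = np.vstack([xx.ravel(), yy.ravel()])
dens = kde(grid)                         # density estimate on the grid

lam = 0.05 * dens.max()                  # lambda = 0.05 * sup_x p_n(x)
keep = dens >= lam                       # high-density region; cells bounded
print(keep.mean())                       # fraction of the grid retained
```

Downstream cell computations would then be restricted to the grid points where `keep` is true.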
Remark 3 Note that for a density function, local minima may not exist, or the gradient flow may not lead to a local minimum in some regions. For instance, a Gaussian distribution has no local minimum, and except at the center of the Gaussian, following the gradient descent path moves us off to infinity. Thus, in this case we only consider the boundaries of the ascending 0-manifolds corresponding to well-defined local minima, and assumption (A) applies only to the boundaries of these ascending manifolds.
Remark 4 When we apply the Morse-Smale complex to nonparametric density estimation or regression, we need to choose the tuning parameter. For instance, in MSR we may use kernel regression or local polynomial regression, so we need to choose the smoothing bandwidth; for the density estimation problem and mode clustering, we need to choose the smoothing bandwidth of the kernel smoother. In the regression case, because we have the response variable, we recommend choosing the tuning parameter by cross-validation. For the kernel density estimator (and mode clustering), because the optimal rate depends on gradient estimation, we recommend choosing the smoothing bandwidth using the normal reference rule for gradient estimation or cross-validation for gradient estimation (Duong et al., 2007; Chacón et al., 2011).
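As an illustration of the first recommendation, a scalar normal-scale rule for estimating the $r$-th density derivative takes the form $h = \sigma\,\big(4 / (n(d + 2r + 2))\big)^{1/(d + 2r + 4)}$ (a simplified version of the normal-reference selectors in the cited work of Chacón, Duong, and collaborators; the exact constants should be checked against those references). For $r = 1$ this gives $h$ of order $n^{-1/(d+6)}$, matching the optimal order above.

```python
import numpy as np

def normal_reference_bandwidth(X, r=1):
    """Scalar normal-scale bandwidth for the r-th density derivative:
    h = sigma * (4 / (n * (d + 2r + 2)))^(1 / (d + 2r + 4)),
    with sigma an average marginal standard deviation.  r = 0 recovers
    the classical multivariate normal-reference (Silverman-type) rule."""
    n, d = X.shape
    sigma = X.std(axis=0, ddof=1).mean()
    return sigma * (4.0 / (n * (d + 2 * r + 2))) ** (1.0 / (d + 2 * r + 4))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
h = normal_reference_bandwidth(X, r=1)   # of order n^(-1/8) for d = 2
print(h)
```

For heavy-tailed or strongly multimodal data, the cross-validation selectors cited in the remark are preferable to any normal-reference rule.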

Discussion
In this paper, we introduced the Morse-Smale complex and its summary signatures for nonparametric inference. We demonstrated that the Morse-Smale complex can be applied to various statistical problems such as clustering, regression, and two-sample comparison. We showed that a smooth multivariate function can be summarized by a few parameters associated with a bipartite graph representing the local modes, the local minima, and the complex of the underlying function. Moreover, we proved a fundamental theorem about the stability of the Morse-Smale complex, and based on the stability theorem we derived consistency results for mode clustering and regression.

The Morse-Smale complex provides a method to synthesize parametric and nonparametric inference. Compared to parametric inference, we have a more flexible model for studying the structure of the underlying distribution. Compared to nonparametric inference, the use of the Morse-Smale complex yields a visualizable representation of the underlying multivariate structure. This suggests that we may gain additional insight in data analysis by using geometric features.

Although the Morse-Smale complex has many potential statistical applications, we need to be careful when applying it to a data set whose dimension is large (say $d > 10$). When the dimension is large, the curse of dimensionality kicks in and nonparametric estimators (in both density estimation and regression) are inaccurate, so the error of the estimated Morse-Smale complex can be huge.
Here we list some possible extensions for future research:

• Asymptotic distribution. We have proved the consistency (and the rate of convergence) for estimating the complex, but the limiting distribution is still unknown. If we can derive the limiting distribution and show that some resampling method (e.g., the bootstrap; Efron, 1979) converges to the same distribution, we can construct confidence sets for the complex.

• Minimax theory. Despite the fact that we have derived the rate of convergence for a plug-in estimator of the complex, we did not prove its optimality. We conjecture that the minimax rate for estimating the complex should be related to the rate for estimating the gradient and the smoothness around the complex (Audibert et al., 2007; Singh et al., 2009).

Proofs

Lemma 9 asserts that when $\|p - q\|^*_{3,\max}$ is small enough, (1) the Morse functions $p$ and $q$ have the same number of critical points, and (2) after suitable relabeling of the indices, the corresponding critical points are close. A similar result appears in Theorem 1 of Chen et al. (2016). This lemma shows that two close Morse functions $p, q$ have similar critical points. The proof of Theorem 1 requires several working lemmas; we provide a roadmap for how we are going to prove Theorem 1.
First, we define some notation for gradient flows. Recall that $\pi_x(t) \in K$ is the gradient (ascent) flow starting at $x$: $\pi_x'(t) = \nabla f(\pi_x(t))$, $\pi_x(0) = x$. For $x$ that is not on the boundary set $D$, we define the time $t_\epsilon(x) = \inf\{t : \|\pi_x(t) - m\| \le \epsilon\}$, where $m$ is the destination of $\pi_x$. That is, $t_\epsilon(x)$ is the time to arrive at the $\epsilon$-neighborhood of a local mode. First, we prove a property of the direction of the gradient field around the boundaries.
Lemma 10 (Gradient field and boundaries) Assume the notation of Theorem 1 and assume $f$ is a Morse function with bounded third derivatives satisfying assumption (D). Let $s(x) = x - \Pi_x$, where $\Pi_x \in B$ is the projection of $x$ onto $B$ (when $\Pi_x$ is not unique, pick any projected point). For any $q \in B$, let $x$ be a point near $q$ such that $x - q \in \mathcal{V}(q)$, the normal space of $B$ at $q$. Let $\delta(x) = \|x - q\|$ and let $e(x) = \frac{x - q}{\|x - q\|}$ denote the corresponding unit vector. Then:

1. For every point $x$ such that $d(x, B) \le \frac{2H_{\min}}{d^2 \|f\|_{3,\max}}$, we have $g(x)^T s(x) \ge 0$. That is, the gradient is pushing $x$ away from the boundaries.

2. When $\delta(x) \le \frac{2H_{\min}}{d^2 \|f\|_{3,\max}}$, the projected gradient $e(x)^T g(x)$ is lower bounded by a quantity proportional to $\delta(x)$.

Proof.
Claim 1. Because the projection of $x$ onto $B$ is $\Pi_x$, we have $s(x) \in \mathcal{V}(\Pi_x)$ and $s(x)^T g(\Pi_x) = 0$ (recall that for $p \in B$, $\mathcal{V}(p)$ is the collection of normal vectors of $B$ at $p$).
Recall that $d(x, B) = \|s(x)\|$ is the projection distance, and consider the bound (40). Note that we use the vector-valued Taylor theorem in the first inequality, together with the fact that, for two close points $x, y$, the difference in the $j$-th element of the gradient has the expansion $g_j(x) - g_j(y) = H_j(y)^T(x - y) + O(\|x - y\|^2)$, where $H_j(y) = \nabla g_j(y)$ and $T_j(y) = \nabla\nabla g_j(y)$ is the Hessian matrix of $g_j(y)$, whose elements are third derivatives of $f$. Thus, when $d(x, B) \le \frac{2H_{\min}}{d^2 \|f\|_{3,\max}}$, we have $s(x)^T g(x) \ge 0$, which proves the first claim. Claim 2. By definition, $e(x)^T g(q) = 0$ because $g(q)$ is in the tangent space of $B$ at $q$ and $e(x)$ is in the normal space of $B$ at $q$. Thus, $\eta(x) = e(x)^T g(x) = e(x)^T (g(x) - g(q))$, and the same lower bound as in claim 1 applies. Also note that $x - q = e(x)\delta(x)$ and that $e(x)$ is in the normal space of $B$ at $\Pi_x$, so the third inequality follows from assumption (D).
Lemma 10 can be used to prove the following result.
Lemma 11 (Distance between flows and boundaries) Assume the notation above and assumption (D). Then for all $x$ such that $0 < d(x, B) \le \delta_1$, we have $d(\pi_x(t), B) \ge d(x, B)$ for all $t \ge 0$. The main idea is that the projected gradient (the gradient projected onto the normal space of the nearby boundaries) is always positive. This means that the flow cannot move closer to the boundaries.
Proof. By Lemma 10, for every point $x$ near the boundaries ($d(x, B) < \delta_1$), the gradient moves the point away from the boundaries. Thus, for any flow $\pi_x(t)$, once it touches the region $B \oplus \delta_1$, it moves away from this region; so once a flow leaves $B \oplus \delta_1$, it can never come back.
Therefore, the only way a flow can be within the region $B \oplus \delta_1$ is if it starts at some $x \in B \oplus \delta_1$, i.e., $d(x, B) < \delta_1$. Now consider a flow starting at $x$ with $0 < d(x, B) \le \delta_1$. By Lemma 10, the gradient $g(x)$ moves $x$ away from the boundaries $B$. Thus, whenever $\pi_x(t) \in B \oplus \delta_1$, the gradient pushes $\pi_x(t)$ away from $B$. As a result, the time at which $\pi_x(t)$ is closest to $B$ is the beginning of the flow, i.e., $t = 0$. This implies that $d(\pi_x(t), B) \ge d(x, B)$ for all $t \ge 0$.

With Lemma 11, we are able to bound the low-gradient regions, since a flow cannot move arbitrarily close to any critical point other than its destination. Let $\lambda_{\min} > 0$ be the minimal absolute eigenvalue over all critical points.

Lemma 12 (Bounds on low-gradient regions) Assume the function $f$ is a Morse function with bounded third derivatives. Let $C$ denote the collection of all critical points and let $\lambda_{\min}$ be the minimal absolute eigenvalue of the Hessian matrix $H(x)$ evaluated at $x \in C$. Then there exists a constant $\delta_2 > 0$ such that for every $\delta \le \delta_2$, $\left\{x : \|g(x)\| \le \frac{\lambda_{\min}}{2}\delta\right\} \subset C \oplus \delta$.
Proof. Because the support $K$ is compact and $x \in K \mapsto \|g(x)\|$ is continuous, for any sufficiently small $g_0 > 0$ there exists a constant $R(g_0)$ such that $G_1(g_0) = \{x : \|g(x)\| \le g_0\} \subset C \oplus R(g_0)$, and $R(g_0) \to 0$ as $g_0 \to 0$. Thus, there is a constant $g_1 > 0$ such that $R(g_1) = \frac{\lambda_{\min}}{2\|f\|_{3,\max}}$. The set $C \oplus \frac{\lambda_{\min}}{2\|f\|_{3,\max}}$ has a useful feature: for any $x \in C \oplus \frac{\lambda_{\min}}{2\|f\|_{3,\max}}$, the smallest absolute eigenvalue of $H(x)$ is at least $\frac{\lambda_{\min}}{2}$. Using this feature and the fact that $G_1(g_1) \subset C \oplus \frac{\lambda_{\min}}{2d^3\|f\|_{3,\max}}$, for any $x \in G_1(g_1)$ we obtain the stated inclusion, where $g_1$ is the constant such that $R(g_1) = \frac{\lambda_{\min}}{2\|f\|_{3,\max}}$.

Lemma 13 (Bounds on the gradient flow) Using the notation above and assumption (D), let $\delta_1$ be as defined in Lemma 11 and $\delta_2$ as defined in Lemma 12, equation (43). Then for all $x$ such that $d(x, B) \ge \delta > 0$, and picking $\epsilon$ such that $\delta_2 > \epsilon^2 > \delta$, we have $\inf_{0 \le t \le t_\epsilon(x)} \|g(\pi_x(t))\| \ge \frac{\lambda_{\min}}{2}\delta$. Proof.
We consider the flow $\pi_x$ starting at $x$ (not on the boundaries) with $d(x, B) \ge \delta$. For $0 \le t \le t_\epsilon(x)$, the entire flow is within the set $H(\epsilon, \delta)$ defined in (44). This is because, by Lemma 11, the flow line cannot get closer to the boundaries $B$ than distance $\delta$, and the flow stops when its distance to its destination is $\epsilon$. Thus, if we can prove that every point within $H(\epsilon, \delta)$ has gradient lower bounded by $\frac{\lambda_{\min}}{2}\delta$, the proof is complete. That is, we want to show that $\inf_{x \in H(\epsilon, \delta)} \|g(x)\| \ge \frac{\lambda_{\min}}{2}\delta$. To show the lower bound, we focus on the points whose gradient is small. Let $S(\delta) = \{x : \|g(x)\| \le \frac{\lambda_{\min}}{2}\delta\}$. By Lemma 12, $S(\delta)$ consists of regions around critical points, with $S(\delta) \subset C \oplus \delta$.
Since we have chosen $\epsilon$ as in Lemma 13, and since the critical points are either in $M$, the collection of all local modes, or on the boundaries $B$, the minimal distance between $H(\epsilon, \delta)$ and the critical points $C$ is greater than $\delta$ (see equation (44) for the definition of $H(\epsilon, \delta)$). Thus, $H(\epsilon, \delta) \cap S(\delta) = \emptyset$. Now, by the fact that every flow $\pi_x(t)$ with $d(x, B) \ge \delta$ remains within the set $H(\epsilon, \delta)$ (equation (45)), we conclude the result.
Lemma 13 links the constant $\gamma_\epsilon(\delta)$ and the minimal gradient, which can be used to bound the time $t_\epsilon(x)$ uniformly; this further leads to the following result.
Lemma 14 Let $K(\delta) = \{x \in K : d(x, B) \ge \delta\} = K \setminus (B \oplus \delta)$, let $\delta_0$ be defined as in Lemma 13, and let $M$ be the collection of all local modes. Assume that $f$ has bounded third derivatives, that $f$ is a Morse function, and that assumption (D) holds. Let $\tilde f$ be another smooth function. There exist constants $c^*, c_0, c_1, \epsilon_0$, all depending only on $f$, such that when $(\epsilon, \delta)$ satisfy condition (47), then for all $x \in K(\delta)$, $\left\|\lim_{t\to\infty}\pi_x(t) - \lim_{t\to\infty}\tilde\pi_x(t)\right\| \le c^*\|f - \tilde f\|_\infty$. Note that condition (47) holds when $(\epsilon, \delta)$ are sufficiently small.
Proof. The proof of this lemma is closely related to the proof of Theorem 2 of Arias-Castro et al. (2016). The result in Arias-Castro et al. (2016) is a pointwise convergence of gradient flows; here we generalize their finding to uniform convergence.
where $A$ is a constant that depends only on $f$ (actually, we only need $\|f - \tilde f\|^*_{2,\max}$ to be small here). Thus, whenever $\|f - \tilde f\|^*_{3,\max}$ is small enough, every mode in $\tilde M$ has a unique corresponding mode in $M$ and vice versa. In addition, for a pair of local modes $(m_j, \tilde m_j)$ with $m_j \in M$, $\tilde m_j \in \tilde M$, their distance is bounded by $\|m_j - \tilde m_j\| \le \frac{\sigma}{3}$. Now we pick $(\epsilon, \delta)$ satisfying equation (47). Then, when $\|f - \tilde f\|^*_{3,\max}$ is sufficiently small, by Lemma 14, for every $x \in H(\epsilon, \delta)$ the flows $\pi_x(t)$ and $\tilde\pi_x(t)$ lead to the same pair of modes. Thus, the boundaries $\tilde B$ cannot intersect the region $H(\epsilon, \delta)$, and it is obvious that $\tilde B$ cannot intersect $B(M, \sqrt{\epsilon})$ either. To conclude, note that by definition $K(\delta) \subset H(\epsilon, \delta) \cup B(M, \sqrt{\epsilon})$.
Thus, $\tilde B \subset K(\delta)^c = B \oplus \delta$, which implies $\mathrm{Haus}(B, \tilde B) \le \delta < \frac{H_{\min}}{d^2\|f\|_{3,\max}}$ (note that $\delta < \delta_0 \le \frac{H_{\min}}{d^2\|f\|_{3,\max}}$ appears in equation (47) and Lemma 13). Part 2: rate of convergence. To derive the convergence rate, we argue by contradiction. Let $q \in B$, $\tilde q \in \tilde B$ be a pair of points whose distance attains the Hausdorff distance $\mathrm{Haus}(B, \tilde B)$. Namely, $q$ and $\tilde q$ satisfy $\|q - \tilde q\| = \mathrm{Haus}(B, \tilde B)$, and either $\tilde q$ is the projection of $q$ onto $\tilde B$ or $q$ is the projection of $\tilde q$ onto $B$.
Recall that $\mathcal{V}(x)$ is the normal space to $B$ at $x \in B$, and we define $\tilde{\mathcal{V}}(x)$ similarly for $x \in \tilde B$. An important property of the pair $q, \tilde q$ is that $q - \tilde q \in \mathcal{V}(q) \cap \tilde{\mathcal{V}}(\tilde q)$. If this were not true, we could slightly perturb $q$ (or $\tilde q$) on $B$ (or $\tilde B$) to obtain a projection distance larger than the Hausdorff distance, which leads to a contradiction. Now we choose $x$ to be a point between $q$ and $\tilde q$, namely $x = \frac{1}{3}q + \frac{2}{3}\tilde q$. We define $e(x) = \frac{q - x}{\|q - x\|}$ and $\tilde e(x) = \frac{\tilde q - x}{\|\tilde q - x\|}$. Then $e(x) \in \mathcal{V}(q)$, $\tilde e(x) \in \tilde{\mathcal{V}}(\tilde q)$, and $e(x) = -\tilde e(x)$.
Thus, for every $x$ between $q$ and $\tilde q$, $e(x)^T g(x) > 0$ while $\tilde e(x)^T \tilde g(x) = -e(x)^T \tilde g(x) < 0$. (58) Note that we can apply Lemma 10 to $\tilde f$ and its gradient because, when $\|f - \tilde f\|^*_{2,\max}$ is sufficiently small, assumption (D) holds for $\tilde f$ as well.
To prove the asymptotic rate for the Rand index, we assume that for every local mode of $p$ there exists one and only one local mode of $\hat p_n$ close to it. By Lemma 9, this holds when $\|\hat p_n - p\|^*_{3,\max}$ is sufficiently small. Thus, after relabeling, the local mode $\hat m_\ell$ of $\hat p_n$ is an estimator of the local mode $m_\ell$ of $p$. Let $\widehat W_\ell$ be the basin of attraction of $\hat m_\ell$ under $\nabla \hat p_n$ and $W_\ell$ be the basin of attraction of $m_\ell$ under $\nabla p$. Let $A \triangle B = \{x : x \in A, x \notin B\} \cup \{x : x \in B, x \notin A\}$ be the symmetric difference between sets $A$ and $B$. The regions $E_n = \bigcup_\ell \left(W_\ell \triangle \widehat W_\ell\right)$ are where the two mode clusterings disagree with each other. Note that $E_n$ consists of regions between the two boundaries $\hat B_n$ and $B$. Given a pair of points $X_i$ and $X_j$, $\Psi(X_i, X_j) \ne \Psi_n(X_i, X_j) \implies X_i \in E_n$ or $X_j \in E_n$. (61) By the definition of the Rand index (30), $1 - \mathrm{rand}(\hat p_n, p) = \binom{n}{2}^{-1}\sum_{i<j} \mathbf{1}\{\Psi(X_i, X_j) \ne \Psi_n(X_i, X_j)\}$. (62) Thus, if we can bound the fraction of data points within $E_n$, we can bound the rate of the Rand index. Since $K$ is compact and $p$ has bounded second derivatives, the volume of $E_n$ is bounded by (63). Now we consider a collection of subsets of $K$: $\mathcal{V} = \{B \oplus r : 0 \le r \le R\}$, where $R < \infty$ is the diameter of $K$. For any set $A \subset K$, let $P(A) = \mathbb{P}(X_i \in A)$ and $P_n(A) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}(X_i \in A)$ denote the probability of an observation falling in $A$ and the empirical estimate of that probability, respectively. It is easy to see that $V_n \in \mathcal{V}$ for all $n$, and that the class $\mathcal{V}$ has finite VC dimension (actually, the VC dimension is 1). By empirical process theory (or so-called VC theory; see e.

Fig 1 .
Fig 1. An example of a Morse-Smale complex. The green dots are local minima; the blue dots are local modes; the violet dots are saddle points. Panels (a) and (b) give examples of a descending d-manifold (blue region) and an ascending 0-manifold (green region). Panel (c) shows the corresponding d-cell (yellow region). Panel (d) shows all the d-cells.

Fig 3 .
Fig 3. A one-dimensional example. The blue dots are local modes and the green dots are local minima. Left panel: the basins of attraction of the two local modes, colored brown and orange. Middle panel: the basins of attraction (under the negative gradient) of the local minima, colored red, purple, and violet. Right panel: the intersections of the basins, which are called d-cells.

Fig 4 .
Fig 4. Two-dimensional examples of critical points, descending manifolds, ascending manifolds, and 2-cells, for the same function as Figure 1. (a): The sets $C_k$ for $k = 0, 1, 2$. The four blue dots are $C_0$, the collection of local modes (each of them is $c_{0,j}$ for some $j = 1, \dots, 4$). The four orange dots are $C_1$, the collection of saddle points (each of them is $c_{1,j}$ for some $j = 1, \dots, 4$). The green dots are $C_2$, the collection of local minima (each green dot is $c_{2,j}$ for some $j = 1, \dots, 9$). (b): The sets $D_k$ for $k = 0, 1, 2$. The yellow area is $D_2$ (each subregion separated by the blue curves is a $D_{2,j}$, $j = 1, \dots, 4$). The two blue curves are $D_1$ (each of the 4 blue segments is a $D_{1,j}$, $j = 1, \dots, 4$). The green dots are $D_0$ (also $C_2$), the collection of local minima (each green dot is $D_{0,j}$ for some $j = 1, \dots, 9$). (c): The sets $A_k$ for $k = 0, 1, 2$. The yellow area is $A_0$ (each subregion separated by the red curves is an $A_{0,j}$, $j = 1, \dots, 9$). The two red curves are $A_1$ (each of the 4 red segments is an $A_{1,j}$, $j = 1, \dots, 4$). The blue dots are $A_2$ (also $C_0$), the collection of local modes (each blue dot is $A_{2,j}$ for some $j = 1, \dots, 4$). (d): Example of 2-cells. The thick blue curves are $D_1$ and the thick red curves are $A_1$.
Fig 5. An example of mode clustering. (a): Basin of attraction for each local mode (red +). Black dots are data points. (b): Gradient flow (blue lines) for each data point; each flow starts at a data point and ends at a local mode. (c): Mode clustering; we use the destination of the gradient flow to cluster the data points.

Fig 6 .
Fig 6. Morse-Smale signatures for a smooth function. (a): The original function. The blue dots are local modes, the green dots are local minima, and the pink dot is a saddle point. (b): The Morse-Smale approximation to (a); this is the best piecewise-linear approximation to the original function given the d-cells. (c): This bipartite graph has nodes that are the local modes and minima, and edges that represent the d-cells. Note that we can summarize the smooth function (a) by the signature graph (c) and the parameters for constructing the approximation function (b). The signature graph and the parameters of the approximation function define the Morse-Smale signatures.

Fig 11 .
Fig 11. Illustration for Lemmas 10 and 11. (a): We show that the angle between the projection vector s(x) and the gradient g(x) is at most a right angle whenever x is close to the boundaries B. (b): According to (a), any gradient flow line starting from a point x close to the boundaries (distance < δ 1 ) keeps moving away from the boundaries while its current location is close to them; the flow line can temporarily get closer to the boundaries once it is far from them (distance > δ 1 ).

Fig 13 .
Fig 13. Result from Lemma 13: lower bound on the minimal gradient. This plot shows the possible values of the minimal gradient $\eta(x)$ (pink regions) when $d(x, B)$ is known. Note that we have chosen $\epsilon^2 < \delta_2$.
in Arias-Castro et al. (2016) (proof of their Theorem 2), which yields a bound of the form $\max_x t_\epsilon(x) + 2\|f - \tilde f\|_\infty$ (50) under condition (48) and $\epsilon < \epsilon_0$ for some constant $\epsilon_0$. Thus, the key is to bound $t_\epsilon(x)$; recall that $x \in H(\epsilon, \delta)$. The second part of the proof is to derive the convergence rate: because $\mathrm{Haus}(B, \tilde B) < \frac{H_{\min}}{d^2\|f\|_{3,\max}}$, we can apply the second assertion of Lemma 10 to derive the rate of convergence. Note that $C$ and $\tilde C$ are the critical points of $f$ and $\tilde f$, and $M \equiv C_0$, $\tilde M \equiv \tilde C_0$ are the local modes of $f$ and $\tilde f$. Part 1: $\mathrm{Haus}(B, \tilde B) < \frac{H_{\min}}{d^2\|f\|_{3,\max}}$, the upper bound for the Hausdorff distance. Let $\sigma = \min\{\|x - y\| : x, y \in M, x \ne y\}$; that is, $\sigma$ is the smallest distance between any pair of distinct local modes. By Lemma 9, when $\|f - \tilde f\|^*_{3,\max}$ is small, $f$ and $\tilde f$ have the same number of critical points and $\left\|\lim_{t\to\infty}\pi_x(t) - \lim_{t\to\infty}\tilde\pi_x(t)\right\| \le c^*\|f - \tilde f\|_\infty \le c^*\|f - \tilde f\|^*_{3,\max}$.
$\mathrm{Vol}(E_n) = O\left(\mathrm{Haus}(\hat B_n, B)\right)$. (63) Note that $\mathrm{Vol}(A)$ denotes the volume (Lebesgue measure) of a set $A$. We now construct a region surrounding $B$ such that $E_n \subset B \oplus \mathrm{Haus}(\hat B_n, B) = V_n$ (64) and $\mathrm{Vol}(V_n) = O\left(\mathrm{Haus}(\hat B_n, B)\right)$.
g., Vapnik and Chervonenkis (1971)), $\sup_{A \in \mathcal{V}} \left|P(X_i \in A) - P_n(A)\right| = O_P\left(\sqrt{\frac{\log n}{n}}\right)$.

Algorithm 3: Morse-Smale Energy test (MSE test). Input: sample 1: $\{X_1, \dots, X_n\}$, sample 2: $\{Y_1, \dots, Y_m\}$, smoothing parameter $h$, significance level $\alpha$. 1. Randomly split the data into halves $D_1$ and $D_2$, both containing equal numbers of $X$'s and $Y$'s (assuming $n$ and $m$ are even). 2. Compute the KDEs $\hat p_X$ and $\hat p_Y$ from the first sample $D_1$.

An example comparing the Morse-Smale Energy test to the original Energy test. We consider a $d = 2$, $K = 4$ Gaussian mixture model. Left panel: an instance of the Gaussian mixture with four components, denoted C1, C2, C3, and C4; they have equal standard deviation ($\sigma = 0.2$) and component proportions $(0.2, 0.5, 0.2, 0.1)$. Middle panel: we change the standard deviation of component C3 to 0.3, 0.4, and 0.5 and compute the power of the MSE test and the usual Energy test at sample sizes $N = 500$ and $1000$ (standard deviation 0.2 is where $H_0$ should not be rejected). Right panel: we increase the variance of component C4 (the smallest component) and make the same comparison. We pick significance level $\alpha = 0.05$ (gray horizontal line), and in the MSE test we reject $H_0$ if the minimal p-value is less than $\alpha/L$, where $L$ is the number of cells (i.e., we use the Bonferroni correction).
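The within-cell test used by Algorithm 3 is the energy-distance two-sample test, whose p-value is typically obtained by permutation. A minimal sketch of that ingredient (the helpers `energy_stat`/`energy_test` are illustrative, not the authors' implementation; in the full algorithm this test is run once per cell and the minimal p-value is compared to $\alpha/L$):

```python
import numpy as np

def energy_stat(X, Y):
    """Energy distance statistic: 2 E||X-Y|| - E||X-X'|| - E||Y-Y'||."""
    def mean_dist(A, B):
        return np.mean(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1))
    return 2 * mean_dist(X, Y) - mean_dist(X, X) - mean_dist(Y, Y)

def energy_test(X, Y, n_perm=200, seed=0):
    """Permutation p-value for the energy two-sample test."""
    rng = np.random.default_rng(seed)
    obs = energy_stat(X, Y)
    Z, n = np.vstack([X, Y]), len(X)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))          # relabel the pooled sample
        if energy_stat(Z[idx[:n]], Z[idx[n:]]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (60, 2))
Y = rng.normal(2, 1, (60, 2))    # clearly shifted distribution
print(energy_test(X, Y))          # small p-value: the samples differ
```

Restricting this test to one cell at a time, as Algorithm 3 does, is what localizes the detected difference to a region of the sample space.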
(see, e.g., Bhatia 1997), whose elements are third derivatives of $f$, and $\|A\|_F$ is the Frobenius norm of the matrix $A$. By the Hoffman-Wielandt theorem (see, e.g., page 165 of Bhatia 1997), the differences between the eigenvalues of $H(x)$ and $H(c)$ are bounded by $\|H(x) - H(c)\|_F$. Therefore, the smallest absolute eigenvalue of $H(x)$ must be greater than or equal to the smallest absolute eigenvalue of $H(c)$ minus $\frac{\lambda_{\min}}{2}$. Because $\lambda_{\min}$ is the smallest absolute eigenvalue of $H(c)$ over all $c \in C$, the smallest absolute eigenvalue of $H(x)$ is at least $\frac{\lambda_{\min}}{2}$ for all such $x$.

Fig 12. Illustration of $H(\epsilon, \delta)$. The thick black lines are the boundaries $B$; solid dots are local modes; the box is a local minimum; empty dots are saddle points. The three purple lines denote possible gradient flows starting from points $x$ with $d(x, B) = \delta$. The gray disks denote all possible regions where $\|g\| \le \frac{\lambda_{\min}}{2}\delta$. Thus, the gradient within the set $H(\epsilon, \delta)$ is greater than or equal to $\frac{\lambda_{\min}}{2}\delta$.