Fast and exact fixed-radius neighbor search based on sorting

Fixed-radius near neighbor search is a fundamental data operation that retrieves all data points within a user-specified distance to a query point. There are efficient algorithms that can provide fast approximate query responses, but they often have a very compute-intensive indexing phase and require careful parameter tuning. Therefore, exact brute force and tree-based search methods are still widely used. Here we propose a new fixed-radius near neighbor search method, called SNN, that significantly improves over brute force and tree-based methods in terms of index and query time, provably returns exact results, and requires no parameter tuning. SNN exploits a sorting of the data points by their first principal component to prune the query search space. Further speedup is gained from an efficient implementation using high-level basic linear algebra subprograms (BLAS). We provide theoretical analysis of our method and demonstrate its practical performance when used stand-alone and when applied within the DBSCAN clustering algorithm.

There are two main types of nearest neighbor (NN) search: k-nearest neighbor search and fixed-radius near neighbor search. Fixed-radius NN search, also referred to as a radius query, aims at identifying all data points within a given distance from a query point; see Bentley (1975b) for a historical review. The most straightforward way of finding nearest neighbors is a linear search through the whole database, also known as exhaustive or brute force search. Though considered inelegant, it is still widely used, e.g., in combination with GPU acceleration (Garcia et al., 2008).
Existing NN search approaches can be broadly divided into exact and approximate methods. In many applications, approximate methods are an effective solution for performing fast queries while allowing for a small loss in accuracy. Well-established approximate NN search techniques include randomized k-d trees (Silpa-Anan and Hartley, 2008), hierarchical k-means (Nister and Stewenius, 2006), locality-sensitive hashing (Indyk and Motwani, 1998), HNSW (Malkov and Yashunin, 2020), and ScaNN (Guo et al., 2020). Considerable drawbacks of most approximate NN search algorithms are their potentially long indexing time and the need to tune additional hyperparameters controlling, e.g., the trade-off between recall and index and query time. Furthermore, to the best of our knowledge, all approximate NN methods for which

RELATED WORK
NN search methods can broadly be classified into exact or approximate methods, depending on whether they return exact or approximate answers to queries (Cayton and Dasgupta, 2007). It is widely accepted that for high-dimensional data there are no exact NN search methods which are asymptotically more efficient than exhaustive search; see, e.g., (Muja, 2013, Chap. 3) and (Francis-Landau and Durme, 2019). Exact NN methods based on the k-d tree (Bentley, 1975a; Friedman et al., 1977), balltree (Omohundro, 1989), VP-tree (Yianilos, 1993), cover tree (Beygelzimer et al., 2006), and RP tree (Dasgupta and Sinha, 2013) only perform well on low-dimensional data. This shortcoming is often referred to as the curse of dimensionality (Indyk and Motwani, 1998). However, note that negative asymptotic results do not rule out the possibility of algorithms and implementations that perform significantly (by orders of magnitude) faster than brute force search in practice, even on real-world high-dimensional data sets.
To speed up NN search, modern approaches generally focus on two aspects, namely indexing and sketching. Indexing aims to construct a data structure that prunes the search space for a given query, hopefully resulting in fewer distance computations. Sketching, on the other hand, aims at reducing the cost of each distance computation by using a compressed approximate representation of the data.
The most widely used indexing strategy is space partitioning. Some of the earliest approaches are based on tree structures such as the k-d tree (Bentley, 1975a; Friedman et al., 1977), balltree (Omohundro, 1989), VP-tree (Yianilos, 1993), and cover tree (Beygelzimer et al., 2006). Tree-based methods are known to become inefficient for high-dimensional data. Among the remedies are randomization (e.g., (Dasgupta and Sinha, 2013; Ram and Sinha, 2019)) and ensembling (e.g., the FLANN nearest neighbor search tool by Muja and Lowe (2009), which empirically shows competitive performance against approximate NN methods). Another popular space partitioning method is locality-sensitive hashing (LSH); see, e.g., (Indyk and Motwani, 1998). LSH leverages a set of hash functions from a locality-sensitive hash family and guarantees that similar queries are hashed into the same buckets with higher probability than less similar ones. This method was originally introduced for the binary Hamming space by Indyk and Motwani (1998) and was later extended to the Euclidean space (Datar et al., 2004). In (Bawa et al., 2005), a self-tuning index for LSH-based similarity search was introduced. A partitioning approach based on neural networks and LSH was proposed in Dong et al. (2020). Another interesting method is GriSPy (Chalela et al., 2021), which performs fixed-radius NN search by constructing a regular grid for the index, with the possibility of working with periodic boundary conditions. This method, however, has a high memory demand because the grid computations grow exponentially with the space dimension.

This paper focuses on exact fixed-radius NN search. The implementations available in the most widely used scientific computing environments are all based on tree structures, including findNeighborsInRadius in MATLAB (The MathWorks Inc., 2022), NearestNeighbors in scikit-learn (Pedregosa et al., 2011), and spatial in SciPy (Virtanen et al., 2020). This is in contrast to our SNN method introduced below, which does not utilise any tree structures.

SORTING-BASED NN SEARCH
Suppose we have n data points p_1, …, p_n ∈ R^d (represented as column vectors) and d ≪ n. The fixed-radius NN problem consists of finding the subset of data points that is closest to a given query point q ∈ R^d (which may be out-of-sample) with respect to some distance metric. Throughout this paper, the vector norm ∥·∥ = ∥·∥_2 is the Euclidean one, though it is also possible to identify nearest neighbors with respect to other distances:
• cosine distance: assuming normalized data (with ∥u∥ = ∥v∥ = 1), the cosine distance is dist_cos(u, v) = 1 − u^T v. Since ∥u − v∥^2 = 2 − 2u^T v, the cosine distance can be computed from the Euclidean distance via dist_cos(u, v) = ∥u − v∥^2 / 2.
• angular distance: the angular distance θ ∈ [0, π] between two normalized vectors u, v satisfies cos θ = u^T v, and so ∥u − v∥^2 = 2 − 2 cos θ is monotonically increasing in θ. Therefore, closest angle neighbors can be identified via the Euclidean distance.
• maximum inner product similarity (see, e.g., Bachrach et al. (2014)): for not necessarily normalized vectors we can consider the transformed data points p̃_i = [(ξ^2 − ∥p_i∥^2)^{1/2}, p_i^T]^T with ξ := max_i ∥p_i∥ and the transformed query point q̃ = [0, q^T]^T. Then ∥p̃_i − q̃∥^2 = ξ^2 + ∥q∥^2 − 2 p_i^T q. Since ξ and q are independent of the index i, we have argmin_i ∥p̃_i − q̃∥^2 = argmax_i p_i^T q.
• Manhattan distance: since ∥p_i − q∥_2 ≤ ∥p_i − q∥_1, any point satisfying ∥p_i − q∥_2 > R must necessarily satisfy ∥p_i − q∥_1 > R. Hence, the sorting-based exclusion criterion proposed in section 3.2 to prune the query search space can also be used for the Manhattan distance.
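The maximum inner product reduction above can be verified in a few lines of numpy; this is a minimal sketch with variable names of our choosing, not part of the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((200, 8))   # 200 data points in R^8, rows are points
q = rng.standard_normal(8)

# Append sqrt(xi^2 - ||p_i||^2) as an extra coordinate to each data point,
# and a zero coordinate to the query, as in the transformation above.
norms = np.linalg.norm(P, axis=1)
xi = norms.max()
P_t = np.hstack([np.sqrt(xi**2 - norms**2)[:, None], P])
q_t = np.concatenate([[0.0], q])

# The Euclidean NN among the transformed points maximizes the inner product.
i_euclid = np.argmin(np.linalg.norm(P_t - q_t, axis=1))
i_mips = np.argmax(P @ q)
assert i_euclid == i_mips
```

The final assertion holds because ∥p̃_i − q̃∥^2 = ξ^2 + ∥q∥^2 − 2 p_i^T q is monotonically decreasing in the inner product p_i^T q.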
Our algorithm, called SNN, will return the required indices of the nearest neighbors in the Euclidean norm, and can also return the corresponding distances if needed. SNN uses three essential ingredients to obtain its speed. First, a sorting-based exclusion criterion is used to prune the search space for a given query. Second, pre-calculated dot products of the data points allow for a reduction of arithmetic complexity. Third, a reformulation of the distance criterion in terms of matrices (instead of vectors) allows for the use of high-level basic linear algebra subprograms (BLAS, Blackford et al. (2002)). In the following, we explain these ingredients in more detail.

Indexing
Before sorting the data, all data points are centered by subtracting the empirical mean value of each dimension, x_i := p_i − µ with µ := (1/n) Σ_{j=1}^n p_j. This operation does not affect the pairwise Euclidean distances between the data points and can be performed in O(dn) operations, i.e., with linear complexity in n. We then compute the first principal component v_1 ∈ R^d, i.e., the vector along which the data {x_i} exhibits the largest empirical variance. This vector can be computed by a thin singular value decomposition of the tall-skinny data matrix X ∈ R^{n×d} whose rows are the x_i^T,

X = UΣV^T,   (1)

where U ∈ R^{n×d} and V ∈ R^{d×d} have orthonormal columns and Σ = diag(σ_1, …, σ_d) with σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_d ≥ 0. The principal components are given as the columns of V = [v_1, …, v_d] and we require only the first column v_1. The score of a point x_i along v_1 is

α_i := x_i^T v_1 = e_i^T X v_1 = e_i^T UΣV^T v_1 = σ_1 e_i^T u_1,

where e_i denotes the i-th canonical unit vector in R^n. In other words, the scores α_i of all points can be read off from the first column of U = [u_1, …, u_d] multiplied by σ_1. The computation of the scores using a thin SVD requires O(nd^2) operations and is therefore linear in n.
The next (and most important) step is to order all data points x_i by their α_i scores; that is, after relabeling, α_1 ≤ α_2 ≤ ⋯ ≤ α_n. This sorting generally requires O(n log n) operations, independent of the data dimension d. We also precompute the squared-and-halved norm of each data point, x̄_i := (x_i^T x_i)/2 for i = 1, 2, …, n. This is of complexity O(nd), i.e., again linear in n. All these computations are done exactly once in the indexing phase, and only the scores [α_i], the numbers [x̄_i], the mean µ, and the single vector v_1 need to be stored. See Algorithm 1 for a summary.
Algorithm 1 SNN Index
1: Input: data points p_1, …, p_n ∈ R^d
2: Compute the empirical mean µ := (1/n) Σ_i p_i
3: Compute the mean-centered matrix X with rows x_i := p_i − µ
4: Compute the singular value decomposition of X = UΣV^T
5: Compute the sorting keys α_i := x_i^T v_1 and sort the data points so that α_1 ≤ ⋯ ≤ α_n
6: Precompute the squared-and-halved norms x̄_i := (x_i^T x_i)/2
7: Return: sorted data, keys [α_i], norms [x̄_i], first principal component v_1, mean µ
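The indexing steps can be sketched in a few lines of numpy. This is our own illustrative sketch (the function name `snn_index` is an assumption, not the authors' released implementation):

```python
import numpy as np

def snn_index(P):
    """Sketch of the SNN indexing phase: center the data, compute the first
    principal component, sort by principal scores, and precompute the
    squared-and-halved norms."""
    mu = P.mean(axis=0)
    X = P - mu                                   # centering, O(dn)
    # Thin SVD of the tall-skinny matrix X; row 0 of Vt is v_1.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    v1 = Vt[0]
    alpha = X @ v1                               # sorting keys alpha_i
    order = np.argsort(alpha)                    # O(n log n) sort
    X, alpha = X[order], alpha[order]
    half_sq = 0.5 * (X * X).sum(axis=1)          # (x_i^T x_i)/2, O(dn)
    return X, alpha, half_sq, v1, mu

rng = np.random.default_rng(1)
P = rng.standard_normal((1000, 5))
X, alpha, half_sq, v1, mu = snn_index(P)
```

After indexing, the scores `alpha` are in ascending order, which is what enables the binary search in the query phase.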

Query
Given a query point q and a user-specified search radius R, we want to retrieve all data points p_i satisfying ∥p_i − q∥ ≤ R. Figure 1 illustrates our approach. We first compute the mean-centered query x_q := q − µ and the corresponding score α_q := x_q^T v_1. By the Cauchy-Schwarz inequality, we have

|α_i − α_q| = |(x_i − x_q)^T v_1| ≤ ∥x_i − x_q∥.   (2)

Since we have sorted the data points by their scores, any point x_i with |α_i − α_q| > R can be excluded: if j_1 is the largest index such that α_{j_1} < α_q − R and j_2 is the smallest index such that α_{j_2} > α_q + R, then any x_i with i ≤ j_1 or i ≥ j_2 satisfies ∥x_i − x_q∥ > R by (2). As a consequence, we only need to consider candidates x_i whose indices are in J := {j_1 + 1, j_1 + 2, …, j_2 − 1}, and we can determine this smallest subset by finding the largest j_1 and smallest j_2 satisfying the above conditions, respectively. As the α_i are sorted, this can be achieved via binary search in O(log n) operations.
Note that the indices in J are contiguous integers, and hence it is memory efficient to access X(J, :), the submatrix of X whose row indices are in J. This will be important later.
Finally, we filter all data points in the reduced set X(J, :), retaining only those whose distance to the query point x_q is less than or equal to R, i.e., points satisfying ∥x_j − x_q∥^2 ≤ R^2. The query phase is summarized in Algorithm 2.
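Putting the indexing and query phases together, a self-contained numpy sketch follows; it is illustrative only (our own code, not the released implementation) and checks its result against brute force search:

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((2000, 6))
q = rng.standard_normal(6)
R = 1.5

# Indexing: center, first principal component, sorted scores, half norms.
mu = P.mean(axis=0)
X = P - mu
v1 = np.linalg.svd(X, full_matrices=False)[2][0]
alpha = X @ v1
order = np.argsort(alpha)
X, alpha = X[order], alpha[order]
half_sq = 0.5 * (X * X).sum(axis=1)

# Query: binary search for the candidate window, then filter.
xq = q - mu
aq = xq @ v1
lo = np.searchsorted(alpha, aq - R)              # binary search, O(log n)
hi = np.searchsorted(alpha, aq + R, side='right')
J = np.arange(lo, hi)                            # contiguous candidate window
dvals = half_sq[J] - X[J] @ xq                   # one matrix-vector product
found = J[dvals <= 0.5 * (R**2 - xq @ xq)]

# Sanity check against brute force over all (centered, sorted) points.
brute = np.where(np.linalg.norm(X - xq, axis=1) <= R)[0]
assert np.array_equal(found, brute)
```

Note that only the rows in the contiguous window `J` are touched by the distance filter; the rest of the dataset is excluded by the sorted scores alone.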

COMPUTATIONAL CONSIDERATIONS
The compute-intensive step of the query procedure is the computation of the squared distances

∥x_j − x_q∥^2 = Σ_{k=1}^d (x_{jk} − x_{qk})^2   (3)

for all vectors x_j with indices j ∈ J. Assuming that these vectors have d features, one evaluation of (3) requires 3d − 1 floating point operations (flop): d flop for the subtractions, d flop for the squarings, and d − 1 flop for the summation. In total, |J|(3d − 1) flop are required to compute all |J| squared distances. We can equivalently rewrite (3) as ∥x_j − x_q∥^2 = x_j^T x_j + x_q^T x_q − 2 x_j^T x_q and instead verify the radius condition as

x̄_j − x_j^T x_q ≤ (R^2 − x_q^T x_q)/2.   (4)

This form has the advantage that all the squared-and-halved norms x̄_j = (x_j^T x_j)/2 (j = 1, 2, …, n) have been precomputed during the indexing phase. Hence, in the query phase, the left-hand side of (4) can be evaluated for all |J| points x_j using only 2d|J| flop: (2d − 1)|J| for the inner products and |J| subtractions.
Merely counting flop, (4) saves about 1/3 of the arithmetic operations over (3). An additional advantage results from the fact that all inner products in (4) can be computed as X(J, :) x_q using a level-2 BLAS matrix-vector multiplication (gemv), resulting in further speedup on modern computing architectures. If multiple query points are given, say x_q^(1), …, x_q^(ℓ), a level-3 BLAS matrix-matrix multiplication (gemm) evaluates X(J, :)[x_q^(1), …, x_q^(ℓ)] in one go, where J is the union of candidate sets for all ℓ query points.

One may be concerned that the computation using (4) incurs more rounding error than the usual formula (3). We now show that this is not the case. First, note that division or multiplication by 2 does not incur rounding error. Using the standard model of floating point arithmetic, we have fl(a ∘ b) = (a ∘ b)(1 ± δ) for any elementary operation ∘ ∈ {+, −, ×, /}, where 0 ≤ δ ≤ u with the unit roundoff u (Higham, 2002, Chap. 1). Suppose we have two vectors x and y, where x_k and y_k denote their respective coordinates. Then computing the partial sums s_j := Σ_{k=1}^j (x_k − y_k)^2 in floating point arithmetic amounts to evaluating ŝ_1 = (x_1 − y_1)^2 (1 ± δ)^2, then ŝ_2 = (ŝ_1 + (x_2 − y_2)^2 (1 ± δ)^2)(1 ± δ), and so on.
Continuing this recursion, we arrive at ŝ_j = Σ_{k=1}^j (x_k − y_k)^2 (1 ± δ)^{e_k} with exponents e_k ≤ j + 1. Assuming (j + 1)u < 1 and using (Higham, 2002, Lemma 3.1) we have (1 ± δ)^{e_k} = 1 + θ_k with |θ_k| ≤ γ_{j+1} := (j + 1)u/(1 − (j + 1)u). Hence, since all summands are nonnegative,

|ŝ_j − Σ_{k=1}^j (x_k − y_k)^2| ≤ γ_{j+1} Σ_{k=1}^j (x_k − y_k)^2,

showing that the left-hand side of (3) can be evaluated with high relative accuracy.
A very similar calculation can be done for the formula ∥x_j − x_q∥^2 = x_j^T x_j + x_q^T x_q − 2 x_j^T x_q, the expression that is used to derive (4). Using the standard rounding error result for inner products (Higham, 2002, eq. (3.2)), one obtains the same bound on the relative accuracy of the floating-point evaluation as for (3).
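The claim can also be checked numerically. The following small sketch (ours, not from the paper) compares both formulas in single precision against a double-precision reference; the 1e-4 tolerance is a deliberately generous bound for vectors of this length:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(1000).astype(np.float32)
y = rng.standard_normal(1000).astype(np.float32)

d3 = np.sum((x - y)**2)                      # formula (3)
d4 = x @ x + y @ y - 2.0 * (x @ y)           # expression behind (4)
ref = np.sum((x.astype(np.float64) - y.astype(np.float64))**2)

# Both single-precision evaluations agree with the double-precision
# reference to a small relative error.
assert abs(d3 - ref) / ref < 1e-4
assert abs(d4 - ref) / ref < 1e-4
```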
Algorithm 2 SNN Query
1: Input: query vector q; user-specified radius R; output from Algorithm 1
2: Compute x_q := q − µ
3: Compute the sorting score of x_q, i.e., α_q := x_q^T v_1
4: Select the candidate index range J so that |α_j − α_q| ≤ R for all j ∈ J
5: Compute d := x̄(J) − X(J, :) x_q using the precomputed x̄ = [x̄_i]
6: Return: points x_j with d_j ≤ (R^2 − x_q^T x_q)/2 according to (4)
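When several queries are processed together, the inner products for all points can be formed with a single matrix-matrix product, as discussed above. A minimal numpy sketch of this batched evaluation (numpy dispatches `@` to BLAS gemm; for simplicity we screen all n points rather than a pruned candidate window):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((5000, 10))          # centered data, rows are points
Q = rng.standard_normal((10, 8))             # 8 query points as columns
half_sq = 0.5 * (X * X).sum(axis=1)          # precomputed (x_j^T x_j)/2

# Level-3 BLAS: one matrix-matrix product gives x_j^T x_q for all pairs.
G = X @ Q                                    # shape (5000, 8), a single gemm
lhs = half_sq[:, None] - G                   # left-hand side of (4), all pairs

R = 3.0
rhs = 0.5 * (R**2 - (Q * Q).sum(axis=0))     # (R^2 - x_q^T x_q)/2 per query
within = lhs <= rhs                          # boolean mask: points x queries

# Agrees with the direct distance computation via formula (3).
D2 = ((X[:, None, :] - Q.T[None, :, :])**2).sum(axis=2)
assert np.array_equal(within, D2 <= R**2)
```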

THEORETICAL ANALYSIS
The efficiency of the SNN query in Algorithm 2 depends on the number of pairwise distance computations performed in Step 5, i.e., on the size of the index set |J|. If the index set J is the full {1, 2, …, n}, then the algorithm reduces to exhaustive search over the whole dataset {x_1, x_2, …, x_n}, which is undesirable. For the algorithm to be most efficient, J would exactly coincide with the indices of data points x_i that satisfy ∥x_i − x_q∥ ≤ R. In practice, the index set J will be somewhere in between these two extremes. Thus, it is natural to ask: how likely is it that a candidate index j ∈ J corresponds to an actual fixed-radius nearest neighbor?

First note that, using the singular value decomposition (1) of the data matrix X, we can derive an upper bound on ∥x_i − x_q∥ that complements the lower bound (2). Using that x_i^T = e_i^T X = e_i^T UΣV^T, where e_i ∈ R^n denotes the i-th canonical unit vector, and denoting the elements of U by u_{ij}, we can expand x_i − x_q in the orthonormal basis V to obtain

(α_i − α_q)^2 ≤ ∥x_i − x_q∥^2 = (α_i − α_q)^2 + Σ_{k=2}^d (σ_k u_{ik} − x_q^T v_k)^2,   (5)

and the gap in these inequalities depends on σ_2, the second singular value of X. Indeed, if σ_2 = 0, then all data points x_i lie on a straight line passing through the origin and their distances correspond exactly to the differences in their first principal coordinates. This is a best-case scenario for Algorithm 2, as all candidates x_j, j ∈ J, found in Step 4 are indeed also nearest neighbors. If, on the other hand, σ_2 is relatively large compared to σ_1, the gap in the inequalities (5) becomes large and |α_i − α_q| may be a crude underestimation of the distance ∥x_i − x_q∥.

In order to get a qualitative understanding of how the number of distance computations in Algorithm 2 depends on the various parameters (dimension d, singular values of the data matrix, query radius R, etc.), we consider the following model. Let {x_i}_{i=1}^n be a large sample of points whose d components are normally distributed with zero mean and standard deviations [1, s, …, s], s < 1, respectively. These points describe an elongated "Gaussian blob" in R^d, with the elongation controlled by s. In the large data limit (n → ∞), the singular values of the data matrix X = [x_1, …, x_n]^T approach √n, s√n, …, s√n and the principal components approach the canonical unit vectors e_1, e_2, …, e_d. As a consequence, the principal coordinates α_i = e_1^T x_i follow a standard normal distribution, and hence for any c ∈ R the probability that |α_i − c| ≤ R is given as

P_1 = (1/√(2π)) ∫_{c−R}^{c+R} e^{−r^2/2} dr.

On the other hand, the probability that ∥x_i − [c, 0, …, 0]^T∥ ≤ R is given by

P_2 = (1/√(2π)) ∫_{c−R}^{c+R} e^{−r^2/2} F((R^2 − (r − c)^2)/s^2; d − 1) dr,

where F(·; d − 1) denotes the χ^2 cumulative distribution function with d − 1 degrees of freedom. In this model we can think of the point x_q := [c, 0, …, 0]^T as a query point, and our aim is to identify all data points x_i within a radius R of this query point. Since ∥x_i − x_q∥ ≤ R implies that |α_i − c| ≤ R, we have P_1 ≥ P_2. Hence, the quotient P_2/P_1 can be interpreted as the conditional probability of a point x_i satisfying ∥x_i − x_q∥ ≤ R given that |e_1^T x_i − c| ≤ R, i.e.,

P := P_2/P_1 = Prob(∥x_i − x_q∥ ≤ R given |α_i − c| ≤ R).

Ideally, we would like this quotient P to be close to 1, and it is now easy to study its dependence on the various parameters. First note that P_1 depends on neither s nor d, and hence the only effect these two parameters have on P is via the factor F((R^2 − (r − c)^2)/s^2; d − 1) in the integrand of P_2. This term corresponds to the probability that the sum of squares of d − 1 independent Gaussian random variables with mean zero and standard deviation s is less than or equal to R^2 − (r − c)^2. Hence, P_2 and therefore P are monotonically decreasing as s or d increase. This is consistent with intuition: as s increases, the elongated point cloud {x_i} becomes more spherical and hence it gets more difficult to find a direction along which to enumerate (sort) the points naturally. And this problem gets more pronounced in higher dimensions d.
We now show that the "efficiency ratio" P converges to 1 as R increases. In other words, the identification of candidate points x_j, j ∈ J, becomes relatively more efficient as the query radius R increases. (Here, relative is meant in the sense that candidate points become more likely to be fixed-radius nearest neighbors as R increases. Informally, as R → ∞, all n data points are candidates and also nearest neighbors, and so the efficiency ratio must be 1.) First note that for an arbitrarily small ε > 0 there exists a radius R_1 > 1 such that

P_1 = (1/√(2π)) ∫_{c−R}^{c+R} e^{−r^2/2} dr > 1 − ε for all R ≥ R_1.

To see this, note that the cumulative distribution function F increases monotonically from 0 to 1 as its first argument increases from 0 to ∞. Hence there exists a value T for which F(t; d − 1) > 1 − ε for all t ≥ T. Now we just need to find R_2 such that

(R_2^2 − (r − c)^2)/s^2 ≥ T for all r ∈ [c − R_1, c + R_1].

The left-hand side is a quadratic function of r with roots at r = c ± R_2, symmetric with respect to the maximum at r = c. Hence choosing R_2 such that R_2^2 = R_1^2 + s^2 T, or any value R_2 larger than that, will be sufficient. Now, setting R = max{R_1, R_2}, we have

P_2 ≥ (1/√(2π)) ∫_{c−R_1}^{c+R_1} e^{−r^2/2} F((R^2 − (r − c)^2)/s^2; d − 1) dr > (1 − ε)^2.

Hence, both P_1 and P_2 come arbitrarily close to 1 as R increases, and so does their quotient P = P_2/P_1.
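The model above is also easy to probe by simulation. The following sketch (function name and parameters are ours, not from the paper) estimates P = P_2/P_1 by Monte Carlo instead of evaluating the integrals:

```python
import numpy as np

def efficiency_ratio(d, s, R, c=0.0, n=200_000, seed=4):
    """Monte Carlo estimate of P = P2/P1 for the elongated Gaussian model:
    first coordinate ~ N(0, 1), remaining d-1 coordinates ~ N(0, s^2)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d)) * np.array([1.0] + [s] * (d - 1))
    xq = np.zeros(d)
    xq[0] = c
    in_band = np.abs(X[:, 0] - c) <= R              # |alpha_i - c| <= R
    in_ball = np.linalg.norm(X - xq, axis=1) <= R   # actual neighbors
    return in_ball.sum() / in_band.sum()

# The ball is contained in the band, so P <= 1; and P grows with R.
p_small = efficiency_ratio(d=10, s=0.5, R=0.5)
p_large = efficiency_ratio(d=10, s=0.5, R=5.0)
assert 0.0 <= p_small <= 1.0
assert p_large > p_small
```

The containment of the ball in the band makes the estimate of P deterministically bounded by 1, and the increase from `p_small` to `p_large` illustrates the convergence argument above.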

EXPERIMENTAL EVALUATION
Our experiments are conducted on a compute server with two Intel Xeon Silver 4114 2.2G processors and 1.5 TB RAM, running Debian 11 Linux. All algorithms are forced to run in a single thread with the same settings for a fair comparison. We only consider algorithms for which stable Cython or Python implementations are freely available. Our SNN algorithm is implemented in native Python (i.e., no Cython is used), while scikit-learn's (Pedregosa et al., 2011) k-d tree and balltree NN algorithms, and hence also scikit-learn's DBSCAN method, use Cython for some parts of the computation. Numerical values are reported to four significant digits. The code and data to reproduce the experiments in this paper can be downloaded from https://github.com/nla-group/snn.

Near neighbor query on synthetic data
We first compare k-d tree, balltree, and SNN on synthetically generated data to study their dependence on the data size n and the data dimension d. We also include two brute force methods: the one in scikit-learn (Pedregosa et al., 2011) (denoted as brute force 1) and another implemented by us (denoted as brute force 2) which exploits level-2 BLAS. (One might say that brute force 2 is equivalent to SNN without index construction and without search space pruning.) The leaf size for scikit-learn's k-d tree and balltree is kept at the default value of 40. The n data points are obtained by sampling from the uniform distribution on [0, 1]^d.
Table 1. The table shows the ratio of returned data points from the synthetic uniformly distributed dataset, relative to the overall number of points n, as the query radius R and the dimension d are varied. The ratios confirm that our parameter choices lead to queries over a wide order-of-magnitude variation of query return sizes.

For the first test we vary the number of data points n (the index size) from 2,000 to 20,000 in increments of 2,000. The number of features is either d = 2 or d = 50. We then query the nearest neighbors of each data point for varying radius R. The ratio of returned data points relative to the overall number of points is listed in Table 1. As expected, this ratio is approximately independent of n. We have chosen the radii R so that a good order-of-magnitude variation in the ratio is obtained, in order to simulate queries with small to large returns. The timings of the index and query phases of the various NN algorithms are shown in Figure 2 (left). Note that the brute force methods do not require the construction of an index. Among k-d tree, balltree, and SNN, our method has the shortest indexing phase. The query time is obtained as an average over all queries, over the two considered dimensions d ∈ {2, 50}, and over all considered radii R. SNN performs best, with the average query time being between 5 and 9.7 times faster than balltree (the fastest tree-based method). We have verified that for all methods, when run on the same datasets with the same radius parameter, the sets of returned points coincide.
For the second test we fix the number of data points at n = 10,000 and vary the dimension d = 2, 32, …, 272. We perform queries for five selected radii as shown in Table 1. The table confirms that we have a wide variation in the number of returned data points relative to the overall number of points n, ranging from empty returns to returning all data points. The indexing and query timings are shown in Figure 2 (right). Again, among k-d tree, balltree, and SNN, our method has the shortest indexing phase. The query time is obtained as an average over all n query points and over all considered radii R. SNN performs best, with the average query time being between 3.5 and 6 times faster than balltree (the fastest tree-based method).

Comparison with GriSPy
GriSPy (Chalela et al., 2021), perhaps the most recent work on fixed-radius NN search, is an exact search algorithm which claims to be superior to the tree-based algorithms in SciPy. GriSPy indexes the data points into regular grids and creates a hash table in which the keys are the cell coordinates and the values are lists containing the indices of the points within the corresponding cell. As there is an open-source implementation available, we can easily compare GriSPy against SNN. However, GriSPy has a rather high memory demand, which forced us to perform separate experiments with reduced data sizes and dimensions compared to the ones in the previous Section 6.1.
Again we consider n uniformly distributed data points in [0, 1]^d, but now with (i) varying data size from n = 1,000 to 100,000, averaging the runtime of five different radius queries with R = 0.05, 0.1, …, 0.25, and (ii) varying dimension over d = 2, 3, 4. The precise parameters and the corresponding ratios of returned data points are listed in Table 2. All queries are repeated 1,000 times and the timings are averaged. Both experiments (i) and (ii) use the same query size as the index size.
The index and query timings are illustrated in Figure 3. We find that SNN indexing is about an order of magnitude faster than GriSPy over all tested parameters. For experiment (i), where the data size is varied, we find that SNN is up to two orders of magnitude faster than GriSPy. For experiment (ii), we see that the SNN query time is more stable than GriSPy's with respect to increasing data dimension.

Near neighbor query on real-world data
We now compare various fixed-radius NN search methods on datasets from the benchmark collection by Aumüller et al. (2020): Fashion-MNIST (abbreviated as F-MNIST), SIFT, GIST, GloVe100, and DEEP1B. Each dataset has an index set of n points and a separate out-of-sample query set with n′ < n points. See Table 3 for a summary of the data.
Table 4 lists the timings for the index construction of the tree-based methods and SNN. For all datasets, SNN is at least 5.9 times faster than balltree (the fastest tree-based method). Significant speedups are gained in particular for large datasets: for the largest dataset, DEEP1B, SNN creates its index more than 32 times faster than balltree.
The query times averaged over all n′ points from the query set are listed in Table 5. We have included tests over different radii R in order to obtain a good order-of-magnitude variation in the number of returned nearest neighbors relative to the index size n, assessing the algorithms over a range of possible scenarios from small to large query returns. See the return ratios υ listed in Table 5. Again, in all cases, SNN consistently performs the fastest queries over all datasets and radii. SNN is between about 6 and 14 times faster than balltree (the fastest tree-based method). For the datasets GloVe100 and DEEP1B, SNN displays the lowest speedup of about 1.6 compared to our brute force 2 implementation, indicating that for these datasets the sorting-based exclusion criterion does not significantly prune the search space. (These are the datasets for which the angular distance is used, i.e., all data points are projected onto the unit sphere.) For the other datasets, SNN achieves significant speedups between 2.6 and 5.6 compared to brute force 2, owing to effective search space pruning.

An application to clustering
We now wish to demonstrate the performance gains that can be obtained with SNN, using the DBSCAN clustering method (Campello et al., 2015; Jang and Jiang, 2019) as an example. To this end we replace the nearest neighbor search method in scikit-learn's DBSCAN implementation with SNN. To ensure that all variants perform exactly the same NN queries, we rewrite all batch NN queries into loops of single queries and force all computations to run in a single thread. Apart from these modifications, DBSCAN remains unchanged and in all cases returns exactly the same result when called on the same data with the same hyperparameters (eps and min_samples).
We select datasets from the UCI Machine Learning Repository (Dua and Graff, 2017); see Table 6. All datasets are pre-processed by z-score standardization (i.e., we shift each feature to zero mean and scale it to unit variance). We cluster the data for various choices of DBSCAN's eps parameter and list the measured total runtime in Table 7. The parameter eps has the same interpretation as SNN's radius parameter R. In all cases, we have fixed DBSCAN's second hyperparameter min_samples at 5. The normalized mutual information (NMI) (Cover and Thomas, 2006) of the obtained clusterings is also listed in Table 7.
The runtimes in Table 7 show that DBSCAN with SNN is a very promising combination, consistently outperforming the other combinations. When compared to the non-batched and non-parallelized DBSCAN with balltree, DBSCAN with SNN performs between 3.5 and 16 times faster while returning precisely the same clustering results.
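To illustrate how a radius-query routine slots into DBSCAN, here is a deliberately minimal, non-optimized DBSCAN sketch with a pluggable `radius_query` function. This is our own illustrative code, not scikit-learn's implementation; a brute force query stands in for SNN:

```python
import numpy as np

def dbscan(n, eps, min_samples, radius_query):
    """Minimal DBSCAN over n points; radius_query(i) returns the indices of
    all points within distance eps of point i (where SNN would plug in)."""
    labels = np.full(n, -1)                  # -1 marks noise
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = list(radius_query(i))
        if len(seeds) < min_samples:
            continue                          # noise (may later become border)
        labels[i] = cluster
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster           # border or core point joins
            if not visited[j]:
                visited[j] = True
                neigh = radius_query(j)
                if len(neigh) >= min_samples:
                    seeds.extend(neigh)       # j is a core point, expand
        cluster += 1
    return labels

# Two well-separated Gaussian blobs; brute force stands in for SNN.
rng = np.random.default_rng(5)
P = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(3, 0.2, (50, 2))])
query = lambda i: np.where(np.linalg.norm(P - P[i], axis=1) <= 0.5)[0]
labels = dbscan(len(P), eps=0.5, min_samples=5, radius_query=query)
```

Since DBSCAN issues one radius query per visited point, the overall runtime is dominated by the query routine, which is why swapping in a faster exact method pays off directly.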

CONCLUSIONS
We presented a fixed-radius nearest neighbor (NN) search method called SNN. Compared to other exact NN search methods based on k-d tree or balltree data structures, SNN is trivial to implement and exhibits faster index and query times. We also demonstrated that SNN outperforms different implementations of brute force search. Just like brute force search, SNN requires no parameter tuning and is straightforward to use. We believe that SNN could become a valuable tool in applications such as the MultiDark Simulation (Klypin et al., 2016) or the Millennium Simulation (Boylan-Kolchin et al., 2009). We also demonstrated that SNN can lead to significant performance gains when used for nearest neighbor search within the DBSCAN clustering method.
While we have demonstrated SNN speedups in single-threaded computations on a CPU, we believe that the method's reliance on high-level BLAS operations makes it suitable for parallel GPU computations.
A careful CUDA implementation of SNN and extensive testing will be subject of future work.

Figure 1 .
Figure 1. Query with radius R. The data points in the shaded band have their first principal coordinate within a distance R from the first principal coordinate of the query point, and hence are NN candidates. All data points are sorted so that all candidates have contiguous indices.

Figure 2 .
Figure 2. Comparing SNN to brute force search and tree-based methods. Total index time (top) and average query time (bottom) for the synthetic uniformly distributed dataset, all in seconds, as the data size n is varied (left) or the dimension d is varied (right). Brute force query methods do not require an index construction, hence are omitted on the left. Our SNN method is the best performer in all cases, in some cases 10 times faster than the best tree-based method (balltree).

Table 2 .
The table shows the ratio of returned data points from the synthetic uniformly distributed dataset, relative to the overall number of points n, as the data size n, query radius R, and dimension d are varied.

Table 3 .
Summary of the real-world datasets

Table 4 .
Index time in milliseconds for fixed-radius NN search on the real-world datasets (rounded to four significant digits). Lower is better and the best values are highlighted in bold.

Table 5 .
Query time per data point in milliseconds for real-world data, averaged over n′ out-of-sample queries. The search radius is R and υ is the average ratio of returned data points relative to the overall number of data points n. Lower is better and the best values are highlighted in bold.
Figure 3. Comparing GriSPy and SNN. Total index time (top) and average query time (bottom) on uniformly distributed data, all in seconds, as the data size n is varied (left) or the dimension d is varied (right). Our SNN method significantly outperforms GriSPy both in terms of indexing and query runtime.

Table 6 .
Clustering datasets in the UCI Machine Learning Repository

Table 7 .
Total DBSCAN runtime in milliseconds when different NN search algorithms are used. The DBSCAN radius parameter is eps and the achieved normalized mutual information is NMI. Best runtimes are highlighted in bold.