Methods to integrate multinormals and compute classification measures

Univariate and multivariate normal probability distributions are widely used when modeling decisions under uncertainty. Computing the performance of such models requires integrating these distributions over specific domains, which can vary widely across models. Apart from some special cases, there exist no general analytical expressions, standard numerical methods or software for these integrals. Here we present mathematical results and open-source software that provide (i) the probability in any domain of a normal in any number of dimensions with any parameters, (ii) the probability density, cumulative distribution, and inverse cumulative distribution of any function of a normal vector, (iii) the classification errors among any number of normal distributions, the Bayes-optimal discriminability index and its relation to the operating characteristic, (iv) ways to scale the discriminability of two distributions, (v) dimension reduction and visualizations for such problems, and (vi) tests for how reliably these methods may be used on given data. We demonstrate these tools with vision research applications of detecting occluding objects in natural scenes, and detecting camouflage.


Introduction
The univariate or multivariate normal (henceforth called simply 'normal') is arguably the most important and widely-used probability distribution. It is frequently used because various central-limit theorems guarantee that normal distributions will occur commonly in nature, and because it is the simplest and most tractable distribution that allows arbitrary correlations between the variables.
Normal distributions form the basis of many theories and models in the natural and social sciences. For example, they are the foundation of Bayesian statistical decision/classification theories using Gaussian discriminant analysis,2 and are widely applied in diverse fields such as vision science, neuroscience, probabilistic planning in robotics, psychology, and economics. These theories specify optimal performance under uncertainty, and are often used to provide a benchmark against which to evaluate the performance (behavior) of humans, other animals, neural circuits or algorithms. They also serve as a starting point in developing other models/theories that describe sub-optimal performance of agents.
* abhranil.das@utexas.edu
To compute the performance predicted by such theories it is necessary to integrate the normal distributions over specific domains. For example, a particularly common task in vision science is classification into two categories (e.g. detection and discrimination tasks). The predicted maximum accuracy in such tasks is determined by integrating normals over domains defined by a quadratic decision boundary.3,4 Predicted accuracy of some of the possible sub-optimal models is determined by integrating over other domains.
Except for some special cases5-7 (e.g. multinormal probabilities in rectangular domains, or flat domains such as when two normals have equal covariance), there exists no general analytical expression for these integrals, and we must use numerical methods, such as integrating over a grid, or Monte-Carlo integration.
Since the normal distribution tails off infinitely outwards, it is inefficient to numerically integrate it over a finite uniform Cartesian grid, which would be large and collect ever-reducing masses outwards, yet omit some mass wherever the grid ends. Also, if the normal is elongated by unequal variances and strong covariances, or the integration domain is complex and non-contiguous, naïve integration grids will waste resources in regions or directions that have low density or are outside the domain. One then needs to visually inspect and arduously hand-tailor the integration grid to fit the shape of each separate problem.
Monte-Carlo integration involves sampling from the multinormal, then counting the fraction of samples in the integration domain. This does not have the above inefficiencies or problems, but has other issues. Unlike grid integration, the desired precision cannot be specified, but must be determined by measuring the spread across multiple runs. Also, when the probability in the integration domain is very small (e.g. to compute the classification error rate or discriminability d′ for highly-separated normal distributions), it cannot be reliably sampled without a large number of samples, which costs resources and time (see the performance benchmark section for a comparison).
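The sampling problem for small probabilities can be seen in a few lines of Python (a stdlib-only sketch; the function names here are ours, not the toolbox's):

```python
import math, random

def norm_tail(c):
    """Exact upper-tail probability p(x > c) for a standard normal."""
    return 0.5 * math.erfc(c / math.sqrt(2))

def mc_tail(c, n, seed=0):
    """Monte-Carlo estimate of the same tail probability from n samples."""
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) > c for _ in range(n)) / n

# A moderate tail is estimated well...
print(norm_tail(1), mc_tail(1, 100_000))   # both near 0.159
# ...but a far tail (as between highly-separated classes) gets few or no hits:
print(norm_tail(5), mc_tail(5, 100_000))   # exact ~2.9e-7; estimate is typically 0
```

With 10^5 samples the expected number of hits beyond c = 5 is about 0.03, so the estimate is almost always exactly zero, regardless of how long the run is repeated.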
Thus, there is no single analytical expression, numerical method or standard software tool to quickly and accurately integrate arbitrary normals over arbitrary domains, or to compute classification errors and the discriminability index d′. Evaluating these quantities is often simplified by making the limiting assumption of equal variance. This impedes the quick testing, comparison, and optimization of models. Here we describe a mathematical method and accompanying software implementation which provides functions to (i) integrate normals with arbitrary means and covariances in any number of dimensions over arbitrary domains, (ii) compute the pdf, cdf and inverse cdf of any function of a multinormal variable (normal vector), and (iii) compute the performance of classifying amongst any number of normals. This software is available as a Matlab toolbox 'Integrate and classify normal distributions', and the source code is at github.com/abhranildas/IntClassNorm.
We first review and assimilate previous mathematical results into a generalized chi-square method that can integrate arbitrary normals over quadratic domains. Then we present our ray-trace method to integrate arbitrary normals over any domain, and consequently to compute the distribution of any real-valued function of a normal vector. We describe how these results can be used to compute error rates (and other relevant quantities) for Bayes-optimal and custom classifiers, given arbitrary priors and outcome cost matrix. We then present some methods to reduce problems to fewer dimensions, for analysis or visualization. Next, we provide a way to test whether directly measured samples from the actual distributions in a classification problem are close enough to normal to trust the computations from the toolbox. After describing the methods and software toolbox with examples, we demonstrate their accuracy and speed across a variety of problems. We show that for quadratic-domain problems both the generalized chi-square method and the ray-trace method are accurate, but vary in relative speed depending on the particular problem. Of course, for domains that are not quadratic, only the ray-trace method applies. Finally, we illustrate the methods with two applications from our laboratory: modeling detection of occluding targets in natural scenes, and detecting camouflage.
Integrating the normal

In quadratic domains: the generalized chi-square method

Integrating the normal in quadratic domains is important for computing the maximum possible classification accuracy. The problem is the following: given a column vector x ∼ N(µ, Σ), find the probability that

q(x) = x′Q₂x + q₁′x + q₀ > 0.

(Here and henceforth, bold upper-case symbols represent matrices, bold lower-case symbols represent vectors, and regular lower-case symbols represent scalars.) This can be viewed as the multi-dimensional integral of the normal probability over the domain q(x) > 0 (which we call the 'normal probability view'), or the single-dimensional integral of the probability of the scalar quadratic function q(x) of a normal vector, above 0 (the 'function probability view').
Note that x = Sz + µ, where z is standard multinormal, and the symmetric square root S = Σ^(1/2) may be regarded as the multi-dimensional sd, since it linearly scales the normal (like σ in 1d), and its eigenvectors and eigenvalues are the axes of the 1 sd error ellipsoid. We first invert this transform to standardize the normal: z = S⁻¹(x − µ). This decorrelates or 'whitens' the variables, and transforms the integration domain to a different quadratic:

q(z) = z′Q̃₂z + q̃₁′z + q̃₀ > 0, where Q̃₂ = SQ₂S, q̃₁ = S(2Q₂µ + q₁), q̃₀ = q(µ).

Now the problem is to find the probability of the standard normal z in this domain. If there is no quadratic term Q̃₂, q(z) is normally distributed, the integration domain boundary is a flat, and the probability is Φ(q̃₀/∥q̃₁∥), where Φ is the standard normal cdf.5 Otherwise, say Q̃₂ = RDR′ is its eigen-decomposition, where R is orthogonal, i.e. a rotoreflection. So y = R′z is also standard normal, and in this space the quadratic is (with i and i′ indexing the nonzero and zero eigenvalues of D):

q(y) = Σᵢ Dᵢ(yᵢ + bᵢ)² + Σᵢ′ b′ᵢ′ yᵢ′ + c, with bᵢ = (R′q̃₁)ᵢ/(2Dᵢ), b′ᵢ′ = (R′q̃₁)ᵢ′, c = q̃₀ − Σᵢ Dᵢbᵢ²,

a weighted sum of noncentral chi-square variables χ′², each with 1 degree of freedom, plus a normal variable. So this is a generalized chi-square variable χ̃_{w,k,λ,s,m}, where we merge the noncentral chi-squares with the same weights: the vector of weights w holds the unique nonzero eigenvalues among the Dᵢ, the degrees of freedom k are the numbers of times those eigenvalues occur, the non-centralities are λⱼ = Σ_{i: Dᵢ = wⱼ} bᵢ², and the normal parameters are s = ∥b′∥ and m = c. In a later paper,8 we derive the inverse map, i.e. from the parameters of a generalized chi-square distribution to the parameters of the corresponding quadratic form of a multinormal.
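For the 1d special case the same reduction can be written out directly by completing the square; a stdlib-Python sketch (illustrative only, not the toolbox implementation; the function name is ours):

```python
import math, random

def gx2_params_1d(mu, sigma, a, b, c):
    """For x ~ N(mu, sigma^2) and q(x) = a x^2 + b x + c (a != 0), return
    (w, lam, m) such that q(x) is distributed as w * chi'^2_1(lam) + m:
    a weighted noncentral chi-square (1 dof, noncentrality lam) plus a constant."""
    w = a * sigma**2                        # weight: quadratic coefficient in z
    shift = (2*a*mu + b) / (2*a*sigma)      # vertex offset after standardizing
    lam = shift**2                          # noncentrality
    m = (a*mu**2 + b*mu + c) - w*lam        # q at the mean, minus the square term
    return w, lam, m

# check: q(x) and the parametrized form agree sample by sample
mu, sigma, a, b, c = 1.0, 2.0, 0.5, 1.0, 3.0
w, lam, m = gx2_params_1d(mu, sigma, a, b, c)
rng = random.Random(1)
zs = [rng.gauss(0, 1) for _ in range(10_000)]
direct = [a*(sigma*z + mu)**2 + b*(sigma*z + mu) + c for z in zs]
param  = [w*(z + math.sqrt(lam))**2 + m for z in zs]
print(max(abs(d - p) for d, p in zip(direct, param)))   # ~0 here
```

Here the vertex offset happens to be positive, so the two expressions agree point by point; in general the match is only distributional (z and −z are equally likely under the standard normal).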
The required probability, p(χ̃ > 0), is now a 1d integral, computable using, say, Ruben's,9 Imhof's,10,11 IFFT8 or ray8 methods. We use the Matlab toolbox 'Generalized chi-square distribution' that we developed (source code is at github.com/abhranildas/gx2), which can compute the generalized chi-square parameters corresponding to a quadratic form of a normal vector, its statistics, cdf (using three different methods), pdf, inverse cdf and random numbers.
Previous software implements specific forms of this theory for particular quadratics such as ellipsoids.7 The method described here correctly handles all quadratics (ellipsoids, hyperboloids, paraboloids and degenerate conics) in all dimensions.

In any domain: the ray-trace method
We present below our method to integrate the normal distribution in an arbitrary domain, which takes an entirely different approach than the generalized chi-square method. Genz and Bretz6 outlined this approach and mentioned that it may be used for general domains, but the method they explicitly developed was for (hyper-)rectangular domains only. Here we explicitly develop the method in detail for general domains. A broad overview is as follows. We first standardize the normal to make it spherically symmetric, then we integrate it in spherical polar coordinates, outwards from the center. We first calculate the radial integral by sending 'rays' from the center to trace out the integration domain in every direction, i.e. determine the points where each ray crosses into and out of the domain (akin to the computer-graphics method of ray-tracing, which traces light rays outward from the projection center to compute where they hit the different objects to be rendered). Knowing these crossing points, we then calculate the probability amount on each ray. Then we add up these probabilities over all the angles. This way of breaking up the problem produces fast and accurate results to arbitrary tolerance for all problem shapes, without needing any manual adjustment.

Standard polar form
The problem is to find the probability that f(x) > 0, where f(x) is a sufficiently general function (with a finite number of zeros in any direction within the integration span around the normal mean, i.e. without rare pathologies such as the Dirichlet function, which has infinitely many zeros in any interval). As before, we first standardize the space to obtain f(z) = f(Sz + µ). Then we switch to polar axis-angle coordinates z and n: any point z = zn, where the unit vector n denotes the angle of that point, and z is its coordinate along the axis in this direction. Then the integral can be written as:

p = ∫ dn ∫_{Ωₙ} (2π)^(−k/2) e^(−z²/2) |z|^(k−1) dz,

where Ω is the domain where f(z) > 0, Ωₙ is its slice along the axis n, i.e. the intervals along the axis where the axial domain function fₙ(z) = f(zn) > 0, and dn ranges over half the angular space (each axis covers two opposite directions). This may be called the 'standard polar form' of the integral. dn is the differential angle element (dθ in 2d, sin θ dθ dϕ in 3d, sin θ sin² ψ dθ dϕ dψ in 4d, etc.).

Integration domain on a ray
First let us consider the axial integration along direction n. Imagine that we 'trace' the integration domain with an axis through the origin in this direction (a bi-directional 'ray', marked by the arrow in fig. 1a), i.e. determine the part of this ray axis that is in the integration domain, defined by fₙ(z) > 0. For example, if the integration domain is a quadratic such as eq. 2, its 1d trace by the ray is:

qₙ(z) = (n′Q̃₂n) z² + (q̃₁′n) z + q̃₀ > 0.

This is a scalar quadratic domain in z that varies with the direction. Fig. 1b is an example of such a domain. The ray domain function fₙ crosses 0 at z₁ and z₂, and the integration domain is below z₁ (which is negative), and above z₂.
Note that a sufficient description of such domains on an axis is to specify all the points at which the domain function crosses zero, and its overall sign, which determines which regions are within and which are outside the domain (so any overall scaling of the domain function does not matter). That is, we specify whether or not the beginning of the ray (at −∞) is inside the domain, and all the points at which the ray crosses the domain boundary. We denote the first by the initial sign ψ(n) = sign(fₙ(−∞)) = 1/−1/0 if the ray begins inside/outside/grazing the integration domain. For a quadratic domain qₙ(z) = az² + bz + c, for example, the function at −∞ is dominated by its leading non-zero coefficient, so ψ(n) is the sign of the first non-zero element of the sequence a, −b, c. The crossing points are the zeros zᵢ(n) of fₙ(z) = f(zSn + µ) (the zᵢn are then the boundary points in the full space). For a quadratic domain qₙ(z), these are simply its roots. For a general domain, the zeros are harder to compute. Chebyshev polynomial approximations12 aim to find all zeros of a general function, but can be slow. Other numerical algorithms can find all function zeros in an interval to arbitrary accuracy. We use such an algorithm to find the zeros of fₙ(z) within (−m, m). This amounts to ray-tracing f(x) within a Mahalanobis distance m of the normal mean. The maximum error in the integral due to this approximation is the standard multinormal probability beyond a radius of m, which in k dimensions is F̄_χₖ(m), the complementary cdf of the chi distribution with k degrees of freedom.
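The trace of a scalar quadratic domain along a ray can be sketched in a few lines of Python (a hypothetical helper, not the toolbox code):

```python
import math

def raytrace_quadratic(a, b, c):
    """Trace the 1d quadratic domain a z^2 + b z + c > 0 along a ray:
    return (psi, roots), where psi is the sign of the function at z -> -inf
    and roots are the crossing points, sorted."""
    if a != 0:
        psi = 1 if a > 0 else -1                 # leading term dominates at -inf
        disc = b*b - 4*a*c
        if disc <= 0:
            return psi, []                       # no crossings (tangency ignored)
        r = math.sqrt(disc)
        return psi, sorted([(-b - r)/(2*a), (-b + r)/(2*a)])
    if b != 0:
        return (1 if b < 0 else -1), [-c/b]      # linear: sign at -inf is -sign(b)
    return (1 if c > 0 else -1 if c < 0 else 0), []  # constant function

# domain z^2 - 1 > 0: inside at -inf, crosses at -1 and +1
print(raytrace_quadratic(1, 0, -1))   # (1, [-1.0, 1.0])
```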
In fig. 1, the initial sign along the ray is 1, and z 1 and z 2 are the crossing points.
Most generally, this method can integrate in any domain for which we can return its 'trace' (i.e. the initial sign and crossing points) along any ray n through any origin o. So if a domain is already supplied in the form of these 'ray-trace' functions ψ(o, n) and zᵢ(o, n), our method can readily integrate over it. For example, the ray-trace function of the domain y > k in 2d returns ψ = −sign(nᵧ) and z = (k − oᵧ)/nᵧ. When supplied with quadratic domain coefficients, or a general implicit domain f(x) > 0, the toolbox ray-traces it automatically under the hood. For an implicit domain, the numerical root-finding works only in a finite interval, and is slower and may introduce small errors. So, if possible, a slightly faster and more accurate alternative to the implicit domain format is to directly construct its ray-trace function by hand.

Standard normal distribution on a ray
In order to integrate over piecewise intervals of z such as fig. 1b, we shall first calculate the semi-infinite integral up to some z, then stitch these together over the intervals with the right signs.
Consider the probability in the angular slice dn below some negative z such as z₁ in fig. 1a. Note that the probability of a standard normal beyond some radius is given by the chi distribution. If Ωₖ is the total angle in k dimensions (2 in 1d, 2π in 2d, 4π in 3d, 2π² in 4d), and F_χₖ is the cdf of the chi distribution with k degrees of freedom, then the probability in the semi-infinite cone dn beyond radius r is [1 − F_χₖ(r)] dn/Ωₖ. So the probability in the angular slice dn below a negative z is [1 − F_χₖ(|z|)] dn/Ωₖ. Now, for the probability in the angular slice below a positive z (such as z₂), we need to add two probabilities: that in the finite cone from the origin to the point, which is F_χₖ(z) dn/Ωₖ, and that in the entire semi-infinite cone on the negative side, which is dn/Ωₖ, to obtain [1 + F_χₖ(z)] dn/Ωₖ. Thus, the probability in an angular slice dn below a positive or negative z is

[1 + sign(z) F_χₖ(|z|)] dn/Ωₖ.

We normalize this by the total probability in the angular slice, 2 dn/Ωₖ, to define the distribution of the standard normal along a ray:

Φ_ray_k(z) = [1 + sign(z) F_χₖ(|z|)]/2.

Its density is found by differentiating: φ_ray_k(z) = f_χₖ(|z|)/2, so it is simply the chi distribution symmetrically extended to negative numbers. Notice that φ_ray_1(z) = φ(z), but in higher dimensions it rises, then falls outward (fig. 1b), due to the opposing effects of the density falling but the volume of the angular slice growing outward. Since Matlab does not yet incorporate the chi distribution, we instead define it in terms of the chi-square distribution: F_χₖ(z) = F_χ²ₖ(z²).
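In low dimensions the chi cdf, and hence the ray distribution, has simple closed forms; a stdlib-Python sketch (the helper names are ours):

```python
import math

def chi_cdf(z, k):
    """cdf of the chi distribution, closed forms for k = 1, 2, 3."""
    if z <= 0:
        return 0.0
    if k == 1:
        return math.erf(z / math.sqrt(2))
    if k == 2:
        return 1 - math.exp(-z*z/2)
    if k == 3:
        return math.erf(z/math.sqrt(2)) - math.sqrt(2/math.pi) * z * math.exp(-z*z/2)
    raise ValueError("only k = 1, 2, 3 in this sketch")

def Phi_ray(z, k):
    """cdf of the standard normal along a bidirectional ray in k dimensions."""
    return (1 + math.copysign(chi_cdf(abs(z), k), z)) / 2

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(Phi_ray(0.7, 1), Phi(0.7))   # equal: in 1d the ray cdf is the normal cdf
```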

Probability in an angular slice
We can now write the total probability in the angular slice of fig. 1 as the sum of terms accounting for the initial sign and each root. The total volume fraction of the double cone is 2 dn/Ωₖ. First consider only the initial sign and no roots. If the ray starts inside the domain (ψ = 1), it stays inside, and the probability content is 2 dn/Ωₖ. If it begins and stays outside (ψ = −1), it is 0. And if it grazes the domain throughout (ψ = 0), half of the angular volume is inside the domain and half is outside, so the probability is dn/Ωₖ. So without accounting for roots, the probability in general is (1 + ψ) dn/Ωₖ. To this we add, sequentially for each root, the probability from the root to ∞, signed according to whether we are entering or exiting the domain at that root. So we have, for fig. 1, the slice probability [2Φ_ray_k(z₁) + 2Φ̄_ray_k(z₂)] dn/Ωₖ, where Φ̄_ray_k = 1 − Φ_ray_k. The sign of the first root term is always opposite to ψ, and subsequent signs alternate as we enter and leave the domain. In general then, we can write the probability in the slice as α(n) dn/Ωₖ, with

α(n) = 1 + ψ(n) + 2ψ(n) Σᵢ (−1)ⁱ Φ̄_ray_k(zᵢ(n)).

The total probability ∫ α(n) dn/Ωₖ can be computed, for up to 4d, by numerically integrating α(n) over a grid of angles spanning half the angular space (since we account for both directions of a ray), using any standard scheme. An adaptive grid can match the shape of the integration boundary (finer grid at angles where the boundary is sharply changing), and also set its fineness to evaluate the integral to a desired absolute or relative precision. Fig. 2a, top, illustrates integrating a trivariate normal with arbitrary covariance in an implicitly-defined toroidal domain.

Beyond 4d (or even in general), we can use Monte Carlo integration over the angles. We draw a sample of random numbers from the standard multinormal in those dimensions, then normalize their magnitudes, to get a uniform random sample of rays n, over which the expectation ⟨α(n)⟩/2 is the probability estimate. Fig. 2b shows the computation of the 4d standard normal probability in an implicitly-defined domain f_p(x) > 0. Since the algorithm already computes the boundary points over its angular integration grid, they may be stored for plotting and inspecting the boundary. Rather than an adaptive integration grid though, boundaries are often best visualized over a uniform grid (uniform array of angles in 2d, or a Fibonacci sphere in 3d13), which we can explicitly supply for this purpose.

[Fig. 2 caption fragments: e. γ is an optimized boundary between the samples; the three error rates are of the normals with l, of the samples with l, and of the samples with γ. f. Classifying several 2d normals with arbitrary means and covariances. g. Top: 1d projection of a 4d normal integral over a quadratic domain q(x) > 0. Bottom: projection of the classification of two 4d normals based on samples, with unequal priors and unequal outcome values (correctly classifying the blue class is valued 4 times the red, hence the optimal criterion is shifted), onto the axis of the Bayes decision variable β; histograms and smooth curves are the projections of the samples and the fitted normals; the sample-optimized boundary γ = 0 cannot be uniquely projected to this β axis. h. Classification based on four 4d non-normal samples, with different priors and outcome values, projected on the axis along (1,1,1,1); the boundaries cannot be projected to this axis.]
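Putting the pieces together, a stdlib-Python sketch of the 2d case checks the angular-grid integration against a half-plane domain, whose probability is known exactly (all names are ours; the toolbox's Matlab implementation is more general):

```python
import math

def chi_cdf_2d(z):                 # chi cdf with 2 dof: closed form
    return 1 - math.exp(-z*z/2) if z > 0 else 0.0

def Phi_ray(z):                    # ray cdf of the standard 2d normal
    return (1 + math.copysign(chi_cdf_2d(abs(z)), z)) / 2

def alpha(psi, roots):
    """Probability weight of one bidirectional ray, from its trace."""
    a = 1 + psi
    sign = -psi                    # first root term opposes psi, then alternates
    for z in roots:
        a += 2 * sign * (1 - Phi_ray(z))
        sign = -sign
    return a

def normal_prob_2d(raytrace, n_angles=2000):
    """Standard 2d normal probability of a domain given by its ray-trace
    function raytrace(theta) -> (psi, sorted roots), via a midpoint rule
    over half the angular space."""
    step = math.pi / n_angles
    return sum(alpha(*raytrace((j + 0.5) * step))
               for j in range(n_angles)) * step / (2 * math.pi)

# half-plane x > c, ray-traced by hand; the exact answer is Phi(-c)
c = 0.5
def halfplane(theta):
    nx = math.cos(theta)
    if abs(nx) < 1e-12:
        return -1, []              # ray parallel to the boundary, outside (c > 0)
    return (-1 if nx > 0 else 1), [c / nx]

exact = 0.5 * (1 + math.erf(-c / math.sqrt(2)))
print(normal_prob_2d(halfplane), exact)   # both ~0.3085
```

The agreement to several decimal places with only a modest angle grid illustrates why this decomposition needs no hand-tailoring to the problem shape.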

Set operations on domains
Some applications require more complex integration or classification domains built using set operations (inversion/union/intersection) on simpler domains. With implicit domain formats this is easy. For example, if f_A(x) > 0 and f_B(x) > 0 define two domains A and B, then A^c, A ∪ B, and A ∩ B are the domains −f_A(x) > 0, max(f_A(x), f_B(x)) > 0, and min(f_A(x), f_B(x)) > 0 respectively. Fig. 2a, bottom, illustrates integrating a 2d normal in a domain built by the union of two circles.
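In code, these set operations are just pointwise negation, max and min of the implicit functions (a Python sketch with hypothetical example domains):

```python
# implicit domains compose by pointwise negation / max / min
fA = lambda x, y: 2 - ((x - 1)**2 + y**2)     # disk of radius sqrt(2) at (1, 0)
fB = lambda x, y: 2 - ((x + 1)**2 + y**2)     # disk of radius sqrt(2) at (-1, 0)

complementA  = lambda x, y: -fA(x, y)                 # A^c
union        = lambda x, y: max(fA(x, y), fB(x, y))   # A u B
intersection = lambda x, y: min(fA(x, y), fB(x, y))   # A n B

# (1, 0) is in A only; the origin is in both disks
print(union(1, 0) > 0, intersection(1, 0) > 0, intersection(0, 0) > 0)
# True False True
```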
As we noted before, computations are faster and more accurate when domains are supplied in explicit ray-trace form, than as implicit functions.The toolbox provides functions to convert quadratic and general implicit domains to ray-trace format, and functions to use set operations on these to build complex ray-trace domains.For example, when a domain is inverted, only the initial sign of a ray through it flips, and for the intersection of several domains, the initial sign of a ray is the minimum of its individual initial signs, and the roots are found by collecting those roots of each domain where every other domain is positive.

Probabilities of functions of a normal vector
We previously mentioned the equivalent 'normal probability' and 'function probability' views of conceptualizing a normal integral. So far we have mostly used the normal probability view, seeing scalar functions f(x) as defining integration domains of the normal x. But in the function probability view, f(x) is instead seen as a mapping from the multi-dimensional variable x to a scalar, which in some contexts can be considered a decision variable. Hence, integrating the normal in the multi-dimensional domain f(x) > 0 corresponds to integrating the 1d pdf of the decision variable f(x) beyond 0. It is sometimes helpful to plot this 1d pdf, especially when there are too many dimensions of x to visualize the normal probability view.
Conversely, given any scalar function f(x) of a normal, its cdf, F_f(c) = p(f(x) < c), is computed as the normal probability in the domain c − f(x) > 0. From this, we can obtain the pdf by numerically differentiating the cdf. In a later paper,8 we also analytically differentiate the ray-tracing method to directly compute the pdf of a scalar function of a normal vector. (If it is a quadratic function, its generalized chi-square pdf can also be computed by convolving the constituent noncentral chi-square pdf's.) Figs. 2a, c and g show 1d pdf's of functions computed in this way. Also, inverting the function cdf using a numerical root-finding method gives us its inverse cdf.
With these methods to obtain the pdf, cdf and inverse cdf of functions of a normal vector, we can conveniently compute certain quantities. For example, if x and y are jointly normal with µₓ = 1, µᵧ = 2, σₓ = .1, σᵧ = .2, and ρₓᵧ = .8, we can compute the pdf, cdf and inverse cdf of the function x^y, and determine, say, that its mean, median and sd are respectively 1.03, 1 and 0.21.
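A crude Monte-Carlo check of this example can be run in Python (the toolbox instead uses the ray-trace method; sample size and seed here are arbitrary):

```python
import math, random, statistics

# Monte-Carlo moments of f(x, y) = x**y for jointly normal x and y
mux, muy, sx, sy, rho = 1.0, 2.0, 0.1, 0.2, 0.8
rng = random.Random(7)
samples = []
for _ in range(50_000):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x = mux + sx * z1
    y = muy + sy * (rho * z1 + math.sqrt(1 - rho**2) * z2)   # correlated pair
    samples.append(x ** y)

print(statistics.mean(samples),    # ~1.03
      statistics.median(samples),  # ~1.00
      statistics.stdev(samples))   # ~0.21
```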
The probability of a vector (multi-valued) function of the normal, e.g. f(x) = (f₁(x), f₂(x)), in some f-domain (which may also be seen as the joint probability of two scalar functions), is again the normal probability in a corresponding x-domain. For example, the joint cdf F_f(c₁, c₂) is the function probability in an explicit domain, p(f₁ < c₁, f₂ < c₂), and can be computed as the normal probability in the intersection of the x-domains c₁ − f₁(x) > 0 and c₂ − f₂(x) > 0. Numerically differentiating this joint cdf then gives the joint pdf of the vector function. The probability of such a vector function in an implicit domain, i.e. p(g(f) > 0), is computed as the normal probability in the implicit domain p(h(x) > 0), where h = g ∘ f. Fig. 2c illustrates the function probability and normal probability views of the implicit integral p(h = x₁ sin x₂ − x₂ cos x₁ > 1). The 83rd percentile of this function h (using the inverse cdf) is 4.87.

Classifying normal samples
Suppose observations come from several normal distributions with parameters µᵢ, Σᵢ and priors pᵢ, and the outcome values (rewards and penalties) of classifying them are represented in a matrix V: vᵢⱼ is the value of classifying a sample from class i as class j.
If the true class is i, selecting i over others provides a relative value gain of vᵢ := vᵢᵢ − Σ_{j≠i} vᵢⱼ. Given a sample x, the expected value gain of deciding i is therefore ⟨v(i|x)⟩ = p(i|x)vᵢ ∝ p(x|i) pᵢ vᵢ. The Bayes-optimal decision is to assign each sample to the class that maximizes this expected value gain, or its log:

ln p(x|i) + ln(pᵢvᵢ).

When the outcome value is simply the correctness of classification, V = 1 (so each vᵢ = 1), then this quantity is the log posterior, ln p(i|x), and when priors are also equal, it is the log likelihood.

Two normals
Suppose there are only two normal classes a and b. The Bayes-optimal decision rule is to pick A if (upper-case denotes the estimated classes) β(x) > 0, where:

β(x) = ln p(x|a) − ln p(x|b) + ln(p_a v_a) − ln(p_b v_b).

This quadratic β(x) is the Bayes classifier, or the Bayes decision variable that, when compared to 0, maximizes expected gain.
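This decision variable is easy to sketch in Python for two 1d normal classes (the helper names are ours; the priors and value gains enter only through the constant term):

```python
import math

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma)**2 - math.log(sigma * math.sqrt(2 * math.pi))

def beta(x, a, b, priors=(0.5, 0.5), gains=(1.0, 1.0)):
    """Bayes decision variable for 1d normal classes a = (mu_a, sigma_a) and
    b = (mu_b, sigma_b): decide A when beta(x) > 0."""
    (mua, sa), (mub, sb) = a, b
    return (log_gauss(x, mua, sa) - log_gauss(x, mub, sb)
            + math.log(priors[0] * gains[0]) - math.log(priors[1] * gains[1]))

# equal variances and priors: the optimal criterion is the midpoint of the means
a, b = (0.0, 1.0), (2.0, 1.0)
print(beta(0.9, a, b) > 0, beta(1.1, a, b) > 0)   # True False
```

Raising the gain of one class shifts the criterion toward the other class, as in the unequal-value example of fig. 2g.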
When V = 1, the Bayes decision variable is the log posterior ratio, and this decision rule minimizes overall error. The error rates of different types, i.e. true and false positives and negatives, are then the probabilities of the normals on either side of the quadratic boundary β(x) = 0. These probabilities can be computed entirely numerically using the ray-trace method, or we can first arrive at mathematical expressions using the generalized chi-square method (as follows), which are then numerically evaluated. The overall error p(e) is the prior-weighted sum of the error rates of each normal.
Further, when priors are equal, the Bayes decision variable is the log likelihood ratio (of a vs b), which can be called l(x).

Single-interval (yes/no) task
Consider a yes/no task where the stimulus x comes from one of two equally likely 1d normals a and b with means µ_a, µ_b and sd's σ_a > σ_b (fig. 3a). The optimal decision (eq. 3) is to pick a if the Bayes decision variable, the log likelihood ratio l(x) = ln p(x|a) − ln p(x|b), exceeds 0. l(x) is a scaled and shifted 1 dof noncentral chi-square for each class (fig. 3b), so the Bayes error rates p(B|a) = p(l < 0 | a) and p(A|b) = p(l > 0 | b) follow from the corresponding noncentral chi-square cdf's, and p(e) is their average.
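For 1d normals the crossing points of the two densities, and hence the Bayes error, can also be found directly in closed form; a Python sketch (our own helper, assuming σ_a > σ_b and equal priors):

```python
import math

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))

def bayes_error_1d(mua, sa, mub, sb):
    """Equal-priors Bayes error for 1d normals with sa > sb: the optimal
    boundary is the pair of points where the two densities cross, and the
    rule assigns the narrower class b to the interval between them."""
    # crossings solve (x-mua)^2/sa^2 - (x-mub)^2/sb^2 = 2 ln(sb/sa)
    A = 1/sa**2 - 1/sb**2
    B = -2 * (mua/sa**2 - mub/sb**2)
    C = mua**2/sa**2 - mub**2/sb**2 - 2*math.log(sb/sa)
    r = math.sqrt(B*B - 4*A*C)
    x1, x2 = sorted([(-B - r) / (2*A), (-B + r) / (2*A)])
    p_err_a = Phi((x2 - mua)/sa) - Phi((x1 - mua)/sa)        # a called B
    p_err_b = 1 - Phi((x2 - mub)/sb) + Phi((x1 - mub)/sb)    # b called A
    return (p_err_a + p_err_b) / 2

print(bayes_error_1d(0.0, 2.0, 0.5, 1.0))   # ~0.33
```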

Two-interval task
Now consider an equal-priors two-interval task, where stimuli x₁ and x₂ come one from each of the (general) distributions a and b. A decision rule commonly employed here is to check which stimulus is larger.14,15 But note that the optimal strategy is to determine whether the tuple x = (x₁, x₂) came from the joint distribution ab of independent a and b (in that order), or from ba (opposite order). To do this we compute, given x, the log likelihood ratio l_ab:ba of ab vs ba, which turns out to be simply related to the log likelihood ratios l for the individual stimuli in the single-interval task:

l_ab:ba(x) = l(x₁) − l(x₂).

The optimal rule is to pick ab if l_ab:ba(x) > 0. This is the familiar decision rule:3 the observer gets a log likelihood ratio from each stimulus as in the single-interval task (e.g. fig. 3b), and picks the larger likelihood ratio (not the larger stimulus).
When a and b are normals, ab is the 2d normal with mean µ_ab = (µ_a, µ_b) and sd matrix S_ab = diag(σ_a, σ_b), and ba is its flipped version (fig. 3c). The optimal decision rule (eq. 3) boils down to selecting ab when l(x₁) > l(x₂). When σ_a = σ_b, this is the usual condition of whether x₁ > x₂. But when σ_a ≠ σ_b, this optimal decision boundary comprises two perpendicular lines, solid and dashed (fig. 3c). The x₁ > x₂ criterion is to use only the solid boundary, which is sub-optimal.
The minimum error rate p(e) is the probability that the difference distribution of the two categories of fig. 3b exceeds 0. l_ab:ba is the difference of the scaled and shifted noncentral chi-square variables l(x₁) and l(x₂), so it has a generalized chi-square distribution for each category (fig. 3d), and we can calculate that p(e) = p(χ̃_{w,k,λ,0,0} < 0), where the weights w, degrees of freedom k and non-centralities λ collect those of the two single-interval variables, with the weights of the subtracted variable negated. If the two stimuli themselves arise from k-dimensional normals N(µ_a, Σ_a) and N(µ_b, Σ_b), then the optimal discrimination is between the 2k-dimensional normals ab and ba, whose means are the concatenations of µ_a and µ_b, and covariances are the block-diagonal concatenations of Σ_a and Σ_b, in opposite order to each other.

m-interval task
Consider the m-interval (m-alternative forced choice) task with m stimuli, one from the signal distribution N(µ_a, σ_a), and the rest from N(µ_b, σ_b). Following previous reasoning, the hypothesis that the ith stimulus is the signal corresponds to an m-d normal, with mean vector whose ith entry is µ_a and the rest are µ_b, and diagonal sd matrix whose ith entry is σ_a and the rest are σ_b. The part of this log likelihood that varies across i is exactly the single-interval log likelihood ratio l(xᵢ) of a vs b. The optimal response is thus to pick the m-d normal with highest likelihood, i.e. pick the xᵢ with the largest l(xᵢ), which is the familiar rule.3 Analogous to Wickens,16 eq. 6.19, the maximum accuracy is then given by:

p(c) = ∫ f_a(l) F_b(l)^(m−1) dl,

where f_a and F_b are the pdf and cdf of l under a and b, which are known, so this can be evaluated numerically. For example, for 2, 3 and 4 intervals with the parameters of fig. 3a, the accuracy is 0.74 (fig. 3c), 0.64 and 0.58 (see example in the getting started guide for the toolbox). When the variances are equal, these computed accuracies match table 6.1 of Wickens.16
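In the equal-variance case l is monotonic in the stimulus, and this accuracy integral reduces to the classic m-AFC formula over the stimulus axis, which a short Python sketch can evaluate (trapezoid rule; our own helper names):

```python
import math

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
phi = lambda z: math.exp(-z*z/2) / math.sqrt(2*math.pi)

def mafc_accuracy(dprime, m, lo=-8.0, hi=8.0, n=4000):
    """Max accuracy of an m-interval task with equal-variance normal classes:
    p(correct) = integral of phi(t - d') * Phi(t)^(m-1) dt (trapezoid rule)."""
    h = (hi - lo) / n
    s = 0.0
    for j in range(n + 1):
        t = lo + j*h
        w = 0.5 if j in (0, n) else 1.0
        s += w * phi(t - dprime) * Phi(t)**(m - 1)
    return s * h

print(mafc_accuracy(0.0, 4))   # no signal: accuracy = chance = 1/4
print(mafc_accuracy(1.0, 2))   # 2AFC: equals Phi(d'/sqrt(2)) ~ 0.76
```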

Discriminability index
Bayesian classifiers are often used to model behavioral (or neural) performance in binary classification. Within the Bayesian modeling framework, it is possible to estimate, from the pattern of errors, the separation (or overlap) of the decision variable distributions for the two categories, independent of the decision criterion (which may differ from the optimal value of zero). The discriminability index d′ measures this separation. If the two underlying distributions are equal-variance univariate normals a and b, then d′ = |µ_a − µ_b|/σ, and if they are multivariate with equal covariance matrix Σ = S², then it is their Mahalanobis distance:

d′ = ∥S⁻¹(µ_a − µ_b)∥ = ∥µ_a − µ_b∥/σ_µ,

where σ_µ = 1/∥S⁻¹µ̂∥ is the sd of the 1d slice along the unit vector µ̂ through the means, i.e. the multi-dimensional d′ equals the d′ along the 1d slice through the means. For bivariate distributions, we can calculate using the Mahalanobis distance that:

d′² = (d′ₓ² − 2ρ d′ₓ d′ᵧ + d′ᵧ²)/(1 − ρ²),

where ρ is the correlation coefficient, and d′ₓ and d′ᵧ are here the signed indices along the two dimensions, i.e. including the signs of the mean differences instead of their absolute values. In other words, the correlation reduces or increases the overall discriminability of the distributions, depending on their relative position.
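The bivariate formula can be checked numerically against the Mahalanobis distance (a Python sketch with hypothetical names):

```python
import math

def dprime_mahalanobis_2d(dmu, Sigma):
    """Mahalanobis d' for two bivariate normals sharing covariance Sigma."""
    (sxx, sxy), (_, syy) = Sigma
    det = sxx*syy - sxy*sxy
    dx, dy = dmu
    quad = (syy*dx*dx - 2*sxy*dx*dy + sxx*dy*dy) / det   # dmu' Sigma^-1 dmu
    return math.sqrt(quad)

# compare with the closed form in terms of the signed 1d indices
sx, sy, rho = 1.0, 2.0, 0.6
dx, dy = 1.0, -1.0
Sigma = [[sx*sx, rho*sx*sy], [rho*sx*sy, sy*sy]]
d1, d2 = dx/sx, dy/sy                 # signed per-dimension d'
closed = math.sqrt((d1*d1 - 2*rho*d1*d2 + d2*d2) / (1 - rho*rho))
print(dprime_mahalanobis_2d((dx, dy), Sigma), closed)   # equal
```

With a negative mean difference along y and positive correlation, the cross term here increases the overall discriminability, as the text describes.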
For univariate distributions with unequal variances, there exist several contending discriminability indices.14,16,17 A common one is Simpson and Fitter's d′_a = |µ_a − µ_b|/σ_rms,14 extended to general dimensions as the Mahalanobis distance using the pooled covariance, i.e. with S_rms = [(Σ_a + Σ_b)/2]^(1/2) as the common sd.18 Another index is Egan and Clarke's d′_e = |µ_a − µ_b|/σ_avg,19 which we here extend to general dimensions using S_avg = (S_a + S_b)/2.
These unequal-covariance measures are simple approximations that do not describe the exact separation between the distributions. However, our methods can be used to define a discriminability index that exactly describes the separation between two arbitrary distributions (even non-normal). First, we determine the minimum possible (Bayes) error when V = 1 and priors are equal. This is p(e) = (p(B|a) + p(A|b))/2, where we can also write these error rates in terms of the distributions f_a and f_b of the log likelihood ratio l:

p(B|a) = ∫₋∞⁰ f_a(l) dl,  p(A|b) = ∫₀^∞ f_b(l) dl.

(For 1d normals, these are given by eq. 4.) The Bayes error rate is also the overlap area of the distributions. If p_a(x) and p_b(x) are the two distributions weighted by their priors, so that their masses together sum to 1, then the Bayes error rate is their overlap area (e.g. the overlap area in fig. 3a), which is the area under the function which is the minimum of the two distributions: p(e) = ∫ min(p_a(x), p_b(x)) dx.
If we know the optimal boundary between two distributions, we can compute p(e) from the probability masses on the wrong sides of it, which add up to the error rate. But if we do not know the optimal boundary, e.g. when we have empirically sampled histograms, we can compute p(e) using the area of overlap above, though the value will vary with our choice of histogram bins.
We now define the Bayes discriminability index d′_b as the equal-variance index that corresponds to this same Bayes error, i.e. the separation between two unit-variance normals that have the same overlap as the two distributions, which comes out to be twice the z-score of the maximum accuracy: d′_b = 2Z(1 − p(e)) = −2Z(p(e)), where Z is the standard normal inverse cdf. This index is the best possible discriminability, i.e. by an ideal observer. It extends to all cases as a smooth function of the layout and shapes of the distributions, and reduces to d′ for equal variance/covariance normals.
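A minimal Python sketch of this definition (function names hypothetical): numerically integrate the pointwise minimum of the two prior-weighted densities to get the Bayes error, then convert it to d′_b:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def bayes_error_1d(mu_a, sd_a, mu_b, sd_b):
    """Bayes error rate with equal priors: area under the pointwise
    minimum of the two prior-weighted (i.e. halved) normal densities."""
    f = lambda x: 0.5 * min(norm.pdf(x, mu_a, sd_a), norm.pdf(x, mu_b, sd_b))
    lo = min(mu_a - 10*sd_a, mu_b - 10*sd_b)
    hi = max(mu_a + 10*sd_a, mu_b + 10*sd_b)
    return quad(f, lo, hi, limit=200)[0]

def dprime_bayes(p_err):
    """Bayes discriminability: twice the z-score of the maximum accuracy."""
    return 2 * norm.ppf(1 - p_err)

# Equal-variance case: d'_b must reduce to the classical d' = |mu_a - mu_b|/sigma
pe = bayes_error_1d(0.0, 1.0, 2.0, 1.0)
assert np.isclose(dprime_bayes(pe), 2.0, atol=1e-4)
```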
d′_b is a positive-definite statistical distance measure that is free of assumptions about the distributions, like the Kullback-Leibler divergence D_KL(a, b), which is the expected log likelihood ratio l under the a distribution (mean of the blue distributions in figs. 3b and d). But unlike D_KL, d′_b(a, b) is symmetric for the two distributions. However, d′_b does not satisfy the triangle inequality. For example, consider three equal-width, consecutively overlapping uniform distributions: a over [0, 3], b over [2, 5], and c over [4, 7]. b overlaps with a and c, so d′_b(a, b) and d′_b(b, c) are finite, but a and c do not overlap, so d′_b(a, c) is infinite and exceeds d′_b(a, b) + d′_b(b, c). In fig. 4a, we compare d′_b with d′_a and d′_e for different mean separations and sd ratios of two normals, in 1d and 2d. We first take two 1d normals and increase their discriminability by equally shrinking their sd's while maintaining their ratio σ_a/σ_b = s, i.e. effectively separating the means. We repeat this by starting with two 2d normals with different sd matrices, one of them scaled by a different value s each time, then shrinking them equally.
We can theoretically show that d′_a ≤ d′_e in all dimensions and cases. In 1d, this is simply because σ_avg ≤ σ_rms, and at the limit of highly unequal sd's, σ_rms/σ_avg → √2, so d′_a underestimates d′_e by about 30%. In higher dimensions, we can show analogous results using fig. 4b as an example. The left figure shows two normals with error ellipses corresponding to their sd's, and their average and rms sd's. Now we make two linear transformations of the space: first we standardize normal a, then we diagonalize normal b (i.e. a rotation that aligns the axes of error ellipse b with the coordinate axes). In this space (right figure), S_a = I, S_b is diagonal, and the axes of S_avg and S_rms are the average and rms of the corresponding axes of S_a and S_b. S_rms is hence bigger than S_avg, so has larger overlap at the same separation, so d′_a ≤ d′_e. The ratio of d′_a and d′_e is ∥S⁻¹_rms µ∥/∥S⁻¹_avg µ∥, the ratio of the 1d slices of the average and rms sd's along the axis through the means. When these are highly unequal, we again have d′_a ≈ d′_e/√2.
We can also show that at large separation in 1d, d′_e converges to d′_b. Consider normals at 0 and 1 with sd's sσ and σ (fig. 4c). At large separation (σ → 0), the boundary points, where the distributions cross, are at s/(s ± 1). The right boundary is 1/[σ(s − 1)] sd's from each normal, so it adds as much accuracy for the left normal as it subtracts for the right. So only the inner boundary is useful, which is 1/[σ(s + 1)] sd's from each normal. The overlap here thus corresponds to d′_b = 2/[σ(s + 1)] = |µ_a − µ_b|/σ_avg = d′_e. So, when two 1d normals are too far apart to compute their overlap (see performance section) and hence d′_b, the toolbox returns d′_e instead.
Given that d′_e is often the better approximation to the best discriminability d′_b, why is d′_a used so often? Simpson and Fitter 14 argued that d′_a is the best index, because it corresponds to the accuracy in a two-interval task with stimuli x₁ and x₂, using the criterion x₁ > x₂. But as we saw, this is not the optimal way to do this task. The optimal error p(e) is instead as calculated previously, and corresponds to a larger discriminability. In sum, d′_b is the maximum discriminability between normals in all cases, including two-interval tasks, especially when means are closer and variances are unequal. d′_e often approximates it better than d′_a, e.g. when the decision variable in a classification task is modeled as two unequal-variance 1d normals.

Contribution to d ′ from each dimension
For two distributions in several dimensions, we could ask, how much is each dimension or feature contributing to the overall discriminability? If the two distributions have the same diagonal covariance matrix, i.e. the dimensions are independent, the squared d′'s of the individual dimensions add to the overall squared d′. But when there are correlations or the covariance matrices are different, the total d′² is not a simple Pythagorean sum across the dimensions (e.g. eq. 5 in 2d).
Consider starting with only one dimension of the two distributions, and adding the rest one by one, which keeps increasing the overall discriminability. But the amount by which each new dimension raises it will depend on how much it is correlated with the existing dimensions. So the contribution of a dimension to the discriminability is not unique, but depends on the order in which we add the dimensions. One way we can uniquely define the contribution then is by measuring how much the Bayes discriminability d′ in the full space (we will drop the subscript for this discussion) falls if we remove the dimension. This measures the information a dimension contains that is not already contained in the other dimensions. If the discriminability with dimension i removed is d′₋ᵢ, we can define the contribution of dimension i as d′ᵢ² = d′² − d′₋ᵢ². This is the same as the individual discriminability of dimension i when the covariances are equal and diagonal, but in the other cases, this measure more accurately reflects the contribution of a dimension than its individual discriminability (fig. 9d shows a comparison).
Fig. 2d shows an example of these contributions in a 2d case, and figs. 9b, d and h show how this lets us identify which features are the most useful for detection tasks.
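A Python sketch of this leave-one-out definition (hypothetical function names), using the Mahalanobis distance for the equal-covariance case, where the Bayes index reduces to it:

```python
import numpy as np

def dprime_mahalanobis(mu_a, mu_b, Sigma):
    d = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    return float(np.sqrt(d @ np.linalg.solve(Sigma, d)))

def contributions(mu_a, mu_b, Sigma):
    """Contribution of dimension i: d_i'^2 = d'^2 - d'_{-i}^2,
    where d'_{-i} is the discriminability with dimension i removed."""
    mu_a, mu_b, Sigma = map(np.asarray, (mu_a, mu_b, Sigma))
    k = len(mu_a)
    d_full2 = dprime_mahalanobis(mu_a, mu_b, Sigma)**2
    out = []
    for i in range(k):
        keep = [j for j in range(k) if j != i]
        d_sub = dprime_mahalanobis(mu_a[keep], mu_b[keep],
                                   Sigma[np.ix_(keep, keep)])
        out.append(d_full2 - d_sub**2)
    return out

# Independent dimensions: contributions equal the individual squared d's
c = contributions([0, 0], [1, 2], np.diag([1.0, 4.0]))
assert np.allclose(c, [1.0, 1.0])
```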

ROC curves
ROC curves track the outcome rates from a single criterion swept across two 1d distributions (e.g. black curve of fig. 5a), or from varying the likelihood-ratio criterion between any two distributions in any dimensions (green and purple curves), which corresponds to sweeping a single criterion across the 1d distributions of the likelihood ratio l (figs. 3b and 5b for the green and purple curves).
Discriminability indices are frequently estimated from ROC curves. d′_a is √2 times the z-score of the area under the single-criterion ROC curve. d′_b has no such simple relationship with the curve area, but can be estimated in different ways. Even though d′_b uses both criteria for unequal-variance 1d normals, it can still be estimated from the usual single-criterion ROC curve. Assume that the normals are N(0, 1) and N(µ, σ²). From the single-criterion ROC curve, we first estimate µ and σ, then we use our method to compute the d′_b of normals with these parameters. d′_b can also be estimated from a likelihood-ratio ROC curve. For any two distributions in any dimensions, d′_b corresponds to the accuracy at the point along their likelihood-ratio ROC curve that maximizes p(hit) − p(false alarm), which is the farthest point from the diagonal, where the curve tangent is parallel to the diagonal (fig. 5a).
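A Python sketch of the farthest-from-diagonal estimate. For equal-variance normals the single criterion is also the likelihood-ratio criterion, so the recovered index should equal the true d′ (an assumption of this simplified example):

```python
import numpy as np
from scipy.stats import norm

# Sweep a single criterion across two equal-variance 1d normals N(0,1), N(d,1)
d_true = 1.5
c = np.linspace(-6, 8, 20001)
hits = 1 - norm.cdf(c, loc=d_true)   # p(x > c | signal)
fas  = 1 - norm.cdf(c, loc=0.0)      # p(x > c | noise)

# Farthest ROC point from the diagonal maximizes hits - fas;
# with equal priors its accuracy is (hits + 1 - fas)/2, and d'_b = 2 z(accuracy)
i = np.argmax(hits - fas)
acc = (hits[i] + 1 - fas[i]) / 2
d_b = 2 * norm.ppf(acc)
assert np.isclose(d_b, d_true, atol=1e-3)
```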

Custom classifiers
Sometimes, instead of the optimal classifier, we need to test and compare sub-optimal classifiers, e.g. one that ignores a cue, or some cue covariances, or a simple linear classifier. So the toolbox allows the user to extract the optimal boundary and change it, or to explicitly supply a custom sub-optimal classification boundary. Fig. 2d compares the classification of two bivariate normals using the optimal boundary (which corresponds to d′_b) vs. using a hand-supplied linear boundary. Just as with integration, one can supply these custom classification domains in quadratic, ray-trace or implicit form, and use set operations on them.

Classifying using data
If instead of normal parameters we have labelled data as input, we can estimate the parameters. The maximum-likelihood estimates of means, covariances and priors of normals are simply the sample means, covariances and relative counts. With these parameters we can compute the optimal classifier β(x) and the error matrix. We can further calculate another quadratic boundary γ(x) to better separate the given samples: starting with β(x), we optimize its (k + 1)(k + 2)/2 independent parameters to maximize the classification outcome value of the given samples. This is important for non-normal samples, where the optimal boundary between estimated normals may not be a good classifier. This optimization then improves classification while still staying within the smooth quadratic family, preventing overfitting. Fig. 2e shows classification based on labelled non-normal samples.
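A Python sketch of the first step (the γ optimization is omitted; function names are hypothetical): fit the normal parameters from labelled samples and classify with the quadratic Bayes decision variable β(x):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_normal(X):
    """Maximum-likelihood normal fit: sample mean and covariance."""
    return X.mean(axis=0), np.cov(X.T)

def log_lr(x, mu_a, S_a, mu_b, S_b):
    """Bayes decision variable beta(x) = log p(x|a)/p(x|b), quadratic in x."""
    def logpdf(x, mu, S):
        d = x - mu
        sign, logdet = np.linalg.slogdet(S)
        return -0.5 * (d @ np.linalg.solve(S, d) + logdet)
    return logpdf(x, mu_a, S_a) - logpdf(x, mu_b, S_b)

# Labelled samples from two 2d normals
Xa = rng.multivariate_normal([0, 0], [[1, .5], [.5, 2]], 2000)
Xb = rng.multivariate_normal([3, 3], [[2, -.3], [-.3, 1]], 2000)
mu_a, S_a = fit_normal(Xa)
mu_b, S_b = fit_normal(Xb)

# Classify: beta(x) > 0 -> class a (equal priors)
acc_a = np.mean([log_lr(x, mu_a, S_a, mu_b, S_b) > 0 for x in Xa])
acc_b = np.mean([log_lr(x, mu_a, S_a, mu_b, S_b) < 0 for x in Xb])
assert acc_a > 0.9 and acc_b > 0.9
```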
If, along with labelled samples, we supply a custom quadratic classifier, the toolbox instead optimizes this for the sample. This is useful, say, in the following case: suppose we have already computed the optimal classifier for samples in some feature space. Now if we augment the data with additional features, we may start from the existing classifier (with its coefficients augmented with zeros in the new dimensions) to find the optimal classifier in the larger feature space.

Scaling the discriminability of data
Consider two samples x_a and x_b, with means µ_a, µ_b and covariances Σ_a, Σ_b (sd matrices S_a, S_b). In some cases, we may want to scale their discriminability by moving them closer together or farther apart. One such case is when we are modeling a detection or classification task, and the model performance exceeds that of the subject or observed data. In that case, we can move the model variable distributions closer together so that the model matches the observed performance, while also predicting which specific data points should start overlapping and be misclassified.
To do this, we first model the samples as multinormals: x_a ∼ S_a z + µ_a and x_b ∼ S_b z + µ_b, where z is a standard normal vector. Now we apply transformations to linearly interpolate these multinormals closer together. There are different flavours of doing this depending on the case. If we suppose that a is the noise distribution, then it should stay fixed while the signal distribution b is pulled towards it. So we introduce the interpolation factor r, such that the parameters of distribution b become: µ_b(r) = µ_a + r(µ_b − µ_a), S_b(r) = S_a + r(S_b − S_a). As r goes to 0, distribution b linearly morphs into a. We can effect this transformation on the data points x_b by first whitening them: z_b = S_b⁻¹(x_b − µ_b), then linearly transforming them to the new parameters: x_b(r) = S_b(r) z_b + µ_b(r). We can also have other flavours, such as interpolating the parameters of the two distributions towards their common means. One important thing to note here is that the distribution parameters that we should linearly interpolate are the mean vector and the sd matrix, not the covariance matrix, because the points are a linear function of these parameters.
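The interpolation step above can be sketched in Python (hypothetical function name; the specific parameters are arbitrary choices for illustration):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

def interpolate_towards(Xb, mu_a, S_a, mu_b, S_b, r):
    """Pull sample b towards distribution a: interpolate the mean vector
    and the sd matrix (not the covariance), since points are linear in these."""
    mu_r = mu_a + r * (mu_b - mu_a)
    S_r = S_a + r * (S_b - S_a)
    Z = np.linalg.solve(S_b, (Xb - mu_b).T).T   # whiten with b's sd matrix
    return Z @ S_r.T + mu_r                     # re-transform to new parameters

mu_a, S_a = np.zeros(2), np.eye(2)
mu_b = np.array([4.0, 0.0])
S_b = np.real(sqrtm(np.array([[2.0, 0.5], [0.5, 1.0]])))  # sd = sqrt of covariance
Xb = rng.standard_normal((5000, 2)) @ S_b.T + mu_b

# r = 0.5: sample mean lands halfway between the two means
X_half = interpolate_towards(Xb, mu_a, S_a, mu_b, S_b, 0.5)
assert np.allclose(X_half.mean(axis=0), mu_a + 0.5*(mu_b - mu_a), atol=0.1)

# r = 0: b has morphed into a (covariance becomes S_a S_a = I)
X0 = interpolate_towards(Xb, mu_a, S_a, mu_b, S_b, 0.0)
assert np.allclose(np.cov(X0.T), S_a @ S_a, atol=0.1)
```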
Fig. 6a shows two distributions where this linear warping is implemented. These can be general distributions. Here, for example, the blue one is normal while the orange one isn't. The orange distribution is interpolated towards the blue in steps from r = 1 through 0.05, and the d′ shrinks. Using the optimal classification boundary at any step in the scaling tells us which specific data points would be misclassified in the model, which we can then compare with our subject or observed data.
A different way that we can scale the discriminability of two data distributions, in one dimension instead of many, is by computing their decision variables (the log likelihood ratio l that a point belongs to a vs b) under the normal model, then moving them closer together or farther apart. Fig. 6b, top subplot, shows the two distributions of the Bayes decision variable for the two original bivariate samples in plot a that were farthest apart. (The blue sample a is now on the right because its log likelihood ratio is positive.) If the median value of a decision variable distribution is l_m, we can map the decision variables to l′ = l − l_m(1 − r). As the scale factor r goes from 1 to 0, each decision variable distribution is pulled in until half of its points are on either side of 0, i.e. the error rate goes to chance and the discriminability goes to 0. Once again, we can use this to scale down the discriminability of a model to match a subject or observed data, and see if the model predicts the specific trials or data points that were misclassified.
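This 1d scaling is a one-liner; a hedged Python sketch (hypothetical function name, with simulated decision-variable samples):

```python
import numpy as np

rng = np.random.default_rng(2)

def scale_decision_variable(l, r):
    """Pull a decision-variable sample towards the criterion at 0:
    l' = l - l_m * (1 - r), where l_m is the sample median."""
    return l - np.median(l) * (1 - r)

# Simulated log-likelihood-ratio samples for class a (positive side)
l_a = rng.normal(3.0, 1.0, 10000)

# r = 0: the median lands on the criterion -> error rate goes to chance
l0 = scale_decision_variable(l_a, 0.0)
assert np.isclose(np.mean(l0 > 0), 0.5, atol=0.01)

# r = 1: the sample is unchanged
assert np.allclose(scale_decision_variable(l_a, 1.0), l_a)
```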

Multiple normals
The optimal classifier between two normals is a quadratic, so error rates can be computed using the generalized chi-square method or the ray-trace method. When classifying amongst more than two normals, the decision region for each normal n is the intersection of its quadratic decision regions q_n^i(x) > 0 against each of the other normals i, and may be written as: R_n = ∩_{i≠n} {x : q_n^i(x) > 0}. This is not a quadratic region, so only the ray-trace method can compute the error rates here, by using the intersection operation on the domains as described before. Fig. 2f shows the classification of several normals with arbitrary means and covariances.
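For equal priors, a point lies in the intersection of its pairwise quadratic regions exactly when its class has the highest posterior log-density, so the assignment rule can be sketched in Python as (hypothetical function name, arbitrary example parameters):

```python
import numpy as np

def classify_multi(x, params):
    """Assign x to the normal whose pairwise quadratics q_n^i(x) > 0
    hold against every other normal i -- equivalently (equal priors),
    to the class with the highest log-density."""
    def log_density(x, mu, S):
        d = x - np.asarray(mu, float)
        sign, logdet = np.linalg.slogdet(S)
        return -0.5 * (d @ np.linalg.solve(S, d) + logdet)
    scores = [log_density(x, mu, S) for mu, S in params]
    return int(np.argmax(scores))

params = [([0, 0], np.eye(2)),
          ([5, 0], [[2, 0.5], [0.5, 1]]),
          ([0, 5], [[1, -0.3], [-0.3, 2]])]

assert classify_multi(np.array([0.1, -0.2]), params) == 0
assert classify_multi(np.array([5.2, 0.3]), params) == 1
assert classify_multi(np.array([-0.3, 5.1]), params) == 2
```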

Combining and reducing dimensions
It is often useful to combine the multiple dimensions in a problem into fewer, or one. 20 Mapping many-dimensional integration and classification problems to fewer dimensions allows visualization, which can help us understand multivariate normal models and their predictions, and check how adequately they represent the empirical or other theoretical probability distributions for a problem.
As we have described, the multi-dimensional problem of integrating a normal probability in the domain f(x) > 0 can be viewed as the 1d integral of the pdf of f(x) above 0. Similarly, multi-dimensional binary classification problems with a classifier f(x) can be mapped to a 1d classification between two distributions of the scalar decision variable f(x), with the criterion at 0, while preserving all classification errors. For optimally classifying between two normals, mapping to the Bayes decision variable β(x) is the optimal quadratic combination of the dimensions. For integration and binary classification problems in any dimensions, the toolbox can plot these 1d 'function probability' views (fig. 2g). With multiple classes, there is no single decision variable to map the space to, but the toolbox can plot the projection along any chosen vector direction. Fig. 2h shows the classification of samples from four 4d t distributions using normal fits, projected onto the axis along (1, 1, 1, 1).
For a many-dimensional classification problem, we can also define a decision variable on a subset of dimensions to combine them into one, then combine those combinations further, and so on, according to the logic of the problem.
In the sections below, we shall see examples of such applications, where we map to fewer dimensions to see how well a multivariate normal model works for a problem, and also combine groups of cues to organize a problem and get visual insight.

Testing a normal model for classification
The results developed here are for normal distributions. But even when the variables in a classification problem are not exactly normal (e.g. either they are an empirical sample, or they are from a known but non-normal distribution), we can still use the current methods if we check whether normals are an adequate model for them. One test, as described before, is to project the distributions to one dimension, either by mapping to a quadratic form (fig. 2g) or to an axis (fig. 2h), where we can visually compare the projections of the observed distributions and those of their fitted normals.
We could further explicitly test the normality of the variables with measures like negentropy, but this is stricter than necessary. If the final test of the normal model is against outcomes of a limited-trials classification experiment, then it is enough to check for agreement between the outcome counts predicted by the true distributions and by their normal approximations, given the number of trials. For any classification boundary, we can calculate outcome rates, e.g. p(A|a) for a hit, determined from the true distributions vs. from the normal approximations. The count of hits in a task is binomial with parameters equal to the number of a trials and p(A|a), so we can compare its count distribution between the true and the normal model.
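This binomial comparison can be sketched in Python (the two hit rates below are hypothetical placeholders for rates computed from a true distribution and from its normal fit):

```python
import numpy as np
from scipy.stats import binom

# For a given boundary, compare the hit-count distribution predicted by the
# true hit rate vs the normal-model hit rate, over n trials
n_trials = 100
p_true, p_normal = 0.80, 0.77   # hypothetical p(A|a) under each model

mean_t, sd_t = binom.mean(n_trials, p_true), binom.std(n_trials, p_true)
mean_n, sd_n = binom.mean(n_trials, p_normal), binom.std(n_trials, p_normal)

# One simple criterion: the models are practically indistinguishable at this
# trial count if the predicted count bands (mean +- sd) overlap
overlap = abs(mean_t - mean_n) < (sd_t + sd_n)
assert overlap   # |80 - 77| = 3 is within 4.0 + 4.2
```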
If the classes are well-separated (e.g. for ideal observers), the optimal boundary provides near-perfect accuracy on both the true and the normal distributions, so the comparison yields no insight. To make the test more informative, we repeat it as we sweep the boundary across the space into regions of error, to show if the normal model still stands. This is similar to how the decision criterion between two 1d distributions is swept to create an ROC curve that characterizes the classification more richly than a single boundary. In multiple dimensions, there is more than one unique way to sweep the boundary. We pick two common sub-optimal boundary families. The first corresponds to an observer being biased towards one type of error or another, i.e. a change in the assumed ratio of priors or outcome values. The second is an observer having an internal discriminability different from the true (external) one (e.g. due to blurring of the distributions by internal noise), so adopting a boundary corresponding to covariance matrices that are scaled by a factor. When there are two classes, the boundaries for both of these sub-optimal observers correspond to a shift in the constant offset q_0 (eq. 3), i.e. a shift in the likelihood-ratio criterion of the two normals. So we are simply moving along the normal likelihood-ratio ROC curves as we compare the outcome rates of the true and the normal distributions. Fig. 7a shows the classification of two empirical distributions, where a is not normal, and gray curves show this family of boundaries, which are simply contours of the log likelihood ratio l. Since a and b are well-separated, the ROC curves for both true and normal distributions would hug the top and left margins, so they cannot be compared. Instead, we detach the hits and false alarms from each other and plot them individually against the changing likelihood-ratio criterion, which gives us more insight. Fig.
7b shows the mean ± sd bands of hits and false alarms from applying these boundaries on samples of 100 trials (typical of a psychophysics experiment) from each true distribution, vs. the normal approximations. They exactly coincide for false alarms / correct rejections, but deviate for hits / misses, correctly reflecting that b is normal but a is not. The investigator can judge if this deviation is small enough to be ignored for their problem.
Now consider the case of applying these tests to multi-class problems. The two kinds of sub-optimal boundaries we picked are no longer the same family here. Recall that the classification problem of fig. 2h had four 4d t distributions. Fig. 7c shows similar tests to see if this problem (with priors now equal) is well-modeled by normals. The family of boundaries corresponds to varying the assumed prior p_b. We may compare any of the 16 outcome rates here, e.g. p(B|b), and also the overall error p(e). When there are multiple classes, for any given true class, the numbers of responses in the different classes are multinomially distributed, so that the total number of wrong responses is again binomially distributed. p(e) is the prior-weighted sum of these binomially distributed individual errors, so we can calculate its mean and sd predicted by the observed vs. the normal distributions. Fig. 7d shows the test across boundaries corresponding to all covariance matrices scaled by a factor, changing the d′ between the classes. Some other notable sub-optimal boundaries to consider for this test are ones that correspond to adding independent noise to the cues (which changes only their variances but not their covariances), ones that ignore certain cues or cue covariances, or simple flat boundaries. As seen here, even for many-dimensional distributions that cannot be visualized, these tests can be performed to reveal some of their structure, and to show which specific outcomes deviate from normal predictions for which boundaries.
When the problem variables have a known non-normal theoretical distribution, the maximum-likelihood normal model is the one that matches its mean and covariance, and these tests can be performed by theoretically calculating or bootstrap sampling the error rate distributions induced by the known true distributions.

Matlab toolbox: functions and examples
For an integration problem, the toolbox provides the function integrate_normal that inputs the normal parameters and the integration domain (as quadratic domain coefficients, limits of a rectangular domain, a ray-trace domain function or an implicit domain function), and outputs the integral and its complement, the boundary points computed, and a plot of the normal probability or function probability view. The toolbox also provides functions norm_fun_pdf , norm_fun_cdf and norm_fun_inv to compute pdf's, cdf's and inverse cdf's of functions of (multi)normals.
The function classify_normals for a binary classification problem inputs normal parameters, priors, outcome values and an optional classification boundary, and outputs the coefficients of the quadratic boundary β and points on it, the error matrix and discriminability indices d′_b, d′_a and d′_e, and produces a normal probability or function probability plot. Optionally, it can also return the contributions to d′ from each dimension, along with a divided color-bar in the plot. With sample input, it first estimates the normal parameters (means, covariances and priors) and returns the optimal boundary β, and also which sample points were correctly classified. In addition, it returns the coefficients of the sample-optimal boundary γ and points on it, error matrices and d′_b values of classifying the samples using β and γ, and the mapped scalar decision variables β(x) and γ(x) from the samples. The ability to scale the d′, by warping either the multivariate distributions or their scalar decision variables, is available in this function through the options d_scale and d_scale_type . The function classify_normals_multi does the same for classifying multiple normals.
Many of these examples (e.g. those of fig. 9) are available as interactive demos in the 'getting started' live script of the toolbox, and can be easily adapted to other problems.

Performance benchmarks
In this section we test the performance (accuracy and speed) of our Matlab implementations of the generalized chi-square and ray-trace algorithms, against a standard Monte Carlo integration algorithm. We first set up a case where we know the ground truth, and compare the estimates of all three methods at the limit of high discriminability (low error rates), where it is most challenging (which occurs for computational models and ideal observers). We take two 3d normals with the same covariance matrix, so that the true discriminability d′ is exactly calculated as their Mahalanobis distance. Now we increase their separation, while computing the optimal error with each of our methods at maximum precision, and a discriminability from it. The generalized chi-square method is very fast (due to the trivial planar boundary here), and the ray-trace method takes an average of 40 s. For a fair comparison, we use 10⁸ samples for the Monte Carlo, which also takes ∼40 s. Each method returns an estimate d̂′. Fig. 8a shows the relative inaccuracies |d̂′ − d′|/d′ as the true d′ increases. With increasing separation, the Monte Carlo method quickly becomes inaccurate, since the error rate, i.e.
the probability content in the integration domain, becomes too small to be sampled. The method stops working beyond d′ ≈ 10, where none of the 10⁸ samples fall in the error domain. In contrast, inaccuracies in our methods are extremely small, of the order of the double-precision machine epsilon ϵ, demonstrating that the algorithms contain no systematic error besides machine imprecision (however, Matlab's native integration methods may not always reach the desired precision for a problem). This is possible because a variety of techniques are built into our algorithms to preserve accuracy, such as holding tiny and large summands separate to prevent rounding, using symbolic instead of numerical calculations, and using accurate tail probabilities. The inaccuracies do not grow with increasing d′.
Next, we compare the three methods across several problems of fig. 2. The values here are large enough that Monte Carlo estimates are reliable and quick, so we use it as a provisional ground truth. We compute the values with all three methods up to maximum practicable precisions, then we calculate the relative (fractional) differences of our methods from the Monte Carlo. If a value is within the spread of the Monte Carlo estimates, we call the relative difference 0. Table 8b lists these, along with the times to compute the values to 1% precision on an AMD Ryzen Threadripper 2950X 16-core processor. We see that both of our methods produce accurate values at comparable speeds.
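The failure mode of Monte Carlo at small error rates can be sketched in Python (equal-variance 1d normals stand in for the paper's 3d example; the separation and sample size are arbitrary choices):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Error rate of two equal-variance normals N(0,1), N(d,1) is Phi(-d/2).
# A Monte Carlo estimate fails once the rate drops well below 1/n_samples.
d = 14.0
p_exact = norm.cdf(-d / 2)       # ~1e-12: exact, still representable

n = 10**6
x = rng.normal(0.0, 1.0, n)      # samples from the noise normal
p_mc = np.mean(x > d / 2)        # fraction beyond the midpoint criterion

assert p_exact > 0               # analytic tail probability survives
assert p_mc < 1e-5               # essentially no sample reaches the error region
```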

Conclusions
In this paper, we presented our methods and open-source software for computing integrals and classification performance for normal distributions over a wide range of situations.
We began by describing how to integrate any multinormal distribution in a quadratic domain using the generalized chi-square method, then presented our ray-trace method to integrate in any domain, using examples from our software.We explained how this is synonymous with computing cdf's of quadratic and arbitrary functions of normal vectors, which can then be used to compute their pdf's and inverse cdf's as well.
We then described how to compute, given the parameters of multiple multinormals or labelled data from them, the classification error matrix, with optimal or sub-optimal classifiers, and the maximum (Bayes-optimal) discriminability index d′_b between two normals, and the contributions to it from each dimension. We showed that the common indices d′_a and d′_e underestimate d′_b, and that, contrary to common use, d′_e is often a better approximation than d′_a, even for two-interval tasks. We presented two methods to scale the discriminability of data, to, say, match a model to a subject or observations.
We next described methods to merge and reduce dimensions for normal integration and classification problems without losing information.We presented tests for how reliably all the above methods, which assume normal distributions, can be used for other distributions.We followed this by demonstrating the speed and accuracy of the methods and software on different problems.
Finally, we illustrated all of the above methods on two visual detection research projects from our laboratory.
Although not developed here, the approach of our ray-trace integration method may carry over to other univariate and multivariate distributions. In the method, we spherically symmetrize the normal, find its distribution along any ray from the center, then add it over a grid of angles. This transforms all problem shapes to the canonical spherical form, then efficiently integrates outward from the center of the distribution. Some distributions, e.g. log-normal, can simply be transformed to a normal and then integrated with this method. For example, if y ∼ lognormal(µ = 1, σ = 0.5), then we can compute, say, p(sin y > 0) = p(sin e^x > 0) = 0.65 (where x is normal), and all other quantities such as pdf's, cdf's, and inverse cdf's of its arbitrary functions (see toolbox example guide).
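The log-normal example can be checked directly in Python: transform to x = ln y ∼ N(1, 0.5), and sum the normal mass over the intervals where sin(e^x) > 0, i.e. where y lies in (2kπ, (2k+1)π):

```python
import numpy as np
from scipy.stats import norm

# p(sin y > 0) for y ~ lognormal(mu=1, sigma=0.5), via x = ln y ~ N(1, 0.5)
mu, sigma = 1.0, 0.5

p = norm.cdf((np.log(np.pi) - mu) / sigma)   # k = 0 interval: y in (0, pi)
for k in range(1, 50):                       # remaining positive-sine bands
    lo, hi = np.log(2*k*np.pi), np.log((2*k + 1)*np.pi)
    p += norm.cdf((hi - mu)/sigma) - norm.cdf((lo - mu)/sigma)

assert abs(p - 0.65) < 0.01   # matches the quoted value p(sin y > 0) = 0.65
```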
For other distributions, our general method is still useful if they are already spherically symmetric (i.e. spherical distributions), or can be made so (e.g. elliptical distributions), and the ray distribution through the sphere can be found. When they cannot be spherized, the ray distribution (if calculable) will depend on the orientation, just as the integration domain does. But once this additional dependency has been taken into account, integrating along rays from the center should still be the efficient method for distributions that fall off away from their center.

Figure 1: Method schematic. a. Standard normal error ellipse is blue. Arrow indicates a ray from it at angle n in an angular slice dn, crossing the gray integration domain f(z) > 0 at z1 and z2. b. 1d slice of this picture along the ray. The standard normal density along the ray is blue. fn(z) is the slice of the domain function f(z) along the ray, crossing 0 at z1 and z2.

Figure 2: Toolbox outputs for some integration and classification problems. a. Top: the probability of a 3d normal (blue shows 1 sd error ellipsoid) in an implicit toroidal domain f_t(x) > 0. Black dots are boundary points within 3 sd traced by the ray method, across Matlab's adaptive integration grid over angles. Inset: pdf of f_t(x) and its integrated part (blue overlay). Bottom: integrating a 2d normal (blue error ellipse) in a domain built by the union of two circles. b. Estimates of the 4d standard normal probability in the 4d polyhedral domain Σᵢ₌₁⁴ |xᵢ| < 1, using the ray-trace method with Monte Carlo ray-sampling, across 5 runs, converging with growing sample size of rays. Inset: pdf of f_p(x) and its integrated part. c. Left: heat map of the joint pdf of two functions of a 2d normal, to be integrated over the implicit domain f1 − f2 > 1 (overlay). Right: corresponding integral of the normal over the domain h(x) = x1 sin x2 − x2 cos x1 > 1 (blue regions), 'traced' up to 3 sd (black dots). Inset: pdf of h(x) and its integrated part. d. Classifying two 2d normals using the optimal boundary l, which yields the Bayes-optimal discriminability d′_b. Color-bar shows the proportions by which the two dimensions contribute to the overall discriminability. d′_e and d′_a are approximate discriminability indices. A custom sub-optimal boundary such as the green line can also be used for classification. e. Classification based on samples (dots) from non-normal distributions. Filled ellipses are error ellipses of fitted normals. γ is an optimized boundary between the samples. The three error rates are: of the normals with l, of the samples with l, and of the samples with γ. f. Classifying several 2d normals with arbitrary means and covariances. g. Top: 1d projection of a 4d normal integral over a quadratic domain q(x) > 0. Bottom: projection of the classification of two 4d normals based on samples, with unequal priors and unequal outcome values (correctly classifying the blue class is valued 4 times the red, hence the optimal criterion is shifted), onto the axis of the Bayes decision variable β. Histograms and smooth curves are the projections of the samples and the fitted normals. The sample-optimized boundary γ = 0 cannot be uniquely projected to this β axis. h. Classification based on four 4d non-normal samples, with different priors and outcome values, projected on the axis along (1, 1, 1, 1). The boundaries cannot be projected to this axis.
Fig. 2c, left, is an example of a joint pdf of two functions of a bivariate normal with µ = (−2, 5) and Σ = [10, −7; −7, 10], computed in this way.

Figure 3: Binary yes/no and two-interval classification tasks. a. Optimal yes/no decision between two unequal-variance 1d normal distributions. b. The same task transformed to the log likelihood ratio axis (log vertical axis for clarity). c. Optimal two-interval discrimination between the same 1d normal distributions a and b is actually a discrimination between 2d normals ab and ba. d. The task transformed to the log likelihood ratio axis.
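The log-likelihood-ratio transformation of panels b and d can be illustrated numerically. The following hedged Python sketch (function names are ours, not the toolbox's) classifies by the sign of the log likelihood ratio under equal priors; in the equal-variance case it recovers the textbook Bayes error Φ(−d′/2):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def llr(x, mu_a, sd_a, mu_b, sd_b):
    # log likelihood ratio log p_a(x) - log p_b(x): the optimal
    # yes/no decision variable (cf. panel b)
    return norm.logpdf(x, mu_a, sd_a) - norm.logpdf(x, mu_b, sd_b)

def mc_bayes_error(mu_a, sd_a, mu_b, sd_b, n=10**6):
    # equal-prior Bayes error rate: classify by the sign of the log LR
    xa = rng.normal(mu_a, sd_a, n)
    xb = rng.normal(mu_b, sd_b, n)
    err_a = np.mean(llr(xa, mu_a, sd_a, mu_b, sd_b) <= 0)  # a misclassified
    err_b = np.mean(llr(xb, mu_a, sd_a, mu_b, sd_b) > 0)   # b misclassified
    return 0.5 * (err_a + err_b)
```

The same function handles the unequal-variance case of panel a, e.g. mc_bayes_error(0, 1, 1, 2), where the optimal rule corresponds to two boundary points on the x axis rather than a single criterion.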

Figure 4: Comparing discriminability indices. a. Plots of existing indices d′_a and d′_e as fractions of the Bayes index d′_b, with increasing separation between two 1d and two 2d normals, for different ratios s of their sds. b. Left: two normals with 1 sd error ellipses corresponding to their sd matrices S_a and S_b, and their average and rms sd matrices. Right: the space has been linearly transformed, so that a is now standard normal and b is aligned with the coordinate axes. c. Discriminating two highly separated 1d normals.
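In 1d the three indices compared above have simple closed forms. A brief Python sketch under their common definitions (the average-sd index d′_a, the rms-sd index d′_e, and the Bayes index d′_b defined as the separation of two unit-variance normals with the same Bayes error):

```python
import numpy as np
from scipy.stats import norm

def d_a(mu_a, sd_a, mu_b, sd_b):
    # average-sd index: mean difference over the mean of the sds
    return 2 * abs(mu_b - mu_a) / (sd_a + sd_b)

def d_e(mu_a, sd_a, mu_b, sd_b):
    # rms-sd index: mean difference over the rms of the sds
    return abs(mu_b - mu_a) / np.sqrt((sd_a**2 + sd_b**2) / 2)

def d_b_from_error(e_b):
    # Bayes index: separation of two unit-variance normals
    # whose equal-prior Bayes error equals e_b
    return -2 * norm.ppf(e_b)
```

With equal sds all three coincide (e.g. d′ = 2 for unit-sd normals two apart, whose Bayes error is Φ(−1)); with unequal sds, d′_a ≥ d′_e, since the mean of the sds never exceeds their rms.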

Figure 5: ROC curves. a. Yes/no ROC curves for a single shifting criterion (black) vs. a shifting likelihood-ratio (green), between the two 1d normals of fig. 3a (adapted from Wickens [16], fig. 9.3), and a shifting likelihood-ratio between a normal and a t distribution in 4d (purple). The optimal two-interval accuracies of the 1d normals (fig. 3c) and the 4d distributions (fig. 5b) are 0.74 and 0.97, equal to the areas under their likelihood-ratio curves here. The points marked on these curves are the farthest from the diagonal, and correspond to the Bayes discriminability. b. Distributions of the log likelihood ratio of the 4d t vs. normal distribution. Sweeping the criterion corresponds to moving along the purple ROC curve of a.
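The single-criterion ROC of panel a is easy to construct for 1d normals. An illustrative Python sketch (our own function names, not the toolbox's implementation) builds the curve and integrates it with the trapezoid rule; for equal variances the area reduces to the known value Φ(d′/√2):

```python
import numpy as np
from scipy.stats import norm

def criterion_roc(mu_a, sd_a, mu_b, sd_b, n=2001):
    # Yes/no ROC from a single sweeping criterion c: respond 'yes' when x > c
    lo = min(mu_a, mu_b) - 8 * max(sd_a, sd_b)
    hi = max(mu_a, mu_b) + 8 * max(sd_a, sd_b)
    c = np.linspace(lo, hi, n)
    fa = norm.sf(c, mu_a, sd_a)    # false-alarm rate: class a called 'yes'
    hit = norm.sf(c, mu_b, sd_b)   # hit rate: class b called 'yes'
    return fa, hit

def roc_area(fa, hit):
    # trapezoid rule; arrays are ordered by increasing criterion,
    # so fa decreases from 1 to 0
    return np.sum((hit[:-1] + hit[1:]) / 2 * (fa[:-1] - fa[1:]))
```

For unequal variances or non-normal distributions the area under the likelihood-ratio ROC (not this single-criterion curve) equals the optimal two-interval accuracy quoted in the caption.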

Figure 6: Scaling the discriminability of two distributions by linear interpolation. a. The discriminability of two general distributions (the orange one is non-normal) is scaled by linearly interpolating the mean and sd matrix of the orange towards the blue. Ellipses show their error ellipses. b. Scaling the discriminability of the data distributions in plot a by computing their decision variables under the normal model, and pulling them towards 0. p(e) is the error rate.
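Panel a's interpolation can be sketched in a few lines. Assuming the sd matrix is the symmetric matrix square root of the covariance, a hypothetical helper (name and signature ours) might look like:

```python
import numpy as np
from scipy.linalg import sqrtm

def interpolate_toward(mu_o, cov_o, mu_t, cov_t, gamma):
    # Linearly interpolate the orange distribution's mean and sd matrix
    # toward the blue target: gamma = 1 leaves it unchanged,
    # gamma = 0 makes the two distributions coincide.
    mu_o, mu_t = np.asarray(mu_o, float), np.asarray(mu_t, float)
    S_o = np.real(sqrtm(np.asarray(cov_o, float)))  # sd matrix of orange
    S_t = np.real(sqrtm(np.asarray(cov_t, float)))  # sd matrix of target
    mu = mu_t + gamma * (mu_o - mu_t)
    S = S_t + gamma * (S_o - S_t)
    return mu, S @ S.T  # interpolated mean and covariance
```

Interpolating the sd matrix rather than the covariance itself keeps the result a valid (positive semi-definite) covariance at every gamma.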

Figure 7 :
Figure 7: Testing normal approximations for classification.a. Classifying two empirical distributions (a is not normal).Gray curves are contours of l, i.e. the family of boundaries corresponding to varying likelihood ratios of the two fitted normals.b.Mean ± sd of hit and false alarm fractions observed (color fills) vs. predicted by the normal model (outlines), along this family of boundaries.Vertical line is the optimal boundary.c.Similar bands for class b hits and overall error, for the 4d 4-class problem of fig.2h, across boundaries assuming different priors p b , and d. across boundaries assuming different covariance scales (d ′ s).
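The observed-vs-predicted comparison of panel b can be reproduced with synthetic data. A hedged Python sketch (the lognormal example distribution and all names are our own choices): fit a normal to non-normal samples by moment-matching, then sweep a family of criteria and compare the empirical exceedance fractions with the model's predictions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Non-normal samples (lognormal), standing in for class a of fig. 7a
xa = np.exp(rng.normal(0, 0.5, 10**5))

# Fit a normal by moment-matching, as the normal model would
mu_a, sd_a = xa.mean(), xa.std()

# Sweep a family of criteria and compare observed vs predicted
# 'false alarm' fractions P(x > c)
crits = np.linspace(0, 5, 11)
fa_obs = np.array([(xa > c).mean() for c in crits])
fa_pred = norm.sf(crits, mu_a, sd_a)
max_gap = np.abs(fa_obs - fa_pred).max()  # large gaps flag non-normality
```

When the data really are normal, the observed and predicted bands overlap across the whole family of boundaries; systematic gaps like the one here indicate that the normal model's predictions should not be trusted.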

Figure 8: Performance benchmarks of the generalized chi-square (denoted by χ) and ray-trace methods, against a standard Monte Carlo method. a. Relative inaccuracies in d′ estimates by the Monte Carlo method (across multiple runs) and our two methods, as the true d′ increases. Monte Carlo estimates that take similar time rapidly become erroneous, failing beyond d′ ≈ 10. Our methods stay extremely accurate (around machine epsilon ϵ) up to d′ ≈ 75, which corresponds to the smallest error rate representable in double precision ('realmin'). b. For several problems of fig. 2, relative differences in the outputs of the two methods from the Monte Carlo estimate, and computation times for 1% precision.
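The Monte Carlo failure mode of panel a is easy to reproduce. A Python sketch (illustrative only, not the benchmark code): at large d′ the Bayes error of two unit-variance 1d normals, Φ(−d′/2), is far smaller than 1/n for any feasible sample size n, so a sampling estimate sees no errors at all, while the analytic value remains representable in double precision up to d′ ≈ 75:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Bayes error of two unit-variance 1d normals separated by d' is Phi(-d'/2)
d = 70.0
p_analytic = norm.cdf(-d / 2)  # ~1e-268: tiny, but exactly representable

# Plain Monte Carlo with a feasible sample size observes zero errors,
# so its estimate of such a rate is useless
n = 10**6
p_mc = np.mean(rng.standard_normal(n) > d / 2)
```

The smallest positive double ('realmin') is about 2.2e−308, which Φ(−d′/2) crosses near d′ ≈ 75; beyond that, the error rate underflows to zero even analytically.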
is the best discriminability. Unfortunately, this does not have a simple relationship with d′_b(a, b) for the yes/no task. But we can calculate mathematically here that d′_e(ab, ba) = √2 d′_e(a, b).
different example problems, including every problem discussed in this paper (examples in figs. 2, 3, 4 and 5, tests in figs. 7 and 8, and research applications in fig.