Nonlinear blind source separation exploiting spatial nonstationarity

In spatial blind source separation the observed multivariate random fields are assumed to be mixtures of latent spatially dependent random fields. The objective is to recover the latent random fields by estimating the unmixing transformation. Currently, the algorithms for spatial blind source separation can only estimate linear unmixing transformations, and nonlinear blind source separation methods for spatial data are scarce. In this paper we extend an identifiable variational autoencoder that can estimate nonlinear unmixing transformations to spatially dependent data and demonstrate its performance for both stationary and nonstationary spatial data using simulations. In addition, we introduce scaled mean absolute Shapley additive explanations for interpreting the latent components through the nonlinear mixing transformation. The spatial identifiable variational autoencoder is applied to a geochemical dataset to find the latent random fields, which are then interpreted by using the scaled mean absolute Shapley additive explanations. Finally, we illustrate how the proposed method can be used as a pre-processing method when making multivariate predictions.


Introduction
Nowadays spatially indexed data are encountered in many fields of science. For example, in geochemistry, samples are collected at different locations and analysed for their chemical compositions to detect patterns, to identify areas of pollution, or for mining purposes. From now on, we denote the location as s ∈ S ⊂ R^q, and let x(s) = (x_1(s), ..., x_d(s))^⊤ be the observable d-variate vector. Here the set of possible locations S is the domain and the data are usually called spatial data. In most applications q = 2 and for simplicity, without loss of generality, we assume the same from now on. Spatial data are usually characterized by their mean function m(s) and a (spatial) covariance function C(x(s), x(s′)) = E[(x(s) − m(s))(x(s′) − m(s′))^⊤], where s, s′ ∈ S. The main idea in modeling is that observations closer together are more similar than observations further apart. However, as the covariance function is usually quite complex, modeling spatial data is often difficult and is, for example, much more challenging than modeling time series data, which have a natural direction (past to present) that is missing in spatial data. Thus, x(s) is quite often assumed to be stationary to simplify spatial modeling, meaning that m(s) = m for all s ∈ S and C(x(s), x(s′)) = C(||s − s′||) for all s, s′ ∈ S, i.e. the mean is the same at all locations and the spatial dependence depends only on the distance between observations. For an overview discussing the complexity of modeling C, see [1].
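As a concrete illustration of stationarity (our own example, not a model used later in the paper), consider the exponential covariance C(h) = σ² exp(−h/ϕ), which depends on the locations only through their distance; a minimal sketch:

```python
import math

def exp_cov(s, s_prime, sigma2=1.0, phi=10.0):
    # Stationary covariance: the locations enter only through their
    # Euclidean distance h = ||s - s'||.
    h = math.dist(s, s_prime)
    return sigma2 * math.exp(-h / phi)

# Two pairs of locations with the same separation distance (h = 5)...
c1 = exp_cov((0.0, 0.0), (3.0, 4.0))
c2 = exp_cov((50.0, 50.0), (53.0, 54.0))
# ...yield the same covariance under stationarity.
print(c1 == c2)  # True
```

Under nonstationarity, by contrast, the covariance may change with the absolute positions of s and s′ even when the distance is fixed.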

arXiv:2311.08004v2 [stat.ME] 20 Dec 2023
Another suggestion for simplifying spatial data modeling is to consider latent component modeling as a pre-processing step. The data are then represented using d independent, latent components z(s) = (z_1(s), ..., z_d(s))^⊤. Thus, after pre-processing, univariate models can be fitted to the components, and the components can be interpreted individually and predicted separately [2]. While the latent components can simply be seen as useful modeling tools, they often turn out to have physical meanings, for which, however, subject knowledge is usually needed. A popular approach to latent component modeling is linear blind source separation (BSS) [3], which aims to find independent latent components from the observed data by assuming that the observations are generated from the components through some linear mixing system. Recently, [4,5,6] suggested a spatial BSS (SBSS) approach for stationary multivariate spatial data, where the independent latent components are obtained by jointly diagonalizing two or more moment-based matrices. SBSS was also extended to nonstationary multivariate spatial data by [7], then yielding nonstationary spatial source separation (SNSS). In the SNSS model, x(s) is assumed to be stationary with respect to m but not with respect to C. A drawback of SBSS and SNSS is that both assume a linear mixing procedure, which is interpretable and mathematically tractable but might be too simplistic in reality.
The motivation for the use of nonlinear BSS is the same as for linear BSS, that is, one wants to find a useful representation of high-dimensional data to be used in further analyses [8]. As the mixing procedure is assumed to be nonlinear, the components are unidentifiable without additional assumptions on the mixing function or on the distribution of the latent components [9]. Recently, unsupervised deep learning methods, such as variational autoencoders (VAEs) [10] and their extensions, such as [11,12], were applied to find the unmixing function and latent components based on the observed data. As regular VAE solutions in general suffer from unidentifiability issues, [13] proposed, in a time series context, the identifiable VAE (iVAE) method, which uses some additionally observed data to make the VAE model identifiable. In [14] a structured nonlinear BSS framework, which is suitable also for two-dimensional graphs, was proposed. However, to our knowledge, models for spatial nonlinear BSS have not been developed or studied yet.
In this paper, we extend the iVAE method to spatially dependent data and study its performance using extensive simulation studies and a real geochemical data example. The nonlinear BSS framework in general lacks tools for latent component interpretation, which is an essential task when analyzing the discovered latent representation. Therefore, a procedure based on Shapley additive explanations [15] for interpreting the latent components through the nonlinear mixing transformation is introduced and illustrated. The paper is organized as follows. We review the basics of linear and nonlinear BSS in Section 2. Sections 3 and 4 introduce the new iVAE method and new tools for interpreting the latent components, respectively. In Sections 5 and 6, simulation studies and a real data example are used to illustrate the performance properties of iVAE. The paper is concluded with a discussion in Sections 7 and 8.

Background
Before showing how iVAE can be extended to spatial data, we give some background on linear and nonlinear BSS as well as Shapley values, which are later used to interpret the latent components.

Linear and nonlinear BSS
Assume for now that x = (x_1, ..., x_d)^⊤ is a d-variate observable random vector. In linear blind source separation (BSS) one assumes that x is a linear mixture of an unknown d-variate latent vector z = (z_1, ..., z_d)^⊤, that is,

x = A z,    (1)

where A is an invertible d × d mixing matrix. The goal in BSS is to recover both A and z based on x only, but this cannot be done without making some additional assumptions on z. Different BSS methods vary in the assumptions made on the latent components. For example, in independent component analysis (ICA) we assume iid data with independent latent components, and in second-order source separation (SOS) we assume that the components are weakly stationary time series. For a general overview of BSS methods, see [3]. In [5], spatial BSS methods were introduced.
As the assumption of linear mixing is often too restrictive, nonlinear BSS methods have been introduced. For a recent review of methods, see [8]. Notice that although the methodology is often referred to as nonlinear ICA, in what follows, we use the more general term, nonlinear BSS, as it allows the components to be of any data type. In nonlinear BSS one assumes that x is generated by applying an unknown invertible mixing transformation f: R^d → R^d to the latent components z as

x = f(z).    (2)

The objective is then to identify the transformation q: R^d → R^d which returns the latent components as

z = q(x),    (3)

based on observations of the vector x only. Generally, without additional assumptions on the mixing function or on the distribution of the latent components, nonlinear BSS is unidentifiable, as there is an infinite amount of nonlinear transformations generating mutually independent components from the observations [9]. Thus, additional constraints on the distribution of z must be introduced to obtain identifiability without restricting the mixing transformation f severely.
In many recent studies [16,17,18,13,19], the main assumption leading to identifiability is that each latent component z_i, i = 1, ..., d, is statistically dependent on an m-dimensional auxiliary variable u, and that the conditional distribution p(z_i|u) belongs to the exponential family with parameters λ_{i,k}, k = 1, ..., r. Assuming the d conditional latent distributions to be independent then yields the joint distribution p(z|u) = ∏_{i=1}^d p(z_i|u). Following [13], the dependence of the parameters λ_{i,k} on u is expressed using a function λ(u) = (λ_1(u), ..., λ_d(u)), where λ_i(u) = (λ_{i,1}(u), ..., λ_{i,r}(u))^⊤ contains the parameters depending on u for component i. This function must be learned, for example, using some neural network. The joint conditional distribution can then be written as

p(z|u) = ∏_{i=1}^d [Q_i(z_i)/Z_i(u)] exp( Σ_{k=1}^r T_{i,k}(z_i) λ_{i,k}(u) ),    (4)

where Q_i is a base measure, Z_i(u) is a normalizing constant and T_i(z_i) = (T_{i,1}(z_i), ..., T_{i,r}(z_i))^⊤ contains the sufficient statistics. The dimension r of each sufficient statistic T_i(z_i) and λ_i(u) is assumed to be fixed. By assuming the source distribution (4) and the additionally observed auxiliary variable u, the aim becomes identifying the latent sources using the unmixing transformation q(x, u).
In a time series context there are several examples of auxiliary variables in the literature. In the case of stationary time series, one can use as u, for example, the previous observations [17], and for nonstationary time series data, the time segment of the observation can be used [16,13,19]. We propose in Section 3 a nonlinear BSS method for spatial data. The method extends the method proposed in [13], which uses VAEs with auxiliary variables to learn the full identifiable generative model, and is thus referred to as identifiable VAE (iVAE).

SHAP values and mean absolute SHAP values
The main advantage of using nonlinear BSS methods is that they can discover a meaningful underlying latent representation through a nonlinear mixing environment by recovering the true latent components z. However, the interpretation of the latent representation can be difficult without knowing how the observed variables interact with the latent components. In the case of linear BSS, it is easy to inspect the contributions through the mixing/unmixing matrices, but when the mixing function is nonlinear, no such methods are available. Later in this paper, we suggest novel metrics for evaluating the contributions of input variables for nonlinear models with multiple outputs, such as iVAE. We explain the procedure for calculating these values in detail in Section 4, but let us first review the Shapley values [20] and Shapley additive explanations (SHAP) [15] on which our metrics are based.
The SHAP framework is based on the Shapley values, which were originally used in cooperative game theory to fairly distribute the payout of a game among a group of players. Let K be a set of K players and γ be a function giving the payout of the game for any set of players A ⊆ K. The Shapley values calculated based on the function γ produce the attributions a_i for each player x_i ∈ K so that

γ(A) = a_0 + Σ_{x_i ∈ K} a_i x′_i,    (5)

where A ⊆ K, a_0 is a baseline output without any players and x′_i = 1(x_i ∈ A) is a binary variable which takes the value 1 if x_i belongs to the set A and the value 0 otherwise. The function 1 is an indicator function. The Shapley value for player x_i is then given by

a_i = Σ_{A ⊆ K\{x_i}} [ |A|! (K − |A| − 1)! / K! ] ( γ(A ∪ {x_i}) − γ(A) ),    (6)

where |·| denotes the size of a set. The game can be viewed as any mathematical function: the function inputs correspond to the players, and the output of the function corresponds to the payout. If the game γ is a statistical model, the players are observations of K variables K = {x_1, ..., x_K} and the payout is the predicted value of a variable y produced by γ with an input A ⊆ K, then the attribution of an observation x_i of the ith variable to the outcome prediction is given by a_i.
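For a small number of players, the Shapley value in (6) can be computed exactly by enumerating all subsets. A minimal pure-Python sketch (the additive toy game is our own illustration, not from the paper):

```python
import itertools
from math import factorial

def shapley_values(gamma, players):
    """Exact Shapley values: enumerate all subsets A of K \ {i} and weight
    the marginal contributions by |A|! (K - |A| - 1)! / K!."""
    K = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        a_i = 0.0
        for r in range(K):
            for A in itertools.combinations(others, r):
                w = factorial(len(A)) * factorial(K - len(A) - 1) / factorial(K)
                a_i += w * (gamma(set(A) | {i}) - gamma(set(A)))
        values[i] = a_i
    return values

# Toy additive game: the payout is the sum of the players' stand-alone
# worths, so each player's Shapley value equals its own worth.
worth = {"x1": 1.0, "x2": 2.0, "x3": 3.0}
vals = shapley_values(lambda A: sum(worth[p] for p in A), list(worth))
print({k: round(v, 6) for k, v in vals.items()})  # {'x1': 1.0, 'x2': 2.0, 'x3': 3.0}
```

The enumeration over all 2^(K−1) subsets per player is exactly what makes the computation heavy for large K, motivating the kernel SHAP approximation discussed next.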
Typically, statistical models have a fixed input size, which is why the formulation in (6) cannot be applied directly: the model γ cannot be evaluated on a strict subset A of its inputs. Instead, the payout is defined as the conditional expected value E(γ(x)|A), which is calculated in practice by averaging over the outputs of γ on a representative background dataset while keeping the observed values in the set A fixed. While it is possible to calculate the exact SHAP values using this procedure, the computational burden becomes heavy when the number of variables is high.
To reduce the computational burden, the kernel SHAP method [15] can be used. Kernel SHAP approximates the SHAP values ã_i, defined through the additive model

γ̂(A) = ã_0 + Σ_{x_i ∈ K} ã_i x′_i,    (7)

based on local interpretable model-agnostic explanations (LIME) [21] by minimizing

Σ_{A ⊆ K} π(A) ( E(γ(x)|A) − γ̂(A) )²    (8)

with respect to the values ã_i, where π(A) is a weighting kernel. By using the specific kernel

π(A) = (K − 1) / [ C(K, |A|) |A| (K − |A|) ],    (9)

called the Shapley kernel, where C(K, |A|) denotes the binomial coefficient, the algorithm in fact recovers the Shapley values. For more details, see [15].
The SHAP values are calculated individually for each (multivariate) observation/prediction pair, making them a useful tool to interpret how each component of the observation affects the model prediction. In many cases we are interested in population level effects, that is, how the observed variables affect the model prediction at the population level. Methods such as mean absolute SHAP (MASHAP) [22] and Shapley global additive importance (SAGE) [23] aim to obtain the population level importance of the input variables. In this paper we focus on MASHAP values, which can be easily extended to models with multiple outputs, as will be shown in Section 4. Let now n be the number of observations and d the number of observable variables, and let ã_{i,j} be the SHAP value for the ith variable of the jth observation.
The MASHAP value for the ith variable is then calculated as

v_i = (1/n) Σ_{j=1}^n |ã_{i,j}|.    (10)

The MASHAP value v_i can be interpreted as the contribution or importance of the ith input variable over the whole output population.
Nonstationary spatial source separation using VAE
In this paper, we use iVAE [13] to solve the nonlinear SNSS problem by assuming nonstationarity in the variance of the latent fields. We use spatial segmentation as an auxiliary variable u and assume that the sources are distributed as in (4). By spatial segmentation we mean that the domain S is divided into m segments S_i ⊂ S, i = 1, ..., m, so that S_i ∩ S_j = ∅ for all i ≠ j and ∪_{i=1}^m S_i = S. Then, the auxiliary variable for the observation x(s) is u(s) = (1(s ∈ S_1), ..., 1(s ∈ S_m))^⊤, i.e. an m-dimensional standard basis vector giving the segment corresponding to the spatial location of the observation. From now on, we use the notation x := x(s), z := z(s) and u := u(s) for the sake of convenience. iVAE assumes the following generative model:

p_{f,T,λ}(x, z|u) = p_f(x|z) p_{T,λ}(z|u),    (11)

where p_f(x|z) is defined as

p_f(x|z) = p_{ϵ_f}(x − f(z)),    (12)

which means that x can be composed of an independent noise vector ϵ_f and a mixing transformation f(z) as x = f(z) + ϵ_f. The distribution p_{ϵ_f} is assumed to be a zero mean Gaussian distribution with infinitesimal variance to match the form of the nonnoisy nonlinear ICA model in (2). In the limit of infinite data, iVAE recovers the true latent components z up to permutation and signed scaling under some mild conditions on the mixing function f, the sufficient statistics T and the auxiliary variable u [13]. Here, we focus on Gaussian distributed latent variables, in which case the variances of the latent components are required to vary enough based on the auxiliary variable u, and the mixing function f is required to have continuous partial derivatives.
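The segmentation-based auxiliary variable can be sketched as follows. The 20 × 20 grid on [0, 100]² matches the segmentation used in our simulations, but the helper `segment_one_hot` itself is our own illustration:

```python
def segment_one_hot(s, domain=100.0, grid=20):
    """u(s): one-hot vector indicating which cell of a grid x grid
    segmentation of [0, domain]^2 the location s falls into."""
    x, y = s
    col = min(int(x / domain * grid), grid - 1)  # clamp the right/top edges
    row = min(int(y / domain * grid), grid - 1)
    idx = row * grid + col
    u = [0] * (grid * grid)
    u[idx] = 1
    return u

u = segment_one_hot((12.3, 97.8))
print(sum(u), u.index(1))  # 1 382
```

Exactly one entry of u(s) equals one, so u is the m-dimensional standard basis vector described above with m = 400.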
iVAE consists mainly of the encoder g, the decoder h and the auxiliary function w. The encoder g(x, u) = (g_µ(x, u)^⊤, g_σ(x, u)^⊤)^⊤ maps the observed data (x, u) to mean and variance vectors µ_{z|x,u} and σ_{z|x,u}. The mean and the variance are used to sample a new latent representation z′ by using the reparametrization trick [10], that is, by generating z′ as z′ = µ_{z|x,u} + σ_{z|x,u} ⊙ ϵ, where ⊙ denotes the elementwise product, ϵ ∼ N(0, I) and I is an identity matrix. The estimate of the unmixing transformation q is obtained as g_µ. This means that we obtain the latent components z as µ_{z|x,u}, which is the mean of the variational distribution given by the encoder. Meanwhile, the decoder h(z) estimates the true mixing function f(z) and maps the latent representation z′ back to observable data x′. The auxiliary function w(u) estimates the parameters λ(u) of (4) in the segment given by u. The functions g, h and w are deep neural networks, which are flexible function estimators; we denote their parameters, which must be estimated, by θ_g, θ_h and θ_w, respectively. Coming back to the generative model in (11), we now have

p_θ(x, z|u) = p_{θ_h}(x|z) p_{θ_w}(z|u),    (13)

where p_{θ_h}(x|z) is defined as in (12). iVAE learns simultaneously the full latent generative model and the variational approximation q_{θ_g}(z|x, u) of its true posterior distribution p(z|x, u). The parameters θ = (θ_g^⊤, θ_h^⊤, θ_w^⊤)^⊤ are estimated by maximizing the lower bound of the data log-likelihood defined by

L(θ) ≥ E_{q_{θ_g}(z|x,u)} [ log p_{θ_h}(x|z) + log p_{θ_w}(z|u) − log q_{θ_g}(z|x, u) ].    (14)

The variational approximation q_{θ_g}(z|x, u) is obtained using the reparametrization trick. The schematic presentation of iVAE is illustrated in Fig. 1. In our simulation studies in Section 5 we assume the source density p_{θ_w}(z|u) to be Gaussian, where the mean and variance vectors, µ_{z|u} and σ_{z|u}, are given by the auxiliary function w(u). Hence, in practice p_{θ_h}(x|z) is a Gaussian distribution with mean h(z) and covariance β I, where β > 0 is a constant close to zero, as p_{θ_h} should estimate the true distribution p_f with infinitesimal variance. We use β = 0.01.
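A single Monte Carlo draw of the lower bound described above can be sketched for a simplified univariate Gaussian case. The function names are our own, and the example deliberately strips the model down to one latent dimension and one observation; it is not the actual implementation:

```python
import math, random

def log_normal(x, mu, var):
    # Log-density of a univariate Gaussian.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def elbo_one_sample(x, mu_post, var_post, mu_prior, var_prior, h, beta=0.01):
    """One Monte Carlo estimate of the lower bound for one observation:
    reconstruction term + prior term - entropy term. mu_post/var_post stand
    for the encoder output, mu_prior/var_prior for the auxiliary function
    output, and h for the decoder."""
    eps = random.gauss(0.0, 1.0)
    z = mu_post + math.sqrt(var_post) * eps        # reparametrization trick
    rec = log_normal(x, h(z), beta)                # log p_{theta_h}(x|z)
    prior = log_normal(z, mu_prior, var_prior)     # log p_{theta_w}(z|u)
    entropy = log_normal(z, mu_post, var_post)     # log q_{theta_g}(z|x,u)
    return rec + prior - entropy

random.seed(1)
val = elbo_one_sample(x=0.5, mu_post=0.4, var_post=0.1,
                      mu_prior=0.0, var_prior=1.0, h=lambda z: z)
print(val)
```

In training, such draws are averaged over mini-batches and the bound is maximized with respect to the network parameters via stochastic gradient methods.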

Figure 1: Schematic presentation of iVAE. iVAE learns simultaneously the encoder, the decoder, and the auxiliary function. The latent components z are obtained as µ_{z|x,u}, which is the mean of the variational distribution given by the encoder.

Interpreting the latent representation
In this section, we introduce the scaled MASHAP values and explain how they can be used to evaluate variable importance for models with multiple outputs.
MASHAP and SAGE, as defined in Section 2.2, successfully estimate the population level importance of input variables for a single output variable. However, for a model with multiple outputs, it would be beneficial to be able to compare the population level importance values across the different outputs, as well as to obtain an importance value for each input variable with respect to all output variables. Here, we suggest a heuristic procedure to obtain scaled MASHAP values, which are importance values comparable between different output variables, and to calculate average scaled MASHAP values, which are interpreted as population level importance values for models with multiple output variables. The procedure to obtain scaled MASHAP and average scaled MASHAP values is summarized in Algorithm 1 and discussed in detail next.
Algorithm 1: An algorithm for calculating the scaled MASHAP values and average scaled MASHAP values.
for each output i = 1, ..., H do
    Set γ = γ*_i;
    for each input variable k = 1, ..., K do
        Calculate SHAP values ã_{k,1}, ..., ã_{k,n} as in (7);
        Calculate the MASHAP value v_k as in (10);
    end
    Scale the MASHAP values v_1, ..., v_K to unit sum;
end
Calculate the average scaled MASHAP values as column-wise means of the scaled values;

Let γ* be a model or a function with H-dimensional output. As SHAP and MASHAP assume a function with a single output, we calculate the MASHAP values independently for each output i = 1, ..., H of the function γ* by setting γ = γ*_i. Then, we obtain the contributions of each input variable to each output variable. However, by using this procedure, the MASHAP values are not comparable across different output variables. This is because the MASHAP values give only the relative importance of each input variable to a single output variable. As the distributions and scales of the output variables might differ, the sum of the MASHAP values of the input variables is not constant. Hence, the values are not relative to each other across the output variables, and comparisons of the MASHAP values might lead to false interpretations.
To make the MASHAP values comparable, we suggest scaling the values to unit sum for each individual output variable. The scaling ensures that all output variables are weighted identically, making the scaled MASHAP values comparable also across the outputs. In addition, we propose average scaled MASHAP values to identify the importance of the input variables for a model with multiple output variables. Let V be the H × K matrix whose rows contain the scaled MASHAP values of the K input variables for each of the H outputs. By taking means column-wise, we get a K-dimensional vector v* giving the relative importance of each input variable.
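The scaling and averaging steps can be sketched for precomputed SHAP values; the helper `scaled_mashap` and the toy numbers below are our own illustration:

```python
def scaled_mashap(shap_values):
    """shap_values[i][k][j]: SHAP value of input k for observation j of
    output i. Returns the scaled MASHAP matrix V (one row per output,
    each row summing to one) and the average scaled MASHAP vector v*."""
    V = []
    for per_output in shap_values:                 # loop over the H outputs
        mashap = [sum(abs(a) for a in obs) / len(obs) for obs in per_output]
        total = sum(mashap)                        # scale to unit sum so the
        V.append([v / total for v in mashap])      # outputs become comparable
    K = len(V[0])
    v_star = [sum(row[k] for row in V) / len(V) for k in range(K)]
    return V, v_star

# Two outputs, two inputs, three observations each.
shap_vals = [
    [[1.0, -1.0, 1.0], [1.0, 1.0, 1.0]],   # output 1: inputs equally important
    [[3.0, 3.0, 3.0], [1.0, 1.0, 1.0]],    # output 2: input 1 dominates
]
V, v_star = scaled_mashap(shap_vals)
print(V)       # [[0.5, 0.5], [0.75, 0.25]]
print(v_star)  # [0.625, 0.375]
```

Without the unit-sum scaling, output 2 (whose raw MASHAP values sum to 4) would dominate the column-wise averages simply because of its larger scale.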
In the context of nonlinear BSS, the scaled MASHAP values can be used to evaluate the contribution of each observed variable to a latent component, and the contribution of each latent component to an observed variable. By calculating the average scaled MASHAP values for the BSS model's mixing estimate, we obtain importance values for each latent component. When the average scaled MASHAP value is high, the latent component explains a high proportion of the original variables, and when it is low, the latent component does not have much impact on most of the variables.
The interpretations based on scaled MASHAP and average scaled MASHAP values are demonstrated in more detail in Section 6.
Simulation studies
While we have theoretical identifiability results when the variances of the latent components vary enough based on the auxiliary variable u, these results apply only in the limit of infinite data. In real-life applications, with finite data, there is no guarantee that the identifiability conditions are fulfilled. In this section, we study the performance of iVAE in various spatial scenarios with finite sample sizes using simulation studies. We consider six different simulation settings, where the observed data are generated from the nonlinear ICA model (2). The underlying latent components z are generated differently in each setting. Some of the settings exhibit nonstationary variance, meaning that the identifiability conditions are fulfilled, while other settings are stationary, for which the identifiability conditions are not fulfilled. We compare iVAE to SBSS and SNSS, although they are developed for linear mixing and are thus not optimal when the mixing function is nonlinear. In addition, iVAE is compared against a modified version of time contrastive learning (TCL) [16], where the auxiliary variable is a spatial segmentation instead of a time segmentation. TCL exploits nonstationarity in variance when solving the BSS problem and can estimate nonlinear unmixing transformations. The simulations can be reproduced using R 4.3.0 [24] together with the R packages SpatialBSS [25], gstat [26] and NonlinearBSS. The NonlinearBSS package, which is available at https://github.com/mikasip/NonlinearBSS, contains an R implementation of spatial iVAE and some methods to generate nonstationary spatial data from the nonlinear ICA model (2). The simulations were executed on the CSC Puhti cluster, a high-performance computing environment.
In all simulation settings, n = 5000 locations s_j, j = 1, ..., n, are sampled uniformly in the spatial domain S = [0, 100] × [0, 100]. For each location s_j a three-variate observation x_j = x(s_j) is generated, meaning that d = 3 in all simulations. In Settings 1, 2 and 3, the data consist of ten clusters and are generated using different parameters in each cluster. The clusters are generated by sampling ten random points and dividing the field so that each cluster contains the locations that have the smallest distance to the corresponding point. Notice that we did not use the ground truth segmentation as an auxiliary variable in modeling, as was done in previous time series studies such as [16,17,18,13]. Instead, we used small enough segments to allow the model to approximate the ground truth clusters without having prior information on them. This is illustrated in Fig. 2, where n = 5000 uniformly sampled points in ten clusters are presented. In the figure, the colors represent the ten generated ground truth clusters and the grid on top of the plot illustrates the spatial segmentation used by iVAE and TCL.

Settings 1 and 2. The latent field consists of independently and identically distributed (iid) three-variate Gaussian vectors. Each cluster, k = 1, ..., 10, has a unique diagonal covariance matrix. In Settings 1 and 2 the mean vectors are µ_k = (0, 0, 0)^⊤ and µ_k = (µ_{1,k}, µ_{2,k}, µ_{3,k})^⊤, with µ_{1,k}, µ_{2,k}, µ_{3,k} ∼ Unif(−5, 5), respectively.
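The cluster construction above (uniform locations assigned to the nearest of ten random seed points, i.e. a Voronoi partition) can be sketched as follows; the helper and the seed choice are our own illustration:

```python
import math, random

def generate_clusters(n=5000, n_clusters=10, domain=100.0, seed=0):
    """Sample n locations uniformly on [0, domain]^2 and assign each one to
    the cluster of its nearest randomly drawn seed point."""
    rng = random.Random(seed)
    seeds = [(rng.uniform(0, domain), rng.uniform(0, domain))
             for _ in range(n_clusters)]
    locs = [(rng.uniform(0, domain), rng.uniform(0, domain)) for _ in range(n)]
    labels = [min(range(n_clusters), key=lambda k: math.dist(s, seeds[k]))
              for s in locs]
    return locs, labels

locs, labels = generate_clusters(n=200)
print(len(set(labels)))  # number of clusters actually occupied (at most 10)
```

Per-cluster latent parameters can then be attached to the integer labels to produce the nonstationary fields of Settings 1-3.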
Setting 3. The latent field consists of three-variate Gaussian fields with a Matern correlation structure. Each cluster has its own parameters for the Matern covariance function

M(h; ν, ϕ) = [2^(1−ν) / Γ(ν)] (h/ϕ)^ν K_ν(h/ϕ),    (15)

where h = ||s − s′|| is the Euclidean distance between two locations s and s′, ν > 0 is a scale parameter, ϕ > 0 is a range parameter, Γ is the gamma function, and K_ν is the modified Bessel function of the second kind with shape parameter ν. The range and scale parameters are generated independently from ν_i ∼ Unif(0.1, 5) and ϕ_i ∼ Unif(0.5, 8).

Setting 6. The latent field is a single three-variate random field with a nonstationary Matern correlation structure as presented in [27]:

C(s, s′) = σ(s) σ(s′) M(h; ν(s, s′), ϕ(s, s′)),    (16)

where h = ||s − s′||, we denote ν(s, s′) = (ν(s) + ν(s′))/2 and ϕ(s, s′) = (ϕ(s) + ϕ(s′))/2, and σ: R^2 → R_+, ν: R^2 → R_+ and ϕ: R^2 → R_+ are the local variance, shape, and range parameter functions, respectively. These functions are chosen to vary over the spatial domain, making the field nonstationary.

Settings 1 and 2 are considered the easiest settings for the iVAE model as the variances (and means in Setting 2) of the latent fields explicitly change between the clusters. These settings are spatial variants of the time series settings of some previous simulation studies, such as [13,16], where the latent components have varying means/variances based on some time segments. The settings aim to set a baseline performance under conditions where the variances change explicitly based on our chosen auxiliary variable. Meanwhile, Settings 4 and 5 are used to measure the performance when the latent fields are stationary, and thus not identifiable in theory. Of these two, Setting 4 has higher variability in sample mean and in sample variance through the latent fields, making it more favorable for iVAE. Settings 3 and 6 are chosen to illustrate the performance when the latent fields are nonstationary.
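For half-integer shape parameters the Matern correlation has well-known closed forms, which a sketch can use directly; general ν would require the modified Bessel function K_ν (e.g. scipy.special.kv). The helper below is our own illustration, written for the parameterization 2^(1−ν)/Γ(ν) (h/ϕ)^ν K_ν(h/ϕ):

```python
import math

def matern(h, nu, phi):
    """Matern correlation in closed form for nu = 1/2 (exponential) and
    nu = 3/2; other shape parameters need a Bessel function."""
    if h == 0:
        return 1.0
    x = h / phi
    if nu == 0.5:
        return math.exp(-x)
    if nu == 1.5:
        return (1.0 + x) * math.exp(-x)
    raise NotImplementedError("general nu requires scipy.special.kv")

print(matern(0.0, 0.5, 8.0))   # 1.0: full correlation at zero distance
print(matern(8.0, 0.5, 8.0))   # exp(-1), roughly 0.3679
```

Smaller ϕ makes the correlation decay faster with distance, while larger ν yields smoother sample paths.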
Mixing procedure. The observed data x are generated by applying a mixing function f_L to the latent fields z as x = f_L(z). The mixing function f_L is generated using a multilayer perceptron (MLP) with L layers, following previous studies [16,18,13]. To make the mixing function invertible and differentiable, the number of hidden units in each layer equals the number of latent components, and the nonlinear activation functions are smooth bijective functions, such as the hyperbolic tangent or the exponential linear unit (ELU) [28]. In addition, the mixing matrices of the layers of the MLP are normalized to have unit length row and column vectors to guarantee that none of the latent components vanish in the mixing process. Let ω_i be the activation function of the ith layer and B_i be the normalized mixing matrix of the ith layer. Then, the mixing function f_L is defined as

f_L(z) = ω_L(B_L ω_{L−1}(B_{L−1} · · · ω_1(B_1 z))).    (17)

In these simulations, we used the linear activation ω_L(x) = x for the last layer and the ELU activation ω_i, i = 1, ..., L − 1, for the other layers. This means that the mixing function f_1 with one layer corresponds to the linear mixing case. The simulations are repeated 1000 times with 1, 2 and 3 mixing layers.
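The MLP mixing construction can be sketched as follows; for simplicity we normalize only the rows of each mixing matrix, a simplification of the row-and-column normalization described above, and the helpers are our own illustration:

```python
import math, random

def elu(v):
    # Smooth bijective activation.
    return v if v > 0 else math.exp(v) - 1.0

def normalize_rows(B):
    # Unit-length rows so that no latent component vanishes in the mixing.
    out = []
    for row in B:
        norm = math.sqrt(sum(b * b for b in row))
        out.append([b / norm for b in row])
    return out

def mix(z, mats):
    """Apply matrix multiplication and ELU activation layer by layer;
    the last layer uses the identity activation, so a single matrix
    corresponds to linear mixing."""
    x = list(z)
    for i, B in enumerate(mats):
        x = [sum(b * v for b, v in zip(row, x)) for row in B]
        if i < len(mats) - 1:
            x = [elu(v) for v in x]
    return x

rng = random.Random(0)
d, L = 3, 2
mats = [normalize_rows([[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(d)])
        for _ in range(L)]
x = mix([0.3, -1.2, 0.8], mats)
print(len(x))  # 3
```

With `mats` containing a single identity matrix, `mix` returns its input unchanged, matching the linear one-layer case.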
Model specifications. In iVAE's encoder, decoder, and auxiliary function, we used three hidden layers with 128 units and leaky rectified linear unit (ReLU) activations to ensure that the model is capable of estimating the possible nonlinearities in the mixing/unmixing functions and in the function λ(u), which is modelled by the auxiliary function.
Likewise, in TCL, we used three hidden layers with 128 units and leaky ReLU activations. For both models, iVAE and TCL, a 20 × 20 segmentation was used as an auxiliary variable, meaning that the spatial domain was divided into m = 400 equally sized squares. This segmentation is illustrated in Fig. 2. iVAE and TCL were trained for 200 epochs with a batch size of 64. The initial learning rate was 0.01, which was decreased with second-order polynomial decay over 10000 training steps down to a learning rate of 0.0001. The SBSS and SNSS algorithms used two ring kernels defined by the parameters r_1 = (0, 2) and r_2 = (2, 4). Further, SNSS used a 10 × 10 segmentation. The ring sizes and the segmentation for SNSS were chosen as those yielding the best average performance in an initial simulation containing 100 trials of each setting with various segmentations and ring sizes.
Performance index. We used the mean correlation coefficient (MCC), a widely used performance metric in nonlinear ICA, e.g. [16,17,13,18], to measure the performance of the different methods. MCC is a function of the correlation matrix K = Cor(ẑ, z), where ẑ contains the estimated latent components, and is calculated as

MCC = max_{P ∈ P} (1/d) tr(P abs(K)),    (18)

where P is the set of all possible permutation matrices, tr(·) is the trace of a matrix, and abs(·) denotes taking the absolute value of a matrix elementwise. The optimal MCC value is one, in which case the estimated and true sources are perfectly correlated up to their signs.
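The MCC described above can be computed by enumerating permutations when d is small; a sketch, where the toy "estimate" (a permuted, sign-flipped copy of the truth) is our own illustration:

```python
import itertools, math

def corr(a, b):
    # Pearson correlation of two equal-length sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def mcc(z_hat, z):
    """Maximum over permutations of the mean absolute correlation between
    matched estimated and true components (each given as a d x n list)."""
    d = len(z)
    K = [[abs(corr(zh, zt)) for zt in z] for zh in z_hat]
    return max(sum(K[i][p[i]] for i in range(d)) / d
               for p in itertools.permutations(range(d)))

z = [[0.1, 0.9, -0.4, 0.7], [1.0, -0.2, 0.3, 0.5]]
z_hat = [[-v for v in z[1]], list(z[0])]  # permuted and sign-flipped estimate
print(mcc(z_hat, z))  # ~1.0: perfect recovery up to permutation and sign
```

The factorial cost of the permutation search is negligible for the d = 3 components used in the simulations; for large d a linear assignment solver would be used instead.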
Results. The resulting MCC values are presented in Fig. 3. In Settings 1, 2, 3 and 6, where the variances of the latent fields vary based on the spatial location, thus fulfilling the identifiability conditions, iVAE showed superior performance compared with SBSS, SNSS and TCL. In Setting 1, where the latent fields are zero-mean Gaussian with varying variances between the clusters, SNSS performed almost as well as iVAE, especially when the number of mixing layers was one. However, in Setting 2, where the mean also changed between the clusters, the performance of SNSS dropped considerably, whereas the performance of iVAE remained good. In Setting 3, SBSS and SNSS had similar performances, but the methods did not outperform iVAE, especially when the number of mixing layers was high. Meanwhile, Settings 4 and 5 are stationary and thus, in theory, more favorable for SBSS. Surprisingly, in Setting 4, where the Matern covariance function's range parameters ϕ were high and the scale parameters ν were low, iVAE outperformed SBSS. This might be because such Matern parameters lead to high spatial dependence and larger variability in sample variance compared to Setting 5, meaning that the mean and the variance change to some degree throughout the spatial domain, which allows identifiability. However, in Setting 5, where the parameters were selected so that the mean and the variance were more stable throughout the field, SBSS slightly outperformed iVAE. In Setting 6, iVAE was the only method reliably capable of separating the sources. In conclusion, as long as the sample variance has enough variability in space, iVAE recovers the latent components well and outperforms the competing methods.

Real data example
In this section we apply the spatial iVAE, SBSS and SNSS methods to a dataset derived from the GEMAS geochemical mapping project [29], which is available in the R package robCompositions [30]. The application is two-fold. First, the latent fields are estimated with each method and the latent representations are compared using scaled MASHAP values. As we do not have any ground truth latent fields available, unlike in the simulations of Section 5, only the interpretability of the latent representations is compared. Second, we study whether the prediction power can be increased by predicting the latent fields at new locations instead of predicting the observed variables directly. The predicted latent fields are then back-transformed to the original variables using the estimated mixing transformations provided by iVAE, SBSS and SNSS. TCL is not applied to the data as it does not estimate the mixing transformation f. For this reason, the scaled MASHAP values cannot be utilized for evaluating the importance of its latent components; for the same reason, the method is incapable of back-transforming the estimated latent components and thus cannot be applied for prediction purposes. The code for the analysis of the data is available at https://github.com/mikasip/NonlinearSNSS. The analysis was executed on a laptop running Windows 10, equipped with an Intel Core i5 processor (1.7 GHz) and 16 GB RAM.
The dataset consists of 2108 concentration measurements of 18 chemical elements (Al, Ba, Ca, Cr, Fe, K, Mg, Mn, Na, Nb, P, Si, Sr, Ti, V, Y, Zn, Zr) in agricultural soil samples. We dropped one observation measured in the Canary Islands as it lacks neighboring observations, leaving n = 2107. The sample locations are presented in Fig. 4.
In this example, we are dealing with compositional data, meaning that the variables carry information relative to the other variables rather than absolute information. In this dataset, the measurements are the chemical elements' relative proportions of the whole soil sample, measured in milligrams per kilogram. Due to the sum constraint, compositional data do not follow Euclidean geometry and it is customary to transform such data prior to analysis. [4] argue that in the context of BSS, the isometric log-ratio (ilr) transformation [31] is a natural choice yielding full-rank data. While ilr coordinates are not easy to interpret, they have the additional benefit of a one-to-one relationship with the centered log-ratio (clr) transformation. The clr transformation is popular among practitioners because, although the coordinates are singular, they are interpretable. Accordingly, the data are processed using an ilr transformation, which reduces the dimension of the dataset by one; the dimension of the preprocessed data is therefore 17. For interpretation purposes the ilr coordinates are transformed to clr coordinates, whose interpretation is similar to that of log-transformed data; see [31] for further details.
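As a concrete sketch of this preprocessing step, the clr and ilr transformations can be implemented in a few lines of numpy. The Helmert-type basis below is one standard choice of orthonormal ilr basis (robCompositions may use a different contrast matrix, so individual ilr coordinates can differ, but the one-to-one relationship with clr holds for any orthonormal basis):

```python
import numpy as np

def clr(x):
    # centered log-ratio: log parts minus the row-wise mean of the logs
    lx = np.log(x)
    return lx - lx.mean(axis=1, keepdims=True)

def ilr_basis(d):
    # orthonormal basis of the zero-sum clr hyperplane (Helmert-type);
    # any such basis yields valid ilr coordinates
    V = np.zeros((d, d - 1))
    for j in range(1, d):
        V[:j, j - 1] = 1.0 / j
        V[j, j - 1] = -1.0
        V[:, j - 1] *= np.sqrt(j / (j + 1))
    return V

def ilr(x):
    # full-rank coordinates: dimension drops from d to d - 1
    return clr(x) @ ilr_basis(x.shape[1])

def ilr_to_clr(z):
    # back to the singular but interpretable clr coordinates
    return z @ ilr_basis(z.shape[1] + 1).T
```

The last function is the mapping used later to move results from the full-rank ilr space into the interpretable clr space.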
In this application, we use the same iVAE model hyperparameters as in the simulation studies of Section 5. As an auxiliary variable, we use a 10 km × 10 km spatial segmentation, which is illustrated in Fig. 4. Note that all spatial segments with zero observations are dropped. For SBSS and SNSS, we select the parameters provided in [7], where the same dataset was studied. Hence, we use one ring kernel with a radius of 167 km for both SBSS and SNSS, and divide the spatial domain into four equally sized square segments for SNSS.
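A minimal sketch of how such a segmentation-based auxiliary variable can be constructed, assuming projected coordinates in metres (the helper `grid_segments` is illustrative, not from the paper's code):

```python
import numpy as np

def grid_segments(coords, cell=10_000.0):
    """Label each location by the square grid cell (side length `cell`,
    in the units of `coords`) that contains it; the labels serve as the
    categorical auxiliary variable. Cells with no observations never
    receive a label, which matches dropping the empty segments."""
    ij = np.floor(np.asarray(coords) / cell).astype(int)
    # relabel the occupied cells 0..K-1
    _, labels = np.unique(ij, axis=0, return_inverse=True)
    return labels
```

In practice the labels would then be one-hot encoded before being passed to the iVAE as the auxiliary input.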
First, the contributions of the latent components to the clr-transformed variables are determined for iVAE, SBSS and SNSS by calculating the scaled MASHAP values for each method's mixing function estimate. For the mixing function estimate, the input is the estimated latent components and the output is the ilr-transformed observations, which are then transformed back into clr-transformed observations. The scaled MASHAP values can then be interpreted as the importance of each latent feature to an original variable (on log scale), and their averages as the importance of each latent feature to the whole dataset. The scaled MASHAP values are calculated using 200 observations as background data. To ensure that the background data represent the whole dataset as well as possible, the background locations are selected one by one so that each new location maximizes the distance to the previously selected locations. The obtained background data are presented in Fig. 5.
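The scaling and averaging steps can be sketched as follows. This assumes the raw SHAP values have already been computed, e.g. with `shap.KernelExplainer` applied to the mixing function estimate and the background data; the column-wise normalisation is one plausible reading of the scaling, and the exact definition is the one given in the paper:

```python
import numpy as np

def scaled_mashap(shap_values):
    """shap_values: array of shape (n_obs, n_latent, n_outputs) holding
    the SHAP value of each latent component for each output variable at
    each background observation. Returns mean absolute SHAP (MASHAP)
    values, scaled so that the latent components' importances sum to
    one for every output variable."""
    mashap = np.abs(shap_values).mean(axis=0)          # (n_latent, n_outputs)
    return mashap / mashap.sum(axis=0, keepdims=True)

def average_importance(scaled):
    # average over output variables: overall importance of each
    # latent component to the whole dataset
    return scaled.mean(axis=1)
```

The per-output columns correspond to Table 1 style entries, while `average_importance` gives the values used to rank the components.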
For interpretation, we order the latent components in decreasing order based on the average scaled MASHAP values. By inspecting the resulting scaled MASHAP values of iVAE's mixing function estimate, provided in Table 1, we see that the latent components 1 and 2 are the most important ones, having the highest average scaled MASHAP values of 0.192 and 0.189, whereas the values of the other components are smaller than 0.094. The fact that the average scaled MASHAP values of latent components IC13-IC17 are almost zero indicates that these components are essentially noise and do not explain any of the chemical elements. Hence, fewer latent components would be enough to model the data. The average scaled MASHAP values of SBSS and SNSS, provided in Appendix A in Tables 3 and 4, are spread more evenly across the latent components: SBSS has average scaled MASHAP values from 0.035 to 0.114 and SNSS from 0.029 to 0.127. This indicates that none of the components can be dropped without losing information about the dataset. Also, based on the scaled MASHAP values, it is hard to determine the most interesting components, which makes the interpretation more difficult.
The two most important latent components provided by each of the methods are inspected more closely. The components based on iVAE are plotted in Fig. 6, and the components given by SBSS and SNSS are plotted in Appendix A in Figs. 7 and 8, respectively. Of the components recovered by iVAE, the first latent field shows north-south behaviour, with low values in the north and higher ones in the south, while the second latent field separates central Europe by having higher values there and lower ones in the south and north. SBSS and SNSS both have one component separating the area from Ukraine to northern Germany (the second component for SBSS and the first for SNSS), while the other components do not show as clear spatial structure. For SBSS, the first component has slightly lower values in a diagonal band from the United Kingdom to Greece while the rest of the map has high values, and the second component of SNSS shows clear spatial behaviour only in the area of Portugal and Spain.
We also calculated the scaled MASHAP values for iVAE's unmixing function estimate, where the original observations are treated as input together with the auxiliary variables, and the output is the latent components. This allows us to determine which observed variables contribute the most to each of the latent components. The same background data are used as presented in Fig. 5. Looking at the scaled MASHAP values for the unmixing function estimate, provided in Table 5, it is evident that Ba and V contribute the most to latent component 1, while latent component 2 has the highest contributions from Ca, Sr and Mg.
Finally, we study whether the predictive power can be increased by predicting the latent components rather than the original observations. We perform 10-fold cross-validation by dividing the data randomly into 10 equally sized sets, which are used one by one as test data while the rest of the data is used as training data. The goal is to predict the observed chemical elements at the locations of the test data. As the data are multivariate and the variables are dependent on each other, the options are to use a multivariate modeling approach which takes the dependencies into account, or to find independent latent components which can be modelled individually. As a multivariate modeling approach we consider cokriging, which takes the cross-dependencies of each variable pair into account. Cokriging is compared against latent component approaches, where the latent components are found by iVAE, SBSS and SNSS, and the independent latent components are modelled individually using ordinary kriging and universal kriging. Ordinary kriging assumes a stationary mean in space, whereas universal kriging allows a trend in space. For details about kriging and cokriging, see e.g. [32, 33].
In cokriging, we fit Matern variogram models for all variograms and cross-variograms of the ilr-transformed variables. For the variogram models, we select a mutual range parameter by calculating the mean of the range parameters of each variable's fitted univariate variogram. The mutual range parameter is selected in order to obtain a positive definite cokriging system, which is required to calculate predictions at new locations. The Matern parameters for each variogram and cross-variogram are fitted using the ordinary least squares (OLS) method. As prediction with cokriging is computationally very expensive with this many variables and such a large sample size, we use only the 50 nearest points for the cokriging predictions to make them feasible.
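To illustrate the mutual-range idea, the sketch below fits a variogram model to each variable's empirical variogram by OLS and averages the fitted ranges. An exponential model stands in for Matern to keep the fit simple, and `mutual_range` and its inputs are illustrative, not the paper's actual code:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_variogram(h, nugget, sill, rng_):
    # exponential variogram model, used here in place of Matern
    return nugget + sill * (1.0 - np.exp(-h / rng_))

def mutual_range(lags, emp_variograms):
    """lags: common lag distances; emp_variograms: one empirical
    semivariance array per ilr variable, evaluated at `lags`. Fits each
    marginal variogram by OLS (nonlinear least squares on the empirical
    values) and returns the average of the fitted range parameters."""
    ranges = []
    for gamma in emp_variograms:
        p0 = [0.0, gamma.max(), lags.mean()]          # nugget, sill, range
        params, _ = curve_fit(exp_variogram, lags, gamma,
                              p0=p0, maxfev=10_000)
        ranges.append(params[2])
    return float(np.mean(ranges))
```

The shared range would then be fixed in all variogram and cross-variogram models, which is a simple way to keep the cokriging system positive definite.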
In ordinary and universal kriging, we fit Matern, exponential and spherical variogram models for each independent component and select the best based on the OLS method. For universal kriging, we allow the mean of the spatial field to be a linear function of the spatial coordinates. To make a fair comparison, we use only the 50 nearest points also for the kriging predictions. The predicted latent components are transformed back to ilr variables using each method's mixing function estimate. Finally, the predicted data are transformed from ilr space to clr space, where the performance is evaluated. As performance measures, we use the mean squared error MSE = (1/n) Σᵢ (xᵢ − x̂ᵢ)², the mean absolute error MAE = (1/n) Σᵢ |xᵢ − x̂ᵢ| and the root mean squared error RMSE = √MSE, where xᵢ is the true value and x̂ᵢ the predicted one. The average performances over the 10-fold cross-validation are presented in Table 2. The lowest MSE, MAE and RMSE are obtained when iVAE is combined with ordinary kriging, making it the best performing prediction method. SNSS combined with kriging slightly outperforms cokriging, and SBSS combined with kriging has very similar performance to cokriging. Universal kriging did not increase the performance, which indicates that the fields do not have a linear trend in space. Because of the performance gain of iVAE combined with kriging, and the fact that univariate kriging is computationally much more feasible than cokriging, we consider iVAE combined with kriging the best prediction method.
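The error measures mentioned above are standard and can be written out explicitly; a minimal sketch pooling the errors over all clr variables and test locations:

```python
import numpy as np

def prediction_errors(x_true, x_pred):
    # MSE, MAE and RMSE pooled over all variables and locations
    err = np.asarray(x_true) - np.asarray(x_pred)
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(mse))
    return mse, mae, rmse
```

In the cross-validation, these would be computed per fold on the clr-space predictions and averaged over the ten folds.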

Conclusions
In this paper we extended iVAE, first proposed by [13], to the nonlinear SNSS setting. We discussed the theoretical background of the method and provided useful tools for interpreting the latent components through the nonlinear mixing transformation using SHAP values. We used simulation studies to illustrate the performance of the iVAE method in nonlinear SBSS and SNSS settings, and applied the method to a geochemical dataset.
The simulation studies revealed that iVAE outperformed its competitors in all settings where the variability in the variance of the latent fields was relatively high, that is, the variances varied considerably between different locations. This variability was achieved either by having multiple (in our simulations, ten) clusters with varying variances, by having latent fields with a nonstationary correlation structure, or by having latent fields with a stationary correlation structure that yields strong spatial dependence and enough variability in the sample variances in space. In the stationary setting, where the mean and variance did not change enough, SBSS still performed better.
In the geochemical application, we utilized iVAE, SBSS and SNSS to find the latent fields of the geochemical dataset. We interpreted the latent fields provided by SBSS, SNSS and iVAE by calculating the scaled MASHAP values for the mixing function estimates, and discovered which fields are the most important ones for each method. Based on the scaled MASHAP values, iVAE provided the most easily interpretable components, two of which were clearly the most important ones. We examined the results further by plotting the two most important components of SBSS, SNSS and iVAE; iVAE provided clearer structure for the components than the competing methods. In-depth interpretation of the latent components, however, requires subject expertise and is beyond the scope of this paper. We also studied whether the predictive power can be increased by pre-processing the data using iVAE, predicting the independent latent components univariately, and back-transforming the predictions to the original variable space. The results suggest that by using iVAE as a pre-processing method, the predictive power can be increased compared to the alternative multivariate prediction approach.

Discussion
Based on the results of this paper, iVAE is the preferable method in settings where the variances of the latent fields are not stable across space. However, in stationary settings where the sample mean and sample variance did not change enough, SBSS still performed better. In practice, this means, for example, a small range and a large shape parameter in a Matern covariance function. Nevertheless, many real-world spatial phenomena, such as temperature or humidity, often show high spatial dependence, making iVAE the preferable method. Overcoming the performance drop in stationary settings with low spatial dependence requires the development of new models or methods that do not rely on nonstationarity for identifiability. This is left for future work.
In the geochemical application, we calculated the average scaled MASHAP values of the latent components of the spatial iVAE's mixing function estimate, and discovered that fewer components might be sufficient to model the dataset. iVAE allows modelling the data with fewer latent components than observed variables, but currently there are, to the best of our knowledge, no methods for estimating the latent dimension. Hence, in future work, procedures for estimating the dimension of the latent representation will be developed.
In our simulation studies, TCL performed poorly in all settings. This is most likely due to simulation setups that are non-optimal for the method. TCL relies on classifying the observed data into chosen segments; when the classification task is too easy (e.g. the data have a changing mean or the number of real clusters is too low), TCL succeeds in classification without finding the ICs. This problem was previously acknowledged in [13]. Thus, the methods should also be compared in settings more favourable for TCL in future work.

Figure 2 :
Figure 2: Five thousand simulated points in a 100 × 100 field based on ten randomly generated clusters. The colors indicate true cluster membership and the grid on top of the plot illustrates the spatial segmentation used by TCL and iVAE.

Figure 3 :
Figure 3: Boxplots of the mean correlation coefficients (MCC) of 1000 simulation trials of Settings 1-6 for iVAE, SBSS, SNSS and TCL. The color of the boxplots indicates the number of mixing layers, where one layer corresponds to linear mixing.

Figure 4 :
Figure 4: Sample locations of the GEMAS dataset and the 100 km grid representing the segmentation used in the iVAE model.

Figure 5 :
Figure 5: Sample locations used as background data for calculating the SHAP values.

Figure 6 :
Figure 6: The first (left) and the second (right) latent fields for the GEMAS dataset recovered by iVAE.

Figure 7 :
Figure 7: The first (left) and the second (right) latent fields for the GEMAS dataset recovered by SBSS.

Figure 8 :
Figure 8: The first (left) and the second (right) latent fields for the GEMAS dataset recovered by SNSS.

Table 1 :
The scaled MASHAP values for the decoder part of the trained iVAE model.

Table 2 :
MSE, MAE and RMSE prediction errors of the 10-fold cross-validation, where the chemical elements of the GEMAS dataset were predicted at new spatial locations using either cokriging, or iVAE, SBSS and SNSS combined with ordinary or universal kriging.

Table 3 :
Scaled MASHAP values for SBSS method's mixing function estimate.

Table 4 :
Scaled MASHAP values for SNSS method's mixing function estimate.

Table 5 :
The scaled MASHAP values for the iVAE model's encoder part.