The Effects of Peeling on Finite Element Method-Based EEG Source Reconstruction

The problem of reconstructing brain activity from electric potential measurements performed on the surface of a human head is not an easy task: not only because the solution of the related inverse problem is fundamentally ill-posed (non-unique), but because the methods utilized in constructing a synthetic forward solution themselves contain many inaccuracies. One of these is the fact that the usual method of modelling primary currents in the human head via dipoles brings about at least two modelling errors: one from the singularity introduced by the dipole, and one from placing such dipoles near conductivity discontinuities at the active brain layer boundaries. In this article we observe how the removal of possible source locations from the surfaces of active brain layers affects the localisation accuracy of two inverse methods, sLORETA and Dipole Scan, at different signal-to-noise ratios (SNR), when the H(div) source model is used. We also describe the finite element forward solver used to construct the synthetic EEG data, which was fed to the inverse methods as input, in addition to the meshes that were used as the domains of the forward and inverse solvers. Our results suggest that there is a slight general improvement in the localisation results, especially at lower noise levels. The applied inverse algorithm and the brain compartment under observation also affect the accuracy.


Introduction
The electroencephalography (EEG) forward problem of attempting to mathematically construct the electric potentials u produced by given electrical activity J_p in the human brain is almost a century-old endeavour [1][2], the requirements of which are fairly well known. For simplified cases, such as homogeneous and unbounded conductors, layered conductors and spherical head models, there exist analytical or formulaic solutions [3][4][5]. However, realistic head geometries require the application of numerical approaches, such as the finite difference method [6], the boundary or surface element method [7][8] and the finite or volume element method [9][10][11], to solve the same problem.
The primary currents J_p themselves are often modelled as electrical dipoles. One of the problems that arises in formulating the forward problem in this way is the appearance of singularities in the potential field u generated by the dipoles [9][10]: u is inversely proportional to the distance r from the dipole position x, at which u is singular. This has implications for the convergence of numerical methods, which attempt to build the forward solution based on a finite element model of the human head, with conductivity jumps between the different brain compartments. More specifically, in the finite element formulation [12], the load vector f is not well-defined in the case of a singular source. The accuracy of the forward solution is also reduced as a dipole is placed near a boundary of an active compartment [9][13][12].
To tackle the issue of potential singularities themselves, a so-called subtraction method [9][10][14][15] has recently been utilized. It involves splitting the potential field u into a sum u = u_∞ + u_corr of a problematic singularity potential u_∞ and a correction potential u_corr, and solving the EEG forward problem in the case of u_corr. This amounts to removing the singularity from the dipole model.
Another approach to handling the dipolar singularity is the so-called H(div) model [13][12], which assumes a higher smoothness or regularity at the primary source level, and is neurophysiologically well motivated [7][16]. Assuming such smoothness, the H(div) approach resolves the ill-definedness of the load vector f by replacing the theoretical dipole with a non-singular function. This is achieved by requiring that the model of a dipole is square-integrable in the finite element domain Ω, or belongs to the space H(div) = H(div, Ω), whose elements can be constructed as linear combinations of divergence-conforming basis functions. Here dipolar moments d are approximated by a vector whose orientation is defined by a set of finite element nodes surrounding a central node, taking into account the a priori information that the primary currents are oriented normally to the surface of the gray matter layer. Since the vector is supported by only a few nodes, it is very focal and is therefore able to fit inside the thin gray matter layer, if the resolution of the mesh is fine enough.
Both approaches suffer from inaccuracies near conductivity jumps. In the case of H(div), which is used in this work, a dipole might still be placed at the very edge of an active brain compartment, with some of the supporting finite element nodes in a neighboring compartment. Due to the different conductivities in the different compartments, the volumetric current between the compartments ends up being altered, resulting in a change in the modelled potential field u [13]. For the subtraction method, numerical accuracy decreases as a dipole is placed near a conductivity jump, as the upper bound of its error is a function of the distance from the conductivity jump [9]. The main purpose of this article is to find out how limiting the possible source positions to a given distance from conductivity jumps affects the localisation accuracy of two inverse methods, sLORETA [17] and Dipole Scan [18], when a set of synthetic EEG measurements is given to them as input. This is all performed in an adapted high-resolution finite element mesh with a base resolution of 2 mm, with additional refinements performed on the active surfaces to accommodate the H(div) approach in the 1-4 mm thick gray matter layer. The motivation behind choosing the two inverse methods lies in the fact that sLORETA has been suggested to localise distributed patch-like sources in the entire head volume, if the number of sources and the amount of measurement noise remain low, with SNR > 10 dB [19][20]. Dipole Scan has been shown to work well in the case of cortically constrained single-dipole EEG reconstructions, with only a few millimeters' spatial deviation between reconstructions and original source dipoles, in the simulation studies of [21].
In this article, Section 2 will focus on discussing the anatomy of the forward problem, how the utilized inverse methods localise sources based on the forward solution and how the peeling or removal of possible source positions from surfaces of active brain layers is performed at the start of the forward algorithm. In Section 3, we then present our results, evaluate the performance of the peeling algorithm, compare the numerical forward solver to an analytical one and finally see how the peeling of possible source locations affects the localisation accuracy in a realistic head model. In Section 4 results are discussed, Section 5 summarizes our results and Section 6 presents possible future directions of study.

Mathematical methods
For the purpose of testing inverse reconstructions of brain activity, we apply the Matlab-based software suite Zeffiro Interface [11], which builds H(div)-based lead field matrices L for different brain imaging modalities, such as electroencephalography (EEG) or magnetoencephalography (MEG), in a volumetric domain Ω discretized with a finite element mesh [22]. Here a forward solver refers to finding a scalar or vector field generated by a (synthetic) set of dipole-like sources, or the primary current distribution J_p = J_p(x) at positions x in the domain Ω. This is done by mapping J_p to an electric or magnetic set of sensors placed on the surface of Ω, i.e., by multiplying J_p with L [7], as in [11][12]

    M = L J_p + E .    (2.1)

Here M could consist of electric potentials u or magnetic field strengths H at the sensors S, and E is the error arising from numerical approximation and measurement noise. Once L is constructed, it plays a pivotal role in producing the inverted dipoles or primary current distribution Ĵ_p in a given volume [23][24]. This duality between the forward and inverse problems in EEG and MEG imaging is illustrated in Figure 2.1 [25]: there L is the lead field matrix and K = K(L) an inversion-method-specific kernel that provides a reconstruction of the original activity J_p based on the measurements at the sensors [26]. Regardless of the modality of L, and in the absence of coarse artefacts, the origins of the localisation error ∆ of Figure 2.1 in the entire inversion process can be split roughly into three components:

    ∆ = E_T+J_p + E_M + E_K .    (2.2)

Here E_T+J_p is the modelling error resulting from the discretization or tetrahedralization T of the head model and the configuration, such as positioning and orientation, of the source space J_p [27][28][29].
The symbol E_M represents the error brought to the measurements u and/or H by (simulated) measurement noise [30][31][32], and E_K is an inverse-method-specific error brought about by, among other things, the biasing nature of each method, caused by prior assumptions regarding the measurements M or the primary current distribution J_p in the derivation of said methods [33][34][30][35]. One of the most significant contributors to the term E_T+J_p in (2.2) is the actual shape and structure of the head model used. It has been reported that using spherical models instead of realistic ones, generated by segmenting MRI or CT data, increases the localisation error ∆ by as much as 40 mm [36]. On the other hand, inaccuracies in the meshing procedure, such as labeling finite elements into compartments they do not belong to, or not modelling enough different compartments to accurately simulate brain structure, also contribute to the error. For example, it has been concluded [37] that the absence of a cerebrospinal fluid (CSF) layer has detrimental effects on the accuracy of the forward solution L and the ensuing inverse reconstruction Ĵ_p [38][39][40].
The tetrahedralization T might also affect the generation of the source space J_p, as it has the possibility of restricting how synthetic sources can be placed into the head model. In the case where J_p forms a divergence-conforming field, the source space is directly anchored to the tetrahedra, and interpolated across their faces and edges, due to the presence of a face-intersecting and edgewise divergence-conforming source model [13]. This is visualized in Figure 2.2: in the first case, the opposing node pair (n_5, n_1) functions as the ends of a dipole, whereas in the second case it is the pair (n_1, n_2); the red vectors d denote the directions of the dipoles from the negative to the positive end. The H(div) source model places both kinds of sources into the FE mesh [13]. This holds significance, because as a direct consequence of Maxwell's equations the electric and magnetic components of an electromagnetic field are perpendicular to each other [41]. Hence EEG electrodes are sensitive to the radial dipoles pointing in their direction, whereas MEG sensors best detect tangential sources, which are perpendicular to the ideal EEG orientations [40]. Therefore, both EEG and MEG might fail to detect some source orientations, which are themselves subject to modelling errors.
The possible causes for the measurement noise component E_M of the localisation error ∆ are numerous. These include insufficient shielding of the imaging room, poor properties of the measuring equipment, ill-positioned sensors on the scalp, failure to establish proper electrical contact between electrodes and skin via the application of electrolyte gel, and so forth [31][32]. To model the uncertainties related to E_M, we add Gaussian noise to the simulated signal. To understand how the added noise affects the reconstruction of J_p in the inversion phase, we investigate a sample of inverse estimates obtained with random Gaussian noise realizations.
To finally consider the E_K component of ∆ in (2.2), the ill-posed nature of the inverse problem needs to be taken into account [34][42][43]. The problem is underdetermined, characterized by a larger number of variables or degrees of freedom than there are restrictions or equations in the forward model. This leads to the need to incorporate prior assumptions regarding the primary current distribution J_p of equation (2.1) into the model. This can be achieved by applying regularization or penalty functions to the related cost function [42][44]. In what follows, we briefly review two inverse methods, sLORETA and Dipole Scan, that represent different approaches to favouring certain kinds of unique estimates Ĵ_p of J_p.

sLORETA
In Standardized Low Resolution Brain Electromagnetic Tomography or sLORETA [17], the Moore-Penrose pseudoinverse L† = (L^T L)^−1 L^T of the typical least-squares solution is modified to produce an initial regularized solution Ĵ_p as follows:

    Ĵ_p = L‡ M ,  with  L‡ = L^T (L L^T + λ I)^−1 .    (2.3)

This initial solution is then standardized by dividing its entries by the square roots of the corresponding diagonal entries of the resolution matrix L‡ L. As shown in [17], standardization balances out the initial solution, which is otherwise a priori known to be biased towards the sensors. Equation (2.3) is a solution to the objective functional [17]

    min over J_p, c of  ∥M − L J_p − c 1∥² + λ ∥J_p∥² ,    (2.4)

which represents a regularized fit between the measurements and the lead field projection of the primary current J_p. The role of c is to set the zero potential level of the electric field, with 1 denoting a vector of ones. The term λ∥J_p∥² is the regularization penalty function, in which the coefficient λ ≥ 0 is the regularization parameter, chosen here according to [45].
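The standardization step described above can be sketched as follows. This is a minimal illustration assuming a plain Tikhonov regularizer λI and treating each lead field column as a scalar source component; the function and variable names are illustrative and do not reproduce Zeffiro Interface's actual implementation.

```python
import numpy as np

def sloreta(L, M, lam):
    """Sketch of an sLORETA-style estimate: a regularized minimum-norm
    initial solution, standardized by the diagonal of the resolution matrix."""
    n_sensors = L.shape[0]
    # Regularized inverse kernel: K = L^T (L L^T + lam I)^-1.
    G = L @ L.T + lam * np.eye(n_sensors)
    K = L.T @ np.linalg.solve(G, np.eye(n_sensors))
    j_initial = K @ M          # initial regularized minimum-norm estimate
    R = K @ L                  # resolution matrix K L
    # Standardize each entry by the square root of the corresponding
    # diagonal entry of the resolution matrix.
    return j_initial / np.sqrt(np.diag(R))
```

In the vector-valued case, the division is performed block-wise per source position rather than entry-wise, but the scalar form above conveys the idea.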

Dipole Scan
A second inverse method used in estimating source positions in this paper is the so-called Dipole Scan method [18], where, following the inverse kernel idea of Figure 2.1, a filter matrix W = W(x) gives a local estimate Ĵ_p of the primary current distribution J_p at x as follows [46]:

    Ĵ_p(x) = W(x) M .    (2.5)

Here M is the measured data. The filter can be chosen to be the Moore-Penrose pseudoinverse, which does not perform any kind of filtering on the data [18], or a truncated singular value decomposition (tSVD) of L at x [21][47][44]. The latter of these methods has a regularizing effect on the solution [18][44], meaning that high spatial noise components are not amplified. This alone does not suffice for actually locating sources, since for that purpose one needs an actual objective function to be optimized. Dipole Scan then minimizes the relative residual variance

    RV(x) = ∥M − L(x) Ĵ_p(x)∥² / ∥M − M_avg∥² ,    (2.6)

with M_avg being an average measurement, or maximizes its complement, the goodness of fit

    GOF(x) = 1 − RV(x) ,    (2.7)

to determine the best source position.
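The scan described above can be sketched as a loop over candidate positions. This is a simplified illustration, assuming each position owns three consecutive lead field columns (x-, y- and z-orientations) and using NumPy's `pinv` with a truncation tolerance as the tSVD filter; the names are illustrative, not the paper's actual implementation.

```python
import numpy as np

def dipole_scan(L, M, tol=1e-8):
    """Scan candidate positions; pick the one with the highest goodness of fit."""
    n_pos = L.shape[1] // 3
    best_gof, best_pos, best_moment = -np.inf, -1, None
    denom = np.linalg.norm(M - M.mean()) ** 2   # variance about the mean measurement
    for i in range(n_pos):
        Lx = L[:, 3 * i:3 * i + 3]              # local 3-column lead field at x_i
        W = np.linalg.pinv(Lx, rcond=tol)       # truncated-SVD pseudoinverse filter
        moment = W @ M                          # local dipole moment estimate
        rv = np.linalg.norm(M - Lx @ moment) ** 2 / denom  # relative residual variance
        gof = 1.0 - rv                          # goodness of fit
        if gof > best_gof:
            best_gof, best_pos, best_moment = gof, i, moment
    return best_pos, best_moment, best_gof
```

For noise-free data generated by a single dipole, the scan recovers the true position with a goodness of fit of essentially one.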

Head models and the computation of the lead field
To compute the forward and inverse solutions to the biomagnetic source modelling problem presented in the previous subsection, one has to discretize the volume conductor such that a computer can process it. To this end, Zeffiro Interface builds finite element domains from given MRI segmentations [22]. The spherical volume conductor has an analytic or formulaic solution for the computation of the lead field L [4], and hence it provides a useful point of comparison when analysing the goodness of the result. Later on, we analyse the differences between the numerical and analytical solutions to the forward EEG problem. In addition to the spherical head model, we also take a look at the realistic human head model [48] seen in Figure 2.4. It is also known that jump discontinuities in the conductivities of the brain compartments affect the stiffness matrix needed in the construction of the finite element forward solution [9][10][13][40]. Hence the amount of peeling of the active brain layers, seen in Figure 2.4 (b), is also varied slightly, to see how outliers in the reconstruction are affected.
To this end, the lead field routine of Zeffiro Interface was augmented to first peel off the unwanted layers and only then start the lead field construction and source positioning. The algorithm now has the following structure:
1. use the peeling algorithm to select a true subset of the active brain elements and evenly distribute allowed source positions into those tetrahedra,
2. build a system matrix A [49] for the finite element mesh,
3. use A to build a transfer matrix T [50][13][51][9] by solving one linear system per sensor position,
4. build a source space interpolation matrix D based on the H(div) source model [12][26][13], with either position-based optimization (PBO) [52] or mean position and orientation (MPO) [26] used as the optimization method,
5. compute the lead field as the matrix product L = T D, and
6. subtract the mean measurement from each column of L to set the zero potential level of the solution.
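Steps 5 and 6 above amount to a matrix product followed by a column-wise mean subtraction, which can be sketched as follows. The random matrices stand in for the actual finite element quantities; the dimensions are illustrative.

```python
import numpy as np

# Stand-ins for the FE quantities: T maps FE node potentials to sensors,
# D interpolates H(div) source directions onto FE nodes (3 columns per source).
rng = np.random.default_rng(0)
T = rng.standard_normal((32, 500))       # transfer matrix, one row per sensor
D = rng.standard_normal((500, 3 * 100))  # source interpolation matrix

L = T @ D                                # step 5: lead field as matrix product
L -= L.mean(axis=0, keepdims=True)       # step 6: zero-mean columns set the
                                         # zero potential level per source
```

After step 6, every column of L sums to zero over the sensors, matching the convention that only potential differences are measurable.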
Note that the peeling step always results in the outermost layer of tetrahedra being removed, regardless of what the peeling depth is. This is desirable, as it removes the chance of singularities due to conductivity discontinuities appearing in the solution.

Measures for comparing the numerical and analytical lead fields
To compare the EEG lead fields produced by the forward solver in the case of Figure 2.3, the relative difference measure

    RDM(L_a, L_n) = ∥ L_n / ∥L_n∥_2,1 − L_a / ∥L_a∥_2,1 ∥_2,1    (2.8)

and the magnitude measure

    MAG(L_a, L_n) = | 1 − ∥L_n∥_2,1 / ∥L_a∥_2,1 |    (2.9)

were employed to compute differences between the analytical [4] and numerical lead fields L_a and L_n. Here ∥L∥_2,1 denotes the 2-norm of L along the rows of L, i.e., the column-wise 2-norm, so that both measures yield one value per source.
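The two measures can be computed column-wise as sketched below. The definitions are the common RDM/MAG forms assumed to correspond to (2.8) and (2.9); both return one value per lead field column (source), with 0 indicating a perfect match.

```python
import numpy as np

def rdm(L_a, L_n):
    """Relative difference measure: topography error per column,
    insensitive to amplitude scaling."""
    a = L_a / np.linalg.norm(L_a, axis=0)   # normalize analytical columns
    n = L_n / np.linalg.norm(L_n, axis=0)   # normalize numerical columns
    return np.linalg.norm(n - a, axis=0)

def mag(L_a, L_n):
    """Magnitude measure: per-column amplitude deviation from unity."""
    return np.abs(1.0 - np.linalg.norm(L_n, axis=0) / np.linalg.norm(L_a, axis=0))
```

A pure amplitude scaling of the lead field leaves the RDM at zero while the MAG picks up exactly the scaling deviation.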

Evaluation of the localisation error
To evaluate the localisation error ∆ of equation (2.2), 10 000 synthetic sources were placed evenly into the active regions of the volume conductor: Cerebellum cortex, Amygdala, Thalamus, Caudate, Nucleus accumbens, Putamen, Hippocampus, Pallidum, Brain stem and Ventral Diencephalon. Then a lead field L corresponding to these sources was computed, as specified at the end of subsection 2.4. For each source, given by a position-direction-amplitude triplet (x, d, a), synthetic measurements M_(x,d,a) were constructed according to

    M_(x,d,a) = a (L | S, x) d + 10^(−SNR/20) ∥a (L | S, x) d∥ N ,    (2.10)

with ∥d∥ = 1, a > 0, and the notation (L | S, x) indicating a restriction of L to the rows corresponding to all sensors S and the columns corresponding to the dipole position x. Here SNR is the noise level in decibels, as in [SNR] = dB, and N is a normally distributed random variable with mean μ = 0 and variance σ² = 1.
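The construction of a noisy synthetic measurement can be sketched as follows. The scaling convention, in which the noise norm is set to 10^(−SNR/20) times the signal norm, is an assumption consistent with the decibel definition in the text; the function name is illustrative.

```python
import numpy as np

def synthetic_measurement(L_x, d, a, snr_db, rng):
    """Noise-free signal from a unit-orientation dipole of amplitude a,
    plus white Gaussian noise scaled to the requested SNR in decibels."""
    signal = a * (L_x @ d)                    # (L | S, x) applied to a d
    noise = rng.standard_normal(signal.shape) # N(0, 1) per sensor
    # Rescale so that ||noise|| = 10^(-snr_db/20) * ||signal||.
    noise *= 10.0 ** (-snr_db / 20.0) * np.linalg.norm(signal) / np.linalg.norm(noise)
    return signal + noise
```

With this convention, 20 * log10(||signal|| / ||noise||) equals the requested SNR exactly, by construction.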
With the simulated measurements M_(x,d,a) in place, they were then inverted with sLORETA and Dipole Scan, to produce a distribution of reconstructions: one d̂_i for each x_i in the original source space¹. The position x_I of the most focal reconstruction was sought by finding the index I of the direction with the largest norm or dipole moment, as in

    I = arg max_i ∥d̂_i∥ / √3 ,    (2.11)

following the maximum principle of source localisation [53][17][54]. The localisation error was then computed as

    ∆ = ∥x − x_I∥ ,    (2.12)

i.e., as a difference between positions in the original source space. The √3 in (2.11) is the norm of a Cartesian dipole.
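The localisation error evaluation can be sketched as below. This assumes one 3-component Cartesian moment per source position; the √3 scaling (the norm of a Cartesian dipole) is kept for consistency with the text, although it does not change which index attains the maximum. Names are illustrative.

```python
import numpy as np

def localisation_error(positions, moments, x_true):
    """Pick the position whose reconstructed Cartesian moment has the
    largest norm and return its distance to the true source position."""
    norms = np.linalg.norm(moments, axis=1) / np.sqrt(3.0)  # one 3-vector per position
    I = int(np.argmax(norms))                               # maximum principle
    return np.linalg.norm(x_true - positions[I]), I
```

The returned distance corresponds to the error ∆ reported in millimetres when the positions are given in millimetres.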

Spatial dispersion
As Figure 2.1 implies, to construct an estimate Ĵ_p of the original synthetic source distribution J_p in a linear fashion, one can multiply a lead field L by an inverse kernel K to obtain a so-called resolution matrix R, such that Ĵ_p = K L J_p = R J_p. In such a case, the columns R_i of R describe how the activity at the corresponding source positions x_i is blurred in this process [23]. The width of this blurring is given by the so-called spatial dispersion measure [55]

    SD_i = sqrt( Σ_k d_k,i² ∥p_k∥² / Σ_k ∥p_k∥² ) ,    (2.13)

where k ranges over the indices of the source positions x_k in a given ROI around x_i, p_k are the dipole moments of the reconstructions in the ROI and d_k,i are the distances of each dipole in the ROI from x_i. In this study, it is observed how varying the peeling depth d_p affects the measure in equation (2.13).

¹ Constituting an inverse crime [44].

Results

Evaluating the peeling algorithm
We start with a discussion of how well the peeling algorithm itself works. Figure 3.1 presents the effects of refining the gray matter layer of the Ary model of Figure 2.3 on its peeling, i.e., the disqualification of nodes and associated tetrahedra from the set of valid source positions. The peeling was done within distances of 0.5 and 1.0 mm from the inner and outer surfaces of the thin gray matter layer. We consider peeling to be necessary, because placing H(div) dipoles in positions of conductivity discontinuities would cause significant forward errors [13]. As can be observed, refinement plays an important role in the peeling process: it prevents an excessive reduction of possible source positions from the active layer. This can be observed in the unrefined 1.0 mm case shown in Subfigure (c), where a hole is punched through the gray matter layer. This means that a perfectly valid dipolar source location is excluded from the possible set of source positions during forward modelling, or the computation of the lead field matrix L. A closer look at the realistic head model of Figure 2.4 (b) also reveals a similar effect with a peeling depth of 0.1 mm, as displayed in Figure 3.2.
Peeling will remove all those tetrahedra with one or more nodes closer to the surface than a given peeling depth. The reason for this is to make sure that at least one layer of tetrahedra is removed, so that sources absolutely cannot be placed in tetrahedra that are right next to another compartment with a possibly differing conductivity.
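The removal rule described above can be sketched as follows. The node-to-surface distances are assumed to be precomputed (how they are obtained is outside this sketch), and the `<=` comparison ensures that the surface layer itself is always removed, even at a peeling depth of zero; the function name is illustrative.

```python
import numpy as np

def peel(tetra, surface_dist, d_p):
    """Drop every tetrahedron with one or more nodes at distance <= d_p
    from the compartment surface.

    tetra:        (n_tet, 4) array of node indices per tetrahedron
    surface_dist: (n_nodes,) precomputed distance to the surface per node
    d_p:          peeling depth
    """
    node_too_close = surface_dist <= d_p          # boolean flag per node
    keep = ~node_too_close[tetra].any(axis=1)     # drop tetra with any flagged node
    return tetra[keep]
```

Because every surface node has distance zero, at least the outermost tetrahedral layer is always removed, matching the behaviour described in the text.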
It turns out that even with the refinement performed on the surface of the active gray matter layer, there are still parts of the compartment which are not fine enough for 0.1 mm peeling, and hence holes in the possible source positions, such as the ones seen in the upper left and right corners of Figure 3.2, are formed. This is again due to the requirement that at least one tetrahedral surface layer is removed from the set of possible source positions, which results in the effective peeling depth being greater than the low numerical value provided by the user. The peeling algorithm then seems to function as intended.
In the solver comparison that follows, the base resolution of the mesh was 2 mm, with refinement referring to the surface of the active layer being refined as seen in Figure 3.1. Eccentricity refers to the relative radius of the position at which the comparison was performed. As suggested by [10], eccentricities of over 98% are of special interest, as they correspond to where a primary dipole might be physiologically located, somewhere between the external granular and pyramidal layers (layers 2-3) of the cerebral cortex, and hence they are observed here.

Measuring the goodness of the solver against an analytical model
With the RDM (2.8), the PBO medians remain below the 0.02 limit in almost all cases, except at the two highest eccentricities. The upper outlier quantiles q_u = q_75% + 1.5 (q_75% − q_25%), to which the whiskers of Figure 3.3 correspond, mostly remain close to 0.1, with the case of unrefined d_p = 0.5 mm approaching 0.02 at the two highest eccentricities. With the MAG (2.9), the median remains below 0.04, with the upper quantile q_u behaving similarly to what was observed with (2.8).
Increasing the resolution of the finite element mesh near active layer boundaries via mesh refinement seems to mostly reduce (2.8), especially towards the higher eccentricities. For (2.9), the refinement actually seems to increase the median differences between the analytical and numerical lead fields by roughly 0.02, as seen in the horizontal middle lines of the box plots of Figure 3.3. However, the quantiles q_u were lowered by a few percent with d_p = 0.5 mm, suggesting that refinement has a net positive statistical impact on the result.

Source localisation in a realistic head model
In the spirit of Cuffin et al. [56], Tables 3.1-3.2 present mean values μ of the localisation errors ∆ of equation (2.12) and their standard deviations σ for 10 000-source lead fields L, corresponding to different peeling depths d_p. The lead fields were inverted 20 times with sLORETA [17] and Dipole Scan [18] at different measurement noise levels, each inversion having a different white noise realization. The cells of Tables 3.1-3.2 are color-mapped based on the largest ∆ and σ in each table. Here Dipole Scan produces superior localisation results when compared to sLORETA, at both low and high SNR levels, which is an expected result when searching for a single source, as this matches the prior model of Dipole Scan. Especially in the case of Dipole Scan, the mean error ∆ ≈ 10.5 mm reported by Cuffin is reached already at SNR = 15 dB, whereas with sLORETA a value as high as 30 dB has to be used for comparable values to manifest.
To graphically observe how the peeling affects outliers of ∆, Figures 3.4 and 3.5 were formed. They display box plots of ∆ against the two lowest noise levels in the case of sLORETA and Dipole Scan, respectively.
In the case of sLORETA, going from a peeling depth of 0 mm to 0.5 mm results in the disappearance of at least a few outlier markers. Further, moving from 0.5 mm to 1.0 mm peeling reduces the outliers further, which is seen in the sparsity of the outlier point clouds. With Dipole Scan, the peeling results are similar, as the extent of outliers is reduced with increased peeling.
To further observe how peeling affects the outliers, Tables 3.3 and 3.4 show the per-noise-level numbers of sources whose localisation errors satisfy ∆ > μ + 2σ, for a sample of 20 reconstructions obtained with a given noise level. Here μ and σ are the expected sample mean and standard deviation values of ∆, provided by Cuffin et al. [56, Table 1]. These are contrasted against the numbers of columns in each respective 10 000-source lead field L by color-mapping each data cell; each source position corresponds to three Cartesian directions, and hence the number of columns in L is threefold compared to the mentioned number of sources. For sLORETA we observed the following: the more tetrahedra are peeled, the more outliers occur at noise levels below 15 dB. On the other hand, Dipole Scan displays a rather consistent outcome at and below the 15 dB noise level, with increased peeling reducing the numbers of outliers. In either case, it seems that noise might be the major contributing factor below the 15 dB mark, with Dipole Scan being slightly more resistant to noise effects.
To get a sense of where the statistical outliers are located, Figures 3.6 and 3.7 display the localisation error ∆ of sLORETA and Dipole Scan as a function of position in the active gray matter compartment. Colors in the figures have been adjusted such that the obvious outlier positions with ∆ ≥ μ + 2σ are highlighted in red. The displayed noise levels are the ones where peeling was discovered to have an observable reduction in ∆.
To illustrate how the localisation error fluctuates locally at lower noise levels, both deep in the brain and more superficially, Figures 3.9 and 3.11 display thalamically focused views of the above images, whereas Figures 3.8 and 3.10 do the same for the cortex. The displayed noise levels are chosen to be the ones where visible improvements can be seen.
These mappings demonstrate that peeling can help in reducing the localisation error universally in the active domain, when the noise level is low enough: for example, at 30 dB SNR in Figure 3.11, the localisation error in the part of the frontal lobe in front of the thalamus decreases as d_p is increased. A similar effect can be seen in Figure 3.8, where the localisation error in the cerebrum, in front of the central sulcus, decreases.
To observe how the peeling of the active layers affects the accuracy of localisation with sLORETA and Dipole Scan locally, Figures 3.12-3.13 display the relative strengths of the dipole moments of the reconstructed dipolar distribution Ĵ_p = Ĵ_p(x) for a single dipole near Brodmann area 3b, around the central sulcus. Both tangential and radial sources are observed, as these correspond to the ideal cases of MEG and EEG, respectively. In the case of sLORETA, the region around the single reconstructed tangential source increases in amplitude as the peeling depth d_p is increased, which can be seen as the movement of the bright yellow area from right to left, towards the dipole position itself. The difference in the normal case is much smaller. With Dipole Scan, the effects of d_p on the inversion of the tangential measurements are not obvious. The case of a radial or normal source allows the differences to become more apparent: the area that contains the strongest intensity shrinks, which makes the reconstruction more localised.
Finally, we observe the dispersion measures of equation (2.13) for sLORETA and Dipole Scan at 30 dB noise level. These are presented in Figures 3.14 and 3.15, respectively. The ROI of dispersion around each source position was set at 30 mm.

Discussion
EEG source modelling is often performed on spherical head models [4][36][57], or on head models that are realistically shaped but which only contain 3 or 4 brain compartments, such as gray matter, skull, skin and perhaps the cerebrospinal fluid layer [37]. In addition, instead of these models being volumetric, only the boundaries of the modelled brain compartments might be included, if forward modelling is based on boundary element methods [7][8]. Those are less resource-intensive than finite element methods, but also do not model the volumetric aspects of the problem as accurately.
In a realistic situation, there are at least 19 different brain tissue types, and this is only if one does not take into account the inhomogeneities within these major compartments, such as the different types of bone in the skull [58]. Each compartment can have its own isotropic or anisotropic conductivity structure, which affects the forward solution L. Simplified approaches thus have little hope of modelling the subtleties introduced by a realistic head model [57]. Therefore, in this paper, we performed our investigations in a realistically shaped high-resolution finite element head model [48], with 19 major brain compartments present. Most importantly, the modelled brain activity was restricted to the realistically shaped and volumetrically refined compartments of gray matter, where especially the cortical portion is only a few mm thick. This in turn required (as one option) the use of the tightly supported H(div) source model, which allowed the placement of dipoles with nonzero lengths into these thin gray matter layers.
In this setting, we investigated how restricting the tetrahedra into which a set of H(div) dipoles J_p = J_p(x) might be placed affects the source localisation error ∆ of sLORETA and Dipole Scan. The restriction was based on disallowing the placement of dipoles near the surfaces of active layers. The relevance of this study comes from the well-known fact that the convergence of the forward EEG solution is negatively affected when sources are placed near conductivity discontinuities at said tissue boundaries. This applies both to boundary element methods [59][60] and to different finite element approaches, such as the subtraction method [9] and the H(div) approach [13], which was used in this paper. This in turn negatively affects the localisation of synthetic sources via different inverse methods.
Before presenting the above results, we observed how the peeling algorithm works. In addition, our numerical forward solver was compared to a semi-analytical one, to ensure that it works appropriately in a simplified spherical domain and at higher eccentricities, where cortical sources are known to reside, but where the errors are also known to be more extensive.
The peeling algorithm, which restricts the positions x where sources can be placed, was found to function as intended: it recognized the intended tetrahedra and always removed at least one layer from the surfaces of the specified brain layers, to prevent singularities from appearing in the forward solution because of discontinuities or jumps in the conductivities between neighbouring source tetrahedra. The only anomaly related to the source positioning after peeling was that the d_p = 0.5 mm case seemed to contain more sources than the d_p = 0 mm case. This can be explained by Zeffiro Interface's iterative way of evenly distributing sources into the active volume, if the initial guess is not near the user-given amount.
Comparing the numerical lead field or forward solution L_n to that of the semi-analytical 3-layer Ary model [4] produced appropriately good results, even at eccentricities above 98 %, with the upper 75 % quantiles of the magnitude measure (MAG) and the relative difference measure (RDM) being at most 0.06 and 0.04, respectively. It was not obvious whether a more refined mesh would diminish the differences between the semi-analytical and numerical solutions L_a and L_n. In fact, a more refined mesh seems to produce deteriorated median results in the case of MAG, whereas with RDM the outcome is slightly improved. This was especially true when PBO [52] was used for synthetic source interpolation, and hence it was chosen as the optimization method in the case of the realistic head model. MPO [26] was also considered, but was deemed less suitable, as according to [13] MPO is less stable than PBO.
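For reference, the two comparison measures are commonly computed column-wise over the lead fields, following the usual conventions: RDM compares the normalized topographies (ideal value 0), while MAG compares the field strengths (ideal value 1). A minimal sketch, assuming these standard definitions:

```python
import numpy as np

def mag(l_num, l_ana):
    # Magnitude measure: ratio of numerical to analytical
    # lead field column norms (ideal value 1).
    return np.linalg.norm(l_num) / np.linalg.norm(l_ana)

def rdm(l_num, l_ana):
    # Relative difference measure: distance between the
    # normalized topographies (ideal 0, maximum 2).
    return np.linalg.norm(l_num / np.linalg.norm(l_num)
                          - l_ana / np.linalg.norm(l_ana))

# A purely scaled column has RDM 0 but MAG equal to the scale factor:
l_a = np.array([1.0, 2.0, -1.0])
l_n = 2.0 * l_a
print(rdm(l_n, l_a))  # → 0.0
print(mag(l_n, l_a))  # → 2.0
```

Note that a MAG quantile of "at most 0.06" above is then naturally read as a deviation from the ideal value of 1 rather than the raw ratio.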
When looking at the average localisation errors ∆ and comparing the results to those of Cuffin et al. [56], one can observe that the performance of sLORETA is inferior to that of Dipole Scan. The reported threshold of ∆ < 10.5 mm is only reached at lower noise levels, when SNR ≥ 30 dB, whereas with Dipole Scan an average within acceptable boundaries can be obtained all the way down to SNR = 15 dB. In the case of sLORETA, peeling cannot be said to improve the average localisation accuracy: going from a peeling depth of d_p = 0 mm to d_p = 0.5 mm at first improves the result, but accuracy decreases again when d_p is increased to 1 mm. Only at the very highest SNR can a slight but systematic decrease in the average ∆ be seen. Dipole Scan is slightly more consistent in its performance with respect to peeling, as with SNR ≥ 20 dB the added peeling no longer increases the average ∆. It should be noted that Cuffin used a 3-compartment boundary element model with a simplex search method to locate sources [61], and hence our results are not directly comparable.
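The SNR values quoted throughout follow the usual decibel convention, SNR = 10·log10(P_signal/P_noise). As a sketch of how synthetic measurements at a target SNR can be generated (the function name and the choice of white Gaussian sensor noise are illustrative assumptions, not necessarily the exact setup of this study):

```python
import numpy as np

def add_noise(signal, snr_db, seed=None):
    """Add white Gaussian noise so that
    10*log10(P_signal / P_noise) equals snr_db."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)
    # Solve the decibel relation for the required noise power.
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

# Sanity check: at 20 dB the noise power is 1 % of the signal power.
clean = np.ones(100_000)
noisy = add_noise(clean, snr_db=20, seed=0)
empirical_noise_power = np.mean((noisy - clean) ** 2)  # ≈ 0.01
```

Lower SNR thus corresponds to stronger additive noise, which is why average ∆ grows as SNR drops toward 15 dB and below.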
The behaviour of the localisation error outliers also reflects the above state of affairs. In the case of sLORETA, the initial impression is that peeling reduces the general denseness of the outlier clouds above the box plots, but at the same time some maximum outliers move further away from the tops of the whiskers, especially in the case of d_p = 0.5 mm. Peeling somewhat increases the number of outliers for this inverse method at higher SNR, although with SNR < 20 dB there seem to be cases where the outliers are reduced with added peeling. A possible explanation is that peeling reduces statistical variability in the low-SNR cases of sLORETA. Dipole Scan again performs more consistently, with the number of outliers being reduced at every SNR ≥ 15 dB when the peeling depth d_p is increased. It might then be said that for Dipole Scan, the effects of random noise seem to dominate when SNR is reduced beyond this point.
When observing the localisation error as a function of the primary dipole position, ∆ = ∆(x), of sLORETA and Dipole Scan, the superior performance of the latter becomes obvious. With sLORETA, while the actual scale of the localisation error is reduced as SNR increases, we end up with more outliers with a relatively large ∆ throughout the entire volume.
In contrast, the behaviour of Dipole Scan is again more consistent: not only does the localisation error go down with increasing SNR, as one would expect, but the localisation error outliers also become more focused in the deep structures of the brain, where the sources are further away from the sensors. The benefits of increasing the peeling depth d_p are not unambiguous in either case. Especially in the case of sLORETA, increasing d_p seems simply to move the positions where outliers occur around the volume, instead of eliminating them. There are still places where the areas containing larger errors become less focal, such as in the parietal region at SNR = 25 dB. The fluctuation of the higher localisation error patches around the volume is also visible in the case of Dipole Scan, although perhaps to a lesser extent. There, the consistent reduction in ∆ with increased peeling is slightly more prominent, as can be seen around the corpus callosum and the lower part of the frontal lobe at SNRs between 20 and 30 dB. The takeaway is that peeling the active brain layers does not guarantee unambiguously better localisation results, but that it might do so on a regional and SNR basis. A look at the relative strengths of the reconstructions at SNR = 30 dB also mostly supports this interpretation with respect to varying the peeling depth d_p, although there is hope of an actual improvement in the case of Dipole Scan and a normal source, which is the ideal case for EEG: the reconstruction becomes slightly more focal around the plotted source. With sLORETA, peeling seems to increase the general strength of the reconstruction around the tangential source, whereas in the case of a normal source we see a very slight shift of the local peak of the distribution Ĵ_p towards the source itself.
The dispersion measures of the results of the different inverse methods were also seen to improve, with parts of the regions of largest dispersion seeing a 1.0 mm reduction in the width of the peak around a reconstructed dipole position. This is yet another indication of the reconstruction becoming more focal in those regions, as a reduction in dispersion indicates that the amplitude of Ĵ_p is reduced within the 30 mm ROI centred on each respective source position, the most focal point being at the center of the ROI.
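As an illustration of what such a dispersion measure captures, one common style of metric is the amplitude-weighted distance of the reconstructed activity from the source position within the ROI. The exact metric used in the paper may differ; the sketch below, with hypothetical names, is only meant to convey the idea that concentrating |Ĵ_p| near the ROI centre lowers the measure.

```python
import numpy as np

def dispersion(positions, amplitudes, source_pos, roi_radius=30.0):
    """Amplitude-weighted spread (in mm) of a reconstruction inside
    an ROI centred on the true source position; 0 means all
    reconstructed amplitude sits exactly at the centre."""
    d = np.linalg.norm(positions - source_pos, axis=1)
    in_roi = d <= roi_radius          # restrict to the 30 mm ROI
    w = np.abs(amplitudes[in_roi])    # weight by reconstructed strength
    return np.sum(w * d[in_roi]) / np.sum(w)

# Equal amplitude at the source and 10 mm away; a third point at
# 40 mm falls outside the ROI and is ignored:
pos = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [40.0, 0.0, 0.0]])
amp = np.array([1.0, 1.0, 1.0])
spread = dispersion(pos, amp, np.zeros(3))  # → 5.0
```

Under a measure of this kind, shifting amplitude from the ROI periphery toward its centre directly reduces the value, matching the interpretation of a narrower peak around the reconstructed dipole position.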

Conclusions
In summary, when the synthetic dipoles J_p that model brain activity are restricted to lie further away from active brain compartment boundaries, and therefore from conductivity jumps, the reconstructed distribution of dipoles Ĵ_p is altered, but not definitively improved. The peeling algorithm responsible for applying this restriction was enabled by a local refinement or resolution increase of the finite element mesh, and was shown to uniformly reduce statistical outliers in the forward solution L, which maps J_p to the observed potentials at the EEG electrodes, thereby increasing its robustness.
This reduction in outliers is best observed at noise levels below 30 dB, which occur in medical studies where possibly thousands of stimuli are applied over the duration of an experiment, such as when measuring somatosensory-evoked potential responses [62]. In addition to improving the robustness of the forward solution, peeling had the same effect on the reconstruction, while also enhancing its regularity. The importance of the regularity of a reconstruction comes from the need to locate multiple or distributed sources, as in the case of localising epileptogenic zones or their irritative regions for the purposes of treatment [63][64].

Future prospects
Follow-up studies might include further increasing the base resolution and/or the refinement of the active layers in the mesh, so that the possible dipole locations would become more varied. Peeling depths might also be examined over a wider or more finely spaced range than was presented in this study, to find out whether an optimal value exists. This optimal value might depend on the mesh geometry, and therefore the tests might be performed on multiple different head models with the same mesh resolution.
For case studies, the peeling technique might be applied to head models of patients with conditions such as microlissencephaly [65], where the cortex is flattened and possibly thickened. A schizencephalic [65][66] brain model, where a part of the brain has been displaced and the displaced sections are connected only by a thin ribbon of gray matter, is another potentially interesting target for peeling.