Basis mapping methods for forward and inverse problems

This paper describes a novel method for mapping between basis representations of a field variable over a domain in the context of numerical modelling and inverse problems. In the numerical solution of inverse problems, a continuous scalar or vector field over a domain may be represented in different finite-dimensional basis approximations, such as an unstructured mesh basis for the numerical solution of the forward problem, and a regular grid basis for the representation of the solution of the inverse problem. Mapping between the basis representations is generally lossy, and the objective of the mapping procedure is to minimise the errors incurred. We present in this paper a novel mapping mechanism that is based on a minimisation of the $L_2$ or $H_1$ norm of the difference between the two basis representations. We provide examples of mapping in 2D and 3D problems, between an unstructured mesh basis representative of an FEM approximation, and different types of structured basis, including piecewise constant and linear pixel bases, and a blob basis, as representations of the inverse basis. A comparison with results from a simple sampling-based mapping algorithm shows the superior performance of the method proposed here. © 2016 The Authors. International Journal for Numerical Methods in Engineering Published by John Wiley & Sons Ltd.


INTRODUCTION
Inverse problems arise in almost all areas of science whenever the parameters of a model need to be inferred from observed data. A standard approach is to define a model-based reconstruction problem as the inversion of a mapping $f : X \to Y$ between function spaces $X$ (solution space) and $Y$ (data space). The topology of these spaces is determined by the physical nature of the parameters and data for a given problem. Except in a limited set of special cases, the mapping $f$ cannot be inverted explicitly, and the problem is set in variational form. The task of the reconstruction is then to find parameters $\hat x$ that minimise an objective function $\Phi(x)$, defined as a norm between model and measurement data $y^{\mathrm{obs}}$:
$$\Phi(x) = \mathcal{D}\big(f(x), y^{\mathrm{obs}}\big) + \tau\, \mathcal{R}(x), \qquad (1)$$
where $\mathcal{D}$ is a data fit function (typically the negative log-likelihood) and $\mathcal{R}$ is a regularisation term with regularisation parameter $\tau$. In many inverse problems in imaging, the forward operator represents the propagation of waves or radiation, such as electromagnetic, acoustic or optical, and needs to be modelled computationally. For many problems, $f$ is then represented by a partial differential equation (PDE) in a domain of interest $\Omega$, with appropriate boundary conditions on the boundary $\partial\Omega$, and a projection operator that maps the solution of the PDE to a discrete set of measurements. The inverse problem is then one of parameter identification and is often non-linear and strongly ill-posed.
Examples include the following: Electrical Impedance Tomography, where the forward model is the generalised Laplace Equation (a low-frequency approximation of Maxwell's Equations) and the parameter of the inverse problem is admittivity [1]; Diffuse Optical Tomography, and its variants including Fluorescence Diffuse Optical Tomography where the forward model is the diffusion equation (a high-scattering approximation of the Radiative Transfer Equation) and the parameter of the inverse problem is absorption and/or diffusivity [2]; and Seismic Tomography, where the forward model is a hyperbolic equation for acoustic propagation (a simplification of the Elasticity Equation), and the parameter of the inverse problem represents various mechanical characteristics of rock such as density and/or stiffness [3].
In most practical applications, evaluation of $f$ requires a numerical solution strategy that discretises the continuous problem into a finite-dimensional one and represents the parameters $x(\mathbf{r})$ and field variables by a finite-dimensional basis expansion $U$. We denote the discretised forward problem by
$$y = f^u(x^u), \qquad (2)$$
where $x^u$ is the approximation of the spatial parameters $x$ in basis $U$. The most common numerical approaches include the finite element method (FEM), the finite difference method and the boundary element method.
In this paper, we consider FEM as the numerical realisation of the forward problem, where $\Omega$ is subdivided into an unstructured mesh of non-overlapping elements of simple shape, such as triangles in 2D problems and tetrahedra in 3D problems, that define $U$ as a local, piecewise polynomial basis expansion.
The choice of basis for the forward problem is governed by considerations of computational stability and accuracy of the numerical model. This is often not an optimal choice for the inverse problem, where it may be more appropriate to adapt the basis expansion to physical limits of resolution of the reconstruction and to enforce smoothness constraints in the admissible solutions.
It is therefore often desirable to represent the inverse problem Equation 2 in a different basis representation $V$, at a different resolution and with different basis functions. For example, $V$ might be a regular pixel or voxel grid representing an image of the reconstructed parameter distribution, or a blob basis that imposes smoothness constraints on the reconstruction. Where $U$ and $V$ are different, a mapping must be devised to transfer the expansion of a variable $x(\mathbf{r})$ between the basis representations $x^u$ and $x^v$. For example, in an iterative minimisation approach, an updated parameter set $x^v$ must be mapped to $x^u$ to evaluate the forward model for computing $\Phi$. The mapping algorithm should be optimal in the sense that the discretisation error of the mapped basis expansion is minimal. In general, the mapping is non-trivial, because the basis expansions are non-conforming and require the integration of basis functions over partial support areas. Previously [4], we presented a simple mapping method that relied on $U$ being of higher resolution than $V$, such that basis functions of $V$ can be adequately represented by an expansion in $U$, reducing the mapping problem to operations in $U$ that can be performed with standard FEM tools.
In this paper, we propose an optimisation-based approach to define the mapping, which avoids these limitations; either or both bases may be of arbitrary, variable or multiple resolution. The key technique is the splitting of the mesh elements comprising basis $U$ such that the subdivided elements align with the boundaries of support of the basis functions of $V$. This allows us to compute the integrals of products of mixed basis functions. The integrals are evaluated with a quadrature rule that is either exact, where $V$ is piecewise polynomial, or approximate in the general case.

Finite element basis
We consider the model to be a finite element discretisation of a PDE over a simply connected domain $\Omega \subset \mathbb{R}^d$, $d \in \{2, 3\}$, with boundary $\partial\Omega$. $\Omega$ is divided into $P$ non-overlapping elements $e_i$, $i = 1..P$, with element subdomains $S(e_i) \subseteq \tilde\Omega$ such that $\bigcup_{i=1}^P S(e_i) = \tilde\Omega$, where $\tilde\Omega$ is an approximation of $\Omega$ that replaces the boundary $\partial\Omega$ with a polygonal or polyhedral approximation $\partial\tilde\Omega$.
We define a piecewise polynomial basis expansion $U^{(k)}$ of order $k$, composed of $N$ basis functions $u_i(\mathbf{r})$, $i = 1..N$, $\mathbf{r} \in \tilde\Omega$, with limited support, such that a continuous variable $x(\mathbf{r})$ in $\Omega$ is represented by the basis approximation
$$x(\mathbf{r}) \approx x^u(\mathbf{r}) = \sum_{i=1}^N X^u_i u_i(\mathbf{r}), \qquad (3)$$
where $X^u = \{X^u_1, \ldots, X^u_N\} \in \mathbb{R}^N$ is the coefficient vector representing $x$ in basis $U = \{u_1, \ldots, u_N\}$. For $k = 0$, we have $N = P$ and the $u_i$ are piecewise constant: $u_i(\mathbf{r}) = 1$ if $\mathbf{r} \in S(e_i)$; $0$ otherwise. More frequently, orders $k \geq 1$ are chosen, providing a continuous, piecewise polynomial basis expansion. In this case, each $u_i$ is associated with a node $n_i \in \mathbb{R}^d$, $i = 1..N$, that participates in a subset of elements $e_{j(i)}$, and $u_i$ is a piecewise polynomial function of order $k$ with support $S(u_i) = \bigcup_{j(i)} S(e_j) \subseteq \tilde\Omega$. For $k = 1$, the nodes are given by the element vertices. For $k > 1$, additional nodes are required along element edges, faces and volumes to define $u_i$. For simplicity, we restrict the discussion in this paper to polynomial basis functions, and most frequently to $k = 1$. To simplify notation, we abbreviate $U = U^{(1)}$. The methods discussed in the following can be directly applied to other choices of FEM basis expansion.

Inverse problem basis
While the choice of the model basis $U$ is governed by considerations of numerical stability and accuracy of the FEM forward model, it may not be suitable to represent the results of the solution of an inverse problem, nor be a natural representation of the parameters of the forward model. We therefore introduce a separate inverse problem basis $V$, consisting of $M$ basis functions $v_j(\mathbf{r})$, $j = 1..M$, $\mathbf{r} \in \Omega$, of limited support $S(v_j) \subseteq \Omega$. $V$ may be chosen to match the physical resolution limits of the problem or to impose a specific level of smoothness in the images it can represent.
A large number of choices for $V$ can be considered. In the simplest case, $V$ may represent a piecewise constant pixel or voxel image, such that $v_j(\mathbf{r}) = 1$ if $\mathbf{r} \in$ pixel $j$; $0$ otherwise. Alternatively, in analogy to the unstructured FEM basis, $V$ can be a continuous polynomial basis, with $v_j$ associated with the vertices of the basis grid and support over the neighbouring pixels according to the order of expansion.
Further, $V$ can represent a blob basis, consisting of radially symmetric basis functions that depend only on the distance $d = \|\mathbf{r} - \mathbf{R}_j\|$ from a reference point $\mathbf{R}_j$, and are truncated to a support radius $d_{\max}$, such that
$$v_j(\mathbf{r}) = v(d) \quad \text{for } d \leq d_{\max}, \qquad (7)$$
and $v_j(\mathbf{r}) = 0$ if $d > d_{\max}$. The blob centres $\mathbf{R}_j$ may be arranged in a regular grid or some other distribution [5, 6]. Blobs are often preferred in image reconstruction problems because they enforce an inherent smoothness in the solution that can be controlled by the choice of $v_j$ and by selecting specific shape parameters and the support radius [7].
Let $v(d)$ denote the radial profile of $v_j(\mathbf{r})$. Different choices for $v(d)$ exist [4], including Kaiser-Bessel window functions,
$$v(d) = \frac{I_m\!\left(\alpha\sqrt{1 - (d/d_{\max})^2}\right)}{I_m(\alpha)} \left(\sqrt{1 - (d/d_{\max})^2}\right)^m,$$
where $I_m$ is the modified Bessel function of order $m$ and $\alpha$ is an adjustable shape parameter; Gaussian profiles, where the width parameter defines the shape; and cubic B-splines, defined over the knots $d_k = (k-2)\, d_{\max}/2$.

Basis mapping
Having defined the forward model basis $U$ and the inverse problem basis $V$, we need to define mappings $U \to V$ and $V \to U$ that map the basis representation $x^u$ in $U$ of a field variable $x(\mathbf{r})$ to the corresponding representation $x^v$ in $V$, and vice versa. In general, the mapping will not be lossless, because $x^u$ cannot be represented exactly in $V$. We therefore define the mapping problem as follows: given a coefficient vector $X^u$ for basis $U$, find the corresponding vector $X^v$ of basis coefficients in $V$ such that a norm of the difference, $\|x^u - x^v\|_C$, for some choice of a normed linear space $C$, is minimised. We have previously [4] presented a simplistic sampling-based method of basis mapping, where the coefficients given in one basis expansion are projected onto the other basis expansion by sampling at the node or grid positions of the target basis. The mapping $V \to U$ is then given by
$$X^u_i = \sum_{j=1}^M X^v_j v_j(n_i),$$
where $v_j(n_i)$ denotes the value of basis function $v_j$ at node $n_i$ of basis $U$. Likewise, the mapping $U \to V$ is given by
$$X^v_j = \sum_{i=1}^N X^u_i u_i(g_j),$$
where $g_j$ denotes the position of the grid point associated with basis function $v_j$. We can thus define a projection matrix $G^{uv} \in \mathbb{R}^{N \times M}$ for mapping the coefficients of the two basis representations from $V$ to $U$ (Algorithm 1).
The construction of the reverse projection matrix $G^{vu} \in \mathbb{R}^{M \times N}$ for mapping from $U$ to $V$ is performed analogously.
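As an illustration of the sampling-based projection, the matrix $G^{uv}$ can be assembled by evaluating the inverse-basis functions at the mesh node positions. The following sketch assumes a 2D piecewise bilinear pixel basis on a regular grid; the function name, argument layout and grid indexing are illustrative, not the paper's implementation.

```python
import numpy as np

def sampling_map_matrix(nodes, grid_origin, grid_spacing, grid_shape):
    """Sketch of the SM projection V -> U for a bilinear pixel basis V on a
    regular 2D grid: entry (i, j) is v_j evaluated at mesh node n_i."""
    N = nodes.shape[0]
    M = grid_shape[0] * grid_shape[1]
    G = np.zeros((N, M))
    for i, r in enumerate(nodes):
        # local coordinates of node r within the grid
        t = (r - grid_origin) / grid_spacing
        ix, iy = int(np.floor(t[0])), int(np.floor(t[1]))
        fx, fy = t[0] - ix, t[1] - iy
        # bilinear hat functions of the four surrounding grid vertices
        for dx, wx in ((0, 1.0 - fx), (1, fx)):
            for dy, wy in ((0, 1.0 - fy), (1, fy)):
                jx, jy = ix + dx, iy + dy
                if 0 <= jx < grid_shape[0] and 0 <= jy < grid_shape[1]:
                    G[i, jx * grid_shape[1] + jy] = wx * wy
    return G
```

Each row of $G$ then holds the (at most four) bilinear weights of the grid vertices surrounding the corresponding mesh node, so $X^u = G X^v$ reproduces the nodal sampling above.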
In the following section, we will introduce a more rigorous basis mapping approach that provides the solution as the optimisation of an error functional.

Error norms for basis mapping
Define an objective function based on the $L_2$ norm of the difference between the basis representations,
$$\Phi_{L_2} = \|x^u - x^v\|^2_{L_2} = (X^u)^T B^{uu} X^u - 2\,(X^u)^T B^{uv} X^v + (X^v)^T B^{vv} X^v, \qquad (15)$$
where
$$B^{uu}_{ij} = \int_\Omega u_i u_j \, d\mathbf{r}, \qquad B^{vv}_{ij} = \int_\Omega v_i v_j \, d\mathbf{r}, \qquad B^{uv}_{ij} = \int_\Omega u_i v_j \, d\mathbf{r}. \qquad (16)$$
We define $B^{uu}$ and $B^{vv}$ as self-mass-matrices, that is, the symmetric positive definite mass matrices of the respective bases $U$ and $V$. We define $B^{uv}$ as the co-mass-matrix containing the mixed integrals of products of basis functions from $U$ and $V$. The mapping of a function $x^u$ in $U$ to $x^v$ in $V$ is then given by minimising Equation 15 with respect to $X^v$, which is found by solving
$$B^{vv} X^v = (B^{uv})^T X^u. \qquad (18)$$
Likewise, the mapping from $V$ to $U$ of a function $x^v$ is given by
$$B^{uu} X^u = B^{uv} X^v. \qquad (20)$$
In the following, we will denote the mapping procedure given by Equations 18 and 20 as the least squares mapping (LSM) method, to distinguish it from the sampling mapping (SM) method outlined in Algorithm 1. Instead of the previous $L_2$ norm, we also consider the Sobolev norm $H_1$:
$$\Phi_{H_1} = \|x^u - x^v\|^2_{L_2} + \alpha\, \|\nabla x^u - \nabla x^v\|^2_{L_2}, \qquad (21)$$
where $\alpha$ is a scalar, positive parameter and
$$D^{uu}_{ij} = \int_\Omega \nabla u_i \cdot \nabla u_j \, d\mathbf{r}, \qquad D^{vv}_{ij} = \int_\Omega \nabla v_i \cdot \nabla v_j \, d\mathbf{r}, \qquad D^{uv}_{ij} = \int_\Omega \nabla u_i \cdot \nabla v_j \, d\mathbf{r}. \qquad (22)$$
In analogy with the simpler least-squares case, we define $D^{uu}$ and $D^{vv}$ as self-stiffness-matrices, that is, the symmetric positive semi-definite stiffness matrices of the respective bases $U$ and $V$, and $D^{uv}$ as the co-stiffness-matrix containing the mixed integrals of scalar products of the gradients of basis functions from $U$ and $V$.
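Once the mass and stiffness matrices are assembled, each LSM mapping reduces to a single linear solve. A minimal sketch, assuming the matrices are precomputed and stored as dense NumPy arrays for brevity (a sparse factorisation would be used in practice); the function and argument names are illustrative:

```python
import numpy as np

def lsm_map(B_vv, B_uv, X_u, D_vv=None, D_uv=None, alpha=0.0):
    """Least-squares mapping U -> V: solve the normal equations
    (B_vv + alpha*D_vv) X_v = (B_uv + alpha*D_uv)^T X_u,
    i.e. the L2 objective for alpha = 0, or the H1 objective for alpha > 0."""
    A = B_vv.copy()
    b = B_uv.T @ X_u
    if alpha > 0.0:
        A = A + alpha * D_vv
        b = b + alpha * (D_uv.T @ X_u)
    return np.linalg.solve(A, b)
```

For a toy 1D check: mapping a two-cell piecewise constant basis onto a single cell covering both recovers the average of the two coefficients, as expected of an $L_2$ projection.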

Element subdivision
The computation of the co-mass-matrix $B^{uv}$ in Equation 16 and the co-stiffness-matrix $D^{uv}$ in Equation 22 requires the calculation of the integrals of the mixed products $u_i v_j$ and of their derivatives $\nabla u_i \cdot \nabla v_j$. These are generally not trivial to compute, because $U$ and $V$ are non-conforming, and the intersection of the areas of support, $S(u_i) \cap S(v_j)$, may not coincide with element boundaries in either basis. We solve this problem by element subdivision: split the elements representing the support of $u_i$ into sub-elements along a cutting surface defined by the support boundary of $v_j$, then compute the integrals over all sub-elements within the support of $v_j$ and sum them up. For some choices of $U$ and $V$, the integrals may be evaluated analytically, for example, where both $U$ and $V$ are piecewise polynomial. Otherwise, a numerical quadrature scheme may be employed. Element subdivision and integration over partial elements is a well-known problem in different applications of the FEM. It occurs, for example, in the computation of non-matching and overlapping meshes, where a mesh representing an object is overlaid on a mesh representing a background [8, 9], where elastic materials are inserted into an elastic background of different properties [10], or where an elastic body is immersed into a fluid [11]. Cut element problems also arise in fictitious domain problems, where physical domain boundaries may intersect a mesh [12, 13]. A common approach for treating interface problems on unfitted meshes is the use of Nitsche's method [14], which allows for discontinuities across element volumes. Dolbow et al. [15] and Hansbo et al. [16] use Nitsche's method for computing jumps in a field without re-aligning the mesh.
The subdivision of tetrahedral elements has been described in the context of physical cutting of materials, for example, soft tissue cutting during surgical interventions [17,18], fracture modelling [19] or surgical reconstruction [20].
Where V represents a regular pixel or voxel basis, with planar boundaries between basis functions, the resulting subdivided elements are polyhedra, which can then be further subdivided into simple elements such as triangles and tetrahedra, and standard FEM techniques can be applied to compute the integrals. If V represents a more complex basis with non-planar boundaries, such as a blob basis composed of overlapping radially symmetric basis functions, the subdivided elements have non-trivial curved boundaries. In this case, we propose to represent the boundaries by a polygonal approximation and proceed as before by further subdividing into triangles and tetrahedra.
In general, for arbitrary basis definitions V , a numerical quadrature scheme may have to be employed to compute B uv and D uv .
In the following subsections, we describe the methods of element subdivision for 2D problems using an unstructured triangular basis and for 3D problems using an unstructured tetrahedral basis for $U$.

Subdivision in 2D.
For 2D problems, we consider triangular elements with piecewise polynomial basis functions for the FEM basis $U$, while $V$ may be either a regular pixel basis with piecewise bi-polynomial basis functions or a blob basis with radially symmetric basis functions.
The element-splitting algorithm for the calculation of the co-matrices $B^{uv}$ and $D^{uv}$ is shown in Algorithm 2. For each combination of basis functions $v_j$, $j = 1..M$, and $u_i$, $i = 1..N$, loop over all triangles $e_k \subseteq S(u_i)$ and calculate the intersection $W = S(e_k) \cap S(v_j)$, using a Sutherland-Hodgman [21] algorithm. If $S(v_j)$ is not polygonal, as is the case for blob basis functions, we replace $S(v_j)$ with an $n$-sided polygonal approximation $\tilde S(v_j)$ (Figure 1(b)). The accuracy of the mapping and its computational cost depend on $n$. For the computations in this paper, the support areas of 2D blob basis functions were approximated by a regular polygon with $n = 32$. As both $S(e_k)$ and $S(v_j)$ (or its approximation) are polygonal and convex, $W$ is also polygonal and convex. Let $W$ be a $(P_k + 2)$-sided polygon with $P_k \geq 1$ (omitting the trivial case $W = \emptyset$). $W$ can then easily be subdivided into $P_k$ sub-triangles $\tilde e_m$ with $\bigcup_m S(\tilde e_m) = W$.
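The clipping step can be sketched with a plain Sutherland-Hodgman routine. This is a generic textbook version for convex clip polygons, not the code used in the paper:

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman clipping of a subject polygon against a convex
    clip polygon (both given as CCW vertex lists), as used to form the
    intersection W of a triangle with a pixel or polygonal blob support."""
    def inside(p, a, b):
        # p lies left of (or on) the directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0.0
    def intersect(p, q, a, b):
        # intersection of segment p-q with the infinite line through a-b
        dx1, dy1 = q[0]-p[0], q[1]-p[1]
        dx2, dy2 = b[0]-a[0], b[1]-a[1]
        denom = dx1*dy2 - dy1*dx2
        t = ((a[0]-p[0])*dy2 - (a[1]-p[1])*dx2) / denom
        return (p[0] + t*dx1, p[1] + t*dy1)
    output = list(subject)
    for a, b in zip(clip, clip[1:] + clip[:1]):
        if not output:
            break          # subject entirely clipped away
        inp, output = output, []
        for p, q in zip(inp, inp[1:] + inp[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output
```

The returned vertex list is the convex intersection polygon $W$, which can then be fanned into the $P_k$ sub-triangles described above.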

Subdivision in 3D.
For 3D problems, we consider tetrahedral elements for the finite element basis $U$. For basis $V$, we use either a regular voxel grid with piecewise constant or trilinear basis functions, or a blob basis with radially symmetric basis functions. We employ a 3D extension of the Sutherland-Hodgman algorithm for computing the intersections of the basis supports. In analogy to the 2D case, the spherical support $S(v_j)$ of blob basis functions is approximated by a regular polyhedron; for the results presented in this paper, we used an icosahedron (20 faces) as the approximation. Algorithm 3 shows the process of element splitting employed for the calculation of the co-matrices $B^{uv}_{ij}$ and $D^{uv}_{ij}$ for 3D problems. The algorithm is similar to the 2D case, but in order to reduce the clipping process to a sequence of elementary operations of splitting a tetrahedron along a single intersecting plane, we perform the splitting along each face of $S(v_j)$ iteratively and re-form a clipped tetrahedral volume mesh at each step. The process starts with the single-element mesh $m = \{e_k\}$. For each face of $S(v_j)$, the elements of $m$ are split by the plane defined by this face. Elements entirely external are discarded, elements entirely internal are retained, and intersecting elements are split and replaced in $m$ by their internal fragments.
There are two possible cases for the intersection of a tetrahedron with a single plane (Figure 2): (i) the vertex nodes are split 1-3; in this case, we can split the tetrahedron into one subtetrahedron on one side of the cutting plane and three subtetrahedra on the other side. (ii) The vertex nodes are split 2-2; in this case, we can split the tetrahedron into three subtetrahedra on each side of the cutting plane.

The cases where one or more of the vertices of the tetrahedron coincide with the clipping plane are treated separately, to avoid the generation of degenerate subtetrahedra with coinciding vertices. If exactly one vertex is on the cutting plane, the tetrahedron can be split into three subtetrahedra. If two vertices are on the cutting plane, the tetrahedron can be split into two subtetrahedra. The trivial case where three vertices are on the cutting plane does not require subdivision. Figure 3 shows the result of intersecting and subdividing a tetrahedron $e_k$ with the icosahedral approximation of the support $S(v_j)$ of a blob basis function $v_j$.
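The case analysis above, including the degenerate configurations, can be captured in a small classifier. The sketch below only reports the number of subtetrahedron fragments generated on the (negative, positive) sides of the plane; the fragment geometry itself is omitted, and all names are illustrative:

```python
def tet_split_case(vertices, plane_point, plane_normal, tol=1e-10):
    """Classify a tetrahedron against a cutting plane and return the
    fragment counts (on negative side, on positive side) following the
    case analysis in the text: 1-3 split -> 1+3, 2-2 split -> 3+3,
    one vertex on the plane -> 1+2, two vertices on the plane -> 1+1."""
    def side(v):
        d = sum((v[k] - plane_point[k]) * plane_normal[k] for k in range(3))
        return 0 if abs(d) < tol else (1 if d > 0 else -1)
    s = [side(v) for v in vertices]
    neg, zero, pos = s.count(-1), s.count(0), s.count(1)
    if neg == 0 or pos == 0:
        return (1, 0) if pos == 0 else (0, 1)    # no cut: tet stays whole
    if zero == 0:
        if {neg, pos} == {1, 3}:                 # 1-3 vertex split
            return (1, 3) if neg == 1 else (3, 1)
        return (3, 3)                            # 2-2 vertex split
    if zero == 1:                                # one vertex on the plane
        return (1, 2) if neg == 1 else (2, 1)
    return (1, 1)                                # two vertices on the plane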

If $V$ is a regular voxel basis with cuboid basis support $S(v_j)$, we can re-arrange the splitting process to make it more efficient. We exploit the fact that many basis functions $v_j$ have coplanar faces and thus share common cutting planes. We discard the outer loop over basis functions $v_j$, which would result in re-calculating identical cutting planes for different basis functions, and instead split the object tetrahedron $e_k$ on all parallel cutting planes it intersects, retaining the subtetrahedra on both sides of each cutting operation, and labelling the generated subtetrahedra according to the cutting slice in which they are located. This process is repeated with the cutting planes in the other two spatial directions. The result is a sub-mesh over the object tetrahedron where each tetrahedral sub-element is labelled according to the voxel it is located in. The integral contributions to $B^{uv}_{ij}$ and $D^{uv}_{ij}$ are then computed for all participating voxels $j$ (Algorithm 4). Figure 4 shows the result of such a subdivision of an object tetrahedron (top image) along the cutting planes of a voxel basis in the $x$, $y$ and $z$ directions. The colours indicate the voxel labels of each of the generated sub-tetrahedra.

Self-mass-matrix calculations
The matrices $B^{uu}$ in Equation 16 and $D^{uu}$ in Equation 22 can be computed with standard tools of finite element analysis. For example, there exist analytic formulae for products of polynomial shape functions over simple elements such as triangles and tetrahedra [22]. More complicated element types, such as isoparametric elements with curved boundaries, may require a numerical quadrature scheme. The computation of $B^{vv}$ and $D^{vv}$ is straightforward for a polynomial basis $V$. For blob bases, the computation of $B^{vv}$ becomes non-trivial, because it requires the integral of the product of two radial basis functions $v_i$ and $v_j$ over their region of overlap, $W = S(v_i) \cap S(v_j)$. We replace the supports $S$ by their polygonal approximations $\tilde S$, compute $W$ with the method described in Algorithm 2 or 3, and divide the overlap into sub-elements. The integrals $\int v_i v_j$ are evaluated with a numerical quadrature scheme over the sub-elements and summed up. The process is shown in Algorithm 5.
If the blob basis is arranged in a regular grid, only a small number of distinct overlap configurations occur, which makes the computation of $B^{vv}$ inexpensive compared with $B^{uv}$. The number of neighbours contributing to each row of $B^{vv}$ depends on the size of $d_{\max}$ relative to the grid spacing. Algorithm 6 shows the modified process.

Co-mass-matrix calculations
In order to compute the integrals in Equation 16 for the elements of $B^{uv}$, we employ Algorithm 2 for 2D problems and Algorithm 3 for 3D problems to perform the required element splitting, and sum the integrals over the generated sub-elements.
If both $U$ and $V$ consist of polynomial basis functions, the integrals $\int u_i v_j$ over the sub-elements can be computed exactly. For more general choices, such as $V$ representing a blob basis, a numerical quadrature scheme may be used.
In the simplest case, $V^{(0)}$ represents a piecewise constant pixel or voxel basis. In this case, the components of $B^{uv}$ simplify to
$$B^{uv}_{ij} = \sum_k \int_{S(e_k) \cap S(v_j)} u_i(\mathbf{r}) \, d\mathbf{r},$$
where the sum runs over all elements $e_k \subseteq S(u_i)$. After subdivision of element $e_k$ along the edges of pixel $j$ into $m$ sub-elements $\tilde e_{kl}$, $l = 1..m$, excluding sub-elements outside pixel $j$,
$$\int_{S(e_k) \cap S(v_j)} u_i \, d\mathbf{r} = \sum_{l=1}^m \int_{S(\tilde e_{kl})} u_i \, d\mathbf{r}.$$
The integral of $u_i$ over $\tilde e_{kl}$ can be computed in closed form when $U$ is a polynomial basis. If $V$ consists of higher-order polynomial basis functions, such as piecewise bilinear or trilinear functions, then the basis indices $j$ refer to the vertex positions joining the pixels, rather than to the pixels themselves, and each basis function has support over multiple pixels. For the computation of $B^{uv}$, the same element subdivision strategy can be employed, and a numerical quadrature rule is used for evaluating the integrals.
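For a linear basis $U$ in 2D, the sub-element integrals have a simple closed form: a linear function integrates to the sub-triangle area times the mean of its vertex values. A sketch of such a helper (hypothetical, with the barycentric evaluation written out by hand):

```python
def integrate_linear_over_subtri(tri_verts, nodal_vals, sub_verts):
    """Exact integral of a linear shape-function expansion (defined by
    nodal values on the parent triangle) over a sub-triangle contained
    in it, as needed for piecewise-constant B_uv entries."""
    (x1, y1), (x2, y2), (x3, y3) = tri_verts
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    def value(p):
        # barycentric coordinates of p in the parent triangle
        l2 = ((p[0] - x1) * (y3 - y1) - (x3 - x1) * (p[1] - y1)) / det
        l3 = ((x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1)) / det
        l1 = 1.0 - l2 - l3
        return l1 * nodal_vals[0] + l2 * nodal_vals[1] + l3 * nodal_vals[2]
    a, b, c = sub_verts
    area = abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0
    # a linear function integrates to area * (value at the centroid),
    # and the centroid value is the mean of the three vertex values
    return area * (value(a) + value(b) + value(c)) / 3.0
```

Summing this quantity over the sub-triangles of each clipped element reproduces the pixel-wise $B^{uv}$ entries above exactly, with no quadrature error.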

Stiffness matrix calculations
The calculation of the self- and co-stiffness matrices in Equation 22 for the computation of the $H_1$ norm follows along the same lines as discussed for the mass matrix computation. Again, the computation of the self-stiffness matrix $D^{uu}$ is performed with standard finite element techniques, which include direct and exact solutions for polynomial basis functions.
Similarly, the evaluation of the self-stiffness matrix $D^{vv}$ can make use of the same FEM machinery for certain basis function types, such as piecewise bi-polynomial and tri-polynomial basis functions. The basis functions must, however, be differentiable, which precludes piecewise constant bases. Where direct expressions of the integrals $\int \nabla v_i \cdot \nabla v_j$ over elements are not available, numerical schemes are employed. This includes blob bases, where the calculation of the intersection $W = S(v_i) \cap S(v_j)$ is identical to the process for $B^{vv}$ as laid out in Algorithms 5 and 6. Equally, for the co-stiffness matrix $D^{uv}$, the subdivision process is identical to that for $B^{uv}$, which means that the computation of the $B$ and $D$ matrices can be performed simultaneously on the subdivided elements, as indicated in Algorithms 2 and 3. The integrals $\int \nabla u_i \cdot \nabla v_j$ over the sub-elements are computed directly where available, or by numerical quadrature otherwise.

Direct mapping from pixel images
In addition to the mapping between bases U and V , it may be useful to map from a high-resolution regular pixel or voxel image directly into V . For example, a parameter distribution may be provided in the form of an image and must be expressed in V to form an initial condition for a reconstruction problem.
Let basis $W$ be the piecewise constant pixel or voxel basis of dimension $P$ representing the image. We require a mapping from $W$ to $V$. With the mapping tools defined earlier, we could map from $W$ to a mesh basis $U$ and then back from $U$ to $V$. However, the intermediate mapping to a mesh basis expansion would lead to a loss of accuracy. Instead, we provide a direct mapping from the pixel values $X^w$ in $W$ to basis coefficients $\hat X^v$ in $V$, obtained by solving
$$B^{vv} \hat X^v = b, \qquad b_i = \sum_{j=1}^P X^w_j \int_{V_j} v_i(\mathbf{r}) \, d\mathbf{r},$$
where $V_j$ is the support area of pixel $j$. Similar to the mapping $U \to V$, the mapping $W \to V$ is performed by subdividing the support area of each $v_i$ along the intersecting pixel edges and summing up the integrals over the contributing subregions.

Adaptive basis formulation
For efficient inverse problem solution, a basis with locally varying resolution is often desirable. The unstructured mesh basis U may be locally refined in accordance with an a posteriori error estimate. Likewise, the inverse basis V may be iteratively adapted to features that emerge during the reconstruction.
Starting from a choice of $V$ with basis functions arranged in a regular grid, a natural choice for refinement is a quadtree or octree approach, where a pixel or voxel may be subdivided into four pixels or eight voxels of equal size. This type of adaptive refinement is well suited to the basis mapping techniques described in this paper, because the required element-splitting procedures are already in place, and the quad- and octree subdivisions of a voxel $v_k$ in $V$ affect the mapping matrices only locally, by replacing a column of $B^{uv}$ with those of its descendants.

Figure 5 shows an overview of the different basis representations and the mappings between them in a chart. The original 'Lena' image $X^w$ (a) consists of a piecewise constant 512 × 512 pixel image (basis $W$). This is mapped into a regular bilinear 32 × 32 basis $X^v$ (b) using the $W \to V$ mapping step, and mapped back to the high-resolution pixel basis with mapping $V \to W$ (d). Using a mesh, in this example a circular mesh consisting of 413 nodes with inhomogeneous node density (c), we perform the mapping $V \to U$ (e), and map the result back to basis $W$ with the step $U \to W$ (f). Figure 7 shows the equivalent basis mapping steps for a different source image, using a 512 × 512 piecewise constant pixel representation of the 'Lena' test image. The same mesh for representing $U$ and the same three choices for $V$ were used as for Figure 6.

3D basis mapping examples
The results of a 3D mapping problem are shown in Figure 8 for a 3D checkerboard source image, and in Figure 9 for a 3D phantom source image containing spherical and ellipsoidal inclusions in a homogeneous background. In both cases, the source images (a) were represented in a 128 × 128 × 128 piecewise constant voxel basis $W$, the target domain was a cylinder, and the mesh representation for basis $U$ consisted of 12,676 nodes and 55,296 elements (c). The source image was first mapped into the mesh basis $U$ (b) and then back into the original trilinear voxel grid basis. Cross sections of the results for the SM (d) and LSM (e) basis mapping algorithms are shown. In both cases, the $L_2$ errors of the differences between the original basis coefficients (b) and the $V \to U \to V$ mapped basis coefficients are smaller for the new LSM algorithm (e) than for the SM algorithm (d).

Comparison of mapping norms
To compare the effect of the choice of norm minimised by the mapping algorithm, we compare the mapping results using the $L_2$ norm (Equation 15) and the $H_1$ norm (Equation 21). It can be seen that the use of the $H_1$ norm produces a smoother result in the mapped images, while the $L_2$ norm retains more of the high-frequency features of the original image.

Mapping errors
To demonstrate the advantage of the LSM mapping algorithm proposed in this paper over the SM method [4], we perform a mapping of a source image to different basis expansions and compare the mapping errors of the final mapped image with respect to the source image. In particular, we compare the errors incurred by the mappings $V \to U$ and $U \to V$ for a range of choices of $V$ in 2D and 3D examples. Figure 11 shows the 512 × 512 pixel 2D checkerboard source image $X^w$ (a) and an unstructured circular mesh, consisting of 682 nodes, used to define basis $U$ (b).
For the first test, we perform the mapping $X^w \xrightarrow{W \to U} X^u \xrightarrow{U \to W} \tilde X^w$. Figures 11(c) and (d) show $\tilde X^w$ obtained with the SM and LSM methods, respectively. Figures 11(e) and (f) show the absolute differences $|(\tilde X^w - X^w)_i|$ for both cases. It can be seen that the new algorithm provides a better approximation of the internal structure of the source image, at the cost of some loss of homogeneity in the constant regions. The $L_2$ norms of the image difference for the SM and LSM mapping results are 122.7 and 103.8, respectively. Figure 12 shows a corresponding comparison between the SM and LSM methods, using the 'Lena' test image and a higher-resolution mesh for basis $U$. Again, it can be seen that the LSM algorithm provides an improved mapping result and generates smaller errors. We quantify the mapping errors for different choices of $V$ and $U$ by comparing the norms of the image differences $\|\tilde X^w - \check X^w\|_D$, $D \in \{L_2, H_1\}$. Figure 13 shows cross sections of the mappings in Equations 28 and 29 for a 3D checkerboard pattern (a, b) and a 3D test phantom (c, d), using a trilinear voxel basis (a, c) and a cubic spline blob basis (b, d). In each case, the first column shows the reference image $\tilde X^w$, while columns 2 to 4 show $\check X^w$ for SM, LSM($L_2$) and LSM($H_1$), respectively. The error norms of the image differences between the $W \to V \to W$ and $W \to V \to U \to V \to W$ mappings are shown in Table I for the SM and LSM algorithms, applying both the $L_2$ and $H_1$ error functionals. The table lists both the $L_2$ and $H_1$ norms of the image differences. It can be seen that the LSM approach consistently achieves lower mapping errors than the SM method. Finally, we explore the potential of reducing mapping errors by modifying basis $U$ adaptively, in a 2D example. Figure 14 shows a 512 × 512 'Lena' source image (a) and an initial unstructured circular mesh with 682 nodes (b). Images (c) and (d) show the result of the $W \to U \to W$ mapping for the SM and LSM algorithms, respectively. The $L_2$ norms of the image errors are 44.8 and 38.0.
The pixel-wise image errors are then used to define a local node density target for re-meshing the circular domain with Delaunay triangulation. The meshing parameters were set such that the total number of nodes was kept approximately constant. Images (e) and (f) show the resulting mesh structures, with node counts of 856 and 819 for the SM and LSM algorithms, respectively. The result of the $W \to U' \to W$ mapping of the source image, where $U'$ denotes the adaptive mesh basis, is shown in images (g) and (h), and the pointwise image errors in images (i) and (j). It can be seen that the proposed algorithm provides a superior mapping result, despite using a lower-dimensional mesh basis.

Adaptive basis results
As an example of adaptive basis refinement, Figure 15 shows an iterative quadtree subdivision of an initial 16 × 16 basis grid. The first column of images shows the structure of the quadtree mesh at each subdivision level. The second column shows the source image, mapped into the corresponding piecewise constant adaptive basis, and the last column shows the differences to the original image.
The motivation for an adaptive basis definition is computational efficiency combined with a low representation error of the parameter. Table II compares the adaptive meshes at the five levels shown in Figure 15 with uniformly refined high-resolution meshes at the corresponding refinement levels. It can be seen that the adaptive meshes reduce the basis dimension significantly while retaining a similar level of mapping error as the fully refined meshes.

Parallel computation
Computation of the matrices $B_{uv}$ and $D_{uv}$ is computationally expensive due to the large number of element subdivisions and the need for numerical quadrature over the sub-elements. To make the problem tractable for larger meshes, we have devised a parallelised method of computation, using a threaded approach for shared-memory architectures. The mesh elements are distributed over the available threads, and element subdivision and integration are performed in parallel. Figure 16 shows the computation time for the mapping matrices for a mesh with 12,676 tetrahedral elements for basis $U$, and a regular voxel grid of dimension 128 × 128 × 8 with trilinear basis functions for basis $V$, as a function of the inverse of the number of threads used. The relationship is approximately linear, demonstrating good scalability.

Figure 16. Runtimes for computation of the basis matrices $B_{uu}$, $B_{uv}$ and $D_{uv}$ against the inverse of the thread count, using a multithreaded implementation.
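The shared-memory strategy described above can be sketched as follows (all names are illustrative and the per-element integration routine is a stand-in, not the paper's implementation): each thread accumulates the contributions of its share of the mesh elements into a private matrix, and the partial matrices are summed afterwards, avoiding locks on a shared matrix.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def assemble_parallel(elements, integrate_element, shape, n_threads=4):
    """Distribute mesh elements over threads. `integrate_element` returns
    (row, col, value) contributions for one element; each worker sums its
    contributions into a private matrix, merged at the end."""
    chunks = np.array_split(np.arange(len(elements)), n_threads)

    def worker(idx):
        local = np.zeros(shape)              # thread-private accumulator
        for e in idx:
            for (i, j, val) in integrate_element(elements[e]):
                local[i, j] += val
        return local

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return sum(pool.map(worker, chunks))  # merge partial matrices
```

The same element-partitioning pattern applies whether the per-element work is a simple stencil or the full sub-element subdivision and quadrature required for $B_{uv}$ and $D_{uv}$.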

CONCLUSIONS
We have presented in this paper a novel method for mapping a field variable $x(\mathbf{r})$, defined in a bounded domain $\Omega$, from one finite-dimensional basis representation $x_U(\mathbf{r})$ to a different representation $x_V(\mathbf{r})$. Such mappings between basis expansions may arise in numerical analysis where different components of the computational pipeline operate on different representations of the same spatial variable. In this paper, we considered a problem typical for an inverse problem setting, where the problem parameters are expressed in different basis expansions for the forward and reconstruction problems. We assumed the forward model to operate on an unstructured piecewise polynomial finite element basis representation, while the solution of the inverse problem is represented by a regular grid of basis functions, be it a piecewise constant or polynomial pixel or voxel representation, or a blob basis representation with radially symmetric basis functions arranged in a regular grid.

Adaptive approaches to the numerical solution of inverse problems, such as those described in the introduction, that could potentially benefit from the basis mapping algorithms derived in this paper include electromagnetic imaging, where the forward model is given by Maxwell's equations [23,24], diffuse optical tomography [4,25] and fluorescence tomography [26,27], electrical impedance tomography [28], as well as adaptive inverse electrocardiography [29] and electroencephalography problems [30]. Applications can also be found in geophysics, using adaptive methods for seismic tomography [31], local earthquake tomography [32] and magnetic susceptibility mapping [33]. Furthermore, they may have relevance in other areas besides inverse problems, such as in the fields of computational fluid dynamics [34] and in adaptive approaches to non-rigid image registration [35].
Our proposed method of basis mapping is based on an error-functional approach that minimises a norm of the difference between the two basis representations. We have compared this new LSM method with a previous approach (SM) that was based on sampling the source basis representation $U$ at the nodal positions of the target basis $V$. We have shown that the new method consistently achieves superior mapping results, in terms of the error-functional residual, compared with the SM method.
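The core of the LSM mapping can be illustrated in one dimension. The sketch below makes several simplifying assumptions that depart from the paper's implementation: piecewise-linear hat bases on two different 1D grids, and integrals approximated by dense midpoint quadrature rather than exact sub-element integration. It solves the normal equations $B_{vv} c_v = B_{vu} c_u$ arising from minimising the $L_2$ norm of the difference between the two expansions.

```python
import numpy as np

def hat(x, nodes, i):
    """Piecewise-linear hat function for node i on a 1D grid."""
    phi = np.zeros_like(x)
    if i > 0:
        m = (x >= nodes[i - 1]) & (x <= nodes[i])
        phi[m] = (x[m] - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
    if i < len(nodes) - 1:
        m = (x >= nodes[i]) & (x <= nodes[i + 1])
        phi[m] = (nodes[i + 1] - x[m]) / (nodes[i + 1] - nodes[i])
    return phi

def lsm_map(c_u, nodes_u, nodes_v, nq=2000):
    """L2-optimal basis mapping: solve B_vv c_v = B_vu c_u, with the
    integrals approximated on a dense midpoint quadrature grid."""
    x = np.linspace(min(nodes_u[0], nodes_v[0]),
                    max(nodes_u[-1], nodes_v[-1]), nq + 1)
    xm = 0.5 * (x[:-1] + x[1:])          # midpoint quadrature points
    dx = np.diff(x)                      # quadrature weights
    U = np.stack([hat(xm, nodes_u, i) for i in range(len(nodes_u))])
    V = np.stack([hat(xm, nodes_v, j) for j in range(len(nodes_v))])
    B_vv = (V * dx) @ V.T                # Gram (mass) matrix of basis V
    B_vu = (V * dx) @ U.T                # mixed mass matrix
    return np.linalg.solve(B_vv, B_vu @ c_u)
```

Because a globally linear function lies in the span of both piecewise-linear bases, mapping the nodal values of $x \mapsto x$ from $U$ to $V$ reproduces the target nodal values up to quadrature error, which is a useful sanity check for any implementation.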
In addition, the LSM method offers a choice of norms for defining the error functional to be minimised, which can be used to emphasise certain image properties. In this paper, we have compared the $L_2$ and $H^1$ norms and demonstrated that the latter suppresses high spatial frequencies in the mapped images.
The new mapping algorithm requires the computation of integrals of mixed products of basis functions, $\int_\Omega u_i v_j \, d\mathbf{r}$, or of their derivatives, which is non-trivial because the supports of $u_i$ and $v_j$ do not coincide. The calculation of these integrals is implemented by subdividing the elements of the unstructured mesh representing basis $U$ along the boundaries of support of each $v_j$, and performing the integrals by summing the integrals over the sub-elements. The subdivision adds a significant amount of computational cost, which can, however, be mitigated by parallel computation.
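In 2D, the geometric building block of this sub-element subdivision is clipping a triangular element against the half-planes bounding the support of a pixel basis function. A minimal sketch of that step (Sutherland-Hodgman clipping of a convex polygon by one half-plane, with the shoelace formula for the resulting area; names are illustrative):

```python
import numpy as np

def clip_halfplane(poly, n, d):
    """Sutherland-Hodgman: keep the part of convex polygon `poly`
    (list of 2D vertices, counter-clockwise) satisfying n . p <= d."""
    out = []
    m = len(poly)
    for k in range(m):
        p, q = poly[k], poly[(k + 1) % m]
        pin, qin = np.dot(n, p) <= d, np.dot(n, q) <= d
        if pin:
            out.append(p)
        if pin != qin:  # edge crosses the boundary: add intersection point
            t = (d - np.dot(n, p)) / np.dot(n, q - p)
            out.append(p + t * (q - p))
    return out

def area(poly):
    """Shoelace formula for the area of a simple polygon."""
    a = 0.0
    for k in range(len(poly)):
        p, q = poly[k], poly[(k + 1) % len(poly)]
        a += p[0] * q[1] - p[1] * q[0]
    return 0.5 * a
```

Repeated clipping against the four edges of a pixel yields the intersection polygon of element and pixel support, which is then triangulated so that standard quadrature rules apply on each sub-triangle.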
We have presented mapping results in both 2D and 3D problems, using unstructured meshes with locally varying node density and piecewise linear basis functions for basis $U$, and regular voxel grids with piecewise constant, or piecewise bi- and tri-linear basis functions for basis $V$. We have also demonstrated the use of a blob basis with radially symmetric basis functions. In this case, the limit of support of $v_j$ is approximated by a polygonal or polyhedral surface to facilitate the element subdivision.
We have finally introduced the potential of applying more complex basis representations for $V$, where a quadtree or octree representation was employed to construct a hierarchical basis expansion with locally varying resolution. This method could be used to perform adaptive refinement of the basis representation during reconstruction, or to incorporate prior information about internal structure.