Level sets and anisotropic mesh adaptation

In this paper, we focus on the problem of adapting dynamic triangulations during numerical simulations to reduce the approximation errors. Dynamically evolving interfaces arise in many applications, such as free surfaces in multiphase flows and moving surfaces in fluid-structure interactions. In such simulations, it is often required to preserve a high-quality interface discretization, which poses significant challenges for adapting the triangulation in the vicinity of the interface, especially when its geometry or topology changes dramatically during the simulation. Our approach combines an efficient levelset formulation to represent the interface in the flow equations with an anisotropic mesh adaptation scheme based on a Riemannian metric tensor to prescribe the size, shape and orientation of the elements. Experimental results are provided to emphasize the effectiveness of this technique for dynamically evolving interfaces in flow simulations.

1. Introduction. Nowadays, many computational simulations of PDE problems involve adaptive triangulations. In such triangulations, the mesh points are locally adapted in order to achieve a high concentration of nodes in regions where the gradient of the solution is large. In general, adaptive methods attempt to equidistribute the approximation errors by adapting the mesh density to a discrete metric tensor field based on the Hessian of the solution [22]. In recent years, anisotropy has allowed significant improvements in the accuracy of the solution and of the surface discretization, as well as the preservation of the order of convergence of the computational schemes [5]. These techniques have proved their ability to reduce the number of unknowns while preserving the accuracy of the numerical solutions, at least in steady-state simulations. However, when dealing with unsteady phenomena involving dynamically changing interfaces, such as free surfaces in multiphase flows or moving boundaries in fluid-structure interactions, the challenge becomes keeping pace with the arbitrary evolution of the interfaces, including changes of topology.
Usually, an interface can be characterized by a discontinuity of physical quantities like pressure, viscosity or density. In continuum mechanics, motion can be described in Eulerian or Lagrangian coordinates. In Lagrangian numerical schemes, each mesh vertex is associated with a particle of material and follows the displacement of this particle over time. Hence, flows with significant deformations may lead to large mesh distortions and subsequent accuracy problems related to numerical errors that are very tedious to remove, requiring complex remeshing procedures. In particular, vorticity is often a cause of mesh tangling, i.e., the creation of overlapping, inverted or non-convex elements, which leads to aborting the ongoing computation. In Eulerian schemes, on the other hand, the computational mesh remains fixed as the medium moves with respect to it. Hence, these methods intrinsically avoid any mesh-related difficulty, although the interfaces and the boundaries must be accurately discretized to resolve flow details. Moreover, Eulerian methods may lead to poor performance in solid mechanics simulations. It is thus important to rely on robust and efficient meshing techniques to discretize the computational domain.
In this paper, we investigate the problem of adapting dynamic triangulations during numerical flow simulations to reduce approximation errors. More precisely, given an interface of codimension one between two fluids, we are interested in computing its subsequent motion under a velocity field v related to the physics of the problem. Following the ideas suggested by Dervieux-Thomasset [17] and Osher-Sethian [33], we introduce a levelset function that represents the interface in the incompressible flow equations and the motion is tracked by convecting the level sets with the velocity field v using a first order Hamilton-Jacobi equation. We propose a dynamic anisotropic adaptation of the triangulation using a Riemannian metric tensor to prescribe the size and the stretching of the elements in the vicinity of the interface in combination with mesh modification operations. The latter are used for maintaining the mesh quality as well as the accuracy of the calculation. We provide a theoretical bound on the geometric approximation error when discretizing the interface with anisotropic elements.
As we have complex flow simulations in mind, it is very important to have an accurate representation of the computational domain boundaries (curves or surfaces). Very few approaches have been proposed for anisotropic (re)meshing of curve or surface triangulations [23], and dynamic surfaces introduce significant difficulties and constraints for mesh adaptation. A desirable procedure should be able to deal with complex geometry and topology as well as with non-uniform data. We will show that the control of the geometric representation of an interface using an anisotropic metric tensor field can be used for the purpose of reconstructing a curve or a surface from an unorganized data set, i.e., a point cloud. Following Zhao et al. [41], we propose a variational approach based on a distance function to the data set. The regularization method is related to the local density of points and provides a surface reconstruction smoother than a piecewise affine one. The differential properties of the distance function will allow us to define an anisotropic metric tensor used to enhance the boundary representation of the domain.
The remainder of this paper is structured as follows. Section 2 presents some background material on anisotropic mesh generation and metric-based mesh adaptation. Section 3 introduces a levelset formulation for time-dependent and interface tracking problems. Section 4 presents numerical results and discussions for static and dynamic interface capturing. Finally, Section 5 concludes the paper with a discussion of open problems in dynamic simulations.

2. Metric-based mesh adaptation. In this section, we present some basic notions and definitions on triangulations and meshes, metric tensor fields and anisotropic mesh adaptation.
Throughout this paper, Ω denotes a simply connected, open, bounded domain in R n ; Ω is the closure of Ω and |Ω| represents its n-dimensional measure or its volume.
2.1. Basic notions. Consider a polyhedral domain Ω in R^n and suppose we are given a family of triangulations T_h on Ω, where the parameter h > 0 represents the element size characterizing this family. We assume that each element K in T_h is a closed sub-domain of Ω, that Ω ⊂ ∪_{K∈T_h} K, and that the intersection of any two different elements is either empty or reduced to a common simplex of dimension at most n − 1 (elements cannot overlap). Under these assumptions, we define a uniform mesh T_h of Ω as a mesh in which all elements are equally sized and regular (equilateral); in such a case, h ≈ (|Ω|/|T_h|)^{1/n}, where |T_h| denotes the number of mesh elements in T_h. Following [30], a regular family of meshes is a family which satisfies the conditions: (i) there exists a constant τ such that h_K/ρ_K ≤ τ for all K ∈ T_h, where h_K = diam(K) and ρ_K = sup{diam(B), B is a ball contained in K} are the diameter and in-diameter of K, respectively; (ii) the size h = max_{K∈T_h} h_K tends to zero. A quasi-uniform mesh is a mesh for which (i) is satisfied and the variation of its element size is bounded by a constant. Hence, Relation (1) is true for quasi-uniform meshes. In the context of finite element approximation spaces, it may be useful to introduce the following notion. An affine family of triangulations is a family such that any element K in the family can be mapped from a reference element K̂ through an affine mapping. This is equivalent to stating that there exists an invertible affine mapping F: K̂ → K such that K = F(K̂); in addition, the reference element is chosen to be equilateral with unitary size |K̂| = 1.
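The regularity condition (i) is easy to evaluate in practice. The following sketch, which is illustrative and not taken from the paper, computes the ratio h_K/ρ_K for a 2D triangle, using the longest edge as the diameter and the inscribed-circle diameter as ρ_K:

```python
import numpy as np

def regularity_ratio(verts):
    """h_K / rho_K for a triangle: h_K is the diameter (longest edge) and
    rho_K the diameter of the inscribed circle, 2 * area / semi-perimeter."""
    a, b, c = (np.asarray(v, float) for v in verts)
    edges = [np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c)]
    h_K = max(edges)
    # Twice the signed area via the 2x2 determinant of the edge vectors.
    area = 0.5 * abs((b - a)[0] * (c - a)[1] - (b - a)[1] * (c - a)[0])
    rho_K = 2.0 * area / (0.5 * sum(edges))   # in-circle diameter
    return h_K / rho_K

# Equilateral triangle: the ratio equals sqrt(3); a nearly flat triangle
# has a very large ratio, violating condition (i) for any moderate tau.
r_eq = regularity_ratio([(0, 0), (1, 0), (0.5, np.sqrt(3) / 2)])
r_flat = regularity_ratio([(0, 0), (1, 0), (0.5, 0.01)])
```

A regular family keeps this ratio uniformly bounded as h tends to zero.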

2.2. Mesh adaptation. Mesh adaptation aims at improving the efficiency and the accuracy of numerical solutions by concentrating more nodes in regions of large solution variations than in other regions of the computational domain. As a consequence, the number of mesh nodes required to achieve a given accuracy can be dramatically reduced, resulting in a reduction of the computational cost. Traditionally, isotropic mesh adaptation has received much attention, where almost regular (equilateral) mesh elements are adjusted only in size, based on an error estimate. However, in regions of large solution gradient, adaptive isotropic meshes usually contain too many elements. Moreover, if the problem at hand exhibits strong anisotropic features, like shock waves or boundary layers in fluid dynamics, it is desirable to adjust the element shape and orientation as well as the element size to better represent the solution variations. In most cases, elements with high aspect ratio are needed, thus resulting in an anisotropic mesh. In recent years, research on anisotropic mesh adaptation has intensified and, consequently, mathematically sound error estimates have been developed; see for instance [8], [11], [21], [19]. These works have led to the concept of optimal triangles [10], and the analysis of the affine interpolation error through the gradient and Hessian of regular (quadratic) functions has provided a convenient way of dealing with anisotropy using the so-called metric tensor field [3], [25]. From a practical point of view, classical mesh adaptation algorithms fall into two categories, h-methods or r-methods. On the one hand, h-methods involve local mesh modification operations such as edge flipping, edge-vertex collapsing, vertex addition and vertex relocation.
Anisotropic meshes are generated using various strategies, like the Delaunay-based triangulation method, the advancing-front method, the quadtree-octree-based method and methods combining local mesh modifications; see [22] for an overview. A metric tensor prescribes the size, stretching and orientation of mesh elements at any place in the domain and is usually associated with mesh vertices. On the other hand, r-methods employ only vertex relocation procedures to adapt the mesh. Typically, they are based on solving elliptic or parabolic PDE systems [14] or use a functional to account for desired mesh properties like orthogonality or smoothness [39]. For unsteady problems, the nodes are moved according to the solution of an equation involving velocities of nodes in order to preserve the concentration of nodes in regions of high gradient of the numerical solution. Moving mesh methods can be based on a variational mesh generator [29].
2.3. Metric tensor. As mentioned, anisotropic mesh adaptation relies on an error estimate or an error indicator to prescribe the element size, shape and orientation using a matrix-valued field. From the continuous geometric viewpoint, the metric defining an element is represented by an ellipsoid. Hence, the notions of size, shape and orientation are associated with its volume, the ratios of the lengths of its semi-axes, and its principal axis vectors, respectively.
Commonly in anisotropic mesh generation, the metric tensor, denoted M(x), allows one to create a quasi-uniform mesh in the metric related to M. More precisely, the mesh elements are equilateral in the metric, corresponding to the condition [30]: F_K^t M_K F_K = I_n, where M_K is an average of M(x) on element K and F_K is the Jacobian matrix of the affine mapping F. Moreover, the volume of the element in T_h is unitary: ∫_K (det M(x))^{1/2} dx = 1, or, in a discrete formulation: (det M_K)^{1/2} |K| = 1. By extension, in the metric given by M(x) for any x in Ω, the length of a curve γ is defined as follows: ℓ_M(γ) = ∫_0^1 (γ'(t)^t M(γ(t)) γ'(t))^{1/2} dt, where γ(t): [0, 1] → R^n is a parametrization of γ.
In R^n, the metric M_K associated with an element K can be represented by its unit sphere, an ellipsoid B(K) defined as [25]: B(K) = {x ∈ R^n, (x − O)^t M_K (x − O) ≤ 1}, where O denotes the center of B(K). We assume that the metric tensor M(x) is a symmetric positive definite n×n matrix. Then, the spectral decomposition theorem allows us to express the matrix M in terms of its n eigenvalue-eigenvector pairs (λ_i, e_i) as: M = P Λ P^t, where the normalized eigenvectors of M are the columns of the matrix P = [e_1, ..., e_n], such that P P^t = P^t P = I, and Λ is the diagonal matrix with the eigenvalues λ_i on the diagonal. The geometric meaning of this decomposition is easy to understand: the matrix P determines the orientation, while the matrix Λ prescribes the size and shape of any element K in T_h.
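These notions translate directly into a few lines of linear algebra. The sketch below (NumPy; the helper names are ours, not the paper's) assembles a metric from its eigenpairs and measures an edge in that metric, for the simple case where M is constant along the edge; a curved edge would require integrating the length formula along its parametrization:

```python
import numpy as np

def metric_from_eigenpairs(eigvals, eigvecs):
    """Assemble M = P diag(lambda_i) P^t from eigenvalue/eigenvector pairs."""
    P = np.column_stack(eigvecs)          # orthonormal columns
    return P @ np.diag(eigvals) @ P.T

def edge_length_in_metric(a, b, M):
    """Length of segment [a, b] under a constant metric M: sqrt((b-a)^t M (b-a))."""
    e = b - a
    return float(np.sqrt(e @ M @ e))

# Anisotropic metric with target sizes h1 = 0.1 along e1 and h2 = 1.0 along e2:
# the eigenvalues are 1/h_i^2, so unit metric length corresponds to size h_i.
M = metric_from_eigenpairs([1 / 0.1**2, 1 / 1.0**2],
                           [np.array([1.0, 0.0]), np.array([0.0, 1.0])])

# An edge of Euclidean length 0.1 aligned with e1 has unit length in the metric.
l = edge_length_in_metric(np.array([0.0, 0.0]), np.array([0.1, 0.0]), M)
```

A mesh generator driven by M aims precisely at making every edge have unit metric length, which is what "quasi-uniform in the metric" means.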
To describe the mesh modifications used to create anisotropic meshes, we need to introduce primarily the notion of mesh quality.

2.4. Mesh quality. It is important to introduce quality measures in order to evaluate how closely the controls on size, shape and orientation prescribed by the metric M(x) at any x in Ω are satisfied. Mesh quality assessment has been an area of extensive research in the context of finite element methods, essentially for isotropic meshes (see [9] for a review).
Mesh quality measures can be defined to assess individual elements as well as the entire mesh T_h. The geometric (shape) quality measure of an element K can be defined from the Jacobian matrix as: Q_geo(K) = ||F_K||_F^2 / (n (det F_K)^{2/n}), where ||·||_F denotes the Frobenius matrix norm. Notice that Q_geo(K) ≥ 1 and that the best quality, Q_geo(K) = 1, is achieved for an equilateral element in T_h. Following [30], it is easy to prove that, for all K ∈ T_h, Q_geo(K) is bounded from below and above in terms of the ratio μ_max/μ_min, where μ_min and μ_max are the minimal and maximal singular values of F_K, respectively. These inequalities mean that the geometric quality of K is equivalent to the aspect ratio of K, as the dimension n is small. By extension, the geometric mesh quality is defined as: Q_geo(T_h) = max_{K∈T_h} Q_geo(K). Similar quality measures can be defined to assess the alignment and equidistribution conditions. However, for practical reasons, we advocate the use of a single measure to assess the quality of an element, like for instance: Q_ani(K) = α (Σ_{i=1}^k ℓ_M(e_i)^2) / |K|_M, where e_i denotes any of the k edges of K, ℓ_M(e_i) its length and |K|_M the volume of K in the metric M, and α is a normalization coefficient such that Q_ani(K) = 1 for an equilateral element. Again, Q_ani(K) ≥ 1 for all K in T_h. The larger max_K Q_ani(K) is, the farther the mesh deviates from satisfying the size, shape and orientation conditions.
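A quality measure of this edge-based kind can be sketched as follows (NumPy, 2D; the normalization α = 1/(4√3) is a common choice making an equilateral triangle score 1, and the paper's exact coefficient may differ):

```python
import numpy as np

def triangle_quality(verts, M):
    """Shape quality of a triangle under a constant 2x2 SPD metric M:
    Q = (sum of squared metric edge lengths) / (4*sqrt(3) * metric area).
    Q = 1 for an element equilateral in the metric, Q > 1 otherwise."""
    a, b, c = (np.asarray(v, float) for v in verts)
    edges = [b - a, c - b, a - c]
    sum_l2 = sum(e @ M @ e for e in edges)
    # Euclidean area via the 2x2 determinant, then scaled into the metric.
    area = 0.5 * abs((b - a)[0] * (c - a)[1] - (b - a)[1] * (c - a)[0])
    area_M = np.sqrt(np.linalg.det(M)) * area
    return float(sum_l2 / (4 * np.sqrt(3) * area_M))

# Equilateral triangle in the Euclidean metric: quality exactly 1.
eq = [(0, 0), (1, 0), (0.5, np.sqrt(3) / 2)]
q_eq = triangle_quality(eq, np.eye(2))

# The same triangle measured in a stretched metric is no longer equilateral,
# so its quality degrades (Q > 1).
q_aniso = triangle_quality(eq, np.diag([16.0, 1.0]))
```

The second call illustrates the key point of the section: quality is always relative to the metric, so a Euclidean-equilateral element can be a bad element for an anisotropic prescription.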

2.5. Error estimates. Among the various meshing strategies developed over the years, the definition of the metric tensor based on the Hessian of a solution variable is nowadays common in the meshing community. This choice is largely motivated by the interesting numerical results obtained in the seminal work of D'Azevedo [10] on linear interpolation of quadratic functions on triangles. In this approach, the metric tensor is defined as: M = R(v) |Λ(v)| R(v)^t, given the decomposition H(v) = R(v) Λ(v) R(v)^t of the Hessian of the function v. The metric M is further modified by imposing minimal and maximal edge lengths to avoid unrealistic metrics. The local sizes of an element with respect to the main directions prescribed by the eigenvectors are related to the eigenvalues: h_i = λ_i^{-1/2}, i = 1, ..., n. Moreover, the metric indicates the local density (distribution) of mesh nodes at any position x in the domain directly from the local sizes h_i. Suppose the mesh density function d is defined as d(x) = Π_{i=1}^n h_i(x)^{-1} = (det M(x))^{1/2}; then the optimal number of mesh vertices, or mesh complexity, hereafter denoted C, is given by the formula: C = ∫_Ω d(x) dx. Given a metric tensor M defined on Ω and considering a solution variable u, we proposed in [4] an error model e_M(x) for the local interpolation error on a mesh T_h, for all x in Ω, defined as the maximal second-order variation of u over the unit ball B(x) = {y ∈ Ω, (y − x)^t M(x) (y − x) ≤ 1} of the metric centered at x. A practical bound on the interpolation error on an element K ∈ T_h in the L^∞ norm is given by (see [25] for more details on the estimation): ||u − Π_h u||_{L^∞(K)} ≤ c_n max_{x∈K} max_{v⊂K} v^t |H(u)(x)| v, where v ⊂ K represents any vector inscribed in K and c_n is a constant related to the dimension of the space R^n. It is possible to show that the optimal directions of the metric coincide with the main directions of the Hessian, thus leading to an error model expressed in terms of the local sizes h_i and the eigenvalues of the Hessian, under the assumption that the Hessian of a C^2 continuous function u can be decomposed as H(u) = R(u) Λ(u) R(u)^t, where the eigenvectors of H(u) are the columns of the matrix R(u) = [e_1, ..., e_n].
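The construction M = R |Λ| R^t with clamped sizes can be sketched in a few lines (NumPy; the function name, the clamping bounds and the example Hessian are illustrative, not from the paper):

```python
import numpy as np

def hessian_metric(H, h_min=1e-3, h_max=1.0):
    """Metric from a Hessian: M = R |Lambda| R^t, with eigenvalues clamped
    so the prescribed sizes h_i = 1/sqrt(lambda_i) stay in [h_min, h_max]."""
    lam, R = np.linalg.eigh(H)                # symmetric eigendecomposition
    lam = np.abs(lam)                         # |Lambda|: drop the sign
    lam = np.clip(lam, 1.0 / h_max**2, 1.0 / h_min**2)
    return R @ np.diag(lam) @ R.T

# Hessian of u(x, y) = x^2 (a hypothetical example): curvature in x only.
H = np.array([[2.0, 0.0],
              [0.0, 0.0]])
M = hessian_metric(H)
sizes = 1.0 / np.sqrt(np.linalg.eigvalsh(M))  # prescribed sizes h_i
```

Here the zero eigenvalue of H is clamped to 1/h_max^2, so the flat direction gets the maximal allowed size h_max = 1 rather than an infinite one, while the curved direction gets h = 1/sqrt(2).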
The next step is to search for a metric M(x) minimizing the L^p norm of the error for a given number N of vertices. This means solving the problem: find M minimizing ||e_M(x)||_{L^p(Ω)}, under the constraint: C(M) = N. This optimization problem is solved in two steps: first the anisotropic quotients are computed, and then the density d(x) is evaluated. As a result, the optimal continuous metric is defined using a decomposition of the form M_{L^p} = D_{L^p} R diag(λ_i) R^t, where the matrix D_{L^p} and the eigenvalues λ_i depend on the norm L^p considered. Assuming a sequence of metrics M_i corresponding to a given number of vertices N, the order of convergence k for the norm ||e_{M_i}(x)||_{L^p} is attained if the following inequality holds: ||e_{M_i}(x)||_{L^p} ≤ C N^{-k/n}. According to this definition, all the metrics previously defined lead to a convergence order of two (k = 2).
In a numerical adaptive computation, the unknown Hessian H(u(x)) of a solution variable at any x in Ω must be replaced by a discrete approximation H_h(x). In [2], sufficient conditions are given on the approximation H_h under which a mesh quasi-uniform in the metric |H_h| is also quasi-uniform in the metric |H(u)|. Obviously, if the initial mesh used to recover the Hessian H_h is far from the optimal one, the discrete Hessian may not approximate the continuous one. Several methods allow recovering the Hessian H_h from the piecewise linear function u_h defined at the mesh vertices. For instance, a weak definition of the discrete Hessian is established using the Green formula as follows: (H_h(x))_{ij} = − (∫_{B(x)} ∂_{x_i} u_h ∂_{x_j} ϕ_x dy) / (∫_{B(x)} ϕ_x dy), where B(x) denotes the stencil of vertex x and ϕ_x represents the piecewise linear P^1 finite element basis function associated with the interior node x. At a boundary node, the Hessian H_h(x) can be extrapolated from its neighboring interior nodes.
Other Hessian recovery methods involve a projection technique [42] or a least-square approximation of the Hessian matrix based on a second order Taylor expansion of a C 2 continuous function u * h sufficiently close to the numerical solution u h [25]. All these techniques have linear complexity with respect to the number of mesh nodes. It turns out that the choice of the recovery method depends strongly on the norm in which the error has to be minimized.
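The least-squares variant mentioned above can be illustrated compactly. The sketch below (NumPy, 2D; our own helper, not the paper's implementation) fits a second-order Taylor expansion around a vertex to the values at its neighbors; for a quadratic function and enough well-placed neighbors, the recovery is exact up to round-off:

```python
import numpy as np

def recover_hessian_ls(x0, u0, pts, vals):
    """Least-squares recovery of gradient and Hessian at x0 from values at
    neighboring points, via a second-order Taylor expansion (2D sketch)."""
    rows, rhs = [], []
    for p, u in zip(pts, vals):
        dx, dy = np.asarray(p, float) - np.asarray(x0, float)
        # Unknowns: [u_x, u_y, u_xx, u_xy, u_yy] in the Taylor expansion
        # u(p) - u(x0) = u_x dx + u_y dy + 0.5 u_xx dx^2 + u_xy dx dy + 0.5 u_yy dy^2
        rows.append([dx, dy, 0.5 * dx * dx, dx * dy, 0.5 * dy * dy])
        rhs.append(u - u0)
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    ux, uy, uxx, uxy, uyy = coef
    return np.array([ux, uy]), np.array([[uxx, uxy], [uxy, uyy]])

# Exact quadratic u = x^2 + 3xy: Hessian [[2, 3], [3, 0]] everywhere.
f = lambda x, y: x * x + 3 * x * y
x0 = (0.2, 0.1)
nbrs = [(0.3, 0.1), (0.1, 0.1), (0.2, 0.2), (0.2, 0.0), (0.3, 0.2), (0.1, 0.0)]
grad, H = recover_hessian_ls(x0, f(*x0), nbrs, [f(*p) for p in nbrs])
```

In practice the stencil is the set of vertices sharing an edge with x0 (possibly extended near boundaries), and the fit is applied to the discrete solution u_h rather than an exact function.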
2.6. Anisotropic mesh adaptation. For the generation of anisotropic meshes, h-adaptation has been the most widely used approach, mainly because it is robust and reliable as well as conceptually simple (see [22] for a survey of h- and p-methods). Typically, a local error estimate is designed and then local mesh modification operations, such as vertex deletion, vertex addition, edge swapping and vertex relocation, are iteratively applied to produce the desired adapted mesh. The main idea consists in generating quasi-uniform meshes in the metric supplied by the error estimate, which prescribes the size, shape and orientation of the elements throughout the computational domain.
Several meshing strategies have been developed for generating anisotropic meshes according to a metric prescription. For example, such techniques include the Delaunay triangulation method [27], [24], [18], the advancing-front method [26], the bubble mesh method [40] and methods combining local modifications with smoothing [13]. On the other hand, variational methods have received much attention in recent years, typically for generating structured meshes, as they are especially well suited for finite difference schemes. In these approaches, an adaptive mesh is considered as the image of a computational mesh under a coordinate transformation determined by a functional, for instance based on geometric considerations [14].
The key issue is then to design the proper convex functionals, usually not directly provided by an error estimate.
In this section, attention is focused on the anisotropic Delaunay-based method developed in [18]. Since the early paper of Delaunay [16], the properties of Delaunay triangulations have been extensively analyzed, especially with respect to their computational geometry features. Here, we consider the extension of the Delaunay measure to generate anisotropic simplices. At an elementary level, constructing a Delaunay triangulation requires finding all simplices having a circumsphere containing a given point p in R^n. This operation can be performed using the following test, the Delaunay measure, for all K in T_h: α(p, K) = ||p − O|| / ρ(K), where ρ(K) is the circumradius of K and O is the center of the circumsphere; the point p lies inside the circumsphere of K whenever α(p, K) < 1. The anisotropic version of α is derived from the previous formula by computing all distance terms with respect to a prescribed metric tensor field M [25]. Considering a matrix-valued field, however, leads to solving a system of nonlinear equations and is therefore more computationally expensive [18].
Since the mesh quality is related to the quality of the worst element in the mesh, it is important to attempt removing badly shaped or incorrectly sized elements from the mesh [22]. To this end, local mesh modifications are carried out in an iterative manner, based on an edge length analysis with respect to the metric M. Hence, smaller elements are removed using a vertex removal or edge contraction procedure, while larger elements are further refined using an edge splitting technique. Additionally, edge flipping and node relocation operations are used to improve the overall quality of the mesh. An efficiency index, based on the deviation from unity of the lengths l_i of the mesh edges measured in the metric, allows one to quantify the adequacy of the mesh edges with the supplied metric tensor field.
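Several definitions of such an index are used in the literature; the paper's exact formula is not reproduced here, so the sketch below uses one common symmetric variant as an assumption, penalizing edges that are too short and too long in the metric equally:

```python
import numpy as np

def efficiency_index(edge_lengths):
    """Adequacy of the mesh edges to the metric: each metric edge length l_i
    is ideally 1; min(l, 1/l) penalizes short and long edges symmetrically.
    One common definition among several; the paper's may differ."""
    l = np.asarray(edge_lengths, float)
    return float(np.mean(np.minimum(l, 1.0 / l)))

tau_ideal = efficiency_index([1.0, 1.0, 1.0])   # unit mesh: tau = 1
tau_bad = efficiency_index([0.5, 1.0, 2.0])     # off-unit edges: tau < 1
```

In an adaptation loop, the local operations (splitting, contraction, flipping, relocation) are iterated until this index stops improving or reaches a target threshold.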
Regarding the adaptation of curves and surfaces, specific procedures must be developed to account for the differential geometric properties of the manifolds [25]. To generalize the above formulation to surface meshes, we prefer to define a three-dimensional metric tensor on a local parametric space, for instance using a quadric-based surface analysis [23]. In this approach, the curvature tensors at each vertex of a given triangulation are estimated to approximate the Hessian of a locally parametrized surface.
As we have seen, metric-based adaptive meshing techniques provide a convenient yet powerful framework for numerical simulations. The numerical accuracy can be largely improved by adapting the mesh node density to the metric tensor field provided by an error estimate based on the Hessian of the solution. In the next section, we will provide two examples of anisotropic adaptation coupled with levelset techniques for dynamic simulations.
3. Levelset methods. Introduced by Dervieux [17] more than three decades ago and rediscovered a few years later by Osher [33], levelsets have since proven to be successful for dealing with moving fronts and interfaces. In this section, we will review the basic concepts behind the levelset methods and propose two applications of levelsets to emphasize their effectiveness in tracking evolving interfaces using anisotropic triangulations.
3.1. Basic concepts. The underlying concept behind the levelset method is quite simple. Given an interface in R^n of codimension one bounding an open domain Ω, the objective is to compute its motion under a velocity field v(t, x). To this end, a Lipschitz continuous function φ(t, x), x ∈ R^n, is introduced and the interface is chosen to coincide with the set where φ(t, x) = 0. The function φ is assumed to have the following properties: φ(t, x) < 0 for x ∈ Ω(t), φ(t, x) > 0 for x ∉ Ω̄(t), and φ(t, x) = 0 for x ∈ ∂Ω(t) = Γ(t). Hence, at all times the interface coincides with the set Γ(t) on which φ vanishes. In other words, if the velocity v depends on the surface geometry and if the domain Ω(t) evolves in time at a speed v(t, x), for instance v = (α − βH)n, with n the normal to the surface and H its mean curvature, then φ(t, x(t)) = 0 for all x(t) in ∂Ω(t). Differentiating this identity with respect to time leads to the equation: ∂φ/∂t + x'(t) · ∇φ = 0, and since one can write x'(t) = v(t, x(t)), the following first order Hamilton-Jacobi equation is posed in R^n: ∂φ/∂t + v · ∇φ = 0. As n and ∇φ are collinear vectors, T · ∇φ = 0 for every tangent vector T. In two dimensions, writing v = v_n n + v_t T, the equation φ_t + (v_n n + v_t T) · ∇φ = 0 shows that only the normal component of the velocity is involved, and it is then easy to deduce the characteristic equation of levelsets: ∂φ/∂t + v_n |∇φ| = 0. By embedding the interface as a levelset of a higher dimensional function, the dimension of the advection problem is augmented by one. But, even if they are computationally more expensive to discretize than other representation methods, levelsets offer a convenient way to handle topological changes. Unfortunately, singularities are typical of the solutions of the Hamilton-Jacobi equation (27). From the mathematical point of view, viscosity solutions have been introduced in [15] to single out a physically meaningful solution to this problem. From the numerical point of view, the existence and appearance of singularities in the solution imply that adequate numerical methods have to be used to obtain the unique viscosity solution.
This key point has been largely discussed on uniform grids or Cartesian meshes, and ad-hoc schemes have been proposed involving monotonicity, upwinding, essentially non-oscillatory (ENO) schemes and weighted essentially non-oscillatory (WENO) schemes; see for instance [34], [31]. The extension of these schemes to unstructured isotropic meshes has also been investigated, for instance in [1], [28].
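The simplest member of this family of schemes is first-order upwinding, which is monotone under a CFL condition and therefore converges to the viscosity solution. The sketch below (NumPy, 1D, constant velocity, periodic grid; all parameters are illustrative) advects a levelset profile and recovers the expected displacement of its zero:

```python
import numpy as np

def upwind_advect(phi, v, dx, dt, steps):
    """First-order upwind scheme for phi_t + v phi_x = 0 on a periodic 1D
    grid. Monotone under the CFL condition |v| dt / dx <= 1."""
    phi = np.asarray(phi, float).copy()
    for _ in range(steps):
        if v >= 0:
            dphi = phi - np.roll(phi, 1)    # backward difference
        else:
            dphi = np.roll(phi, -1) - phi   # forward difference
        phi -= v * dt / dx * dphi
    return phi

# Periodic levelset profile with a zero at x = 0.3, advected to the right.
n, v = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / abs(v)                      # CFL = 0.5
phi0 = np.sin(2 * np.pi * (x - 0.3))
steps = 80                                  # total time = steps * dt = 0.2
phi_end = upwind_advect(phi0, v, dx, dt, steps)
# The zero crossing moves from x = 0.3 to approximately x = 0.5.
```

The scheme's numerical diffusion damps the amplitude but barely shifts the zero levelset here; higher-order ENO/WENO schemes reduce that diffusion while preserving stability near singularities.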
In all these approaches, the approximation error is strongly related to the quality of the discretization as well as to the mesh density in the vicinity of the interface. To this end, we propose the construction of a Riemannian metric tensor to help control the generation of anisotropic elements close to the interface.

3.2. Geometric approximation of the interface. While analytical (implicit) descriptions of the interface may be convenient, complex manifolds do not usually have such simple representations. The implicit representation can be stored via a discretization of the domain Ω ⊂ R^n into a finite number of sample points at which the function φ is approximated. Since only the points near the manifold φ = 0 are important to accurately represent the interface, clustering the points near the interface is the main challenge of approximation techniques.
In two dimensions, we have shown [20] that it is possible to construct an accurate piecewise linear approximation Γ_h of the manifold Γ(t) describing the interface φ(x, t) = 0. Suppose the levelset function φ is known at the vertices of a given triangulation T_h of Ω. A discrete anisotropic metric tensor field can then be defined in order to control the geometric approximation of the interface, and the triangulation T_h can be adapted to create quasi-uniform meshes in this metric. We consider as the approximation space of φ the space of affine Lagrange finite elements P^1 (the basis functions being denoted ϕ_j) and write the approximation φ_h of φ for all x in Ω as (omitting the time variable): φ_h(x) = Σ_{j=1}^{np} φ(x_j) ϕ_j(x), where np denotes the number of mesh vertices in T_h. Considering the set E of all elements intersected by Γ(t), E = {K ∈ T_h, K ∩ Γ(t) ≠ ∅}, and denoting by λ_j the barycentric coordinate of a point x associated with the vertex x_j of K, the distance between Γ_h and Γ(t) can be bounded from above, for all x in Γ_h, in terms of the interpolation error of φ on K. In the Frénet frame (p, ∇φ^⊥, ∇φ) associated with any point p in the neighborhood of Γ(t), introducing h_x = max_{j=1,3} max_{p∈K} |(p − x_j) · ∇φ^⊥(p)| and h_y = max_{j=1,3} max_{p∈K} |(p − x_j) · ∇φ(p)|, respectively, enables us to deduce an upper bound on the distance involving h_x, h_y and the local curvatures of Γ(t), under the reasonable assumption that |∇φ| = 1 and, for all p in K, |φ(p)| < h_y; here κ_1 (resp. κ_2) represents the maximal (resp. minimal) local curvature of Γ(t) in element K. This result can be interpreted as an error estimate for the approximation error φ − φ_h. Hence, the metric tensor supplied by this error estimate to prescribe the size, shape and orientation of the elements throughout the domain Ω is defined in two dimensions by a matrix whose eigenvectors are ∇φ and ∇φ^⊥, prescribing a small size in the direction normal to the interface and a curvature-dependent size in the tangential direction, at any vertex of an element K intersected by Γ(t), and by a matrix λI_2, λ ∈ R^+, at any other mesh vertex.
This is sufficient to ensure that the distance between any point x of Γ_h and Γ(t) remains below the prescribed geometric tolerance. Finally, the optimal quasi-uniform anisotropic triangulation in the metric provided by the error estimate is obtained by adapting the mesh T_h using local modifications.
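A curvature-based interface metric of this kind can be sketched as follows (NumPy, 2D). The tangential size sqrt(8*eps/|kappa|) comes from the standard chordal-deviation bound h^2 |kappa| / 8 <= eps for a circle of curvature kappa; the exact coefficients in the paper's estimate may differ, so this is an illustration rather than the paper's formula:

```python
import numpy as np

def interface_metric(grad_phi, kappa, eps, h_max=1.0):
    """2D metric aligned with a levelset: size eps across the interface
    (along grad phi), size sqrt(8*eps/|kappa|) along the tangent, capped
    at h_max where the interface is nearly flat."""
    n = np.asarray(grad_phi, float)
    n = n / np.linalg.norm(n)                  # unit normal
    t = np.array([-n[1], n[0]])                # unit tangent
    h_n = eps
    h_t = min(np.sqrt(8.0 * eps / max(abs(kappa), 1e-12)), h_max)
    P = np.column_stack([n, t])                # eigenvectors as columns
    return P @ np.diag([1.0 / h_n**2, 1.0 / h_t**2]) @ P.T

# Circle of radius 1 (kappa = 1), tolerance eps = 1e-3: elements are
# stretched along the tangent by a factor h_t / h_n of about 90.
M = interface_metric(grad_phi=[1.0, 0.0], kappa=1.0, eps=1e-3)
h = 1.0 / np.sqrt(np.linalg.eigvalsh(M))       # sizes along principal axes
```

The strong size contrast between the two eigendirections is precisely what produces the highly stretched, tangent-aligned elements described next.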
Highly stretched elements are thus naturally created in the vicinity of Γ(t), aligned with the tangent to the isocurve. We will now focus on two applications of this meshing strategy: adapting dynamic triangulations in incompressible flow simulations to reduce the approximation errors, and reconstructing smooth manifolds from an unorganized point set.
3.3. Incompressible flows. According to Osher [33], levelset methods add dynamics to implicit surfaces. It was then legitimate to employ levelsets in time-dependent simulations, like fluid dynamics. As an example, we consider the problem of the deformation of a bubble of liquid (denoted fluid 1) enclosed in a domain Ω filled with a less dense liquid (denoted fluid 2) and subject to gravity. We consider that the interface Γ(t) is described by the zero set of the levelset function φ; the function φ is taken positive in fluid 1 and negative in fluid 2. The motion of the interface is described by the non-dimensional incompressible Navier-Stokes equations: ρ (∂u/∂t + u · ∇u) = −∇p + ∇ · (μ(∇u + ∇u^t)) + ρ g, ∇ · u = 0, ∀(t, x) ∈ R^+ × Ω (33), where the density ρ is assumed constant in each fluid and μ represents the variable viscosity. Dirichlet (u = u_D) or Neumann (σ · n = s_N) boundary conditions on the velocity u, where the stress tensor is σ = μ(∇u + ∇u^t) − p I, close the system, together with a condition at the interface whose right-hand side term γ κ n corresponds to a surface tension force involving the local curvature κ of the interface. The interface Γ(t) is advected by the flow field u(x, t), and hence its geometry as well as its topology change in time.
The finite element discretization of this problem must account for the inf-sup condition, and we assume that we have appropriate finite dimensional function spaces for the trial and weighting functions corresponding to the velocity, the pressure and the levelset function. Concerning the function spaces, we use Lagrange P^1 elements for the pressure p and augmented P^1 elements for the velocity u. In this problem, the mass is assumed constant in each subdomain, the density being expressed through the characteristic function of the subdomain occupied by each fluid. In practice, the characteristic function χ is replaced by a more regular function of φ, such as the distance function to the interface Γ(t). The resulting equation can then be solved using a method of characteristics [35].
The computational domain Ω is discretized by a quasi-uniform mesh T_h. Notice that in this approach the mesh does not need to conform to the interface Γ(t). Hence, discontinuous integrals must be computed in the fluid formulation, because the viscosities and densities are discontinuous in all elements intersected by Γ(t). To overcome this problem, a common technique consists in considering a region of thickness 2ε in the neighborhood of the interface, |φ(t, x)| < ε, and smoothing the density and the viscosity over this region [32]. A drawback of such a method is the need to keep the interface thickness constant in time, thus requiring a reinitialization of the levelset in order to keep it a distance function. The use of anisotropic adaptation in the vicinity of the interface allows one to locate this interface with a great deal of accuracy. To this end, we use the technique described above to define a metric tensor field based on the local curvature of the interface, and we rely on local mesh modifications to adapt the triangulation to this metric. Consequently, we are able to compute the integrals very accurately. This is not the only example where the adaptation of quasi-uniform triangulations with respect to a metric tensor given by an error estimate is able to significantly improve the accuracy of the computation. The following section will emphasize another application of the adaptive anisotropic meshing techniques.
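The smoothing over the band |φ| < ε is classically done with a regularized Heaviside function of the signed distance. The sketch below (NumPy) uses the common sine-based profile; the density values and band width are illustrative:

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    """Classical smoothed Heaviside over a band of half-width eps around the
    interface (phi a signed distance): 0 in fluid 2 (phi < -eps), 1 in
    fluid 1 (phi > eps), smooth transition in between."""
    phi = np.asarray(phi, float)
    h = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h))

def smoothed_density(phi, rho1, rho2, eps):
    """Density field: rho1 where phi > 0 (fluid 1), rho2 where phi < 0."""
    return rho2 + (rho1 - rho2) * smoothed_heaviside(phi, eps)

# Water-like / air-like densities (illustrative values), band half-width 0.05.
rho = smoothed_density(np.array([-0.1, 0.0, 0.1]),
                       rho1=1000.0, rho2=1.0, eps=0.05)
```

Keeping ε meaningful over time is exactly why the levelset must remain a distance function, hence the reinitialization step mentioned above; anisotropic refinement near Γ(t) lets ε be taken small without an excessive number of elements.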
3.4. Reconstruction of manifolds from point clouds. Reconstructing a smooth curve or surface manifold from an unorganized point set in R^n is a challenging topic in many numerical simulations. Formulated in this way, however, the problem is ill-posed, as the uniqueness of the solution is not guaranteed. A large amount of effort has been spent on reconstructing triangulated surfaces using the Delaunay triangulation or its dual, the Voronoï diagram; see for instance [7], [12]. Robust algorithms now exist that can deal with arbitrary data sets. However, this approach is not well suited to dynamic surfaces that undergo large deformations and topological changes. Recently, Zhao et al. introduced a weighted minimal surface model based on a variational and partial differential equation formulation [41]. In this formulation, only the distance to the data set is used as input, and the reconstructed surface is in principle smoother than piecewise linear. We will modify this model and combine it with adaptive anisotropic refinement to achieve a highly accurate curve or surface reconstruction.
Define, for all x in a closed bounded domain Ω in R^n, d(x) = dist(x, S), the distance function to a given point set S contained in Ω. The following surface energy is defined for the variational formulation:

(37)  E(Γ) = ( ∫_Γ d^p(x) ds )^{1/p},  1 ≤ p < ∞,

where Γ(t) is here an arbitrary manifold in R^n moving in time and ds is the surface area element. The idea is to look for a local minimizer of this functional that behaves like a minimal surface with respect to the data set. Instead of tracking the manifold Γ(t), we will consider it as the zero levelset of a function φ(t, x) and focus on the Hamilton-Jacobi time-dependent equation (27), which holds on Γ(t). Hence, we start from an initial closed surface that encloses the data set and follow the gradient flow of the energy functional (37). The convection of Γ(t) in the velocity field created by the potential corresponding to the distance function, v(x) = −∇d(x), is described by the first-order linear differential equation:

dx/dt = v(x) = −∇d(x),

for which the normal velocity is less than or equal to one and each point of the manifold is attracted by its closest point in S. This equation can be solved using a time step ∆t = O(h). We derive the time-dependent partial differential equation for the levelset function:

∂φ/∂t + (dx/dt) · ∇φ = 0,

and, extending the convection model above to all levelsets of φ(t, x), we naturally deduce the equation, for all x in Ω:

∂φ/∂t − ∇d(x) · ∇φ = 0.

To obtain the minimal manifold, we consider the gradient flow of the energy functional for φ(t, x):

∂φ/∂t = |∇φ| d^{p−1}(x) ( ∫_Γ d^p(x) ds )^{1/p − 1} ( ∇d(x) · n + d(x) κ ),

where n = ∇φ/|∇φ| is the unit normal and κ = ∇ · (∇φ/|∇φ|) denotes the mean curvature of Γ(t). Neglecting the scaling factor d^{p−1}(x) ( ∫_Γ d^p(x) ds )^{1/p − 1} leads to the simplified equation:

(42)  ∂φ/∂t = |∇φ| ( ∇d(x) · ∇φ/|∇φ| + d(x) κ ).

The steady-state solution of this equation is given by the Euler-Lagrange equation:

∇d(x) · n + d(x) κ = 0.

At the equilibrium, since |∇d(x)| ≤ 1, we have the expected result:

d(x) |κ| ≤ 1,

and since the maximum of d(x) on the approximate surface is inversely proportional to the density of the data set, it is possible to deduce local conditions on the sampling density and the local feature size [6].
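A minimal numerical sketch of the two ingredients of this flow, namely the distance field d(x) to the point set and one explicit step of the convection φ_t = ∇d · ∇φ (the curvature term being neglected, as in the first stage of the algorithm). The grid setting, function names and brute-force distance computation are ours and stand in for the fast algorithms discussed later:

```python
import numpy as np

def distance_field(X, Y, S):
    """Brute-force d(x) = dist(x, S) from each grid node to the
    point set S of shape (k, 2); O(n*k), fine for small clouds."""
    P = np.stack([X.ravel(), Y.ravel()], axis=1)            # (n, 2)
    d = np.sqrt(((P[:, None, :] - S[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(X.shape)

def convection_step(phi, d, dt, h):
    """One explicit Euler step of phi_t = grad(d) . grad(phi):
    all levelsets are transported in the field v = -grad(d),
    so each point moves toward its closest sample."""
    dx, dy = np.gradient(d, h)
    px, py = np.gradient(phi, h)
    return phi + dt * (dx * px + dy * py)
```

With φ initialized as a signed distance to an enclosing circle, repeated calls to `convection_step` shrink the zero levelset onto the data set at unit normal speed, which is consistent with the bound |v · n| ≤ 1 stated above.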
Obviously, the functional (37) penalizes the deviation of the deforming surface from the points of the data set S. With sparse data, however, the surface will continue to deform in the direction of the negative gradient of the distance potential d(x), even after the surface fits the points. To overcome this drawback, Solem et al. [37] suggested replacing the energy functional by the following one, which minimizes only the shortest distance from the points in S to the surface when |∇φ| = 1:

E(φ) = ∫_Ω δ_ε(x, S) |φ(t, x)| dx,

where δ_ε(x, S) = Σ_{y∈S} δ_ε(x − y), δ_ε(·) being a smooth version of the Dirac delta function for a given thickness ε, for instance a uniform Gaussian with standard deviation ε. This formulation has removed all terms containing ∇φ(t, x) from the integrand. According to the Euler-Lagrange equation, the condition to reach a minimum is now:

δ_ε(x, S) sign(φ(t, x)) = 0,

and the evolution equation becomes:

∂φ/∂t = −δ_ε(x, S) sign(φ(t, x)).

The energy decreases until it reaches a minimum, as its time derivative is:

(48)  dE/dt = − ∫_Ω ( δ_ε(x, S) sign(φ(t, x)) )² dx ≤ 0.

Three main numerical issues arise in this surface reconstruction problem. First, a fast and accurate algorithm is needed to compute the distance function to an arbitrary data set. Second, it may be desirable to start from an initial closed manifold enclosing the data set for the gradient flow. Finally, the time-dependent partial differential equation for the levelset function must be solved accurately. As pointed out in [41], this reconstruction problem has two scales: one corresponds to the resolution (sampling) of the data set, the other to the resolution of the triangulation of the domain Ω.
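A sketch of the two pieces of this modified model, assuming the evolution law ∂φ/∂t = −δ_ε(x, S) sign(φ) induced by the modified functional, with δ_ε built as a sum of Gaussians of standard deviation ε. Function names and the grid setting are ours:

```python
import numpy as np

def smoothed_dirac(X, Y, S, eps):
    """delta_eps(x, S) = sum over y in S of a 2-D Gaussian of
    standard deviation eps centered at y (smooth Dirac mass)."""
    out = np.zeros_like(X)
    for y in S:
        r2 = (X - y[0]) ** 2 + (Y - y[1]) ** 2
        out += np.exp(-r2 / (2 * eps ** 2)) / (2 * np.pi * eps ** 2)
    return out

def attraction_step(phi, delta, dt):
    """Explicit step of phi_t = -delta_eps(x, S) * sign(phi):
    phi is driven to zero near the samples only, so the flow
    stalls there and sparse regions no longer over-shrink."""
    return phi - dt * delta * np.sign(phi)
```

The key design point is that δ_ε vanishes away from the samples, so the surface stops moving once it passes through the data, unlike the pure distance-potential flow.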
In this approach, an initial manifold is continuously deformed by following the gradient flow of the energy functional (37). To this end, Equation (42) is solved numerically using an explicit scheme with a time step δt restricted to the order of O(h^2), according to a CFL condition. Therefore, it is important to minimize the number of nodes, and thus to maximize the element size, in the regions far from the minimal surface. This is where mesh adaptation again plays an important role. Starting from a uniform coarse triangulation of Ω, we can rapidly evolve the levelset function φ from an arbitrary initial surface until it reaches the neighborhood of the data set. To further accelerate the calculation, the curvature term can be neglected in the convection equation during the first steps. Then, using the local error estimate described previously to control the geometric approximation of an interface, the triangulation is adapted into a quasi-uniform mesh with respect to a metric tensor field related to the local curvature of the levelset φ(t, x) = 0.
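The two-stage time-step strategy described above can be made concrete in a few lines. This is a sketch under our own naming and with an illustrative safety factor c, not the authors' implementation:

```python
def cfl_time_step(h, vmax=1.0, with_curvature=False, c=0.5):
    """Stable explicit time step: dt = O(h) for the pure convection
    stage (|v| <= 1 since v = -grad d), but dt = O(h^2) once the
    parabolic curvature term is kept in the gradient flow."""
    return c * h ** 2 if with_curvature else c * h / vmax
```

This makes the motivation for adaptation quantitative: halving h in the curvature stage divides the admissible time step by four, so coarse elements far from the surface directly cut the number of explicit steps.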
This concludes our presentation of anisotropic mesh adaptation for dynamic simulations based on a levelset formulation. The next section will present numerical results to emphasize the efficiency of this approach.
4. Numerical results. In this section, we present some numerical results of anisotropic mesh adaptation for two- and three-dimensional examples of static and dynamic simulations. The numerical results were obtained using in-house solver and meshing software written by the authors with the valuable help of C. Dobrzynski (IMB, Université Bordeaux I), whom we would like to thank here.

4.1. Curve reconstruction.
To illustrate adaptive remeshing using a levelset method, we consider the domain Ω = [0, 1]^2 and a closed curve Γ corresponding to the zero levelset of an implicit function φ(x) defined for all x in R^2; see Figure 1. Given a data set S containing 1000 points, an initial uniform coarse triangulation T_h of the domain Ω has been generated, containing 15 × 15 vertices. The initial levelset curve Γ was chosen as the smallest circle enclosing all data points. A large number of iterations was required to advect the interface curve until it comes close to the data set, at a distance ε. Then, the interface region |φ(x)| < ε has been anisotropically refined using the geometric error estimate described in Section 3.2. A few adaptation steps were necessary to fully capture the interface by further advecting the curve Γ. The final quasi-uniform adapted mesh contains about 4,100 vertices and corresponds to an error of 5 × 10^-3 in the L∞ norm. A uniform isotropic mesh achieving the same error level would contain about 10^5 vertices.

4.2. Bifluid simulation. As pointed out above, anisotropic mesh adaptation provides its most prominent advantages when dealing with dynamically evolving computational domains. Figure 2 shows an example of a circular bubble of a viscous fluid immersed in a less viscous fluid. The generalized Stokes equations for the bifluid simulation have been solved using a domain-decomposition approach. The time stepping has been split between the computation of the velocity field and the subsequent advection of the interface, represented by a levelset function φ. Initially, the velocity is zero, u = 0 at time t = 0. The simulation has been run up to time t = 2, by which the bubble has moved more than a diameter and the mesh (and the interface) has been largely deformed. The sequence shows the adapted meshes at iterations 1, 5, 10 and 20, for a time step dt = 0.1.
The adaptation was based on a geometric error estimate related to the local curvature of the interface as well as on the interpolation error of the velocity field.

5. Conclusion. In this paper, we presented an effective approach for the metric-based anisotropic adaptation of dynamically evolving manifolds in R^n. A sharp mathematical bound on the approximation error makes it possible to reconstruct the geometry and the topology of complex manifolds represented by levelsets with a high level of accuracy. A variational, PDE-based formulation has been presented for the reliable and accurate reconstruction of smooth manifolds. This formulation involves only a distance function to the data set, and the levelset method is used as a deformation tool for an implicit curve or surface. The coupling of local mesh adaptation with levelset techniques to describe and follow the moving interface in flow simulations has proved efficient. Anisotropic adaptation in the vicinity of the interface allows this interface to be located and tracked accurately, and the method of characteristics employed to solve the advection equation deforming the interface allows for large time steps, further reducing the overall computational effort.
In our future work, we will focus on simplifying the numerical schemes used to solve the PDE-based models that advect the interface, in order to speed up the computation. We will also seek a sharp estimate of the approximation error of a manifold in three dimensions.