Stability and continuity of functions of least gradient

In this note we prove that on metric measure spaces, functions of least gradient, as well as local minimizers of the area functional (after modification on a set of measure zero), are continuous everywhere outside their jump sets. As a tool, we develop some stability properties of sequences of least gradient functions. We also apply these tools to prove a maximum principle for functions of least gradient that arise as solutions to a Dirichlet problem.


Introduction
The theory of minimal surfaces in the Euclidean setting has been studied extensively, for example, in [1], [30], [9], [13], [32], [33], [35] from the point of view of regularity. The literature on this subject is extensive, and it is impossible to list all references; only a small sampling is given here. Much of this study has been in the direction of understanding the regularity of minimal surfaces obtained (locally) as graphs of functions. However, the work of [9], [35], [37] and others studies more general "least gradient" functions and their regularity, and it is shown in [35] that if the boundary data is Lipschitz continuous and the (Euclidean) boundary of a domain has positive mean curvature, then the least gradient solution to the corresponding Dirichlet problem is locally Lipschitz continuous in the domain. However, such Lipschitz regularity has been shown to fail even in a simple weighted Euclidean setting, see [15], where an example is given of a solution with jump discontinuities in the domain, even though the boundary data is Lipschitz continuous. Therefore, in a more general setting, it is natural to ask whether functions of least gradient are continuous outside their jump sets. The principal goal of this note is to show that every function of least gradient is necessarily continuous outside its jump set, even when the boundary data is not continuous.
The setting we consider here is that of a complete metric measure space X = (X, d, µ) equipped with a doubling Borel regular outer measure µ supporting a (1, 1)-Poincaré inequality. We consider functions of bounded variation in the sense of [3], [27], and [6], and functions of least gradient in a domain Ω ⊂ X.
In considering regularity properties of functions of least gradient, we need some tools related to stability properties of least gradient function families. Therefore we extend the study to also include questions related to stability properties of least gradient functions (minimizers) and quasiminimizers. We show that being a function of least gradient is a property preserved under L¹_loc(Ω)-convergence. We then obtain partial regularity results for functions of least gradient. Namely, we show that such minimizers are continuous at points of approximate continuity, that is, away from the jump discontinuities of the function. Observe that by the results of [6], the jump set of a BV function has σ-finite codimension 1 Hausdorff measure; hence there is a plenitude of points where the least gradient function is continuous.
As a further application of the tools developed to study the above regularity, we obtain a maximum principle for least gradient functions obtained as solutions to a Dirichlet problem.
In tandem with the development of least gradient theory in the metric setting, the papers [15] and [16] develop the existence and trace theory of minimizers of functionals of linear growth in the metric setting (analogously to the problem considered by Giusti in [13]). In the case that the function f of linear growth satisfies f(0) = 0 or lim inf_{t→0+} (f(t) − f(0))/t > 0, the results of this note can be adapted to study the regularity and maximum principle properties of the associated minimizers, but mostly we limit ourselves to the least gradient theory. However, in the case of the area functional f(t) = √(1 + t²), we give an explicit proof that minimizers of this functional are continuous at points of approximate continuity.
This paper is organized as follows. In Section 2 we introduce the concepts and background needed for our study, and in Section 3 we consider the stability problem for least gradient functions on a given domain. In Section 4 we use the tools developed in Section 3 to show that least gradient functions are continuous everywhere outside their jump sets. In Section 5 we give a further application of the tools developed in Section 3 to prove a maximum principle for functions of least gradient that arise as solutions to a Dirichlet problem. In the final section of this paper we extend the continuity result of Section 4 to the case of the area functional.

Notation and background
In this section we introduce the notation and the problems we consider.
In this paper, (X, d, µ) is a complete metric space equipped with a metric d and a Borel regular outer measure µ. The measure is assumed to be doubling, meaning that there exists a constant c_d ≥ 1 such that

µ(B(x, 2r)) ≤ c_d µ(B(x, r))

for every ball B(x, r) := {y ∈ X : d(y, x) < r}, with x ∈ X and r > 0. We say that a property holds for almost every x ∈ X, or a.e. x ∈ X, if there is a set A ⊂ X with µ(A) = 0 such that the property holds outside A. We will use the letter C to denote a positive constant whose value is not necessarily the same at each occurrence.
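A standard consequence of the doubling condition, recorded here as an aside (the exponent Q below is our notation and is not used elsewhere in this note): iterating the doubling inequality yields a polynomial comparison between the measures of concentric balls.

```latex
% For 0 < r \le R, choose N := \lceil \log_2(R/r) \rceil, so that 2^N r \ge R.
% Iterating \mu(B(x, 2\rho)) \le c_d\,\mu(B(x, \rho)) a total of N times gives
\mu(B(x,R)) \;\le\; \mu(B(x, 2^N r)) \;\le\; c_d^{\,N}\,\mu(B(x,r))
\;\le\; c_d \left(\frac{R}{r}\right)^{Q} \mu(B(x,r)),
\qquad Q := \log_2 c_d .
```

In particular, the ratio µ(B(x, r))/µ(B(x, R)) is bounded below by a constant multiple of (r/R)^Q.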
We recall that a complete metric space endowed with a doubling measure is proper, that is, closed and bounded sets are compact. Since X is proper, for any open set Ω ⊂ X we define e.g. Lip_loc(Ω) as the space of functions that are Lipschitz in every Ω′ ⋐ Ω. The notation Ω′ ⋐ Ω means that Ω′ is open and that the closure of Ω′ is a compact subset of Ω.
For any set A ⊂ X and 0 < R < ∞, the restricted spherical Hausdorff content of codimension 1 is defined as

H_R(A) := inf { Σ_i µ(B(x_i, r_i))/r_i : A ⊂ ∪_i B(x_i, r_i), r_i ≤ R }.

The Hausdorff measure of codimension 1 of a set A ⊂ X is

H(A) := lim_{R→0+} H_R(A).

A curve is a rectifiable continuous mapping from a compact interval to X, and is usually denoted by the symbol γ. A nonnegative Borel function g on X is an upper gradient of an extended real-valued function u on X if for all curves γ on X, we have

|u(x) − u(y)| ≤ ∫_γ g ds

whenever both u(x) and u(y) are finite, and ∫_γ g ds = ∞ otherwise. Here x and y are the end points of γ. Given a locally Lipschitz continuous function u on Ω, we define the local Lipschitz constant of u as

Lip u(x) := lim sup_{y→x} |u(y) − u(x)| / d(y, x).    (2.1)

In particular, Lip u is known to be an upper gradient of u, see e.g. [11, Proposition 1.11]. We consider the norm

‖u‖_{N^{1,1}(X)} := ‖u‖_{L¹(X)} + inf ‖g‖_{L¹(X)},

with the infimum taken over all upper gradients g of u. The Newton–Sobolev, or Newtonian, space is defined as

N^{1,1}(X) := {u : ‖u‖_{N^{1,1}(X)} < ∞}.

Similarly, we can define N^{1,1}(Ω) for an open set Ω ⊂ X. For more on Newtonian spaces, we refer to [31] or [8].
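As a simple illustration of the upper gradient inequality (an aside of ours, not needed in what follows): if u is L-Lipschitz on X, then the constant function g ≡ L is an upper gradient of u, since for any rectifiable curve γ with end points x and y,

```latex
|u(x) - u(y)| \;\le\; L\, d(x,y) \;\le\; L\, \ell(\gamma) \;=\; \int_\gamma L \, ds ,
```

where ℓ(γ) denotes the length of γ. Upper gradients are highly non-unique; the total variation defined below is built from near-optimal upper gradients along approximating sequences.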
Next we recall the definition and basic properties of functions of bounded variation on metric spaces. A good discussion of BV functions in the Euclidean setting can be found in [36] and [5]. In the metric setting, the corresponding theory was first studied by Miranda Jr. in [27], and further developed in [2], [3], [6], and [23]. For u ∈ L¹_loc(X), we define the total variation of u as

‖Du‖(X) := inf { lim inf_{i→∞} ∫_X g_{u_i} dµ : u_i ∈ Lip_loc(X), u_i → u in L¹_loc(X) },

where each g_{u_i} is an upper gradient of u_i. In the literature, the upper gradient is sometimes replaced by a local Lipschitz constant and the approximating functions u_i are sometimes required to be locally Lipschitz, but all the definitions give the same result according to [4, Theorem 1.1]. We say that a function u ∈ L¹(X) is of bounded variation, and denote u ∈ BV(X), if ‖Du‖(X) < ∞. Moreover, a µ-measurable set E ⊂ X is said to be of finite perimeter if ‖Dχ_E‖(X) < ∞; we write P(E, X) := ‖Dχ_E‖(X) for the perimeter of E. By replacing X with an open set Ω ⊂ X in the definition of the total variation, we can define ‖Du‖(Ω). The BV norm is given by

‖u‖_{BV(X)} := ‖u‖_{L¹(X)} + ‖Du‖(X).

For an arbitrary set A ⊂ X, we define

‖Du‖(A) := inf { ‖Du‖(Ω) : Ω ⊃ A, Ω ⊂ X open }.

We have the following coarea formula given by Miranda in [27, Proposition 4.2]: if F ⊂ X is a Borel set and u ∈ BV(X), we have

‖Du‖(F) = ∫_{−∞}^{∞} P({u > t}, F) dt.    (2.2)

The approximate upper and lower limits of an extended real-valued function u on X are

u∨(x) := inf { t ∈ R : lim_{r→0} µ(B(x, r) ∩ {u > t}) / µ(B(x, r)) = 0 },

u∧(x) := sup { t ∈ R : lim_{r→0} µ(B(x, r) ∩ {u < t}) / µ(B(x, r)) = 0 }.

We assume that the space X supports a (1, 1)-Poincaré inequality, meaning that for some constants c_P > 0 and λ ≥ 1, for every ball B(x, r), for every locally integrable function u, and for every upper gradient g of u, we have

⨍_{B(x,r)} |u − u_{B(x,r)}| dµ ≤ c_P r ⨍_{B(x,λr)} g dµ,

where u_{B(x,r)} := ⨍_{B(x,r)} u dµ := (1/µ(B(x,r))) ∫_{B(x,r)} u dµ. For BV functions, we get the following version of the (1, 1)-Poincaré inequality: given any locally integrable function u, by applying the (1, 1)-Poincaré inequality to an approximating sequence of functions, we get

⨍_{B(x,r)} |u − u_{B(x,r)}| dµ ≤ c_P r ‖Du‖(B(x, λr)) / µ(B(x, λr)).
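As a sanity check on the coarea formula (a routine computation included for the reader's convenience), one can recover the perimeter of a set of finite perimeter from the total variation of its characteristic function:

```latex
% For u = \chi_E we have \{u > t\} = X for t < 0, \; E for 0 \le t < 1, \; \emptyset for t \ge 1,
% and P(X, F) = P(\emptyset, F) = 0, so
\|D\chi_E\|(F) \;=\; \int_{-\infty}^{\infty} P(\{\chi_E > t\}, F)\,dt
\;=\; \int_{0}^{1} P(E, F)\,dt \;=\; P(E, F).
```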
Definition 2.1. We denote by BV_c(Ω) the functions g ∈ BV(X) with compact support in Ω, and by BV_0(Ω) the functions g ∈ BV(X) such that g = 0 µ-a.e. in X \ Ω.
We next recall the various notions of minimizers; see also e.g. [9].
Following [21], we say that a µ-measurable set E ⊂ X is of minimal surface in Ω if χ_E ∈ BV_loc(Ω) and

‖Dχ_E‖(K) ≤ ‖Dχ_F‖(K)

for all µ-measurable F ⊂ X such that g := χ_F − χ_E ∈ BV_c(Ω), with K = supt(g). More generally, we say that u ∈ BV_loc(Ω) is a function of least gradient in Ω if

‖Du‖(K) ≤ ‖D(u + g)‖(K)

for all g ∈ BV_c(Ω), with K = supt(g); given Q ≥ 1, we say that u is of Q-quasi least gradient in Ω if ‖Du‖(K) ≤ Q ‖D(u + g)‖(K) for all such g.
In our definition of sets of minimal surface, observe that the competing sets F are precisely the ones that can be written as (E ∪ F′) \ G for some relatively compact µ-measurable subsets F′, G of Ω. Thus our definition is consistent with [21]. Observe also that by the definition and measure property of the total variation, the support K of the perturbation can always be replaced by any larger relatively compact subset of Ω in the above definitions.
We also consider the corresponding Dirichlet problem in the metric setting. Recall that in the Euclidean setting with a Lipschitz domain Ω, the least gradient problem with a given boundary datum f can be stated as

min { ‖Du‖(Ω) : u ∈ BV(Ω), u = f on ∂Ω }.
If we do not require, for instance, the continuity of the boundary data, then the boundary value has to be understood in a suitable sense, e.g. as a trace of a BV function. One possibility is to consider a relaxed problem with a penalization term,

min { ‖Du‖(Ω) + ∫_{∂Ω} |Tu − Tf| dH^{n−1} : u ∈ BV(Ω) },

where Tu and Tf are traces of u and f on ∂Ω, provided such traces exist, and H^{n−1} is the (n − 1)-dimensional Hausdorff measure. In order to avoid working directly with traces of BV functions, we consider an equivalent problem formulated in a larger domain, namely X.
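To illustrate the structure of such minimizers, here is a classical Euclidean example in the spirit of [35] (an illustration of ours, not used later): on the unit disc, super-level sets of a least gradient function minimize perimeter, so their boundaries are chords.

```latex
% \Omega = \{x \in \mathbb{R}^2 : |x| < 1\}, \qquad f := \chi_{\{x_1 > 0\}} \text{ on } \partial\Omega .
% The least gradient solution extends the boundary data along the chord \{x_1 = 0\}:
u(x) = \chi_{\{x_1 > 0\}}(x), \qquad
\|Du\|(\Omega) = \mathcal{H}^1\big(\{x_1 = 0\} \cap \Omega\big) = 2,
% the diameter being the shortest curve separating the two boundary arcs.
```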

Definition 2.3.
Given an open set Ω ⊂ X and f ∈ BV(X), we say that u ∈ BV(X) is a solution to the Dirichlet problem for least gradients in Ω with boundary data f, if u − f ∈ BV_0(Ω) and whenever g ∈ BV_0(Ω), we have

‖Du‖(X) ≤ ‖D(u + g)‖(X).

Note that such a solution is a function of least gradient in Ω, and that the problem is equivalent to, with the same minimizers as, minimizing ‖Du‖(X) over all u ∈ BV(X) that satisfy u − f ∈ BV_0(Ω). Thus the problem we consider here is of the same type as in [15], [16].

Stability of least gradient function families
To answer questions regarding continuity properties of functions of least gradient (outside their jump sets), it turns out that tools related to the stability of least gradient function families are needed. In this section we therefore study such stability properties.
The following stability result is key in the study of continuity properties of functions of least gradient. In the Euclidean setting, the proof of this result found in [26, Teorema 3] is based on trace theorems for BV functions. Given the lack of trace theorems in the metric setting, the proof given here is different from that of [26], but the philosophy underlying the proof is the same. We thank Michele Miranda for explaining the proof in [26] (which is in Italian) and for suggesting a way to modify it.

Proposition 3.1.
Let Ω ⊂ X be an open set, let u_k ∈ BV_loc(Ω), k ∈ N, be a sequence of functions of least gradient in Ω, and suppose that u is a function in Ω such that u_k → u in L¹_loc(Ω). Then u is a function of least gradient in Ω.
An analogous statement holds for sequences of Q-quasi least gradient functions; the L¹_loc(Ω)-limits of such sequences are also Q-quasi least gradient functions. The proof is mutatis mutandis the same as the proof of the above proposition; we leave it to the interested reader to verify this.
To prove the above proposition, we need the following version of the product rule (Leibniz rule) for functions of bounded variation.

Lemma 3.2.
Let Ω ⊂ X be an open set, let u, v ∈ BV(Ω), and let η be a Lipschitz function on Ω with 0 ≤ η ≤ 1 and with an upper gradient g_η. Then for every A ⊂ Ω,

‖D(ηu + (1 − η)v)‖(A) ≤ ‖Du‖(A) + ‖Dv‖(A) + ∫_A |u − v| g_η dµ.
Proof. According to [4, Theorem 1.1], we can choose sequences of locally Lipschitz functions u_i → u and v_i → v in L¹_loc(Ω) with

∫_Ω g_{u_i} dµ → ‖Du‖(Ω) and ∫_Ω g_{v_i} dµ → ‖Dv‖(Ω),

where g_{u_i} and g_{v_i} are upper gradients of u_i and v_i, respectively. Then ηu_i + (1 − η)v_i → ηu + (1 − η)v in L¹_loc(Ω), and every ηu_i + (1 − η)v_i has an upper gradient

η g_{u_i} + (1 − η) g_{v_i} + |u_i − v_i| g_η

by [8, Lemma 2.18]. Now take any open sets U′ ⋐ U ⊂ Ω, and any ψ ∈ Lip_c(U) with 0 ≤ ψ ≤ 1 and ψ = 1 in U′. Then, by the lower semicontinuity of the total variation and the above choice of upper gradients, we have

‖D(ηu + (1 − η)v)‖(U′) ≤ lim inf_{i→∞} ∫_U ψ ( η g_{u_i} + (1 − η) g_{v_i} + |u_i − v_i| g_η ) dµ.

By the fact that U′ ⋐ U was arbitrary and by the inner regularity of the total variation, see the proof of [27, Theorem 3.4], this implies that

‖D(ηu + (1 − η)v)‖(U) ≤ ‖Du‖(U) + ‖Dv‖(U) + ∫_U |u − v| g_η dµ

for open U ⊂ Ω. Since the variation measure of arbitrary sets is defined by approximation with open sets, we have the result.
We will also need the following inequality from [21, Inequality (4.3)] for functions u of least gradient in Ω. Whenever B = B(x, R) ⋐ Ω, we have the De Giorgi inequality for every 0 < r < R:

‖Du‖(B(x, r)) ≤ C/(R − r) ∫_{B(x,R)} |u| dµ.    (3.1)

Observe that in [21], this is proved when u is the characteristic function of a set, but the proof works for more general functions as well.
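For characteristic functions, the De Giorgi inequality specializes to a uniform density bound on the perimeter, which is the form used in the proof of Proposition 3.4 below; a sketch, assuming (3.1) takes the classical Miranda form ‖Du‖(B(x, r)) ≤ C(R − r)⁻¹ ∫_{B(x,R)} |u| dµ:

```latex
% Take u = \chi_E with E a set of minimal surface, and R = 2r:
P(E, B(x,r)) \;=\; \|D\chi_E\|(B(x,r))
\;\le\; \frac{C}{2r - r}\int_{B(x,2r)} \chi_E \, d\mu
\;\le\; \frac{C\,\mu(B(x,2r))}{r} .
```

Note that the right-hand side is independent of E, which is what makes the bound useful for families of super-level sets.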
Proof of Proposition 3.1. Take any compact set K ⊂ Ω. Because u_k → u in L¹_loc(Ω), we know that the functions u_k are bounded in L¹(K′) for compact sets K′ ⊂ Ω. Hence by covering K by balls of radius r so that the concentric balls of radius 2r are relatively compact subsets of Ω, and by then applying the above De Giorgi inequality, we see that

sup_k ‖Du_k‖(K) < ∞.    (3.2)

By the fact that u_k → u in L¹_loc(Ω) and by the lower semicontinuity of the total variation, we can conclude that u ∈ BV_loc(Ω). To show that u is of least gradient in Ω, we fix a function g ∈ BV(Ω) such that the support K of g is a compact subset of Ω. We need to show that

‖Du‖(K) ≤ ‖D(u + g)‖(K).

By (3.2) we know that the sequence of Radon measures ‖Du_k‖ is locally bounded in Ω. Hence a diagonalization argument gives a subsequence, also denoted by ‖Du_k‖, and a Radon measure ν on Ω, such that ‖Du_k‖ → ν weakly* in Ω.
Let K′ ⊂ Ω be a compact set such that K ⊂ int K′ and ν(∂K′) = 0, and let ε > 0 be such that K_ε := ∪_{x∈K′} B(x, ε) ⋐ Ω. We choose a Lipschitz function η on X such that 0 ≤ η ≤ 1 on X, η = 1 in K′, and η = 0 in X \ K_{ε/2}, and for each positive integer k we set

g_k := η(u + g) + (1 − η)u_k.

Note that g_k = u_k on Ω \ K_{ε/2}, so that g_k − u_k ∈ BV_c(Ω). By the lower semicontinuity of the total variation, the fact that u_k → u in L¹(K_ε), the minimality of u_k, and the Leibniz rule of Lemma 3.2, we have

‖Du‖(int K′) ≤ lim inf_{k→∞} ‖Du_k‖(K_ε) ≤ lim inf_{k→∞} ‖Dg_k‖(K_ε)
≤ ‖D(u + g)‖(K_ε) + lim sup_{k→∞} ( ‖Du_k‖(K̄_ε \ int K′) + ∫_{K_ε} |u + g − u_k| g_η dµ )
≤ ‖D(u + g)‖(K_ε) + ν(K̄_ε \ int K′),

where the last integral vanishes in the limit since g_η = 0 on K′ ⊃ supt(g) and u_k → u in L¹(K_ε). Letting ε → 0 and using ν(∂K′) = 0, we obtain

‖Du‖(int K′) ≤ ‖D(u + g)‖(K′).

Since K′ contains the support of g (recall that the support of the perturbation may be replaced by any larger relatively compact set), we conclude that u is of least gradient in Ω.
While the above proposition does not require the functions u k to be in the global space BV(X), the next stability result considers what happens when each u k ∈ BV(X) is a solution to the Dirichlet problem for least gradients with boundary data in BV(X).

Proposition 3.3. Let Ω be a bounded open set in X.
Take a sequence of functions f_k ∈ BV(X), k ∈ N, and suppose that each u_k ∈ BV(X) is a solution to the Dirichlet problem for least gradients in Ω with boundary data f_k. Suppose also that f_k → f in BV(X) for some f ∈ BV(X). Then there is a function u ∈ BV(X) such that a subsequence of u_k converges to u in L¹(Ω), and u is a solution to the Dirichlet problem for least gradients in Ω with boundary data f.
Proof. Since Ω is bounded, we can fix a ball B ⊂ X with Ω ⋐ B. A standard argument based on the (1, 1)-Poincaré inequality then shows that for every w ∈ BV_0(Ω),

‖w‖_{L¹(X)} ≤ C ‖Dw‖(X),

where C depends only on the radius of B, the Poincaré inequality constants, the doubling constant of µ, and the ratio µ(B \ Ω)/µ(B). By definition, we know that u_k − f_k ∈ BV_0(Ω), and hence for each positive integer k,

‖u_k − f_k‖_{L¹(X)} ≤ C ‖D(u_k − f_k)‖(X) ≤ C ( ‖Du_k‖(X) + ‖Df_k‖(X) ) ≤ 2C ‖Df_k‖(X).

The last inequality follows from the fact that u_k is a solution to the Dirichlet problem for least gradients with boundary data f_k, so that ‖Du_k‖(X) ≤ ‖Df_k‖(X). It follows that the sequence {u_k − f_k}_k is a bounded sequence in BV_0(Ω) ⊂ BV(X), and hence by the compact embedding theorem for BV(X) (see [27, Theorem 3.7]), there is a subsequence, also denoted by {u_k − f_k}_k, and a function v ∈ BV_0(Ω), such that u_k − f_k → v in L¹_loc(X), and so by the boundedness of Ω, u_k − f_k → v in L¹(X). Setting u := f + v, we have u − f ∈ BV_0(Ω) and u_k → u in L¹(X). By the lower semicontinuity of the total variation, we have

‖Du‖(X) ≤ lim inf_{k→∞} ‖Du_k‖(X).    (3.3)

Now let g ∈ BV_0(Ω), and h := f + g. Furthermore, let h_k := f_k + g. Since h_k + v − u_k = g + v − (u_k − f_k) ∈ BV_0(Ω), each u_k can be compared with h_k + v = u + g + (f_k − f), and since each u_k is a solution to the Dirichlet problem for least gradients with boundary data f_k, we have

‖Du_k‖(X) ≤ ‖D(u + g + f_k − f)‖(X) ≤ ‖D(u + g)‖(X) + ‖D(f_k − f)‖(X).

By combining (3.3) with these facts and with f_k → f in BV(X), we get

‖Du‖(X) ≤ ‖D(u + g)‖(X).

Thus u is a solution to the Dirichlet problem with boundary data f. This concludes the proof.
The above two stability results require the sequence u_k to converge in L¹_loc(Ω) (while the second stability result above did not explicitly require this, it was an almost immediate consequence of the hypothesis). The next proposition considers the weakest form of stability, namely, what happens when the sequence u_k is only known to converge pointwise almost everywhere to a function u in Ω.
Recall that given an extended real-valued function u on Ω, its super-level sets are sets of the form {x ∈ Ω : u(x) > t} for t ∈ R.

Proposition 3.4.
Let Ω ⊂ X be an open set, and let u_k ∈ BV_loc(Ω), k ∈ N, be functions of least gradient in Ω that converge pointwise µ-a.e. in Ω to a function u that is finite µ-a.e. in Ω. Suppose in addition that either the functions u_k are uniformly bounded on compact subsets of Ω, or that sup_k ‖Du_k‖(K) < ∞ for every compact K ⊂ Ω. Then u ∈ BV_loc(Ω) and u is a function of least gradient in Ω.

To prove this proposition, we need the following two lemmas, which will also be quite useful in the study of continuity properties of minimizers undertaken in the next section. The argument in the lemmas is based on Bombieri–De Giorgi–Giusti [9].
Lemma 3.5.
Let Ω ⊂ X be an open set, let u ∈ BV_loc(Ω), and let t ∈ R. Set u₁ := max{u, t} − t and u₂ := min{u, t}, so that u = u₁ + u₂. Then ‖Du₁‖ + ‖Du₂‖ = ‖Du‖ as measures on Ω.

Proof. For any open set U ⊂ Ω, the coarea formula (2.2) gives

‖Du₁‖(U) = ∫_t^∞ P({u > s}, U) ds.

Similarly, we have

‖Du₂‖(U) = ∫_{−∞}^t P({u > s}, U) ds.

Therefore we have ‖Du₁‖(U) + ‖Du₂‖(U) = ‖Du‖(U) by (2.2) again, and since the variation measure of general sets is defined by approximation with open sets, we can conclude that ‖Du₁‖ + ‖Du₂‖ = ‖Du‖.

Lemma 3.6.
Let Ω ⊂ X be an open set and let u ∈ BV_loc(Ω) be of least gradient in Ω. Then for every t ∈ R, the functions u₁ and u₂ of the previous lemma are of least gradient in Ω, and the characteristic function χ_{E_t} of the super-level set E_t := {x ∈ Ω : u(x) > t} is of least gradient in Ω.

Proof. Take any t ∈ R and let u₁, u₂ be defined as in the previous lemma. Let g ∈ BV_c(Ω) with K := supt(g). Then we have, by the previous lemma, the minimality of u, and the subadditivity of the total variation,

‖Du₁‖(K) + ‖Du₂‖(K) = ‖Du‖(K) ≤ ‖D(u₁ + g + u₂)‖(K) ≤ ‖D(u₁ + g)‖(K) + ‖Du₂‖(K).

It follows that

‖Du₁‖(K) ≤ ‖D(u₁ + g)‖(K),

so that u₁ is also of least gradient in Ω. Mutatis mutandis we can show that u₂ is also of least gradient in Ω.
Hence we have that whenever ε > 0, the function

u_{t,ε} := min{ 1, max{ 0, (u − t)/ε } }

is of least gradient in Ω. By the Lebesgue dominated convergence theorem, for any compact K ⊂ Ω it is true that

∫_K |u_{t,ε} − χ_{E_t}| dµ → 0

as ε → 0. Hence u_{t,ε} → χ_{E_t} in L¹_loc(Ω) as ε → 0, from which, together with Proposition 3.1, we conclude that χ_{E_t} is of least gradient in Ω. This completes the proof of the lemma.
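For the record, the truncation used above can be expressed through the operations covered by Lemmas 3.5 and 3.6, which is why it is of least gradient (an elementary identity, included for the reader's convenience):

```latex
u_{t,\varepsilon} \;=\; \min\Big\{1,\max\Big\{0,\frac{u-t}{\varepsilon}\Big\}\Big\}
\;=\; \frac{1}{\varepsilon}\,\min\big\{\,\max\{u,t\}-t,\;\varepsilon\,\big\},
% i.e. a truncation from below at level t, a truncation from above at level \varepsilon,
% and a multiplication by the constant 1/\varepsilon, each of which preserves
% the least gradient property.
```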
Proof of Proposition 3.4. Since u_k → u almost everywhere in Ω, it follows that for almost every t ∈ R we have χ_{{x∈Ω : u_k(x)>t}} → χ_{{x∈Ω : u(x)>t}} almost everywhere in Ω. Indeed, to see this, we set N to be the collection of points x ∈ Ω such that u_k(x) does not converge to u(x). Then µ(N) = 0. For any t ∈ R we have that if u_k(x) ≤ t for a subsequence of k but u(x) > t, then x ∈ N. So, setting

K_t := { x ∈ Ω \ N : χ_{{y∈Ω : u_k(y)>t}}(x) does not converge to χ_{{y∈Ω : u(y)>t}}(x) },

for each x ∈ Ω \ (K_t ∪ N) we see that χ_{{y∈Ω : u_k(y)>t}}(x) → χ_{{y∈Ω : u(y)>t}}(x). Note that for x ∈ K_t, we have u(x) = t. Therefore when s ≠ t we have that K_s ∩ K_t is empty. Thus the family {K_t}_{t∈R} is pairwise disjoint, and hence by the local finiteness of µ there is at most a countable number of t ∈ R for which µ(K_t) > 0. We conclude that for almost every t ∈ R, χ_{{x∈Ω : u_k(x)>t}} → χ_{{x∈Ω : u(x)>t}} almost everywhere in Ω. By Lemma 3.6 we know that χ_{{x∈Ω : u_k(x)>t}} is of least gradient in Ω for each such t ∈ R, and so by the De Giorgi inequality (3.1), we know that whenever B(x₀, 2r) ⋐ Ω,

P({x ∈ Ω : u_k(x) > t}, B(x₀, r)) ≤ C µ(B(x₀, 2r)) / r.

It follows that if the functions u_k are uniformly bounded on compact subsets of Ω, then whenever K is a compact subset of Ω, covering K by finitely many such balls and integrating in t over the (uniformly bounded) range of the u_k near K, the coarea formula (2.2) yields a bound on ‖Du_k‖(K) that is independent of k. This implies that C_K := sup_k ‖Du_k‖(K) is finite for any compact K ⊂ Ω, so this case reduces to the last case presented in the proposition. Let us thus assume that whenever K ⊂ Ω is compact, we have C_K < ∞. Then for any ball B(x₀, λr) ⋐ Ω we have by the (1, 1)-Poincaré inequality for BV functions that

⨍_{B(x₀,r)} |u_k − (u_k)_{B(x₀,r)}| dµ ≤ c_P r ‖Du_k‖(B(x₀, λr)) / µ(B(x₀, λr)),

with a bound that is uniform in k. By using the compactness result [27, Theorem 3.7] again, the sequence of BV functions {u_k − (u_k)_{B(x₀,r)}}_k has a subsequence that converges in L¹_loc(B(x₀, r)) to some function v ∈ BV(B(x₀, r)). By picking a further subsequence if necessary, we have u_k − (u_k)_{B(x₀,r)} → v pointwise a.e. in B(x₀, r). Since also u_k → u pointwise a.e. and u is finite almost everywhere, the corresponding subsequence of the sequence {(u_k)_{B(x₀,r)}}_k must also converge to some number α_{B(x₀,r)} ∈ R. Thus we have u_k → u in L¹_loc(B(x₀, r)), and by the lower semicontinuity of the total variation, u ∈ BV(B(x₀, r)). Hence u ∈ BV_loc(Ω).
By covering Ω with balls and using a diagonal argument, we can pick a subsequence u_k for which u_k → u in L¹_loc(Ω), and then it follows from Proposition 3.1 that u is of least gradient in Ω.

Continuity of functions of least gradient
An example in [15] shows that even when the boundary data f is Lipschitz, in general it is not true that there is a continuous solution to the Dirichlet problem for the area functional with boundary data f. A minor modification of the example shows that the same phenomenon occurs also in the case of the Dirichlet problem for least gradients. This is in contrast to the Euclidean situation, where it is known that if the boundary of the domain has strictly positive mean curvature (in a weak sense) and the boundary data is Lipschitz, then there is exactly one Lipschitz solution; see for example [35], [28], [29], [37]. The example in [15] is set in a Euclidean convex Lipschitz domain, equipped with a 1-admissible weight in the sense of [18]. Hence even with the mildest modification of the Euclidean setting, things can go wrong. We will show here that in the rather general setting of a metric measure space, functions of least gradient are continuous everywhere outside their jump sets (after modification on a set of measure zero, of course).
Recall that a function u ∈ L¹_loc(X) is approximately continuous outside a set of measure zero. We can also redefine a function u ∈ L¹_loc(X) on a set of µ-measure zero so that, outside the jump set S_u := {x ∈ X : u∨(x) > u∧(x)}, u is everywhere approximately continuous. To complete the proof that a function of least gradient is continuous outside its jump set, we need the upcoming theorem.
Note that as the characteristic function of a cardioid shows, approximate continuity need not imply continuity, even under modification on a set of measure zero. Furthermore, we know that for u ∈ BV_loc(X), the jump set S_u is of σ-finite H-measure; this follows from [6, Theorem 5.3]. Hence our claim that a function of least gradient, after modification on a null set, is continuous everywhere outside its jump set, is quite strong.

Theorem 4.1.
Let Ω ⊂ X be an open set and let u be a function of least gradient in Ω, with the representative u = u∨. Then u is continuous at every point x ∈ Ω \ S_u.

Proof. Let x ∈ Ω \ S_u and set t := u∨(x) = u∧(x) = u(x). We know that a function of least gradient is locally bounded, see [15, Theorem 4.2, Remark 3.4]. Thus |t| < ∞. Let ε > 0. We now show that, given the choice of representative u = u∨, there exists r_ε > 0 such that u ≥ t − ε in B(x, r_ε). By the choice of representative u = u∨, it is enough to show that µ(B(x, r_ε) \ E_{t−ε}) = 0, where

E_{t−ε} := {y ∈ Ω : u(y) > t − ε}.

To do so, we will apply the porosity result of [21] to the level sets of u.
We know from Lemma 3.6 that the characteristic function χ_{E_{t−ε}} of the super-level set E_{t−ε} is a function of least gradient, and so E_{t−ε} is a set of minimal surface (a set of minimal boundary surface in the language of [21]). Define the measure-theoretic interior

Ẽ_{t−ε} := { y ∈ Ω : lim_{r→0} µ(B(y, r) \ E_{t−ε}) / µ(B(y, r)) = 0 },    (4.1)

as the representative of E_{t−ε} according to [21, Theorem 5.2]; note that µ(Ẽ_{t−ε} ∆ E_{t−ε}) = 0. Now it is enough to show that x ∈ int Ẽ_{t−ε}. Because x ∉ S_u, we have that u∨(x) = u∧(x) = t, and so x ∉ ext Ẽ_{t−ε}. Thus it suffices to show that x ∉ ∂Ẽ_{t−ε}. Let us assume, contrary to this, that x ∈ ∂Ẽ_{t−ε}. By the porosity of Ω \ Ẽ_{t−ε}, this means that there exist r_x > 0 and C ≥ 1 such that whenever 0 < r < r_x, there is a point z ∈ B(x, r/2) such that

B(z, r/C) ∩ Ẽ_{t−ε} = ∅,

where the constant C is independent of x and r. Now B(z, r/C) ⊂ B(x, r), and the doubling property of the measure gives that µ(B(z, r/C)) ≥ γ µ(B(x, r)), where 0 < γ < 1 is independent of x and r. Thus

µ(B(x, r) ∩ {u ≤ t − ε}) ≥ µ(B(z, r/C) \ E_{t−ε}) = µ(B(z, r/C)) ≥ γ µ(B(x, r)).

This contradicts the fact that the approximate limit of u at x is t, since u∧(x) = t implies that

lim_{r→0} µ(B(x, r) ∩ {u ≤ t − ε}) / µ(B(x, r)) = 0.

Therefore x ∉ ∂Ẽ_{t−ε}, and hence x ∈ int Ẽ_{t−ε}. As noted earlier, this implies that u ≥ t − ε everywhere in B(x, r_ε) for some r_ε > 0. By applying a similar argument to the sub-level sets F_t := {x ∈ Ω : u ≤ t}, we get u ≤ t + ε in B(x, r_ε) for a possibly smaller r_ε > 0. Thus for every ε > 0 there exists r_ε > 0 such that |u(x) − u(y)| ≤ ε for all y ∈ B(x, r_ε). Hence u is continuous at x.
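A simple Euclidean example may clarify the roles of u∨, u∧ and S_u here (an illustration of ours, not part of the argument): for the characteristic function of a half-space,

```latex
% u = \chi_H, \qquad H = \{x \in \mathbb{R}^n : x_1 > 0\}:
u^{\vee}(x) = 1, \qquad u^{\wedge}(x) = 0 \qquad \text{whenever } x_1 = 0,
% since both \{u > t\} and \{u < t\}, 0 < t < 1, have density 1/2 at such points.
```

Hence S_u = {x₁ = 0}; choosing the representative u = u∨ = χ_{{x₁ ≥ 0}}, the function is continuous at every point outside S_u, consistent with the continuity result of this section.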

Maximum principle
In this section we prove a maximum principle for solutions to the Dirichlet problem for least gradients. Note that by considering truncations of the approximating sequences in the definition of the total variation, we have the following: if f ∈ BV(X) with M₁ ≤ f ≤ M₂, then there is a solution u ∈ BV(X) to the Dirichlet problem for least gradients with boundary data f such that M₁ ≤ u ≤ M₂. Since we do not have a uniqueness result, this does not automatically imply that all solutions enjoy the same property; one has to prove the maximum principle independently.

Theorem 5.1.
Let Ω ⊂ X be a bounded, connected open set with µ(X \ Ω) > 0, and let f ∈ BV(X) with M₁ ≤ f ≤ M₂ µ-a.e. in X. If u ∈ BV(X) is a solution to the Dirichlet problem for least gradients in Ω with boundary data f, then M₁ ≤ u ≤ M₂ µ-a.e. in Ω.

Proof. We prove the upper bound; the lower bound follows mutatis mutandis. Set u₁ := min{u, M₂} and u₂ := u − u₁ = max{u − M₂, 0}. Since u = f ≤ M₂ µ-a.e. in X \ Ω, we have u₂ ∈ BV_0(Ω), so that u₁ is an admissible competitor for u. By the minimality of u and Lemma 3.5,

‖Du₁‖(X) + ‖Du₂‖(X) = ‖Du‖(X) ≤ ‖Du₁‖(X),

from which we see that ‖Du₂‖(Ω) = 0. Therefore by the (1, 1)-Poincaré inequality for BV functions, it follows that u₂ is locally a.e. constant in Ω, and because Ω is connected, u₂ is a.e. constant in Ω. We denote this constant by L ≥ 0. As pointed out above, u − f ∈ BV_0(Ω); it follows, since u₁ + u₂ − f = u − f ∈ BV_0(Ω) and u₁ − f vanishes µ-a.e. in X \ Ω, that we must have u₂ ∈ BV_0(Ω), that is, u₂ = Lχ_Ω ∈ BV(X). Therefore if L ≠ 0, we must have that P(Ω, X) is finite, and because ‖Du₂‖(X) = 0, we must in fact have that P(Ω, X) = 0. But then χ_Ω would be a.e. constant in X by the Poincaré inequality, which is impossible since µ(Ω) > 0 and µ(X \ Ω) > 0. Hence L = 0, that is, u ≤ M₂ µ-a.e. in Ω.
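The truncation remark at the beginning of this section can also be seen from the coarea formula (2.2); a sketch, where M₂ denotes the upper bound on the boundary data as above: truncating never increases the total variation, so a minimizing sequence can be truncated.

```latex
% With u_{M_2} := \min\{u, M_2\}: \{u_{M_2} > t\} = \{u > t\} for t < M_2
% and \{u_{M_2} > t\} = \emptyset for t \ge M_2, so
\|D u_{M_2}\|(X) = \int_{-\infty}^{M_2} P(\{u > t\}, X)\,dt
\;\le\; \int_{-\infty}^{\infty} P(\{u > t\}, X)\,dt = \|Du\|(X).
```

The truncation from below at level M₁ is handled in the same way.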
Unlike in the nonlinear potential theory associated with p-harmonic functions for p > 1, here we cannot replace the condition µ(X \ Ω) > 0 with the requirement Cap₁(X \ Ω) > 0, since there are closed sets of measure zero but with positive 1-capacity, and if µ(X \ Ω) = 0, all functions u ∈ BV(X) satisfy any given boundary values f ∈ BV(X).

Regularity of minimizers of the functional f (t) = √(1 + t²)
The area functional given by the function f : t ↦ √(1 + t²) is much studied in the setting of the Bernstein problem. However, this functional satisfies neither the condition f(0) = 0 nor lim_{t→0+} (f(t) − f(0))/t > 0, and hence the regularity results obtained so far in this paper do not apply to it. However, by directly considering the meaning of minimizing this functional, we obtain an analogous regularity result in this section.
While we stick to the model case f(t) = √(1 + t²), it is easy to verify that the computations and results presented in this section also apply to more general functionals, where f satisfies the growth conditions

m(1 + t) ≤ f(t) ≤ M(1 + t)

for all t ≥ 0 and some constants 0 < m ≤ M < ∞. The only difference is that various constants will also depend on m and M.
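In the model case one can verify such growth bounds directly (a short computation; the constants 1/√2 and 1 below are ours):

```latex
\frac{1+t}{\sqrt{2}} \;\le\; \sqrt{1+t^2} \;\le\; 1+t \qquad (t \ge 0),
% left inequality: (1+t)^2 = 1 + 2t + t^2 \le 2(1+t^2), since 2t \le 1 + t^2;
% right inequality: 1 + t^2 \le (1+t)^2, by expanding the square.
```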
Given an open set Ω ⊂ X and a function u ∈ L¹_loc(Ω), we define the functional by

F(u, Ω) := inf { lim inf_{i→∞} ∫_Ω √(1 + g_{u_i}²) dµ : u_i ∈ N^{1,1}_loc(Ω), u_i → u in L¹_loc(Ω) }.

Here each g_{u_i} is an upper gradient of u_i ∈ N^{1,1}_loc(Ω). The above definition of F agrees with that of [15, Definition 3.2], since under the assumption of a (1, 1)-Poincaré inequality and the doubling condition of the measure, we know that Lipschitz continuous functions are dense in N^{1,1}(X) (see for example [31]), and hence locally Lipschitz continuous functions form a dense subclass of N^{1,1}_loc(Ω) (see [8, Theorem 5.47]). Note that if F(u, Ω) is finite, then necessarily u ∈ BV_loc(Ω) with ‖Du‖(Ω) < ∞. It is shown in [16] that F(u, ·) extends to a Radon measure on Ω.
We say that a function u ∈ BV_loc(Ω) is a minimizer of the functional F, or an F-minimizer, if whenever v ∈ BV_c(Ω), we have F(u, K) ≤ F(u + v, K), where K = supt(v).
Minimizers as considered in [15] are global minimizers, where the test functions v are required only to be in BV_0(Ω). Our notion of minimizers is a local one in this sense. Clearly the results in [15] have local analogs in our setting. In particular, we know from [15, Theorem 4.2] that minimizers in our sense are locally bounded in Ω.
To study the regularity properties of F-minimizers, we consider a related metric measure space, X × R, equipped with the measure µ × H¹ (where H¹ is the Lebesgue measure on R) and the metric d∞ given by

d∞((x, t), (y, s)) := max{d(x, y), |t − s|} for x, y ∈ X, t, s ∈ R.

Since both X and R support a (1, 1)-Poincaré inequality and µ, H¹ are doubling measures, the product space X × R equipped with d∞ and µ × H¹ also supports a (1, 1)-Poincaré inequality, and µ × H¹ is doubling; see [34, Proposition 1.4].
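The convenience of the metric d∞ is that its balls factor into products; an elementary computation (ours) shows how doubling for µ × H¹ follows:

```latex
B_{d_\infty}((x,t), r) = B(x,r) \times (t-r,\, t+r),
\qquad (\mu \times \mathcal{H}^1)\big(B_{d_\infty}((x,t),r)\big) = 2r\,\mu(B(x,r)),
% hence
(\mu \times \mathcal{H}^1)\big(B_{d_\infty}((x,t),2r)\big)
  = 4r\,\mu(B(x,2r)) \;\le\; 2\,c_d \cdot 2r\,\mu(B(x,r)) .
```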
Given an F-minimizer u on Ω, we define the subgraph

E_u := {(x, t) ∈ Ω × R : t < u(x)}.

Note that for any µ-measurable function u, the set E_u is µ × H¹-measurable in X × R, see [12, p. 66]. We will use the next theorem in the study of regularity of F-minimizers outside their jump sets.
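The connection between F and the subgraph is classical in the Euclidean setting, which motivates the quasiminimality statement that follows (this identity is for orientation only and is not used in the proofs):

```latex
% In \mathbb{R}^n, for u \in C^1(\Omega):
P(E_u, \Omega \times \mathbb{R})
  \;=\; \int_\Omega \sqrt{1 + |\nabla u(x)|^2}\, dx \;=\; F(u, \Omega).
```

In the metric setting an exact identity is not available, and only a two-sided comparison up to constants can be expected.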
Theorem 6.1.
Let Ω ⊂ X be an open set and let u be an F-minimizer in Ω. Then the subgraph E_u is a set of quasiminimal surface in Ω × R.

In proving a converse inequality, we need to consider competing sets for E_u that are not necessarily subgraphs of functions. By [15, Theorem 4.2] we know that u is locally bounded, since it is a minimizer of F. Suppose that F ⊂ Ω × R is a µ × H¹-measurable set such that F∆E_u is a compact subset of Ω × R. We need to show that

‖Dχ_{E_u}‖(F∆E_u) ≤ C ‖Dχ_F‖(F∆E_u).    (6.2)

Note that by the compactness of F∆E_u, there is an open set U ⋐ Ω that contains the projection of F∆E_u to X.
Since F∆E_u is compact in Ω × R and u is bounded in U, we can assume that u ≥ 0 in U, and that (x, t) ∈ F for all x ∈ U and t ≤ 0. Let us set v : U → [0, ∞] to be the function

v(x) := ∫_0^∞ χ_F(x, t) dt;

note that this is µ-measurable by Fubini's theorem. Now v and u may differ only in a compact subset of U (namely, the projection of F∆E_u to X). Let {B_i^ε} be a covering of U × R by balls of radius ε, with bounded overlap, where ε will be chosen shortly, and let {φ_i^ε} be a corresponding partition of unity by Lipschitz functions. For each ε > 0, the discrete convolution ψ_ε := Σ_i (χ_F)_{B_i^ε} φ_i^ε (where (χ_F)_{B_i^ε} denotes the mean value of χ_F on B_i^ε) converges to χ_F in L¹(U × R) as ε → 0, and satisfies (below, U_{Cε} denotes the Cε-neighborhood of U)

∫_{U × R} Lip ψ_ε d(µ × H¹) ≤ C ‖Dχ_F‖(U_{Cε} × R).    (6.3)

For the construction of a discrete convolution and its properties, see e.g. the proof of [22, Theorem 6.5].
Above we need to have U_{Cε} ⊂ Ω, but this is true for small enough ε, since U ⋐ Ω. We also need to have ‖Dχ_F‖(U_{Cε} × R) < ∞, but we can assume this, since otherwise (6.2) necessarily holds. Similarly we can assume that ‖Dχ_F‖(∂U × R) = 0.
Recall the definition of the local Lipschitz constant Lip from (2.1). For a function h(x, t) on the product space X × R, we will also use the notation Lip_x h and Lip_t h for the local Lipschitz constants of h with respect to the first and the second variable, respectively. Fix ε > 0 and set v_ε on U to be the function

v_ε(x) := ∫_0^∞ ψ_ε(x, t) dt;

this clearly converges to v in L¹(U) as ε → 0. For x, y ∈ U we have

|v_ε(x) − v_ε(y)| ≤ ∫_0^∞ |ψ_ε(x, t) − ψ_ε(y, t)| dt.

Here we only need to integrate over a finite interval, since u is bounded in U and F is a perturbation of the subgraph E_u in a bounded set. Since ψ_ε is Lipschitz continuous, v_ε is also Lipschitz continuous in U. By taking a limit as y → x, we get by Lebesgue's dominated convergence theorem

Lip v_ε(x) ≤ ∫_0^∞ Lip_x ψ_ε(x, t) dt.

By combining this with (6.3), we get

∫_U Lip v_ε dµ ≤ ∫_{U × R} Lip ψ_ε d(µ × H¹) ≤ C ‖Dχ_F‖(U_{Cε} × R).

On the other hand, by the fact that ψ_ε is Lipschitz continuous and, for each x ∈ U, ψ_ε(x, t) goes from 1 to 0 as t increases from −ε to ∞, we see that

∫_{U × R} Lip ψ_ε d(µ × H¹) ≥ ∫_U ∫_{−ε}^∞ Lip_t ψ_ε(x, t) dt dµ(x) ≥ µ(U),

so that µ(U) ≤ C ‖Dχ_F‖(U_{Cε} × R) by (6.3). It follows, since √(1 + s²) ≤ 1 + s for s ≥ 0, that v ∈ BV(U) with

F(v, U) ≤ µ(U) + lim inf_{ε→0} ∫_U Lip v_ε dµ ≤ C ‖Dχ_F‖(Ū × R).    (6.4)

Now, by using (6.1), the fact that u is an F-minimizer, and finally (6.4), we see that

‖Dχ_{E_u}‖(U × R) ≤ C F(u, U) ≤ C F(v, U) ≤ C ‖Dχ_F‖(Ū × R) = C ‖Dχ_F‖(U × R),

since we had ‖Dχ_F‖(∂U × R) = 0. The above implies (6.2), so that E_u is a set of quasiminimal surface in Ω × R.
A more thorough analysis of the relationship between the area functional F(u, Ω) and the perimeter of the subgraph P(E_u, Ω × R) has recently been conducted by Ambrosio et al. in [7] under an additional assumption on the metric space X which is different from the assumptions made in this paper. Indeed, [7] considers functions on more general product spaces X × Y with X, Y satisfying an assumption concerning equality between two notions of minimal gradients, whereas we consider the simpler case Y = R. Given these differing assumptions, neither the following theorem nor [7, Theorem 5.1] is a special case of the other. As in Section 4, by modifying a function u ∈ BV_loc(Ω) on a set of measure zero if necessary, we can assume that whenever x ∈ Ω \ S_u, we have u(x) = u∨(x) = u∧(x). That is, every point in Ω \ S_u is a point of approximate continuity of u.

Theorem 6.2.
Let Ω ⊂ X be an open set and let u be an F-minimizer in Ω, with the representative u = u∨. Then u is continuous at every point of Ω \ S_u.

Proof. According to Theorem 6.1, E_u is a set of quasiminimal surface in Ω × R ⊂ X × R equipped with d∞ and µ × H¹, and hence by [21] it has the properties that [Ω × R] ∩ ∂*Ẽ_u = [Ω × R] ∩ ∂Ẽ_u and of porosity, where Ẽ_u is defined according to (4.1). Since [Ω × R] ∩ ∂*Ẽ_u = [Ω × R] ∩ ∂Ẽ_u, for any x ∈ Ω we necessarily have (x, t) ∈ Ẽ_u for t < u∧(x) and (x, t) ∉ Ẽ_u for t > u∨(x). Moreover, whenever u∧(x) ≤ t ≤ u∨(x) and r > 0, we must have

µ × H¹(B((x, t), r) ∩ Ẽ_u) > 0,  µ × H¹(B((x, t), r) \ Ẽ_u) > 0.    (6.5)

If we add to Ẽ_u all points (x, t) with t ≤ u∨(x), we get precisely the subgraph Ê_u of u = u∨. Clearly µ × H¹(Ê_u ∆ Ẽ_u) = 0, and by (6.5) we have ∂Ê_u = ∂Ẽ_u, so that we also have [Ω × R] ∩ ∂*Ê_u = [Ω × R] ∩ ∂Ê_u, and the porosity property holds for Ê_u and its complement. Let x ∈ Ω \ S_u, so that x is a point of approximate continuity of u, and let t := u(x). Since F-minimizers are locally bounded by [15, Theorem 4.1], we have |t| < ∞. We wish to show that whenever ε > 0, there is a ball B(x, r) ⊂ Ω such that u > t − ε on B(x, r).
Suppose not; then there is some ε > 0 such that whenever r > 0 with B(x, r) ⊂ Ω, there is some y ∈ B(x, r) with u(y) ≤ t − ε. In particular, we can choose 0 < r < ε/2.
Since (y, u(y)) ∈ ∂Ê_u, by the porosity of the complement of Ê_u (see [21, Theorem 5.2]) we know that there is some point (w, s) in the ball (with respect to the metric d∞) B(y, r) × (u(y) − r, u(y) + r) such that the d∞-ball B(w, r/C) × (s − r/C, s + r/C) lies in [Ω × R] \ Ê_u. Note that to use this result, we need B(x, 2r) ⊂ Ω. Now for a.e. z ∈ B(w, r/C), we have u(z) ≤ s − r/C. Since |s − u(y)| < r, we have s < r + u(y) ≤ r + t − ε.
Thus for a.e. z ∈ B(w, r/C) ⊂ B(x, 3r),

u(z) ≤ s − r/C < t − ε + r < t − ε/2.

Since this is true for every 0 < r < ε/2, and the measure µ(B(w, r/C)) is comparable to µ(B(x, r)) by the doubling property of µ, we cannot have the approximate limit of u at x be t. This is a contradiction with the assumption that x is a point of approximate continuity of u with u(x) = t. Thus for each ε > 0, there is some r > 0 such that u > t − ε on B(x, r). A similar argument shows that for each ε > 0, there is some r > 0 such that u < t + ε on B(x, r). That is, u is continuous at x.
Recall that even in a weighted Euclidean setting we cannot insist on u being continuous everywhere (that is, we cannot insist on the jump set S_u being empty), as demonstrated by [15, Example 5.2].