1 Introduction

The concept of non-local perimeter of a given Borel set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure which corresponds to the fractional Laplacian was proposed in [7]. This is the so-called \(\alpha \)-perimeter and it is defined via the following integral formula

$$\begin{aligned} \textrm{Per}_\alpha (E) = \int _E\int _{E^c}\frac{1}{|x-y|^{d+\alpha }}\, \textrm{d}x\, \textrm{d}y, \end{aligned}$$
(1.1)

where \(0<\alpha <1\) and \(E^c\) is the complement of E. By |x| we denote the Euclidean norm of \(x\in {\mathbb {R}}^d\). This object is strongly related to the fractional Sobolev norm and it has been intensively studied over last years, see [2, 6,7,8,9,10, 17, 22, 23, 25, 27, 40]. We also refer to [18, 19] for the case of fractional norms related to Feller generators.

The main motivation for the present article was another interesting variant of non-local perimeter which is defined through a given non-singular kernel. More precisely, if \(J:{\mathbb {R}}^d \rightarrow [0,\infty )\) is a radially symmetric and integrable function then the corresponding J-perimeter of a set E is given by

$$\begin{aligned} \textrm{Per}_J (E) = \int _E\int _{E^c} J(x-y)\, \textrm{d}y \, \textrm{d}x. \end{aligned}$$
(1.2)

In [30] the authors established basic properties and convergence results for J-perimeters, see also [4, 11, 31, 34, 35]. For a treatment on a unified framework for non-local perimeters and curvatures we refer to [13].

Our principal goal is to introduce a notion of non-local perimeter which is defined with the aid of a given positive Borel measure on \({\mathbb {R}}^d\). If we look at (1.1) and (1.2) from a probabilistic point of view then it becomes evident that the \(\alpha \)-perimeter is linked with an \(\alpha \)-stable Lévy process while J-perimeter is associated with a compound Poisson process. We thus aim to develop a unified approach that encompasses both the concepts as special cases. We emphasize that our methods are, however, purely analytical.

Before we define the object of our study, we recall the definition of the classical perimeter. The perimeter of a Borel set \(E\subset {\mathbb {R}}^d\) can be defined in the variational language as the total mass of the total variation measure of the indicator function \({\textbf{1}}_E\). More precisely, the distributional gradient Du of a function \(u\in L^1(\Omega ) \) (here \(\Omega \subseteq {\mathbb {R}}^d\) is an open set) is a vector-valued Radon measure and its total variation in \(\Omega \) is defined as

$$\begin{aligned} \vert Du \vert (\Omega ) = \sup \left\{ \int _{\Omega }u(x)\, \textrm{div}\, v(x)\, \textrm{d}x:\, v\in C_c^\infty (\Omega , {\mathbb {R}}^d),\, \vert v(x)\vert \le 1,\, \text {for}\ x\in \Omega \right\} . \end{aligned}$$

The perimeter of a Borel set \(E\subset {\mathbb {R}}^d\) is then given by the total variation of \({\textbf{1}}_E\) in \({\mathbb {R}}^d\), i.e.

$$\begin{aligned} \textrm{Per}(E) = |D{\textbf{1}}_E |({\mathbb {R}}^d). \end{aligned}$$
(1.3)

It is known that for a set E of finite Lebesgue measure, \(\textrm{Per}(E)\) is finite if, and only if, \({\textbf{1}}_E\in \textrm{BV}({\mathbb {R}}^d)\), where the space of functions of bounded variations is defined as

$$\begin{aligned} \textrm{BV} ({\mathbb {R}}^d) = \left\{ u\in L^1({\mathbb {R}}^d):\, |Du|({\mathbb {R}}^d)<\infty \right\} . \end{aligned}$$

The space \(\textrm{BV}\) is endowed with the norm \(\Vert u \Vert _{\textrm{BV}} = \Vert u\Vert _{L^1}+ \vert Du\vert \) and it holds \(W^{1,1}({\mathbb {R}}^d) \subset \textrm{BV}({\mathbb {R}}^d)\). Our main reference for functions of bounded variation is [3].

For any positive Borel measure \(\nu \) on \({\mathbb {R}}^d\) satisfying

$$\begin{aligned} \int (1\wedge |x|)\, \nu (\textrm{d}x)< \infty \quad \textrm{and}\quad \nu (\{0\})=0 \end{aligned}$$
(1.4)

we consider the corresponding non-local \(\nu \)-perimeter of a Borel set \( E\subset {\mathbb {R}}^d\) defined as

$$\begin{aligned} \textrm{Per}_{\nu } (E) = \int _{E} \int _{E^c-x}\nu (\textrm{d}y)\, \textrm{d}x . \end{aligned}$$

It was recently observed in [15] that perimeters of this type appear as limit objects in the asymptotics of the heat content related to Lévy processes of bounded variation. It was proved in [15, Lemma 1] that for a set E of finite Lebesgue measure and of finite perimeter, \(\textrm{Per}_\nu (E)\) is finite as well. To the non-local \(\nu \)-perimeter we attach the space

$$\begin{aligned} \textrm{BV}_{\nu } ({\mathbb {R}}^d) = \left\{ u\in L^1({\mathbb {R}}^d):\, \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} |u(x+y)-u(y)|\, \nu (\textrm{d}x)\, \textrm{d}y <\infty \right\} . \end{aligned}$$
(1.5)

It is equipped with the norm \(\Vert u\Vert _{\textrm{BV}_{\nu }} = \Vert u\Vert _{L^1} + {\mathcal {F}}_{\nu }(u)\), where

$$\begin{aligned} {\mathcal {F}}_{\nu } (u) = \frac{1}{2}\int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} |u(x+y)-u(y)|\, \nu (\textrm{d}x)\, \textrm{d}y . \end{aligned}$$

In Sect. 2 we show that \(\textrm{BV}_\nu ({\mathbb {R}}^d)\) is a Banach space and \(\textrm{BV}({\mathbb {R}}^d)\subset \textrm{BV}_{\nu } ({\mathbb {R}}^d)\). For sets of finite Lebesgue measure their \(\nu \)-perimeter can be computed through the formula \(\textrm{Per}_\nu (E)={\mathcal {F}}_\nu ({\textbf{1}}_E)\). Furthermore, we also find a co-area formula for the \(\nu \)-perimeter and we observe that for the \(\nu \)-perimeter a version of isoperimetric inequality holds in the case when the measure \(\nu \) is given by a radially increasing kernel.

There has been a vivid interest in asymptotic behaviour and convergence results for fractional perimeters in recent years. The asymptotics as \(\alpha \uparrow 1\) were found in [36] (see also [6, 16]; and recent paper [12] for second order asymptotics). In this case the following convergence holds for any set E of finite Lebesgue measure and of finite perimeter,

$$\begin{aligned} \lim _{\alpha \uparrow 1}\, (1-\alpha )\textrm{Per}_\alpha (E) = \frac{K_{1,d}}{2}\, \textrm{Per}(E), \end{aligned}$$
(1.6)

where

$$\begin{aligned} K_{1,d}= \int _{{\mathbb {S}}^{d-1}}|e\cdot \theta |\sigma (\textrm{d}\theta ),\quad |e|=1. \end{aligned}$$
(1.7)

Here, \(\sigma \) stands for the usual surface measure on the unit sphere. One can show (see e.g. [26]) that \(K_{1,d}= 2\varpi _{d-1}\), where \( \varpi _d = \pi ^{d/2}/\Gamma \left( \frac{d}{2}+1\right) \) is the Lebesgue measure of the unit ball in \({\mathbb {R}}^d\). On the other hand, if \(\alpha \downarrow 0\), the asymptotic result was found in [32] (see also [17] for a more detailed treatment) and it asserts that for any bounded set E of finite perimeter,

$$\begin{aligned} \lim _{\alpha \downarrow 0}\alpha \textrm{Per}_\alpha (E) = \kappa _{d-1}|E|, \end{aligned}$$
(1.8)

where |E| stands for the Lebesgue measure of E and \(\kappa _{d-1}=\sigma ({\mathbb {S}}^{d-1}) =d\varpi _d\). For J-perimeters it was found in [30] that under the assumption that J has compact support and for bounded sets E of finite perimeter the following convergence holds

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\varepsilon ^{-1}\textrm{Per}_{J_\varepsilon }(E) = C_J^{-1}\textrm{Per}(E), \end{aligned}$$
(1.9)

where \(J_\varepsilon (x) = \varepsilon ^{-d}J(x/\varepsilon )\) and \(C_J = 2( \int _{{\mathbb {R}}^d}J(x)|x_d|\, \textrm{d}x)^{-1}\) (here \(x=(x_1,\ldots ,x_d)\)).

In order to obtain asymptotics of non-local \(\nu \)-perimeters we first extend [36, Theorem 2] to the current setting (see Theorem 3.1) and with this result at hand we show that, for any family of measures \(\{\nu _\varepsilon \}_{\varepsilon >0}\) satisfying (1.4) and such that the mass of normalized measures \((1\wedge |x|)\nu _\varepsilon (\textrm{d}x)\) concentrates at zero, there is a sequence of positive numbers \(\{\varepsilon _j\}\) converging to zero such that

$$\begin{aligned} \lim _{j\rightarrow \infty } C_{\varepsilon _j}^{-1}\textrm{Per}_{\nu _{\varepsilon _j}}(E) = \frac{1}{2} \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |D {\textbf{1}}_E \cdot \theta |\, \mu (\textrm{d}\theta ). \end{aligned}$$
(1.10)

Here, \(C_{\varepsilon _j}\) are normalizing constants and \(\mu \) is a probability measure on the unit sphere which is constructed through the family \(\{\nu _\varepsilon \}_{\varepsilon >0}\), see Sect. 3 for details. Clearly, if the limit measure \(\mu \) happens to be the (normalized) uniform measure on the unit sphere then the right-hand side of (1.10) is equal to the right-hand side of (1.6) divided by \(\kappa _{d-1}\). It turns out that this approach applies to fractional perimeters and J-perimeters and we not only recover results in (1.6) and (1.9) but we also abandon the assumption that J is compactly supported.

A fruitful observation in the present context is the fact that the non-local perimeter of E can be represented with the aid of the so-called covariance function related to the set E, see (3.17). This enables us to investigate the case when the mass of the normalized measures \((1\wedge |x|)\nu _\varepsilon (\textrm{d}x)\) concentrates at infinity and as an application we recover (1.8). We also find a corresponding result for J-perimeters. We show that under the assumption that the function \(\ell (s)= \int _{|x|>s}J(x)\, \textrm{d}x\) is slowly varying at zero, for any set E of finite Lebesgue measure and of finite perimeter it holds

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\ell (\varepsilon )^{-1}\textrm{Per}_{{\widetilde{J}}_\varepsilon }(E) = |E|, \end{aligned}$$

where \({\widetilde{J}}_\varepsilon (x) = \varepsilon ^dJ(\varepsilon x)\). Recall that \(\ell \) is slowly varying at zero if \(\lim _{s\rightarrow 0}\ell (\lambda s)/\ell (s)=1\), for \(\lambda >0\), see [5].

The notion of non-local \(\nu \)-perimeter can be also successfully exploited in the framework of anisotropic perimeters. Anisotropic perimeter related to a given convex body is a natural generalization of the classical perimeter and it is defined via a norm whose unit ball is equal to a given convex body, see [20] and references therein. Let \(K\subset {\mathbb {R}}^d\) be a convex compact set of non-empty interior (so-called convex body) and such that it is origin-symmetric. Let \(\Vert \cdot \Vert _K\) denote the unique norm on \({\mathbb {R}}^d\) with unit ball equal to K, that is

$$\begin{aligned} \Vert x\Vert _K = \inf \{ \lambda >0: \lambda ^{-1}x\in K\},\quad x\in {\mathbb {R}}^d. \end{aligned}$$

Let \(K^*=\{y\in {\mathbb {R}}^d:\sup _{x\in K} y\cdot x\le 1\}\) be the polar body of K. Anisotropic perimeter of a Borel set \(E\subset {\mathbb {R}}^d\) with respect to K is defined as

$$\begin{aligned} \textrm{Per}(E,K) = \int _{\partial ^* E}\Vert {\textsf{n}}_{E}(x)\Vert _{K^*}\, \textrm{d}x. \end{aligned}$$

Here \({\textsf{n}}_E(x)\) denotes the measure theoretic outer unit normal vector of E at \(x\in \partial ^* E\), where \(\partial ^*E\) is the reduced boundary of E, see [3, Section 3.5]. Similarly as the classical perimeter is linked with Sobolev norm, the anisotropic perimeter is related to anisotropic Sobolev (semi)norms which have been intensively studied, see [1, 14, 21, 33, Appendix by M. Gromov], [28, 29].

There also exists a fractional counterpart of the anisotropic perimeter. For \(0<\alpha <1\), the anisotropic \(\alpha \)-perimeter of E with respect to K is given by

$$\begin{aligned} \textrm{Per}_{\alpha } (E,K) = \int _E \int _{E^c}\frac{1}{\Vert x-y\Vert ^{d+\alpha }_K}\,\textrm{d}x\, \textrm{d}y . \end{aligned}$$

In [27] it was proved that for any bounded set of finite perimeter the following results hold

$$\begin{aligned} \lim _{\alpha \uparrow 1}(1-\alpha ) \textrm{Per}_\alpha (E,K) = \textrm{Per}(E,ZK) \end{aligned}$$
(1.11)

and

$$\begin{aligned} \lim _{\alpha \downarrow 0}\alpha \textrm{Per}_{\alpha }(E,K) = d|K||E|, \end{aligned}$$
(1.12)

where ZK is the so-called moment body of K, see (3.10) for the definition. Through the methods of the present paper we are able to recover convergence in (1.11) and (1.12) and show that they are actually valid for all sets of finite measure and of finite perimeter. Similarly, we establish the corresponding convergence of anisotropic Sobolev norms given in [27, Theorem 8] and show that the assumption of compact support is superfluous.

2 Basic properties of the non-local perimeter

In this section we establish a few essential facts concerning the non-local \(\nu \)-perimeter, such as co-area formula and isoperimetric inequality. We start with the following symmetry property. Recall that for a set \(A\subset {\mathbb {R}}^d\) we denote by |A| its Lebesgue measure.

Lemma 2.1

For any \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure we have

$$\begin{aligned} \textrm{Per}_{\nu } (E) = \textrm{Per}_{\nu } (E^c). \end{aligned}$$

Proof

Indeed, by Fubini’s theorem we obtain

$$\begin{aligned} \textrm{Per}_{\nu } (E)&= \int _{E} \int _{E^c-x}\nu (\textrm{d}y)\, \textrm{d}x = \int \int {\textbf{1}}_{E}(x) {\textbf{1}}_{E^c-y}(x)\, \textrm{d}x\, \nu (\textrm{d}y)\\&=\int |E\cap (E-y)^c|\nu (dy). \end{aligned}$$

Since \(|E\cap (E-y)|=|E\cap (E+y)|\), we have

$$\begin{aligned} |E\cap (E-y)^c| = |E|- |E\cap (E-y)|=|E\cap (E+y)^c|=|E^c\cap (E-y)|. \end{aligned}$$

Hence, \(\textrm{Per}_{\nu } (E) =\textrm{Per}_\nu (E^c)\), as desired. \(\square \)

In particular, by Lemma 2.1 we easily obtain that

$$\begin{aligned} {\mathcal {F}}_\nu ({\textbf{1}}_E)=\textrm{Per}_\nu (E), \end{aligned}$$

for any \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure.

Remark 2.2

(1) Lemma 2.1 evidently does not hold if \(|E|=\infty \). Indeed, the equality \(|E\cap (E-y)^c| = |E\cap (E+y)^c|\) fails already for \(E=(0,\infty )\).

(2) Without loss of generality we could assume that the measure \(\nu \) appearing in the definition of the \(\nu \)-perimeter is symmetric in the sense that \(\nu (A) = \nu (-A)\), for any Borel set A. This follows from the fact that \(\textrm{Per}_\nu (A) = \textrm{Per}_{{\widetilde{\nu }}}(A)\), where \({\widetilde{\nu }}(A) = \frac{1}{2}(\nu (A)+\nu (-A))\).

We further claim that under condition (1.4) the space \(\textrm{BV}({\mathbb {R}}^d)\) is contained in \(\textrm{BV}_\nu ({\mathbb {R}}^d)\). For any vector-valued measure \(\Lambda \) we use notation \(\int _{{\mathbb {R}}^d} \Lambda = \Lambda ({\mathbb {R}}^d)\).

Proposition 2.3

For any \(u\in \textrm{BV}({\mathbb {R}}^d)\) it holds

  1. (i)

    \({\mathcal {F}}_\nu (u)\le C_\nu \Vert u\Vert _{\textrm{BV}}\), where \(C_\nu = \int (1\wedge |x|)\, \nu (\textrm{d}x)\).

  2. (ii)

    \({\mathcal {F}}_\nu (u) \le {\tilde{C}}_\nu \int _{{\mathbb {R}}^d} |Du|\), where \({\tilde{C}}_\nu = \int |x|\nu (\textrm{d}x) \le \infty \).

Proof

It is known [3, Theorem 3.9] that for any \(u\in \textrm{BV}({\mathbb {R}}^d)\) there exists a sequence \(u_n\in C^\infty \cap W^{1,1}({\mathbb {R}}^d)\) such that

$$\begin{aligned} u_n\rightarrow u\ \textrm{in}\ L^1\quad \textrm{and}\quad \lim _{n\rightarrow \infty }\int _{{\mathbb {R}}^d} |\nabla u_n(x)|\, \textrm{d}x = \int _{{\mathbb {R}}^d} |Du|\,. \end{aligned}$$

We prove that for such sequence \(u_n\) it holds

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathcal {F}}_\nu (u_n) = {\mathcal {F}}_\nu (u). \end{aligned}$$

Observe that

$$\begin{aligned} \int _{{\mathbb {R}}^d} |u_n(x+y)-u_n(y)|\, \textrm{d}y&\le |x|\int _{{\mathbb {R}}^d} \int _0^1 |\nabla u_n (y+tx)|\, \textrm{d}t\, \textrm{d}y\nonumber \\&= |x|\int _0^1 \int _{{\mathbb {R}}^d} |\nabla u_n (w)|\, \textrm{d}w\, \textrm{d}t \le C|x|\, \vert Du\vert ({\mathbb {R}}^d). \end{aligned}$$
(2.1)

Further, we have

$$\begin{aligned} \int _{{\mathbb {R}}^d} |u_n(x+y)-u_n(y)|\, \textrm{d}y&\le C \Vert u \Vert _{L^1}. \end{aligned}$$
(2.2)

Thus, we may apply the dominated convergence theorem to arrive at

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathcal {F}}_\nu (u_n)&= \frac{1}{2}\int _{{\mathbb {R}}^d} \lim _{n\rightarrow \infty }\int _{{\mathbb {R}}^d} |u_n(x+y)-u_n(y)|\, \textrm{d}y\, \nu (\textrm{d}x) \\&= \frac{1}{2}\int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} |u(x+y)-u(y)|\, \nu (\textrm{d}x)\, dy = {\mathcal {F}}_\nu (u). \end{aligned}$$

Hence, it suffices to operate on \(u\in C^\infty \cap W^{1,1}({\mathbb {R}}^d)\). Then inequality (i) follows easily by (2.1) and (2.2). Similarly, (ii) is a consequence of (2.1). \(\square \)

Corollary 2.4

Let \(E\subset {\mathbb {R}}^d\) be such that \(|E|<\infty \) and \(\textrm{Per}(E)<\infty \). Then

$$\begin{aligned} \textrm{Per}_{\nu }(E)\le C_\nu (\textrm{Per}(E) + |E|). \end{aligned}$$

We next show for completeness that \(\textrm{BV}_\nu ({\mathbb {R}}^d)\) is a Banach space.

Lemma 2.5

The space \(\textrm{BV}_\nu ({\mathbb {R}}^d)\) equipped with the norm \(\Vert u\Vert _{\textrm{BV}_{\nu }} = \Vert u\Vert _{L^1} + {\mathcal {F}}_{\nu }(u)\) is a Banach space.

Proof

The function \(u\mapsto \Vert u\Vert _{\textrm{BV}_\nu }\) is evidently a norm and thus we only need to show completeness. Let \(\{u_n\}\) be a Cauchy sequence in \(\textrm{BV}_\nu ({\mathbb {R}}^d)\) and let u be its \(L^1\) limit. By Fatou’s lemma we have

$$\begin{aligned} 2{\mathcal {F}}_\nu (u)&= \int _{{\mathbb {R}}^d} \lim _{n\rightarrow \infty } \int _{{\mathbb {R}}^d} | u_n(x+y)-u_n(x)|\, \textrm{d}x\, \nu (\textrm{d}y)\\&\le \liminf _{n\rightarrow \infty }\int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} | u_n(x+y)-u_n(x)|\, \nu (\textrm{d}y)\, \textrm{d}x = 2\liminf _{n\rightarrow \infty }{\mathcal {F}}_{\nu }(u_n) \end{aligned}$$

and as the sequence \(\{u_n\}\) is bounded in \(\textrm{BV}_\nu ({\mathbb {R}}^d)\) we infer the result. \(\square \)

Proposition 2.6

(Co-area formula) For \(u\in L^1({\mathbb {R}}^d)\) we set \(S_t(u)=\{ x\in {\mathbb {R}}^d:\, u(x)>t \}\). Then

$$\begin{aligned} {\mathcal {F}}_{\nu } (u) = \int _{{\mathbb {R}}} \textrm{Per}_{\nu } (S_t(u)) \, \textrm{d}t. \end{aligned}$$

Proof

Since

$$\begin{aligned} u(x) = \int _0^\infty {\textbf{1}}_{S_t(u)}(x)\,\textrm{d}t -\int _{-\infty }^0 (1-{\textbf{1}}_{S_t(u)}(x))\,\textrm{d}t , \end{aligned}$$

we have

$$\begin{aligned} |u(x)-u(y)| = \int _{-\infty }^\infty |{\textbf{1}}_{S_t(u)}(x) -{\textbf{1}}_{S_t(u)}(y)|\,\textrm{d}t . \end{aligned}$$

Thus, by Tonelli’s theorem,

$$\begin{aligned} {\mathcal {F}}_{\nu } (u) = \frac{1}{2}\int _{\mathbb {R}}\int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} |{\textbf{1}}_{S_t(u)}(x+y) -{\textbf{1}}_{S_t(u)}(y) |\, \nu (\textrm{d}x)\, \textrm{d}y\, \textrm{d}t. \end{aligned}$$

In view of Lemma 2.1 we obtain

$$\begin{aligned} \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} |{\textbf{1}}_{S_t(u)}(x+y)&-{\textbf{1}}_{S_t(u)}(y) |\, \nu (\textrm{d}x)\, \textrm{d}y \\&= \int _{S_t(u)}\int _{S_t(u)^c-y}\nu (\textrm{d}x)\, \textrm{d}y + \int _{S_t(u)^c}\int _{S_t(u)-y}\nu (\textrm{d}x)\, \textrm{d}y\\&= \textrm{Per}_{\nu } (S_t(u)) + \textrm{Per}_{\nu } (S_t(u)^c) = 2\textrm{Per}_{\nu } (S_t(u)) \end{aligned}$$

and the result follows. \(\square \)

The following result is an isoperimetric inequality for the non-local perimeter in the case when the measure \(\nu \) is given by a radial kernel j. We emphasize that the function j does not have to be integrable. We omit the proof as it is the same as that of [11, Proposition 3.1], see also [30, Theorem 2.4].

Proposition 2.7

(Isoperimetric inequality) Let \(\nu (\textrm{d}x) = j(x)\textrm{d}x\) where \(j\ge 0\) is a radially non-increasing function. Then for any \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure

$$\begin{aligned} \textrm{Per}_\nu (E)\ge \textrm{Per}_\nu (B), \end{aligned}$$

where B is an open ball centred at the origin such that \(|B|= |E|\).

3 Asymptotics of the non-local perimeter

In this section we show that the non-local perimeter which we introduce in the present article converges towards the classical perimeter (or to the Lebesgue measure) if we use an appropriate scaling procedure. We then apply our results to establish convergence of fractional perimeters, J-perimeters and anisotropic fractional perimeters.

3.1 Convergence towards the classical perimeter

We first formulate an approximation result in the space of functions of bounded variation which can be seen as a generalization of the result by Ponce [36], see also [6, 16]. By \(B_R\) we denote the closed ball centred at zero and of radius \(R>0\).

Theorem 3.1

Let \(\{\lambda \}_{\varepsilon >0}\) be a family of probability measures on \({\mathbb {R}}^d\) such that \(\lambda _\varepsilon (\{0\})=0\). Suppose that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\lambda _\varepsilon (B_R^c)=0,\quad \mathrm {for\ any}\ R>0. \end{aligned}$$
(3.1)

Further, let \(\mu _\varepsilon \) be a probability measure on the unit sphere given by

$$\begin{aligned} \mu _{\varepsilon }(E)= \lambda _{\varepsilon }((0,\infty ) E),\quad E\subset {\mathbb {S}}^{d-1}, \end{aligned}$$
(3.2)

where \((0,\infty )E = \{re:\, r>0\ \text {and}\ e\in E\}\) is the cone determined by E. Then there exists a sequence of positive numbers \(\{\varepsilon _j\}\) converging to zero such that for any \(f\in \textrm{BV}({\mathbb {R}}^d)\)

$$\begin{aligned} \lim _{j\rightarrow \infty } \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \frac{\left| f(x+y)-f(x) \right| }{|y|}\lambda _{\varepsilon _j}(\textrm{d}y)\, \textrm{d}x = \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d}|Df\cdot \theta |\, \mu (\textrm{d}\theta ), \end{aligned}$$
(3.3)

where \(\mu \) is a probability measure on the unit sphere which is equal to the weak limit of the sequence \(\{\mu _{\varepsilon _j}\}\).

The proof is postponed to the Appendix given in Sect. 4.

Remark 3.2

In many applications we can conclude the weak convergence of the whole sequence \(\{\mu _\varepsilon \}\) towards the limit measure \(\mu \). In such cases convergence in (3.3) holds for \(\varepsilon \downarrow 0\) and not only along a subsequence. This fact follows directly from the proof of Theorem 3.1.

We apply Theorem 3.1 in the case when the probability measures \(\lambda _\varepsilon \) are absolutely continuous with respect to a given family of measures \(\{\nu _\varepsilon \}\) satisfying (1.4). We start by giving a general result in this direction and next we present a few examples.

Theorem 3.3

Let \(\{\nu _\varepsilon \}_{\varepsilon >0}\) be a family of positive Borel measures satisfying (1.4) and let

$$\begin{aligned} \lambda _\varepsilon (\textrm{d}x) = C_\varepsilon ^{-1}\left( R_\varepsilon \wedge |x|\right) \nu _\varepsilon (\textrm{d}x), \end{aligned}$$

where \(C_\varepsilon = \int _{{\mathbb {R}}^d}\left( R_\varepsilon \wedge |x|\right) \nu _\varepsilon (\textrm{d}x)\) and \(R_\varepsilon \in [1,\infty ]\). Let \(\mu _{\varepsilon }\) be the corresponding probability measure on the unit sphere defined in (3.2). Under condition (3.1), there exists a sequence of positive numbers \(\{\varepsilon _j\}\) converging to zero such that for any \(f\in \textrm{BV}({\mathbb {R}}^d)\) we have

$$\begin{aligned} \lim _{j\rightarrow \infty } C_{\varepsilon _j}^{-1}{\mathcal {F}}_{\nu _{\varepsilon _j}} (f) = \frac{1}{2}\int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d}|Df\cdot \theta |\, \mu (\textrm{d}\theta ), \end{aligned}$$
(3.4)

where \(\mu \) is the weak limit of the sequence \(\{\mu _{\varepsilon _j}\}\).

Proof

We split the integral as follows

$$\begin{aligned} C_{\varepsilon _j}^{-1}{\mathcal {F}}_{\nu _{\varepsilon _j}} (f)&= \frac{1}{2}\left( \int _{{\mathbb {R}}^d}\int _{B_{R_{\varepsilon _j}}} + \int _{{\mathbb {R}}^d}\int _{B_{R_{\varepsilon _j}}^c}\right) \frac{\vert f(x+y)-f(x)|}{R_{\varepsilon _j}\wedge |y|}\lambda _{\varepsilon _j}(\textrm{d}y)\textrm{d}x. \end{aligned}$$
(3.5)

We have

$$\begin{aligned} \int _{{\mathbb {R}}^d}\int _{B_{R_{\varepsilon _j}}} \!\!\!\!\!\! \frac{\vert f(x{+}y){-}f(x)|}{|y|}\lambda _{\varepsilon _j}(\textrm{d}y)\textrm{d}x {=}\!\! \left( \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d}\!\! {-} \!\!\int _{{\mathbb {R}}^d}\int _{B_{R_{\varepsilon _j}}^c} \right) \frac{\vert f(x{+}y){-}f(x)|}{|y|}\lambda _{\varepsilon _j}(\textrm{d}y)\textrm{d}x \end{aligned}$$

and the first integral converges by Theorem 3.1 to (two times) the right hand side of (3.4) while the second integral converges to zero as

$$\begin{aligned} \int _{{\mathbb {R}}^d}\int _{B_{R_{\varepsilon _j}}^c} \frac{\vert f(x+y)-f(x)|}{|y|}\lambda _{\varepsilon _j}(\textrm{d}y)\textrm{d}x \le 2\Vert f\Vert _{L^1}\frac{\lambda _{\varepsilon _j}(B_{R_{\varepsilon _j}}^c)}{R_{\varepsilon _j}}\le 2\Vert f\Vert _{L^1} \lambda _{\varepsilon _j}(B_1^c)\rightarrow 0. \end{aligned}$$

The second integral in (3.5) is negligible by the argument from the previous line. \(\square \)

Corollary 3.4

In the notation of Theorem 3.3, let \(f={\textbf{1}}_E\) where \(E\subset {\mathbb {R}}^d\) is a set of finite Lebesgue measure and of finite perimeter (i.e. \({\textbf{1}}_E\in \textrm{BV}({\mathbb {R}}^d)\)). Then

$$\begin{aligned} \lim _{j\rightarrow \infty } C_{\varepsilon _j}^{-1}\textrm{Per}_{\nu _{\varepsilon _j}} (E) = \frac{1}{2} \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |D {\textbf{1}}_E \cdot \theta |\, \mu (\textrm{d}\theta ). \end{aligned}$$

In particular, if \(\mu \) is the (normalized) uniform measure on \({\mathbb {S}}^{d-1}\) then

$$\begin{aligned} \lim _{j\rightarrow \infty } C_{\varepsilon _j}^{-1}\textrm{Per}_{\nu _{\varepsilon _j}} (E) = \frac{K_{1,d}}{2\kappa _{d-1}}\textrm{Per}(E), \end{aligned}$$

where \(K_{1,d}\) is the constant from (1.7).

We next illustrate Theorem 3.3 and Corollary 3.4 by a few examples. We start with the following result which is an application of Theorem 3.3 for stable Lévy measures. For a detailed discussion on the representation of stable Lévy measures we refer the reader to [37, Section 14].

Proposition 3.5

Let \(\nu _\alpha \) be an \(\alpha \)-stable Lévy measure with its spectral decomposition given by

$$\begin{aligned} \nu _\alpha (A) = \int _{{\mathbb {S}}^{d-1}}\int _{0}^\infty {\textbf{1}}_A(r \theta ) r^{-1-\alpha }\textrm{d}r \, \eta (\textrm{d}\theta ),\quad \alpha \in (0,1), \end{aligned}$$
(3.6)

where \(\eta \) is a probability measure on \({\mathbb {S}}^{d-1}\). Then, for any \(f\in \textrm{BV}({\mathbb {R}}^d)\),

$$\begin{aligned} \lim _{\alpha \uparrow 1}(1-\alpha ){\mathcal {F}}_{\nu _\alpha }(f)=\frac{1}{2}\int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |Df \cdot \theta |\, \eta (\textrm{d}\theta ). \end{aligned}$$

In particular, if the measure \(\eta \) is uniform on the unit sphere then, for any set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and such that \(\textrm{Per}(E)<\infty \), it holds

$$\begin{aligned} \lim _{\alpha \uparrow 1}\, (1-\alpha )\textrm{Per}_{\nu _{\alpha }}(E) = \frac{K_{1,d}}{2\kappa _{d-1}}\textrm{Per}(E). \end{aligned}$$

Proof

We aim to apply Theorem 3.3. We define measures \(\lambda _\alpha \) as follows

$$\begin{aligned} \lambda _\alpha (A) = C_{\alpha }^{-1}\int _{{\mathbb {S}}^{d-1}}\int _0^\infty (1\wedge r){\textbf{1}}_{A}(r\theta ) r^{-1-\alpha }\textrm{d}r\, \eta (\theta ),\quad C_\alpha = \int _0^\infty (1\wedge r)r^{-1-\alpha }\textrm{d}r. \end{aligned}$$

We first observe thatFootnote 1

$$\begin{aligned} C_\alpha&= \int _1^\infty r^{-1-\alpha }\textrm{d}r + \int _0^1r^{-\alpha }\textrm{d}r = \frac{1}{\alpha }+\frac{1}{1-\alpha }\sim \frac{1}{1-\alpha },\quad \alpha \uparrow 1. \end{aligned}$$

Furthermore, for any \(R>0\),

$$\begin{aligned} \lambda _\alpha (B_R^c) = C_\alpha ^{-1}\cdot {\left\{ \begin{array}{ll} \int _R^\infty r^{-1-\alpha } \textrm{d}r = \frac{1}{\alpha }R^{-\alpha },&{}\quad R>1;\\ \int _R^1 r^{-\alpha }\textrm{d}r + \int _1^\infty r^{-1-\alpha }\textrm{d}r= \frac{1}{1-\alpha }(1-R^{-\alpha +1})+ \frac{1}{\alpha },&{}\quad R\le 1. \end{array}\right. } \end{aligned}$$

We easily infer that

$$\begin{aligned} \lim _{\alpha \uparrow 1}\lambda _{\alpha }(B_R^c)=0,\quad R>0. \end{aligned}$$

The measures \(\mu _\alpha \) are given by

$$\begin{aligned} \mu _\alpha (E) = \eta (E),\quad E\subset {\mathbb {S}}^{d-1}, \end{aligned}$$

which evidently implies \(\mu _\alpha \xrightarrow [\alpha \uparrow 1]{w}\eta \) and the result follows. \(\square \)

Example 3.6

(Asymptotics of \(\alpha \)-perimeters for \(\alpha \uparrow 1\)) As a direct application of Proposition 3.5 we obtain the well-known convergence of \(\alpha \)-perimeters towards the classical perimeter for sets of finite perimeter, see [2, 6, 8, 16, 39]. Let the measure \(\nu _\alpha \) be rotationally invariant and given by

$$\begin{aligned} \nu _\alpha (\textrm{d}x) = \frac{\alpha \, \textrm{d}x}{\kappa _{d-1}|x|^{d+\alpha }},\quad \alpha \in (0,1), \end{aligned}$$
(3.7)

where \(\kappa _{d-1}=2\pi ^{d/2}/\Gamma (d/2)\) is the surface area of \({\mathbb {S}}^{d-1}\). For such measure it holds

$$\begin{aligned} \textrm{Per}_{\nu _\alpha }(E) = \frac{\alpha }{\kappa _{d-1}}\textrm{Per}_\alpha (E), \end{aligned}$$
(3.8)

where \(\nu _\alpha \) is given by (3.6) with \(\eta \) equal to (\(\alpha \) times) the normalized surface measure on the unit sphere. Then, by Proposition 3.5, we recover (1.6) for any set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and such that \(\textrm{Per}(E)<\infty \).

Example 3.7

(Asymptotics of J-perimeters) Let \(\nu (\textrm{d}x) = J(x)\, \textrm{d}x\), where J is a positive function such that \(C_J=\int _{{\mathbb {R}}^d}|x|J(x)\, \textrm{d}x <\infty \), see [11]. For any \(\varepsilon >0\) let \(J_\varepsilon (x) = \varepsilon ^{-d}J(x/\varepsilon )\) and \(\nu _\varepsilon (\textrm{d}x) = J_\varepsilon (x)\, \textrm{d}x\). Clearly, \(\textrm{Per}_{\nu _\varepsilon }(E) = \textrm{Per}_{J_\varepsilon }(E)\). In this case we can apply Theorem 3.3 with \(R_\varepsilon =\infty \) and we easily verify that the corresponding measures \(\lambda _\varepsilon \) satisfy condition (3.1). Further, for any \(\varepsilon >0\) we have

$$\begin{aligned} \mu _\varepsilon (E) = C_J^{-1}\int _{(0,\infty )E}|x|J(x)\, \textrm{d}x =: \mu _J (E),\quad E\subset {\mathbb {S}}^{d-1}. \end{aligned}$$

Hence, by Corollary 3.4, for any \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and of finite perimeter,

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\varepsilon ^{-1}\textrm{Per}_{\nu _\varepsilon }(E) = \frac{C_J}{2} \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |D {\textbf{1}}_E \cdot \theta |\, \mu _J(\textrm{d}\theta ). \end{aligned}$$

In particular, if the kernel J is radially symmetric then this leads to (1.9) and we observe that we do not need to assume that J is compactly supported.

We can use the scaling procedure of Example 3.7 also under slightly more general assumption. Let \(\nu \) be a measure satisfying (1.4) and let \(\nu _\varepsilon (A)=\nu (A/\varepsilon )\) for any \(\varepsilon >0\). If \(\int |x|\nu (\textrm{d}x)<\infty \) then we can apply Theorem 3.3 with \(R_\varepsilon =\infty \) and we obtain \(C_\varepsilon =\varepsilon \int |x|\nu (\textrm{d}x)\). The corresponding convergence in (3.4) follows.

Example 3.8

Let \(\nu \) be a measure satisfying (1.4) and let \(\nu _\varepsilon (A)=\nu (A/\varepsilon )\) for any \(\varepsilon >0\). We assume that

$$\begin{aligned} \nu (A) = \int _{{\mathbb {S}}^d-1} \int _0^\infty {\textbf{1}}_A(r\theta )\varrho (\textrm{d}r)\eta (\textrm{d}\theta ), \end{aligned}$$

where \(\varrho \) is a positive measure on \((0,\infty )\) and \(\eta \) is a probability measure on \({\mathbb {S}}^{d-1}\). Then

$$\begin{aligned} \lambda _\varepsilon (B) = C_\varepsilon ^{-1} \int _{{\mathbb {S}}^{d-1}} \int _0^\infty (R_\varepsilon \wedge r){\textbf{1}}_{\varepsilon ^{-1}B}(r\theta ) \varrho _\varepsilon (\textrm{d}r) \eta (\textrm{d}\theta ), \end{aligned}$$

where \(C_\varepsilon = \int _0^\infty (R_\varepsilon \wedge r)\varrho _{\varepsilon }(\textrm{d}r)\) and \(\varrho _\varepsilon (\textrm{d}r) = \varrho (\varepsilon \textrm{d}r)\). We observe that in this case the measures \(\mu _\varepsilon \) of Theorem 3.3 are all equal to \(\eta \). In particular, if \(\eta \) is rotationally invariant then (3.4) becomes

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} C_{\varepsilon }^{-1}{\mathcal {F}}_{\nu _{\varepsilon }} (f) = \frac{K_{1,d}}{2\kappa _{d-1}} \int _{{\mathbb {R}}^d}|Du|. \end{aligned}$$

This results in the following approximation of the classical perimeter: for any set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and such that \(\textrm{Per}(E)<\infty \) it holds

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} C_{\varepsilon }^{-1}\textrm{Per}_{\nu _{\varepsilon }} (E) = \frac{K_{1,d}}{2\kappa _{d-1}} \textrm{Per}(E). \end{aligned}$$

3.1.1 Anisotropic fractional perimeters

We next apply Theorem 3.3 in the context of anisotropic fractional perimeters. We start by recalling another (equivalent) definition of the classical perimeter. For a set E such that \({\textbf{1}}_E\in \textrm{BV}({\mathbb {R}}^d)\) one can define its perimeter as

$$\begin{aligned} \textrm{Per}(E)=\int _{\partial ^*E}|{\textsf{n}}_E(x)|{\mathcal {H}}^{d-1}(\textrm{d}x), \end{aligned}$$
(3.9)

where \(\partial ^*E\) is the reduced boundary of E, \({\textsf{n}}_E(x)\) denotes the measure theoretic outer unit normal vector of E at \(x\in \partial ^* E\) and \({\mathcal {H}}^{d-1}\) is the \((d-1)\)-dimensional Hausdorff measure. For a detailed treatment on the fine structure of the classical perimeter we refer to [3, Section 3.5] and [20].

Let \(K\subset {\mathbb {R}}^d\) be a convex compact set of non-empty interior (so-called convex body) and such that it is origin-symmetric. Anisotropic perimeter of a Borel set \(E\subset {\mathbb {R}}^d\) with respect to K is defined as

$$\begin{aligned} \textrm{Per}(E,K) = \int _{\partial ^* E}\Vert {\textsf{n}}_{E}(x)\Vert _{K^*}\, \textrm{d}x. \end{aligned}$$

Let ZK be the so-called moment body of K, see [38, Section 10.8]. It is defined as the unique convex body satisfying

$$\begin{aligned} \Vert y \Vert _{Z^*K} = \frac{d+1}{2}\int _K |y\cdot x|\, \textrm{d}x,\quad y\ \in {\mathbb {R}}^d, \end{aligned}$$
(3.10)

where \(Z^*K\) is the polar body of ZK. Our aim is to show (through the methods of the present paper) that convergence in (1.11) holds actually for all sets of finite perimeter.

Proposition 3.9

For any Borel set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and of finite perimeter it holds

$$\begin{aligned} \lim _{\alpha \uparrow 1}(1-\alpha ) \textrm{Per}_\alpha (E,K) = \textrm{Per}(E,ZK). \end{aligned}$$

Proof

Let \(\nu _{\alpha }(A,K)\) be a measure (with respect to the convex body K) given by

$$\begin{aligned} \nu _{\alpha }(A,K) =\int _{{\mathbb {S}}^{d-1}}\int _0^\infty {\textbf{1}}_A(r\theta )r^{-\alpha -1}\textrm{d}r\, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+\alpha }}. \end{aligned}$$

We easily verify that

$$\begin{aligned} \textrm{Per}_\alpha (E,K) = \textrm{Per}_{\nu _\alpha (\cdot ,K)}(E). \end{aligned}$$
(3.11)

Clearly, \(\nu _\alpha (\cdot ,K)\) is (up to a normalising constant) a special case of (3.6) and thus we are in the scope of Theorem 3.3. We easily find that the measure \(\lambda _\alpha (\cdot ,K)\) appearing in Theorem 3.3 satisfies

$$\begin{aligned} \lambda _\alpha (B_R^c ,K) = \frac{\int _R^\infty (1\wedge r)r^{-\alpha -1}\textrm{d}r}{\int _0^\infty (1\wedge r)r^{-\alpha -1}\textrm{d}r} \end{aligned}$$

and we show similarly as in the proof of Proposition 3.5 that \(\lambda _\alpha (B_R^c,K)\) converges to 0. The corresponding measure \(\mu _\alpha (\cdot ,K)\) is given by

$$\begin{aligned} \mu _\alpha (S,K) =C_\alpha (K)^{-1}\int _{S}\frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+\alpha }},\qquad S\subset {\mathbb {S}}^{d-1}, \end{aligned}$$

where \(C_\alpha (K) = \int _{{\mathbb {S}}^{d-1}}\frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+\alpha }}\). Since \(\lim _{\alpha \uparrow 1}\Vert \theta \Vert _K^{-d-\alpha } = \Vert \theta \Vert _K^{-d-1}\) for every \(\theta \in {\mathbb {S}}^{d-1}\), we infer that

$$\begin{aligned} \mu _\alpha (\cdot ,K)\xrightarrow [\alpha \uparrow 1]{w}\mu (\cdot ,K), \end{aligned}$$

where \(\mu (S,K)= C(K)^{-1}\int _{S}\Vert \theta \Vert ^{-d-1}_K\textrm{d}\theta \) and \(C(K)= \int _{{\mathbb {S}}^{d-1}}\Vert \theta \Vert ^{-d-1}_K\textrm{d}\theta \). We obtain that for any \(f\in \textrm{BV}({\mathbb {R}}^d)\),

$$\begin{aligned} \lim _{\alpha \uparrow 1}C_\alpha ^{-1}C_\alpha (K)^{-1}{\mathcal {F}}_{\nu _{\alpha }(\cdot ,K)}(f)= \frac{1}{2}\int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |Df\cdot \theta |\mu (\textrm{d}\theta ,K), \end{aligned}$$
(3.12)

where \(C_\alpha =\int _0^\infty (1\wedge r)r^{-1-\alpha }\textrm{d}r\). Hence, by taking \(f={\textbf{1}}_E\in \textrm{BV}({\mathbb {R}}^d)\), we arrive at

$$\begin{aligned} \lim _{\alpha \uparrow 1}\, (1-\alpha )\, \textrm{Per}_{\alpha }(E,K) = \frac{1}{2} \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |D{\textbf{1}}_E\cdot \theta |\, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}}. \end{aligned}$$

Further, by employing (3.9) together with Fubini’s theorem, we obtain

$$\begin{aligned} \frac{1}{2}\int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |D{\textbf{1}}_E\cdot \theta |\, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}} = \frac{1}{2} \int _{\partial ^*E} \int _{{\mathbb {S}}^{d-1}} |{\textsf{n}}_E(x)\cdot \theta | \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}}\, {\mathcal {H}}^{d-1}(\textrm{d}x). \end{aligned}$$
(3.13)

Since \(K=\{y\in {\mathbb {R}}^d: \Vert y\Vert _K\le 1\}=\{(r,\theta )\in [0,\infty )\times {\mathbb {S}}^{d-1}: r\le 1/\Vert \theta \Vert _K\}\), the polar coordinate formula yields

$$\begin{aligned} \int _K |{\textsf{n}}_E(x)\cdot y|\textrm{d}y =\int _{{\mathbb {S}}^{d-1}}\int _0^{\frac{1}{\Vert \theta \Vert _K}}|{\textsf{n}}_E(x)\cdot \theta |r^{d}\,\textrm{d}r\, \textrm{d}\theta =\frac{1}{d+1}\int _{{\mathbb {S}}^{d-1}} |{\textsf{n}}_E(x)\cdot \theta | \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}}. \end{aligned}$$
(3.14)

Finally, (3.13) combined with (3.14) and (3.10) imply

$$\begin{aligned} \frac{1}{2}\int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |D{\textbf{1}}_E\cdot \theta |\, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}} = \int _{\partial ^*E}\Vert {\textsf{n}}_E(x)\Vert _{Z^*K}{\mathcal {H}}^{d-1}(\textrm{d}x) = \textrm{Per}(E,ZK) \end{aligned}$$

and the result follows. \(\square \)

3.1.2 Anisotropic Sobolev norms

We finally show that similar methods apply to anisotropic Sobolev norms. We aim to prove a stronger version of [27, Theorem 8] as we abandon the assumption of compact support. We recall that for any \(f\in \textrm{BV}({\mathbb {R}}^d)\) its anisotropic Sobolev semi-norm (with respect to a given origin-symmetric convex body K) is defined as

$$\begin{aligned} \Vert f\Vert _{\textrm{BV}, ZK} = \int _{{\mathbb {R}}^d}\left\| \frac{Df}{|Df|}\right\| _{Z^*K}\! \textrm{d}|Df|. \end{aligned}$$

Here the vector Df/|Df| is the Radon-Nikodým derivative of the \({\mathbb {R}}^d\)-valued vector measure Df with respect to the positive measure |Df|.

Proposition 3.10

For any \(f\in \textrm{BV}({\mathbb {R}}^d)\) it holds

$$\begin{aligned} \lim _{\alpha \uparrow 1}\, (1-\alpha ) \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d}\frac{|f(x+y)-f(x)|}{\Vert y\Vert _K^{d+\alpha }}\, \textrm{d}y\, \textrm{d}x = 2 \Vert f\Vert _{\textrm{BV}, ZK}. \end{aligned}$$
(3.15)

Proof

We observe that by (3.12)

$$\begin{aligned} \lim _{\alpha \uparrow 1}\, (1-\alpha ) \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d}\frac{|f(x+y)-f(x)|}{\Vert y\Vert _K^{d+\alpha }}\, \textrm{d}y\, \textrm{d}x = \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |Df \cdot \theta |\, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}}. \end{aligned}$$

We are thus left to identify the limit. We first note that, in view of [3, Proposition 1.23],

$$\begin{aligned} \textrm{d}|Df\cdot \theta | = \left| \frac{Df}{|Df|}\cdot \theta \right| \textrm{d}|Df|,\quad \theta \in {\mathbb {S}}^{d-1}. \end{aligned}$$
(3.16)

This implies

$$\begin{aligned} \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} |Df \cdot \theta |\, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}}&= \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d} \left| \frac{Df}{|Df|} \cdot \theta \right| \, \textrm{d}|Df|\, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}}\\&= \int _{{\mathbb {R}}^d} \int _{{\mathbb {S}}^{d-1}} \left| \frac{Df}{|Df|} \cdot \theta \right| \, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}}\, \textrm{d}|Df|. \end{aligned}$$

We then proceed similarly as in (3.14) to obtain

$$\begin{aligned} \int _{{\mathbb {S}}^{d-1}} \left| \frac{Df}{|Df|} \cdot \theta \right| \, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+1}} = (d+1)\int _K \left| \frac{Df}{|Df|} \cdot y\right| \, \textrm{d}y = 2 \left\| \frac{Df}{|Df|}\right\| _{Z^*K} \end{aligned}$$

and the result follows. \(\square \)

3.2 Convergence towards the Lebesgue measure

In this paragraph we focus on convergence of non-local perimeters towards the Lebesgue measure in the case when the mass of the normalized measures \((1\wedge |x|)\nu _\varepsilon \) concentrates at infinity. In the rest of the paper we make use of the covariance function of a set and thus we briefly recall its definition and basic properties.

For any \(E \subset {\mathbb {R}}^d\) of finite Lebesgue measure its covariance function \(g_E\) is given by

$$\begin{aligned} g_E (y)=|E \cap (E + y)|=\int _{{\mathbb {R}}^d}\,{\textbf{1}}_{E}(x)\,{\textbf{1}}_E (x-y) \textrm{d}x,\quad y\in {\mathbb {R}}^d. \end{aligned}$$
(3.17)

It is a symmetric and uniformly continuous function tending to zero at infinity. If E is of finite perimeter then \(g_E\) is Lipschitz continuous. For more details we refer to [24]. According to [15, Lemma 1], if E is of finite Lebesgue measure and of finite perimeter then \(\textrm{Per}_\nu (E)<\infty \) for any measure \(\nu \) satisfying (1.4).

We first aim to prove the following result.

Theorem 3.11

Let \(\{\nu _\varepsilon \}_{\varepsilon >0}\) be a family of measures satisfying (1.4) and let

$$\begin{aligned} \lambda _\varepsilon (\textrm{d}x) = C_\varepsilon ^{-1}\left( 1 \wedge R_\varepsilon |x|\right) \nu _\varepsilon (\textrm{d}x), \end{aligned}$$

where \(C_\varepsilon = \int _{{\mathbb {R}}^d}\left( 1 \wedge R_\varepsilon |x|\right) \nu _\varepsilon (\textrm{d}x)\) and \(R_\varepsilon \in [1,\infty ]\). We assume that for any \(R>0\)

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \lambda _\varepsilon (B_R^c) = 1. \end{aligned}$$
(3.18)

Then for any set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and of finite perimeter it holds

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} C_{\varepsilon }^{-1}\textrm{Per}_{\nu _{\varepsilon }}(E) = |E|. \end{aligned}$$
(3.19)

Furthermore, if \(\{\nu _\varepsilon \}_{\varepsilon >0}\) is a family of finite measures then for any set E of finite Lebesgue measure it holds

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} (\nu _\varepsilon ({\mathbb {R}}^d))^{-1} \textrm{Per}_{\nu _{\varepsilon }}(E) = |E|. \end{aligned}$$

Proof

We have

$$\begin{aligned} \textrm{Per}_{\nu _{\varepsilon }}(E)&= \int _{{\mathbb {R}}^d}|E\cap (E^c-y)|\nu _{\varepsilon }(\textrm{d}y) = \int _{{\mathbb {R}}^d} (g_E(0)-g_E(y)) \nu _{\varepsilon }(\textrm{d}y) \\&= |E| \nu _{\varepsilon }(B_{R}^c) - \int _{B_R^c}g_E(y) \nu _{\varepsilon }(\textrm{d}y) + \int _{B_R} (g_E(0)-g_E(y)) \nu _{\varepsilon }(\textrm{d}y) . \end{aligned}$$

Since \(R_\varepsilon \ge 1\), for any \(R>1\) we have \(C_{\varepsilon }^{-1}\nu _{\varepsilon }(B_{R}^c) = \lambda _{\varepsilon }(B_R^c)\) which tends to one as \(\varepsilon \) goes to zero. If \(|E|<\infty \) then \(g_E\in C_0({\mathbb {R}}^d)\).Footnote 2 Thus we can choose R big enough so that \(g_E(y)\) is smaller than any given \(\epsilon >0\) for \(|y|>R\). We are left with the last integral in the formula above. We have

$$\begin{aligned} C_{\varepsilon }^{-1} \int _{B_R} (g_E(0)-g_E(y)) \nu _{\varepsilon }(\textrm{d}y)&\le C C_{\varepsilon }^{-1}\int _{B_R}(1\wedge |y|)\nu _{\varepsilon }(\textrm{d}y) \le C\lambda _{\varepsilon }(B_R)\xrightarrow [\varepsilon \downarrow 0]{} 0, \end{aligned}$$
(3.20)

where we used the fact that \(g_E\) is Lipschitz continuous if \(\textrm{Per}(E)<\infty \). If \(\nu _\varepsilon \) is a finite measure then we simply choose \(R_\varepsilon =\infty \) and repeat the same reasoning as above. In (3.20) we use the fact that \(g_E\) is bounded. \(\square \)

We present an analogous result for stable Lévy measures, but this time the spherical part of the stable Lévy measure is fixed, whereas the assumption on the set E is weaker as we do not require its perimeter to be finite, c.f. Example 3.15.

Proposition 3.12

Let \(\nu _\alpha \) be the \(\alpha \)-stable Lévy measure given by

$$\begin{aligned} \nu _\alpha (A) =\alpha \int _{{\mathbb {S}}^{d-1}}\int _{0}^\infty {\textbf{1}}_A(\varrho \theta ) \varrho ^{-1-\alpha }\textrm{d}\varrho \, \eta (\textrm{d}\theta ),\quad \alpha \in (0,1), \end{aligned}$$
(3.21)

where \(\eta \) is a probability measure on \({\mathbb {S}}^{d-1}\). For any set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and such that \(\textrm{Per}_{\nu _{\alpha _0}}(E)<\infty \), for some \(\alpha _0\in (0,1) \), it holds

$$\begin{aligned} \lim _{\alpha \downarrow 0}\textrm{Per}_{\nu _{\alpha }}(E) = |E|. \end{aligned}$$
(3.22)

Proof

Representation (3.21) yields \(\nu _\alpha (B_R^c)=R^{-\alpha }\), for any \(R>0\). As in the proof of Theorem 3.11 we have

$$\begin{aligned} \textrm{Per}_{\nu _{\alpha }}(E)&= |E|R^{-\alpha } - \int _{B_R^c}g_E(y) \nu _{\alpha }(\textrm{d}y) + \int _{B_R} (g_E(0)-g_E(y)) \nu _{\alpha }(\textrm{d}y) \end{aligned}$$
(3.23)

and the first integral is negligible as \(g_E\in C_0({\mathbb {R}}^d)\). To estimate the second integral we observe that

$$\begin{aligned} \alpha ^{-1} \int _{B_R} (g_E(0)-g_E(y)) \nu _{\alpha }(\textrm{d}y)&= \int _{{\mathbb {S}}^{d-1}}\int _0^R (g_E(0)-g_E(\varrho \theta ))\varrho ^{-1-\alpha } \textrm{d}\varrho \, \eta (\textrm{d}\theta ). \end{aligned}$$

Since for \( 0<\varrho \le R\) and \(0\le \alpha <\alpha _0\),

$$\begin{aligned} \varrho ^{-1-\alpha }\le R^{\alpha _0}\varrho ^{-1-\alpha _0} , \end{aligned}$$
(3.24)

we can apply the dominated convergence theorem in the last equation and this implies

$$\begin{aligned} \lim _{\alpha \rightarrow 0} \alpha ^{-1} \int _{B_R} (g_E(0)-g_E(y)) \nu _{\alpha }(\textrm{d}y) = \int _{{\mathbb {S}}^{d-1}}\int _0^R (g_E(0)-g_E(\varrho \theta ))\varrho ^{-1} \textrm{d}\varrho \, \eta (\textrm{d}\theta ). \end{aligned}$$

The last expression is finite in view of (3.24) used for \(\alpha =0\) and combined with the assumption that the perimeter \(\textrm{Per}_{\nu _{\alpha _0}}(E)\) is finite. \(\square \)

Remark 3.13

We could prove convergence in (3.22) also in the case when the spherical part \(\eta \) of the measure \(\nu _\alpha \) given in (3.21) depends on the parameter \(\alpha \) and converges weakly towards some measure on the unit sphere, see e.g. (3.30).

Let the measure \(\nu _\alpha \) be rotationally invariant and given by (3.7). The following result provides an enhancement of [17, Corollary 2.6] (see also [32]) for the classical \(\alpha \)-perimeter in the sense that we abandon the assumption of boundedness.

Corollary 3.14

(Convergence of \(\alpha \)-perimeters as \(\alpha \downarrow 0\)) Let \(E\subset {\mathbb {R}}^d\) be of finite Lebesgue measure and such that \(\textrm{Per}_{\nu _{\alpha _0}}(E)<\infty \), for some \(\alpha _0\in (0,1) \). Then

$$\begin{aligned} \lim _{\alpha \downarrow 0}\alpha \textrm{Per}_{\alpha }(E) = \kappa _{d-1}|E|. \end{aligned}$$

Proof

This follows directly from Proposition 3.12 and Eq. (3.8). \(\square \)

In the following example we show that the assumption of finite perimeter in Theorem 3.11 in general cannot be weakened.

Example 3.15

We first present an example of a set \(E\subset {\mathbb {R}}\) of finite Lebesgue measure and such that its classical perimeter is infinite, whereas its \(\alpha \)-perimeter is finite for each \(\alpha \in (0,1)\). We consider the one-dimensional case for simplicity’s sake. Let

$$\begin{aligned} E=\bigcup _{n=1}^\infty \, [n,n+2^{-n}]. \end{aligned}$$
(3.25)

Clearly \(|E|=1\) and \(\textrm{Per}(E)=\infty \). In order to compute \(\textrm{Per}_{\nu _\alpha }(E)\) we apply formula (3.23) which leads to

$$\begin{aligned} \textrm{Per}_{\nu _{\alpha }}(E)&= |E|- \int _{|y|>1}g_E(y) \nu _{\alpha }(\textrm{d}y) + \int _{|y|\le 1} (g_E(0)-g_E(y)) \nu _{\alpha }(\textrm{d}y). \end{aligned}$$
(3.26)

The first integral is clearly finite so it suffices to handle the last integral. We easily show that for \(n\in {\mathbb {N}}\)

$$\begin{aligned} (g_E(0)-g_E(y))= (n-1)y +2^{-n+1},\quad \textrm{for}\ y\in (2^{-n},2^{-n+1}). \end{aligned}$$

This implies

$$\begin{aligned} \int _{0}^1 (g_E(0)-g_E(y)) \nu _{\alpha }(\textrm{d}y)&= \alpha \sum _{n=1}^\infty \left( (n-1)\int _{2^{-n}}^{2^{-n+1}}\!\!\! y^{-\alpha }\textrm{d}y + 2^{-n+1}\int _{2^{-n}}^{2^{-n+1}} \!\!\!\! y^{-\alpha -1}\textrm{d}y\right) \\&= \frac{\alpha }{1-\alpha }\frac{1}{1-2^{\alpha -1}}+ \frac{2^\alpha (1-2^{-\alpha })}{1-2^{\alpha -1}}\\&\sim \frac{1}{(1-\alpha )^2} + \frac{1}{1-\alpha },\quad \alpha \uparrow 1. \end{aligned}$$

In particular, we infer that \(\textrm{Per}_{\nu _\alpha }(E)<\infty \) and \(\lim _{\alpha \uparrow 1}\textrm{Per}_{\nu _\alpha }(E)=\infty \).

Further, we consider a family of stable Lévy measures given by

$$\begin{aligned} \nu _n(\textrm{d}x) = \nu _{\frac{1}{n}}(\textrm{d}x)+ c_n\nu _{1-\frac{1}{n}}(\textrm{d}x),\quad n\in {\mathbb {N}}, \end{aligned}$$

where \(\nu _{\frac{1}{n}}\) (resp. \(\nu _{1-\frac{1}{n}}\)) is the \(\frac{1}{n}\)-stable (resp. \((1-\frac{1}{n})\)-stable) Lévy measure defined at (3.21) and \(c_n\) is a sequence of positive numbers to be specified shortly. Using (3.26) and the above calculation we obtain for set E given in (3.25) that

$$\begin{aligned} \textrm{Per}_{\nu _n}(E)\sim \frac{1}{(1-\frac{1}{n})^2}+c_nn^{2}\sim 1+c_nn^2,\quad n\rightarrow \infty . \end{aligned}$$

We first observe that for \(c_n\sim n^{-1}\), one has \(\lim _{n\rightarrow \infty }\textrm{Per}_{\nu _n}(E)=\infty \). We next show that it is possible to choose the sequence \(c_n\) in such a way that assumption (3.18) of Theorem 3.11 is satisfied but convergence in (3.19) fails. Indeed, if \(c_n=o(1/n)\), then the corresponding sequence of measures

$$\begin{aligned} \lambda _n (\textrm{d}x) = C_n^{-1}(1\wedge |x|)\nu _n(\textrm{d}x), \end{aligned}$$

with \(C_n=\int (1\wedge |x|)\nu _n(\textrm{d}x)\) satisfies (3.18). To see this we compute

$$\begin{aligned} \int _{|x|>R}\nu _n(\textrm{d}x) = {\left\{ \begin{array}{ll} R^{-1/n}+c_nR^{1/n-1},&{}\quad R>1;\\ \frac{\frac{1}{n}}{1-\frac{1}{n}}(1-R^{-1/n+1})+ \frac{1-\frac{1}{n}}{\frac{1}{n}}(1-R^{1/n})+c_n+1,&{}\quad R<1 \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} C_n = \frac{\frac{1}{n}}{1-\frac{1}{n}}+c_n\frac{1-\frac{1}{n}}{\frac{1}{n}}+c_n+1\rightarrow 1,\quad n\rightarrow \infty . \end{aligned}$$

It follows that under the condition \(c_n=o(1/n)\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \lambda _n(B_R^c)=1,\quad R>0. \end{aligned}$$

Thus, if the sequence \(c_n\) is such that \(c_n=o(1/n)\) and \(c_n\sim c\, n^{-2}\), for a constant \(c>0\), then assumption (3.18) of Theorem 3.11 is satisfied, but at the same time \(\lim _{n\rightarrow \infty }\textrm{Per}_{\nu _n}(E)=1+c>|E|\).

As a further application of Theorem 3.11 we obtain asymptotics for the perimeter given through rescaled measures under the assumption that the tail of the original measure is slowly varying at zero.

Proposition 3.16

Let \(\nu \) be a given measure and let \(\nu _\varepsilon (A) = \nu (\varepsilon A)\) for any \(\varepsilon >0\). Assume that \(\nu _\varepsilon \) satisfies (1.4) for all \(\varepsilon >0\). Suppose that the function \(\ell (s) = \int _{|x|>s}\nu (\textrm{d}x)\) is slowly varying at zero. Then, for any set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and of finite perimeter it holds

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} C_{\varepsilon }^{-1}\textrm{Per}_{\nu _{\varepsilon }}(E) = |E|, \end{aligned}$$

where \(C_\varepsilon = \int _{{\mathbb {R}}^d}(1\wedge |x|)\nu _{\varepsilon }(\textrm{d}x)\).

Proof

We apply Theorem 3.11 with \(R_\varepsilon =1\). It is enough to show condition (3.18). For any \(R\ge 1\) we have

$$\begin{aligned} \lambda _\varepsilon (B_R)&= \frac{\int _{|x|<R}(1\wedge |x|)\, \nu _\varepsilon (\textrm{d}x)}{\int _{{\mathbb {R}}^d}(1\wedge |x|)\, \nu _\varepsilon (\textrm{d}x)} = \frac{\int _{|x|<1} |x|\nu _\varepsilon (\textrm{d}x)+ \int _{1<|x|<R}\nu _\varepsilon (\textrm{d}x)}{\int _{|x|<1} |x|\nu _\varepsilon (\textrm{d}x)+ \int _{|x|>1} \nu _\varepsilon (\textrm{d}x)}. \end{aligned}$$

We observe that

$$\begin{aligned} {\begin{matrix} \int _{|x|<1} |x|\nu _\varepsilon (\textrm{d}x) &{}= \int _{|x|<1} \int _0^{|x|}\textrm{d}u\, \nu _\varepsilon (\textrm{d}x) = \int _0^1 \int _{u<|x|<1}\nu _\varepsilon (\textrm{d}x)\, \textrm{d}u\\ &{}= \int _0^1 \int _{\varepsilon u<|x|<\varepsilon }\nu (\textrm{d}x)\, \textrm{d}u = \int _0^1 (\ell (\varepsilon u)-\ell (\varepsilon ))\, \textrm{d}u . \end{matrix}} \end{aligned}$$
(3.27)

This implies

$$\begin{aligned} \lambda _\varepsilon (B_R)&= \frac{\int _0^1 \ell (\varepsilon u)\, \textrm{d}u -\ell (\varepsilon R)}{\int _0^1 \ell (\varepsilon u)\, \textrm{d}u}. \end{aligned}$$
(3.28)

We set \(L(w) = \ell (1/w)\), for \(w>0\). By a change of variable we obtain

$$\begin{aligned} \int _0^1 \ell (\varepsilon u)\, \textrm{d}u = \frac{1}{\varepsilon }\int _{1/\varepsilon }^{\infty }w^{-2}L(w)\, \textrm{d}w. \end{aligned}$$

According to [5, Proposition 1.5.10] we have

$$\begin{aligned} \frac{1}{\varepsilon }\int _{1/\varepsilon }^{\infty }w^{-2}L(w)\, \textrm{d}w \sim L(1/\varepsilon ),\quad \varepsilon \downarrow 0, \end{aligned}$$

which yields \(\int _0^1 \ell (\varepsilon u)\, \textrm{d}u \sim \ell (\varepsilon )\) when \(\varepsilon \downarrow 0\) and thus the expression in (3.28) tends to zero as \(\ell \) is slowly varying. \(\square \)

Example 3.17

Let \(\nu \) be a measure such that \(\nu (\textrm{d}x){\textbf{1}}_{B_1(0)}= |x|^{-d}\textrm{d}x\) and \(\nu (B_1(0)^c)<\infty \), where \(B_1(0)\) is the open unit ball in \({\mathbb {R}}^d\). In this case

$$\begin{aligned} \ell (s)&= \int _{|x|>s}\nu (\textrm{d}x) = \int _{1>|x|>s}\nu (\textrm{d}x) + \int _{|x|\ge 1}\nu (\textrm{d}x)\\&= \kappa _{d-1}\log (1/s)+\int _{|x|\ge 1}\nu (\textrm{d}x)\sim \kappa _{d-1}\log (1/s),\quad s\downarrow 0. \end{aligned}$$

Let \(\nu _{\varepsilon }(A)= \nu (\varepsilon A)\) for any \(\varepsilon >0\). Using the same argument as in (3.27) we easily find that

$$\begin{aligned} C_\varepsilon = \int _{{\mathbb {R}}^d}(1\wedge |x|)\nu _{\varepsilon }(\textrm{d}x) \sim \kappa _{d-1}\log (1/\varepsilon ),\quad \varepsilon \downarrow 0. \end{aligned}$$

This implies that for such measure \(\nu \) and for any set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and of finite perimeter it holds

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\frac{\textrm{Per}_{\nu _\varepsilon }(E)}{\log (1/\varepsilon )}= \kappa _{d-1}|E|. \end{aligned}$$

We close this paragraph with two interesting results concerning J-perimeters.

Corollary 3.18

Let \(J:{\mathbb {R}}^d \rightarrow (0,\infty )\) be a given kernel and let \(J_\varepsilon (x)=\varepsilon ^dJ(\varepsilon x)\), for \(\varepsilon >0\). Assume that \((1\wedge |x|)J_\varepsilon (x)\in L^1({\mathbb {R}}^d)\) for all \(\varepsilon >0\) and that \(\ell (s) = \int _{|x|>s}J(x)\textrm{d}x\) is slowly varying at zero. Then, for any set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and of finite perimeter, it holds

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\ell (\varepsilon )^{-1}\textrm{Per}_{J\varepsilon }(E) = |E|. \end{aligned}$$

Proof

This follows from Proposition 3.16 if we choose \(\nu _\varepsilon (\textrm{d}x) = J_\varepsilon (x)\textrm{d}x\) and notice that in this case \(C_\varepsilon = \int (1\wedge |x|)J_\varepsilon (x)\, \textrm{d}x \sim \ell (\varepsilon )\), as \(\varepsilon \downarrow 0\). \(\square \)

The following easy observation is a consequence of Theorem 3.11 if we choose \(\nu _{\varepsilon }(\textrm{d}x) = J_{\varepsilon }(x)\textrm{d}x\) and observe that \(\nu _{\varepsilon }({\mathbb {R}}^d)= \Vert J\Vert _{L^1}\). We remark, however, that it can be proved directly if we use the definition of the J-perimeter together with the dominated convergence theorem.

Corollary 3.19

Let \(J:{\mathbb {R}}^d \rightarrow (0,\infty )\) be a kernel such that \(J\in L^1({\mathbb {R}}^d)\) and let \(J_\varepsilon (x) = \varepsilon ^d J(\varepsilon x)\), for \(\varepsilon >0\). For any set \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure it holds

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\textrm{Per}_{J_\varepsilon }(E) = \Vert J\Vert _{L^1}|E|. \end{aligned}$$

3.2.1 Convergence of anisotropic fractional perimeters

Finally, we show how to deduce the convergence of anisotropic fractional \(\alpha \)-perimeters as \(\alpha \downarrow 0\). We slightly improve on [27, Theorem 6], as we do not require the set E to be bounded.

Proposition 3.20

For any \(E\subset {\mathbb {R}}^d\) of finite Lebesgue measure and of finite perimeter it holds

$$\begin{aligned} \lim _{\alpha \downarrow 0}\alpha \textrm{Per}_{\alpha }(E,K) = d|K||E|. \end{aligned}$$
(3.29)

Proof

We consider the following measure

$$\begin{aligned} \nu _\alpha (A, K) =\alpha \int _{{\mathbb {S}}^{d-1}}\int _{0}^\infty {\textbf{1}}_A(\varrho \theta ) \varrho ^{-1-\alpha }\textrm{d}\varrho \, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+\alpha }},\quad \alpha \in (0,1). \end{aligned}$$
(3.30)

We observe that

$$\begin{aligned} \textrm{Per}_{\nu _{\alpha }(\cdot ,K)}(E) = \alpha \textrm{Per}_\alpha (E,K). \end{aligned}$$
(3.31)

Proceeding as in (3.23), we find that

$$\begin{aligned} \textrm{Per}_{\nu _\alpha (\cdot ,K)}(E) = C_\alpha (K)R^{-\alpha }|E| - \int _{B_R^c}g_E(y) \nu _{\alpha }(\textrm{d}y, K) + \int _{B_R} (g_E(0)-g_E(y)) \nu _{\alpha }(\textrm{d}y, K), \end{aligned}$$

where \(C_\alpha (K) = \int _{{\mathbb {S}}^{d-1}}\frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d+\alpha }}\). The integral over \(B^c_R\) can be made arbitrarily small by choosing R large enough. Since \(\textrm{Per}(E)<\infty \), we know that \(g_E(x)\) is a Lipschitz function. This, together with the fact that \(\Vert \theta \Vert ^{-d-\alpha }_K\) is bounded for \(\theta \in {\mathbb {S}}^{d-1}\), allows us to apply the dominated convergence theorem and we obtain

$$\begin{aligned} \lim _{\alpha \downarrow 0}\alpha ^{-1}\int _{B_R} (g_E(0)-g_E(y)) \nu _{\alpha }(\textrm{d}y, K) = \int _{{\mathbb {S}}^{d-1}}\int _0^R (g_E(0)-g_E(r\theta ))r^{-1}\, \textrm{d}r\, \frac{\textrm{d}\theta }{\Vert \theta \Vert _K^{d}}. \end{aligned}$$

Further, we have

$$\begin{aligned} \lim _{\alpha \downarrow 0}C_\alpha (K) = \int _{{\mathbb {S}}^{d-1}}\frac{\textrm{d}\theta }{\Vert \theta \Vert _K^d}= d|K| \end{aligned}$$

and we infer that

$$\begin{aligned} \lim _{\alpha \downarrow 0}\textrm{Per}_{\nu _{\alpha }(\cdot ,K)}(E) = d|K||E|, \end{aligned}$$

which in view of (3.31) implies (3.29). \(\square \)
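The identity \(\int _{{\mathbb {S}}^{d-1}}\Vert \theta \Vert _K^{-d}\, \textrm{d}\theta = d|K|\) used above can also be verified numerically; the sketch below (our illustration, not part of the proof) takes \(d=2\) and \(K\) the unit ball of the supremum norm, for which \(|K|=4\) and hence \(d|K|=8\).

```python
import numpy as np
from scipy.integrate import quad

# K = unit ball of the sup-norm in R^2, so |K| = 4 and d*|K| = 8.
# ||theta||_K is the Minkowski gauge of K, i.e. the sup-norm of theta = (cos t, sin t).
norm_K = lambda t: max(abs(np.cos(t)), abs(np.sin(t)))

val, _ = quad(lambda t: norm_K(t) ** (-2), 0.0, 2.0 * np.pi, limit=200)
print(f"int over S^1 of ||theta||_K^(-d) dtheta = {val:.6f}   (expected d*|K| = 8)")
```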

4 Appendix: Proof of Theorem 3.1

We aim to prove Theorem 3.1, which can be seen as an extended version of [36, Theorem 2]. For the proof we follow closely the approach of [36]. In the remaining part of the text we make use of mollifiers. Let \(j\in L^1({\mathbb {R}}^d)\) be non-negative and such that \(\int _{{\mathbb {R}}^d}j(x)\textrm{d}x =1\). For any \(\delta >0\) let \(j_\delta (x) =\delta ^{-d}j(x/\delta )\). Clearly, \(\int _{{\mathbb {R}}^d}j_\delta (x)\textrm{d}x=1\). For any \(f\in L^1({\mathbb {R}}^d)\), we set

$$\begin{aligned} f_\delta (x) = j_{\delta }*f(x)= \int _{{\mathbb {R}}^d}j_\delta (x-y)f(y)\, \textrm{d}y,\quad x\in {\mathbb {R}}^d. \end{aligned}$$
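For readers who prefer to see the mollification numerically, the following sketch (an illustration of ours, in dimension one with a Gaussian \(j\)) confirms the two facts used below: \(\int j_\delta =1\) is preserved by the scaling, and \(f_\delta = j_\delta *f\) converges to \(f\) in \(L^1\) as \(\delta \downarrow 0\).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Discretised illustration on R (d = 1): Gaussian mollifier j, f = indicator of [0,1].
x = np.linspace(-3.0, 4.0, 7001)
dx = x[1] - x[0]
f = ((x >= 0.0) & (x <= 1.0)).astype(float)

for delta in [0.5, 0.1, 0.02]:
    # gaussian_filter1d convolves with a normalised (mass one) discrete Gaussian
    # of standard deviation delta, i.e. with j_delta(x) = delta^{-1} j(x/delta).
    f_delta = gaussian_filter1d(f, sigma=delta / dx)
    l1_err = np.sum(np.abs(f_delta - f)) * dx
    print(f"delta={delta:<5} ||f_delta - f||_L1 ≈ {l1_err:.4f}")
# The L^1 error decreases as delta -> 0, consistent with j_delta * f -> f in L^1.
```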

We start with a series of lemmas.

Lemma 4.1

For any probability measure \(\lambda \) on \({\mathbb {R}}^d\) and any \(f\in \textrm{BV} ({\mathbb {R}}^d)\) it holds

$$\begin{aligned} \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \frac{|f(x+y) - f(x)|}{|y|}\lambda (\textrm{d}y)\, \textrm{d}x \le \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \left| Df \cdot \frac{y}{|y|}\right| \, \lambda (\textrm{d}y) . \end{aligned}$$

Proof

For any \(R>0\) and \(\delta >0\), by a standard argument based on the Fundamental Theorem of Calculus, we have

$$\begin{aligned}&\int _{B_R}\int _{B_R} \frac{|f_\delta (x+y)-f_\delta (x)|}{|y|}\lambda (\textrm{d}y)\, \textrm{d}x\\&\quad \le \int _{B_R}\int _{B_R} \int _0^1 \left| \nabla f_\delta (x+\rho y)\cdot \frac{y}{|y|}\right| \textrm{d}\rho \, \lambda (\textrm{d}y)\, \textrm{d}x \\&\quad \le \int _{{\mathbb {R}}^d}\int _0^1\int _{{\mathbb {R}}^d} \left| \nabla f_\delta (x+\rho y)\cdot \frac{y}{|y|}\right| \textrm{d}x \, \textrm{d}\rho \, \lambda (\textrm{d}y)\\&\quad = \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \left| \nabla f_\delta (x)\cdot \frac{y}{|y|}\right| \textrm{d}x \, \lambda (\textrm{d}y)\\&\quad = \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \left| \int _{{\mathbb {R}}^d} j_\delta (x-z)\, Df(\textrm{d}z)\cdot \frac{y}{|y|}\right| \textrm{d}x \, \lambda (\textrm{d}y)\\&\quad = \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \left| \int _{{\mathbb {R}}^d} j_\delta (x-z) \frac{Df}{|Df|}(z)\cdot \frac{y}{|y|}\, |Df|(\textrm{d}z)\right| \textrm{d}x \, \lambda (\textrm{d}y)\\&\quad \le \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} j_\delta (x-z) \left| \frac{Df}{|Df|}(z)\cdot \frac{y}{|y|}\right| \textrm{d}x\, |Df|(\textrm{d}z) \, \lambda (\textrm{d}y)\\&\quad = \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \left| \frac{Df}{|Df|}(z)\cdot \frac{y}{|y|}\right| |Df|(\textrm{d}z)\, \lambda (\textrm{d}y)\\&\quad = \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \left| Df\cdot \frac{y}{|y|}\right| \lambda (\textrm{d}y), \end{aligned}$$

where we used (3.16) together with [3, Proposition 1.23]. We finally take \(\delta \downarrow 0\) and then \(R\rightarrow \infty \) and the result follows. \(\square \)

Lemma 4.2

For any probability measure \(\lambda \) on \({\mathbb {R}}^d\) and any \(f\in L^1({\mathbb {R}}^d)\) it holds

$$\begin{aligned} \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \frac{|f_\delta (x+y) - f_\delta (x)|}{|y|}\lambda (\textrm{d}y)\, \textrm{d}x \le \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \frac{|f(x+y) - f(x)|}{|y|}\lambda (\textrm{d}y)\, \textrm{d}x . \end{aligned}$$

Proof

We clearly have

$$\begin{aligned}&\int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \frac{|f_\delta (x+y) - f_\delta (x)|}{|y|}\lambda (\textrm{d}y)\, \textrm{d}x \\&\quad \le \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d}\frac{|f (x+y-z) - f (x-z)|}{|y|}\, j_\delta (z)\, \textrm{d}z\, \lambda (\textrm{d}y)\, \textrm{d}x \\&\quad = \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d}\frac{|f (w+y) - f (w)|}{|y|}\, j_\delta (x-w)\,\textrm{d}x\, \textrm{d}w\, \lambda (\textrm{d}y)\\&\quad = \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \frac{|f(w+y) - f(w)|}{|y|}\, \textrm{d}w\, \lambda (\textrm{d}y) \end{aligned}$$

and the proof is finished. \(\square \)

Recall that \(\{\lambda _{\varepsilon }\}_{\varepsilon >0}\) is a family of probability measures on \({\mathbb {R}}^d\) such that \(\lambda _\varepsilon (\{0\})=0\) and, for any \(R>0\),

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\lambda _\varepsilon (B_R^c)=0. \end{aligned}$$
(4.1)

We consider \(\mu _{\varepsilon }(E)= \lambda _{\varepsilon }((0,\infty ) E)\), where \((0,\infty ) E = \{re:\, e\in E \ \text {and}\ r>0\}\) is the cone spanned by \(E\subset {\mathbb {S}}^{d-1}\). Since the family \(\{\mu _\varepsilon \}_{\varepsilon >0}\) is bounded in the space of Radon measures on \({\mathbb {S}}^{d-1}\), there exists a sequence \(\varepsilon _j\) converging to zero and a probability measure \(\mu \) on the unit sphere such that \(\mu _{\varepsilon _j}\xrightarrow {w}\mu \). We observe that in view of the definition, for any continuous function \(F:{\mathbb {S}}^{d-1}\rightarrow {\mathbb {R}}\) we have

$$\begin{aligned} \int _{{\mathbb {R}}^d}F\Big ( \frac{y}{|y|}\Big )\lambda _\varepsilon (\textrm{d}y) = \int _{{\mathbb {S}}^{d-1}}F(\theta )\mu _\varepsilon (\textrm{d}\theta ),\quad \varepsilon >0. \end{aligned}$$
(4.2)

It evidently follows that

$$\begin{aligned} \lim _{j\rightarrow \infty } \int _{{\mathbb {R}}^d}F\Big ( \frac{y}{|y|}\Big )\lambda _{\varepsilon _j} (\textrm{d}y) = \int _{{\mathbb {S}}^{d-1}}F(\theta )\mu (\textrm{d}\theta ). \end{aligned}$$
(4.3)

Lemma 4.3

Let \(B_R\subset {\mathbb {R}}^d\) be the Euclidean ball of radius \(R>0\) centred at the origin. Then, for any \(f\in C^2(B_R)\), it holds

$$\begin{aligned} \lim _{j\rightarrow \infty } \int _{B_R}\int _{B_R} \frac{|f(x+y) - f(x)|}{|y|}\lambda _{\varepsilon _j} (\textrm{d}y)\, \textrm{d}x = \int _{{\mathbb {S}}^{d-1}}\int _{B_R} |\nabla f(x) \cdot \theta |\, \textrm{d}x\, \mu (\textrm{d}\theta ) . \end{aligned}$$

Proof

We start with the following easy inequality

$$\begin{aligned} \left| \frac{|f(x+y)-f(x)|}{|y|}- \left| \nabla f(x)\cdot \frac{y}{|y|}\right| \right|&\le \frac{\big \vert f(x+y)-f(x)-\nabla f(x)\cdot y\big \vert }{|y|}\\&\le C|y|,\quad \text {for some}\ C>0. \end{aligned}$$

This implies

$$\begin{aligned}&\int _{B_R} \int _{B_R} \left| \frac{|f(x+y)-f(x)|}{|y|}- \left| \nabla f(x)\cdot \frac{y}{|y|}\right| \right| \lambda _\varepsilon (\textrm{d}y)\, \textrm{d}x\\&\quad \le |B_R|\left( C\int _{|y|\le 1}|y|\lambda _\varepsilon (\textrm{d}y) +2\Vert \nabla f\Vert _\infty \int _{|y|>1}\lambda _\varepsilon (\textrm{d}y)\right) . \end{aligned}$$

By (4.1), the second integral converges to zero as \(\varepsilon \downarrow 0\). For the first integral we write

$$\begin{aligned} \int _{|y|\le 1}|y|\lambda _\varepsilon (\textrm{d}y)&= \int _{|y|\le 1} \int _0^{|y|} \textrm{d}t\, \lambda _\varepsilon (\textrm{d}y) = \int _0^1 \int _{t\le |y|\le 1}\lambda _\varepsilon (\textrm{d}y)\, \textrm{d}t\\&\le \int _0^1 \int _{ |y|>t}\lambda _\varepsilon (\textrm{d}y)\, \textrm{d}t\rightarrow 0,\quad \textrm{as}\ \varepsilon \downarrow 0. \end{aligned}$$

Thus we are left to show that

$$\begin{aligned} \lim _{j\rightarrow \infty } \int _{B_R} \int _{B_R} \left| \nabla f(x)\cdot \frac{y}{|y|}\right| \lambda _{\varepsilon _j} (\textrm{d}y)\, \textrm{d}x = \int _{{\mathbb {S}}^{d-1}}\int _{B_R} |\nabla f(x) \cdot \theta |\, \textrm{d}x\, \mu (\textrm{d}\theta ) . \end{aligned}$$

We have

$$\begin{aligned} \int _{B_R} \int _{{\mathbb {R}}^d} \left| \nabla f(x)\cdot \frac{y}{|y|}\right| \lambda _{\varepsilon _j} (\textrm{d}y)\, \textrm{d}x&= \int _{B_R} \int _{B_R} \left| \nabla f(x)\cdot \frac{y}{|y|}\right| \lambda _{\varepsilon _j} (\textrm{d}y)\, \textrm{d}x\\&\quad \quad + \int _{B_R} \int _{B_R^c} \left| \nabla f(x)\cdot \frac{y}{|y|}\right| \lambda _{\varepsilon _j} (\textrm{d}y)\, \textrm{d}x. \end{aligned}$$

Since the last integral tends to zero as \(j\rightarrow \infty \), by (4.1), we infer the result with the aid of (4.3) and the dominated convergence theorem. \(\square \)

Proof of Theorem 3.1

By Lemma 4.1 and Lemma 4.2 we have

$$\begin{aligned}&\int _{B_{1/R}} \int _{B_{1/R}} \frac{|f_\delta (x+y) - f_\delta (x)|}{|y|}\lambda _{\varepsilon _j} (\textrm{d}y)\, \textrm{d}x \\&\quad \le \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \frac{|f (x+y) - f (x)|}{|y|}\lambda _{\varepsilon _j} (\textrm{d}y)\, \textrm{d}x\\&\quad \le \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \left| Df \cdot \frac{y}{|y|}\right| \, \lambda _{\varepsilon _j} (\textrm{d}y). \end{aligned}$$

We note that the following convergence holds uniformly for \(\theta \in {\mathbb {S}}^{d-1}\)

$$\begin{aligned} \lim _{\delta \downarrow 0}\int _{{\mathbb {R}}^d}|\nabla f_\delta \cdot \theta | = \int _{{\mathbb {R}}^d}|Df \cdot \theta |. \end{aligned}$$

Combining this with Lemma 4.3 we obtain

$$\begin{aligned} \lim _{R\downarrow 0}\lim _{\delta \downarrow 0}\lim _{j\rightarrow \infty } \int _{B_{1/R}} \int _{B_{1/R}}&\frac{|f_\delta (x+y) - f_\delta (x)|}{|y|}\lambda _{\varepsilon _j} (\textrm{d}y)\, \textrm{d}x = \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d}|Df\cdot \theta |\mu (\textrm{d}\theta ). \end{aligned}$$

Since the function \({\mathbb {S}}^{d-1}\ni \theta \mapsto \int _{{\mathbb {R}}^d}|Df \cdot \theta |\) is continuous, we can apply (4.3) and we arrive at

$$\begin{aligned} \lim _{j\rightarrow \infty } \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d} \left| Df \cdot \frac{y}{|y|}\right| \, \lambda _{\varepsilon _j} (\textrm{d}y) = \int _{{\mathbb {S}}^{d-1}}\int _{{\mathbb {R}}^d}|Df\cdot \theta |\mu (\textrm{d}\theta ), \end{aligned}$$

and the proof is complete. \(\square \)