1 Introduction

In this paper, we study the minimization problem

$$\begin{aligned} \underset{u}{\min }\, \text {TV}(u) + \frac{\gamma }{2} \Vert u-\mathcal {A}(u)\Vert _{H^{-1}}^2 \end{aligned}$$

on undirected graphs. Here \(\text {TV}\) and \(\Vert \cdot \Vert _{H^{-1}}^2\) are graph-based analogues of the continuum total variation seminorm and continuum \(H^{-1}\) Sobolev norm, respectively, \(\gamma \ge 0\), and u is allowed to vary over the set of node functions with prescribed average mass \(\mathcal {A}(u)\). These concepts will be made precise later in the paper, culminating in formulation (34) of the minimization problem. The main contributions of this paper are the introduction of the graph Ohta–Kawasaki functional into the literature, the development of an algorithm to produce (approximate) minimizers, and the study of that algorithm, which leads to, among other results, further insight into the connection between the graph Merriman–Bence–Osher (MBO) method and the graph total variation, following on from initial investigations in van Gennip et al. (2014).

There are various reasons to study this minimization problem. First of all, it is the graph-based analogue of the continuum Ohta–Kawasaki variational model (Ohta and Kawasaki 1986; Kawasaki et al. 1988). This model was originally introduced as a model for pattern formation in diblock copolymer systems and has become a paradigmatic example of a variational model which exhibits pattern formation. It spawned a large mathematical literature which explores its properties analytically and computationally. A complete literature overview for this area is outside the scope of this paper. For a brief overview of the continuum Ohta–Kawasaki model, see Section S1 in Supplementary Materials. (We use the prefix “S” to indicate a reference to Supplementary Materials.) For a sample of mathematical papers on this topic, see for example (Ren and Wei 2000; Choksi and Ren 2003; van Gennip et al. 2009; Choksi et al. 2009; Le 2010; Choksi et al. 2011; Glasner 2017) and other references mentioned in Section S1. The problem studied in this paper thus follows in the footsteps of a rich mathematical heritage, but at the same time, being the graph analogue of the continuum functional, connects with the recent interest in discrete PDE inspired problems.

Recently, there has been a growing enthusiasm in the mathematical literature for graph-based variational methods and graph based dynamics which mimic continuum-based variational methods and partial differential equations (PDEs), respectively. This is partly driven by novel applications of such methods in data science and image analysis (Ta et al. 2011; Elmoataz et al. 2012; Bertozzi and Flenner 2012; Merkurjev et al. 2013; Hu et al. 2013; Garcia-Cardona et al. 2014; Calatroni et al. 2017; Bosch et al. 2016; Merkurjev et al. 2017; Elmoataz et al. 2017) and partly by theoretical interest in the new connections between graph theory and PDEs (van Gennip and Bertozzi 2012; van Gennip et al. 2014; Trillos and Slepčev 2016). Broadly speaking, these studies fall into one (or more) of three categories: papers connecting graph problems with continuum problems, for example through a limiting process (van Gennip and Bertozzi 2012; Trillos and Slepčev 2016; Trillos et al. 2016), papers adapting a PDE approach to a graph context in order to tackle a graph problem such as graph clustering and classification (Bertozzi and Flenner 2016; Bresson et al. 2014; Merkurjev et al. 2016), maximum cut computations (Keetch and van Gennip in prep), and bipartite matching (Caracciolo et al. 2014; Caracciolo and Sicuro 2015), and papers studying the graph analogue of a PDE or variational problem that has interesting properties in the continuum, to explore what (potentially similar) properties are present in the graph based version of the problem (van Gennip et al. 2014; Luo and Bertozzi 2017; Elmoataz and Buyssens 2017). This paper mostly falls in the latter category.

The study of the graph-based Ohta–Kawasaki model is also of interest, because it connects with graph methods, concepts, and questions that have recently attracted attention, such as the graph MBO method (also known as threshold dynamics), graph curvature, and the question how these concepts relate to each other. The MBO scheme was originally introduced (in a continuum setting) to approximate motion by mean curvature (Merriman et al. 1992, 1993, 1994). It is an iterative scheme, which alternates between a short-time diffusion step and a threshold step. Not only have these dynamics been proven to converge to motion by mean curvature (Evans 1993; Barles and Georgelin 1995; Swartz and Yip 2017), but they have been a very useful basis for numerical schemes as well, both in the continuum and on graphs. Without aiming for completeness, we mention some of the papers that investigate or use the MBO scheme: (Mascarenhas 1992; Ruuth 1998a, b; Chambolle and Novaga 2006; Esedoḡlu et al. 2008, 2010; Hu et al. 2013; Merkurjev et al. 2013, 2014; Hu et al. 2015; Esedoḡlu and Otto 2015).

In this paper, we study two different MBO schemes, (OKMBO) and (mcOKMBO). The former is an extension of the standard graph MBO scheme van Gennip et al. (2014) in the sense that it replaces the diffusion step in the scheme with a step whose dynamics are related to the Ohta–Kawasaki model and reduce to diffusion in the special case when \(\gamma =0\) (for details, see Sect. 5.1). The latter uses the same dynamics as the former in the first step, but incorporates mass conservation in the threshold step. The (mcOKMBO) scheme produces approximate graph Ohta–Kawasaki minimizers and is the one we use in our simulations which are presented in Sect. 7 and Section S9 of Supplementary Materials. The scheme (OKMBO) is of interest both as a precursor to (mcOKMBO) and as an extension of the standard graph MBO scheme. In van Gennip et al. (2014), it was conjectured that the standard graph MBO scheme is related to graph mean curvature flow and minimizers of the graph total variation functional. This paper furthers the study of that conjecture (but does not provide a definitive answer): in Sect. 5.2 it is shown that the Lyapunov functionals associated with the (OKMBO) \(\Gamma \)-converge to the graph Ohta–Kawasaki functional (which reduces to the total variation functional in the case when \(\gamma =0\)). Moreover, in Sect. 6 we introduce a special class of graphs, \(\mathcal {C}_\gamma \), dependent on \(\gamma \). For graphs from this class the (OKMBO) scheme can be interpreted as the standard graph MBO scheme on a transformed graph. For such graphs we extend existing elliptic and parabolic comparison principles for the graph Laplacian and graph diffusion equation to our new Ohta–Kawasaki operator and dynamics (Lemmas 6.13 and 6.15).

A significant role in the analysis presented in this paper is played by the equilibrium measures associated to a given node subset (Bendito et al. 2000b, 2003), especially in the construction of the aforementioned class \(\mathcal {C}^0\). In Sect. 3 we study these equilibrium measures and the role they play in constructing Green’s functions for the graph Dirichlet and Poisson problems. The Poisson problem in particular, is an important ingredient in the definition of the graph \(H^{-1}\) norm and the graph Ohta–Kawasaki functional as they are introduced in Sect. 4. Both the equilibrium measures and the Ohta–Kawasaki functional itself are related to the graph curvature, which was introduced in van Gennip et al. (2014), as is shown in Lemma 3.6 and Corollary 4.12, respectively.

The structure of the paper is as follows. In Sect. 2 we define our general setting. Section 3 introduces the equilibrium measures from Bendito et al. (2003) into the paper (the terminology is derived from potential theory; see, e.g. Simon 2007 and references therein) and uses them to study the Dirichlet and Poisson problems on graphs, generalizing some results from Bendito et al. (2003). In Sect. 4 we define the \(H^{-1}\) inner product and norm and use those to construct the object at the centre of our paper: the (sharp interface) Ohta–Kawasaki functional on graphs, \(F_0\). We also briefly consider \(F_\varepsilon \), a diffuse interface version of the Ohta–Kawasaki functional and its relationship with \(F_0\). Moreover, in this section we start using tools from spectral analysis to study \(F_0\). These tools will be one of the main ingredients in the remainder of the paper. In Sect. 5 the algorithms (OKMBO) and (mcOKMBO) are introduced and analysed. It is shown that both these algorithms have an associated Lyapunov functional (which extends a result from van Gennip et al. 2014) and that these functionals \(\Gamma \)-converge to \(F_0\) in the limit when \(\tau \) (the time parameter associated with the first step in the MBO iteration) goes to zero. We introduce the class \(\mathcal {C}_\gamma \) in Sect. 6 and prove that the Ohta–Kawasaki dynamics [(i.e. the dynamics used in the first steps of both (OKMBO) and (mcOKMBO)] on graphs from this class correspond to diffusion on a transformed graph. We also prove comparison principles for these graphs. In Sect. 7 we then use (mcOKMBO) to numerically construct (approximate) minimizers of \(F_0\), before ending with a discussion of potential future research directions in Sect. 8. Supplementary Materials accompany this paper, which contain further background information, results, examples, numerical simulations, and deferred proofs.

2 Setup

In this paper we consider graphs \(G\in \mathcal {G}\), where \(\mathcal {G}\) is the set consisting of all finite, simple,Footnote 1 connected, undirected, edge-weighted graphs \((V,E,\omega )\) with \(n:= |V| \ge 2\) nodes. Here \(E\subset V\times V\) and \(\omega : E\rightarrow (0,\infty )\). Because \(G\in \mathcal {G}\) is undirected, we identify \((i,j)\in E\) with \((j,i)\in E\). If we want to consider an unweighted graph, we view it as a weighted graph with \(\omega =1\) on E.

Assume \(G\in \mathcal {G}\) is given. Let \(\mathcal {V}\) be the set of node functions \(u:V\rightarrow {\mathbb {R}}\) and \(\mathcal {E}\) the set of skew-symmetricFootnote 2 edge functions \(\varphi : E\rightarrow {\mathbb {R}}\). For \(i\in V\), \(u\in \mathcal {V}\), we write \(u_i:=u(i)\) and for \((i,j)\in E\), \(\varphi \in \mathcal {E}\) we write \(\varphi _{ij}:=\varphi ((i,j))\). To simplify notation, we extend each \(\varphi \in \mathcal {E}\) to a function \(\varphi :V^2 \rightarrow {\mathbb {R}}\) (without changing notation) by setting \(\varphi _{ij} = 0\) if \((i,j)\not \in E\). The condition that \(\varphi \) is skew-symmetric means that, for all \(i,j\in V\), \(\varphi _{ij}=-\varphi _{ji}\). Similarly, for the edge weights we write \(\omega _{ij}:=\omega ((i,j))\) and we extend \(\omega \) (without changing notation) to a function \(\omega : V^2 \rightarrow [0,\infty )\) by setting \(\omega _{ij} = 0\) if and only if \((i,j)\not \in E\). Because \(G\in \mathcal {G}\) is undirected, we have for all \(i,j\in V\), \(\omega _{ij} = \omega _{ji}\).

The degree of node \(i\in V\) is \(\displaystyle d_i:=\sum \nolimits _{j\in V} \omega _{ij}\) and the minimum and maximum degrees of the graph are defined as \(\displaystyle d_-:= \underset{1\le i\le n}{\min }d_i\) and \(\displaystyle d_+:= \underset{1\le i\le n}{\max }d_i\), respectively. Because \(G\in \mathcal {G}\) is connected and \(n\ge 2\), there are no isolated nodes and thus \(d_-,d_+>0\).

For a node \(i\in V\), we denote the set of its neighbours by

$$\begin{aligned} \mathcal {N}(i) := \{j\in V: \omega _{ij}>0\}. \end{aligned}$$
(1)

For simplicity of notation, we will assume that the nodes of a given graph \(G\in \mathcal {G}\) are labelled such that \(V=\{1, \ldots , n\}\). For definiteness and to avoid confusion we specify that we consider \(0\not \in {\mathbb {N}}\), i.e. \({\mathbb {N}}=\{1, 2, 3, \ldots \}\), and when using the subset notation \(A\subset B\) we allow for the possibility that \(A=B\). The characteristic function (or indicator function) \(\chi _S\) of a node set \(S\subset V\) is defined by \((\chi _S)_i := 1\) if \(i\in S\) and \((\chi _S)_i:=0\) otherwise. If \(S = \{i\}\), we can use the Kronecker delta to write:Footnote 3\(\displaystyle (\chi _{\{i\}})_j = \delta _{ij} := {\left\{ \begin{array}{ll} 1, &{}\text {if }\quad i=j,\\ 0, &{} \text {otherwise.}\end{array}\right. } \)

As justified in earlier work (Hein et al. 2007; van Gennip and Bertozzi 2012; van Gennip et al. 2014), we introduce the following inner products,

$$\begin{aligned} \langle u, v \rangle _{\mathcal {V}} := \sum _{i\in V} u_i v_i d_i^r, \quad \langle \varphi , \psi \rangle _{\mathcal {E}} := \frac{1}{2} \sum _{i,j\in V} \varphi _{ij} \psi _{ij} \omega _{ij}^{2q-1}, \end{aligned}$$

for parameters \(q\in [1/2,1]\) and \(r\in [0,1]\).Footnote 4 We define the gradient \(\nabla : \mathcal {V} \rightarrow \mathcal {E}\) by, for all \(i,j\in V\),

$$\begin{aligned} (\nabla u)_{ij} := {\left\{ \begin{array}{ll} \omega _{ij}^{1-q} (u_j-u_i), &{} \text {if }\quad \omega _{ij}>0,\\ 0, &{}\text {otherwise}.\end{array}\right. } \end{aligned}$$

Note that \(\langle \cdot , \cdot \rangle _{\mathcal {V}}\) is indeed an inner product on \(\mathcal {V}\) if G has no isolated nodes (i.e. if \(d_i>0\) for all \(i\in V\)), as is the case for \(G\in \mathcal {G}\). Furthermore, \(\langle \cdot , \cdot \rangle _{\mathcal {E}}\) is an inner product on \(\mathcal {E}\) (since functions in \(\mathcal {E}\) are either only defined on E or are required to be zero on \(V^2{\setminus } E\), depending on whether we consider them as edge functions or as extended edge functions, as explained above).

Using these building blocks, we define the divergence as the adjoint of the gradient and the (graph) Laplacian as the divergence of the gradient, leading to,Footnote 5 for all \(i\in V\),

$$\begin{aligned} (\text {div}\,\varphi )_i := \frac{1}{d_i^r} \sum _{j\in V} \omega _{ij}^q \varphi _{ji}, \quad (\Delta u)_i :=\left( \text {div}\,(\nabla u)\right) _i = d_i^{-r} \sum _{j\in V} \omega _{ij} (u_i-u_j), \end{aligned}$$

as well as the following norms:

$$\begin{aligned} \Vert u\Vert _{\mathcal {V}}&:= \sqrt{\langle u,u\rangle _{\mathcal {V}}}, \quad \Vert \varphi \Vert _{\mathcal {E}} := \sqrt{\langle \varphi , \varphi \rangle _{\mathcal {E}}},\\ \Vert u\Vert _{\mathcal {V},\infty }&:= \max \{|u_i|:i\in V\}, \quad \Vert \varphi \Vert _{\mathcal {E},\infty } := \max \{|\varphi _{ij}|:i,j \in V\}. \end{aligned}$$

Note that we indeed have, for all \(u\in \mathcal {V}\) and all \(\psi \in \mathcal {E}\),

$$\begin{aligned} \langle \nabla u,\psi \rangle _{\mathcal {E}} = \langle u, \text {div}\,\psi \rangle _{\mathcal {V}}. \end{aligned}$$
(2)

In van Gennip et al. (2014, Lemma 2.2) it is proven that, for all \(u\in \mathcal {V}\),

$$\begin{aligned} d_-^{\frac{r}{2}} \Vert u\Vert _{\mathcal {V},\infty } \le \Vert u\Vert _{\mathcal {V}} \le \sqrt{{\mathrm {vol}}\left( V\right) } \Vert u\Vert _{\mathcal {V},\infty }. \end{aligned}$$
(3)

For a function \(u \in \mathcal {V}\), we define its support as \( {{\mathrm{supp}}}(u) := \{i\in V: u_i \ne 0\}. \) The mass of a function \(u\in \mathcal {V}\) is \( \mathcal {M}(u):= \langle u, \chi _V\rangle _{\mathcal {V}} = \sum _{i\in V}d_i^r u_i, \) and the volume of a node set \(S\subset V\) is \( {\mathrm {vol}}\left( S\right) := \mathcal {M}(\chi _S) = \Vert \chi _S \Vert _{\mathcal {V}}^2 = \sum _{i\in S} d_i^r. \) Note that, if \(r=0\), then \({\mathrm {vol}}\left( S\right) = |S|\), where |S| denotes the number of elements in S. Using (2), we find the useful property that, for all \(u\in \mathcal {V}\),

$$\begin{aligned} \mathcal {M}(\Delta u) = \langle \Delta u, \chi _V\rangle _{\mathcal {V}} = \langle \nabla u, \nabla \chi _V\rangle _{\mathcal {E}} = 0. \end{aligned}$$
(4)

For \(u\in \mathcal {V}\), define the average mass function of u as \( \mathcal {A}(u) := \frac{\mathcal {M}(u)}{{\mathrm {vol}}\left( V\right) } \chi _V. \) Note in particular that

$$\begin{aligned} \mathcal {M}(u-\mathcal {A}(u)) = 0. \end{aligned}$$
(5)

We also define the Dirichlet energy of a function \(u\in \mathcal {V}\),

$$\begin{aligned} \frac{1}{2} \Vert \nabla u\Vert _{\mathcal {E}}^2 = \frac{1}{4} \sum _{i,j\in V} \omega _{ij}(u_i - u_j)^2, \end{aligned}$$
(6)

and the total variation of \(u\in \mathcal {V}\), \( \text {TV}(u) := \max \left\{ \langle \text {div}\,\varphi , u\rangle _{\mathcal {V}}: \varphi \in \mathcal {E}, \, \Vert \varphi \Vert _{\mathcal {E},\infty }\le 1\right\} = \frac{1}{2} \sum _{i,j\in V} \omega _{ij}^q |u_i-u_j|. \)

Remark 2.1

We have introduced two parameters, \(q\in [1/2,1]\) and \(r\in [0,1]\), in our definitions so far. As we will see later in this paper, the choice \(q=1\) is the natural one for our purposes. In those cases where we do not require \(q=1\), however, we do keep the parameter q unspecified, because there are papers in the literature in which the choice \(q=1/2\) is made, such as Gilboa and Osher (2009). One reason for the choice \(q=1/2\) is that in that case \(\omega _{ij}\) appears in the graph gradient, graph divergence, and graph total variation with the same power (1/2), allowing one to think of \(\sqrt{\omega _{ij}}\) as analogous to a reciprocal distance.

The parameter r is the more interesting one of the two, as the choices \(r=0\) and \(r=1\) lead to two different graph Laplacians that appear in the spectral graph theory literature under the names combinatorial (or unnormalized) graph Laplacian and random walk (or normalized, or non-symmetrically normalized) graph Laplacian, respectively. Many of the results in this paper hold for all \(r\in [0,1]\), and we will clearly indicate whether and when further assumptions on r are required

We note that, besides the graph Laplacian, also the mass of a function depends on r, whereas the total variation of a function does not depend on r, but does depend on q. The Dirichlet energy depends on neither parameter.

Unless we explicitly mention any further restrictions on q or r, only the conditions \(q\in [1/2,1]\) and \(r\in [0,1]\) are implicitly assumed.

Given a graph \(G=(V,E,\omega )\in \mathcal {G}\), we define the following useful subsets of \(\mathcal {V}\):

figure a

The space of zero mass node functions, \(\mathcal {V}_0\), will play an important role, as it is the space of admissible ‘right-hand side’ functions in the Poisson problem (17). Note that every \(u\in \mathcal {V}^b\) is of the form \(u=\chi _S\) for some \(S\subset V\).

Observe that for \(M>{\mathrm {vol}}\left( V\right) \), \(\mathcal {V}^b_M = \emptyset \). In fact, for a given finite graph there are only finitely many \(M\in [0,{\mathrm {vol}}\left( V\right) ]\) such that \(\mathcal {V}_M^b \ne \emptyset \). For a given graph, we define the (finite) set of admissible masses as

$$\begin{aligned} \mathfrak {M} := \{M\in [0,{\mathrm {vol}}\left( V\right) ]: \mathcal {V}^b_M \ne \emptyset \}. \end{aligned}$$
(11)

In Lemma S6.5 of Supplementary Materials we construct \(\mathfrak {M}\) for the example of a star graph.

3 Dirichlet and Poisson Equations

3.1 A Comparison Principle

Lemma 3.1

(Comparison principle I) Let \(G=(V,E,\omega ) \in \mathcal {G}\), let \(V'\) be a proper subset of V, and let \(u, v \in \mathcal {V}\) be such that, for all \(i\in V'\), \((\Delta u)_i \ge (\Delta v)_i\) and, for all \(i\in V{\setminus } V'\), \(u_i\ge v_i\). Then, for all \(i\in V\), \(u_i \ge v_i\).

Proof

The result follows as a special case of the comparison principle for uniformly elliptic partial differential equations on graphs with Dirichlet boundary conditions in Manfredi et al. (2015, Theorem 1). For completeness (and future use in the proof of Lemma 6.13) we provide the proof of this special case here. In particular, we will prove that if \(w\in \mathcal {V}\) is such that, for all \(i\in V'\), \((\Delta w)_i \ge 0\), and, for all \(i\in V{\setminus } V'\), \(w_i\ge 0\), then for all \(i\in V\), \(w_i \ge 0\). Applying this to \(w=u-v\) gives the desired result.

If \(V'=\emptyset \), the result follows trivially. In what follows we assume that \(V'\ne \emptyset \).

Define the set \(U := \left\{ i\in V: w_i = \min _{j\in V} w_j\right\} \). Note that \(U\ne \emptyset \). For a proof by contradiction, assume \(\min _{j\in V} w_j < 0\), then \(U\subset V'\). By assumption \(V'\ne V\), hence \(\emptyset \ne V{\setminus } V' \subset V{\setminus } U\). Let \(i^*\in V{\setminus } U\). Since G is connected, there is a path from U to \(i^*\).Footnote 6 Fix such a path and let \(k^*\) be the first node along this path such that \(k^*\in V{\setminus } U\) and let \(j^*\in U\) be the node immediately preceding \(k^*\) in the path. Then, for all \(k\in V\), \((\nabla w)_{kj^*} \le 0\), and \( (\nabla w)_{k^*j^*} = \omega _{k^*j^*}^{1-q} (w_{j^*}-w_{k^*}) < 0. \) Thus \( d_{j^*}^r (\Delta w)_{j^*} = \sum _{k\in V} \omega _{j^*k}^q (\nabla w)_{kj^*} < 0. \) Since \(j^* \in V'\), this contradicts one of the assumptions on w, hence \(\min _{i\in V} w_i \ge 0\) and the result is proven. \(\square \)

We will see a generalization of Lemma 3.1 as well as another comparison principle in Sect. 6.2, but their proofs require some groundwork which is interesting in its own right as well. That is the topic of Sect. 6.1.

3.2 Equilibrium Measures

Let \(G=(V,E,\omega )\in \mathcal {G}\). Given a properFootnote 7 subset \(S\subset V\), consider the equation

$$\begin{aligned} {\left\{ \begin{array}{ll} (\Delta \nu ^S)_i = 1, &{}\text {if }\quad i\in S,\\ \nu ^S_i = 0, &{}\text {if }\quad i \in V{\setminus } S. \end{array}\right. } \end{aligned}$$
(12)

We recall some properties that are proven in Bendito et al. (2003, Section 2).

Lemma 3.2

Let \(G=(V,E,\omega )\in \mathcal {G}\). The following results and properties hold:

  1. 1.

    The Laplacian \(\Delta \) is positive semidefinite on \(\mathcal {V}\) and positive definite on \(\mathcal {V}_0\).

  2. 2.

    The Laplacian satisfies a maximum principle: for all \(u\in \mathcal {V}_+\), \(\displaystyle \max _{i\in V} (\Delta u)_i = \max _{i\in {{\mathrm{supp}}}(u)} (\Delta u)_i. \)

  3. 3.

    For each proper subset \(S\subset V\), (12) has a unique solution in \(\mathcal {V}\). If \(\nu ^S\) is this solution, then \(\nu ^S \in \mathcal {V}_+\) and \({{\mathrm{supp}}}(\nu ^S) = S\).

  4. 4.

    If \(R \subset S\) are both proper subsets of V and \(\nu ^R, \nu ^S \in \mathcal {V}_+\) are the corresponding solutions of (12), then \(\nu ^S \ge \nu ^R\).

Proof

These properties are proven to hold in Bendito et al. (2003, Section 2) for \(r=0\); in Section S10.1 of Supplementary Materials, we give our own proofs for the general case in detail. \(\square \)

Using property 3 in Lemma 3.2, we can now define the concept of the equilibrium measure of a node subset S.

Definition 3.3

Let \(G=(V,E,\omega )\in \mathcal {G}\). For any proper subset \(S\subset V\), the equilibrium measure for S, \(\nu ^S\), is the unique function in \(\mathcal {V}_+\) which satisfies, for all \(i\in V\), the equation in (12).

In Lemmas S6.4 and S6.5 in Supplementary Materials, we construct equilibrium measures on a bipartite graph and a star graph, respectively.

3.3 Graph Curvature

We recall the concept of graph curvature, which was introduced in van Gennip et al. (2014, Section 3).

Definition 3.4

Let \(G\in \mathcal {G}\) and \(S\subset V\). Then we define the graph curvature of the set S by, for all \(i\in V\),

$$\begin{aligned} (\kappa _S^{q,r})_i := d_i^{-r} {\left\{ \begin{array}{ll} \sum \nolimits _{j\in V{\setminus } S} \omega _{ij}^q, &{}\text {if }\quad i\in S,\\ -\sum \nolimits _{j\in S} \omega _{ij}^q, &{}\text {if }\quad i\in V{\setminus } S. \end{array}\right. } \end{aligned}$$

We are mainly interested in the case \(q=1\) in this paper and in any given situation, if there are any restrictions on \(r\in [0,1]\), they will be clear from the context. Hence, for notational simplicity, we will write \(\kappa _S := \kappa _S^{1,r}\).

For future use, we also define

$$\begin{aligned} \kappa _S^+ := \max _{i\in V} \left( \kappa _S\right) _i. \end{aligned}$$
(13)

The following lemma collects some useful properties of the graph curvature.

Lemma 3.5

Let \(G\in \mathcal {G}\), \(S\subset V\), and let \(\kappa _S^{q,r}\) and \(\kappa _S\) be the graph curvatures from Definition 3.4. Then

$$\begin{aligned} \text {TV}(\chi _S) = \langle \kappa _S^{q,r}, \chi _S\rangle _{\mathcal {V}} \end{aligned}$$
(14)

and

$$\begin{aligned} \Delta \chi _S = \kappa _S, \end{aligned}$$
(15)

Moreover, if \(\kappa _S^+\) is as in (13), then \(\kappa _S^+ = \max _{i\in S} \left( \kappa _S\right) _i\).

Proof

The properties in (14) and (15) are proven in van Gennip et al. (2014, Section 3) and can be checked by a direct computation. Note that the latter requires \(q=1\). The property for \(\kappa _S^+\) follows from the fact that \(\kappa _S\) is nonnegative on S and nonpositive on \(S^c\). \(\square \)

We can use Lemma 3.1 to connect the equilibrium measures from (12) with the graph curvature.

Lemma 3.6

Assume \(G=(V,E,\omega )\in \mathcal {G}\) and let S be a proper subset of V. Let \(\nu ^S\) be the equilibrium measure for S from (12) and let \(\kappa _S\) be the graph curvature of S (for \(q=1\)) and \(\kappa _S^+\) its maximum value, as in Definition 3.4. Then, for all \(i\in S\), \(\nu ^S_i \ge \left( \kappa _S^+\right) ^{-1}\).

Proof

Define \(x:= \min _{i\in S} \left( \kappa _S\right) _i^{-1} = \left( \kappa _S^+\right) ^{-1}\). Since G is connected and S is a proper subset of V, \(\max _{i\in S} \left( \kappa _S\right) _i > 0\), and hence x is well-defined. Using (15), we compute \(\Delta \left( x \chi _S\right) = x \kappa _S \le 1\) on V (and in particular on S). Hence, for \(i\in S\), \(\left( \Delta \left( x \chi _S\right) \right) _i \le 1 = \left( \Delta \nu ^S_i\right) _i\). Furthermore, for \(i\in V{\setminus } S\), \(x \left( \chi _S\right) _i = 0 = \nu ^S_i\). Thus, by Lemma 3.1, for all \(i\in S\), \(x = x (\chi _S)_i \le \nu ^S_i\). \(\square \)

We illustrate Lemma 3.6 with bipartite and star graph examples in Remark S6.6 in Supplementary Materials.

3.4 Green’s Functions

Next we use the equilibrium measures to construct Green’s functions for Dirichlet and Poisson problems, following the discussion in Bendito et al. (2003, Section 3); see also Bendito et al. (2000a) and Chung and Yau (2000).

In this section, all the results assume the context of a given graph \(G\in \mathcal {G}\). In this section and in some selected places later in the paper, we will also denote Green’s functions by the symbol G. It will always be very clear from the context whether G denotes a graph or a Green’s function in any given situation.

Definition 3.7

For a given subset \(S\subset V\), we denote by \(\mathcal {V}(S)\) the set of all real-valued node functions whose domain is S. Note that \(\mathcal {V}(V)=\mathcal {V}\).

Given a nonempty, proper subset \(S\subset V\) and a function \(f \in \mathcal {V}(S)\), the (semihomogeneous) Dirichlet problem is to find \(u\in \mathcal {V}\) such that, for all \(i\in V\),

$$\begin{aligned} {\left\{ \begin{array}{ll} (\Delta u)_i = f_i, &{}\text {if } \quad i\in S,\\ u_i = 0, &{}\text {if }\quad i\in V{\setminus } S. \end{array}\right. } \end{aligned}$$
(16)

Given \(k\in V\) and \(f\in \mathcal {V}_0\), the Poisson problem is to find \(u\in \mathcal {V}\) such that,

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta u = f,\\ u_k = 0. \end{array}\right. } \end{aligned}$$
(17)

Remark 3.8

Note that a general Dirichlet problem which prescribes \(u=g\) on \(V{\setminus } S\), for some \(g\in \mathcal {V}(V{\setminus } S)\), can be transformed into a semihomogeneous problem by considering the function \(u-{\tilde{g}}\), where, for all \(i\in S\), \({\tilde{g}}_i = 0\) and for all \(i\in V{\setminus } S\), \({\tilde{g}}_i = g_i\).

Lemma 3.9

Let \(S\subset V\) be a nonempty, proper subset, and \(f\in \mathcal {V}(S)\). Then the Dirichlet problem (16) has at most one solution. Similarly, given \(k\in V\) and \(f\in \mathcal {V}_0\), the Poisson problem (17) has at most one solution.

Proof

Given two solutions u and v to the Dirichlet problem, we have \(\Delta (u-v) = 0\) on S and \(u-v=0\) on \(V{\setminus } S\). Since the graph is connected, this has as unique solution \(u-v=0\) on V (see the uniqueness proof in point 3 of Lemma 3.2, which uses the comparison principle of Lemma 3.1). A similar argument proves the result for the Poisson problem. \(\square \)

Next we will show that solutions to both the Dirichlet and Poisson problem exist, by explicitly constructing them using Green’s functions.

Definition 3.10

Let S be a nonempty, proper subset of V. The function \(G: V \times S \rightarrow {\mathbb {R}}\) is a Green’s function for the Dirichlet equation, (16), if, for all \(f\in \mathcal {V}(S)\), the function \(u \in \mathcal {V}\) which is defined by, for all \(i\in V\),

$$\begin{aligned} u_i := \sum _{j\in S} d_j^r G_{ij} f_j, \end{aligned}$$
(18)

satisfies (16).

Let \(k\in V\). The function \(G: V \times V\rightarrow {\mathbb {R}}\) is a Green’s function for the Poisson equation, (17), if, for all \(f\in \mathcal {V}_0\), (17) is satisfied by the function \(u\in \mathcal {V}\) which is defined by, for all \(i\in V\),

$$\begin{aligned} u_i := \sum _{j\in V} d_j^r G_{ij} f_j = \langle G_{i\cdot }, f\rangle _{\mathcal {V}}, \end{aligned}$$
(19)

where, for all \(i\in V\), \(G_{i\cdot }: V \rightarrow {\mathbb {R}}\).Footnote 8

In either case, for fixed \(j\in S\) (Dirichlet) or fixed \(j\in V\) (Poisson), we define \(G^j: V\rightarrow {\mathbb {R}}\), by, for all \(i\in V\),

$$\begin{aligned} G^j_i := G_{ij}. \end{aligned}$$
(20)

Lemma 3.11

Let S be a nonempty, proper subset of V and let \(G:V\times S \rightarrow {\mathbb {R}}\). Then G is a Green’s function for the Dirichlet equation, (16), if and only if, for all \(i\in V\) and for all \(j\in S\),

$$\begin{aligned} {\left\{ \begin{array}{ll} (\Delta G^j)_i = d_j^{-r} \delta _{ij}, &{}\text {if }\quad i\in S,\\ G^j_i = 0, &{}\text {if }\quad i\in V{\setminus } S. \end{array}\right. } \end{aligned}$$
(21)

Let \(k\in V\) and \(G:V\times V \rightarrow {\mathbb {R}}\). Then G is a Green’s function for the Poisson equation (17), if and only if there is a \(q\in \mathcal {V}\) which satisfies

$$\begin{aligned} \mathcal {M}(q) = -\,1 \end{aligned}$$
(22)

and there is a \(C\in {\mathbb {R}}\), such that G satisfies, for all \(i,j\in V\),

$$\begin{aligned} {\left\{ \begin{array}{ll} (\Delta G^j)_i = d_j^{-r} \delta _{ij} + q_i,\\ G^j_k = C. \end{array}\right. } \end{aligned}$$
(23)

Proof

For the Dirichlet case, let u be given by (18), then, for all \(i\in S\), \( (\Delta u)_i = \sum _{j\in S} d_j^r (\Delta G^j)_i f_j. \) If the function G is a Green’s function, then, for all \(f\in \mathcal {V}(S)\) and for all \(i\in S\), \((\Delta u)_i = f_i\). In particular, if we apply this to \(f=\chi _{\{j\}}\) for \(j\in S\), we find, for all \(i, j\in S\), \( d_j^r (\Delta G^j)_i = \delta _{ij}. \) Moreover, for all \(f\in \mathcal {V}(S)\) and for all \(i\in V{\setminus } S\), \(u_i=0\). Applying this again to \(f=\chi _{\{j\}}\) for \(j\in S\), we find for all \(i\in V{\setminus } S\) that \(d_j^r G^j_i = 0\). Hence, for all \(i\in V{\setminus } S\), and for all \(j\in S\), \(G^j_i = 0\). This gives us (21).

Next assume G satisfies (21). By substituting G into (18), we find that u satisfies (16) and thus G is a Green’s function.

Now we consider the Poisson case and we let u be given by (19). Let q satisfy (22). If G is a Green’s function, then, for all \(f\in \mathcal {V}_0\), \(\Delta u = f\). Let \(l_1, l_2 \in V\) and apply \(\Delta u = f\) to \(f=d_{l_1}^r \chi _{\{l_1\}} - d_{l_2}^r \chi _{\{l_2\}}\). It follows that, for all \(i\in V\), \(\left( \Delta G^{l_1}\right) _i - \left( \Delta G^{l_2}\right) _i = d_{l_1}^{-r} \left( (\chi _{\{l_1\}}\right) _i - d_{l_2}^{-r} \left( \chi _{\{l_2\}}\right) _i\). In particular, if \(l_1\ne i \le l_2\) the right-hand side in this equality is zero and thus for all \(i\in V\), \(j\mapsto \left( \Delta G^j\right) _i\) is constant on \(V{\setminus }\{i\}\). In other words, there is a \(q\in \mathcal {V}\), such that, for all \(i\in V\) and for all \(j\in V{\setminus } \{j\}\), \(\left( \Delta G^j\right) _i = q_i\).

Next let \(l\in V\) and apply \(\Delta u = f\) to the function \(f=\chi _{\{l\}} - \mathcal {A}\left( \chi _{\{l\}}\right) \). We compute that \(\left( \Delta u\right) _i = \left( \Delta G^k\right) _i d_k^r - \frac{d_k^r d_i^r}{{\mathrm {vol}}\left( V\right) } \left( \Delta G^i\right) _i - \frac{d_k^r}{{\mathrm {vol}}\left( V\right) } q_i ({\mathrm {vol}}\left( V\right) - d_i^r)\). Hence, if \(l=i\), \(\Delta u = f\) reduces to \( \left( \Delta G^i\right) _i d_i^r \left( 1-\frac{d_i^r}{{\mathrm {vol}}\left( V\right) }\right) - d_i^r q_i + \frac{d_i^{2r}}{{\mathrm {vol}}\left( V\right) } q_i = 1 - \frac{d_i^r}{{\mathrm {vol}}\left( V\right) }. \) We solve this for \(\left( \Delta G^i\right) _i\) to find \(\left( \Delta G^i\right) _i = d_i^{-r} + q_i\). If \(l\in V{\setminus }\{i\}\), \(\Delta u = f\) reduces to \( \left( \Delta G^k\right) _i \left( d_l^r-d_l^r + \frac{d_l^r d_i^r}{{\mathrm {vol}}\left( V\right) }\right) - \frac{d_l^r d_i^r}{{\mathrm {vol}}\left( V\right) } \left( \Delta G^i\right) _i = 0 - \frac{d_l^r}{{\mathrm {vol}}\left( V\right) }. \) Using the expression for \(\left( \Delta G^i\right) _i\) that we found above, we solve for \(\left( \Delta G^k\right) _i\) to find \(\left( \Delta G^i\right) _i = q_i\).

Combining the above, we find, for all \(i,j\in V\), \( (\Delta G^j)_i = d_j^{-r} \delta _{ij} + q_i. \) Now we compute, for each \(j \in V\),

$$\begin{aligned} 0 = \langle \Delta G^j, \chi _V \rangle _{\mathcal {V}} = \langle d_j^{-r} \chi _{\{j\}} + q, \chi _V\rangle _{\mathcal {V}} = 1+ \langle q, \chi _V\rangle _{\mathcal {V}} = 1+\mathcal {M}(q), \end{aligned}$$

thus \(\mathcal {M}(q) = -1\).

The ‘boundary condition’ \(u_k=0\) for a fixed \(k\in V\) in (17), holds for all \(f\in \mathcal {V}_0\). Applying this again for \(f=d_{l_1}^{-r} \chi _{\{l_1\}} - d_{l_2}^{-r} \chi _{\{l_2\}}\) we find \(G_k^{l_1} - G_k^{l_2} = 0\). Hence there is a constant \(C\in {\mathbb {R}}\) such that, for all \(j\in V\), \(G^j_k = C\). This gives us (23).

Next assume G satisfies (23). By substituting G into (19) we find that u satisfies (17). In particular, remember that \(f\in \mathcal {V}_0\). Thus, since q does not depend on j we have \(\langle q, f \rangle _{\mathcal {V}} = 0\) and moreover \(u_k = C \mathcal {M}(f) = 0\). Thus G is a Green’s function. \(\square \)

Remark 3.12

Any choice of q in (23) consistent with (22) will lead to a valid Green’s function for the Poisson equation and hence to the same (and only) solution u of the Poisson problem (17) via (19). We make the following convenient choice: for all \(i\in V\),

$$\begin{aligned} q_i = -d_k^{-r} \delta _{ik}. \end{aligned}$$
(24)

In Lemma 3.16, we will see that this choice of q leads to a symmetric Green’s function.

Also any choice of \(C\in {\mathbb {R}}\) in (23) will lead to a valid Green’s function for the Poisson equation. A function \({\tilde{G}}\) satisfies (23) with \(C={\tilde{C}} \in {\mathbb {R}}\) if and only if \({\tilde{G}}-{\tilde{C}}\) satisfies (23) with \(C=0\). Hence in Lemma 3.14, we will give a Green’s function for the Poisson equation for the choice

$$\begin{aligned} C=0. \end{aligned}$$
(25)

Corollary 3.13

For a given nonempty, proper subset \(S\subset V\), if there is a solution to (21), it is unique. Moreover, for given \(k\in V\), \(q\in \mathcal {V}_{-1}\), and \(C\in {\mathbb {R}}\), if there is a solution to (23), it is unique.

Proof

Let \(j\in S\) (or \(j\in V\)). If \(G^j\) and \(H^j\) both satisfy (21) [or (23)], then \(G^j-H^j\) satisfies a Dirichlet (or Poisson) problem of the form (16) [or (17)]. Hence by a similar argument as in the proof of Lemma 3.9, \(G^j-H^j = 0\). \(\square \)

For the following lemma, recall the definition of equilibrium measure from Definition 3.3.

Lemma 3.14

Let S be a nonempty, proper subset of V. The function \(G: V\times S \rightarrow {\mathbb {R}}\), defined by, for all \(i\in V\) and all \(j\in S\),

$$\begin{aligned} G_{ij} = \frac{\nu ^S_j}{\mathcal {M}(\nu ^S) - \mathcal {M}(\nu ^{S{\setminus }\{j\}})} \left( \nu ^S_i - \nu ^{S{\setminus }\{j\}}_i\right) , \end{aligned}$$
(26)

is the Green’s function for the Dirichlet equation, satisfying (21).

Let \(k\in V\). The function \(G: V\times V \rightarrow {\mathbb {R}}\), defined by, for all \(i,j\in V\),

$$\begin{aligned} G_{ij} = \frac{1}{{\mathrm {vol}}\left( V\right) } \left( \nu ^{V{\setminus }\{k\}}_i + \nu ^{V{\setminus }\{j\}}_k - \nu ^{V{\setminus }\{j\}}_i\right) , \end{aligned}$$
(27)

is the Green’s function for the Poisson equation, satisfying (23) with (24) and (25).

Proof

This can be checked via direct computations. We provide the details in Section S10.2 of Supplementary Materials. \(\square \)

Remark 3.15

Let G be the Green’s function from (27) for the Poisson equation. As shown in Lemma 3.14, G satisfies (23) with (24) and (25). Now let us try to find another Green’s function satisfying (23) with (25) and with a different choice of q. Fix \(k\in V\) and define \({\tilde{q}}\in \mathcal {V}\), by, for all \(i\in V\), \( {\tilde{q}}_i := q_i + d_k^{-r} \delta _{ik}. \) Then, by (22), \(\mathcal {M}({\tilde{q}}) = 0\). Hence, using (19) with the Green’s function G, we find a function \(v\in \mathcal {V}\) which satisfies, \(\Delta v_i = {\tilde{q}}\) and \(v_k = 0\). Hence, for all \(i,j\in V\),

$$\begin{aligned} {\left\{ \begin{array}{ll} (\Delta (G^j+v))_i = d_j^{-r} \delta _{ij} - d_k^{-r} \delta _{ik} + {\tilde{q}}_i,\\ (G^j+v)_k = 0. \end{array}\right. } \end{aligned}$$

So \(G^j+v\) is the new Green’s function we are looking for.

Lemma 3.16

Let S be a nonempty, proper subset of V. If \(G: V \times S \rightarrow {\mathbb {R}}\) is the Green’s function for the Dirichlet equation satisfying (21), then G is symmetric on \(S\times S\), i.e. for all \(i, j \in S\), \( G_{ij} = G_{ji}. \)

Let \(k\in V\). If \(G: V \times V \rightarrow {\mathbb {R}}\) is the Green’s function for the Poisson equation satisfying (23) with (24) (and any choice of \(C\in {\mathbb {R}}\)), then G is symmetric, i.e. for all \(i, j\in V\), \( G_{ij} = G_{ji}. \)

Proof

Let \(G: V \times S \rightarrow {\mathbb {R}}\) be the Green’s function for the Dirichlet equation, satisfying (21). Let \(u\in \mathcal {V}\) be such that \(u=0\) on \(V{\setminus } S\). Let \(i\in V\), then

$$\begin{aligned} \langle \Delta G^i, u\rangle _{\mathcal {V}}= & {} \sum _{j, k \in V} \omega _{jk} (G^i_j-G^i_k) u_j = \sum _{\mathop {j\in S}\limits _{k\in V}} \omega _{jk} (G^i_j-G^i_k) u_j \\= & {} \sum _{j\in S} d_j^r d_i^{-r} \delta _{ji} u_j = u_i, \end{aligned}$$

where the third equality follows from \( d_i^{-r} \delta _{ji} (\Delta G^i)_j = d_j^{-r} \sum _{k\in V} \omega _{jk} (G^i_j-G^i_k). \) Now let \(i,j\in S\) and use the equality above with \(u=G^j\) to deduce

$$\begin{aligned} G_{ij} = G^j_i = \langle \Delta G^i, G^j\rangle _{\mathcal {V}} = \langle G^i, \Delta G^j\rangle _{\mathcal {V}} = G^i_j = G_{ji}. \end{aligned}$$

Next we consider the Poisson case with Green’s function \(G: V \times V\rightarrow {\mathbb {R}}\), satisfying (23) with (24) and (25). Let \(k\in V\) and \(u\in \mathcal {V}\) with \(u_k=0\). Then, similar to the computation above, for \(i\in V\), we find

$$\begin{aligned} \langle \Delta G^i, u\rangle _{\mathcal {V}}= & {} \sum _{j\in V} (\Delta G^i)_j u_j d_j^r = \sum _{j\in V} \left( d_i^{-r} \delta _{ji}+q_j\right) u_j d_j^r \\= & {} u_i-\sum _{j\in V} d_k^{-r} \delta _{jk} u_j d_j^r = u_i - u_k = u_i, \end{aligned}$$

where we used (24). If we use the identity above with \(u=G^j\), we obtain, for \(i,j\in V\),

$$\begin{aligned} G_{ij} = G^j_i = \langle \Delta G^i, G^j\rangle _{\mathcal {V}} = \langle G^i, \Delta G^j\rangle _{\mathcal {V}} = G^i_j = G_{ji}, \end{aligned}$$

where we have applied that \(G^j_k=G^i_k=0\).

Finally, if \({\tilde{G}}: V\times V \rightarrow {\mathbb {R}}\) satisfies (23) with (24) and with \(C\ne 0\), then \({\tilde{G}} = G+C\) and hence \({\tilde{G}}\) is also symmetric. \(\square \)

The symmetry and support of the Green’s functions are discussed in some more detail in Remarks S5.3 and S5.4 in Supplementary Materials. Section S2 of Supplementary Materials gives a random walk interpretation for the Green’s function for the Poisson equation.

4 The Graph Ohta–Kawasaki Functional

4.1 A Negative Graph Sobolev Norm and Ohta–Kawasaki

In analogy with the negative \(H^{-1}\) Sobolev norm (and underlying inner product) in the continuum (see for example Evans 2002; Adams and Fournier 2003; Brezis 1999), we introduce the graph \(H^{-1}\) inner product and norm.

Definition 4.1

The \(H^{-1}\) inner product of \(u, v\in \mathcal {V}_0\) is given by

$$\begin{aligned} \langle u, v \rangle _{H^{-1}} := \langle \nabla \varphi , \nabla \psi \rangle _{\mathcal {E}}, \end{aligned}$$

where \(\varphi , \psi \in \mathcal {V}\) are any functions such that \(\Delta \varphi = u\) and \(\Delta \psi = v\) hold on V.

Remark 4.2

The zero mass conditions on u and v in Definition 4.1 are necessary and sufficient conditions for the solutions \(\varphi \) and \(\psi \) to the Poisson equations above to exist as we have seen in Sect. 3.4. These solutions are unique up to an additive constant. Note that the choice of this constant does not influence the value of the inner product.

Remark 4.3

It is useful to realize we can rewrite the inner product from Definition 4.1 as

$$\begin{aligned} \langle u, v \rangle _{H^{-1}} = \langle \varphi , \Delta \psi \rangle _{\mathcal {V}} = \langle \varphi , v \rangle _{\mathcal {V}} \quad \text {or} \quad \langle \varphi , v \rangle _{\mathcal {V}} = \langle \Delta \varphi , v \rangle _{H^{-1}}. \end{aligned}$$
(28)

Remark 4.4

Note that for a connected graph the expression in Definition 4.1 indeed defines an inner product on \(\mathcal {V}_0\), as \(\langle u,u\rangle _{H^{-1}} = 0\) implies that \((\nabla \varphi )_{ij}=0\) for all \(i,j\in V\) for which \(\omega _{ij}>0\). Hence, by connectivity, \(\varphi \) is constant on V and thus \(u=\Delta \varphi = 0\) on V.

The \(H^{-1}\) inner product then also gives us the \(H^{-1}\) norm:

$$\begin{aligned} \Vert u\Vert _{H^{-1}}^2 := \langle u, u \rangle _{H^{-1}} = \Vert \nabla \varphi \Vert _{\mathcal {E}}^2 = \langle u, \varphi \rangle _{\mathcal {V}}. \end{aligned}$$

Let \(k\in V\). By (5), if \(u\in \mathcal {V}\), then \(u-\mathcal {A}(u) \in \mathcal {V}_0\), and hence there exists a unique solution to the Poisson problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta \varphi = u-\mathcal {A}(u),\\ \varphi _k = 0, \end{array}\right. } \end{aligned}$$
(29)

which can be expressed using the Green’s function from (27). We say that this solution \(\varphi \) solves (29) for u. Because the kernel of \(\Delta \) contains only the constant functions, the solution \(\varphi \) for any other choice of k will only differ by an additive constant. Hence the norm

$$\begin{aligned} \Vert u-\mathcal {A}(u)\Vert _{H^{-1}}^2 = \Vert \nabla \varphi \Vert _{\mathcal {E}}^2 = \frac{1}{2} \sum _{i,j\in V} \omega _{ij} (\varphi _i-\varphi _j)^2, \end{aligned}$$

is independent of the choice of k. Note also that this norm in general does depend on r, since \(\varphi \) does. Contrast this with the Dirichlet energy in (6) which is independent of r. The norm does not depend on q.

Using the Green’s function expansion from (19) for \(\varphi \), with G being the Green’s function for the Poisson equation from (27), we can also write

$$\begin{aligned} \Vert u-\mathcal {A}(u)\Vert _{H^{-1}}^2 = \langle u-\mathcal {A}(u), \varphi \rangle _{\mathcal {V}}^2 = \sum _{i,j\in V} \left( u_i-\mathcal {A}(u)\right) d_i^r G_{ij} d_j^r \left( u_j - \mathcal {A}(u)\right) . \end{aligned}$$

Note that this expression seems to depend on the choice of k, via G, but by the discussion above we know in fact that it does not depend on k. This can also be seen as follows. A different choice for k, leads to an additive constant change in the function G, which leaves the norm unchanged, since \(\sum _{i\in V} d_i^r \left( u_i-\mathcal {A}(u)\right) = 0\).

Let \(W: {\mathbb {R}}\rightarrow {\mathbb {R}}\) be the double-well potential defined by, for all \(x\in {\mathbb {R}}\),

$$\begin{aligned} W(x):= x^2 (x-1)^2. \end{aligned}$$
(30)

Note that W has wells of equal depth located at \(x=0\) and \(x=1\).

Definition 4.5

For \(\varepsilon >0\), \(\gamma \ge 0\) and \(u\in \mathcal {V}\), we now define both the (epsilon) Ohta–Kawasaki functional (or diffuse interface Ohta–Kawasaki functional)

$$\begin{aligned} F_\varepsilon (u) := \frac{1}{2} \Vert \nabla u\Vert _{\mathcal {E}}^2 + \frac{1}{\varepsilon }\sum _{i\in V} W(u_i) + \frac{\gamma }{2} \Vert u-\mathcal {A}(u)\Vert _{H^{-1}}^2, \end{aligned}$$
(31)

and the limit Ohta–Kawasaki functional (or sharp interface Ohta–Kawasaki functional)

$$\begin{aligned} F_0(u) := \text {TV}(u) + \frac{\gamma }{2} \Vert u-\mathcal {A}(u)\Vert _{H^{-1}}^2. \end{aligned}$$
(32)

The nomenclature and notation is justified by the fact that \(F_0\) [with its domain restricted to \(\mathcal {V}^b\), see (8)] is the \(\Gamma \)-limit of \(F_\varepsilon \) for \(\varepsilon \rightarrow 0\) (this is shown by a straightforward adaptation of the results and proofs in van Gennip and Bertozzi  (2012, Section 3); see Section S3 in Supplementary Materials).

There are two minimization problems of interest here:

$$\begin{aligned}&\min _{u\in \mathcal {V}_M}F_\varepsilon (u), \end{aligned}$$
(33)
$$\begin{aligned}&\min _{u\in \mathcal {V}^b_M}F_0(u), \end{aligned}$$
(34)

for a given \(M\in {\mathbb {R}}\) for the first problem and a given \(M\in \mathfrak {M}\) for the second. In this paper we will mostly be concerned with the second problem (34).

In Lemma S6.7 in Supplementary Materials, we describe a useful symmetry of \(F_0\) for the star graph example.

4.2 Ohta–Kawasaki in Spectral Form

Because of the role the graph Laplacian plays in the Ohta–Kawasaki energies, it is useful to consider its spectrum. As is well known (see for example Chung 1997; von Luxburg 2007; van Gennip et al. 2014), for any \(r\in [0,1]\), the eigenvalues of \(\Delta \), which we will denote by

$$\begin{aligned} 0=\lambda _0 \le \lambda _1 \le \cdots \le \lambda _{n-1}, \end{aligned}$$
(35)

are real and nonnegative. The (algebraic and geometric) multiplicity of 0 as eigenvalue is equal to the number of connected components of the graph and the corresponding eigenspace is spanned by the indicator functions of those components. If \(G\in \mathcal {G}\), then G is connected, and thus, for all \(m\in \{1, \ldots , n-1\}\), \(\lambda _m>0\). We consider a set of corresponding \(\mathcal {V}\)-orthonormal eigenfunctions \(\phi ^m\in \mathcal {V}\), i.e. for all \(m,l \in \{0, \ldots , n-1\}\),

$$\begin{aligned} \Delta \phi ^m = \lambda _m \phi ^m, \quad \text {and} \quad \langle \phi ^m, \phi ^l\rangle _{\mathcal {V}} = \delta _{ml}, \end{aligned}$$
(36)

where \(\delta _{ml}\) denotes the Kronecker delta. Note that, since \(\Delta \) and \(\langle \cdot , \cdot \rangle _{\mathcal {V}}\) depend on r, but not on q, so do the eigenvalues \(\lambda _m\) and the eigenfunctions \(\phi ^m\).

For definiteness we chooseFootnote 9

$$\begin{aligned} \phi ^0 := ({\mathrm {vol}}\left( V\right) )^{-1/2} \chi _V. \end{aligned}$$
(37)

The eigenfunctions form a \(\mathcal {V}\)-orthonormal basis for \(\mathcal {V}\), hence, for any \(u\in \mathcal {V}\), we have

$$\begin{aligned} u = \sum _{m=0}^{n-1} a_m \phi ^m, \quad \text {where } \quad a_m := \langle u, \phi ^m\rangle _{\mathcal {V}}. \end{aligned}$$
(38)

As an example, Laplacian eigenvalues and eigenfunctions for the star graph are given in Lemma S6.8 in Supplementary Materials.

The following result will be useful later.

Lemma 4.6

Let \(G=(V,E,\omega )\in \mathcal {G}\), \(u\in \mathcal {V}\), and let \(\{\phi ^m\}_{m=0}^{n-1}\) be \(\mathcal {V}\)-orthonormal Laplacian eigenfunctions as in (36). Then

$$\begin{aligned} \sum _{m=0}^{n-1} \langle u,\phi ^m\rangle _{\mathcal {V}}^2 = \mathcal {M}(u^2). \end{aligned}$$

Proof

Let \(j\in V\) and define \(f\in \mathcal {V}\) by, for all \(i\in V\), \(f^j_i:=d_i^{-r} \delta _{ij}\), where \(\delta \) denotes the Kronecker delta. Using the expansion in (38), we find, for all \(i\in V\),

$$\begin{aligned} f^j_i = \sum _{m=0}^{n-1} \langle f^j, \phi ^m\rangle _{\mathcal {V}}\, \phi ^m_i = \sum _{m=0}^{n-1} \sum _{k\in V} d_i^{-r} \delta _{kj} d_k^r \phi ^m_k \phi ^m_i = \sum _{m=0}^{n-1} \phi ^m_j \phi ^m_i. \end{aligned}$$

Hence

$$\begin{aligned} \sum _{m=0}^{n-1} \langle u,\phi ^m\rangle _{\mathcal {V}}^2= & {} \sum _{m=0}^{n-1} \sum _{i,j\in V} u_i u_j d_i^r d_j^r \phi ^m_i \phi ^m_j = \sum _{i,j\in V} u_i u_j d_i^r d_j^r f^j_i \\= & {} \langle u^2, \chi _V\rangle _{\mathcal {V}} = \mathcal {M}(u^2). \end{aligned}$$

\(\square \)

Lemma 4.7

Let \(u\in \mathcal {V}\), \(k\in V\), then \(\varphi \) satisfies \(\Delta \varphi = u-\mathcal {A}(u)\), if and only if

$$\begin{aligned} \varphi = \mathcal {A}(\varphi ) + \sum _{m=1}^{n-1}\lambda _m^{-1} \langle u, \phi ^m\rangle _{\mathcal {V}}\, \phi ^m. \end{aligned}$$
(39)

Proof

Let \(\varphi \) satisfy \(\Delta \varphi = u - \mathcal {A}(u)\). Using expansions as in (38) for \(\varphi \) and \(u-\mathcal {A}(u)\), we have

$$\begin{aligned} \Delta \left( \sum _{m=0}^{n-1} a_m \phi ^m\right) = \Delta \varphi = u-\mathcal {A}(u) = \sum _{m=0}^{n-1} b_m \phi ^m, \end{aligned}$$

where, for all \(m\in \{0, \ldots , n-1\}\), \(a_m := \langle \varphi , \phi ^m\rangle _{\mathcal {V}}\) and \(b_m := \langle u-\mathcal {A}(u), \phi ^m\rangle _{\mathcal {V}}\). Hence \( \sum _{m=0}^{n-1} a_m \lambda _m \phi ^m = \sum _{m=0}^{n-1} b_m \phi ^m \) and therefore, for any \(l \in \{0, \ldots , n-1\}\),

$$\begin{aligned} a_l \lambda _l = \left\langle \sum _{m=0}^{n-1} a_m \lambda _m \phi ^m, \phi ^l\right\rangle _{\mathcal {V}} = \left\langle \sum _{m=0}^{n-1} b_m \phi ^m, \phi ^l \right\rangle _{\mathcal {V}} = b_l. \end{aligned}$$

In particular, if \(m\ge 1\), then \(a_m = \lambda _m^{-1} b_m\). Because \(\lambda _0=0\), the identity above does not constrain \(a_0\). Because, for \(m\ge 1\), \(\langle \phi ^0, \phi ^m\rangle =0\), it follows that, for \(m\in \{1, \ldots , n-1\}\),

$$\begin{aligned} b_m = \langle u-\mathcal {A}(u), \phi ^m\rangle _{\mathcal {V}}&= \langle u, \phi ^m\rangle _{\mathcal {V}} - \frac{\mathcal {M}(u)}{{\mathrm {vol}}\left( V\right) } \langle \chi _V, \phi ^m\rangle \nonumber \\&= \langle u, \phi ^m\rangle _{\mathcal {V}} - \frac{\mathcal {M}(u)}{{\mathrm {vol}}\left( V\right) } ({\mathrm {vol}}\left( V\right) )^{1/2} \langle \phi ^0, \phi ^m\rangle \nonumber \\&= \langle u, \phi ^m\rangle _{\mathcal {V}} . \end{aligned}$$
(40)

and therefore, for all \(m\in \{1, \ldots , n-1\}\), \( a_m = \lambda _m^{-1} \langle u, \phi ^m\rangle _{\mathcal {V}}. \) Furthermore

$$\begin{aligned} a_0 = \langle \varphi , \phi ^0\rangle _{\mathcal {V}} = ({\mathrm {vol}}\left( V\right) )^{-1/2} \langle \varphi , \chi _V\rangle _{\mathcal {V}} = ({\mathrm {vol}}\left( V\right) )^{-1/2} \mathcal {M}(\varphi ). \end{aligned}$$

Substituting these expressions for \(a_0\) and \(a_m\) into the expansion of \(\varphi \), we find that \(\varphi \) is as in (39).

Conversely, if \(\varphi \) is as in (39), a direct computation shows that \(\Delta \varphi = u-\mathcal {A}(u)\). \(\square \)

Remark 4.8

From Lemma 4.7 we see that we can write \(\varphi -\mathcal {A}(\varphi ) = \Delta ^\dagger (u-\mathcal {A}(u))\), where \(\Delta ^\dagger \) is the Moore–Penrose pseudoinverse of \(\Delta \) (Dresden 1920; Bjerhammer 1951; Penrose 1955).

Lemma 4.9

Let \(q\in [1/2,1]\), \(S\subset V\), and let \(\kappa _S^{q,r}\), \(\kappa _S\) be the graph curvatures from Definition 3.4, then

$$\begin{aligned} \text {TV}(\chi _S) = \sum _{m=1}^{n-1} \langle \kappa _S^{q,r}, \phi ^m\rangle _{\mathcal {V}}\, \langle \chi _S, \phi ^m\rangle _{\mathcal {V}}. \end{aligned}$$

Furthermore, if \(q=1\), then

$$\begin{aligned} \text {TV}(\chi _S)= & {} \sum _{m=1}^{n-1} \lambda _m \langle \chi _s, \phi ^m\rangle _{\mathcal {V}}^2 \end{aligned}$$
(41)
$$\begin{aligned}= & {} \sum _{m=1}^{n-1} \lambda _m^{-1} \langle \kappa _s, \phi ^m\rangle _{\mathcal {V}}^2. \end{aligned}$$
(42)

Proof

Using an expansion as in (38) for \(\chi _S\) together with (14), we find

$$\begin{aligned} \text {TV}(\chi _S)= & {} \left\langle \kappa _S^{q,r}, \sum _{m=0}^{n-1} \langle \chi _S, \phi ^m\rangle _{\mathcal {V}}\, \phi ^m \right\rangle _{\mathcal {V}} = \sum _{m=0}^{n-1} \langle \kappa _S^{q,r}, \phi ^m\rangle _{\mathcal {V}}\, \langle \chi _S, \phi ^m\rangle _{\mathcal {V}} \\= & {} \sum _{m=1}^{n-1} \langle \kappa _S^{q,r}, \phi ^m\rangle _{\mathcal {V}}\, \langle \chi _S, \phi ^m\rangle _{\mathcal {V}}, \end{aligned}$$

where the last equality follows from

$$\begin{aligned} \langle \kappa _S^{q,r}, \phi ^0\rangle _{\mathcal {V}} = ({\mathrm {vol}}\left( V\right) )^{-1/2} \left( \sum _{i\in S} \sum _{j\in S^c} \omega _{ij}^q (\chi _V)_i - \sum _{i\in S^c} \sum _{j\in S} \omega _{ij}^q (\chi _V)_i \right) = 0. \end{aligned}$$

Moreover, we use (15) to find \( \langle \chi _S, \lambda _m \phi ^m\rangle _{\mathcal {V}} = \langle \chi _S, \Delta \phi ^m\rangle _{\mathcal {V}} = \langle \Delta \chi _S, \phi ^m\rangle _{\mathcal {V}} = \langle \kappa _s, \phi ^m\rangle _{\mathcal {V}}, \) hence

$$\begin{aligned} \langle \chi _S, \phi ^m\rangle _{\mathcal {V}} = \lambda _m^{-1} \langle \kappa _s, \phi ^m\rangle _{\mathcal {V}}, \end{aligned}$$
(43)

If \(q=1\), such that \(\kappa _S^{q,r} = \kappa _S\), then (41) and (42) follow. \(\square \)

Lemma 4.10

Let \(q\in [1/2,1]\), \(S\subset V\), and let \(\kappa _S\) be the graph curvature (with \(q=1\)) from Definition 3.4, then

$$\begin{aligned} \Vert \chi _S-\mathcal {A}(\chi _S)\Vert _{H^{-1}}^2 = \sum _{m=1}^{n-1} \lambda _m^{-1} \langle \chi _S, \phi ^m\rangle _{\mathcal {V}}^2 = \sum _{m=1}^{n-1} \lambda _m^{-3} \langle \kappa _S, \phi ^m\rangle _{\mathcal {V}}^2. \end{aligned}$$

Proof

Let \(k\in V\) and let \(\varphi \in \mathcal {V}\) solve \( \Delta \varphi = \chi _S-\mathcal {A}(\chi _S)\), with \( \varphi _k = 0\). Using Lemma 4.7, we have \( \varphi - \mathcal {A}(\varphi ) = \sum _{m=1}^{n-1}\lambda _m^{-1} \langle \chi _S, \phi ^m\rangle _{\mathcal {V}}\, \phi ^m. \) Because \( \langle \mathcal {A}(\varphi ), \chi _S - \mathcal {A}(\chi _S)\rangle _{\mathcal {V}} = 0, \) we have

$$\begin{aligned} \Vert \chi _S-\mathcal {A}(\chi _S)\Vert _{H^{-1}}^2= & {} \langle \varphi - \mathcal {A}(\varphi ), \chi _S-\mathcal {A}(\chi _S)\rangle _{\mathcal {V}} \\= & {} \sum _{m=1}^{n-1} \lambda _m^{-1} \langle \chi _S, \phi ^m\rangle _{\mathcal {V}}\, \langle \phi ^m, \chi _S-\mathcal {A}(\chi _S)\rangle _{\mathcal {V}}. \end{aligned}$$

As in (40) (with u replaced by \(\chi _S\)), we have, for \(m\ge 1\), \(\langle \phi ^m, \mathcal {A}(\chi _S)\rangle _{\mathcal {V}} = 0\), and thus \( \Vert \chi _S-\mathcal {A}(\chi _S)\Vert _{H^{-1}}^2 = \sum _{m=1}^{n-1} \lambda _m^{-1} \langle \chi _S, \phi ^m\rangle _{\mathcal {V}}^2. \)

We use (43) to write \( \langle \chi _S, \phi ^m\rangle _{\mathcal {V}}^2 = \lambda _m^{-2} \langle \kappa _S, \phi ^m\rangle _{\mathcal {V}}^2, \) and therefore \( \Vert \chi _S-\mathcal {A}(\chi _S)\Vert _{H^{-1}}^2 = \sum _{m=1}^{n-1} \lambda _m^{-3} \langle \kappa _S, \phi ^m\rangle _{\mathcal {V}}^2. \) \(\square \)

Remark 4.11

Note that \(\Vert \chi _S-\mathcal {A}(\chi _S)\Vert _{H^{-1}}^2\) is independent of q and thus the results from Lemma 4.10 hold for all \(q\in [1/2,1]\). However, the formulation involving the graph curvature relies on (43) and thus on the identity (15) which holds for \(\kappa _S\) only, not for any \(\kappa _S^{q,r}\). If \(q\ne 1\) this leads to the somewhat unnatural situation of using \(\kappa _S\) (which corresponds to the case \(q=1\)) in a situation where \(q\ne 1\). Hence the curvature formulation in Lemma 4.10 is more natural, in this sense, when \(q=1\).

Corollary 4.12

Let \(q=1\), \(S\subset V\), and let \(F_0\) be the limit Ohta–Kawasaki functional from (32), then

$$\begin{aligned} F_0(\chi _S)= & {} \sum _{m=1}^{n-1} \left( \lambda _m + \gamma \lambda _m^{-1}\right) \langle \chi _s, \phi ^m\rangle _{\mathcal {V}}^2\nonumber \\= & {} \sum _{m=1}^{n-1} \left( \lambda _m^{-1} + \gamma \lambda _m^{-3}\right) \langle \kappa _s, \phi ^m\rangle _{\mathcal {V}}^2. \end{aligned}$$
(44)

Proof

This follows directly from the definition in (32) and Lemmas 4.9 and 4.10. \(\square \)

Lemma S6.9 in Supplementary Materials explicitly computes \(F_0(\chi _S)\) for the unweighted star graph example, which allows us, in Corollary S6.10, to solve the binary minimization problem (34) for this example graph. Remarks S6.11 and S6.12 provide further discussion on these results.

5 Graph MBO Schemes

5.1 The Graph Ohta–Kawasaki MBO Scheme

One way in which we can attempt to solve the \(F_\varepsilon \) minimization problem in (33) (and thus approximately the \(F_0\) minimization problem in (34) in the \(\Gamma \)-convergence sense of Section S3 in Supplementary Materials) is via a gradient flow. In Section S4 in Supplementary Materials we derive gradient flows with respect to the \(\mathcal {V}\) inner product (which, if \(r=0\) and each \(u\in \mathcal {V}\) is identified with a vector in \({\mathbb {R}}^n\), is just the Euclidean inner product on \({\mathbb {R}}^n\)) and with respect to the \(H^{-1}\) inner product which leads to the graph Allen–Cahn and graph Cahn–Hilliard type systems of equations, respectively. In our simulations later in the paper, however, we do not use these gradient flows, but we use the MBO approximation.

Heuristically, graph MBO type schemes [originally introduced in the continuum setting in Merriman et al. (1992) and Merriman et al. (1993)] can be seen as approximations to graph Allen–Cahn type equations [as in (S1)], obtained by replacing the double-well potential term in that equation by a hard thresholding step. This leads to the algorithm (OKMBO). In the algorithm we have used the set \(\mathcal {V}_\infty \), which we define to be the set of all functions \(u: [0, \infty ) \times V \rightarrow {\mathbb {R}}\) which are continuously differentiable in their first argument (which we will typically denote by t). For such functions, we will use the notation \(u_i(t) := u(t,i)\). We note that where before u and \(\varphi \) denoted functions in \(\mathcal {V}\), here these same symbols are used to denote functions in \(\mathcal {V}_\infty \).

For reasons that are explored in Remark S5.1 in Supplementary Materials, in the algorithm we use a variant of (29): for given \(u\in \mathcal {V}\), if \(\varphi \in \mathcal {V}\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta \varphi = u-\mathcal {A}(u),\\ \mathcal {M}(\varphi ) = 0, \end{array}\right. } \end{aligned}$$
(45)

we say \(\varphi \) solves (45) for u.

If \(\varphi \in \mathcal {V}\) solves (45) for a given \(u\in \mathcal {V}\) and \({\tilde{\varphi }} \in \mathcal {V}\) solves (29) for the same u and a given \(k\in V\), then \(\Delta (\varphi -{\tilde{\varphi }})=0\), hence there exists a \(C\in {\mathbb {R}}\), such that \(\varphi = {\tilde{\varphi }} + C\chi _V\). Because \({\tilde{\varphi }}_k=0\), we have \(C= \varphi _k\). In particular, because (29) has a unique solution, so does (45).

For a given \(\gamma \ge 0\), we define the operator \(L: \mathcal {V} \rightarrow \mathcal {V}\) as follows. For \(u\in \mathcal {V}\), let

$$\begin{aligned} Lu := \Delta u + \gamma \varphi , \end{aligned}$$
(46)

where \(\varphi \in \mathcal {V}\) is the solution to (45).

figure b

Remark 5.1

Since L, as defined in (46), is a continuous linear operator from \(\mathcal {V}\) to \(\mathcal {V}\) (see (55)), by standard ODE theory (Hale 2009, Chapter 1; Coddington and Levinson 1984, Chapter 1) there exists a unique, continuously differentiable-in-t, solution u of (47) on \((0,\infty )\times V\). In the threshold step of (OKMBO), however, we only require \(u(\tau )\), hence it suffices to compute the solution u on \((0,\tau ]\).

By standard ODE arguments (Hale 2009, Chapter III.4) we can write (and interpret) the solution of (47) as an exponential function: \(u(t)=e^{-tL}u_0\).

In Supplementary Materials, Remark S5.1 and Lemma S5.2 address the relationship between solutions of (29) and (45).

The next lemma will come in handy later in the paper.

Lemma 5.2

Let \(G=(V,E,\omega )\in \mathcal {G}\), \(\gamma \ge 0\), and \(u\in \mathcal {V}\), then the function

$$\begin{aligned}{}[0,\infty ) \rightarrow {\mathbb {R}}, t\mapsto \left\langle e^{-t L}u, u\right\rangle _{\mathcal {V}} \end{aligned}$$
(48)

is decreasing. Moreover, if u is not constant on V, then the function in (48) is strictly decreasing.

Furthermore, for all \(t>0\),

$$\begin{aligned} \frac{\hbox {d}}{\hbox {d}t} \mathcal {M}\left( e^{-tL} u_0\right) = 0. \end{aligned}$$

Proof

Using the expansion in (38) for u, we have

$$\begin{aligned} \left\langle e^{-t L}u, u\right\rangle _{\mathcal {V}}&= \left\langle \sum _{m=0}^{n-1} e^{-t \Lambda _m} \langle u,\phi ^m\rangle _{\mathcal {V}}\, \phi ^m, \sum _{l=0}^{n-1} \langle u,\phi ^l\rangle _{\mathcal {V}} \phi ^l\right\rangle _{\mathcal {V}}\nonumber \\&= \sum _{m,l=0}^{n-1} e^{-t\Lambda _m} \langle u,\phi ^m\rangle _{\mathcal {V}} \langle u,\phi ^l\rangle _{\mathcal {V}}\, \delta _{ml} = \sum _{m=0}^{n-1} e^{-t\Lambda _m} \langle u,\phi ^m\rangle _{\mathcal {V}}^2. \end{aligned}$$
(49)

Since, for each \(m\in \{0, \ldots , n-1\}\), the function \(t\mapsto e^{-t\Lambda _m}\) is decreasing, the function in (48) is decreasing. Moreover, for each \(m\in \{1, \ldots , n-1\}\), the function \(t\mapsto e^{-t\Lambda _m}\) is strictly decreasing; thus the function in (48) is strictly decreasing unless for all \(m\in \{1, \ldots , n-1\}\), \( \langle u,\phi ^m\rangle _{\mathcal {V}}=0\).

Assume that for all \(m\in \{1, \ldots , n-1\}\), \( \langle u,\phi ^m\rangle _{\mathcal {V}}=0\). Then, by the expansion in (38) and the expression in (37), we have \(u= \langle u, \phi ^0\rangle _{\mathcal {V}}\, \phi ^0 = {\mathrm {vol}}\left( V\right) ^{-1} \mathcal {M}(u) \chi _V\). Hence u is constant. Thus, if u is not constant, then the function in (48) is strictly decreasing.

The proof of the mass conservation property follows very closely the proof of (van Gennip et al. 2014, Lemma 2.6(a)). Using (4) and (45), we find \( \frac{\hbox {d}}{\hbox {d}t} \mathcal {M}(u(t)) = \mathcal {M}\left( \frac{\partial }{\partial t} u(t)\right) = -\mathcal {M}(\Delta u(t)) - \gamma \mathcal {M}(\varphi ) = 0. \) \(\square \)

The following lemma introduces a Lyapunov functional for the (OKMBO) scheme.

Lemma 5.3

Let \(G=(V,E,\omega ) \in \mathcal {G}\), \(\gamma \ge 0\), and \(\tau >0\). Define \(J_\tau : \mathcal {V} \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} J_\tau (u) := \langle \chi _V-u, e^{-\tau L} u \rangle _\mathcal {V}. \end{aligned}$$
(50)

Then the functional \(J_\tau \) is strictly concave and Fréchet differentiable, with directional derivative at \(u\in \mathcal {V}\) in the direction \(v\in \mathcal {V}\) given by

$$\begin{aligned} dJ_\tau ^u(v) := \langle \chi _V-2 e^{-\tau L} u, v\rangle _{\mathcal {V}}. \end{aligned}$$
(51)

Furthermore, if \(S^0\subset V\) and \(\{S^k\}_{k=1}^N\) is a sequence generated by (OKMBO), then for all \(k\in \{1, \ldots , N\}\),

$$\begin{aligned} \chi _{S^k} \in \underset{v\in \mathcal {K}}{\mathrm {argmin}}\, dJ_\tau ^{\chi _{S^{k-1}}}(v), \end{aligned}$$
(52)

where \(\mathcal {K}\) is as defined in (9). Moreover, \(J_\tau \) is a Lyapunov functional for the (OKMBO) scheme in the sense that, for all \(k\in \{1,\ldots , N\}\), \(J_\tau (\chi _{S^k}) \le J_\tau (\chi _{S^{k-1}})\), with equality if and only if \(S^k=S^{k-1}\).

Proof

This follows immediately from the proofs of (van Gennip et al. 2014, Lemma 4.5, Proposition 4.6) [(which in turn were based on the continuum case established in Esedoḡlu and Otto (2015)], as replacing \(\Delta \) in those proofs by L does not invalidate any of the statements. It is useful, however, to reproduce the proof here, especially with an eye to incorporating a mass constraint into the (OKMBO) scheme in Sect. 5.3.

First let \(u,v \in \mathcal {V}\) and \(s\in {\mathbb {R}}\), then we compute

$$\begin{aligned} \left. \frac{\hbox {d} J_\tau (u+sv)}{\hbox {d}s}\right| _{s=0} = \langle \chi _V - u, e^{-\tau L} v\rangle _{\mathcal {V}} - \langle v, e^{-\tau L} u\rangle _{\mathcal {V}} = \langle \chi _V - 2 e^{-\tau L} u, v\rangle _{\mathcal {V}}, \end{aligned}$$

where we used that \(e^{-\tau L}\) is a self-adjoint operator and \(e^{-\tau L} \chi _V = \chi _V\). Moreover, if \(v\in \mathcal {V}{\setminus }\{0\}\), then

$$\begin{aligned} \left. \frac{\hbox {d}^2 J_\tau (u+sv)}{\hbox {d}s^2}\right| _{s=0} = -2\langle v, e^{-\tau L} v\rangle _{\mathcal {V}} < 0, \end{aligned}$$

where the inequality follows for example from the spectral expansion in (49). Hence \(J_\tau \) is strictly concave.

To construct a minimizer v for the linear functional \(dJ_\tau ^{\chi _{S^{k-1}}}\) over \(\mathcal {K}\), we set \(v_i = 1\) whenever \(1-2\left( e^{-\tau L} \chi _{S^{k-1}}\right) _i \le 0\) and \(v_i = 0\) for those \(i\in V\) for which \(1-2\left( e^{-\tau L} \chi _{S^{k-1}}\right) _i > 0\).Footnote 10 The sequence \(\{S^k\}_{k=1}^N\) generated in this way by setting \(S^k = \{i\in V: v_i=1\}\) corresponds exactly to the sequence generated by (OKMBO).

Finally we note that, since \(J_\tau \) is strictly concave and \(dJ_\tau ^{\chi _{S^{k-1}}}\) is linear, we have, if \(\chi _{S^{k+1}} \ne \chi _{S^k}\), then

$$\begin{aligned} J_\tau \left( \chi _{S^{k+1}}\right) - J_\tau \left( \chi _{S^k}\right) < dJ_\tau ^{\chi _{S^k}}\left( \chi _{S^{k+1}}-\chi _{S^k}\right) = dJ_\tau ^{\chi _{S^k}}\left( \chi _{S^{k+1}}\right) -dJ_\tau ^{\chi _{S^k}}\left( \chi _{S^k}\right) \le 0, \end{aligned}$$

where the last inequality follows because of (52). Clearly, if \(\chi _{S^{k+1}} = \chi _{S^k}\), then \(J_\tau \left( \chi _{S^{k+1}}\right) - J_\tau \left( \chi _{S^k}\right) = 0\). \(\square \)

Remark 5.4

It is worth elaborating briefly on the underlying reason why (52) is the right minimization problem to consider in the setting of Lemma 5.3. As is standard in sequential linear programming the minimization of \(J_\tau \) over \(\mathcal {K}\) is attempted by approximating \(J_\tau \) by its linearization,

$$\begin{aligned} J_\tau (u) \approx J_\tau \left( \chi _{S^k}\right) + d J_\tau ^{\chi _{S^{k-1}}}\left( u-\chi _{S^k}\right) = J_\tau \left( \chi _{S^k}\right) + d J_\tau ^{\chi _{S^{k-1}}}\left( u\right) -d J_\tau ^{\chi _{S^{k-1}}}\left( \chi _{S^k}\right) , \end{aligned}$$

and minimizing this linear approximation over all admissible \(u\in \mathcal {K}\).

We can use Lemma 5.3 to prove that the (OKMBO) scheme converges in a finite number of steps to stationary state in sense of the following corollary.

Corollary 5.5

Let \(G=(V,E,\omega ) \in \mathcal {G}\), \(\gamma \ge 0\), and \(\tau >0\). If \(S^0\subset V\) and \(\{S^k\}_{k=1}^N\) is a sequence generated by (OKMBO), then there is a \(K\ge 0\) such that, for all \(k\ge K\), \(S^k = S^K\).

Proof

If \(N\in {\mathbb {N}}\) the statement is trivially true, so now assume \(N=\infty \). Because \(|V|<\infty \), there are only finitely many different possible subsets of V, hence there exists \(K, k'\in {\mathbb {N}}\) such that \(k' > K'\) and \(S^K=S^{k'}\). Hence the set in \(l:=\min \{l'\in {\mathbb {N}}: S^K=S^{K+l'}\}\)Footnote 11 is not empty and thus \(l\ge 1\). If \(l \ge 2\), then by Lemma 5.3 we know that \( J_\tau (\chi _{S^{K+l}})< J_\tau (\chi _{S^{K+l-1}})< \cdots < J_\tau (\chi _{S^K}) = J_\tau (\chi _{S^{K+l}}). \) This is a contradiction, hence \(l=1\) and thus \(S^K=S^{K+1}\). Because equation (47) has a unique solution (as noted in Remark 5.1), we have, for all \(k \ge K\), \(S^k = S^K\). \(\square \)

Remark 5.6

For given \(\tau >0\), the minimization problem

$$\begin{aligned} u \in \underset{v\in \mathcal {K}}{\mathrm {argmin}}\, J_\tau (v) \end{aligned}$$
(53)

has a solution \(u\in \mathcal {V}^b\), because \(J_\tau \) is strictly concave and \(\mathcal {K}\) is compact and convex. This solution is not unique; for example, if \({\tilde{u}} = \chi _V-u\), then, since \(e^{-\tau L}\) is self-adjoint, we have

$$\begin{aligned} J_\tau (u) = \langle {\tilde{u}}, e^{-tL}(\chi _V-u)\rangle _{\mathcal {V}} = \langle \chi _V-{\tilde{u}}, e^{-\tau L} {\tilde{u}}\rangle _{\mathcal {V}} = J_\tau ({\tilde{u}}). \end{aligned}$$

Lemma 5.3 shows that \(J_\tau \) does not increase in value along a sequence \(\{S^k\}_{k=1}^N\) of sets generated by the (OKMBO) algorithm, but this does not guarantee that (OKMBO) converges to the solution of the minimization problem in (53). In fact, we see in Lemma S5.10 and Remark S5.11 in Supplementary Materials that for every \(S^0\subset V\) there is a value \(\tau _\rho (S^0)\) such that \(S^1=S\) if \(\tau <\tau _\rho (S^0)\). Hence, unless \(S^0\) happens to be a solution to (53), if \(\tau <\tau _\rho (S^0)\) the (OKMBO) algorithm will not converge to a solution. This observation and related issues concerning the minimization of \(J_\tau \) will become important in Sect. 5.2, see for example Remarks 5.12 and 5.16.

We end this section with a look at the spectrum of L, which plays a role in our further study of (OKMBO). More information is given in Section S5.2 of Supplementary Materials. Moreover, in Section S5.3 we use this information about the spectrum to derive pinning and spreading bounds on the parameter \(\tau \) in (OKMBO) along similar lines as corresponding results in van Gennip et al. (2014).

Remark 5.7

Remembering from Remark 4.8 the Moore–Penrose pseudoinverse of \(\Delta \), which we denote by \(\Delta ^\dagger \), we see that the condition \(\mathcal {M}(\varphi ) = 0\) in (45) allows us to write \(\varphi = \Delta ^\dagger (u-\mathcal {A}(u))\). In particular, if \(\varphi \) satisfies (45), then

$$\begin{aligned} \varphi = \sum _{m=1}^{n-1}\lambda _m^{-1} \langle u, \phi ^m\rangle _{\mathcal {V}} \ \phi ^m, \end{aligned}$$
(54)

where \(\lambda _m\) and \(\phi ^m\) are the eigenvalues of \(\Delta \) and corresponding eigenfunctions, respectively, as in (35), (36). Hence, if we expand u as in (38) and L is the operator defined in (46), then

$$\begin{aligned} L(u) = \sum _{m=1}^{n-1} \left( \lambda _m+ \frac{\gamma }{\lambda _m}\right) \langle u, \phi ^m\rangle _{\mathcal {V}} \ \phi ^m. \end{aligned}$$
(55)

In particular, \(L:\mathcal {V}\rightarrow \mathcal {V}\) is a continuous, linear, bounded, self-adjoint, operator and for every \(c\in {\mathbb {R}}\), \(L(c\chi _V)=0\). If, given a \(u_0\in \mathcal {V}\), \(u\in \mathcal {V}_\infty \) solves (47), then we have that \(u(t) = e^{-tL}u_0\). Note that the operator \(e^{-tL}\) is self-adjoint, because L is self-adjoint.

Lemma 5.8

Let \(G=(V,E,\omega )\in \mathcal {G}\), \(\gamma \ge 0\), and let \(L: \mathcal {V} \rightarrow \mathcal {V}\) be the operator defined in (46), then L has n eigenvalues \(\Lambda _m\) (\(m\in \{0, \ldots , n-1\}\)), given by

$$\begin{aligned} \Lambda _m = {\left\{ \begin{array}{ll} 0, &{} \text {if }\quad m=0,\\ \lambda _m + \frac{\gamma }{\lambda _m}, &{} \text {if }\quad m\ge 1, \end{array}\right. } \end{aligned}$$
(56)

where the \(\lambda _m\) are the eigenvalues of \(\Delta \) as in (35). The set \(\{\phi ^m\}_{m=0}^{n-1}\) from (36) is a set of corresponding eigenfunctions. In particular, L is positive semidefinite.

Proof

This follows from (55) and the fact that \(\lambda _0=0\) and, for all \(m\ge 1\), \(\lambda _m>0\). \(\square \)

In the remainder of this paper we use the notation \(\lambda _m\) for the eigenvalues of \(\Delta \) and \(\Lambda _m\) for the eigenvalues of L, with corresponding eigenfunctions \(\phi ^m\), as in (35), (36), and Lemma 5.8.

Using an expansion as in (38) and the eigenfunctions and eigenvalues as in Lemma 5.8 in the main paper, we can write the solution to (47) as

$$\begin{aligned} u(t) = \sum _{m=0}^{n-1} e^{-t \Lambda _m} \langle u_0, \phi ^m\rangle _{\mathcal {V}}\, \phi ^m. \end{aligned}$$
(57)

5.2 \(\Gamma \)-Convergence of the Lyapunov Functional

In this section we prove that the functionals \({\tilde{J}}_\tau : \mathcal {K}\rightarrow \overline{{\mathbb {R}}}\), defined by

$$\begin{aligned} {\tilde{J}}_\tau (u) := \frac{1}{\tau }\langle \chi _V-u, e^{-\tau L} u \rangle _\mathcal {V}, \end{aligned}$$
(58)

for \(\tau >0\), \(\Gamma \)-converge to \({\tilde{F}}_0: \mathcal {K}\rightarrow \overline{{\mathbb {R}}}\) as \(\tau \rightarrow 0\), where \({\tilde{F}}_0\) is defined by

$$\begin{aligned} {\tilde{F}}_0(u) := {\left\{ \begin{array}{ll} F_0(u), &{}\text {if }\quad u\in \mathcal {K}\cap \mathcal {V}^b,\\ +\,\infty , &{}\text {otherwise}, \end{array}\right. } \end{aligned}$$
(59)

where \(F_0\) is as in (32) with \(q=1\).Footnote 12 We use the notation \(\overline{{\mathbb {R}}}:= {\mathbb {R}}\cup \{-\infty ,+\infty \}\) for the extended real line. Remember that the set \(\mathcal {K}\) was defined in (9) as the subset of all [0, 1]-valued functions in \(\mathcal {V}\). We note that \({\tilde{J}}_\tau = \frac{1}{\tau }\left. J_\tau \right| _{\mathcal {K}}\), where \(J_\tau \) is the Lyapunov functional from Lemma 5.3. Compare the results in this section with the continuum results in (Esedoḡlu and Otto 2015, Appendix).

In this section we will encounter different variants of the functional \(\frac{1}{\tau }J_\tau \), such as \({\tilde{J}}_\tau \), \(\overline{J}_\tau \), and \(\overline{\overline{J}}_\tau \), and similar variants of \(F_0\). The differences between these functionals are the domains on which they are defined and the parts of their domains on which they take finite values: \({\tilde{J}}_\tau \) is defined on all of \(\mathcal {K}\), while \(\overline{J}_\tau \) and \(\overline{\overline{J}}_\tau \) (which will be defined later in this section) incorporate a mass constraint and relaxed mass constraint in their domains, respectively. For technical reasons, we thought it is prudent to distinguish these functionals through their notation, but intuitively they can be thought of as the same functional with different types of constraints (or lack thereof).

For sequences in \(\mathcal {V}\) we use convergence in the \(\mathcal {V}\)-norm, i.e. if \(\{u_k\}_{k\in {\mathbb {N}}}\subset \mathcal {V}\), then we say \(u_k\rightarrow u\) as \(k\rightarrow \infty \) if \(\Vert u_k-u\Vert _{\mathcal {V}} \rightarrow 0\) as \(k\rightarrow \infty \). Note, however, that all norms on the finite space \(\mathcal {V}\) induce equivalent topologies, so different norms can be used without this affecting the convergence results in this section.

Lemma 5.9

Let \(G=(V,E,\omega )\in \mathcal {G}\). Let \(\{u_k\}_{k\in {\mathbb {N}}}\subset \mathcal {V}\) and \(u\in \mathcal {V}{\setminus }\mathcal {V}^b\) be such that \(u_k\rightarrow u\) as \(k\rightarrow \infty \). Then there exists an \(i\in V\), an \(\eta >0\), and a \(K>0\) such that for all \(k\ge K\) we have \((u_k)_i \in {\mathbb {R}}{\setminus }\big ([-\eta ,\eta ]\cup [1-\eta ,1+\eta ]\big )\).

Proof

Because \(u\in \mathcal {V}{\setminus }\mathcal {V}^b\), there is an \(i\in V\) such that \(u_i\not \in \{0,1\}\). Since \(G\in \mathcal {G}\), we know that \(d_i^r>0\). Thus, since \(u_k\rightarrow u\) as \(k\rightarrow \infty \), we know that for every \({\hat{\eta }}>0\) there exists a \(K({\hat{\eta }})>0\) such that for all \(k\ge K({\hat{\eta }})\) we have \(\left| (u_k)_i - u_i\right| < {\hat{\eta }}\). Define \(\displaystyle \eta := \frac{1}{2} \min \{\left| u_i\right| , \left| u_i-1\right| \}>0. \) Then, for all \(k\ge K(\eta )\), we have \(\displaystyle \left| (u_k)_i\right| \ge \big | \left| (u_k)_i - u_i\right| - \left| u_i\right| \big | > \frac{1}{2} |u_i| \ge \eta \) and similarly \(\displaystyle \left| (u_k)_i-1\right| > \eta . \) \(\square \)

Theorem 5.10

(\(\Gamma \)-convergence). Let \(G=(V,E,\omega )\in \mathcal {G}\), \(q=1\), and \(\gamma \ge 0\). Let \(\{\tau _k\}_{k \in {\mathbb {N}}}\) be a sequence of positive real numbers such that \(\tau _k\rightarrow 0\) as \(k\rightarrow \infty \). Let \(u\in \mathcal {K}\). Then the following lower bound and upper bound hold:

  1. (LB)

    for every sequence \(\{u_k\}_{k\in {\mathbb {N}}} \subset \mathcal {K}\) such that \(u_k \rightarrow u\) as \(k\rightarrow \infty \), \({\tilde{F}}_0(u) \le \liminf \limits _{k\rightarrow \infty }\, {\tilde{J}}_{\tau _k}(u_k)\), and

  2. (UB)

    there exists a sequence \(\{u_k\}_{k\in {\mathbb {N}}} \subset \mathcal {K}\) such that \(u_k \rightarrow u\) as \(k\rightarrow \infty \) and \(\limsup \limits _{k\rightarrow \infty }\, {\tilde{J}}_{\tau _k}(u_k) \le {\tilde{F}}_0(u)\).

Proof

With \(J_\tau \) the Lyapunov functional from Lemma 5.3, we compute, for \(\tau >0\) and \(u\in \mathcal {V}\),

$$\begin{aligned} J_\tau (u) = \langle \chi _V-u, e^{-\tau L} u \rangle _{\mathcal {V}} = \mathcal {M}\left( e^{-\tau L} u\right) - \langle u, e^{-\tau L} u \rangle _{\mathcal {V}} = \mathcal {M}(u) - \langle u, e^{-\tau L} u \rangle _{\mathcal {V}}, \end{aligned}$$

where we used the mass conservation property from Lemma 5.2. Using the expansion in (38) and Lemma 4.6, we find

$$\begin{aligned} \frac{1}{\tau }J_\tau (u)&= \frac{1}{\tau }\mathcal {M}(u) - \sum _{m=0}^{n-1} \frac{e^{-\tau \Lambda _m}-1}{\tau }\langle u, \phi ^m\rangle _{\mathcal {V}}^2 -\frac{1}{\tau }\sum _{m=0}^{n-1} \langle u, \phi ^m\rangle _{\mathcal {V}}^2\nonumber \\&= - \sum _{m=0}^{n-1} \frac{e^{-\tau \Lambda _m}-1}{\tau }\langle u, \phi ^m\rangle _{\mathcal {V}}^2 +\frac{1}{\tau }\left( \mathcal {M}(u)-\mathcal {M}(u^2)\right) . \end{aligned}$$

Now we prove (LB). Let \(\{\tau _k\}_{k \in {\mathbb {N}}}\) and \(\{u_k\}_{n\in {\mathbb {N}}} \subset \mathcal {K}\) be as stated in the theorem. Then, for all \(m\in \{0, \ldots , n-1\}\), we have that \(\displaystyle -\underset{k \rightarrow \infty }{\lim }\, \frac{e^{-{\tau _k} \Lambda _m}-1}{\tau _k} = \Lambda _m\) and \(\displaystyle \underset{k \rightarrow \infty }{\lim }\, \langle u_k, \phi ^m\rangle _{\mathcal {V}}^2 = \langle u, \phi ^m\rangle _{\mathcal {V}}^2\), hence

$$\begin{aligned} \underset{k\rightarrow \infty }{\lim }\, - \sum _{m=0}^{n-1} \frac{e^{-\tau _k \Lambda _m}-1}{\tau _k} \langle u_k, \phi ^m\rangle _{\mathcal {V}}^2 = \sum _{m=0}^{n-1} \Lambda _m \langle u, \phi ^m\rangle _{\mathcal {V}}^2 \ge 0. \end{aligned}$$
(60)

Moreover, if \(u\in \mathcal {K} \cap \mathcal {V}^b\), then, combining the above with (44) (remember that \(q=1\)) and Lemma 5.8, we find

$$\begin{aligned} \underset{k\rightarrow \infty }{\lim }\, - \sum _{m=0}^{n-1} \frac{e^{-\tau _k \Lambda _m}-1}{\tau _k} \langle u_k, \phi ^m\rangle _{\mathcal {V}}^2 = F_0(u). \end{aligned}$$
(61)

Furthermore, since, for every \(k\in {\mathbb {N}}\), \(u_k\in \mathcal {K}\), we have that, for all \(i\in V\), \(u_i^2 \le u_i\) and thus \(\mathcal {M}(u_k)-\mathcal {M}(u_k^2) \ge 0\). Hence

$$\begin{aligned} \underset{k\rightarrow \infty }{\liminf }\, {\tilde{J}}_{\tau _k}(u_k) \ge -\underset{k\rightarrow \infty }{\liminf }\, \sum _{m=0}^{n-1} \frac{e^{-\tau _k \Lambda _m}-1}{\tau _k} \langle u_k, \phi ^m\rangle _{\mathcal {V}}^2= F_0(u). \end{aligned}$$

Assume now that \(u\in \mathcal {K}{\setminus }\mathcal {V}_b\) instead, then by Lemma 5.9 it follows that there are an \(i\in V\) and an \(\eta >0\), such that, for all k large enough, \((u_k)_i \in (\eta , 1-\eta )\). Thus, for all k large enough,

$$\begin{aligned} \mathcal {M}(u_k) - \mathcal {M}(u_k^2) \ge d_i^r (u_k)_i (1-(u_k)_i)> d_i^r \eta ^2 > 0. \end{aligned}$$

Combining this with (60) we deduce

$$\begin{aligned} \underset{k\rightarrow \infty }{\liminf }\, {\tilde{J}}_{\tau _k}(u_k) \ge \underset{k\rightarrow \infty }{\liminf }\, \frac{1}{\tau _k} \big (\mathcal {M}(u_k) - \mathcal {M}(u_k^2) \big ) = +\infty = F_0(u), \end{aligned}$$

which completes the proof of (LB).

To prove (UB), first we note that, if \(u\in \mathcal {K}{\setminus }\mathcal {V}_b\), then \(F_0(u)=+\infty \) and the upper bound inequality is trivially satisfied. If instead \(u\in \mathcal {K} \cap \mathcal {V}^b\), then we define a so-called recovery sequence as follows: for all \(k\in {\mathbb {N}}\), \(u_k:=u\). We trivially have that \(u_k\rightarrow u\) as \(k\rightarrow \infty \). Moreover, since \(u=u^2\), we find, for all \(k\in {\mathbb {N}}\), \(\mathcal {M}(u_k)-\mathcal {M}(u_k^2) = 0\). Finally we find

$$\begin{aligned} \underset{k\rightarrow \infty }{\limsup }\, {\tilde{J}}_{\tau _k}(u_k) =- \underset{k\rightarrow \infty }{\lim }\, \sum _{m=0}^{n-1} \frac{e^{-\tau _k \Lambda _m}-1}{\tau _k} \langle u, \phi ^m\rangle _{\mathcal {V}}^2 = F_0(u), \end{aligned}$$

where we used a similar calculation as in (61). \(\square \)

Theorem 5.11

(Equi-coercivity). Let \(G=(V,E,\omega )\in \mathcal {G}\) and \(\gamma \ge 0\). Let \(\{\tau _k\}_{k \in {\mathbb {N}}}\) be a sequence of positive real numbers such that \(\tau _k\rightarrow 0\) as \(k\rightarrow \infty \) and let \(\{u_k\}_{k\in {\mathbb {N}}} \subset \mathcal {K}\) be a sequence for which there exists a \(C>0\) such that, for all \(k\in {\mathbb {N}}\), \({\tilde{J}}_\tau (u_k)\le C\). Then there is a subsequence \(\{u_{k_l}\}_{l \in {\mathbb {N}}} \subset \{u_k\}_{k\in {\mathbb {N}}}\) and a \(u\in \mathcal {V}^b\) such that \(u_{k_l} \rightarrow u\) as \(l\rightarrow \infty \).

Proof

Since, for all \(k\in {\mathbb {N}}\), we have \(u_k\in \mathcal {K}\), it follows that, for all \(k\in {\mathbb {N}}\), \(0\le \Vert u_k\Vert _2 \le \sqrt{n}\), where \(\Vert \cdot \Vert _2\) denotes the usual Euclidean norm on \({\mathbb {R}}^n\) pulled back to \(\mathcal {V}\) via the natural identification of each function in \(\mathcal {V}\) with one and only one vector in \({\mathbb {R}}^n\) (thus, it is the norm \(\Vert \cdot \Vert _{\mathcal {V}}\) if \(r=0\)). By the Bolzano–Weierstrass theorem it follows that there is a subsequence \(\{u_{k_l}\}_{l \in {\mathbb {N}}} \subset \{u_k\}_{k\in {\mathbb {N}}}\) and a \(u\in \mathcal {V}\) such that \(u_{k_l} \rightarrow u\) with respect to the norm \(\Vert \cdot \Vert _2\) as \(l\rightarrow \infty \). Because the \(\mathcal {V}\)-norm is topologically equivalent to the \(\Vert \cdot \Vert _2\) norm (explicitly, \(d_-^{\frac{r}{2}} \Vert \cdot \Vert _2 \le \Vert \cdot \Vert _{\mathcal {V}} \le d_+^{\frac{r}{2}} \Vert \cdot \Vert _2\)), we also have \(u_{k_l} \rightarrow u\) as \(l\rightarrow \infty \). Moreover, since convergence with respect to \(\Vert \cdot \Vert _2\) implies convergence of each component of \((u_{k_l})_i\) (\(i\in V\)) in \({\mathbb {R}}\) we have \(u\in \mathcal {K}\).

Next we compute

$$\begin{aligned} \tau _{k_l} {\tilde{J}}_{\tau _{k_l}}(u_{k_l})= & {} \langle \chi _V-u_{k_l}, e^{-\tau _{k_l} L} u_{k_l}\rangle _{\mathcal {V}} = \langle \chi _V, u_{k_l}\rangle _{\mathcal {V}} - \langle u_{k_l}, e^{-\tau _{k_l} L} u_{k_l}\rangle _{\mathcal {V}} \nonumber \\\ge & {} \langle \chi _V - u_{k_l}, u_{k_l}\rangle _{\mathcal {V}}, \end{aligned}$$
(62)

where we used both the mass conservation property \(\langle \chi _V, e^{-\tau _{k_l} L} u_{k_l}\rangle _{\mathcal {V}} = \langle \chi _V, u_{k_l}\rangle _{\mathcal {V}}\) and the inequaltiy \(\langle u_{k_l}, e^{-\tau _{k_l} L} u_{k_l}\rangle _{\mathcal {V}} \le \langle u_{k_l}, u_{k_l}\rangle _{\mathcal {V}}\) from Lemma 5.2. Thus, for all \(l\in {\mathbb {N}}\), we have

$$\begin{aligned} 0 \le \langle \chi _V - u_{k_l}, u_{k_l}\rangle _{\mathcal {V}} \le C \tau _{k_l}. \end{aligned}$$
(63)

Assume that \(u\in \mathcal {K}{\setminus }\mathcal {V}^b\), then there is an \(i\in V\) such that \(0<u_i<1\). Hence, by Lemma 5.9, there is a \(\delta >0\) such that for all l large enough, \(\delta< (u_{k_l})_i < 1-\delta \) and thus

$$\begin{aligned} \langle \chi _V - u_{k_l}, u_{k_l}\rangle _{\mathcal {V}} \ge d_i^r \big (1-(u_{k_l})_i\big ) (u_{k_l})_i \ge d_i^r \delta ^2. \end{aligned}$$
(64)

Let l be large enough such that \(C \tau _{k_l} < d_i^r \delta ^2\) and large enough such that (64) holds. Then we have arrived at a contradiction with (63) and thus we conclude that \(u\in \mathcal {V}^b\).

\(\square \)

Remark 5.12

The computation in (62) shows that, for all \(\tau >0\) and for all \(u\in \mathcal {K}\), we have \(\tau {\tilde{J}}_\tau (u) \ge \langle \chi _V-u, u\rangle _{\mathcal {V}} \ge 0\). Moreover, we have \({\tilde{J}}_\tau (0) = {\tilde{J}}_\tau (\chi _V) = 0\). Furthermore, since each term of the sum in the inner product is nonnegative, we have \(\langle \chi _V-u, u\rangle _{\mathcal {V}}=0\) if and only if \(u=0\) or \(u=\chi _V\). Hence we also have \({\tilde{J}}_\tau (u) = 0\) if and only if \(u=0\) or \(u=\chi _V\). The minimization of \({\tilde{J}}_\tau \) over \(\mathcal {K}\) is thus not a very interesting problem. Therefore we now extend our \(\Gamma \)-convergence and equi-coercivity results from above to incorporate a mass constraint.

As an aside, note that Lemma S5.10 and Remark S5.11 in Supplementary Materials guarantee that for \(\tau \) large enough and \(S^0\) such that \({\mathrm {vol}}\left( S^0\right) \ne \frac{1}{2}{\mathrm {vol}}\left( V\right) \), the (OKMBO) algorithm converges in at most one step to the minimizer \(\emptyset \) or the minimizer V.

Let \(M\in \mathfrak {M}\), where \(\mathfrak {M}\) is the set of admissible masses as defined in (11). Remember from (10) that \(\mathcal {K}_M\) is the set of [0, 1]-valued functions in \(\mathcal {V}\) with mass equal to M. For \(\tau >0\) we define the following functionals with restricted domain. Define \(\overline{J}_\tau : \mathcal {K}_M\rightarrow \overline{{\mathbb {R}}}\) by \(\overline{J}_\tau := \left. {\tilde{J}}_\tau \right| _{\mathcal {K}_M}\), where \({\tilde{J}}_\tau \) is as defined above in (58). Also define \(\overline{F}_0: \mathcal {K}_M\rightarrow \overline{{\mathbb {R}}}\) by \( \overline{F}_0(u) := \left. {\tilde{F}}_0\right| _{\mathcal {K}_M}, \) where \({\tilde{F}}_0\) is as in (59), with \(q=1\). Note that by definition, \({\tilde{F}}_0\), and thus \(\overline{F}_0\), do not assign a finite value to functions u that are not in \(\mathcal {V}^b\).

Theorem 5.13

Let \(G=(V,E,\omega )\in \mathcal {G}\), \(q=1\), \(\gamma \ge 0\), and \(M\in \mathfrak {M}\). Let \(\{\tau _k\}_{k \in {\mathbb {N}}}\) be a sequence of positive real numbers such that \(\tau _k\rightarrow 0\) as \(k\rightarrow \infty \). Let \(u\in \mathcal {K}_M\). Then the following lower bound and upper bound hold:

  1. (LB)

    for every sequence \(\{u_k\}_{k\in {\mathbb {N}}} \subset \mathcal {K}_M\) such that \(u_k \rightarrow u\) as \(k\rightarrow \infty \), \(\overline{F}_0(u) \le \liminf \limits _{k\rightarrow \infty }\, \overline{J}_{\tau _k}(u_k)\), and

  2. (UB)

    there exists a sequence \(\{u_k\}_{k\in {\mathbb {N}}} \subset \mathcal {K}_M\) such that \(u_k \rightarrow u\) as \(k\rightarrow \infty \) and \(\limsup \limits _{k\rightarrow \infty }\, \overline{J}_{\tau _k}(u_k) \le \overline{F}_0(u)\).

Furthermore, if \(\{v_k\}_{k\in {\mathbb {N}}} \subset \mathcal {K}_M\) is a sequence for which there exists a \(C>0\) such that, for all \(k\in {\mathbb {N}}\), \(\overline{J}_\tau (v_k)\le C\), then there is a subsequence \(\{v_{k_l}\}_{l \in {\mathbb {N}}} \subset \{v_k\}_{k\in {\mathbb {N}}}\) and a \(v\in \mathcal {K}_M \cap \mathcal {V}^b\) such that \(v_{k_l} \rightarrow v\) as \(l\rightarrow \infty \).

Proof

We note that any converging sequence in \(\mathcal {K}_M\) with limit u is also a converging sequence in \(\mathcal {K}\) with limit u. Moreover, on \(\mathcal {K}_M\) we have \(\overline{J}_{\tau _k} = {\tilde{J}}_{\tau _k}\) and \(\overline{F}_0 = {\tilde{F}}_0\). Hence (LB) follows directly from (LB) in Theorem 5.10.

For (UB) we note that if we define, for all \(k\in {\mathbb {N}}\), \(u_k:=u\), then trivially the mass constraint on \(u_k\) is satisfied for all \(k\in {\mathbb {N}}\) and the result follows by a proof analogous to that of (UB) in Theorem 5.10.

Finally, for the equi-coercivity result, we first note that by Theorem 5.11 we immediately get the existence of a subsequence \(\{v_{k_l}\}_{l \in {\mathbb {N}}} \subset \{v_k\}_{k\in {\mathbb {N}}}\) which converges to some \(v\in \mathcal {K}\). Since the functional \(\mathcal {M}\) is continuous with respect to \(\mathcal {V}\)-convergence, we conclude that in fact \(v\in \mathcal {K}_M\). \(\square \)

Remark 5.14

Note that for \(\tau >0\), \(M\in \mathfrak {M}\), and \(u\in \mathcal {K}_M\), we have

$$\begin{aligned} \tau \overline{J}_\tau (u)= & {} \mathcal {M}(u) - \langle u, e^{\tau L}u\rangle _{\mathcal {V}} = M - \sum _{m=0}^{n-1} e^{-\tau \Lambda _m} \langle u, \phi ^m\rangle _{\mathcal {V}}^2 \\= & {} M\left( 1-\frac{M}{{\mathrm {vol}}\left( V\right) }\right) - \sum _{m=1}^{n-1} e^{-\tau \Lambda _m} \langle u, \phi ^m\rangle _{\mathcal {V}}^2. \end{aligned}$$

Hence finding the minimizer of \(\overline{J}_\tau \) in \(\mathcal {K}_M\) is equivalent to finding the maximizer of \(\displaystyle u\mapsto \sum \nolimits _{m=1}^{n-1} e^{-\tau \Lambda _m} \langle u, \phi ^m\rangle _{\mathcal {V}}^2\) in \(\mathcal {K}_M\).

The following result shows that the \(\Gamma \)-convergence and equi-coercivity results still hold, even if the mass conditions are not strictly satisfied along the sequence.

Corollary 5.15

Let \(G=(V,E,\omega )\in \mathcal {G}\), \(q=1\), and \(\gamma \ge 0\) and let \(\mathcal {C} \subset \mathfrak {M}\) be a set of admissible masses. For each \(k\in {\mathbb {N}}\), let \(\mathcal {C}_k \subset [0,\infty )\) be such that \(\displaystyle \underset{k\in {\mathbb {N}}}{\bigcap }\mathcal {C}_k = \mathcal {C}\) and define, for all \(k\in {\mathbb {N}}\),

$$\begin{aligned} \mathcal {K}_M^k := \{u \in \mathcal {K}: \mathcal {M}(u) \in \mathcal {C}_k\}. \end{aligned}$$

Let \(\{\tau _k\}_{k \in {\mathbb {N}}}\) be a sequence of positive real numbers such that \(\tau _k\rightarrow 0\) as \(k\rightarrow \infty \). Define \(\overline{\overline{J}}_{\tau _k}: \mathcal {K} \rightarrow \overline{{\mathbb {R}}}\) by

$$\begin{aligned} \overline{\overline{J}}_{\tau _k}(u) := {\left\{ \begin{array}{ll} {\tilde{J}}_{\tau _k}(u), &{}\text {if }\quad u\in \mathcal {K}_M^k,\\ +\infty , &{}\text {otherwise.} \end{array}\right. } \end{aligned}$$

Furthermore, define \(\overline{\overline{F}}_0: \mathcal {K} \rightarrow \overline{{\mathbb {R}}}\) by

$$\begin{aligned} \overline{\overline{F}}_0(u) := {\left\{ \begin{array}{ll} {\tilde{F}}_0(u), &{}\text {if }\quad u\in \mathcal {K}_M,\\ +\infty , &{}\text {otherwise.} \end{array}\right. } \end{aligned}$$

Then the results of Theorem 5.13 hold with \(\overline{J}_{\tau _k}\) and \(\overline{F}_0\) replaced by \(\overline{\overline{J}}_{\tau _k}\) and \(\overline{\overline{F}}_0\), respectively, and with the sequences \(\{u_k\}_{k\in {\mathbb {N}}}\) and \(\{v_k\}_{k\in {\mathbb {N}}}\) in (LB), (UB), and the equi-coercivity result taken in \(\mathcal {K}\) instead of \(\mathcal {K}_M\), such that, for each \(k\in {\mathbb {N}}\), \(u_k, v_k \in \mathcal {K}_M^k\).

Proof

The proof is a slightly tweaked version of the proof of Theorem 5.13. On \(\mathcal {K}_M^k\) we have that \(\overline{\overline{J}}_{\tau _k} = {\tilde{J}}_{\tau _k}\) and \(\overline{\overline{F}}_0 = {\tilde{F}}_0\). Hence (LB) follows from (LB) in Theorem 5.10. For (UB) we note that, since \(\displaystyle \underset{k\in {\mathbb {N}}}{\bigcap }\mathcal {C}_k \supset \mathcal {C}\), the recovery sequence defined by, for all \(k\in {\mathbb {N}}\), \(u_k:=u\), is admissible and the proof follows as in the proof of Theorem 5.10.

Finally, for the equi-coercivity result, we obtain a converging subsequence \(\{v_{k_l}\}_{l \in {\mathbb {N}}} \subset \{v_k\}_{k\in {\mathbb {N}}}\) with limit \(v\in \mathcal {K}\) by Theorem 5.11. By continuity of \(\mathcal {M}\) it follows that \(\mathcal {M}(v) \in \overline{\underset{k\in {\mathbb {N}}}{\bigcap }\mathcal {C}_k}\), where \(\overline{\quad \cdot \quad }\) denotes the topological closure in \([0,\infty ) \subset {\mathbb {R}}\). Because \(\mathfrak {M}\) is a set of finite cardinality in \({\mathbb {R}}\), we know \(\displaystyle \underset{k\in {\mathbb {N}}}{\bigcap }\mathcal {C}_k \subset \mathcal {C} \subset \mathfrak {M}\) is closed, hence \(\displaystyle \mathcal {M}(v)\in \underset{k\in {\mathbb {N}}}{\bigcap }\mathcal {C}_k \subset \mathcal {C}\) and thus \(v\in \mathcal {K}_M\). \(\square \)

Remark 5.16

By a standard \(\Gamma \)-convergence result (Maso 1993, Chapter 7; Braides 2002, Section 1.5) we conclude from Theorem 5.13 that (for fixed \(M\in \mathfrak {M}\)) minimizers of \(\overline{J}_\tau \) converge (up to a subsequence) to a minimizer of \(\overline{F}_0\) (with \(q=1\)) when \(\tau \rightarrow 0\).

By Lemma 5.3 we know that iterates of (OKMBO) solve (52) and decrease the value of \(J_\tau \), for fixed \(\tau >0\) (and thus of \({\tilde{J}}_\tau \)). By Lemma S5.10 in Supplementary Materials, however, we know that when \(\tau \) is sufficiently small, the (OKMBO) dynamics is pinned, in the sense that each iterate is equal to the initial condition. Hence, unless the initial condition is a minimizer of \(\overline{J}_\tau \), for small enough \(\tau \) the (OKMBO) algorithm does not generate minimizers of \(\overline{J}_\tau \) and thus we cannot use Theorem 5.13 to conclude that solutions of (OKMBO) approximate minimizers of \(\overline{F}_0\) when \(\tau \rightarrow 0\).

As an interesting aside that can be an interesting angle for future work, we note that it is not uncommon in sequential linear programming for the constraints (such as the constraint that the domain of \({\tilde{J}}_\tau \) consists of [0, 1]-valued functions only) to be an obstacle to convergence; compare for example the Zoutendijk method with the Topkis and Veinott method (Bazaraa et al. 1993, Chapter 10). An analogous relaxation of the constraints might be a worthwhile direction for alternative MBO type methods for minimization of functionals like \({\tilde{J}}_\tau \). We will not follow that route in this paper. Instead, in the next section, we will look at a variant of (OKMBO) which conserves mass in each iteration.

5.3 A Mass Conserving Graph Ohta–Kawasaki MBO Scheme

In Sect. 5.2 we saw that, for given \(M\in \mathfrak {M}\), any solution to

$$\begin{aligned} u \in \underset{{\tilde{u}}\in \mathcal {K}_M}{\mathrm {argmin}}\, J_\tau ({\tilde{u}}), \end{aligned}$$
(65)

where \(J_\tau \) is as in (50)Footnote 13 is an approximate solution to the \(F_0\) minimization problem in (34) (with \(q=1\)) in the \(\Gamma \)-convergence sense discussed in Remark 5.16.

We propose the (mcOKMBO) scheme described below to include the mass condition into the (OKMBO) scheme. As part of the algorithm we need a node relabelling function. For \(u\in \mathcal {V}\), let \(R_u: V \rightarrow \{1, \ldots , n\}\) be a bijection such that, for all \(i, j \in V\), \(R_u(i) < R_u(j)\) if and only if \(u_i \ge u_j\). Note that such a function need not be unique, as it is possible that \(u_i=u_j\) while \(i\ne j\). Given a relabelling function \(R_u\), we will define the relabelled version of u denoted by \(u^R \in \mathcal {V}\), by, for all \(i\in V\),

$$\begin{aligned} u^R_i := u_{R_u^{-1}(i)}. \end{aligned}$$
(66)

In other words, \(R_u\) relabels the nodes in V with labels in \(\{1, \dots , n\}\), such that in the new labelling we have \(u^R_1 \ge u^R_2 \ge \cdots \ge u^R_n\).

Because this will be of importance later in the paper, we introduce the new set of almost binary functions with prescribed mass \(M\ge 0\):Footnote 14

$$\begin{aligned} \mathcal {V}^{ab}_M := \left\{ u\in \mathcal {K}_M: \exists i \in V\,\, \forall j\in V{\setminus }\{i\}\,\, u_j \in \{0,1\}\right\} . \end{aligned}$$
figure c

We see that the ODE step in (mcOKMBO) is as the ODE step in (OKMBO), using the outcome of the previous iteration as initial condition. However, the threshold step is significantly different. In creating the function \(v^k\), it assigns the available mass to the nodes \(\{1, \ldots , i^*\}\) on which u has the highest value. Note that if \(r=0\), there is exactly enough mass to assign the value 1 to each node in \(\{1, \ldots , i^*\}\), since we assumed that \(M\in \mathfrak {M}\) and each node contributes the same value to the mass via the factor \(d_i^r=1\). In this case we see that \(v^k_{i^*+1} = 0\). However, if \(r\in (0,1]\), this is not necessarily the case and it is possible to end up with a value in (0, 1) being assigned to \(v^k_{i^*+1}\) (even if \(v^{k-1}\in \mathcal {V}_M^b\)). Hence, in general \(v^k \in \mathcal {V}_M^{ab}\), but not necessarily \(v^k\in \mathcal {V}_M^b\).

Of course there is no issue in evaluating \(F_0(v^k)\) for almost binary functions \(v^k\), but strictly speaking an almost binary \(v^N\) cannot serve as approximate solution to the \(F_0\) minimization problem in (34) as it is not admissible. We can either accept that the qualifier “approximate” refers not only to approximate minimization, but also to the fact that \(v^N\) is binary when restricted to \(V{\setminus }\{i^*+1\}\), but not necessarily on all of V, or we can apply a final thresholding step to \(v^N\) and set the value at node \(i^*+1\) to either 0 or 1 depending on which choice leads to the lowest value of \(F_0\) and/or the smallest deviation of the mass from the prescribed mass M. In the latter case, the function will be binary, but the adherence to the mass constraint will be “approximate”. We emphasize again that this is not an issue when \(r=0\) (or on a regular graph; i.e. a graph in which each node has the same degree). This case is the most interesting case, as the mass condition can be very restrictive when \(r\in (0,1]\), especially on (weighted) graphs in which most nodes each have a different degree. When \(r\in (0,1]\), our definition of (mcOKMBO) suggests the first interpretation of “approximate”, i.e. we use \(v^N\) as is and accept that its value at node \(i^*+1\) may be in (0, 1). All our numerical examples in Sect. 7 (and Section S9 in Supplementary Materials) use \(r=0\).

Note that the sequence \(\{v^k\}_{k=1}^N\) generated by the (mcOKMBO) scheme is not necessarily unique, as the relabelling function \(R_u\) in the mass conserving threshold step is not uniquely determined if there are two different nodes \(i,j\in V\) such that \(u_i=u_j\). This non-uniqueness of \(R_u\) can lead to non-uniqueness in \(v^k\) if exchanging the labels \(R_u(i)\) and \(R_u(j)\) of those nodes leads to a different ‘threshold node’ \(i^*\). In the practice of our examples in Sect. 7 (and Section S9 in Supplementary Materials) we used the MATLAB function sort(\(\cdot \), ‘descend’) to order the nodes.

Lemma 5.18 shows that some of the important properties of (OKMBO) from Lemma 5.3 and Corollary 5.5 also hold for (mcOKMBO). First we state an intermediate lemma.

Lemma 5.17

Let \(G=(V,E,\omega )\in \mathcal {G}\), \(M\ge 0\) and \(z\in V\). Consider the minimization problem

$$\begin{aligned} \min _{w\in \mathcal {V}} \sum _{l\in V} w_l z_l, \quad \text {subject to} \quad \sum _{l\in V} w_l = M \quad \text {and} \quad \forall l\in V\,\,\, 0\le w_l \le d_l^r. \end{aligned}$$
(67)

Let \(w^*\in V\) satisfy the constraints in (67). Then \(w^*\) is a minimizer for (67) if and only if for all \(i,j\in V\), if \(z_i<z_j\), then \(w^*_i =d_i^r\) or \(w^*_j = 0\).

Proof

See Section S10.3 in Supplementary Materials. \(\square \)

Lemma 5.18

Let \(G=(V,E,\omega ) \in \mathcal {G}\), \(\gamma \ge 0\), \(\tau >0\), and \(M\ge 0\). Let \(J_\tau : \mathcal {V}\rightarrow {\mathbb {R}}\) be as in (50), \(v^0\in \mathcal {V}_M^{ab}\), and let \(\{v^k\}_{k=1}^N \subset \mathcal {V}_M^{ab}\) be a sequence generated by (mcOKMBO). Then, for all \(k\in \{1, \ldots , N\}\),

$$\begin{aligned} v^k \in \underset{v\in \mathcal {K}_M}{\mathrm {argmin}}\, dJ_\tau ^{v^{k-1}}(v), \end{aligned}$$
(68)

where \(dJ_\tau \) is given in (51). Moreover, for all \(k\in \{1,\ldots , N\}\), \(J_\tau (v^k) \le J_\tau (v^{k-1})\), with equality if and only if \(v^k=v^{k-1}\). Finally, there is a \(K\ge 0\) such that for all \(k\ge K\), \(v^k = v^K\).

Proof

For all \(i\in V\), define \(w_i:=d_i^r v_i\) and \(z_i := \left( \chi _V-2 e^{-\tau L} v^{k-1}\right) _i\). Then the minimization problem (68) turns into (67). Hence, by Lemma 5.17, \(v^*\) is a solution of (68) if and only if \(v^*\) satisfies the constraints in (68) and for all \(i, j\in V\), if \(\left( e^{-\tau L} v^{k-1}\right) _i > \left( e^{-\tau L} v^{k-1}\right) _j\), then \(v^*_i = d^r_i\) or \(v^*_j = 0\). It is easily checked that \(v^k\) generated from \(v^{k-1}\) by one iteration of the (mcOKMBO) algorithm satisfies these properties.

We note that (68) differs from (52) only in the set of admissible functions over which the minimization takes place. This difference does not necessitate any change in the proof of the second part of the lemma compared to the proof of the equivalent statements at the end of Lemma 5.3.

The final part of the lemma is trivially true if \(N\in {\mathbb {N}}\). Now assume \(N=\infty \). The proof is similar to that of Corollary 5.5. In the current case, however, our functions \(v^k\) are not necessarily binary. We note that for each k, there is at most one node \(i(k)\in V\) at which \(v^k\) can take a value in (0, 1). For fixed k and i(k), there are only finitely many different possible functions that \(\left. v^k\right| _{V{\setminus }\{i(k)\}}\) can be. Because \(\mathcal {M}\left( v^k\right) = \sum _{i\in V{\setminus }\{i(k)\}} \left( \left. v^k\right| _{V{\setminus }\{i(k)\}}\right) _i + d_{i(k)}^r v_{i(k)}^k = M\), this leads to finitely many possible values \(v^k_{i(k)}\) can have. Since i(k) can be only one of finitely many (n) nodes, there are only finitely many possible functions that \(v^k\) can be. Hence the proof now follows as in Corollary 5.5. \(\square \)

Remark 5.19

Similar to what we saw in Remark 5.4 about (OKMBO), we note that (68) is a sequential linear programming approach to minimizing \(J_\tau \) over \(\mathcal {K}_M\); the linear approximation of \(J_\tau \) over \(\mathcal {K}_M\) is minimized instead.

Remark S8.2 in Supplementary Materials discusses the behaviour of (mcOKMBO) at small and large \(\tau \). If \(\tau \) is too small, pinning can occur similar to, but for different reasons than, the pinning behaviour of (OKMBO) at small \(\tau \).

6 Special Classes of Graphs

There are certain classes of graphs on which the dynamics of equation (47), can be directly related to graph diffusion equations, in a way which we will make precise in Sect. 6.1. The tools which we develop in that section will again be used in Sect. 6.2 to prove additional comparison principles.

6.1 Graph Transformation

Definition 6.1

Let \(G=(V,E,\omega )\in \mathcal {G}\). For all \(j\in V\), let \(\nu ^{V{\setminus }\{j\}}\) be the equilibrium measure which solves (12) for \(S=V{\setminus }\{j\}\) and define the functions \(f^j\in \mathcal {V}\) as

$$\begin{aligned} f^j:= \nu ^{V{\setminus }\{j\}} - \mathcal {A}\left( \nu ^{V{\setminus }\{j\}}\right) . \end{aligned}$$
(69)

Now we introduce the following classes of graphs:

1.:

\(\mathcal {C} := \left\{ G\in \mathcal {G}: \forall j\in V\,\, \forall i\in V{\setminus }\{j\}\,\, f^j_i \ge 0\right\} \),

2.:

\(\mathcal {C}^0 := \left\{ G\in \mathcal {G}: \forall j\in V\,\, \forall i\in V{\setminus }\{j\}\,\, \omega _{ij}>0 \text { or } f^j_i \ge 0 \right\} \),

3.:

\(\mathcal {C}_\gamma := \left\{ G\in \mathcal {C}^0: \forall j\in V\,\, \forall i\in V{\setminus }\{j\}\,\, \omega _{ij}=0 \text { or } d_i^{-r} \omega _{ij} + \gamma \frac{d_j^r}{{\mathrm {vol}}\left( V\right) } f^j_i > 0\right\} \), for \(\gamma >0\).

For \(\gamma =0\), we define \(\mathcal {C}_0 := \mathcal {G}\).Footnote 15

Remark 6.2

Let us have a closer look at the properties of graphs in \(\mathcal {C}_\gamma \). Let \(\gamma > 0\). If \(G\in \mathcal {C}_\gamma \), then per definition \(G\in \mathcal {C}^0\). Let \(i,j\in V\). If \(\omega _{ij}=0\), then per definition of \(\mathcal {C}^0\), \(f^j_i\ge 0\) and thus \(d_i^{-r} \omega _{ij} + \gamma \frac{d_j^r}{{\mathrm {vol}}\left( V\right) } f^j_i \ge 0\). On the other hand, if \(\omega _{ij} > 0 \), then per definition of \(\mathcal {C}_\gamma \), \(d_i^{-r} \omega _{ij} + \gamma \frac{d_j^r}{{\mathrm {vol}}\left( V\right) } f^j_i > 0\).

Lemma 6.3

Let the setting and notation be as in Definition 6.1. Then, \(\mathcal {C} \subset \mathcal {C}^0\) and, for all \(\gamma \ge 0\), \(\mathcal {C} \subset \mathcal {C}_\gamma \). Moreover, if \(G\in \mathcal {C}^0{\setminus } \mathcal {C}\), there is a \(\gamma _*(G)>0\) such that, for all \(\gamma \in [0,\gamma _*(G))\), \(G\in \mathcal {C}_\gamma \).

Proof

The first two inclusions stated in the lemma follow immediately from the definitions of the sets involved. If \(\gamma =0\), then \(G\in \mathcal {C}_\gamma \) in the final statement is trivially true. To prove it for \(\gamma \ne 0\), let \(G\in \mathcal {C}^0{\setminus }\mathcal {C}\) and let \(j\in V\), \(i\in V{\setminus }\{j\}\). If \(f^j_i \ge 0\), then, \(\omega _{ij}=0\) or, for all \(\gamma > 0\), \( d_i^{-r} \omega _{ij} + \gamma \frac{d_j^r}{{\mathrm {vol}}\left( V\right) } f^j_i > d_i^{-r} \omega _{ij} \ge 0\). If \(f^j_i < 0\) (and, by the assumption that \(G\not \in \mathcal {C}\), there are \(j\in V\), \(i\in V{\setminus }\{j\}\) for which this is the case), then by definition of \(\mathcal {C}^0\) we have \(\omega _{ij} > 0\). Define

$$\begin{aligned} \gamma _*(G) := {\mathrm {vol}}\left( V\right) \min \left\{ d_i^{-r} d_j^{-r} \omega _{ij} \left| f^j_i\right| ^{-1}: j \in V, i\in V{\setminus }\{j\} \text { such that } f^j_i < 0 \right\} \end{aligned}$$

and let \(\gamma \in (0, \gamma _*(G))\), then \( d_i^{-r} \omega _{ij} + \gamma \frac{d_j^r}{{\mathrm {vol}}\left( V\right) } f^j_i > d_i^{-r} \omega _{ij} - \gamma _*(G)\frac{d_j^r}{{\mathrm {vol}}\left( V\right) } |f_j^i| \ge 0\). \(\square \)

Lemma S7.1 in Supplementary Materials shows \(\mathcal {C}\) is not empty; in particular, unweighted star graphs with three or more nodes are in \(\mathcal {C}\). Remark S7.2 shows that \(\mathcal {C}^0{\setminus }\mathcal {C} \ne \emptyset \). Lemma S7.3 and Remarks S7.4 and S7.6 give and illustrate different sufficient conditions for graphs to be in \(\mathcal {C}\) or \(\mathcal {C}^0\), which are used in Corollary S7.5 to show that complete graphs are in \(\mathcal {C}^0\).

The following lemma hints at the reason for our interest in the functions \(f^j\) from (69).

Lemma 6.4

Let \(G=(V,E,\omega )\in \mathcal {G}\). Let \(j\in V\) and let \(f^j\in \mathcal {V}\) be as in (69). Then the function \(\varphi ^j \in \mathcal {V}\), defined by

$$\begin{aligned} \varphi ^j := - \frac{d_j^r}{{\mathrm {vol}}\left( V\right) } f^j, \end{aligned}$$
(70)

solves (45) for \(\chi _{\{j\}}\).

Proof

From (69) and (12) it follows immediately that, for all \(i\in V{\setminus }\{j\}\), \(\left( \Delta f^j\right) _i = \left( \Delta \nu ^{V{\setminus }\{j\}}\right) _i = 1\). Thus, for all \(i\in V{\setminus }\{j\}\), \(\left( \Delta \varphi ^j\right) _i = -\frac{d_j^r}{{\mathrm {vol}}\left( V\right) } = \left( \chi _{\{j\}}\right) _i - \mathcal {A}\left( \chi _{\{j\}}\right) \). Moreover, by (5) we have \(\displaystyle 0=\mathcal {M}\left( \Delta \varphi ^j\right) = d_j^r \left( \Delta \varphi ^j\right) _j + \sum _{i\in V{\setminus }\{j\}} d_i^r \left( \Delta \varphi ^j\right) _i\), and thus

$$\begin{aligned} \left( \Delta \varphi ^j\right) _j&= -d_j^{-r} \sum _{i\in V{\setminus }\{j\}} d_i^r \left( \Delta \varphi ^j \right) _i = d_j^{-r} \sum _{i\in V{\setminus }\{j\}} d_i^r \frac{d_j^r}{{\mathrm {vol}}\left( V\right) } = \frac{{\mathrm {vol}}\left( V\right) -d_j^r}{{\mathrm {vol}}\left( V\right) }\\&= 1-\frac{d_j^r}{{\mathrm {vol}}\left( V\right) } = \left( \chi _{\{j\}}\right) _j-\left( \mathcal {A}\left( \chi _{\{j\}}\right) \right) _j. \end{aligned}$$

Finally, by (69), \(\mathcal {M}\left( f^j\right) =0\), thus \(\mathcal {M}\left( \varphi ^j\right) =0\). \(\square \)

Corollary 6.5

Let \(G=(V,E,\omega )\in \mathcal {G}\). Let \(\lambda _m\) and \(\phi ^m\) be the eigenvalues and corresponding eigenfunctions of the graph Laplacian \(\Delta \) (with parameter r), as in (35), (36). Let \(j\in V\). If \(\varphi ^j \in \mathcal {V}\) is as defined in (70), then, for all \(i\in V\),

$$\begin{aligned} \varphi ^j_i = \sum _{m=1}^{n-1} \lambda _m^{-1} d_j^r \phi _i^m \phi _j^m. \end{aligned}$$
(71)

In particular, if \(f^j\) is as in (69) and \(i\in V\), then \(f^j_i \ge 0\) if and only if \(\sum _{m=1}^{n-1} \lambda _m^{-1} d_j^r \phi _i^m \phi _j^m \le 0\).

Proof

Let \(j\in V\). By Lemma 6.4 we know that \(\varphi ^j\) solves (45) for \(\chi _{\{j\}}\). Then by (54) we can write, for all \(i\in V\),

$$\begin{aligned} \varphi ^j_i = \sum _{m=1}^{n-1}\lambda _m^{-1} \langle \chi _{\{j\}}, \phi ^m\rangle _{\mathcal {V}} \ \phi ^m = \sum _{m=1}^{n-1} \lambda _m^{-1} d_j^r \phi _i^m \phi _j^m, \end{aligned}$$

where we used that,

$$\begin{aligned} \langle \chi _{\{j\}}, \phi ^m\rangle _{\mathcal {V}} = \sum _{k\in V} d_k^r \delta _{jk} \phi ^m_k = d_j^r \phi _j^m, \end{aligned}$$
(72)

where \(\delta _{jk}\) is the Kronecker delta.

The final statement follows from the definition of \(\varphi ^j\) in Lemma 6.4, which shows that, for all \(i\in V\), \(f_i^j \ge 0\) if and only \(\varphi ^j_i \le 0\). \(\square \)

Corollary 6.6

Let \(G=(V,E,\omega )\in \mathcal {G}\). For all \(j\in V\), let \(\varphi ^j\) be as in (70), let \(f^j\) be as in (69), and let \(\nu ^{V{\setminus }\{j\}}\) be the equilibrium measure for \(V{\setminus }\{j\}\) as in (12). If \(r=0\), then, for all \(i,j\in V\), \(\varphi ^j_i = \varphi ^i_j\), \(f^j_i = f^i_j\), and

$$\begin{aligned} \nu ^{V{\setminus }\{j\}}_i - \mathcal {A}\left( \nu ^{V{\setminus }\{j\}}\right) _i = \nu ^{V{\setminus }\{i\}}_j- \mathcal {A}\left( \nu ^{V{\setminus }\{i\}}\right) _j. \end{aligned}$$

Proof

This follows immediately from (71), (70), and (69). \(\square \)

Remark 6.7

The result of Corollary 6.5 is not only an ingredient in the proof of Theorem 6.9, but can also be useful when testing numerically whether or not a graph is in \(\mathcal {C}\) or in \(\mathcal {C}^0\).

Lemma 6.8

Let \(\gamma \ge 0\) and let \(G=(V,E,\omega ) \in \mathcal {C}_\gamma \). Let L be as defined in (46). Let \(\lambda _m\) and \(\phi ^m\) be the eigenvalues and corresponding eigenfunctions of the graph Laplacian \(\Delta \) (with parameter r), as in (35), (36) and define

$$\begin{aligned} {\tilde{\omega }}_{ij} := {\left\{ \begin{array}{ll} -d_j^r\sum _{m=1}^{n-1} \Lambda _m \phi _i^m \phi _j^m, &{}\text {if }\quad i\ne j,\\ 0, &{} \text {if }\quad i=j, \end{array}\right. } \end{aligned}$$
(73)

where \(\Lambda _m\) is defined in (56). Then, for all \(i,j\in V\), \({\tilde{\omega }}_{ij} \ge 0\). Moreover, if \(\omega _{ij}>0\), then \({\tilde{\omega }}_{ij} > 0\). If, additionally, \(G\in \mathcal {C}\), then \({\tilde{\omega }}_{ij} \ge d_i^{-r}\omega _{ij}\).

Proof

Expanding \(\chi _{\{j\}}\) as in (38) and using (72), we find, for \(i,j\in V\),

$$\begin{aligned} (L\chi _{\{j\}})_i = \sum _{m=1}^{n-1} \langle \chi _{\{j\}}, \phi ^m\rangle _{\mathcal {V}} \left( L \phi ^m\right) _i = d_j^r \sum _{m=1}^{n-1}\Lambda _m \phi ^m_j \phi ^m_i. \end{aligned}$$
(74)

Note in particular that, if \(i\ne j\), then \({\tilde{\omega }}_{ij} = -(L\chi _{\{j\}})_i\).

For \(i,j \in V\) we also compute

$$\begin{aligned} (\Delta \chi _{\{j\}})_i = d_i^{-r} \sum _{k\in V} \omega _{ik} (\delta _{ji} - \delta _{jk}) = d_i^{-r} (d_i \delta _{ji} - \omega _{ij}), \end{aligned}$$
(75)

hence, if \(i\ne j\), then \(\omega _{ij} = - d_i^r \left( \Delta \chi _{\{j\}}\right) _i\). Combining the above with (70), we find for \(i\ne j\),

$$\begin{aligned} {\tilde{\omega }}_{ij} = -(L\chi _{\{j\}})_i = -\left( \Delta \chi _{\{j\}}\right) _i - \gamma \varphi ^j_i = d_i^{-r} \omega _{ij} + \gamma \frac{d_j^r}{{\mathrm {vol}}\left( V\right) } f^j_i \ge 0, \end{aligned}$$
(76)

where the inequality follows since \(G\in \mathcal {C}_\gamma \) (note that for \(\gamma =0\) the inequality follows from the nonnegativity of \(\omega \)). Moreover, if \(\omega _{ij}>0\), then, by definition of \(\mathcal {C}_\gamma \), the inequality is strict, and thus \({\tilde{\omega }}_{ij} > 0\).Footnote 16

If additionally \(G\in \mathcal {C}\), then, for \(i\ne j\), \(f^j_i \ge 0\) and thus by (76), \({\tilde{\omega }}_{ij} \ge \omega _{ij}\). \(\square \)

Lemma 6.8 suggests that, given a graph \(G\in \mathcal {C}_\gamma \) with edge weights \(\omega \), we can construct a new graph \({\tilde{G}}\) with edge weights \({\tilde{\omega }}\) as in (73), that are also nonnegative. The next theorem shows that, in fact, if \(r=0\), then this new graph is in \(\mathcal {G}\) and the graph Laplacian \({\tilde{\Delta }}\) on \({\tilde{G}}\) is related to L.

Theorem 6.9

Let \(\gamma \ge 0\) and let \(G=(V,E,\omega ) \in \mathcal {C}_\gamma \). Let L be as defined in (46). Let \(\lambda _m\) and \(\phi ^m\) be the eigenvalues and corresponding eigenfunctions of the graph Laplacian \(\Delta \) (with parameter r), as in (35), (36). Assume \(r=0\) and let \({\tilde{\omega }}\) be as in (73). Let \({\tilde{E}} \subset V^2\) contain an undirected edge (ij) between \(i\in V\) and \(j\in V\) if and only if \({\tilde{\omega }}_{ij} >0\). Then \({\tilde{G}}=(V, {\tilde{E}}, {\tilde{\omega }}) \in \mathcal {G}\). Let \({\tilde{\Delta }}\) be the graph Laplacian (with parameter \({\tilde{r}}\)) on \({\tilde{G}}\). If \({\tilde{r}}=0\), then \({\tilde{\Delta }} = L\).

Proof

In the following it is instructive to keep \(r, {\tilde{r}}\in [0,1]\) as unspecified parameters in the proof and point out explicitly where the assumptions \(r=0\) and \({\tilde{r}} = 0\) are used.

From the definition of \({\tilde{\omega }}_{ij}\) in (73) it follows directly that \({\tilde{G}}\) has no self-loops (\({\tilde{\omega }}_{ii}=0\)). Moreover, using \(r=0\) in (73), we see that \({\tilde{\omega }}_{ij} = {\tilde{\omega }}_{ji}\) and thus \({\tilde{G}}\) is undirected. Furthermore, by Lemma 6.8 we know that, for all \(i,j\in V\), if \(\omega _{ij} > 0\), then \({\tilde{\omega }}_{ij}>0\). Thus \({\tilde{G}}\) is connected, because G is connected. Hence \({\tilde{G}}\in \mathcal {G}\).

Repeating the computation from (75) for \({\tilde{\Delta }}\) instead of \(\Delta \), we find, for \(i,j\in V\),

$$\begin{aligned} \left( {\tilde{\Delta }} \chi _{\{j\}}\right) _i = {\tilde{d}}_i^{-{\tilde{r}}} \left( {\tilde{d}}_i \delta _{ji} - {\tilde{\omega }}_{ij}\right) , \end{aligned}$$
(77)

where \({\tilde{d}}_i:=\sum _{j\in V} {\tilde{\omega }}_{ij}\). Combining this with (74), we find that, if \(j\in V\) and \(i\in V{\setminus }\{j\}\), then

$$\begin{aligned} \left( {\tilde{\Delta }} \chi _{\{j\}}\right) _i = - {\tilde{d}}_i^{-{\tilde{r}}} {\tilde{\omega }}_{ij} = {\tilde{d}}_i^{-{\tilde{r}}} d^r_j \sum _{m=1}^{n-1} \Lambda _m \phi ^m_j \phi ^m_i = {\tilde{d}}_i^{-{\tilde{r}}}\left( L\chi _{\{j\}}\right) _i. \end{aligned}$$
(78)

Since we have \(\displaystyle 0=\langle \phi ^m, \chi _V\rangle _{\mathcal {V}} = \sum _{j\in V} d_j^r \phi ^m_j, \) it follows that, for all \(i\in V\), \(d_i^r \phi ^m_i = -\sum _{j\in V{\setminus }\{i\}} d_j^r \phi ^m_j\). Thus, for \(i\in V\), \( {\tilde{d}}_i = \sum _{j\in V} {\tilde{\omega }}_{ij} = \sum _{j\in V{\setminus }\{i\}} {\tilde{\omega }}_{ij} = -\sum _{m=1}^{n-1} \Lambda _m \sum _{j\in V{\setminus }\{i\}}d_j^r \phi ^m_j \phi ^m_i = d_i^r \sum _{m=1}^{n-1} \Lambda _m \left( \phi ^m_i\right) ^2. \) By (74) and (77) with \(i=j\), we then have

$$\begin{aligned} \left( {\tilde{\Delta }} \chi _{\{i\}}\right) _i = {\tilde{d}}_i^{1-{\tilde{r}}} = \left( d_i^r \sum _{m=1}^{n-1} \Lambda _m \left( \phi ^m_i\right) ^2\right) ^{1-{\tilde{r}}} = \left( (L\chi _{\{i\}})_i\right) ^{1-{\tilde{r}}}. \end{aligned}$$
(79)

Now we use \({\tilde{r}}=0\) in (78) and (79) to deduce that, for all \(j\in V\), \({\tilde{\Delta }} \chi _{\{j\}}= L \chi _{\{j\}}\). Since \(\{\chi _{\{i\}} \in \mathcal {V}: i\in V\}\) is a basis for the vector space \(\mathcal {V}\), we conclude \({\tilde{\Delta }} = L\). \(\square \)

Remark 6.10

In the proof of Theorem 6.9, we can trace the roles that r and \({\tilde{r}}\) play. We only used the assumption \(r=0\) in order te deduce that \({\tilde{G}}\) is undirected. The assumption \({\tilde{r}} = 0\) is necessary to obtain equality between \({\tilde{\Delta }}\) and L in equations (78) and (79).

These assumptions on r and \({\tilde{r}}\) have a further interesting consequence. Since the graphs G and \({\tilde{G}}\) have the same node set, both graphs also have the same associated set of node functions \(\mathcal {V}\). Moreover, since \(r={\tilde{r}} = 0\), the \(\mathcal {V}\)-inner product is the same for both graphs. Hence we can view \(\mathcal {V}\) corresponding to G as the same inner product space as \(\mathcal {V}\) corresponding to \({\tilde{G}}\). In this setting the operator equality \({\tilde{\Delta }} = L\) from Theorem 6.9 holds not only between operators on the vector space \(\mathcal {V}\), but also between operators on the inner product space \(\mathcal {V}\).

Lemma 6.11

Let \(\gamma \ge 0\), \(q=1\), and let \(G=(V,E,\omega ) \in \mathcal {C}_\gamma \). Assume \(r=0\). Let \({\tilde{\omega }}\) be as in (73) and \({\tilde{E}}\) as in Theorem 6.9. Let \({\tilde{r}}\) be the r-parameter corresponding to the graph \({\tilde{G}}=(V,{\tilde{E}}, {\tilde{\omega }})\). Suppose \(S\subset V\), \(F_0\) is as in (32), for all \(i\in V\) \({\tilde{d}}_i := \sum _{j\in V} \tilde{\omega }_{ij}\), and \({\tilde{\kappa }}_S\) is the graph curvature of S as in Definition 3.4 corresponding to \({\tilde{\omega }}\). Then \(F_0(\chi _S) = \sum _{i,j\in S} {\tilde{\omega }}_{ij}\). Moreover, if \({\tilde{r}} = 0\), \(F_0(\chi _S) = \sum _{i\in S} \left( {\tilde{d}}_i - \left( {\tilde{\kappa }}_S\right) _i\right) \).

Proof

From Corollary 4.12 and (56) we find

$$\begin{aligned} F_0(\chi _S)= & {} \sum _{m=1}^{n-1} \Lambda _m \langle \chi _S, \phi ^m\rangle _{\mathcal {V}}^2 = \sum _{m=1}^{n-1} \Lambda _m \sum _{i,j\in V} \left( \chi _S\right) i \left( \chi _S\right) _j d_i^r d_j^r \phi _i^m \phi _j^m \\= & {} \sum _{i,j\in V}\left( \chi _S\right) _i \left( \chi _S\right) _j {\tilde{\omega }}_{ij}, \end{aligned}$$

where we used that \(r=0\). Moreover, if \({\tilde{r}}=0\), \( \sum _{i,j\in S}{\tilde{\omega }}_{ij} = \sum _{i\in S} \left( \sum _{j\in V} {\tilde{\omega }}_{ij} - \sum _{j\in V{\setminus } S} {\tilde{\omega }}_{ij}\right) = \sum _{i\in S} \left( {\tilde{d}}_i - \left( {\tilde{\kappa }}_S\right) _i\right) . \) \(\square \)

Lemma S7.7 in Supplementary Materials gives upper and lower bounds on \({\tilde{\omega }} - \omega \) in terms of the Laplacian eigenvalues and eigenfunctions. Remarks S7.8 and S7.9 interpret these conditions in terms of the algebraic connectivity of the graph and use them to give some intuition about the (mcOKMBO) dynamics. Lemma S7.10 and Remarks S7.11 and S7.12 use the star graph to illustrate the results from this section.

6.2 More Comparison Principles

Theorem 6.9 tells us that, if \(\gamma \ge 0\) is such that \(G\in \mathcal {C}_\gamma \) and if \(r=0\), then the dynamics in (47) can be viewed as graph diffusion on a new graph with the same node set, but a different edge set and weights, as the original graph G. We can use this to prove that properties of \(\Delta \) also hold for L on such graphs. Note that, when \(\gamma =0\), \(L=\Delta \), so this can be viewed as a generalization of results for \(\Delta \) to L.

In this section, we prove a generalization of Lemma 3.1 and a generalization of the comparison principle in (van Gennip et al. 2014, Lemma 2.6(d)). In fact, despite the new graph construction in Theorem 6.9 requiring \(r=0\) for symmetry reasons (see Remark 6.10), the crucial ingredient that will allow these generalizations is that \(G\in \mathcal {C}_\gamma \); the assumption on r is not required. We will also see a counterexample illustrating that this generalization does not extend (at least not without further assumptions) to graphs that are not in \(\mathcal {C}_\gamma \).

Lemma 6.12 gives a result which we need to prove the comparison principles in Lemmas  6.13 and 6.15.

Lemma 6.12

Let \(\gamma \ge 0\), \(G=(V,E,\omega )\in \mathcal {C}_\gamma \), \(w\in \mathcal {V}\), and let \(i^*\in V\) be such that \(w_{i^*}=\min _{i\in V} w_i\). Let \(\varphi \in V\) solve (45) for w. Then \(\varphi _{i^*} \le 0\).

Proof

Let \(j\in V\) and let \(\varphi ^j \in \mathcal {V}\) be as in (70). Then, by Lemma 6.4, we have that \(\displaystyle \Delta \varphi ^j = \chi _{\{j\}} - \mathcal {A}\left( \chi _{\{j\}}\right) \) and \(\displaystyle \mathcal {M}\left( \varphi ^j\right) =0\). Furthermore, by Definition 6.1 and (70), it follows that, for all \(i\in V{\setminus }\{j\}\), \(\varphi ^j_i \le 0\). Because \(\displaystyle w = \sum \nolimits _{j\in V} w_j \chi _{\{j\}}\), we have \(\displaystyle \mathcal {A}(w) = \sum \nolimits _{j\in V} w_j \mathcal {A}\left( \chi _{\{j\}}\right) \) and thus \(\displaystyle \Delta \varphi = \sum \nolimits _{j\in V} w_j \left( \chi _{\{j\}}-\mathcal {A}\left( \chi _{\{j\}}\right) \right) \). Since also \(\displaystyle \mathcal {M}\left( \sum \nolimits _{j\in V} w_j \varphi ^j\right) = \sum _{j\in V} \mathcal {M}\left( w_j \varphi ^j\right) = 0\), we find that \(\displaystyle \varphi = \sum \nolimits _{j\in V} \varphi ^j\). Hence \(\displaystyle \varphi _{i^*} = \sum \nolimits _{j\in V} w_j \varphi ^j_{i^*} = w_{i^*} \varphi ^{i^*}_{i^*} + \sum \nolimits _{j\in V{\setminus }\{i^*\}} w_j \varphi ^j_{i^*}. \) For \(j\ne i^*\), we know that \(w_{i^*} \le w_j\) and \(\varphi ^j_{i^*} \le 0\), hence \(w_j \varphi ^j_{i^*} \le w_{i^*} \varphi ^j_{i^*}\). Therefore \(\displaystyle \varphi _{i^*} \le w_{i^*} \varphi ^{i^*}_{i^*} + \sum \nolimits _{j\in V{\setminus }\{i^*\}} w_{i^*} \varphi ^j_{i^*} = w_{i^*} \sum _{j\in V} \varphi ^j_{i^*}. \) If we define \(\displaystyle {\tilde{\varphi }} := \sum \nolimits _{j\in V} \varphi ^j = \sum \nolimits _{j\in V} \left( \chi _V\right) _j \varphi ^j\), then by a similar argument as above, \(\displaystyle \Delta {\tilde{\varphi }} = \chi _V - \mathcal {A}\left( \chi _V\right) = 0\) and \(\displaystyle \mathcal {M}\left( {\tilde{\varphi }}\right) =0\). Thus \({\tilde{\varphi }} = 0\) and we conclude that \(\varphi _{i^*} \le 0\). \(\square \)

Lemma 6.13

(Generalization of comparison principle I). Let \(\gamma \ge 0\), \(G=(V,E,\omega ) \in \mathcal {C}_\gamma \), and let \(V'\) be a proper subset of V. Assume that \(u, v \in \mathcal {V}\) are such that, for all \(i\in V'\), \((L u)_i \ge (L v)_i\) and, for all \(i\in V{\setminus } V'\), \(u_i\ge v_i\). Then, for all \(i\in V\), \(u_i \ge v_i\).

Proof

When \(r=0\), we know that \(L={\tilde{\Delta }}\), where \({\tilde{\Delta }}\) is the graph Laplacian on the graph \({\tilde{G}}\), in the notation from Theorem 6.9. Because G and \({\tilde{G}}\) have the same node set V, the result follows immediately by applying Lemma 3.1 to \({\tilde{\Delta }}\). We will, however, prove the generalization for any \(r\in [0,1]\).

Let the situation and notation be as in the proof of Lemma 3.1, with the exception that now, for all \(i\in V'\), \((Lw)_i\ge 0\) (instead of \((\Delta w)_i\ge 0\)). Let \(\varphi \in \mathcal {V}\) by such that \(\Delta \varphi = w - \mathcal {A}(w)\) and \(\mathcal {M}(w)=0\). Proceed with the proof in the same way as the proof of Lemma 3.1, up to and including the assumption that \(\min _{j\in V} w_j <0\) and the subsequent construction of the path from U to \(i^*\) and the special nodes \(j^*\) and \(k^*\) on this path. Then, as in that proof, we know that \((\Delta w)_j^* < 0\). Moreover, since \(w_{j^*} = \min _{i\in V}w_i\), we know by Lemma 6.12 that \(\varphi _{j^*} \le 0\). Hence, for all \(\gamma \ge 0\), \((Lw)_j^* < 0\). This contradicts the assumption that, for all \(i\in V'\), \((Lw)_i\ge 0\), hence \(\min _{i\in V} w_i \ge 0\) and the result is proven. \(\square \)

The following corollary of Lemma 6.12 will be useful in proving Lemma 6.15

Corollary 6.14

Let \(\gamma \ge 0\), \(G=(V,E,\omega )\in \mathcal {C}_\gamma \). Assume that \(u, {\tilde{u}} \in \mathcal {V}\) satisfy, for all \(i\in V\), \(u_i \le {\tilde{u}}_i\), and let there be an \(i^* \in V\) such that \(u_{i^*}={\tilde{u}}_{i^*}\). Then \((Lu)_{i^*} \ge (L{\tilde{u}})_{i^*}\).

Proof

Define \(w:= {\tilde{u}} - u\), then, for all \(i\in V\), \(w\ge 0\) and \(w_{i^*} = 0\). We compute

$$\begin{aligned} d_{i^*}^r (\Delta w)_{i^*} = d_{i^*} w_{i^*} - \sum _{j\in V} \omega _{i^*j} w_j = - \sum _{j\in V} \omega _{i^*j} w_j \le 0. \end{aligned}$$

Let \(\varphi \in \mathcal {V}\) solve (45) for w. Since \(w_{i^*} = \min _{i\in V} w_i\), we have by Lemma 6.12 that \(\varphi _{i^*} \le 0\). Hence

$$\begin{aligned} (L{\tilde{u}})_{i^*} - (Lu)_{i^*} = (Lw)_{i^*} = (\Delta w)_{i^*} + \gamma \varphi _{i^*} \le 0. \end{aligned}$$

\(\square \)

Lemma 6.15

(Comparison principle II). Let \(\gamma \ge 0\), \(G=(V,E,\omega ) \in \mathcal {C}_\gamma \), \(u_0, v_0\in \mathcal {V}\), and let \(u, v \in \mathcal {V}_\infty \) be solutions to (47), with initial conditions \(u_0\) and \(v_0\), respectively. If, for all \(i\in V\), \((u_0)_i \le (v_0)_i\), then, for all \(t\ge 0\) and for all \(i\in V\), \(u_i(t)\le v_i(t)\).

Proof

If \(r=0\) we note that, by Theorem 6.9, L can be rewritten as a graph Laplacian on a new graph \({\tilde{G}}\) with the same node set V. The result in (van Gennip et al. 2014, Lemma 2.6(d)) shows the desired conclusion holds for graph Laplacians (i.e. when \(\gamma =0\)) and thus we can apply it to the graph Laplacian on \({\tilde{G}}\) to obtain to result for L on G.

In the general case when \(r\in [0,1]\), Corollary 6.14 tells us that L satisfies the condition which is called \(W_+\) in Szarski (1965, Section 4).Footnote 17 Since, for a given initial condition, the solution to (47) is unique, the result now follows by applying (Szarski 1965, Theorem 9.3 or Szarski 1965, Theorem 9.4). \(\square \)

Corollary 6.16

Let \(\gamma \ge 0\), \(G=(V,E,\omega ) \in \mathcal {C}_\gamma \), and let \(w\in \mathcal {V}_\infty \) be a solution to (47) with initial condition \(w_0 \in \mathcal {V}\). Let \(c_1, c_2\in {\mathbb {R}}\) be such that, for all \(i\in V\), \(c_1 \le (w_0)_i \le c_2\). Then, for all \(t\ge 0\) and for all \(i\in V\), \(c_1 \le w_i(t) \le c_2\).

In particular, for all \(t\ge 0\), \(\Vert w(t)\Vert _{\mathcal {V}, \infty } \le \Vert w_0\Vert _{\mathcal {V},\infty }\).

Proof

First note that \(c_1\) and \(c_2\) always exist, since V is finite.

If \(u\in \mathcal {V}_\infty \) solves (47) with initial condition \(u_0 = c_1\chi _V \in \mathcal {V}\), then, for all \(t\ge 0\), \(u(t)=c_1 \chi _V\). Applying Lemma 6.15 with \(v_0 = w_0\) and \(v=w\), we obtain that, for all \(t\ge 0\) and for all \(i\in V\), \(w_i(t)\ge c_1\). Similarly, if \(v\in \mathcal {V}_\infty \) solves (47) with initial condition \(v_0 = c_2 \chi _V \in \mathcal {V}\), then, for all \(t\ge 0\), \(u(t)=c_2 \chi _V\). Hence, Lemma 6.15 with \(u_0=w_0\) and \(u=w\) tells us that, for all \(t\ge 0\) and for all \(i\in V\), \(w_i(t) \le c_2\).

The final statement follows by noting that, for all \(i\in V\), \(-\Vert w_0\Vert _{\mathcal {V},\infty } \le (w_0)_i \le \Vert w_0\Vert _{\mathcal {V},\infty }\). \(\square \)

Remark 6.17

Numerical simulations show that when \(G\not \in \mathcal {C}_\gamma \), the results from Corollary 6.16 do not necessarily hold for all \(t>0\). For example, consider an unweighted 4-regular graph (in the notation of Section S9.1 in Supplementary Materials we take the graph \(G_{\text {torus}}(900)\)) with \(r=0\) and \(\gamma =0.7\). We compute \(\min _{i,j\in V} (d_i^{-r} \omega _{ij} + \gamma \frac{d_j^r}{{\mathrm {vol}}\left( V\right) } f^j_i) \approx -0.1906\) in MATLAB using (70), (71), so the graph is not in \(\mathcal {C}_{0.7}\). Computing \(v(0.01) = e^{-0.01 L} v^0\), where \(v^0\) is a \(\{0,1\}\)-valued initial condition,Footnote 18 we find \(\min _{i\in V} v_i(0.01) \approx -\,0.0033<0\) and \(\max _{i\in V} v_i(0.01) \approx 1.0033>1\). Hence the conclusions of Corollary 6.16 do not hold in this case.

We can use the result from Corollary 6.16 to prove a second pinning bound, in the vein of Lemma S5.10, for graphs in \(\mathcal {C}_\gamma \); see Lemma S8.1 in Supplementary Materials.

7 Numerical Implementations

We implement (mcOKMBO) (in MATLAB version 9.2.0.538062 (R2017a)) by computing the eigenvalues \(\Lambda _m\) with corresponding eigenfunctions \(\lambda _m\) and then using the spectral expansion from (57) to solve (47). This is similar in spirit to the spectral expansion methods used in, for example (Bertozzi and Flenner 2012; Calatroni et al. 2017). However, in those papers an iterative method is used to deal with additional terms in the equation. Here, we can deal with the operator L in (47) in one go. Note that in other applications of spectral expansion methods, such as those in Bertozzi and Flenner (2016), sometimes only a subset of the eigenvalues and corresponding eigenfunctions is used. When n is very large, computation time can be saved, often without a great loss of accuracy, by using a truncated version of (38) which only uses the \(K \ll n\) smallest eigenvalues \(\Lambda _m\) with corresponding eigenfunctions. The examples we show in this paper (and Supplementary Materials) are small enough that such an approximation was not necessary, but it might be considered if the method is to be run on large graphs.

Fig. 1
figure 1

Initial (left column) and final (right column) states of Algorithm (mcOKMBO) applied to \(G_{\text {moons}}\) with \(r=0\), \(M=300\), \(\tau =1\) for a different value of \(\gamma \) in each row. The initial conditions are eigenfunction based in the sense of option (c) in Section S9.3 in Supplementary Materials. The values of \(F_0\) at the final iterates are approximately 109.48 (top row), 230.48 (middle row), and 626.89 (bottom row). a Initial condition for \(\gamma =0.1\), b Final iterate (\(k=21\)) for \(\gamma =0.1\), c Initial condition for \(\gamma =1\), d Final iterate (\(k=9\)) for \(\gamma =1\), e Initial condition for \(\gamma =10\) and f Final iterate (\(k=7\)) for \(\gamma =10\)

Fig. 2
figure 2

Plots of \(J_1\left( v^k\right) \) (left column) and \(F_0\left( v^k\right) \) (right column) for the applications of (mcOKMBO) corresponding to Fig. 1. a Plot of \(J_1\left( v^k\right) \) for \(\gamma =0.1\), b Plot of \(F_0\left( v^k\right) \) for \(\gamma =0.1\), c Plot of \(J_1\left( v^k\right) \) for \(\gamma =1\), d Plot of \(F_0\left( v^k\right) \) for \(\gamma =1\), e Plot of \(J_1\left( v^k\right) \) for \(\gamma =10\) and f Plot of \(F_0\left( v^k\right) \) for \(\gamma =10\)

Figure 1 shows the initial conditions and final states for three runs of (mcOKMBO) on a two-moon graph \(G_{\text {moons}}\),Footnote 19 with different values for \(\gamma \). Figure 2 shows the corresponding values of \(J_\tau (v^k)\) and \(F_0(v^k)\) as a function of the iteration number k. In each case, the algorithm was terminated when \(v^k=v^{k-1}\), which is why in each plot in Fig. 2 the final two values are the same.

As expected from Lemma 5.18, \(J_\tau \) decreases along the iterates. By and large \(F_0\) also decreases, although Fig. 2d shows this is not necessarily always the case; also note that the value at the final iterate is not guaranteed to be the minimum value among all iterates (although in our tests it always is close, if not equal, to that minimum; see the figures in Section S9 in Supplementary Materials).

In Section S9 of Supplementary Materials we provide additional results obtained by running (mcOKMBO) on various different graphs, as well as in-depth discussions about those results and the choice of \(\tau \), of the initial condition, and of the other parameters (\(\gamma \), q, r, M, N) in the graph Ohta–Kawasaki model and the (mcOKMBO) algorithm.

8 Discussion and Future Work

In this paper we presented three main results: the Lyapunov functionals associated with the (mass conserving) Ohta–Kawasaki MBO schemes \(\Gamma \)-converge to the sharp interface Ohta–Kawasaki functional; there exists a class of graphs on which this MBO scheme can be interpreted as a standard graph MBO scheme on a transformed graph (and for which additional comparison principles hold); the mass conserving Ohta–Kawasaki MBO scheme works well in practice when attempting to minimize the sharp interface graph Ohta–Kawasaki functional under a mass constraint. Along the way we have also further developed the theory of PDE inspired graph problems and added to the theoretical underpinnings of this field.

Future research on the graph Ohta–Kawasaki functional can mirror the research on the continuum Ohta–Kawasaki functional and attempt to prove the existence of certain structures in minimizers on certain graphs, analogous to structures such as lamellae and droplets in the continuum case. The numerical methods presented in this paper might also prove useful for simulations of minimizers of the continuum functional.

The \(\Gamma \)-convergence results presented in this paper also fit in well with the ongoing programme, started in van Gennip et al. (2014), aimed at improving our understanding how various PDE inspired graph-based processes, such as the graph MBO scheme, graph Allen–Cahn equation, and graph mean curvature flow, are connected.

One of the initial hopes for the graph Ohta–Kawaski functional when starting this research was that it might be helpful to detect particular structures in graphs [(similar to how the graph Ginzburg–Landau functional can be used to detect cluster structures (Bertozzi and Flenner 2012) and to how the signless graph Ginzburg–Landau functional detects bipartite structures (Keetch and van Gennip in prep)]. So far this line of research has not yielded concrete results, but it is worth keeping in mind as a potential application, if such a structure can be identified.

We thank the anonymous referee of the first draft of this paper for the suggestion that the mass conserving MBO scheme can be useful for data clustering with prescribed cluster sizes. It would be interesting to pursue this idea in future research.