Max-linear models in random environment

We extend previous work on max-linear models on finite directed acyclic graphs to infinite graphs as well as random graphs, and investigate their relation to classical percolation theory, more particularly the impact of Bernoulli bond percolation on such models. We show that the critical probability of percolation on the oriented square lattice $\mathbb{Z}^2$ describes a phase transition in the obtained model. The focus is on the dependence introduced by this graph into the max-linear model. We discuss natural applications in communication networks, in particular concerning the propagation of influences.


Introduction
Extreme value theory is concerned with max-stable random elements which occur as limits of normalized maxima. The theory has progressed in recent years from classical finite-dimensional models to infinite-dimensional models (see, for example, [10,28,29]). A monograph relevant in the infinite-dimensional context is [6]. Prominent models are stochastic processes in space and/or time having finite-dimensional max-stable distributions (see, e.g., [5,11,21]). Such processes model extreme dependence between process values at different locations and/or time points.
Max-linear models are natural analogues of linear models in an extreme value framework. Within the class of multivariate extreme value distributions, whose dependence structures are characterized by a measure on the sphere, they are characterized by the fact that this measure is discrete (see, e.g., [30]).
In this paper we connect two research fields, namely max-linear models on directed acyclic graphs and percolation theory. Directed acyclic graphs, also called Bayesian networks, describe conditional independence properties between random variables. Percolation, in particular Bernoulli bond percolation, is a simple way of obtaining a random version of a directed acyclic graph using a sample of iid Bernoulli random variables.
We extend previous work on max-linear models on finite directed acyclic graphs (e.g. [12,13,22]) to infinite graphs. The model allows for finite subgraphs with different dependence structures, and we envision applications where this may play a role, for instance, a hierarchy of communities with different communication structures. Max-linear models on directed acyclic graphs have already found concrete applications: for example, in [9] they were fitted to explain properties of European stock markets, where the economic sector influences the tail behavior of stock returns by means of max-linear behavior. The model we propose is quite flexible, as we work on arbitrary subgraphs of the oriented two-dimensional lattice and additionally incorporate randomness. Thus our model allows us to capture arbitrary (finite) directed acyclic graphs by identifying their edges with paths in our model. In particular, directed acyclic graphs such as those fitted in the description of European stock markets are covered by our model as well.
We investigate the relation of the infinite max-linear model to classical percolation theory, more precisely to nearest neighbor bond percolation (e.g. [4,15]). We focus on the square lattice $\mathbb{Z}^2$ with edges to the nearest neighbors, where we orient all edges in a natural way (north-east), resulting in a directed acyclic graph (DAG) on this lattice. On this infinite DAG a random sub-DAG may be constructed by choosing nodes and edges between them at random. In a Bernoulli bond percolation DAG, edges are independently declared open with probability $p \in [0, 1]$ and closed otherwise. The random graph then consists of the nodes and the open edges. The percolation probability is the probability $P_p(|C(i)| = \infty)$, with $|C(i)|$ denoting the cardinality of $C(i)$, that a given node $i$ belongs to an infinite open cluster $C(i)$; it is 0 if $p \le 1/2$ and positive for $p > 1/2$. Kolmogorov's zero-one law entails that an infinite open cluster exists with probability 1 for $p > 1/2$, and otherwise with probability 0.
We combine percolation theory with an infinite max-linear model by assigning to each node a max-linear random variable. Sampling a random graph by Bernoulli bond percolation, we use this subgraph to model the dependence in the max-linear process on the oriented square lattice. The max-linear models we envision are recursively constructed from independent continuously distributed random variables $(Z_j)_{j \in \mathbb{Z}^2}$, which include the class of variables belonging to the max-domain of attraction of the Fréchet distribution. More precisely, each random variable $X_i$ on a node $i \in \mathbb{Z}^2$ with ancestral set $\mathrm{an}(i)$ satisfies, in distribution on every finite DAG,
$$X_i = \bigvee_{j \in \mathrm{An}(i)} b_{ji} Z_j,$$
where the $b_{ji}$ are positive coefficients. As this model is defined on a random graph, it is a max-linear model in random environment. According to our terminology, the models investigated in [18,24] can also be seen as models in random environment. For related work we also refer to [19]. To the best of our knowledge, our model is the first to study the impact of Bernoulli bond percolation on max-linear models, in the sense that we show that classical results on the two phases of Bernoulli bond percolation translate into two distinct phases of typical behavior of a naturally investigated property of max-linear models, namely their dependence structure. One prerequisite for this work is the fact that max-stable random variables $X_i$ and $X_j$ on different nodes of a DAG are independent if and only if they have no common ancestors; see [13, Theorem 2.3]. As a consequence of this and percolation theory we find, for the subcritical case $p \le 1/2$, that two random variables become independent with probability 1 whenever their distance tends to infinity. In contrast, for the supercritical case there exists $1/2 < p^* < 1$ such that for $p > p^*$ two random variables are dependent with positive probability, even when their node distance tends to infinity.
Finally, we consider changes in the dependence properties of random variables on a sub-DAG $H$ of a finite or infinite graph on the oriented square lattice $\mathbb{Z}^2$ when enlarging this subgraph. The method of enlargement consists of adding nodes and edges of Bernoulli bond percolation clusters. Here we start with $X_i$ and $X_j$ independent in $H$, and answer the question whether they can become dependent in the enlarged graph. We evaluate critical probabilities such that $X_i$ and $X_j$ become dependent in the enlarged graph with positive probability or with probability 1. We find in particular that for every DAG $H$ with a finite number of nodes, $X_i$ and $X_j$ remain independent in the enlarged graph with positive probability. On the other hand, if $H$ has node set $\mathbb{Z}^2$ and percolates everywhere, i.e., every connected component of $H$ is infinite, then $X_i$ and $X_j$ become dependent with probability 1 in the enlarged graph.
The recursive max-linear process $X$ from Definition 2.1 below may be viewed as a model for the communication between members of an infinitely large network, which may be regarded as an arbitrarily large union of individual networks of finite size, each with its own communication structure. These are represented by finite sub-DAGs. A practical example in which a max-linear process is an eligible model is the exploration of web-based communication or, more generally, of complex networks in which it is of considerable interest to determine the (most) influential nodes. In a (random) graph model of web-based communication, nodes may be identified with the ranks of certain webpages; that is, realizations of the random variables $X_i$ may correspond to concrete values of ranks. When using PageRank as a tool to detect influences, several results [17,31] confirm that the distribution of a rank is heavy-tailed, which motivates employing max-linear models as an alternative to capture the evolution of influences [25].
Besides, we believe the scope of applicability of the model under discussion is quite broad. A concrete example, discussed in more detail in Section 5, is the modeling of the course of an auction. Numerous auction houses nowadays offer live auctions, in which bidders from all over the world can place their bids on the internet. We consider max-linear models suitable for modeling the course of such auctions; for further discussion on this topic we refer to Section 5.
Another practical application, of particular statistical interest, is the identifiability of recursive max-linear processes from concrete observations and known DAGs. In particular, [14] provides estimation procedures for the crucial parameters of the model, namely the edge weights and the max-linear coefficient matrix discussed below. More precisely, for $n \in \mathbb{N}$ let $X_1, \ldots, X_n$ be independent realizations of a max-linear model, i.e., of a random vector as given in Definition 2.1 below. Assume that for each $X_j$, $1 \le j \le n$, its distribution (which is assumed to have atom-free margins on $\mathbb{R}_+$) and the underlying DAG are known. Then, according to [14, Section 4], one can estimate the corresponding max-linear coefficient matrix without further conditions.
Our paper is organized as follows. In Section 2 we introduce recursive max-linear models on DAGs in $\mathbb{Z}^2$. In particular, we give sufficient conditions under which max-linear models on infinite graphs are well-defined. Section 3 uses the fact that the max-linear coefficients $b_{ji}$ originate from an algebraic path analysis, multiplying edge weights along a path between nodes $j$ and $i$, with $j$ being an ancestor of $i$. This concept, known from finite recursive max-linear models, extends to infinite DAGs. Example 2.5 shows that the important class of max-weighted models can be extended from finite to infinite graphs such that the max-weighted property is preserved. Recursive max-linear processes on a DAG have the nice property that independence between random variables on two different nodes is characterized by their ancestral sets. We prove that this also holds in the setting of infinite graphs. This is the starting point of our investigation. Section 4 contains the dependence results. Here we investigate Bernoulli bond percolation DAGs. In Section 4.1 we prove that nearest neighbor bond percolation on $\mathbb{Z}^2$ yields independence of $X_i$ and $X_j$ with probability 1 as $|i - j| \to \infty$ for $p < p^*$, whereas it yields dependence with positive probability for $p > p^*$, for some $1/2 < p^* < 1$. In Section 4.2 we investigate, for $X_i$ and $X_j$ independent in some subgraph $H$, whether enlargement of $H$ can result in dependence between $X_i$ and $X_j$. Finally, in Section 5 we discuss applications in communication networks in more detail, together with interpretations of our results in this context.

Max-linear processes on directed acyclic lattice graphs
This section presents a description of infinite max-linear models on directed acyclic lattice graphs. We first explain the structure of the directed graph on a lattice before we define and show the existence of a random field with finite-dimensional distributions entailing a dependence structure of max-linear type encoded in such graphs.

Graph notation and terminology
Let $\mathbb{Z}^2$ be the oriented square lattice defined as follows (e.g. [1,4,8,15]). We write $i = (i_1, i_2)$ for elements of $\mathbb{Z}^2$ and refer to them as nodes. The distance from $i$ to $j$ is defined in terms of the Manhattan metric, given by $\delta(i, j) = |i_1 - j_1| + |i_2 - j_2|$ for $i, j \in \mathbb{Z}^2$. We regard $\mathbb{Z}^2$ as a graph by adding edges between all nodes $i, j$ with $\delta(i, j) = 1$. In addition, we assume the edges to be oriented in the following manner. Denote by $\mathrm{pa}(i)$ and $\mathrm{ch}(i)$ the parents and children of node $i = (i_1, i_2)$, respectively. Then $j = (j_1, j_2) \in \mathrm{pa}(i)$ if and only if either $(j_1, j_2) = (i_1 - 1, i_2)$ or $(j_1, j_2) = (i_1, i_2 - 1)$ and, consequently, $j = (j_1, j_2) \in \mathrm{ch}(i)$ if and only if either $(j_1, j_2) = (i_1 + 1, i_2)$ or $(j_1, j_2) = (i_1, i_2 + 1)$. We write $i \to j$ if there is a directed edge from $i$ to $j$, that is, if $i$ is a parent of $j$. The set of edges in this oriented lattice $\mathbb{Z}^2$ is $E(\mathbb{Z}^2)$, which is a subset of $\mathbb{Z}^2 \times \mathbb{Z}^2$. In this paper we work with graphs $G = (V(G), E(G))$ with $V(G) \subset \mathbb{Z}^2$ and $E(G) \subset E(\mathbb{Z}^2)$, which are directed acyclic lattice graphs. We refer to them simply as DALGs or DAGs. When there is no ambiguity, we often abbreviate $V = V(G)$ and $E = E(G)$. Thus, every node $i \in V$ has at most two children and two parents, but possibly infinitely many descendants and ancestors, denoted by $\mathrm{de}(i)$ and $\mathrm{an}(i)$, respectively. Moreover, we define $\mathrm{De}(i) = \{i\} \cup \mathrm{de}(i)$ and $\mathrm{An}(i) = \{i\} \cup \mathrm{an}(i)$. Note that such a DAG may have no roots, i.e., it might be the case that for all $i \in V$ we have $\mathrm{an}(i) \neq \emptyset$, which proves relevant for the questions we want to answer.
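As an informal illustration, not part of the formal development, the lattice conventions above can be sketched in a few lines of Python; the function names are ours:

```python
def delta(i, j):
    """Manhattan distance delta(i, j) = |i1 - j1| + |i2 - j2|."""
    return abs(i[0] - j[0]) + abs(i[1] - j[1])

def parents(i):
    """pa(i): the western and southern neighbours, whose edges point into i."""
    (i1, i2) = i
    return [(i1 - 1, i2), (i1, i2 - 1)]

def children(i):
    """ch(i): the eastern and northern neighbours, reached by edges out of i."""
    (i1, i2) = i
    return [(i1 + 1, i2), (i1, i2 + 1)]
```

Every node thus has exactly two parents and two children in the full lattice; a sub-DAG keeps only a subset of these edges.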

Infinite recursive max-linear models
We now introduce recursive max-linear processes. Let $G = (V(G), E(G))$ be a DAG with a possibly infinite set of nodes $V(G) \subset \mathbb{Z}^2$. Moreover, we assume that all nodes $i \in V(G)$ and all edges $(i, j) \in E(G)$ are equipped with prespecified (strictly) positive weights $c_{ii}$ and $c_{ij}$, respectively. Recall from [12, Section 1] the recursive max-linear model on a finite DAG,
$$X_i = \bigvee_{k \in \mathrm{pa}_G(i)} c_{ki} X_k \vee c_{ii} Z_i, \quad i \in V(G),$$
where $(Z_j)_{j \in V(G)}$ are independent continuously distributed non-negative noise variables with infinite support on $(0, \infty)$ and $\mathrm{pa}_G(i)$ denotes the parents of $i$ which belong to the DAG $G$. Recall that $c_{ii}$ is the weight of node $i$. By [12, Theorem 2.2], applying a standard path analysis, the vector $X$ exhibits a max-linear structure, that is,
$$X_i = \bigvee_{j \in \mathrm{An}_G(i)} b^G_{ji} Z_j,$$
where we denote by $\mathrm{an}_G(i)$ the ancestors of $i$ in the DAG $G$, and the max-linear coefficients are given by $b^G_{ii} = c_{ii}$ and $b^G_{ji} = \bigvee_{p \in P_{ji}(G)} d_{ji}(p)$ for $j \in \mathrm{an}_G(i)$, with $P_{ji}(G)$ denoting the set of all paths in $G$ from $j$ to $i$ and $d_{ji}(p)$ the product of $c_{jj}$ and the edge weights along the path $p$. Note that this representation explicitly depends on $G$, and $B_G = (b^G_{ji})$ is called the max-linear coefficient matrix of $X$ with respect to $G$. We now provide an extension of this to infinite graphs, in which a family of infinitely many random variables is characterized by a graph $G$ with infinitely many nodes and edges.
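The path analysis above can be carried out on any finite DAG by dynamic programming over a topological order: the best path weight from an ancestor $j$ to a node $i$ is the maximum, over the parents of $i$, of the best weight to the parent times the final edge weight. A minimal sketch (names and data layout are ours, not from [12]):

```python
from collections import defaultdict, deque

def max_linear_coefficients(nodes, edges, c_node, c_edge):
    """Compute b[(j, i)]: the maximum over directed paths p from j to i of
    c_node[j] times the product of edge weights along p, with
    b[(i, i)] = c_node[i].  Dynamic programming over a topological order."""
    indeg = {v: 0 for v in nodes}
    out = defaultdict(list)
    for (u, v) in edges:
        indeg[v] += 1
        out[u].append(v)
    order, q = [], deque(v for v in nodes if indeg[v] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    b = {(i, i): c_node[i] for i in nodes}
    for u in order:                      # all paths into u are resolved here
        for (j, uu) in list(b):
            if uu != u:
                continue
            for v in out[u]:             # extend the best j -> u path to v
                w = b[(j, u)] * c_edge[(u, v)]
                if b.get((j, v), 0.0) < w:
                    b[(j, v)] = w
    return b
```

On a diamond DAG $1 \to 2 \to 4$, $1 \to 3 \to 4$ the coefficient $b_{14}$ is the larger of the two path products, in accordance with the definition of $b^G_{ji}$.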
Definition 2.1. We call a family of random variables $X := \{X_i : i \in V(G)\}$ a recursive max-linear process if for every $i \in V(G)$ the random variable $X_i$ is given by the representation
$$X_i = \bigvee_{j \in \mathrm{An}_G(i)} b^G_{ji} Z_j,$$
provided that the latter maximum is almost surely finite, where $(Z_j)_{j \in V(G)}$ are independent continuously distributed non-negative noise variables with infinite support on $(0, \infty)$ and $b^G_{ji}$ is computed by the path analysis described above.

We now prove the existence of a stochastic process with the dependence structure described by infinite recursive max-linear processes as in Definition 2.1, and we give a sufficient condition on the weights under which such a process exists. We first illustrate the procedure of extending max-linear models in the case of two finite subgraphs of the lattice. Assume that $(V_1, E_1)$ and $(V_2, E_2)$ are finite sub-DAGs and that $X^1 = (X_{i_1}, \ldots, X_{i_m})$ and $X^2 = (X_{j_1}, \ldots, X_{j_n})$ are the corresponding recursive max-linear models with coefficient matrices $B^1$ and $B^2$, respectively, with recursive max-linear representations in terms of $\mathrm{pa}_1$ and $\mathrm{pa}_2$, the parents with respect to the graphs $(V_1, E_1)$ and $(V_2, E_2)$. Consider the enlarged finite graph $(V, E) = (V_1 \cup V_2, E_1 \cup E_2)$. Then a recursive max-linear model on this graph is given in terms of $\mathrm{pa}_{1,2}$, the parents with respect to the graph $(V, E)$, with coefficient matrix $B^{1,2}$ calculated by the usual path analysis (see [12, Theorem 2.2]). In the following, for notational simplicity, we write $b_{ji}$ for the max-linear coefficients.

Lemma 2.2. Assume that the weights satisfy
$$\sum_{j \in \mathrm{An}(i)} (b_{ji})^\alpha < \infty \quad \text{for every } i, \qquad (2.2)$$
and that $(Z_k)_{k \in \mathbb{Z}^2}$ is a sequence of independent standard $\alpha$-Fréchet distributed noise variables. Then there exists a max-linear process as in Definition 2.1.
Proof: We prove that the weighted maximum of infinitely many noise variables is finite with probability one. Indeed, let $x \in (0, \infty)$. Then
$$P(X_i \le x) = \prod_{j \in \mathrm{An}(i)} P(b_{ji} Z_j \le x) = \exp\Big(-x^{-\alpha} \sum_{j \in \mathrm{An}(i)} (b_{ji})^\alpha\Big) > 0$$
by condition (2.2). Thus, $X_i$ has a Fréchet distribution. Moreover, let $i_1, \ldots, i_d \in \mathbb{Z}^2$, $d \ge 1$, and $x_{i_1}, \ldots, x_{i_d} \in (0, \infty)$. Then, by a simple calculation, the finite-dimensional distributions of $X$ are given by
$$P(X_{i_1} \le x_{i_1}, \ldots, X_{i_d} \le x_{i_d}) = \exp\Big(-\sum_{j} \bigvee_{k=1}^{d} \Big(\frac{b_{j i_k}}{x_{i_k}}\Big)^{\alpha}\Big),$$
where the sum runs over $j \in \bigcup_{k=1}^d \mathrm{An}(i_k)$. In particular, every recursive max-linear process with standard $\alpha$-Fréchet noise as in Definition 2.1 exhibits these finite-dimensional distributions.
Consider the following example of a (max-weighted) max-linear process $X$ with weights satisfying assumption (2.2); see also Example 2.5 below. For simplicity assume that for every $i, j \in \mathbb{Z}^2$ with $j \in \mathrm{An}(i)$ there is only one path from $j$ to $i$.
Let $p = [j = k_0 \to k_1 \to \cdots \to k_n = i]$ be the path from $j$ to $i$, and assume that the edges are equipped with weights decaying along the path, where $c_{ii} = 1$ for every $i$, chosen such that condition (2.2) is satisfied. In particular, since the weights vanish with increasing distance, the larger the distance between a node and its ancestor, the smaller the contribution of the ancestor, which is a natural property to hold.
Different blocks of the matrix $B$ may correspond to distinct communities with different communication structures. The values of the random variables $X_i$ may correspond to extreme observations. The following limit result, which can be found in [30, Lemma 2.1(iv)], shows that we can regard a max-linear model on an infinite graph as a limit of a sequence of max-linear models on finite graphs. We make this precise in the following remark.

Definition 2.3. We say that a sequence of subgraphs $(V_n, E_n)_{n \in \mathbb{N}}$ of a graph $(V, E)$ tends to $(V, E)$ if for every $j \in V$ and $e \in E$ there exists $n \in \mathbb{N}$ such that $j \in V_m$ and $e \in E_m$ for every $m \ge n$.
Remark 2.4. If $(Z_j)_{j \in \mathbb{Z}^2}$ are independent standard $\alpha$-Fréchet random variables and $(V_n, E_n)_{n \in \mathbb{N}}$ is a sequence of finite sub-DAGs of the oriented square lattice $\mathbb{Z}^2$, then from Lemma 2.2 we know that for each $n \in \mathbb{N}$ the random variable $X_i^{(n)} = \bigvee_{j \in V_n} b_{ji} Z_j$ has an $\alpha$-Fréchet distribution with scale parameter $\big(\sum_{j \in V_n} (b_{ji})^\alpha\big)^{1/\alpha}$. Suppose that the sequence of DAGs $(V_n, E_n)_{n \in \mathbb{N}}$ tends to a DAG $(V, E)$ with infinitely many nodes as $n \to \infty$, and that $X_i^{(n)}$ converges to $X_i$. Provided $X_i$ is almost surely finite, the value at node $i$ may originate from a large number of values along an infinite path. As there may be many sequences of subgraphs with limit $(V, E)$, the random variable at node $i$ depends on this sequence. There may be sequences of subgraphs, or paths in subgraphs, leading to very large values of $X_i$; as a consequence, all its descendants also become large.
We now treat the case $V(G) \subset \mathbb{N}_0^2 = \mathbb{N}_0 \times \mathbb{N}_0$, in which every node has at most finitely many ancestors, and give an example of a max-weighted process. To this end, we consider infinite DAGs on $\mathbb{N}_0^2$, which we view as a prototypical sub-DAG of the oriented square lattice $\mathbb{Z}^2$ with infinitely many nodes, such that each node has at most finitely many ancestors.

Max-weighted process
Let $G = (V, E)$ be a DAG with $V \subset \mathbb{N}_0^2 = \mathbb{N}_0 \times \mathbb{N}_0$ and corresponding edges $E$, and assume a recursive max-linear process $X = \{X_i : i \in V\}$ on $G$. In the following, the aim is to give a canonical choice of a possible max-linear coefficient matrix $B$ associated with the process $X$, and to introduce a process that we call max-weighted.
Assume that the edges of $G$ are equipped with positive weights $c_{ki}$ for every $i \in V$ and $k \in \{i\} \cup \mathrm{pa}(i)$. For $n \in \mathbb{N}$ let $G_n = (V_n, E_n)$ be the DAG with nodes $V_n = \{i = (i_1, i_2) \in V : i_1 + i_2 \le n\}$ and corresponding edges taken from $E$, so that $\lim_{n \to \infty} G_n = G$. By Definition 2.1 there are independent non-negative noise variables $(Z_i)_{i \in V_n}$ with infinite support on $(0, \infty)$ and a max-linear coefficient matrix $B = (b_{ij})_{i, j \in V_n}$ with non-negative entries such that $X_i^{(n)}$ is as in (2.3). Indeed, the entries $b_{ij}$ may be derived from the path analysis mentioned in Section 2. This in particular shows that for $i \in V$ the $b_{ij}$ do not depend on the descendants $\mathrm{de}(i)$. Thus, an infinite max-linear coefficient matrix $B$ is built up from increasing finite blocks representing $V_n$ for increasing $n \in \mathbb{N}$.
For a communication network on $\mathbb{N}_0^2$ the representation (2.3) reduces to a maximum over finitely many random variables; for instance, the root 0 influences all other nodes in the network. Hence, if the root node happens to hold the maximum of all $Z_j$ for $j \in \mathbb{N}_0^2$, its influence may dominate the whole network, although by the max-linear coefficient matrix $B$ all other nodes may have different realisations.
As there may be several paths between nodes with different path weights, so-called max-weighted models, which have the same path weight along all possible directed paths between two nodes, play an important role. We now give an example of such a max-linear process, relying on the definition of max-weighted models presented in [12, Definition 3.1] and discussed in [13, Section 3]. Resulting as a limit of max-weighted paths, we may call such a process max-weighted.
Example 2.5 (Max-weighted process). Let $V = \mathbb{N}_0^2$ be the set of nodes and assume oriented edges between all nodes $i, j$ with $\delta(i, j) = 1$. Start with a subgraph in which the set of nodes is bounded and of the form $V_n = \{(i_1, i_2) \in \mathbb{N}_0^2 : i_1 + i_2 \le n\}$ for some $n \in \mathbb{N}_0$, and denote the corresponding set of edges by $E_n$. Assume that the corresponding model is max-weighted, so that every entry of the max-linear coefficient matrix is given by $b_{ji} = d_p\big((j_1, j_2), (i_1, i_2)\big)$, where $d_p\big((j_1, j_2), (i_1, i_2)\big)$ is calculated by a path analysis along the edge weights as in equation (1.5) in [12]. Since the model is max-weighted, $d_p\big((j_1, j_2), (i_1, i_2)\big)$ takes the same value for every path $p$ from $j$ to $i$, and thus we can write $d_p\big((j_1, j_2), (i_1, i_2)\big) = d\big((j_1, j_2), (i_1, i_2)\big)$, since the latter value is independent of the chosen path $p$. We now show that the DAG can be enlarged in such a way that the enlarged subgraph is again max-weighted; moreover, this procedure can be executed infinitely often. Let $n \ge 1$ and assume that we add a node, say $(\ell_1, \ell_2)$, which we connect with the nodes $(\ell_1 - 1, \ell_2)$ and $(\ell_1, \ell_2 - 1)$ in $V$ by two edges with corresponding weights $c_{(\ell_1 - 1, \ell_2), (\ell_1, \ell_2)}$ and $c_{(\ell_1, \ell_2 - 1), (\ell_1, \ell_2)}$. By choosing these appropriately we can ensure that the new model is again max-weighted. More precisely, we choose the weights to satisfy
$$d\big(0, (\ell_1 - 1, \ell_2)\big)\, c_{(\ell_1 - 1, \ell_2), (\ell_1, \ell_2)} = d\big(0, (\ell_1, \ell_2 - 1)\big)\, c_{(\ell_1, \ell_2 - 1), (\ell_1, \ell_2)}.
$$
We now show that the enlarged DAG again leads to a max-weighted model. Let $p_1$ be a path from the root to $(\ell_1, \ell_2)$ containing $(\ell_1 - 1, \ell_2)$, and let $p_2$ be such a path containing the node $(\ell_1, \ell_2 - 1)$. Then we have by definition
$$d_{p_1}\big(0, (\ell_1, \ell_2)\big) = d\big(0, (\ell_1 - 1, \ell_2)\big)\, c_{(\ell_1 - 1, \ell_2), (\ell_1, \ell_2)} = d\big(0, (\ell_1, \ell_2 - 1)\big)\, c_{(\ell_1, \ell_2 - 1), (\ell_1, \ell_2)} = d_{p_2}\big(0, (\ell_1, \ell_2)\big).$$
Thus every path from the root to $(\ell_1, \ell_2)$ has the same weight, which shows that the new model is max-weighted.
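The balancing of the two new edge weights can be made concrete: given the (path-independent) weights into the two new parents and a free choice for one new edge, the other is determined. A sketch under the notation of Example 2.5 (the function name is ours):

```python
def balancing_weight(d_left, d_below, c_left):
    """Given d_left = d(0, (l1-1, l2)), d_below = d(0, (l1, l2-1)) and a
    chosen weight c_left for the edge (l1-1, l2) -> (l1, l2), return the
    weight c_below for the edge (l1, l2-1) -> (l1, l2) that keeps the
    enlarged model max-weighted, i.e. d_left * c_left == d_below * c_below."""
    return d_left * c_left / d_below
```

With this choice both root-to-$(\ell_1, \ell_2)$ path weights agree, which is exactly the displayed identity.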
In the following section we return to DAGs on $\mathbb{Z}^2$, which allow for infinitely many ancestors. We consider percolation (dependence) properties between two fixed nodes $i$ and $j$ in $\mathbb{Z}^2$.

Common ancestors and dependence structure
In this section we let $G = (V, E)$ be an arbitrary, possibly infinite DAG with nodes $V \subset \mathbb{Z}^2$ and oriented edges $E$. Furthermore, we let $X$ be a recursive max-linear process on $G$ as in Definition 2.1.
The following result is an analogue of [13, Theorem 2.3], and its proof justifies the extension of the arguments to infinite dimensions.

Proposition 3.1. Let $X := \{X_u : u \in V(G)\}$ be a recursive max-linear process and $i, j \in V(G)$. The following statements are equivalent: (i) $X_i$ and $X_j$ are independent; (ii) $\mathrm{An}(i) \cap \mathrm{An}(j) = \emptyset$.
Proof: The proof extends [13, Theorem 2.3]. By Definition 2.1 there exist independent noise variables $Z_k$, $k \in V(G)$, with infinite support on $(0, \infty)$ and a matrix $B = (b_{uk})$ such that $X_u = \bigvee_{k \in \mathrm{An}(u)} b_{uk} Z_k$ for $u \in V(G)$. We show that $X_i$ and $X_j$ are independent if and only if $\mathrm{An}(i) \cap \mathrm{An}(j) = \emptyset$. Indeed, first assume that $\mathrm{An}(i) \cap \mathrm{An}(j) = \emptyset$. Then we obtain for every $x_i, x_j \in (0, \infty)$
$$P(X_i \le x_i, X_j \le x_j) = P(X_i \le x_i)\, P(X_j \le x_j)$$
by independence of the noise variables $Z_k$, $k \in V(G)$, since the two maxima run over disjoint sets of noise variables. On the other hand, assume that $X_i$ and $X_j$ are independent. By way of contradiction suppose that $\mathrm{An}(i) \cap \mathrm{An}(j) \neq \emptyset$, and choose $n \in \mathbb{N}$ such that $\mathrm{An}(i) \cap \mathrm{An}(j) \cap [-n, n]^2 \neq \emptyset$. Let $H$ be the sub-DAG whose nodes are $i$, $j$ and their ancestors in $[-n, n]^2$, where $H$ contains all the edges of $G$ connecting nodes in $[-n, n]^2$. Write $V(H) = \{i, j, i_1, \ldots, i_k\}$ for some $k \in \mathbb{N}$. Observe that $(X_i, X_j, X_{i_1}, \ldots, X_{i_k})$ is a max-linear model on $H$ with almost surely finite, but not necessarily independent, innovation noise variables $\tilde{Z}_k$, $k \in V(H)$, which absorb the contributions of the ancestors outside $H$. Let $l \in \mathrm{An}(i) \cap \mathrm{An}(j) \cap V(H)$. Then, by the assumptions on the noise variables $\tilde{Z}_k$, $k \in V(H)$, the event that both $X_i$ and $X_j$ attain their maxima in the common variable $\tilde{Z}_l$ has positive probability. But by continuity of the noise variables this contradicts the fact that $X_i$ and $X_j$ are independent. This finishes the proof.
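The combinatorial half of Proposition 3.1 is easy to compute on any finite DAG: collect the ancestral sets by a reverse breadth-first search and test disjointness. A minimal sketch with our own names:

```python
from collections import deque

def ancestors(node, parents_of):
    """an(node): all nodes from which node is reachable, via reverse BFS
    over the parent relation parents_of (a dict node -> list of parents)."""
    seen, q = set(), deque([node])
    while q:
        u = q.popleft()
        for v in parents_of.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def independent(i, j, parents_of):
    """Proposition 3.1: X_i and X_j are independent iff An(i) and An(j)
    (each including the node itself) are disjoint."""
    Ai = ancestors(i, parents_of) | {i}
    Aj = ancestors(j, parents_of) | {j}
    return Ai.isdisjoint(Aj)
```

For instance, two children of a common parent are dependent, while two isolated nodes are independent.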
Having characterized the dependence between two random variables, we are now interested in the following. We use Bernoulli bond percolation to generate random DAGs on the oriented square lattice $\mathbb{Z}^2$ and, thus, random dependence structures.
We want to answer the following question: given an extreme quantity, observed at two nodes i and j, is there a common cause in the network (a common ancestor) or not?

Bernoulli bond percolation DAGs
The main purpose of this section is to construct max-linear models on randomly obtained DAGs with a possibly infinite number of nodes in order to investigate a randomized dependence structure.
In view of Proposition 3.1, the probability that two random variables $X_i$ and $X_j$ on the random graph are dependent is nothing else than the probability that $i$ and $j$ have common ancestors inside the random open cluster containing the nodes $i$ and $j$. Our setting is a max-linear model on the oriented square lattice and percolation on this simple graphical model. This is a first step towards linking percolation with max-linear models, and we envision further results on more sophisticated graphs as can be found, for instance, in [16] and [20].

Max-linear models on random open clusters
Recall that we consider the oriented square lattice $\mathbb{Z}^2$. For this oriented model, the open cluster at 0 is usually defined as the set of all points that can be reached from the origin by travelling along open edges in the direction of the orientation; see [1,8] or [15, Section 12.8]. As this open cluster always has root 0, any two nodes $i$ and $j$ would have at least the common ancestor 0, which would make the problem discussed below trivial. Consequently, we consider unoriented, but not undirected, paths in (4.2), as we will make precise below.
Let us first recall the framework of Bernoulli bond percolation from any book on percolation, e.g. [4,15]. Given the oriented square lattice $\mathbb{Z}^2$ with edge set $E \subset \mathbb{Z}^2 \times \mathbb{Z}^2$, a (bond) configuration is a function $\omega : E \to \{0, 1\}$, $e \mapsto \omega_e$. An edge $e$ is open in the configuration $\omega$ if and only if $\omega_e = 1$, so configurations correspond to open subgraphs. Recall from Section 2 that in our setting open edges are directed; hence a configuration is a DAG, denoted by $(V, E)$, with $V \subset \mathbb{Z}^2$ and directed edges $E$. Each edge is declared open with probability $p$ and closed otherwise, different edges having independent designations. This gives the Bernoulli measure $P_p$, $p \in [0, 1]$, on the space $\Omega = \{0, 1\}^E$ of configurations. The $\sigma$-field $\mathcal{F}$ is generated by the finite-dimensional cylinders of $\Omega$. In summary, the probability space is $(\Omega, \mathcal{F}, P_p)$.
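For intuition, a configuration on a finite box of the oriented lattice and its open clusters (with edges traversable against their orientation, as in the text) can be simulated directly; the following sketch uses our own names and a seeded generator:

```python
import random

def sample_configuration(n, p, seed=0):
    """Bernoulli bond percolation on the n x n box of the oriented lattice:
    each north- or east-pointing edge is open with probability p,
    independently of all others."""
    rng = random.Random(seed)
    open_edges = set()
    for x in range(n):
        for y in range(n):
            if x + 1 < n and rng.random() < p:
                open_edges.add(((x, y), (x + 1, y)))
            if y + 1 < n and rng.random() < p:
                open_edges.add(((x, y), (x, y + 1)))
    return open_edges

def open_cluster(k, open_edges):
    """C(k): all nodes joined to k by a path of open edges, where edges may
    be traversed in either direction (unoriented connectivity)."""
    adj = {}
    for (u, v) in open_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {k}, [k]
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen
```

At $p = 1$ the cluster of the origin is the whole box; at $p = 0$ it is the origin alone, matching the two degenerate configurations.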
Let $C(k)$ be the open cluster containing the node $k \in V$. By the translation-invariance of the measure $P_p$, the distribution of $|C(k)|$ is well known to be independent of $k \in V$, so that in the following we assume $k = 0 \in V$ without loss of generality. If $|C(0)|$ denotes the (random) number of nodes of $C(0)$, then $P_p(|C(0)| = \infty)$ is called the percolation probability. This probability depends on $p \in [0, 1]$, and Hammersley's critical percolation probability is defined as
$$p_c^1(V) := \inf\{p \in [0, 1] : P_p(|C(0)| = \infty) > 0\}. \qquad (4.1)$$
Thus, for $p > p_c^1(V)$ it is possible to generate infinite open clusters with positive probability. By Kolmogorov's zero-one law (cf. [15, Theorem 1.11]) there exists an infinite open cluster with probability 1 for $p > p_c^1(V)$, and otherwise with probability 0. Similarly, for two different given nodes $i, j \in V$ we can define $C(i, j)$ as the open cluster containing $i$ and $j$, with the convention that $C(i, j) = \emptyset$ if there is no path (of open edges) from $i$ to $j$. For notational simplicity, in the following we assume without loss of generality that $j = 0$. The following definition is related to the radius of a finite open cluster as investigated in [15, Sections 6.1 and 8.4].
As in (4.1) we define the critical probability
$$p_c^2(V) := \inf\{p \in [0, 1] : P_p(|C(i, 0)| = \infty) > 0\},$$
where we use the convention that $|C(i, 0)| > 0$ if and only if there exists a (possibly undirected) path of open edges from 0 to $i$, called an open path. It is not difficult to see that $p_c^1(V) = p_c^2(V)$. Indeed, let $A = \{0 \leftrightarrow i\}$ be the event that there exists an open path from the origin to node $i$. Note that this event has strictly positive probability $P_p(0 \leftrightarrow i)$, also called the two-point connectivity function in [15, Section 8.5]. We recall that an event $A \subset \Omega$ is increasing if $\omega \in A$ and $\omega_e \le \omega'_e$ for every $e \in E$ imply that $\omega' \in A$. Since all the considered events are increasing, the Fortuin-Kasteleyn-Ginibre (FKG) inequality [15, Theorem 2.4] further yields
$$P_p(|C(i, 0)| = \infty) \ge P_p\big(A \cap \{|C(0)| = \infty\}\big) \ge P_p(A)\, P_p(|C(0)| = \infty).$$
Since $P_p(A) > 0$, altogether we obtain $p_c^2(V) \le p_c^1(V)$; the converse inequality is immediate, since $|C(i, 0)| = \infty$ implies $|C(0)| = \infty$. Recall that the critical percolation probability $p_c^1(\mathbb{Z}^2)$ on the whole unoriented square lattice $\mathbb{Z}^2$ equals $1/2$ and moreover satisfies $P_{1/2}(|C(i, 0)| = \infty) = 0$ ([15, Chapter 11]). Given such an infinite open cluster, we are interested in the probability that the random variables $X_i$ and $X_j$ on the random DAG are independent. First, we give a formal definition of a max-linear model in random environment.

Definition 4.1. Let $\{X_u : u \in \mathbb{Z}^2\}$ be a max-linear model. Let $\omega \in \Omega$ be a configuration, i.e., a realization of a sequence of iid Bernoulli random variables indexed by the possible edge set, in which an edge $e \in E$ is present if and only if $\omega_e = 1$. Let $V(\omega)$ be its corresponding set of nodes. The process $\{X_u : u \in V(\omega)\}$ is called a max-linear model in random environment.
From now on we suppose that {X u : u ∈ V(ω)} is a max-linear model in random environment and we investigate the probability P p X i and X j are independent .That is to say, we are mainly interested in the max-linear process {X i : i ∈ C(i, 0)} on the random sub-DAG with nodes V(C(i, 0)) and edges E(C(i, 0)).
We observe that the events $\{X_i \text{ and } X_j \text{ are dependent}\} = \{\mathrm{An}(i) \cap \mathrm{An}(0) \neq \emptyset\}$ and $\{\mathrm{De}(i) \cap \mathrm{De}(0) \neq \emptyset\}$ are increasing as defined above. Let
$$\Sigma = \{\mathrm{An}(i) \cap \mathrm{An}(0) \neq \emptyset\} \cup \{\mathrm{De}(i) \cap \mathrm{De}(0) \neq \emptyset\} \qquad (4.4)$$
denote the event that node $i$ and node 0 have common ancestors or descendants. From arguments given below, it is not difficult to see that $\frac{1}{2} P_p(\Sigma) \le P_p\big(\mathrm{An}(i) \cap \mathrm{An}(0) \neq \emptyset\big)$.
The following lemma gives a refinement of this bound, which may be of interest in its own right.
Using this inequality we obtain the estimates below.
In what follows we need the analogue $C^\to(k) := \mathrm{An}(k) \cup \mathrm{De}(k)$ of the open cluster $C(k)$ containing $k \in V$ in the oriented square lattice. We denote by $P_p(|C^\to(k)| = \infty)$ the probability that there exists an oriented path from $k \in \mathbb{Z}^2$ to $\infty$, which by translation-invariance is independent of $k$. In [8, Section 3] it is shown that
$$P_p(|C^\to(k)| = \infty) > 0 \quad \text{if and only if} \quad p > p^*$$
holds for some critical probability $1/2 < p^* < 1$. The exact value of $p^*$ is unknown; however, it is known that $0.6298 \le p^* < 0.6735$ ([15, Chapter 10] and [1]).

Theorem 4.3. For $p < p^*$ we have $P_p(X_i \text{ and } X_j \text{ are dependent}) \to 0$ as $|i - j| \to \infty$. For $p > p^*$ there exists a constant $0 < C < 1$ not depending on $|i - j|$ such that
$$0 < P_p(X_i \text{ and } X_j \text{ are independent}) \le C. \qquad (4.6)$$

Proof: By translation-invariance the distribution of the above event only depends on the edge distance $|i|$. We will make use of results on oriented percolation as discussed in [8]. In particular, in [8, Section 7] it is shown that for $p < p^*$, where $p^*$ is introduced above, the probability of an oriented open path of length $n$ from the origin is bounded by $C e^{-\gamma n}$ for some $C > 0$, $\gamma > 0$, and hence decays exponentially as $n \to \infty$. From this and from Proposition 3.1 we obtain for every $p < p^*$ that $P_p(X_i \text{ and } X_j \text{ are dependent}) = P_p\big(\mathrm{An}(i) \cap \mathrm{An}(0) \neq \emptyset\big) \to 0$ as $|i| \to \infty$. In order to prove the second statement we assume that $p > p^*$. Furthermore, let $\Sigma$ be the event in (4.4) and let $\Sigma^\complement$ be its complement, which is the event that $i$ and $j$ have neither common ancestors nor common descendants. Applying Kolmogorov's zero-one law one easily deduces that $P_p(\Sigma^\complement) < 1$ for $i, j \in \mathbb{Z}^2$, which implies that $P_p(\Sigma) > 0$. Hence, by Lemma 4.2 we can estimate, for every $|i|$,
$$P_p(X_i \text{ and } X_j \text{ are dependent}) \ge \tfrac{1}{2} P_p(\Sigma) > 0,$$
where the second-to-last inequality follows from the FKG inequality ([15, Theorem 2.4]). Thus, in the supercritical phase, with positive probability one can generate dependence between the random variables $X_i$ and $X_j$, which proves (4.6).
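The event driving Theorem 4.3, $\{\mathrm{An}(i) \cap \mathrm{An}(0) \neq \emptyset\}$ on the percolated DAG, is straightforward to simulate on a finite box; in the degenerate cases $p = 0$ and $p = 1$ its occurrence can even be read off deterministically. A sketch with our own names:

```python
import random

def percolated_parents(n, p, seed=0):
    """Random sub-DAG of the n x n box of the oriented lattice: each edge
    pointing north or east is kept open with probability p; pa[i] lists the
    parents of node i along open edges."""
    rng = random.Random(seed)
    pa = {}
    for x in range(n):
        for y in range(n):
            for parent in ((x - 1, y), (x, y - 1)):
                if parent[0] >= 0 and parent[1] >= 0 and rng.random() < p:
                    pa.setdefault((x, y), []).append(parent)
    return pa

def dependent(i, j, pa):
    """By Proposition 3.1, X_i and X_j are dependent iff An(i) and An(j)
    (each including the node itself) intersect."""
    def An(k):
        seen, stack = {k}, [k]
        while stack:
            u = stack.pop()
            for v in pa.get(u, []):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    return not An(i).isdisjoint(An(j))
```

With all edges open, the corners $(3, 0)$ and $(0, 3)$ share the common ancestor $(0, 0)$; with no edges open, no two distinct nodes share an ancestor. Averaging `dependent` over many seeds gives a Monte Carlo estimate of $P_p(X_i \text{ and } X_j \text{ are dependent})$ on the box.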
Theorem 4.3 links the subcritical and supercritical case to probabilities for dependence and independence of X i and X j .
For communication in a Bernoulli bond percolation network, we conclude that if edges (communication channels) are open with small probability, extreme observations at two different nodes become a.s. independent when the nodes are far apart. However, if edges are open with high probability, then there is a positive probability that two extreme values are observed dependently; i.e., there may be a common source. Further properties of $X_i$ and $X_j$ within the oriented square lattice $\mathbb{Z}^2$ can be derived similarly using percolation properties. The following remark gives an example.

Enlargement of DAGs using Bernoulli percolation
Throughout this section fix two nodes i, j ∈ Z 2 .We are again interested in dependence properties of the random variables X i and X j .We write P for the property that X i and X j are dependent, and for a DAG G we write G ∈ P if a max-linear model X on G has the property that the components X i and X j are dependent.
Suppose that H = (V(H), E(H)), V(H) ⊂ Z², is a sub-DAG of the oriented square lattice Z² containing i and j such that X_i and X_j are independent on H, equivalently An(i) ∩ An(j) ∩ V(H) = ∅ by Proposition 3.1; i.e., H ∉ P. We utilize a method introduced in [27] in order to enlarge the sub-DAG H by adding possibly infinitely many nodes and edges of open clusters, and investigate the probability that X_i and X_j become dependent on the randomly enlarged DAG.
In the framework of communication in a network, if two extremes are observed seemingly independently, we investigate whether dependence could arise through additional communication channels of the network members which are not present in the original network. The following results answer this question.
Recall that for k ∈ Z² the open cluster containing k is denoted by C(k). The following definition of the enlarged graph U(H) is taken from [27, Definition 1.1]; for an analogous definition of enlargement of percolating everywhere subgraphs, as in Theorem 4.10 below, we also refer to [3]. Note that by definition U(H) is a DAG containing the nodes i and j, as H is assumed to contain i and j. Furthermore, we add finitely many or possibly infinitely many nodes, according as p ≤ 1/2 or p > 1/2. Moreover, Definition 4.5 corresponds to percolation with underlying probability measure P^H_p on {0, 1}^{E(Z²)} satisfying

P^H_p(ω_e = 1) = 1 if e ∈ E(H), and P^H_p(ω_e = 1) = p otherwise.

One prerequisite is the measurability of the event (4.8), and we verify this by observing that {U(H) ∈ P} is equivalent to the existence of some n ∈ N such that An(i) ∩ An(j) ≠ ∅ holds on the ball B(i, n) = {y ∈ Z² : δ(y, i) ≤ n}; thus {U(H) ∈ P} is determined by the configurations of edges in a finite ball and hence measurable.
In analogy to [27] we consider the critical probabilities p_{c,1,P,H} := inf{p ∈ [0, 1] : P_p(U(H) ∈ P) > 0} and p_{c,2,P,H} := inf{p ∈ [0, 1] : P_p(U(H) ∈ P) = 1}. We first remark that {U(H) ∈ P} has positive probability for all p > 0, so that p_{c,1,P,H} = 0 holds, and the interesting question is for which choice of sub-DAGs H we have p_{c,1,P,H} = p_{c,2,P,H}. As an easy example we may first consider the non-connected DAG H with node set V(H) = {i, j} and E(H) = ∅. It is straightforward to see that P_p(U(H) ∉ P) > 0 for every p ∈ [0, 1), and this implies p_{c,2,P,H} = 1 ≠ p_{c,1,P,H}. On the other hand, the following lemma gives an example of a DAG where the latter assertion is not true, i.e. p_{c,1,P,H} = p_{c,2,P,H} = 0.

Lemma 4.6. Let H be an infinite DAG with nodes V(H) = Z² and let k ∈ Z² be such that i_1 ≤ k_1 ≤ j_1, and assume that the edges E(H) lie only inside the set depicted in Figure 1. Then p_{c,2,P,H} = 0.
Proof: Fix p ∈ (0, 1). We show that p_{c,2,P,H} ≤ p by calculating P_p(U(H) ∈ P). By the choice of H the event {U(H) ∈ P} does not depend on any finite set of edges; see also Figure 1. Hence, by Kolmogorov's zero-one law, P_p(U(H) ∈ P) ∈ {0, 1}.
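A sketch of the enlargement mechanism may help intuition here. The following toy simulation is our own illustration: the finite window, the north/east edge orientation and all names are assumptions, and `H_edges` plays the role of E(H). It draws a configuration from the conditional measure P^H_p of Definition 4.5, keeping the edges of H open with probability 1 and opening every other edge independently with probability p, and then tests whether the sampled enlarged DAG U(H) has the property P of common ancestors of i and j.

```python
import random
from collections import deque

def u_has_property_P(H_edges, i, j, n, p, rng):
    """Sample U(H) on an n x n window of the oriented square lattice under
    the conditional measure: e in E(H) is open with probability 1, every
    other oriented edge with probability p; then test An(i) ∩ An(j) != ∅
    (property P, i.e. dependence of X_i and X_j by Proposition 3.1)."""
    E = {}
    for x in range(n):
        for y in range(n):
            for v in ((x + 1, y), (x, y + 1)):
                if v[0] < n and v[1] < n:
                    e = ((x, y), v)
                    E[e] = e in H_edges or rng.random() < p

    def an(k):  # ancestors of k in the sampled configuration (BFS backwards)
        seen, queue = {k}, deque([k])
        while queue:
            x, y = queue.popleft()
            for u in ((x - 1, y), (x, y - 1)):
                if u not in seen and E.get((u, (x, y)), False):
                    seen.add(u)
                    queue.append(u)
        return seen

    return bool(an(i) & an(j))
```

Averaging this indicator over many samples estimates P_p(U(H) ∈ P); its behaviour as p varies is exactly what the critical values p_{c,1,P,H} and p_{c,2,P,H} describe.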
If we inspect the examples presented so far, we recognize that the number of nodes and edges of the chosen DAG H has a strong impact on whether we have p_{c,1,P,H} = p_{c,2,P,H} or not. The following result substantiates this observation.

Theorem 4.7. Let H be a DAG and j ∈ V(H) such that the cluster containing j is finite. Then we have p_{c,2,P,H} = 1.
Proof: Let p < 1 and recall that P_p(U(H) ∈ P) = P^H_p(An(i) ∩ An(j) ≠ ∅). We prove the assertion by making use of planar duality arguments discussed in [15, Section 1.4]. Let L_d be the dual graph of Z² with nodes given by the set {x + (1/2, 1/2) : x ∈ Z²} and edges joining two neighboring nodes, so that each edge of L_d is crossed by a unique edge of Z². As introduced in [15, Section 1.4, p. 16], an edge of the dual is declared open if it crosses an open edge of Z² and closed otherwise. Recall that a circuit of L_d is an alternating sequence k_0, e_0, k_1, e_1, …, k_n, e_n, k_0 of nodes k_0, …, k_n and edges e_0, …, e_n forming a cyclic path from k_0 to k_0.
Let A be the event that there is a sub-path of closed edges of a circuit containing j in its interior and i in its exterior. Since the connected component containing node j is finite, we have 0 < P^H_p(A) ≤ P^H_p(An(i) ∩ An(j) = ∅), which yields P_p(U(H) ∈ P) < 1 for every p ∈ [0, 1). Thus, by definition we get p_{c,2,P,H} = 1 as claimed. □
By the same arguments as in the proof of [27, Theorem 1.13] we can choose a partition V(Z′) = A ∪ B with |E(A, B)| = ∞. At this point observe that the number of connected components of H is infinite, since otherwise we would have |E(A, B)| < ∞ for every partition V(Z′) = A ∪ B. Thus, by an application of Kolmogorov's zero-one law we have P^H_p(An(i) ∩ An(j) = ∅) ∈ {0, 1}, so that P_p(U(H) ∈ P) ∈ {0, 1}. This in particular implies p_{c,2,P,H} = p_{c,1,P,H} = 0 by definition and concludes the proof. □

Communication networks
As indicated before, the question we answer here by means of a simple probabilistic model is the following: given an extreme observation at two nodes of a communication network, is there a common cause (a common ancestor) in the network, or in an enlarged network, or not?
In terms of the propagation of influence, every node may be interpreted as a network member, a directed edge between two nodes may be seen as a communication channel, and the weights represent the degree of influence between two members.A phase transition in such a network indicates the non-existence or existence of a common cause of extreme observations of two different network members.
Probabilistic communication models using tools from percolation theory to investigate phase transitions in graph structures are numerous in the literature; see, e.g., [15, Ch. 13], [23], and [26, Part IV], to name only a few. They model the spread of diseases, voter behaviour, optimal behaviour of market agents, etc., within nearest-neighbor lattice graphs, preferential attachment models, or small-world networks.
A basic model is explained in [7] as follows: the authors assume the network nodes to take values randomly in {0, 1}, representing two possible states. A network member changes its state provided enough neighbours share a different state. In contrast to this simple model, in the present paper the community members at every node exhibit observations which can be modeled by any distribution, thus allowing for a more refined analysis and a larger scope of interpretation for applications. For example, as already mentioned in the introduction, we can model the course of an auction. In this sense a community member represents a bidder in an auction, and we observe the bid placed by this person. Hence the bid (for example, money in dollars) is modeled by the random variable X. Since the purpose of a bid is to overbid the previous offers, a propagation by means of max-linear behavior is plausible, in which the noise variables Z represent the amount of money the bidder is willing to spend independently; in several cases, depending on the type of auction, a heavy-tailed distribution might be required. One possible question of interest is to understand cause and effect of such extreme observations.

Example 5.1. Consider two arbitrary choices of finite communication networks modeled by X as in Definition 2.1. More precisely, let H_1 be the DAG with nodes represented by V = {1, 2, 3} and edge-set E = {(2, 3)} consisting of one single edge, i.e.
we have three network members and only X_2 and X_3 communicate, where X_3 is influenced by X_2. We assume the second DAG H_2 to be obtained from H_1 simply by adding the edge (1, 3); i.e., X_1 and X_3 start to communicate and X_3 is influenced by more than one source. Assume that the nodes and edges are equipped with positive weights c_ij, i, j ∈ {1, 2, 3}, where for i ≠ j we have c_ij ≠ 0 if and only if there is an edge from i to j. We now want to characterize the communication activities with the aid of max-linear coefficient matrices. For two matrices M_1, M_2 of the same size we write M_1 ≺_0 M_2 if all non-zero entries of M_1 are also non-zero entries of M_2 and there exists a zero entry of M_1 which is a non-zero entry of M_2. Let B_1 and B_2 be the max-linear coefficient matrices corresponding to H_1 and H_2, respectively. Applying the path analysis mentioned in Section 2 (cf. Theorem 2.4 of [12]) we obtain the corresponding coefficient matrices, so that B_1 ≺_0 B_2. Note that this stems from the fact that H_2 contains the edge (1, 3), which is not included in H_1. Thus, inspecting zero entries of the max-linear coefficient matrix helps in detecting communication channels. This observation holds in general, and we summarize it in the following result.

Proposition 5.2. Let X be a max-linear process with node-set V and let H_1 and H_2 be two DAGs over the same finite set of nodes V_H ⊂ V, with max-linear coefficient matrices B_1 and B_2, respectively. If B_1 ≺_0 B_2 then H_2 has more communication channels than H_1.

Theorem 4.3 gives rise to the following obvious interpretation. For a network with only moderately many communication channels, extreme observations at two nodes which are far apart are almost surely independent. However, in a highly communicative network there may be a common source for an extreme observation presented at a specific node.
We now want to interpret the results in Section 4.2 concerning random DAGs obtained from Bernoulli bond percolation clusters. Randomly added nodes and edges correspond to the formation of additional communication channels. Consider the probability p of an edge being open in the original network. For high values of p the influences are more likely to spread. We investigate this in more detail for a DAG H. Assume that members of H hold additional communication channels outside the communication network. We call the combined network a network with randomly spreading influences. What is the probability that two network members with independent observations become influenced by the same source in the combined larger network? Theorem 4.7 and Corollary 4.8 describe a situation where the answer depends rather on the number of participants in the network and not so much on the structure of communication channels. This observation may be helpful for detecting extreme observations simply by considering how many agents are affected by the spread of influences. In a wide sense, our results suggest that extreme influences are less likely to spread if fewer agents are affected, this being more decisive than the structure of communication channels.
Example 5.3 (Continuation of Example 5.1). To make these arguments precise we again compare two finite networks H_1 and H_2. By Corollary 4.8, two independent observations become influenced with certainty by a common source inside a network with randomly spreading influences if these influences disseminate almost surely, and only in this case, regardless of the setup of connections inside the network. Recall that here p can be regarded as the probability that a communication channel emerges. In such a case we have p = 1, which may correspond to very strong influences. Theorem 4.10, on the other hand, describes the situation where the network already has many communication channels itself and only some links between large communication communities are missing. Then links between these large communication communities are created almost surely whenever some randomly spreading influence arrives in the network at all.
⋁_{k ∈ An_H(i)} b_ik Z_k = b_il Z_l  and  ⋁_{k ∈ An_H(j)} b_jk Z_k = b_jl Z_l

hold with positive probability, which implies that P(X_i = (b_il/b_jl) X_j) > 0.
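This atom of the joint distribution can be seen in simulation. The following sketch is our own illustration, not taken from the paper: the weights b_il, b_jl, the standard unit-Fréchet noise, and all names are assumptions made for concreteness. It generates X_i = b_il Z_l ∨ Z_i and X_j = b_jl Z_l ∨ Z_j with a shared ancestor l and counts how often X_i = (b_il/b_jl) X_j holds exactly, i.e. how often both maxima are realised by the common noise Z_l.

```python
import math
import random

def atom_frequency(b_il=2.0, b_jl=3.0, trials=4000, seed=1):
    """Fraction of samples with X_i = (b_il / b_jl) * X_j.

    Z_l is the common ancestor's noise; Z_i, Z_j are independent noises.
    On the event {b_il Z_l > Z_i} ∩ {b_jl Z_l > Z_j} both maxima are
    realised by Z_l, producing an atom of the joint law on a line."""
    rng = random.Random(seed)
    # inverse-transform sampling of a standard unit Fréchet variable
    frechet = lambda: -1.0 / math.log(rng.random())
    hits = 0
    for _ in range(trials):
        z_l, z_i, z_j = frechet(), frechet(), frechet()
        x_i = max(b_il * z_l, z_i)
        x_j = max(b_jl * z_l, z_j)
        # the identity is exact up to floating-point rounding
        if math.isclose(x_i, (b_il / b_jl) * x_j, rel_tol=1e-12):
            hits += 1
    return hits / trials
```

For the default weights a short computation with Fréchet distributions gives probability 6/11 ≈ 0.55 for this event, so roughly half of the samples land exactly on the line x_i = (b_il/b_jl) x_j; in particular the probability is positive, as claimed.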