Gracefully Degrading Consensus and $k$-Set Agreement in Directed Dynamic Networks

We study distributed agreement in synchronous directed dynamic networks, where an omniscient message adversary controls the availability of communication links. We prove that consensus is impossible under a message adversary that guarantees weak connectivity only, and introduce vertex-stable root components (VSRCs) as a means for circumventing this impossibility: A VSRC(k, d) message adversary guarantees that, eventually, there is an interval of $d$ consecutive rounds where every communication graph contains at most $k$ strongly (dynamic) connected components, consisting of the same processes, which have only outgoing links to the remaining processes. We present a consensus algorithm that works correctly under a VSRC(1, 4H + 2) message adversary, where $H$ is the dynamic causal network diameter. On the other hand, we show that consensus is impossible against a VSRC(1, H - 1) or a VSRC(2, $\infty$) message adversary, revealing that there is little hope of coping with substantially stronger message adversaries. However, we show that gracefully degrading consensus, which degrades to general $k$-set agreement in case of unfavourable network conditions, is feasible against such stronger message adversaries: We provide a $k$-uniform $k$-set agreement algorithm, where the number of system-wide decision values $k$ is not encoded in the algorithm, but rather determined by the actual power of the message adversary in a run: Our algorithm guarantees at most $k$ decision values under a VSRC(n, d) + MAJINF(k) message adversary, which combines VSRC(n, d) (for some small $d$, ensuring termination) with some information flow guarantee MAJINF(k) between certain VSRCs (ensuring $k$-agreement). Our results provide a significant step towards determining the exact solvability/impossibility border of general $k$-set agreement in directed dynamic networks.

The evolving topology of such a network is modeled as a sequence of communication graphs, which contain a directed edge between two processes if the message sent in the corresponding round is successfully received. A bidirectional link is modeled by a pair of directed links that are considered independent of each other here.
A natural approach to building robust services despite the dynamic nature of such systems is to use some form of distributed agreement on certain system parameters, like action schedules and operating modes, as well as on application-level issues: Such a solution permits arbitrary algorithms for generating local proposals, which are supplied as inputs to a consensus algorithm that finally selects one of them consistently at all processes. As opposed to master-slave-based solutions, this approach avoids the single point of failure formed by the process acting as the master.
The ability to reach system-wide consensus is hence the most convenient abstraction one could provide here. The first major contribution of our paper is therefore a suite of impossibility results and a consensus algorithm for directed dynamic networks that, to the best of our knowledge, works under the weakest communication guarantees sufficient for consensus known so far.
Obviously, however, one cannot reasonably assume that every dynamic network always provides sufficiently strong communication guarantees for solving consensus. Fortunately, weaker forms of distributed agreement are sufficient for certain applications. In case of determining communication schedules [11], for example, which are used for staggering message transmission of nearby nodes in time to decrease mutual interference, it usually suffices if those processes that have to communicate regularly with each other (e.g., for implementing a distributed service within a partition) agree on their schedule. A more high-level example would be agreement on rescue team membership [21] in disaster relief applications.
For such applications, suitably designed k-set agreement algorithms [22], where processes must agree on at most k different values system-wide, are a viable alternative to consensus (k = 1). This is particularly true if such a k-set agreement (i) respects partitions, in the sense that processes in the same (single) partition decide on the same value, and (ii) is gracefully degrading, in the sense that the actual number k of different decision values depends on the actual network topology in the execution: If the network is well-behaved, the resulting k is small (ideally, k = 1), whereas k may increase under unfavorable conditions. Whereas any gracefully degrading algorithm must be k-universal, i.e., unaware of any a priori information on k, it should ideally also be k-optimal, i.e., produce the smallest number k of different decisions possible. The second major contribution of our paper is several impossibility results for k-set agreement in directed dynamic networks, as well as, to the best of our knowledge, the first instance of a worst-case k-optimal k-set agreement algorithm, i.e., a consensus algorithm that indeed degrades gracefully to general k-set agreement.

Detailed contributions and paper organization.
In Section 3, we introduce our detailed system model, which builds upon and extends the message adversary notation used in [23]. It consists of an (unknown) number n of processes, where communication is modeled by a sequence of directed communication graphs, one for each round: If some edge (p, q) is present in the communication graph G r of round r, then process q has received the message sent to it by p in round r. The message adversary determines the set of links actually present in every G r , according to certain constraints that may be viewed as network assumptions.
With respect to consensus, we provide the following contributions: (1) In Section 4, we show that communication graphs that are weakly connected in every round are not sufficient for solving consensus, and introduce an additional assumption that allows us to overcome this impossibility. For this, we rely on the graph-theoretic notion of a source component (a strongly connected component that has no incoming edges from vertices outside) and its dynamic counterpart, the vertex-stable source component (VSSC), which describes a set of vertices that constitutes a source component for multiple consecutive rounds. We note that every directed graph has at least one source component and that the detailed connection topology of a VSSC may change arbitrarily from round to round, as long as its vertices still form a source component in the communication graph. Our message adversary VSSC(d) requires that the communication graph in every round is weakly connected and has a single (possibly changing) source component. Since this assumption is still too weak for solving consensus, VSSC(d) also requires that, eventually, there will be d consecutive rounds where some source component is vertex-stable. In Section 5, we provide a consensus algorithm that works in this model, and prove its correctness. Our algorithm requires a window of stability of d = 4E + 2 rounds, where E ≤ n − 1 is the dynamic network depth of the network (= the number of rounds required to reach all processes in the network from every process in the vertex-stable source component via multi-hop communication). (2) In Section 4, we also show that every deterministic consensus or leader election algorithm needs to know (a bound on) E under VSSC(d), i.e., that there is no universal algorithm. In addition, we prove that consensus is impossible both under VSSC(E − 1) and under VSSC(2, ∞) (VSSC(x, y) is essentially the same as VSSC(y), except that it allows up to x source components per round).
Therefore, E is a lower bound for the window of stability of VSSCs if vertex stability of source components is the only guarantee of the message adversary. Interestingly, the resulting dynamic networks fall between the weakest and second-weakest category in the classification of [24], and allow solving neither classic problems such as reliable broadcast, atomic broadcast, and causal-order broadcast, nor counting, k-verification, k-token dissemination, all-to-all token dissemination, and k-committee election.
With respect to k-set agreement and gracefully degrading consensus, we provide the following contributions: (3) In Section 6, we provide a fairly natural message adversary VSSC(k, d) that is nevertheless too strong for solving k-set agreement: It reveals that the restriction to at most k simultaneous VSSCs in every round is not sufficient for solving k-set agreement if just a single VSSC is vertex-stable for less than n − k rounds: A generic reduction of k-set agreement to consensus introduced in [25], in conjunction with certain bivalence arguments, is used to construct a non-terminating run in this case. Moreover, eventual stability of all VSSCs is also not enough for solving k-set agreement, not even when it is guaranteed that (substantially) fewer than k VSSCs exist simultaneously. The latter is a consequence of some adversarial partitioning over time, which can happen in our dynamic networks.
(4) In Section 7, we show that the message adversary VSSC(n, d) + MAJINF(k), which combines VSSC(n, d) (ensuring termination) with some information flow guarantee MAJINF(k) between certain VSSCs (ensuring k-agreement), is sufficient for solving k-set agreement. Basically, MAJINF(k) guarantees that if we choose k + 1 VSSCs, a majority influence chain exists between at least two of the chosen VSSCs. Despite being fairly strong, the resulting message adversary VSSC(n, d) + MAJINF(k) allows us to implement a k-universal k-set agreement algorithm, which naturally respects partitions and is worst-case k-optimal, in the sense that no algorithm can solve (k − 1)-set agreement under VSSC(n, d) + MAJINF(k).
To the best of our knowledge, it is the first gracefully degrading consensus algorithm proposed in the literature.
Finally, in the spirit of [23], we include a relation of our message adversaries to failure detectors. Whereas such a comparison obviously only makes sense for the eventually-forever-variants VSSC(∞) and VSSC(n, ∞) + MAJINF(k) of our message adversaries, it provides some very interesting insights: (5) In Section 8, we show that even though VSSC(1, ∞) allows to solve consensus and to implement the failure detector Ω, it does not allow to implement Σ. This contrasts with the fact that, in asynchronous message-passing systems with a majority of process crashes, (Σ, Ω) is a weakest failure detector for solving consensus. Similarly, although the message adversary VSSC(n, ∞) + MAJINF(k) allows to solve k-set agreement, it does not allow to implement the failure detector Σ_k. Again, this is in contrast to the fact that Σ_k is known to be necessary for k-set agreement in asynchronous message-passing systems with a majority of process crashes. One of the consequences of these findings is that it is not possible to adapt failure-detector-based algorithms to work in conjunction with our message adversaries.

Related work
Dynamic networks have been studied intensively in research (see the overview by Kuhn and Oshman [1] and the references therein). Besides work on peer-to-peer networks like [2], where the dynamicity of nodes (churn) is the primary concern, different approaches for modeling dynamic connectivity have been proposed, both in the networking context and in the context of classic distributed computing. Casteigts et al. [24] introduced a comprehensive classification of time-varying graph models.
Models. There is a rich body of literature on dynamic graph models going back to [26], which was also the first to model a dynamic graph as a sequence of static graphs. A more recent paper using this approach is [17], where distributed computations are organized in lock-step synchronous rounds. Communication is described by a sequence of per-round communication graphs, which must adhere to certain network assumptions (like T-interval connectivity, which says that there is a common connected subgraph in any interval of T rounds). Afek and Gafni [27] introduced message adversaries for specifying network assumptions in this context, and used them for relating problems solvable in wait-free read-write shared memory systems to those solvable in message-passing systems. Raynal and Stainer [23] also used message adversaries for exploring the relationship between round-based models and failure detectors.
In particular, the work by Kuhn et al. [18] focuses on the Δ-coordinated consensus problem, which extends consensus by requiring all processes to decide within Δ rounds of the first decision. Since they consider only undirected graphs that are connected in every round, without node failures, solving consensus is always possible. In terms of the classes of [24], the model of [40] is in one of the strongest classes (Class 10), in which every process is always reachable by every other process. On the other hand, [34,36] do consider directed graphs, but restrict the dynamicity by not allowing stabilizing behavior. Consequently, they also belong to quite strong classes of network assumptions in [24]. In sharp contrast, the message adversary tolerated by our algorithms does not guarantee bidirectional (multi-hop) communication between all processes, and hence falls between the weakest and second-weakest class of models defined in [24].
The solvability/impossibility border of consensus under message adversaries that support eventual stabilization has been explored in [35,37–39]. As it turned out, consensus can be solved for graph sequences where the set of graphs occurring in the sequence would render consensus impossible under an oblivious message adversary [28,41]. Whereas [35,37] are subsumed by the present paper, the algorithms presented in [38,39] allow to solve consensus for even shorter periods of stability.
The leader election problem in dynamic networks has been studied in [33,42], where the adversary controls the mobility of nodes in a wireless ad-hoc network. This induces dynamic changes of the (undirected) network graph in every round and requires any leader election algorithm to take Ω(Dn) rounds in the worst case, where D is a bound on information propagation.
Regarding k-set agreement in dynamic networks, we are not aware of any previous work except [43], where bidirectional links are assumed, and our previous paper [44], where we assumed the existence of an underlying static skeleton graph (a non-empty common intersection of the communication graphs of all rounds) with at most k static source components. Note that this essentially implies a directed dynamic network with a static core. By contrast, in this paper, we allow the directed communication graphs to be fully dynamic. In [45], we provided k-set agreement algorithms for partially synchronous systems with weak synchrony requirements.
Degrading consensus problems. We are also not aware of related work exploring gracefully degrading consensus or k-universal k-set agreement. However, there have been several attempts to weaken the semantics of consensus in order to cope with partitionable systems and excessive faults. Vaidya and Pradhan introduced the notion of degradable agreement [46], where processes are allowed to also decide on a (fixed) default value in case of excessive faults. The almost everywhere agreement problem introduced by [47] allows a small linear fraction of processes to remain undecided. Aguilera et al. [48] considered quiescent consensus in partitionable systems, which requires processes outside the majority partition not to terminate. None of these approaches is comparable to gracefully degrading k-set agreement, however: On the one hand, we allow more different decisions; on the other hand, all correct processes are required to decide, and every decision must be the initial value of some process.
Ingram et al. [49] presented an asynchronous leader election algorithm for dynamic systems, where every component is guaranteed to elect a leader of its own. Whereas this behavior clearly matches our notion of graceful degradation, leader assignments, unlike decisions, are revocable, and the algorithm of [49] is guaranteed to successfully elect a leader only once the topology eventually stabilizes.

Model
We consider a synchronous distributed system made up of a fixed set Π of distributed processes with |Π| = n ≥ 2, which have fixed unique ids and communicate via unreliable message passing. Processes will be denoted by p_i, p_j, etc.
Similar to the LOCAL model [50], we assume that processes are deterministic state machines that organize their computations as an infinite sequence of communication-closed [51] lock-step rounds. For every p_i ∈ Π, s_i denotes its local state, taken from a potentially infinite state space. It also comprises an input variable x_i, which holds some fixed initial value v_i at the beginning of an execution, and an output variable y_i, which is initially undefined (⊥) and can be changed to some value ≠ ⊥ exactly once. s_i^r, r ≥ 1, denotes the state at the end of round r; s_i^0 denotes the initial state. In each round r > 0, each process performs three steps in the following order: first, p_i broadcasts a message; then, it receives a subset of the messages sent in this round; finally, it updates its state from s_i^{r−1} to s_i^r, based on the messages received and s_i^{r−1}. Note that processes do not know, without receiving explicit feedback in later rounds, which processes received their round r broadcast.
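The round structure just described can be sketched as a simple simulator; this is a minimal illustration only (the helper names `run_rounds`, `message`, and `update` are ours, not the paper's), but it makes the three per-round steps explicit:

```python
# Sketch of one lock-step round per communication graph (hypothetical helpers).
def run_rounds(states, algorithm, graphs):
    """states:    dict process_id -> local state
    algorithm: object providing message(state) and update(state, msgs)
    graphs:    list of rounds; each round is a set of directed edges (p, q),
               meaning q receives p's broadcast in that round."""
    for edges in graphs:                      # one iteration = one round
        # Step 1: every process broadcasts a message based on its state.
        msgs = {p: algorithm.message(s) for p, s in states.items()}
        # Step 2: each process receives the subset delivered by the adversary.
        inbox = {p: [] for p in states}
        for (p, q) in edges:
            inbox[q].append(msgs[p])
        # Step 3: each process updates its state from its old state + received msgs.
        states = {p: algorithm.update(states[p], inbox[p]) for p in states}
    return states
```

For instance, a simple flooding algorithm (state = set of known values, broadcast the whole state, update by union) spreads initial values exactly one hop per round.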
The evolving nature of the network topology is modeled as an infinite sequence of simple directed graphs G^1, G^2, . . ., which is determined by an omniscient message adversary [23,27] that may view the processes' internal states at any time.
Given such a graph sequence and a set of initial states {s_i^0 | p_i ∈ Π}, the corresponding run is the execution of the system where G^r is used as the round r communication graph. For our deterministic algorithms, a run is completely determined by the initial states of the processes and the sequence of communication graphs.

Definition 1 (Communication graph). A communication graph is a simple directed graph G = ⟨V, E⟩ with vertex set V = Π and edge set E.
An edge (p_i → p_j) is in E if and only if p_j successfully receives p_i's message. For a given run, we denote the round r communication graph by G^r = ⟨V, E^r⟩. The set N_j^r denotes p_j's in-neighbors in G^r (excluding p_j).
Note that we will sloppily write (p_i → p_j) ∈ G^r to denote (p_i → p_j) ∈ E^r, as well as p_i ∈ G^r to denote p_i ∈ V = Π. We emphasize again that p_i does not have any a priori knowledge of its neighbors, i.e., p_i neither knows who receives its round r broadcast, nor whom it will receive from in round r before its round r computation. Fig. 1 shows a sequence of communication graphs for a network of 5 processes, for rounds 1 to 3.
Since every G^r can range arbitrarily from n isolated nodes to a fully connected graph, there is no hope of solving any non-trivial agreement problem without restricting the power of the adversary to drop messages to some extent. Inspired by [23], we encapsulate a particular restriction, e.g., that every communication graph must be weakly connected, by means of a particular message adversary. Note that Definition 2 generalizes the notation introduced in [27], which just specified the set of communication graphs the adversary may choose from in every round, to sets of sequences of communication graphs.
Definition 2 (Message adversary). A message adversary Adv (for our system of n processes) is a set of sequences of communication graphs.
Informally, we say that some message adversary Adv guarantees some property (like "all graphs are weakly connected"), called a network assumption, if every (G^r)_{r>0} ∈ Adv satisfies this property.
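A network assumption in this sense is simply a predicate over graph sequences. As a minimal sketch (all helper names are ours), the assumption "every communication graph is weakly connected" can be checked by ignoring edge directions and testing connectivity:

```python
# Hypothetical check of the network assumption "weakly connected" on a
# sequence of communication graphs (each graph = a set of directed edges).
def weakly_connected(nodes, edges):
    """A directed graph is weakly connected iff ignoring edge directions
    yields a connected undirected graph."""
    if not nodes:
        return True
    undirected = {p: set() for p in nodes}
    for (p, q) in edges:
        undirected[p].add(q)
        undirected[q].add(p)
    seen, stack = set(), [next(iter(nodes))]
    while stack:                      # undirected reachability from any node
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(undirected[p] - seen)
    return seen == set(nodes)

def satisfies(sequence, nodes):
    # The assumption holds for a sequence iff every round's graph passes.
    return all(weakly_connected(nodes, edges) for edges in sequence)
```

A message adversary guarantees the assumption iff every feasible sequence satisfies it; a single round with an isolated process already violates it.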
Complementing the traditional approach of partially ordering system models or unreliable failure detectors [53] via their problem solving power (task implementability), the restricted nature of our message adversaries allows us to employ a much simpler and more direct way of relating them: For a fixed system of n processes, we say that A is stronger than B if and only if A ⊇ B, i.e., if A can generate at least the communication graph sequences that can be generated by B. As a consequence, an algorithm that works correctly under message adversary A will also work under B ⊆ A.

Consensus and k-set agreement
To formally introduce agreement problems, we consider some finite value domain V with ⊥ ∉ V, and say that p_i has decided in round r (or in state s_i^r) if y_i has been set to some value ≠ ⊥ by the end of round r. Otherwise, it is (still) undecided. Note that, in the context of the particular algorithms introduced in later sections, we will sometimes also assign additional attributes to states.

Definition 3 (Consensus).
Algorithm A solves consensus if the following properties hold in every run of A: (Agreement) If process p_i decides on v and p_j decides on v′, then v = v′.
(Validity) If process p_i decides on v, then v is some p_j's initial value x_j. (Termination) Every process must eventually decide.
For the k-set agreement problem [22], we assume that both |V| > k and n > k to rule out trivial solutions: Definition 4 (k-set agreement). Algorithm A solves k-set agreement, if the following properties hold in every run of A: (k-Agreement) At most k different decision values are obtained system-wide in any run.
(Validity) If process p_i decides on v, then v is some p_j's initial value x_j. (Termination) Every process must eventually decide.
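The three properties of Definition 4 can be checked mechanically on the outcome of a finished run. The following sketch (our own helper, not part of the paper) takes the initial values and the decision values and verifies k-Agreement, Validity, and Termination:

```python
# Sketch: checking the k-set agreement properties on a finished run.
def check_k_set_agreement(initial, decisions, k):
    """initial:   dict process -> initial value x_i
    decisions: dict process -> decided value y_i
    Returns True iff Termination, Validity and k-Agreement all hold."""
    # Termination: every process must have decided.
    if set(decisions) != set(initial):
        return False
    # Validity: every decision is some process's initial value.
    if not set(decisions.values()) <= set(initial.values()):
        return False
    # k-Agreement: at most k different decision values system-wide.
    return len(set(decisions.values())) <= k
```

With k = 1 this is exactly the consensus check; a run with two decision values passes for k = 2 but fails for k = 1.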
Clearly, consensus is the special case of 1-set agreement; set agreement is a short-hand for (n − 1)-set agreement.
We call a consensus or k-set agreement algorithm universal if it does not have any a priori knowledge of the network (and hence of n). A k-set agreement algorithm is called k-universal if it is universal and does not even require a priori knowledge of k.

Influence in dynamic networks
We will now establish what it means for a process p_i to influence some process p_j, a notion that is central to our paper.
Note carefully that such influence is always paired with time: In the spirit of [54,18], for a given sequence (G^r)_{r>0} of communication graphs, we say that process p_i at the end of round r influences p_j in round s, denoted s_i^r ⇝ s_j^s, if the state of process p_i at the end of round r could have affected the state of process p_j at the end of round s. Clearly, in our system model, this requires process p_i to send a message in round r + 1 or later that (directly or indirectly, via some message chain) reaches p_j at the latest in round s, so that it could affect the state s_j^s reached at the end of round s. Formally, this is captured by the influence relation given in Definition 5.
Definition 5 (Influence relation). For a given run with sequence of communication graphs (G^r)_{r>0}, the influence relation ⇝ is the smallest relation that satisfies the following conditions for processes p_i, p_j, p_k ∈ Π and rounds r, r′, r″ > 0: (LOCALITY) s_i^r ⇝ s_i^{r′} for every r′ ≥ r. (NEIGHBORHOOD) If (p_i → p_j) ∈ G^{r+1}, then s_i^r ⇝ s_j^{r+1}. (TRANSITIVITY) If s_i^r ⇝ s_k^{r′} and s_k^{r′} ⇝ s_j^{r″}, then s_i^r ⇝ s_j^{r″}.
We will now define the cornerstones of the message adversaries used in the remainder of the paper. Message adversaries such as VSSC(d) (Definition 12) and VSSC(k, d) (Definition 15) will be defined via the properties of the sequences of feasible communication graphs. Informally, most of these rest on the pivotal concept of source components, which are strongly connected components in G^r without incoming edges from processes outside the component. The graphs generated by our message adversaries will be required to eventually guarantee source components that are vertex-stable, i.e., consist of the same set of nodes (with possibly varying interconnect) during a sufficiently large number of consecutive rounds. It will turn out that vertex stability guarantees that eventually all members receive information from each other.
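The influence relation can be computed by a simple forward closure over the graph sequence, advancing message chains one hop per round. The following sketch is our own illustration (the helper name `influencers` and its calling convention are assumptions, not the paper's):

```python
# Sketch: which processes' round-r states influence target's round-s state.
def influencers(nodes, graphs, r, target, s):
    """graphs[t] = set of directed edges of round t+1.
    Returns all p_i such that s_i^r ~> s_target^s, i.e. p_i's round-r state
    can reach target via a message chain in rounds r+1 .. s."""
    # reach[p] = everyone influenced by p's round-r state so far; a process
    # always influences its own later states (LOCALITY-style self-influence).
    reach = {p: {p} for p in nodes}
    for t in range(r, s):                         # rounds r+1 .. s
        # Snapshot the old relation: a chain advances one hop per round.
        new = {src: set(rs) for src, rs in reach.items()}
        for (p, q) in graphs[t]:
            for src in nodes:
                if p in reach[src]:
                    new[src].add(q)
        reach = new
    return {p for p in nodes if target in reach[p]}
```

Snapshotting `reach` before applying a round's edges is essential: without it, two edges of the same round could be chained, which the model forbids (a message received in round t can only be forwarded in round t + 1).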

Definition 6 (Source component).
A source component S ≠ ∅ of a graph G = ⟨V, E⟩ is the set of vertices of a strongly connected component in G that has no incoming edges from other components, formally: ∀p_i ∈ S, ∀p_j ∈ V: (p_j → p_i) ∈ E ⇒ p_j ∈ S. By contracting strongly connected components (SCCs), it is easy to see that every weakly connected directed simple graph G has at least one source component, see Lemma 4. Hence, if G has k source components, it has at most k weakly connected components.
We now introduce vertex-stable source components as source components that remain the same for multiple rounds in a given graph sequence, albeit their actual interconnection topology may vary.
Definition 7 (Vertex-stable source component). Given a graph sequence (G^r)_{r>0}, we say that the consecutive sub-sequence of communication graphs G^r for r ∈ I = [a, b], b ≥ a, contains an I-vertex-stable source component S if, for every r ∈ I, G^r contains S as a source component.
We abbreviate I-vertex-stable source component as I-VSSC, and write |I|-VSSC if only the length of I matters. Note carefully that we assume |I| = b − a + 1 here, since I = [a, b] ranges from the beginning of round a to the end of round b; hence, I = [r, r] is not empty but rather represents round r.
The most important property of an I-VSSC is that information is guaranteed to spread among its vertices if the interval I is large enough, as expressed in Corollary 1 below. To prove this, we need a few basic observations and lemmas. Our first observation is a direct consequence of the definition of a strongly connected component.

Observation 1.
Let C denote the set of processes of a strongly connected component of some graph G, and let C′ be any nonempty proper subset of C. Then, there exists a process p_i ∈ C′ such that (p_i → p_j) ∈ G for some p_j ∈ C \ C′.
Based on influence and strongly connected components, we can show that a certain amount of information propagation is guaranteed in any strongly connected component C that is vertex-stable, i.e., whose vertex set remains the same, for a given number of rounds. The following Lemma 1 shows that if the length |[a, b]| of the interval of vertex stability is at least the size of the component minus 1, then, for every p_i ∈ C, s_i^{a−1} reaches every process of C by round b at the latest.
In order to also model message adversaries that guarantee faster information propagation, Definition 8 introduces a system parameter D, called the dynamic source diameter. Informally, it guarantees that the message sent in round a by a process p_i ∈ S, where S is an I-VSSC with |I| = D, i.e., I = [a, a + D − 1], can reach, directly or through message forwarding, every other process in S by the end of round a + D − 1. Corollary 1 revealed that every sufficiently long I-VSSC S guarantees D ≤ |S| − 1; all sufficiently long VSSCs hence necessarily give D ≤ n − 1. Choosing some D < n − 1 can be used to force the message adversary to speed up information propagation accordingly. For example, we show in Section 3.4 that certain expander graph topologies ensure D = O(log n).
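The dynamic source diameter can be measured empirically on a given graph sequence by tracking, round by round, the set of processes a sender's initial state has reached (one hop per round). A minimal sketch, with our own helper name:

```python
# Sketch: smallest d such that sender's state has reached every member of
# the vertex-stable component S after d rounds of the given sequence.
def rounds_to_cover(S, graphs, sender):
    """graphs[t] = set of directed edges of round t+1.
    Returns the smallest d with S subset of the influenced set after d
    rounds, or None if the sequence is too short."""
    known = {sender}                       # processes influenced so far
    for d, edges in enumerate(graphs, start=1):
        # The comprehension reads the pre-round 'known', so information
        # advances exactly one hop per round.
        known |= {q for (p, q) in edges if p in known}
        if S <= known:
            return d
    return None
```

On a chain topology inside S this returns |S| − 1, matching the worst-case bound D ≤ |S| − 1; denser (e.g. expander-like) topologies cover S in fewer rounds.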
Analogous considerations apply for the dynamic network depth E in communication graphs G r with a single source component: As all graphs are weakly connected in this case (see Lemma 4), analogous versions of Lemma 1 and Corollary 1 are easily established.

Definition 9 (E-influencing I-VSSC). An I-VSSC S with I = [a, b] is E-influencing if |I| ≥ E implies that, for every process p_j ∈ Π, there is some p_i ∈ S such that s_i^{a−1} ⇝ s_j^{a+E−1}.
We note that, by definition, for |I| < D and |I| < E, an I-VSSC is trivially D-bounded and E-influencing. While it might be tempting to assume a connection between the graph diameter and the dynamic source diameter, resp. the dynamic network depth, these notions are in general independent of each other. To illustrate this, Fig. 2 depicts an example where the graph diameter is constant even though, due to p_1, the dynamic source diameter and the dynamic network depth are in the order of the number of vertices. It is straightforward to generalize this example to n vertices.
To formalize information propagation from source components to the rest of the network in the general case with more than a single source component per communication graph G^r, one has to account for the fact that a process p_j outside any source component could be reachable from multiple source components. Intuitively speaking, this allows modeling dynamic networks that do not "cleanly" partition. Similarly to Lemma 1, the following Lemma 2 shows that there is guaranteed information propagation from at least one process of the set of VSSCs to every process in the system, provided all occurring source components are I-VSSCs with |I| ≥ n − 1.

Lemma 2.
Let n ≥ 2 and let R = {S_1, S_2, . . . , S_ℓ} be a set of ℓ ≥ 1 I-VSSCs with I = [a + 1, a + n − 1] such that, for every r ∈ I, every source component of G^r is in R. Then, for all p_j ∈ Π, there exists S ∈ R such that s_i^a ⇝ s_j^{a+n−1} for some p_i ∈ S.
Proof. Let R′ = ⋃_{S∈R} S denote the set of processes of all VSSCs of R. First, we show an analogue of Observation 1 for processes outside any source component of R: In every G^r, r ∈ I, at least one process of Π \ R′ has an incoming edge from a process contained in some S ∈ R. Suppose that this is not the case. Then, contracting the strongly connected components of G^r yields at least one node, contracted entirely from nodes of Π \ R′, with no incoming edges. Hence, some source component of G^r consists entirely of nodes from Π \ R′ and thus cannot be in R. This contradicts the assumptions made on R. Now, let P_R(r) be the set of processes p_j ∈ Π for which there exists some S ∈ R such that s_i^a ⇝ s_j^r holds for some p_i ∈ S. Using induction on r ≥ a + 1, we show that |P_R(r)| ≥ min{r − a + 1, n}; as r − a + 1 ≥ n for r ≥ a + n − 1, this proves the lemma.
For the induction start r = a + 1, LOCALITY implies that P_R(a + 1) contains all processes in R′, in addition to at least one process of Π \ R′, secured by our equivalent of Observation 1. Hence, |P_R(a + 1)| ≥ 2 = min{2, n} as required. For the induction step, assume |P_R(r)| ≥ min{r − a + 1, n}, and consider two cases: (i) If |P_R(r)| < n, then the induction hypothesis implies r − a + 1 < n, i.e., r + 1 ∈ I. Since R′ ⊆ P_R(a + 1) ⊆ P_R(r), there is at least one process p_j ∉ P_R(r) that must be contained in Π \ R′; thus, NEIGHBORHOOD and TRANSITIVITY in conjunction with our equivalent of Observation 1 secure |P_R(r + 1)| ≥ min{(r + 1) − a + 1, n}. (ii) If already |P_R(r)| = n, then |P_R(r + 1)| ≥ |P_R(r)| by LOCALITY, so |P_R(r + 1)| = n ≥ min{(r + 1) − a + 1, n} holds trivially. □
Again, we introduce a parameter H that allows more fine-grained modeling of the information propagation in a dynamic network than just assuming the worst case n − 1 secured by Lemma 2. For this purpose, Definition 10 generalizes Definition 9 from a single I-VSSC to a set R of I-VSSCs: if |I| ≥ H, it guarantees that every process in the network receives a message from some member of at least one I-VSSC of R within H rounds. Note carefully, though, that this does not necessarily imply that there exists an E-influencing I-VSSC. In the special case where R is a singleton set, however, the sole member of R is obviously an E-influencing VSSC.

An example of E-influencing I-VSSCs with E < n − 1: Expander topologies
We conclude this section with an example of a network topology that guarantees that all I-VSSCs are E-influencing for some E that is much smaller than n − 1, which justifies why we introduce this parameter (as well as D) explicitly in our model. An undirected graph G is an α-vertex expander if, for all sets R ⊆ V(G) with |R| ≤ |V(G)|/2, it holds that |N(R)| ≥ α|R|, where N(R) is the set of neighbors of R in G, i.e., those nodes in V(G) \ R that have a neighbor in R. (Explicit expander constructions can be found in [56].) As we need an expander property for directed communication graphs, we consider, for a vertex/process set R and a round r, both the set N_r^+(R) of nodes outside of R that are reachable from R in G^r and the set N_r^−(R) of nodes outside of R that can reach R in G^r. Definition 11 ensures an expansion property both for subsets R chosen from source components (property (a)) and for other processes (properties (b), (c)).
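The undirected α-vertex-expander condition can be verified by brute force on tiny graphs, which makes the definition concrete (our own helper; exponential in n, illustration only):

```python
# Sketch: brute-force check of |N(R)| >= alpha * |R| for all nonempty
# R with |R| <= n/2 (undirected alpha-vertex expander).
from itertools import combinations

def is_alpha_expander(nodes, edges, alpha):
    adj = {p: set() for p in nodes}
    for (p, q) in edges:
        adj[p].add(q)
        adj[q].add(p)
    nodes = list(nodes)
    n = len(nodes)
    for size in range(1, n // 2 + 1):
        for combo in combinations(nodes, size):
            R = set(combo)
            # N(R): neighbors of R outside of R.
            boundary = set().union(*(adj[p] for p in R)) - R
            if len(boundary) < alpha * len(R):
                return False
    return True
```

A complete graph expands maximally, whereas a simple path fails already for α = 1: the set consisting of its two endpoints of one side has too small a boundary.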

Definition 11 (Directed expander topology).
There is a fixed constant α and a fixed set S such that the following conditions hold for all sets R ⊆ V(G^r): (a) If |R| ≤ |S|/2 and R ⊆ S, then |N_r^+(R) ∩ S| ≥ α|R| and |N_r^−(R) ∩ S| ≥ α|R|. (b) If |R| ≤ n/2 and R ∩ S ≠ ∅, then |N_r^+(R)| ≥ α|R|. (c) If |R| ≤ n/2 and S ∩ R = ∅, then |N_r^−(R)| ≥ α|R|.
Such graphs exist, and the I-VSSCs of graph sequences adhering to Definition 11 are D-bounded and E-influencing with D, E ∈ O(log n), as shown next.
Proof. We will first argue that directed graphs with a single source component exist that satisfy Definition 11. Consider the simple undirected graph Ū that is the union of an α-vertex expander on the vertex set S and an α-vertex expander on V(G^r). We turn Ū into a directed graph by replacing every undirected edge {p_i, p_j} ∈ E(Ū) with the directed edges p_i → p_j and p_j → p_i. This guarantees Properties (a)–(c). In order to guarantee the existence of exactly one source component, we drop all directed edges pointing into S from the remaining graph, i.e., we remove all edges p_i → p_j where p_i ∉ S and p_j ∈ S, which leaves Properties (a)–(c) intact and makes the set S from Definition 11 the single source component of the graph. We stress that the actual topologies chosen by the adversary might be quite different from this construction, which merely serves to show the existence of such graphs. We also recall that our message adversaries, like the one given in Definition 12, rely on I-vertex-stable source components, which only require that the set of vertices remains unchanged, whereas the interconnect topology may change arbitrarily. Adding Definition 11 does of course not change this fact.
We will first show that the "per round" expander topology stipulated by Definition 11 is strong enough to guarantee D ∈ O(log n). Consider the stability interval I = [a, s] and some process p_i ∈ S, and let P_k denote the set of processes in S that p_i has causally influenced by the end of round a + k, with P_0 = {p_i}. For round a, Property (a) yields |P_1| ≥ |P_0|(1 + α). In fact, for all i where |P_i| ≤ |S|/2, we can apply Property (a) to get |P_{i+1}| ≥ |P_i|(1 + α), hence |P_i| ≥ min{(1 + α)^i, |S|/2}. Let ℓ be the smallest value such that (1 + α)^ℓ > |S|/2, which guarantees |P_ℓ| > |S|/2. That is, ℓ = ⌈log(|S|/2)/log(1 + α)⌉ ∈ O(log n). Now consider any p_j ∈ S and define Q_{i−1} ⊂ S as Q_i together with the set of nodes that causally influence the set Q_i in round a + i, for Q_{2ℓ} = {p_j}. Again, by Property (a), |Q_{i−1}| ≥ |Q_i|(1 + α) as long as |Q_i| ≤ |S|/2. From the definition of ℓ above, we thus have |Q_ℓ| > |S|/2. Since |P_ℓ| + |Q_ℓ| > |S| implies P_ℓ ∩ Q_ℓ ≠ ∅, it follows that every p_i ∈ S influences every p_j ∈ S within 2ℓ ∈ O(log n) rounds. While the above proof has been applied to the starting round x = a only, it is evident that it carries over literally also for any x ≤ s − 2ℓ, which shows that S is indeed a D-bounded I-VSSC with D = 2ℓ ∈ O(log n).
What remains to be shown is that S is also an E-influencing VSSC with E ∈ O(log n). We use Properties (b) and (c) similarly as in the above proof: For any round x ∈ [a, s − 2k′], we know by (b) that any process p_i ∈ S has influenced at least n/2 nodes by round x + k′, where k′ = ⌈log_{1+α}(n/2)⌉ ∈ O(log n), by arguing as for the P_i sets above. Now (c) allows us to reason along the same lines as for the sets Q_{i−1} above: any p_j will be influenced by at least n/2 nodes by round x + 2k′. Therefore, any p_i will influence every p_j ∈ Π by round x + 2k′, which completes the proof. □

This confirms that sequences of communication graphs with D < n − 1 and E < n − 1 indeed exist and are compatible with message adversaries such as VSSC(d) stated in Definition 12 below.
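To see why per-round expansion yields logarithmic influence times, one can simulate the growth of the sets P_i. The following sketch is our own illustration, not from the paper: it uses a bidirected hypercube, which has logarithmic diameter, as a simple stand-in for the expander topology of Definition 11.

```python
# Per-round expansion makes the influenced set grow geometrically:
# |P_{i+1}| >= |P_i|(1 + alpha) while P_i is small, so one node
# influences everybody within O(log n) rounds.

def influence_rounds(n, out_edges, start):
    """Rounds until `start` has causally influenced all n processes when
    the same graph is applied in every round (a BFS level count)."""
    influenced = {start}
    rounds = 0
    while len(influenced) < n:
        new = {w for v in influenced for w in out_edges[v]} - influenced
        if not new:
            return None  # start can never reach the remaining processes
        influenced |= new
        rounds += 1
    return rounds

# Bidirected 7-dimensional hypercube: diameter 7 = log2(128).
d = 7
n = 2 ** d
out_edges = [[v ^ (1 << b) for b in range(d)] for v in range(n)]
print(influence_rounds(n, out_edges, 0))  # prints 7, logarithmic in n
```

The hypercube is only a convenient example with logarithmic diameter; an actual α-vertex expander gives the same asymptotic behavior via the geometric growth argument of the proof above.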

Consensus impossibilities and lower bounds
In this section, we will introduce a message adversary VSSC D,E (d) that allows consensus to be solved for d ≥ 2D + 2E + 2 in our model, and justify its particular properties by showing that relaxations lead to impossibilities. First and foremost, it requires that every G^r is rooted, i.e., contains only a single source component. Moreover, although the processes do not need to know n, they need a priori knowledge of the dynamic source diameter D and the dynamic network depth E from Definitions 8 and 9. And finally, our message adversary must guarantee that, eventually, a d-VSSC occurs. Interestingly, whereas VSSC D,E (d) allows consensus to be solved for d ≥ 2D + 2E + 2, it is too strong for solving other standard problems in dynamic networks such as reliable broadcasting.
Since consensus is trivially impossible for an unrestricted message adversary, which may just inhibit any communication in the system, it is natural to consider the question whether weakly connected communication graphs G r in every round r allow to solve consensus. However, it is not difficult to see that this does not work, even when all G r = G are the same, i.e., in a static topology: Consider the case where G contains two source components S 1 and S 2 ; such a graph obviously exists, cf. Lemma 4 below. If all processes in S 1 start with initial value 0 and all processes in S 2 start with initial value 1, they must decide on their own initial value (by validity and termination) and hence violate agreement. After all, no process in, say, S 1 ever has an incoming link from any process not in S 1 .
Therefore, we restrict our attention to message adversaries that guarantee a single source component in G r for any round r. Fig. 1 showed a sequence of graphs where this is the case. Some simple properties of such graphs are asserted by Lemma 4.

Lemma 4.
Any graph G contains at least one and at most n source components (the latter in case of n isolated processes), which are all disjoint. If G contains a single source component S, then G is weakly connected, and there is a directed (outgoing) path from every p_i ∈ S to every p_j ∈ G.
Proof. We first show that every weakly connected directed simple graph G has at least one source component. To see this, contract every SCC to a single vertex and remove all resulting self-loops. The resulting graph G′ is a directed acyclic graph (DAG) (and of course still weakly connected), and hence G′ has at least one vertex S (corresponding to some SCC in G) that has no incoming edges. By construction, any such vertex S corresponds to a source component in the original graph G. Since G has at least 1 and at most n weakly connected components, the first statement of our lemma follows.
To prove the second statement, we use the observation that there is a directed path from u to v in G if and only if there is a directed path from the vertex C_u (containing u) to the vertex C_v (containing v) in the contracted graph G′. If there is only one source component in G, the above observations imply that there is exactly one vertex S in the contracted graph G′ that has no incoming edges. Since G′ is connected, S has a directed path to every other vertex in G′, which implies that every process p_i ∈ S has a directed path to every process p_j, as required. □

Obviously, assuming a single source component makes consensus solvable if the source component is static (shown in detail in [44]). In this paper, we allow the source component to change throughout the run, i.e., the (single) source component S of G^r might consist of a different set of processes in every round r. However, it will turn out that a sufficiently long interval of vertex-stability is indispensable for solving consensus in this setting. In the sequel, we will consider the message adversary VSSC D,E (d) stated in Definition 12, which enforces the dynamic source diameter D and the dynamic network depth E ≥ D and is parameterized by some stability window duration d > 0. Note that all the impossibility results and lower bounds in this section hold also when item (ii) is dropped or replaced by something weaker (like merely D-bounded VSSCs, as is done in Definition 15). Actually, it is only needed by the consensus algorithm in Section 5, and has been added already here solely for the purpose of avoiding two different definitions of essentially the same message adversary.
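The SCC-contraction argument in the proof of Lemma 4 is easy to make computational. The following sketch (our own code, not part of the paper; standard Kosaraju SCC computation on 0-indexed vertices) returns exactly the source components of a directed graph:

```python
def source_components(n, edges):
    """Source components of a digraph on vertices 0..n-1: the SCCs that
    have no incoming edge from any other SCC (cf. Lemma 4)."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    # Pass 1 (Kosaraju): record vertices in order of DFS completion.
    visited = [False] * n
    order = []
    for s in range(n):
        if visited[s]:
            continue
        visited[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                order.append(v)
                stack.pop()
            elif not visited[w]:
                visited[w] = True
                stack.append((w, iter(adj[w])))
    # Pass 2: DFS on the reversed graph in reverse completion order.
    comp = [-1] * n
    ncomp = 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = ncomp
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = ncomp
                    stack.append(w)
        ncomp += 1
    # Contraction: an SCC is a source component iff no inter-SCC edge enters it.
    has_incoming = [False] * ncomp
    for u, v in edges:
        if comp[u] != comp[v]:
            has_incoming[comp[v]] = True
    members = [set() for _ in range(ncomp)]
    for v in range(n):
        members[comp[v]].add(v)
    return [c for i, c in enumerate(members) if not has_incoming[i]]
```

A graph is rooted in the sense required by Definition 12 exactly when this function returns a single component.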
We first establish some general properties of the graph sequences generated by VSSC D,E (d).

Lemma 5 (Properties of VSSC D,E (d)). In every sequence (G r ) r>0 of communication graphs feasible for VSSC D,E (d),
(i) there is at least one process p_i such that ∀p_j ∈ Π : s_i^0 ⇝ s_j^{n(n−2)+1} holds, where s_i^0 represents p_i's initial state.
(ii) Conversely, for n > 2, the adversary can choose some sequence (G^r)_{r>0} where no process p_i is causally influenced by all other processes p_j, i.e., there is no p_i ∈ Π s.t. ∃y with ∀p_j ∈ Π : s_j^0 ⇝ s_i^y.
Proof. Definition 12 guarantees that there is (at most) one source component in every G^r, r > 0. Since we have infinitely many graphs in (G^r)_{r>0} but only finitely many processes, there is at least one process p_i in the source component of G^r for infinitely many r. Let r_1, r_2, . . . be this sequence of rounds. Moreover, let P_0 = {p_i}, and define, for each k > 0, the set P_k = {p_j ∈ Π : s_i^0 ⇝ s_j^{r_k}} of processes that p_i has causally influenced by the end of round r_k. Using induction, we will show that |P_k| ≥ min{n, k + 1} for k ≥ 0. Consequently, by the end of round r_{n−1} at the latest, p_i will have causally influenced all processes in Π. Induction base k = 0: |P_0| ≥ min{n, 1} = 1 follows immediately from P_0 = {p_i}. Induction step k → k + 1, k ≥ 0: First assume that already |P_k| = n ≥ min{n, k + 2}; since |P_{k+1}| ≥ |P_k|, we are done. Otherwise, consider round r_{k+1} and |P_k| < n: Since p_i is in the source component of G^{r_{k+1}}, there is a path from p_i to any process p_j, in particular, to any process p_j ∈ Π \ P_k. Let (v → w) be an edge on such a path, such that v ∈ P_k and w ∈ Π \ P_k. Clearly, the existence of this edge implies that v ∈ N_w^{r_{k+1}} and thus w ∈ P_{k+1}. Since this implies |P_{k+1}| ≥ |P_k| + 1 ≥ k + 2 = min{n, k + 2} by the induction hypothesis, we are done.
Finally, at most n(n − 2) + 1 rounds are needed until all processes p_j have been influenced by p_i, i.e., r_{n−1} ≤ n(n − 2) + 1: A pigeonhole argument reveals that at least one process p_i must have been in the source component n − 1 times after so many rounds. After all, if every p_i appeared at most n − 2 times, we could fill up at most n(n − 2) rounds. By the above result, this is enough to ensure that some p_i influenced every p_j.
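The bound of Lemma 5(i) can be checked experimentally: even when the source component changes in every round, repeated membership lets some process influence everybody within n(n − 2) + 1 rounds. The small simulation below is our own illustration; the rotating-chain adversary is a hypothetical example, not taken from the paper.

```python
def rounds_until_some_root_influences_all(n, graphs):
    """graphs: iterable of per-round edge lists. Returns the first round
    after which some process has causally influenced all others."""
    # knows[j]: set of processes whose initial state has reached p_j
    knows = [{i} for i in range(n)]
    for r, edges in enumerate(graphs, start=1):
        new = [set(k) for k in knows]
        for v, w in edges:
            new[w] |= knows[v]
        knows = new
        if any(all(i in knows[j] for j in range(n)) for i in range(n)):
            return r
    return None

def rotating_chain(n, rounds):
    """Adversary whose round-r graph is a chain headed by p_{r mod n}:
    the (single) source component changes in every round."""
    for r in range(rounds):
        order = [(r + k) % n for k in range(n)]
        yield [(order[k], order[k + 1]) for k in range(n - 1)]

n = 6
r = rounds_until_some_root_influences_all(n, rotating_chain(n, n * (n - 2) + 1))
# here p_0 already succeeds after round 5, well within n(n-2)+1 = 25
```

The worst-case sequences that force the bound to be nearly tight are of course more adversarial than this round-robin example.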
The converse statement (ii) follows directly from considering a static star, for example, i.e., a communication graph with one central process p_c where, for all r, G^r = ⟨Π, {(p_c → p_j) | p_j ∈ Π \ {p_c}}⟩. Clearly, p_c cannot be causally influenced by any other process, and for any two distinct p_j, p_k ∈ Π \ {p_c} and all x, y, s_j^x ⇝ s_k^y does not hold either. On the other hand, this topology satisfies Definition 12, which includes the requirement of at most one source component per round. □

In the light of Lemma 5, it is interesting to relate the message adversary in Definition 12 to the classification of [24]: It is apparent that VSSC D,E (d) belongs to a class that is stronger than the weakest class, which requests one node that eventually reaches all others, but weaker than the second-weakest class, which requests one node that is reached by all. By contrast, models like [18,40] that assume bidirectionally connected graphs G^r in every round belong to the strongest classes (Class 10) in [24].
In Theorem 1, we will examine the solvability of several broadcast problems [40] under the message adversary VSSC D,E (d). It will turn out that none of these are implementable under our assumptions-basically, because there is no guarantee of (eventual) bidirectional communication. This is clearly in contrast to the usual strong bond between some of these problems and consensus in traditional settings.

Theorem 1. The message adversary VSSC D,E (d) given in Definition 12, for any d, belongs to a class that is between the weakest and second-weakest in [24]. Neither reliable broadcast, atomic broadcast, nor causal-order broadcast can be implemented. Moreover, there is no algorithm that solves counting, k-verification, k-token dissemination, all-to-all token dissemination, or k-committee election.
Proof. We first consider reliable broadcast, which requires that when a correct process broadcasts m, every correct process eventually delivers m. Suppose that the adversary chooses, for all r, the static star G^r = ⟨Π, {(p_c → p_j) | p_j ∈ Π \ {p_c}}⟩, which matches Definition 12, and consider some leaf process p_j and p_i = p_c. Clearly, p_j is a correct process in our model. Since p_i never receives a message from p_j, p_i can trivially never deliver a message that p_j broadcasts. For the token dissemination problems stated in [40], consider the same communication graphs and assume that there is a token that only p_j has. Since no other process ever receives a message from p_j, token dissemination is impossible.
For counting, k-verification, and k-committee election, we return to the static star round graph G^r = ⟨Π, {(p_c → p_j) | p_j ∈ Π \ {p_c}}⟩ with central node p_c considered in the proof of Lemma 5. As the local history of any process is obviously independent of n here, it is impossible to solve any of these problems. □

Necessity of a priori knowledge of the dynamic network depth
We will now show that every correct solution for consensus, as well as for the related leader election problem, requires some a priori knowledge of the dynamic network depth of the communication graphs generated by the adversary. Recall that a universal algorithm does not have any a priori knowledge of the network, i.e., does not even know upper bounds for the dynamic network depth E (and hence for n and D).

Theorem 2 (Impossibility of universal consensus). There is no universal algorithm that can solve consensus under any message adversary VSSC D,E (d) as given in Definition 12, i.e., works correctly under VSSC D,E (d) for any choice of d.
Proof. Assume for the sake of a contradiction that there is such a universal algorithm A, w.l.o.g. for a set of input values V that contains 0 and 1. Consider a run α v of A on a communication graph G that forms a (very large) static directed line rooted at process p i and ending in process p j . Process p i has initial value v ∈ {0, 1}, while all other processes have initial value 0. Clearly, the universal algorithm A must allow p i to decide on v by the end of round κ, where κ is a constant (independent of E, D and n; we assume that n is large enough to guarantee n − 1 > κ). Next, consider a run β v of A that has the same initial states as α v , and communication graphs (G r ) r>0 that, during rounds [1, κ], are also the same as in α v (defining what happens after round κ will be deferred). In any case, since α v and β v are indistinguishable for p i until its decision round κ, it must also decide v in β v at the end of round κ.
However, since n > κ + 1, p_j has not been causally influenced by p_i by the end of round κ. Hence, it has the same state s_j^{κ+1} both in β_v and in β_{1−v}. As a consequence, it cannot have decided by round κ: If p_j decided v, it would violate agreement with p_i in β_{1−v}. Now assume that the runs β_v, β_{1−v} are actually such that the stable window occurs later than round κ, i.e., r_ST = κ + 1, and that the adversary just reverses the direction of the line then: For all G^r, r ≥ κ + 1, p_j is the source component and p_i is the last process of the resulting topology. Observe that the resulting β_v still satisfies Definition 12, since p_j itself forms the only source component. Now, p_j must eventually decide on some value v′ in some later round κ′, but since p_j has been in the same state at the end of round κ in both β_v and β_{1−v}, it is also in the same state in round κ′ in both runs. Hence, its decision must contradict the decision of p_i in one of β_v, β_{1−v}. □

We now use a more involved indistinguishability argument to show that a slightly weaker problem than consensus, namely leader election, is also impossible to solve universally under the message adversary VSSC D,E (d). The classic leader election problem (cf. [57]) assumes that, eventually, exactly one process irrevocably elects itself as leader (by entering a special elected state) and every other process elects itself as non-leader (by entering the non-elected state). Non-leaders are not required to know the process id of the leader. Whereas it is easy to achieve leader election in our model when consensus is solvable, by just reaching consensus on the process ids in the system, the opposite is not true: Since the leader elected by some algorithm need not be in the source component that exists when consensus terminates, one cannot use the leader to disseminate a common value to all processes in order to solve consensus atop of leader election.

Theorem 3 (Impossibility of universal leader election). There is no universal algorithm that solves leader election under the message adversary VSSC D,E (d) given in Definition 12, for any choice of d.

Proof. We assume that there is a universal algorithm A that solves the problem.
Consider the execution α_i(m) of A in a static unidirectional chain of m processes, headed by process p_i: Since p_i has only a single outgoing edge and does not know n, it cannot know whether it has neighbors at all. Since it might even be alone in the single-vertex graph consisting of p_i only, it must elect itself as leader in any α_i(m), m ≥ 1, after some T_i rounds (T_i may depend on i, however, as we do not restrict A to be time-bounded).
Let i and j be two arbitrary different process ids, and let T_i resp. T_j be the termination times in the executions α_i(m) resp. α_j(m′), for any m, m′; let T = max{T_i, T_j}.
We now build a system consisting of n = 2T + 3 processes. To do so, we assume a chain G_i of T + 1 processes headed by p_i and ending in process p_c, a second chain G_j of T + 1 processes headed by p_j and ending in process p_k, and one additional process p_ℓ. Now consider an execution β, which proceeds as follows: For the first T rounds, the communication graph is the unidirectional ring created by connecting the above chains with the edges (p_k → p_i), (p_c → p_ℓ) and (p_ℓ → p_j); its source component clearly is the entire ring. Starting from round T + 1 on, process p_ℓ forms the single-vertex source component, which feeds, through the edges (p_ℓ → p_j) and (p_ℓ → p_c), the two chains G_j and Ḡ_i, with Ḡ_i being G_i with all edges reversed. Note that, from round T + 1 on, there is no edge connecting processes in G_i with those in G_j or vice versa.
Let p_m be the process that is elected leader in β. We distinguish two cases: 1. If p_m ∈ G_j ∪ {p_ℓ}, then consider the execution β_i that is exactly like β, except that there is no edge (p_k → p_i) during the first T rounds: p_i is the single source component here. Clearly, for p_i, the execution β_i is indistinguishable from α_i(T + 1) during the first T_i ≤ T rounds, so it must elect itself leader. However, since no process in G_j ∪ {p_ℓ} (including p_m) is causally influenced by p_i during the first T rounds, all processes in G_j ∪ {p_ℓ} have the same state after round T (and all later rounds) in β_i as in β. Consequently, p_m also elects itself leader in β_i as it does in β, which is a contradiction.
2. On the other hand, if p_m ∈ G_i, we consider the execution β_j, which is exactly like β, except that there is no edge (p_ℓ → p_j) during the first T rounds: p_j is the single source component here. Clearly, for p_j, the execution β_j is indistinguishable from α_j(T + 1) (made up of the chain G_j) during the first T_j ≤ T rounds, so it must elect itself leader.
However, since no process in G_i ∪ {p_ℓ} (including p_m) is causally influenced by p_j during the first T rounds, each such process has the same state after round T (and all later rounds) in β_j as in β. Consequently, p_m also elects itself leader in β_j as it does in β, which is again a contradiction.
This completes the proof of Theorem 3. 2

Impossibility of consensus with too short stability intervals
The goal of this section is to show that some I -VSSC S must be vertex-stable sufficiently long for solving consensus in our model. In essence, what is needed for this purpose is that every member of S is able to reach the entire network.
Recalling Definition 9, this requires |I| ≥ E and hence d ≥ E in Definition 12.
To show that VSSC D,E (E) is indeed necessary in our setting, we will now consider a stronger message adversary VSSC′ D,E (E − 1) given in Definition 14 below: It is stronger than VSSC D,E (E) as its stability interval is shorter, but still slightly weaker than VSSC D,E (E − 1), in that it also guarantees one process to be reached from the processes in S within E rounds, despite the too short stability interval I. Note carefully that, since there is only one such process, it would be reached if |I| was actually E. This property is formally captured by the almost (E − 1)-influencing VSSCs introduced in Definition 13, which is slightly weaker than Definition 9 in that I-VSSCs with |I| = E − 1 are no longer arbitrary. We have the following Lemma 6 that relates our message adversaries: Every sequence of communication graphs feasible for VSSC′ D,E (E − 1) is also feasible for VSSC D,E (E − 1), so an impossibility under the former carries over to the latter.

We will now prove that the message adversary VSSC′ D,E (E − 1), and hence, by Lemma 6, also VSSC D,E (E − 1), is too strong for solving consensus: Processes can withhold information from each other, which causes consensus to be impossible [34]. In order to simplify our proof, we assume that the adversary has to fix the start of J = [r_ST, r_ST + E − 2] and the set of source component members S of the eventually generated J-VSSC S before the beginning of the execution (but given the initial values). Note that this does not restrict the adversary, and hence does not weaken our impossibility result: For deterministic algorithms, the whole execution depends only on the initial values and the sequence of the G^r's, so the adversary could simulate the execution and determine every G^{r+1} based on this.

Lemma 7. Consider two runs of a consensus algorithm A under the message adversary VSSC′ D,E (E − 1) and a corresponding J-VSSC S, which start from two univalent configurations C and C′ that differ only in the state of one process p_i at the beginning of round r. Then, C and C′ cannot differ in valency.
Proof. The proof proceeds by assuming the contrary, i.e., that C and C′ have different valency. We will then apply the same sequence of round graphs to extend the execution prefixes that led to C and C′ to get two different runs e and e′. It suffices to show that there is at least one process p_j that cannot distinguish e from e′: This implies that p_j will eventually decide on the same value in both executions, which contradicts the assumed different valency of C and C′. Our choice of the round graphs depends on the following exhaustive cases: (i) For p_i ∉ S, we let the adversary choose any source component consisting of the processes in S, for all G^s with s ≥ r. Obviously, every process (i.e., we can choose any) p_j ∈ S has the same state throughout e and e′. (ii) For p_i ∈ S and r ∈ J, we choose any source component consisting of the processes in S for all G^s with r ≤ s ≤ r_ST + E − 2. For s > r_ST + E − 2, we choose the source component {p_j}, where p_j is the process that does not hear from any process in S (and hence from p_i) within J according to Definition 13. Hence, p_j has the same state in e and e′, both during J and afterwards, where it is the single source component.
(iii) For p_i ∈ S and r ∉ J, we choose graphs G^s where the source component is {p_j} and p_i has only in-edges for r ≤ s < r_ST; p_j (satisfying p_j ∉ S and hence p_j ≠ p_i) is again the "distant" process allowed by Definition 13. From s = r_ST on, we choose the same graphs G^s as in case (ii). It is again obvious that p_j has the same state throughout e and e′, since p_i cannot communicate with any process before J and does not reach p_j within J.
In any case, for process p_j, the sequence of states in the extensions starting from C and C′ is hence the same. Therefore, the two runs are indistinguishable for p_j, which hence cannot decide differently. This provides the required contradiction to the different valencies of C and C′. □

The next Lemma 8 establishes connectedness of the successor graphs of a configuration [34]; it is a general property of graphs, independent of the model used in this section. It is based upon constructing a sequence of graphs that differ only in one edge. Note that our construction is complicated by the fact that it must maintain D-boundedness of all intermediate graphs. We use d_G(v, w) to denote the distance (number of edges on a shortest path) from v to w in graph G. Subsequently, Lemma 9 shows that, in the case where n > 2, we can even find connected successor graphs while avoiding a specific source component S.

Lemma 8 (Connectedness)
. For any two graphs G′ and G″, each with a single source component, we can find a finite sequence of graphs G′, G_1, . . . , G_i, . . . , G″, each with a single source component, where any two consecutive graphs differ only by at most one edge. We say that the configurations C′ resp. C″ reached by applying G′ resp. G″ to the same configuration C are connected in this case. Moreover, the following can be asserted: (i) If the source components of G′ and G″ consist of the same set of processes S′ = S″ = S, the same is true for all G_i, and either G′ ⊆ G_i or G″ ⊆ G_i. (ii) If S′ ≠ S″ and the source component S_i of G_i is the same as the source component of G ∈ {G′, G″}, then either G ⊆ G_i, or d_{G_i}(v, w) = 1 for all v ∈ S_i except at most one node and all w ∈ Π.

Proof. We describe how to construct the sequence by stating, for each step of our construction, which edge e is modified. To show (i), we provide a construction that assumes S′ = S″ = S. In the first phase of the construction, we add, one by one, the edges from E″ \ E′. When no such edge remains, we have constructed the graph G_j = G′ ∪ G″. We then commence the second phase by removing, one by one, the edges from E′ \ E″ until no such edge remains. For each G_i in the sequence constructed in this way, we have that either G′ ⊆ G_i or G″ ⊆ G_i, which implies that each graph has a single source component, since G′ and G″ both have a single source component themselves. To see that all G_i have the same source component, suppose some G_i has a source component different from S. Then an edge (v, w) was added for v ∉ S, w ∈ S, or all edges (v, w) with w ∈ S \ {v} were removed for some v ∈ S. Both contradict the assumption that S′ = S″ = S, however.
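The two-phase construction in the proof of (i) is easily written down. The sketch below (our own code, not the paper's) builds the sequence and checks the key invariant that every intermediate graph contains G′ or G″, and that consecutive graphs differ in exactly one edge:

```python
def one_edge_sequence(e1, e2):
    """Lemma 8(i)-style construction: first add the edges of E'' \\ E'
    one by one (reaching the union graph G' ∪ G''), then remove the
    edges of E' \\ E'' one by one."""
    g1, g2 = set(e1), set(e2)
    seq = [set(g1)]
    cur = set(g1)
    for e in sorted(g2 - g1):          # phase 1: add missing edges
        cur.add(e)
        seq.append(set(cur))
    for e in sorted(g1 - g2):          # phase 2: remove surplus edges
        cur.discard(e)
        seq.append(set(cur))
    return seq

# Two example graphs, both rooted at vertex 0 (source component {0}).
E1 = [(0, 1), (1, 2)]
E2 = [(0, 2), (2, 1)]
seq = one_edge_sequence(E1, E2)
assert seq[0] == set(E1) and seq[-1] == set(E2)
for a, b in zip(seq, seq[1:]):
    assert len(a ^ b) == 1             # consecutive graphs differ in one edge
for g in seq:
    # every intermediate contains G' or G'', hence keeps a single source
    assert set(E1) <= g or set(E2) <= g
```

This only exercises case (i); case (ii) and the S-avoiding variant of Lemma 9 require the more careful phase structure described in the proofs.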
To show (ii), we assume S′ ≠ S″ and, w.l.o.g., that there is some u ∈ S′ \ S″ (if there is no such node u, we can still use the same construction, albeit with reversed direction). First, we add edges, chosen arbitrarily one by one, until we arrive at the complete graph. Since u is always in the source component of each G_i generated this way (there is always a spanning tree rooted at u contained in G_i), each G_i has a single source component and no G_i has source component S″. Furthermore, G′ ⊆ G_i and hence (ii) holds. It remains to be shown that from the complete graph we can successively delete edges to arrive at G″ while respecting (ii). For simplicity, we show that, equivalently, we can add edges to G″ and arrive at the complete graph without ever violating (ii). We start by adding to G″ the edges e = (v → w) for v ∈ S″, w ∈ Π until no such edge remains. Clearly, adding an edge cannot increase any distances, and hence (ii) is preserved. We continue by adding the remaining edges (v → w) for v ∈ Π to arrive at the complete graph. Again, the distance from any node in S_i \ {u} to any other node is equal to 1, and hence (ii) holds. □

Lemma 9. Pick two arbitrary graphs G′, G″ with exactly one source component S′, S″, respectively. If n > 2 and given some non-empty S with S ≠ S′ and S ≠ S″, there is a sequence of graphs G′, . . . , G_i, . . . , G″ such that any two consecutive graphs of the sequence differ in at most one edge and each G_i of the sequence has a unique source component that is different from S.
Proof. We show that, for any graph G′ with source component S′, there is such a sequence G′, . . . , G_i, . . . , G″ for any graph G″ with source component S″ if S″ differs from S′ in at most one node, i.e., |(S′ \ S″) ∪ (S″ \ S′)| ≤ 1. Repeated application of this fact implies the lemma, because, for n > 2, it is easy to find a sequence S′, . . . , S_i, . . . , S″ of subsets of Π s.t. each two consecutive sets of the sequence differ from each other in at most one node and each set of the sequence is ≠ S. We sketch how to construct the desired graphs G_i of the sequence in three phases.
Phase 1: Remove all edges (one by one) between nodes of S′ until only a cycle (or, in general, a circuit) remains, and then remove all edges between nodes outside of S′ until only chains going out from S′ remain. Let G_j be the graph resulting from this operation and S_j be its source component.
Phase 2: If we need to add a node p to S_j, first add (q → p) for some q ∈ S_j, and then add/remove the edges needed to make p part of the circuit forming the source component; removing a node works analogously in reverse. Phase 3: Since we now already have some graph G_k with source component S″, it is easy to add/remove edges one by one to arrive at the topology of G″. First, we add edges until the nodes of S″ are completely connected among each other, the nodes not in S″ are completely connected among each other, and there is an edge from every node of S″ to each node not in S″. Second, we remove the edges not present in G″. □

The proof of the following impossibility result follows roughly along the lines of the proof of [58]: there is no algorithm that solves consensus under the message adversary VSSC′ D,E (E − 1) if n > 2 and E > 1.

Proof. For the base case, we consider binary consensus only and argue similarly to [58], but make use of our stronger validity property: Let C_x^0 be the initial configuration where the x processes with the smallest ids start with 1 and all others with 0. Clearly, in C_0^0 all processes start with 0 and in C_n^0 all start with 1, so the two configurations are 0- and 1-valent, respectively. To see that C_x^0 must be bivalent for some x, assume that this is not the case; then there must be a C_x^0 that is 0-valent while C_{x+1}^0 is 1-valent. But these configurations differ only in the state of one process p_i, and so, by Lemma 7, they cannot be univalent with different valency.
For the induction step, we assume that there is a bivalent configuration C at the beginning of round r − 1, and show that there is at least one such configuration at the beginning of round r. We proceed by contradiction and assume that all configurations at the beginning of round r are univalent. Since C is bivalent and all configurations at the beginning of r are univalent, there must be two configurations C′ and C″ at the beginning of round r which have different valency. Clearly, C′ and C″ are reached from C by two different round r − 1 graphs G′ = ⟨Π, E′⟩ and G″ = ⟨Π, E″⟩. As we explain in more detail below, we can apply Lemmas 8 and 9 to show that there is a sequence of applicable graphs such that C′ and C″ are connected. Each pair of subsequent graphs in this sequence differs only in one link (v → w), such that the resulting configurations differ only in the state of w. Moreover, if the source component in G′ and G″ is the same, all graphs in the sequence also have the same source component. Since the valency of C′ and C″ was assumed to be different, there must be two configurations D and D′ in the corresponding sequence of configurations that have different valency and differ only in the state of one process, say p_i. Applying Lemma 7 to D and D′ again produces a contradiction, and so not all successors of C can be univalent.
It remains to be shown that Lemmas 8 and 9 indeed yield a sequence of applicable graphs, i.e., that extending the sequence of the r − 1 graphs so far accordingly yields a prefix of some sequence feasible for VSSC′ D,E (E − 1). By the assumptions of the theorem, we may assume that E > 1 and n > 2, which allows us to apply all the claims of Lemma 9. Since all graphs in the sequence described by Lemmas 8 and 9 have exactly one source component, item (i) of Definition 14 is clearly satisfied.
Let S^{r−1} denote the source component of G^{r−1}, and let S′, S″ denote the source components of G′, G″, respectively. If S^{r−1} ≠ S′ and S^{r−1} ≠ S″, Lemma 9 allows us to construct a sequence s.t. the source component of no G_i of the sequence is S^{r−1}. Therefore, E-influence of all VSSCs in the sequence is preserved and (ii) of Definition 14 holds in this case. If S′ = S″, then, since both G′ and G″ preserve the E-influence, so does every G_i in the sequence described in item (i) of Lemma 8, because every G_i contains either G′ or G″. If S′ ≠ S″ and, say, S′ = S^{r−1}, then each G_i with source component S_i = S′ either preserves the distances from S′ to all other nodes, or the distance from all but at most one node p_j of S′ to all other nodes is 1. In the latter case, p_j reached at least one process p_k ∈ S_i in round r − 1, since it was part of S^{r−1} by assumption. As ∀p_k ∈ S_i \ {p_j}, ∀p_ℓ ∈ Π : d_{G_i}(p_k, p_ℓ) = 1, in addition to s_k^{r−1} ⇝ s_ℓ^r, we have that s_j^{r−1} ⇝ s_ℓ^r. Thus, E-influence is preserved for any E > 1 also in this case, and item (ii) of Definition 14 is satisfied.
If r ∈ J, i.e., round r is part of the stability phase, it follows that G′ and G″ have the same source component, and so do all graphs in the sequence provided by item (i) of Lemma 8. Hence (iii) of Definition 14 also holds.
We have hence established that VSSC′ D,E (E − 1) is too strong for consensus, which implies the same for VSSC D,E (E − 1) according to Lemma 6. □

A consensus algorithm for VSSC D,E (2D + 2E + 2)
In this section, we show that it is possible to solve consensus under the message adversary VSSC D,E (2D + 2E + 2).
The underlying idea of our consensus algorithm is to use flooding to propagate the largest input value to everyone. However, as Definition 12 does not guarantee bidirectional communication between every pair of processes, according to (ii) of Lemma 5, flooding alone is not sufficient: The largest input value could be hidden at a single process p_i that never has outgoing edges. If such a process p_i never accepted smaller values, it would be impossible to reach agreement (without potentially violating validity). Thus, we have to find a way to force p_i to also accept a smaller value.
A well-known technique to do so is locking a candidate value. Obviously, we do not want just any process to lock its value, but rather some process(es) that will be able to impose their locked value, i.e., can successfully flood the system. In addition, we may allow processes that have successfully locked a value to decide only when they are sure that every other process has accepted their value as well. According to Definition 10, both can be guaranteed when these processes have been in a vertex-stable source component long enough, which is guaranteed by VSSC D,E (2D + 2E + 2).
The first major ingredient of our consensus algorithm is a network approximation algorithm (described in Section 5.1), which allows processes to detect their source component membership in (past) rounds. The core of our consensus algorithm (presented in Section 5.2) then exploits this knowledge for reaching agreement on locked values and imposes the resulting value on all processes in the network. As we will see, the main complication comes from the fact that a process can detect whether it has been part of the source component of round $r$ only with some latency.

The local network approximation algorithm
According to our system model, no process $p_i$ has any initial knowledge of the network. In order to learn about VSSCs, for example, it hence needs to locally acquire such knowledge. Process $p_i$ achieves this by means of Algorithm 1, which maintains a network estimate $A_i$ in a local variable.⁴ $A_i$ is a graph that holds the local estimates of every communication graph $G^r$ that occurred so far, simply by labeling an edge $(p_i \to p_j)$ with the set of round numbers of every $G^r$ once $p_i$ received evidence that $(p_i \to p_j)$ was present in round $r$.

Algorithm 1 Local Network Approximation (process $p_i$).

Variables and Initialization:
1: $A_i := \langle\{p_i\}, \emptyset\rangle$, a weighted digraph without multi-edges and loops

Emit round $r$ messages:
2: send $A_i$ to all current neighbors

Round $r$ computation:
3: for $p_j \in N^r_i$ and $p_j$ sent message $A_j$ in $r$ do
4:   if $\exists$ edge $e = (p_j \xrightarrow{T} p_i) \in E_i$ then
5:     replace $e$ with $(p_j \xrightarrow{T \cup \{r\}} p_i)$
 ⋮
       … if no such edge exists
12: function InStableSource($I$)
13:   Let $C_i|t$ be $A_i|t$ if it is strongly connected, or the empty graph otherwise.
 ⋮
15:   if $\forall t_1, t_2 \in I : C_i := V(C_i|t_1) = V(C_i|t_2) \neq \emptyset$ then
16:     return $C_i$
17:   else
18:     return $\emptyset$
Initially, $A_i$ consists of process $p_i$ only. In every round, every process $p_i$ broadcasts its current $A_i$ and fuses it with the network estimates received from its neighbors: $p_i$ receives $A_j$ from each neighbor $p_j$ and uses this information to update its own knowledge; the loop in line 9 ensures that every reported edge ends up in $A_i$, labeled with the union of the reported rounds and the set of rounds $T$ previously known to $p_i$. Given $A_i$, we use $A_i|t$ with⁵ $0 < t \leq r$ to denote the current estimate of $G^t$ contained in $A_i$. Formally, $A_i|t$ is the graph induced by the set of edges whose label contains $t$. As the information about $p_j$'s neighbors in $G^t$ might take many rounds to reach some process $p_i$ (if it ever arrives at $p_i$), $A_i|t$ may never be fully up-to-date, and as only reported edges are added to the estimate (but not all reports need to reach $p_i$), $A_i|t$ will be an under-approximation of $G^t$. For example, a process $p_i$ that does not have any incoming links from other processes throughout the entire run of the algorithm cannot learn anything about the remaining network, i.e., $A_i$ will permanently be the singleton graph.
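The estimate-fusion step can be sketched in Python as follows (a toy model with our own naming, not the paper's pseudocode): each edge carries the set of rounds in which it was reported, merging takes the union of the labels, and $A_i|t$ is obtained by keeping exactly the edges whose label contains $t$.

```python
# Hedged sketch of the network estimate A_i. NetworkEstimate, add_edge, merge
# and restrict_to_round are illustrative names, not from the paper.
class NetworkEstimate:
    def __init__(self, pid):
        self.pid = pid
        # edges: (u, v) -> set of round numbers in which u -> v was observed
        self.edges = {}

    def add_edge(self, u, v, rnd):
        self.edges.setdefault((u, v), set()).add(rnd)

    def merge(self, other):
        # fuse a neighbor's estimate into our own (union of round labels)
        for e, rounds in other.edges.items():
            self.edges.setdefault(e, set()).update(rounds)

    def restrict_to_round(self, t):
        # A_i|t: the edge set of the graph induced by the edges labeled t
        return {e for e, rounds in self.edges.items() if t in rounds}

a = NetworkEstimate(1)
a.add_edge(2, 1, 3)   # p1 saw the edge p2 -> p1 in round 3
b = NetworkEstimate(2)
b.add_edge(3, 2, 3)   # p2 saw the edge p3 -> p2 in round 3
a.merge(b)            # p1 fuses p2's estimate into its own
assert a.restrict_to_round(3) == {(2, 1), (3, 2)}
```

Since merging only ever adds reported edges, `restrict_to_round(t)` can only grow over time towards $G^t$, which is the under-approximation property used below.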
Algorithm 1 finally provides an externally callable function InStableSource($I$), which will be used by the core consensus algorithm to find out whether the calling process $p_i$ was a member of an $I$-VSSC $S$ and to query the set of all members of $S$. We will prove in Lemma 11 below that $p_i$ is a member of an $I$-VSSC if $A_i|t$ is strongly connected and consists of the same non-empty set $S$ of processes for all $t \in I$. Informally, this is due to the fact that the members of an $I$-VSSC will not be able to acquire knowledge of the topology outside $S$ within $I$, as they do not have incoming links from outside.
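The membership test behind InStableSource($I$) can be mimicked as follows (a hedged sketch with illustrative names; `edges_by_round` plays the role of $A_i|\cdot$, the interval is given as a list of rounds, and, for simplicity, a single node without any edges is treated as an empty view here): it returns the common vertex set if every per-round view is strongly connected with the same non-empty vertex set, and the empty set otherwise.

```python
def _vertices(edges):
    return {u for u, v in edges} | {v for u, v in edges}

def _strongly_connected(edges):
    # forward + backward reachability from one root covers all vertices
    verts = _vertices(edges)
    if not verts:
        return False
    adj = {v: set() for v in verts}
    radj = {v: set() for v in verts}
    for u, v in edges:
        adj[u].add(v)
        radj[v].add(u)
    def reach(start, nbrs):
        seen, stack = {start}, [start]
        while stack:
            for w in nbrs[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    root = next(iter(verts))
    return reach(root, adj) == verts and reach(root, radj) == verts

def in_stable_source(edges_by_round, interval):
    views = [edges_by_round.get(t, set()) for t in interval]
    if all(_strongly_connected(e) for e in views):
        sets = [_vertices(e) for e in views]
        if sets and all(s == sets[0] for s in sets):
            return sets[0]
    return set()
```

For example, a 3-cycle that is present in two consecutive estimated rounds is reported as a stable source candidate, while dropping one of its edges in either round makes the call return the empty set.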
We start our analysis of Algorithm 1 with Lemma 10, which shows that $A_i|t$ underapproximates $G^t$ in a way that consistently includes neighborhoods. Its proof uses the trivial invariant asserting $A_i|t = \langle\{p_i\}, \emptyset\rangle$ at the end of every round $r < t$.

Lemma 10. If $A_i|t$ contains $(p_k \to p_\ell)$ at the end of some round $r$, then (i) $(p_k \to p_\ell) \in G^t$, i.e., $A_i|t \subseteq G^t$, and (ii) $A_i|t$ also contains $(p_m \to p_\ell)$ for every $p_m \in N^t_\ell$ in $G^t$.

⁴ We denote the value of a variable $v$ of process $p_i$ at the end of its round $r$ computation as $v^r_i \in s^r_i$; we usually suppress the superscript when it refers to the current round.
⁵ To simplify the presentation, we have refrained from purging outdated information from the network approximation graph. Actually, our consensus algorithm only queries InStableSource for intervals that span at most the last $2E+1$ rounds, i.e., any older information could safely be removed from the approximation graph, resulting in a message complexity that is polynomial in $n$.
Proof. We first consider the case where $r < t$: At the end of round $r$, $A_i|t$ is empty, i.e., there are no edges in $A_i|t$. As the precondition of the lemma's statement is false, the statement is true. For the case where $r \geq t$, we proceed by induction on $r$:

Induction base $r = t$: If $A_i|t$ contains $(p_k \to p_\ell)$ at the end of round $r = t$, it follows from $A_j|t = \langle\{p_j\}, \emptyset\rangle$ at the end of every round $r < t$, for every $p_j \in \Pi$, that $p_\ell = p_i$, since $p_i$ is the only process that can have added this edge to its graph approximation. Clearly, it did so only when $p_k \in N^t_i$, i.e., $(p_k \to p_\ell) \in G^t$, and included also $(p_m \to p_\ell)$ for every $p_m \in N^t_i$ on that occasion. This confirms (i) and (ii).
Induction step $r \to r+1$, $r \geq t$: Assume, as our induction hypothesis, that (i) and (ii) hold for any $A_j|t$ at the end of round $r$, in particular, for every $p_j \in N^{r+1}_i$. If indeed $(p_k \to p_\ell)$ is in $A_i|t$ at the end of round $r+1$, it must be contained in the union of round $r$ approximations and hence in some $A_k|t$ with $k \in \{i, j\}$ at the end of round $r$. Note that the edges (labeled $r+1$) added in round $r+1$ to $A_i$ are irrelevant for $A_i|t$ here, since $t < r+1$.
Consequently, by the induction hypothesis, $(p_k \to p_\ell) \in G^t$, thereby confirming (i). As for (ii), the induction hypothesis also implies that $(p_m \to p_\ell)$ is also in this $A_k|t$. Hence, every such edge must be in the union and hence in $A_i|t$ at the end of round $r+1$, as asserted. □

The following Lemma 11 shows that locally detecting $A_i|t$ to be strongly connected (in line 14 of Algorithm 1) implies that $p_i$ is in the source component of round $t$. This result rests on the fact that $A_i|t$ underapproximates $G^t$ (Lemma 10.(i)), but does so in a way that never omits an in-edge at any process $p_j \in A_i|t$ (Lemma 10.(ii)).

Lemma 11. If the graph $C_i|t$ (line 14) with $t < r$ is non-empty in round $r$, then $p_i$ is a member of $S$, the source component of $G^t$.
Proof. For a contradiction, assume that $C_i|t$ is non-empty (hence $A_i|t$ is an SCC by line 14), but $p_i \notin S$. Since $p_i$ is always included in any $A_i$ by construction and $A_i|t$ underapproximates $G^t$ by Lemma 10.(i), this implies that $A_i|t$ cannot be the source component of $G^t$. Rather, $A_i|t$ must contain some process $p_k$ that has an in-edge $(p_j \to p_k)$ in $G^t$ that is not present in $A_i|t$. As $p_k$, and hence some in-edge of $p_k$, is contained in $A_i|t$ because it is an SCC, Lemma 10.(ii) reveals that this is impossible. □

From the definition of the function InStableSource($I$) in Algorithm 1 and Lemma 11, we get the following Corollary 2.

Corollary 2.
If the function InStableSource($I$) evaluates to $S \neq \emptyset$ at process $p_i$ in round $r$, then $\forall x \in I$ with $x < r$, it holds that $p_i$ is a member of $S$ and $S$ is the source component of $G^x$.
The following Lemma 12 proves that, in a sufficiently long $I = [a, b]$ with an $I$-vertex-stable source component $S$, every member $p_i$ of $S$ detects an SCC for round $a$ (i.e., $C_i|a \neq \emptyset$) with a latency of at most $D$ rounds (i.e., at the end of round $a+D$). Informally speaking, together with Lemma 11, it asserts that if there is an $I$-vertex-stable source component $S$ for a sufficiently long interval $I$, then a process $p_i$ observes $C_i|a \neq \emptyset$ from the end of round $a+D$ on if and only if $p_i \in S$.

Lemma 12. Consider an interval of rounds $I = [a, b]$ with $b \geq a + D$, such that there is a $D$-bounded $I$-vertex-stable source component $S$. Then, from the end of round $a+D$ onwards, we have $C_i|a = S$ for every process $p_i \in S$.
Proof. Consider any $p_j \in S$. At the beginning of round $a+1$, $p_j$ has an edge $(p_k \xrightarrow{T} p_j)$ in its approximation graph $A_j$ with $a \in T$ if and only if $p_k \in N^a_j$. Since processes always merge all graph information from other processes into their own graph approximation, it follows from the definition of a $D$-bounded $I$-vertex-stable source component (Definition 8), in conjunction with the fact that $a + 1 \leq b - D + 1$, that every $p_i \in S$ has these in-edges of $p_j$ in its graph approximation by the end of round $a + 1 + D - 1$. Since $S$ is a vertex-stable source component, it is strongly connected without in-edges from processes outside $S$. Hence $C_i|a = S$ from the end of round $a+D$ on, as asserted. □

Variables and Initialization:
2: $x_i \in \mathbb{N}$, initially own input value
3: $locked_i, decided_i \in \{\text{false}, \text{true}\}$, initially false
4: $lockRound_i \in \mathbb{Z}$, initially 0

Emit round $r$ messages:
5: if $decided_i$ then
6:   send $\langle$decide$, x_i\rangle$ to all neighbors
7: else
8:   send $\langle lockRound_i, x_i\rangle$ to all neighbors

Round $r$ computation:
9: if not $decided_i$ then
10:   if received $\langle$decide$, x_j\rangle$ from any neighbor $p_j$ then
11:     $x_i \leftarrow x_j$
12:     decide on $x_i$ and set $decided_i \leftarrow$ true
13:   else  // $p_i$ only received $\langle lock_j, x_j\rangle$ messages (if any):
 ⋮
16:     if not $locked_i$ then
17:       $locked_i \leftarrow$ true
18:       $lockRound_i \leftarrow r$
19:     else
20:       if InStableSource($[lockRound_i, lockRound_i + E]$) $\neq \emptyset$ then
21:         decide on $x_i$ and set $decided_i \leftarrow$ true

Corollary 3. Consider an interval of rounds $[a, b]$ such that there is a $D$-bounded vertex-stable source component $S$. Then, from the end of round $b$ on, a call to InStableSource($[a, b-D]$) returns $S$ at every process in $S$.
Together, Corollaries 2 and 3 reveal that InStableSource(.) precisely characterizes the caller's actual membership in the $[a, b-D]$-VSSC $S$ of the communication graphs from the end of round $b$ on.

Core consensus algorithm for VSSC$_{D,E}(2D+2E+2)$
As explained in Section 5, the core consensus algorithm stated in Algorithm 2 builds upon the network approximation algorithm given as Algorithm 1: Relying on Corollary 2, every process uses InStableSource provided by Algorithm 1 to detect whether it has been in the vertex-stable source component of some past round(s). Since Corollary 3 reveals that InStableSource has a latency of up to $D \leq E$ rounds for reliably detecting that a process is in the vertex-stable source component of some (interval of) rounds, our algorithm (conservatively) looks back $D$ rounds in the past when locking a value.
In more detail, Algorithm 2 proceeds as follows: Initially, no process has locked a value, that is, $locked_i = \text{false}$ and $lockRound_i = 0$. Processes try to detect whether they are privileged by evaluating the condition in line 15. When this condition is true in some round $\ell$, they lock the current value (by setting $locked_i = \text{true}$ and $lockRound_i$ to the current round), unless $locked_i$ is already true. Note that our locking mechanism does not actually protect the value against being overwritten by a larger value that is also locked in $\ell$; it locks out only those values that have older locks $\ell' < \ell$.
When the process $p_m$ that had the largest value in the source component of round $\ell$ detects that it has been in a vertex-stable source component in all rounds $\ell$ to $\ell + E$ (line 20), it can decide on its current value. As all other processes in that source component must have had $p_m$'s value imposed on them, they can decide as well. After deciding, a process stops participating in the flooding of locked values, but rather (line 6) floods the network with $\langle$decide$, x\rangle$. At the point when the stability window guaranteed by Definition 12 with $d = 2D + 2E + 2$ is large enough to allow every process to receive this message, all processes will eventually decide.

Before we turn our attention to the correctness proof of Algorithm 2, we need to define how the network approximation algorithm and the core consensus algorithm are combined to form a joint algorithm in our computation model. Let m_appr$^{r-1}_i$ be the information to be broadcast by the network approximation algorithm and m_c$^{r-1}_i$ the information to be broadcast by the consensus algorithm in round $r$. Process $p_i$ actually performs the following steps in round $r$: Note carefully that this joint execution scheme implies that when InStableSource() is called in step (iii.2) of the consensus algorithm, the network approximation algorithm is already in the state $s^r_i$ reached at the end of round $r$, so $A_i$ has already been updated with the information received in round $r$. Consequently, according to Corollaries 2 and 3, a call to InStableSource($I$) with $I = [a, b-D]$ by $p_i$ in the computing step at the end of round $b$ (or a later round) returns $S \neq \emptyset$ precisely when an $I$-VSSC $S$ containing $p_i$ existed.
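The lock-and-decide logic described above can be sketched per round as follows (a heavily simplified model of ours, not the paper's Algorithm 2: the privilege test and the $D$-round look-back are abstracted into a boolean oracle `in_stable_source(interval)`, and all names are illustrative):

```python
# One round of the simplified core logic: adopt the value carrying the newest
# lock, lock while detected inside a stable source component, and decide once
# the lock has been observed stable over the whole interval [lock_round, lock_round + E].
def consensus_round(state, r, msgs, in_stable_source, E):
    if state['decided']:
        return
    for m in msgs:
        if m[0] == 'decide':                 # some neighbor already decided
            state['x'], state['decided'] = m[1], True
            return
    for lock_round, x in msgs:               # newest lock wins, ties by value
        if (lock_round, x) > (state['lock_round'], state['x']):
            state['lock_round'], state['x'] = lock_round, x
    if not state['locked'] and in_stable_source([r, r]):
        state['locked'], state['lock_round'] = True, r
    elif state['locked'] and in_stable_source(
            [state['lock_round'], state['lock_round'] + E]):
        state['decided'] = True

state = {'x': 5, 'lock_round': 0, 'locked': False, 'decided': False}
consensus_round(state, 3, [(0, 7)], lambda I: False, 2)
assert state['x'] == 7 and not state['locked']   # adopted, but no stability seen
```

Once the oracle reports stability, the same process first locks and, one round of continued stability later, decides; this mirrors the two-stage lock/decide structure of lines 16-21.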
Our correctness proof starts with the validity property of consensus according to Definition 3.

Lemma 13 (Validity). Every decision value is the input value of some process.
Proof. Processes decide either in line 12 or in line 21. When a process decides via the former case, it has received a $\langle$decide$, x_j\rangle$ message, which is sent by $p_j$ if and only if $p_j$ has decided on $x_j$ in an earlier round. In order to prove validity, it is thus sufficient to show that processes can only decide on some process' input value when they decide in line 21, where they decide on their current estimate $x_i$. Let the round of this decision be $r$. The estimate $x_i$ is either $p_i$'s initial value, or was updated in some round $r' \leq r$ in line 14 from a value received by way of one of its neighbors' $\langle lockRound, x\rangle$ messages.
In order to send such a message, $p_j$ must have had $x_j = x$ at the beginning of round $r'$, which in turn means that $x_j$ was either $p_j$'s initial value, or $p_j$ has updated $x_j$ after receiving a message in some round $r'' < r'$. By repeating this argument, we will eventually reach a process that sent its initial value, since no process can have updated its decision estimate prior to the first round. □

The following Lemma 14 states a number of properties maintained by our algorithm when the first process $p_i$ has decided. Essentially, they say that there has been a vertex-stable source component in the interval centered around the lock round $\ell$ (but not earlier), and assert that all processes in that source component chose the same lock round $\ell$. … at a process $p_j \in \Pi \setminus S$, as $C_j|r = \emptyset$ for any $r \in I$ by Corollary 2, (iv) also holds. □

The following Lemma 15 asserts that if a process decides, then it has successfully imposed its proposal value on all other processes.

Lemma 15 (Agreement). Suppose that process $p_i$ decides in line 21 in round $r$ and that no other process has executed line 21 before $r$. Then, for all $p_j$, it holds that $x^{r-1}_j = x^{r-1}_i$.

Proof. Using items (i) and (iv) of Lemma 14, we can conclude that $p_i$ was in $S$, the vertex-stable source component of rounds $\ell = lockRound^{r-1}_i$ to $\ell + E$, and that all processes in $S$ have locked in round $\ell$. Therefore, in the interval $[\ell, \ell+E]$, $\ell$ is the maximal value of $lockRound$. More specifically, all processes $p_j$ in $S$ have $lockRound_j = \ell$, whereas all processes $p_k$ in $\Pi \setminus S$ have $lockRound_k < \ell$ during these rounds by Lemma 14.(iv). Let $p_m \in S$ have the largest proposal value $x_m = x_{\max}$ among all processes in $S$. Since $p_m$ is in $S$, there is a causal chain of length at most $E$ from $p_m$ to any $p_j \in \Pi$. Note carefully that guaranteeing this property requires item (ii) of Definition 12, as the first decision (in round $r$) need not occur in the eventually guaranteed $(2D+2E+2)$-VSSC but already in some earlier "spurious" VSSC.
Since no process executed line 21 before round $r$, no process will send decide messages in $[\ell, \ell+E]$. Thus, all processes continue to execute the update rule of line 14, which implies that $x_{\max}$ will propagate along the aforementioned causal path to $p_j$. □

Theorem 5 (Consensus under VSSC$_{D,E}(2D+2E+2)$). Let $r_{ST}$ be the beginning of the stability window guaranteed by the message adversary VSSC$_{D,E}(2D+2E+2)$ given in Definition 12. Then, Algorithm 2 in conjunction with Algorithm 1 solves consensus by the end of round $r_{ST} + 2D + 2E + 1$.
Proof. Validity holds by Lemma 13. Considering Lemma 15, we immediately get agreement: Since the first process p i that decides must do so via line 21, there are no other proposal values left in the system.
Observe that, so far, we have not used the liveness part of Definition 12. In fact, Algorithm 2 is always safe in the sense that agreement and validity are not violated, even if there is no vertex-stable source component.
We now show the termination property. By Corollary 3, we know that every process $p_i \in S$ evaluates the predicate InStableSource($[r_{ST}, r_{ST}+1]$) $\neq \emptyset$ in round $\ell = r_{ST} + D + 1$, thus locking in that round. Furthermore, Definition 12 and Corollary 3 imply that, at the latest in round $d = \ell + E + D$, every process $p_i \in S$ will evaluate the condition of line 20 to true and thus decide using line 21. Thus, every such process $p_i$ will send out a message $m = \langle$decide$, x_i\rangle$. By Definition 10 and Definition 12, we know that every $p_j \in \Pi$ will receive a decide message at the latest in round $d + E = \ell + D + 2E = r_{ST} + 2D + 2E + 1$ and decide by the end of this round. □

We conclude our considerations regarding consensus under our eventually stabilizing message adversary VSSC$_{D,E}(d)$ by pointing out that the upper bound $2D + 2E + 2$ and the lower bound $E - 1$ on the stability interval $d$ match up to a small constant factor. Part of our ongoing research is devoted to closing this gap.
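For reference, the round arithmetic of the termination argument above can be summarized as:

```latex
\begin{align*}
\ell  &= r_{ST} + D + 1                       && \text{(every $p_i \in S$ locks)}\\
d     &= \ell + E + D = r_{ST} + 2D + E + 1   && \text{(every $p_i \in S$ decides)}\\
d + E &= r_{ST} + 2D + 2E + 1                 && \text{(every $p_j \in \Pi$ has decided)}
\end{align*}
```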

Impossibilities and lower bounds for k-set agreement
In this section, we turn our attention from consensus to general $k$-set agreement and prove related impossibility results and lower bounds. We accomplish this by showing that certain "natural" message adversaries do not allow solving $k$-set agreement. For example, as excessive partitioning of the system into more than $k$ source components makes $k$-set agreement trivially impossible, one natural assumption is to restrict the maximum number of source components per round in our system to $k$. As for Definition 12, item (ii) has only been added for the sake of the $k$-set agreement algorithm, Algorithm 4; the impossibility results and lower bounds also hold when (ii) is dropped or replaced by something weaker. Note that the message adversary VSSC$_{D,H}(k, 1)$ guarantees at most $k$ VSSCs in every $G^r$, $r > 0$.
We will now prove that it is impossible to solve $k$-set agreement for $1 \leq k < n-1$ under the message adversary VSSC$_{D,H}(k, \min\{n-k, H\} - 1)$, even under the slightly weaker version of this message adversary stated in Theorem 7 below.
We will use the generic impossibility theorem provided in [25,Thm. 1] for this purpose. In a nutshell, the latter exploits the fact that k-set agreement is impossible if k sufficiently disconnected components may occur and consensus cannot be solved in some component.
We first introduce the required definitions: Two executions $\alpha, \beta$ of an algorithm are indistinguishable (until decision) for a set of processes $D$, denoted $\alpha \stackrel{D}{\sim} \beta$, if for any $p_i \in D$ it holds that $p_i$ executes the same state transitions in $\alpha$ and in $\beta$ (until it decides). Now consider a model of a distributed system $M$ that consists of the set of processes $\Pi$, and a restricted model $M'$ that is computationally compatible to $M$ (i.e., an algorithm designed for a process in $M$ can be executed on a process in $M'$) and consists of the set of processes $D \subseteq \Pi$. Let $A$ be an algorithm that works in system $M$, where $M_A$ denotes the set of runs of algorithm $A$ on $M$, and let $D \subseteq \Pi$ be a nonempty set of processes. Given any restricted system $M'$ on $D$, the restricted algorithm $A|D$ for system $M'$ is constructed by dropping all messages sent to processes outside $D$ in the message sending function of $A$. We also need the following similarity relation between runs in computationally compatible systems (cf. [25, Definition 3]): Let $R'$ and $R$ be sets of runs, and $D$ be a non-empty set of processes. We say that runs $R'$ are compatible with runs $R$ for processes in $D$, denoted by $R' \preceq_D R$, if $\forall \alpha \in R' \, \exists \beta \in R : \alpha \stackrel{D}{\sim} \beta$.

Theorem 6 ($k$-Set agreement impossibility [25, Thm. 1]). Let $M$ be a system model and consider the runs $M_A$ that are generated by some fixed algorithm $A$ in $M$, where every process starts with a distinct input value. Fix some nonempty and pairwise disjoint sets of processes $D_1, \ldots, D_{k-1}$, and a set of distinct decision values $v_1, \ldots, v_{k-1}$. …

To prove our theorem for $k > 1$, we will show that the conditions of the generic Theorem 6 are satisfied, thereby providing a contradiction to the assumption that $A$ exists. Since $|D| = n - k + 1$, such a topology (e.g. a chain with head $p$ and tail $q$) can be created by the message adversary VSSC$_{D,H}(\dots)$, which is even stronger than the corresponding message adversary underlying Theorem 4. Hence, consensus is impossible in $D$.

(D) $M_{A|D} \preceq_D M_A$: Fix any run $\rho' \in M_{A|D}$ and consider a run $\rho \in M_A$ where every process in $D$ has the same sequence of state transitions in $\rho$ as in $\rho'$. Such a run $\rho$ exists, since the processes in $D$ can be disconnected from $\Pi \setminus D$ in every round.

Since Theorem 7 tells us that no $k$-set agreement algorithm (for $1 \leq k < n-1$) can terminate with insufficient concurrent stability of the at most $k$ source components in the system, it is tempting to assume that $k$-set agreement becomes solvable if a round exists after which all communication graphs remain the same. However, we will prove in Theorem 8 below that this is not the case for any $1 < k \leq n-1$. We will again use the generic Theorem 6, this time in conjunction with the variant of the well-known impossibility of consensus with lossy links [28,34] provided in Lemma 16, to prove that ensuring at most $k$ different decision values is impossible here, as too many decision values may originate from the unstable period.
Lemma 16. Let $M'$ be the two-process subsystem $\{p_i, p_j\}$ of our system $M$ with process set $\Pi$. If the sequence of communication graphs $G^r$, $r > 0$, of $M'$ is restricted by the existence of a round $r' > 0$ such that (i) for $r < r'$, $(p_i \to p_j) \in G^r$ and/or $(p_j \to p_i) \in G^r$, and no other edges incident to $p_i$ or $p_j$ are in $G^r$, and (ii) for $r \geq r'$, there are no edges incident to $p_i$ and $p_j$ at all in $G^r$, then consensus is impossible in $M'$.
Proof. Up to $r'$, this is ensured by the impossibility of 2-process consensus with a lossy but at least unidirectional link established in [34, Lemma 3]. After $r'$, this result continues to hold (and is even ensured by the classic lossy link impossibility [28]). Hence, consensus is indeed impossible in $M'$. □

Theorem 8. There is no algorithm that solves $k$-set agreement for $n \geq k+1$ processes under the message adversary VSSC$_{D,H}(k, \infty)$, for every $1 < k < n$.
Proof. Suppose again that there is a $k$-set agreement algorithm $A$ that works correctly under the assumptions of our theorem. We restrict our attention to runs of $M_A$ where, until $r_{ST}$, (i) the same set of $k-1$ source components $D_1, \ldots, D_{k-1}$ with $D = \bigcup_{i=1}^{k-1} D_i$ exists in every round, and (ii) two remaining processes $D' = \Pi \setminus D = \{p_1, p_2\}$ exist, which are (possibly only uni-directionally, i.e., via a lossy link) connected in every round, without additional edges to or from $D$. After $r_{ST}$, the communication graph remains the same, except that the processes in $D'$ are disconnected from each other and there is an edge from, say, $p_1$ to some process in $D$ in every round. Note that these runs satisfy Definition 15 for $d = \infty$, as the number of source components never exceeds $k$.
Moreover, we let the adversary choose $r_{ST}$ sufficiently large such that the processes in $D$ have decided. Since the processes in $D_i$ ($0 < i < k$) never receive a message from the remaining system before $r_{ST}$, in which case they must eventually unilaterally decide, we can safely assume this.
We can now again employ the generic impossibility Theorem 6 in this modified setting. The proofs of properties (A), (B) and (D) remain essentially the same as in Theorem 7. It hence only remains to prove: (C) Consensus is impossible in the restricted system on $D'$: This follows immediately from Lemma 16 with $r' = r_{ST}$. □

The following Theorem 9 reveals that even (considerably) fewer than $k$ source components per round before stabilization and a single perpetually stable source component after stabilization are not sufficient for solving $k$-set agreement.

Theorem 9.
There is no algorithm that solves $k$-set agreement for $n \geq k+1$ processes under the message adversary VSSC$_{D,H}(\lceil k/2 \rceil + 1, \infty)$, for every $1 < k < n$, even if $G^r = G$, $r \geq r_{ST}$, where $G$ contains only a single source component.
Proof. We show that, under the assumption that $A$ exists, there is a sequence of communication graphs that is feasible for our message adversary and leads to a contradiction. We choose $x_i = i$ for all $p_i \in \Pi$ and let $D_i = \{p_{1+2i}, p_{2+2i}\}$ for $0 \leq i < \lceil k/2 \rceil - 1$. If $k$ is even, let $D_{\lceil k/2 \rceil - 1} = \{p_{k-1}, p_k\}$; if $k$ is odd, let $D_{\lceil k/2 \rceil - 1} = \{p_k\}$. In any case, let $D_{\lceil k/2 \rceil} = \{p_{k+1}\}$. Finally, let $D' = \{p_{k+2}, \ldots, p_n\}$. Note that $D'$ may be empty, while all $D_i$ are guaranteed to contain at least one process since $n > k$. For all rounds, the processes in $D'$ have an incoming edge from a process in one of the $D_i$.
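To make the index bookkeeping of this partition concrete, it can be computed as follows (our own helper, not part of the paper; processes are represented by their indices):

```python
import math

# Reproduce the partition D_0, ..., D_{ceil(k/2)} and the rest set D' used in
# the proof: pairs {p_{1+2i}, p_{2+2i}}, then a last pair or singleton
# depending on the parity of k, then {p_{k+1}}, then the (possibly empty) rest.
def proof_partition(n, k):
    m = math.ceil(k / 2)
    D = [{1 + 2 * i, 2 + 2 * i} for i in range(m - 1)]
    D.append({k - 1, k} if k % 2 == 0 else {k})  # D_{ceil(k/2)-1}
    D.append({k + 1})                            # D_{ceil(k/2)}
    rest = set(range(k + 2, n + 1))              # D', may be empty
    return D, rest

D, rest = proof_partition(6, 4)
assert D == [{1, 2}, {3, 4}, {5}] and rest == {6}
```

For any $n > k$ this yields exactly $\lceil k/2 \rceil + 1$ non-empty sets $D_i$, matching the source-component bound of the message adversary in Theorem 9.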
We split the description of the adversarial strategy into $\lceil k/2 \rceil + 1$ phases, in each of which we will force some $D_i$ to take $|D_i|$ decisions. To keep the processes $p_{1+2i}, p_{2+2i} \in D_i$ with $|D_i| = 2$ from deciding on the same value before their respective phase $i$, the adversary restricts $G^r$ such that (i) there are no links to $D_i$ from any other $D_j$, and (ii) either the edge $(p_{1+2i} \to p_{2+2i})$ or $(p_{1+2i} \leftarrow p_{2+2i})$ or both are in $G^r$, in a way that causes Lemma 16 to apply. Note carefully that any such $G^r$ indeed has no more than $\lceil k/2 \rceil + 1$ source components.
In the initial phase, $D_{\lceil k/2 \rceil}$ is forced to decide: Since $p_{k+1}$ has no incoming edges from another node in $G^r$, this situation is indistinguishable from a run where $p_{k+1}$ became the single source component after $r_{ST}$. Thus, by the correctness of $A$, $p_{k+1}$ must eventually decide on $x_{k+1} = k+1$. At this point, the initial phase ends, and we can safely allow the adversary to modify $G^r$ in such a way that $p_{k+1}$ has an incoming edge from some other process.
We now proceed with $\lceil k/2 \rceil - 1$ phases: In the $i$th phase, $0 \leq i < \lceil k/2 \rceil - 1$, the adversary drops any link between the processes $p_{1+2i}, p_{2+2i} \in D_i$ (and does not provide an incoming link from any other process, as before) in any $G^r$. Since, for both processes, this is again indistinguishable from the situation where they become the single source component after $r_{ST}$, both will eventually decide in some future round (if they have not already decided). Since the adversary may have chosen a link failure pattern in earlier phases that causes the impossibility (= forever bivalent run) of Lemma 16 to apply, …

Note that Theorem 9 reveals an interesting gap between 2-set agreement and 1-set agreement, i.e., consensus: It shows that 2-set agreement is impossible with $\lceil k/2 \rceil + 1 = 2$ source components per round before and a single fixed source component after stabilization. By contrast, if we reduce the number of source components per round to a single one before stabilization (and still consider a single fixed source component thereafter), even 1-set agreement becomes solvable [35].

Algorithms for k-set agreement
In this section, we provide a message adversary MAJINF($k$) (Definition 19) that is sufficiently weak for solving $k$-set agreement if combined with VSSC$_{D,H}(n, 3D+H)$ (Definition 15). Although we cannot, of course, claim that it is a strongest one in terms of problem solvability (we did not even define what this means), we have some intuition that it is not too far from the solvability/impossibility border.

Set agreement
To illustrate some of the ideas that will be used in our message adversary for general $k$-set agreement, we start with the simple case of $(n-1)$-set agreement (also called set agreement) first. Note that Theorem 7 does not apply here. To circumvent the impossibility result of Theorem 9, it suffices to strengthen the assumption of at most $n-1$ source components in every round such that the generation of too many decision values during the unstable period is ruled out. A straightforward way to achieve this is to just forbid $n$ different decisions obtained in source components consisting of a single process. Achieving this is easy under the $(n-1)$-influence message adversary given in Definition 16, the name of which has been inspired by the $(n-1)$ failure detector [59].

Definition 16 ($(n-1)$-influence message adversary).
The message adversary $(n-1)$-MAJ is the set of all sequences of communication graphs $(G^r)_{r>0}$ that satisfy the following: if each process $p_i \in \Pi$ becomes a single-node source component during a non-empty set of intervals $X_i$, then any selection $\{I_1, \ldots, I_n\}$ with $I_i \in X_i$ for $1 \leq i \leq n$ contains two distinct …

It is easy to devise a set agreement algorithm that works correctly in a dynamic network under Definition 16, provided (a bound on) $n$ is known: In Algorithm 3, process $p_i$ maintains a proposal value $v_i$, initially $x_i$, and a decision value $y_i$, initially $\bot$, which are broadcast in every round. If $p_i$ receives no message from any other process in a round, it decides by setting $y_i = v_i$. If $p_i$ receives a message from some $p_j$ that has already decided ($y_j \neq \bot$), it sets $y_i = y_j$. Otherwise, it updates $v_i$ to the maximum of $v_i$ and all received values $v_j$. At the end of round $n$, a process that has not yet decided sets $y_i := v_i$, and all processes terminate.

Algorithm 3 Set agreement algorithm for message adversary $(n-1)$-MAJ.

Set agreement algorithm, code for process $p_i$:
1: $v_i := x_i \in V$ // initial value
2: $y_i := \bot$

Emit round $r$ messages:
3: send $\langle v_i, y_i \rangle$ to all

Receive round $r$ messages:
4: receive $\langle v_j, y_j \rangle$ from all current neighbors

Round $r$ computation:
 ⋮
7: $y_i := y_j$
8: if $(N_i = \emptyset) \wedge (y_i = \bot)$ then
9:   $y_i := v_i$
10: if $(r = n) \wedge (y_i = \bot)$ then
11:   $y_i := v_i$; terminate

Proof. Termination (after $n$ rounds) and also validity are obvious, so it only remains to show $(n-1)$-agreement. Assume, w.l.o.g., that the processes $p_1, p_2, \ldots$ are ordered according to their initial values $x_{p_1} \leq x_{p_2} \leq \ldots$, and let $R_k$ be the set of different values (in $y_i$ or, if still $y_i = \bot$, in $v_i$) present in the system at the beginning of round $k \geq 1$; $R_1$ is the set of initial values. Obviously, $R_1 \supseteq R_2 \supseteq \ldots$, and since $(n-1)$-agreement is fulfilled if $|R_{n+1}| < n$, we only need to consider the case where all $x_i$ are different.
Consider process $p_1$: If $p_1$ gets a message from some other process $p_j$ in round 1, then $x_1 \notin R_2$, as (i) $p_1$ does not decide on its own value and sets $v_1 \geq v_j \geq x_j > x_1$, and (ii) no process that receives a message containing $x_1$ from $p_1$ takes on this value. Hence, $(n-1)$-set agreement will be achieved in this case. Otherwise, $p_1$ does not get any message in round 1 and hence decides on $x_1$.
Proceeding inductively, assume that each $p_\ell \in P_{i-1} = \{p_1, \ldots, p_{i-1}\}$ has decided on $x_\ell$ by round $\ell$, and received only messages from processes with smaller index in rounds $1, \ldots, \ell-1$ and no message in round $\ell$. Now consider process $p_i$: If $p_i$ gets a message from some process $p_j$ with $j > i$ in some round $k \leq i$, with minimal $k$, before it decides, then $x_i \notin R_{k+1}$, as (i) $p_i$ does not decide on its own value and sets $v_i \geq v_j \geq x_j > x_i$, (ii) $p_i$ did not send its value to any process in $P_{i-1}$ before their decisions, and (iii) no process with index larger than $i$ that receives a message containing $x_i$ from $p_i$ takes on this value. Hence, $(n-1)$-set agreement will be achieved in this case. Otherwise, if $p_i$ gets a message from some process $p_\ell \in P_{i-1}$ in round $i$, it will decide on $p_\ell$'s decision value $x_\ell$ and hence also cause $x_i \notin R_{i+1}$. In the only remaining case, $p_i$ does not get any message in round $i$ and hence decides on $x_i$, which completes the inductive construction of $P_i = \{p_1, \ldots, p_i\}$ for $i < n$.

Now consider $p_n$ in round $n$ in the above construction of $P_n$: Definition 16 prohibits the only case where $(n-1)$-agreement could possibly be violated, namely, when $p_n$ also decides on $x_n$: During the first $n$ rounds, we would have obtained $n$ single-node source components no two of which influence each other in this case. Thus, we cannot extend the inductive construction of $P_i$ to $i = n$, as the resulting execution would be infeasible. □
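A round-based simulation of this algorithm can look as follows (our own sketch, not the paper's pseudocode; the graph model and all names are illustrative, and the adversary shown is just an example sequence, not necessarily one satisfying Definition 16):

```python
# Simulate Algorithm 3 for n = len(inputs) rounds. graphs[r] maps each
# process id to the set of its in-neighbors in round r+1.
def run_set_agreement(inputs, graphs):
    n = len(inputs)
    v = dict(inputs)           # proposal values
    y = {p: None for p in v}   # decision values (None plays the role of bottom)
    for r in range(n):
        sent = {p: (v[p], y[p]) for p in v}   # everyone broadcasts (v_i, y_i)
        for p, in_nbrs in graphs[r].items():
            recv = [sent[q] for q in in_nbrs]
            decided = [yj for _, yj in recv if yj is not None]
            if decided:
                y[p] = decided[0]      # adopt a received decision
            elif not recv and y[p] is None:
                y[p] = v[p]            # no message received: decide own value
            else:
                v[p] = max([v[p]] + [vj for vj, _ in recv])
    for p in v:
        if y[p] is None:
            y[p] = v[p]                # round n: decide on current estimate
    return y

# p1 and p3 never receive messages and decide their own values; p2 keeps
# hearing p3 and ends up with p3's decision, so only 2 <= n-1 values occur.
y = run_set_agreement({1: 1, 2: 2, 3: 3}, [{1: set(), 2: {3}, 3: set()}] * 3)
assert y == {1: 1, 2: 3, 3: 3}
```

The design point illustrated here is that adopting any received decision (or any larger proposal) eliminates at least one value from the system, which is exactly the mechanism driving the inductive argument above.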

A message adversary for general k-set agreement
Whereas the set agreement solution introduced in the previous subsection is simple, it is apparent that Definition 16 is quite demanding. In particular, it requires explicit knowledge of (a bound on) n. We will now provide a message adversary MAJINF(k) (Definition 19), which is sufficient for general k-set agreement if combined with VSSC_{D,H}(k, 3D + H) (Definition 15) and even with VSSC_{D,H}(n, 3D + H). We obtained this combination by adding some additional properties to the necessary network conditions implied by our impossibility Theorems 7 and 9. To avoid non-terminating (i.e., forever undecided) executions as predicted by Theorem 7, we require the stable interval constraint guaranteed by the message adversary VSSC_{D,H}(n, 3D + H) to hold. The parameter D, which can always be safely set to D = n − 1 according to Lemma 1, makes it possible to adapt the message adversary to the actual dynamic source diameter guaranteed in the VSSCs of a given dynamic network. Note that, since D > 0, rounds where no message is received are not forbidden here (in contrast to Definition 16). In order to also circumvent executions violating the k-agreement property established by Theorem 9, we introduce the majority influence constraint guaranteed by the message adversary MAJINF(k) given in Definition 19 below. Like Definition 16 for set agreement, it guarantees some (minimal) information flow between sufficiently long-lasting vertex-stable source components that exist at different times. As visualized in Fig. 3, it implies that the information available in any such I-VSSC, with |I| > 2D, originates in at most k "initial" J-VSSCs, where |J| > 2D. Thereby, it enhances the very limited information propagation that could occur in our model solely under VSSC_{D,H}(k, 3D + H), which is too strong for solving k-agreement.

Fig. 3. VSSCs influencing each other in a run, for k = 2.
Time progresses from left to right; all grey rectangles are stable for more than 2D rounds, white rectangles are stable between D + 1 and 2D rounds. Snaked arrows represent majority influence, thin arrows represent (weak) influence. At most two grey rectangles may exist that are not majority-influenced by any other grey rectangles.
Formally, given some run ρ, we denote by V_d the set of all pairs (S, I), where S is an I-vertex-stable source component with |I| ≥ d in ρ; note that V_d ⊆ V_1 for every d ≥ 1. The majority influence between S and S′ guarantees that S influences a set of nodes in S′ that is greater than any set influenced by VSSCs not already known by the processes in S (and greater than or equal to any set influenced by VSSCs already known by the processes in S). Majority influence is hence a very natural way to discriminate between strong and weak influence between VSSCs. Informally, Definition 19 ensures that all but at most k "initial" I-VSSCs with |I| ≥ 2D + 1 are majority-influenced by some earlier I′-VSSCs with |I′| ≥ 2D + 1 (see Fig. 3). Note carefully, though, that Definition 19 neither prohibits more than k "initial" I-VSSCs with |I| ≤ 2D nor the partitioning of the system into more than k simultaneous VSSCs.
We conclude this section with some straightforward stronger assumptions, which also imply Definition 19 and can hence be handled by the algorithm introduced in Section 7.3: (i) Replacing majority influence in Definition 18 by majority intersection |S ∩ S′| > |S″ ∩ S′| for every other VSSC S″, which is obviously the strongest form of influence.
(ii) Requiring |S ∩ S′| > |S′|/2, i.e., a majority intersection with respect to the number of processes in S′. This could be interpreted as a changing VSSC, in the sense of "S′ is the result of changing a minority of processes in S". Although this restricts the rate of growth of VSSCs in a run, it would apply, for example, in case of random graphs where the giant component has formed [60,61].
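Both stronger variants are easy to state as predicates on the member sets. The following sketch is illustrative only (function names are ours), with S and S2 the vertex sets of the two VSSCs and others the vertex sets of all competing VSSCs:

```python
# Illustrative predicates for the two stronger, intersection-based variants.

def majority_intersection(S, S2, others):
    """Variant (i): S's overlap with S2 strictly beats every competitor's."""
    return all(len(S & S2) > len(T & S2) for T in others)

def changing_vssc(S, S2):
    """Variant (ii): more than half of S2's members stem from S."""
    return len(S & S2) > len(S2) / 2
```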

Gracefully degrading consensus/k-set agreement
In this section, we provide a k-set agreement algorithm and prove that it works correctly under the message adversary VSSC_{D,H}(n, 3D + H) + MAJINF(k), i.e., the conjunction of Definitions 15 and 19. Note that the algorithm needs to know D, but neither n nor H. It consists of a "generic" k-set agreement algorithm, which relies on the network approximation algorithm of Section 5.1 for locally detecting vertex-stable source components, and a function GetLock that extracts candidate decision values from history information. Our implementation of GetLock uses a vector-clock-like mechanism for maintaining "causally consistent" history information, which can be guaranteed to lead to proper candidate values thanks to VSSC_{D,H}(n, 3D + H) + MAJINF(k).
In sharp contrast to classic k-set agreement algorithms, the algorithm is k-universal, i.e., the parameter k does not appear in its code. Rather, the number of system-wide decision values is determined by the number of (certain) 2D + 1-VSSCs occurring in the particular run. As a consequence, if the network partitions into k weakly connected components, for example, all processes in a component obtain the same decision value. On the other hand, if the network remains well-connected, the algorithm guarantees a unique decision value system-wide.
Properties. Our algorithm is in fact not only k-universal but even worst-case k-optimal, in the sense (i) that it provides at most k decisions system-wide in all runs that are feasible for VSSC_{D,H}(n, 3D + H) + MAJINF(k), and (ii) that there is at least one feasible run under VSSC_{D,H}(n, 3D + H) + MAJINF(k) where no correct k-set agreement algorithm can guarantee fewer than k decisions. (i) will be proved in Section 7.4, and (ii) follows immediately from the fact that a run consisting of k isolated partitions is also feasible for VSSC_{D,H}(n, 3D + H) + MAJINF(k). Our algorithm can hence indeed be viewed as a consensus algorithm that degrades gracefully to k-set agreement, for some k determined by the actual network properties.
Network approximation. Like the consensus algorithm in Section 5, our k-set agreement algorithm consists of two reasonably independent parts, the network approximation algorithm Algorithm 1 and the k-set agreement core algorithm given in Algorithm 4. As in Section 5.2, we assume that the complete round r computing step of the network approximation algorithm is executed just before the round r computing step of the k-set algorithm, and that the round r message of the former is piggybacked on the round r message of the latter. Recall that this implies that the round r computing step of the k-set core algorithm, which terminates round r, can already access the result of the round r computation of the network approximation algorithm, i.e., its state at the end of round r.
Core algorithm. The general idea of our core k-set agreement algorithm in Algorithm 4 is to generate new decision values only at members of 2D + 1-VSSCs, and to disseminate those values throughout the remaining network. Using the network approximation A_i, our algorithm causes process p_i to make a transition from the initially undecided state to a locked state when it detects some minimal "stability of its surroundings", namely, its membership in some D + 1-VSSC D rounds in the past (line 17). Note that the latency of D rounds is inevitable here, since information propagation within a D + 1-VSSC, which is D-bounded as guaranteed by item (ii) in Definition 15, may take up to D rounds. If process p_i, while in the locked state, observes some period of stability that is sufficient for locally inferring a consistent view among all VSSC members (which occurs when the D + 1-VSSC has actually extended to a 2D + 1-VSSC), p_i can safely make a transition to the decided state (line 24). The decision value is then broadcast in all subsequent rounds, and adopted by any not-yet decided process in the system that receives it later on (line 9). Note that VSSC_{D,H}(n, 3D + H) (Definition 15) guarantees that this will eventually happen.
Since locking is done optimistically, however, it may also happen that the D + 1-VSSC does not extend to a 2D + 1-VSSC (or, even worse, is not recognized to have done so by some members) later on. In this case, p_i makes a transition from the locked state back to the undecided state (line 22). Unfortunately, this possibility has severe consequences: Mechanisms are required that, despite possibly inconsistently perceived unsuccessful locks, ensure both (a) an identical decision value among all members of a 2D + 1-VSSC who successfully detect this 2D + 1-VSSC and thus reach the decided state, and (b) no more than k different decision values originating from different 2D + 1-VSSCs.

Algorithm 4: k-universal k-set agreement algorithm, code for process p_i.
Receive round r messages:
6: for all p_j in p_i's neighborhood N_i^r, receive ⟨hist_j, decision_j⟩
Round r computation:
7: if decision_i = ⊥ then
8:   if received any message m containing m.decision ≠ ⊥ then
9:     decide m.decision and set decision_i := m.decision
10:  else // update hist_i with hist_j received from neighbors
11:    for p_j ∈ N_i^r, where p_j sent hist_j do
12:      oldhist_i := hist_i // remember current history
13:      for all non-empty entries hist_j[x][r′] of hist_j, x ≠ i do
 ⋮
29:  let v be s.v of the single element s ∈ mfrq_latest(R)
30:  newLock := (S, v, r)
31: else
32:  newLock := (S, max_{s∈R}{s.v}, r) // deterministic choice
33: return newLock

Both goals are accomplished by a particular selection of the decision values (using function GetLock), which ultimately relies on an intricate utilization of the network properties guaranteed by our message adversary VSSC_{D,H}(n, 3D + H) + MAJINF(k) (Definitions 15 and 19): Our algorithm uses a suitable lock history data structure for this purpose, which is continuously exchanged and updated among all reachable processes.
It is used to store sets of locks L = (S, v, τ_create), which are created by every process that enters the locked state: S is the vertex set of the detected D + 1-VSSC, v is a certain proposal value (determined as explained below), and τ_create is the round when the lock is created.
Maintaining history. In more detail, the lock history at process p_i consists of an array hist_i[j][r] that holds p_i's (under)approximation of the locks process p_j got to know in round r. It is maintained using simple update rules, whereby p_i records every lock it creates itself and every lock it newly learns from a received history hist_j in the corresponding entry. Hence, if p_i creates a new lock L when it detects, in its round r computing step, that it was part of a D + 1-VSSC that was stable from r − 2D to r − D, it is ascertained that any other member p_j will have locally learned the same lock L in the same round r, provided that the D + 1-VSSC in fact extended to a 2D + 1-VSSC.
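The bookkeeping just described can be sketched as follows. Since the exact update rules of Algorithm 4 are not reproduced here, this is an illustrative reconstruction, not the verbatim algorithm: every entry received from a neighbor is merged in, and locks that are new to p_i additionally appear in p_i's own row for the current round. All names are ours; a lock is a hashable tuple (S, v, tau_create).

```python
# Sketch of the lock-history merge: hist[j][r] is the set of locks that,
# to the local process's knowledge, p_j got to know in round r.

def merge_history(hist_i, i, hist_j, known_i, r):
    """Merge neighbor history hist_j into hist_i at process p_i in round r."""
    received = set()
    for j, per_round in hist_j.items():
        for rr, locks in per_round.items():
            hist_i.setdefault(j, {}).setdefault(rr, set()).update(locks)
            received |= locks
    newly_learned = received - known_i
    if newly_learned:
        # record locks p_i newly got to know in its own row for round r
        hist_i.setdefault(i, {}).setdefault(r, set()).update(newly_learned)
        known_i |= newly_learned
    return hist_i, known_i
```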

Consistent decisions.
The resulting consistency of the histories is finally exploited by the function GetLock(S, ℓ), which computes (the value of) a new local lock (line 19) created in round r. As its input parameters, it is provided with the members S of the detected D + 1-VSSC and its starting round ℓ = r − 2D. GetLock first determines a multiset R, which contains all locks locally known to the members p_j ∈ S by round r − 2D (line 26). Note that the multiplicity of some lock L = (S′, v′, r′) in R is just the number of members of S who got to know L by round r − 2D, which is just |CS(S′, S)| according to Definition 17. In order to determine a proper value for the new lock to be computed by GetLock, we exploit the fact that MAJINF(k) (given in Definition 19) ensures majority influence according to Definition 18: If the set mfrq_latest(R), containing the most frequent locks in R with the same maximal lock creation round, contains a single lock L only, its value L.v is used. Note that the restriction to the maximal lock creation date automatically filters unwanted, outdated locks that have merely been disseminated in preceding 2D + 1-VSSCs, see (1) below. Otherwise, i.e., if mfrq_latest(R) contains multiple candidate locks, a consistent deterministic choice, namely, the maximum among all lock values in R, is used (line 32). As a consequence, at most k different decision values will be generated system-wide.
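One plausible reading of this selection rule can be sketched as follows. This is an illustration, not the verbatim GetLock of Algorithm 4; in particular, mfrq_latest is paraphrased from the text as: restrict first to the most frequent locks in R, then to those with the maximal creation round among them.

```python
from collections import Counter

# Sketch of the value selection in GetLock. R is the multiset of locks known
# to the VSSC members, each lock a hashable tuple (S, v, tau_create).

def select_value(R):
    counts = Counter(R)
    max_mult = max(counts.values())
    most_frequent = [L for L, c in counts.items() if c == max_mult]
    latest = max(L[2] for L in most_frequent)
    mfrq_latest = [L for L in most_frequent if L[2] == latest]
    if len(mfrq_latest) == 1:
        return mfrq_latest[0][1]         # unique candidate: use its value L.v
    return max(v for (_, v, _) in R)     # line 32: deterministic choice
```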
Given the various mechanisms employed in our algorithm and their complex interplay, the question of a more light-weight alternative solution that omits some of these mechanisms might arise. We will proceed with some informal arguments that support the necessity of some of the pillars of our solution, namely, (1) the preference of most recently created locks in GetLock, (2) the creation of a new lock at every transition to the locked state, and finally (3) the usage of an a priori unbounded data structure hist_i. Although these arguments are also "embedded" in the correctness proof in the following section, they do not immediately leap to the eye and are hence provided explicitly here.
(1) The preference of most recently created locks in GetLock, which is done by selecting the set mfrq_latest(R) in line 28, defeats the inevitable "amplification" of the number of processes that got to know some "old" lock: All members of a 2D + 1-VSSC have finally learned all "old" locks that were initially, at the starting round of the VSSC, only known to some of its members. In terms of multiplicity in R, this would falsely make any such old lock a preferable alternative to the most recently created lock.
(2) Instead of creating new locks at every newly detected D + 1-VSSC, it might seem sufficient to simply update the creation time of an old lock that (dominantly) influences a newly detected VSSC. This is not the case, however: Consider a hypothesized algorithm where new locks are only generated if no suitable old locks can be found in the current history, and assume a run where two VSSCs with vertex sets S_1 = {p_1, p_2} and S_3 = {p_1, p_2} that are both stable for D + 1 rounds and two source components S_2 = {p_1, p_3} and S_4 = {p_1, p_3} that are stable for 2D + 1 rounds are formed. Let these VSSCs be such that S_i is formed before S_j if i < j, and let there be no influence among the processes of {p_1, p_2, p_3}, apart from their influence on each other when they are members of the same VSSC. First, let the processes of S_1 lock on some old lock L′. Then, assume that the processes of S_2 lock on some lock L ≠ L′, a lock not known in S_1. Since S_3 = {p_1, p_2}, if S_3 is sufficiently well connected, p_1 might lock on L′ in S_3, because L′ is known to both p_1 and p_2 while L is known merely to p_1 at the start of S_3. Subsequently, this results in the situation in S_4 where there is neither a clear majority (L and L′ are known to both members of S_4) nor a clear most recently adopted lock (for p_1, it seems that L′ is the most recent lock, while for p_3, it seems that L is more recent). Consequently, in S_4, it is not clear whether to lock on L.v or on L′.v. Nevertheless, the processes of S_4 should be able to determine that they must lock on L and not on L′, since S_2 →_m S_4 holds in our example: |CS(S_1, S_2)| = 1, |CS(S_1, S_4)| = 2, |CS(S_2, S_4)| = 2 and |CS(S_3, S_4)| = 1. We can therefore conclude that merely adopting old locks is insufficient.
(3) Since the stabilization round r_ST, as implied by Definition 15, may be delayed arbitrarily, an unbounded number of 2D + 1-VSSCs can occur before r_ST. Since any of those might produce a critical lock, in the sense of exercising a majority influence upon some later 2D + 1-VSSC, no such lock can safely be deleted from hist_i of any p_i after bounded time.

Correctness proof
In this final subsection, we prove Theorem 11. The proof consists of a sequence of technical lemmas, which will allow us to establish all the properties of k-set agreement given in Section 3. First, validity according to Definition 4 is straightforward to see, as only the values of locks are ever considered as decisions (line 24). Values of locks, on the other hand, are initialized to the initial value of a process (line 2) and later on always have values of previous locks assigned to them (lines 30 and 32). Note that the claimed k-universality is obvious, as the code of the algorithm does not involve k.
To establish termination, we start with Lemmas 18 to 20 that are related to setting locks at all members of vertex stable source components.

Lemma 18. Apart from processes adopting a decision sent by another process, only processes part of a vertex stable source component with interval length greater than D (resp. 2D) lock (resp. decide).
Proof. The if-statement in line 17 (resp. line 23) is evaluated to true only if InStableSource detects a stable member set S in some interval I of length D + 1 (resp. of length 2D + 1) or larger, which implies by Corollary 2 that S is indeed an I-VSSC with |I| = D + 1 (resp. |I| = 2D + 1). □

Lemma 19. All processes part of an I-VSSC S with I = [a, b] and |I| > 2D, which did not start already before a, lock, i.e., set ℓ := a, in round a + 2D.
Proof. Because S is D-bounded by Definition 15, Corollary 3 guarantees that InStableSource(a, a + D) returns S from round a + 2D (of the k-set algorithm) on, and that it cannot have done so already in round a + 2D − 1. Hence, ℓ = ⊥ holds in round a + 2D, the if-statement in line 17 is entered, and ℓ := a is set in line 19. □

Lemma 20. All processes part of an I-VSSC S with I = [a, b] and |I| > 3D, which did not start already before a, have decided by round a + 3D.
Proof. It follows from Lemma 19 that all members of the VSSC S set ℓ := a in round a + 2D. As the VSSC remains stable also in rounds a + 2D, …, a + 3D, line 22 will not be executed in these rounds, thus ℓ = a remains unchanged. Consequently, due to Corollary 3, the if-statement in line 23 will evaluate to true at the latest in round ℓ + 3D = a + 3D, causing all the processes to decide via line 24 by round a + 3D as asserted. □

Lemma 21. The algorithm eventually terminates at all processes.
Proof. Pick any process p_j. If p_j is part of a source component during the stable interval guaranteed by Definition 15, Lemma 20 ensures termination by r_ST + 3D at the latest. Thus, we assume p_j is not part of a source component during the stable interval. From Definition 10, it follows that there exists a causal chain of length at most H to p_j from some member p_i of a VSSC after its termination. Therefore, it must receive the decide message and decide via line 9 by r_ST + 3D + H at the latest. □

Although we now know that all members of a VSSC that is vertex-stable for at least 3D rounds will decide, we did not prove anything about their decision values yet. In the sequel, we will prove that they decide on the same value.

Proof. Such a lock is created by p_i ∈ S in round a + 2D, when it recognizes S as having been vertex-stable for D + 1 rounds according to Lemma 19. As the lock (value) is computed based on hist_i present in round a + 2D, which is consistent among all VSSC members by Lemma 22, the lemma follows. □

Finally, we show that, given that the system satisfies Definition 19, there will be at most k decision values in any run of Algorithm 4, which proves k-agreement: Since there are at most k VSSCs of V_{2D+1} that are not majority-influenced by other VSSCs, it remains to show that any majority-influenced VSSC decides the same as the VSSC it is majority-influenced by. In order to do so, we will first establish a key property of our central data structure hist_i.

Proof. From the definition of →_m (Definition 18), it follows that no I′-VSSC of V_{D+1} has a larger influence set on S than S′. By Lemma 18, this implies that no lock that was generated by some I′-VSSC in V_{D+1} can be known to more members of S than the lock L generated by S′. Since process p_i puts only newly learned locks into hist_i (lines 15 and 20), by Lemma 24, this means that in round a no "bad" lock L_b is present in more elements of R = ⋃_{p_i ∈ S, r′ ≤ a} hist_i[i][r′] than L.
We now show that L.τ_create > L_b.τ_create for all L_b ≠ L occurring in as many elements of R as L. Obviously, the only locks L_b that could occur in as many elements of R as L are locks that have been in hist_i of some p_i ∈ S at the beginning of round a already. Since for any such L_b, L was created after L_b, by lines 30 and 32, we have that L.τ_create > L_b.τ_create, as claimed. This completes the proof of Theorem 11. □

Failure detectors
A convenient way to characterize consensus and k-set agreement solvability in distributed systems where processes are (usually) subject to crash failures are failure detectors [63]. Well-known results for message passing systems where a majority of processes may crash are the weakest failure detector (Ω, Σ) (defined below) for consensus [64], and the necessary failure detector Σ_k (Σ = Σ_1) for k-set agreement [59]. Note that, whereas the weakest failure detector for k-set agreement in message passing systems is still unknown, there are failure detectors like L_k [45] that are sufficiently strong for this purpose.
These results imply that, from any solution that solves consensus resp. k-set agreement, it must be possible to implement (Ω, Σ) resp. Σ_k. Conversely, if Ω and Σ can be implemented in some system, then there are well-known algorithms for solving consensus in this system.
In this section, we follow the example of [23] and explore the relation between our message adversaries and the above failure detectors. It is important to note, though, that both VSSC_{D,E}(d) and VSSC_{D,H}(n, d) + MAJINF(k) are inherently incompatible with time-free failure detectors, as they involve explicit timing information, namely, the duration of the stability window. By contrast, the specifications of Ω, Σ and Σ_k are time-free, in the sense that they only involve eventual properties for liveness. Therefore, we will consider only the eventually-forever variants VSSC_{D,E}(∞) and VSSC_{D,H}(n, ∞) + MAJINF(k) of our message adversaries in the comparison below.

Failure detector basics
We recall that a crash failure means that a faulty process may stop performing any computation steps after some point during an execution, possibly in a way that causes only a subset of the processes to receive the message of its last broadcast.
Given the time domain T of some system where processes are prone to crashes, for some given run, the function F : T → 2^Π that maps each t ∈ T to the set of processes that have crashed by t is called the failure pattern of the run. Processes in the set C = Π \ ⋃_{t∈T} F(t) are called correct.
In the case of a synchronous model with lock-step rounds, T = ℕ. Let AMP_{n,x} denote the asynchronous message passing model where up to x out of the overall n = |Π| processes may crash; x = n − 1 characterizes the wait-free model. In AMP_{n,x}, messages are delivered after a finite but unbounded time and processes do not operate in lock-step, hence T = ℝ.
A failure detector is an oracle that can be queried by any process. Formally, a history H with range R is a function H : Π × T → R. A failure detector D with range R maps a non-empty set of histories with range R to each failure pattern.
Two important failure detectors for consensus are Ω and Σ.
Definition 20. The eventual leader failure detector Ω has range Π. For each failure pattern F, for every history H ∈ Ω(F), there is a time t ∈ T and a correct process p_j s.t., for every process p_i and every t′ ≥ t, H(p_i, t′) = p_j.

In order to relate such failure detector models to our message adversaries, which model dynamic link failures, we use the simple observation that the externally visible effect of a process crash can be expressed in our setting: Since correct processes in asynchronous message passing systems perform an infinite number of steps, we can assume that they send an infinite number of (possibly empty) messages that are eventually received by all correct processes. As in [23], we hence assume that the correct (= non-crashing) processes in the simulated AMP are the strongly correct processes. Informally, a strongly correct process is able to disseminate its state to all other processes infinitely often.
Hence, we can define correct resp. faulty processes in our directed dynamic network model as follows: Given an infinite sequence of communication graphs σ, process p_i is faulty in a run with σ if there is a round r s.t., for some process p_j, for all r′ > r: s_i^r ̸⇝ s_j^{r′}.
Let C(σ) = {p_i ∈ Π | ∀p_j ∈ Π, ∀r ∈ ℕ, ∃r′ > r : s_i^r ⇝ s_j^{r′}} denote the set of strongly correct (= non-faulty) processes in any run with σ.
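Causal influence s_i^r ⇝ s_j^{r′} over a finite prefix of a graph sequence can be computed by simple forward reachability. The following sketch (names and encoding are ours) assumes graphs[t][q] is the in-neighborhood of p_q in round t + 1 and that every process always keeps its own state:

```python
# Sketch: does the state of p_i at the end of round r causally reach p_j
# by the end of round r2?

def influences(graphs, i, r, j, r2):
    n = len(graphs[0])
    reached = {i}                            # processes already influenced
    for t in range(r, r2):                   # rounds r+1 .. r2
        reached |= {q for q in range(n) if reached & set(graphs[t][q])}
    return j in reached
```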
If a given process influenced just one strongly correct process infinitely often, it would transitively influence all processes in the system, and hence would also be strongly correct. Therefore, in order not to be strongly correct, a faulty process must not influence any strongly correct process infinitely often. Failure patterns can hence be defined accordingly.

Proof. Every process p_i executes a simulator, which invokes the steps of the simulated process as follows: The simulator keeps track of all messages sent by the simulated process so far, and adds this history to every simulation message it sends. Consequently, any message sent by a strongly correct process in a run under CRASH(x) is eventually delivered to all other processes. To ensure that this is also true for all the messages sent by not strongly correct processes, a process p_i that has sent message (m, i, j) to p_j in its last simulated step is allowed to take its next simulated step only if (m, i, j) is already known to (the simulator of) at least n − x processes. If this never becomes true, the simulated p_i does not execute further steps, i.e., is deliberately "crashed" by the simulation. Since x < n/2, there is always at least one strongly correct process among the n − x processes that know (m, i, j), which eventually disseminates this message to all processes in the system as needed.
Hence, it only remains to prove that the resulting simulation is consistent, i.e., that the simulated (non-atomic) send and receive operations are linearizable: Let t_j be the time (round) when the simulated process p_j is about to make the step where (m, i, j) is processed, with t_j = ∞ if this is never the case (the simulator at p_j never comes to know this message). Moreover, let t_i be the time when the simulated process p_i is about to perform the next step after having sent (m, i, j), with t_i = ∞ if it never executes this next step (because it is "crashed"). The send operation of (m, i, j) is then linearized accordingly. Since this linearization ensures the proper send–receive order, every run of this simulation in SMP_n[adv: CRASH(x)] is indeed indistinguishable for all processes from a run in AMP_{n,x}[fd: ∅]. □

Consensus
We are now ready to explore the relation of our consensus message adversary VSSC_{D,E}(∞) to Ω and Σ: It will turn out that Ω can be implemented under VSSC_{D,E}(∞), but Σ cannot.

Proof. For simplicity, we will restrict our attention to the case n = 2; extending the proof to arbitrary n is straightforward.
Suppose some algorithm A solves consensus under this adversary. By termination and validity, there is some round τ where A lets p_i decide x_i in a run ε starting from some initial configuration C_0 with the graph sequence σ = (p_i → p_j)_{r>0}. Similarly, in the run ε′ that also starts from C_0 using σ′ = (p_i ← p_j)_{r>0}, A will eventually let p_j decide x_j. Now consider the run ε″ also starting from C_0 with sequence σ″ = (p_i ⇸ p_j)_{r=1}^{τ} (p_i ← p_j)_{r>τ}, where (p_i ⇸ p_j)_{r=1}^{τ} means that no message is successfully delivered in either direction in the first τ rounds. Clearly, until round τ, p_i will have exactly the same view in the run ε and in the run ε″, denoted ε ∼_{p_i} ε″, thus p_i decides x_i in the run ε″. Similarly, ε′ ∼_{p_j} ε″ until τ, so p_j decides x_j in this run. Because σ, σ′, σ″ ∈ VSSC-PART_{D,E}(∞), this contradicts the assumption that A solves consensus under this message adversary. □

However, the following lemma shows that VSSC-PART_{D,E}(∞) allows implementing Ω.

Proof. Consider an algorithm that outputs, at process p_i, the process with the largest identifier in the source component that was detected E rounds ago, or p_i itself if no such source component was detected. Clearly, this output is in the range of Ω. Furthermore, since VSSC-PART_{D,E}(∞) guarantees that eventually some D-bounded, E-influencing source component S remains the only VSSC forever, S will eventually be detected by every process p_i forever, and its member with the largest identifier will be written to the output of p_i eventually forever as well. By Definition 22, no process of S is faulty, hence the specification of Ω is satisfied.
To simulate AMP with process crashes, exactly the same simulation as in [23, Sec. 4.2] is used: Analogous to the simulation used in the proof of Corollary 4, a simulated process is only allowed to take its next step if all the messages sent in the previous step are already known by the simulator of the current output of Ω, which (eventually) will be a strongly correct process. □

We will now turn our attention to Σ: The following theorem shows that Σ cannot be implemented atop of VSSC_{D,E}(∞).

Proof. Again, we will prove our lemma for n = 2 for simplicity, as it is straightforward to generalize the proof to arbitrary n. Suppose that, for all rounds r and all processes p_i, some algorithm A computes out(p_i, r) s.t., for any admissible failure pattern F, out ∈ Σ(F). Consider the graph sequence σ = (p_i → p_j)_{r≥1}. Clearly, the failure pattern associated with σ is F_σ(r) = {p_j}. Hence, in the run ε starting from some initial configuration C_0 with sequence σ, there is some round r s.t. out(p_i, r″) = {p_i} for any r″ ≥ r by Definition 21. Let σ′ = (p_i → p_j)_{r″=1}^{r} (p_i ← p_j)_{r″>r}. By similar arguments as above, in the run ε′ that starts from C_0 with sequence σ′, there is a round r′ such that out(p_j, r″) = {p_j} for any r″ ≥ r′. Finally, for σ″ = (p_i → p_j)_{r″=1}^{r} (p_i ← p_j)_{r″=r+1}^{r′} (p_i ↔ p_j)_{r″>r′}, let ε″ denote the run starting from C_0 with graph sequence σ″. Until round r, ε″ ∼_{p_i} ε, hence, as shown above, out(p_i, r) = {p_i} in ε″. Similarly, until round r′, ε″ ∼_{p_j} ε′ and hence out(p_j, r′) = {p_j} in ε″. Clearly, σ, σ′, σ″ ∈ VSSC_{D,E}(∞) and F_{σ″}(r) = ∅, that is, no process is faulty in σ″. However, in ε″, out(p_i, r) ∩ out(p_j, r′) = ∅, a contradiction to Definition 21. □

The above result may come as a surprise, since the proof of the necessity of Σ_k for k-set agreement (hence the necessity of Σ = Σ_1 for consensus) developed by Raynal et al. [59] only relies on the availability of a correct k-set agreement algorithm.
However, their reduction proof works only in AMP_{n,n−1}, i.e., crash-prone asynchronous message passing systems: It relies crucially on the fact that there is no safety violation (i.e., a decision on a value that eventually leads to a violation of k-agreement) in any prefix of a run. This is not the case in SMP_n, however, as processes may decide after a certain number of rounds even if no message is received. Hence, we cannot reuse their proof in our setting.
Taken together, Lemmas 26, 27, and 28 allow us to conclude the following: (i) Since VSSC_{D,E}(∞) (not to speak of VSSC_{D,E}(d), which is not compatible with failure detector specifications) does not allow implementing (Ω, Σ), we cannot derive consensus algorithms from (Ω, Σ)-based solutions. And indeed, our consensus algorithm (Algorithm 2) is algorithmically very different.
(ii) The message adversaries SOURCE and QUORUM considered in [23], which allow implementing (Ω, Σ), are equivalent to VSSC_{D,E}(∞) in terms of consensus solvability, but strictly weaker in terms of sequence inclusion, i.e., (SOURCE, QUORUM) ⊂ VSSC_{D,E}(∞).

k-Set agreement
We start with the definitions of generalized failure detectors for the k-set agreement setting in crash-prone asynchronous message passing systems, using the notation introduced in Section 8.1.

Definition 26. The range of the failure detector Ω_k is the set of all k-subsets of Π. For each failure pattern F, for every history H ∈ Ω_k(F), there exist LD = {q_1, …, q_k} ∈ 2^Π and t ∈ T such that LD ∩ C ≠ ∅ and, for all t′ ≥ t and p_i ∈ C: H(p_i, t′) = LD.
Definition 27. The failure detector Σ_k has range 2^Π. For each failure pattern F and every H ∈ Σ_k(F), two properties must hold: (1) for every t, t′ ∈ T and every S ⊆ Π with |S| = k + 1, there exist distinct p_i, p_j ∈ S such that H(p_i, t) ∩ H(p_j, t′) ≠ ∅; (2) there is a time t ∈ T such that, for every process p_i and every t′ ≥ t: H(p_i, t′) ⊆ C.
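To illustrate property (1) of Definition 27, the following sketch (our own illustration; the function name and data layout are ad hoc, not part of the model) checks the intersection requirement of Σ_k on a finite sample of quorum outputs: among any k + 1 processes, at least two of their quorums, possibly output at different times, must intersect.

```python
from itertools import combinations

def satisfies_sigma_k_intersection(quorums, k):
    """Check property (1) of Sigma_k on a finite sample of histories.

    quorums maps each process to the list of quorums (sets) it output
    at various times; among any k+1 processes, two must have output
    intersecting quorums at some pair of times.
    """
    for group in combinations(quorums, k + 1):
        intersecting = any(q1 & q2
                           for pi, pj in combinations(group, 2)
                           for q1 in quorums[pi]
                           for q2 in quorums[pj])
        if not intersecting:
            # k+1 processes with pairwise disjoint quorums violate Sigma_k
            return False
    return True

# With k = 2, three processes whose quorums are pairwise disjoint
# (as in a perpetually partitioned system) violate the property:
partitioned = {1: [{1}], 2: [{2}], 3: [{3}]}
overlapping = {1: [{1, 2}], 2: [{2, 3}], 3: [{3}]}
```

Here `satisfies_sigma_k_intersection(partitioned, 2)` is False, while the overlapping sample passes, mirroring the role the intersection property plays in the impossibility arguments below.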
k-set agreement in our lock-step round model with link failures allows non-temporary partitioning, which in turn makes it impossible to use the definition of crashed and correct processes from the previous section: In a partitioned system, every process p i has at least one process p j such that ∀r > r : s r i ; s r j , but no p i usually reaches all p j ∈ here. Definition 22 hence implies that there is no correct process in this setting. Hence, we employ the following generalized definition: Definition 28. Given a infinite graph sequence σ , let a minimal source set S in σ be a set of processes with the property that ∀p j ∈ , ∀r > 0 there exists p i ∈ S, r > r such that s r i ; s r j . The set of weakly correct processes WC(σ ) of a sequence σ is the union of all minimal source sets S in σ .
This definition is a quite natural extension of correct processes to a model that allows perpetual partitioning of the system. In particular, it is not difficult to show that WC(σ) ≠ ∅ for every admissible σ.

Based on this definition of weakly correct processes, it is possible to generalize some of our consensus-related results (obtained for Σ and Ω). First, we show that Σ_k cannot be implemented, since VSSC_{D,H}(n, ∞) + MAJINF(k) allows the system to partition into k isolated components: The impossibility can be expanded to k > 1 by choosing some σ that (i) perpetually partitions the system into k components P = {P_1, . . . , P_k} that each have a single source component and consist of the same processes throughout the run, and (ii) eventually demands a vertex-stable source component in every partition forever. Pick an arbitrary partition P ∈ P.
If |P| > 1, such a sequence does not allow implementing Σ in P (e.g., the message adversary could emulate the graph sequence used in Lemma 28 within P). We hence know that ∃p, p′ ∈ P and ∃r, r′ such that out(p, r) ∩ out(p′, r′) = ∅. Furthermore, and irrespective of |P|, since for every p_i ∈ P it is indistinguishable whether any p_j ∈ Π \ P is faulty in σ or not, p_i has to assume that every process p_j ∈ Π \ P is faulty. Hence, for every p_i ∈ P, we must eventually have out(p_i, r_i) ⊆ P for some sufficiently large r_i.
We now construct a set S of k + 1 processes that violates Definition 27: Fix some P ∈ P with |P| > 1 and add the two processes p, p′ ∈ P described above to S. For every partition P_j ∈ P \ {P}, add one process p_i from P_j to S. Since there exist r, r′ such that out(p, r) ∩ out(p′, r′) = ∅, and ∀P_j ∈ P \ {P}, ∀p_i ∈ P_j, ∃r_i: out(p_i, r_i) ⊆ P_j, we have, by the construction of S, that ∀p_i, p_j ∈ S with p_i ≠ p_j, ∃r_i, r_j such that out(p_i, r_i) ∩ out(p_j, r_j) = ∅. This set S clearly violates Definition 27, as required. □

As for Ω_k, we note that Lemma 27 also reveals that Ω_1 = Ω can be implemented under VSSC-PART_{D,E}(∞). By contrast, however, Ω_k is not implementable under VSSC_{D,H}(n, ∞) + MAJINF(k) for k > 1:

Proof. We show the claim for k = 2 and n = 3, as it is straightforward to derive the general case from this. We show that supposing some algorithm could implement Ω_k under the adversary leads to a contradiction. The following graph sequences (a)–(e) are all admissible under VSSC_{D,H}(k, ∞) (we assume that nodes not depicted are isolated; figure omitted). Let ε_a, . . . , ε_e be the runs resulting from the above sequences applied to the same initial configuration. By Definitions 26 and 28, LD has to include p_1 in ε_a, p_2 in ε_b, and p_3 in ε_c. By Definition 26, in ε_d, because ε_a ∼_{p_1} ε_d and ε_c ∼_{p_3} ε_d in all rounds, for some t > 0 and all t′ > t, out(p_1, t′) = {p_1, p_3}. A similar argument shows that in ε_e, for some t > 0 and all t′ > t, out(p_1, t′) = {p_1, p_2}, because ε_a ∼_{p_1} ε_e and ε_b ∼_{p_2} ε_e. The indistinguishability ε_d ∼_{p_1} ε_e provides the required contradiction, since for some t > 0 and all t′ > t, out(p_1, t′) should be the same in ε_d and ε_e. □
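The construction of the violating set S can be traced with a small sketch. The partition and the eventual quorum outputs below are hypothetical, chosen to match the argument above (each process's quorum eventually shrinks into its own component, and the multi-process component contains two processes with disjoint quorums); the function name is ours.

```python
from itertools import combinations

def build_violating_set(partitions):
    """Given a perpetual partition {name: set_of_processes} of the system,
    with at least one component of size > 1, return k+1 processes whose
    eventual quorums are pairwise disjoint (k = number of components)."""
    parts = list(partitions.values())
    multi = next(P for P in parts if len(P) > 1)    # component containing p, p'
    s = sorted(multi)[:2]                           # p and p' from that component
    s += [min(P) for P in parts if P is not multi]  # one process per other component
    return s

# k = 3 components; hypothetical eventual quorum outputs per process:
partitions = {"P1": {1, 2}, "P2": {3}, "P3": {4}}
quorums = {1: {1}, 2: {2}, 3: {3}, 4: {4}}

S = build_violating_set(partitions)
assert len(S) == len(partitions) + 1  # k+1 processes, as in the proof
assert all(not (quorums[p] & quorums[q]) for p, q in combinations(S, 2))
```

The final assertion is exactly the violation of property (1) of Definition 27: k + 1 processes, no two of whose quorums ever intersect.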

Conclusions
We introduced a framework for modeling dynamic networks with directed communication links under generalized message adversaries that focus on vertex-stable source components. We presented a number of impossibility results and lower bounds for consensus, as well as an algorithm that solves consensus under the strongest message adversary known so far. Moreover, we made a significant step towards determining the solvability/impossibility border of general k-set agreement in our model: We provided several impossibility results and lower bounds, which also led us to the first gracefully degrading consensus algorithm, i.e., a k-uniform k-set agreement algorithm, working under fairly strong message adversaries. We complemented these results by relating our message adversaries to failure detectors: the failure detectors that are weakest for consensus resp. necessary for k-set agreement cannot be implemented under our message adversaries.
Part of our future work is devoted to finding even stronger message adversaries and matching algorithms, as well as even stronger lower bounds, in an attempt to close the remaining gap.