

Background
Phylogenetics is concerned with the construction and analysis of evolutionary or phylogenetic trees and networks to understand the evolution of species, populations and individuals [1]. Neighbor-Net is a phylogenetic analysis and data representation method introduced in [2]. It is loosely based on the popular Neighbor-Joining (NJ) method of Saitou and Nei [3], but with one fundamental difference: whereas NJ constructs phylogenetic trees, Neighbor-Net constructs phylogenetic networks. The method is widely used, in areas such as virology [4], bacteriology [5], plant evolution [6] and even linguistics [7].
Evolutionary processes such as hybridization between species, lateral transfer of genes, recombination within a population, and convergent evolution can all lead to evolutionary histories that are distinctly non-tree-like. Moreover, even when the underlying evolution is tree-like, the presence of conflicting or ambiguous signal can make a single tree representation inappropriate. In these situations, phylogenetic network methods can be particularly useful (see e.g. [8]).
Phylogenetic networks are a generalization of phylogenetic trees (see Figure 1 for a typical example of a phylogenetic network). When the data supports many conflicting phylogenetic signals, Neighbor-Net can represent this conflict graphically. In particular, a single network can represent several trees simultaneously, indicate whether or not the data is substantially tree-like, and give evidence for possible reticulation or hybridization events. Evolutionary hypotheses suggested by the network can be tested directly using more detailed phylogenetic analyses and specialized biochemical methods (e.g. DNA fingerprinting or chromosome painting).
For any network construction method, it is vital that the network does not depict more conflict than is found in the data and that, if there are conflicting signals, these are represented by the network. At the same time, when the data is fitted well by a tree, the method should return a network that is close to being a tree. This is essential not just to avoid false inferences, but also for the application of networks in statistical tests of the extent to which the data is tree-like [9].
In this paper we prove that these properties all hold for Neighbor-Net. Formally, we prove that if the input to Neighbor-Net is a circular distance function (distance matrix) [10], then the method returns a network that exactly represents this distance function. Circular distance functions are more general than additive (patristic) distances on trees and, thus, as a corollary, if Neighbor-Net is given an additive distance it will return the corresponding tree. In this sense, Neighbor-Net is a statistically consistent method.
The paper is structured as follows: In Section 2 we introduce some basic notation, and in Section 3 we review the Neighbor-Net algorithm. In Section 4 we prove that Neighbor-Net is consistent (Theorem 4.1).

Preliminaries
In this section we present some notation that will be needed to describe the Neighbor-Net algorithm. We assume some basic facts concerning phylogenetic trees; further details may be found in [11].
Throughout this paper, X will denote a finite set with cardinality n. A split S = {A, B} (of X) is a bipartition of X. We let 𝒮 = 𝒮(X) = {{A, X\A} | ∅ ⊂ A ⊂ X} denote the set of all splits of X, and call any non-empty subset of 𝒮(X) a split system. A split weight function on X is a map ω: 𝒮(X) → ℝ≥0. We let 𝒮_ω denote the set {S ∈ 𝒮 | ω(S) > 0}, the support of ω.
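To make these definitions concrete, here is a small Python sketch (the function names and the frozenset representation are our own, not from the paper): a split is stored as a frozenset of its two blocks, and a split weight function as a dictionary from splits to non-negative weights.

```python
from itertools import combinations

def all_splits(X):
    """Return the set S(X) of all splits {A, X \\ A} of X, each split
    represented as a frozenset containing the two blocks of the bipartition."""
    X = frozenset(X)
    splits = set()
    for r in range(1, len(X)):
        for block in combinations(sorted(X), r):
            A = frozenset(block)
            splits.add(frozenset({A, X - A}))  # {A, X\A} and {X\A, A} coincide
    return splits

def support(omega):
    """Support of a split weight function omega: the splits of positive weight."""
    return {S for S, w in omega.items() if w > 0}

X = {"a", "b", "c", "d"}
print(len(all_splits(X)))  # a set of size n has 2^(n-1) - 1 splits; here 7
```

Representing each split as an unordered pair of blocks makes {A, X\A} and {X\A, A} automatically identical, which matches the definition of a bipartition.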
Let Θ = x_1, ..., x_n be an ordering of X. A split S = {A, B} is compatible with the ordering Θ if one of the sets A, B is of the form {x_i, x_{i+1}, ..., x_j} for some 1 < i ≤ j ≤ n. Note that if a split is compatible with an ordering Θ it is also compatible with its reversal x_n, ..., x_2, x_1 and with the ordering x_2, ..., x_n, x_1.

Figure 1. A phylogenetic network. The network was generated by Neighbor-Net for a sequence-based data set comprising Salmonella isolates that originally appeared in [17]. A detailed network-based analysis of this data is presented in [2], where the strains indicated in bold-face are tested for the presence of recombination. Note that the network is planar (that is, it can be drawn in the plane without any crossing edges), and that parallel edges in the network represent bipartitions of the data.

We let 𝒮_Θ denote the set of those splits in 𝒮(X) which are compatible with the ordering Θ. A split system 𝒮' is compatible with Θ if 𝒮' ⊆ 𝒮_Θ. In addition, a split system 𝒮' ⊆ 𝒮(X) is circular if there exists an ordering Θ of X such that 𝒮' is compatible with Θ. Note that any split system corresponding to a phylogenetic tree is circular [11, Ch. 3], and so circular split systems can be regarded as a generalization of split systems induced by phylogenetic trees. A split weight function ω is called circular if the split system 𝒮_ω is circular.

A distance function on X is a map d: X × X → ℝ≥0 such that for all x, y ∈ X both d(x, x) = 0 and d(x, y) = d(y, x) hold. Note that any split weight function ω on X induces a distance function d_ω on X, where d_ω(x, y) is the sum of the weights ω(S) over all splits S = {A, B} that separate x and y (that is, with x and y in different blocks of S). A distance function d on X is called circular if d = d_ω for some circular split weight function ω. Circular distances were introduced in [10] and have been further studied in, for example, [12] and [13]. Just as any tree-like distance function on X can be uniquely represented by a phylogenetic tree [11, Ch. 7], any circular distance function d can be represented by a planar phylogenetic network such as the one pictured in Figure 1 [14]. The program SplitsTree [9] allows the automatic generation of such a network for d by computing a circular split weight function ω with d = d_ω.
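The distance induced by a split weight function can be computed directly from its definition: sum the weights of all splits that separate the two elements. A minimal Python sketch (the frozenset representation of splits and all names are our own choices; the two weighted splits below are a toy example):

```python
def d_omega(omega, x, y):
    """Distance induced by a split weight function omega: the total weight
    of all splits S = {A, B} that separate x and y."""
    if x == y:
        return 0.0
    total = 0.0
    for S, w in omega.items():
        A, _ = tuple(S)           # pick one block; separation is symmetric
        if (x in A) != (y in A):  # S separates x and y
            total += w
    return total

# Toy example: two weighted splits of X = {a, b, c, d}.
X = frozenset("abcd")
A1 = frozenset({"a"})
A2 = frozenset({"a", "b"})
omega = {
    frozenset({A1, X - A1}): 1.0,  # {a} vs {b, c, d}
    frozenset({A2, X - A2}): 2.0,  # {a, b} vs {c, d}
}
print(d_omega(omega, "a", "c"))  # both splits separate a and c: 3.0
print(d_omega(omega, "a", "b"))  # only the first split separates a and b: 1.0
```

Testing membership in a single block suffices, since a split separates x and y exactly when one of them lies in that block and the other does not.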

Description of the Neighbor-Net algorithm
In this section we present a detailed description of the Neighbor-Net algorithm, as implemented in the current version of SplitsTree [9]. The Neighbor-Net algorithm was originally described in [2], where the reader may find a more informal description of how it works. For the reader's convenience we use the same notation as in [2] where possible.
In Figure 2 we present pseudo-code for the Neighbor-Net algorithm. The aim of the algorithm is, for a given input distance function d, to compute a circular split weight function ω so that the distance function d ω gives a good approximation to d. The resulting distance function d ω can then be represented by a planar phylogenetic network as indicated in the last section.
To this end, NEIGHBOR-NET first computes an ordering Θ of X, and then applies a non-negative least-squares procedure to find a best fit for d within the set of distance functions {d_ϕ | ϕ: 𝒮(X) → ℝ≥0, 𝒮_ϕ ⊆ 𝒮_Θ}. More details concerning the least-squares procedure may be found in [2]; here we concentrate on the description of the key computation of finding an ordering Θ of X, which is detailed in the procedure FINDORDERING.
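The split system 𝒮_Θ over which this least-squares fit is performed can be enumerated explicitly: a split is compatible with an ordering Θ = x_1, ..., x_n exactly when one of its blocks is a run of consecutive elements of Θ not containing x_1. A Python sketch of this enumeration (names and the frozenset representation of splits are our own):

```python
def splits_compatible(theta):
    """All splits of X compatible with the ordering theta = (x_1, ..., x_n):
    those with a block of the form {x_i, ..., x_j} for some 1 < i <= j <= n.
    Each split is represented as a frozenset of its two blocks."""
    n = len(theta)
    X = frozenset(theta)
    splits = set()
    for i in range(1, n):        # 0-based index 1 corresponds to x_2
        for j in range(i, n):
            A = frozenset(theta[i:j + 1])
            splits.add(frozenset({A, X - A}))
    return splits

theta = ("a", "b", "c", "d", "e")
print(len(splits_compatible(theta)))  # n(n-1)/2 splits: 5 * 4 / 2 = 10
```

The count n(n-1)/2 is the familiar size of a (maximal) circular split system, so the least-squares step fits at most quadratically many weights.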
An (ordered) cluster is a non-empty finite set C together with an ordering Θ_C = c_1, ..., c_k of the elements in C, where k = |C|. In the reduction case, three consecutive elements of such a cluster are replaced by two new elements, and the distances involving the new elements are computed as weighted averages of the original distances, where α, β and γ are positive real numbers satisfying α + β + γ = 1 (note that these formulae differ slightly from the ones given in [2], which contain a typographical error).

NEIGHBOR-NET(X, d)
Input: A finite non-empty set X and a distance function d on X
Output: A circular split weight function ω

FINDORDERING(𝒞, d)
Input: A collection 𝒞 of ordered clusters and a distance function d
Output: An ordering Θ of the elements in ∪_{C ∈ 𝒞} C
...
Compute an ordering Θ of Y according to (2)
return Θ

In the current implementation of Neighbor-Net fixed values are used for α, β and γ. This completes the description of the reduction case.
We now describe the selection case. Note that, in view of line 6, this case only applies if every cluster in 𝒞 contains at most two elements. In lines 17-18, two clusters C_1, C_2 ∈ 𝒞 are selected and replaced by the single cluster C' = C_1 ∪ C_2. The clusters C_1 and C_2 are selected as follows. We define a distance function on the set of clusters by

d(C_1, C_2) = (1 / (|C_1| · |C_2|)) Σ_{x ∈ C_1} Σ_{y ∈ C_2} d(x, y),

and select C_1, C_2 ∈ 𝒞, C_1 ≠ C_2, that minimize the quantity

Q(C_1, C_2) = (m − 2) · d(C_1, C_2) − Σ_{C ∈ 𝒞} d(C_1, C) − Σ_{C ∈ 𝒞} d(C_2, C),   (3)

where m is the number of clusters in 𝒞. The function Q that is used to select pairs of clusters is called the Q-criterion. Note that this is a direct generalization of the selection criterion used in the NJ algorithm [2]. However, using only this criterion yields a method that is not consistent, as illustrated in Figure 3. So, once the clusters C_1 and C_2 have been selected, we use a second criterion to determine an ordering Θ_C' in line 19 for the new cluster C'.
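The Q-criterion is the classical neighbor-joining selection rule applied to the matrix of pairwise cluster distances. A generic Python sketch (function and variable names are ours; forming the averaged cluster distances is assumed to have been done by the caller):

```python
def select_pair(d):
    """Given a symmetric m x m matrix d of distances between clusters,
    return the pair (i, j) minimizing the NJ-style Q-criterion
    Q(i, j) = (m - 2) * d[i][j] - sum_k d[i][k] - sum_k d[j][k]."""
    m = len(d)
    r = [sum(row) for row in d]  # row sums; the d[i][i] = 0 terms add nothing
    best, pair = None, None
    for i in range(m):
        for j in range(i + 1, m):
            q = (m - 2) * d[i][j] - r[i] - r[j]
            if best is None or q < best:
                best, pair = q, (i, j)
    return pair

# Toy additive distances from a tree with cherries {0, 1} and {2, 3}:
d = [[0, 2, 5, 5],
     [2, 0, 5, 5],
     [5, 5, 0, 2],
     [5, 5, 2, 0]]
print(select_pair(d))  # (0, 1): a cherry of the underlying tree is selected
```

Subtracting the row sums is what makes the criterion prefer genuine neighbors over merely close elements, which is why plain closest-pair selection fails where Q succeeds.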
In particular, for every x ∈ C_1 ∪ C_2 we define a quantity analogous to (3): we put m̂ = m + |C_1| + |C_2| − 2, and select x_1 ∈ C_1 and x_2 ∈ C_2 that minimize this quantity. We then choose an ordering Θ_C' in which x_1 and x_2 are neighbors and in which every two elements that were neighbors in C_1 or C_2 remain neighbors. This completes the description of the selection case, and hence the description of the procedure FINDORDERING.

Neighbor-Net is consistent
In this section we prove the consistency of Neighbor-Net:

Theorem 4.1: If the input d to the Neighbor-Net algorithm is a circular distance function, then the output of the Neighbor-Net algorithm is a circular split weight function ω: 𝒮(X) → ℝ≥0 with the property that d = d_ω.

The key part of the Neighbor-Net algorithm is the procedure FINDORDERING. We will show that, for a circular distance function d = d_ω on X, the call FINDORDERING({{x} | x ∈ X}, d) will produce an ordering Θ of X that is compatible with d. The non-negative least squares procedure then finds the distance function in {d_ϕ | ϕ: 𝒮(X) → ℝ≥0, 𝒮_ϕ ⊆ 𝒮_Θ} that is closest to d. As this set of distance functions includes d_ω, the least squares procedure returns exactly d = d_ω, proving the theorem.
We focus, then, on the proof that FINDORDERING behaves as required:

Theorem 4.2: If there exists an ordering compatible with a collection 𝒞 of ordered clusters and a distance function d, then FINDORDERING(𝒞, d) will return an ordering compatible with 𝒞 and d.

The proof is by induction on |Y|, where Y = ∪_{C ∈ 𝒞} C, and, for fixed |Y|, on |𝒞|. The base case of the induction is |Y| ≤ 3. In this case the set of splits 𝒮_Θ equals 𝒮(Y) for every ordering Θ of Y. In particular, every ordering of Y that is compatible with 𝒞 is also compatible with d, and the base case follows.

We now assume that |Y| > 3 and make the following induction hypothesis: If there exists an ordering compatible with a distance function d' and ordered clusters 𝒞', where either |∪_{C ∈ 𝒞'} C| < |Y|, or |∪_{C ∈ 𝒞'} C| = |Y| and |𝒞'| < |𝒞|, then FINDORDERING(𝒞', d') will return an ordering compatible with 𝒞' and d'.
There are two cases to consider. In the first case, 𝒞 contains some cluster C with |C| ≥ 3. In the second case, 𝒞 contains only clusters C with |C| ≤ 2.

Case 1: The reduction case
Suppose that there is C ∈ 𝒞 with |C| ≥ 3. This is the reduction case in the description of the algorithm. The procedure FINDORDERING constructs a new set of clusters 𝒞' (in line 11) and a new distance function d' (in line 12).
We first show that, if there is an ordering compatible with 𝒞 and d, then there is also an ordering compatible with 𝒞' and d'. Proof: Suppose that Θ = y_1, ..., y_n is an ordering of Y that is compatible with 𝒞 and d, where, without loss of generality, we have Θ_C = y_1, ..., y_k. Let Θ' = u, v, y_4, ..., y_n = z_1, ..., z_{n−1}, which is an ordering of Y' = (Y \ {y_1, y_2, y_3}) ∪ {u, v}. We claim that the ordering Θ' is compatible with the collection 𝒞' and with the distance function d'.
Since Θ is compatible with 𝒞, it is straightforward to check that Θ' is compatible with 𝒞'. Hence, we only need to show that Θ' is compatible with d'. We will use a 4-point condition that was first studied in a different context by Kalmanson [15] and has been shown to characterize circular distances in [12]. To be more precise, it suffices to show that, for every four elements z_{i_1}, z_{i_2}, z_{i_3}, z_{i_4}, i_1 < i_2 < i_3 < i_4,

d'(z_{i_1}, z_{i_3}) + d'(z_{i_2}, z_{i_4}) ≥ max{ d'(z_{i_1}, z_{i_2}) + d'(z_{i_3}, z_{i_4}), d'(z_{i_1}, z_{i_4}) + d'(z_{i_2}, z_{i_3}) },

which follows by a direct calculation from the definition of d' and the compatibility of Θ with d. ■

By the induction hypothesis, the recursive call to FINDORDERING returns an ordering of Y' that is compatible with 𝒞' and d'. It is used to construct an ordering Θ on Y, in line 14, which becomes the output of the procedure. A split-by-split check then shows that every split S of positive weight is compatible with Θ and, thus, that Θ is compatible with 𝒞 and d. ■
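Kalmanson's 4-point condition can be checked by direct enumeration. A small Python sketch (names are ours; the 5-cycle metric below is a toy example, not from the paper):

```python
from itertools import combinations

def is_kalmanson(theta, d):
    """Check Kalmanson's 4-point condition for the ordering theta: for all
    positions i1 < i2 < i3 < i4, the 'crossing' sum d(z_i1, z_i3) + d(z_i2, z_i4)
    must be at least as large as both other pairings."""
    for a, b, c, e in combinations(theta, 4):  # emitted in theta's order
        crossing = d[a][c] + d[b][e]
        if crossing < d[a][b] + d[c][e] or crossing < d[a][e] + d[b][c]:
            return False
    return True

# Shortest-path distances on a 5-cycle v0 - v1 - v2 - v3 - v4 - v0.
n = 5
d = {i: {j: min(abs(i - j), n - abs(i - j)) for j in range(n)} for i in range(n)}
print(is_kalmanson(tuple(range(n)), d))  # True for this cycle metric
```

The check is O(n^4), matching the quadruple quantifier in the condition; a single failing quadruple suffices to rule out compatibility with the given ordering.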

Case 2: The selection case
Now suppose that there are no clusters C ∈ 𝒞 with |C| ≥ 3. This is the selection case in the description of the algorithm.
In line 17 the algorithm selects two clusters that minimize (3). Recall that the function appearing in (3) is a distance function defined on the set of clusters 𝒞. We will first show that this distance function is circular. We do this in two steps, Proposition 4.5 and Proposition 4.6. Proposition 4.5 concerns distance functions obtained by replacing two elements by a single new element whose distances are weighted averages of the old ones with weights λ and 1 − λ, where λ is a real number with the property that 0 < λ < 1.

We now have the more difficult task of showing that the clusters C_1 and C_2 selected by the Q-criterion, that is, by minimizing (3), are adjacent in at least one ordering of the clusters that is compatible with the cluster distance, as described in Proposition 4.6. This is the most technical part of the proof. The key step is the inequality established in Lemma 4.7. This is used to prove Theorem 4.8, which establishes that the Q-criterion, when applied to a circular distance, will always select a pair of elements that are adjacent in at least one ordering compatible with the circular distance. As a corollary it will follow that there exists an ordering of the clusters in 𝒞 compatible with the cluster distance in which C_1 and C_2 are adjacent. Lemma 4.7 associates with each split S compatible with Θ a quantity λ(S) and asserts, in particular, that: (ii) any other split S compatible with Θ satisfies λ(S) ≤ 0.
Proof: Expanding λ(S), we divide the rest of our argument into five cases, which are summarized in Table 1. For these cases straightforward calculations yield the entries of Table 2. Using Table 2 we compute λ(S) in each case. ■

For the proof of Theorem 4.8, let Θ* be the ordering obtained by removing x_2 from Θ and re-inserting it immediately after x_1. We claim that Θ* is also compatible with ω. As in Lemma 4.7, for any split S compatible with Θ we define the quantity λ(S); the inequality of Lemma 4.7 then yields the claim.

After selecting C_1 and C_2, the procedure FINDORDERING removes these clusters from the collection 𝒞 and replaces them with their union C' = C_1 ∪ C_2. It also assigns an ordering Θ_C' to this cluster.
FINDORDERING is then called recursively. The following is directly analogous to Proposition 4.3.

Proposition 4.10
There exists an ordering of Y that is compatible with the collection 𝒞' and the split weight function ω.
Proof: We already know by Proposition 4.9 and Proposition 4.6 that there exists an ordering Θ = y_1, ..., y_n of Y that is compatible with 𝒞 and ω and in which the clusters C_1 and C_2 are adjacent. If x_1 ∈ C_1 and x_2 ∈ C_2 are selected such that the resulting ordering is also compatible with 𝒞' then we are done. Otherwise we have to construct a suitable new ordering of Y. There are, up to symmetric situations with the roles of C_1 and C_2 swapped, only two cases we need to consider. In each case a strict inequality involving d(y_1, y_3) shows that, for every split S in the relevant split system 𝒮', we must have ω(S) = 0. Hence, ω is compatible with the modified ordering.
Thus Θ is compatible with 𝒞 and d, completing the proof of Theorem 4.2. ■

Remark 4.11
Note that we have shown that Corollary 4.9 holds under the assumption that (in view of line 6) every cluster in 𝒞 contains at most two elements. However, it is possible to prove this result in the more general setting where clusters can have arbitrary size. In principle, this could yield a consistent variation of the Neighbor-Net algorithm, analogous to the recently introduced QNet algorithm [16], in which the reduction case is skipped entirely and clusters are pairwise combined until only one cluster is left, instead of reducing the size of clusters when they have more than two elements. However, we suspect that such a method would not work well in practice, since the reduced distances have smaller variance than the original distances.