Abstract
The classical matrix-tree theorem discovered by Kirchhoff in 1847 expresses the principal minor of the \(n \times n\) Laplace matrix as a sum of monomials of matrix elements indexed by directed trees with n vertices. We prove, for any \(k \ge n\), a three-parameter family of identities between degree k polynomials of matrix elements of the Laplace matrix. For \(k=n\) and special values of the parameters, the identity turns out to be the matrix-tree theorem. For the same values of the parameters and arbitrary \(k \ge n\), the left-hand side of the identity becomes a specific polynomial of the matrix elements called the higher determinant of the matrix. We study properties of the higher determinants; in particular, they have an application (due to M. Polyak) in the topology of 3-manifolds.
1 Introduction and the main results
1.1 Introduction
Let G be a directed graph with n numbered vertices. Given an \(n \times n\)-matrix \(W = (w_{ij})\), one can associate with G a monomial in the matrix elements
$$\begin{aligned} w_G {\mathop {=}\limits ^{\text {def}}} \prod _{[ab] \in G} w_{ab}. \end{aligned}$$
Denote now by \(\widehat{W}\) the Laplace matrix, i.e. the matrix with nondiagonal entries \(w_{ij}\) (\(1 \le i \ne j \le n\)) and diagonal entries \(-\sum _{j \ne i} w_{ij}\) (\(1 \le i \le n\)). The classical matrix-tree theorem discovered by Kirchhoff [5] and proved in its present form by Tutte [9] says that the diagonal minor \(\det \widehat{W}_i\) of size \((n-1)\) (where \(\widehat{W}_i\) is obtained from \(\widehat{W}\) by deletion of the row and the column number i) is
$$\begin{aligned} \det \widehat{W}_i = (-1)^{n-1} \sum _{G \in {\mathcal {T}}_{n,i}} \prod _{[ab] \in G} w_{ab}, \end{aligned}$$(1)
where G runs through the set \({\mathcal {T}}_{n,i}\) of all trees with the vertices \(1 , \dots , n\) and all the edges directed towards the vertex i. The theorem has numerous generalizations to other minors, to Pfaffians and more; see [2, 3] and the references therein for a review, proofs and some related results.
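As an illustration (not part of the formal development), the theorem is easy to confirm numerically for small n. In the sketch below all function names are ours; vertices are 0-based, the minor is taken at the vertex `2`, and the sign \((-1)^{n-1}\) accounts for the minus signs on the diagonal of \(\widehat{W}\).

```python
import itertools, random

def laplace_matrix(W):
    """Laplace matrix: off-diagonal entries w_ij, diagonal entries -sum_{j != i} w_ij."""
    n = len(W)
    return [[W[i][j] if i != j else -sum(W[i][k] for k in range(n) if k != i)
             for j in range(n)] for i in range(n)]

def det(M):
    """Integer determinant via the Leibniz expansion (fine for small matrices)."""
    n = len(M)
    total = 0
    for perm in itertools.permutations(range(n)):
        sign = 1
        for a in range(n):            # sign = (-1)^{number of inversions}
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def tree_sum(W, root):
    """Sum of monomials over all trees on {0,...,n-1} with all edges directed towards root."""
    n = len(W)
    others = [v for v in range(n) if v != root]
    total = 0
    # a candidate tree assigns to each non-root vertex the head of its unique outgoing edge
    for heads in itertools.product(range(n), repeat=len(others)):
        parent = dict(zip(others, heads))
        if any(parent[v] == v for v in others):
            continue
        ok = True
        for v in others:              # every vertex must reach the root without cycling
            seen, cur = set(), v
            while cur != root:
                if cur in seen:
                    ok = False
                    break
                seen.add(cur)
                cur = parent[cur]
            if not ok:
                break
        if ok:
            prod = 1
            for v in others:
                prod *= W[v][parent[v]]
            total += prod
    return total

n = 4
random.seed(0)
W = [[random.randint(1, 9) for _ in range(n)] for _ in range(n)]
L = laplace_matrix(W)
minor = [row[:2] + row[3:] for row in (L[:2] + L[3:])]   # delete row and column number 2
assert det(minor) == (-1) ** (n - 1) * tree_sum(W, 2)
```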
Awan and Bernardi [1] defined and studied an analog \(B_G\) of the classical Tutte polynomial for a directed graph G. (See, e.g. [8] for a definition of the classical Tutte polynomial; the Bernardi polynomial is defined below in Sect. 1.2.) For every directed graph G, \(B_G = B_G(q,y,z)\) is a polynomial in three variables; its degree in q is equal to the number of vertices of G, and its total degree in y and z does not exceed the number of edges of G. The main result of this article is Theorem 1.3, which is equivalent to the identity
see Sect. 1.2 for the exact formulation. The summation is taken over the set of all directed graphs with n vertices and k edges, and \([B_G]_k\) means the degree k part (with respect to y and z) of the polynomial \(B_G\).
Matrix-tree theorems are obtained by specialization of parameters in the main identity. In particular, if \(k=n\), then substitution of \(q=-1\), \(y=0\) and \(z=1\) turns the left-hand side into the determinant of the Laplace matrix \(\widehat{W}\); changing the summation range appropriately one can get its principal minor (of any size) as well. The right-hand side in this case becomes exactly the right-hand side of (1). For an arbitrary k and the same values of q, y and z, the left-hand side of the identity turns into the alternating sum over all possible totally cyclic graphs with n vertices and k edges; this sum deserves to be called the higher determinant of the matrix (see its exact definition and the analysis of properties in Sect. 1.3 below). The right-hand side then becomes a sum over the set of all acyclic graphs, giving thus a “higher analog” of the matrix-tree theorem.
The article has the following structure: in Sect. 1.2, we give necessary definitions and formulate the main result, Theorem 1.3, and its analog for undirected graphs, Theorem 1.4. In Sect. 1.3, we consider corollaries of Theorem 1.3 for special values of the parameters; in particular, the section contains a definition of the higher determinant (Definition 1.8), and the higher analogs of the matrix-tree theorem (Corollaries 1.11 and 1.12).
Section 1.4 contains various digressions and ramifications of the main subject: analogs of the matrix-tree theorem for undirected graphs (Sect. 1.4.1), analysis of properties of the higher minors (Sect. 1.4.2) and a nice application of the higher determinants in topology (Sect. 1.4.3): a formula, due to Polyak [6], expressing the Casson–Walker invariant of a 3-manifold as a combination of higher minors of its chainmail adjacency matrix.
Proofs of the theorems are collected in Sect. 2. Section 2.1 contains the proofs of the main results, Theorems 1.3 and 1.4, and Sect. 2.2, the proofs of the properties of the higher minors (Propositions 1.18 and 1.19). Higher matrix-tree theorems (Corollaries 1.11 and 1.12) are given two proofs: first, in Sect. 2.3 we derive them from Theorem 1.3 by specialization of parameters (plus some craft...). Then, in Sect. 2.5 we give another proof of the same results containing no reference to Theorem 1.3. The key ingredient of the second proof is Proposition 2.2; it was first obtained as [1, Proposition 6.16] by specialization of parameters in an identity for Bernardi polynomials, but here we give a direct proof, thus answering a request from [1] (Question 6.17).
1.2 Graphs and Bernardi polynomial
The following theory has two parallel versions—for directed and undirected graphs—so let us introduce notation for both cases simultaneously.
Denote by \(\varGamma _{n,k}\) the set of directed graphs with n vertices numbered \(1 , \dots , n\) and k edges numbered \(1 , \dots , k\); in other words, an element \(G \in \varGamma _{n,k}\) is a k-element sequence \(([a_1b_1] , \dots , [a_kb_k])\) where \(a_1 , \dots , a_k, b_1 , \dots , b_k \in \{1 , \dots , n\}\) are understood as vertices and every [ab], as an edge from the vertex a to the vertex b. Loops (edges [aa]) and parallel edges (pairs \([a_ib_i] = [a_jb_j]\) with \(i \ne j\)) are allowed. Similarly, by \(\varUpsilon _{n,k}\) we denote the set of undirected graphs with n numbered vertices and k numbered edges. By a slight abuse of notation, \(e \in G\) will mean that e is an edge of G (regardless of its number). By \({\nu }(G)\) and \({\epsilon }(G)\), we denote the number of vertices and edges, respectively, in G (that is, \({\nu }(G) = n\) and \({\epsilon }(G) = k\) for \(G \in \varGamma _{n,k}\) or \(G \in \varUpsilon _{n,k}\)). The forgetful map \(\left| \smash {\cdot }\right| : \varGamma _{n,k} \rightarrow \varUpsilon _{n,k}\) sends every \(G \in \varGamma _{n,k}\) to the undirected graph \(\left| \smash {G}\right| = (\{a_1,b_1\} , \dots , \{a_k,b_k\})\) obtained by dropping the orientation of all edges: \([ab] \mapsto \{a,b\}\).
A vertex a of the graph G is called a sink if G has no edges [ab] starting from it; a is called isolated if it is not incident to any edge (i.e. G contains neither edges [ab] nor [ba]). An isolated vertex is a sink but a vertex a with a loop [aa] attached is not.
Consider a graph \(G \in \varGamma _{n,k}\) and an edge \(e \in G\). By \(G {\setminus } e\), G / e and \(G_e^{\vee }\), we will denote the graph G with e deleted, e contracted (here e should not be a loop) and e reversed, respectively; the first two notations can be used for undirected graphs \(G \in \varUpsilon _{n,k}\) as well. Note the shift of the edge numbers: if \(e \in G\) is the edge number s, and \(e' \in G\) is the edge number \(t \ne s\), then \(e' \in G {\setminus } \{e\}\) bears the number t if \(t < s\) and \(t-1\) if \(t > s\). For \(G/e \in \varGamma _{n-1,k-1}\), a similar renumbering is applied both to the edges and to the vertices. A graph \(H \in \varGamma _{n,m}\) is called a subgraph of \(G \in \varGamma _{n,k}\) (notation \(H \subseteq G\)) if it can be obtained from G by deletion of some edges.
Denote by \({\mathcal {G}}_{n,k}\) (resp., \({\mathcal {Y}}_{n,k}\)) a vector space over \({{\mathbb {C}}}\) spanned by \(\varGamma _{n,k}\) (resp., \(\varUpsilon _{n,k}\)). The forgetful map extends naturally to the linear map \(\left| \smash {\cdot }\right| : {\mathcal {G}}_{n,k} \rightarrow {\mathcal {Y}}_{n,k}\). The direct sum \({\mathcal {G}}_n {\mathop {=}\limits ^{\text{ def }}}\bigoplus _{k=0}^\infty {\mathcal {G}}_{n,k}\) bears the structure of an associative algebra: a product of the graphs \(G_1 = (e_1 , \dots , e_{k_1}) \in \varGamma _{n,k_1}\) and \(G_2 = (h_1 , \dots , h_{k_2}) \in \varGamma _{n,k_2}\) is defined as \(G_1*G_2 {\mathop {=}\limits ^{\text{ def }}}(e_1 , \dots , e_{k_1}, h_1 , \dots , h_{k_2}) \in \varGamma _{n,k_1+k_2}\), and then, \(*\) is extended to the whole \({\mathcal {G}}_n\) as a bilinear operation. Note that \(G_1*G_2 \ne G_2*G_1\). (The edges are the same, but the edge numbering is different.)
Following [1], we define the Bernardi polynomial as a map \(B: \varGamma _{n,k} \rightarrow {{\mathbb {Q}}}[q,y,z]\) given by
$$\begin{aligned} B_G(q,y,z) {\mathop {=}\limits ^{\text {def}}} \sum _{f: \{1 , \dots , n\} \rightarrow \{1 , \dots , q\}} y^{\# f_G^{>}} z^{\# f_G^{<}}, \end{aligned}$$
where \(f_G^{>}\) (resp., \(f_G^{<}\)) means the set of edges [ab] of G such that \(f(b) > f(a)\) (resp., \(f(b) < f(a)\)). See [1] for a detailed analysis of the properties of \(B_G\).
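For readers who want to experiment, \(B_G\) can be evaluated at a nonnegative integer q by summing over all maps f directly. The following sketch (function names are ours; we assume the coloring-sum definition of [1] recalled above) checks the value \(B_G(q,0,1) = q^{\beta _0(G)}\) for a totally cyclic graph, mentioned in Proposition 1.10 below.

```python
import itertools

def bernardi_value(G, n, q, y, z):
    """Evaluate B_G(q, y, z) at an integer q >= 0 by summing over all maps
    f : {1,...,n} -> {1,...,q}; an edge (a, b) contributes a factor y if
    f(b) > f(a), z if f(b) < f(a), and 1 if f(b) = f(a)."""
    total = 0
    for f in itertools.product(range(1, q + 1), repeat=n):
        val = lambda v: f[v - 1]      # vertices are 1-based, tuples 0-based
        term = 1
        for (a, b) in G:
            if val(b) > val(a):
                term *= y
            elif val(b) < val(a):
                term *= z
        total += term
    return total

# A directed 2-cycle on vertices 1, 2 is totally cyclic, so B_G(q, 0, 1) = q^{beta_0} = q.
cycle = [(1, 2), (2, 1)]
assert bernardi_value(cycle, 2, 5, 0, 1) == 5
```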
The undirected version of the Bernardi polynomial is the full chromatic polynomial, which is a map \({\hbox {C}}: \varUpsilon _{n,k} \rightarrow {{\mathbb {Q}}}[q,y]\) defined as
$$\begin{aligned} {\hbox {C}}_G(q,y) {\mathop {=}\limits ^{\text {def}}} \sum _{f: \{1 , \dots , n\} \rightarrow \{1 , \dots , q\}} y^{\# f_G^{\ne }}, \end{aligned}$$
where \(f_G^{\ne }\) is the set of edges [ab] of G such that \(f(b) \ne f(a)\). The Potts polynomial is defined then by the formula
See [1] for details of the definition and [8, 11] for the properties of the Potts polynomial. In particular, the following holds.
Proposition 1.1
([8]) \(Z_G(q,v) = \sum _{H \subseteq G} q^{\beta _0(H)} v^{{\epsilon }(H)}\),
where \(\beta _0(H)\) is the 0th Betti number of the subgraph H, that is, the number of its connected components.
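Proposition 1.1 turns the Potts polynomial into a finite computation: sum \(q^{\beta _0(H)} v^{{\epsilon }(H)}\) over all \(2^k\) subgraphs. A brute-force sketch (function names are ours), with \(\beta _0\) computed by a union–find:

```python
import itertools

def beta0(n, edges):
    """Number of connected components of a graph on vertices 1..n (union-find)."""
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for (a, b) in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(v) for v in range(1, n + 1)})

def potts(n, G, q, v):
    """Z_G(q, v) = sum over all subgraphs H of G of q^{beta0(H)} * v^{eps(H)}."""
    total = 0
    for r in range(len(G) + 1):
        for H in itertools.combinations(G, r):
            total += q ** beta0(n, H) * v ** r
    return total

# Single undirected edge {1,2}: the subgraphs are the empty graph (q^2) and the edge (q*v).
assert potts(2, [(1, 2)], 3, 2) == 3 ** 2 + 3 * 2
```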
For any \(G \in \varGamma _{n,k}\) (resp., \(G \in \varUpsilon _{n,k}\)), we denote by \(\widehat{G} \in \varGamma _{n,k-\ell (G)}\) (resp., \(\widehat{G} \in \varUpsilon _{n,k-\ell (G)}\)) the graph G with all the loops deleted; here \(\ell (G)\) is the number of loops in G. It follows directly from the definition of the Bernardi, the full chromatic and the Potts polynomials that
Proposition 1.2
The universal Bernardi polynomial is an element of \({\mathcal {G}}_{n,k}[q,y,z]\) defined as
$$\begin{aligned} {{\mathcal {B}}}_{n,k}(q,y,z) {\mathop {=}\limits ^{\text {def}}} \sum _{G \in \varGamma _{n,k}} B_G(q,y,z)\, G. \end{aligned}$$
For a polynomial \(P \in {{\mathbb {C}}}[q,y,z]\) denote by \([P]_k\) the sum of its terms containing monomials \(q^s y^i z^j\) with \(i+j = k\) and any s. The universal truncated Bernardi polynomial is an element of \({\mathcal {G}}_{n,k}[q,y,z]\) defined as
$$\begin{aligned} \widehat{{{\mathcal {B}}}}_{n,k}(q,y,z) {\mathop {=}\limits ^{\text {def}}} \sum _{G \in \varGamma _{n,k}} [B_G]_k(q,y,z)\, G. \end{aligned}$$
Note that \([B_G]_k = 0\) if (and only if) G contains at least one loop; so, \(\widehat{{{\mathcal {B}}}}_{n,k}\) contains only loopless graphs. \(\widehat{{{\mathcal {B}}}}_{n,k}\) is homogeneous of degree k with respect to y and z and is not homogeneous with respect to q.
In a similar way, the universal Potts polynomial and the universal truncated Potts polynomial are elements of \({\mathcal {Y}}_{n,k}[q,v]\) defined, respectively, as
For any \(i = 1 , \dots , k\) and \(p, q = 1 , \dots , n\) consider the map \(R_{p,q;i}: \varGamma _{n,k} \rightarrow \varGamma _{n,k}\) defined as follows: \(R_{p,q;i}(G)\) is the graph containing the same edges as G, except the edge number i, which is replaced by the edge [pq] (bearing the same number i). Also denote by \(V_i: {\mathcal {G}}_{n,k} \rightarrow {\mathcal {G}}_{n,k}\) the following operator: if \(G \in \varGamma _{n,k}\) and \([ab] \in G\) is the edge number i, then
$$\begin{aligned} V_i(G) {\mathop {=}\limits ^{\text {def}}} {\left\{ \begin{array}{ll} G &{} \text{ if } a \ne b,\\ -\sum _{c \ne a} R_{a,c;i}(G) &{} \text{ if } a = b; \end{array}\right. } \end{aligned}$$
then extend \(V_i\) to the space \({\mathcal {G}}_{n,k}\) by linearity. By definition, \(V_i = 0\) if \(n=1\) and \(k > 0\).
The product
$$\begin{aligned} \varDelta {\mathop {=}\limits ^{\text {def}}} V_1 \circ V_2 \circ \dots \circ V_k: {\mathcal {G}}_{n,k} \rightarrow {\mathcal {G}}_{n,k} \end{aligned}$$
is called the Laplace operator. Its undirected version \(\varDelta : {\mathcal {Y}}_{n,k} \rightarrow {\mathcal {Y}}_{n,k}\) is defined as \(\varDelta (X) {\mathop {=}\limits ^{\text{ def }}}\left| \smash {\varDelta (\varPhi )}\right| \) where \(\varPhi \in \varGamma _{n,k}\) is any element such that \(\left| \smash {\varPhi }\right| = X\). (It is easy to check that \(\varDelta (X)\) does not depend on the choice of \(\varPhi \).)
Let \(W = (w_{ij})\) be an \(n \times n\)-matrix. Denote by \(\langle W \vert : {\mathcal {G}}_{n,k} \rightarrow {{\mathbb {C}}}\) a linear functional such that for any \(G \in \varGamma _{n,k}\) one has
$$\begin{aligned} \langle W \mid G\rangle = \prod _{[ab] \in G} w_{ab}. \end{aligned}$$
It follows from the definition of the Laplace operator that
$$\begin{aligned} \langle \widehat{W} \mid X\rangle = \langle W \mid \varDelta (X)\rangle \end{aligned}$$(2)
for any element \(X \in {\mathcal {G}}_{n,k}\); here, \(\widehat{W}\) is the Laplace matrix defined in Sect. 1.1. This equation explains the name “Laplace operator” for \(\varDelta \). Note that since \(\varDelta (X)\) is a sum of graphs containing no loops, it is possible to change arbitrarily the diagonal entries of W in the right-hand side of (2); for example, one can use \(\widehat{W}\) instead.
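The relation (2) — reading it as \(\langle \widehat{W} \mid X\rangle = \langle W \mid \varDelta (X)\rangle \) — can be tested mechanically: expanding every loop [aa] into \(-\sum _{c \ne a} [ac]\) and pairing with W reproduces the pairing of the original graph with \(\widehat{W}\). A sketch with our own names and conventions (a graph is a tuple of 1-based ordered pairs, a formal sum a dict from graphs to coefficients):

```python
import random

def delta(G, n):
    """Laplace operator on a single graph, returned as a formal sum {graph: coeff}:
    each loop (a, a) is replaced by -sum over c != a of (a, c); other edges are kept."""
    result = {(): 1}
    for (a, b) in G:
        choices = [((a, b), 1)] if a != b else \
                  [((a, c), -1) for c in range(1, n + 1) if c != a]
        new = {}
        for g, coeff in result.items():
            for e, s in choices:
                new[g + (e,)] = new.get(g + (e,), 0) + coeff * s
        result = new
    return result

def pairing(W, G):
    """<W | G> = product of w_{ab} over the edges of G (1-based vertex indices)."""
    p = 1
    for (a, b) in G:
        p *= W[a - 1][b - 1]
    return p

n = 3
random.seed(1)
W = [[random.randint(1, 9) for _ in range(n)] for _ in range(n)]
What = [[W[i][j] if i != j else -sum(W[i][k] for k in range(n) if k != i)
         for j in range(n)] for i in range(n)]          # the Laplace matrix of W
G = ((1, 1), (2, 3), (3, 3))                            # a graph with two loops
lhs = pairing(What, G)                                  # <W_hat | G>
rhs = sum(c * pairing(W, H) for H, c in delta(G, n).items())   # <W | Delta(G)>
assert lhs == rhs
```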
The main results of this paper are the following two theorems: the directed
Theorem 1.3
and the undirected
Theorem 1.4
They are proved in Sect. 2.
Remark 1.5
Note that the Laplace operator \(\varDelta \) on directed graphs preserves sinks (i.e. vertices i such that the graph contains no edges [ij]). Therefore, Theorem 1.3 can be refined according to the set I of sinks.
1.3 Minors and the matrix-tree theorem
A graph \(G \in \varGamma _{n,k}\) is called strongly connected if every two of its vertices can be joined by a directed path. A graph is totally cyclic (or strongly semiconnected) if every connected component (in the topological sense) is strongly connected; equivalently, G is totally cyclic if every edge enters a directed cycle.
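The last characterization is the easiest to implement: an edge [ab] enters a directed cycle if and only if there is a directed path from b back to a. A brute-force sketch (function names are ours):

```python
def reaches(G, src, dst):
    """Is there a directed path from src to dst in G (src == dst counts, via the empty path)?"""
    seen, stack = {src}, [src]
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        for (a, b) in G:
            if a == v and b not in seen:
                seen.add(b)
                stack.append(b)
    return False

def totally_cyclic(G):
    """Every edge (a, b) must lie on a directed cycle, i.e. b must reach a.
    (A loop (a, a) is itself a cycle, and src == dst above returns True.)"""
    return all(reaches(G, b, a) for (a, b) in G)

assert totally_cyclic([(1, 2), (2, 1), (3, 3)])        # a 2-cycle plus a loop
assert not totally_cyclic([(1, 2), (2, 1), (2, 3)])    # edge (2, 3) is on no cycle
```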
A totally cyclic graph may contain isolated vertices. Let \(I = \{i_1< \dots < i_s\} \subset \{1 , \dots , n\}\); by \({{\mathfrak {S}}}_{n,k}^I\) we denote the set of totally cyclic graphs \(G \in \varGamma _{n,k}\) such that the vertices \(i_1 , \dots , i_s\), and only they, are isolated. The set of all totally cyclic graphs is denoted by \({{\mathfrak {S}}}_{n,k} {\mathop {=}\limits ^{\text{ def }}}\bigcup _{I \subset \{1 , \dots , n\}} {{\mathfrak {S}}}_{n,k}^I\).
Example 1.6
If a vertex a of a totally cyclic graph \(G \in {{\mathfrak {S}}}_{n,k}^I\) is not isolated, then there is at least one edge \([ab] \in G\). So if \(I = \{i_1 , \dots , i_s\}\) and \({{\mathfrak {S}}}_{n,k}^I \ne \varnothing \), then \(k \ge n-s\).
Let \(k=n-s\). If \(G \in {{\mathfrak {S}}}_{n,k}^I\), then every vertex \(a \notin I\) is the beginning of exactly one edge; denote it by \([a\sigma _G(a)]\). Also, a is the end of exactly one edge [ca], which means \(a = \sigma _G(c)\). Hence, \(\sigma _G\) is a permutation of the k-element set \(\{1 , \dots , n\} {\setminus } I\). For every such permutation \(\sigma \), there exist k! graphs \(G \in {{\mathfrak {S}}}_{n,k}^I\) such that \(\sigma _G = \sigma \); they differ by the edge numbering.
Geometrically G is a union of disjoint directed cycles passing through all vertices except \(i_1 , \dots , i_s\). (Some cycles may be just loops.)
A graph \(G \in \varGamma _{n,k}\) is called acyclic if it contains no directed cycles (in particular, no loops). Let \(I = \{i_1< \dots < i_s\}\) be as above; by \({{\mathfrak {A}}}_{n,k}^I\) we denote the set of acyclic graphs \(G \in \varGamma _{n,k}\) such that the vertices \(i_1 , \dots , i_s\), and only they, are sinks. The set of all acyclic graphs is denoted by \({{\mathfrak {A}}}_{n,k} {\mathop {=}\limits ^{\text{ def }}}\bigcup _{I \subset \{1 , \dots , n\}} {{\mathfrak {A}}}_{n,k}^I\).
Example 1.7
Let \(n > k\); then any graph \(G \in \varGamma _{n,k}\) has at least \(n-k\) connected components. If G is acyclic, then it contains a sink in every connected component. So if \({{\mathfrak {A}}}_{n,k}^I \ne \varnothing \) where \(I = \{i_1 , \dots , i_s\}\), then \(s \ge n-k \Longleftrightarrow k \ge n-s\).
Let \(k = n-s\). Then for every \(G \in {{\mathfrak {A}}}_{n,k}^I\) one has \(s \ge \beta _0(G) \ge n-k = s\), hence \(\beta _0(G) = s\). Thus, each component of G contains exactly one vertex \(i_\ell \in I\) (for some \(\ell = 1 , \dots , s\)), which is its only sink. The equation for the Euler characteristic \(\beta _0(G)-\beta _1(G) = n-k\) (where \(\beta _1(G)\) is the first Betti number) implies \(\beta _1(G) = 0\), so every \(G \in {{\mathfrak {A}}}_{n,k}^I\) is a forest. Each of its s components is a tree; every edge of this tree is directed towards the sink \(i_\ell \).
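The count in this example can be confirmed by enumeration: for \(n=3\), \(k=2\), \(I=\{3\}\) there are exactly 3 trees directed towards the vertex 3, each occurring with \(2!\) edge numberings. A sketch (our own names, outside the formal development):

```python
import itertools

def reaches(G, src, dst):
    """Directed reachability from src to dst (src == dst is trivially true)."""
    seen, stack = {src}, [src]
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        for (a, b) in G:
            if a == v and b not in seen:
                seen.add(b)
                stack.append(b)
    return False

def sinks(G, n):
    """Vertices with no outgoing edge."""
    return {v for v in range(1, n + 1)} - {a for (a, _) in G}

n, k, I = 3, 2, {3}
count = 0
all_edges = [(a, b) for a in range(1, n + 1) for b in range(1, n + 1)]
for G in itertools.product(all_edges, repeat=k):
    # acyclic <=> no edge (a, b) lies on a directed cycle, i.e. b never reaches a
    acyclic = not any(reaches(G, b, a) for (a, b) in G)
    if acyclic and sinks(G, n) == I:
        count += 1
        # sanity check of the forest structure: k distinct edges touching k + 1 vertices
        verts = {v for e in G for v in e}
        assert len(set(G)) == k and len(verts) == k + 1
assert count == 6    # 3 trees directed towards vertex 3, times 2! edge numberings
```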
Definition 1.8
Let \(I = \{i_1< \dots < i_s\} \subset \{1 , \dots , n\}\). The element
is called a universal diagonal I-minor of degree k; in particular, \(\mathop {\det }\nolimits _{n,k}^\varnothing \) is called a universal determinant of degree k.
The element
is called a universal (i, j)-minor (of codimension 1) of degree k.
Example 1.9
Let \(I = \{i_1 , \dots , i_s\}\). As mentioned already in Example 1.6, if \(k < n-s\), then \({{\mathfrak {S}}}_{n,k}^I = \varnothing \) and \(\det _{n,k}^I = 0\).
Let \(k = n-s\). According to Example 1.6 for every permutation \(\sigma \) of \(\{1 , \dots , n\} {\setminus } I\) there exist k! graphs G with \(\sigma _G = \sigma \). It is easy to see that for every such graph G the coefficient \((-1)^{\beta _0(G)}\) is equal to \((-1)^n\) if \(\sigma \) is even and to \(-(-1)^n\) if \(\sigma \) is odd. Geometrically G is a union of disjoint directed cycles passing through all the \(k=n-s\) vertices not in I. The cycles themselves depend on \(\sigma \) only; the k! graphs G with \(\sigma _G = \sigma \) differ one from another by the edge numbering. For any \(n \times n\)-matrix \(W = (w_{ij})\), this implies the equality
here \(W_I\) is the submatrix of W obtained by deletion of the rows and the columns numbered \(i_1 , \dots , i_s\). It is proved in a similar way that \(\langle W \mid \det _{n,n-1}^{i/j}\rangle \) is equal to the codimension 1 minor of W obtained by deletion of the row i and the column j. This explains the terminology of Definition 1.8.
Values of the Bernardi polynomial at some points have special meaning:
Proposition 1.10
(Recall that \({\nu }(G)\) is the number of vertices in G.) For a proof see [1, Eq. (45) and Definition 5.1]. Note that it follows immediately from the definition that \([B_G]_k(q,0,1) \equiv 0\) if (and only if) G contains an oriented cycle (e.g. a loop), and that \(B_G(q,0,1) = q^{\beta _0(G)}\) if G is totally cyclic. Thus, one half of each formula above is evident (but not the other half).
Take now special values of parameters in Theorem 1.3 to obtain
Corollary 1.11
(Higher matrix-tree theorem for diagonal minors) For every \(I = \{i_1 , \dots , i_s\} \subset \{1 , \dots , n\}\) one has
and also
Corollary 1.12
(Higher matrix-tree theorem for codimension 1 minors)
See Sect. 2 for detailed proofs.
Example 1.13
Let, as usual, \(I = \{i_1< \dots < i_s\}\) and \(W = (w_{ij})\) be an \(n \times n\)-matrix. Examples 1.6 and 1.7 imply that for \(k < n-s\) Corollary 1.11 takes the form \(0 = 0\).
Let now \(k = n-s\); apply the functional \(\langle W\vert \) to both sides of Corollary 1.11. Example 1.9 and Eq. (2) imply that the left-hand side then becomes the diagonal minor of the Laplace matrix \(\widehat{W}\) obtained by deletion of the rows and the columns numbered \(i_1 , \dots , i_s\). Example 1.7 now gives
Corollary 1.14
(of Corollary 1.11) The diagonal minor of the Laplace matrix obtained by deletion of the rows and the columns numbered \(i_1 , \dots , i_s\) is equal to \((-1)^{n-s}\) times the sum of monomials \(w_{a_1 b_1} \dots w_{a_{n-s} b_{n-s}}\) such that the graph \(([a_1 b_1] , \dots , [a_{n-s} b_{n-s}])\) is an s-component forest where every component contains exactly one vertex \(i_\ell \) for some \(\ell = 1 , \dots , s\), and all the edges of the component are directed towards \(i_\ell \).
A similar reasoning using Corollary 1.12 yields
Corollary 1.15
(of Corollary 1.12) The minor of the Laplace matrix obtained by deletion of the ith row and the jth column is equal to \((-1)^{n-1}\) times the sum of monomials \(w_{a_1 b_1} \dots w_{a_{n-1} b_{n-1}}\) such that the graph \(([a_1 b_1], \dots , [a_{n-1} b_{n-1}])\) is a tree with all the edges directed towards the vertex i.
Corollaries 1.14 and 1.15 are particular cases of the matrix-tree theorem [9]. This justifies the names of “higher matrix-tree theorems” given to their generalizations—Corollaries 1.11 and 1.12.
Remark 1.16
Higher matrix-tree theorems formulated here are series of identities indexed by an integer \(k = n, n+1, \ldots \). Classical matrix-tree theorems form the base of this series at \(k=n\). It should be noted, though, that the specializations of Corollaries 1.11 and 1.12 to \(k=n\) do not cover all known variants of the matrix-tree theorem. Thus, in [3] the matrix-tree theorems apply to all minors of the Laplace matrix, while Corollaries 1.14 and 1.15 cover only the case of diagonal minors (of any size) and of codimension 1 minors (both diagonal and nondiagonal), respectively. So, the theory of higher matrix-tree theorems is open for further generalization.
1.4 Remarks and applications
1.4.1 Undirected case
Values of the Potts function \(Z_G(q,v)\) at some points have special meaning (cf. Proposition 1.10 above):
Proposition 1.17
( [11, V, (8) and (10)]) For every \(G \in \varUpsilon _{n,k}\)
(Recall that \(\ell (G)\) is the number of loops in G, and by \({{\mathfrak {S}}}_{n,k}\) and \({{\mathfrak {A}}}_{n,k}\) we denote the sets of all totally cyclic and acyclic graphs in \(\varGamma _{n,k}\), respectively.)
In view of Proposition 1.2, the first formula of Proposition 1.17 is equivalent to
and therefore
Thus, substitution of \(q=-1\) and \(v=1\) into Theorem 1.4 gives the identity
which can also be obtained from Corollary 1.11 by summation over all \(I \subset \{1 , \dots , n\}\) and application of the forgetful map \(\left| \smash {\cdot }\right| \).
1.4.2 Properties of the degree k minors
The universal minors \(\det _{n,k}^I\) and \(\det _{n,k}^{i/j}\) exhibit some properties one would expect from determinants and minors:
Proposition 1.18
(Generalized row and column expansion)
Proposition 1.19
(Partial derivative with respect to a diagonal matrix element) Let matrix elements \(w_{ij}\), \(i,j = 1 , \dots , n\), of the matrix W be independent commuting variables. Then for any \(i = 1 , \dots , n\) and any \(m = 1 , \dots , k\), one has
See Sect. 2 for the proofs.
1.4.3 Invariants of 3-manifolds
Universal determinants \(\det _{n,k}^I\) have an application in 3-dimensional topology, due to M. Polyak. We describe it briefly here; see [6] and the MSc. thesis [4] for detailed definitions, formulations and proofs.
A chainmail graph is defined as a planar undirected graph, possibly with loops but without parallel edges; the edges (including loops) are supplied with integer weights. We denote by \(w_{ab} = w_{ba}\) the weight of the edge joining vertices a and b; \(w_{aa}\) is the weight of the loop attached to the vertex a. If the edge [ab] is missing, then \(w_{ab} = 0\) by definition.
There is a way (see [6]) to define for every chainmail graph G a closed oriented 3-manifold \({\mathcal {M}}(G)\); any closed oriented 3-manifold is \({\mathcal {M}}(G)\) for some G (which is not unique). To the chainmail graph G with n vertices, one associates two \(n \times n\)-matrices: the weighted adjacency matrix \(W(G) = (w_{ij})\) and the Laplace (better to say, Schrödinger) matrix \(L(G) = (l_{ij})\) where \(l_{ij} {\mathop {=}\limits ^{\text{ def }}}w_{ij}\) for \(i \ne j\) and \(l_{ii} {\mathop {=}\limits ^{\text{ def }}}w_{ii} - \sum _{j \ne i} w_{ij}\). If all \(w_{ii} = 0\) (in this case G is called balanced), then L(G) is the Laplace matrix \(\widehat{W}\) as defined in Sect. 1.1.
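A minimal sketch of the Schrödinger matrix (function names are ours); for a balanced graph it coincides with the Laplace matrix of Sect. 1.1, and its rows sum to zero:

```python
def schroedinger(W):
    """L(G): l_ij = w_ij off the diagonal, l_ii = w_ii - sum_{j != i} w_ij."""
    n = len(W)
    return [[W[i][j] if i != j else W[i][i] - sum(W[i][k] for k in range(n) if k != i)
             for j in range(n)] for i in range(n)]

# A balanced chainmail graph (all w_ii = 0) on 3 vertices, with symmetric weights.
W = [[0, 2, 1],
     [2, 0, 3],
     [1, 3, 0]]
L = schroedinger(W)
assert L[0] == [-3, 2, 1] and L[1][1] == -5 and L[2][2] == -4
assert all(sum(row) == 0 for row in L)   # balanced case: rows of L(G) sum to zero
```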
Theorem
([6]; see details of the proof in [4])
1.
The rank of the homology group \(H_1({\mathcal {M}}(G),{{\mathbb {Z}}})\) is equal to \(\dim \hbox { Ker } L(G)\).
2.
If L(G) is nondegenerate (so that \({\mathcal {M}}(G)\) is a rational homology sphere and \(H_1({\mathcal {M}}(G),{{\mathbb {Z}}})\) is finite), then
$$\begin{aligned} \left| \smash {H_1({\mathcal {M}}(G),{{\mathbb {Z}}})}\right| = \left| \smash {\det L(G)}\right| = \left| \smash {\langle L(G) \mid \mathop {\det }\nolimits _{n,n}^\varnothing \rangle }\right| . \end{aligned}$$(6)
3.
If L(G) is nondegenerate, then
$$\begin{aligned} \langle W(G) \mid \varTheta _n\rangle = 12 \det L(G) \left( \lambda _{CW}({\mathcal {M}}(G)) - \frac{1}{4} {\hbox { sign }}(L(G))\right) \end{aligned}$$(7)where \(\lambda _{CW}\) is the Casson–Walker invariant [10] of the rational homology sphere \({\mathcal {M}}(G)\), \({\hbox { sign }}\) is the signature of the symmetric matrix L(G) and \(\varTheta _n\) is an element of \({\mathcal {G}}_{n,n+1} \oplus {\mathcal {G}}_{n,n-1}\) defined as
$$\begin{aligned} \varTheta _n {\mathop {=}\limits ^{\mathrm {def}}}\mathop {\det }\nolimits _{n,n+1}^\varnothing - \sum _{1 \le i \ne j \le n} ([ij])*\mathop {\det }\nolimits _{n,n-2}^{\{i,j\}} - 2\sum _{i=1}^n \mathop {\det }\nolimits _{n,n-1}^{\{i\}}. \end{aligned}$$
There is a conjecture that (6) and (7) begin a series of formulas for invariants of 3-manifolds; see [6] for details.
Corollary 1.20
( [4, Theorem 84]) If G is balanced, then \(\langle L(G) \mid \varTheta _n\rangle \) is equal to \(-2\) times the codimension 1 diagonal minor of L(G).
We give a detailed proof of this corollary in Sect. 2.4.
2 Proofs
2.1 Theorems 1.3 and 1.4
Recall that by \({\epsilon }(G)\) we denote the number of edges of the graph G (so \({\epsilon }(G) = k\) if \(G \in \varGamma _{n,k}\)).
Proof of Theorem 1.3
For a graph \(H \in \varGamma _{n,k}\) define \(\varDelta H {\mathop {=}\limits ^{\text{ def }}}\sum _{G \in \varGamma _{n,k}} y_{G,H} G\). Clearly, if \(y_{G,H} \ne 0\), then \({\widehat{H}}\) (the graph H with all the loops deleted) is a subgraph of G; so, if \(y_{G,H} \ne 0\), then \(y_{G,H} = (-1)^{\ell (H)} = (-1)^{k - {\epsilon }(\widehat{H})}\). Vice versa, for any subgraph \(\varPhi \subseteq G\) of a loopless graph G there exists exactly one H such that \(\varPhi = \widehat{H}\). To see this, recall that \(\varPhi \in \varGamma _{n,m}\) is called a subgraph of \(G \in \varGamma _{n,k}\) if it is obtained from G by deletion of some edges, with the appropriate renumbering of the remaining ones (see the beginning of Sect. 1.2 for details). If \([ab] \in G\) is one of the edges to be deleted, then H contains the loop [aa] bearing the same number as [ab]. One has \(B_{{\widehat{H}}}(q,y,z) = B_H(q,y,z)\) by Proposition 1.2, and therefore,
Use now the following
Lemma 2.1
(Möbius inversion formula, [7]) Let \(f:\bigcup _k \varGamma _{n,k} \rightarrow {{\mathbb {C}}}\) be a function on the set of graphs with n vertices. Define the function h on the same set by the equality \(h(G) = \sum _{H \subseteq G} f(H)\) for every \(G \in \varGamma _{n,k}\). Then \((-1)^{{\epsilon }(G)} f(G) = \sum _{H \subseteq G} (-1)^{{\epsilon }(H)} h(H)\).
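Lemma 2.1 is ordinary inclusion–exclusion over edge subsets, and can be checked on random data (a sketch; all names are ours, subgraphs of a fixed G are modelled as subsets of its edge set):

```python
import itertools, random

def subsets(edges):
    """All subsets of a tuple of edges, as tuples preserving the original order."""
    for r in range(len(edges) + 1):
        yield from itertools.combinations(edges, r)

# h(H) = sum_{F subset of H} f(F)  implies
# (-1)^{|G|} f(G) = sum_{H subset of G} (-1)^{|H|} h(H).
random.seed(2)
G = ((1, 2), (2, 3), (3, 1), (1, 3))
f = {H: random.randint(-5, 5) for H in subsets(G)}     # an arbitrary function f
h = {H: sum(f[F] for F in subsets(H)) for H in subsets(G)}
lhs = (-1) ** len(G) * f[G]
rhs = sum((-1) ** len(H) * h[H] for H in subsets(G))
assert lhs == rhs
```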
By [1, Eq. (21)], one has \(\sum _{\varPhi \subseteq G} [B_\varPhi ]_{{\epsilon }(\varPhi )}(q,y-1,z-1) = B_G(q,y,z)\), so it follows from the lemma that for every \(G \in \varGamma _{n,k}\)
Theorem 1.3 is proved. \(\square \)
Proof of Theorem 1.4
is similar to that of Theorem 1.3: again, if \(\varDelta {\mathcal Z}_{n,k}(q,v) = \sum _G x_G G\), then \(x_G \ne 0\) only if G has no loops. A graph H contributes \(y_{G,H} \ne 0\) to \(x_G\) if and only if \(\widehat{H} \subseteq G\). Unlike the directed case, for a subgraph \(\varPhi \subseteq G\) having \({\epsilon }(\varPhi )\) edges there are \(2^{k-{\epsilon }(\varPhi )}\) graphs H such that \(\varPhi = \widehat{H}\): every edge [ab] present in G but missing in \(\varPhi \) may correspond either to a loop [aa] or to a loop [bb] in H; recall that \(a \ne b\) because G is loopless.
The contribution \(y_{G,H}\) of all such graphs H into \(x_G\) is the same and is equal to \((-1)^{k-{\epsilon }(\varPhi )} Z_\varPhi (q,v)\). Now by Proposition 1.1
Theorem 1.4 is proved. \(\square \)
2.2 Propositions 1.18 and 1.19
Proof of Proposition 1.18
Let \([ij] \in G\) be the edge of G carrying number 1. If G is totally cyclic, then there is a directed path in \(G {\setminus } ([ij])\) joining j with i, and therefore, \(\beta _0(G {\setminus } ([ij])) = \beta _0(G)\). So G enters the left-hand side of (4) and the (i, j)th term of the sum in its right-hand side with the same coefficient. \(\square \)
Proof of Proposition 1.19
Let q be an integer, \(0 \le q \le k\); denote by \({{\mathfrak {S}}}_{n,k}^{[i:q]}\) the set of all graphs \(G \in {{\mathfrak {S}}}_{n,k}^\varnothing \) having q loops attached to vertex i. The graph \({\widehat{G}}^{(i)}\) obtained from G by deletion of all these loops belongs either to \({{\mathfrak {S}}}_{n,k-q}^{[i:0]} \subset {{\mathfrak {S}}}_{n,k-q}^\varnothing \) or, if \(q > 0\), to \({{\mathfrak {S}}}_{n,k-q}^{\{i\}}\) (totally cyclic graphs with the vertex i isolated). Vice versa, if \(q > 0\) and \({\widehat{G}}^{(i)} \in {{\mathfrak {S}}}_{n,k-q}^{[i:0]} \cup {{\mathfrak {S}}}_{n,k-q}^{\{i\}}\), then \(G \in {{\mathfrak {S}}}_{n,k}^{[i:q]}\). Deletion of a loop does not disconnect a graph, so \(\beta _0(G) = \beta _0({\widehat{G}}^{(i)})\).
Let \(A \subset \varGamma _{n,k}\). To shorten the notation, denote
(the “alternating-sign sum” of elements of A). If \(G \in {{\mathfrak {S}}}_{n,k}^{[i:q]}\), then there are \(\left( {\begin{array}{c}k\\ q\end{array}}\right) \) ways to assign numbers to the q loops of G attached to i. The expression \(\langle W \mid G\rangle \) does not depend on the edge numbering, so one has for \(q > 0\)
and therefore
All the factors in the last formula, except \(w_{ii}^q\), do not contain \(w_{ii}\). So, applying the operator \(\frac{\partial ^m}{\partial w_{ii}^m}\) to Eq. (8) and then using the equation again with \(k-m\) in place of k, one gets (5). \(\square \)
2.3 Higher matrix-tree theorems
Proof of Corollary 1.11
Take \(y=0\), \(z=1\) and \(q=-1\) in Theorem 1.3. Now one has
The polynomial \(B_G(q,y,z)\) is symmetric in y and z, and \([B_G]_k(q,y,z)\) is homogeneous of degree k in y and z, which implies \([B_G]_k(-1,-1,0) = (-1)^k [B_G]_k(-1,0,1)\), and therefore
by Proposition 1.10.
By definition, the Laplace operator preserves sinks (cf. Remark 1.5): if \(i_1 , \dots , i_s\) are the sinks of G (in particular, if \(G \in {{\mathfrak {S}}}_{n,k}^I\)), then \(\varDelta (G) = \sum _H x_H H\) where the sinks of every graph H (such that \(x_H \ne 0\)) are exactly the same, \(i_1 , \dots , i_s\). Since the sets \({{\mathfrak {S}}}_{n,k}^I\) with different I do not intersect, and the same is true for \({{\mathfrak {A}}}_{n,k}^I\), one obtains \(\varDelta \det _{n,k}^I = \frac{(-1)^n}{k!} \sum _{G \in {{\mathfrak {A}}}^I_{n,k}} G\) for every individual I. \(\square \)
Proof of Corollary 1.12
Note that \(\det _{n,k}^{i/i} = \det _{n,k}^\varnothing + \det _{n,k}^{\{i\}}\). Applying the operator \(\varDelta \) to Eq. (4) and using Corollary 1.11 with \(I = \varnothing \) and \(I = \{i\}\), one obtains
The (i, j)th term of the last formula consists of graphs where the edge [ij] carries the number 1. Hence, different terms of the formula cannot cancel, and therefore, every single term is equal to 0. \(\square \)
2.4 Corollary 1.20
(cf. this proof with that of [4, Claim 102]) For a balanced graph, one has \(\langle L(G) \mid \varTheta _n\rangle = \langle W(G) \mid \varDelta \varTheta _n\rangle \); consider three terms in the definition of \(\varTheta _n\) separately.
First, by Corollary 1.11 one has \(\varDelta \mathop {\det }\nolimits _{n,n+1}^\varnothing = 0\).
By the same Corollary 1.11, one has \(\varDelta \mathop {\det }\nolimits _{n,n-2}^{\{i,j\}} = \frac{1}{(n-2)!} \sum _{H \in {{\mathfrak {A}}}_{n,n-2}^{\{i,j\}}} H\); this is \(\frac{1}{(n-2)!}\) times the sum of two-component forests such that i and j belong to different components, and all the edges are directed towards i or j, respectively. Then, \(\varDelta ([ij])*\mathop {\det }\nolimits _{n,n-2}^{\{i,j\}} = ([ij])*\varDelta \mathop {\det }\nolimits _{n,n-2}^{\{i,j\}} = \frac{1}{(n-2)!} \sum _{H \in {{\mathfrak {A}}}_{n,n-2}^{\{i,j\}}} ([ij])*H\); the right-hand side is \(\frac{1}{(n-2)!}\) times the sum of all trees with the edges directed towards the vertex j and such that one of the edges (this is [ij]) adjacent to j has number 1.
Again, by Corollary 1.11, \(\varDelta \mathop {\det }\nolimits _{n,n-1}^{\{i\}}\) is \(\frac{1}{(n-1)!}\) times the sum of all trees with the edges directed towards the vertex i, without restrictions on the edge numbering.
For any \(\varPhi \in \varGamma _{n,k}\), the number \(\langle W(G) \mid \varPhi \rangle \) does not depend on the edge numbering in \(\varPhi \). The chainmail graph G is not directed, so W(G) is symmetric, and the number does not depend on the edge direction in \(\varPhi \) as well (so, it depends on \(\left| \smash {\varPhi }\right| \) only). So, \(\langle L(G) \mid \varTheta _n\rangle = \sum _H (c_1(H) + c_2(H)) \langle W(G) \mid H\rangle \) where H runs through the set of all undirected trees. The coefficients \(c_1(H)\) and \(c_2(H)\) coming from the second and the third terms in the definition of \(\varTheta _n\) are as follows. \(c_1(H) = \frac{1}{(n-2)!}\) times the number of ways to assign numbers to the edges times the number of ways to choose a root vertex which should be an endpoint of the edge number 1; that is, \(c_1(H) = \frac{1}{(n-2)!} (n-1)! \times 2 = 2n-2\). Also, \(c_2(H) = -\frac{2}{(n-1)!}\) times the number of ways to assign numbers to the edges times the number of ways to choose a root vertex (which can be any); that is, \(c_2(H) = -\frac{2}{(n-1)!} (n-1)! \times n = -2n\). Eventually, \(\langle L(G) \mid \varTheta _n\rangle = -2\sum _H \langle W(G) \mid H\rangle \), which is \(-2\) times the codimension 1 diagonal minor of L(G) by the matrix-tree theorem (Corollary 1.14).
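The values \(c_1(H) = 2n-2\) and \(c_2(H) = -2n\) do not depend on the shape of the tree H, so they are easy to confirm by enumeration (a minimal Python sketch; the sample tree on \(n = 5\) vertices and all variable names are ours, for illustration only):

```python
from itertools import permutations
from math import factorial

# a sample tree on n = 5 vertices; edges are unordered pairs of vertices
n = 5
tree_edges = [(0, 1), (1, 2), (1, 3), (3, 4)]    # n - 1 = 4 edges

c1_pairs = 0    # pairs (numbering, root) with the root an endpoint of edge number 1
all_pairs = 0   # all pairs (numbering, root)
for numbering in permutations(range(n - 1)):     # numbering[i] = label of edge i
    first_edge = tree_edges[numbering.index(0)]  # the edge carrying number 1
    for root in range(n):
        all_pairs += 1
        if root in first_edge:
            c1_pairs += 1

c1 = c1_pairs // factorial(n - 2)          # (n-1)! * 2 / (n-2)! = 2n - 2
c2 = -2 * all_pairs // factorial(n - 1)    # -2 * (n-1)! * n / (n-1)! = -2n
print(c1, c2, c1 + c2)  # 8 -10 -2
```

Any other tree on 5 vertices gives the same counts, in line with the argument above.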
2.5 A direct proof of the higher matrix-tree theorem
The matrix-tree theorem (Corollary 1.11) is proved in Sect. 2.3 by specialization of variables in Theorem 1.3. Here, we give a direct proof of the same result containing no reference to Theorem 1.3.
Consider the following functions on \(\varGamma _{n,k}\): the function \(\alpha \) given by \(\alpha (G) = (-1)^{{\epsilon }(G)}\) if G is acyclic and \(\alpha (G) = 0\) otherwise, and the function \(\sigma \) given by \(\sigma (G) = (-1)^{\beta _1(G)}\) if every connected component of G is strongly connected and \(\sigma (G) = 0\) otherwise, where \(\beta _1(G) {\mathop {=}\limits ^{\text{ def }}}\beta _0(G)+k-n\) is the first Betti number of the graph G. The key stage of the proof is the following proposition:
Proposition 2.2
Like Theorem 1.3, it can be proved by specialization of parameters in a certain identity involving Bernardi polynomials; see [1, Proposition 6.16]. We will, nevertheless, give its direct proof; it will constitute an answer to Question 6.17 from [1]. Note that the two statements of the proposition are equivalent by Lemma 2.1, so it will suffice to prove the first one.
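The identity of Proposition 2.2 can be checked by brute force on small digraphs. The Python sketch below hard-codes the definitions of \(\alpha \) (the signed indicator of acyclicity) and of \(\sigma \) (zero unless every component is strongly connected, \((-1)^{\beta _1}\) otherwise) as we read them off Clauses 2.5.1–2.5.4; it is an illustration, not part of the proof, and all helper names are ours:

```python
from itertools import combinations

def is_acyclic(n, edges):
    """True iff the directed multigraph has no directed cycle (Kahn's algorithm)."""
    indeg = [0] * n
    out = [[] for _ in range(n)]
    for u, v in edges:
        indeg[v] += 1
        out[u].append(v)
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == n

def reachable(n, edges, s):
    """Set of vertices reachable from s by a directed path."""
    out = [[] for _ in range(n)]
    for u, v in edges:
        out[u].append(v)
    seen, stack = {s}, [s]
    while stack:
        for v in out[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def sigma(n, edges):
    """(-1)**beta_1(G) if every component of G is strongly connected, else 0."""
    # every component strongly connected <=> every edge lies on a directed cycle
    if any(u not in reachable(n, edges, v) for u, v in edges):
        return 0
    und = edges + [(v, u) for u, v in edges]      # underlying undirected graph
    beta0, left = 0, set(range(n))
    while left:                                   # count connected components
        beta0 += 1
        left -= reachable(n, und, next(iter(left)))
    return (-1) ** (beta0 + len(edges) - n)       # beta_1 = beta_0 + k - n

def S(n, edges):
    """Sum of (-1)**|H| over all acyclic edge-subsets H, i.e. S(alpha, G)."""
    k = len(edges)
    return sum((-1) ** r for r in range(k + 1)
               for H in combinations(range(k), r)
               if is_acyclic(n, [edges[i] for i in H]))

examples = [
    (2, [(0, 1), (1, 0)]),                  # a 2-cycle: S = -1, sigma = -1
    (3, [(0, 1), (1, 2), (2, 0)]),          # a directed triangle
    (4, [(0, 1), (1, 2), (2, 0), (0, 3)]),  # edge (0, 3) on no cycle: sigma = 0
    (2, [(0, 1), (0, 1)]),                  # parallel edges: not strongly connected
]
for n, edges in examples:
    assert S(n, edges) == (-1) ** len(edges) * sigma(n, edges)
print("ok")
```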
To prove the proposition, we use induction on the number of edges of the graph G. If \({\mathcal {R}}\) is some set of subgraphs of G (different in different cases) and f is a function on the set of graphs, then for convenience we will write \(\mathscr {S}(f,{\mathcal {R}}) {\mathop {=}\limits ^{\text{ def }}} \sum _{H \in {\mathcal {R}}} f(H).\)
Also by a slight abuse of notation \(\mathscr {S}(f,G)\) will mean \(\mathscr {S}(f,2^G)\) where \(2^G\) is the set of all subgraphs of G.
Consider now the following cases:
2.5.1 G is disconnected
Let \(G = G_1 \sqcup \dots \sqcup G_m\) where \(G_i\) are connected components. A subgraph \(H \subseteq G\) is acyclic if and only if the intersection \(H_i {\mathop {=}\limits ^{\text{ def }}}H \cap G_i\) is acyclic for all i. Hence, \(\alpha (H) = \alpha (H_1) \dots \alpha (H_m)\), and therefore, \(\mathscr {S}(\alpha ,G) = \mathscr {S}(\alpha , G_1) \dots \mathscr {S}(\alpha , G_m)\). By the induction hypothesis, \(\mathscr {S}(\alpha , G_i) = (-1)^{{\epsilon }(G_i)}\sigma (G_i)\). So, \(\mathscr {S}(\alpha ,G) = \prod _{i=1}^m (-1)^{{\epsilon }(G_i)}\sigma (G_i) = (-1)^{{\epsilon }(G)}\sigma (G)\), because \({\epsilon }\) is additive and \(\sigma \) is multiplicative with respect to disjoint union.
It will suffice now to prove Proposition 2.2 for connected graphs G.
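The multiplicativity of \(\mathscr {S}\) over connected components is easy to observe numerically (a sketch; the choice of \(G_1\), a 2-cycle, and \(G_2\), a directed triangle, is ours):

```python
from itertools import combinations

def is_acyclic(n, edges):
    """Kahn's algorithm: True iff the directed multigraph has no directed cycle."""
    indeg = [0] * n
    out = [[] for _ in range(n)]
    for u, v in edges:
        indeg[v] += 1
        out[u].append(v)
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == n

def S(n, edges):
    """Signed sum over acyclic edge-subsets: sum of (-1)**|H| over acyclic H."""
    k = len(edges)
    return sum((-1) ** r for r in range(k + 1)
               for H in combinations(range(k), r)
               if is_acyclic(n, [edges[i] for i in H]))

g1 = [(0, 1), (1, 0)]           # a 2-cycle on {0, 1}
g2 = [(2, 3), (3, 4), (4, 2)]   # a directed triangle on {2, 3, 4}
union = g1 + g2                 # their disjoint union on 5 vertices

# S is multiplicative over connected components
assert S(5, union) == S(2, g1) * S(5, g2)
print(S(2, g1), S(5, g2), S(5, union))   # -1 1 -1
```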
2.5.2 G is connected and not strongly connected
In this case, G contains an edge e that does not enter any directed cycle (if every edge entered a directed cycle, every edge could be traversed in both directions, and the connected graph G would be strongly connected). If \(H \subset G\) is acyclic and \(e \notin H\), then \(H \cup \{e\}\) is acyclic, too. The converse is true for any e: if an acyclic graph \(H \subset G\) contains e, then \(H {\setminus } \{e\}\) is acyclic. Therefore, the summands \(\alpha (H)\) and \(\alpha (H \cup \{e\}) = -\alpha (H)\) cancel in pairs, so \(\mathscr {S}(\alpha ,G) = 0\); since G is not strongly connected, this is equal to \((-1)^{{\epsilon }(G)}\sigma (G)\).
So it will suffice to prove Proposition 2.2 for strongly connected graphs G.
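The pairwise cancellation \(H \leftrightarrow H \cup \{e\}\) can be watched in action on small connected, non-strongly-connected digraphs (a sketch; the test graphs are ours):

```python
from itertools import combinations

def is_acyclic(n, edges):
    """Kahn's algorithm: True iff the directed multigraph has no directed cycle."""
    indeg = [0] * n
    out = [[] for _ in range(n)]
    for u, v in edges:
        indeg[v] += 1
        out[u].append(v)
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == n

def S(n, edges):
    """Signed sum over acyclic edge-subsets: sum of (-1)**|H| over acyclic H."""
    k = len(edges)
    return sum((-1) ** r for r in range(k + 1)
               for H in combinations(range(k), r)
               if is_acyclic(n, [edges[i] for i in H]))

# connected but not strongly connected digraphs: the signed sum vanishes
examples = [
    (2, [(0, 1)]),                                   # a single edge
    (3, [(0, 1), (1, 2), (2, 1)]),                   # a tail attached to a 2-cycle
    (4, [(0, 1), (1, 0), (1, 2), (2, 3), (3, 2)]),   # two 2-cycles and a bridge
]
for n, edges in examples:
    assert S(n, edges) == 0
print("all signed sums vanish")
```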
2.5.3 G is strongly connected and contains a crucial edge
We call an edge e of a strongly connected graph G crucial if \(G {\setminus } \{e\}\) is not strongly connected. Suppose \(e = [ab] \in G\) is a crucial edge.
Denote by \({\mathcal {R}}_e^-\) (resp., \({\mathcal {R}}_e^+\)) the set of all subgraphs \(H \subset G\) such that \(e \notin H\) (resp., \(e \in H\)). The graph \(G {\setminus } \{e\}\) is connected (every edge of the strongly connected graph G enters a directed cycle, so e is not a bridge), is not strongly connected and contains one edge less than G, so by Clause 2.5.2, \(\mathscr {S}(\alpha ,{\mathcal {R}}_e^-) = \mathscr {S}(\alpha ,G {\setminus } \{e\}) = 0\).
Let now \(H \in {\mathcal {R}}_e^+\) be acyclic; such H contains no directed paths joining b with a. The graph \(G {\setminus } \{e\}\) does not contain a directed path joining a with b either: if it did, \(G {\setminus } \{e\}\) would be strongly connected. So any directed path joining a with b in H necessarily contains e, and therefore, the graph \(H/e \subset G/e\) (obtained by contraction of the edge e) is acyclic. The converse is true for any e: if \(e \in H\) and \(H/e \subset G/e\) is acyclic, then \(H \subset G\) is acyclic, too. The graph G/e is strongly connected, contains one edge less than G, and \(\beta _1(G/e) = \beta _1(G)\), so \(\sigma (G/e) = \sigma (G)\). The graph H/e contains one edge less than H, so \(\alpha (H/e) = -\alpha (H)\). Now by the induction hypothesis, \(\mathscr {S}(\alpha ,{\mathcal {R}}_e^+) = -\mathscr {S}(\alpha ,G/e) = -(-1)^{{\epsilon }(G)-1}\sigma (G/e) = (-1)^{{\epsilon }(G)}\sigma (G)\),
and then (9) implies \(\mathscr {S}(\alpha ,G) = \mathscr {S}(\alpha ,{\mathcal {R}}_e^-) + \mathscr {S}(\alpha ,{\mathcal {R}}_e^+) = (-1)^{{\epsilon }(G)}\sigma (G)\).
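The sign flip under contraction of a crucial edge can be checked on the smallest example (a sketch; the contraction is hard-coded by merging the endpoints of e, and the test graph, two 2-cycles sharing the vertex 1, is ours):

```python
from itertools import combinations

def is_acyclic(n, edges):
    """Kahn's algorithm: True iff the directed multigraph has no directed cycle
    (a loop, possibly created by contraction, counts as a cycle)."""
    indeg = [0] * n
    out = [[] for _ in range(n)]
    for u, v in edges:
        indeg[v] += 1
        out[u].append(v)
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == n

def all_subsets(k):
    return [H for r in range(k + 1) for H in combinations(range(k), r)]

def signed_sum(n, edges, subsets):
    """Sum of (-1)**|H| over the given index-subsets H that are acyclic."""
    return sum((-1) ** len(H) for H in subsets
               if is_acyclic(n, [edges[i] for i in H]))

# G: two 2-cycles 0 <-> 1 and 1 <-> 2; the edge e = (1, 2) is crucial
G_edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
e = 2                                            # index of the crucial edge

plus = [H for H in all_subsets(4) if e in H]     # subgraphs containing e
S_plus = signed_sum(3, G_edges, plus)

# G/e: merge vertex 2 into vertex 1 and drop e itself; (2, 1) becomes a loop
merge = lambda x: 1 if x == 2 else x
contracted = [(merge(u), merge(v))
              for i, (u, v) in enumerate(G_edges) if i != e]
S_contr = signed_sum(2, contracted, all_subsets(3))

assert S_plus == -S_contr   # contraction drops one edge, flipping the sign
print(S_plus, S_contr)      # 1 -1
```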
2.5.4 G is strongly connected and contains no crucial edges
Let \(e = [ab]\in G\) be an edge and not a loop: \(b \ne a\). Recall that \(G_e^{\vee }\) means a graph obtained from G by reversal of the edge e: \(e \mapsto [ba]\). Since e is not crucial, \(G {\setminus } \{e\} = G_e^{\vee } {\setminus } \{e\}\) is strongly connected. So \(G_e^{\vee }\) is strongly connected, too, implying \(\sigma (G_e^{\vee }) = \sigma (G)\).
Lemma 2.3
If the graph G is strongly connected and the edge \(e = [ab] \in G\) is not crucial, then \(\mathscr {S}(\alpha ,G) = (-1)^k\sigma (G)\) if and only if \(\mathscr {S}(\alpha ,G_e^{\vee }) = (-1)^k\sigma (G_e^{\vee }) = (-1)^k\sigma (G)\).
Proof
For two vertices a, b of some graph G, we will write \(a \succeq _{{}_{\scriptstyle G}} b\) (or just \(a \succeq b\) if the graph is evident) if G contains a directed path joining a with b.
Acyclic subgraphs \(H \subset G\) are split into five classes:
- I.
\(e \notin H\), but \(a \succeq _{{}_{\scriptstyle H}} b\).
- II.
\(e \notin H\), but \(b \succeq _{{}_{\scriptstyle H}} a\).
- III.
\(e \notin H\), and both \(a \not \succeq _{{}_{\scriptstyle H}} b\) and \(b \not \succeq _{{}_{\scriptstyle H}} a\).
- IV.
\(e \in H\), and \(a \succeq _{{}_{\scriptstyle H {\setminus } \{e\}}} b\).
- V.
\(e \in H\), and \(a \not \succeq _{{}_{\scriptstyle H {\setminus } \{e\}}} b\).
Obviously, \(H \in \hbox {I}\) if and only if \(H \cup \{e\} \in \hbox {IV}\). One has \({\epsilon }(H \cup \{e\}) = {\epsilon }(H)+1\), so \(\mathscr {S}(\alpha ,\hbox {I} \cup \hbox {IV}) = 0\) (10).
Also, \(H \in \hbox {III}\) if and only if \(H \cup \{e\} \in \hbox {V}\), and similar to (10) one has \(\mathscr {S}(\alpha ,\hbox {III} \cup \hbox {V}) = 0\); therefore, \(\mathscr {S}(\alpha ,G) = \mathscr {S}(\alpha ,\hbox {II})\) (11).
As in Clause 2.5.3, if \(H \in \hbox {V}\), then \(H/e \subset G/e\) is acyclic, and vice versa, if \(e \in H\) and \(H/e \subset G/e\) is acyclic, then \(H \in \hbox {V}\). The graph G/e is strongly connected, so by the induction hypothesis \(\mathscr {S}(\alpha ,\hbox {V}) = -\mathscr {S}(\alpha ,G/e) = -(-1)^{k-1} \sigma (G/e) = (-1)^k \sigma (G)\), hence \(\mathscr {S}(\alpha ,\hbox {III}) = -(-1)^k\sigma (G)\).
If \(e \notin H\) and H is acyclic, then H is an acyclic subgraph of the strongly connected graph \(G {\setminus } \{e\}\). The graph G is strongly connected, too, so e enters a cycle, and \(\beta _1(G {\setminus } \{e\}) = \beta _1(G)-1\), which implies \(\sigma (G {\setminus } \{e\}) = -\sigma (G)\). One has \({\epsilon }(G {\setminus } \{e\}) = k-1 < k\), so by the induction hypothesis \(\mathscr {S}(\alpha ,\hbox {I}) + \mathscr {S}(\alpha ,\hbox {II}) + \mathscr {S}(\alpha ,\hbox {III}) = \mathscr {S}(\alpha ,G {\setminus } \{e\}) = (-1)^{k-1}\sigma (G {\setminus } \{e\}) = (-1)^k \sigma (G)\) (12).
A subgraph \(H \subset G\) of class I is at the same time a subgraph \(H \subset G_e^{\vee }\) of class II. So, (11) applied to \(G_e^{\vee }\) gives \(\mathscr {S}(\alpha , \hbox {I}) = \mathscr {S}(\alpha , G_e^{\vee })\). It follows now from (11) and (12) that \(\mathscr {S}(\alpha ,G) + \mathscr {S}(\alpha ,G_e^{\vee }) = \mathscr {S}(\alpha ,\hbox {II}) + \mathscr {S}(\alpha ,\hbox {I}) = 2 \cdot (-1)^k \sigma (G)\) (13),
which proves the lemma. \(\square \)
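Lemma 2.3 admits a direct numerical check: reversing a non-crucial edge of a strongly connected graph leaves the signed sum unchanged (a sketch; the bidirected triangle, none of whose edges is crucial, is our choice of test graph):

```python
from itertools import combinations

def is_acyclic(n, edges):
    """Kahn's algorithm: True iff the directed multigraph has no directed cycle."""
    indeg = [0] * n
    out = [[] for _ in range(n)]
    for u, v in edges:
        indeg[v] += 1
        out[u].append(v)
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == n

def S(n, edges):
    """Signed sum over acyclic edge-subsets: sum of (-1)**|H| over acyclic H."""
    k = len(edges)
    return sum((-1) ** r for r in range(k + 1)
               for H in combinations(range(k), r)
               if is_acyclic(n, [edges[i] for i in H]))

# the bidirected triangle: strongly connected, and no edge of it is crucial
G = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 0), (0, 2)]
base = S(3, G)
for i, (u, v) in enumerate(G):
    G_rev = G[:i] + [(v, u)] + G[i + 1:]   # reverse the i-th edge only
    assert S(3, G_rev) == base             # Lemma 2.3: the sum is unchanged
print(base)  # 1
```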
To complete the proof of Proposition 2.2, let a be a vertex of G, and let \(e_1 , \dots , e_m\) be the complete list of edges finishing at a, except loops. Consider the sequence of graphs \(G_0 = G\), \(G_1 = G_{e_1}^{\vee }\), \(G_2 = (G_1)_{e_2}^{\vee }\), ..., \(G_m = (G_{m-1})_{e_m}^{\vee }\). The graph \(G_0 = G\) is strongly connected, while the graph \(G_m\) is not, because it contains no edges finishing at a (except possibly loops). Take the biggest p such that the graphs \(G_0 , \dots , G_p\) are all strongly connected. Since \(p < m\), the graph \(G_{p+1}\) exists and is not strongly connected; therefore, \(G_p {\setminus } \{e_{p+1}\} = G_{p+1} {\setminus } \{e_{p+1}\}\) is not strongly connected either. So, the edge \(e_{p+1}\) is crucial for the graph \(G_p\), and by Clause 2.5.3 one has \(\mathscr {S}(\alpha ,G_p) = (-1)^k \sigma (G_p) = (-1)^k \sigma (G)\). The graphs \(G_0=G, G_1 , \dots , G_p\) are strongly connected, so for any \(i = 0 , \dots , p-1\) the edge \(e_{i+1}\) is not crucial for the graph \(G_i\). Applying Lemma 2.3 successively to the pairs \((G_{p-1}, G_p) , \dots , (G_0, G_1)\), one now obtains \(\mathscr {S}(\alpha ,G) = (-1)^k \sigma (G)\).
Proposition 2.2 is proved.
The rest of the proof of the matrix-tree theorem follows the lines of Sect. 2.1. Namely, let \(\varDelta \det _{n,k}^I = \sum _{G \in \varGamma _{n,k}} x_G G\); every graph G in the right-hand side has the vertices \(i_1 , \dots , i_s \in I\), and only them, as sinks. For \(H \in {{\mathfrak {S}}}_{n,k}^I\), the contribution of \(\varDelta H\) to \(x_G\) is nonzero if and only if \(\widehat{H} \subseteq G\); in this case, the contribution is equal to
the last equality follows from the expression for the Euler characteristic: \(\chi (\widehat{H}) = n - {\epsilon }(\widehat{H}) = \beta _0(\widehat{H}) - \beta _1(\widehat{H})\). Now by Proposition 2.2
This finishes the proof.
References
Awan, J., Bernardi, O.: Tutte polynomials for directed graphs. arXiv:1610.01839v2
Burman, Yu., Ploskonosov, A., Trofimova, A.: Matrix-tree theorems and discrete path integration. Linear Algebra Appl. 466, 64–82 (2015)
Chaiken, S.: A combinatorial proof of the all minors matrix tree theorem. SIAM J. Algebraic Discrete Methods 3(3), 319–329 (1982)
Epstein, B.: A combinatorial invariant of \(3\)-manifolds via cycle-rooted trees. M.Sc. thesis (under the supervision of Prof. M. Polyak), Technion, Haifa, Israel (2015)
Kirchhoff, G.: Über die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Vertheilung galvanischer Ströme geführt wird. Ann. Phys. Chem. 72, 497–508 (1847)
Polyak, M.: From \(3\)-manifolds to planar graphs and cycle-rooted trees, talk at Arnold’s legacy conference. Fields Institute, Toronto (2014)
Rota, G.-C.: On the foundations of combinatorial theory I: theory of Möbius functions. Z. Wahrsch. Verw. Gebiete 2, 340–368 (1964)
Sokal, A.: The multivariate Tutte polynomial (alias Potts model) for graphs and matroids. In: Webb, B.S. (ed.) Surveys in combinatorics 2005. London Mathematical Society Lecture Note Series, vol. 327, pp. 173–226. Cambridge University Press, Cambridge (2005)
Tutte, W.T.: The dissection of equilateral triangles into equilateral triangles. Proc. Cambridge Philos. Soc. 44(4), 463–482 (1948)
Walker, K.: An Extension of Casson’s Invariant, Annals of Mathematics Studies, 126. Princeton University Press, Princeton (1992)
Welsh, D.J.A., Merino, C.: The Potts model and the Tutte polynomial. J. Math. Phys. 41(3), 1127–1152 (2000)
Acknowledgements
The research was inspired by numerous discussions with Prof. Michael Polyak (Technion, Haifa, Israel), to whom the author wishes to express his most sincere gratitude.
The research was funded by the Russian Academic Excellence Project ‘5-100’ and by the Simons-IUM fellowship 2017 by the Simons Foundation.
Burman, Y. Higher matrix-tree theorems and Bernardi polynomial. J Algebr Comb 50, 427–446 (2019). https://doi.org/10.1007/s10801-018-0863-x