Linearised Higher Variational Equations

This work explores the tensor and combinatorial constructs underlying the linearised higher-order variational equations of a generic autonomous system along a particular solution. The main result of this paper is a compact yet explicit and computationally amenable form for said variational systems and their monodromy matrices. Alternatively, the same methods are useful to retrieve, and sometimes simplify, systems satisfied by the coefficients of the Taylor expansion of a formal first integral for a given dynamical system. This is done in preparation for further results within Ziglin-Morales-Ramis theory, specifically those of a constructive nature.


1 Introduction

1.1 Motivation and first definitions
Integrability, an informal word for reasonably simple solvability, is an important problem in Dynamical Systems. Its opposite phenomenon, and specifically low predictability with respect to time, is usually summarised under the term chaos. If the system is Hamiltonian, as are most problems in Mechanics, the "chaos vs solvability" dichotomy is doubly advantageous. On one hand, the system is amenable to the techniques of Symplectic Geometry. On the other, theory and empirics yield the specific, thus observable, integrability condition described in §1.3.
The introduction of the algebraic approach by Ziglin, Morales-Ruiz and Ramis produced hallmark contributions to the study of the integrability of Hamiltonian systems [6,22,23,31], essentially based on a study of the invariants of a given matrix group associated to a linear system: the first-order variational equations introduced in §1.2. A second step forward was carried out by Morales-Ruiz, Ramis and Simó ([24]) in order to extend the preceding Galoisian framework to the groups of the higher-order variational equations along a particular solution.
The second step described above is the driving force behind this paper. A constructive version of the Morales-Ramis-Simó theorem was already started in [2] and tangentially tackled from another viewpoint in [5] (see §5) and the present work aims at expanding this effort by offering a closed-form expression for the linearised higher variationals. May the reader bear in mind that nowhere from §2 onwards, except for §6, is the system required to be Hamiltonian.

1.2 Dynamical systems and variational equations
In accordance with results described in §1.3 and thereafter, we need to observe the following convention outside of Sections 2 and 3: dependent and independent variables for all dynamical systems will be allowed to be complex. Any open set T ⊆ P¹_C is an admissible domain for the time variable, embedded into the Riemann sphere so as to include t = ∞ as a valid singularity. Consider an autonomous holomorphic dynamical system

ż = X(z), z ∈ U ⊆ C^n. (DS)

Conserved quantities and solution curve foliations are defined similarly to their real-valued counterparts. Indeed, a first integral of (DS) is a function F : U → C constant along every solution of (DS). And for every z ∈ U, the unique solution ϕ(t, z) of (DS) such that ϕ(0, z) = z allows us to define a function ϕ(·, ·) in n + 1 variables called the flow of (DS). Clarifying preliminary comments are in order whenever a particular solution ψ(t) is considered:

a) partial derivatives (∂^k ϕ/∂z^k)(t, ψ) are multilinear functions of increasing order (or multidimensional matrices, see e.g. [17]) and appear in the Taylor series (1) of the flow along ψ;

b) each of these derivatives (∂^k ϕ/∂z^k)(t, ψ) satisfies an echeloned set of differential systems, depending on the previous k − 1 partial derivatives and customarily called variational equations or systems. They are explicitly called higher-order whenever k ≥ 2;

c) the variational system for k = 1 is linear and satisfied by the linear part of the flow along ψ:

ξ̇ = A_1(t) ξ, A_1(t) := X′(ψ) ∈ Mat_n(K), (VE_ψ)

K = C(ψ) being the smallest differential field containing C(t) and the solution;

d) for k ≥ 2, however, the system is not linear, yet a linearised version may be found. The aim of the present paper is to do so with explicit formulae.

1.3 Morales-Ramis-Ziglin theory and extensions
Heuristics of all results within the Ziglin-Morales-Ramis-Simó theoretical framework are firmly rooted in the following principle, expected to affect a widespread class of systems: if the general system (DS) is "integrable" in some reasonable sense, then the system satisfied by each of the partial derivatives of the flow at every particular solution ψ of (DS) must also be integrable in an accordingly reasonable sense.
Theorem 1.2 (Morales-Ruiz, Ramis, Simó, 2005, [24, Th. 5]). Let H be as in the previous theorem. Let G_k be the differential Galois group of the k-th variational equations VE^k_ψ, k ≥ 1, and G := lim←− G_k the formal differential Galois group (inverse limit of the groups) of X_H along ψ. If X_H is integrable by meromorphic first integrals, then the identity components of the Galois groups G_k and G are abelian.
Theorem 1.2 makes use of the language of jets, after proving the non-linear VE^k_ψ equivalent to any consistent linearised completion. Efforts towards a constructive version of this main Theorem, as well as the line of study described in §5, are hampered by a lack of consensus on the explicit block structure of this completion. The present work, summarised in its main result (Proposition 4.5), aims at contributing to fill this gap. Hence, outcomes will be restricted to symbolic calculus and bear no new results in the above theoretical framework.

Notation 1.3.

1. The modulus |i| of a multi-index i = (i_1, . . . , i_n) ∈ Z^n is the sum of its entries. Multi-index addition and subtraction are defined entrywise as usual.

3. Given a complex analytic function, the row of its partial derivatives ∂^i with |i| = m has entries ordered as per <_lex on multi-indices.

5. We define d_{n,k} := \binom{n+k−1}{n−1} and D_{n,k} := Σ_{i=1}^{k} d_{n,i}. It is easy to check there are d_{n,k} non-decreasing k-tuples of integers in {1, . . . , n}, and just as many homogeneous monomials of degree k in n variables.

Notation 1.4. Given integers k_1, . . . , k_n ≥ 0, we define the usual multinomial coefficient \binom{k_1+⋯+k_n}{k_1, …, k_n} = (k_1 + ⋯ + k_n)! / (k_1! ⋯ k_n!). For a multi-index k ∈ Z^n_{≥0}, define k! := k_1! ⋯ k_n!. For any two such k, j, we define \binom{k}{j} := k!/(j! (k − j)!) and the multi-index counterpart to the multinomial, \binom{k_1+⋯+k_m}{k_1, …, k_m}.

2 Symmetric products and powers of finite matrices

2.1 Definition and properties
The compact formulation called for by (1) and Notation 1.3 (3) will be achieved through a product ⊙ that was already defined by other means by U. Bekbaev (e.g. [8,9,10,11]) and will be systematised using basic categorical properties of the tensor product. Let K be a field and V a K-vector space. See [13,14,18] for details.
In other words, Hom_K(S, W) ≅ S(V^n, W) holds between the vector space of linear maps S → W and the vector space of symmetric multilinear maps V^n → W.

Proposition 2.2. Given any K-vector space V and any r ∈ N,

a) a symmetric power (Sym^r V, ϕ) exists, unique up to isomorphism. We write v_1 ⊙ ⋯ ⊙ v_r := ϕ(v_1, . . . , v_r). Hence, the product ⊙ operates exactly like products of homogeneous polynomials in several variables.

Remark 2.3. Sym^r V may also be defined in terms of the tensor power V^{⊗r}.

Given any K-vector space W and two linear maps f, g : V → W, the resulting map is immediately bilinear and symmetric, so it is granted a unique linear h_⊙, for any linear maps f_1, g_1 : W → W_1. A similar construction applies to the symmetric product of m ≥ 3 linear maps f_i : V → W. Let us generalise the above symmetric product into one involving any two linear maps. Assume V and W are finite-dimensional, V having basis {e_1, . . . , e_n}. Defining the bilinear map ϕ(u_1, u_2) := u_1 ⊙ u_2, u_i ∈ Sym^{j_i} V, we are interested in finding a bilinear function h in terms of f and g generalising (4), for which there is a unique linear h_⊙ completing the diagram. We want h to yield coefficient 1 for all-round repeated vectors as in (4). The symmetric, multilinear h : V^{×(j_1+j_2)} → Sym^{i_1+i_2} W is easier to define, generalising (4) and the example in [22, p. 155]: for any u_1, . . . , u_{j_1+j_2} ∈ V, where α_{j_1,j_2} = 1/\binom{j_1+j_2}{j_1} and the sum is taken over the σ ∈ S_{j_1,j_2} needed for the diagram to commute. Let u_{i_1}, . . . , u_{i_{j_1+j_2}} ∈ {e_1, . . . , e_n}. Split into copies of separate basis vectors: {u_{i_1}, …, u_{i_{j_1}}} = {e_1, (p_1 times) . . ., e_1, . . . , e_n, (p_n times) . . ., e_n}, {u_{i_{j_1+1}}, …, u_{i_{j_1+j_2}}} = {e_1^{×q_1}, . . . , e_n^{×q_n}}, with |p| = j_1 and |q| = j_2, and define k := p + q. The expression of (7) in these basis elements is now an immediate consequence of basic combinatorics: h(e_1, (k_1 times) . . ., e_1, . . . , e_n, (k_n times) . . ., e_n) = 1, leaving no option for (9) to commute but h(e^{⊙p}, e^{⊙q}) = 1. Finally, the universal property on (Sym^{j_1+j_2} V, ϕ) yields a unique h_⊙ such that h_⊙ ∘ ϕ ≡ h, and ϕ ∘ (ϕ_1 × ϕ_2) ≡ ϕ. Fixing ϕ (and h), the uniqueness of h_⊙ follows from construction: any other h_• rendering (6) commutative would require the commutativity of the outer perimeter of (10), hence h_• ≡ h_⊙. Hence all we need to do is express f ⊙ g := h_⊙ in terms of its action on basis elements (3) to obtain a simple, explicit form.

Notation 2.4. When dealing with matrix sets, we will use super-indices and subindices:

1. The space of (i, j)-matrices Mat^{i,j}_{m,n}(K) is either defined by its underlying set, i.e. all d_{m,i} × d_{n,j} matrices having entries in K, or as a vector space of linear maps between symmetric powers.

2. It is clear from the above that Mat^{0,0}_n(K) is the set of all scalars α ∈ K and Mat^{0,k}_n(K) (resp. Mat^{k,0}_n(K)) is made up of all row (resp. column) vectors whose entries are indexed by the d_{n,k} lexicographically ordered k-tuples.

3. Reference to K may be dropped and notation may be abridged if dimensions are repeated or trivial, e.g. Mat^{i,j}_n := Mat^{i,j}_{n,n}, Mat^i_{m,n} := Mat^{i,i}_{m,n}, Mat_n := Mat^1_n, etcetera.

Checking that the product ⊙ defined below renders diagrams (6) and (10) commutative is immediate.

3. For square A ∈ Mat^{1,1}_n, powers A^{⊙r} according to (11) and (12) are obviously consistent with the multiple product (5), hence equal to the established definitions for the group morphism Sym^r : GL(V) → GL(Sym^r V) in multilinear algebra textbooks, such as the expression in terms of the permanent of A (e.g. [13, Th. 9]).

Example 2.8. Given matrices A ∈ Mat^{1,1}_2(K) and B ∈ Mat^{3,2}_2(K), we may write them out entrywise, and it is immediate to check the (4, 3) (hence four-column, five-row) matrix product A ⊙ B entry by entry.
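The identification of A^{⊙r} with the classical group morphism Sym^r lends itself to a small computational sanity check. The sketch below (all representation choices and helper names are ours, not the paper's) builds the matrix of Sym^r(A) on the lex-ordered basis of degree-r monomials and verifies the morphism property Sym^r(AB) = Sym^r(A) Sym^r(B); the basis size also confirms the count d_{n,r} from Notation 1.3:

```python
from itertools import combinations_with_replacement
from math import comb

def sym_power(A, r):
    """Matrix of Sym^r(A) acting on lex-ordered degree-r monomials e^{(J)}."""
    n = len(A)
    basis = list(combinations_with_replacement(range(n), r))
    assert len(basis) == comb(n + r - 1, n - 1)   # = d_{n,r} from Notation 1.3
    cols = []
    for J in basis:
        # expand (A e_{j_1}) ⊙ ... ⊙ (A e_{j_r}) as a polynomial in the e_i
        poly = {(): 1}
        for j in J:
            nxt = {}
            for mono, c in poly.items():
                for i in range(n):
                    if A[i][j]:
                        key = tuple(sorted(mono + (i,)))
                        nxt[key] = nxt.get(key, 0) + c * A[i][j]
            poly = nxt
        cols.append(poly)
    return [[cols[j].get(I, 0) for j in range(len(basis))] for I in basis]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A, B = [[1, 2], [3, 4]], [[0, 1], [1, 1]]
# group-morphism (functoriality) property: Sym^r(AB) = Sym^r(A) Sym^r(B)
assert sym_power(matmul(A, B), 2) == matmul(sym_power(A, 2), sym_power(B, 2))
```

The expansion loop is exactly the "products of homogeneous polynomials" reading of ⊙ from Proposition 2.2: each column of Sym^r(A) collects the coefficients of the product of r linear forms.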
The following is straightforward to prove from either direct application of the universal property or the techniques used in [8,10], and will not be delved into here: Proposition 2.9. For any A, B, C, and whenever products make sense,

e) If A is square and invertible, then so is A^{⊙r}, with (A^{⊙r})^{−1} = (A^{−1})^{⊙r}.
The universal property on (7) and diagram (10), with different notation, yield:

Lemma 2.10. For any two matrices A ∈ Mat^{i,j}_n and B ∈ Mat^{p,q}_n and v_1, . . . , with S_{j,q} defined as in (8).
Lemma 2.11. a) ([8,10]) For any A ∈ Mat^{p,q}_n, B ∈ Mat^{q,r}_n, and v ∈ Sym^j K^n,

Proof. a) It suffices to prove it for basis elements of Sym^r: for any k such that |k| = r, this is immediate from equation (13) or the definition (11) of ⊙ itself.

b) Using the previous item, the associative property in Proposition 2.9, and the fact that e_m ⊙ e_m^T ∈ Mat^{2,2}_n is zero save for a 1 in position (m, m).

2.2 More properties of ⊙
We need to generalise some of the properties in Proposition 2.9 for later purposes. Applying the universal property on (4) (with V := Sym^k K^n) or (7) (with j_1 = j_2 = k), followed by (11) and (16) for m = 2, as well as the universal property on (5) and (12) for arbitrary m, we obtain:

Lemma 2.12. Given square A, B ∈ Mat^{k,k}_n and matrices X_i ∈ Mat^{k,j_i}_n, i = 1, 2, and in general for any square A_1, . . . , A_m ∈ Mat^{k,k}_n and any matrices as in (13), we have:

An immediate consequence of either Lemma 2.12 or Lemma 2.13 is:

Corollary 2.14. Given a square matrix A ∈ Mat^{1,1}_n and X_1, . . . , X_m such that X_i ∈ Mat^{1,j_i}_n,

Furthermore, Xv ⊙ Id^{⊙r}_n X^{⊙r} = Xv ⊙ X^{⊙r} in virtue of (14); applying this, (15), Proposition 2.9 and a detailed scrutiny of the effect on basis products e^{⊙⋆} yields:

Lemma 2.15. Given a square matrix X ∈ Mat^{1,1}_n, any vector v ∈ K^n and r ≥ 1,

If (K, ∂) is a differential field [29] and we extend the derivation ∂ entrywise, ∂(a_{i,j}) := (∂a_{i,j}), the Leibniz rule ∂(x ⊙ y) = ∂x ⊙ y + x ⊙ ∂y holds on vector products as trivially as it does for homogeneous polynomials in n variables, in virtue of Proposition 2.2 or Sym^{k_1+k_2}(V^⋆) ≅ S^{k_1+k_2}(V, K); (11) implies:

Although the next result will be rendered academic by simplified expressions in §4.1, it is worth writing for the sake of clarifying certain routinely-appearing matrices a bit further. The proof is immediate from commutativity and (17), (19), Lemma 2.16, the distributive property and (19), as well as simple induction in (c):

Remark 2.18. Albeit not explicitly as in (22), the matrix proven equal to k(A ⊙ Id^{⊙(k−1)}_n) has appeared in numerous references (e.g. [2,3,4,5,7]) whenever a differential equation for Sym^k arises; it has sometimes been labelled sym_k and has been consistently called symmetric power in the sense of Lie algebras, its Lie group counterpart therein being equal to ⊙k as defined in this paper.
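The Lie-algebra symmetric power sym_k mentioned in Remark 2.18 (the derivative of Sym^k at the identity) can be sketched concretely; in the representation below — ours, not the paper's — sym_k(A) acts on a degree-k monomial by applying A to one factor at a time, and the Lie-algebra morphism property sym_k([A,B]) = [sym_k(A), sym_k(B)] can be checked directly:

```python
from itertools import combinations_with_replacement

def sym_lie(A, k):
    """sym_k(A): derivation action of A on lex-ordered degree-k monomials."""
    n = len(A)
    basis = list(combinations_with_replacement(range(n), k))
    cols = []
    for J in basis:
        poly = {}
        for t in range(k):            # replace the t-th factor e_{j_t} by A e_{j_t}
            for i in range(n):
                if A[i][J[t]]:
                    key = tuple(sorted(J[:t] + (i,) + J[t + 1:]))
                    poly[key] = poly.get(key, 0) + A[i][J[t]]
        cols.append(poly)
    return [[cols[j].get(I, 0) for j in range(len(basis))] for I in basis]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def bracket(X, Y):
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(len(XY))] for i in range(len(XY))]

A = [[0, 1], [2, 0]]
B = [[1, 1], [0, 3]]
# Lie-algebra morphism: sym_k([A, B]) = [sym_k(A), sym_k(B)]
assert sym_lie(bracket(A, B), 3) == bracket(sym_lie(A, 3), sym_lie(B, 3))
```

This "one factor at a time" action is exactly the Leibniz rule of this subsection read on monomials, which is why sym_k arises whenever a differential equation for Sym^k of a fundamental matrix is written down.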

3 Symmetric products and exponentials of infinite matrices
The next step towards a compact form of the linearised higher variationals is assembling the matrix blocks alluded to in Lemma 2.17 and Remark 2.18 into a single matrix. Again, we follow paths already trodden with other aims and formulations, e.g. by Bekbaev in [10].

3.1 Products and exponentials
Of the myriad ways to denote a set of infinite matrices, we may need one taking finite submatrix orders into account. Alternatively, of all the ways in which to write a K-algebra S, a need may arise to express it, whenever possible, as S = Sym(V) := ⊕_{k≥0} Sym^k(V) for a given vector space V.
We write Mat := Mat_{n,n} if n is unambiguous. Conversely, Mat^{i,j}_{n,m} is embedded in Mat_{n,m} by identifying every matrix A_{i,j} with the element of Mat_{n,m} equal to 0 save for block A_{i,j}.
We define a product on Mat n,m . For a formulation yielding the same results see [11, p. 2].

Definition 3.2. For any
As always, A^{⊙k} will stand for powers built with this product.
The following is immediate, and part of it has already been mentioned before, e.g. in [10]:

Lemma 3.3. (Mat(K), +, ⊙) is an integral domain, its identity element 1_Mat equal to zero save for block (1_Mat)_{0,0} = 1_K. Mat(K) is also a unital associative K-algebra with the usual product by scalars.
Definition 3.4. (See also [10]) For every matrix A ∈ Mat_{n,m} we define the formal power series exp_⊙ A := Σ_{k≥0} (1/k!) A^{⊙k}. Whenever A = 0 save for a finite distinguished submatrix A_{j,k} (e.g. Examples 3.6 below or Lemma 3.10), the abuse of notation exp_⊙ A_{j,k} = exp_⊙ A will be customary.
Commutativity of ⊙ renders the proof of the following similar to that of scalar exponentials:

Lemma 3.5. a) For every two X, Y ∈ Mat_{n,m}(K), exp_⊙(X + Y) = exp_⊙ X ⊙ exp_⊙ Y.

b) For every Y ∈ Mat_{n,m} and any derivation ∂ : K → K extended entrywise, ∂(exp_⊙ Y) = ∂Y ⊙ exp_⊙ Y.

d) In particular, for every invertible square matrix.
Examples 3.6.

2.
If the only non-zero block in A is a row vector,

3. If the only non-zero block in A is column (0, k), the only one in A^{⊙j} is (jk, 0), obtained by switching rows and columns and expunging binomials.

The fourth example (27), i.e. matrices equal to 0 save for block row (1, k), deserves special attention.

2. Since each subset of size i_s is supposed to be ordered, we must divide the total amount by the orders of the corresponding symmetric groups, hence the explicit formula:

Lemma 3.9. Let Y ∈ Mat(K) be equal to zero outside of block row (1, k), k ≥ 1, and let Z_{r,s}, s, r ≥ 1, be the corresponding block in exp_⊙ Y. Then,

a) Row block r in exp_⊙ Y is recursively obtained in terms of row blocks 1 and r − 1. In particular, Z_{r,r} = Y_1^{⊙r} and Z_{r,s} = 0_{d_{n,r}, d_{n,s}} whenever r > s.
b) For every m, r ≥ 1 and any v ∈ K^n,

c) Using Notation 3.7 and (26), for every s ≥ r,

d) Let A ∈ Mat(K) be similar to Y, its horizontal strip not necessarily at level 1. For every t, i ≥ 1 and s ≥ t + i, the following factorization holds:

e) If Q ∈ Mat_n has only its square (1,1) block different from zero, then

Proof. a) Using (25) on A = Y, B = Y^{⊙(s−1)}, as well as the fact that Z_{i,j} = 0 for i > j, (28) ensues.
c) By induction. For s = 1, r can only equal 1 in order to have a non-zero block, and Z_{1,1} = Y_1 = c^1_1 Y_1. Assume (30) holds for all r smaller than or equal to s − 1. Summand redistribution renders Z_{r,s} = (1/r) Σ_{j_1+⋯+j_r=s} C_{j_1,…,j_r} Y_{j_1} ⊙ Y_{j_2} ⊙ ⋯ ⊙ Y_{j_r}, where C_{j_1,…,j_r} splits into a sum of m terms, each easily checked to equal n_i c^s_{j_1,…,j_r} with n_1 + ⋯ + n_m = r; hence the coefficient of Y_{j_1} ⊙ ⋯ ⊙ Y_{j_r} equals (1/r) Σ_{i=1}^{m} n_i c^s_{j_1,…,j_r} = c^s_{j_1,…,j_r}.
d) The left-hand side in (31), expressed in terms of (30) and applying distributivity, equals (32). A tedious exercise in counting index multiplicities and applying basic combinatorics allows us to apply Lemma 2.13. The fact that every summand in (32) fits the same profile as the left-hand side in (33) allows us to factor \binom{i+t}{i} A_t ⊙ Id^{⊙i}_n out of the whole sum, namely Z_{i+t,s}.

e) Replacing each factor Y_{i_j} by Q Y_{i_j} in (30) and applying Proposition 2.9 we obtain exp_⊙ QY = (Z̃_{r,k}), where Z̃_{r,s} = Σ_{i_1+⋯+i_r=s} Q^{⊙r} ⊙ c^s_{i_1,…,i_r} Y_{i_1} ⊙ Y_{i_2} ⊙ ⋯ ⊙ Y_{i_r} = Q^{⊙r} Z_{r,s}, hence the matrix exp_⊙ Y appears multiplied by diag(⋯, Q^{⊙2}, Q^{⊙1}, 1) = exp_⊙ Q.

Lemma 3.10. Let A and Y be as in Lemma 3.9. Then (34) holds.

Proof. Based on (25), B := A ⊙ exp_⊙ Id_n ∈ Mat(K) is defined recursively. Let Φ_k be the matrix formed by the first k row and column blocks in exp_⊙ Y, let M_r be the block row r of B, A_k := (A_{1,k}, A_{1,k−1}, . . . , A_{1,1}) the first k blocks in A and Z_k := (Z_{k,k}, Z_{k−1,k}, . . . , Z_{1,k})^T the first block column in Φ_k. Given s

3.2 Application to power series
Since polynomials and power series split into homogeneous components, Example 3.6(3) implies:

Lemma 3.11. Let F(x) = F(x_1, . . . , x_n) be a formal series. Then there exists a set of row blocks yielding F(x) = M_F exp_⊙ x as in (35).

Following Definition 3.4, we write F(x) = M_F exp_⊙ x if it poses no clarity issue.
From the above Lemma it follows that every formal power series can be expressed in the form M_F exp_⊙ x, where, abusing notation once again, M_F equals the sum of two matrices with easily computable ⊙-exponentials: one following Example 3.6(3) (same as x) and one following (27). Lemma 3.5, Lemma 3.3 and the universal property of finite products ⊙ yield the following two results; see [8,10] for a proof.
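A minimal sketch of this homogeneous splitting for a truncated scalar series (the storage scheme and helper names are ours): the jet row M_F is kept as one lex-ordered coefficient block per degree, and pairing it with the column of monomials of x — a concrete stand-in for exp_⊙ x — recovers F(x):

```python
from itertools import combinations_with_replacement
from fractions import Fraction

# F(x1, x2) = 1 + 2*x1 + 3*x2 + x1*x2 + 5*x2**2, stored by degree blocks of
# lex-ordered coefficients (degree-m block <-> multi-indices i with |i| = m)
n, k = 2, 2
F = {(0, 0): 1, (1, 0): 2, (0, 1): 3, (1, 1): 1, (0, 2): 5}

def lex_indices(n, m):
    # multisets of variable indices <-> multi-indices of modulus m, lex order
    out = []
    for c in combinations_with_replacement(range(n), m):
        i = [0] * n
        for j in c:
            i[j] += 1
        out.append(tuple(i))
    return out

def eval_by_blocks(F, x):
    total = 0
    for m in range(k + 1):
        for i in lex_indices(n, m):      # block m of M_F times block m of exp_⊙ x
            mono = 1
            for var, power in enumerate(i):
                mono *= x[var] ** power
            total += F.get(i, 0) * mono
    return total

x = (Fraction(1, 2), Fraction(1, 3))
direct = 1 + 2 * x[0] + 3 * x[1] + x[0] * x[1] + 5 * x[1] ** 2
assert eval_by_blocks(F, x) == direct
```

The same block-by-degree bookkeeping is what Corollary 3.13 manipulates when composing series with variable changes.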
Corollary 3.13. Let F(x) = (F_1, . . . , F_p)(x_1, . . . , x_n) be a vector power series, y = F(x), and consider independent and dependent variable changes which we assume admit formal inverse changes.

Then the expression of F in the new variables, written in terms of that in the old, is
As was hinted at in [10, p. 5], this result sheds interesting light on the way finite-level transformations translate into transformations on Mat_{n,m}. For a linear transformation of the independent variables x = BX, however, basic properties of exp_⊙ are as useful as (36) in proving that F admits the following expression in the new variable X (mind the effect of the first matrix, equal to zero save for block (1,1), which is equal to Id_n, on the second one). This will be applied to first integrals of dynamical systems in Section 5.

4 Higher-order variational equations

4.1 Structure
Let us step back to what was said in §1.2. For each particular integral curve ψ of a given complex autonomous dynamical system (DS), the variational system VE^k_ψ for (DS) along ψ is satisfied by the partial derivatives (∂^k ϕ/∂z^k)(t, ψ). Case k = 1 being trivial as shown in (VE_ψ), the situation of interest is k > 1. We will eschew formulations such as those in [24, eq. (14)] in favour of the explicit formulae (38), (44), (LVE_ψ) and (VE^k_ψ), using Linear Algebra to express multilinear maps.

Proof. We will explicitly prove (40); (39) is an immediate consequence of Lemma 2.11 and (40). We have an explicit expansion for every given ordered multi-index i = (i_1, . . . , i_k); the right-hand side in (40) is equal to this expression, too, by simple application of the same principle as in (15). The effect of ∂/∂z on A_j is clear as well: the chain rule implies terms A_{k+1}(e_r ⊙ e^{⊙i}) ∂ϕ_r/∂z_m, which is equal, again using (15), to (41) and (42). For instance, the latter is obtained by induction over k using derivation of (28), (40) and the Leibniz rule (21), as well as application of (31) with i = 1, t = r − 2, s = k, A_t = Y_1(e_m ⊙ Id^{⊙(r−2)}_n) and p = r − 1, use of (20) and the fact that Z_{r−2,r−2} = Y_1^{⊙(r−2)}.
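A concrete scalar (n = 1) illustration of the first two variational systems — our toy example, not taken from the paper: for ż = z², the flow ϕ(t, z₀) = z₀/(1 − t z₀) is explicit, and the identities ξ̇₁ = X′(ψ)ξ₁ and ξ̇₂ = X′(ψ)ξ₂ + X″(ψ)ξ₁^{⊙2} can be checked with exact rational arithmetic:

```python
from fractions import Fraction as Fr

# flow of the scalar system ż = z²:  ϕ(t, z0) = z0 / (1 − t·z0)
def phi(t, z0):   return z0 / (1 - t * z0)
def d1phi(t, z0): return 1 / (1 - t * z0) ** 2       # ξ1 = ∂ϕ/∂z0
def d2phi(t, z0): return 2 * t / (1 - t * z0) ** 3   # ξ2 = ∂²ϕ/∂z0²

def ddt(f, t, z0, h=Fr(1, 10**6)):
    # exact-rational central difference; O(h²) error, far below the tolerance
    return (f(t + h, z0) - f(t - h, z0)) / (2 * h)

t0, z0 = Fr(1, 3), Fr(1, 2)
psi = phi(t0, z0)                                    # value of ψ at t0
# VE¹:  dξ1/dt = X'(ψ) ξ1 = 2ψ ξ1
lhs1, rhs1 = ddt(d1phi, t0, z0), 2 * psi * d1phi(t0, z0)
# VE²:  dξ2/dt = X'(ψ) ξ2 + X''(ψ) ξ1²,  with X''(ψ) = 2
lhs2 = ddt(d2phi, t0, z0)
rhs2 = 2 * psi * d2phi(t0, z0) + 2 * d1phi(t0, z0) ** 2
assert abs(lhs1 - rhs1) < Fr(1, 10**9)
assert abs(lhs2 - rhs2) < Fr(1, 10**9)
```

Note the echeloned structure announced in §1.2: the right-hand side of VE² depends on the previously solved VE¹ through ξ₁².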

Proposition 4.3 (First explicit version of non-linearised VE^k_ψ). In the above hypotheses, in other words, for every k ≥ 1, the partial derivatives of ϕ(t, z) satisfy (VE^k_ψ).

Proof. (41) and (42) imply this is equal to the displayed sums. Sum swapping in m, p and Lemma 2.11(b) imply S_2 = Σ_{p=1}^{k−1} A_p Z_{p,k}; (29), Lemma 2.11(b) and Proposition 2.9(b) render S_1 − S_3 equal to the missing summand A_k Z_{k,k} in S_2.
(VE^k_ψ) in Proposition 4.3 effectively settles the entries of the lower n rows in A_{LVE^k_ψ} and the first n columns in Φ_k. Let us now find the rest of these matrices.

Proposition 4.5 (Explicit version of LVE^k_ψ). Still following Notation 4.1, the infinite system (LVE_ψ) has Φ := exp_⊙ Y as a solution matrix. Hence, for every k ≥ 1,

a) the lower-triangular recursive D_{n,k} × D_{n,k} form for LVE^k_ψ is Ẏ = A_{LVE^k_ψ} Y, its system matrix being obtained from the first k row and column blocks of A_{LVE_ψ};

b) and the principal fundamental matrix for LVE^k_ψ is obtained from the first k row and column blocks of Φ.

Proof. (34) in Lemma 3.10, (VE^k_ψ) in Proposition 4.3, and item (b) in Lemma 3.5 imply the claim; the rest follows from Lemma 3.9.
Example 4.6. For instance, for k = 5, we may display A_{LVE^5_ψ} and, using either of the equivalent (28), (30), the principal fundamental matrix Φ_5; hence (VE^k_ψ) for k = 5 is the lowest row in A_{LVE^5_ψ} times the leftmost column in Φ_5.

4.2 Explicit solution and monodromy matrices for LVE^k_ψ
Let T ⊆ P¹_C be the domain for the time variable t in (DS) and γ ⊂ T a closed path based at t_0 ∈ T. Assume k = 1. If Y_1 is a fundamental matrix of the first-order (VE_ψ), analytic continuation along γ yields Y_1(t_0) → Y_1(t_0) · M_{1,γ}, where M_{1,γ} is the monodromy matrix ([32]) of (VE_ψ). Assume Y_1 := Φ_1 is the principal fundamental matrix for (VE_ψ), any other solution matrix Ψ_1 being recovered from Ψ_1 = Y_1 Ψ_1(t_0). Assume k = 2. The non-linearised second-order equation follows from Proposition 4.3; following Proposition 4.5, the linearised completion LVE^2_ψ has a principal fundamental matrix whose block Y_2 is found via variation of constants, which becomes a contour integral whenever time is taken along path γ, hence yielding the monodromy of LVE^2_ψ along γ. Assume k = 3. The principal fundamental matrix of LVE^3_ψ consists of the lower right 3 × 3 block of (45), and all solution matrices can be expressed as Ψ_3 = Φ_3 C. Same as before, variation of constants on (48) yields another contour integral if τ ∈ γ; the remaining term of our monodromy matrix is a direct consequence of analytic continuation. The pattern is clear now. Assume we have computed solutions Y_1, . . . , Y_{k−1} and performed continuation up to k − 1, where

Q_{r,s,γ} := Σ_{i_1+⋯+i_r=s} c^s_{i_1,…,i_r} Q_{1,i_1,γ} ⊙ Q_{1,i_2,γ} ⊙ ⋯ ⊙ Q_{1,i_r,γ}, s ≥ r ≥ 2.
Then, the fundamental matrix for LVE^k_ψ will be expressed in the form (38), its lower left block Y_k being computable in terms of the blocks Z_{2,k}, . . . , Z_{k,k} above it. Upper terms Z_{2,k}, . . . , Z_{k,k} are continued into Q_{2,k,γ}, . . . , Q_{k,k,γ} as in (51), with s replaced by k. It is clear we have proven the following:

Lemma 4.7. The monodromy matrix M_{k,γ} of LVE^k_ψ along the closed path γ is composed of the first k row and column blocks in (52), where Q_{1,1,γ} := M_{1,γ} and blocks above the bottom row are computed according to (51).

Hence it is clear that the computation of a monodromy matrix follows a block order in which blocks in the bottom row require quadratures.
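A one-dimensional toy model of the continuation-along-γ step (ours, not the paper's): for the scalar equation ξ̇ = ξ/(2t), whose solutions are branches of √t, numerically continuing the principal solution along the unit circle recovers the 1 × 1 monodromy matrix e^{iπ} = −1:

```python
from cmath import pi

# ξ' = ξ/(2t): solutions are branches of sqrt(t). Continuing along the unit
# circle γ(θ) = e^{iθ} turns the equation into dξ/dθ = (i/2)·ξ, so one loop
# around t = 0 multiplies ξ(t0) by e^{iπ} = −1: the (1×1) monodromy matrix.
def monodromy(steps=2000):
    xi = 1.0 + 0j                       # ξ at the base point t0 = 1
    h = 2 * pi / steps
    for _ in range(steps):              # classical RK4 on dξ/dθ = (i/2)·ξ
        k1 = 0.5j * xi
        k2 = 0.5j * (xi + h * k1 / 2)
        k3 = 0.5j * (xi + h * k2 / 2)
        k4 = 0.5j * (xi + h * k3)
        xi += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return xi

M_gamma = monodromy()
assert abs(M_gamma - (-1)) < 1e-8
```

In the higher-order setting of Lemma 4.7, the analogous continuation produces the block M_{1,γ}, while the remaining blocks follow from (51) and quadratures along γ.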
We assume there are two generators [γ], [γ̃] ∈ π_1(T; t_0), yielding two different matrices. Commutativity of monodromy matrices now admits a simple, compact formulation:

a) The monodromy group of a linear system is contained in its differential Galois group (e.g. [29]); a sufficient condition for arbitrary order is fulfilment at order 1, M_{1,γ}, M_{1,γ̃} ∈ Gal(VE_ψ)°, itself an open problem in general.

b) All disquisitions and results on the variational jet in [20,21] refer to the lower n-row strip for commutators of these monodromies. More specifically:

• what is called jet therein is the lower strip Y in the principal fundamental matrix Φ = exp_⊙ Y for the infinite system (LVE_ψ), and we will use this terminology in the following Section;

• morphism properties imply that monodromy matrices along path commutators equal matrix commutators: M_{k,γ^{−1}…}, which in [20,21] amounts to the lower strip Q_{k,γ^{−1}…}.

Although [20,21] clearly benefit from the use of automatic differentiation techniques (see also [19]), it may be argued that expressions such as those in (LVE_ψ) provide for a fuller control of the general structure of the whole variational complex when it comes to symbolic computations, as well as a further checking aid for the aforementioned techniques. See §6.1 for an example. See also [28] for a recent application to the Friedmann-Robertson-Walker Hamiltonian arising from Cosmology.

5 First integrals and higher-order variational equations
Let F : U ⊆ C^n → C^n be a holomorphic function and ψ : I ⊂ C → U a particular solution. Firstly, the flow ϕ(t, z) of X admits, at least formally, the Taylor expansion (1) along ψ, which is expressible as (55), where J_ψ is the jet for the flow ϕ(t, ·) along ψ, displayed as Y in (27) and defined in Notation 4.1; that is, the matrix whose ⊙-exponential Φ is a solution matrix for (LVE_ψ). Secondly, the Taylor series of F along ψ can be written, cf. [5, Lemma 2] and Notation 1.3, as (56). Basic scrutiny of Example 3.6(3), Lemma 3.11 and (35) trivially implies that (56) can be expressed compactly, i.e. J_ψF is the jet or horizontal strip of lex-sifted partial derivatives of F at ψ.
Definition 5.1. We call (LVE^⋆_ψ) the adjoint or dual variational system of (DS) along ψ. Same as in (LVE_ψ) and all throughout §4.1, consideration of finite subsystems, namely the lowest D_{n,k} × D_{n,k} block, leads to the specific systems LVE^⋆_{k,ψ}.

The following is immediate upon differentiation of the identity Φ_k Φ_k^{−1} = Id_{D_{n,k}}: (Φ_k^{−1})^T is a solution to (LVE^⋆_ψ).
The following was proven in [24] and recounted in [5, Lemma 7], and may now be expressed in a simple, compact fashion:

Lemma 5.3. Let F and ψ be a holomorphic first integral and a non-constant solution of (DS), respectively. Let V := J^T F be the transposed jet of F along ψ. Then V is a solution of (LVE^⋆_ψ).
Proof. Let us recall the formal expansion (55) and F(y) = J_ψF exp_⊙ y for every y ∈ K^n. Let φ = ϕ(t, ψ + ξ). We use Lemma 3.12, and F(φ) is supposed to be constant; hence, applying (LVE_ψ) and Lemma 3.12, the claim follows.

Compound the jet of field X, i.e. A in Notation 4.1 and Proposition 4.5, with a (1,0) term A_0, equal to X^{(0)} = X(ψ) = ψ̇. It is easy to check, via the possibilities offered on i_1 and j_1 in (25), that the symmetric product of A with exp_⊙ Id_n adds only a relatively minor addendum to A_{LVE_ψ}, namely a superdiagonal of blocks proportional to A_0 ⊙ Id^{⊙i}_n ∈ Mat^{i+1,i}_n, i ≥ 1, effectively rendering it block-Hessenberg.

Using the M_k-M_k notation in [5], it is immediate to check the correspondence; a result in [5] using said notation is easier to prove in this setting. Indeed, the same reasoning underlying (40) applies to row F^{(k)}, and ∂F^{(k)}/∂z_m = F^{(k+1)}(e_m ⊙ Id^{⊙k}_n); following Lemma 2.11, such rows were called admissible solutions of the order-k adjoint system in [5]. This takes us back to the end of Section 3.2. Consider the gauge transformation ([2, 5, 6, 22]) x = P X transforming the linear system ξ̇ = A_1 ξ into the equivalent Ξ̇ = P[A_1] Ξ := (P^{−1} A_1 P − P^{−1} Ṗ) Ξ.
Using the notation Y_i = P X_i, J_ψ = P X and item (e) in Lemma 3.9, we recover the result already seen in previous references, summarised in the extension of gauge transformations to higher dimensions via P^{⊙k}:

exp_⊙(X) = exp_⊙(P^{−1} J_ψ) = exp_⊙(P^{−1}) exp_⊙(J_ψ) = diag(⋯, (P^{−1})^{⊙2}, P^{−1}, 1) exp_⊙ J_ψ,

and a very simple application of properties seen so far extends the general structure of the gauge transformation to Ψ = exp_⊙(P^{−1}) exp_⊙(J_ψ). The second summand (Ṗ^{−1} ⊙ exp_⊙ P^{−1}) exp_⊙ P can be simplified using Ṗ^{−1} = −P^{−1} Ṗ P^{−1}. The above gauge transformation can be seen as the effect of the transformation z = P Z on the jet of (DS). Given a first integral F of the latter, we may always assume F(ψ) = 0, which implies M_{1,0} F = 0 and, as seen in (37) or in Lemma 3.5, F_P(Z) = J_F exp_⊙ P exp_⊙ Z.
The jet of this formal series is

J_{F_P} = J_F exp_⊙ P = (⋯ F^{(3)}(ψ) P^{⊙3} F^{(2)}(ψ) P^{⊙2} F^{(1)}(ψ) P ⋯ 0 0 0) ∈ Mat_{1,n},

and applying (58), Lemma 5.3, Proposition 5.4 and the identity (P^{−1})^{⊙k} d/dt(P^{⊙k}) = −d/dt((P^{−1})^{⊙k}) P^{⊙k}, we have just proven the following:

Proposition 5.5. The transposed jet V_P := J^T_{F_P} in the new variables must satisfy (60).

The key importance in practical examples resides in the choice of the particular solution ψ and the reduction matrix P, in order to render (60) easier (or more convenient) to solve than its unreduced counterparts, Lemma 5.3 and Proposition 5.4; see also [3,4].
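A toy check of the first-order block of Lemma 5.3 — our example, not the paper's: for the linear oscillator ẋ = y, ẏ = −x with first integral F = (x² + y²)/2, the gradient V = ∇F(ψ) along a solution ψ solves the adjoint equation V̇ = −A₁ᵀ V:

```python
from math import sin, cos, isclose

# system X(x, y) = (y, -x); particular solution ψ(t) = (sin t, cos t);
# A1(t) = X'(ψ) = [[0, 1], [-1, 0]] (constant here)
def V(t):            # gradient of F = (x² + y²)/2 along ψ: ∇F(ψ) = ψ
    return (sin(t), cos(t))

def Vdot(t):         # exact derivative of V
    return (cos(t), -sin(t))

def adjoint_rhs(t):  # -A1ᵀ V, with A1ᵀ = [[0, -1], [1, 0]], so -A1ᵀ = [[0, 1], [-1, 0]]
    v1, v2 = V(t)
    return (v2, -v1)

for t in (0.0, 0.7, 2.1):
    lhs, rhs = Vdot(t), adjoint_rhs(t)
    assert isclose(lhs[0], rhs[0], abs_tol=1e-12)
    assert isclose(lhs[1], rhs[1], abs_tol=1e-12)
```

The higher-order blocks of the transposed jet V would be checked against (LVE^⋆_ψ) in the same spirit, with the reduction matrix P of Proposition 5.5 chosen to simplify the resulting system.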
We therefore have the following result: Theorem 6.4. H in (63) is not meromorphically integrable for any M, m > 0.