The tridendriform structure of a Magnus expansion

The notion of trees plays an important role in Butcher's B-series. More recently, a refined understanding of the algebraic and combinatorial structures underlying the Magnus expansion has emerged thanks to the use of rooted trees. We follow these ideas by further developing the observation that the logarithm of the solution of a linear first-order finite-difference equation can be written in terms of a Magnus expansion taking place in a pre-Lie algebra. By using basic combinatorics on planar reduced trees we derive a closed formula for the Magnus expansion in the context of free tridendriform algebra. The tridendriform algebra structure on word quasi-symmetric functions permits us to derive a discrete analogue of the Mielnik-Plebański-Strichartz formula for this logarithm.


Introduction
Date: April 23, 2013.

In many areas of the mathematical sciences linear initial value problems (IVPs) play an essential role. Recall that such a linear IVP basically consists of a first-order linear differential equation
\[
(1)\qquad \dot{Y}(t) = A(t)\,Y(t)
\]
together with the initial value $Y(0) = Y_0$. The function $A(t)$ may be matrix or operator valued. It is common to write the solution of such an IVP in terms of the time-ordered exponential, $Y(t) = T\exp\big(\int_0^t A(s)\,ds\big)\,Y_0$. Indeed, using the definition of the time-ordering operator $T$ at distinct times $s_1, \ldots, s_n$,
\[
T[U_1(s_1) \cdots U_n(s_n)] := U_{\sigma(1)}(s_{\sigma(1)}) \cdots U_{\sigma(n)}(s_{\sigma(n)}),
\]
where $\sigma$ is the unique permutation such that $s_{\sigma(1)} > \cdots > s_{\sigma(n)}$, the function $Y(t)$ results as the formal solution
\[
(2)\qquad T\exp\Big(h\int_0^t A(s)\,ds\Big)\,Y_0 = \Big(1 + \sum_{n>0} \frac{h^n}{n!} \int_{[0,t]^n} T[A(t_1) \cdots A(t_n)]\,dt_1 \cdots dt_n\Big)\,Y_0
\]
of the linear integral equation
\[
(3)\qquad Y(t) = Y_0 + h\int_0^t A(s)\,Y(s)\,ds
\]
corresponding to (1). We have introduced the formal parameter $h$ for convenience. The first few terms are
\[
(4)\qquad Y(t) = \Big(1 + h\int_0^t A(s)\,ds + h^2\int_0^t A(s)\int_0^s A(u)\,du\,ds + \cdots\Big)\,Y_0.
\]
The solution $Y(t)$ of (1) can also be written as a proper exponential. However, in general we cannot expect that $Y(t) = \exp\big(h\int_0^t A(s)\,ds\big)\,Y_0$. Indeed, trying to re-arrange the coefficient of the second-order term in $h$ yields
\[
(5)\qquad \frac{1}{2}\Big(\int_0^t A(s)\,ds\Big)^2 = \frac{1}{2}\int_0^t \Big(\int_0^s A(u)\,du\Big)A(s)\,ds + \frac{1}{2}\int_0^t A(s)\int_0^s A(u)\,du\,ds.
\]
Looking at the first term on the right-hand side, we see that the iterated integral is in "bad" order, which means that the right-hand side does not add up to the $h^2$-order term in (4), namely
\[
\frac{1}{2}\Big(\int_0^t A(s)\,ds\Big)^2 \neq \int_0^t A(s)\int_0^s A(u)\,du\,ds.
\]
One may try to resolve this problem using the following simple ansatz. Introduce functions $\Omega_i(t)$, such that
\[
(6)\qquad Y(t) = \exp\Big(\sum_{i>0} h^i\,\Omega_i(t)\Big)\,Y_0.
\]
Returning to (5), one verifies quickly that
\[
\Omega_2(t) := -\frac{1}{2}\int_0^t \Big[\int_0^s A(u)\,du,\; A(s)\Big]\,ds
\]
does the job, up to order $h^2$. Indeed, observe that the unwanted term in (5) is canceled. It is clear that the introduction of this order-$h^2$ correction term, $\Omega_2(t)$, will contribute at higher orders $h^n$, for $n > 2$, which we have to take into account when calculating the function $\Omega_n(t)$. More generally, the function $\Omega_n(t)$ will depend on the $\Omega_i(t)$, $0 < i \leq n - 1$.
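As a purely numerical sanity check (an illustration, not part of the original argument), one can verify this cancellation for the hypothetical choice $A(t) = A_0 + tA_1$ with two non-commuting matrices $A_0$, $A_1$: the $h^2$ coefficient of $\exp(h\Omega_1 + h^2\Omega_2)$, namely $\frac{1}{2}\Omega_1^2 + \Omega_2$, matches the $h^2$ Picard term, while $\frac{1}{2}\Omega_1^2$ alone does not. A sketch using simple Riemann-sum quadrature on $[0,1]$:

```python
import numpy as np

# Hypothetical data (illustration only): A(t) = A0 + t*A1 with
# two non-commuting 2x2 matrices A0, A1, on the interval [0, 1].
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
A1 = np.array([[0.0, 0.0], [1.0, 0.0]])

N = 20000                       # quadrature steps
dt = 1.0 / N
s = (np.arange(N) + 0.5) * dt   # midpoints of the subintervals
Avals = A0[None] + s[:, None, None] * A1[None]   # A(s_k), shape (N, 2, 2)

# IA[k] approximates the integral of A over [0, s_k]
IA = np.concatenate([np.zeros((1, 2, 2)), dt * np.cumsum(Avals, axis=0)[:-1]])

Omega1 = dt * Avals.sum(axis=0)                  # int_0^1 A(s) ds
# Omega2 = -1/2 int_0^1 [ int_0^s A(u) du , A(s) ] ds
Omega2 = -0.5 * dt * (IA @ Avals - Avals @ IA).sum(axis=0)

# h^2 Picard term of the time-ordered exponential:
# int_0^1 A(s) ( int_0^s A(u) du ) ds
Y2 = dt * (Avals @ IA).sum(axis=0)

# the h^2 coefficient of exp(h*Omega1 + h^2*Omega2) reproduces it
print(np.allclose(Omega1 @ Omega1 / 2 + Omega2, Y2, atol=1e-3))
```

Without the correction term the check fails, since $\frac{1}{2}\Omega_1^2 - Y_2 = -\Omega_2$ is of the size of the commutator $[A_0, A_1]$.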
The solution to this formidable rewriting problem was presented by Wilhelm Magnus in his seminal 1954 paper [41], where he proposed for the logarithm of the time-ordered exponential, i.e., the logarithm of the formal series of iterated integrals (4),
\[
\Omega(hA)(t) := \log\Big(T\exp\Big(h\int_0^t A(s)\,ds\Big)\Big)
\]
(we assume $Y_0 = 1$), a particular differential equation
\[
(7)\qquad \dot{\Omega}(hA)(t) = \sum_{n \ge 0} \frac{B_n}{n!}\,\mathrm{ad}^{(n)}_{\Omega(hA)(t)}\big(hA(t)\big),
\]
with $\Omega(hA)(0) = 0$, and $\Omega(hA)(t) = \sum_{i>0} h^i\,\Omega_i(A;t)$, $\Omega_1(A;t) = \int_0^t A(s)\,ds$. The $B_n$ are the Bernoulli numbers, and, as usual, the $n$-fold iterated Lie bracket is denoted by $\mathrm{ad}^{(n)}_a(b) := [a,[a,\cdots[a,b]]\cdots]$. Note that in the literature one also finds the notation $\dot{\Omega}(hA)(t) = \mathrm{dexp}^{-1}_{\Omega(hA)}(hA)$. See for instance Theorem 4 in [6]. The solution of the IVP (1) is then given by $Y(t) = \exp\big(\Omega(hA)(t)\big)\,Y_0$. Let us write down the first few terms of $\Omega(hA)(t)$, following from Picard iteration:
\[
\Omega(hA)(t) = h\int_0^t A(s)\,ds - \frac{h^2}{2}\int_0^t \Big[\int_0^s A(u)\,du,\; A(s)\Big]\,ds + \cdots.
\]
Magnus' seminal paper triggered much progress in both applied mathematics and physics. In the authoritative reference [6] the reader may find much more information. We should also remark that the presentation of Magnus' expansion given above is rather formal, since we have deliberately ignored aspects of convergence. The reason for this more algebraic approach to Magnus' expansion will become clear in the sequel. Its principal purpose is to show that (7) is just a particular case of a more general expansion that allows one to solve fixed point equations like (3) in a far more general context than just the one given by IVPs.
For the last 30 years or so, rooted trees have played a central role in the theory of Butcher's B-series [9,10,25]. In the recent works [11,18,47], including the standard reference [26], the reader may find more details on the use of trees in numerical integration methods. Iserles and Nørsett [28] were the first to make extensive use of rooted trees to obtain a deeper understanding of the workings of Magnus' expansion. In the review article [29] the reader can find a comprehensive summary of the work of Iserles and Nørsett. A very readable account of the use of rooted trees for Magnus' series can be found in [30].
In [19,20] we started to explore the genuine pre-Lie algebra structure underlying Magnus' expansion. Two key observations form the basis for our approach. First, note that the basic building block in (7), i.e., the Lie bracket with the integral operator on one side,
\[
(8)\qquad (A \rhd B)(t) := \Big[\int_0^t A(s)\,ds,\; B(t)\Big],
\]
defines a non-commutative binary product for, say, matrix valued functions $A, B$. It is easy to see that this product is non-associative. Indeed, it satisfies what is well known as the left pre-Lie identity [7,17,43,52]
\[
(A \rhd B) \rhd C - A \rhd (B \rhd C) = (B \rhd A) \rhd C - B \rhd (A \rhd C).
\]
This relation reflects the combination of integration by parts and the Jacobi identity. The second observation is based on expanding this Lie bracket,
\[
(9)\qquad (A \rhd B)(t) = \Big(\int_0^t A(s)\,ds\Big)B(t) - B(t)\int_0^t A(s)\,ds =: (A \succ B)(t) - (B \prec A)(t).
\]
One then shows by using integration by parts that the two binary non-associative products satisfy a non-commutative shuffle-like structure [2], which is known as a dendriform algebra [33]. Going back to (4), we see that the iteration of the second product in (9) yields the basic operation in the formal solution of (1). The iteration of the first operation analogously corresponds to the formal solution of $\dot{Y}(t) = Y(t)A(t)$. Hence, we see that these non-associative, non-commutative binary products reflect well the basic operations for solving linear IVPs. With this in mind, let us return to (7). In terms of the pre-Lie product $(\dot{\Omega} \rhd A)(s) = \mathrm{ad}_{\int_0^s \dot{\Omega}(u)\,du}(A(s))$, Magnus' series gains some transparency:
\[
(10)\qquad \dot{\Omega}(hA) = \sum_{m \ge 0} \frac{B_m}{m!}\, L^{(m)}_{\dot{\Omega}(hA)\rhd}(hA).
\]
We denote by $L_{A\rhd}$ the left multiplication operator defined by $L_{A\rhd}(B) := A \rhd B$. A similar approach applies to Fer's expansion [19]. Note that the right-hand side of (10) already appeared in [1] (where left pre-Lie algebras are called chronological algebras), but the dendriform structure is required to establish identity (10) itself [19]. The pre-Lie picture is our starting point for the use of rooted trees in the exploration of Magnus' expansion. We would also like to mention the references [12,13,14,15,16], which explore in depth pre-Lie aspects of Magnus' expansion.
In [24] the Magnus expansion appears in the context of non-commutative symmetric functions. One may wonder whether there is another expression for $\Omega(A)$ in terms of the dendriform operations (9) rather than using the pre-Lie product. In [23] we gave a positive answer, which is based on a classical commutator-free formula due to Mielnik-Plebański and Strichartz.
Proposition 1 (Mielnik-Plebański-Strichartz formula [46,54]). The function $\Omega(A)(t)$ is given by the series of iterated integrals
\[
(11)\qquad \Omega(A)(t) = \sum_{n \ge 1}\;\sum_{\sigma \in S_n} \frac{(-1)^{d(\sigma)}}{n^2 \binom{n-1}{d(\sigma)}} \int_{0 \le t_n \le \cdots \le t_1 \le t} A(t_{\sigma(1)}) \cdots A(t_{\sigma(n)})\,dt_n \cdots dt_1.
\]
Here $S_n$ is the group of permutations of $n$ elements, and $d(\sigma)$ is the cardinality of the descent set $D(\sigma) \subset \{1, \ldots, n-1\}$ of the permutation $\sigma \in S_n$, i.e., the subset of indices $i$ such that $\sigma(i) > \sigma(i+1)$. Unveiling the very dendriform nature of formula (11) requires the use of the free dendriform algebra with one generator (concretely described in terms of planar binary trees, or alternatively in terms of planar rooted trees via Knuth's rotation correspondence), as well as the Malvenuto-Reutenauer-Foissy bidendriform Hopf algebra FQSym of free quasi-symmetric functions.
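For illustration (not part of the original text), the descent statistic $d(\sigma)$ entering the formula above is elementary to compute, and the number of permutations in $S_n$ with $d$ descents is the Eulerian number, e.g. $1, 11, 11, 1$ for $n = 4$. A minimal sketch:

```python
from itertools import permutations

def descent_set(sigma):
    # D(sigma) = { i in {1, ..., n-1} : sigma(i) > sigma(i+1) }
    return {i + 1 for i in range(len(sigma) - 1) if sigma[i] > sigma[i + 1]}

# for sigma = (3,1,2,4) the only descent is at position 1
print(descent_set((3, 1, 2, 4)))   # {1}

# the number of permutations in S_4 with d descents is the
# Eulerian number <4, d>: 1, 11, 11, 1
counts = [0, 0, 0, 0]
for sigma in permutations(range(1, 5)):
    counts[len(descent_set(sigma))] += 1
print(counts)   # [1, 11, 11, 1]
```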
As $\Omega(A)$ is a Lie element, we can use the Dynkin-Specht-Wever theorem (Theorem 8 on page 169 in [31]), so that we recover the formula in its original Lie algebraic setting. In the case of discrete analogues of differentiation and integration further refinement is needed. Recall that, corresponding to the modified Leibniz rule for finite-difference operators, summation operators satisfy a modified integration by parts identity. The latter accounts for non-trivial diagonal terms. Therefore, replacing the Riemann integral by a Riemann sum operator, which we denote by $\Sigma$, yields three binary products
\[
a \prec b := a\,\Sigma(b), \qquad a \succ b := \Sigma(a)\,b, \qquad a \cdot b := ab.
\]
They are known to form a tridendriform algebra [36], which can be interpreted as a non-commutative quasi-shuffle-like structure [49].
This paper is a continuation of our work [23]. First we explore the Magnus expansion from the tridendriform algebra point of view, using planar reduced rooted trees. Then we aim at a "discrete analogue" of the Mielnik-Plebański-Strichartz formula (11), i.e., iterated integrals will be replaced by iterated sums. Contrary to the continuous case, partial diagonals have to be taken into account. The relevant algebraic structure will be that of a tridendriform algebra, a natural refinement of the notion of dendriform algebra [34] proposed by J.-L. Loday and M. Ronco in [36]. The Malvenuto-Reutenauer-Foissy bidendriform Hopf algebra FQSym must then be replaced by the more refined tridendriform Hopf algebra WQSym of word quasi-symmetric functions, where the groups $S_n$ of permutations of $\{1, \ldots, n\}$ are replaced by the sets $ST_n^r$ of surjective maps from $\{1, \ldots, n\}$ onto $\{1, \ldots, r\}$.
The free tridendriform algebra with one generator is concretely described in terms of planar reduced trees [34], or alternatively in terms of planar rooted hypertrees via a suitable extension of Knuth's rotation correspondence [22]. The tridendriform Hopf algebra WQSym can be traced back to F. Hivert's PhD thesis [27], in which he constructs the even larger Hopf algebra MQSym of matrix quasi-symmetric functions, which naturally contains WQSym. A clear account of the associated tridendriform structure can be found in [48]. This object has also been thoroughly studied under the notation ST by E. Burgunder and M. Ronco in [8].
The discrete Mielnik-Plebański-Strichartz formula splits into two versions according to whether one excludes the upper bound from the summation operator or not, see equations (56) and (57) respectively. Both look similar to (11) once iterated integrals have been replaced with iterated sums, except that the notion of descent, extended from permutations to surjections, splits into a strict and a weak version, each of them giving its variant of the formula. The strict (resp. weak) descent set of a surjection $\sigma : \{1, \ldots, n\} \twoheadrightarrow \{1, \ldots, r\}$ is the set of indices $j \in \{1, \ldots, n-1\}$ such that $\sigma(j) > \sigma(j+1)$ (resp. $\sigma(j) \ge \sigma(j+1)$). We note that, just as any dendriform algebra is naturally pre-Lie, any tridendriform algebra is naturally endowed with a structure of post-Lie algebra [3]. The latter is a vector space together with a binary product $\diamond$ and a Lie bracket $[-,-]$ subject to compatibility axioms [37,38,55]. Recently, due to the work of Munthe-Kaas et al., it became clear that post-Lie algebras play a central role in the theory of Lie group integrators on manifolds. It would be interesting to understand the post-Lie algebra structure underlying the Magnus expansion by refining (10) for logarithms of solutions of discrete IVPs. We plan to address this problem in a forthcoming paper.
The paper is organized as follows: in Section 2 we review the notion of trees and introduce the essential algebraic structures. In Section 3 we give a detailed description of two "Magnus elements", namely the logarithms of the solutions of two first-order linear tridendriform equations, corresponding to the two dendriform structures one can associate to a tridendriform algebra. After a reminder of the pre-Lie Magnus expansion, we give a tridendriform Magnus expansion of the two Magnus elements above in terms of planar reduced trees, when the tridendriform algebra is free. Finally, in Section 4, relating the tridendriform algebra of sequences with WQSym and with the free tridendriform algebra, we give the discrete analogue of the Mielnik-Plebański-Strichartz formula.

Acknowledgements:
We would like to thank A. Lundervold, H. Munthe-Kaas, E. Burgunder, M. Livernet, F. Patras, M. Ronco and J.-Y. Thibon for discussions and remarks. We are thankful to the referees for their comments and suggestions. The first author is supported by a Ramón y Cajal research grant from the Spanish government, as well as the project MTM2011-23050 of the Ministerio de Economía y Competitividad. Both authors were supported by the CNRS (GDR Renormalisation), and by Agence Nationale de la Recherche, projet CARMA ANR-12-BS01-0017.

Algebraic and combinatorial preliminaries
Throughout the paper, k will stand for a field of characteristic zero. In this section we recall the notion of trees, as well as the relevant algebraic structures.

Planar reduced trees.
Recall that a tree t is a connected and simply connected graph made out of vertices and edges, the sets of which we denote by V(t) and E(t), respectively. A planar reduced tree is a finite oriented tree given an embedding in the plane, such that all vertices have two or more incoming edges, and exactly one outgoing edge. An edge can be internal (connecting two vertices) or external (with one loose end). The external incoming edges are the leaves. The root edge is the unique edge not ending in a vertex. For any planar reduced tree t, a partial order on the set of its vertices V(t) is defined as follows: u, v ∈ V(t), u < v if and only if there is a path from the root of t through u up to v. A planar reduced tree is binary if all vertices have exactly two incoming edges.
We include the unique planar reduced tree without internal vertices, i.e., the single edge $|$, despite the fact that it is not binary in the strict sense. We denote by $T^{red}_{pl}$ (resp. $\mathcal{T}^{red}_{pl}$) the set (resp. the linear span) of planar reduced trees. A simple grading for such trees is given in terms of the number of internal vertices. Alternatively, one can use the number of leaves. Above we listed all planar reduced trees up to four leaves. Observe that for any collection $(t_1, \ldots, t_n)$ of planar reduced trees we can build up a new planar reduced tree via the grafting operation, $t := \vee(t_1, \ldots, t_n)$, by considering the unique planar reduced tree with one single vertex and $n$ leaves, and plugging $t_k$ onto leaf number $k$, $k \in \{1, \ldots, n\}$, from left to right.
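The leaf-number count of planar reduced trees follows directly from the grafting description: every tree other than $|$ is uniquely of the form $\vee(t_1, \ldots, t_k)$ with $k \ge 2$. A short sketch (an illustration, not part of the original text) computing these counts, which are the little Schröder numbers $1, 1, 3, 11, 45, \ldots$:

```python
from functools import lru_cache

# number of planar reduced trees with n leaves: every such tree other
# than | is uniquely a grafting v(t_1, ..., t_k) with k >= 2 subtrees
@lru_cache(maxsize=None)
def s(n):
    if n == 1:
        return 1          # the single edge |
    def comps(m, k):
        # sum over compositions m = n_1 + ... + n_k of products s(n_1)...s(n_k)
        if k == 1:
            return s(m)
        return sum(s(i) * comps(m - i, k - 1) for i in range(1, m - k + 2))
    return sum(comps(n, k) for k in range(2, n + 1))

print([s(n) for n in range(1, 6)])   # [1, 1, 3, 11, 45]
```

In particular there are $3$ trees with three leaves and $11$ trees with four leaves, consistent with the list mentioned above.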
Any planar reduced tree $t \neq |$ can obviously be expressed as $\vee(t_1, \ldots, t_n)$ in a unique way. The grafting operation makes $T^{red}_{pl}$ the free generic magma with one generator and one operation in any arity $a \ge 2$. Notice that this product is of degree zero with respect to the leaf number grading. However, we will adopt the grading $t \mapsto |t|$ given by the number of leaves of $t$ minus one. We call right and left combs the binary trees $\tau_r^{(n)}$, $\tau_l^{(n)}$ recursively defined by $\tau_r^{(n)} := \vee(|, \tau_r^{(n-1)})$ and $\tau_l^{(n)} := \vee(\tau_l^{(n-1)}, |)$, with $\tau_r^{(0)} = \tau_l^{(0)} = |$. The following list includes the right and left combs up to order three. There is a partial order on $T^{red}_{pl}$ defined as follows: $t_1 \le t_2$ if $t_1$ can be obtained from $t_2$ by glueing some inner vertices together. Here glueing refers to shrinking an edge between two adjacent inner vertices until it becomes a new inner vertex. In particular, two comparable trees must have the same degree. The minimal elements are the trees with only one inner vertex, and the maximal elements are the planar binary trees. In [22] we have proved that a natural extension of D. Knuth's rotation correspondence settles a natural bijection from planar reduced trees onto planar rooted hypertrees. As we won't use this fact here, we refer the reader to [22] for details.

Pre-and Post-Lie algebras.
Recall that a left pre-Lie algebra $(P, \rhd)$ is a $k$-vector space $P$ equipped with an operation $\rhd : P \otimes P \to P$ subject to the following relation:
\[
(12)\qquad (a \rhd b) \rhd c - a \rhd (b \rhd c) = (b \rhd a) \rhd c - b \rhd (a \rhd c).
\]
The Lie bracket following from antisymmetrization in $P$, $[a, b] := a \rhd b - b \rhd a$, satisfies the Jacobi identity. See e.g. [43] for a survey on pre-Lie algebras. Recall from Chapoton and Livernet [17] that the basis of the free pre-Lie algebra in one generator can be expressed in terms of undecorated, non-planar rooted trees. See also [1,52] for other descriptions of the free pre-Lie algebra.
A natural example of a pre-Lie algebra is given in terms of a differentiable manifold $M$ endowed with a flat torsion-free connection. The corresponding covariant derivation operator $\nabla$ on the space $\chi(M)$ of vector fields on $M$ gives it a left pre-Lie algebra structure defined by $a \rhd b := \nabla_a b$, by virtue of the two equalities
\[
\nabla_a b - \nabla_b a = [a, b] \qquad\text{and}\qquad \nabla_{[a,b]} = [\nabla_a, \nabla_b],
\]
which express the vanishing of torsion and curvature, respectively. For $M = \mathbb{R}^n$ with the standard flat connection, the pre-Lie product reads $(a \rhd b)_i = \sum_j a_j\,\partial_j b_i$. A left post-Lie algebra $(Q, \diamond, [-,-])$ is a Lie algebra $Q$ with Lie bracket $[-,-]$, together with another operation $\diamond : Q \otimes Q \to Q$ subject to the following two compatibility relations:
\[
(13)\qquad a \diamond [b, c] = [a \diamond b, c] + [b, a \diamond c],
\]
\[
(14)\qquad [a, b] \diamond c = a \diamond (b \diamond c) - (a \diamond b) \diamond c - b \diamond (a \diamond c) + (b \diamond a) \diamond c.
\]
Note that a pre-Lie algebra is a post-Lie algebra with vanishing Lie bracket. The natural geometric example of a post-Lie algebra is given in terms of a connection which is flat and has constant torsion. See [38,39] for details.

Rota-Baxter algebras.
Recall that a Rota-Baxter algebra is a $k$-algebra $A$ endowed with a $k$-linear map $T : A \to A$ that satisfies the relation
\[
(15)\qquad T(a)\,T(b) = T\big(T(a)b + aT(b) + \theta\,ab\big),
\]
where $\theta \in k$ is a fixed parameter [4]. The map $T$ is called a Rota-Baxter operator of weight $\theta$. The map $\bar{T} := -\theta\,\mathrm{id} - T$ also is a weight $\theta$ Rota-Baxter map. Both images $T(A)$ and $\bar{T}(A)$ are subalgebras in $A$. One may think of (15) as a generalized integration by parts identity. Indeed, a simple example of a Rota-Baxter algebra is given by the classical integration by parts rule
\[
\Big(\int_0^t f(s)\,ds\Big)\Big(\int_0^t g(s)\,ds\Big) = \int_0^t \Big(\int_0^s f(u)\,du\Big) g(s)\,ds + \int_0^t f(s)\Big(\int_0^s g(u)\,du\Big)\,ds,
\]
showing that the ordinary Riemann integral is a weight zero Rota-Baxter map. Other examples can be found for instance in [19,20,21].
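As a quick numerical illustration (a sketch, not part of the original text, with the weight convention used here), the summation operator $T(a)(n) := \sum_{k<n} a(k)$ discussed further below satisfies (15) with $\theta = 1$, and so does $\bar{T} = -\mathrm{id} - T$:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(12)
b = rng.standard_normal(12)

def T(x):
    # T(x)(n) = sum_{k < n} x(k): the summation analogue of the integral
    out = np.zeros_like(x)
    out[1:] = np.cumsum(x)[:-1]
    return out

def Tbar(x):
    return -x - T(x)     # Tbar = -theta*id - T, with theta = 1

# weight theta = 1 Rota-Baxter relation (15), for T and for Tbar
print(np.allclose(T(a) * T(b), T(T(a) * b + a * T(b) + a * b)))        # True
print(np.allclose(Tbar(a) * Tbar(b),
                  Tbar(Tbar(a) * b + a * Tbar(b) + a * b)))            # True
```

The diagonal term $T(ab)$ accounts precisely for the pairs of equal summation indices, which have no analogue for the Riemann integral.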

Tridendriform algebras.
We introduce the notion of tridendriform algebra [36] over $k$: a $k$-vector space $D$ endowed with three bilinear operations $\prec$, $\succ$ and $\cdot$, subject to the seven following axioms:
\[
(16)\qquad (a \prec b) \prec c = a \prec (b * c),
\]
\[
(17)\qquad (a \succ b) \prec c = a \succ (b \prec c),
\]
\[
(18)\qquad (a * b) \succ c = a \succ (b \succ c),
\]
\[
(19)\qquad (a \prec b) \cdot c = a \cdot (b \succ c),
\]
\[
(20)\qquad (a \succ b) \cdot c = a \succ (b \cdot c),
\]
\[
(21)\qquad (a \cdot b) \prec c = a \cdot (b \prec c),
\]
\[
(22)\qquad (a \cdot b) \cdot c = a \cdot (b \cdot c).
\]
Axioms (16)-(22) imply that for $a, b \in D$ the composition
\[
(23)\qquad a * b := a \prec b + a \succ b + a \cdot b
\]
defines an associative product. At first this may look puzzling, but further below we will see that finite differences provide a natural and elucidating example, showing that these axioms encode the modified integration by parts formula for summation operators.
A dendriform algebra is defined by setting the product · to zero in the above axioms. Hence, the rules of a dendriform algebra are given in terms of axioms (16)-(18) alone, without the · term. Note, for example, that this reduced set of rules encodes integration by parts for the Riemann integral.
However, any tridendriform algebra $(D, \prec, \succ, \cdot)$ gives rise to two ordinary dendriform algebras $D_L := (D, \bar{\prec}, \succ)$ and $D_R := (D, \prec, \bar{\succ})$ with $\bar{\prec} := \prec + \cdot$ and $\bar{\succ} := \succ + \cdot$. Recall that dendriform algebras, hence tridendriform algebras as well, are at the same time pre-Lie algebras. Indeed, the two following products inherited from the dendriform structure $D_R$,
\[
(24)\qquad a \rhd b := a \,\bar{\succ}\, b - b \prec a, \qquad a \lhd b := a \prec b - b \,\bar{\succ}\, a,
\]
are left pre-Lie and right pre-Lie, respectively. That is, $\rhd$ satisfies (12), and $\lhd$ satisfies the right-handed analogue. The Lie brackets following from the associative operation (23) and the pre-Lie operations (24) all define the same Lie algebra. The same holds of course mutatis mutandis for the other dendriform algebra $D_L$, giving rise to two other pre-Lie products, which will be denoted by $\bar{\rhd}$ and $\bar{\lhd}$. Note that the associative product (23) is the same for both dendriform structures. For any tridendriform algebra $D$ we denote by $\overline{D} = D \oplus k.1$ the corresponding algebra augmented by a unit 1, with the following rules:
\[
a \prec 1 := a =: 1 \succ a, \qquad 1 \prec a = a \succ 1 = 1 \cdot a = a \cdot 1 := 0,
\]
implying $a * 1 = 1 * a = a$. Note that the equality $1 * 1 = 1$ makes sense, but that $1 \prec 1$, $1 \cdot 1$ and $1 \succ 1$ are not defined.
Now suppose that the tridendriform algebra $D$ is complete with respect to the topology given by a decreasing filtration $D = D_1 \supset D_2 \supset D_3 \supset \cdots$, compatible with the tridendriform structure in the sense that $D_p \prec D_q \subset D_{p+q}$, $D_p \succ D_q \subset D_{p+q}$ and $D_p \cdot D_q \subset D_{p+q}$ for any $p, q \ge 1$. In the unital algebra $\overline{D}$ we can then define the exponential and logarithm maps in terms of the associative product (23):
\[
\exp^*(x) := \sum_{n \ge 0} x^{*n}/n!, \qquad \log^*(1 + x) := -\sum_{n > 0} (-1)^n x^{*n}/n.
\]
Let $L_{a\succ}(b) := a \succ b$ and $R_{\succ b}(a) := a \succ b$. Note that $L_{a\succ}L_{b\succ} = L_{(a*b)\succ}$. We recursively define the tridendriform words in $\overline{D}$ for fixed elements $x_1, \ldots, x_n \in D$, $n \in \mathbb{N}$, by
\[
w^{(0)}_\prec := 1 =: w^{(0)}_\succ, \qquad w^{(n)}_\prec(x_1, \ldots, x_n) := x_1 \prec w^{(n-1)}_\prec(x_2, \ldots, x_n), \qquad w^{(n)}_\succ(x_1, \ldots, x_n) := w^{(n-1)}_\succ(x_1, \ldots, x_{n-1}) \succ x_n.
\]
In case that $x_1 = \cdots = x_n = x$ we simply write $w^{(n)}_\prec(x)$, resp. $w^{(n)}_\succ(x)$. Our main example of a tridendriform algebra comes from the following simple observation. One verifies easily that any associative Rota-Baxter algebra $A$ of weight $\theta$ gives rise to a tridendriform algebra as follows:
\[
a \prec b := a\,T(b), \qquad a \succ b := T(a)\,b, \qquad a \cdot b := \theta\,ab.
\]
The corresponding associative and left pre-Lie products are explicitly given for $a, b \in A$ by
\[
a * b = a\,T(b) + T(a)\,b + \theta\,ab, \qquad a \rhd b = [T(a), b] + \theta\,ab.
\]
Note that in a commutative Rota-Baxter algebra with weight $\theta \neq 0$, the pre-Lie products are still nontrivial although the Lie brackets vanish. This leads to the classical Spitzer identity [19,20]. By omitting the $\theta$-term, the product $a \diamond b := [T(a), b]$ yields a post-Lie algebra structure (13), (14) on $A$ with respect to the Lie bracket defined in terms of the third tridendriform product [3]. The $\theta$-term on the right-hand side of (15), respectively the product $\cdot$ in the definition of the tridendriform algebra, is necessary, for instance, when we replace the Riemann integral by a Riemann-type summation operator. This becomes evident once we recall the modified Leibniz rule for the finite difference operator $(\Delta f)(x) := f(x+1) - f(x)$:
\[
\Delta(fg) = \Delta(f)\,g + f\,\Delta(g) + \Delta(f)\,\Delta(g).
\]
The corresponding summation operator $T(f)(n) := \sum_{0 \le k < n} f(k)$ verifies the weight $\theta = 1$ Rota-Baxter relation (15). See further below in subsection 4.3 for more details. More generally, for finite Riemann sums
\[
T_\theta(f)(x) := \theta \sum_{0 \le k < [x/\theta]} f(k\theta),
\]
where $\theta$ is a positive real number and $[-]$ is the floor function, we find that $T_\theta$ satisfies the weight $\theta$ Rota-Baxter relation (15).
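The seven axioms (16)-(22) can be checked numerically for this Rota-Baxter construction. The sketch below (an illustration, not part of the original text) realizes the weight-one summation operator on matrix-valued sequences, so that the products are genuinely non-commutative, and verifies all seven identities on random data:

```python
import numpy as np

rng = np.random.default_rng(7)
theta = 1.0
a, b, c = (rng.standard_normal((8, 2, 2)) for _ in range(3))   # matrix-valued sequences

def T(x):                                  # weight-1 summation operator
    out = np.zeros_like(x)
    out[1:] = np.cumsum(x, axis=0)[:-1]
    return out

def prec(x, y): return x @ T(y)            # x "prec" y := x T(y)
def succ(x, y): return T(x) @ y            # x "succ" y := T(x) y
def dot(x, y):  return theta * (x @ y)     # x . y := theta x y
def star(x, y): return prec(x, y) + succ(x, y) + dot(x, y)

checks = [
    (prec(prec(a, b), c), prec(a, star(b, c))),   # (a<b)<c = a<(b*c)
    (prec(succ(a, b), c), succ(a, prec(b, c))),   # (a>b)<c = a>(b<c)
    (succ(star(a, b), c), succ(a, succ(b, c))),   # (a*b)>c = a>(b>c)
    (dot(prec(a, b), c), dot(a, succ(b, c))),     # (a<b).c = a.(b>c)
    (dot(succ(a, b), c), succ(a, dot(b, c))),     # (a>b).c = a>(b.c)
    (prec(dot(a, b), c), dot(a, prec(b, c))),     # (a.b)<c = a.(b<c)
    (dot(dot(a, b), c), dot(a, dot(b, c))),       # (a.b).c = a.(b.c)
]
print(all(np.allclose(l, r) for l, r in checks))   # True
```

Each identity reduces to the Rota-Baxter relation (15) or to plain associativity, which is exactly the point of the construction.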

2.4.1.
Tridendriform algebra structure on planar reduced trees. In [36] it was shown that the linear span $T'^{red}_{pl}$ of planar reduced trees different from $|$ generates the free tridendriform algebra in one generator. Taking $|$ as a unit for the associative product $*$, the three products for two trees $s = \vee(s_1, \ldots, s_n)$ and $t = \vee(t_1, \ldots, t_p)$ are given recursively by
\[
s \succ t = \vee(s * t_1, t_2, \ldots, t_p), \qquad s \prec t = \vee(s_1, \ldots, s_{n-1}, s_n * t), \qquad s \cdot t = \vee(s_1, \ldots, s_{n-1}, s_n * t_1, t_2, \ldots, t_p).
\]
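These recursions are easy to implement. In the sketch below (illustrative, not part of the original text) a tree is a nested tuple of its subtrees, the single edge $|$ is the empty tuple, and linear combinations are dictionaries mapping trees to coefficients. Applied to the generator $Y$ (the unique tree with two leaves), the three products return exactly the three planar reduced trees with three leaves:

```python
# a planar reduced tree is a nested tuple of its subtrees; | is the empty tuple
LEAF = ()

def combine(dicts):
    out = {}
    for d in dicts:
        for t, c in d.items():
            out[t] = out.get(t, 0) + c
    return out

def star(s, t):        # associative product, with | as unit
    if s == LEAF: return {t: 1}
    if t == LEAF: return {s: 1}
    return combine([prec(s, t), succ(s, t), middle(s, t)])

def prec(s, t):        # s < t = v(s_1, ..., s_{n-1}, s_n * t)
    return {s[:-1] + (u,): c for u, c in star(s[-1], t).items()}

def succ(s, t):        # s > t = v(s * t_1, t_2, ..., t_p)
    return {(u,) + t[1:]: c for u, c in star(s, t[0]).items()}

def middle(s, t):      # s . t = v(s_1, ..., s_{n-1}, s_n * t_1, t_2, ..., t_p)
    return {s[:-1] + (u,) + t[1:]: c for u, c in star(s[-1], t[0]).items()}

Y = (LEAF, LEAF)       # the generator: the unique tree with two leaves
print(prec(Y, Y))      # the right comb with three leaves
print(succ(Y, Y))      # the left comb with three leaves
print(middle(Y, Y))    # the three-leaf corolla
```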
The tree $|$ can be taken as the unit for the corresponding augmented algebra. For any collection of trees $(t_1, \ldots, t_n)$ we easily derive (32), as well as (33) for $n \ge 2$. We have omitted parentheses in the second line of the computation above, by virtue of Axiom (20). The freeness property of $(T'^{red}_{pl}, \succ, \prec, \cdot)$ implies that for any tridendriform algebra $D$ and any $a \in D$ there is a unique tridendriform algebra morphism $F_a : T'^{red}_{pl} \to D$. Using (32) and (33), this morphism can be described recursively. Indeed, starting from $F_a(Y) := a$, where $Y$ denotes the unique tree with two leaves, $F_a$ is determined on $\vee(t_1, \ldots, t_n)$ for any $n \ge 2$, as is easily seen from (33).

Linear tridendriform equations and the pre-Lie Magnus expansion
In this section we abstract (3) into linear fixed point equations in the complete filtered tridendriform algebra $hA[[h]]$, augmented by a unit 1, where $(A, \prec, \succ, \cdot)$ is any tridendriform algebra. For $a \in A$, let $X = X(ha)$ and $\bar{X} = \bar{X}(ha)$ be solutions of
\[
(34)\qquad X = 1 + ha \prec X \qquad\text{and}\qquad (35)\qquad \bar{X} = 1 + ha \,\bar{\prec}\, \bar{X},
\]
respectively. Equation (34) is solved by the series
\[
X(ha) = 1 + ha + h^2\, a \prec a + h^3\, a \prec (a \prec a) + h^4\, a \prec (a \prec (a \prec a)) + \cdots.
\]
3.1. The pre-Lie Magnus expansion. In [19,20] we have given a general formula, the pre-Lie Magnus expansion, for the logarithms of solutions of such linear dendriform equations in terms of the left pre-Lie product. Applying this to the two dendriform structures above, we obtain, with the notations of Paragraph 2.4:

Theorem 3 ([19,20]). The elements $\Omega' = \log^*(X(ha))$ and $\bar{\Omega}' = \log^*(\bar{X}(ha))$ in $A[[h]]$ satisfy respectively the two recursive formulas
\[
(38)\qquad \Omega' = \sum_{m \ge 0} \frac{B_m}{m!}\, L^{(m)}_{\Omega' \rhd}(ha), \qquad (39)\qquad \bar{\Omega}' = \sum_{m \ge 0} \frac{B_m}{m!}\, L^{(m)}_{\bar{\Omega}' \bar{\rhd}}(ha),
\]
where $B_m$ is the $m$-th Bernoulli number. The first few terms are
\[
\Omega'(ha) = ha - \frac{h^2}{2}\, a \rhd a + h^3\Big(\frac{1}{4}(a \rhd a) \rhd a + \frac{1}{12}\, a \rhd (a \rhd a)\Big) + \cdots.
\]
Recall (6): the terms beyond order $h$ in $\Omega'(ha)$ are needed to eliminate the unwanted terms when calculating
\[
X(ha) = \exp^*\Big(ha + \sum_{m \ge 1} \frac{B_m}{m!}\, L^{(m)}_{\Omega' \rhd}(ha)\Big).
\]
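The statement of Theorem 3 can be tested numerically in the weight-one Rota-Baxter tridendriform algebra of matrix-valued sequences from Section 2 (a sketch under these conventions, not part of the original text): solve $X = 1 + a \prec X$ order by order (set $h = 1$), expand $\log^*(X)$, and compare with the recursion (38) at orders two and three:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((5, 2, 2))   # a matrix-valued sequence

def T(x):                            # weight-1 Rota-Baxter (summation) operator
    out = np.zeros_like(x)
    out[1:] = np.cumsum(x, axis=0)[:-1]
    return out

def prec(x, y): return x @ T(y)
def succ(x, y): return T(x) @ y
def dot(x, y):  return x @ y                                  # theta = 1
def star(x, y): return prec(x, y) + succ(x, y) + dot(x, y)
def lpre(x, y): return succ(x, y) + dot(x, y) - prec(y, x)    # left pre-Lie product

# homogeneous components of X = 1 + a prec X (the unit satisfies a prec 1 = a)
X1 = a
X2 = prec(a, X1)
X3 = prec(a, X2)

# Omega' = log*(X), expanded order by order
O2 = X2 - star(X1, X1) / 2
O3 = X3 - (star(X1, X2) + star(X2, X1)) / 2 + star(X1, star(X1, X1)) / 3

# pre-Lie Magnus recursion, with B_1 = -1/2, B_2 = 1/6
M2 = -lpre(a, a) / 2
M3 = -lpre(M2, a) / 2 + lpre(a, lpre(a, a)) / 12

print(np.allclose(O2, M2), np.allclose(O3, M3))   # True True
```

The order-two case is immediate: $\log^*(X)$ gives $a \prec a - \frac{1}{2}a*a = \frac{1}{2}(a \prec a - a \succ a - a \cdot a)$, which is exactly $-\frac{1}{2}\,a \rhd a$.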

The post-Lie Magnus expansion.
Splitting the pre-Lie product into strands, we see from a tridendriform point of view that the pre-Lie multiplication operator in (38) decomposes into
\[
L_{a\rhd} = L_{a\diamond} + L_{a\cdot}, \qquad\text{where}\quad a \diamond b := a \succ b - b \prec a.
\]
Similarly, the other pre-Lie multiplication operator in (39) decomposes into $L_{a\bar{\rhd}} = L_{a\diamond} - R_{\cdot a}$, with the right multiplication $R_{\cdot a}(b) := b \cdot a$. Recall that the vector space $A$ together with the bilinear binary operation $\diamond$ and the Lie bracket following from the $\cdot$ product, $[a, b]_\cdot := a \cdot b - b \cdot a$, defines a post-Lie algebra (13), (14). This post-Lie algebra is very particular, as it comes with two compatible pre-Lie structures, namely
\[
a \rhd b = a \diamond b + a \cdot b \qquad\text{and}\qquad a \,\bar{\rhd}\, b = a \diamond b - b \cdot a.
\]
Splitting the pre-Lie product $\rhd = \diamond + \cdot$ in (38), or analogously $\bar{\rhd} = \diamond - \cdot^{op}$ in (39), yields refinements of the pre-Lie Magnus expansions of the elements $\Omega'$ and $\bar{\Omega}'$ described in the previous paragraph, which it would be very interesting to understand in greater detail. A forthcoming paper will be devoted to exploring the post-Lie structure of the Magnus expansion.

A closed formula for the logarithm.
In this paragraph, expanding the pre-Lie product, we give an explicit expression for $\log^*(X)$ and $\log^*(\bar{X})$ in the completed free tridendriform algebra in one generator. We can safely set the dummy parameter $h$ to 1 here, thanks to completeness.
Recall that a leaf of a planar binary tree is a descent if it is not the leftmost one and if it is pointing up to the left [13]. For example, consider right and left combs as in Paragraph 2.1: the first tree has one descent and the second has two descents, while the last two trees have no descents. We extend this notion to planar reduced trees in two different ways as follows: a leaf is a descent if it is not the leftmost one, and if it is not the rightmost edge above a vertex. A strict descent is a descent which is moreover the leftmost edge above some vertex.
where $d(t)$ (resp. $\bar{d}(t)$) denotes the number of descents (resp. strict descents) of $t$, and $|t|$ its number of leaves minus one.
Proof. Both statements will be derived from [23, Corollary 6]. There is a unique dendriform algebra morphism $F_L$ (resp. $F_R$) from the free dendriform algebra $T'_{pl\,bin}$ to $A_L$ (resp. $A_R$) such that $F_L(Y) = F_R(Y) = Y$, where $Y$ denotes the unique tree with two leaves.
Lemma 5. For any $t \in T'_{pl\,bin}$ we have the equalities (42) and (43).

Proof. Recall that the notions of descent and strict descent coincide for planar binary trees. Lemma 5 is obviously true for the unique tree $Y$ with two leaves. Let us prove it by induction on the degree $|t|$. Remark first that, in any planar reduced tree, shrinking an inner edge does not change the number of descents if and only if this edge points up to the right. Similarly, shrinking an inner edge does not change the number of strict descents if and only if this edge points up to the left. Shrinking any other inner edge "in between" will simultaneously increase the number of descents and decrease the number of strict descents by one. Hence the right-hand side of (42), resp. (43), is the sum of all planar reduced trees which can be obtained from $t$ by repeatedly glueing two vertices together, provided they are linked by an edge pointing up to the right, resp. up to the left. Recall that any planar binary tree writes $t = t_1 \vee t_2 = t_1 \succ Y \prec t_2$ in a unique way. We can then compute $F_L(t)$ using the induction hypothesis. The computation of $F_R(t)$ is done similarly using strict descents.

End of proof of Theorem 4: Corollary 6 of [23] applied to $A_L$ and $A_R$, combined with Lemma 5, immediately yields Theorem 4.

Proof. The reader is invited to check the seven tridendriform axioms. For example, the first ones follow by direct computation, and similarly for the four remaining ones. Compatibility with the grading is obvious. A complete proof can be found, e.g., in [50, Chap. 2].

Planar reduced trees and surjections.
The material presented in this paragraph is mostly borrowed from [36] and [8]. A bijective correspondence between surjections and planar reduced trees with levels is described as follows: a planar reduced tree with r levels is a planar reduced tree t with, say, m internal vertices and n + 1 leaves together with a surjective nonincreasing map ϕ from the poset of its internal vertices onto {1, . . . , r}. Such a tree admits a graphical realization by drawing the internal vertices at the prescribed levels, with level 1 being the top one and level r being the deepest one. Any planar reduced tree with levels (t, ϕ) gives rise to several such trees (t 1 , ϕ 1 ), . . . , (t k , ϕ k ), where t = (t 1 , . . . , t k ) and ϕ i is the standardized restriction of the map ϕ to the internal vertices of t i .
To any such tree $(t, \varphi)$ we can associate a surjection $\sigma_{t,\varphi} : \{1, \ldots, n\} \twoheadrightarrow \{1, \ldots, r\}$ as follows: $\sigma_{t,\varphi}(i)$ is the level of the internal vertex $u_i$ situated between leaves $l_i$ and $l_{i+1}$ (the leftmost being the first and the rightmost being number $n+1$). This correspondence $P$ is a bijection, the inverse of which is recursively given as follows: the surjection $\sigma : \{1, \ldots, n\} \twoheadrightarrow \{1, \ldots, r\}$ reaches its maximal value $r$ a certain number of times, say $k - 1$. It then gives rise to $k$ sequences of integers, possibly with repetitions, in $\{1, \ldots, r-1\}$. Some of them can of course be empty. By "standardizing" the integers in each sequence, each of them becomes a surjection. For instance, (341324134113) gives the four sequences (3), (132), (13) and (113) which, after standardizing, give the four surjections (1), (132), (12) and (112). The grafting of the $k$ trees (in the order given above) gives the underlying tree of $P^{-1}(\sigma)$, and the original surjection is used to determine the levels of each vertex, namely $\varphi(u_j) = \sigma(j)$.
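The splitting-and-standardizing step can be sketched in a few lines (illustrative code, not part of the original text), reproducing the example above:

```python
def standardize(word):
    # relabel the distinct values of the word order-preservingly onto 1..r
    rank = {v: i + 1 for i, v in enumerate(sorted(set(word)))}
    return tuple(rank[v] for v in word)

def split_at_max(word):
    # cut the word at every occurrence of its maximal value r,
    # then standardize each of the resulting pieces
    r = max(word)
    pieces, current = [], []
    for v in word:
        if v == r:
            pieces.append(tuple(current))
            current = []
        else:
            current.append(v)
    pieces.append(tuple(current))
    return [standardize(p) for p in pieces]

word = (3, 4, 1, 3, 2, 4, 1, 3, 4, 1, 1, 3)
print(split_at_max(word))   # [(1,), (1, 3, 2), (1, 2), (1, 1, 2)]
```

The maximal value 4 occurs three times, giving $k = 4$ pieces, exactly as in the worked example.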
All descents, indicated with bracketed numbers, are strict except the [1] on the right.
Forgetting the levels, we thus obtain for any positive integer $n$ a surjective map $\Psi : ST_n \twoheadrightarrow (T^{red}_{pl})_n$. A descent, resp. a strict descent, of a surjection $f$ in $ST_n$ is an integer $j \in \{1, \ldots, n-1\}$ such that $f(j) \ge f(j+1)$, resp. $f(j) > f(j+1)$. These notions match the corresponding notions for planar reduced trees. In fact, given a planar reduced tree with levels, any descent (resp. any strict descent) gives rise to a corresponding descent (resp. strict descent) of the associated surjection, and vice versa. As an obvious corollary we have for any $f \in ST$:

Proof. Take any $u = (s, t) = (u_1, \ldots, u_{n+m})$ in $T_\sigma(\mathbb{N}) \times T_\tau(\mathbb{N})$, and order the $u_j$'s from largest to smallest. This uniquely defines a surjection $\gamma \in ST_{n+m}$, by sending the largest $u_j$'s to 1, the second largest ones to 2, and so on. The standardization of $F = \gamma|_{\{1,\ldots,n\}}$ (resp. $G = \gamma|_{\{n+1,\ldots,n+m\}}$) is equal to $\sigma$ (resp. $\tau$). Then obviously $u \in T_\gamma(\mathbb{N})$, which proves the first assertion. If moreover $\min s < \min t$, then $\max F > \max G$, and similarly if $\min s > \min t$ or $\min s = \min t$, which proves the lemma.
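Weak and strict descents of surjections are straightforward to compute; the sketch below (not part of the original text) checks them on the surjection (341324134113) used earlier:

```python
def weak_descents(f):
    # j in {1, ..., n-1} with f(j) >= f(j+1)
    return {j + 1 for j in range(len(f) - 1) if f[j] >= f[j + 1]}

def strict_descents(f):
    # j in {1, ..., n-1} with f(j) > f(j+1)
    return {j + 1 for j in range(len(f) - 1) if f[j] > f[j + 1]}

f = (3, 4, 1, 3, 2, 4, 1, 3, 4, 1, 1, 3)
print(strict_descents(f))   # {2, 4, 6, 9}
print(weak_descents(f))     # {2, 4, 6, 9, 10}
```

The two notions differ exactly at the plateau $f(10) = f(11) = 1$, which is a weak but not a strict descent.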
Now let $a \in A$, and let $\widetilde{F}_a : \mathrm{WQSym} \to A$ be the linear map defined for any $\sigma \in ST_n$ by
\[
\widetilde{F}_a(\sigma) := \sum_{(s_1, \ldots, s_n) \in T_\sigma(\mathbb{N})} a(s_1) \cdots a(s_n).
\]
Theorem 9. The map $\widetilde{F}_a : \mathrm{WQSym} \to A$ defined above is a tridendriform algebra morphism, and we have $F_a = \widetilde{F}_a \circ D$, where $F_a : \mathcal{T}^{red}_{pl} \to A$ is the unique tridendriform algebra morphism such that $F_a(Y) = a$, with $Y$ the unique tree with two leaves. In other words, we have the following commutative diagram of tridendriform algebra morphisms:

Proof. By direct computation: take $\sigma \in ST_n$ and $\tau \in ST_m$. Then, using Lemma 8, one computes the product $\widetilde{F}_a(\sigma) \prec \widetilde{F}_a(\tau)$. The conclusion follows by applying $D$ to both sides. The computation for $\succ$ and $\cdot$ is completely similar.

Remark 12.
Contrary to what happens in the continuous case [23], the rewriting of $\Omega(a)$ and $\bar{\Omega}(a)$ as Lie elements is not obvious. Hence, the representation of (56) and (57) in terms of Lie brackets is rather involved. This is related to the fact that the pre-Lie products $\rhd$ and $\bar{\rhd}$ in a Rota-Baxter algebra cannot be expressed in terms of the Lie bracket and the Rota-Baxter operator alone, unless the weight $\theta$ equals zero. It is at this point that the post-Lie structure enters the picture. We plan to address this in detail in a forthcoming paper.