A Unified Analysis of Linear Quaternion Dynamic Equations on Time Scales

In recent years, considerable attention has been paid to quaternion differential equations (QDEs), which extend ordinary differential equations. The theory of QDEs is now well established and has wide applications in physics and the life sciences. This paper establishes a systematic framework for the theory of linear quaternion dynamic equations on time scales (QDETS), which can be applied to the modeling of wave phenomena, fluid dynamics, and filter design. The algebraic structure of the solutions to QDETS is a left or right module, not a linear vector space. Owing to the non-commutativity of the quaternion algebra, many concepts and properties of the classical dynamic equations on time scales (DETS) cannot be carried over and must be redefined accordingly. Using the q-determinant, a novel definition of the Wronskian is introduced in the quaternion setting, different from the standard one in DETS. Liouville's formula for QDETS is also analyzed. Building on these results, the solutions to linear QDETS are established. Putzer's algorithm for evaluating the fundamental solution matrix of homogeneous QDETS is presented. Furthermore, the variation of constants formula for solving nonhomogeneous QDETS is given. Concrete examples are provided to illustrate the feasibility of the proposed algorithms.


Introduction
The theory of dynamic equations on time scales (DETS) has enormous applications [7,29]. It is applicable to many fields in which physical phenomena can be described by continuous or discrete dynamical models. For instance, both continuous and discrete models are used in 3D tracking of shape and motion estimation [30] and in DNA dynamics [26]. A unified framework for the theory of DETS is presented in [10]. For linear QDETS with a constant quaternion coefficient matrix, it is not difficult to find a fundamental solution matrix if the coefficient matrix has enough right linearly independent eigenvectors. Otherwise, we cannot use the method of combining series expansion and root subspace decomposition to construct the fundamental solution matrices of linear QDETS, because the generalized exponential function on time scales does not possess a simple series expansion, in contrast to e^{At}. In order to overcome this difficulty, we propose a modified Putzer's algorithm to find the fundamental solution matrices of linear QDETS. Putzer's algorithm is particularly useful for quaternion coefficient matrices that do not have enough right linearly independent eigenvectors, since it avoids the computation of eigenvectors. To the authors' best knowledge, Vieta's formulas for quaternion polynomials and the theory of annihilating polynomials of quaternion matrices have not been well studied yet, so the operability of Putzer's algorithm for QDETS may be confronted with some challenges. Nevertheless, the discussion in a later section indicates that Putzer's algorithm for QDETS is still a good choice.
The rest of the paper is organized as follows. In Section 2, some basic concepts of quaternion algebra and the calculus of time scales are reviewed. In Section 3, the first order linear homogeneous QDETS are studied and the properties of the generalized exponential function for QDETS are investigated. In Section 4, the structure of general solutions of higher order linear QDETS is analyzed. Specifically, a novel Wronskian determinant for QDETS is defined, and Liouville's formula and the variation of constants formula are given. In Section 5, explicit formulations of the fundamental solution matrices for linear QDETS with a constant coefficient matrix are presented. Some examples are given to illustrate the feasibility of the established Putzer's algorithm. Finally, some conclusions are drawn at the end of the paper.
For every quaternion q = q_0 + iq_1 + jq_2 + kq_3, the scalar and vector parts of q are defined as R(q) = q_0 and ℑ(q) = q_1 i + q_2 j + q_3 k, respectively. If q = ℑ(q), then q is called a pure imaginary quaternion. The quaternion conjugate is defined by q̄ = q_0 − iq_1 − jq_2 − kq_3, and the norm |q| of q is defined by |q|^2 = qq̄ = q̄q = ∑_{m=0}^{3} q_m^2. Using the conjugate and norm of q, one can define the inverse of q ∈ H\{0} by q^{−1} = q̄/|q|^2. For each fixed unit pure imaginary quaternion ς, the quaternion algebra has a subset C_ς := {a + bς : a, b ∈ R}, and C_ς is isomorphic to the complex numbers.
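The definitions above can be exercised numerically. Below is a minimal sketch (not from the paper; all helper names such as qmul are ours) representing a quaternion as a 4-tuple (q_0, q_1, q_2, q_3) and checking |q|^2 = qq̄, the inverse formula, and the noncommutative product rule ij = k:

```python
import math

def qmul(p, q):
    # Hamilton product of p = p0 + p1*i + p2*j + p3*k with q (noncommutative).
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def qconj(q):
    # Quaternion conjugate q0 - q1*i - q2*j - q3*k.
    return (q[0], -q[1], -q[2], -q[3])

def qnorm(q):
    # |q|, with |q|^2 the sum of the squared components.
    return math.sqrt(sum(c * c for c in q))

def qinv(q):
    # q^{-1} = conj(q) / |q|^2 for q != 0.
    n2 = sum(c * c for c in q)
    return tuple(c / n2 for c in qconj(q))

q = (1.0, 2.0, -1.0, 0.5)
print(qmul(q, qconj(q)))                   # scalar part equals |q|^2 = 6.25
print(qmul(q, qinv(q)))                    # approximately (1, 0, 0, 0)
print(qmul((0, 1, 0, 0), (0, 0, 1, 0)))    # i * j = k, i.e. (0, 0, 0, 1)
```

Note that qmul((0,0,1,0), (0,1,0,0)) gives (0, 0, 0, −1) = −k, illustrating the noncommutativity that drives the rest of the paper.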
The quaternion exponential function exp(q) is defined by means of an infinite series as exp(q) := ∑_{n=0}^{∞} q^n/n!.
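The series can be explored numerically. The sketch below (our code, using the standard 2×2 complex-matrix model of H, a representation choice not made in the paper) truncates the series and checks the Euler-type identity exp(θj) = cos θ + j sin θ for a unit pure imaginary quaternion:

```python
import numpy as np

def to_mat(q0, q1, q2, q3):
    # Model q = q0 + q1*i + q2*j + q3*k as the 2x2 complex matrix
    # [[q0 + q1*1j, q2 + q3*1j], [-q2 + q3*1j, q0 - q1*1j]].
    return np.array([[q0 + q1*1j, q2 + q3*1j],
                     [-q2 + q3*1j, q0 - q1*1j]])

def qexp(m, terms=30):
    # Truncated series sum_{n=0}^{terms-1} m^n / n!
    out = np.zeros_like(m)
    term = np.eye(2, dtype=complex)
    fact = 1.0
    for n in range(terms):
        out = out + term / fact
        term = term @ m
        fact *= (n + 1)
    return out

theta = 0.7
u = to_mat(0, 0, 1, 0)          # the unit quaternion j
lhs = qexp(theta * u)
rhs = np.cos(theta) * to_mat(1, 0, 0, 0) + np.sin(theta) * u
print(np.allclose(lhs, rhs))    # True: exp(theta*j) = cos(theta) + j*sin(theta)
```

For ‖q‖ of moderate size, 30 terms already reproduce the exponential to machine precision.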
Then all possible values of the argument can be expressed as arg(q) = Arg(q) + 2kπ, k ∈ Z. It follows that the polar form of a non-real quaternion can be written as q = |q|(cos θ + ς sin θ), where ς = ℑ(q)/|ℑ(q)| and θ = arg(q). Accordingly, the principal logarithm function is defined by Ln(q) := ln(|q|) + (ℑ(q)/|ℑ(q)|) Arg(q) for q ∈ H \ R, and Ln(q) := ln(|q|) + iπ for negative real q. For h > 0, Georgiev and Morais [18] introduced the Hilger quaternion numbers H_h. They defined the addition ⊕ on H_h by p ⊕ q := p + q + pqh and proved that (H_h, ⊕) is a group. The generalized quaternion cylinder transformation was also given in [18]. Next we recall an important transformation between quaternion and complex matrices which was studied in [6,42]. Every quaternion matrix A ∈ H^{m×n} can be expressed uniquely in the form A = A_1 + A_2 j with A_1, A_2 ∈ C^{m×n}. So we can define G : H^{m×n} → C^{2m×2n} by

G(A) := [ A_1, A_2 ; −Ā_2, Ā_1 ],

where G(A) is called the complex adjoint matrix of the quaternion matrix A. For simplicity, G(A) will be denoted by χ_A in the following.
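The complex adjoint can be checked numerically. The sketch below (helper names are ours) builds χ_A from the decomposition A = A_1 + A_2 j and verifies the standard multiplicativity property χ_{AB} = χ_A χ_B on random matrices:

```python
import numpy as np

def adjoint(A1, A2):
    # chi_A for A = A1 + A2*j, with complex blocks A1, A2.
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

def qmatmul(A1, A2, B1, B2):
    # (A1 + A2 j)(B1 + B2 j) = (A1 B1 - A2 conj(B2)) + (A1 B2 + A2 conj(B1)) j,
    # using the rule j z = conj(z) j for complex z.
    return A1 @ B1 - A2 @ B2.conj(), A1 @ B2 + A2 @ B1.conj()

rng = np.random.default_rng(0)
A1, A2, B1, B2 = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
                  for _ in range(4))
C1, C2 = qmatmul(A1, A2, B1, B2)
print(np.allclose(adjoint(C1, C2), adjoint(A1, A2) @ adjoint(B1, B2)))  # True
```

This homomorphism property is what lets later sections transfer statements about A to statements about the complex matrix χ_A.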
From [28], we know that H^n over the division ring H is a right H-module, and η_1, η_2, . . . , η_k ∈ H^n are right linearly independent if η_1 q_1 + η_2 q_2 + · · · + η_k q_k = 0 with q_1, . . . , q_k ∈ H implies q_1 = q_2 = · · · = q_k = 0. Let A ∈ H^{n×n}. A nonzero η ∈ H^{n×1} is said to be a right eigenvector of A corresponding to the right eigenvalue λ ∈ H provided that Aη = ηλ holds. A matrix A_1 is said to be similar to a matrix A_2 if A_2 = S^{−1}A_1S for some nonsingular matrix S.
In particular, we say that two quaternions p, q are similar if p = α −1 qα for some nonzero quaternion α. We recall some basic results about quaternion matrices which can be found, for instance, in [42,8,33]. (iv) det χ A ≥ 0, and the characteristic polynomial of χ A has real coefficients.
(vi) If A is (upper or lower) triangular, then the only eigenvalues are the diagonal elements (and the quaternions similar to them).

Calculus on time scales
The theory of time scales has gained much popularity in recent years. Bohner and Peterson, together with their research collaborators, such as Agarwal and Ahlbrandt, have greatly developed the theory of time scales (see e.g. [3,4,31,9]). A systematic introduction to dynamic equations on time scales was given by Bohner and Peterson [10]. We adopt the standard notations in [10,11,2]. A time scale is a nonempty closed subset of R. There are some typical examples of time scales.
(i) R consists of all real numbers.
(ii) hZ := {hk : k ∈ Z}, where Z is the set of integers.
where a, b are positive real constants.
Throughout the paper, let T be a time scale. For t ∈ T, the forward jump operator σ and the backward jump operator ρ are respectively defined by σ(t) := inf{s ∈ T : s > t} and ρ(t) := sup{s ∈ T : s < t}.
If σ(t) > t, σ(t) = t, ρ(t) < t, or ρ(t) = t, then t is said to be right-scattered, right-dense, left-scattered, or left-dense, respectively. The graininess function µ : T → [0, ∞) is defined by µ(t) := σ(t) − t, and the set T^κ is T with a left-scattered maximum (if it exists) removed. The classical time scales calculus is concerned only with real-valued functions. With minor adjustments, some basic concepts of time scales calculus can also be carried over to quaternion-valued functions. We denote the set of all quaternion-valued functions defined on the time scale T by H ⊗ T. Definition 2.2 Assume that f ∈ H ⊗ T and let t ∈ T^κ. The delta derivative f^∆(t) is defined to be the number (provided it exists) with the property that for any ε > 0 there exists δ > 0 such that |f(σ(t)) − f(s) − f^∆(t)(σ(t) − s)| ≤ ε|σ(t) − s| for all s ∈ (t − δ, t + δ) ∩ T. It follows that some useful results concerning the delta derivative of real-valued functions in [10] can be carried over to quaternion-valued functions.
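For a finite time scale, the jump operators and the graininess are easy to compute directly. A small illustrative sketch (our code; we adopt the usual conventions inf ∅ = sup T and sup ∅ = inf T, so σ fixes the maximum and ρ fixes the minimum):

```python
def sigma(T, t):
    # Forward jump operator: least element of T strictly above t.
    bigger = [s for s in T if s > t]
    return min(bigger) if bigger else t   # convention: sigma(max T) = max T

def rho(T, t):
    # Backward jump operator: greatest element of T strictly below t.
    smaller = [s for s in T if s < t]
    return max(smaller) if smaller else t

def mu(T, t):
    # Graininess mu(t) = sigma(t) - t.
    return sigma(T, t) - t

T = [0, 1, 3, 3.5]                        # a finite time scale
print([mu(T, t) for t in T])              # [1, 2, 0.5, 0]: every point except
                                          # the maximum is right-scattered
```

On T = hZ these functions return σ(t) = t + h and µ(t) ≡ h, matching example (ii) above.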
(iv) The product fg is delta differentiable at t and (fg)^∆(t) = f^∆(t)g(t) + f(σ(t))g^∆(t). (v) If f(t)f(σ(t)) ≠ 0, then f^{−1} is delta differentiable at t and (f^{−1})^∆(t) = −(f(σ(t)))^{−1} f^∆(t) (f(t))^{−1}.
To describe integrable quaternion-valued functions on time scales, we need to introduce the concept of rd-continuity. The rd-continuity of real-valued functions was defined by Bohner et al. [10].

Definition 2.6 A real-valued function is called rd-continuous if it is continuous at right-dense points and its left-sided limits exist (and are finite) at left-dense points.
Bohner et al. [10] proved that every rd-continuous function has an antiderivative. Next we introduce the rd-continuity and integrability of quaternion-valued functions.
We say that f = f_0 + f_1 i + f_2 j + f_3 k ∈ H ⊗ T is rd-continuous provided that each of its real components f_0, f_1, f_2, f_3 is rd-continuous. For every rd-continuous function f, we define the integral componentwise by ∫_a^t f(s) ∆s := ∫_a^t f_0(s) ∆s + i ∫_a^t f_1(s) ∆s + j ∫_a^t f_2(s) ∆s + k ∫_a^t f_3(s) ∆s for t ∈ T^κ. From the above discussion, we have the following two theorems.
Theorem 2.8 Let a, b ∈ T and suppose that f ∈ H ⊗ T is rd-continuous.
Namely, it is the classical integral from calculus.
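Complementing the T = R case, on T = Z the delta integral reduces to a finite sum, ∫_a^b f(t) ∆t = ∑_{t=a}^{b−1} f(t), applied componentwise to a quaternion-valued integrand. A quick sketch (4-tuples for quaternions; helper names are ours):

```python
def delta_integral_Z(f, a, b):
    # Delta integral on T = Z: sum f(t) for t = a, a+1, ..., b-1,
    # computed componentwise for a quaternion-valued f.
    total = (0.0, 0.0, 0.0, 0.0)
    for t in range(a, b):
        total = tuple(x + y for x, y in zip(total, f(t)))
    return total

f = lambda t: (t, 2 * t, 0.0, 1.0)     # f(t) = t + 2t*i + k
print(delta_integral_Z(f, 0, 4))       # (6.0, 12.0, 0.0, 4.0): sums over t = 0..3
```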

First order linear QDETS
In this section, we will study the first order linear QDETS and its corresponding initial value problems. Firstly, we need to introduce some auxiliary concepts.
The set of all regressive and rd-continuous quaternion-valued functions is denoted by R(T, H). By arguments similar to those for (H_h, ⊕), we know that R(T, H) is a group under the addition ⊕ defined by (p ⊕ q)(t) := p(t) + q(t) + µ(t)p(t)q(t), where p, q ∈ R(T, H). Based on the definition of the quaternion cylinder transformation (2.1), the generalized quaternion exponential function for p ∈ R(T, H) is defined by E_p(t, s) := exp(∫_s^t ξ_{µ(τ)}(p(τ)) ∆τ). Clearly, the generalized quaternion exponential function is never zero for any p ∈ R(T, H). We proceed by presenting some important properties of E_p(t, s).
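The group operation ⊕ is designed so that 1 + µ(p ⊕ q) = (1 + µp)(1 + µq) holds, with the factors in this order. The sketch below (our code, for constant graininess µ ≡ h, e.g. T = hZ) checks this defining identity on noncommuting quaternions represented as 4-tuples:

```python
def qmul(p, q):
    # Hamilton product (noncommutative).
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def circle_plus(p, q, h):
    # (p ⊕ q) = p + q + h*p*q for constant graininess h.
    pq = qmul(p, q)
    return tuple(a + b + h * c for a, b, c in zip(p, q, pq))

def one_plus_h(p, h):
    # The quaternion 1 + h*p.
    return (1 + h * p[0], h * p[1], h * p[2], h * p[3])

h = 0.5
p = (0.2, 1.0, 0.0, -0.3)
q = (-0.1, 0.4, 2.0, 0.0)
lhs = one_plus_h(circle_plus(p, q, h), h)
rhs = qmul(one_plus_h(p, h), one_plus_h(q, h))
print(all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs)))   # True
```

Because quaternion multiplication is noncommutative, p ⊕ q and q ⊕ p generally differ, so the order of the factors matters here.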
Proof. Since α ∈ H \ R and µ(t) > 0, Theorem 2.7 applies. Observe that 1 + αµ(t) is certainly not real-valued. The desired identity follows, which completes the proof.
Now we turn to study the following first order linear QDETS.

Definition 3.4 If p ∈ R(T, H), then the first order linear quaternion dynamic equation

y^∆(t) = p(t)y(t) (3.2)

is called regressive. For any fixed t_0 ∈ T and c_0 ∈ H, the corresponding initial value problem is

y^∆(t) = p(t)y(t), y(t_0) = c_0. (3.3)

By arguments similar to Theorem 5.8 in [10], the initial value problem (3.3) has a unique solution.
Remark 3.5 Let ψ_p(t, t_0) denote the unique solution of (3.3) with c_0 = 1. In the classical case, we know that ψ_p(t, s) = E_p(t, s). This assertion, however, is no longer true in the quaternion case, as a direct computation shows. Fortunately, under suitable conditions, ψ_p(t, s) is still equal to E_p(t, s). To prove this result, we first give a useful lemma.
Proposition 3.9 Let p ∈ R(T, H), then the following assertions hold.
Proof. In the classical case, any solution y = φ(t) of (3.2) never vanishes, because it is just a composition of the exponential and ξ_{µ(·)}(p(·)). But Example 3.6 indicates that ψ_p(t, s) need not equal E_p(t, s) in the quaternion case, so assertion 1 is not a trivial result. Let w(t) be defined as above. Then, by Theorem 2.3, w satisfies a first order linear dynamic equation. Therefore w(t_0) = 0 would imply that w(t) = 0 for all t ∈ T by Theorem 3.8. This completes the proof of assertion 1. The remaining assertions follow from the first.

Linear systems of QDETS
In order to state our results, we introduce some notation analogous to that used in [24,15]. Let Ω be an index set. For any B ∈ C^{m×m} ⊗ T, B(Λ_k) denotes the principal sub-matrix of B that lies in the rows and columns indexed by Λ_k. Similarly, let B(Λ_k, ∆) be the m × m matrix generated from B by replacing the original entries in the rows indexed by Λ_k with their delta derivatives.
We consider the linear nonhomogeneous quaternion dynamic equation

x^∆(t) = A(t)x(t) + f(t) (4.2)

and the linear homogeneous quaternion dynamic equation

x^∆(t) = A(t)x(t). (4.3)

We are now in a position to introduce the concept of regressivity of A(t).
The totality of all such regressive and rd-continuous quaternion-valued functions is denoted by R(T, H n×n ).
If A ∈ R(T, H n×n ), we say system (4.2) is regressive.

Lemma 4.2 The matrix I_n + µ(t)A(t) is invertible if and only if I_{2n} + µ(t)χ_{A(t)} is invertible.
Proof. Let Ã(t) = I_n + µ(t)A(t). It is easy to see that χ_{Ã(t)} = I_{2n} + µ(t)χ_{A(t)}. Then by statement 3 of Theorem 2.1, we complete the proof.
We need a lemma about equivalent conditions for regressivity in the classical case; this result can be found in [10,15]. Lemma 4.3 Let B(t) ∈ C^{m×m} ⊗ T. Then for any fixed t ∈ T, the following statements are equivalent.
and V k is defined by (4.1).
An immediate consequence of Lemmas 4.2 and 4.3 is an equivalent condition for the regressivity of A(t) ∈ H^{n×n} ⊗ T. Proof. By Lemmas 4.2 and 4.3, we know that A(t) ∈ H^{n×n} ⊗ T is regressive if and only if all the standard eigenvalues of A(t) are regressive. Note that if λ ∈ H is regressive, then α^{−1}λα is also regressive for any nonzero α ∈ H. Then by statement 2 of Theorem 2.1, we complete the proof.
By similar arguments to Theorem 5.8 in [10], we have the following existence and uniqueness theorem.
Theorem 4.5 If A ∈ R(T, H^{n×n}) and f is rd-continuous, then the initial value problem (4.2), (4.4) has a unique solution.
To study the properties of solutions of (4.3), we should define the concept of the Wronskian for quaternion dynamic equations. Due to the noncommutativity of quaternions, there is no unified definition of the determinant of a quaternion matrix, and many researchers have proposed different definitions. But, as mentioned in [28], some definitions of determinant may not be suitable for defining a Wronskian. Kou et al. [27] adopted the definition of determinant based on permutations proposed by Chen [13], and the resulting proof of Liouville's formula in [27] is complicated. In this paper, we adopt an alternative definition of determinant, called the q-determinant [42]: for A ∈ H^{n×n} it is defined by det_q(A) := det(χ_A). The product rule for the delta derivative is more tedious than the traditional one, and the Wronskian defined in [27] contains many product operations. So we use the q-determinant to define the Wronskian for quaternion dynamic equations on time scales.
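Numerically, the q-determinant is simply the determinant of the complex adjoint. The sketch below (helper names are ours) computes it for a random quaternion matrix and illustrates the property det χ_A ≥ 0 recalled in Section 2, which makes the q-determinant real-valued and hence convenient for a Wronskian:

```python
import numpy as np

def adjoint(A1, A2):
    # Complex adjoint chi_A of the quaternion matrix A = A1 + A2*j.
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

rng = np.random.default_rng(1)
A1 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A2 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

d = np.linalg.det(adjoint(A1, A2))      # the q-determinant det_q(A)
print(abs(d.imag) < 1e-6 * max(1.0, abs(d)), d.real >= 0)
```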
The Wronskian of M(t) is defined by W(t) := det_q(M(t)) = det(χ_{M(t)}). If x_1(t), x_2(t), . . . , x_n(t) are solutions of (4.3), then we call M(t) a solution matrix of (4.3).
In other words, if W(t_0) ≠ 0 for some t_0 ∈ T, then x_1(t), x_2(t), . . . , x_n(t) are right linearly independent on T.
If there exists t_0 ∈ T such that W(t_0) ≠ 0, then by the definition of the Wronskian, χ_{M(t_0)} is invertible. From Theorem 2.1, it follows that M(t_0) is invertible. Thus M(t_0)η = 0 has the unique solution η = 0 by Theorem 4.3 of [42]. This contradicts the fact that η = q is a nonzero solution of M(t_0)η = 0.
Proof. Assume that there exists t_0 ∈ T such that W(t_0) = 0. By arguments similar to Theorem 4.8, it is easy to see that there is a nonzero vector q = (q_1, q_2, . . . , q_n)^⊤ ∈ H^n such that M(t_0)q = 0. Define x(t) := M(t)q for all t ∈ T. By Theorem 4.7, x(t) is a solution of (4.3) with x(t_0) = 0. Note that y(t) ≡ 0 is the unique solution of (4.3) with initial condition (4.4) and η = 0. Therefore M(t)q = x(t) = y(t) ≡ 0. This implies that x_1(t), x_2(t), . . . , x_n(t) are right linearly dependent, which contradicts the hypotheses of the theorem.
To deduce Liouville's formula, we need an important lemma.

Lemma 4.10 Let M (t) be a solution matrix of (4.3) and W (t) be the corresponding Wronskian. Then for any index set
Proof. Without loss of generality, we may assume that the indices in Λ_k are arranged in increasing order. We only consider the case i_k ≤ n; the other cases can be proved similarly.
Then for s ≤ n with s ∉ Λ_k, we perform the row operation −C_1(a_{i_r s})R_s + R_{i_r} on χ_M(Λ_k, ∆). And for s > n, we perform the row operation −C_2(a_{i_r, s−n})R_s + R_{i_r} on χ_M(Λ_k, ∆). After carrying out this procedure for each i_r ∈ Λ_k, we obtain the desired form.

Construct a block diagonal matrix
where I_s is the identity matrix of order s. Observing that det H = det χ_A(Λ_k), the proof is complete.
We need a lemma about the delta derivative of determinants, from Theorem 5.105 in [10].
Then det C ∈ R ⊗ T is delta differentiable, and

Remark 4.12 Although Lemma 4.11 is established for real-valued functions, it is easy to verify that Lemma 4.11 is also valid for complex-valued functions.
Now we present Liouville's formula.
In fact, V_1(χ_A), V_2(χ_A), . . . , V_{2n}(χ_A) are the coefficients of the characteristic polynomial of χ_A. Then, by Theorem 2.1 and the definition of u, we conclude that u is real-valued. Liouville's formula (4.7) then follows from Theorem 3.8.

Therefore Liouville's formula becomes
If T = Z, then the graininess function µ is identically equal to 1. Therefore u(t) equals the sum of all the coefficients of the characteristic polynomial of χ_{A(t)}, and Liouville's formula takes the corresponding form. Then x_1(t), x_2(t), . . . , x_n(t) are right linearly dependent on T if and only if W(t_0) = 0 for some t_0 ∈ T, and x_1(t), x_2(t), . . . , x_n(t) are right linearly independent on T if and only if W(t_0) ≠ 0 for some t_0 ∈ T.
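The T = Z case of Liouville's formula can be observed directly: a solution matrix satisfies χ_M(t+1) = (I + χ_A)χ_M(t), so the Wronskian W(t) = det χ_M(t) obeys W(t+1) = det(I + χ_A)·W(t). A numerical sketch with a constant quaternion matrix modelled through its complex adjoint (our code, not the paper's):

```python
import numpy as np

def adjoint(A1, A2):
    # Complex adjoint of the quaternion matrix A = A1 + A2*j.
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

rng = np.random.default_rng(3)
A1 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A2 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
chiA = adjoint(A1, A2)

chiM = np.eye(4, dtype=complex)        # chi_M(0) = I, so W(0) = 1
W = [np.linalg.det(chiM)]
for _ in range(5):
    chiM = (np.eye(4) + chiA) @ chiM   # step the homogeneous system on T = Z
    W.append(np.linalg.det(chiM))

growth = np.linalg.det(np.eye(4) + chiA)
print(all(np.isclose(W[t + 1], growth * W[t]) for t in range(5)))   # True
```

In particular, either W(t) vanishes for every t or for no t, which is the dichotomy behind the independence criterion above.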
Let A ∈ R(T, H^{n×n}) and let e_k be the k-th column of I_n. Then there exists a unique solution x_k(t) of (4.3) satisfying x_k(t_0) = e_k. Thus M(t) = (x_1(t), x_2(t), . . . , x_n(t)) is a solution matrix of (4.3). Let W(t) be the Wronskian of M(t); it is easy to see that W(t_0) = 1 ≠ 0. Then we immediately have the following result. Moreover, if a fundamental solution matrix is multiplied on the right by an arbitrary non-singular constant quaternion matrix, it remains a fundamental solution matrix. Using Theorem 4.16, we can easily determine whether M(t) is a fundamental solution matrix or not.

Corollary 4.18 A solution matrix M(t) of (4.3) is a fundamental solution matrix if and only if its corresponding Wronskian W(t) ≠ 0.
The next result describes the structure of the general solution of (4.3): the solution set is a right H-module. This implies that if we know n right linearly independent solutions of (4.3), then we actually know all possible solutions of (4.3), since any other solution is just a right linear combination of the known solutions.
Proof. For any t_0 ∈ T, by Corollary 4.18, M(t_0) is invertible. Thus M(t_0)η = x(t_0) has a unique solution η = q. Note that M(t)q is also a solution of (4.3) with initial condition φ(t_0) = x(t_0). By the uniqueness theorem, equality (4.8) holds.
Let A ∈ R(T, H^{n×n}) and let M(t) be a fundamental solution matrix of (4.3). Define the state-transition matrix Ψ_A(t, s) := M(t)M^{−1}(s). It is not difficult to verify that Ψ_A(t, s) is well-defined: if M_1(t) is a fundamental solution matrix different from M(t), then M_1(t)M_1^{−1}(s) = M(t)M^{−1}(s) by the uniqueness theorem. By arguments similar to Proposition 3.9, we have Proposition 4.20 Let A ∈ R(T, H^{n×n}) and t, r, s ∈ T. Then the following assertions hold.
After studying the homogeneous equation (4.3), we now consider the nonhomogeneous equation (4.2). It is easy to verify the following two results. The next result describes the structure of the general solution of (4.2).
Theorem 4.23 Suppose that M(t) is a fundamental solution matrix of (4.3) and φ_0^{NH}(t) is a solution of (4.2). Then any solution of (4.2) can be expressed as

φ^{NH}(t) = M(t)q + φ_0^{NH}(t), (4.9)

where q = (q_1, q_2, . . . , q_n)^⊤ is a constant quaternion vector.
Proof. For any solution φ^{NH}(t) of (4.2), we know that φ^{NH}(t) − φ_0^{NH}(t) is a solution of (4.3) by Lemma 4.22. From Theorem 4.19, it follows that there exists q ∈ H^n such that φ^{NH}(t) − φ_0^{NH}(t) = M(t)q. Then we have equality (4.9). Theorem 4.23 indicates that if we know a fundamental solution matrix of (4.3) and a particular solution of (4.2), then we actually know all possible solutions of (4.2). The following result further points out that if a fundamental solution matrix M(t) of (4.3) is known, then the general solution of (4.2) can be described explicitly by the method of variation of constants.

Theorem 4.24 Let A ∈ R(T, H^{n×n}) and let M(t) be a fundamental solution matrix of (4.3). Suppose that f is rd-continuous. Then the general solution of (4.2) is given by
(4.10) where q = (q 1 , q 2 , · · ·, q n ) ⊤ is a constant quaternion vector.
Proof. Let us look for a solution of (4.2) in a form similar to (4.8). Suppose that

φ^{NH}(t) = M(t)q(t). (4.11)

By differentiating both sides of (4.11), we obtain φ^{NH∆}(t) = M^∆(t)q(t) + M(σ(t))q^∆(t). Observe that M^∆(t) = A(t)M(t). Thus M(σ(t))q^∆(t) = f(t), that is, q^∆(t) = M^{−1}(σ(t))f(t). Therefore q(t) = q + ∫_{t_0}^{t} M^{−1}(σ(s))f(s) ∆s, where q is a constant quaternion vector. Substituting this into (4.11), we obtain the expression (4.10) of φ^{NH}(t). Now it remains to show that (4.10) is exactly a solution of (4.2); this follows by differentiating φ^{NH}(t) with the product rule. The proof is complete.

Corollary 4.25 The solution of initial value problem (4.2) and (4.4) is given by
In particular, let α ∈ H be a quaternion constant; then the solution of y^∆(t) = αy(t) + f(t), y(t_0) = 0 is y(t) = ∫_{t_0}^{t} E_α(t, σ(s))f(s) ∆s.
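For T = Z the equation y^∆(t) = αy(t) + f(t) reads y(t+1) = (1+α)y(t) + f(t), and the variation of constants formula becomes y(t) = ∑_{s=0}^{t−1} (1+α)^{t−1−s} f(s) when t_0 = 0. The sketch below (our code; quaternions modelled as 2×2 complex matrices, e.g. j ↦ [[0,1],[−1,0]], and an arbitrarily chosen forcing term) checks this closed form against the recursion:

```python
import numpy as np

I = np.eye(2, dtype=complex)
J = np.array([[0, 1], [-1, 0]], dtype=complex)   # the quaternion j
alpha = 0.3 * I + 0.5 * J                        # a sample quaternion constant

def f(s):
    # An arbitrary forcing term (illustrative choice).
    return (1.0 / (1 + s)) * J

def by_recursion(t):
    # Step y(s+1) = (1 + alpha) y(s) + f(s) from y(0) = 0.
    y = np.zeros((2, 2), dtype=complex)
    for s in range(t):
        y = (I + alpha) @ y + f(s)
    return y

def by_formula(t):
    # Variation of constants: y(t) = sum_s (1+alpha)^{t-1-s} f(s).
    y = np.zeros((2, 2), dtype=complex)
    for s in range(t):
        y = y + np.linalg.matrix_power(I + alpha, t - 1 - s) @ f(s)
    return y

print(np.allclose(by_recursion(6), by_formula(6)))   # True
```

Note that (1+α)^{t−1−s} multiplies f(s) from the left; because of noncommutativity the order cannot be swapped.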

Linear QDETS with constant coefficients
Let A ∈ R(T, H n×n ) be a constant quaternion matrix and suppose that f ∈ H n×1 ⊗ T is rd-continuous.
In this section, we consider the quaternion-valued linear equation

x^∆(t) = Ax(t) + f(t) (5.1)

and its corresponding homogeneous equation

x^∆(t) = Ax(t). (5.2)

Theorem 5.1 If λ is a right eigenvalue of A and η is an eigenvector corresponding to λ, then φ(t) = ηE_λ(t, 0) is a solution of (5.2).
Theorem 5.2 If A has n right linearly independent eigenvectors η_1, η_2, . . . , η_n corresponding to right eigenvalues λ_1, λ_2, . . . , λ_n (no matter whether they are similar), then

M(t) = (η_1 E_{λ_1}(t, 0), η_2 E_{λ_2}(t, 0), . . . , η_n E_{λ_n}(t, 0))

is a fundamental solution matrix of (5.2). In particular, if A has n distinct standard eigenvalues, then λ_1, λ_2, . . . , λ_n can be chosen to be the standard eigenvalues of A.
Thus M(t) is a fundamental solution matrix of (5.2) by Corollary 4.18. If A has n distinct standard eigenvalues λ_1, λ_2, . . . , λ_n, then they are pairwise non-similar. By Theorem 2.1, we know that their corresponding eigenvectors are right linearly independent. This completes the proof.

Example 5.3 Find a fundamental solution matrix of
for the special time scale T = Z.

Example 5.4 Find a fundamental solution matrix of
for the special time scale T = Z.
By Theorem 2.1, we know that A has a repeated standard eigenvalue λ = i. Although A has only one standard eigenvalue, it has two right linearly independent eigenvectors, and the matrix formed from them as in Theorem 5.2 is a fundamental solution matrix of (5.4).
From Example 7.4 in [42], we know that not every n × n constant quaternion matrix has n right linearly independent eigenvectors. In that case, Theorem 5.2 is of no use any more. We therefore generalize Putzer's algorithm for the classical case in [10] to quaternion dynamic equations on time scales. Since Putzer's algorithm avoids the computation of eigenvectors, it is particularly useful for quaternion matrices that do not have n right linearly independent eigenvectors. Theorem 5.5 Let A ∈ R(T, H^{n×n}) be a constant quaternion matrix. Suppose that there exist quaternion constants α_1, α_2, . . . , α_m ∈ R(T, H) such that P_m = 0, where P_0, P_1, P_2, . . . , P_m are recursively defined by P_0 = I_n and P_k = (A − α_k I_n)P_{k−1} for 1 ≤ k ≤ m. Then the fundamental solution matrix is given by (5.5), where ϕ(t) := (ϕ_1(t), ϕ_2(t), . . . , ϕ_m(t))^⊤ is the solution of the initial value problem (5.6). Proof. From statement 6 of Theorem 2.1 and Lemma 4.3 we see that the coefficient matrix of (5.6) is regressive, and therefore (5.6) has a unique solution ϕ(t) := (ϕ_1(t), ϕ_2(t), . . . , ϕ_m(t))^⊤. Let φ(t) be the right-hand side of (5.5). Then a direct computation shows that φ^∆(t) = Aφ(t), where the last equality is a consequence of P_m = 0. Since φ(t_0) = ϕ_1(t_0)P_0 = I_n, φ(t) is a fundamental solution matrix of (4.3). Therefore (5.5) holds, which completes the proof.
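The scheme of Theorem 5.5 can be exercised numerically in the complex setting, to which the quaternion case reduces via the complex adjoint χ_A. The sketch below (our code, for T = Z, taking the α_k to be the eigenvalues of A as in the classical case) builds the P_k, steps the triangular system (5.6) on Z, and checks that the resulting matrix equals e_A(t, 0) = (I + A)^t:

```python
import numpy as np

def putzer_Z(A, t):
    # Putzer's algorithm on T = Z for a complex matrix A (regressive: no
    # eigenvalue equals -1). Returns Phi(t) = sum_k phi_{k+1}(t) P_k.
    n = A.shape[0]
    lam = np.linalg.eigvals(A)                 # any ordering of eigenvalues works
    P = [np.eye(n, dtype=complex)]
    for k in range(n - 1):
        P.append((A - lam[k] * np.eye(n)) @ P[-1])   # P_k = (A - l_k I) P_{k-1}
    phi = np.zeros(n, dtype=complex)
    phi[0] = 1.0                               # phi(0) = (1, 0, ..., 0)
    for _ in range(t):
        # Step phi_1^Delta = l_1 phi_1, phi_k^Delta = l_k phi_k + phi_{k-1} on Z.
        new = np.empty_like(phi)
        new[0] = (1 + lam[0]) * phi[0]
        for k in range(1, n):
            new[k] = (1 + lam[k]) * phi[k] + phi[k - 1]
        phi = new
    return sum(phi[k] * P[k] for k in range(n))

A = np.array([[0.0, 1.0], [-2.0, 3.0]], dtype=complex)   # eigenvalues 1 and 2
t = 5
print(np.allclose(putzer_Z(A, t), np.linalg.matrix_power(np.eye(2) + A, t)))
```

By Cayley-Hamilton, using all n eigenvalues guarantees P_n = 0, so the recursion terminates exactly as the theorem requires.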
Example 5.6 Find the state-transition matrix of the quaternion dynamic equations for the special time scales T = R and T = Z.
Since χ_A is not diagonalizable, A does not have n right linearly independent eigenvectors. So Theorem 5.2 does not apply to this problem.
Then by Corollary 4.25, we have ϕ_2(t) = sin t, and therefore Putzer's algorithm yields the matrix in (5.7). By direct computation, we see that Ψ_A^∆(t, 0) and AΨ_A(t, 0) are equal.
That means that (5.7) is exactly the state-transition matrix.
Example 5.7 Find the state-transition matrix of the quaternion dynamic equations for the special time scales T = R and T = Z.
Then we have ϕ_3(t) = −1/2 − i/2 − (1 + i)e^t + e^{(1+i)t} + i, and therefore, using Putzer's algorithm (5.5), we obtain the state-transition matrix. The result is consistent with that of Example 6.3 in [27].
If T = Z and t_0 = 0, then ϕ_1(t) = 2^t and ϕ_2(t) = 2^t − 1. Proceeding as before, Putzer's algorithm yields the matrix in (5.10), and by direct computation both Ψ_A^∆(t, 0) and AΨ_A(t, 0) are equal. That means that (5.10) is exactly the state-transition matrix.

In the classical case, the constants in Putzer's algorithm are taken to be the eigenvalues λ_j (1 ≤ j ≤ n) of A ∈ C^{n×n}; that is, α_1, α_2, . . . , α_m can be chosen as the eigenvalues of A ∈ C^{n×n}. In the quaternion case, however, the selection of the α_k (1 ≤ k ≤ m) is more difficult. What should be clear is that the smaller m is, the less computation is required.
We say that h(z) = z^m + z^{m−1}β_1 + · · · + β_m, where β_1, β_2, . . . , β_m ∈ H, is an annihilating polynomial of a quaternion matrix A if h(A) = 0. To the authors' best knowledge, there are (at least) two annihilating polynomials for every A ∈ H^{n×n}. The first one, presented by Zhang [42], is ch_{χ_A}(z), the characteristic polynomial of χ_A; in this case m = 2n and α_1, α_2, . . . , α_m are exactly the standard eigenvalues of A. The other one is the minimal polynomial given by Rodman [33]. The coefficients of the minimal polynomial in [33] are confined to be real; thus, there may be other annihilating polynomials (with quaternion coefficients) of lower degree than the minimal polynomial. Since the degree-m minimal polynomial has real coefficients, it has m complex roots and Vieta's formulas hold. Therefore α_1, α_2, . . . , α_m can be chosen as the complex roots of the minimal polynomial. Unfortunately, there is no explicit expression for the minimal polynomial of a quaternion matrix so far. On the other hand, even when we know an annihilating polynomial h(z) = z^m + z^{m−1}β_1 + · · · + β_m (with quaternion coefficients) of A, we still cannot easily find α_1, α_2, . . . , α_m. In fact, we need to find α_1, α_2, . . . , α_m such that

(−1)^k ∑_{1≤i_1<i_2<···<i_k≤m} α_{i_1} α_{i_2} · · · α_{i_k} = β_k (5.11)

for 1 ≤ k ≤ m. Thus α_1, α_2, . . . , α_m may not exist. Even if they exist, we cannot conclude that they are roots of h(z) (see Example 5.8). Even if they are roots of h(z), we still may be unable to find them, because the number of zeros of a quaternion polynomial is indeterminate and the computation of such zeros is complicated. For details on zeros of quaternion polynomials, please refer to [35,34,32,25].
In practice, α_1, α_2, . . . , α_m are usually chosen to be the eigenvalues of A ∈ H^{n×n} (see Examples 5.6 and 5.7). The value of m does not need to be as large as 2n. By iterative computation, we can always obtain more and more succinct P_k as k increases. Although there are some theoretical challenges in the selection of α_1, α_2, . . . , α_m, Putzer's algorithm is still feasible. These theoretical challenges give us something to focus on and work toward.
The method of variation of constants for the one-dimensional case has been used many times in Examples 5.6 and 5.7. We now present an example to illustrate its feasibility in the higher-dimensional case.

Example 5.9 Find the solution of the initial value problem
for the special time scale T = Z.
Thus φ(t) is exactly the solution of (5.12).

Conclusion
In this paper, we establish the basic theory of linear quaternion dynamic equations on time scales (QDETS), which not only generalizes the theory of quaternion differential equations (QDEs) but also extends the theory of dynamic equations on time scales (DETS). Employing the newly defined Wronskian determinant, Liouville's formula for QDETS is derived, thereby giving the structure of the general solutions of QDETS. We present Putzer's algorithm to compute the fundamental solution matrix of homogeneous QDETS. Putzer's algorithm is applicable to all homogeneous QDETS with constant coefficient matrices, and it is particularly useful for quaternion coefficient matrices that are not diagonalizable. Furthermore, the variation of constants formula for solving nonhomogeneous QDETS is also derived. Importantly, examples are given in each section to illustrate our results.