On Hankel transforms of generalized Bessel matrix polynomials

Abstract: The present article deals with the evaluation of Hankel transforms involving Bessel matrix functions in the kernel. Moreover, these transforms are associated with products of certain elementary functions and generalized Bessel matrix polynomials. As applications, many useful special cases are discussed. Further, the current results are more general than previous ones. In addition, these results lead to further results on modern integral transforms with special matrix functions.

Hankel transforms (also designated as Fourier–Bessel transforms) are a type of integral transform that involves Bessel functions as the kernel; they arise naturally in radial problems formulated in cylindrical polar coordinates (see [14][15][16]). The classical Hankel transform of order n is defined by

H_n{f(x)}(v) = ∫_0^∞ x f(x) J_n(vx) dx,  v > 0,

where J_n(x) is the Bessel function of order n (see [6]).
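To make the classical transform concrete, the following minimal Python sketch (the function name and the test pair are my own choices, not taken from the article) evaluates the order-n Hankel transform by quadrature and checks it against the well-known order-0 pair H_0{e^{−ax}}(v) = a/(a² + v²)^{3/2}:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv  # Bessel function of the first kind J_n

def hankel_transform(f, v, n=0):
    """H_n{f}(v) = integral_0^inf x f(x) J_n(v x) dx, evaluated numerically."""
    val, _ = quad(lambda x: x * f(x) * jv(n, v * x), 0, np.inf, limit=200)
    return val

a, v = 2.0, 1.5
numeric = hankel_transform(lambda x: np.exp(-a * x), v, n=0)
closed_form = a / (a**2 + v**2) ** 1.5  # tabulated order-0 pair
print(numeric, closed_form)  # the two values agree to quadrature accuracy
```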
In the current study, we define the Hankel transform and its inverse involving Bessel matrix functions [25][26][27] in the kernel. Moreover, we evaluate some new integrals of matrix functions involving generalized Bessel matrix polynomials [25,28]. Interesting special cases of the main results are also deduced. The present work is very useful in the study of boundary value problems, electromechanical problems, statistical theory, numerical calculations and computer science.

Definitions and lemmas
In this section, we collect some basic definitions and lemmas that are useful for our main results.
Let C^d denote the d-dimensional complex vector space and C^{d×d} denote the set of all square matrices of size d×d with complex entries. I and 0 stand for the identity matrix and the null matrix in C^{d×d}, respectively.

Definition 2.1. (See [29]) For a matrix N in C^{d×d}, σ(N) is the spectrum of N, the set of all eigenvalues of N, and

ϑ(N) = max{Re(z) : z ∈ σ(N)},  β(N) = min{Re(z) : z ∈ σ(N)},

where ϑ(N) is referred to as the spectral abscissa of N and ϑ(N) = −β(−N). A matrix N is said to be positive stable if and only if β(N) > 0, that is, Re(z) > 0 for every z ∈ σ(N).
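The spectral quantities of Definition 2.1 are easy to compute numerically; the sketch below (helper names are my own) illustrates ϑ(N), β(N), the relation ϑ(N) = −β(−N), and the positive-stability test:

```python
import numpy as np

def spectral_abscissa(N):
    """theta(N) = max{Re(z) : z in spectrum of N}."""
    return np.max(np.linalg.eigvals(N).real)

def beta(N):
    """beta(N) = min{Re(z) : z in spectrum of N}; note theta(N) = -beta(-N)."""
    return np.min(np.linalg.eigvals(N).real)

def is_positive_stable(N):
    """N is positive stable iff every eigenvalue lies in Re(z) > 0."""
    return beta(N) > 0

N = np.array([[2.0, 1.0], [0.0, 3.0]])   # triangular, eigenvalues 2 and 3
print(spectral_abscissa(N), beta(N), is_positive_stable(N))  # 3.0 2.0 True
```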
Definition 2.2. [25,27,29] The logarithmic norm of a matrix N in C^{d×d} is defined as

µ(N) = max{z : z ∈ σ((N + N*)/2)}.

Correspondingly, suppose that the number β̃(N) is such that

β̃(N) = min{z : z ∈ σ((N + N*)/2)},

where N* is the conjugate transpose of N.
The reciprocal gamma function Γ⁻¹(ξ) = 1/Γ(ξ) is an entire function of the complex variable ξ. Hence, the image of Γ⁻¹(ξ) acting on N, denoted by Γ⁻¹(N), is a well-defined matrix; moreover, Γ⁻¹(N) is invertible whenever N + nI is invertible for all integers n ∈ ℕ₀ := ℕ ∪ {0}.
By applying the matrix functional calculus, for a positive stable matrix N in C^{d×d} the Pochhammer symbol of matrix argument is defined by (see [18,26])

(N)_n = N(N + I)(N + 2I) ⋯ (N + (n − 1)I) = Γ(N + nI) Γ⁻¹(N), n ≥ 1,  (N)_0 = I,   (2.4)

where Γ(N) is the gamma matrix function [26,27]. If N = −mI, where m is a positive integer, then (N)_n = 0 whenever n > m.
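The identity (N)_n = Γ(N + nI) Γ⁻¹(N) can be verified numerically; in this sketch (helper names are my own) the gamma matrix function is evaluated through the matrix functional calculus via scipy's funm:

```python
import numpy as np
from scipy.linalg import funm
from scipy.special import gamma

def gamma_m(N):
    """Gamma matrix function of N via the matrix functional calculus."""
    return funm(N, gamma)

def pochhammer_m(N, n):
    """Matrix Pochhammer symbol: (N)_0 = I, (N)_n = N (N+I) ... (N+(n-1)I)."""
    P = np.eye(N.shape[0])
    for k in range(n):
        P = P @ (N + k * np.eye(N.shape[0]))
    return P

N = np.array([[1.5, 0.3], [0.0, 2.0]])
n = 3
lhs = pochhammer_m(N, n)                                      # N (N+I) (N+2I)
rhs = gamma_m(N + n * np.eye(2)) @ np.linalg.inv(gamma_m(N))  # Gamma(N+3I) Gamma(N)^{-1}
print(np.allclose(lhs, rhs))  # True
```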
Definition 2.3. [25,26] Let h and k be finite positive integers. The generalized hypergeometric matrix function is defined by the matrix power series

hF_k(M_1, …, M_h; N_1, …, N_k; z) = Σ_{m=0}^∞ (M_1)_m ⋯ (M_h)_m [(N_1)_m]⁻¹ ⋯ [(N_k)_m]⁻¹ z^m / m!,   (2.7)

where the matrices N_i + mI are invertible for all integers m ∈ ℕ₀ and 1 ≤ i ≤ k. Abdalla discussed regions of convergence and further properties of (2.7) in [25,26]. Note that for h = 2, k = 1, we get the Gauss hypergeometric matrix function ₂F₁ (see [25,26]).
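A direct truncation of the series (2.7) can be coded as follows; this is an illustrative sketch (the function name and the fixed left-to-right ordering of the non-commuting factors are my own choices), checked in the 1×1 case against the closed form ₂F₁(1, 2; 3; z) = 2(−ln(1 − z) − z)/z²:

```python
import numpy as np

def hyp_matrix(As, Bs, z, terms=60):
    """Truncated generalized hypergeometric matrix series:
    sum over m of (A_1)_m ... (A_h)_m [(B_1)_m]^{-1} ... [(B_k)_m]^{-1} z^m / m!."""
    d = As[0].shape[0]
    I = np.eye(d)
    pochA = [I.copy() for _ in As]   # running products (A_i)_m
    pochB = [I.copy() for _ in Bs]   # running products (B_j)_m
    coef = 1.0                       # z^m / m!
    total = np.zeros((d, d))
    for m in range(terms):
        term = coef * I
        for P in pochA:
            term = term @ P
        for Q in pochB:
            term = term @ np.linalg.inv(Q)
        total += term
        for i, A in enumerate(As):   # advance (A_i)_m -> (A_i)_{m+1}
            pochA[i] = pochA[i] @ (A + m * I)
        for j, B in enumerate(Bs):
            pochB[j] = pochB[j] @ (B + m * I)
        coef *= z / (m + 1)
    return total

# scalar sanity check: 1x1 matrices reduce to the classical Gauss 2F1
z = 0.2
val = hyp_matrix([np.eye(1), 2 * np.eye(1)], [3 * np.eye(1)], z)
print(float(val[0, 0]), 2 * (-np.log(1 - z) - z) / z**2)
```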
Definition 2.4. [25,26] The Bessel matrix function of the first kind J_S(u) is defined by

J_S(u) = Σ_{k=0}^∞ ((−1)^k / k!) Γ⁻¹(S + (k + 1)I) (u/2)^{S+2kI},   (2.8)

where S is a matrix in C^{d×d} satisfying the condition

υ is not a negative integer for every υ ∈ σ(S).   (2.9)

Definition 2.5. [25,28,31] Let M and N be commuting matrices in C^{d×d} such that N is an invertible matrix. For any natural number n ∈ ℕ₀, the n-th generalized Bessel matrix polynomial B_n(u; M, N) is defined by

B_n(u; M, N) = ₂F₀(−nI, M + (n − 1)I; —; −N⁻¹u) = Σ_{k=0}^n (1/k!) (−nI)_k (M + (n − 1)I)_k (−N⁻¹u)^k.   (2.10)

The following lemmas are needed to establish certain integral representations of the Bessel matrix function and the generalized Bessel matrix polynomials.

Lemma 2.2. For u, v, λ, δ ∈ C with v > 0, Re(δ) > 0 and Re(λ) > 0, and for a positive stable matrix S in C^{d×d} such that S + I is an invertible matrix in C^{d×d} and β(S + λI) > −1, the following formula holds, where J_S(u) is the Bessel matrix function given in (2.8):
Proof. To prove (2.11), denote the left-hand side by: Setting w = δu², we have This completes the proof of Eq. (2.11) asserted in Lemma 2.2.
Lemma 2.3. Let S, M and P be positive stable and commuting matrices in C^{d×d} such that (1 + n)I − S and (2 − n)I − (S + M) are invertible matrices. Then we have the following integral representation of the generalized Bessel matrix polynomials: Proof. Expanding the two generalized Bessel matrix polynomials in (2.12) by the series (2.10) and interchanging the order of integration and summation, we observe that Putting τ = µu, we have We thus arrive at the desired result (2.12).
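The series definitions above admit a simple numerical sanity check: for a diagonal matrix S, the Bessel matrix function must reduce entrywise to the scalar Bessel functions. The sketch below assumes the series form J_S(u) = Σ_k ((−1)^k/k!) Γ⁻¹(S + (k + 1)I) (u/2)^{S+2kI} as in (2.8); helper names are mine:

```python
import math
import numpy as np
from scipy.linalg import funm, expm
from scipy.special import gamma, jv

def bessel_matrix(S, u, terms=40):
    """Truncated series for J_S(u); uses (u/2)^{S+2kI} = (u/2)^{2k} exp(S ln(u/2))."""
    d = S.shape[0]
    I = np.eye(d)
    acc = np.zeros((d, d))
    for k in range(terms):
        inv_gamma = np.linalg.inv(funm(S + (k + 1) * I, gamma))  # inverse gamma matrix
        acc += ((-1) ** k / math.factorial(k)) * (u / 2) ** (2 * k) * inv_gamma
    return acc @ expm(S * np.log(u / 2))

S = np.diag([0.5, 1.0])
u = 1.2
J = bessel_matrix(S, u)
print(np.diag(J), jv(np.array([0.5, 1.0]), u))  # diagonal entries match scalar J_s(u)
```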

Matrix Hankel transforms
We begin this section by defining a matrix analogue of the Hankel transform and its inverse as follows.
Let us consider the generalization of the Hankel integral transform and its inverse with the help of the Bessel matrix function J_S(w) of the first kind associated with the matrix S ∈ C^{d×d}, in the following definition:

Definition 3.1. (Matrix Hankel Transforms) Let S be a matrix in C^{d×d} satisfying (2.9), and let Φ(u) be a function defined for u ≥ 0. The Hankel transform of Φ(u) involving the Bessel matrix function as kernel is defined as

H_S{Φ(u)}(v) = ∫_0^∞ u Φ(u) J_S(uv) du,   (3.1)

where v > 0 and J_S(uv) is the Bessel matrix function of the first kind defined in (2.8).
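As a consistency check of Definition 3.1 (assuming the transform takes the form H_S{Φ}(v) = ∫_0^∞ u Φ(u) J_S(uv) du, evaluated entrywise): for a diagonal S the matrix transform reduces to scalar Hankel transforms on the diagonal, which can be compared with tabulated pairs. A minimal sketch (names are my own):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def matrix_hankel_diag(phi, v, orders):
    """Entrywise Hankel transform of scalar phi(u) for S = diag(orders)."""
    H = np.zeros((len(orders), len(orders)))
    for i, s in enumerate(orders):
        H[i, i], _ = quad(lambda u: u * phi(u) * jv(s, v * u), 0, np.inf, limit=200)
    return H

a, v = 2.0, 1.5
H = matrix_hankel_diag(lambda u: np.exp(-a * u), v, orders=[0.0, 1.0])
r3 = (a**2 + v**2) ** 1.5
print(np.diag(H), [a / r3, v / r3])  # tabulated order-0 and order-1 pairs
```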

Main theorems
Now we give our main theorem, which encompasses the matrix analogue of Hankel transforms of functions involving the generalized Bessel matrix polynomials.
Proof. To prove (3.4), substitute the series expansion (2.10) for B_n(N; (1 − 2n)I − S, u²) into (3.1); we consider Applying Lemma 2.2, we get Applying the matrix analogue of Kummer's transformation in [32], we obtain Setting m = t + s in the above expression, we get Changing the order of summation and simplifying yields This finalizes the proof of Theorem 3.1. then, we have (1 − n)I + ½(P + S), nI + M + ½(P + S),
Proof. By Definition 2.6 and applying (3.1) to (3.5), we obtain Then, by virtue of Lemma 2.2 applied to the above equation, we attain Applying Lemma 2.1 and simplifying, we thus obtain the desired result as follows Next, we consider some interesting special cases of Theorem 3.2 in the following corollary: Corollary 3.1.
Proof. To establish Theorem 3.3, substituting (3.7) into (3.1), we consider the following integral which, in view of (2.7), yields our desired result (3.8). then, we have
Proof. Substituting (3.11) into (3.1) and using (2.10), we have Putting u² = z, we get Applying Lemma 2.3 and simplifying, we see that which is the claimed result in (3.12). then, we have where M, N and S are commuting matrices in C^{d×d} such that β(S) > −3/2, ψ(S) is the digamma matrix function defined in [25] by

ψ(S) = Γ′(S) Γ⁻¹(S),

where Γ⁻¹(S) and Γ′(S) are the reciprocal and the derivative of the gamma matrix function, respectively, and v > 0, λ > 0.
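The digamma matrix function appearing above can likewise be evaluated through the matrix functional calculus; in this sketch (my own construction, not from the article) ψ(S) is computed via scipy's funm and checked against the scalar digamma on the spectrum of S:

```python
import numpy as np
from scipy.linalg import funm
from scipy.special import psi  # scalar digamma function

S = np.array([[1.0, 0.4], [0.0, 2.5]])   # triangular, eigenvalues 1.0 and 2.5
psi_S = funm(S, psi)                      # digamma matrix function of S
# the eigenvalues of psi(S) are psi applied to the eigenvalues of S
print(np.sort(np.linalg.eigvals(psi_S).real), psi(np.array([1.0, 2.5])))
```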

Conclusions
The theory of integral transforms plays a crucial role in mathematical analysis, mathematical physics and the engineering sciences.
One of these transforms is the Hankel transform, which is well suited to problems formulated in terms of polar coordinate variables. It should be noted that the kernel of the Hankel transform is the Bessel function; perhaps for this reason, in some literature this transform is called the Bessel transform or the Fourier–Bessel transform. In addition, Hankel transforms are natural generalizations of Fourier transforms.
In this work, Hankel transforms containing Bessel matrix functions as kernels were proposed. We then provided some matrix Hankel integrals of generalized Bessel matrix polynomials together with certain elementary matrix functions, the exponential function, and the logarithmic function. Further, we gave matrix versions of known results for Hankel transforms (or formulas) involving a variety of functions and polynomials (see, e.g., [35, Chapter VIII]; also see [6, Chapter 7]).