Inversion of $\alpha$-sine and $\alpha$-cosine transforms on $\mathbb{R}$

We consider the $\alpha$-sine transform of the form $T_\alpha f(y)=\int_0^\infty\vert\sin(xy)\vert^\alpha f(x)dx$ for $\alpha>-1$, where $f$ is an integrable function on $\mathbb{R}_+$. First, the inversion of this transform for $\alpha>1$ is discussed in the context of a more general family of integral transforms on the space of weighted, square-integrable functions on the positive real line. In an alternative approach, we show that the $\alpha$-sine transform of a function $f$ admits a series representation for all $\alpha>-1$, involving the Fourier transform of $f$ and coefficients that can all be computed explicitly via the Gauss hypergeometric theorem. Based on this series representation we construct a system of linear equations whose solution is an approximation of the Fourier transform of $f$ at equidistant points. Sampling theory and Fourier inversion then allow us to compute an estimate of $f$ from its $\alpha$-sine transform. The same approach extends to a similar $\alpha$-cosine transform on $\mathbb{R}_+$ for $\alpha>-1$, and to the two-dimensional spherical $\alpha$-sine and $\alpha$-cosine transforms for $\alpha>-1$, $\alpha\neq 0,2,4,\dots$. In an extensive numerical analysis, we consider a number of examples, and compare the inversion results of both methods presented.


Introduction
Spherical α-sine and α-cosine integral transforms and their inversion are of particular interest in stochastic geometry and tomography. For example, when analyzing the structure of a fibrous material, the so-called rose of intersections is the spherical cosine transform of the directional distribution measure of the fibers [8,18,26]. In the context of convex geometry, the support function of a zonoid is the α-cosine transform of some generating signed measure [27].
for α > −1, where the integral transform on the right-hand side is the aforementioned α-sine transform.
The inversion of this α-sine transform is applicable in the context of stationary real harmonizable symmetric α-stable random processes. These processes are uniquely determined by their so-called control measure. Furthermore, the codifference function describes their dependence structure; it generalizes the covariance function to the α-stable case with 0 < α < 2, where second moments are infinite.
Assuming this control measure has a density function f with respect to the Lebesgue measure on R, which we refer to as the spectral density, it can be shown that the α-sine transform of f can be obtained from the codifference function.
Estimation of the codifference function (and other parameters), followed by inversion of the α-sine transform, yields the spectral density f, cf. Section 5.
Another application arises when f is a 2π-periodic function on R. Then it suffices to consider the above α-sine transform (or the α-cosine transform, where the integral kernel is replaced by the cosine function) on the interval [−π, π]. This coincides with the two-dimensional spherical α-sine and α-cosine transforms on the unit circle.
We will first introduce the necessary notation and transformations in Section 2. In Section 3 we consider the inversion for α > 1 in the context of a more general class of integral transformations on the space of weighted L^2-functions.
We will refer to this as the direct approach. An alternative, approximative approach, applicable for α > −1, is given in Section 4. This approach relies on the relation between the α-sine transform and the classical Fourier transform, hence the name Fourier approximation approach. It can also be extended to α-cosine transforms, which will be introduced later. Applications in the context of harmonizable symmetric α-stable stochastic processes and two-dimensional spherical α-sine and α-cosine transforms are outlined in Section 5. Lastly, numerical results of each approach are presented and discussed in Section 6.

Preliminaries
Denote by R_+ the positive reals. Let C^k(R_+) be the space of k-times continuously differentiable functions on R_+, with C(R_+) being the class of all continuous functions. Denote by C_b(A) the space of bounded continuous functions on an interval A ⊆ R_+. We define the space of p-integrable functions on R_+ with respect to a measure w by L^p(R_+, w). Furthermore, we write L^p(R_+) = L^p(R_+, dx) for the space of p-integrable functions with respect to the Lebesgue measure on R_+, with L^p-norm ‖f‖_p = (∫_0^∞ |f(x)|^p dx)^{1/p} for f ∈ L^p(R_+), p ≥ 1. For p = ∞ this is the uniform norm ‖f‖_∞ = ess sup_{x∈R_+} |f(x)|. We say that a sequence (f_n)_{n∈N} converges to some limit f in L^p if the L^p-distance ‖f_n − f‖_p converges to 0 as n tends to infinity.
Proof. Let α ≥ 0. Note that by the triangle inequality it holds that |T_α f(y)| ≤ T_α|f|(y) ≤ ‖f‖_1 for all y ∈ R_+. The relation T_α f(0) = 0 is trivially satisfied for α > 0. For −1 < α < 0, the finiteness of ∫_0^K |T_α f(y)| dy would imply that T_α f is finite almost everywhere on the interval [0, K]. Again, by the triangle inequality it holds that Using Fubini's theorem we can further compute where the last equality stems from the substitution u = xy. Since |sin(u)| is π-periodic, we can estimate where the constant C_α is given by Γ(1 + α/2) for α > −1. The above converges to K/π as x → ∞, and demanding f ∈ L^1(R_+) ∩ L^1(R_+, max{1/x, 1} dx) ensures that For α ≥ 0, using the triangle inequality for integrals, one can show that the transform T_α is a bounded linear operator from L^1_{w,α} into the space of bounded continuous functions on R_+. In the case −1 < α < 0, one needs to impose more conditions on the function f such that T_α is bounded. Both cases are analyzed in detail in Theorem 3.
The goal is to invert the transform T_α, or in other words, to solve the integral equation g = T_α f for the function f. Each approach, presented in Sections 3 and 4, respectively, requires the introduction of different integral operators and special functions, which are given in the following.
Section 4 establishes the close relationship between the α-sine transform (1) and the classical Fourier transform on R. The α-sine transform is well defined for all functions from the space L^1_{w,α}(R_+). We can evenly extend functions f ∈ L^1_{w,α}(R_+) to the negative half of the real line by setting f(−x) = f(x) for all x ∈ R_+. For ease of notation, we denote this by f ∈ L^1_{e,w,α}(R_+). Similarly, we denote by L^p_e(R) the space of all even L^p-functions.
Define the Fourier transform and its inverse transform on the space of integrable functions L^1(R) for v, w ∈ L^1(R). By Euler's formula, the integral kernels e^{ixy} and e^{−ixy} in the definition of the Fourier transform above can be replaced by cos(xy) for even functions v, w ∈ L^1_e(R). Note that on the space of Lebesgue-integrable functions L^1(R) the above Fourier transform is bounded, uniformly continuous, and vanishes at infinity by the Riemann–Lebesgue lemma [11, Prop. 2.2.17]. It is well known that the Fourier transform of an integrable function need not be integrable itself. Therefore, only under certain additional conditions on v does the Fourier inversion theorem hold. Then, the convergence is preserved under the Fourier transform in the sense that For Section 3 we consider the Fourier transform and its inverse transform on the multiplicative group (R_+, ·), given for v, w ∈ L^2(R_+, dx/x) by ∫_0^∞ e^{i log(x) log(y)} w(y) dy/y, as well as the similarity transform M : L^2(R_+, x^c dx) → L^2(R_+, dx/x) and its inverse. Additionally, we state the following useful result. For any complex number z ∈ C one can expand (1 + x)^z = Σ_{k=0}^∞ (z choose k) x^k, where (z choose k) = Γ(z + 1)/(Γ(k + 1)Γ(z − k + 1)) is the generalized binomial coefficient. Here, Γ denotes the gamma function. For |x| < 1 the series converges absolutely for any z ∈ C.
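Since the whole approach rests on evaluating Fourier transforms numerically, it may help to fix the cosine-kernel form for even functions in code: F v(y) = ∫_R v(x) cos(xy) dx. The quadrature below is a plain midpoint rule; the truncation length L, grid size m, and Gaussian test function are illustrative choices, not from the text.

```python
import math

def fourier_even(v, y, L=12.0, m=40000):
    """Midpoint quadrature of Fv(y) = integral of v(x)cos(xy) over [-L, L]."""
    h = 2 * L / m
    return h * sum(v(-L + (k + 0.5) * h) * math.cos((-L + (k + 0.5) * h) * y)
                   for k in range(m))

# even test function with known transform: F[e^{-x^2/2}](y) = sqrt(2π) e^{-y^2/2}
v = lambda x: math.exp(-x * x / 2)
Fv_exact = lambda y: math.sqrt(2 * math.pi) * math.exp(-y * y / 2)
err = max(abs(fourier_even(v, y) - Fv_exact(y)) for y in (0.0, 0.5, 1.5))
```

The truncation at L is justified only when v decays fast enough that the tail integral is negligible, which mirrors the choice of R in Section 4.2.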
Lastly, the so-called generalized hypergeometric function pFq and the special Gauss hypergeometric function 2F1, which are well known in mathematical physics, play important roles in the Fourier approximation approach.
The generalized hypergeometric function pFq with p, q ∈ N_0 is defined by

pFq(a_1, . . ., a_p; b_1, . . ., b_q; z) = Σ_{n=0}^∞ ((a_1)_n · · · (a_p)_n) / ((b_1)_n · · · (b_q)_n) · z^n/n!

for any complex numbers a_1, . . ., a_p, b_1, . . ., b_q ∈ C and z ∈ C, where (·)_n is called the Pochhammer symbol, or rising factorial, with (x)_0 = 1 and (x)_n = x(x + 1) · · · (x + n − 1). For p ≤ q the generalized hypergeometric series converges for all z ∈ C, and for p > q + 1 only if z = 0. In the case p = q + 1 the series converges if |z| < 1. For the Gauss hypergeometric function 2F1 this holds for a, b, c, z ∈ C, and for z = 1, provided Re(c − a − b) > 0, the series converges absolutely with

2F1(a, b; c; 1) = Γ(c)Γ(c − a − b) / (Γ(c − a)Γ(c − b)).

This classical result is known as the Gauss hypergeometric theorem [1, Section 1.3].
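As a quick numerical sanity check of the Gauss hypergeometric theorem, the partial sums of the 2F1 series at z = 1 can be compared against the closed form; the parameter values below are illustrative and satisfy Re(c − a − b) > 0.

```python
import math

def hyp2f1_at_one(a, b, c, terms=2000):
    """Partial sum of the 2F1 series at z = 1 (converges for Re(c-a-b) > 0).
    Terms are built iteratively via the ratio (a+n)(b+n)/((c+n)(n+1))
    to avoid overflow in the Pochhammer symbols."""
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0))
    return s

a, b, c = 0.3, -0.7, 1.5                       # Re(c - a - b) = 1.9 > 0
lhs = hyp2f1_at_one(a, b, c)
rhs = (math.gamma(c) * math.gamma(c - a - b)
       / (math.gamma(c - a) * math.gamma(c - b)))
```

The tail of the series decays like n^{-(1+Re(c-a-b))}, so a few thousand terms suffice here.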

Direct approach for α > 1
In [7] the existence and uniqueness of a solution to integral equations of the form for given measurable functions β, γ : R^d → R and a weighted L^2-function v on R are analyzed. The set supp(γ) = {t ∈ R^d : γ(t) ≠ 0} denotes the support of γ. Their solution theory is based on operators on the multiplicative group R^× = R \ {0}. Since even functions are of interest, it suffices to consider the positive reals R_+ only.
Define the linear integral operator G : L^2(R_+, x^c dx) → L^2(R_+, x^c dx), where the functions β and γ are chosen such that the constant C is finite. Furthermore, we introduce the function µ : R_+ → C given in (8). The function µ is bounded, and its continuity follows from Lebesgue's dominated convergence theorem.
The following lemma on the injectivity and surjectivity of the operator G can be derived from [7, Corollary 2.4], which states the results in the context of the multiplicative group (R^×, ·).
Lemma 2. Assume that C < ∞, and let µ : R_+ → C be the bounded, continuous function defined in (8). Then the operator G : L^2(R_+, x^c dx) → L^2(R_+, x^c dx) as defined in (6) is
(i) injective if and only if µ ≠ 0 almost everywhere on R_+ (with respect to the Lebesgue measure),
(ii) bijective if and only if inf_{x∈R_+} |µ(x)| > 0.
It is now possible to consider the α-sine transform in the context of the integral operator G. For all y ∈ R_+, the transform T_α f can be reformulated by substituting t = xy and setting z = 1/y, for all z ∈ R_+. Plugging the functions β and γ into the definitions of G and µ, we can state the following theorem, similar to [7, Proposition 2.1, Theorem 2.3]:

and
(i) the linear operator G :

Proof. The application of [7, Proposition 2.1] yields (i). For (ii), note that Gu = µu holds if and only if G F_+ M u = µ F_+ M u is true for all u ∈ L^2(R_+, dx/x). For y > 0 u(x)e^{−i log(x) log(y)} dx, and hence, using Fubini's theorem and the substitution x = s/t.
We can now state an inversion formula for the operator G, solving Equation (9).
Proof. The inversion formula (13) is a direct consequence of Theorem 1. It is easy to see that µ ≠ 0 almost everywhere on R_+. To show that G is not surjective, i.e. inf_{x>0} |µ(x)| = 0, note that 1 < (c+1)/2 ≤ α holds, and one can compute Corollary 1 gives an inversion formula which computes the solution of the integral equation g = T_α f directly. We will see later, though, that the involved operators F_+, M and the function µ are numerically unstable, and inversion results are rather unsatisfying in practice. An even more significant drawback is the restriction to α > 1. The numerical analysis of the direct approach can be found in Section 6.1.

Fourier approximation approach
Recall the space L^1_{w,α}(R_+) defined in the preliminaries.

Remark 1. In the following, the even extension of f ∈ L^1_{w,α}(R_+) to the whole real line is also denoted by f whenever we consider the Fourier transform F f.
Assuming the constant F f(0) is known, the inversion of T_2 is simply achieved by applying the inverse Fourier transform to F f(0) − 2T_2 f. But for α > −1, α ≠ 2, the cosine double-angle formula alone does not help much.
In the following section, we first prove a series representation of T_α. Section 4.2 then establishes the Fourier approximation approach, where the Fourier transform F f is approximated from the transform T_α f. Lastly, Section 4.3 presents an interpolation method from which the function f is computed.

Series representation
admits the Fourier series expansion with real Fourier coefficients The Fourier series converges absolutely and uniformly on R for α ≥ 0; it converges almost everywhere on R for −1 < α < 0.

In the case α = 2k, k ∈ N_0, the above infinite series in expression (15) becomes a finite sum, i.e.
for all x, y ∈ R by the binomial series (3). Since |cos(x)| < 1 almost everywhere on R, the series above converges absolutely almost everywhere for any α > −1. Splitting the series into odd and even summands and applying the cosine power formulae, then substituting j = k − 2l and rearranging the series above, we get The coefficients c and c_j, j ∈ N, can be expressed in terms of the Gauss hypergeometric function introduced in Section 2.
First, we compute the constant c in Equation (16). Note that (α/2 choose 2n)(2n choose n) For all α > −1 it holds that 1 − (−α/4) − (−α/4 + 1/2) = 1/2 + α/2 > 0, and the Gauss hypergeometric theorem (5) yields Hence, the constant c is given by (17). For the coefficients c_j, j ∈ N, recall from Equation (16) Applying the Gauss hypergeometric theorem (5) and Legendre's duplication formula, the coefficients c_j are thus given by for any j ∈ N. Note that setting j = 0 in the above yields c = c_0/2, where c was given in (17). For α = 2k, k ∈ N_0, the binomial series expansion is only a finite sum, and the coefficients simplify to c_j = (−1)^j 4^{−k} (2k choose k−j) for j = 0, . . ., k and c_j = 0 for all j > k.
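The closed-form evaluation behind these coefficients can be checked numerically. The sketch below compares the standard cosine Fourier coefficients of |sin u|^α, a_j = (2/π)∫_0^π sin^α(u) cos(2ju) du, against the Gamma-function expression obtained from the Gauss hypergeometric theorem. Note that the normalization of c_j in the text may differ from a_j by a constant factor, so this is a consistency check of the method rather than of the exact constants.

```python
import math

def a_closed(alpha, j):
    """Closed form via the Gauss hypergeometric theorem (normalization assumed):
    a_j = (-1)^j 2^{1-alpha} Γ(alpha+1) / (Γ(1+alpha/2+j) Γ(1+alpha/2-j))."""
    return ((-1) ** j * 2.0 ** (1 - alpha) * math.gamma(alpha + 1)
            / (math.gamma(1 + alpha / 2 + j) * math.gamma(1 + alpha / 2 - j)))

def a_numeric(alpha, j, m=100000):
    """Midpoint-rule quadrature of (2/π) ∫_0^π sin^α(u) cos(2ju) du."""
    h = math.pi / m
    return (2.0 / math.pi) * h * sum(
        math.sin((k + 0.5) * h) ** alpha * math.cos(2 * j * (k + 0.5) * h)
        for k in range(m))

alpha = 1.5                                    # illustrative, alpha > -1
errs = [abs(a_closed(alpha, j) - a_numeric(alpha, j)) for j in range(4)]
```

For α = 2k the Gamma function in the denominator has poles at j > k, so the closed form vanishes there, matching the finite-sum case.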
To summarize, the Fourier series expansion of ½|sin(x/2)|^α is given by Absolute and uniform convergence of the above Fourier series for α ≥ 0 follow from [6, Theorem 2.5], since
(ii) For α > −1, consider the even extension of f ∈ L^1_{w,α}(R_+) onto the whole of R. For ease of notation, we write f ∈ L^1_{e,w,α}(R). We make use of the Fourier expansion (19) of the integral kernel of T_α. Note that integration and summation can be interchanged by Lebesgue's dominated convergence theorem. Then, The case α = 2k, k ∈ N_0, follows immediately from the definition (14) of the coefficients c_j.

Corollary 2. (i)
The series Σ_{j=1}^∞ c_j converges absolutely for all α ≥ 0. In particular, for α > 0 it holds that Σ_{j=1}^∞ c_j = −c_0/2, and in the case α ∈ (0, 2], the absolute limit is given by

Proof. (i) The function ½|sin(x/2)|^α is 2π-periodic, continuous and piecewise smooth for α ≥ 0. Its Fourier series converges absolutely and uniformly on R, and is in particular well-defined. Moreover, it holds that c_j ≤ 0 for all j ≥ 1 by their definition, and consequently To show the divergence of Σ_{j=1}^∞ c_j in the case −1 < α < 0, note that, where (·)_n is the Pochhammer symbol. Then, The series diverges as, and since it holds that Since the Fourier transform F f is bounded and continuous for all integrable functions, continuity and boundedness of T_α f follow from the uniform convergence of the T_α^(n) f.
Considering the case −1 < α < 0, note that on the space and by the Plancherel theorem the equality holds, where the last inequality is due to the Cauchy–Schwarz inequality.
For the boundedness of the integral operator T_α, in particular in the case −1 < α < 0, we introduce the Sobolev space of integrable functions f with integrable first derivative f′, and define the norm Moreover, consider the space is a linear bounded (continuous) operator. In particular, for α ∈ (0, 2], the operator norm is bounded by ‖T_α‖ ≤ 2c_0. (ii) For −1 < α < 0 the integral operator T_α with respect to the norm ‖·‖_* is a linear bounded (continuous) operator. The operator norm is bounded in terms of the hypergeometric function 3F2 as defined in Equation (4).

Proof. Linearity of T_α follows directly from the linearity of integrals. For the boundedness of T_α we consider the cases α ≥ 0 and −1 < α < 0 separately.
where in the above 0 denotes the constant zero function. Following Corollary 2 (ii), the upper bound of the operator norm of T_α for α ∈ (0, 2] is then given by Furthermore, |c_j| = c_j, since the coefficients c_j, j ≥ 0, are positive for −1 < α < 0 by their definition in Equation (14), and from Equation (20) in Corollary 2 (ii) it holds that for j ≥ 1. With j = (2)_{j−1}/(1)_{j−1}

The above hypergeometric function series converges since
Analogously to the proof of Lemma 1 we can compute where Ultimately, the operator norm is bounded by Similarly to Theorem 2, it is also possible to give a series representation of the α-cosine transform on R_+, which we define by with coefficients c_j, j = 0, 1, . . ., given by

Proof. The result follows immediately from the equivalent formulation of the cosine double-angle formula, cos(2x) = 2 cos²(x) − 1, which yields The hypergeometric series above converges since

Approximating the Fourier transform
By the symmetry of the Fourier transform it suffices to approximate f̂_R at the equidistant points nR/N, n = 1, . . ., N, N ∈ N, from the series representation of T_α f in Theorem 2. Define vectors ξ, η ∈ R^N with elements ξ_n = f̂_R(nR/N) and η_n = T_α f(nR/(2N)) − (c_0/2)·F f(0) for n = 1, . . ., N, as well as the matrix C ∈ R^{N×N}, where the matrix elements c_k are the Fourier coefficients from Theorem 2.
Proposition 1. The vector ξ is uniquely determined by ξ = C^{−1}·η, i.e. it is the unique solution of the system of linear equations η = C·ξ.
Proof. First note that f̂_R is continuous on the compact interval [−R, R], as well as integrable and square-integrable on the real line. Furthermore, f̂_R converges in the L²-norm to f̂, hence By the Riemann–Lebesgue lemma, the Fourier transform f̂ is bounded, uniformly continuous, and vanishes at infinity [11]; hence we approximate f̂ by f̂_R in Equation (15) with R chosen large enough. Then, for n = 1, . . ., N this forms a system of linear equations, which in matrix form is given by η = C·ξ, where C = (C_{i,j})_{i,j=1,...,N} with C_{i,j} as in Equation (23), i.e.
In the illustration of the matrix C above it is assumed, without loss of generality, that N is an even integer. By the construction of the system of linear equations, C is an upper triangular matrix with non-zero entries c_1 on its main diagonal. In particular, this means that its inverse C^{−1} exists, and ξ is uniquely determined by ξ = C^{−1}·η.
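A minimal sketch of this construction in code, under the assumption (read off from the structure described above) that the series couples the grid point nR/(2N) only with the values ξ_m for which n divides m, so that C[n, m] = c_{m/n} when n | m and 0 otherwise; C is then upper triangular with c_1 on the diagonal. The coefficient values below are random stand-ins, not the actual c_j.

```python
import numpy as np

def build_C(c, N):
    """C[n-1, m-1] = c[m // n] if n divides m, else 0 (1-based indices n, m)."""
    C = np.zeros((N, N))
    for n in range(1, N + 1):
        for m in range(n, N + 1, n):           # only multiples of n appear
            C[n - 1, m - 1] = c[m // n]
    return C

N = 6
rng = np.random.default_rng(0)
c = np.concatenate(([0.0], rng.uniform(0.5, 1.0, N)))  # stand-in c[1..N], c[1] != 0
C = build_C(c, N)

xi_true = rng.standard_normal(N)               # stand-in for the Ff samples
eta = C @ xi_true                              # right-hand side of η = C·ξ
xi = np.linalg.solve(C, eta)                   # back-substitution recovers ξ
```

Because the diagonal entries all equal c_1 ≠ 0, the triangular system is uniquely solvable, which mirrors the invertibility argument above.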

Band-limited interpolation
We aim to reconstruct the Fourier transform f̂ with a suitable interpolation. In the previous subsection we introduced the vector ξ with coordinates ξ_n = f̂(nR/N), n = 1, . . ., N. Set f̂(−nR/N) = f̂(nR/N) and f̂(0) = F f(0), which we assume to be known for now. In general, F f(0) is unknown; a simple workaround is discussed in Section 6. Moreover, in the application examples of Section 5, the value F f(0) is further specified.
Define the band-limited interpolation of the Fourier transform f̂ by where sinc(x) = sin(πx)/(πx) for x ≠ 0 and sinc(0) = 1. Furthermore, define the estimate where

Proof. (i) Define the cardinal series of f̂ by The Whittaker–Shannon–Kotel'nikov sampling theorem [3,17,28,25], widely known in the field of sampling theory, states that any band-limited function u ∈ L^p(R), 1 ≤ p < ∞, with Fourier transform û compactly supported on [−B, B], can be completely reconstructed from its cardinal series S_B u in L^p [25, Thm. 6]. The optimal sampling rate 2B is called the Nyquist rate. In our case u = f̂. Denote by the truncated cardinal series of f̂. Suppose that f̂ is band-limited, i.e. the function f ∈ L^1_{e,w,α}(R) ∩ L^2_e(R) has compact support [−B, B] for some B > 0. Then f̂ ∈ L^2_e(R) is completely determined by its cardinal series S_B f̂, and in the L²-sense as N → ∞.
The assumption that f has compact support is certainly too strong, and the Shannon sampling theorem applied to non-band-limited functions will result in interpolation errors known as aliasing. To ensure L²-convergence of the band-limited interpolation when f is not compactly supported, we refer to the results in [21]. For p > 1, define and F_p = ∪_{δ>0} F_p(δ). Denote by R the set of all measurable functions g : R → C that are Riemann-integrable on every finite interval. Then, Recall that for any function f ∈ C^n(R) with integrable derivatives f^(k) ∈ L^1(R) for all k ≤ n, its Fourier transform f̂ decays as Setting 2B = N/R in the truncated cardinal series (27) yields the interpolant (ii) Note that and applying the inverse Fourier transform F^{−1} to the band-limited interpolation f̂_R^(N) yields As the L²-convergence of the interpolant to f̂ is preserved under the inverse Fourier transform, we ultimately get lim
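The truncated cardinal series with sampling spacing 1/(2B) can be sketched as follows. The test function is an exactly band-limited stand-in for f̂ (the band b < B and shift x_0 are illustrative), so the truncated sum should reproduce it closely, with the truncation error coming only from the discarded tail terms.

```python
import numpy as np

def cardinal_series(samples, n_idx, B, x):
    """Truncated cardinal series S_B u(x) = Σ_n u(n/(2B)) sinc(2Bx - n)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for s, n in zip(samples, n_idx):
        out += s * np.sinc(2 * B * x - n)      # np.sinc(t) = sin(πt)/(πt)
    return out

B = 2.0                                        # assumed band limit
b, x0 = 0.7 * B, 0.3                           # actual band b < B, a shift
u = lambda x: np.sinc(2 * b * (x - x0))        # band-limited stand-in for f̂

n_idx = np.arange(-500, 501)                   # truncation range of the series
samples = u(n_idx / (2 * B))                   # samples at the Nyquist spacing
x = np.linspace(-1.0, 1.0, 41)
err = np.max(np.abs(cardinal_series(samples, n_idx, B, x) - u(x)))
```

Sampling a function that is not band-limited with the same formula produces the aliasing error discussed next.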

Remark 3. (i) Under the assumptions of Theorem 4, the cardinal series S_B f̂ converges uniformly to f̂ as B → ∞, with error estimate This is a direct consequence of [4, Theorem 1].
(ii) Alternatively, it is also possible to use piecewise linear interpolation. Denote by f̃_{N,R} the piecewise linear interpolation polynomial constructed from the interpolation points ξ_k = f̂(kR/N). It is well known that the Fourier transform f̂ is n-times continuously differentiable if the function f is piecewise continuous and x^k f(x) is integrable for all k ≤ n [15]. Restricted to compact intervals, boundedness follows.
Hence, if f ∈ L^1_{e,w,α} is additionally assumed to satisfy x f(x), x² f(x) ∈ L^1(R), then f̂ is twice continuously differentiable. It follows that the linear interpolation f̃_{N,R} converges to f̂_R uniformly as N → ∞, and the error estimate holds, where C is some positive constant [20, Ch. 8.3] and (f̂_R)″ denotes the second derivative of f̂_R. This implies convergence in the L²-norm over [0, R], which is preserved by the Fourier transform; thus
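The O((R/N)²) behavior of the piecewise linear alternative is easy to observe numerically: doubling N should reduce the uniform error by roughly a factor of four. The smooth test function below is an illustrative stand-in for f̂_R, not one of the paper's examples.

```python
import numpy as np

def linear_interp_error(g, R, N):
    """Uniform error of piecewise linear interpolation of g on knots kR/N."""
    xk = np.linspace(0.0, R, N + 1)            # interpolation knots
    x = np.linspace(0.0, R, 10 * N + 1)        # fine evaluation grid
    return np.max(np.abs(np.interp(x, xk, g(xk)) - g(x)))

g = lambda x: np.exp(-x) * np.cos(3 * x)       # smooth, C² stand-in for f̂_R
R = 5.0
e1 = linear_interp_error(g, R, 100)
e2 = linear_interp_error(g, R, 200)            # halved mesh width
```

For functions with a bounded second derivative the observed ratio e1/e2 approaches 4, consistent with the error estimate above.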

Smoothing
In applications, the function T_α f may be contaminated by noise, which directly affects the solution of the linear system η = C·ξ from Proposition 1. To cope with noisy data of T_α f, we fix a non-negative, even mollifier function e_γ(x), γ > 0, that integrates to 1 and converges to the Dirac delta function as γ tends to 0. Additionally, we determine the respective reconstruction kernel ψ_γ via the relationship ψ_γ = F e_γ. Suitable mollifiers and their respective reconstruction kernels are given in Example 2 of Section 6.2.1. A smoothed estimate of f can then be computed by the following proposition.

where f̂^(N)_R = S_B f̂ is the interpolation given in Equation (25). The convergence to the

Proof. For a given mollifier e_γ we consider the convolution, which converges to the function We then compute the smoothed estimate, where L²-convergence is preserved by the inverse Fourier transform.
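A minimal sketch of the smoothing step, assuming a Gaussian mollifier e_γ with reconstruction kernel ψ_γ(t) = exp(−γ²t²/2) (for the convention F e(t) = ∫ e(x) e^{−itx} dx): multiplying the noisy Fourier samples by ψ_γ damps high frequencies, which corresponds to convolving the estimate of f with e_γ. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
R, N = 10.0, 200
t = np.arange(1, N + 1) * R / N                # sample points nR/N
xi_clean = np.exp(-t ** 2 / 2)                 # stand-in for the Ff samples ξ
xi_noisy = xi_clean + 0.05 * rng.standard_normal(N)

gamma = 0.5
psi = np.exp(-gamma ** 2 * t ** 2 / 2)         # reconstruction kernel ψ_γ = Fe_γ
xi_smooth = psi * xi_noisy                     # damp high-frequency noise

# the smoothed samples approximate the mollified target ψ_γ·ξ
target = psi * xi_clean
```

Since 0 < ψ_γ ≤ 1 and ψ_γ decays rapidly, the smoothed samples are strictly closer to the mollified target than the raw noisy samples are to ξ.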

Applications
In the following section, we briefly present two applications of the inversion of α-sine and α-cosine transforms. First, harmonizable symmetric α-stable processes and their connection to α-sine transforms are outlined. The theory of complex stable measures and stochastic integrals can be quite technical, and a detailed discussion would go beyond the scope of this work; we refer to [24] for a complete introduction to these processes. The second application deals with the inversion of the two-dimensional spherical α-cosine transform.

Stationary real harmonizable symmetric α-stable processes
Consider a probability space (Ω, F, P). Define the spaces L^0(Ω) and L^0_c(Ω) of real-valued and complex-valued random variables on this probability space, respectively. Then every element X ∈ L^0_c(Ω) is of the form X = X_1 + iX_2 with X_1, X_2 ∈ L^0(Ω). A real-valued symmetric α-stable random variable X ∼ SαS(σ) on this probability space is defined by the characteristic function where σ > 0 is called the scale parameter of X and α ∈ (0, 2] its index of stability. In the multivariate case, a real-valued symmetric α-stable random vector X = (X_1, . . ., X_n) is defined by its joint characteristic function where (x, y) denotes the scalar product of two vectors x, y ∈ R^n, and S^{n−1} is the unit sphere in R^n. The measure Γ is called the spectral measure of X. It is unique, finite and symmetric in the case 0 < α < 2 [24, Thm. 2.4.3].
Let us define the notion of a complex random measure. Let (E, E) be a measurable space, and let (S^1, B(S^1)) be the measurable space on the unit circle S^1 equipped with the Borel σ-algebra B(S^1). Let k be a measure on the product space (E × S^1, E ⊗ B(S^1)), and let E_0 = {A ∈ E : k(A × S^1) < ∞}. A complex-valued SαS random measure M on (E, E) is an independently scattered, σ-additive, complex-valued set function such that the real and imaginary parts of M(A), i.e. the vector (M^(1)(A), M^(2)(A)) = (Re(M(A)), Im(M(A))), are jointly SαS with spectral measure k(A × ·) for every A ∈ E_0 [24, Def. 6.1.2]. We refer to k as the circular control measure of M, and denote by m(A) = k(A × S^1) the control measure of M. Furthermore, M is isotropic if and only if its circular control measure is of the form k = mγ, where γ is the uniform probability measure on S^1.
A stochastic integral with respect to a complex SαS random measure M is defined by where u : E → C is a measurable, complex-valued function from the space L^α(E, m). The integral I(u) is well defined by [24, Prop. 6.2.2].
Let (E, E) = (R, B) and u(t, x) = e^{itx}. Then the stochastic process {X(t) : t ∈ R} defined by where M is a complex SαS random measure on (R, B) with finite circular control measure k (equivalently, with finite control measure m), is called a real harmonizable SαS process. By [24, Thm. 6.5.1], this process is stationary if and only if M is isotropic, i.e. its circular control measure is of the form k = mγ. For all n ∈ N and t_1, . . ., t_n ∈ R, the characteristic function of the finite-dimensional distributions of the process X is given by The codifference function τ(t) of a SαS stochastic process is defined as the codifference of the random variables X_0 and X_t, i.e.
where ‖·‖_α denotes the scale parameter of X_0, X_t and X_t − X_0, respectively.

Problem setting and inversion
Let X = {X_t : t ∈ R} be a stationary real harmonizable SαS process with finite circular control measure k, and suppose its control measure m has an even, non-negative density function f ∈ L^1_e(R) ∩ L^2_e(R) with respect to the Lebesgue measure on R. We refer to the function f as the spectral density. It follows that m(dx) = f(x)dx in Equation (31), and the random variables X_0, X_t and X_t − X_0 in (32) are SαS random variables with the following scale parameters.
First, it holds that X_t ∼ SαS(σ) with In particular, this yields m(R) = σ^α/λ_α. By stationarity of X, the random variable X_0 has the same scale parameter σ.
Similarly, for X_t − X_0, Equation (31) yields Ultimately, the codifference function in Equation (32) expands to Suppose that the spectral density f is an even, integrable function. Then rearranging the above yields Assume that the scale parameter σ, the index of stability α ∈ (0, 2) and the codifference function τ are known.
for n = 1, . . ., N, and solve the system of linear equations η = Cξ, where ξ = (ξ_1, . . ., ξ_N) with ξ_n = f̂(nR/N), n = 1, . . ., N. Apply Theorem 4 to compute the estimate of the spectral density f.

Remark 4. In statistical inference, the scale parameter σ, the index of stability α ∈ (0, 2) and the codifference function τ need to be estimated from a realization of the process X.

Two-dimensional spherical α-cosine transform
As mentioned in the introduction, spherical α-cosine transforms are of particular interest in stochastic and convex geometry. For example, when analyzing fiber processes, the so-called rose of intersections is closely related to the α-cosine transform of the underlying directional distribution of the fibers. Also, in convex geometry, the support function of a zonoid is the spherical α-cosine transform of some generating measure ρ.
Let f be an even, integrable function on S^n (the unit sphere in R^{n+1}) with respect to the surface area measure on S^n. We write f ∈ L^1_e(S^n). Define the (n+1)-dimensional spherical α-cosine transform by

K_{α,S} f(θ) = ∫_{S^n} |(θ, η)|^α f(η) dη,   θ ∈ S^n,

where (·, ·) denotes the inner product of the vectors θ, η ∈ S^n. The spherical α-cosine transform is well defined for α > −1, see [22].
Every point θ = (cos(x), sin(x)) on the unit circle S^1 corresponds one-to-one to an angle x ∈ (−π, π], and any function f ∈ L^1(S^1) can be parameterized as a 2π-periodic function on R which is integrable over [−π, π]. We simply denote this by f ∈ L^1([−π, π]); the parameterization is π-periodic if f is even on S^1. Furthermore, the inner product of θ, η ∈ S^1 equals the cosine of the angle between the two vectors. Define the convolution of 2π-periodic functions u, v by (u ∗ v)(x) = ∫_{−π}^{π} u(x − t)v(t) dt. Any 2π-periodic function u ∈ L^2([−π, π]) has the Fourier series expansion (in exponential form) with Fourier coefficients û(n) = (1/2π) ∫_{−π}^{π} u(t) e^{−int} dt. The convolution theorem for Fourier coefficients states that (u ∗ v)^(n) = 2π û(n) v̂(n). Then the two-dimensional equivalent of (34) on S^1 is defined by Note that f̂(0) = 1/(2π), and for any even n ∈ N we can compute is odd about π/2 on [0, π] for all odd n ∈ N. By the shift property we get f̂(n) = f̂*(n)e^{−inh} = 0 for n odd.
To summarize, the Fourier coefficients f̂(n) are given by Lastly, note that f̂(−n) equals the complex conjugate of f̂(n). Applying this to the Fourier series yields the desired result.
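The convolution theorem used above can be verified on a uniform grid, where the rectangle-rule quadratures are exact for trigonometric polynomials of low degree. With the convention û(n) = (1/2π)∫_{−π}^{π} u(t)e^{−int} dt one gets (u ∗ v)^(n) = 2π û(n) v̂(n); the trigonometric test functions below are illustrative.

```python
import numpy as np

M = 64
t = 2 * np.pi * np.arange(M) / M - np.pi       # uniform grid on [-π, π)
freqs = np.fft.fftfreq(M, d=1.0 / M).astype(int)  # integer frequencies

def coeffs(vals):
    """û(n) = (1/2π)∫ u e^{-int} dt via the rectangle rule (exact here)."""
    return np.array([np.sum(vals * np.exp(-1j * k * t)) / M for k in freqs])

uf = lambda x: 1 + np.cos(2 * x)               # trigonometric polynomials
vf = lambda x: np.sin(3 * x) + 0.5 * np.cos(2 * x)

cu, cv = coeffs(uf(t)), coeffs(vf(t))
# periodic convolution (u*v)(x) = ∫ u(x-s) v(s) ds, same grid quadrature
conv_vals = np.array([np.sum(uf(x - t) * vf(t)) * (2 * np.pi / M)
                      for x in t])
cc = coeffs(conv_vals)
```

Because all functions involved are trigonometric polynomials of degree well below M/2, both quadratures are exact up to rounding, so the identity holds to machine precision.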
Remark 5. In the case α = 0, 2, 4, . . ., the coefficients c_j = 0 for all j > α/2 (see Corollary 3). Thus, only a finite number of Fourier coefficients of f can be computed from the Fourier coefficients of K_{α,S} f, making it impossible to fully reconstruct f.

Numerical results
We first consider the α-sine transform on R_+. The inversion results of the direct approach are presented in Section 6.1 for the case α > 1. Additionally, various numerical difficulties that emerge when dealing with the direct approach are analyzed. The Fourier approximation approach is considered in Section 6.2. It proves to be more accurate as well as more efficient than the direct approach. Most importantly, it is applicable for all α > −1. Inversion results are first shown for α ≥ 0; the case −1 < α < 0 is considered separately. Moreover, in the context of an application to harmonizable SαS processes, Gaussian noise is added to the transform T_α f to test the smoothing procedure of Section 4.4. Lastly, Section 6.3 deals with the two-dimensional α-cosine transform. We consider a number of π-periodic probability density functions on [−π, π] and demonstrate the inversion formula (36) developed in Corollary 4 for α > −1.
Consider the following functions in L^1_{w,α}(R_+) ∩ L^2(R_+), as well as their 2-sine transforms T_2 f: For α ≠ 0, 2, 4, . . ., the transform T_α f is in general not explicitly given, though its graph remains fairly similar, see Figure 1.

Direct approach
In applications, one has to keep the following two issues with the direct approach in mind. First, the operator G defined in Corollary 1 is injective but not surjective. Secondly, G^{−1} is in general not a continuous operator. Therefore a straightforward application of G^{−1} does not automatically yield f. The reason is that inf_{x>0} |µ(x)| = 0, where µ was defined in Equation (12). Hence the function 1/µ is not bounded, and multiplication by it does not define a bounded linear operator on L^2(R_+, dx/x). To deal with this, we approximate 1/µ by (1/µ)·1{|µ|>ε} for some small ε > 0 and estimate the function f by (38). To simplify the numerical implementation, note that for any function u ∈ L^2(R_+, x^c dx) Thus, we define H : Similarly, we introduce Recall that the constant c = 1 + δ, where δ > 0 is such that α ≥ 1 + δ/2; thus 1 < c ≤ 2α − 1. Furthermore, the constant c directly affects the decay of the integrand in the definition of µ: the larger c, the faster the decay. We therefore set c as large as possible, that is c = 2α − 1, in the case 1 < α < 2. In the case α > 2, values c > 3, and thus a positive exponent (c − 3)/2 > 0 in the operator H above, lead to difficulties during numerical integration. It proves more convenient to set c = 3 ∈ (1, 2α − 1] here. The functions f_1 and f_2 of Example 1 are contained in the space L^2(R_+, x^c dx) for all α > 1, hence the inversion formula from Corollary 1 is applicable for all α > 1. On the other hand, the function f_3 fulfills this condition only for α ∈ (1, 2]. The results for all three example functions in the case α = 2 are displayed in Figure 2.
The inversion was performed with ε = 0.025. The choice of ε is crucial for the precision of the inversion. The closer ε is to 0, the better the results, but computation takes significantly longer. Allowing for a larger ε reduces computation time, but large deviations from the expected result can be seen in Figure 3, where the inversion was performed for all three example functions with ε = 0.1. Moreover, the inversion for the functions f_1 and f_2 is significantly faster than in the case of f_3. In the following, only the example f_2 is considered. The inversion is computed for the cases α ∈ {10, 3, 1.5}. The results are illustrated in Figure 4. Numerical computation takes significantly longer for smaller α, but at the same time a larger choice of ε suffices. The inversion hinges on the behavior of the function µ, which directly depends on α. Figure 5 depicts the function |µ| on the left-hand side, as well as the real part of (1/µ(x))·1{|µ(x)|>0.1} on the right-hand side, for various values of α with c = 2α − 1 (or c = 3 for α ≥ 2). The larger α, the faster and less oscillatory the decay of |µ|. Thus, the function (1/µ(x))·1{|µ(x)|>0.1} cuts off to 0 sooner, which directly benefits the computation of the integral operator F_+^{−1} in Equation (38). Additionally, Figure 5 emphasizes how choosing ε too small when dealing with smaller α is counterproductive; for example, consider the case α = 1.25. The corresponding dotted line on the left-hand side of Figure 5 fluctuates, and |µ| fails to fall below ε = 0.1. Thus, (1/µ(x))·1{|µ(x)|>0.1} on the right-hand side of the plot also fluctuates strongly and does not cut off to 0 as in the other cases. This leads to difficulties during numerical integration and prolonged computation times.
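The ε-regularization described above is a one-liner in code: replacing 1/µ by (1/µ)·1{|µ|>ε} yields a multiplier bounded by 1/ε, which restores boundedness of the multiplication operator. The decaying µ below is an illustrative stand-in, not the actual function from Equation (12).

```python
import numpy as np

def reg_inverse(mu, eps):
    """(1/µ)·1{|µ|>ε}, elementwise; zero wherever |µ| ≤ ε."""
    out = np.zeros_like(mu)
    mask = np.abs(mu) > eps
    out[mask] = 1.0 / mu[mask]
    return out

x = np.linspace(0.01, 50, 5000)
mu = np.exp(-0.2 * x)                          # illustrative decay of |µ|
inv = reg_inverse(mu, eps=0.025)
```

The trade-off discussed in the text is visible here: a smaller ε pushes the cut-off point further out (more of 1/µ is kept, better accuracy), while a larger ε truncates earlier and makes the subsequent integration cheaper.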
On another note, for α ≠ 2 the function g = T_α f is not given in closed form, which makes applying integral transforms to it numerically challenging, as the inversion formula (38) essentially involves threefold integration, i.e. by T_α itself and by F₊ and F₊⁻¹, respectively. Therefore, the function g was sampled discretely on the interval [0, 20] with a step size of 10⁻⁶ between sample points. Linear interpolation and constant extrapolation with the last sample point were used as an estimate for g. The upper bound of the interval was chosen such that g is approximately constant beyond that point.
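The interpolation scheme just described, linear between sample points and constant beyond the last one, can be sketched as follows; `np.interp` clamps to the boundary values outside the sampled range, which realizes exactly the constant extrapolation with the last sample point:

```python
import numpy as np

def make_g_estimate(sample_x, sample_y):
    """Return an estimate of g = T_alpha f from discrete samples:
    linear interpolation on the sampled range, constant extrapolation
    with the boundary values beyond it."""
    sample_x = np.asarray(sample_x, dtype=float)
    sample_y = np.asarray(sample_y, dtype=float)

    def g_hat(y):
        # np.interp clamps to sample_y[0] / sample_y[-1] outside the range
        return np.interp(y, sample_x, sample_y)

    return g_hat
```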

Fourier approximation approach
We consider the functions given in Example 1. Function values of the Fourier transform of the sought function f are the solution of the system of linear equations η = C·ξ described in Section 4.2. Recall the vector ξ ∈ ℝᴺ containing function values of the Fourier transform, ξₙ = f̂_R(nR/N), and the vector η ∈ ℝᴺ given by ηₙ = T_α f(nR/(2N)) − (c₀/2)·F f(0) for n = 1, …, N, as well as the matrix C defined in Proposition 1.
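Since the Discussion notes that C is triangular and non-singular by construction, the system η = Cξ can be solved by substitution. A minimal sketch, assuming the lower-triangular case (the actual orientation of C depends on the indexing in Proposition 1):

```python
import numpy as np

def solve_lower_triangular(C, eta):
    """Forward substitution for a lower-triangular, non-singular matrix C.

    For an upper-triangular C, back substitution (or e.g.
    scipy.linalg.solve_triangular) would be used instead.
    """
    C = np.asarray(C, dtype=float)
    eta = np.asarray(eta, dtype=float)
    n = len(eta)
    xi = np.zeros(n)
    for i in range(n):
        # subtract contributions of already-computed components
        xi[i] = (eta[i] - C[i, :i] @ xi[:i]) / C[i, i]
    return xi
```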
The integer R should be chosen as large as possible. Under the assumption that the Fourier transform F f is negligible outside of the interval [−R, R], we choose R large enough that T_α f(y) is approximately constant for all y > R. Furthermore, note that T_α f(y) ≈ (c₀/2)·F f(0) for large y > R. Hence, we choose F f(0) ≈ 2 T_α f(y)/c₀ for some large y > R. Lastly, the larger N the better in general, as this makes for finer sampling.
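The tail-based estimate F f(0) ≈ 2 T_α f(y)/c₀ can be sketched as follows; averaging several tail samples is our own stabilizing heuristic, not part of the text:

```python
import numpy as np

def estimate_Ff0(tail_vals, c0):
    """Estimate F f(0) from values of T_alpha f at large arguments y > R,
    using T_alpha f(y) ~ (c0/2) * F f(0). Several tail samples are
    averaged for numerical stability (a heuristic of this sketch)."""
    tail_vals = np.asarray(tail_vals, dtype=float)
    return 2.0 * tail_vals.mean() / c0
```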
Figure 6 shows the solution ξ of the system of linear equations for all three examples in the case α = 1.5 with N = 100 and R = 10. On the other hand, when fixing α and N, choosing R too small is directly reflected in the results for ξ, which is to be expected. For example, choosing R = 1 would imply the assumption that F f vanishes outside of the interval [−1, 1], which cannot be true, as the function T_α f(y) is still increasing for y > 1 in all three examples f₁, f₂, f₃, see Figure 1 (a).
Contrary to that, larger values of R also yield good results for ξ. But since the number of sample points is fixed at N, the larger R, the coarser the grid on which the Fourier transform F f is determined, which affects the precision of the interpolation to follow. The sampling theory of Shannon, as discussed before, provides another useful interpolation method for the Fourier transform of the spectral density. By symmetry of the Fourier transform, and choosing f̂(0) = F f(0) ≈ 2 T_α f(y)/c₀ for some large y > R, the function values f̂_R(kR/N), k = −N, …, N, are approximated by the solution ξ of the system of equations. Using the truncated cardinal series defined in (27), we interpolate f̂_R.
In the comparison of f̂(n/(2B)) = f̂(nR/N), the Nyquist rate 2B corresponds to the ratio N/R. Since we generally cannot assume f to be compactly supported, the convergence results stated previously suggest that the ratio N/R should tend to infinity. In practice, we therefore choose R, just as in the case of the linear interpolation, large enough that T_α f(y) is approximately constant for y > R; the integer N is then set as large as possible. We compute the estimate for the spectral density f by Fourier inversion of the interpolated f̂_R, see Theorem 4. Figure 7 shows the result of the inversion with N = 100, R = 10 (based on the results depicted in Figure 6) for all three examples in the case α = 1.5.
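The truncated cardinal series interpolation can be sketched as follows; the sample layout f̂_R(kR/N), k = −N, …, N, and the Nyquist rate 2B = N/R are taken from the text, while the helper name is hypothetical:

```python
import numpy as np

def truncated_cardinal_series(samples, R, t):
    """Truncated Whittaker cardinal (sinc) series, cf. (27).

    samples: values f_hat_R(k*R/N) for k = -N, ..., N (length 2N+1),
    interpreted with Nyquist rate 2B = N/R.
    t: evaluation point(s); returns the interpolated values.
    """
    samples = np.asarray(samples, dtype=float)
    N = (len(samples) - 1) // 2
    k = np.arange(-N, N + 1)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(2B t - k) = sinc(t*N/R - k)
    weights = np.sinc(t[:, None] * (N / R) - k)
    return weights @ samples
```

At the sample points t = kR/N the series reproduces the samples exactly, since sinc vanishes at all non-zero integers.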
We performed the inversion with various values of α > 0 for all three functions f₁, f₂ and f₃. With a suitable choice of N and R, different values of α seem to have no effect on the computation of ξ. Figure 8 highlights the result for f₁ with N = 100, R = 10 and α = 10.

Remark 6. In Remark 3 the possibility of linear interpolation was mentioned. Our numerical experiments show no visible difference in the results of either interpolation method. Linear interpolation performs slightly faster, but is on the other hand marginally less accurate. See Table 1 for a comparison between all methods for example f₂.

Smoothing
Section 4.4 touched upon the topic of noise contamination of the function T_α f. To simulate this scenario, we artificially added noise to the function T_α f. For the three functions f₁, f₂ and f₃, their transform T_α f is sampled on the interval [0, 20] at 400 equidistant points. We then added Gaussian noise with a standard deviation of 0.1 to each sample point, see Figure 9.
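Generating such noisy data can be sketched as follows; the function g and the random seed are placeholders of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, for reproducibility only

def noisy_samples(g, a, b, n, sigma=0.1):
    """Sample g at n equidistant points on [a, b] and add centered
    Gaussian noise with standard deviation sigma (as in Figure 9)."""
    y = np.linspace(a, b, n)
    return y, g(y) + rng.normal(0.0, sigma, size=n)
```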
Recall from Section 4.4 that we compute the smoothed solution of the function f using the reconstruction kernel ψ_γ with smoothing parameter γ > 0, satisfying ψ_γ = F e_γ for a given mollifier e_γ, and the approximation f̂^(N)_{R,γ} of the Fourier transform F f as computed in the previous sections. There are many options for mollifiers available. We consider the following two examples e^(1), e^(2) and their corresponding reconstruction kernels ψ^(1), ψ^(2).

Example 2.
(i) e^(1)

The solution ξ of the system of linear equations η = Cξ is interpolated linearly as well as with the truncated cardinal series, see Figure 10. Which interpolation method is employed beforehand does not play a significant role. The smoothed solution is computed for all three examples with γ = 0.5; smoothing was performed with the reconstruction kernel ψ^(1).
The results are displayed in Figure 11.
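The smoothing step, multiplying the interpolated Fourier approximation by a reconstruction kernel before Fourier inversion, can be sketched as follows. A Gaussian kernel is used here purely as a stand-in; in practice the kernels ψ^(1), ψ^(2) of Example 2 would be substituted:

```python
import numpy as np

def smoothed_fourier_approx(f_hat_vals, freqs, gamma):
    """Multiply a sampled Fourier-transform approximation by a
    reconstruction kernel psi_gamma before Fourier inversion.

    Placeholder kernel: psi_gamma(x) = exp(-(gamma*x)**2 / 2); for
    gamma -> 0 the kernel tends to 1, i.e. no smoothing.
    """
    freqs = np.asarray(freqs, dtype=float)
    psi = np.exp(-0.5 * (gamma * freqs) ** 2)
    return np.asarray(f_hat_vals) * psi
```

By the convolution theorem, this multiplication in the Fourier domain corresponds to convolving the reconstructed f with the associated mollifier e_γ.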

The case −1 < α < 0
In the following we consider negative values −1 < α < 0. Figure 12 shows the respective transforms for the functions of Example 1. The transform T_α f is not defined at 0, and as the parameter α approaches −1, the irregularities of the transform become more pronounced. For the case α = −0.9, the reconstruction of f₂ shows larger deviations. This was to be expected, given the irregularities of the transform T₋₀.₉ f₂ in Figure 12 (b). For f₁ and f₃, the results for values of α close to −1 are better, as the graphs of their transforms T_α f do not show such severe spikes.

Spherical α-cosine transform on S 1
We consider the following examples of even probability density functions on the unit circle S¹. Figure 15 shows the spherical α-cosine transform of all three densities f₁, f₂, f₃ from Example 3 for various values of α > −1. The results of the reconstruction of all three functions from their Fourier coefficients for α = 1.5, using Corollary 4, are given in Figure 16. The inversion results for other values of α > −1 show an identical picture to Figure 16, and are thus omitted here.

Discussion
Corollary 1 gives an inversion operator of T_α that is injective on the space of square-integrable functions on the positive half-line with respect to a weighted Lebesgue measure. The direct approach provides a direct computational formula for the function f from its α-sine transform T_α f, and generally works for a larger class of transformations than the presented α-sine transform, but it is lacking in terms of efficiency. The practical implementation of the underlying theory quickly proves to be quite cumbersome. The numerical instability of the function µ, as well as the Fourier transformation F₊ on the multiplicative group (ℝ₊, ·), require tremendous computational effort. Most importantly, the direct approach is only applicable for α > 1.
The choice of the cut-off parameter ε significantly impacts the results of the inversion. A value of ε that is not small enough leads to large deviations around 0; very small values of ε reduce these deviations, but at the price of very long computation times. Satisfying results could only be achieved when accepting several hours of computation (on a modern Intel i5 8500 6-core CPU). In our examples, T_α f needed to be sampled, and an interpolation was used as an approximation.
A direct application of Corollary 1 to the integral form of T_α f led to no results, as numerical integration failed. The step size of the sample had to be chosen as small as 10⁻⁶; larger step sizes ended in diverging numerical integration, while smaller ones increased computation times enormously. See Table 1 for a comparison of the direct approach with the Fourier approximation approach in terms of computation time and accuracy.
The Fourier approximation approach is based on the series representation of T_α f given in Theorem 2, which holds for all α > −1. The specific Fourier expansions of the integral kernels |sin(x)|^α and |cos(x)|^α, respectively, allow for the construction of a fast and efficient approximate inversion method. In the case α = 2, the transform T₂ f is simply a linear transformation of the Fourier transform F f, so inversion is achieved by a simple application of the inverse Fourier transform. In every other case α > −1, the representation of T_α f as a series in the Fourier transform F f, and the resulting system of linear equations from Proposition 1, enable us to compute an approximation of F f efficiently. Applying the inverse Fourier transform then yields an estimate of the function f.

The presented Fourier approximation method is fast and delivers accurate inversion results in a matter of seconds. The solution of the system of linear equations involved is easily computed, since the matrix C is triangular and non-singular by construction. We have seen that the parameters N and R can be chosen in a practical manner. Our computations were performed with only N = 100 equidistant samples of T_α f on the interval [0, 10]. This is a huge advantage over the direct approach, where only a much smaller step size of 10⁻⁶ led to acceptable results.

Table 1 compares the numerical performance of all methods for the function f₂ from Example 1 (b). Linear interpolation performed slightly faster than band-limited interpolation but is less accurate, and both perform tremendously faster and are more accurate than the direct approach. Most of the error in the direct approach stems from the fluctuation of the inversion near 0; on the interval [0.5, R], for example, the L²-distance is much smaller. Still, the long computation times are an immense drawback. Most importantly, however, the advantage of the Fourier approximation approach lies in its applicability for all α > −1.
(*3) Due to high computation times, we used the built-in trapezoid discretization for numerical integration.
When dealing with noise-inflicted input data, the convolution property of the Fourier transformation enables us to compute a smoothed solution to our inversion problem. Multiplication of the interpolated approximation of the Fourier transform of f with a reconstruction kernel corresponds to the convolution of f with the corresponding mollifier. For our synthetically generated, noise-corrupted data, satisfactory smoothed solutions are still achieved when dealing with noise considerably stronger than in the examples shown in Section 6.2.1.
The closer α gets to −1, the more difficult it is to compute satisfactory inversion results, as we have seen in the case α = −0.9 for example f₂. The deviation of the inversion from its target is deeply connected to the numerical integrability of the integral kernel for α close to −1. Signs of the numerical instability can be seen immediately in the spikes of the transform T_α f. A viable remedy is to smooth T_α f, as we did before for the synthetically noise-corrupted data.
Lastly, we also derived a series representation for the two-dimensional spherical α-sine and α-cosine transforms with the same coefficients as for the transforms on ℝ₊. This allows us to establish an inversion algorithm which computes an approximation of the original function from its spherical transform.

Remark 2.
Counterparts of Corollary 2 and Theorem 3 for K_α follow immediately from |c̃_j| = |c_j|. The only exception is the convergence of the series Σ_{j=1}^∞ c̃_j for −1 < α < 0, which can easily be seen by the Leibniz criterion. More precisely, by Equation (21) it holds that

|cos(x/2)|^α = c̃₀/2 + Σ_{k=1}^∞ c̃_k cos(kx)   (36)

for all x ∈ [−π, π], where the coefficients c̃_k, k = 0, 1, …, are given in Corollary 3.

Proof. Since {c̃_n} are the coefficients of the Fourier series (in sine-cosine form) of the function |cos(·/2)|^α, one can easily verify that the n-th Fourier coefficient of |cos(·)|^α equals c̃_{|n|/2} for even n ∈ ℤ and 0 otherwise. Thus, by the convolution theorem and Equation (35) we get the Fourier coefficients K̂_{α,S¹}(n) = 2π c̃_{|n|/2} f̂(n) for even n ∈ ℤ and 0 otherwise, as well as the series expansion

Figure 2 :
Figure 2: Direct approach, α = 2. Inversion performed with ε = 0.025. The solid blue line shows the actual functions f₁, f₂, f₃. The dashed red line shows the result of the inversion of the direct approach.

Figure 3 :
Figure 3: Direct approach, α = 2. Inversion performed with ε = 0.1 for all three examples. The solid blue line shows the actual functions f₁, f₂, f₃. The dashed red line shows the result of the inversion of the direct approach.

Figure 4 :
Figure 4: Direct approach, α ∈ {10, 3, 1.5}. Inversion performed for f₂. The solid blue line shows the actual function f₂. The dashed red line shows the result of the inversion of the direct approach.

Figure 6: Figure 7:
Figure 6: System of linear equations, α = 1.5, N = 100, R = 10. The solid blue line shows the Fourier transform of the examples f₁, f₂ and f₃. The red circles show the solution ξ of the system of linear equations.

Figure 9 :
Figure 9: Noisy data. Gaussian noise added to T₁.₅ f, sampled on the interval [0, 20] at 400 equidistant points, for all three spectral densities f₁, f₂ and f₃.

Figure 10 :
Figure 10: Noisy data. Linear interpolation and band-limited interpolation in the case α = 1.5 for all three example functions.

Figure 13:
Figures 13 and 14 show the inversion results of all three examples using band-limited interpolation for α = −0.5, as well as for example f₂ and α ∈ {−0.9, −0.75, −0.25}, respectively. All results were computed with N = 100, R = 10. For the case α = −0.9, the reconstruction of f₂ shows larger deviations, which was to be expected given the irregularities of the transform T₋₀.₉ f₂ in Figure 12 (b).

Table 1 :
Comparison between the direct approach and the Fourier approximation method (with band-limited or linear interpolation). Computations were performed for f₂ (cf. Example 1 (b)) with R = 10, N = 100. The computation time was measured for the computation of the approximation of f₂ at 100 equidistantly spaced points on [0, R]. (*) For α = 2 we used ε = 0.025 and the analytical form of T₂ f₂ given in Example 1 (b); for α = 1.5 we set ε = 0.1 and used a discrete sample of T₁.₅ f₂ with a sample step size of 10⁻⁶. (*2) Computation was performed on [0.05, R] only, as computation times increase tremendously the closer the left integration boundary is to 0; the error becomes larger, too.
(28) with δ = 1/2. Furthermore, f is uniformly continuous on ℝ and bounded; in particular, it is integrable over all finite intervals. Hence, ‖f − S_B f‖₂ → 0 as B → ∞, i.e. the cardinal series S_B f converges to f in the L²-norm. The above and the L²-convergence in (28)