Frame Decompositions of Bounded Linear Operators in Hilbert Spaces with Applications in Tomography

We consider the decomposition of bounded linear operators on Hilbert spaces in terms of functions forming frames. Similar to the singular-value decomposition, the resulting frame decompositions encode information on the structure and ill-posedness of the problem and can be used as the basis for the design and implementation of efficient numerical solution methods. In contrast to the singular-value decomposition, the presented frame decompositions can be derived explicitly for a wide class of operators, in particular for those satisfying a certain stability condition. In order to show the usefulness of this approach, we consider different examples from the field of tomography.


Introduction
In this paper, we consider bounded linear operators between real or complex Hilbert spaces X and Y. These may be of the general form
$$ A : X := \bigoplus_{m=1}^{M} X_m \to Y := \bigoplus_{n=1}^{N} Y_n , \qquad (1.1) $$
where the X_m and Y_n are again Hilbert spaces, and M, N ∈ N. In particular, we are interested in solving potentially ill-posed linear operator equations of the form
$$ Ax = y . \qquad (1.2) $$
An element x* ∈ X is commonly called a least-squares solution of (1.2) if it minimizes the residual norm ‖Ax − y‖ over X, and a solution if Ax* = y. Furthermore, the minimum-norm (least-squares) solution x† is defined as the unique (least-squares) solution of minimal norm, i.e.,
$$ \|x^\dagger\| = \inf \{ \|x^*\| \; | \; x^* \text{ is a (least-squares) solution of } Ax = y \} . $$
In case that A is compact, it is well known [2,6,11,14] that there exists a singular system (σ_k, v_k, u_k)_{k=1}^∞ such that A admits a singular-value decomposition (SVD) of the form
$$ Ax = \sum_{k=1}^{\infty} \sigma_k \langle x, v_k \rangle \, u_k . \qquad (1.3) $$
Thereby, the singular values σ_k and singular functions u_k, v_k are defined via
$$ A v_k = \sigma_k u_k , \qquad A^* u_k = \sigma_k v_k . \qquad (1.4) $$
The singular system contains all relevant information about the operator A and can be used to characterize the minimum-norm (least-squares) solution x† of (1.2) via
$$ x^\dagger = \sum_{k=1}^{\infty} \sigma_k^{-1} \langle y, u_k \rangle \, v_k . \qquad (1.5) $$
Since {u_k}_{k∈N} and {v_k}_{k∈N} are complete orthonormal systems spanning the closure of R(A) and N(A)^⊥, respectively, it follows that x† is well-defined if and only if the so-called Picard condition holds:
$$ \sum_{k=1}^{\infty} \sigma_k^{-2} \, |\langle y, u_k \rangle|^2 < \infty . \qquad (1.6) $$
The rate of decay of the singular values σ_k thus defines the degree of ill-posedness of the problem. Furthermore, in the presence of noisy data y^δ, one can determine a stable approximation x_α^δ of x† for example via (see e.g. [12,18])
$$ x_\alpha^\delta = \sum_{k=1}^{\infty} g_\alpha(\sigma_k) \, \langle y^\delta, u_k \rangle \, v_k , \qquad (1.7) $$
where g_α is a properly selected approximation of s ↦ 1/s and the regularization parameter α is suitably chosen in dependence on the noise level δ, which is such that ‖y − y^δ‖ ≤ δ.
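In finite dimensions, the filtered reconstruction (1.7) can be illustrated directly. The discretized integration operator, the noise level, and the parameter value α = 10⁻² below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Toy ill-posed problem: a discretized integration operator (illustrative choice).
n = 200
t = np.linspace(0, 1, n)
A = np.tril(np.ones((n, n))) / n              # cumulative sums approximate integration

x_true = np.sin(2 * np.pi * t)                # exact solution
y = A @ x_true
rng = np.random.default_rng(0)
y_delta = y + 1e-3 * rng.standard_normal(n)   # noisy data y^delta

# Singular system (sigma_k, u_k, v_k) of A, cf. (1.3)-(1.4).
U, sigma, Vt = np.linalg.svd(A)

def filter_solution(g):
    """x = sum_k g(sigma_k) <y_delta, u_k> v_k, cf. (1.7)."""
    return Vt.T @ (g(sigma) * (U.T @ y_delta))

alpha = 1e-2
x_naive = filter_solution(lambda s: 1.0 / s)           # unregularized: noise blow-up
x_tikh = filter_solution(lambda s: 1.0 / (s + alpha))  # g_alpha(s) = 1/(s + alpha)
```

The unfiltered choice g(s) = 1/s amplifies the noise in the small singular values, while the bounded filter keeps the reconstruction stable; this is exactly the mechanism behind (1.6) and (1.7).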
Different choices of g_α give rise to different regularization methods, for example Tikhonov regularization, Landweber iteration, or the truncated singular-value expansion [12,18]. Hence, if the singular system of A is known or can be easily derived, it makes sense to use it for both analytical considerations and numerical implementations. Unfortunately, in many cases an explicit form of the singular system is unknown or hard to derive. The reasons for this are twofold. On the one hand, finding an explicit representation of the solutions of the eigenvalue equations in (1.4) is often very difficult, and even if one is found, its numerical evaluation may still be impractical (see e.g. [24]). On the other hand, the domains over which the Hilbert spaces X and Y are defined are often not regular, which makes it difficult to find orthonormal bases that can serve as candidates or building blocks for the singular functions u_k and v_k. Thus, while in theory the singular-value decomposition is a great tool for the analysis and solution of inverse problems, it is often not used in practice, save perhaps for finite-dimensional problems; and even there, the numerical computation or approximation of the singular vectors quickly becomes overwhelming once the problem is medium- to large-scale.
Hence, in this paper, we generalize and extend the applicability of the singular-value decomposition by considering frame decompositions of the operator A. By this, we mean that we decompose the operator in a similar way as in (1.3), but now with the sets of functions {v_k}_{k∈N} and {u_k}_{k∈N} no longer being orthonormal systems, but only forming (suitably connected) frames over X and Y, respectively. Due to the properties of frames, we are then able to characterize the minimum-coefficient (least-squares) solution of (1.2) in a way similar to (1.5). The advantage of this approach is the greater freedom gained by the use of frames over orthonormal systems, with which it is also easier to work over irregular domains. This freedom, combined with the availability of a wide range of highly specialized frames, allows us to obtain frame decompositions of operators for which singular-value decompositions are so far unavailable. These include in particular those operators which satisfy a certain stability property (see (4.1) below), as well as continuously invertible linear operators. Furthermore, in order to show that frame decompositions are not only a theoretical possibility, we also present a number of explicit examples from the field of tomography.
Note that frames have been used for the solution of inverse problems before, most commonly in the form of bi-orthogonal or orthonormal wavelet frames (see for example [1,3,4,7,9,17,20,23,29-32] and the references therein). The popularity of wavelets is due to the fact that they can naturally be used to express sparsity properties, structural aspects, and smoothness assumptions on the sought-for solutions. In this connection, we in particular want to mention the Wavelet-Vaguelette Decomposition [1,9,17], which is another generalization of the SVD, based not on general frames but specifically on an orthonormal wavelet basis and two bi-orthogonal "near-orthogonal sets" (i.e., frames).
The outline of this paper is as follows: In Section 2, we review some necessary material on frames in Hilbert spaces, which we then use in Section 3 to derive our main results on frame decompositions. In Section 4, we show the applicability of the developed theory to specific operator classes, providing a general recipe for their frame decomposition. In Section 5, we apply our results to a number of problems in atmospheric and computerized tomography, before ending with a short conclusion in Section 6.

Frames in Hilbert Spaces
Before considering frame decompositions of the operator A, we first need to recall some basic facts on frames in Hilbert spaces. This short summary, based on the seminal work [8], is adapted from our previous publication [15]. First, recall the following

Definition 2.1. A sequence {e_k}_{k∈N} in a Hilbert space X is called a frame over X, if and only if there exist numbers B_1, B_2 > 0 such that for all x ∈ X there holds
$$ B_1 \|x\|_X^2 \le \sum_{k \in \mathbb{N}} |\langle x, e_k \rangle_X|^2 \le B_2 \|x\|_X^2 . \qquad (2.1) $$
The numbers B_1, B_2 are called frame bounds. The frame is called tight if B_1 = B_2, and exact if it ceases to be a frame whenever any single one of its elements is removed.

For a given frame {e_k}_{k∈N}, one can consider the so-called frame (analysis) operator F and its adjoint (synthesis) operator F*, which are given by
$$ F : X \to \ell^2(\mathbb{N}) , \; x \mapsto \{ \langle x, e_k \rangle_X \}_{k \in \mathbb{N}} , \qquad F^* : \ell^2(\mathbb{N}) \to X , \; \{a_k\}_{k \in \mathbb{N}} \mapsto \sum_{k \in \mathbb{N}} a_k e_k . \qquad (2.2) $$
Due to (2.1) and the general fact that ‖F‖ = ‖F*‖, there holds ‖F‖² = ‖F*‖² ≤ B_2. Furthermore, one can define the operator S := F*F, i.e.,
$$ S x = \sum_{k \in \mathbb{N}} \langle x, e_k \rangle_X \, e_k , $$
and it follows that S is a bounded linear operator with B_1 I ≤ S ≤ B_2 I, where I denotes the identity operator. Furthermore, S is invertible with B_2^{-1} I ≤ S^{-1} ≤ B_1^{-1} I. Thus, it follows that if one defines ẽ_k := S^{-1} e_k, then the set {ẽ_k}_{k∈N} also forms a frame over X, with frame bounds B_2^{-1}, B_1^{-1}, which is called the dual frame of {e_k}_{k∈N}. For the corresponding frame operator F̃ there holds
$$ \tilde{F}^* F = F^* \tilde{F} = I , \qquad F \tilde{F}^* = \tilde{F} F^* = P , \qquad (2.4) $$
where P denotes the orthogonal projector from ℓ²(N) onto R(F) = R(F̃). In particular, it follows from (2.4) that any x ∈ X can be written in the form
$$ x = \sum_{k \in \mathbb{N}} \langle x, e_k \rangle_X \, \tilde{e}_k = \sum_{k \in \mathbb{N}} \langle x, \tilde{e}_k \rangle_X \, e_k .
$$
(2.5)
Furthermore, if {e_k}_{k∈N} is a tight frame with frame bounds B_1 = B_2 = B, then we have that S^{-1} = B^{-1} I, which implies that ẽ_k = e_k/B and therefore
$$ x = B^{-1} \sum_{k \in \mathbb{N}} \langle x, e_k \rangle_X \, e_k . $$
Since in general there holds {0} ⊆ N(F*) = N(F̃*), the decomposition of x given in (2.5) is not unique. However, it is the most economical one in the sense of the following

Proposition 2.1. Let {e_k}_{k∈N} be a frame over X with dual frame {ẽ_k}_{k∈N}, let x ∈ X, and let {a_k}_{k∈N} ∈ ℓ²(N) be such that x = Σ_{k∈N} a_k e_k. Then
$$ \sum_{k \in \mathbb{N}} |a_k|^2 = \sum_{k \in \mathbb{N}} |\langle x, \tilde{e}_k \rangle_X|^2 + \sum_{k \in \mathbb{N}} |a_k - \langle x, \tilde{e}_k \rangle_X|^2 , $$
and if a_k ≠ ⟨x, ẽ_k⟩_X for some k ∈ N, then Σ_{k∈N} |a_k|² > Σ_{k∈N} |⟨x, ẽ_k⟩_X|².

This proposition implies that among all possible decompositions of a function x in terms of the frame {e_k}_{k∈N}, the coefficients ⟨x, ẽ_k⟩_X in (2.5) are those with the smallest ℓ²-norm. Thus, working with these coefficients also has the practical advantage of reducing the risk of computational instabilities and the resulting errors in an implementation.
The fact that frames generally allow the decomposition of a function in potentially infinitely many different ways is one of the key differences between frames and bases. In fact, for any frame {e_k}_{k∈N} the following statements are equivalent (see e.g. [5]): the frame is exact (i.e., no element can be deleted), it is bi-orthogonal to its dual frame, and it is a Riesz basis of X. (2.7)
In particular, since R(F) = R(F̃) is a closed subspace of ℓ²(N) and thus there holds ℓ²(N) = R(F) ⊕ N(F*), it follows that P = I if and only if {e_k}_{k∈N} is an exact frame. Note that it is sometimes not possible to compute the dual frame functions ẽ_k explicitly. However, since there holds (see [8])
$$ S^{-1} = \frac{2}{B_1 + B_2} \sum_{j=0}^{\infty} R^j , \qquad \text{where} \quad R := I - \frac{2}{B_1 + B_2} S , \qquad (2.8) $$
the elements of the dual frame can be approximated by only summing up to a finite index N, i.e.,
$$ \tilde{e}_k^N := \frac{2}{B_1 + B_2} \sum_{j=0}^{N} R^j e_k . \qquad (2.9) $$
The induced error of this approximation is controlled by the frame bounds B_1, B_2, i.e.,
$$ \big\| \tilde{e}_k - \tilde{e}_k^N \big\|_X \le \frac{1}{B_1} \left( \frac{B_2 - B_1}{B_2 + B_1} \right)^{N+1} \|e_k\|_X . \qquad (2.10) $$
Note that (2.9) can also be written in the recursive form
$$ \tilde{e}_k^N = \tilde{e}_k^{N-1} + \frac{2}{B_1 + B_2} \big( e_k - S \, \tilde{e}_k^{N-1} \big) , \qquad \tilde{e}_k^0 = \frac{2}{B_1 + B_2} e_k , \qquad (2.11) $$
which allows for an efficient numerical implementation.
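These facts are easy to verify numerically on a small finite frame. The particular three-vector frame of R² below is an illustrative assumption; the code checks the reconstruction formula (2.5) and the frame-algorithm iteration (2.11):

```python
import numpy as np

# A small non-tight frame of 3 vectors in R^2 (illustrative choice).
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0 ** -0.5, 2.0 ** -0.5]])   # rows are the frame vectors e_k

# Frame operator S = F*F, here the 2x2 matrix sum_k e_k e_k^T.
S = E.T @ E
B1, B2 = np.linalg.eigvalsh(S)               # frame bounds = extreme eigenvalues of S

# Dual frame e~_k = S^{-1} e_k.
E_dual = E @ np.linalg.inv(S)

# Reconstruction x = sum_k <x, e_k> e~_k, cf. (2.5).
x = np.array([0.3, -1.7])
x_rec = E_dual.T @ (E @ x)

# Iterative approximation (2.11): e~^N = e~^{N-1} + 2/(B1+B2) (e_k - S e~^{N-1}).
gamma = 2.0 / (B1 + B2)
E_N = gamma * E
for _ in range(25):
    E_N = E_N + gamma * (E - E_N @ S)        # S symmetric, applied row-wise
```

Since here B_2/B_1 = 2, the contraction factor in (2.10) is 1/3 and a handful of iterations already reproduce the dual frame to machine precision.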

Frame Decomposition
In this section, we derive our main results on the frame decomposition of bounded linear operators A : X → Y, where X and Y are real or complex Hilbert spaces of the general form (1.1) for some M, N ∈ N, i.e.,
$$ Ax = (A_n x)_{n=1}^{N} , \qquad (3.1) $$
where the operators A_n : X → Y_n simply denote the components of the operator A.
On the spaces X and Y, we consider the canonical inner products
$$ \langle x, w \rangle_X = \sum_{m=1}^{M} \langle x_m, w_m \rangle_{X_m} , \qquad \langle y, z \rangle_Y = \sum_{n=1}^{N} \langle y_n, z_n \rangle_{Y_n} , $$
where ⟨·,·⟩_{X_m} and ⟨·,·⟩_{Y_n} denote the inner products of X_m and Y_n, respectively. For each of the spaces X_m and Y_n, we consider frames {e_k^m}_{k∈N} and {f_k^n}_{k∈N} with frame bounds B_1, B_2 and C_1, C_2, respectively, i.e., for all x_m ∈ X_m there holds
$$ B_1 \|x_m\|_{X_m}^2 \le \sum_{k \in \mathbb{N}} |\langle x_m, e_k^m \rangle_{X_m}|^2 \le B_2 \|x_m\|_{X_m}^2 , \qquad (3.2) $$
and for all y_n ∈ Y_n there holds
$$ C_1 \|y_n\|_{Y_n}^2 \le \sum_{k \in \mathbb{N}} |\langle y_n, f_k^n \rangle_{Y_n}|^2 \le C_2 \|y_n\|_{Y_n}^2 . \qquad (3.3) $$
For our analysis, these frames have to be suitably connected to the operator A, which leads us to

Assumption 3.1. There exists a sequence {Λ_k}_{k∈N} of complex N × M matrices Λ_k = (λ_k^{n,m}) such that
$$ \langle A_n x, f_k^n \rangle_{Y_n} = \sum_{m=1}^{M} \lambda_k^{n,m} \, \langle x_m, e_k^m \rangle_{X_m} , \qquad \forall \, x \in X , \; k \in \mathbb{N} , \; n = 1, \dots, N , \qquad (3.4) $$
where as in (3.1) the operators A_n : X → Y_n denote the components of the operator A.
The implications of this assumption and the question of how to choose suitable frames {e_k^m}_{k∈N} and {f_k^n}_{k∈N} are discussed in detail in Section 4 below. Assumption 3.1 immediately leads to

Lemma 3.1. Let A be as in (3.1) and let Assumption 3.1 hold. Then for all x ∈ X and k ∈ N there holds
$$ \big( \langle A_n x, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} = \Lambda_k \cdot \big( \langle x_m, e_k^m \rangle_{X_m} \big)_{m=1}^{M} . \qquad (3.5) $$

Next, we want to derive a decomposition of A in terms of the functions e_k^m and f_k^n. For this, we need to consider the dual frames {ẽ_k^m}_{k∈N} and {f̃_k^n}_{k∈N} of the frames {e_k^m}_{k∈N} and {f_k^n}_{k∈N}, respectively. It follows from Section 2 together with (3.2) and (3.3) that these dual frames are again frames, but now with frame bounds 1/B_2, 1/B_1 and 1/C_2, 1/C_1, respectively. In particular, this means that any functions x_m ∈ X_m and y_n ∈ Y_n can be written in the form
$$ x_m = \sum_{k \in \mathbb{N}} \langle x_m, e_k^m \rangle_{X_m} \, \tilde{e}_k^m , \qquad y_n = \sum_{k \in \mathbb{N}} \langle y_n, f_k^n \rangle_{Y_n} \, \tilde{f}_k^n , \qquad (3.6) $$
respectively, and that these decompositions are the most economical ones in the sense of Proposition 2.1. Using this, we now obtain the following decomposition result:

Lemma 3.2. Let A be as in (3.1) and let Assumption 3.1 hold. Then for all x ∈ X there holds
$$ Ax = \left( \sum_{k \in \mathbb{N}} \sum_{m=1}^{M} \lambda_k^{n,m} \, \langle x_m, e_k^m \rangle_{X_m} \, \tilde{f}_k^n \right)_{n=1}^{N} . \qquad (3.7) $$
Proof. Since {f_k^n}_{k∈N} forms a frame over Y_n, it follows from (3.6) that for all x ∈ X,
$$ A_n x = \sum_{k \in \mathbb{N}} \langle A_n x, f_k^n \rangle_{Y_n} \, \tilde{f}_k^n . $$
Together with the definition of A and Lemma 3.1, this implies that the coefficients ⟨A_n x, f_k^n⟩_{Y_n} equal Σ_{m=1}^M λ_k^{n,m} ⟨x_m, e_k^m⟩_{X_m}, which yields the assertion.
Using the decomposition of A derived above, we now turn our attention to the task of obtaining a suitable solution of the operator equation (1.2). For this, we first need to derive the following auxiliary result:

Lemma 3.3. Let A be as in (3.1) and let Assumption 3.1 hold. Then for all functions x = (x_m)_{m=1}^M ∈ X and y = (y_n)_{n=1}^N ∈ Y there holds
$$ C_1 \|Ax - y\|_Y^2 \le \sum_{k \in \mathbb{N}} \sum_{n=1}^{N} \Big| \sum_{m=1}^{M} \lambda_k^{n,m} \langle x_m, e_k^m \rangle_{X_m} - \langle y_n, f_k^n \rangle_{Y_n} \Big|^2 \le C_2 \|Ax - y\|_Y^2 , \qquad (3.8) $$
where C_1 and C_2 denote the frame bounds of {f_k^n}_{k∈N}.
Proof. Let x = (x_m)_{m=1}^M ∈ X and y = (y_n)_{n=1}^N ∈ Y be arbitrary but fixed. Since the sets {f_k^n}_{k∈N} form frames over Y_n with frame bounds C_1, C_2, it follows from (3.3) that
$$ C_1 \|A_n x - y_n\|_{Y_n}^2 \le \sum_{k \in \mathbb{N}} |\langle A_n x - y_n, f_k^n \rangle_{Y_n}|^2 \le C_2 \|A_n x - y_n\|_{Y_n}^2 . $$
Together with Lemma 3.1 and summation over n, this yields the assertion.
The above result has many useful consequences, such as the following

Corollary 3.4. Let A be as in (3.1) and let Assumption 3.1 hold. Then for any function x = (x_m)_{m=1}^M ∈ X there holds
$$ C_1 \|Ax\|_Y^2 \le \sum_{k \in \mathbb{N}} \big\| \Lambda_k \cdot \big( \langle x_m, e_k^m \rangle_{X_m} \big)_{m=1}^{M} \big\|^2 \le C_2 \|Ax\|_Y^2 . \qquad (3.9) $$
Furthermore, for any y = (y_n)_{n=1}^N ∈ Y there holds Ax = y if and only if
$$ \Lambda_k \cdot \big( \langle x_m, e_k^m \rangle_{X_m} \big)_{m=1}^{M} = \big( \langle y_n, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} , \qquad \forall \, k \in \mathbb{N} . \qquad (3.10) $$
The above corollary, and in particular equation (3.10), point to a strong connection between the structure of the equation Ax = y and the properties of the matrices Λ_k, which we now analyse in more detail. We first consider the singular-value decomposition of each of the matrices Λ_k, i.e., the vectors v_{k,j} ∈ C^M, u_{k,j} ∈ C^N, and values µ_{k,j} ∈ R^+, for j = 1, ..., r_k, with r_k ≤ min{M, N} denoting the rank of Λ_k, such that
$$ \Lambda_k \cdot v_{k,j} = \mu_{k,j} u_{k,j} , \quad \Lambda_k^H \cdot u_{k,j} = \mu_{k,j} v_{k,j} , \quad v_{k,j}^H \cdot v_{k,l} = \delta_{jl} , \quad u_{k,j}^H \cdot u_{k,l} = \delta_{jl} , \quad \mu_{k,1} \ge \dots \ge \mu_{k,r_k} > 0 . \qquad (3.11) $$
Here, the superscript H denotes the Hermitian, i.e., the complex-conjugate transpose, of a vector. We collect the singular values and vectors into the singular systems (µ_{k,j}, u_{k,j}, v_{k,j})_{j=1}^{r_k}. Note that the singular vectors v_{k,j} are eigenvectors of the matrices Λ_k^H · Λ_k and form bases of N(Λ_k)^⊥ ⊆ C^M, and similarly, the singular vectors u_{k,j} are eigenvectors of the matrices Λ_k · Λ_k^H and form bases of R(Λ_k) ⊆ C^N. Furthermore, for any w ∈ C^N the unique minimum-norm minimizer of the functional z ↦ ‖Λ_k · z − w‖ is given by the pseudo-inverse,
$$ \Lambda_k^\dagger \cdot w = \sum_{j=1}^{r_k} \mu_{k,j}^{-1} \big( u_{k,j}^H \cdot w \big) \, v_{k,j} . \qquad (3.12) $$
Using the singular systems and the pseudo-inverses of the matrices Λ_k, we now make

Definition 3.1. Let A be as in (3.1) and let Assumption 3.1 hold. Furthermore, let (µ_{k,j}, u_{k,j}, v_{k,j})_{j=1}^{r_k} be the singular systems of the matrices Λ_k as defined in (3.11). Then for y = (y_n)_{n=1}^N ∈ Y we define
$$ Ay := \left( \tilde{F}_m^* \Big\{ \sum_{j=1}^{r_k} \mu_{k,j}^{-1} \Big( u_{k,j}^H \cdot \big( \langle y_n, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} \Big) (v_{k,j})_m \Big\}_{k \in \mathbb{N}} \right)_{m=1}^{M} , \qquad (3.13) $$
where F̃_m denotes the frame operator corresponding to the dual frame {ẽ_k^m}_{k∈N}.
Note that due to (2.2) and (3.12), the above definition of Ay is equivalent to
$$ Ay = \left( \sum_{k \in \mathbb{N}} \Big( \Lambda_k^\dagger \cdot \big( \langle y_n, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} \Big)_m \, \tilde{e}_k^m \right)_{m=1}^{M} . \qquad (3.14) $$
Concerning the well-definedness of the operator A, we have the following

Lemma 3.5. Let y = (y_n)_{n=1}^N ∈ Y and let Ay be defined as in (3.13). Then Ay is a well-defined element of X, i.e., ‖Ay‖_X < ∞, if the following Picard condition holds:
$$ \sum_{k \in \mathbb{N}} \sum_{j=1}^{r_k} \mu_{k,j}^{-2} \, \Big| u_{k,j}^H \cdot \big( \langle y_n, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} \Big|^2 < \infty . \qquad (3.15) $$
Proof. Since the sets {e_k^m}_{k∈N} form frames over X_m with frame bounds B_1, B_2, the dual frames {ẽ_k^m}_{k∈N} form frames over X_m with frame bounds 1/B_2, 1/B_1, and thus with (2.2) there follows
$$ \|Ay\|_X^2 \le \frac{1}{B_1} \sum_{m=1}^{M} \sum_{k \in \mathbb{N}} \Big| \sum_{j=1}^{r_k} \mu_{k,j}^{-1} \big( u_{k,j}^H \cdot w_k \big) (v_{k,j})_m \Big|^2 , \qquad (3.17) $$
where we abbreviated w_k := (⟨y_n, f_k^n⟩_{Y_n})_{n=1}^N. Furthermore, due to the orthonormality of the singular vectors u_{k,j} and v_{k,j} there holds
$$ \sum_{m=1}^{M} \Big| \sum_{j=1}^{r_k} \mu_{k,j}^{-1} \big( u_{k,j}^H \cdot w_k \big) (v_{k,j})_m \Big|^2 = \sum_{j=1}^{r_k} \mu_{k,j}^{-2} \, \big| u_{k,j}^H \cdot w_k \big|^2 . \qquad (3.20) $$
Hence, together with (3.17) we obtain that ‖Ay‖_X < ∞, which yields the assertion.
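The coefficient-wise structure of Definition 3.1 can be mimicked numerically: for each index k one solves a small system with the SVD-based pseudo-inverse Λ_k†, which selects the minimum-norm solution. The random matrices and coefficients below are illustrative stand-ins, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 3, 4, 50                          # coefficient dimensions and number of indices k

# Random stand-ins for the matrices Lambda_k of Assumption 3.1.
Lambdas = [rng.standard_normal((N, M)) for _ in range(K)]

# Frame coefficients <x_m, e_k^m> of some x, and the induced data
# coefficients <y_n, f_k^n> = Lambda_k . (coefficients of x), cf. (3.4).
x_coeff = rng.standard_normal((K, M))
w = [L @ c for L, c in zip(Lambdas, x_coeff)]

# Coefficients of Ay: per index k, the minimum-norm least-squares solution
# Lambda_k^dagger . w_k, computed from the SVD of Lambda_k, cf. (3.12)-(3.14).
rec = np.stack([np.linalg.pinv(L) @ wk for L, wk in zip(Lambdas, w)])
```

Since each random Λ_k here has full column rank, Λ_k† Λ_k is the identity and the original coefficients are recovered exactly; for rank-deficient Λ_k one would only recover their projection onto N(Λ_k)^⊥, as in Remark 3.1.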
We are now able to derive the first main result of this paper:

Theorem 3.6. Let A be as in (3.1), let Assumption 3.1 hold, and let y ∈ R(A). Furthermore, assume that N(Λ_k) = {0} for all k ∈ N, and let Ay be defined as in (3.13). Then Ay is a well-defined element of X and the unique solution of (1.2). Additionally, among all possible decompositions of Ay in terms of the dual frame functions ẽ_k^m, the decomposition (3.13) is the most economical one in the sense of Proposition 2.1.
Proof. Let y ∈ R(A) be arbitrary but fixed. Since y ∈ R(A) there exists a function x̄ ∈ X such that Ax̄ = y. Hence, due to (3.10) it follows that for all k ∈ N, the expansion coefficients ⟨x̄_m, e_k^m⟩_{X_m} of x̄ are solutions of the matrix-vector systems
$$ \Lambda_k \cdot \big( \langle \bar{x}_m, e_k^m \rangle_{X_m} \big)_{m=1}^{M} = \big( \langle y_n, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} , $$
and thus, via the reconstruction formula (3.6) we obtain
$$ \bar{x} = \left( \sum_{k \in \mathbb{N}} \langle \bar{x}_m, e_k^m \rangle_{X_m} \, \tilde{e}_k^m \right)_{m=1}^{M} . \qquad (3.22) $$
Comparing (3.22) with (3.13) and (3.14), and noting that N(Λ_k) = {0} implies Λ_k† · Λ_k = I, we conclude that x̄ = Ay, and thus Ay is both a well-defined element of X and a solution of (1.2). Furthermore, it follows from Proposition 2.1 that (3.13) is the most economical decomposition of x̄ = Ay in terms of the dual frame functions ẽ_k^m. Finally, since we assumed that N(Λ_k) = {0}, it follows from (3.9) that N(A) = {0} and thus that the operator A is injective. Hence, Ay is the unique solution of (1.2), which concludes the proof.
Remark 3.1. In Theorem 3.6 we assumed that N(Λ_k) = {0}, which implied the injectivity of the operator A and thus the uniqueness of a solution of (1.2). On the other hand, if N(Λ_k) ≠ {0}, then due to (3.10), for any solution x = (x_m)_{m=1}^M ∈ X of (1.2) the coefficient vectors (⟨x_m, e_k^m⟩_{X_m})_{m=1}^M are determined only up to elements of N(Λ_k). Since for y ∈ R(A) the minimum-norm solution x† is itself a solution of (1.2), it follows from (3.23) and the definition (3.13) of Ay that the difference between x† and Ay is built entirely from the projections of the coefficient vectors (⟨x_m†, e_k^m⟩_{X_m})_{m=1}^M onto the nullspaces N(Λ_k). Hence, we see that the frame coefficients of x† and the nullspaces N(Λ_k) directly determine the distance of Ay to the minimum-norm solution x†.
Next, we proceed to derive the second main result of this paper:

Theorem 3.7. Let A be as in (3.1), let Assumption 3.1 hold, and let y = (y_n)_{n=1}^N ∈ Y. Furthermore, let the frames {f_k^n}_{k∈N} be tight and assume that (3.24) holds, where F_m denotes the frame operator corresponding to the frame {e_k^m}_{k∈N}. Then Ay as given in (3.13) is a well-defined element of X and a minimum-coefficient least-squares solution of equation (1.2), i.e., it is a least-squares solution of (1.2) satisfying
$$ \sum_{m=1}^{M} \Big| \Big( \Lambda_k^\dagger \cdot \big( \langle y_n, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} \Big)_m \Big|^2 \le \sum_{m=1}^{M} |\langle x_m^*, e_k^m \rangle_{X_m}|^2 \qquad (3.25) $$
for any least-squares solution x* = (x*_m)_{m=1}^M ∈ X of (1.2) and any k ∈ N. Furthermore,
$$ \|Ay\|_X \le \sqrt{B_2 / B_1} \; \|x^\dagger\|_X , \qquad (3.26) $$
where B_1 and B_2 are the frame bounds of {e_k^m}_{k∈N}. Hence, if also the frames {e_k^m}_{k∈N} are tight, then Ay coincides with the minimum-norm least-squares solution x†.
On the other hand, if additionally there holds
$$ \big( \langle y_n, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} \in R(\Lambda_k) , \qquad \forall \, k \in \mathbb{N} , \qquad (3.27) $$
then Ay is also a solution of (1.2). Finally, among all possible decompositions of Ay in terms of the dual frame functions ẽ_k^m, the decomposition (3.13) is the most economical one in the sense of Proposition 2.1.
Proof. Let y = (y_n)_{n=1}^N ∈ Y be arbitrary but fixed and let Ay be given as in (3.13). First of all, note that due to (3.17), and since we assumed that (3.24) holds with R(F_m) ⊆ ℓ²(N), it follows that Ay is a well-defined element of X. Now, by assumption the sets {f_k^n}_{k∈N} form tight frames over the spaces Y_n, i.e., C_1 = C_2, and thus Lemma 3.3 implies
$$ \|Ax - y\|_Y^2 = \frac{1}{C_1} \sum_{k \in \mathbb{N}} \sum_{n=1}^{N} \Big| \sum_{m=1}^{M} \lambda_k^{n,m} \langle x_m, e_k^m \rangle_{X_m} - \langle y_n, f_k^n \rangle_{Y_n} \Big|^2 . \qquad (3.29) $$
Hence, if x = (x_m)_{m=1}^M ∈ X can be found such that for each k ∈ N its expansion coefficients ⟨x_m, e_k^m⟩_{X_m} minimize the expressions
$$ \big\| \Lambda_k \cdot \big( \langle x_m, e_k^m \rangle_{X_m} \big)_{m=1}^{M} - \big( \langle y_n, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} \big\|^2 , \qquad (3.30) $$
then x is a minimizer of ‖Ax − y‖_Y and thus a least-squares solution of (1.2). Due to (3.28) and the properties of the pseudo-inverse Λ_k† of Λ_k, this is exactly satisfied for the choice x = Ay. Hence, it follows that Ay is a least-squares solution of (1.2). Furthermore, it follows that any other least-squares solution x* of (1.2) also has to minimize each of the expressions (3.30), and thus its coefficients are of the form
$$ \big( \langle x_m^*, e_k^m \rangle_{X_m} \big)_{m=1}^{M} = \Lambda_k^\dagger \cdot \big( \langle y_n, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} + \bar{z}_k $$
for some vectors z̄_k ∈ N(Λ_k). In particular, the two summands on the right-hand side are orthogonal, which follows from the properties of the pseudo-inverse. Using this, we get (3.25). If in addition (3.27) is satisfied, then due to (3.28) and the properties of the pseudo-inverse the expressions (3.30) vanish for x = Ay, and thus it follows by choosing x = Ay in (3.29) that Ay is also a solution of (1.2). Next, note that since the sets {e_k^m}_{k∈N} form frames over X_m with frame bounds B_1, B_2, the norm of any x ∈ X is controlled from both sides by its frame coefficients. This holds in particular for the minimum-norm least-squares solution x†, and thus, since we saw above that Ay is also a least-squares solution, it follows together with (3.25) that (3.26) holds. Hence, if the frames {e_k^m}_{k∈N} are tight, i.e., if B_1 = B_2 = B for some B > 0, then it follows from the uniqueness of the minimum-norm solution that Ay coincides with x†.
Finally, note that due to (3.28) and Proposition 2.1 it follows that the decomposition of Ay given in (3.13) is the most economical one in terms of the dual frame functions ẽ_k^m, which concludes the proof.

A useful consequence of the above theorem is the following

Corollary 3.8. Let A be as in (3.1), let Assumption 3.1 hold, let the frames {f_k^n}_{k∈N} be tight and the frames {e_k^m}_{k∈N} be exact, and let y ∈ Y satisfy the Picard condition (3.15). Then the assertions of Theorem 3.7 hold.

Proof. For each m ∈ {1, ..., M}, let a_m denote the coefficient sequence appearing in (3.13). As in the proof of Lemma 3.5, the Picard condition (3.15) implies that a_m ∈ ℓ²(N) for each m ∈ {1, ..., M}. Furthermore, since we assumed that the frames {e_k^m}_{k∈N} are exact, it follows from (2.7) that N(F_m*) = {0} and thus that R(F_m) = ℓ²(N). Hence, we have that a_m ∈ R(F_m) for each m ∈ {1, ..., M}, which shows that (3.24) holds. As a result, Theorem 3.7 is applicable, which yields the assertions and thus concludes the proof.

Remark 3.2. In Theorem 3.7 and Corollary 3.8 we assumed that the frames {f_k^n}_{k∈N} are tight to deduce that Ay is a least-squares solution of (1.2). However, even if that is not the case, it follows from the fact that the expansion coefficients (3.28) of Ay are minimizers of the functionals (3.30) that for any x = (x_m)_{m=1}^M ∈ X there holds
$$ \|A(Ay) - y\|_Y^2 \le \frac{C_2}{C_1} \, \|Ax - y\|_Y^2 . $$
In particular, for any least-squares solution x* of (1.2) there holds
$$ \|A(Ay) - y\|_Y^2 \le \frac{C_2}{C_1} \, \|Ax^* - y\|_Y^2 , $$
which implies that even in the case that the frames {f_k^n}_{k∈N} are not tight, the function Ay is at most a factor C_2/C_1 away from being a least-squares solution of (1.2). Furthermore, this shows that if (1.2) is solvable then Ay is a solution of (1.2), given that either (3.27) holds or that the frames {e_k^m}_{k∈N} are exact.
Remark 3.3. The above results can be generalized by allowing a more general linear relationship between the coefficients ⟨A_n x, f_k^n⟩_{Y_n} and ⟨x_m, e_j^m⟩_{X_m} than the one in (3.4). In particular, it is possible to allow a linear relationship between coefficients corresponding to multiple different values of j and k, as long as there is no "overlap". More precisely, one can assume that there exist (N · K(X, k)) × (M · K(Y, k)) matrices Λ̃_k such that (3.33) holds and each coefficient ⟨x_m, e_j^m⟩_{X_m} appears in only one linear relationship, i.e., only for one value of k. Condition (3.4) then corresponds to the special case that the matrices Λ̃_k are block-diagonal, and thus that (3.33) decouples. With this, analogues of the above results can still be proven, which we later use in Section 5.2.
Remark 3.4. Frame decompositions can be used to define stable approximations x_α^δ of Ay in the presence of noisy data y^δ = (y_n^δ)_{n=1}^N ∈ Y. Considering for example the definition of Ay given in (3.13), in analogy to (1.7) one can define the approximation
$$ x_\alpha^\delta := \left( \sum_{k \in \mathbb{N}} \sum_{j=1}^{r_k} g_\alpha(\mu_{k,j}) \Big( u_{k,j}^H \cdot \big( \langle y_n^\delta, f_k^n \rangle_{Y_n} \big)_{n=1}^{N} \Big) (v_{k,j})_m \, \tilde{e}_k^m \right)_{m=1}^{M} , \qquad (3.34) $$
where g_α : R → R is a suitable approximation of the function s ↦ 1/s, such as
$$ g_\alpha(s) := \frac{1}{s + \alpha} , \qquad g_{\alpha,n}(s) := \frac{(s+\alpha)^n - \alpha^n}{s \, (s+\alpha)^n} , \qquad \text{or} \qquad g_\alpha(s) := \begin{cases} 1/s , & s \ge \alpha , \\ 0 , & s < \alpha . \end{cases} $$
In the SVD case (1.7), these choices correspond to Tikhonov and iterated Tikhonov regularization, as well as the truncated SVD/spectral cut-off method, respectively [12]. Many iterative regularization methods such as Landweber iteration or the Brakhage ν-methods also have a characterization in terms of such a spectral filter function g_α. From (3.34) it follows, as in the proof of Lemma 3.5, that the error ‖x_α^δ − x_α‖_X can be bounded in terms of the coefficients ⟨y_n − y_n^δ, f_k^n⟩_{Y_n}, which together with the Cauchy-Schwarz inequality and (3.11) leads to a bound involving the sums Σ_{k∈N} |⟨y_n − y_n^δ, f_k^n⟩_{Y_n}|². Now, since the set {f_k^n}_{k∈N} forms a frame over Y_n, it follows from (3.3) that these sums are bounded by C_2 ‖y_n − y_n^δ‖_{Y_n}². Hence, if the function g_α is chosen such that g_α(µ_{k,j}) remains bounded for k → ∞, then the approximations x_α^δ depend continuously on the data y^δ. In fact, we could retrace all steps of the standard convergence analysis of regularization methods for linear inverse problems [12] to obtain convergence of x_α^δ to Ay under standard assumptions on g_α, thereby extending classic results from the SVD to the frame decomposition.
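The filter functions from Remark 3.4 are straightforward to state in code, and the boundedness property used in the stability argument above (g_α stays of order 1/α as s → 0, while approximating 1/s for s well above α) can be checked directly. This sketch is generic and not tied to any particular operator:

```python
import numpy as np

def g_tikhonov(s, alpha):
    """Tikhonov filter g_alpha(s) = 1/(s + alpha)."""
    return 1.0 / (s + alpha)

def g_iter_tikhonov(s, alpha, n):
    """Iterated Tikhonov filter g_{alpha,n}(s) = ((s+alpha)^n - alpha^n) / (s (s+alpha)^n)."""
    return ((s + alpha) ** n - alpha ** n) / (s * (s + alpha) ** n)

def g_cutoff(s, alpha):
    """Spectral cut-off: 1/s for s >= alpha, zero otherwise."""
    return np.where(s >= alpha, 1.0 / np.maximum(s, alpha), 0.0)

alpha = 1e-2
s = np.linspace(1e-9, 1.0, 100001)
# All three filters stay O(1/alpha) as s -> 0, whereas 1/s itself blows up;
# this boundedness is what yields continuous dependence of x_alpha^delta
# on the data y^delta.
```

For s ≫ α all three filters are close to 1/s, so the regularized coefficients approach the unregularized ones for well-resolved components.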
Remark 3.5. For the numerical computation of the functions Ay one needs to be able to evaluate the dual frame functions ẽ_k^m. In most cases, these functions cannot be computed analytically, but one may approximate them using the iterative approximation formula (2.11). Due to (2.10), only a small number of iterations is necessary if the frame bounds B_1 and B_2 are close to each other. Note that the functions ẽ_k^m are independent of the actual right-hand side y of equation (1.2) and can thus be computed in advance. This is particularly useful if the problem has to be solved multiple times, since then the computed approximations of the ẽ_k^m can be stored and re-used. Additionally, it is possible that in some situations a frame decomposition might behave better numerically than the SVD.

Applicability to specific operator classes
We now turn our attention to Assumption 3.1, i.e., to the assumption that there exists a sequence {Λ_k}_{k∈N} of complex-valued matrices Λ_k = (λ_k^{m,n})_{m,n=1}^{M,N} such that (3.4) holds, and in particular to the task of finding frames which satisfy this. First, note that Assumption 3.1 can sometimes be satisfied by choosing frames which are suitably adapted to the "geometric" structure of the considered problem. As we shall see in some examples in Section 5, operators composed mainly of shifting and scaling operations can often be decomposed using frames derived from exponential bases. These can also sometimes be used to extend existing decompositions over regular domains to irregular domains. The same is also true for differential operators. In addition, wavelet frames are often suitable, in particular since one can choose from a wide variety of available wavelets with properties such as regularity, vanishing moments, or compact support. This provides a link to the Wavelet-Vaguelette Decomposition mentioned above.
Next, we shall see that Assumption 3.1 can also be satisfied if one of the following three situations occurs: 1. The singular-value decomposition of A : X → Y is known.
2. The operator A : X → Y satisfies a stability property of the form
$$ c_1 \|x\|_X \le \|Ax\|_Z \le c_2 \|x\|_X , \qquad \forall \, x \in X , \qquad (4.1) $$
with constants c_1, c_2 > 0 and some Hilbert space Z being a (dense) subspace of Y.
3. The operator A : X → Y satisfies the stability property (4.1), and the frame functions f_k are elements of Z (cf. Section 4.3).
We shall now discuss each of these situations in turn, devoting particular attention to the second (and third) situation in Sections 4.2 and 4.3 below. Even though it allowed us to derive more general results in the previous section, for the subsequent considerations we do not need to make explicit use of the general structure (1.1) of the spaces X and Y. Thus, we now consider the special case of (1.1) with M = N = 1. In order to keep the notation simple, in what follows we drop all sub- and superscripts related to M and N. For example, instead of writing e_k^1 and f_k^1 for the frame functions, we now simply write e_k and f_k, and so on. With this, condition (3.4) in Assumption 3.1 reads
$$ \langle Ax, f_k \rangle_Y = \lambda_k \, \langle x, e_k \rangle_X , \qquad \forall \, x \in X , \; k \in \mathbb{N} , \qquad (4.2) $$
where {λ_k}_{k∈N} now is a sequence of complex numbers instead of matrices. Furthermore, the expressions (3.7) and (3.13) for Ax and Ay now respectively read
$$ Ax = \sum_{k \in \mathbb{N}} \lambda_k \, \langle x, e_k \rangle_X \, \tilde{f}_k , \qquad Ay = \sum_{k \in \mathbb{N}} \lambda_k^{-1} \, \langle y, f_k \rangle_Y \, \tilde{e}_k , \qquad (4.3) $$
and the Picard condition (3.15) turns into
$$ \sum_{k \in \mathbb{N}} |\lambda_k|^{-2} \, |\langle y, f_k \rangle_Y|^2 < \infty . \qquad (4.4) $$
Note that if X and Y have a structure of the form (1.1), then condition (3.4) is a more general linear relationship between the operator and the frames than condition (4.2). This is important if none of the three situations introduced above applies, in which case a frame decomposition can still be obtained under condition (3.4) as in Section 3.
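As a minimal concrete instance of the scalar condition (4.2) (an illustrative assumption, not an example from the text), consider circular convolution on C^n: the discrete Fourier vectors serve as e_k = f_k, the λ_k are the DFT values of the kernel, and the scalar reconstruction formula recovers x from y = Ax:

```python
import numpy as np

n = 64
rng = np.random.default_rng(2)

# Circular convolution operator A x = h * x, diagonal in the Fourier basis.
h = np.exp(-np.arange(n) / 4.0)                 # convolution kernel (illustrative)
lam = np.fft.fft(h)                             # lambda_k for f_k = e_k = DFT vectors

x = rng.standard_normal(n)
y = np.real(np.fft.ifft(lam * np.fft.fft(x)))   # y = A x

# Scalar reconstruction x = sum_k lambda_k^{-1} <y, f_k> e_k, cf. (4.3):
x_rec = np.real(np.fft.ifft(np.fft.fft(y) / lam))
```

Here the λ_k are bounded away from zero, so the problem is well-posed and no regularization of the division by λ_k is needed; for kernels with vanishing or rapidly decaying DFT values one would apply a filter g_α as in Remark 3.4.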

The SVD as a Frame Decomposition
Comparing (1.3) and (1.4) with (4.2) and (4.3) suggests the following

Lemma 4.1. Let A : X → Y be a compact linear operator with singular system (σ_k, v_k, u_k)_{k∈N}. Then the sets {v_k}_{k∈N}, complemented by an orthonormal basis of N(A), and {u_k}_{k∈N}, complemented by an orthonormal basis of R(A)^⊥, form tight frames with frame bound 1 over the spaces X and Y, respectively. Together with {λ_k}_{k∈N} := {σ_k}_{k∈N} ∪ {0}_{k∈N}, they also satisfy condition (4.2).
Proof. This is a direct consequence of the properties of the SVD.
Due to the above result, it follows that it is possible to find a frame decomposition for any bounded (compact) linear operator A, at least in theory. Of course this result is not very practical, since it again involves the SVD, which we initially set out to avoid. However, since by Lemma 4.1 the frames {e_k}_{k∈N} and {f_k}_{k∈N} are tight with frame bound 1, and thus ẽ_k = e_k and f̃_k = f_k, it follows that the results on the frame decomposition derived above are a direct generalization of the classic results on the SVD (compare for example with Corollary 3.8).

Stability Property - Part I
Let us now turn our attention to the second situation, i.e., to the case that A satisfies a stability property of the form (4.1). For this, we start by considering condition (4.2), which is clearly equivalent to
$$ A^* f_k = \bar{\lambda}_k \, e_k , \qquad \forall \, k \in \mathbb{N} , \qquad (4.6) $$
with λ̄_k denoting the complex conjugate of λ_k, and is reminiscent of the connection (1.4) between the singular values and functions. This suggests the following strategy for finding frames that fulfil (4.6): starting with some frame {f_k}_{k∈N}, one simply defines
$$ e_k := \bar{\lambda}_k^{-1} A^* f_k \qquad (4.7) $$
for some sequence of coefficients {λ_k}_{k∈N}. If one can choose the λ_k in such a way that the resulting set {e_k}_{k∈N} forms a frame over X, then all assumptions in Theorem 3.6 are satisfied and thus the results on the frame decomposition derived above hold. As we are going to see now, this is possible if the operator A has a specific stability property, which we now introduce in the following

Assumption 4.1. The operator A : X → Y satisfies the stability property
$$ c_1 \|x\|_X \le \|Ax\|_Z \le c_2 \|x\|_X , \qquad \forall \, x \in X , \qquad (4.8) $$
for some constants c_1, c_2 > 0, where Z ⊆ Y is a Hilbert space. Furthermore, there exists a sequence of coefficients 0 ≠ α_k ∈ R and some constants a_1, a_2 > 0 such that (4.9) holds, where as before the functions f_k are such that the set {f_k}_{k∈N} forms a frame over Y.
Condition (4.8) is satisfied for many operators of practical relevance, the most prominent example perhaps being the Radon transform (compare with Section 5 below). It implies that A is injective, and that as an operator from X to Z it is continuously invertible. However, in the presence of noise the right-hand side y in equation (1.2) typically only belongs to Y but not to Z. Condition (4.9) is for example satisfied if Y and Z are Sobolev spaces and {f k } k∈N is a suitably chosen wavelet or exponential frame/basis (again see Section 5 below). We now proceed to derive the following Proposition 4.2. Let A : X → Y be a bounded linear operator and let Assumption 4.1 hold. Furthermore, let the functions e k be defined by (4.7), where the parameters λ k ∈ C are chosen such that for some constants b 1 , b 2 > 0. Then the set {e k } k∈N forms a frame over X with frame bounds Proof. Let x ∈ X be arbitrary but fixed. Due to (4.8) and (4.9) there holds Furthermore, it follows from (4.10) that and thus together with (4.11) we obtain Now since by the definition of e k there holds it follows from (4.13) that which shows that {e k } k∈N forms a frame over X and thus yields the assertion.
Using the above proposition, we can now derive the third main result of this paper:

Theorem 4.3. Let A : X → Y be a bounded linear operator and let Assumption 4.1 hold. Furthermore, let the functions e_k be defined by (4.7), where the parameters λ_k ∈ C are chosen as in (4.10), and let y ∈ Z. Then Ay as given in (4.3) is a well-defined element of X and the unique solution of (1.2).

Proof. Due to Proposition 4.2, the set {e_k}_{k∈N} forms a frame over X. Furthermore, it follows from (4.10) that λ_k ≠ 0 for all k ∈ N. Moreover, since due to (4.8) there holds R(A) = Z, it follows that (1.2) is (uniquely) solvable for any y ∈ Z. Hence, all conditions of Theorem 3.6 are satisfied, which yields the assertions.

Remark 4.1. Due to (4.9) and (4.10), the Picard condition (4.4) holds if and only if ‖y‖_Z < ∞, i.e., if y ∈ Z. Hence, it also follows that Ay is well-defined for any y ∈ Z. This should be compared to the definition space of the Moore-Penrose inverse A†, for which there holds [12]
$$ D(A^\dagger) = R(A) \oplus R(A)^\perp . $$
Note first that from (4.8) it follows that R(A) = Z. Now since Z is typically a dense subspace of Y, it follows that Z^⊥ = {0}, and thus we get that D(A†) = Z ⊆ D(A). Hence, the definition space of A is at least as large as the definition space of A†.
A number of simplifications of the above theory are possible if the operator A is continuously invertible, because in that case condition (4.8) is satisfied with the choice Z = Y. Consequently, condition (4.9) is also satisfied for any frame {f_k}_{k∈N} over Y together with α_k = 1. Hence, we obtain the following

Theorem 4.4. Let A : X → Y be a bounded and continuously invertible linear operator and let {f_k}_{k∈N} form a frame over Y. Furthermore, let the functions e_k be defined by (4.7), where the parameters λ_k ∈ C are such that
$$ b_1 \le |\lambda_k| \le b_2 , \qquad \forall \, k \in \mathbb{N} , \qquad (4.15) $$
for some constants b_1, b_2 > 0. Then the assertions of Theorem 4.3 hold for any y ∈ Y.

Proof. Since A is a bounded and continuously invertible operator, it satisfies (4.8) for Z = Y. Furthermore, since {f_k}_{k∈N} forms a frame over Y, it follows that also (4.9) is satisfied for Z = Y and α_k = 1. Moreover, due to (4.15) also condition (4.10) holds. Hence, Theorem 4.3 is applicable, which yields the assertion.
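For a continuously invertible matrix, the construction behind Theorem 4.4 can be carried out end to end: pick a frame {f_k} of Y, normalize e_k proportional to A* f_k so that ⟨Ax, f_k⟩ = λ_k ⟨x, e_k⟩, and reconstruct via x = Σ_k λ_k^{-1} ⟨y, f_k⟩ ẽ_k. The matrix, the frame, and the normalization λ_k = ‖A* f_k‖ below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5
A = np.eye(d) + 0.3 * rng.standard_normal((d, d))   # invertible on this draw

# A redundant frame {f_k} of R^d: standard basis plus random extra vectors.
F = np.vstack([np.eye(d), rng.standard_normal((3, d))])

# e_k := A^T f_k / lambda_k with lambda_k = ||A^T f_k||, so that
# <A x, f_k> = lambda_k <x, e_k> holds, cf. (4.2) and (4.7).
AtF = F @ A                                          # rows are A^T f_k
lam = np.linalg.norm(AtF, axis=1)
E = AtF / lam[:, None]

# Dual frame e~_k = S^{-1} e_k of {e_k}.
S = E.T @ E
E_dual = E @ np.linalg.inv(S)

# Reconstruction x = sum_k lambda_k^{-1} <y, f_k> e~_k for y = A x.
x = rng.standard_normal(d)
y = A @ x
x_rec = E_dual.T @ ((F @ y) / lam)
```

Normalizing by ‖A^T f_k‖ keeps the |λ_k| bounded away from zero and infinity, which is the finite-dimensional analogue of condition (4.15).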

Stability Property - Part II
Next, we turn our attention to a slightly different way of deriving a frame decomposition for operators satisfying the stability property (4.8). This approach can be used even if no frame {f_k}_{k∈N} satisfying (4.9) is known or (numerically) feasible. All one needs is that the functions f_k are elements of the subspace Z, albeit at the cost of a (numerically) more involved determination of a suitable frame {e_k}_{k∈N}. This leads us to the following

Assumption 4.2. The operator A : X → Y satisfies condition (4.8), i.e.,
$$ c_1 \|x\|_X \le \|Ax\|_Z \le c_2 \|x\|_X , \qquad \forall \, x \in X , $$
for some constants c_1, c_2 > 0, where the Hilbert space Z ⊆ Y is a dense subspace of Y for which (4.16) holds. Furthermore, the functions f_k are such that the set {f_k}_{k∈N} forms a frame over Y with frame bounds C_1, C_2 > 0. Additionally, these f_k are elements of Z, i.e., ‖f_k‖_Z < ∞.
It is known (see e.g. [12]) that if (4.16) holds, then there exists a densely defined, unbounded, selfadjoint, strictly positive operator L with D(L) = Z such that

‖Ly‖_Y = ‖y‖_Z for all y ∈ Z . (4.17)

This operator L is uniquely determined by L = (EE^*)^{-1/2}, where E : Z → Y denotes the embedding operator. With this, we can proceed to derive the following

Lemma 4.5. Let A : X → Y be a bounded linear operator satisfying Assumption 4.2. Then the functions

e_k := A^* L f_k , (4.18)

form a frame over X with frame bounds B_1 = c_1² C_1 and B_2 = c_2² C_2, where C_1 and C_2 are the frame bounds of {f_k}_{k∈ℕ}, and c_1 and c_2 are as in Assumption 4.2.
Proof. First of all, due to Assumption 4.2 there holds f_k ∈ Z = D(L), and thus the functions e_k are well-defined. Let now x ∈ X be arbitrary but fixed and note that due to (4.8) there holds Ax ∈ Z = D(L). First, since L is selfadjoint we get

⟨x, e_k⟩_X = ⟨x, A^* L f_k⟩_X = ⟨LAx, f_k⟩_Y . (4.19)

Furthermore, since the set {f_k}_{k∈ℕ} forms a frame over Y with frame bounds C_1, C_2, it follows with (4.17) that

C_1 ‖Ax‖_Z² ≤ Σ_{k∈ℕ} |⟨LAx, f_k⟩_Y|² ≤ C_2 ‖Ax‖_Z² ,

which combined with (4.19) yields

C_1 ‖Ax‖_Z² ≤ Σ_{k∈ℕ} |⟨x, e_k⟩_X|² ≤ C_2 ‖Ax‖_Z² .

Hence, together with (4.8) we obtain

c_1² C_1 ‖x‖_X² ≤ Σ_{k∈ℕ} |⟨x, e_k⟩_X|² ≤ c_2² C_2 ‖x‖_X² ,

which yields the assertion.
Instead of the original problem (1.2), we now consider the "preconditioned" equation

LAx = Ly . (4.21)

If condition (4.8) holds, then the concatenated operator LA is bounded, linear, and continuously invertible from X to Y. Hence, both problems (4.21) and (1.2) are uniquely solvable if and only if Ly ∈ Y, which due to (4.17) is equivalent to y ∈ Z. For this case, we want to derive an expression of the solution in terms of the frames {e_k}_{k∈ℕ} and {f_k}_{k∈ℕ}. We start by giving a frame decomposition of the operator LA in the following

Lemma 4.6. Let A : X → Y be a bounded linear operator satisfying Assumption 4.2 and let the functions e_k be defined by (4.18). Then there holds

LAx = Σ_{k∈ℕ} ⟨x, e_k⟩_X f̃_k . (4.23)

Proof. Due to the definition (4.18) of the functions e_k there holds ⟨x, e_k⟩_X = ⟨x, A^* L f_k⟩_X = ⟨LAx, f_k⟩_Y.
Hence, since the set {f_k}_{k∈ℕ} forms a frame, equation (4.23) now follows from (2.5).
Note that by applying the well-defined inverse operator L^{-1} = (EE^*)^{1/2} to (4.23), we also obtain an expression for the operator A itself, namely

Ax = Σ_{k∈ℕ} ⟨x, e_k⟩_X L^{-1} f̃_k .
Next, we proceed to make the following

Definition 4.1. The operator Ā : Y → X is defined by

Āy := Σ_{k∈ℕ} ⟨Ly, f_k⟩_Y ẽ_k . (4.24)

For this operator Ā, we can derive the following well-definedness result:

Lemma 4.7. Let A : X → Y be a bounded linear operator satisfying Assumption 4.2. Furthermore, let the e_k be defined as in (4.18). Then for any y ∈ Z the function Āy given in (4.24) is a well-defined element of X.
Proof. It follows from Lemma 4.5 that the set {e_k}_{k∈ℕ} forms a frame over X with some frame bounds B_1 and B_2. Since the dual frame {ẽ_k}_{k∈ℕ} then forms a frame with bounds B_2^{-1} and B_1^{-1}, it follows from (2.3) that

‖Σ_{k∈ℕ} ⟨Ly, f_k⟩_Y ẽ_k‖_X² ≤ B_1^{-1} Σ_{k∈ℕ} |⟨Ly, f_k⟩_Y|² .

Now since the set {f_k}_{k∈ℕ} forms a frame over Y with some frame bounds C_1 and C_2, it follows together with (4.17) that

Σ_{k∈ℕ} |⟨Ly, f_k⟩_Y|² ≤ C_2 ‖Ly‖_Y² = C_2 ‖y‖_Z² .

Combining the above, we get that if y ∈ Z, then Āy ∈ X, which yields the assertion.
We can now proceed to derive the following result: for any y ∈ Z, the function Āy defined in (4.24) is the unique solution of both (4.21) and the original problem (1.2).

Proof. For any y ∈ Z it follows from (4.8) that there exists a unique solution x^* ∈ X of equation (4.21). Since by Assumption 4.2 the set {f_k}_{k∈ℕ} forms a frame with frame bounds C_1, C_2 > 0, it follows together with (4.19) that for each k ∈ ℕ there holds

⟨x^*, e_k⟩_X = ⟨LAx^*, f_k⟩_Y = ⟨Ly, f_k⟩_Y .
Since by Lemma 4.5 the set {e_k}_{k∈ℕ} forms a frame over X, it follows from (2.5) that

x^* = Σ_{k∈ℕ} ⟨x^*, e_k⟩_X ẽ_k = Σ_{k∈ℕ} ⟨Ly, f_k⟩_Y ẽ_k = Āy .
Hence, the function Āy is the unique solution of (4.21). Applying the operator L^{-1} to this equation, we see that Āy also solves (1.2). Together with the fact that due to (4.8) the nullspace of A is trivial, this yields the assertion.
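In finite dimensions, the reconstruction Āy = Σ_k ⟨Ly, f_k⟩_Y ẽ_k from Definition 4.1 can be checked numerically with the canonical dual frame ẽ_k = S^{-1} e_k, where S is the frame operator of {e_k}. The following is a minimal sketch under assumed choices (X = Y = ℝ^n, {f_k} the canonical basis, and a random SPD matrix standing in for the operator L):

```python
import numpy as np

# Minimal finite-dimensional sketch (illustrative assumptions: X = Y = R^n,
# A invertible, L symmetric positive definite, {f_k} the canonical basis).
# Then e_k = (L A)^T f_k, and the canonical dual frame is e~_k = S^{-1} e_k
# with frame operator S = sum_k e_k e_k^T.
rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)
M = rng.standard_normal((n, n))
L = M @ M.T + np.eye(n)          # SPD stand-in for the operator L

E = (L @ A).T                    # columns are e_k = (L A)^T f_k
S = E @ E.T                      # frame operator of {e_k}
E_dual = np.linalg.solve(S, E)   # columns are the dual frame vectors e~_k

# Reconstruction A_bar(y) = sum_k <L y, f_k> e~_k should solve A x = y:
x_true = rng.standard_normal(n)
y = A @ x_true
coeffs = L @ y                   # <L y, f_k> for the canonical basis
x_rec = E_dual @ coeffs
assert np.allclose(x_rec, x_true)
print("reconstruction error:", np.linalg.norm(x_rec - x_true))
```

Note that ⟨x_true, e_k⟩ = (LAx_true)_k = (Ly)_k, so the coefficients are exactly those appearing in (4.24).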
Remark 4.2. It can be seen from the proof of Lemma 4.7 that

‖Āy‖_X ≤ B_1^{-1/2} C_2^{1/2} ‖y‖_Z .

However, noisy data y^δ usually do not belong to the space Z, and thus Āy^δ is not well-defined. Hence, in order to obtain a stable approximation of Āy in this case, one can, e.g., consider a family U_α : Y → Z of bounded linear operators and define x_α^δ := Ā U_α y^δ.
One possible choice is, for example, U_α := U := L^{-1} = (EE^*)^{1/2}. Note that since L is often some sort of differential operator (for example if Y and Z are Sobolev spaces), the introduction of this operator U basically amounts to a smoothing of the data y^δ; compare for example with [16,25,26].
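As an illustration of the smoothing U = L^{-1}, assume (purely for this sketch) that Y = L² on the circle and Z is the periodic Sobolev space H¹, so that L acts in Fourier space as multiplication by (1 + ξ²)^{1/2}. The sketch below damps the Fourier coefficients of noisy data accordingly and checks the isometry ‖L^{-1}y^δ‖_{H¹} = ‖y^δ‖_{L²}:

```python
import numpy as np

# Sketch of the smoothing step U = L^{-1} (illustrative assumptions:
# Y = L^2 on the circle, Z = H^1 periodic, L = (I - d^2/ds^2)^{1/2}).
# L^{-1} damps the Fourier coefficient of frequency xi by (1 + xi^2)^{-1/2}.
def smooth(y_delta):
    n = y_delta.size
    xi = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies
    damp = (1.0 + xi ** 2) ** -0.5
    return np.fft.ifft(damp * np.fft.fft(y_delta)).real

def h1_norm_sq(u):
    n = u.size
    xi = np.fft.fftfreq(n, d=1.0 / n)
    return np.sum((1.0 + xi ** 2) * np.abs(np.fft.fft(u)) ** 2) / n

# Noisy samples of a smooth function:
rng = np.random.default_rng(2)
s = np.linspace(0, 2 * np.pi, 256, endpoint=False)
y_delta = np.sin(3 * s) + 0.1 * rng.standard_normal(s.size)

# L^{-1} is an isometry from L^2 to H^1, so the smoothed data lie in Z
# with ||L^{-1} y_delta||_{H^1} = ||y_delta||_{L^2} (discrete Parseval):
assert np.isclose(h1_norm_sq(smooth(y_delta)), np.sum(y_delta ** 2))
```

The isometry check makes precise the sense in which U maps arbitrary noisy data into Z with controlled Z-norm.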

Applications in Tomography
In this section, we consider the application of our frame decomposition results to some tomographic imaging problems. More precisely, we first consider computerized tomography based on the Radon transform, and then move on to atmospheric tomography.

Application to Computerized Tomography
Many problems of practical importance, for example in industry or in medicine, are based on the well-known Radon transform [18,19], which in 2D is given by

(Rf)(s, ω) := ∫_ℝ f(sω + tω^⊥) dt , s ∈ ℝ , ω ∈ S¹ .

Together with the parametrisation ω = ω(ϕ) = (cos(ϕ), sin(ϕ))^T we obtain the operator

(Af)(s, ϕ) := (Rf)(s, ω(ϕ)) , (5.1)

which is the version of the Radon transform commonly used for computational purposes.
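A crude discrete sketch of the parametrised Radon transform: for each angle the image is rotated and the line integrals are approximated by column sums. This is only a simplistic stand-in for (5.1); the grid, phantom, and angle set are assumptions of the illustration.

```python
import numpy as np
from scipy import ndimage

# Crude discrete Radon transform: rotate the image by each angle phi and
# sum along one axis to approximate the line integrals (illustrative only).
def radon(image, angles_deg):
    sinogram = np.empty((image.shape[1], len(angles_deg)))
    for i, phi in enumerate(angles_deg):
        rotated = ndimage.rotate(image, phi, reshape=False, order=1)
        sinogram[:, i] = rotated.sum(axis=0)   # projections for this angle
    return sinogram

# Simple phantom: a centred disc.
n = 64
yy, xx = np.mgrid[:n, :n]
phantom = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)

sino = radon(phantom, np.arange(0, 180, 10))
# Each projection preserves the total mass up to interpolation error:
assert np.allclose(sino.sum(axis=0), phantom.sum(), rtol=0.05)
```

The mass-preservation check reflects the fact that every projection of f integrates to the same total integral of f.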
For the subsequent considerations we first need to recall a number of definitions and results from [18,19], starting with the definition of the Sobolev spaces H^α(ℝ^N). Furthermore, for any open subset Ω ⊂ ℝ^N we also define the Sobolev spaces H^α_0(Ω), which are equipped with the same norms as H^α(ℝ^N). Next, we introduce the domains Ω_D := {x ∈ ℝ² | |x| ≤ 1} and Ω_S := ℝ × [0, 2π), as well as the Sobolev spaces H^α(Ω_S). It has been shown in [19] that for each α ∈ ℝ there exist positive constants c(α), C(α) such that

c(α) ‖f‖_{H^α_0(Ω_D)} ≤ ‖Af‖_{H^{α+1/2}(Ω_S)} ≤ C(α) ‖f‖_{H^α_0(Ω_D)} . (5.2)

This important result can be used to derive the following

Theorem 5.1. Let the Radon transform A : H^α_0(Ω_D) → H^β(Ω_S) be defined as in (5.1) for some 0 ≤ α, β ∈ ℝ satisfying β ≤ α + 1/2. Furthermore, let the functions f_k be such that the set {f_k}_{k∈ℕ} forms a frame over H^β(Ω_S). Additionally, assume that there exists a sequence of coefficients 0 < α_k ∈ ℝ such that the norm equivalence

(5.4) holds, and define e_k := α_k A^* f_k. Then the set {e_k}_{k∈ℕ} forms a frame over H^α_0(Ω_D). Furthermore, for any y ∈ H^{α+1/2}(Ω_S) the unique solution of Ax = y is given by the decomposition (5.5). Among all possible decompositions of Āy in terms of the dual frame functions ẽ_k, the decomposition (5.5) is the most economical one in the sense of Proposition 2.1.
As a consequence of the above result we obtain the following

Theorem 5.2. Let 0 ≤ α ∈ ℝ and let the Radon transform A : H^α_0(Ω_D) → L²(Ω_S) be defined as in (5.1). Furthermore, let {ψ_{j,k}}_{j,k∈ℤ} be an orthonormal wavelet basis of L²(ℝ), let {w_l}_{l∈ℕ} be an orthonormal basis of L²(0, 2π), and define the functions

f_{j,k,l}(s, ϕ) := ψ_{j,k}(s) w_l(ϕ) , and e_{j,k,l} := (1 + 2^{−2jα}) A^* f_{j,k,l} .
Then the set {e_{j,k,l}}_{j,k∈ℤ, l∈ℕ} forms a frame over H^α_0(Ω_D), and A admits the decomposition

Ax = Σ_{j,k∈ℤ, l∈ℕ} ⟨x, A^* f_{j,k,l}⟩_{H^α_0(Ω_D)} f_{j,k,l} .
Furthermore, for any y ∈ H^{α+1/2}(Ω_S) the unique solution of Ax = y is given by the decomposition (5.6). Among all possible decompositions of Āy in terms of the dual frame functions ẽ_{j,k,l}, the decomposition (5.6) is the most economical one in the sense of Proposition 2.1.
Proof. Since {ψ_{j,k}}_{j,k∈ℤ} is an orthonormal wavelet basis of L²(ℝ), a norm equivalence for the H^s(ℝ)-norm holds for any s ∈ ℝ (see e.g. [8]). Hence, together with (5.2) we obtain a norm equivalence of the form (5.4). Now, since by its definition the set {f_{j,k,l}}_{j,k∈ℤ, l∈ℕ} forms an orthonormal basis over L²(Ω_S), any function y ∈ L²(Ω_S) can be written as its basis expansion, and since the set {w_l}_{l∈ℕ} forms an orthonormal basis over L²(0, 2π), the assertion follows from Theorem 5.1 for the special case β = 0 and the above choice of coefficients.

Remark 5.1. The explicit representation of Āy given in (5.6) can be used as the basis of an efficient numerical routine for solving the tomography problem Ax = y. For example, one can replace the infinite sums over the indices j, k, l by finite sums, and pre-compute a numerical approximation of each dual frame function ẽ_{j,k,l} via (2.11). Then, for each right-hand side y, one only needs to compute the coefficients ⟨y, f_{j,k,l}⟩_{L²(Ω_S)} and sum up according to (5.6). If one chooses, e.g., an exponential basis for {w_l}_{l∈ℕ}, then these coefficients can be computed efficiently using the (fast) Fourier and wavelet transforms. Hence, in this case an efficient implementation of (5.6) for computing Āy is possible.
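The coefficient computation described in Remark 5.1 can be sketched as follows, under the illustrative assumptions that y is sampled on a regular (s, ϕ) grid, that {w_l} is the Fourier basis in ϕ, and that the orthonormal Haar basis is used as the wavelet basis in s. Since both transforms are unitary, Parseval's identity serves as a sanity check:

```python
import numpy as np

# Sketch of the coefficients <y, f_{j,k,l}> from Remark 5.1 (illustrative
# assumptions: y sampled on a regular (s, phi) grid, Fourier basis {w_l}
# in phi, orthonormal Haar wavelet basis in s).
def haar_dwt(x):
    """Full orthonormal Haar transform along axis 0 (length must be 2^J)."""
    x = x.copy()
    n = x.shape[0]
    while n > 1:
        even, odd = x[0:n:2], x[1:n:2]
        a = (even + odd) / np.sqrt(2.0)        # approximation coefficients
        d = (even - odd) / np.sqrt(2.0)        # detail coefficients
        x[: n // 2], x[n // 2 : n] = a, d
        n //= 2
    return x

def frame_coefficients(y):
    # Unitary FFT in the angular variable, Haar DWT in the radial variable.
    return haar_dwt(np.fft.fft(y, axis=1, norm="ortho"))

rng = np.random.default_rng(3)
y = rng.standard_normal((64, 32))              # samples y(s_i, phi_j)
c = frame_coefficients(y)
# Both transforms are unitary, so the coefficients satisfy Parseval:
assert np.isclose(np.sum(np.abs(c) ** 2), np.sum(y ** 2))
```

In practice one would use a smoother wavelet family and a dedicated fast wavelet transform library; the Haar transform above merely keeps the sketch self-contained.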
Alternatively, one can use a known result from [19] to obtain the alternative decomposition

Āy = Σ_{j,k∈ℤ} (1 + |j|²)^{α/2} ⟨y, w_{j,k}⟩_{L²(Ω_S)} ṽ_{j,k} ,

which can again be implemented efficiently using the fast Fourier transform.

Application to Atmospheric Tomography
Atmospheric tomography plays an important role in many Adaptive Optics (AO) systems for improving the imaging quality of ground-based astronomical telescopes such as the Extremely Large Telescope (ELT) [13] of the European Southern Observatory (ESO), currently under construction in the Atacama desert in Chile. Based on measurements of the incoming light of both Natural Guide Stars (NGS) and artificially created Laser Guide Stars (LGS) in the vicinity of an object of interest, one aims at reconstructing the atmospheric turbulence on a finite number of turbulent layers, in order to then adjust deformable mirrors in such a way that the incoming wavefronts are corrected (flattened) after reflection on these mirrors. Since the atmosphere is constantly changing, this has to be done in real time. For details we refer to [10,27,28].
Mathematically, the atmospheric tomography problem can be written as a linear operator equation of the form (1.2), using the atmospheric tomography operator [15,21]

A : D(A) := ⊕_{l=1}^{L} L²(Ω_l) → L²(Ω_A)^G , φ ↦ ϕ = (ϕ_g)_{g=1,…,G} , (Aφ)_g(r) := Σ_{l=1}^{L} φ_l(c_{l,g} r + α_g h_l) , g = 1, …, G ,

where φ = (φ_l)_{l=1,…,L} denotes the turbulence layers and ϕ = (ϕ_g)_{g=1,…,G} are the incoming wavefronts. Here, L denotes the number of atmospheric layers, located at the heights h_l, G denotes the total number of guide stars with corresponding view directions α_g = (α^x_g, α^y_g) ∈ ℝ², and the c_{l,g} are constants depending on the layer and the guide star. Furthermore, the domain Ω_A ⊂ ℝ² denotes the telescope aperture, and

Ω_A(α_g h_l) := { r ∈ ℝ² : (r − α_g h_l)/c_{l,g} ∈ Ω_A } .
For more details on this setting we refer to [15,21] and the references therein. It follows from [21, Theorem 3.1] that the operator A is not compact with respect to the canonical inner products, and hence a singular system does not necessarily exist. Moreover, to our knowledge neither a wavelet-vaguelette nor a similar decomposition of this operator is known. However, it was recently shown in [15] that a frame decomposition of A is possible. The corresponding frames are built from the functions

w_{jk}(x, y) := (2T)^{-1} exp(ijπx/T) exp(ikπy/T) ,
w_{jk,lg}(x, y) := c_{l,g}^{-1} w_{jk}((x, y)/c_{l,g}) I_{c_{l,g}Ω_A + α_g h_l}(x, y) ,

where I_{c_{l,g}Ω_A + α_g h_l}(x, y) denotes the indicator function of the domain c_{l,g}Ω_A + α_g h_l. It was shown that if T > 0 is chosen large enough, then the set {w_{jk}}_{j,k∈ℤ} forms a tight frame with frame bound 1 over L²(Ω_A), and the sets {w_{jk,lg}}_{j,k∈ℤ}, g = 1, …, G, form frames with frame bounds C_1 = 1 and C_2 = G over L²(Ω_l). Furthermore, one obtains

⟨(Aφ)_g, w_{jk}⟩_{L²(Ω_A)} = (2T) Σ_{l=1}^{L} c_{l,g}^{-1} w_{jk}(α_g h_l / c_{l,g}) ⟨φ_l, w_{jk,lg}⟩_{L²(Ω_l)} ,

and thus the generalization (3.33) of condition (3.4) holds. Hence, Theorem 3.7 is applicable, which yields the same results as the ones presented in [15, Thm. 4.8].
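The action of the atmospheric tomography operator, (Aφ)_g(r) = Σ_l φ_l(c_{l,g} r + α_g h_l), can be sketched numerically by bilinear interpolation of gridded layers. All sizes, heights, and view directions below are assumptions of the illustration, not values from an actual AO system:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch of the atmospheric tomography operator (illustrative assumptions):
# L layers phi_l on regular grids, (A phi)_g(r) = sum_l phi_l(c_{l,g} r +
# alpha_g h_l), evaluated at aperture points r by bilinear interpolation
# (zero outside the layer grids).
n_layers, G = 2, 1
h = [0.0, 10.0]                          # layer heights
alpha = np.array([[1e-3, 0.0]])          # view direction of guide star g = 0
c = np.ones((n_layers, G))               # scaling constants c_{l,g}

grid = np.linspace(-2.0, 2.0, 129)       # common layer grid
rng = np.random.default_rng(4)
layers = [rng.standard_normal((129, 129)) for _ in range(n_layers)]
interp = [
    RegularGridInterpolator((grid, grid), phi_l, bounds_error=False, fill_value=0.0)
    for phi_l in layers
]

def apply_A(r_points, g):
    """Evaluate (A phi)_g at aperture points r_points of shape (m, 2)."""
    out = np.zeros(len(r_points))
    for l in range(n_layers):
        shifted = c[l, g] * r_points + alpha[g] * h[l]
        out += interp[l](shifted)
    return out

r = np.stack([np.zeros(5), np.linspace(-1.0, 1.0, 5)], axis=1)
out = apply_A(r, 0)
print(out)
```

The non-compactness of A mentioned above is not visible in such a finite discretisation; the sketch only illustrates how the layered sum along each view direction is evaluated.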
Similarly, the authors of [21] derived a singular-value-type decomposition of what they called the periodic atmospheric tomography operator Ã, which is defined by

(Ãφ)_g := Σ_{j,k∈ℤ} Σ_{l=1}^{L} w_{jk}(α^x_g h_l, α^y_g h_l) ⟨φ_l, w_{jk}⟩_{L²(Ω_l)} w_{jk} . (5.7)
Since the sets {w_{jk}}_{j,k∈ℤ} form orthonormal bases and thus tight frames over L²(Ω_T) with frame bound 1, it follows that Assumption 3.1 is satisfied. Hence, Corollary 3.8 is applicable, and we recover the same decomposition and reconstruction results as in [21]. The periodic atmospheric tomography operator as defined in (5.7) only covers settings without LGSs. A similar operator, which can be used to treat settings with only LGSs, was also considered in [15], and the derived frame decomposition results again fit into the theoretical framework developed in this paper.
Lastly, the authors of [22] recently proposed an approach for using atmospheric tomography in Single Conjugate Adaptive Optics (SCAO), a specific AO setting which uses only a single guide star and thus does not naturally allow for atmospheric tomography. However, based on a time series of wavefront measurements together with an estimate of the wind speed on each atmospheric layer, they developed a method for incorporating atmospheric tomography which led to an improvement in imaging quality. Their approach uses the same operator Ã as in (5.7), but with the parameters α_g h_l replaced by the corresponding windshift vectors. Since this does not entail any essential structural changes of the problem, the results of [21] and our theoretical results on the frame decomposition are applicable in that case as well.

Conclusion
In this paper, we considered the decomposition of bounded linear operators on Hilbert spaces in terms of functions forming frames. The resulting frame decomposition encodes information on the structure and ill-posedness of the problem and can be used as the basis for the design and implementation of efficient numerical solution methods. In contrast to the singular-value decomposition, the presented frame decomposition can be derived explicitly for a wide class of operators, in particular for those satisfying a certain stability condition. In order to show the usefulness of this approach, we considered different examples from computerized and atmospheric tomography.

Support
S. Hubmer and R. Ramlau were (partly) funded by the Austrian Science Fund (FWF): F6805-N36. The authors would like to thank Dr. Stefan Kindermann for valuable discussions on some theoretical questions which arose during the writing of this manuscript.