Displacement field estimation from OCT images utilizing speckle information with applications in quantitative elastography

In this paper, we consider the problem of estimating the internal displacement field of an object undergoing deformation from Optical Coherence Tomography (OCT) images taken before and after compression. For this estimation we propose a novel algorithm, which utilizes particular speckle information to enhance the quality of the motion estimation. We present numerical results based on both simulated and experimental data in order to demonstrate the usefulness of our approach, in particular for quantitative elastography, where the material parameters are estimated in a second step based on the internal displacement field.


Introduction
The present paper is concerned with quantitative elastography from Optical Coherence Tomography (OCT) measurements, where one tries to estimate various material parameters of a physical sample from its behaviour under deformations. This is of particular interest in medicine, where, for example, one tries to identify malignant tissue inside the human skin by reconstructing the spatially varying elastic parameters.
Various approaches to quantitative elastography have been proposed over the years; see e.g. [13,22,27,28,29,40] and the references therein. These differ for example in the employed deformation (e.g., quasi-static, harmonic, transient), the underlying elasticity model (e.g., linear, viscoelastic, hyperelastic), the way the deformation is measured (only on the boundary, or everywhere inside the sample using, e.g., ultrasound, magnetic resonance, or optical imaging techniques), and whether a one-step approach (i.e., reconstructing the material parameters directly from the internal images) or a two-step approach (i.e., first estimating the displacement field from the images and then computing the material parameters) is used.
In this paper, we consider in particular the displacement estimation step in two-step approaches to (quantitative) OCT elastography; see e.g. [1,22,27,28,40,46,49] and the references therein. In some cases, the axial component of the displacement field can be measured directly via phase-sensitive OCT, given that this modality is available. However, this is usually not enough to obtain quantitative material parameter reconstructions, since important information is also contained in the other field components. Hence, the full internal displacement field is typically reconstructed via various motion estimation techniques. A standard technique in this context is the normalized cross-correlation method based on so-called speckle in the OCT images [14,37,40,44].
Since the early stages of the development of scattering imaging modalities, researchers have been aware of the phenomenon known as speckle. It is commonly observed in ultrasound, OCT, radar-based imaging, and radio-astronomy. Speckle is usually interpreted as the constructive and destructive interference of backscattered waves, which creates the granulated appearance (produced by bright and dark spots) of OCT scans (see Figure 1.1). Multiple factors contribute to the pattern of speckles, such as the properties of the optical materials or the settings of the imaging system. Although often seen as a source of noise in OCT data, speckles also carry important information, for example on the movement of particles inside the sample. These topics have been discussed broadly in the literature; see for example [41]. Moreover, virtual [16,39] or extra speckle [40] has been introduced to support motion detection.
The normalized cross-correlation method correlates OCT scans of pre-compressed and post-compressed media by calculating the cross-correlation coefficients for every pixel over a fixed surrounding area, and then creates a map of displacements from the maxima of those coefficients [14,37,40,44]. Unfortunately, this method is prone to mis-estimations of the displacement field if the applied strain is too small or too large, or if the size of the correlation area is not carefully chosen. Furthermore, the pixel-by-pixel processing is very time-consuming.
Hence, in this paper, we propose an alternative approach (see Sections 2 and 3 below) to the cross-correlation method, which is computationally more efficient. The idea is to use the additional information contained in speckles present in OCT images in order to achieve an improved displacement field estimation. In particular, we propose a novel algorithm for detecting and tracking larger speckle formations, which we call bubbles, and for extracting the information on the general internal motion which they contain. Then, we use this information to enhance the displacement field reconstruction procedure, which is based on the OCT images. This can be interpreted as a "large to small details" strategy. The estimated displacement fields can afterwards be used as input for various material parameter reconstruction methods in quantitative elastography, one of which we discuss below.
The outline of this paper is as follows. In Section 2, we introduce and discuss a method for the detection and tracking of bubbles, and in Section 3, we propose an algorithm for enhancing the displacement field estimation with the obtained bubble information. Furthermore, we present a mathematical analysis of the proposed approach. In Section 4, we discuss possible ways to implement the proposed algorithm, and in Section 5, we present numerical results on both simulated and experimental data, demonstrating its practical usefulness. In Section 6, we discuss the application of our proposed displacement field estimation algorithm to quantitative elastography, and in Section 7, we provide numerical examples on the reconstruction of elastic parameters based on the estimated displacement fields, again on both simulated and experimental data, before proceeding to a short conclusion in Section 8.

A Bubble Approach to Speckle Tracking
In this section, we propose a novel, heuristics-based image-processing algorithm for the detection of large speckle formations, which we call bubbles, and for the determination of their movement, from a pair of successive volumetric OCT data sets. The bubble information obtained from this bubble tracking algorithm is then used in Section 3 below to enhance the reconstruction quality of the displacement field estimation.
Our proposed method consists of the following three general steps: Pre-processing of the OCT-scan image data, detection of bubbles in the processed images, and estimation of the movement of the bubbles. These three general steps are described in detail below. Since the experimental data considered in Section 5 stems from a rotationally invariant cylindrical sample, the following description of the algorithm steps is tailored to this case. Note, however, that the algorithm can be generalized in a straightforward manner to non-symmetric samples.
Step I: Pre-processing of the OCT-scan image data For the pre-processing step, we start by taking a pair of volumetric OCT scans recorded from two sequential compression states of the sample. These data, which usually span several orders of magnitude, are commonly displayed on a base-10 logarithmic scale to enhance the visibility of lower refractions. For our purpose, we take the logarithmized data and rescale it such that its brightness lies within the interval [0, 1]. Since we consider a rotationally invariant cylindrical sample here, we search for its center and radius through lateral segmentation and circle fitting. Using this information, the sample is aligned in the x0y plane such that its center is located at the same point in both scans. The 'height' of the cylinder thus aligns with the z-axis, and data extending beyond the actual height of the cylinder is cut off. Figure 1.1 depicts a rotational maximum intensity projection (MIP) of OCT data pre-processed in the above way. The knowledge of the origin and height of the cylinder for both compression states provides us with an initial guess of the maximal possible axial and radial displacement which can have occurred within the sample.
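The log-compression and rescaling described above can be sketched as follows (a minimal sketch; the segmentation, circle fitting, and alignment steps are omitted, and all names are illustrative):

```python
import numpy as np

def preprocess(volume, eps=1e-12):
    """Log-compress raw OCT intensities and rescale brightness to [0, 1].

    `volume` is a raw (positive) OCT intensity array; `eps` guards
    against log(0). Returns the base-10 logarithmized data, linearly
    rescaled so that its values span exactly the interval [0, 1].
    """
    log_data = np.log10(np.maximum(volume, eps))
    lo, hi = log_data.min(), log_data.max()
    return (log_data - lo) / (hi - lo)
```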
Step II: Detection of bubbles in the processed images The next step is the detection of bubbles within the OCT data. For this, we first apply a Gaussian filter with a standard deviation of around 0.8 to 1 to the data, in order to reduce background intensity jumps and to slightly smooth the granulated appearance of bubbles. Based on the histogram statistics of the volumetric dataset, we define a brightness threshold selecting the 0.5-1% brightest pixels, which we use to binarize the volumetric data. Each bubble in the dataset thus becomes a connected group of voxels with value 1 within a background of uniform value 0. Identifying and grouping the connected components of these binary images, we obtain the different bubbles, from which we extract their centroids (by fitting them with ellipsoids) as well as their voxel-volumes. Any bubble thereby detected which is smaller than a predefined threshold (between 80 and 100 voxels in the experimental data set considered below) is discarded. See Figure 5.6 for an illustration of the outcome of the bubble detection step described above, where the procedure was applied to the OCT images depicted in Figure 1.1.

Step III: Estimation of the movement of the bubbles In the third step, we aim at estimating the motion of the detected bubbles between the two recorded volumetric images.
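The detection procedure of Step II can be sketched with `scipy.ndimage` as follows. This is a minimal sketch using the parameter ranges from the text as defaults; centroids are computed here as plain centers of mass rather than by ellipsoid fitting, and all names are illustrative:

```python
import numpy as np
from scipy import ndimage

def detect_bubbles(volume, sigma=0.9, bright_frac=0.01, min_voxels=80):
    """Detect bubbles in a pre-processed OCT volume (values in [0, 1]).

    Returns a list of (centroid, voxel_volume) pairs: Gaussian smoothing
    with std ~0.8-1, binarization at the brightest 0.5-1% of voxels,
    connected-component labelling, and removal of components below
    ~80-100 voxels.
    """
    smoothed = ndimage.gaussian_filter(volume, sigma=sigma)
    threshold = np.quantile(smoothed, 1.0 - bright_frac)
    binary = smoothed > threshold
    labels, num = ndimage.label(binary)  # connected components
    bubbles = []
    for idx in range(1, num + 1):
        voxels = int((labels == idx).sum())
        if voxels >= min_voxels:
            centroid = ndimage.center_of_mass(binary, labels, idx)
            bubbles.append((centroid, voxels))
    return bubbles
```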
We now compare each of the detected bubbles in the first ('start') scan with every bubble detected in the second ('end') scan. For each candidate pair of bubbles A and B from the 'start' and 'end' scan, respectively, we compute their voxel-volumes V_A and V_B, as well as the z-coordinates z_A and z_B of their centroids. Furthermore, we calculate the distance d_AB between the two centroids, the distance d_{O1,A} between the cylinder axis and the 'start' centroid, the distance d_{O2,B} between the cylinder axis and the 'end' centroid, the angle α between the cylinder axis and the line connecting the 'start' and 'end' centroids, as well as the angle ϕ in the x0y plane between the line passing through the cylinder origin and the 'start' centroid and the line passing through the cylinder origin and the 'end' centroid.
For each pair of bubbles (A, B) we check whether the following basic intuitive geometric criteria are satisfied:

1. The volume of a bubble does not change significantly between the images of different compression phases, i.e.,

   |V_A − V_B| ≤ ε .

2. A bubble does not move more than the displacement d_max (applied for deforming the sample) in the axial direction, and not more than the radial expansion r_max of the sample (known from the pre-processing step) in the lateral direction, i.e.,

   |z_A − z_B| ≤ d_max , |d_{O1,A} − d_{O2,B}| ≤ r_max .

3. If the sample is compressed from above, then the bubbles are mainly shifted downwards (i.e., in z-direction) between the first scan and the second, i.e.,

   z_B ≤ z_A .

4. In the case of rotational invariance, the sample experiences a side bulging, which implies that a bubble moves outwards from the cylinder axis in the radial direction. Hence, one expects that there is either no tangential movement, or that it is at most very small, i.e.,

   ϕ ≤ ϕ_max .

5. The possible angle α between the cylinder axis and the line passing through the start and end points of a tracked bubble is limited by the strain applied to the sample, i.e.,

   α_min ≤ α ≤ α_max .
Two bubbles A and B are considered to match if they satisfy all of the inequalities stated above. Hereby, the constants ε, d_max, α_min, α_max, and ϕ_max have to be suitably chosen depending on the strain applied in a concrete experiment. We then obtain an estimate of the bubble movement by computing the vector pointing from the center of mass of bubble A to the center of mass of the matched bubble B. Two examples of the outcome of the bubble tracking step described above are shown in Figure 5.6 (right) and Figure 5.7. There, the procedure was applied to the OCT images depicted in Figure 1.1, and to the corresponding complete volumetric OCT scan, respectively.
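A sketch of the matching test for a single candidate pair is given below. Since the explicit inequalities are experiment-dependent, the bounds used here (including the lateral bound `r_max`) are one plausible rendering of criteria 1-5, not the exact conditions used in a concrete experiment:

```python
import math

def plausible_match(A, B, eps, d_max, r_max, alpha_min, alpha_max, phi_max):
    """Check the geometric matching criteria for a bubble pair (A, B).

    A and B are dicts holding the voxel-volume 'V', the centroid height
    'z', the radial distance 'r' to the cylinder axis, and the polar
    angle 'phi' in the x0y plane. All bounds are assumed given; the
    inequalities below are illustrative.
    """
    # 1. The volume does not change significantly (eps may be chosen
    #    differently for 'large' and 'small' bubbles).
    if abs(A['V'] - B['V']) > eps:
        return False
    # 2. Bounded axial and lateral movement.
    if abs(A['z'] - B['z']) > d_max or abs(A['r'] - B['r']) > r_max:
        return False
    # 3. Compression from above: the bubble is shifted downwards
    #    (here: towards smaller z).
    if B['z'] > A['z']:
        return False
    # 4. (Almost) no tangential movement.
    if abs(A['phi'] - B['phi']) > phi_max:
        return False
    # 5. The angle between the cylinder axis and the displacement
    #    vector lies within the strain-dependent bounds.
    dz, dr = A['z'] - B['z'], B['r'] - A['r']
    alpha = math.atan2(abs(dr), dz) if dz != 0 else math.pi / 2
    return alpha_min <= alpha <= alpha_max
```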
The behaviour of the above algorithm is demonstrated in Section 5 below. It is both robust and efficient on OCT scans featuring bright, sharp spot reflectors. In terms of implementation, note that it can be useful not to choose one fixed ε in the volume inequality above, but to separate the bubbles into subsets of 'large' and 'small' bubbles and to use a different ε for each of these groups. Furthermore, both the bubble detection step and the bubble matching step can be carried out in parallel. Note that there is typically at most one bubble in the second scan which matches a bubble in the first scan, and that not every bubble can necessarily be matched with another one, in particular since the number of bubbles in two consecutive OCT scans is not constant. However, this is not a problem, since not all bubbles need to be matched in order to improve the reconstruction quality of the subsequent displacement field estimation.

Displacement Field Estimation with Bubble Data
In this section, we present a method for incorporating bubble movement, obtained for example with the approach presented above, into the displacement field estimation. First of all, note that a relatively simple way of estimating the displacement or motion between images, valid for small displacements, is via the optical flow equation

I_t(x) + ∇I(x) · u(x) = 0 ,    (3.1)

where I = I(x, t) denotes the image intensity function and u(x) = (u_1(x), u_2(x)) denotes the displacement (motion, flow) field. Assuming that both I and u are defined for x ∈ Ω ⊂ R^2, one of the most common approaches for estimating the displacement field u is to minimize the functional

J(u) := ∫_Ω (I_t + ∇I · u)^2 dx + α R(u) ,    (3.2)

where α ≥ 0 is a regularization parameter and R is a regularization functional used to incorporate additional assumptions on the displacement field u. Many possible choices for R have been proposed over the years, perhaps the best known and most commonly used one being

R(u) := ∫_Ω ( |∇u_1|^2 + |∇u_2|^2 ) dx ,

which ultimately gives rise to the well-known Horn-Schunck method. It encodes a smoothness assumption on the solution and, according to [42,43], is one of only two regularization functionals which use first-order terms only and also have a physical interpretation (the other being the regularization functional of Nagel [34]). Note that there exists a wide variety of different approaches to motion estimation, in particular for large-displacement optical flow [3,4,5,6,8,10,11,25,36,45,47]. In fact, the optical flow equation (3.1) is itself a first-order approximation to the more general image registration problem, and it is valid for small displacements. Fortunately, the displacements considered throughout this paper are all comparatively small, and thus we can work with the (simpler) optical flow equation (3.1). Nevertheless, we also discuss an approach for treating larger displacements in Section 4.3 below.
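For illustration, a minimal Horn-Schunck-type iteration for this kind of functional might look as follows. This is a sketch with simple finite-difference derivatives and the classical averaging-kernel update, not the solver used later in the paper:

```python
import numpy as np
from scipy import ndimage

# Classical Horn-Schunck averaging kernel for the local flow mean.
AVG = np.array([[1/12, 1/6, 1/12],
                [1/6,  0.0, 1/6],
                [1/12, 1/6, 1/12]])

def horn_schunck(I1, I2, alpha=1.0, iters=200):
    """Estimate the flow (u1, u2) between images I1 and I2."""
    Ix, Iy = np.gradient(I1.astype(float))   # spatial gradient of I1
    It = I2.astype(float) - I1.astype(float) # backward time difference
    u1 = np.zeros_like(Ix)
    u2 = np.zeros_like(Ix)
    for _ in range(iters):
        u1_bar = ndimage.convolve(u1, AVG, mode='nearest')
        u2_bar = ndimage.convolve(u2, AVG, mode='nearest')
        # Pointwise update derived from the optimality conditions.
        common = (Ix * u1_bar + Iy * u2_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u1 = u1_bar - Ix * common
        u2 = u2_bar - Iy * common
    return u1, u2
```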
As mentioned before, we want to utilize additional speckle (bubble motion) information in the displacement field reconstruction. In Section 2, we proposed a way to track the displacement of bubbles between two images, and thus we now assume that a set of displacement vectors û^i, for i = 1, ..., M, is given. Since we want our estimated displacement field to coincide with these vectors at the points x̂^i = (x̂^i_1, x̂^i_2) ∈ Ω where the centers of mass of the bubbles are located, i.e., u(x̂^i) ≈ û^i, we propose the following functional:

S(u) := Σ_{i=1}^{M} ∫_Ω g(x, x̂^i) |u(x) − û^i|^2 dx ,    (3.3)

where g(x, x̂^i) denotes the Gaussian function centered around x̂^i with standard deviation σ, i.e.,

g(x, x̂^i) := exp( −|x − x̂^i|^2 / (2σ^2) ) .    (3.4)

Furthermore, since it also makes sense to assume in addition that the displacement field u is smooth, we propose to estimate it by minimizing the following functional:

F(u) := ∫_Ω (I_t + ∇I · u)^2 dx + α R(u) + β S(u) ,    (3.5)

where β ≥ 0 is another regularization parameter. Depending on the choices of α and β, an emphasis can be placed either on the desired smoothness of the displacement field, or on its fit to the given bubble displacements û^i (indirectly also influenced by σ).
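On a pixel grid, the bubble data term S(u) can be evaluated directly (a sketch with unit grid spacing; array shapes and names are illustrative):

```python
import numpy as np

def bubble_penalty(u, centers, displacements, sigma):
    """Discrete version of the bubble data term S(u):
    sum_i sum_x g(x, x_i) |u(x) - u_i|^2, with g an (unnormalized)
    Gaussian of width sigma. u has shape (2, H, W); centers and
    displacements are lists of 2-vectors.
    """
    H, W = u.shape[1:]
    X, Y = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    total = 0.0
    for (cx, cy), (dx, dy) in zip(centers, displacements):
        g = np.exp(-((X - cx)**2 + (Y - cy)**2) / (2 * sigma**2))
        total += np.sum(g * ((u[0] - dx)**2 + (u[1] - dy)**2))
    return total
```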
We are now going to analyse the problem of minimizing the functional F. For this, we make use of the analysis of Schnörr [42], reviewed below, who already considered the problem of minimizing the functional J from (3.2). His key idea was to rewrite J in the form

J(u) = ã(u, u) − 2 b̃(u) + c̃ ,    (3.6)

with the bilinear form ã(·,·), the linear form b̃(·), and the constant term c̃ given by

ã(u, v) := ∫_Ω (∇I · u)(∇I · v) dx + α ∫_Ω ( ∇u_1 · ∇v_1 + ∇u_2 · ∇v_2 ) dx ,
b̃(u) := −∫_Ω I_t (∇I · u) dx ,    c̃ := ∫_Ω I_t^2 dx .    (3.7)

After showing that ã(·,·) is symmetric and elliptic on a suitable function space V, Schnörr derived results about the minimization of J by noting that its unique minimizer is given as the unique solution of the linear equation

ã(u, v) = b̃(v) , for all v ∈ V .

The assumptions required for his analysis are collected in the following

Assumption 3.1. Let Ω ⊂ R^2 be a nonempty, bounded, open, and connected set with a Lipschitz continuous boundary ∂Ω, and let ∇I ∈ L^∞(Ω) and I_t ∈ L^2(Ω). Furthermore, let α > 0 and let the components of ∇I be linearly independent.
Under the above assumptions, Schnörr derived the results summarized in

Theorem 3.2. Let Assumption 3.1 hold and let the bilinear form ã(·,·), the linear form b̃(·), and the constant term c̃ be defined as in (3.7). Then ã(·,·) and b̃(·) are bounded on H^1(Ω)^2, and ã(·,·) is elliptic on H^1(Ω)^2. Furthermore, the unique minimizer of the problem

min_{u ∈ H^1(Ω)^2} J(u)

is given as the unique solution u ∈ H^1(Ω)^2 of the linear problem

ã(u, v) = b̃(v) , for all v ∈ H^1(Ω)^2 .

Furthermore, the solution of this equation depends continuously on the right-hand side b̃(·), but not necessarily on the image intensity function I.
Proof. The proofs of these statements can all be found in [42].
We now apply the same ideas and results presented above to the problem of minimizing the functional F; note that Schnörr's work corresponds to the case β = 0. First of all, in analogy to (3.7), we define the bilinear form a(·,·), the linear form b(·), and the constant term c by

a(u, v) := ã(u, v) + β Σ_{i=1}^{M} ∫_Ω g(x, x̂^i) (u · v) dx ,
b(u) := b̃(u) + β Σ_{i=1}^{M} ∫_Ω g(x, x̂^i) (û^i · u) dx ,    (3.11)
c := c̃ + β Σ_{i=1}^{M} ∫_Ω g(x, x̂^i) |û^i|^2 dx ,

with ã(·,·), b̃(·), and c̃ as defined in (3.7). It is easy to see that, in complete analogy to the representation (3.6) of J, there holds

F(u) = a(u, u) − 2 b(u) + c .    (3.12)

We now proceed by proving some important results for the bilinear and linear form in

Proposition 3.3. Let Assumption 3.1 hold and let the bilinear form a(·,·) and the linear form b(·) be defined as in (3.11). Then a(·,·) is bounded and elliptic on H^1(Ω)^2, and b(·) is bounded on H^1(Ω)^2.

Proof. We start by showing that the linear form b(·) is bounded. By the definition of b, and since, by Theorem 3.2, b̃ is bounded, it remains to show that the mapping u ↦ Σ_i ∫_Ω g(x, x̂^i) (û^i · u) dx is bounded. Using the Cauchy-Schwarz inequality, we get that

| Σ_i ∫_Ω g(x, x̂^i) (û^i · u) dx | ≤ Σ_i |û^i| ‖g(·, x̂^i)‖_{L^2(Ω)} ‖u‖_{L^2(Ω)^2} ≤ |Ω|^{1/2} ( Σ_i |û^i| ) ‖u‖_{H^1(Ω)^2} ,

where we have used that, as a Gaussian function, g(x, x̂^i) ≤ 1. From this, it now follows that b(·) is bounded on H^1(Ω)^2. Similarly, for the bilinear form a(·,·), we only need to show that the mapping (u, v) ↦ Σ_i ∫_Ω g(x, x̂^i) (u · v) dx is bounded. Again using the Cauchy-Schwarz inequality, we get that

| Σ_i ∫_Ω g(x, x̂^i) (u · v) dx | ≤ M ‖u‖_{L^2(Ω)^2} ‖v‖_{L^2(Ω)^2} ≤ M ‖u‖_{H^1(Ω)^2} ‖v‖_{H^1(Ω)^2} ,

which implies the boundedness of a(·,·). Thus, it remains to show that a(·,·) is coercive. For this, we look at

a(u, u) = ã(u, u) + β Σ_i ∫_Ω g(x, x̂^i) |u|^2 dx ≥ ã(u, u) ,

where we used that g(x, x̂^i) > 0. Hence, the coercivity of a(·,·) now follows from the coercivity of ã(·,·) shown in Theorem 3.2, which concludes the proof.
With this, we are now able to prove well-posedness of our optical flow algorithm.
Theorem 3.4. Let Assumption 3.1 hold and let the bilinear form a(·,·) and the linear form b(·) be defined as in (3.11). Then, the unique minimizer of the problem

min_{u ∈ H^1(Ω)^2} F(u)    (3.13)

is given as the unique solution u ∈ H^1(Ω)^2 of the linear problem

a(u, v) = b(v) , for all v ∈ H^1(Ω)^2 .    (3.14)

Furthermore, the solution of this equation depends continuously on the right-hand side b(·), but not necessarily on the image intensity function I.
Proof. This follows from the Lax-Milgram Lemma in the same way as in Schnörr [42], using the representation (3.12) together with Proposition 3.3.
For the choice α = 0 and β > 0, i.e., when one only regularizes with the functional S but not with R, the requirement in the above theorem that the components of ∇I be linearly independent can be dropped. The only difference is that one then has to replace the space H^1(Ω) by L^2(Ω) everywhere. This is summarized in

Theorem 3.5. Let Ω ⊂ R^2 be a nonempty, bounded, open, and connected set with a Lipschitz continuous boundary ∂Ω, and let ∇I ∈ L^∞(Ω) and I_t ∈ L^2(Ω). Furthermore, let α = 0 and β > 0, and let the bilinear form a(·,·) and the linear form b(·) be defined as in (3.11). Then, the unique minimizer of the problem

min_{u ∈ L^2(Ω)^2} F(u)

is given as the unique solution u ∈ L^2(Ω)^2 of the linear problem

a(u, v) = b(v) , for all v ∈ L^2(Ω)^2 .

Furthermore, the solution of this equation depends continuously on the right-hand side b(·), but not necessarily on the image intensity function I.
Proof. Since the domain Ω is bounded, it follows from the definition (3.4) of the functions g(x, x̂^i) that there exists a constant c_g > 0 such that

g(x, x̂^i) ≥ c_g , for all x ∈ Ω .

Since this implies that

a(u, u) ≥ β c_g ‖u‖_{L^2(Ω)^2}^2 ,

it follows that a(·,·) is elliptic on L^2(Ω)^2 for α = 0 and β > 0. Furthermore, it is easy to see that in this case a(·,·) and b(·) are also bounded on L^2(Ω)^2. Hence, the statements of the theorem now follow by the Lax-Milgram Lemma, using the representation (3.12) in the same way as in [42] or Theorem 3.4 above.
Remark. An extension of the above approach is possible if one deals with (almost) incompressible materials. Since for incompressible materials there holds div(u) = 0, it is then natural to consider the additional regularization functional

D(u) := ∫_Ω ( div u )^2 dx ,

and instead of (3.13) to solve the adapted minimization problem

min_{u ∈ H^1(Ω)^2} F(u) + γ D(u) ,

where γ > 0 is an additional regularization parameter. The theoretical results derived above remain valid, and thus one can expect to obtain a physically meaningful displacement field reconstruction also in this case. However, one now has to tune three regularization parameters instead of two. Note that for incompressible materials also the subsequent material parameter estimation step (see Section 6 below) changes, since, for example, under models of linear elasticity one then no longer searches for the Lamé parameters λ and µ, but instead only for µ and the pressure [27,28].

Minimization Approaches
In this section, we discuss two alternative approaches, direct and iterative, for the minimization of the functional (3.5), and analyze them based on the theoretical results presented above.

Direct Methods
Due to Theorem 3.4, one way of computing the minimizer of the functional F is by solving the variational problem (3.14), which can, for example, be done via a finite element (FE) method. Let {ψ_1, ..., ψ_N} be a basis of a finite-dimensional subspace V_h of H^1(Ω)^2, consisting for example of piecewise linear functions on a suitable triangulation of the domain Ω. The FE approximation u_h to the solution of (3.14) is then

u_h = Σ_{k=1}^{N} z_k ψ_k ,

where the vector z = (z_k) is given as the solution of the matrix-vector equation Az = y, with the N × N matrix A = (A_{ik}) and the N × 1 vector y = (y_i) defined by

A_{ik} := a(ψ_k, ψ_i) , and y_i := b(ψ_i) .

Due to the ellipticity of the bilinear form a(·,·), the matrix A is positive definite, and thus the matrix-vector system Az = y admits a stable solution. Of course, many different versions of the finite element method are applicable to the variational problem, each with its own peculiarities (see for example [9,30,35]). Also, for the solution of the resulting matrix-vector equation, a number of different solvers are possible, from simple direct solvers to preconditioned iterative ones. In particular, due to the underlying structure of the problem, multi-level (multi-grid) ideas are preferable, which not only lend themselves to parallelization, but can also be adapted to improve the estimation of relatively large displacement fields (see Section 4.3 on multi-scale approaches).
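As an illustration of the direct approach, the following sketch assembles and solves a finite-difference analogue of the system Az = y on the pixel grid, with a 5-point Laplacian in place of the FE stiffness matrix and homogeneous Neumann boundary conditions. This is a simplified stand-in for the FE implementation, and all names are illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_flow_direct(I1, I2, alpha, beta=0.0, g=None, u_hat=None):
    """Solve the optimality system a(u, v) = b(v) on the pixel grid.

    Unknowns are the stacked flow components (u1, u2); L is the grid
    Laplacian. The optional arrays g (Gaussian weights) and u_hat
    (bubble displacements, shape (2, H, W)) realize the bubble term.
    """
    Ix, Iy = np.gradient(I1.astype(float))
    It = I2.astype(float) - I1.astype(float)
    H, W = I1.shape
    n = H * W
    ix, iy, it = Ix.ravel(), Iy.ravel(), It.ravel()

    def lap1d(m):
        # 1D Laplacian with Neumann-type boundary rows.
        L = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m)).tolil()
        L[0, 0] = L[-1, -1] = 1
        return L.tocsr()

    L = sp.kron(lap1d(H), sp.eye(W)) + sp.kron(sp.eye(H), lap1d(W))
    gv = np.zeros(n) if g is None else g.ravel()
    A11 = sp.diags(ix * ix + beta * gv) + alpha * L
    A22 = sp.diags(iy * iy + beta * gv) + alpha * L
    A12 = sp.diags(ix * iy)
    A = sp.bmat([[A11, A12], [A12, A22]]).tocsc()
    y = np.concatenate([-ix * it, -iy * it])
    if u_hat is not None:
        y += beta * np.concatenate([gv * u_hat[0].ravel(),
                                    gv * u_hat[1].ravel()])
    z = spla.spsolve(A, y)
    return z[:n].reshape(H, W), z[n:].reshape(H, W)
```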

Iterative Methods
In some cases, the minimization of F via directly solving the underlying variational problem (3.14) as discussed in the previous section is either undesirable or infeasible, for example if the underlying images are too large. In this case, one can apply iterative methods for the minimization of F, which we discuss in this section. First of all, due to the representation of F given in (3.12), we can derive the continuity and differentiability results presented in the following two propositions.
Proposition 4.1. Let Assumption 3.1 hold. Then, for all u, v ∈ H^1(Ω)^2, there holds

|F(u) − F(v)| ≤ C ( 1 + ‖u‖_{H^1(Ω)^2} + ‖v‖_{H^1(Ω)^2} ) ‖u − v‖_{H^1(Ω)^2} ,

for a constant C > 0, and thus F is continuous on H^1(Ω)^2.
Proof. It follows from the representation (3.12), together with the symmetry of a(·,·) and Proposition 3.3, that

|F(u) − F(v)| = |a(u + v, u − v) − 2 b(u − v)| ≤ C ( 1 + ‖u‖_{H^1(Ω)^2} + ‖v‖_{H^1(Ω)^2} ) ‖u − v‖_{H^1(Ω)^2} ,

from which the assertion follows.

Proposition 4.2. Let Assumption 3.1 hold. Then F is twice continuously Fréchet differentiable on H^1(Ω)^2, with

F′(u) h = 2 ( a(u, h) − b(h) ) , and F″(u)(h, w) = 2 a(h, w) .

Proof. This is a direct consequence of the representation (3.12) of F in terms of the bilinear form a(·,·), the linear form b(·), and the constant c, together with the boundedness results from Proposition 3.3.
As a consequence of the above result, we get

Corollary 4.3. Let Assumption 3.1 hold. Then the functional F is strictly convex on H^1(Ω)^2.

Proof. Due to Proposition 4.2 and Proposition 3.3, there holds

F″(u)(h, h) = 2 a(h, h) > 0 , for all h ≠ 0 ,

and thus it now follows from [7, Proposition 17.3] that F is strictly convex.
The above results imply that any first- or second-order iterative method can be applied for the minimization of F, especially since, due to the strict convexity of F (see Corollary 4.3), the minimizer is unique. In particular, the first-order optimality condition F′(u) = 0 is both necessary and sufficient in this case (compare with Theorem 3.4). Since, by the Riesz representation theorem, there exists a unique element ∇F(u) ∈ H^1(Ω)^2 (the gradient) such that

F′(u) h = ( ∇F(u), h )_{H^1(Ω)^2} , for all h ∈ H^1(Ω)^2 ,

the first-order optimality condition is equivalent to ∇F(u) = 0 .
Applying, for example, the idea of fixed-point iteration to this equation yields the simple and well-known gradient descent method

u_{k+1} = u_k − ω_k ∇F(u_k) ,

where ω_k is a suitably chosen stepsize. For our particular problem, it follows from the explicit expression of F′(u)h that the gradient ∇F(u) ∈ H^1(Ω)^2 is given as the unique solution w of the variational problem

( w, v )_{H^1(Ω)^2} = 2 ( a(u, v) − b(v) ) , for all v ∈ H^1(Ω)^2 ,

and can therefore easily be computed numerically, using for example again a finite-element method, which in this case can be implemented very efficiently, since the bilinear form is now just the scalar product of H^1(Ω)^2.
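In a fully discretized setting, where F(z) = zᵀAz − 2yᵀz + c with the matrix A and vector y from Section 4.1, the gradient descent iteration takes the following generic form (a sketch with a fixed stepsize; convergence requires the stepsize to be small enough relative to the largest eigenvalue of A):

```python
import numpy as np

def gradient_descent(apply_A, y, z0, step, iters=500):
    """Minimize F(z) = z^T A z - 2 y^T z by gradient descent.

    apply_A computes A z for the symmetric positive definite matrix A
    arising from the discretized bilinear form; y discretizes the
    linear form. The gradient is F'(z) = 2 (A z - y), so the unique
    minimizer satisfies A z = y, mirroring a(u, v) = b(v).
    """
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(iters):
        z -= step * 2.0 * (apply_A(z) - y)
    return z
```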
Of course, there is a multitude of further iterative methods which can be applied to our minimization problem. The above considerations were mainly made to underline that, since the problem is well-behaved in many respects, most of them are applicable and should lead to good results.

Multi-Scale Approach
In this section, we consider the so-called multi-scale approach (see for example [26,31,32,33]), for improving the reconstruction quality of the proposed methods in a numerical implementation, in particular for larger displacement fields. As the name suggests, one considers the optical flow problem on various scales, i.e., levels of discretization, by a suitable downscaling of the input images. On each of the scales, one solves the optical flow problem and then combines the results from each scale to obtain a final optical flow estimation. The idea behind this approach is that by considering the problem on multiple scales, information about the flow on those scales can be extracted more reliably than from the original input images alone. For example, due to the inherent locality of their estimation approaches, many optical flow algorithms often end up producing flow fields on fine scales only, and are not able to capture larger displacements. The multi-scale approach is a way to produce flow fields capturing larger displacements as well.
Following the ideas and notation of [31], we start by fixing a number N of scales, indexed by s and ranging from the finest (original) scale s = 0 to the coarsest scale s = N − 1. Fixing some down-sampling factor η ∈ (0, 1), the image intensity function on scale s + 1, denoted by I^{s+1}(x, t), is then computed recursively via the formula

I^{s+1}(x, t) := (G_σ * I^s)(x/η, t) ,    (4.1)

where G_σ is a Gaussian filter kernel with standard deviation σ and I^0(x, t) is the image intensity function of the original image. The value of σ depends on η and is chosen as

σ(η) := σ_0 √(η^{−2} − 1) ,    (4.2)

where the authors of [31] suggest σ_0 := 0.6 from experience with numerical examples. The multi-scale approach then consists in solving the optical flow problem on scale s from the images I^s and using the obtained displacement field u^s as initial information for the computation on scale s − 1. Besides a method for solving the optical flow problem on any scale s, all that is additionally required for the application of the multi-scale approach is an up-scaling routine to transfer the obtained solution u^s to the finer scale s − 1. This can, for example, be achieved by a suitable interpolation routine. For the iterative solution approaches presented above, incorporating the multi-scale idea is straightforward. Starting from the coarsest scale, one applies the chosen iterative method until a suitable approximation of the displacement field is obtained. This field is then transferred to a finer grid, where it serves as the initial guess for the iteration on that scale, and so on.
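The construction of the image pyramid, with σ(η) = σ_0 √(η⁻² − 1) as in (4.2), can be sketched as follows (using Gaussian filtering and linear interpolation from `scipy.ndimage`; names are illustrative):

```python
import numpy as np
from scipy import ndimage

def build_pyramid(I, num_scales, eta=0.5, sigma0=0.6):
    """Multi-scale pyramid of an image: smooth with a Gaussian of
    std sigma(eta) = sigma0 * sqrt(eta**-2 - 1), then down-sample
    by the factor eta at each level."""
    sigma = sigma0 * np.sqrt(eta ** -2 - 1.0)
    pyramid = [I.astype(float)]
    for _ in range(num_scales - 1):
        smoothed = ndimage.gaussian_filter(pyramid[-1], sigma)
        pyramid.append(ndimage.zoom(smoothed, eta, order=1))
    return pyramid
```

For η = 0.5 and σ_0 = 0.6 this gives σ ≈ 1.0392, matching the value used in the numerical experiments below.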
For the direct methods discussed above, the multi-scale approach combines well with a multi-grid finite element approach [9]. For this, we consider a sequence (V_h^s)_{s=0}^{N−1} of nested finite element spaces and denote by a^s(·,·) and b^s(·) the bilinear and linear forms a(·,·) and b(·), in which ∇I and I_t are replaced by their Gaussian-filtered counterparts ∇I^s and I^s_t, respectively. On the coarsest level s = N − 1, one then computes the flow field u^{N−1} ∈ V_h^{N−1} as the solution of the variational problem

a^{N−1}(u^{N−1}, v) = b^{N−1}(v) , for all v ∈ V_h^{N−1} .

For all other scales 0 ≤ s ≤ N − 2, the flow field u^s ∈ V_h^s is computed recursively via

u^s = P^s_{s+1} u^{s+1} + h^s ,

where P^s_{s+1} denotes the transfer (prolongation) operator from V_h^{s+1} to V_h^s and h^s ∈ V_h^s is defined as the (unique) solution of the variational problem

a^s(h^s, v) = b^s(v) − a^s(P^s_{s+1} u^{s+1}, v) , for all v ∈ V_h^s .

The flow field u^0 computed on the finest scale V_h^0 is then chosen as the approximation to the solution of the displacement field estimation problem.

Numerical Results I: Displacement Estimation
In this section, we present a number of numerical examples based on both simulated and experimental data, demonstrating the usefulness of our proposed method for displacement field estimation. Numerical examples of material parameter reconstruction based on the obtained displacement fields are presented in Section 7 below. The presented results were obtained using the direct method introduced in Section 4.1 together with the multi-scale approach from Section 4.3. We used conforming, piecewise linear finite elements on a regular triangulation of the square (image) domain Ω. The implementation was done in Python using the finite-element library FEniCS [2] on a desktop computer running a Linux OS with an 8-core processor (Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz) and 62.9 GB RAM.
All our numerical results were obtained from two images I 1 and I 2 , as would be available from an OCT scan before and after compression of a sample. This means that the image intensity function I(x, t) is only given at two time points. Hence, for the numerical implementation we approximated the temporal derivative I t in the definition of the bilinear and linear forms a(·, ·) and b(·) by a backward difference quotient. The spatial gradient ∇I was approximated by ∇I 1 , which is computed as the spatial gradient of the finite-element function representing the image I 1 .

Simulated Data
In this section, we apply our displacement field estimation approach to two problems with simulated data, the first being an optical flow test problem and the second mimicking an experimental quantitative elastography setup.

Moving Squares
The first problem, which is for example used in [3,42] for testing optical flow estimation methods, is the problem of the moving squares. Therein, one considers two squares of constant image intensity moving towards each other. This is a difficult test problem, since the gradient of the images only contains information along the borders of the squares, and it admits a number of different plausible solutions. It is well-known that the reconstruction quality increases if for example the images are overlaid with an inhomogeneous background.
In order to test our proposed approach, we thus consider the two images I_1 and I_2 depicted in Figure 5.1, which show two moving squares with added bubbles, as they would for example result from an OCT scan. Also depicted in Figure 5.1 is the displacement field acting between the two images, which we want to recover. We want to show that utilizing the bubble motion information leads to an improved displacement field reconstruction. Hence, for the following test we assume that the displacement of all 50 bubbles in the test images is known. However, in order to model the unavoidable inaccuracies in the tracking process, we add 0.1% relative noise to the whole set of displacement vectors, which is already substantial compared to the magnitude of a single displacement vector.
For obtaining the results presented below, we use the following choice of parameters: For the smoothness regularization, we choose α = 0.8, and when utilizing the bubble information, we choose β = 4 as well as σ = 5 in the definition of g(x, x_i) in (3.4). For the multi-scale approach, we take 5 scale levels, σ_0 = 0.6, and the grid-downscaling parameter η = 0.5, which leads to σ(η) ≈ 1.03923 in formula (4.2). Figure 5.2 depicts the results of the displacement field estimation, for the four different combinations of not utilizing (β = 0) or utilizing (β > 0) the bubble information, and using or not using the multi-scale approach. Both the colour and the length of the displacement vectors in the figures correspond to the local magnitudes of the reconstructed displacement fields. One can clearly see that utilizing the bubble information leads to reconstructions of greatly improved quality. In particular, the general tendency of optical flow to capture only the movement of the boundary is counteracted. Both qualitative and quantitative features of the exact displacement field (compare with Figure 5.1) can be recovered accurately when utilizing the bubble displacement information. Furthermore, one can also see from the plots that using the multi-scale approach helps to obtain a more uniform field that is also closer in magnitude to the exact solution.

Simulated Elastography Setup
The second test problem which we now consider is designed to mimic an experimental elastography setup. We start with the rectangular sample with a circular inclusion depicted in Figure 5.3 (left), which we assume to be fixed on the bottom and freely moving on the sides. We apply a constant downward displacement of 10% of its size (corresponding to 20 pixels) on the top, which results in the deformed sample depicted in Figure 5.3 (middle). The corresponding displacement field, depicted in Figure 5.3 (right), which was used to deform the sample, was computed by solving the equations of linearized elasticity (6.2), with the Lamé parameters λ and µ as in Figure 7.1.
In order to simulate two OCT measurements, 200 randomly distributed bubbles were added to the sample and the resulting images before and after compression were rescaled to the interval [0, 1]. The movement of the bubbles between the images was also determined by the computed displacement field. The resulting simulated "OCT images" together with the corresponding bubble displacement vectors û_i are depicted in Figure 5.4.
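A simplified version of this simulation step might look as follows. Here the bubbles are rendered as Gaussian blobs and shifted by a uniform displacement instead of the elasticity solution used in the paper, so the function names, image size, and parameters are illustrative only; the rescaling to [0, 1] matches the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def render_bubbles(centers, shape=(128, 128), sigma=1.5):
    """Render bright Gaussian blobs ("bubbles") on a dark background and
    rescale the resulting intensity image to the interval [0, 1]."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for cy, cx in centers:
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return (img - img.min()) / (img.max() - img.min())

centers = rng.uniform(10, 118, size=(200, 2))   # 200 random bubble positions
displacement = np.array([5.0, 0.0])             # illustrative uniform shift
I_before = render_bubbles(centers)
I_after = render_bubbles(centers + displacement)
```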
In order to reconstruct the displacement field from the simulated OCT images, we employ our estimation procedure using the multi-scale approach. We chose the same parameters as for the moving squares example above (in particular β = 4), except for the spatial regularization parameter α, for which we now use the slightly larger choice of α = 4 compared to α = 0.8 in the previous test. The reconstructed displacement field is depicted in Figure 5.5 (left). Note that, similarly to the exact displacement field depicted in Figure 5.3 (right), this is only a sparse representation of the actual field, which in reality is eight times denser in each direction. For a higher-resolution image of the reconstructed displacement field see Figure 5.5 (middle). As can for example be seen from Figure 5.5 (right), the reconstructed field agrees well with the exact field, in particular in the internal part of the sample. The total relative error is 11.53%, with the relative errors in the x and y components being 9.2% and 6.96%, respectively. As the numerical examples in Section 7 below show, this is sufficiently accurate to obtain quantitative material parameter reconstructions.
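Relative errors of this kind can be computed as in the following sketch, assuming the standard relative L2 error; the array layout (a vector field stored as shape (ny, nx, 2)) and the example fields are illustrative.

```python
import numpy as np

def relative_errors(u_rec, u_exact):
    """Total and component-wise relative L2 errors of a 2D vector field
    stored as an array of shape (ny, nx, 2)."""
    total = np.linalg.norm(u_rec - u_exact) / np.linalg.norm(u_exact)
    per_comp = [np.linalg.norm(u_rec[..., i] - u_exact[..., i])
                / np.linalg.norm(u_exact[..., i]) for i in range(2)]
    return total, per_comp

u_exact = np.ones((4, 4, 2))
u_rec = 1.1 * u_exact               # uniform 10% overestimate
total, (err_x, err_y) = relative_errors(u_rec, u_exact)
```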

Experimental Data
After testing the proposed algorithm on simulated data, we now move to experimental data. We consider the volumetric OCT data whose rotational maximum-intensity projection (MIP) was already depicted in Figure 1.1 in the introduction. This data results from consecutive OCT imaging of a homogeneous sample which was compressed from above by a micrometer screw gauge. The sample itself is made from silicone rubber and is rotationally symmetric.
As a first test, we apply the bubble tracking algorithm presented in Section 2 to the two OCT images shown in Figure 1.1, the results of which are depicted in Figure 5.6.
In the left two images one can see the detected bubbles (circled in black) in the two images before and after compression. The threshold value in the algorithm was set so that only the bubbles of highest brightness would be detected. The right image depicts the resulting displacement vectors after the correspondence between the bubbles in the two images has been established by the algorithm. A visual inspection shows that the motion of the bubbles was reliably captured by our proposed tracking algorithm.

Figure 5.6: Bubbles (circled in black) detected in the OCT images before compression (left) and after compression (middle) using the approach of Section 2, together with the resulting bubble displacement (right). Rotational maximum intensity projections.
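A minimal version of such a threshold-based detection might look as follows. The actual algorithm of Section 2 may differ in its details (in particular in how the correspondence between the two images is established), so this is only an illustrative sketch using a local-maximum criterion above a brightness threshold.

```python
import numpy as np

def detect_bubbles(img, threshold):
    """Detect the brightest bubbles as strict local maxima above a
    threshold; interior pixels only, 8-neighbourhood comparison."""
    peaks = []
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            if (img[y, x] > threshold and img[y, x] == patch.max()
                    and (patch == img[y, x]).sum() == 1):
                peaks.append((y, x))
    return peaks

img = np.zeros((9, 9))
img[2, 2] = 1.0      # bright bubble, above threshold
img[6, 6] = 0.3      # dim bubble, below threshold
found = detect_bubbles(img, threshold=0.5)
```

Raising the threshold, as described in the text, restricts the detection to the brightest bubbles and thereby keeps the subsequent correspondence problem well-conditioned.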
Next, we apply our proposed algorithm to the complete 3D volumetric OCT data set, and obtain the results depicted in Figure 5.7. Many of the brightest bubbles were identified and tracked, resulting in a set of displacement vectors, which by themselves already form a physically plausible displacement field. In fact, using the rotational symmetry of the sample, and after applying a suitable interpolation, a complete displacement field can be obtained, which is already good enough for many further applications.
We now apply our displacement field estimation algorithm to the images depicted in Figure 1.1, using the same settings and parameters as for the simulated data considered in the previous section, but now with 6 instead of 5 layers in the multi-scale approach, since the images are of a higher resolution. When utilizing the bubble information, we use the displacement vectors of all 672 tracked bubbles depicted in Figure 5.7.
The resulting displacement fields are depicted in Figure 5.8, again in the four different combinations of utilizing/not utilizing the bubble information, and using/not using the multi-scale approach. Once more it is apparent that utilizing the bubble information greatly improves the quality of the reconstructed displacement fields. While essentially no correct displacement field could be obtained with the standard Horn-Schunck method (β = 0), including the bubble information yields physically meaningful fields that also match the expected magnitude of the displacement. The positive influence of the multi-scale approach, although not too pronounced in this case, again shows itself in the increased uniformity of the obtained field.
Finally, we investigate the influence of the parameter α on the reconstruction. Figure 5.9 depicts the results obtained for the exact same setting and parameters as before, but now for two different values of α. As one may expect, a higher value of α produces a smoother and more uniform displacement field, which is useful in physical situations where one expects the internal displacement field to be smooth anyway.

Application to Optical Coherence Elastography
In this section, we consider the application of our proposed displacement field estimation approach to quantitative material parameter estimation via optical coherence elastography. In particular, we are interested in obtaining quantitative estimates of the Young's modulus E of a sample from two OCT measurements before and after compression. The Young's modulus is connected to the Lamé parameters λ and µ via

    E = µ(3λ + 2µ) / (λ + µ).    (6.1)

The estimation of the Lamé parameters by iterative regularization methods was recently considered from both an analytical and a numerical viewpoint in [19]. Note that estimating the Lamé parameters λ and µ from a single displacement field in general only allows quantitative estimates of µ, but not of λ. Reconstructions also of the parameter λ were obtained in [19] from simulated data with the same iterative regularization approach as the one presented below, but for Lamé parameters with a different contrast and magnitude than the ones considered here. However, the Young's modulus E is not very sensitive to λ and thus, as we see below, quantitative estimates of E can still be obtained. If one also wanted to obtain quantitative reconstructions of the parameter λ, one could for example combine the data resulting from multiple elastography experiments conducted on the same sample, each with a different applied displacement.
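Formula (6.1) is straightforward to evaluate. The sketch below uses the initial-guess values (490, 10) from Section 7 as example inputs and also illustrates the insensitivity of E to λ mentioned above; the helper name is ours.

```python
def youngs_modulus(lam, mu):
    """Young's modulus from the Lamé parameters, formula (6.1):
    E = mu * (3*lam + 2*mu) / (lam + mu)."""
    return mu * (3 * lam + 2 * mu) / (lam + mu)

E = youngs_modulus(490.0, 10.0)
E_double_lambda = youngs_modulus(980.0, 10.0)   # doubling lambda barely changes E
```

For λ much larger than µ, formula (6.1) approaches E ≈ 3µ, which explains why a poor reconstruction of λ need not spoil the estimate of E.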
In the following, we first restate the mathematical model of linearized elasticity and the corresponding inverse problem of estimating the Lamé parameters from a measured displacement field. Afterwards, we recall the iterative regularization approach used in [19] for solving this inverse problem, which we then employ in Section 7 to obtain material parameter reconstructions for the simulated and the experimental sample from the displacement field estimations obtained in Section 5 above. Note that of course also other reconstruction methods could be used to extract the Lamé parameters from our proposed displacement estimation approach.

Mathematical Model of Linearized Elasticity
Mathematical models linking the material parameters of a sample to the internal and external forces present in an elastography experiment are commonly derived from the equations of motion [13,48]. Assuming that the sample is linear and isotropic, and that the deformation which the sample undergoes during the experiment is comparatively small, the model of linearized elasticity can be used. It can for example be supplied with mixed Dirichlet and Neumann boundary conditions to model applied displacement and surface traction, respectively.
Mathematically, let Ω be a non-empty, bounded, open, connected set in R^N, for N = 1, 2, 3, with a Lipschitz continuous boundary ∂Ω, which has two subsets Γ_D and Γ_T such that ∂Ω = Γ_D ∪ Γ_T, Γ_D ∩ Γ_T = ∅ and meas(Γ_D) > 0. Given body forces f, displacement data g_D, surface traction g_T and Lamé parameters λ and µ, the (forward) problem of linearized elasticity with displacement-traction boundary conditions consists in finding the displacement field u satisfying

    -div(σ(u)) = f  in Ω,    u = g_D  on Γ_D,    σ(u) n = g_T  on Γ_T,    (6.2)

where n is the outward unit normal vector of ∂Ω, and the stress tensor σ defining the stress-strain relation in Ω is defined by

    σ(u) := λ div(u) I + 2µ E(u),

where I is the identity matrix and E is the strain tensor defined by

    E(u) := (1/2)(∇u + (∇u)^T).

One typically considers the following weak formulation of this problem [9,12,19]: Find a function u ∈ V := {v ∈ H^1(Ω)^N : v|_{Γ_D} = 0} such that

    a_{λ,µ}(u, v) = l(v)  for all v ∈ V,    (6.3)

where Φ is a homogenization function satisfying Φ|_{Γ_D} = g_D, the full displacement field is recovered as u + Φ, and the bilinear form a_{λ,µ}(·,·) and the linear form l(·) are given by

    a_{λ,µ}(u, v) := ∫_Ω ( λ div(u) div(v) + 2µ E(u) : E(v) ) dx,

    l(v) := ∫_Ω f · v dx + ∫_{Γ_T} g_T · v ds - a_{λ,µ}(Φ, v).

For Lamé parameters satisfying standard positivity assumptions, the variational problem (6.3) admits a unique solution [9,12,19].
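For a displacement field sampled on a regular grid, the strain and stress tensors above can be approximated by finite differences as in the following sketch. This is a pixelwise approximation for illustration, not the finite-element discretization used in our implementation; the linear test field and parameter values are illustrative.

```python
import numpy as np

def strain_and_stress(u, lam, mu, h=1.0):
    """Strain E(u) = (grad u + grad u^T)/2 and stress
    sigma(u) = lam*div(u)*I + 2*mu*E(u) for a 2D displacement field
    u of shape (ny, nx, 2), via finite differences with grid spacing h."""
    # du[..., i, j] approximates the partial derivative d u_i / d x_j
    du = np.stack([np.stack(np.gradient(u[..., i], h), axis=-1)
                   for i in range(2)], axis=-2)
    E = 0.5 * (du + np.swapaxes(du, -1, -2))
    div_u = du[..., 0, 0] + du[..., 1, 1]
    sigma = lam * div_u[..., None, None] * np.eye(2) + 2 * mu * E
    return E, sigma

yy, xx = np.mgrid[0:8, 0:8].astype(float)
u = np.stack([0.01 * yy, 0.02 * xx], axis=-1)   # simple linear deformation
E, sigma = strain_and_stress(u, lam=1.0, mu=2.0)
```

For this linear field the finite differences are exact, so the computed strain is spatially constant, as expected from the definitions above.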

The Inverse Problem
After considering the forward problem of linearized elasticity in the previous section, we now turn our attention to the inverse problem, which consists in determining the Lamé parameters λ and µ from measurements of the displacement field u. Introducing for s > N/2 + 1 the parameter-to-solution map F, i.e., the nonlinear operator

    F : D(F) ⊆ H^s(Ω)^2 → H^1(Ω)^N,    (λ, µ) ↦ u,

where u is the solution of the variational problem (6.3), the inverse problem can be formulated as the problem of solving the nonlinear operator equation

    F(λ, µ) = u.    (6.4)

In practical applications, instead of the true displacement field u one only has access to noisy data/measurements u^δ, which for example satisfy the estimate

    ||u - u^δ|| ≤ δ,    (6.5)

where δ ≥ 0 denotes the noise level. Since the inverse problem is ill-posed and thus very sensitive even to a small amount of noise in the data, methods for obtaining (approximate) solutions of (6.4) need to be carefully chosen. One such choice, an iterative regularization approach based on Landweber iteration, which was used for obtaining the numerical results presented in Section 7, is discussed below.

Iterative Regularization Approach
One of the most well-known iterative regularization methods for solving nonlinear inverse problems is Landweber iteration [15,21], given by

    (λ^δ_{k+1}, µ^δ_{k+1}) = (λ^δ_k, µ^δ_k) + ω^δ_k F'(λ^δ_k, µ^δ_k)^* ( u^δ - F(λ^δ_k, µ^δ_k) ),    (6.6)

where k is the iteration index and ω^δ_k is a sequence of stepsizes. As a stopping rule one typically uses the discrepancy principle, which determines the stopping index k_* as the smallest index such that

    ||u^δ - F(λ^δ_{k_*}, µ^δ_{k_*})|| ≤ τδ,    (6.7)

where δ is as in (6.5) and τ > 1 is a constant. For the stepsizes ω^δ_k one can for example use a suitably chosen constant, the minimal error, or the steepest descent stepsize [38], the latter being given by

    ω^δ_k := ||s^δ_k||^2 / ||F'(λ^δ_k, µ^δ_k) s^δ_k||^2,  where  s^δ_k := F'(λ^δ_k, µ^δ_k)^* ( u^δ - F(λ^δ_k, µ^δ_k) ).    (6.8)

In order to guarantee convergence of this, and in fact most other iterative regularization methods, one typically has to use the so-called tangential cone condition or one of its weaker variants, which are often very difficult to show for non-academic examples. Fortunately, for our present problem, if one knows the Lamé parameters in an (arbitrarily) small neighbourhood of the boundary and instead of (6.4) considers

    F_c(λ, µ) = u,

where the operator F_c incorporates the knowledge of the Lamé parameters near the boundary, then even the strong tangential cone condition holds. This guarantees the convergence of (6.6) with F replaced by F_c when combined with the discrepancy principle (6.7). For more details on this matter we refer the reader to [19].
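For a linear toy model F(x) = Ax, the iteration (6.6) with the steepest descent stepsize (6.8) and the discrepancy principle (6.7) reduces to the following sketch; the matrix, noise values, and function name are illustrative only.

```python
import numpy as np

def landweber(A, y_delta, delta, tau=1.5, x0=None, max_iter=500):
    """Landweber iteration for the linear model F(x) = A x, with the
    steepest-descent stepsize and the discrepancy principle as stopping
    rule: stop at the first k with ||y_delta - A x_k|| <= tau * delta."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for k in range(max_iter):
        r = y_delta - A @ x                 # current residual
        if np.linalg.norm(r) <= tau * delta:
            break
        s = A.T @ r                         # adjoint applied to the residual
        w = (s @ s) / ((A @ s) @ (A @ s))   # steepest-descent stepsize (6.8)
        x = x + w * s
    return x, k

A = np.array([[2.0, 0.0], [0.0, 1.0]])
x_true = np.array([1.0, -1.0])
y_delta = A @ x_true + np.array([1e-3, -1e-3])   # noisy data, ||noise|| <= delta
x_rec, k_stop = landweber(A, y_delta, delta=2e-3)
```

Stopping once the residual reaches the noise level prevents the iterates from fitting the noise, which is the regularizing effect of the discrepancy principle.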
In order to implement the iteration (6.6) and the stepsize (6.8), one needs explicit expressions for the Fréchet derivative F'(λ, µ)(h_λ, h_µ) and its adjoint F'(λ, µ)^* w. These have been given in [19] and involve the solution of a number of large-scale variational problems, which can be very costly and time consuming. Even though parallelisation can be used to reduce the computational time required for solving these variational problems, the convergence of (6.6) may still be too slow. Hence, in order to speed up the iteration, we employ Nesterov's acceleration principle, which leads to the iteration

    (λ̄^δ_k, µ̄^δ_k) = (λ^δ_k, µ^δ_k) + α^δ_k ( (λ^δ_k, µ^δ_k) - (λ^δ_{k-1}, µ^δ_{k-1}) ),
    (λ^δ_{k+1}, µ^δ_{k+1}) = (λ̄^δ_k, µ̄^δ_k) + ω^δ_k F'(λ̄^δ_k, µ̄^δ_k)^* ( u^δ - F(λ̄^δ_k, µ̄^δ_k) ),    (6.9)

where also ω^δ_k is computed using the intermediate iterates (λ̄^δ_k, µ̄^δ_k). In the numerical results presented below, we used the popular choice of α^δ_k := (k - 1)/(k + 2); see for example [20]. For the solution of nonlinear inverse problems, iteration methods of the form (6.9) have been studied for various choices of the parameters α^δ_k in [17,18], and convergence was established essentially under the same tangential cone condition as needed for the convergence of Landweber iteration.
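A minimal sketch of the Nesterov-accelerated iteration (6.9), again for a linear toy model F(x) = Ax, with α_k = (k - 1)/(k + 2) and, for simplicity, a constant stepsize 1/||A||² instead of the steepest descent stepsize; all concrete values are illustrative.

```python
import numpy as np

def nesterov_landweber(A, y, n_iter=500):
    """Landweber-type iteration accelerated by Nesterov's principle with
    alpha_k = (k - 1)/(k + 2), for the linear model F(x) = A x; a constant
    stepsize 1/||A||_2^2 is used for simplicity."""
    w = 1.0 / np.linalg.norm(A, 2) ** 2
    x_prev = x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        alpha = (k - 1) / (k + 2)
        z = x + alpha * (x - x_prev)            # intermediate iterate
        x_prev, x = x, z + w * (A.T @ (y - A @ z))
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
x_true = np.array([0.5, -0.25])
x_rec = nesterov_landweber(A, A @ x_true)       # exact data for illustration
```

Note that both the derivative and its adjoint are evaluated at the intermediate iterate z, mirroring the role of (λ̄^δ_k, µ̄^δ_k) in (6.9).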
In case the noise level δ is unknown or estimates of it are unreliable, then instead of the discrepancy principle one can use heuristic stopping rules such as the heuristic discrepancy principle [23,24], which determines the stopping index k_* as a minimizer of

    k ↦ √k ||u^δ - F(λ^δ_k, µ^δ_k)||,

and has been successfully employed for a wide range of both linear and nonlinear inverse problems. Even if the noise level δ is known, using the discrepancy principle can remain a difficult task due to the freedom which one has in choosing the parameter τ. Popular choices such as τ = 1.1 are often not feasible, for example when the residual cannot be brought to the level of τδ (in an acceptable amount of time) in practice. Furthermore, theoretically backed choices depend on parameters that are typically either unknown or overestimated [21]. Hence, for the numerical results presented below, we used the (heuristic) discrepancy principle together with a monitoring of the residual and the reconstruction quality as an informed guideline to manually determine a suitable stopping index for each test. It was shown in [19, Theorem 3.4] that the operator F'(λ, µ)^* can be written as the composition of two operators E_s and A, where E_s basically corresponds to the adjoint of the embedding operator from H^s(Ω)^2 to L^2(Ω)^2.
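The heuristic rule can be sketched as follows, assuming the Hanke-Raus-type functional √k times the residual norm (the precise functional in [23,24] may differ); the residual sequence is illustrative of the typical behaviour where the residual decays and then stagnates at the noise level.

```python
import numpy as np

def heuristic_stopping_index(residual_norms):
    """Heuristic discrepancy principle: choose the (1-based) iteration
    index k minimizing sqrt(k) * ||residual_k||, without using delta."""
    k = np.arange(1, len(residual_norms) + 1)
    return int(np.argmin(np.sqrt(k) * np.asarray(residual_norms))) + 1

residuals = [1.0, 0.5, 0.2, 0.1, 0.09, 0.089, 0.0889]
k_star = heuristic_stopping_index(residuals)
```

The √k factor penalizes late stopping, so the rule balances residual decrease against the growing propagated noise, without requiring knowledge of δ.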
The numerical examples presented in [19] showed that dropping the operator E_s in iteration (6.6) or (6.9) leads to improved Lamé parameter reconstructions, in which larger jumps are better resolved. Mathematically, this can be understood as a (Hilbert scale) preconditioning of the problem; see for example [15]. Hence, for the numerical examples presented below, we always dropped the operator E_s in our iterative reconstruction approach (6.9).

Numerical Results II: Parameter Estimation
In this section, we employ the iterative regularization approach discussed in Section 6.3 to obtain material parameter reconstructions for the simulated and the experimental sample from the displacement field estimations obtained in Section 5 above.

Simulated Data
In this section, we consider the simulated elastography setup from Section 5.1.2. The results of our iterative reconstruction approach using the operator F, applied to both the exact and the estimated displacement field (see Figure 5.5), are depicted in Figure 7.2. Starting from the uniform initial guess (λ_0, µ_0) = (490, 10), the iteration was stopped after 85 and 27 iterations, taking about 30 and 10 minutes, respectively, and the Young's modulus E was computed from the reconstructed Lamé parameters via (6.1).
Comparing the results with Figure 7.1, one can clearly see that the reconstructed parameters µ and E agree well with the exact parameters both in shape and in magnitude. Even though, as noted above, the relative error in the estimated displacement field is 11.53%, we still manage to obtain quantitative parameter reconstructions. In particular, the difficulty in recovering the Lamé parameter λ from just a single displacement field does not prohibit a quantitative reconstruction of the Young's modulus E.

Next, we assume that the Lamé parameters λ and µ are known in a small neighbourhood of the boundary of the sample. Hence, we can use the operator F_c instead of the operator F in our iterative reconstruction approach, which we again apply to both the exact and the estimated displacement field. The results after 88 and 3 iterations, taking about 30 and 1 minute, respectively, are depicted in Figure 7.3. As before, we obtain quantitative reconstructions of the Lamé parameter µ and the Young's modulus E, which are not deteriorated by the inability to reconstruct λ from only a single displacement field. Note that the obtained parameter reconstructions using the operator F_c, in particular the ones derived from the exact displacement field, better resolve the (circular) shape of the inclusion.

Experimental Data
After considering simulated data in the previous section, we now turn our attention to real data stemming from an OCT elastography experiment. In particular, we apply our iterative reconstruction approach to the homogeneous sample for which we have already estimated the displacement field in Section 5 above.
The resulting reconstructions of the Lamé parameters λ and µ as well as the Young's modulus E, obtained after 10 iterations taking about 10 minutes and given in kPa, are depicted in Figure 7.4. One can see that in the inner part of the sample a relatively uniform parameter reconstruction, in particular for µ and E, was obtained. The steep rise towards the edges of the sample is either an artefact due to the dropping of the (smoothing) operator E_s in the iterative reconstruction method, or caused by the fact that the estimated displacement field becomes somewhat inaccurate towards the sides of the sample due to the lack of detected bubbles. Computing the mean values over the inner part of the sample (pixels 300 to 700 in x-direction and 200 to 600 in y-direction), we get the parameter estimates µ = 342 kPa and E = 1015 kPa, with standard deviations of 49 kPa and 143 kPa, respectively.
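The interior statistics correspond to a computation of the following form; only the pixel window matches the text, while the synthetic parameter map (uniform interior, inflated edge values mimicking the observed artefacts) is purely illustrative.

```python
import numpy as np

def interior_statistics(param_map, x_range=(300, 700), y_range=(200, 600)):
    """Mean and standard deviation of a reconstructed parameter map over
    an interior window, excluding the less reliable boundary region."""
    inner = param_map[y_range[0]:y_range[1], x_range[0]:x_range[1]]
    return float(inner.mean()), float(inner.std())

# synthetic map: uniform interior, inflated values near the top/bottom edges
mu_map = np.full((800, 1000), 342.0)
mu_map[:100, :] = mu_map[-100:, :] = 900.0
mean, std = interior_statistics(mu_map)
```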
These results were obtained by applying the iterative reconstruction approach presented in Section 6.3 using the operator F to the estimated displacement field depicted in Figure 5.9 (right). We are currently investigating the efficiency and accuracy of our displacement field estimation and material parameter reconstruction methods on inhomogeneous samples as well.

Conclusion
We presented a displacement field estimation approach applicable to OCT images which utilizes the additional information of speckles present in the OCT images. Apart from a concise analysis of the proposed method, we provided a number of numerical examples demonstrating the usefulness of our approach, in particular when applied to obtain quantitative material parameter estimates in optical coherence elastography.