Low-field magnetic resonance imaging using multiplicative regularization

In this paper we present a magnetic resonance imaging (MRI) technique that is based on multiplicative regularization. Instead of adding a regularizing objective function to a data fidelity term, we multiply by such a regularizing function. By following this approach, no regularization parameter needs to be determined for each new data set that is acquired. Reconstructions are obtained by iteratively updating the images using short-term conjugate gradient-type update formulas and Polak-Ribière update directions. We show that the algorithm can be used as an image reconstruction algorithm and as a denoising algorithm. We illustrate the performance of the algorithm on two-dimensional simulated low-field MR data that is corrupted by noise and on three-dimensional measured data obtained from a low-field MR scanner. Our reconstruction results show that the algorithm effectively suppresses noise and produces accurate reconstructions even for low-field MR signals with a low signal-to-noise ratio.


Introduction
In magnetic resonance imaging (MRI), the internal structure of the human body is visualized using magnetic fields. To form an image, commercial MR scanners employ strong magnetic background fields with field strengths ranging from 1.5 T to 7 T. Superconducting magnets are used to generate such strong background fields, which obviously adds to the cost, size, and infrastructure demands of high-field MR scanners. In fact, the overall costs of present-day MR scanners are so high that they are essentially out of reach for low-income and middle-income countries.
Low-field MR scanners, as described in for example [1][2][3], may provide a solution to this problem. Compared with the commercial scanners mentioned above, a low-field scanner has a much weaker background field (typically in the centi- or millitesla range) and MR signal quality is reduced. However, a low-field scanner does not require any superconducting magnets, and construction and maintenance costs are therefore significantly lower (apart from additional cost reductions that may be achieved). Moreover, low-field scanners may be portable as well, which enables us to bring these scanners as diagnostic tools to rural areas in developing countries. For a review on low-field MRI, the reader is referred to [4].
A direct consequence of having a lower background field is that low-field MR scanners inevitably yield significantly noisier signals than their high-field counterparts. Moreover, depending on the type of low-field scanner that is used, the background field may not be (sufficiently) uniform throughout the object, and the gradient fields that are used for imaging may not be linear or are only approximately linear within some subdomain of the object or body part that we want to image. Loosely speaking, a uniform background field and linear gradient fields ultimately lead to a Fourier transform representation of the measured MR signals, and MR imaging essentially amounts to applying an inverse Fourier transform to the measured data. Deviations from these ideal background and gradient fields lead to imaging artifacts when an inverse Fourier transform is applied, as can be seen in [5]. Even when the background field and gradient fields can be considered constant and linear throughout the object, noisy reconstructions are still obtained, since the acquired signals usually have a low signal-to-noise ratio (SNR).
There is a vast literature on image reconstruction in MRI. An excellent overview of many relevant techniques is given in [6]. In the case of nonlinear gradients, a model-based image reconstruction as described in [7,8], among others, can be employed instead of the standard inverse Fourier transform. In this paper, we will make use of this model-based image reconstruction as well. However, other methods exist that correct for nonlinear magnetic fields as a post-processing step, i.e., on the image obtained by applying a standard technique like the inverse Fourier transform to the measured data. In [9], for example, spherical harmonic deconvolution methods are used to achieve this result.
In this paper, we employ an MR imaging technique that, using model-based image reconstruction, addresses the aforementioned issues of imaging using nonlinear magnetic fields and the contamination of the signal by noise. Specifically, we pose our low-field imaging problem as an optimization problem that minimizes a (least-squares) data fidelity term in which nonuniform background fields and nonlinear gradients are taken into account through a generalized signal model. Furthermore, noise effects are suppressed by incorporating a weighted L2-norm total variation objective function into the optimization procedure. Total variation regularization penalizes jumps between neighboring pixels. The additive variant is often used in MR imaging to denoise images while still maintaining their edges, see for example [10][11][12][13], among many others.
It is customary to include such a regularization term in an additive manner in an optimization framework. However, one of the main drawbacks of such an additive scheme is that an artificial regularization parameter must be chosen that balances the data fidelity and regularization terms. While methods for choosing this parameter exist, see for example [14], they are often computationally expensive and do not allow for fast (real-time) imaging. Moreover, a regularization parameter typically needs to be determined for each newly acquired data set, which leads to even larger computation times in case multiple data sets need to be processed.
Inspired by the success of multiplicative regularization in wave field inversion, see [15][16][17][18], for example, we include regularization in a multiplicative manner as well. In the resulting iterative imaging algorithm, the image is updated using Polak-Ribière conjugate gradient directions, see for example [19]. Two practical advantages of this scheme are that no effective regularization parameter needs to be computed and that reconstruction results can be monitored as the scheme progresses. A similar multiplicative regularization approach was applied to image deblurring problems in [20].
We apply our imaging method to low-field noise-corrupted simulated data using nonlinear gradient fields and to measured data obtained with the low-field scanner shown in Fig. 1. This scanner consists of 23 rings spaced 22 mm apart. The total length of the scanner is 50.6 cm and the bore has a diameter of 27 cm. Each ring in the array consists of two concentric layers of cubic neodymium iron boron magnets. The magnets have a side length of 12 mm and the total number of magnets is close to 3000. The magnets are arranged in a Halbach configuration such that an approximately uniform background field is realized within the bore of the scanner. This background field points in a transverse direction (from left to right inside the bore in the photograph of Fig. 1), as opposed to the longitudinal direction that is common in commercial high-field scanners. Consequently, the gradient coils of the scanner have to be redesigned as well. A gradient coil especially designed for the low-field scanner that produces an approximately linear gradient in the longitudinal direction inside the bore is also shown in Fig. 1. Further details about the design and realization of the scanner and the gradient coils can be found in [2,21].
For different types of fruit (apple, melon) we reconstruct an image using data collected with the MR scanner described above. Moreover, the effects of a nonlinear frequency encoding gradient are also studied using noise-corrupted simulated data. We resort to simulated data in this case, since we do not have measured background or gradient fields available. We do stress, however, that if such measured fields are available, they can easily be incorporated into our imaging scheme. Imaging results will be presented for a regularizing objective function based on the minimization of a weighted total variation term. We demonstrate that the method indeed effectively suppresses noise for a given data set without any extra computations to determine a regularization parameter.
This paper is organized as follows. In Section 2 we introduce the continuous and discrete signal models and discuss the multiplicative regularization approach. We then present the multiplicatively regularized MR imaging algorithm and show that it can be used as an image reconstruction algorithm, as a denoising algorithm, or both. In Section 3 we illustrate the performance of our imaging algorithm on simulated, noise-corrupted data and on measured data obtained with the low-field scanner. Finally, the conclusions can be found in Section 4.

Methods
In Section 2.1 we introduce the signal model and discuss the multiplicative regularization approach. Section 2.2 contains the details of the numerical discretization that is used. After that, in Section 2.3, the multiplicatively regularized MR imaging algorithm is presented.

Signal modeling and regularization
The starting point of our imaging procedure is the voltage signal v(t) that is measured by a receive coil of the low-field scanner. Assuming that the complete object is excited, we have for this signal the representation

v(t) = \int_{\Omega} X(\mathbf{r}) \, e^{-i [\Delta\omega_0(\mathbf{r}) t + \mathbf{k}(\mathbf{r}, t) \cdot \mathbf{r}]} \, dV,

where \Delta\omega_0(\mathbf{r}) = \gamma B_0(\mathbf{r}) - \omega_{mod} is the difference between the local Larmor frequency \gamma B_0(\mathbf{r}) and the modulation frequency \omega_{mod}, with \gamma the proton gyromagnetic ratio. Furthermore, X(\mathbf{r}) is the image having the bounded domain \Omega as its support. The image X depends on the proton density, coil sensitivity, etc., and is expressed in [V m^{-3}], but its particular form is irrelevant for our imaging purposes. Explicit expressions for X can be found in [22], for example. Finally,

\mathbf{k}(\mathbf{r}, t) = \gamma \int_0^t \mathbf{G}(\mathbf{r}, \tau) \, d\tau

is the generalized Fourier vector, where \mathbf{G}(\mathbf{r}, t) is the gradient vector due to the application of the gradient coils of the scanner. The above model takes background field inhomogeneities into account through \Delta\omega_0(\mathbf{r}), while nonuniformities in the gradient fields are taken into account through the spatially dependent gradient vector \mathbf{G}. In case the background field can be considered uniform throughout the object, we have \Delta\omega_0 = 0, where we have taken \omega_{mod} = \gamma B_0, and if the gradient vector does not depend on position, then \mathbf{k}(\mathbf{r}, t) = \mathbf{k}(t). Clearly, in this case the signal v(t) = \hat{X}[\mathbf{k}(t)] is a three-dimensional spatial Fourier transform of the image X(\mathbf{r}).

The data used for imaging are time samples of the voltage signal. Introducing the time instances t_n = (n - 1)\Delta t for n = 1, 2, …, N with \Delta t > 0 and taking into account that the image X has the domain \Omega as its support, we may write

d(t_n) = \int_{\Omega_{im}} X(\mathbf{r}) \, e^{-i [\Delta\omega_0(\mathbf{r}) t_n + \mathbf{k}(\mathbf{r}, t_n) \cdot \mathbf{r}]} \, dV \quad (5)

for n = 1, 2, …, N, where \Omega_{im} is the smallest cube that completely surrounds the domain \Omega such that X vanishes on its boundary. We refer to \Omega_{im} as the imaging domain or Field of View (FOV). The side length of this domain is denoted by ℓ > 0 and \Omega_{im} has a volume V = ℓ^3. Before discretizing the volume integral in the above signal model, it is convenient to first normalize the spatial coordinates. Specifically, writing \mathbf{r} = ℓ\mathbf{r}' with \mathbf{r}' \in u = [0, 1]^3 the unit cube, we have

d(t_n) = V \int_{u} X'(\mathbf{r}') \, e^{-i [\Delta\omega_0'(\mathbf{r}') t_n + \mathbf{k}'(\mathbf{r}', t_n) \cdot \mathbf{r}']} \, d\mathbf{r}', \quad (6)

where \Delta\omega_0'(\mathbf{r}') = \Delta\omega_0(ℓ\mathbf{r}'), \mathbf{k}'(\mathbf{r}', t_n) = ℓ\mathbf{k}(ℓ\mathbf{r}', t_n) is the scaled generalized Fourier vector, and X'(\mathbf{r}') = X(ℓ\mathbf{r}'). From now on we drop the primes and work with normalized spatial coordinates only.
Subsequently, we discretize the unit cube by subdividing this domain into nonoverlapping voxels, where each voxel has positive side lengths \Delta x, \Delta y, and \Delta z in the x-, y-, and z-direction, respectively, such that P\Delta x = Q\Delta y = R\Delta z = 1 with P, Q, and R positive integers. The volume of a voxel is denoted by V_\Delta = \Delta x \Delta y \Delta z and the centers of the voxels have coordinates (x_p, y_q, z_r) with

x_p = \Delta x/2 + (p - 1)\Delta x, \quad y_q = \Delta y/2 + (q - 1)\Delta y, \quad z_r = \Delta z/2 + (r - 1)\Delta z, \quad (7)

for p = 1, 2, …, P, q = 1, 2, …, Q, and r = 1, 2, …, R. Having introduced the voxelization of our imaging domain, we discretize the integral in (6) using the composite midpoint rule and obtain

\mathbf{d} = \mathbf{A}\mathbf{x},

where \mathbf{x} is the vector that contains the image values at the voxel centers and \mathbf{A} is the matrix representation of the integral operator in Eq. (6). In case of uniform background and gradient fields and N = PQR, we have \mathbf{A} = V_\Delta \mathbf{F}, where \mathbf{F} is the 3D Discrete Fourier Transform (DFT) matrix that satisfies \mathbf{F}^{-1} = (PQR)^{-1}\mathbf{F}^H, from which it follows that in this case

\mathbf{x} = (V_\Delta PQR)^{-1} \mathbf{F}^H \mathbf{d}. \quad (10)

In practice, the measured data contain noise and the data \tilde{\mathbf{d}} that we actually have available are assumed to be of the form \tilde{\mathbf{d}} = \mathbf{d} + \mathbf{n}, where \mathbf{n} is a noise vector. In high-field MRI, the noise level is much lower than in low-field MRI, and as a first approximation the background field and gradient fields can be taken to be uniform (independent of the position vector), especially compared with a low-field setting. Simply applying an inverse 3D Fourier transform to the measured signal therefore generally yields images of excellent quality in high-field MRI. In a low-field setting, however, the background and gradient fields may not be uniform within the complete object and, as mentioned earlier, the acquired signals typically have a much lower SNR. Consequently, even when the background and gradient fields can be considered uniform, a straightforward application of an inverse Fourier transform to the measured data will in general lead to very noisy reconstructions.
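As a minimal illustration of this discretized model, the following Python sketch (hypothetical one-dimensional sizes; all variable names are ours, not the paper's) builds the forward-model matrix A for a uniform background field and a linear gradient, and checks that it then behaves as a scaled DFT:

```python
import numpy as np

P = 8
dx = 1.0 / P                              # voxel size on the unit interval
xc = dx / 2 + np.arange(P) * dx           # voxel centers (midpoint rule, cf. Eq. (7))
kn = 2 * np.pi * np.arange(P)             # k-values for a linear gradient

# Forward-model matrix A: one row per time sample, one column per voxel,
# d_n = dx * sum_p X(x_p) exp(-i k_n x_p) in this uniform-field case.
A = dx * np.exp(-1j * np.outer(kn, xc))

# With uniform fields, A is (up to a half-voxel phase) a scaled DFT matrix,
# so A A^H is proportional to the identity.
gram = A @ A.conj().T
```

With nonuniform fields, the exponent additionally contains the position-dependent terms of Eq. (6) and the Fourier structure is lost, which is exactly why a model-based reconstruction is needed.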
To address both of these issues, we pose the image reconstruction problem as an optimization problem and minimize an objective function that consists of a data fidelity term, describing the mismatch between observed and modeled data, and a regularizing term, which suppresses the influence of noise on the reconstructions. In particular, in many regularized optimization methods the objective function that needs to be minimized is of the form

F(\mathbf{x}) = F_{data}(\mathbf{x}) + \lambda F_{reg,add}(\mathbf{x}), \quad (11)

where F_{data}(\mathbf{x}) is the data fidelity term, F_{reg,add}(\mathbf{x}) some regularizing function (often chosen to be a variant of the total variation (TV) operator), and \lambda is a regularization parameter. The main drawback of using objective functions of the form (11) is that to reconstruct an image, the artificial regularization parameter \lambda needs to be selected for each new data set that is acquired. Strategies for choosing this parameter exist, of course (such as the L-curve method [23]), but such approaches are computationally demanding and generally require extensive numerical experimentation for each new available data set. Instead of using an additive approach, we follow [15,16,18,20], for example, and set up an iterative reconstruction algorithm that is based on multiplicative regularization, in which at each iteration the objective function is of the form

F(\mathbf{x}) = F_{data}(\mathbf{x}) \, F_{reg,mult}(\mathbf{x}). \quad (12)

Here, F_{reg,mult} is the regularizing function, which is, in general, of a different form than the regularizer in Eq. (11). Specifically, for multiplicative regularization, it is usually defined such that F_{reg,mult}(\mathbf{x}) = 1 at an optimal point. A major advantage of such a multiplicative approach is that no extra computations are required to determine an effective regularization parameter. Here, we focus on the application of multiplicative regularization to invert low-field MR data. For theoretical properties of multiplicative regularization, the reader is referred to [16,24,25], for example.
As a first step we introduce the standard 2-norm data misfit objective function

F_{data}(\mathbf{x}) = \| \mathbf{b} - \mathbf{A}\mathbf{x} \|^2, \quad (13)

where \mathbf{b} = V^{-1} \tilde{\mathbf{d}} is the scaled and noisy data vector. Subsequently, we set up our iterative scheme and assume that at the kth iteration we have some approximation \mathbf{x}_{k-1} of the image available. The next iterate \mathbf{x}_k is now constructed by minimizing the objective function

F_{k-1}(\mathbf{x}) = F_{data}(\mathbf{x}) \, F^{TV}_{k-1}(\mathbf{x}), \quad (14)

where

F^{TV}_{k-1}(X) = \frac{1}{V} \int_{u} \frac{|\nabla X(\mathbf{r})|^2 + \delta_{k-1}^2}{|\nabla X_{k-1}(\mathbf{r})|^2 + \delta_{k-1}^2} \, dV. \quad (15)

Here, X and X_{k-1} are the continuous counterparts of \mathbf{x} and \mathbf{x}_{k-1}, respectively, and \delta_{k-1}^2 is a positive steering parameter that is updated in every iteration based on the current data misfit (Eq. (16)). In our algorithm, we only work with this particular choice for \delta_{k-1}^2, but other choices are possible as well, see [15,17]. In each iteration, Eq. (14) is minimized. This choice of F^{TV} promotes solutions that are piecewise constant. Note that if this process converges, we have \lim_{k\to\infty} \mathbf{x}_{k-1} = \mathbf{x} and hence \lim_{k\to\infty} \nabla X_{k-1} = \nabla X, which means that \lim_{k\to\infty} F^{TV}_{k-1}(\mathbf{x}_k) = 1.

For ill-posed problems, as encountered in wavefield imaging and contrast source inversion (CSI), for example, the total objective function typically shows a convergence behavior as described in [15,16]. Specifically, in CSI the data functional is typically large at the start of the iterative process, giving a large weight to the regularizing functional. As the iterative process progresses, the regularizing functional will start to approach a value of 1 and the focus will shift towards minimizing the data functional, as illustrated in [26]. However, here we consider imaging problems which are well-posed (Fourier transform) or at least less ill-posed than in CSI, which means that, given a reasonable initial guess, the data functional will be small. Therefore, we cannot expect the same behavior, except that if the process converges, the regularizing functional will converge to 1.
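To make the structure of the multiplicative objective concrete, here is a small Python sketch (hypothetical one-dimensional sizes, a stand-in forward model, and one possible steering choice for δ², not necessarily the paper's Eq. (16)) that evaluates the data misfit and the weighted-L2 TV factor, and checks that the TV factor equals 1 at x = x_{k−1}:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 16
A = np.fft.fft(np.eye(P)) / P            # stand-in forward model (scaled DFT)
b = A @ rng.standard_normal(P)           # noiseless "data" for illustration

def f_data(x):                           # 2-norm data misfit
    return np.linalg.norm(b - A @ x) ** 2

def grad1d(x):                           # forward difference, zero right boundary
    return np.diff(np.append(x, 0.0))

def f_tv(x, x_prev, delta2):             # weighted-L2 TV factor, cf. Eq. (15)
    gx, gp = grad1d(x), grad1d(x_prev)
    return np.mean((np.abs(gx) ** 2 + delta2) / (np.abs(gp) ** 2 + delta2))

x_prev = rng.standard_normal(P)          # current iterate x_{k-1}
delta2 = f_data(x_prev)                  # steering parameter (assumed choice)
f_mult = f_data(x_prev) * f_tv(x_prev, x_prev, delta2)
```

Because the TV factor is normalized by the previous iterate, the product objective reduces to the bare data misfit at x = x_{k−1}, which is the defining property of the multiplicative regularizer.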

Numerical discretization
To arrive at the discretized counterpart F^{TV}_{k-1}(\mathbf{x}) of F^{TV}_{k-1}(X), we use the weighting function

w_{k-1}(\mathbf{r}) = \left[ V \left( |\nabla X_{k-1}(\mathbf{r})|^2 + \delta_{k-1}^2 \right) \right]^{-1} \quad (17)

and write Eq. (15) as

F^{TV}_{k-1}(X) = \int_{u} w_{k-1}(\mathbf{r}) \left[ |\nabla X(\mathbf{r})|^2 + \delta_{k-1}^2 \right] \, dV, \quad (18)

where \nabla = (\partial_x, \partial_y, \partial_z)^T is the nabla operator. Discretizing the integrals in (18) using the composite midpoint rule, we obtain

F^{TV}_{k-1}(\mathbf{x}) = V_\Delta \sum_{p,q,r} w_{k-1;p,q,r} \left[ |(\nabla X)_{p,q,r}|^2 + \delta_{k-1}^2 \right]. \quad (19)

The partial derivatives in the gradient operator on the right-hand side of Eq. (19) are approximated by two-point forward or backward difference formulas. For example, for the partial derivative with respect to x we either use the forward or the backward difference formula

\left. \frac{\partial X}{\partial x} \right|_{p,q,r} \approx \frac{X_{p+1,q,r} - X_{p,q,r}}{\Delta x} \quad \text{or} \quad \left. \frac{\partial X}{\partial x} \right|_{p,q,r} \approx \frac{X_{p,q,r} - X_{p-1,q,r}}{\Delta x}, \quad (20)

and similar forward or backward difference formulas are used for the partial derivatives with respect to the y- and z-coordinate. The difference formulas are used for p = 1, 2, …, P, q = 1, 2, …, Q, and r = 1, 2, …, R along with the homogeneous boundary conditions X_{p,q,r} = 0 whenever p = 0, p = P + 1, q = 0, q = Q + 1, r = 0, or r = R + 1 (Eq. (21)). Through the latter conditions we implement the boundary condition that the image X vanishes at the boundary of the imaging domain.
Moreover, we use forward and backward differencing matrices in the x-, y-, and z-directions, defined as the matrix representations of the two-point forward and backward difference formulas of Eq. (20), with the homogeneous boundary conditions of Eq. (21) built in. Denoting the 3D image array by X, we introduce its vectorized counterpart as \mathbf{x} = vec(X), where we use lexicographical ordering. Similarly, \mathbf{w}_{k-1} represents the vector form of Eq. (17).
Having introduced these vectors and matrices, we can write Eq. (19) more compactly as

F^{TV}_{k-1}(\mathbf{x}) = V_\Delta \left[ \sum_{\xi \in \{x,y,z\}} \| \mathbf{W}_{k-1}^{1/2} \mathbf{D}_\xi \mathbf{x} \|^2 + \delta_{k-1}^2 \, \mathbf{e}^T \mathbf{w}_{k-1} \right], \quad (23)

where \mathbf{e} is the PQR × 1 vector of ones, \mathbf{W}_{k-1} = diag(\mathbf{w}_{k-1}), and

\mathbf{D}_x = \mathbf{I}_R \otimes \mathbf{I}_Q \otimes \mathbf{D}_P, \quad \mathbf{D}_y = \mathbf{I}_R \otimes \mathbf{D}_Q \otimes \mathbf{I}_P, \quad \mathbf{D}_z = \mathbf{D}_R \otimes \mathbf{I}_Q \otimes \mathbf{I}_P, \quad (24)

with \mathbf{D}_P, \mathbf{D}_Q, and \mathbf{D}_R one-dimensional forward or backward differencing matrices, \mathbf{I}_P, \mathbf{I}_Q, and \mathbf{I}_R the identity matrices of order P, Q, and R, respectively, and \otimes the Kronecker product. Finally, substituting the gradient vectors of Eq. (24) into Eq. (23) gives

F^{TV}_{k-1}(\mathbf{x}) = V_\Delta \left[ \mathbf{x}^H \mathbf{L}_w \mathbf{x} + \delta_{k-1}^2 \, \mathbf{e}^T \mathbf{w}_{k-1} \right], \quad (25)

where \mathbf{L}_w is a three-dimensional weighted approximate Laplacian given by \mathbf{L}_w = \sum_{\xi \in \{x,y,z\}} \mathbf{D}_\xi^H \mathbf{W}_{k-1} \mathbf{D}_\xi.
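The Kronecker construction of the differencing matrices and the weighted Laplacian can be sketched in Python as follows (hypothetical small grid sizes, uniform weights for illustration; the ordering convention is one possible lexicographical choice):

```python
import numpy as np

def d_forward(n, h):
    """1D forward-difference matrix with a zero Dirichlet boundary."""
    return (np.eye(n, k=1) - np.eye(n)) / h

P, Q, R = 3, 4, 5
Dx = np.kron(np.eye(R), np.kron(np.eye(Q), d_forward(P, 1.0 / P)))
Dy = np.kron(np.eye(R), np.kron(d_forward(Q, 1.0 / Q), np.eye(P)))
Dz = np.kron(d_forward(R, 1.0 / R), np.kron(np.eye(Q), np.eye(P)))

w = np.ones(P * Q * R)                   # uniform weights, for illustration
W = np.diag(w)
L_w = sum(D.T @ W @ D for D in (Dx, Dy, Dz))

# The quadratic form x^T L_w x reproduces the sum of (weighted) squared
# differences that appears in the compact TV expression.
rng = np.random.default_rng(0)
xv = rng.standard_normal(P * Q * R)
quad = xv @ L_w @ xv
sumsq = sum(np.linalg.norm(D @ xv) ** 2 for D in (Dx, Dy, Dz))
```

In an actual implementation the dense matrices would of course be replaced by sparse operators or by direct array differencing, since only the action of L_w on a vector is needed.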
In practice, forward finite differences or backward finite differences can be used to implement the total variation functional.Another option is to use mixed finite differences, which combines these two.

Mixed finite difference approach
As mentioned before, the partial derivatives in Eq. (19) are approximated by forward or backward two-point finite difference formulas, and the differentiation matrices D_{x,y,z} in the above formulas are either all forward differentiation matrices D_{x,y,z;f} or all backward differentiation matrices D_{x,y,z;b}. Another option is to mix the forward and backward differencing operators. Specifically, introducing the forward and backward x-coordinate gradient vectors as

\mathbf{D}_{x;f} \mathbf{x} \quad \text{and} \quad \mathbf{D}_{x;b} \mathbf{x},

with similar definitions for the forward and backward y- and z-coordinate gradient vectors, we approximate the squared gradient magnitude in F^{TV}_{k-1}(X) by the average of its forward and backward difference approximations. The weighted Laplacian then becomes

\mathbf{L}_w = \frac{1}{2} \sum_{\xi \in \{x,y,z\}} \left( \mathbf{D}_{\xi;f}^H \mathbf{W}_{k-1} \mathbf{D}_{\xi;f} + \mathbf{D}_{\xi;b}^H \mathbf{W}_{k-1} \mathbf{D}_{\xi;b} \right).

Extensive numerical testing has shown that the mixed finite-difference approach leads to faster convergence than implementations that use forward or backward difference operators only. Therefore, we use this mixed finite-difference approach in our implementation of multiplicative regularization.
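A one-dimensional Python sketch of the mixed-difference squared gradient magnitude (hypothetical example values; zero Dirichlet boundaries on both sides, as in Eq. (21)):

```python
import numpy as np

def grad_sq_mixed(x):
    """Average of squared forward and backward differences (unit spacing)."""
    xf = np.append(x, 0.0)               # zero boundary on the right
    xb = np.insert(x, 0, 0.0)            # zero boundary on the left
    gf = np.diff(xf)                     # forward differences
    gb = np.diff(xb)                     # backward differences
    return 0.5 * (gf ** 2 + gb ** 2)

x = np.array([0.0, 1.0, 1.0, 0.0])       # a small piecewise-constant profile
gsq = grad_sq_mixed(x)
```

Note how the mixed form spreads the contribution of each jump over both neighboring voxels, which is one intuition for its smoother convergence behavior.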

MR imaging using multiplicative regularization
As mentioned above, at the kth step of the algorithm we assume that we have an approximation \mathbf{x}_{k-1} available. We update the image according to the update formula

\mathbf{x}_k = \mathbf{x}_{k-1} + \beta_k \mathbf{d}_k,

where \mathbf{d}_k is the Polak-Ribière update direction given by [15]

\mathbf{d}_k = \mathbf{g}_k + \frac{\mathrm{Re}\langle \mathbf{g}_k, \mathbf{g}_k - \mathbf{g}_{k-1} \rangle}{\| \mathbf{g}_{k-1} \|^2} \, \mathbf{d}_{k-1},

with \mathbf{d}_0 = \mathbf{0} and \mathbf{g}_k the gradient of the objective function F_{k-1}(\mathbf{x}) evaluated at \mathbf{x} = \mathbf{x}_{k-1}. Using the product rule, this gradient is given by

\mathbf{g}_k = F^{TV}_{k-1}(\mathbf{x}_{k-1}) \nabla F_{data}(\mathbf{x}_{k-1}) + F_{data}(\mathbf{x}_{k-1}) \nabla F^{TV}_{k-1}(\mathbf{x}_{k-1}).

The gradient of F_{data} is \nabla F_{data}(\mathbf{x}) = -2\mathbf{A}^H(\mathbf{b} - \mathbf{A}\mathbf{x}), and using Eq. (25) the gradient of F^{TV}_{k-1} is easily obtained as \nabla F^{TV}_{k-1}(\mathbf{x}) = 2 V_\Delta \mathbf{L}_w \mathbf{x}. Using F^{TV}_{k-1}(\mathbf{x}_{k-1}) = 1, the gradient of F_{k-1} at \mathbf{x} = \mathbf{x}_{k-1} now follows as

\mathbf{g}_k = -2\mathbf{A}^H(\mathbf{b} - \mathbf{A}\mathbf{x}_{k-1}) + 2 V_\Delta F_{data}(\mathbf{x}_{k-1}) \mathbf{L}_w \mathbf{x}_{k-1}. \quad (36)

Note that for position-independent background and linear gradient fields, \mathbf{A}^H is proportional to \mathbf{F}^{-1}, i.e., applying \mathbf{A}^H amounts to an inverse Fourier transform.
Finally, to obtain the update coefficient \beta_k, we substitute \mathbf{x} = \mathbf{x}_{k-1} + \beta \mathbf{d}_k into Eq. (14) and observe that the objective function restricted to this line is the product of two quadratic polynomials in \beta,

F_{k-1}(\mathbf{x}_{k-1} + \beta \mathbf{d}_k) = (a_0 + a_1 \beta + a_2 \beta^2)(b_0 + b_1 \beta + b_2 \beta^2),

with coefficients

a_0 = \| \mathbf{b} - \mathbf{A}\mathbf{x}_{k-1} \|^2, \quad a_1 = -2\,\mathrm{Re}\langle \mathbf{b} - \mathbf{A}\mathbf{x}_{k-1}, \mathbf{A}\mathbf{d}_k \rangle, \quad a_2 = \| \mathbf{A}\mathbf{d}_k \|^2 \quad (39)

and

b_0 = F^{TV}_{k-1}(\mathbf{x}_{k-1}) = 1, \quad b_1 = 2 V_\Delta \, \mathrm{Re}(\mathbf{d}_k^H \mathbf{L}_w \mathbf{x}_{k-1}), \quad b_2 = V_\Delta \, \mathbf{d}_k^H \mathbf{L}_w \mathbf{d}_k. \quad (40)

Note that a_2 and b_2 are always positive. With these results, the update coefficient \beta_k follows as the real root of the cubic equation

a_0 b_1 + a_1 + 2(a_0 b_2 + a_1 b_1 + a_2)\beta + 3(a_1 b_2 + a_2 b_1)\beta^2 + 4 a_2 b_2 \beta^3 = 0

for which the quartic F_{k-1}(\mathbf{x}_{k-1} + \beta \mathbf{d}_k) is minimized. The roots can be found analytically or using a built-in polynomial root-finding algorithm. To summarize, the overall algorithm is as follows:

LOW-FIELD MULTIPLICATIVELY REGULARIZED MR IMAGING
1. Given an initial guess x_0 of the low-field MR image, compute the weights w_0 from Eq. (17).
2. For k = 1, 2, …
 a) Compute the gradient vector g_k as given by Eq. (36);
 b) Compute the Polak-Ribière update direction d_k;
 c) Determine the update coefficient β_k as the minimizing real root of the cubic equation above;
 d) Update the image, x_k = x_{k−1} + β_k d_k, and update the weights w_k;
 e) Stop when a maximum number of iterations is reached or when the iterates stagnate.

Remark: Consider the case of position-independent background and linear gradient fields. We then have that \mathbf{A}^H acts as an inverse DFT, \mathbf{F}^{-1}. As an initial guess, let us take \mathbf{x}_0 = \mathbf{F}^{-1}\mathbf{b}. This initial guess is very noisy, but the data is exactly matched and F_{data}(\mathbf{x}_0) = 0. Consequently, \mathbf{g}(\mathbf{x}_0) = \mathbf{0} and the algorithm stops after one iteration with \mathbf{x}_1 = \mathbf{x}_0. In this case of homogeneous background and linear gradient fields, which is often assumed in practice, we therefore use our algorithm as a denoising algorithm by setting the gradient at the kth iteration to \mathbf{g}(\mathbf{x}_{k-1}) = \mathbf{L}_w \mathbf{x}_{k-1}; that is, we update in directions determined by the gradient of the TV term only. Starting with a masked initial guess \mathbf{x}_0 = \mathbf{M}\mathbf{F}^{-1}\mathbf{b}, where the mask \mathbf{M} zeroes out any components of \mathbf{F}^{-1}\mathbf{b} for which it is a priori known that the image at the corresponding location vanishes, F_{data}(\mathbf{x}_k) will generally increase as k increases, and if \lim_{k\to\infty} F^{TV}_{k-1}(\mathbf{x}_k) = 1, then the objective function F_{k-1}(\mathbf{x}_k) will asymptote to F_{data}(\mathbf{x}^*) as k \to \infty, where \mathbf{x}^* = \lim_{k\to\infty} \mathbf{x}_k is the converged image in which noise variations have been minimized (suppressed).
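As an illustration of the overall scheme, the following Python sketch runs the full loop on a one-dimensional toy problem. Everything here is an assumption for illustration: the sizes, the unitary stand-in for the model matrix, the PR+ variant with a descent sign convention, and the steering choice for δ² (not necessarily the paper's Eq. (16)). It shows the structure of the iteration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
P = 32
A = np.fft.fft(np.eye(P)) / np.sqrt(P)       # unitary stand-in forward model
x_true = np.zeros(P); x_true[10:22] = 1.0    # piecewise-constant "image"
b = A @ x_true + 0.02 * (rng.standard_normal(P) + 1j * rng.standard_normal(P))

def Dfwd(v):                                 # forward difference, zero right boundary
    return np.diff(np.append(v, 0.0))

def Dt(y):                                   # transpose of Dfwd
    return -np.diff(np.insert(y, 0, 0.0))

def f_data(v):
    return np.linalg.norm(b - A @ v) ** 2

x = np.real(A.conj().T @ b)                  # back-projection initial guess
d = np.zeros(P)
g_old = None
per_iter_decrease = []
for k in range(30):
    fd = f_data(x)
    delta2 = fd + 1e-12                      # steering parameter (assumed choice)
    w = 1.0 / (np.abs(Dfwd(x)) ** 2 + delta2)
    Lw = lambda v: Dt(w * Dfwd(v)) / P       # weighted Laplacian action (incl. 1/P)
    # Gradient of F_data(x) * F_TV(x); F_TV(x_{k-1}) = 1 by construction.
    g = -2.0 * np.real(A.conj().T @ (b - A @ x)) + 2.0 * fd * Lw(x)
    if np.linalg.norm(g) < 1e-14:
        break
    pr = 0.0 if g_old is None else max(0.0, g @ (g - g_old) / (g_old @ g_old))
    d = -g + pr * d                          # Polak-Ribiere (PR+) direction
    g_old = g
    # Exact line search: F(x + s d) = (a0 + a1 s + a2 s^2)(b0 + b1 s + b2 s^2).
    r, Ad = b - A @ x, A @ d
    a0, a1, a2 = fd, -2.0 * np.real(np.vdot(r, Ad)), np.real(np.vdot(Ad, Ad))
    b0, b1, b2 = 1.0, 2.0 * (d @ Lw(x)), d @ Lw(d)
    cubic = [4*a2*b2, 3*(a1*b2 + a2*b1), 2*(a0*b2 + a1*b1 + a2*b0), a0*b1 + a1*b0]
    roots = np.roots(cubic)
    s_cand = roots[np.abs(roots.imag) < 1e-6].real
    vals = (a0 + a1*s_cand + a2*s_cand**2) * (b0 + b1*s_cand + b2*s_cand**2)
    s = s_cand[int(np.argmin(vals))]
    per_iter_decrease.append(vals.min() <= fd * (1 + 1e-9) + 1e-12)
    x = x + s * d
```

Because the line search picks the minimizing real root of the derivative of a coercive quartic, the objective F_{k−1} cannot increase within an iteration, which the sketch records in `per_iter_decrease`.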

Results
In this section we illustrate the performance of our multiplicatively regularized imaging and denoising algorithm. In Section 3.1 we use the algorithm to reconstruct the well-known 2D Shepp-Logan phantom from simulated and noise-corrupted low-field data with a frequency encoding gradient that is nonlinear. The data matrix A is not equal to a standard DFT matrix in this case, and simply applying an inverse DFT to the data would lead to highly distorted reconstructions. We therefore use the algorithm as an iterative reconstruction algorithm to recover the Shepp-Logan phantom from the data.
Subsequently, in Section 3.2 we apply our algorithm to data as measured by the low-field scanner described in [2]. This scanner has been constructed such that the background and gradient fields are approximately constant and linear throughout the imaging domain. We take this into account in our data model, and the data matrix A is assumed to be equal to a scaled three-dimensional DFT matrix in this case (see Eq. (10)). The algorithm is now used as a denoising algorithm, and images of an apple and a melon will be presented to demonstrate the performance of the method on measured data for which a Fourier signal representation is assumed to be applicable. In Section 3.3, we compare our multiplicatively regularized 2D and 3D results with images obtained by solving the additively regularized minimization problem

\min_{\mathbf{x}} \; \| \mathbf{b} - \mathbf{A}\mathbf{x} \|^2 + \lambda \| \mathbf{T}\mathbf{x} \|_1,

where the second term is the total variation term, with T the anisotropic total variation operator, which we define as

\mathbf{T} = \begin{bmatrix} \mathbf{D}_x \\ \mathbf{D}_y \end{bmatrix} \quad \text{and} \quad \mathbf{T} = \begin{bmatrix} \mathbf{D}_x \\ \mathbf{D}_y \\ \mathbf{D}_z \end{bmatrix}

for 2D and 3D, respectively. We solve this minimization problem using the Alternating Direction Method of Multipliers (ADMM), see for example [27].
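For reference, the anisotropic TV term used in the additive comparison can be sketched in a few lines of Python (hypothetical small image; forward differences with a zero boundary, matching the stacked-operator definition):

```python
import numpy as np

def tv_aniso_2d(X):
    """Anisotropic TV: ||T x||_1 with T the stacked forward differences."""
    gx = np.diff(X, axis=0, append=0.0)   # differences along x, zero boundary
    gy = np.diff(X, axis=1, append=0.0)   # differences along y, zero boundary
    return np.abs(gx).sum() + np.abs(gy).sum()

X = np.zeros((4, 4)); X[1:3, 1:3] = 1.0   # a small piecewise-constant block
val = tv_aniso_2d(X)
```

For this block, each of the eight unit jumps along its boundary contributes 1, so the anisotropic TV equals the perimeter of the block in voxel units.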
The experiments were carried out in MATLAB R2015a on a desktop PC with an Intel(R) Xeon(R) W-2123 CPU (3.60GHz).

Two-dimensional imaging of simulated noise-corrupted low-field MR data
In low-field MRI, inhomogeneities may be present in the background field and gradients may not be perfectly linear. To investigate the performance of the algorithm as an image reconstruction algorithm in such cases, we consider a two-dimensional low-field reconstruction problem in which field perturbations are taken into account. Many different field perturbations can be considered, of course, but in all cases signals are obtained for which the relationship between the signal and the object is no longer governed by a DFT. To fix the idea, we therefore focus on signal generation in an MR scanner in which the frequency encoding gradient is not perfectly linear. Specifically, we introduce a perturbation of the gradient field that is relative to the gradient strength. To this end, we scale the frequency encoding gradient such that we can describe its strength to be in the range [−0.5, 0.5), and we use x and y to denote the location in the Field of View (FoV) such that (x, y) ∈ [−0.5, 0.5)². The unperturbed gradient field is shown in Fig. 2a, while the perturbed gradient field is shown in Fig. 2b.
Since the background and (nonlinear) gradient profiles are known, we can use the discretized version of Eq. (5) to obtain the model matrix A. As an image or model solution, we take the Shepp-Logan phantom of Fig. 3a and we use matrix A to generate the data. Subsequently, white Gaussian noise with an SNR of 20 is added to the data in the frequency domain. We note that not taking the distorted gradient into account leads to the image shown in Fig. 3b, which is obviously a deformed image of the true object.
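One way to add noise at a prescribed SNR is sketched below in Python (the SNR is defined here as the ratio of RMS signal to RMS noise, which is an assumption on our part; the toy data and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(d, snr):
    """Add complex white Gaussian noise so that RMS(signal)/RMS(noise) = snr."""
    sigma = np.sqrt(np.mean(np.abs(d) ** 2)) / snr
    n = sigma / np.sqrt(2) * (rng.standard_normal(d.shape)
                              + 1j * rng.standard_normal(d.shape))
    return d + n

d = np.fft.fft2(np.ones((8, 8)))          # toy "frequency-domain" data
d_noisy = add_noise(d, snr=20.0)
snr_emp = np.sqrt(np.mean(np.abs(d) ** 2) / np.mean(np.abs(d_noisy - d) ** 2))
```

The empirical SNR `snr_emp` fluctuates around the target value because the noise power is estimated from finitely many samples.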
Having the noise-corrupted data available, we apply our multiplicative regularization scheme to this data in an attempt to reconstruct the Shepp-Logan model solution. Mixed finite differences and the 2D version of the Laplacian matrix given in Eq. (29) are used to implement the total variation functional. As an initial guess, we use x_0 = αA^H b, where α is chosen such that F_data(x_0) is minimized. This initial guess is depicted in Fig. 3c. Subsequently, we use this x_0 to define F_0^TV(x) and start iterating. The reconstruction that we obtain after 50 iterations is shown in Fig. 3d. The algorithm's progress is shown in the Appendix, where we can see that the algorithm manages to progressively denoise the image, while maintaining the edges and structures in the phantom. Fig. 4 shows the value of the objective function F(x) as a function of the iteration number for the two-dimensional Shepp-Logan image. The values of F_TV(x) and F_data(x) are plotted as well. We note that, as expected, F_TV converges to 1, while the other two steadily grow to a larger value that enables this convergence. The increasing value of F_data(x) can be explained by the observation that our matrix A does not deviate all that much from a Fourier transform, which means that the term ‖b − Ax_0‖² is close to 0. However, since x_0 is too noisy, this solution does not meet the smoothness requirements of the total variation functional. Instead, we iterate towards a solution that minimizes the least-squares term under the constraint that F_TV(x) is equal to 1. One iteration takes 1.79 s, which is relatively long, but this can be explained by the fact that we need to explicitly calculate the matrix A and its transpose, since we cannot rely on the Fast Fourier Transform (FFT) due to the nonlinear gradient.
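The optimally scaled back-projection initial guess x_0 = αA^H b admits a closed form: minimizing F_data(αA^H b) = ‖b − αAA^H b‖² over real α gives α = Re⟨AA^H b, b⟩ / ‖AA^H b‖². A Python sketch (hypothetical random sizes and data):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 20, 12
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
b = rng.standard_normal(M) + 1j * rng.standard_normal(M)

c = A @ (A.conj().T @ b)                  # c = A A^H b
alpha = np.real(np.vdot(c, b)) / np.real(np.vdot(c, c))
x0 = alpha * (A.conj().T @ b)             # scaled back-projection initial guess
```

Since the misfit is a convex quadratic in α, this α is the global minimizer along the back-projection direction.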
Additionally, we consider the same Shepp-Logan phantom, but with a decreased SNR of 5, which is more realistic in a low-field MRI setting. The inverse FFT reconstruction, initial guess, and the multiplicatively regularized reconstruction are shown in Fig. 5b, c, and d, respectively, and the reconstructions for different iteration numbers are shown in the Appendix. This result was obtained after 50 iterations. We see that the algorithm manages to denoise the image, but some structures are lost. This is not due to the method itself, but because the SNR is simply too low. As will be shown later, an additively regularized method is not able to recover these structures either. The convergence plots are shown in Fig. 6, which showcase the same behavior as in the higher-SNR case.

Three-dimensional imaging of measured data
This section contains our imaging results for an apple and a melon that were scanned using the low-field scanner of [2]. Images of slices through these pieces of fruit will be presented.

Fig. 2. In an ideal scenario, the frequency encoding gradient is linear (a). In these simulations, however, the gradient is perturbed and nonlinear (b). The black lines are contour lines.

Apple experiment
In our first experiment, an apple is imaged using a spin-echo sequence whose parameters are given in Table 1. As an initial guess, we take a masked version of the three-dimensional inverse DFT of the signals obtained during the apple experiment. The 35th slice of this initial guess is shown in Fig. 7a. Clearly, the initial guess is contaminated by noise and the mask that is used is visible. We use this initial guess to determine F_0^TV(x), after which we start the iterative process.
To remove the noise from the initial image, we use the algorithm as a denoising algorithm with a mixed differences approach and the Laplacian matrix of Eq. (29) in the discretized total variation functional. One iteration takes approximately 0.3 s. This is very fast compared to the Shepp-Logan problem, even though that problem is smaller (64 × 64 pixels instead of 64 × 64 × 64 voxels), which can be explained by the fact that here the FFT can be employed. We observe that the amount of noise in the image decreases as the iteration process continues, as can be seen in Fig. 15 in the Appendix. After 30 iterations an image of good quality is obtained, with the apple's shape and seeds being clearly visible and the noise having been eliminated. Fig. 8 shows the convergence plots for our algorithm. Again, we see that F_TV converges to 1.
Iterating further leads to a result that is somewhat oversmoothed, which is similar to the 2D results with a low SNR. Therefore, it is advisable to stop the iterative process early (semi-convergence). Finally, we note that the reconstructed images are obtained essentially in real time and the reconstructions can be monitored as the iterative scheme progresses, which is of great importance in practice when a scanner is used for diagnostic purposes. Fig. 9a shows a three-dimensional visualization of the reconstructed apple.

Melon experiment
In our second experiment, a melon is imaged using a spin-echo sequence with the parameter settings given in Table 2. Here too, we construct an initial guess by masking the image that is obtained by applying an inverse three-dimensional DFT to the data. Note that this data set has twice as many data points in each Cartesian direction as the data set in the apple experiment. The 64th slice of the initial guess is shown in Fig. 10a. The mask that is used is clearly visible and again a very noisy initial image is obtained. To remove this noise, we use mixed finite differences to implement the total variation functional; the resulting reconstruction of the melon is shown in Fig. 10b. As can be seen in Fig. 16 in the Appendix, the noise level in the images decreases with the iteration number. We note that some individual pixels appear overly bright or dark; these may be removed by iterating further, but this leads to oversmoothing, since the SNR is very low. We therefore terminate the iterative process after 20 iterations. Each iteration takes approximately 2 s. Finally, even though the complex data set of the melon is eight times larger than the apple data set, the method still produces images essentially in real time that can be monitored as the scheme progresses. A three-dimensional visualization of the melon is shown in Fig. 9b. In Fig. 11, the convergence plots are shown, which follow the same pattern as before.

Additive and multiplicative regularization
Finally, in Fig. 12, our multiplicatively regularized reconstructions are shown next to additively regularized reconstructions. We tuned the regularization parameter λ in a heuristic manner such that the background noise disappears while the different structures are still visible. For all experiments, we used 10 ADMM iterations and, within each ADMM iteration, 10 iterations of the Conjugate Gradient (CG) algorithm to solve the first minimization problem. We observe that, in terms of image quality, additive and multiplicative regularization yield comparable results, but no parameter tuning is required in the multiplicative case. This is reflected in the peak signal-to-noise ratios (PSNR) shown in Table 3, which we were only able to calculate for the simulated case because no ground truth was available for the measured data. We see that multiplicative regularization achieves a somewhat higher PSNR for low SNR and a marginally lower PSNR for high SNR. Furthermore, in Table 4, the computing times for the two algorithms (our algorithm and the ADMM algorithm solving the additively regularized problem) are shown. We note that it might be possible to lower the computation time for the additively regularized problem by lowering the number of either ADMM or CG iterations. This table shows that, for our test problems, the computation times for multiplicative regularization are quite competitive compared to additive regularization. Additionally, the computation time shown for additive regularization does not include the time spent searching for an appropriate regularization parameter (which in general requires repeated execution of the algorithm), a problem that is completely eliminated when using a multiplicatively regularized approach.
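A standard way to compute the PSNR against a known phantom is sketched below (the peak is taken as the maximum of the ground-truth image; this particular convention is an assumption, as the paper does not spell out its PSNR definition):

```python
import numpy as np

def psnr(x_rec, x_true):
    """Peak signal-to-noise ratio in dB relative to the ground-truth peak."""
    mse = np.mean(np.abs(x_rec - x_true) ** 2)
    return 10.0 * np.log10(np.max(np.abs(x_true)) ** 2 / mse)

x_true = np.zeros((8, 8)); x_true[2:6, 2:6] = 1.0   # toy phantom
x_rec = x_true + 0.1                                # uniformly biased "reconstruction"
val = psnr(x_rec, x_true)                           # 20 dB for this toy case
```

This also makes clear why no PSNR could be reported for the measured fruit data: the computation requires a ground-truth image.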

Conclusions
In this paper we applied a multiplicative regularization approach to low-field MR imaging. By multiplying a least-squares data fidelity function by a differentiable regularizing total variation function, we avoid the problem of having to determine and compute a regularization parameter, as is required for additive regularization. The resulting multiplicative regularization problem is nonlinear and is solved by a nonlinear conjugate gradient scheme with Polak-Ribière update directions. Furthermore, we showed that the algorithm can serve both as an image reconstruction and as a denoising algorithm by applying it to two-dimensional noise-corrupted MR data obtained with a nonlinear gradient field and to three-dimensional measured data obtained with a low-field Halbach scanner. We demonstrated that multiplicative regularization yields very promising results, converging within a few iterations, whether we are dealing with two-dimensional noisy data for which a Fourier signal representation is no longer valid, or with three-dimensional measured data for which a Fourier transform relationship between signal and image can be assumed. Moreover, accurate reconstructions are obtained essentially in real time when a Fourier signal model is applicable. We observed that in case of a low SNR it is better to stop the iterative process somewhat early (semi-convergence) to maintain the edges in the image. For a high SNR, edges are preserved while noise is eliminated, even for a large number of iterations.
In this work, we focused on a low-field MRI setting with a single receiver coil. However, multiplicative regularization may also be applied to multicoil high-field MRI, and we intend to test its performance against the more standard high-field reconstruction techniques used today.
Moreover, in future work we will also focus on incorporating measured background and gradient fields of practical low-field MR scanners into the data model that is used in our multiplicative regularization scheme. This is particularly important when the background and gradient fields are such that a standard Fourier transform signal representation is no longer valid, since otherwise significant distortions appear in the image, as we demonstrated for two-dimensional simulated MR data. Standard FFTs can no longer be used in this case and accurate images can only be obtained by solving the image reconstruction problem as an optimization problem. In our multiplicative scheme, forming matrix-vector products with the model matrix A is then the main computational bottleneck, since computing the action of the Laplacian of the total variation functional on a vector involves sparse finite difference operators only. Consequently, to reduce the reconstruction time of the method, efficient routines to compute matrix-vector products with the model matrix A have to be developed, possibly involving nonuniform FFTs. Finally, we intend to include compressed sensing techniques in our multiplicative regularization framework as well, since this may lead to reduced scan times and, for example, a reduction of motion artifacts.
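To illustrate why a Fourier signal model is computationally attractive: when it holds, the action of the model matrix A is a DFT, so A x can be applied with an FFT in O(N log N) instead of a dense O(N²) matrix-vector product. This is a generic one-dimensional sketch; the size and encoding below are illustrative, not the scanner's actual signal model.

```python
import numpy as np

N = 64
x = np.random.default_rng(1).standard_normal(N)

# dense DFT matrix: the O(N^2) matvec a general (non-Fourier) model needs
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)
y_dense = F @ x

# FFT: the same matrix-vector product in O(N log N) when the Fourier model applies
y_fft = np.fft.fft(x)
```

When the fields are nonlinear, A no longer has this structure, which is what motivates the search for fast approximate routines such as nonuniform FFTs.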

Declaration of competing interest
There are no conflicts of interest to disclose.
ξ ∈ {x, y, z}. The dimensions of the matrices are P × P for D_x;f and D_x;b, Q × Q for D_y;f and D_y;b, and R × R for D_z;f and D_z;b.
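As a sketch, one-dimensional forward and backward difference matrices of this kind can be assembled as follows (dense here for clarity; in practice they are stored as sparse operators). The zeroed boundary rows are one common convention and an assumption here, since the paper's exact boundary treatment is defined elsewhere.

```python
import numpy as np

def diff_ops(n):
    """Forward (D_f) and backward (D_b) difference matrices of size n x n.
    Boundary rows are zeroed (an assumed convention)."""
    Df = np.eye(n, k=1) - np.eye(n)   # (D_f x)[i] = x[i+1] - x[i]
    Df[-1, :] = 0.0                   # no forward neighbour at the last sample
    Db = np.eye(n) - np.eye(n, k=-1)  # (D_b x)[i] = x[i] - x[i-1]
    Db[0, :] = 0.0                    # no backward neighbour at the first sample
    return Df, Db
```

The three-dimensional operators act along one Cartesian direction at a time, hence the P × P, Q × Q, and R × R sizes for the x, y, and z directions, respectively.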
…; b) compute the Polak-Ribière update direction d_k; c) compute the update coefficient β_k; d) update the low-field MR image: x_k = x_{k−1} + β_k d_k.

x = −0.5 corresponds to the left and x = 0.5 to the right boundary of the FoV, and y = −0.5 and y = 0.5 to the lower and upper boundary, respectively. The gradient field is now perturbed by the function

Fig. 4. Plots of the objective function value F(x) (left), which is equal to the product of F_TV(x) (middle) and F_data(x) (right), for the two-dimensional Shepp-Logan image.

Fig. 6. Plots of the objective function value F(x) (left), which is equal to the product of F_TV(x) (middle) and F_data(x) (right), for the two-dimensional Shepp-Logan image. In this case, the SNR was lowered to 5.

Fig. 7. Reconstruction result of the 35th slice of the apple.

Fig. 8. Plots of the objective function value F(x) (left), which is equal to the product of F_TV(x) (middle) and F_data(x) (right), for the three-dimensional apple image.

Fig. 9. Three-dimensional visualization of the reconstructed apple (a) and melon (b) obtained after 20 iterations using mixed finite differences.

Fig. 12. Images of the Shepp-Logan phantom using simulated data with an SNR of 20 (top row) and an SNR of 5 (second row), an apple based on measured data (third row), and a melon based on measured data (bottom row), using additive regularization (left column) and multiplicative regularization (right column).

Fig. 13. Reconstruction of the Shepp-Logan phantom, with an SNR of 20, for different iteration numbers.

Fig. 14. Reconstruction of the Shepp-Logan phantom, with an SNR of 5, for different iteration numbers.

Fig. 15. Reconstruction of the 35th slice of the apple for different iteration numbers.

Fig. 16. Reconstruction of the 64th slice of the melon for different iteration numbers.

Table 1
Parameter settings for the apple imaging experiment.

Table 2
Parameter settings for the melon imaging experiment.

Table 3
PSNR for different reconstructions of the Shepp-Logan phantom.

Table 4
Computing time (and total number of iterations) in seconds for the different reconstructions.

Acknowledgments
… (DEMO) in Delft, Pennsylvania State University and Mbarara University of Science and Technology for their insight and expertise.