DeepLSR: a deep learning approach for laser speckle reduction

Speckle artifacts degrade image quality in virtually all modalities that utilize coherent energy, including optical coherence tomography, reflectance confocal microscopy, and ultrasound. We present a deep learning framework for laser speckle reduction, DeepLSR (https://durr.jhu.edu/DeepLSR), that transforms coherently-illuminated images to a speckle-free domain. We apply this method to widefield images of objects and tissues illuminated with a multi-wavelength laser, reconstructing speckle-reduced images with greater fidelity than conventional methods.

Laser illumination offers many advantages for imaging over incoherent light, including high power density, efficient light generation, narrow spectral bandwidth, robust stability, long lifetime, and fast triggering capability. Unfortunately, coherent illumination also introduces speckle artifacts, which are caused by constructive and destructive interference between emitted wavefronts [1]. The poor image quality resulting from speckle noise prohibits lasers from being used in many widefield imaging applications. For example, commercial endoscopes utilize arc lamps or light-emitting diodes (LEDs) as illumination sources and consequently require large-diameter light guides to transmit sufficient illumination power. Speckle noise also corrupts image quality in optical coherence tomography (OCT) [2], reflectance confocal microscopy [3], and ultrasound imaging [4]. To mitigate laser speckle noise, several optical methods [5,6] and image processing algorithms [7-9] have been explored. In general, optical approaches add cost and complexity, reduce power throughput, and place fundamental limitations on imaging speed. Image processing techniques, on the other hand, are computationally complex, require parameter tuning, and degrade resolution as they reduce speckle.
Here, we present a deep convolutional neural network for laser speckle reduction ('DeepLSR') on widefield images formed from multi-wavelength, red-green-blue laser illumination. We describe a method for effectively learning the distribution of speckle artifacts to target and reduce noise in images not previously seen by the network. This technique relies on pairs of coherent- and incoherent-illuminated images of a variety of objects to learn a transformation from speckled images to speckle-free approximations. Previous work in OCT has explored shallow neural networks for estimating filter parameters in a speckle reduction model [10], and deep networks for speckle reduction using a set of registered and averaged volumes of retinal tissue as ground truth [11]. In widefield imaging, deep learning networks have been applied for general image denoising [12], but not specifically for speckle reduction. Our approach is novel in its use of a true incoherent source as the target ground truth, its use of a diverse set of objects for training, and its application of deep learning to widefield laser-illumination imaging. We benchmark this approach against conventional speckle reduction methods on images of laser-illuminated objects previously unseen by the network. We further provide step-by-step instructions for adapting DeepLSR to new data sets contaminated with speckle noise (see Supplementary Note).
DeepLSR utilizes a conditional Generative Adversarial Network (cGAN) to reduce laser speckle by posing the problem as an image-to-image translation task [13]. In this way, a structured loss is learned during the training process. This is in contrast to unstructured approaches that utilize per-pixel classification or regression, where each pixel is treated as independent of the others, inhibiting the network's ability to learn from spatial relationships in the image. The overall architecture involves simultaneously training a speckle-free image generator and a real-versus-fake image discriminator, given a conditional input (Fig. 1). While the generator learns a realistic mapping from an input speckled image to an output speckle-free image, the discriminator learns to classify pairs of input and generated output images as either real or fake. During this adversarial training, the discriminator provides feedback on the quality of the image pairs to the generator. The resulting trained generator is then capable of reducing speckle noise in images it has never seen.
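The adversarial objective described above can be sketched as follows. This is a minimal NumPy illustration of a pix2pix-style cGAN loss, not the authors' implementation: the function names, the L1 reconstruction term, and the weight `lam` are assumptions chosen to mirror common image-to-image translation setups.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on discriminator scores in (0, 1)."""
    eps = 1e-12
    return float(-np.mean(target * np.log(pred + eps)
                          + (1 - target) * np.log(1 - pred + eps)))

def cgan_losses(d_real, d_fake, fake_img, target_img, lam=100.0):
    """Pix2pix-style objectives (sketch).
    d_real: discriminator scores on (input, real) pairs
    d_fake: discriminator scores on (input, generated) pairs
    """
    # Discriminator: call real pairs real and generated pairs fake.
    d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
    # Generator: fool the discriminator while staying close to ground truth.
    g_adv = bce(d_fake, np.ones_like(d_fake))
    g_l1 = float(np.mean(np.abs(fake_img - target_img)))
    g_loss = g_adv + lam * g_l1
    return d_loss, g_loss
```

In training, the two losses are minimized in alternation: the discriminator's feedback (through `g_adv`) pushes the generator toward outputs that look like genuine incoherent-illumination images.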
We trained and tested DeepLSR using a total of 2,895 images acquired from up to 9 different positions of: (1) 113 assorted household and laboratory objects picked to represent a wide range of textures, shapes, and bidirectional reflectance distribution functions, and (2) ex-vivo porcine esophagus, intestine, and stomach from three animals. These samples were illuminated using a red-green-blue laser for coherent illumination, the same laser with added optical speckle reduction (oLSR) from an oscillating diffuser, and an LED for incoherent illumination. All images acquired from six objects and one animal were excluded from the training set, and these 30 images were used for testing the trained network (Fig. 2 and Fig. 3). In addition to DeepLSR, which learned a transformation from laser illumination to LED illumination, we also trained networks to learn transformations from images with optical speckle reduction to LED (DeepLSR+oLSR) and from laser illumination to optical speckle reduction (Laser → oLSR).
To quantify the performance of DeepLSR, the trained networks were used to despeckle the reserved test images. Output images from DeepLSR were compared to the speckle-free, incoherent images by measuring peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). PSNR assesses the relative noise of an image, while SSIM is a valuable metric for image comparison using quantities that are important for human perception (image contrast, luminance, and structure) [14]. We also compared DeepLSR to two standard image-processing denoising techniques for speckle reduction: median filtering and non-local means. The input parameters for these algorithms were determined by multi-objective optimization, with PSNR and SSIM as the objective functions, using the same training data set that was used for DeepLSR. To assess the effect of the network on image resolution, a modulation transfer function (MTF) was measured.
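For reference, the median-filter baseline amounts to replacing each pixel with the median of its neighborhood. The sketch below is a naive pure-NumPy version for illustration only; the kernel size actually used in the paper was selected by the multi-objective optimization described above, and edge padding is an assumption.

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k-by-k median filter over a 2-D image (one conventional
    despeckling baseline). Edges are handled by replicate padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

Because the median rejects outliers rather than averaging them, isolated speckle spikes are suppressed, but fine structure near the kernel scale is degraded, which is the resolution trade-off noted above.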
Our validation tests on assorted objects imaged with laser illumination demonstrate that DeepLSR reduces speckle noise by 5.3 dB, compared to a 3.0 dB reduction by non-local means filtering and a 4.4 dB reduction by oLSR (Supplementary Table 1). We also found that oLSR can be used in combination with deep learning to provide enhanced speckle reduction compared to DeepLSR or oLSR alone (DeepLSR+oLSR). The DeepLSR method has a minor effect on resolution as measured by a slanted-edge test, as demonstrated by the modulation transfer functions of LED illumination compared to laser illumination with DeepLSR (Fig. 2c).
In applications involving tissue imaging, the object of interest is often a turbid medium that naturally blurs speckle artifacts. To assess the applicability of DeepLSR in this scenario, we applied our model to images of gastrointestinal tissue illuminated with laser light. In these tissue validation tests, DeepLSR reduced speckle noise by 6.3 dB, compared to a 2.6 dB reduction by non-local means filtering and a 3.7 dB reduction by oLSR. Fig. 3 shows representative images of test samples with laser illumination, conventional noise reduction methods, DeepLSR, and LED illumination. As with the assorted objects, DeepLSR removed speckle artifacts while retaining structural features with greater fidelity than conventional image processing approaches (Supplementary Table 2). DeepLSR may be particularly useful in endoscopy applications that require bright illumination or small-diameter endoscopes. Incoherent light sources for endoscopy, such as arc lamps and LEDs, require large-diameter light guides to deliver sufficient optical power through an endoscope. Laser illumination enables the delivery of greater illumination power through fiber optics, and can generate incoherent-like images after DeepLSR or DeepLSR+oLSR is applied. Moreover, in widefield applications that require coherent light, such as laser speckle contrast imaging for mapping flow [15], DeepLSR allows both a computational image and a conventional image to be acquired simultaneously. DeepLSR may also be useful in OCT, ultrasound, and industrial applications such as part inspection. As a data-driven approach, DeepLSR should be trained on images that span the target domain. We have made the DeepLSR model and source code for widefield laser illumination available at https://durr.jhu.edu/DeepLSR, and provide step-by-step instructions for installing and applying this framework to new data sets in the Supplementary Materials.
individual images in each iteration. Spectral normalization was used to stabilize GAN training when learning simultaneously from assorted objects and tissue [17]. The problem was solved using ADAM for stochastic optimization [18]. Further details about the generator and discriminator architectures can be found in ref. [19]. Plots of generator and discriminator loss versus epoch are reported in Supplementary Fig. 1.
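Spectral normalization constrains each discriminator weight matrix to have unit spectral norm (largest singular value), which bounds the discriminator's Lipschitz constant and stabilizes adversarial training. A minimal NumPy sketch of the idea, using the power iteration commonly employed for this purpose (the function name and iteration count are illustrative, not the paper's implementation):

```python
import numpy as np

def spectral_normalize(W, n_iters=100):
    """Divide W by an estimate of its largest singular value, obtained by
    power iteration, so the returned matrix has spectral norm ~1."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated largest singular value
    return W / sigma
```

In practice (e.g. `torch.nn.utils.spectral_norm` in later PyTorch releases), a single power-iteration step is run per forward pass and the vectors `u`, `v` are cached between steps, so the cost is negligible.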
The network was trained for 400 epochs. The learning rate was set to 0.0002 for the first 200 epochs and linearly decayed to zero over the remaining 200 epochs. The size of the image buffer that stores generated images was set to 64. The networks were implemented using PyTorch 0.4, and training was run on Nvidia P100 GPUs using Google Cloud. The average training time for each epoch was 303 seconds, and the entire network was trained in approximately 33.66 hours. Once the training process is complete, the trained network computes speckle-reduced images at 6 frames per second on a virtual workstation with 4 CPUs on a 2.6 GHz Intel Xeon E5 processor, and at 27 frames per second when using a P100 GPU.
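The learning-rate schedule above reduces to a simple per-epoch rule. The helper name below is hypothetical; in pix2pix-style code an equivalent lambda is typically passed to `torch.optim.lr_scheduler.LambdaLR`.

```python
def lr_at_epoch(epoch, base_lr=2e-4, n_const=200, n_decay=200):
    """Learning rate at a given epoch: constant at base_lr for the first
    n_const epochs, then linearly decayed to zero over n_decay epochs."""
    if epoch < n_const:
        return base_lr
    return base_lr * (1.0 - (epoch - n_const) / n_decay)
```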
Evaluation Metrics: To quantify the performance of DeepLSR, we measured the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) using the incoherent, speckle-free image as the target. PSNR assesses the relative noise of an image and was computed using

$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{R^2}{\mathrm{MSE}}\right),$

where $R = 255$ is the maximum pixel value of the detector and MSE is the mean squared error between images. SSIM was computed using

$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$

where $x$ and $y$ are the images under comparison, the image means are $\mu_x$ and $\mu_y$, the variances are $\sigma_x^2$ and $\sigma_y^2$, the covariance between $x$ and $y$ is $\sigma_{xy}$, and the constants $C_1$ and $C_2$ are added to avoid instability when the denominator is close to 0. Average image SSIM was calculated using windows of 11x11 pixels. The resultant SSIM index is a value between -1 and 1, where an index of 1 indicates equivalent image inputs. Images of a slanted edge and of a 1951 United States Air Force resolution target were used to assess the effect of the DeepLSR method on image resolution. The Slanted Edge MTF plugin available for ImageJ [20] was used to compute modulation transfer functions.
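Both metrics can be computed directly from their definitions. A minimal NumPy sketch follows; note that `ssim_global` evaluates SSIM over a single global window for brevity, whereas the paper averages over 11x11 sliding windows, so its values will differ from the reported SSIM.

```python
import numpy as np

def psnr(x, y, R=255.0):
    """Peak signal-to-noise ratio in dB between images x and y."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10(R**2 / mse)

def ssim_global(x, y, R=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM (simplification of the windowed version).
    Stability constants C1, C2 follow the standard choice C = (k*R)^2."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (k1 * R) ** 2, (k2 * R) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

In practice, library implementations such as `skimage.metrics.structural_similarity` provide the windowed average directly.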
Performance: The three trained networks were tested on the reserved test images of assorted objects and porcine tissue. The average PSNR and SSIM between the network-estimated images and the ground truth images are reported in Supplementary Table 1 and Supplementary Table 2. For performance benchmarks, we report PSNR and SSIM comparisons for laser vs. LED, oLSR vs. LED, and laser vs. oLSR. We also compared DeepLSR to standard image-processing denoising techniques: median filtering and non-local means [7,8].

Figure 1: DeepLSR architecture. a) Training architecture for image-to-image translation-based laser speckle reduction using a conditional Generative Adversarial Network. A generator learns to transform between pairs of images acquired with coherent and incoherent illumination, while a discriminator learns to classify input images as real or fake. b) Once training is complete, the discriminator is discarded and the trained generator (DeepLSR) reduces laser speckle noise in images not previously seen by the generator.

Figure 2: DeepLSR compared to conventional speckle reduction methods. DeepLSR was trained on an assortment of images that represent a variety of textures, shapes, and bidirectional reflectance distribution functions. a) Images of two test objects with laser illumination, laser illumination with optical speckle reduction (oLSR), median filtering, non-local means, DeepLSR applied to the laser-illuminated image, and the target speckle-free image illuminated with a light-emitting diode (LED). b) Speckle artifacts removed from the laser-illuminated images by DeepLSR. c) Modulation transfer functions for LED illumination and laser illumination with DeepLSR, found using a slanted edge. d) Images of a 1951 United States Air Force target with each illumination strategy and laser illumination with DeepLSR.

Figure 3: DeepLSR applied to images of laser-illuminated ex-vivo porcine gastrointestinal tissues not previously seen by the network.

Table 1: Evaluation with images (n=13) of six assorted objects.