Article

Super-Resolution Enhancement Method Based on Generative Adversarial Network for Integral Imaging Microscopy

1 Department of Computer and Communication Engineering, Chungbuk National University, Cheongju, Chungbuk 28644, Korea
2 Department of Computer Science and Engineering, BRAC University, Dhaka 1212, Bangladesh
* Author to whom correspondence should be addressed.
Sensors 2021, 21(6), 2164; https://doi.org/10.3390/s21062164
Submission received: 22 February 2021 / Revised: 12 March 2021 / Accepted: 16 March 2021 / Published: 19 March 2021
(This article belongs to the Section Sensing and Imaging)

Abstract

The integral imaging microscopy system provides a three-dimensional visualization of a microscopic object. However, it suffers from low resolution due to the fundamental limitation of the F-number (the aperture stop) imposed by the micro lens array (MLA) and a poor illumination environment. In this paper, a generative adversarial network (GAN)-based super-resolution algorithm is proposed to enhance the resolution, where the directional view image is fed directly as input. In the GAN, the generator regresses the high-resolution output from the low-resolution input image, whereas the discriminator distinguishes between the original and the generated image. In the generator, consecutive residual blocks with a content loss are used to retrieve the photo-realistic original image. The network can restore edges and enhance the resolution by factors of ×2, ×4, and even ×8 without seriously degrading image quality. The model is tested with a variety of low-resolution microscopic sample images and successfully generates high-resolution directional view images with better illumination. The quantitative analysis shows that the proposed model performs better for microscopic images than existing algorithms.

1. Introduction

An optical microscope is a piece of magnification equipment for viewing and observing microscopic objects. Microscopes are used extensively in many fields, such as biomedical science, nanophysics, and medical science [1,2,3]. Generally, a microscope consists of an objective lens and a tube lens. The specimen is placed under the objective lens, and the magnified image is viewed through the tube lens, also known as the eyepiece. The magnification depends on the focal lengths of these two lenses. However, conventional two-dimensional (2D) microscopy only enhances the resolution and provides 2D information, from which parallax and depth cannot be perceived. This is a major problem where three-dimensional (3D) information is necessary. To overcome this limitation, different types of 3D microscopy have been proposed, such as confocal [3], stereoscopic [4], integral imaging microscopy (IIM)/light field microscopy (LFM) [2,5,6], and integral imaging holographic microscopy [7]. Among these, IIM/LFM provides full 3D and parallax information. Jang and Javidi first applied the integral imaging technique to microscopic objects [8]. Later, Levoy et al. proposed and developed the first IIM system [9]. An integral imaging system consists of a camera with an MLA that generates elemental images. The main advantage of this system is that it requires only a single shot, since the MLA acts like an array of cameras; the setup is therefore simpler and the acquisition accuracy is better. Adjacent elemental lenses generate perspective views on the principle of the stereoscopic imaging system. In fact, the IIM provides many more perspective views (depending on the lens array) than the usual stereoscopic system. Since the IIM captures multiple views with a single camera, the resolution of each view becomes low; hence, resolution enhancement is required.
Different resolution enhancement techniques for IIM have been proposed [5,10,11,12,13,14,15,16,17]. Some of them are fully mechanical, such as synchronously moving lens arrays [13], pinhole-array-based MLA intensity distribution methods [18], and time-multiplexed systems combining low-resolution images [14]. The resolution is related to the lens array (LA) size: a larger LA enhances the resolution but reduces the depth of field (DOF). To optimize this tradeoff, the MLA-shifting method was proposed [10]. A mechanically controlled piezo-actuator was used to move the MLA in the vertical and horizontal directions (25 μm per step), but MLA shifting on the microscale is error-prone. To mitigate this problem, the interpolation-based intermediate view elemental image (IVEI) generation method was proposed using a graphics processing unit (GPU) [12]. IVEI is an interpolation-based technique in which neighboring pixels are used to reconstruct another pixel. Later, the resolution was improved by applying an iterative bilinear interpolation method, which generates an orthographic view image (OVI) from the neighboring elemental images (EIs) [19]. This is a relatively fast and efficient method; however, its main limitation is that the image quality decreases dramatically after several iterations. Recently, Kwon et al. proposed a deep learning-based resolution enhancement method for IIM [5] in which a super-resolution algorithm based on multiple degradations is employed. It enhances the resolution well; however, the quality decreases dramatically for higher scaling factors. Hence, we focus on a scale-invariant image reconstruction system.
In this work, we improve the IIM resolution using a deep learning-based super-resolution (SR) algorithm. The resolution of conventional microscopy is limited by the lens properties; however, it can be improved using resolution enhancement methods, such as classical iterative interpolation-based methods [6,12,19] and deep learning-based methods [5,15]. Quality enhancement is performed on the OVI. Since the resolution of the OVI is small, upscaling it multiple times produces low-quality output. To solve this problem, a deep learning-based SR algorithm is applied to the directional view images, which are scaled up 2, 4, and 8 times relative to the original image while retaining a photo-realistic, natural appearance. Due to the characteristics of IIM images (poor lighting, distortion, etc.), existing networks cannot be applied directly; in this study, the network is designed with the IIM characteristics in mind. Unlike other super-resolution algorithms, the proposed deep learning-based algorithm first retrieves the edges and then synthesizes the color, whereas existing algorithms perform both at once; hence, the accuracy is significantly better for IIM.
The rest of the paper is organized as follows. Section 2 describes the background necessary for a better understanding of this work. The proposed super-resolution architecture and its processes are described in Section 3. The experimental setup and the quality measurement criteria are presented in Section 4. The results are discussed in Section 5, where they are compared with different existing algorithms. Finally, the conclusion is provided in Section 6.

2. Background of IIM and Super Resolution

2.1. Integral Imaging Microscopy

IIM consists of a conventional microscope and an MLA. The basic schematic diagram of the IIM is shown in Figure 1. An infinity-corrected optical system is placed between the intermediate image plane and the specimen. The sensor captures the elemental image array (EIA) through the MLA installed in front of the camera lens (CL). The OVI is reconstructed from the disparity information contained in the EIA. An object point (x, y, z) is imaged on the EIA plane through the CL and an elemental lens (EL) as given in Equation (1):
X_{EI}^{i,j} = \left[ \frac{f_{MLA}}{f_C}\left( i \times P_{EL} - x \right) - \frac{f_C \left( i \times P_{EL} \right)}{z - f_{MLA}} \right] \frac{g - f_{MLA}}{z - f_{MLA}}, \qquad
Y_{EI}^{i,j} = \left[ \frac{f_{MLA}}{f_C}\left( j \times P_{EL} - y \right) - \frac{f_C \left( j \times P_{EL} \right)}{z - f_{MLA}} \right] \frac{g - f_{MLA}}{z - f_{MLA}}
where f_MLA and f_C are the focal lengths of the micro lens array and the camera lens, respectively. P_EL represents the pitch of the elemental lens, i.e., the distance from one lens to the next. The distance between the MLA and the camera lens is denoted by g, and the indices i and j denote the lens position. However, the disparity between the camera lens and the EL should also be considered, as given in Equation (2):
\Delta X_I = \frac{f_{MLA}}{f_C}\, P_{EL}\left( i_2 - i_1 \right) \frac{g - f_{MLA}}{z - f_{MLA}}, \qquad
\Delta Y_I = \frac{f_{MLA}}{f_C}\, P_{EL}\left( j_2 - j_1 \right) \frac{g - f_{MLA}}{z - f_{MLA}}
where i_1, i_2 and j_1, j_2 index the elemental lenses as in Equation (1). From this disparity, the depth information and viewpoint of each image are obtained. If the number of lenses in the MLA increases, the resolution is enhanced; however, there is a tradeoff between depth and resolution with respect to the number of MLs. The number of directional view images depends on the resolution of each EI, and the resolution of each sub-image depends on the number of EIs as well as the ML. The generation of the OVI from the EIA follows a pixel-mapping rule: the first pixel of the first EI becomes the first pixel of the first OVI, the first pixel of the second EI becomes the second pixel of the first OVI, and so on; similarly, the last pixel of the last EI becomes the last pixel of the last OVI [12].
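The pixel-mapping rule above can be written compactly as a reshaping operation. The sketch below is a hypothetical NumPy helper (not the authors' implementation); the function name, the trimming of partial lenses, and the example sizes are assumptions for illustration only.

```python
import numpy as np

def eia_to_ovi(eia: np.ndarray, num_lenses: int) -> np.ndarray:
    """Rearrange an elemental image array (EIA) into orthographic view images (OVIs).

    Pixel (u, v) of elemental image (m, n) becomes pixel (m, n) of view image (u, v),
    following the mapping rule described in the text (hypothetical helper).
    eia: square EIA of shape (num_lenses * p, num_lenses * p[, channels]).
    """
    p = eia.shape[0] // num_lenses                 # pixels behind each elemental lens
    eia = eia[: num_lenses * p, : num_lenses * p]  # trim any partial lenses at the border
    # split into (m, u, n, v[, c]) blocks, then regroup so each sub-pixel index gives one view
    blocks = eia.reshape(num_lenses, p, num_lenses, p, -1)
    ovi = blocks.transpose(1, 3, 0, 2, 4)          # (u, v, m, n, channels)
    return ovi                                     # ovi[u, v] is one (num_lenses x num_lenses) view image

# Example with illustrative sizes: 100 x 100 lens array, 18 x 18 pixels behind each lens
eia = np.random.rand(1800, 1800, 3)
views = eia_to_ovi(eia, num_lenses=100)
print(views.shape)  # (18, 18, 100, 100, 3)
```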

2.2. Deep Learning-Based Super-Resolution Algorithm

Deep learning has become popular for many different applications [1,20,21,22,23,24,25,26,27,28,29]; single image SR (SISR) is one of them [27,28,29,30,31]. It is a very challenging problem because a specific low-resolution (LR) image must be transformed into a high-resolution (HR) image. The main mechanism behind SISR is that the original HR image is downsampled multiple times to produce the LR image that is fed to the network to train the model. The working principle of the SISR algorithm is shown in Figure 2. Original HR images are convolved with a blur kernel and downsampled by the scaling factor. The LR image y can be modeled as Equation (3):
y = \left( x \otimes n \right) \downarrow_s + N,
where x is the HR image, ⊗ denotes convolution with the kernel n, ↓_s is the downsampling operator with factor s, and N (if present) is an independent additive noise term (also known as the bias). Most SISR algorithms are built on this model.
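As a concrete illustration of Equation (3), the sketch below synthesizes an LR training image from an HR one in PyTorch. The Gaussian blur kernel, its width, and the noise level are illustrative assumptions, not the degradation model used in this paper.

```python
import torch
import torch.nn.functional as F

def degrade(hr: torch.Tensor, scale: int = 4, sigma: float = 1.2, noise_std: float = 0.01) -> torch.Tensor:
    """Synthesize y = (x convolved with n), downsampled by s, plus noise N, from an HR batch (B, C, H, W) in [0, 1]."""
    c = hr.shape[1]
    k = 2 * int(3 * sigma) + 1                                   # kernel size covering about +/- 3 sigma
    ax = torch.arange(k, dtype=torch.float32) - k // 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    kernel = torch.outer(g1d, g1d)
    kernel = (kernel / kernel.sum()).repeat(c, 1, 1, 1)          # one identical kernel per channel
    blurred = F.conv2d(hr, kernel, padding=k // 2, groups=c)     # x convolved with the kernel n
    lr = blurred[..., ::scale, ::scale]                          # downsampling operator with factor s
    return (lr + noise_std * torch.randn_like(lr)).clamp(0, 1)   # additive noise term N

lr = degrade(torch.rand(1, 3, 256, 256), scale=4)
print(lr.shape)  # torch.Size([1, 3, 64, 64])
```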
There are different techniques for SISR; among them, interpolation-based methods are widely used [32,33,34]. A simple three-layer convolutional neural network (CNN)-based SR algorithm, known as SRCNN, was proposed [32]; its three nonlinear layers extract patches from the LR feature map and reconstruct the HR image. Later, a pyramid-structured network (LapSRN) was proposed [33]: the feature map is generated by cascading convolutional layers, upscaling is performed by cascaded convolutions, and a final convolutional layer predicts the sub-band residuals. More recently, an efficient multiple-degradation-based algorithm, SRMD, was proposed [34]; its noise-free variant (SRMDNF) performs even better. HDRN was also proposed recently, in which a hierarchical dense block (HDB) is used as the feature module [31]; the HR image is reconstructed by a sub-pixel operation, and a global fusion module is employed with the HDB. However, super-resolution algorithms based on the generative adversarial network (GAN), first proposed in 2014 [35], are becoming increasingly popular. There are different variants of GAN, such as InfoGAN [36], DCGAN [37], CycleGAN [38], and SRGAN [39]. Most of them use the rectified linear unit (ReLU) activation function [40]. It is difficult to resolve fine texture detail in photo-realistic natural images using general SR algorithms; however, SRGAN mitigates this problem using a perceptual loss function. Hence, we use a modified version of the SRGAN algorithm for resolution enhancement.
The basic concept of the GAN is shown in Figure 3. Unlike other algorithms, it has two parts, the generator (G) and the discriminator (D), which are trained at the same time. G operates on a random variable z (also known as noise). Both the generated data G(z) and the real data x are passed to D, which verifies whether its input is real or fake. Training is formulated as a min–max game rather than a single optimization problem [35]: G is trained to minimize the objective that D maximizes. We can define it in terms of the value function V(D, G) as Equation (4):
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\left[ \log D(x) \right] + \mathbb{E}_{z \sim p_z(z)}\left[ \log\left( 1 - D(G(z)) \right) \right]
In Equation (4), D is trained to maximize the value function while G is trained to minimize it. In this way, the generator learns to produce high-quality images that closely resemble the original data.
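To make the min–max objective concrete, the sketch below shows one alternating update under the common binary cross-entropy formulation of Equation (4). It is a generic illustration rather than the training procedure of this paper; the networks G and D and their optimizers are assumed to be defined elsewhere.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, real, z, opt_g, opt_d):
    """One alternating update of Equation (4): D ascends the objective, G descends it (illustrative)."""
    # discriminator step: maximize log D(x) + log(1 - D(G(z)))
    fake = G(z).detach()                        # detach so only D is updated in this step
    d_real, d_fake = D(real), D(fake)
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: minimize log(1 - D(G(z))); in practice, maximize log D(G(z))
    d_gen = D(G(z))
    g_loss = F.binary_cross_entropy(d_gen, torch.ones_like(d_gen))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```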

3. Proposed Method for IIM Super-Resolution

The whole process is divided into two major parts: IIM capture through the MLA and resolution enhancement using the proposed GAN-based super-resolution algorithm. In the capturing process (shown in Figure 1), the specimen is placed in front of the objective lens of the microscope. The magnified image is formed in the intermediate image plane between the tube lens and the MLA. Each ML works as an individual image source that generates a perspective view image (an EI); together they form the EIA. The EIA cannot be used directly for resolution enhancement. Therefore, the OVI, which contains the directional view information, is generated from the EIA. The resolution of each directional view image equals the number of EIs, and the number of directional view images equals the resolution of each EI. Each directional view image is processed through the designed algorithm; the resolution is enhanced 2, 4, and 8 times, and the views are combined again for the full visualization system.
Figure 4 shows the detailed block diagram of the proposed IIM resolution enhancement system. A honeybee sample (~500 μm) is taken as a specimen. The EIA is captured through a camera whose resolution is 2048 × 2048 pixels. Due to the MLA properties, the outer side of the captured EIA contains noise; hence, a region of interest (ROI) is selected by cropping it into 1885 × 1885 pixels. From this selected ROI, an OVI is generated using a pixel mapping algorithm [12]. In this technique, each EI is mapped into the corresponding view image. The OVIs are fed to the designed GAN model (Figure 5) as input. The resolution of the directional view image from the OVI is individually enhanced 2, 4, and 8 times using the proposed algorithm. The directional view image generates a perspective view, which contains parallax information providing depth perception.
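The end-to-end flow of Figure 4 can be sketched as a few lines of code, reusing the hypothetical eia_to_ovi helper from Section 2.1 and a trained generator netG as described in Section 3.1. The load_eia function, the crop offsets, and the choice of a single view are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
import torch

eia = load_eia()                                    # hypothetical 2048 x 2048 x 3 capture from the sensor
roi = eia[81:1966, 81:1966]                         # remove the noisy border -> 1885 x 1885 ROI (illustrative offsets)
views = eia_to_ovi(roi, num_lenses=100)             # 100 x 100 MLA -> directional view image stack
lr = torch.from_numpy(np.ascontiguousarray(views[0, 0])).permute(2, 0, 1).float()[None] / 255.0
with torch.no_grad():
    sr = netG(lr)                                   # x2, x4, or x8 enhancement, depending on the trained model
```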
The main schematic diagram of the proposed algorithm is shown in Figure 5. The network is a modified version of the SRGAN algorithm. Since microscopic images differ somewhat from conventional photographs, several modifications are made to cope with them; the modifications and the network structure are described below. As mentioned earlier, the whole network has two sections: the generator and the discriminator.

3.1. The Generator Network

The LR image is taken as the input to the generator, where it passes through a convolution layer and a parametric ReLU (PReLU) [41,42]. The PReLU can be defined as Equation (5):
f(x_i) = \begin{cases} x_i, & \text{if } x_i > 0 \\ a_i x_i, & \text{if } x_i \le 0 \end{cases}
where x_i is the input of the hidden layer on the ith channel and a_i is the coefficient of the negative part that controls the slope. The value of a_i is learned via backpropagation during training; when a_i is 0, PReLU reduces to the ReLU activation function. The first convolution layer uses 9 × 9 kernels and 64 feature maps with padding 4. Unlike the original SRGAN, twelve residual blocks are used in the generator network. Each block consists of two convolution layers, each followed by batch normalization (BN), with a PReLU activation in between; the kernel size is 3 × 3 and the padding is 1 for both convolution layers. Because the distribution of each layer's inputs changes during training, saturating nonlinearities gradually slow the learning; BN normalizes each mini-batch and therefore allows a higher learning rate [43]. In the last residual block, there are 64 channels and 64 feature maps with a kernel size of 3 × 3 and padding 4. To enhance the resolution of the LR input image, an upsample block is used, which consists of a convolution, a pixel shuffler, and a PReLU layer. A single pixel shuffler enables efficient sub-pixel convolution, in which the super-resolution takes place in the LR space instead of the HR space [44], whereas two are used in SRGAN; this helps to retrieve the color consistency of the IIM. In the upsample block, the channel size is the same as in the residual blocks, while the number of feature maps is the channel size multiplied by the square of the upscale factor; the kernel size and padding are 3 × 3 and 1, respectively. In gradient-based learning, vanishing gradients can cause the network to stop learning. To overcome this, a skip connection is established between the input features and the upsampling block, where the weight values are carried over from the earlier layer, to prevent the vanishing gradient problem. The final convolution layer has three output channels and a 9 × 9 kernel with padding 4.
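A compact PyTorch sketch of the generator described above is given below. The layer sizes follow the text, while details not stated explicitly (the exact placement of the long skip connection, the number of upsample blocks per scale, and the residual-block ordering) are assumptions.

```python
import math
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-BN-PReLU-Conv-BN with an identity shortcut (kernel 3 x 3, padding 1)."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Sketch of the generator of Figure 5a: twelve residual blocks and pixel-shuffle upsampling."""
    def __init__(self, scale=2, ch=64, n_res=12):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, ch, 9, padding=4), nn.PReLU())
        self.body = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_res)],
                                  nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        # upsample block: conv -> pixel shuffler -> PReLU; ch * 2^2 feature maps per x2 stage
        ups = []
        for _ in range(int(math.log2(scale))):
            ups += [nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU()]
        self.up = nn.Sequential(*ups)
        self.tail = nn.Conv2d(ch, 3, 9, padding=4)
    def forward(self, x):
        feat = self.head(x)
        out = self.body(feat) + feat        # long skip connection against vanishing gradients
        return self.tail(self.up(out))

sr = Generator(scale=4)(torch.rand(1, 3, 100, 100))
print(sr.shape)  # torch.Size([1, 3, 400, 400])
```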

3.2. The Discriminator Network

To distinguish the generated image from the original, a discriminator network is used. After the input layer, there is a convolution layer (three input channels and 64 feature maps) with a kernel size of 9 × 9 and padding 1, followed by a leaky ReLU layer. The leaky ReLU is very similar to the PReLU: when the value of a_i is fixed, PReLU becomes leaky ReLU [45], whereas a_i is learnable in PReLU. In the proposed network, this value is always set to 0.2. There are seven consecutive convolution, BN, and leaky ReLU blocks (C–B–L blocks). The C–B–L blocks start with 64 channels, and the channel count increases successively to 128, 256, and 512. All C–B–L blocks have the same kernel size of 3 × 3 and padding of 1; stride 2 is applied only in the odd-numbered C–B–L blocks. Adaptive average pooling is used before the final convolution layers. Lastly, there are two convolution layers with output channels of 1024 and 1, respectively, each with a 1 × 1 kernel, and a leaky ReLU with a constant value of 0.2 is employed between them. A sigmoid activation function is used to decide whether the input is real or generated. We use a VGG feature map (pre-trained on the ImageNet dataset) to retrieve a photo-realistic image [46], whereas the perceptual loss is used in SRGAN. During training, the stochastic gradient-based optimizer Adam is used [47]. The sample data and source code of the proposed GAN network are available on GitHub (https://github.com/shahinur-alam/IIM-GAN, accessed on 12 March 2021).
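The corresponding discriminator can be sketched as follows. The channel widths and the final 1 × 1 convolutions follow the text; the exact channel pairing of the seven C–B–L blocks and the ordering of the stride-2 blocks are assumptions.

```python
import torch
import torch.nn as nn

def cbl(cin, cout, stride):
    """Convolution - BatchNorm - LeakyReLU block (C-B-L), kernel 3 x 3, padding 1."""
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

class Discriminator(nn.Module):
    """Sketch of the discriminator of Figure 5b; returns one real/fake probability per image."""
    def __init__(self):
        super().__init__()
        chans = [(64, 64), (64, 128), (128, 128), (128, 256), (256, 256), (256, 512), (512, 512)]
        blocks = [cbl(cin, cout, stride=2 if k % 2 == 0 else 1)   # stride 2 in the odd (1st, 3rd, ...) blocks
                  for k, (cin, cout) in enumerate(chans)]
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=1), nn.LeakyReLU(0.2),
            *blocks,
            nn.AdaptiveAvgPool2d(1),                              # adaptive average pooling before the 1x1 convolutions
            nn.Conv2d(512, 1024, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(1024, 1, 1), nn.Sigmoid())
    def forward(self, x):
        return self.features(x).view(x.size(0))

p = Discriminator()(torch.rand(2, 3, 128, 128))
print(p.shape)  # torch.Size([2])
```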

4. Experimental Setup and Quality Measurement Metrics

In the experimental setup shown in Figure 6, we use an Olympus BX41TF microscope with 10× magnification. An MLA composed of 100 × 100 lenses is used for the IIM; each lens has a diameter of 125 μm and a focal length of 2.4 mm. The image is captured by a Point Grey GS3-U3-41C6C-C 1-inch CMOS sensor through a NIKON 20 mm lens. This sensor captures 4.1 megapixels of information at a resolution of 2048 × 2048.
A high-configuration personal computer (PC) is used to train the model; it contains an Intel Core i7-9800X 3.80 GHz processor with 128 GB of RAM, runs Windows 10 Pro 64-bit, and is equipped with an NVIDIA GeForce RTX 2080 Ti GPU. We use the Python programming language in the Anaconda environment with the PyTorch library to train and test the network. Since IIM datasets are scarce, the network is trained on the popular Pascal VOC2012 dataset, which contains 16,700 training and 425 validation images [48]. It has twenty object classes, including person, animal, vehicle, and furniture. Compared with other existing datasets, the VOC2012 dataset is well suited to this specific application. Training took almost 17 h. The trained model is then tested and verified on real microscopic specimens. Because the proposed network first retrieves edges, it can enhance the resolution of microscopic images even though the VOC2012 images differ somewhat from the microscopic specimens.
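A minimal sketch of this training setup is shown below, reusing the Generator and Discriminator sketches from Section 3. The VOC2012 image path, crop size, batch size, and learning rate are illustrative assumptions rather than the values used by the authors.

```python
import glob
import torch
import torch.nn.functional as F
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class SRFolder(Dataset):
    """Yields (LR, HR) pairs by cropping each VOC2012 image and bicubically downscaling the crop."""
    def __init__(self, root, crop=96, scale=2):
        self.files = glob.glob(f"{root}/JPEGImages/*.jpg")
        self.hr_tf = transforms.Compose([transforms.RandomCrop(crop, pad_if_needed=True),
                                         transforms.ToTensor()])
        self.scale = scale
    def __len__(self):
        return len(self.files)
    def __getitem__(self, idx):
        hr = self.hr_tf(Image.open(self.files[idx]).convert("RGB"))
        lr = F.interpolate(hr[None], scale_factor=1 / self.scale,
                           mode="bicubic", align_corners=False)[0].clamp(0, 1)
        return lr, hr

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
loader = DataLoader(SRFolder("VOCdevkit/VOC2012"), batch_size=16, shuffle=True, num_workers=4)
netG, netD = Generator(scale=2).to(device), Discriminator().to(device)
opt_g = torch.optim.Adam(netG.parameters(), lr=1e-4)   # Adam optimizer [47]; learning rate is an assumption
opt_d = torch.optim.Adam(netD.parameters(), lr=1e-4)
```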
There are different kinds of image quality measurement (IQM) techniques that are frequently used to compare the original and output images. Here, we employed the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and power spectral density function (PSD).

4.1. PSNR

PSNR is the most commonly used IQM technique since it is simple and computationally inexpensive [49]. It is the ratio between the maximum possible signal power and the power of the distorting noise. If I(i, j) is the original image and K(i, j) is the generated or distorted image, then the PSNR is calculated as Equation (6):
\mathrm{PSNR} = 10 \log_{10} \left( \frac{MAX^2 \times M \times N}{\sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left[ I(i, j) - K(i, j) \right]^2} \right)
where MAX is the peak signal value (255 for a general 8-bit image), M and N are the width and height of the image, and i and j index the pixels along the width and height, respectively. PSNR has an obvious physical meaning in terms of optimization. Since it is based on the mean squared error, the PSNR value, expressed in decibels, is always nonnegative.
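For reference, Equation (6) can be computed with a straightforward NumPy sketch:

```python
import numpy as np

def psnr(original: np.ndarray, generated: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR of Equation (6) for two images with identical resolution M x N."""
    mse = np.mean((original.astype(np.float64) - generated.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

print(psnr(np.full((64, 64), 200, np.uint8), np.full((64, 64), 204, np.uint8)))  # ~36.09 dB
```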

4.2. SSIM

SSIM calculates image similarity with a focus on the human visual system (HVS). Unlike PSNR, SSIM does not only consider the absolute error but also takes structural information into account [50]. SSIM considers the structure (s), luminance (l), and contrast (c):
s(x, y) = \frac{\sigma_{xy} + c_3}{\sigma_x \sigma_y + c_3}
l(x, y) = \frac{2 \mu_x \mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}
c(x, y) = \frac{2 \sigma_x \sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}
where μ_x and μ_y are the means, σ_x and σ_y are the standard deviations of x and y, respectively, and σ_xy is their covariance. The SSIM is obtained by combining Equations (7)–(9) as:
\mathrm{SSIM}(x, y) = \left[ l(x, y) \right]^{\alpha} \left[ c(x, y) \right]^{\beta} \left[ s(x, y) \right]^{\gamma}
Equation (10) gives the final SSIM value for two signals or images; α, β, and γ are positive, user-defined exponents that weight the relative importance of the three components.
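The sketch below evaluates Equations (7)–(10) once over the whole image with α = β = γ = 1. It is a simplified illustration: the reference implementation [50] averages this statistic over local windows, and the constants c1 = (0.01·MAX)², c2 = (0.03·MAX)², c3 = c2/2 used here are the common choices, assumed rather than taken from this paper.

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, max_val: float = 255.0) -> float:
    """Single-window SSIM from Equations (7)-(10) with alpha = beta = gamma = 1 (simplified sketch)."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    c3 = c2 / 2
    mu_x, mu_y = x.mean(), y.mean()
    sig_x, sig_y = x.std(), y.std()
    sig_xy = ((x - mu_x) * (y - mu_y)).mean()
    l = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)       # luminance
    c = (2 * sig_x * sig_y + c2) / (sig_x ** 2 + sig_y ** 2 + c2)   # contrast
    s = (sig_xy + c3) / (sig_x * sig_y + c3)                        # structure
    return l * c * s

img = np.random.rand(64, 64) * 255
print(global_ssim(img, img))  # 1.0 for identical images
```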

4.3. PSD

The power spectral density function is a kind of no-reference image quality assessment technique [50]. The power spectrum of a signal represents how the signal's power is distributed over frequency. The PSD is calculated using the 2D Fourier transform as Equation (11):
\mathrm{PSD} = \log_{10} \left| \mathcal{F}\left\{ x(t) \right\} \right|^2
where x(t) is the signal (here, the image) and F denotes the Fourier transform. Equation (11) provides a continuous spectral map; to quantify the PSD as a single value, the mean of the spectral power is calculated.
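A minimal NumPy sketch of Equation (11) and the mean-value reduction described above is given below; the small epsilon added before the logarithm is an assumption to avoid log(0).

```python
import numpy as np

def mean_psd(img: np.ndarray) -> float:
    """Mean log power spectral density of Equation (11), computed with a 2D FFT."""
    spectrum = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    psd = np.log10(np.abs(spectrum) ** 2 + 1e-12)   # epsilon avoids log of zero (assumption)
    return float(psd.mean())                        # single no-reference quality score

print(mean_psd(np.random.rand(128, 128)))
```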

5. Results and Discussion of the Proposed Resolution Enhancement Method

In this research, five different kinds of microscopic specimens (honeybee, Zea Mays (Z. Mays), hydra, chip resistor, and printed circuit board (PCB)) are used. The specimens are captured with an IIM composed of a traditional microscope and an MLA; all specimens are shown in Figure 7. The PSNR, SSIM, and PSD are used to evaluate the quality of the enhanced images. The visual results are shown in Figure 8 and Figure 9 for ×2 and ×4 upscaling of the different specimens, respectively. These figures show that the output of the proposed algorithm is almost indistinguishable from the original image. Only a single OVI from each specimen is displayed here for clarity. Table 1 and Table 2 show the PSNR, SSIM, and PSD comparison for ×2 and ×4 upscaling against LapSRN [33], SRCNN [32], SRMD [34], SRMDNF [34], and SRGAN [39]; the highest obtained result for each case is shown in boldface (Table 1 and Table 2). SRMD and SRMDNF are evaluated with two upsampling and downsampling mechanisms, and in most cases the PSNR, SSIM, and PSD values are higher for bicubic interpolation than for the general method.
From the tables, the PSNR, SSIM, and PSD values are higher for the proposed algorithm in most cases, which confirms that the method is highly suitable for microscopic specimens. There is only one case, the PCB specimen at the ×2 upscaling factor, in which the PSNR value is higher for the SRMDNF bicubic interpolation technique. This is a consequence of the algorithm's properties and the way the PSNR is calculated. SRMDNF enhances the image quality without seriously considering the edges. A closer look at Figure 8, comparing the PCB specimen for SRMDNF and the proposed algorithm, reveals a large difference in the edges: the microwires are clearly visible for the proposed algorithm but not for SRMDNF (they appear as a single wire stripe). Since the PSNR is based on the mean squared error and the SRMDNF output is brighter, SRMDNF obtains a higher PSNR value; however, the SSIM and PSD values are higher for the proposed method. A considerable limitation of most existing algorithms is that they do not support more than ×2 or ×4 upscaling (the output becomes very noisy and of low quality), with the exception of SRGAN; supporting higher factors is therefore a great advantage of this type of algorithm [51]. The ×8 upscaling results for the different specimens are shown in Table 3, and they are reasonable. It is observed in Table 1, Table 2 and Table 3 that the PSNR, SSIM, and PSD values are almost identical across the different quality measures, which verifies that the proposed method can retrieve a good-quality image even for the ×8 upscaling factor. The PSNR, SSIM, and PSD values for the different upscaling factors are compared in Figure 10, Figure 11 and Figure 12, respectively. Image quality is inversely related to the scaling factor: the higher the scaling factor, the lower the image quality. The PSNR values for the honeybee, Z. Mays, hydra, chip, and PCB are (33.57, 33.19, 37.84, 32.14, 32.60), (31.63, 31.79, 35.14, 30.98, 31.95), and (31.71, 31.89, 35.59, 30.48, 31.81) for ×2, ×4, and ×8 upscaling, respectively. Figure 10 shows that the PSNR value is always highest for ×2 and relatively lower for ×8 upscaling. The structure of the generated images is also best for ×2 upscaling (shown in Figure 11), although it is very similar for the other factors. The SSIM values for the honeybee, Z. Mays, hydra, chip, and PCB are (0.99, 0.99, 0.99, 0.99, 0.99), (0.99, 0.98, 0.99, 0.98, 0.99), and (0.98, 0.98, 0.99, 0.99, 0.99) for ×2, ×4, and ×8 upscaling, respectively. The PSD values are only loosely related to the scaling factor because they depend on the specimen, whose brightness and sharpness differ considerably from one specimen to another. The PSD values for the honeybee, Z. Mays, hydra, chip, and PCB are (5.75, 5.75, 5.06, 5.79, 5.57), (5.76, 5.76, 5.07, 5.78, 5.56), and (5.74, 5.74, 5.04, 5.79, 5.18) for ×2, ×4, and ×8 upscaling, respectively; the PSD values are given in decibels (dB). Figure 12 shows that the PSD values vary across the different specimens. The quantitative results show that the output images are good enough to provide a better microscopic view; however, the adversarial network introduces some additional noise.
Because the proposed method supports multiple upscaling factors directly, it does not require the generation of any iterative interpolation-based intermediate view images [5], for which the generation time increases dramatically with each iteration. The proposed method thus reduces the computational complexity while retaining the image quality. On average, the proposed algorithm takes 0.025 s to generate one directional view image. The training and testing times are measured using the PyTorch default functions.

6. Conclusions

In this paper, a useful and efficient deep learning-based resolution enhancement method for IIM is presented. The proposed adversarial network efficiently handles photo-realistic images and reconstructs images similar to the originals. The EIA is captured through a camera sensor attached to the lens array and a 2D microscope. Then, the OVI is generated from the EIA according to the pixel mapping algorithm. The OVI contains multiple directional view images that provide 3D perception to the observer. The directional view images are fed directly to the SR algorithm, yielding high-quality resolution-enhanced images.
The quantitative analysis of the PSD, SSIM, and PSNR shows that the proposed method outperforms the compared state-of-the-art algorithms. The algorithm requires a very short time to generate a single view image. Additionally, the PSNR, SSIM, and PSD values are almost identical for the ×2, ×4, and ×8 upscaling factors, which is a great advantage of the proposed system and indicates that the enhanced image quality depends only weakly on the scaling factor. In future work, the main focus will be on obtaining better resolution-enhanced images and faster generation. Different noise suppression techniques will be applied, and state-of-the-art deep learning algorithms will be compared with our improved model. Another important issue is that IIM datasets remain inadequate to date; we will therefore also focus on building datasets for deep learning networks.

Author Contributions

Conceptualization, M.S.A. and K.-C.K.; data curation, M.S.A. and M.-U.E.; formal analysis, M.S.A. and N.K.; funding acquisition, N.K.; investigation, M.S.A. and K.-C.K.; methodology, M.S.A.; project administration, N.K.; resources, M.S.A.; software, M.S.A. and M.Y.A.; supervision, N.K.; validation, M.S.A.; visualization, M.S.A.; writing—original draft, M.S.A.; writing—review and editing, M.S.A., K.-C.K., M.-U.E., M.Y.A., M.A.A. and N.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (No. NRF-2018R1D1A3B07044041 and No. NRF-2020R1A2C1101258) and Information Technology Research Center support program (IITP-2020-0-01462), supervised by the IITP (Institute for Information and communications Technology Promotion).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available data were analyzed in this study. This data can be found here: [https://github.com/shahinur-alam/IIM-GAN, accessed on 12 March 2021].

Acknowledgments

We thank all our colleagues who helped us complete this manuscript and supported us both technically and morally.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

C–B–L   Convolution, batch normalization, leaky ReLU
CL      Camera lens
DOF     Depth of field
EI      Elemental image
EIA     Elemental image array
EL      Elemental lens
GAN     Generative adversarial network
GPU     Graphic processing unit
HR      High resolution
HVS     Human visual system
IIM     Integral imaging microscopy
IQM     Image quality measurement
IVEI    Intermediate view elemental image
LA      Lens array
LFM     Light field microscopy
LR      Low resolution
MLA     Micro lens array
OVI     Orthographic view image
PCB     Printed circuit board
PSD     Power spectral density
PSNR    Peak signal-to-noise ratio
ReLU    Rectified linear unit
PReLU   Parametric rectified linear unit
SISR    Single image super-resolution
SR      Super resolution
SSIM    Structural similarity index

References

  1. Belthangady, C.; Royer, L.A. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat. Methods 2019, 16, 1215–1225. [Google Scholar] [CrossRef]
  2. Palmieri, L.; Scrofani, G.; Incardona, N.; Saavedra, G.; Martínez-Corral, M.; Koch, R. Robust Depth Estimation for Light Field Microscopy. Sensors 2019, 19, 500. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Schlafer, S.; Meyer, R.L. Confocal microscopy imaging of the biofilm matrix. J. Microbiol. Methods 2017, 138, 50–59. [Google Scholar] [CrossRef] [PubMed]
  4. Wu, Y.; Rivenson, Y.; Wang, H.; Luo, Y.; Ben-David, E.; Bentolila, L.A.; Pritz, C.; Ozcan, A. Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning. Nat. Methods 2019, 16, 1323–1331. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Kwon, K.-C.; Kwon, K.H.; Erdenebat, M.-U.; Piao, Y.-L.; Lim, Y.-T.; Kim, M.Y.; Kim, N. Resolution-Enhancement for an Integral Imaging Microscopy Using Deep Learning. IEEE Photonics J. 2019, 11, 1–12. [Google Scholar] [CrossRef]
  6. Kim, J.; Jung, J.-H.; Jeong, Y.; Hong, K.; Lee, B. Real-time integral imaging system for light field microscopy. Opt. Express 2014, 22, 10210–10220. [Google Scholar] [CrossRef] [PubMed]
  7. Kim, N.; Alam, M.A.; Bang, L.T.; Phan, A.H.; Piao, M.L.; Erdenebat, M.U. Advances in the light field displays based on integral imaging and holographic techniques (Invited Paper). Chin. Opt. Lett. 2014, 12, 060005. [Google Scholar] [CrossRef]
  8. Jang, J.-S.; Javidi, B. Three-dimensional integral imaging of micro-objects. Opt. Lett. 2004, 29, 1230. [Google Scholar] [CrossRef] [Green Version]
  9. Levoy, M.; Ng, R.; Adams, A.; Footer, M.; Horowitz, M. Light field microscopy. In Proceedings of the ACM SIGGRAPH 2006 Papers, SIGGRAPH’06, Boston, MA, USA, 30 July–3 August 2006; ACM Press: New York, NY, USA, 2006; pp. 924–934. [Google Scholar]
  10. Lim, Y.-T.; Park, J.-H.; Kwon, K.-C.; Kim, N. Resolution-enhanced integral imaging microscopy that uses lens array shifting. Opt. Express 2009, 17, 19253. [Google Scholar] [CrossRef]
  11. Kwon, K.-C.; Erdenebat, M.-U.; Lim, Y.-T.; Joo, K.-I.; Park, M.-K.; Park, H.; Jeong, J.-R.; Kim, H.-R.; Kim, N. Enhancement of the depth-of-field of integral imaging microscope by using switchable bifocal liquid-crystalline polymer micro lens array. Opt. Express 2017, 25, 30503. [Google Scholar] [CrossRef]
  12. Kwon, K.-C.; Jeong, J.-S.; Erdenebat, M.-U.; Lim, Y.-T.; Yoo, K.-H.; Kim, N. Real-time interactive display for integral imaging microscopy. Appl. Opt. 2014, 53, 4450. [Google Scholar] [CrossRef]
  13. Jang, J.-S.; Javidi, B. Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics. Opt. Lett. 2002, 27, 324. [Google Scholar] [CrossRef] [PubMed]
  14. Kishk, S.; Javidi, B. Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging. Opt. Express 2003, 11, 3528. [Google Scholar] [CrossRef]
  15. Rivenson, Y.; Göröcs, Z.; Günaydin, H.; Zhang, Y.; Wang, H.; Ozcan, A. Deep learning microscopy. Optica 2017, 4, 1437. [Google Scholar] [CrossRef] [Green Version]
  16. Martinez-Corral, M.; Dorado, A.; Barreiro, J.C.; Saavedra, G.; Javidi, B. Recent Advances in the Capture and Display of Macroscopic and Microscopic 3-D Scenes by Integral Imaging. Proc. IEEE 2017, 105, 825–836. [Google Scholar] [CrossRef] [Green Version]
  17. Alam, S.; Kwon, K.-C.; Erdenebat, M.-U.; Lim, Y.-T.; Imtiaz, S.; Sufian, M.A.; Jeon, S.-H.; Kim, N. Resolution Enhancement of an Integral Imaging Microscopy Using Generative Adversarial Network. In Proceedings of the 14th Pacific Rim Conference on Lasers and Electro-Optics (CLEO PR 2020) (2020), paper C3G_4, The Optical Society, Sydney, Australia, 2–6 August 2020. [Google Scholar]
  18. Erdmann, L.; Gabriel, K.J. High-resolution digital integral photography by use of a scanning microlens array. Appl. Opt. 2001, 40, 5592. [Google Scholar] [CrossRef] [PubMed]
  19. Kwon, K.-C.; Jeong, J.-S.; Erdenebat, M.-U.; Piao, Y.-L.; Yoo, K.-H.; Kim, N. Resolution-enhancement for an orthographic-view image display in an integral imaging microscope system. Biomed. Opt. Express 2015, 6, 736–746. [Google Scholar] [CrossRef] [Green Version]
  20. Kwon, H.; Yoon, H.; Park, K.-W. CAPTCHA Image Generation: Two-Step Style-Transfer Learning in Deep Neural Networks. Sensors 2020, 20, 1495. [Google Scholar] [CrossRef] [Green Version]
  21. Kwon, H.; Yoon, H.; Park, K.W. Robust CAPTCHA image generation enhanced with adversarial example methods. IEICE Trans. Inf. Syst. 2020, 103, 879–882. [Google Scholar] [CrossRef] [Green Version]
  22. Zhang, Q.; Yang, L.T.; Chen, Z.; Li, P. A survey on deep learning for big data. Inf. Fusion 2018, 42, 146–157. [Google Scholar] [CrossRef]
  23. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
  24. Alam, M.S.; Kwon, K.-C.; Alam, M.A.; Abbass, M.Y.; Imtiaz, S.M.; Kim, N. Trajectory-Based Air-Writing Recognition Using Deep Neural Network and Depth Sensor. Sensors 2020, 20, 376. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Xie, J.; Girshick, R.; Farhadi, A. Deep3D: Fully Automatic 2D-to-3D Video Conversion with Deep Convolutional Neural Networks; European Conference on Computer Vision; Springer: Cham, Switzerland, 2016. [Google Scholar]
  26. Nguyen-Phuoc, T.; Li, C.; Theis, L.; Richardt, C.; Yang, Y.L. HoloGAN: Unsupervised learning of 3D representations from natural images. In Proceedings of the 2019 International Conference on Computer Vision Workshop, ICCVW 2019, Seoul, Korea, 27 October–2 November 2019; pp. 2037–2040. [Google Scholar]
  27. Yang, W.; Zhang, X.; Tian, Y.; Wang, W.; Xue, J.-H.; Liao, Q. Deep Learning for Single Image Super-Resolution: A Brief Review. IEEE Trans. Multimed. 2019, 21, 3106–3121. [Google Scholar] [CrossRef] [Green Version]
  28. Abbass, M.Y.; Kwon, K.C.; Alam, M.S.; Piao, Y.L.; Lee, K.Y.; Kim, N. Image super resolution based on residual dense CNN and guided filters. Multimed. Tools Appl. 2020, 80, 1–19. [Google Scholar] [CrossRef]
  29. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  30. Mei, Y.; Fan, Y.; Zhou, Y.; Huang, L.; Huang, T.S.; Shi, H. Image super-resolution with cross-scale non-local attention and exhaustive self-exemplars mining. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; IEEE Computer Society: Piscataway, NJ, USA, 2020; pp. 5689–5698. [Google Scholar]
  31. Jiang, K.; Wang, Z.; Yi, P.; Jiang, J. Hierarchical dense recursive network for image super-resolution. Pattern Recognit. 2020, 107, 107475. [Google Scholar] [CrossRef]
  32. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307. [Google Scholar] [CrossRef] [Green Version]
  33. Lai, W.-S.; Huang, J.-B.; Ahuja, N.; Yang, M.-H. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; Volume 2017, pp. 5835–5843. [Google Scholar]
  34. Zhang, K.; Zuo, W.; Zhang, L. Learning a Single Convolutional Super-Resolution Network for Multiple Degradations. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 3262–3271. [Google Scholar]
  35. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
  36. Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; Abbeel, P. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. arXiv 2016, arXiv:1606.03657. (preprint). [Google Scholar]
  37. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. (preprint). [Google Scholar]
  38. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. Proc. IEEE Int. Conf. Comput. Vis. 2017, 2223–2232. [Google Scholar]
  39. Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 105–114. [Google Scholar]
  40. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; Omnipress: Madison, WI, USA, 2010; pp. 807–814. [Google Scholar]
  41. Xu, B.; Wang, N.; Chen, T.; Li, M. Empirical Evaluation of Rectified Activations in Convolutional Network. arXiv 2015, arXiv:1505.00853. [Google Scholar]
  42. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; Volume 2015, pp. 1026–1034. [Google Scholar]
  43. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6–11 July 2015; International Machine Learning Society (IMLS): Baltimore, MD, USA, 2015; Volume 1, pp. 448–456. [Google Scholar]
  44. Shi, W.; Caballero, J.; Huszar, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; IEEE Computer Society: Washington, DC, USA, 2016; Volume 2016, pp. 1874–1883. [Google Scholar]
  45. Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the in ICML Workshop on Deep Learning for Audio, Speech and Language Processing, Atlanta, GA, USA, 16–21 June 2013. [Google Scholar]
  46. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  47. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  48. The PASCAL Visual Object Classes Challenge (VOC2012). 2012. Available online: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/ (accessed on 15 June 2020).
  49. Mandal, J.K.; Satapathy, S.C.; Sanyal, M.K.; Sarkar, P.P.; Mukhopadhyay, A. Analysis and Evaluation of Image Quality Metrics. Adv. Intell. Syst. Comput. 2015, 340, 369–378. [Google Scholar]
  50. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Wang, Z.; Chen, J.; Hoi, S.C.H. Deep Learning for Image Super-resolution: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 1. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Schematic diagram of the integral imaging microscopy (IIM) capturing system. A specimen is placed in front of the objective lens and magnified through the tube lens; a micro lens array is placed in front of the sensor to capture the elemental image (EI).
Figure 2. The basic concept of the single image super-resolution (SISR) algorithm. High-resolution images are downsampled to the corresponding low-resolution image and that low-resolution image inversely reconstructs the high-resolution image. The model is learned from the noise term during the training process.
Figure 3. Schematic diagram of a basic Generative Adversarial Network. Latent variable or noise term is used in the generator part to generate the output, whereas the discriminator part compares the generated and original data to distinguish whether it is real or fake.
Figure 4. Block diagram of the proposed IIM resolution enhancement method. (a) An IIM capturing system in which an objective lens, a tube lens, and a micro lens array are employed to capture the EIA. (b) In the preprocessing part, the outer dark region is removed by cropping the elemental image array (EIA), and the orthographic view image (OVI) is generated from the EIA using the pixel mapping algorithm. (c) A super-resolution algorithm is designed and trained using a generative adversarial network (GAN). (d) The resolution of the OVI is enhanced using the SR network. (e) The resolution-enhanced directional view image is shown as the output.
Figure 5. The architecture of the proposed GAN-based resolution enhancement algorithm: (a) generator network: the main building block of the network, in which the noise term is used to generate the output; (b) discriminator network: discriminates between the generated and the real image.
Figure 6. Experimental setup for the proposed IIM resolution enhancement system. The micro lens array (MLA) is placed between the camera sensor and the specimen, and the captured EIA is shown on the display in real time.
Figure 7. Different types of specimen: (a) honeybee, (b) Z. Mays, (c) hydra, (d) chip, and (e) printed circuit board (PCB).
Figure 8. ×2 upscaling comparison. The SRCNN, LapSRN, SRMD, SRMDNF, and SRGAN algorithms are compared with the proposed method. In most cases, the proposed method performs better than the others, and the super-resolved image is almost indistinguishable from the original.
Figure 9. ×4 upscaling comparison. The SRCNN, LapSRN, SRMD, SRMDNF, and SRGAN algorithms are compared with the proposed method. In all cases, the proposed method performs better than the others, and the super-resolved image is almost indistinguishable from the original.
Figure 10. PSNR comparison between the ×2, ×4, and ×8 upscaling factors. The maximum values are found for ×2; however, compared with ×8, the difference is no more than 2 dB.
Figure 11. SSIM comparison between the ×2, ×4, and ×8 upscaling factors. The best result is found for ×2 upscaling, meaning that the output and original images are most similar.
Figure 12. PSD comparison between the ×2, ×4, and ×8 upscaling factors. The PSD value varies across the different specimens; however, in most cases, ×2 upscaling performs best.
Table 1. PSNR, SSIM, and PSD comparison of the super-resolved OVI (×2) using different algorithms. In most cases, the proposed method performs better, except for the PSNR of the PCB specimen, for which the other metrics are still better.
Method              Metric   Honeybee   Z. Mays   Hydra   Chip    PCB
SRCNN               PSNR     15.74      18.56     16.33   12.82   12.32
                    SSIM     0.63       0.84      0.89    0.39    0.64
                    PSD      4.82       4.53      4.28    5.01    4.82
LapSRN              PSNR     29.11      29.34     37.43   30.63   29.85
                    SSIM     0.97       0.97      0.99    0.98    0.98
                    PSD      5.10       4.90      4.38    5.17    5.06
SRMD (general)      PSNR     23.54      27.19     19.35   22.83   23.41
                    SSIM     0.83       0.92      0.97    0.98    0.91
                    PSD      4.27       4.18      3.90    4.51    4.56
SRMD (bicubic)      PSNR     29.23      32.57     37.46   30.97   33.99
                    SSIM     0.96       0.97      0.99    0.98    0.98
                    PSD      5.23       4.98      4.36    5.25    5.01
SRMDNF (general)    PSNR     27.74      31.34     36.68   28.80   30.15
                    SSIM     0.93       0.95      0.99    0.97    0.97
                    PSD      4.81       4.76      4.18    5.17    4.81
SRMDNF (bicubic)    PSNR     32.38      31.86     37.25   31.49   34.86
                    SSIM     0.97       0.92      0.98    0.98    0.98
                    PSD      5.18       4.17      4.39    5.32    4.81
SRGAN               PSNR     32.68      32.33     37.33   31.47   31.53
                    SSIM     0.98       0.98      0.99    0.98    0.98
                    PSD      5.49       5.24      4.74    5.46    5.16
Proposed            PSNR     33.37      33.19     37.84   32.14   32.60
                    SSIM     0.99       0.99      0.99    0.99    0.99
                    PSD      5.75       5.75      5.06    5.79    5.57
Table 2. PSNR, SSIM, and PSD comparison of the super resolved OVI (×4) using different algorithms. The proposed algorithm performs better in all cases.
Method              Metric   Honeybee   Z. Mays   Hydra   Chip    PCB
SRCNN               PSNR     16.60      19.57     16.61   13.73   12.97
                    SSIM     0.67       0.86      0.89    0.47    0.66
                    PSD      4.43       3.87      3.97    4.44    4.43
LapSRN              PSNR     23.29      25.53     30.70   22.81   22.86
                    SSIM     0.87       0.93      0.98    0.87    0.93
                    PSD      4.45       4.28      4.14    4.81    4.73
SRMD (general)      PSNR     21.00      25.58     27.43   21.10   21.57
                    SSIM     0.79       0.90      0.97    0.72    0.88
                    PSD      3.93       3.99      3.71    4.21    4.38
SRMD (bicubic)      PSNR     24.99      29.07     33.66   25.62   25.64
                    SSIM     0.88       0.92      0.98    0.89    0.94
                    PSD      4.44       4.33      4.01    4.65    4.48
SRMDNF (general)    PSNR     25.12      29.10     33.70   25.96   26.02
                    SSIM     0.88       0.92      0.99    0.90    0.94
                    PSD      4.50       4.46      4.01    4.87    4.65
SRMDNF (bicubic)    PSNR     25.22      29.25     34.17   25.96   26.08
                    SSIM     0.89       0.92      0.99    0.90    0.94
                    PSD      4.57       4.44      4.01    4.87    4.69
SRGAN               PSNR     29.58      30.46     34.28   28.06   29.72
                    SSIM     0.98       0.98      0.99    0.98    0.98
                    PSD      4.98       4.63      4.59    5.14    4.87
Proposed            PSNR     31.63      31.79     35.14   30.98   31.95
                    SSIM     0.99       0.98      0.99    0.98    0.99
                    PSD      5.76       5.76      5.07    5.78    5.56
Table 3. ×8 upscaling directional view image comparison using the proposed algorithm.
Metric   Honeybee   Z. Mays   Hydra   Chip    PCB
PSNR     31.71      31.89     35.59   30.48   31.81
SSIM     0.98       0.98      0.99    0.99    0.99
PSD      5.74       5.74      5.04    5.79    5.18
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
