Article

Enhanced Single Image Super Resolution Method Using Lightweight Multi-Scale Channel Dense Network

1 Department of Convergence IT Engineering, Kyungnam University, Changwon 51767, Korea
2 Department of IT Engineering, Sookmyung Women’s University, Seoul 04310, Korea
3 Intelligent Convergence Research Laboratory, Electronics and Telecommunications Research Institute (ETRI), Daejeon 34129, Korea
* Author to whom correspondence should be addressed.
Sensors 2021, 21(10), 3351; https://doi.org/10.3390/s21103351
Submission received: 11 April 2021 / Revised: 8 May 2021 / Accepted: 10 May 2021 / Published: 12 May 2021
(This article belongs to the Special Issue Visual Sensor Networks for Object Detection and Tracking)

Abstract

Super resolution (SR) enables a high-resolution (HR) image to be generated from one or more low-resolution (LR) images. Since a variety of CNN models have recently been studied in computer vision, these approaches have been combined with SR to provide higher-quality image restoration. In this paper, we propose a lightweight CNN-based SR method, named the multi-scale channel dense network (MCDN). To design the proposed network, we extracted training images from the DIVerse 2K (DIV2K) dataset and investigated the trade-off between SR accuracy and network complexity. The experimental results show that the proposed method significantly reduces the network complexity, such as the number of network parameters and the total memory capacity, while maintaining slightly better or similar perceptual quality compared to previous methods.

1. Introduction

Real-time object detection techniques have been applied to a variety of computer vision areas [1,2], such as object classification and object segmentation. Since these techniques mainly operate in constrained environments, the input images obtained there can be degraded by camera noise or compression artifacts [3,4,5]. In particular, it is difficult to detect objects in low-quality images. Super resolution (SR) aims to recover a high-resolution (HR) image from a low-resolution (LR) image. It is widely deployed in various image enhancement areas, such as preprocessing for object detection [6] as shown in Figure 1, medical imaging [7,8], satellite imaging [9], and surveillance imaging [10]. In general, most SR methods can be categorized into single-image SR (SISR) [11] and multi-image SR (MISR). Deep neural network (DNN) based SR algorithms have been developed with various neural networks, such as the convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM), and generative adversarial network (GAN). Recently, CNN [12] based SISR approaches have provided powerful visual enhancement in terms of the peak signal-to-noise ratio (PSNR) [13] and the structural similarity index measure (SSIM) [14].
SR was initially studied using pixel-wise interpolation algorithms, such as bilinear and bicubic interpolation. Although these approaches are fast and straightforward to implement, they have limitations in representing the complex textures of the generated HR image. As various CNN models have recently been studied in computer vision, they have been applied to SISR and surpass the conventional pixel-wise interpolation methods. To achieve higher SR performance, several deeper and denser network architectures have been incorporated into CNN-based SR networks.
As shown in Figure 2, the inception block [15] was designed to obtain sparse feature maps by applying different kernel sizes. He et al. [16] proposed ResNet using the residual block, which learns residual features through skip connections. It should be noted that CNN models with residual blocks support fast training and avoid the vanishing gradient effect. In addition, Huang et al. [17] proposed densely connected convolutional networks (DenseNet) based on the dense block, which combines hierarchical feature maps along the convolution layers for richer feature representations. Because the feature maps of the previous convolution layers are concatenated with those of the current convolution layer within a dense block, more memory capacity is required to store the massive feature maps and network parameters. In this paper, we propose a lightweight CNN-based SR model that reduces the memory capacity as well as the number of network parameters. The main contributions of this paper are summarized as follows:
  • We propose the multi-scale channel dense block (MCDB) as the building block of a lightweight CNN-based SR network structure.
  • Through a variety of ablation studies, the proposed network architecture is optimized in terms of the number of dense blocks and dense layers.
  • Finally, we investigate the trade-off between network complexity and SR performance on publicly available test datasets in comparison with previous methods.
The remainder of this paper is organized as follows. In Section 2, we briefly overview the previous studies related to CNN-based SISR methods. In Section 3, we describe the proposed network framework. Finally, experimental results and conclusions are given in Section 4 and Section 5, respectively.

2. Related Works

In general, CNN-based SR models have shown improved interpolation performance compared to the previous pixel-wise interpolation methods. Dong et al. [18] proposed the super resolution convolutional neural network (SRCNN), which consists of three convolution layers and trains an end-to-end mapping from a bicubic-interpolated LR image to an HR image. After the advent of SRCNN, Dong et al. [19] proposed the fast super-resolution convolutional neural network (FSRCNN), which performs deconvolution at the end of the network so that the model can use smaller filter sizes and more convolution layers before the upscaling stage. In addition, it achieved a speedup of more than 40 times with even better quality. Shi et al. [20] proposed the efficient sub-pixel convolutional neural network (ESPCN) to train more accurate upsampling filters, which was first deployed in real-time SR applications. Note that both FSRCNN and ESPCN place their upsampling layers at the end of the network to reduce the network complexity. Kim et al. [21] designed a very deep convolutional network (VDSR) composed of 20 convolution layers with a global skip connection. This method verified that contexts over large image regions are efficiently exploited by cascading small filters in a deep network structure. SRResNet [22] was designed with multiple residual blocks and a generative adversarial network (GAN) [23] to enhance texture detail using a perceptual loss function. Tong et al. [24] proposed super-resolution using dense skip connections (SRDenseNet), which consists of eight dense blocks, each containing eight dense layers. Because the feature maps of the previous convolution layers are concatenated with those of the current convolution layer within a dense block, heavy memory capacity is required to store the network parameters and the feature maps temporarily generated between convolution layers. The residual dense network (RDN) [25] is composed of multiple residual dense blocks, each of which includes a skip connection within the dense block for more stable network training. Because both the number of network parameters and the memory capacity increase in proportion to the number of dense blocks, Ahn et al. [26] proposed a cascading residual network (CARN) to reduce the network complexity. The CARN architecture adds multiple cascading connections from each intermediate convolution layer to the others for an efficient flow of feature maps and gradients. Lim et al. [27] proposed an enhanced deep residual network for SR (EDSR), which consists of 32 residual blocks, each containing two convolution layers. In particular, EDSR removed the batch normalization process from the residual block to speed up network training.
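To make the post-upsampling design of FSRCNN and ESPCN concrete, the following is a minimal PyTorch sketch (not the authors' code; the layer widths and the ×4 scale are illustrative): features are extracted at LR resolution and a sub-pixel (PixelShuffle) layer performs the upscaling only at the very end.

```python
import torch
import torch.nn as nn

class TinyPostUpsampleSR(nn.Module):
    def __init__(self, scale=4, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
        )
        # the last convolution produces scale^2 channels; PixelShuffle rearranges
        # them into an image that is `scale` times larger in each dimension
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):                       # x: [B, 1, N, M] low-resolution Y channel
        return self.upsample(self.features(x))  # [B, 1, 4N, 4M]

# example: a 25x25 LR patch becomes a 100x100 SR patch
print(TinyPostUpsampleSR()(torch.randn(1, 1, 25, 25)).shape)  # torch.Size([1, 1, 100, 100])
```

Keeping all convolutions at the LR resolution is what makes this family of models fast compared to pre-upsampling designs such as SRCNN and VDSR.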
Although the aforementioned methods have demonstrated better SR performance, they tend to require complicated network architectures with enormous numbers of network parameters, excessive convolution operations, and high memory usage. To reduce the network complexity, several studies have investigated more lightweight SR models [28,29]. Li et al. [30] proposed the multi-scale residual network (MSRN), which uses two bypass networks with different kernel sizes. In this way, the feature maps of the bypass networks are shared with each other so that image features are extracted at different scales. Although MSRN reduced the number of parameters to about one-seventh that of EDSR, its SR performance also decreased substantially, especially when generating ×4 SR images. Recently, Kim et al. [31] proposed a lightweight SR method (SR-ILLNN) that has two input layers, which take the low-resolution image and the interpolated image. In this paper, we propose a lightweight SR model, named the multi-scale channel dense network (MCDN), to provide better SR performance while significantly reducing the network complexity compared to previous methods.

3. Proposed Method

3.1. Overall Architecture of MCDN

The proposed network aims at generating an HR image of size 4N × 4M, where N and M denote the width and height of the input image, respectively. In this paper, we denote both feature maps and kernels as [W × H × C], where W × H is the spatial 2-dimensional (2D) size and C is the number of channels. As depicted in Figure 3, MCDN is composed of four parts: the input layer, the multi-scale channel extractor, the upsampling layer, and the output layer. In particular, the multi-scale channel extractor consists of three multi-scale channel dense blocks (MCDBs), with a skip and dense connection per MCDB. In general, the convolution operation $H_i$ of the $i$-th layer calculates the feature maps $F_i$ from the previous feature maps $F_{i-1}$ as in Equation (1):

$$F_i = H_i(F_{i-1}), \quad \text{where } H_i(F_{i-1}) = \sigma(W_i \ast F_{i-1} + B_i),$$

where $F_{i-1}$, $W_i$, $B_i$, $\sigma$, and $\ast$ denote the previous feature maps, the kernel weights, the biases, an activation function, and the weighted sum (convolution) between the previous feature maps and the kernel weights, respectively. For all convolution layers, we set the same kernel size of 3 × 3 and use zero padding to maintain the resolution of the output feature maps. In Figure 3, $F_0$ is computed from the convolution operation of the input layer ($I_{LR}$) by using Equation (2):

$$F_0 = H_{LR}(I_{LR}) = \sigma(W_{LR} \ast I_{LR} + B_{LR}).$$
After the convolution operation of the input layer, $F_0$ is fed into the multi-scale channel extractor. The output of the multi-scale channel extractor, $F_3$, is calculated by cascading MCDB operations as in Equation (3):

$$F_3 = H_3^{MCDB}(F_2) = H_3^{MCDB}(H_2^{MCDB}(F_1)) = H_3^{MCDB}(H_2^{MCDB}(H_1^{MCDB}(F_0))),$$

where $H_i^{MCDB}(\cdot)$ denotes the convolution operation of the $i$-th MCDB. Finally, the output HR image ($I_{HR}$) is generated through the convolution operations of the upsampling layer and the output layer. In the upsampling layer, we use two deconvolution layers with a 2 × 2 kernel size to expand the resolution by four times.
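As a rough illustration of Equations (1)–(3) and the four-part pipeline above, the following PyTorch sketch uses a simplified stand-in for the MCDB (the block itself is sketched after Equation (4)); the channel width of 64 is an assumption for illustration, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class PlaceholderMCDB(nn.Module):
    """Simplified stand-in for the multi-scale channel dense block (see the next sketch)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU())

    def forward(self, x):
        return x + self.body(x)                  # skip connection per block

class MCDNSkeleton(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.input_layer = nn.Sequential(        # Equation (2): F0 = H_LR(I_LR)
            nn.Conv2d(1, channels, 3, padding=1), nn.PReLU())
        self.extractor = nn.Sequential(          # Equation (3): three cascaded MCDBs
            *[PlaceholderMCDB(channels) for _ in range(3)])
        self.upsampling = nn.Sequential(         # x4 upscaling via two 2x2 deconvolutions
            nn.ConvTranspose2d(channels, channels, 2, stride=2), nn.PReLU(),
            nn.ConvTranspose2d(channels, channels, 2, stride=2), nn.PReLU())
        self.output_layer = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr):                       # lr: [B, 1, N, M]
        f0 = self.input_layer(lr)
        f3 = self.extractor(f0)
        return self.output_layer(self.upsampling(f3))   # I_HR: [B, 1, 4N, 4M]

print(MCDNSkeleton()(torch.randn(1, 1, 25, 25)).shape)  # torch.Size([1, 1, 100, 100])
```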
Figure 4 shows the detailed architecture of an MCDB. An MCDB has five dense blocks with different channel sizes, and each dense block contains four dense layers. To describe the procedure of an MCDB, we denote the $k$-th dense layer of the $j$-th dense block as $D_{j,k}$. For the input feature maps $F_i$, the $j$-th dense block generates the output feature maps $D_j$ as in Equation (4), which combine the feature maps $D_{j,4}$ with a skip connection from $F_i$:

$$D_j = D_{j,4} + F_i, \quad \text{where } D_{j,4} = H_{j,4}([D_{j,3}, D_{j,2}, D_{j,1}, F_i]) = \sigma(W_{j,4} \ast [D_{j,3}, D_{j,2}, D_{j,1}, F_i] + B_{j,4}).$$
After the output feature maps from all dense blocks are concatenated, they are fed into a bottleneck layer to reduce the number of channels of the output feature maps. In other words, the bottleneck layer decreases the number of kernel weights as well as compresses the feature maps. The output of an MCDB is finally produced by the reconstruction layer with a global skip connection from $F_0$, as shown in Figure 4.
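The following PyTorch sketch shows one possible realization of an MCDB under the description above (D = 5 dense blocks, L = 4 dense layers per block, bottleneck, reconstruction layer, and skip connection); the per-block growth rates and channel widths are assumptions for illustration, not the authors' exact values.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """L = 4 dense layers; each layer takes the concatenation of all preceding outputs."""
    def __init__(self, in_ch, growth):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(3):                       # D_{j,1}, D_{j,2}, D_{j,3}
            self.layers.append(nn.Sequential(nn.Conv2d(ch, growth, 3, padding=1), nn.PReLU()))
            ch += growth
        self.last = nn.Sequential(nn.Conv2d(ch, in_ch, 3, padding=1), nn.PReLU())  # D_{j,4}

    def forward(self, f):                        # f: block input F_i
        feats = [f]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return f + self.last(torch.cat(feats, dim=1))      # Equation (4): D_j = D_{j,4} + F_i

class MCDB(nn.Module):
    """D = 5 dense blocks with different (assumed) channel growth rates."""
    def __init__(self, channels=64, growths=(8, 16, 24, 32, 40)):
        super().__init__()
        self.blocks = nn.ModuleList(DenseBlock(channels, g) for g in growths)
        self.bottleneck = nn.Sequential(          # compress the concatenated channels
            nn.Conv2d(channels * len(growths), channels, 3, padding=1), nn.PReLU())
        self.reconstruction = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, f):
        outs = [blk(f) for blk in self.blocks]    # outputs of the five dense blocks
        fused = self.bottleneck(torch.cat(outs, dim=1))
        # skip connection around the MCDB (the paper uses the global feature F0 here)
        return f + self.reconstruction(fused)

print(MCDB()(torch.randn(1, 64, 25, 25)).shape)   # torch.Size([1, 64, 25, 25])
```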

3.2. MCDN Training

To train the proposed network, we set the hyperparameters as presented in Table 1. We used the L1 loss [32] as the loss function and updated the network parameters, such as the kernel weights and biases, using the Adam optimizer [33]. The mini-batch size, the number of epochs, and the learning rate were set to 128, 50, and 10^−3 (decayed to 10^−5), respectively. Among the various activation functions [34,35,36], the parametric ReLU was used as the activation function in our network.
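For reference, a minimal training-loop sketch under the Table 1 settings is shown below; the learning-rate decay schedule from 10^−3 toward 10^−5 is assumed (the paper does not state the exact schedule), and `model` and `train_loader` are placeholders for the network and the LR/HR patch-pair loader.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=50, device="cuda"):
    model = model.to(device)
    criterion = nn.L1Loss()                                    # L1 loss [32]
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam optimizer [33]
    # decay the learning rate from 1e-3 toward 1e-5 over 50 epochs (assumed schedule)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25, gamma=0.1)
    for _ in range(epochs):
        for lr_patch, hr_patch in train_loader:                # [B,1,25,25], [B,1,100,100]
            optimizer.zero_grad()
            loss = criterion(model(lr_patch.to(device)), hr_patch.to(device))
            loss.backward()
            optimizer.step()
        scheduler.step()
```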

4. Experimental Results

As shown in Figure 5, we used the DIV2K dataset [37] at the training stage. It has 2K (1920 × 1080) spatial resolution and consists of 800 images. All RGB training images are converted into the YUV color format, and only the Y components are extracted as 100 × 100 patches without overlap. To obtain the input LR images, the patches are further down-sampled to 25 × 25 by bicubic interpolation. To evaluate the proposed method, we used Set5 [38], Set14 [39], BSD100 [40], and Urban100 [41], shown in Figure 6, as the test datasets; they are commonly used in most SR studies [42,43,44]. In addition, Set5 was also used as a validation dataset.
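A minimal sketch of this patch preparation is shown below, assuming Pillow and NumPy; Pillow's YCbCr mode is used as a stand-in for the YUV conversion, and the file path in the usage example is illustrative.

```python
from PIL import Image
import numpy as np

def extract_patch_pairs(image_path, hr_size=100, scale=4):
    # keep only the Y (luma) component of the image
    y = np.array(Image.open(image_path).convert("YCbCr"))[:, :, 0]
    pairs = []
    for top in range(0, y.shape[0] - hr_size + 1, hr_size):        # non-overlapping grid
        for left in range(0, y.shape[1] - hr_size + 1, hr_size):
            hr = y[top:top + hr_size, left:left + hr_size]          # 100x100 HR patch
            lr = Image.fromarray(hr).resize((hr_size // scale,) * 2, Image.BICUBIC)
            pairs.append((np.array(lr), hr))                        # (25x25 LR, 100x100 HR)
    return pairs

# example (illustrative path): pairs = extract_patch_pairs("DIV2K_train_HR/0001.png")
```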
All experiments were conducted on an Intel Xeon Skylake CPU with 128 GB RAM and two NVIDIA Tesla V100 GPUs under the experimental environment listed in Table 2. For the performance comparison, we set the bicubic interpolation method as the anchor and used SRCNN [18], EDSR [27], MSRN [30], and SR-ILLNN [31] as the comparison methods in terms of SR accuracy and network complexity.

4.1. Performance Measurements

In terms of network complexity, we compared the proposed MCDN with SRCNN [18], EDSR [27], MSRN [30], and SR-ILLNN [31]. Table 3 shows the number of network parameters and the total memory size (MB). As shown in Table 3, MCDN reduces the number of parameters and the total memory size to as low as 1.2% and 17.4% of those of EDSR, respectively. In addition, MCDN reduces the total memory size to 92.2% and 80.5% of that of MSRN and SR-ILLNN, respectively, even though both already have lightweight network structures. Note that MCDN can reduce the number of parameters significantly because the parameters used in one MCDB are identically applied to the other MCDBs.
In terms of SR accuracy, Table 4 and Table 5 show the PSNR and SSIM results, respectively. While the proposed MCDN significantly reduces the network complexity compared to EDSR, it achieves slightly higher or similar PSNR on most test datasets. Moreover, MCDN achieves average PSNR gains of up to 0.21 dB and 0.16 dB compared to MSRN and SR-ILLNN, respectively.
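For completeness, a minimal PSNR sketch for 8-bit Y-channel images is given below (the standard definition [13]); the paper's exact evaluation script and any border-cropping convention are not specified here.

```python
import numpy as np

def psnr(sr, hr, peak=255.0):
    """PSNR (dB) between two equally sized 8-bit Y-channel images."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```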
Figure 7 shows examples of visual comparisons between MCDN and the previous methods, including the anchor, on the test datasets. The results confirm that the proposed MCDN recovers structural information effectively and reconstructs more accurate textures than the other methods.

4.2. Ablation Studies

To optimize the proposed network architecture, we conducted a variety of verification tests on the validation dataset. In this paper, we denote the number of MCDBs, the number of dense blocks per MCDB, and the number of dense layers per dense block as M, D, and L, respectively. Note that the larger M, D, and L are, the more memory is required to store the network parameters and feature maps. Therefore, it is important to select the optimal M, D, and L considering the trade-off between SR accuracy and network complexity.
First, we investigated which loss function and activation function were beneficial to the proposed network. According to [45], the L2 loss does not always guarantee better SR performance in terms of PSNR and SSIM, even though it is widely used because it directly relates to PSNR at the network training stage. Therefore, we conducted PSNR comparisons to choose the best-matched loss function. Figure 8 and Table 6 indicate that the L1 loss is better suited to the proposed network structure. In addition, the leaky rectified linear unit (Leaky ReLU) [46] and the parametric ReLU can be used in place of ReLU to avoid the vanishing gradient effect on the negative side. To avoid overfitting at the training stage, we evaluated the L1 loss over the epochs, as shown in Figure 9a. After setting the number of epochs to 50, we measured PSNR as the SR performance for each activation function. As demonstrated in Figure 9b, the parametric ReLU is superior to the other activation functions in the proposed MCDN.
Second, we investigated the optimal M after fixing D and L to 5 and 4, respectively. We evaluated the L1 loss over the epochs, as shown in Figure 10a. After setting the number of epochs to 50, we measured PSNR according to various values of M, and Figure 10b shows that the optimal M is 3. Through the evaluations in Figure 11 and Figure 12 and Table 7 and Table 8, the optimal D and L were set to 5 and 4, respectively. Consequently, the proposed MCDN is designed considering the trade-off between SR performance and network complexity, as measured in Table 7, Table 8 and Table 9.
Finally, we verified the effectiveness of both the skip and dense connections. The more dense connections are deployed between convolution layers, the more network parameters are required for the convolution operations. According to the results of the tool-off tests on the proposed MCDN, measured in Table 10, both the skip and dense connections have an effect on SR performance. In addition, Table 11 shows the network complexity and inference speed according to the deployment of the skip and dense connections.

5. Conclusions

In this paper, we proposed a CNN-based multi-scale channel dense network (MCDN). The proposed MCDN aims at generating an HR image of size 4N × 4M given an N × M input image. It is composed of four parts: the input layer, the multi-scale channel extractor, the upsampling layer, and the output layer. In addition, the multi-scale channel extractor consists of three multi-scale channel dense blocks (MCDBs), where each MCDB has five dense blocks with different channel sizes, and each dense block contains four dense layers. To design the proposed network, we extracted training images from the DIV2K dataset and investigated the trade-off between quality enhancement and network complexity. We conducted various ablation studies to find the optimal network structure. Consequently, the proposed MCDN reduced the number of parameters and the total memory size to as low as 1.2% and 17.4% of those of EDSR, respectively, while achieving slightly higher or similar PSNR on most test datasets. In addition, MCDN reduces the total memory size to 92.2% and 80.5% of that of MSRN and SR-ILLNN, respectively, even though both already have lightweight network structures. In terms of SR performance, MCDN achieves average PSNR gains of up to 0.21 dB and 0.16 dB compared to MSRN and SR-ILLNN, respectively.

Author Contributions

Conceptualization, Y.L. and D.J.; methodology, Y.L. and D.J.; software, Y.L.; validation, D.J., B.-G.K. and H.L.; formal analysis, Y.L. and D.J.; investigation, Y.L.; resources, D.J.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, D.J.; visualization, Y.L.; supervision, D.J.; project administration, D.J.; funding acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Science and ICT (Grant 21PQWO-B153349-03).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, L.; Ding, Q.; Zou, Q.; Chen, Z.; Li, L. DenseLightNet: A Light-Weight Vehicle Detection Network for Autonomous Driving. IEEE Trans. Ind. Electron. 2020, 12, 10600–10609. [Google Scholar] [CrossRef]
  2. Wells, J.; Chatterjee, A. Content-Aware Low-Complexity Object Detection for Tracking Using Adaptive Compressed Sensing. IEEE J. Emerg. Sel. Top. Power Electron. 2018, 8, 578–590. [Google Scholar] [CrossRef]
  3. Gong, M.; Shu, Y. Real-Time Detection and Motion Recognition of Human Moving Objects Based on Deep Learning and Multi-Scale Feature Fusion in Video. IEEE Access 2020, 8, 25811–25822. [Google Scholar] [CrossRef]
  4. Oliveira, B.; Ferreira, F.; Martins, C. Fast and Lightweight Object Detection Network: Detection and Recognition on Resource Constrained Devices. IEEE Access 2017, 6, 8714–8724. [Google Scholar] [CrossRef]
  5. Zhang, J.; Zhu, H.; Wang, P.; Ling, A.X. ATT Squeeze U-Net: A Lightweight Network for Forest Fire Detection and Recognition. IEEE Access 2020, 9, 10858–10870. [Google Scholar] [CrossRef]
  6. Chen, G.; Wang, H.; Chen, K.; Li, Z.; Song, Z.; Liu, Y.; Chen, W.; Knoll, A. A Survey of the Four Pillars for Small Object Detection: Multiscale Representation, Contextual Information, Super-Resolution, and Region Proposal. IEEE Trans. Syst. Man Cybern. Syst. 2020. [Google Scholar] [CrossRef]
  7. Peled, S.; Yeshurun, Y. Superresolution in MRI: Application to Human White Matter Fiber Visualization by Diffusion Tensor Imaging. Magn. Reson. Med. 2001, 45, 29–35. [Google Scholar] [CrossRef]
  8. Shi, W.; Caballero, J.; Ledig, C.; Zhang, X.; Bai, W.; Bhatia, K.; Marvao, A.; Dawes, T.; Regan, D.; Rueckert, D. Cardiac Image Super-Resolution with Global Correspondence Using Multi-Atlas PatchMatch. Med. Image Comput. Comput. Assist. Interv. 2013, 8151, 9–16. [Google Scholar]
  9. Thornton, M.; Atkinson, P.; Holland, D. Sub-pixel mapping of rural land cover objects from fine spatial resolution satellite sensor imagery using super-resolution pixel-swapping. Int. J. Remote Sens. 2006, 27, 473–491. [Google Scholar] [CrossRef]
  10. Zhang, L.; Zhang, H.; Shen, H.; Li, P. A super-resolution reconstruction algorithm for surveillance images. Signal Process. 2010, 90, 848–859. [Google Scholar] [CrossRef]
  11. Yang, C.; Ma, C.; Yang, M. Single-image super-resolution: A benchmark. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 372–386. [Google Scholar]
  12. Lecun, Y.; Boser, B.; Denker, J.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
  13. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
  14. Wang, Z.; Bovik, A.C.; Sheikh, H.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  15. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  16. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  17. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K. Densely connected convolutional networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  18. Dong, C.; Loy, C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern. Anal. Mach. Intell. 2015, 38, 295–307. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Dong, C.; Loy, C.; Tang, X. Accelerating the Super-Resolution Convolutional Neural Network. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 391–407. [Google Scholar]
  20. Shi, W.; Caballero, J.; Huszar, F.; Totz, J.; Aitken, A.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
  21. Kim, J.; Lee, J.; Lee, K. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
  22. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
  23. Goodfellow, I.; Abadie, J.; Mirza, M.; Xu, B.; Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. Adv. Neural Inf. Process. Syst 2014, 2, 2672–2680. [Google Scholar]
  24. Tong, T.; Li, G.; Liu, X.; Gao, Q. Image super-resolution using dense skip connections. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4799–4807. [Google Scholar]
  25. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2472–2481. [Google Scholar]
  26. Ahn, N.; Kang, B.; Sohn, K. Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 252–268. [Google Scholar]
  27. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
  28. Lai, W.; Huang, J.; Ahuja, J.; Yang, M. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 624–632. [Google Scholar]
  29. Liu, Y.; Zhang, X.; Wang, S.; Ma, S.; Gao, W. Progressive Multi-Scale Residual Network for Single Image Super-Resolution. arXiv 2020, arXiv:2007.09552. [Google Scholar]
  30. Li, J.; Fang, F.; Mei, K.; Zhang, G. Multi-scale Residual Network for Image Super-Resolution. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 517–532. [Google Scholar]
  31. Kim, S.; Jun, D.; Kim, B.; Lee, H.; Rhee, E. Single Image Super-Resolution Method Using CNN-Based Lightweight Neural Networks. Appl. Sci. 2021, 11, 1092. [Google Scholar] [CrossRef]
  32. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss Functions for Image Restoration with Neural Networks. IEEE Trans. Comput. Imaging 2017, 3, 47–57. [Google Scholar] [CrossRef]
  33. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  34. Glorot, X.; Bordes, A.; Bengio, Y. Deep Sparse Rectifier Neural Networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 11–13 April 2011; pp. 315–323. [Google Scholar]
  35. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 1026–1034. [Google Scholar]
  37. Agustsson, E.; Timofte, R. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  38. Bevilacqua, M.; Roumy, A.; Guillemot, C.; Morel, M.L. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In Proceedings of the 23rd British Machine Vision Conference (BMVC), Surrey, UK, 3–7 September 2012; pp. 1–10. [Google Scholar]
  39. Zeyde, R.; Elad, M.; Protter, M. On single image scale-up using sparse-representations. In Proceedings of the International Conference on Curves and Surfaces, Avignon, France, 24–30 June 2010; pp. 711–730. [Google Scholar]
  40. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; pp. 416–423. [Google Scholar]
  41. Huang, J.; Singh, A.; Ahuja, N. Single Image Super-resolution from Transformed Self-Exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5197–5206. [Google Scholar]
  42. Wang, Z.; Chen, J.; Hoi, S. Deep Learning for Image Super-resolution: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Yang, W.; Zhang, X.; Tian, Y.; Wang, W.; Xue, J.H.; Liao, Q. Deep Learning for Single Image Super-Resolution: A Brief Review. IEEE Trans. Multimed. 2019, 21, 3106–3121. [Google Scholar] [CrossRef] [Green Version]
  44. Li, K.; Yang, S.; Dong, R.; Wang, X.; Huang, J. Survey of single image super-resolution reconstruction. IET Image Process. 2020, 14, 2273–2290. [Google Scholar] [CrossRef]
  45. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss functions for neural networks for image processing. arXiv 2015, arXiv:1511.08861. [Google Scholar]
  46. Maas, A.; Hannun, A.; Ng, A. Rectifier Nonlinearities Improve Neural Network Acoustic Models. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; Volume 30, pp. 1–6. [Google Scholar]
Figure 1. Example of CNN-based SR applications in the area of object detection.
Figure 2. Examples of CNN-based network blocks. (a) Inception block; (b) residual block; and (c) dense block.
Figure 3. Overall architecture of the proposed MCDN.
Figure 4. The architecture of an MCDB.
Figure 5. Training dataset (DIV2K [37]).
Figure 6. Test datasets (Set5 [38], Set14 [39], BSD100 [40], and Urban100 [41]).
Figure 7. Visual comparisons on the test datasets [38,39,40,41]. For each test image, the figures in the second row show the zoom-in of the area indicated by the red box.
Figure 8. Verification of loss functions.
Figure 9. Verification of activation functions. (a) L1 loss per epoch. (b) PSNR per epoch.
Figure 10. Verification of the number of MCDBs (M) in terms of SR performance. (a) L1 loss per epoch. (b) PSNR per epoch.
Figure 11. Verification of the number of dense blocks (D) per MCDB in terms of SR performance. (a) L1 loss per epoch. (b) PSNR per epoch.
Figure 12. Verification of the number of dense layers (L) per dense block in terms of SR performance. (a) L1 loss per epoch. (b) PSNR per epoch.
Table 1. Hyper parameters of the proposed MCDN.
Hyper Parameter        Option
Loss function          L1 loss
Optimizer              Adam
Batch size             128
Num. of epochs         50
Learning rate          10^−3 to 10^−5
Initial weight         Xavier
Activation function    Parametric ReLU
Padding mode           Zero padding
Table 2. Experimental environments.
Experimental Environment    Option
Linux version               Ubuntu 16.04
Deep learning framework     PyTorch 1.4.0
CUDA version                10.1
Input size (I_LR)           25 × 25 × 1
Label size (I_HR)           100 × 100 × 1
Table 3. The number of parameters and total memory size (MB).
Model            Num. of Parameters    Total Memory Size (MB)
SRCNN [18]       57K                   14.98
EDSR [27]        43,061K               371.87
MSRN [30]        6,075K                70.56
SR-ILLNN [31]    439K                  80.83
MCDN             531K                  65.07
Table 4. Average PSNR (dB) on the test datasets. The best result for each dataset is shown in bold.
Dataset     Bicubic    SRCNN [18]    EDSR [27]    MSRN [30]    SR-ILLNN [31]    MCDN
Set5        28.44      30.30         31.68        31.36        31.41            31.68
Set14       25.80      27.09         27.96        27.76        27.83            27.96
BSD100      25.99      26.86         27.42        27.36        27.33            27.43
Urban100    23.14      24.33         25.54        25.25        25.32            25.56
Average     24.73      25.80         26.70        26.49        26.54            26.70
Table 5. Average SSIM on the test datasets. The best result for each dataset is shown in bold.
Dataset     Bicubic    SRCNN [18]    EDSR [27]    MSRN [30]    SR-ILLNN [31]    MCDN
Set5        0.8112     0.8599        0.8893       0.8845       0.8848           0.8897
Set14       0.7033     0.7495        0.7748       0.7703       0.7709           0.7745
BSD100      0.6699     0.7112        0.7309       0.7281       0.7275           0.7305
Urban100    0.6589     0.7158        0.7698       0.7600       0.7583           0.7686
Average     0.6702     0.7192        0.7551       0.7489       0.7479           0.7543
Table 6. SR performances according to loss functions on the test datasets (PSNR/SSIM).
Loss    Set5            Set14           BSD100          Urban100        Average
L1      31.68/0.8897    27.96/0.7745    27.43/0.7305    25.56/0.7686    26.70/0.7543
L2      31.61/0.8883    27.90/0.7733    27.40/0.7297    25.47/0.7653    26.65/0.7524
Table 7. Verification of the number of dense blocks (D) per MCDB in terms of network complexity.
Configuration    Num. of Parameters    Total Memory Size (MB)
M3_D1_L5         125K                  25.57
M3_D2_L5         185K                  34.62
M3_D3_L5         267K                  44.93
M3_D4_L5         395K                  57.85
M3_D5_L5         639K                  76.21
M3_D6_L5         1146K                 106.39
M3_D7_L5         2713K                 164.02
Table 8. Verification of the number of dense layers (L) per dense block in terms of network complexity.
Configuration    Num. of Parameters    Total Memory Size (MB)
M3_D5_L1         280K                  37.82
M3_D5_L2         351K                  45.87
M3_D5_L3         435K                  54.96
M3_D5_L4         531K                  65.07
M3_D5_L5         639K                  76.21
M3_D5_L6         760K                  88.37
M3_D5_L7         893K                  101.57
Table 9. SR performances on the test datasets (PSNR/SSIM).
Model       Set5            Set14           BSD100          Urban100        Average
M1_D5_L5    31.50/0.8866    27.83/0.7714    27.34/0.7279    25.34/0.7602    26.55/0.7491
M2_D5_L5    31.58/0.8882    27.92/0.7739    27.40/0.7298    25.50/0.7665    26.66/0.7530
M3_D5_L5    31.68/0.8895    27.98/0.7747    27.43/0.7304    25.56/0.7692    26.71/0.7546
M4_D5_L5    31.66/0.8896    28.01/0.7751    27.43/0.7308    25.59/0.7708    26.73/0.7555
M5_D5_L5    31.73/0.8903    28.03/0.7755    27.44/0.7310    25.65/0.7725    26.76/0.7564
M6_D5_L5    31.70/0.8901    28.05/0.7758    27.45/0.7313    25.66/0.7729    26.77/0.7568
M7_D5_L5    31.70/0.8899    28.05/0.7761    27.44/0.7313    25.65/0.7730    26.76/0.7568
M3_D1_L5    31.40/0.8853    27.80/0.7707    27.31/0.7270    25.25/0.7576    26.50/0.7474
M3_D2_L5    31.53/0.8874    27.88/0.7724    27.36/0.7285    25.36/0.7616    26.58/0.7500
M3_D3_L5    31.58/0.8878    27.90/0.7731    27.39/0.7292    25.41/0.7638    26.61/0.7514
M3_D4_L5    31.60/0.8883    27.96/0.7742    27.40/0.7299    25.50/0.7665    26.66/0.7531
M3_D5_L5    31.68/0.8895    27.98/0.7747    27.43/0.7304    25.56/0.7692    26.71/0.7546
M3_D6_L5    31.67/0.8894    27.99/0.7749    27.43/0.7308    25.59/0.7708    26.72/0.7555
M3_D7_L5    31.67/0.8897    27.95/0.7748    27.41/0.7307    25.58/0.7711    26.71/0.7556
M3_D5_L1    31.53/0.8871    27.86/0.7722    27.35/0.7283    25.37/0.7615    26.58/0.7499
M3_D5_L2    31.59/0.8880    27.90/0.7732    27.38/0.7292    25.43/0.7642    26.62/0.7516
M3_D5_L3    31.65/0.8891    27.93/0.7739    27.41/0.7299    25.50/0.7667    26.67/0.7531
M3_D5_L4    31.68/0.8897    27.96/0.7745    27.43/0.7305    25.56/0.7686    26.70/0.7543
M3_D5_L5    31.68/0.8895    27.98/0.7747    27.43/0.7304    25.56/0.7692    26.71/0.7546
M3_D5_L6    31.68/0.8897    27.99/0.7750    27.43/0.7309    25.60/0.7706    26.73/0.7555
M3_D5_L7    31.66/0.8894    27.99/0.7753    27.43/0.7309    25.61/0.7711    26.73/0.7557
Table 10. SR performances according to tool-off tests (PSNR/SSIM).
Skip Connection    Dense Connection    Set5            Set14           BSD100          Urban100        Average
Disable            Disable             26.42/0.7362    24.34/0.6297    24.78/0.5985    21.95/0.5823    23.50/0.5963
Disable            Enable              31.37/0.8845    27.78/0.7698    27.29/0.7264    25.22/0.7557    26.47/0.7462
Enable             Disable             31.59/0.8879    27.90/0.7731    27.39/0.7291    25.42/0.7643    26.62/0.7516
Enable             Enable              31.68/0.8897    27.96/0.7745    27.43/0.7305    25.56/0.7686    26.70/0.7543
Table 11. Network complexity and inference speed on BSD100 according to tool-off tests.
Skip Connection    Dense Connection    Num. of Parameters    Total Memory Size (MB)    Inference Speed (s)
Disable            Disable             167K                  40.02                     24.09
Disable            Enable              531K                  65.07                     46.59
Enable             Disable             434K                  40.02                     26.37
Enable             Enable              531K                  65.07                     47.20
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Lee, Y.; Jun, D.; Kim, B.-G.; Lee, H. Enhanced Single Image Super Resolution Method Using Lightweight Multi-Scale Channel Dense Network. Sensors 2021, 21, 3351. https://doi.org/10.3390/s21103351