Automatic System for Visual Detection of Dirt Buildup on Conveyor Belts Using Convolutional Neural Networks
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
3.1. Data Collection
3.2. Data Preprocessing
- Random Resized Crop: applied with a probability of occurrence of 1.0; a random crop with a size between 65% and 100% of the original image is taken, and the aspect ratio of the crop is randomly changed. The image is then resized to 224 × 224 pixels to match the input size of the network.
- Random Horizontal Flip: applied to images with a probability of occurrence of 0.5.
- Random Vertical Flip: applied to images with a probability of occurrence of 0.5.
- Random Rotation: applied with an angle of up to ±30°, with a probability of occurrence of 1.0.
- Color Jitter: applies a random change of up to 0.05 in hue and saturation, with a probability of occurrence of 1.0.
3.3. Network and Training Definition
- CNN as fixed feature extractor. All trainable weights are frozen except those of the fully connected (FC) layers. The last FC layer (the output layer) is replaced with a new one with two neurons and random weights to match the number of classes in the problem, and only the FC layers are trained. A representation of this scenario is shown in Figure 5a, where the blue block indicates that only the classification weights are trained. In this scenario, the original feature extractors of the models extract the main features of the data, and these features are passed through FC layers trained from scratch.
- Fine-tuning the CNN. Instead of random initialization, the network is initialized with the pretrained weights of the model; only the last FC layer (the output layer) is randomly initialized with two neurons to match the number of classes in the problem. During training, all weights of the network (convolutional and classifier layers) are retrained. A representation of this scenario is shown in Figure 5b, where the blue blocks indicate that all weights in the network are trained. In this scenario, the feature extractors of the models are trained along with the FC layers.
3.4. Field Validation
4. Results and Discussion
4.1. Model Evaluation
4.1.1. CNN as Fixed Feature Extractor
4.1.2. Fine-Tuning the CNN
4.1.3. Discriminative Localization
4.2. Field Validation
5. Conclusions
Future Work
Supplementary Materials
Author Contributions
Funding
Conflicts of Interest
Parameter | Value
---|---
Batch size | 8
Optimizer | Stochastic Gradient Descent (SGD)
Learning rate | 0.001
Decay | 0.1
Step size | 65
Loss function | Cross-entropy loss
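The training parameters listed above translate into a standard PyTorch setup. The sketch below is an assumption of that setup (a plain linear layer stands in for the CNN): the decay of 0.1 and step size of 65 correspond to a step learning-rate schedule.

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(512, 2)  # stand-in for the CNN classifier

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)
# Multiply the learning rate by 0.1 (decay) every 65 epochs (step size).
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=65, gamma=0.1)
```

During training, `scheduler.step()` is called once per epoch after the optimizer updates, so the learning rate drops from 0.001 to 0.0001 at epoch 65.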
Rounds | VGG16 Loss | VGG16 Accuracy | ResNet18 Loss | ResNet18 Accuracy | Densenet161 Loss | Densenet161 Accuracy
---|---|---|---|---|---|---
1 | 0.49467 | 0.73750 | 0.55240 | 0.76250 | 0.42724 | 0.85000
2 | 0.52849 | 0.72500 | 0.49380 | 0.72500 | 0.34067 | 0.85000
3 | 0.62515 | 0.70513 | 0.62907 | 0.64103 | 0.64232 | 0.62821
4 | 0.55003 | 0.93590 | 0.30824 | 0.94872 | 0.23751 | 0.97436
5 | 0.52007 | 0.73684 | 0.30650 | 0.86842 | 0.47549 | 0.77632
Average | 0.54368 | 0.76807 | 0.45800 | 0.78913 | 0.42465 | 0.83578
Standard deviation | 0.04443 | 0.08473 | 0.13026 | 0.10818 | 0.13568 | 0.12576
Network | TP | FP | FN | TN | Precision | Recall | F1 Score
---|---|---|---|---|---|---|---
VGG16 | 16.8 | 16.0 | 2.2 | 43.4 | 0.51220 | 0.88421 | 0.64865
ResNet18 | 22.0 | 10.8 | 5.8 | 39.8 | 0.67073 | 0.79137 | 0.72607
Densenet161 | 25.2 | 7.6 | 5.2 | 40.4 | 0.76829 | 0.82895 | 0.81110
Rounds | VGG16 Loss | VGG16 Accuracy | ResNet18 Loss | ResNet18 Accuracy | Densenet161 Loss | Densenet161 Accuracy
---|---|---|---|---|---|---
1 | 0.32579 | 0.82500 | 0.40805 | 0.85000 | 0.37880 | 0.91250
2 | 0.56521 | 0.78750 | 0.38419 | 0.83750 | 0.17310 | 0.96250
3 | 0.34537 | 0.85897 | 0.38491 | 0.83333 | 0.48134 | 0.73077
4 | 0.08021 | 0.98718 | 0.05477 | 0.98718 | 0.08661 | 0.98718
5 | 0.35257 | 0.89474 | 0.32437 | 0.84211 | 0.23900 | 0.89474
Average | 0.33383 | 0.87068 | 0.31126 | 0.87002 | 0.27177 | 0.89753
Standard deviation | 0.15389 | 0.06825 | 0.13120 | 0.05884 | 0.14175 | 0.08978
Network | TP | FP | FN | TN | Precision | Recall | F1 Score
---|---|---|---|---|---|---|---
VGG16 | 25.0 | 7.8 | 2.4 | 43.2 | 0.76220 | 0.91241 | 0.83057
ResNet18 | 25.2 | 7.6 | 2.6 | 43.0 | 0.76829 | 0.90645 | 0.83167
Densenet161 | 28.6 | 4.2 | 3.8 | 41.8 | 0.87195 | 0.88272 | 0.87730
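The precision, recall, and F1 values in the tables follow directly from the averaged confusion-matrix counts. A minimal check in plain Python, using the Densenet161 fine-tuning row as an example:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from (averaged) confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Densenet161, fine-tuning scenario: TP=28.6, FP=4.2, FN=3.8
p, r, f1 = prf1(tp=28.6, fp=4.2, fn=3.8)
```

Rounded to five decimal places, these reproduce the 0.87195 / 0.88272 / 0.87730 figures reported for Densenet161 in the fine-tuning table.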
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Santos, A.A.; Rocha, F.A.S.; Reis, A.J.d.R.; Guimarães, F.G. Automatic System for Visual Detection of Dirt Buildup on Conveyor Belts Using Convolutional Neural Networks. Sensors 2020, 20, 5762. https://doi.org/10.3390/s20205762