Article

Water Quality Measurement and Modelling Based on Deep Learning Techniques: Case Study for the Parameter of Secchi Disk

1 College of Electrical Engineering, Zhejiang University, Hangzhou 310027, China
2 Zhejiang Institute of Hydraulics and Estuary, Hangzhou 310020, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(14), 5399; https://doi.org/10.3390/s22145399
Submission received: 18 June 2022 / Revised: 16 July 2022 / Accepted: 18 July 2022 / Published: 20 July 2022

Abstract

The Secchi disk is often used to monitor the transparency of water. However, the results of personal measurement are easily affected by subjective experience and the objective environment, and the process is time-consuming. With the rapid development of computer technology, measurement based on image processing is more objective and accurate than personal observation. A transparency measurement algorithm is proposed that combines deep learning, image processing technology, and the Secchi disk measurement method. The white part of the Secchi disk is cropped by image processing. A classification network based on resnet18 is applied to classify the crop results and determine the critical position of the Secchi disk. Then, the semantic segmentation network Deeplabv3+ is used to segment the corresponding water gauge at this position and subsequently segment the characters on the water gauge. The segmentation results are classified by the classification network based on resnet18. Finally, the transparency value is calculated according to the segmentation and classification results. The results of this algorithm are more accurate and objective than those of personal observation, and the experiments show its effectiveness.

1. Introduction

Water is an important natural resource for all life. However, due to the rapid development of industrial and agricultural production activities and urbanization, water pollution is becoming more and more serious, directly affecting daily life and production [1,2,3,4]. Some indicators are needed to evaluate the quality of water, one of which is the transparency of water [5,6]. Transparency is defined as the degree to which light penetrates a water body [7]. It plays an important role in many fields, such as ecological management [8,9], aquaculture [10], primary production of phytoplankton [11], seagrass health [12], coral reefs [13], and so on. Therefore, obtaining accurate transparency values is of great significance for guiding industrial production and daily life.
At present, the most commonly used method to measure transparency is the Secchi disk method [14]. The Secchi disk is a checkered black-and-white disk. In use, it is immersed in water and sinks slowly until the white part of the disk disappears [15,16]; the depth of the disk in the water at that moment is the transparency value. Although the disk is relatively simple to operate and easy to carry, the exact critical position of the Secchi disk is hard to observe with the naked eye. The measurer usually needs several minutes to observe the Secchi disk close to the critical position, and to obtain accurate results observers often need to repeat the measurement many times, which is time-consuming. Moreover, the results of personal measurement are readily affected by subjective and objective factors, such as the quality of human vision, the experience of operating the Secchi disk, the strength of the surrounding light, the shaking of the water gauge, and so on [17,18]. Therefore, personal measurement using the Secchi disk has great uncertainty and instability [19,20,21], and the Secchi disk method needs further improvement.
In addition to the Secchi disk method, there are some more advanced methods, which mainly fall into two categories: using sensors and using image processing technology. A turbidity sensor can be used to measure the transparency of a body of water [22,23], enabling automatic measurement, but the design, manufacture, and maintenance of a turbidity sensor incur a certain cost. A popular image-based method measures water transparency from satellite spectral images. The relationship between satellite spectral images and water transparency can be described by algorithms, which can be classified into empirical, semi-analytical, analytical, and machine learning algorithms [24,25]. Empirical algorithms are based on the observation that there is a strong relationship between water transparency and parameters such as reflectance, water-leaving radiances, and diffuse attenuation coefficients, which can be obtained from satellite spectral images [26]. Semi-analytical and analytical algorithms are based on the theory of underwater light radiative transfer [27]: they calculate the absorption and scattering coefficients of the water body from remote-sensing reflectance and construct an equation between these parameters and the water transparency [28,29]. Machine learning algorithms are discussed in the next paragraph. However, these satellite-based methods are suited to large areas and need satellites to obtain image data.
With the development of artificial intelligence, machine learning has been used to monitor and assess water parameters such as dissolved oxygen, chlorophyll-a, and Secchi disk depth. In the last decade, the Artificial Neural Network (ANN) has been one of the most utilized artificial intelligence methods [30]. Sentas et al. [31] used three models, including an ANN, to forecast daily dissolved oxygen. Karamoutsou and Psilovikos [32] used chlorophyll-a, pH, water temperature, water conductivity, turbidity, ammonia nitrogen, and nitrate nitrogen as inputs to a Deep Neural Network (DNN) to predict dissolved oxygen. Gómez et al. [33] combined satellite images with Random Forest (RF), Support Vector Machine (SVM), ANN, and DNN models to monitor chlorophyll-a. Some studies also use ANNs for Secchi disk depth measurement. Heddam [34] collected other parameters (total suspended solids, water temperature, dissolved oxygen, and chlorophyll) of the target water area and used an ANN to predict the Secchi disk depth. Batur et al. [35] and Arias-Rodriguez et al. [36] combined satellite images with machine learning methods to predict the Secchi disk depth. In recent years, deep learning has made great progress and has advantages over traditional image processing technology in target detection [37,38], semantic segmentation [39], and so on. Oga et al. [40] and Montassar et al. [41] used semantic segmentation and convolutional neural networks (CNNs) to evaluate the turbidity of the target water body, which indirectly reflects the clarity of the water but does not measure the specific value of transparency.
Although the above methods improve on the traditional Secchi disk method in different ways, few studies combine deep learning with the Secchi disk. Cameras are now ubiquitous, making RGB video easy and cheap to obtain, so if water transparency could be obtained directly from Secchi disk video, it would be very valuable. Using deep learning to obtain transparency from Secchi disk video poses two problems: how to detect the blurred Secchi disk, and how to measure the depth of the Secchi disk. For the first problem, some studies use deep learning to detect blurred objects; for example, Wang et al. [42] used a DNN, and Zeng et al. [43] used a CNN-based method. For the second problem, a water gauge is commonly used to measure the depth of the Secchi disk, and some studies use deep learning to recognize the water gauge. Lin et al. [44] used a semantic segmentation network to segment the water gauge and processed the segmentation result to calculate its reading. Wang et al. [45] used a DNN to recognize a ship's water gauge and calculated its reading from the recognition result. Based on existing studies, it is feasible to measure transparency by combining deep learning with the Secchi disk.
In this paper, image processing techniques and deep learning are combined with the Secchi disk method to measure the transparency of water. A general camera (VARID-SUMMILUX-H) is used to record the measurement process of the Secchi disk method. The critical position of the Secchi disk and the corresponding water gauge value are determined by processing the Secchi disk video and the water gauge video, respectively. Faster RCNN [46] is applied to locate the Secchi disk, the white part of the disk is segmented from the image using the OTSU algorithm (maximization of the interclass variance), the resnet18 [47] network is used to classify the segmentation results, and the critical position of the Secchi disk is determined according to the classification results. For the acquisition of the water gauge scale, the semantic segmentation network Deeplabv3+ [48] is first used to segment the corresponding water gauge; then binarization and k-means clustering operations are performed on the segmented water gauge, the characters are segmented and classified by the resnet18 network, and finally the scale value of the water gauge is calculated. When the Secchi disk is at the critical position, the corresponding water gauge scale is the transparency value.
The main contributions and novelty of this work are as follows. It proposes an algorithm based on deep learning and image processing technology to measure the transparency of water with the Secchi disk. The algorithm adopts a method that can accurately determine the critical position of the Secchi disk in the natural environment and avoids the time-consuming and unstable observation by the naked eye. Moreover, this work also gives a water gauge recognition algorithm based on Deeplabv3+ to assist in measuring water transparency, which keeps the error of water gauge recognition to about 1 cm. The experimental results show that, compared with personal measurement, this method is more accurate, more objective, faster, and less costly.

2. Description of Algorithm

The overall framework of the algorithm is shown in Figure 1, which includes the following three parts: video pre-processing, determination of the critical position of the Secchi disk, and water gauge recognition and scale calculation.

2.1. Determination of the Critical Position of the Secchi Disk

A general camera is applied to imitate the action of personal measurement with the Secchi disk and to record videos of the disk and the corresponding water gauge. The following pre-processing is done for the Secchi disk video and the water gauge video: a picture is extracted every three frames and stored in a fixed folder for each video, and the average of the last 10 frames of the video is taken as the background image.
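Concretely, this pre-processing can be sketched as follows (an illustrative Python/NumPy sketch; the paper's implementation is in MATLAB, and the function name and the (frames, height, width) array layout are assumptions):

```python
import numpy as np

def preprocess_video(frames, step=3, n_bg=10):
    """Pre-processing sketch: keep one frame in every `step` frames and
    build a background image as the per-pixel mean of the last `n_bg`
    frames, as described in the text. `frames` is assumed to be a
    (T, H, W) array of grayscale video frames."""
    sampled = frames[::step]                  # one picture every three frames
    background = frames[-n_bg:].mean(axis=0)  # average of the last 10 frames
    return sampled, background
```

In practice the sampled frames would be written to the fixed folder; here they are simply returned.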

2.1.1. Initial Image Crop of White Part of the Secchi Disk

The flow chart of the initial image crop of the white part of the Secchi disk is shown in Figure 2. First, Faster RCNN is used to recognize the first image containing the Secchi disk, yielding a rectangle that just surrounds the disk. Then, the background image is subtracted from all images in the folder. The position of the rectangular box with the highest average brightness in the background-subtracted image is the position of the Secchi disk. Finally, the Secchi disk is cropped from the original image, and the crop threshold of the white part is determined using the OTSU algorithm (maximization of the interclass variance). The flow is as follows:
(1)
The brightness values of the whole picture are recorded as set C, which is divided into two subsets, C1 and C2, with C1 ∩ C2 = ∅ and C1 ∪ C2 = C.
(2)
Take a brightness value K and put all brightness values in the range [0, K − 1] into set C1 and the rest into set C2. The average brightness in set C1 is denoted M1, and the proportion of the number of elements in C1 to the number in C is denoted P1; similarly, the average brightness in C2 is M2 and the corresponding proportion is P2. The mean brightness of set C is denoted M, and the interclass variance is calculated as g = P1 × (M1 − M)² + P2 × (M2 − M)².
(3)
The brightness value K is iterated from 0 to 255, and the corresponding interclass variance is calculated each time. The value of K corresponding to the maximum interclass variance, divided by 255, is the final threshold. Brightness values greater than the threshold are retained and the rest removed, so that the white part of the Secchi disk is cropped.
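Steps (1)–(3) amount to an exhaustive Otsu search over K, which can be sketched as follows (illustrative Python/NumPy; the paper's implementation is in MATLAB):

```python
import numpy as np

def otsu_threshold(brightness):
    """Search K in 1..255 for the maximum between-class variance
    g = P1*(M1-M)^2 + P2*(M2-M)^2, as in steps (1)-(3); the returned
    threshold is K/255. `brightness` is a flat array of 8-bit values."""
    b = np.asarray(brightness, dtype=np.float64)
    m = b.mean()                          # mean brightness M of set C
    best_g, best_k = -1.0, 0
    for k in range(1, 256):
        c1 = b[b < k]                     # values in [0, k-1] -> set C1
        c2 = b[b >= k]                    # the rest -> set C2
        if len(c1) == 0 or len(c2) == 0:
            continue                      # one class empty: skip this K
        p1, p2 = len(c1) / len(b), len(c2) / len(b)
        g = p1 * (c1.mean() - m) ** 2 + p2 * (c2.mean() - m) ** 2
        if g > best_g:
            best_g, best_k = g, k
    return best_k / 255.0
```

For a clearly bimodal brightness distribution the chosen K falls between the two modes, which is what crops the white disk region from the darker water.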

2.1.2. Fine Image Crop of the White Part of the Secchi Disk

When the image of the Secchi disk is very blurred, the position and crop threshold of the Secchi disk cannot be determined by the previous method. At this point, the crop threshold obviously deviates from the previous threshold trend, resulting in a jump, and it is necessary to determine in which image this threshold jump first happens. The procedure is as follows: firstly, the crop threshold of adjacent images is calculated in image order; secondly, the threshold difference between adjacent images is calculated and its absolute value taken; next, the kmeans function is used to divide these differences into two categories; finally, the average value of each category is calculated to determine in which image the threshold jump first happens.
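This jump-detection procedure can be sketched as follows (illustrative Python/NumPy; a hand-rolled 1-D two-means stands in for MATLAB's kmeans function, and the function name is hypothetical):

```python
import numpy as np

def find_threshold_jump(thresholds):
    """Locate the first image where the crop threshold jumps: take the
    absolute differences of adjacent thresholds, split them into two
    groups with a 1-D two-means, and report the first image whose
    difference falls in the group with the larger mean."""
    diffs = np.abs(np.diff(np.asarray(thresholds, dtype=float)))
    c = np.array([diffs.min(), diffs.max()])      # initial centroids
    for _ in range(20):
        labels = np.abs(diffs[:, None] - c[None, :]).argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                c[j] = diffs[labels == j].mean()
    jump_group = c.argmax()                       # larger-mean category
    first = int(np.argmax(labels == jump_group))  # first jumping difference
    return first + 1   # 0-based index of the image where the jump happens
```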
In this paper, the linear fitting method is used to determine the threshold value when the Secchi disk is blurred. While cropping the Secchi disk, the background of the corresponding position is also cropped. By normalizing the background brightness, the background brightness can be regarded as a Gaussian distribution, and the mean value is u and the standard deviation is σ. According to the rule of Gaussian distribution, about 98% of the background brightness values are less than u + 2σ. When the threshold value predicted by the linear fitting curve is less than u + 2σ, u + 2σ is used as the threshold value to prevent the water surface from being cropped as the white part of the Secchi disk.
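A minimal sketch of this prediction, assuming earlier frame indices and their thresholds are available and brightness is normalized to [0, 1] (illustrative Python/NumPy, not the paper's MATLAB code):

```python
import numpy as np

def blurred_threshold(frame_idx, prev_idx, prev_thresholds, background):
    """Predict the crop threshold for a blurred Secchi disk image: fit a
    line to the earlier (index, threshold) pairs, evaluate it at the
    blurred frame, and floor the result at u + 2*sigma of the normalized
    background brightness so that about 98% of background pixels stay
    below the threshold, as described in the text."""
    slope, intercept = np.polyfit(prev_idx, prev_thresholds, 1)
    predicted = slope * frame_idx + intercept
    bg = np.asarray(background, dtype=float).ravel() / 255.0
    floor = bg.mean() + 2.0 * bg.std()     # u + 2*sigma
    return max(predicted, floor)
```

When the fitted line predicts a value below u + 2σ, the floor takes over and prevents the water surface from being cropped as the white part of the disk.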

2.1.3. Determination of the Critical Position of the Secchi Disk by CNN

The crop results of the white part of the Secchi disk are divided into two categories by using a CNN classification network based on resnet18. One category indicates the existence of the Secchi disk, and the other category indicates that there is no Secchi disk. When the output of the classification network changes, it is the critical position of the Secchi disk.

2.2. Water Gauge Recognition and Water Level Calculation

2.2.1. Water Gauge Segmentation

The flow chart of water gauge segmentation is shown in Figure 3. Firstly, the Deeplabv3+ algorithm is used to segment the corresponding water gauge image at the critical position of the Secchi disk. Since the water gauge in the image may be tilted, it is necessary to carry out tilt correction. The slope of the left or right edge of the water gauge can be calculated by Equation (1):
k = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2} \quad (1)
where k is the slope of the left or right edge of the water gauge, xi and yi are the coordinates of the points on the left or right edge of the water gauge, 1 ≤ i ≤ n, and n is a positive integer.
According to the calculated slope, the slope of the water gauge image and the output of Deeplabv3+ are corrected at the same time, and then the water gauge can be segmented from the image.
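Equation (1) is the ordinary least-squares slope and can be sketched directly (illustrative Python; the input is assumed to be a list of (x, y) edge points):

```python
def edge_slope(points):
    """Equation (1): least-squares slope of a gauge edge from n points
    (x_i, y_i) on the left or right edge of the segmented water gauge."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```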

2.2.2. Characters Segmentation and Classification

There are two types of digital characters on the water gauge: one type is located at the entire-ten scale positions and is relatively large, and the other is located at the unit-centimeter scale positions and is relatively small. The small number characters are difficult to segment and recognize, so this paper mainly segments the large number characters.
The flow chart of character segmentation is shown in Figure 4. The segmented water gauge image is transposed and mirrored to make it horizontal. The image is then binarized, inverted, and eroded, and each character is marked with a rectangular box. After these operations, the small number characters are eroded away or remain merged with the water gauge, while the large number characters still exist. In order to further distinguish the large characters from the others, the following method is designed: firstly, the area of each character's rectangle is calculated; secondly, these areas are divided into two categories; next, the average area of each category is calculated, and the category with the larger average corresponds to the large characters; the large characters are then segmented according to the positions of their rectangles; finally, the large characters are classified by the CNN classification network based on resnet18.
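The area-based separation can be sketched as follows (illustrative Python; a midpoint cut between the smallest and largest area stands in for the two-category clustering step described above, and the (x, y, w, h) box format is an assumption):

```python
def pick_large_characters(boxes):
    """Keep only the large digit characters: compute each rectangle's
    area, split the areas into two groups, and return the boxes in the
    larger-area group. `boxes` is a list of (x, y, w, h) rectangles."""
    areas = [w * h for (_x, _y, w, h) in boxes]
    cut = (min(areas) + max(areas)) / 2.0   # crude two-group split
    return [b for b, a in zip(boxes, areas) if a >= cut]
```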

2.2.3. Water Gauge Scale Calculation

The position of each entire-ten scale on the horizontal water gauge lies between each non-zero digit character and the zero character immediately to its right. For example, the position of scale 60 is between the character 6 and the character 0 to its right. The formula for calculating the position of each entire-ten scale on the water gauge is shown in Equation (2):
x(k) = \frac{x_{\mathrm{left}}(k) + x_{\mathrm{right}}(k)}{2} \quad (2)
where k is a positive integer, x(k) is the position of the entire ten scale corresponding to the number k on the water gauge, x_right(k) is the right edge of the number k, and x_left(k) is the left edge of the number 0 to the right of the number k.
The reading scale calculation formula of the water gauge is obtained based on the ratio relationship, as shown in (3):
V_{\mathrm{reading}} = 10 \times \left( k - \frac{x(k)}{x(k+1) - x(k)} \right) \quad (3)
where V_reading is the reading value of the water gauge scale in cm.
When the Secchi disk is at the critical position, the corresponding water gauge reading value is the transparency.
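Equations (2) and (3) can be sketched as follows (illustrative Python; the sign convention in gauge_reading, with the water line at the pixel origin and x increasing along the gauge, is an assumption recovered from the garbled source formula):

```python
def scale_position(x_right_k, x_left_zero):
    """Equation (2): position of the entire-ten scale for digit k, the
    midpoint of the right edge of digit k and the left edge of the zero
    to its right."""
    return (x_right_k + x_left_zero) / 2.0

def gauge_reading(k, x_k, x_k1):
    """Equation (3): adjacent ten-scales are x(k+1) - x(k) pixels apart
    (10 cm), so with the water line at the pixel origin the reading in
    cm is 10 * (k - x(k) / (x(k+1) - x(k)))."""
    return 10.0 * (k - x_k / (x_k1 - x_k))
```

For example, if scale 60 sits 50 px from the water line and the ten-scale spacing is 100 px, the reading is 55 cm.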

3. Experiments

All the experiments are carried out in MATLAB 2020b, and all the code is implemented in the MATLAB programming language. The networks used in the experiments are well-established architectures whose performance has been extensively validated by the research community, and our experimental results also verify their performance. The images are annotated with the Image Labeler tool in MATLAB 2020b. The training data is augmented with the albumentations toolbox or the ImageDataAugmenter function. The neural networks are created and trained with functions from the Deep Learning Toolbox in MATLAB 2020b. The training is carried out on a single NVIDIA GeForce RTX 2080Ti (11 GB).

3.1. Training of Neural Network in Determining the Critical Position of the Secchi Disk

The training of the neural network includes the training of the target detection network based on the Secchi disk and the training of the Secchi disk classification network.
More than 1000 pictures containing the Secchi disk are collected and annotated with the Image Labeler function. These images are augmented with the albumentations toolbox as follows: each picture has a probability of 0.5 of being selected, 3 to 8 functions are randomly chosen from albumentations to enhance each selected picture, and this operation is repeated several times. The functions used in the augmentation process and the number of images augmented with each function are shown in Table 1. After augmentation, there are 3069 images for training the Secchi disk target detection network. SGDM [49,50] is selected as the optimization algorithm; the initial learning rate is 0.001 and is multiplied by 0.9 every two epochs; a total of 40 epochs are trained; and the minibatch size is 16. With Alexnet [51] as the backbone, a Faster RCNN network model is built for training. The loss and accuracy curves of the training process are shown in Figure 5.
Moreover, a total of 2000 images are collected to train the Secchi disk classification network; among them, 915 images contain no Secchi disk and 1085 images contain a Secchi disk. Eighty percent of the data are used as the training set, and the remaining data are used as the test set. The ImageDataAugmenter function is used to enhance the training data online through random 90° rotation, random mirroring, random horizontal translation in [−10, 10] pixels, and random vertical translation in [−10, 10] pixels. SGDM is selected as the optimization algorithm; the minibatch size is 32; a total of 25 epochs are trained; and the initial learning rate is 0.001 and is multiplied by 0.92 every epoch. The classification network is resnet18, and a dropout layer is used to prevent overfitting. The loss and accuracy curves of the training process are shown in Figure 6.

3.2. The Test Results of the Critical Position of the Secchi Disk

3.2.1. The Test Results of the White Part Crop of the Secchi Disk

The results of the initial crop and fine crop of the white part of the Secchi disk are shown in Table 2. As can be seen from Table 2, the effect of fine cropping is much better than that of initial cropping. When the Secchi disk is very blurred, the position of the Secchi disk cannot be determined by initial cropping, so the crop result is an irregular image. However, fine cropping can determine the location of the Secchi disk and crop the white part of the Secchi disk. When the Secchi disk is completely invisible, it is easy for the initial crop to crop the water surface, but the fine crop will not.

3.2.2. Test Results of Critical Position Determination by Classification Network

The accuracy of the Secchi disk classification network on the test set is 100.00%, which meets the requirements. After pre-processing the several collected videos, the previous crop algorithm of the white part of the Secchi disk is used to crop the videos. The crop results are sent into the trained classification network, and the image in which the Secchi disk is just invisible is determined according to the classification results. Some experimental results are shown in Table 3. The test results show that the critical position of the Secchi disk is basically accurate, and the Secchi disk is almost invisible in the determined image.

3.3. Training of Neural Network in Water Gauge Recognition

3.3.1. Deeplabv3+ Network Training

A total of 1100 images containing the water gauge are collected for the training set. The training data is augmented with the albumentations toolbox as follows: each image has a probability of 0.6 of being selected, 2 to 8 functions are randomly chosen from albumentations to enhance each selected image, and this operation is repeated several times. The functions used in the augmentation process and the number of images augmented with each function are shown in Table 4. After augmentation, the training set contains 5066 images; another 200 images serve as the test set. The Image Labeler function is used for data annotation. Adam [49,50] is selected as the optimization algorithm; the initial learning rate is 0.002 and is multiplied by 0.5 every 10 epochs; the minibatch size is 8; and a total of 60 epochs are trained. Resnet18, Xception [52], and Mobilenetv2 [53] are taken as backbones, respectively, and the Deeplabv3+ network model is built for training. The loss and accuracy curves of the training process are shown in Figure 7.

3.3.2. Training of Character Classification Network

More than 10,000 digital character images are collected. Eighty percent of the data are used as the training set, and the rest are used as the test set. The ImageDataAugmenter function is used to enhance the training data online through random 10° rotation, random horizontal translation in [−10, 10] pixels, and random vertical translation in [−10, 10] pixels. SGDM is selected as the optimization algorithm; the minibatch size is 32; a total of 25 epochs are trained; and the initial learning rate is 0.001 and is multiplied by 0.9 every epoch. The classification network is resnet18, and a dropout layer is used to prevent overfitting. The loss and accuracy curves of the training process are shown in Figure 8.

3.4. Experimental Results of Water Gauge Recognition

The main evaluation indexes of the Deeplabv3+ semantic segmentation effect are pixel accuracy (PA) and mean intersection over union (MIoU). The calculation formula is shown in (4) and (5):
\mathrm{PA} = \frac{\sum_{i=0}^{k} P_{ii}}{\sum_{i=0}^{k} \sum_{j=0}^{k} P_{ij}} \quad (4)
\mathrm{MIoU} = \frac{1}{k+1} \sum_{i=0}^{k} \frac{P_{ii}}{\sum_{j=0}^{k} P_{ij} + \sum_{j=0}^{k} P_{ji} - P_{ii}} \quad (5)
where Pij is the number of pixels belonging to class i that are predicted as class j, and k + 1 is the number of classes.
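Both metrics can be computed from a confusion matrix, as in this illustrative Python/NumPy sketch (the (k+1) × (k+1) matrix layout is an assumption consistent with the definition of Pij):

```python
import numpy as np

def pa_miou(P):
    """Equations (4) and (5): pixel accuracy and mean IoU from a
    (k+1) x (k+1) confusion matrix P, where P[i, j] counts pixels of
    class i predicted as class j."""
    P = np.asarray(P, dtype=float)
    pa = np.trace(P) / P.sum()                 # correct pixels / all pixels
    # per-class IoU: diagonal over (row sum + column sum - diagonal)
    iou = np.diag(P) / (P.sum(axis=1) + P.sum(axis=0) - np.diag(P))
    return pa, iou.mean()
```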
The larger the MIoU, the better the semantic segmentation. Table 5 shows the performance of Deeplabv3+ on the test set with different backbones. As can be seen from Table 5, the PA values of the different backbones are close to each other, so MIoU is more discriminative than PA in this case. Resnet18 is faster than the other backbones, but Mobilenetv2 has the highest MIoU.
The accuracy of the water gauge character classification network on the test set is 99.98%, which basically meets the requirements. The calculation of the water gauge reading is also tested; some test results are shown in Table 6, and the error is about 1 cm.

3.5. Overall Test Results of the Algorithm

Some videos of the Secchi disk and the corresponding water gauge are collected, and a graphical user interface (GUI) is designed to test the overall algorithm. Some results are shown in Table 7. From the test results, it can be seen that the transparency values measured by this algorithm are higher than those obtained by naked-eye observation, with a maximum difference of 3.4 cm. Considering that naked-eye observation is easily affected by subjective experience and that the resolving power of the human eye is weaker than that of the camera and computer, the results of this algorithm are expected to be higher than those of naked-eye observation in theory. Within an allowable error range, the test results of this algorithm are reliable and may even be closer to the real value than the manual observation results (the manual observation value is the weighted average obtained after multiple measurements by multiple observers).

4. Conclusions

This paper presents an algorithm for transparency measurement based on computer vision. The measurement of water quality transparency based on the Secchi disk includes two aspects: detection of the blurred Secchi disk and measurement of the depth of the Secchi disk. A variety of image processing and deep learning methods are used in this algorithm. Image processing is used to crop the white part of the Secchi disk, the classification network based on resnet18 is applied to classify the crop results and determine the critical position of the Secchi disk, and the Deeplabv3+ network is used to segment the water gauge and the characters on it. The algorithm can determine the critical position of the Secchi disk more accurately and obtain more exact and objective water quality transparency data. The overall test results of the algorithm are higher than those of personal observation. This technology has strong practical value: it is more accurate, more objective, and less time-consuming than personal observation, whose results are easily affected by subjective experience and the objective environment. It may even help to form a unified standard for Secchi disk water transparency measurement in the future. The application of artificial intelligence technologies such as deep learning in water quality monitoring is very promising.

Author Contributions

Conceptualization, F.L. and L.G.; methodology, F.L., L.G. and Q.J.; software, L.G.; validation, L.G., F.L. and L.H.; formal analysis, L.G. and F.L.; investigation, L.G., F.L. and A.Y.; resources, F.L., Q.J. and A.Y.; data curation, L.G.; writing—original draft preparation, L.G. and F.L.; writing—review and editing, F.L. and L.G.; visualization, L.G.; supervision, F.L., Q.J. and A.Y.; project administration, A.Y. and L.H.; funding acquisition, A.Y., Q.J. and F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key R&D Program of China, grant number 2019YFC0408802.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Palmer, M.; Ruhi, A. Linkages between flow regime, biota, and ecosystem processes: Implications for river restoration. Science 2019, 365, eaaw2087.
2. Seelen, L.M.S.; Flaim, G.; Jennings, E.; De Senerpont Domis, L.N. Saving water for the future: Public awareness of water usage and water quality. J. Environ. Manag. 2019, 242, 246–257.
3. Bhateria, R.; Jain, D. Water quality assessment of lake water: A review. Sustain. Water Resour. Manag. 2016, 2, 161–173.
4. Grill, G.; Lehner, B.; Thieme, M.; Geenen, B.; Tickner, D.; Antonelli, F.; Babu, S.; Borrelli, P.; Cheng, L.; Crochetiere, H.; et al. Mapping the world’s free-flowing rivers. Nature 2019, 569, 215–221.
5. Shukla, D.; Vaghela, K.B.; Jain, N. Assessment of Physico-chemical and Bacteriological Water Quality Parameters: A Review. Int. J. Pharm. Integr. Life Sci. 2017, 5, 1–17.
6. Halim, A.; Sharmin, S.; Rahman, H.; Haque, M.; Rahman, S.; Islam, S. Assessment of water quality parameters in baor environment, Bangladesh: A review. Int. J. Fish. Aquat. Stud. 2018, 6, 259–263.
7. Harrison, M.D. Water Clarity. In Encyclopedia of Estuaries; Kennish, M.J., Ed.; Encyclopedia of Earth Sciences Series; Springer: Dordrecht, The Netherlands, 2016.
8. Elkadiri, R.; Manche, C.; Sultan, M.; Al-Dousari, A.; Uddin, S.; Chouinard, K.; Abotalib, A.Z. Development of a Coupled Spatiotemporal Algal Bloom Model for Coastal Areas: A Remote Sensing and Data Mining-Based Approach. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 5159–5171.
9. Opdal, A.F.; Lindemann, C.; Aksnes, D.L. Centennial decline in North Sea water clarity causes strong delay in phytoplankton bloom timing. Glob. Chang. Biol. 2019, 25, 3946–3953.
10. Zhou, L.; Zeng, L.; Fu, D.; Xu, P.; Zeng, S.; Tang, Q.; Chen, Q.; Chen, L.; Li, G. Fish density increases from the upper to lower parts of the Pearl River Delta, China, and is influenced by tide, chlorophyll-a, water transparency, and water depth. Aquat. Ecol. 2016, 50, 59–74.
11. Wondie, A.; Mengistu, S.; Vijverberg, J.; Dejen, E. Seasonal variation in primary production of a large high altitude tropical lake (Lake Tana, Ethiopia): Effects of nutrient availability and water transparency. Aquat. Ecol. 2007, 41, 195–207.
12. Wilson, C.J.; Dunton, K.H. Assessment of Seagrass Habitat Quality and Plant Physiological Condition in Texas Coastal Waters; The University of Texas at Austin Marine Science Institute: Port Aransas, TX, USA, 2012.
13. de Voogd, N.J.; Cleary, D.F. Relating species traits to environmental variables in Indonesian coral reef sponge assemblages. Mar. Freshw. Res. 2007, 58, 240–249.
14. Zielinski, O. The History and Future of the Secchi Disk. In Angelo Secchi and Nineteenth Century Science; Springer: Cham, Switzerland, 2021; pp. 215–224.
15. Bledzki, L. Secchi Disk. Available online: http://editors.eol.org/eoearth/wiki/Secchi_disk_(About_the_EoE) (accessed on 1 June 2022).
16. Wernand, M.R. On the history of the Secchi disc. J. Eur. Opt. Soc. Rapid Publ. 2010, 5, 10013s.
17. Smith, D.G. Standardization of secchi disk measurements, including use of a viewer box. In Proceedings of the NWQMC National Monitoring Conference 2000, Austin, TX, USA, 25–27 April 2000.
18. Aas, E.; Høkedal, J.; Sørensen, K. Secchi depth in the Oslofjord–Skagerrak area: Theory, experiments and relationships to other quantities. Ocean Sci. 2014, 10, 177–199.
19. Tyler, J.E. The Secchi disc. Limnol. Oceanogr. 1968, 13, 1–6.
20. Preisendorfer, R.W. Secchi disc science: Visual optics of natural waters. Limnol. Oceanogr. 1986, 31, 909–926.
21. Davies-Colley, R.J.; Vant, W.N.; Smith, D.G. Colour and Clarity of Natural Waters: Science and Management of Optical Water Quality; Ellis Horwood: New York, NY, USA, 1993.
  22. Feng, C.; Yuan, J.; Sun, Y.; You, J. Design of Water Quality Monitoring System. In Proceedings of the 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE), Beijing, China, 23–25 October 2020. [Google Scholar]
  23. Zhang, G.; Hao, Y. Design and Implementation of a Home Intelligent Water Quality Inspection Device. In Proceedings of the 2020 IEEE International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 6–8 November 2020; pp. 628–631. [Google Scholar] [CrossRef]
  24. Shen, M.; Duan, H.; Cao, Z.; Xue, K.; Qi, T.; Ma, J.; Liu, D.; Song, K.; Huang, C.; Song, X. Sentinel-3 OLCI observations of water clarity in large lakes in eastern China: Implications for SDG 6.3.2 evaluation. Remote Sens. Environ. 2020, 247, 111950. [Google Scholar] [CrossRef]
  25. Doron, M.; Babin, M.; Hembise, O.; Mangin, A.; Garnesson, P. Ocean transparency from space: Validation of algorithms estimating Secchi depth using MERIS, MODIS and SeaWiFS data. Remote Sens. Environ. 2011, 115, 2986–3001. [Google Scholar] [CrossRef]
  26. Gomes, A.C.; Alcântara, E.; Rodrigues, T.; Bernardo, N. Satellite estimates of euphotic zone and Secchi disk depths in a colored dissolved organic matter-dominated inland water. Ecol. Indic. 2020, 110, 105848. [Google Scholar] [CrossRef]
  27. Lee, Z.P.; Shang, S.; Hu, C.; Du, K.; Weidemann, A.; Hou, W.; Lin, J.; Lin, G. Secchi disk depth: A new theory and mechanistic model for underwater visibility. Remote Sens. Environ. 2015, 169, 139–149. [Google Scholar] [CrossRef] [Green Version]
  28. Jiang, D.; Matsushita, B.; Setiawan, F.; Vundo, A. An improved algorithm for estimating the Secchi disk depth from remote sensing data based on the new underwater visibility theory. ISPRS J. Photogramm. Remote Sens. 2019, 152, 13–23. [Google Scholar] [CrossRef]
  29. Zhan, J.; Zhang, D.; Zhou, G.; Zhang, G.; Cao, L.; Guo, Q. MODIS-Based Research on Secchi Disk Depth Using an Improved Semianalytical Algorithm in the Yellow Sea. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5964–5972. [Google Scholar] [CrossRef]
  30. Ighalo, J.O.; Adeniyi, A.G.; Marques, G. Artificial Intelligence for Surface Water Quality Monitoring and Assessment: A Systematic Literature Analysis. Model. Earth Syst. Environ. 2021, 7, 669–681. [Google Scholar] [CrossRef]
  31. Sentas, A.; Psilovikos, A.; Psilovikos, T.; Matzafleri, N. Comparison of the performance of stochastic models in forecasting daily Dissolved Oxygen in the Dam—Lake Thesaurus. Desalin. Water Treat. 2016, 57, 11660–11674. [Google Scholar] [CrossRef]
  32. Karamoutsou, L.; Psilovikos, A. Modeling of Dissolved Oxygen Concentration using a Deep Neural Network Approach in Lake Kastoria, Greece. Eur. Water 2020, 71/72, 3–14. [Google Scholar]
  33. Gómez, D.; Salvador, P.; José, J.S.; Casanova, L. A New Approach to Monitor Water Quality in the Menor sea (Spain) using Satellite Data and Machine Learning Methods. Environ. Pollut. 2021, 286, 117489. [Google Scholar] [CrossRef]
  34. Heddam, S. Secchi Disk Depth Estimation from Water Quality Parameters: Artificial Neural Network versus Multiple Linear Regression Models? Environ. Process. 2016, 3, 525–536. [Google Scholar] [CrossRef]
  35. Batur, E.; Maktav, D. Assessment of Surface Water Quality by Using Satellite Images Fusion Based on PCA Method in the Lake Gala, Turkey. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2983–2989. [Google Scholar] [CrossRef]
  36. Arias-Rodriguez, L.F.; Duan, Z.; Sepúlveda, R.; Martinez-Martinez, S.I.; Disse, M. Monitoring Water Quality of Valle de Bravo Reservoir, Mexico, Using Entire Lifespan of MERIS Data and Machine Learning Approaches. Remote Sens. 2020, 12, 1586. [Google Scholar] [CrossRef]
  37. Lin, F.; Hou, T.; Jin, Q.; You, A. Improved YOLO Based Detection Algorithm for Floating Debris in Waterway. Entropy 2021, 23, 1111. [Google Scholar] [CrossRef]
  38. D’Angelo, G.; Palmieri, F. A stacked autoencoder-based convolutional and recurrent deep neural network for detecting cyberattacks in interconnected power control systems. Int. J. Intell. Syst. 2021, 36, 7080–7102. [Google Scholar] [CrossRef]
  39. Cao, F.; Bao, Q. A Survey on Image Semantic Segmentation Methods with Convolutional Neural Network. In Proceedings of the 2020 International Conference on Communications, Information System and Computer Engineering (CISCE), Kuala Lumpur, Malaysia, 3–5 July 2020; pp. 458–462. [Google Scholar] [CrossRef]
  40. Oga, T.; Umeki, Y.; Iwahashi, M.; Matsuda, Y. River water quality estimation based on convolutional neural network. In Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018; pp. 1305–1308. [Google Scholar] [CrossRef]
  41. Montassar, I.; Benazza-Benyahia, A. Water turbidity estimation in water sampled images. In Proceedings of the 2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Sousse, Tunisia, 2–5 September 2020; pp. 1–5. [Google Scholar] [CrossRef]
  42. Wang, X.; Zhang, S.; Liang, X.; Zhou, H.; Zheng, J.; Sun, M. Accurate and Fast Blur Detection Using a Pyramid M-Shaped Deep Neural Network. IEEE Access 2019, 7, 86611–86624. [Google Scholar] [CrossRef]
  43. Zeng, K.; Wang, Y.; Mao, J.; Liu, J.; Peng, W.; Chen, N. A Local Metric for Defocus Blur Detection Based on CNN Feature Learning. IEEE Trans. Image Process. 2019, 28, 2107–2115. [Google Scholar] [CrossRef]
  44. Lin, F.; Yu, Z.; Jin, Q.; You, A. Semantic Segmentation and Scale Recognition–Based Water-Level Monitoring Algorithm. J. Coast. Res. 2020, 105, 185–189. [Google Scholar] [CrossRef]
  45. Wang, X.; Chen, H.; Wu, Q.E.; Guo, Y. On Research of Video Stream Detection Algorithm for Ship Waterline. In Proceedings of the 2020 International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), Fuzhou, China, 12–14 June 2020; pp. 204–208. [Google Scholar] [CrossRef]
  46. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Proceedings of the Annual Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 91–99. [Google Scholar]
  47. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  48. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation; Springer: Cham, Switzerland, 2018. [Google Scholar]
  49. Zhang, J. Gradient Descent based Optimization Algorithms for Deep Learning Models Training. arXiv 2019, arXiv:1805.07500. [Google Scholar]
  50. Sun, R. Optimization for deep learning: Theory and algorithms. arXiv 2019, arXiv:1912.08957. [Google Scholar]
  51. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Image Net Classification with Deep Convolutional Neural Networks. In Proceedings of the International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 3–6 December 2012; Curran Associates Inc.: New York, NY, USA, 2012; pp. 1097–1105. [Google Scholar]
  52. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar]
  53. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Overall framework of the algorithm.
Figure 2. Flow chart of initial image crop of the white part of the Secchi disk.
Figure 3. Flow of water gauge segmentation.
Figure 4. Flow chart of character segmentation.
Figure 5. Loss and accuracy in Faster RCNN training.
Figure 6. Loss and accuracy in classification network training of Secchi disk.
Figure 7. The loss and accuracy curve from the training process.
Figure 8. Loss and accuracy curve from character classification network training.
Table 1. Functions used for training data augmentation of the Faster RCNN network.

| Name of the Function | Role of the Function | Number of Images Augmented with the Function |
|---|---|---|
| MedianBlur | Blur the image using a median filter | 460 |
| Cutout | Randomly drop out square regions of the image | 433 |
| RandomSunFlare | Simulate sun flare in the image | 446 |
| RandomFog | Simulate fog in the image | 470 |
| RandomRain | Simulate rain in the image | 437 |
| MotionBlur | Apply motion blur to the image | 447 |
| GlassBlur | Apply glass blur to the image | 481 |
| Superpixels | Transform the image to its superpixel representation | 476 |
| Sharpen | Sharpen the image | 464 |
| ImageCompression | Decrease the JPEG compression quality of the image | 382 |
| MultiplicativeNoise | Multiply the image by a random number | 462 |
| CLAHE | Apply contrast-limited adaptive histogram equalization to the image | 474 |
| HorizontalFlip | Flip the image horizontally | 438 |
| Rotate | Randomly rotate the image | 471 |
| VerticalFlip | Flip the image vertically | 429 |
| RandomCrop | Randomly crop the image | 461 |
| ShiftScaleRotate | Randomly apply affine transforms | 426 |
| Perspective | Randomly apply perspective transforms | 436 |
| RandomSnow | Simulate snow in the image | 417 |
| HueSaturationValue | Randomly change the hue, saturation, and value of the image | 485 |
| ISONoise | Apply camera sensor noise | 439 |
| GaussNoise | Apply Gaussian noise | 428 |
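The function names in Table 1 match transforms found in common image-augmentation libraries such as Albumentations (an assumption; the paper does not name the library it used). To illustrate the kind of pixel-level operation each entry applies, the sketch below re-implements two of the simplest ones, HorizontalFlip and MedianBlur, in pure Python on a grayscale image stored as a list of rows:

```python
# Minimal sketch, not the authors' implementation: HorizontalFlip and
# MedianBlur from Table 1, written in pure Python for a grayscale image
# represented as a list of rows of integer pixel values.
from statistics import median

def horizontal_flip(img):
    """Mirror the image left-to-right (the table's HorizontalFlip)."""
    return [row[::-1] for row in img]

def median_blur(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood
    (the table's MedianBlur); border pixels clamp to the image edge."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = median(window)
    return out

img = [[0, 10, 20],
       [30, 40, 50],
       [60, 70, 80]]
print(horizontal_flip(img)[0])   # [20, 10, 0]
print(median_blur(img)[1][1])    # 40
```

In practice such transforms are applied with a random probability per training image, which is how the per-function counts in the table arise.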
Table 2. The results from cropping the white part of the Secchi disk.

| Image | Blur Degree of Secchi Disk | Initial Crop | Fine Crop |
|---|---|---|---|
| (image) | clear | (image) | (image) |
| (image) | blurred | (image) | (image) |
| (image) | very blurred | (image) | (image) |
| (image) | completely invisible | (image) | (image) |
Table 3. Test results of critical position determination by the classification network.

| Video | Initial Position | Completely Invisible Position | Critical Position |
|---|---|---|---|
| 1 | (image) | (image) | (image) |
| 2 | (image) | (image) | (image) |
| 3 | (image) | (image) | (image) |
| 4 | (image) | (image) | (image) |
| 5 | (image) | (image) | (image) |
Table 4. Functions used for training data augmentation of the Deeplabv3+ network.

| Name of the Function | Role of the Function | Number of Images Augmented with the Function |
|---|---|---|
| MedianBlur | Blur the image using a median filter | 691 |
| Cutout | Randomly drop out square regions of the image | 689 |
| RandomSunFlare | Simulate sun flare in the image | 630 |
| RandomFog | Simulate fog in the image | 675 |
| RandomRain | Simulate rain in the image | 663 |
| MotionBlur | Apply motion blur to the image | 719 |
| GlassBlur | Apply glass blur to the image | 670 |
| Superpixels | Transform the image to its superpixel representation | 672 |
| Sharpen | Sharpen the image | 649 |
| RandomShadow | Simulate shadow in the image | 655 |
| MultiplicativeNoise | Multiply the image by a random number | 701 |
| CLAHE | Apply contrast-limited adaptive histogram equalization to the image | 654 |
| HueSaturationValue | Randomly change the hue, saturation, and value of the image | 666 |
| RandomBrightnessContrast | Randomly change the brightness and contrast of the image | 657 |
| RandomSnow | Simulate snow in the image | 686 |
| GaussianBlur | Apply Gaussian blur to the image | 650 |
| Emboss | Emboss the image | 661 |
| GridDropout | Drop out square regions of the image in a grid fashion | 674 |
| ImageCompression | Decrease the JPEG compression quality of the image | 656 |
| ISONoise | Apply camera sensor noise | 681 |
| HorizontalFlip | Flip the image horizontally | 652 |
| Rotate | Randomly rotate the image | 650 |
| VerticalFlip | Flip the image vertically | 672 |
| RandomCrop | Randomly crop the image | 649 |
| ShiftScaleRotate | Randomly apply affine transforms | 626 |
| Perspective | Randomly apply perspective transforms | 680 |
Table 5. Performance of Deeplabv3+ with different backbone networks.

| Backbone | Image Size | FPS | PA | MIoU |
|---|---|---|---|---|
| Resnet18 | 640 × 480 | 23.95 | 99.55% | 94.20% |
| Xception | 640 × 480 | 16.97 | 99.57% | 94.38% |
| Mobilenetv2 | 640 × 480 | 17.35 | 99.57% | 94.48% |
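The PA (pixel accuracy) and MIoU (mean intersection over union) columns in Table 5 are standard semantic-segmentation metrics. As a minimal sketch (not the authors' evaluation code), they can be computed from flat lists of ground-truth and predicted class labels as follows:

```python
# Sketch of the two segmentation metrics reported in Table 5:
# PA  = fraction of pixels whose predicted class matches the ground truth;
# MIoU = per-class |intersection| / |union|, averaged over classes.
def pixel_accuracy(gt, pred):
    """Fraction of correctly classified pixels."""
    return sum(g == p for g, p in zip(gt, pred)) / len(gt)

def mean_iou(gt, pred, num_classes):
    """Mean over classes of intersection-over-union of the two label maps."""
    ious = []
    for c in range(num_classes):
        inter = sum(g == c and p == c for g, p in zip(gt, pred))
        union = sum(g == c or p == c for g, p in zip(gt, pred))
        if union:                      # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

gt   = [0, 0, 1, 1, 1, 1]   # toy 6-pixel example, 2 classes
pred = [0, 1, 1, 1, 1, 0]
print(round(pixel_accuracy(gt, pred), 3))   # 0.667
print(round(mean_iou(gt, pred, 2), 3))      # 0.467
```

The near-saturated PA values in the table reflect that most pixels belong to the background class; MIoU weights each class equally and is therefore the more discriminative of the two.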
Table 6. Water gauge reading calculation test results.

| Input Image | Water Gauge after Segmentation | Ground Truth (cm) | Measurement (cm) |
|---|---|---|---|
| (image) | (image) | 23.6 | 23.8 |
| (image) | (image) | 60.8 | 61.9 |
| (image) | (image) | 46.0 | 46.4 |
| (image) | (image) | 13.2 | 13.8 |
| (image) | (image) | 34.4 | 34.5 |
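One way to summarise the gauge-reading accuracy in Table 6 (the paper does not report this aggregate itself) is the mean absolute error over the five test images, using the ground-truth and measured values from the table:

```python
# Sketch: per-image absolute errors and mean absolute error (MAE) of the
# water-gauge readings in Table 6, in centimetres.
ground_truth = [23.6, 60.8, 46.0, 13.2, 34.4]
measured     = [23.8, 61.9, 46.4, 13.8, 34.5]

errors = [abs(m - g) for g, m in zip(ground_truth, measured)]
mae = sum(errors) / len(errors)
print([round(e, 1) for e in errors])   # [0.2, 1.1, 0.4, 0.6, 0.1]
print(f"MAE = {mae:.2f} cm")           # MAE = 0.48 cm
```

A sub-centimetre mean error is well within the 1 cm graduation interval of a standard water gauge.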
Table 7. Overall test results of the algorithm.

| Video | Initial Position of Secchi Disk | Critical Position of Secchi Disk | Transparency of Manual Measurement | Transparency of GUI Measurement |
|---|---|---|---|---|
| 1 | (image) | (image) | 55.0 cm | 57.8 cm |
| 2 | (image) | (image) | 58.0 cm | 61.4 cm |
| 3 | (image) | (image) | 50.0 cm | 52.7 cm |
| 4 | (image) | (image) | 40.0 cm | 42.5 cm |

Lin, F.; Gan, L.; Jin, Q.; You, A.; Hua, L. Water Quality Measurement and Modelling Based on Deep Learning Techniques: Case Study for the Parameter of Secchi Disk. Sensors 2022, 22, 5399. https://doi.org/10.3390/s22145399
