Control of object prediction using Smart Optimized Water Indication (SONIC) algorithm for flood detection

This paper presents the development of a new image-processing algorithm that enables fast and accurate detection of flood disasters using the SONIC (Smart Optimized Water Indication) method. Detection in online real-time camera systems was evaluated through several tests covering object-classification cameras, thermal cameras, and learning cameras. A WebRTC-based system enables real-time, multiplatform data delivery to devices comprising computers and Android gadgets for object classification with the SONIC algorithm. The objects consist of humans, yellow balls, and green balls, each sampled from 50 points of view. The experiments showed classification results of up to 100% with real-time camera capture speeds.


Introduction
Flood is a natural phenomenon that originates from rainfall of high intensity and long duration in watersheds. Environmental damage and physical changes in the land surface decrease the capacity to absorb and store rainwater, so that most of the rainfall flows at high discharge, potentially becoming a flood disaster. The International Charter, a global network for Earth observation-based major disaster emergencies, created a regional network based on Earth observation for better management and damage assessment of natural disasters [1][2][3][4][5].
Digital image processing is the most informative data processor for detection methods [6][7][8][9]. Image processing using the Fuzzy C-Means (FCM) technique can identify flooded areas [10]. This FCM framework uses multimedia analysis from three training architectures, placing objects in three data stages of the image-processing process: the initial data, the doubled data, and the final data. Analysis of satellite images in the Flood Detection in Satellite Imagery (FDSI) task uses a convolutional neural network (CNN) with deep-learning classification to identify passable paths in satellite imagery [7]. A CNN architecture for building detection can accommodate compiled models with low numbers of training data. This architecture, called Modular-CNN, improves the performance of building detectors using Histogram of Oriented Gradients (HOG) and Local Binary Patterns (LBP) on remote-sensing datasets [11,12].
In urban mapping, the introduction of a two-level assessment framework for the DEM reconstruction method is necessary to obtain numerical and morphological accuracy [13,14]. Besides this, CNN-based approaches have been developed as well. The technique produced superior results, providing innovative, cost-effective methods for obtaining high-resolution DEMs in urbanized areas [15]. A combination of CNN and ANN to detect informative tweets during a disaster was tested on real-time Twitter datasets such as Hurricane Harvey 2017; the approach outperformed existing methods in speed, memory, F1 score, and accuracy [16].
Today, deep learning for classification and analysis in image processing by neural networks has been studied in mapping [17][18][19][20][21]. ConvNets as neural networks in image processing, combined with SVM prediction, exploit data available from Geographic Information Systems (GIS), and such technological advances have been made in Indonesia [23]. An automatic disaster detection system observed the extent of disaster occurrences in satellite images; locating each incident with deep-learning techniques yields 80%~90% accuracy for disaster detection. The results presented there can facilitate improvements in detecting natural disasters efficiently by building an automatic disaster detection system [24]. CrowdVAS-Net, which achieves 77.8% classification accuracy, outperforms advanced machine-learning models. Moreover, this framework can reduce video processing and analysis time by up to 96.8% by extracting motion and appearance features from video frames to classify crowd motion as abnormal or usual [25].
Digital image processing is advantageous in dealing with flood problems; however, its accuracy still needs to be improved. Therefore, an integrated system combining a cloud-based web network, fast data interaction, and artificial-intelligence algorithms needs to be studied, so that the system can be integrated and accessed by all parties.

Material and method

Material
The materials used in this study are an AMD E2 laptop webcam integrated with a Samsung J1 smartphone on a web-server-based system, with a camera-capture architecture scheme aimed at a spherical object. Data sent by the camera through the WebRTC (Web Real-Time Communication) STUN server is obtained with the Firebase REST API in a real-time web data format, including time, object location, and the data address from the WebSocket port.

Method
The method employed digital image processing. Digital image processing consists of stages for object detection, distance, and motion, as shown in Figure 1. Object detection uses segmentation methods. Data sent by the camera via the WebRTC STUN server is captured through the REST API in a real-time web data format, including time, object location, and the data address from the WebSocket port. This video streaming data is tested on three cameras: a heatmap camera, a motion-object camera, and an object-classification camera.
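The real-time payload described above can be sketched as a simple JSON record sent to the REST API. The exact schema is not given in the paper, so the field names and the `make_frame_record` helper below are assumptions for illustration:

```python
import json
import time

def make_frame_record(object_label, x, y, ws_port):
    """Package one detection event in the real-time web data format:
    timestamp, object location, and the WebSocket data address.
    Field names are hypothetical; the paper does not list the schema."""
    return {
        "time": time.time(),           # capture timestamp (epoch seconds)
        "object": object_label,        # e.g. "yellow ball"
        "location": {"x": x, "y": y},  # pixel coordinates in the frame
        "address": f"ws://localhost:{ws_port}",
    }

record = make_frame_record("yellow ball", 120, 85, 8080)
payload = json.dumps(record)  # body of the REST API request
```

Serializing each event as JSON keeps the format platform-neutral, which matches the multiplatform (computer and Android) delivery described in the abstract.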

Figure 1. Experimental Setup
The segmentation is obtained by splitting the image data, in the form of object-image arrays, into three basic channels, namely red, green, and blue, which are then filtered in the epoch and modeling process. The data is modeled with a CNN and classified using machine learning with the KNN method, as shown in Figure 2. The RGB (Red, Green, Blue) image, which is 32x32 pixels in size, is a multidimensional array of size 32x32x3 (3 being the number of channels).
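A minimal sketch of this 32x32x3 representation and the channel split, using NumPy with a random placeholder frame (the normalization to [0, 1] is a common assumption, not stated in the paper):

```python
import numpy as np

# A 32x32 RGB frame: a multidimensional array of shape (32, 32, 3).
frame = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)

# Split into the three basic channels used for segmentation.
red, green, blue = frame[:, :, 0], frame[:, :, 1], frame[:, :, 2]

# Scale pixel values to [0, 1] before CNN modeling.
normalized = frame.astype(np.float32) / 255.0
```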

Figure 2. Process of object segmentation
At the classification stage, the information is extracted into 50 angle-change parts. The images are divided into three primary classes, namely yellow balls, green balls, and humans, trained for 50 epochs with a batch size of 16 and a learning rate of 0.001 as a reference for filtering data from the web-camera scheme, as shown in Figure 3. The process uses the TensorFlow framework library to maximize GPU functionality. Besides that, the filtering process is complex enough to obtain the best values from data taken directly from the camera.
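The training setup above (three classes, 50 epochs, batch size 16, learning rate 0.001) can be sketched in TensorFlow/Keras as follows. The layer sizes are assumptions, since the paper does not specify the CNN architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Small CNN for 32x32x3 frames and the three classes in the paper:
# yellow ball, green ball, human. Layer widths are assumptions.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),  # one output unit per class
])

# Training settings from the paper: lr 0.001, 50 epochs, batch size 16.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=50, batch_size=16)
```

The final softmax layer yields the per-class probabilities that the motion-detection results later report as values such as 0.3, 0.6, and 1.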

Result and discussion
A new intelligence method, SONIC, is used to detect flooded areas. The system under study consists of CNN, K-Nearest Neighbours (KNN), HOG, and WebRTC classification components for image processing, together with an IoT-based architecture developed as a web-app control output.
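The HOG-plus-KNN stage of this pipeline can be sketched with scikit-image and scikit-learn. The frames below are random placeholders, and the HOG cell/block sizes and the choice of k are assumptions, not values from the paper:

```python
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

# Placeholder grayscale 32x32 frames standing in for camera captures.
rng = np.random.default_rng(0)
frames = rng.random((12, 32, 32))
labels = ["yellow ball", "green ball", "human"] * 4

# One HOG descriptor per frame (cell/block sizes are assumptions).
features = [hog(f, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            for f in frames]

# KNN classifier over the HOG feature vectors.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(features, labels)
prediction = knn.predict([features[0]])[0]
```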

Object detection
Pictures taken of objects exposed to water and leaf noise provide a precise measurement of motion detection by the camera-based detector system.

Figure 5. Object detection with noises A) in the morning B) in the evening C) at night
The experiment was done in the morning, in the evening, and at night with maximum light resolution on the ball in the measuring cup, to which water was added up to 200 ml within the bounding box. The captured pictures show that the system can still detect objects, classified under the name yellow ball, as demonstrated in Figures 5a, 5b, and 5c.

Motion detection
Heatmap camera data was retrieved using an SNI (Indonesian National Standard) measuring tube. In the initial experiments, water was poured onto the surface of the ball up to a maximum capacity of 100 ml. Data retrieval starts from the movement of the object, measured by the frame value on the processing camera. The amount of shifting data from the initial 0 ml to 100 ml is shown in Figures 6a, 6b, and 6c. The frame-load data shifts most when the ball is affected by the water shift, at the resulting discharge values during the first 80 seconds and the final 140 seconds. This shifting shows the process of capturing a significant object, as seen in Figure 7. Image acquisition for motion-detector classification with a static camera showed that objects have probability values of 0.3, 0.6, and 1. These indicators show that the glass is predicted according to the dry, 100 ml, and 200 ml conditions; the detected value of the measuring cup on the camera changes proportionally to changes in the measuring-cup images 8a, 8b, and 8c. The feature-map dimension is calculated from the length or height of the input minus the filter height, as given by equation (1):

W_out = W - F + 1    (1)

where W is the input length or height and F is the filter height.
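A quick check of equation (1) for the 32x32 inputs used here; the 3x3 filter size is an assumption for illustration, and the formula as reconstructed assumes unit stride and no padding:

```python
def feature_map_size(input_size, filter_size):
    """Equation (1): output dimension = input length or height
    minus filter height, plus one (unit stride, no padding)."""
    return input_size - filter_size + 1

# A 32x32 input convolved with a 3x3 filter gives a 30x30 feature map.
out = feature_map_size(32, 3)
```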

Conclusion
The detection measurements show that the SONIC algorithm reaches 100% accuracy on the tested data, based on the time the camera captures the object and on changes in the categorization of objects under each different condition. Combined with real-time web cameras, the data can be obtained for implementation on all platforms, both gadgets and computers.

Acknowledgment
Our thanks go to all supporters and organizers of this conference. We also acknowledge our alma mater, Jember University, for every support and contribution given in this research.