Planting Pattern Modeling Based on Rainfall Prediction Using Backpropagation Artificial Neural Network (Case Study: BMKG Rainfall Data, Deli Serdang Regency)

Analysis of data from the Agricultural Research and Development Agency shows that the cropping pattern in Deli Serdang Regency was originally rice-rice; with changes in varieties, it shifted to a rice-rice-rice pattern. Continuous rice cropping over time eventually caused a new problem, namely outbreaks of rice pests (leafhoppers, Nephotettix virescens, Orseolia oryzae) that led to crop failure. These pest outbreaks are one cause of the decline in agricultural productivity in Deli Serdang Regency. The purpose of this study is to provide an alternative solution for increasing agricultural production in Deli Serdang by modeling the most profitable cropping pattern, based on planting times that best match rainfall requirements in Deli Serdang, where rainfall is predicted using a backpropagation artificial neural network; the resulting model can serve as a guideline for the utilization of agricultural land in Deli Serdang Regency. The best backpropagation model in this study is 12-2-1, with a best learning rate of 0.08 and best momentum of 0.99, giving a testing Mean Square Error of 0.0260. Based on the planting calendar and the cropping models obtained from the rainfall predictions, four cropping patterns that can be applied in Deli Serdang Regency are modeled, namely palawija-rice-rice, palawija-rice-palawija, palawija-palawija-palawija, and palawija-palawija-rice.


Introduction
Deli Serdang Regency is one of 33 districts/cities in North Sumatra and has the second largest harvest area in the province after Simalungun (Indonesian Center for Rice Research, Ministry of Agriculture). In addition, Deli Serdang Regency is one of the districts that contributes rice to meet food needs in North Sumatra Province. Paddy production (GKP) of North Sumatra Province in 2017 (last update: 04 Sep 2018) amounted to 4,669,777.5 tons, of which 512,321.5 tons (10.97%) came from Deli Serdang. This means that Deli Serdang Regency makes a considerable contribution to supporting food security in North Sumatra.
The cropping pattern plays an important role in producing high agricultural productivity, in line with what was stated by [1]. Analysis of data from the Agricultural Research and Development Agency shows that the cropping pattern in Deli Serdang Regency was originally rice-rice; with a change in variety, it shifted to a rice-rice-rice pattern. Continuous rice cropping over time eventually created a new problem, namely outbreaks of rice pests (leafhoppers, tungro, ganjur) that caused crop failure (Research Institute for Assorted Beans and Tubers, Ministry of Agriculture, 7 May 2015). These pest outbreaks are one cause of the decline in agricultural productivity in Deli Serdang Regency. One alternative that can be taken to increase agricultural production and reduce crop failure in Deli Serdang Regency, and thereby increase national food productivity, is to determine the most profitable planting time based on cropping patterns. Such patterns can serve as guidelines for the utilization of agricultural land in Deli Serdang Regency and are determined and predicted from rainfall data for the regency.
Table 1. Requirements for Growing Crops
The Artificial Neural Network (ANN) is an artificial representation of the human brain that attempts to simulate the learning process of the brain. The term "artificial" is used because this neural network is implemented as a computer program able to complete a number of calculation processes during the learning process [5].
Backpropagation is a neural network learning algorithm that reduces the error rate by adjusting the weights based on the difference between the output and the target to be achieved. It is a supervised learning algorithm in which the input and output pairs are predetermined [6]. These data pairs also provide clear information about how the network system must be built and modified so that the best form is obtained. The pairs are used to train the input weights to produce the actual output, which is then compared with the target output; the difference between the actual output and the target output is called the error [7]. The purpose of this study is to obtain a model of cropping patterns in Deli Serdang based on rainfall predicted using a backpropagation Artificial Neural Network (ANN).

Types and Sources of Data
The data used in this research are secondary data, namely rainfall data for Deli Serdang Regency obtained from the BMKG Indonesia Online Data Center database. The data are monthly rainfall records from January 2010 to December 2019.

Neural Network
The artificial neural network architecture used in this study consists of one input layer, one hidden layer, and one output layer. The input layer is defined as 12 neurons because it refers to the number of months in a year; the output layer is set to one neuron because forecasting is carried out one month ahead; and the hidden layer uses between 1 and 12 neurons. This research uses 12 inputs, namely the rainfall of month 1, month 2, up to month 12. The single output node gives the predicted rainfall for the 13th month, with n nodes in the hidden layer, where n is determined after performing the optimum network selection steps. The activation function used is the binary sigmoid, whose output never actually reaches 0 or 1, so the data must be transformed into the smaller interval (0.1, 0.9) using min-max normalization. The normalization equation used is

x' = 0.1 + 0.8 (x - a) / (b - a)

where x' = normalized data, x = original data, a = lowest value in the data, and b = highest value in the data.
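The min-max transformation above and its inverse (needed later to denormalize the predictions) can be sketched as follows; this is a minimal illustration, and the function names are ours, not from the study:

```python
def normalize(x, a, b):
    """Min-max normalization of a rainfall value x into the interval (0.1, 0.9),
    where a is the lowest and b the highest value in the data set."""
    return 0.1 + 0.8 * (x - a) / (b - a)

def denormalize(x_norm, a, b):
    """Inverse transform: restore a normalized value to the original rainfall scale."""
    return a + (x_norm - 0.1) * (b - a) / 0.8
```

The inverse mapping simply solves the normalization equation for x, so a round trip returns the original value.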

Data Collection
The rainfall data for Deli Serdang Regency in 2010-2019 comprise 120 observations. The data are divided into two parts: rainfall data for January 2010 to December 2017 (96 observations) as training data, and rainfall data for January 2018 to December 2019 (24 observations) as testing data.
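The split above, together with the 12-months-in/13th-month-out pattern described earlier, can be sketched as follows; the function names are illustrative, not from the study:

```python
def split_train_test(series, n_train=96):
    """Split the 120 monthly observations: first 96 (2010-2017) for training,
    last 24 (2018-2019) for testing."""
    return series[:n_train], series[n_train:]

def make_patterns(series, window=12):
    """Turn a monthly rainfall series into (input, target) pairs:
    12 consecutive months as input, the 13th month as the target."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y
```

For example, a 96-value training series yields 84 training pairs, each pairing one year of rainfall with the following month.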

Data Analysis Method
The steps in the backpropagation algorithm, as described by [8], are as follows. Step 0: initialize the weights with fairly small random values. The common procedure is to initialize the biases and weights, both from the input units to the hidden units and from the hidden units to the output, within a certain interval (-γ, γ), for example -0.4 to 0.4, -0.5 to 0.5, or -1 to 1.
Step 1: as long as the stopping condition is not fulfilled, carry out steps 2 to 9. The algorithm stops when the specified target error has been reached, or when the iteration has reached the specified maximum iteration limit.
Step 2: for each training pair, carry out steps 3 to 8.

Forward phase (feedforward):

Step 3: each input neuron x_i, i = 1, 2, 3, ..., n receives the input x_i and spreads the signal to all neurons in the layer above it (the hidden layer).

Step 4: for each hidden neuron z_j, j = 1, 2, 3, ..., p, the input value is calculated using the weight values:

z_in_j = v_0j + Σ_{i=1}^{n} x_i v_ij    (2)

with z_in_j = total input signal at hidden unit j, v_0j = bias weight at hidden unit j, x_i = input value at input unit i, and v_ij = weight between input unit i and hidden unit j. Then the output value is calculated using the activation function:

z_j = f(z_in_j)    (3)

where the activation function used is the binary sigmoid,

f(x) = 1 / (1 + e^(-x))    (4)

The result of the function is sent to all neurons in the layer above it.

Step 5: for each output neuron y_k, k = 1, 2, 3, ..., m, the input value is calculated with its weight values:

y_in_k = w_0k + Σ_{j=1}^{p} z_j w_jk    (5)

with y_in_k = total input signal at output unit k, w_0k = bias weight at output unit k, z_j = output value of hidden unit j, and w_jk = weight between hidden unit j and output unit k. Then the output value is calculated using the activation function:

y_k = f(y_in_k)    (6)

with y_k = output value at output unit k and y_in_k = total input signal at output unit k.

Backward phase (backpropagation):

Step 6: each output neuron y_k, k = 1, 2, 3, ..., m receives the target pattern corresponding to the input pattern, and the error information is calculated:

δ_k = (t_k - y_k) f'(y_in_k)    (8)

where, for the sigmoid function, f'(y_in_k) = y_k (1 - y_k), with δ_k = error factor at output unit k, t_k = target value at output unit k, and y_k = output value at output unit k. Then the weight correction that will later be used to update w_jk is calculated:

Δw_jk = α δ_k z_j    (9)

with Δw_jk = weight correction between hidden unit j and output unit k, and α = learning rate, 0 < α < 1. The bias correction that will later be used to update w_0k is calculated as in (10):

Δw_0k = α δ_k    (10)

with Δw_0k = bias weight correction at output unit k. The value δ_k is then sent to the neurons in the previous layer.

Step 7: for each hidden neuron z_j, j = 1, 2, 3, ..., p, the weighted sum of the incoming error factors is calculated:

δ_in_j = Σ_{k=1}^{m} δ_k w_jk    (11)

with δ_in_j = total error input at hidden unit j. This value is then multiplied by the derivative of the activation function to obtain the error information:

δ_j = δ_in_j f'(z_in_j)    (12)

with δ_j = error factor at hidden unit j. The weight correction that is then used to update the weight value is calculated:

Δv_ij = α δ_j x_i    (13)

and the bias correction that is then used to update the bias value:

Δv_0j = α δ_j    (14)

with Δv_ij = weight change between input unit i and hidden unit j.

Weight modification (adjustment) phase:

Step 8: each bias and weight (j = 0, 1, 2, 3, ..., p) at the output neurons y_k, k = 1, 2, 3, ..., m is updated:

w_jk(new) = w_jk(old) + Δw_jk    (15)

and each hidden unit z_j, j = 1, 2, 3, ..., p corrects its bias and weights (i = 0, 1, 2, 3, ..., n):

v_ij(new) = v_ij(old) + Δv_ij    (16)

Step 9: test whether the stopping condition is fulfilled; if it is, training stops. Two things make the condition stop: a. A limit is placed on the error tolerance. One way to calculate the error is the Mean Square Error (MSE); the training process continues until the MSE value is smaller than the predetermined error tolerance, after which the weight values are stored for identifying data. b. A limit is placed on the number of epochs to be performed. One epoch is one pass over all training pairs, i.e. steps 2 through 8.
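The steps above can be sketched in a minimal Python implementation for a single-hidden-layer network such as the 12-2-1 model of this study. The momentum term and the adaptive learning rate used in the study are omitted for brevity, and all names are illustrative:

```python
import math
import random

def sigmoid(x):
    # Binary sigmoid activation, equation (4)
    return 1.0 / (1.0 + math.exp(-x))

def train_backprop(X, T, n_hidden=2, alpha=0.08, max_epochs=1000, tol=1e-5):
    """Steps 0-9 for a network with one hidden layer and one output unit.
    X is a list of input vectors, T a list of scalar targets in (0.1, 0.9)."""
    n_in = len(X[0])
    random.seed(0)
    # Step 0: small random weights and biases in (-0.5, 0.5)
    v = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_in)]
    v0 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    w = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    w0 = random.uniform(-0.5, 0.5)
    mse = float("inf")
    for epoch in range(max_epochs):          # Step 1: loop until stopping condition
        sq_err = 0.0
        for x, t in zip(X, T):               # Step 2: each training pair
            # Steps 3-5: feedforward
            z = [sigmoid(v0[j] + sum(x[i] * v[i][j] for i in range(n_in)))
                 for j in range(n_hidden)]
            y = sigmoid(w0 + sum(z[j] * w[j] for j in range(n_hidden)))
            sq_err += (t - y) ** 2
            # Step 6: output error factor, using f'(y_in) = y (1 - y)
            delta_k = (t - y) * y * (1.0 - y)
            # Step 7: propagate the error back to the hidden layer (old weights)
            delta_j = [delta_k * w[j] * z[j] * (1.0 - z[j]) for j in range(n_hidden)]
            # Step 8: update weights and biases
            for j in range(n_hidden):
                w[j] += alpha * delta_k * z[j]
                for i in range(n_in):
                    v[i][j] += alpha * delta_j[j] * x[i]
                v0[j] += alpha * delta_j[j]
            w0 += alpha * delta_k
        mse = sq_err / len(X)
        if mse < tol:                         # Step 9a: error tolerance reached
            break
    return v, v0, w, w0, mse
```

Note that the hidden-layer error factors are computed from the old output weights before those weights are updated, as the step order in the algorithm requires.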

Network Training
Training is conducted using backpropagation with a learning rate and momentum. The learning rate in this study is adaptive, meaning that it can change during the training process. The network training stage was carried out using Matlab R2020b. Training was performed on the training data with a target error of 0.00001; the momentum and learning rate values were tested between 0.0001 and 0.9999 in multiples of 0.01. The architectural model, learning rate, and momentum values are tested at once, for a total of 10000 points, and the optimal network is selected as the one with the smallest testing MSE. The network training process stops when the iteration reaches the specified maximum limit, or when the specified target error has been reached. In each training run, the number of neurons in the hidden layer is varied in the range of 1 to 12. This process searches for the best configuration by changing the parameter values within the predetermined ranges and multiples by trial and error, because there is no general procedure for determining these parameter values. Network testing is carried out in three stages: testing based on the network architecture model, testing based on the learning rate, and testing based on the momentum.
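The trial-and-error selection described above amounts to a grid search that keeps the configuration with the smallest testing MSE. A minimal sketch is given below; the study itself used Matlab, and `train_fn` is a hypothetical callback assumed to train the network once and return its testing MSE:

```python
import itertools

def grid_search(train_fn, learning_rates, momenta):
    """Train once per (learning rate, momentum) pair and keep the
    configuration with the smallest testing MSE.
    train_fn(lr, mom) is assumed to return the MSE on the test data."""
    best_lr, best_mom, best_mse = None, None, float("inf")
    for lr, mom in itertools.product(learning_rates, momenta):
        mse = train_fn(lr, mom)
        if mse < best_mse:
            best_lr, best_mom, best_mse = lr, mom, mse
    return best_lr, best_mom, best_mse
```

With 100 values for each parameter this evaluates 10000 points, matching the search size reported in the study.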

Network Training Based on Architectural Model
The architectural models tested at this stage are varied with a range of 1-12 neurons in the hidden layer. The results of the architectural model testing are given in Table 2, which shows that a small training MSE does not guarantee a small testing MSE. The best architectural model is chosen by the smallest testing MSE, namely the 12-2-1 network architecture with 12 input nodes, 2 hidden-layer nodes, and 1 output node, with a testing MSE of 0.023792.

Network Training Based on Learning Rate and Momentum.
This network testing is done by varying the learning rate and momentum simultaneously from 0.0001 to 0.9999 in multiples of 0.01, giving 10000 variation points. The test uses the optimal architecture selected in the previous test. Table 3 shows only a snippet of the learning rate and momentum variations around the lowest MSE. The results in Table 3 show that the smallest testing MSE is obtained with a learning rate of 0.08 and a momentum of 0.99, with an MSE of 0.026. The test over 10000 points took 9 hours 30 minutes.

Rainfall Prediction
The optimum network architecture model obtained, 12-2-1 with a learning rate of 0.08 and a momentum of 0.99, is used to predict the monthly rainfall for January to December 2020, which is then used to model the best cropping patterns in Deli Serdang Regency. Table 4 gives the prediction results from the optimum model, denormalized to restore the rainfall data to its actual scale. The denormalized prediction results in Table 4 can then be used for modeling the cropping patterns shown in Table 5. Based on the rainfall prediction data in Table 5, a planting calendar is constructed according to the crop requirements in Table 1. Table 6 shows the planting calendar for Deli Serdang Regency in 2020, where a black dot indicates the suitability of the cropping pattern based on the month and crop type.