Article

Machine Learning Classification Ensemble of Multitemporal Sentinel-2 Images: The Case of a Mixed Mediterranean Ecosystem

by Christos Vasilakos *, Dimitris Kavroudakis and Aikaterini Georganta
Department of Geography, University of the Aegean, 81100 Mytilene, Greece
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(12), 2005; https://doi.org/10.3390/rs12122005
Submission received: 21 May 2020 / Revised: 15 June 2020 / Accepted: 20 June 2020 / Published: 22 June 2020

Abstract

Land cover type classification still remains an active research topic as new sensors and methods become available. Applications such as environmental monitoring, natural resource management, and change detection require more accurate, detailed, and constantly updated land-cover type mapping. These needs are fulfilled by newer sensors with high spatial and spectral resolution along with modern data processing algorithms. The Sentinel-2 sensor provides data with high spatial, spectral, and temporal resolution for the classification of highly fragmented landscapes. This study applies six traditional data classifiers and nine ensemble methods to multitemporal Sentinel-2 image datasets for identifying land cover types in the heterogeneous Mediterranean landscape of Lesvos Island, Greece. Support vector machine, random forest, artificial neural network, decision tree, linear discriminant analysis, and k-nearest neighbor classifiers are applied and compared with nine ensemble classifiers built on different voting methods. The kappa statistic, F1-score, and Matthews correlation coefficient metrics were used in the assembly of the voting methods. Support vector machine outperformed the base classifiers with a kappa of 0.91 and also outperformed the ensemble classifiers on an unseen dataset. Five voting methods performed better than the rest of the classifiers. A diversity study based on four different metrics revealed that an ensemble can be avoided if a base classifier shows an identifiable superiority. Therefore, ensemble approaches should include a careful selection of base classifiers based on a diversity analysis.


1. Introduction

Remote sensing image classification is considered among the main topics of remote sensing; it aims to extract land cover types on the basis of the spectral and spatial properties of targets in a study area [1]. Land cover/land use mapping is essential for many applications from a local to a global scale, e.g., environmental monitoring and management, detection of global change, desertification evaluation, decision-making support, urban change detection, landscape fragmentation, and tropical deforestation [2,3,4]. A vast amount of remote sensing data is archived and can be accessed freely or at low cost, while new data become available every day for the whole planet. The rapid growth of computational approaches, the evolution of sensors’ characteristics, and the availability of satellite data have fueled the development of novel methods in image classification. The most widely used methods are the supervised ones, including the traditional approaches of maximum likelihood and minimum distance as well as, more recently, modern machine learning classifiers, applied especially as pixel-based classifiers [5,6,7,8,9,10].
According to the literature, non-parametric methods tend to perform better than parametric methods [11,12]. The majority of articles compare machine learning algorithms on the basis of their accuracy performance and their advantages and disadvantages, aiming to identify the best algorithm for each classification case [11]. The most widely applied algorithms in classification are Random Forest (RF), Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Decision Trees (DT). In a recent study by Ghamisi et al. [13], hyperspectral data are classified with various classifiers including, amongst others, SVM, RF, ANN, and logistic regression. The comparison focuses on speed, the setup of various parameters, and the competence of automation; none of the classifiers has a clear advantage in terms of speed or accuracy. However, a significant number of studies have ascertained that the SVM algorithm presents a higher classification accuracy than the other algorithms [14,15,16]. The mathematical model of the SVM theory can distribute and separate the data more accurately than methods such as ANN, the Maximum Likelihood Classifier (MLC), and DT [17]. In contrast, Lapini et al. [18], in a forest classification study based on Synthetic Aperture Radar (SAR) images of a Mediterranean ecosystem of central Italy, compared six classifiers, i.e., RF, AdaBoost with Decision Trees, k-Nearest Neighbor (KNN), ANN, SVM, and Quadratic Discriminant. According to their results, RF performed better in almost all examined scenarios, while SVM was sensitive to unbalanced classes. In another recent study, the authors compared KNN, RF, SVM, and ANN in a classification of Landsat-8 Operational Land Imager (OLI) image data of arid desert-oasis mosaic landscapes. ANN performed marginally better than the other classifiers, while RF showed a stable performance across several aspects, i.e., stability, ease of use, and overall processing time [19].
Some authors suggest a hybrid approach built on base classifiers. For example, in a recent work, Dang et al. [20] suggest that a combination of Random Forest and Support Vector Machine, namely the Random Forest Machine, gives more accurate results than each algorithm separately. The study compares the results of the Random Forest Machine with those of Random Forest and Support Vector Machine and reports higher classification accuracy for the new hybrid approach, which seems to be a promising tool for future applications.
In another work, RF, KNN, and SVM were compared in a land use/cover study based on Sentinel-2 Multi-Spectral Instrument (MSI) data [16]. Training datasets of various sizes were tested, representing the six classes of the study area in the Red River Delta of Vietnam. All classifiers showed an overall accuracy of up to 95%, with SVM presenting the highest of all while remaining less sensitive to training data size. The comparison of machine learning methods has recently been further explored in the classification of boreal landscapes with Sentinel-2 data. In particular, the SVM, RF, Xgboost, and deep learning algorithms were applied to a multitemporal image, with the highest accuracy corresponding to the SVM algorithm at a total accuracy of 75.8% [21]. Sentinel-2 data were also used in an object-based classification comparing the AdaBoost, RF, Rotation Forest, and Canonical Correlation Forest (CCF) classifiers [22]. Three different datasets were developed: the first included only the 10 m bands, the second included the bands with 20 m resolution, and the third included the 10 m bands and a pansharpened version of the 20 m bands. According to the results, the Rotation Forest and Canonical Correlation Forest outperformed the other classifiers for all datasets.
Beyond the use of single-date images, multitemporal classification has been extensively applied for more accurate results in land use/cover extraction [23,24,25,26,27,28]. The main reason is the seasonal variance of the vegetation’s spectral reflectance, which changes according to the season and the growing stage of each vegetation type. The limited spectral information of a single image can be compensated by using multiple dates of the same type of images [29]. Kamusoko [30] compared five machine learning methods (KNN, ANN, DT, SVM, and RF) on single-date and multidate Landsat 5 images, concluding that the combination of multidate imagery and the RF method provided the best results. In another study, applied in a highly heterogeneous fragmented area and in a homogeneous mountain area, the combination of maximum likelihood and multidate Sentinel-2 data performed better than SVM; the multidate input dataset was able to distinguish the classes of the highly fragmented area despite the spectral similarities between classes [31]. Thus, multitemporal data are essential for discriminating vegetation types, resulting in higher classification accuracy [32].
Most remote sensing classification studies have relied on a single classifier or a comparison of a number of them [33,34,35]. Since all classifiers perform within an accuracy range, an ensemble approach may show improved accuracy levels and increased reliability in remote sensing image classification [36]. To this end, several methods are reported in the literature to address the issue of how to develop an ensemble classifier that combines the decisions from multiple base classifiers [37,38,39,40,41] and can be used either on hard or soft classifications [42]. Three categories of methods can be identified in the literature [36,43]: (i) algorithms based on training data manipulation, including the well-known “bagging” and “boosting” [44,45] applied to a single base classifier, e.g., SVM or DT [46,47]; (ii) algorithms based on a chain of classifiers that operate in a sequential mode, i.e., the output of a classifier is the input for the next one in the chain; and (iii) algorithms based on parallel processing of the base classifiers and the combination of their outputs. The main method to combine the decisions of the base classifiers is weighted or unweighted voting [48,49], where the weights usually depend on the majority, the estimated probability, and the accuracy metrics of the base classifiers. Shen et al. [1] compared the producer’s accuracy and the overall accuracy as voting weights and concluded that the overall accuracy had stability issues, while the producer’s accuracy performed better in the classification of different land cover types.
This paper aims to apply a number of machine learning approaches, i.e., DT, Linear Discriminant Analysis (DIS), SVM, KNN, RF, and ANN, to classify multitemporal Sentinel-2 images and to assess whether an ensemble of these base classifiers can further enhance the output accuracy. The classification is applied to an insular environment in the Mediterranean coastal region. Even though various studies have been conducted for Mediterranean environments, an ensemble classification of multitemporal Sentinel-2 data has, to the best of our knowledge, not been examined for this type of ecosystem; previous studies focused on specific cover types, e.g., applying machine learning to forested areas [50] or wetlands [51]. Our implementation also differs in that each base classifier uses its own validation dataset rather than a common one, while the final evaluation of the ensemble is compared to the base classifiers by using a common and unseen testing dataset.

2. Materials and Methods

This section presents a detailed description of our study area of Lesvos Island, Greece. A thorough description of the input data and the classification methods is followed by the accuracy metrics. Finally, the ensemble voting methods and the diversity measures are analytically presented.

2.1. Study Area and Data

The island of Lesvos is located in the northeastern Aegean Sea of Greece, covers an area of 1636 km², and has a total shoreline length of 382 km. The island has a variety of geological formations, climatic conditions, and vegetation types (Figure 1). The climate is categorized as “Mediterranean”, with warm and dry summers and mild and moderately rainy winters. The average annual precipitation is 710 mm; the average annual air temperature is 17 °C, with high oscillations between maximum and minimum daily temperatures. The terrain is rather hilly and rough, with the highest peak at 960 m a.s.l. Slopes greater than 20% are dominant, covering almost two-thirds of the island. The soils of Lesvos are widely cultivated, mainly with rain-fed crops such as cereals, vines, and olives.
Due to low productivity, many sites were abandoned 50–60 years ago; after abandonment, these areas were moderately grazed, and the shrub regeneration has occasionally been cleared by illegal burning to improve forage production [52]. The vegetation of these areas, defined on the basis of the dominant species, includes phrygana or garrigue-type shrubs in grasslands, evergreen-sclerophyllous or maquis-type shrubs, pine forests, deciduous oaks, olive groves, and other agricultural lands.
In order to perform the classification, three cloud-free satellite images, acquired by the Sentinel-2A (S2A) and Sentinel-2B (S2B) MSI satellites, were retrieved in JPEG 2000 format from the Copernicus Open Access Hub [53]. The dataset consists of images acquired on 28 April 2018 (S2A), 12 July 2018 (S2B), and 4 November 2018 (S2A); the product type is Level-2A. We selected one image each from spring, summer, and autumn for our multitemporal approach, since, according to previous works, a combination of spring, summer, and autumn images provides the highest classification accuracy and high class separability [27,54]. Level-2A products are radiometrically, atmospherically, and geometrically corrected, providing the bottom of atmosphere (BOA) reflectance in the Universal Transverse Mercator (UTM)/WGS84 projection. We used 10 of the 13 available bands (Table 1), so the final image composition includes 30 bands in total.
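To make the composition step concrete, the following minimal sketch stacks the 10 selected bands of the three acquisition dates into one 30-layer array. It assumes the rasterio and numpy Python packages, hypothetical file names, bands already resampled to a common 10 m grid, and an illustrative band list (the actual selection is given in Table 1).

```python
import numpy as np
import rasterio

# acquisition dates of the three Level-2A scenes (spring, summer, autumn)
dates = ["20180428", "20180712", "20181104"]
# illustrative list of 10 MSI bands; the 60 m atmospheric bands B01, B09,
# and B10 are assumed excluded (see Table 1 for the actual selection)
bands = ["B02", "B03", "B04", "B05", "B06", "B07",
         "B08", "B8A", "B11", "B12"]

layers = []
for date in dates:
    for band in bands:
        # hypothetical file naming; bands assumed resampled to a 10 m grid
        with rasterio.open(f"S2_{date}_{band}_10m.tif") as src:
            layers.append(src.read(1).astype("float32"))

composite = np.stack(layers)  # shape: (30, rows, cols)
```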

2.2. Methodology

The methodology consists of three stages: ground truth data collection; classification with the six base classifiers, including the estimation of accuracy and diversity metrics; and, finally, the application of the nine ensemble voting methods.

2.2.1. Ground Truth Data Collection

As in several other studies [55,56,57], the input dataset was created by visual interpretation of Google Earth’s very high resolution images, along with auxiliary data collected during field trips and a land cover map previously produced on the basis of a Worldview-2 image. A total of 1119 homogeneous polygons were identified and outlined, with a total area of 127.4 km² across the island (Table 2). Within these polygons, random points were created to extract the values from the 30 layers of the image composition. The next step was to randomly split the dataset into training and testing partitions comprising 80% and 20% of the initial cases, respectively. The training dataset was used to train all base classifiers and was further randomly split into a secondary training dataset and a validation dataset comprising 75% and 25% of its cases, respectively.
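The two-stage random split described above can be sketched as follows with scikit-learn; the point matrix X (30 features per point) and the labels y are placeholders for the values extracted from the polygons, and the fixed random seed is an assumption for reproducibility.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# placeholder data: 24,550 points (so that 20% matches the 4910 testing
# cases reported in Section 2.2.3), 30 spectral features, 13 classes
rng = np.random.default_rng(0)
X = rng.random((24550, 30))
y = rng.integers(0, 13, size=24550)

# 80% training / 20% testing split of the initial cases
X_train_full, X_test, y_train_full, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42)

# the training part is further split into 75% secondary training
# and 25% validation
X_train, X_val, y_train, y_val = train_test_split(
    X_train_full, y_train_full, test_size=0.25, random_state=42)
```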
Figure 2 presents the multitemporal spectral responses per class. The data of Figure 2 reveal significant differences in the spectral signatures across the date range, especially in the near infrared (NIR) region, except for pine forest. Furthermore, these data capture the phenological stages of deciduous vegetation, e.g., chestnut trees, and of agricultural lands. The variation of chlorophyll content in vegetation results in a significant variation of reflectance, especially in the infrared bands, and these variations produce a different phenological pattern for each vegetation cover type. According to previous studies, the different phenologies described by these spectral responses are expected to improve the classification accuracy compared to a single-date image, especially in study areas where crops and vegetation are the dominant land cover types [58].

2.2.2. Base Classifiers

On the basis of the literature, six base classifiers were selected for the case study. A widely used non-parametric approach is the decision tree (DT) classifier, characterized also by its intuitive simplicity [59]. Within a DT, the input data are recursively split, based on a set of rules, into smaller and more homogeneous groups forming the branches of the tree down to the end-nodes, which hold the target values; in our case, these end-nodes, forming the leaves, represent the classes. One of the major advantages of DT is that it has no prerequisites about the input data distribution. Moreover, good generalization can be achieved by pruning the DT, i.e., removing some branches or turning them into leaves. Pruning can therefore increase accuracy by avoiding overfitting [11,60].
Another widely used classification approach, based on the optimization of Fisher’s score, is the DIS [61]. DIS approaches, either in their original or in modified forms, have been extensively used in the classification of hyperspectral data [62,63,64]. One of the major drawbacks of DIS in hyperspectral classification is that the problem becomes ill-posed when the number of samples is smaller than the number of bands. In our case, the samples are more than sufficient to avoid this phenomenon during the classification of a 30-dimensional space.
A third method that we applied is the Support Vector Machine (SVM), which aims to find a hyperplane that separates categorical data in a high-dimensional space with the maximum possible margin between the hyperplane and the cases [65]. The cases closest to the hyperplane are called support vectors. However, in most cases the classes are not linearly separable; hence, a slack variable is included, and a kernel function is used to perform a non-linear mapping into the feature space. The most widely used kernels in remote sensing applications are polynomials and radial basis functions (RBF) [11,66].
The fourth approach was a k-Nearest Neighbor (KNN) classifier. This algorithm calculates the distances between an unclassified case and the nearest k training cases and assigns the unclassified case to the majority class among them. The user can choose from a variety of distance metrics; the most widely used, however, is the Euclidean distance, which can be applied either unweighted or weighted [67].
The next applied classification method was the random forest (RF), which expands and enhances the concept of DTs. Multiple DTs are trained, each on its own random sample of the training data, and a majority vote of all the DTs defines the final class of each case. One of the advantages of RF is that it does not make any assumption about the probability distribution of the input data [13]. A more detailed description of the RF algorithm in remote sensing applications can be found in [68,69,70].
Finally, we developed and applied an artificial neural network (ANN) classifier. ANNs are very popular and have been extensively used in pattern recognition and in modeling complex problems. For the last 30 years, ANNs have played a fundamental role in remote sensing land cover classification applications [71,72,73,74], while the current trend in the classification of very high resolution images is convolutional neural networks (CNN) [75,76]. CNNs have proven, in recent years, to be very powerful classifiers in image recognition, object detection, image segmentation, and instance segmentation [77]. However, CNNs are fundamentally based on the spatial-contextual dependencies of the input data, and the majority of them are trained on high resolution RGB images [78,79]. In contrast to patch-based CNNs, pixel-based CNNs have also been developed; however, according to the literature, the common problems of pixel-based classification, i.e., the salt and pepper effect and the boundary fuzziness effect in the classification result, are quite severe in CNN implementations [80]. Other disadvantages of CNNs are their higher processing time and resource requirements. We believe that, for the pixel-based hard classification of the present multispectral and multitemporal approach, ANNs are more suitable, given also the nature of the other base classifiers. ANNs are characterized by their architecture, their training algorithm, and their activation function. The most well-known and efficient type is the Multilayer Perceptron (MLP) with three layers (input, hidden, and output), and ANNs with one hidden layer are able to map any nonlinear function. Various gradient descent learning methods have been proposed for training.
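For orientation, a minimal scikit-learn sketch of the six base classifiers is given below. The settings echo the selections reported in Section 3.1 where a close scikit-learn equivalent exists; the paper’s base models were actually built in Matlab, so this is an approximation, not a reproduction. X_train and y_train are the splits from the sketch in Section 2.2.1.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

base_classifiers = {
    "DT":  DecisionTreeClassifier(max_leaf_nodes=101),   # ~100 maximum splits
    "DIS": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="poly", degree=3),                 # cubic kernel
    # scikit-learn's "distance" weight is 1/d; the paper used 1/d^2
    "KNN": KNeighborsClassifier(n_neighbors=10, weights="distance"),
    "RF":  RandomForestClassifier(n_estimators=30),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
}

for name, clf in base_classifiers.items():
    clf.fit(X_train, y_train)
```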
The evaluation, comparison, and voting during the ensemble of the applied methods were based on the following metrics:
$$\text{Overall accuracy } (OA) = \frac{TP + TN}{TP + FP + TN + FN}$$
$$\text{User's accuracy } (UA) = \frac{TP}{TP + FP}$$
$$\text{Producer's accuracy } (PA) = \frac{TP}{TP + FN}$$
$$kappa = \frac{p_0 - p_e}{1 - p_e}, \quad \text{with}$$
$$p_0 = \frac{TP + TN}{TP + TN + FP + FN} \quad \text{and}$$
$$p_e = \frac{(TP + FN)(TP + FP) + (FP + TN)(FN + TN)}{(TP + TN + FP + FN)^2}$$
$$F1\text{-}score = \frac{2 \times TP}{2 \times TP + FP + FN}$$
$$MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$
where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. The overall accuracy (OA) is a single, very basic summary measure of the probability of a case being correctly classified and is based on the sum of the diagonal elements of the confusion matrix. User’s accuracy (UA) and producer’s accuracy (PA) provide an accuracy performance for each class: UA measures the credibility of the output map, that is, how well the map represents the actual cover types, while PA measures how well the reference data are represented by the map. UA and PA are related to the commission and the omission errors, respectively. The kappa coefficient is a more advanced metric, which compares the observed accuracy against random chance; unlike OA, it also takes the non-diagonal elements into consideration. The F1-score is a rather different measure of accuracy, defined as the harmonic mean of the classification’s precision and recall; by balancing the two, it provides a more realistic measure of performance. Finally, the Matthews correlation coefficient (MCC) is an even more balanced metric that takes all parts of the confusion matrix into account and can handle under-represented classes. Each classifier produced a contingency matrix presenting the classification results on its validation dataset and the corresponding accuracy metrics. It should be noted that, for the calculation of kappa, F1, and MCC for each class, each confusion matrix was converted into multiple binary matrices based on the ‘one-vs-all’ scheme.
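The ‘one-vs-all’ conversion described above can be sketched as follows in Python: each class in turn is treated as the positive class and all others as negative, and the binary kappa, F1, and MCC are computed on the binarized labels.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score, matthews_corrcoef

def per_class_metrics(y_true, y_pred, classes):
    """Per-class kappa, F1, and MCC via 'one-vs-all' binarization."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = {}
    for c in classes:
        t = (y_true == c).astype(int)   # 1 = class of interest, 0 = the rest
        p = (y_pred == c).astype(int)
        scores[c] = {"kappa": cohen_kappa_score(t, p),
                     "F1": f1_score(t, p, zero_division=0),
                     "MCC": matthews_corrcoef(t, p)}
    return scores
```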
Moreover, four different diversity statistics were calculated from the results of the base classifiers on the testing dataset, as depicted by the following 2 × 2 table of the relationship between a pair of classifiers Ci and Ck (Table 3) [81], where N11 is the number of cases correctly classified by both classifiers, N10 the number of cases correctly classified only by Ck, N01 the number of cases correctly classified only by Ci, and N00 the number of cases misclassified by both classifiers. The diversity measures were the Q-statistic, the disagreement measure, the double-fault measure, and the inter-kappa statistic, given by [36,81,82,83]:
$$\text{Inter-kappa measure} = \frac{2\,(N^{11} N^{00} - N^{01} N^{10})}{(N^{11} + N^{10})(N^{01} + N^{00}) + (N^{11} + N^{01})(N^{10} + N^{00})}$$
$$Q_{i,k}\ \text{statistic} = \frac{N^{11} N^{00} - N^{01} N^{10}}{N^{11} N^{00} + N^{01} N^{10}}$$
$$\text{Disagreement measure} = \frac{N^{01} + N^{10}}{N^{11} + N^{10} + N^{01} + N^{00}}$$
$$\text{Double-fault measure} = \frac{N^{00}}{N^{11} + N^{10} + N^{01} + N^{00}}$$
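A minimal sketch of these four pairwise measures, computed from the correctness vectors (1 = correct, 0 = wrong) of two classifiers on the same test cases, could look as follows; the degenerate case of a zero denominator is not handled.

```python
import numpy as np

def diversity_measures(correct_i, correct_k):
    """Q-statistic, disagreement, double-fault, and inter-kappa for Ci, Ck."""
    ci = np.asarray(correct_i, dtype=bool)
    ck = np.asarray(correct_k, dtype=bool)
    n11 = int(np.sum(ci & ck))     # correctly classified by both
    n10 = int(np.sum(~ci & ck))    # correctly classified only by Ck
    n01 = int(np.sum(ci & ~ck))    # correctly classified only by Ci
    n00 = int(np.sum(~ci & ~ck))   # misclassified by both
    n = n11 + n10 + n01 + n00
    q = (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)
    disagreement = (n01 + n10) / n
    double_fault = n00 / n
    inter_kappa = (2 * (n11 * n00 - n01 * n10)
                   / ((n11 + n10) * (n01 + n00)
                      + (n11 + n01) * (n10 + n00)))
    return q, disagreement, double_fault, inter_kappa
```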

2.2.3. Ensemble Voting Methods

After obtaining results from the six classifiers, an ensemble classifier should be constructed in order to classify the 4910 cases of the testing dataset. For each testing case, all the base classifiers provided a prediction, and we tested nine different voting schemes (a minimal sketch of one scheme follows the list):
  • ‘Mode’: This voting method selects the most frequent prediction among the six suggestions. In case of ties, it selects the tied suggestion with the higher sum of kappa.
  • ‘Max kappa’: This voting method selects the suggestion with the greatest kappa.
  • ‘Greater Sum of Kappa’: This voting method selects the suggestion group with the greater sum of kappa. Identical suggestions are grouped, their kappa values are summed, and the sums are compared.
  • ‘Greater Mean Kappa’: This method selects the suggestion group with the greater average kappa. Identical suggestions are grouped and their kappa values are averaged before comparison.
  • ‘Greater Weighted Sum Kappa’: This method calculates a weighted sum of kappa, i.e., the sum of kappa of each suggestion group multiplied by the group’s frequency, and selects the suggestion with the greatest weighted sum.
  • ‘Greater mean F1’: This voting method selects the suggestion group with the greater average F1-score. After grouping the suggestions, the average F1 per group is computed and compared.
  • ‘Greater sum F1’: This voting method selects the suggestion group with the greater sum of F1-scores. After grouping the suggestions, the sum of F1 per group is computed and compared.
  • ‘Greater mean MCC’: This voting method selects the suggestion group with the greater average MCC. After grouping the suggestions, their MCC values are averaged and compared.
  • ‘Greater sum MCC’: This last voting method selects the suggestion group with the greater sum of MCC. After grouping the suggestions, the sum of MCC per group is computed and compared.
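As an illustration, a minimal Python sketch of one reading of the ‘Greater Weighted Sum Kappa’ scheme is given below; the per-class validation kappa values in the example call are made up for illustration only.

```python
from collections import defaultdict

def weighted_sum_kappa_vote(predictions, kappa_per_class):
    """predictions: {classifier: predicted class};
    kappa_per_class: {classifier: {class: validation kappa}}."""
    kappa_sum = defaultdict(float)
    frequency = defaultdict(int)
    for clf, cls in predictions.items():
        kappa_sum[cls] += kappa_per_class[clf][cls]
        frequency[cls] += 1
    # weighted sum = sum of kappa of each suggestion group times its frequency
    return max(kappa_sum, key=lambda c: kappa_sum[c] * frequency[c])

preds = {"DT": "GL", "DIS": "GL", "SVM": "OG",
         "KNN": "OG", "RF": "OG", "ANN": "GL"}
kappas = {c: {"OG": 0.90, "GL": 0.60} for c in preds}
print(weighted_sum_kappa_vote(preds, kappas))   # 'OG': 2.7 * 3 > 1.8 * 3
```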
For all the above voting methods, the kappa, F1, and MCC values are those calculated for each class on each classifier’s validation dataset during the training phase. The final comparison and selection of the best voting method was based on the kappa value. It should be noted that, even though we computed the OA, we did not use it during the ensemble phase: due to the imbalanced input dataset, the overall accuracy is not an adequate performance measure, so we used the kappa, the F1-score, and the MCC instead.
The nine ensemble models were further statistically compared by applying McNemar’s test [84], which has been widely used for comparing classifier performances [85]. All models were compared in pairs, and McNemar’s value was given by:
$$\text{McNemar's value} = \frac{(|n_{01} - n_{10}| - 1)^2}{n_{01} + n_{10}}$$
where $n_{01}$ is the number of samples misclassified only by algorithm A and $n_{10}$ is the number of samples misclassified only by algorithm B. The null hypothesis is that both classification methods have the same error rate. McNemar’s test is based on a $\chi^2$ test with one degree of freedom, for which the critical $\chi^2$ value at a 95% confidence interval and a 5% level of significance is 3.841. If the computed McNemar’s value for a pair is greater than 3.841, the null hypothesis is rejected and, therefore, the two classification methods are significantly different.
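A minimal sketch of this pairwise test, computed from the correctness vectors of two methods on the testing dataset, could look as follows.

```python
import numpy as np

def mcnemar_value(correct_a, correct_b):
    """Continuity-corrected McNemar statistic from correctness vectors."""
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    n01 = int(np.sum(~a & b))   # misclassified only by algorithm A
    n10 = int(np.sum(a & ~b))   # misclassified only by algorithm B
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)

# the null hypothesis of equal error rates is rejected when the returned
# value exceeds the critical chi-squared value of 3.841 (1 d.f., 5% level)
```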
ArcGIS 10.2 [86] was used for the spatial processing and visualization of the data and Matlab 2018a [87] for the base classifications, while the ensemble of the classifiers through the voting methods was carried out in R using the caret, dplyr, and magrittr packages [88,89,90,91]. Figure 3 presents the overall workflow of the current research.

3. Results and Discussion

Each base classifier was carefully designed and trained under different settings. This section presents the training results of the base classifiers and the classification ensemble. A diversity analysis of the base classifiers and a significance test of the voting methods provide a more comprehensive view of the results.

3.1. Base Classifiers Training

For the training of the DT classifier, we tested 4, 10, and 100 as the maximum number of splits; the best results were obtained with 100 maximum splits based on Gini’s diversity index. The SVM classifier was trained with four different kernels: linear, RBF, quadratic, and cubic. The cubic kernel provided the best accuracy and was further analyzed. We are aware that our approach is a multiclass imbalanced problem; we therefore decided to apply a “one-vs-one” instead of a “one-vs-all” coding scheme in order to reduce the effect of the imbalance problem [92,93]. At the same time, the kappa, UA, PA, F1, and MCC provide a better interpretation of accuracy in an imbalanced dataset than the overall accuracy. For the KNN, we tested 1, 10, and 100 nearest neighbors, and the best results were provided with 10 neighbors. During the training, we applied the Euclidean distance both unweighted and weighted, and the overall accuracy increased when we used the inverse square of the distance as the weight for each case. The RF model was tested with 30, 40, 50, 70, and 100 trees; the final model included 30 trees, selected on the basis of overall accuracy. Finally, during the ANN training, we applied different architectures with one hidden layer of 16 to 35 hidden nodes. We also tested two gradient descent learning methods: the Levenberg–Marquardt [94] and the scaled conjugate gradient [95]. Each network was trained 10 times with different random initial weights. The model with the best performance had 16 hidden nodes and was trained with the scaled conjugate gradient for 272 epochs.
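The tuning procedure amounts to a simple validation loop; a minimal sketch for the KNN case is shown below (scikit-learn instead of the Matlab implementation actually used, with X_train, y_train, X_val, and y_val taken from the split sketch in Section 2.2.1).

```python
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier

best_k, best_acc = None, -1.0
for k in (1, 10, 100):
    knn = KNeighborsClassifier(n_neighbors=k, weights="distance")
    knn.fit(X_train, y_train)
    acc = accuracy_score(y_val, knn.predict(X_val))
    if acc > best_acc:
        best_k, best_acc = k, acc
print(best_k, best_acc)   # the paper reports k = 10 as the best setting
```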
The classification accuracy of each classifier was evaluated based on the confusion matrix of the validation dataset, presented in Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6. Table 4 shows the user’s (UA) and producer’s (PA) accuracies per class and the OA for each classifier, while Figure 4 presents the heat map of UA and PA with the colors normalized for each classifier. According to the results, the SVM outperformed all the classifiers in terms of OA and kappa (Figure 5). In most of the land cover classes, SVM presented the lowest omission and commission errors, while DT and DIS had the poorest performance, with kappa values of 0.79 and 0.83, respectively. Aquatic bodies were almost perfectly classified by all classifiers, while the brushwood, built-up, Pinus brutia, and agricultural land classes also showed high accuracy. Figure 6 presents the diversity of UA and PA among all classifiers for each class. The base classifiers had significantly different performances in omission error for the other broadleaves, barren land, and grassland classes and in commission error for the oak forest, Pinus nigra, and other broadleaves classes. It should be noted that the UA of SVM for other broadleaves is an outlier, i.e., its value is more than 1.5 times the interquartile range above the upper quartile.
These results are consistent with previous studies. Shang et al. [96] applied SVM, RF, and AdaBoost to the classification of an Australian eucalyptus forest; according to their results, all three machine-learning algorithms outperformed the results produced by DIS. DT and DIS methods have also shown poor performance in a comparison for the classification of Sentinel-2 data, where RF outperformed, followed by SVM and ANN [97]. However, the diversity of the results in the literature reveals that the applied methods are data-driven and dependent on the classification scheme, the number of training data, and the type of input data, i.e., whether only the bands are taken into account or vegetation indices and other auxiliary data are also used [19].

3.2. Classification Ensemble

During the ensemble phase, we tested the nine voting methods and the base classifiers on the testing dataset. Figure 7 shows the kappa coefficient for the base classifiers and the voting methods applied to the testing dataset. According to the results, the SVM outperforms not only the base classifiers but also all the voting methods. However, all the voting methods based on sums, as well as the method based on the majority of the votes, performed better than all the remaining base classifiers. It is worth mentioning that DT presents the lowest kappa among all classifiers, while DIS has the second-to-last performance.
Table 5 presents the kappa coefficient per class for each classifier on the testing dataset, while Figure 8 presents the corresponding heatmap. It is observed that SVM shows a better performance in almost all classes; the voting methods based on sums and on the majority of the votes performed slightly better only in the built-up class. The confusion matrices of the testing dataset for the best classifier and the best ensemble methods are presented in Table A7, Table A8 and Table A9. According to the results, it is evident that the combination of the classifiers does not always provide a better performance compared to the base classifiers. In a crop classification study in a fragmented arable landscape by Salas et al. [98], the authors concluded that, when no classifier clearly performs better than the others, an ensemble approach can be the best alternative. In our case, SVM, RF, ANN, and KNN show a similar performance; however, the SVM method performs better than all the applied voting methods. Therefore, a diversity study was carried out in order to identify any potentially dissimilar performance between SVM and the other base classifiers.
Table 6 presents the results of the four diversity statistics for all possible pairs of the base classifiers on the testing dataset. According to the inter-kappa measure (Table 6a), all classifiers show a moderate agreement between them, except for SVM, which shows a fair agreement with DT and DIS and a moderate agreement with the rest of the classifiers. The high values of the Q-statistic (Table 6b) and the low values of the disagreement measure (Table 6c) suggest that there is no significant diversity among the classifiers; the same conclusion follows from the double-fault measure (Table 6d). However, from Figure 9 it is evident that the SVM presents a diverse performance, especially based on the double-fault and the inter-kappa measures (Figure 9a,d): SVM’s double-fault measures are tightly grouped and quite low, and the group of SVM’s inter-kappa measures is lower, while the rest of the classifiers have a similar performance. Therefore, combining the classification performance of the base classifiers with the diversity results reveals that a voting method does not provide a better performance when one base classifier has a small but identifiable advantage over the rest of the classifiers.
On the other hand, McNemar’s test of the ensemble methods showed that the voting method based on the greatest kappa is significantly different from the rest of the voting methods (Table 7). More specifically, its $\chi^2$ value exceeds the critical value of 3.841, and thus ‘MaxK’ is statistically significant at a 95% confidence interval for all the pair comparisons. Interestingly, for the rest of the comparisons the null hypothesis cannot be rejected according to McNemar’s test; hence, the differences in accuracy between the ensemble methods are not statistically significant.

4. Conclusions

This work illustrates the potential use of a number of classifiers for identifying land cover types. Land cover type mapping is essential for the land management of Mediterranean ecosystems: long-term human activities, along with geographic and climatic conditions, have created a heterogeneous fragmented ecosystem that changes rapidly [99]. One of the main disturbances of Mediterranean ecosystems is wildfire, and land cover type mapping provides valuable information, i.e., vegetative fuel and socioeconomic inputs, for wildfire risk assessment [100,101]. Furthermore, through remote sensing classification we can detect changes, identify possible land degradation, and empower ecosystem monitoring [102,103].
An ensemble approach with nine voting methods was developed to seek increased accuracy over individual classification algorithms using multitemporal Sentinel-2 data from a mixed Mediterranean ecosystem. Each base classifier was trained with its own dataset in order to create the accuracy metrics that were used within the voting methods. All the base classifiers and the ensemble methods were then applied to an unseen testing dataset. The results show that the combination of multiple classifiers based on the examined voting schemes does not always provide a better performance in land cover classification. The SVM algorithm outperformed all the classifiers and proved to be the most accurate approach, especially for this quite unbalanced dataset.
The diversity measures can explain the outperformance of SVM: the double-fault measure clearly shows that SVM differs significantly from the rest of the classifiers. Therefore, diversity measures should be thoroughly examined before building an ensemble method. The diversity metrics can provide evidence of an overperforming base classifier, in which case an ensemble may not be necessary; conversely, possible underperformances can be identified, leading to the exclusion of some base classifiers. To sum up, our voting methods were influenced by the number of classifiers with a lower performance than SVM. Hence, the accuracy of the ensembles is lower than that of the best base classifier, probably due to the ‘curse of conflict’ problem [104].
Potential further improvements of this methodology include the incorporation of additional base algorithms and more ensemble methods. Moreover, as opposed to pixel-based approaches, ensembles of segmentation approaches could be explored, including traditional segmentation algorithms and CNNs. An interesting potential extension of this work would be the comparison of multiple Mediterranean areas based on the very same ensemble of algorithms. Nevertheless, this work has shown that contemporary computational approaches, along with advanced algorithmic measures, show potential for the land cover classification of unbalanced data.

Author Contributions

Conceptualization, C.V.; methodology, C.V. and D.K.; writing—Original draft preparation, C.V., D.K., and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We thank the four anonymous reviewers for providing constructive comments for improving the overall manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Confusion matrix for the validation dataset during the training phase of the base classifiers for the decision tree (DT) model. [Predicted vs. reference counts for the classes OG, OF, BW, BU, PB, CF, PN, MS, BL, GL, OB, AL, and AB, with per-class UA, PA, and kappa; OA = 0.82, kappa = 0.79.]
Table A2. Confusion matrix for the validation dataset during the training phase of the base classifiers for the discriminant (DIS) model. [Same layout as Table A1; OA = 0.85, kappa = 0.83.]
Table A3. Confusion matrix for the validation dataset during the training phase of the base classifiers for the support vector machine (SVM) model. [Same layout as Table A1; OA = 0.93, kappa = 0.91.]
Table A4. Confusion matrix for the validation dataset during the training phase of the base classifiers for the k-nearest neighbors (KNN) model. [Same layout as Table A1; OA = 0.89, kappa = 0.87.]
Table A5. Confusion matrix for the validation dataset during the training phase of the base classifiers for the random forest (RF) model. [Same layout as Table A1; OA = 0.90, kappa = 0.88.]
Table A6. Confusion matrix for the validation dataset during the training phase of the base classifiers for the artificial neural network (ANN) model. [Same layout as Table A1; OA = 0.89, kappa = 0.87.]
Table A7. Confusion matrix for the test dataset of the SVM model. [Same layout as Table A1; OA = 0.92, kappa = 0.91.]
Table A8. Confusion matrix for the test dataset of the Greater Weighted Sum of Kappa ensemble model. [Same layout as Table A1; OA = 0.91, kappa = 0.89.]
Table A9. Confusion matrix for the test dataset of the Mode ensemble model. [Same layout as Table A1; OA = 0.91, kappa = 0.89.]

References

  1. Shen, H.; Lin, Y.; Tian, Q.; Xu, K.; Jiao, J. A comparison of multiple classifier combinations using different voting-weights for remote sensing image classification. Int. J. Remote Sens. 2018, 39, 3705–3722.
  2. Maulik, U.; Chakraborty, D. Remote Sensing Image Classification: A survey of support-vector-machine-based advanced techniques. IEEE Geosci. Remote Sens. Mag. 2017, 5, 33–52.
  3. Fathizad, H.; Hakimzadeh Ardakani, M.A.; Mehrjardi, R.T.; Sodaiezadeh, H. Evaluating desertification using remote sensing technique and object-oriented classification algorithm in the Iranian central desert. J. Afr. Earth Sci. 2018, 145, 115–130.
  4. Gómez, C.; White, J.C.; Wulder, M.A. Optical remotely sensed time series data for land cover classification: A review. ISPRS J. Photogramm. Remote Sens. 2016, 116, 55–72.
  5. Chen, K.S.; Tzeng, Y.C.; Chen, C.F.; Kao, W.L.; Ni, C.L. Classification of multispectral imagery using dynamic learning neural network. In Proceedings of the IGARSS ’93—IEEE International Geoscience and Remote Sensing Symposium, Tokyo, Japan, 18–21 August 1993; IEEE: Piscataway, NJ, USA, 1994; pp. 896–898.
  6. Lawrence, R. Classification of remotely sensed imagery using stochastic gradient boosting as a refinement of classification tree analysis. Remote Sens. Environ. 2004, 90, 331–336.
  7. Sohn, Y.; Rebello, N.S. Supervised and unsupervised spectral angle classifiers. Photogramm. Eng. Remote Sens. 2002, 68, 1271–1280.
  8. Strahler, A.H. The use of prior probabilities in maximum likelihood classification of remotely sensed data. Remote Sens. Environ. 1980, 10, 135–163.
  9. Foody, G.M.; Mathur, A. A relative evaluation of multiclass image classification by support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1335–1343.
  10. Collins, M.J.; Dymond, C.; Johnson, E.A. Mapping subalpine forest types using networks of nearest neighbour classifiers. Int. J. Remote Sens. 2004, 25, 1701–1721.
  11. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817.
  12. Qian, Y.; Zhou, W.; Yan, J.; Li, W.; Han, L. Comparing Machine Learning Classifiers for Object-Based Land Cover Classification Using Very High Resolution Imagery. Remote Sens. 2014, 7, 153–168.
  13. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced Spectral Classifiers for Hyperspectral Images: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32.
  14. Ballanti, L.; Blesius, L.; Hines, E.; Kruse, B. Tree Species Classification Using Hyperspectral Imagery: A Comparison of Two Classifiers. Remote Sens. 2016, 8, 445.
  15. Seetha, M.; Muralikrishna, I.V.; Deekshatulu, B.L.; Malleswari, B.L.; Hegde, P. Artificial Neural Networks and Other Methods of Image Classification. Theor. Appl. Inf. Technol. 2008, 4, 1039–1053.
  16. Thanh Noi, P.; Kappas, M. Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery. Sensors 2017, 18, 18.
  17. Huang, C.; Davis, L.S.; Townshend, J.R.G. An assessment of support vector machines for land cover classification. Int. J. Remote Sens. 2002, 23, 725–749.
  18. Lapini, A.; Pettinato, S.; Santi, E.; Paloscia, S.; Fontanelli, G.; Garzelli, A. Comparison of Machine Learning Methods Applied to SAR Images for Forest Classification in Mediterranean Areas. Remote Sens. 2020, 12, 369.
  19. Ge, G.; Shi, Z.; Zhu, Y.; Yang, X.; Hao, Y. Land use/cover classification in an arid desert-oasis mosaic landscape of China using remote sensed imagery: Performance assessment of four machine learning algorithms. Glob. Ecol. Conserv. 2020, 22, e00971.
  20. Dang, V.-H.; Hoang, N.-D.; Nguyen, L.-M.-D.; Bui, D.T.; Samui, P. A Novel GIS-Based Random Forest Machine Algorithm for the Spatial Prediction of Shallow Landslide Susceptibility. Forests 2020, 11, 118.
  21. Abdi, A.M. Land cover and land use classification performance of machine learning algorithms in a boreal landscape using Sentinel-2 data. GISci. Remote Sens. 2020, 57, 1–20.
  22. Tonbul, H.; Colkesen, I.; Kavzoglu, T. Classification of poplar trees with object-based ensemble learning algorithms using Sentinel-2A imagery. J. Geod. Sci. 2020, 10, 14–22.
  23. Langley, S.K.; Cheshire, H.M.; Humes, K.S. A comparison of single date and multitemporal satellite image classifications in a semi-arid grassland. J. Arid Environ. 2001, 49, 401–411.
  24. Yuan, F.; Sawaya, K.E.; Loeffelholz, B.C.; Bauer, M.E. Land cover classification and change analysis of the Twin Cities (Minnesota) Metropolitan Area by multitemporal Landsat remote sensing. Remote Sens. Environ. 2005, 98, 317–328.
  25. Hütt, C.; Koppe, W.; Miao, Y.; Bareth, G. Best Accuracy Land Use/Land Cover (LULC) Classification to Derive Crop Types Using Multitemporal, Multisensor, and Multi-Polarization SAR Satellite Images. Remote Sens. 2016, 8, 684.
  26. Eisavi, V.; Homayouni, S.; Yazdi, A.M.; Alimohammadi, A. Land cover mapping based on random forest classification of multitemporal spectral and thermal images. Environ. Monit. Assess. 2015, 187, 291.
  27. Tigges, J.; Lakes, T.; Hostert, P. Urban vegetation classification: Benefits of multitemporal RapidEye satellite data. Remote Sens. Environ. 2013, 136, 66–75.
  28. Alcantara, C.; Kuemmerle, T.; Prishchepov, A.V.; Radeloff, V.C. Mapping abandoned agriculture with multi-temporal MODIS satellite data. Remote Sens. Environ. 2012, 124, 334–347.
  29. Key, T. A Comparison of Multispectral and Multitemporal Information in High Spatial Resolution Imagery for Classification of Individual Tree Species in a Temperate Hardwood Forest. Remote Sens. Environ. 2001, 75, 100–112.
  30. Kamusoko, C. Image Classification. In Remote Sensing Image Classification in R; Springer: Singapore, 2019; pp. 81–153.
  31. Rujoiu-Mare, M.-R.; Olariu, B.; Mihai, B.-A.; Nistor, C.; Săvulescu, I. Land cover classification in Romanian Carpathians and Subcarpathians using multi-date Sentinel-2 remote sensing imagery. Eur. J. Remote Sens. 2017, 50, 496–508.
  32. Sharma, A.; Liu, X.; Yang, X. Land cover classification from multi-temporal, multi-spectral remotely sensed imagery using patch-based recurrent neural networks. Neural Netw. 2018, 105, 346–355.
  33. Pal, M.; Mather, P.M. A comparison of decision tree and backpropagation neural network classifiers for land use classification. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Toronto, ON, Canada, 24–28 June 2002.
  34. Rogan, J.; Franklin, J.; Stow, D.; Miller, J.; Woodcock, C.; Roberts, D. Mapping land-cover modifications over large areas: A comparison of machine learning algorithms. Remote Sens. Environ. 2008, 112, 2272–2283.
  35. Pal, M.; Mather, P.M. Support vector machines for classification in remote sensing. Int. J. Remote Sens. 2005, 26, 1007–1011.
  36. Du, P.; Xia, J.; Zhang, W.; Tan, K.; Liu, Y.; Liu, S. Multiple Classifier System for Remote Sensing Image Classification: A Review. Sensors 2012, 12, 4764–4792.
  37. Briem, G.J.; Benediktsson, J.A.; Sveinsson, J.R. Multiple classifiers applied to multisource remote sensing data. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2291–2299.
  38. Aguilar, R.; Zurita-Milla, R.; Izquierdo-Verdiguier, E.; de By, R.A. A Cloud-Based Multi-Temporal Ensemble Classifier to Map Smallholder Farming Systems. Remote Sens. 2018, 10, 729.
  39. Foody, G.M.; Boyd, D.S.; Sanchez-Hernandez, C. Mapping a specific class with an ensemble of classifiers. Int. J. Remote Sens. 2007, 28, 1733–1746.
  40. Lei, G.; Li, A.; Bian, J.; Yan, H.; Zhang, L.; Zhang, Z.; Nan, X. OIC-MCE: A Practical Land Cover Mapping Approach for Limited Samples Based on Multiple Classifier Ensemble and Iterative Classification. Remote Sens. 2020, 12, 987.
  41. Amani, M.; Salehi, B.; Mahdavi, S.; Brisco, B.; Shehata, M. A Multiple Classifier System to improve mapping complex land covers: A case study of wetland classification using SAR data in Newfoundland, Canada. Int. J. Remote Sens. 2018, 39, 7370–7383.
  42. Doan, H.T.X.; Foody, G.M. Increasing soft classification accuracy through the use of an ensemble of classifiers. Int. J. Remote Sens. 2007, 28, 4609–4623.
  43. Giacinto, G.; Roli, F. Ensembles of Neural Networks for Soft Classification of Remote Sensing Images. In Proceedings of the European Symposium on Intelligent Techniques, European Network for Fuzzy Logic and Uncertainty Modelling in Information Technology, Bari, Italy, 20–21 March 1997; pp. 166–170.
  44. Drucker, H.; Cortes, C.; Jackel, L.D.; LeCun, Y.; Vapnik, V. Boosting and Other Ensemble Methods. Neural Comput. 1994, 6, 1289–1301.
  45. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140.
  46. Pal, M. Ensemble of support vector machines for land cover classification. Int. J. Remote Sens. 2008, 29, 3043–3049.
  47. Pal, M. Ensemble Learning with Decision Tree for Remote Sensing Classification. World Acad. Sci. Eng. Technol. 2007, 36, 258–260.
  48. Battiti, R.; Colla, A.M. Democracy in neural nets: Voting schemes for classification. Neural Netw. 1994, 7, 691–707.
  49. Oza, N.C.; Tumer, K. Classifier ensembles: Select real-world applications. Inf. Fusion 2008, 9, 4–20.
  50. Puletti, N.; Chianucci, F.; Castaldi, C. Use of Sentinel-2 for forest classification in Mediterranean environments. Ann. Silvic. Res. 2018, 42, 32–38.
  51. Chatziantoniou, A.; Psomiadis, E.; Petropoulos, G. Co-Orbital Sentinel 1 and 2 for LULC Mapping with Emphasis on Wetlands in a Mediterranean Setting Based on Machine Learning. Remote Sens. 2017, 9, 1259.
  52. Henderson, M.; Kalabokidis, K.; Marmaras, E.; Konstantinidis, P.; Marangudakis, M. Fire and society: A comparative analysis of wildfire in Greece and the United States. Hum. Ecol. Rev. 2005, 12, 169–182.
  53. ESA Copernicus Open Access Hub. Available online: https://scihub.copernicus.eu/ (accessed on 10 February 2020).
  54. Hill, R.A.; Wilson, A.K.; George, M.; Hinsley, S.A. Mapping tree species in temperate deciduous woodland using time-series multi-spectral data. Appl. Veg. Sci. 2010, 13, 86–99.
  55. Guirado, E.; Tabik, S.; Alcaraz-Segura, D.; Cabello, J.; Herrera, F. Deep-learning Versus OBIA for Scattered Shrub Detection with Google Earth Imagery: Ziziphus lotus as Case Study. Remote Sens. 2017, 9, 1220.
  56. Li, Q.; Qiu, C.; Ma, L.; Schmitt, M.; Zhu, X.X. Mapping the Land Cover of Africa at 10 m Resolution from Multi-Source Remote Sensing Data with Google Earth Engine. Remote Sens. 2020, 12, 602.
  57. Bwangoy, J.-R.B.; Hansen, M.C.; Roy, D.P.; De Grandi, G.; Justice, C.O. Wetland mapping in the Congo Basin using optical and radar remotely sensed data and derived topographical indices. Remote Sens. Environ. 2010, 114, 73–86.
  58. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870.
  59. Friedl, M.A.; Brodley, C.E. Decision tree classification of land cover from remotely sensed data. Remote Sens. Environ. 1997, 61, 399–409.
  60. Almuallim, H. An efficient algorithm for optimal pruning of decision trees. Artif. Intell. 1996, 83, 347–362.
61. Fisher, R.A. The Use of Multiple Measurements in Taxonomic Problems. Ann. Eugen. 1936, 7, 179–188.
62. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images with Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873.
63. Feret, J.-B.; Asner, G.P. Tree Species Discrimination in Tropical Forests Using Airborne Imaging Spectroscopy. IEEE Trans. Geosci. Remote Sens. 2013, 51, 73–84.
64. Clark, M.; Roberts, D.; Clark, D. Hyperspectral discrimination of tropical rain forest tree species at leaf to crown scales. Remote Sens. Environ. 2005, 96, 375–398.
65. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
66. Kavzoglu, T.; Colkesen, I. A kernel functions analysis for support vector machines for land cover classification. Int. J. Appl. Earth Obs. Geoinf. 2009, 11, 352–359.
67. Franco-Lopez, H.; Ek, A.R.; Bauer, M.E. Estimation and mapping of forest stand density, volume, and cover type using the k-nearest neighbors method. Remote Sens. Environ. 2001, 77, 251–274.
68. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random Forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300.
69. Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222.
70. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
71. Mas, J.F.; Flores, J.J. The application of artificial neural networks to the analysis of remotely sensed data. Int. J. Remote Sens. 2008, 29, 617–663.
72. Atkinson, P.M.; Tatnall, A.R.L. Introduction: Neural networks in remote sensing. Int. J. Remote Sens. 1997, 18, 699–709.
73. Yuan, H.; Van Der Wiele, C.; Khorram, S. An Automated Artificial Neural Network System for Land Use/Land Cover Classification from Landsat TM Imagery. Remote Sens. 2009, 1, 243–265.
74. Kavzoglu, T.; Mather, P.M. The use of backpropagating artificial neural networks in land cover classification. Int. J. Remote Sens. 2003, 24, 4907–4938.
75. Längkvist, M.; Kiselev, A.; Alirezaie, M.; Loutfi, A. Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks. Remote Sens. 2016, 8, 329.
76. Pires de Lima, R.; Marfurt, K. Convolutional Neural Network for Remote-Sensing Scene Classification: Transfer Learning Analysis. Remote Sens. 2019, 12, 86.
77. Hoeser, T.; Kuenzer, C. Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review-Part I: Evolution and Recent Trends. Remote Sens. 2020, 12, 1667.
78. Zuo, Z.; Shuai, B.; Wang, G.; Liu, X.; Wang, X.; Wang, B.; Chen, Y. Convolutional recurrent neural networks: Learning spatial dependencies for image representation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 18–26.
79. Zhao, W.; Du, S. Spectral–Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554.
80. Lv, X.; Ming, D.; Chen, Y.; Wang, M. Very high resolution remote sensing image classification with SEEDS-CNN and scale effect analysis for superpixel CNN classification. Int. J. Remote Sens. 2019, 40, 506–531.
81. Kuncheva, L.I.; Whitaker, C.J. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach. Learn. 2003, 51, 181–207.
82. Petrakos, M.; Atli Benediktsson, J.; Kanellopoulos, I. The effect of classifier agreement on the accuracy of the combined classifier in decision level fusion. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2539–2546.
83. Dietterich, T.G. An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization. Mach. Learn. 2000, 40, 139–157.
84. Edwards, A.L. Note on the “correction for continuity” in testing the significance of the difference between correlated proportions. Psychometrika 1948, 13, 185–187.
85. Kavzoglu, T. Object-Oriented Random Forest for High Resolution Land Cover Mapping Using Quickbird-2 Imagery. In Handbook of Neural Computation; Academic Press: London, UK, 2017; ISBN 9780128113196.
86. Environmental Systems Research Institute. ESRI ArcGIS Desktop: Release 10; Environmental Systems Research Institute: Redlands, CA, USA, 2013.
87. The MathWorks Inc. MATLAB; The MathWorks Inc.: Natick, MA, USA, 2018. Available online: https://www.Mathworks.com/Products/Matlab (accessed on 10 February 2020).
88. R Development Core Team. R: A Language and Environment for Statistical Computing; R Development Core Team: Vienna, Austria, 2017.
89. Wickham, H.; Francois, R. The Dplyr Package; R Core Team: Vienna, Austria, 2016.
90. Bache, S.M.; Wickham, H. Package ‘magrittr’—A Forward-Pipe Operator for R. Available online: https://CRAN.R-project.org/package=magrittr (accessed on 10 February 2020).
91. Kuhn, M. Building Predictive Models in R Using the caret Package. J. Stat. Softw. 2008, 28, 1–26.
92. Anthony, G.; Gregg, H.; Tshilidzi, M. Image classification using SVMs: One-against-one vs. one-against-all. In Proceedings of the 28th Asian Conference on Remote Sensing 2007, ACRS 2007, Kuala Lumpur, Malaysia, 12–16 November 2007.
93. Daengduang, S.; Vateekul, P. Enhancing accuracy of multi-label classification by applying one-vs-one support vector machine. In Proceedings of the 2016 13th International Joint Conference on Computer Science and Software Engineering (JCSSE), Khon Kaen, Thailand, 13–15 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–6.
94. Marquardt, D.W. An Algorithm for Least-Squares Estimation of Nonlinear Parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441.
95. Møller, M.F. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 1993, 6, 525–533.
96. Shang, X.; Chisholm, L.A. Classification of Australian Native Forest Species Using Hyperspectral Remote Sensing and Machine-Learning Classification Algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2481–2489.
97. Pirotti, F.; Sunar, F.; Piragnolo, M. Benchmark of machine learning methods for classification of a Sentinel-2 image. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B7, 335–340.
98. Salas, E.A.L.; Subburayalu, S.K.; Slater, B.; Zhao, K.; Bhattacharya, B.; Tripathy, R.; Das, A.; Nigam, R.; Dave, R.; Parekh, P. Mapping crop types in fragmented arable landscapes using AVIRIS-NG imagery and limited field data. Int. J. Image Data Fusion 2020, 11, 33–56.
99. Gauquelin, T.; Michon, G.; Joffre, R.; Duponnois, R.; Génin, D.; Fady, B.; Bou Dagher-Kharrat, M.; Derridj, A.; Slimani, S.; Badri, W.; et al. Mediterranean forests, land use and climate change: A social-ecological perspective. Reg. Environ. Chang. 2018, 18, 623–636.
100. Vasilakos, C.; Kalabokidis, K.; Hatzopoulos, J.; Kallos, G.; Matsinos, Y. Integrating new methods and tools in fire danger rating. Int. J. Wildl. Fire 2007, 16, 306.
101. Vasilakos, C.; Kalabokidis, K.; Hatzopoulos, J.; Matsinos, I. Identifying wildland fire ignition factors through sensitivity analysis of a neural network. Nat. Hazards 2009, 50, 125–143.
102. Bajocco, S.; De Angelis, A.; Perini, L.; Ferrara, A.; Salvati, L. The impact of Land Use/Land Cover Changes on land degradation dynamics: A Mediterranean case study. Environ. Manag. 2012, 49, 980–989.
103. Otero, I.; Marull, J.; Tello, E.; Diana, G.L.; Pons, M.; Coll, F.; Boada, M. Land abandonment, landscape, and biodiversity: Questioning the restorative character of the forest transition in the Mediterranean. Ecol. Soc. 2015.
104. Song, C.; Pons, A.; Yen, K. Sieve: An Ensemble Algorithm Using Global Consensus for Binary Classification. AI 2020, 1, 16.
Figure 1. Lesvos Island in the northeastern Aegean Sea (source of location panels: Esri).
Figure 2. Spectral reflectance per class and per date.
Figure 3. Workflow of the method followed for data classification and ensemble voting.
Figure 4. User accuracy (UA) and producer accuracy (PA) heatmap for the validation dataset during the training phase of the base classifiers.
Figure 5. Kappa coefficient per class for the validation dataset during the training phase of the base classifiers.
Figure 6. Distribution of (a) user’s accuracy and (b) producer’s accuracy per land cover class for the validation dataset.
Figure 7. Kappa coefficients of the base classifiers and the voting methods for the testing datasets.
Figure 8. Kappa coefficients of the base classifiers and the voting methods for the testing datasets.
Figure 9. Distribution of diversity measures for all possible pairs of base classifiers: (a) inter-kappa measure; (b) Q-statistic; (c) disagreement measure; (d) double-fault measure.
Table 1. Spatial and spectral resolution of Sentinel-2.

Band | Central Wavelength (nm) | Bandwidth (nm) | Spatial Resolution (m)
Band 2—Blue | 490 | 65 | 10
Band 3—Green | 560 | 35 | 10
Band 4—Red | 665 | 30 | 10
Band 5—Vegetation red edge | 705 | 15 | 20
Band 6—Vegetation red edge | 740 | 15 | 20
Band 7—Vegetation red edge | 783 | 20 | 20
Band 8—Near infrared | 842 | 115 | 10
Band 8A—Narrow near infrared | 865 | 20 | 20
Band 11—Shortwave infrared | 1610 | 90 | 20
Band 12—Shortwave infrared | 2190 | 180 | 20
Table 2. Number of pixels per class and dataset.

Class | Number of Polygons | Training Samples | Testing Samples | Total Samples
Olive grove | 342 | 3203 | 797 | 4000
Oak forest | 72 | 1309 | 327 | 1636
Brushwood | 73 | 4254 | 1088 | 5342
Built up | 107 | 984 | 257 | 1241
Pinus brutia | 113 | 4257 | 1090 | 5347
Chestnut forest | 31 | 778 | 178 | 956
Pinus nigra | 20 | 349 | 82 | 431
Maquis-type shrubland | 107 | 936 | 226 | 1162
Barren land | 44 | 212 | 61 | 273
Grassland | 127 | 1327 | 312 | 1639
Other broadleaves | 8 | 234 | 50 | 284
Agricultural land | 54 | 729 | 188 | 917
Aquatic bodies | 21 | 1070 | 254 | 1324
Total | 1119 | 19642 | 4910 | 24552
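The per-class counts in Table 2 correspond to a stratified partition of roughly 80% training and 20% testing samples. A minimal R sketch of such a split with the caret package [91] follows; it is illustrative rather than the authors’ exact procedure, and the data frame `samples` with its `class` column is an assumed structure.

```r
# Illustrative stratified 80/20 split with caret; `samples` is an assumed
# data frame whose `class` column holds the 13 land cover labels of Table 2
library(caret)

set.seed(42)  # arbitrary seed, for reproducibility of the sketch only
idx   <- createDataPartition(samples$class, p = 0.8, list = FALSE)
train <- samples[idx, ]   # ~19,642 pixels in this study
test  <- samples[-idx, ]  # ~4910 pixels

table(train$class)        # per-class training counts, cf. Table 2
```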
Table 3. Relational table between a pair of classifiers.

 | Ci Correct (1) | Ci Wrong (0)
Ck correct (1) | N11 | N10
Ck wrong (0) | N01 | N00
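The counts of Table 3 are the building blocks of the pairwise diversity measures reported in Table 6 and Figure 9. The following R sketch computes them following the definitions of Kuncheva and Whitaker [81]; the vector names are illustrative, not the authors’ code.

```r
# Pairwise diversity of classifiers Ci and Ck from the counts of Table 3.
# `ci_ok`, `ck_ok`: logical vectors, TRUE where the respective classifier
# labelled the test pixel correctly (assumed inputs).
pair_diversity <- function(ci_ok, ck_ok) {
  N11 <- sum( ck_ok &  ci_ok)  # both correct
  N10 <- sum( ck_ok & !ci_ok)  # Ck correct, Ci wrong
  N01 <- sum(!ck_ok &  ci_ok)  # Ck wrong, Ci correct
  N00 <- sum(!ck_ok & !ci_ok)  # both wrong
  N   <- N11 + N10 + N01 + N00
  c(inter_kappa  = 2 * (N11 * N00 - N01 * N10) /
      ((N11 + N10) * (N10 + N00) + (N11 + N01) * (N01 + N00)),
    Q            = (N11 * N00 - N01 * N10) / (N11 * N00 + N01 * N10),
    disagreement = (N01 + N10) / N,
    double_fault = N00 / N)
}
```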
Table 4. User accuracy (UA) and producer accuracy (PA) per class 1 for the validation dataset during the training phase of the base classifiers 2.

Class | DT UA | DT PA | DIS UA | DIS PA | SVM UA | SVM PA | KNN UA | KNN PA | RF UA | RF PA | ANN UA | ANN PA
OG | 0.77 | 0.78 | 0.84 | 0.87 | 0.93 | 0.92 | 0.84 | 0.89 | 0.90 | 0.85 | 0.91 | 0.88
OF | 0.54 | 0.61 | 0.52 | 0.72 | 0.79 | 0.85 | 0.77 | 0.71 | 0.72 | 0.81 | 0.71 | 0.72
BW | 0.95 | 0.87 | 0.91 | 0.91 | 0.97 | 0.96 | 0.98 | 0.92 | 0.97 | 0.93 | 0.96 | 0.94
BU | 0.91 | 0.92 | 0.86 | 0.98 | 0.98 | 0.97 | 0.90 | 0.96 | 0.98 | 0.92 | 0.93 | 0.93
PB | 0.97 | 0.89 | 0.99 | 0.88 | 0.98 | 0.98 | 0.98 | 0.94 | 0.98 | 0.95 | 0.97 | 0.93
CF | 0.85 | 0.72 | 0.85 | 0.75 | 0.89 | 0.90 | 0.83 | 0.82 | 0.89 | 0.83 | 0.85 | 0.80
PN | 0.37 | 0.53 | 0.46 | 0.73 | 0.83 | 0.82 | 0.67 | 0.76 | 0.66 | 0.88 | 0.57 | 0.79
MS | 0.51 | 0.66 | 0.65 | 0.67 | 0.79 | 0.75 | 0.67 | 0.70 | 0.72 | 0.73 | 0.62 | 0.70
BL | 0.76 | 0.82 | 0.85 | 0.65 | 0.87 | 0.96 | 0.81 | 0.86 | 0.72 | 0.97 | 0.82 | 0.85
GL | 0.53 | 0.63 | 0.79 | 0.66 | 0.83 | 0.84 | 0.75 | 0.78 | 0.71 | 0.82 | 0.75 | 0.80
OB | 0.22 | 0.52 | 0.19 | 0.46 | 0.64 | 0.76 | 0.21 | 0.92 | 0.34 | 0.91 | 0.30 | 0.53
AL | 0.92 | 0.91 | 0.90 | 0.97 | 0.95 | 0.98 | 0.93 | 0.97 | 0.93 | 0.94 | 0.93 | 0.93
AB | 1 | 1 | 1 | 1 | 1 | 1 | 0.99 | 1 | 1 | 1 | 1 | 1

Overall accuracy: DT 0.82; DIS 0.85; SVM 0.93; KNN 0.89; RF 0.90; ANN 0.89.
1 Bold fonts mark the maximum UA and PA per class. OG: olive grove, OF: oak forest, BW: brushwood, BU: built up, PB: Pinus brutia, CF: chestnut forest, PN: Pinus nigra, MS: maquis-type shrubland, BL: barren land, GL: grassland, OB: other broadleaves, AL: agricultural land, AB: aquatic bodies. 2 DT: decision tree, DIS: discriminant analysis, SVM: support vector machine, KNN: k-nearest neighbors, RF: random forest, ANN: artificial neural network.
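The UA, PA, and overall accuracy values of Table 4 derive directly from each classifier’s confusion matrix. A minimal R sketch (assumed factor vectors `pred` and `ref` with identical levels; not the authors’ code):

```r
# Accuracy metrics from a confusion matrix (assumed inputs `pred`, `ref`)
cm <- table(Predicted = pred, Reference = ref)

UA <- diag(cm) / rowSums(cm)   # user's accuracy: correct / pixels mapped to class
PA <- diag(cm) / colSums(cm)   # producer's accuracy: correct / reference pixels
OA <- sum(diag(cm)) / sum(cm)  # overall accuracy

pe    <- sum(rowSums(cm) * colSums(cm)) / sum(cm)^2  # chance agreement
kappa <- (OA - pe) / (1 - pe)                        # kappa coefficient
```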
Table 5. Kappa coefficient per class 1 of all classifiers 2 for the testing dataset.

Method | OG | OF | BW | BU | PB | CF | PN | MS | BL | GL | OB | AL | AB
DT | 0.72 | 0.61 | 0.88 | 0.90 | 0.93 | 0.81 | 0.40 | 0.59 | 0.77 | 0.54 | 0.32 | 0.91 | 1.00
DIS | 0.84 | 0.59 | 0.88 | 0.91 | 0.91 | 0.75 | 0.51 | 0.64 | 0.75 | 0.62 | 0.05 | 0.93 | 1.00
SVM | 0.90 | 0.83 | 0.94 | 0.95 | 0.97 | 0.87 | 0.83 | 0.80 | 0.87 | 0.78 | 0.68 | 0.99 | 1.00
KNN | 0.84 | 0.74 | 0.91 | 0.89 | 0.95 | 0.82 | 0.72 | 0.70 | 0.78 | 0.71 | 0.35 | 0.98 | 1.00
RF | 0.84 | 0.75 | 0.93 | 0.94 | 0.96 | 0.84 | 0.73 | 0.72 | 0.84 | 0.72 | 0.54 | 0.96 | 1.00
ANN | 0.86 | 0.71 | 0.92 | 0.94 | 0.95 | 0.82 | 0.53 | 0.69 | 0.86 | 0.70 | 0.33 | 0.95 | 1.00
Mode | 0.87 | 0.76 | 0.93 | 0.96 | 0.95 | 0.85 | 0.68 | 0.74 | 0.85 | 0.73 | 0.40 | 0.96 | 1.00
MaxK | 0.82 | 0.74 | 0.91 | 0.94 | 0.90 | 0.80 | 0.42 | 0.69 | 0.75 | 0.58 | 0.00 | 0.93 | 1.00
GSK | 0.87 | 0.76 | 0.93 | 0.96 | 0.95 | 0.84 | 0.68 | 0.74 | 0.85 | 0.73 | 0.30 | 0.96 | 1.00
GMK | 0.79 | 0.64 | 0.90 | 0.93 | 0.90 | 0.75 | 0.42 | 0.68 | 0.72 | 0.58 | 0.04 | 0.93 | 1.00
GWSK | 0.87 | 0.76 | 0.93 | 0.96 | 0.95 | 0.85 | 0.68 | 0.74 | 0.85 | 0.73 | 0.40 | 0.96 | 1.00
GMF1 | 0.78 | 0.63 | 0.89 | 0.89 | 0.90 | 0.75 | 0.40 | 0.68 | 0.72 | 0.54 | 0.04 | 0.93 | 1.00
GSF1 | 0.87 | 0.76 | 0.93 | 0.96 | 0.95 | 0.84 | 0.68 | 0.74 | 0.85 | 0.73 | 0.30 | 0.96 | 1.00
GMMCC | 0.79 | 0.64 | 0.90 | 0.93 | 0.90 | 0.75 | 0.42 | 0.68 | 0.73 | 0.59 | 0.04 | 0.93 | 1.00
GSMCC | 0.87 | 0.76 | 0.93 | 0.96 | 0.95 | 0.84 | 0.68 | 0.74 | 0.85 | 0.73 | 0.35 | 0.96 | 1.00
1 Bold fonts mark the maximum kappa per class. 2 Mode: mode (majority) voting, MaxK: max kappa, GSK: greater sum of kappa, GMK: greater mean kappa, GWSK: greater weighted sum of kappa, GMF1: greater mean F1, GSF1: greater sum of F1, GMMCC: greater mean MCC, GSMCC: greater sum of MCC.
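As one plausible reading of two of the voting schemes in Table 5, the sketch below implements the mode (majority) vote and a kappa-weighted vote in the spirit of GSK; the exact weighting used in the study may differ, and `preds` and `kappas` are assumed structures rather than the authors’ code.

```r
# `preds`:  character matrix, one row per pixel, one column per base classifier
# `kappas`: numeric vector with each base classifier's kappa, same column order

# Mode: simple majority vote across the base classifiers
ens_mode <- apply(preds, 1, function(r) names(which.max(table(r))))

# Kappa-weighted vote (GSK-like): each classifier votes with a weight equal
# to its kappa; the label with the largest summed weight wins
ens_gsk <- apply(preds, 1, function(r) names(which.max(tapply(kappas, r, sum))))
```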
Table 6. Diversity measures for all the possible pairs of base classifiers: (a) inter-kappa measure; (b) Q-statistic; (c) disagreement measure; (d) double-fault measure.

(a) Inter-kappa measure
 | DT | DIS | SVM | KNN | RF
DIS | 0.523 | | | |
SVM | 0.356 | 0.367 | | |
KNN | 0.540 | 0.483 | 0.477 | |
RF | 0.583 | 0.515 | 0.509 | 0.671 |
ANN | 0.511 | 0.602 | 0.493 | 0.532 | 0.574

(b) Q-statistic
 | DT | DIS | SVM | KNN | RF
DIS | 0.895 | | | |
SVM | 0.870 | 0.871 | | |
KNN | 0.930 | 0.896 | 0.924 | |
RF | 0.962 | 0.926 | 0.935 | 0.971 |
ANN | 0.915 | 0.951 | 0.931 | 0.922 | 0.943

(c) Disagreement measure
 | DT | DIS | SVM | KNN | RF
DIS | 0.130 | | | |
SVM | 0.142 | 0.132 | | |
KNN | 0.113 | 0.122 | 0.092 | |
RF | 0.099 | 0.110 | 0.081 | 0.064 |
ANN | 0.120 | 0.094 | 0.089 | 0.096 | 0.083

(d) Double-fault measure
 | DT | DIS | SVM | KNN | RF
DIS | 0.097 | | | |
SVM | 0.052 | 0.050 | | |
KNN | 0.086 | 0.075 | 0.051 | |
RF | 0.087 | 0.074 | 0.050 | 0.078 |
ANN | 0.083 | 0.089 | 0.052 | 0.068 | 0.068
Table 7. McNemar’s test χ² values for pair comparisons of ensemble methods 1.

 | MaxK | GSK | GMK | GWSK | GMF1 | GSF1 | GMMCC | GSMCC
Mode | 13.483 | 0 | 1.028 | 0 | 1.125 | 0 | 1.936 | 0
MaxK | - | 13.483 | 36.860 | 13.483 | 36.423 | 14.095 | 40.830 | 13.483
GSK | | - | 1.028 | 0 | 1.125 | 0 | 1.954 | 0
GMK | | | - | 1.028 | 0 | 0.921 | 3.063 | 1.028
GWSK | | | | - | 1.125 | 0 | 1.954 | 0
GMF1 | | | | | - | 1.028 | 1.895 | 1.125
GSF1 | | | | | | - | 1.787 | 0
GMMCC | | | | | | | - | 1.936
1 Bold fonts mark McNemar’s test χ² values greater than 3.84, the critical value at the 5% significance level with one degree of freedom.
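The χ² values of Table 7 follow from McNemar’s test with the continuity correction of Edwards [84]. A minimal R sketch, assuming logical vectors `a_ok` and `b_ok` (illustrative names) that mark the test pixels each method classified correctly:

```r
# Continuity-corrected McNemar's test between two methods [84]
n01 <- sum( a_ok & !b_ok)  # pixels only method A got right
n10 <- sum(!a_ok &  b_ok)  # pixels only method B got right

chi2 <- (abs(n01 - n10) - 1)^2 / (n01 + n10)
chi2 > 3.84  # TRUE: significant difference at the 5% level (df = 1)

# Base R equivalent (the continuity correction is the default):
mcnemar.test(table(a_ok, b_ok))
```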