Article

Deep Learning-Based Emulation of Radiative Transfer Models for Top-of-Atmosphere BRDF Modelling Using Sentinel-3 OLCI

1 Département de Géomatique Appliquée, Université de Sherbrooke, Sherbrooke, QC J1K 2R1, Canada
2 Canadian Space Agency, Longueuil, QC J3Y 8Y9, Canada
3 RHEA Group, Montreal, QC H4T 2B5, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(3), 835; https://doi.org/10.3390/rs15030835
Submission received: 10 December 2022 / Revised: 28 January 2023 / Accepted: 30 January 2023 / Published: 2 February 2023

Abstract

The Bidirectional Reflectance Distribution Function (BRDF) defines the anisotropy of surface reflectance and plays a fundamental role in many remote sensing applications. This study proposes a new machine learning-based model for characterizing the BRDF. The model integrates the capability of Radiative Transfer Models (RTMs) to generate simulated remote sensing data with the power of deep neural networks to emulate, learn, and approximate the complex behavior of physical RTMs for BRDF modeling. To implement this idea, we used a one-dimensional convolutional neural network (1D-CNN) trained with a dataset simulated using two widely used RTMs: PROSAIL and 6S. The proposed 1D-CNN consists of convolutional, max pooling, and dropout layers that together establish an efficient relationship between the input and output variables of the coupled PROSAIL and 6S, yielding a robust, fast, and accurate BRDF model. We evaluated the performance of the proposed approach using an independent testing dataset. The results indicated that the proposed framework performed well at four simulated Sentinel-3 OLCI bands, Oa04 (blue), Oa06 (green), Oa08 (red), and Oa17 (NIR), with a mean correlation coefficient of around 0.97, an RMSE of around 0.003, and an average relative percentage error under 4%. Furthermore, to assess the performance of the developed network in the real domain, a collection of multi-temporal real OLCI data was used. The results indicated that the proposed framework also performs well in the real domain, with coefficients of determination (R²) of 0.88, 0.76, 0.7527, and 0.7560 for the blue, green, red, and NIR bands, respectively.

1. Introduction

The Bidirectional Reflectance Distribution Function (BRDF) defines the surface reflectance anisotropy as a function of sun-surface-sensor geometry and of surface and sensor properties [1]. The BRDF plays an important role in geometrically normalizing satellite observations to a standard direction, which is useful in various remote sensing applications [2], for estimating surface energy parameters such as albedo [3], for retrieving accurate biophysical and structural surface parameters [4], and for improving land cover classification and segmentation [5]. However, in most remote sensing applications, the surface reflectance is assumed to be Lambertian (isotropic, i.e., the same reflectance in all directions), which introduces uncertainties into the analysis [6]. Theoretically, an accurate description of the BRDF requires a set of high-quality, sufficient, and well-distributed remote sensing observations acquired over a short period of time, together with a reliable model [7].
The literature reports several simple and widely used empirical and semi-empirical models for BRDF modeling at national, regional, and global scales [8]. For example, the Ross-Thick Li-Sparse (RTLS) [9], Rahman-Pinty-Verstraete (RPV) [10], and Walthall [11] models are used for BRDF modeling because of their simplicity and low computational cost. However, semi-empirical models are simplified versions of physically based radiative transfer models (RTMs), derived by imposing prior assumptions on some RTM parameters, and are mainly developed for bottom-of-atmosphere (BOA) BRDF. BOA reflectance requires atmospheric, topographic, and adjacency-effect corrections that may introduce additional sources of uncertainty and error in BRDF modeling [12]. Additionally, early BRDF models were based on observations collected by single satellite sensors such as MODIS [13], MISR [14], or VIIRS [15] over a long time window in order to provide more observations with minimal cloud and cloud shadow. However, long time windows can introduce uncertainties in BRDF estimation due to surface and atmosphere variability. Recent growth of geospatial Cloud Computing Platforms (CCPs) such as Google Earth Engine (GEE) [16], Amazon Web Services (AWS) [17], and Microsoft Planetary Computer (MPC) [18] has facilitated access to multi-sensor, multi-angular, and multi-resolution satellite observations; this can increase the number of observations, shorten the data collection time window, and thus minimize the effect of surface and atmosphere variability. CCPs such as GEE also provide a wealth of information about atmospheric and surface parameters, such as LAI or aerosol optical thickness (AOT), that can be useful for TOA BRDF modeling [19]. Furthermore, to overcome the issues associated with BOA reflectance BRDF modeling, the directional TOA radiance/reflectance can be used directly to model the BRDF. In this way, RTMs that express the propagation and interaction of solar radiation with different media, such as the surface or the atmosphere, can be used to simulate BOA or TOA reflectance [20,21]. For example, PROSAIL is widely used in remote sensing to link directional ground surface (i.e., BOA) reflectance with structural and biophysical properties of the surface [22,23,24]. Moreover, an atmospheric model such as 6S (Second Simulation of a Satellite Signal in the Solar Spectrum, vector version 6SV released in 2005) can describe solar radiation propagation through the atmosphere [25]. Coupling PROSAIL and 6S enables accurate BRDF modeling by simultaneously linking the satellite TOA directional reflectance with atmospheric and surface parameters and the BOA directional reflectance [26]. However, the complexity, high dimensionality, and computational cost of physical RTMs, particularly over large areas with big datasets, restrict their use in operational remote sensing applications, especially for BRDF modeling. To overcome these issues, it has been suggested to approximate physical RTMs by means of machine learning, a technique known as emulation [27]. Machine learning (ML) algorithms such as neural networks have demonstrated a good ability to emulate nonlinear and complex physically based RTMs [28].
The emulation-based approach can train an ML model for different goals, including approximating complex physical models [29] and retrieving or quantifying vegetation biophysical variables [30,31]. For instance, in [32,33,34], Support Vector Machines (SVMs), Random Forests (RFs), and Artificial Neural Networks (ANNs) were trained using simulated data generated by physical RTMs to retrieve surface or atmospheric parameters. In [35], an ANN emulation methodology was developed to simulate the Climate Forecast System (CFS); the results showed that an ANN can accurately emulate climate simulations and seasonal predictions. Furthermore, the emulation approach has been used in different studies to generate synthetic spectral datasets [36] or synthetic scenes [37].
However, traditional ML algorithms such as SVMs, RFs, and shallow ANNs have a simple structure that limits their ability to discover and map the complex, nonlinear relationships between the input and output variables of high-dimensional RTMs. A few recent studies have investigated the potential of more advanced ML algorithms, such as deep neural networks (DNNs), as emulators of physically based models. For example, in [38], a fully connected deep neural network was proposed to emulate the Community Radiative Transfer Model (FCDN_CRTM) using simulations of brightness temperatures (BTs) over ocean surfaces. In another study [39], several deep learning-based networks were trained to emulate the lookup table generated by the RTM of the Multi-Angle Implementation of Atmospheric Correction (MAIAC).
Similar to the aforementioned studies, a new deep learning-based approach can be developed to emulate the physically based BRDF model. Many DNNs, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks, have been developed in various application domains, such as computer vision [40], natural language processing [41], and remote sensing [42]. Among DNNs, CNNs have received particular attention and success, including in remote sensing applications. Two-dimensional (2D) CNNs are helpful for image-based processing tasks, such as image classification [43], object detection [44], image semantic segmentation [45], and multivariable regression [46]. One-dimensional (1D) CNNs, in contrast, are commonly used for 1D signals such as audio, text, and time-series data, and can similarly be used for regression analysis [47]. Since the input and output variables of the physical BRDF model obtained by coupling PROSAIL and 6S can be treated as a 1D regression problem, a 1D-CNN can be trained using a representative BRDF dataset generated by PROSAIL+6S. Once the network training, testing, and validation stages are accomplished on a large dataset in the simulation domain, the network can be transferred to the real domain using transfer learning (TL) [48]. The TL technique is helpful when only a limited amount of data is available for a task in one domain (here, the real domain), while a large volume of data is accessible for a similar task in another domain (here, the simulation domain) [49]. This paper mainly focuses on developing an efficient DNN framework for BRDF characterization in the simulation domain. However, to assess the performance of the proposed network in the real domain, the network was transferred to the real domain and applied to a series of observations collected over two weeks by the Ocean and Land Colour Instrument (OLCI) carried by the Sentinel-3A (S3A) and Sentinel-3B (S3B) satellites. This study therefore begins by simulating a wide range of directional spectral reflectances for a variety of surface, atmospheric, and geometric (illumination and viewing angle) configurations using the coupled PROSAIL+6S model. Second, the generated dataset is used to design, train, and validate a 1D-CNN architecture for BRDF modeling. The third step consists of evaluating the trained 1D-CNN in the simulation domain using an independent dataset, also created by the coupled PROSAIL+6S, based on configurations unknown to the network. In the last step, the proposed network is applied to real data consisting of OLCI observations.
The remainder of the paper is organized as follows: Section 2 presents the theoretical background; Section 3 describes the methodology; Section 4 presents the results and analysis in the simulation and real domains; Section 5 presents the conclusions, and Section 6 outlines future work.

2. Theoretical Background

2.1. PROSAIL

PROSAIL is one of the most widely used standard RTMs in remote sensing; it combines radiative transfer models of the leaf (PROSPECT) and the canopy (SAIL) [24]. PROSAIL defines directional reflectance as a function of leaf properties, canopy parameters, sun-surface-sensor observation geometry, and a soil factor ($\rho_{soil}$) [26]. The leaf parameters include the internal structure parameter of the leaf mesophyll (N), the chlorophyll a+b concentration ($C_{ab}$), the leaf equivalent water thickness ($C_w$), the dry matter content ($C_m$), and the brown pigments ($C_{bp}$). The canopy parameters comprise the leaf area index (LAI), the leaf inclination distribution function (LIDF), and the average leaf inclination angle (ALIA). The geometry parameters include the view zenith angle (VZA), the sun zenith angle (SZA), and the relative azimuth angle between sun and sensor (RAA) [50]. Running PROSAIL in forward mode can simulate directional reflectance and provide training and testing datasets for an ML algorithm [51]. This is particularly useful when satellite sensors cannot provide sufficient observations for BRDF modeling due to limited viewing geometry, cloud contamination, or poor atmospheric conditions [52]. PROSAIL simulates only BOA directional reflectance; an additional step is therefore required to couple PROSAIL with an atmospheric RTM such as the 6S code in order to produce top-of-atmosphere (TOA) directional reflectance and properly represent satellite data.
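As an illustration, the snippet below sketches a single forward PROSAIL run, assuming the open-source prosail Python package (PROSPECT + 4SAIL); the parameter values are arbitrary examples within the ranges used later in Section 3.1, and the argument names follow that package rather than this study's code.

```python
# Minimal sketch of one forward PROSAIL run, assuming the "prosail" Python
# package; returns directional BOA reflectance from 400 to 2500 nm at 1 nm step.
import prosail

rho_boa = prosail.run_prosail(
    n=1.5,        # leaf structure parameter N
    cab=25.0,     # chlorophyll a+b concentration
    car=10.0,     # carotenoids, kept constant in this study
    cbrown=0.0,   # brown pigments C_bp
    cw=0.002,     # equivalent water thickness C_w
    cm=0.002,     # dry matter content C_m
    lai=3.0,      # leaf area index
    lidfa=45.0,   # average leaf inclination angle
    hspot=0.1,    # hot-spot parameter
    tts=30.0,     # sun zenith angle (deg)
    tto=20.0,     # view zenith angle (deg)
    psi=90.0,     # relative azimuth angle (deg)
    psoil=0.5,    # dry/wet soil factor
    rsoil=1.0,    # soil brightness factor
)
print(rho_boa.shape)   # (2101,) reflectance values over 400-2500 nm
```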

2.2. Atmospheric RTM: The 6S Code

The Second Simulation of a Satellite Signal in the Solar Spectrum (6S) was introduced by Vermote et al. [53] to describe the scattering and absorption effects of atmospheric gases and aerosols on solar radiation propagating downward and upward through the atmosphere under cloudless conditions [54]. The multiple interactions of solar radiation with atmospheric layers are described by a method called the Successive Orders of Scattering (SOS). In 2005, a vector version of 6S (6SV) that accounts for radiation polarization was proposed [55]. The inverse mode of 6S can be used to retrieve atmospheric parameters, such as aerosol optical thickness (AOT), columnar water vapor (CWV), and O3 column concentration, or for the atmospheric correction of satellite images [54]. Through the forward application of 6S, it is also possible to simulate TOA directional reflectance for various input conditions and use the results as a training and testing database [56]. To account for the anisotropic effect of the surface in the 6S calculation, PROSAIL can be coupled with 6S to describe the TOA directional reflectance accurately. Figure 1 depicts a schematic overview of the coupling of PROSAIL and 6S and their parameters. As shown in Figure 1, after the surface directional reflectance is calculated by PROSAIL (the combination of PROSPECT and SAIL), the output is used as the ground reflectance, which is one of the inputs required to run 6S.
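A minimal coupling sketch, assuming the Py6S wrapper around a locally installed 6SV executable: the PROSAIL output of the previous sketch is passed to 6S as a Lambertian ground reflectance at a single wavelength. This only illustrates the data flow of Figure 1; the full directional coupling used in this study treats the surface as non-Lambertian.

```python
# Sketch of a single-wavelength TOA simulation with Py6S (6SV must be installed);
# rho_boa comes from the PROSAIL sketch above, sampled at 490 nm (index 490-400).
from Py6S import (SixS, AtmosProfile, AeroProfile, Geometry,
                  GroundReflectance, Wavelength)

s = SixS()
s.atmos_profile = AtmosProfile.PredefinedType(AtmosProfile.USStandard1962)
s.aero_profile = AeroProfile.PredefinedType(AeroProfile.Continental)
s.aot550 = 0.2                                   # aerosol optical thickness
s.geometry = Geometry.User()
s.geometry.solar_z, s.geometry.solar_a = 30, 0   # sun angles (deg)
s.geometry.view_z, s.geometry.view_a = 20, 90    # view angles (deg)
s.ground_reflectance = GroundReflectance.HomogeneousLambertian(
    float(rho_boa[490 - 400]))                   # PROSAIL reflectance at 490 nm
s.wavelength = Wavelength(0.49)                  # roughly the OLCI Oa04 centre (um)
s.run()
print(s.outputs.apparent_reflectance)            # TOA reflectance
```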

2.3. One-Dimensional Convolutional Neural Networks (1D-CNNs)

In recent years, one-dimensional convolutional neural networks (1D-CNNs) have received increasing attention in remote sensing due to their low computational cost, small training dataset requirements, and suitability for applications based on one-dimensional spectral or time-series data from single-pixel imagery or field measurements [57]. In contrast to two-dimensional (2D) CNNs, which are appropriate for image-based applications and apply convolutional filters over two dimensions, 1D-CNNs extract spectral or temporal features by applying filters along a single dimension of the data. Like 2D-CNNs, a 1D-CNN model includes the following five layers: (1) an input layer that receives the input data in tensor format and feeds it to the network; (2) hidden layers consisting of several learnable convolutional filters that convolve with the input data and extract hidden information and high-level features; (3) a pooling layer (if necessary) to reduce the dimensionality of the features derived by the convolutional filters while highlighting the most dominant features and reducing noise; (4) a flattening and fully connected layer that transforms the output of the convolutional layers into a one-dimensional vector and passes it to (5) the output layer, which contains as many neurons as target variables [58]. Such an architecture is used in this work, as shown in the methodology. Additionally, most CNN-based models use a particular activation function, the Rectified Linear Unit (ReLU), which outperforms earlier activation functions such as the sigmoid and tanh. Regularization techniques such as dropout, which randomly zeroes the outputs of a fraction of neurons during training, can be used in a 1D-CNN to significantly mitigate overfitting and improve the generalization accuracy of the network [59].

3. Material and Methodology

The proposed methodology is divided into three phases. The first phase involves simulating a sufficient dataset of directional reflectance values for a variety of sun-sensor-surface and atmospheric configurations using the coupled leaf-canopy (PROSAIL) and atmospheric (6S) RTMs. The second phase consists of designing, training, and validating a 1D-CNN architecture for BRDF estimation using the simulated dataset from the previous phase. The trained and validated network was evaluated using k-fold cross-validation and independently generated testing datasets unknown to the network. In the third phase, the final network was transferred from the simulation domain to the real domain to evaluate its performance on real satellite data. To this end, real OLCI data were collected and sampled, and the network was fine-tuned and retrained to evaluate its performance in the real domain. Figure 2 depicts the flowchart of the proposed methodology in both the simulation and real domains.

3.1. PROSAIL and 6S Parameterization to Generate Synthetic OLCI Data

Before designing the 1D-CNN in the second phase of the study, the training and validation datasets in the simulation domain were generated by running the coupled PROSAIL and 6S models in forward mode [60]. In this study, we considered only the PROSAIL and 6S input parameters with the highest importance according to global or local sensitivity analyses reported in the literature [61]. To ensure generalizable results, the input variables were drawn randomly from uniform distributions spanning the minimum and maximum of each parameter, as listed in Table 1 [44]. Among the geometrical parameters, the VZA ranged between 0° and 60° to cover the viewing geometry of wide-angle remote sensing sensors such as Sentinel-3 OLCI, which has a maximum VZA of 55° [62]. The RAA range was set from −185° to +185°, allowing TOA reflectance anisotropy to be represented at all possible azimuth angles and slightly beyond the physical range (−180° to +180°) to avoid discontinuities at the borders. The SZA was set between 10° and 50°, covering the seasonal variation of the SZA in the study area.
For the canopy structure parameters, the LAI varied between 0.1 and 6, based on the literature [63,64,65]. The hot-spot parameter, related to the ratio of mean leaf size to canopy height, varied between 0.01 (small leaves, tall canopies) and 0.9 (large leaves, short canopies) [66]. The leaf inclination distribution function (LIDF) consists of two components: LIDFa (average leaf angle), which varied from 15° to 60°, and LIDFb (distribution bimodality), which was ignored [51].
Among the leaf parameters, only $C_{ab}$ and the leaf structure parameter (N) were treated as free variables in the simulation. $C_{ab}$ ranged from 3 to 30. N, which is linked to leaf characteristics such as thickness and intercellular structure, varied between 1 and 3, following the literature [67,68]. Due to the close relationship between $C_{ab}$ and $C_{ar}$ (R² = 0.86) reported in [69], $C_{ar}$ was set to a constant value of 10 μg cm⁻². The remaining three leaf parameters were fixed ($C_w$ = 0.002, $C_m$ = 0.002, and $C_{bp}$ = 0) according to values used in other studies [70], given their negligible influence on directional reflectance in the visible and near infrared. The background soil reflectance was assumed to be Lambertian, and a typical dry/wet soil factor of $\rho_{soil}$ = 0.5 was set [71].
To set the atmospheric conditions, only the AOT, the most influential optical atmospheric parameter in the visible and near-infrared (NIR) spectral bands, was varied, between 0.1 and 0.5 [43]. A continental aerosol mixture and a standard predefined atmospheric model (US Standard 1962) were selected. Water vapor was kept at the default value of the 6SV code (1.42 g/cm²).
Therefore, nine critical input variables, consisting of three sun-sensor geometry angles (SZA, VZA, RAA), three canopy properties (LAI, LIDF, Hspot), two leaf traits (N, Cab), and one atmospheric parameter (AOT), were selected for the simulations. According to the findings of [51,55,57], a total of 100,000 samples is sufficient to reach adequate accuracy for canopy variable estimation, and it covers a diverse and representative set of leaf-canopy-atmosphere-geometry configurations for such a BRDF training dataset. Thus, 100,000 input parameter combinations were used to run the coupled PROSAIL+6S in forward mode and simulate the corresponding BRDF. Note that generating these simulations took about 90 h.
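The input table can be built as sketched below, assuming NumPy and the parameter ranges described above (Table 1); each row is then passed to the coupled PROSAIL+6S forward run.

```python
# Hypothetical sketch of drawing the 100,000-sample input table: each of the
# nine variables is sampled from a uniform distribution over its range.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000
ranges = {            # (min, max) per variable, from Section 3.1
    "SZA":  (10, 50),   "VZA":   (0, 60),    "RAA":   (-185, 185),
    "LAI":  (0.1, 6),   "LIDFa": (15, 60),   "Hspot": (0.01, 0.9),
    "N":    (1, 3),     "Cab":   (3, 30),    "AOT":   (0.1, 0.5),
}
samples = {k: rng.uniform(lo, hi, n_samples) for k, (lo, hi) in ranges.items()}
inputs = np.column_stack(list(samples.values()))   # shape (100000, 9)
# Each row is fed to the coupled PROSAIL+6S forward run to obtain the
# corresponding TOA directional reflectance (about 90 h in total).
```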
Since PROSAIL and 6S simulate directional reflectance between 400 nm and 2500 nm at a 1 nm interval, the output must be resampled to a multispectral dataset using the Spectral Response Function (SRF) of the target sensor to match real sensor observations [72]. This study uses the SRF of OLCI, shown in Figure 3, for the resampling process.
The simulated spectral directional reflectance was resampled to the four OLCI bands (Oa04, Oa06, Oa08, and Oa17) using the following equation:

$$\rho_S(i) = \frac{\int_{A_i}^{B_i} w_i(\lambda)\, E_0(\lambda)\, \rho(\lambda)\, d\lambda}{\int_{A_i}^{B_i} w_i(\lambda)\, E_0(\lambda)\, d\lambda},$$

where $\rho_S(i)$ denotes the resampled directional reflectance for spectral band i, $\rho(\lambda)$ is the TOA directional reflectance, $E_0(\lambda)$ is the TOA spectral solar irradiance, $w_i(\lambda)$ is the weight of the OLCI SRF for band i, and $A_i$ and $B_i$ are, respectively, the minimum and maximum wavelengths of band i.
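A direct NumPy sketch of this band convolution, assuming the SRF, the solar irradiance E0, and the simulated TOA reflectance are all sampled on the same 1 nm wavelength grid:

```python
# Sketch of the spectral resampling in the equation above: weighted band
# average of the 1 nm TOA reflectance using the OLCI SRF and E0 as weights.
import numpy as np

def resample_to_band(rho_toa, srf, e0, wavelengths, band_min, band_max):
    """Weighted band average of TOA reflectance over [band_min, band_max] nm."""
    mask = (wavelengths >= band_min) & (wavelengths <= band_max)
    w = srf[mask] * e0[mask]
    return np.sum(w * rho_toa[mask]) / np.sum(w)
```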

3.2. Real Sentinel 3-OLCI Data for Validation and Application

While synthetic OLCI data were used to develop the proposed 1D-CNN in the simulation domain, a collection of real observations acquired by the OLCI sensor was used to verify that the proposed network also performs well in the real domain. OLCI has a spatial resolution of about 300 m at nadir and a swath width of 1270 km. It covers 21 spectral bands with wavelengths ranging from 390 nm to 1040 nm [73]. In this study, as given in Table 2, only four OLCI bands (Oa04, Oa06, Oa08, and Oa17) were used, representing the blue, green, red, and NIR bands, respectively, to cover the visible and NIR spectrum.
The minimum temporal resolution of OLCI at the equator over land and ocean is about 1 and 2 days, respectively, while at high-latitude parts of the Earth, due to the increasing overlap of adjacent swaths, it can reach less than 0.6 days over land and 1 day over the ocean with the two co-planar S3 satellites [73]. Therefore, a fixed ground point can be viewed at many different angles during the orbital repeat cycle, especially at high latitudes, where there is a significant amount of overlap between images acquired from adjacent orbital paths. In addition, OLCI has a push-broom imaging system with a field of view of 68.5° and a maximum VZA of 55°, allowing the acquisition of off-nadir observations, which is essential for BRDF modeling [62].

3.2.1. Collection of OLCI Data for Application in the Real Domain

The proposed method was applied to real OLCI data to evaluate the performance of the developed 1D-CNN in the real domain. To achieve this, a two-week collection of OLCI observations was first assembled over a high-latitude area in Canada. This ensures a significant amount of overlap between images acquired from adjacent orbital paths, so that more observations from different angles can be collected for each pixel. Figure 4 shows the collected OLCI data and the study area, located in the province of Quebec between 46.04° and 47.8°N and 73.8° and 77.16°W, covering part of the Boreal Shield Ecozone.
All OLCI images acquired by both the S3A and S3B satellites over the study area from 1 to 16 July 2019 were downloaded through the Copernicus Open Access Hub Application Programming Interface (API) using the Python library Sentinelsat (https://pypi.org/project/sentinelsat/) (accessed on 9 December 2022), which enables querying and downloading Sentinel data.
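A sketch of this query-and-download step with sentinelsat; the credentials, the GeoJSON file of the study area, and the OLCI Level-1 Full Resolution product-type string are placeholders/assumptions:

```python
# Sketch of querying and downloading the OLCI Level-1 scenes used in this study.
from sentinelsat import SentinelAPI, geojson_to_wkt, read_geojson

api = SentinelAPI("user", "password", "https://apihub.copernicus.eu/apihub")
footprint = geojson_to_wkt(read_geojson("study_area.geojson"))  # Quebec study area
products = api.query(
    footprint,
    date=("20190701", "20190716"),
    platformname="Sentinel-3",
    producttype="OL_1_EFR___",      # OLCI Level-1 Full Resolution (assumed)
)
api.download_all(products)
```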

3.2.2. Sampling of OLCI Data

The collected time series of OLCI data in the real domain includes different land covers, with a large number of pixels recorded from different angles. For efficient processing and, following the simulation-domain experiments, to achieve comprehensive coverage of the viewing and illumination angles, a random sampling method based on the histograms of the variables was developed to choose an optimal subset of observations over vegetation-covered areas, particularly forests and wetlands. This sampled dataset was used for fine-tuning and retraining the pre-designed network in the real domain. Therefore, under the assumption that the surface and atmosphere do not vary over the two weeks, a subset of n pixels was selected from the image domain in both the spatial and temporal dimensions according to the following sampling rules, inspired by [74]:
  • To determine the optimal number of pixels n, the average RMSE between the measured reflectance and the reflectance estimated by the 1D-CNN in the four bands was considered.
  • The Normalized Difference Vegetation Index (NDVI) of the selected pixels had to be greater than 0.5 to target dense vegetation areas.
  • The geometrical parameters (VZA, SZA, and RAA) were each divided into 10 bins through histogram analysis, and for each bin a defined number of pixels was chosen randomly (a minimal sketch of this binning follows the list).
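The snippet below is an illustrative sketch of the third rule, assuming the angle values and the NDVI of the candidate pixels are available as NumPy arrays; the variable names and the per-bin sample count are placeholders.

```python
# Stratified random sampling over 10 histogram bins of one geometry variable,
# restricted to dense-vegetation pixels (NDVI > 0.5).
import numpy as np

def stratified_sample(values, ndvi, n_per_bin, n_bins=10, rng=None):
    """Return indices of randomly chosen dense-vegetation pixels per angle bin."""
    rng = rng or np.random.default_rng(0)
    edges = np.histogram_bin_edges(values, bins=n_bins)
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((values >= lo) & (values < hi) & (ndvi > 0.5))[0]
        if idx.size:
            chosen.append(rng.choice(idx, size=min(n_per_bin, idx.size), replace=False))
    return np.concatenate(chosen)
```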

3.3. 1D-CNN Development

Building a robust, accurate, and well-generalized CNN model is dependent on several factors, including preprocessing the training dataset, building the network architecture, and finally, tuning hyper-parameters such as learning rate, batch size, etc. [75]. The following sections describe these critical factors.

3.3.1. Data Preprocessing for the 1D-CNN

Differences in the units and ranges of the PROSAIL+6S input variables can prevent the network from learning properly from all features, resulting in poor performance and high generalization error. First, scaling all dataset variables to a common range and order of magnitude mitigates these issues and accelerates network training and convergence; the ranges of all input variables (drawn from uniform distributions) were rescaled to [0, 1] using Min-Max normalization [76]. Second, to prevent the network from being influenced by random outliers generated by the coupled PROSAIL+6S, each output variable was analyzed independently to identify and remove outliers before training and validation. Third, instead of feeding the entire dataset to the network in one iteration (one batch), the data were divided into small batches and introduced to the network over several iterations to increase training speed and ensure efficient gradient descent optimization. Additionally, before training, the batched dataset was shuffled to enhance training convergence and improve the network's generalization ability. Finally, the simulated data, a matrix of n (number of samples) by m (number of variables), had to be reshaped to match the 1D-CNN input format, a three-dimensional tensor (batch size, channels, length).
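A minimal sketch of this pipeline, assuming PyTorch is used for the network and that the simulated inputs X (n × 9) and targets y (n × 4) are already loaded as NumPy arrays:

```python
# Min-Max scaling, reshaping to (batch, channels, length), and shuffled batching.
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# Min-Max scaling of every input variable to [0, 1]
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Reshape to the 3D tensor layout expected by a 1D-CNN
X_tensor = torch.from_numpy(X_scaled).float().unsqueeze(1)   # (n, 1, 9)
y_tensor = torch.from_numpy(y).float()                       # (n, 4)

dataset = TensorDataset(X_tensor, y_tensor)
loader = DataLoader(dataset, batch_size=32, shuffle=True)     # shuffled mini-batches
```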

3.3.2. 1D-CNN Architecture Design

Finding the optimal CNN architecture depends strongly on the problem to solve. This study used a trial-and-error approach inspired by previously developed 1D-CNNs [47] to establish an efficient and well-generalized architecture for BRDF characterization. Various 1D-CNN architectures were designed and evaluated with different depths, widths, kernel sizes, and numbers of hidden, convolutional, and pooling layers. Most studies developing CNN-based models propose expanding the network feature space (channels) while decreasing the height and width of the inputs as the network gets deeper [77]. In light of this, the design started with a single convolutional layer as the basic architecture and gradually expanded in depth to capture more complex patterns and features. During the design, we observed that increasing the network depth extracts more features, but the number of model parameters (weights and biases) also increases, making the network harder to train and generalize. The trials also indicated that increasing the number of filters improved the accuracy of the 1D-CNN model but diminished its generalization performance. The final architecture starts by reshaping the input data into a tensor and feeding it to a convolutional layer (Figure 5). The following two convolutional layers contain 32 and 64 filters to derive additional features from the input data. The next two layers are also 1D convolutional layers, with 128 and 256 filters and the same kernel size, stride, and padding values of 2, 1, and 1, respectively (see Table 3 for details). A pooling layer was added to downsample the data while preserving the most prominent features. A flatten layer reshapes the multidimensional output of the previous layers into a one-dimensional array and prepares it for the next two fully connected dense layers, with 512 and 128 neurons, respectively. Finally, the output layer is composed of four neurons corresponding to the four OLCI bands. The non-linear ReLU (Rectified Linear Unit) function, f(x) = max(0, x), where x is the input to a neuron, was used as the activation function in all hidden layers due to its good performance. Figure 5 illustrates the proposed 1D-CNN architecture, and Table 3 provides additional information regarding the layer types and configuration, such as the filters, kernels, stride, and padding sizes; the last column of Table 3 lists the number of learnable parameters for each layer.
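The sketch below approximates this architecture in PyTorch; the filter counts (32, 64, 128, 256), kernel size (2), stride (1), and padding (1) follow the description above, while the dropout rate, the pooling size, and the exact number of convolutional layers are assumptions standing in for the details of Table 3.

```python
# Approximate 1D-CNN for BRDF regression (four OLCI band reflectances).
import torch.nn as nn

class BRDF1DCNN(nn.Module):
    def __init__(self, n_outputs=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=2, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=2, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=2, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=2, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),                 # keep the most prominent features
            nn.Flatten(),
        )
        self.regressor = nn.Sequential(
            nn.LazyLinear(512), nn.ReLU(),   # fully connected layers
            nn.Dropout(0.2),                 # assumed dropout rate
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, n_outputs),       # four OLCI band reflectances
        )

    def forward(self, x):                    # x: (batch, 1, 9)
        return self.regressor(self.features(x))

model = BRDF1DCNN()
```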

3.3.3. 1D-CNN Hyper-Parameter Tuning

The network learning process is controlled by manually tuning a set of hyper-parameters, such as the batch size, the number of epochs, the loss and optimization functions, and the learning rate, which defines the step size used to adjust the network parameters when minimizing the loss function [78].
The network performance was monitored in terms of training time and of training and testing accuracy, using different values of each hyper-parameter in different scenarios. Table 4 provides a summary of the tested and selected hyper-parameters. For example, the learning rate was varied over orders of magnitude (0.01, 0.001, and 0.0001), and the batch size took values of 16, 32, 64, 128, and 256. This hyper-parameter diversity helped to monitor the network performance under different conditions; for instance, if the network was training too slowly or converging too fast, the batch size or the learning rate was changed.
All scenarios were implemented on a Windows PC with Intel Core i7-3820 CPU @ 3.60 GHz and 32 GB RAM. The PC GPU was NVIDIA GeForce GTX 1080.
In addition, the optimization method and loss function have a direct effect on how the network weights and biases are adjusted. Several optimizers, including Stochastic Gradient Descent (SGD) and Adam (adaptive moment estimation), demonstrated good training performance on some occasions, but the network then encountered generalization problems. Ultimately, AdaMax, a variant of Adam based on the infinity norm, demonstrated satisfactory performance in terms of accuracy while mitigating the overfitting and generalization problems observed with the other optimizers tested [79].
Furthermore, the mean squared error (MSE) function was used to calculate the network's loss over 100 epochs. For the real domain, the same hyper-parameters were used, except for the batch size, which was changed from 32 to 16 to fit the smaller real dataset.
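A compact sketch of this training configuration (Adamax optimizer, MSE loss, learning rate 0.001, 100 epochs), reusing the model and the shuffled data loader from the previous sketches; the validation pass is omitted for brevity:

```python
# Training loop with MSE loss and the AdaMax optimizer.
import torch.nn as nn
import torch.optim as optim

criterion = nn.MSELoss()                                  # loss over the 4 bands
optimizer = optim.Adamax(model.parameters(), lr=0.001)    # AdaMax optimizer

for epoch in range(100):                                  # 100 epochs
    for xb, yb in loader:                                 # shuffled mini-batches
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
```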

3.4. Performance Evaluation

The performance of the proposed 1D-CNN was examined independently using a set of testing samples. The root-mean-square error (RMSE) and the correlation coefficient (R), defined by the following equations, were used to evaluate the degree of error and correlation between the N estimated directional reflectance values ($\hat{y}_i$) and the simulated directional reflectance values ($y_i$):
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}{N}},$$

$$R = \frac{N\sum_{i=1}^{N}\hat{y}_i\,y_i - \left(\sum_{i=1}^{N}\hat{y}_i\right)\left(\sum_{i=1}^{N}y_i\right)}{\sqrt{N\sum_{i=1}^{N}\hat{y}_i^{\,2} - \left(\sum_{i=1}^{N}\hat{y}_i\right)^{2}}\,\sqrt{N\sum_{i=1}^{N}y_i^{2} - \left(\sum_{i=1}^{N}y_i\right)^{2}}},$$
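For reference, a direct NumPy transcription of these two metrics, where y_hat and y are assumed to be the estimated and simulated reflectances of one band:

```python
# RMSE and Pearson correlation coefficient as written in the equations above.
import numpy as np

def rmse(y_hat, y):
    return np.sqrt(np.mean((y_hat - y) ** 2))

def pearson_r(y_hat, y):
    n = len(y)
    num = n * np.sum(y_hat * y) - np.sum(y_hat) * np.sum(y)
    den = np.sqrt(n * np.sum(y_hat**2) - np.sum(y_hat)**2) * \
          np.sqrt(n * np.sum(y**2) - np.sum(y)**2)
    return num / den
```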

4. Results

4.1. Network Training and Testing in the Simulation Domain

The entire simulated dataset was divided into training and validation subsets, representing 80% and 20% of the total, respectively. To ensure unbiased splitting, the dataset was randomly shuffled before being divided into training and validation data. The testing dataset was generated by an independent procedure presented in Section 4.2. As discussed in the previous section, different scenarios, involving varying network architectures and hyper-parameter values, were evaluated to build an optimal network for BRDF modeling. As part of these experiments, Table 5 summarizes the network performance in terms of average training and testing accuracy and training time for different values of the two most important hyper-parameters: batch size and learning rate. As can be seen from Table 5, the training accuracy is similar across most experiments, with values of 0.99 or higher. The testing accuracy varies from very low (0.560) to very high (0.987), with more stable performance for a learning rate of 0.001, except for the large batch size of 256, which decreased the testing accuracy; the latter degraded further when the batch size was increased above 256 (results not shown). The training time ranges from approximately 7 min for the large batch size of 256 to more than 80 min for the small batch size of 16. In addition, a high learning rate combined with a small batch size required more time to converge and produced a noisy learning curve and an unstable training procedure (curves not shown). Finally, the highest testing accuracy was obtained for a batch size of 32 and a learning rate of 0.001 (highlighted in green in Table 5) over 100 epochs and 37.92 min of training; this model also has the best generalization ability.
Figure 6 illustrates the variation of the loss function (MSE) and accuracy (R² score) during the training and validation of the proposed 1D-CNN model with a learning rate of α = 0.001 and a batch size of 32. The shape and evolution of these curves provide information about the model performance and, in general, about under- or over-fitting of the network. Both the training and validation loss values decrease sharply and the accuracy curve approaches 1, indicating that the model learned and converged rapidly. The training and validation loss curves decrease quickly until about 40 epochs, then flatten out and continue to decrease gradually. The network did not exhibit under-fitting, since the training loss is not significantly higher than the validation loss. Furthermore, the validation loss was not considerably higher than the training loss, indicating that the proposed 1D-CNN model did not suffer from overfitting; this is a sign of proper hyper-parameter selection and network learning.
Figure 7 displays the scatter plots of the BRDF values estimated by the 1D-CNN against the BRDF values simulated by the PROSAIL+6S RTMs for the four OLCI bands (Oa04, Oa06, Oa08, and Oa17). The least-squares regression lines (red) show strong agreement between the simulated and estimated BRDF values in all bands, with coefficients of determination (R²) above 0.99 and RMSE values of about 0.003.

4.2. Simulation Testing Dataset for Network Evaluation

The performance and generalization ability of the proposed 1D-CNN model were evaluated using independently generated test data hidden from the network during the training and validation processes. To create the testing dataset, the values in Table 6 were selected for the input variables, after ensuring that these values were included in neither the training nor the validation dataset.
The generated directional reflectance test dataset is visualized in polar coordinates to illustrate the anisotropy of the surface reflectance and to ease the interpretation, analysis, and comparison of the simulated and predicted BRDF patterns at any azimuth and zenith angle. Figure 8 and Figure 9 show the polar representations of the simulated (physical RTM) and predicted (1D-CNN) BRDF. Both figures demonstrate the BRDF variation in polar space for the four OLCI bands Oa04, Oa06, Oa08, and Oa17. The polar angle represents the relative azimuth angle (RAA), ranging from −180° to 180° with a 12° interval, while the radius denotes the VZA, ranging from 0° to 50° with a 1.667° interval.
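A minimal matplotlib sketch of this polar representation, assuming raa_deg, vza_deg, and reflectance are flat NumPy arrays of the test configurations:

```python
# Polar BRDF plot: RAA as the polar angle, VZA as the radius, colour = reflectance.
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
sc = ax.scatter(np.deg2rad(raa_deg), vza_deg, c=reflectance, cmap="jet", s=8)
fig.colorbar(sc, label="TOA directional reflectance")
plt.show()
```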
The BRDF pattern predicted by the 1D-CNN model matches the BRDF pattern simulated by the RTM-based model. Indeed, the estimated and simulated BRDF in the four bands vary from low directional reflectance values (dark blue) to high directional reflectance values (vivid red). When the sensor and sun directions coincide, directional reflectance values in the backscattering direction are remarkably higher than in the forward and nadir directions (Figure 8 and Figure 9). This is the most characteristic and prominent feature of the BRDF, known as the hot-spot effect, which appears in the 1D-CNN predictions as a reflectance peak at a view zenith angle of 30°, near the sun position, in all the illustrated polar BRDF plots. These results indicate the strong predictive ability of the 1D-CNN for BRDF characterization.
The directional reflectance values estimated by the 1D-CNN are plotted against the corresponding values simulated by PROSAIL+6S in Figure 10. The results show that the developed 1D-CNN BRDF model approximates PROSAIL+6S with near-perfect performance for the test dataset as well. The coefficient of determination R² between the predicted and simulated reflectance values for the Oa04, Oa06, Oa08, and Oa17 bands is above 0.97, indicating good agreement between the directional reflectance simulated by PROSAIL+6S and that predicted by the 1D-CNN. Table 7 summarizes the R² and RMSE values. As with the validation dataset, the 1D-CNN achieved high accuracy on the testing dataset, confirming the general ability of the proposed 1D-CNN to perform BRDF estimation.

Analysis of Estimation Error

Figure 11 illustrates the polar plots of the Percentage Error (PE) between the reflectance values simulated by PROSAIL+6S and the directional reflectance values estimated by the proposed 1D-CNN model. The red regions indicate higher PE, particularly in the hot-spot direction near the principal plane, where the sun and sensor directions are closest to each other.
Moreover, there was a tendency toward higher error at several relative azimuth angles as the view zenith angle increased. Regarding the polar plots of the PE for the four OLCI bands, the minimum PE obtained was relatively small, equaling 1.72%, 2.12%, and 2.44% for the NIR (Oa17), green (Oa06), and red (Oa08) bands, respectively. In comparison, a larger error of about 3% arose for the blue (Oa04) band. The results indicate that the proposed 1D-CNN model was capable of characterizing BRDF patterns with high accuracy for the majority of azimuth and view zenith angles, except in the hot-spot area, which slightly reduced the model's predictive ability.

4.3. Comparison of the BRDF Shape in the Principal and Cross-Principal Planes

The BRDF polar plots enabled us to compare the directional reflectance pattern at any azimuth and zenith angle, particularly in the Principal Plane (PP) and the Cross-Principal Plane (CPP), which are frequently used directions in BRDF analysis. In this way, we could effectively analyze the performance of the proposed 1D-CNN compared to the RTM-based model. The BRDF pattern as a function of VZA along the PP and CPP for the four OLCI bands appears in Figure 12 and Figure 13. The blue and red curves represent the BRDF pattern estimated by the 1D-CNN and simulated by the physical RTM, respectively. Generally, the simulated and estimated curves have similar shapes, influenced by the hot-spot effect in the PP and bell-shaped in the CPP. The 1D-CNN model matches the simulated BRDF closely, with a slight difference in the hot-spot region and at large VZA. Overall, the results not only demonstrate a high degree of consistency between the simulated and predicted BRDF patterns but also confirm the excellent ability of the developed 1D-CNN model to predict BRDF behavior.

4.4. Application to Real Data

After downloading the required observations, a series of pre-processing steps was applied to build a data cube of high-quality, cloud-free, reflectance-based observations. In the first step, the downloaded OLCI Level-1 Full Resolution (300 m) products were converted from radiometrically calibrated TOA radiances (W m⁻² sr⁻¹ μm⁻¹) to TOA reflectance and projected into the geographic coordinate system (WGS 1984). In the second step, filtering based on the quality flag layer was applied to mask low-quality observations. In addition, all pixels contaminated by cloud and cloud shadow, which affect quantitative analyses in BRDF modeling, were masked out using the quality flags (QF) band associated with the OLCI images. The QF is a 32-bit integer: bits 0 to 20 indicate saturation per band, and the remaining bits carry other quality information. Bit 27 (quality flags bright), related to clouds and cloud shadows, was used to mask the contaminated pixels [80]. Figure 14 shows the collected OLCI data at the Oa17 band, from 1 to 16 July 2019, before and after cloud and cloud shadow masking.
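An illustrative sketch of the bit-27 masking step, assuming the quality flags band and the TOA reflectance of one scene are loaded as NumPy arrays (the variable names are placeholders):

```python
# Mask bright pixels (clouds / cloud shadows here) using bit 27 of the 32-bit
# OLCI quality flags band.
import numpy as np

BRIGHT_BIT = 27
bright_mask = (quality_flags >> BRIGHT_BIT) & 1                  # 1 = contaminated
toa_reflectance = np.where(bright_mask == 1, np.nan, toa_reflectance)
```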

4.5. 1D-CNN Evaluation in the Real Domain

To evaluate the performance of the pre-developed 1D-CNN BRDF model in the real domain, the relationship between the estimated and measured directional reflectance was quantified in terms of R² and RMSE at the four bands Oa04 (blue), Oa06 (green), Oa08 (red), and Oa17 (NIR). The scatterplots and boxplots of the measured versus estimated directional reflectance across the four bands are shown in Figure 15. When comparing the measured and estimated reflectance values at the Oa04 (blue) band, the R² shows strong agreement (R² = 0.87, RMSE = 0.007). At the Oa17 (NIR) band, however, the relationship between the measured and estimated reflectance values is weaker (R² = 0.75, RMSE = 0.007), an agreement around 13% lower than for the blue band. In addition, the agreement at the Oa06 (green; R² = 0.79, RMSE = 0.01) and Oa08 (red; R² = 0.83, RMSE = 0.015) bands is also around 9% lower than for the blue band. This may be due to the lower signal-to-noise ratio of OLCI images in the green, red, and NIR bands compared to the blue band, or to other scaling issues, which make accurate reconstruction of the BRDF more challenging. This finding indicates that the proposed network works better with less noisy data, likely because it was developed in a simulation domain that is not affected by real-world noise.

5. Conclusions

This study presented an emulation-based approach to approximate an RTM-based TOA BRDF model using a 1D-CNN. The RTMs (PROSAIL and 6SV) were used to simulate a large number of input-output pairs of geometry-surface-atmosphere parameters and corresponding TOA BRDF values for training, validating, and testing the 1D-CNN. The developed 1D-CNN reached high accuracy and computational speed. The proposed approach also performed acceptably in the real domain, based on a collection of about two weeks of OLCI observations. However, the proposed method was mainly developed for vegetated areas and does not apply to surfaces such as water, which have a different BRDF pattern. In addition, the model is applicable only within the ranges of inputs defined in the simulation domain and needs retraining if these ranges change for other areas of interest. Finding an appropriate 1D-CNN architecture and the optimal hyper-parameters was based on a trial-and-error approach, which requires time and effort; however, once the network is trained and tested, it works with acceptable accuracy and speed. Furthermore, the coupled surface-atmosphere RTMs used to simulate the TOA BRDF could be further improved by incorporating topographic effects to simulate heterogeneous rugged areas viewed from off-nadir directions. Working at the TOA level helps to integrate multi-sensor datasets (MODIS, VIIRS, and OLCI) in future improvements of the method with minimal pre-processing effort (no atmospheric or topographic corrections required). More importantly, this work showed that an emulation-based approach can be an effective solution for establishing the relationship between remote sensing observations and surface and atmospheric parameters, demonstrating the ability of DL models to emulate the complex environmental and physical relationships encountered in remote sensing applications [47,48]. The deep learning-based approach could be applied to emulate even more complex combinations of physical and/or empirical models encountered in some remote sensing applications (e.g., hydrology, sediment transport, climate change effects, life cycle). Deep learning can establish nonlinear and complex connections between input and output variables with advantageous accuracy and computational cost.

6. Future Works

Our future work will focus particularly on applying the proposed methodology in the real domain (actual EO data) with the multi-source, multi-temporal satellite data available for BRDF modeling through a variety of cloud computing platforms (CCPs). In the real domain, CCPs such as Google Earth Engine (GEE) provide various remote sensing data, ranging from coarse to fine spatial resolution, acquired from multiple sensors. Additional work will investigate how to apply DNN-based emulators to the atmospheric correction of satellite images using atmospheric RTMs such as 6SV, or to the retrieval of vegetation traits from TOA reflectance data (RTM inversion mode). BRDF will be considered in these processes since the model has already been developed. An integrated radiometric correction including atmospheric, topographic, and BRDF effects on satellite acquisitions is also an avenue to explore.

Author Contributions

Conceptualization, Y.B., S.F. and M.B.; methodology, S.O., Y.B. and S.F.; visualization, S.O.; formal analysis, S.O.; writing—original draft, S.O.; writing—review and editing, Y.B., S.F., M.B. and C.S.; supervision, Y.B. and S.F.; funding acquisition, Y.B., S.F., M.B. and C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Mitacs in partnership with Centre de recherche informatique de Montréal (CRIM) and Rhea Group Inc.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the first author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schaepman-Strub, G.; Schaepman, M.E.; Painter, T.H.; Dangel, S.; Martonchik, J.V. Reflectance quantities in optical remote sensing-definitions and case studies. Remote Sens. Environ. 2006, 103, 27–42. [Google Scholar] [CrossRef]
  2. Latifovic, R.; Cihlar, J.; Chen, J. A comparison of BRDF models for the normalization of satellite optical data to a standard sun-target-sensor geometry. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1889–1898. [Google Scholar] [CrossRef]
  3. Luo, Y.; Trishchenko, A.P.; Latifovic, R.; Li, Z. Surface bidirectional reflectance and albedo properties derived using a land cover–based approach with Moderate Resolution Imaging Spectroradiometer observations. J. Geophys. Res. Atmos. 2005, 110, 1–17. [Google Scholar] [CrossRef]
  4. Qi, J.; Cabot, F.; Moran, M.; Dedieu, G. Biophysical parameter estimations using multidirectional spectral measurements. Remote Sens. Environ. 1995, 54, 71–83. [Google Scholar] [CrossRef]
  5. Guan, Y.; Zhou, Y.; He, B.; Liu, X.; Zhang, H.; Feng, S. Improving Land Cover Change Detection and Classification With BRDF Correction and Spatial Feature Extraction Using Landsat Time Series: A Case of Urbanization in Tianjin, China. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4166–4177. [Google Scholar] [CrossRef]
  6. Odongo, V.O. Uncertainty in Reflectance Factors Measured in the Field: Implications for the Use of Ground Targets in Remote Sensing. Master’s Thesis, University of Twente, Enskord, The Netherlands, 2010; p. 103. [Google Scholar]
  7. Zhang, X.; Jiao, Z.; Zhao, C.; Guo, J.; Zhu, Z.; Liu, Z.; Dong, Y.; Yin, S.; Zhang, H.; Cui, L.; et al. Evaluation of BRDF Information Retrieved from Time-Series Multiangle Data of the Himawari-8 AHI. Remote Sens. 2021, 14, 139. [Google Scholar] [CrossRef]
  8. Roberts, G. A review of the application of BRDF models to infer land cover parameters at regional and global scales. Prog. Phys. Geogr. 2016, 25, 483–511. [Google Scholar] [CrossRef]
  9. Roujean, J. A bidirectional reflectance model of the Earth’s surface for the correction of remote sensing data. J. Geophys. Res. Atmos. 1992, 97, 20455–20468. [Google Scholar] [CrossRef]
  10. Biliouris, D.; Van Der Zande, D.; Verstraeten, W.W.; Stuckens, J.; Muys, B.; Dutré, P.; Coppin, P. RPV Model Parameters Based on Hyperspectral Bidirectional Reflectance Measurementsof Fagus sylvatica L. Leaves. Remote Sens. 2009, 1, 92–106. [Google Scholar] [CrossRef]
  11. Walthall, C.L.; Norman, J.M.; Welles, J.M.; Campbell, G.; Blad, B.L. Simple equation to approximate the bidirectional reflectance from vegetative canopies and bare soil surfaces. Appl. Opt. 1985, 24, 383–387. [Google Scholar] [CrossRef]
  12. Laurent, V.C.; Verhoef, W.; Clevers, J.G.; Schaepman, M.E. Estimating forest variables from top-of-atmosphere radiance satellite measurements using coupled radiative transfer models. Remote Sens. Environ. 2011, 115, 1043–1052. [Google Scholar] [CrossRef]
  13. Strahler, A.H.; Muller, J.; Lucht, W.; Schaaf, C.; Tsang, T.; Gao, F.; Li, X.; Lewis, P.; Barnsley, M.J. MODIS BRDF/albedo product: Algorithm theoretical basis document version 5.0. MODIS Doc. 1999, 23, 42–47. [Google Scholar]
  14. Scarino, B.R.; Bedka, K.; Bhatt, R.; Khlopenkov, K.; Doelling, D.R.; Smith, W.L., Jr. A kernel-driven BRDF model to inform satellite-derived visible anvil cloud detection. Atmos. Meas. Tech. 2020, 13, 5491–5511. [Google Scholar] [CrossRef]
  15. Liu, Y.; Wang, Z.; Sun, Q.; Erb, A.M.; Li, Z.; Schaaf, C.B.; Zhang, X.; Román, M.O.; Scott, R.L.; Zhang, Q.; et al. Evaluation of the VIIRS BRDF, Albedo and NBAR products suite and an assessment of continuity with the long term MODIS record. Remote Sens. Environ. 2017, 201, 256–274. [Google Scholar] [CrossRef]
  16. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  17. Ferreira, K.R.; Queiroz, G.R.; Camara, G.; Souza, R.C.M.; Vinhas, L.; Marujo, R.F.B.; Simoes, R.E.O.; Noronha, C.A.F.; Costa, R.W.; Arcanjo, J.S.; et al. Using Remote Sensing Images and Cloud Services on Aws to Improve Land Use and Cover Monitoring. In Proceedings of the 2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS), Santiago, Chile, 22–26 March 2020; pp. 558–562. [Google Scholar] [CrossRef]
  18. Chen, K.; Sun, S.; Li, S.; He, Q. Analysis of water surface area variation of Hanfeng Lake in the Three Gorges Reservoir Area based on Microsoft Planetary Computer. In Proceedings of the 2022 3rd International Conference on Geology, Mapping and Remote Sensing (ICGMRS), Zhoushan, China, 22–24 April 2022; pp. 229–232. [Google Scholar] [CrossRef]
  19. Campos-Taberner, M.; Moreno-Martínez, Á.; García-Haro, F.J.; Camps-Valls, G.; Robinson, N.P.; Kattge, J.; Running, S.W. Global Estimation of Biophysical Variables from Google Earth Engine Platform. Remote Sens. 2018, 10, 1167. [Google Scholar] [CrossRef]
  20. Faurtyot, T. Vegetation water and dry matter contents estimated from top-of-the-atmosphere reflectance data: A simulation study. Remote Sens. Environ. 1997, 61, 34–45. [Google Scholar] [CrossRef]
  21. Guanter, L.; Segl, K.; Kaufmann, H. Simulation of Optical Remote-Sensing Scenes With Application to the EnMAP Hyperspectral Mission. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2340–2351. [Google Scholar] [CrossRef]
  22. Zhang, X.; Jiao, Z.; Dong, Y.; Zhang, H.; Li, Y.; He, D.; Ding, A.; Yin, S.; Cui, L.; Chang, Y. Potential Investigation of Linking PROSAIL with the Ross-Li BRDF Model for Vegetation Characterization. Remote Sens. 2018, 10, 437. [Google Scholar] [CrossRef] [Green Version]
  23. Jacquemoud, S.; Baret, F. PROSPECT: A model of leaf optical properties spectra. Remote Sens. Environ. 1990, 34, 75–91. [Google Scholar] [CrossRef]
  24. Jacquemoud, S.; Verhoef, W.; Baret, F.; Bacour, C.; Zarco-Tejada, P.J.; Asner, G.P.; François, C.; Ustin, S.L. PROSPECT + SAIL models: A review of use for vegetation characterization. Remote Sens. Environ. 2009, 113 (Suppl. S1), S56–S66. [Google Scholar] [CrossRef]
  25. Vermote, E.F.T.D.; Tanré, D.; Deuzé, J.L.; Herman, M.; Morcrette, J.J.; Kotchenova, S.Y. Second simulation of a satellite signal in the solar spectrum-vector (6SV). 6s User Guide Version 2006, 3, 1–55. [Google Scholar]
  26. Estévez, J.; Vicent, J.; Rivera-Caicedo, J.P.; Morcillo-Pallarés, P.; Vuolo, F.; Sabater, N.; Camps-Valls, G.; Moreno, J.; Verrelst, J. Gaussian processes retrieval of LAI from Sentinel-2 top-of-atmosphere radiance data. ISPRS J. Photogramm. Remote Sens. 2020, 167, 289–304. [Google Scholar] [CrossRef] [PubMed]
  27. Verrelst, J.; Sabater, N.; Rivera, J.P.; Muñoz-Marí, J.; Vicent, J.; Camps-Valls, G.; Moreno, J. Emulation of Leaf, Canopy and Atmosphere Radiative Transfer Models for Fast Global Sensitivity Analysis. Remote Sens. 2016, 8, 673. [Google Scholar] [CrossRef]
  28. Danner, M.; Berger, K.; Wocher, M.; Mauser, W.; Hank, T. Efficient RTM-based training of machine learning regression algorithms to quantify biophysical & biochemical traits of agricultural crops. ISPRS J. Photogramm. Remote Sens. 2021, 173, 278–296. [Google Scholar] [CrossRef]
  29. Vicent, J.; Verrelst, J.; Rivera-Caicedo, J.P.; Sabater, N.; Munoz-Mari, J.; Camps-Valls, G.; Moreno, J. Emulation as an Accurate Alternative to Interpolation in Sampling Radiative Transfer Codes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4918–4931. [Google Scholar] [CrossRef]
  30. De Grave, C.; Verrelst, J.; Morcillo-Pallarés, P.; Pipia, L.; Rivera-Caicedo, J.P.; Amin, E.; Belda, S.; Moreno, J. Quantifying vegetation biophysical variables from the Sentinel-3/FLEX tandem mission: Evaluation of the synergy of OLCI and FLORIS data sources. Remote Sens. Environ. 2020, 251, 112101. [Google Scholar] [CrossRef]
  31. Reyes-Muñoz, P.; Pipia, L.; Salinero-Delgado, M.; Belda, S.; Berger, K.; Estévez, J.; Morata, M.; Rivera-Caicedo, J.P.; Verrelst, J. Quantifying Fundamental Vegetation Traits over Europe Using the Sentinel-3 OLCI Catalogue in Google Earth Engine. Remote Sens. 2022, 14, 1347. [Google Scholar] [CrossRef]
  32. Sawut, R.; Li, Y.; Liu, Y.; Kasim, N.; Hasan, U.; Tao, W. Retrieval of betalain contents based on the coupling of radiative transfer model and SVM model. Int. J. Appl. Earth Obs. Geoinf. 2021, 100, 102340. [Google Scholar] [CrossRef]
  33. Jiao, Q.; Sun, Q.; Zhang, B.; Huang, W.; Ye, H.; Zhang, Z.; Zhang, X.; Qian, B. A Random Forest Algorithm for Retrieving Canopy Chlorophyll Content of Wheat and Soybean Trained with PROSAIL Simulations Using Adjusted Average Leaf Angle. Remote Sens. 2022, 14, 98. [Google Scholar] [CrossRef]
  34. Trombetti, M.; Riano, D.; Rubio, M.; Cheng, Y.; Ustin, S. Multi-temporal vegetation canopy water content retrieval and interpretation using artificial neural networks for the continental USA. Remote Sens. Environ. 2008, 112, 203–215. [Google Scholar] [CrossRef]
35. Krasnopolsky, V.M.; Fox-Rabinovitz, M.S.; Hou, Y.T.; Lord, S.J.; Belochitski, A.A. Accurate and Fast Neural Network Emulations of Model Radiation for the NCEP Coupled Climate Forecast System: Climate Simulations and Seasonal Predictions. Mon. Weather Rev. 2010, 138, 1822–1842. [Google Scholar] [CrossRef]
  36. Morata, M.; Siegmann, B.; Perez-Suay, A.; Garcia-Soria, J.L.; Rivera-Caicedo, J.P.; Verrelst, J. Neural Network Emulation of Synthetic Hyperspectral Sentinel-2-Like Imagery With Uncertainty. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 16, 762–772. [Google Scholar] [CrossRef] [PubMed]
  37. Verrelst, J.; Caicedo, J.P.R.; Vicent, J.; Pallarés, P.M.; Moreno, J. Approximating Empirical Surface Reflectance Data through Emulation: Opportunities for Synthetic Scene Generation. Remote Sens. 2019, 11, 157. [Google Scholar] [CrossRef]
  38. Liang, X.; Garrett, K.; Liu, Q.; Maddy, E.S.; Ide, K.; Boukabara, S. A Deep-Learning-Based Microwave Radiative Transfer Emulator for Data Assimilation and Remote Sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8819–8833. [Google Scholar] [CrossRef]
  39. Duffy, K.; Vandal, T.; Wang, W.; Nemani, R.; Ganguly, A.R. A framework for deep learning emulation of numerical models with a case study in satellite remote sensing. arXiv 2019, arXiv:1910.13408. [Google Scholar] [CrossRef]
  40. Sinha, R.K.; Pandey, R.; Pattnaik, R. Deep Learning For Computer Vision Tasks: A review. arXiv 2018, arXiv:1804.03928. [Google Scholar] [CrossRef]
  41. Palaz, D.; Magimai, M.; Collobert, R. Convolutional Neural Networks-based continuous speech recognition using raw speech signal. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, Australia, 19–24 April 2015; pp. 4295–4299. [Google Scholar] [CrossRef]
  42. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  43. Yao, C.; Luo, X.; Zhao, Y.; Zeng, W.; Chen, X. A review on image classification of remote sensing using deep learning. In Proceedings of the 2017 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 13–16 December 2017; Volume 2018, pp. 1947–1955. [Google Scholar] [CrossRef]
  44. Hoeser, T.; Bachofer, F.; Kuenzer, C. Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review—Part II: Applications. Remote Sens. 2020, 12, 3053. [Google Scholar] [CrossRef]
  45. Minaee, S.; Boykov, Y.Y.; Porikli, F.; Plaza, A.J.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3523–3542. [Google Scholar] [CrossRef]
  46. Raj, A.; Shah, N.A.; Tiwari, A.K.; Martini, M.G. Multivariate Regression-Based Convolutional Neural Network Model for Fundus Image Quality Assessment. IEEE Access 2020, 8, 57810–57821. [Google Scholar] [CrossRef]
  47. Kiranyaz, S.; Avci, O.; Abdeljaber, O.; Ince, T.; Gabbouj, M.; Inman, D.J. 1D convolutional neural networks and applications: A survey. Mech. Syst. Signal Process. 2020, 151, 107398. [Google Scholar] [CrossRef]
  48. Alem, A.; Kumar, S. Transfer Learning Models for Land Cover and Land Use Classification in Remote Sensing Image. Appl. Artif. Intell. 2021, 36, 2014192. [Google Scholar] [CrossRef]
  49. Weiss, K.; Khoshgoftaar, T.M.; Wang, D.D. A survey of transfer learning. J. Big Data 2016, 3, 9. [Google Scholar] [CrossRef]
  50. Verhoef, W. Theory of Radiative Transfer Models Applied in Optical Remote Sensing of Vegetation Canopies; Wageningen University and Research: Wageningen, The Netherlands, 1998; ISBN 90-5485-804-4. [Google Scholar]
  51. Berger, K.; Atzberger, C.; Danner, M.; D’Urso, G.; Mauser, W.; Vuolo, F.; Hank, T.; Berger, K.; Atzberger, C.; Danner, M.; et al. Evaluation of the PROSAIL Model Capabilities for Future Hyperspectral Model Environments: A Review Study. Remote Sens. 2018, 10, 85. [Google Scholar] [CrossRef]
  52. Gómez-Dans, J.L.; Lewis, P.E.; Disney, M. Efficient Emulation of Radiative Transfer Codes Using Gaussian Processes and Application to Land Surface Parameter Inferences. Remote Sens. 2016, 8, 119. [Google Scholar] [CrossRef]
53. Vermote, E.F.; Tanré, D.; Deuzé, J.L.; Herman, M.; Morcrette, J.-J. Second Simulation of the Satellite Signal in the Solar Spectrum, 6S: An overview. IEEE Trans. Geosci. Remote Sens. 1997, 35, 675–686. [Google Scholar] [CrossRef]
54. Bouroubi, Y.; Batita, W.; Cavayas, F.; Tremblay, N. Ground Reflectance Retrieval on Horizontal and Inclined Terrains Using the Software Package REFLECT. Remote Sens. 2018, 10, 1638. [Google Scholar] [CrossRef]
  55. Kotchenova, S.Y.; Vermote, E.F. Validation of a vector version of the 6S radiative transfer code for atmospheric correction of satellite data Part II Homogeneous Lambertian and anisotropic surfaces. Appl. Opt. 2007, 46, 4455–4464. [Google Scholar] [CrossRef]
  56. Lee, C.S.; Yeom, J.M.; Lee, H.L.; Kim, J.-J.; Han, K.-S. Sensitivity analysis of 6S-based look-up table for surface reflectance retrieval. Asia-Pac. J. Atmos. Sci. 2015, 51, 91–101. [Google Scholar] [CrossRef]
  57. Coppini, F.; Jiang, Y.; Tabti, S. Predictive Models on 1D Signals in a Small-Data Environment; Research Report; IMB—Institut de Mathématiques de Bordeaux: 2021; hal-03211100. Available online: https://hal.inrae.fr/MATHS-ENTREPRISES/hal-03211100v1 (accessed on 9 December 2022).
  58. Mozaffari, M.H.; Tay, L.-L. A Review of 1D Convolutional Neural Networks toward Unknown Substance Identification in Portable Raman Spectrometer. arXiv 2020, arXiv:2006.10575. [Google Scholar]
59. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  60. Le Maire, G.; François, C.; Soudani, K.; Berveiller, D.; Pontailler, J.-Y.; Bréda, N.; Genet, H.; Davi, H.; Dufrêne, E. Calibration and validation of hyperspectral indices for the estimation of broadleaved forest leaf chlorophyll content, leaf mass per area, leaf area index and leaf canopy biomass. Remote Sens. Environ. 2008, 112, 3846–3864. [Google Scholar] [CrossRef]
  61. Verrelst, J.; Vicent, J.; Rivera-Caicedo, J.P.; Lumbierres, M.; Morcillo-Pallarés, P.; Moreno, J. Global sensitivity analysis of leaf-canopy-atmosphere RTMs: Implications for biophysical variables retrieval from top-of-atmosphere radiance data. Remote Sens. 2019, 11, 1923. [Google Scholar] [CrossRef]
62. Sentinel-3 OLCI Land Handbook. Available online: https://sentinels.copernicus.eu/documents/247904/4598066/Sentinel-3-OLCI-Land-Handbook.pdf/455f8c88-520f-da18-d744-f5cda41d2d91?t=1664349550631 (accessed on 31 January 2023).
  63. You, D.; Wen, J.; Liu, Q.; Zhang, Y.; Tang, Y.; Liu, Q.; Xie, H. The Component-Spectra-Parameterized Angular and Spectral Kernel-Driven Model: A Potential Solution for Global BRDF/Albedo Retrieval From Multisensor Satellite Data. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8674–8688. [Google Scholar] [CrossRef]
  64. Darvishzadeh, R.; Skidmore, A.; Schlerf, M.; Atzberger, C. Inversion of a radiative transfer model for estimating vegetation LAI and chlorophyll in a heterogeneous grassland. Remote Sens. Environ. 2008, 112, 2592–2604. [Google Scholar] [CrossRef]
  65. Berger, K.; Atzberger, C.; Danner, M.; Wocher, M.; Mauser, W.; Hank, T. Model-Based Optimization of Spectral Sampling for the Retrieval of Crop Variables with the PROSAIL Model. Remote Sens. 2018, 10, 2063. [Google Scholar] [CrossRef]
  66. Sinha, S.K.; Padalia, H.; Dasgupta, A.; Verrelst, J.; Rivera, J.P. Estimation of leaf area index using PROSAIL based LUT inversion, MLRA-GPR and empirical models: Case study of tropical deciduous forest plantation, North India. Int. J. Appl. Earth Obs. Geoinf. 2019, 86, 102027. [Google Scholar] [CrossRef]
67. Boren, E.J.; Boschetti, L.; Johnson, D.M. Characterizing the Variability of the Structure Parameter in the PROSPECT Leaf Optical Properties Model. Remote Sens. 2019, 11, 1236. [Google Scholar] [CrossRef]
68. Andrieu, B.; Baret, F.; Jacquemoud, S.; Malthus, T.; Steven, M. Evaluation of an Improved Version of SAIL Model for Simulating Bidirectional Reflectance of Sugar Beet Canopies; Elsevier Science Inc.: Amsterdam, The Netherlands, 1997. [Google Scholar]
  69. Féret, J.-B.; François, C.; Gitelson, A.; Asner, G.P.; Barry, K.M.; Panigada, C.; Richardson, A.D.; Jacquemoud, S. Optimizing spectral indices and chemometric analysis of leaf chemical properties using radiative transfer modeling. Remote Sens. Environ. 2011, 115, 2742–2750. [Google Scholar] [CrossRef]
  70. Sun, J.; Wang, L.; Shi, S.; Li, Z.; Yang, J.; Gong, W.; Wang, S.; Tagesson, T. Leaf pigment retrieval using the PROSAIL model: Influence of uncertainty in prior canopy-structure information. Crop J. 2022, 10, 1251–1263. [Google Scholar] [CrossRef]
  71. de Sá, N.C.; Baratchi, M.; Hauser, L.; van Bodegom, P. Exploring the Impact of Noise on Hybrid Inversion of PROSAIL RTM on Sentinel-2 Data. Remote Sens. 2021, 13, 648. [Google Scholar] [CrossRef]
  72. Wang, W.; Ma, Y.; Meng, X.; Sun, L.; Jia, C.; Jin, S.; Li, H. Retrieval of the Leaf Area Index from MODIS Top-of-Atmosphere Reflectance Data Using a Neural Network Supported by Simulation Data. Remote Sens. 2022, 14, 2456. [Google Scholar] [CrossRef]
  73. Seitz, B.; Mavrocordatos, C.; Rebhan, H.; Nieke, J.; Klein, U.; Borde, F.; Berruti, B. The sentinel-3 mission overview. In Proceedings of the 2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 4208–4211. [Google Scholar]
  74. Jia, W.; Pang, Y.; Tortini, R.; Schläpfer, D.; Li, Z.; Roujean, J.-L. A Kernel-Driven BRDF Approach to Correct Airborne Hyperspectral Imagery over Forested Areas with Rugged Topography. Remote Sens. 2020, 12, 432. [Google Scholar] [CrossRef]
75. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of Deep Learning: Concepts, CNN Architectures, Challenges, Applications, Future Directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
  76. Han, J.; Kamber, M.; Pei, J. 3—Data Preprocessing. In Data Mining (Third Edition), 3rd ed.; Han, J., Kamber, M., Pei, J., Eds.; Morgan Kaufmann: Boston, MA, USA, 2012; pp. 83–124. [Google Scholar] [CrossRef]
  77. Padarian, J.; Minasny, B.; McBratney, A. Using deep learning to predict soil properties from regional spectral data. Geoderma Reg. 2018, 16, e00198. [Google Scholar] [CrossRef]
  78. Bhatt, D.; Patel, C.; Talsania, H.; Patel, J.; Vaghela, R.; Pandya, S.; Modi, K.; Ghayvat, H. CNN Variants for Computer Vision: History, Architecture, Application, Challenges and Future Scope. Electronics 2021, 10, 2470. [Google Scholar] [CrossRef]
79. Yi, D.; Ahn, J.; Ji, S. An Effective Optimization Method for Machine Learning Based on ADAM. Appl. Sci. 2020, 10, 1073. [Google Scholar] [CrossRef]
  80. Prikaziuk, E.; Yang, P.; van der Tol, C. Google Earth Engine Sentinel-3 OLCI Level-1 Dataset Deviates from the Original Data: Causes and Consequences. Remote Sens. 2021, 13, 1098. [Google Scholar] [CrossRef]
Figure 1. Schematic view of coupled PROSAIL and 6S [26].
Figure 2. Flowchart of the proposed methodology in the simulation and real domains.
Figure 3. Mean relative spectral response of the OLCI.
Figure 4. Study area and collection of OLCI data.
Figure 5. Illustration of the proposed 1D-CNN architecture.
Figure 6. Training and validation loss (a) and accuracy (R² score) (b) of the proposed 1D-CNN model versus the number of epochs.
Figure 7. Scatterplots of simulated versus estimated BRDF values for the validation dataset at four OLCI bands: Oa04, Oa06, Oa08, and Oa17. The red line is the least-squares regression line.
Figure 8. Polar plots of the BRDF simulated by the RTM-based model at four OLCI bands: Oa04, Oa06, Oa08, and Oa17. The solar zenith angle is fixed at 30°.
Figure 9. Polar plots of the BRDF estimated by the proposed deep learning-based model at four OLCI bands: Oa04, Oa06, Oa08, and Oa17. The solar zenith angle is fixed at 30°.
Figure 10. Scatterplots of simulated versus predicted directional reflectance for the testing dataset at four OLCI bands: Oa04, Oa06, Oa08, and Oa17. The red line is the least-squares regression line.
Figure 11. Polar plots of the percentage error (PE) between simulated and predicted BRDF at four OLCI bands: Oa04, Oa06, Oa08, and Oa17.
Figure 12. Principal plane comparison of simulated and estimated BRDF patterns at four bands of the OLCI: Oa04, Oa06, Oa08, and Oa17.
Figure 13. Cross principal plane comparison of simulated and estimated BRDF patterns at four bands of the OLCI: Oa04, Oa06, Oa08, and Oa17.
Figure 14. Illustration of the OLCI observations acquired from 1 to 15 July 2019 in the Oa17 band, in units of radiance: (a) before cloud masking; (b) after cloud masking.
Figure 15. Scatterplots and boxplots of measured versus estimated directional reflectance using the transferred 1D-CNN at four bands: Oa04 (blue), Oa06 (green), Oa08 (red), and Oa17 (NIR). The red line is the least-squares regression line.
Table 1. Description and ranges of the key input parameters of the coupled PROSAIL+6S model used to simulate the BRDF.

Parameters | Symbol | Description | Unit | Range | Distribution
Sun–sensor geometry | SZA | Solar zenith angle | deg | 10–50 | Uniform
Sun–sensor geometry | VZA | Viewing zenith angle | deg | 0–60 | Uniform
Sun–sensor geometry | RAA | Relative azimuth angle | deg | −185–185 | Uniform
Leaf properties | Cab | Chlorophyll a, b content | µg cm⁻² | 3–30 | Uniform
Leaf properties | N | Leaf structure parameter | unitless | 1–3 | Uniform
Leaf properties | Car | Leaf carotenoid content | µg cm⁻² | 10 | Fixed value
Leaf properties | Cw | Equivalent water thickness | cm | 0.002 | Fixed value
Leaf properties | Cm | Dry matter content | g cm⁻² | 0.002 | Fixed value
Leaf properties | Cbp | Brown pigments | unitless | 0 | Fixed value
Canopy architecture | LAI | Leaf area index | m² m⁻² | 0.1–6 | Uniform
Canopy architecture | LIDFa | Average leaf slope | deg | 15–60 | Uniform
Canopy architecture | LIDFb | Leaf inclination distribution | deg | 0 | Fixed value
Canopy architecture | Hspot | Hot spot parameter | unitless | 0.01–0.9 | Uniform
Canopy architecture | ρsoil | Dry/wet soil factor | unitless | 0.5 | Fixed value
Atmospheric | AOT | Aerosol optical thickness | unitless | 0.1–0.5 | Uniform
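To make the simulation setup concrete, the sketch below shows one way the free parameters of Table 1 could be sampled to build a training set. The dataset size, random seed, and variable names are illustrative assumptions and are not taken from the authors' code; the coupled PROSAIL+6S runs that would turn each row into a TOA reflectance are omitted.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_samples = 100_000  # assumed dataset size, not reported in Table 1

# Free parameters of the coupled PROSAIL+6S simulation, drawn uniformly
# within the ranges of Table 1 (fixed parameters keep their listed values).
samples = {
    "SZA":   rng.uniform(10, 50, n_samples),     # solar zenith angle (deg)
    "VZA":   rng.uniform(0, 60, n_samples),      # viewing zenith angle (deg)
    "RAA":   rng.uniform(-185, 185, n_samples),  # relative azimuth angle (deg)
    "Cab":   rng.uniform(3, 30, n_samples),      # chlorophyll a, b content (µg cm-2)
    "N":     rng.uniform(1, 3, n_samples),       # leaf structure parameter
    "LAI":   rng.uniform(0.1, 6, n_samples),     # leaf area index (m2 m-2)
    "LIDFa": rng.uniform(15, 60, n_samples),     # average leaf slope (deg)
    "Hspot": rng.uniform(0.01, 0.9, n_samples),  # hot spot parameter
    "AOT":   rng.uniform(0.1, 0.5, n_samples),   # aerosol optical thickness
}
fixed = {"Car": 10, "Cw": 0.002, "Cm": 0.002, "Cbp": 0, "LIDFb": 0, "rho_soil": 0.5}

# Stack the free parameters into a design matrix: each row is one
# PROSAIL+6S run whose simulated TOA reflectance becomes a training target.
X = np.column_stack(list(samples.values()))
print(X.shape)  # (100000, 9)
```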
Table 2. The selected spectral bands of OLCI for this study.

Spectral Band | Spectral Range (nm) | Center (nm) | Width (nm) | Spatial Resolution (m)
Oa04 | 438–448 | 490 | 10 | 300
Oa06 | 555–565 | 560 | 10 | 300
Oa08 | 660–670 | 665 | 10 | 300
Oa17 | 856–876 | 865 | 20 | 300
Table 3. Details of the proposed 1D-CNN architecture.

Layer Type | Layer Configuration (Filter Size / Kernel / Stride / Pad) | Output Size (Channel × Length) | Learnable Parameters
Reshaped input layer | – | 3 × 3 | 0
Conv-1D | 32 / 2 / 1 / 1 | 32 × 4 | (2 × 1 × 3 + 1) × 32 = 224
Conv-1D | 64 / 2 / 1 / 1 | 64 × 3 | (2 × 1 × 32 + 1) × 64 = 4160
Conv-1D | 128 / 2 / 1 / 1 | 128 × 4 | (2 × 1 × 64 + 1) × 128 = 16,512
Conv-1D | 256 / 2 / 1 / 1 | 256 × 5 | (2 × 1 × 128 + 1) × 256 = 65,792
Max-pooling | – / 2 / 2 / 0 | 256 × 2 | 0
Flatten | – | 512 × 1 | 0
Dense | 512 neurons | – | 512 × 128 + 128 = 65,664
Dense | 128 neurons | – | –
Output layer | 4 neurons | – | 0
Total parameters | – | – | 152,352
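A minimal PyTorch sketch of a network consistent with Table 3 is given below, assuming the nine input variables are reshaped to 3 channels of length 3. The activation functions, dropout placement, and the lazily initialized first dense layer are illustrative assumptions, so the intermediate feature lengths and total parameter count may differ slightly from the table.

```python
import torch
import torch.nn as nn

class BRDFEmulator1DCNN(nn.Module):
    """Sketch of a 1D-CNN emulator loosely following Table 3 (illustrative only)."""
    def __init__(self, n_outputs: int = 4, dropout: float = 0.2):
        super().__init__()
        self.features = nn.Sequential(
            # Four Conv-1D blocks with kernel size 2, stride 1, padding 1
            nn.Conv1d(3, 32, kernel_size=2, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=2, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=2, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=2, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
            nn.Flatten(),
        )
        self.regressor = nn.Sequential(
            nn.LazyLinear(512), nn.ReLU(), nn.Dropout(dropout),  # input size inferred at first call
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, n_outputs),  # BRDF at Oa04, Oa06, Oa08, Oa17
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x))

# The 9 input variables are assumed to be reshaped to 3 channels x length 3.
model = BRDFEmulator1DCNN()
dummy = torch.randn(8, 3, 3)
print(model(dummy).shape)  # torch.Size([8, 4])
```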
Table 4. Summary of the hyper-parameters of the proposed 1D-CNN.

Hyper-Parameter | Tested Values | Selected Value
Learning rate | 0.01, 0.001, 0.0001 | 0.001
Number of epochs | 100, 250, 500 | 100
Batch size | 16, 32, 64, 128, 256 | 32
Optimizer | SGD, ADAM, AdaMax | AdaMax
Loss function | MAE (L1Loss), MSE | MSE
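Reusing the BRDFEmulator1DCNN sketch above, a training loop matching the selected hyper-parameters in Table 4 (Adamax optimizer, learning rate 0.001, batch size 32, 100 epochs, MSE loss) could look like the following; the random tensors simply stand in for the simulated PROSAIL+6S dataset.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative tensors standing in for the simulated dataset:
# X has shape (n_samples, 3, 3); y holds the four target reflectances.
X = torch.randn(1024, 3, 3)
y = torch.rand(1024, 4)
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = BRDFEmulator1DCNN()                                   # class from the sketch above
criterion = torch.nn.MSELoss()                                # selected loss function
optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3)   # selected optimizer

for epoch in range(100):                                      # selected number of epochs
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
```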
Table 5. Network performance for different learning rates and batch sizes (the best combination, a learning rate of 0.001 with a batch size of 32, was selected).

Learning Rate | Batch Size | Training Accuracy | Testing Accuracy | Time (min)
0.01 | 16 | 0.997 | 0.560 | 83.62
0.01 | 32 | 0.998 | 0.670 | 38.27
0.01 | 64 | 0.998 | 0.790 | 20.01
0.01 | 128 | 0.998 | 0.910 | 12.37
0.01 | 256 | 0.999 | 0.871 | 7.63
0.001 | 16 | 0.997 | 0.942 | 83.05
0.001 | 32 | 0.998 | 0.987 | 37.92
0.001 | 64 | 0.998 | 0.876 | 20.22
0.001 | 128 | 0.999 | 0.925 | 11.47
0.001 | 256 | 0.998 | 0.718 | 7.65
0.0001 | 16 | 0.998 | 0.925 | 80.53
0.0001 | 32 | 0.988 | 0.811 | 38.08
0.0001 | 64 | 0.998 | 0.853 | 20.28
0.0001 | 128 | 0.997 | 0.864 | 11.51
0.0001 | 256 | 0.997 | 0.861 | 7.17
Table 6. Input variable values for the testing dataset.

N | Cab (µg cm⁻²) | LIDFa (°) | Hspot | LAI (m² m⁻²) | AOT | SZA (°) | VZA (°) | RAA (°)
1.5 | 10.5 | 30.5 | 0.3 | 3.5 | 0.15 | 30.5 | 0–50 | −180–180
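The testing configuration of Table 6 fixes the biophysical and atmospheric inputs and sweeps only the viewing geometry. The sketch below assembles such an angular grid; the 5° step and the column ordering are assumptions, since the table reports only the ranges.

```python
import numpy as np

# Fixed test values from Table 6
fixed = dict(N=1.5, Cab=10.5, LIDFa=30.5, Hspot=0.3, LAI=3.5, AOT=0.15, SZA=30.5)

# Viewing geometry swept over the reported ranges; the 5-degree step is assumed.
vza = np.arange(0, 50 + 5, 5)        # 11 viewing zenith angles
raa = np.arange(-180, 180 + 5, 5)    # 73 relative azimuth angles
vza_grid, raa_grid = np.meshgrid(vza, raa, indexing="ij")

# One test case per (VZA, RAA) pair, with all other inputs held fixed.
test_angles = np.column_stack([
    np.full(vza_grid.size, fixed["SZA"]),
    vza_grid.ravel(),
    raa_grid.ravel(),
])
print(test_angles.shape)  # (803, 3): 11 VZA x 73 RAA combinations
```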
Table 7. Comparison of metrics at the four OLCI bands for the testing dataset.

Spectral Band | R² | RMSE
Oa04 | 0.996 | 0.0013
Oa06 | 0.977 | 0.0044
Oa08 | 0.979 | 0.0030
Oa17 | 0.997 | 0.0018
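For reference, the R² and RMSE values of Table 7 can be computed per band from the simulated (reference) and predicted reflectances; the sketch below uses scikit-learn, with synthetic placeholder arrays standing in for the actual test outputs.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

bands = ["Oa04", "Oa06", "Oa08", "Oa17"]

# Synthetic placeholders: columns correspond to the four OLCI bands of Table 7.
y_true = np.random.rand(803, 4) * 0.05
y_pred = y_true + np.random.normal(scale=0.002, size=y_true.shape)

for i, band in enumerate(bands):
    r2 = r2_score(y_true[:, i], y_pred[:, i])
    rmse = np.sqrt(mean_squared_error(y_true[:, i], y_pred[:, i]))
    print(f"{band}: R2 = {r2:.3f}, RMSE = {rmse:.4f}")
```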
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
