Spectrum Correction Using Modeled Panchromatic Image for Pansharpening

Pansharpening is a method for generating high-spatial-resolution multi-spectral (MS) images from panchromatic (PAN) and multi-spectral images. A common challenge in pansharpening is to reduce the spectral distortion caused by increasing the resolution. In this paper, we propose a method for reducing the spectral distortion based on the intensity–hue–saturation (IHS) method, targeting satellite images. The IHS method improves the resolution of an RGB image by replacing the intensity of the low-resolution RGB image with that of the high-resolution PAN image. The spectral characteristics of the PAN and MS images are different, and this difference may cause spectral distortion in the pansharpened image. Although many solutions for reducing spectral distortion using a modeled spectrum have been proposed, the quality of the outcomes obtained by these approaches depends on the image dataset. In the proposed technique, we model a low-spatial-resolution PAN image according to a relative spectral response graph, and the corrected intensity is then calculated using the model and the observed dataset. Experiments were conducted on three IKONOS datasets, and the results were evaluated using several widely used quality metrics. This quantitative evaluation demonstrated the stability of the pansharpened images and the effectiveness of the proposed method.


Introduction
The optical sensor of an Earth observation satellite receives radiances in the visible to infrared regions of the electromagnetic spectrum. The sensor simultaneously receives two kinds of data: multi-spectral (MS) images with high spectral resolution and low spatial resolution, and a panchromatic (PAN) image with high spatial resolution and low spectral resolution. Satellites with such optical sensors include IKONOS, QuickBird, GeoEye, and WorldView. Satellite data are widely used for various purposes such as change detection, object detection, target recognition, backgrounds for map applications, and visual image analysis. Pansharpening is an image processing technique that generates high-spatial- and high-spectral-resolution MS images using the spatial information from the PAN image and the spectral information from the MS images. It can be used for preprocessing in the data analysis and satellite-image processing applications described above. Pansharpening methods can be divided into four categories: component substitution (CS), multi-resolution analysis (MRA), machine learning, and hybrid methods. The CS methods replace the spatial information of the MS images with the spatial information from the PAN image and then generate MS images with high spatial resolution. This category includes techniques such as the intensity-hue-saturation (IHS) transform [1], principal component analysis [2], Gram-Schmidt (GS) transform [3,4], and the Brovey transform [2]. For these techniques, it is known that the difference in spectral characteristics between the replaced spatial component and the original spatial component gives rise to spectral distortion. The MRA class extracts the spatial information from the PAN image and then adds it to the MS image.
For these methods, several techniques such as the decimated wavelet transform [5], undecimated wavelet transform [6], à trous wavelet transform [7], Laplacian pyramid [8], curvelet transform [9], contourlet transform [10], nonsubsampled contourlet transform [11], and shearlet transform [12,13] are used to extract the detailed spatial information. Although these methods increase the quality of the spectral information, ringing artifacts, a form of spatial distortion, may occur. Machine learning methods use techniques such as dictionary learning, sparse modeling [14], deep learning [15], and the Bayesian paradigm [16]. The performance of these methods depends on the amount of available data, and their computational complexity is greater than that of the other methods. For example, sparse modeling incurs a significant computational cost for creating a dictionary. The hybrid methods combine several techniques to exploit their advantageous features and can therefore achieve higher performance than the methods in the other classes. Recently, machine learning techniques have been widely used. Wang et al. [17] proposed a method using sparse representation and a convolutional neural network. Fei et al. [18] and Yin [19] proposed improved versions of the sparse representation-based details injection (SR-D) method [14]. For the CS-based category, Imani [20] proposed a method that removes noisy and redundant spatial features to improve the band-dependent spatial detail (BDSD) method [21].
Many methods have been proposed over the past several decades to reduce the spectral distortion and to enhance the spatial resolution [22,23,24]. In particular, pansharpening includes a process for correcting the image intensity. Two types of techniques are used for this process: the first uses the relative spectral response graph, and the second uses an intensity model based on the observed images. However, the numerical image quality of these methods depends on the image dataset, and it is difficult to obtain consistent results.
In this study, we propose a technique for correcting the intensity based on the IHS method. The IHS method is a well-known pansharpening method that substitutes the intensity of the red-green-blue (RGB) images with the intensity of the PAN image. Since the intensity of the pansharpened (PS) image is replaced by that of the PAN image, it contains a high level of spatial information. However, the PS image exhibits spectral distortion (i.e., color distortion) because of the differences in the spectral characteristics between the intensities of the RGB and PAN images [25]. To address this drawback, several methods have been proposed. Tu et al. [26] proposed a generalized IHS (GIHS) transform that reduces the IHS transform to a simple linear transform. Since GIHS can accelerate the calculations, it has frequently been used as a framework for the development of other methods focusing on correcting the intensity. Tu et al. [26] presented a fast IHS that corrects the intensity using the mean value of the intensity of the MS images and also presented a simple spectral-adjustment IHS method (SAIHS) [27] that corrects the intensity of the green and blue bands by exploring the best value over 92 IKONOS images. Tradeoff IHS is a method that controls the tradeoff of the spectral characteristics between the intensities of the RGB and PAN images [28]. Choi et al. [29] presented an improved SAIHS (ISAIHS) that, in addition to SAIHS, calculates the intensity correction coefficients of the red and near-infrared (NIR) bands using 29 IKONOS images. These approaches have the advantage that strong outliers are not generated because the correction coefficient is obtained from the images as a constant. However, the values of the constant may not be optimal for the processed image. Audicana et al. [30] proposed an expanded fast IHS with the spectral response function (eFIHS-SRF) method to correct the intensity of the PS image using the mean value of the MS images and the fraction of the photons detected by the sensors (i.e., the MS and PAN sensors). In this method, the correction is performed using the correction coefficient obtained from the relative spectral response graph and the observation data. However, the obtained results differ depending on the processed data. Garzelli et al. [21] presented the BDSD method, which applies the correction coefficients calculated by a minimum-variance unbiased estimator [31] to the MS images. The practical replacement adaptive CS (PRACS) method [32] calculates the correction coefficients using high-frequency information and the characteristics of the MS images for each image dataset. The fusion approach (IHS-BT-SFIM) proposed by Tu et al. [33] calculates the intensity correction coefficients using the modeled low-spatial-resolution intensity of the MS images.
These methods use either unique or non-unique correction coefficients. In pansharpening, spectral distortion may be caused by three different effects: the relative spectral response of the sensors [27], aging of the optical and electronic payload [34], and the observation conditions [35]. Since the observation conditions differ for each dataset, the correction coefficients must be calculated separately for each image dataset. Even in the conventional methods, the correction coefficient is obtained from the intensity modeled based on the image. However, the use of a model formula obtained by combining the intensity with and without color information has not been considered. We estimate the PAN image without color information using the intensity of the RGB image and the intensity of the NIR image, and perform a detailed correction including the relative spectral response and observation conditions at each intensity of the RGB image with color information. In this study, we propose a novel model for low-spatial-resolution PAN images using MS images. Compared to other related methods for intensity correction, our method showed consistently good performance in terms of the numerical image quality. Therefore, it was concluded that unlike for the methods based on IHS, the proposed method can reduce spectral distortion and obtain results that are independent of the processed image.
Note that multiple satellites, such as QuickBird and GeoEye, have sensors whose characteristics are similar to those of IKONOS. For this reason, experiments have been conducted on IKONOS images in many studies in the literature. Our proposed method can also be applied to images from those satellites.

Image Datasets
The three image datasets from the IKONOS used for the experiments are listed in Table 1. The first was collected in May 2008 and covers the city of Nihonmatsu, Japan. The second was collected in May 2006 and covers the city of Yokohama, Japan. Both the Nihonmatsu and Yokohama datasets were provided by the Japan Space Imaging Corporation, Japan. The third dataset covering Mount Wellington in Hobart, Tasmania in Australia was collected in February 2003 and was provided by Space Imaging, LLC. These datasets have PAN and MS images with the spatial resolutions of 1 m and 4 m, respectively. The original dataset contains: a PAN image with 1024 × 1024 pixels and MS images with 256 × 256 pixels for the Nihonmatsu region, a PAN image with 1792 × 1792 pixels and MS images with 448 × 448 pixels for the Yokohama region, and a PAN image with 12,112 × 13,136 pixels and MS images with 3028 × 3284 pixels for the Hobart region in Tasmania. To evaluate the quality of the PS image, we experimented with the test images and original images according to the Wald protocol [36]. The test images were used to evaluate the numerical image quality, and the original images were used as reference images for numerical and visual evaluation. We regard the original images as ground truth images. The spatial resolution of the test PAN image was reduced from 1 to 4 m and that of the test MS image was reduced from 4 to 16 m. Hence, the test image datasets have a PAN image with 256 × 256 pixels and MS images with 64 × 64 pixels for the Nihonmatsu region, a PAN image with 448 × 448 pixels and MS images with 112 × 112 pixels for the Yokohama region, and a PAN image with 3028 × 3284 pixels and MS images with 757 × 821 pixels for the Hobart region in Tasmania.
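The degradation step of the Wald protocol described above can be sketched as follows. This is a minimal illustration, not the paper's actual preprocessing code: the function name `wald_degrade` is ours, `scipy.ndimage.zoom` with `order=3` stands in for bicubic resampling, and the shapes follow the Nihonmatsu dataset.

```python
import numpy as np
from scipy.ndimage import zoom

def wald_degrade(pan: np.ndarray, ms: np.ndarray, ratio: int = 4):
    """Reduce the PAN and MS resolution by `ratio` with bicubic (order-3) resampling."""
    pan_test = zoom(pan, 1.0 / ratio, order=3)                   # 1 m -> 4 m
    ms_test = zoom(ms, (1.0 / ratio, 1.0 / ratio, 1), order=3)   # 4 m -> 16 m
    return pan_test, ms_test

# Nihonmatsu-sized placeholders: PAN 1024 x 1024, MS 256 x 256 with 4 bands.
pan = np.random.rand(1024, 1024)
ms = np.random.rand(256, 256, 4)
pan_test, ms_test = wald_degrade(pan, ms)
print(pan_test.shape, ms_test.shape)  # (256, 256) (64, 64, 4)
```

The original MS images then serve as the ground truth against which the PS images built from `pan_test` and `ms_test` are evaluated.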

Proposed Method
It is considered that a high-resolution PAN image can be estimated from the corresponding high-resolution MS images. However, it is not obvious how to combine the MS images, and the combination may differ depending on the spectral response and observation conditions. Based on this idea, we propose a novel technique for correcting the intensity that can reduce the spectral distortion in each image by modeling the PAN image. The technique does not require detailed knowledge of the sensor characteristics; in other words, it requires only the image dataset. The procedure of the proposed method comprises a technique for estimating the intensity correction coefficients and a technique for image fusion. The former first models the low-spatial-resolution PAN image using the MS images, and the coefficients are then calculated by the method of least squares from the comparison between the PAN image and the modeled PAN image. The latter calculates the high-spatial-resolution intensity of the RGB image and then generates a PS image using GIHS.

Notation
$I^{high}(i)$ and $I^{low}(i)$ denote the $i$-th pixel intensities of the high- and low-spatial-resolution RGB images, respectively. $PAN^{high}(i)$ and $PAN^{low}(i)$ denote the $i$-th pixel values of the high- and low-spatial-resolution PAN images, respectively. $MS^{low}_{b \in B}(i)$ denotes the $i$-th pixel of the low-spatial-resolution MS image in band $b$, where $B = \{red, grn, blu, nir\}$; $red$, $grn$, $blu$, and $nir$ denote the red, green, blue, and NIR bands, respectively. $|B|$ denotes the total number of MS bands. $\cdot$ denotes the scalar multiplication of matrices, $\times$ denotes multiplication, and $*$ denotes a matrix product. $N$ denotes the total number of pixels in an image.

High-Spatial-Resolution Intensity of the RGB Image
The intensity of the RGB image is calculated by the IHS transform and is represented by Equation (1):

$$I^{low}(i) = \frac{MS^{low}_{red}(i) + MS^{low}_{grn}(i) + MS^{low}_{blu}(i)}{3} \quad (1)$$
The intensity ratio between the PAN and RGB images is the same for the high and low spatial resolutions. Thus, if we have both PAN and RGB images with high and low spatial resolutions and the same number of pixels, this intensity relationship can be expressed as

$$\frac{PAN^{high}(i)}{I^{high}(i)} = \frac{PAN^{low}(i)}{I^{low}(i)} \quad (2)$$

The dimensions of the PAN and MS images used for the above process are the same as those of the MS images before processing, $\mathbb{R}^{P \times P}$. The PAN image is generated by down-sampling with bicubic interpolation.
To obtain an expression for the high-spatial-resolution intensity of the RGB image, Equation (2) can be rewritten as

$$I^{high}(i) = I^{low}(i) \times \frac{PAN^{high}(i)}{PAN^{low}(i)} \quad (3)$$
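Equations (1)-(3) were not fully reproduced in this excerpt. Assuming the standard IHS intensity (the mean of the RGB bands) and the ratio-based rewrite described in the text, they can be sketched as follows; the function names are ours, and the arrays are assumed to be co-registered and equal in size:

```python
import numpy as np

def ihs_intensity(red, grn, blu):
    """Equation (1): IHS intensity as the mean of the RGB bands."""
    return (red + grn + blu) / 3.0

def high_res_intensity(i_low, pan_high, pan_low, eps=1e-12):
    """Equation (3): scale the low-resolution intensity by the PAN ratio.

    `eps` guards against division by zero in dark regions.
    """
    return i_low * pan_high / (pan_low + eps)
```

All three functions operate elementwise, so they accept scalars or NumPy arrays of matching shape.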

Low-Spatial-Resolution PAN Image Model
Since the observation wavelength range of the PAN sensor includes the wavelength ranges of all of the MS sensors, we modeled the low-spatial-resolution PAN image using low-spatial-resolution MS images. Based on the relative spectral response graph of IKONOS [37] depicted in Figure 1, we noticed the following features:
1. The relative spectral response of the PAN band shows low sensitivity from the blue to green bands [27].
2. The relative spectral response of the PAN band shows low sensitivity in some regions of the red and NIR bands.
3. The relative spectral responses of the red, green, blue, and NIR bands partly overlap with those of their neighboring bands [4,27].
4. The PAN band includes the NIR band.
We design the low-spatial-resolution PAN image model including the above features as follows:

$$PAN^{low}(i) = I^{low}(i) + \alpha \cdot MS^{low}_{nir}(i) - \beta \cdot MS^{low}_{blu}(i) - \gamma \cdot MS^{low}_{grn}(i) - \xi \cdot MS^{low}_{red}(i) \quad (4)$$

where $\alpha$, $\beta$, $\gamma$, and $\xi$ denote the correction coefficients of the NIR, blue, green, and red bands, respectively. For the right-hand side of Equation (4), the third and fourth terms reflect feature 1, the second and fifth terms reflect feature 2, the second to fifth terms reflect feature 3, and the second term reflects feature 4. The first and second terms on the right-hand side of Equation (4) estimate the PAN image using the intensities of the RGB and NIR images. The third, fourth, and fifth terms are considered to be the overflowing and overlapping parts of the MS bands in the relative spectral response graph. The overflowing part of the NIR band is included in the coefficient $\alpha$.
J. Imaging 2020, 6, 20

Estimation of the Intensity Correction Coefficient
The high- and low-spatial-resolution images observed under the same conditions have similar intensity characteristics, and the relationship between the high- and low-spatial-resolution PAN images can be expressed as $PAN^{high}(i) \simeq PAN^{low}(i)$. Therefore, the intensity correction coefficients are obtained when the sum of the differences between the high- and low-spatial-resolution PAN images reaches its minimum value, as expressed by the root-mean-square error:

$$\underset{\alpha,\beta,\gamma,\xi}{\operatorname{argmin}} \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(PAN^{high}(i) - PAN^{low}(i)\right)^2} \quad \text{s.t.} \quad \alpha, \beta, \gamma, \xi \geq 0 \quad (5)$$

Equation (5) is used with the observed data, that is, the PAN and MS images, because the observed data include the relative spectral response of the sensors and the observation conditions. In practice, we use Equation (6) instead of Equation (5). The term $PAN^{high}(i) - PAN^{low}(i)$ in Equation (5) can be rearranged as $PAN^{high}(i) - I^{low}(i) + \left(-\alpha \cdot MS^{low}_{nir}(i) + \beta \cdot MS^{low}_{blu}(i) + \gamma \cdot MS^{low}_{grn}(i) + \xi \cdot MS^{low}_{red}(i)\right)$. To express the above formula in matrix representation, we define $\mathbf{d} = \left[PAN^{high}(1) - I^{low}(1), \ldots, PAN^{high}(N) - I^{low}(N)\right]^{T}$, $\mathbf{c} = \left[\alpha, \beta, \gamma, \xi\right]^{T}$, and $\mathbf{M}$ as the $N \times |B|$ matrix whose $i$-th row is $\left[MS^{low}_{nir}(i), -MS^{low}_{blu}(i), -MS^{low}_{grn}(i), -MS^{low}_{red}(i)\right]$. We compute the intensity correction coefficients using the least-squares method, as described in Equation (6):

$$\underset{\mathbf{c} \geq 0}{\operatorname{argmin}} \; \left\| \mathbf{d} - \mathbf{M} * \mathbf{c} \right\|^2 \quad (6)$$
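The estimation step can be sketched as a non-negative least-squares problem: rearranging $PAN^{high}(i) - PAN^{low}(i)$ as described in the text gives a linear system in $(\alpha, \beta, \gamma, \xi)$ that `scipy.optimize.nnls` can solve. The sketch below is illustrative (variable names are ours), and the PAN image is assumed to have already been down-sampled to the MS grid:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_coefficients(pan, ms_red, ms_grn, ms_blu, ms_nir):
    """Return (alpha, beta, gamma, xi) minimizing ||d - M c||, subject to c >= 0."""
    i_low = (ms_red + ms_grn + ms_blu) / 3.0          # Equation (1)
    d = (pan - i_low).ravel()                         # PAN^high(i) - I^low(i)
    M = np.column_stack([ms_nir.ravel(),              # +alpha term
                         -ms_blu.ravel(),             # -beta term
                         -ms_grn.ravel(),             # -gamma term
                         -ms_red.ravel()])            # -xi term
    c, _residual = nnls(M, d)
    return c  # [alpha, beta, gamma, xi]
```

Because the observed data already carry the relative spectral response and observation conditions, the returned coefficients are specific to the processed image dataset, as the text argues.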

Fusion Process
The fusion process generates a PS image with GIHS, using the intensity correction coefficients and the observed images. Since GIHS is a simple fusion technique, it is well suited for evaluating the image quality performance. The validity of the GIHS formula was rigorously proved for RGB images by Tu et al. [26], and its extension to NIR images follows from the form of the equation. Therefore, in this study we applied it to RGB images. Figure 2 shows the procedure of the fusion process, which is also listed below:

1. Enlarge the MS images to the same size as the PAN image using bicubic interpolation, producing the enlarged MS images $MS^{low}_{b \in B}$.
2. Calculate the enlarged low-spatial-resolution intensity $I^{low}$ from $MS^{low}_{b \in \{red,grn,blu\}}$, as expressed by Equation (1).
3. Calculate the high-spatial-resolution intensity $I^{high}$ using the estimated correction coefficients, $MS^{low}_{b \in B}$, $I^{low}$, and $PAN^{high}$, as expressed by Equations (3) and (4).
4. Synthesize the PS image from $I^{low}$, $MS^{low}_{b \in B}$, and $I^{high}$ with GIHS, as expressed by Equation (7):

$$PS_b(i) = MS^{low}_b(i) + \left(I^{high}(i) - I^{low}(i)\right), \quad b \in B \quad (7)$$
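The four steps above can be sketched end to end as follows. This is a hypothetical implementation: the GIHS injection in step 4 is assumed to take its usual additive form $PS_b = MS^{low}_b + (I^{high} - I^{low})$, the modeled PAN follows the rearranged term quoted in the text, and `scipy.ndimage.zoom` stands in for the bicubic interpolation:

```python
import numpy as np
from scipy.ndimage import zoom

def fuse_gihs(pan_high, ms_low, coeffs, ratio=4, eps=1e-12):
    """ms_low: (H, W, 4) array with bands ordered (red, grn, blu, nir)."""
    alpha, beta, gamma, xi = coeffs
    # Step 1: enlarge the MS images to the PAN size (bicubic stand-in).
    ms_up = zoom(ms_low, (ratio, ratio, 1), order=3)
    red, grn, blu, nir = (ms_up[..., k] for k in range(4))
    # Step 2: enlarged low-spatial-resolution intensity, Equation (1).
    i_low = (red + grn + blu) / 3.0
    # Step 3: modeled PAN (Equation (4)) and corrected high-resolution
    # intensity via the PAN ratio (Equation (3)).
    pan_model = i_low + alpha * nir - beta * blu - gamma * grn - xi * red
    i_high = i_low * pan_high / (pan_model + eps)
    # Step 4: GIHS injection, Equation (7), applied to every band.
    return ms_up + (i_high - i_low)[..., None]
```

The output has the PAN image's spatial size and one channel per MS band.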

Experimental Setup
The procedure of the proposed method used the image sizes listed in Table 2. Let the sizes of the original PAN and MS images be $RP_{row} \times RP_{col}$ and $P_{row} \times P_{col}$, respectively, where $R$ is the ratio of the number of vertical and horizontal pixels of the PAN image relative to that of the MS image. Then, the size of the down-sampled test PAN image and the test MS image used for the estimation of the intensity correction coefficients in Table 2 is $P_{row}/R \times P_{col}/R$, and the size of the test PAN image and the up-sampled test MS image used for image fusion is $P_{row} \times P_{col}$. The test images were generated by bicubic spline interpolation. Due to the possible loss of data, the procedure for the estimation of the intensity correction coefficients does not use up-sampling. The correction coefficients were determined by solving Equation (5) using the non-negative least-squares method. The correction coefficient is the value for which the closest agreement is obtained between the spectral characteristics of the high-resolution PAN image and those of the low-resolution PAN image. Bicubic interpolation was used for the down-sampling and up-sampling of the images used in the experiments on the proposed method, as shown in Table 2. For a fair comparison, the up-sampling of the MS images in the related works was also carried out using bicubic interpolation.

Numerical Quality Metrics
To evaluate the numerical image quality, we employed four metrics: the correlation coefficient (CC), the universal image quality index (UIQI) [38], the erreur relative globale adimensionnelle de synthèse (ERGAS) [39], and the spectral angle mapper (SAM) [40]. All of the metrics measure the spectral distortion, and UIQI, ERGAS, and SAM are global metrics.
The CC measures the correlation between the images and ranges from -1.0 to 1.0. A CC value closer to 1.0 implies a stronger correlation between the spectral information of the PS image and the original image. The CC in the $b$-band is given by

$$CC_b = \frac{\sigma_{X_b \hat{X}_b}}{\sigma_{X_b}\,\sigma_{\hat{X}_b}}$$

where $X_b$ and $\hat{X}_b$ denote the reference and PS images in the $b$-band, $\sigma_{X_b \hat{X}_b}$ denotes their covariance, and $\sigma_{X_b}$ and $\sigma_{\hat{X}_b}$ denote their standard deviations. UIQI [38] comprehensively measures the loss of correlation, intensity distortion, and contrast distortion. The loss of correlation measures the degree of linear correlation between the images. The intensity distortion measures the closeness of the mean intensity values of the images. The contrast distortion measures the similarity of the contrasts of the images. These values range from -1.0 to 1.0. A UIQI value closer to 1.0 implies smaller values of the loss of correlation, intensity distortion, and contrast distortion, so a higher UIQI value corresponds to a higher quality of the PS image. UIQI is given by

$$Q_b = \frac{\sigma_{X_b \hat{X}_b}}{\sigma_{X_b}\,\sigma_{\hat{X}_b}} \times \frac{2\,\mu_{X_b}\,\mu_{\hat{X}_b}}{\mu_{X_b}^2 + \mu_{\hat{X}_b}^2} \times \frac{2\,\sigma_{X_b}\,\sigma_{\hat{X}_b}}{\sigma_{X_b}^2 + \sigma_{\hat{X}_b}^2}$$

where $\mu$ denotes the mean. ERGAS [39] measures the global image quality, with a lower ERGAS value corresponding to a higher spectral quality of the PS image, and is given by

$$ERGAS = 100\,\frac{h}{l}\sqrt{\frac{1}{|B|}\sum_{b \in B}\left(\frac{RMSE_b}{\mu_{X_b}}\right)^2}$$

where $RMSE_b$ is the root-mean-square error between the reference image and the PS image in the $b$-band, and $h$ and $l$ denote the spatial resolutions of the PAN and MS images, respectively. SAM [40] measures the global spectral distortion, with a value closer to 0.0 corresponding to weaker spectral distortion, and is given by

$$SAM = \frac{1}{N}\sum_{i=1}^{N} \arccos\left(\frac{\langle \mathbf{v}(i), \hat{\mathbf{v}}(i)\rangle}{\|\mathbf{v}(i)\|\,\|\hat{\mathbf{v}}(i)\|}\right)$$

where $\mathbf{v}(i)$ and $\hat{\mathbf{v}}(i)$ are the spectral vectors of the reference and PS images at pixel $i$.
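Two of these global metrics, ERGAS and SAM, can be sketched in NumPy under their standard definitions. These are our own minimal implementations for illustration, not the evaluation code used in the experiments:

```python
import numpy as np

def ergas(reference, fused, h_over_l=0.25):
    """ERGAS for (H, W, B) images; h_over_l is the PAN/MS resolution ratio (1/4 here)."""
    rmse_b = np.sqrt(((reference - fused) ** 2).mean(axis=(0, 1)))  # per-band RMSE
    mu_b = reference.mean(axis=(0, 1))                              # per-band mean
    return 100.0 * h_over_l * np.sqrt(((rmse_b / mu_b) ** 2).mean())

def sam(reference, fused, eps=1e-12):
    """Mean spectral angle (radians) between the per-pixel spectral vectors."""
    dot = (reference * fused).sum(axis=-1)
    norms = np.linalg.norm(reference, axis=-1) * np.linalg.norm(fused, axis=-1)
    return np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0)).mean()
```

An identical pair of images yields an ERGAS of 0.0 and a SAM of (numerically) 0.0, matching the "lower is better" reading in the text.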

Experimental Results
The intensity correction coefficients were estimated using Equation (5). Table 3 lists the estimated intensity correction coefficients for the three datasets, where $\alpha$ represents the fraction of the NIR included in the PAN image, and $\beta$, $\gamma$, and $\xi$ represent the fractions by which the intensity of the RGB image does not match that of the PAN image. The PS image of the proposed method was compared to those obtained by the related methods for intensity correction, namely fast IHS [26], SAIHS [27], ISAIHS [29], Tradeoff IHS [28], eFIHS-SRF [30], PRACS [32], and Brovey Transform-Smoothing-Filter-based Intensity Modulation (BT-SFIM) [33]. PRACS [32] was performed using the code developed by Vivone et al. [23]. The detailed parameters of the existing methods were as follows: the weight parameters for SAIHS were (Green, Blue) = (0.75, 0.25); the weight parameters for ISAIHS were (Red, Green, Blue, NIR) = (0.3, 0.75, 0.25, 1.7); the tradeoff parameter for Tradeoff IHS was 4.0; the fraction of the number of photons in the MS band detected by the PAN sensor for eFIHS-SRF was 0.8; and the mean filter kernel size and the weight parameters for BT-SFIM were 7 × 7 and (Red, Green, Blue, NIR) = (0.26, 0.26, 0.122, 0.375), respectively. The numerical image quality was evaluated using the CC, UIQI [38], ERGAS [39], and SAM [40] metrics. The sliding window size for UIQI was 8 × 8. Tables 4 and 5 summarize the image quality results for CC and UIQI, and for ERGAS and SAM, respectively. It is observed that, with the exception of the image of Tasmania, ISAIHS gave good results. The proposed method gave the best UIQI and ERGAS values for all of the images. The SAM values of eFIHS-SRF and the proposed method are not correlated with the size of the processed image, while the values obtained by the other methods decrease with the increasing number of pixels in the processed image. Tradeoff IHS and the proposed method gave consistently good results for all of the images.
Figure 3 shows the ranking of the quality metric results for the seven methods. Here, for each test image, the best result is worth three points, the second-best result is worth two points, and the third-best result is worth one point. The maximum possible number of points is 36, which is obtained when a method has the best values for all of the metrics. BT-SFIM uses coefficients estimated from a relative spectral response graph and did not give good results, whereas eFIHS-SRF, which also uses coefficients estimated from the relative spectral response graph, was the next-best method after Tradeoff IHS. In contrast, ISAIHS and Tradeoff IHS use coefficients estimated from large image datasets and gave good results. These results show that techniques using image datasets tend to perform better than those using only the relative spectral response graph. This demonstrates the need to consider other observation conditions in addition to the spectral response graphs, and it can be concluded that the observation dataset contains this information.
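The scoring scheme behind Figure 3 can be sketched as follows. The scores below are made up for illustration, and all values are assumed to be oriented so that higher is better (for ERGAS and SAM, the sign would be flipped before ranking):

```python
def rank_points(results: dict) -> dict:
    """Award 3/2/1 points per (image, metric) slot to the top three methods."""
    points = {method: 0 for method in results}
    n_slots = len(next(iter(results.values())))
    for k in range(n_slots):
        ordered = sorted(results, key=lambda m: results[m][k], reverse=True)
        for pts, method in zip((3, 2, 1), ordered):
            points[method] += pts
    return points

# Two (image, metric) slots for three hypothetical methods.
scores = {"Proposed": [0.95, 0.90], "TradeoffIHS": [0.93, 0.91], "eFIHS-SRF": [0.90, 0.85]}
print(rank_points(scores))  # {'Proposed': 5, 'TradeoffIHS': 5, 'eFIHS-SRF': 2}
```

With the paper's 4 metrics and 3 test images there are 12 slots, so the maximum score is 12 × 3 = 36 points, matching the text.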
Visual analyses are shown in Figures 4-6. In this evaluation, the PS images generated from the test images were compared to the ground truth RGB images. In Figures 4 and 5, (d)-(l) are the expanded images corresponding to the area surrounded by the yellow box in (c). As shown in Figure 4, eFIHS-SRF (Figure 4i), BT-SFIM (Figure 4k), and the proposed method (Figure 4l) reproduced the color tone of the forest (indicated by red arrows), while the color of the rice field in Figure 4i for eFIHS-SRF was darker (indicated by green arrows). The green component of PRACS (Figure 4j) was brighter than expected (indicated by white arrows). As shown in Figure 5, the SAIHS (Figure 5f) and ISAIHS (Figure 5g) images were generally brighter, and the eFIHS-SRF image (Figure 5i) was darker. In Figure 6, eFIHS-SRF (Figure 6i) and the proposed method (Figure 6l) reproduced the overall color tone, while the PRACS image (Figure 6j) had a brighter appearance than the images obtained using the other methods, which were generally whitish. In summary, for the proposed method, the color tone of the whole image was consistent with the ground truth image for all images, and the resolution was also good. The green component of the PRACS image was brighter. The results obtained by the other methods differed depending on the image. (Figure panels: (a) test RGB image; (b) test PAN image; (c) ground truth RGB image; (d)-(l) results of each method, including (j) PRACS, (k) BT-SFIM, and (l) the proposed method.)

Discussion
The experimental results show that for SAM, only the proposed method and eFIHS-SRF, for which the coefficients are estimated from the relative spectral response graph, do not depend on the size of the processed image. This indicates that the overall color was not destroyed; in other words, there is little spectral distortion. Tradeoff IHS performs well on average for all images; however, it does not fit each processed image best. Since each method has advantages and disadvantages, we calculated the ranking of the quality metrics. According to this ranking, the proposed method is the best, followed by Tradeoff IHS and eFIHS-SRF. Tradeoff IHS is a method that uniquely calculates the correction coefficients from the image datasets included in the modeling method, and eFIHS-SRF is a method that uniquely calculates the correction coefficients using the relative spectral response graph. The techniques for intensity correction fall into two classes: the first uses the relative spectral response graph, and the other models the obtained image datasets. The comparative results show that the latter class provides better performance. Since such a method calculates the intensity correction coefficients from the observed data, which include the effect of all factors, it is able to obtain a good result. For modeling techniques such as SAIHS [27], ISAIHS [29], and PRACS [32], the modeled intensity is mostly expressed as $I^{low} = \sum_{b \in B} \omega_b\,MS^{low}_b$ or $PAN^{low} = \sum_{b \in B} \omega_b\,MS^{low}_b$, where $\omega_b$ denotes the $b$-band correction coefficient; some correction coefficients are subsequently acquired from this formula. While the previous modeling techniques generally obtain the correction coefficients by optimization over large collections of satellite images or observations, the proposed technique is optimized for each processed image dataset.
The proposed method obtained the best quality metric scores for all of the image datasets; in particular, the UIQI and ERGAS metrics gave consistently good results. The results show that the proposed technique reduces the spectral distortion compared to the related conventional techniques, which suggests that the model is adequate. For the estimation of the correction coefficients, the proposed technique calculates the intensity correction coefficients for each set of observed data, thus obtaining better numerical quality metrics than the modeling techniques that calculate the intensity correction coefficients once per satellite. This suggests that estimating the correction coefficients for each image dataset is appropriate because of the different observation conditions and spectral characteristics of the PAN and MS sensors.

Conclusions
This study proposed a novel model of low-spatial-resolution PAN images for pansharpening; the intensity correction coefficients were computed using this model and the obtained image dataset, and the PS image was generated using these coefficients and GIHS. The proposed model is formulated according to the characteristics of the relative spectral response graph. Because it includes subtraction terms, the design of this model differs from that of conventional models. Therefore, the correction coefficients calculated for each image correspond to the observation conditions and sensor characteristics. Compared to other related methods, the proposed method demonstrated consistently good performance. These results show that the proposed model is adequate and effective for estimating the intensity in pansharpening. However, the experiments were performed only on images obtained from IKONOS; therefore, further verification experiments on images obtained from other optical sensors are required. Finding new applications of the proposed method is also an important direction for future work.