Color Matching Images With Unknown Non-Linear Encodings

We present a color matching method that deals with different non-linear encodings. In particular, given two views of the same scene taken by two cameras with unknown settings and internal parameters, and encoded with unknown non-linear curves, our method corrects the colors of one of the images so that it looks as if it were captured under the other camera's settings. Our method is based on treating the in-camera color processing pipeline as the concatenation of a matrix multiplication on the linear image followed by a non-linearity. This allows us to model a color stabilization transformation between the two shots by estimating a single matrix (which contains information from both original images) and an extra parameter that accounts for the non-linearity. The method is fast and the results show no spurious colors. It outperforms the state of the art both visually and according to several metrics, and can handle HDR encodings and very challenging real-life examples.


I. INTRODUCTION
Color matching techniques aim to map the colors of one image, defined as the source, to those of a second image, defined as the reference. A particular case is color stabilization, where the two pictures are taken of the same scene and differ in terms of color. These differences in color may be caused either by the use of different camera models, which follow different internal procedures tailored by the manufacturer, or by the use of the same camera under different settings (white balance, exposure time, etc.).
Digital cameras perform a number of image processing steps, including demosaicing, white balance, color correction (from camera sensor RGB to a device-independent color space), and encoding. Bianco et al. [1] proposed a generic model of the color processing pipeline of digital cameras:

I_out = (A · I_lin)^{1/γ},    (1)
where I_lin is the linear image read by the camera sensor after demosaicing, I_out is the output image, A is a 3 × 3 matrix which carries color information and white balance, and the value γ defines a power law function (usually known as gamma correction). This is a simplified version of the pipeline, since other processing steps, like denoising, contrast enhancement, etc., are also applied. Nonetheless, this approximation is quite accurate for those pixels not lying close to the boundary of the color gamut.

Fig. 1. Linear response (dashed) versus gamma-corrected (circle) and logarithmic response (continuous). The gamma correction was defined as γ = 2.2, and the logarithmic curve is an instance of an ARRI Log C curve.
Although gamma correction has been the most widely used encoding technique, it fails when working with high dynamic range (HDR) imaging, mostly due to quantization issues. Current professional cinema cameras are able to capture a wide range of light intensities, and therefore a compression of this range is needed for storage while preserving all the details and appearance. For this reason, cinema cameras replace gamma correction with a logarithmic function whose general form (common to the most popular log-encoding approaches [2], [3]) can be expressed as:

I_out = c · log10(a · A · I_lin + b) + d,    (2)

where I_out and I_lin are defined as above, and the parameters a, b, c, and d are constant real values (varying for different camera manufacturers and camera settings). Figure 1 shows the plot of linear (dashed), gamma-corrected (circle) and logarithmic (continuous) responses to linear values. Notice that the gamma correction and the logarithmic curve assign respectively 50% and 80% of the output range to the first 20% of the linear intensity values. More recently, other non-linear encoding curves devised for HDR content have appeared. The most well-known of these are the Perceptual Quantizer (PQ) [4] and Hybrid Log-Gamma (HLG) [5]. They were designed to reduce quantization errors in the storage and coding of HDR scenes. Both PQ and HLG are mathematically well-defined.
For more details on the definition of these curves, we refer the reader to [4] and [5].
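To make the two families of curves concrete, the following sketch evaluates a gamma correction and a generic log curve of the form of Equation (2) on scalar intensities. The Log C parameter values below are illustrative approximations only (they vary per manufacturer and exposure index) and are not an official specification:

```python
import numpy as np

def gamma_encode(x, gamma=2.2):
    # power-law encoding of Equation (1); for gamma = 2.2, 50% of the
    # output range covers roughly the first 20% of linear intensities
    return np.power(x, 1.0 / gamma)

def log_encode(x, a, b, c, d):
    # generic log curve of Equation (2), scalar case (color matrix A = 1)
    return c * np.log10(a * x + b) + d

x = np.linspace(0.0, 1.0, 5)
print(gamma_encode(x))
# illustrative parameters loosely in the range of an ARRI Log C curve
print(log_encode(x, a=5.556, b=0.052, c=0.247, d=0.386))
```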
In the industry, solutions for bringing consistency across image shots usually involve very skilled manual work, done by colorists during color grading in movie post-production and by technicians using camera control units (CCU) [6] in live TV broadcasts. They may also require a proper characterization of the cameras used and their settings, as in the ACES framework [7], or the presence of color charts in the shots.
In image processing and computer vision research, color matching a pair of pictures is still an open problem. We can differentiate between two cases: i) the image pair does not necessarily share any content (color transfer), or ii) the image pair is acquired from the same scene (color stabilization). The latter can be understood as a constrained color transfer problem.
The aim of color transfer methods is to transfer the colors present in the reference image to the source image. A seminal work in color transfer was proposed by Reinhard et al. [8], where the pair of RGB images are first converted to a decorrelated color space and then the mean and variance of the reference are transferred to the source. Pitié et al. [9], [10] defined the images as probability density functions, and then matched them through an iterative non-linear process. It is worth mentioning color transfer as an application of optimal transport, which minimizes the cost of transferring the probability density distribution of the source image into that of the reference, as in the works of Rabin et al. [11] and Ferradans et al. [12]. Pouli and Reinhard [13] performed histogram matching along different scales given images of different dynamic ranges. The method presented by Kotera [14] computes the principal components of the 3D color distributions in order to match the principal axes of the source to those of the reference image by a matrix multiplication (rotation and scaling). Xiao and Ma [15] worked with color statistics, and in [16] they proposed a gradient-preserving color transfer technique and an evaluation metric for color transfer methods. Nguyen et al. [17] presented a color transfer method that first applies color constancy to the input images, then performs luminance matching, and finally aligns the color gamuts by a linear transformation. Hwang et al. [18] suggested using moving least squares for color transfer, also incorporating a probabilistic measure to ensure robustness against noise and outliers. Gong et al. [19], [20] proposed a color transfer method based on a projective transformation and a mean intensity mapping. All the above-mentioned methods are global, although some local approaches also exist. The work of Tai et al. [21] segmented images into regions, and then used Gaussian Mixture Models (GMM) to represent color distributions before the matching is performed. Xiang et al. [22] also followed a GMM representation of color areas before matching them.
Color stabilization tackles the situation where some regions or objects appear in both the reference and the source images. HaCohen et al. [23] presented a method to compute dense correspondences between the images, combined with a global color mapping model. Vazquez-Corral and Bertalmío [24] proposed a color stabilization algorithm that estimates a power law (γ value) for each of the images and a single 3 × 3 matrix to color match the source image to the reference. It is built on the assumption that in digital cameras the color encoding can be expressed as a matrix multiplication followed by a power law (gamma correction). Frigo et al. [25] presented a method to color stabilize video sequences, based on the estimation of a non-linearity and a channel-based scaling. To the best of the authors' knowledge, there are only two color stabilization works for logarithmic images. One is the method of Vazquez-Corral and Bertalmío [26], which relies on finding a sufficiently large number of achromatic matches between source and reference. Let us note that the need to detect achromatic matches may be a challenging limitation in some cases. The other method [27] is an earlier version of the current work, with a different algorithm that consistently underperforms the one we introduce here, as shown in the Results section.
Color consistency refers to the situation where a set of images from the same scene need to be color matched. HaCohen et al. [28] extended their previous color stabilization approach [23] to the case of more than two images. In a recent work, Park et al. [29] proposed a model in which the parameters to be estimated are a gamma correction and a white balance constant. Xia et al. [30] presented a method to achieve color consistency in image stitching: on the overlapping regions among the shots, it computes parametric curves for each channel under color, gradient and contrast constraints. A review of the performance of color transfer methods for image stitching is presented in Xu and Milligan [31].
Our main contribution in this work is a method able to color match pairs of images that were encoded with different non-linearities (gamma, Log C, HLG, PQ). This work is an extension of the one presented in [27], which we improve in several ways. First, we allow the matrix in our model to be a projective 4 × 4 matrix. This brings more flexibility to the model, enabling it to better deal with saturated pixels that have gone through different non-linear in-camera operations such as tone mapping or gamut mapping. Also, we introduce a new term in the minimization, which penalizes errors in the perceptual CIELab color space. Furthermore, we show how our method can be used when images are encoded with the HLG and PQ curves, the two current standards for HDR encodings. Lastly, we present a new dataset and evaluation framework for the problem, and perform larger comparisons both in terms of the methods and the metrics considered. Our results outperform the rest of the algorithms both quantitatively and qualitatively.

II. METHODOLOGY
In this paper, we present a color stabilization method that takes as input an image pair encoded with unknown non-linear curves. For simplicity, we explain the method for the case of gamma correction and logarithmic encoding; later on, we show how to handle the PQ and HLG encodings as well. The main steps of our method can be outlined as follows: 1) if the source or the reference is log-encoded, we transform it into a gamma-corrected image; 2) we color stabilize the images by estimating a 4 × 4 matrix and two power law values; 3) finally, we undo the correction made in the first step if necessary (in case the original reference image is log-encoded). We refer the reader to the flowchart of the proposed model in Figure 2. We have made the code for our implementation available at http://ip4ec.upf.edu/ColorMatching.
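The outline above can be summarized in code. The sketch below is our own summary, not the released implementation: it covers steps 1 and 3, plus the bidirectional SIFT matching described in Section II-B (the RGB values sampled at the matched locations feed the estimation sketched there):

```python
import numpy as np
import cv2  # OpenCV, used here only to obtain SIFT correspondences

def T(img, is_log):
    # step 1: power 10 transform for log-encoded inputs (Section II-A)
    return np.power(10.0, img) if is_log else img

def T_inv(img, is_log):
    # step 3: go back to the log domain when the reference was log-encoded
    return np.log10(np.clip(img, 1e-6, None)) if is_log else img

def mutual_matches(gray_r, gray_s, ratio=0.75):
    """SIFT matches computed in both directions; only matches found in
    both I_r <-> I_s directions are kept. Inputs are uint8 grayscale and
    are assumed to yield at least two neighbors per query."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(gray_r, None)
    kp_s, des_s = sift.detectAndCompute(gray_s, None)
    bf = cv2.BFMatcher()
    fwd = {m.queryIdx: m.trainIdx
           for m, n in bf.knnMatch(des_r, des_s, k=2)
           if m.distance < ratio * n.distance}
    bwd = {m.queryIdx: m.trainIdx
           for m, n in bf.knnMatch(des_s, des_r, k=2)
           if m.distance < ratio * n.distance}
    keep = [(q, t) for q, t in fwd.items() if bwd.get(t) == q]
    pts_r = np.float32([kp_r[q].pt for q, t in keep])
    pts_s = np.float32([kp_s[t].pt for q, t in keep])
    return pts_r, pts_s

# hypothetical driver; estimate_stabilization is sketched in Section II-B:
# I_r2, I_s2 = T(I_r, r_is_log), T(I_s, s_is_log)
# gamma_r, gamma_s, H_s = estimate_stabilization(I_r2, I_s2)
# result = T_inv(apply_stabilization(I_s2, gamma_s, gamma_r, H_s), r_is_log)
```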
A special case is when dealing with HLG and PQ encoded images. In this case, we proceed as if the images were log-encoded. Please note that this is an approximation, as these curves do not follow the definition in Equation (2). This said, the approximation works extremely well, as shown in Section IV.

A. Conversion of Log-Encoding to Gamma Correction
Let us consider a log-encoded image as in Equation (2). If we apply a power 10 function to it (denoted as T(·)), we obtain the following expression:

T(I_out) = 10^{I_out} = 10^d · (a · A · I_lin + b)^c.    (3)

In logarithmic encoding curves, the value of the parameter b is usually small. As Figure 3 shows, for three different logarithmic curves (continuous lines), their equivalent curves fixing b = 0 (dashed lines) lie on top of them. Therefore, we can simplify Equation (3) by neglecting b:

T(I_out) ≈ (K · I_lin)^c,    (4)

where K = a · 10^{d/c} · A is a matrix with the same size as A. Notice how Equation (4) has the same form as Equation (1). Therefore, by applying the power 10 function to a log-encoded image, it can be treated as a regular gamma-corrected picture.
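The identity behind Equation (4) is easy to verify numerically. A minimal sketch for the scalar case (A = 1), with hypothetical log-curve parameters:

```python
import numpy as np

a, b, c, d = 5.556, 0.052, 0.247, 0.386   # hypothetical log-curve parameters
x = np.linspace(0.05, 1.0, 8)             # linear intensities

i_out = c * np.log10(a * x + b) + d       # Equation (2), scalar A = 1
t_exact = 10.0 ** i_out                   # Equation (3): 10^d (a x + b)^c

k = a * 10.0 ** (d / c)                   # K = a * 10^(d/c) * A
t_approx = (k * x) ** c                   # Equation (4), b neglected

print(np.max(np.abs(t_exact - t_approx)))  # small when b is small
```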

B. Color Stabilization
Fig. 3. Graph of three logarithmic encoding curves, ARRI Log C at EI 320 (green), ARRI Log C at EI 1280 (magenta) [2], and S-Log [3] (blue), plotted as continuous lines, together with the same logarithmic curves setting b = 0 in their definitions (dashed lines). Note that the distance between the dashed and continuous lines of the same color is negligible.

If I_r or I_s (or both) are log-encoded, we transform them into gamma-corrected images I_r and I_s, as explained in the previous section. Then we compute a set of correspondences pts_r and pts_s; we use SIFT [32] for this purpose, although it can be replaced by any other method. It is important to note that we compute the correspondences between I_r ↔ I_s and I_s ↔ I_r, and select those that appear in both directions. This allows us to discard some potentially incorrect correspondences. Let us now define the pixel values at the corresponding locations of I_r and I_s as

pts_r(i), pts_s(i), i = 1, . . . , N,    (5)

where N denotes the number of correspondences. We follow the idea of the color stabilization model proposed in [24], where H_s was a 3 × 3 matrix that transforms colors from the source to match those of the reference, and γ_r, γ_s are inverse gamma correction values. In this work, we extend the matrix H_s to a projective transformation of size 4 × 4 (inspired by color homography [19], [20]). By doing this, the model can deal not only with pixels in the core of the color gamut, but also with those values that appear on the border, which are the most affected by gamut mapping and tone mapping. Then, from the set of correspondences, we can build a system of equations, considering the 4 × 4 matrix and homogeneous coordinates,

pts_r(i) ∼ (H_s · (pts_s(i))^{γ_s})^{1/γ_r}, i = 1, . . . , N,    (6)

where {γ_s, γ_r, H_s, H_r} are the unknowns. We perform a single optimization process, where the only constraint is H_r · H_s ∼ Id (the identity). This constraint assures that the transformation H_s has an inverse, represented by H_r (the matrix that would transfer the colors of the reference into the source). The objective function considers the 3 × 1 non-homogeneous coordinates. It accounts for the differences between (R, G, B) points, plus the differences in the CIELab color space. In this way, we bring the corresponding point clouds (color matched and reference) as close as possible, both in terms of the display RGB color space and in terms of perceptual color differences in the CIELab space:

argmin_V  E_RGB + E_Lab  subject to  H_r · H_s ∼ Id,    (7)
where V = {γ_r, γ_s, H_r, H_s} is the set of unknowns, and E_RGB and E_Lab are the errors in the RGB and CIELab color spaces, respectively. These terms are defined as

E_RGB = Σ_i ( ||g1_V(RGB_s(i)) − RGB_r(i)||² + ||g2_V(RGB_r(i)) − RGB_s(i)||² ),
E_Lab = Σ_i ( ||Lab(g1_V(RGB_s(i))) − Lab(RGB_r(i))||² + ||Lab(g2_V(RGB_r(i))) − Lab(RGB_s(i))||² ),

where RGB_s, RGB_r are the (R, G, B) values of the corresponding points pts_s and pts_r, Lab(·) corresponds to the color transformation from RGB to Lab, and finally the functions g1_V(·) and g2_V(·) are defined as

g1_V(x) = (H_s · x^{γ_s})^{1/γ_r},    g2_V(x) = (H_r · x^{γ_r})^{1/γ_s}.

Finally, the matrices and non-linearities are applied to the entire images as in Equation (6), and we obtain the color matched image:

I_{s→r} = (H_s · (I_s)^{γ_s})^{1/γ_r}.
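A compact sketch of this optimization follows. Here pts_s and pts_r hold the RGB triplets of the corresponding points; the relative weighting of the two terms, the soft penalty replacing the hard constraint H_r · H_s ∼ Id, and the solver are our own assumptions, and skimage's rgb2lab stands in for Lab(·):

```python
import numpy as np
from scipy.optimize import minimize
from skimage.color import rgb2lab

def hom(p):                       # N x 3 -> N x 4 homogeneous coordinates
    return np.hstack([p, np.ones((len(p), 1))])

def dehom(p):                     # N x 4 -> N x 3
    return p[:, :3] / p[:, 3:4]

def lab(p):                       # Lab(.) on a list of RGB points
    return rgb2lab(p.reshape(-1, 1, 3)).reshape(-1, 3)

def g(pts, gamma_in, gamma_out, H):
    """g1_V / g2_V: linearize, apply the 4 x 4 projective map, re-encode."""
    lin = np.clip(pts, 1e-6, 1.0) ** gamma_in
    return np.clip(dehom(hom(lin) @ H.T), 1e-6, None) ** (1.0 / gamma_out)

def objective(v, pts_s, pts_r, w_pen=10.0):
    gamma_r, gamma_s = v[0], v[1]
    H_r, H_s = v[2:18].reshape(4, 4), v[18:34].reshape(4, 4)
    s2r = g(pts_s, gamma_s, gamma_r, H_s)        # g1_V(pts_s)
    r2s = g(pts_r, gamma_r, gamma_s, H_r)        # g2_V(pts_r)
    e_rgb = np.sum((s2r - pts_r) ** 2) + np.sum((r2s - pts_s) ** 2)
    e_lab = (np.sum((lab(s2r) - lab(pts_r)) ** 2) +
             np.sum((lab(r2s) - lab(pts_s)) ** 2)) / 100.0 ** 2
    pen = np.sum((H_r @ H_s - np.eye(4)) ** 2)   # soft H_r . H_s ~ Id
    return e_rgb + e_lab + w_pen * pen

# initial guess: gammas of 2.2 and identity matrices; Nelder-Mead is a
# simple derivative-free choice for this sketch, not the paper's solver
# v0 = np.concatenate([[2.2, 2.2], np.eye(4).ravel(), np.eye(4).ravel()])
# res = minimize(objective, v0, args=(pts_s, pts_r), method='Nelder-Mead')
```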

C. Conversion Back to Log-Encoded Images
If the reference I_r was originally log-encoded, we apply a log10 function, denoted as T^{-1}(·), to the result of the previous step so as to undo the power 10 transform we applied at the beginning.

III. EXPERIMENTS WITH GAMMA AND LOGARITHMIC ENCODING NON-LINEARITIES
This section is divided into three parts. First, we describe how we created an image dataset for evaluation. Second, we compare our approach with seven popular color matching methods. Third, we evaluate the performance of the other methods when the proposed power 10 transform is also applied to log-encoded images.

A. Dataset
Our data is composed of different scenes, where each of them contains a reference image Ref, a source image Src and a ground truth image GT. In order to acquire our data, we work in camera manual mode to have full control over exposure time, white balance, ISO value, and aperture. We stored RAW and JPEG formats for each image. In that way, we have the linear information read by the camera sensor (RAW), as well as the final compressed image (JPEG). Images were taken using two camera models, a Nikon D3100 (12 bits) and a Canon EOS 80D (14 bits). Let us explain the steps we follow from acquisition to the final triplet of Ref, Src, and GT images for each scene:
• Set the parameters of the camera P_1 (exposure time, white balance, ISO value, and aperture) and the camera position Perspective_1, and acquire the Reference (Ref) set (RAW and JPEG).
• Use the same camera parameters (P_1) as in the Ref set, change the camera position to Perspective_2, and acquire the Ground-Truth (GT) set (RAW and JPEG).
• From the last camera position Perspective_2, vary the camera settings to a different configuration P_2 to acquire the Source (Src) set.
• For each pair (RAW, JPEG), obtain {I, nonlin}:
1) Preprocess the RAW input to obtain an RGB linear image using the DCRAW [33] open source code; we refer to this image as RAW_rgb.
2) Estimate the γ correction curve between the preprocessed RAW_rgb and the JPEG using [24].
3) Undo γ from the JPEG image in order to obtain a linear image, called I, with the camera color processing still in.
4) Apply a randomly generated non-linearity to I (see the sketch after this list). In the case of gamma correction, we select values in the range [1.7, 2.7]; for logarithmic curves, we select the definitions from ARRI Log C (a total of 11 curves) [2] and Sony's S-Log [3]. We name the applied non-linearity nonlin.
5) In the case of GT, the same nonlin as the one selected for the reference is applied.
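A sketch of step 4, assuming the log curves are stored as (a, b, c, d) parameter tuples of Equation (2); the specific tuples below are placeholders, not the ARRI or Sony definitions:

```python
import numpy as np
import random

# placeholder (a, b, c, d) tuples standing in for the 11 ARRI Log C
# definitions [2] and Sony's S-Log [3]
LOG_CURVES = [(5.556, 0.052, 0.247, 0.386), (5.556, 0.052, 0.241, 0.391)]

def apply_random_nonlin(I):
    """I is the linear image from step 3; returns (image, nonlin)."""
    if random.random() < 0.5:
        gamma = random.uniform(1.7, 2.7)            # gamma correction
        return np.power(I, 1.0 / gamma), ('gamma', gamma)
    a, b, c, d = random.choice(LOG_CURVES)          # logarithmic curve
    return c * np.log10(a * I + b) + d, ('log', (a, b, c, d))
```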

B. Results and Comparisons
We evaluate our approach against seven state-of-the-art methods for color transfer, stabilization and consistency: Reinhard et al. [8] (Reinhard), Kotera [14] (Kotera), Xiao and Ma [15] (Xiao), Pitié et al. [10] (Pitie), Ferradans et al. [12] (Ferradans), Park et al. [29] (Park), and Gil Rodríguez et al. [27] (Gil). We want to emphasize that for Pitié et al. [10] we focus only on the global part of the method. We studied all possible combinations of non-linearities applied to the reference and source images: 1) two gamma-corrected images, 2) two log-encoded images, 3) a log-encoded reference and a gamma-corrected source, and 4) a gamma-corrected reference and a log-encoded source. In the quantitative evaluation we select the following color metrics:
• PSNR of the luminance channel (PSNR L);
• color PSNR (CPSNR), defined as the mean of the PSNR over the three color channels;
• root mean squared error (RMSE);
• ΔE_00 [34], which accounts for 'perceptually uniform' differences in the CIELab color space;
• CID [35], the color extension of SSIM [36], which is therefore expected to capture errors more perceptually than PSNR.
For each metric we show the mean (μ) and the median (μ̃).
Notice that in order to compare the color-stabilized result and the GT in the case of log-encoded images, we first undo the ground-truth non-linearity (since it is known) from both the result and the GT, and then we apply a gamma correction of value 1/2.2. We use the data computed as described in Section III-A, which consists of 35 image pairs. In all the tables of quantitative comparisons we show the best results in green, and the second and third best in blue and orange, respectively. In Figures 5 and 6, the log-encoded images are shown in sRGB for display purposes (as before, we first discount the ground-truth non-linearity, and then we apply the sRGB gamma).
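For reference, a sketch of the two PSNR-based metrics; the exact luminance definition used for PSNR L is not given in the text, so the Rec. 709 luma below is an assumption:

```python
import numpy as np

def psnr(x, y, peak=1.0):
    return 10.0 * np.log10(peak ** 2 / np.mean((x - y) ** 2))

def psnr_l(img, gt):
    # PSNR of the luminance channel; Rec. 709 luma is our assumption here
    w = np.array([0.2126, 0.7152, 0.0722])
    return psnr(img @ w, gt @ w)

def cpsnr(img, gt):
    # CPSNR: mean of the PSNR over the three color channels
    return np.mean([psnr(img[..., c], gt[..., c]) for c in range(3)])
```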

1) Gamma-Corrected Inputs:
The first block in Table I presents the results for pairs encoded using gamma correction. In most of the metrics our method outperforms the rest of the algorithms, except for the median values of ΔE_00 and PSNR L, where Park et al. [29] obtains better results.
In the first row of Figure 5, we show the results for one scene. The first column shows the reference, the second the source, and the third the GT. In this example, we compare our algorithm (last column) against the method of Ferradans et al. [12] (fourth column). We can see that this method loses color saturation in general, and introduces gray colors on the floor.
2) Log-Encoded Inputs: The second block of Table I shows that our method outperforms the rest of the algorithms in all the metrics.
In Figure 5 (second and third rows), Gil Rodríguez et al. [27] is not able to darken the green color of the grass, which remains closer to the vivid look of the source image. In the second scene, the output of Pitié et al. [10] cannot recover the red color of the garage in the background, presents a purplish color in one of the doors, and makes some clouds appear in the sky.
3) Log-Encoded Reference and Gamma-Corrected Source: In the third block of Table I, our proposed method outperforms the rest of the algorithms in all the metrics.
Figure 5 shows the results of Xiao and Ma [15] (fourth row) and Park et al. [29] (fifth row) for this case. In the first scene, notice that Xiao and Ma enhances the yellow and red colors and saturates the upper right corner of the wall. The method of Park et al. shows a color shift on the floor, and intensifies the purple color on the right side.

4) Gamma-Corrected Reference and Log-Encoded Source:
The last block in Table I presents the results where the reference is a gamma-corrected image and the source is log-encoded. For all the metrics, our method outperforms the rest of the algorithms.
Figure 5 shows the results of Reinhard et al. [8] and Kotera [14] (last two rows) for this case. The result of the former method shows a yellowish cast on the wall. The latter method presents washed-out colors, e.g. the chair and the wall behind it.

C. Results With Power 10
In this subsection we analyse the performance of the rest of the methods when also applying a power 10 function to them.
More in detail, we first apply the power 10 function to the log-encoded inputs, we then apply the selected method to the new images, and finally we undo the power 10 if necessary. From now on, we refer to this process as method_10. The results for all the comparisons and methods are presented in Table II. Our results and the results of Gil Rodríguez et al. from the previous table are also included for comparison purposes.
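This preprocessing can wrap any of the compared methods. A minimal sketch, assuming method(src, ref) returns the color-matched source (the function names here are ours):

```python
import numpy as np

def method_10(method, src, ref, src_is_log, ref_is_log):
    """Apply 'method' in the gamma-like domain given by the power 10
    transform, then return to the reference's original domain."""
    s = np.power(10.0, src) if src_is_log else src
    r = np.power(10.0, ref) if ref_is_log else ref
    out = method(s, r)
    # the matched image lives in the reference's domain, so undo the
    # power 10 only if the reference was log-encoded
    return np.log10(np.clip(out, 1e-6, None)) if ref_is_log else out
```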
1) Log-Encoded Inputs: The results (first block) show a considerable improvement between the original methods and their versions after applying the power 10. The only exceptions are the algorithms of Pitié et al. [10] and Ferradans et al. [12], which perform similarly with and without the power 10.
2) Log-Encoded Reference and Gamma-Corrected Source: Results show a considerable difference between the original methods and their versions after the power 10 preprocessing. Notice the boost of Park et al., which improves significantly over its original version: it ranks second after our approach in most of the metrics, and in median PSNR L and CPSNR it obtains the best results, see Table II (second block). In Figure 6, the Park_10 method presents no color shift on the floor, although it cannot completely recover the yellow color of the truck.
3) Gamma-Corrected Reference and Log-Encoded Source: In this case, although Park et al. improves over its original results, the gain is not as noticeable as in the previous comparison. Here Pitie_10 outperforms Park_10, as opposed to the previous case, showing a more consistent performance across both cases.
The results of our experiments show that the proposed framework (applying a power 10 function to log-encoded images) boosts the performance of the majority of the methods we compare with, both in terms of quantitative metrics and image quality. The exceptions are the algorithms of Pitié et al. [10] and Ferradans et al. [12]. These two methods define the input images as probability density functions in order to match them, which allows them to better adapt to modifications in the range of the input images.

IV. EXPERIMENTS WITH HDR ENCODINGS
In this section, we color match pairs of images encoded using different transfer functions: the PQ, HLG and ARRI Log C curves.
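For context, both HDR transfer functions are fully specified in the standards. A sketch of the linear-to-signal direction, using the published constants (PQ from [4], HLG from [5]); please consult the standards for the normative definitions:

```python
import numpy as np

def pq_encode(Y):
    """PQ inverse EOTF; Y is linear luminance normalized to 10000 cd/m^2."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    Ym = np.power(np.clip(Y, 0.0, 1.0), m1)
    return np.power((c1 + c2 * Ym) / (1.0 + c3 * Ym), m2)

def hlg_encode(E):
    """HLG OETF; E is normalized linear scene light in [0, 1]."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    E = np.clip(E, 0.0, 1.0)
    return np.where(E <= 1.0 / 12.0,
                    np.sqrt(3.0 * E),
                    a * np.log(np.maximum(12.0 * E - b, 1e-6)) + c)
```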

A. Dataset
The dataset we use for these experiments is the one provided by ARRI, which contains HDR videos. The linear RAW data is obtained using the ARRIRAW Converter [39]. We select three different scenes, and for each scene we set a reference image encoded with one of 3 different options {PQ, HLG, Log C} (a random ARRI Log C curve). Then, we build the data by selecting all the possible combination pairs for each reference image (a total of 9 pairs). We add an extra pair comparing two different ARRI Log C curves. Therefore, we have a total of 10 image pairs.

B. Results and Comparisons
We compare our method, described in Section II, with the algorithms presented in the previous experiments in subsection III-B: Reinhard et al. [8] (Reinhard), Kotera [14] (Kotera), Xiao and Ma [15] (Xiao), Pitié et al. [10] (Pitie), Ferradans et al. [12] (Ferradans), Park et al. [29] (Park), and Gil Rodríguez et al. [27] (Gil). The last method also considers the inputs as log-encoded images. In order to compute the quantitative results, we undo the non-linearity (since it is known) of both the resulting color matched image and the GT, and then apply a γ correction of 1/2.2, as done in the previous experiments. From the data in Table III, it is apparent that our method is accurate when working with real data and common situations. Figure 7 presents the image results, where we show the GTs and our results after applying the tone mapping operator (TMO) from [37]. The reference and the source are presented without tone mapping: after applying the TMO to the HDR images, the main differences between the encodings can no longer be appreciated. For this reason, we decided to present the inputs without TMO, since this gives a general understanding of the encoded images. As can be seen in the last column of Figure 7, our method recovers the colors and appearance of the reference image in the different input situations. We show 3 different scenes (rows), and for each scene: the reference (first column), the source (second column), the GT (third column) and our result (last column). Notice that in the last row, where the reference is PQ-encoded and the source HLG-encoded, our result (last column) is not able to completely recover the blue of the t-shirt in the upper left corner. In our output, the blue appears brighter than in the GT. This is due to the fact that no correspondences are available in this particular hue, and thus the recovery is not perfect.

V. CONCLUSION
In this paper we have presented a method for the color matching of different image views encoded with unknown non-linear curves. The method is based on the modification of logarithmically encoded images so that they behave as gamma-corrected ones. In this way, we can color stabilize the images by estimating a 4 × 4 matrix and power law values. Our results show that our method outperforms state-of-the-art algorithms quantitatively and qualitatively. In future work, we would like to explore the more general case where no content is shared between the input images.

Fig. 2. Flowchart of the proposed color stabilization method. Given two non-linearly encoded images, reference (I_r) and source (I_s), we apply the transformations T_r and T_s to the image pair. These transformations are defined as the power 10 function, 10^x, in the case of a log-encoded input, and as the identity in the case of a gamma-corrected input. Then, we compute a set of correspondences pts_r and pts_s using a standard feature descriptor (in this article, SIFT [32]). From this set of corresponding pixel locations, we estimate the parameters {γ_r, γ_s, H_s} on the pixel value correspondences. The computed values are applied to the T_s(I_s) image. Finally, the T_r^{-1} function is applied to the color matched image.

Fig. 4. Evaluation framework. On the left, the data acquisition is described. Pictures are taken of the same scene from two different points of view, Perspective_1 and Perspective_2. From the first one, the reference image is taken, and from the second, the source and the ground truth. Images are stored in RAW and JPEG format, and we choose different camera settings: parameters P_1 for the reference and the ground truth, and P_2 for the source. In the middle, the data is created by linearizing the JPEG image, i.e., undoing the gamma correction to obtain I. Once linearized, a random non-linearity is applied, and the new image and the non-linearity are stored as {I, nonlin}. Finally, the reference Ref and source Src become the input images for the color matching methods, and the corresponding output is evaluated against the GT.

Fig. 5. Results of all the methods for the four comparisons. Each block represents: 1) a gamma-corrected image pair, 2) log-encoded input images, 3) a log-encoded reference and gamma-corrected source, and 4) a gamma-corrected reference and log-encoded source. The first column presents the reference, the second shows the source, the third the GT, the fourth the method's result, and the last our result. We present results for: 1) Ferradans et al. [12]; 2) Gil Rodríguez et al. [27] and Pitié et al. [10]; 3) Xiao and Ma [15] and Park et al. [29]; and 4) Reinhard et al. [8] and Kotera [14].

Fig. 6. Example of applying the power 10 function to Park et al. [29]. The input images are a log-encoded reference and a gamma-corrected source. The first column presents the output of the original method from [29], the second shows the output of [29] after applying the power 10 (Park_10), the third shows the GT, and the last column shows our result.

Fig. 7. Results for the HDR encoding experiments. The GTs and our results are tone mapped using [37]. Images from ARRI [38]. In the case of the PQ curve, we set the absolute luminance of the display to 1000 cd/m².

TABLE I
RESULTS FROM THE COMPARISON AMONG 35 IMAGE PAIRS FOR: 1) TWO γ-ENCODED IMAGES, 2) TWO LOG-ENCODED IMAGES, 3) REFERENCE LOG-ENCODED AND SOURCE γ-CORRECTED, AND 4) REFERENCE γ-CORRECTED AND SOURCE LOG-ENCODED

TABLE II
RESULTS FROM THE COMPARISON AMONG 35 IMAGE PAIRS FOR: 1) TWO LOG-ENCODED IMAGES, 2) REFERENCE LOG-ENCODED AND SOURCE γ-CORRECTED, AND 3) REFERENCE γ-CORRECTED AND SOURCE LOG-ENCODED. IN THIS CASE, WE APPLIED THE POWER 10 TO THE INPUTS (IF NECESSARY) FOR ALL ALGORITHMS EXCEPT GIL

TABLE III
RESULTS SHOW MEAN (μ) AND MEDIAN (μ̃) AVERAGES OVER 10 PAIRS, WHERE REFERENCE AND SOURCE IMAGES ARE ENCODED USING HLG, PQ AND LOGARITHMIC CURVES. IN THE CASE OF THE PQ CURVE, WE SET THE ABSOLUTE LUMINANCE OF THE DISPLAY TO 1000 cd/m²