Pixel-wise Orthogonal Decomposition for Color Illumination Invariant and Shadow-free Image

In this paper, we propose a novel, effective and fast method to obtain a color illumination invariant and shadow-free image from a single outdoor image. Unlike state-of-the-art shadow-free image methods that require either shadow detection or statistical learning, we set up a linear equation system for each pixel value vector based on physically-based shadow invariants, deduce a pixel-wise orthogonal decomposition of its solutions, and thereby obtain an illumination invariant vector for each pixel value vector of an image. The illumination invariant vector is the unique particular solution of the linear equation system that is orthogonal to its free solutions. Combining this illumination invariant vector with the Lab color space, we propose an algorithm to generate a shadow-free image that well preserves the texture and color information of the original image. A series of experiments on a diverse set of outdoor images, together with comparisons against state-of-the-art methods, validate our method.


Introduction
Shadows, a physical phenomenon observed in most natural scenes, are problematic for many computer vision algorithms. They often cause erroneous results in edge detection, segmentation, target recognition and tracking. For these reasons, shadow processing and illumination invariance on an image are of great practical significance and have attracted a lot of attention [1][2][3][4][5][6]. Some previous works dealt with this problem by utilizing additional information from multiple images [7][8][9], time-lapse image sequences [2,10,11] or user inputs [9,12,13]. Although these methods achieve impressive results for illumination invariant and shadow-free images, they cannot be applied automatically to a single image. In general, a robust method for illumination invariance and shadow removal on a single image is both more desirable and more challenging. Recent works on obtaining a shadow-free image from a single image can be divided into two categories: methods that need shadow detection and methods that do not.
Methods with shadow detection. Most works on shadow-free images undergo two basic steps: shadow detection and shadow removal [14]. A fair proportion of shadow detection methods are based on shadow features with statistical learning [15][16][17][18]. The most commonly used shadow features are intensity, histograms, texture, color ratio, geometric properties, and gradient. Lalonde et al. [16] obtained shadow removal results in consumer-grade photographs by a conditional random field (CRF) learning approach on color ratio, texture and intensity distributions. Based on an initial segmentation, the method proposed by Guo et al. [15] detects shadows using an SVM on shadow features (e.g., texture, chromaticity and pairwise relations) and then relights each pixel to obtain a shadow-free image. These shadow features may not be robust enough in some applications. For example, shadowed regions are often dark, with little texture and little gradient, but some non-shadowed regions may have similar characteristics. Due to the lack of robust shadow features, these feature- and learning-based methods are not always effective, especially in soft shadow regions or complex scenarios. Besides, all these learning-based methods are usually time-consuming, since large, complex features need to be analyzed by the classifiers. Some other works obtain a shadow-free image with shadow detection based on computing physically-based illumination invariants [19][20][21][22]. With a physically-based illumination invariant, Finlayson et al. [19,20] proposed a method to obtain a shadow-free image by shadow edge detection and solving a Poisson equation. In [22], we deduced a linear relationship between the pixel values of a surface inside and outside shadow regions, and then proposed a shadow removal method based on shadow edge detection. Under the condition of correct shadow edge detection, the method proposed by Arbel and Hel-Or [14] yields a shadow-free image by finding scale factors to cancel the effect of shadows. All these methods require shadow edges to be correctly detected. However, automatic edge detection is known to be non-robust [23], especially for complex scenarios and textures (such as the shadows of leaves and grass). The defects of these shadow-detection-based methods are discussed in [24]. Even when the shadows are detected correctly, retaining the original color and texture in a shadow-free image remains a challenging problem [14,25].
Methods without shadow detection. Another effective approach to shadow-free images is intrinsic image estimation via image decomposition. Tappen et al. [17] proposed a learning-based approach to predict the derivatives of reflectance and illumination images. Although this method successfully separates reflectance and shading for the illumination direction used in training, it is not designed for arbitrary lighting directions [24]. Assuming that neighboring pixels with similar intensity values should have similar reflectance, Shen et al. [26] proposed both an automatic and a user-scribble method to separate the intrinsic image from a single image via optimization. Although their automatic method extracts most of the illumination variations, residual shadows remain in the reflectance image (see Fig. 8 (r2)), and they stated that user scribbles cannot improve the result significantly. Finlayson et al. [20] derived a grayscale shadow-free image by finding a special direction in a 2-D chromaticity feature space. In [22], we obtained a grayscale shadow-free image based on a physically deduced linear model from the viewpoint of atmospheric transmittance effects. These grayscale shadow-free images have limitations in real applications due to the loss of color information. Assuming narrowband camera sensors in their derivation, Maxwell et al. [27] found that a 2-D illumination invariant image can be obtained by projecting the colors (in log-RGB) onto a plane. In their method, the projection normal vector is determined either from a user input or from a 2-D entropy search, which may limit real applications. Under the assumptions of chromatic surfaces, Planckian lighting and narrowband camera sensors, Yang et al. [24] obtained a shadow-free image using bilateral filtering without shadow detection. The comparison results in [24] show that this method outperforms Finlayson's [19] (a classical shadow-free image method that needs shadow detection). However, based solely on chromaticity, this method may fail on neutral regions whose neighboring regions are also neutral [24]. The experimental results presented in Section 4 of this paper also show that bilateral filtering often makes the method in [24] fail to recover a shadow-free image when shadows are dark or pervade the scene. In this paper, we propose a novel, effective and efficient method to obtain a shadow-free image from a single outdoor image without feature extraction or shadow detection. The research is based on our previous work on deriving a grayscale illumination invariant image from a physically-based shadow linear model [22]. With three different grayscale illumination invariant images (e.g., Fig. 1 (b)) from this linear model [22], we set up three linear equations for each pixel value vector. The solution space of these linear equations is decomposed into a one-dimensional nullspace (free solution space) and a particular solution. The free solution, a representation of the illuminant ratio, is determined only by the illumination condition. The particular solution, which is perpendicular to the free solutions, is unique and illumination invariant. It retains the basic color information of the original pixel value vector. We therefore name this process pixel-wise orthogonal decomposition for color illumination invariance. Because this color illumination invariant image (e.g., Fig. 1 (c)) still has some color distortion compared with the original image, we combine the color illumination invariant image with the Lab color space and propose an algorithm to generate a shadow-free image (e.g., Fig. 1 (d)) that better preserves the color and texture information of the original image.
The main contributions of this paper are: (1) We propose a new pixel-wise orthogonal decomposition method to obtain a shadow-free image without feature extraction or shadow detection.
(2) Our method requires only a single pixel-wise pass over the image and can run in real time.
(3) The pixel-wise orthogonal decomposition retains the original texture well. (4) The color illumination invariant image yields an identical reflectivity result regardless of the illumination condition, and is thus invariant to both different illumination conditions and shadows.
The rest of the paper is organized as follows. In Section 2, we briefly introduce the derivation of the grayscale illumination invariant image and then present our orthogonal decomposition method for obtaining a color illumination invariant image from a single image. In Section 3, we present a color restoration method for generating a shadow-free image from the color illumination invariant image. Quantitative and qualitative experiments on a diverse set of shadow images are given in Section 4. We end this paper with a brief discussion and conclusion in Section 5.

Pixel-wise orthogonal decomposition and color illumination invariant image
In this section, we first briefly review our previous shadow linear model and the derivation of the corresponding grayscale illumination invariant image. We then present our pixel-wise orthogonal decomposition method for obtaining a color illumination invariant image from a single image.

Pixel-wise orthogonal decomposition
2.1.1. Shadow linear model on an image [22]

Given the spectral power distributions (SPD) of the illumination E(λ), the object reflectance S(λ) and the sRGB color matching functions Q_H(λ), the RGB tristimulus values in sRGB color space are

F_H = ∫ E(λ) S(λ) Q_H(λ) dλ, (1)

where H = {R, G, B} denotes the red, green and blue channels respectively. It is revealed that the sRGB tristimulus values of a surface illuminated by daylight are proportional to those of the same surface illuminated by skylight in each of the three color channels, i.e.,

F_H = K_H · f_H, (2)

where F_H denotes the RGB values of a surface in the non-shadow area and f_H denotes the RGB values of the same surface in the shadow area. The proportionality coefficients K_H are independent of reflectance and are approximately equal to constants determined by

K_H ≈ ∫ E_d(λ) Q_H(λ) dλ / ∫ E_s(λ) Q_H(λ) dλ, (3)

where E_d and E_s denote the SPDs of daylight and skylight respectively.
From Eq. 2, the following equation holds:

log(F_R + 14) − β_1 log(F_G + 14) ≈ log(f_R + 14) − β_1 log(f_G + 14), (4)

where

β_1 = log K_R / log K_G. (5)

For a pixel in an image, Eq. 4 represents a shadow invariant: for an arbitrary pixel and its RGB value vector, the value of Eq. 4 stays the same no matter whether the pixel lies in a shadow region or not. A grayscale illumination invariant image is thus obtained. Two results of this type of illumination invariant image are shown in Fig. 2. The grayscale illumination invariant image has limitations in some applications due to the loss of color and contrast information. We would like an illumination (including shadow) invariant image that retains the color information of the original image and appears the same as the original image without shadows. For this purpose, based on the model in the previous section, we deduce a pixel-wise orthogonal decomposition and obtain an illumination invariant image that retains the basic color information of the original image. For an RGB value vector (v_R, v_G, v_B)^T, we first define the following grayscale image according to Eq. 4:

I_1 = log(v_R + 14) − β_1 log(v_G + 14). (6)

Similar to Eq. 4, we then obtain the following equations:

I_2 = log(v_G + 14) − β_2 log(v_B + 14), (7)
I_3 = log(v_R + 14) − β_3 log(v_B + 14), (8)

where

β_2 = log K_G / log K_B, β_3 = log K_R / log K_B. (9)

For convenience, let u = (u_R, u_G, u_B)^T denote the Log-RGB value vector of a pixel, where u_H = log(v_H + 14), H = {R, G, B}. From Eq. 6, Eq. 7 and Eq. 8, we have the following linear equations:

u_R − β_1 u_G = I_1,
u_G − β_2 u_B = I_2, (10)
u_R − β_3 u_B = I_3,

where I_i, i = 1, 2, 3, correspond to the invariant values calculated by Eq. 6, Eq. 7 and Eq. 8 respectively. The matrix form of Eq. 10 is

A u = I, (11)

where

A = [1, −β_1, 0; 0, 1, −β_2; 1, 0, −β_3], I = (I_1, I_2, I_3)^T. (12)

According to the definitions and calculations of β_1, β_2 and β_3, we have β_3 = β_1 β_2, which leads to rank(A) = 2. For a given image, from algebraic theory, we know that Eq. 11 has an infinite number of solutions. The solution space can be decomposed into a particular solution plus a one-dimensional nullspace. For a pixel in an image, we know only one of the solutions, namely the pixel's Log-RGB value vector. We aim to find the one solution of Eq. 11 that is illumination invariant. In the following, we prove that this solution exists and is unique.
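The reflectance-cancelling nature of the shadow invariant can be checked numerically. The sketch below assumes a two-channel log-ratio invariant of the form I = log(R) − β log(G) with β = log K_R / log K_G; the per-channel daylight/skylight ratios K_H are hypothetical, illustrative values, and the +14 offset is omitted so that the cancellation is exact.

```python
import numpy as np

# Hypothetical per-channel daylight/skylight ratios (not measured values):
# F_H = K_H * f_H for a surface seen out of shadow (F) and in shadow (f).
K_R, K_G, K_B = 3.1, 2.6, 2.2

# beta is a ratio of log-ratios, so the surface reflectance cancels out.
beta = np.log(K_R) / np.log(K_G)

def invariant(rgb):
    """Grayscale invariant of the form I = log(R) - beta * log(G)."""
    r, g, _ = rgb
    return float(np.log(r) - beta * np.log(g))

# A lit pixel F and the same surface in shadow, f_H = F_H / K_H.
F = (120.0, 80.0, 60.0)
f = (F[0] / K_R, F[1] / K_G, F[2] / K_B)

print(abs(invariant(F) - invariant(f)))  # ~0: same value in and out of shadow
```

The invariance holds for any reflectance, because the surface term appears identically in both channels and is eliminated by the log-difference.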
From algebraic theory, any solution u of Eq. 11 can be expressed as

u = u_s + α u_0, (13)

where u_s is an arbitrary particular solution and u_0 is the normalized free solution of Eq. 11, such that A u_0 = 0 and ||u_0|| = 1, α ∈ R. The symbol ||·|| denotes the L2 norm. One free solution is

u_0' = (β_1 β_2, β_2, 1)^T. (14)

From Formula 14, the normalized free solution u_0 is calculated as

u_0 = u_0' / ||u_0'||. (15)

Here the free solution has no relationship with the image itself but is determined by the matrix A, i.e., by the illumination condition. Formula 14 further reveals that this free solution is a representation of the ratio of the illuminants in the imaging environment.
Given a particular solution u_s and the normalized free solution u_0 of Eq. 11, according to Eq. 13 and algebraic theory, we define u_p as

u_p = u_s + α_p u_0, α_p = −⟨u_s, u_0⟩, (16)

which is a particular solution of Eq. 11, where ⟨·,·⟩ denotes the vector inner product. It can be proved that u_p, satisfying u_p ⊥ u_0, is the unique such particular solution of Eq. 11 (the proof of the uniqueness of u_p is provided in Appendix A). A Log-RGB value vector u, whether in shadow or not, is a solution of Eq. 11 and can be decomposed as

u = u_p + α u_0, (17)

where α = ⟨u, u_0⟩ and u_p = u − α u_0 is the unique particular solution of Eq. 11 orthogonal to u_0. Because u_p ⊥ u_0, we call this process pixel-wise orthogonal decomposition. For a given pixel, no matter how much its values differ under different illumination conditions, this pixel-wise orthogonal decomposition yields the same unique particular solution u_p. From the derivation above and the orthogonal decomposition, we know that u_0, the representation of the illuminant ratio, is related only to (β_1, β_2, β_3)^T, while u_p, perpendicular to u_0, is illumination invariant. This means that for a pixel with Log-RGB value vector u, whether in shadow or not, u_p is invariant and only α reflects the variation of the pixel RGB values caused by shadow or different illuminants. The different solutions produced by different values of α in Eq. 17 can be regarded as the Log-RGB values of the pixel under different illumination conditions. Therefore, given an image under multiple illumination conditions, for each pixel with Log-RGB value vector u, we can directly calculate its illumination invariant vector u_p as

u_p = u + α_p u_0, α_p = −⟨u, u_0⟩. (18)

By an exponential transform of u_p for each pixel, we obtain an illumination invariant image that retains the basic color information of the original image in RGB color space.
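Eq. 18 amounts to removing, at every pixel, the component of u along u_0. A minimal NumPy sketch follows; the free-solution direction used here is an assumed placeholder, not the paper's measured values.

```python
import numpy as np

def orthogonal_decomposition(u, u0):
    """Pixel-wise decomposition u = u_p + alpha * u0 (Eqs. 17-18):
    u_p is the component of each Log-RGB vector orthogonal to u0."""
    alpha = np.tensordot(u, u0, axes=([-1], [0]))   # per-pixel <u, u0>
    u_p = u - alpha[..., None] * u0
    return u_p, alpha

# Illustrative normalized free-solution direction (assumed, not measured):
u0 = np.array([1.0, 1.0, 1.0])
u0 /= np.linalg.norm(u0)

rng = np.random.default_rng(0)
u = rng.uniform(1.0, 5.0, size=(4, 4, 3))           # a tiny Log-RGB image
u_p, alpha = orthogonal_decomposition(u, u0)

# Shifting u along u0 models a change of illumination; u_p is unchanged.
u_p2, _ = orthogonal_decomposition(u + 0.7 * u0, u0)
print(np.allclose(u_p, u_p2))  # True
```

The check at the end mirrors the invariance claim: adding any multiple of u_0 to u changes only α, never u_p.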

This pixel-wise orthogonal decomposition is illustrated in Fig. 3, which shows the unique particular solution u_p perpendicular to the normalized free solution u_0. For a pixel, u_s^1 and u_s^2 denote its Log-RGB vector inside shadow and its Log-RGB vector outside shadow, respectively, both of which are solutions of Eq. 11. We denote any solution of Eq. 11 by the vector u. According to our orthogonal decomposition, Eqs. 14-15 and Eq. 17, u_s^1, u_s^2 and u are all projected along the vector u_0 onto a specific solution u_p of Eq. 11, which is perpendicular to u_0. Therefore, for a pixel in an image, no matter how much its values vary under different illumination conditions (in shadow, out of shadow, or otherwise illuminated), our orthogonal decomposition yields the same unique invariant u_p.

Analysis of parameters β_1, β_2, β_3 and the color illumination invariant image
The estimation of the parameters β_1, β_2, β_3 is important in determining whether our pixel-wise orthogonal decomposition works well. Fortunately, from Eq. 5, Eq. 9 and Eq. 3, we know that these parameters are reflectance-independent and determined only by the illumination condition (i.e., the SPDs of daylight and skylight), such as the zenith angle (time) and the aerosol (weather). In the clear weather in which shadows mostly occur, these parameters are mainly determined by the zenith angle. The SPDs can be measured with a spectrometer or calculated with atmospheric optics software given the zenith angle and aerosol in advance; mean parameter values over typical sun angles can be used for most situations in our decomposition, except for some images taken at twilight, i.e., at sunrise or sunset, when the sun angle is less than 10°. Also, for more precise parameter values, an automatic entropy minimization method similar to Finlayson et al. [20] is employed. In addition to the factors mentioned above (zenith angle and aerosol) that affect the parameters β_1, β_2, β_3, some other aspects are also worth attention. For example, cloud and fog affect the SPDs of daylight and skylight and thus the parameters β_1, β_2, β_3 in our algorithm. From clear weather to cloudy, as clouds increase (aerosol increases), shadows vary from strong to weak. When shadows are weak, the error in the color illumination invariant results caused by cloud-induced parameter deviation is also weaker. Therefore, the influence of the cloud distribution is small and can be corrected by automatic entropy minimization [20]. The same holds for foggy weather. The above analysis shows that our pixel-wise orthogonal decomposition works well in most situations. Our experimental results on many images with different scenes and reflectances also support this analysis. Two original images, their color illumination invariant images and the α information are presented in Fig. 4.
The shadows disappear and the main color information is maintained in our color illumination invariant images.
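The 1-D entropy search over precomputed parameter candidates can be sketched as follows. This is a hedged illustration, not the paper's implementation: it assumes a two-channel log-difference invariant as described above, and the candidate values and synthetic image are hypothetical.

```python
import numpy as np

def entropy(values, bins=64):
    """Shannon entropy of the histogram of an invariant image."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_parameters(img, candidate_betas):
    """1-D search over precomputed candidates (e.g., values measured at
    different sun angles): keep the parameter whose invariant image has
    minimum entropy, in the spirit of Finlayson et al. [20]."""
    u = np.log(img.astype(np.float64) + 14.0)   # Log-RGB with the paper's offset
    best_beta, best_h = None, np.inf
    for beta in candidate_betas:
        inv = u[..., 0] - beta * u[..., 1]      # assumed invariant form
        h = entropy(inv)
        if h < best_h:
            best_beta, best_h = beta, h
    return best_beta

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32, 3))
beta = select_parameters(img, [0.8, 0.9, 1.0, 1.1])
print(beta)
```

A lower-entropy invariant image is more tightly clustered per surface, which is the usual rationale for this selection criterion.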
Compared with prior works [20,27,28], our 2-D color illumination invariant image has three main advantages: (1) Our 2-D color illumination invariant image is a true 2-D image that can be obtained automatically through a simple pixel-wise orthogonal decomposition. In [20,28], Finlayson and Drew obtained a 2-D chromaticity image by projecting the 1-D chromaticity back to 2-D while keeping the vector components. In [27], Maxwell et al. stated that this 2-D chromaticity image is inherently one-dimensional. Assuming narrowband camera sensors in their derivation, Maxwell et al. [27] obtained a truly 2-D color illumination invariant image by projecting the colors (in log-RGB) onto a plane based on Shafer's dichromatic reflection model [29]. Their 2-D color illumination invariant image is obtained either from a user input for the projection normal vector or from a 2-D entropy search. This user input or 2-D entropy search for the angle of the normal in three-dimensional space may bring difficulties in real applications, especially for images with small fragmented shadows and complex reflectance. In contrast, our 2-D color illumination invariant image can be obtained directly by measuring the SPDs of daylight and skylight, or simply from the sun angle at which the pictures were taken (i.e., the capture time). Even when this information is unknown to the user, mean SPD values are suitable for most situations. To correct the errors caused by the mean values, a simple 1-D entropy search is used to select optimized parameter values from among parameters measured in advance at different sun angles.
(2) Based on a rigorous mathematical derivation, our 2-D color illumination invariant image has an explicit and simple mathematical expression. This expression, together with the fast pixel-wise calculation, allows our 2-D color illumination invariant image to be applied directly in real-time applications.
(3) Derived from the viewpoint of atmospheric transmittance effects, our method does not rely on the assumptions of Planckian lighting and narrowband camera sensors, which may make it more practical in real situations.

Color restoration and shadow-free image
The orthogonal decomposition of an image loses some color information (Fig. 4). As shown in Tab. 1 and Eqs. 14-15, the directions of the free solution at different sun angles (from 20° to 80°) are all close to (1, 1, 1)^T. From the orthogonal decomposition equation, if u ≈ α u_0, then the illumination invariant u_p ≈ (0, 0, 0)^T. This implies that the closer a pixel's RGB value vector is to the neutral surface of RGB space, the more color information is lost. To restore this loss, we present a color restoration method that generates a shadow-free image by correcting the color vectors of the illumination invariant image that are near the RGB neutral surface.
Let v(x, y) = (v_R(x, y), v_G(x, y), v_B(x, y))^T be an RGB image, where x = 1, 2, ..., M; y = 1, 2, ..., N; M and N denote the image width and height respectively. We define

u(x, y) = (log(v_R(x, y) + 14), log(v_G(x, y) + 14), log(v_B(x, y) + 14))^T. (19)

With the orthogonal decomposition of u (see Eq. 17), we obtain

u(x, y) = u_p(x, y) + α(x, y) u_0, (20)

where u_p(x, y) is the illumination invariant at pixel (x, y) and α(x, y) ∈ R. For u, we define the following pixel set, which indicates the pixels near the RGB neutral surface:

S = { (x, y) : || u(x, y)/||u(x, y)|| − u_0 || < ε }, (21)

where the parameter ε is set to 0.15 empirically. We then calculate the parameter vector

t = (1/G) Σ_{(x,y)∈S} ( u(x, y)/||u(x, y)|| − u_0 ), (22)

where G is the number of pixels in the set S. This parameter vector measures the average deviation of the pixel vectors of the image u in S from the vector u_0. The color correction on u_p is then calculated according to Eq. 23, and the shadow-free image u_f^L in Lab space is obtained according to Eqs. 24-25. An overview of our shadow-free image generation pipeline is presented in Fig. 5, and more shadow-free images are given in the experiment section. From the results, we can see that our method effectively removes shadows from images while retaining the color and texture information of the original image. It is worth noting that, unlike shadow removal methods that only recover pixels in the shadow region, our shadow-free image generation operates on the entire image without any shadow detection. In our algorithm, all brighter and darker pixels are pulled to the same illumination intensity, which may reduce the contrast of the image. Therefore, our shadow-free image may sometimes look a bit dull compared with the original image (e.g., Fig. 7 (f)).
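The final recombination step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper performs the luminance transfer in Lab space (Eqs. 24-25), whereas here a simple per-pixel channel mean stands in for the L channel, purely to show the structure of keeping the chrominance of the invariant image and the luminance of the color-corrected image; `u_p` and `u_c` are small synthetic Log-RGB arrays.

```python
import numpy as np

def shadow_free(u_p, u_c):
    """Schematic recombination: chrominance from the illumination invariant
    image, luminance from the color-corrected image. A channel-mean
    luminance proxy replaces Lab's L channel for illustration only."""
    e_p = np.exp(u_p)                         # invariant image back in RGB
    e_c = np.exp(u_c)                         # color-corrected image in RGB
    lum_p = e_p.mean(axis=-1, keepdims=True)  # luminance proxy of e_p
    lum_c = e_c.mean(axis=-1, keepdims=True)  # luminance proxy of e_c
    # transfer the luminance of e_c onto the chrominance of e_p
    return e_p / np.maximum(lum_p, 1e-6) * lum_c

rng = np.random.default_rng(2)
u_p = rng.uniform(0.0, 1.0, size=(8, 8, 3))
u_c = u_p + 0.1
out = shadow_free(u_p, u_c)
print(out.shape)  # (8, 8, 3)
```

By construction, the output keeps the per-pixel channel ratios of the invariant image while its channel mean matches that of the color-corrected image.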

Experiments
In our experiments, we evaluate the color illumination invariant image and the corresponding shadow-free image on a large set of real outdoor images. These images contain common scenes or objects under a variety of illumination conditions, without shadows or with different types of shadows, such as soft shadows, cast shadows and self shadows. We first compare our method with a state-of-the-art statistical learning method [15] and a physically-based method [24] (Fig. 6 and Fig. 7). Then the results obtained by our method and by Shen's method [26] on images under different illumination conditions show that our color illumination invariant image is much closer to an intrinsic feature than Shen's, in the sense of yielding an identical reflectivity result regardless of the illumination condition (Fig. 8).

Comparison of shadow-free images
In our experiments, we compare our method with two state-of-the-art methods ([15] and [24]) in terms of shadow-free results and running time. These two methods are based on shadow features with statistical learning and on a physical model respectively. Both methods are executed with the code published by the authors of [15] and [24].
In Fig. 6, we show two shadow-free image results. For the first image (Fig. 6 (a1)), both our method and Yang's method [24] remove the soft shadows of leaves successfully. However, Guo's method [15] fails to recover a shadow-free image due to wrong shadow detection. Due to the lack of robust shadow features, this feature- and learning-based method [15] is not always effective, especially in soft shadow regions or complex scenarios (e.g., Fig. 7 (f), (g)). Close-ups of the first image (Fig. 6 (b1)) show that, compared with our result, Yang's method [24] fails to retain the white arrow on the ground. This is because Yang's method is not solid for neutral regions whose neighboring regions are also neutral [24]. For the second image (Fig. 6 (a2)), our method obtains a shadow-free image with the texture preserved, while Guo's method [15] fails to retain the original texture, even when it detects the shadows successfully, due to inappropriate relighting of pixels. In particular, some pixels are so badly relighted that objects seem to appear that do not exist in the original image (e.g., the tiny yellow flowers in Guo's shadow-free image in Fig. 6 (b2)). Close-ups of the red rectangle (Fig. 6 (b2)) show that some of the grass is blurred by Yang's method [24] due to bilateral filtering, while our method, without any filtering operation, retains the texture well. Further comparison results are shown in Fig. 7. These images show that our method removes shadows effectively from a single image with the texture well preserved. However, Guo's method [15] fails to recover a shadow-free image either due to wrong detection of shadow regions (Fig. 7 (b), (d), (e)) or inappropriate relighting of pixels (Fig. 7 (a), (c)). Due to the complexity of the image reflectance and the irregular distribution of shadows in Fig. 7 (g), none of the shadow regions are correctly detected in Guo's shadow-free image. For images with dark shadows or in which shadows pervade the scene (Fig. 7 (a), (b), (c), (d), (e)), Yang's method [24] blurs the shadow-free images. For example, the clear water in the original image in Fig. 7 (e) is severely blurred in Yang's result. Small and thin objects such as leaves are also removed in Yang's results.
Fig. 6 and Fig. 7 verify that our method removes shadows more effectively from a single image and performs consistently on images of complex outdoor scenes. Our shadow-free image also better preserves the original texture compared with the other two methods. In running time, our method also outperforms Guo's [15] and Yang's [24]. In Tab. 2, we compare the running time of the three methods on the images shown in Fig. 6 and Fig. 7. All experiments were conducted on a computer with an Intel(R) Core(TM) i3-2120 3.3 GHz CPU and 2 GB of RAM. Without any statistical learning or complex filtering operations, our method is faster than Guo's [15] and Yang's [24]. Note that, when the reflectance of the image becomes complex and the shadows get intricate, it becomes extremely hard for Guo's method to detect and then remove the shadow regions, and its running time rose to 6.4827e+003 seconds, almost 2 hours, for Fig. 7 (g).

Intrinsic feature of the color illumination invariant image

Both Guo's [15] and Yang's [24] methods are designed for removing shadows from an image; they cannot yield an identical reflectivity result regardless of the illumination condition. In this section we show that our color illumination invariant image has an intrinsic feature. We apply our algorithm to a series of images of an outdoor scene taken at different times on a sunny day to test the intrinsic feature of our color illumination invariant image. As shown in Fig. 8 (r1), these test images were taken under four different illumination conditions: one under total daylight (Fig. 8 (r1,a)), two under partly daylight and partly skylight (Fig. 8 (r1,b,c)) and one under total skylight (Fig. 8 (r1,d)). Taking the image under total daylight as the reference image, we compare the differences between the corresponding results processed by the different methods. For an intrinsic image derivation algorithm, the difference between the reference intrinsic image and the other three intrinsic images should be approximately the same, or at least much smaller than for the original image sequence. In our experiment, we calculate the differences both for our color illumination invariant images and for our shadow-free images.
As a comparison, we also present the intrinsic images obtained by Shen et al. [26] and the corresponding differences. The root mean square error (RMSE) is used as the measurement. Since it is compressed, the range of our color illumination invariant image is much smaller than those of the original images and shadow-free images. Therefore, besides RMSE, we also adopt the Relative Error (RMSE divided by the range of the measured image) as a measurement. All measured images are 24-bit RGB images. As shown in Fig. 8 (r3), the four color illumination invariant images generated by our method are essentially the same. This verifies that our orthogonal decomposition yields an identical reflectivity result regardless of the illumination condition. The quantitative measurements are given in Tab. 3. Taking the images under total daylight and total skylight as an example, the RMSE difference is decreased from 101.85 for the original images to 4.67 for our color illumination invariant images. Likewise, the Relative Error is decreased from 39.94% to 5.7%. Even after the color restoration operation on the illumination invariant images, the RMSE difference (24.88) between our shadow-free images is still much smaller than that of the original images. In contrast, the intrinsic images obtained by Shen et al. [26] still show large differences. The quantitative measurements demonstrate that our color illumination invariant images come considerably closer to an intrinsic representation than those of Shen et al. [26]. The corresponding shadow-free images also maintain this intrinsic property to some extent.
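The two error measures used above can be computed as follows; this is a minimal sketch with tiny synthetic images whose values are illustrative, not from the paper's experiments.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images of equal shape."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sqrt(np.mean(d ** 2)))

def relative_error(a, b):
    """RMSE normalized by the value range of the measured image, in percent."""
    r = float(a.max()) - float(a.min())
    return 100.0 * rmse(a, b) / r if r > 0 else 0.0

a = np.array([[10, 20], [30, 40]], dtype=np.uint8)
b = np.array([[12, 18], [30, 44]], dtype=np.uint8)
print(rmse(a, b))  # sqrt(6) ~ 2.449
```

A smaller RMSE (or Relative Error) between the reference invariant image and the others indicates a more illumination-independent, i.e., more intrinsic, result.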

Discussion and conclusion
In this paper, we propose a novel, effective and fast method to obtain a shadow-free image from a single outdoor image. Unlike state-of-the-art methods that need either shadow detection or statistical learning, our shadow-free image is obtained by a pixel-wise orthogonal decomposition and a color restoration. Our shadow-free image also better preserves the texture information compared with the state-of-the-art methods. Intrinsic to the illumination condition, our color illumination invariant image is useful for relighting by adjusting the α information. Moreover, requiring only pixel-wise calculations, our method for the color illumination invariant image and the shadow-free image can be applied in real-time applications to resist illumination variation. Although our pixel-wise orthogonal decomposition is designed for outdoor shadow images, it is also valid for indoor situations after correction of the physical parameters β_1, β_2, β_3. Fig. 9 (a) gives two results of our shadow-free images on indoor shadow images, showing that our pixel-wise orthogonal decomposition method can also be applied indoors. From our experiments, we also found two main limitations of our method: 1) the shadow-free image has some color distortion, especially near the neutral surface; 2) it does not work well on images with overexposed regions. The main reason for the failure on overexposed regions is that the pixel values there are not physically correct: they do not satisfy the linear model, so we cannot obtain their correct orthogonal decomposition. Fig. 9 (b) shows two failure examples of our shadow-free images. Modifications and improvements of the model and method for overexposed regions, indoor images and more accurate color restoration are in progress.
See Appendix for supporting content.

Fig. 1 .
Fig. 1. One experimental result of our algorithm. (a) Original image. (b) One of the three grayscale illumination invariant images. (c) Color illumination invariant image obtained with our pixel-wise orthogonal decomposition. (d) Shadow-free image after color restoration.

Fig. 2 .
Fig. 2. Grayscale illumination invariant image results based on the linear model
We have calculated the parameters β_1, β_2, β_3 in sunny weather at different sun angles, from 9:00 AM till 4:00 PM in our city, and some of the values are shown in Tab. 1. The mean values in Tab. 1 are calculated from the mean SPDs of daylight and skylight at 20° to 70°. From Tab. 1, we notice that the values of the parameters β_1, β_2, β_3 from sun angle 20° to sun angle 80° are quite stable. In real applications, when the sun angle (time) is known, the parameters can be determined directly. When we have no information about the sun angle at which a picture was taken, the stability of the parameters shows that the mean values of β_1, β_2, β_3 at 20° to 70° can be used.

The color correction on u_p (Eq. 23) adds the correction vector to the pixels in the set S: in particular, if u(x, y) = α(x, y) u_0, we have u_c(x, y) = u_p(x, y) + t. The weighting function of ||u(x, y)/||u(x, y)|| − u_0|| with the parameter κ in Eq. 23 minimizes the impact of this correction on pixels not in S and keeps the correction smooth. The parameter κ is set to 0.02 empirically. By exponentiation, we transform u_c and u_p into RGB space and obtain u_c^e and u_p^e respectively, where the image u_p^e has almost the same chrominance as the original image and the image u_c^e has almost the same luminance as the original image. This is verified by experimental results on many images with different scenes and reflectances. A shadow-free image is then obtained by transferring the correct luminance component of u_c^e to the color illumination invariant image. We adopt the Lab color space (L, the luminance component). Shadow-free image O; ①: pixel-wise decomposition on I according to Eq. 18; ②: color correction on the unique particular solution according to Eqs. 21-23; ③: combining the correct lightness and the correct color-opponent components according to Eqs. 24-25 and then transferring them from Lab space to RGB space.

Fig. 5 .
Fig. 5. An overview of the algorithm pipeline for the generation of the shadow-free image. The arrow in the first row represents the color space conversion, i.e., converting from RGB space to Lab space. These three components have no direct use in our algorithm; they only act as the reference for the Lab components of our shadow-free image O.

Fig. 9 .
Fig. 9. (a) Two original indoor images and the corresponding shadow-free images. (b) Two failure cases: original images with overexposed regions and the corresponding shadow-free images.

Table 2 .
Comparison of the running time of the three methods on the images shown in Fig. 7 (measured in seconds).

Table 3 .
RMSE and Relative Error (%) between images. For each measure, the columns compare the reference image a with images b, c and d (pairs a-b, a-c, a-d).