Recommendations for long-term luminance distribution measurements

Currently, luminance distribution measurement devices are increasingly used because the luminance distribution is highly relevant to perceived visual comfort and the underlying technology is maturing. It is now feasible to conduct long-term measurements and to integrate these devices into lighting control systems. This, however, can introduce new issues such as privacy controversies and high computational costs, induced by high spatial resolutions. Therefore, this study aims to propose a spatial resolution that measures the luminance accurately while minimizing privacy sensitivity and computational costs. This is done based on luminance distribution measurements in office environments. The accuracy of lower-resolution luminance distributions is tested for the mean and maximum luminance and the illuminance. Additionally, the ability to recognize faces is measured as an indicator of privacy-sensitive content. Finally, the processing time is measured as an indicator of the computational costs. The results show that for mean luminance or illuminance measurements the spatial resolution can be reduced significantly, to 440 x 330 and 720 x 540 pixels, respectively. These spatial resolutions do not compromise the accuracy while minimizing the ability of automated facial recognition and reducing the computational costs significantly. However, for maximum luminance measurements, a high resolution of 3000 x 2250 pixels is deemed appropriate, although this allows automated facial recognition and results in high computational costs. A toolbox has been developed to assist others in choosing a relevant spatial resolution for their luminance camera during long-term luminance measurements in typical office environments.


Introduction
Lighting is one of the major elements of a comfortable indoor environment. Nevertheless, lighting control strategies are mainly focused on limiting energy consumption [1]. This might cause serious discomfort when visual comfort is omitted due to an emphasis on energy performance. Moreover, lighting is usually designed for, and indicated using, illuminance, while research has shown that the luminance is more closely related to the perceived brightness [2][3][4]. This suggests that luminance-based metrics are more suitable to enhance visual comfort than illuminance-based metrics. Traditionally, the luminance distribution was not easily measured, which limited wider use. However, a number of recent studies were able to measure the luminance distribution in high resolution using High Dynamic Range (HDR) imaging [5] utilizing commercially available cameras [6][7][8][9][10][11]. In general, HDR images are captured using sequential exposure bracketing [12]; subsequently, the luminance is calculated based on the Red-Green-Blue (RGB) information. Nevertheless, this technology introduces new issues to be considered such as, but not limited to, the spatial resolution in which the HDR images are captured.
To integrate luminance cameras with control algorithms that provide high visual comfort and limit the energy use, or to use them for long-term field studies, additional issues arise, which argue for a lower spatial resolution. Firstly, continuous high-resolution HDR imaging can enable one to monitor individuals, hampering the application of such systems due to privacy concerns [13,14]. Secondly, continuous high-resolution HDR imaging requires significant computational resources, as the system has to transfer and analyze bulky pixel data, while low-resolution HDR images might be sufficient to measure the luminance with an acceptable accuracy, as discussed by Inanici [13].
As previously indicated, the spatial resolution (width and height in pixels) of the HDR images has an increased significance for long-term measurements of the luminance distribution, such as those conducted in Refs. [15][16][17]. Nonetheless, the spatial resolution is often not specified. Studies that specified the spatial resolution [18][19][20][21] generally used the maximum spatial resolution supported by the applied imaging sensor, even for long-term measurements [20,22]. One study used a spatial resolution of 901 x 676 pixels with an imaging sensor that supports resolutions up to 2592 x 1944 pixels [8]. In summary, spatial resolutions found in the literature range from 320 x 240 [23] to 6000 x 4000 pixels [22] for measurement periods ranging from 2 weeks to a year.
A sufficiently low spatial resolution of the HDR images prevents face recognition and limits the ability to track persons and monitor their behavior. Multiple studies [24][25][26][27] researched the threshold spatial resolution for automated face recognition, which ranged between face resolutions of 21 x 16 and 64 x 48 pixels. Below the threshold resolution, the ability to recognize faces decreases rapidly [28]. This threshold was determined from multiple algorithms such as principal component analysis (PCA), linear discriminant analysis (LDA), and elastic bunch graph matching (EBGM), while subjects faced the camera. Based on these studies [24][25][26][27], Wang et al. [28] proposed to use 32 x 24 pixels, with an eye-to-eye distance of 10 pixels, as a threshold face resolution. It is expected that this threshold resolution is an overestimation relative to the context researched in this study, because in real applications subjects are moving, not directly facing the camera, and image quality can be poor. Furthermore, HDR imaging for luminance distribution measurements often makes use of fisheye lenses to achieve extremely wide angles of view, adding additional distortions. However, to date, no further studies have been found in the literature that researched privacy-sensitive information in HDR images.
Moreover, a low spatial resolution also limits the computational costs. Long-term continuous luminance distribution measurements can only be conducted autonomously because it is not feasible to perform each measurement manually. Both in long-term field studies and in lighting control systems, these measurements have to be controlled by some form of intelligence, depending on the complexity. Preferably, these kinds of systems are integrated into a standalone system using a single-board computer [11], microcontroller [29], or microprocessor [30], often with limited computational resources. Therefore, it is desirable to limit computational costs, such as the processing time and CPU usage. Especially in the case of bulky data, such as high-resolution luminance distributions, it can be beneficial to reduce the spatial resolution.
The objective of this study is to limit the spatial resolution of HDR images for long-term luminance distribution measurements without compromising the accuracy, while minimizing the ability to recognize faces and the computational costs. High-resolution luminance distribution measurements are conducted in a mock-up office. Based on these measurements, scaled data sets are constructed with decreasing spatial resolutions. Subsequently, the performance of the scaled spatial resolutions is compared with the initial high-resolution measurements. To further generalize the findings, this process is repeated in a real living office environment, which has large similarities with the majority of existing office environments. Finally, the performance of the reduced-resolution HDR images is related to the face recognition threshold resolution and the processing time, to develop spatial resolution recommendations for long-term, continuous luminance distribution measurements in office environments.

Measurement device
In this study, all measurements were conducted with three identical devices that are able to autonomously measure the luminance distribution, similar to Ref. [8], hereafter referred to as Bee-Eyes. These devices are able to conduct long-term measurements with the objective to provide long-term visual comfort. A Bee-Eye consists of three major components: a single-board computer, a camera, and a fisheye lens. The Raspberry Pi 3 Model B, with a Cortex-A53 (1.4 GHz, quad-core) processor, was used as the single-board computer. The camera was a Raspberry Pi Camera Board version 2 with a CMOS sensor (Sony IMX219) (3.04 mm, f/2) with a maximum spatial resolution of 3280 x 2464 pixels, equipped with a miniature equisolid-angle fisheye lens. Due to the fixed aspect ratio (4:3) and the fisheye lens, two strips of approximately 300 pixels, left and right, were ineffective. The HDR images were developed using sequential exposure bracketing with seven exposures ranging from 9 μs to 2 s, which were merged to an HDR image using the command-line HDR builder developed by Ward [31]. Additionally, a camera response curve, which was approximated once according to best practice guidelines [32] using the algorithm of Mitsunaga and Nayar [33], was included. The luminance was calculated using the methodology described by Inanici [6], where the R, G, and B coefficients originating from the HDR image are translated to the CIE XYZ color space, using standard color space reference primaries, according to Equation (1). This methodology resulted in an approximated spectral V(λ) mismatch f'1 of 43%, which is expected to lead to measurement errors of approximately 10%-15% for LED lighting [34]. Due to this significant spectral mismatch, the photometric calibration factor k was determined using reference spot measurements of a grey (ρ = 0.18) and white (ρ = 0.90) card using a Konica Minolta LS-100, for LED lighting only, to reduce the aforementioned measurement error [35].
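The per-pixel luminance computation of Equation (1) can be sketched as follows. The sketch assumes the standard sRGB reference primaries and the 179 lm/W luminous efficacy factor commonly used in Radiance-based HDR photometry; the function name and the default k = 1 are illustrative, not part of the Bee-Eye implementation.

```python
def luminance_cdm2(r, g, b, k=1.0):
    """Approximate luminance (cd/m^2) from linear HDR RGB values.

    Assumes the standard sRGB reference primaries and the Radiance
    luminous efficacy factor of 179 lm/W; k is the photometric
    calibration factor obtained from reference spot measurements.
    """
    return k * 179.0 * (0.2127 * r + 0.7151 * g + 0.0722 * b)
```

For a neutral pixel (r = g = b) the coefficients sum to 1, so the result reduces to k · 179 · r.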

Lab study
High-resolution luminance distribution measurements were conducted in a mock-up office environment at the Building Physics and Services laboratory at Eindhoven University of Technology, between 12-11-2018 and 25-12-2018. The mock-up office, measuring 5 m x 5.5 m, was oriented west and consisted of four workplaces, each containing a desk and a monitor (turned off), as illustrated in Fig. 1. Measurements were performed during morning periods (8:30-12:00) and afternoon periods (13:00-16:30) at an interval of 10 min, for two days with clear sky conditions, two days with intermediate sky conditions, and two days with overcast sky conditions. For each weather type, one day was measured without and one day with the electric lighting (9x PHILIPS RC461B G2 PSD W60L60 1xLED34 S/840) switched on, providing an additional uniform 750 lx on the horizontal work plane. The luminance distribution was measured by a single horizontally oriented Bee-Eye (BE1) attached to the ceiling, measuring two desktop surfaces (Fig. 1, BE1_D2 and BE1_D3), and two vertically oriented Bee-Eyes (BE2 and BE3) attached to tripods at 20 cm distance from the desks at a height of 1.2 m. Two opposing, north- and south-oriented, desks were monitored with varying distance (1.2 m-2.8 m) to the background area, measuring the luminance of the desktop, monitor, and background area (Fig. 1). The initial spatial resolution of the Bee-Eyes was determined with the constraint that a single measurement had a duration of less than 1 min, resulting in a spatial resolution of 2130 x 1600 pixels instead of the maximum supported resolution, in order to limit the processing time of the measurements as well as to reduce the time required for post-processing.

Field study
Additionally, a field study was conducted for three consecutive weeks (14-01-2019 to 03-02-2019) in a Dutch office landscape (51°35'N 4°47'E). During this three-week period, the sky conditions were mainly overcast, with some days of clear and intermediate sky conditions. The office was located on the first floor and consisted of twelve desks divided into three groups, as displayed in Fig. 2. For each group of desks, a calibrated Bee-Eye was attached to the ceiling (three in total), similar to the lab study. The floor area, the area where the occupants move, was masked such that a limited amount of privacy-sensitive content was captured. Measurements were conducted daily from 08:30 to 17:30 at an interval of 10 min.

Interpolation
During the measurements, elaborated in sections 2.2 and 2.3, one single HDR image was captured every 10 min with each Bee-Eye. These initial HDR images were digitally scaled in steps of 10% to achieve identical HDR images in varying spatial resolutions. An additional lab study was conducted to verify whether digital resizing of the HDR image was appropriate to replicate HDR images in a lower spatial resolution and to determine the most suitable interpolation method. Therefore, two series of HDR images were captured in a windowless room, preventing fluctuations due to the dynamic nature of daylight, measuring 4.4 m x 3 m and illuminated with fluorescents and LEDs. To achieve constant conditions between different HDR images, the light was turned on approximately half an hour in advance to ensure a stable lumen output. Each series consisted of five HDR images with resolutions of, respectively, 901 x 676 (as used in Ref. [8]), 451 x 338, 300 x 225, 226 x 169, and 901 x 676 pixels. The latter was used to indicate the error margin due to temporal noise. The first HDR image of the series was scaled by 50% (451 x 338), 33% (300 x 225), and 25% (226 x 169) using nearest-neighbor, bilinear, and bicubic interpolation, matching the spatial resolutions of the other original HDR images. The luminance was calculated according to Equation (1), yielding L_sca and L_org, respectively. Due to the constant conditions, the scaled HDR images should yield the same luminance as the respective original HDR images. The mean and median luminance values were calculated for the full HDR image and for the background area only. The average relative differences (ΔL = |L_org − L_sca| / L_org) of the scaled copies with respect to the original HDR images are presented in Table 1.
Table 1 shows that down to 50% scaling the deviations were rather limited; the median luminance was even within the tolerance of the temporal noise. For smaller scaling factors, the relative differences increased. This was mainly due to the scaling of very bright spots, also indicated by the differences found between the mean and median luminance, as outliers have a smaller effect on the median luminance. For areas without very bright spots, the differences in mean luminance between the original and resized image were negligible (0.2%-0.7%). Moreover, the results show only minor differences between the interpolation methods. Therefore, we can assume that the different interpolation methods provide a sufficiently accurate rescaling of the initial HDR images. Nevertheless, the bilinear interpolation method was chosen for this study. For bilinear interpolation, "the output pixel value is a weighted average of pixels in the nearest 2-by-2 neighborhood" [36]. The bicubic interpolation has similar deviations; however, this method can result in pixel values outside the original range, which is not representative for this situation.
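The verification above can be illustrated with a minimal bilinear resize and the relative-difference metric ΔL. The study used MATLAB's imresize; this plain-Python sketch, with hypothetical function names, only mirrors the principle that each output pixel is a weighted average of the nearest 2-by-2 input pixels.

```python
def resize_bilinear(img, new_h, new_w):
    """Resize a 2D luminance map with bilinear interpolation: each output
    pixel is a weighted average of the nearest 2x2 input pixels."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(new_h):
        # map the output pixel centre back to input coordinates
        y = (i + 0.5) * h / new_h - 0.5
        y0 = max(0, min(h - 2, int(y)))
        fy = min(1.0, max(0.0, y - y0))
        row = []
        for j in range(new_w):
            x = (j + 0.5) * w / new_w - 0.5
            x0 = max(0, min(w - 2, int(x)))
            fx = min(1.0, max(0.0, x - x0))
            row.append(
                img[y0][x0] * (1 - fy) * (1 - fx)
                + img[y0][x0 + 1] * (1 - fy) * fx
                + img[y0 + 1][x0] * fy * (1 - fx)
                + img[y0 + 1][x0 + 1] * fy * fx
            )
        out.append(row)
    return out

def mean_rel_diff(l_org, l_sca):
    """Relative difference dL = |L_org - L_sca| / L_org of the mean luminance."""
    m_org = sum(map(sum, l_org)) / (len(l_org) * len(l_org[0]))
    m_sca = sum(map(sum, l_sca)) / (len(l_sca) * len(l_sca[0]))
    return abs(m_org - m_sca) / m_org
```

For a uniform luminance map the scaled copy is identical and ΔL is zero; real scenes deviate mainly where very bright spots are averaged away.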

Analysis
All initial HDR images were gathered from the measurements described in sections 2.2 and 2.3, and replicates were made of each individual HDR image with varying spatial resolutions between 90% and 10% of the initial resolution. The initial images were resized by bilinear interpolation using MATLAB R2017a, resulting in nine different linearly scaled copies of the initial HDR images. In other words, series of ten identical linearly scaled HDR images were available for the analysis, with relative spatial resolutions of 100%-10% with step sizes of 10%, as indicated in Fig. 3.
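The construction of the scaled series can be sketched as follows; nearest-neighbour interpolation stands in for the bilinear method actually used, to keep the sketch short, and the function names are illustrative.

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a 2D map (stand-in for bilinear)."""
    h, w = len(img), len(img[0])
    return [[img[min(h - 1, i * h // new_h)][min(w - 1, j * w // new_w)]
             for j in range(new_w)] for i in range(new_h)]

def scaled_series(img):
    """Nine replicates at 90%..10% of the initial resolution, step 10%."""
    h, w = len(img), len(img[0])
    series = {}
    for k in range(9):
        s = round(0.9 - 0.1 * k, 1)  # relative spatial resolution
        series[s] = resize_nearest(img, max(1, round(h * s)), max(1, round(w * s)))
    return series
```

Together with the initial image, this yields the series of ten resolutions used in the analysis.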
Subsequently, these series of HDR images were converted into luminance maps according to Equation (1). For each luminance map, the visual comfort indicators mean and maximum luminance of the relevant surfaces (Figs. 1 and 2) were extracted, representing the amount of light and glare [1], respectively. Additionally, the illuminance on the Bee-Eye sensor was calculated using Equation (2) with elevation angle ε and azimuth angle α. To allow inter-comparisons between different measurements, independent of the absolute luminance, the mean and maximum luminances extracted from the scaled luminance maps were normalized (L_n) relative to the initial luminance map. Moreover, the illuminance measurements were normalized (E_n) in a similar manner.

Fig. 2. Office layout of the living office landscape. All Bee-Eyes were attached to the ceiling, measuring the desktop luminance (D1-D12) only.

Table 1
The average differences of the mean (ΔL) and median luminance of the digitally scaled images, using nearest-neighbor, bilinear, and bicubic interpolation, compared to the initial HDR images. The last row indicates the temporal noise.
E = ∬ L · cos ε · sin α · dε dα (2)

In 1760, Bouguer [37] found, using two candles illuminating a screen, that the just noticeable difference (JND) in luminance, described by the ratio δL/L, was constant and independent of the absolute luminance. A constant ratio of approximately 0.016 was considered just noticeable [38,39], which was also confirmed using modern methods [39]. This means that a luminance increase of 1.6%, achieved by altering the distance of one candle, will result in a just noticeable difference in luminance. In this study, the objective was to measure the luminance accurately, relative to the end user, with a low spatial resolution for a longer period of time. Therefore, this study allowed a spread of ±1.6% with a confidence interval (CI) of 95% (σ = 0.0082) relative to the initial luminance map, as this will not affect the experience of the end user.
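The quoted standard deviation follows directly from the tolerated ±1.6% spread and the 95% confidence interval, since 95% of a normal distribution lies within ±1.96σ:

```python
# 95% of a normal distribution lies within +/- 1.96 sigma, so the
# tolerated spread of +/- 1.6% (the JND ratio) maps to the sigma
# quoted in the text.
jnd = 0.016
sigma = jnd / 1.96
print(round(sigma, 4))  # 0.0082
```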
Inferential statistics were used to determine whether the normalized indicators (mean luminance, maximum luminance, and illuminance) of the resized HDR images (Y) were statistically different from a normal distribution with μ = 1 and σ = 0.0082 (X), representing the initial HDR image and its tolerated JND spread. Based on the one-sample Kolmogorov-Smirnov test, it was concluded that X − Y is not normally distributed (p < 0.001). Therefore, the non-parametric Wilcoxon signed-rank test was applied to verify whether X and Y were different (α = 5%, CI 95%). Also, for large datasets (>300) the p-value tends to go to zero quickly; therefore, results solely focusing on the p-value may not have a practical significance [40]. Thus, it is advised to focus on the effect size, which becomes more precise for an increasing sample size. Therefore, for this study, which resulted in an extensive amount of data, effect sizes were calculated according to r = Z/√n (r is Pearson's r, Z is the Z-score, and n is the sample size) [41]. Effect sizes of 0.1, 0.3, and 0.5 are considered small, medium, and large effects, respectively. The resized images were considered similar to the initial HDR image when the effect size was at most medium (< 0.3), indicating a difference with only limited practical significance. Large effect sizes were tolerated when they were followed by small or medium effect sizes for lower spatial resolutions, to account for the zigzagging effect, which is an observed artifact of the digital scaling (Section 4.2).
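The effect-size computation and its interpretation can be sketched as follows; the thresholds match those used in the effect size tables (below 0.1 small, 0.1-0.3 medium, above 0.3 large), and the function names are illustrative.

```python
import math

def effect_size_r(z, n):
    """Pearson's r effect size from a Wilcoxon signed-rank Z-score: r = Z / sqrt(n) [41]."""
    return z / math.sqrt(n)

def effect_label(r):
    """Interpretation thresholds as used in the study's effect size tables."""
    r = abs(r)
    if r < 0.1:
        return "small"
    if r < 0.3:
        return "medium"
    return "large"
```

A resized image is considered similar to the initial HDR image when the label is small or medium.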
It was hypothesized that large luminance gradients have a negative impact on the performance of the resized luminance maps. Fig. 4 illustrates a large luminance gradient in the background area, due to the shadow of the window frame, compared to the small gradient of the bright surfaces. However, these details might vanish when the resolution is lowered. As a result, the authors expect that the luminance gradient, or the complexity, might explain the effects found for reduced spatial resolutions. Therefore, the gradient of each individual pixel was determined for the initial luminance map according to Equation (3). Subsequently, the maximum gradient (∇L_m, Equation (4)) was extracted as a gradient indicator of the luminance map.
∂L_n/∂xy = √((∂L_n/∂x)² + (∂L_n/∂y)²) (3)

The maximum luminance gradient was related to the performance of a resized luminance map relative to the initial luminance map. This performance was quantified by the difference in luminance between the normalized initial luminance map and the luminance map with a linear resolution scale of 0.1, according to Equation (5). Subsequently, the difference for the maximum luminance was used as an indicator to compare to the maximum gradient. The ΔL_n,max relative to a scaling factor of 0.1 was used because this exhibits large differences and hence potentially clearer results.
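Equations (3) and (4) amount to a per-pixel gradient magnitude followed by a maximum over the luminance map. A sketch using simple forward differences (the study's exact discretization is not specified):

```python
import math

def max_gradient(lum):
    """Maximum gradient of a normalized luminance map (Equations (3)-(4)):
    per-pixel magnitude sqrt((dL/dx)^2 + (dL/dy)^2) via forward differences,
    then the maximum over all pixels."""
    h, w = len(lum), len(lum[0])
    g_max = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            gx = lum[i][j + 1] - lum[i][j]  # horizontal difference
            gy = lum[i + 1][j] - lum[i][j]  # vertical difference
            g_max = max(g_max, math.hypot(gx, gy))
    return g_max
```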

Facial recognition
For lower resolutions, facial recognition becomes harder; below the threshold facial resolution of 32 x 24 pixels and an eye-to-eye distance of 10 pixels [28] (Fig. 5), the capability to automatically recognize faces decreases drastically. However, this does not guarantee that face recognition becomes impossible. For long-term measurements, it is preferable that automated facial recognition is prevented. A laboratory study was conducted using the setup described in section 2.2 to determine the facial and eye-to-eye resolution in a standard office environment relative to the spatial resolution. Six horizontally (using BE1) and six vertically (using BE2) oriented HDR images with a maximum spatial resolution of 3240 x 2462 pixels were captured while a participant was seated at location A (Fig. 1), focusing on Bee-Eye 1, Bee-Eye 2, and a monitor, consecutively. Using bilinear interpolation, 20 scaled images were created for each of the twelve initial HDR images with diminution steps of 5%. For each reduced spatial resolution, the face height, face width, and eye-to-eye distance were measured, in pixels, to indicate the potential facial recognition capabilities.
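Because the measured face dimensions scale linearly with the spatial resolution, a single full-resolution measurement fixes the largest scaling factor at which automated recognition is hindered. A sketch; the function name is illustrative, and the 30-pixel eye-to-eye distance in the usage note is a hypothetical value, chosen only because it reproduces a factor of about 0.33.

```python
def max_private_scale(eye_dist_px_full, threshold_px=10):
    """Largest resolution scaling factor keeping the eye-to-eye distance
    below the automated face recognition threshold of 10 pixels [28].
    Assumes the distance scales linearly with the spatial resolution."""
    return threshold_px / eye_dist_px_full
```

For a hypothetical full-resolution eye-to-eye distance of 30 pixels, scaling factors below 10/30 ≈ 0.33 keep the face below the recognition threshold.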

Computational costs
The spatial resolution of the HDR images is directly linked to the computational costs. In this study, the computational costs were indicated by the processing time required for a single luminance measurement. The processing time, but also the computational costs in general, are increasingly important for long-term measurements, as they determine the dimensioning of the automated system to a large extent. A single measurement, conducted on the device specified in section 2.1, includes the sequential exposure bracketing (7 images), formation of the HDR image, calibrating the HDR image, and uploading the HDR image to a server via Wi-Fi. The CPU usage did not show significant differences for different resolutions and was, therefore, not considered. The processing time was calculated, in an office environment, for 20 different spatial resolutions, with an initial spatial resolution of 3240 x 2462 pixels, with diminution steps of 5%, similar to section 2.6. This was repeated thrice to include potential variability in processing time.
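The timing procedure can be sketched as follows; the measure callable is a placeholder for the full capture, HDR assembly, calibration, and upload cycle, which this sketch does not implement.

```python
import time

def mean_processing_time(measure, repeats=3):
    """Average wall-clock time of one measurement cycle, repeated to
    capture the variability in processing time."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        measure()
    return (time.perf_counter() - t0) / repeats
```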

Lab study
This section describes the effect of reduced resolutions under varying conditions as found in the lab study. 92% of the measurements in the mock-up office environment were successful, resulting in a total of 728 HDR images; the remaining 8% of the measurements failed mainly due to excessive luminance values exceeding the dynamic range of the Bee-Eyes. In the majority of these cases, direct sunlight hit the image sensor, saturating the shortest feasible exposure (9 μs) of this hardware and returning inappropriate luminance values. Hence, these saturated HDR images were not considered. Table 2 shows the effect sizes of the mean luminance differences between the initial HDR image and the HDR replicates for the eight predetermined surfaces of the mock-up office environment as indicated in Fig. 1. It shows that the effect of a decreased resolution is generally small for the mean luminance. With the exception of the monitors, acceptable effects were exhibited up to a resolution scaling factor of 0.2, representing a spatial resolution of 426 x 320 pixels. No clear differences in performance were found between desktop and background surfaces, nor between desktop surfaces measured with a horizontal (BE1) and vertical (BE2_D2 and BE3_D3) orientation. The monitors performed differently; especially BE2_M (monitor facing north) performed much worse, even though both monitors were turned off. This might be because their low luminances make minor absolute changes in luminance have a disproportionally large effect on the relative difference. As a result, the monitors require a higher resolution for an accurate assessment. Fig. 6 illustrates that the effect of a decreased resolution, for Bee-Eye 3, was indeed small, as the average mean luminance is practically horizontal (L_n ≈ 1). For the desktop and background surfaces, only minor deviations from L_n ≈ 1 were visible for scaling factors of approximately 0.2-0.1, which corresponded to the effect sizes found in Table 2. Bee-Eye 1 and Bee-Eye 2 exhibited similar results as Bee-Eye 3.
In contrast to the mean luminance, the maximum luminance generally exhibited a large effect due to a reduced spatial resolution, as indicated in Table 3. Also, Fig. 7 illustrates that for every reduction in resolution, the maximum luminance of the surface also decreased significantly. Even a scaling factor of 0.9 led to a maximum luminance outside the JND threshold. This same effect was exhibited for Bee-Eye 1 and Bee-Eye 2. This indicates that, for instance, glare caused by specular reflections of excessively high luminances, representing a low number of pixels, can go undetected for low spatial resolutions, resulting in an underestimated discomfort glare analysis.
The results indicated that the maximum luminance can only be measured with a resolution of at least 2130 x 1600 pixels. Therefore, we aimed to approximate the required spatial resolution for maximum luminance measurements. The average normalized maximum luminance values (L_n) of all surfaces (black line in Fig. 7), except the monitors, showed distinct linear relations (adj-R² 0.77-0.99) in the scaling region between 0.9 and 0.2. Scaling factors 1 and 0.1 were not considered in this case because these were, respectively, forced to L_n = 1 (as this was assumed to be the ground truth) or showed extreme deviations. Extrapolating these linear relations resulted in approximated scaling factors ranging between 0.95 and 1.43 for an accurate representation of the maximum luminance. A scaling factor of 1.43 relative to a resolution of 2130 x 1600 pixels resulted in a resolution of 3046 x 2280 pixels. When rounding off, a spatial resolution of 3000 x 2250 pixels seems to be an appropriate approximation of the spatial resolution needed for accurate maximum luminance measurements. More importantly, however, it indicates that a high spatial resolution is required for maximum luminance measurements.
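The extrapolation step can be sketched as an ordinary least-squares fit of L_n against the scaling factor, solved for L_n = 1; the slope and intercept in the usage note are illustrative, not the study's fitted values.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept (L_n versus scaling factor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def scale_for_accurate_max(slope, intercept, target=1.0):
    """Scaling factor at which the extrapolated L_n reaches the target of 1,
    i.e. an accurate maximum luminance measurement."""
    return (target - intercept) / slope
```

For an illustrative line L_n = 0.5 · s + 0.4 fitted over the 0.2-0.9 scaling region, the extrapolated factor is (1 − 0.4)/0.5 = 1.2.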
It was hypothesized that the maximum luminance gradient within a surface has a large effect on the performance of lower spatial resolutions. Fig. 8 shows the relation between the maximum luminance gradient (∇L_m) and the relative performance of the lowest resolution (ΔL_n,max). It shows that these aspects were rather strongly related, with correlation factors varying between 0.20 and 0.80. Generally, a higher gradient resulted in a lower relative performance, as was expected, although the spread is substantial. This shows that large luminance gradients generally had a negative effect on the performance of the reduced resolutions, as was hypothesized. Similar analyses were conducted for the illuminance on the sensor calculated from the luminance distributions according to Equation (2) (Table 4). In contrast to the mean luminance, a clear difference was found between horizontally and vertically applied devices, representing significantly different fields of view. The two vertically oriented devices (BE2 and BE3) captured a more complex scene, including a large portion of the outside environment, which required a spatial resolution of at least 640 x 480 pixels (scaling factor of 0.3). BE3 (north-oriented) performed slightly better, indicated by lower effect sizes, than BE2 (south-oriented). For the horizontally oriented device, with a much simpler scene, a spatial resolution of 213 x 160 pixels sufficed. Nevertheless, for all orientations, a significant reduction in resolution could be achieved. Fig. 9 also illustrates that until a scaling factor of 0.3, almost no deviations from the horizontal line occurred. Additionally, it also shows smaller effects for the horizontally applied device (BE1).

Table 2
Effect size table of the mean luminance extracted from the relevant surfaces with an initial resolution of 2130 x 1600 pixels, measured in the mock-up office environment. An effect size below 0.1 was considered small, an effect size between 0.1 and 0.3 was considered a medium effect (*), and effect sizes larger than 0.3 were considered large (**). Effect sizes that violated the threshold conditions are indicated in bold.

Table 3
Effect size table of the maximum luminance extracted from the relevant surfaces with an initial resolution of 2130 x 1600 pixels, measured in the mock-up office environment. An effect size below 0.1 was considered small, an effect size between 0.1 and 0.3 was considered a medium effect (*), and effect sizes larger than 0.3 were considered large (**). Effect sizes that violated the threshold conditions are indicated in bold.

Field study
This section illustrates the effect of a reduced resolution on the measurement accuracy as found in the field study. During the field study, 98% of the measurements succeeded, resulting in 3,380 unique HDR images.
Table 5 shows the effect sizes of the reduced resolutions for the normalized mean desktop luminance, starting with an initial resolution of 901 x 676 pixels, measured in a real office environment. When all surfaces were included, a spatial resolution of 450 x 338 pixels, corresponding to a scaling factor of 0.5, was required for an accurate representation of the mean luminance in this office environment, which shows large similarities to the spatial resolution of 426 x 320 pixels found, excluding the monitors, in the lab study. However, a large spread was exhibited between the different desktops; for instance, for desk 3 (BE1_D3) a resolution of 180 x 135 pixels would have been sufficient. This large spread was not exhibited during the lab study, which might be caused by the living environment. On the other hand, during the lab study, the lower number of surfaces and larger absolute step sizes in resolution might have faded the spread. The findings of Table 5 correspond to Fig. 10. Almost no deviations exist until a scaling factor of 0.5 (zigzagging effect excluded, section 4.2). After a scaling factor of 0.5, a reduction in performance was always observed, either very early but in a smooth manner (BE3_D10) or very late but in an abrupt manner (BE3_D12).
The lab study showed that for maximum luminance measurements a spatial resolution of approximately 3000 x 2250 pixels was required, both for measurements at eye level and from the ceiling. In correspondence with this, the field study, measuring from the ceiling, showed that all scaling factors had a large effect on the accuracy, indicating that a spatial resolution higher than 901 x 676 pixels was required. In contrast to the lab study, no additional extrapolation was conducted because this is questionable for resolutions so far from the initial range. Therefore, it is assumed that for the field study a spatial resolution of approximately 3000 x 2250 pixels was also required. Analogous to Fig. 8, Fig. 11 illustrates the relationship between the luminance gradient (∇L_m) and the relative performance of the luminance map with a linearly scaled resolution of 0.1 (ΔL_n,max). Again, the results showed large similarities with the lab study, with correlation factors ranging between 0.34 and 0.75. Fig. 11 shows that this relation was exponential. Nevertheless, the implication remains that the measuring performance for lower spatial resolutions decreases when large luminance gradients are present. Hence, complex scenes with large luminance gradients might require a higher spatial resolution.
The lab study showed that a resolution of 640 x 480 pixels was suitable for illuminance measurements; for the ceiling-based measurements, even a scaling factor of 0.1 sufficed. The ceiling-based measurements during the field study, however, required a spatial resolution of 720 x 540 pixels according to Table 6. This spatial resolution is higher than what was found during the lab study, especially compared to the ceiling-based (BE1) measurements. This is explained by the larger luminance gradients found on the desktops during the field study (Fig. 11), caused by it being a living environment where multiple objects were placed on the desktops. Similar to the mean desktop luminance (Table 5), a relatively large spread was exhibited between the suitable spatial resolutions for illuminance measurements by the three devices. Generally, however, a higher spatial resolution was required for illuminance measurements than for the mean luminance.

Facial recognition
The relation between the spatial resolution of the Bee-Eyes and the facial resolution is displayed in Fig. 12. It shows that the eye-to-eye resolution threshold was a much stricter criterion than the horizontal and vertical face resolutions. The eye-to-eye resolution threshold was met for scaling factors lower than 0.33, representing a spatial resolution of 1082 x 813 pixels, while the horizontal and vertical face resolution thresholds were met for scaling factors of 0.46 and 0.44, respectively. This means that the initial spatial resolution in the lab study (2130 x 1600 pixels) allowed facial recognition, while the spatial resolution applied in the field study (901 x 676 pixels) did not. The eye-to-eye distance seems to be the most suitable and robust indicator as it exhibited relatively little spread compared to, mainly, the vertical face resolution. Nevertheless, one should take into account the distance between the face and the camera, which was approximately 1.7-2 m in this context; this is easily accounted for by a linear scaling factor.
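The linear scaling mentioned above can be sketched with a simple pinhole camera model: the eye-to-eye distance in pixels grows linearly with the resolution scaling factor and shrinks inversely with the face-to-camera distance. All numbers below (focal length in pixels, pixel threshold) are hypothetical and serve only to show the scaling relation, not the study's calibration.

```python
EYE_TO_EYE_M = 0.063   # typical interpupillary distance in metres

def eye_pixels(focal_px, distance_m, scale):
    """Eye-to-eye distance in pixels under a pinhole camera model."""
    return focal_px * EYE_TO_EYE_M / distance_m * scale

def max_privacy_scale(focal_px, distance_m, threshold_px):
    """Largest scaling factor keeping the eye distance below a
    hypothetical recognition threshold (in pixels)."""
    return threshold_px / eye_pixels(focal_px, distance_m, 1.0)

# hypothetical full-resolution focal length of 2500 px and a 30 px
# threshold, evaluated at the 1.7-2 m face-to-camera range of the study
scale_17 = max_privacy_scale(2500.0, 1.7, 30.0)
scale_20 = max_privacy_scale(2500.0, 2.0, 30.0)
```

As expected, a larger face-to-camera distance permits a larger (less restrictive) scaling factor, which is the linear correction referred to in the text.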

Computational costs
Fig. 13 shows the average processing time, used to indicate the computational costs, in relation to the spatial resolution. The processing time can be approximated by an exponential function. According to these measurements, a scaling factor of 0.74, resulting in a spatial resolution of 2437 x 1830 pixels, limits the processing time to 60 s. As an example, measurements with a resolution of 450 x 338 pixels, which was sufficient for mean luminance measurements, have an estimated processing time of approximately 12 s, which can be considered very minor for a luminance camera. Alternatively, the processing time can be reduced by using a faster processor or optimized software. This allows for high-interval (<1 min) measurements that might be needed to capture the fast variations of daylight [42]. Nevertheless, processing times much lower than 10 s are not expected to be feasible. Additionally, one can assume that other computational costs, such as data storage, behave similarly and can also be reduced significantly.
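The scaling rule reported later in this discussion, that halving the linear resolution divides the processing time by roughly 2.7, corresponds to a power law t(s) = t_full * s^k with k = log(2.7)/log(2) ≈ 1.43. A minimal sketch of this relation (the full-resolution time t_full is a hypothetical placeholder, not a measured value):

```python
import math

# exponent implied by "halving resolution -> time / 2.7"
K = math.log(2.7) / math.log(2.0)

def processing_time(scale, t_full=90.0):
    """Estimated processing time (s) at linear scaling factor `scale`,
    assuming the power-law model above; t_full is hypothetical."""
    return t_full * scale ** K

# any halving of the scaling factor reduces the time by ~2.7x
ratio = processing_time(0.8) / processing_time(0.4)
```

This is a simplification of the fitted curve in Fig. 13, but it reproduces the factor-2.7 behaviour exactly by construction and illustrates why modest resolution reductions yield large computational savings.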

Mean luminance
The objective of this research was to validate whether the luminance can be measured at relatively low spatial resolutions, such that it is suitable for long-term measurements for control algorithms that aim to provide high visual comfort. Both the lab study and the field study confirmed that there is no need to apply the maximum spatial resolution of a camera when measuring the mean luminance. The lab and field study found almost identical resolutions of 426 x 320 and 450 x 338 pixels, respectively, which can be rounded off to 440 x 330 pixels. This was only validated for the experimental conditions applied in this research.
For instance, a higher ceiling might result in a higher spatial resolution. Nevertheless, the experimental conditions resemble those of numerous office environments. The overall complexity of the field study, resulting in larger luminance gradients, was much higher because it contained numerous objects on the desk surfaces, in contrast to the empty desks in the lab study. However, the influence of the larger luminance gradient was limited for the mean luminance.

Fig. 8. Correlation plot of the maximum gradient (∇L_max) relative to the performance of the luminance map with scaling factor 0.1 (ΔL_n,max) measured in the mock-up office environment with an initial resolution of 2130 x 1600 pixels.

Table 4
Effect size table of the illuminance extracted from the luminance distribution with an initial resolution of 2130 x 1600 pixels measured in the mock-up office environment. An effect size below 0.1 was considered small, an effect size between 0.1 and 0.3 was considered a medium effect (*), and effect sizes larger than 0.3 were considered large (**). Effect sizes that violated the threshold conditions are indicated in bold.
The results of the lab study (Table 2) showed that the monitors, especially BE2_M, exhibited a lower measurement performance for the mean luminance than the desk and background surfaces. Zooming in on this monitor, it turned out that this was mainly caused by the monitor frame, which contained a number of chrome-like buttons with a maximum diameter of only 14 pixels; these were not present on monitor BE3_M. The pixels representing the buttons had a luminance several orders of magnitude larger than the average luminance of the computer screen (Fig. 14), hence having a large effect on both the maximum and mean luminance of the monitor while only being a small detail. This effect also appears in Fig. 8, where a high correlation (0.8) was found between the maximum gradient (the transition from button to black monitor) and the performance of the lower resolution for BE2_M. This shows that down-sampling, with respect to mean luminance measurements, differs from simply averaging the image even though there are large similarities, as was also clearly indicated in Fig. 10. Due to the alignment of pixels (Section 4.2), the effect of luminance masks, and the effect of bilinear interpolation, small differences arise. The analysis of effect sizes was repeated for the monitor screen only, leaving out the monitor frame. This resulted in a low sensitivity (small effect size up to a scaling factor of 0.2) to the spatial resolution because the screen itself was very uniform, as indicated by the large spread of the pixel locations representing the maximum luminance values (Fig. 14). Additionally, in Fig. 6 the monitor of Bee-Eye 3 also showed larger deviations for the three smallest scaling factors compared to the other two surfaces. This was caused by the low luminance values of the black computer screen; as a result, a small change in luminance, due to the image resizing, resulted in a large relative change. Moreover, the monitor surface was somewhat smaller than the other surfaces considered in this study, making it more sensitive to small changes.

Table 5
Effect size table of the mean luminance extracted from the relevant surfaces with an initial resolution of 901 x 676 pixels measured in the living office landscape. S.F. represents the scaling factor. An effect size below 0.1 was considered small, an effect size between 0.1 and 0.3 was considered a medium effect (*), and effect sizes larger than 0.3 were considered large (**). Effect sizes that violated the threshold conditions are indicated in bold.
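The effect-size classification used in the tables can be stated compactly. The sketch below only reproduces the threshold scheme given in the captions (<0.1 small, 0.1-0.3 medium "*", >0.3 large "**"); the underlying effect-size statistic itself is not reproduced here.

```python
def classify_effect(e):
    """Classify an effect size per the thresholds in Tables 4-6."""
    if e < 0.1:
        return "small"
    if e <= 0.3:
        return "medium (*)"
    return "large (**)"

# illustrative values only
labels = [classify_effect(e) for e in (0.05, 0.2, 0.45)]
```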

Maximum luminance
The maximum luminance is an important indicator as it is a major contributor to discomfort and disability glare: the glare source luminance (often, but not exclusively, the maximum luminance) is one of the four quantities recurring in all established glare indices [43]. Hence, the maximum luminance is essential for visual comfort. Therefore, it is important to be able to measure the maximum luminance accurately; however, it is also desirable to do this with a relatively low spatial resolution to limit, for instance, the computational costs. The maximum luminance was largely affected by the spatial resolution of the luminance camera because the maximum luminance is generally represented by a low number of pixels. In those cases, at lower spatial resolutions the high-luminance pixels are averaged with lower-luminance pixels in their vicinity. The measurements in the lab and in the field indicate that the spatial resolution used in the lab study was not sufficient to measure the maximum luminance. This contradicts the recommended spatial resolution of 1000 x 1000 pixels for discomfort glare measurements as stated in the RADIANCE evalglare manual [44]. In the lab study, this recommended spatial resolution would have led to an underestimation of the maximum luminance.
In practice, this effect might be even more distinct as the luminance gradient between the sun and the sky vault can be enormous. In this study, only the maximum luminance values of the respective surfaces were considered, already resulting in high spatial resolution requirements even though the luminance gradient was comparatively low. An additional validation, in the field, would be beneficial to assess the magnitude of this effect. On the other hand, the maximum luminance might be caused by noisy pixels. This might have occurred for a number of luminance maps; however, due to the large number of tested luminance maps, this can only have had a minor effect on the final spatial resolution.
Based on extrapolation of the average maximum luminance, it was approximated that a spatial resolution of 3000 x 2250 pixels might have been appropriate for accurate maximum luminance measurements. This spatial resolution is much higher than the recommended resolution of evalglare and also, as already indicated, much higher than the resolution required for mean luminance measurements. It was assumed that a similar spatial resolution was required for the maximum luminance in the field study, as indicated by the large similarities between the spatial resolutions for the mean luminance in the lab and field studies. Consequently, one could argue that the maximum luminance, and also glare, is less straightforward to measure during long-term measurements than the mean luminance.
A zigzagging effect was exhibited, especially for BE3_M in Fig. 7. Similar, but less distinct, effects were found throughout the results. This effect was caused by interpolating in small consecutive steps. Pixels with a large influence (very high or very low luminance) can be divided more effectively among the available pixels for certain scaling factors. For instance, a certain luminance pattern might be covered effectively by a scaling factor of 0.8, meaning that the majority of pixels are covered entirely by the luminance pattern, while for a scaling factor of 0.7 a large number of pixels are covered only partly by the luminance pattern. This effect depends on the specific luminance pattern, which is why it was not exhibited in all cases; it was also difficult to predict. This phenomenon was also found in Table 1 for the median luminance scaled by 33%.
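The alignment mechanism behind the zigzagging can be demonstrated with a hypothetical one-dimensional example: a bright patch that fills whole pixels at one block size but straddles pixel boundaries at a finer one, so that a coarser resolution can, counter-intuitively, preserve the maximum better. Block averaging again stands in for the bilinear resizing used in the study.

```python
def block_max(row, f):
    """Maximum of block-averaged values for block size f."""
    m = len(row) // f
    return max(sum(row[i * f:(i + 1) * f]) / f for i in range(m))

# hypothetical 12-pixel row with a 4-pixel bright patch at pixels 4..7
row = [10.0] * 12
for i in range(4, 8):
    row[i] = 1000.0

max_f3 = block_max(row, 3)   # patch straddles two blocks -> diluted
max_f4 = block_max(row, 4)   # patch fills one block exactly -> preserved
```

Here the coarser block size 4 retains the full 1000 cd/m^2 while the finer block size 3 dilutes it to 670 cd/m^2, reproducing the non-monotonic, alignment-dependent behaviour seen in Fig. 7.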

Illuminance
The illuminance at the sensor was calculated by integrating the luminance distribution. The lab study found that a resolution of 639 x 480 pixels was appropriate for illuminance measurements, while the field study suggested a slightly higher spatial resolution of 720 x 540 pixels. However, for the ceiling-based measurement (BE1) in the mock-up office, a spatial resolution of 213 x 160 pixels sufficed. It should be clearly noted that this ceiling-based measurement (BE1) is generally not necessarily relevant, in contrast to the vertical illuminances measured by BE2 and BE3, which are very relevant, especially in relation to the non-image-forming effects of light [45]. However, it is applied to indicate the effect of the luminance gradient on the spatial resolution that can be used. This showed that for scenes with low complexity, as captured by BE1, accurate measurements can be conducted with very low spatial resolutions, but for more complex conditions (larger gradients) higher spatial resolutions are required. The vertically oriented measurements of the same scene required higher resolutions because the complex daylight openings represented a much larger part of the field of view while the empty desks represented a smaller area. Also, for ceiling-based measurements of a more complex scene, such as measured in the field study, higher spatial resolutions were required. This was considered a complex scenario because it contained many small details, such as objects on the desk surfaces. This complexity impacted the measured illuminance because small but important details might be omitted at low resolutions. Therefore, a higher complexity, and hence a larger luminance gradient, has a negative effect on the accuracy of illuminance measurements at lower spatial resolutions, as was already indicated in Figs. 8 and 11.

Table 6
Effect size table of the illuminance extracted from the luminance distribution with an initial resolution of 901 x 676 pixels measured in the living office landscape. An effect size below 0.1 was considered small, an effect size between 0.1 and 0.3 was considered a medium effect (*), and effect sizes larger than 0.3 were considered large (**). Effect sizes that violated the threshold conditions are indicated in bold.
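The integration mentioned above is the cosine-weighted integral of the luminance over the hemisphere, E = Σ L_i cos(θ_i) ΔΩ_i. The sketch below is a generic numerical check of this relation, not the exact pixel-to-solid-angle mapping of the Bee-Eye cameras; for a uniform sky of luminance L it must recover the textbook result E = πL.

```python
import math

def illuminance(lum, n_theta=200, n_phi=200):
    """Illuminance (lux) from a luminance distribution lum(theta, phi)
    in cd/m^2, by midpoint integration over the hemisphere:
    E = sum(L * cos(theta) * sin(theta) * dtheta * dphi)."""
    d_theta = (math.pi / 2) / n_theta
    d_phi = (2 * math.pi) / n_phi
    e = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        for j in range(n_phi):
            phi = (j + 0.5) * d_phi
            d_omega = math.sin(theta) * d_theta * d_phi
            e += lum(theta, phi) * math.cos(theta) * d_omega
    return e

# uniform hemisphere of 100 cd/m^2 -> E = pi * 100 ≈ 314 lux
e_uniform = illuminance(lambda theta, phi: 100.0)
```

Because each resolved direction contributes its own cosine-weighted term, small bright details that vanish at low resolution bias the sum, which is consistent with the higher resolution requirement found for illuminance in complex scenes.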

Practical aspects
Besides the measurement accuracy, two practical aspects were identified that are relevant for the spatial resolution: the ability to recognize faces and the processing time. Both these aspects argue for a lower spatial resolution, while for the accuracy of the luminance measurements higher resolutions are preferred. To limit facial recognition, a maximum resolution of 1082 x 813 pixels was identified for the experimental conditions tested. Therefore, limiting the ability to automatically recognize faces can be combined with mean luminance and illuminance measurements; however, this was not feasible for maximum luminance measurements. Consequently, the mean luminance and illuminance are suitable for long-term measurements because there is no direct risk of automated facial recognition.
Different face recognition thresholds were found depending on the gaze direction and the facial resolution indicator. The eye-to-eye distance was the most robust indicator because it was rather easy to extract independent of the gaze direction. The horizontal and vertical face resolutions were difficult to extract when the gaze direction differed because of the roundish shape of the human head. Moreover, the eye-to-eye threshold formed a stricter requirement.
Because daylight can exhibit variations within a few minutes and even seconds [42], a low processing time is preferred to capture these daylight variations. Similar to the facial recognition, it was not feasible to combine a low processing time with accurate maximum luminance measurements. On the other hand, for mean luminance and illuminance measurements, the processing time could be reduced drastically, to approximately 12 s for illuminance measurements and even lower for mean luminance measurements. The actual processing time on other devices might differ as it also depends on the processor and the software applied. Nevertheless, this research indicates that the processing time, and hence the computational costs, behaves exponentially, whereby reducing the resolution by a factor of 2 reduces the processing time by a factor of approximately 2.7. Therefore, it is valuable to consider reducing the resolution to limit the computational costs significantly, especially for measurements at a high interval.

Validity
This study included two case studies to recommend suitable spatial resolutions. Naturally, only two case studies provide limited validity. Nevertheless, the study indicates that the spatial resolution can be reduced in certain situations. A systematic analysis of the spatial spectrum in the frequency domain using Fourier transforms might be able to improve the validity: reducing the spatial resolution eliminates the high-frequency part of the spectrum, which might correlate to the luminance gradient and the measurement error of the applied metrics. By including this approach in future research, more insight is expected into the underlying principles, as the two methods will be complementary. Moreover, the validity can be improved by performing additional case studies.

Conclusion
Since technology has enabled us to measure luminance distributions over longer periods of time, new issues have presented themselves. For instance, the spatial resolution of a luminance map is a very important aspect as it impacts the measurement accuracy, the ability to prevent automated facial recognition, and, to a large extent, the processing time. This research aimed to recommend a spatial resolution that satisfies these three relevant requirements (accuracy, face recognition, and processing time) for long-term visual comfort measurements, such that luminance cameras can be implemented in control algorithms to achieve high visual comfort.
It was shown that, for a typical open office environment, all three requirements can be satisfied for both mean luminance and illuminance measurements. The mean luminance can be accurately measured with a spatial resolution of 440 x 330 pixels under any weather condition within the experimental conditions applied in this study. This resolution is lower than the face recognition threshold and limits the processing time to approximately 12 s, which indicates a reduction in computational costs by a factor of nine relative to the maximum spatial resolution of this specific imaging sensor. For illuminance measurements, a slightly higher spatial resolution of 720 x 540 pixels is required because the illuminance is, in contrast to the mean luminance, more sensitive to luminance gradients. However, for very simple scenes, with limited luminance gradients, the spatial resolution can be decreased even further. The spatial resolution depends on the luminance gradient of the measured surface: a very large luminance gradient, or complex scene, will generally require a higher spatial resolution, although this effect is more pronounced for illuminance and maximum luminance measurements.
In contrast to the mean luminance and the illuminance, the three requirements cannot be met simultaneously for maximum luminance measurements. For accurate measurements of the maximum luminance, spatial resolutions were required that exceeded the spatial resolutions applied during this study. It was (roughly) estimated that a spatial resolution of approximately 3000 x 2250 pixels might have been suitable for an accurate representation of the maximum luminance under these conditions; such a resolution, however, would have allowed facial recognition and would have resulted in significant computational costs. This also indicates that for discomfort glare a very high spatial resolution, higher than suggested in the RADIANCE evalglare manual [44], might be required, as the maximum luminance is a major contributor to discomfort glare. This effect might be even more extreme when daylight and sunlight are considered. However, more research is required to verify these findings. It is therefore recommended that similar measurements are conducted with higher initial spatial resolutions to find a suitable resolution for maximum luminance measurements in a more accurate manner. In general, the validity can be improved by additional case studies and by alternative analysis of the spatial spectrum in the frequency domain using Fourier transforms.
This research has shown that the spatial resolution does influence the accuracy of the luminance distribution. Therefore, the spatial resolution should be chosen wisely depending on the required accuracy, office dimensions, presence of human subjects, and the available time and resources, especially for long-term measurements. A toolbox has been developed based on the results to assist others in choosing a relevant spatial resolution for their luminance camera for office conditions similar to the experimental conditions applied in this study. This MATLAB-based toolbox can be installed as a MATLAB application or as a standalone application, which can be found in the supplementary materials. The MATLAB toolbox can also be downloaded from the MathWorks file exchange; the standalone application can also be obtained by contacting the first author.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1.
Fig. 1. Mock-up office environment. Bee-Eye 1 (BE1) was attached to the ceiling, while Bee-Eye 2 (BE2) and Bee-Eye 3 (BE3) were placed on tripods facing south and north, respectively. The luminance was extracted for six unique surfaces (Desktop, Monitor and Background areas), of which the desktop areas (D2 and D3) were measured both with horizontal and vertical orientations. Position A is elaborated in section 2.6.

Fig. 3.
Fig. 3. A single series of resized HDR images as scaled by MATLAB, with the initial resolution indicated in orange. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

Fig. 4.
Fig. 4. Snapshot of a large luminance gradient area originating from BE3. The arrows indicate the magnitude of the gradient caused by the shadow of the window frame, as calculated using equation (3).

Fig. 6.
Fig. 6. Normalized mean luminance relative to an initial resolution of 2130 x 1600 pixels measured in the mock-up office environment with BE3. The black lines indicate the average mean luminance, the grey lines indicate each individual HDR image measured with Bee-Eye 3, and the orange lines indicate the JND. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

Fig. 7.
Fig. 7. Normalized maximum luminance relative to an initial resolution of 2130 x 1600 pixels measured in the mock-up office environment with BE3. The black lines indicate the average maximum luminance, the grey lines indicate each individual HDR image measured with Bee-Eye 3, and the orange lines indicate the JND. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

Fig. 9.
Fig. 9. Normalized illuminance relative to an initial resolution of 2130 x 1600 pixels measured in the mock-up office environment with BE3. The black lines indicate the average illuminance, the grey lines indicate each individual HDR image measured with Bee-Eye 3, and the orange lines indicate the JND. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

Fig. 10. Fig. 11.
Fig. 10. Normalized mean luminance relative to an initial resolution of 901 x 676 pixels measured in the living office landscape with BE3. The black lines indicate the average mean luminance, the grey lines indicate each individual measurement measured with Bee-Eye 3, and the orange lines indicate the JND. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

Fig. 13. Fig. 14.
Fig. 13. The processing time for a single measurement relative to the spatial resolution of 3280 x 2562 pixels. The dashed line represents the 1-min threshold; the brown and orange dots represent the spatial resolutions applied in the lab and field study, respectively. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)