Utilization of programmable cameras for web-based sensing and control of daylight in buildings

The concept of integrating programmable low-cost cameras into the office infrastructure and BMS for real-time, web-based sensing and control of the luminous environment in buildings is presented in this study. Experiments were conducted to evaluate the potential of predicting the luminance field perceived by an office occupant using a programmable, calibrated HDR camera installed at the rear side of a computer monitor or on the wall behind the occupant, under a variety of sky conditions and shading options. The luminance maps, generated using Python scripts with OpenCV packages, were further processed with Evalglare to extract daylighting and glare metrics. The results showed that: (i) among the camera resolutions compared, 330x330 pixels was the best option to balance accurate capture of the visual environment and comfort conditions with computational efficiency; and (ii) a camera sensor embedded on the rear side of a computer screen could capture interior visual conditions consistent with those viewed by the occupant, except under sunny conditions without proper shading protection. This prototype study paves the way for luminance monitoring and daylight control using programmable low-cost camera sensors embedded in the office infrastructure.


Introduction
Daylighting is one of the most important aspects of building design and operation. It significantly influences building energy performance and indoor environmental quality, as well as occupants' comfort. Therefore, it is important to balance these effects by monitoring visual conditions for effective performance-based daylighting control in perimeter building zones. A number of studies have been conducted in recent years to measure the occupant's visual conditions using luminance maps and distributions converted from High Dynamic Range (HDR) images [1]. The entire process of generating luminance maps from these HDR images can be fully automated using software or customized Python scripts. However, several practical issues arise when monitoring the occupant's visual conditions using luminance information, such as computational cost and proper camera location. For example, the spatial resolution of HDR images affects computational time, and proper resolutions should be selected for efficient luminance monitoring applications [2]. Moreover, locating a camera near the occupant is intrusive and can interfere with the occupant's daily activities. Therefore, alternative camera locations should be studied to meet both accuracy and practicality criteria. This paper evaluates the feasibility of integrating programmable low-cost cameras into the office infrastructure (e.g., on computer screens) for monitoring luminance conditions perceived by the occupant, and for real-time, web-based daylight sensing and control. By comparing and analyzing luminance maps captured at the occupant's seat and at alternative non-intrusive locations, the accuracy and practicality of such camera placements are assessed.

Basic specs
A Raspberry Pi 3 B+ (Raspi), a representative general-purpose single-board computer, was used for this research. All measurements with the Raspi were controlled using Python scripts under an embedded Linux-based operating system. In addition, a low-cost High Dynamic Range (HDR) image sensor, the Raspberry Pi Camera Module V2, was connected to the Raspi for the measurements. This camera contains a CMOS sensor (Sony IMX219) with a maximum dynamic range of 44.56 dB and a resolution of 3280x2464 pixels. Since a wide field of view (FOV) was necessary for this research, an M12 fisheye lens with an aperture of f/1.8 and a focal length of 1.05 mm [3] was used. A customized mount was designed using a 3D printer to fit the fisheye lens to the Raspi camera and minimize light leakage during the calibration and measurement procedures.

Geometric calibration
Although a camera with a fisheye lens captures a wide FOV compared to a typical lens, the radial distortion of the fisheye lens significantly deforms the scene. Therefore, geometric calibration is necessary to map real-world scenes onto the image pixels captured through the fisheye lens, by estimating the intrinsic camera parameters and the distortion coefficients. Geometric calibration with the fisheye calibration module of the OpenCV Python library consists of three steps: 1) capturing several images of a checkerboard with a predefined square size, 2) extracting the corner coordinates of the checkerboard in each image, and 3) computing the intrinsic matrix and distortion coefficients.

Photometric calibration
In addition to the geometric calibration, photometric calibration was carried out to automate the generation of High Dynamic Range (HDR) images and their conversion to luminance maps using Python code with OpenCV libraries. The photometric calibration consists of five steps: 1) capturing a series of images with different exposures; 2) combining the captured images into an HDR image using a previously extracted camera response function (CRF); 3) generating a luminance map from the HDR image; 4) extracting and applying a calibration factor (CF) to make the converted luminance map reliable; and finally, 5) generating a correction function to compensate for the vignetting effect. After both geometric and photometric calibration were completed, the Raspi camera could generate HDR images and convert them to a luminance map as shown in Figure 1.
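Assuming steps 1-2 (exposure bracketing and CRF-based HDR merging) are handled with OpenCV's HDR tools, steps 3-5 can be sketched as follows. The 179 lm/W factor is the Radiance luminous-efficacy convention; the radial vignetting polynomial and its strength are placeholder assumptions, not the measured correction function from the study.

```python
import numpy as np

def luminance_map(hdr_rgb, cf=1.0):
    """Steps 3-4: photopic luminance (cd/m2) from a linear HDR RGB image.
    cf is the calibration factor obtained against a reference luminance meter."""
    r, g, b = hdr_rgb[..., 0], hdr_rgb[..., 1], hdr_rgb[..., 2]
    return cf * 179.0 * (0.2126 * r + 0.7152 * g + 0.0722 * b)

def devignette(lum, strength=0.3):
    """Step 5 (placeholder form): brighten pixels towards the image corners
    with a simple radial polynomial; the study derived its own correction."""
    h, w = lum.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - (w - 1) / 2, y - (h - 1) / 2) / (min(h, w) / 2)
    return lum * (1.0 + strength * r**2)
```

In this sketch the CF is a single scalar, which matches the paper's description of applying one calibration factor to the whole map.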

Overall process
The overall process of this research consists of three stages, shown in Figure 2. In the first stage, geometric and photometric calibration was conducted to generate reliable luminance maps from captured HDR images. The luminance maps were used as input to Evalglare [4], an open-source software tool for glare analysis. Average luminance, vertical illuminance, and average window luminance were extracted and used as visual environment parameters, while the daylight glare probability (DGP) [5] was computed and used as a visual comfort parameter. These parameters were used both in the experiments for selecting a proper resolution and in the long-term measurements evaluating the feasibility of non-intrusive camera locations for representing the occupant's visual conditions.
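Evalglare computes DGP internally from the luminance map; purely for illustration, the published DGP formula it implements can be sketched as below, taking the glare-source luminances, solid angles, and Guth position indices as already-extracted inputs (the function and variable names are assumptions).

```python
import numpy as np

def dgp(ev, ls, omega, pos):
    """Daylight Glare Probability (Wienold & Christoffersen).
    ev:    vertical illuminance at the eye [lux]
    ls:    luminances of the detected glare sources [cd/m2]
    omega: solid angles of the glare sources [sr]
    pos:   Guth position indices of the glare sources [-]"""
    ls, omega, pos = map(np.asarray, (ls, omega, pos))
    glare_term = np.sum(ls**2 * omega / (ev**1.87 * pos**2))
    return 5.87e-5 * ev + 9.18e-2 * np.log10(1.0 + glare_term) + 0.16
```

With no detected glare sources the summation vanishes and DGP reduces to its vertical-illuminance term plus the 0.16 offset, which is why Ev alone already tracks much of the glare risk.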

Selecting a resolution to balance accurate capture of the visual environment with computational efficiency
The experiments were conducted in a mock-up private office (3.2 m x 4 m x 3.2 m high) at Purdue University in West Lafayette, Indiana. The office has south-facing aluminum curtain wall framing with a window-to-wall ratio of 54% and two 32-W T5 fluorescent lamps on the ceiling. Controlled roller shades with 2.8% normal light transmittance were used on the windows. For this experiment, three Raspi cameras were used to select a proper resolution for the long-term measurements. The cameras were located at the occupant's seat in two different viewing directions (facing the window and parallel to the window) to compare the results in terms of their ability to capture visual environment parameters and visual comfort. Two additional resolutions were compared with the resolution recommended for Evalglare (800x800 pixels) at the same location: 330x330 pixels, a proposed lower limit for measuring reliable mean luminance [2], and 200x200 pixels, selected to maximize computational and storage efficiency. The lowest resolution was found to underestimate the actual DGP and average luminance by 15% in specific cases. Considering the relatively small loss of visual performance and comfort information, the smaller storage size of the captured HDR images, and the computational efficiency, the 330x330 resolution was selected for the long-term measurements in the next steps of this work.
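The underestimation at low resolution can be illustrated with a toy example: area-averaging a luminance map preserves the mean luminance but dilutes small, bright glare sources, which lowers peak-sensitive metrics such as DGP. The scene values and the integer-factor block averaging below are illustrative only; the study compared actual 800x800, 330x330, and 200x200 captures.

```python
import numpy as np

def block_average(lum, factor):
    """Area-average an H x W luminance map by an integer factor,
    mimicking a resolution reduction of the captured HDR image."""
    h, w = lum.shape
    return lum.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Toy scene: dim background with one small bright patch (e.g., visible sun)
scene = np.full((800, 800), 50.0)          # background luminance [cd/m2]
scene[102:106, 202:206] = 1.0e6            # 4x4-pixel bright source
low = block_average(scene, 4)              # 800x800 -> 200x200
# low.mean() equals scene.mean(), but low.max() is far below scene.max():
# the bright source is smeared across coarser pixels.
```

This is consistent with the observation that the 200x200 maps underestimated DGP and average source luminance in specific (direct-sun) cases.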

Setting and methodology
The long-term measurements were also carried out in the same private office. In this experiment, 5 Raspi cameras with the same resolution (330x330) were used for visual environment and comfort measurements (Figure 3). The objective was to compare luminance maps and visual environment metrics between measurements taken at the occupant's seat versus other locations (e.g., on computer monitors or on the back wall). Two cameras were attached to the rear side of the monitors, representing the locations of desktop-monitor and laptop-monitor rear cameras, respectively. Since a typical laptop monitor is smaller than a desktop monitor and is usually tilted, the laptop rear camera (d) was attached 0.16 m below the camera at the top of the monitor (b) and tilted downward 15°. Another two cameras were located at the occupant's seat to represent the baseline: the occupant's view towards a desktop monitor (a) and towards a laptop monitor (c), respectively (the first matched the view line of the camera attached to the top of the monitor and the second was tilted downward 15°). The last camera was installed on the wall behind the occupant, tilted downward 20° towards the monitor to mimic the setup proposed by [6]; it also served as a reference for the cameras placed at the rear side of the monitors. The measurements were conducted for 18 days under different weather and shading conditions (with solar luminance exceeding 1,000,000 cd/m²), from 9 am to 6 pm at 10-minute intervals. The average vertical illuminance from each captured luminance map was computed as a visual environment metric, and DGP was computed as a visual comfort metric, to compare the 5 camera locations. To include results under various conditions, 3 different roller shade positions (0%, 50%, and 75%) and 2 different weather conditions (sunny and cloudy) were considered.
However, since occupants usually do not need to adjust shades on cloudy days to prevent glare, only fully open roller shades were included in the analysis for cloudy days. The number of datapoints in each cluster is listed in Table 1.
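As one example of the metrics computed from the captured maps, the vertical illuminance can be obtained by integrating the luminance map over the hemisphere, Ev = Σ L·cosθ·ΔΩ. The sketch below assumes an ideal equidistant 180° fisheye projection filling a square map, which is an idealization of the calibrated lens actually used; a useful sanity check is that a uniform luminance L yields Ev ≈ π·L.

```python
import numpy as np

def vertical_illuminance(lum):
    """Ev [lux] from a square luminance map [cd/m2], assuming an ideal
    equidistant 180-degree fisheye projection inscribed in the square."""
    n = lum.shape[0]
    c = (np.pi / 2) / (n / 2)                       # radians per pixel of radius
    y, x = np.mgrid[0:n, 0:n]
    r = np.hypot(x - (n - 1) / 2, y - (n - 1) / 2)  # radius from image centre
    theta = c * r                                   # angle from the view axis
    inside = theta <= np.pi / 2                     # keep the imaged hemisphere
    # Per-pixel solid angle for the equidistant mapping: dOmega = c*sin(theta)/r
    d_omega = c * np.where(r > 0, np.sin(theta) / np.maximum(r, 1e-9), c)
    return float(np.sum(lum[inside] * np.cos(theta[inside]) * d_omega[inside]))
```

The same map and weighting scheme also supply the per-source luminances and solid angles that glare evaluation needs, which is why a single calibrated capture can drive both Ev and DGP comparisons.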

Results
Representative Ev and DGP results collected from the five cameras under these conditions are shown in Figures 4 and 5. With the shades 50% closed, relatively small differences were observed between the monitor-mounted cameras and the occupant cameras, and similarly for the wall-mounted camera, for both desktop and laptop view directions. For example, in the first row of Figure 5, the monitor-mounted cameras resulted in a mean absolute error (MAE) of about 0.02 in the DGP results (compared to the camera measurements at the occupant location). This trend was also observed in the other results under different conditions, except when direct sun appeared in the captured scene. Applying a mask on the monitor area can further reduce this error. As a simple demonstration, a mask was automatically generated and applied at the center part of the scene with a constant luminance of 100 cd/m², similar to the average luminance of a monitor when turned off. Figure 6 presents representative results of applying the mask, showing that it reduced the MAEs in most cases.
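The masking step can be sketched as follows; the monitor's fractional size and central position are placeholder assumptions, while the 100 cd/m² constant follows the study's choice for a switched-off screen.

```python
import numpy as np

def mask_monitor(lum, value=100.0, frac=0.25):
    """Overwrite an assumed central monitor region of the luminance map with a
    constant luminance (100 cd/m2 approximates a switched-off screen).
    frac is the assumed fractional width/height of the monitor in the scene."""
    out = lum.copy()
    h, w = lum.shape
    dh, dw = int(h * frac / 2), int(w * frac / 2)
    out[h // 2 - dh:h // 2 + dh, w // 2 - dw:w // 2 + dw] = value
    return out
```

Because the monitor-mounted camera sees the back of the screen while the occupant sees its front, replacing that region with a plausible constant removes the main systematic difference between the two views before Ev and DGP are recomputed.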

Conclusion
The objective of this study was to evaluate the feasibility of using programmable cameras near the office user and integrated into the space infrastructure (e.g., at the rear side of a monitor or on the back wall) for approximating the user's visual environment and comfort perception in a private office in a non-invasive way. A low-cost programmable HDR camera with a fisheye lens was used to capture a wide FOV. First, the 330x330 resolution was selected as the best option to balance accurate capture of the visual environment and comfort conditions with computational efficiency, and was then used for the long-term experiments. By comparing the luminance field captured at the user's position to the fields captured at the other camera locations, practical non-invasive camera positions are proposed that can represent the user's visual environment and comfort through the computed DGP and Ev. The results showed that a camera sensor embedded on the rear side of a computer screen could capture interior visual conditions consistent with those viewed by the occupant when the shades were 50% lowered, for both desktop and laptop view directions, for the space and time period studied. In addition, the error could be reduced by applying a simple mask of the monitor area at the center of the monitor camera scene. This study is an initial step toward the integration of smart programmable cameras into computers for real-time, web-based daylight sensing and personalized lighting control.