Reflectance Modeling in Machine Vision: Applications in Image Analysis and Synthesis

Taking images of objects under different illumination and viewing directions and analyzing them has long been an active research area in both machine vision and computer graphics. While computer graphics aims to synthesize realistic images from abstract scene descriptions, machine vision is concerned with the problem of deducing properties of a scene based on its interaction with light. For this reason, many algorithms from both disciplines rely on an accurate measurement and modeling of how light reflects off surfaces in a physically correct way.


Introduction
In this chapter we show how machine vision for automated visual inspection can greatly benefit from reflectance measuring and modeling in the context of image analysis and synthesis.
From the viewpoint of image synthesis, reflectance measurement and modeling of real-world materials can be used to simulate machine vision systems by synthesizing images with computer graphics. The design process of machine vision systems requires a lot of domain-specific experience and is often based on the "trial and error" principle. Many aspects have to be taken into account, and often the construction of a prototype system is inevitable. Finding the right camera position(s) and achieving a satisfying illumination of the inspected objects and surfaces is a difficult process, as is the training of the classification and decision algorithms. Simulating machine vision systems using computer graphics can support and shorten this process and even lead to better results than a manual setup.
Traditionally, computer graphics systems are designed to create images that are presented to human eyes. The goal is to make the viewer judge the images as, e.g., believable, logical or beautiful. For machine vision systems, however, physical credibility is the most important factor. To achieve this goal, several areas of computer graphics have to be examined with one question in mind: what is required to create synthetic images that can be used as ground truth data for image processing, classification and decision algorithms?
Crucial parameters for machine vision systems are the reflection properties of the object surfaces in the scene, which we will focus on in this work. Modeling how light behaves when it hits an object surface has been an important research area in computer graphics since the beginning. Measuring, e.g., the Bidirectional Reflectance Distribution Function (BRDF) of real-world materials and fitting mathematical models to the data opened up the path to very realistic looking images and also introduced more physical correctness to synthetic images.
We present a machine vision simulator that can create synthetic images showing the inspected objects under varying perspectives and illumination conditions and compare them with images captured by a real camera. The synthetic and real-world images are compared on an image-pair basis, and additionally the results of different image processing algorithms applied to both are investigated. To achieve a high degree of realism, we use camera calibration to determine the intrinsic and extrinsic parameters of the real camera and use those parameters in the camera model of the simulator. Our simulator applies measured BRDF values to CAD models to achieve high quality in simulating diffuse and specular behavior for isotropic and even anisotropic materials.
From the viewpoint of image analysis, reflectance measurement is performed by capturing illumination series. Illumination series are obtained by taking images of objects under varying illumination directions providing the input for various machine vision algorithms like photometric stereo and algorithms for image fusion. Since illumination series contain much more information about the reflectance properties of the illuminated surface than a single intensity image, they facilitate image analysis for defect detection. This is demonstrated by means of a machine vision application for surface inspection.
However, capturing a large number of images with densely sampled illumination directions results in high-dimensional feature vectors, which induce high computational costs and reduce classification accuracy. In order to reduce the dimensionality of the measured reflectance data, reflectance modeling can be used for feature extraction. By fitting parametric models to the data obtained from recording illumination series, the fitted model parameters represent meaningful features of the reflectance data. We show how model-based extraction of reflectance features can be used for material-based segmentation of printed circuit boards (PCBs).

Measuring reflectance
Over the last decades, many different setups and devices have been developed to either measure the reflection properties of a material directly or to capture an image series and derive the reflective behavior of the inspected material, or a group of materials, from it. The approaches differ considerably in their mechanical, optical and electrical setup, depending on the type of material and the material parameters to be acquired. They range from large outdoor equipment and room-filling robot-based setups to tiny handheld devices. Measurement times can range from seconds to many hours. In the following, we give a short introduction to the different configurations known from the literature and then present our own measurement device.
Two main distinctions between the measurement setups can be made: first, how many pairs of incident and outgoing angles of the light at a point on the sample surface are captured at a time? And second, is only a single point on the sample surface measured, or multiple points? Sampling multiple points on the object surface is necessary to acquire an SVBRDF (Spatially Varying BRDF) or a BTF (Bidirectional Texture Function).

Devices for measuring the reflectance properties of a material
One representative example in the field of table-based gonioreflectometers is the work of Murray-Coleman & Smith (1990), who designed a gonioreflectometer with a single light source and one detector movable along arcs in the hemisphere above a rotatable sample area. White et al. (1998) built a similar device with improved accuracy. To capture the reflection for more than one angle pair at a time, curved mirrors are widely used, as in the works of Ward (1992), Dana (2001), Ghosh et al. (2007) and Mukaigawa et al. (2007). To add more mechanical degrees of freedom in the positioning of the material probe, robot arms have been used, e.g., by Dana et al. (1999), Sattler et al. (2003) and Hünerhoff et al. (2006). Debevec et al. (2000) constructed a light stage with multiple light sources and cameras at fixed positions to acquire the reflectance field of a human face. By wrapping the material probe around a cylinder, Marschner et al. (2000) simplified the mechanical setup for measuring isotropic BRDFs. Ngan et al. (2005) extended this setup by making the cylinder itself rotatable to also measure anisotropic BRDFs. Gardner et al. (2003) developed a linear, tabletop scanner-like device to capture the spatially varying BRDF of a material. An interesting work that shows how to capture SVBRDF and BTF without mechanically moving camera and light source, by using a kaleidoscope, is that of Han & Perlin (2003). Using an array of 151 digital cameras arranged as a dome above the material sample, Mueller et al. (2005) presented their measurement device, similar to Malzbender et al. (2001). With the goal of developing a small handheld device to measure BRDFs, Ben-Ezra et al. (2008) created an LED-based device using the LEDs both as light sources and as detectors. Further camera-based handheld devices to measure SVBRDFs were presented by Dong et al. (2010) and Ren et al. (2011). Measuring reflectance is also of interest in other research areas.
A large outdoor field goniometer to capture the BRDF of earth and plants for remote sensing applications was presented by Sandmeier (2000). Jackett & Frith (2009) designed a device to measure the reflection properties of road surfaces to improve the safety and sustainability of road lighting.

Fig. 1. The proposed illumination device. It approximates a distant illumination whose direction is described by the azimuthal angle ϕ and the polar angle θ with origin F. The relationship between the illumination pattern and the illumination direction is given by Equation (1).

Devices for capturing illumination series
Closely related to the approaches described above to measure reflectance functions of material samples are devices to record images of a scene as a whole under varied illumination directions. In the field of machine vision, this set of images is commonly referred to as illumination series (Puente León (1997); Lindner & Puente León (2007)). In the following, we present an acquisition device to capture illumination series of small objects. The proposed illumination device is shown in Figure 1. The optical components are: a digital LCD projector, a Fresnel lens, a parabolic reflector with a center hole and a digital camera. All components are aligned along their optical axes.
The projector serves as a programmable light source, which allows the relative radiance along the emitted light rays to be controlled independently. Assuming a pinhole model for the projector, each projector pixel can be thought of as the source of a single ray of light that emanates from the optical center of the projector. By placing the projector at the focal point of the Fresnel lens, the light rays are collimated by the lens and converge at the focal point F of the parabolic reflector. As a consequence, the illumination direction of light rays incident to the focal point F can be controlled by projecting spatially varying illumination patterns.
To image objects under different illumination directions, the object is placed at the focal point F of the parabolic reflector and the reflected radiance is captured by the camera attached to the center hole of the reflector. Although the proposed illumination device allows arbitrarily complex illumination patterns to be projected, in our first experiments we consider the simple binary illumination pattern depicted in Figure 2. The illumination pattern consists of a single fixed-size circular spot of white pixels with all other pixels being black. Due to the spatial extent of the illumination spot, the illumination pattern leads to a bundle of light rays rather than a single ray. However, each individual light ray of the bundle follows the laws of reflection and refraction, and hence a cone-shaped illumination is incident to F (see Figure 3).

Fig. 3. Projected ray bundle reflected at the surface of the parabolic reflector. A thin wax layer is used to roughen the specular surface of the reflector in order to obtain a diffuse directed reflection and to broaden the illuminated area size at F.
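A binary illumination pattern of this kind is straightforward to generate. The following sketch produces a white circular spot at polar position (r, ϕ) on a black background; the pattern resolution and spot size are arbitrary example values, not the parameters of our actual projector.

```python
import numpy as np

def spot_pattern(width, height, r, phi, spot_radius):
    """Binary illumination pattern: a white circular spot at polar
    position (r, phi) relative to the pattern center, black elsewhere.
    All distances are in pixels; phi is in radians."""
    cx, cy = width / 2.0, height / 2.0
    sx = cx + r * np.cos(phi)          # spot center in pattern coordinates
    sy = cy + r * np.sin(phi)
    yy, xx = np.mgrid[0:height, 0:width]
    mask = (xx - sx) ** 2 + (yy - sy) ** 2 <= spot_radius ** 2
    return mask.astype(np.uint8) * 255  # 8-bit image for the projector

pattern = spot_pattern(1024, 768, r=200, phi=np.pi / 4, spot_radius=25)
```

Varying r and ϕ between captures then moves the spot and with it the illumination direction incident to F.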
By establishing a Cartesian coordinate system with its origin at the focal point F of the parabolic reflector and its Z-axis aligned with the optical axis of the device and pointing in the direction of the camera, we are able to parametrize the illumination pattern and thus the resulting illumination direction incident to F as depicted in Figure 2. The position of the illumination spot is considered in the X-Y-plane and can alternatively be represented by its radial coordinate r and angular coordinate ϕ in polar coordinates. The resulting illumination is then described by the illumination vector L(ϕ, θ), which points in the direction of the cone-shaped illumination. The angular coordinates of L(ϕ, θ) are derived as follows: the azimuthal coordinate ϕ equals the angular coordinate of the illumination pattern, and the polar coordinate θ is determined by

θ = 2 arctan(r / (2f)),    (1)

where f denotes the focal length of the parabolic reflector.
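Assuming the parabolic-reflector relation θ = 2 arctan(r / (2f)) between the radial spot coordinate r and the polar illumination angle, the mapping from a spot position to an illumination direction can be sketched as follows; the function name and the example values are illustrative.

```python
import numpy as np

def illumination_direction(r, phi, f):
    """Map the polar spot position (r, phi) on the illumination pattern to
    the polar angle theta and the unit illumination vector L(phi, theta)
    incident at the focal point F. f is the focal length of the parabolic
    reflector (same units as r); theta = 2 * arctan(r / (2 f))."""
    theta = 2.0 * np.arctan2(r, 2.0 * f)
    L = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return theta, L

# A spot at r = 2 f maps to a direction perpendicular to the optical axis.
theta, L = illumination_direction(r=100.0, phi=0.0, f=50.0)
```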
Note that given highly accurate optical components and a perfectly calibrated setup, all light rays emitted by the projector would intersect in a small point at the focus F of the parabolic reflector. Clearly, this is not practical for illuminating objects with spatial extent. Moving the test object slightly behind or in front of the focal point would result in a spatial extension of the directed illumination; however, as a consequence, the position of the incident illumination would vary depending on its direction. To account for this problem, we use a parabolic reflector that is not perfectly specular but instead has a matte surface appearance. Hence, the specular reflection at the reflector surface becomes partly diffuse and the incident ray bundles from the projector are reflected in a small solid angle toward the focal point F. This results in a broadening of the area for which the direction of the incident illumination can be varied. To enhance this effect, we coat the surface of the reflector with a wax-based transparent dulling spray in order to further increase the diffuse component of the reflection (see Figure 3). However, care must be taken to keep the proportion between diffuse and specular reflection right, since the wax coating can cause undesirable interreflections and, as a consequence, a decrease of the directional property of the illumination. By applying the dulling spray in a thin layer, we obtain a circular area of radius ≈ 15 mm around F of nearly homogeneous irradiance for which we can control the direction of the illumination. In a similar approach, Peers et al. (2006) used heated acrylic to roughen the texture of a hemispherical mirror in order to obtain a more diffuse reflection. Another way to address the problem of inhomogeneous illumination is to use prospective shading correction techniques, as employed by Jehle et al. (2010).

Modeling reflectance
A lot of research has been published in the field of reflectance modeling. It was recognized early in the history of computer graphics that the way light reflects off a surface is a key factor in calculating light paths through a scene. Most of the models were invented to reproduce the appearance of materials that existing models at the time could not represent well enough. The models have been designed taking into account or neglecting certain physical constraints, e.g., energy conservation, Helmholtz reciprocity, polarization or sub-surface scattering. Often, they represent a trade-off between intuitiveness, accuracy and computational cost.

Analytical models
Torrance & Sparrow (1967) laid a cornerstone for analytical models with their work on surfaces consisting of small microfacets. Modeling the geometry of those microfacets and their distribution was the subject of many follow-up works, such as Blinn (1977), who combined this theory with the ideas of Phong (1975). Cook & Torrance (1982) suggested a reflection model based on their previous work and added a Fresnel term to better model effects that appear at grazing angles, as well as the wavelength dependency of reflection. Kajiya (1985) developed an anisotropic reflection model for continuous surfaces directly based on the Kirchhoff approximation for the reflection properties of rough surfaces. In their work, Poulin et al. (1990) modeled anisotropy using small cylinders. Hanrahan & Krueger (1993) presented work on modeling complex reflection due to subsurface scattering in layered surfaces. To reduce the number of coefficients in the previous physically based models, Schlick (1994) proposed his rational fraction approximation method. Oren & Nayar (1994) suggested modeling rough surfaces through small Lambertian facets. A later work presented a BRDF generator that can calculate a BRDF based on a general microfacet distribution.
Recently, Kurt et al. (2010) suggested a new anisotropic BRDF model based on a normalized microfacet distribution.
Other analytical reflection models have also been built upon the theory of physical optics, like in the work of He et al. (1991), which was extended to also model anisotropy by Stam (1999).

Empirical models
The empirical model best known in the world of computer graphics is the one proposed by Phong (1975) and improved by Lewis (1994) to increase physical plausibility. Neumann et al. (1999) combined it with parts of analytical models and derived their own BRDF model for metallic reflection; the model was later extended to also capture anisotropic behaviour. A widely known work on modeling anisotropic reflection is Ward (1992). Other approaches focus purely on finding a mathematical representation for the shape of a material's BRDF: Schröder & Sweldens (1995) used wavelets, Sillion et al. (1991) and Westin et al. (1992) spherical harmonics, Koenderink et al. (1996) Zernike polynomials, Lafortune et al. (1997) a set of non-linear primitive functions, and Kautz & McCool (1999) separable approximations based on singular value decomposition and normalized decomposition. Lensch et al. (2001) expressed a spatially varying BRDF through a linear combination of basis functions obtained by analyzing the SVBRDFs of multiple measured materials, which in turn inspired Matusik et al. (2003) to derive their data-driven reflectance model based on measurements of more than 130 real-world materials. Another work in the area of data-driven reflectance models is Dong et al. (2010), who present an SVBRDF bootstrapping approach for the data acquired with their own handheld measurement device to speed up the acquisition and processing of material properties significantly.

Model fitting
One question that has challenged many authors in the field of reflectance modeling is: how can an analytical or empirical model be fitted to the measured BRDF data of a material? In the following, we give a short description of the fitting process for the Lafortune reflection model and refer the reader to the original publications for further details. Lensch et al. (2001) defined the isotropic version of the Lafortune model with multiple lobes to be fitted in the following form:

f(u, v) = ρ_d + Σ_i [C_{x,i} (u_x v_x + u_y v_y) + C_{z,i} u_z v_z]^{n_i},

with u, v being the light and viewing directions and ρ_d denoting the diffuse component. C_{x,i}, C_{z,i} define the direction of lobe i, and n_i denotes the width of the lobe. A widely used method in the literature for performing a non-linear optimization that fits the model parameters to the measured BRDF data (Lafortune et al. (1997)) or sampled radiance (Lensch et al. (2001)) is the Levenberg-Marquardt algorithm (Marquardt (1963)), with the error between the model prediction and the measured value for a given light and view angular constellation defined as the objective function to be minimized. The objective function proposed by Ngan et al. (2005) is the mean squared error between the model value for a given parameter vector and the measured value.
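As an illustration, the following sketch fits a single-lobe isotropic Lafortune model (a diffuse term ρ_d plus one lobe with parameters C_x, C_z, n) using SciPy's Levenberg-Marquardt implementation. The synthetic "measurements", the direction sampling and all parameter values are hypothetical stand-ins for real measured BRDF data.

```python
import numpy as np
from scipy.optimize import least_squares

def lafortune(params, u, v):
    """Single-lobe isotropic Lafortune model:
    f(u, v) = rho_d + [Cx (ux vx + uy vy) + Cz uz vz]^n,
    with u, v given as (N, 3) arrays of unit light and view vectors."""
    rho_d, cx, cz, n = params
    lobe = cx * (u[:, 0] * v[:, 0] + u[:, 1] * v[:, 1]) + cz * u[:, 2] * v[:, 2]
    return rho_d + np.maximum(lobe, 0.0) ** n  # clamp to keep the power real

def residuals(params, u, v, measured):
    """Error between model prediction and measurement (objective for LM)."""
    return lafortune(params, u, v) - measured

def sample_dirs(m, rng):
    """Random unit directions in the upper hemisphere (z in [0.8, 1])."""
    z = rng.uniform(0.8, 1.0, m)
    ang = rng.uniform(0.0, 2.0 * np.pi, m)
    s = np.sqrt(1.0 - z ** 2)
    return np.stack([s * np.cos(ang), s * np.sin(ang), z], axis=1)

rng = np.random.default_rng(0)
u, v = sample_dirs(500, rng), sample_dirs(500, rng)

# Synthetic "measurements" generated from known parameters stand in for
# real measured BRDF data here.
true_params = np.array([0.2, 0.5, 0.9, 4.0])
measured = lafortune(true_params, u, v)

# Levenberg-Marquardt fit starting from a rough initial guess.
fit = least_squares(residuals, x0=[0.1, 0.3, 0.7, 3.0],
                    args=(u, v, measured), method="lm")
```

In a real fitting pipeline, `measured` would hold the sampled BRDF or radiance values and additional lobes would add further (C_x, C_z, n) triples to the parameter vector.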

Synthetic ground truth data for machine vision applications
Design, prototyping, testing and tuning of machine vision systems often require many manual steps, time and domain-specific knowledge. During the development phases of such a system, the use of computer graphics and computer-generated synthetic images can simplify, speed up and improve this process. Different aspects of such a simulation system have also been the subject of related work: e.g., Reiner et al. (2008) focused on modeling different real-world luminaires, and Chessa et al. (2011) proposed a system for the creation of synthetic ground truth data for stereo vision systems using virtual reality technology. In contrast to this related work, our work mainly deals with modeling the reflectance properties of materials as faithfully as possible by using the methods described in Section 2.2. Simulating object surface appearance correctly under different viewing and illumination conditions is one of the fundamental requirements for such a simulation system to be able to create ground truth data for machine vision applications. In the following, we demonstrate the benefits of using synthetic images in the process of developing a machine vision system.

Scene modeling
Using computer graphics, not only the objects to be inspected can be modeled, but also the whole scene including light sources and camera(s). For the positioning of the camera(s) in the scene, it is important to know whether all necessary parts of the objects to be inspected are inside the field of view of the camera and are not occluded by other objects or the object itself (Khawaja et al. (1996)), e.g., for the inspection of threads. By adding such a constraint to the optimization process of the system, many mechanically possible setups can be ruled out in an early design phase. Figure 4 shows the scene preview of our simulation software. In real-world applications, the positioning of camera and illumination is often constrained by mechanical limitations as part of a production line or by other parts of the machine vision system, e.g., housings to block extraneous light, which can also be included in the simulation.

Automated creation of synthetic image series
A major advantage of using simulation for the development of a machine vision system is the possibility to automatically simulate known variations of the scene and the objects, as shown in Figure 5. One can imagine that the combination of possible camera, illumination and object variations leads to an enormous number of system conditions to be evaluated to find the optimal setup. Building a prototype system to check all of them would be a lengthy procedure and, due to mechanical restrictions in the positioning of the components, it would be very hard to create reproducible results. Here we see great potential in using the power and flexibility of computer graphics to simulate these variations, on the one hand providing valuable information to the machine vision expert, and on the other hand automatically optimizing the system configuration based on objective functions.

Techniques for creation of synthetic images
Traditionally, there have been two major groups of techniques to render images in computer graphics. The first group aims to create images at high frame rates, as required for interactive applications like CAD or computer games; it makes heavy use of the increasing power of graphics processors. The second group aims to create images that are as photorealistic as possible by implementing highly sophisticated algorithms for light propagation in a scene, e.g., Monte Carlo sampling. We propose a dual approach as shown in Figure 6, sharing a common scene data format to switch between the two modes of rendering as required. This approach also makes it possible to use the real-time image creation part to simulate a live virtual camera for a machine vision system, as well as photorealistic rendering for an accurate prediction of the pixel values in the real camera image.

Fig. 6. Dual approach to create synthetic images including a comparison with real world images based on a common scene data format.

The scene data format we use is that of the digital asset and effects exchange schema COLLADA (www.collada.org). This XML-based format allows us to store the required data about the whole scene in a single file, including 3D models, textures, reflection model parameters, and camera and light source parameters. It can also be used to store multiple scenes and their chronological order, to be converted into frames during rendering. A core feature of COLLADA which we make use of is the possibility to store properties of material appearance in many different ways, from reflection model parameters to shader code fragments.
Our real-time rendering application is also used as a scene composer and editor and is based on the open source scene graph framework OpenSceneGraph (www.openscenegraph.org). Our dual approach is designed and implemented as follows: our simulation system expects CAD models of the workpiece(s) as a basic input in order to know the dimensions with high accuracy. To be imported by our real-time simulation application, the CAD models are converted into meshed models. This is done either by export filters of the CAD application or by using the finite element tessellation tool gmsh (www.geuz.org/gmsh). The real-time application is then used to compose the desired scene consisting of camera(s), illumination and one or more models, assigning material properties to the models with immediate display of the rendering result. To simulate light propagation in the scene more accurately, especially for inner- and inter-object light ray tracing, the scene can be exported into a COLLADA file and then loaded in any other rendering application that supports this format.
In our experiments, we used the open source software Blender (www.blender.org) to test this functionality.

Using synthetic images for algorithm parameterization
An additional use of synthetic images in machine vision applications is the creation of large sample sets to choose and parameterize the processing algorithms appropriate for the desired application. There are many different scenarios for this process. They range from simply supporting machine vision engineers during the system design phase, e.g., by providing a first impression of how the camera view of the scene will look in the final application, to an in-depth performance analysis of an image processing algorithm by calculating the error between the ground truth data and the processing result of the algorithm. This feature can also be used to create sample images that occur very rarely in reality but should still be covered by the processing algorithm, or at least should not lead to a false decision or system failure. Figure 7 demonstrates this idea with the example of an edge detection algorithm. The algorithm is parameterized by processing an image series of the workpiece to be detected under various lighting conditions. For this example, we defined two metrics as input for the optimization process: first, the total number of detected edge pixels inside the regions of interest; second, the distance of the edge pixels from a straight line determined by linear regression. The gray-level thresholds around an edge pixel are chosen as the parameters of the edge detector to be optimized. To acquire the intrinsic and extrinsic camera parameters of the experimental setup, a camera calibration was performed using the publicly available Camera Calibration Toolbox for MATLAB (www.vision.caltech.edu/bouguetj/calib_doc). The material parameters of the workpiece were acquired using a robot-based gonioreflectometer at our institute, which was also used to measure the LED spotlight parameters.
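Two metrics of this kind can be sketched as follows; the ROI convention, the regression-based distance measure and the toy edge mask are illustrative assumptions, not the exact implementation used in our system.

```python
import numpy as np

def edge_metrics(edge_mask, roi):
    """Two example metrics for tuning an edge detector on synthetic images:
    (1) number of detected edge pixels inside the region of interest and
    (2) RMS distance of those pixels from a least-squares regression line.
    edge_mask is a boolean image; roi = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    ys, xs = np.nonzero(edge_mask[r0:r1, c0:c1])
    count = len(xs)
    if count < 2:
        return count, float("inf")
    # Fit y = a*x + b by linear regression and measure point-line distances.
    a, b = np.polyfit(xs, ys, 1)
    dist = np.abs(a * xs - ys + b) / np.sqrt(a * a + 1.0)
    return count, float(np.sqrt(np.mean(dist ** 2)))

mask = np.zeros((20, 20), dtype=bool)
mask[5, 2:18] = True  # a perfect horizontal edge
count, rms = edge_metrics(mask, (0, 20, 0, 20))
```

An optimizer can then vary the detector thresholds and score each synthetic image of the series with these two values.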
Finally, the algorithm performance was tested on a real workpiece manufactured according to the CAD model to verify successful parameterization, as shown in Figure 6.

Illumination series in machine vision
The choice of an appropriate illumination design is one of the most important steps in creating successful machine vision systems for automated inspection tasks. Since in image acquisition all information about a scene is encoded in its exitant light field, the incident light field provided by the illumination must be able to reveal reflectance features of the test object that are relevant to the inspection task. The relationship between incident and exitant light field is described by the scene's reflectance field (Debevec et al. (2000)), which models light transport of a scene in a global manner.
For many inspection tasks it is difficult or even impossible to find a single optimal illumination condition and therefore, inspection images under multiple different illumination conditions have to be evaluated. In a widely used technique, inspection images are captured under directional light from different directions, which yields a so-called illumination series of the object (see Section 2.1.2). Thus, recording an illumination series corresponds to sampling the reflectance field of the scene.
In the field of automated visual inspection, the acquisition and evaluation of illumination series have been studied and applied to solve difficult surface inspection tasks. Puente León (1997) proposed an image fusion algorithm to compute images with maximal contrast from an illumination series. To this end, an energy functional is introduced and minimized which specifies the desired requirements on the image fusion result. Lindner & Puente León (2007) proposed several methods for surface segmentation using varying illumination directions and demonstrated the superiority of illumination series over a single image. For this purpose, different reflectance features are extracted for each pixel and used for unsupervised clustering. With this approach, a wide variety of textures on structured surfaces can be segmented. Grassi et al. (2006) used illumination series to detect and classify varnish defects on wood surfaces. By constructing invariant features, good detection and classification ratios can be achieved.
In order to segment images into material types, Wang et al. (2009) used illumination series acquired with a dome providing lighting from many directions. A hemispherical harmonics model is fit to the measured reflectance values and the model coefficients are used to train a multi-class support vector machine. To account for geometric dependencies on the measured reflectance, photometric stereo is applied to estimate the surface normal at each pixel and to transform the measurements to the local surface reference frame. Jehle et al. (2010) used a random forest classifier to learn optimal illumination directions for material classification by using embedded feature selection. For illumination series acquisition, an illumination device very similar to the one presented in this chapter is used. However, our device, developed independently, differs in the wax coating of the parabolic mirror to obtain a homogeneous illumination of the test object.

Acquisition of reflectance features
The illumination device introduced in Section 2.1.2 is used to capture images of small test objects under varying illumination directions. For this purpose, the illumination pattern shown in Figure 2 is projected for different r and ϕ so that we get equidistant illumination directions L(ϕ, θ) along the angular domains ϕ ∈ [0, 2π) and θ ∈ [θ_min, π). Note that due to the position of the camera, the polar angle θ is limited to θ_min to prevent a direct illumination of the camera. Since the spatial extent of the objects to be illuminated is small (≈ 15 mm) compared to the diameter of the parabolic reflector (600 mm), we make the approximation that the projection of the illumination pattern emulates a distant point light source. This means that for each scene point the illumination originates from the same direction with the same intensity. As a consequence, a single-channel image can be written as a mapping g : Ω × Ψ → R, where Ω ⊂ Z² is the domain of pixel positions and Ψ = [0, 2π) × [θ_min, π) the space of illumination parameters. Debevec et al. (2000) refer to g as the reflectance field of a scene, which describes the optical response of a scene illuminated by a distant light source. By varying the illumination direction and capturing n images g(x, ω_i), we obtain an illumination series S = {g(x, ω_i) | i = 1, ..., n}, where x ∈ Ω denotes the pixel location and ω_i ∈ Ψ describes the illumination direction incident on the object. Therefore, the illumination series S can be considered as samples of the mapping g with respect to the parameter space Ψ. In Figure 8(a), a small image series of a coin for various illumination directions is shown. By considering a fixed pixel location x_0 ∈ Ω, the reflectance function r_x0(ω) := g(x_0, ω) can be defined, describing the illumination-dependent appearance at individual pixels. Figure 8(b) shows reflectance functions for different pixel locations in the coin image series.

(a) Textured plastic surface with defects: (1) scratches, (2) paint stain, (3) groove, (4) dent. (b) RX anomaly detection applied to an illumination series of the same textured plastic surface (γ = 0.95).
Note that the reflectance function is specified in an image-based coordinate frame, i.e., it includes non-local and geometry-induced illumination effects like the foreshortening term, interreflections and self-shadowing. As a consequence, r_x0(ω) can be considered as a 2D slice of the so-called apparent BRDF (Wong et al. (1997)).

Reflectance features for unsupervised defect detection
In the following, we present an approach to unsupervised defect detection using illumination series. In automated visual inspection, collecting labeled training data is often expensive or difficult, because defects are not known a priori. However, in many cases it can be assumed that defects are rare and occur with low probability compared to the nominal state of the inspection task. This is especially true for the appearance of defects under varied illumination directions and hence for the reflectance function of defective surface regions.
In order to detect defects by their illumination-dependent appearance in an unsupervised manner, we apply the RX anomaly detector developed by Reed & Yu (1990) to illumination series. The RX detector assumes a Gaussian data distribution and is widely used in hyperspectral image analysis to detect regions of interest whose spectral signature differs from the Gaussian background model without prior knowledge. Applied to illumination series, the RX detector implements a filter specified by

δ_RX(x) = (g(x) − µ)ᵀ C⁻¹ (g(x) − µ),

where g(x) ∈ Rⁿ denotes the reflectance function at pixel x, µ ∈ Rⁿ is the sample mean and C ∈ R^(n×n) the sample covariance matrix of the reflectance functions in the image series. Therefore, the detector output δ_RX(x) is the squared Mahalanobis distance between a tested pixel and the mean reflectance function of the image series. Large distances correspond to low probabilities of occurrence, and hence, by displaying the detector output as a grayscale image, more anomalous pixels appear brighter. In order to segment anomalous surface regions from the background, a threshold α has to be applied to the detector output. In doing so, anomalous pixels are rejected as outliers of the Gaussian background model. We determine the threshold value α by setting a confidence coefficient γ such that P(δ_RX(x) < α) = γ.
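The RX filter can be implemented in a few lines of NumPy/SciPy. The following sketch (the function name is ours) computes the squared Mahalanobis distance per pixel and derives the threshold α from the confidence coefficient γ via the χ² quantile implied by the Gaussian background model:

```python
import numpy as np
from scipy.stats import chi2

def rx_detector(series, gamma=0.95):
    """RX anomaly detector applied to an illumination series.

    series: array of shape (n, H, W); each pixel's reflectance function
    is an n-dimensional feature vector. Returns the squared Mahalanobis
    distance map and the binary anomaly mask for confidence gamma.
    """
    n, H, W = series.shape
    X = series.reshape(n, -1).T                    # (H*W, n) feature matrix
    mu = X.mean(axis=0)                            # sample mean
    C = np.cov(X, rowvar=False)                    # sample covariance
    C_inv = np.linalg.pinv(C)                      # pseudo-inverse for stability
    D = X - mu
    delta = np.einsum('ij,jk,ik->i', D, C_inv, D)  # squared Mahalanobis distance
    # Under the Gaussian background model the distances follow a
    # chi-square distribution with n degrees of freedom.
    alpha = chi2.ppf(gamma, df=n)
    return delta.reshape(H, W), (delta > alpha).reshape(H, W)
```

Pixels whose reflectance function deviates strongly from the mean then exceed α and are marked in the anomaly mask.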
In a practical experiment, an illumination series of n = |{ϕ_0, . . . , ϕ_23}| × |{θ_0, . . . , θ_5}| = 144 grayscale images of a textured plastic surface with various surface defects was recorded. Figure 9(a) shows the plastic surface under diffuse illumination and the hand-labeled positions of the surface defects. In Figure 9(b), the thresholded output (γ = 0.95) of the RX detector δ_RX(x) applied to the whole illumination series is shown. The result shows that nearly all surface defects become visible as anomalous pixels, demonstrating the suitability of illumination series for unsupervised defect detection in textured surfaces.

Model-based feature extraction for material classification
Illumination series contain large amounts of information regarding the reflectance properties of the illuminated object. However, capturing a large number of images with densely sampled illumination directions results in high-dimensional reflectance features. From statistical learning theory it is known that the complexity of any classification problem grows with the number of input features (Hastie et al. (2009)). This means that more training examples are needed to train a classifier due to the curse of dimensionality. In order to reduce the dimensionality of the feature space, methods for feature extraction aim to construct a reduced set of features from the original feature set without losing discriminative information. As a consequence, a less complex classifier can be applied and the reduced feature set allows a better understanding of the classification results. For an unsupervised approach to feature selection to reduce the dimensionality of illumination series see Gruna & Beyerer (2011).
Model-based reflectance features are extracted by fitting a parameterized reflectance model (see Section 2.2.3) to the high-dimensional reflectance measurements. The fitted reflectance model then provides a compact representation of the measurements and the estimated model parameters give a reduced set of descriptive reflectance features. Since reflectance models incorporate local surface geometry, knowledge about the scene geometry, e.g., obtained by photometric stereo (Wang et al. (2009)) or estimated from the specular reflection direction (Lindner et al. (2005)), can be utilized for feature extraction. In doing so, the extracted reflectance features become invariant to the surface normal direction.
In a practical experiment, we utilize model-based reflectance feature extraction for material classification on printed circuit boards (PCBs). The automated visual inspection of PCBs is a challenging problem due to the mixture of different materials such as metals, varnishes, and substrates of which the PCB elements are composed (see Figure 10). Numerous approaches to PCB inspection have been described in the literature; however, most of them are based on measuring the spectral reflectance of the materials by multispectral imaging (Ibrahim et al. (2010)). In the presented approach, we use simple grayscale images (i.e., without color or spectral information) but evaluate angular resolved reflectance measurements to extract features for material classification. To this end, we record an image series with n = 144 grayscale images of a small PCB and utilize the Lafortune reflectance model (see Section 2.2) with one isotropic specular lobe for feature extraction. The fitting process is done using the Levenberg-Marquardt algorithm as described in Section 2.2.3. We assume a flat surface geometry with the surface normals aligned with the Z-axis as illustrated in Figure 2. After the reflectance model is independently fit to the reflectance function of each pixel, the estimated model parameters are extracted as feature vector (C_z, k_d, n)ᵀ, where C_z represents the inclination of the specular direction, k_d the diffuse component and n the width of the specular lobe.

Fig. 10. Material classification results for a part of a PCB consisting of ground substrate (red), gold and silver conducting elements (marked blue and green, respectively) and paint (turquoise). (a) Color image of the PCB to illustrate the different material components; in our experiments, however, we use illumination series of grayscale images without color or spectral information. Material samples marked with rectangles are displayed in the scatter plots in Figure 11 according to this color encoding. (b) Result of k-means clustering using the measured reflectance values directly as feature vector (feature space dimension is 144). (c) Result of k-means clustering using model-based reflectance features (feature space dimension is 3).
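As an illustrative sketch of the per-pixel fit (not the exact implementation of Section 2.2.3), the following code fits a simplified Lafortune model with one isotropic specular lobe, assuming a flat surface and a view direction along the Z-axis so that only C_z, k_d and n remain as free parameters; SciPy's Levenberg-Marquardt solver performs the nonlinear least-squares fit. Function names and the initial guess are our choices:

```python
import numpy as np
from scipy.optimize import least_squares

def lafortune_isotropic(params, L):
    """Simplified Lafortune model with one isotropic specular lobe,
    viewed from directly above (v = (0, 0, 1)): a foreshortened diffuse
    term k_d plus a lobe (C_z * L_z)^n. L holds unit light directions
    as rows; parameter names follow the chapter (C_z, k_d, n)."""
    C_z, k_d, n = params
    L_z = np.clip(L[:, 2], 0.0, None)          # cosine foreshortening
    lobe = np.clip(C_z * L[:, 2], 1e-6, None)  # clamp for numerical safety
    return k_d * L_z + lobe ** n

def fit_pixel(reflectance, L, x0=(1.0, 0.5, 10.0)):
    """Fit the model to one pixel's reflectance function r_x0 via
    Levenberg-Marquardt; returns the feature vector (C_z, k_d, n)."""
    res = least_squares(lambda p: lafortune_isotropic(p, L) - reflectance,
                        x0, method='lm')
    return res.x
```

In the experiment, `fit_pixel` would be applied independently to the reflectance function of every pixel of the illumination series, yielding one 3-element feature vector per pixel.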
In order to demonstrate the suitability of the extracted reflectance features for material classification, we compare the model-based approach to the alternative that uses the measured reflectance values directly as a high-dimensional feature vector. For unsupervised material classification we use the k-means clustering algorithm with a fixed number of k = 4 material classes. The classification results for a small PCB, which consists of ground substrate, gold and silver conducting elements and paint, are shown in Figure 10. Both approaches show very similar classification results; however, model-based feature extraction with the Lafortune model is able to reduce the dimensionality of the feature space from 144 to a 3-element feature vector without losing relevant information. Furthermore, a closer examination of Figure 10(c) reveals that with the model-based features soldering points are identified as conducting elements and not as ground substrate as in Figure 10(b).
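A minimal sketch of the clustering step, using SciPy's k-means implementation (the function name and the whitening step are our choices): each row of `features` is one pixel's feature vector, either the raw 144-dimensional reflectance function or the 3-element Lafortune parameter vector:

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

def cluster_materials(features, k=4, seed=0):
    """Unsupervised material classification by k-means: assigns each
    pixel (one row of `features`) to one of k material clusters.
    Whitening balances the different scales of the feature components
    (e.g. C_z, k_d and n) before the Euclidean clustering."""
    features_w = whiten(np.asarray(features, dtype=float))
    _, labels = kmeans2(features_w, k, minit='++', seed=seed)
    return labels
```

Displaying the resulting label image with one color per cluster yields material maps like those in Figure 10(b) and (c).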
Due to the dimension reduction, the new feature space can be plotted and analyzed visually for relevant structures. In Figure 11, the feature space spanned by the Lafortune reflectance parameters is illustrated with scatter plots. By depicting the hand-annotated material samples from Figure 10(a) in the scatter plots, it can be seen that the different material samples are well separated in the model-based feature space. For a more in-depth analysis of reflectance features for material classification see Hentze (2011).

Fig. 11. Illustration of the feature space of the PCB spanned by the Lafortune reflectance model parameters. Material samples from the PCB in Figure 10(a) are marked by different colors (red: ground substrate, blue: gold conducting elements, green: silver conducting elements, turquoise: paint, gray: unlabeled data). Cluster centers found by k-means clustering are marked as black crosses.

Summary and conclusions
Machine vision for automated visual inspection can greatly benefit from computer graphics methods for reflectance measurement and modeling. We gave an overview of different ways to measure the reflection properties of material surfaces and of how this data is either used to fit reflection models or evaluated directly as illumination series.
In the main part of this chapter we presented practical applications of these techniques from an image synthesis and an image analysis point of view. We showed how reflectance models can be used to create synthetic images of scenes under varying illumination and viewing conditions. To raise the degree of realism in simulating materials under varying illumination conditions, we discussed the required steps from reflection data acquisition to using the data for simulating the appearance of a material in rendering applications. We then outlined how this can be used to create synthetic images that support the design and development process of machine vision systems. We see two main benefits in this approach: the first is to simplify the creation of a large set of sample images for training machine vision algorithms, including the possibility to create samples with varying scene setups, e.g., simulated surface defects moving along the surface. This would close a gap in vision algorithm development, where a small set of sample images is often a limiting factor. The second is to make machine vision systems more robust against changes in scene illumination or in the viewing position, e.g., cameras mounted on the head of a moving humanoid robot.
From an image analysis point of view, we demonstrated the use of angular-resolved reflectance measurements in a machine vision application for visual inspection. By applying a density-based anomaly detection method on the high-dimensional measurements we were able to detect surface defects on a highly textured surface. Thereby, we demonstrated the potential of illumination series for unsupervised visual inspection.
In another application example, illumination series were used for inspecting printed circuit boards (PCBs). Here, we demonstrated the feasibility of model-based reflectance features for material classification. To this end, the Lafortune reflectance model was used for feature extraction and it was shown that the dimension of the original feature space can be reduced to 3 model parameters without losing relevant material reflectance information.
While the benefit of using angular-resolved reflectance measurements instead of single images has previously been reported in the literature (Lindner & Puente León (2007; 2011)), using reflectance measurements in combination with modeling and simulating complex machine vision systems is a new research field and has the potential to be the subject of future work.

Acknowledgements
We thank the reviewers for their valuable feedback on this article. Parts of the research leading to these results have received funding in the program "KMU-innovativ" from the German Federal Ministry of Education and Research under grant agreement no. 01IS09036B.