Fusion of Visual and Thermal Images Using Genetic Algorithms



Introduction
Biometric technologies such as fingerprint, hand geometry, face and iris recognition are widely used to establish a person's identity. Face recognition is currently one of the most important biometric technologies; it identifies a person by comparing acquired face images with a set of pre-stored face templates in a database.
Though the human perception system can identify faces relatively easily, face recognition using computer techniques is challenging and remains an active research field. Illumination and pose variations are currently the two obstacles limiting the performance of face recognition systems. Various techniques have been proposed to overcome these limitations in recent years. For instance, three-dimensional face recognition systems have been investigated to address illumination and pose variations simultaneously [Bowyer et al., 2004; S. Mdhani et al., 2006]. The illumination variation problem can also be mitigated by additional sources such as infrared (IR) images [D. A. Socolinsky & A. Selinger, 2002].
Thermal face recognition systems have received little attention in comparison with recognition in the visible spectrum, partly due to the high cost associated with IR cameras. Recent technological advances in IR cameras have made them practical for face recognition. While thermal face recognition systems are advantageous for detecting disguised faces or when there is no control over illumination, recognizing faces in IR images is challenging because 1) it is difficult to segment faces from the background in low-resolution IR images and 2) intensity values in IR images are not consistent, since different body temperatures result in different intensity values.
The overall goal of this research is to develop computational methods for efficiently obtaining improved images. The research objective is accomplished by integrating enhanced visual images with IR images through the following steps: 1) enhance the optical images, 2) register the enhanced optical images with the IR images, and 3) fuse the optical and IR images with the help of a genetic algorithm.
Section 2 surveys related work for IR imaging, image enhancement, image registration and image fusion. Section 3 discusses the proposed nonlinear image enhancement methods.

Section 4 presents the proposed image fusion algorithm. Section 5 reports the experimental results of the proposed algorithm. Section 6 concludes this research.

Literature survey
In this section, we will present related work in IR Image technology, nonlinear image enhancement algorithms, image registration and image fusion.

IR technology
One type of electromagnetic radiation that has received a lot of attention recently is Infrared (IR) radiation. IR refers to the region beyond the red end of the visible color spectrum, a region located between the visible and the microwave regions of the electromagnetic spectrum.
Today, infrared technology has many exciting and useful applications, from infrared astronomy, where new and fascinating discoveries are being made about the Universe, to medical imaging, where it serves as a diagnostic tool.
Humans, at normal body temperature, radiate most strongly in the infrared, at a wavelength of about 10 microns. The area of the skin that is directly above a blood vessel is, on average, 0.1 degrees Celsius warmer than the adjacent skin. Moreover, the temperature variation for a typical human face is in the range of about 8 degrees Celsius [F. Prokoski, 2000].
In fact, variations among images of the same face due to changes in illumination, viewing direction, facial expressions, and pose are typically larger than the variations introduced when different faces are considered. Thermal IR imagery is invariant to variations introduced by illumination and facial expressions since it captures the anatomical information. However, thermal imaging has limitations in identifying a person wearing glasses, because glass is a material of low emissivity, or when the thermal characteristics of a face have changed due to increased body temperature (e.g., physical exercise) [G. S. Kong et al., 2005]. Combining the IR and visual techniques will benefit face detection and recognition.

The nonlinear log transform
The non-linear log transform converts an original image g into an adjusted image g′ by applying the log function to each pixel g [m, n] in the image,

g′[m, n] = k log(g[m, n])     (1)

where k = L/log(L) is a scaling factor that preserves the dynamic range and L is the maximum intensity value. The log transform is typically applied either to dark images where the overall contrast is low, or to images that contain specular reflections or glints. In the former case, the brightening of the dark pixels leads to an overall increase in brightness. In the latter case, the glints are suppressed, thus increasing the effective dynamic range of the image.
The log function as defined in Equation 1 is not parameterized, i.e., it is a single input/output transfer function. A modified, parameterized function was proposed by Schreiber [W. F. Schreiber, 1978], in which a parameter α controls the shape of the non-linear transfer function.
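A minimal sketch of the global log transform of Equation 1 follows; the small offset added before the logarithm is a practical safeguard against log(0) and is not part of the original definition.

```python
import numpy as np

def log_transform(g, L=256):
    """Global log transform of Eq. (1): g'[m,n] = k * log(g[m,n]),
    with k = L / log(L) chosen to preserve the dynamic range."""
    k = L / np.log(L)
    # +1 avoids log(0) for black pixels (a practical safeguard).
    return k * np.log(g.astype(np.float64) + 1.0)
```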

Registration
Image registration is a basic task in image processing that aligns two or more images, usually referred to as a reference image and a sensed image [R. C. Gonzalez et al., 2004]. It is impossible to implement a single method applicable to all registration tasks, and many different registration algorithms exist. This research focuses on feature-based registration techniques, which usually consist of the following three steps [B. Zitova & J. Flusser, 2003]:
• Feature detection: This step locates a set of control points such as edges, line intersections and corners in the image. They can be detected manually or automatically.
• Feature matching: The second step establishes the correspondence between the features detected in the sensed image and those detected in the reference image.
• Transform model estimation, image resampling and geometric transformation: The sensed image is transformed and resampled to match the reference image using proper interpolation techniques [B. Zitova & J. Flusser, 2003].
Each registration step has its specific problems. In the first step, features usable for registration must spread over the images and be easily detectable. The feature sets determined in the reference and sensed images must have enough common elements, even though the two images do not cover exactly the same scene. Ideally, the algorithm should be able to detect the same features in both images [B. Zitova & J. Flusser, 2003].
In the second step, feature matching, physically corresponding features can appear dissimilar because of different imaging conditions and/or different spectral sensitivities of the sensors. The choice of the feature description and of the similarity measure must take these factors into account. The feature descriptors should be efficient and invariant to the assumed degradations, and the matching algorithm should be robust and efficient. Single features without corresponding counterparts in the other image should not affect its performance [B. Zitova & J. Flusser, 2003].
In the last step, the selection of an appropriate resampling technique is restricted by the trade-off between interpolation accuracy and computational complexity.

Genetic algorithms
Genetic algorithms (GAs) are search and optimization techniques inspired by natural selection and genetics [Holland, 1975; S. K. Mitra et al., 1998].
Genetic algorithms manipulate a population of potential solutions for the problem to be solved. Usually, each solution is coded as a binary string, equivalent to the genetic material of individuals in nature. Each solution is associated with a fitness value that reflects how good it is, compared with other solutions in the population. The higher the fitness value of an individual, the higher its chances of survival and reproduction in the subsequent generation. Recombination of genetic material in genetic algorithms is simulated through a crossover mechanism that exchanges portions between strings.
Another operation, called mutation, causes sporadic and random alteration of the bits in strings. Mutation has a direct analogy in nature and plays the role of regenerating lost genetic material [M. Srinivas & L. M. Patnaik, 1994]. GAs have found applications in many fields including image processing [J. Zhang , 2008;L. Yu et al., 2008].
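To make these mechanics concrete, the following is a minimal, illustrative sketch (not the chapter's code) of one generation of a binary GA with fitness-proportional selection, one-point crossover and bit-flip mutation; the population size, rates and fitness function are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_generation(pop, fitness_fn, crossover_rate=0.9, mutation_rate=0.01):
    """One generation of a simple binary GA: fitness-proportional selection,
    one-point crossover, and bit-flip mutation (illustrative sketch only)."""
    fitness = np.array([fitness_fn(ind) for ind in pop], dtype=np.float64)
    probs = fitness / fitness.sum()               # higher fitness -> higher selection chance
    n, length = pop.shape
    children = []
    while len(children) < n:
        i, j = rng.choice(n, size=2, p=probs)     # select two parents
        a, b = pop[i].copy(), pop[j].copy()
        if rng.random() < crossover_rate:         # one-point crossover
            cut = rng.integers(1, length)
            a[cut:], b[cut:] = pop[j][cut:], pop[i][cut:]
        children.extend([a, b])
    children = np.array(children[:n])
    flip = rng.random(children.shape) < mutation_rate   # bit-flip mutation
    children[flip] ^= 1
    return children

# Example: maximize the number of ones in an 8-bit string.
pop = rng.integers(0, 2, size=(20, 8))
for _ in range(30):
    pop = one_generation(pop, fitness_fn=lambda s: s.sum() + 1)
```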

Continuous Genetic Algorithm (CGA)
GAs typically represent solutions as binary strings. For many applications, it is more convenient to represent solutions as real numbers, which leads to the continuous genetic algorithm (CGA). CGAs have the advantage of requiring less storage and are faster than their binary counterparts.

Components of a Continuous Genetic Algorithm
The various elements in the flowchart are described below [D. Patnaik, 2006].

Cost function
The goal of a GA is to solve an optimization problem defined as a cost function of a set of parameters. In the CGA, the parameters are organized as a vector known as a chromosome.
Equations (3) and (4) along with applicable constraints constitute the problem to be solved. Since the GA is a search technique, it must be limited to exploring a reasonable region of variable space. Sometimes this is done by imposing a constraint on the problem. If one does not know the initial search region, there must be enough diversity in the initial population to explore a reasonably sized variable space before focusing on the most promising regions.

Initial population
To begin the CGA process, an initial population of N_pop chromosomes must be defined. The population is represented as a matrix, with each row being a 1 × N_var chromosome of continuous values [D. Patnaik, 2006].

Pairing
A set of eligible chromosomes is randomly selected as parents to generate the next generation. Each pair produces two offspring that contain traits from each parent. The more similar the two parents are, the more likely the offspring are to carry their traits.

Mating
As in the binary algorithm, two parents are chosen to produce offspring. Many different approaches have been tried for crossover in continuous GAs. The simplest method is to mark one or more crossover points first; the parents then exchange the elements between the marked crossover points in their chromosomes. Consider two parents:

Natural selection
The extreme case is selecting N_var points and randomly choosing which of the two parents will contribute its variable at each position. Thus one goes down the chromosome and, at each variable, randomly chooses whether or not to swap information between the two parents. The problem with these point-crossover methods is that no new information is introduced: each continuous value that was randomly generated in the initial population is propagated to the next generation, only in different combinations. Although this strategy works fine for binary representations, with continuous variables it merely interchanges two data points, so these approaches rely entirely on mutation to introduce new genetic material. An alternative is blending, in which an offspring variable is a combination of the parent variables, p_new = β p_mn + (1 − β) p_dn, where β is a random value in [0, 1]; the same variable of the second offspring is merely the complement of the first (i.e., β replaced by 1 − β). If β = 1, then p_mn propagates in its entirety and p_dn dies. In contrast, if β = 0, then p_dn propagates in its entirety and p_mn dies. When β = 0.5, the result is an average of the variables of the two parents. This method has been demonstrated to work well on several interesting problems [Randy L. Haupt & Sue Ellen Haupt, 2004].
Choosing which variables to blend is the next issue. Sometimes this linear combination is applied to all variables to the right or to the left of some crossover point. Any number of points can be chosen for blending, up to N_var, in which case all variables are linear combinations of those of the two parents. The variables can be blended using the same β for each variable or by choosing a different β for each variable. These blending methods effectively combine the information from the two parents and choose variable values between the values bracketed by the parents; however, they do not allow the introduction of values beyond the extremes already represented in the population. The simplest way to do so is linear crossover [Randy L. Haupt & Sue Ellen Haupt, 2004], in which three offspring are generated from two parents as linear combinations of the parent variables; any variable outside the bounds is discarded, and the best two offspring are chosen to propagate. Of course, the factor 0.5 is not the only one that can be used in such a method. The algorithm used here is a combination of an extrapolation method with a crossover method, designed to closely mimic the advantages of the binary GA mating scheme. It begins by randomly selecting one variable in the first pair of parents to be the crossover point, where the subscripts m and d distinguish the mom and dad parents. The selected variables are then combined to form new variables that will appear in the children, where β is a random value between 0 and 1. The final step is to complete the crossover with the rest of the chromosome: if the first variable of the chromosomes is selected, then only the variables to the right of the selected variable are swapped; if the last variable is selected, then only the variables to the left are swapped. This method does not allow offspring variables outside the bounds set by the parents unless β > 1.
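A minimal sketch of this crossover scheme, following the description above (the parent values are placeholders, bounds handling is omitted, and the special rule for end variables is simplified to always swapping the variables to the right of the crossover point):

```python
import numpy as np

rng = np.random.default_rng(1)

def blend_crossover(mom, dad):
    """Continuous-GA crossover in the style described above (after Haupt & Haupt):
    pick a crossover variable, blend it with a random beta in [0, 1], and swap the
    variables to its right. Sketch only."""
    k = rng.integers(len(mom))          # randomly selected crossover variable
    beta = rng.random()                 # blending factor in [0, 1]
    c1, c2 = mom.copy(), dad.copy()
    c1[k] = mom[k] - beta * (mom[k] - dad[k])
    c2[k] = dad[k] + beta * (mom[k] - dad[k])
    c1[k + 1:], c2[k + 1:] = dad[k + 1:], mom[k + 1:]   # swap the remaining variables
    return c1, c2

mom = np.array([0.2, 1.5, -0.7, 3.1])
dad = np.array([0.9, -0.4, 2.2, 0.5])
child1, child2 = blend_crossover(mom, dad)
```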

Mutation
If care is not taken, the GA can converge too quickly into one region on the cost surface. If this area is in the region of the global minimum, there is no problem. However, some functions have many local minima. To avoid overly fast convergence, other areas on the cost surface must be explored by randomly introducing changes, or mutations, in some of the variables. Random numbers are used to select the row and columns of the variables that are to be mutated [Randy L. Haupt & Sue Ellen Haupt, 2004].

Next generation
After all these steps, the chromosomes in the starting population are ranked and the bottom ranked chromosomes are replaced by offspring from the top ranked parents to produce the next generation. Some random variables are selected for mutation from the bottom ranked chromosomes. The chromosomes are then ranked from lowest cost to highest cost. The process is iterated until a global solution is achieved.

Image fusion
In recent decades, rapid developments in image sensing technologies have made multi-sensor systems popular in many applications. Researchers have applied such systems in fields such as medical imaging, remote sensing and military applications [D. Patnaik, 2006]. The outcome of using these techniques is a great increase in the amount and diversity of available data. Multi-sensor image data often present complementary information about the region surveyed, so image fusion provides an effective method to enable comparison and analysis of such data [H. Wang, 2004]. Image fusion is defined as the process of combining information in two or more images of a scene to enhance viewing or understanding of the scene. The fusion process must preserve all relevant information in the fused image [A. Mumtaz & A. Majid, 2008; S. Erkanli & Zia-Ur Rahman, 2010].
Image fusion can be done at the pixel, feature and decision levels. Of these, pixel-level fusion is the simplest technique, in which averages or weighted averages of individual pixel intensities are taken to construct a fused image [K. Kannan & S. Perumal, 2007]. Despite their simplicity, these methods are rarely used nowadays because of some serious disadvantages: the contrast of the fused information is reduced, and redundant information is introduced into the fused image, which may mask the useful information. These disadvantages are overcome by feature-level and decision-level fusion methods, which are based on the human vision system. Decision-level fusion combines the results from multiple algorithms to yield a final fused image. Several pyramid transform methods for feature-level fusion have been suggested [A. Wang et al., 2006]. Recently developed methods based on the wavelet transform have become popular [A. Wang et al., 2006]. In this method, the source images are decomposed into subimages of different resolutions, and in each subimage different features become prominent. To fuse the original source images, the corresponding subimages of the different source images are combined based on some criteria to form composite subimages. The inverse pyramid transform of the composite gives the final fused image.
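As an illustration of the wavelet-based feature-level fusion described above, the following sketch uses PyWavelets; the combination rule (average of approximation sub-images, maximum-magnitude detail coefficients) is an assumption standing in for the unspecified criteria, not the chapter's exact method.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Decompose two same-sized source images, average the approximation sub-images,
    keep the maximum-magnitude detail coefficients, then invert the transform."""
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                      # approximation: average
    for da, db in zip(ca[1:], cb[1:]):                   # detail sub-bands per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))     # keep the stronger feature
    return pywt.waverec2(fused, wavelet)
```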

Introduction
The human visual system (HVS) allows individuals to assimilate information from their environment [S. Erkanli & Zia-Ur Rahman, 2010b; H. Kolb, 2003]. The HVS perceives colors and detail across a much wider range of photometric intensity levels than electronic cameras. Moreover, the perceived color of an object is almost independent of the type of illumination, i.e., the HVS is color constant. Electronic cameras, by comparison, suffer from limited dynamic range and a lack of color constancy, and current imaging and display devices such as CRT monitors and printers have a limited dynamic range of about two orders of magnitude, while the best photographic prints can provide a contrast of up to about 10^3:1. However, real-world scenes can have a dynamic range of six orders of magnitude [S. Erkanli & Zia-Ur Rahman, 2010b]. This can result in overexposure, which causes saturation in high-contrast images, or in underexposure in dark images [Z. Rahman, 1996]. The idea behind enhancement techniques is to bring out details in images that are otherwise too dim to be perceived, either due to insufficient brightness or insufficient contrast [Z. Rahman, 1997]. A large number of image enhancement methods have been developed, such as log transformations, power-law transformations, piecewise-linear transformations and histogram equalization. However, these enhancement techniques are based on global processing, which results in a single mapping between the input and output intensity spaces. They are thus not sufficiently powerful to handle images that have both very bright and very dark regions. Other image enhancement techniques are local in nature, i.e., the output value depends not only on the input pixel value but also on the pixel values in its neighborhood. These techniques are able to improve local contrast under various illumination conditions.
Single-Scale Retinex (SSR) is a modification of the Retinex algorithm introduced by Edwin Land [G. D. Hines et al., 2004; E. Land, 1986]. It provides dynamic range compression (DRC), color constancy, and tonal rendition. SSR gives good results for DRC or tonal rendition but does not provide both simultaneously. Therefore, the Multi-Scale Retinex (MSR) was developed by Rahman et al. The MSR combines several SSR outputs with different scale constants to produce a single output image, which has good DRC, color constancy and good tonal rendition. The outputs of MSR display most of the detail in the dark pixels, but at the cost of enhancing the noise in these pixels, and the tonal rendition is poor in large regions of slowly changing intensity. As a result, Multi-Scale Retinex with Color Restoration (MSRCR) was developed by Jobson et al. for synthesizing local contrast improvement, color constancy and lightness/color rendition. Other nonlinear enhancement models include the Illuminance Reflectance Model for Enhancement (IRME) proposed by Tao et al. [L. Tao et al., 2005], and the Adaptive and Integrated Neighborhood-Dependent Approach for Nonlinear Enhancement (AINDANE) described by Tao [L. Tao, 2005]. Both use a nonlinear function for luminance enhancement and tune the intensity of each pixel based on its relative magnitude with respect to the neighboring pixels.
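For reference, here is a minimal sketch of the standard multi-scale retinex formulation; it is based on the general MSR literature rather than the chapter's specific implementation, and the surround scales are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    """Standard MSR form: average of single-scale retinex outputs
    R_i = log(I) - log(Gaussian_sigma_i * I) over several surround scales."""
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        surround = gaussian_filter(img, sigma)  # Gaussian surround at this scale
        out += np.log(img) - np.log(surround)
    return out / len(sigmas)
```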
In this section, a new image enhancement approach is described: the Enhancement Technique for Nonuniform and Uniform-Dark Images (ETNUD). The details of the new algorithm are given in Section 3.2. Section 3.3 describes experimental results and compares them with other image enhancement techniques. Finally, conclusions are presented in Section 3.4.

Enhancement Technique for Nonuniform and Uniform-Dark Images (ETNUD)
The major innovation in ETNUD is in the selection of the transformation parameters for DRC, and the surround scale and color restoration parameters. The following sections describe the selection mechanisms.

Selection of transformation parameters for DRC
The intensity I of the color image I_c is determined from the red, green, and blue components r, g and b of I_c, where m and n are the row and column pixel locations, respectively. Assuming I to be 8 bits per pixel, I_n is the normalized version of I, such that I_n(m, n) = I(m, n)/255. A linear input-output intensity relationship typically does not produce a good visual representation compared with direct viewing of the scene. Therefore, a nonlinear transformation is used for DRC, based on information extracted from the image histogram. To do this, the histogram of the intensity image is subdivided into four ranges: r_1 = 0–63, r_2 = 64–127, r_3 = 128–191 and r_4 = 192–255. I_n is mapped to I_n^drc using two mappings: the first pulls out the details in the dark regions, and the second suppresses the bright overshoots. The value of the parameter x is determined from the range counts, where f(a) refers to the number of pixels in range a, Λ is the logical AND operator, and α is an offset parameter that helps adjust the brightness of the image. The determination of the x values and their association with the range relationships given in Equation 20 was done experimentally using a large number of non-uniform and uniform dark images; the x value can also be set manually. The DRC mapping of the intensity image performs a visually dramatic transformation; however, the result tends to have poor contrast, so a local, pixel-dependent contrast enhancement method is used to improve it.
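A small sketch of the quantities named above, using hypothetical helper names; the actual mapping from the range counts to x (Equation 20) is not reproduced because its formula is not shown here.

```python
import numpy as np

def drc_statistics(intensity):
    """Compute the normalized intensity I_n and the pixel counts f(r1)..f(r4)
    over the four histogram ranges named in the text (8-bit intensity assumed)."""
    I = intensity.astype(np.float64)
    I_n = I / 255.0                                   # normalized intensity
    edges = [0, 64, 128, 192, 256]                    # r1=0-63, r2=64-127, r3=128-191, r4=192-255
    f = {f"r{i + 1}": int(((I >= edges[i]) & (I < edges[i + 1])).sum()) for i in range(4)}
    return I_n, f
```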

Selection of surround parameter and color restoration
Many local enhancement methods rely on center/surround ratios [L. Tao, 2005]. Hurlbert [A. C. Hulbert, 1989] investigated the Gaussian as the optimal surround function. Other surround functions proposed in [E. Land, 1986] were compared with the Gaussian form proposed in [D. J. Jobson, et al., 1997]. Both investigations determined that the Gaussian form produces good dynamic range compression over a range of space constants. Therefore, the luminance information of the surrounding pixels is obtained by 2D discrete spatial convolution of the intensity image with a Gaussian kernel G(m, n); this surround is used to compute a pixel-dependent exponent E(m, n) for the contrast enhancement. Here σ is the contrast (standard deviation) of the original intensity image. If σ < 7, the image has poor contrast and the contrast of the image will be increased. If σ ≥ 20, the image has sufficient contrast and the contrast will not be changed. Finally, the enhanced image is obtained by a linear color restoration based on the chromatic information contained in the original image, where j indexes the RGB spectral bands and λ_j is a parameter that adjusts the color hue.
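A hedged sketch of an AINDANE-style center/surround contrast step consistent with the thresholds quoted above; the exponent schedule between σ = 7 and σ = 20 is an assumption, and the chapter's exact E(m, n) is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_contrast(I, I_drc, surround_sigma=5.0):
    """Contrast enhancement of the DRC-mapped image I_drc (assumed in [0, 1]) using a
    Gaussian surround of the original intensity I. Sketch only."""
    I = I.astype(np.float64) + 1.0
    sigma = I.std()                         # global contrast of the original intensity image
    if sigma >= 20:                         # sufficient contrast: leave the DRC image unchanged
        return 255.0 * np.clip(I_drc, 0.0, 1.0)
    # Strength of the enhancement: strongest for poor contrast (sigma < 7),
    # assumed linear fall-off toward sigma = 20.
    p = 3.0 if sigma < 7 else 3.0 - 2.0 * (sigma - 7.0) / 13.0
    surround = gaussian_filter(I, surround_sigma)     # 2D Gaussian surround G * I
    E = (surround / I) ** p                           # pixel-wise exponent E(m, n)
    return 255.0 * np.clip(I_drc, 0.0, 1.0) ** E
```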

Evaluation criteria
In this work, the following evaluation criteria were used.

A new metric
There are metrics such as brightness and contrast to characterize an image. Another such metric is sharpness, which is directly proportional to the high-frequency content of an image. The new metric S is therefore defined in the frequency domain [Z. Rahman, 2009], where h is a high-pass filter, periodic with period M1 × M2, ĥ is its Discrete Fourier Transform (DFT), and Î is the DFT of the image I. The role of ĥ (or h) is to weight the energy at the high frequencies relative to the low frequencies, thereby emphasizing the contribution of the high frequencies to S. The larger the value of S, the sharper the image I, and conversely.
Here σ is the parameter that controls the attenuation of the filter: a smaller value of σ implies that fewer frequencies are attenuated, and vice versa. For this research, σ = 0.15.
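A sketch of such a frequency-domain sharpness measure; the Gaussian-shaped high-pass weight and the energy normalization are assumptions standing in for the exact filter of [Z. Rahman, 2009].

```python
import numpy as np

def sharpness(I, sigma=0.15):
    """Weight the DFT of the image by a high-pass filter and measure how much of the
    spectral energy sits at high frequencies. Sketch only; the filter shape and the
    normalization are assumptions."""
    I = I.astype(np.float64)
    F = np.fft.fftshift(np.fft.fft2(I))
    M1, M2 = I.shape
    u = np.fft.fftshift(np.fft.fftfreq(M1))[:, None]   # normalized frequencies in [-0.5, 0.5)
    v = np.fft.fftshift(np.fft.fftfreq(M2))[None, :]
    r2 = u ** 2 + v ** 2
    H = 1.0 - np.exp(-r2 / (2.0 * sigma ** 2))         # high-pass: attenuates low frequencies
    return float(np.sum(np.abs(H * F) ** 2) / np.sum(np.abs(F) ** 2))
```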

Image quality assessment
The overall quality of an image can be measured using the brightness µ, contrast σ and sharpness S, where brightness and contrast are taken to be the mean and the standard deviation of the intensity. However, instead of global statistics, regional statistics are used [Z. Rahman, 2009]: 1. Divide the image into regions (blocks). 2. Compute the brightness, contrast and sharpness of each block. 3. Classify the block as either GOOD or POOR based on the computed measures (discussed below). 4. Classify the image as a whole as GOOD or POOR based upon the classification of its regions (discussed below).
A region is considered to have sufficient contrast when 0.25 ≤ σ_n ≤ 0.5; when σ_n < 0.25 the region has poor contrast, and when σ_n > 0.5 the region has too much contrast. Let S_n be the normalized sharpness parameter, such that S_n = min(2.0, S/100); when S_n > 0.8, the region has sufficient sharpness. Image quality is then evaluated using a quality factor Q, where 0 < Q < 1. A region is classified as GOOD when Q > 0.55 and as POOR when Q ≤ 0.5. An image is classified as GOOD when the total number of regions classified as GOOD satisfies N_G > 0.6 N.
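A sketch of the regional classification using the thresholds quoted above. Since the quality factor Q is not fully specified here, a block is marked GOOD when both the contrast and sharpness criteria hold, and sharpness_fn is assumed to return values on the chapter's scale (so that S/100 is meaningful).

```python
import numpy as np

def classify_image_quality(I, sharpness_fn, block=64):
    """Regional quality assessment: divide the image into blocks, test each block's
    normalized contrast and sharpness, then classify the whole image."""
    I = I.astype(np.float64)
    good = total = 0
    for r in range(0, I.shape[0] - block + 1, block):
        for c in range(0, I.shape[1] - block + 1, block):
            region = I[r:r + block, c:c + block]
            sigma_n = region.std() / 255.0                  # normalized regional contrast
            S_n = min(2.0, sharpness_fn(region) / 100.0)    # normalized regional sharpness
            contrast_ok = 0.25 <= sigma_n <= 0.5
            sharp_ok = S_n > 0.8
            good += int(contrast_ok and sharp_ok)           # stand-in for Q > 0.55
            total += 1
    return "GOOD" if good > 0.6 * total else "POOR"
```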

Experimental results
The image samples for ETNUD were selected to be as diverse as possible so that the results would be as general as possible. MATLAB was used for the AINDANE and IRME algorithms, whose code was developed by the author and the research team; MSRCR enhancement was done with the commercial software Photo Flair. From visual inspection, the following statements can be made about the proposed algorithm: 1. In the luminance enhancement part, ETNUD works well for darker images and the technique adjusts itself to the image (Figure 2). 2. In the contrast enhancement part, unseen or barely seen features of low-contrast images are made visible. 3. In Figure 2, gamma correction with γ = 1.4 does not provide good visual enhancement. IRME and MSRCR bring out the details in the dark regions but also enhance the noise there, which can be considered objectionable. AINDANE does not bring out the finer details of the image. The ETNUD algorithm gives good results and outperforms the other algorithms when the results are compared (Table 1).

Conclusion
The ETNUD image enhancement algorithm provides high color accuracy and a better balance between luminance and contrast in images.

Introduction
Image fusion is defined as the process of combining information from two or more images of a scene to enhance the viewing or understanding of that scene. The images to be fused can come from different sensors, or may have been acquired at different times or from different locations. Hence, the first step in any image fusion process is the accurate registration of the image data. This is relatively straightforward if parameters such as the instantaneous field-of-view (IFOV) and the locations and orientations from which the images are acquired are known, especially when the sensor modalities produce images that use the same coordinate space. It is more of a challenge when the sensor modalities differ significantly and registration can only be accomplished at the information level. The goal of the fusion process is to preserve all relevant information in the component images and place it in the fused image (FI), which requires that the process minimize noise and other artifacts in the FI. Because of this, the fusion process can also be regarded as an optimization problem [K. Kannan and S. Perumal, 2002]. In recent years, image fusion has been applied to a number of diverse areas such as remote sensing. Image fusion can be divided into three processing levels: pixel, feature and decision, which increase in abstraction from pixel to feature to decision. In the pixel-level approach, simple arithmetic rules such as averaging of individual pixel intensities, or more sophisticated combination schemes, are used to construct the fused image. At the feature level, the image is classified into regions with known labels, and these labeled regions from different sensor modalities are used to combine the data. At the decision level, a combination of rules is used to include part of the data or not.

www.intechopen.com
Genetic algorithms (GA) are an optimization technique that seeks the optimum solution of a function based on the Darwinian principles of biological evolution. Even though there are several methods of performing and evaluating image fusion, there are still many open questions. In this section, a new measure of image fusion quality is provided and compared with many existing ones. The focus is on pixel-level image fusion (PLIF) and a new image fusion technique that uses GA is proposed.
The GA is used to optimize the parameters of the fusion process to produce an FI that contains more information than either of the individual images. The main purpose of this section is to find the optimum weights used to fuse images with the help of the CGA. The techniques for GA and image fusion are given in Section 4.2. Section 4.3 describes the evaluation criteria. Section 4.4 describes the experimental results and compares them with other image fusion techniques. Section 4.5 provides the conclusion.

Genetic Algorithm
As stated earlier, GA is a non-linear optimization technique that seeks the optimum solution of a function via a non-exhaustive search among randomly generated solutions. GAs use multiple search points instead of searching one point at a time and attempt to find global, near-optimal solutions without getting stuck at local optima. Because of these significant advantages, GAs reduce the search time and space. However, there are disadvantages of using GAs as well: they are not generally suitable for real-time applications since the time to converge to an optimal solution cannot be predicted. The convergence time depends on the population size, and the GA crossover and mutation operators. In this fusion process, a continuous genetic algorithm has been selected.

Continuous Genetic Algorithm (CGA)
GAs typically operate on binary data. For many applications, it is more convenient to work in the analog, or continuous, data space rather than in the binary space of most GAs. Hence, the CGA is used, because it requires less storage and is faster than the binary form. CGA inputs are represented by floating-point numbers over whatever range is deemed appropriate. Figure 6 shows the flowchart of the CGA; its steps are summarized below.
i. Cost function: the optimization problem is defined as a cost function of a parameter vector, the chromosome (see Section 2).
ii. Initial population: a random initial population of chromosomes is generated (see Section 2).
iii. Natural selection: The chromosomes are ranked from lowest to highest cost. Of the total chromosomes in a given generation, only the top N_keep are kept for mating and the rest are discarded to make room for the new offspring.
iv. Mating: Many different approaches have been tried for crossover in continuous GAs. In crossover, all the genes to the right of the crossover point are swapped. A variable is randomly selected in the first pair of parents to be the crossover point, where the subscripts m and d represent the mom and dad parents. The selected variables are then combined to form new variables that will appear in the children, where β is a random value between 0 and 1. The final step is to complete the crossover with the rest of the chromosome.
v. Mutation: If care is not taken, the GA can converge too quickly into one region of the cost surface. If this area is in the region of the global minimum, there is no problem. However, some functions have many local minima. To avoid overly fast convergence, other areas of the cost surface must be explored by randomly introducing changes, or mutations, in some of the variables. Multiplying the mutation rate by the total number of variables that can be mutated in the population gives the number of mutations. Random numbers are used to select the rows and columns of the variables to be mutated.
vi. Next generation: After all these steps, the starting population for the next generation is ranked. The bottom-ranked chromosomes are discarded and replaced by offspring from the top-ranked parents. Some random variables are selected for mutation from the bottom-ranked chromosomes. The chromosomes are then ranked from lowest to highest cost. The process is iterated until a global solution is achieved [Randy L. Haupt & Sue Ellen Haupt, 2004].

Image fusion
A set of input images of a scene, captured at a different time or captured by different kinds of sensors at the same time, reveals different information about the scene. The process of extracting and combining data from a set of input images to form a new composite image with extended information content is called image fusion.

Evaluation criteria
In this section, the following criteria were defined to evaluate the performance of the image fusion algorithm.

Image quality assessment
This evaluation criterion was discussed in Section 3.2.3.

Entropy
Entropy is often defined as the amount of information contained in an image. Mathematically, entropy is usually given as

E = −Σ_{i=0}^{L−1} p_i log2(p_i)

where L is the total number of grey levels and p = {p_0, p_1, ..., p_{L−1}} is the probability of occurrence of each level. An increase in entropy after fusion can be interpreted as an overall increase in the information content. Hence, one can assess the quality of fusion by comparing the entropy of the original data with the entropy of the fused data.
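A direct implementation of this entropy measure:

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy of an image's grey-level histogram:
    E = -sum_i p_i * log2(p_i) over the L grey levels."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))
```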

Mutual information indices
Mutual information indices are used to evaluate how well the fused image preserves the information of the source images. Let A and B be random variables with marginal probability distributions p_A(a) and p_B(b). A higher value of mutual information (MI) indicates that the fused image F contains a fairly good quantity of the information present in both source images A and B. The MI of the fusion can be defined as

MI = I_FA + I_FB

where I_FA and I_FB are the mutual information between the fused image and each of the source images. A high value of MI does not imply that the information from both images is symmetrically fused. Therefore, information symmetry (IS) is introduced. IS indicates how symmetrically the information in the fused image is distributed with respect to the input images; the higher the value of IS, the better the fusion result.
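A sketch of the MI computation follows. The information-symmetry expression used here is an assumption (the chapter's exact formula is not shown), chosen so that larger values correspond to more symmetric fusion.

```python
import numpy as np

def mutual_information(x, y, bins=256):
    """Mutual information between two images from their joint grey-level histogram:
    I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi_and_symmetry(A, B, F):
    """MI = I_FA + I_FB as above; the symmetry expression below is an assumed form."""
    i_fa = mutual_information(F, A)
    i_fb = mutual_information(F, B)
    mi = i_fa + i_fb
    information_symmetry = 2.0 - abs(i_fa / mi - 0.5)   # assumed IS form
    return mi, information_symmetry
```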

Experimental results
The goal of this experiment is to fuse visual and IR images. To minimize registration issues, it is important that the visual and the thermal images are captured at the same time.
Pinnacle software was used to capture the visual and the thermal images simultaneously. Although radiometric calibration is important, the thermal camera cannot always be calibrated in field conditions because of constraints on time. Figure 3 shows an example where the IR and visual images were captured at the same time. It is obvious from the figure that the images need to be registered before they can be fused, since the fields of view and the pixel resolutions are clearly different.
The performance of the proposed algorithm was tested and compared with different PLIF methods. The IR and visual images were not previously registered, as shown in Figure 3. The registered image, the base image (IR image) and the image fused with the CGA are shown in Figure 4. The cost function is very simple: the fused image is formed as F = w_a·V + w_b·IR, where V and IR are the visual and IR images and w_a and w_b are their associated weights, and the cost is the entropy of F. The initial population size is 100×3. The first and second columns of the population matrix represent the weights w_a and w_b, and the last column represents the cost function, which is the entropy of F. The initial population is then ranked by cost. In each iteration of the GA, 20 of the 100 rows are kept for mating and the rest are discarded. Crossover is applied according to Equation 35. The mutation rate was set to 0.20, hence the total number of mutated variables is 40; the value of a mutated variable is replaced by a new random value in the same range. The CGA was run for 50 iterations, maximizing the cost to find the optimum weights for the two images. In the 2nd, 8th and 25th iterations the cost increased but did not correspond to the global solution; the optimum solution was found in the 45th iteration and remained unchanged thereafter. Figure 4 shows the fusion results of point-rule-based PLIF. After registering the IR and visual data, we determined that w_a = 0.9931 and w_b = 0.0940 are the optimum values for maximizing the entropy cost function for the F specified in Equation 38. The evaluation of these weights is shown in Table 2.
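A sketch of the experiment's CGA search for the fusion weights, using the population size, elite count and mutation rate quoted above; the function names are hypothetical, entropy_fn can be the entropy measure sketched earlier, and a whole-chromosome blend stands in for the crossover of Equation 35.

```python
import numpy as np

rng = np.random.default_rng(42)

def fuse(V, IR, wa, wb):
    """Weighted pixel-level fusion F = wa*V + wb*IR, clipped to the 8-bit range."""
    return np.clip(wa * V + wb * IR, 0, 255)

def cga_fusion(V, IR, entropy_fn, pop_size=100, keep=20, mutation_rate=0.20, iterations=50):
    """Search for (w_a, w_b) that maximize the entropy of the fused image."""
    V = V.astype(np.float64); IR = IR.astype(np.float64)
    pop = rng.random((pop_size, 2))                          # random initial weights in [0, 1]
    for _ in range(iterations):
        cost = np.array([entropy_fn(fuse(V, IR, wa, wb)) for wa, wb in pop])
        pop = pop[np.argsort(-cost)]                         # rank: highest entropy first
        parents = pop[:keep]                                 # keep the top rows for mating
        children = []
        while len(children) < pop_size - keep:               # refill the discarded rows
            i, j = rng.choice(keep, size=2, replace=False)
            beta = rng.random()
            children.append(beta * parents[i] + (1 - beta) * parents[j])  # blended offspring
        pop = np.vstack([parents, np.array(children)])
        n_mut = int(mutation_rate * pop.size)                # 0.20 * 200 = 40 mutated variables
        rows = rng.integers(1, pop_size, size=n_mut)         # never mutate the current best
        cols = rng.integers(0, 2, size=n_mut)
        pop[rows, cols] = rng.random(n_mut)                  # replace with new random values
    best = pop[0]
    return best, fuse(V, IR, best[0], best[1])
```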

Conclusion
In this section, a CGA-based image fusion algorithm was introduced and compared with other classical PLIF methods. The results show that CGA-based image fusion gives better results than the other PLIFs.

Introduction
With face recognition, a database usually exists that stores a group of human faces with known identities. In a testing image, once a face is detected, the face is cropped from the image or video as a probe to check with the database for possible matches. The matching algorithm produces a similarity measure for each of the comparing pairs.
Variations among images of the same face due to changes in illumination are typically larger than variations arising from a change of face identity. In an effort to address the illumination and camera variations, a database was created with these variations in mind to evaluate the proposed techniques.
Besides the regular room lights, four additional spot lights are located in front of the person and can be turned on and off in sequence to obtain face images under different illumination conditions. Note that it is important to capture visual and thermal images at the same time in order to see the variations in the facial images; here the visual and thermal images are captured almost simultaneously. Although radiometric calibration is important, the thermal camera cannot be calibrated because of the characteristics of the current IR camera. The Pinnacle (Pinnacle Systems Ltd.) software was used to capture 16 visual and thermal images at the same time, as shown in Figure 4 and Figure 5.
In this chapter, the focus is on visual image enhancement. The visual images are then registered with the IR images using a landmark-based registration algorithm. Finally, the registered IR and visual images are fused for face recognition.

Enhancement of visual images
The ETNUD algorithm was applied to 16 visual images, shown in Figure 6, taken under different illumination conditions. In all figures, besides the regular room lights, the four extra spot lights located in front of the person were turned on and off to create different illumination conditions. To enhance the visual images, the luminance is first balanced, then the image contrast is enhanced, and finally the enhanced image is obtained by a linear color restoration based on the chromatic information contained in the original image. The results of the luminance enhancement part showed that the algorithm works well for dark images: details that cannot be seen in the original image become evident. The experimental results show that the proposed algorithm works sufficiently well for all color images.

IR and visual images registration
First, the IR and visual images, taken with different sensors, from different viewpoints, at different times and at different resolutions, were resized to the same size. The correspondence between the features detected in the IR image and those detected in the visual image was then established. Control points were picked manually from the corners detected by the Harris corner detection algorithm in both images, choosing corners that correspond to the same positions in the two images.
In the second step, a spatial transformation was computed to map the selected corners in one image to those in the other image. Once the transformation was established, the image to be registered was resampled and interpolated to match the reference image. For RGB and intensity images, the bilinear or bicubic interpolation methods are recommended since they lead to better results; in the experiments, bicubic interpolation was used.
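A sketch of this control-point registration step, with hypothetical helper names: a least-squares affine transform is estimated from manually chosen corner pairs, and the sensed image is resampled with order-3 (bicubic-like) spline interpolation.

```python
import numpy as np
from scipy import ndimage

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping control points in the sensed image to the
    corresponding points in the reference (IR) image. Points are (row, col) pairs."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])        # rows of [r, c, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3x2 matrix of affine parameters
    M = params[:2].T                                     # 2x2 linear part
    t = params[2]                                        # translation
    return M, t

def register(sensed, M, t, output_shape):
    """Resample the sensed image onto the reference grid with order-3 spline interpolation.
    ndimage.affine_transform expects the inverse mapping (output -> input coordinates)."""
    M_inv = np.linalg.inv(M)
    return ndimage.affine_transform(sensed.astype(np.float64), M_inv,
                                    offset=-M_inv @ t, order=3,
                                    output_shape=output_shape)
```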

Discussion
The experiments were carried out on the database created by the research team. The algorithm consists of four steps, which are described in turn. In the first step, the visual images are enhanced, as described in Section 3. The fused image should be more suitable for human visual perception and computer-processing tasks. Experience with image processing prompted this research to consider the fundamental aspects of good visual presentation, requiring nonlinear enhancement of the recorded visual images to obtain a better image that carries more information from the originals. In the second step, the corners of the visual and IR images were determined with the Harris corner detection algorithm to serve as control points for registration.
In the third step, because the source images are obtained from different sensors and therefore differ in resolution, size and spectral characteristics, they have to be correctly registered. In the last step, an image fusion process is performed, as described in Section 4.
The registered images were overlapped at an appropriate transparency. The pixel value in the fused image is a weighted sum of the corresponding pixels in the IR and visual images. In the next section, results from advanced image fusion approaches are presented.

Fusion of visual and IR images
The image fusion algorithm was applied to the database with the help of the genetic algorithm. One of the issues is determining the quality of the image fusion results. As part of the general theme of fusion evaluation, there is growing interest in developing methods that score the performance of image fusion algorithms, as described in Section 4. Given the diversity of applications and of evaluation metrics, there are still open questions concerning when to perform image fusion. Of interest here are the mean, standard deviation, entropy, mutual information, peak signal-to-noise ratio and image quality, as described in Section 4. Because the source images cover different spectra, they show quite distinct characteristics and carry complementary information. It can be seen in Figure 6 (a and c) that the visual image is very dark and does not contain enough information to see the faces. Figure 6 (b) shows that the luminance enhancement part works well for dark images and that the technique adjusts itself to the image. In the contrast enhancement part, unseen or barely seen features of the low-contrast images were made visible. Enhancement algorithms were developed to improve the images before the fusion process. After enhancement, corners were found in the enhanced image and the IR image, and the enhanced image was registered as shown in Figure 6 (d). The enhanced image was then fused with the IR image, as shown in Figure 6 (f). The evaluation of the resulting weights is shown in Table 3. By inspection, the faces and the details in the fused image are clearer than in either the original IR image or the visual image. Table 3 shows the detailed comparison of the fused images: A is the image fused by averaging the visual and IR images, and B is the image fused by the proposed approach. All images used in this experiment were from the created database. The results show that this approach is better than the averaging fusion result.

Conclusions
In this chapter, a database for visual and thermal images was created and several techniques were developed to improve image quality as an effort to address the illumination challenge in face recognition.
Firstly, one image enhancement algorithm was designed to improve the images' visual quality. Experimental results showed that the enhancement algorithm performed well and provided good results in terms of both luminance and contrast enhancement. In the luminance enhancement part, it has been shown that the proposed algorithm worked well for both dark and bright images. In the contrast enhancement part, it was proven that the proposed nonlinear transfer functions could make unseen or barely seen features in low contrast images clearly visible.
Secondly, the IR and enhanced visual images, taken with different sensors, from different viewpoints, at different times and at different resolutions, were registered. A correspondence between an IR and a visual image was established based on a set of image features detected by the Harris corner detection algorithm in both images. A spatial transformation matrix was determined from some manually chosen corners and then used for the registration.
Finally, a continuous genetic algorithm was developed for image fusion. The continuous GA has the advantage of less storage requirements than the binary GA and is inherently faster than the binary GA because the chromosomes do not have to be decoded prior to the evaluation of the cost function.
Data fusion provides an integrated image from a pair of registered and enhanced visual and thermal IR images. The fused image is invariant to illumination directions and is robust under low lighting conditions; it has the potential to significantly boost the performance of face recognition systems. One of the major obstacles in face recognition using visual images is illumination variation. This challenge can be mitigated by using infrared (IR) images. On the other hand, using IR images alone for face recognition is usually not feasible because they do not carry enough detailed information. As a remedy, a hybrid system has been presented that benefits from both visual and IR images and improves face recognition under various lighting conditions.