Optimal Sensor Placement for Structural, Damage and Impact Identification: A Review

The optimal location of sensors is a critical issue for any successful Structural Health Monitoring system. Sensor optimization problems encompass mainly three areas of interest: system identification, damage identification and impact identification. The present paper reviews the state of the art from 1990 up to 2012. The above topics have been dealt with in separate contexts so far, but they contain interesting common elements to be exploited.


Introduction
A key issue in Structural Health Monitoring (SHM) is the selection of the optimal number and location of sensors. Damage and impact identification are primary issues in which the best deployment of sensors is vital to guarantee effectiveness and robustness. They are strictly correlated, as an impact above a certain energy level would result in damage to the structure.
The optimum number and location of sensors is also a main concern in the general field of System Identification (SI) where the model is selected from a parameterized class by fitting measured dynamic data. The identification is aimed at choosing the most "realistic" model among the available ones in order to reproduce the structural response as correctly as possible. SI is also linked to SHM; in fact a correct identification of the structural numerical model is a necessary condition to improve damage and impact detection procedures.
Sensor optimization is vital for the safe operation of civil and aeronautic structures, and it allows cost reductions in mounting and maintaining a structural monitoring system.
The survey is organized as follows. Section 2 comprises a review of the sensor placement methods aimed at the identification of structural characteristics, emphasizing the adopted objective function. In Section 3 the recent literature on sensor optimization in damage and fault detection is briefly reviewed, with attention paid to the main aspects of the techniques adopted. The review of optimal sensor placement for impact detection is carried out in Section 4. The paper ends with some comments and concluding remarks in Section 5.

Optimal Sensors in System Identification
In a system identification methodology, the information about the condition of the system is provided by the measured data. The sensor configuration should be selected such that the resulting measured data are most informative about the condition of the structure or, equivalently, such that the uncertainty in the parameter estimates is the least possible. Implementations in structural dynamics concentrate on the design of optimal sensor locations for modal identification and for the estimation of structural numerical model parameters.
The available literature on this topic is dominant when compared to that of the other two fields (i.e. damage and impact identification). This can be attributed mainly to two factors: 1) the parametrization of the numerical model necessarily occurs before any nonlinear inverse identification problem; 2) SI dealt with linear problems for many years and has only recently been extended to the damage identification problem. The maturity of this field is also demonstrated by the survey papers of Kubrusly and Malebranche (1985) and Mottershead and Friswell (1993).
It is clear that SI has a strong link with damage identification: in fact, most SHM approaches rely on changes in either the mechanical properties (e.g. the stiffness) or the dynamic behavior in terms of the natural frequencies and associated modes of vibration.
In Table 1 the papers dealing with the SI problem are grouped on the basis of the objective function chosen. The papers reported in the second column are listed in increasing order of publication year. Table 1: Papers reviewed per objective function. IE = Information Entropy, FIM = Fisher Information Matrix, off-MAC = off-diagonal elements of the MAC, J(a) = least-squares output-error, EID = Effective Independence Distribution.
Parametric identification has many applications in civil and aeronautic engineering. Large-scale structures in civil engineering, such as high-rise buildings, suspension bridges and transmission towers, are increasingly common nowadays. Long service lives, inadequate designs and increasing traffic loads are causing severe deterioration in such structures; nevertheless, they must be kept operational while minimizing the chances of a collapse and, thus, of loss of life and property. An application of structural monitoring of this kind is presented by Meo and Zumpano (2005).
Another typical application in civil engineering is the determination of the uncertain lateral inter-storey stiffnesses of a typical steel/reinforced concrete shear building. In such a case the sensors, located at their optimal positions, should provide the best identification of the inter-storey stiffnesses on the basis of displacement/acceleration measurements. Fig. 2 presents the optimal arrays of accelerometers for an 8-DOF system on the basis of the maximum of the expected trace of the FIM, as obtained by Heredia-Zavoni and Esteva (1998).
A similar issue is encountered in aeronautic engineering. Fig. 2 in Kammer and Tinker (2004) shows the FEM of an advanced vehicle instrumented to measure 27 target modes. The paper presents the minimum number and the locations of sensors distributed on the surface of the vehicle that best acquire this information.
For the sake of clarity, the papers examined are listed in Table 2 in chronological order. The sequence of references within the same year follows the temporal order.
Undoubtedly a starting point of the present review is Kammer (1991). Given a set of (target) structural modes to be identified, the best sensor locations should provide measurement data that render the extracted test mode shapes and frequencies as linearly independent as possible; otherwise the test modes and the corresponding FEM modal partitions would be indistinguishable. The procedure is thus oriented towards modal identification from sensor measurements.
If θ denotes the vector of the k target modal coordinates and θ̂ its estimate, the optimization is driven by the covariance matrix P of the estimation error:

P = E[(θ − θ̂)(θ − θ̂)^T] ≥ Q^{-1},   Q = (1/σ0²) (∂H/∂θ)^T (∂H/∂θ)    (1)

where E denotes the expected value, H the displacement/velocity/acceleration prediction of the measured response based on a (Finite Element, Finite Difference or Boundary Element) model, and Q on the right-hand side (rhs) is the well-known FIM. Over an entire time history T, the rhs of Eq. (1) needs to be integrated over T.

Table 2: Papers reviewed, in chronological order.
1991-1997: Kammer (1991), Hac and Liu (1993), Yao, Sethares, and Kammer (1993), Udwadia (1994), Kirkegaard and Brincker (1994), Penny, Friswell, and Garvey (1994), Kammer and Brillhart (1996), Heo, Wang, and Satpathi (1997)
1998-2004: Beck and Katafygiotis (1998), Katafygiotis and Beck (1998), Heredia-Zavoni and Esteva (1998), Katafygiotis, Papadimitriou, and Lam (1998), Heredia-Zavoni, Montes-Iturrizaga, and Esteva (1999), Reynier and Abou-Kandil (1999), Papadimitriou, Beck, and Au (2000), Yuen and Katafygiotis (2001), Katafygiotis and Yeun (2001), Yuen, Katafygiotis, Papadimitriou, and Mickleborough (2001), Kammer and Tinker (2004), Papadimitriou (2004), Li, Tang, and Li (2004)
2005-2009: Papadimitriou (2005), Meo and Zumpano (2005), Rao and Anandakumar (2007), Li, Li, and Fritzen (2007)
2010-2012: Yi, Li, and Gu (2011), Papadimitriou and Lombaert (2012)

(Copyright © 2013 Tech Science Press, SDHM, vol. 9, no. 4, pp. 287-323, 2013)

The sensor output at any instant can be represented as:

y(t) = Φ_s^T θ(t) + N(t)

where the vector N is a stationary Gaussian white noise with variance σ0² and Φ_s is the k × s matrix of FEM target modes partitioned to the s sensor locations. Therefore the covariance matrix P at any instant can be expressed as:

P = [(1/σ0²) Φ_s Φ_s^T]^{-1} = Q^{-1}

The procedure proposed by Kammer (1991) progressively reduces an initially selected candidate set of s sensor locations to an allotted number m < s by eliminating the sensors that do not contribute substantially to the linear independence and identification of the mode shapes.
The initial set is selected so that the associated modal kinetic energy distribution is about 40-50% of the total value for each target mode. The sensor elimination process is based on a measure of the contribution of each of the s candidate sensor locations, collected in the vector E_D.
If λ and Ψ are the eigenvalue and eigenvector matrices of Φ^T Φ (Φ_i being the i-th row of the modal partition Φ corresponding to the i-th DOF or sensor location), then the s × k matrix:

G = (Φ Ψ) ⊗ (Φ Ψ)

where the symbol ⊗ represents a term-by-term matrix multiplication, collects the fractional contributions of the sensor locations to the eigenvalues. The s-dimensional vector:

E_D = G λ^{-1} 1

(1 being the k-dimensional vector of ones), referred to as the Effective Independence Distribution (EID), is shown to be capable of ranking the candidate sensor set: E_Di = 0 means that the sensor location does not contribute and can be eliminated, whereas E_Di = 1 means that the sensor is vital in identifying the target modes.
In practice a sensor location will have a contribution in the range 0 ≤ E_Di ≤ 1. The final sub-optimal configuration of a pre-defined number m of sensors is obtained iteratively by eliminating one sensor at a time on the basis of the values in E_D: the sensor with the lowest value of E_Di is eliminated and E_D is re-calculated at each iteration.
It can be shown (see for instance Kammer and Brillhart (1996)) that the i-th component of the E_D vector represents the fractional change in the determinant of the FIM if the i-th candidate sensor location is deleted, i.e.:

E_Di = 1 − det(Q_i)/det(Q)

where Q_i is the FIM with the i-th candidate sensor location deleted. Furthermore, Kammer and Brillhart (1996) show that the EID method also enhances the modal identification process, as it maximizes the observability of the system. Kammer and Tinker (2004) present a reformulation of the procedure developed in Kammer (1991) to place triaxial accelerometers as single units rather than as three separate sensors. The accelerometers are ranked on the basis of their effective independence value, whose expression is re-derived so as to measure the fractional change in the determinant of the FIM if the i-th node, and not just one of the three directions sensed by the accelerometer, is deleted from the candidate set.
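The backward elimination loop can be sketched in a few lines of NumPy (a minimal sketch: `Phi` is an arbitrary candidate-set mode-shape matrix and unit noise variance is assumed; this is an illustration of the iteration, not Kammer's original implementation):

```python
import numpy as np

def eid_select(Phi, m):
    """Backward sensor elimination via Effective Independence (after Kammer, 1991).

    Phi : (s, k) array of target-mode shapes partitioned to s candidate DOFs.
    m   : number of sensors to keep.
    Returns the indices (into the candidate set) of the m retained locations.
    """
    idx = np.arange(Phi.shape[0])
    while len(idx) > m:
        P = Phi[idx]
        Q = P.T @ P                      # Fisher information matrix (unit noise)
        lam, Psi = np.linalg.eigh(Q)
        G = (P @ Psi) ** 2               # term-by-term square = (P Psi) ⊗ (P Psi)
        Ed = G @ (1.0 / lam)             # effective independence of each sensor
        idx = np.delete(idx, np.argmin(Ed))  # drop the least informative sensor
    return idx
```

Each `Ed` entry lies in [0, 1] and the entries sum to the number of target modes, consistent with its interpretation as a fractional contribution to det(FIM).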
In Yao, Sethares, and Kammer (1993) the authors propose a Genetic Algorithm (GA) as an alternative to the EID method, which is not guaranteed to produce an optimal solution. The fitness function is taken as the determinant of the FIM. The paper represents the first application of a GA to the sensor placement problem; the procedure slightly outperforms the EID algorithm in the examples presented by the authors.
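A toy GA for subset selection with det(FIM) as fitness might look as follows. The operators below (truncation selection, a set-union crossover with repair, swap mutation) are illustrative choices of this sketch, not the operators of Yao, Sethares, and Kammer (1993):

```python
import numpy as np

def det_fim(Phi, rows):
    """det of the FIM of the selected candidate rows (unit noise variance)."""
    P = Phi[rows]
    return np.linalg.det(P.T @ P)

def ga_place(Phi, m, pop=30, gens=60, seed=0):
    """Toy GA: individuals are m-subsets of the s candidate rows of Phi."""
    rng = np.random.default_rng(seed)
    s = Phi.shape[0]
    popu = [rng.choice(s, m, replace=False) for _ in range(pop)]
    for _ in range(gens):
        fit = np.array([det_fim(Phi, ind) for ind in popu])
        elite = [popu[i] for i in np.argsort(fit)[::-1][: pop // 2]]
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.choice(len(elite), 2, replace=False)
            union = np.union1d(elite[a], elite[b])
            child = rng.choice(union, m, replace=False)   # set crossover
            if rng.random() < 0.3:                        # swap mutation
                out = np.setdiff1d(np.arange(s), child)
                child[rng.integers(m)] = rng.choice(out)
            children.append(child)
        popu = elite + children
    fit = np.array([det_fim(Phi, ind) for ind in popu])
    return popu[int(np.argmax(fit))]
```

Keeping the best half of each generation (elitism) guarantees the best subset found so far is never lost.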
In Hac and Liu (1993) the problem is extended to the optimal sensor and actuator location in systems governed by the generalized wave equation, whose displacement solution w at a point P and instant t can be represented as the series:

w(P, t) = Σ_{i=1}^{∞} η_i(t) Φ_i(P)

where η_i(t) are modal coordinates and Φ_i(P) are eigenfunctions with corresponding natural frequencies ω_i. Following a common procedure in structural dynamics, the governing equation can be replaced by a set of n ordinary differential equations, under the assumption that the higher-order modes can be neglected as they are not likely to be excited in practice and typically exhibit higher structural damping. Defining the state and input vectors as x = [η_1, ω_1 η_1, ..., ω_n η_n]^T and u = [f_1, ..., f_p]^T yields the state-space representation of the above equations. If the displacement y(t) is measured at r sensor points, Hac and Liu (1993) propose to locate the sensors so as to maximize a Performance Index (PI) expressed in terms of the eigenvalues λ_j of the observability gramian Q, which is known to be proportional to the output energy released by the system from the initial state x(0) = x_0. Maximization of the PI is shown by Hac and Liu (1993) to guarantee that the system response is large under a persistent excitation, provided that the natural frequencies are well spaced and the damping is low. The idea, rather simplistic as a matter of fact, is therefore to optimize the sensor locations by requiring the system output to be as large as possible. On the other hand, the approach cannot guarantee that the noise associated with the measurements is the lowest possible. Furthermore, the observability gramian depends on the particular choice of the state variables, which makes the application rather complicated.
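The gramian-based ranking can be illustrated on a small modal model. All numbers below (frequencies, damping, mode shapes) are hypothetical, and the product of gramian eigenvalues is used as a simple stand-in for the paper's PI:

```python
import numpy as np

def obs_gramian(A, C):
    """Observability gramian of a stable pair (A, C): solves
    A^T Q + Q A + C^T C = 0 via the Kronecker form of the Lyapunov equation."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    q = np.linalg.solve(M, -(C.T @ C).flatten(order="F"))
    return q.reshape((n, n), order="F")

# two-mode modal state-space model, x = [eta1, eta1', eta2, eta2']
w, xi = np.array([1.0, 2.5]), 0.02        # hypothetical frequencies / damping
A = np.zeros((4, 4))
for i, wi in enumerate(w):
    A[2 * i, 2 * i + 1] = 1.0
    A[2 * i + 1, 2 * i] = -wi ** 2
    A[2 * i + 1, 2 * i + 1] = -2 * xi * wi

# mode shapes sampled at three candidate sensor points
Phi = np.array([[0.9, 0.1], [0.5, 0.8], [0.2, -0.9]])
scores = []
for row in Phi:                            # one displacement sensor at a time
    C = np.array([[row[0], 0.0, row[1], 0.0]])
    lam = np.linalg.eigvalsh(obs_gramian(A, C))
    scores.append(lam.prod())              # simple eigenvalue-based index
best = int(np.argmax(scores))
```

A candidate with a near-zero component on some mode yields a nearly singular gramian and is penalized heavily by the eigenvalue product.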
In 1991 a different approach based on the FIM was proposed, even if the paper was only published in 1993. A research line aimed at obtaining the optimal sensor locations as those providing the best structural parameter estimates had already been trodden in the eighties, but those approaches all depended on the specific estimator adopted and thus required an exhaustive search to be performed for each of them.
Many older papers had shown that the optimal location of the sensors can be obtained by maximizing the FIM (say Q), as such a maximization minimizes the covariance of the estimation error E[(θ − θ̂)(θ − θ̂)^T] on the basis of the Cramer-Rao lower bound:

E[(θ − θ̂)(θ − θ̂)^T] ≥ Q^{-1}    (14)

Eq. (14) leads to a great simplification, but it is valid under the postulate that an asymptotically efficient unbiased estimator exists.
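The bound can be checked numerically in the linear Gaussian case, where the least-squares estimator is efficient and attains it (the design matrix, true parameters and noise level below are arbitrary illustrative choices):

```python
import numpy as np

# Linear Gaussian model y = H theta + e, e ~ N(0, sigma^2 I): the
# least-squares estimator attains the Cramer-Rao bound Q^{-1},
# with Q = H^T H / sigma^2 the Fisher information matrix.
rng = np.random.default_rng(1)
H = rng.standard_normal((50, 3))
theta = np.array([1.0, -2.0, 0.5])
sigma = 0.3
Q = H.T @ H / sigma**2
crb = np.linalg.inv(Q)                    # Cramer-Rao lower bound

ests = []
for _ in range(4000):                     # Monte Carlo replications
    y = H @ theta + sigma * rng.standard_normal(50)
    ests.append(np.linalg.lstsq(H, y, rcond=None)[0])
emp_cov = np.cov(np.array(ests).T)        # empirical estimator covariance
```

The empirical covariance matches the bound to within Monte-Carlo accuracy, which is the equality case of Eq. (14).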
In Udwadia (1994) the FIM is established for an m-dof structural dynamic system, and the proposed methodology may also be applied to systems governed by nonlinear differential equations. The author asserts that the sensors are to be placed at those locations that are most sensitive to changes in the parameters θ, i.e. at those locations that maximize the slope of H(θ) or, better, that maximize a suitable norm of the FIM. Among the various commonly used norms, Udwadia (1994) suggests the trace of the FIM, which has the advantage of being linear. The author proposes an algorithm that places the m optimal sensors at the positions corresponding to the largest m diagonal entries of the FIM. The expression of the trace of the FIM is given in terms of the sensitivities ∂x_{s_k}/∂θ_i, where x_{s_k} is the response of the node s_k equipped with a sensor. The analytical expression of such a sensitivity is determined by the author with an implicit approach for the case of N-dof classically damped linear dynamic systems.
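In the spirit of this trace criterion, candidate locations can be ranked by their contribution to tr(FIM), i.e. by the sum of their squared parameter sensitivities (the sensitivity matrix below is a made-up example, not taken from the paper):

```python
import numpy as np

def place_by_trace(S, m):
    """S[i, j] = sensitivity d x_i / d theta_j of the response at candidate
    location i to parameter j. Location i contributes sum_j S[i, j]^2 to the
    trace of the FIM (unit noise); the m largest contributors are selected."""
    contrib = (S ** 2).sum(axis=1)
    return np.argsort(contrib)[::-1][:m]

# three candidate locations, two parameters (hypothetical sensitivities)
S = np.array([[3.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
chosen = place_by_trace(S, 2)   # contributions are 9, 2 and 4
```

Because the trace is additive over locations, the ranking is a single pass with no combinatorial search, which is the linearity advantage noted above.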
The assumption of the above work is that the measurements are independent random variables. In the case of the optimal location of sensors for a vibrating simply supported beam, following Kirkegaard and Brincker (1994), if the measurements are dependent then the expression of the FIM is more cumbersome, as it involves an N_s × N_s-dimensional integral to be solved (N_s being the number of sensors to locate optimally). The difference between the two cases is shown by the authors for one observation taken simultaneously at each of two sensors, i.e. the integral becomes two-dimensional. The comparison shows that more reliable conclusions can be drawn in the second case, as the spatial correlation is taken into account. The optimal sensor locations are obtained by maximizing the determinant of the FIM. Furthermore, the authors show that the optimal result is sensitive to the variance of the noise, but such sensitivity decreases with an increasing number of sensors. The proposed method essentially formalises the old practical rule of placing sensors near the antinodes of the low-frequency vibration modes of the system.
In Penny, Friswell, and Garvey (1994) an a priori FE model is assumed to guide the selection of the locations. Two approaches are considered. In the first, the objective function is the average driving point residue (ADPR):

ADPR_i = Σ_j ψ_ij² / ω_j

where ψ_ij is the i-th element of the j-th eigenvector and ω_j is the j-th natural frequency. The coordinates with the highest ADPR are chosen, i.e. the coordinates that give the highest contribution to the mode shapes. In the second approach the objective function to maximize is related to the ratios m_ii/k_ii, following the idea of Guyan model reduction: the coordinates to select are those to be retained as masters.
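The index is a one-liner in NumPy (a sketch of the commonly used frequency-weighted form; the exact weighting adopted by Penny, Friswell, and Garvey may differ, and the mode shapes below are placeholders):

```python
import numpy as np

def adpr(Psi, omega):
    """Average driving point residue of each coordinate:
    ADPR_i = sum_j Psi[i, j]^2 / omega[j]. Coordinates with the
    largest ADPR are retained as sensor locations."""
    return (Psi ** 2) @ (1.0 / np.asarray(omega))

# two modes sampled at two coordinates (hypothetical values)
Psi = np.array([[1.0, 0.0],
                [0.0, 2.0]])
ranking = np.argsort(adpr(Psi, [1.0, 4.0]))[::-1]
```

The 1/ω weighting favors coordinates that participate strongly in the low-frequency modes, consistent with the low-frequency focus of most placement methods reviewed here.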
In 1998 a general Bayesian framework was proposed (Beck and Katafygiotis (1998), Katafygiotis and Beck (1998)) for system identification. The procedure is detailed for the model updating problem, in which the FEM is adjusted so that either the calculated response time histories, frequency response functions, or modal parameters best match the corresponding quantities measured from the test data. The procedure proposed is mainly aimed at finding the unknown parameters of the model that best fit the measured response at certain points, and not at finding the related best sensor locations. Despite this, the above papers deserve attention as they give a complete and mathematically consistent formulation of system identification in the statistical framework. In fact, in Beck and Katafygiotis (1998) the authors provide the basic probability model for the statistical identification of the free parameters describing the input-output structural behavior. Such a model is given in terms of two probability distribution functions (PDFs): the PDF for the system output, related to the uncertainty in the prediction accuracy of the governing mathematical model, and the PDF for the model parameters θ and σ, related to the uncertainty in the prediction-error probability model. The former is given by:

p(Y_1^M, X_1^M | θ, σ) = (2πσ²)^{−N_d M/2} exp{ −(1/2σ²) Σ_{n=1}^{M} [ ||y(n) − S_0 q(n; θ)||² + ||x(n) − S_u q(n; θ)||² ] }

where θ are the free parameters, M the number of time steps, Y_1^M the responses at the observed DOFs and X_1^M those at the unobserved DOFs, N_d the total number of DOFs, N_o the number of observed DOFs, σ² the variance of the prediction errors, and S_0 q(n; θ) and S_u q(n; θ) the model outputs at the observed and unobserved DOFs, respectively.
The uncertainty in the prediction-error probability model is quantified by a prior PDF p(θ, σ). The predictive PDF is provided both prior to utilizing data (initial predictive PDF) and after a set of observed time history data D from the structural system is measured (updated predictive PDF):

p(θ, σ | D) = c p(D | θ, σ) p(θ, σ)

where c is a normalizing constant.
The optimal structural parameters and variance are obtained by maximizing the updated predictive PDF, i.e. as the most probable (maximum a posteriori) estimates. Again the assumption is made that the prediction error at any location and any time is not influenced by the uncertainty at other times or at other locations within the structure. As the integrals involved become practically impossible to evaluate even for a relatively low number of structural parameters, the authors propose an alternative asymptotic approach valid when a large number of data measurements is available. In Katafygiotis and Beck (1998) an example is given where the best stiffnesses of a two-dof linear planar building are determined so that the corresponding FEM exactly reproduces the accelerations measured at the roof under a given base excitation.
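The flavor of this updating can be conveyed by a one-parameter toy problem: the stiffness of a 1-DOF oscillator is updated from noisy natural-frequency "measurements". All values are hypothetical, and a simple Gaussian prediction error with a flat prior stands in for the full probability model of the papers:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 1.0
theta_true = 4.0                              # true stiffness (hypothetical)
data = np.sqrt(theta_true / m) + 0.05 * rng.standard_normal(20)

thetas = np.linspace(2.0, 6.0, 2001)          # grid over the parameter
model = np.sqrt(thetas / m)                   # predicted frequency per theta
# Gaussian prediction-error likelihood x flat prior -> posterior
loglik = -0.5 * ((data[:, None] - model[None, :]) / 0.05) ** 2
ll = loglik.sum(axis=0)
post = np.exp(ll - ll.max())                  # unnormalized posterior
post /= post.sum() * (thetas[1] - thetas[0])  # normalize on the grid
theta_map = thetas[np.argmax(post)]           # most probable stiffness
```

With informative data the posterior concentrates near the true stiffness; the asymptotic approximations mentioned above replace exactly this kind of brute-force integration, which becomes infeasible beyond a few parameters.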
In Katafygiotis, Papadimitriou, and Lam (1998) the general Bayesian statistical model updating framework presented in Beck and Katafygiotis (1998) is discussed for the cases in which the asymptotic approximations developed there are not valid; for such cases the authors propose a numerical algorithm that approximates the posterior PDF.
An extension of the approach proposed by Udwadia (1994) is formulated by Heredia-Zavoni and Esteva (1998) to treat the case of the large model uncertainties expected in model updating. The covariance of the most efficient unbiased estimator θ̂ of the uncertain structural parameters, collected in the vector θ, is known to attain the inverse of the FIM. The authors propose to minimize the expectation E[L(θ, θ̂)] of the squared-error loss function L(θ, θ̂) = (θ̂ − θ)^t (θ̂ − θ) to obtain the optimal set of sensor locations. By carrying out a Taylor series expansion of L about θ and on the basis of the results shown by Udwadia (1994), the authors demonstrate that:

E[L(θ, θ̂)] ≈ tr[Q^{-1}]    (21)

where Q is the FIM defined as:

Q = E[ (∂ ln f(Y|θ)/∂θ) (∂ ln f(Y|θ)/∂θ)^t ]

with f(Y|θ) denoting the conditional joint probability density function of the random vector of observations Y and the superscript t denoting vector transpose.
The expression of the FIM is obtained in terms of the covariances between Fourier coefficients of the recorded responses; such expressions are derived by the authors for the case of linear stochastic structural response. It is worth pointing out that in order to use Eq. (21) it is necessary to assign a prior distribution of θ before the measurements are carried out.
In Heredia-Zavoni, Montes-Iturrizaga, and Esteva (1999) the expressions obtained by Heredia-Zavoni and Esteva (1998) are extended to take into account the soil-foundation interaction effects in 2D structures, with uncertain properties, subjected to random earthquake ground motions. The base movement is approximated as rigid, i.e. having a horizontal displacement v_0 and a rotation φ of the base. The updated expressions of the covariances between Fourier coefficients of recorded responses are obtained.
The approach proposed in Reynier and Abou-Kandil (1999) has some points in common with the strategies presented in Kammer (1991) and Hac and Liu (1993). It limits the optimal sensor location issue to the low-frequency range; the target is the estimation of the modal coordinates q. The optimal location of sensors is obtained by comparing two different approaches. The first is based on the minimization of the covariance matrix of the estimation error. If the matrix φ collects the (FEM) eigenvectors of an n-dof structure and N is the white Gaussian noise that affects the FEM solution used to recover the measured displacements, the covariance matrix is given by:

P = β (φ^t φ)^{-1}    (24)

where β is the constant diagonal term of E[NN^t], as the noise is supposed to be white and equal at each sensor location. By some simple matrix manipulations the authors show that the minimization of Eq. (24) is equivalent to the maximization of the FIM φ^t φ. The second approach is based on the maximization of a measure associated with the observability gramian matrix W_0 of the state-space model built from the η modes assumed to describe the response of the structure, whose system matrix involves the η × η zero matrix 0, the η × η unit matrix I, the η × η diagonal matrix ψ collecting the eigenvalues ω_i, and the η × η diagonal matrix Σ with Σ_i = −2ξ ω_i.
The numerical results presented by the authors appear to be more efficient than the ones obtained by the approaches in Kammer (1991) and Hac and Liu (1993).

Papadimitriou, Beck, and Au (2000) is the natural extension of the work carried out in Beck and Katafygiotis (1998) and Katafygiotis and Beck (1998) to the computation of the optimal sensor locations. In coherence with the Bayesian framework described in Beck and Katafygiotis (1998), the uncertainties in the parameters θ to be identified and in the prediction error e(n, θ) at time t_n are quantified using a probability density function (PDF) whose updated expression is given by the asymptotic form, valid for a large sampling time interval, provided by the authors in expressions similar to Eqs. (14) and (21) of Udwadia (1994). The optimal value θ̂ of θ is chosen as the one minimizing a measure of the uncertainty in θ expressed by the information entropy. It must be pointed out that an important advantage of the information entropy measure is that it allows comparisons between sensor configurations involving different numbers of sensors.

The above minimization is equivalent to the maximization of either ln det Q(δ, θ_0) or det Q(δ, θ_0) when the updated values of the model parameters θ̂, σ̂_0 do not deviate significantly from the nominal values θ_0, σ_0 (chosen by the designer as representative for the structure and the given classes of models). Here σ² is the variance of the prediction error vector affecting the model output, which is supposed to be Gaussian with zero mean; θ_0 is such that S_0 q(n, θ_0) is the mean of the Gaussian PDF model output.
The minimization is carried out by a GA. The formulation allows the comparison between configurations with different numbers of sensors. It is worth pointing out that the expressions of the FIM provided by the authors are analogous to those derived by Udwadia (1994), but the determinant rather than the trace is involved. The authors demonstrate that the determinant and the trace can lead to different optimal sensor configurations.
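The last claim is easy to reproduce: with one nearly parallel pair of high-amplitude mode-shape rows, the trace and determinant criteria pick different sensor pairs (the 3 × 2 matrix below is contrived purely for illustration):

```python
import numpy as np
from itertools import combinations

# The trace of the FIM rewards rows with large norms; the determinant
# rewards linear independence. Rows 0 and 1 are nearly parallel but
# large; row 2 is small but independent.
Phi = np.array([[10.0, 0.0],
                [9.99, 0.001],
                [0.0, 1.0]])

def fim(rows):
    P = Phi[list(rows)]
    return P.T @ P

pairs = list(combinations(range(3), 2))
best_tr = max(pairs, key=lambda r: np.trace(fim(r)))      # picks (0, 1)
best_det = max(pairs, key=lambda r: np.linalg.det(fim(r)))  # picks (0, 2)
```

The trace-optimal pair is nearly rank-deficient (its FIM determinant is tiny), which is exactly why the determinant criterion is usually preferred for identifiability.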
On the other hand, large uncertainties in the parameter values may arise: for instance, severe damage may cause a significant reduction in the stiffness of the structure. In such a case the updated model is not close to the nominal model, and the nominal values θ_0 and σ_0 must be represented by a prescribed PDF. The optimal parameters are then obtained by minimizing the change in information entropy or, equivalently, maximizing the expected value of ln det Q(δ, θ_0) over θ_0. Heredia-Zavoni and Esteva (1998) proposed an alternative formulation where the minimization is carried out on the expected value of tr[Q^{-1}(δ, θ_0)] over θ_0.
Yuen and Katafygiotis (2001) and Katafygiotis and Yeun (2001) do not expressly deal with sensor network optimization, but the framework they develop is the starting point of subsequent optimization papers and thus deserves a brief introduction.
The issue is the knowledge of the input excitation related to the response measurement, which is usually assumed to be completely known. What happens if the input is not available? For instance, ambient vibration surveys may offer a means of obtaining dynamic data in an efficient and economic manner; in such a case there is an additional uncertainty to take into account. In Yuen and Katafygiotis (2001) a Bayesian time-domain approach for modal updating using ambient data is developed on the basis of the Bayesian framework of Beck and Katafygiotis (1998) and Katafygiotis, Papadimitriou, and Lam (1998). The external force is modeled as Gaussian white noise with known spectral density. The usual expression of the PDF of the random measurement vector Ŷ_{1,N} for given θ, involving the covariance matrix Γ(θ), is unfeasible for a large number of observed data. The authors propose an approximate expansion, valid provided that only the lower N_m modes contribute significantly to the response and only the N_p previous time-data points have a significant effect on the statistical behavior of the present one. The updated expression involves smaller matrices and smaller time intervals, thus requiring a numerical effort that is negligible compared to that of the exact formula.
The optimal parameters θ̂ are determined as the most probable ones, i.e. as those maximizing p(θ)p(Ŷ_{1,N}|θ), where p(θ) is the prior PDF of θ.
Katafygiotis and Yeun (2001) develop a similar procedure for modal updating using ambient data, based on the statistics of an estimator of the spectral density. The response of an N-dof dynamic system under a Gaussian white noise force is a Gaussian process with zero mean and spectral density S(ω), for which the authors introduce an estimator of the spectral density matrix. Assuming a set of M independent, identically distributed, N-step time histories Y^(1), ..., Y^(M), with N → ∞ the authors provide the PDF of the average spectral density estimate S^M_{y,N}. The most probable parameters θ are determined by minimizing a term related to the updated PDF p(S^{M,k1,k2}_{y,N}|θ) of the model parameters θ given the data S^{M,k1,k2}_{y,N}, where S^{M,k1,k2}_{y,N} = S^M_{y,N}(kΔω), k = k_1, ..., k_2. The above results are correct only asymptotically as N → ∞.
Yuen, Katafygiotis, Papadimitriou, and Mickleborough (2001) represents the natural extension of the work developed in Yuen and Katafygiotis (2001) and Katafygiotis and Yeun (2001) to the optimal sensor placement problem for the case of uncertain excitation. The optimal sensor configuration is selected as the one which minimizes the information entropy measure of Eq. (32), where E denotes the mathematical expectation with respect to θ, and M, δ and D denote the class of models, the sensor configuration and the available data, respectively. The PDF involved in Eq. (32) is expressed in Yuen, Katafygiotis, Papadimitriou, and Mickleborough (2001) in terms of p(S^M_{y,N}(ω_k)|θ). For a large number of available data the information entropy of Eq. (32) can be expressed in terms of the Hessian Q(δ, θ̂, D) of g(θ), where θ̂ are the most probable parameters. For a large number of data the information entropy no longer depends on the data that are not available at the initial stage, i.e. Q(δ, θ̂, D) → Q(δ, θ̂), but the dependence on θ̂ is still a problem. The issue can be solved either by assigning a nominal model θ_0 or by prescribing a PDF p(θ_0); in the latter case the information entropy to be minimized becomes a multi-dimensional integral that needs to be evaluated numerically by efficient asymptotic expansions.
In the same Bayesian statistical system identification framework set by Beck and Katafygiotis (1998), Papadimitriou (2004) discusses the information entropy as the measure to minimize. The Bayesian statistical framework is the same as the one developed by Beck and Katafygiotis (1998) to provide the expression of the updating PDF p(θ, σ|D) of the set of structural model and prediction error parameters (θ, σ) given the measured data D. Furthermore, the information entropy as introduced by Papadimitriou, Beck, and Au (2000) is approximated by an asymptotic expansion, valid for a large number of data, obtained by the Laplace method. Such an approximation is similar to the one obtained by Yuen, Katafygiotis, Papadimitriou, and Mickleborough (2001) and is expressed in terms of the determinant of the FIM. Some useful propositions are also obtained: 1) the information entropy for M sensors is higher than the information entropy for M + L sensors; 2) the minimum and maximum information entropy are decreasing functions of the number of sensors. Finally, two interesting numerical algorithms are proposed and investigated as alternatives to the GA: the forward sequential sensor placement algorithm and the backward sequential sensor placement algorithm, the former placing one sensor at a time at the position that results in the highest reduction of the information entropy, the latter removing one sensor at a time at the position that results in the smallest increase of the information entropy.
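Both greedy strategies can be sketched compactly. Under the large-data asymptotics, minimizing the entropy amounts to maximizing log det FIM, which is what this sketch uses; the tiny jitter is an implementation convenience of this example (it keeps the early, rank-deficient forward steps well-defined), not part of the original algorithms:

```python
import numpy as np

def logdet_fim(Phi, idx, eps=1e-9):
    """log det of the FIM of the selected rows (unit noise variance)."""
    P = Phi[list(idx)]
    return np.linalg.slogdet(P.T @ P + eps * np.eye(Phi.shape[1]))[1]

def forward_ssp(Phi, m):
    """Forward sequential placement: add, one at a time, the sensor
    yielding the largest increase of log det FIM."""
    chosen = []
    for _ in range(m):
        rest = [i for i in range(Phi.shape[0]) if i not in chosen]
        chosen.append(max(rest, key=lambda i: logdet_fim(Phi, chosen + [i])))
    return sorted(chosen)

def backward_ssp(Phi, m):
    """Backward sequential placement: starting from all candidates,
    remove, one at a time, the sensor whose deletion costs the least."""
    chosen = list(range(Phi.shape[0]))
    while len(chosen) > m:
        worst = max(chosen, key=lambda i:
                    logdet_fim(Phi, [j for j in chosen if j != i]))
        chosen.remove(worst)
    return sorted(chosen)
```

Each variant evaluates O(s·m) or O(s·(s−m)) configurations instead of the combinatorial C(s, m), which is the practical appeal over a GA.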
Also in Papadimitriou (2005) the information entropy is used as the measure to minimize for optimizing the sensor configuration. The corresponding multi-objective optimization problem of finding the sensor locations that simultaneously minimize appropriately defined information entropy indices for all model classes is addressed by estimating the Pareto optimal solutions with different algorithms. The Bayesian statistical framework is the one described by Beck and Katafygiotis (1998), and the asymptotic approximation of the information entropy, valid for a large number of data and provided by Papadimitriou (2004), is adopted. An information entropy index IEI(δ) is introduced as:

IEI(δ) = [H(δ) − H(δ_ref)] / [H(δ_0,ref) − H(δ_ref)]

where H(δ_ref) and H(δ_0,ref) are the information entropies computed for two reference sensor configurations, which depend on the problem under analysis. If the number of sensors is fixed, they correspond to the optimal and to the worst sensor configurations; if the number of sensors varies between 1 and N_P, they correspond to the optimal sensor locations for 1 and N_P sensors, respectively.
Let J_i(x) = IEI_i(δ_i(x)) be the IEI of a sensor configuration x, where δ_i(x) maps the sensor configuration for the different model classes M_i. The optimal sensor locations are identified by minimizing J(x) = (J_1(x), ..., J_µ(x)) (where µ is the number of structural models), i.e. as a multi-objective optimization problem that, in principle, may have alternative solutions known as Pareto optimal solutions. The authors in Papadimitriou (2005) use two algorithms to solve it: the Strength Pareto Evolutionary Algorithm and a heuristic algorithm based on the sequential sensor placement approach introduced by Papadimitriou (2004).

Meo and Zumpano (2005) is a useful comparison of different optimal sensor placement techniques for a bridge structure. The optimal sensor locations are obtained on the basis of the first three global modal properties of the bridge. Six different methods are investigated: the EID method developed by Kammer (1991), a compromise between the EID method and an energetic approach, the kinetic energy method introduced by Heo, Wang, and Satpathi (1997), the variance method developed by the authors, and two approaches based on the maximization of the vibration energy content of the acquired signal. After describing the six techniques, the authors conclude that the EID technique has the best performance in identifying the optimal sensors capable of capturing the low-frequency vibration characteristics. It must be pointed out that the comparison is limited to a specific example and to a specific objective function. A connection between the EID method and the modal kinetic energy (MKE) method for the accelerometer placement problem is also investigated in Li, Li, and Fritzen (2007) for eigenfrequencies and mode shapes identification. The MKE ranks all candidate sensor positions by their MKE indices as follows:

MKE_ij = Φ_ij ∑_k M_ik Φ_kj

where MKE_ij is the kinetic energy associated with the i-th dof in the j-th target mode and M is the mass matrix. The sensor locations with higher values of MKE are selected as the measurement sensor set. The authors demonstrate that the EID method is an iterated version of MKE with re-orthonormalized mode shapes. Li, Tang, and Li (2004) also compare four different fitness functions in optimizing the sensor locations for structural vibration measurements. If Φ collects the n modes of the free vibration analysis of a structure, the first fitness function (to minimize) is expressed in terms of the components Φ_ir of the i-th mode at the locations r where no sensor is installed. The other fitness functions are all oriented to avoid measurements of similar modes: the second fitness function is based on the modal scale factor (MSF), the third is set in terms of the Modal Assurance Criterion (MAC), and the fourth is expressed in terms of the discrepancy between the i-th mode shape Φ_ia from calculation and the measured i-th mode shape Φ_ib with m nodes. The optimization procedure is carried out with the uniform design method introduced in Statistics.
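A minimal sketch of this ranking, with an illustrative lumped mass matrix and two made-up target modes, is:

```python
import numpy as np

# MKE ranking sketch: MKE_ij = Phi_ij * (M @ Phi)_ij is the kinetic energy of
# dof i in target mode j; candidate dofs are ranked by their total MKE over
# the target modes. M (lumped masses) and Phi (mode shapes) are illustrative.
def mke_ranking(M, Phi, n_sensors):
    mke = Phi * (M @ Phi)             # element-wise energy per dof and mode
    score = mke.sum(axis=1)           # total MKE per candidate dof
    return np.argsort(score)[::-1][:n_sensors]

M = np.diag([1.0, 2.0, 1.5, 0.5])
Phi = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5], [0.9, 0.1]])
print(mke_ranking(M, Phi, 2))         # → [1 0]
```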
It can be argued that the information-based approaches, first introduced by Heredia-Zavoni and Esteva (1998) and Heredia-Zavoni, Montes-Iturrizaga, and Esteva (1999), reduce the initial candidate sensor locations to the optimal one in a suboptimal way. Furthermore, when GAs have been used, they have been tested only on problems with a small number of candidate sensor locations. In Rao and Anandakumar (2007) the authors propose an improved hybrid version of the particle swarm optimization (PSO) technique, combined with the Nelder-Mead algorithm to improve the local search step. The total mean square error and the determinant of the FIM are taken as objective functions. The procedure, tested on a cantilever beam and on a rectangular plate, shows superior performance with respect to other information-based approaches (such as EID).
In Yi, Li, and Gu (2011) the number and locations of the sensors are determined so as to guarantee, as far as possible, that the measured modal vectors are orthogonal. This can be achieved by forcing the MAC matrix:

MAC_ij = (Φ_i^T Φ_j)² / [(Φ_i^T Φ_i)(Φ_j^T Φ_j)]

to be as diagonal as possible. The procedure starts by assigning an initial set of sensor locations that maximizes the determinant of the FIM. Then the sequential sensor placement (SSP) algorithm already presented by Papadimitriou (2004) is adopted, i.e. one sensor at a time is added at the position that yields the highest reduction in the maximum off-diagonal element of the MAC. The solution is clearly suboptimal or near-optimal. The SSP can also be used in inverse order, obtaining the backward SSP (BSSP). The entire procedure is tested with reference to a simplified 3D beam FEM of the Guangzhou New TV Tower in China.
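The greedy step can be sketched as follows, with an illustrative mode matrix; in the actual procedure the seed sensor would come from the FIM-based initialization:

```python
import numpy as np

# Sketch of the MAC-driven sequential sensor placement (SSP): one sensor at a
# time is added at the dof that minimizes the largest off-diagonal MAC entry
# of the measured mode shapes. Phi (rows = candidate dofs, columns = modes)
# is illustrative.
def max_offdiag_mac(Phi_s):
    G = Phi_s.T @ Phi_s                       # cross products of measured modes
    mac = G**2 / np.outer(np.diag(G), np.diag(G))
    return (mac - np.eye(len(mac))).max()

def ssp(Phi, n_sensors, seed):
    chosen = [seed]
    while len(chosen) < n_sensors:
        rest = [i for i in range(Phi.shape[0]) if i not in chosen]
        best = min(rest, key=lambda i: max_offdiag_mac(Phi[chosen + [i], :]))
        chosen.append(best)
    return chosen

Phi = np.array([[1.0, 1.0], [1.0, -1.0], [0.5, 0.4], [0.2, 0.9]])
print(ssp(Phi, 2, seed=0))                    # → [0, 1]
```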
The influence of the spatial correlation between prediction errors on the design of the optimal sensor locations is investigated in Papadimitriou and Lombaert (2012). The covariance Σ_t of the total prediction error is the sum of the covariance of the measurement error and the covariance of the model error. If the measurement error is assumed to be independent of the location of the sensors, its covariance becomes diagonal. On the other hand, it is reasonable to expect a certain degree of correlation between the model errors at two neighboring locations. Such a correlation is assumed by the authors to be of exponential type, decaying with the distance between the two locations as exp(−|x_i − x_j|/λ), where λ is a measure of the spatial correlation length.
The measure to minimize is the information entropy as introduced by Papadimitriou, Beck, and Au (2000) and asymptotically expanded by Papadimitriou (2004) in terms of the determinant of the FIM. Along the lines of the propositions demonstrated in Papadimitriou (2004), it is shown that the information entropy is a decreasing function of the shortest distance δ between a newly added sensor and the previous M sensors. This implies that sensor locations further away from an existing sensor have a higher information content; thus, the spatial correlation of the prediction error tends to shift a sensor away from existing sensor locations.
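The effect can be illustrated by assembling Σ_t under the exponential-kernel assumption, with illustrative sensor coordinates and variances:

```python
import numpy as np

# Total prediction-error covariance sketch: a diagonal, location-independent
# measurement-error part plus a spatially correlated model-error part,
# assuming an exponential kernel exp(-|x_i - x_j| / lam) with correlation
# length lam. Sensor coordinates and variances are illustrative.
def total_covariance(x, sigma_meas, sigma_model, lam):
    d = np.abs(x[:, None] - x[None, :])            # pairwise distances
    Sigma_model = sigma_model**2 * np.exp(-d / lam)
    return np.diag(np.full(len(x), sigma_meas**2)) + Sigma_model

x = np.array([0.0, 0.1, 2.0])                      # two close sensors, one far
S = total_covariance(x, 0.05, 0.1, 0.5)
print(S[0, 1] > S[0, 2])                           # True: close pair correlated
```

A sensor placed close to an existing one therefore shares correlated model error and adds little independent information, which is why the optimal design pushes sensors apart.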

Optimal Sensors in Damage Identification
All load-carrying structures, such as aircraft, spacecraft, bridges, and offshore platforms, continuously accumulate damage during their service life. Any crack or local damage in a structure may affect the structural safety, so a structural monitoring system is needed. The optimization of the sensor locations is a crucial problem in such a system: taking the cost of sensors into account, it is uneconomical to install sensors on every part of a structure.
Two critical constraints exist in SHM applications: the number of sensors (and possibly actuators) available for the network, and the power available for interrogation. Due to the difficulty of replacing batteries for sensors embedded in a structure, the energy efficiency of the sensors is a critical concern for SHM systems. It is quite common to consider the use of piezoelectric patches in the active-sensing process; for this reason only piezoelectric patches are considered in the present review. Two actuation-sensing schemes are possible: pulse-echo, involving a single patch actuating a waveform and then detecting its reflections, and pitch-catch, involving two different patches, one to actuate and another to sense.
Sensor optimization in damage identification is a natural extension of the works illustrated in the previous section. Such an assertion is confirmed by the fact that many papers involving parametric identification are cited in the contributions dealing with damage identification.
An attempt to properly locate the sensors and to measure the related extension of damage is developed in Cobb and Liebst (1997). The approach is mainly based on sensitivity analysis, i.e. on examining the first-order eigenstructure sensitivity to changes in the structural stiffness of each Finite Element; structural damping is neglected. The location of the sensors is chosen in order to maximize such a sensitivity. If λ_i and Φ_i represent the FE eigenvalue and eigenvector for the i-th mode, the authors extract the expressions of the eigenvalue and eigenvector sensitivities ∇λ_i and ∇Φ_i. Some metrics are then developed in terms of the above sensitivities; such metrics are capable of 1) distinguishing the elements in which the damage can be detected on the basis of the r measured modes (elements D) from those in which it cannot (elements U), and 2) among the detectable elements D, distinguishing those in which the damage can be localized (elements I) from those in which it cannot (elements S). The optimal sensor locations are obtained by removing sensors from the dofs so as to have one sensor for each element I and one sensor for each group of elements S. It is worth underlining that the procedure is developed with reference to truss structures, for which it turns out to be straightforward and efficient. Furthermore, the damage is simply modeled by a reduction of the structural stiffness, without regard to its nature. In conclusion, the procedure is aimed at prioritizing the dofs to instrument (as shown in the example in Fig. 3) when used to collect modal data for stiffness-reduction damage identification.
A similar approach is given in Shi, Law, and Zhang (2000). The damage under analysis is again a reduction of the element stiffness, identifiable by the decrease in the natural frequencies and the modification of the modes of vibration of the structure. The dofs to measure are selected by progressively reducing a larger candidate set on the basis of their contribution to localizing the structural damage. The authors provide expressions of the first-order change ∆Φ_i of the i-th mode shape with respect to the damage coefficients α_k (∆K = ∑_{k=1}^{L} α_k K_k). The procedure improves on that in Cobb and Liebst (1997) by including the measurement noise and by introducing the FIM. Following the suggestion given in Udwadia (1994), the best estimate of the damage coefficients is obtained by maximizing the FIM.
The optimal sensor locations are chosen by retaining the ones that contribute most to the diagonal terms of a matrix Ē. Such a matrix is similar to the matrix F_E introduced in Kammer (1991), and the approach is equivalent to maximizing the FIM. After locating the optimal sensors, the damaged sites are estimated by the Multiple Damage Location Assurance Criterion (MDLAC) and the damage extent is assessed from the measured modal frequencies, as suggested in Shi, Law, and Zhang (1999). Fig. 4 shows the optimal sensor configuration for the 2D truss presented in Shi, Law, and Zhang (2000). The sensors are located at the nodes highlighted either by a circle (two sensors measuring both dofs), by a horizontal rectangle (one sensor measuring the horizontal dof) or by a vertical rectangle (one sensor measuring the vertical dof).

Figure 4: Optimal sensors on a 2D truss (Shi, Law, and Zhang (2000)).
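Since the approach is equivalent to maximizing the FIM, it can be illustrated with a minimal effective-independence deletion loop; the mode matrix below is illustrative:

```python
import numpy as np

# Effective independence (EID) sketch: candidate dofs are deleted one at a
# time, removing the dof with the smallest diagonal entry of
# E = Phi (Phi^T Phi)^-1 Phi^T, i.e. the smallest contribution to the
# Fisher information matrix Phi^T Phi. Phi is an illustrative mode matrix.
def eid(Phi, n_sensors):
    keep = list(range(Phi.shape[0]))
    while len(keep) > n_sensors:
        P = Phi[keep, :]
        E = P @ np.linalg.inv(P.T @ P) @ P.T
        keep.pop(int(np.argmin(np.diag(E))))  # drop the least informative dof
    return keep

Phi = np.array([[1.0, 0.0], [0.0, 1.0], [0.1, 0.1], [0.9, 0.8]])
print(eid(Phi, 2))                            # → [0, 1]
```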
In Guo, Zhang, Zhang, and Zhou (2004) the fitness function described in Shi, Law, and Zhang (2000) is optimized by a GA. A binary coding is adopted: each chromosome is coded by a binary string whose length equals the number of all possible sensor positions. Thus, the crossover and mutation operators may not satisfy the constraint that the number of active sensors remains equal to the pre-defined value. For this reason an improved GA is proposed to guarantee that the offspring satisfy the constraint.
The previous papers are mainly developed in the civil engineering context: the damage is included as a reduction of the beam stiffness. In Worden and Burrows (2001) an approach more oriented to mechanical and aeronautical engineering is proposed. The optimal sensor locations are obtained by combining the neural network approach with a GA, simulated annealing (SA) and iterative insertion/deletion. The occurrence of damage is simulated by removing small groups of elements from the FE model. A neural network (NN) is trained and tested by simulating faults at different positions and of different severity; for each fault, the response in terms of mode shapes and curvatures is computed with the FE model. The input layer of the NN requires n_sen nodes (n_sen = number of pre-set possible sensor locations); the output is given by one value for each finite element measuring the level of damage. The optimal sensor locations are chosen in order to minimize the error given by the NN with reference to the testing data (Eq. (42)), expressed as a sum over the i-th output neurons and over the N_T training sets indexed by j. The optimization is tested with three different algorithms. In the first, one sensor at a time is removed from the fully-occupied sensor arrangement N and the (N − 1)-sensor set is chosen so as to minimize Eq. (42); the algorithm is repeated until the desired number of sensors is reached. The second and the third approach use the GA and the SA algorithm, respectively.
A different approach is proposed in Trendafilova, Heylen, and van Brussel (2001).
Here the optimal sensor arrangement is located on the basis of the best mutual distance, i.e. no information is lost and no information is doubled. The sensor selection tool is based on the average mutual information I_AB between two different sensor configurations A and B, a concept which is well known in the Information Theory context. If A and B are formed by acceleration signals taken at n discrete time points and at N sensors regularly distributed on the structure, the mutual information can be written as:

I(∆x) = ∑_i P(a_i, a_{i+∆x}) log [ P(a_i, a_{i+∆x}) / (P(a_i) P(a_{i+∆x})) ]

where ∆x is the difference of sensor density between A and B and P stands for probability density. When the two sets tend to be completely independent, I tends to zero as P(a_i, a_{i+∆x}) → P(a_i)P(a_{i+∆x}).
The best sensor distribution is chosen by minimizing Eq. (43). The procedure is capable of determining the best mutual distance, but it cannot handle complex structural geometries where it is difficult to respect the exact mutual distance. In fact, the authors show examples on simple rectangular plates where the probabilities are determined by following a stochastic pattern procedure described in previous papers.
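A histogram-based estimate of the average mutual information between two records can be sketched as follows (synthetic Gaussian signals stand in for measured accelerations):

```python
import numpy as np

# Average mutual information sketch between two sensor records, estimated
# from a 2D histogram of the signal amplitudes; the estimate tends to zero
# when P(a, b) factorizes into P(a)P(b), i.e. for independent records.
def mutual_information(a, b, bins=8):
    p_ab, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab /= p_ab.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    mask = p_ab > 0
    return float((p_ab[mask] * np.log(p_ab[mask] / np.outer(p_a, p_b)[mask])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
# A record shares far more information with itself than with an independent one.
print(mutual_information(x, x) > mutual_information(x, x[::-1]))   # True
```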
An interesting application of decision theory to the optimal sensor network is developed in Field and Grigoriu (2006). The procedure is presented as aimed at vehicle detection, classification and monitoring for the purpose of surveillance, but it may be extended to other fields of engineering. The vehicle traffic is modeled as a Poisson process and the sensor is assumed to measure an attribute Z_k of the k-th vehicle in order to split the vehicles into "good" (g) and "bad" (b). Two probability density functions are assumed with reference to Z and to the measurement error typically associated with sensor applications. The sensor optimization consists in imposing given maximum allowable false alarm rates (the sensor is activated but a g vehicle is passing) and miss rates (the sensor is not activated although a b vehicle is passing). If the measurement error is zero, the only design variable is the sensitivity level δ of the sensor that classifies a vehicle as either g or b; under non-zero measurement error there are two design variables, i.e. δ and the variance σ of the measurement error. Methods from decision theory are used to select the optimal design. The final procedure is tested by designing the best location of a given number of sensors to monitor the road network in a region of New York state.
The problem of positioning transducers for Lamb wave propagation aimed at damage detection is faced in Lee and Staszewski (2007). Their relative distance is particularly relevant for composite structures, where amplitude attenuation is significant. In the paper a full 2D Lamb wave propagation field is simulated in a damaged structure for a selected actuator position and all possible sensor locations. The procedure is tested on an aluminum plate with a rectangular damage slot and a real fatigue crack. The wave propagation is simulated with the aid of the local interaction simulation approach, introduced in 2D by the authors in previous papers. Two simple experimental tests are performed to validate the numerical simulation method. The actuator is fixed and located as depicted in Fig. 1 of Lee and Staszewski (2007), approximately at the bottom center of the plate.
All possible sensor positions are investigated by gathering two wave packages of the Lamb wave response. Fig. 2 of Lee and Staszewski (2007) shows the contour plot of the peak-to-peak amplitudes of the first wave package for a slot damage in the centre of the plate: the higher the amplitude, the better the sensor position. The amplitude contour plots for the undamaged plate are subtracted from the amplitude maps for the damaged plate; the resulting 2D amplitude maps show the amplitude change due to damage for each sensor location (x, y). The procedure is also tested with a crack positioned in the centre of the plate and two actuators generating the Lamb wave. As a general comment, it must be underlined that the sensor optimization procedure is strictly dependent on the position of the damage and cannot be easily extended to arbitrary damage locations.
So far the research on optimal sensor allocation for SHM has been mainly driven by the optimization of the area of coverage per sensor. On the other hand, the authors in Chang, Markmiller, Ihn, and Cheng (2007) introduce the probability of detection (POD) as a better measure for quantifying the reliability of a sensor network. This idea seems more successful, as it takes into account that damage may occur in any part of the structure and, thus, it is important to address the issue of uncertainty when handling the optimal sensor configuration. The POD, in conjunction with GAs, is tested to identify the optimal sensor network for detecting damage in a composite plate.
A similar probabilistic approach is developed in Azarbayejani, El-Osery, Choi, and Taha (2008). Numerous dynamic FE analyses are carried out with N_L different damage locations and N_d damage levels at each damage location. A certain number of damage features, supposed to be able to describe the damage state of the structure, is measured at each sensor. The potential locations of the sensors coincide with the nodes of the FE mesh. The damage features measured in the FE analyses are used as inputs to an Artificial Neural Network (ANN); the corresponding damage locations represent the ANN outputs. No hidden layers are adopted and the hyperbolic tangent sigmoid function is used as transfer function. The network weights h associated with N sensors are obtained in terms of the weights associated with a coarser sensor distribution by a finite impulse response interpolation function involving a discrete impulse function δ. A Probability Distribution Function (PDF) f(n) at each sensor location n can then be established. Such a PDF represents the probability of the sensor's ability to detect the damage. For a given number of sensors, the method allocates the sensors to the places that have the highest probability of detecting damage in the structure. The performance of the sensor distribution is tested by its POD, expressed by the authors as:

POD = N(Γ_mean ≥ Γ_α) / N_total

where N(Γ_mean ≥ Γ_α) is the number of simulations in which the sensor network under testing is capable of identifying the damage class correctly and N_total is the total number of simulations performed. In order to guarantee the operativity of the sensor network in case one or more sensors fail, additional sensors are used at some critical positions. Such positions are identified by performing a "leave one sensor out" analysis and measuring a significance factor S_i for each sensor: sensors with higher S_i are considered critical and useful to make redundant.
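The POD ratio and the leave-one-sensor-out analysis can be illustrated with a simplified boolean detection table, an assumption standing in for the Γ statistics of the paper:

```python
# Simplified network-POD sketch (boolean detection table assumed in place of
# the paper's Gamma statistics): detects[s][t] = 1 if sensor s flags damage
# in simulation t; the network succeeds when at least k_min sensors flag it.
def pod(detects, k_min=1):
    n_total = len(detects[0])
    hits = sum(1 for t in range(n_total)
               if sum(d[t] for d in detects) >= k_min)
    return hits / n_total

# "Leave one sensor out": the drop in POD when sensor s fails is its
# significance; sensors with a high drop are worth making redundant.
def significance(detects, k_min=1):
    full = pod(detects, k_min)
    return [round(full - pod(detects[:s] + detects[s + 1:], k_min), 6)
            for s in range(len(detects))]

detects = [[1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 1, 1]]
print(pod(detects), significance(detects))   # → 1.0 [0.0, 0.0, 0.25]
```

Here sensor 2 is the only one covering the last simulated damage case, so it is the critical sensor to duplicate.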
The procedure is interesting, even if it is highly dependent on the a priori knowledge of the assumed locations and levels of damage.
The very first contribution dealing with SHM optimal sensor placement fully cast in the theoretical framework of Bayes risk is given by Flynn and Todd (2010). Although there have been numerous contributions dealing with damage detection methods based on a Bayesian probabilistic approach (see for instance Sohn and Law (1997)), Flynn and Todd (2010) is the first work including the optimization of the sensor locations.
After providing a general form of the Bayes risk as the sum of the expected costs of each type of damage, the position of N actuator-sensor pairs is determined by optimizing either the global detection rate (for a given global false alarm rate) or the global false alarm rate (for a given global detection rate). Under the assumption that the damage state is of binary type, i.e. state m_0 equals "no damage" and state m_1 equals "damage present", the global detection rate P̄_D is the expected fraction of the structure's damaged regions that will be correctly identified as damaged. It is expressed in terms of the probabilities P(d_k1 | h_k1), where K is the total number of subregions whose union forms the entire structure, d_k1 is the event by which m_1 is decided to be the local damage state in region k and h_k1 is the event by which m_1 is the true local damage state in region k. On the other hand, the global false alarm rate P̄_FA is the expected fraction of the structure's undamaged regions that will be incorrectly identified as damaged, expressed in terms of the probabilities P(d_k1 | h_k0), where h_k0 is the event by which m_0 is the true local damage state in region k.
Both P̄_D and P̄_FA depend on a cost function γ[k], generally related to inspection and failure costs, and on the deflection coefficient d²[k] = s_k^T C_k^{-1} s_k, with s_k being the expected values of the (assumed) Gaussian-distributed SHM features extracted in region k and C_k their covariance matrix; γ[k] = γ is supposed to be constant over the entire structure. The performance of a given actuator-sensor arrangement has to be determined at each step of the optimization process. Such a performance is obtained by using Eqs. (50), where one equation is used to evaluate γ and the other provides the fitness function. The authors present the results for various demonstration cases: local false alarm and detection rate maps with a five-sensor optimal arrangement in a square plate in their Figs. 3-4, local damage, local detection and local detector rate maps with a sixteen-sensor optimal arrangement in a gusset plate in their Figs. 5-6 and, finally, a global detection rate analysis on a T-shaped plate in their Fig. 7.
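For Gaussian-distributed features the deflection coefficient alone fixes the trade-off between detection and false alarm rates. The sketch below uses the standard mean-shift relation P_D = Q(Q^{-1}(P_FA) − √d²), an assumption consistent with the deflection coefficient above rather than the exact form of Eqs. (50):

```python
from math import erfc, sqrt

# Gaussian-feature detection sketch: for a mean-shift detection problem with
# deflection coefficient d2, P_D = Q(Q^-1(P_FA) - sqrt(d2)), where Q is the
# Gaussian tail probability. This standard relation is an assumption here.
def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def Q_inv(p):
    lo, hi = -10.0, 10.0
    for _ in range(100):                  # bisection: Q is decreasing in x
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(mid) > p else (lo, mid)
    return 0.5 * (lo + hi)

def detection_rate(d2, p_fa):
    return Q(Q_inv(p_fa) - sqrt(d2))

# A larger deflection coefficient gives a higher detection rate at fixed P_FA.
print(detection_rate(9.0, 0.01) > detection_rate(1.0, 0.01))   # True
```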
It must be pointed out that the procedure requires setting the probability of damage P(h_k1), the expected value of the SHM feature s_k and its covariance C_k. These terms are established by the authors to present some demonstration cases. The formulation is interesting and would deserve to be implemented for more realistic damage scenarios, with the support of experimental and numerical analyses.

Optimal Sensors in Impact Identification
The issue of impact identification is strictly connected to damage identification. It is well known that the major cause of in-service damage to composite structures is impact, both with debris during take-off and landing operations and with ground support equipment. The problem is that the damage caused by the impact is often barely visible, especially when low-energy impact events such as tool drops occur. Thus, impact identification has direct relevance to the problem of damage detection in aerospace structures.
In Table 4 the most relevant contributions expressly dealing with the problem of determining the best number and location of sensors for impact identification are listed in chronological order. It must be underlined that a relatively small proportion of the effort has been concentrated on optimal sensor placement for impact identification. It is the authors' opinion that this is mainly due to the high complexity of the issue, which involves highly nonlinear dynamic phenomena with complex probabilistic characteristics.
On the other hand, identifying the best number and location of the sensors for impact identification is a fundamental issue both for economical reasons, i.e. fewer sensors mean less weight, and for safety reasons, i.e. the best locations mean a higher probability of successfully detecting the impact.

An impact identification procedure involving experimental data and ANNs is proposed in Staszewski, Worden, Wardle, and Tomlinson (2000) and Worden and Staszewski (2000). Two Multi-Layer Perceptron NNs, trained with the backpropagation learning rule, are implemented separately to locate the impact and to quantify the impact force amplitude. The training set, obtained experimentally, is expanded by corrupting it with different Gaussian noise vectors. A rectangular composite plate with four aluminium channels and 17 piezoceramic sensors is investigated. The impacts are simulated by an instrumented hammer and are kept below 0.1 N. Two features (to train the two NNs) are extracted from the data recorded by each sensor: the time after impact of the maximum response and the magnitude of the maximum response. The best sensor distribution is obtained by minimizing the error in the impact identification with the aid of a GA: the number of sensors is set a priori and the gene is given by a vector of integers, each specifying the position of one sensor. It must be underlined that the adopted GA does not prevent repeated sensors from occurring in the gene after the application of the crossover and mutation operations. Two different fitness functions are tested: the inverse of the percentage error in predicting the impact level over a testing set of the NN, and the fail-safe fitness parameter that measures the performance of a sensor distribution if one of the sensors fails.

Figure 5: Optimum three-sensor distribution with reference to the percentage error (left) and to the fail-safe fitness (right) (Staszewski, Worden, Wardle, and Tomlinson (2000)).
Results are presented in Fig. 5 with reference to three-sensor distributions. It is worth underlining that the two fitness functions generate conflicting solutions, and both differ from the one obtained by the exhaustive search depicted in Fig. 6.

Figure 6: Optimum sensor distribution obtained by exhaustive search (Staszewski, Worden, Wardle, and Tomlinson (2000)).
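The duplicate-sensor issue mentioned above is commonly handled with a repair operator applied after crossover and mutation; a sketch (hypothetical gene and candidate count) is:

```python
import random

# Repair-operator sketch for an integer-coded GA: after crossover/mutation a
# gene may contain repeated sensor positions; each duplicate is swapped for a
# randomly chosen unused candidate, keeping the sensor count fixed.
def repair(gene, n_candidates, rng):
    unused = [p for p in range(n_candidates) if p not in set(gene)]
    rng.shuffle(unused)
    seen, out = set(), []
    for pos in gene:
        if pos in seen:
            pos = unused.pop()            # replace the duplicate
        seen.add(pos)
        out.append(pos)
    return out

child = repair([3, 7, 7, 12], n_candidates=17, rng=random.Random(1))
print(len(set(child)) == len(child))      # True: no repeated sensors
```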
A general algorithm to cope with the sensor placement problem for target location under constraints of cost limitation and complete coverage is proposed in Lin and Chiu (2005). Such a paper is worth citing since the proposed procedure may easily be adapted to the impact identification problem. The efficiency of a sensor is measured on the basis of its coverage. The detection radius r_k of sensor k, i.e. the maximum distance at which an impact is detected by sensor k, is assumed to be known a priori. The field is said to be completely covered, as depicted in Fig. 7, if any grid point can be detected by at least one sensor.
A power vector v (collecting 0s and 1s) can be defined for each grid point to indicate which sensors cover it. A sensor field is said to be completely discriminated when each grid point is identified by a unique power vector.

Figure 7: A completely covered field (Lin and Chiu (2005)).
The sensor placement problem is then formulated as a combinatorial optimization problem in which the highest discrimination is sought; the objective (Eq. (51)) is written in terms of the Euclidean distance d_ij between sensor i and sensor j and of an arbitrarily large number K. Eq. (51) is subject to five constraints: the first three involve the relationship between r_k and d_ik, the fourth limits the total deployment cost of the sensors, and the fifth is the complete coverage limitation. The SA algorithm is used to solve the above combinatorial optimization problem. The merit of the procedure is its generality and its capacity to include cost limitations; its limit is the need to know the detection radius a priori.
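The grid/power-vector formulation can be sketched as follows, with an illustrative 2x2 grid and detection radius:

```python
# Coverage/discrimination sketch of the grid formulation: each grid point
# gets a binary "power vector" listing which sensors reach it (distance
# within the detection radius r). The field is completely covered if no
# vector is all-zero, and completely discriminated if all vectors differ.
def power_vectors(grid, sensors, r):
    return [tuple(1 if (gx - sx)**2 + (gy - sy)**2 <= r * r else 0
                  for (sx, sy) in sensors)
            for (gx, gy) in grid]

def covered_and_discriminated(grid, sensors, r):
    vs = power_vectors(grid, sensors, r)
    return all(any(v) for v in vs), len(set(vs)) == len(vs)

grid = [(0, 0), (1, 0), (0, 1), (1, 1)]
# Two sensors at opposite corners: every point is covered, but (1, 0) and
# (0, 1) share the same power vector, so the field is not discriminated.
print(covered_and_discriminated(grid, [(0, 0), (1, 1)], 1.2))   # → (True, False)
```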
In Markmiller and Chang (2010) the POD is taken as the measure to evaluate the performance of a sensor deployment. The POD for the entire network (Eq. (52)) is defined in terms of the total number m of impacts, the number n of sensors and the quantities POD_ij given for each possible sensor location x_i and impact force at x_j; in Eq. (52) k provides the total number of sensors that detect the impact, i.e. with POD_ij = 1. The POD for each sensor and for diffused possible impact forces is evaluated with the aid of the FEM. The optimum sensor network is finally found by maximizing POD_network over the sensor locations for all impact forces F_j ≥ F_min.

GAs are adopted to solve the above optimization problem for a given number of sensors. The whole procedure is repeated changing the number of sensors. Examples are given with reference to two stiffened composite panels.

Copyright © 2013 Tech Science Press. SDHM, vol.9, no.4, pp.287-323, 2013.
It must be pointed out that the procedure relies on a kind of detection radius; thus, it has some points in common with the general formulation developed in Lin and Chiu (2005). The detection radius, here embedded in the POD, is the key point and needs to be determined numerically for each example. Furthermore, the definition of the strain threshold ε_min needs special attention, and it is not simple to define it correctly.
An interesting formulation taking into account the probabilistic behavior of the impact detection error is developed in Mallardo, Aliabadi, and Khodaei (2012). The procedure is tested with reference to a composite plate, stiffened in both directions, on which 45 candidate sensors are located. The best sensor deployment is obtained by minimizing a suitable fitness function. An ANN is built by carrying out nonlinear explicit time-domain FE simulations for different impact locations and energies. The parametric analysis of the error provided by the ANN with reference to the testing set and for different sensor networks shows that the associated PDF (and consequently the associated Cumulative Distribution Function CDF, see Fig. 8), independently of the number of cycles, tends to be either of lognormal type or of Weibull type, with CDFs:

F(x) = Φ[(ln x − µ_l)/σ_l]   (lognormal),   F(x) = 1 − exp[−(x/λ)^k]   (Weibull)

where Φ(·) denotes the standard normal CDF and the governing parameters (i.e. µ_l and σ_l, or k and λ) can be evaluated with a low number of cycles. On the basis of the above probabilistic behavior, the fitness function is assumed to be either the inverse of the probability associated with a pre-assigned error or the error related to a pre-assigned probability. The optimization is performed with the aid of a GA modified by the authors in order to deal with integer genes and to avoid repeated sensors under the crossover and mutation operations.

Figure 9: Best three-sensor (left) and five-sensor (right) configurations (Mallardo, Aliabadi, and Khodaei (2012)).
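The two fitness-function options can be sketched with the Weibull branch; the shape and scale parameters below are hypothetical fitted values:

```python
from math import exp, log

# Fitness-function sketch for the probabilistic approach: assuming the ANN
# testing error is Weibull distributed (shape k, scale lam, hypothetical
# fitted values below), one can evaluate either the probability associated
# with a pre-assigned error, or the error at a pre-assigned probability.
def weibull_cdf(x, k, lam):
    return 1.0 - exp(-(x / lam) ** k)

def weibull_quantile(p, k, lam):          # inverse CDF
    return lam * (-log(1.0 - p)) ** (1.0 / k)

k, lam = 2.0, 0.05                        # hypothetical fitted parameters
p = weibull_cdf(0.08, k, lam)             # probability of error <= 0.08
print(round(weibull_quantile(p, k, lam), 6))   # → 0.08 (round trip)
```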
Numerical results are provided with reference to various sensor numbers (see Fig. 9) and some of them are validated by an exhaustive search analysis (see Fig. 10).

Conclusions
In this paper a thorough description of the state of the art in sensor optimization aimed at system identification, damage identification and impact identification has been provided. It is the authors' opinion that these topics need longer testing periods in an industrial environment. Furthermore, the probabilistic issue has not been fully explored in the procedures developed in the damage identification and impact identification contexts.

Figure 10: In blue the GA solution (Mallardo, Aliabadi, and Khodaei (2012)).
The topic that most deserves further development is surely impact identification, as very few contributions have been reported in the literature. The threshold above which the arrival time is extracted requires more investigation, since it has too much influence on the performance of the procedure. Furthermore, more work on coupling experimental and numerical tests is necessary to improve the performance of the optimization procedures. Finally, there is still a lack of tests in actual flight operations.
Sensor optimization in damage identification still lacks generality, as either it is built with reference to beam structures or the damage phenomenon is included in a very simple way.