Deep learning for automatic assessment of breathing-debonds in stiffened composite panels using non-linear guided wave signals



Introduction
Lightweight fibre-reinforced composites have gained popularity in the aeronautic, aerospace, marine, infrastructure and automobile engineering sectors, owing to their high stiffness-to-weight ratio, fire resistance and acoustic damping, amongst other properties [1][2][3][4][5]. Stiffened composites are often used for the lightweight construction of these engineering structures [6,7]. Various types of stiffeners (e.g., cross-sections of I, L, T) are bonded to the base plate of these structures. At the stiffener-baseplate bond interface, breathing-type debonds can occur owing to cyclic loading, improper handling, impact and ageing [8,9]. While in service, these debonds can grow further and lead to catastrophic structural failure if not identified beforehand [10,11]. Hence, detecting and characterising these hidden damages in stiffened composites is important.
In [12][13][14][15], it was shown that structural health monitoring (SHM) methods based on guided wave (GW) propagation have the potential for the accurate detection of hidden defects in complex composites with multiple layers. Such linear and nonlinear GW propagation-based SHM techniques offer long-range monitoring with high sensitivity to minor defects/discontinuities in the layers [16][17][18][19]. These SHM methods often involve the application of lightweight and economic broadband transducers, such as a network of surface-mounted piezoelectric lead zirconate titanate transducers (PZTs) [14,15].
The breathing debonds in composites are complex and often difficult to identify using traditional SHM methods [20,21]. A breathing-type debond can behave either as an open (i.e., debonded) or a closed (i.e., undamaged) region, owing to the 'breathing behaviour' that occurs under dynamic wave loading [21,22]. The breathing phenomenon of such debonds generates nonlinear ultrasonic waves that involve higher harmonics [23,24], sub-harmonics [25], mixed frequency responses [26] and non-linear resonance [27]. Several studies [24,28,29] have demonstrated that the nonlinear response features are sensitive to contact-type damage (e.g., breathing cracks, kissing delamination) and are not much influenced by operational conditions. Many authors [30,31] have analysed the generation of higher harmonics due to contact nonlinearity (often occurring for breathing-type damage) in structures. In [23], the contact-acoustic nonlinearity (CAN) that occurs owing to breathing-type cracks was investigated using signals from a PZT network. Nonlinear elastic wave signal-based SHM techniques have also been presented by other researchers [32,33].
In recent times, machine learning approaches based on structural response data are gaining popularity for the autonomous condition monitoring of structures [34][35][36][37][38]. Deep learning algorithms using the Convolutional Neural Network (CNN) have demonstrated their capability in the image-based characterization of structural conditions [34,35]. The CNN algorithm can effectively handle grid-like inputs from images to generate similar image features from local regions of alike patterns [34,39]. Image-based deep learning involves large datasets that can be produced synthetically by adding different levels of noise (e.g., zero-mean Gaussian noise) to the actual images, which is known as 'data augmentation' [40][41][42]. Recent studies [42,43] have presented deep learning based SHM methods for the autonomous assessment of static damage/delamination in laminated composites. The literature review revealed that no investigation has been undertaken on the identification of breathing-type damage-induced nonlinear ultrasonic wave features using an automated deep learning approach. This paper aims to address this important research gap with a deep learning-based SHM strategy.
A CNN-based SHM strategy is presented in this paper for the automatic characterization of carbon fibre-reinforced stiffened composite panels (SCPs) with and without baseplate-stiffener debond, using both the raw GW signals and the filtered time-domain higher-harmonic signals. A series of laboratory experiments and finite element based numerical simulations have been carried out on multiple SCP samples. The numerical and experimental GW signals are translated to RGB (red, green, blue) time-frequency scalogram images using the Continuous Wavelet Transform (CWT). These scalograms are supplied as input to the designed deep learning architecture to perform the training/validation/testing operations. This paper is structured as follows. Section 2 presents the details of the laboratory experiments on SCP samples with breathing debonds. In Section 3, the details of the finite element modeling for the nonlinear numerical simulations of GW propagation in SCPs are described. Section 4 describes the details of the deep learning architecture for the SHM of SCPs. The experimental results, findings and discussions around them are presented in Section 5. The main conclusions from this study and the scope of future research in this area are detailed in Section 6.

Experimental procedure
A series of ultrasonic GW-based experiments are conducted in the laboratory on SCP samples: (i) an undamaged (UD) SCP, (ii) an SCP with a DSt2 debond and (iii) an SCP with a DSt1 debond.
Fig. 1 shows the laboratory-based setup for the SCP sample with a DSt2 debond, with the PZT network of 5 sensors (S1, S2, S3, S4, S5) and an actuator 'A'. The excitation signal was selected based on trials over a range of carrier frequencies and cycle counts. In the process, the 7-cycle tone-burst sine signal produced the most prominent higher harmonics in the frequency domain. A suitable actuation frequency was identified by actuating a series of 7-cycle sine waves with different carrier frequencies through the actuator 'A', with the propagated signals collected at the sensor 'S3' (ref. Fig. 1). Thus, a frequency-response plot was generated, as shown in Fig. 2(a). The plot shows higher response magnitudes around 150 kHz. Thus, a Hanning-window modulated 7-cycle 150 kHz sine pulse, as shown in Fig. 2(b), is selected as the excitation signal for the experiments and simulations. Fig. 2(c) represents the Fast Fourier transform (FFT) of the actuator signal in the frequency domain. The actuator PZT introduces the excitation (Fig. 2(b)) and generates the GW propagation in the SCP. These GWs are registered at each of the PZT sensors (S1, S2, S3, S4 and S5) in the network.
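The Hanning-windowed tone burst described above can be generated in a few lines. The sketch below assumes a 10 MHz sampling rate (not stated in the paper) purely for illustration; the carrier frequency and cycle count are the paper's values.

```python
import numpy as np

def tone_burst(fc=150e3, cycles=7, fs=10e6):
    """Hanning-windowed sine tone burst: 7 cycles at 150 kHz, as in the paper.
    fs is an assumed sampling rate, not specified in the source."""
    t = np.arange(0, cycles / fc, 1 / fs)          # burst duration = cycles / fc
    s = np.hanning(t.size) * np.sin(2 * np.pi * fc * t)
    return t, s

t, s = tone_burst()
```

Taking an FFT of `s` reproduces the narrowband spectrum centred at the 150 kHz carrier, as illustrated in Fig. 2(c).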

Finite element modeling
The numerical simulation models are validated for all 3 experimental (baseline) cases and then extended for different debond locations and variable actuation positions to generate a large dataset for the deep learning model, as the experiments could be conducted only for (i) a UD SCP, (ii) an SCP with a DSt2 debond and (iii) an SCP with a DSt1 debond. 3D nonlinear simulation of ultrasonic GW propagation in SCPs with contact-acoustic nonlinearity, owing to the breathing-debond phenomenon, is challenging. In this study, finite element simulation of GW propagation in SCP samples (450 × 450 × 1 mm) with PZT actuators/sensors (thickness = 0.40 mm, diameter = 10.00 mm) has been carried out using the ABAQUS implicit (for the PZT modeling) and explicit (for the SCP) solvers [18]. The explicit and implicit solutions are linked by assigning the 'standard-explicit co-simulation' [44]. In the debonded SCP models, zero-volume (approx.) debond regions (30 × 30 mm) are modeled by releasing the interfacial node-to-node connections. The SCPs are modeled with the 8-node brick element (0.5 × 0.5 × 0.2 mm), 'C3D8I'. These elements offer the advantages of hourglass control, elimination of shear locking and reduced volumetric locking. Each node has 3 translational degrees of freedom. The homogenised material properties of the quasi-isotropic CFCL are considered based on the empirical formulations in [45] and are given in Table 1.
In the numerical simulation, 15 DSt1-type SCP models with a 30 mm debond (ref. Fig. 3(a)) are prepared with slightly shifted actuator positions (±3-5 mm with respect to the experimental 'A' position). In the SCP models, the contact acoustic nonlinearity is introduced at the baseplate and stiffener debond surfaces by assigning surface-to-surface frictionless contact to avoid any penetration between the debond-region surfaces during the wave propagation loading. A direct enforcement formulation that uses the Lagrange multiplier is assigned to maintain the pressure and penetration conditions. The direct enforcement condition for the contact virtual work contribution, ∂Π_Con, is obtained using the variational formulation [47] as ∂Π_Con = ∂f·x + f·∂x, where 'f' represents the contact pressure and 'x' represents the overclosure. For hard contact, the contact pressure between the two surfaces is zero when they separate (x < 0), and the overclosure is zero when they are in contact. Mass-proportional and stiffness-proportional Rayleigh damping coefficients have been considered for the numerical models. A time increment of 5 × 10⁻⁸ s per time-step was selected for the implicit and explicit solvers. In all numerical models, the actuator PZT 'A' generates a 150 kHz 7-cycle sine excitation pulse (Fig. 2(b)) and the ultrasonic GWs are collected at each of the five sensor locations in the PZT network (Fig. 3(a)).

SHM strategy based on deep learning
The SHM strategy presented here uses the GW time-history signals from the sensors and produces time-frequency scalogram images of these signals by applying the CWT based on the Morse wavelet [48]. The CWT is a convolution of the input dataset with a set of functions generated from the mother wavelet. The convolution can be computed using an FFT algorithm. Generally, the output is a real-valued function, unless a complex mother wavelet is used, in which case the CWT is complex-valued. Generalized Morse wavelets are a family of complex-valued analytic wavelets [48] whose Fourier transform is supported only on the positive real axis. The CWT is useful for studying modulated signals with time-varying frequency and amplitude, as well as for examining localized discontinuities. The CWT produces RGB scalograms with time-frequency spectra. The extracted features of the dominant frequencies and the related scales from these plots are utilised for training and validation of a neural network signal classifier. Here, the generated scalograms are supplied as input to the deep learning network to train it and to characterize the 3 different SCP classes: (i) UD, (ii) DSt2, and (iii) DSt1.
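To make the scalogram step concrete, the following is a minimal CWT sketch. It uses a complex Morlet wavelet as a convenient numpy-only stand-in for the generalized Morse wavelets used in the paper (MATLAB's `cwt` implements Morse directly); the sampling rate and frequency grid are illustrative assumptions.

```python
import numpy as np

def cwt_scalogram(signal, fs, freqs, w0=6.0):
    """Magnitude scalogram via direct convolution with complex Morlet
    wavelets (a stand-in for the paper's generalized Morse wavelets)."""
    n = signal.size
    out = np.empty((freqs.size, n))
    for i, f in enumerate(freqs):
        scale = w0 * fs / (2 * np.pi * f)       # scale giving centre frequency f
        m = int(min(10 * scale, n))             # truncated wavelet support
        tau = (np.arange(m) - m // 2) / scale
        wav = np.exp(1j * w0 * tau) * np.exp(-tau**2 / 2) / np.sqrt(scale)
        out[i] = np.abs(np.convolve(signal, np.conj(wav), mode="same"))
    return out

fs = 10e6                                       # assumed sampling rate
t = np.arange(0, 1e-4, 1 / fs)
sig = np.sin(2 * np.pi * 150e3 * t)             # toy single-tone "GW" signal
S = cwt_scalogram(sig, fs, np.linspace(50e3, 400e3, 50))
```

Rendering `S` as an RGB image (e.g., with a colormap) yields the scalogram supplied to the CNN; for the toy tone, the energy ridge sits at the 150 kHz row.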
Fig. 4 represents a schema of the SHM strategy that uses CNN-based deep learning for the automatic assessment of breathing debonds in SCPs.

CNN-based deep learning architecture
A block diagram of the designed CNN, consisting of 6 different layers, is presented in Fig. 5. The functions of each CNN layer for the current problem are discussed here in brief; a detailed description is given in [49,50]. In the CNN algorithm, the input scalograms are converted to RGB pixels and supplied to the network. In the current problem, each of the red, green and blue (RGB) channels has [292 × 219 pixels]; separate convolution kernels are assigned for each pixel matrix and a bias is added after the convolution. The outputs from these three channels (RGB) are then combined to obtain the output of this layer. Zero padding is applied after the convolution to avoid any information loss due to the size reduction in the successive layers. The values of the bias and convolution kernel weights are updated through back-propagation.
In this problem, the convolution kernel (i.e., filter) size is 8 × 8 pixels and the number of kernels is selected as 20 (trial based) to capture the valuable information. After the convolution process, an activation function, the ReLU (rectified linear unit) given in Eq. (3), i.e., f(x) = max(0, x), is applied to map all negative element values to zero.
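The convolution, ReLU and pooling operations described in this section can be sketched in plain numpy. This is an illustrative single-channel toy (the real network uses 20 kernels of 8 × 8 over three 292 × 219 RGB channels); the 32 × 32 input and random kernel are assumptions.

```python
import numpy as np

def relu(x):
    # Eq. (3): f(x) = max(0, x) maps negative values to zero
    return np.maximum(0.0, x)

def conv2d_valid(img, kernel, bias=0.0):
    """Naive 2-D convolution of one channel with one kernel plus a bias."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel) + bias
    return out

def maxpool(x, p=2):
    """[2 x 2] max-pooling: keep the maximum of each non-overlapping window."""
    H, W = x.shape[0] // p * p, x.shape[1] // p * p
    return x[:H, :W].reshape(H // p, p, W // p, p).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))             # toy single-channel input
kernel = rng.standard_normal((8, 8))            # one 8 x 8 kernel, as in the paper
fmap = maxpool(relu(conv2d_valid(img, kernel)))  # 25x25 -> ReLU -> 12x12
```

In the actual network, zero padding preserves the spatial size after convolution; the 'valid' convolution here is just the simplest form to show the arithmetic.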
RELU is chosen over the 'sigmoid' and 'tanh' functions owing to its higher speed and accuracy [46]. After the RELU operation, down-sampling is performed by the max-pooling layer to reduce the dimensionality of the feature maps. Max-pooling picks the maximum value within a [2 × 2] sliding window. In order to learn large-feature patterns, the selected features from the last max-pooling layer are combined in the fully connected layer (FCL). The classification stage commences from this FCL, which multiplies the inputs from the pooling layer by a weight matrix and adds a bias vector. In the current problem, the FCL output size is equal to the number of dataset classes. Training this deep learning network is an optimization operation that computes loss-function gradients at every iteration to minimize the loss for the assigned training data with updated weight components. The image pixels, weights and biases are the loss function's inputs. The loss function considered here is the 'softmax' (multinomial logistic) loss [49]. The posterior probability Q_i of the true class m_i for a feature sample 'S' is given as Q_i = exp(u_i) / Σ_{j=1}^{n} exp(u_j), where the score vector u(u_1, u_2, ⋯, u_n), with u_i = a_i·S + a_i0, maps the preceding features to values between 0 and 1 with a sum equal to 1, a_i0 is the bias and a_i is the vector of weights that are updated by backpropagation. A loss function, l(a), is defined as l(a) = − Σ_{j=1}^{n} U_gj log(Q_j), where U_g(U_g1, U_g2, ⋯, U_gn) represents the ground-truth vector [49]. The ground truth offers checking results corresponding to the real features. The loss function in Eq.
(5) estimates the difference between the current and targeted models. The value of the ground truth (U) for the target neuron is 1, whereas for the rest of the neurons it is 0, and the loss is reduced by updating the 'weights' and the 'bias'. The updated weight values are obtained as a_new = a − R (∂l/∂a), where R represents a user-defined learning rate, chosen as a trade-off between computational time and accuracy. A cross-entropy loss, α, is estimated by the classification layer with m mutually exclusive classes for multiclass classification. The 'softmax' layer followed by the classification layer connects the final FCL, and the loss function is the cross-entropy for a 1-of-m coding approach. Equation (7) represents the cross-entropy loss (logarithmic loss) as α = − Σ_{i=1}^{s} Σ_{j=1}^{m} p_ij ln(y_ij), where the number of samples is 's', the number of classes is 'm', p_ij is 1 if the i-th sample belongs to the j-th class and 0 otherwise, and y_ij is the output for the i-th sample and j-th class from the 'softmax' layer [50].
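The softmax posterior, cross-entropy loss and weight update described above can be demonstrated numerically for one gradient step of a 3-class FCL (classes UD, DSt2, DSt1). The feature vector, initial weights and learning rate are illustrative assumptions, not the paper's trained values.

```python
import numpy as np

def softmax(u):
    # Q_i = exp(u_i) / sum_j exp(u_j): outputs lie in (0, 1) and sum to 1
    e = np.exp(u - u.max())          # shift for numerical stability
    return e / e.sum()

def cross_entropy(Q, U_g):
    # l = -sum_j U_gj * log(Q_j), with one-hot ground-truth vector U_g
    return -np.sum(U_g * np.log(Q))

rng = np.random.default_rng(1)
S = rng.standard_normal(5)           # feature sample from the pooling layer (toy)
a = rng.standard_normal((3, 5))      # weight vectors a_i for the 3 classes
a0 = np.zeros(3)                     # biases a_i0
U_g = np.array([0.0, 1.0, 0.0])      # ground truth: class DSt2

Q = softmax(a @ S + a0)
loss_before = cross_entropy(Q, U_g)

grad = np.outer(Q - U_g, S)          # dl/da for softmax + cross-entropy
R = 0.1                              # learning rate
a_new = a - R * grad                 # weight update: a <- a - R * dl/da
loss_after = cross_entropy(softmax(a_new @ S + a0), U_g)
```

A single step along the negative gradient lowers the loss, which is exactly what the iterative back-propagation training repeats over batches.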

Higher harmonic extraction from the raw GW signals
The 2nd harmonic signals, with a central frequency of approximately double the fundamental excitation frequency (i.e., around 300 kHz versus 150 kHz), are extracted from the raw signals by applying a designed filter. The filtered signal contains the debond information, and it is important to preserve the signal's shape while filtering. Hence, an FIR (Finite Impulse Response) filter with a linear phase is chosen to ensure a distortion-free shape without much group delay [51][52][53]. A bandpass FIR equiripple filter is designed to separate the 2nd harmonic signals, which peak around 310 kHz. The frequency response of the equiripple filter is shown in Fig. 6(a) and a typical second-harmonic (HH1) extraction from the raw GWs is given in Fig. 6(b).
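A linear-phase bandpass FIR extraction can be sketched as below. For a self-contained numpy example, a windowed-sinc design is used as a stand-in for the paper's equiripple design (an equiripple filter would typically be obtained with the Parks-McClellan algorithm, e.g., `scipy.signal.remez`); the sampling rate, tap count and toy two-tone signal are assumptions.

```python
import numpy as np

def bandpass_fir(lo, hi, fs, numtaps=801):
    """Linear-phase bandpass FIR via windowed-sinc (a simple stand-in for
    the paper's equiripple design). Passband targets the 2nd harmonic."""
    n = np.arange(numtaps) - (numtaps - 1) / 2   # symmetric taps -> linear phase
    def lowpass(fc):
        return 2 * fc / fs * np.sinc(2 * fc / fs * n)
    # bandpass = difference of two lowpass prototypes, then window
    return (lowpass(hi) - lowpass(lo)) * np.hamming(numtaps)

fs = 10e6                                        # assumed sampling rate
h = bandpass_fir(260e3, 360e3, fs)               # HH1 band, 260-360 kHz
t = np.arange(4096) / fs
mixed = np.sin(2*np.pi*150e3*t) + 0.2*np.sin(2*np.pi*310e3*t)  # fundamental + HH1
hh1 = np.convolve(mixed, h, mode="same")         # filtered second-harmonic signal
```

Because the taps are symmetric, the filter has exactly linear phase, so the extracted HH1 waveform shape is preserved up to a constant group delay.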

Table 2
The confusion matrix shows the 10-fold average test performance of the trained network.
The CWT scalograms are resized to [292 × 219 pixels] to reduce the computational cost. Further, zero-mean Gaussian noise, with a level randomly selected within the range of 10 % to 30 %, was added to the resized CWT scalograms for image augmentation, as schematically shown in Fig. 4. The Gaussian noise has the probability density function of a normal distribution, and for image augmentation a random Gaussian function is added to the image function to introduce this noise. The zero mean of the noise contributes no net disturbance, as the positive and negative noise contributions are equal on average and cancel out. A total of 4500 scalograms are obtained, of which 80 % (i.e., 3600) are considered for training. Further, the trained network's performance is evaluated for the experimental test datasets corresponding to a UD SCP, an SCP with a DSt2 debond and an SCP with a DSt1 debond. The Table 3 confusion matrix shows an overall 85.6 % test accuracy (10-fold average) for the experimental data.
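The augmentation step can be sketched as below. Interpreting the 10-30 % level as the noise standard deviation relative to the image's dynamic range is an assumption (the paper does not define the percentage precisely); the pixel range [0, 1] is also assumed.

```python
import numpy as np

def augment(img, rng, low=0.10, high=0.30):
    """Zero-mean Gaussian noise augmentation: the noise std is a random
    10-30 % of the image's dynamic range (interpretation assumed)."""
    level = rng.uniform(low, high)
    noisy = img + rng.normal(0.0, level * np.ptp(img), img.shape)
    return np.clip(noisy, 0.0, 1.0)   # keep pixels in a valid [0, 1] range

rng = np.random.default_rng(42)
scalogram = rng.random((219, 292))    # stand-in single scalogram channel
batch = np.stack([augment(scalogram, rng) for _ in range(5)])
```

Applying this repeatedly to each baseline scalogram is how a small set of measured/simulated signals expands into the 4500-image training set.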
The deep learning model was trained with the simulation dataset, and its high test accuracy on the available experimental data (never used for training) justifies the SHM potential of the proposed deep-SHM model and the success of the simulations. The trained network is then tested on 2 entirely new debond datasets corresponding to a DSt1 and a DSt2 debond. The 10-fold average accuracy (92.85 %) in Table 4 indicates the robustness of the deep learning architecture for breathing-debond monitoring. The NaN % entries correspond to zero input in the UD channel.

SHM of SCPs using filtered HH1 signals
The designed neural network is trained with time-frequency scalograms of HH1 (260-360 kHz) that are extracted from the raw ultrasonic signals (ref. Fig. 6) by using the designed bandpass filter described in sub-section 4.2. Fig. 12(a) represents a typical time-history signal of the extracted HH1 and its resized time-frequency scalogram image is presented in Fig. 12(b).
The tenfold training, validation and test are performed with the HH1-signal datasets, and a training/validation performance (i.e., 1 of the 10 results) of the network is presented in Fig. 13, showing a very high accuracy (99 %). The 10-fold average training-validation accuracy was found to be 98.99 % = {(98.64 + 98.80 + 98.98 + 99.11 + 99.14 + 99.00 + 99.00 + 99.12 + 99.14 + 99.00)/10}. This accuracy is significantly better than the results using the raw GW dataset. Further, the 10-fold training-validation results with the raw data and the HH1 data are compared in Fig. 14, indicating a better accuracy for the extracted HH1 signals than for the raw data.
The standard deviations of the 10-fold training-validation accuracies for the raw data (SD_RD) and the HH1 data (SD_HH1) were calculated as SD = √( Σ_{i=1}^{n} (x_i − μ)² / n ), where x_i are the fold accuracies.

Table 3
Test performance of the trained network for the experimental test dataset.
For the HH1 data, x = [98.64, 98.80, 98.98, 99.11, 99.14, 99.00, 99.00, 99.12, 99.14, 99.00]; 'μ' is the mean accuracy using the HH1 data = 98.993 and 'n' is the number of trials = 10. The Table 5 confusion chart represents the 10-fold average test performance (97.3 % accuracy) of the trained networks using only 100 RGB scalograms per class. Fig. 15 shows a comparison between the test performance using the raw data (ref. Table 2) and the extracted HH1 signal data (ref. Table 5). The HH1 signal data-based results show better accuracy than the raw signal data-based results for each of the SCP classes.
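The mean and standard deviation quoted for the HH1 accuracies can be checked directly from the listed fold values (the population form with n = 10 is assumed, matching the SD formula stated above).

```python
import numpy as np

# 10-fold training/validation accuracies (%) with the HH1 data, from the text
acc_hh1 = np.array([98.64, 98.80, 98.98, 99.11, 99.14,
                    99.00, 99.00, 99.12, 99.14, 99.00])

mu = acc_hh1.mean()                                            # 98.993
sd_hh1 = np.sqrt(np.sum((acc_hh1 - mu) ** 2) / acc_hh1.size)   # population SD, n = 10
```

The computed mean reproduces the 98.993 quoted in the text; the small SD (about 0.15) is what Fig. 14 visualises as the tighter spread of the HH1 results relative to the raw-data results.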
The damage-region classification potential of the network is evaluated for the experimental HH1 scalograms corresponding to the UD, DSt1 and DSt2 cases. The test results in the Table 6 confusion matrix show an overall accuracy of 90.4 % for the network, which is again higher than the raw GW data-based test results (i.e., 85.6 %).
Further, a class-wise comparison between the experimental raw and HH1 data-based test performance is presented in Fig. 16. The comparison shows higher test accuracies for the HH1 data-based results in all the SCP classes. The SHM potential of the designed neural network is further evaluated for the HH1 signals from the DSt1 and DSt2 datasets (never used for training). The 10-fold average test performance in Table 7 shows an overall 95.9 % accuracy, reflecting the robustness of the SHM strategy. The NaN % entries have again appeared in the confusion matrix owing to there being no input in the UD channel.
Fig. 17 represents a class-wise comparison between the HH1 signal and raw-GW image-based test results (10-fold av.) for DSt1 and DSt2, which again demonstrates the better performance of the extracted nonlinear higher-harmonic signals for both classes.

Conclusions
A deep learning based SHM strategy is presented for the assessment of SCPs using nonlinear ultrasonic signals. This study is amongst the first to propose a damage identification methodology using a deep learning approach for complex stiffened composite panels with nonlinear ultrasonic signals. The SHM strategy has the advantage of being physics-informed and data-driven by experimental guided wave data. This work has proven the potential of the deep learning-based SHM strategy for classifying breathing-debond regions in stiffened composite structures. The monitoring strategy provides the foundation for an automated approach in which the essential signal features are extracted (via filtering and transformation) and incorporated into the deep learning models. Thus, the potential of the proposed methodology can be explored for the assessment of a wide range of damage types with acoustic fingerprints recorded at the sensor network, and the trained model can be deployed for real-time or online damage identification in operational complex lightweight structures.
The main conclusions can be listed as follows.
Table 5
10-fold average test performance using the HH1 data.

Table 6
The average test performance using the experimental HH1 signals.

Table 7
Test performance (10-fold av.) with HH1 data from the new debond.

Fig. 4 .
Fig. 4. Schema of the SHM strategy for deep learning-based classification of SCPs.

Fig. 5 .
Fig. 5. Designed architecture of the CNN for structural classification.

Fig. 6 .
Fig. 6. (a) Frequency response of the designed filter and (b) an extracted HH1 from the raw GWs.
GW signals for the different cases are obtained from the numerical simulations and experiments. Typical experimental GW signals are shown in Fig. 9.

Fig. 9 .
Fig. 9. Waveform plots corresponding to (a) experimental UD case, (b) experimental DSt2 case, (c) experimental DSt1 case and (d) DSt1 case at a later stage where the breathing-debond is acting as the ultrasonic source.
Of the 4500 scalograms, 80 % (i.e., 3600) are considered for training, 10 % (i.e., 450) are used for validation of the network and the remaining 10 % (i.e., 450) are used for testing of the trained network. The deep learning network described in Section 4 is trained/validated/tested using datasets consisting of the resized scalogram images corresponding to the UD, DSt2 and DSt1 classes. A 10-fold training/validation and test procedure has been conducted to achieve stable performance of the deep learning network. The 4500 images corresponding to the classes UD, DSt2 and DSt1 are distributed into 10 parts with 450 images per part. Stage 1: part-1 for testing and part-2 to part-10 for training/validation. Stage 2: part-1, part-3 to part-10 for training/validation and part-2 for testing. Likewise, Stages 3 to 10 are performed to improve confidence in the training progress. The validation accuracy and loss curves show some oscillations during the early epochs, due to the limited number of images per batch. After every batch passes through the CNN, updated weight values are applied with a proportion of all the classes. In order to avoid overfitting, 10 epochs and 31 iterations per epoch are considered to train the network. A typical training-validation result in Fig. 11 indicates the validation accuracy and loss. The 10-fold training and validation results show a 97.
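The 10-stage rotation described above can be sketched as a simple index partition: 4500 scalograms are split into 10 parts of 450, and each stage holds out a different part for testing while the remaining nine serve for training/validation.

```python
# 10-fold rotation over the 4500-image dataset, as described in the text
n_images, n_parts = 4500, 10
part_size = n_images // n_parts                     # 450 images per part
parts = [list(range(i * part_size, (i + 1) * part_size)) for i in range(n_parts)]

stages = []
for test_part in range(n_parts):
    # Stage k: part-k is the test set, all other parts are training/validation
    train_val = [idx for p in range(n_parts) if p != test_part for idx in parts[p]]
    stages.append((train_val, parts[test_part]))
```

Each image therefore appears in the test set of exactly one stage, so the 10-fold average test accuracy is computed over the full dataset.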

Fig. 13 .
Fig. 13. Training and validation performance of the deep learning network with HH1 data.

Fig. 16 .
Fig. 16. 10-fold average test performance per class using raw data and HH1 data.

Fig. 17 .
Fig. 17. 10-fold av. test results for new DSt1 and DSt2 cases with raw data and their HH1 data.

Table 1
Material properties of the CFCL.