AE-RTISNet: Aeronautics Engine Radiographic Testing Inspection System Net with Improved Fast Region-based Convolutional Neural Networks Framework

Abstract: To ensure safety in aircraft flight, we apply deep learning methods of nondestructive examination with multiple defect detection paradigms to X-ray image inspection. The proposed model, driven by Fast Region-based Convolutional Neural Networks (Fast R-CNN), seeks to augment and improve existing automated Non-Destructive Testing (NDT) diagnosis. Within the context of X-ray screening, the limited number and insufficient variety of X-ray aeronautics engine defect data samples pose a further problem for training a model that performs accurately across multiple detections. To overcome this issue, we employ the deep learning paradigm of transfer learning, tackling both single and multiple detection. Overall, the achieved results exceed 90% accuracy when the AE-RTISNet is retrained on eight types of defect detection. The Caffe software framework is used to build the networks that track detections over multiple Fast R-CNN stages. We consider that the AE-RTISNet provides better results than the more traditional multiple Fast R-CNN approaches, and it is simpler to translate to C++ code and install on the Jetson™ TX2 embedded computer. With the LMDB format, all input images are of size 640 × 480 pixels. The results achieve 0.9 mean average precision (mAP) on the eight-class material defect classification problem and require approximately 100 microseconds.


Introduction
In the NTSB (National Transportation Safety Board) global aviation safety notice [1], it is stated that "an abnormal engine could cause a serious aircraft accident". To mitigate the risk of aircraft accidents, it is thus of paramount importance to perform quality-assured engine inspection, manufacturing, and overhaul. Indeed, if the inspection and maintenance tasks of an aero engine are negligent, flight safety problems may result. This is evidenced by the event involving a Boeing 737-700 passenger aircraft in April 2018. In this event, a serious accident occurred when an engine blade ruptured and pierced the cabin of the aircraft, damaging the fuselage and injuring a passenger. As shown in news reports, Figure 1 depicts the failed fan blade as well as the fracture surface with fatigue. The main cause of the accident was that engine blade inspections were not implemented, and the hidden fatigue defects in the engine blades could not be detected by the inspector. Defects can also occur during material weld maintenance work, including micro-cracks, incomplete fusion, voids, porosity, blowholes, and inclusion spatter [2,3]. Once engine blades with hidden cracks exceed their service life, fatigue micro-cracks on the metal surfaces cause the cracks to expand continuously. The engine aeronautics composite materials (ACM) [4] are mainly aluminum (titanium) alloy composite materials and their related spare parts.
According to the Federal Aviation Administration (FAA) maintenance review board (MRB) [5], each in-service aero engine should pass the periodic A, B, C, and D level checks. This check work takes place at maintenance, repair and overhaul (MRO) [6] sites, where non-destructive testing (NDT) inspection is performed on the internal structure of each part. Engine components must pass NDT inspection before the dynamic test. There are two NDT methods: digital radiography testing (RT) [7] and computed tomography testing (CT) [8]. Deep learning models, including generative adversarial models, improve the existing techniques of X-ray image inspection. The aeronautics engine maintenance crack-checking processes resort to X-ray image NDT to find internally hidden defects. An application of deep learning for automatically marking defect locations can effectively identify the presence and location of up to eight types of defects, leading to enhanced work quality and efficiency. Finer object detection on image feature maps leads to more accurate identification of weld flaws than can be accomplished by standard visual examination. The proposed approach adopts a region-based convolutional neural network and a deep learning neural network for object detection to render an efficient X-ray image diagnosis system. The approach may benefit inspection work in the aviation industry via increased accuracy and efficiency. We propose an improved transfer learning method based on Fast R-CNN that detects hidden cracks in composite materials from single and multiple NDT radiographic images with high reliability. Our neural network, called AE-RTISNet, is based on Fast R-CNN encoder-decoder neural blocks built on residual layers. The improved AE-RTISNet is distinguished by its simple code and rapid detection: the aero material defect region is highlighted with a red square for cracks and burns, just as manually labeled damage is marked with solid red lines, matching its position, size, shape, and direction. Our automatic defect inspection system discriminates multiple cracks in the X-ray image, as shown in Figure 2, labels cracks between engine blades, and differentiates eight types of multiple cracks.
In addition to eight-type defect classification, the experiments in this paper also explore the applicability of Fast R-CNN detection model variants, for example the AE-RTISNet, an improved transfer learning Fast Region-based Convolutional Neural Network (Fast R-CNN) for defect detection. The Fast R-CNN approach requires large amounts of X-ray defect imagery. Those data facilitate crack image segmentation and the detection of multiple aeronautics engine defects by an automatic edge system through feature extraction and classification. Although it is possible to apply a deep learning algorithm to generic datasets, it can also be optimized explicitly for a domain. The results show empirically that a deep network with pre-tuned model features yields performance superior to hand-crafted features on object identification tasks. We primarily train for eight-type aeronautics engine defect X-ray image classification, where sufficient training data exist. Although operators have extensive experience with the RT method, the radiation involved is harmful to the human body. X-ray radiography can image objects in real time [9]. The AE-RTISNet tackles both single and multiple detection for X-ray photographs in AI-assisted inspection, which is the focus of this paper. The original check work shows high accuracy of sample defect type detection, but inspection jobs are entirely manual. In the inspection process, a technician visually inspects X-ray images of the structures and parts to detect structural cracks and defects. Shadow overlap occurs easily when the sharpness of the X-ray image is poor. The task of the technician in visual inspection is thus highly demanding, as she or he may also be subject to fatigue and psychological conditions, with the final RT diagnostic results determined subjectively by the operators. Thus, the final results are affected by experience, fatigue, and psychological conditions. Moreover, the manual detection process is time-consuming and inefficient.
To resolve this problem, the aviation industry, including companies such as Pratt & Whitney, GE, and Honeywell, has attempted to automate the inspection process to reduce the burden on human inspectors and assure quality. An automatic inspection system can assist the inspector in identifying the features and defects in X-ray images by marking the object localization window and bounding box automatically. The engine must not have any damage or cracks when thrusting at high speed. It is thus important to perform the inspection with assured quality and efficiency. In this paper, artificial intelligence (AI) based image processing techniques are combined with NDT inspection technology to improve the aero engine inspection task.
The paper presents the application of deep learning neural network methods to aeronautics engine X-ray image diagnosis. The three main contributions are: (a) an AI-based function for training and validation, (b) X-ray images of eight different defect types, and (c) an edge-computer based inference system in the engine repair plant. In more detail:

•
A framework for AI-based NDT inspection of engine parts is developed using Fast Region-based Convolutional Neural Networks (Fast R-CNN); the AE-RTISNet performs defect feature detection and description with a multi-task loss function that localizes objects for training and validation.

•
The system is trained on a collected set of X-ray images of engine parts. Eight different defects are labeled in the images for training and validation.

•
The proposed system is implemented using an edge-computer based inference system in the engine repair plant. The effectiveness of the proposed detection in augmenting the inspection capability is demonstrated. This paper is divided into five sections. In Section 2, the main approach is described: the image data sets and the eight types of defect detection are discussed, the adopted Fast Region-based Convolutional Neural Network models are delineated, and some modifications to the existing network are highlighted. In Section 3, the experiments are described. In Section 4, the experimental results are discussed; they indicate that the proposed AI-based inspection method can indeed effectively augment the overall inspection capability. Finally, conclusions are given in Section 5.

Main approach
Image processing methods are used for Computer Aided Screening (CAS) [10] to mark defect detection regions on X-ray images. This section details an image enhancement [11] method that supports an automated multiple-defect detection model, applying the Fast R-CNN algorithm to solve the identification problem of the original low-resolution X-ray films. Our preparatory work has focused on image enhancement, feature segmentation [12], object classification [13], and multiple detections [14]. Our focus is on addressing the classification and detection tasks for the different types of hidden cracks in aeronautics engine materials, as presented in the following sections.
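The enhancement step itself is not specified in the paper; a minimal sketch of one common choice for low-contrast radiographs, global histogram equalization on 8-bit grayscale values (the function name and pure-Python form are illustrative, not the authors' implementation):

```python
def equalize_histogram(pixels):
    """Global histogram equalization for 8-bit grayscale pixel values.

    `pixels` is a flat list of intensities in [0, 255]; returns a list of
    the same length with stretched contrast.
    """
    # Histogram of the 256 possible intensity levels.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1

    # Cumulative distribution function (CDF).
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)

    # Smallest nonzero CDF value, so the darkest pixel maps to 0.
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:  # constant image: nothing to equalize
        return list(pixels)

    # Standard equalization lookup table.
    lut = [round((c - cdf_min) / (n - cdf_min) * 255) for c in cdf]
    return [lut[p] for p in pixels]
```

Equalization spreads the few occupied gray levels of a dim weld radiograph across the full 0-255 range, which makes faint crack edges easier for both inspectors and the downstream detector to see.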

Image data sets preparation
We collected the main data sets of aero engine X-ray images from an engine repair plant in Taiwan. The plant specializes in repairing various types of aero engines, such as helicopter engines, transport aeronautics engines, and fighter engines. We hold a complete maintenance record for each kind of aero engine in the experiments, as shown in Figure 2.
Figure 3 illustrates the RESCO-MF4 X-ray machine used as the experimental equipment. The aero engine parts are made of metal composite materials. We have developed an optimal intelligent transfer learning method, a pre-trained neural net named AE-RTISNet, whose diagnostic results are based on radiographic images with labeled defect regions. For example, if there are 20 engines to be inspected in one month and each engine contains 300 parts, a total of 6,000 X-ray images can be collected in one month. If these images were all inspected by a human inspector, the inspection work would be very demanding. Hence, automatic inspection by augmenting AI techniques may potentially reduce the workload of human inspectors and assure consistent quality.
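The monthly image volume quoted above follows directly from the plant throughput; a quick check of the arithmetic (the per-image inspection time is a hypothetical figure used only to illustrate the scale of the manual workload):

```python
engines_per_month = 20
parts_per_engine = 300

# Each part yields one X-ray image.
images_per_month = engines_per_month * parts_per_engine
assert images_per_month == 6_000  # matches the figure quoted in the text

# Hypothetical illustration: at 2 minutes of visual inspection per image,
# the monthly manual workload would already be 200 hours.
minutes_per_image = 2
hours_per_month = images_per_month * minutes_per_image / 60
print(hours_per_month)  # 200.0
```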

Eight type Defects Detection
The image data sets were taken from the archives of the engine repair records. Next, preprocessing with the Python Imaging Library (PIL) is executed to convert the image file format and adjust the output size, yielding a universal LMDB (Lightning Memory-Mapped Database) data format [15]. All training and testing tasks are conducted using image data in LMDB file format. The engine repair plant has accumulated a data gallery of X-ray images. In the dataset, weld defects for ACM in X-ray images can be categorized as (a) cracks, (b) incomplete fusion, (c) incomplete penetration, (d) porosity, (e) slag inclusion, (f) undercut, (g) welding spatter, or (h) blowhole, as depicted in Figure 4. The X-ray images in the dataset are converted into LMDB format and stored for deep learning application. In the dataset, X-ray images under different lighting conditions and at different resolutions have been labeled with the above defects. An effort is made to prepare the labeled data for AE-RTISNet. In this endeavor, the 8 label classes are stored in the first row of the category table string over the 6,000 labeled X-ray image dataset.
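The exact encoding of the category table is not given; a minimal sketch of one way to map the eight defect classes to integer labels for the LMDB records (the ordering follows the (a)-(h) listing above, but the numeric ids and key scheme are illustrative assumptions, not the authors' actual encoding):

```python
# Eight defect classes in the order (a)-(h) given in the text; the numeric
# ids are an illustrative assumption, not the authors' actual encoding.
DEFECT_CLASSES = [
    "crack",
    "incomplete_fusion",
    "incomplete_penetration",
    "porosity",
    "slag_inclusion",
    "undercut",
    "welding_spatter",
    "blowhole",
]

CLASS_TO_ID = {name: i for i, name in enumerate(DEFECT_CLASSES)}

def record_key(split, index):
    """Fixed-width byte key so LMDB iterates records in insertion order."""
    return f"{split}_{index:08d}".encode("ascii")
```

A conversion script would then open each archive image with PIL, resize it to 640 × 480, and `put` the encoded bytes under `record_key("train", i)` together with its class id.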

Detection on Description
AE-RTISNet inspection of X-ray images requires a pre-trained detector for each defect type that provides the location or region of the defects. The AE-RTISNet algorithms improve accuracy and are thus used for X-ray image defect recognition in non-destructive radiation testing at the aero engine maintenance factory. A model based on the Fast R-CNN algorithm is developed for detection, localization, and classification. Many different deep neural network models, including DetectNet [16], Fast R-CNN [17], and YOLO [18], have been proposed for object detection and semantic segmentation. In this paper, the aeronautics engine radiographic testing inspection system model is used to determine the bounding boxes of defect objects in X-ray images. The transfer learning model architecture evolves from the Fast R-CNN model and Fully Convolutional Network (FCN) frameworks for object detection and semantic segmentation. The layers of Fast R-CNN learn attention-mechanism object detection with a Region Proposal Network (RPN) [19]. The Fast R-CNN model is thus flexible and robust enough to be applicable to R-CNN tasks, and it is used to provide bounding-box object detection in the design of the convolutional networks. The normal Fast R-CNN is extended to allow marking of the defect feature maps using pooling layers, which can lead to faster convolution and better accuracy [20]. Furthermore, on the Jetson™ TX2 embedded computer, the AE-RTISNet compiled to C++ code models the eight types of object classification feature maps and a normal regression that estimates object bounding boxes within the deep learning object detection network [16]. AE-RTISNet is extended by increasing the number of hidden layers, and some changes are made to accommodate multi-class object detection. This differs from common practice when applying FCNs [21]. The determination of the model structure for multi-class object detection contains training and validation steps. Figure 5 illustrates the training and validation procedures, in which an L1/L2 loss function is adopted so that the results are not biased toward a single class. This paper proposes a deep Fast R-CNN based framework that not only can efficiently detect damage defects in X-ray image files but can also be adapted more easily to other aero parts material damage detection problems. We used the Fast R-CNN algorithm in a deep learning structure in which the pre-trained AE-RTISNet model gains the ability to output both the defect regions and the eight types of damage in the input X-ray image datasets. Additionally, the transfer learning method is introduced to reduce the required amount of data while yielding high accuracy. Recently, several published neural network designs have targeted deep learning based object damage recognition. For example, one paper proposes a deep-layer R-CNN to detect different kinds of cracks, while another evaluates more than five R-CNN model architectures for detecting material corrosion in input images [22,23]. The Fast R-CNN training algorithm makes up for the disadvantages of R-CNN and SPPnet [17] while improving on their speed and accuracy. In this method, large input datasets are first cropped into small images of fixed size, and then CNNs are applied to classify whether cracks or corrosion are contained in each small image; this fixed-size approach has advantages. Thus, the AE-RTISNet model attains a better mAP than the R-CNN and SPPnet models. An AE-RTISNet model framework is further developed to enhance the recognition of multiple objects. As a result, unbiased classification over the eight types of object classification feature maps can be obtained. The customized network depicted in Figure 7 employs layers that realize local response normalization (LRN) [24] and uses a local neuron net to establish a set of active competition mechanisms in an attempt to suppress insignificant small-feedback neurons and improve learning performance. The AE-RTISNet, based on the Fast R-CNN model structure, has sibling output layers. The AE-RTISNet first processes the whole X-ray image with several convolutional and max pooling layers to produce a convolutional feature map.
Then, for each object proposal, a RoI pooling layer extracts a fixed-length feature vector from the feature map. Each feature vector passes through a sequence of fully connected (FC) layers. Finally, the network encodes refined bounding-box positions for the K classes; each training RoI is labeled with bounding-box regression offsets for the K object classes. Following the Fast R-CNN architecture, each training RoI is labeled with a ground-truth class u and a bounding-box regression target v, and the multi-task loss combines a classification log loss L_cls(p, u) = -log p_u over the predicted class probabilities p with a localization loss L_loc(t^u, v) defined over the predicted offsets t^u = (t^u_x, t^u_y, t^u_w, t^u_h) [25]. For background RoIs, L_loc is ignored. For the bounding-box regression in the Fast R-CNN model structure of AE-RTISNet, the loss used is the robust L1 loss; an L2 loss would require careful tuning of the learning rates to prevent exploding gradients, a sensitivity that the smooth L1 loss of Eq. 1 [19] eliminates.
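Eq. 1 is the standard Fast R-CNN smooth L1 localization loss; since the paper does not reproduce it in full, here is a minimal pure-Python sketch following the published Fast R-CNN definition:

```python
def smooth_l1(x):
    """Smooth L1: 0.5*x^2 when |x| < 1, otherwise |x| - 0.5.

    Quadratic near zero (stable gradients), linear for large errors,
    which avoids the exploding gradients of a plain L2 loss.
    """
    ax = abs(x)
    return 0.5 * x * x if ax < 1.0 else ax - 0.5

def loc_loss(t_u, v):
    """L_loc(t^u, v) = sum over i in {x, y, w, h} of smooth_l1(t^u_i - v_i)."""
    return sum(smooth_l1(ti - vi) for ti, vi in zip(t_u, v))
```

The two branches meet at |x| = 1 with matching value and slope, so the loss is continuously differentiable and a single outlier box contributes only linearly to the gradient.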

Inference
The AI-based inspection system is designed to augment the capability of a human inspector, speeding up the inspection task and reducing the likelihood of errors. After training and validation, the Fast R-CNN based AE-RTISNet is implemented on an embedded processor to perform defect detection and image segmentation of X-ray images. The adopted embedded platform is the NVIDIA® Jetson™ TX2 hardware, which is responsible for performing on-line inference for defect identification. The inference system is used to upgrade the factory NDT machine equipment, helping the engineer to inspect all X-ray images more efficiently.

System Setup
To develop and verify the proposed AI-based inspection system, a hardware and software environment is set up to support the processing of a large number of data sets for training and inference [26]. The host computer is equipped with an 8-core Intel i7-7700K CPU and an NVIDIA GTX 1660 Ti GPU graphics card, and it is responsible for the overall data processing and the training of the neural networks. The host computer connects to an external expansion device, the NVIDIA® Jetson™ TX2 embedded development board, to perform the inference task. Once a model is trained and verified on the host computer, it is compiled and ported to the Jetson™ TX2 embedded development board for inference.
The main operating system is Ubuntu 16.04, together with additional installations of NumPy, the C++ and Python programming language interfaces, and related open source software.

Network Training
In the experiment, a modified Fast R-CNN is developed and trained to identify the feature map for defect object detection and segmentation of X-ray images, and the results serve as inputs to the AE-RTISNet, in which the characteristic values of crack defects in the picture are compiled. The data set is compiled by using Python code to read the annotation files in xml format and feed them into the neural network layers of the custom DNN model. The DNN model, the basic prototype of the neural network model, is built with the Caffe Deep Learning Framework [27], and the weight values of each Fast R-CNN layer of the DNN are adjusted by the Python program.
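The xml annotation files (Pascal VOC-style files of the kind produced by labeling tools such as LabelImg) can be read with the standard library alone; a minimal sketch, with field names following the VOC convention rather than the authors' exact files:

```python
import xml.etree.ElementTree as ET

def parse_annotation(xml_text):
    """Parse one Pascal VOC-style annotation into (class, box) pairs.

    Returns a list of (label, (xmin, ymin, xmax, ymax)) tuples, one per
    labeled defect bounding box in the image.
    """
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        box = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, box))
    return boxes
```

Each parsed (label, box) pair then supplies the class id and regression target for one training RoI.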
Some snapshots of the X-ray image data set are depicted in Figure 8, and Figure 9 illustrates the description screen of the image models. In the presence of defects, a category labeling bounding box is made, and the regions within the region proposal boxes are used in the GoogLeNet architecture to perform feature value extraction on the trained DNNs. Then, the Adam optimization algorithm is used to refine the model. The process can efficiently distinguish whether a pixel belongs to a defect or the background [28], reducing the error rate, and corrects the position of the bounding box with a linear regression model. In addition, the labeling tool LabelImg is used for object labeling and compilation, as shown in Figure 10. LabelImg automatically checks whether a corresponding xml tag file exists in the system, so that the overall training can be conducted more efficiently. Both training and testing are conducted to determine the neural network model. To this end, the image files are divided into two folders, of which 66% are used for training and 33% for testing. The experiment uses deep networks as a model to identify the defect object localization window and bounding box in the X-ray images. In addition, AE-RTISNet is used to reduce the error in the identified trap feature values. The operation follows the description in Figure 9 and, typically, the model is trained for more than 600 iterations. The customized neural network structure plays a role in screening the data: when passing the first convolution layer, the data are forced through a customized layer that sharpens the image features so that they become prominent. As a result, the error rate of model identification is reduced by 1-2%.
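The 66/33 split described above can be made reproducible with a seeded shuffle; a small sketch (the seeding and sorting are assumptions, since the paper does not say how files were assigned to the two folders):

```python
import random

def split_dataset(filenames, train_fraction=0.66, seed=0):
    """Shuffle deterministically and split into (train, test) file lists."""
    files = sorted(filenames)            # stable starting order
    random.Random(seed).shuffle(files)   # seeded, hence reproducible
    cut = int(len(files) * train_fraction)
    return files[:cut], files[cut:]
```

A fixed seed means the same images land in the training and testing folders on every run, so reported accuracy figures can be reproduced.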

Experiment Results
The defect categories are cracks, incomplete fusion, incomplete penetration, porosity, slag inclusion, undercut, welding spatter, and blowhole: the eight common types of defect characteristics that the engine maintenance manual warns the maintenance staff about. An NDT inspector engineer with FAA, CAA, and EASA certifications must complete the maintenance of an aeronautics engine [29]. The model learns to identify defect categories through image recognition. After the model is compiled, the program is lightweight and is installed in the Jetson™ TX2 embedded computer, which can automatically detect the characteristic values of structural defects in engine components, weld beads, and blade bodies. The test process can identify the type of defect even for original input X-ray images that were not used in any preliminary training. The customized DNN and the overall neural network model are evaluated in terms of mAP (mean average precision). The results confirm that the model has reached the technical standard of a qualified human inspector. The average precision (mAP) obtained in the experimental test is 0.82 if the image does not contain a defect and 0.8 if the picture has certain defects. Figure 11 illustrates an inspection result and Figure 12 depicts the training performance.
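mAP is the mean over the eight classes of the per-class average precision; a compact sketch of the standard all-point AP computation for one class (this is the generic VOC-style metric, not the authors' exact evaluation script):

```python
def average_precision(scored_predictions, num_positives):
    """All-point average precision for one defect class.

    `scored_predictions` is a list of (confidence, is_true_positive)
    pairs for every detection of this class; `num_positives` is the
    number of ground-truth boxes of this class in the test set.
    """
    ranked = sorted(scored_predictions, key=lambda p: -p[0])
    tp = fp = 0
    ap = prev_recall = 0.0
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        recall = tp / num_positives
        precision = tp / (tp + fp)
        # Area under the precision-recall curve, accumulated rank by rank.
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

mAP is then `sum(average_precision(...) for each class) / 8`; a detector that ranks every true defect above every false alarm scores AP = 1.0 for that class.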

FIGURE 11. Inspection result of an X-ray image
FIGURE 12. Visualization of the training process
A field comparison of human visual inspection and the proposed AI-based inspection is shown in Table 1. The experiment verifies that augmenting engine diagnosis with the AI-based inspection system is feasible and can be an effective tool in quality assurance. The results imply that the maintenance cycle can be shortened. A preliminary estimate of the work-hour reduction for each engine is more than 60 hours, which is equivalent to a saving of at least $3 million dollars.

Conclusions
This paper develops an AI-based inspection procedure to augment non-destructive inspection work in an engine repair plant. The experiments use deep learning technology to determine the parameters of a customized deep neural network model that improves quality assurance in NDT detection. The resulting AE-RTISNet is successfully deployed as C++ code on the TX2 development board. The model is tested and verified to be able to assist the inspector in performing the diagnosis of X-ray images in an aeronautics engine repair plant. In particular, the field trial indicates that the AI-based inspection system can augment the overall inspection process, relaxing the human workload and assuring quality.

FIGURE 1. Samples of fracture surface with fatigue [1]
FIGURE 2. Factory of various military aeronautics engines
FIGURE 3. (a) RESCO-MF4 X-ray machine in factory, (b) X-ray file

FIGURE 5. Overall Architecture Function Charts

•
Pre-Train Model
•
Finish Model

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 5 October 2020, doi:10.20944/preprints202010.0060.v1

Using the AE-RTISNet model accelerates Fast R-CNN by 10× at test time. AE-RTISNet training time is also reduced by 3× due to faster proposal feature extraction.

FIGURE 6. Conv Feature Map and the localization loss L_loc(t^u, v) = sum over i in {x, y, w, h} of smooth_L1(t^u_i - v_i)

Training is carried out with DIGITS (Deep Learning GPU Training System) under the Caffe Deep Learning Framework to provide multi-class training and testing. In the training iterations, the optimal one-time iteration convergence curve of the trained model is selected as the data for the convergence model. The stored model data are converted to the snapshot image file format by the DIGITS software. The snapshot model for deployment is then ported to the TX2 to perform the inference task through the TensorRT software. The deep learning model performs object identification for images in LMDB format, and the self-labeled defect feature model is trained to identify the object localization window and bounding box. The input data set passes through the convolution layers, the max pooling layers, and the normalized exponential function (SoftMax) layer of a 22-layer neural network; the optimizer computes a stacked deep neural network module, and the custom DNN model recognizes the eight styles and successfully marks the object feature map segmentation.