
A Bio-Inspired Memory Model Embedded with a Causality Reasoning Function for Structural Fault Location

  • Wei Zheng ,

    zw3475@163.com

    Affiliation Key Laboratory for Optoelectronic Technology and System of the Education Ministry of China, College of Optoelectronic Engineering, Chongqing University, Chongqing, China

  • Chunxian Wu

    Affiliation Key Laboratory for Optoelectronic Technology and System of the Education Ministry of China, College of Optoelectronic Engineering, Chongqing University, Chongqing, China

Abstract

Structural health monitoring (SHM) is challenged by massive data storage pressure and structural fault location. In response to these issues, a bio-inspired memory model that is embedded with a causality reasoning function is proposed for fault location. First, the SHM data for processing are divided into three temporal memory areas to control data volume reasonably. Second, the inherent potential of the causal relationships in structural state monitoring is mined. Causality and dependence indices are also proposed to establish the mechanism of quantitative description of the reason and result events. Third, a mechanism of causality reasoning is developed for the reason and result events to locate faults in a SHM system. Finally, a deformation experiment conducted on a steel spring plate demonstrates that the proposed model can be applied to real-time acquisition, compact data storage, and system fault location in a SHM system. Moreover, the model is compared with some typical methods based on an experimental benchmark dataset.

Introduction

Monitoring tasks are increasingly complex. Hence, structural health monitoring (SHM) technology has entered a stage of intelligent development in which individual data, signal, and knowledge processing procedures are presently integrated. Knowledge-based techniques are usually combined with several intelligent methods to identify structural faults [1–3]. However, some notable problems in industrial applications must be resolved urgently.

First, massive data storage pressure is an outstanding issue in long-term SHM systems. To save on storage costs, the sampling frequency is often reduced. However, strong external disturbances such as earthquakes may occur in the interval between samplings, which may last a few minutes; these disturbances may result in an unrecorded change in structural state [4]. Data compression techniques can also be used to save on storage costs, including methods based on the fast Fourier transform, the wavelet transform, and model-based compression [5–7]. However, complex compression algorithms lengthen the response time of the system, and processing the large amounts of corresponding data inevitably increases computational difficulty [8]. Nonetheless, instantly capturing structural changes while controlling data volume reasonably remains a challenge.

Second, a structural fault usually deteriorates gradually and extends from the focus to the surface. This process is difficult to monitor using a structural mathematical model when the effect of structural degradation and environmental changes is considered [9]. Recent studies have measured a single parameter in multiple channels, such as the monitoring of stress, strain, and pressure [10,11]. The consideration of a combined effect strongly complicates fault analysis given the individual differences in the monitored signals.

Thus, the current study establishes a bio-inspired memory model embedded with a causality reasoning function for structural fault location, in consideration of how human memory works and of the causal relationships in the structural fault process. This approach does not require a precise mathematical structural model and can be used for real-time acquisition, compact data storage, and system fault location in a SHM system.

Related Works

From a psychological perspective, memory is the ability of an organism to store, retain, and subsequently retrieve information. In 1968, Atkinson and Shiffrin proposed a model of human memory that posits three distinct memory stores: sensory, short-term, and long-term memory [12]. The study conducted by Ruff shows that neural processing in the sensory cortical areas can be biased toward behaviorally relevant stimuli and toward thoughts generated by feedback projections from the frontal and parietal brain areas [13]. Nee and Jonides reviewed the evidence that short-term memory is an amalgamation of three qualitatively distinct states [14]. Widrow and Aragon reported that memory and pattern recognition are interrelated and discussed why long-term memory is stored in DNA or RNA [15]. Bacca, Salvi, and Cufi proposed a system for long-term mapping and localization based on the introductory concepts of short-term and long-term memory [16]. Wang and Qi presented a memory-based cognitive model for visual information processing as inspired by human perception of an environment [17]. Song, Weng, and Lebby developed a dynamic model for the flapping motion control of micro air vehicles under a control scheme inspired by human memory [18].

Moreover, structural faults develop progressively with the inherent potential of causal relationships [19,20]. This condition guides the authors in mining the causal relationship of the monitored signals for system fault location.

Causal relationships exist in various fields. Furthermore, several reason events occur in different timing sections; these events result in either the onset or the conclusion of result events [21,22]. The present techniques that evaluate causal information are mainly based on probability theories, such as Bayesian network theory. A Bayesian net is a directed acyclic graph that can be used to indicate the probability of the causal relationships among events. This theory is widely used in fields such as fault diagnosis and pathology inference [23,24]. Several multi-technical fusion methods have also been established for causal information assessment. For example, Sharda and Banerjee combined the Bayesian net with a genetic algorithm in a robust design [25]. Baldwin and Di Tomaso [26] reduced the complexity of the automatic learning of a Bayesian net from the data using a fuzzy set.

The aforementioned methods generally consider conditional probability to be the modeling foundation, and they lack temporal information regarding an event. Therefore, Wen et al. [27] identified the correlation of the time series between cause and effect events. Karimi and Hamilton [28] integrated temporal information into a decision tree of causal relations. Arnold, Liu, and Abe [29] established a temporal causal model through graphical modeling based on the concept of “Granger causality”. Mosterman and Biswas [30] developed temporal causal graphs to extend the traditional causal constraints by including the temporal constraints that are important in the analysis of dynamic systems. However, causality has not been described quantitatively in sufficient detail, and the application of causality to fault location in a SHM system is novel.

Model

Most scientists agree that human memory can be described as a set of stores into which information is “placed”. A set of processes then acts on these stores. A very simple model may contain three different stores: sensory information store (SIS), short-term store (STS), and long-term store (LTS). It may also include three processes: encoding (inputting information into a store), maintenance (keeping this information “alive”), and retrieval (determining encoded information) [31].

Much information from the outside world is filtered through our sight, hearing, smell, taste, and touch sensors. We store this sensory information in the cortical areas of the brain. The data that catch our attention or are urgently needed are moved to the short-term memory. Sensory memory corresponds to that generated in approximately the initial 200–500 milliseconds after an item is perceived. The short-term store (STS) enables one to recall a memory that ranges from several seconds to a maximum of a minute without rehearsal; however, its capacity is highly limited. There are three ways in which one can forget information in the STS: decay, displacement, and interference. By contrast, long-term memory can store much larger quantities of information for a potentially unlimited duration (occasionally, an entire lifespan). Still, we can forget information through decay (as in short-term forgetting) and interference from other memories [32–34].

Fig. 1 shows the working process of the human memory [35].

This biological system filters out useful information from various sources, such as sight, hearing, and smell, while discarding much useless data. The process is similar to filtering, which effectively reduces data volume.

Inspired by the features of human memory, this study establishes the bio-inspired memory model embedded with a causality reasoning function for structural fault location, as shown in Fig. 2. This study simulates the behavior of human memory based on knowledge of psychology and behavioral science. This approach differs from artificial neural networks, which imitate the physical structure of the human nervous system (e.g., dendrites and axons); therefore, studying human memory in this way does not require sophisticated physiological knowledge. The inspired memory model is primarily used to compress the volume of data. Furthermore, a causal inference function is integrated into the LTS for structural fault location.

This study divides the SHM data to be processed into three temporal areas: SIS, STS, and LTS. First, the acquired data enter the SIS, in which they are maintained for a few seconds. If the value of the information is within the normal range of the structural state, this information is instantly forgotten. Otherwise, the signal proceeds to the STS for further observation. The onset and end times of the abnormal signals are recorded in this area. Much of the uninteresting data collected by the SHM system is discarded in the process, thus alleviating data storage pressure considerably.

The process can be described as follows:

Let xi represent the structural property monitored by sensor i; yi denotes the output of sensor i; tj corresponds to the current monitoring time; ΔTSIS indicates the time width of the SIS area; and θi represents the fluctuation limit. If the original normal value is yi_normal, then we obtain

yi(tj) = f(xi, tj), (1)

where f(xi, tj) is the function of sensory output.

  1. If |yi(tj) – yi_normal| < θi, then yi(tj) is disregarded and j = j + 1;
  2. If |yi(tj) – yi_normal| ≥ θi, then yi(tj) and tj are recorded. The following process is then applied:
    1. If tj ≤ ΔTSIS, then j = j + 1 and the signal is observed further;
    2. If tj > ΔTSIS, then the signal is switched to the STS.
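
As a concrete illustration of the SIS stage just described, the following Python sketch processes a stream of (tj, yi) samples for one sensor. The function name, the data layout, and the choice of measuring ΔTSIS from the first abnormal sample are assumptions made for this sketch rather than details given in the paper.

def sis_filter(samples, y_normal, theta, dt_sis):
    # Minimal sketch of the SIS stage (assumed names and data layout).
    # `samples` is an iterable of (t_j, y) pairs from a single sensor.
    # Normal readings are forgotten immediately; abnormal readings are
    # recorded, and once they persist beyond the SIS time width dt_sis
    # the record is handed over to the STS.
    recorded = []
    t_first_abnormal = None
    for t_j, y in samples:
        if abs(y - y_normal) < theta:
            continue                       # branch 1: forget normal data
        recorded.append((t_j, y))          # branch 2: record abnormal data
        if t_first_abnormal is None:
            t_first_abnormal = t_j
        if t_j - t_first_abnormal > dt_sis:
            return "to_STS", recorded      # switch the signal to the STS
    return "forgotten", recorded           # no persistent abnormality

A stream that stays within θi of its normal value therefore never leaves the SIS, which is how the bulk of the uninteresting data is discarded.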

In STS, the signal from the SIS is monitored for either a few seconds or a few minutes, which is longer than the monitoring duration in the SIS. If the structural state fluctuates, the data related to the stable structural balance state are sent to the LTS. The SHM system detects serious damage to the structure if the signal values exceed the normal range and either increase or decrease monotonously. The onset and end times of the monotonous change are then recorded. The system determines SIS disturbance if the signal values return to the normal range and abnormal monitoring data are no longer observed. The structural state stabilizes after further monitoring in the STS. Thus, monitoring data need not be sent to the LTS, and the STS disregards the disturbance monitoring data.

The working principle of the short-term memory area can be described as follows:

Suppose the signal stream is yi(tj), ΔTSIS < tj ≤ ΔTSIS + ΔTSTS.

  1. If |yi(tj) – yi_normal| < θi, then yi(tj) is recorded, j = j + 1, and the signal is observed further.
  2. If |yi(tj) – yi_normal| ≥ θi, then yi(tj) is recorded, j = j + 1, and the signal is examined further.

If branch 1) is always followed in the time interval [ΔTSIS, ΔTSIS + ΔTSTS], the recorded data in the STS can be forgotten because this time interval suggests that abnormal data are no longer generated.

Branch 2) evaluates this situation further as follows:

If yi(tj) ≥ yi(tj−1) for j = h, h + 1, …, h + f, or yi(tj) ≤ yi(tj−1) for j = h, h + 1, …, h + f, then an alarm signal is emitted immediately because these expressions indicate that the signal values exceed the normal range and either increase or decrease monotonously f times from time th. The onset and end times of this monotonous change are then recorded, and the monitoring information is sent to the LTS.

If the structural state fluctuates and tj > ΔTSIS + ΔTSTS, then the new balance location of the structure can be determined. The information is then sent to the LTS.
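
A matching sketch of the STS stage is given below. The interpretation of the monotonic-change test (f consecutive increases or decreases of abnormal values) and the return labels are assumptions; the paper's three outcomes (forget the disturbance, raise an alarm, or store a new balance state in the LTS) are kept.

def sts_monitor(samples, y_normal, theta, f_alarm):
    # Sketch of the STS stage for samples arriving after the SIS hand-over.
    recorded = []
    run, direction = 0, 0      # length and sign of the current monotonic run
    all_normal = True
    prev = None
    for t_j, y in samples:
        recorded.append((t_j, y))
        if abs(y - y_normal) >= theta:
            all_normal = False
            if prev is not None:
                d = 1 if y >= prev else -1
                run = run + 1 if d == direction else 1
                direction = d
                if run >= f_alarm:
                    # abnormal values changed monotonously f times: alarm
                    return "alarm_to_LTS", recorded
            prev = y
        else:
            prev = None
            run, direction = 0, 0
    if all_normal:
        return "forgotten", []             # transient disturbance only
    return "new_balance_to_LTS", recorded  # fluctuating abnormal signal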

LTS is a large-capacity storage area in which information can be preserved for a long time. Long-term memory stores the changing parameters of the structural state and the results of structural fault identification derived from the causality reasoning unit. Additional data are stored in the LTS to provide information for structural state analysis. The LTS data that remain unused for long periods of time are forgotten to control data volume reasonably.

Causality Reasoning

Part 1. Analysis of causal relations

Causal relationships can be described as several reason events occurring in different timing sections that result in either the onset or the conclusion of result events.

When T represents the time set for event set D, T|D = [TB, TE], where TB and TE are the onset and end times of D, respectively.

Suppose the reason event set is C = {C1, …, Cm} and the result event set is R = {R1, …, Rn}:

Ci and Rj are related in several ways given T|Ci = [tp, tq], T|Rj = [tg, th] (i = 1, 2, …, m; j = 1, 2, …, n; 1 ≤ p, q, g, h ≤ k). The typical relations are presented in Fig. 3.

If events Ci and Rj are causally related, event Ci should lead to either the onset or the end of event Rj (Fig. 3[A] and 3[B]). If events Ci and Rj do not intersect in the time domain, then they cannot be causally related (Fig. 3[C]). Events Ci and Rj may also intersect in the time domain without being causally related (Fig. 3[D]).
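
The temporal precondition illustrated in Fig. 3 can be encoded as a small helper. The sketch below only checks the necessary conditions discussed above (the two spans must intersect, and the reason event is assumed to start no later than the result event, as in Fig. 3[A]); the names and the exact condition are illustrative assumptions.

def may_be_causal(c_span, r_span):
    # c_span = (t_p, t_q) for reason event C_i, r_span = (t_g, t_h) for
    # result event R_j.  Events that do not intersect in time (Fig. 3[C])
    # cannot be causally related; this check is necessary, not sufficient.
    t_p, t_q = c_span
    t_g, t_h = r_span
    intersects = t_p <= t_h and t_g <= t_q
    return intersects and t_p <= t_g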

Part 2. Quantitative calculation

Different disturbances are attributed to various structural faults in a SHM system [36,37]. Some structural faults originate from the effect of the onset of a reason event, whereas others are induced by the cumulative influence of the reason event. For example, an earthquake is a strong external disturbance to a structure, and its effect is instant. By contrast, a vehicle load places pressure on a road, but its effect accumulates over time. The present paper proposes a quantitative calculation method for causality based on the onset and cumulative influences of the reason events.

The onset effect of events Ci to Rj in Fig. 3(A) can be expressed as (2), where k1 is the onset effect coefficient (0 < k1 ≤ 1).

Given Ci ∈ C (i = 1, 2, …, m), a small value of tg(Rj) − tp(Ci) indicates that the onset effect of event Ci on event Rj is strong.

The cumulative influence of event Ci on event Rj in Fig. 3(A) can be expressed as (3), where k2 is the cumulative effect coefficient (0 < k2 ≤ 1).

A high value of [tq(Ci) − tg(Rj)]/[th(Rj) − tg(Rj)]² suggests that event Ci has a strong cumulative influence on event Rj when Ci ∈ C (i = 1, 2, …, m).

Hence, the present study defines a causality degree as follows:

Definition 1. Given T|C and T|R, Ci (i = 1, 2, …, m) is assigned as the row vector, and Rj (j = 1, 2, …, n) is the column vector. The 2D matrix σm×n is then established. The value of σij is the causality index and reflects the causality strength of Ci and Rj.

Given the causality calculation method displayed in Fig. 3, the computation formula for σij can be expressed as follows: (4)

Formula (4) quantitatively calculates the causality strengths of the reason and result events.
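
Because Formula (4) itself is not reproduced in the text above, the sketch below only illustrates one plausible way to combine the onset term (which grows as tg(Rj) − tp(Ci) shrinks) with the cumulative term (which grows with the overlap of Ci over the duration of Rj). The specific functional form, the normalization, and the names are assumptions, not the authors' exact formula.

import numpy as np

def causality_index(reason_spans, result_spans, k1=1.0, k2=1.0):
    # Illustrative causality matrix sigma[i, j] for reason events C_i and
    # result events R_j, given their (onset, end) time spans.  The combined
    # onset/cumulative expression below stands in for the paper's
    # Formula (4) and is an assumption.
    m, n = len(reason_spans), len(result_spans)
    sigma = np.zeros((m, n))
    for i, (t_p, t_q) in enumerate(reason_spans):
        for j, (t_g, t_h) in enumerate(result_spans):
            if t_p > t_h or t_q < t_g:
                continue                                      # no intersection
            onset = k1 / (1.0 + max(t_g - t_p, 0.0))          # onset effect
            overlap = max(min(t_q, t_h) - max(t_p, t_g), 0.0)
            cumulative = k2 * overlap / max(t_h - t_g, 1e-9)  # cumulative effect
            sigma[i, j] = onset + cumulative
    return sigma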

Another index known as the dependence index is defined to evaluate causal knowledge and to locate system faults.

Definition 2. For ∃Rj ∈ R (j = 1, 2, …, n), the onset or end of Rj relies on the cooperation of all of the elements in set C. Given Ci ∈ C (i = 1, 2, …, m), the dependence level of Rj on Ci is called the dependence index DEPi(Rj).

DEPi(Rj) can be calculated in combination with the probability method using the following formula: (5)

Wd(Ci, Rj) is the dependence connection strength between Ci and Rj; it is designed to provide probabilistic reinforcement in our study. If DEPi(Rj) (i = 1, 2, …, m) is calculated individually and the results are arranged in descending order, then the event Ci that displays a high dependence index value is highly likely to be the reason event of Rj. Thus, system faults can be located based on knowledge reasoning from the result event to the reason event.

For instance, reason event Ci can be denoted as the monitoring information collected by the numerous sensors of a complex structural state monitoring net, whereas overall structural performance can be taken as result event Rj, including collapse, distortion, and fracture. An event Ci with a high dependence index value is the most likely reason event for a given result event (structural state output). Therefore, the position of the corresponding sensor for this event is emphasized in system fault location.
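
Formula (5) is likewise not reproduced above, so the sketch below normalizes the product of the connection strength Wd(Ci, Rj) and the causality index over all reason events to obtain a probability-like dependence level, and then ranks reason events for a given result event. The normalization and the helper names are assumptions standing in for the paper's exact formula.

import numpy as np

def dependence_index(sigma, w_d):
    # Illustrative dependence matrix DEP[i, j] = DEP_i(R_j); sigma is the
    # m x n causality matrix and w_d the m x n connection-strength matrix.
    weighted = w_d * sigma
    totals = weighted.sum(axis=0, keepdims=True)
    totals[totals == 0.0] = 1.0               # avoid division by zero
    return weighted / totals

def locate_fault(dep, result_index, threshold):
    # Rank reason events for one result event R_j and keep those whose
    # dependence index exceeds the chosen threshold (cf. the thresholds of
    # 0.15-0.25 used in the experiments below).
    order = np.argsort(dep[:, result_index])[::-1]
    return [int(i) for i in order if dep[i, result_index] > threshold]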

To simplify matters, the original values of Wd(Ci, Rj) are initialized uniformly in the present study. Nonetheless, we subsequently designed a rule to dynamically adjust these values; thus, the parameters are self-renewable in the model. The rule is explained as follows:

Following the calculation of DEPi(Rj) (i = 1, 2, …, m; j = 1, 2, …, n), the new value of Wd(Ci, Rj) can be regulated dynamically using Formula (6), in which the previous value of DEPi(Rj) and a slight increment vd are used.

A high dependence index value enhances dependence connection strength, and the value of Wd(Ci, Rj) increases accordingly to prepare for the next causality evaluation.
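
Formula (6) is described only verbally above (the previous value plus a slight increment vd), so the update below is a hedged reading of that description; the choice of boosting the entries whose dependence index exceeds the column mean is an assumption.

import numpy as np

def update_connection_strength(w_d, dep, v_d=0.01):
    # Hedged reading of the update rule: entries with a comparatively high
    # dependence index receive a slight increment v_d to their connection
    # strength, preparing the next causality evaluation.
    boost = dep > dep.mean(axis=0, keepdims=True)
    return w_d + v_d * boost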

Part 3. Causality reasoning for system fault location

Based on the aforementioned methods, a causality reasoning mechanism for system fault location is proposed as follows (Fig. 4):

  1. The sensor net for structural state monitoring uses numerous sensors. Changes in the sensor signal are captured in real time. Subsequently, the onset and end times of the abnormal signals are recorded, and the sensor net containing the temporal information is mapped to reason event set C = {C1, …, Cm}.
  2. Meanwhile, the onset and end times of the output of the structural fault state are recorded. The structural states include the comprehensive evaluation results obtained from the sensors. The structural state output with the temporal information is then mapped to result event set R = {R1, …, Rn}.
  3. σij is calculated to establish the causality degree net.
  4. Subsequently, the dependence index is dynamically calculated to evaluate the causal knowledge.
  5. System faults are located by comparing the calculation results of dependence.
  6. Finally, the dependence connection strength Wd(Ci, Rj) is updated for the next causality evaluation.
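
Reusing the illustrative helpers sketched earlier (causality_index, dependence_index, locate_fault, and update_connection_strength), a minimal driver for steps 1–6 could look as follows; all event spans and the uniform Wd initialization are placeholder values for illustration only.

import numpy as np

# Steps 1-2: onset/end times of abnormal sensor signals (reason events)
# and of the structural state outputs (result events); placeholder values.
reason_spans = [(0.0, 8.0), (1.0, 5.0), (12.0, 15.0)]
result_spans = [(2.0, 10.0), (13.0, 18.0)]

# Step 3: causality degree net.
sigma = causality_index(reason_spans, result_spans)

# Step 4: dependence indices, starting from a uniform connection strength.
w_d = np.full(sigma.shape, 1.0 / len(reason_spans))
dep = dependence_index(sigma, w_d)

# Step 5: most likely fault locations for result event R_1.
print(locate_fault(dep, result_index=0, threshold=0.15))

# Step 6: update the connection strengths for the next evaluation.
w_d = update_connection_strength(w_d, dep)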

The use of this model is appealing because it is relatively easy and simple to implement. It successfully avoids analyzing the differences among the numerous sensors in a complicated monitoring net while quantitatively calculating the inherent potential of the causal relationships between the monitoring processes and the results. Therefore, an intelligent fault location mechanism is established based on the causality reasoning for SHM.

Results and Discussion

Experiment System

An experimental system was established to demonstrate the validity of the proposed model and to concretely describe the application process of the model. The system is depicted in Fig. 5.

This study monitors a steel spring plate on which 12 full-bridge, metal-foil strain gauges are affixed. Fig. 6 displays the images and circuit diagrams of each gauge, which are important in the experiment. The general Wheatstone bridge illustrated in Fig. 6(A) consists of four resistive arms. Moreover, an excitation voltage E is applied across the bridge. UBD is the output voltage of the bridge.
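
For reference, the output of a balanced Wheatstone bridge with excitation E follows the standard textbook relation below; the arm numbering used here is the conventional one and may differ from the labeling in Fig. 6(A).

U_{BD} = E\left(\frac{R_2}{R_1 + R_2} - \frac{R_3}{R_3 + R_4}\right)

When all four arms are active strain gauges, small resistance changes produce an output approximately proportional to the measured strain, which is why the full-bridge configuration is favored for its sensitivity and temperature compensation.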

Data are acquired using six strain processing boards with a serial communication interface. The monitoring data are sent to a serial-to-USB converter. Finally, the data are downloaded to a control computer through a USB hub. The 12 strain gauges are designed with connectors that can be easily installed or uninstalled, such that the data from these sensors can be acquired alternately because only six independent strain boards are provided.

Furthermore, three spiral force-measuring platforms were used in the experiment. One platform is fixed onto a 2D displacement platform and is placed under the steel spring plate. A dowel steel is installed on the spiral force-measuring platform, which can be moved in three dimensions. If the torque handle on this platform is turned, torque stimulation is converted into pressure stimulation. This stimulation strains the steel spring plate through the dowel steel. The value of the pressure stimulation can be measured using the dynamometer on the spiral force-measuring platform. Moreover, a laser rangefinder is installed under the steel spring plate to measure its vertical deformation. The two other spiral force-measuring platforms are positioned on both sides of the steel spring plate. The upward pressures induced by the deformation of the steel spring plate act on these platforms and are measured by their dynamometers.

As mentioned previously, the steel spring plate is strained by the dowel steel when the torque handle on the spiral force-measuring platform below the plate is turned. The force point is moved to a different area of the steel spring plate by controlling the 2D displacement platform. As a result, the steel spring plate is deformed in different ways in various locations and times.

Fig. 7 indicates the corresponding relations between the experimental and causal information systems.

Fig 7. Corresponding relations between experiment system and causal information system.

The sensor information derived from the 12 strain gauges is regarded as reason event set C = {C1, …, C12}. The structural state outputs are the vertical deformation measured by the laser rangefinder and the upward pressures determined with the two spiral force-measuring platforms. Collectively, these outputs are considered result event set R = {R1, R2, R3}.

Data volume control

The figures cited in the following paragraphs are obtained from a group of data derived from numerous experiments within a specific period of time. The source data are the strain values measured by strain gauge C3, with the force point located below it. In these experiments, disturbance, structural deformation, and structural fluctuation events were simulated over a period of 20 min. First, every datum acquired at a sample rate of 0.5 Hz was recorded in the traditional manner. The acquisition time was also recorded in the data acquisition software; therefore, the stored data include both the signal values and the acquisition times. The values derived from the strain gauges are shown in Fig. 8(A), where the Y-axis unit is με.

The source data were then inputted into the proposed model to verify its validity. Suppose that ΔTSIS = 10 s, ΔTSTS = 2 min, and the initial limit of structural safety is θi = 300 με. Within time ΔTSIS, many normal data were disregarded in the SIS area. A rapid force was applied to the steel spring plate as a disturbance at approximately 9:02:54. The SIS captured the changes in the data in a new step time of ΔTSIS, which began at 9:03:02. Nonetheless, only the abnormal data that exceeded the safety limit were recorded, as presented in Fig. 8(B).

The same figure indicates that the signal entered the STS for further monitoring within the time ΔTSTS. The signal values returned to the normal range, and abnormal data were no longer observed. The proposed model therefore deduced a transient disturbance to the structure; consequently, the STS disregarded the disturbance monitoring data.

The steel spring plate was deformed slightly by the force applied at 9:08:42. The proposed model quickly captured the changing signal and recorded the data in the SIS and STS, as depicted in Fig. 8(C).

Fig. 8(D) indicates that the signal values formed a fluctuating curve, implying that the structure settled into a new balance location. The information was then sent to the LTS.

The data storage space is well managed when the proposed model is used. Fig. 9 exhibits a histogram comparing the accumulated amount of recorded data with the amount of source data. The recorded data constitute only 18.6% of the source data in the experiment. This result therefore demonstrates that the proposed model efficiently alleviates the data storage pressure on a computer.

Causal information system

Fig. 7 illustrates that the sensor information is regarded as reason event set C = {C1, …, C12}. The structural state outputs are then considered result event set R = {R1, R2, R3}.

The steel spring plate is strained by turning the torque handle on the spiral force-measuring platform. The force point was located near C3, C4, and C9. The monitoring data regarding the reason and result events were processed using the proposed model, and the onset and end times of the abnormal signals were recorded. The resultant data volume was well managed, and the temporal information was acquired as displayed in Fig. 10.

Given Formula (4) with k1 = k2 = 1, the values of σij (i = 1, 2, …, 12; j = 1, 2, 3) are calculated.

Fig. 11 shows the 3D surface curve for the causality index values. A peak curve indicates strong causality, whereas a valley curve denotes weak causality. The causality distribution map is thus demonstrated quantitatively.

System fault location

Initially, the dependence connection strength is set to Wd(Ci, Rj) = 1/12. The calculation result of DEPi(Rj) (i = 1, 2, …, 12), obtained using Formula (5), is depicted in Fig. 12.

We simply aim to determine the most likely locations of the system fault; thus, we can ignore the reason events with very low dependence index values. In line with this objective, we set a threshold on the dependence index to simplify the comparison process.

The threshold for the dependence index in system fault R1 is set to 0.15. The dependence indices that exceed this threshold are DEP2(R1), DEP3(R1), DEP4(R1), and DEP9(R1), and they can be arranged in descending order as DEP2(R1) > DEP9(R1) > DEP4(R1) > DEP3(R1) > 0.15.

This relationship indicates the dependence levels of R1 on C2, C9, C4, and C3. C2 is the most likely reason event for system fault 1; thus, the corresponding sensor location is C2. The other potential locations of system fault 1 are C9, C4, and C3.

If the threshold value of the dependence index is set to 0.2, then the potential locations of system fault 2 are C7, C1, C11, C5, and C8, given DEP7(R2) > DEP1(R2) > DEP11(R2) > DEP5(R2) > DEP8(R2) > 0.2.

Similarly, the potential locations of system fault 3 are C12, C6, C3, C9, and C2 if the threshold value for the dependence index is set to 0.25, given DEP12(R3) > DEP6(R3) > DEP3(R3) > DEP9(R3) > DEP2(R3) > 0.25.

The experimental results are consistent with the intuitive spatial analysis results. A precise mathematical structural model is thus unnecessary for quantitatively calculating the causal relationships between the monitoring processes and the results and for locating the system fault.

Update of the connection strength

Once DEPi(Rj) (i = 1, 2, …m, j = 1, 2, …n) is computed, the new value of Wd(Ci, Rj) can be regulated dynamically using Formula (6). An example of the new value of Wd(Ci, R2)(i = 1, 2, …12) is provided in Fig. 13.

Fig 13. Dependence connection strength between R2 and Ci (i = 1, 2, …12).

The data in this figure show that Wd(C7, R2) > Wd(C1, R2) > Wd(C11, R2) > …, which is similar to the data trend presented in Fig. 12. If the value of the current dependence index is high, then the dependence connection strength is enhanced to improve subsequent probabilistic reinforcement.

Comparison experiment based on the benchmark dataset

Fault identification on a three-story building structure was previously carried out at the Los Alamos National Laboratory of the United States, where the factor analysis, principal component analysis, and Mahalanobis distance methods were adopted to analyze the faults in the structure.

To demonstrate the validity of the presented model, the model is applied to the same three-story building structure, and the algorithms are compared using the experimental benchmark dataset.

Introduction of the three-story building structure

As shown in Fig. 14, the structure consists of aluminum columns and plates assembled using bolted joints. A center column is suspended from the top floor. This column can be used to simulate a fault by inducing nonlinear behavior when it contacts a bumper mounted on the next floor. In the context of SHM, this source of fault is intended to simulate fatigue cracks that can open and close or loose connections that can rattle under dynamic loading. An electrodynamic shaker provides a lateral excitation to the base floor along the centerline of the structure. A load cell (Channel 1) with a nominal sensitivity of 2.2 mV/N was attached at the end of a stinger to measure the input force from the shaker to the structure. Four accelerometers (Channels 2–5) with nominal sensitivities of 1000 mV/g were attached at the centerline of each floor on the side opposite the excitation source to measure the system response [38].

Application of the model based on the benchmark dataset

The structural fault in the three-story building structure is caused by shaker shock and by the collision between the bumper and the column. Channel 1 is located near the shaker, and Channel 4 is positioned on the bumper layer. Thus, Channels 1 and 4 are considered reason event set C = {C1, C2}. Channels 2, 3, and 5 are regarded as result event set R = {R1, R2, R3} in our experiment. Among these channels, Channel 2 is nearest to Channel 1 and is farthest from Channel 4. Thus, the fault monitored by Channel 2 (R1) is mainly attributed to Channel 1 (C1). This fault originates from shaker shock. Channel 3 is located between Channels 1 and 4. Thus, the fault monitored by Channel 3 (R2) is primarily caused by the comprehensive influences of Channels 1 (C1) and 4 (C2). Channel 5 is nearest to Channel 4 and is farthest from Channel 1. Therefore, the fault monitored by Channel 5 (R3) is mainly ascribed to Channel 4 (C2). This fault is in turn generated by the collision between the bumper and the column.

In the experiment, 500 sample points are collected from the benchmark dataset for each channel in each state. These points are regarded as the raw experimental data. Given that state 1 represents the baseline condition, the raw data of state 1 are subtracted from the raw data of states 2–17 to acquire the preprocessed data.

The proposed model is then used to preprocess the data. The volume of the resulting data is compared with that of the raw data, as shown in Fig. 15. S1–S17 in Figs. 15–17 represent states 1–17, respectively.

Fig 16. Dependence indices between the result and reason events.

Fig 17. Comparison of algorithms for the fault indicator.

Fig. 15 indicates that the volume of data has been compressed to 25% of the raw data.

Safety thresholds are first set for each channel in the SIS and STS stages of the proposed model. Subsequently, the measured data are compared against the safety thresholds. If the data in a channel exceed the safety threshold, then the corresponding time value is recorded as the onset time of event i (a reason or result event). The last time value at which the absolute value of the measured data remains greater than the safety threshold is recorded as the end time of event i. Table 1 depicts the time set of events obtained with the proposed model.
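
A minimal sketch of this thresholding step is shown below; the baseline subtraction and the rule that the first and last samples exceeding the threshold give the onset and end times follow the description above, while the function name and argument layout are assumptions.

import numpy as np

def event_time_span(t, y, baseline, threshold):
    # Subtract the baseline-state data, then take the first and last sample
    # whose absolute value exceeds the safety threshold as the onset and end
    # times of the event; returns None if the threshold is never exceeded.
    t = np.asarray(t, dtype=float)
    residual = np.asarray(y, dtype=float) - np.asarray(baseline, dtype=float)
    idx = np.flatnonzero(np.abs(residual) > threshold)
    if idx.size == 0:
        return None
    return t[idx[0]], t[idx[-1]]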

Based on the aforementioned time set, the dependence index between the reason and result events can be obtained at different states as presented in Table 2.

Table 2. Dependence index between the reason and result events at different states.

Fig. 16(A)–16(C) exhibit the dependence indices between the result and reason events. The structural fault location can then be analyzed.

Prior to state S10, the bumper-column collision does not generate faults, as displayed in Fig. 16(A). The dependence indices of C1-R1 almost all exceed 0.6, and all of the dependence indices of C2-R1 are 0. This phenomenon suggests that the fault monitored by Channel 2 is influenced by C1 (shaker) alone. After the bumper begins to act from state S10, all of the dependence indices of C1-R1 remain greater than those of C2-R1. This finding indicates that the fault monitored by Channel 2 is still mainly caused by C1 (shaker).

Under the same bumper operation condition, Fig. 16(B) suggests that the dependence indices of C1-R2 are close to those of C2-R2. This finding indicates that the fault monitored by Channel 3 is induced by both C1 (shaker) and C2 (bumper).

Fig. 16(C) also reveals that the dependence indices of C2-R3 are greater than those of C1-R3 once the bumper operates from state S10. This finding suggests that the fault monitored by Channel 5 is mainly attributed to C2 (bumper).

In summary, Fig. 16(A)–16(C) indicate that the fault monitored by Channel 2 is primarily caused by shaker shock; that the fault monitored by Channel 3 is mainly induced by both shaker shock and the bumper-column collision; and that the fault monitored by Channel 5 is primarily attributed to the collision between the bumper and the column. The experimental result is consistent with the structural fault analysis conducted previously.

Comparison of algorithms

Faults in the three-story building structure were analyzed at the Los Alamos National Laboratory of the United States using the factor analysis [39], principal component analysis [38], and Mahalanobis distance [40] methods. Mahalanobis distance is a weighted Euclidean distance that considers the links among various characteristics and can calculate the similarity between two sample sets, whereas structural fault identification can be regarded as a process that evaluates the similarity between the measurement datasets and the normal dataset. The experimental data in this article are derived from multiple channels and multiple states; factor analysis and PCA can reduce dimensionality, which helps simplify the problem addressed in the experiment. Because the benchmark dataset originates from this laboratory, where these three methods have already been applied, the proposed method can be compared objectively with them. The calculation of the fault analysis consumes most of the time and space during fault location; thus, the algorithm of the proposed model was compared with the three other typical algorithms. The comparison is based on the experimental data acquired from Channels 1–5 at five different locations.

In the experiment conducted by the Los Alamos National Laboratory, shaker shock represents the environmental changes to the structure, and the structure is considered unfaulty under this excitation alone. Bumper collision is regarded as the source of structural faults, and the structure is then regarded as faulty. The data acquired from Channels 1–5, which are located at different layers, are adopted to determine whether or not the overall state of the three-story building structure is faulty. During the experiment, shaker shock alone is introduced into S1–S9, and the algorithms are run to assess whether the structure is identified as being in the expected unfaulty state. By contrast, both shaker shock and bumper collision are introduced into S10–S17, and the algorithms are run to evaluate whether the structure is identified as being in the expected faulty state.

In the proposed method, the sum of the dependence indices within the same state is regarded as the fault indicator (FI) of the presented model. Part of the preprocessed data is used in model training to obtain an FI threshold value of 1.5 for this experiment. The test data are then inputted into the trained model to determine the corresponding FI value. If the FI value is higher than the threshold, then the structure is considered faulty. The results are shown in Fig. 17(A).
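
The FI computation just described reduces to a sum and a comparison; the sketch below assumes the dependence indices for one state are already available as a matrix, and the function name is illustrative.

import numpy as np

def fault_indicator(dep_state, threshold=1.5):
    # Sum of the dependence indices obtained for one measured state,
    # compared against the FI threshold learned from the training data.
    fi = float(np.sum(dep_state))
    return fi, fi > threshold    # True means the state is judged faulty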

The FI construction method based on factor analysis operates as follows: Raw data are inputted into an autoregressive (AR) model to acquire the AR parameters, which are then regarded as fault-sensitive features. Subsequently, the factor analysis model is used to calculate the factor scores for the fault-sensitive feature vectors under the normal structural condition. The fault indicator is determined from these scores. A simple FI threshold is then generated after model training. The fault identification results are shown in Fig. 17(B).

The FI construction method based on PCA operates as follows: Raw data are inputted into an AR model, and the root mean square (RMS) errors of this model are considered fault-sensitive features. A machine learning algorithm based on PCA is used to obtain a fault indicator that is invariant for feature vectors under a normal structural condition. This indicator is enhanced when the feature vectors originate from the faulty structural condition. The fault identification results are depicted in Fig. 17(C).

The FI construction method based on Mahalanobis distance operates as follows: Raw data are inputted into an AR model, and its parameters are used as fault-sensitive features. The Mahalanobis distance model is then trained by fault-sensitive feature vectors obtained under a normal structural condition. The FI is determined by calculating the Mahalanobis distance from these fault-sensitive feature vectors to their mean vector. The fault identification results are presented in Fig. 17(D).
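
As a rough, self-contained sketch of this kind of Mahalanobis-distance baseline (not the laboratory's exact implementation), AR coefficients can be fitted by least squares and used as features, and the FI taken as the distance of a test feature vector to the mean of the training features.

import numpy as np

def ar_features(signal, order=4):
    # Fit an AR(order) model by least squares; the coefficients serve as
    # fault-sensitive features.
    x = np.asarray(signal, dtype=float)
    X = np.array([x[i:i + order] for i in range(len(x) - order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def mahalanobis_fi(train_features, test_feature):
    # Mahalanobis distance from one test feature vector to the mean of the
    # features obtained under the normal (training) structural condition.
    train = np.asarray(train_features, dtype=float)
    mean = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    d = np.asarray(test_feature, dtype=float) - mean
    return float(np.sqrt(d @ np.linalg.pinv(cov) @ d))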

Fig. 18 depicts the comparison of space and time consumption. Fig. 17 indicates that the FI constructed by the factor analysis method separates the two groups with a simple threshold for all of the states except S4. Although this method is faster than the proposed model, its storage space cost is greater. The methods based on principal component analysis and Mahalanobis distance also construct FIs that separate the two groups with a simple threshold. These techniques identify faults more effectively than the proposed model does; however, they require a large amount of sampling data to train the model and thus incur higher space and time costs than the proposed model. Based on this comparison, the proposed model can effectively identify structural faults at low space and time costs.

Fig 18. Comparison of algorithms for space and time cost.

Conclusion

A SHM system uses numerous sensors but is plagued by two challenges: massive data storage pressure and structural fault location. To address these concerns, the current research proposed a bio-inspired memory model embedded with a causality reasoning function for structural fault location. First, the model filters out much of the normal data, so that only the critical data that reflect structural change are recorded in the three memory areas; the storage pressure on the SHM system is therefore lowered. Second, the model is embedded with a quantitative causal reasoning function that includes two causal indices, namely, the causality index and the dependence index. The causality index reflects the causality strengths of the reason and result events and is associated with the function of causal knowledge discovery. The dependence index indicates the dependence level of a result event on the reason events and is used to reason out the sensor locations of the structural fault. Experiments demonstrate that the proposed model can effectively identify structural faults at low space and time costs.

Acknowledgments

The authors would like to thank Mr. XinDa Qian for establishing the experimental system and Mr. Qi Lu and Mr. Yinsheng Li for discussions.

Author Contributions

Conceived and designed the experiments: WZ. Performed the experiments: WZ CW. Analyzed the data: WZ CW. Wrote the paper: WZ.

References

  1. Mujica LE, Vehi J, Staszewski W, Worden K. Impact Damage Detection in Aircraft Composites Using Knowledge-based Reasoning. Struct Health Monit. 2008; 7: 215–230.
  2. Borges CCH, Barbosa HJC, Lemonge ACC. A structural damage identification method based on genetic algorithm and vibrational data. Int J Numer Methods Eng. 2007; 69: 2663–2686.
  3. Chandrashekhar M, Ganguli R. Uncertainty handling in structural damage detection using fuzzy logic and probabilistic simulation. Mech Syst Signal Proc. 2009; 23: 384–404.
  4. Hou ZK, Hera A, Shinde A. Wavelet-based structural health monitoring of earthquake excited structures. Comput-Aided Civil Infrastruct Eng. 2006; 21: 268–279.
  5. Lynch JP, Sundararajan A, Law KH, Kiremidjian AS, Kenny T, Carryer E. Embedment of structural monitoring algorithms in a wireless sensing unit. Struct Eng Mech. 2003; 15: 285–297.
  6. Huang YX, Liu CL, Zha XF, Li YM. An enhanced feature extraction model using lifting-based wavelet packet transform scheme and sampling-importance-resampling analysis. Mech Syst Signal Proc. 2009; 23: 2470–2487.
  7. Zhang XG, Huang TJ, Tian YH, Gao W. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding. IEEE Trans Image Process. 2014; 23: 769–784.
  8. Yuen KV, Katafygiotis LS. Substructure identification and health monitoring using noisy response measurements only. Comput-Aided Civil Infrastruct Eng. 2006; 21: 280–291.
  9. Kullaa J. Distinguishing between sensor fault, structural damage, and environmental or operational effects in structural health monitoring. Mech Syst Signal Proc. 2011; 25: 2976–2989.
  10. Holroyd TJ, Meisuria HM, Lin D, Randall N. Development of a practical acoustic mission-based structural monitoring system. Insight. 2003; 45: 127–129.
  11. Zagrai A, Doyle D, Gigineishvili V, Brown J, Gardenier H, Arritt B. Piezoelectric Wafer Active Sensor Structural Health Monitoring of Space Structures. J Intell Mater Syst Struct. 2010; 21: 921–940.
  12. Cowan N, Rouder JN, Stadler MA. On human memory: Evolution, progress, and reflections on the 30th anniversary of the Atkinson-Shiffrin model. Am J Psychol. 2000; 113: 639–648.
  13. Ruff CC. Sensory processing: who's in (top-down) control? Ann NY Acad Sci. 2013; 1296: 88–107. pmid:23909769
  14. Nee DE, Jonides J. Trisecting representational states in short-term memory. Front Hum Neurosci. 2013; 7: 796. pmid:24324424
  15. Widrow B, Aragon JC. Cognitive memory. Neural Networks. 2013; 41: 3–14. pmid:23453302
  16. Bacca B, Salvi J, Cufi X. Long-term mapping and localization using feature stability histograms. Robot Auton Syst. 2013; 61: 1539–1558.
  17. Wang YJ, Qi YJ. Memory-based cognitive modeling for robust object extraction and tracking. Appl Intell. 2013; 39: 614–629.
  18. Song YD, Weng LG, Lebby G. Human Memory/Learning Inspired Control Method for Flapping-Wing Micro Air Vehicles. J Bionic Eng. 2010; 7: 127–133.
  19. Sultan M, Wagdy A, Manocha N, Sauck W, Gelil KA, Youssef AF, et al. An integrated approach for identifying aquifers in transcurrent fault systems: The Najd shear system of the Arabian Nubian shield. J Hydrol. 2008; 349: 475–488.
  20. Bastesen E, Rotevatn A. Evolution and structural style of relay zones in layered limestone-shale sequences: insights from the Hammam Faraun Fault Block, Suez rift, Egypt. J Geol Soc London. 2012; 169: 477–488.
  21. Wood J, Sule VR, Rogers E. Causal and stable input/output structures on multidimensional behaviors. SIAM J Control Optim. 2005; 43: 1493–1520.
  22. Mazack LJ. Causal possibility model structures. In: Proceedings of the 12th IEEE International Conference on Fuzzy Systems; 2003 May 25–28; St Louis, MO, USA. IEEE; 2003. pp. 684–689.
  23. Nikovski D. Constructing Bayesian networks for medical diagnosis from incomplete and partially correct statistics. IEEE Trans Knowl Data Eng. 2000; 12: 509–516.
  24. Sterritt R, Marshall AH, Shapcott CM, McClean SI. Exploring dynamic Bayesian Belief Networks for intelligent fault management systems. In: 2000 IEEE International Conference on Systems, Man & Cybernetics; 2000 Oct 8–11; Nashville, TN, USA. IEEE; 2000. pp. 3646–3652.
  25. Sharda B, Banerjee A. Robust manufacturing system design using multi objective genetic algorithms, Petri nets and Bayesian uncertainty representation. J Manuf Syst. 2013; 32: 315–324. pmid:23391457
  26. Baldwin JF, Di Tomaso E. Inference and learning in fuzzy Bayesian networks. In: Proceedings of the 12th IEEE International Conference on Fuzzy Systems; 2003 May 25–28; St Louis, MO, USA. IEEE; 2003. pp. 630–635.
  27. Wen FH, Lan QJ, Ma CQ, Yang XG. An algorithm for mining association patterns between two time series and application in finance. In: WCICA 2006: Sixth World Congress on Intelligent Control and Automation; 2006 June 21–23; Dalian, China. IEEE; 2006. pp. 5938–5942.
  28. Karimi K, Hamilton HJ. TimeSleuth: A tool for discovering causal and temporal rules. In: 14th IEEE International Conference on Tools with Artificial Intelligence; 2002 Nov 4–6; Los Alamitos, California, USA. IEEE Computer Soc; 2002. pp. 375–380.
  29. Arnold A, Liu Y, Abe N. Temporal Causal Modeling with Graphical Granger Methods. In: KDD-2007: Proceedings of the Thirteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2007 Aug 12–15; San Jose, California, USA. Assoc Computing Machinery; 2007. pp. 66–75.
  30. Mosterman PJ, Biswas G. Diagnosis of continuous valued systems in transient operating regions. IEEE Trans Syst Man Cybern Part A-Syst Hum. 1999; 29: 554–565.
  31. Bryer EJ, Medaglia JD, Rostami S, Hillary FG. Neural Recruitment after Mild Traumatic Brain Injury Is Task Dependent: A Meta-analysis. J Int Neuropsych Soc. 2013; 19: 751–762.
  32. Johnson S, Marro J, Torres JJ. Robust Short-Term Memory without Synaptic Learning. PLoS One. 2013; 8: e50276. pmid:23349664
  33. Sumner JA, Mineka S, Zinbarg RE, Craske MG, Vrshek-Schallhorn S, Epstein A. Examining the long-term stability of overgeneral autobiographical memory. Memory. 2014; 22: 163–170. pmid:23439226
  34. Pastukhov A, Lissner A, Fullekrug J, Braun J. Sensory memory of illusory depth in structure-from-motion. Atten Percept Psycho. 2014; 76: 123–132. pmid:24097015
  35. Atkinson RC, Shiffrin RM. Human memory: A proposed system and its control processes. Psychology of Learning and Motivation. 1968; 2: 89–195.
  36. da Silva S, Dias M, Lopes V. Damage detection in a benchmark structure using AR-ARX models and statistical pattern recognition. J Braz Soc Mech Sci Eng. 2007; 29: 174–184.
  37. Bejarano FJ, Figueroa M, Pacheco J, Rubio JD. Robust fault diagnosis of disturbed linear systems via a sliding mode high order differentiator. Int J Control. 2012; 85: 648–659.
  38. Figueiredo E, Park G, Figueiras J, Farrar C, Worden K. Structural Health Monitoring Algorithm Comparisons using Standard Data Sets. Los Alamos National Laboratory; 2009. Report No.: LA-14393.
  39. Ijmker J, Stauch G, Hartmann K, Diekmann B, Dietze E, Opitz S, et al. Environmental conditions in the Donggi Cona lake catchment, NE Tibetan Plateau, based on factor analysis of geochemical data. J Asian Earth Sci. 2012; 44: 176–188.
  40. Worden K, Manson G, Fieller NRJ. Damage detection using outlier analysis. J Sound Vib. 2000; 229: 647–667.