Dispersion Reduction of the Source Data Created by Time History Analyses for Developing More Reliable Fragility Curves of Selected Electric Substation Equipment


 High dispersion of the data created by time history analyses (THA) for developing fragility curves (FCs) results in remarkable uncertainty in risk evaluation. Given the lack of studies on uncertainty reduction for FCs of electric power equipment, this paper presents three means of reducing this dispersion: 1) using the spectral acceleration at the fundamental period of the system, Sa(T1), instead of peak ground acceleration (PGA) as the intensity measure (IM); 2) using Sa(T1)+Sa(T2) as the IM; and 3) using a set of scenario earthquakes for THA instead of randomly chosen accelerograms. The methods were compared for two key structures of electric substations, a post insulator (PI) and a current transformer (CT). For the PI, a set of randomly chosen accelerograms was first employed, once using PGA and once using Sa(T1) as the IM, and then a set of scenario-based records with the same IMs. For the CT, PGA, Sa(T1) and Sa(T1)+Sa(T2) were used as the IM. Results show that using scenario-compatible records leads to more consistent data for developing the FCs, and that this consistency improves further when Sa(T1) is used as the IM, and markedly more with Sa(T1)+Sa(T2). It is therefore suggested that Sa(T1) be used for very high-frequency systems, such as the PI, and Sa(T1)+Sa(T2) for moderate- and low-frequency systems, such as the CT.


Introduction
Fragility curves (FCs) are the most common tool for quick seismic damage estimation and risk evaluation of the built environment, particularly important buildings and facilities. The use of these curves in seismic risk evaluation of essential facilities and lifeline systems has become very common in recent decades. However, there are usually vast dispersions in the source data used for developing these curves, which in turn lead to great uncertainty in seismic risk evaluation. These uncertainties can drastically affect the budget assigned to risk reduction programs, particularly for essential facilities and lifelines, whose seismic risk reduction is usually very costly and time consuming. The main reason behind the aforementioned dispersion, and its consequent uncertainty, is the use of a single parameter as the Intensity Measure (IM) in developing the FCs. The damageability of an earthquake for any structure obviously depends on several characteristics, including the intensity of the ground shaking in different directions, the frequency content of the ground vibrations and the effective duration of strong ground motion. It is therefore to be expected that using only one of these several effective indices will not yield highly reliable results. Despite this fact, and although various techniques have been proposed for decreasing the uncertainties of fragility functions, as explained in Section 3 hereinafter, almost all studies related to components of electric power systems have used FCs with a single IM. Since the source data for developing the FCs are highly dispersed, the resulting FCs have a low level of reliability.
This low reliability level, in turn, results in large uncertainties in the subsequent risk evaluations, which may even lead to wrong decisions by the seismic risk reduction authorities. To resolve this shortcoming, some researchers have tried, as addressed in Section 3 of this paper, to find ways of reducing the amount of dispersion in the source data. The source data may be either real data, obtained from field observations, or simulated data, produced by numerical response calculations such as time history analyses (THA). The dispersion issue is more crucial in the case of industrial plants' equipment, particularly in the power industry, for which (other than nuclear power) seismic FCs have rarely been developed.
Actually, very few attempts have been made to reduce the dispersion of the source data used for developing FCs, particularly in the case of electric power systems. Therefore, further endeavors toward dispersion reduction, and toward the development of more reliable fragility functions, are needed for such equipment. In the following parts of the paper, first a short history of the development of FCs for electrical equipment, particularly in substations, is presented. Then, attempts made at decreasing the uncertainties of FCs, or increasing their reliability level, are briefly addressed; based on the available publications, all except one are related to systems other than electric power systems. In the next stage, three methods are introduced for reducing the aforementioned dispersion in the source data. The suggested methods are then employed in developing FCs for two selected key pieces of equipment of electric power substations: a post insulator (PI), as a relatively high-frequency component, and a current transformer (CT), as a relatively moderate-frequency component. Finally, the FCs developed for the considered key structures based on the three proposed methods of dispersion reduction are compared to show the efficiency of the proposed methods.

A Brief History Of FCs Development For Electric Power Equipment
As one of the pioneering works on FCs of electrical equipment, Schiff and Newsom (1979) conducted a survey which showed that: 1) most fragility data are based on engineering estimates rather than on the results of tests or analysis; 2) for many types of equipment, estimates are quite speculative; and 3) the range of estimates for a given type of equipment, in the few cases where there is more than one, often varies by a factor of three or more. Obviously, risk estimation based on such fragility data would have a very low level of reliability.
Years later, Huo and Hwang (1995) and Hwang and Huo (1998) presented a seismic fragility analysis of equipment and structures in an electric substation in the eastern United States, using Substation 21 in Memphis as an example. Noting that seismic damage to electric facilities in the eastern United States has been rare and that information on dynamic testing of electric equipment similar to that installed in the substation is not available, they used an analytical approach to perform the fragility analysis. They noted that the analytical method may not cover all the possible failure modes of substation structures and equipment. It should be mentioned, however, that they used spectral seismic response analysis, not THA. Anagnos (1999) developed a database of electrical substation equipment performance to evaluate equipment fragilities. She considered the performance of substation equipment in 12 California earthquakes. The majority of the data were related to equipment operating at 220/230 kV and 500 kV. The purpose of the database was to provide a basis for developing or improving equipment vulnerability functions. The probabilities of failure were calculated by dividing the number of damaged items by the total number of items of that type at each site. Using peak ground acceleration (PGA) as the ground motion parameter, failure probabilities were compared with opinion-based FCs for a few selected equipment classes. The comparisons were somewhat crude, as the calculated failure probabilities did not include information about the mode of failure. They indicated that some of the existing FCs provide reasonable matches to the data, while others should be modified to better reflect the data.
Der Kiureghian (1999) also presented fragility estimates for electric substation equipment using a Bayesian approach based on observed performance during past earthquakes. He presented point estimates of the fragility based on maximum likelihood, posterior mean and predictive analysis, as well as confidence intervals on fragility that reflect statistical uncertainties. He accounted for errors in the measurement of ground motion intensity and for correlation between observations at the same substation.
In that study it was mentioned that the accuracy of the formulas proposed for the joint first-passage probability is highly dependent upon the accuracy of the marginal first-passage probability formulas. It was therefore recommended to improve the accuracy of the formulas for marginal and joint first-passage probability, especially for the case of strongly narrowband response.
In 2003, Shinozuka and colleagues presented experimental curves as two-parameter normal distribution functions, for which the data were obtained from the Kobe earthquake of 1995. Apart from the vulnerability of transformers, the seismic vulnerability of other equipment, such as circuit breakers and disconnect switches, was integrated into the analysis by using the corresponding FCs. Paolacci and Giannini (2005) evaluated the seismic fragility of electrical insulators, using spectral acceleration as the IM. Jaigirdar (2005) analyzed the seismic risk index of electric power substations of Hydro-Quebec. Noting that the vulnerability of substations largely depends on the performance of circuit breakers and control buildings during earthquakes, they identified the critical parameters responsible for the vulnerability of substations by statistical analysis of field data. They also studied the correlation of different parameters with vulnerability, the sensitivity of the weighting factors of critical parameters, and the sensitivity of seismic exposure levels in the seismic risk index by statistical analysis. Straub and Der Kiureghian (2008) worked on improved seismic fragility modeling from empirical data.
Their improved empirical fragility model addresses the statistical dependence among observations of seismic performance, which arises from common but unknown factors influencing the observations. The proposed model accounts for this dependence by explicitly including common variables in the formulation of the limit state for individual components. Additionally, the fact that observations of the same component during successive earthquakes are correlated is considered in the estimation of the model parameters. It should be noted that although the improved formulation proposed by Straub and Der Kiureghian (2008) can lead to significantly different fragility estimates than those obtained under the conventional assumption of statistical independence among the empirical observations, there is still a remarkable difference between the developed curves and the observed data. They expressed that consideration of statistical dependence among observations leads to larger uncertainty in the estimated fragility, because dependence among observations reduces the information content of the data. It follows that neglecting these dependences, as in the conventional approach, may lead to a serious underestimation of the statistical uncertainty and overconfidence in the estimated fragility. Bradley (2010) discussed the epistemic uncertainties in component fragility functions used in performance-based earthquake engineering. He expressed that there exist many uncertainties in the development of such fragility functions, and presented and discussed the sources of epistemic uncertainty in fragility functions, along with their consideration, combination, and propagation.
He used two empirical fragility functions presented in the literature to illustrate the epistemic uncertainty in the fragility function parameters due to the finite size of the datasets. Another case of large variation in fragility values can be found in the curves presented by Buriticá (2013), who conducted a study on seismic vulnerability assessment of power transmission networks based on a system-thinking approach, and presented a set of FCs for low-, medium- and high-voltage substations. Fig. 3 shows the curves presented for high-voltage substations. It is observed in Fig. 3 that there are remarkable differences between the FCs presented by HAZUS and the other curves. For example, the values of exceedance probability corresponding to 0.2g vary from around 5%, given by HAZUS, to almost 70%, presented by Hwang and Huo (1998). These variations and differences show the very low level of reliability of the FCs presented in almost all of the previous studies.
This low level of reliability has encouraged the designers of such systems in recent years to focus more on more reliable techniques, such as the use of dampers for seismic response reduction (Yue et al. 2019). A few reasons can be mentioned for the aforementioned low level of reliability of the FCs presented in previous works. One is the age, or the elapsed portion of the operational lifetime, of the equipment, which may make it more vulnerable to earthquakes. For example, the fatigue conditions resulting from Aeolian vibrations in the conductors of power transmission lines (Zhao et al. 2020) can create near-failure conditions in a conductor just before the earthquake. Another reason is believed to be the factors that can either increase or decrease the seismic response of the electric equipment, such as the coupling effect with adjacent equipment (Yang 2021). Finally, another reason, which is the main incentive for the present paper, is believed to be the high dispersion in the data used for this purpose. It would therefore be quite desirable to decrease this dispersion in some logical way. In the following sections of this paper, first the studies on improvement of fragility functions are briefly reviewed, and then three means are proposed for reducing the dispersion of the data used for the development of FCs.

A Short Review Of The Studies On Improvement Of Fragility Functions
In recent decades, a few techniques have been suggested for reducing the uncertainties involved in the development of fragility functions, or for increasing their reliability level. These techniques include the use of a fragility surface (FS) instead of an FC, or two- and multi-dimensional fragility functions, the use of the response surface approach, the use of vector-valued IMs, and the use of support vector machines. In this section a brief review of the studies conducted in this regard is presented. Grigoriu and Mostafa (2002) introduced the fragility surface (FS) as a more reliable measure of seismic performance to be used instead of the FC. It should be noted, however, that they expressed fragility as a function of two parameters, the earthquake magnitude and the epicentral distance, mentioning that these parameters provide a unique characterization of the ground motion, and that PGA can be unsatisfactory since events with the same PGA can cause very different damage. Yet their two suggested parameters give, in combination, the intensity of an earthquake at a given site, and are not actually two independent characteristics of the earthquake at that site.
Cimellaro and colleagues (2006) also worked on two-dimensional fragility of structures by considering spectral pseudo-acceleration and spectral displacement as IMs. However, they did not show to what extent using these two IMs improves the reliability level of the fragility function. Gehl and colleagues (2009) and Seyedi and colleagues (2010) discussed the use of fragility surfaces for more accurate modeling of the seismic vulnerability of reinforced concrete structures. For this purpose, they suggested associating the values of spectral displacement, Sd, at the first two modal periods T1 and T2 (Sd(T1), Sd(T2)) with the probability of exceeding a given damage level to develop the FS. They reported a significant reduction in the scatter in the fragility analysis by using the two mentioned parameters. Pan and colleagues (2010), in order to reduce some of the uncertainty inherent in the response to seismic loading, compared seismic FCs and FSs of multispan simply supported steel highway bridges in New York State, and claimed that FSs were more reliable than FCs. However, they used moment magnitude and epicentral distance as more readily available representative parameters of earthquakes. As mentioned above, these two parameters in combination basically give the intensity of the earthquake in the bedrock of each site around the causative fault, and are therefore not actually two independent IMs. Koutsourelakis (2010) assessed structural vulnerability against earthquakes using multi-dimensional FSs within a Bayesian framework to produce more accurate predictions irrespective of the amount of data available, and found that the Arias intensity (AI) in combination with the root mean square (RMS) intensity was statistically more informative than the PGA.
Gehl and colleagues (2011) developed fragility surfaces for more accurate seismic vulnerability assessment of masonry buildings based on peak ground displacement (PGD) and PGA as IMs, and made comparisons between the one-parameter fragility curve and "slices" of the fragility surface to show that the use of a second ground-motion parameter delivers a clearer definition of the vulnerability. Bojórquez and colleagues (2012) compared some vector-valued intensity measures for fragility analysis of steel frames in the case of narrow-band ground motions. All the vectors they considered had Sa(T1) as the first component; the compared second components were chosen among peak and integral parameters, the former representing the spectral shape in a range of periods, while the latter refer to the cumulative damage potential of earthquakes. As a result of the comparison, they observed that spectral-shape-based vector-valued IMs have the best explicative power with respect to seismic fragility estimation. Their analyses, even if limited to the peculiar ground motions considered, suggest that a recently proposed parameter, Np = Sa,avg(T1, …, TN)/Sa(T1), is especially promising as a candidate for the next generation of IMs when combined with spectral acceleration, and this appears independent of the type of seismic response measure considered. Xu and Wen (2012) also evaluated the seismic fragility of an RC frame structure using vector-valued IMs, considering Sa(T1) as the first parameter and either the peak ground velocity (PGV), spectral shape parameters, or Np as the second parameter, and confirmed the appropriateness of the latter. They claimed that FSs based on vector-valued IMs are better able to represent the damage potential of earthquakes than FCs.
Li and colleagues (2013) presented a fragility analysis methodology for the anti-sliding stability of a gravity dam based on a least-squares support vector machine, using the results of finite element analysis as learning samples. Wang and colleagues (2013) proposed a multi-dimensional fragility analysis of a bridge system under earthquake to incorporate uncertainties in ground motion and performance limit states. They obtained samples of maximum responses through nonlinear dynamic analysis to calculate the maximum likelihood estimators of the unknown parameters. Their results show that the multi-dimensional fragility of the bridge is higher than the component fragility. Sousa and colleagues (2014) developed multi-IM fragility functions for earthquake loss estimation of typical pre-code reinforced concrete buildings of Portugal. They considered Sa(T1), PGA, PGV, acceleration spectrum intensity (ASI), Housner intensity (HI), and spectral ordinates within the range of 0.05 to 3.0 seconds as IMs. Modica and Stafford (2014) presented vector FSs for reinforced concrete frames in Europe and reported compound values of Sa(T1) and Np as the most efficient vector. Grigoriu and colleagues (2014), in a study on seismic performance by FSs, showed that the characterization of seismic ground intensity by PGA and ordinates of response spectra provides insufficient detail about the time histories of ground shaking for constructing accurate fragilities, and that, therefore, additional information is needed to construct fragilities.
Mahmoudi and Chouinard (2016) assessed the seismic fragility of highway bridges using support vector machines, considering pairs of Sa and ε values, the latter being the number of logarithmic standard deviations by which a target ground motion differs from a median ground motion; ε is an indicator of spectral shape, as it shows whether the Sa at a specified period is at a peak or in a valley of the spectrum.
They demonstrated that support-vector-machine-based fragility curves provide more accurate predictions compared with rigorous methodologies, such as component-based fragilities developed by Monte Carlo simulations. Zareei and colleagues (2017) developed multivariate fragility functions for a 420 kV electric power circuit breaker, considering PGA and PGV as IMs. Their results showed that for the moderate damage state the fragility values do not depend much on the PGV variation, while for the severe damage state the dependence of the fragility values on PGV is noticeable, particularly for PGA values in the range of 0.1-0.7 g. Huang and colleagues (2017) modeled the seismic fragility of a rock mountain tunnel based on a support vector machine. They considered PGV and Arias intensity (AI), claiming that the latter indicates the energy of an earthquake better than PGA. Sainct and colleagues (2018) used active learning on support vector machines for more efficient seismic fragility curve estimation. They considered PGA, PGV, PGD and the total energy of the input excitation as the IMs. Jafarian and Miraei (2019) conducted scalar- and vector-valued fragility analyses of a gravity quay wall on liquefiable soil of Kobe port. They investigated 24 IMs of strong ground motions, of which PGV, cumulative absolute velocity (CAV), AI, specific energy density (SED), and velocity spectrum intensity (VSI) showed a strong correlation with the horizontal displacement of the quay wall, while PGA was among the moderately correlated IMs. Xu and Gardoni (2020) conducted multi-level, multi-variate, non-stationary random field modeling and fragility analysis of engineering systems, and implemented the formulation in the modeling of a deteriorating transportation system with eight nodes. They considered the magnitude Mw and the corresponding spectral acceleration at the critical node of the network as the intensity measures.
Hosseini and colleagues (2020) developed double-variable seismic fragility functions for oil refinery piping systems. They considered PGA and PGV as IMs, and modeled the piping system of the ISOMAX unit of the Tehran oil refinery in a powerful finite element analysis program under various loadings, including gravity, pressure and seismic loads. Their results indicated that using PGA and PGV jointly as the IMs in the development of fragility functions provides more reliable vulnerability estimations. For example, the single-IM fragility function gives, for PGA = 0.2 g, an exceedance probability of 75%, while with the double-IM fragility function this probability changes from 30% for PGV = 10 cm/s to 95% for PGV = 60 cm/s. Lei and colleagues (2021) assessed the seismic fragility of regional bridges using Bayesian multi-parameter estimation. They considered eight intensity measures from three aspects to represent the regional diversity of ground motions: PGA and CAV to represent the peak effect of the ground motion; acceleration spectrum intensity and spectral accelerations at 0.5 s, 1.0 s, and 3.0 s to show the spectrum intensity characteristics; and significant durations of 5%-75% and 5%-95% to represent the duration of strong motion. They compared their proposed multi-parameter seismic fragility models with Monte Carlo simulations, and reported considerable agreement between the results.
It is seen that although several studies have been conducted on increasing the precision of fragility estimations, none of them, except the one by the second author of this paper and his colleagues, is related to components of electric power systems. In fact, even the latest studies on fragility estimation of electric power equipment still work with single-IM FCs, and it seems that the tendency toward the use of FSs and multi-variable fragility functions has not yet been acknowledged by the professional community in this field. It therefore still seems necessary to propose possible ways of increasing the reliability level of the common FCs. The present study attempts to respond to this need by reducing the dispersion of the source data to be used for developing the FCs of two samples of electric power equipment.

Reducing The Dispersion Of The Source Data For FCs Development
In this section of the paper, three means are proposed for reducing the dispersion of the source data for developing the FCs. These means are: 1) using Sa(T1) instead of PGA as the IM, as recommended in many of the previous studies mentioned in Sections 2 and 3; 2) using Sa(T1)+Sa(T2) instead of PGA as the IM; and 3) using a set of scenario earthquakes for THA instead of randomly chosen records.
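For the first two means, the spectral ordinates Sa(T1) and Sa(T2) must be computed from each digitized accelerogram. The sketch below shows one standard way of doing so, assuming a 5%-damped linear SDOF oscillator integrated with the Newmark average-acceleration scheme; the harmonic "record" and the period T2 are illustrative assumptions, not data from this study.

```python
import numpy as np

def spectral_acceleration(ag, dt, T, zeta=0.05):
    """5%-damped pseudo-spectral acceleration Sa(T) of a unit-mass SDOF
    oscillator under ground acceleration ag (m/s^2), via Newmark
    average acceleration (beta = 1/4, gamma = 1/2)."""
    wn = 2.0 * np.pi / T
    m, c, k = 1.0, 2.0 * zeta * wn, wn ** 2
    beta, gamma = 0.25, 0.5
    a1 = m / (beta * dt ** 2) + c * gamma / (beta * dt)
    a2 = m / (beta * dt) + c * (gamma / beta - 1.0)
    a3 = m * (1.0 / (2.0 * beta) - 1.0) + c * dt * (gamma / (2.0 * beta) - 1.0)
    keff = k + a1
    u = v = a = 0.0
    umax = 0.0
    for agi in ag[1:]:
        p = -m * agi + a1 * u + a2 * v + a3 * a        # effective load
        unew = p / keff
        anew = ((unew - u) / (beta * dt ** 2)
                - v / (beta * dt) - (1.0 / (2.0 * beta) - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * anew)
        u, a = unew, anew
        umax = max(umax, abs(u))
    return wn ** 2 * umax                              # pseudo-acceleration

# Example with an assumed 1 Hz harmonic "record" of amplitude 0.3 g:
dt = 0.002
t = np.arange(0.0, 10.0, dt)
ag = 0.3 * 9.81 * np.sin(2.0 * np.pi * 1.0 * t)

T1, T2 = 0.05, 0.5        # T1 = 20 Hz (the PI); T2 is a hypothetical 2nd period
im_sa_t1 = spectral_acceleration(ag, dt, T1)
im_combined = im_sa_t1 + spectral_acceleration(ag, dt, T2)
```

The combined IM of the second means is then simply the sum of the two spectral ordinates, as used later for the CT.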
Although PGA is the most commonly used IM in the development of FCs, using Sa(T1) instead of PGA seems much more logical, since PGA is related only to the input earthquake and is not affected by the system characteristics, while Sa(T1) directly includes the effect of the natural period of the system, its main dynamic characteristic, in the fragility analysis. In the case of moderate-frequency structures, the seismic response is affected by the second mode as well as the first; therefore, using Sa(T1)+Sa(T2) seems to be more effective for dispersion reduction. With regard to the third method, it should be noted that the reliability and accuracy of digitized accelerograms depend, to a great extent, on their processing method. A variety of signal processing tools has been developed for this purpose; however, how these tools affect the seismic structural responses is an issue that has been discussed only in recent years. In previous research, such as that conducted by Iervolino and Cornell (2005) and Hatefi Ardekani (2010), special attention has been paid to filtering, which is known as the most effective tool in signal processing. The results of those studies show that the filtering effect on nonlinear response analysis is greater than on linear analysis, and can change the nonlinear response values drastically. Preliminary research also shows that the processing method can significantly affect the predicted collapse capacity of structures in incremental dynamic analysis (IDA), as addressed by Hatefi Ardekani and Ghafory-Ashtiany (2015). They processed 66 scenario events to obtain the collapse level of a SDOF system by using nonlinear THA. Results of those analyses are shown in Fig. 4.
It can be observed in Fig. 4 that by using a single record and eight processing methods, the collapse capacity of the structure for a ductility factor (µ) of 10, as an example, lies in a range from 0.5 to 0.9, while this range extends from 0.25 to around 2.10 when the 66 accelerograms proposed by Hatefi Ardekani and Ghafory-Ashtiany (2015), based on appropriate signal processing, are used. This sensitivity is related to the dispersion of the IM values of the selected accelerograms. Given the strong effect of the signal processing method on the obtained collapse capacity, comprehensive investigations of these effects on the responses obtained by THA are necessary.
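The collapse-capacity calculation referred to above can be illustrated with a toy incremental dynamic analysis: a record is scaled up until the ductility demand of a nonlinear SDOF system reaches a limit (µ = 10 in Fig. 4). Everything below is a minimal sketch under assumed values: an elastic-perfectly-plastic oscillator with an arbitrary period and yield displacement, and a harmonic stand-in record, not the authors' 66 processed accelerograms.

```python
import numpy as np

def peak_ductility(ag, dt, T=0.5, zeta=0.05, uy=0.01, scale=1.0):
    """Peak displacement ductility mu = max|u|/uy of an elastic-perfectly-
    plastic SDOF system (unit mass) under the scaled record, integrated
    with a simple semi-implicit explicit scheme."""
    wn = 2.0 * np.pi / T
    k, c, fy = wn ** 2, 2.0 * zeta * wn, wn ** 2 * uy
    u = v = fs = 0.0
    umax = 0.0
    for agi in ag:
        a = -scale * agi - c * v - fs          # unit mass equation of motion
        v += a * dt
        du = v * dt
        u += du
        fs = max(-fy, min(fy, fs + k * du))    # EPP spring with return mapping
        umax = max(umax, abs(u))
    return umax / uy

def collapse_scale(ag, dt, mu_limit=10.0, tol=0.01):
    """Smallest record scale factor whose ductility demand reaches mu_limit,
    found by doubling to bracket and then bisecting (a toy IDA)."""
    lo, hi = 0.0, 0.1
    while peak_ductility(ag, dt, scale=hi) < mu_limit:
        lo, hi = hi, hi * 2.0
        if hi > 1e4:
            return None                        # no collapse within range
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if peak_ductility(ag, dt, scale=mid) < mu_limit:
            lo = mid
        else:
            hi = mid
    return hi

dt = 0.002
t = np.arange(0.0, 10.0, dt)
ag = 9.81 * np.sin(2.0 * np.pi * 1.0 * t)      # assumed 1 g harmonic record
```

In the actual study, a full nonlinear THA would replace `peak_ductility`, and the scale at collapse, expressed through the chosen IM, gives one point of the IDA-based capacity range shown in Fig. 4.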
Considering the magnitudes of destructive earthquakes recorded in Iran, which are between 6 and 6.5, an M = 6.5 scenario database seems to be the most suitable choice for Iran. Criteria considered for the selection of the scenario earthquakes include:

The 1st Case Study: FCs Dispersion Reduction For A 63 kV Post Insulator (PI)
In this section, the three proposed methods for reducing the dispersion of the results obtained by linear THA, for development of more reliable FCs, are applied to a PI as a sample structure, to determine how efficient each of these methods is in dispersion reduction. First, the considered PI is introduced, and its conventional FCs, developed by employing a set of 75 earthquake records from the PEER website for THA and using the PGA as the IM, are presented. In the second and third stages, the FCs developed by employing the same 75 records, but using Sa(T1) instead of PGA as the IM, are presented and the results are compared. In the fourth stage, the FCs developed by employing a different set of 66 earthquake records, selected based on an M = 6.5 scenario, and using PGA as well as Sa(T1) as the IM, are presented, and the results are compared with those obtained with the previous set of earthquake records.

Characteristics of the 63 kV PI and Developing Its Conventional FCs
The selected 63 kV PI has been manufactured by Iran Insulator Company (IIC). Fig. 5 shows the geometric details of the insulator used for its modeling. Fig. 6 shows the plot of σmax values obtained from the 75 employed records in 13 different PGA categories, in which the curves of the mean values, as well as the mean plus and minus one standard deviation, are also shown.
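The binning-and-statistics procedure behind a plot like Fig. 6 can be sketched as below. The PGA and σmax arrays are random stand-ins for the 75 records of Table A-1 (the real values are not reproduced here); only the mechanics of forming 13 IM categories and the mean and mean ± one standard deviation curves are illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for Table A-1 (illustrative only, not the paper's data):
pga = rng.uniform(0.05, 1.5, 75)                        # IM of each record, in g
sigma_max = 20.0 * pga * rng.lognormal(0.0, 0.5, 75)    # max stress from THA, MPa

# Sort the records into 13 IM categories and summarize each category.
edges = np.linspace(pga.min(), pga.max(), 14)
idx = np.clip(np.digitize(pga, edges) - 1, 0, 12)
means = np.array([sigma_max[idx == b].mean() if (idx == b).any() else np.nan
                  for b in range(13)])
stds = np.array([sigma_max[idx == b].std(ddof=1) if (idx == b).sum() > 1 else 0.0
                 for b in range(13)])
upper, lower = means + stds, means - stds               # mean +/- one-std curves
```

Plotting `means`, `upper` and `lower` against the bin centers reproduces the three curves of Fig. 6; the vertical spread of `sigma_max` within each bin is the dispersion this paper aims to reduce.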

Developing FCs of the PI by Using PEER Records and PGA as the IM
The high dispersion of the data in Fig. 6 is noticeable. For developing the FCs, as is common, the PGA values of the earthquake records have been used as the 'ground motion parameter' and the maximum stress values as the response parameter. It is seen in Fig. 7 that there is a relatively high dispersion in the data on which the conventional FCs have been based. For example, based on the obtained data, the exceedance probability for a PGA value of 0.97g is 0.67 (point A in Fig. 7), while for a PGA value of 1.5g it is 0.40 (point B in Fig. 7); the FC, however, gives exceedance probabilities of around 0.48 and 0.74 for those PGA values, respectively.

5.3. Developing FCs of the PI by Using PEER Records and Sa(T1) as the IM

Sa(T1) values of the 75 employed records, obtained from their acceleration response spectra for T1 = 0.05 sec, corresponding to a frequency of 20 Hz (the fundamental frequency of the PI, obtained from the test) (Hosseini et al., 2010), are also given in Table A-1 (Appendix). For using Sa(T1) as the IM, the data in Table A-1 were sorted based on this parameter, and again 13 classes were formed, with some slight modifications in a few cases. Fig. 8 shows the plot of σmax values obtained from the 75 employed records in thirteen different Sa(T1) categories, in which the curves of the mean values, as well as the mean plus and minus one standard deviation, are also shown.
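The curve-fitting step that turns binned exceedance counts into an FC (as in Figs. 7 and 9) can be sketched as a two-parameter lognormal fit. The bin data below are invented for illustration; in the study, "exceedance" would mean σmax passing a damage-index threshold such as DI = 11.72 MPa, and the fit is a straightforward binomial maximum-likelihood estimate of the median θ and logarithmic standard deviation β over a coarse grid.

```python
import numpy as np
from math import erf, sqrt, log

def lognormal_cdf(x, theta, beta):
    """P(exceedance | IM = x) for a lognormal fragility with median theta
    and logarithmic standard deviation beta."""
    return 0.5 * (1.0 + erf(log(x / theta) / (beta * sqrt(2.0))))

def fit_fragility(im, n_total, n_exceed):
    """Binomial maximum-likelihood fit of (theta, beta) to binned counts:
    at IM level im[j], n_exceed[j] of n_total[j] analyses exceeded the DI."""
    best, best_ll = None, -np.inf
    for theta in np.linspace(0.05, 3.0, 120):          # coarse grid search
        for beta in np.linspace(0.05, 1.5, 60):
            p = np.array([lognormal_cdf(x, theta, beta) for x in im])
            p = np.clip(p, 1e-9, 1.0 - 1e-9)
            ll = np.sum(n_exceed * np.log(p)
                        + (n_total - n_exceed) * np.log(1.0 - p))
            if ll > best_ll:
                best, best_ll = (theta, beta), ll
    return best

# Illustrative bin data (NOT the paper's values): IM levels in g,
# analyses per bin, and how many of them exceeded the damage index.
im_levels = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3])
n_total = np.array([6, 6, 6, 6, 6, 6, 6])
n_exceed = np.array([0, 1, 2, 3, 4, 5, 6])
theta, beta = fit_fragility(im_levels, n_total, n_exceed)
```

The fitted β is itself a direct readout of the data dispersion: the tighter the source data, the smaller the β of the resulting FC.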
Comparison of Figs. 6 and 8 shows that by using Sa(T1) as the IM, the dispersion of the source data is reduced to some extent. The FCs developed based on the source data shown in Fig. 8 are presented in Fig. 9, along with the source data points.
Comparing Figs. 7 and 9 shows the relatively lower dispersion of the data when Sa(T1) is used as the IM for the DI value of 11.72 MPa, although no dispersion reduction is observed for the DI value of 23.44 MPa.

5.4. Developing FCs for the PI by Using Scenario Events and PGA as the IM

Given the high sensitivity of THA results to the characteristics of the employed records, the authors decided to use a different set of records, instead of the selected set of PEER events, to achieve further dispersion reduction in the resulting data. For this purpose, the set of 66 scenario events processed and suggested by Hatefi Ardekani and Ghafory-Ashtiany (2015) was considered and prepared with some minor modifications (Table A). Comparing Fig. 10 with Figs. 6 and 8, a remarkable dispersion reduction in the THA data can be observed. Furthermore, the ascending trend of the mean value curve in Fig. 10 is much more logical than that in Figs. 6 and 8. It is also seen that the FCs shown in Fig. 11 have a more logical curvature for the fragile structure of the PI.

5.5. Developing FCs by Using Scenario Events and Sa(T1) as the IM

Given the dispersion reduction achieved by using Sa(T1) instead of PGA as the IM in the last step of FCs development, the same 66 scenario records were sorted once more, based on their Sa(T1) values, into 11 classes, which resulted in the data plotted in Fig. 12 and the FCs shown in Fig. 13.
Comparing Figs. 12 and 10, a slight further reduction in the dispersion of the resulting data is seen. This slight improvement is also observed in the developed FCs, by comparing Figs. 13 and 11.

The 2nd Case Study: FCs Dispersion Reduction for the 230 kV Current Transformer (CT)
In this section the first two methods for reducing the dispersion of the results obtained by THA for development of FCs, namely using Sa(T1) as the IM and using Sa(T1)+Sa(T2) as the IM, are compared for developing the FCs of a 230 kV CT, as another sample structure of electric power substations. First, the considered CT is introduced, and its conventional FCs, using PGA as the IM, developed by employing a set of 67 PEER earthquake records for THA, selected so that their dominant frequencies are close to that of the CT, are presented. In the second and third stages, the FCs developed by employing the same 67 records, but once using Sa(T1) and once more Sa(T1)+Sa(T2) as the IM, are presented and the results are compared.

The selected 230 kV lower-core CT was manufactured by Mitsubishi Company in 1978, with serial number TPS-230-CT01. This type of CT is very common in power substations. The weights of the CT and its base structure are, respectively, 1811 kgf and 235 kgf, for a total of 2046 kgf. The CT and its base truss structure have a total height of more than 6 m. Fig. 14 shows the details of the considered CT and its base truss structure (see also Table 1).

The relatively high dispersion of the data in Fig. 15 is noticeable. This high dispersion has resulted, as expected, in relatively high dispersion in the data points used for developing the FCs shown in Fig. 16.

Developing FCs by Using PEER Records and Sa(T1) as the IM
To find out whether using Sa(T1) can decrease the dispersion of the data obtained from THA for developing the FCs, the 67 selected records were categorized based on Sa(T1) into thirteen groups, and the maximum stress values were plotted as shown in Fig. 17.
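The categorization step can be sketched as follows. The (Sa(T1), maximum stress) pairs below are randomly generated stand-ins for the 67-record data set, and the class edges are assumed; the mean and mean ± one standard deviation per class mirror the curves plotted in the figures:

```python
import numpy as np

# Hypothetical THA results: one (Sa(T1), max stress) pair per record
# (illustrative stand-ins, not the paper's data).
rng = np.random.default_rng(0)
sa_t1 = rng.uniform(0.1, 2.0, 67)                     # IM values [g]
max_stress = 40.0 * sa_t1 + rng.normal(0.0, 6.0, 67)  # response [MPa]

# Categorize into thirteen IM classes and report mean +/- one standard
# deviation of the response within each class.
edges = np.linspace(0.1, 2.0, 14)
for lo, hi in zip(edges[:-1], edges[1:]):
    in_class = max_stress[(sa_t1 >= lo) & (sa_t1 < hi)]
    if in_class.size:
        m = in_class.mean()
        s = in_class.std(ddof=1) if in_class.size > 1 else 0.0
        print(f"Sa(T1) in [{lo:.2f}, {hi:.2f}) g: "
              f"mean = {m:.1f} MPa, mean±σ = [{m - s:.1f}, {m + s:.1f}] MPa")
```

The within-class standard deviation is the dispersion being compared across Figs. 15, 17 and 19: a better IM shrinks it without changing the records themselves.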
Comparing Figs. 17 and 15, one can see that using Sa(T1) decreases the dispersion of the THA results to a great extent. However, relatively high dispersion is still observed in some categories, such as Sa(T1) = 1.8 g. The FCs obtained based on the data shown in Fig. 17 are presented in Fig. 18.

6.3. Developing FCs by Using PEER Records and Sa(T1)+Sa(T2) as the IM

As the last attempt at decreasing the dispersion of THA data for developing FCs, the values of Sa(T1)+Sa(T2) were used as the IM. It should be noted that the first and second modes of the CT occur in two planes perpendicular to each other, and therefore both of them can be fully excited by earthquake ground motion. On this basis, the mode participation factor can be considered as unity for both of them. Fig. 19 shows the plot of maximum stress values obtained from THA of the CT, categorized based on Sa(T1)+Sa(T2). Comparing Fig. 19 with Figs. 15 and 17, it is seen that using Sa(T1)+Sa(T2) as the IM not only decreases the dispersion of the THA data, but also creates a smoother trend in the variation of maximum stress values versus the IM. The FCs obtained based on the data shown in Fig. 19 are presented in Fig. 20.
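Forming the combined IM for one record can be sketched as below. The response spectrum values and the modal periods T1 and T2 are illustrative assumptions (the paper does not list them here); with the two modes in perpendicular planes and participation factors taken as unity, as argued above, the combined IM is a plain sum of the two spectral ordinates:

```python
import numpy as np

# Hypothetical 5%-damped response spectrum of one record: periods [s] vs Sa [g].
periods = np.array([0.02, 0.05, 0.10, 0.20, 0.30, 0.50, 1.00, 2.00])
sa      = np.array([0.35, 0.50, 0.80, 1.10, 0.95, 0.60, 0.30, 0.12])

# Assumed first- and second-mode periods of the CT (illustrative values).
T1, T2 = 0.30, 0.10

# Interpolate the spectrum at the two modal periods and sum them.
sa_t1 = np.interp(T1, periods, sa)
sa_t2 = np.interp(T2, periods, sa)
im_combined = sa_t1 + sa_t2
print(f"Sa(T1) = {sa_t1:.2f} g, Sa(T2) = {sa_t2:.2f} g, "
      f"Sa(T1)+Sa(T2) = {im_combined:.2f} g")
```

Records are then sorted and classed by `im_combined` exactly as they were by Sa(T1) in the previous stage.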
Comparison of Figs. 18 and 20 shows the reduction in the dispersion of the data used for developing the FCs that results from using Sa(T1)+Sa(T2) instead of Sa(T1) as the IM. This is particularly true for structures like the considered CT, which have moderate fundamental frequencies.

Discussion and Conclusion
Comparing the FCs of the PI, shown in Figs. 7, 9, 11 and 13, and of the CT, shown in Figs. 16, 18 and 20, which have been developed based on THA data obtained with different sets of accelerograms and processed based on various IMs, one can realize that using Sa(T1) or Sa(T1)+Sa(T2) as the IM, on the one hand, and using scenario earthquakes instead of randomly chosen records, on the other, lead to more consistent source data and, therefore, more reliable FCs. In fact, as both considered equipment items (PI and CT) have brittle behavior, it is expected that their FCs have a steep slope around some specific IM value. This steep slope is not observed in the previously developed FCs based on PGA as the IM, particularly those using randomly selected accelerograms. As a final step, the developed FCs are compared with the empirical ones, developed based on field data, as shown in Fig. 21, which presents the FCs of the CT, developed by using PGA as the IM, alongside those developed based on field data.
It is seen in Fig. 21 that there is good agreement between the FCs developed in this study and those developed based on field data, particularly those related to low seismic design. Based on this study, using scenario earthquakes for seismic analyses and Sa(T1) or Sa(T1)+Sa(T2) as the IM can be recommended to achieve more reliable FCs. However, further studies are still necessary to reach the desired level of reliability. Using more than one IM and developing multi-variable fragility functions can possibly be a good idea in this regard.
Histogram of dispersion (left) and individual and mean ± one standard deviation fragilities (right), discussed by Bradley (2010)

Collapse capacity dispersion of a SDOF system obtained by nonlinear THA, using 66 records (left), and using a single accelerogram with eight processing conditions (right) (Hatefi Ardakani, 2010)

The exceedance probability data and the Sa(T1)-based FCs developed based on them for the 63 kV PI, for the two considered DI values of 11.72 MPa (left) and 23.44 MPa (right)

Figure 10
The maximum stress values in the PI, obtained by using the 66 scenario records, categorized around eleven PGA values

Figure 11
The exceedance probability data and the PGA-based FCs of the 63 kV PI, developed by using the 66 scenario events, for the two considered DI values of 11.72 MPa (left) and 23.44 MPa (right)

Figure 12
The maximum stress values in the PI, obtained by THA using the 66 scenario records, categorized around eleven Sa(T1) values

Figure 13
The exceedance probability data and the Sa(T1)-based FCs of the 63 kV PI, developed by using the 66 scenario events, for the two considered DI values of 11.72 MPa (left) and 23.44 MPa (right)

Figure 14
The overall view and details of the 230 kV CT used for developing the FCs in this study (Mohammadpour, 2010)

The exceedance probability data and the PGA-based FCs of the 230 kV CT, developed by using the 67 PEER records, for the two DI values of 11.72 MPa (left) and 23.44 MPa (right)

Figure 17
The maximum stress values in the CT, obtained by THA using the 67 PEER records, categorized around thirteen Sa(T1) values

The maximum stress values obtained by THA of the CT by using the 67 selected records, in thirteen Sa(T1)+Sa(T2) categories

Figure 20
The exceedance probability data and the Sa(T1)+Sa(T2)-based FCs of the 230 kV CT, developed by using the 67 PEER events, for the two DI values of 11.72 MPa (left) and 23.44 MPa (right)