COMPREHENSIVE ANALYSIS ON SEEPAGE AND STRUCTURAL STABILITY OF EARTH-ROCK DAM: A CASE STUDY OF XIQUANYAN DAM IN CHINA

Earth-rock dams are widely used in high-dam engineering around the world. It is generally accepted that the analysis of structural and seepage stability plays a very important role and must be taken into account when designing an earth-rock dam. Many remarkable methods are available for such analyses at the current stage. However, some important issues remain unsolved: (1) The finite element method (FEM) is a standard means of analysing the seepage process, but determining the so-called seepage coefficient is often difficult, because the commonly used water-injection test is of limited use in practice due to its high cost and complex procedure. (2) It has long been recognized that the key parameters for structural stability analysis show significant spatial and temporal variations, which may be partly explained by inhomogeneous dam filling during construction and by the developing seepage process; consequently, a single constant value cannot represent these variations. In this context, we address both issues and illustrate the solution with a practical earth-rock dam project. To determine the seepage coefficient, data from piezometric tubes are used to calculate the potential values, from which the seepage coefficient is back-analysed. The seepage field and the seepage stability are then numerically analysed using the FEM-based SEEP/W program. As to structural safety, we take into account the spatial and temporal variations of the key parameters and incorporate Monte Carlo simulation into the commonly used Morgenstern-Price (M-P) method to calculate the frequency distribution of the structural safety factor. In this way, both structural and seepage safety can be analysed thoroughly. This study also provides a mature method and a theoretical insight for earth-rock dam design.
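As a rough illustration of the idea of propagating parameter variability into a safety-factor distribution, the sketch below samples cohesion and friction angle from normal distributions and evaluates a simplified infinite-slope factor of safety (a stand-in for the full M-P limit-equilibrium method; all parameter values and distributions are hypothetical):

```python
import math
import random

def safety_factor(c, phi_deg, gamma=18.0, h=10.0, beta_deg=30.0):
    """Simplified infinite-slope factor of safety (an illustrative stand-in
    for the Morgenstern-Price method): resisting over driving stresses.
    c [kPa], gamma [kN/m^3], h [m], beta = slope angle [deg]."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + gamma * h * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * h * math.sin(beta) * math.cos(beta)
    return resisting / driving

def monte_carlo_fs(n=20000, seed=1):
    """Sample c and phi from normal distributions (hypothetical statistics)
    to reflect spatial/temporal variability of the shear-strength
    parameters, and return the resulting safety-factor sample."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        c = max(0.0, rng.gauss(15.0, 3.0))    # cohesion [kPa], assumed stats
        phi = max(1.0, rng.gauss(28.0, 2.5))  # friction angle [deg], assumed
        samples.append(safety_factor(c, phi))
    return samples

fs = monte_carlo_fs()
mean_fs = sum(fs) / len(fs)
p_failure = sum(1 for f in fs if f < 1.0) / len(fs)
```

The frequency distribution of `fs` plays the role of the safety-factor distribution described in the abstract; in the actual study the deterministic kernel would be the M-P method rather than the infinite-slope formula.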
The earth-rock dam of Xiquanyan Reservoir near Harbin, China, is selected as a case study to illustrate the described method. (Article no. 9, THE CIVIL ENGINEERING JOURNAL 2-2016, DOI 10.14311/CEJ.2016.02.0009)


INTRODUCTION
A vast amount of sediment can be transported during flash flood events in mountain streams and rivers. The transported sediment is broadly graded and includes coarse fractions forming intense bed load. At the high bed shear typical of flood conditions, the transport is associated with the regime of the upper-stage plane bed (UPB). In the UPB regime, the top of the eroded bed is plane (no bed forms occur) and the flow above the bed is stratified [1][2]. Basically, the flow structure is composed of three dominant layers: the bed (deposit) at the bottom, the transport layer adjacent to the bed, and the water layer below the water surface (Figure 1). A majority of the bed load is transported in the transport layer, where individual particles are subject to intense collisions with surrounding particles. Occasionally, some particles are kicked out of the transport layer into the water layer and spend some time there before they rejoin the collisional transport layer. In the UPB regime, the resistance of flow carrying bed load is due primarily to the interaction of transported particles with the top of the plane bed [1].

Materials
Five fractions of model sediment were used: two fractions of glass beads of different sizes (TK30, TK1216) and three fractions of plastic granulate (HSF3, TLT25 and TLT50), which differed from each other in size and shape. Properties of the fractions are given in Table 1 (vt denotes the terminal settling velocity of a particle).

Procedures
In total, 216 test runs were carried out in the UPB regime, of which 49 runs represented conditions at the limits of the UPB regime. The following criteria were applied to recognize the limits of the UPB regime. In the literature, the lower limit of the UPB regime is the threshold at the transition to the regime of dunes. In our experiments, however, the transition to dunes was not abrupt, and incipient bed undulation took the form of intermittent movement of clusters of particles at the top of the plane bed rather than that of regular dunes. Hence, we decided to define the development of a continuous transport layer at the top of the plane bed as the criterion for the threshold. Usually, the transport layer was two to three particle diameters thick when it became continuous.
For the upper limit of the UPB regime, the literature suggests a transition to the regime of antidunes. Again, our observations indicated a slightly different flow pattern at the transition. We observed neither typical standing waves nor downstream/upstream migrating antidunes. Instead, we found that the bottom of the transport layer became unstable and started to follow wavy (approximately sinusoidal) paths rather than the straight paths typical of the UPB regime. The particle paths seemed to follow the shape of the water surface (the flow was super-critical and the water surface undulated). The top of the bed went through periods of local erosion and deposition synchronized with the wavy shapes of the particle paths. We called this condition a wavy flow and considered an incipient development of the wavy flow as the criterion for the threshold of the UPB regime. The upper limit of the UPB regime could not be reached for every mixture discharge in the flume, due to limitations given by flow conditions in the connecting pipes of the recirculating system (a danger of deposition of sediment and clogging of the pipes) and the limited capacity of the flume inlet.

RESULTS
Based on the criteria described above, an experimental data set with points at both thresholds of the UPB regime was collected. Flow conditions at the threshold were evaluated using a modified Shields-Parker diagram with the aim of confirming the bed-load character of the sediment transport. The diagram sorts out flow conditions and bed forms on the basis of two dimensionless groups, the Shields parameter (θ) and the shear Reynolds number (Re*). The groups θ and Re* are defined respectively as

θ = τb / ((ρs − ρ) g d)    (1)

Re* = u* d / ν    (2)

where τb is the bed shear stress, u* = √(τb/ρ) is the shear velocity, d is the particle diameter, ρs and ρ are the densities of sediment and water, g is the gravitational acceleration and ν is the kinematic viscosity of water. The evaluation confirmed an absence of suspended load in flows at threshold conditions (Figure 3). Also, the plot shows that the upper threshold of the UPB regime (the transition between the UPB and wavy flow) cannot be associated with a constant value of θ as suggested in [4]. Figure 4 plots the thresholds at different total discharges of mixture, Q, against the corresponding slope of the energy grade line (i.e. the longitudinal slope of the bed in the observed uniform flows), Ie. The plot reveals the interesting fact that for both the plastic and the glass fractions the lower threshold occurs at an approximately constant slope, regardless of the magnitude of the total discharge. It also shows that the threshold slope is sensitive to sediment density, with a higher slope for the material of higher density. Our tests revealed a very tight correlation between the slope of the energy grade line and the average delivered (transport) concentration of sediment in the flow for all tested sediment fractions [6]. The delivered concentration is defined as the ratio of the volumetric discharge of sediment to the total volumetric discharge. Hence, the lower threshold seems to occur at a constant value of the ratio of the sediment and mixture discharges. For the upper threshold of the UPB regime, the plot in Figure 4 indicates that a higher slope is required to reach the threshold at a lower total discharge.
Thus for low total discharges, the threshold occurs at relatively high delivered concentration and hence at high proportional transport of bed load. It seems that intense transport of bed load stabilizes the UPB regime because the developed collisional transport layer protects the top of the bed from an undulating influence of the water surface.
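For orientation, the two dimensionless groups can be evaluated as follows (a minimal sketch; the flow condition and material values are hypothetical, chosen only to illustrate the definitions of θ and Re* given above):

```python
import math

RHO_W = 1000.0   # water density [kg/m^3]
NU = 1.0e-6      # kinematic viscosity of water [m^2/s]
G = 9.81         # gravitational acceleration [m/s^2]

def shields_parameter(tau_b, rho_s, d):
    """Shields parameter: theta = tau_b / ((rho_s - rho) * g * d)."""
    return tau_b / ((rho_s - RHO_W) * G * d)

def shear_reynolds(tau_b, d):
    """Shear Reynolds number: Re* = u* d / nu, with u* = sqrt(tau_b / rho)."""
    u_star = math.sqrt(tau_b / RHO_W)
    return u_star * d / NU

# hypothetical flood-flow condition: bed shear 2 Pa, 1 mm glass beads
theta = shields_parameter(tau_b=2.0, rho_s=2650.0, d=0.001)
re_star = shear_reynolds(tau_b=2.0, d=0.001)
```

A point (Re*, θ) computed this way can be placed directly in the modified Shields-Parker diagram to classify the transport mode.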
In the literature, the transition from the UPB regime to antidunes is often related to the transition from sub-critical to super-critical flow, i.e. flow at Froude number Fr = 1 (e.g., 1.0-1.2 in [4]). In our experiments, we take the side-wall effect into account in the calculation of the Froude number, so that

Fr = V / √(g Rb)    (3)

where V is the mean flow velocity and Rb is the hydraulic radius of the part of the discharge area associated with the bed.
In Figure 5, Fr is plotted against θ, and the plot shows that the upper threshold lies at an approximately constant value of Fr slightly higher than one only for the plastic sediments. The threshold for the glass sediments corresponds to considerably higher values of Fr, say between 1.5 and 2. Following our hypothesis of a collisional layer protecting the top of the bed from the undulation of the water surface in super-critical flow, it seems that a collisional layer composed of particles of higher density is more effective in protecting the bed from the undulation than a layer composed of lightweight particles.

Fig. 5 - Relation between Froude number and Shields parameter. Legend: as in Figure 3.
The question arises: how should the effect of different particle densities be included? We examined a number of regime maps in the literature that include the effect of ρs and found that our data was either outside the range of a map or the map thresholds did not match our observed thresholds.
We decided to employ the densimetric Froude number in the form commonly used to determine the deposition-limit velocity for solid-liquid flows in pressurized pipes [7]. Modified simply for conditions in open-channel flow, the densimetric Froude number reads

Frd = V / √(g Rb (s − 1))    (4)

in which the relative density s = ρs/ρ.
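The two Froude numbers can be evaluated side by side (a minimal sketch; the flow condition and the material densities are hypothetical, and the open-channel form of the densimetric Froude number is assumed here to use the bed-associated hydraulic radius Rb):

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def froude(v, r_b):
    """Eq. (3): Froude number with side-wall correction,
    Fr = V / sqrt(g * Rb), Rb = bed-associated hydraulic radius [m]."""
    return v / math.sqrt(G * r_b)

def densimetric_froude(v, r_b, rho_s, rho=1000.0):
    """Densimetric Froude number, Frd = V / sqrt(g * Rb * (s - 1)),
    with relative density s = rho_s / rho (open-channel form assumed)."""
    s = rho_s / rho
    return v / math.sqrt(G * r_b * (s - 1.0))

v, r_b = 1.2, 0.08                                 # hypothetical flume condition
fr = froude(v, r_b)
frd_glass = densimetric_froude(v, r_b, 2480.0)     # glass beads (assumed density)
frd_plastic = densimetric_froude(v, r_b, 1380.0)   # plastic granulate (assumed density)
```

The density term makes Frd larger for the lightweight material at the same flow condition, which is the behaviour the regime-map comparison above is trying to capture.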

CONCLUSIONS
Laboratory experiments in a tilting flume enabled us to investigate the thresholds of the upper-stage plane-bed regime and their sensitivity to sediment size, density and transport rate. The lower threshold (at the transition to the regime with intermittent bed-load transport) is well represented by a constant bed slope and thus by a constant value of the ratio of the volumetric discharge of sediment to the volumetric discharge of mixture. Threshold values are sensitive to sediment density.

INTRODUCTION
Some risks can be found in every human activity, and civil engineering is no exception; indeed, the space for risk is even bigger than in other areas of human activity. Undesirable events that are not included in the planning, the construction or the delivery of the final building can only be estimated with a certain probability, and the consequences of such non-expected events incur damage, so it is very important to be secure in these cases. The typical risks of a design-bid-build (hereafter referred to as DBB) project are different from those of a design-build (hereafter DB) project [1]. Even though DBB is the classic way of construction, it brings higher costs related to project realization, caused by the delay between project inception and the choice of the contractor. By using DB, this delay can be eliminated; the discrepancies are thus reduced and the transactional costs related to finding a suitable contractor are lower. Nevertheless, both methods contain some hidden threats. Traditional construction procurement approaches try to find adequate construction and building

RISK EVALUATION IN DESIGN-BUILD PROJECTS
Design-Build projects can be risky for both the ordering party (public procurement) and the contracting party. It is desirable to evaluate the risk by analysis [5]; the risk strategy is a key part of each project. The goal of the research is to confirm that there is a real reduction of risk for the public sector and to quantify that reduction. The research also shows the parts of the project where the public sector can achieve the greatest reduction, and the parts that are not influenced by the usage of the design-build scheme. This can be very useful for authorities planning a new project, because they will know where they need to be careful so they can achieve the largest net income. All the research has been done in the Czech Republic, which is specific in that there is almost no usage of design-build in the public sector, even though the private sector is familiar with the scheme. The motivation is to show the advantages of design-build to public authorities [6].

Risk management methods
It is very important to understand the main characteristics of risk management in order to be able to decrease the risk level. Risk management is the process whereby the managing subject makes an effort to eliminate the influence of existing and future risks and designs arrangements to remove the non-desirable influences where possible; simultaneously, any positive influence is exploited. The analysis of non-desirable influences and risk monitoring belong among the risk management processes. By using risk analysis, every risk can be identified, and the probability of expected damages and the risk responses can be considered. Risk monitoring means continual observation to see whether the risk level is unchanged and whether the prospective arrangement needs to be realised as a response to the risk expectation.
Thanks to risk analysis, potential threats can be found and, above all, suitable reactions and arrangements can be put in place to reduce them. The probability and the possible damage must be defined. Risk identification techniques include, for example, documentation review, brainstorming, Delphi, the nominal group method, interviews with experts and other methods. Once a risk is identified and considered, the probability of its occurrence and its negative influence on the whole project are estimated. The evaluation can be qualitative (verbal value) or quantitative (numerical value). The target is to create an arrangement that decreases the probability of risk occurrence to an acceptable level. To be able to check the effectiveness of the arrangements, it is fundamental to follow construction rules.
Different methods can be used to create the risk analysis. The methods are divided into two groups:
- The methods of risk analysis concerning the project product
- The methods of risk analysis concerning the project management

Research method
A research method called RIPRAN has been used, modified for the evaluation of the risks [7]. This method is designed for the evaluation and reduction of project risk in various sectors, and it is suitable for every phase of an ongoing project [8]. The basic phases of the method form a process in which each phase is connected to the next. Among the basic phases are: the preparation of the risk analysis, the identification of project hazards, the quantification of project risks, the reaction to the risks, and the overall risk evaluation. Among the advantages of this method is the manner of its composition, created from international standards. Its benefit is simple usage in practice, which enables the analysis of risk even in incorrectly structured projects. The method can seem more complicated than it really is, but it is not difficult to obtain the recommendations and proposals to eliminate the potential risks.
The basic phases of the RIPRAN method: The preparation of risk analysisthe projection of the time frame creates the source of needed documentation; the output is a plan to execute the risk analysis.
The identification of project dangerthe target is to find all possible threats and scenarios, the statistical data and prognoses are used; the output is a list of the threatscenario pairs.
The quantification of project risksthe effort to evaluate the probability of listed scenarios and size of damages; the output is a chart with listed threats and scenarios and also probability, impact and risk value.
The reaction to project risksto use the data from the prepared chart; the output is a chart complemented by columns with the proposal to arrangements, the new risk value and also the cost of the arrangements.
The overall risk evaluationto evaluate the analysed project; the output is an overall evaluation of risk levels [9].
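The quantification phase above can be sketched as a small data structure (a minimal sketch, not the official RIPRAN tooling; the threat-scenario pairs and the combination rule used here are hypothetical, since RIPRAN admits several combination tables):

```python
def risk_value(probability, impact):
    """Combine probability and impact classes (1 = low, 2 = middle,
    3 = high) into a risk class 1-3; here simply the larger of the two,
    an assumed stand-in for a RIPRAN combination table."""
    assert probability in (1, 2, 3) and impact in (1, 2, 3)
    return max(probability, impact)

# hypothetical threat-scenario pairs from the identification phase
risks = [
    {"threat": "lightning strike", "scenario": "fire on site",
     "probability": 1, "impact": 3},
    {"threat": "design error", "scenario": "rework of structure",
     "probability": 2, "impact": 2},
    {"threat": "late permit", "scenario": "project delay",
     "probability": 3, "impact": 2},
]

# quantification phase: attach a risk value to every pair
for r in risks:
    r["risk"] = risk_value(r["probability"], r["impact"])

# overall evaluation phase: aggregate the risk levels
average_risk = sum(r["risk"] for r in risks) / len(risks)
```

The reaction phase would then extend each record with proposed arrangements, a new (reduced) risk value and the cost of the arrangement.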
The author of the method is Doc. Ing. Branislav Lacko, CSc. The method was established for the analysis of risk in automation projects in the course of scientific research at VUT Brno. Practice showed that, after a few modifications, the method is applicable to the analysis of various risks in many kinds of projects. RIPRAN™ is a trademark registered by the Industrial Property Office in Prague [9].

Research process
For our research, part of the RIPRAN method was used for evaluation of the typical risks of a design-bid-build (DBB) project and separately for a design-build (DB) project.
The research has been done in four steps:

Identification of the risks
In this step, an economic survey was carried out to find as many risks for construction projects as possible. The survey was done by questioning 12 construction managers [10]. Each respondent had to write down a list of the risks he or she considered relevant to the comparison of DBB and DB. Each final list was then discussed with the respondent to capture exactly what he or she had meant by each risk. Finally, all the lists were merged and the duplicated items were deleted [1]. Because the final list had almost 150 records, it was necessary to organize the risks into groups and sub-groups. Each risk was described as a threat-scenario pair: the threat is the actual danger (e.g. a lightning strike) and the scenario is the result caused by the threat (e.g. a fire). In this phase, 149 pairs of risks were identified and split into 9 chapters. It was necessary to look at the risks from the public authority's point of view, and to evaluate them in the same manner [11]. The main criterion for the evaluation was the level of influence on the public procurement.

Checking of the risk matrix
In this step, the final risk matrix was checked with the respondents from the first step. This ensured that the basis for the future research respected the reality of the market. Of course, the meaning of each risk and the correctness of the threat and scenario pair were also discussed [12]. The goal of this step was to finalize the list of risks and clarify the meaning of each risk.

Final risk evaluation
Concrete threats and scenarios were judged by their probability and influence on the project, separately for DB and DBB. Table 3 shows the level of risk probability and the possible overall impact using three probability values: low, middle and high. In DB projects, some risks are transferred from the public procurement side to the contracting side, so that their probability or influence can be lowered, as can be seen from the example in Table 3. After the identification of the risks for every threat and scenario, and after adding the possible impact on the project, the risk level was quantified [13]. The risk level was defined separately for each type of construction project, Design-Bid-Build (DBB) and Design-Build (DB), as a combination of the probability and the impact on the project. The method is shown in Table 4.

Tab. 3. -Example of probability and impact quantification
Finally, all types of risks were considered and judged by numbers (1, 2, 3) as well as by a verbal evaluation (high, middle, low risk level) of the overall level of probability combined with the possible total impact on the project [14]. For DBB projects, the final average risk level for the public sector is 1.8, and for DB projects it is 1.5; using design-build thus decreases the risk by 17%. The evaluation of the risks is shown in Table 5.
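The quoted 17% follows directly from the two average risk levels as a relative decrease:

```python
dbb_avg = 1.8  # average risk level, design-bid-build (public sector)
db_avg = 1.5   # average risk level, design-build (public sector)

# relative decrease of the average risk level, rounded to whole percent
reduction_pct = round((dbb_avg - db_avg) / dbb_avg * 100)
```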

Appraisal of the results
By comparing each pair, the parts of the project where the usage of design-build decreased the risk were identified [15]. There are 43 threats which are affected by using design-build. What is really interesting is the number of decreased risks in each chapter, which can be seen in Table 6.
Tab. 6. - Number of risk decreases by chapter
It can be seen from the table that the biggest risk decrease is achieved in the areas of "Management and Decision Making" and in the "Technological Area". This means that it is imperative for project managers to define precisely these areas in the contract and to focus on risk transfer in those areas [16]. The next table shows the average risk for each area of a design-bid-build (standard) project (source: author). A comparison of Table 6 and Table 7 shows that the usage of design-build can decrease most high-risk areas, because the areas "Management and Decision Making" and "Technological Area" have both the biggest average risk and the highest number of decreased risks.

Tab. 7. -Average risk on design-bid-build projects by chapters
This directly shows how effective the design-build scheme can be for the public sector [17]. By transferring the risk responsibility to the contractor, the contractor's field of operation and responsibility is extended; the contractor is then responsible for both the planning and the building construction during project realization. Design-build decreases the risks which are the most important for the contractor, and should help the public sector to be more effective.

CONCLUSION
The given RIPRAN project risk analysis shows that, from the risk-level point of view, it is advantageous for the public procurement side (the client) to use the Design-Build method, and to work on this presumption: with the application of Design-Build projects, there is a lower risk level on the client's side for the planning and realization of construction projects. The analysis also confirms the character of DB projects, where there is a higher risk level on the contractor's side (i.e. on the side of the builder). That side has a significantly greater scope of operation in both parts of the project, design and construction, and consequently a higher level of responsibility and risk as well.
So, for the client, it is useful to use Design-Build for public procurement in construction when realizing public contracts, because of the lower risk level as compared to DBB projects.

INTRODUCTION
China is one of the countries with the world's strongest seismic activity, and large parts of the country are located in highly seismic areas. The Wenchuan earthquake of 2008, with a seismic intensity of 8 degrees, caused 6,140 bridges to suffer varying degrees of damage, destruction and even collapse. The loss of these bridges seriously delayed the rescue process, increased the number and extent of injuries, seriously exacerbated the economic losses and triggered larger secondary disasters. The Qinghai Yushu earthquake of 2010, with an intensity of 7.1 degrees, also caused varying degrees of damage to bridges within the county, mainly because the seismic intensity exceeded the seismic fortification criteria and the bridges were inadequately constructed for earthquake resistance. The Ya'an Lushan earthquake of 2013, with an intensity of 7.0 degrees, damaged 2,986 km of highways to different degrees and caused 327 bridges to suffer varying degrees of damage and destruction. In these earthquake disasters, bridges proved the part of the highway traffic system most vulnerable to seismic damage; once they collapse, they lead to greater secondary disasters and indirect economic losses, as happened on No. 213 National Highway and the Duwen Expressway, which are close to the epicenter and were seriously damaged or destroyed in the Wenchuan earthquake. Current research on the damping of bridge constructions is therefore necessary and urgent.
Once the period of a bridge with dampers exceeds a certain limit, the overall seismic response decreases with increasing period, and the seismic response at the same period reduces with increasing damping. Seismic reduction design exploits this behaviour and achieves the damping purpose by increasing the natural vibration period and the damping factor. Therefore, the basic principle of seismic reduction design is as follows [1]: flexible supports are adopted to extend the natural vibration period of the whole structure and to reduce the structural seismic response, and energy dissipation devices are then adopted to dissipate energy and to limit the structural displacement. The main function of elastic cable devices is to provide appropriate elastic stiffness rather than energy dissipation. These devices, which have a relatively simple construction, convenient installation, moderate price and the ability to provide greater elastic stiffness, have already been widely applied to bridge seismic reduction in Japan and many other countries. The No. 2 Shantou Bay Bridge in Guangdong Province, a hybrid-beam cable-stayed bridge with a main span of 518 m, also adopts elastic cable devices at the joint of the tower and beam to reduce the seismic response, with a 54 m long 55×75 steel strand on each side of the tower. One end of the steel strand is anchored to the bottom end rail of the tower, whereas the other end is anchored to the stiffening girder [2]. Energy-dissipating dampers refer to dampers installed between the tower and the stiffening girder, as shown in Fig. 1.
During an earthquake, the piston and the cylinder move relative to each other; the pressure difference before and after the piston forces the damping fluid through the damping hole, producing a damping force and minimizing the structural response by converting part of the energy of structural vibration into thermal energy via the viscous damping fluid. Many researchers have conducted thorough studies on dampers applied to bridge seismic reduction. For example, Feng (1993) [3], Yang (1995) [4] and Tsopelas (1990) [5] successively studied the damping effects of variable dampers on the seismic response of bridges; they unanimously agreed that variable dampers could effectively reduce the absolute acceleration of the superstructure and the relative displacement between the stiffening girder and the pier. Tsai (1996) [6] proposed adopting viscoelastic dampers to minimize the seismic response of bridges. Yang Menggang (2004) [7] considered that the application of MR dampers in bridge construction had a good damping effect. The viscous fluid damper has been widely used in bridge damping, since it does not affect the original structural vibration period and mode before being added; it is characterized by an elliptical hysteresis curve, reusability under earthquake effects and other evident advantages. The famous American Golden Gate Bridge also successfully adopted viscous dampers to retrofit its structural resistance. In this paper, the seismic reduction performance of one cable-stayed bridge is analyzed. In order to study the features of the seismic reduction devices, the applications of viscous dampers and an elastic cable device in the bridge are analyzed from the aspects of structural damping and longitudinal structural stiffness.
Moreover, a finite element model is established in FEM software, with which the time-history method is used to analyze the seismic responses of the two damping measures and to explore their damping effects, providing a scientific basis for the damping design of cable-stayed bridges.

Mechanical properties of elastic and viscous dampers (1) Elastic cable
The damping principle of elastic cable devices mainly involves two aspects: (1) by adopting a flexible coupling between the tower and the stiffening girder to change the force propagation path, the stress situation of the structure is relieved; (2) the seismic displacement at the girder end is reduced by increasing the structural stiffness. After the elastic connection devices are installed, the force F, the device stiffness Ke and the relative displacement d between the tower and the stiffening girder can be expressed by the linear equation

F = Ke d    (1)

The controlling effect of the coupling devices between the tower and the stiffening girder on seismic displacement depends upon the device parameters. Reasonable parameter settings are therefore critical, and the parameters of the elastic coupling devices should be set according to the requirements on beam-end displacement and on the bending moment at the bottom of the tower.

(2) Viscous damper
The main construction of a liquid viscous damper is shown in Fig. 1; it consists of a cylinder block, a piston, damping fluid, etc. The working principle of the viscous damper [8] is that the piston, under the influence of an external force, slides inside the cylinder block; a pressure difference then develops that forces the damping fluid through the damping hole, producing a relatively large damping force. The flow of the viscous damping fluid converts most of the energy of the vibrating structure into thermal energy that disperses quickly, thus achieving the purpose of energy dissipation and damping.

Fig. 1 -Viscous Damper
Given that the viscous damper force is related to the structural vibration velocity [9], its mechanical properties can be described by an equation relating force and velocity:

F = C V^α    (2)

where F, C, V and α represent the damping force, the damping coefficient, the relative motion velocity of the piston, and the damping exponent (also called the velocity exponent), respectively. When α = 1, the damping force is linear in the velocity V and the damper is a linear damper. When α < 1, the damper operates non-linearly and is a non-linear viscous damper. Moreover, when α > 1, the damper is an ultra-linear damper. The F-V relation of the viscous damper is shown in Fig. 2.
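The force-velocity law described above can be sketched numerically (a minimal sketch; C = 7000 kN·s/m matches the damper used later in the paper, while the α values and velocities are hypothetical):

```python
def damper_force(v, c=7000.0, alpha=1.0):
    """Damping force F = C * V**alpha (F in kN for C in kN.s/m, V in m/s).
    alpha = 1: linear damper; alpha < 1: non-linear; alpha > 1: ultra-linear."""
    return c * abs(v) ** alpha

# the F-V curves for different alpha all meet at V = 1 m/s: below that
# velocity the non-linear damper delivers the larger force, above it the
# linear damper does
f_nonlinear_low = damper_force(0.5, alpha=0.3)
f_linear_low = damper_force(0.5, alpha=1.0)
f_nonlinear_high = damper_force(2.0, alpha=0.3)
f_linear_high = damper_force(2.0, alpha=1.0)
```

This crossover is exactly the behaviour sketched in Fig. 2: a low velocity exponent gives large forces at small velocities but caps the force growth at large velocities.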

Fig. 2 -Relation between Damping Force and Relative Velocity
As shown in Fig. 2, when the velocity is low, the output damping forces of the linear and ultra-linear dampers are far less than that of the non-linear viscous damper; when the velocity is high, their output damping forces are much greater than that of the non-linear viscous damper. Therefore, although the aseismic performance of the structure can be improved

Finite element model and dynamic characteristic analysis
Huai'an Bridge is a steel-concrete cable-stayed bridge with dual towers and dual cable planes on a floating support system. The span combination (45+67+416+67+45 m) is shown in Fig. 3. The box girder of the main span is an all-welded steel beam, in which the top, middle chamber, side chamber and cantilever board widths are 40 m, 15.2 m, 8.95 m and 3.45 m, respectively. The tower is A-shaped, with a height of 115 m. The spatial arrangement of the 64 cables is fan-shaped [9]. The motion equation of an SDOF structure under seismic load is

m ü(t) + c u̇(t) + k u(t) = −m üg(t)    (3)

where üg(t) represents the seismic acceleration time history. For seismic problems of a linear elastic structure, an analytical expression in integral form is given by Duhamel's integral. With the equivalent seismic load Peff(t) = −m üg(t), the displacement response to the seismic excitation is

u(t) = −(1/ωD) ∫0t üg(τ) e^(−ξωn(t−τ)) sin[ωD(t−τ)] dτ    (4)

In Equation (4), ωD = ωn √(1 − ξ²) represents the natural frequency of the damped system. When the damping ratio is very small, Equation (4) can be simplified to

u(t) = −(1/ωn) ∫0t üg(τ) sin[ωn(t−τ)] dτ    (5)
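The simplified undamped Duhamel integral can be checked numerically: for a constant ground acceleration a0 it has the closed-form solution u(t) = −(a0/ωn²)(1 − cos ωn t). A minimal midpoint-rule sketch (parameter values are hypothetical, not the bridge's):

```python
import math

def duhamel_response(acc_g, t, omega_n, dt=1e-3):
    """Numerically evaluate the undamped Duhamel integral,
    u(t) = -(1/omega_n) * integral_0^t a_g(tau) sin(omega_n (t - tau)) dtau,
    using the midpoint rule with step dt."""
    n = int(t / dt)
    total = 0.0
    for i in range(n):
        tau = (i + 0.5) * dt  # midpoint of each subinterval
        total += acc_g(tau) * math.sin(omega_n * (t - tau)) * dt
    return -total / omega_n

# check against the exact response to a constant ground acceleration a0:
# u(t) = -(a0 / omega_n**2) * (1 - cos(omega_n * t))
a0, omega_n, t = 1.0, 2.0, 1.5
u_num = duhamel_response(lambda tau: a0, t, omega_n)
u_exact = -(a0 / omega_n ** 2) * (1.0 - math.cos(omega_n * t))
```

With a recorded accelerogram in place of the constant function, the same convolution yields the SDOF displacement history used in time-history analysis.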

The linear dynamic finite element equation for an MDOF structure is

M ü + C u̇ + K u = −M I üg(t)    (6)

In Equation (6), M, C and K represent the total mass matrix, the total damping matrix and the total stiffness matrix, respectively, while ü, u̇ and u represent the acceleration, velocity and displacement vectors, and I is the influence vector.
The structural vibration characteristics of the cable-stayed bridge at the completed stage can be obtained from the following eigenvalue equation:

(K - ω²·M)·φ = 0 (7)

where ω represents the natural frequency of the bridge for each mode order.
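The eigenvalue equation can be solved numerically; a minimal sketch for an illustrative 2-DOF system (toy matrices, not the Huai'an Bridge model) is:

```python
import numpy as np

# Sketch of solving (K - omega^2 * M) * phi = 0 for a toy 2-DOF system.
M = np.diag([2.0, 1.0])                 # mass matrix
K = np.array([[600.0, -200.0],
              [-200.0,  200.0]])        # stiffness matrix

# Generalized eigenproblem K*phi = omega^2 * M*phi, via M^-1 K.
omega2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
freqs = np.sqrt(omega2) / (2.0 * np.pi)  # natural frequencies, Hz
print("natural frequencies (Hz):", np.round(freqs, 3))
```

For a full bridge model the same eigenproblem is solved by the finite element package, yielding the mode frequencies compared in Table 1.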
According to the finite element analysis, the first 10 natural frequencies of the three models are shown in Table 1. As Table 1 shows, after the viscous dampers are considered, the natural frequency of the structure is slightly reduced. The longitudinal damping effect of the bridge is increased and its natural vibration period is extended, thereby reducing the structural internal force response. After the elastic cable damping devices are installed, the longitudinal stiffness of the main bridge changes and the first-order vibration mode is greatly affected, while the remaining modes are affected less. However, this mode still lies in the small-value part of the response spectrum, characterized by vertical drift of the main beam and a longer period.

Construction of damping device
The finite element dynamic analysis of the main bridge of Huai'an Bridge is carried out with the Midas Civil software, which is used to build the finite element model shown in Fig. 4. A backbone (spine) model is adopted for the main beam; beam elements for the main tower, piers, and foundation; truss elements for the stayed cables; spring elements for the elastic cables; and Maxwell damping models for the viscous dampers.

To study the damping effect of the cable-stayed bridge with viscous damper and elastic cable, the following three cases are analyzed: (1) Semi-floating system model without damping device.
(2) Four elastic cables between the stiffening girder and the tower, as shown in Fig. 5, with elastic stiffness K = 3000 kN/m.
(3) Four viscous dampers between the tower and the beam, as shown in Fig. 6, with damping coefficient C = 7000 kN·s/m.

Seismic input
The Huai'an Bridge, which belongs to Seismic Group I and Site Category II, has a design seismic intensity of 7 degrees and a design basic seismic acceleration of 0.1 g. Combined with the actual situation of the bridge and the above-mentioned requirements, this paper chooses the seismic waves of the Wenchuan earthquake, scaled by a correction factor. Through finite element analysis, the displacement of each control node of Huai'an Bridge under longitudinal seismic action and the peak internal force response of the control section are shown in Table 2. After the elastic cable device or viscous damper is mounted, the structural internal force is significantly reduced, providing an effective aseismic level for the structure. After the viscous damper is longitudinally mounted at the junction of the tower and beam, the bending moment of the tower bottom is reduced by 21%, and the tower-top longitudinal displacement is reduced by 52%. After the elastic cable is mounted, the longitudinal displacement damping rate of the tower top reaches 41%, and that of the bending moment of the tower bottom is 8%. Thus, both the viscous damper and the elastic cable have good damping effects, and the peak dynamic response under earthquake action is markedly reduced. As shown in Figs. 8 to 11, after the viscous damper and the elastic cable are adopted, the internal force response of the control section and the displacement of each control node of Huai'an Bridge attenuate rapidly. This indicates that both the viscous damper and the elastic cable play very good roles in damping, with the viscous damper having the more obvious effect in controlling displacement.

Conclusions
As shown in the seismic response analysis, the cable-stayed bridge exhibits large tower-top longitudinal displacement, large beam-end longitudinal displacement, and a large bending moment at the tower bottom. Through a comparative analysis of the damping effects of the elastic cable limiter and the viscous damper on the cable-stayed bridge with a floating system, it is found that both seismic isolation and control means can effectively control the structural internal force and displacement, with the viscous damper having certain superiorities in controlling the internal force. The main results include: (1) the viscous damper will not change the natural vibration characteristics of the structure, but the elastic cable will increase the natural frequency; and (2) both the viscous damper and the elastic cable can cut down the bending moment at the bottom of the main tower, while greatly reducing the longitudinal displacement of the main tower and main beam of the cable-stayed bridge, diminishing displacement and internal force at the same time.
Due to the uncertainty of seismic oscillation, appropriate parameters can be selected through optimization analysis and comparison of the elastic cable and viscous damper parameters to achieve the desired damping effects. The two damping methods, using the viscous damper and the elastic cable respectively, both have sound damping effects, with the former having the better displacement-controlling effect.

INTRODUCTION
The earth-rock dam is one of the commonly used dam types in construction engineering worldwide. In recent years in China, earth-rock dams, especially high earth-rock dams, have developed rapidly both in theoretical studies and in engineering applications [1]. However, dam failure due to seepage has become a severe problem for earth-rock dams. Previous studies have long indicated that seepage stability and structural stability are two important issues that should be taken into account in designing earth-rock dams. The analysis of structural stability plays an essential role in the structural safety of the dam, while the analysis of seepage stability is beneficial for controlling damage by infiltration. Several lines of evidence also suggest that both issues relate to the safety and economy of an earth-rock dam design [2][3][4][5]. These studies implied that dam-failure cases due to seepage problems may account for nearly 30% to 40% of all cases [6]. In this sense, the history of earth-rock dam development can be regarded as a history of theoretical research on structural and seepage stability.
Theoretical analysis of the seepage of an earth-rock dam is one means of investigating construction quality and safety. This kind of method mostly treats the seepage process of the earth-rock dam as a steady laminar flow with a phreatic surface, which can be analysed by Darcy's seepage law [7][8][9][10][11]. Some other studies supplemented that the internal flow features, as well as the dam soil properties, should be known beforehand to investigate the advection law of the flow among soil particles. However, it is still a great scientific challenge to know the internal interaction between soil particles and flows; for this reason, some theoretical simplifications had to be made in practical work. Recently, with the rapid development of computer performance, numerical simulation techniques have been greatly developed [22][23][24][25], and seepage analysis using numerical simulation methods is being highlighted, especially the well-known Finite Element Method (FEM). The FEM has long been demonstrated as a satisfying method to simulate the seepage process with low cost, high efficiency, and good accuracy. It performs well when the cross-section shape of the dam is regular, and it is advantageous in taking into account a variety of conditions [12]. The emergence of numerical simulation methods also extends seepage analysis from the steady laminar flow condition to more complex unsteady flow conditions [13][14][15]. While performing a seepage simulation, determination of the seepage coefficient is a basic but important issue. The seepage coefficient has a significant influence on the simulation results, and it was traditionally measured by the permeability test. However, the measurement data show significant dispersion, and the test is costly and complicated to conduct. Therefore, a method that is able to accurately and easily determine the permeability coefficient of the dam is necessary when using the FEM to analyse the seepage process.
As to the structural stability analysis of earth-rock dams, the commonly used methods are mostly based on the Morgenstern-Price (M-P) method [16] or the Bishop method [15]. In both methods, the dam slope is usually simplified as a homogeneous slope, and the stability factor can be calculated. These methods are easy to use and are able to attain results with good accuracy. However, due to the nonlinear spatial distribution of the complex filling material and the different compaction degrees during dam construction, the core parameters, the cohesion strength c and the frictional angle φ, present a significant random spatial distribution. This random distribution has also been revealed in both laboratory and in-situ measurements. Besides, these soil strength parameters are also gradually reduced with seepage development. In view of this, the simulation results will be illogical, because one set of representative values of c and φ is not able to represent the random spatial and temporal distribution described above.
As a consequence of the limitations in determining the seepage coefficient and of the random distribution of soil strength in the dam, the seepage and structural stability could not be well analysed, which also affects the safety of the earth-rock dam. In this context, we solve the above issues and introduce the solution with a practical earth-rock dam project. For determining the seepage coefficient, the data from the piezometric tubes are used to calculate the potential value, based on which the seepage coefficient can be back-analysed. Then the seepage field, as well as the seepage safety, is numerically analysed using the FEM-based SEEP/W program. As to the structural safety, we take into account the spatial and temporal variations of the key parameters, and incorporate the Monte-Carlo simulation method into the commonly used M-P method to calculate the frequency distribution of the obtained structural safety factor. In this way, the structural and seepage safety can be well analysed. This study also provides a mature method and a theoretical insight into earth-rock dam design.

ENGINEERING BACKGROUND
The engineering background of this study is the gravity earth-rock dam at the Xiquanyan reservoir, located near Harbin city, north-eastern China. It is a gravelly-clay-core dam. The maximum height of the dam is 29.1 m, with a crest elevation of 215.1 m. In the cross-section of the dam along the river, the width of the dam crest is 8.0 m. Inside the dam, the clay core is 4.0 m wide at the top and 8.0 m wide at the bottom. The length of the dam across the river is 400.6 m. The construction of the dam was finished in 2000, and since then the reservoir has been formally operated to store water for irrigation. However, several years after the start of operation, a "frosting" phenomenon at the spillway of the downstream slope was reported in winter. This phenomenon implied that the spillway surface was cracked, causing water leakage. Some local residents also reported that the dam slope at the downstream side had slipped slightly. All this evidence indicated that the dam was at risk, and a detailed analysis of both structural and seepage stability was in pressing need.

Simulation method and analyzing cases
In our study, the Finite Element Method is selected to simulate the seepage process of the dam. The SEEP/W program from the GeoStudio software suite is used; it is one of the commonly used commercial programs for investigating seepage based on the FEM [17]. The simulation domain is described below. The x-axis points in the downstream direction of the river, and the z-axis denotes the direction along the dam height. The depth of the dam foundation is about 1/3 of the dam height, i.e., around 10.0 m. The seepage in the dam is simplified as steady and laminar, and three calculation cases are selected: the normal water level, the design water level, and the conservative checking water level, as shown in Table 1. Numerical simulation of these three cases is carried out to investigate the seepage stability of the dam under different conditions.

The selected cross-sections for seepage stability analysis
Two cross-sections are selected to analyse the seepage stability of the dam (Fig. 1). The cross-section 1 (No. 0+285) is near the thalweg of the river, where the dam height approximates the maximum height of the dam, i.e., 29.1 m, and the cross-section 2 (No. …) is the other selected section.
Article no. 9

Back-analysis of seepage coefficient
As described in the introduction, the seepage coefficient plays an important role in analysing seepage stability. At the current stage, the commonly used methods include empirical methods based on the Terzaghi equation, laboratory testing methods, and field permeability testing methods. With the rapid development of computer performance, back-analysis of the seepage coefficient using an iterative solution with a numerical model has become available. In this study, we use this back-analysis to indirectly search for the best-fitting seepage coefficient in each of the seven different zones described in Section 3.2.
To determine the initial values of the seepage coefficient, field geotechnical tests and empirical values are used. The initial values of the seepage coefficient in Zones (2), (6), and (7) are based on field geotechnical tests, of which the mean values are used. The initial value of the seepage coefficient in Zone (3) is empirically determined. For the initial values of the seepage coefficient in Zones (1), (4), and (5), the data from the piezometric tubes are used to calculate the potential value Φ.

Numerical simulation of seepage stability
The basic model of the earth-rock dam is shown in Fig. 2. The cross-section of the dam is divided into 157 blocks (blue fonts in the figure), with 190 nodes (black fonts in the figure). Each block is set as a quadrilateral, because this shape is flexible enough to consider the complex geometry and anisotropic material in the seepage field of the dam. The checking water level corresponds to a flood return (reproduction) period of 1000 years. In both the selected cross-sections 1 and 2, the seepage fields of the dam and the dam foundation are simulated. The corresponding equipotential-line distribution, velocity vectors, and zero-pressure seepage surface are shown in Fig. 3, and key features of the seepage process in both cross-sections are summarized in Tab. 3 and Tab. 4. As shown in Tab. 3 and Tab. 4, the seepage discharge of cross-section 2 is much larger than that of cross-section 1. This can be explained in part by the fact that the seepage coefficient of the core wall zone in cross-section 2 is too large, i.e., material with a seepage coefficient of 1.56×10⁻³ cm/s should be classified as strong-permeability material. It is even more than twice as large as the seepage coefficient in the dam shell zone, which is only 6.41×10⁻⁴ cm/s. The overlarge seepage coefficient in the core wall of the dam is supposed to result from poor construction quality. Besides, the seepage coefficient of the dam foundation zone is 7.0×10⁻³ cm/s, which means that the clay and gravel layer composing the dam foundation should also be a strong-permeability material. As such, the core wall and foundation of the earth-rock dam should be rather weak in preventing seepage and water leakage.

Tab. 3: Hydraulic features of the seepage process in cross-section 1 under different water levels
This prediction is also supported by the analysis in Tab. 4. It indicates that the hydraulic gradient of the core wall zone of cross-section 2 ranges from 1.608 to 1.875 under different water levels. This is beyond the allowed maximum hydraulic gradient of 0.35 given in the current Chinese standard, and the direct consequence is that seepage failure may occur in this zone. In contrast, cross-section 1 performs better in preventing seepage: as shown in Tab. 3, the hydraulic gradients of all its zones under different water levels are within the allowed maximum of 0.35.
Overall, the averaged seepage discharge of both cross-sections ranges from 12.01 m³/(m·d) to 14.99 m³/(m·d). Since the length of the dam across the river is 400.56 m, the seepage of the dam per day is around 4800~6000 m³, and the annual seepage is around 1.752~2.19 million m³.
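The discharge figures above follow from a quick arithmetic check (units as in the text):

```python
# Check of the seepage-discharge figures reported above.
q_low, q_high = 12.01, 14.99   # unit seepage discharge, m^3 per metre of dam per day
length = 400.56                # dam length across the river, m

daily_low = q_low * length     # m^3/day
daily_high = q_high * length
annual_low = daily_low * 365.0
annual_high = daily_high * 365.0
print(f"daily:  {daily_low:.0f} to {daily_high:.0f} m^3")
print(f"annual: {annual_low/1e6:.2f} to {annual_high/1e6:.2f} million m^3")
```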

STRUCTURAL STABILITY ANALYSIS
As to the structural stability analysis of the earth-rock dam, recent research [18] and the current Chinese standard [19] suggest that the structural stability of the earth-rock dam can be evaluated by the safety factor against slipping. In order to calculate the safety factor, it is necessary to know the soil strength, including the cohesion strength c and the frictional angle φ. In previous studies, only one group of representative values was used. However, considering the complex spatial and temporal variation of these parameters, one group of representative values is of limited use, especially in earth-rock dam engineering. Many studies, e.g., laboratory experiments and in-situ surveys, have long supported this view by illustrating the significant randomness of soil strength parameters. For this reason, we use a method incorporating Monte-Carlo simulation to investigate the probability of dam structural failure.
The Monte-Carlo method uses repeated random sampling to simulate data for a given mathematical model and evaluate the outcome, and it has already been applied in geotechnical research. The first step of the method is to define the input parameters; here, the sensitive parameters c and φ are selected, and they are assumed to follow a certain distribution. To make the simulation valid, the second step is to create a large random data set for each input parameter so that the spatial and temporal uncertainties are covered by this data set. Safety factors are then calculated many times, and the combined effect of the uncertainties in the input parameters can be shown.
Generally, the soil strength of the dam is represented by the cohesion strength c and the frictional angle φ. Each parameter is regarded as an independent random variable obeying the lognormal distribution. Several geotechnical tests of soil strength at different parts of the dam have been conducted, and all the testing data are collected to determine the distribution. The mean value and standard deviation of both parameters can be expressed as

μx = (1/n)·Σᵢ xᵢ (2)

σx = √[ Σᵢ (xᵢ − μx)² / (n − 1) ] (3)

where x denotes the selected parameter and n denotes the total number of the obtained data.
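The sample statistics above (mean, standard deviation, and the variation factor used next) can be sketched as follows; the eight test values are illustrative, not the dam's measured data:

```python
import numpy as np

# Sketch of Eqs. (2)-(3): sample statistics of a strength parameter
# from repeated geotechnical tests. Illustrative values only.

x = np.array([18.2, 21.5, 19.8, 22.1, 20.4, 17.9, 23.0, 19.5])  # e.g. c, kPa
n = len(x)

mean_x = x.sum() / n                                    # Eq. (2)
std_x = np.sqrt(((x - mean_x) ** 2).sum() / (n - 1))    # Eq. (3), sample std
delta_x = std_x / mean_x                                # variation factor
print(f"n = {n}, mean = {mean_x:.2f} kPa, std = {std_x:.2f}, delta = {delta_x:.3f}")
```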
As suggested by the Chinese standard GB50007-2011, the correction factor regarding the total number n can be evaluated by the following equation:

ψx = 1 − (1.704/√n + 4.678/n²)·δx (4)

where δx = σx/μx is the variation factor of parameter x. Thus the corrected mean value of the parameter x can be expressed as

x′ = ψx·μx (5)

According to the procedure from Eq. (2) to Eq. (5), the distribution of the parameters in the Monte-Carlo simulation is shown in Tab. 5. The procedure for the Monte-Carlo simulation as illustrated above has been implemented in code. We programmed the core function of the procedure in the MATLAB environment; MATLAB was chosen because of its powerful capability for matrix operation and its visualization features. We use 1000 trials of the Monte-Carlo simulation. Each trial generates a group of random values for the cohesion strength c and the frictional angle φ, and consequently a safety factor of the dam can be calculated. The commercial software SLOPE/W is used to calculate the safety factor based on the Morgenstern-Price (M-P) method from limit equilibrium theory, and different water levels can be taken into account. After 1000 trials of the Monte-Carlo simulation, 1000 safety factors are obtained. By analysing the mean value and variation of these results, the probability of dam structural failure can finally be obtained. Fig. 4 shows the frequency distribution of the safety factor under different water levels; grey zones in the figure denote the structural failure range where the safety factor is smaller than 1.05. For cross-section 1 (left column in Fig. 4), nearly 70% of the results range between 1.30 and 1.80, showing that the dam slope is stable under all three water-level conditions. For cross-section 2 (right column in Fig. 4), the safety factor decreases from around 1.40 under the normal water level as the water level rises. This indicates that the probability of dam structural failure extends up to 15.5%, and the cross-section is at risk.
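The Monte-Carlo loop described above can be sketched as follows. The paper computes each safety factor with SLOPE/W (M-P method); here a simple infinite-slope formula stands in so the loop is runnable, and all numbers (means, coefficients of variation, slope geometry) are illustrative assumptions rather than the dam's values:

```python
import numpy as np

# Monte-Carlo sketch: lognormal c and phi, 1000 trials, failure if FS < 1.05.
# The infinite-slope FS formula is a stand-in for the SLOPE/W M-P analysis.

rng = np.random.default_rng(0)
n_trials = 1000

def lognormal_samples(mean, cov, size):
    """Lognormal samples with a given mean and coefficient of variation."""
    sigma2 = np.log(1.0 + cov**2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

c = lognormal_samples(30.0, 0.25, n_trials)                # cohesion, kPa
phi = np.radians(lognormal_samples(30.0, 0.10, n_trials))  # friction angle, rad

gamma, h, beta = 19.0, 10.0, np.radians(30.0)  # unit weight, depth, slope angle
fs = (c + gamma * h * np.cos(beta)**2 * np.tan(phi)) \
     / (gamma * h * np.sin(beta) * np.cos(beta))

p_fail = np.mean(fs < 1.05)   # failure criterion used in the paper
print(f"mean FS = {fs.mean():.3f}, failure probability = {p_fail:.1%}")
```

Replacing the stand-in formula with a call to the limit equilibrium solver per trial reproduces the procedure used for Fig. 4 and Tab. 6.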

Tab. 5: The distribution of the soil strength parameters in Monte-Carlo simulation
The probabilities of dam structural failure in both cross-sections are summarized in Fig. 5 and Tab. 6. They demonstrate that the failure probability of cross-section 1 is not sensitive to the water level, and its dam slope is rather stable, while the failure probability of cross-section 2 increases significantly with the water level, reaching a maximum of 15.5% under the checking water level. For this reason, it is necessary to reinforce the dam slope for safety.
In fact, reports from the local government also support our prediction. It has been reported that uneven settlement at the surface of the dam crest was observed, as well as some slight slip near cross-section 2. As to seepage stability, noticeable water leakage was observed at the downstream slope of the dam, together with the "frosting" phenomenon in winter. All this evidence supports our view that this earth-rock dam is at risk and needs reinforcement to ensure its safety.

CONCLUSIONS
Analyses of seepage stability and structural stability are two key issues that should be considered in earth-rock dam engineering. In this paper, we discussed a method to solve the above issues. The method provides a solution to determine the hard-to-value seepage coefficient, and it also takes into account the spatial and temporal distribution of soil strength. In detail: (1) For determining the seepage coefficient, the measurement data from the piezometric tubes were used to calculate the potential value, based on which the seepage coefficient can be back-analysed. Then the seepage field, as well as the seepage safety, was numerically analysed using the FEM-based SEEP/W program. (2) As to the structural safety, we took into account the spatial and temporal variations of the key parameters, and incorporated the Monte-Carlo simulation method into the commonly used M-P method to calculate the frequency distribution of the obtained structural safety factor. In this way the probability of dam structural failure can be quantitatively analysed. (3) The earth-rock dam of the Xiquanyan Reservoir near Harbin city was selected as the case study. Both the seepage and structural stability analyses implied that the earth-rock dam was at risk and that reinforcement is required. Reports from the local government also supported our analysis, as much evidence of water leakage and structural distress of the dam had been reported.
This study is also beneficial to provide a mature method and a theoretical insight into the earth-rock dam design.

INTRODUCTION
Fire modelling is a progressive discipline of fire-safety engineering, because it is used more and more in practice when designing buildings and their structures, evaluating fire hazards, investigating the causes of fires, the spread of dangerous gases and vapours by flow, etc. Deterministic models are divided into field models, which include the programs FDS [1], SOFIE [2], SMARTFIRE [3], FLUENT [4], FLACS [5], and Exodus [10], and zone models, e.g. BRANZFIRE [6], CCFM [7], CFAST [8], CONTAM [9], and ARGOS [11]. The models predict the behaviour of a fire, e.g. inside buildings; specifically, they describe the gas temperatures, heat flow rates through the openings, heat flows, opacity of smoke, production of selected toxic gases/vapours, the reduction of strength or the extent of damage of building components, and the activation times of sprinklers and electrical fire detectors at defined positions (x, y, z) during the simulated fire, see the example in Fig. 1 [47].
Input parameters must be specified for a successful fire simulation: the three-dimensional geometry of the fire field with the dimensions of floors, walls, ceilings, and openings (doors, windows) and their locations; the thermal-technical properties of the boundary surfaces of structures (walls, ceiling); the position and the thermal-technical and fire characteristics of flammable/combustible substances/materials, including the burning item; ventilation (forced, natural, and combinations thereof); and the components of the electronic fire signalization (EPS) and fixed firefighting systems (SHZ) and their locations. The physical thermal-technical and fire characteristics of flammable substances/materials can be determined by laboratory fire tests, or they may be found in the literature. Studying the sensitivity of the simulation to the accuracy of their description is important for quality modelling. The individual quantities and the possibilities of determining them are summarized below.

DENSITY
The bulk density of a material, ρ [kg/m³], is its mass per unit volume [12]. The density of materials varies during heating for two reasons: the release of contained volatiles (combustible or non-combustible) and expansion, i.e. volume dilatation (expansion or contraction).
The density of materials is determined from their weight and volume, see e.g. [13] and [14]. The temperature dependence of density is determined from the temperature-induced mass changes by thermogravimetric analysis, and from the volume changes by dilatometric analysis.

SPECIFIC HEAT CAPACITY
The specific heat capacity of a material under constant pressure, cp [kJ/kg·K], is the amount of heat required to raise the temperature of 1 kg by 1 K [15]:

cp = Q / (m·ΔT) (1)

where Q is the supplied heat [kJ], m is the mass [kg], and ΔT is the temperature rise [K].
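The definition can be applied directly to heating data; a minimal sketch with illustrative measurement values follows:

```python
# Sketch of the cp definition: specific heat capacity from a heating
# experiment. Measurement values are illustrative.

Q = 84.0     # heat supplied, kJ
m = 2.0      # sample mass, kg
dT = 10.0    # temperature rise, K

cp = Q / (m * dT)   # kJ/(kg*K)
print(f"cp = {cp:.2f} kJ/(kg*K)")   # roughly the value for water
```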

THERMAL CONDUCTIVITY
Heat conduction corresponds to the movement of heat in a material due to a temperature gradient. Modelling heat conduction requires knowledge of the thermal conductivity λ [W/m·K] of materials, which is defined as [15]:

q̇″ = −λ·(dT/dx) (2)

where q̇″ is the rate of steady heat conduction per unit area in the x-direction [W/m²], and dT/dx is the temperature gradient in the x-direction. Thermal conductivity depends on the moisture content, temperature, porosity, density, and microstructure of the material. Some models neglect these dependencies and prefer to use a constant value for the thermal conductivity.
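Equation (2) applied to steady one-dimensional conduction through a plane wall gives a quick sketch; material and geometry values are illustrative:

```python
# Sketch of Eq. (2): steady 1-D conduction through a plane wall.
# Material and geometry values are illustrative.

lam = 0.8                      # thermal conductivity, W/(m*K)
T_hot, T_cold = 320.0, 290.0   # face temperatures, K
thickness = 0.15               # wall thickness, m

dT_dx = (T_cold - T_hot) / thickness   # temperature gradient, K/m
q = -lam * dT_dx                       # heat flux, W/m^2 (Fourier's law)
print(f"q = {q:.0f} W/m^2")
```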
There are two methods for measuring the thermal conductivity of solids: the steady-state method and the transient method. The steady-state method uses a heat source to maintain a constant temperature gradient across the sample. It is slow, and several days are often needed to determine the thermal conductivity at several temperatures [17]. The transient method is generally classified as a "hot wire" method [18]: the rate of temperature rise of an electrically heated wire embedded in the sample, held at a given ambient temperature, is measured. This method is much faster than the steady-state method.

THERMAL INERTIA
The thermal inertia of a material, λρc [J²/s·m⁴·K²], is the product of the thermal conductivity λ (see para. 3 above), the density ρ (see para. 1 above), and the specific heat capacity cp (see para. 2 above). It is used when modelling the transient heating of solid substances/materials. The greater the thermal inertia, the longer it takes the surface to reach the temperature at which the contained volatiles start to flow.
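The product λρc can be computed directly, and a commonly used thermally-thick approximation, t_ig = (π/4)·λρc·[(T_ig − T_0)/q″]², links it to the time to piloted ignition; all material and exposure values below are illustrative (a wood-like solid), and the approximation is a standard estimate rather than the method of this paper:

```python
import math

# Thermal inertia lam*rho*cp plus a standard thermally-thick estimate
# of time to piloted ignition. Illustrative wood-like values.

lam = 0.17     # thermal conductivity, W/(m*K)
rho = 500.0    # density, kg/m^3
cp = 2300.0    # specific heat capacity, J/(kg*K)

thermal_inertia = lam * rho * cp   # J^2/(s*m^4*K^2)

T_ig, T_0 = 350.0 + 273.15, 20.0 + 273.15  # ignition and ambient temp., K
q_ext = 25_000.0                           # incident heat flux, W/m^2

t_ig = (math.pi / 4.0) * thermal_inertia * ((T_ig - T_0) / q_ext) ** 2
print(f"lam*rho*cp = {thermal_inertia:.3g}; t_ig ~ {t_ig:.1f} s")
```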
Modelling problems can occur especially when solids degrade thermally rapidly, for example when a flame is spreading over the surface; it is therefore useful to establish an effective thermal inertia of the material, λρc. The effective thermal inertia is determined by analysing ignition data obtained with an auxiliary ignition source under exposure to radiant heat [19], [15].

IGNITION TEMPERATURE
Ignition of solid fuels is defined as the start of flame combustion in the gaseous phase [20]. When a solid material is exposed to external heat, it starts at a certain time to decompose thermally (pyrolyze). The fuel vapours are mixed with air in the boundary layer. Shortly after, the pyrolysis rate may become sufficient to reach the lower explosive limit (LEL), and the mixture may ignite under certain conditions. There are two types of ignition: with an auxiliary (piloted) ignition source and without it. With piloted ignition, flame combustion of the gas mixture is initiated by a small ignition source, which may be a small gas-burner flame, an electric spark, or a hot wire. Without an auxiliary source, the solid surface must reach a sufficiently high temperature at which the combustion reactions take place. It is difficult to predict when a solid fuel ignites at a certain heat flux, so a critical surface temperature at ignition is often used as the ignition criterion; this critical temperature is the ignition temperature. It is higher without piloted ignition than with it. For a given type of ignition, however, the ignition temperature is a material property and does not change with the heat flux density.
The ignition temperature [°C] can be determined in several ways. In the Czech Republic, the following are commonly determined:
- flash points and ignition temperatures of solid combustible materials with the method according to Setchkin, see [21];
- ignition temperatures of flammable gases and vapours with a test method that simulates the risk of ignition from hot surfaces, see [22].
Further, test methods are published that measure:
- surface temperatures during ignitability testing using very fine thermocouples attached to the surface of a sample [19]; this method is difficult because it is hard to handle the fine thermocouples and to ensure good contact with the measured surface;
- the surface temperature of a material sample using an infrared pyrometer targeted at a small area of the surface; the disadvantage here is that heat radiation is measured, not the surface temperature itself. The thermal radiation is partly emission and partly reflection from the test surface, and since the surface characteristics (emissivity, reflectivity, and absorption) change during the exposure, it is not easy to calculate the surface temperatures from the pyrometer record;
- another way is to infer the surface temperature by applying the theory of ignition to a series of measured test results (various times to ignition at different heat fluxes); the ignition temperature is then obtained from the heat balance equations at the critical radiant flux density and for the steady state. We must also mention the test standard for determining the ignitability of building products using a radiant heat source [23].

GROSS HEAT OF COMBUSTION
Heat energy is released in all combustion/burning reactions. The combustion heat, Δhc [kJ/kg], is defined as the amount of heat generated by the complete combustion of a unit of fuel, see e.g. [24].
Gross heats of combustion are measured by combustion bomb calorimetry [30]. A known amount of fuel is burned completely in an apparatus containing pure oxygen. The gross heat of combustion is calculated from the known mass of fuel and the temperature increase, after subtracting the heat of dissolution and the heats of formation of H2SO4 and HNO3. The net heat of combustion is obtained by subtracting the latent heat of evaporation of the water produced by burning of the hydrogen contained in the combustible material and of the moisture contained in the original sample. The effective heat of combustion is determined e.g. by cone calorimetry [25], which measures the mass loss rate and the heat release rate.
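The gross-to-net conversion described above can be sketched as follows; the fuel composition values are illustrative (a wood-like fuel), and the factor 9 reflects that burning 1 kg of hydrogen produces 9 kg of water:

```python
# Sketch: net heat of combustion from the gross value by subtracting the
# latent heat of the water formed from hydrogen plus contained moisture.
# Fuel composition values are illustrative.

h_gross = 20.0   # gross heat of combustion, MJ/kg
x_H = 0.06       # hydrogen mass fraction of the fuel
x_w = 0.10       # moisture mass fraction
L_v = 2.442      # latent heat of vaporization of water at 25 C, MJ/kg

water_per_kg = 9.0 * x_H + x_w          # kg of water per kg of fuel
h_net = h_gross - L_v * water_per_kg    # MJ/kg
print(f"net heat of combustion = {h_net:.2f} MJ/kg")
```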

HEAT OF GASIFICATION
The heat of gasification of a material, Δhg [kJ/kg], is equal to the heat that must be supplied so that a unit quantity of the material converts to gaseous volatiles [15]:

Δhg = q̇″ / ṁ″ (3)

where q̇″ is the heat flux absorbed in the material [kW/m²] and ṁ″ is the mass loss rate of the material [kg/m²·s].
For a burning sample, the absorbed heat flux directed into the material equals the sum of the radiative and convective heat fluxes from the flame and the external heat flux (from radiant heaters in small-scale tests), minus the radiative heat losses from the surface. These quantities depend on the surface temperature, which is very difficult to measure. Cone calorimetry [25] is used to determine Δhg in connection with measurement of the surface temperature. For some materials the surface temperature is approximately constant and independent of the exposure conditions; for this type of material the plot of the mass loss rate against the external radiant heat flux density is nearly linear, and Δhg can then be estimated as the inverse of the slope of the regression line through the measured data points. Unfortunately, surface temperatures are not constant for most materials (charring materials and materials with high smoke production). In that case a modified equation for Δhg is used, and a time-dependent curve of the heat of gasification is obtained instead of a single value.
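The inverse-slope estimate of Δhg described above can be sketched as a simple linear regression. The heat flux and mass loss values below are synthetic illustrations, not measured data.

```python
# Sketch: Dh_g estimated as the inverse slope of the regression of mass
# loss rate on external radiant heat flux (valid for materials with a
# nearly constant surface temperature, see text). Synthetic data only.
import numpy as np

q_ext = np.array([20.0, 30.0, 40.0, 50.0, 60.0])            # [kW/m2]
m_dot = np.array([0.0115, 0.0152, 0.0188, 0.0231, 0.0266])  # [kg/m2.s]

slope, intercept = np.polyfit(q_ext, m_dot, 1)  # m_dot ~ slope*q_ext + b
dh_g = 1.0 / slope                              # heat of gasification [kJ/kg]
print(round(dh_g))
```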

REACTION HEAT
Reaction (pyrolysis) heat, Δhp [kJ/kg], is the energy released or consumed during pyrolysis, i.e. the thermal degradation of the material [15]. It can also be defined as the difference between the enthalpy of the starting material and the enthalpy of the pyrolysis products. In calculations of reaction heat it is assumed that the products are present at the thermal decomposition temperature while the original material is at ambient temperature. The reaction heat of a small sample subjected to prescribed thermal conditions is measured in a laboratory test. The reaction heat, or the corresponding enthalpy change, is usually an input parameter in the energy balance equations for thermally degrading solid materials. It is used in models that calculate the temperature profile in a heated solid. The internal energy generation term may be described in several different ways depending on the model used. A common way is to multiply the reaction heat Δhp [kJ/kg] by the local decomposition rate [kg/m3.s], thereby obtaining the energy generation term. Instead of the reaction heat it is possible to use a specific heat capacity and the enthalpy of formation, and the computer program then calculates the enthalpy and the corresponding reaction heat. Some models do not contain a reaction heat term, because they assume that the net reaction energy change is zero. The most common experimental method for measuring the reaction heat is Differential Scanning Calorimetry (DSC) [16]. A small amount of sample (milligrams) is placed in the apparatus and subjected to thermal degradation by a prescribed heat exposure in time. Heat is supplied to the sample and to an inert reference material so that both are maintained at the same temperature. The added heat is recorded, and it is assumed to equal the heat loss or gain resulting from the endothermic or exothermic reactions.
These results are influenced by several factors, e.g. particle size and heating rate, so results for such small samples cannot be considered representative of the behavior of materials in practice. DSC methods can also be used for measuring the specific heat capacity of materials and the enthalpy associated with physical processes such as evaporation and desorption. The value of the reaction heat Δhp is generally taken as negative for exothermic reactions and positive for endothermic reactions.
Other methods used for determining the reaction heat are Differential Thermal Analysis (DTA) and Thermogravimetry (TG) [26]. Temperature differences between the sample and the reference material are measured as a function of temperature, and the reaction heat is calculated from these results.
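The energy generation term described above (reaction heat multiplied by the local decomposition rate) can be sketched as follows. The sign convention follows the text (Δhp positive for endothermic reactions, so the term acts as a heat sink); the function name and example values are illustrative assumptions.

```python
# Sketch: volumetric energy source term q''' = -Dh_p * omega, where
# Dh_p > 0 for endothermic pyrolysis (a heat sink) and omega [kg/m3.s]
# is the local decomposition rate. Values are illustrative.
def energy_source(dh_p, omega):
    """dh_p [kJ/kg], omega [kg/m3.s] -> q''' [kW/m3]."""
    return -dh_p * omega

# Endothermic pyrolysis: Dh_p = +400 kJ/kg at omega = 0.05 kg/m3.s
print(energy_source(400.0, 0.05))  # negative, i.e. heat consumed locally
```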

PRODUCTION RATE OF SPECIES
Combustion species are the substances produced by the actual combustion process [20]. As a result of the complex chemical reactions that take place during a fire, it is not possible to predict the production rate of species from a single chemical reaction; we must therefore rely on experimental data. Some experimental devices record the formation of the different products: the cone calorimeter, the furniture calorimeter and fire room experiments measure certain products with specific gas analyzers. Concentrations of O2, CO, CO2 and hydrocarbons are often measured in these fire tests. Assuming that the production rate of a species is proportional to the mass loss rate of the fuel, the yield of each measured product can be obtained from the experimental measurements as Yj = mj/mf, where mj is the mass of species j produced and mf is the mass of fuel consumed. The value Yj may vary in time depending on the combustion conditions; in that case it is necessary to define Yj as a function of time rather than as the time-averaged value above. The instantaneous value of Yj at time t* is given by the ratio of the production rate of species j to the mass loss rate of the fuel, Yj(t*) = ṁj(t*)/ṁf(t*). This approach simplifies the problem of predicting the production rate of species in a fire. Great attention must be paid to the experimental methods used to obtain the data. Several variables affect the production rate of species, e.g. the fuel type, the geometry of the fuel and the feedback of heat radiation onto the surface of the fuel. It is also a function of the ventilation of the fire room and can change considerably between under-ventilated and well-ventilated conditions. Useful apparatuses for this type of experiment are the cone calorimeter [25], the furniture calorimeter and the room/corner test [27], [28], and the large-scale verification test [29].
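The average and instantaneous yield relations above can be sketched as follows. The mass histories are synthetic, cone-calorimeter-style illustrations, not measured data.

```python
# Sketch: species yield Y_j = (mass of species j produced) / (fuel mass
# lost), averaged over a test and as an instantaneous ratio of rates.
# The cumulative-mass records below are synthetic.
import numpy as np

t      = np.array([0.0, 10.0, 20.0, 30.0, 40.0])        # time [s]
m_fuel = np.array([0.100, 0.095, 0.088, 0.080, 0.074])  # sample mass [kg]
m_co2  = np.array([0.000, 0.008, 0.020, 0.033, 0.043])  # cumulative CO2 [kg]

# Time-averaged yield over the whole test:
Y_avg = (m_co2[-1] - m_co2[0]) / (m_fuel[0] - m_fuel[-1])

# Instantaneous yield: production rate divided by fuel mass loss rate.
Y_inst = np.gradient(m_co2, t) / (-np.gradient(m_fuel, t))
print(round(Y_avg, 2))
```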

MASS LOSS RATE
Most fuels in a fire burn in the gas phase with a flame. The mass loss rate of a fuel is equal to the rate at which the fuel gasifies [25], [45]. In a laboratory test the mass loss rate of samples, ṁ in [kg/s] or ṁ″ in [kg/m2.s], is measured under prescribed temperature conditions. Mathematical models of a room fire can predict the thermal environment at defined positions and the mass loss rate over large surfaces only if the surface spread of flame is accurately calculated. It is common to divide the surface into small parts/segments, so that the heat flux to each part can be regarded as uniform. The flame spread algorithms in the model (for both directions) determine when a certain part/segment ignites. After ignition, the model uses the mass loss rate determined over the whole range of radiant heat flux densities in laboratory or large-scale tests. The mathematical model must also consider the orientation of the fuel, because the orientation predominantly affects the feedback of the flame to the fuel and consequently the exposure at the solid fuel/gas interface. If the material has a sufficient thickness, the surface temperature of the burning material is constant and stable and a constant ṁ″ is obtained. The constants (q̇″fl − q̇″loss), i.e. the flame heat flux minus the surface heat losses, and Δhg are then obtained from the intercept and the slope of the line relating ṁ″ to the external heat flux q̇″ext. Many materials do not behave in this way, but approximation by average values of (q̇″fl − q̇″loss) and Δhg is still acceptable. It was found that the flame radiation is a linear function of the O2 concentration, and it is therefore possible to separate q̇″fl from q̇″loss by correlating mass loss data over a wide range of radiant heat flux densities and O2 concentrations. Charring materials, e.g. wood, do not have constant values of (q̇″fl − q̇″loss) and Δhg even if q̇″ext is constant.
In this case ṁ″ can be calculated with a model based on the exposure and the exposure history, by interpolation in a series of mass loss rate curves obtained in a cone calorimeter at constant q̇″ext.
Useful apparatus for the measurements in question:
- for large surfaces at laboratory scale, the cone calorimeter [25],
- for data from large-scale tests, the room/corner test, the furniture calorimeter and the large-scale test [27], [28] and [29].

HEAT RELEASE RATE
Realistic calculations of the influence of a fire require knowledge of the burning rate. This can be expressed as the mass rate of generation of volatiles (products generated by the reaction) or as the heat release rate (heat flow rate), with SI units [W] or [kW] [25].
The heat release rate cannot be predicted from a measurement of material properties alone. It is a function of the thermal environment, the volatility of the fuel and the combustion efficiency of the volatiles. The heat release rate is expressed as Q̇ = χ·Δhc·ṁ, i.e. the mass loss rate multiplied by the heat of combustion and the combustion efficiency. Useful apparatuses are the cone calorimeter [25], the furniture calorimeter and room/corner test [27], [28], and the large-scale validation test [29]. When applying the data in fire room models, it is appropriate to take into account the ventilation limits and the back flow of heat from the upper smoke layer and from the walls.
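The heat release rate relation above can be sketched as a one-line calculation; the function name and the example numbers are illustrative.

```python
# Sketch: heat release rate Q = chi * Dh_c * m_dot, i.e. combustion
# efficiency times heat of combustion times mass loss rate. Illustrative.
def heat_release_rate(m_dot, dh_c, chi):
    """m_dot [kg/s], dh_c [kJ/kg], chi [-] -> Q [kW]."""
    return chi * dh_c * m_dot

# Illustrative example: 5 g/s of volatiles, 16 MJ/kg, 70 % efficiency
print(heat_release_rate(0.005, 16000.0, 0.7))
```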

PYROLYSIS TEMPERATURE
If solids are exposed to external heat, they start to decompose at a certain pyrolysis temperature Tp [°C] or [K] [34], [45]. Pyrolysis is defined as thermal decomposition in the absence of oxygen [30]. Certain types of plastics (e.g. PMMA) burn more or less like liquids: a phase transformation (solid to liquid) occurs at the surface and no solid residue remains. Other materials, such as wood, do not evaporate completely but produce a carbonaceous residue; the charred layer grows in time and thermal decomposition occurs at greater depth below the surface. For both types of materials, the thermal decomposition is described by a combination of chemical reactions with Arrhenius-type equations for all components of the fuel:

dρ/dt = −A·ρ^n·exp(−E/RT) (12)

where ρ is the density [kg/m3], A is the pre-exponential factor, n is the reaction order, E is the activation energy and R is the universal gas constant. Modelling pyrolysis with Arrhenius-type equations is not easy even when the fuel consists of only one component, since it is difficult to find the kinetic parameters A, n and E. During heating at rates typical of fire conditions, many building materials begin to decompose thermally already at a lower temperature at which the thermal decomposition is negligible; this temperature is controlled by the activation energy. The thermal decomposition then occurs in a relatively narrow temperature interval, because once it begins the rate increases rapidly while the temperature rises only by a few degrees; at the same time the fuel is consumed and the density term rapidly approaches zero. It may therefore be assumed that the material begins to decompose immediately at the pyrolysis temperature Tp. The pyrolysis temperature Tp of some materials can be found in the literature, or it is determined by thermogravimetric analysis, e.g. [26]. The essence of this method is measurement of the mass loss rate of a small amount of material during heating at a constant rate; Arrhenius equations are used to correlate the data.
As explained above, the thermal decomposition takes place within a small temperature interval, and Tp can be estimated as the midpoint of this interval. It differs from the surface ignition temperature Tig for piloted ignition, but for some materials the two are often very close and Tig can be used as an estimate of Tp.
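The narrow decomposition interval discussed above can be illustrated by integrating a first-order (n = 1) Arrhenius rate, of the form of equation (12), at a constant heating rate. The kinetic parameters A and E below are illustrative assumptions, not measured values.

```python
# Sketch: first-order Arrhenius decomposition at a constant heating rate.
# Although heating starts at 300 K, most of the mass is lost over a
# temperature interval that is small compared with the heating range,
# as described in the text. Kinetic parameters are illustrative.
import math

A, E, R = 1.0e10, 1.3e5, 8.314   # [1/s], [J/mol], [J/(mol.K)]
rho0 = 500.0                     # initial density [kg/m3]
rho, T = rho0, 300.0             # state: density, temperature [K]
beta, dt = 0.5, 1.0              # heating rate [K/s], time step [s]

T_start = None                   # temperature of first noticeable loss
while rho > 0.01 * rho0:
    k = A * math.exp(-E / (R * T))   # rate constant [1/s]
    rho *= math.exp(-k * dt)         # exact step for d(rho)/dt = -k*rho
    if T_start is None and rho < 0.99 * rho0:
        T_start = T
    T += beta * dt

print(round(T_start), round(T))  # decomposition confined between these
```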

AIR/FUEL RATIO
The air/fuel ratio r is the amount of air ma [kg] required for the combustion of a fuel amount mp [kg], according to the relationship [20]: r = ma/mp. The so-called fuel-air equivalence ratio φ is often used; it is the ratio of the actual fuel/oxygen mass ratio to the fuel/oxygen ratio needed to form a stoichiometric mixture, see formula (14): φ = (mp/ma)/(mp/ma)stoich. Stoichiometric combustion means combustion in which φ = 1.
The effective air/fuel ratio used in some mathematical models of fire is greater than or equal to the stoichiometric air/fuel ratio. The air/fuel ratio is used in fire models to calculate the burning rate and the heat release rate. It is dimensionless and specific to each fuel.
The stoichiometric fuel-air ratio can be calculated from the stoichiometry of the reaction equations, but this is often not possible, since the elemental composition of the fuel is unknown. It is most commonly determined from the gross heat of combustion, using the approximately constant amount of energy released per unit mass of air whose oxygen content is consumed [30].
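When the elemental composition is known, the stoichiometric air/fuel ratio can be computed directly from the reaction equation, as the text notes. The helper below is an illustrative sketch: it assumes a fuel CaHbOc burning completely to CO2 and H2O, and air containing about 23.2 % oxygen by mass.

```python
# Sketch: stoichiometric air/fuel mass ratio for a fuel C_a H_b O_c
# burning completely to CO2 and H2O; air taken as 23.2 % O2 by mass.
def stoich_air_fuel_ratio(a, b, c):
    m_fuel = 12.011 * a + 1.008 * b + 15.999 * c   # molar mass [g/mol]
    mol_o2 = a + b / 4.0 - c / 2.0                 # mol O2 per mol fuel
    m_air = mol_o2 * 31.998 / 0.232                # g air per mol fuel
    return m_air / m_fuel                          # kg air / kg fuel

# Example: methane CH4 needs roughly 17 kg of air per kg of fuel
print(round(stoich_air_fuel_ratio(1, 4, 0), 1))
```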

COMBUSTION EFFICIENCY
The effective heat of combustion is smaller than the net heat of combustion due to incomplete combustion of the fuel vapors. The combustion efficiency (or burning efficiency) χ accounts for this imperfect combustion [30]:

χ = Δhc,eff / Δhnet (15)

where Δhc,eff is the effective heat of combustion [kJ/kg] and Δhnet is the net heat of combustion [kJ/kg]. The combustion efficiency is in the range from 0.4 to 0.9 for most hydrocarbons. It can be measured using a cone calorimeter [25] or bomb calorimetry [30].

CONVECTIVE HEAT TRANSFER COEFFICIENT
Convective heat transfer is the transfer of heat (energy) between a solid surface and the surrounding fluid due to the temperature difference between them. It is described by the convective heat transfer coefficient h, defined as [15]: h = q̇″/ΔT = Q̇/(S·ΔT), where q̇″ is the heat flux [W/m2], ΔT is the temperature difference between the surface and the moving fluid [K], Q̇ is the total heat flow [W] and S is the area [m2]. The coefficient depends on the fluid properties (thermal conductivity, density, pressure, viscosity), on the nature of the fluid flow (velocity, turbulence) and on the geometry of the solid surface.
Selection of an appropriate heat transfer coefficient can be difficult because a large number of variables is needed for its derivation. It can be simplified when the fluid is air, and most fire models assume that smoke has physical properties similar to air. For example, the convective heat transfer coefficient between a turbulent air flow and a vertical plane may be simplified to a power-law correlation in the temperature difference and the flow conditions (19), where C2, x and y are constants.
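The definition h = q̇″/ΔT and a simplified power-law correlation of the kind mentioned above can be sketched as follows. The constants C2 = 1.31 and exponent 1/3 correspond to a common turbulent free-convection correlation for air in ΔT only; they are used purely as an illustration and are not the constants intended in the original correlation (19).

```python
# Sketch: convective heat flux q" = h * dT, with h from an illustrative
# power-law correlation h = C2 * dT**x (C2 = 1.31, x = 1/3, a common
# turbulent free-convection form for air; illustrative values only).
def h_power_law(dT, C2=1.31, x=1.0 / 3.0):
    """dT [K] -> h [W/(m2.K)]."""
    return C2 * dT ** x

def convective_flux(h, t_surface, t_fluid):
    """h [W/(m2.K)], temperatures [K] -> q" [W/m2]."""
    return h * (t_surface - t_fluid)

dT = 573.0 - 293.0                     # hot surface vs ambient air [K]
q = convective_flux(h_power_law(dT), 573.0, 293.0)
print(round(q))
```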

EMISSIVITY
The emissivity ε of a material is the ratio of the energy emitted per unit area of the real material to the energy radiated by a black body at the same temperature [20]. It represents the thermal radiation properties integrated over all wavelengths. It is a dimensionless quantity and its maximum value, for a black body, is equal to one. There are several standard tests for measuring the emissivity of materials. The test specimen is usually placed in an evacuated chamber and is typically electrically heated to the temperature of interest; the energy dissipated by the material is determined and is equal to the radiative heat transfer to the surroundings. The emissivity can be measured, for example, with a thermocouple and a TESTO infrared camera, followed by calculation using formula (20), or with a spectrometer [31]:

ε = (TT^n − TR^n)/(Tw^n − TR^n) (20)

where TT is the surface temperature measured by the camera, TR is the ambient (radiation) temperature, Tw is the temperature measured by the thermocouple (all taken as absolute temperatures), and n is the exponent expressing the dependence of the heat flux on temperature.
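Formula (20) follows from the balance between the emitted and reflected radiation components; it can be sketched as below. The example temperatures are illustrative, and all temperatures must be absolute.

```python
# Sketch: emissivity from the IR-camera apparent temperature T_T, the
# reflected/ambient temperature T_R and the true (thermocouple) surface
# temperature T_w, via the balance T_T**n = eps*T_w**n + (1-eps)*T_R**n.
# n = 4 corresponds to total (Stefan-Boltzmann) radiation. Temperatures
# must be absolute [K]; the example values are illustrative.
def emissivity(T_T, T_R, T_w, n=4.0):
    return (T_T ** n - T_R ** n) / (T_w ** n - T_R ** n)

# Camera reads 540 K on a surface actually at 600 K, surroundings 293 K:
print(round(emissivity(540.0, 293.0, 600.0), 2))
```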

FLAME EXTINCTION COEFFICIENT
The flame extinction coefficient k relates the averaged radiation parameters, i.e. the emissivity, the flame intensity and the temperature, over the entire wavelength spectrum. It is used to calculate the emissivity of the flame in the empirical equation [45]:

ε = 1 − e^(−k·l) (21)

where l is the optical path length through the flame. The coefficient k may therefore be estimated from measurements of the emissivity ε and the optical length l by the relationship:

k = −ln(1 − ε)/l (22)

The flame extinction coefficient can be determined by measuring all quantities in equation (21) except k. Fire models usually contain similar empirical equations, but often without detailed specifications.

FLAME SPREAD PARAMETER
The flame spread parameter φ characterizes, for a given orientation and standard ambient air, the heat transfer from the flame to the fuel in the neighborhood of the lower part of the flame, ahead of the flame front. It is a material property. The flame spread parameter φ can be obtained by correlating counter-flow (opposed-flow) flame spread data, i.e. the flame spread rate at different radiant heat fluxes (or surface temperatures). ASTM E1321 [19] describes a method for measuring flame spread characteristics.
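The flame emissivity relation and its inversion for the extinction coefficient described above can be sketched as follows; the numerical values are illustrative.

```python
# Sketch: flame emissivity eps = 1 - exp(-k*l), equation (21), and the
# inverse relation k = -ln(1 - eps)/l, equation (22), used to back out
# the extinction coefficient from measured emissivity and optical path
# length. The values below are illustrative.
import math

def flame_emissivity(k, l):
    """k [1/m], optical path length l [m] -> emissivity [-]."""
    return 1.0 - math.exp(-k * l)

def extinction_coefficient(eps, l):
    """Inverse of the relation above."""
    return -math.log(1.0 - eps) / l

eps = flame_emissivity(0.8, 1.5)      # k = 0.8 1/m over a 1.5 m path
print(round(eps, 3), round(extinction_coefficient(eps, 1.5), 3))
```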

CONCLUSION
Thermal and fire characteristics of materials are important input data that determine the quality of the predictions from numerical models, and they should be selected according to the specific fire scenario. Some models have their own small database of material properties, e.g. SMARTFIRE [3], FLUENT ANSYS [4], BRANZFIRE [6]; others, for example FDS [1] or SOFIE [2], do not. Missing characteristics can be found in databases available on the Web, see e.g. [36] - [39], in the literature, see e.g. [40] - [44], or determined by tests with standardized, internationally recognized procedures, see the text above incl. [45], [46]. However, data obtained from the literature or from laboratory tests do not guarantee correct behavior of the materials at real scale; this can be evaluated by verification and validation of the calculations of the mathematical model against, e.g., data measured during a large-scale fire test. It is a task of fire research, among others, to develop pyrolysis models applicable to most flammable materials and methodologies for determining material properties as input data for CFD mathematical fire models, and to complement the relevant databases accessible to all interested parties.