HYBRID FINITE-DISCRETE ELEMENT MODELLING OF BLAST-INDUCED EXCAVATION DAMAGED ZONE IN THE TOP-HEADING OF DEEP TUNNELS

A hybrid finite-discrete element method (FEM/DEM) is introduced to model the blast-induced excavation damaged zone in a deep tunnel. The key components of the hybrid method, i.e. the transition from continuum to discontinuum through fracture and fragmentation, and detonation-induced gas expansion and flow through fracturing rock, are introduced in detail. The stress field and crack initiation and propagation in a uniaxial compression test are then modelled with the proposed method and compared with results well documented in the literature to calibrate the hybrid FEM/DEM. The modelled stress versus loading-displacement curve exhibits the typical failure process of brittle materials. The calibrated method is then used to model the blast-induced stresses and crack initiation and propagation for the last excavation step of a deep tunnel. A separation contour, which connects the boreholes through the radial cracks emanating from each borehole, is observed during the excavation process. The newly formed tunnel wall is produced and the main components of the excavation damaged zone (EDZ) are obtained. The proposed treatment is therefore capable of modelling the blast-induced EDZ and the rock failure process. It is concluded that the hybrid FEM/DEM is a valuable numerical tool for studying the excavation damaged zone in terms of crack initiation and propagation and stress distribution.


INTRODUCTION
Poor performance of bituminous mixtures under increased traffic volume and heavier axle loads has led to the increased use and development of modified bitumen, particularly through the use of discarded vehicle tires in pavement construction. Modified binders generally exhibit decreased temperature susceptibility and potentially improved mix performance.
Researchers have demonstrated improved performance of bituminous mixes containing shredded rubber. The advantages resulting from the use of used tires include increased fatigue life or fatigue resistance; reduced reflective cracking and low-temperature cracking; and improved tensile strength, ductility, toughness, adhesion, resilience, tenacity, durability, and skid resistance [1].
Tires have a limited lifespan and accumulate in large volumes in the environment, and adequate methods of disposal have not yet been developed; hence they constitute an environmental nuisance. Recent research has developed ways of recycling and reusing used tires in road pavement construction.
Tire rubber can be incorporated into asphalt binder through two processes, namely the wet and the dry process. In the wet process, the ground rubber is added to bitumen previously warmed to around 190 °C and remains in contact with it for a period of 1 to 4 hours [2]. The rubber particles, especially in large quantities, swell in the bitumen as they absorb some of its lighter fractions, forming a viscous gel and increasing the overall viscosity of the modified binder. In the dry process, the rubber particles are first added to the preheated mineral aggregate before the bitumen is added. The aggregates are heated at temperatures between 200 °C and 210 °C for about 15 seconds, resulting in a homogeneous mixture. Thereafter, the bitumen is heated to between 140 °C and 160 °C and added to the aggregate-rubber mixture [2][3].
Hence, this study assesses the properties of asphalt modified with used tire. The underlying objectives are to determine the properties of asphalt binder modified with used tire, to compare the performance of the modified asphalt mix with that of a conventional asphalt mix, and to determine the effectiveness of the process.

Background Literature
With the evolution of modern technology, asphalt binders are mainly produced in petroleum refinery plants. Most petroleum products contain asphalt, in which it exists in solution. The crude petroleum is refined by distillation to separate the various fractions, and asphalt is recovered during this process. Modern refined asphalt is better than the crude natural asphalt of earlier times because, in some instances, natural asphalt has become mixed with variable quantities of mineral matter, water and other substances, which can impair its properties. For the purposes of refining (distillation), however, natural bitumen (natural asphalt) has conveniently been classified, from a practical point of view, into three groups: materials occurring in a fairly pure state; materials found with an appreciable proportion of mineral matter but with the bitumen predominant; and mineral materials associated with relatively small proportions of bitumen [4].
Engineers are very interested in asphalt because of its properties. It is readily adhesive, highly waterproof, durable and has high strength. It is plastic in nature thereby giving controllable plasticity to mixtures of mineral aggregates when combined with them. It is unaffected by most acids, alkalis and salts. Asphalt may be readily converted to liquid form by applying heat or by dissolving it in petroleum solvents of varying volatility.
There is wide interest in the use of rubber tire modified binders in pavement construction, as several laboratory and field tests have reported improved overall performance of pavements. The asphaltenes and the light fractions of conventional binders interact with the granulated rubber of the used tires, forming a film of gel on the rubber and leading to an increase in its volume [5]. The granulated rubber and conventional bitumen do not react chemically; the rubber acts as an additive and not as a modifying agent [6]. The results of that study indicate that there is a physical interaction between the rubber particles obtained from used tires and the bitumen, leading to a different final behavior of the bitumen.
In both the wet and the dry process, conventional bitumen and rubber granules should not be mixed above a temperature of 170 °C, in order to retain all the characteristics that the modified bitumen presents in bituminous mixtures compared to conventional bitumen. A proper dispersion of crumb rubber particulates into asphalt has been achieved, making the crumb rubber compatible with the modified asphalt and improving both its high- and low-temperature properties, which can reduce the cracking, rutting, and raveling tendencies of crumb rubber modified asphalt pavement [7]. The approach of that study was to join the crumb rubber and asphalt molecules with small bi-functional molecules called compatibilizers.
In terms of physical properties, tires consist of a rubber compound usually reinforced with steel and textile. Tires vary in design, construction and total weight depending on their size and usage. The weight of a used passenger car tire in Europe is about 6.5 kg and that of a truck tire is about 53 kg. Passenger car and truck tires make up approximately 85% of the total tires.

MATERIALS AND METHODS
The materials used in carrying out this study are bitumen, fine and coarse aggregates, filler and ground tire. A 70-80 penetration grade bitumen was used as the base asphalt binder in this study; it was obtained from the Samchase bitumen plant along the Akure-Ado Road in Nigeria. Figure 1 shows a sample of the bitumen. The fine aggregate used for this experiment was obtained from the Ogbese river in Ondo state, Nigeria. The sand was free from silt and other organic materials that could reduce the strength of, or have any other negative effect on, the asphalt made from it. Figure 2 shows a sample of the fine aggregate used. The coarse aggregate used in this research was crushed rock obtained from the Samchase quarry, Akure, Ondo state, Nigeria. It was carefully selected to ensure that it was free of deleterious materials. Figure 3 shows a sample of the coarse aggregate used. The filler used for this experiment was obtained by sieving quarry dust obtained from the Samchase quarry, Akure, Ondo state, with sieve no. 200 (75 micron). Figure 4 shows a sample of the filler used.

Tests Carried out
Tests were carried out on the fine aggregates, coarse aggregates, shredded crumb tire and asphalt binder (bitumen) to determine their characteristic strength. Moisture content, particle size distribution and specific gravity tests were carried out on the fine aggregate. Specific gravity, aggregate impact value (AIV) and aggregate crushing value (ACV) tests were carried out on the coarse aggregate. Moisture content, ductility, viscosity, softening point, flash and fire point and penetration tests were performed on the bitumen with and without the ground tire.

RESULTS AND DISCUSSION
In order to determine the asphalt mix design to use for the different tests at varying percentages of ground tire addition, the Marshall stability test was carried out. The stability of each mix was determined by multiplying the proving ring factor (0.0328) by the dial reading. The three mix designs with the highest stability were selected from several mix designs and their average obtained. The results of the stability test are shown in Table 1.

Moisture content test
Table 2 shows that the average moisture contents of the fine aggregates and the coarse aggregates used were 2.08% and 1.74% respectively.

Aggregate Impact value (AIV) test
The aggregate impact value is a measure of the resistance of coarse aggregates to impact (sudden) load. Table 3 shows that the average aggregate impact value of the coarse aggregates is 22%.

Aggregate crushing value (ACV) test
Granular base layers and surfacings are subjected to repeated loading from truck tires, and the stress at the contact points of aggregate particles can be quite high. Crushing tests can reveal aggregate properties subject to mechanical degradation of this form. The aggregate crushing value gives a relative measure of the resistance of an aggregate to crushing under a gradually applied compressive load. The average aggregate crushing value of the sample was obtained as 13.47%, as shown in Table 4.
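Both AIV and ACV are conventionally expressed the same way (per BS 812): the mass of fines passing the 2.36 mm sieve after the test, as a percentage of the initial sample mass. A minimal sketch, with hypothetical masses rather than the study's data:

```python
# AIV/ACV both reduce to: fines after test / initial mass x 100 (BS 812).
# The masses below are hypothetical examples, not the study's measurements.

def aggregate_value(initial_mass_g, fines_mass_g):
    """Return AIV or ACV as a percentage of the initial sample mass."""
    return 100.0 * fines_mass_g / initial_mass_g

# e.g. 77 g of fines from a 350 g sample gives 22%, the same order as the
# AIV reported above
print(aggregate_value(350.0, 77.0))
```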

Specific gravity
The specific gravities of the ground tire, filler, fine aggregate and coarse aggregate were 2.75, 2.76, 2.74 and 2.65 respectively. Figure 6 shows the particle size distribution chart for the fine aggregate used; it can be observed that the material is composed of silt and sand. Table 5 shows the results of the water-in-bitumen (moisture content) test. The moisture contents of the bitumen modified with 0%, 2%, 4%, 6%, 8% and 10% ground tire were 2.35%, 3.01%, 3.32%, 3.47%, 3.37% and 3.88% respectively, so the moisture content generally increased with the percentage of ground tire. The control bitumen's moisture content was 2.35%, but when the bitumen was modified with 2% ground tire the moisture content increased to 3.01%, and with 4% ground tire it increased to 3.32%. The moisture content rose further (to 3.47%) up to 6% ground tire addition, after which there was a drop at 8% addition. The highest moisture content of 3.88% occurred when the bitumen was modified with 10% ground tire.

Penetration test
Penetration is a measure of the consistency or hardness of bitumen and is the most common control test for penetration grade bitumen. Figure 7 shows the variation of the penetration value with the corresponding percentages of tire added to the bitumen. The penetration value of the bitumen modified with ground tire decreases relative to the control (bitumen without ground tire). The control's average penetration value was 70 (penetration is expressed in units of 0.1 mm). When modified with 2% ground tire, the average penetration value decreased to 56, and with 4% ground tire it decreased further to 47. At 6% ground tire, however, the penetration value rose to 53, before decreasing again at 8% and 10% ground tire.

Softening point test
The softening point is the temperature at which bitumen begins to show fluidity; it is also defined as the temperature at which a bitumen sample can no longer support the weight of a 3.5 g steel ball. The softening point increases with increasing ground tire content, as shown in Figure 8. The control bitumen's softening point was 54 °C, but when the bitumen was modified with 2% ground tire the softening point increased to 60 °C. With 4%, 6%, 8% and 10% ground tire the softening point increased to 66 °C, 73 °C, 78 °C and 83 °C respectively.
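The softening-point values quoted above rise almost linearly with tire content. A least-squares fit quantifies the trend (roughly 3 °C per percent of tire added); the data points are the ones reported in the text, and the fit itself is an illustration rather than part of the study.

```python
# Ordinary least-squares fit of the softening-point data quoted above
# (0-10 % ground tire vs. 54-83 degC).

def linear_fit(xs, ys):
    """Return (slope, intercept) of an ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

tire_pct = [0, 2, 4, 6, 8, 10]
softening_c = [54, 60, 66, 73, 78, 83]   # values from Figure 8, as quoted
slope, intercept = linear_fit(tire_pct, softening_c)
print(round(slope, 2), round(intercept, 1))   # degC per % tire, and at 0 %
```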

Flash and fire point test
Bitumen volatilizes (gives off vapor) when heated, and at very high temperatures it can release enough vapor to raise the volatile concentration immediately above the bitumen to a point where it will ignite (flash) when exposed to a spark or open flame. This temperature is called the flash point. For safety reasons, the flash point of bitumen is tested and controlled. The fire point, which occurs after the flash point, is the temperature at which the material itself (not just the vapors) will support combustion. Figure 9 shows the graph of flash and fire point against the percentage of ground tire addition; it clearly shows that the addition of ground tire reduces the flash and fire points. This test is carried out to establish the heat that can be applied to the bitumen before it flashes and burns for 2-5 seconds, which determines the amount of heat the bitumen can withstand on site before it becomes workable.

Ductility test
Ductility is a measure of the resistance of bitumen to breaking or cracking during summer; a higher ductility value indicates a higher tensile strength of the bitumen. Figure 10 shows that the ductility of the control bitumen was 93.11 cm, but when the bitumen was modified with 2% ground tire the ductility increased to 99.98 cm. Thereafter, the ductility decreased up to 10% ground tire addition.

Discussion
The water-in-bitumen test on all the variations of bitumen resulted in moisture contents that were still within the specified standard of 0 to 5%. The penetration test results show that the penetration value of the modified bitumen is affected by the presence of rubber particles mixed into the control bitumen: the penetration values of the modified binders decrease as the rubber content increases, compared to the original bitumen. Lower penetration grades are preferred in temperate regions to prevent softening, whereas higher penetration grades such as 180/200 are used in colder regions to prevent excessive brittleness. Lower penetration bitumen has better adhesion and water-resistance properties.

CONCLUSION
Based on the laboratory test results, the mixture containing ground tire showed higher resistance to deformation. The ground tire appears to decrease the consistency of the binder and increase its resistance to temperature changes (based on the results of the penetration, ductility, flash and fire point and softening point tests), while its resistance to flow also increases. It may be inferred that bitumen modified with used tire provides better resistance against deformation owing to its higher softening point compared to the control.

REFERENCES
[1] Oikonomou, N. and Mavridou, S. 2009. The use of waste tyre rubber in Civil Engineering works. In Sustainability of Construction Materials. Ed. J. Khatib, Chapter 9. Woodhead Publishing, UK: 220-221.

INTRODUCTION
The shaft spillway is a suitable type of protective structure at lower design flows if it is difficult to build a crest spillway or a side spillway chute in a narrow valley with steep slopes. Its advantages are even more pronounced if it is connected to a combined structure or if the diversion tunnel is used as the spillway outlet. The shaft spillway comprises an intake part, a transition part, a shaft, a knee pipe and an outlet tunnel. The intake part usually has a circular plan with a hydraulically shaped crest, frequently designed with a wide crest in the past. Deflecting baffle ribs, which prevent the formation of random whirlpools and stabilize the flow, are often designed on the crest and in the intake part. The curved baffle ribs create a spiral flow regime in the funnel-shaped transition part and in the shaft, so that the pressure distribution over the spillway casing is more uniform. The water jet is pressed against the casing in the upper part of the spillway, so that no underpressures arise to cause cavitation phenomena and vibrations. The flooding of a shaft spillway is a complex hydraulic process: the length of the flooded shaft section, into which the water falling freely through the shaft passes via a transition phenomenon, grows gradually with increasing discharges. Both the intake and the transition part of the spillway must be designed so that they are not flooded earlier than the shaft. The hydraulic solution must include all hydraulic phenomena arising during the two-phase flow [5].
This article deals with the investigation of a shaft spillway on the Labská Dam. It is a special type of emergency spillway diverting higher discharges through a vertical shaft, followed by a horizontal shaft connecting the reservoir storage with the space below the dam. When designing its capacity, the design flow must always be diverted as pressureless (free-surface) flow; if the spillway outlet shaft is flooded, its capacity is very significantly reduced [6]. In modern times, this type of emergency spillway is designed mainly in embankment dams; in older hydraulic structures, however, we can also encounter it in masonry gravity dams, as is the case of the Labská Dam.

The main purposes of the water management structure include:

- Protection of water and water-related ecosystems associated with the Labe River below the Labská Dam by increasing discharges via water handling in the reservoir effective storage.
- Creating conditions for justified (authorized) handling of surface water in the Labe River, i.e. for using the power generation potential of surface water for the generation of electricity in small hydroelectric plants, and stabilization of flows (particularly daily fluctuations) below the Labská Dam by handling water in the controlled reservoir protective and effective storage.
- Creating conditions for general water handling at the Labská Dam, i.e. for sport, recreation, navigation and fishing.
- Creating conditions for potentially holding white-water boat races in the Labe River channel below the Labská Dam by handling water in the controlled reservoir protective and effective storage and by increasing wave discharges below the Labská Dam.

Protection against dam overtopping is secured by 2 emergency spillways. The frontal spillway has a total of 4 openings, each with a clear width at the overflow crest level of 9.90 m; the total clear width of the spillways is therefore 39.6 m. The spillway crest has an elevation of 691.26 m. s. l. The second, shaft-type spillway is situated on the left bank. The crest (overflow crest) of the shaft spillway also has an elevation of 691.26 m. s. l. The inside diameter of the spillway body at the overflow crest level is 11.50 m, and the outside diameter is 14.40 m. The shaft spillway crest houses a 1.92 m high debris rack wall with a service bridge accessible from the bank.

The circular vertical outlet shaft starts at an elevation of 688.66 m. s. l. and is 5000 mm in diameter. The shaft inlet is circular-shaped. The intake shaft part, including the rounded part, is 4 m in length (or height). From the end of the casing, the shaft passes via a circular knee pipe (the circle radius in the shaft axis being 18.83 m) into a horizontal shaft, and its diameter continuously increases up to 7.00 m. The outlet shaft then opens into the diversion tunnel. The shaft spillway capacity is 79.37 m³·s⁻¹.

Fig. 1 -Shaft spillway in the Labská Dam
A small hydroelectric plant is situated below the dam. Two turbines are mounted in the hydroelectric plant: the first is a Kaplan type, the second a Bánki type turbine. The Kaplan turbine has an output of 525 kW and an absorption capacity of 2.4 m³·s⁻¹. The Bánki turbine has an output of 75 kW and an absorption capacity of 0.6 m³·s⁻¹.

RESEARCH METHODOLOGY
The research included the assembly of a physical model of the Labská Dam shaft spillway in the Water Management Laboratory of the Faculty of Civil Engineering, CTU in Prague. The objective was to identify the hydraulic behaviour of the spillway at various discharges and for various technical modifications, including the water flow in the spillway vicinity [7]. The measured values provided the patterns of water levels, which were compared against calculations, and the pressure conditions in the outlet shaft.

MODEL CONDITIONS
In a Froude-type similarity model, the dynamic similarity conditions of hydrodynamic phenomena are governed exclusively by gravity forces. Apart from gravity forces, however, the investigated flow may also be affected by other forces: viscous fluid frictional resistance, capillary forces, volume forces, etc. According to the Froude formulae, a given hydrodynamic phenomenon can only be investigated if the effects of these other forces are negligible compared to gravity forces. Limit conditions delimit the domains and scales in which a hydrodynamic phenomenon can be modelled. Kinematically similar phenomena affected exclusively by the gravity force are dynamically similar if the same Froude numbers are found in mutually corresponding cross-sections.
While modelling flow phenomena within the Froude similarity domain, surface tension may affect the flow. Its effect is negligible if the overflow height on the model is h ≥ 20 mm; if h < 20 mm, capillary forces make the shape of the overflow jet nearly pass into a straight line. The surface flow velocity on object models should be u ≥ 230 mm·s⁻¹, so that capillary forces do not prevent the formation of surface waves driven by gravity. When modelling according to Froude similarity, the clear width of a spillway opening on the model must be b0 ≥ 60 mm.

The entire model's length was L = 4 m, its height H = 1 m and its width B = 1 m. Water was fed to the shaft spillway model of the Labská Dam through the laboratory distribution pipes, the discharge was measured by means of a magnetic inductive flow meter, and the water was stilled in a stilling basin. Water from the model was discharged via a collecting tank to the underground spaces of the Water Management Laboratory, where the central water collection unit is installed.
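The Froude-similarity scaling and the limit conditions above can be sketched numerically. The scale factors follow from keeping the Froude number equal on model and prototype; the 1:25 geometric scale used in the example is an assumption for illustration, not the scale of the Labská model.

```python
import math

# Froude-similarity scale factors (prototype/model) and the limit
# conditions quoted in the text. lam is the geometric scale; 1:25 below
# is an assumed example, not the actual model scale.

def froude_scales(lam):
    """Return prototype/model scale factors under Froude similarity."""
    return {
        "length": lam,
        "velocity": math.sqrt(lam),              # from Fr = u / sqrt(g*L)
        "time": math.sqrt(lam),
        "discharge": lam ** 2 * math.sqrt(lam),  # Q ~ u * L^2 -> lam^2.5
        "pressure": lam,                         # p ~ rho * g * L
    }

def model_limits_ok(h_model_mm, u_model_mm_s, b0_model_mm):
    """Check the capillary/viscosity limit conditions given in the text."""
    return h_model_mm >= 20 and u_model_mm_s >= 230 and b0_model_mm >= 60

s = froude_scales(25.0)          # assumed 1:25 scale
print(s["discharge"])            # prototype/model discharge ratio
print(model_limits_ok(25, 300, 80))
```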

ALTERNATIVE SOLUTIONS
The model investigation was divided into 5 alternative versions according to the technical modifications of the intake vicinity and of the overflow crest itself. In all versions, ten design discharges were measured, corresponding to the m-day discharge Qm30 and the N-year discharges Q1, Q2, Q5, Q10, Q20, Q50, Q100, Q1000 and Q10000.

Version 1 - this version was only calculated for the spillway itself, without terrain and without the debris rack.
Version 2 - in this version, the model was complemented with "terrain", which further shapes the water flow into the spillway without a potential effect of vertical flow.
Version 3 - the model was complemented with a debris rack; this simulates the current state at the Labská Dam.
Version 4 - this version was complemented with four left-side flow baffles placed at the intake to the model shaft.
Version 5 - this version was complemented with four right-side flow baffles placed at the intake to the model shaft.

Consumption curves
One of the tasks of the model-based laboratory investigation was to compare the consumption curves obtained by measurement on the model with the consumption curves obtained by classic calculations [3].
The equation used for the overflow calculation was

Q = m · b0 · (2g)^(1/2) · h^(3/2),

where Q is the discharge, m is the overflow coefficient derived on the basis of the ratio of the overflowing jet to the overflow crest width, b0 is the clear width of the overflow crest, g is the gravitational acceleration and h is the overflow height. Comparing the results, relatively good agreement between the calculations and the model was identified at low discharges; nearly identical values were reached for the majority of versions. Figure 4 presents an example of the consumption curves. The chart for Version 3 clearly shows a marked deflection of the model curve at higher discharges, caused by continuous flooding of the outlet shaft, which decreases its capacity. At lower discharges, however, the accuracy of the calculated values compared to the values measured on the model is satisfactory.
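A Poleni-type overflow relation of this kind is easy to evaluate numerically. In the sketch below, the overflow coefficient (0.42) is an assumed illustrative value, not the study's calibrated coefficient; the crest width is taken as the circumference of the 11.50 m crest quoted earlier.

```python
import math

# Sketch of a classic overflow relation, Q = m * b * sqrt(2g) * h^1.5,
# with m the overflow coefficient, b the clear crest width and h the
# overflow height (SI units). m = 0.42 is an assumption for illustration.

G = 9.81  # m/s^2

def overflow_discharge(m, b, h):
    """Discharge over the crest for overflow height h."""
    return m * b * math.sqrt(2.0 * G) * h ** 1.5

# crest circumference of the shaft spillway from the text: pi * 11.50 m
b_crest = math.pi * 11.50
for h in (0.1, 0.3, 0.5):             # example overflow heights in metres
    print(round(overflow_discharge(0.42, b_crest, h), 2))
```

Evaluating the relation at a few overflow heights like this reproduces the low-discharge branch of a consumption curve, which is the part where the text reports good agreement with the model.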
The most significant differences between the measured and calculated values were evident in Versions 1 and 2. This is most likely due to the terrain and the debris rack, which limit the vertical flow effect and further reduce the flow capacity through the debris rack; the calculations cannot take these influences into account.

Fig. 4 - Consumption curve - Version 3

Capacity of spillways
Design discharges causing the outlet shaft flooding were identified for all alternative versions. The flooding pattern for all versions corresponded nearly perfectly to the criteria specified by Bollrich (1965), who states that the overflow over a shaft spillway is free for h/R < 0.45; for h/R > 0.60, the inlet section is flooded; and for h/R between 0.45 and 0.60, a transition state applies [2].
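Bollrich's thresholds translate directly into a small classification routine. The crest radius in the example is half the 11.50 m inside crest diameter quoted earlier; the overflow heights are illustrative.

```python
# Bollrich's (1965) criterion as quoted above: h/R < 0.45 -> free overflow,
# h/R > 0.60 -> flooded inlet, otherwise a transition state
# (h = overflow height, R = crest radius).

def bollrich_regime(h, R):
    """Classify the shaft-spillway overflow regime by the h/R ratio."""
    ratio = h / R
    if ratio < 0.45:
        return "free"
    if ratio > 0.60:
        return "flooded"
    return "transition"

# crest radius of the Labska shaft spillway: 11.50 m / 2 = 5.75 m
for h in (1.0, 3.0, 4.0):            # illustrative overflow heights (m)
    print(h, bollrich_regime(h, 5.75))
```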
In the transition to the flooded inlet, irregular pulsations could be observed, manifested by the jet being pulled down into the outlet shaft followed by a massive jetting of water back into the reservoir. This state was evident in all versions; in Version 2 it appeared earliest, while in Version 1, on the contrary, it appeared last.

Pressure conditions in the outlet shaft
Relative real-time hydrodynamic pressures were measured for all versions and design discharges. Twelve piezometric measuring points, connected to pressure sensors with 1 Hz sampling, were used for the measurement; in addition, pressure sensors with 1 kHz sampling were used for the Q50 discharge.
The sensors were distributed at three height levels, four sensors at each level, mounted perpendicularly to the outlet shaft. The distance between individual sensors at one level was the same. The mounting diagram of the sensors is displayed in Figure 5. The samplings from the pressure sensors were evaluated and compared for the individual versions in length and time units and frequencies on the model.

The obtained values show very significant differences in pressure depending on the sensor position and the discharge. The greatest range in pressures was always reached at discharges causing flooding. The place most loaded by pressure was localised in the domain of the bottom sensors placed in front of the outlet tunnel, i.e. sensors 9, 10, 11 and 12. The least loaded places, on the contrary, were at the upper level of sensors, mainly sensors 2 and 3, and, at the middle level, sensor 5. The charts also allow identifying the individual phases of water suction and re-jetting from the outlet shaft. The greatest range of acting pressures was measured for Version 5, i.e. the version using right-side flow baffles; in this case, there was a local increase in pressure compared to the version without them.

Interesting information was also provided by mutually linking the results of individual interrelated sensors. As an example, there is a chart describing the pressure pattern at sensors 1, 5 and 9, i.e. one sensor from each height level. These values correspond to the average value occurring over the total measured time for the respective discharge and sensor. Figure 7 clearly shows that underpressures arise in the inlet part at all discharges, followed by a nearly pressureless flow in the bend and by very significant pressures in front of the horizontal outlet shaft. Thus, the entire outlet system is considerably loaded.

Frequency in the outlet shaft
For Version 3 (the current state), the pressure pattern was measured by means of a pressure sensor with a fast sampling frequency. In this measurement, the pulsation frequency in the outlet tunnel was monitored by pressure sampling with a time step of 2 ms. The measurement was performed under the same conditions as that of the relative pressures; the Q50 discharge, at which the shaft spillway is flooded, was selected. Figure 8 displays the frequency analysis of the measurement in the outlet shaft. It is evident that these pressure pulsations do not show a significantly dominant oscillation frequency, which corresponds to the pulsations in all measured pressure samplings. It follows that the pressure pulsations do not propagate as far as the outlet tunnel. The purpose of the measurement was to assess, and by subsequent modifications avoid, the propagation of pressure pulsations to the outlet tunnel.

The consumption curves were used to identify the agreement between the capacity reached on the model and the calculation. The curves confirmed relatively corresponding results, but also pointed out some facts relevant for the resulting spillway capacity (the real terrain shape, the curvature of flow lines by the supporting wall, the non-uniform distribution of the inflow into the shaft spillway, the debris rack position, and the baffle elements). Among these, the effect of the baffling elements at the start of the shaft is most relevant. It was identified on the model that the left-side curvature had a minimal effect on reducing the capacity of the emergency spillway compared to the right-side one.
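The frequency evaluation described above (pressure samples at a 2 ms step transformed to the frequency domain to look for a dominant oscillation) can be sketched with a plain discrete Fourier transform. The synthetic 8 Hz "pulsation" below is an assumption for illustration, not measured data.

```python
import math

# Sketch of the frequency evaluation: pressure samples taken with a 2 ms
# time step (500 Hz sampling) are transformed with a DFT and the frequency
# with the largest magnitude (excluding the DC component) is reported.

def dominant_frequency(samples, dt):
    """Return the frequency (Hz) with the largest DFT magnitude, DC excluded."""
    n = len(samples)
    mean = sum(samples) / n
    centred = [s - mean for s in samples]        # remove the static pressure
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):                   # one-sided spectrum
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(centred))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(centred))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k / (n * dt)

dt = 0.002                                       # 2 ms step, as in the text
t = [i * dt for i in range(500)]                 # 1 s of record
signal = [10.0 + 2.0 * math.sin(2 * math.pi * 8.0 * ti) for ti in t]  # synthetic
print(dominant_frequency(signal, dt))
```

A spectrum with no clearly dominant peak, as reported for the measured samplings, would correspond to this routine returning a frequency whose magnitude barely exceeds the background.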
Thanks to the installed piezometric sensors, detailed data on the pressure patterns in the outlet shaft were obtained. The most loaded places were localised; above all, the end of the outlet shaft before it widens. The places least loaded by pressure, on the contrary, were detected behind the inlet part of the emergency spillway shaft. In total, the outlet shaft is exposed to the greatest pressures when flow baffles are used, with the values measured in some places up to twice as high as without them. This higher pressure contributes to the overall pressure stability, and there is a lower probability of the occurrence of pressure pulsations and undesirable underpressures. Comparing the pressure pulsations for Versions 3 and 4, better results were reached for Version 4, which uses left-side baffles.

INTRODUCTION
As one of the most efficient ways of breaking rock, blasting is widely employed in underground excavation such as tunnelling, mining and shaft sinking. However, blasting inevitably disturbs the original state of the rock through the creation of new fractures, the closure and opening of pre-existing fractures, and the redistribution of stresses [1]. The disturbed zone, with properties and conditions irreversibly changed, is often referred to as the Excavation Damaged Zone (EDZ) or the Damaged Rock Zone (DRZ) [2,3]. Significant efforts have been spent worldwide over the last few decades in gaining an understanding of the EDZ, especially in characterizing and classifying it [3], including studies on the EDZ around hydraulic tunnels in Russia [4] and around the Three Gorges Dam in China [5,6], and several international workshops on the EDZ [2,7]. Nowadays, the fast development of computer technology has made it possible to complete large-scale numerical calculations in a short time. Correspondingly, various numerical methods have been implemented to simulate the rock blasting process, including the EDZ induced by rock blasting. According to the reviews conducted by Preece et al. [8], Latham et al. [9], Saharan et al. [10] and Perras et al. [11], continuous methods, e.g. the finite element method, and discontinuous methods, e.g. the discrete element method, have mainly been implemented to investigate rock blasting processes. The continuous methods usually focus on modelling the blast-induced damage before rock fragmentation, while the discontinuous methods concentrate more on the detonation-induced crack propagation and the resultant fragment movement during and after rock fragmentation. Therefore, it may be a better choice to combine the continuous and discontinuous methods to investigate the blast-induced EDZ. Correspondingly, this paper attempts to model the EDZ induced by rock blasting in a deep tunnel using a combined continuous-discontinuous method, i.e. the hybrid finite-discrete element method (FEM/DEM).

Hybrid FEM/DEM
In the hybrid finite-discrete element method (FEM/DEM), a model can include a single discrete body or a number of interacting discrete bodies, each of which is of general shape and size and is modelled by a single discrete element. Each individual discrete element is then discretized into finite elements to analyze its deformability, fracture, and fragmentation. The essential components of the hybrid FEM/DEM are contact detection and interaction among the individual discrete bodies, deformability and the transition from continuum to discontinuum through fracture and fragmentation of individual discrete bodies, a temporal integration scheme, and computational fluid dynamics [12].
A hybrid FEM/DEM comprising these key components has been implemented by Liu et al. [12] on the basis of their previous enriched finite element codes RFPA-RT2D [13] and TunGeo3D [14], and the open-source combined finite-discrete element libraries Y2D and Y3D originally developed by Munjiza [15] and Xiang et al. [16], respectively, which is to be used in this study.
The modelling of blast-induced EDZ in a deep tunnel involves the fracture and fragmentation of rock and the expansion of high-pressure gas. Correspondingly, the transition from continuum to discontinuum and the gas expansion are the key components of blast-induced EDZ modelling by the hybrid FEM/DEM; they are therefore introduced briefly here, while a detailed description can be found in a recent paper by the authors [17].

Transition from continuum to discontinuum
In the hybrid FEM/DEM, the discrete elements are discretized into finite elements, and those finite elements are bonded together through four-node joint elements. The constraint enforced between the finite elements involves a bonding stress, which is taken to be a function of the separation between the surfaces of the joint elements. The cracks are assumed to coincide with the surfaces of the joint elements during fracturing. The separation δ at any point can be divided into two components as in Equation 1:

δ = δn·n + δs·t (1)

where n and t are the unit vectors in the normal and tangential directions, respectively, of the surface at such a point, and δn and δs are the magnitudes of the components of δ in the normal and tangential directions, respectively. Figure 1 shows the relationship between the bonding stress and the separation, where D is the damage variable between 0 and 1 described in the mechanical damage model [13], and φf is the joint residual friction angle.
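The shape of the normal bonding-stress curve described above can be sketched as follows. This is a minimal illustrative model assuming a linear softening law; the elastic limit δp, critical opening δc and tensile strength ft are hypothetical parameters, not the calibrated values of the cited damage model [13]:

```python
def damage(delta_n, delta_p, delta_c):
    """Damage variable D in [0, 1] as a function of the normal opening delta_n."""
    if delta_n <= delta_p:
        return 0.0
    if delta_n >= delta_c:
        return 1.0
    return (delta_n - delta_p) / (delta_c - delta_p)

def bonding_stress(delta_n, f_t, delta_p, delta_c):
    """Normal bonding stress across a joint element (linear-softening sketch)."""
    if delta_n <= 0.0:          # closed joint: no tensile bonding stress
        return 0.0
    if delta_n <= delta_p:      # elastic ramp up to the tensile strength f_t
        return f_t * delta_n / delta_p
    D = damage(delta_n, delta_p, delta_c)
    return (1.0 - D) * f_t      # softening branch; fully broken when D = 1
```

The joint transmits its full tensile strength at the elastic limit and carries no stress once the opening exceeds δc, i.e. once the crack has fully formed and the element pair separates.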

Detonation-induced gas expansion and flow through fracturing rock
The modelling of detonation-induced EDZ involves the interaction of the high-pressure detonation products with the surrounding rock mass and the gas flow through the fracturing rock mass. In the hybrid FEM/DEM, the first part, i.e. the interaction between the detonation products and the surrounding rock, is modelled through a pressure-time history curve generated with the commercial finite element code AUTODYN using the Jones-Wilkins-Lee (JWL) equation of state (EOS) in Equation 4:

P = A(1 − ω/(R1V))e^(−R1V) + B(1 − ω/(R2V))e^(−R2V) + ωE0/V, with V = ρ0/ρ (4)

where P is the instantaneous pressure at any time, A, B, R1, R2 and ω are material constants, ρ0 and ρ are the densities of the explosive and the detonation products, respectively, and E0 is the specific energy. The second part, i.e. the gas flow through the fracturing rock mass, is modelled through a simple model consisting of a gas zone and constant-area ducts. The gas zone is defined as a circle around the borehole, and the explosive gas is present only inside the circle. The cracks around the borehole are assumed to be the constant-area ducts in the gas zone. The spatial and temporal distribution of the gas pressure in the gas zone is determined by an iterative procedure. Thus the process of the gas exerting pressure on the crack walls and expanding the cracks can be modelled. More details can be found in the literature by Munjiza et al. [15,18].
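The JWL equation of state described above can be evaluated directly as a function of the relative volume V = ρ0/ρ of the detonation products. A minimal sketch follows, using representative published JWL constants for TNT (these vary slightly between sources and are not the explosive parameters of this study):

```python
import math

def jwl_pressure(V, A, B, R1, R2, omega, E0):
    """JWL EOS: pressure of detonation products at relative volume V = rho0/rho."""
    return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E0 / V)

# Representative JWL constants for TNT (illustrative; sources differ slightly)
A, B = 3.712e11, 3.21e9        # Pa
R1, R2, omega = 4.15, 0.95, 0.30
E0 = 7.0e9                     # detonation energy per unit volume, J/m^3

p_initial = jwl_pressure(1.0, A, B, R1, R2, omega, E0)  # at the initial density
p_expanded = jwl_pressure(2.0, A, B, R1, R2, omega, E0) # after 2x expansion
```

As the products expand (V grows), the two exponential terms decay quickly and the pressure driving the cracks drops, which is what the pressure-time history curve supplied to the hybrid FEM/DEM captures.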

CALIBRATION OF THE PROPOSED HYBRID FINITE-DISCRETE ELEMENT METHOD
To calibrate the hybrid finite-discrete element method, the rock fracture process in a quasi-static uniaxial compression test is modelled and the obtained results are compared with those well documented in the literature.

Fig. 2 -Geometrical and numerical model for uniaxial compression test a) Geometrical model b) Numerical model
The uniaxial compression test is simplified as a plane stress problem and a vertical section is considered, as shown in Figure 2. The rock specimen is assigned its Young's modulus and related material properties, while the material properties of the loading plates follow those of standard steel.
A constant velocity of 1 m/s is applied to both the top and bottom plates so that they move towards each other and load the specimen located between them. It is worth pointing out that the applied loading rate is much higher than those used in static laboratory experiments. Nevertheless, the applied loading rate should not influence the rock properties significantly, as it is still much smaller than the threshold level observed in dynamic laboratory experiments [19]. The reason for applying a relatively high loading rate is that the calculation time can be significantly reduced.
As shown in Figures 3a) and b), the modelled progressive failure process during the uniaxial compression test is illustrated in terms of the distributions of the minor principal stress and of the cracks, respectively. The value of the minor principal stress is indicated by the colour legend on the left side of Figure 3. Moreover, the red colour represents shear failure, while the blue represents tensile failure and the model boundaries in Figure 3b).

MODELLING OF EXCAVATION DAMAGED ZONE IN DEEP TUNNEL BY BLAST
In this section, a perimeter blasting technique, i.e. smooth blasting, is adopted for the excavation of a deep tunnel. Figure 5 illustrates the geometry and the numerical model of the top heading of the excavated tunnel. A 22 m × 22 m square is taken as the model boundary. The tunnel is assumed to be located 450 m underground. The in-situ stresses around the tunnel are the major principal stress σ1 = 30 MPa, which lies in the horizontal direction, and the minor principal stress σ3 = 10 MPa, which lies in the vertical direction. In order to save computational time, only the last step of the top-heading excavation is modelled here.
As shown in Figure 5a, 46 boreholes with diameters of 0.1 m are set around the top heading. The spacing between boreholes is 0.4 m and the blasting burden is 0.5 m. As shown in Figure 5b, triangular elements are employed to discretize the model, with a denser mesh in the area of interest. Due to symmetry, only half of the model is simulated; the symmetry and bottom boundaries are fixed in the x and y directions, respectively, while the in-situ stresses are applied on the other boundaries. Figure 6 visually illustrates the evolution of the minor principal stress in the top heading of the tunnel, while the corresponding fracture initiation and propagation process is depicted in Figure 7. It should be noted that, following the sign convention of solid mechanics, compressive stress is taken as negative while tensile stress is regarded as positive. Moreover, the colour in Figure 6 represents the magnitude of the minor principal (compressive) stress, which can be read from the legend shown in Figure 6a). The red colour represents tensile failures, while the boundaries and shear failures are marked in blue in Figure 7.

Stress and crack initiation and propagation
As can be seen in Figure 6, the in-situ stresses are applied on the left, right and top boundaries of the model; they hence initiate from these three boundaries and propagate towards the tunnel wall, which they reach at 0.40 ms (Figure 6b). The stresses are reflected from the tunnel wall and interact with the stresses from the boundaries. Eventually, in-situ stress equilibrium is achieved (Figure 6c). A few cracks are observed around the tunnel wall (Figure 7b) due to the in-situ stress concentration. As the explosives in the 46 boreholes are detonated simultaneously at 1.45 ms, the chemical charges contained in the boreholes react rapidly [21]. As a result, high-pressure gases are created in each borehole and interact with the rock mass surrounding the boreholes, and intense stress waves immediately initiate and propagate rapidly outwards from the boreholes, as illustrated in Figure 6d). It is worth noting that, due to the relatively low initial gas pressure adopted for smooth blasting, crushed zones, in which the rock mass is supposed to be crushed and scattered immediately around a borehole, are not clearly formed. Instead, cracked zones are produced directly (Figure 7c). The blast-induced stresses continue to propagate outwards and a strong stress zone is formed along the tunnel wall (Figures 6d, e and f). While the stress waves continue to propagate outwards, the cracks around the boreholes propagate radially and interact with the cracks from adjacent boreholes to form a crack contour (Figure 7d). As a result, the boreholes are connected together by the cracks to form a single chamber. The cracks then mainly propagate along this crack contour, driven by the high-pressure gas in the chamber. Additionally, more cracks initiate at the tunnel boundary because of the stress reflections, and eventually the excavated rock mass is broken into fragments, as illustrated in Figures 7f, g and h).
Moreover, long cracks continue to propagate under the expansion of the high-pressure gas (Figures 7e-h). Finally, the excavated rock mass is pushed out by the high-pressure gas and a new tunnel wall is formed (Figure 6i, Figure 7h). Figure 7i illustrates the newly formed tunnel wall after removing the fragments, as well as the EDZ around the tunnel wall.

CONCLUSION
In this paper, a hybrid FEM/DEM is proposed to model the blast-induced EDZ in a deep tunnel. Among the key components of the hybrid finite-discrete element method, the transition from continuum to discontinuum through fracture and fragmentation distinguishes the proposed method from continuum-based methods, e.g. the FEM, and discontinuum-based methods, e.g. the DEM. The numerical model of the hybrid method comprises one or several discrete elements, which are discretized into finite elements. The transition from continuum to discontinuum through fracture and fragmentation is implemented by separating adjacent finite elements, which are bonded together by four-node joint elements. Additionally, detonation-induced gas expansion and flow through the fracturing rock play significant roles in rock blasting. Hereby, a pressure-time history curve generated with the commercial finite element code AUTODYN using the Jones-Wilkins-Lee (JWL) equation of state (EOS) is implemented in the hybrid method to model the gas expansion process, while a simple model consisting of a gas zone and constant-area ducts is adopted to model the gas flow through the fracturing rock mass.
The proposed method is first used to model the rock failure process in a uniaxial compression test in order to calibrate the hybrid FEM/DEM, by comparing the modelled failure process and force-loading displacement curve with those in the literature. The calibrated method is then used to model the EDZ in a deep tunnel to demonstrate the potential of the hybrid method. The stress and crack initiation and propagation are modelled and the EDZ is obtained. The modelled EDZ shows good agreement with results reported in the literature. Therefore, it is concluded that the hybrid FEM/DEM is capable of modelling blast-induced rock failure processes and that the hybrid finite-discrete element method is a valuable numerical tool for the study of the EDZ in tunnels.

ABSTRACT
The PICture Unsupervised Classification with Human Analysis (PICUCHA) method is a hybrid human-artificial intelligence methodology for pavement distresses assessment. It combines the human flexibility to recognize patterns and features in images with the neural network ability to extend such recognition to large volumes of images. In this study, the PICUCHA performance was tested with images taken with area-scan cameras and flash-light illumination over a pavement with dark textures. These images are particularly challenging to analyze because of lens distortion and non-homogeneous illumination, which generate artificial joints occurring at random positions inside the image cells. The chosen images had previously been analyzed by other software without success because of the dark colour. The PICUCHA algorithms analyzed the images with no noticeable problem and without any image pre-processing, such as contrast or brightness adjustments. Because of the special procedure used by the pavement engineer for the key patterns description, the distresses detection accuracy of PICUCHA for this particular image set could reach 100%.

INTRODUCTION
Road pavements are among the most expensive infrastructures for any country. The budgets to maintain highways are always limited, while there is a continuous demand for their expansion. The available funds are rarely sufficient to keep all road pavements in excellent condition all the time. On top of this, a large portion of the road infrastructure, constructed in the 1960s and 1970s, is approaching the end of its service life, which will squeeze the budgets even more in the coming years. To help manage such situations and maximize the value of the existing road budgets, effective pavement management is indispensable.
Together with the structural condition, the pavement surface condition assessment is the most important information for proper pavement management as well as for pavement rehabilitation design. Manual assessments, common decades ago, are no longer viable for many reasons, including high costs, high labour demand and the risks for staff on today's highly trafficked highways.

Article no. 4 THE CIVIL ENGINEERING JOURNAL 1-2017
Over the years, different imaging technologies were developed or adapted for automated pavement surveys, including traditional area-scan cameras that take an image covering a rectangular area on every shot, line-scan cameras that take a single line of pixels on every shot, and 3D cameras that read one transverse lane profile on every shot. Despite the progress in the hardware used in field surveys, the distresses detection and analysis software for generating field data remains relatively underdeveloped.
Puan and others developed an automated pavement imaging program (APIP) for pavement distresses assessment, capable of working with longitudinal, transverse and alligator cracks, and also able to analyze crack severity by using a number of different algorithms [1]. Ting and others used an approach based on k-means and classification algorithms to identify pavement distresses in pictures. The images were processed and filtered in order to keep only the black-and-white pixels that were assumed to be related to cracks. The images were then grouped into clusters, with the distresses being detected by a decision-tree algorithm capable of recognizing horizontal, vertical, alligator and man-hole-like cracks [2].
To detect pavement distresses in pictures, Ouyang et al. tried an approach based on filtering the images to remove the background, or pavement texture, combined with image enhancement, segmentation and Canny edge detection [3]. Nguyen and others detected cracks using a form of conditional texture anisotropy to characterize and classify pixels as "crack" or "crack-free". The idea is to detect variations of features, including noise, continuity, homogeneity and others. The authors claimed the method can also detect other patterns, such as joints [4].
Tsai and others worked on image segmentation as a kind of preprocessing for distresses detection and classification. Six different algorithms were used and evaluated with images taken near the city of Atlanta, USA, with varying lighting conditions, shadows and cracks [5]. Lin and Liu used the Support Vector Machine (SVM), a type of artificial intelligence similar to neural networks, for the assessment of potholes in pavement pictures. The pavement texture was detected using the histogram, and a non-linear SVM was built to identify whether the target region was a pothole. The experimental results show that the approach can achieve satisfactory results [6].
An approach combining the analytic hierarchy process (AHP) and fuzzy logic theory for pavement condition assessment was developed by Sun and Gu. AHP is used to determine weights from a paired-comparison matrix, together with an evaluation based on fuzzy relations combining five different indicators: roughness, deflection, surface deterioration, rutting and skid resistance. A maximum-grade principle and a defuzzified weighted cumulative index are proposed to assess the condition of a road [7].
Salman and others proposed a novel approach to automatically distinguish cracks in digital pavement images with the "Gabor" filter, which is related to mammalian visual perception. Their study addressed pavement images with a prominent surface macrotexture, which are frequently harder to analyze [8].
Potholes were also the subject of Koch and Brilakis' study. The images were first segmented into defect and non-defect regions using histogram-based thresholding, and then the potential pothole shape was approximated by morphological thinning and elliptic regression. After that, the texture inside the potential pothole was compared with the texture of the surrounding area to decide whether it represented a pothole or not. The routine was implemented in MATLAB [9]. Later, the procedure was improved by using a vision tracker to reduce the computational effort and improve the detection and counting of potholes [10].
To detect distresses in images, Avila and others used a "minimum path" technique, defined as the path along which the sum of pixel intensities is the smallest. It is a single-step procedure that works with the geometry and intensity properties of pixels, made viable for practical use by dynamic programming [11]. For distresses assessment, Díaz-Vilariño and others used data from a mobile Light Detection and Ranging (LiDAR) system, a surveying technology that measures distances by illuminating the target with laser light. Their method is based on the evaluation of roughness descriptors from LiDAR data to segment and classify the asphalt pavement [12].
The Kinect device is a motion sensor originally developed to provide a natural interface for gamers. Because of its low cost, interesting detection capabilities and easy connection to computers, it has been adapted for different uses in many industries. Xie worked with Kinect-generated data to detect pavement distresses by considering measurements of depth, width, length and area, and various cracking types, such as alligator cracking, transverse cracking and longitudinal cracking [13].

PICUCHA FOR PAVEMENT DISTRESSES ASSESSMENT
The PICture Unsupervised Classification with Human Analysis (PICUCHA) method is a new approach for pavement distresses assessment that combines the human flexibility to recognize patterns in images with the neural network ability to match patterns by similarity, expanding the (human) pavement engineer's decisions over large image sets.
PICUCHA has been designed to circumvent limitations commonly found in other methodologies, such as sensitivity to variations in pavement colour and texture, by handling all the patterns registered in the images, whether distresses or not. PICUCHA can detect good pavement, raveling, complex or isolated cracks, block or alligator cracks, sealed cracks, patches, potholes, painted road markings such as white or yellow stripes, reflective signs attached to the pavement, drainage devices, embedded inductive loops, joints, asphalt bleeding, or any combination of two or more of such patterns, among others. It can analyze road sections with mixed pavement types, like asphalt and concrete, with any type of surface texture, colour or pattern, including anti-slip strips or cuttings, and with the presence of complex or solid shadows.
PICUCHA is capable of analyzing orthogonal ("downward-facing") images taken in the field with any device technology (line-scan or area-scan cameras, 3D cameras, …), different sources of illumination (laser beam, incandescent lamps, LED, …) and different image resolutions and dimensions, such as 512 x 2048 or 2048 x 2048 pixels. The PICUCHA approach is structured in a few steps, including the field survey, key pattern extraction, key pattern analysis by a pavement engineer, and the expansion of the engineer's decisions to all the images in a given set, as shown in Fig. 1. A study evaluating the performance of the PICUCHA method with images at different resolutions is also available [14].
The PICUCHA method is an extension of our previous work on new methodologies and artificial intelligence applied to pavement engineering, including pavement management with genetic algorithms [15], pavement modelling with neural networks [16], the aside failure criteria that opens a new frontier of possibilities [17], and a deflection basin geometry analysis to calculate strains [18], among others.

CAMERA TECHNOLOGIES FOR PAVEMENT SURVEY
Over the years, companies have developed different camera technologies and other devices for pavement surface condition assessment, with growing image resolutions. The most common are area-scan cameras, line-scan cameras and 3D cameras. These devices are available in different image resolutions and from different suppliers and integrators ("brands"). The camera type and resolution have a direct impact on the image quality and on the equipment's cost.

Field survey
The field survey is done with any equipment capable of taking downward-facing pictures. The PICUCHA algorithms can process images taken with any device brand or technology, including:
- line-scan or area-scan cameras, laser crack measurement systems (LCMS) or others;
- laser, incandescent, LED or other types of lamps, or just natural illumination; and
- images of any size and resolution.

Key patterns identification
The images are sliced into cells (e.g. 128 x 128 pixels) and a special algorithm analyzes and groups the cells to identify the key patterns. There is no predefinition of distresses and no limitations; the self-learning algorithm deals with any kind of pattern existing in a given image set. The key patterns represent the characteristics of every group.
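A minimal sketch of such unsupervised cell grouping follows, using a plain k-means clustering as an illustrative stand-in; the actual PICUCHA grouping algorithm is not detailed in the text:

```python
import numpy as np

def group_cells(cells, k, iters=20, seed=0):
    """Group image cells by similarity with a minimal k-means.

    Illustrative only: each cell is flattened to a pixel vector and clustered
    into k groups; the real PICUCHA algorithm may work quite differently.
    """
    X = np.asarray(cells, dtype=float).reshape(len(cells), -1)  # flatten cells
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]       # initial centres
    for _ in range(iters):
        # squared distance of every cell to every centre
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                                # nearest centre
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)         # update centre
    return labels, centers
```

Each resulting group is then represented by one or more key-pattern cells that a human expert can inspect, which is the step the next subsection describes.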

The human pavement expert analysis
The key patterns are analyzed by a human pavement expert, who describes the distresses and other desired characteristics based on any standard or manual for distresses assessment. At least one key-pattern cell per group must be analyzed. This procedure avoids relying solely on software tools and keeps the human expert on top of the process.

The human pavement expert analysis expanded to all images
The human pavement expert's descriptions are used to re-feed the algorithms, which expand these decisions to all the images in the respective image set and groups, generating the final report.
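The expansion step amounts to mapping each cell's group id back to the expert's description of that group. A sketch follows; the group numbers and distress labels are hypothetical examples, not the groups of this study:

```python
from collections import Counter

def expand_decisions(cell_labels, expert_descriptions):
    """Apply the expert's per-group description to every cell in the set."""
    return [expert_descriptions[g] for g in cell_labels]

def distress_report(cell_labels, expert_descriptions):
    """Summarize the final report: number of cells per described pattern."""
    return Counter(expand_decisions(cell_labels, expert_descriptions))
```

Describing one key pattern per group is thus enough to label every cell in the set, which is what keeps the expert's workload small even for large image sets.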

Fig. 1 -Flow chart of PICUCHA Method's main steps
Line-scan cameras represent an advance in technology and image quality (Fig. 2.b). As the name suggests, such a camera takes a single line of pixels on every shot (e.g. 2048 x 1 pixels). Image distortions and illumination heterogeneity are minimal. The line-scanned pixels can be assembled to generate a consistent and flat rectangular image. The technology is compatible with laser (beam) illumination in addition to incandescent, LED and other types of lamps.
The 3D cameras, also known as laser crack measurement systems (LCMS), collect two different pieces of information at the same time: a 2D top-view image and a transverse profile (Fig. 2.c). The image is usually a by-product for this equipment and its quality tends to be low. The analysis is performed by using the profile data to locate the cracks over the image. Some integrators install 3D cameras together with line-scan cameras for better image quality. This technology is relatively new and well known for its very high cost.

PICUCHA PERFORMANCE WITH AREA-SCAN CAMERA AND DARK IMAGES
The objective of this study is to verify the ability of the PICUCHA algorithms to process images taken with area-scan cameras and showing a dark asphalt pavement.
Images taken with area-scan cameras and flash illumination represent an extra challenge for processing and distresses detection. Some pavement areas are closer to the camera than others, creating lens distortions and non-perfectly flat images, and the illumination is not homogeneous over the image because some areas are closer to the lamps than others, generating an artificial joint when two or more images are assembled together for the analysis, as shown in Fig. 3.
Dark images are an important challenge for automatic distresses detection. Almost all software is capable of analysing only images with distress patterns, colours and visual textures similar to those considered during the algorithms' development, failing in all other cases. The images used in this study had previously been analysed by other commercial software without success because of the pavement's dark colour and texture. The images analysed in this study were taken by the Road and Transport Research Institute of Lithuania with a vehicle survey system equipped with two area-scan cameras, providing a final resolution of 1.1157 mm/pixel, and flash-light illumination (strobe light bar) (Fig. 4). The area-scan cameras take a rectangular image on every shot. Every image is 2921 x 444 pixels in size, resulting from the data fusion of both cameras, and covers an area of 3.25 x 0.50 meters (width x length), which is slightly narrower than a typical road lane.

Fig. 4 -Vehicle for pavement surveys with dual area-scan cameras and flashlights
For this study, 40 sequential images of 2921 x 444 pixels each are used, assembled together as shown in Fig. 5, with a final size of 2921 x 17760 pixels. No image pre-processing, such as contrast or brightness adjustments, has been applied; the images are raw as generated by the cameras.
The analysis followed the PICUCHA methodology, and the 2921 x 17760 pixel image (Fig. 5) was broken down into cells. For the processing, the cell size was set to 133 x 133 pixels and the algorithms were set to generate 25 groups. This procedure, the assembling of the 40 images and their successive slicing into cells, is required so that the illumination joints and lens distortion occur at random positions inside the cells (Fig. 6), preventing bias and making the analysis more challenging.
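The slicing step can be sketched as follows. Note that this naive version discards edge remainders smaller than one cell, yielding 2793 full cells for the dimensions above, slightly fewer than the 2814 cells reported in the study, which suggests the actual implementation pads the edge cells rather than dropping them:

```python
import numpy as np

def slice_into_cells(image, cell=133):
    """Slice an assembled pavement image into square cells.

    Sketch only: edge remainders smaller than one cell are discarded here,
    whereas the study appears to pad them into extra cells.
    """
    h, w = image.shape[:2]
    return [image[r:r + cell, c:c + cell]
            for r in range(0, h - cell + 1, cell)
            for c in range(0, w - cell + 1, cell)]

# e.g. the assembled 17760 x 2921 px mosaic of this study (placeholder zeros)
mosaic = np.zeros((17760, 2921), dtype=np.uint8)
cells = slice_into_cells(mosaic)   # 133 full rows x 21 full columns of cells
```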
The PICUCHA algorithms had no problems processing the image (Fig. 5) and performing the cell grouping. The distribution of the 2814 cells among the 25 groups is shown in Fig. 7. Group 5 is the largest, with 349 cells, while group 19 has just 17 cells. The cell grouping is a key step to reduce, simplify and rationalize the subsequent pavement engineering work of describing the patterns.
Following the PICUCHA method, the pavement engineer described the key patterns according to the Lithuanian standard for distresses assessment [19]. Five patterns were identified in the images: (1) good pavement, (2) crack of medium severity, (3) longitudinal crack of medium severity, (4) alligator cracking, and (5) pavement patch. The characteristics (patterns and distresses) of every group are shown in Fig. 8, and the groups of cells over the original image are shown in Fig. 9. The final results, containing the distresses assessment for this study, are shown in Fig. 10. Because of the special procedure used by the pavement engineer when analysing the key patterns, the PICUCHA accuracy for this particular study is 100%. Different image sets may lead to different accuracies, which depend on the image quality, the camera used, the illumination type, the pavement characteristics, the complexity of the distress patterns, the pavement engineer's work analysing the key patterns, and other variables.

CONCLUSIONS
The 40 images used in this study were taken with a survey device with two area-scan cameras and flash-light illumination. The images have lens distortion and illumination joints, while the pavement has a dark colour and texture. Other commercial software was not able to analyse them and perform the distresses detection.
The 40 images were assembled together and then successfully divided into cells as part of the PICUCHA procedure. The illumination joints and lens distortion were distributed at random positions inside the cells, as expected.
PICUCHA was able to perform the analysis despite the lens distortion, illumination joints and dark aspect of the pavement. No image pre-processing was used. The algorithms made 25 groups, which is a relatively low number, and the distresses are well characterized in every group. The pavement engineer followed the Lithuanian pavement distresses assessment standard to identify five key patterns: good pavement, crack of medium severity, longitudinal crack of medium severity, alligator cracking and patch. Because of the special procedure used by the pavement engineer, the distresses detection accuracy for this study could reach 100%. Different image sets may lead to different accuracies.

[19] TKTI, 1994. Asfaltbetonio Dangų Defektų Nustatymo Metodika. Kaunas, Lithuania.

INTRODUCTION
A well-known method of extinguishing fires is fire-fighting with water mist. After the ban of halon fire extinguishing by the Montreal Protocol and the subsequent search for effective alternatives to gaseous fire-extinguishing agents, fixed and semi-fixed fire-extinguishing systems have rapidly gained recognition over the last decade. These are sprinkler systems, which diffuse water in the form of a shower through sprinklers or nozzles, and fog systems with fog nozzles forming a water-mist spray. Studies have confirmed the following main properties and mechanisms behind its extinguishing efficiency.

The method has a strong cooling effect, manifested by heat removal from the combustion zone and its cooling below the reaction temperature of combustion: water has a specific heat capacity cp = 4.2 J/(g·K) and a latent heat of vaporization of 2442 J/g. The cooling effect is greater with flammable liquids with a flash point above the ambient temperature, such as diesel with a flash point (FP) of ~55 °C. It is smaller with flammable liquids with a flash point below the normal ambient temperature, such as heptane (~ -4 °C), and with solid combustible materials forming a carbon residue. The inertizing effect, caused by the displacement of oxygen from the combustion zone by the water vapour resulting from the rapid evaporation of fine water droplets, whose volume increases about 1900 times at 95 °C and a barometric pressure of 1 at, also depends on the type of flammable substance. Hydrocarbon-based substances still burn at O2 concentrations down to 13 vol %, whereas those forming a carbonaceous residue still burn down to 7 vol % of O2. It is also obvious that a larger burning fuel load consumes the oxygen present in a closed space faster than in the open.
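The cooling figures quoted above can be combined into a simple per-gram energy balance, assuming mist sprayed at 20 °C that is heated to the boiling point and fully evaporated (2442 J/g is the latent-heat value used in the text; the spray temperature is an assumption):

```python
# Cooling capacity of water mist per gram, using the figures quoted above.
cp = 4.2            # specific heat capacity of water, J/(g*K)
L_vap = 2442.0      # latent heat of vaporization used in the text, J/g
T_spray, T_boil = 20.0, 100.0   # assumed spray temperature and boiling point, degC

q_sensible = cp * (T_boil - T_spray)   # heating the liquid to the boiling point
q_total = q_sensible + L_vap           # plus complete evaporation of the droplet
```

The sensible heating contributes only about 336 J/g against 2442 J/g for vaporization, which is why fine droplets that evaporate completely are so much more effective than coarse sprays that run off before vaporizing.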
As for the insulating effect, which blocks the radiant heat from the flame to the not-yet-burning flammable surfaces, it can again be considered that this effect is less evident, for example with burning flammable liquids whose flash point is below the normal ambient temperature. Studies have confirmed that the reduction in radiation is < 10 % if the diameter of the water droplets is > 100 microns, while drops smaller than 50 microns can achieve a radiation decrease of > 50 %. One must also mention the advantages of water mist compared to other extinguishing systems: it is not toxic, it does not pollute the environment, it causes minimal subsequent damage, and it has a high extinguishing efficiency with certain fires compared to other fire-extinguishing agents.
Another phenomenon still under study is combustion and the interaction of an electric field with flames without the presence of a commercial fire-extinguishing agent. The combustion process consists of chemical reactions of initiation, propagation, branching, and termination, in which radicals participate. Cations are the most abundant charged species in the flame; the concentrations of anions and free electrons are much smaller, with anions predominating in the luminous flame zone. An electric field acting on the flame causes the so-called ionic wind, i.e. the movement of radicals and ions, including free electrons, under the Coulomb electrostatic forces of the field; collisions with neutral molecules occur during this motion. Furthermore, an electric field acting on the flame produces a chemical effect by the following mechanism: free electrons gain energy from the field and transfer it, for example, to O2 molecules on collision. The oxygen molecules increase their vibrational energy and thereby the rate of the primary reaction of hydrocarbon combustion. The interaction between flames and an electric field in the absence of an extinguishing agent remains a subject of both basic and applied research. Issues of the design, construction, and operation of mist fire-fighting equipment are dealt with in a number of international and national standards, including CSN; see, for example, standards [1] - [8].
According to these standards, the fire-fighting equipment consists of components for fire detection, activation of the fire-extinguishing system, water supply, pumps or metal gas cylinders with gaseous propellant, piping with valves, a valve station, and nozzles/heads that fragment the water into small droplets of defined size. Water mist is a water spray whose droplet diameter Dv0.90, measured in a plane 1 m from the head at the minimum operating pressure, is smaller than 1 mm [2]. According to pressure, fog fire-extinguishing devices are divided into high-pressure ones, where p ≥ 34.5 bar, medium-pressure ones with 12.5 < p < 34.5 bar, and low-pressure ones with p ≤ 12.5 bar. Their water consumption is up to 90 % lower than that of sprinkler fire-fighting equipment; consequently, they cause less consequential damage to indoor equipment and the building, and, being more efficient, they also achieve shorter extinguishing times. Extinguishing efficiency is also significantly influenced by the size of the droplets (mm) and their size distribution in the shower/spray flow, the water flow per unit area of the fire (l/min·m²), the direction of the mist spray flow, the air flow in the surroundings, and the timing of the spray with respect to the flame.
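The pressure classes above translate directly into a small helper; the sketch below is ours (the function name is an assumption), using the thresholds quoted from the standards:

```python
def mist_system_class(pressure_bar):
    """Classify fog fire-extinguishing equipment by operating pressure
    (bar): HP for p >= 34.5, MP for 12.5 < p < 34.5, LP for p <= 12.5."""
    if pressure_bar >= 34.5:
        return "high-pressure"
    if pressure_bar > 12.5:
        return "medium-pressure"
    return "low-pressure"

print(mist_system_class(40.0))   # high-pressure
print(mist_system_class(20.0))   # medium-pressure
print(mist_system_class(10.0))   # low-pressure
```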
However, none of these standards addresses increasing the extinguishing efficiency of water mist with the help of an electric field. The literature contains no references to specific practical applications of fire-fighting equipment whose efficiency is improved in this way, and a search of the available databases of patents and utility models was also negative.

ELECTRIFICATION OF WATER MIST - POSSIBLE TECHNICAL SOLUTIONS
To increase the extinguishing efficiency of low-pressure (LP), medium-pressure (MP), and high-pressure (HP) water mist in fixed and semi-fixed fire-fighting systems using an electric field, the authors developed the equipment and tested it experimentally in the test rooms of the Fire Technical Institute in Prague (HP) and at the Czech Technical University in Prague - UCEEB in Buštěhrad (LP and MP). The device is provided with one or more fog nozzles, operating at the given working pressure with a defined water droplet size, connected to water pipes with valves. It includes a DC high-voltage generator with an adjustable output voltage of up to 25 kV, located outside the protected space. A positive electrode is connected to the positive terminal of the generator by an electric cable run in a tubular beam; the negative electrode is connected to the negative pole of the generator by an electric cable, likewise run in a tubular beam. The axes of the positive and negative electrodes lie in the axis of each mist nozzle. The positive electrode is formed by a metal strip shaped into a ring with a minimum internal diameter of about 10 cm, a maximum width of about 20 mm, and a thickness in the range of 1 mm to 2 mm. This strip carries metal screws anchored to the ring at equal spacing, with their tips directed into the annulus; a strip of wire mesh, with wires projecting (1-2) mm above and below the band over its entire circumference, is attached to the inner portion of the band. The surface of the positive electrode is positioned perpendicular to the axis of the water-mist paraboloid, opposite the orifice of the fog nozzle, at a distance that ensures the passage of at least ¾ of the mist spray through the annulus. The negative electrode is a 3- to 4-tooth element formed by straight metal wires directed radially into the space being extinguished.
The wires are 70 to 100 mm long and 1 mm to 2 mm in diameter, and their ends are sharpened to a point. The tip of the n-tooth element is located at an experimentally verified distance from the mouth of the given mist nozzle. One positive electrode and one negative electrode are used with each mist jet, and the electrodes are connected to a common DC high-voltage generator.
The advantage of the equipment proposed in this way for electrifying the water droplets in the mist cone jetting from the fog nozzles of LP, MP, and HP mist fire-extinguishing equipment is a significant increase in its extinguishing effect on hydrocarbon flames, demonstrated by substantially shorter extinguishing times and hence substantially lower water consumption compared with extinguishing the same fire scenario with the same fog equipment but without the effect of the electric field.
The equipment for increasing the extinguishing efficiency of HP, MP, and LP water mist can be implemented in fixed and semi-fixed fire-extinguishing devices that have at least one fog nozzle with a defined water droplet size at the given pressure.
These mist nozzles are connected to a water pipe provided with valves. The experiments used an HP water pump, a Porto-Sonic 7000 ultrasonic flowmeter, and a DIAPHRAGM gauge from the company Concept with a range of (0-16) bar. A DC electric source/generator PZVN 01, which can be energized from the 230 V/50 Hz network, was also used, see Fig. 1.

Fig. 1 - A view of the test equipment during the extinguishing of burning n-heptane in a metal tray
The generator is equipped with a switch. Switching is preferably triggered by an electric fire alarm system, for example after the glass bulb of a water jet bursts due to the elevated temperature of a fire, or manually by personnel who notice a fire.
It is obvious that the HP, MP, and LP mist nozzles of a fixed or semi-fixed fire-fighting system must be designed so that the extinguishing sprays of water mist completely cover the potential area or volume of the fire within the required time. The electric cables of the supply electrodes of the individual nozzles are routed to terminal blocks for the plus and minus poles; more precisely, the positive electrode connected with the mist nozzle is grounded. The terminal blocks are connected to a DC high-voltage generator.
The water pipe with the valve brings water into the fog nozzle at a pressure of, for example, (5-16) bar and a flow rate of, e.g., (4-6.5) l/min, the water temperature being 20 °C. The electric field generated between the negative and positive electrodes by the high DC voltage supplied through the cables from the HV generator charges the fine droplets of the water mist, which, on interacting with the flames, effectively disrupt the chain chemical reactions of combustion occurring in them. In this way, they significantly enhance the cooling and insulating extinguishing effects of the water mist itself. The generated currents are of the order of microamperes, so there is no danger of an electrical short circuit or discharge between the water-mist spray and metal objects in contact with the mist in the protected space, nor any risk of injury to personnel from accidental contact with the mist. The experiments were carried out in a closed test area at atmospheric pressure, an air temperature of (19 to 21) °C, and a relative atmospheric humidity of (40 to 53) %.

CONCLUSION
With the DC high-voltage generator supplying a voltage of (5-10) kV to the electrodes, repeated tests gave extinguishing times averaging 20 seconds from the moment of discharge of the extinguishing agent. With the high-voltage DC power switched off, the same fire scenario under the same test conditions was not extinguished within 1 min. On the basis of these positive experimental results, utility model applications were submitted to the Industrial Property Office (IPO) in Prague and were subsequently registered [9], [10]. Those interested in the practical use of this development should note, among other things, their validity; see also [11].

INTRODUCTION
China has built many tunnels in loess regions and has accumulated rich experience in loess tunnel engineering. As a particular soil with collapsibility and well-developed vertical joints, loess can lead to collapse, large surface settlement, surface cracking, low bearing capacity of the tunnel base, large deformation of the primary support, and vertical settlement after a free face is created by tunnel excavation. Damage to loess tunnels mainly includes lining deformation; cracking, chipping, and collapse of the roof; water seepage and leakage; surface cracks, sinkholes, and karst caves; vault settlement; etc. Water damage, caused by surface water and groundwater directly or indirectly percolating through or flowing into the tunnel, seriously affects and threatens the safety, comfort, and normal operation of loess double-arched tunnels. Currently, research on water damage to double-arched tunnels is mainly conducted through field investigation and statistical analysis, but little of it concerns water damage to loess double-arched tunnels.
Wang Yuhua et al. [1] proposed treatment measures for water seepage on the basis of investigating the current state of damage to the Jinzhulin double-arched tunnel and analyzing its causes; Lai Jinxing et al. [2] classified water bursts in loess and soft-rock tunnels, put forward design principles for tunnel waterproofing and drainage, designed waterproofing and drainage structures, and discussed their conditions of application through an investigation of water seepage in as-built loess and soft-rock tunnels in Gansu and Shaanxi Provinces of China; Wang Jianxiu et al. [3] studied the Sangongqing Tunnel on the Yuanjiang-Mohei Expressway (Yunnan Province), arranged crack monitoring points, monitored crack movement, and analyzed its characteristics over time. Ding Zhaomin et al. [4] considered, based on an investigation and analysis of one loess highway tunnel, that the primary causes of damage are the features of loess engineering, surface water, and the burial depth of the tunnel, and determined measures for the repair and reinforcement of lining cracks, treatment of the arch-foot foundation, and grouting reinforcement of the tunnel roof through finite element analysis. Taking the loess double-arched tunnel on the Lishi-Jundu Expressway as an example, Hu Jinchuan et al. [5] monitored on site the surface settlement, geological and supporting conditions, vault crown settlement, and horizontal convergence, used finite element software to analyze the variation of vault crown settlement and horizontal convergence of the surrounding rock, and thus confirmed the law and factors of deformation of the surrounding rock resulting from construction of a loess double-arched tunnel by the three-pilot-drift method. Wang Daoliang et al.
[6] surveyed the major water seepage locations of one integral double-arched tunnel through field investigation, adopted AHP to rank the factors causing water seepage, and confirmed that the major factors are the structure and construction management of the integral double-arched tunnel. Through field investigation of water seepage in seven double-arched tunnels on the Hangzhou-Anhui Expressway, Dou Fengguang et al. [7] found that water seepage mainly occurred at construction joints and tunnel entrances and was mainly affected by topography and geology, construction, and design; epoxy resin grouting materials, epoxy thickening coating, and polymer cement mortar should be used to seal the seepage locations. Lai Jinxing et al. [8] took the Qijia Mountain Highway Tunnel as an example, used geological radar to detect the lining thickness, cavities behind the lining, and water in cracks, used a sonic detector to test the lining materials, and proposed a reinforcement treatment scheme based on the detection results. Shi Jianxun et al. [9] employed an improved AHP to determine the major factors causing water seepage on the basis of field investigation of water seepage in tunnels on the Hangzhou-Huizhou Expressway; Liang Dexian et al. [10] carried out a physical simulation test of water burst due to tunnel excavation, analyzed the variation of stress, displacement, and pore water pressure, divided the water burst process into an accumulation stage and an instability stage, and established water burst criteria by combining actual engineering with catastrophe theory. By revealing the deformation features of long-span loess expressway tunnels in China, Li Pengfei et al. [11] used field monitoring and numerical simulation to determine the best construction method and excavation sequence for the surrounding rock, and analyzed the two-side-wall pilot tunnel method in detail. Mao Zhengjun et al.
[12] proposed a modular waterproofing and drainage partition for double-arched tunnel as well as its design method.

DEVELOPMENT CHARACTERISTICS OF WATER DAMAGE
Field investigation of water damage to the tunnels on the Lishi-Jundu Expressway in Shanxi Province, China, shows that water damage always develops at, and even around, the construction joints, expansion joints, settlement joints, and lining joints of a tunnel. Water damage is classified as dotted, linear, or planar water seepage according to its trace and extent on the lining. Water damage to the tunnels on the Lishi-Jundu Expressway is mainly linear water seepage, although planar water seepage is also well developed. In addition, the partitions and equipment boxes at the tunnel entrances and exits are prone to water seepage.

Circular linear water seepage
Circular linear water seepage is the most common and most serious water damage to loess tunnels. It not only exists in loess double-arched tunnels but is also prominent in separated loess tunnels. Circular linear cracks mainly result from non-uniform vertical load, geological changes in the surrounding rock, improper treatment of settlement joints, etc., and often occur at the tunnel portal or at the junction of an unfavorable geological zone and intact rock strata. Circular linear water damage to a loess double-arched tunnel develops along the circular linear crack from the vault, spandrel, and haunch, through the side wall, to the arch foot. See Figure 2 for the development of circular linear water seepage in the tunnels on the Lishi-Jundu Expressway.

Vertical linear water seepage
Vertical linear water seepage is accompanied by vertical lining cracks parallel to the tunnel axis, with a small amount of seepage water. However, it is the most fatal to the tunnel structure, and its development may lead to chipping and even collapse of the vault. See Figure 3 for the development of vertical linear water seepage in the tunnels on the Lishi-Jundu Expressway.

Slant linear water seepage
Slant linear water seepage usually appears at arch foot and the junction of side wall and arch foot. See Figure 4 for the development of slant linear water seepage in tunnels on Lishi-Jundu Expressway.

Planar water seepage
For the as-built loess tunnels, planar water seepage generally occurs at the vault and expands along circular linear cracks. See Figure 5 for the development of planar water seepage in tunnels on Lishi-Jundu Expressway.

Partition and equipment box
Water seepage mainly occurs at the partitions at the tunnel entrance and exit. See Figure 6 for the development of water seepage in the tunnel partitions on the Lishi-Jundu Expressway. Water seepage is also common in equipment boxes. See Figure 7 for the development of water seepage in the equipment boxes of the tunnels on the Lishi-Jundu Expressway.
The parameter lu indicates the degree of fuzziness, as shown in Figure 8. When lu < 1/2, the degree of fuzziness is too low to reflect the fuzziness perceived by people; when lu > 1, the degree of fuzziness is too high, so that the degree of confidence is lowered. Practice shows that the result is more realistic when 1/2 ≤ lu ≤ 1 [15][16][17].
Fig. 9 - Intersection of the triangular fuzzy numbers M1 and M2

Establishment of index system
An index system for the factors causing water damage to loess double-arched tunnels is established according to the field investigation of water damage to the tunnels on the Lishi-Jundu Expressway in Shanxi, China, the development characteristics of the water damage, and relevant research results on loess tunnels [18][19][20]. See Table 2 for the hierarchical analysis model of the factors causing water damage to loess double-arched tunnels.

Comparison between indexes
Scale numbers 1-9 are adopted for scoring the indexes after pairwise comparison. The form and scoring method of the investigation table are the same as in traditional AHP; on this basis, TFNs are applied for fuzzy expansion, as shown in Table 1.

Fuzzy judgment matrix for establishing grade-I index
See Table 3 for the fuzzy judgment matrix for the established grade-I index.
The scores in Table 3 are summarized using the additive operation on TFNs, and the average of the three experts' scores is taken to determine the fuzzy judgment matrix FBM1 [21].

Calculation of comprehensive importance of grade-I index
Formula (9) for the "weight summation" comprehensive fuzzy value is used to obtain the comprehensive importance values from the pairwise comparisons of the grade-I indexes.

Determination of weight of grade-I indexes
According to Theorem 1, the degree of possibility that index i is greater than index k (k = 1, 2, ..., n; k ≠ i) is measured [14]. The calculation result is as below:

Calculation for index weight of other grades
The index weights of the other grades are calculated by the same method as for the grade-I indexes; the calculation result for the weights of the grade-II indexes is as below:

Final sequence for weight of factors causing water damage
The relative importance of the lowest-level factors with respect to the highest-level factor (the overall target), i.e. the final ranking value of relative superiority, is calculated by combining the weights along the hierarchical structure, giving the final weight ranking. See Table 4 for the final weight ranking of the factors causing water damage to loess double-arched tunnels.
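The calculation chain of the preceding subsections can be sketched with the extent-analysis form of TFN-AHP. Everything below is an illustration under that assumption: the 3×3 fuzzy judgment matrix is invented, not the paper's Table 3 data, and the paper's Formula (9) and Theorem 1 may differ in detail.

```python
def synthetic_extents(M):
    """Fuzzy synthetic extent S_i of each row of a TFN comparison
    matrix, where every entry is a triple (l, m, u)."""
    row_sums = [tuple(sum(c[j] for c in row) for j in range(3)) for row in M]
    total = tuple(sum(r[j] for r in row_sums) for j in range(3))
    # Multiply each row sum by the inverse of the grand total
    # (the inverse of a TFN swaps its l and u components).
    return [(r[0] / total[2], r[1] / total[1], r[2] / total[0])
            for r in row_sums]

def possibility(m2, m1):
    """Degree of possibility V(M2 >= M1) for TFNs M = (l, m, u),
    based on the ordinate of the intersection of their membership
    functions (cf. the figure of intersecting TFNs above)."""
    l1, mid1, u1 = m1
    l2, mid2, u2 = m2
    if mid2 >= mid1:
        return 1.0
    if l1 >= u2:
        return 0.0
    return (l1 - u2) / ((mid2 - u2) - (mid1 - l1))

def weights(M):
    """Crisp weights: d'(i) = min over k != i of V(S_i >= S_k),
    then normalized so the weights sum to one."""
    S = synthetic_extents(M)
    d = [min(possibility(S[i], S[k]) for k in range(len(S)) if k != i)
         for i in range(len(S))]
    total = sum(d)
    return [x / total for x in d]

# Illustrative 3x3 reciprocal fuzzy judgment matrix (not the paper's data):
one = (1.0, 1.0, 1.0)
M = [
    [one, (1.0, 2.0, 3.0), (2.0, 3.0, 4.0)],
    [(1/3, 1/2, 1.0), one, (1.0, 2.0, 3.0)],
    [(1/4, 1/3, 1/2), (1/3, 1/2, 1.0), one],
]
print([round(w, 3) for w in weights(M)])   # approximately [0.567, 0.356, 0.077]
```

The final ranking in Table 4 is then obtained by multiplying each lowest-level weight by the weights of its ancestors in the hierarchy.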

Result analysis
According to Table 4, eleven factors each account for more than 4 % of the weight: construction of the three joints, atmospheric precipitation, waterproofing and drainage construction, integral straight middle wall, integral curved middle wall, waterproof board damage, the three-pilot-drift method, landform, pore water, concrete construction of the secondary lining, and blocking of the drainage system; these are the major factors causing water damage to loess double-arched tunnels. It follows that the construction stage is crucial for controlling water seepage during the tunnel operation period; in particular, the construction of the three joints and of the waterproofing and drainage should strictly follow the relevant specifications in actual engineering, so as to ensure the quality of the tunnel waterproofing and drainage works.

CONCLUSION
1) Through the field investigation of water damage to the tunnels on the Lishi-Jundu Expressway in Shanxi Province, China, it is found that water damage to loess double-arched tunnels always develops at, and even around, the construction joints, expansion joints, settlement joints, and lining joints. According to its trace and extent on the lining, the damage comprises dotted, linear, and planar water seepage, of which linear water seepage is the main form; planar water seepage is also well developed, and the partitions and equipment boxes at the tunnel entrances and exits are prone to water seepage.
2) According to the field investigation results and the development characteristics of the water damage, an index system covering 36 evaluation indexes for construction conditions, the design stage, the construction stage, and the operation stage is established for the factors causing water damage to loess double-arched tunnels, in combination with related research results on loess tunnels.
3) TFN-AHP is applied to calculate the weights of the indexes at the different levels, and the final weight ranking of the factors causing water seepage in loess double-arched tunnels is obtained. It is found that the construction stage is crucial for controlling water damage to loess double-arched tunnels and that atmospheric precipitation is the main water source; the possibility of water seepage is increased by the structural defects of the double-arched tunnel.
4) The final weight ranking of the various factors calculated by TFN-AHP is close to the actual situation, so the method is practical for analyzing the factors causing water damage to loess double-arched tunnels.

INTRODUCTION
Soil improvement in a broad sense incorporates the various methods employed to modify the properties of a soil so as to enhance its engineering performance. The term is used for a variety of engineering projects, prominent among which are road pavements and airfield pavements, where the main objective is to increase the stability of the soil and reduce construction cost by making optimal use of locally available materials [1]. Soil improvement can take the form of either soil modification or soil stabilization [2]. Modification refers to improvement that occurs in the short term, during or shortly after mixing (within hours). It reduces the plasticity of the soil (improves its consistency) to the desired level and improves the short-term strength to the desired level (short term being strength gained immediately and within about 7 days after compaction). Even if no remarkable pozzolanic or cementitious reaction occurs, the textural changes that accompany consistency improvements normally result in measurable strength improvement. Stabilization occurs when a significant longer-term reaction takes place. This longer-term reaction can be due to hydration of calcium silicates and calcium aluminates in Portland cement or class C fly ash, or to pozzolanic reactivity between free lime and the soil pozzolan or an added pozzolan. A strength increase of 350 kPa or greater (of the stabilized soil compared with the untreated soil under the same conditions of compaction and curing) is a reasonable basis for stabilization [3].
According to Attoh-Okine [4], soil stabilization is the process of altering geotechnical properties to satisfy engineering requirements. It has been used in landfills and mines, in the building of roads, aircraft runways, earth drains, and embankments, and in erosion control, while also reducing cost by making it possible to employ stabilized soil for low-cost housing in undeveloped regions of the world [5].
Soil stabilization is of utmost importance in civil engineering projects such as roadways, building foundations, and dams, because most lateritic soils in their natural state have low bearing capacity and low strength due to their high clay content. When a lateritic soil contains highly plastic clay, the plasticity of the soil may cause cracks and damage to civil engineering works. The improvement of the strength and durability of lateritic soils has therefore become imperative in recent times and has encouraged researchers to use stabilizing materials that can be sourced locally at very low cost. These local materials can be classified as either agricultural or industrial wastes [6].
Research into new and innovative uses of waste materials is continually advancing, particularly concerning the feasibility, environmental suitability, and performance of the beneficial reuse of industrial and agro-industrial waste products [7].
Koteswara et al. [8] performed geotechnical tests to investigate the effect of sawdust and lime on marine clay. In the study, the soil was mixed with sawdust at 5%, 10%, 15%, 20%, and 25%. The maximum dry density was obtained at the addition of 15% sawdust to the marine clay. The marine clay was then combined with 15% sawdust and lime at percentages varying from 3% to 7%; the maximum dry density (MDD) for these mixtures was obtained at 4% lime. It was also observed that, on addition of 15% sawdust and 4% lime, the liquid limit (LL), plasticity, and optimum moisture content (OMC) of the marine clay decreased, while the plastic limit (PL), MDD, and California Bearing Ratio (CBR) increased. They therefore concluded that sawdust can potentially stabilize an expansive soil either alone or mixed with lime.
In order to evaluate the effects of sawdust ash on the geotechnical properties of lateritic soils, Ogunribido [9] performed tests on three lateritic soil samples A, B, and C, covering consistency limits, specific gravity, compaction, unconfined compressive strength, shear strength, and California Bearing Ratio (CBR). These tests were conducted in the non-stabilized state and after stabilization with 2%, 4%, 6%, 8%, and 10% sawdust ash (SDA). He obtained optimum results with 6% sawdust ash and concluded that SDA is an effective stabilizer for lateritic soils. However, he did not consider the addition of lime.
Ayininuola and Oyedemi [10] performed a study on the impact of hardwood and softwood ashes on the geotechnical properties of soil.
Two soil samples were collected from two different locations and mixed separately with hardwood ash and softwood ash at replacements of 0%, 2%, 4%, 6%, 8%, 10%, and 15% by sample weight. Geotechnical tests such as particle size distribution, specific gravity, Atterberg limits, compaction, and California Bearing Ratio (CBR) were carried out on the samples. It was observed that the MDD values decreased with increasing ash content, following a pattern similar to that of the specific gravity, while the CBR of both samples increased from 0% to 8% for both softwood and hardwood ashes, with the optimum achieved at 8% ash replacement. Wood ash also increased the liquid and plastic limits of the soil. It was therefore concluded that hardwood and softwood ashes are suitable for improving the CBR of soils. Adrian et al. [7] carried out a laboratory study on compacted tropical clay treated with up to 16% rice husk ash (RHA), an agro-industrial waste, to evaluate its hydraulic properties and hence its suitability for waste containment systems. Compacted samples were permeated and the hydraulic behavior of the material was examined, considering the effects of molding water content, water content relative to optimum, dry density, and RHA content. The results revealed decreasing hydraulic conductivity with increasing molding water content and compactive effort; it also varied greatly between the dry and wet sides of optimum, decreasing towards the wet side. Hydraulic conductivity generally decreased with increasing dry density for all efforts, but remained within the recommended value of 1 × 10⁻⁷ cm/s for up to 8% rice husk ash treatment, irrespective of the compactive effort used. They concluded that the material was suitable as a hydraulic barrier in waste containment systems for up to 8% rice husk ash treatment.

Lime Stabilization
Lime is one of the oldest and still most popular additives used to improve fine-grained soils. Four major lime-based additives are used in geotechnical construction: hydrated high-calcium lime Ca(OH)2, calcitic quicklime CaO, monohydrated dolomitic lime Ca(OH)2·MgO, and dolomitic quicklime CaO·MgO. Lime treatment of soil facilitates construction activity in three ways. First, a decrease in the liquid limit and an increase in the plastic limit result in a significant reduction of the plasticity index, which gives the treated soil higher workability. Second, as a result of a chemical reaction between soil and lime, the water content is reduced, which facilitates the compaction of very wet soils; furthermore, lime addition increases the optimum water content but decreases the maximum dry density. Third, the immediate increase in strength and modulus provides a stable platform that facilitates the mobility of equipment.
When lime is mixed with clayey material in the presence of water, several chemical reactions take place: cation exchange, flocculation-agglomeration, pozzolanic reaction, and carbonation. Cation exchange and flocculation-agglomeration are the primary reactions, taking place immediately after mixing. During these reactions, the monovalent cations associated with the clay minerals are replaced by divalent calcium ions; these reactions contribute to the immediate changes in plasticity index, workability, and strength gain. The pozzolanic reaction occurs between the lime and the silica and alumina hydrates. Carbonation occurs when lime reacts with carbon dioxide to produce calcium carbonate instead of calcium silicate hydrates; carbonation is an undesirable reaction from the point of view of soil improvement [11].

Materials
The raw materials used for this study were lateritic clay soil, Sawdust Ash (SDA), hydrated lime and water.

Lateritic clay soil
Lateritic soil was collected at a depth of not less than 1.2 m from an existing borrow pit at the Federal University of Technology, Akure (FUTA), Nigeria.

Sawdust Ash (SDA)
According to Adetoro and Adam [12], sawdust is a by-product of cutting, grinding, drilling, sanding, or otherwise pulverizing wood with a saw or other tool. The dust is commonly used as domestic fuel, and the resulting ash is a form of pozzolana known as sawdust ash (SDA). Sawdust without a large amount of bark has proved satisfactory, since it does not introduce a high content of organic material that may upset the hydration reactions. The sawdust was collected from a sawmill in Akure town. It was then burnt to ash and sieved through a BS sieve of 0.212 mm to obtain a powdered ash.

Lime
The hydrated lime (Ca(OH)2) was obtained from an accredited chemical store.

Water
Water was obtained from the water taps in the laboratory.

Methods
Preliminary tests, such as Atterberg limit test, particle size distribution, specific gravity and natural moisture content were carried out for the purpose of classification and identification of the lateritic soil sample.

Natural Moisture Content test
The natural moisture content w is expressed as w (%) = (a / b) × 100, where a = (weight of empty can + wet sample) − (weight of empty can + dry sample), i.e. the weight of water, and b = (weight of empty can + dry sample) − (weight of empty can), i.e. the weight of dry soil.
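With the can weights measured before and after oven drying, the computation can be sketched as follows (the function name and the weights shown are hypothetical, not the study's data):

```python
def natural_moisture_content(can, can_wet, can_dry):
    """Natural moisture content (%) from oven-drying weights:
    a = weight of water lost, b = weight of dry soil."""
    a = can_wet - can_dry   # (can + wet sample) - (can + dry sample)
    b = can_dry - can       # (can + dry sample) - (empty can)
    return 100.0 * a / b

# Illustrative weights in grams: empty can 20 g,
# can + wet sample 70 g, can + dry sample 60 g -> w = 10/40 * 100 = 25 %
print(natural_moisture_content(20.0, 70.0, 60.0))
```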

Specific gravity test
The specific gravity was determined as the ratio of the weight of the soil solids to the weight of an equal volume of water.

Particle size distribution (sieve analysis)
The sieve analysis was done to determine the grain sizes of the soil collected, so as to classify the soil according to its known engineering properties. This involved sieving a quantity of soil through a stack of sieves with progressively smaller mesh openings from the top to the bottom of the stack.

Atterberg limits
Three quantities were determined for the soil Atterberg limits: the liquid limit, the plastic limit and the plasticity index.

Liquid Limit Determination
A 200 g soil sample passing the 425 µm sieve was mixed with water to form a thick homogeneous paste. The paste was placed in the cup of the Casagrande apparatus, a groove was cut, and the number of blows required to close it was recorded. The corresponding moisture content was used to indicate the liquid limit.

Plastic Limit Determination
A 200 g soil sample taken from the material passing the 425 µm test sieve was mixed with water until it became homogeneous and plastic and could be shaped into a ball. A thread of the soil was rolled on a glass plate until it cracked at approximately 3 mm diameter. The corresponding moisture content was used to indicate the plastic limit.

Plasticity Index Determination
This is obtained by subtracting the plastic limit from the corresponding liquid limit. Mathematically:

Plasticity Index (%) = Liquid Limit (%) − Plastic Limit (%)
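The index computation is trivial to check in code; the LL and PL values used in the example are those reported later in this paper for the untreated soil.

```python
def plasticity_index(liquid_limit, plastic_limit):
    """Plasticity Index (%) = Liquid Limit (%) - Plastic Limit (%)."""
    return liquid_limit - plastic_limit

# LL = 54.0 %, PL = 40.3 % for the untreated lateritic soil
print(round(plasticity_index(54.0, 40.3), 1))  # 13.7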

Preparation of Soil-SDAL Mixtures
Sawdust was first burnt to ash and then sieved through a 0.212 mm BS sieve to obtain a very fine ash, which was stored in an air-tight container to protect it from moisture and any form of contamination. Design procedures for soil modification or stabilization [14] set out criteria for the chemical stabilization of soil: lime can be used for soils with a plasticity index greater than 10 (PI > 10), and lime-fly ash blends for a plasticity index between 5 and 20 (5 < PI < 20). It was also advised that the lime added should fall within the range of 4% to 7%, while class C fly ash should fall within the range of 10% to 16%; a combination of lime and fly ash should be in the ratio of 1:1 to 1:9, respectively. Beeghly [15] noted that a combination of lime and fly ash gives a higher strength than lime alone, owing to the pozzolanic reaction between lime and fly ash. For this experiment, sawdust ash was mixed with lime in the ratio 2:1 for stabilization. For the purpose of this work, the mixture with the lateritic clay is denoted LAT-An, where n is the percentage of SDAL mixture added; 'LAT' stands for lateritic clay, and the sawdust ash lime (SDAL) mixture is denoted An. The lateritic clay soil was then treated with sawdust ash-lime mixtures (SDAL) at varying proportions of 2%, 4%, 6%, 8% and 10%. At each stage of the mixture, the stabilized lateritic clay soil was subjected to compaction, Atterberg limits and unconfined compressive strength tests.
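As a minimal sketch of the batching arithmetic, assuming the 2:1 SDA-to-lime ratio described above (the 3000 g batch mass is a hypothetical example, not a quantity from the study):

```python
def sdal_mix_masses(soil_mass_g, sdal_percent, sda_to_lime=(2, 1)):
    """Split an SDAL dosage (given as % of dry soil mass) into sawdust
    ash and lime masses, using the SDA:lime ratio of this study (2:1)."""
    sdal_mass = soil_mass_g * sdal_percent / 100.0
    parts = sum(sda_to_lime)
    sda = sdal_mass * sda_to_lime[0] / parts
    lime = sdal_mass * sda_to_lime[1] / parts
    return sda, lime

# Hypothetical 3000 g batch at 6 % SDAL -> 180 g SDAL = 120 g SDA + 60 g lime
print(sdal_mix_masses(3000.0, 6.0))  # (120.0, 60.0)
```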

Compaction Characteristics
The compaction test was performed on the soil in its natural state and with addition of varying percentages of the sawdust ash lime mixtures. The method adopted for this test was the British Standard Light (BSL) energy method.

Unconfined Compressive Strength (UCS) tests
This test is applicable in virtually all geotechnical engineering design (e.g. the design and stability analysis of foundations, retaining walls, slopes and embankments) to obtain a rough estimate of the soil strength and viable construction techniques. It is among the most popular methods of soil shear testing because it is one of the fastest and cheapest ways of measuring shear strength. The method is mostly used for saturated, cohesive soils recovered from thin-walled sampling tubes.

Atterberg limits tests
With the increasing addition of SDAL to the soil sample in proportions of 0%, 2%, 4%, 6%, 8% and 10%, the mixes at each stage were subjected to Atterberg limits tests.

Fig. 1 -Particle size distribution curve for soil sample.
From Figure 1, the percentage of material passing the No. 200 sieve is 36.78%, so the soil is classified as a silt-clay material; it has a significant constituent of clayey soil and a general subgrade rating of fair to poor. It is therefore not suitable for subgrade, sub-base or base materials, since the percentage by weight finer than the No. 200 BS sieve exceeds the 35% limit of the AASHTO soil classification [16]. The specific gravity of the lateritic soil is 2.75, within the range of 2.6 to 3.4 reported for lateritic soils [17]. The specific gravity of the sawdust ash is 1.98, appreciably lower than that of the lateritic soil.

From Table 3, the lateritic soil has a liquid limit (LL) of 54% and a plastic limit (PL) of 40.3%, giving a plasticity index (PI) of 13.7%. Combined with the sieve analysis results, the soil is classified under the AASHTO system as an A-7 soil, subgroup A-7-5, since PI = 13.7 < LL − 30 = 24, and it is a silt-clay material. This indicates a fair to poor subgrade material [18].

From Table 4, with increasing addition of SDAL, the OMC increased from 17.0% at 0% SDAL to 26.5% at 10% SDAL, while the maximum dry density decreased from 2040 kg/m³ at 0% SDAL to 1415 kg/m³ at 10% SDAL. The decrease in maximum dry unit weight can be attributed to the coating of the soil particles by the SDAL, which produces larger particles with larger voids and hence lower density; it may also be explained by the SDAL acting as a filler with a lower specific gravity. The increase in OMC with SDAL content is due to the decrease in the quantity of free silt and clay fraction and the formation of coarser materials with larger surface areas, processes that require water. This also implies that more water is needed to compact the soil-SDAL mixtures [19].

Table 5 shows that the unconfined compressive strength of the soil without SDAL (i.e. at 0% SDAL) is 38.58 kN/m², indicating a soft soil of low strength; the addition of SDAL at 6% gives the highest strength, 129.63 kN/m². The increase in UCS is attributed to the formation of cementitious compounds between the Ca(OH)2 present in the soil and SDAL and the pozzolans present in the SDAL [20].
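The A-7 subgroup check quoted above (PI compared with LL − 30) can be sketched as a small helper; it assumes the soil has already been placed in the A-7 group by the other AASHTO criteria (LL ≥ 41%, PI ≥ 11%).

```python
def aashto_a7_subgroup(liquid_limit, plasticity_index):
    """A-7 subgroup per AASHTO: A-7-5 if PI <= LL - 30, else A-7-6.

    Assumes the soil already satisfies the A-7 group limits
    (LL >= 41 %, PI >= 11 %), as established by the other criteria.
    """
    return "A-7-5" if plasticity_index <= liquid_limit - 30.0 else "A-7-6"

# Soil in this study: LL = 54.0 %, PI = 13.7 % -> 13.7 < 24, so A-7-5
print(aashto_a7_subgroup(54.0, 13.7))  # A-7-5
```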

Fig. 2 -Showing effects of SDAL on Atterberg limits
From Figure 2, with the addition of sawdust ash lime to the soil from 2% to 10%, the LL and PI were observed to decrease and later increase, but ultimately both were reduced in value.

INTRODUCTION
Gusset plate connections are widely used in steel structures to transfer forces from a bracing member to framing elements. A typical gusset plate connection in a braced steel frame is shown in Figure 1 [1]. Depending on the particular connection detail, the gusset plate can be either bolted or welded to the diagonal bracing member and to the main framing members. If the load applied to the bracing system is such that compression exists in the diagonal member, the compressive strength and stability of the gusset plate connection must be investigated. Due to the complexity of these connections, it is very difficult to evaluate the compressive strength of gusset plates.
Many documents and articles deal with the study of gusset plates under tension, but few papers deal with the compression and buckling of the gusset plate. Consequently, the literature does not give a specific method for designing a gusset plate to resist compression; in practice, it is the engineer's judgment, experience and practice that guide the design of gusset plates.
There are several kinds of gusset plates; the most usual configurations are represented in Figure 2 [2]. Schemes a, b and c represent corner-brace configurations. Compact and non-compact gusset plates are similar in shape, but for a compact configuration the bracing member is pulled in closer to the other members of the frame, which is not the case for the non-compact version. The compact configuration has therefore been chosen for a numerical study of the behaviour of the gusset plate connections in compression [2].

DESIGN PRACTICE

Whitmore's effective width
The buckling capacity and the compressive stress in the gusset plate may be determined according to Whitmore's effective width concept [1][2][3][4][5][6]. Thus, in design, gusset plates are treated as rectangular members with a cross section Lw x t, where Lw is the Whitmore effective width. Whitmore defined the effective width as the distance, perpendicular to the load, over which 30° lines projected from the first bolt row or one end of the weld intersect a line through the bottom bolt row or the other end of the weld; the cross section found in this way is called the "Whitmore section". Figure 3 illustrates the determination of the width for three configurations: a) connection with a weld, b) connection with one bolt row, c) connection with two bolt rows [2]. An estimate of the gusset plate yield load (sometimes called the Whitmore load) can be determined by multiplying the yield strength by the plate area at the effective width section [5], as expressed by Equation (1):

Pw = fy × Lw × t (1)
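A minimal sketch of the Whitmore calculation for a bolted connection with two bolt lines; the yield strength and geometry below are illustrative values, not those of the tested specimens.

```python
import math

def whitmore_load(f_y, t, gauge, conn_length, theta_deg=30.0):
    """Whitmore yield load P_w = f_y * L_w * t (Equation (1)).

    L_w = gauge between the outer bolt (or weld) lines plus the spread
    of two theta-degree dispersion lines over the connection length.
    Lengths in mm, f_y in MPa, result in N.
    """
    l_w = gauge + 2.0 * conn_length * math.tan(math.radians(theta_deg))
    return f_y * l_w * t

# Illustrative inputs only: f_y = 295 MPa, t = 13.3 mm,
# gauge = 100 mm, connection length = 150 mm
print(round(whitmore_load(295.0, 13.3, 100.0, 150.0) / 1000.0, 1), "kN")  # 1071.9 kN
```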

Thornton's method
In 1984, Thornton [7] proposed an alternative method using the Whitmore section to find the buckling resistance of the gusset plate. The method considers imaginary fixed-fixed column strips of unit width below the Whitmore effective width in the gusset plate; the buckling strength of the gusset plate is estimated as the compressive resistance of these imaginary column strips. The critical length of the column strip is the maximum of l1, l2 and l3 [5]. For a corner gusset plate, it is necessary to calculate the column length lavg, defined as the average of the three lengths l1, l2 and l3 obtained as shown in Figure 4 [2]. Once the length of the column strip has been established, its compressive resistance can be evaluated according to the column formulas in the design standards; the effective length factor K was recommended to be 0.65. The gusset plate will not buckle if the compressive resistance is greater than the normal stress on the Whitmore effective area [5]. Since a column buckling formula is employed, Thornton's method does not consider the effects of plate action. Thornton's model has only been verified for thick gusset plates [4]. Moreover, the column buckling formula considers only the column strip beneath the effective width, and load redistribution due to yielding is not properly considered [1]. Therefore, Thornton's method is not appropriate if significant yielding occurs in the plate prior to buckling.
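A sketch of Thornton's check under simplifying assumptions: the strip below the Whitmore width is treated as a fixed-fixed column with K = 0.65, and the design-standard column curve is replaced here by the Euler critical stress capped at yield; all input values are illustrative, not specimen data.

```python
import math

def thornton_load(f_y, e_mod, t, l_w, l_avg, k=0.65):
    """Sketch of Thornton's buckling check for the strip of cross-section
    L_w x t below the Whitmore width, with effective length K * l_avg.

    Simplification: Euler critical stress capped at the yield stress is
    used in place of a design-standard column curve.
    """
    r = t / math.sqrt(12.0)              # radius of gyration of the strip
    slenderness = k * l_avg / r
    f_euler = math.pi ** 2 * e_mod / slenderness ** 2
    return min(f_y, f_euler) * l_w * t   # resistance in N (MPa * mm^2)

# Illustrative inputs: f_y = 295 MPa, E = 200 000 MPa, t = 13.3 mm,
# L_w = 273 mm, l_avg = 120 mm (stocky strip, so yield governs)
print(round(thornton_load(295.0, 200e3, 13.3, 273.0, 120.0) / 1000.0, 1), "kN")  # 1071.1 kN
```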

Other analytical models
Yam and Cheng [8] developed a modified Thornton's method based on load redistribution: instead of a 30° dispersion angle, a 45° dispersion angle is used to evaluate the effective width. The modified Thornton load is then calculated from this extended effective width and the appropriate column curves. Since the modified Thornton's method accounts for the load redistribution behaviour in gusset plate connections, it gives a better estimate of the compressive strength of gusset plates than Thornton's method. However, a column buckling formula is still employed, and the effects of plate action are not considered appropriately [1].
Brown [9] developed an analytical model to predict the compressive strength of gusset plates. The model uses an edge-buckling equation based on the Euler equation for the average buckling stress in a flat strip of plate. It considers plate buckling behaviour and leads to a stability design approach based on rational plate buckling equations [1].
Other experimental and analytical studies of the compressive strength of the gusset plates were mostly done in the early twentieth century [6] and [10][11][12][13][14]. Most of the literature on gusset plates covers connections for steel building frames, but is relevant to steel bridges as well.
In current design, general recommendations for the buckling of steel plates are provided in the European standard EN 1993-1-5 [15], but it gives no specific formulas for designing gusset plates loaded in compression. Therefore, a research study including numerical modelling of gusset plate behaviour in compression by the finite element method is presented in this paper. First, a model of the compact gusset plate connection is built in the finite element program Dlubal RFEM. The results of the numerical model are then validated against experimental data and compared with analytical results, including the Whitmore load, the Thornton load and the modified Thornton load. Finally, a parametric study is introduced.

EXPERIMENTAL STUDY
In this chapter, an experimental programme conducted by M.C.H. Yam and J.J.R. Cheng [5] is introduced. A tested specimen from the experimental programme is chosen and used in the numerical study hereafter.

Description of tested specimen
The initial hypotheses of the test programme were as follows:
- single-plate gusset connections of a braced steel frame were considered;
- three gusset plate thicknesses (13.3, 9.8 and 6.5 mm) and one bracing angle (45°) were examined;
- only a rectangular gusset plate of a single size, 500 mm x 400 mm, was investigated;
- no bending moments were introduced into the beam or column.
Notwithstanding these initial hypotheses, several points changed during the experiments: for example, the bracing angle (45° and 30°), the gusset plate size (500 x 400 mm and 850 x 700 mm) and, for some tests, a bending moment introduced into the beams and columns.
Next, four specimen designations were created for the testing programme. Figure 5 shows the designation marked GP. The GP specimens are compact and represent practical gusset plate dimensions; they are employed to examine the general compressive behaviour of the gusset plate connections and were used for the numerical modelling presented in this paper. Material properties of all the specimens chosen for the numerical study are presented in Table 1 (Tab. 1 - Test results and material properties [5]). As the goal of the experimental programme was to study the behaviour of the gusset plate, the splice member was reinforced to avoid an unwanted collapse. Several configurations of the splice member are introduced in Figure 6.

Test setup
The testing setup constructed for the experiment is represented in Figure 7. The whole structure is oriented so that the bracing member is exactly vertical. The out-of-plane displacement mode of the gusset plate can be achieved by allowing the beam and column base to sway out of plane instead of the bracing member, as in a real frame [5]. In the testing setup, two W310×129 sections were used as the beam and column members. The diagonal bracing member (W250×67) was fixed in place by four tension rods. The splice member was connected to the diagonal bracing member by M22 bolts. The stub beam and column were bolted to a distributing beam, which sat on three sets of rollers to allow out-of-plane movement [5].
The loading was applied incrementally and a smaller load increment was used to capture the nonlinear behaviour at the latter stage of the tests. Yielding pattern and process were recorded in detail. The test was terminated when the ultimate load was reached [5].

Experimental results
During all experiments an out-of-plane displacement of the test frame and the gusset plate deformations in various locations were measured. The ultimate load, which is used for the purpose of this study, was recorded and failure modes of specimens were studied. Results of the ultimate load from the experiments are summarized in Table 1.

Compact gusset plate
With the purpose of analysing the behaviour of the gusset plate connections in compression, a numerical model in the finite element software Dlubal RFEM 5.06 [16] is applied. The numerical simulation is focused on the determination of the buckling resistance of the gusset plate. A scheme of the studied structure with the splice member and the selected cross section of the bracing member is shown in Figure 8a. The model respects the geometry, material properties and boundary conditions of the specimens used in the experiments described above.
In the numerical model, 4-node quadrilateral shell elements with nodes at their corners are applied to simulate the gusset plate, the splice and the bracing member. Every node has six degrees of freedom: three translations (ux, uy, uz) and three rotations (φx, φy, φz). Materially and geometrically nonlinear analysis with imperfections (GMNIA) is applied. The material properties of all members are defined according to the data given in [6], which are also introduced in Table 1. A bilinear material diagram with hardening was chosen to define the materials, and the von Mises yield criterion is applied. Equivalent geometric imperfections are derived from the first buckling mode, and the amplitude is set according to Annex C of EN 1993-1-5 [15]. Large deformation analysis is used, and the Newton-Raphson method is chosen for solving the systems of equations. The number of loading steps is set to 50, the convergence tolerance to 1.0% and the maximum number of iterations to 50. The analysis stops at a certain displacement limit.
For all calculation models, the length of the finite elements is equal to 0.025 m. Boundary conditions covering different types of supports are included in the model. The frame structure is fully restrained at the ends of its cross-sections; this simplification allows the focus to remain on the gusset plate behaviour. For the bracing member, the only displacement allowed is the axial one; to achieve this, a support blocking the out-of-plane displacement uy and the displacement uz is created.
The frame members in the model are made of HEA 300 profiles of steel grade S355 and are modelled as beam members in RFEM. The gusset plate is welded to these members on two sides; the weld is simulated as a fixed connection of the members, with no detailed model of the weld itself. Stiffeners for the free edges of the gusset plate were not created. In order to study the buckling resistance of the gusset plate, the bracing IPE member was modelled as a rigid member. A sufficient stiffness of the splice member was achieved by a properly chosen geometry corresponding to that of the test specimen, so there was no need to model it as a rigid element. The bolts were modelled as a rigid surface area in order to transmit all forces to the gusset plate. The bracing member is loaded with a point force in the direction of the member axis.
One of the numerical models is presented in Figure 8b.

Larger gusset plate
In addition to the numerical study of the buckling resistance of the compact gusset plate of dimensions 500 mm x 400 mm, larger gusset plates are also investigated. The models were inspired by a study introduced in [5]. Gusset plate dimensions of 850 mm x 700 mm x 13.3 mm and 850 mm x 700 mm x 9.8 mm are used, with the material properties and boundary conditions taken from [5]; the other properties of the numerical model remain unchanged. In the description given later, this type of gusset plate is called SP.

RESULTS
Results of the ultimate load calculated in the numerical model as well as in the analytical models are shown in Table 2. The specimens are designated in the same way as in Table 1: GP are the compact gusset plates and SP the gusset plates with larger dimensions. A description of all gusset plates, with their thickness and dimensions, is repeated in the table. The values of the ultimate load come from the following models:
- "Dlubal Load" gives the numerical load;
- "Whitmore Load" gives the Whitmore load obtained by the analytical model;
- "Thornton Load" gives the Thornton load obtained by the analytical model;
- "Modified Thornton Load" gives the modified Thornton load obtained by the analytical model.

Table 3 presents the ratios between the values of the ultimate load measured during the experiments described in [5] and [6] and the calculated values from the numerical and analytical models. The table uses the same designation of the studied specimens as introduced before. The ratios of the experimental and calculated values are given in separate columns, where:
- "PU/PDlubal" gives the ratio between the experimental and numerical load;
- "PU/PW" gives the ratio between the experimental and Whitmore load;
- "PU/PT" gives the ratio between the experimental and Thornton load;
- "PU/P'T" gives the ratio between the experimental and modified Thornton load.

VERIFICATION AND VALIDATION
It may be observed in the table that the difference between the experimental and numerical results does not exceed 8%. The numerical model gives higher values than the experiment for all specimens except SP2. The behaviour of the gusset plate GP1 obtained in all studied models can be observed in Figure 9. The behaviour recorded during the experimental study and calculated by the numerical model is described by complete curves, whereas the analytical models give only the value of the ultimate load. Although the ultimate load from the numerical model and the experiment is practically identical, the displacement of the plate grows much faster in the experiment than in the numerical model. The Whitmore load and the Thornton load are close to each other, but both are quite low compared with the numerical calculation. The modified Thornton load is closer to the numerical result, owing to the larger dispersion angle (45° instead of 30°). All analytical models give a conservative estimate of the load.

PARAMETRIC STUDIES
The parametric study was carried out to study influence of changing parameters of the gusset plate on the buckling resistance. The chosen parameters for the study include the gusset plate thickness and dimensions, the presence of stiffeners and their length, and the type of connection of the gusset plate to the frame members.
The effect of the gusset plate thickness on the ultimate load is shown in Figure 10a: with increasing gusset plate thickness, the ultimate load increases, and the pattern of the increase is almost linear. Figure 10b shows the difference in behaviour between gusset plate connections of type GP and SP. The thicknesses of both plates are the same, see Tables 2 and 3. With increasing size, the gusset plate becomes more slender and a lower resistance is reached; indeed, for the same thickness, the compact gusset plate is more resistant than the non-compact one. Sometimes it is not possible to increase the size of the gusset plate; then it is beneficial to use stiffeners to strengthen it. There are several types of stiffeners. A centreline stiffener is created by extending the T-flange web. In Figure 11a, identical gusset plates with different lengths (denoted LS in the figure) of the centreline stiffener are compared. Figure 11b introduces the effect of different kinds of stiffeners, including the centreline and the free-edge stiffeners; thus, for the same type of gusset plate, it is possible to observe which kind of stiffener is more efficient.
The effect of the type of connection between the gusset plate and the frame members is shown in Figure 12, which illustrates the difference between a welded and a bolted connection. The difference is not large, but if the choice is made purely on the value of the ultimate load, then welding the gusset plate to the frame structure allows a higher buckling resistance than bolting it. This is caused by the spread of the force along the two welded sides rather than only through the bolts.

CONCLUSION
In this paper, the behaviour of gusset plate connections in compression is studied using the finite element software RFEM. The model is validated against experimental results taken from the literature, and the results of the numerical model are compared with the analytical methods of Whitmore and Thornton. The study showed that the analytical models give a conservative estimate of the ultimate load; nevertheless, these methods may be used in the initial phase of a design, at least as a safe initial estimate, which may then be refined by numerical modelling. Even though the results of the numerical study introduced in this paper are limited, several conclusions can be drawn:

Fig. 12 -Effect of the type of connection
- It is preferable to weld the gusset plate to the frame structure rather than to bolt it; by welding the gusset plate, the ultimate load is higher.
- It is advisable to use a T-section or another kind of section with a high out-of-plane inertia; otherwise the critical area of the gusset plate connection can shift to the splice member.
- The use of stiffeners is not necessary but can be useful to strengthen the gusset plate. If the thickness of the gusset plate cannot be increased and a higher buckling resistance is required, stiffeners can be an interesting solution.
- Note that when the slenderness of the gusset plate must be increased, free-edge stiffeners should be efficient.
- It is better to use the centreline stiffener for the compact gusset plate.
- It is recommended to use the compact gusset plate: the stress redistribution is lower than for the larger gusset plate, so the buckling resistance is higher.

INTRODUCTION
The impact of sound waves on humans can be described by physical quantities and their relationships, primarily the sound pressure level L (dB) and the frequency f (Hz) or frequency spectrum of the sound. In the case of an enclosed space, e.g. rooms, more quantities are involved, for example the reverberation time T (s), the absorption coefficient α (-) and various echo descriptors. If the source of sound is in a different room, it is also necessary to take into account the sound level difference D (dB) or another quantity describing insulation, such as the sound reduction index R (dB). Even with considerable approximation, there are at least five essential variables on which the subjective evaluation of noise disturbance depends.
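For illustration, the relation between the level difference D and the sound reduction index R under laboratory conditions can be sketched as follows; the level, area and absorption values in the example are hypothetical.

```python
import math

def sound_reduction_index(l1, l2, s_partition, a_receiving):
    """Laboratory sound reduction index R (dB):
    R = D + 10*log10(S/A), with D = L1 - L2 the level difference,
    S the partition area (m^2) and A the equivalent absorption
    area of the receiving room (m^2)."""
    return (l1 - l2) + 10.0 * math.log10(s_partition / a_receiving)

# Hypothetical: L1 = 90 dB, L2 = 40 dB, S = A = 10 m^2 -> R = D = 50 dB
print(sound_reduction_index(90.0, 40.0, 10.0, 10.0))  # 50.0
```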
The second option is to use the modern method of auralization and to model surroundings that properly correspond to the real conditions we want to describe. Within this model, the sound signal of the source can be numerically processed to obtain the sound signal in the receiving room after transmission through the partition element. The final signal is then used in listening tests and statistically analysed. This latter option, based on psychoacoustics, is discussed in this paper. Nowadays there is a great effort to bring auralization and human perception in general into sound insulation problems directly, without an excessive amount of hard data. One commercially used example is dB station [3], a graphical user interface that allows the user to change both the insulation between himself and a source and the source of the disturbing noise itself.
The goal of this work is neither to embrace the whole problem of sound energy transmission nor to bring a clear and indisputable solution for eliminating low frequency sound propagating through structural elements. This work intends to point out the second option for the evaluation of sound insulation. The goal depends on creating a methodology and process that can provide reliable and correct data; the system can later be used for a larger investigation of respondents, spatial geometry, different source signals and different partitions. The work is focused on environments in residential buildings and neighbouring noise, and only from the perspective of airborne sound; this guided the choice of source sounds, the localization of sources and receivers in the rooms, the dimensions and acoustic properties of the rooms and the choice of partitions. A further limitation is that only healthy people without hearing losses were involved in this research.

METHODS
This section describes in detail the study material, procedures and methods, which were used for this work. It is divided into a few subsections about software model, sources of sound, partitions, listening test and post processing of questionnaires.

Software model
The model was created in the ODEON software [6] (version 10.1 Combined, suitable for educational and research purposes only), which is specialized for room acoustic simulation and measurement and is now also used for auralization; this exact feature was used in this project. The model itself consists of two adjacent rooms, which have the dimensions and shape of the acoustic laboratory at UCEEB, where the verification of the model was carried out.

Sources
Five different sources of sound were implemented into the model. The sources were chosen with the intention of covering sounds commonly occurring between neighbours. The source signals included:
- music (Beethoven, Für Elise);
- barking of a dog;
- crying baby;
- sports event.

All the sounds were reproduced by a high-quality loudspeaker Nor276, except the Beethoven sample, which was played on an electric piano directly in the source room of the acoustic laboratory.

Structures
For the partition samples, three kinds of walls commonly used in buildings were selected. All of them had the same sound reduction index, Rw = 64 dB. The effect of bypassing (flanking) sound is eliminated both in the laboratory and in the computer model. The sound reduction index for each of these partitions is displayed in Chart 1; there is a clearly visible difference between them, including the critical frequency of the multi-layered structures. The values of the sound reduction index of the structures were measured in the acoustic laboratory.

Auralization
The auralization technique was used to obtain sound records in the receiving room. The philosophy of auralization is the creation of audible sound files from numerical (simulated, measured or synthesized) data [5]. These records were afterwards used in the listening tests. Apart from the different partitions and source sounds, all properties of the model remained the same in all cases. Verification of the software model was done by comparing records obtained by auralization with records physically captured in the receiving room during measurements. These measurements were carried out at the University Centre for Energy Efficient Buildings (UCEEB) in Buštěhrad; more about the verification of this model is available in the conference contribution [2]. In this model, there is only one direct acoustic path.

Listening test
A simple examination of the respondents took place before the listening test itself, in order to reveal any serious hearing pathology, using a free online audiometry test [1]. The online test was first checked by comparison with the author's audiogram from a doctor. The results of the comparison were sufficient for the purpose of this test, because it was not important to know the exact threshold of hearing, only whether the hearing spectrum is more or less flat, without any significant dips at any frequency (according to A-weighting).
At the beginning of the test, every respondent was questioned in a graphical user interface (GUI) about his or her acoustic background, sex, age and preferred acoustic comfort. All questionnaires were anonymous and completed under the supervision of the test manager or his representative. The respondents were chosen semi-randomly and participated fully voluntarily; the ethical code of listening tests, with its requirements on the health and comfort of respondents, was observed. The first page of the GUI was intended to acquire information about the respondent's background and living conditions as well as gender and age. The evaluation scale for this purpose was from 1 to 5 (1 = I totally agree with the statement; 5 = I totally disagree with the statement). The background questions were (in this order): I feel sensitive to surrounding sounds; I intentionally avoid events with excessive sound (parties, sports events); I wake up during the night because of noise; I usually notice a disturbing sound later than others; when in loud surroundings, I focus better than my colleagues; I prefer weekends and relaxation in a quiet place.
The listening test consisted of evaluation sets of three sounds which shared the same sound in the source room; the sounds in the receiving room (which were recorded) differed because a different partition was used in each case. There were five of these sets, one set for each source sound, so altogether 15 (3 x 5) sounds were evaluated. For the evaluation there was a progress bar from 1 to 10, from minimal disturbance or loudness to maximal disturbance or loudness. This method is called interval scale evaluation. Its benefit lies in easier further analysis of the answers, at least in comparison with nominal evaluation, which is based on verbal description by the respondents [4]. The average length of one listening test, including introduction and feedback, was 15 minutes. Figure 3 shows a sample screen from the questionnaire, illustrating the user interface. The respondent's task is to evaluate the subjective disturbance of three samples. The primary sound is sound 3: the barking of a dog. The first column contains the scale for the sound as transmitted through structure 1 (concrete wall), the second column for structure 2, and so on.
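The structure of the test, five source sounds each auralized through three partitions and rated on a 1 to 10 interval scale, can be sketched as a simple data model. This is a minimal illustration; apart from the dog barking and the concrete wall, the sound and structure names are placeholders, not taken from the study:

```python
SOURCE_SOUNDS = ["sound_1", "sound_2", "dog_barking", "sound_4", "sound_5"]
STRUCTURES = ["concrete_wall", "structure_2", "structure_3"]

# One respondent's answers: a 1-10 disturbance rating per (sound, structure)
# pair, filled in as the respondent moves the progress bar for each sample.
ratings = {(sound, structure): None
           for sound in SOURCE_SOUNDS
           for structure in STRUCTURES}

print(len(ratings))  # 5 sets x 3 partitions = 15 evaluated samples
```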

Processing of questionnaires
The number of completed listening tests was 28. The selection of the population was semi-random, based on voluntary enlistment for testing. However, a few questionnaires were removed from the analysis, for several reasons. The most frequent problem was hearing loss or another physiological hearing defect; five respondents did not pass this criterion. Two other respondents showed an irresponsible attitude and their results were also excluded from the analysis.
Answers from the questionnaires were processed by commonly used statistical methods. Neither concordance nor consistency was evaluated, owing to the low number of respondents and because no further work with this population was planned.
The expected result was significantly higher ratings (greater perceived loudness) for the low-frequency sounds (first and second sample) transmitted through structures two and three, and for the higher-frequency sounds (third and fourth sample) transmitted through structure one. This presumption was not fulfilled to the expected extent. No significant difference was found between the answers of the male and female parts of the population. In addition, no significant correlation was found between the responses and the background or age of the respondents. The complete set of processed answers is given in the table below (Table 1). In the column "living" there are three possibilities: ST = small town, BG = big town, R = rural area. The numbers in the table represent the respondents' answers to the questionnaire. The first part contains the responses to the background questions (see chap. Listening test); the second part contains the exact position of the progress bar on the "disturbance scale" for each sound file.

Tab. 1: whole population, their background and answers
Because comparisons need to be made within each set separately, it is appropriate to establish a single reference level. In this case the concrete wall was taken as the reference level, and the other structures were compared to it and displayed as percentages. Evaluating within each set has the advantage that the actual playback volume of the loudspeakers does not have to be considered, so the problem can be treated in a more general way.
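The percentage normalization described above can be sketched as follows. This is a minimal illustration; the function name, the structure labels other than the concrete wall, and the numeric ratings are assumptions, not the study's data:

```python
def normalize_to_reference(ratings, reference_key="concrete_wall"):
    """Express each structure's rating as a percentage of the reference
    structure's rating, within one set (one source sound)."""
    reference = ratings[reference_key]
    return {structure: round(100.0 * value / reference)
            for structure, value in ratings.items()}

# One respondent's 1-10 ratings for a single source sound (illustrative):
one_set = {"concrete_wall": 6, "timber_frame": 8, "aerated_concrete": 2}
print(normalize_to_reference(one_set))
# The concrete wall always maps to 100%; the others scale relative to it.
```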

RESULTS
The result table shows that the answers do not point in a uniform direction. The spread of evaluations is very wide; each respondent has his or her own perception and therefore a different impression of the sound. Nevertheless, it can be said that the acoustic behavior of lightweight multi-layered partitions is more complex and problematic than that of the concrete structure (which, as the reference sample, corresponds to 100% in the table).
The table is colored according to the relative answers of the respondents. Many cells remain yellow, which means that the difference from the reference sound is not perceived or is very small; green cells indicate that the sound is perceived as less loud, and conversely red cells indicate that the sound is perceived as louder than the reference sound.
Below the table, simple statistical measures are given: the median, average, variance and standard deviation of the population.
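These summary statistics can be reproduced with the Python standard library. The relative answers below are illustrative placeholders, not the actual data from Table 1:

```python
import statistics

# Hypothetical relative answers (%) of seven respondents to one sound file.
answers = [100, 133, 67, 100, 167, 100, 133]

summary = {
    "median": statistics.median(answers),
    "average": statistics.mean(answers),
    "variance": statistics.pvariance(answers),  # population variance
    "std_dev": statistics.pstdev(answers),      # population standard deviation
}
for name, value in summary.items():
    print(f"{name}: {value:.1f}")
```

The population variants (`pvariance`, `pstdev`) are used here on the assumption that the 28 respondents are treated as the whole tested population rather than a sample.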
It is very interesting that the respondents do not share the same opinion on the sounds, so the spread is enormous. As an example, take sound 6, the piano transmitted through the aerated concrete wall with lining: the values span from 33% up to 400%. This may be due to a personal exception, but it shows that this problem is very complex and involves numerous variables.

CONCLUSION
This research was done on a relatively small number of respondents, and the results should therefore be considered only as an overview. For exact output and implementation it will be necessary to perform more tests similar to this one. Nevertheless, this work showed that it is essential to focus on the comparison between multi-layered and single-layered structures from the perspective of sound insulation. The question remains which quantity best describes sound insulation performance; in this case the sound reduction index Rw was used, evaluated over the spectrum from 100 to 3150 Hz.