Data Processing Approaches for the Measurements of Steam Pipe Networks in Iron and Steel Enterprises

© 2012 Xianxi et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Introduction
Steam is an important secondary energy resource in iron & steel enterprises, amounting to nearly 10% of the whole energy consumption. When an enterprise is running, if all the produced steam can meet all the demands and no steam is bled, the overall energy efficiency can be effectively improved. Thus complex networks of steam pipes and steam production scheduling systems were set up. Obviously, steam scheduling has to depend on real-time measured data from the steam pipe networks. Accurately measuring the pressure, temperature and flow rate variables is essential to secure safety and economic efficiency. It is also necessary, when accumulating the amounts of steam production or consumption, to calculate the energy cost of each working procedure.
With the help of the Energy Management System (EMS), all the data are collected from the distributed instruments. In practice, the measurements of pressure and temperature are usually accurate enough for the application, except when the sensors or transducers fail. However, the measurements of mass flow rate are not so accurate, because of the complex nature of steam itself, the lack of high-precision measuring instruments, the impact of interference, information transmission network failures and other reasons; the reliability of the mass flow rate measurements is poor. When the steam mass flow measurement values deviate from the actual values to a certain extent, the automatic control system may deviate substantially from the process requirements, and even steam bleeding or accidents could happen [1]. Therefore it is not satisfactory to decide or adjust the production process according to the data from the flow rate meters [2]. In energy management, the accumulated differences between production and consumption make it difficult to calculate the energy costs, analyze the segments of irrational steam use, and find the weaknesses in the management links. Therefore, improving the reliability of the steam flow rate measurement data is essential for normal production and energy conservation in iron & steel enterprises.
The objective of the work is depicted in figure 1. The real-time data (mainly the mass flow rate variables) are processed by three approaches. By fault data detection and reconstruction, the fault data are picked out and the real values are reconstructed or estimated. By gross error detection, the data with gross errors are discovered and re-estimated. By data reconciliation, the random errors are decreased and the quality of the data is further improved. For fault data detection and reconstruction, the reasons for the low accuracy of steam flow rate data are introduced, statistical process control theory [3] is applied to determine univariate and multivariate control limits for monitoring abnormal data online, and an approach to calculate the real values (mass flow rates) through the thermal and hydraulic mathematical models of the steam pipe networks is proposed.
For the section on gross error detection, the definition of the problem is introduced, and the two basic gross error detection approaches, the measurement test (MT) method and the method of pseudonodes (MP), are demonstrated.
For the section on data reconciliation, the constrained least-squares problem stated in the gross error detection section is discussed in detail, including the assumptions for the application, the constraint equations and the selection of the weighted parameter matrix.
The presented data processing approaches can be programmed for computers to determine abnormality and improve the precision of the mass flow rate data.

Reasons for the poor accuracy of steam flow rate measurements
There are many reasons for the poor accuracy of steam measurement data. The most important can be summarized as follows.

The Working Conditions of the Instrument Deviating from the Designed Conditions

At present, the mass flow rates are mostly deduced from the volume flow rates and the density. However, the changes of temperature and pressure in the transmission process lead the density of steam to deviate from the originally designed value [3], so the measurement errors can be very large [1]. Moreover, some superheated steam may change into a vapor-liquid two-phase medium, which makes the precision worse.

The Complexity of Steam Characteristics
As the ambient temperature changes, the total amount of condensate water in the transmission process will differ, which makes a difference between the amounts of steam produced and consumed. In addition, steam pipe leakage adds to the difference. Therefore, the accumulated readings are always doubtful.

Wear or Damage to Key Components
As an orifice differential-pressure flow rate meter is used for a long time, the size of the aperture differs from the original size because of adhering foreign bodies or erosion by the durative high-temperature steam flow. As the parameters cannot be adjusted in time and it is hard to calibrate the instrument, the measurement errors will accumulate.

External Interference or Failure of the Data Transmission Channel

Disturbances to the instruments or failures in the data transmission channels will introduce significant errors into the data received in the control center.

Determining control limits for fault data detection
The abnormal data from certain sensors (including temperature, pressure and flow rate sensors) may be characterized as fluctuating quickly with a large magnitude (induced by poor contact in the instruments), keeping a certain value without any variation (induced by failure of the data sampling systems), or being outside the normal variable range. The first two cases are not discussed here because they are easy to discover. For the last case, statistical process control approaches are applied to determine the control limits for data monitoring. When the values of the controlled variables (or functions) exceed the limits, there are abnormal data, unless the process is actually abnormal. Statistical process control includes two types: univariate and multivariate control.

Determining the control limits for a single variable
Statistical process control (SPC) and control charts were first proposed by Shewhart for quality prediction. Traditional SPC mainly treats single variables. When a value falls outside the normal range, the system outputs an alarm signal notifying the operators to check whether the state is abnormal. In this work, if the process state is actually operating normally, such values can be judged as fault data. Reasonably determined control limits can reduce the probability of false alarms, and many researches focus on this problem [4][5][6]. Here, the empirical distribution function combined with the principle of "3σ" [3] is applied to determine the control limits of different variables.

For a sample $X = (x_1, x_2, ..., x_n)$, the range of the data is $R = \max(X) - \min(X)$. Divide the range into $N$ intervals of equal length $d = R/N$, and mark the corresponding intervals as $c_1, c_2, ..., c_N$. Each element of $X$ belongs to one of the intervals, so the data are divided into $N$ groups. The probability that the sampled variable falls in the $i$th interval can be estimated as $p_i = n_i/n$, where $n_i$ is the number of data points in the $i$th interval (the data are assumed independent of each other, which is usually reasonable). The average probability density of each interval is then $\rho_i = p_i/d$. The empirical distribution of $X$ is obtained as a bar graph with the interval lengths as the bottoms and $\rho_i$ as the heights. When $n$ and $N$ are large enough, the bar graph approaches the global distribution of $X$. According to the central limit theorem, when the system has only one stable state, the measured data approach a normal distribution as the sample size grows, so the global distribution of $X$ can be written as

$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).$

Figure 2 shows the distribution graph. The probability of data falling in the range $\mu \pm 3\sigma$ is 99.73%; when a value of $X$ is outside this range, it is reasonable to suspect that the value is abnormal (or that the process is not functioning normally). That is the principle of "3σ". In practice, the region outside $\mu \pm 3\sigma$, occupying about 0.27% probability, is named the "red zone", and the range inside $\mu \pm 2\sigma$ is named the "green zone" (about 95% probability). The intervals between them are the "yellow zones". Four control limits are determined by the method.
As the expectation and the deviation of the global distribution of X are unknown, and the values estimated from the sample are not accurate (especially when the empirical distribution is not similar to a normal distribution), the principle of "3σ" cannot be applied directly. However, according to the principle of "3σ" and the empirical distribution of X, the control limits can be deduced reversely by searching for the probabilities corresponding to the limits.
Two kinds of special instances have to be noticed:
1. No crossing point between the empirical distribution and the testing level. This shows that the corresponding sample values do not appear in the significance test level range. To determine the limits, the empirical distribution can be linearly extended outward, or the limits estimated from experience by comparison with the real process performance state.
2. Two or more crossing points between the empirical distribution and the testing level on one side. This shows that there are disturbances in the data, which make the empirical distribution fluctuate on one lateral side. According to the principle of "3σ", find the highest-probability interval and search from it toward the two sides for the points first reaching the probability (1-α)/2.
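The limit search described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the empirical distribution is built as a histogram and the control limits are read off where the cumulative probability reaches the tail levels of the "3σ" principle (0.27% total for the red lines, about 4.6% for the green lines). Function and parameter names are illustrative.

```python
import numpy as np

def empirical_limits(x, n_bins=50, alpha=0.0027):
    """Control limits from the empirical distribution of sample x.

    alpha is the total two-sided tail probability: 0.0027 reproduces the
    mu +/- 3*sigma ("red") limits of a normal law, 0.0455 the mu +/- 2*sigma
    ("green") limits.
    """
    x = np.asarray(x, dtype=float)
    counts, edges = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()        # estimated probability of each interval
    cdf = np.cumsum(p)               # empirical cumulative distribution
    lo = edges[np.searchsorted(cdf, alpha / 2)]          # lower limit
    hi = edges[np.searchsorted(cdf, 1 - alpha / 2) + 1]  # upper limit
    return lo, hi

# Usage: red and green zones for one monitored variable
rng = np.random.default_rng(0)
sample = rng.normal(10.0, 2.0, 100_000)         # e.g. a flow rate signal
red = empirical_limits(sample, alpha=0.0027)    # close to (4, 16)
green = empirical_limits(sample, alpha=0.0455)  # close to (6, 14)
```

For a normally distributed sample the limits recovered this way approach the true μ±3σ and μ±2σ lines, while for a skewed or multi-peak sample they follow the actual empirical shape, which is the point of the method.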
Figures 3-5 show the control limits (4 lines) and the monitoring results in a plant. The interval between the two green lines is the "green zone", showing normal working. The intervals outside the red lines are the "red zones", indicating fault data or process abnormality. The other two intervals are the "yellow zones", which remind the operators to notice the changes of the data. In figures 4 and 5, the red and green lines on the right side coincide; this can be explained by outer equipment constraining the variable from moving freely.
By determining the limits of a single variable, wild values considered as fault data can easily be discovered. However, two kinds of inherent errors (false alarms and missed detections) cannot be avoided; the approach of univariate statistical process control (USPC) can only roughly judge fault data. The approach is also appropriate for monitoring the temperature and pressure variables.

Determining the control limits of a multivariate system based on PCA
The shortage of USPC is that the operators are prone to "information overload" when the number of variables is large. Moreover, it is too rough to judge the normality of data just by monitoring alarms. In practice, the variables may be constrained by certain inherent functions, and unsatisfied functions indicate the existence of fault data. Principal Component Analysis (PCA) is one of the approaches to discover the relationships among the variables by means of statistics. By determining the control limits of two indices, Hotelling's T² and SPE (Squared Prediction Error, Q), the multivariate process can be monitored conveniently. The proposed approach is to distinguish the different stable states of the process by the empirical distribution function and determine the control limits for each stable state.
i. Differentiate the stable states and group the samples. Denote the measurement matrix as $X \in R^{n \times m}$, where $n$ is the size of the samples and $m$ is the number of monitored variables. The rows of X are the serial samples in time order; denote $X_i$ (i = 1, 2, ..., m) as the ith column. The empirical distribution of $X_i$ can be derived as just mentioned. When the process has two or more stable states, the distribution bar graph is characterized by multiple peaks.
Figure 6 shows the flow rate curve and the distribution bar graph of a chemical process over a period. The two peaks represent different demands for steam in different states.
Without loss of generality, the two-peak distribution is discussed. If the control limits are determined without considering the different states, the sensitivity of detecting fault data will be too low for the multivariate process. Two normal distributions are applied to fit the two peak sections. If their means and standard deviations are denoted $\mu_1, \sigma_1$ and $\mu_2, \sigma_2$ (with $\mu_1 < \mu_2$), the rule to divide the samples into different groups (different states) is given by equation (7):

State 1: $[\min(X_i),\ \mu_1 + \gamma\sigma_1]$; State 2: $[\mu_2 - \gamma\sigma_2,\ \max(X_i)]$.  (7)

γ can be selected in the interval (1, 3), and the total state regions should cover the whole range of the sampled data. Ideally, the state regions should not overlap, or the overlapped region should be narrow and of low probability; otherwise the variable has no need for state differentiation, which is not the case discussed here.
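The grouping rule (7) can be expressed directly in code. This is a minimal sketch assuming the two fitted means and standard deviations (μ1, σ1) and (μ2, σ2) are already known, e.g. from fitting normal curves to the two histogram peaks; the function name is illustrative.

```python
import numpy as np

def group_by_state(x, mu1, s1, mu2, s2, gamma=2.0):
    """Split samples into two state groups by rule (7):
    state 1 region: [min(x), mu1 + gamma*s1]
    state 2 region: [mu2 - gamma*s2, max(x)]   (assumes mu1 < mu2).
    Samples falling in neither region are discarded; samples in an
    overlap would belong to both, so the regions should barely overlap."""
    x = np.asarray(x, dtype=float)
    state1 = x[x <= mu1 + gamma * s1]
    state2 = x[x >= mu2 - gamma * s2]
    return state1, state2

# Usage: a two-peak signal, low-demand state near 20 t/h, high near 50 t/h
x = np.array([19.8, 20.1, 20.4, 49.7, 50.0, 50.2])
low, high = group_by_state(x, mu1=20.1, s1=0.3, mu2=50.0, s2=0.25)
```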
By the rule of (7), the samples are separated into two groups. In the same way, the two groups can be further divided with other variables. However, it is not advisable to divide the sample into many groups with too few elements in each group.

ii. Determining the control limits of different states with PCA. Through grouping, the sample data of each group are more concentrated. First standardize the group matrix $X_a$ (with $n_1$ rows): calculate the mean and standard deviation of each of the columns $x_1, x_2, ..., x_m$ and transform each column to zero mean and unit variance. Derive the eigenvalues $\lambda_1 \ge \lambda_2 \ge ... \ge \lambda_m$ of the covariance matrix $\frac{1}{n_1 - 1} X_a^T X_a$ and retain the first $A$ principal components, with loading matrix $P$ and score matrix $T_a = X_a P$. If the jth measurement vector (a row vector) is denoted $X_{aj}$, the corresponding score vector is $T_{aj}$ (i.e. the jth row of $T_a$), and the statistic $T_{aj}^2$ is defined as [7]

$T_{aj}^2 = T_{aj}\,\Lambda^{-1} T_{aj}^T$, with $\Lambda = \mathrm{diag}(\lambda_1, ..., \lambda_A)$.

When the testing level is α, its control limit can be calculated according to the F distribution:

$T_{lim}^2 = \frac{A(n_1 - 1)}{n_1 - A}\,F_{\alpha}(A,\ n_1 - A)$,

where $F_{\alpha}(A, n_1 - A)$ is the critical value corresponding to testing level α and the indicated degrees of freedom. The SPE control limit can be calculated by

$SPE_{lim} = \theta_1\left[\frac{c_\alpha\sqrt{2\theta_2 h_0^2}}{\theta_1} + 1 + \frac{\theta_2 h_0(h_0 - 1)}{\theta_1^2}\right]^{1/h_0}$, with $\theta_k = \sum_{i=A+1}^{m}\lambda_i^k$ (k = 1, 2, 3) and $h_0 = 1 - \frac{2\theta_1\theta_3}{3\theta_2^2}$.

In the equation, $c_\alpha$ is the threshold value under the testing level α of the normal distribution. When there is no fault data in the sample and the process functions normally, the two statistics satisfy $T^2 \le T_{lim}^2$ and $SPE \le SPE_{lim}$.
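A compact sketch of the fit-and-monitor procedure: fit a PCA monitor on the samples of one stable state and compute T² and SPE for new samples. To stay self-contained, the control limits here are taken as empirical percentiles of the training statistics rather than the F-distribution and normal-approximation formulas; all names are illustrative.

```python
import numpy as np

def fit_pca_monitor(X, n_comp, alpha=0.01):
    """Fit a T^2/SPE monitor on one stable-state sample matrix X (n x m).

    Returns a statistics function plus control limits taken as the
    empirical (1 - alpha) percentiles of the training statistics."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    mu, sd = X.mean(axis=0), X.std(axis=0, ddof=1)
    Xs = (X - mu) / sd                                 # standardize
    vals, vecs = np.linalg.eigh(Xs.T @ Xs / (n - 1))   # covariance eigen-pairs
    order = np.argsort(vals)[::-1]
    P = vecs[:, order[:n_comp]]                        # loading matrix
    lam = vals[order[:n_comp]]                         # retained eigenvalues

    def statistics(x):
        xs = (np.asarray(x, dtype=float) - mu) / sd
        t = xs @ P                                # score vector
        t2 = float(np.sum(t ** 2 / lam))          # Hotelling's T^2
        spe = float(np.sum((xs - t @ P.T) ** 2))  # squared prediction error
        return t2, spe

    train = np.array([statistics(row) for row in X])
    t2_lim, spe_lim = np.percentile(train, 100 * (1 - alpha), axis=0)
    return statistics, t2_lim, spe_lim

# Usage: two correlated variables, one principal component
rng = np.random.default_rng(1)
v = rng.normal(size=1000)
X = np.column_stack([v, v + 0.05 * rng.normal(size=1000)])
stats, t2_lim, spe_lim = fit_pca_monitor(X, n_comp=1)
t2, spe = stats([3.0, -3.0])   # breaks the correlation: large SPE
```

A sample that violates the inherent relation between the variables (here, the near-equality of the two columns) produces an SPE far above its limit even when each variable alone is within its univariate range, which is exactly the advantage of multivariate monitoring over USPC.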
The second data matrix $X_b$ can be processed in the same way to derive its control inequalities.

iii. The monitoring results for the experimental data. Three variables are considered, and some of the data are modifications of the original data. The no. 167-172 and no. 350-355 samples show the transition between the two states; the no. 360-365 samples are modified data which deviate from the normal constraint. Figure 7 shows the changing curves of the three variables and the scatter diagram in three-dimensional space.
Two experiments are designed to verify the advantage of differentiating states in this work. Figure 8 shows the case without differentiating states, whilst figure 9 shows the differentiating case. As the figures show, the approach of differentiating states and determining the control limits respectively is more sensitive to the transitions of states and to fault data.

iv. Locate and isolate the fault data
In the case of MSPC, the fault data can be located and isolated by contribution plots.

Steam pipe network modeling and calculation of fault data based on the model
After the fault data are detected, the next step is to estimate the true values of the variables. As experience shows, the measured temperature and pressure values are usually accurate, so the objective here is to calculate the flow rate values from the detected temperatures and pressures according to the steam network model.

The static model of steam pipe network
To deduce the model, the real conditions are simplified as: 1) the steam in the pipes is axial-direction one-dimensional flow; 2) the network is composed of nodes and branches (pipes); 3) there is no condensate or secondary steam, and their effect is neglected. Usually there is little difference between the products of the compressibility factors and temperatures in the same pipe. Considering the elbows, reducer extenders and other friction factors, the equivalent coefficient η is added to the equation, so equation (24) changes to (25).
where Δ is the equivalent absolute roughness, mm, and Re is the Reynolds number of the steam.
After transferring the units of D and q to mm and t/h (tons per hour), the Reynolds number (equivalent to $Re = \rho u D/\mu$, with u the characteristic velocity of the steam in the pipe, m/s) becomes

$Re = \frac{4\dot m}{\pi D \mu} \approx \frac{354\,q}{D\,\mu}$,

where μ is the mean dynamic viscosity coefficient, determined by equation (33) [9]. With $\rho_1$ and $\rho_2$ as the input and output steam densities, $\rho_1$, $\rho_2$, μ and $\rho_m$ can be calculated with equations (31), (32) and (33). The symbols not explained here are defined in reference [9].
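The unit conversion can be checked with a short function. The sketch assumes the standard pipe-flow identity Re = 4ṁ/(πDμ); converting q from t/h to kg/s and D from mm to m yields the factor of roughly 354. Names are illustrative.

```python
import math

def reynolds(q_t_per_h, d_mm, mu_pa_s):
    """Reynolds number from mass flow rate q (t/h), inner diameter D (mm)
    and mean dynamic viscosity mu (Pa*s), via Re = 4*mdot / (pi * D * mu)."""
    mdot = q_t_per_h * 1000.0 / 3600.0   # t/h -> kg/s
    d = d_mm / 1000.0                    # mm -> m
    return 4.0 * mdot / (math.pi * d * mu_pa_s)

# Usage: 40 t/h of steam in a 300 mm pipe, mu ~ 2e-5 Pa*s (superheated steam)
re = reynolds(40.0, 300.0, 2e-5)   # a few million: strongly turbulent flow
```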
The definitions and units of $T_1$, $T_2$, L and q are the same as in equation (24). (Note: equation (34) is changed to (35) so that the thermal model has the same pattern as equation (26).)
β - the appending heat loss coefficient for pipe appendices, valves and supports; the value can be 0.15-0.25 depending on the pipe laying method (38);
$T_0$, $T_a$ - the outer surface temperature of the pipe and the environment temperature, K;
$D_0$, $D_i$ - the outer and inner diameters of the steam pipe, mm;
ε - the coefficient of thermal conductivity of the heat-insulating material, W/(m·K).
For the flow rate of the jth node (unit t/h): for a steam source the value is $-q_j$, for a user the value is $+q_j$, and otherwise the value is 0. By the mass conservation law, the total flow rate at each node should be 0. Combining the hydraulic and thermal equations gives the pipe network equations, and substituting (42) into (41) closes the system. Equations (26), (27), (35), (36), (42), (43) and (44) comprise the static model.

Hydraulic and thermal calculation based on searching
Industrial networks are equipped with temperature, pressure and flow meters at all except the intermediate nodes. The proposed algorithm tries to calculate the flow rate of each pipe from the given conditions; the flow rate meter readings are used to evaluate the algorithm.

Step 1. Determine the elements of matrices A and Q as preliminarily defined. For the network of Figure 10, A is the incidence matrix (entries 1, -1 and 0 determined by the layout) and

$Q = -(q_1, q_2, q_3, q_4, 0, 0)^T$.  (46)

As in the previous definitions, the known boundary values can be calculated directly. In equation (47), i (i = 1, 2, ..., 5) is the pipe index and $\Delta T_i$ is the temperature difference between the input and output of pipe i.
Step 2. Thermal calculation. Search the intermediate node temperature T with a given step to continue the calculation, or retain the temperature value and the corresponding flow rates.
Step 3. Hydraulic calculation. Just as in the thermal calculation, first presume the temperatures of the intermediate nodes to be the values found in the former step, and set the start point and step size for the two intermediate pressures. Then search the region determined by inequality (48) with the step size, calculating the density (equation (32)), frictional factor ((28), (29), (33)) and flow rate ((26), (42)) of each pipe. When calculating the flow rate, presetting the initial values to the results of step 2 reduces the iterative calculation time. Then test whether the node balances are satisfied.
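Steps 2-3 amount to a nested grid search over the unknown intermediate-node temperatures and pressures, keeping the candidate that best satisfies the node balances. The sketch below is deliberately generic: `imbalance(T, P)` stands for a user-supplied wrapper around the hydraulic and thermal equations (26)-(44) and is hypothetical here.

```python
import itertools

def grid_search(candidate_T, candidate_P, imbalance):
    """Try every combination of intermediate-node temperatures and
    pressures on the given grids; return the combination minimizing the
    total node flow imbalance (candidate_T / candidate_P hold one list of
    candidate values per intermediate node)."""
    best, best_err = None, float("inf")
    for T in itertools.product(*candidate_T):
        for P in itertools.product(*candidate_P):
            err = imbalance(T, P)
            if err < best_err:
                best, best_err = (T, P), err
    return best, best_err

# Usage with a toy imbalance function (a stand-in for the network model)
toy = lambda T, P: abs(T[0] - 410.0) + abs(P[0] - 1.1)
(best_T, best_P), err = grid_search([[400.0, 410.0]], [[1.0, 1.1]], toy)
```

Since the cost grows with the product of all grid sizes, this brute-force form only suits small networks, which matches the chapter's remark that networks with more than 3 intermediate nodes should be decomposed first.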

Comparisons of calculation results and measurements
The specifications of each pipe are shown in Table 1. The parameters in the model and algorithm are initially set, and the specification data in Table 1 together with the measured temperature and pressure data are substituted into the model. The results show the validity of the model and the effectiveness of the algorithm; the proposed model and algorithm can be applied to simulate the running of a static steam pipe network and to reconstruct the discovered fault data of the mass flow rates.
As for larger-scale steam networks with more than 3 intermediate nodes, it is difficult to apply the algorithm directly. However, the pipe network can be divided into several smaller networks at the nodes with known temperature and pressure.

Problem definition
A section of the steam network named "S2" in an iron & steel plant in China is shown in figure 12. In the figure, N1-N7 represent different production processes. The arrows point in the direction of steam flow, and the variables Xi or Xij represent the real steam mass flows. The electric valves are remotely controlled by the operators. Many industrial steam systems are similar in structure to this system, but much larger in scale. If all of the variables above have been measured, and supposing that the electric valves are fixed at a certain position and that the pipeline leakage and the amount of condensate water can be neglected, the constraint equations can be written out on the basis of the mass balance.
The overall balance equation can also be written out. If we denote X as the vector of mass flow rates in the steam network, equations (51)-(52) can be abbreviated as

$AX = 0$.  (54)

In (54), A is the incidence matrix, composed of the elements 1, -1 and 0.
If Y represents the vector of measured flow rates and X is the vector of true flow rates, then

$Y = X + W + \varepsilon$,

where W is the vector of measurement gross errors (or systematic errors), and ε is the vector of random measurement errors, each element being normally distributed with zero mean and known covariance matrix Q. The approaches to detect and remove the gross errors from the measurements are discussed in this section.
When there is no gross error in the measurement vector, finding a set of adjustments to the measured flow rates satisfying equation (54) is the problem of data reconciliation. Denote the adjustment vector as a and the adjusted flow rate vector as $\hat X = Y + a$. Applying the least-squares method, this can be stated as the constrained least-squares problem

$\min_a\ a^T Q^{-1} a \quad \text{s.t.} \quad A(Y + a) = 0$,

where Q is the weighting parameter matrix, usually selected as $Q = \mathrm{diag}(\sigma_1^2, \sigma_2^2, ..., \sigma_{18}^2)$ (57), with $\sigma_i$ the deviation of the measurement variable $X_i$ ($1 \le i \le 18$).
The solution $\hat X$ to the problem can be obtained by Lagrange multipliers [10]:

$\hat X = Y - Q A^T (A Q A^T)^{-1} A Y$,  (58)

and the vector of residuals is

$e = Y - \hat X = Q A^T (A Q A^T)^{-1} A Y$.  (59)

By gross error detection, the systematic errors in the measurements can be removed, and by data reconciliation the random errors can be reduced. However, if the gross errors are not removed, the gross errors in some variables will propagate to other accurately measured variables during data reconciliation, so gross error detection had better be performed in advance. There are two basic types of gross error detection methods, based on the measurement test (MT) and the nodal imbalance test (NT).
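The Lagrange-multiplier solution translates directly to code. The sketch below reconciles a measured flow vector Y against the node balances AX = 0; names are illustrative.

```python
import numpy as np

def reconcile(A, Y, Q):
    """Least-squares data reconciliation:
    X_hat = Y - Q A^T (A Q A^T)^{-1} A Y   (eq. 58)
    e     = Y - X_hat                       (eq. 59)"""
    A, Y, Q = (np.asarray(v, dtype=float) for v in (A, Y, Q))
    e = Q @ A.T @ np.linalg.solve(A @ Q @ A.T, A @ Y)
    return Y - e, e

# Usage: one node with stream 1 flowing in and stream 2 out, equal variances
A = [[1.0, -1.0]]
Y = [2.1, 1.9]              # measured flows, t/h
Q = np.eye(2) * 0.01        # sigma = 0.1 t/h on each meter
X_hat, e = reconcile(A, Y, Q)
```

With equal weights the 0.2 t/h imbalance is split evenly, giving reconciled flows of 2.0 t/h on both streams and an exactly satisfied node balance.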

Gross error detection based on test of residuals
Detection based on the test of residuals belongs to the statistical tests and has been described and evaluated [11]. The measurement test [12] is the basic algorithm.
Step 1. Apply the least-squares routine, using equations (58)-(59), to compute $\hat X$ and e.
Step 2. Compute for each pipe (or stream) the variable

$z_j = \frac{|e_j|}{\sqrt{V_{jj}}}$, with $V = Q A^T (A Q A^T)^{-1} A Q$ the covariance matrix of e.

On the hypothesis that the measured value in the jth stream contains no gross error, $z_j$ follows the standard normal distribution.

Step 3. Compare each $z_j$ with the critical value corresponding to the individual test level β,

$\beta = 1 - (1 - \alpha)^{1/n}$,  (62)

where n is the number of measurements tested (currently n = 18), α (the recommended value is 0.05) is the overall probability of a type I error for all tests, and β is the probability of a type I error for each individual test. Denote by S the set of bad streams found by the above procedure; the measurements $y_j$, $j \in S$, are considered to contain gross errors.
Step 4. If S is empty, proceed to step 7. Otherwise, remove the streams contained in S and aggregate the nodes connected with these streams. This yields a system of lower dimension with compressed incidence matrix A′, measurement vector Y′ and weighting matrix Q′. Denote by T the set of streams with measurement data in Y′.
Step 5. Replace A, Y and Q with A′, Y′ and Q′ respectively and compute the least-squares estimates in T applying equation (58).
Notes on the algorithm: 1. In step 4, it is possible in some cases to move good streams into S.
2. Equation (62) and the critical level value provide a conservative test, since the residuals are generally not independent; it is not always applied [13].
3. Different significance levels will induce different results and effects.
4. To overcome the shortage of MT that it tends to spread the gross errors over all the measurements and obtain unreasonable (negative or absurd) reconciled data, the Iterative Measurement Test (IMT) method and the Modified IMT (MIMT) method were proposed; however, their main frames are the same as MT.
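Steps 1-3 of the measurement test can be sketched as follows. The per-test level uses β = 1 − (1 − α)^(1/n), which the text's equation (62) appears to describe; treat that specific formula as an assumption of this sketch. The standard-normal quantile comes from the Python stdlib.

```python
import numpy as np
from statistics import NormalDist

def measurement_test(A, Y, Q, alpha=0.05):
    """MT gross-error detection (steps 1-3): flag streams whose residual
    statistic z_j = |e_j| / sqrt(V_jj) exceeds the critical value, where
    V = Q A^T (A Q A^T)^{-1} A Q is the covariance of the residuals e."""
    A, Y, Q = (np.asarray(v, dtype=float) for v in (A, Y, Q))
    M = np.linalg.inv(A @ Q @ A.T)
    e = Q @ A.T @ M @ A @ Y                        # residual vector (eq. 59)
    V = Q @ A.T @ M @ A @ Q                        # its covariance
    z = np.abs(e) / np.sqrt(np.diag(V))
    beta = 1.0 - (1.0 - alpha) ** (1.0 / len(Y))   # per-test level (eq. 62)
    z_c = NormalDist().inv_cdf(1.0 - beta / 2.0)   # two-sided critical value
    return {j for j in range(len(Y)) if z[j] > z_c}

# Usage: serial pipes s1 -> node1 -> s2 -> node2 -> s3; gross error on s3
A = [[1.0, -1.0, 0.0],
     [0.0, 1.0, -1.0]]
Y = [2.0, 2.0, 2.4]          # true flow is 2.0 t/h everywhere
Q = np.eye(3) * 0.01         # sigma = 0.1 t/h
bad = measurement_test(A, Y, Q)
```

For this moderate error only stream 3 (index 2) is flagged; with a much larger error the smearing weakness noted in item 4 shows up, as all three z values can cross the threshold together.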

Gross error detection based on nodal imbalance test
The algorithms of gross error detection based on the statistical test of nodal imbalance are mainly based on the work of literature [14]. The nodal imbalance test is applied to each node and to aggregation nodes (i.e. pseudonodes) to locate and remove the gross errors. The basic algorithm, named the method of pseudonodes (MP), is listed as:

Step 1. Compute the nodal imbalance vector $r = AY$ and the statistical testing variable vector z,

$z_i = \frac{|r_i|}{\sqrt{(A Q A^T)_{ii}}}$.

On the assumption that no systematic error exists, the defined variables $z_i$ are standard normally distributed.

Step 2. Compare each $z_i$ with a critical test value; if it is exceeded, denote node i as a bad node.

Step 3. If no bad nodes are detected in step 2, proceed to step 5. Otherwise, repeat steps 1 and 2 (changing the matrices and vectors accordingly) for pseudonodes containing 2, 3, ..., m nodes.
Step 4. Denote by S the set of all streams not denoted as good in the previous steps. The measurements $y_i$, $i \in S$, are considered to contain gross errors.

Steps 5-8. The procedure is the same as steps 4-7 of the MT algorithm.
Notes on the algorithm: 1. The principal assumption is that the errors in two or more measurements do not cancel.
2. In step 3, m is chosen by the effect of locating the gross errors; if increasing m does not gain any improvement, the step can be stopped.
3. By applying graph-theoretical rules [14], some streams can be determined as badly measured streams. The additional identification may be useful in the MP procedure.
4. Equation (62) is not applied in this algorithm to control type I error; the probability of a type I error in the nodal imbalance test is not necessarily equal to the probability of rejecting a good measurement in MT [15].

5. The set S obtained in step 4 may be empty while one or more nodes are truly bad. Usually such instances happen when there is a leak in the network system or the measurement errors cancel each other. To solve the problem, the Modified MP (MMP) and MT-NT combined methods [16] were proposed.
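Step 1 and step 2 of MP can be sketched analogously: the nodal imbalances r = AY are tested against a standard-normal critical value. The names and the choice of a plain two-sided level α are illustrative.

```python
import numpy as np
from statistics import NormalDist

def nodal_imbalance_test(A, Y, Q, alpha=0.05):
    """Flag node i when z_i = |r_i| / sqrt((A Q A^T)_ii) exceeds the
    two-sided standard-normal critical value, where r = A Y."""
    A, Y, Q = (np.asarray(v, dtype=float) for v in (A, Y, Q))
    r = A @ Y                                        # nodal imbalances
    z = np.abs(r) / np.sqrt(np.diag(A @ Q @ A.T))
    z_c = NormalDist().inv_cdf(1.0 - alpha / 2.0)    # two-sided critical value
    return {i for i in range(len(r)) if z[i] > z_c}

# Usage: the same serial network; a bad stream 3 unbalances node 2 only
A = [[1.0, -1.0, 0.0],
     [0.0, 1.0, -1.0]]
Y = [2.0, 2.0, 3.0]
Q = np.eye(3) * 0.01
bad_nodes = nodal_imbalance_test(A, Y, Q)
```

Note that the test locates bad nodes, not bad streams directly; narrowing the suspect set down to streams is the job of the pseudonode aggregation in step 3.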
As the above two methods show, gross error detection and data reconciliation are inherently combined. Gross error detection proceeds with the help of the least-squares routine, which actually yields the optimal estimates.

Data reconciliation

The basic problems of data reconciliation
Just as mentioned in section 3, the data reconciliation problem is expressed as the solution of a constrained least-squares problem. However, the assumptions, the constraint equations and the weighted parameter matrix have to be discussed.

On the assumptions of data reconciliation in the present application
For the application to the steam network, there are four latent assumptions:
1. The process is at a steady state or approximately steady state. Suppose that the electric valves are fixed at a certain position and the mass flow rate in each stream has been close to a constant for a period of time. Then the constraint equation (54) can be written out on the basis of the balance for all the nodes (including real nodes and pseudonodes), and the solution of the problem can accord with the actual state.
2. The measurement data are serially uncorrelated. This assumption makes it easier to estimate the deviations of the measurements. Although it usually does not hold exactly, the serial data can be processed to reduce the conflict [17].
3. The gross errors have been detected and removed. If there are gross errors in the measurement data, the procedure of data reconciliation will propagate the errors.
4. The constraint equations are linear. The form of equation (54) is linear; however, the true constraint equations are related to many other environmental variables and are obviously not linear, so the assumption is only approximately satisfied.
Only when these four assumptions are nearly satisfied can the solution of the problem (equation (58)) be close to the true value.

On the constraint equation
Here the constraint becomes $AX = C$, where $C = C(\tau, D, l, T, P, q)$ represents the vector of the condensate water and leakage loss amounts of each constraint equation. Each element is related to the environmental temperature τ, pipe diameter D, pipe length l, steam temperature T, pressure P and the flow rates of the main pipes.

It is difficult to set up mathematical models of these loss amounts. However, by a first-order Taylor expansion at the point $(\tau_0, D_0, l_0, T_0, P_0, q_0)$, each element of C can be approximated as a linear function of these variables (69). The constants in equation (69) can be determined by multiple linear regression with a certain amount of history data. With the constraint equation changed into $AX = C$, the solution to the least-squares reconciliation problem becomes

$\hat X = Y - Q A^T (A Q A^T)^{-1} (A Y - C)$.

The selection of Q directly influences the result of the data reconciliation. In theory Q is recommended as in equation (57); however, the deviations are usually unknown, or may vary as the instruments age. The deviation of each measurement can be estimated by the standard deviation of the sample data:

$s_i^2 = \frac{1}{N-1}\sum_{j=1}^{N}(X_{ij} - \bar X_i)^2$,  (72)

where $X_{ij}$ is the jth sample of the variable $X_i$ and $\bar X_i$ is the average of $X_i$.

If a small number of high-precision instruments are applied, with the corresponding elements in Q given smaller values, the quality of the data will be greatly improved. Though some literature [18][19] recommends methods to determine or adjust the matrix Q, methods or theories for general application are not available.
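Estimating Q from history data per equation (72) is straightforward; a sketch with illustrative names:

```python
import numpy as np

def estimate_Q(history):
    """Build the weighting matrix Q = diag(s_1^2, ..., s_m^2) from an
    N x m array of history samples, s_i being the sample standard
    deviation of variable i (eq. 72, with the 1/(N-1) divisor)."""
    s2 = np.var(np.asarray(history, dtype=float), axis=0, ddof=1)
    return np.diag(s2)

# Usage: three historical snapshots of two flow meters
Q = estimate_Q([[1.9, 4.8], [2.1, 5.2], [2.0, 5.0]])
```

A meter whose history scatters more gets a larger diagonal entry and therefore absorbs a larger share of the adjustment during reconciliation, which is exactly the weighting behavior described above.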

Simulation results
The simulation results of data reconciliation for the present application are shown in Table 3. The standard deviations are set to 0.5 percent of the true values, and the pipe network loss is not considered in the simulation.

Conclusion
In this chapter, three data processing approaches to improve data quality are demonstrated.
To control the steam system properly and keep it performing normally, the data obtained from the EMS should be accurate and reliable; however, the data may be influenced by many outer factors. The approaches proposed in the chapter detect the fault data, locate and remove the gross errors, and reduce the random errors.
Four main reasons induce the low accuracy of the mass flow rate measurements. Combining the principle of "3σ" with the empirical distribution function is proposed to determine the control limits for single-variable monitoring, and PCA is applied to determine the control limits for the multivariate process. With the limits, most fault data can be identified easily. For the fault data of flow rates, an approach to set up the mathematical model of the steam network and calculate the flow rates is proposed. The simulation and experimental results show the effectiveness of the approaches.
Two approaches to detect gross errors, MT and MP, are demonstrated. Both proceed by selecting statistical variables that follow the standard normal distribution and applying hypothesis tests. Some notes on the two algorithms are stated.
The constrained least-squares problem applied to the present application is discussed. The four assumptions are approximately satisfied when the steam network functions normally and the state is nearly static. The pipe network loss can be added to the constraint equations for more accurate results. The weighted parameter matrix influences the results of data reconciliation; estimating the deviations of the instruments online and applying several instruments with high precision will improve the quality of the reconciled data.

Figure 1. The objective of the work

Figure 2. The probability density distribution graph of the normal distribution

Figure 3. Monitoring the Steam Flow Rate of the Steel Making Process

Figure 4. Monitoring the Flow Rate of the Start Steam Boiler
Figure 5.

Figure 6. The curve and distribution bar graph of flow rate for a chemical process; two groups of samples are derived (some samples may be discarded because they do not belong to any state region)

Figure 7. The changing curves and scatter diagram of three variables
Figure 8.

Figure 9. The result of monitoring without differentiating states

i. The hydraulic and thermal model of a single pipe

- The hydraulic model of a single pipe

By the law of momentum conservation, the hydraulic model can be written as in [8], where:
p1, p2 - the input and output pressure of the steam in the pipe, MPa;
T1, T2 - the input and output absolute temperature, K;
Z1, Z2 - the compressibility factor of the input and output steam;
λ - the frictional factor;
q - the mass flow rate, t/h;
L - the length of the pipe, m;
D - the inner diameter of the pipe, mm;
ρm - the weighted mean density of steam, kg/m³.

- The thermal model of a single pipe

By the law of energy conservation, the static thermal model can be written as follows:

cp - the specific heat capacity at constant pressure, kJ/(kg·K); ql - the amount of heat loss along unit pipe length, W/m. The remaining symbols, given in terms of the IF-97 formulation, are defined in reference [9]. The amount of heat loss along unit pipe length can be calculated using equation (38).
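A minimal sketch of the static thermal balance implied above, q · cp · (T1 − T2) = ql · L, with the chapter's units; the helper name and numeric example are hypothetical.

```python
def outlet_temperature(T1, q_th, cp, ql, L):
    """Outlet steam temperature from the static thermal balance (sketch).

    Energy conservation over the pipe gives q * cp * (T1 - T2) = ql * L,
    so T2 = T1 - ql * L / (q * cp).  Units follow the symbol list:
    T1 in K, q_th in t/h, cp in kJ/(kg*K), ql in W/m, L in m.
    """
    q = q_th * 1000.0 / 3600.0              # t/h -> kg/s
    return T1 - ql * L / (q * cp * 1000.0)  # kJ -> J for consistency
```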

Figure 10. Layout of the steam pipe network

Figure 11. The flow chart of the flow rate calculation

Figure 12. A section diagram of the steam network

Step 3. Compare zj with a critical test value zc. If zj > zc, denote stream j as a bad stream. A value for zc is recommended in [12].

Step 6. Solve equation (54) to compute the rectified values of the streams in S, substituting the estimated values computed in step 5 for the data of the streams in T. The original measured data are used for the streams in the set R = U − (S ∪ T), where U is the set of all streams.
Step 7. The vector comprising the results from steps 5 and 6 and the original measured data for the streams in R is the rectified measurement vector. If S is empty, then Ŷ = X̂*, and the rectification of the measured data is completed in step 1.

3. On selection of the weighted parameter matrix Q

SPE (also called the statistical variable Q) is defined as:

If it is satisfied, the calculation ends with the result of step 3. If not, take the intermediate pressure values as the result of step 3, reduce the step size, and go to step 2 to start another calculation cycle.

Step 4. Thermal model verification. Substitute the intermediate nodes' temperature (determined in step 2) and the pressure and flow rate values (determined in step 3) into the thermal model to verify the inequality.

Table 1. The Specifications of the Pipes

The measured data are compared to the flow rate values produced by the calculation algorithm. The comparison results are listed in Table 2, which shows that the largest difference is less than 6%. The factors neglected in the model, the parameter errors, and the measurement errors all contribute to the difference.

Table 2. The Comparisons of the Measured Data and the Calculated Data

If the inequality is satisfied, the ith node is regarded as a good node, and all streams connected to node i are denoted as good streams. Note that equations (51) and (52) are based on the assumptions of steady state and linear constraints, with no pipeline leakage or condensate water loss. Since pipeline leakage and condensate water cannot be avoided in practice, equations (51) and (52) should be written as:

Table 3. The Result of Data Rectification

As shown in Table 3, most of the rectified data are much closer to the true values. The simulation testifies to the effectiveness of data reconciliation.