Statistics of return intervals between long heartbeat intervals and their usability for online prediction of disorders

We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power-laws in the probability density function PQ(r) of the return intervals. As a consequence, the probability WQ(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, given that t intervals have elapsed since the last large heartbeat interval, follows a power-law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus of predicting it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.


Introduction
Physiological signals such as heartbeat intervals, blood pressure or breathing, as outputs of the complex human system, typically exhibit complicated nonlinear behaviour with significant information underlying the dynamics of their fluctuations, and investigation of the laws governing these dynamics plays an important role in better understanding the development of autonomic disorders. Traditional studies of physiological regulation were often limited to one or several separate control mechanisms that maintain stability in the system by reacting to a variety of internal and external factors. An example of such a mechanism is the baroreflex, which keeps blood pressure within a certain range by decreasing the heart rate when blood pressure increases, and increasing the heart rate when blood pressure decreases. Usually each of these mechanisms has its own characteristic operating frequency. On long timescales, especially when the activity status of the subject is changing, these characteristic frequencies can drift significantly, thus indicating nonlinear transformations in the system. Furthermore, due to (possibly overlapping) nonstationarities related to changes in physical and emotional activity caused by external factors, a direct measurement of slow regulation processes is difficult. In order to quantify physiological regulation on long timescales, additional knowledge about the laws governing the regulatory mechanisms over a wide range of scales is of immense importance.
In the last decade, the scaling behaviour of the fluctuations in physiological signals has been investigated intensively [1]-[3]. It has been revealed that the fluctuations, e.g. in heartbeat interval records, can be characterized by self-similarity and long-term memory, indicating that the healthy system tends not to operate on a characteristic timescale, but rather shows scale-free behaviour, at least over a certain range of scales. These basic findings stimulated further research in the field to develop novel methods for quantification of the physiological status, which have yet to be implemented in clinical practice [4]-[6]. It was shown recently that the correlation properties of the heartbeat intervals differ significantly between various sleep stages. While in rapid eye movement (REM) sleep heartbeat intervals are long-term correlated, in non-REM sleep only short-term correlations can be observed [7,8]. Furthermore, when the sleep stage changes, certain transitions occur in the heartbeat interval series. Very recently, a method for detecting sleep-stage changes based on the detection of such transitions has been introduced [9,10].
It was also shown that physiological signals exhibit multifractal features that are reminiscent of turbulence and can be effectively modelled by multiplicative cascades [3,11,12]. These features are quite stable, demonstrate significant alterations only under serious impairment of cardiac function or control, and depend only slightly on age [13]-[16]. They are also rather independent of physical activity and of changes in sympathetic and vagal regulation, such as pharmacologically reduced sympathetic activity [17]. Although heartbeat intervals appear to be the most studied quantity in this context, other quantities such as blood pressure, different parameters derived from the electrocardiogram (ECG) and electroencephalogram (EEG), and some others have been shown to demonstrate similar features.
In this paper, we show how multifractality and long-term memory (both linear and nonlinear) can be exploited to provide online a priori information about the probability that a certain physiological quantity will leave its typical range by exceeding some extremely large (or extremely small) value. The central quantities we study here are the intervals between events in which a signal exceeds a certain threshold value Q, since the nonlinear procedure of extracting the return intervals helps to elucidate both the linear and the nonlinear long-term memory inherent in the time series. The quantities of interest are the probability density functions (PDFs) of the return intervals and their conditional statistics, i.e. the statistics of the intervals following all intervals of a certain length r0, quantifying the memory in the series of return intervals and thus in the occurrence of rare anomalous events. We concentrate on rather low values of the threshold Q, corresponding to mean return intervals RQ ranging from 10 to 500 (so that on average every 10th or every 500th event is a 'large' one). By obtaining universal laws governing the occurrence of such events over a wide range of scales, we are able to obtain additional knowledge about the occurrence of the more extreme (anomalous) values, when Q is at the borderline of 'normal' values.
Here we focus on heartbeat interval records obtained during 24 h Holter monitoring from 20 healthy subjects. We model the heartbeat interval records by the multiplicative random cascade (MRC) model characterized by a 1/f power spectrum, and show that the return intervals between large events, both in the model and in the observational records, exhibit a pronounced power-law distribution, following our general predictions for records exhibiting multifractal behaviour [18]. To characterize the long-term memory among the return intervals, we also study the conditional distributions of the return intervals and the conditional return periods. To elucidate the a priori information about the nearest future of the process, we discuss in greater detail two useful quantities: (i) the estimated mean number of intervals until the next event above Q in the record and (ii) the probability that within the next Δt heartbeat intervals at least one event above Q will occur. In both cases, we use the information about the elapsed number of heartbeat intervals t after the last Q-exceeding event to obtain these quantities. We show that both quantities again exhibit pronounced power-law behaviour. Finally, using these quantities we suggest an algorithm for the online prediction of Q-exceeding events in multifractal records with 1/f noise, which, as we show below, can be successfully applied to heartbeat interval records. We show that our method, which exploits long-term memory and which we call the return intervals approach (RIA), is superior to the conventional precursory pattern recognition technique (PRT), which exploits solely short-term memory.
As shown below, our theoretical and numerical results seem to be generally valid for multifractal records characterized by a 1/f power spectrum, and a generalization that allows extension to multifractal records with different parameters is simple. We suggest that this approach can also be applied to other physiological quantities that are important for online monitoring, such as blood pressure and S-T segment level and slope on the ECG, as well as for derived quantities obtained from several physiological signals such as baroreflex sensitivity (BRS). This paper is organized as follows. In section 2, we describe the MRC that we use here to model heartbeat intervals, compare the results of the multifractal analysis of the simulated and of the observational records, and discuss the relation of the MRC model to the conventional physiological regulation models based on coupled oscillators. In section 3, we focus on the return interval statistics both in the observational and in the simulated records generated by the MRC model. Finally, in section 4, we suggest an online algorithm for predicting large heartbeat intervals during cardiomonitoring.

MRC in modelling heartbeat interval records
MRCs and their modifications are the most common tools for generating artificial multifractal records and thus for modelling various natural processes exhibiting multifractal behaviour. Here, we use the multiplicative cascade with random multipliers, originally proposed in [3,12], to model the heartbeat interval series.
In the MRC process (for illustration, refer to figure 1), the data are obtained in an iterative way, where the length of the record doubles in each iteration. In the zeroth iteration (n = 0) the data set (yi) consists of one value, y^(n=0)_1 = 1 (see figure 1(a)). In the nth iteration, the data y^(n)_i, i = 1, 2, ..., 2^n, are obtained from the data of the previous iteration by multiplication with random multipliers: y^(n)_{2l−1} = m^(n)_{2l−1} y^(n−1)_l and y^(n)_{2l} = m^(n)_{2l} y^(n−1)_l, l = 1, 2, ..., 2^(n−1), where the m^(n)_i are independent, identically distributed random numbers. After N iterations each value is a product of N multipliers; by the central limit theorem, the distribution of the logarithm of such a product converges to a Gaussian one, and therefore the distribution of ∏_{n=1}^{N} m^(n)_l appears to be log-normal. Thus when the total number of iterations N is not very large, it is preferable to use a log-normal distribution of the multipliers for better convergence.
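The doubling construction described above can be sketched in a few lines of code. This is a minimal illustration, assuming log-normally distributed multipliers; the width parameter sigma and the seed are arbitrary choices for illustration, not values fitted to the heartbeat records:

```python
import numpy as np

def mrc_series(n_iter, sigma=0.5, seed=0):
    """Multiplicative random cascade: start from a single value and double
    the record in each iteration, multiplying every value by independent
    log-normal random multipliers."""
    rng = np.random.default_rng(seed)
    y = np.array([1.0])                      # zeroth iteration: y_1 = 1
    for _ in range(n_iter):
        # each parent value spawns two children with independent multipliers
        m = rng.lognormal(mean=0.0, sigma=sigma, size=2 * len(y))
        y = np.repeat(y, 2) * m
    return y

series = mrc_series(12)                      # record of length 2**12 = 4096
```

After N iterations every value is a product of N independent multipliers, which is why the log-normal choice of the multiplier distribution gives fast convergence to the asymptotic form.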
The construction of the MRC model for heartbeat dynamics can be viewed as an extension of conventional models that describe the physiological regulation as a set of several mechanisms operating around their characteristic frequencies. In [19], a model based on coupled oscillators was proposed, motivated by the empirical evidence that several characteristic frequencies dominate in five main physiological signals (respiration, ECG, heart rate variability, blood pressure and blood flow). The results indicate that the characteristic frequencies are arranged roughly logarithmically, at least within the range from f ≈ 0.01 Hz up to f ≈ 1 Hz. These results were obtained while exploring rather short records (corresponding to a quasi-stationary regime) by means of wavelet analysis. Indeed, in longer records these frequencies drift, according to changes in physical and emotional activity, day/night variations and so on, leading to nonlinear transformations and 'spreading' of the localization in the frequency domain. This has been taken into account in the MRC model, where the first iterations represent slow (long-term) regulation, while the last iterations represent fast (short-term) regulation. Rectangular (instead of harmonic) elements constructing the model in the time domain lead to the 'spread' spectrum, representing the smoothing of the spectrum in long records.
To evaluate multifractal properties both of the simulated and of the observational records, we use the multifractal detrended fluctuation analysis (MF-DFA) introduced by Kantelhardt et al [20]. In the MF-DFA one considers the profile, i.e. the cumulated data series Y(i) = ∑_{k=1}^{i} (y_k − ⟨y⟩), and splits the record into N_s (non-overlapping) segments of size s. In each segment ν a local polynomial fit y_ν(j), e.g. of second order, is estimated. Then one determines the variance F²(ν, s) = (1/s) ∑_{j=1}^{s} [Y((ν − 1)s + j) − y_ν(j)]² between the local trend and the profile in each segment ν, and determines the generalized fluctuation function F_q(s) = [(1/N_s) ∑_{ν=1}^{N_s} (F²(ν, s))^{q/2}]^{1/q}. In general, F_q(s) scales with s as F_q(s) ∼ s^h(q). For a monofractal series, h(q) is independent of q and identical to the Hurst exponent H (see e.g. [21,22]). For multifractal data, the generalized Hurst exponent h(q) depends on the chosen moment q. If the data are characterized by a 1/f power spectrum, h(2) = 1. For a flat power spectrum (characterized by the absence of linear correlations), h(2) = 0.5. In [20], it was shown that h(q) is directly related to the scaling exponent τ(q) defined by the standard partition function-based multifractal formalism [23], via τ(q) = qh(q) − 1. Figure 2 shows the fluctuation functions of four representative heartbeat interval records fitted by the corresponding functions for the MRC model, shown by dashed lines. For better resolution, the fluctuation functions were divided by s, so that a flat line corresponds to the unit Hurst exponent h(q) = 1. The figure shows that both for the observational and for the simulated records h(2) is very close to unity, while for the other moments q = 1 and 5 deviations from this value can be observed, thus demonstrating slight multifractality. The results for the model agree surprisingly well with the results for the heartbeat interval records.
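The MF-DFA procedure can be sketched as follows. This is a bare-bones illustration (segments scanned from the start of the record only, second-order detrending, q ≠ 0 moments only; the q = 0 moment needs a logarithmic average and is omitted), not a replacement for a full MF-DFA implementation:

```python
import numpy as np

def mf_dfa(data, scales, q_values, order=2):
    """Return the fluctuation functions F_q(s) as a dict keyed by (q, s)."""
    profile = np.cumsum(data - np.mean(data))     # cumulated series Y(i)
    fq = {}
    for s in scales:
        n_seg = len(profile) // s
        var = np.empty(n_seg)
        x = np.arange(s)
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            trend = np.polyval(np.polyfit(x, seg, order), x)
            var[v] = np.mean((seg - trend) ** 2)  # variance about local trend
        for q in q_values:
            fq[(q, s)] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    return fq

# h(q) is the slope of log F_q(s) versus log s
rng = np.random.default_rng(2)
noise = rng.normal(size=2**14)
scales = [16, 32, 64, 128, 256]
f = mf_dfa(noise, scales, q_values=[2])
h2 = np.polyfit(np.log(scales),
                np.log([f[(2, s)] for s in scales]), 1)[0]
# for uncorrelated noise h(2) should come out close to 0.5
```

For a record with a 1/f power spectrum the same fit should instead give h(2) close to 1, as discussed in the text.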

Return interval statistics in heartbeat interval records
In this section, we focus on the statistics of return intervals above a certain threshold Q in heartbeat interval records. The nonlinear operation of extracting the return interval sequence from the raw data helps in elucidating both the linear and nonlinear memory in the series. For an illustration of the procedure of extracting sequences of return intervals out of the raw data, see figure 3. We find that the PDF of the return intervals follows a power-law, PQ(r) ∼ (r/RQ)^−δ(Q) (equation (4)), where δ(Q) decreases with increasing Q; in particular, δ(Q) = 1.6 for RQ = 10, δ(Q) = 1.4 for RQ = 70 and δ(Q) = 1.25 for RQ = 500. Some deviations can be observed only for large arguments r/RQ, in line with our recent general predictions for multifractal records with concomitant linear correlations [24,25], and in contrast to the results for multifractal records without linear correlations [18]. For the MRC records characterized by a 1/f power spectrum, these deviations can be well approximated by a Gamma-distribution PQ(r) ∼ (r/RQ)^−δ(Q) e^(−cr/RQ) with c ≈ 1/400. Due to finite size effects, additional deviations may occur for large RQ values.
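The extraction step can be sketched as follows. The threshold Q is chosen as a quantile of the record, so that on average every RQ-th value exceeds it; function and variable names are ours, for illustration only:

```python
import numpy as np

def return_intervals(data, R_Q):
    """Intervals (in units of single events) between exceedances of the
    threshold Q, with Q chosen so that the mean return interval is R_Q."""
    data = np.asarray(data, dtype=float)
    Q = np.quantile(data, 1.0 - 1.0 / R_Q)   # fraction 1/R_Q lies above Q
    positions = np.flatnonzero(data > Q)     # indices of the 'large' events
    return np.diff(positions)

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)                 # uncorrelated surrogate data
r = return_intervals(x, R_Q=10)
```

For uncorrelated data the intervals are close to geometrically distributed with mean RQ; it is the long-term memory of the heartbeat records that turns this into the power-law PDF discussed in the text.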
Figures 4(d)-(f) show the conditional PDFs PQ(r|r0) of the return intervals following a certain interval r0 for the same RQ values. To obtain better statistics, instead of a single value of r0, it is plausible to use a certain range of values, typically with the same number of values in each bin (split at quarters, octaves and so on; see [26]). Here, we use an alternative variant of binning and show the distribution of all intervals following r0 = 1 (first bin) and following r0 > 3 (last bin). In both cases, the corresponding results for the simulated data are shown by dashed lines.
Again, there is surprisingly good agreement between the results obtained from heartbeat intervals and from the MRC model. Both in the observational and in the simulated records, the conditional PDFs are more distinct for smaller RQ values. For r0 = 1, there is not much difference between the conditional PQ(r|r0) and the global PQ(r) PDFs. Together with stronger finite-size effects and earlier deviations from a power-law, this makes it quite difficult to distinguish between PQ(r|r0) for large and for small r0.
A simpler quantity describing memory among the return intervals, which requires less statistics and thus can be easily obtained from observational data, is the conditional mean return interval, or conditional return period, RQ(r0). Here we used logarithmic binning of r0. Figures 4(g)-(i) show RQ(r0), calculated as the average of all conditional return intervals for a fixed threshold Q, plotted versus r0/RQ (in units of RQ). The figure demonstrates that, as a consequence of the memory, large return intervals are rather followed by large ones, and small intervals by small ones. In particular, for r0 values exceeding the return period RQ, RQ(r0) increases as a power-law, RQ(r0) ∼ (r0/RQ)^ν, where the exponent ν decreases with increasing values of RQ; in particular, ν = 0.63 for RQ = 10, ν = 0.53 for RQ = 70 and ν = 0.49 for RQ = 500. These results are also in line with our recent general findings for simulated multifractal records with concomitant linear correlations [24,25]. Note that only for an infinite record can the value of RQ(r0) increase indefinitely with r0. For real (finite) records, there exists a maximum return interval that limits the values of r0, and therefore RQ(r0). We note that here we expressed all the return intervals between large events in units of single heartbeat intervals, not in time units. We also calculated the same quantities for the return intervals expressed in units of real time and found more pronounced deviations from power-law behaviour in all three quantities. This demonstrates that all characteristic frequencies drift together with the main heart rate rhythm, which acts like a nonlinear time-deformation process. Rescaling all the results in units of heartbeat intervals allows one to reach scale-free behaviour over a wide range of scales even for long records, where the heart rate (and thus other related rhythms) could change their typical frequencies significantly during the time of observation.
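The conditional return period with logarithmic binning of r0 can be estimated as follows; this is a sketch, and the bin count is an arbitrary illustrative choice:

```python
import numpy as np

def conditional_return_period(intervals, n_bins=6):
    """Mean successor interval R_Q(r0), with the preceding interval r0
    grouped into logarithmic bins; returns {bin centre: mean successor}."""
    intervals = np.asarray(intervals, dtype=float)
    r0, r1 = intervals[:-1], intervals[1:]
    edges = np.logspace(np.log10(r0.min()), np.log10(r0.max() + 1),
                        n_bins + 1)
    idx = np.digitize(r0, edges) - 1
    out = {}
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            # geometric bin centre -> mean of the intervals that follow r0
            out[np.sqrt(edges[b] * edges[b + 1])] = r1[sel].mean()
    return out

# memoryless surrogate: R_Q(r0) should be flat at the return period R_Q = 10
rng = np.random.default_rng(4)
memoryless = rng.geometric(p=0.1, size=50_000)
rq = conditional_return_period(memoryless)
```

In a long-term correlated record the same estimate rises with r0 as described in the text; the flat result for the memoryless surrogate is the null case against which that memory is detected.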

Online prediction of large heartbeat intervals
Next we introduce a novel algorithm for the effective prediction of large heartbeat intervals (exceeding a certain threshold Q) on the basis of the long-term memory inherent in heartbeat interval series. Such an algorithm might prove helpful, for example, in cardiomonitoring devices with alarm facilities, since it provides additional information on the a priori probability that a bradycardia event will occur. The universality of the proposed approach allows one to extend the solutions to quantities other than heartbeat intervals, for example blood pressure or some parameters of the ECG, altogether providing additional information about the development of autonomic disorders.
Existing strategies rely upon information about short-term memory and are based on finding precursory patterns y_{n,k}: y_{n−k}, y_{n−k+1}, ..., y_{n−1} of k events that typically precede an extreme event y_n > Q. These strategies are mainly based on two approaches. In the first approach, one concentrates on the large events and their precursory patterns and determines the frequency of these patterns, leading to the a posteriori probability P(y_{n,k}|y_n > Q). In the second approach, one considers all patterns of k events y_{n,k}: y_{n−k}, y_{n−k+1}, ..., y_{n−1} that precede any event in the record, and determines the probability P(y_n > Q|y_{n,k}) that a given pattern is a precursor of an extreme event y_n > Q [27,28]. The second approach is more profound, since it considers information about the precursors of all events, thus providing additional information on the series studied, as has been confirmed recently for both short- and long-term correlated data [29,30].
The most straightforward solution for a decision-making algorithm is to look for the pattern that has the highest probability of being followed by an extreme event and to give an alarm when this pattern appears. In physiological records, which are produced by a nonlinear complex system, this pattern may not be representative, since many other patterns may have comparable probabilities of being followed by an extreme event. In this case, a better approach is to give an alarm when the estimated probability P for an extreme event to occur exceeds a certain threshold QP. The (arbitrary) selection of QP is usually optimized according to the minimal total cost of false predictions made, including false alarms and missed events, after specifying a certain cost for a single false alarm and for a single missed event (which may differ strongly between tasks, see e.g. [27]).
Here we suggest a third approach for predicting large events, which requires less information and thus is easier to implement than the conventional approach. Our RIA is based on the return interval statistics and is particularly useful in records with nonlinear (multifractal) long-term memory.
Based on the PDF of the return intervals PQ(r), we obtain two quantities essential for predicting the next occurrence of an event above Q. The first quantity is the expected number of waiting units τQ(t) until the next event, when t units have elapsed since the last Q-exceeding event. By definition, for t = 0, τQ(0) is identical to the return period RQ. In general, τQ(t) is related to PQ(r) by τQ(t) = ∫_t^∞ (r − t) PQ(r) dr / ∫_t^∞ PQ(r) dr, which for multifractal data becomes τQ(t) ∼ t^ξ(Q). An extension of this quantity, which also considers the preceding return interval r0, is the conditional expected number of waiting units until the next Q-exceeding event, τQ(t|r0).
Here the exponent ξ(Q) decreases with increasing Q (ξ = 0.6 for RQ = 10 and ξ = 0.47 for RQ = 70). The conditional expectation τQ(t|r0) for r0 = 1 can also be described by a power-law with roughly the same exponent ξ(Q, r0) as the global ξ(Q). On the contrary, τQ(t|r0) for r0 > 3 shows significant deviations at small arguments t/RQ. For large arguments, the curves for both r0 values nearly collapse. The second quantity is the probability WQ(t; Δt) that within the next Δt heartbeat intervals at least one large heartbeat interval occurs, if the last large heartbeat interval occurred t heartbeat intervals ago. This quantity is related to PQ(r) by WQ(t; Δt) = ∫_t^(t+Δt) PQ(r) dr / ∫_t^∞ PQ(r) dr (8). In the case of a purely algebraic decay of PQ(r), as occurs in linearly uncorrelated multifractal records (which model, for example, financial data [31]), WQ(t; Δt) = (δ(Q) − 1) Δt/t for Δt ≪ t. Since WQ(t; Δt) is bounded by one for t/RQ → 0, the power-law behaviour can only be valid for t/RQ > (δ(Q) − 1) Δt/RQ, and one arrives at WQ(t; Δt) = min[1, (δ(Q) − 1) Δt/t] (9).
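Both an empirical hazard estimate of WQ (in the spirit of the ratio of integrals in (8), with the integrals replaced by counts over the observed return intervals) and the pure power-law approximation can be sketched as follows; the names are illustrative:

```python
import numpy as np

def w_q_empirical(intervals, t, dt=1):
    """W_Q(t; dt): fraction of return intervals longer than t that end
    within the next dt steps, i.e. P(t < r <= t + dt) / P(r > t)."""
    r = np.asarray(intervals)
    survivors = r > t                        # intervals still 'running' at t
    if not survivors.any():
        return np.nan
    hits = survivors & (r <= t + dt)
    return np.count_nonzero(hits) / np.count_nonzero(survivors)

def w_q_powerlaw(t, dt, delta):
    """Approximation for a purely algebraic PDF: min[1, (delta - 1) dt / t]."""
    return min(1.0, (delta - 1.0) * dt / t) if t > 0 else 1.0

# memoryless surrogate: W_Q(t; 1) is flat in t, at the exceedance rate 0.1
rng = np.random.default_rng(5)
memoryless = rng.geometric(p=0.1, size=200_000)
w0 = w_q_empirical(memoryless, 0)
w20 = w_q_empirical(memoryless, 20)
```

For a memoryless record the empirical WQ(t; 1) stays constant; the decay of WQ(t; 1) with t in the heartbeat records is precisely the long-term memory that the RIA exploits.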

Figures 5(c) and (d) show WQ for the MRC record characterized by a 1/f power spectrum, for RQ = 10 and 70, respectively. Since for this kind of record the PDF follows a gamma-distribution rather than a power-law, incorporating deviations at large scales, an analytic expression for WQ can hardly be achieved. We found empirically that in this case a good estimate of WQ is obtained when the gamma-distribution is used in the denominator of (8). For large t/RQ, strong finite size effects occur, which become more pronounced for large RQ values (see figure 5(d)). These finite size effects, which decrease with decreasing RQ and increasing data length L, lead to an underestimation of the denominator in (8) and thus to an artificial overestimation of WQ.
For prediction purposes, the most relevant quantity is the probability of a Q-exceeding event occurring in the next heartbeat interval, so in WQ here we focus on Δt = 1. In all quantities, a surprisingly good agreement with the results for the MRC model is obtained. For large arguments t/RQ, finite size effects are more pronounced for single observational records than for the averaged and the simulated records. These finite size effects are considerably more pronounced in the conditional statistics than in the global statistics. Since no analytical solution for the conditional statistics is available, this appears to be a limiting factor in their usability for finite records.
The PDF of the return intervals exhibits power-law behaviour (see equation (4)), where RQ is the mean return interval and the exponent δ > 1 depends explicitly on RQ [24]. As shown above, this power-law dependence can be observed in heartbeat interval records except for very large r/RQ. Since for RQ = 500 finite size effects are already very strong and make it impossible to provide reliable estimates for the observational records, here we concentrated solely on RQ = 10 and 70. However, since the basic quantities shown above do not differ qualitatively for larger RQ, we believe that our following results can also be extrapolated to large thresholds Q (corresponding to the borderlines of critical physiological conditions) when dealing with longer observational records.
In the following, we suggest a decision-making algorithm based on the comparison of the estimated WQ with a fixed threshold QP, and show that it is superior to the conventional approach in predicting large heartbeat intervals. Figure 7(a) shows a representative fragment of a heartbeat interval record and a threshold Q corresponding to the mean return interval RQ = 70. Figure 7(b) illustrates the probabilities WQ(t; 1) estimated from (9) for the above heartbeat interval record. Figure 7(c) illustrates the same probabilities estimated using the precursory PRT, using the information about the two preceding values, P(y_n > Q|{y_{n,2}}).
A straightforward way of designing a decision-making algorithm is to set a threshold value QP for the risk probability and to activate an alarm whenever it is exceeded. For a certain QP value, the efficiency of the algorithm is generally quantified by the sensitivity Sens, which denotes the fraction of correctly predicted events, and the specificity Spec, which denotes the fraction of correctly predicted non-events. The larger Sens and Spec, the better the prediction provided by the algorithm. The overall quantification of the prediction efficiency is usually obtained from 'receiver operator characteristic' (ROC) analysis, where Spec is plotted versus Sens for all possible QP values. By definition, for QP = 0, Sens = 1 and Spec = 0, while for QP = 1, Sens = 0 and Spec = 1. For 0 < QP < 1, the ROC curve connects the upper left corner of the panel with the lower right one. If there is no memory in the data, Spec + Sens = 1, and the ROC curve is a straight line between both corners (dashed lines in figure 8). The total measure of the predictive power PP, 0 < PP < 1, is the integral over the ROC curve, which equals one for perfect prediction and one half for a random guess.
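These ROC quantities can be computed as follows. The sketch evaluates Sens and Spec for a single alarm threshold, and computes the predictive power via the rank-based equivalence between the area under the ROC curve and the probability that a randomly chosen event receives a higher risk estimate than a randomly chosen non-event (an O(n²) pairwise form, fine for illustration; names are ours):

```python
import numpy as np

def sens_spec(risk, events, q_p):
    """Sensitivity and specificity for a single alarm threshold Q_P."""
    alarm = np.asarray(risk, dtype=float) > q_p
    events = np.asarray(events, dtype=bool)
    sens = np.count_nonzero(alarm & events) / np.count_nonzero(events)
    spec = np.count_nonzero(~alarm & ~events) / np.count_nonzero(~events)
    return sens, spec

def predictive_power(risk, events):
    """Area under the ROC curve: P(risk at an event > risk at a non-event),
    ties counted half; 1 = perfect prediction, 0.5 = random guess."""
    risk = np.asarray(risk, dtype=float)
    events = np.asarray(events, dtype=bool)
    pos, neg = risk[events], risk[~events]
    greater = (pos[:, None] > neg[None, :]).mean()
    equal = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * equal
```

Sweeping q_p over all observed risk values traces out the full ROC curve of Spec versus Sens described above.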
To estimate the risk probability, one can either use the 'learning' observational record itself or use a model representing the record (here the MRC model, where we based our estimations on 150 MRC records of length L = 2^21). In the PRT, to create a database of all possible patterns y_{n,k} of k preceding events in a sliding window, one divides the total range of the possible values y_i of the record into l levels, such that there is the same number of values between each level, leading to a total number of patterns l^k. Next, for every precursory pattern y_{n,k} one estimates the probability P(y_n > Q|y_{n,k}) that the following event y_n exceeds Q. The major disadvantage of the PRT is that it needs a considerable amount of fine-tuning to find the optimum parameters l and k that yield the highest prediction efficiency. For transparency, we have kept the total number of patterns l^k = const and concentrated on five pattern lengths k = 2, 3, 4, 5 and 6. For predictions in the MRC record, we have chosen l^k = 10^6 and obtained the best result for k = 2. Smaller and larger values of l^k did not improve the prediction efficiency. For predictions in the observational records, where the statistics is limited, it is usually not possible to exceed l^k = 10^2 while still obtaining ROC curves that cover the whole area between zero and unit sensitivity. We obtained the best performance for k = 1 and 2. In the alternative RIA, the probability WQ(t; Δt) can be determined from the observational record by using equation (8), or by using the analytical form (9). In all cases we found that the latter approach, which is free of statistical limitations, gives the best result. The RIA outperforms the PRT even though the PRT explicitly accounts for the short-term dynamics of heartbeat intervals, including individual variations in physiological regulation. Accordingly, for the same high sensitivities the RIA yields considerably fewer false alarms than the PRT. We note that we have obtained similar conclusions for the overwhelming majority of the remaining records.
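The PRT baseline described above can be sketched as follows, using quantile-based levels so that each of the l levels is equally populated; the parameter values and the AR(1) surrogate used for illustration are our own choices:

```python
import numpy as np

def prt_probabilities(data, Q, k=2, l=10):
    """Estimate P(y_n > Q | pattern), where a pattern is the tuple of
    discretized levels of the k preceding values."""
    data = np.asarray(data, dtype=float)
    # l - 1 interior quantile edges give l equally populated levels
    edges = np.quantile(data, np.linspace(0.0, 1.0, l + 1)[1:-1])
    levels = np.digitize(data, edges)        # integers in 0 .. l-1
    counts, hits = {}, {}
    for n in range(k, len(data)):
        pattern = tuple(levels[n - k:n])
        counts[pattern] = counts.get(pattern, 0) + 1
        if data[n] > Q:
            hits[pattern] = hits.get(pattern, 0) + 1
    return {p: hits.get(p, 0) / c for p, c in counts.items()}

# short-term correlated AR(1) surrogate (coefficient chosen arbitrarily)
rng = np.random.default_rng(3)
y = np.zeros(50_000)
for i in range(1, len(y)):
    y[i] = 0.8 * y[i - 1] + rng.normal()
Q = np.quantile(y, 0.9)
probs = prt_probabilities(y, Q)
```

An alarm is then raised whenever the probability stored for the currently observed pattern exceeds the chosen threshold Q_P; in a positively correlated record, high-level patterns carry a markedly higher exceedance probability than low-level ones.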

Conclusion
In summary, we have shown that the arrangement of heartbeat intervals exceeding a certain threshold Q is scale-free over a wide range of scales. It can be characterized by power laws in the distribution of the interoccurrence times between large heartbeat intervals and in the conditional return period. Therefore the expected number of waiting units until the next heartbeat interval longer than Q, and the probability that at least one heartbeat interval longer than Q will occur within the next Δt heartbeat intervals, given that the last Q-exceeding heartbeat interval occurred t heartbeat intervals ago, also exhibit power-law behaviour. Using these results, we have suggested an alternative method for prediction that is based on the statistics of return intervals between events above a certain threshold Q, which is a powerful tool for quantifying the occurrence of extremes in persistent records [32]. We have applied this method to the heartbeat interval records and have shown that, by exploiting the long-term memory inherent in the number of elapsed units since the last Q-exceeding event, this method is superior to the conventional precursory pattern recognition approach, which focuses solely on short-term memory. The major disadvantage of the method is that it usually does not predict the first event in a cluster, when the number of elapsed heartbeat intervals t is large and thus WQ(t; Δt) is low. However, since in multifractal records the clustering of extreme events is pronounced, the gain from better predicting the events following the first one 'within' the cluster, as well as from the lower false alarm rate while being 'outside' the cluster, fully compensates, and in real records significantly exceeds, the loss from the weaker predictability of the first Q-exceeding event in the cluster, as we have confirmed using ROC analysis. An advantage of the RIA is that it does not require extensive learning or tuning procedures and thus is easier to implement.
Finally, in this paper we have focused only on heartbeat interval records obtained during 24 h Holter monitoring from 20 healthy subjects. We note that we have also obtained qualitatively similar results for two groups of patients with impairment of autonomic regulation suffering from syncope syndrome of orthostatic and neurogenic origin, which demonstrates the stability of the obtained results in moderate autonomic disorders. Since our results seem to be generally valid for multifractal records characterized by a 1/f power spectrum, and can be easily extended to multifractal records with other parameters, they can also be applied to the analysis of other physiological records. Apart from heartbeat intervals, usually measured as the intervals between consecutive R-peaks of the ECG (R-R intervals), there are also other measures, like the level and the slope of the segment between the S and T peaks of the ECG (S-T segment), which are important in online monitoring. For the latter example, which is actually in use for the prediction and detection of ischemia episodes, the better noise robustness of the proposed method is of immense importance, since a direct measurement of S-T segment parameters for a single heartbeat is difficult due to the strong noise impact. Another possible application of the method could be studying the shape of the ECG signal between two consecutive R peaks, a quantity recently suggested in [33]. Also, the method can be applied to analyse quantities derived from several physiological signals, such as dynamically estimated BRS, which plays an important role in medical issues related to the diagnosis and classification of autonomic disorders [34].
We also note that due to the validity of our theoretical results for a wide class of processes exhibiting multifractal behaviour, we believe that they can be straightforwardly extended to make more reliable predictions in other complex records exhibiting multifractal behaviour, such as turbulent flow, precipitation, river flows and network traffic.