Introduction

Driven by technical capacity and scientific interest, data sets in medical research are becoming increasingly large. In critical care, this is even more apparent: the intensive care unit produces an abundance of continuously measured clinical, laboratory, radiological, and pharmaceutical data. In neurocritical care specifically, the number of parameters that can be measured has increased vastly. Whereas we traditionally were only able to record systemic metrics, such as blood pressure and heart rate, we are nowadays able to (invasively) monitor highly granular physiological parameters, such as local brain oxygen pressure or intracerebral glucose and lactate levels [1, 2]. Multiple initiatives already exist that freely share such anonymized intensive care unit data for research purposes [3,4,5,6,7,8]. In addition, large observational studies and registries have been rolled out over the last decades, for example in traumatic brain injury (TBI) [9,10,11,12], that make their data available to researchers on request. This omnipresence of data provides an opportunity to improve clinical care [13]. However, the volume and dimensionality of big data make it challenging to use these data to inform treatment decisions and improve patient outcomes [2].

Within this context, it is important to realize that the scientific method seems to be shifting gradually from the research question as a starting point toward the available data as a starting point [14]. In the latter approach, there is less selection in the relationships being tested, and the probability that the findings are actually true is lower [15]. Three types of research questions can be answered by analyzing data, in order of increasing complexity: descriptive, predictive, and causal questions [16].

To answer such research questions, researchers can apply either more traditional statistical techniques or more modern machine learning (ML) algorithms. Part of the enthusiasm for ML algorithms stems from promising results in diagnostic research. For neurocritical care, examples include deep learning algorithms that identify brain lesions [17,18,19] and perform volumetric analyses [20]. For these types of descriptive questions, the performance of such algorithms seems reliable and high, and they are therefore likely to be of great assistance in interpreting, or even automating the interpretation of, visual diagnostic information.

However, there is more discussion as to how ML algorithms should be applied and interpreted for clustering (a type of descriptive question) and for predictive questions. Inadequate predictive algorithms can cause harm when implemented. This was the case for the Epic Sepsis Model: because the model was inadequately validated before implementation, it caused alarm fatigue and underdiagnosis of sepsis in clinical practice [21].

In this article, we aim to explain the characteristics of commonly used data analysis techniques and to present a perspective on the responsible use of these techniques for answering clustering (descriptive) and prediction questions. In effect, this article provides guidance for clinically oriented readers without requiring them to follow the methodological literature [22]. It is important for clinicians to be able to judge the appropriateness of published algorithms because invalid algorithms that are used to inform clinical practice might ultimately harm patients.

Data Analysis Techniques

Researchers can use various techniques to answer descriptive, predictive, and causal questions in their data sets [16]. A selection of the most commonly used techniques is provided in Table 1; these have been described in more detail elsewhere [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40]. Although there is a tendency to classify these techniques as either ML or statistical techniques, they share many characteristics. We therefore refrain from referring to these algorithms in a dichotomous way because the distinction is not very clear. First and foremost, they are all algorithms that, when provided with appropriate data, quantify relationships between variables. The most fundamental differences between the techniques lie in flexibility and functional aim. For more detailed descriptions of terms used in this article, we refer to the Online Appendix.

Table 1 Frequently used algorithms for modeling big data

Flexibility is the ability of a model to incorporate various types of relationships. Relationships can be linear or nonlinear and additive or nonadditive (Fig. 1) [41].

Fig. 1

Illustration of different types of relationships. a, Various ways in which two variables can be related linearly (upper left subpanel) or nonlinearly (the other subpanels). The data on the x-axis are an arbitrarily chosen range of numbers, and the relationship with the y-data was artificially simulated, including some noise (random error). b, The concept of nonadditivity. The upper two subpanels show, for a linear relationship, how the effect of group (color) can be additive (left) or nonadditive (right) over the x-variable. The bottom subpanels show the same for a nonlinear relationship

Nonlinear associations are, for example, the U-shaped effect of blood pressure on mortality in trauma patients with active hemorrhage [42] or the effect of body mass index on mortality in the general population [43]. Nonlinear associations are abundant in nature, and most algorithms can be extended to include them. In classical regression, for example, nonlinear relationships can be explored by adding polynomial terms, such as x², by applying a log transformation, as is commonly done for laboratory measurements [44], or by using splines [23]. A neural network with hidden layers includes nonlinearity implicitly through its complex architecture. In contrast, classification and regression trees do not really capture nonlinear relationships (Fig. 2).
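As a concrete illustration, the following is a minimal R sketch (simulated data, not taken from the article) of how a classical linear model can be extended with a polynomial term, a log transformation, or a spline:

```r
# Sketch: three common ways to let a regression model capture a nonlinear
# relationship between x and y (simulated data with random noise).
set.seed(1)
x <- runif(200, 1, 10)
y <- 2 * log(x) + rnorm(200, sd = 0.3)          # simulated nonlinear relationship

fit_poly   <- lm(y ~ x + I(x^2))                # polynomial term
fit_log    <- lm(y ~ log(x))                    # log transformation
fit_spline <- lm(y ~ splines::ns(x, df = 3))    # natural (restricted) cubic spline

AIC(fit_poly, fit_log, fit_spline)              # compare the candidate model forms
```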

Fig. 2

Fitting a regression model (lm function in R) and a regression tree (rpart function in R) to the nonadditive, nonlinear relationship shown in Fig. 1b, bottom right subpanel. Again, the data shown on the x-axis were an arbitrarily chosen range of numbers, and the y-data were artificially simulated, including some noise (random error). The regression model (colored lines) included a restricted cubic spline and an interaction term between group and x and follows the relationship nicely. The regression tree (black line) failed to include group in the final tree and included only x, thereby disregarding complexity in the data

Rarer still is the notion of nonadditivity [45, 46], commonly referred to as interaction (Fig. 1). We call two effects nonadditive if the association of a variable with the outcome changes when another variable changes its value. A recent example is the CRASH-3 trial, which showed that tranexamic acid reduces head-injury-related deaths primarily when administered very early [47]. The effect of tranexamic acid thus depends on time; the effects are therefore nonadditive. Again, classical regression can be extended to include interaction effects [23], and neural networks do so implicitly, in contrast to classification and regression tree models (Fig. 2).
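The following R sketch mirrors the idea behind Fig. 2 under simulated data and hypothetical variable names: a linear model with a restricted (natural) cubic spline and an interaction term, alongside a regression tree fitted to the same data:

```r
# Sketch: nonlinear + nonadditive regression versus a regression tree
# (simulated data; variable names are hypothetical).
library(splines)   # ns(): natural (restricted) cubic splines
library(rpart)     # rpart(): classification and regression trees

set.seed(2)
d <- data.frame(x = runif(300, 0, 10),
                group = factor(sample(c("A", "B"), 300, replace = TRUE)))
d$y <- with(d, ifelse(group == "A", sin(x), 1.5 * sin(x) + 1)) + rnorm(300, sd = 0.2)

fit_lm   <- lm(y ~ ns(x, df = 4) * group, data = d)  # spline plus interaction (nonadditive)
fit_tree <- rpart(y ~ x + group, data = d)           # tree fitted to the same data
```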

The functional aim refers to how the variables in the algorithm are linked together. Researchers are required to define this (to a varying extent per method) before performing the analysis. Given the three types of possible research questions [16], data can be either baseline characteristics (“X”) or outcome (“Y”). Algorithms vary in the way they couple “X” with “Y” (Table 1), which determines what types of research questions they can feasibly answer, given appropriate data.

Clustering

Clustering and latent class analysis are techniques that focus on relating “X” characteristics to each other. They are useful for answering descriptive questions. Some clustering algorithms, such as latent class models, also consider the “Y” outcome and can identify clusters conditional on outcome (“X|Y”).

In critical care, these techniques have become increasingly popular for identifying subgroups or “clinical phenotypes” of patients (Fig. 3). The premise is that by identifying these clinical phenotypes, we learn something intrinsic to that population that informs us how to better describe or classify patients or better allocate treatment. Clustering studies in (neuro)critical care summarize the patterns in high-dimensional neurocritical care data [48,49,50]. Clustering analyses are commonly performed in a static way with baseline data and/or outcome data. Examples in which continuously recorded vital parameters (a characteristic of neurocritical care data) are incorporated in a more dynamic way are rare.

Fig. 3

The increase in popularity of clustering studies in critical care. We used the search string “(clustering OR unsupervised OR hypothesis-free) AND critical care” and included studies in PubMed up to 2020

An important challenge for clustering studies is to ensure and assess validity. There are various useful measures of internal validity that assess how appropriately the clusters are formed within a data set [51,52,53,54]. External validity, defined as the degree to which the identified clusters can be applied to new data sets or new patients, is harder to assess; clustering focuses on accurately describing the current data in relevant groups. Because it does not produce a general metric or parameter describing how the clusters depend on data values, new data or new patients cannot readily be assigned to the identified groups. It is, however, possible to repeat the clustering analysis in a different data set with the same number of clusters and variables [55] and assess whether the clusters seem similar. Applying these clusters to a single patient is more problematic. For example, given that a patient has a Glasgow Coma Scale (GCS) score of 9, has extracranial injury, and has had a low-energy trauma mechanism [56], one can only relate these patient characteristics to the average characteristics of the clusters. We need to make an educated guess as to which of the clusters most resembles our patient. Therefore, the clinical usefulness of clustering patient groups remains limited, for now.
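As an illustration of internal validity assessment, the following is a minimal R sketch using a stand-in data set and the average silhouette width, one of several commonly used internal measures:

```r
# Sketch: assessing internal validity of k-means clusterings with the
# average silhouette width (stand-in data; not from the article).
library(cluster)   # silhouette()

set.seed(3)
X <- scale(iris[, 1:4])                      # stand-in for standardized patient characteristics
sil_width <- sapply(2:6, function(k) {
  km  <- kmeans(X, centers = k, nstart = 25)
  sil <- silhouette(km$cluster, dist(X))
  mean(sil[, "sil_width"])                   # average silhouette width for k clusters
})
best_k <- (2:6)[which.max(sil_width)]        # number of clusters with the best internal validity
```

Note that such measures only describe how well the clusters separate the current data; they say nothing about how the clusters would apply to new patients.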

Interestingly, after having identified clusters, researchers commonly describe these clusters again based on the outcome. For example, clusters of patients with COVID-19 have been described by the mortality observed per cluster [50, 55], clusters of patients with TBI have been described by the associated functional outcome [56, 57], and clusters of neurocritical care patients with invasive neuromonitoring “may have implications for…outcome predictions” [58]. From a prognostication perspective, this is inefficient; by first categorizing “X,” researchers lose information with which to predict “Y” [59, 60]. Therefore, predictive questions commonly require different techniques.

Prediction

Predictive research involves the development and validation of predictive algorithms [61,62,63,64]. The primary aim of predictive research is to predict outcome as accurately as possible. All algorithms that couple “X” to “Y,” whether ML techniques or statistical techniques, are potentially suited for prediction (Table 1).

Development

We discern two characteristics of data sets that are relevant for researchers when deciding on a technique to develop a predictive algorithm (Fig. 4). On the one hand, the dimensionality of the data is relevant, that is, the number of potentially relevant predictors in the predictive algorithm. On the other hand, the volume of the data is relevant. The volume of data entails the total number of patients and, specifically, the number of events of the least frequent outcome category in the case of dichotomous outcomes [65]. An example of low dimensionality and low volume is the Ottawa ankle rule [66]. This decision rule informs whether patients with ankle injuries need an x-ray. It was developed in a data set with 70 events in 689 patients and includes five predictors (events per variable [EPV] = 70/5 = 14). An example of high dimensionality and high volume is the OHDSI (Observational Health Data Sciences and Informatics) model to predict hemorrhagic transformation in patients with ischemic stroke [67]. This model includes 612 predictors and was developed in electronic health record data with 5,624 events in 621,178 patients with stroke (EPV = 5,624/612 ≈ 9). Note that although the latter model was developed in a much larger cohort with many more events, the EPV is lower. The model is therefore not free from risk of overfitting [68]. We discuss the application of statistical techniques and ML techniques in four areas defined by these two characteristics (dimensionality versus volume; Fig. 4).
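For reference, the EPV calculations for the two examples above, written out:

```r
# Events per variable (EPV) for the two examples discussed in the text
epv_ottawa <- 70 / 5      # Ottawa ankle rule: 70 events, 5 predictors   -> EPV = 14
epv_ohdsi  <- 5624 / 612  # OHDSI model: 5,624 events, 612 predictors    -> EPV ~ 9
```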

Fig. 4

Areas in which different types of algorithms might be considered for predictive modeling

First, with low-volume and high-dimensional data, it may be unwise to perform any predictive research. The risk of overfitting is too large [62], resulting in potentially harmful prediction tools [69, 70]. For example, a systematic review of ML in routinely collected intensive care unit data estimated that 30% of support vector machines were trained on fewer than 100 patients [71]. The chosen technique matters little: both regression methods and ML techniques require adequately large sample sizes [72, 73]. However, because ML techniques are data hungry [74], using them in this setting will likely result in more invalid or inaccurate predictions. These techniques may therefore be more harmful than traditional statistical methods in this setting.

Second, with low-volume and low-dimensional data, we suggest using methods that focus on stable estimation of the model (whichever method is chosen). Examples of prediction models developed in this area are models for acquired weakness in the intensive care unit (8–25 candidate predictors and 25–190 events) [75]. Within this area, penalized regression techniques are often used to shrink the coefficients during the estimation of the model [36, 76], thereby limiting the extent to which extreme coefficients are estimated. This limits the effect of “overfitting”: the problem that the developed model performs well in the original data set but worse when applied to new data sets or to real patients. Neural networks can also use methods to avoid overfitting, for example penalizing estimated weights (called “regularization”) or randomly dropping units during training (called “dropout”) [40]. It is important to realize that all abovementioned methods work less well in smaller data sets, in the sense that it is difficult to estimate the penalty parameter reliably [37]. Although it is appropriate to use these methods when sample size is limited, a larger sample size is preferable. Unfortunately, as the example of prediction models for acquired weakness in the intensive care unit shows, these techniques are underused [75].
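A minimal R sketch (simulated data, hypothetical setup) of penalized logistic regression with the glmnet package, in which the penalty is chosen by cross-validation and the coefficients are shrunk accordingly:

```r
# Sketch: penalized (ridge) logistic regression to limit overfitting
# in a small data set with several candidate predictors.
library(glmnet)

set.seed(4)
n <- 150; p <- 12
X <- matrix(rnorm(n * p), n, p)                          # 12 candidate predictors
y <- rbinom(n, 1, plogis(0.5 * X[, 1] - 0.4 * X[, 2]))   # simulated binary outcome

cv_fit <- cv.glmnet(X, y, family = "binomial", alpha = 0)  # ridge penalty; alpha = 1 gives lasso
coef(cv_fit, s = "lambda.min")   # shrunken coefficients at the cross-validated penalty
```

As noted above, the cross-validated penalty itself becomes unstable in small data sets, so shrinkage is a safeguard rather than a substitute for adequate sample size.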

Third, the area in which data volume is large and dimensionality is low is a safer area for predictive research. In contrast with the previous setting, there is less risk of overfitting. ML techniques and statistical techniques seem to perform similarly within this context [38, 77]. For example, only small differences in performance (discrimination, calibration) were found between various algorithms to predict outcome in TBI [77]. A possible explanation is that within the context of clinical data, most predictor effects are largely linear and additive on an appropriate modeling scale [62, 78]. Therefore, more flexible methods have limited opportunity to improve their predictions.

Fourth, the area of high volume and high dimensionality seems the most appropriate for ML techniques. The previously mentioned OHDSI model to predict hemorrhagic transformation in patients with ischemic stroke is such an example in neurocritical care [67]. The increased flexibility can potentially exploit subtle nonadditive or nonlinear effects to improve the accuracy of predictions. Unfortunately, currently published predictive studies with ML techniques remain at high risk of bias [38, 79]. Finally, although the sample size might seem large within this context, we should remain aware of the number of EPV, which indicates the effective sample size.

To summarize, the two suggested characteristics (dimensionality and volume) can be useful to inform what type of flexibility or control is required for an algorithm to be used reliably in a specific context. With reliability, we mean here that there is a high likelihood that the model performs similarly well (in terms of discrimination and calibration) in a different context. To actually confirm whether the algorithm performs well in a different context, researchers do need to rigorously validate predictive algorithms (see the following section) [80].

For neurocritical care, another interpretation of dimensionality may be used to inform what algorithm can be used for a predictive question. Dimensionality may also refer to the extent to which continuous measurements (e.g., intracranial pressure, local brain oxygen levels) need to be incorporated into the algorithm. Although repeated measurements are increasingly common in neurocritical care, their inclusion in predictive algorithms can be improved [81]. To adequately model repeated measurements, specific techniques are required, such as mixed effects regression [82], joint modeling [83], or recurrent neural networks [40].
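As an illustration of one such technique, a minimal R sketch (simulated data, hypothetical variable names) of a mixed effects regression for repeated intracranial pressure measurements:

```r
# Sketch: mixed effects regression for repeated measurements, with a random
# intercept and slope per patient to account for within-patient correlation.
library(lme4)

set.seed(5)
n_pat <- 60; n_obs <- 24
icp_data <- data.frame(
  patient_id = factor(rep(seq_len(n_pat), each = n_obs)),
  hours      = rep(seq_len(n_obs), times = n_pat),
  age        = rep(rnorm(n_pat, 50, 15), each = n_obs)
)
icp_data$icp <- 12 + 0.05 * icp_data$hours + 0.02 * icp_data$age +
  rep(rnorm(n_pat, sd = 3), each = n_obs) +                      # patient-specific intercepts
  rep(rnorm(n_pat, sd = 0.03), each = n_obs) * icp_data$hours +  # patient-specific slopes
  rnorm(n_pat * n_obs, sd = 2)                                   # measurement noise

fit <- lmer(icp ~ hours + age + (1 + hours | patient_id), data = icp_data)
summary(fit)
```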

Validation

To ensure reliable applicability in clinical practice, predictive algorithms require extensive validation [61]. There are various ways of validating a predictive algorithm. In increasing order of rigor, researchers can perform internal, internal–external, and external validation [80]. When the risk of bias is high, performance is likely overestimated [84].

It is common in ML studies to use “split sample validation” as an internal validation method: the model is trained in a training set, and performance is then estimated in the test set. This method is, however, quite inefficient for estimating performance [85]. Better methods are, for example, cross-validation or bootstrap resampling [23].
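A minimal R sketch (simulated data) of bootstrap resampling for internal validation, here using the validate function of the rms package to obtain optimism-corrected performance:

```r
# Sketch: bootstrap internal validation of a logistic regression model.
library(rms)

set.seed(6)
n <- 400
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- rbinom(n, 1, plogis(0.8 * d$x1 - 0.5 * d$x2))

fit <- lrm(y ~ x1 + x2, data = d, x = TRUE, y = TRUE)
validate(fit, method = "boot", B = 200)
# Reports optimism-corrected indices, e.g., Dxy (c-statistic = Dxy/2 + 0.5)
```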

A more exciting variant of cross-validation is internal–external validation [77, 80]. This method can be used in data sets with multiple clusters (e.g., multiple centers, multiple studies). The algorithm is developed on all but one cluster and tested in the cluster that was withheld. This is repeated until every cluster has been used as the validation set, and the estimated performances are averaged.
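A minimal R sketch (simulated multicenter data, hypothetical variable names) of internal–external validation by leaving one center out at a time:

```r
# Sketch: leave-one-center-out (internal-external) validation of a
# logistic regression model, evaluating discrimination (AUC) per withheld center.
library(pROC)

set.seed(7)
n <- 1000
d <- data.frame(center = sample(paste0("C", 1:5), n, replace = TRUE),
                x1 = rnorm(n), x2 = rnorm(n))
d$y <- rbinom(n, 1, plogis(0.7 * d$x1 - 0.4 * d$x2))

aucs <- sapply(unique(d$center), function(ctr) {
  train <- d[d$center != ctr, ]
  test  <- d[d$center == ctr, ]
  fit   <- glm(y ~ x1 + x2, data = train, family = binomial)
  pred  <- predict(fit, newdata = test, type = "response")
  as.numeric(auc(test$y, pred))          # discrimination in the withheld center
})
mean(aucs)                               # averaged leave-one-center-out performance
```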

The most robust way to estimate performance is in a data set from a completely different setting, called external validation [64, 80, 86]. This is especially complex with high-dimensional models, as substantial data harmonization and technical effort are required to validate such models. Within the OHDSI consortium, a predictive algorithm with hundreds of predictors could be validated after harmonization of data into a common format [67]. However, this study remains one of the few such examples.

Bias

As with all clinical studies, studies with big data can be biased. An important bias in these studies is selection bias, for example arising from inappropriate handling of missing data [87]. The majority of ML studies fail to report how missing data were handled [79, 88] or use methods (e.g., complete case analysis) that are not recommended [62, 89, 90]. Another source of selection bias is the inappropriate exclusion of groups of patients, also a common issue in prediction studies that use ML techniques [79]. A completely different type of bias, misclassification bias, also occurs regularly in studies with big data. Examples include multilevel data with different data definitions per cluster [91, 92] and insurance claims data with only limited granularity in defining exposures and outcomes [93]. To reduce the effect of these biases, epidemiologists and statisticians have developed frameworks that are readily available in most statistical software [89, 94,95,96]. Probably because ML was developed in a more deterministic environment, implementations of these epidemiological frameworks for ML techniques are still lacking. For clinical researchers, it might therefore still be reasonable to use more traditional methods because, at present, there is more experience in addressing biases (e.g., those resulting from missing data) with these techniques.
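As an illustration of a recommended alternative to complete case analysis, a minimal R sketch (simulated data) of multiple imputation with the mice package:

```r
# Sketch: multiple imputation of missing predictor values, followed by
# pooling of a regression model over the imputed data sets.
library(mice)

set.seed(8)
n <- 300
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- rbinom(n, 1, plogis(0.6 * d$x1 - 0.3 * d$x2))
d$x2[sample(n, 60)] <- NA                      # introduce 20% missingness in x2

imp    <- mice(d, m = 5, printFlag = FALSE)    # 5 imputed data sets
fits   <- with(imp, glm(y ~ x1 + x2, family = binomial))
pooled <- pool(fits)                           # combine estimates with Rubin's rules
summary(pooled)
```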

Because bias can only be identified when a study is adequately reported, we recommend the use of reporting guidelines for prediction models (TRIPOD [97]) and the upcoming guidelines for ML techniques (TRIPOD-AI [98]). There are currently no guidelines available for clustering studies.

Discussion

In this article, we have discussed opportunities and pitfalls of the use of ML techniques in controversial areas of (neuro)critical care research. Clustering studies are increasingly popular in critical care research. They can be used to explore new ways of describing or characterizing patient groups and to suggest how patients might be treated better. A major challenge for the responsible use of these techniques is ensuring the generalizability of their findings. For predictive questions, there is much discussion as to which algorithm should be used to most accurately predict outcome. The usefulness of ML techniques compared with statistical techniques depends on the volume of the data, the dimensionality of the preferred model, and the extent to which missing data or potential bias is present. There are areas in which modern flexible techniques may be preferred, but efforts should be made to provide more comprehensive frameworks for using them.

More generally, we advocate testing research hypotheses just as clinical hypotheses are tested. The recent rise of large and complex data sets, together with modern “data-mining” techniques, has led to more data-driven hypothesis testing. Although this approach enables researchers to serendipitously encounter potentially new truths, it tends to overestimate the value of new data relative to current knowledge. It is important to be aware of the prior probability of a hypothesis being true and to regard new evidence in that context [15]. Similarly, it is advised that only in patients with TBI with some risk of an intracranial lesion should the hypothesis of having such a lesion be tested with a computed tomography scan [99, 100]. An example of overestimating the value of new data is a recent analysis in which TBI clusters were formed on the basis of two large data sets [48, 49]. Even though previous studies concluded that the GCS can “stand the test of time” [101] (i.e., remains a robust predictor of outcome), the authors conclude that their “data-derived patient phenotypes will enhance TBI patient stratification…beyond the GCS-based gold standard” [49]. They thereby provide an alternative to the GCS and disregard the long-standing use of this important characteristic. Even when a data set is large, the collective “data set” of medical knowledge built up over centuries of meticulous research is much larger. Big data should not be regarded as an endless source of full information about patients but rather as an opportunity to generate new hypotheses and update our knowledge.

We have not yet touched on the last type of research question, causal questions. Causality is particularly hard to infer because it requires the application of a causal model of the problem to the data [16, 102]. A commonly used method is to include confounding factors in a regression model so that the estimated effect of the treatment on the outcome is adjusted for those confounding factors. This is more complex with ML techniques. Part of the enthusiasm for ML techniques stems from the idea that they require fewer assumptions. In fact, most ML techniques do not allow researchers to specify assumed relationships between variables. This is, however, a drawback when exactly that control over the model is required to infer causality. Moreover, the individual effects estimated by ML techniques are relatively hard to interpret. Although more explainable algorithms exist, their effects are less intuitive to interpret than regression coefficients [103]. There are some extensions of ML techniques that allow the researcher to specify assumed relationships between variables, for example graph-based neural networks [104] or Bayesian networks [102, 105]. This type of control is required to appropriately address causal questions.
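A minimal R sketch (simulated data, hypothetical variable names) of the regression-based confounder adjustment described above:

```r
# Sketch: adjusting for confounders in a regression model so that the
# treatment effect estimate is conditional on those confounders.
set.seed(9)
n <- 500
age      <- rnorm(n, 60, 12)
severity <- rnorm(n)
treated  <- rbinom(n, 1, plogis(0.03 * (age - 60) + 0.5 * severity))   # treatment depends on confounders
died     <- rbinom(n, 1, plogis(-1 + 0.04 * (age - 60) + 0.8 * severity - 0.5 * treated))

fit_crude    <- glm(died ~ treated, family = binomial)                  # confounded estimate
fit_adjusted <- glm(died ~ treated + age + severity, family = binomial) # adjusted estimate
exp(coef(fit_adjusted)["treated"])   # adjusted odds ratio for treatment
```

Note that this only yields a causal interpretation under the assumed causal model, that is, if all relevant confounders are measured and correctly modeled.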

Some ML techniques, such as neural networks and random forests, are relatively hard to explain compared with regression models or decision trees, for example. It can be argued that algorithms should be explainable to be used in clinical practice and that clinicians and patients should be able to interpret what happens under the hood of an algorithm to ensure safety and establish control [106, 107]. However, the extent to which we can judge the reliability of an algorithm on the basis of how the algorithm arrives at its prediction is limited [108]. If the sole purpose is prediction, the relatively limited interpretability of some ML techniques may not be a problem. More relevant criteria to judge an algorithm in clinical practice might be (1) whether the algorithm shows good performance (discrimination and calibration) when validated in a relevant population [77, 80] and (2) whether relevant patient outcomes improve when the algorithm is used [63].

We suggest that there is a need for improved interaction between the engineering mindset of experts in ML techniques, the focus on limiting bias of epidemiologists, and the probabilistic mindset of statisticians. The difference in mindset becomes apparent when reading each profession’s literature. For example, what is known in the statistical literature as fitting or estimating is called learning or training in the ML literature [109]. Similarly, external validity is called transportability, and covariates are called features [110]. By developing a common language between these groups of researchers, we avoid complexities when aggregating results in systematic reviews and increase the cross-fertilization of ideas [109]. By converging these worlds, we will probably be able to extract more information from data while avoiding harm through neglect of potential biases.

Box 1. Takeaways for the clinical neurocritical care researcher

• Include researchers from various backgrounds (clinical, statistical, epidemiological, data science) in new research projects and be more critical toward studies that include researchers from only one perspective

• When reporting a prediction study, use the TRIPOD reporting guideline [97] or the upcoming TRIPOD-AI guideline for ML studies [98] so that readers can adequately assess reliability

• Use only predictive algorithms in clinical practice that have been rigorously validated and that have been shown to add clinical benefit to patients when used

• Appreciate the exploratory nature of clustering studies: use their results only as tentative updates on current knowledge about different patient groups rather than “new truths” (and refrain from using them in a prognostic framework)

Conclusion

There are important pitfalls and opportunities to consider when performing clustering or predictive studies with ML techniques. More generally, we advocate care not to overvalue new data compared with clinical relevance and collective knowledge. There is also a need for improved interaction between the engineering mindset of experts in ML techniques, the focus on limiting bias of epidemiologists, and the probabilistic mindset of statisticians: we need to extract as much information from data as possible while avoiding harm when invalid algorithms are used to inform medical decision-making.