J Korean Med Sci. 2021 Jul 19;36(28):e187. English.
Published online Jun 21, 2021.
© 2021 The Korean Academy of Medical Sciences.
Original Article

Prediction of Neurological Outcomes in Out-of-hospital Cardiac Arrest Survivors Immediately after Return of Spontaneous Circulation: Ensemble Technique with Four Machine Learning Models

Ji Han Heo,1,2 Taegyun Kim,1,3 Jonghwan Shin,3,4 Gil Joon Suh,1,3 Joonghee Kim,5 Yoon Sun Jung,6 Seung Min Park,5 Sungwan Kim,7 and For SNU CARE investigators
    • 1Department of Emergency Medicine, Seoul National University Hospital, Seoul, Korea.
    • 2Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea.
    • 3Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Korea.
    • 4Department of Emergency Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Korea.
    • 5Department of Emergency Medicine, Seoul National University Bundang Hospital, Seongnam, Korea.
    • 6Division of Critical Care Medicine, Seoul National University Hospital, Seoul, Korea.
    • 7Department of Biomedical Engineering, College of Medicine and Institute of Medical & Biological Engineering, Medical Research Center, Seoul National University, Seoul, Korea.
Received March 05, 2021; Accepted June 14, 2021.

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background

We performed this study to establish a prediction model for 1-year neurological outcomes in out-of-hospital cardiac arrest (OHCA) patients who achieved return of spontaneous circulation (ROSC) immediately after ROSC using machine learning methods.

Methods

We performed a retrospective analysis of an OHCA survivor registry. Patients aged ≥ 18 years were included. Study participants who had registered between March 31, 2013 and December 31, 2018 were divided into a development dataset (80% of total) and an internal validation dataset (20% of total), and those who had registered between January 1, 2019 and December 31, 2019 were assigned to an external validation dataset. Four machine learning methods, including random forest, support vector machine, elastic net, and extreme gradient boosting, were implemented to establish prediction models with the development dataset, and an ensemble technique was used to build the final prediction model. The prediction performance of the model in the internal validation and the external validation datasets was described with accuracy, area under the receiver operating characteristic curve, area under the precision-recall curve, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Furthermore, we established multivariable logistic regression models with the development dataset and compared their prediction performance with that of the ensemble models. The primary outcome was an unfavorable 1-year neurological outcome.

Results

A total of 1,207 patients were included in the study. Among them, 631, 139, and 153 were assigned to the development, internal validation, and external validation datasets, respectively. Prediction performance metrics for the ensemble prediction model in the internal validation dataset were as follows: accuracy, 0.9620 (95% confidence interval [CI], 0.9352–0.9889); area under the receiver operating characteristic curve, 0.9800 (95% CI, 0.9612–0.9988); area under the precision-recall curve, 0.9950 (95% CI, 0.9860–1.0000); sensitivity, 0.9594 (95% CI, 0.9245–0.9943); specificity, 0.9714 (95% CI, 0.9162–1.0000); PPV, 0.9916 (95% CI, 0.9752–1.0000); NPV, 0.8718 (95% CI, 0.7669–0.9767). Prediction performance metrics for the model in the external validation dataset were as follows: accuracy, 0.8509 (95% CI, 0.7825–0.9192); area under the receiver operating characteristic curve, 0.9301 (95% CI, 0.8845–0.9756); area under the precision-recall curve, 0.9476 (95% CI, 0.9087–0.9867); sensitivity, 0.9595 (95% CI, 0.9145–1.0000); specificity, 0.6500 (95% CI, 0.5022–0.7978); PPV, 0.8353 (95% CI, 0.7564–0.9142); NPV, 0.8966 (95% CI, 0.7857–1.0000). All the prediction metrics were higher in the ensemble models, except the NPVs in both the internal and the external validation datasets.

Conclusion

We established an ensemble prediction model for the prediction of unfavorable 1-year neurological outcomes in OHCA survivors using four machine learning methods. The prediction performance of the ensemble model was higher than that of the multivariable logistic regression model, although its performance was slightly decreased in the external validation dataset.

Graphical Abstract

Keywords
Heart Arrest; Cardiopulmonary Resuscitation; Machine Learning

INTRODUCTION

Out-of-hospital cardiac arrest (OHCA) is one of the major health issues worldwide.1 Less than one-third of OHCA victims achieve return of spontaneous circulation (ROSC), and less than ten percent remain neurologically favorable after OHCA.2, 3

Current guidelines recommend evaluating neurological outcomes after cardiac arrest at least 72 hours after ROSC to minimize the rate of false-positive results.4, 5, 6 Despite the recommended guidelines, caregivers sometimes request early outcome predictions,7 which may allow the caregivers and the medical personnel enough time to share information and to discuss the care plan for cardiac arrest survivors.

Machine learning has been widely implemented in recent studies on cardiac arrest. Several studies have shown that prediction models developed with machine learning methods can predict neurological outcomes in cardiac arrest victims.8, 9, 10 These studies mainly used prehospital features to establish outcome prediction models, apart from a few in-hospital features such as the initial electrocardiography rhythm at the emergency department (ED), percutaneous coronary intervention, targeted temperature management, and extracorporeal membrane oxygenation.

In recent studies, initial laboratory results at hospital arrival after OHCA, such as arterial pH,11, 12 serum potassium level,13, 14 and serum creatinine level,15, 16 have been reported to be associated with neurological outcomes after cardiac arrest. Machine learning is a crucial component in the establishment of prediction models that include laboratory test results as features, since a wide variety of laboratory tests are performed and conventional statistical techniques have difficulty handling them. However, despite their important association with neurological outcomes in cardiac arrest survivors, laboratory results were not included in the prediction models of previous machine learning studies.

Few studies have evaluated the prediction of neurological outcomes with machine learning methods in OHCA survivors immediately after ROSC. We performed this study to investigate the long-term neurological outcome prediction performance of several models using machine learning methods in OHCA survivors immediately after ROSC.

METHODS

Study setting and design

We performed a retrospective analysis of prospectively collected data archived in a multicenter registry of OHCA survivors. The registry consists of the data collected from adult OHCA survivors who had visited the EDs of three university hospitals in the Republic of Korea. We analyzed data from patients who had visited the EDs from March 31, 2013, to December 31, 2019. We included all adult (age ≥ 18 years) OHCA patients registered in the registry during the study period. Patients were excluded if their cerebral performance category (CPC) scores before OHCA were between three and five or their 1-year neurological outcomes were missing.

Outcome measures

The primary outcome was neurological status at one year according to the CPC scale. A favorable neurological outcome was defined as a CPC score of one or two, and an unfavorable neurological outcome was defined as a CPC score higher than two (i.e., three to five).

Statistical analysis for demographics

Continuous variables are presented as the mean ± standard deviation and were compared using Student's t-test or the Mann-Whitney U test, as appropriate. Categorical variables are presented as numbers (percentages) and were compared using the χ2 test or Fisher's exact test, as appropriate. Two-sided P values < 0.05 were considered statistically significant.
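The group comparisons above can be sketched as follows with SciPy. The data are synthetic and the variable names hypothetical (the study itself performed these tests in R); the sketch only illustrates which test applies to which variable type.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical continuous variable (e.g., age) in two outcome groups.
age_favorable = rng.normal(58, 14, 120)
age_unfavorable = rng.normal(65, 14, 380)

# Student's t-test for approximately normal data,
# or the Mann-Whitney U test otherwise.
t_stat, t_p = stats.ttest_ind(age_favorable, age_unfavorable)
u_stat, u_p = stats.mannwhitneyu(age_favorable, age_unfavorable)

# Hypothetical categorical variable as a 2x2 contingency table:
# chi-square test, or Fisher's exact test when expected counts are small.
table = np.array([[80, 40], [150, 230]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
odds_ratio, fisher_p = stats.fisher_exact(table)
```

Both the parametric and nonparametric tests are shown because the choice depends on the distribution of each variable, which the registry data would determine in practice.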

Dataset

After selection of study participants, we first split the whole dataset into two separate datasets: data acquired from March 31, 2013 to December 31, 2018 (dataset 1) and data from January 1, 2019 to December 31, 2019 (dataset 2). Dataset 1 was split again into a development dataset and an internal validation dataset at an 80:20 ratio, and dataset 2 was reserved as the external validation dataset. Missing values were imputed with means for continuous data and with modes for categorical data. As missing data were not considered missing completely at random, we created new binary variables indicating the missingness of specific variables.
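A minimal sketch of the splitting and imputation steps above, on a toy DataFrame. All column names are hypothetical rather than actual registry fields, and reusing development-set imputation statistics for the validation sets is an assumption not specified in the text.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
dates = pd.date_range("2013-03-31", "2019-12-31").to_numpy()
df = pd.DataFrame({
    "visit_date": rng.choice(dates, size=300),
    "arterial_ph": rng.normal(7.2, 0.15, 300),      # continuous variable
    "witnessed": rng.choice([0.0, 1.0], size=300),  # categorical variable
})
# Introduce some missing values.
df.loc[rng.choice(300, 30, replace=False), "arterial_ph"] = np.nan
df.loc[rng.choice(300, 30, replace=False), "witnessed"] = np.nan

# Temporal split: dataset 1 (2013-2018) and dataset 2 (2019, external validation).
dataset1 = df[df["visit_date"] < pd.Timestamp("2019-01-01")]
external = df[df["visit_date"] >= pd.Timestamp("2019-01-01")]

# Dataset 1 is split again 80:20 into development and internal validation sets.
develop, internal = train_test_split(dataset1, test_size=0.2, random_state=42)

def impute(frame, fill):
    """Mean/mode imputation plus binary missingness indicators,
    added because data were not assumed missing completely at random."""
    out = frame.copy()
    out["arterial_ph_missing"] = out["arterial_ph"].isna().astype(int)
    out["witnessed_missing"] = out["witnessed"].isna().astype(int)
    out["arterial_ph"] = out["arterial_ph"].fillna(fill["ph_mean"])
    out["witnessed"] = out["witnessed"].fillna(fill["witnessed_mode"])
    return out

# Imputation statistics taken from the development set and reused elsewhere
# (an assumption; the paper does not specify which set's statistics were used).
fill = {"ph_mean": develop["arterial_ph"].mean(),
        "witnessed_mode": develop["witnessed"].mode()[0]}
develop, internal, external = (impute(develop, fill), impute(internal, fill),
                               impute(external, fill))
```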

Machine learning models

We implemented four machine learning methods for the prediction of unfavorable neurological outcomes in the development dataset: random forest (RF), support vector machine, elastic net, and extreme gradient boosting. To obtain the best hyperparameters, a grid search was performed for each classifier. After optimization of the hyperparameters, we calculated the following parameters for each model on the development and the internal validation datasets: accuracy, area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), and F1 score. We also calculated 95% confidence intervals (CIs) for each value where possible. Five-fold cross-validation was implemented to calculate the average prediction performance of each model on the development dataset. After model establishment, we combined the four prediction models with a soft-voting ensemble method and tested the prediction performance of the ensemble prediction model on the internal validation dataset. The cutoff probability score of the ensemble model was selected so that the F1 score was maximized. The F1 score is a measure of the overall performance of a prediction model, defined as the harmonic mean of the sensitivity and PPV of the prediction model at a given cutoff probability score. Finally, we tested the prediction performance of the ensemble model in the external validation dataset with the same cutoff probability score and calculated the same prediction performance parameters as in the development and the internal validation datasets.
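The modeling pipeline above can be sketched with scikit-learn: four base classifiers tuned by grid search, combined by soft voting, with the decision cutoff chosen to maximize the F1 score. The data are synthetic, the hyperparameter grids are illustrative placeholders rather than those used in the study, and sklearn's `GradientBoostingClassifier` stands in for the XGBoost model the authors used.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, weights=[0.25, 0.75],
                           random_state=0)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2,
                                              random_state=0)

# Grid-search each base learner with 5-fold CV (grids shortened for brevity).
grids = {
    "rf": (RandomForestClassifier(random_state=0), {"n_estimators": [100, 200]}),
    "svm": (SVC(probability=True, random_state=0), {"C": [0.1, 1.0]}),
    "enet": (LogisticRegression(penalty="elasticnet", solver="saga",
                                l1_ratio=0.5, max_iter=5000), {"C": [0.1, 1.0]}),
    # The study used extreme gradient boosting (XGBoost); this is a stand-in.
    "gb": (GradientBoostingClassifier(random_state=0), {"n_estimators": [100]}),
}
tuned = [(name, GridSearchCV(est, grid, cv=5).fit(X_dev, y_dev).best_estimator_)
         for name, (est, grid) in grids.items()]

# Soft voting averages the predicted probabilities of the four models.
ensemble = VotingClassifier(tuned, voting="soft").fit(X_dev, y_dev)
proba = ensemble.predict_proba(X_val)[:, 1]

# Choose the cutoff probability score that maximizes the F1 score.
prec, rec, thr = precision_recall_curve(y_val, proba)
f1 = 2 * prec[:-1] * rec[:-1] / np.maximum(prec[:-1] + rec[:-1], 1e-12)
cutoff = thr[int(np.argmax(f1))]
pred = (proba >= cutoff).astype(int)
```

In the study the same cutoff selected here was then carried over unchanged to the external validation dataset.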

Variable selection

Since we aimed to establish prediction models that can be applied immediately after ROSC using machine learning techniques, we selected variables widely available at the time of ROSC. As for the laboratory variables, we used most of the initial laboratory test results for model development. However, we discarded variables 1) that are thought to have a strong correlation with other included variables (e.g., total carbon dioxide level, pH, arterial oxygen saturation), 2) that are associated with organ function but represented by other included variables (e.g., aspartate aminotransferase, alanine aminotransferase, alkaline phosphatase, activated partial thromboplastin time, creatine kinase, creatine kinase MB isoenzyme, pro-B-type natriuretic peptide), 3) that are non-classic anions or cations (e.g., ionized calcium, phosphorus), 4) that are not thought to be widely used in general EDs (e.g., red cell distribution width, neuron-specific enolase, S100 protein, central venous oxygen saturation, cortisol, adrenocorticotropic hormone, antidiuretic hormone), and 5) that are not available immediately after ROSC (data from laboratory tests performed at 24 hours and 72 hours after ROSC). We finally used 46 variables for the analysis, including baseline variables, prehospital variables, ED resuscitation variables, and laboratory variables. Details of the variables used are described in Supplementary Table 1.

Subgroup analysis

We selected patients whose cardiac arrest was presumed to be of cardiac origin as a cardiac subgroup. We performed the same analysis as we performed in the main analysis with the cardiac subgroup dataset, including data splitting, implementation of the four machine learning methods and ensemble technique, selection of cutoff probability scores and calculation of prediction parameters.

Logistic regression analysis

To explain the variable importance indirectly and to compare the performance metrics of the ensemble models with those of classic prediction models, we established multivariable logistic regression models for unfavorable neurological outcomes with the same variables used in the machine learning analysis. We set a cutoff probability score of 0.5 for the logistic regression analyses. The same performance metrics as in the machine learning analysis (accuracy, AUROC, AUPRC, sensitivity, specificity, PPV, NPV, and F1 score) were calculated.
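A sketch of the metric computation at the fixed 0.5 cutoff used for this comparison. The data are synthetic (the study fitted its regression models in R), and `average_precision_score` is used as an AUPRC analogue.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, average_precision_score,
                             confusion_matrix, f1_score, roc_auc_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=15, weights=[0.25, 0.75],
                           random_state=1)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2,
                                              random_state=1)

model = LogisticRegression(max_iter=5000).fit(X_dev, y_dev)
proba = model.predict_proba(X_val)[:, 1]
pred = (proba >= 0.5).astype(int)  # fixed cutoff probability score of 0.5

tn, fp, fn, tp = confusion_matrix(y_val, pred).ravel()
metrics = {
    "accuracy": accuracy_score(y_val, pred),
    "auroc": roc_auc_score(y_val, proba),
    "auprc": average_precision_score(y_val, proba),  # AUPRC analogue
    "sensitivity": tp / max(tp + fn, 1),  # max(...) guards empty denominators
    "specificity": tn / max(tn + fp, 1),
    "ppv": tp / max(tp + fp, 1),
    "npv": tn / max(tn + fn, 1),
    "f1": f1_score(y_val, pred),
}
```

The same dictionary of metrics would be computed for the ensemble model at its own cutoff, making the two approaches directly comparable.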

Tools for analysis

All statistical analyses for demographics, data splitting and logistic regression analysis were performed with R version 4.0.2 (R Foundation, Vienna, Austria). All codes for machine learning analyses and calculation of performance metrics were written in Python 3.7 (Python Software Foundation, Wilmington, DE, USA).

Ethics statement

Study protocols for collecting data for the registry and for the main analyses were approved by the Institutional Review Boards (IRBs) of participating hospitals (Seoul National University Hospital, IRB No. 1408-012-599; Seoul Metropolitan Government-Seoul National University Boramae Hospital, IRB No. 16-2013-157; Seoul National University Bundang Hospital, IRB No. B-1401/234-402) and the IRB of Seoul National University Hospital (IRB No. 2012-016-117), respectively. Informed consent was waived by the IRB of Seoul National University Hospital owing to the retrospective nature of the study.

RESULTS

Patient selection and baseline demographics

During the study period, 1,214 patients were registered in the registry, of which 1,061 comprised dataset 1 and 153 comprised dataset 2 (Fig. 1). A total of 1,054 of the 1,061 patients in dataset 1 met the inclusion criteria. After excluding patients meeting the prespecified exclusion criteria, 789 patients were included in the final analysis. Six hundred thirty-one patients were assigned to the development dataset, and the rest were assigned to the internal validation dataset. Four hundred ninety-two (78.0%) patients in the development dataset remained with unfavorable neurological outcomes at the 1-year follow-up. Thirty-nine patients from dataset 2 were excluded, and the remaining 114 patients were included in the external validation dataset. The baseline characteristics of the development dataset are described in Table 1.

Fig. 1
Study flow chart.
CPC = cerebral performance category.

Table 1
Baseline characteristics of the original development dataset

Prediction performance

The average prediction performance of each model in the original development dataset, calculated by five-fold cross-validation with a cutoff probability score of 0.5, is described in Supplementary Table 2. The cutoff probability score of the ensemble model was set at 0.605, and the prediction performance of the ensemble model in the internal validation dataset is described in Table 2 and Fig. 2A and B. When the ensemble model was implemented in the external validation dataset, overall prediction performance metrics such as accuracy, AUROC, AUPRC, and F1 score all decreased to some degree compared with those in the internal validation dataset (Table 2, Fig. 2C and D). The average prediction performance of each model in the cardiac subgroup development dataset, calculated in the same manner, is described in Supplementary Table 3. In the cardiac subgroup analysis, the cutoff probability score of the ensemble model was set at 0.525. Prediction performance was lower in the cardiac subgroup internal validation dataset than in the original internal validation dataset (Table 2, Fig. 2E and F), and prediction performance in the cardiac subgroup external validation dataset was also decreased (Table 2, Fig. 2G and H).

Table 2
Accuracy, AUROC, AUPRC, sensitivity, specificity, PPVs, NPVs, and F1 scores for the ensemble model in the internal validation dataset, the external validation dataset and the cardiac subgroups of the internal validation and the external validation datasets

Fig. 2
Receiver operating characteristic curves and precision-recall curves for the ensemble prediction model in various datasets. (A) and (B) in the original internal validation dataset, (C) and (D) in the external validation dataset, (E) and (F) in the cardiac subgroup internal validation dataset, (G) and (H) in the cardiac subgroup external validation dataset.

Multivariable logistic regression models derived from the original development dataset and the cardiac subgroup development dataset are described in Supplementary Tables 4 and 5, respectively. Most of the performance metrics were lower in the logistic regression models than in the ensemble models (Table 3). Only the following metrics were better in the logistic regression models: NPV in the original internal validation dataset; NPV in the original external validation dataset; and accuracy, specificity, PPV, and F1 score in the cardiac subgroup external validation dataset. Receiver operating characteristic curves and precision-recall curves for the multivariable logistic regression models in each dataset are presented in Supplementary Fig. 1.

Table 3
Accuracy, AUROC, AUPRC, sensitivity, specificity, PPVs, NPVs, and F1 scores for the multivariable logistic regression model in the internal validation dataset, the external validation dataset and the cardiac subgroups of the internal validation and the external validation datasets

DISCUSSION

In the present study, we established and validated a prediction model using an ensemble technique with four machine learning methods for the prediction of unfavorable 1-year neurological outcomes in OHCA survivors. The overall prediction performance of the ensemble model in the external validation set was favorable, with an AUROC of 0.9301 (95% CI, 0.8845–0.9756) and an AUPRC of 0.9476 (95% CI, 0.9087–0.9867). The prediction performance of the ensemble model in the cardiac subgroup external validation set was also good, though not comparable with that in the original external validation set, with an AUROC of 0.8917 (95% CI, 0.7906–0.9928) and an AUPRC of 0.8968 (95% CI, 0.7980–0.9956). In general, the performance metrics of the ensemble models were higher than those of the multivariable logistic regression models.

The prediction performance of certain machine learning methods decreases when class imbalance is present in the development dataset.17 In the development dataset of our cardiac subgroup, 114 (48.3%) of 236 patients had favorable 1-year neurological outcomes, which means that the classes in the cardiac subgroup development dataset were more balanced than those in the original development dataset. In the present study, however, prediction performance in terms of the AUPRC was generally lower in the cardiac subgroup than in the original group. Despite the relatively balanced classes in the cardiac subgroup, its smaller sample size might have carried a higher risk of model overfitting, which might have resulted in the slightly decreased prediction performance.

Early neurologic prognostication after cardiac arrest is important to avoid obviously futile treatment or inappropriate withdrawal of post-cardiac arrest care. Current international guidelines recommend that neurologic prognostication be performed using multiple modalities, including clinical examination findings, serum biomarkers, and electrophysiological tests.4, 5, 6 It is also recommended that the timing of prognostication be delayed for at least 72 hours after ROSC.4, 5, 6

Several studies have evaluated the prediction performance of machine learning-based models for neurological outcomes after OHCA. Kwon et al.9 used national OHCA registry data to develop a deep learning-based prediction model, the prediction performance of which was better than that of conventional machine learning-based models. Their model did not include hospital variables except the ED visit-to-ROSC time, and the study endpoints were short-term neurological outcome and survival to discharge. Seki et al.8 used an RF model to predict 1-year survival in OHCA patients with presumed cardiac etiology, without predicting long-term functional outcomes. Park et al.10 also developed machine learning-based prediction models for neurological outcomes at discharge in OHCA patients; however, long-term neurological outcomes were beyond the scope of that study.

Aside from the overall performance of the prediction models, one of the most important issues in predicting the neurological outcomes of cardiac arrest survivors is minimizing false positive predictions of unfavorable neurological outcomes. A false positive prediction can lead to the withdrawal of intensive post-cardiac arrest care from patients who otherwise may fully or nearly fully recover and return to daily life. To exclude the possibility of false positive predictions, recent guidelines recommend the use of prognostic measures with false positive rates of 1% or lower, i.e., with specificity of 99% or higher.4, 5, 6, 18 We set the cutoff probability score of each prediction model so that the F1 score was maximized. Although the specificity of the ensemble model in the original internal validation dataset was 0.9714 (95% CI, 0.9162–1.0000) at a cutoff probability score of 0.605, which is below 99% but acceptable, the specificity was markedly reduced (0.6500 [95% CI, 0.5022–0.7978]) when the model was implemented in the external validation dataset. Although we trained the prediction models comprising the ensemble model on a separate development dataset, the prediction performance differed between the internal validation and external validation datasets. Both were hold-out datasets that had never been involved in model training. However, the internal validation dataset was collected during the same period as the development dataset, whereas the external validation dataset was collected thereafter. The internal validation dataset was therefore more likely to resemble the development dataset than the external validation dataset was, and this difference in similarity might have resulted in the different prediction performances.

The previously reported specificity of machine learning methods for the prediction of unfavorable outcomes in cardiac arrest victims ranged from 66.7% to 95.3%,9, 10 and the ensemble prediction models in the present study outperformed the previous models in terms of specificity in the internal validation dataset. The major difference between our study and previous studies is that we included laboratory variables to train and establish the prediction models. Initial laboratory data obtained immediately after ROSC have a significant association with neurological outcomes in cardiac arrest survivors.11, 12, 13, 14, 15, 16 Establishing prediction models by adding widely available laboratory data might have contributed to the improvement in model performance, despite a smaller sample size than those of previous studies.

One of the strengths of our study is that we developed a neurological outcome prediction model that can be implemented immediately after ROSC in OHCA survivors. Earlier prognostication than currently recommended by the guidelines4, 5, 6 may aid medical personnel and the guardians of OHCA victims in shared decision-making on the implementation of intensive care or the withdrawal of life-sustaining treatment. We used laboratory variables initially obtained at the time of ED arrival. Previous studies using machine learning models for neurological outcomes in OHCA patients did not include laboratory values in their prediction models,8, 9, 10 which might have improved prediction performance had they been included. We defined the 1-year neurological outcome as the primary outcome, which was not the focus of previous studies.8, 9, 10 As the hospital cost of caring for cardiac arrest survivors is considerable,19, 20, 21 our study may have a role in reducing the socioeconomic burden associated with potentially futile treatment.

There are a couple of factors to consider before implementing our prediction model in the clinical field to aid clinical decisions. First, prognostic measures that are considered reasonable for neuroprognostication in the guidelines showed specificity of 99% or higher.4, 5, 6, 18 As our prognostic model could not reach such high specificity for unfavorable neurological outcomes, performance improvement is essential before clinical implementation, especially in terms of specificity. We expect that organizing a dataset from a larger number of medical centers may improve the specificity of the prediction model without compromising sensitivity. Second, prognostic measures that are available immediately after ROSC, such as the gray-white matter ratio,22 may improve the prognostic performance of the model when added. Furthermore, as the guidelines recommend a multimodal approach to neuroprognostication, our prediction model may support clinical decisions by providing an outcome probability as one of several prognostic measures, rather than by simply discriminating the prognosis into favorable or unfavorable outcomes.

Our study has several limitations. First, the small sample size compared with previous studies reduced the statistical power of the results.8, 9, 10 Nevertheless, considering that only approximately one-fourth of OHCA patients survive to ED arrival,2, 3 the number of participants in our study may be larger than might be expected for this population. Second, although we performed an external validation of the ensemble prediction model, the external validation dataset was small. Moreover, the prediction performance of the model in the external validation set showed a potential risk of overfitting, which may impede the generalizability of the study results. However, we performed the analyses with a multicenter registry, and the multicenter nature of the study may attenuate this weakness. Finally, we did not include several prognostic tools that are suggested in the current guidelines, such as neuron-specific enolase or quantitative pupillometry. These tests are not always routinely performed in small centers; therefore, the exclusion of those variables from the models is reasonable in view of practical use.

In conclusion, we established an ensemble prediction model for the prediction of unfavorable 1-year neurological outcomes in OHCA survivors using four machine learning methods. The prediction performance of the ensemble model was higher than that of the multivariable logistic regression model, although its performance was slightly decreased in the external validation dataset.

SUPPLEMENTARY MATERIALS

Supplementary Table 1

Variables included in the machine learning analysis

Supplementary Table 2

Accuracy, AUROC, AUPRC, sensitivity, specificity, PPV, and NPV for each classifier in the original development dataset

Supplementary Table 3

Accuracy, AUROC, AUPRC, sensitivity, specificity, PPV, and NPV for each classifier in the cardiac subgroup development dataset

Supplementary Table 4

Multivariable logistic regression analysis for unfavorable neurological outcomes in the original development dataset

Supplementary Table 5

Multivariable logistic regression analysis for unfavorable neurological outcomes in the cardiac subgroup development dataset

Supplementary Fig. 1

(A) Receiver operating characteristic curve and (B) precision-recall curve for the multivariable logistic regression model in the original internal validation dataset, (C) receiver operating characteristic curve and (D) precision-recall curve for the multivariable logistic regression model in the external validation dataset, (E) receiver operating characteristic curve and (F) precision-recall curve for the multivariable logistic regression model in the cardiac subgroup internal validation dataset and (G) receiver operating characteristic curve and (H) precision-recall curve for the multivariable logistic regression model in the cardiac subgroup external validation dataset.

Notes

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Heo JH, Kim T, Shin J, Suh GJ.

  • Formal analysis: Heo JH, Kim T, Kim S.

  • Funding acquisition: not applicable.

  • Investigation: Heo JH, Kim T, Shin J, Suh GJ, Kim J, Jung YS, Park SM, Kim S.

  • Software: Heo JH, Kim T, Kim S.

  • Validation: Heo JH, Kim T, Kim S.

  • Visualization: Heo JH, Kim T, Kim S.

  • Writing - original draft: Heo JH, Kim T, Shin J, Suh GJ, Kim J.

  • Writing - review & editing: Heo JH, Kim T, Shin J, Suh GJ, Kim J, Jung YS, Park SM, Kim S.

ACKNOWLEDGMENTS

This study was conducted based on the OHCA registry constructed by SNU CARE investigators.

References

    1. Holmberg MJ, Ross CE, Fitzmaurice GM, Chan PS, Duval-Arnould J, Grossestreuer AV, et al. Annual incidence of adult and pediatric in-hospital cardiac arrest in the United States. Circ Cardiovasc Qual Outcomes 2019;12(7):e005580.
    2. Hawkes C, Booth S, Ji C, Brace-McDonnell SJ, Whittington A, Mapstone J, et al. Epidemiology and outcomes from out-of-hospital cardiac arrests in England. Resuscitation 2017;110:133–140.
    3. Virani SS, Alonso A, Benjamin EJ, Bittencourt MS, Callaway CW, Carson AP, et al. Heart disease and stroke statistics-2020 update: a report from the American Heart Association. Circulation 2020;141(9):e139–e596.
    4. Nolan JP, Soar J, Cariou A, Cronberg T, Moulaert VR, Deakin CD, et al. European resuscitation council and European society of intensive care medicine guidelines for post-resuscitation care 2015: section 5 of the European resuscitation council guidelines for resuscitation 2015. Resuscitation 2015;95:202–222.
    5. Panchal AR, Bartos JA, Cabanas JG, Donnino MW, Drennan IR, Hirsch KG, et al. Part 3: adult basic and advanced life support: 2020 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation 2020;142(16 Suppl 2):S366–S468.
    6. Kim YM, Park KN, Choi SP, Lee BK, Park K, Kim J, et al. Part 4. Post-cardiac arrest care: 2015 Korean guidelines for cardiopulmonary resuscitation. Clin Exp Emerg Med 2016;3 Suppl:S27–S38.
    7. Dale CM, Sinuff T, Morrison LJ, Golan E, Scales DC. Understanding early decisions to withdraw life-sustaining therapy in cardiac arrest survivors: a qualitative investigation. Ann Am Thorac Soc 2016;13(7):1115–1122.
    8. Seki T, Tamura T, Suzuki M; SOS-KANTO 2012 Study Group. Outcome prediction of out-of-hospital cardiac arrest with presumed cardiac aetiology using an advanced machine learning technique. Resuscitation 2019;141:128–135.
    9. Kwon JM, Jeon KH, Kim HM, Kim MJ, Lim S, Kim KH, et al. Deep-learning-based out-of-hospital cardiac arrest prognostic system to predict clinical outcomes. Resuscitation 2019;139:84–91.
    10. Park JH, Shin SD, Song KJ, Hong KJ, Ro YS, Choi JW, et al. Prediction of good neurological recovery after out-of-hospital cardiac arrest: a machine learning analysis. Resuscitation 2019;142:127–135.
    11. Daou O, Winiszewski H, Besch G, Pili-Floury S, Belon F, Guillon B, et al. Initial pH and shockable rhythm are associated with favorable neurological outcome in cardiac arrest patients resuscitated with extracorporeal cardiopulmonary resuscitation. J Thorac Dis 2020;12(3):849–857.
    12. Kiehl EL, Amuthan R, Adams MP, Love TE, Enfield KB, Gimple LW, et al. Initial arterial pH as a predictor of neurologic outcome after out-of-hospital cardiac arrest: a propensity-adjusted analysis. Resuscitation 2019;139:76–83.
    13. Bender PR, Debehnke DJ, Swart GL, Hall KN. Serum potassium concentration as a predictor of resuscitation outcome in hypothermic cardiac arrest. Wilderness Environ Med 1995;6(3):273–282.
    14. Choi DS, Shin SD, Ro YS, Lee KW. Relationship between serum potassium level and survival outcome in out-of-hospital cardiac arrest using CAPTURES database of Korea: Does hypokalemia have good neurological outcomes in out-of-hospital cardiac arrest? Adv Clin Exp Med 2020;29(6):727–734.
    15. Tamura T, Suzuki M, Hayashida K, Sasaki J, Yonemoto N, Sakurai A, et al. Renal function and outcome of out-of-hospital cardiac arrest - multicenter prospective study (SOS-KANTO 2012 Study). Circ J 2018;83(1):139–146.
    16. D'Arrigo S, Cacciola S, Dennis M, Jung C, Kagawa E, Antonelli M, et al. Predictors of favourable outcome after in-hospital cardiac arrest treated with extracorporeal cardiopulmonary resuscitation: a systematic review and meta-analysis. Resuscitation 2017;121:62–70.
    17. Li DC, Hu SC, Lin LS, Yeh CW. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets. PLoS One 2017;12(8):e0181853.
    18. Callaway CW, Donnino MW, Fink EL, Geocadin RG, Golan E, Kern KB, et al. Part 8: post-cardiac arrest care: 2015 American Heart Association guidelines update for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation 2015;132(18 Suppl 2):S465–S482.
    19. Graf J, Mühlhoff C, Doig GS, Reinartz S, Bode K, Dujardin R, et al. Health care costs, long-term survival, and quality of life following intensive care unit admission after cardiac arrest. Crit Care 2008;12(4):R92.
    20. Efendijev I, Folger D, Raj R, Reinikainen M, Pekkarinen PT, Litonius E, et al. Outcomes and healthcare-associated costs one year after intensive care-treated cardiac arrest. Resuscitation 2018;131:128–134.
    21. Damluji AA, Al-Damluji MS, Pomenti S, Zhang TJ, Cohen MG, Mitrani RD, et al. Health care costs after cardiac arrest in the United States. Circ Arrhythm Electrophysiol 2018;11(4):e005689.
    22. Hong JY, Lee DH, Oh JH, Lee SH, Choi YH, Kim SH, et al. Grey-white matter ratio measured using early unenhanced brain computed tomography shows no correlation with neurological outcomes in patients undergoing targeted temperature management after cardiac arrest. Resuscitation 2019;140:161–169.
