
Prediction of viral symptoms using wearable technology and artificial intelligence: A pilot study in healthcare workers

  • Pierre-François D’Haese ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    pd0033@hsc.wvu.edu

    ‡ PFD and VF are Co-first authors.

    Affiliations Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia, United States of America, West Virginia Clinical and Translational Science Institute, West Virginia University, Morgantown, West Virginia, United States of America, Health Sciences Center, West Virginia University, Morgantown, West Virginia, United States of America

  • Victor Finomore ,

    Roles Conceptualization, Formal analysis, Funding acquisition, Methodology, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    ‡ PFD and VF are Co-first authors.

    Affiliations Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia, United States of America, West Virginia Clinical and Translational Science Institute, West Virginia University, Morgantown, West Virginia, United States of America, Health Sciences Center, West Virginia University, Morgantown, West Virginia, United States of America

  • Dmitry Lesnik,

    Roles Data curation, Formal analysis, Software

    Affiliation Stratyfy, Inc, New York, New York, United States of America

  • Laura Kornhauser,

    Roles Supervision

    Affiliation Stratyfy, Inc, New York, New York, United States of America

  • Tobias Schaefer,

    Roles Data curation, Formal analysis, Methodology

    Affiliation Stratyfy, Inc, New York, New York, United States of America

  • Peter E. Konrad,

    Roles Project administration, Resources, Supervision, Validation, Writing – review & editing

    Affiliations Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia, United States of America, West Virginia Clinical and Translational Science Institute, West Virginia University, Morgantown, West Virginia, United States of America, Health Sciences Center, West Virginia University, Morgantown, West Virginia, United States of America

  • Sally Hodder,

    Roles Conceptualization, Formal analysis, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia, United States of America, West Virginia Clinical and Translational Science Institute, West Virginia University, Morgantown, West Virginia, United States of America, Health Sciences Center, West Virginia University, Morgantown, West Virginia, United States of America

  • Clay Marsh,

    Roles Investigation, Methodology, Resources, Supervision, Writing – review & editing

    Affiliations Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia, United States of America, West Virginia Clinical and Translational Science Institute, West Virginia University, Morgantown, West Virginia, United States of America, Health Sciences Center, West Virginia University, Morgantown, West Virginia, United States of America

  • Ali R. Rezai

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – review & editing

    Affiliations Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia, United States of America, West Virginia Clinical and Translational Science Institute, West Virginia University, Morgantown, West Virginia, United States of America, Health Sciences Center, West Virginia University, Morgantown, West Virginia, United States of America

Abstract

Conventional testing and diagnostic methods for infections like SARS-CoV-2 have limitations for population health management and public policy. We hypothesize that daily changes in autonomic activity, measured through off-the-shelf technologies together with app-based cognitive assessments, may be used to forecast the onset of symptoms consistent with a viral illness. We describe our strategy using an AI model that can predict, with 82% accuracy (negative predictive value 97%, specificity 83%, sensitivity 79%, precision 34%), the likelihood of developing symptoms consistent with a viral infection three days before symptom onset. The model correctly identifies individuals who will not develop viral-like illness symptoms in the next three days almost all of the time (97%). Conversely, when the model flags an individual as likely to develop viral-like illness symptoms in the next three days, it is correct 34% of the time. This model uses a conservative framework, warning potentially pre-symptomatic individuals to socially isolate while minimizing warnings to individuals with a low likelihood of developing viral-like symptoms in the next three days. To our knowledge, this is the first study using wearables and apps with machine learning to predict the occurrence of viral illness-like symptoms. The demonstrated approach to forecasting the onset of viral illness-like symptoms offers a novel, digital decision-making tool for public health safety by potentially limiting viral transmission.

Introduction

Virus transmission from asymptomatic or pre-symptomatic individuals is a key factor contributing to the SARS-CoV-2 pandemic spread. High levels of SARS-CoV-2 virus have been observed 48–72 hours before symptom onset. As high viral loads of SARS-CoV-2 may occur before the onset of symptoms, strategies to control community COVID-19 spread that rely only on symptom-based detection are often unsuccessful. The development of novel approaches to detect viral infection during this pre-symptomatic phase is critical to reducing viral transmission and spread by facilitating appropriate early quarantine before symptoms occur.

Once a person is infected, the incubation period commonly ranges from 2 to 14 days (mean of 5.2 days), and infectious transmission starts around 2.5 days and peaks at 0.7 days before the onset of symptoms [1–4]. Of note, the loss of the sense of smell and taste is more specific to COVID-19 [3]. Even when symptomatic COVID-19 occurs, the symptoms and signs of COVID-19 overlap with those of other viral illnesses such as influenza.

Today, 1 in 5 Americans uses a fitness tracking device [5]. While these technologies can inform population-level data sharing to detect disease states [6–9], to our knowledge they have not been used to forecast communicable infectious disease at the individual level. Outputs from wearable technology, including heart rate (HR), heart rate variability (HRV), respiration rate (RR), temperature, blood oxygenation, sleep, and other physiological assessments, are increasingly being explored in studies of health and disease [10–12]. Moreover, subject-reported symptoms captured on mobile apps are transforming both surveillance and contact-tracing management strategies for COVID-19 [13–15].

Machine-learning algorithms are increasingly popular and useful for combining large amounts of disparate data to provide insight into complex relationships not easily determined with routine statistical methods. Using a machine learning model informed by self-reported symptoms, we demonstrate that the combination of physiological outputs from wearable technology and brief cognitive assessments can predict symptoms and signs of a viral infection three days before their onset. This forecasting model could be used to enhance conventional infection-control strategies for COVID-19 and other viral infections.

Methods

Study design

The Rockefeller Neuroscience Institute (RNI) team initiated a study approved by the institutional review boards (IRBs) at the West Virginia University Medical Center (#2003937069), Vanderbilt University Medical Center (#200685), and Thomas Jefferson University (#2004957109A001) to combine physiological and cognitive biometrics with self-reported symptom information from individuals at risk for exposure to COVID-19 and potential contraction of a viral illness. We recruited study participants from each tertiary medical center by approaching front-line health care workers receiving regional referrals for COVID-19 patients. We asked each participant to 1) wear a smart ring device [16] with sensors that collect physiological measures such as body temperature, sleep, activity, heart rate, respiratory rate, and heart rate variability; and 2) use a custom mobile health app [17] to complete a brief symptom diary [3], report social exposure to potentially infected contacts, record measures of physical, emotional, and cognitive workload (see S1 Table and S1 File), and perform the psychomotor vigilance task (PVT) [18] twice a day to measure attention and fatigue. All data are collected, structured, and organized into the RNI Cloud data lake for analysis. The RNI Cloud is a HIPAA-compliant data platform hosted in Amazon Web Services (AWS) that meets the security and legal requirements to protect participants' data privacy and integrity in the context of multi-center clinical studies [19].

We utilized a machine learning approach that combines features through probabilistic rules and provides a prediction. The training process consists of two steps: subject-reported symptoms inform a labeling model, which in turn supervises a predictive framework (the forecast model) that uses physiological and cognitive signals to forecast suspicion of a viral illness (Fig 1). The dataset of PVT and wearable data is split, with 75% for training and 25% reserved for testing the model [20] (see S4 File). The labeling and forecasting models are each created from a set of rules combining one or more features. All rules are given a weight and combined to provide a final decision [21–23] (see S2 Table).
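To make the two-step flow concrete, the following minimal Python sketch shows how a labeling model could supervise a forecasting model using the 75%/25% split described above. It is an illustration under assumed names, not the authors' code: labeling_model and forecast_model are hypothetical objects with scikit-learn-style fit/predict interfaces.

from sklearn.model_selection import train_test_split

def run_pipeline(symptom_diary, features, labeling_model, forecast_model):
    # Step 1: derive a suspicion label for each participant-day from
    # self-reported symptoms (the labeling model).
    labels = labeling_model.predict(symptom_diary)

    # Step 2: train the forecaster on physiological and cognitive
    # features against those labels (75% train / 25% held-out test).
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0)
    forecast_model.fit(X_train, y_train)
    return forecast_model.score(X_test, y_test)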

Fig 1.

Data flow: a) the labeling model; b) the forecast model. Each model takes as input three days of data (d, d−1, d−2).

https://doi.org/10.1371/journal.pone.0257997.g001

Labeling model

We use a rule-based approach to create an AI model that labels an individual's self-reported symptoms as suspicious (or not) for symptoms consistent with a viral illness. The labeling model is built from expert knowledge manually translated into decision rules (see below). The purpose of this model is to determine whether a person is suspicious for an infectious disease (below, simply "suspicious") based on their self-reported symptoms. Rules are based on symptoms commonly present in diagnosed viral-like conditions and on those more specific to SARS-CoV-2 (e.g., loss of taste and smell) [2, 24, 25]. The resulting rules (Table 1) assign, for instance, higher confidence of suspicion of a viral illness to self-reported fever persisting for more than two consecutive days; in comparison, lower confidence is assigned to a stuffy nose and swollen eyes without fever (see S5 File). The rules and weights in this model were established by clinical subject matter experts. In particular, the weights associated with the rules were chosen to minimize the labeling error assessed by medical experts. We also fine-tuned some rule weights by fitting the model to a small synthetic data set containing typical symptom combinations. The actual calculation of the labeling model's output score is based on the machinery of probabilistic logic, described in more detail in the next section.
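As an illustration only (the study's actual rules and weights are given in Table 1 and S5 File), a weighted rule-based labeling step might look like the following Python sketch. The rules and weights are stand-ins, and a simple noisy-OR combination stands in for the full probabilistic-logic machinery described in the next section.

def label_day(symptoms):
    # Illustrative rules only; conditions and weights are assumptions.
    rules = [
        # (condition, weight): higher weight = stronger suspicion
        (symptoms.get("fever") and symptoms.get("days_persisting", 0) > 2, 0.9),
        (symptoms.get("loss_of_taste_or_smell"), 0.8),
        (symptoms.get("stuffy_nose") and not symptoms.get("fever"), 0.2),
    ]
    # Noisy-OR combination of the rules that fire.
    score = 1.0
    for condition, weight in rules:
        if condition:
            score *= (1.0 - weight)
    return 1.0 - score  # suspicion score in [0, 1]

print(label_day({"fever": True, "days_persisting": 3}))  # -> 0.9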

Forecasting model

The forecasting model associates the label of suspicion for viral illness from the labeling model with features extracted from the user's cognitive function assessment and physiological signals. The physiological features include (1) single-day values and (2) rolling averages over 28 days of heart rate, heart rate variability, respiration rate, activity, sleep latency, sleep duration, sleep composition (light, REM, deep), skin temperature, and sleep efficiency. All physiological features except skin temperature are measured during the night to remove noise due to varying daily activities. The daily cognitive task (PVT) is a sustained-attention, reaction-timed task that measures the speed with which subjects respond to a visual stimulus [18]. From this data set, the algorithm extracts rules using an information gain-based approach and combines them in a predictive model using a probabilistic graphical network, as follows.
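The feature construction described above can be sketched in pandas as follows. This is a minimal sketch under assumptions: the column names, the MultiIndex layout, and the min_periods choice are illustrative, not taken from the paper.

import pandas as pd

def build_features(df):
    # df: one row per participant-night, with a MultiIndex
    # (participant, date); column names are assumptions.
    signals = ["hr", "hrv", "resp_rate", "activity", "sleep_latency",
               "sleep_duration", "skin_temp", "sleep_efficiency"]
    feats = df[signals].copy()  # (1) single-day values
    for col in signals:
        # (2) 28-day rolling averages, computed per participant
        feats[col + "_28d"] = (
            df.groupby(level="participant")[col]
              .transform(lambda s: s.rolling(28, min_periods=7).mean()))
    return feats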

The set of probabilistic rules comprises a Markov network. The joint distribution defined by the Markov network can be written as

P(x) = (1/Z) exp(∑_j ω_j f_j(x)),

where x = (x_1, x_2, …, x_n, y) denotes a set of n+1 binary variables, out of which the first n are input variables and y is the output variable. Here, f_j(x) ∈ {1, 0} is a Boolean function corresponding to the j-th rule, ω_j is a factor associated with the corresponding rule, and Z is the normalization constant. In the current implementation, the relation between a rule's factor ω and the weight ψ used in the supplementary materials is given by ω = ln(ψ/(1 − ψ)). More details on the fundamentals of probabilistic logic can be found in [21–23]. With the joint distribution defined above, the model prediction s for every observation vector r = (r_1, r_2, …, r_n) is computed as the conditional probability of the output variable y, i.e., s = P(y = 1|r).
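Numerically, this conditional probability can be computed by evaluating the unnormalized joint at y = 1 and y = 0, since Z cancels in the ratio. The Python sketch below illustrates this with a single placeholder rule; the rule and its factor are assumptions for illustration.

import math

def predict(r, rules, factors):
    # rules: Boolean functions f_j over (r_1, ..., r_n, y); factors: w_j.
    def unnormalized(y):
        x = (*r, y)
        return math.exp(sum(w * f(x) for f, w in zip(rules, factors)))
    p1, p0 = unnormalized(1), unnormalized(0)
    return p1 / (p1 + p0)  # P(y = 1 | r); Z cancels in the ratio

# Placeholder rule: "low HRV (r_1 = 1) together with suspicion (y = 1)"
rules = [lambda x: int(x[0] == 1 and x[-1] == 1)]
print(predict((1,), rules, [1.5]))  # sigmoid(1.5) ≈ 0.82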

If a training set is available, the model's parameters can be determined by the calibration process, which minimizes the prediction error. Suppose that, for the i-th training example, the model's prediction is s_i and the observed (ground truth) output is y_i. We define the cross-entropy loss function as

L = −∑_i [ y_i ln(s_i) + (1 − y_i) ln(1 − s_i) ].

The calibration process uses gradient-based descent to find a combination of rule weights that minimizes the loss function. In our particular implementation, we used the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm.
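A minimal calibration sketch follows, assuming rules of the form "conditions AND y" so that the conditional probability reduces to a logistic function over the 0/1 matrix of fired rules; rule_matrix is an assumed precomputed input, not part of the authors' code.

import numpy as np
from scipy.optimize import minimize

def calibrate(rule_matrix, y):
    # rule_matrix: (examples x rules) 0/1 matrix of which rules fire;
    # y: ground-truth labels in {0, 1}.
    def loss(w):
        s = 1.0 / (1.0 + np.exp(-rule_matrix @ w))  # predictions s_i
        eps = 1e-12  # guard against log(0)
        return -np.mean(y * np.log(s + eps)
                        + (1 - y) * np.log(1 - s + eps))
    w0 = np.zeros(rule_matrix.shape[1])
    return minimize(loss, w0, method="L-BFGS-B").x  # calibrated weights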

Our model was developed on Stratyfy's Probabilistic Rule Engine, a commercial machine learning platform [26] (see S3 File). In the application to our study, this general framework for creating a rule-based predictive model was applied as follows. In a preprocessing step, the data from the wearable device (e.g., heart rate, temperature) and the information from the mobile app (e.g., symptoms, results of the PVT) were collected and checked for completeness, and engineered variables were extracted. We found that, for our study, large gaps in the data had a significant negative impact on the predictive power of the model; therefore, our efforts were concentrated on cases where most of the required information was actually available. We identified a number of engineered variables (for instance, the ratio of heart rate to heart rate variability) that significantly improved the model's predictive power. To be usable with probabilistic rules, continuous variables are discretized, and discretized and categorical variables are then converted into binary variables by one-hot encoding (sketched below). The labeling model described above was used to construct the binary output variable, marking for each case the days of potential onset of a viral infection.

At this point, the setup fit the context of a standard supervised learning problem: we needed to train a classifier to predict the onset of a disease based on the information available before the actual onset. We opted for the rule-based system described above for several reasons, chief among them the transparency and interpretability of the model. In this case, our rule-based system produced models that were fairly small (20–50 rules) and still highly accurate. We compared our approach to standard approaches, for example gradient boosting, and found the rule-based approach most promising.

Note that, in this study, rule-based models were used in two ways. In the labeling model, the rules, together with their confidences, were developed and specified by clinical experts. To create the forecasting model, the rules were extracted from the available data via rule mining; for this purpose, we used an association rule mining algorithm [27], which is based on co-occurrence frequency analysis. After extracting the rules, their weights were determined by the calibration process outlined above.
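The discretization and one-hot encoding steps referenced above might look like the following pandas sketch. The bin count (terciles), column names, and the HR/HRV ratio column are illustrative assumptions.

import pandas as pd

def to_binary_features(df):
    df = df.copy()
    # Engineered variable mentioned above: heart rate to HRV ratio.
    df["hr_over_hrv"] = df["hr"] / df["hrv"]
    # Discretize continuous variables (tercile bins as a stand-in).
    cols = ["hr", "hrv", "hr_over_hrv", "skin_temp"]
    for col in cols:
        df[col] = pd.qcut(df[col], q=3, labels=["low", "mid", "high"])
    # One-hot encode discretized and categorical variables.
    return pd.get_dummies(df, columns=cols)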

Validation of the model

Model performance was tested with K-fold cross-validation; in our case, we performed four rounds of validation (K = 4). One round of cross-validation involves partitioning the dataset into complementary subsets, performing the training on one subset and the validation on the other. To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are combined (averaged) over the rounds to give an estimate of the model's predictive performance. The entire dataset is divided four times into 75% for training and 25% for validating the model, and the results are then averaged across the four training-validation runs. The weights of the final model are obtained using the training dataset. We measure the model's performance at various threshold settings. We also used the area under the curve (AUC) of the receiver operating characteristic (ROC) curve as a threshold-invariant performance measure. Additionally, we report the model's learning performance, i.e., how much data is required for the model to reach stability. Learning is achieved when adding more data does not significantly impact the performance of the model.
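This scheme corresponds to the sketch below, where model is a stand-in object with scikit-learn-style fit/predict_proba methods and X, y are assumed to be NumPy arrays; scikit-learn utilities provide the folds and the AUC.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score

def cross_validate(model, X, y, k=4, seed=0):
    aucs = []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True,
                                    random_state=seed).split(X):
        model.fit(X[train_idx], y[train_idx])        # train on 75%
        scores = model.predict_proba(X[val_idx])[:, 1]
        aucs.append(roc_auc_score(y[val_idx], scores))  # validate on 25%
    return float(np.mean(aucs)), float(np.std(aucs))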

Results

We enrolled 867 subjects in the study between Apr 7th, 2020, and Aug 1st, 2020 (ages ranged from 20 to 76 years) (Table 2). The data set includes 75,292 unique data points (the median number of days of data per participant is 90) (see S2 File). 289 unique participants (33%) were labeled (via the labeling model) as having symptoms consistent with a viral illness. The forecasting model's inclusion criteria require at least three days of continuous data with no more than one feature missing due to compliance (Fig 1). Of the 767 participants who met these criteria, 276 had missing wearable data and 376 had missing cognitive assessment data. The remaining 115 participants were used to label the wearable and cognitive data as input for the three-day forecasting model. Each day of data was adjudicated by the labeling model, which labeled 10% of days as showing symptoms consistent with a viral-like illness; the remaining days were labeled as negative, or non-suspicious of viral-like illness. From the training dataset, the algorithm identified 45 probabilistic rules, which are combined to form the forecasting model (Table 3). The rules contributing to a high probability of developing symptoms within three days relate to low HRV, slower response times on cognitive testing, longer sleep-onset latency combined with increased REM sleep time, and increased HR. The rules that contribute to a lower probability of developing symptoms relate to lower HR, increased HRV, increased sleep quality, and faster response times on cognitive testing. Fig 2 shows the model performance as a function of the threshold. Fig 3 illustrates that the model reaches a plateau after about 1,500 samples and that little additional accuracy is gained by adding more samples. Table 4 reports the precision, recall, and accuracy metrics obtained with a threshold of 0.1 for models with and without cognitive assessment data; the threshold was selected to balance precision and recall. The overall accuracy of the model is 82%. Positive recall, defined as true positives (TP) over all actual positives (TP/(TP+FN)), is 79% (without PVT: 67%). Negative recall, defined as true negatives (TN) over all actual negatives (TN/(TN+FP)), is 83% (without PVT: 84%). The AUC is 89% (without PVT: 83%).
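For reference, the reported metrics follow directly from confusion-matrix counts once scores are thresholded at 0.1, as in this small sketch (the function is illustrative, not the authors' code):

def metrics(tp, fp, tn, fn):
    # Standard definitions used in Table 4 and the Discussion.
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "recall_pos": tp / (tp + fn),   # sensitivity
        "recall_neg": tn / (tn + fp),   # specificity
        "precision": tp / (tp + fp),
        "npv": tn / (tn + fn),          # negative predictive value
    }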

Fig 2.

ROC (False Positive Rate vs True Positive Rate) and Precision/recall curves for the forecasting model with (A, C) and without cognitive assessment (B, D).

https://doi.org/10.1371/journal.pone.0257997.g002

Fig 3. Model performance (learning and stability).

The shaded colors represent the variance of results across the K-fold (K = 4) validation.

https://doi.org/10.1371/journal.pone.0257997.g003

Table 3. Algorithm-derived rules. List of the 45 rules extracted by the algorithm and used in the model, with their relative weights.

https://doi.org/10.1371/journal.pone.0257997.t003

Table 4. Model performance with and without PVT.

A. With PVT. B. Without PVT.

https://doi.org/10.1371/journal.pone.0257997.t004

Discussion

In this study, we measured daily changes in autonomic activity using a wearable device and cognitive assessments via a mobile app. Using machine-learning analytics, we then forecast the onset of symptoms consistent with a viral illness. Specifically, we describe our strategy of using an AI model in conjunction with non-invasive, readily available technology to predict the likelihood of developing symptoms consistent with a viral infection three days before symptom onset, with an accuracy of 82%. The model has a false-positive rate of 17% (meaning the system would label a non-infected participant as suspicious) and a false-negative rate of 21% (meaning the system would miss a suspicious participant). Owing to the occurrence of disease in the population, our dataset is unbalanced, with negatives outnumbering positives by a ratio of about 4 to 1. The model detects 79% of individuals who will develop symptoms (i.e., sensitivity) and correctly predicts, almost all of the time (97%, negative predictive value), individuals who will not develop viral-like illness symptoms in the next three days. Conversely, the model precision is 34%, where precision is defined as the ratio of true positives (TP) to all predicted positives. In other words, when the model flags someone as likely to develop viral-like symptoms in the next three days, it is correct 34% of the time. Finally, the very small differences in AUC between folds suggest that the model generalizes consistently.

The current model parameters were chosen to provide a conservative framework that warns potentially pre-symptomatic individuals to socially isolate while minimizing warnings to individuals with a low likelihood of developing viral-like symptoms in the next three days. The individuals predicted to be positive (true or false positives) would undergo additional screening and precautions. This framework can be applied as a digital decision-making management tool for public health safety in addition to conventional infection-control strategies.

Other investigators have confirmed the relationship between autonomic activity and the inflammatory response [28–30]. This study suggests a time-dependent relationship between autonomic and cognitive activity and the forecasting of symptoms consistent with a viral illness. We observed consistent changes in autonomic nervous system function preceding the onset of symptoms. Specifically, differences were observed in HRV, HR, and sleep indices three days before symptom onset. Importantly, this period corresponds to the pre-symptomatic phase of some viral illnesses, such as COVID-19, which is estimated to be 2.5 days [1–4]. In addition to the autonomic changes measured by the wearables, our analyses demonstrate the additional value of cognitive assessments (PVT) in predicting symptoms consistent with a viral illness.

There are several limitations to this study. First, we did not diagnose infection or measure infection markers in each individual. Instead, we relied on self-reported symptoms known to be associated with the occurrence of a viral infection. Without definitive diagnostics, we cannot confirm the presence of viral infection among persons who self-report symptoms. In the next phase of the study, we plan to test for specific viruses (e.g., influenza and SARS-CoV-2). Second, the participants in this study are limited to front-line health care workers; our model would benefit from being extended to other populations. Finally, participant compliance in consistently using the wearable and the app remains a challenge, and non-compliance among our participants reduced the usable data set. We plan to develop additional models to impute the data efficiently in order to extend the usability of the forecasting model. While we have demonstrated that the dataset is sufficient to reach the model's predictive stability, additional data will provide further insights and reinforce the conclusions.

Physical, cognitive, behavioral, and environmental influences and stressors affect viral infection risk [31, 32]. To our knowledge, this is the first study using wearables and apps with machine learning to predict symptoms consistent with viral infection three days before their onset. The demonstrated approach to forecasting the onset of viral illness-like symptoms offers a novel digital decision-making tool for public health safety by potentially limiting viral transmission.

Supporting information

S1 Table. Data dictionary.

Data dictionary of each data element used in the model.

https://doi.org/10.1371/journal.pone.0257997.s001

(PDF)

S2 Table. Disease onset model rules.

The following table reports the probabilistic weights for each rule of the symptom onset forecasting model.

https://doi.org/10.1371/journal.pone.0257997.s002

(PDF)

S1 File. List of questions.

List of questions asked to the participants.

https://doi.org/10.1371/journal.pone.0257997.s003

(PDF)

S2 File. Dataset and inclusion/exclusion criteria.

Inclusion/exclusion criteria and description of data set.

https://doi.org/10.1371/journal.pone.0257997.s004

(PDF)

S3 File. Probabilistic rule engine.

Detailed description of the Probabilistic Rule Engine.

https://doi.org/10.1371/journal.pone.0257997.s005

(PDF)

S4 File. Validation approach.

Detailed description of the validation approach.

https://doi.org/10.1371/journal.pone.0257997.s006

(PDF)

S5 File. Labeling model.

Detailed description of the labeling model.

https://doi.org/10.1371/journal.pone.0257997.s007

(PDF)

Acknowledgments

We thank the teams and clinical coordinators from Vanderbilt University and Thomas Jefferson University, who helped recruit and support the participants. We also thank the Oura Ring team for their partnership and for integrating their data system.

References

  1. Li Q, Guan X, Wu P, et al. Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus–Infected Pneumonia. New England Journal of Medicine. 2020;382(13):1199–1207. pmid:31995857
  2. He X, Lau EHY, Wu P, et al. Temporal dynamics in viral shedding and transmissibility of COVID-19. Nat Med. 2020;26(5):672–675. pmid:32296168
  3. Spinato G, Fabbris C, Polesel J, et al. Alterations in Smell or Taste in Mildly Symptomatic Outpatients With SARS-CoV-2 Infection. JAMA. Published online Apr 22nd, 2020. pmid:32320008
  4. Guan W-J, Ni Z-Y, Hu Y, et al. Clinical Characteristics of Coronavirus Disease 2019 in China. New England Journal of Medicine. 2020;382(18):1708–1720. pmid:32109013
  5. [No title]. Accessed Sept 22nd, 2020. https://news.gallup.com/file/poll/269141/191206HealthTrackers.pdf
  6. Izmailova ES, McLean IL, Hather G, et al. Continuous Monitoring Using a Wearable Device Detects Activity-Induced Heart Rate Changes After Administration of Amphetamine. Clinical and Translational Science. 2019;12(6):677–686. pmid:31365190
  7. Erb MK, Karlin DR, Ho BK, et al. mHealth and wearable technology should replace motor diaries to track motor fluctuations in Parkinson's disease. NPJ Digit Med. 2020;3:6. pmid:31970291
  8. Griffin B, Saunders KEA. Smartphones and Wearables as a Method for Understanding Symptom Mechanisms. Front Psychiatry. 2019;10:949. pmid:32009990
  9. Radin JM, Wineinger NE, Topol EJ, Steinhubl SR. Harnessing wearable device data to improve state-level real-time surveillance of influenza-like illness in the USA: a population-based study. The Lancet Digital Health. 2020;2(2):e85–e93. pmid:33334565
  10. Jarczok MN, Kleber ME, Koenig J, et al. Investigating the Associations of Self-Rated Health: Heart Rate Variability Is More Strongly Associated than Inflammatory and Other Frequently Used Biomarkers in a Cross Sectional Occupational Sample. PLOS ONE. 2015;10(2):e0117196. pmid:25693164
  11. Bakken AG, Axén I, Eklund A, O'Neill S. The effect of spinal manipulative therapy on heart rate variability and pain in patients with chronic neck pain: a randomized controlled trial. Trials. 2019;20(1). pmid:31606042
  12. Johnston BW, Barrett-Jolley R, Krige A, Welters ID. Heart rate variability: Measurement and emerging use in critical care medicine. Pediatr Crit Care Med. 2020;21(2):148–157. pmid:32489411
  13. Abeler J, Bäcker M, Buermeyer U, Zillessen H. COVID-19 Contact Tracing and Data Protection Can Go Together. JMIR Mhealth Uhealth. 2020;8(4):e19359. pmid:32294052
  14. Show evidence that apps for COVID-19 contact-tracing are secure and effective. Nature. 2020;580(7805):563. pmid:32350479
  15. Wang S, Ding S, Xiong L. A New System for Surveillance and Digital Contact Tracing for COVID-19: Spatiotemporal Reporting Over Network and GPS. JMIR Mhealth Uhealth. 2020;8(6):e19457. pmid:32499212
  16. Oura Ring: the most accurate sleep and activity tracker. Oura Ring. Accessed Jul 7th, 2020. https://ouraring.com
  17. RNI Health. Accessed Aug 14th, 2020. https://apps.apple.com/us/app/rni-health/id1515732074
  18. Roach GD, Dawson D, Lamond N. Can a Shorter Psychomotor Vigilance Task Be Used as a Reasonable Substitute for the Ten-Minute Psychomotor Vigilance Task? Chronobiology International. 2006;23(6):1379–1387. pmid:17190720
  19. D'Haese P-F, Konrad PE, Pallavaram S, et al. CranialCloud: a cloud-based architecture to support trans-institutional collaborative efforts in neurodegenerative disorders. Int J Comput Assist Radiol Surg. 2015;10(6):815–823. pmid:25861055
  20. Stone M. Cross-Validatory Choice and Assessment of Statistical Predictions. Journal of the Royal Statistical Society: Series B (Methodological). 1974;36(2):111–133.
  21. Richardson M, Domingos P. Markov logic networks. Machine Learning. 2006;62(1–2):107–136.
  22. Nilsson NJ. Probabilistic logic. Artificial Intelligence. 1986;28(1):71–87.
  23. Wang C, Komodakis N, Paragios N. Markov Random Field modeling, inference & learning in computer vision & image understanding: A survey. Computer Vision and Image Understanding. 2013;117(11):1610–1627.
  24. Flu Symptoms & Diagnosis | CDC. Published Dec 5th, 2019. Accessed Jul 25th, 2020. https://www.cdc.gov/flu/symptoms/index.html
  25. Eccles R. Understanding the symptoms of the common cold and influenza. Lancet Infect Dis. 2005;5(11):718–725. pmid:16253889
  26. Stratyfy, Inc. https://www.stratyfy.com
  27. Hipp J, Güntzer U, Nakhaeizadeh G. Algorithms for association rule mining—a general survey and comparison. SIGKDD Explor Newsl. 2000;2(1):58–64.
  28. Pavlov VA, Tracey KJ. The vagus nerve and the inflammatory reflex—linking immunity and metabolism. Nature Reviews Endocrinology. 2012;8(12):743–754. pmid:23169440
  29. Pereira MR, Leite PEC. The Involvement of Parasympathetic and Sympathetic Nerve in the Inflammatory Reflex. Journal of Cellular Physiology. 2016;231(9):1862–1869. pmid:26754950
  30. Pal GK, Nanda N. Vagus Nerve: The Key Integrator of Anti-inflammatory Reflex. International Journal of Clinical and Experimental Physiology. 2020;7(1):01–02.
  31. Elkington LJ, Gleeson M, Pyne DB, Callister R, Wood LG. Inflammation and Immune Function. Antioxidants in Sport Nutrition. Published online 2014:171–181.
  32. Gleeson M. Immune function in sport and exercise. J Appl Physiol. 2007;103(2):693–699. pmid:17303714