Work in Progress

Large-scale Validation of a Scalable and Portable Behavioral Digital Screening Tool for Autism at Home


Abstract

Autism, characterized by challenges in socialization and communication, benefits from early detection, which enables prompt and timely intervention. Traditional autism screening questionnaires often exhibit reduced accuracy in primary care settings and significantly underperform in underprivileged populations. We present findings on the effectiveness of an autism screening digital application (app) that can be administered at primary care clinics and by caregivers at home. A large-scale validation was conducted with 1052 toddlers aged 16–40 months, 223 of whom were subsequently diagnosed with autism. The age-appropriate interactive app presents strategically designed stimuli on the screen of an iPhone or iPad to evoke behaviors related to social attention, facial expressions, head movements, blink rate, and motor responses, which are detected with the device's sensors and automatically quantified through computer vision (CV) and machine learning. The algorithm, which combines multiple digital biomarkers, demonstrated strong accuracy in distinguishing autistic from non-autistic toddlers: area under the receiver operating characteristic curve (AUC) = 0.93, sensitivity = 86.0%, specificity = 91.0%, and precision = 71%. These results mark a strong foundation for a digital phenotyping tool in autism research, notably one that requires no costly equipment such as eye-tracking devices and can be administered at home by caregivers.


1 INTRODUCTION

Autism spectrum disorder is manifested by differences in social communication along with idiosyncratic behaviors, the presence of restricted and repetitive behaviors [3], and difficulties in motor planning and coordination [13-15,33]. Signs of autism typically emerge between 9-18 months, including reduced attention to people, lack of response to name, differences in affective engagement and expressions, and motor delays [2,11,23,32]. Screening for autism commonly occurs at 18-24 months during well-child visits using the Modified Checklist for Autism in Toddlers-Revised with Follow-Up (M-CHAT-R/F) [30], a caregiver questionnaire. Recent research has shown that technology can enhance tools such as the M-CHAT, alleviating some of its limitations while improving screening accuracy, scalability, and robustness [26].

A significant portion of autistic individuals show reduced spontaneous visual attention to social stimuli, and studies applying machine learning to eye-tracking data have shown promise in distinguishing autistic and neurotypical children [24,37]. A recent eye-tracking study of 1,863 toddlers aged 12–48 months reported strong specificity (98.0%) but poor sensitivity (17.0%) for its measure of social attention [38]. Another recent study of 475 toddlers aged 16-30 months, conducted in a primary care setting with a sophisticated eye tracker, reported a sensitivity of 71% and a specificity of 80.7% [18]. Given this wide range of results, eye tracking alone may be insufficient due to the heterogeneous nature of autism. To better capture the complex presentation of autism, digital phenotyping can quantify differences in social attention [6], head movements [10,20], complexity of facial expressions [4], blink rate [19], and motor behaviors [27,28], combining multiple such biomarkers via modern machine learning tools [26].

The app “SenseToKnow” (S2K) was developed for this purpose. It runs on a tablet (iPad) or mobile phone (iPhone), either in primary care settings or at home, and can be fully administered by caregivers: no hardware other than their phone/tablet is required, and no calibration steps are needed prior to administration. The S2K app displays strategically designed movies while recording the child's behavioral responses via the front-facing camera and touch/inertial sensors, which are analyzed through computer vision (CV) and machine learning (ML). In this study of 1052 toddlers, the app demonstrated high accuracy in classifying autistic versus non-autistic toddlers by integrating 23 digital phenotypes. A prior study validated and tested the app in primary care settings in the presence of clinicians or experts [26]; here we extend this work by testing the interactive app at home, administered by caregivers.

Extending clinical care to homes has extraordinary implications; it has the potential to increase care adoption and access, including for groups that cannot easily attend frequent medical visits. Additionally, automatic screening and diagnostic tools that can be administered directly by caregivers at home can help reduce wait times for early autism screening and can support multistage screening for more effective outcomes [31], which is currently not possible. Although existing work has demonstrated the feasibility of autism screening at home using questionnaires and video recordings [1,12], those approaches still required time-intensive procedures and analyses performed by clinicians or experts. Given current technological advances, and much as heart rate measures are now available in smartwatches, screening and diagnostic tools on smartphones could enable a paradigm shift in autism. In this work, we present the large-scale validation of “SenseToKnow,” a scalable and portable behavioral screening tool for autism that can be administered on a phone or tablet either at the clinic or at home. We also show the effectiveness of the app in detecting the presence of autism as early as 16 months of age.


2 METHODS

2.1 Participants

Participants were 1052 toddlers, 16-40 months of age, who were either (1) recruited at four pediatric primary care clinics during their well-child visit or (2) participated from home, where the caregivers administered the app on their iPhone/iPad. A standard caregiver-completed questionnaire, the Modified Checklist for Autism in Toddlers – Revised with Follow-up (M-CHAT-R/F) [30], was used for the initial assessment. If a child screened positive on the M-CHAT-R/F or the caregiver/clinician expressed any developmental concern, the child was further evaluated with the Autism Diagnostic Observation Schedule – Toddler (ADOS-T) [22] or the TELE-ASD-PEDS (TAP) [8]. Of the 1052 toddlers, 223 were diagnosed with autism, 43 were diagnosed as having language delay or developmental delay (DDLD), and the rest were considered neurotypical. We combined the DDLD and neurotypical participants into the non-autistic (comparison) group (N=829). All caregivers provided written informed consent, and the study protocols were approved by the Duke University Health System Institutional Review Board (Pro00085434, Pro00085435).

2.2 Application (app) administration and stimuli

The S2K app consists of several tasks, including short social and nonsocial movies presented through an iPad/iPhone app (see Figure 1 (a)):
(1) Floating Bubbles (35 seconds). Bubbles moved randomly across the screen with a gurgling sound.
(2) Dog in Grass (16 seconds). A cartoon barking puppy appeared at the center and the four corners of the screen.
(3) Dog in Grass Right-Right-Left (RRL) (40 seconds). A cartoon barking puppy appeared at random on the right or left side of the screen at first, followed by a constant right-right-left pattern.
(4) Spinning Top (53 seconds). An actress played with a spinning top toy with successful and unsuccessful attempts.
(5) Mechanical Puppy (25 seconds). A mechanical toy puppy barked, jumped, and walked toward grouped toys.
(6) Blowing Bubbles (64 seconds). An actor blew bubbles using a bubble wand with successful and unsuccessful attempts.
(7) Rhymes (30 seconds). An actress narrated nursery rhymes such as Itsy-Bitsy Spider.
(8) Toys (19 seconds). Dynamic toys with sound were shown.
(9) Make Me Laugh (56 seconds). An actress demonstrated silly, funny actions.
(10) Playing with Blocks (71 seconds). Two child actors interacted and played with toys with occasional verbalizations.
(11) Fun at the Park (51 seconds). Two actresses stood at each side of the frame, having a turn-taking conversation with no gestures.
(12) Pop the Bubbles game (20 seconds). Clear bubbles rise from the bottom of the screen, each containing a marine animal that floats away when the bubble is popped via touch. Each pop triggers a distinct sound, and the animal spins before floating off the screen.


Figure 1: (a) Stimuli and task presentation. (b) Feature extraction using computer vision analysis (CVA). (c) Extracted behavioral biomarkers. (d) XGBoost model training and evaluation. (Figure partially adapted from [26].)

In summary, the app can be administered in less than 10 minutes and consists of 11 short, developmentally appropriate video clips that the children watch, followed by an interactive bubble-popping game. Caregivers were asked to hold their child on their lap while the child watched the movies, with the device placed about 60 cm in front of the child. The front-facing camera of the phone/tablet recorded the toddler's face while the videos played on the screen. During the Pop the Bubbles game, the child interacted via the device's touch screen while its kinetic and touch information was recorded.

2.3 Feature extraction and behavioral measures

The recorded videos (30 fps) were synchronized with the movies and processed to track the toddler's face, extracting 49 facial landmarks and head pose angles relative to the device's front-facing camera, namely θyaw (left-right), θpitch (up-down), and θroll (tilting left-right) [21,29] (see Figure 1 (b)). After obtaining the facial landmarks and head pose angles of the participants as described in [17], we extracted the set of behavioral features described in the following sections (see Figure 1 (c)).

2.3.1 Facing forward during social and nonsocial movies (2 features).

As a proxy for the participants' engagement with social vs. nonsocial videos, we measured the number of frames in which the participant's head was Facing Forward toward the screen. Using CV, we computed the Euler angles of the child's head relative to the device's front-facing camera [17]. A criterion of θyaw within 25° was used as a proxy for facing forward [10]. Frames containing rapid head turns were filtered out using a thresholding approach described in [20].
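
To make this concrete, the following is a minimal sketch (not the authors' implementation) of how a facing-forward fraction could be computed from a per-frame yaw time series; the helper name facing_forward_fraction and the specific turn-speed filter are illustrative assumptions.

```python
import numpy as np

def facing_forward_fraction(yaw_deg, yaw_threshold=25.0, max_turn_speed=None, fps=30):
    """Fraction of frames facing forward, given per-frame yaw angles (degrees).

    A frame counts as facing forward when |yaw| <= yaw_threshold. Optionally,
    frames with rapid head turns (|d(yaw)/dt| above max_turn_speed, in deg/s)
    are excluded, loosely mirroring the thresholding filter described in [20].
    """
    yaw = np.asarray(yaw_deg, dtype=float)
    forward = np.abs(yaw) <= yaw_threshold
    if max_turn_speed is not None:
        speed = np.abs(np.gradient(yaw)) * fps     # approximate deg/s between frames
        forward &= speed <= max_turn_speed
    return forward.mean(), forward                 # fraction and per-frame mask

# Example with a synthetic yaw trace (30 s at 30 fps), for illustration only.
yaw = np.random.default_rng(0).normal(loc=5.0, scale=15.0, size=900)
fraction, mask = facing_forward_fraction(yaw, yaw_threshold=25.0, max_turn_speed=120.0)
```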

2.3.2 Head movements during social and nonsocial movies (6 features)

The facial landmarks associated with the corners of the two eyes and the nose tip were used to compute the participant's head movement [20]. The ‘facing forward’ signal defined above was used to filter out frames in which the child was not facing the screen. The variation in the distance between the eyes was used to adjust for the child's distance to the screen. The head movement features computed from the landmark time series were the head movement (1) Rate, (2) Acceleration, and (3) Complexity, the latter leveraging multiscale sample entropy [9].
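
As an illustration, the sketch below shows one way the Rate and Acceleration features could be derived from the landmark time series; the centroid proxy, the normalization, and the function name are assumptions rather than the authors' exact definitions, and the Complexity feature uses the multiscale sample entropy sketched in the next subsection.

```python
import numpy as np

def head_movement_rate_and_acceleration(landmarks_xy, eye_dist, fps=30):
    """Rough sketch of head-movement rate and acceleration features.

    landmarks_xy: (T, 3, 2) per-frame positions of the two eye corners and the
                  nose tip (pixels).
    eye_dist:     (T,) inter-ocular distance, used to normalize for the child's
                  distance to the screen.
    Returns mean speed and mean absolute acceleration of the head centroid in
    normalized units per second; both definitions are illustrative.
    """
    landmarks_xy = np.asarray(landmarks_xy, dtype=float)
    eye_dist = np.asarray(eye_dist, dtype=float)
    centroid = landmarks_xy.mean(axis=1)            # (T, 2) head position proxy
    centroid = centroid / eye_dist[:, None]         # distance normalization
    velocity = np.gradient(centroid, axis=0) * fps  # units/s
    speed = np.linalg.norm(velocity, axis=1)
    accel = np.gradient(speed) * fps                # units/s^2
    return speed.mean(), np.abs(accel).mean()
```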

2.3.3 Facial dynamics complexity during social and nonsocial movies (4 features).

The complexity of the facial landmark dynamics was estimated for the eyebrow and mouth regions of the child's face using multiscale sample entropy [9]. We computed the average complexity of the mouth and eyebrow regions, referred to as the Mouth Complexity and Eyebrows Complexity, as described in [4].
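
For reference, a minimal multiscale sample entropy implementation in the spirit of [9] is sketched below; the parameter choices (m, r, number of scales) are illustrative, and the tolerance is recomputed per scale for simplicity, which differs slightly from the standard formulation.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """Sample entropy of a 1-D series with embedding dimension m and tolerance r."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r) - 1                           # exclude self-match
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_sample_entropy(x, max_scale=5, m=2, r_factor=0.15):
    """Coarse-grain the series at scales 1..max_scale and return SampEn per scale [9]."""
    x = np.asarray(x, dtype=float)
    entropies = []
    for tau in range(1, max_scale + 1):
        n = (len(x) // tau) * tau
        coarse = x[:n].reshape(-1, tau).mean(axis=1)   # non-overlapping averages
        entropies.append(sample_entropy(coarse, m=m, r_factor=r_factor))
    return np.array(entropies)
```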

2.3.4 Blink rate during social and nonsocial movies (2 features).

To automatically detect blinks, we used OpenFace [5], a facial analysis toolkit that provides facial action units on a frame-by-frame basis. We used action unit 45 (AU45) to estimate the child's blinks. The AU45 time series was smoothed, and the number of peaks, which correspond to blink actions, was detected. To obtain the Blink Rate (refer to [19] for more details), we normalized the number of blinks by the number of frames in which the participant was engaged, as estimated by the ‘facing forward’ signal described above.
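
A minimal sketch of this blink-detection step, assuming a per-frame AU45 intensity trace exported by OpenFace, could look as follows; the smoothing window, peak height, and minimum inter-blink distance are illustrative values, not those used in [19].

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def blink_rate(au45, facing_forward, fps=30, smooth_window=7, peak_height=0.5):
    """Sketch of blink-rate estimation from an OpenFace AU45 intensity trace.

    au45:           (T,) per-frame AU45 intensity from OpenFace.
    facing_forward: (T,) boolean mask of engaged frames (Section 2.3.1).
    """
    smoothed = savgol_filter(np.asarray(au45, dtype=float), smooth_window, polyorder=2)
    peaks, _ = find_peaks(smoothed, height=peak_height, distance=int(0.2 * fps))
    engaged_frames = max(int(np.sum(facing_forward)), 1)
    return len(peaks) / engaged_frames      # blinks per engaged frame
```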

Note: The features described so far are mean values computed separately over the set of social movies and the set of nonsocial movies.

2.3.5 Social attention variables from gaze data (2 features).

The app includes two movies (Blowing Bubbles and Spinning Top) featuring a left/right separation of social and nonsocial stimuli on the two halves of the screen; these stimuli were designed to capture social/nonsocial attentional preference. The variable Gaze Percent Social was defined as the percentage of time the child gazed at the social half of the screen, and the Gaze Silhouette Score reflected how concentrated the gaze clusters were; refer to [6] for more details.
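
As a rough illustration, and assuming per-frame horizontal gaze estimates normalized across the screen width, these two measures could be approximated as below; the two-cluster KMeans formulation of the silhouette score is an assumption loosely inspired by [6], not the exact method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def social_attention_features(gaze_x, social_side="left"):
    """gaze_x: (T,) horizontal gaze positions normalized to [0, 1] across the screen."""
    gaze_x = np.asarray(gaze_x, dtype=float).reshape(-1, 1)
    on_social = gaze_x[:, 0] < 0.5 if social_side == "left" else gaze_x[:, 0] >= 0.5
    gaze_percent_social = 100.0 * on_social.mean()

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(gaze_x)
    gaze_silhouette = silhouette_score(gaze_x, labels)  # higher = more concentrated clusters
    return gaze_percent_social, gaze_silhouette
```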

2.3.6 Attention to speech variable from gaze data (2 features).

The Fun at the Park movie presented two actresses, one on each side of the screen, taking turns in a conversation. To evaluate whether the children followed the conversation with their gaze, we computed the correlation between the child's gaze (left/right) pattern and a binary signal indicating which actress was actively talking. This correlation-based feature is referred to as the Gaze Speech Correlation, and the associated silhouette score is referred to as the FP Gaze Silhouette Score; more details are in [6].
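
A simple way to compute such a correlation, assuming a binarized gaze-side signal and a per-frame active-speaker signal, is sketched below; the binarization at mid-screen is an illustrative simplification of the approach in [6].

```python
import numpy as np

def gaze_speech_correlation(gaze_x, speaker_right):
    """Pearson correlation between gaze side and active speaker.

    gaze_x:        (T,) horizontal gaze positions normalized to [0, 1].
    speaker_right: (T,) binary signal, 1 when the right-side actress is talking.
    """
    gaze_right = (np.asarray(gaze_x, dtype=float) >= 0.5).astype(float)
    speaker = np.asarray(speaker_right, dtype=float)
    return float(np.corrcoef(gaze_right, speaker)[0, 1])
```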

2.3.7 Response to name (2 features).

Based on automatic detection of the name calls (performed by the caregiver) and of the child's response to their name by turning their head, computed from the facial landmarks similarly to [27], we defined two CVA-based variables: Response to Name Proportion, the proportion of name calls to which the child oriented, and Response to Name Delay, the average delay (in seconds) between the offset of the name call and the start of the head turn.

2.3.8 Touch-based visual-motor skills (3 features).

As described in [28], using the touch information provided by the device sensors while the child played the Pop the Bubbles game, we defined Touch Popping Rate as the ratio of popped bubbles to the number of touches, Touch Error Variation as the standard deviation of the distance between the child's finger position when touching the screen and the center of the closest bubble, and Touch Average Length as the average length of the child's finger trajectories on the screen.
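
Under assumed data structures for the logged touch events (coordinates, pop outcomes, nearest-bubble centers, and per-trajectory point lists), the three features could be computed roughly as follows; this is a sketch, not the implementation of [28].

```python
import numpy as np

def touch_features(touches, popped_flags, nearest_bubble_centers, trajectories):
    """Sketch of the three touch-based features; the data structures are illustrative.

    touches:                (N, 2) touch coordinates.
    popped_flags:           (N,) booleans, True when a touch popped a bubble.
    nearest_bubble_centers: (N, 2) center of the closest bubble for each touch.
    trajectories:           list of (K_i, 2) arrays, one per finger trajectory.
    """
    touches = np.asarray(touches, dtype=float)
    popping_rate = float(np.mean(popped_flags))                           # popped / touches
    errors = np.linalg.norm(touches - np.asarray(nearest_bubble_centers), axis=1)
    error_variation = float(errors.std())
    lengths = [np.sum(np.linalg.norm(np.diff(t, axis=0), axis=1)) for t in trajectories]
    average_length = float(np.mean(lengths)) if lengths else 0.0
    return popping_rate, error_variation, average_length
```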

2.4 Classification and statistical analysis.

All statistics were computed in Python version 3.8.10. The Mann–Whitney U test (pingouin package [35], version 0.5.4) was used to assess differences between the two groups, along with the effect size ‘r.’ A linear regression model (SciPy package [36], version 1.7.3) was used to minimize the effect of age on our measures.
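
For a single biomarker, the comparison could be run as sketched below; note that pingouin's mwu reports a rank-biserial correlation, so the r effect size is derived here from the normal approximation of U as an illustrative alternative, not necessarily the paper's exact computation.

```python
import numpy as np
import pingouin as pg
from scipy import stats

def group_comparison(autistic_vals, non_autistic_vals):
    """Mann-Whitney U comparison of one biomarker between the two groups."""
    res = pg.mwu(autistic_vals, non_autistic_vals, alternative="two-sided")
    p_value = float(res["p-val"].iloc[0])

    # r = |Z| / sqrt(N), with Z from the normal approximation of the U statistic.
    n1, n2 = len(autistic_vals), len(non_autistic_vals)
    n = n1 + n2
    u, _ = stats.mannwhitneyu(autistic_vals, non_autistic_vals, alternative="two-sided")
    mu = n1 * n2 / 2.0
    sigma = np.sqrt(n1 * n2 * (n + 1) / 12.0)
    r_effect = abs(u - mu) / sigma / np.sqrt(n)
    return p_value, r_effect
```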

2.4.1 Extreme Gradient Boosting (XGBoost) algorithm implementation.

XGBoost is a popular model based on an ensemble of decision trees whose node variables and split decisions are optimized using gradient statistics of a loss function. The algorithm progressively adds more “if” conditions to the decision trees to improve the predictions of the overall model. We used the XGBoost package, version 2.0.3, with the default parameters provided by the authors [7], except for the following (chosen as described in [26]), which we changed to account for the class imbalance and to control overfitting: n_estimators=100; max_depth=3; objective=“binary:logistic”; booster=“gbtree”; tree_method=“exact”; colsample_bytree=0.8; subsample=1; colsumbsample=0.8; learning_rate=0.15; gamma=0.1 (regularization parameter); reg_lambda=0.1; alpha=0. See Figure 1(d) for the workflow of model training and evaluation. Classification performance was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), with five-fold cross-validation. The 95% confidence intervals were computed with the Hanley and McNeil method [16]. The Youden optimality index (J = Sensitivity + Specificity − 1) was used to select the final prediction threshold of the classifier, maximizing sensitivity and specificity [25].
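
A minimal sketch of this pipeline using the hyperparameters listed above (the ambiguous "colsumbsample" entry and the Hanley–McNeil confidence intervals are omitted, and reg_alpha stands in for alpha), assuming X is the numeric matrix of age-adjusted biomarkers plus covariates and y the diagnostic labels:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, roc_curve

def cross_validated_auc(X, y, n_splits=5, seed=0):
    """Five-fold cross-validated AUC and Youden-optimal threshold (illustrative sketch)."""
    model_params = dict(
        n_estimators=100, max_depth=3, objective="binary:logistic",
        booster="gbtree", tree_method="exact", colsample_bytree=0.8,
        subsample=1, learning_rate=0.15, gamma=0.1, reg_lambda=0.1,
        reg_alpha=0, random_state=seed,
    )
    scores = np.zeros(len(y), dtype=float)
    cv = StratifiedKFold(n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in cv.split(X, y):
        clf = xgb.XGBClassifier(**model_params)
        clf.fit(X[train_idx], y[train_idx])
        scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]

    auc = roc_auc_score(y, scores)
    fpr, tpr, thresholds = roc_curve(y, scores)
    youden = tpr - fpr                               # J = sensitivity + specificity - 1
    best_threshold = thresholds[np.argmax(youden)]
    return auc, best_threshold, scores
```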

2.4.2 SHapley Additive exPlanations (SHAP) computation

SHAP values serve as a metric for gauging the influence of each variable on the prediction [34]. They quantify the effect of a variable taking a specific value, as opposed to the prediction that would be made if it assumed a baseline value. This framework offers robust theoretical guarantees in elucidating the contribution of each input variable to the final prediction, including the estimation of interactions between variables and their respective contributions. In this work, the SHAP values were computed and stored for each sample of the test sets during cross-validation. The Python package shap, version 0.44.0, was used.
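
A sketch of this step for one cross-validation fold, assuming clf is the trained XGBoost classifier and X_test the corresponding test features:

```python
import shap

def explain_fold(clf, X_test, feature_names=None):
    """Compute and plot SHAP values for one cross-validation test fold."""
    explainer = shap.TreeExplainer(clf)
    shap_values = explainer.shap_values(X_test)          # (n_test, n_features) contributions
    # Beeswarm summary of the top-20 variables, similar in spirit to Figure 2(c).
    shap.summary_plot(shap_values, X_test, feature_names=feature_names, max_display=20)
    return shap_values
```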


3 RESULTS

3.1 Descriptive statistics of all the CVA-based behavioral variables.

The descriptive statistics show that the group average differences between the autistic and non-autistic groups are statistically significant (p < 0.01), with small (r > .2) to medium (r > .5) effect sizes, for all variables except five: Social Mouth Complexity (marginally significant, p = 0.08), Response to Name Delay, Touch Popping Rate, Touch Error Variation, and Touch Average Length (see Figure 2 (a) for the statistical results for each variable). Consistent with prior work, the group differences in variables related to facing forward, head movements, and blink rate are all larger during the social than the nonsocial tasks [19,20].


Figure 2: (a) Descriptive statistics of all the CV-based behavioral biomarkers. (b) Receiver operating characteristic (ROC) curve. (c) Summary plot of the SHAP values of the top 20 variables of importance.

3.2 Classification results using XGBoost.

Since the app was administered in both clinic and home settings, there were possible confounds in how the app was administered by clinicians vs. caregivers; Experiment Location was therefore used as a covariate in our analysis. Additionally, caregivers used iPhones and iPads of different models and screen sizes, which could confound our gaze-related and touch measures; thus, Screen Size was added as another covariate. Finally, the participant's Age can influence some of the behavioral biomarkers based on developmental trajectory. The effect of age was minimized by fitting a linear regression model for each feature (see Figure 1(d)) and using the residuals of the features for further analysis.
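
The age-adjustment step can be summarized by the following sketch, which regresses one feature on age and carries the residuals forward to classification; the helper name age_adjust is illustrative.

```python
import numpy as np
from scipy import stats

def age_adjust(feature, age_months):
    """Remove the linear effect of age from one biomarker and return the residuals."""
    slope, intercept, r_value, p_value, stderr = stats.linregress(age_months, feature)
    predicted = intercept + slope * np.asarray(age_months, dtype=float)
    return np.asarray(feature, dtype=float) - predicted
```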

Using all 23 age-adjusted behavioral biomarkers and the 2 confounding variables, we trained the XGBoost model to classify the autistic versus the non-autistic group. Figure 2(b) presents the ROC curves of three classification models, using (1) all the data from 1052 participants, (2) the data collected in primary care settings (at clinic, N=456, 39 of them autistic), and (3) the data collected at home (N=596, 184 of them autistic). For models (2) and (3), the covariate Experiment Location becomes irrelevant, as the data were already split by location; these models check whether our results remain consistent across settings. Across the experiments, the sensitivity ranged between 83% (home) and 87% (clinic), with 86% for the combined sample. Regardless of the data used, the specificity remained in the range of 91-92%. The positive predictive value (PPV; precision) and negative predictive value (NPV) were 71% and 96% (combined sample), 51% and 98% (clinic), and 81% and 92% (home), respectively. The AUC was 0.93 for the combined sample and the clinic sample, and 0.92 for the sample of app administrations at home. Figure 2(c) presents the summary of the SHAP values (combined sample), which provide explainability of the model, with the variables arranged from most to least important; features derived from head movements, response to name call, facing forward, and social attention were the most salient. These results replicate the findings of prior work [26].


4 DISCUSSION AND CONCLUSIONS

This study is part of a broader effort to design scalable, robust, and portable tools for computational behavioral phenotyping. In this work, we presented a portable iPad or iPhone application (app) that displays strategically designed, developmentally appropriate short social/nonsocial movies that can evoke certain behavioral manifestations in autistic toddlers. Autistic and non-autistic toddlers used the app, watched the movies, and interacted via the touch screen during the Pop the Bubbles game. The app used the front-facing camera of the device to record videos of the toddlers and the device's sensors for touch-related measures. The videos were analyzed using computer vision (CV) to extract behavioral biomarkers. These behavioral and touch features were then fed into an XGBoost classification model to reliably screen for signs of autism.

Computer vision based behavioral biomarkers are distinctly different in autistic compared to non-autistic toddlers. The results reported here show that CV-based biomarkers related to social attention, head movements, response to name call, facing forward, blink rate, and touch-based motor features clearly differentiate the autistic from the non-autistic group.

The “SenseToKnow” iPad/iPhone app can reliably screen for the presence of autism both in the clinic and at home. Our results with data from 1052 participants show that an XGBoost-based machine learning classifier can detect signs of autism as early as 16 months of age with a sensitivity of 86%, a specificity of 91%, and a precision of 71%, marking a strong foundation for a digital phenotyping tool in autism research, notably without costly equipment such as eye-tracking devices and with at-home administration.

Contribution. The outcome of our research indicates strong potential for a quantitative, objective, and scalable digital phenotyping tool designed to enhance the accuracy of autism screening. This tool holds promise for addressing disparities in access to screening, diagnosis, and intervention, serving as a valuable complement to existing autism screening questionnaires. Our results show that this tool can be used not only at primary care clinics by clinicians but also by caregivers in the home environment.

Limitations. Though the study sample is relatively large, we still lack the power to generalize across the diversity and demographic characteristics of the target population. Additionally, our app is currently available only on iPad or iPhone.

Future work. This is an ongoing study, and we plan to further increase the sample size to generalize the results across ethnic and demographic diversity (preliminary results show no bias). Another direction is to extend the availability of the app to other platforms such as Android.


Acknowledgments

This project was funded by a Eunice Kennedy Shriver NICHD Autism Center of Excellence Award P50HD093074 (Dawson, PI), NIMH R01MH121329 (Dawson, PI), NIMH R01MH120093 (Sapiro and Dawson, Co-PIs), and the Simons Foundation (Sapiro and Dawson, Co-PIs). Resources were provided by NSF, ONR, NGA, ARO, and gifts from Cisco, Google, and Amazon. We wish to thank the many caregivers and children for their participation in the study, without whom this research would not have been possible. We gratefully acknowledge the collaboration of the physicians and nurses in Duke Children's Primary Care and members of the NIH Duke Autism Center of Excellence research team, including several clinical research coordinators and specialists.


Supplemental Material

3613905.3650995-talk-video.mp4 (Talk Video, MP4, 17.1 MB)

References

  1. Halim Abbas, Ford Garberson, Eric Glover, and Dennis P. Wall. 2018. Machine learning approach for early detection of autism by combining questionnaire and home video screening. J. Am. Med. Informatics Assoc. 25, 8 (2018), 1000–1007.
  2. Gianpaolo Alvari, Cesare Furlanello, and Paola Venuti. 2021. Is smiling the key? Machine learning analytics detect subtle patterns in micro-expressions of infants with ASD. J. Clin. Med. 10, 8 (April 2021), 1776.
  3. American Psychiatric Association. 2014. Diagnostic and Statistical Manual of Mental Disorders: DSM-5. American Psychiatric Association.
  4. Pradeep Raj Krishnappa Babu, J. Matias Di Martino, Zhuoqing Chang, Sam Perochon, Kimberly L.H. Carpenter, Scott Compton, Steven Espinosa, Geraldine Dawson, and Guillermo Sapiro. 2023. Exploring Complexity of Facial Dynamics in Autism Spectrum Disorder. IEEE Trans. Affect. Comput. 14, 2 (2023), 919–930.
  5. Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, and Louis Philippe Morency. 2018. OpenFace 2.0: Facial behavior analysis toolkit. In Proceedings of the 13th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2018), 59–66.
  6. Zhuoqing Chang, J. Matias Di Martino, Rachel Aiello, Jeffrey Baker, Kimberly Carpenter, Scott Compton, Naomi Davis, Brian Eichner, Steven Espinosa, Jacqueline Flowers, Lauren Franz, Adrianne Harris, Jill Howard, Sam Perochon, Eliana M. Perrin, Pradeep Raj Krishnappa Babu, Marina Spanos, Connor Sullivan, Barbara K. Walter, Scott H. Kollins, Geraldine Dawson, and Guillermo Sapiro. 2021. Computational Methods to Measure Patterns of Gaze in Toddlers with Autism Spectrum Disorder. JAMA Pediatr. 175, 8 (2021), 827–836.
  7. Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794.
  8. L. Corona, J. Hine, A. Nicholson, C. Stone, A. Swanson, J. Wade, L. Wagner, A. Weitlauf, and Z. Warren. 2020. TELE-ASD-PEDS: A Telemedicine-based ASD Evaluation Tool for Toddlers and Young Children. Vanderbilt University Medical Center. Retrieved January 18, 2024 from https://vkc.vumc.org/vkc/triad/tele-asd-peds
  9. Madalena Costa, Ary L. Goldberger, and C. K. Peng. 2005. Multiscale entropy analysis of biological signals. Phys. Rev. E 71, 2 (2005), 021906.
  10. Geraldine Dawson, Kathleen Campbell, Jordan Hashemi, Steven J. Lippmann, Valerie Smith, Kimberly Carpenter, Helen Egger, Steven Espinosa, Saritha Vermeer, Jeffrey Baker, and Guillermo Sapiro. 2018. Atypical postural control can be detected via computer vision analysis in toddlers with autism spectrum disorder. Sci. Rep. 8, 1 (2018), 1–7.
  11. Nicholas Deveau, Peter Washington, Emilie Leblanc, Arman Husic, Kaitlyn Dunlap, Yordan Penev, Aaron Kline, Onur Cezmi Mutlu, and Dennis P. Wall. 2022. Machine learning models using mobile game play accurately classify children with autism. Intell. Med. 6 (January 2022), 100057.
  12. Deanna Dow, Taylor N. Day, Timothy J. Kutta, Charly Nottke, and Amy M. Wetherby. 2020. Screening for autism spectrum disorder in a naturalistic home setting using the systematic observation of red flags (SORF) at 18–24 months. Autism Res. 13, 1 (2020), 122–133.
  13. Gianluca Esposito, Paola Venuti, Sandra Maestro, and Filippo Muratori. 2009. An exploration of symmetry in early autism spectrum disorders: Analysis of lying. Brain Dev. 31, 2 (2009), 131–138.
  14. Joanne E. Flanagan, Rebecca Landa, Anjana Bhat, and Margaret Bauman. 2012. Head lag in infants at risk for autism: A preliminary study. Am. J. Occup. Ther. 66, 5 (2012), 577–585.
  15. Kimberly A. Fournier, Chris J. Hass, Sagar K. Naik, Neha Lodha, and James H. Cauraugh. 2010. Motor coordination in autism spectrum disorders: A synthesis and meta-analysis. J. Autism Dev. Disord. 40, 10 (2010), 1227–1240.
  16. J. A. Hanley and B. J. McNeil. 1982. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 1 (1982), 29–36.
  17. Jordan Hashemi, Geraldine Dawson, Kimberly L.H. Carpenter, Kathleen Campbell, Qiang Qiu, Steven Espinosa, Samuel Marsan, Jeffrey P. Baker, Helen L. Egger, and Guillermo Sapiro. 2021. Computer Vision Analysis for Quantification of Autism Risk Behaviors. IEEE Trans. Affect. Comput. 12, 1 (2021), 215–226.
  18. Warren Jones, Cheryl Klaiman, Shana Richardson, Christa Aoki, Christopher Smith, Mendy Minjarez, Raphael Bernier, Ernest Pedapati, Somer Bishop, Whitney Ence, Allison Wainer, Jennifer Moriuchi, Sew Wah Tay, and Ami Klin. 2023. Eye-Tracking–Based Measurement of Social Visual Engagement Compared With Expert Clinical Diagnosis of Autism. JAMA 330, 9 (September 2023), 854–865.
  19. Pradeep Raj Krishnappa Babu, Vikram Aikat, J. Matias Di Martino, Zhuoqing Chang, Sam Perochon, Steven Espinosa, Rachel Aiello, Kimberly L. H. Carpenter, Scott Compton, Naomi Davis, Brian Eichner, Jacqueline Flowers, Lauren Franz, Geraldine Dawson, and Guillermo Sapiro. 2023. Blink rate and facial orientation reveal distinctive patterns of attentional engagement in autistic toddlers: a digital phenotyping approach. Sci. Rep. 13, 1 (May 2023), 1–11.
  20. Pradeep Raj Krishnappa Babu, J. Matias Di Martino, Zhuoqing Chang, Sam Perochon, Rachel Aiello, Kimberly L.H. Carpenter, Scott Compton, Naomi Davis, Lauren Franz, Steven Espinosa, Jacqueline Flowers, Geraldine Dawson, and Guillermo Sapiro. 2023. Complexity analysis of head movements in autistic toddlers. J. Child Psychol. Psychiatry 64, 1 (2023), 156–166.
  21. Fernando De La Torre, Wen Sheng Chu, Xuehan Xiong, Francisco Vicente, Xiaoyu Ding, and Jeffrey Cohn. 2015. IntraFace. In 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG 2015), 1–8.
  22. Rhiannon Luyster, Katherine Gotham, Whitney Guthrie, Mia Coffing, Rachel Petrak, Karen Pierce, Somer Bishop, Amy Esler, Vanessa Hus, Rosalind Oti, Jennifer Richler, Susan Risi, and Catherine Lord. 2009. The autism diagnostic observation schedule - Toddler module: A new module of a standardized diagnostic measure for autism spectrum disorders. J. Autism Dev. Disord. 39, 9 (2009), 1305–1320.
  23. Katherine B. Martin, Zakia Hammal, Gang Ren, Jeffrey F. Cohn, Justine Cassell, Mitsunori Ogihara, Jennifer C. Britton, Anibal Gutierrez, and Daniel S. Messinger. 2018. Objective measurement of head movement differences in children with and without autism spectrum disorder. Mol. Autism (2018).
  24. Maria Eleonora Minissi, Irene Alice Chicchi Giglioli, Fabrizia Mantovani, and Mariano Alcañiz Raya. 2022. Assessment of the Autism Spectrum Disorder Based on Machine Learning and Social Visual Attention: A Systematic Review. J. Autism Dev. Disord. 52, 5 (May 2022), 2187–2202.
  25. Neil J. Perkins and Enrique F. Schisterman. 2005. The Youden index and the optimal cut-point corrected for measurement error. Biometrical J. 47, 4 (August 2005), 428–441.
  26. Sam Perochon, J. Matias Di Martino, Kimberly L.H. Carpenter, Scott Compton, Naomi Davis, Brian Eichner, Steven Espinosa, Lauren Franz, Pradeep Raj Krishnappa Babu, Guillermo Sapiro, and Geraldine Dawson. 2023. Early detection of autism using digital behavioral phenotyping. Nat. Med. 29, 10 (October 2023), 2489–2497.
  27. Sam Perochon, Matias Di Martino, Rachel Aiello, Jeffrey Baker, Kimberly Carpenter, Zhuoqing Chang, Scott Compton, Naomi Davis, Brian Eichner, Steven Espinosa, Jacqueline Flowers, Lauren Franz, Martha Gagliano, Adrianne Harris, Jill Howard, Scott H. Kollins, Eliana M. Perrin, Pradeep Raj, Marina Spanos, Barbara Walter, Guillermo Sapiro, and Geraldine Dawson. 2021. A scalable computational approach to assessing response to name in toddlers with autism. J. Child Psychol. Psychiatry 62, 9 (2021), 1120–1131.
  28. Sam Perochon, J. Matias Di Martino, Kimberly L.H. Carpenter, Scott Compton, Naomi Davis, Steven Espinosa, Lauren Franz, Amber D. Rieder, Connor Sullivan, Guillermo Sapiro, and Geraldine Dawson. 2023. A tablet-based game for the assessment of visual motor skills in autistic children. npj Digit. Med. 6, 1 (February 2023), 1–13.
  29. Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, and Xiaokang Yang. 2022. Facial Geometric Detail Recovery via Implicit Representation. In 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG 2023).
  30. Diana L. Robins, Karís Casagrande, Marianne Barton, Chi Ming A. Chen, Thyde Dumont-Mathieu, and Deborah Fein. 2014. Validation of the modified checklist for autism in toddlers, revised with follow-up (M-CHAT-R/F). Pediatrics 133, 1 (2014), 37–45.
  31. R. Christopher Sheldrick, Alice S. Carter, Abbey Eisenhower, Thomas I. MacKie, Megan B. Cole, Noah Hoch, Sophie Brunt, and Frances Martinez Pedraza. 2022. Effectiveness of Screening in Early Intervention Settings to Improve Diagnosis of Autism and Reduce Health Disparities. JAMA Pediatr. 176, 3 (March 2022), 262–269.
  32. Roberta Simeoli, Nicola Milano, Angelo Rega, and Davide Marocco. 2021. Using Technology to Identify Children With Autism Through Motor Abnormalities. Front. Psychol. 12 (May 2021).
  33. Philip Teitelbaum, Osnat Teitelbaum, Jennifer Nye, Joshua Fryman, and Ralph G. Maurer. 1998. Movement analysis in infancy may be useful for early diagnosis of autism. Proc. Natl. Acad. Sci. U. S. A. 95, 23 (1998), 13982–13987.
  34. Kazim Topuz, Akhilesh Bajaj, and Ismail Abdulrashid. 2023. Interpretable Machine Learning. In Proceedings of the Annual Hawaii International Conference on System Sciences, 1236–1237.
  35. Raphael Vallat. 2018. Pingouin: statistics in Python. J. Open Source Softw. 3, 31 (November 2018), 1026.
  36. Pauli Virtanen et al. 2020. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17, 3 (February 2020), 261–272.
  37. Qiuhong Wei, Huiling Cao, Yuan Shi, Ximing Xu, and Tingyu Li. 2023. Machine learning based on eye-tracking data to identify Autism Spectrum Disorder: A systematic review and meta-analysis. J. Biomed. Inform. 137 (January 2023). https://doi.org/10.1016/j.jbi.2022.104254
  38. Teresa H. Wen, Amanda Cheng, Charlene Andreason, Javad Zahiri, Yaqiong Xiao, Ronghui Xu, Bokan Bao, Eric Courchesne, Cynthia Carter Barnes, Steven J. Arias, and Karen Pierce. 2022. Large scale validation of an early-age eye-tracking biomarker of an autism spectrum disorder subtype. Sci. Rep. 12, 1 (March 2022), 1–13.

Published in: CHI EA '24: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, May 2024, 4761 pages. ISBN 9798400703317. DOI: 10.1145/3613905. Publisher: Association for Computing Machinery, New York, NY, United States. Published: 11 May 2024.

Copyright © 2024 Owner/Author. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.
