Abstract
Autism, characterized by challenges in socialization and communication, benefits from early detection, which enables prompt intervention. Traditional autism screening questionnaires often exhibit reduced accuracy in primary care settings and significantly underperform in underprivileged populations. We present findings on the effectiveness of an autism screening digital application (app) that can be administered both at primary care clinics and by caregivers at home. A large-scale validation was conducted with 1052 toddlers aged 16–40 months, 223 of whom were subsequently diagnosed with autism. The age-appropriate interactive app uses strategically designed stimuli, presented on the screen of an iPhone or iPad, to evoke behaviors related to social attention, facial expressions, head movements, blinking rate, and motor responses, which are detected with the device's sensors and automatically quantified through computer vision (CV) and machine learning. The algorithm, combining multiple digital biomarkers, demonstrated strong accuracy in distinguishing autistic from non-autistic toddlers: area under the receiver operating characteristic curve (AUC) = 0.93, sensitivity = 86.0%, specificity = 91.0%, and precision = 71.0%. These results mark a strong foundation for a digital phenotyping tool in autism research, notably one that requires no costly equipment such as eye-tracking devices and can be administered at home by caregivers.
1 INTRODUCTION
Autism spectrum disorder is manifested by differences in social communication, idiosyncratic behaviors, the presence of restricted and repetitive behaviors [3], and difficulties in motor planning and coordination [13–15,33]. Signs of autism typically emerge between 9 and 18 months, including reduced attention to people, lack of response to name, differences in affective engagement and expressions, and motor delays [2,11,23,32]. Screening for autism commonly occurs at 18-24 months during well-child visits using the Modified Checklist for Autism in Toddlers-Revised with Follow-Up (M-CHAT-R/F) [30], a caregiver questionnaire. Recent research has shown that technology can enhance tools such as the M-CHAT, alleviating some of its limitations while improving screening accuracy, scalability, and robustness [26].
A significant portion of autistic individuals show reduced spontaneous visual attention to social stimuli, and studies using machine learning on eye-tracking data have shown promise in distinguishing autistic and neurotypical children [24,37]. A recent eye-tracking study of 1,863 toddlers aged 12–48 months reported strong specificity (98.0%) but poor sensitivity (17.0%) for its measure of social attention [38]. Another recent study, with 475 toddlers aged 16-30 months, conducted in a primary care setting with a sophisticated eye tracker, showed a sensitivity of 71% and specificity of 80.7% [18]. Given the wide range of results across eye-tracking tests, it is evident that eye-tracking alone may be insufficient due to the heterogeneous nature of autism. To better capture the complex presentation of autism, digital phenotyping can quantify differences in social attention [6], head movements [10,20], complexity of facial expressions [4], blinking rate [19], and motor behaviors [27,28], combining multiple such biomarkers via modern machine learning tools [26].
The app “SenseToKnow” (S2K) was developed for this purpose. Administered on a tablet (iPad) or mobile phone (iPhone), either in primary care settings or at home, it can be fully administered by caregivers: no hardware beyond their phone/tablet is required, and no calibration steps are needed prior to administration. The S2K app displays strategically designed movies while recording the child's behavioral responses via the front-facing camera and touch/inertial sensors, which are analyzed through computer vision (CV) and machine learning (ML). In this study of 1052 toddlers, the app demonstrated high accuracy in classifying autistic versus non-autistic toddlers by integrating 23 digital phenotypes. A prior study validated and tested the app in primary care settings in the presence of clinicians or experts [26]; here we extend this work by testing the interactive app at home with caregivers.
Extending clinical care to homes has extraordinary implications: it has the potential to increase care adoption and access, including for populations beyond the reach of frequent medical visits. Additionally, an automatic screening and diagnostic tool that can be administered directly by caregivers at home can reduce wait times for early autism screening and aid multistage screening for more effective outcomes [31], which is currently not possible. Although existing works have demonstrated the feasibility of autism screening at home using questionnaires and video recordings [1,12], they still required time-intensive procedures and analysis performed by clinicians or experts. Given current technological advancements, and much as heart rate measurements are now available in smartwatches, screening and diagnostic tools on smartphones could create a paradigm shift in autism care. In this work, we present a large-scale validation of “SenseToKnow,” a scalable and portable behavioral screening tool for autism that can be administered on a phone or tablet either at the clinic or at home. We also demonstrate the app's effectiveness in detecting the presence of autism as early as 16 months of age.
2 METHODS
2.1 Participants
Participants were 1052 toddlers, 16-40 months of age, who were either (1) recruited at four pediatric primary care clinics during their well-child visit or (2) participated from home, where the caregivers administered the app on their iPhone/iPad. A standard caregiver-completed questionnaire, the Modified Checklist for Autism in Toddlers – Revised with Follow-up (M-CHAT-R/F) [30], was used for the initial assessment. If a child screened positive on the M-CHAT-R/F or the caregiver/clinician expressed any developmental concern, the child was further evaluated with the Autism Diagnostic Observation Schedule – Toddler (ADOS-T) [22] or the TELE-ASD-PEDS (TAP) [8]. Of the 1052 toddlers, 223 were diagnosed with autism, 43 were diagnosed as having language delay or developmental delay (DDLD), and the rest were considered neurotypical. We combined the DDLD and neurotypical participants into the non-autistic (comparison) group (N=829). All caregivers provided written informed consent, and the study protocols were approved by the Duke University Health System Institutional Review Board (Pro00085434, Pro00085435).
2.2 Application (app) administration and stimuli
The S2K app consists of several tasks, including short social and nonsocial movies presented through an iPad/iPhone app (see Figure 1 (a)), namely
In summary, the app can be administered in less than 10 minutes and consists of 11 short, developmentally appropriate video clips that the children watch; at the end, they interact with a bubble-popping game. Caregivers were asked to hold their child on their lap during the movies, with the device placed about 60 cm in front of the child. The front-facing camera of the phone/tablet recorded the toddler's face while the videos played on the screen. During the Pop the Bubbles game, the child interacted via the touch screen while the device's kinetic and touch information was recorded.
2.3 Feature extraction and behavioral measures
The recorded videos (30 fps) were synchronized with the movies and processed to track the toddler's face and extract 49 facial landmarks and head pose angles relative to the device's front-facing camera, namely θyaw (left-right), θpitch (up-down), and θroll (tilting left-right) [21,29] (see Figure 1 (b)). After obtaining the facial landmarks and head pose angles of the participants as described in [17], we extracted a set of behavioral features as described in the following sections (see Figure 1 (c)).
2.3.1 Facing forward during social and nonsocial movies (2 features).
As a proxy for the participants' engagement with social vs. nonsocial videos, we measured the number of frames in which the participant's head was
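As an illustration of this kind of measure, a minimal sketch follows, assuming "facing forward" means the head-pose yaw and pitch angles stay within a tolerance of zero (the 20° threshold here is a placeholder for illustration, not the study's value):

```python
import numpy as np

def facing_forward_ratio(yaw, pitch, max_angle_deg=20.0):
    """Fraction of frames in which the head is oriented toward the screen,
    i.e. both yaw and pitch are within a tolerance of zero degrees."""
    yaw, pitch = np.asarray(yaw, float), np.asarray(pitch, float)
    forward = (np.abs(yaw) < max_angle_deg) & (np.abs(pitch) < max_angle_deg)
    return forward.mean(), forward  # ratio, plus the per-frame mask for reuse

# Toy head-pose traces (degrees): facing forward for the first 80 of 100 frames
yaw = np.concatenate([np.full(80, 5.0), np.full(20, 45.0)])
pitch = np.zeros(100)
ratio, mask = facing_forward_ratio(yaw, pitch)
```

The returned per-frame mask is also useful downstream, e.g. to exclude frames in which the child is not facing the screen.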
2.3.2 Head movements during social and nonsocial movies (6 features)
The facial landmarks associated with the corners of the two eyes and the nose tip were used to compute the participant's head movement [20]. The ‘facing forward’ signal defined above was used to filter out frames in which the child was not facing the screen. The variation in the distance between the eyes was used to adjust for the child's distance to the screen. The features computed from the time series of the landmarks were the head movement (1)
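A minimal sketch of a head-movement measure along these lines, assuming landmark coordinates per frame, interocular-distance normalization, and a precomputed facing-forward mask (the centroid-displacement formulation is an illustrative simplification, not the study's exact definition):

```python
import numpy as np

def head_movement(landmarks, forward_mask):
    """Mean frame-to-frame head displacement from a (T, P, 2) array of
    landmark positions (eye corners and nose tip). The displacement of the
    landmarks' centroid is normalized by the interocular distance to adjust
    for the child's distance to the screen; frames in which the child was
    not facing forward are excluded."""
    landmarks = np.asarray(landmarks, float)
    center = landmarks.mean(axis=1)                        # (T, 2) head-position proxy
    interocular = np.linalg.norm(landmarks[:, 0] - landmarks[:, 1], axis=1)
    disp = np.linalg.norm(np.diff(center, axis=0), axis=1) / interocular[1:]
    valid = forward_mask[1:] & forward_mask[:-1]           # both frames must face forward
    return float(disp[valid].mean())

# Toy data: 3 landmarks (two eye corners, nose tip) over 50 frames,
# drifting 0.01 normalized units per frame along x
base = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5]])
drift = np.arange(50)[:, None, None] * np.array([0.01, 0.0])
moving = base[None] + drift
mask = np.ones(50, bool)
movement = head_movement(moving, mask)
```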
2.3.3 Facial dynamics complexity during social and nonsocial movies (4 features).
The complexity of the facial landmarks’ dynamics was estimated for the movements of the eyebrow and mouth regions of the child's face using multiscale sample entropy [9]. We computed the average complexity of the mouth and eyebrow regions, referred to as the
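The multiscale sample entropy of [9] can be sketched as follows; this is a generic illustration of the technique (template length m = 2, tolerance 0.2 × standard deviation, a common convention), not the study's exact parameterization:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy: negative log of the conditional probability that
    subsequences matching for m points also match for m + 1 points."""
    x = np.asarray(x, float)
    r = r_frac * np.std(x)

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(t) - 1):  # exclude self-matches
            count += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r)
        return count

    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3)):
    """Coarse-grain the series by non-overlapping averaging, then compute
    sample entropy at each scale [9]."""
    x = np.asarray(x, float)
    out = []
    for tau in scales:
        coarse = x[:len(x) // tau * tau].reshape(-1, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

# A regular (sine) trajectory yields much lower entropy than white noise
rng = np.random.default_rng(0)
sine_entropy = sample_entropy(np.sin(np.linspace(0, 10 * np.pi, 400)))
noise_entropy = sample_entropy(rng.standard_normal(400))
```

Applied to landmark trajectories, higher values indicate more complex (less predictable) facial dynamics.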
2.3.4 Blink rate during social and nonsocial movies (2 features).
To automatically recognize blinks, we used OpenFace [5], a facial analysis toolkit that provides facial action units on a frame-by-frame basis. We used action unit 45 (AU45) to estimate the child's blinks. The AU45 time-series signal was smoothed, and the number of peaks, each associated with a blink action, was detected. To obtain the
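The smoothing-plus-peak-detection step can be sketched as follows; the window size, peak height, and minimum peak spacing are illustrative placeholders, not the study's tuned values:

```python
import numpy as np
from scipy.signal import find_peaks

def blink_rate(au45, fps=30, window=5, height=0.4, min_gap_frames=10):
    """Blinks per minute from a per-frame AU45 (eye-closure) intensity
    trace: moving-average smoothing followed by peak detection."""
    kernel = np.ones(window) / window
    smooth = np.convolve(np.asarray(au45, float), kernel, mode="same")
    peaks, _ = find_peaks(smooth, height=height, distance=min_gap_frames)
    minutes = len(au45) / fps / 60.0
    return len(peaks) / minutes, peaks

# Synthetic 10 s trace with three Gaussian-shaped blink events
t = np.arange(300)
au45 = sum(np.exp(-0.5 * ((t - c) / 3.0) ** 2) for c in (50, 150, 250))
rate, peaks = blink_rate(au45)  # 3 blinks in 10 s -> 18 blinks/min
```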
2.3.5 Social attention variables from gaze data (2 features).
The app includes two movies (Blowing Bubbles and Spinning Top) featuring a left/right separation of social and nonsocial stimuli on each side of the screen; these stimuli were designed to capture social/nonsocial attentional preference. The variable
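A social-attention preference of this kind can be sketched as the fraction of valid gaze frames landing on the social side of the screen; this assumes gaze has already been classified as left/right per frame (as in [6]) and is an illustrative simplification of the study's variable:

```python
import numpy as np

def social_preference(gaze_side, social_side):
    """Fraction of valid frames in which the child's gaze falls on the side
    of the screen showing the social stimulus. gaze_side entries are
    'L'/'R', or None when gaze is off-screen or undetected."""
    valid = [(g, s) for g, s in zip(gaze_side, social_side) if g is not None]
    if not valid:
        return float("nan")
    return float(np.mean([g == s for g, s in valid]))

gaze = ["L", "L", "R", "L", None, "L"]
social = ["L", "L", "L", "L", "L", "R"]  # side showing the social stimulus
pref = social_preference(gaze, social)   # 3 of 5 valid frames on the social side
```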
2.3.6 Attention to speech variable from gaze data (2 features).
The Fun at the Park movie presented two actors, one on each side of the screen, taking turns in a conversation. To evaluate whether the children followed the conversation with their gaze, we computed the correlation between the child's gaze (left/right) patterns and a binary signal indicating which actor was actively talking. This correlation-based feature is referred to as the
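The correlation described above can be sketched as follows, assuming per-frame gaze-side and speaker-side signals encoded as ±1 (an illustrative encoding, not necessarily the study's):

```python
import numpy as np

def attention_to_speech(gaze_side, speaker_side):
    """Pearson correlation between the child's left/right gaze trace and a
    binary signal marking which actor is talking; values near 1 indicate
    that the child's gaze tracks the conversational turns."""
    return float(np.corrcoef(np.asarray(gaze_side, float),
                             np.asarray(speaker_side, float))[0, 1])

# Speaker alternates sides every 100 frames (-1 = left actor, +1 = right)
speaker = np.tile(np.repeat([-1.0, 1.0], 100), 3)
follower = np.roll(speaker, 10)  # gaze that tracks the turns with a short lag
rng = np.random.default_rng(0)
random_gaze = rng.choice([-1.0, 1.0], size=speaker.size)

r_follow = attention_to_speech(follower, speaker)
r_random = attention_to_speech(random_gaze, speaker)
```

A child whose gaze shifts with the speaker yields a high correlation even with a short reaction lag, while unrelated gaze yields a correlation near zero.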
2.3.7 Response to name (2 features).
Based on automatic detection of the name calls (performed by the caregiver) and of the child's response to their name by turning their head, computed from the facial landmarks similarly to [27], we defined two CVA-based variables:
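A minimal sketch of the response detection, assuming the name-call frame is known and a head turn is a yaw excursion beyond a threshold within a fixed window (the 30° threshold and 3 s window are illustrative placeholders, not the study's values):

```python
import numpy as np

def response_to_name(yaw, call_frame, fps=30, turn_deg=30.0, window_s=3.0):
    """Given a head-yaw trace (degrees) and the frame of a name call,
    return whether the child turned their head within the response window
    and, if so, the delay in seconds."""
    yaw = np.asarray(yaw, float)
    end = min(len(yaw), call_frame + int(window_s * fps))
    baseline = yaw[call_frame]
    turned = np.where(np.abs(yaw[call_frame:end] - baseline) > turn_deg)[0]
    if turned.size == 0:
        return False, float("nan")
    return True, turned[0] / fps

# Toy trace: child turns their head ~1 s after a name call at frame 60
yaw = np.zeros(200)
yaw[90:] = 45.0
responded, delay = response_to_name(yaw, call_frame=60)
```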
2.3.8 Touch-based visual-motor skills (3 features).
As described in [28], using the touch information provided by the device sensors when the child played the Pop the Bubbles game, we defined
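As an illustration of touch-based features of this kind, a hypothetical simplification follows; the stroke representation and the three feature names (`popping_rate`, `avg_stroke_length`, `pop_accuracy`) are assumptions for the sketch, not the exact definitions of [28]:

```python
import numpy as np

def touch_features(strokes, popped, duration_s):
    """Visual-motor features from Pop the Bubbles touch logs. `strokes` is a
    list of (n_i, 2) arrays of finger positions for each touch stroke, and
    `popped` a per-stroke flag indicating whether a bubble was popped."""
    popped = np.asarray(popped, bool)
    lengths = [np.sum(np.linalg.norm(np.diff(s, axis=0), axis=1)) for s in strokes]
    return {
        "popping_rate": popped.sum() / duration_s,     # bubbles popped per second
        "avg_stroke_length": float(np.mean(lengths)),  # mean finger-path length
        "pop_accuracy": float(popped.mean()),          # fraction of strokes that popped
    }

strokes = [np.array([[0.0, 0.0], [3.0, 4.0]]),  # path length 5
           np.array([[1.0, 1.0], [1.0, 2.0]])]  # path length 1
feats = touch_features(strokes, popped=[True, False], duration_s=10.0)
```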
2.4 Classification and statistical analysis.
All statistics were computed in Python version 3.8.10. The Mann–Whitney U test (pingouin package [35], version 0.5.4) was used to assess the significance of differences between the two groups, along with estimation of the effect size ‘r.’ A linear regression model was used to minimize the effect of age on our measures (SciPy package [36], version 1.7.3).
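The two steps above can be sketched directly with SciPy (computing the effect size r = |Z|/√N from the U statistic by hand, and age-adjusting a feature by residualizing it on age); this is an illustration on toy data, not the study's code:

```python
import numpy as np
from scipy import stats

def group_comparison(a, b):
    """Two-sided Mann-Whitney U test with effect size r = |Z| / sqrt(N)."""
    u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    n1, n2 = len(a), len(b)
    z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return p, abs(z) / np.sqrt(n1 + n2)

def age_adjust(feature, age_months):
    """Residualize a feature on age with a linear fit, removing the age
    trend before group comparison or classification."""
    slope, intercept, *_ = stats.linregress(age_months, feature)
    return np.asarray(feature) - (slope * np.asarray(age_months) + intercept)

rng = np.random.default_rng(0)
group_a = rng.normal(0.8, 1.0, 120)              # toy shifted distribution
group_b = rng.normal(0.0, 1.0, 200)
p, r_eff = group_comparison(group_a, group_b)

age = rng.uniform(16, 40, 320)
feature = 0.05 * age + rng.normal(0, 1, 320)     # feature with an age trend
resid = age_adjust(feature, age)                 # residuals uncorrelated with age
```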
2.4.1 Extreme Gradient Boosting (XGBoost) algorithm implementation.
XGBoost is a popular model based on an ensemble of decision trees whose node variables and split decisions are optimized using gradient statistics of a loss function. The algorithm progressively adds trees (sequences of "if" conditions) to improve the predictions of the overall model. We used the XGBoost package, version 2.0.3, with the default parameters provided by the authors [7], except the following (chosen as described in [26]), which we changed to account for class imbalance and to control overfitting: n_estimators=100; max_depth=3; objective="binary:logistic"; booster="gbtree"; tree_method="exact"; colsample_bytree=0.8; subsample=1; learning_rate=0.15; gamma=0.1 (regularization parameter); reg_lambda=0.1; alpha=0. See Figure 1(d) for the workflow of model training and evaluation. Classification performance was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), with five-fold cross-validation. The 95% confidence intervals were computed with the Hanley and McNeil method [16]. The Youden optimality index (J = Sensitivity + Specificity − 1) was used to select the classifier's final decision threshold, maximizing sensitivity and specificity [25].
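The evaluation metrics can be reproduced in plain NumPy; the following is an illustrative re-implementation (AUC as the Mann-Whitney statistic, the Hanley-McNeil confidence interval, and the Youden threshold), not the study's code:

```python
import numpy as np

def auc_score(y, s):
    """AUC as the probability that a random positive score exceeds a random
    negative score (Mann-Whitney formulation; ties count half)."""
    pos, neg = s[y == 1], s[y == 0]
    diff = pos[:, None] - neg[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

def hanley_mcneil_ci(auc, n_pos, n_neg, z=1.96):
    """95% CI for the AUC using the Hanley & McNeil variance formula [16]."""
    q1, q2 = auc / (2 - auc), 2 * auc ** 2 / (1 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc ** 2)
           + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)
    se = np.sqrt(var)
    return auc - z * se, auc + z * se

def youden_threshold(y, s):
    """Operating point maximizing J = sensitivity + specificity - 1 [25]."""
    best_t, best_j = None, -np.inf
    for t in np.unique(s):
        pred = s >= t
        sens = pred[y == 1].mean()
        spec = (~pred)[y == 0].mean()
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j

y = np.array([0, 0, 0, 1, 1, 1])
s = np.array([0.1, 0.2, 0.6, 0.5, 0.8, 0.9])
auc = auc_score(y, s)  # one discordant pair among 9 -> 8/9
```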
2.4.2 SHapley Additive exPlanations (SHAP) computation
SHAP values serve as a metric for gauging the influence of variables on the prediction [34]. They quantify the effect of a variable taking a specific value as opposed to the prediction that would be made if that variable assumed a baseline value. This framework offers robust theoretical guarantees in elucidating the contribution of each input variable to the final prediction, including the estimation of interactions between variables and their respective contributions. In this work, the SHAP values were computed and stored for each sample of the test sets when performing cross-validation, using the Python package shap, version 0.44.0.
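To make the definition concrete, Shapley values can be computed exactly by brute force for a toy model: each feature's value is its weighted average marginal contribution over all coalitions of the other features, with absent features held at a baseline. The shap package's TreeSHAP computes this efficiently for tree ensembles such as XGBoost; the exponential enumeration below is purely illustrative:

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at x relative to a baseline input, by
    enumerating all coalitions of the remaining features."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for coal in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                z = np.array(baseline, float)
                z[list(coal)] = x[list(coal)]   # present features take their value
                without_i = f(z)
                z[i] = x[i]                     # add feature i to the coalition
                phi[i] += w * (f(z) - without_i)
    return phi

weights = np.array([1.0, -2.0, 3.0])
f = lambda v: float(v @ weights)        # toy linear "model"
x, base = np.array([1.0, 1.0, 1.0]), np.zeros(3)
phi = shapley_values(f, x, base)        # for a linear model: w_i * (x_i - b_i)
```

By the efficiency property, the values sum to f(x) − f(baseline), which is what makes per-feature attributions of a single prediction interpretable.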
3 RESULTS
3.1 Descriptive statistics of all the CVA-based behavioral variables.
The descriptive statistics show that the group differences between the autistic and non-autistic groups are statistically significant (p<0.01), with small (r > .2) to medium (r > .5) effect sizes (see Figure 2 (a) for the statistical results for each variable), except for five variables: Social Mouth Complexity (marginally significant, p = 0.08), Response to Name Delay, Touch Popping Rate, Touch Error Variation, and Touch Average Length. Consistent with prior work, the group differences in variables related to facing forward, head movements, and blink rate are all larger during the social than the nonsocial tasks [19,20].
3.2 Classification results using XGBoost.
Since the app was administered in both clinic and home-based settings, there were possible confounds in the way the app was administered by clinicians vs. caregivers. The
Using all 23 age-adjusted behavioral biomarkers and the 2 confounding variables, we trained the XGBoost model to classify the autistic group against the non-autistic group. Figure 2(b) presents the ROC curves of three classification models, using (1) all the data from 1052 participants, (2) data collected in primary care settings (at clinic, N=456, 39 of them autistic), and (3) data collected at home (N=596, 184 of them autistic). For models (2) and (3), the covariate
4 DISCUSSION AND CONCLUSIONS
This study is part of a broad effort to design scalable, robust, and portable tools for computational behavioral phenotyping. We have presented a portable iPad/iPhone application (app) that displays strategically designed, developmentally appropriate short social/nonsocial movies that evoke certain behavioral manifestations in autistic toddlers. Autistic and non-autistic toddlers used the app, watched the movies, and interacted via the touch screen during the Pop the Bubbles game. The app used the device's front-facing camera to record video of the toddlers and the device's sensors for touch-related measures. The videos were analyzed using computer vision (CV) to extract various behavioral biomarkers. These behavioral and touch features were then fed into an XGBoost classification model to reliably screen for signs of autism.
Computer vision-based behavioral biomarkers are distinctly different in autistic compared to non-autistic toddlers. The results reported here show that CV-based biomarkers related to social attention, head movements, response to name, facing forward, blink rate, and touch-based motor features distinctly differentiate the autistic from the non-autistic group.
“SenseToKnow,” an iPad/iPhone app, can reliably screen for the presence of autism both in the clinic and at home. With data from 1052 participants, our XGBoost-based machine learning classifier detected signs of autism as early as 16 months of age with sensitivity = 86%, specificity = 91%, and precision = 71%, marking a strong foundation for a digital phenotyping tool in autism research, notably without costly equipment such as eye-tracking devices and with at-home administration.
Contribution. The outcome of our research indicates strong potential for a quantitative, objective, and scalable digital phenotyping tool designed to enhance the accuracy of autism screening. This tool promises to address disparities in access to screening, diagnosis, and intervention, serving as a valuable complement to existing autism screening questionnaires. Our results show that the tool can be used not only by clinicians at primary care clinics but also by caregivers at home.
Limitations. Though the study sample is relatively large, we still lack the power to generalize across the diversity and demographic characteristics of the target population. Additionally, our app is currently available only for iPad and iPhone.
Future work. This is an ongoing study, and we plan to further increase the sample size to generalize the results across different ethnic and demographic groups (preliminary results show no bias). We also plan to extend the availability of the app to other platforms such as Android.
Acknowledgments
This project was funded by a Eunice Kennedy Shriver NICHD Autism Center of Excellence Award P50HD093074 (Dawson, PI), NIMH R01MH121329 (Dawson, PI), NIMH R01MH120093 (Sapiro and Dawson, Co-PIs), and the Simons Foundation (Sapiro and Dawson, Co-PIs). Resources were provided by NSF, ONR, NGA, ARO, and gifts from Cisco, Google, and Amazon. We wish to thank the many caregivers and children for their participation in the study, without whom this research would not have been possible. We gratefully acknowledge the collaboration of the physicians and nurses in Duke Children's Primary Care and members of the NIH Duke Autism Center of Excellence research team, including several clinical research coordinators and specialists.
References
- [1] Halim Abbas, Ford Garberson, Eric Glover, and Dennis P. Wall. 2018. Machine learning approach for early detection of autism by combining questionnaire and home video screening. J. Am. Med. Informatics Assoc. 25, 8 (2018), 1000–1007.
- [2] Gianpaolo Alvari, Cesare Furlanello, and Paola Venuti. 2021. Is smiling the key? Machine learning analytics detect subtle patterns in micro-expressions of infants with ASD. J. Clin. Med. 10, 8 (April 2021), 1776.
- [3] American Psychiatric Association. 2014. Diagnostic and statistical manual of mental disorders: DSM-5. American Psychiatric Association.
- [4] Pradeep Raj Krishnappa Babu, J. Matias Di Martino, Zhuoqing Chang, Sam Perochon, Kimberly L.H. Carpenter, Scott Compton, Steven Espinosa, Geraldine Dawson, and Guillermo Sapiro. 2023. Exploring Complexity of Facial Dynamics in Autism Spectrum Disorder. IEEE Trans. Affect. Comput. 14, 2 (2023), 919–930.
- [5] Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, and Louis Philippe Morency. 2018. OpenFace 2.0: Facial behavior analysis toolkit. In Proceedings - 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018, 59–66.
- [6] Zhuoqing Chang, J. Matias Di Martino, Rachel Aiello, Jeffrey Baker, Kimberly Carpenter, Scott Compton, Naomi Davis, Brian Eichner, Steven Espinosa, Jacqueline Flowers, Lauren Franz, Adrianne Harris, Jill Howard, Sam Perochon, Eliana M. Perrin, Pradeep Raj Krishnappa Babu, Marina Spanos, Connor Sullivan, Barbara K. Walter, Scott H. Kollins, Geraldine Dawson, and Guillermo Sapiro. 2021. Computational Methods to Measure Patterns of Gaze in Toddlers with Autism Spectrum Disorder. JAMA Pediatr. 175, 8 (2021), 827–836.
- [7] Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. (2016), 785–794.
- [8] L. Corona, J. Hine, A. Nicholson, C. Stone, A. Swanson, J. Wade, L. Wagner, A. Weitlauf, and Z. Warren. 2020. TELE-ASD-PEDS: A Telemedicine-based ASD Evaluation Tool for Toddlers and Young Children. Vanderbilt University Medical Center. Retrieved January 18, 2024 from https://vkc.vumc.org/vkc/triad/tele-asd-peds
- [9] Madalena Costa, Ary L. Goldberger, and C. K. Peng. 2005. Multiscale entropy analysis of biological signals. Phys. Rev. E 71, 2 (2005), 021906.
- [10] Geraldine Dawson, Kathleen Campbell, Jordan Hashemi, Steven J. Lippmann, Valerie Smith, Kimberly Carpenter, Helen Egger, Steven Espinosa, Saritha Vermeer, Jeffrey Baker, and Guillermo Sapiro. 2018. Atypical postural control can be detected via computer vision analysis in toddlers with autism spectrum disorder. Sci. Rep. 8, 1 (2018), 1–7.
- [11] Nicholas Deveau, Peter Washington, Emilie Leblanc, Arman Husic, Kaitlyn Dunlap, Yordan Penev, Aaron Kline, Onur Cezmi Mutlu, and Dennis P. Wall. 2022. Machine learning models using mobile game play accurately classify children with autism. Intell. Med. 6 (January 2022), 100057.
- [12] Deanna Dow, Taylor N. Day, Timothy J. Kutta, Charly Nottke, and Amy M. Wetherby. 2020. Screening for autism spectrum disorder in a naturalistic home setting using the systematic observation of red flags (SORF) at 18–24 months. Autism Res. 13, 1 (2020), 122–133.
- [13] Gianluca Esposito, Paola Venuti, Sandra Maestro, and Filippo Muratori. 2009. An exploration of symmetry in early autism spectrum disorders: Analysis of lying. Brain Dev. 31, 2 (2009), 131–138.
- [14] Joanne E. Flanagan, Rebecca Landa, Anjana Bhat, and Margaret Bauman. 2012. Head lag in infants at risk for autism: A preliminary study. Am. J. Occup. Ther. 66, 5 (2012), 577–585.
- [15] Kimberly A. Fournier, Chris J. Hass, Sagar K. Naik, Neha Lodha, and James H. Cauraugh. 2010. Motor coordination in autism spectrum disorders: A synthesis and meta-analysis. J. Autism Dev. Disord. 40, 10 (2010), 1227–1240.
- [16] J. A. Hanley and B. J. McNeil. 1982. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 1 (1982), 29–36.
- [17] Jordan Hashemi, Geraldine Dawson, Kimberly L.H. Carpenter, Kathleen Campbell, Qiang Qiu, Steven Espinosa, Samuel Marsan, Jeffrey P. Baker, Helen L. Egger, and Guillermo Sapiro. 2021. Computer Vision Analysis for Quantification of Autism Risk Behaviors. IEEE Trans. Affect. Comput. 12, 1 (2021), 215–226.
- [18] Warren Jones, Cheryl Klaiman, Shana Richardson, Christa Aoki, Christopher Smith, Mendy Minjarez, Raphael Bernier, Ernest Pedapati, Somer Bishop, Whitney Ence, Allison Wainer, Jennifer Moriuchi, Sew Wah Tay, and Ami Klin. 2023. Eye-Tracking–Based Measurement of Social Visual Engagement Compared With Expert Clinical Diagnosis of Autism. JAMA 330, 9 (September 2023), 854–865.
- [19] Pradeep Raj Krishnappa Babu, Vikram Aikat, J. Matias Di Martino, Zhuoqing Chang, Sam Perochon, Steven Espinosa, Rachel Aiello, Kimberly L. H. Carpenter, Scott Compton, Naomi Davis, Brian Eichner, Jacqueline Flowers, Lauren Franz, Geraldine Dawson, and Guillermo Sapiro. 2023. Blink rate and facial orientation reveal distinctive patterns of attentional engagement in autistic toddlers: a digital phenotyping approach. Sci. Rep. 13, 1 (May 2023), 1–11.
- [20] Pradeep Raj Krishnappa Babu, J. Matias Di Martino, Zhuoqing Chang, Sam Perochon, Rachel Aiello, Kimberly L.H. Carpenter, Scott Compton, Naomi Davis, Lauren Franz, Steven Espinosa, Jacqueline Flowers, Geraldine Dawson, and Guillermo Sapiro. 2023. Complexity analysis of head movements in autistic toddlers. J. Child Psychol. Psychiatry Allied Discip. 64, 1 (2023), 156–166.
- [21] Fernando De La Torre, Wen Sheng Chu, Xuehan Xiong, Francisco Vicente, Xiaoyu Ding, and Jeffrey Cohn. 2015. IntraFace. In 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2015, 1–8.
- [22] Rhiannon Luyster, Katherine Gotham, Whitney Guthrie, Mia Coffing, Rachel Petrak, Karen Pierce, Somer Bishop, Amy Esler, Vanessa Hus, Rosalind Oti, Jennifer Richler, Susan Risi, and Catherine Lord. 2009. The autism diagnostic observation schedule - Toddler module: A new module of a standardized diagnostic measure for autism spectrum disorders. J. Autism Dev. Disord. 39, 9 (2009), 1305–1320.
- [23] Katherine B. Martin, Zakia Hammal, Gang Ren, Jeffrey F. Cohn, Justine Cassell, Mitsunori Ogihara, Jennifer C. Britton, Anibal Gutierrez, and Daniel S. Messinger. 2018. Objective measurement of head movement differences in children with and without autism spectrum disorder. Mol. Autism (2018).
- [24] Maria Eleonora Minissi, Irene Alice Chicchi Giglioli, Fabrizia Mantovani, and Mariano Alcañiz Raya. 2022. Assessment of the Autism Spectrum Disorder Based on Machine Learning and Social Visual Attention: A Systematic Review. J. Autism Dev. Disord. 52, 5 (May 2022), 2187–2202.
- [25] Neil J. Perkins and Enrique F. Schisterman. 2005. The Youden index and the optimal cut-point corrected for measurement error. Biometrical J. 47, 4 (August 2005), 428–441.
- [26] Sam Perochon, J. Matias Di Martino, Kimberly L.H. Carpenter, Scott Compton, Naomi Davis, Brian Eichner, Steven Espinosa, Lauren Franz, Pradeep Raj Krishnappa Babu, Guillermo Sapiro, and Geraldine Dawson. 2023. Early detection of autism using digital behavioral phenotyping. Nat. Med. 29, 10 (October 2023), 2489–2497.
- [27] Sam Perochon, Matias Di Martino, Rachel Aiello, Jeffrey Baker, Kimberly Carpenter, Zhuoqing Chang, Scott Compton, Naomi Davis, Brian Eichner, Steven Espinosa, Jacqueline Flowers, Lauren Franz, Martha Gagliano, Adrianne Harris, Jill Howard, Scott H. Kollins, Eliana M. Perrin, Pradeep Raj, Marina Spanos, Barbara Walter, Guillermo Sapiro, and Geraldine Dawson. 2021. A scalable computational approach to assessing response to name in toddlers with autism. J. Child Psychol. Psychiatry Allied Discip. 62, 9 (2021), 1120–1131.
- [28] Sam Perochon, J. Matias Di Martino, Kimberly L.H. Carpenter, Scott Compton, Naomi Davis, Steven Espinosa, Lauren Franz, Amber D. Rieder, Connor Sullivan, Guillermo Sapiro, and Geraldine Dawson. 2023. A tablet-based game for the assessment of visual motor skills in autistic children. npj Digit. Med. 6, 1 (February 2023), 1–13.
- [29] Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, and Xiaokang Yang. 2022. Facial Geometric Detail Recovery via Implicit Representation. In 2023 IEEE 17th Int. Conf. Autom. Face Gesture Recognition, FG 2023.
- [30] Diana L. Robins, Karís Casagrande, Marianne Barton, Chi Ming A. Chen, Thyde Dumont-Mathieu, and Deborah Fein. 2014. Validation of the modified checklist for autism in toddlers, revised with follow-up (M-CHAT-R/F). Pediatrics 133, 1 (2014), 37–45.
- [31] R. Christopher Sheldrick, Alice S. Carter, Abbey Eisenhower, Thomas I. MacKie, Megan B. Cole, Noah Hoch, Sophie Brunt, and Frances Martinez Pedraza. 2022. Effectiveness of Screening in Early Intervention Settings to Improve Diagnosis of Autism and Reduce Health Disparities. JAMA Pediatr. 176, 3 (March 2022), 262–269.
- [32] Roberta Simeoli, Nicola Milano, Angelo Rega, and Davide Marocco. 2021. Using Technology to Identify Children With Autism Through Motor Abnormalities. Front. Psychol. 12 (May 2021).
- [33] Philip Teitelbaum, Osnat Teitelbaum, Jennifer Nye, Joshua Fryman, and Ralph G. Maurer. 1998. Movement analysis in infancy may be useful for early diagnosis of autism. Proc. Natl. Acad. Sci. U. S. A. 95, 23 (1998), 13982–13987.
- [34] Kazim Topuz, Akhilesh Bajaj, and Ismail Abdulrashid. 2023. Interpretable Machine Learning. In Proceedings of the Annual Hawaii International Conference on System Sciences, 1236–1237.
- [35] Raphael Vallat. 2018. Pingouin: statistics in Python. J. Open Source Softw. 3, 31 (November 2018), 1026.
- [36] Pauli Virtanen et al. 2020. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17, 3 (February 2020), 261–272.
- [37] Qiuhong Wei, Huiling Cao, Yuan Shi, Ximing Xu, and Tingyu Li. 2023. Machine learning based on eye-tracking data to identify Autism Spectrum Disorder: A systematic review and meta-analysis. J. Biomed. Inform. 137 (January 2023). DOI: https://doi.org/10.1016/J.JBI.2022.104254
- [38] Teresa H. Wen, Amanda Cheng, Charlene Andreason, Javad Zahiri, Yaqiong Xiao, Ronghui Xu, Bokan Bao, Eric Courchesne, Cynthia Carter Barnes, Steven J. Arias, and Karen Pierce. 2022. Large scale validation of an early-age eye-tracking biomarker of an autism spectrum disorder subtype. Sci. Rep. 12, 1 (March 2022), 1–13.