Alzheimer’s Disease Detection Using Comprehensive Analysis of Timed Up and Go Test via Kinect V.2 Camera and Machine Learning

Abstract— Alzheimer's disease (AD) is a progressive neurodegenerative disease affecting cognitive and functional abilities. However, many patients attribute their declining cognitive or functional abilities to normal aging and do not undergo clinical assessment until the symptoms are advanced. A low-cost, easy-to-use AD detection tool that can be used in any clinical or non-clinical setting could enable widespread AD assessment and diagnosis. This paper investigated the feasibility of developing such a tool to distinguish AD from healthy control (HC) subjects using a simple balance and walking assessment, the Timed Up and Go (TUG) test. We collected joint position data from 47 HC and 38 AD subjects as they performed the TUG test in front of a Kinect V.2 camera. Our signal processing and statistical analyses provided a comprehensive characterization of balance and gait, yielding 12 features that significantly discriminated AD from HC after adjusting for age and the Geriatric Depression Scale. Using these features and a support vector machine classifier, our model classified the two groups with an average accuracy of 97.75% and an F-score of 97.67% under five-fold cross-validation, and 98.68% and 98.67%, respectively, under leave-one-subject-out cross-validation. These results demonstrate the potential of our approach as a new quantitative complementary tool for detecting AD among older adults. Our work is novel in presenting the first application of the Kinect V.2 camera and machine learning to provide a comprehensive, quantitative analysis of the TUG test for distinguishing AD patients from HC. This study supports the feasibility of a low-cost, convenient AD assessment tool that could be used during routine checkups or even at home; future investigations should confirm its clinical diagnostic value in a larger cohort.

We visually inspected the filtered signals to identify and delete recordings with a significant amount of noise that the filtering process could not remove. For this purpose, we inspected the ankle joint signal because it has two simple conditions during the TUG test: the stance and swing phases. In the stance phase, the ankle joint position is constant, and during the swing phase it changes gradually. Thus, in the presence of high noise, we can easily notice abnormally large changes in the stance and swing phases of the ankle joint. Supplemental Fig. 2 shows three noisy samples of the right ankle joint signal. The pink rectangles indicate artifacts in the form of sharp transitions that were not removed by the filtering process. These recordings were too noisy to extract accurate gait features and were removed from the analysis.

Supplemental Fig. 2. Three samples removed from the analysis because of artifacts remaining in the filtered signals, as confirmed by visual inspection.
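The screening criterion above (constant position in stance, gradual change in swing) can also be expressed programmatically. The following is a minimal sketch of how such artifact-like sharp transitions could be flagged automatically; the function name `has_sharp_transitions` and the displacement threshold are illustrative assumptions, not part of the original (visual) inspection protocol.

```python
import numpy as np

def has_sharp_transitions(ankle_pos, jump_thresh=0.15):
    """Flag a recording whose filtered ankle-joint signal still contains
    artifact-like sharp transitions.

    ankle_pos   : 1-D array of ankle joint positions (metres), one axis.
    jump_thresh : maximum plausible frame-to-frame displacement (metres);
                  an illustrative assumption, not a value from the paper.
    """
    # Frame-to-frame displacement: small in stance (constant position)
    # and gradual in swing, so a large jump indicates an artifact.
    jumps = np.abs(np.diff(ankle_pos))
    return bool(np.any(jumps > jump_thresh))

# Example: a smooth swing-phase trajectory vs. one with an injected spike.
t = np.linspace(0, 1, 30)
clean = 0.5 * t                      # gradual change, as in a swing phase
noisy = clean.copy()
noisy[15] += 0.5                     # artifact: sharp transition

print(has_sharp_transitions(clean))  # False
print(has_sharp_transitions(noisy))  # True
```

Such a check could pre-screen recordings before visual inspection, but it does not replace the confirmation step described above.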

S3. Comparative analysis with other sensor-based AD assessment studies
Few studies have examined the TUG test comprehensively for the detection of Alzheimer's disease (AD). Wang et al. (2015) and Ansai et al. (2019) examined the TUG test and its subtasks using sensor technology for AD detection. There are some similarities and differences between our study and those two studies. Here, we compare their tools, methods, and results with our findings.
Recording tool: Ansai et al. (2019) analyzed TUG and its subtasks using the Qualisys system, which consists of seven cameras and 15 markers mounted on the subjects' bodies. Wang et al. (2015) used three inertial-sensor-based wearable devices (each comprising an accelerometer, a gyroscope, and a remote controller) mounted on the waist and the right and left feet. In contrast, we used skeletal data recorded with a single RGB-D camera, the Kinect V.2.
Feature extraction: Ansai et al. (2019) extracted 46 features during the transitions (sit-to-stand, turning, stand-to-sit, and walking forward and back). Their features were mainly kinematic values, such as the maximum and average trunk displacement and velocity in different directions, especially for the turning and turn-to-sit subtasks, which include transitions. Wang et al. (2015) extracted six features: TUG time, sit-to-stand and stand-to-sit durations, and the stride, stance, and swing phases during walking.
In comparison, we extracted mostly clinical features, including 61 spatiotemporal features during the walking subtask, as well as the duration and velocity of the other TUG subtasks. Several of our features, such as the duration of TUG and its subtasks and the number of steps, were also extracted in the study by Ansai et al. (2019).
Feature analysis: We additionally used machine learning to automatically discriminate healthy control (HC) from AD subjects based on the selected features, whereas Wang et al. (2015) and Ansai et al. (2019) did not.
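As a minimal sketch of the classification pipeline described in the main text (a support vector machine evaluated with five-fold cross-validation on 12 selected features from 47 HC and 38 AD subjects), the following uses synthetic stand-in data, since the real TUG features are not reproduced here; the RBF kernel, standardization step, and class separation are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the 12 selected TUG features
# (47 HC vs. 38 AD subjects, matching the cohort sizes in the paper).
X_hc = rng.normal(0.0, 1.0, size=(47, 12))
X_ad = rng.normal(1.0, 1.0, size=(38, 12))
X = np.vstack([X_hc, X_ad])
y = np.array([0] * 47 + [1] * 38)    # 0 = HC, 1 = AD

# SVM with feature standardization, evaluated by five-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f}")
```

Leave-one-subject-out evaluation follows the same pattern with `LeaveOneOut()` in place of the `StratifiedKFold` splitter.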
Findings: The method proposed by Ansai et al. (2019) provided valuable new metrics but relied on a more complex system. Wang et al. (2015) proposed an automatic system for TUG analysis using wearable devices and showed that it could discriminate between AD and HC subjects. To enable a clearer comparison, Supplemental Table 1 summarizes the features extracted from the TUG subtasks and the findings in our study and in the studies by Wang et al. (2015) and Ansai et al. (2019), both of which examined the TUG subtasks for discrimination between HC and AD older adults.

Supplemental Table 1. A comparative analysis of existing studies using TUG for AD detection.
– = not extracted; ↑ = significant increase among AD subjects compared to HC subjects; ↓ = significant decrease among AD subjects compared to HC subjects.