Feasibility of virtual low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson’s disease

A low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson's disease is a structured motor assessment administered by a trained examiner to a patient physically present in the same room; it utilizes sensors to generate output that facilitates the evaluation of the patient. However, motor assessments with the patient and the examiner in the same room may not be feasible because of the distance between them or the risk of transmitting infections. Therefore, we propose a protocol for remote assessment by examiners in different locations of both (A) videos of patients recorded during in-person motor assessments and (B) live virtual assessments of patients located apart from the examiners. The proposed procedure provides a framework for providers, investigators, and patients in widely separated locations to conduct the motor assessments required to develop precision-medicine treatment plans tailored to the specific needs of each individual patient. The proposed protocol lays the foundation for providers to remotely perform the structured motor assessments necessary for optimal diagnosis and treatment of people with Parkinson's disease and related conditions.


Rationale
Parkinson's disease, the second most common neurodegenerative disease, afflicts 1% of men and women aged 60 and over [3], and its prevalence rises with increasing age [4]. The prevalence of Parkinson's disease is increasing around the world [5]. Structured interviews and examinations to assess the motor impairments common in people with Parkinson's disease were developed to be performed in person in order to utilize the full capability of human observation by trained examiners [6][7][8]. Additionally, the identification of tremors in people with Parkinson's disease has been facilitated by the application of artificial intelligence to video recordings [9]. However, a comprehensive review of the vast spectrum of computer-assisted technologies [10] and wearable sensors [11] that have been developed to facilitate the diagnosis and treatment of Parkinson's disease is beyond the scope of this article.
Since people who develop Parkinson's disease may exhibit motor symptoms up to 3 years before they receive the diagnosis [12], simple procedures are needed to identify people with Parkinson's disease and related conditions in underserved locations through online assessments. Additionally, risk of infection, lack of transportation, and other environmental factors may prevent live ratings from being conducted. Therefore, we seek to develop a simple, inexpensive methodology for trained raters in different locations to simultaneously rate videos and live performances of structured motor assessments of individuals and to immediately generate a consensus rating of the output. For these reasons we have developed a protocol for conducting motor assessments online using Zoom [13]. We share the proposed protocol so that colleagues around the world can use it with underserved populations.

Guidelines for optimal remote evaluation of videotaped motor assessments conducted in person
Since statistical procedures assume that the underlying populations are independent and identically distributed [14], data generated through similar procedures is crucial to producing results suitable for meaningful analysis. The generation of independent and identically distributed data is facilitated by assessments conducted by the same trained rater, with every participant seated in the same chair with arms and without wheels, in the same room, at the same temperature, at the same time of day. Videotapes of the testing procedures provide the foundation for comparing the live ratings of the trained examiner with ratings of the recorded images.

Construction of protocol videotapes
A low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson's disease [7,8] was performed in person by an examiner certified in the Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) [6] on cohorts of participants with Parkinson's disease (N = 20), multiple system atrophy (N = 1), and typical development (N = 8), with the assistance of a technologist who recorded and processed the output of the accelerometers and videotaped the participant. Accelerometers were taped to the upper and lower extremities of participants before the participants were asked to perform twelve tasks modified from the MDS-UPDRS [6,7]. Each task (Table 1) of a low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson's disease [7,8] was administered to participants, and the score was recorded immediately by the examiner [7,8,15]. The procedure was performed to facilitate the identification of the severity of the typical impairments experienced by people with Parkinson's disease. The output provides the basis for clinicians to assess the current functioning of people with Parkinson's disease and to develop a treatment plan tailored to the specific needs of each individual.
A video camera on a tripod was utilized to record the assessments. The camera directly faced the participant for every recording and was placed in the same position at the edge of the table in the conference room used for the ratings, approximately three feet from the participant. The participant was seated in the same position in the conference room for every rating session. The recording files were copied to a computer for storage and future analysis. The original video recordings were uninterrupted files that included all of the discussions among the examiner, the participant, and the technologist before and after each task. The original videos were uploaded by an investigator in North America (JRB) to folders on Google Drive and then downloaded by an investigator in Africa (AE) from the folders on Google Drive to a computer for editing. To produce videos compact enough for efficient rating, the investigator in Africa (AE) constructed a separate video recording of each task with a second or so before and after the task [16,26,27]. A typical video editing session took 30 to 60 min. The process of uploading each original file to Google Drive, downloading it from Google Drive, and editing the videos down to the individual tasks took a day or two for a single video session of a single participant. The remote rating of the edited videos of a single assessment of a single participant and the consensus conference took one to two hours.

Table 1. Categories and descriptions of tasks assessed by a low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson's disease [7].

Rest tremor amplitude upper limb (RTU)
• The participant is instructed to sit quietly in a straight-back chair with hands on the arms of the chair and feet comfortably on the floor for three minutes.

Rest tremor amplitude upper limb counting (RTUC)
• The participant is instructed to sit quietly in a straight-back chair with hands on the arms of the chair and feet comfortably on the floor and to count out loud backwards from 30.

Rest tremor amplitude lower limb (RTL)
• The participant is instructed to sit quietly in a straight-back chair with hands on the arms of the chair and feet comfortably on the floor for three minutes.

Rest tremor amplitude lower limb counting (RTLC)
• The participant is instructed to sit quietly in a straight-back chair with hands on the arms of the chair and feet comfortably on the floor and to count out loud backwards from 30.

Postural tremor of the hands (PT)
• The participant is asked to separately stretch each hand out in front of the body with palms down and fingers comfortably separated so that they do not touch each other for ten seconds.

Finger tapping (FT)
• The participant is instructed to tap the index finger to the thumb as fast and fully as possible for 60 repetitions.

Hand movements (HM)
• The participant is instructed to make a tight fist and to open and close the fist as fast and fully as possible for 60 repetitions.

Pronation-supination movements of the hands (PS)
• The participant is instructed to stretch each arm with palm down and to turn the palm up and down as fast and fully as possible for 60 repetitions.

Toe tapping (TT)
• The participant is instructed to place each foot comfortably on the floor and to keep the heel on the floor while tapping the foot up and down as fast and fully as possible for 60 repetitions.

Leg agility (LA)
• The participant is instructed to place each foot comfortably on the floor and to raise the foot up and down as fast and fully as possible for 60 repetitions.

Arising from chair upper limbs (ACU)
• The participant is instructed to cross arms on chest and to stand up from sitting.

Arising from chair lower limbs (ACL)
• The participant is instructed to cross arms on chest and to stand up from sitting.
The raters and the participants had no communication after the original videos were obtained, with the exception that the healthy participants in this study were all co-authors who continued to collaborate. One of the healthy men (JRB) also participated in the construction and rating of the edited videos. There was no online contact between raters and other participants at the time of the ratings of the videos. The original clinical scores obtained by the trained examiner and the instrumentation signals and their transforms have been published [15][16][17][18][19][20][21][22][23]. We now seek to develop, and to share with colleagues around the world, a protocol for raters in different locations to rate the videotapes.
A preliminary step before the rating sessions was to prepare the videos in a format convenient for efficient rating. The original videotapes were uninterrupted recordings of all activities of the participants and raters, including the intermissions between tasks during which the technologist processed and recorded the output of the accelerometers and set up the computer to record the next task. There were typically several minutes of videotape between the actual tasks of the protocol. Since those intermissions are not rated, they were edited out of the videotapes provided to the raters for each rating session. The original video segments are too large to be sent as email attachments, so they were placed in folders on iCloud for storage and transmission to team members. Uploading a video of 30 to 60 min may take several hours, downloading the videos from iCloud is tedious and time-consuming, and technical problems may interfere with quick and effective transmission. An examiner in North America who conducted the original in-person motor assessments and also took part as a participant with typical development (JRB) and an examiner in Africa (AE) met online to view the original videotapes lasting 30 to 60 min and to construct video clips of each specific motor task, with a second or so before the start of the task and a second or so after its completion, generating separate clips of each task lasting approximately 15 to 180 s. The initial attempts to construct video clips suitable for online rating were halted after edited files placed on Mendeley Data from North America did not play with sound in North America; limitations of the computers and Internet systems likely prevented accurate transmission and playback. The edited files could be placed on Mendeley Data from Africa and could be sent and heard when played in Africa. In order to proceed with the project for international publication, the investigator in Africa (AE) took on the role of corresponding author to place the video clips and related materials on Mendeley Data [16,26,27]. The investigators in North America (JRB) and Africa (AE) met regularly to check the accuracy of the sights and sounds of each video clip placed on Mendeley Data [16,26,27]. The investigators proceeded with the placement of videos and related materials on Mendeley Data with the hope that they could be accessed by colleagues for future studies.
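The clip extraction described above was carried out by hand with video-editing software. As an illustration only, and not part of the published protocol, the following Python sketch shows how per-task clips with roughly one second of padding could be cut automatically with ffmpeg, assuming a hypothetical file tasks.csv that lists each task with its start and end time (in seconds) within the original recording.

```python
# Illustrative sketch only: cut one clip per task from an original session video.
# Assumes a hypothetical CSV "tasks.csv" with columns task,start_s,end_s noted
# while reviewing (or logging) the original recording, and ffmpeg on the PATH.
import csv
import subprocess

PAD = 1.0  # keep roughly one second before and after each task

def cut_clips(source_video: str, task_csv: str) -> None:
    with open(task_csv, newline="") as f:
        for row in csv.DictReader(f):
            start = max(float(row["start_s"]) - PAD, 0.0)
            end = float(row["end_s"]) + PAD
            out = f"{row['task']}.mp4"
            # Re-encoding keeps the cuts frame-accurate; "-c copy" would be
            # faster but can only cut at keyframes.
            subprocess.run(
                ["ffmpeg", "-y", "-i", source_video,
                 "-ss", f"{start:.1f}", "-to", f"{end:.1f}", out],
                check=True,
            )

if __name__ == "__main__":
    cut_clips("session_participant01.mp4", "tasks.csv")
```

Automating the cuts in this way could reduce the day or two currently needed to edit a single session by hand, although the resulting clips would still need to be checked for sound and image quality before being shared.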
Rating of the videos of the original assessments was conducted by a team of trained raters certified in the MDS-UPDRS [6] who were all in different physical locations. For this preliminary study to demonstrate the feasibility of online video rating by raters at different international locations, six raters certified in the MDS-UPDRS [6] were recruited. The ratings were performed by the six raters, including the two raters (JRB and AE) who took part in the earlier aspects of the study. JRB and AE were not blind to the content of the video clips because they had participated in the earlier parts of the study. The other four raters had no knowledge of the content of the videos and were provided no information about the participants in the videos. The videos were presented to all raters for the rating sessions with only the participant identification (ID) number; the diagnosis, age, and sex of the participants in the videos were not provided to the raters during the rating process. Therefore, the other four raters were blind to the clinical and demographic data about the participants. Because this was a preliminary study of the feasibility of the protocol, the outcome is reported without a claim of statistical significance. We had sought to obtain at least ten blind raters for the project; however, we proceeded to assess feasibility, as reported in this article, with only six raters. For this preliminary assessment of feasibility, we encountered much missing data (Supplementary Table 1). Supplementary Table 1 contains the independent scores (0, 1, 2, 3, 4) of the six identified independent raters of videos of the motor tasks of a low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson's disease [7] for five participants, along with the corresponding percentage agreements for consensus scores, represented by %. Missing data is represented by a period (.) in Supplementary Table 1. The headings include the number and abbreviation of each task. Appendix 1 [7] in the Supplementary data contains the coding form, with the numbers and names of each task, utilized by raters for each rating session.
Although we strove to schedule rating sessions at times convenient for all raters, not all raters participated in all sessions. There were many Internet disconnections, so only some ratings were obtained from some raters for some items. We proceeded with the project despite the small number of raters and the missing data in order to identify the strengths and weaknesses of the procedure for further development, and we sought to establish that the procedure is feasible for a team of diverse raters, each in a different location. A power analysis would facilitate the determination of the number of raters needed for a definitive statistical analysis in future investigations. Due to many technical discrepancies (lack of sound and blurred images) observed by the original investigator (JRB), the other investigator (AE) hosted all joint sessions and presented the videos on his monitor to screen share with the other raters. After the twelve tasks of a low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson's disease [7,8] were edited into separate video clips, the two examiners who had seen the videos (AE and JRB) recruited a team of raters certified in the MDS-UPDRS [6] to rate each video simultaneously. Altogether there were six raters: the two who had participated in the design and conduct of the study (AE and JRB) and four who had never participated in the study. For each rating session the investigator (AE) provided only the participant identification (ID) number without any clinical or demographic information. The four recruited raters were blind to the identity of the participants in the videos, while the two raters who participated in the design and conduct of the study were not. Rating sessions were scheduled when all raters could attend; however, there was missing data for each rater due to Internet disconnections. Videotape ratings are facilitated by conducting the task so that all raters have similar experiences. In the past, when raters could readily gather together, trained raters were seated in a room with a similar view of a monitor to observe videotapes jointly; raters were allowed to see the recordings once and were required to make ratings independently [24,25]. Immediately after completing all ratings without consultation with anyone else, the raters conducted a consensus conference to discuss the findings and to attain a score agreed upon by all raters.
The onset of severe respiratory infections led to the prohibition of in-person gatherings. Therefore, this protocol was developed to provide a uniform rating experience for independent raters using monitors in different locations. Although the raters are not physically present in the same room, they are provided one joint viewing on their own monitors. The raters then independently complete their rating immediately after being shown each task. After all ratings are completed, the raters send their scores to the investigator (AE), who then conducts a consensus conference in which the raters discuss each task with each other for the first time. The team then seeks to agree on a score for each item.

Method feasibility
Rating sessions were hosted by the investigator (AE), who presented the clips [16,26,27] for viewing and provided to the raters only the code number of the participant, without name, age, sex, diagnosis, or other identifying characteristics. The investigator (AE) stated the code number of the participant to be entered by the raters on the score sheet (see Appendix 1 [7] in Supplementary data). The investigator (AE) then stated the number and the name of each task [7] so that the raters could read the scoring instructions for that task. The raters were provided no other information about each participant; no demographic or clinical data about participants was provided to the raters. The investigator (AE) then played the relevant video clip once on his monitor, screen shared with the raters, and asked each rater to score the task independently without discussion with anyone else. The investigator (AE) asked all raters to apply the scoring instructions to all segments of the video clips, even though the video segments were much longer than those for the MDS-UPDRS [6]; the duration of repetitive items included up to 60 or so repetitions in order to generate enough signals suitable for analysis [7]. The raters were asked to apply the scoring criteria (see Appendix 1 [7] in Supplementary materials) as best they could to the observed video segments, despite blurring and delays in the videos resulting from technical limitations of the cameras, computers, monitors, and Internet disconnections. After all twelve items [7] were scored independently by the raters, the investigator (AE) asked all raters to send him electronic copies of their score sheets. Then the investigator (AE) conducted a consensus conference with all raters to attain agreement on the score for each item. If the raters did not agree, a period was entered as the consensus score to indicate that consensus was missing.
To assess interrater reliability, percent agreement was calculated for each item (Supplementary Table 1). The agreement attained among trained raters viewing videos remotely is comparable to the agreement attained in other studies among trained raters viewing videos in person in the same room [24,25]. In order to improve agreement among raters, consensus conferences were conducted immediately after each rating session. Each rater stated their independent score, the raters then discussed the bases for their scores, and the discussions led to agreement on a consensus score after considering the opinions of the other raters. The consensus conference therefore provides an overall estimate of the viewpoints of the team of raters.
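Percent agreement can be defined in several ways; as a minimal sketch, assuming a hypothetical spreadsheet laid out like Supplementary Table 1 (one row per task and participant, one column per rater, a period for missing scores), the following Python code reports for each row the share of available raters whose score matches the most common score. The file name and column names are assumptions for illustration.

```python
# Minimal sketch of one common definition of percent agreement: the share of
# available raters whose score matches the modal score for that item.
# The file name and rater column names are hypothetical.
import pandas as pd

RATERS = ["rater1", "rater2", "rater3", "rater4", "rater5", "rater6"]

def percent_agreement(row: pd.Series) -> float:
    scores = row[RATERS].replace(".", pd.NA).dropna().astype(int)
    if len(scores) < 2:
        return float("nan")  # agreement is undefined with fewer than two ratings
    return 100.0 * scores.value_counts().max() / len(scores)

scores = pd.read_csv("supplementary_table_1.csv")
scores["pct_agreement"] = scores.apply(percent_agreement, axis=1)
print(scores[["task", "pct_agreement"]])
```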
The consensus scores of the six raters for the test rating sessions of five participants aged 66.2 ± 9.2 (range 55 to 76) years, three healthy men with typical development and two participants with Parkinson's disease (PD; one man and one woman), are presented in Table 2. Altogether 110 consensus scores were attained by the six raters for the 22 video clips of the protocol for each of the five participants. Raters on three continents agreed on 28 scores of 0, 14 scores of 1, 4 scores of 2, and 7 scores of 3. The raters did not agree on scores for 57 video clips, indicated by periods (.) in Table 2. Table 2 contains the item numbers corresponding to the original items in the MDS-UPDRS [6] and the low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson's disease [7]. Interruptions and blurring of video images presented challenges for rating using the techniques of visual observation developed for live rating, and Internet disconnections prevented some ratings during the experimental sessions. Nonetheless, virtual assessment of motor assessment videos presented on the monitor of an investigator (AE) and rated by examiners with different monitors in different locations can be accomplished. Agreement for tasks with stationary positions of the extremities was better than for repetitive movements.

Protocol for the remote evaluation of motor assessments conducted online
A low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson's disease [7,8] was also administered remotely, with the participant, the investigator, and the raters online in different locations. If the movements of the participant could not be visualized by the others, the participant was asked to position himself and his camera so that a clear image could be obtained on the monitor. The investigator (AE) stated the identity of the participant to be entered by the raters on the score sheet (see Appendix 1 [7] in Supplementary data). The investigator (AE) then stated the number and the name of each task (see Appendix 1 [7] in Supplementary materials) so that the raters could read the scoring instructions for that task. The investigator (AE) presented the monitor of the participant to all raters while he administered each task to the participant. The raters saw the participant during the administration of each task and scored each task immediately after it was accomplished. The investigator (AE) asked each rater to score the tasks independently without discussion with anyone else. The investigator (AE) asked all raters to apply the scoring instructions (see Appendix 1 [7] in Supplementary materials) to all tasks in the live rating sessions, even though the tasks were conducted for durations much longer than the instructions for the MDS-UPDRS [6]. The raters were asked to apply the scoring criteria as best they could to the live tasks performed by the participant on their monitors, despite blurring of fine movement, delays in transmission to the computers of the raters, and Internet disconnections. After all twelve items were scored independently by the raters, the investigator (AE) asked all raters to send him electronic copies of their score sheets. The participant and the raters agreed on the date and time of the next monthly rating session, and the participant was thanked for the session and excused. Then the investigator (AE) conducted a consensus conference with all raters to attain agreement on the score for each item. If the raters did not agree, a period was entered as the consensus score to indicate that consensus was missing. The online ratings are continuing, so results are not yet available.

Advantages
The proposed protocol provides the means for two experts in different locations to extract the motor assessment tasks from original videos containing extensive intermissions between the motor tasks that are irrelevant to the rating process. The protocol also provides the means for the experts to check each other while constructing separate videos containing only the segments needed to rate each task.
The proposed protocol provides the framework for raters using different computers in different locations to independently and identically perform structured ratings of motor assessment videos, coordinated by an investigator utilizing yet another computer in another location. Since the trained raters conduct independent ratings of identical video screenings, the assumptions of probability theory and mathematical statistics are satisfied [14]. All raters are shown the test video once on the shared screen of the investigator's monitor, score it independently immediately after viewing, and transmit their scores to the investigator (AE) at the same time. The investigator (AE) then leads a discussion among the raters to agree on a consensus score for each task.

Disadvantages
The original testing procedure, conducted jointly in person in a room by the participant, the examiner, and the technologist, requires an hour or so to accomplish. One problem with the protocol is the need for the examiner to tape accelerometers to the extremities of the participant and to connect the accelerometers by wires to a data logger connected by wires to a laptop computer before administering the tasks. Another problem is the need for the technologist to program the laptop to record the output from the accelerometers and then to process the output to generate and record signals for future analysis. Since the participant is connected by wires to a data logger, the movements of the participant are limited to raising and lowering the extremities and standing up and sitting down.
The original video files of 30 to 60 min are huge electronic files that are too large to send as email attachments. Placing them in folders on iCloud may take several hours, and downloading the original files from iCloud is time-consuming. Viewing the original videos to extract only the portions containing the motor tasks is tedious and time-consuming.
A further disadvantage of the protocol is the amount of time required for recording, uploading, editing, and sharing the video components. We have demonstrated that the protocol is feasible for experts certified in the MDS-UPDRS [6] in distant locations. The protocol provides the means for a team of experts to conduct complex assessments that generate similar output for comparison and contrast, and it thus establishes the framework for the precision and accuracy required for optimal clinical investigations. While an alternative protocol could utilize an expert at the original assessment site to record, edit, and share the videos, such a protocol is not feasible in underserved regions lacking experts with the required skills. Another alternative protocol, which we are currently developing ourselves, could utilize a remote test administrator certified in the MDS-UPDRS [6] who views the live assessment conducted by a test facilitator in the same room as the participant, without editing. A further alternative could be the development of software that cues the test administrator at the start and stop of each task, eliminating the need to edit the video by hand.
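As one possible sketch of the cueing idea mentioned above (our illustration, not an implemented tool), the following Python script logs the start and end time of each task relative to the start of the session so that clips could later be cut without reviewing the full recording, provided the log is started at the same moment as the video. The task abbreviations follow Table 1, and the output file name is an assumption chosen to match the hypothetical tasks.csv in the clip-cutting sketch earlier.

```python
# Illustrative sketch of task cueing: log start and end times of each task
# relative to the session start. The output format matches the hypothetical
# tasks.csv used in the earlier clip-cutting sketch.
import csv
import time

TASKS = ["RTU", "RTUC", "RTL", "RTLC", "PT", "FT",
         "HM", "PS", "TT", "LA", "ACU", "ACL"]

def run_session(outfile: str = "tasks.csv") -> None:
    session_start = time.monotonic()
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["task", "start_s", "end_s"])
        for task in TASKS:
            input(f"Press Enter when task {task} starts...")
            start = time.monotonic() - session_start
            input(f"Press Enter when task {task} ends...")
            end = time.monotonic() - session_start
            writer.writerow([task, f"{start:.1f}", f"{end:.1f}"])

if __name__ == "__main__":
    run_session()
```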

Videos
The videos were obtained at the time of the live in-person ratings utilizing a video camera on a tripod or a mobile phone on a tripod. The original videotapes were copied to be provided to members of the team and were copied further to generate the edited clips of the specific tasks for this study. The quality of the images deteriorated with time, storage, and copying, and there was blurring of images and sounds in the videos used for rating by the teams.

Online rating sessions
Online sessions were conducted with each person on a different electronic device in a different location. Each person experienced occasional interruptions of Internet connections. The basic online program ended meetings after 40 min, so there were intermittent disconnections. When individuals were disconnected, the investigator (AE) sent them a new link to connect to a new online meeting.

Human visual observation
Drawbacks to the use of clinical ratings [6][7][8] include the reliance on real-time human vision to quantify small differences in motion and significant inter-rater variability due to inherent subjectivity in scoring. Rating tools are semi-quantitative by design, and there is additional subjectivity in administering them; furthermore, these tools are not blinded in clinical settings.

Future directions

Raters
Replication of the proposed protocol by colleagues independently at multiple centers will provide the foundation to confirm the usefulness of the procedure. To satisfy the assumption that the ratings are generated from populations that are identically and independently distributed, future studies will include teams of raters who complete ratings for all assessments in exactly the same manner, with an adequate sample size to apply statistical analysis and yield significant findings [28]. A power analysis based on the initial findings of our preliminary feasibility study [29,30] would be limited by the extensive missing data (Supplementary Table 1); nevertheless, a power analysis would facilitate the determination of the number of raters needed for a definitive statistical analysis in future investigations. Reliability can be studied by the use of the kappa statistic [30,31]. We have used percent agreement [32] to obtain a rough estimate of the reliability in our preliminary study of feasibility [29,30].
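As a minimal sketch of the kappa approach, assuming the same hypothetical spreadsheet layout described earlier (one column per rater, a period for missing scores), Fleiss' kappa could be estimated with statsmodels on the items scored by every rater; handling the extensive missing data properly would require a more careful analysis. The file and column names are assumptions.

```python
# Minimal sketch: Fleiss' kappa across the six raters, restricted to the items
# that every rater scored. File and column names are hypothetical.
import pandas as pd
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

RATERS = ["rater1", "rater2", "rater3", "rater4", "rater5", "rater6"]

scores = pd.read_csv("supplementary_table_1.csv")
complete = scores[RATERS].replace(".", pd.NA).dropna().astype(int)

# aggregate_raters turns the (items x raters) score matrix into
# (items x categories) counts, the input format fleiss_kappa expects.
counts, _categories = aggregate_raters(complete.to_numpy())
print(f"Fleiss' kappa on {len(complete)} fully rated items: {fleiss_kappa(counts):.2f}")
```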

Wireless sensors
The construction of wireless wearable sensors will be crucial to the development of a protocol suitable for marketing to practitioners. The current protocol is limited by the restrictions imposed on the movements of the participants by the connections between the instrumentation on the participants and the recording equipment. The accelerometers fell off participants and required re-taping to reposition them, and the movement of the participant was restricted to standing in place due to the limited lengths of the cords. Wireless sensors will allow walking to be included in the assessments. Additionally, wireless sensors will facilitate continuous evaluation of participants in the clinic and throughout the day while they conduct social, occupational, and educational activities.

Videos
Optimal video cameras will be used to generate high-quality images. Three video cameras will be used to provide clear images of participants from three directions.

Online rating sessions
Optimal online monitoring will be used for meetings. Raters may be allowed to view videos at their convenience, to view videos as many times as they like in order to determine their optimal ratings, and to view the videos at slow speed to facilitate detecting rapid movements. Raters may even view the videos frame by frame in order to count repetitions and interruptions and to measure changes in rate and rhythm.
Future investigations will be enhanced by including more participants with and without Parkinson's disease and more trained raters. Using optimal cameras and monitors will enhance the quality of images presented to raters. Use of a toggle switch to examine individual frames will help rating by visual observation. Colleagues around the world can utilize this virtual protocol to obtain state-of-the-art motor assessments of people with PD and other conditions.

Methods
Several novel techniques provide foundations upon which to develop and enhance this protocol. We seek to collaborate with those who have developed innovations that could enhance our proposed protocol. We are developing this protocol alongside related techniques [10,15-23,26,27,29,30,33-46]. In particular, we are exploring ways to utilize computers and other technologies to construct methods to efficiently and effectively measure movements in clinical settings [10,15-23,26,27,29,33-46], to generate signals and transforms that yield signatures of Parkinson's disease and related conditions [7-12,15-27,29,30], and to employ artificial intelligence to provide diagnoses for immediate use by providers [33,43-46].

Rating by artificial intelligence
Artificial intelligence may improve the scoring of the MDS-UPDRS [6,33,43-46]; however, machine learning does not yet match the accuracy of trained human raters [44]. A comprehensive review of the technology to rate motor assessments is beyond the scope of this article. While the use of machine learning removes the limitations of human vision, it also removes the judgment of experienced human raters. The current protocol has the advantage of using experienced human raters certified in the MDS-UPDRS. Competent clinicians who evaluate and treat patients with movement disorders require assessments by experienced humans to detect the subtleties that distinguish the various movement disorders, and clinicians are unwilling to base the diagnosis and treatment of people solely on the output of artificial intelligence without interpretation by an experienced human. We ourselves are developing the use of neural networks to evaluate the output of our technology [33,46].

Conclusion
The proposed protocol provides the means for examiners in different locations to remotely administer and score structured motor assessments utilizing a low-cost quantitative continuous measurement of movements in the extremities of people with Parkinson's disease [7,8]. The protocol provides the framework for examiners in a spectrum of different locations to remotely view and score structured motor assessments both on videos and on live monitors. The feasibility of remote assessment of patients on videos and on live monitors has been demonstrated by a team of six raters in different locations on three continents [Africa (AE, AOS, HA, MME), Asia (SJ), and North America (JRB)]. The proposed protocol can be utilized by providers, investigators, and patients in distant locations throughout the world, and it will facilitate the remote application of precision medicine to develop treatment plans tailored to the specific needs of individual patients. Colleagues around the world can utilize these protocols to conduct motor assessments with participants and examiners in vastly different geographical locations, providing a basis for telemedicine throughout the world. Utilization of state-of-the-art wearable sensors [10,34-46] will enhance future investigations.

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability
All data is provided in the references and the supplementary data.