Better understanding fall risk: AI-based computer vision for contextual gait assessment.

Contemporary research to better understand free-living fall risk assessment in Parkinson's disease (PD) often relies on wearable inertial measurement units (IMUs) to quantify useful temporal and spatial gait characteristics (e.g., step time, step length).


Introduction
Age is one of the largest risk factors for the development and progression of Parkinson's disease (PD) [1]. Central to the clinical manifestation of PD is the abnormal choreography of movement, often marked by the emergence of walking/gait abnormalities that serve as surrogate biomarkers of disease progression [2][3][4] and are indicative of intrinsic fall risk [5].
For clinicians, the assessment of gait disturbances in PD represents a window into the underlying neurodegenerative processes at play. Specifically, gait assessment in PD sheds light on the interplay between motor control, sensory ability, cognitive function, and the pathology underlying falls [2,6]. It is the analysis of spatial and temporal gait characteristics, such as variability and asymmetry of step length, step time or stance time, that can provide valuable insight into fall risk in a non-invasive manner [2].
There are many different methodologies for assessing fall risk via gait, ranging from subjective/visual observation to objective use of instrumented walkways or 3D motion capture systems [7][8][9][10]. However, contemporary studies have seen wide adoption and growing use of wearable inertial measurement units (IMUs) [11][12][13][14]. IMU-based wearables are highly portable, unobtrusive, and a lower-cost option for quantifying objective gait characteristics in nearly any setting [15]. Specifically, the key driver of their adoption is their ability to track/monitor participants within their own free-living environments (e.g., home) during habitual activities, offering a much more natural insight into a participant's true fall risk [11,16].
Although IMUs have broken the major barrier to free-living assessment, they have a key flaw: an inability to provide information on absolute context. In short, IMU devices are blind, i.e., although they provide high-resolution inertial data (e.g., accelerometer), they fail to capture visual data pertaining to the (extrinsic) environment [17]. Consider inertial gait data gathered in a home where a person with PD (PwPD) must navigate stairs or rooms cluttered with furniture. Firstly, the environment may negatively impact the robustness of the algorithm used to interpret the inertial data, as algorithms are typically optimised for straight, level-ground walking [17]. Secondly, upon processing the free-living IMU data with an algorithm, the researcher must consider the following: are resulting abnormal gait characteristics (e.g., step time variability, stance time asymmetry, mean swing time) indicating (i) an intrinsic factor that underlies the person's disease state and inherent fall risk, or (ii) extrinsic factors influencing gait, whereby the person is naturally adapting to their environment? Use of a single IMU alone tells an incomplete story.
Wearable cameras could better contextualise free-living fall risk assessment, providing absolute clarity on walking path (e.g., straight and flat) and extrinsic factors (e.g., furniture, people, pets). Wearable body cameras (e.g., GoPro, https://gopro.com/en/gb/) exist but may generally be described as cumbersome and unsuitable [17]. Additionally, manual interpretation of recorded video data is extremely time consuming and ethically challenging due to possible capture of sensitive visual images/data [17,18]. To overcome this, wearable video-based glasses with eye-tracking functionality are an ergonomic and discreet means to capture contextual data, with the addition of knowing where the person is looking (gaze), an additional physiological measurement to inform fall risk (i.e., visual attention [19]). To process the video data in a timely and ethical manner, artificial intelligence (AI) based computer vision (CV) is suggested.
Compared to a human reviewer, a modern CV object detection model (e.g., YOLO [20], suitably trained via ground-truth datasets that have been manually viewed and annotated/labelled) can (i) dramatically reduce the time to annotate/label a video (e.g., automated analysis can run at approx. 25 video frames/second) and (ii) alleviate privacy concerns through automated environment interpretation, where the output can be a text file listing the obstacles (e.g., person, chair) and hazards (e.g., raised kerb) within view and/or the immediate walking path of the PwPD. However, should a researcher need to examine any video data, a list of privacy-sensitive objects can be defined (e.g., people, financial documents), detected, and obfuscated [21]. In this short communication, we provide an update on the use of a CV-based YOLO method to better inform free-living gait assessment within fall risk via free-living indoor and outdoor home environments and ask: can CV better inform fall risk for PwPD in the home?
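For illustration only, the following minimal sketch (not the study's exact pipeline) shows how an off-the-shelf YOLOv8 model, via the open-source ultralytics package, could emit a text listing of detected objects while blurring sensitive classes; the checkpoint, sensitive-class list, and blur padding are assumptions here.

```python
# A minimal sketch of automated detection plus obfuscation; the weights,
# class list and padding are illustrative assumptions, not the study's model.
import cv2
from ultralytics import YOLO

SENSITIVE = {"person", "book"}   # classes to obfuscate (assumed)
PAD = 50                         # extra privacy border in pixels (assumed)

model = YOLO("yolov8n.pt")       # pretrained checkpoint; the study fine-tunes its own

def annotate_frame(frame):
    """Return (obfuscated frame, text listing of detected objects)."""
    result = model(frame, verbose=False)[0]
    labels = []
    h, w = frame.shape[:2]
    for box, cls in zip(result.boxes.xyxy, result.boxes.cls):
        name = model.names[int(cls)]
        labels.append(name)
        if name in SENSITIVE:
            x1, y1, x2, y2 = map(int, box)
            # expand the sensitive region of interest by PAD pixels, clipped to frame
            x1, y1 = max(0, x1 - PAD), max(0, y1 - PAD)
            x2, y2 = min(w, x2 + PAD), min(h, y2 + PAD)
            frame[y1:y2, x1:x2] = cv2.GaussianBlur(frame[y1:y2, x1:x2], (51, 51), 0)
    return frame, labels
```

The text listing per frame, rather than the video itself, is what would normally be stored and reviewed, keeping the workflow privacy-preserving by default.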

Participants
The study was approved by the Northumbria University Institutional Review Board (Ref: 44692). All participants gave written informed consent prior to enrolment. Three PwPD attended the gait laboratory at the Northumbria University Coach Lane campus, where they were fitted with an IMU (MoveMonitor: https://www.mcroberts.nl/products/movemonitor/) worn around the waist. Participants also wore eye-tracking video glasses (Invisible: https://pupil-labs.com/products/invisible). The IMU and video glasses were time synchronised from a researcher's laptop. After a two-minute baseline walking test in the lab, participants were instructed to return home and go about their typical daily activities while wearing both technologies for approx. 3 hours.


IMU gait and CV Model
Key gait events, i.e., initial contact (IC) and final contact (FC), were derived from time-series IMU data via validated algorithms [23,24]. IC/FC events are key to quantifying clinically relevant gait characteristics (e.g., step time, swing time asymmetry) to gauge fall risk [5]. Here, we present IMU-derived plots with IC and FC events and corresponding gait characteristics.
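As a minimal sketch of how such characteristics follow from the events, assuming IC/FC times (in seconds, alternating feet) have already been extracted by a validated algorithm such as [23,24]; the definitions below follow common usage and are illustrative, not the study's exact formulations:

```python
# Derive illustrative gait characteristics from IC/FC event times (seconds).
import numpy as np

def gait_characteristics(ic, fc):
    """ic, fc: numpy arrays of event times, events alternating between feet."""
    step_times = np.diff(ic)                 # time between successive initial contacts
    m = min(len(ic) - 1, len(fc))
    swing_times = ic[1:m + 1] - fc[:m]       # FC to the next IC (one common approximation)
    left, right = step_times[::2], step_times[1::2]
    n = min(len(left), len(right))
    return {
        "step_time_variability": float(step_times.std()),                   # "STD" in Figure 1
        "step_time_asymmetry": float(np.abs(left[:n] - right[:n]).mean()),  # "Asym" in Figure 1
        "mean_swing_time": float(swing_times.mean()),
    }
```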
For a more detailed description of the YOLO model used in this scenario, please refer to [22]. In short, anonymisation (for privacy) and contextualisation of video data were achieved using a fine-tuned YOLOv8 model [22,25] trained on a novel local dataset of 3085 annotated images from within the home environments. Here, two object classes were categorised as sensitive, i.e., person and book (a catch-all for any written information); these were detected by the YOLO model and obfuscated to uphold PwPD privacy. Specifically, upon detection of those objects, a blur was applied to that sensitive region of interest (ROI) with an additional 50-pixel border around the object to enhance privacy. A naïve walking path was also incorporated into the model, i.e., a visualisation defining the immediate area where the person is walking (Figure 1a-d). That was achieved by specifying the point coordinates of a trapezoidal/perspective-warped rectangle shape to encompass the assumed walking path of the PwPD and to identify which objects the PwPD must walk around/over. Eye-tracking coordinates (quantified by the video glasses) were used to examine participant gaze, i.e., was the PwPD looking at their immediate walking path and/or an upcoming obstacle (e.g., chair, raised kerb)?
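The walking-path and gaze-intersection logic might look like the following minimal sketch; the trapezoid coordinates, frame size, and gaze point format are illustrative assumptions (the glasses report gaze in scene-camera pixel coordinates), not the study's implementation.

```python
# A sketch of the naive walking-path trapezoid and gaze-intersection checks.
import cv2
import numpy as np

FRAME_W, FRAME_H = 1088, 1080  # example scene-camera resolution (assumed)

# perspective-warped rectangle approximating the immediate walking path
path = np.array([[int(0.35 * FRAME_W), int(0.55 * FRAME_H)],   # top-left
                 [int(0.65 * FRAME_W), int(0.55 * FRAME_H)],   # top-right
                 [int(0.90 * FRAME_W), FRAME_H - 1],           # bottom-right
                 [int(0.10 * FRAME_W), FRAME_H - 1]],          # bottom-left
                np.int32).reshape(-1, 1, 2)

def gaze_context(gaze_xy, boxes):
    """Classify gaze relative to the path and detected objects.

    gaze_xy: (x, y) gaze point; boxes: list of (name, x1, y1, x2, y2)."""
    pt = (float(gaze_xy[0]), float(gaze_xy[1]))
    on_path = cv2.pointPolygonTest(path, pt, False) >= 0
    looked_at = [name for name, x1, y1, x2, y2 in boxes
                 if x1 <= pt[0] <= x2 and y1 <= pt[1] <= y2]
    return {"gaze_on_path": on_path, "gaze_on_objects": looked_at}
```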

Results
Upon returning home, participants had contrasting habitual patterns. From researcher video assessment, it was observed that participant #1 was often sedentary but undertook a walk beyond their home, Figure 1a. Participant #2 remained within the confines of their home, ambulating from their home to the garden, Figure 1b-d. Participant #3 was very sedentary and is not examined here. During continuous walking on level terrain, participant #1 displays normal gait, depicted by the stable and uniform oscillating signals, and their gaze (blue circle) focuses on their immediate walking path (green trapezoid), Figure 1a. In contrast, if IMU data only from participant #2 is examined (Figure 1b-d, left), they may appear to have significantly unstable gait at different time points and be considered to have an increased fall risk. However, when using video eye-tracking data, a better understanding can be derived. Specifically, CV adds context. Figure 1b: as the participant walks outside, the CV model identifies stairs/steps within the immediate path as well as obstacles and a person (obfuscated, left). Figure 1c: as the participant returns indoors, the CV model identifies an obstacle to the left but shows their gaze is on their immediate walking path (blue circle within green trapezoid), which overlaps with identified steps (blue box). A 10 s period of IMU data is displayed in two segments, where the beginning of the second period (red box on plot) corresponds to the participant transitioning from level walking to stair ascent. A dog/animal is also identified. Figure 1d: indoors, generic obstacles and a person (obfuscated) are identified. In this instance, gaze (blue circle) is beyond the immediate walking path (green trapezoid) and observes the animal (i.e., blue circle overlaps with blue box). Correspondingly, the IMU signals show a brief abnormal period (purple box) where the participant adjusted their gait due to the passing animal.

Discussion: Opportunities and challenges
Use of wearable video eye-tracking glasses and a contemporary CV model (e.g., YOLO) can provide objective environmental details to help contextualise free-living gait and better inform fall risk in PwPD within the home. Moreover, it can achieve added context without compromising participants' privacy, due to the ability to automatically describe the environment and, if video data does need to be investigated by a researcher, to obfuscate any sensitive content such as people and documents.
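As a concrete illustration, one possible shape of such a fusion step is sketched below, assuming time-synchronised streams of IMU-flagged abnormal walking windows and per-frame CV labels; the function names, structure, and frame rate are assumptions, not a validated workflow.

```python
# A minimal sketch of attaching CV context to flagged gait windows.
def contextualise(abnormal_windows, frame_labels, fps=25):
    """abnormal_windows: list of (start_s, end_s) flagged from IMU data;
    frame_labels: per-frame lists of detected object names from the CV model."""
    report = []
    for start, end in abnormal_windows:
        frames = range(int(start * fps), min(int(end * fps), len(frame_labels)))
        seen = sorted({label for i in frames for label in frame_labels[i]})
        report.append({"window_s": (start, end),
                       "context": seen or ["no extrinsic factor detected"]})
    return report
```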
Clearly, a limitation of this work is the small number of PwPD included, but the purpose here is to present an update on the use of a CV model with eye-tracking glasses to showcase how information about the home environment can be captured and automated for future, more rigorous work. Moreover, we do not detail how gait characteristics (e.g., step length variability) and CV data would be examined in tandem to define absolute fall risk. For example, there is a need to define an operational workflow algorithm for how periods of walking and any associated abnormal gait characteristics are investigated with contextual data to categorically identify fall risk. Indeed, in this short communication we present IMU data spanning several seconds with corresponding gait characteristics, compared to one video frame only within that period. Future work needs to examine how all time-series data are interpreted within the context of fall risk. Additionally, there is a need to fully consider how visual attention via eye-tracking is included to determine changes in gaze behaviour and risk of falling during adaptive walking [26], e.g., as PwPD transition to stair ambulation (Figure 1b-c) where many falls occur [27]. For example, use of clinically relevant visual attention data (e.g., fixations, saccades) from eye-tracking videos could better examine effects of PD on the perceptive judgment of stair step height in PwPD [28,29]. In this short communication we suggest how intersections of a naïve walking path, eye-tracker coordinates and detected obstacles may provide insight into gaze behaviour to inform fall risk. However, that needs to be thoroughly investigated, and future work should also consider peripheral obstacles (if intersecting with eye-tracking) to examine distractions beyond the immediate walking path and within the general environment [30].
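For illustration, fixations could be derived from gaze coordinates with a standard dispersion-threshold (I-DT) approach, as in the minimal sketch below; the dispersion and duration thresholds are illustrative assumptions that would need tuning for these glasses.

```python
# A minimal dispersion-threshold (I-DT) fixation detector.
import numpy as np

def detect_fixations(x, y, t, max_dispersion=35.0, min_duration=0.1):
    """x, y: gaze coordinates (numpy arrays, pixels); t: timestamps (s).
    Returns a list of (start_s, end_s) fixation intervals."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # grow the window while spatial dispersion stays below threshold
        while j + 1 < n and ((x[i:j + 2].max() - x[i:j + 2].min()) +
                             (y[i:j + 2].max() - y[i:j + 2].min())) <= max_dispersion:
            j += 1
        if t[j] - t[i] >= min_duration:
            fixations.append((t[i], t[j]))
            i = j + 1
        else:
            i += 1
    return fixations
```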
There are other technology-based challenges. First, although the IC/FC approach used here is valid for level walking, it is not suitable for fully understanding gait during transitions from that terrain to stairs or other inclined terrains [17]. Accordingly, IMU data need to be better investigated through a collection of time-series algorithms [31] or AI models that can account for changing walking terrains [32], which can be further contextualised by CV. Second, additional sensing technology may further refine fall risk. For example, depth perception is affected in PwPD [33], and so objective data on distance would be useful. Although distance is not easily achievable with a single camera, the addition of another sensing modality (e.g., light detection and ranging, LiDAR) would be a possible solution.
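If a depth sensor were aligned to the scene camera, obstacle distance could be read off detected bounding boxes as in the minimal sketch below; the sensor, pixel alignment, and units are assumptions here, not part of the present system.

```python
# A sketch of obstacle distance from a depth map aligned to the scene camera.
import numpy as np

def obstacle_distance(depth_map, box):
    """Median depth (assumed metres) inside a detected bounding box,
    ignoring invalid (zero) returns."""
    x1, y1, x2, y2 = map(int, box)
    roi = depth_map[y1:y2, x1:x2]
    valid = roi[roi > 0]
    return float(np.median(valid)) if valid.size else float("nan")
```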

Conclusion
Broadly, use of IMUs is a contemporary approach for gait assessment within a suite of tools (along with, e.g., cognition, sensory function) to inform habitual fall risk. Yet, use of IMUs alone is limited and needs a means to provide absolute clarity on abnormal IMU data; wearable eye-tracking glasses with AI-based CV offer one ethical and time-efficient route to that contextual clarity.


<Figure 1>
Figure 1: Example gait activity with IMU plots (left) and anonymised context (right); for illustrative purposes, the mean step time variability (STD) and asymmetry (Asym) values are presented for each scenario. (a) Participant #1 during a prolonged outdoor walk, where the IMU data show stable gait signals (e.g., no large fluctuations) and identified IC (green line/black dots) and FC (red lines/black x's) on level ground. (b) IMU data suggest abnormal gait (i.e., high step time values) due to large fluctuations and lack of a rhythmical signal, but this is explained by the AI-based CV as the participant navigates a door and steps. (c) Participant #2 displays normal gait (top plot, 0-5 s), but then there is some anomaly (red box), which is explained by their transition from level walking to stair ascent. (d) Participant #2 is now indoors and generally displays normal gait except for a brief anomaly (purple box), which is explained by the presence of an animal (blue box) within their walking path (green trapezoid); the participant has seen the animal, as their gaze (blue circle) is beyond their walking path and intersects with the blue-box-defined animal, suggesting their recognition of and adaptation to the obstacle.
