Article

Intelligent Millimeter-Wave System for Human Activity Monitoring for Telemedicine

1 Department of Electrical and Computer Engineering, University of Dayton, 300 College Park, Dayton, OH 45469, USA
2 Electrical Engineering Department, Jubail Industrial College, Royal Commission for Jubail and Yanbu, Jubail Industrial City 31961, Saudi Arabia
3 Department of Physical Therapy, University of Dayton, 300 College Park, Dayton, OH 45469, USA
* Author to whom correspondence should be addressed.
Sensors 2024, 24(1), 268; https://doi.org/10.3390/s24010268
Submission received: 20 November 2023 / Revised: 13 December 2023 / Accepted: 21 December 2023 / Published: 2 January 2024
(This article belongs to the Special Issue Advances in Mobile Sensing for Smart Healthcare)

Abstract

Telemedicine has the potential to improve access to and delivery of healthcare for diverse and aging populations. Recent advances in technology allow for remote monitoring of physiological measures such as heart rate, oxygen saturation, blood glucose, and blood pressure. However, the ability to accurately detect falls and monitor physical activity remotely, without invading privacy or requiring users to remember to wear a costly device, remains an ongoing concern. Our proposed system utilizes a millimeter-wave (mmwave) radar sensor (IWR6843ISK-ODS) connected to an NVIDIA Jetson Nano board for continuous monitoring of human activity. We developed a PointNet neural network for real-time human activity monitoring that can provide activity data reports, tracking maps, and fall alerts. Using radar helps to safeguard patients’ privacy by abstaining from recording camera images. We evaluated our system for real-time operation and achieved an inference accuracy of 99.5% when recognizing five types of activities: standing, walking, sitting, lying, and falling. Our system would facilitate the ability to detect falls and monitor physical activity in home and institutional settings to improve telemedicine by providing objective data for more timely and targeted interventions. This work demonstrates the potential of artificial intelligence algorithms and mmwave sensors for human activity recognition (HAR).

1. Introduction

Demands on the healthcare system associated with an aging population pose a significant challenge to nations across the world. Addressing these issues will require the ongoing adaptation of healthcare and social systems [1]. According to the United States Department of Health and Human Services, those aged 65 and older comprised 17% of the population in 2020, but this proportion is projected to rise to 22% by 2040 [2]. Further, the population of those aged 85 and above is projected to double. Older adults are more prone to chronic and degenerative diseases such as Alzheimer’s, respiratory diseases, diabetes, cardiovascular disease, osteoarthritis, stroke, and other chronic ailments [3], which require frequent medical care, monitoring, and follow-up. Further, many seniors choose to live independently and are often alone for extended periods of time. For example, in 2021, over 27% (15.2 million) of older adults residing in the community lived alone [2]. One major problem for older adults who choose to live alone is their vulnerability to accidental falls, which are experienced by over a quarter of those aged 65 and older annually, leading to three million emergency visits [4]. Recent studies confirm that preventive measures through active monitoring could help curtail these incidents [5].
Clinicians who treat patients with chronic neurological conditions such as stroke, Parkinson’s disease, and multiple sclerosis also encounter challenges in providing effective care. This can be due to difficulty in monitoring and measuring changes in function and activity levels over time and assessing patient compliance with treatment outside of scheduled office visits [6]. Therefore, it would be beneficial if there were accurate and effective ways to continuously monitor patient activity over extended periods of time without infringing on patient privacy. For these reasons, telemedicine and continuous human activity monitoring have become increasingly important components of today’s healthcare system because they can allow clinicians to engage remotely using objective data [7,8].
Telemedicine systems allow for the transmission of patient data from home to healthcare providers, enabling data analysis, diagnosis, and treatment planning [9,10]. Given that many older people prefer to live independently in their homes, incorporating and improving telemedicine services has become crucial for many healthcare organizations [11], a sentiment supported by the 37% of the population that utilized telemedicine services in 2021 [12]. Telemedicine monitoring facilitates the collection of long-term data, provides analysis reports to healthcare professionals, and enables them to discern both positive and negative trends and patterns in patient behavior. These data are also essential for real-time patient safety monitoring, alerting caregivers and emergency services during incidents such as a fall [13]. This capability is valuable for assessing patient adherence and responses to medical and rehabilitation interventions [14]. Various technologies have been developed for human activity recognition (HAR) and fall detection [15]. However, non-contact mmwave-based radar technology has garnered considerable attention in recent years due to its numerous advantages [16], such as its portability, low cost, and ability to operate in different ambient and temperature conditions. Furthermore, it provides more privacy compared to traditional cameras and is more convenient than wearable devices [17,18].
The integration of mmwave-based radar systems in healthcare signifies notable progress, specifically in improving the availability of high-quality medical care for patients in distant areas, thus narrowing the disparity between healthcare services in rural and urban regions. This technological transition allows healthcare facilities to allocate resources more efficiently to situations that are of higher importance, therefore reducing the difficulties associated with repeated hospital visits for patients with chronic illnesses. Moreover, these advancements enhance in-home nursing services for the elderly and disabled communities, encouraging compliance with therapeutic treatments and improving the distribution of healthcare resources. Crucially, these sophisticated monitoring systems not only enhance the quality and effectiveness of treatment but also lead to significant cost reductions. These advancements play a crucial role in helping healthcare systems effectively address the changing requirements of an aging population, representing a significant advancement in modern healthcare delivery.
While mmwave-based radar technology offers significant advantages for HAR and fall detection, the complexity of the data it generates presents a formidable challenge [19]. Typically, radar signals are composed of high-dimensional point cloud data that is inherently information-rich, requiring advanced processing techniques to extract meaningful insights. Qi et al. [20] proposed PointNet, a deep learning architecture that enables the direct classification of point cloud data such as that produced by mmwave-based radar. Their model preserves spatial information by processing point clouds in their original form. The combination of mmwave radar and PointNet can help HAR applications by improving their performance in terms of precision, responsiveness, and versatility across a wide range of scenarios [21]. Accordingly, we utilized the PointNet algorithm to process the mmwave radar’s point cloud data for our proposed HAR application to overcome the aforementioned technical limitations. The primary contributions of our work are described below:
  • HAR System: We present an approach for HAR using the TI mmwave radar sensor in conjunction with PointNet neural networks implemented on the NVIDIA Jetson Nano Graphical Processing Unit (GPU) system. This system offers a non-intrusive and privacy-preserving method for monitoring human activities without using camera imagery. Furthermore, it directly uses point cloud data without additional pre-processing.
  • Real-Time Classification of Common Activities: Our system achieves real-time monitoring and classification of five common activities, including standing, walking, sitting, lying, and falling, with an accuracy of 99.5%.
  • Comprehensive Activity Analysis: We provide a novel comprehensive analysis of activities over time and spatial positions, offering valuable insights into human behavior. Our solution includes the ability to generate detailed reports that depict the temporal distribution of each activity and spatial features through tracking maps, ensuring a detailed understanding of human movement patterns.
  • Fall Detection and Alert Mechanism: Our system includes an alert mechanism that leverages the Twilio Application Programming Interface (API) protocol. This feature allows for prompt notification in the event of a fall, enabling rapid intervention and potentially saving lives.
The ensuing portions of this study are structured as follows: Section 2 of this paper is dedicated to a comprehensive review of prior research on the various methodologies employed for conducting HAR. In Section 3, a detailed description of the system architecture of the mmwave-based HAR is presented. A description of the data preparation and collection, along with the evaluation of the methodology’s effectiveness, can be found in Section 4. In Section 5, the study’s findings and analyses are presented. The limitations and future directions of the study are discussed in Section 6, and the conclusions are outlined in Section 7.

2. Human Activity Recognition Approaches and Related Work

Human activity involves a series of actions carried out by one or more individuals to accomplish a task, such as sitting, lying, walking, standing, or falling [22]. The field of HAR has made remarkable advancements over the past decade. The primary objective of HAR is to discern a user’s behavior, enabling computing systems to accurately classify and measure human activity [23].
Today, smart homes are being constructed with HAR to aid the health of the elderly, disabled, and children by continuously monitoring their daily behavior [24]. HAR may be useful for observing daily routines, evaluating health conditions, and assisting elderly or disabled individuals. HAR plays a role in automatic health tracking, enhancements in diagnostic methods and care, and enables remote monitoring in home and institutional settings, thereby improving safety and well-being [25].
Existing literature in this area often categorizes research based on the features of the devices used, distinguishing between wearable and non-wearable devices, as depicted in Figure 1. Wearable devices encompass smartphones, smartwatches, and smart gloves [26], all capable of tracking human movements. In contrast, non-wearable devices comprise various tools like visual-based systems, intelligent flooring, and radar systems. An illustrative summary of these methodologies is presented in this section, offering a snapshot of the investigations undertaken and a brief overview of diverse applications utilizing these techniques.
Wearable technology has become increasingly useful in capturing detailed data on an individual’s movements and activity patterns through the utilization of sensors placed on the body [15]. This technology includes various devices such as Global Positioning System (GPS) devices, smartwatches, smartphones, smart shirts, and smart gloves. Its application has made notable contributions to the domains of HAR and human–computer interfaces (HCIs) [26]. Nevertheless, it is important to acknowledge that every type of device presents its own set of advantages and disadvantages. For instance, GPS-based systems face obstacles in accurately identifying specific human poses and experience signal loss in indoor environments [27]. Smartwatches and smartphones can provide real-time tracking to monitor physical activity and location. They feature monitoring applications that possess the ability to identify health fluctuations and possibly life-threatening occurrences [28,29]. However, smartwatches have disadvantages such as limited battery life, and users must remember to wear them continuously [30]. Further, smartphones encounter issues with sensor inaccuracy when they are kept in pockets or purses [31], and they have difficulty monitoring functions that require direct contact with the body. Other wearable devices, such as the Hexoskin smart shirt [32] and the iTex smart textile gloves [33], present alternative options for HAR. However, the persistent need to wear these devices imposes limitations on their utilization in a variety of situations such as when individuals need to take a shower or during sleep [34]. As mentioned before, especially when monitoring older adults, failure to constantly wear monitoring devices can lead to missing unexpected events such as falls [35].
Non-wearable approaches for HAR utilize ambient sensors like camera-based devices, smart floors, and radar systems. Vision-based systems have shown promise in classifying human poses and detecting falls, leveraging advanced computer vision algorithms and high-quality optical sensors [36]. However, challenges like data storage needs, processing complexity, ambient light sensitivity, and privacy concerns hinder their general acceptance [37]. Intelligent floor systems such as carpets and floor tiles provide alternative means for monitoring human movement and posture [38]. A study on a carpet system displayed its ability to use surface force information for 3D human pose analysis but revealed limitations in detecting certain body positions and differentiating similar movements [39]. Recently, radar-based HAR has gained interest due to its ease of deployment in diverse environments, insensitivity to ambient lighting conditions, and maintaining user privacy [18,40].
Mmwave radar is a subset of radar technology [41] that is relatively low cost, has a compact form factor, and offers high-resolution detection capabilities [42]. Further, it can penetrate thin layers of some materials such as fabrics, allowing seamless indoor placement in complex living environments [43]. Commercially available mmwave devices have the capability to create detailed 3D point cloud models of objects. The collected data can be effectively analyzed using edge Artificial Intelligence (AI) algorithms to accurately recreate human movements for HAR applications [44].
The mmwave radar generates point clouds by emitting electromagnetic waves and capturing their reflections as they interact with the object or person. These point clouds represent the spatial distribution of objects and movements, which are then processed to decipher human activities. However, the fluctuating count of cloud points in each frame from mmwave radar introduces challenges in crafting precise activity classifiers, as these typically require fixed input dimensions and order [35]. To address this, researchers commonly standardize the data into forms like micro-Doppler signatures [45,46], image sequences [47,48,49], or 3D voxel grids [19,50] before employing machine learning. This standardization often results in the loss of spatial features [51] and can cause data bloat and related challenges [20].
Our proposed approach uses the PointNet network, which processes raw point cloud data directly, overcoming these constraints and retaining the fine-grained spatial relationships essential for object tracking [52]. As shown in Table 1, our proposed system achieved higher accuracy than prior studies and extracted accurate tracking maps using spatial features. PointNet’s architecture, leveraging shared Multi-Layer Perceptron (MLP) layers, is computationally efficient and lightweight, making it well-suited for real-time HAR applications [20].

3. System Overview

This section elucidates the primary components of our proposed system for continuous HAR using a mmwave radar sensor. The system encompasses a mmwave radar sensor for monitoring and an NVIDIA Jetson Nano GPU board to accurately discern five distinct activities: standing, walking, sitting, lying, and falling, utilizing the PointNet deep learning algorithm. Additionally, our proposed system provides an alert feature for care providers, designed to notify them of fall events through Hypertext Transfer Protocol (HTTP) requests to Twilio, which sends SMS notifications and initiates alert calls.

3.1. Millimeter-Wave Radar Sensor (IWR6843ISK-ODS)

Texas Instruments’ radar sensors employ frequency-modulated continuous wave (FMCW) to determine the range, velocity, and angle of objects through frequency-modulated signals [41]. The shorter wavelength of mmwave radars, falling within the millimeter range, enhances accuracy and enables 3D visualization using point clouds, accurately identifying human postures [13]. We use the Texas Instruments (TI) 60 GHz IWR6843ISK-ODS mmwave radar for real-time point cloud generation in 3D Cartesian coordinates, along with velocity information, to track individuals within its field of view (FoV) [60].
The IWR6843ISK-ODS mmwave sensor features a short-range antenna with a broad FoV, interfacing with the MMWAVEICBOOST carrier card. Its evaluation module houses a transceiver paired with an antenna, facilitating point cloud data access via USB, as depicted in Figure 2. The key metrics of IWR6843ISK-ODS are listed in Table 2 [61]. The sensor, with four receivers and three transmitters, can detect individuals up to 18 m away. The maximum detectable range for a human is determined using the link budget formula. This formula relies on factors like detection SNR, radar cross-section, radar device RF performance, antenna gains, and chirp parameters, and gives the maximum detectable range as:
$$r_{max,d} = \sqrt[4]{\frac{\sigma\, P_{Tx}\, G_{Tx}\, G_{Rx}\, \lambda^{2}\, T_{c}\, N_{c}\, N_{Tx}\, N_{Rx}}{(4\pi)^{3}\, K\, T_{e}\, L\, \eta_{SNR}}}$$
where σ is the radar cross-section, P_Tx is the transmitted power, G_Tx and G_Rx are the transmit and receive antenna gains, λ is the wavelength, T_c is the chirp duration, N_c is the number of chirps per frame, N_Tx and N_Rx are the numbers of transmit and receive antennas, K is Boltzmann’s constant, T_e is the ambient temperature, L is the total system loss, and η_SNR is the detection SNR threshold. The IWR6843ISK-ODS sensor has built-in detection and tracking algorithms to ascertain individual locations, monitor movements, and track all moving objects in the scene, even if they are seated or lying down.
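To make the link budget concrete, the following Python sketch evaluates the expression above for an assumed set of parameters. The numeric values (transmit power, antenna gains, chirp timing, losses, and detection SNR) are illustrative assumptions, not the exact IWR6843ISK-ODS configuration used in this work.

```python
# Hedged numeric sketch of the link-budget range estimate above.
import math

def max_detectable_range(rcs_m2, p_tx_w, g_tx, g_rx, wavelength_m,
                         chirp_time_s, n_chirps, n_tx, n_rx,
                         temp_k, system_loss, snr_det):
    """Fourth-root link-budget estimate of the maximum detectable range (m)."""
    k_boltzmann = 1.380649e-23
    numerator = (rcs_m2 * p_tx_w * g_tx * g_rx * wavelength_m**2
                 * chirp_time_s * n_chirps * n_tx * n_rx)
    denominator = (4 * math.pi)**3 * k_boltzmann * temp_k * system_loss * snr_det
    return (numerator / denominator) ** 0.25

# Example with assumed values: 1 m^2 human RCS, 10 dBm Tx power, ~7 dB antenna
# gains, 60 GHz carrier, 50 us chirps, 128 chirps/frame, 3 Tx / 4 Rx antennas,
# 3 dB system loss, and a 12 dB detection SNR threshold.
r_max = max_detectable_range(
    rcs_m2=1.0, p_tx_w=10e-3, g_tx=5.0, g_rx=5.0,
    wavelength_m=3e8 / 60e9, chirp_time_s=50e-6, n_chirps=128,
    n_tx=3, n_rx=4, temp_k=290, system_loss=10**(3 / 10), snr_det=10**(12 / 10))
print(f"Estimated maximum detectable range: {r_max:.1f} m")
```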
The detection process commences with a synthesizer emitting a chirp, which is transmitted by the transmit antenna as $T_x$ (2), reflected off objects, and captured as the reflected chirp at the receive antenna as $R_x$ (3), as illustrated in Figure 3.
$$T_x = \sin(\omega_1 t + \Phi_1)$$
$$R_x = \sin(\omega_2 t + \Phi_2)$$
These signals are mixed to produce an intermediate frequency (IF) signal (4), a sinusoidal waveform whose instantaneous frequency and phase are determined by the differences between the instantaneous frequencies and phases of the two input signals; the IF signal is subsequently digitized for further analysis.
$$IF = \sin\left((\omega_1 - \omega_2)\, t + (\Phi_1 - \Phi_2)\right)$$
This creates measurement vectors or point clouds that show the physical properties of the scene [41,61]. Obtaining raw 3D radar data is the first step in processing radar signals. Each antenna then goes through range processing using 1D windowing and 1D Fast Fourier Transform (FFT). Following this, a static clutter removal procedure is employed to filter out stationary objects, isolating signals emanating from moving objects. Techniques such as capon beamforming are utilized to formulate a range-azimuth heatmap, with object identification being executed through a constant false alarm rate approach. Further refinement is accomplished through elevation and Doppler estimations, which ascertain the angular positions and radial velocities of detected objects [61].
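As a rough illustration of the dechirped signal and range FFT described above, the following numpy sketch simulates a single point target; the chirp slope, sampling rate, and target distance are assumed values, not the radar configuration used in our experiments.

```python
# Minimal numpy sketch of IF-signal generation and range processing for FMCW.
import numpy as np

c = 3e8               # speed of light (m/s)
bandwidth = 4e9       # chirp bandwidth (Hz), assumed
chirp_time = 50e-6    # chirp duration (s), assumed
slope = bandwidth / chirp_time
fs = 5e6              # ADC sampling rate (Hz), assumed
n_samples = 256
target_range = 4.0    # simulated target at 4 m

# For a single point target, the IF signal is a tone at f_IF = 2 * slope * R / c.
t = np.arange(n_samples) / fs
f_if = 2 * slope * target_range / c
if_signal = np.sin(2 * np.pi * f_if * t)

# Range processing: window, FFT, and convert the peak bin back to distance.
spectrum = np.abs(np.fft.rfft(if_signal * np.hanning(n_samples)))
peak_bin = np.argmax(spectrum)
freq_resolution = fs / n_samples
estimated_range = peak_bin * freq_resolution * c / (2 * slope)
print(f"Estimated target range: {estimated_range:.2f} m")
```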
Transitioning to the tracking phase, the focus shifts toward identifying and tracking clusters within the point cloud. The tracking layer leverages the point cloud data to pinpoint and track these clusters, culminating in a target list. Each point encapsulates values such as range, azimuth angle, and radial velocity. Through this layer, targets are identified and a list is compiled, encapsulating attributes like track ID, position, velocity, and size, which prove instrumental in subsequent tasks like visualization and object categorization [62]. Exploiting the radars’ expansive bandwidth and 8 cm range resolution, multiple target points are derived from the reflections off the human body, with point clouds in each frame representing these targets. The mmwave radar sensor’s point clouds each contain 3D coordinates and velocity, among other characteristics. The term “frame” denotes the data set captured by the radar at each instance. Data points within each frame correspond to target movement, rendering them pivotal for precise target location and suitable for classification and recognition actions in high-level processing.

3.2. NVIDIA Jetson Nano GPU

The NVIDIA Jetson Nano is a compact integrated system-on-module (SoM) and development package adept at executing multiple neural networks simultaneously. This single-board computer (SBC) balances the computational capability essential for modern AI applications with its small size, low cost, and low energy consumption while operating under a power requirement of less than 5 W. It facilitates the deployment of AI frameworks for tasks like image categorization, object detection, segmentation, and audio processing [63]. The characteristics of the NVIDIA Jetson Nano system are summarized in Table 3 [64].
Additionally, NVIDIA offers the TensorRT toolkit to enhance the effectiveness of deep learning layers on Jetson devices. TensorRT is a high-performance deep learning inference software development kit (SDK) that combines an inference optimizer with a runtime for low latency and high throughput. Compatible with training frameworks like TensorFlow and PyTorch, it efficiently executes pre-trained networks on NVIDIA’s hardware. Compared to standard GPU-based inference, TensorRT notably enhances performance and reduces power consumption [65,66].
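As an illustration of one common deployment path, the sketch below exports a placeholder PyTorch model to ONNX, which can then be converted into a TensorRT engine on the Jetson (for example, with NVIDIA's trtexec tool). The model, file name, and opset version are assumptions for illustration; this is not necessarily the exact toolchain used in this work.

```python
# Hedged sketch: export a trained PyTorch network to ONNX for TensorRT conversion.
import torch
import torch.nn as nn

# Placeholder network standing in for the trained classifier (assumption).
model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 5)).eval()
dummy_input = torch.randn(1, 3)

torch.onnx.export(model, dummy_input, "har_model.onnx",
                  input_names=["points"], output_names=["logits"],
                  opset_version=13)
# The resulting ONNX file can then be built into a TensorRT engine on the
# Jetson Nano, e.g. with NVIDIA's trtexec tool, before real-time inference.
```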

3.3. PointNet Neural Network

In our methodology, the PointNet architecture processes raw point cloud data to classify the points into one of five categories: standing, walking, sitting, lying, or falling. The architecture shown in Figure 4 begins with an input layer for a set of cloud points $P = \{p_1, p_2, \ldots, p_n\}$, where $n$ is the total number of points and each point is characterized by its Cartesian coordinates $p_i = (x, y, z)$ in 3D Euclidean space, so the input accommodates point clouds of shape (number of points, three coordinates). Following the input layer, the model applies an input transformation network (T-net) to the raw point cloud data; this T-net spatially aligns the point cloud and helps the model learn rotation- and translation-invariant features. Following the initial T-net, there are three 1D convolutional layers (Conv1D), each comprising 32 filters. Each convolution output undergoes 1D batch normalization (BN1D), followed by a Rectified Linear Unit (ReLU) activation function, $ReLU(y_i) = \max(0, y_i)$, which introduces nonlinearity and aids learning. The model then passes through another T-net that aligns the feature space with a transformation matrix, using a regularization term to ensure near orthogonality and stable optimization:
$$T_{reg} = \left\| I - K K^{T} \right\|_{F}^{2}$$
where $I$ is the identity matrix, $K$ is the feature alignment matrix predicted by the T-net, and $\|\cdot\|_F$ denotes the Frobenius norm. This alignment is crucial as it improves the model’s ability to generalize across varied spatial orientations of the data. The feature transform is followed by another Conv1D layer with 32 filters, one with 64 filters, and a final layer with 512 filters for deeper feature extraction. After the convolutional layers, a global max pooling layer condenses the feature maps into a single global feature vector. This vector is then passed through a series of fully connected layers, the MLP, comprising three layers: the first two have 256 and 128 units, respectively, each followed by 1D batch normalization and ReLU activation to provide a normalized, non-linear transformation, with dropout layers (rate 0.3) interspersed between them for regularization to reduce the risk of overfitting during training. The third dense layer reshapes the features for subsequent operations. The network ends with a fully connected layer whose number of units equals the number of classes, followed by a SoftMax activation that assigns the input point cloud to one of the five classes. The entire process enables PointNet to derive human posture classifications from 3D data and to use these classifications to create spatial feature tracking maps. The finalized model is then stored for real-time execution on the NVIDIA Jetson Nano. A minimal, hedged sketch of such a classifier is shown below.
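The following PyTorch sketch mirrors the layer sizes described above (Conv1D blocks of 32, 32, 32, then 32, 64, and 512 filters, global max pooling, an MLP with 256 and 128 units and dropout of 0.3, and a softmax over the five activities), together with the T-net orthogonality regularizer. It is a minimal illustration under stated assumptions, not the exact model used in this work; the T-net layer sizes, the 64-point frame size, and the regularization weight are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TNet(nn.Module):
    """Alignment network predicting a k x k transform for points or features."""
    def __init__(self, k=3):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Conv1d(k, 32, 1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 256, 1), nn.BatchNorm1d(256), nn.ReLU())
        self.fc = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                nn.Linear(128, k * k))

    def forward(self, x):                        # x: (batch, k, n_points)
        feat = self.mlp(x).max(dim=2).values     # global max pool -> (batch, 256)
        mat = self.fc(feat).view(-1, self.k, self.k)
        identity = torch.eye(self.k, device=x.device).unsqueeze(0)
        return mat + identity                    # bias toward the identity transform

class PointNetHAR(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.input_tnet = TNet(k=3)
        self.feat_tnet = TNet(k=32)
        self.block1 = nn.Sequential(             # three Conv1D layers of 32 filters
            nn.Conv1d(3, 32, 1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 32, 1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 32, 1), nn.BatchNorm1d(32), nn.ReLU())
        self.block2 = nn.Sequential(             # 32 -> 64 -> 512 filters
            nn.Conv1d(32, 32, 1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 512, 1), nn.BatchNorm1d(512), nn.ReLU())
        self.head = nn.Sequential(               # MLP: 256 -> 128 -> classes
            nn.Linear(512, 256), nn.BatchNorm1d(256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, num_classes))

    def forward(self, points):                   # points: (batch, n_points, 3)
        x = points.transpose(1, 2)               # -> (batch, 3, n_points)
        x = torch.bmm(self.input_tnet(x), x)     # align the input point cloud
        x = self.block1(x)
        t_feat = self.feat_tnet(x)
        x = torch.bmm(t_feat, x)                 # align the 32-dim features
        x = self.block2(x)
        x = x.max(dim=2).values                  # global max pooling
        return self.head(x), t_feat

def orthogonality_loss(t_feat):
    """Regularizer ||I - K K^T||_F^2 encouraging a near-orthogonal transform."""
    identity = torch.eye(t_feat.size(1), device=t_feat.device).unsqueeze(0)
    diff = identity - torch.bmm(t_feat, t_feat.transpose(1, 2))
    return (diff ** 2).sum(dim=(1, 2)).mean()

# Example: classify a batch of 8 frames with an assumed 64 points each.
model = PointNetHAR()
frames = torch.randn(8, 64, 3)
logits, t_feat = model(frames)
probs = F.softmax(logits, dim=1)                 # probabilities over the 5 activities
loss = F.cross_entropy(logits, torch.randint(0, 5, (8,))) \
       + 0.001 * orthogonality_loss(t_feat)      # assumed regularization weight
```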

3.4. Twilio API Programmable Protocol Messages

Text messages are an efficient means of delivering timely alerts, particularly within the healthcare sector, in the case of critical events like falls. Therefore, we use the Twilio application service to build a fall alert SMS system. Twilio is a web service API that offers programmable communication capabilities for sending and receiving text messages, phone calls, and several other modes of communication. The HTTP protocol is used to transmit administrator notifications to the Twilio REST APIs, which allow developers to send SMS messages and place calls, as shown in Figure 5. Twilio SIM cards, administered via APIs and the Twilio Console, provide tailored solutions for IoT applications [67]. This enables effective communication, rapid event response, and improved alerting systems, which is crucial for facilitating patient communication and notifying care providers about fall incidents.
As illustrated in Figure 5, upon the detection of a fall, the system swiftly initiates an HTTP request to the Twilio cloud platform. Upon receipt of this request, the server evaluates the information and leverages the Twilio REST API to instruct Twilio to dispatch a predetermined SMS bearing the critical fall alert message. Simultaneously, a voice call is initiated by the platform, alerting the designated emergency contact through an automated voice message about the detected fall. This contact could be a healthcare practitioner or a close relative. The entire process commences with fall detection on our Jetson Nano model, then moves to the Twilio cloud platform, where it interfaces with the Twilio service, and culminates in notifying the selected emergency contact through both an SMS and voice call alert.
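A hedged sketch of this alert step using Twilio's Python helper library is shown below. The credentials, phone numbers, and the hosted TwiML URL are placeholders; the exact Twilio configuration used in our system is not specified here.

```python
# Hedged sketch of the fall-alert step using Twilio's Python helper library.
from twilio.rest import Client

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"   # placeholder
AUTH_TOKEN = "your_auth_token"                        # placeholder
TWILIO_NUMBER = "+15550000000"                        # placeholder
CAREGIVER_NUMBER = "+15551111111"                     # placeholder

def send_fall_alert(track_id, position_xy):
    """Send an SMS and place an automated voice call when a fall is detected."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    body = (f"Fall alert: target {track_id} detected falling at "
            f"x={position_xy[0]:.1f} m, y={position_xy[1]:.1f} m.")
    client.messages.create(to=CAREGIVER_NUMBER, from_=TWILIO_NUMBER, body=body)
    # A hosted TwiML document (placeholder URL) tells Twilio what the automated
    # voice call should say to the emergency contact.
    client.calls.create(to=CAREGIVER_NUMBER, from_=TWILIO_NUMBER,
                        url="https://example.com/fall_alert.xml")

# Example trigger after the classifier reports a 'falling' frame:
# send_fall_alert(track_id=1, position_xy=(2.3, 1.7))
```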

4. Experiment Setup and Data Collection

4.1. Experimental Setup

The experimental setup aimed to acquire data on human activities within a simulated home environment. A mmwave radar sensor, specifically the IWR6843ISK-ODS model, was positioned on tripods at a height of 2 m, as shown in Figure 6. The radar was tilted by 15 degrees in the depth-elevation plane to enhance coverage over a designated 12-square-meter area. This area was designed to mimic a living room setting, with furniture like chairs, floor mats, beds, and walking space. The primary target for detection was the human subject, as depicted in Figure 6.

4.2. Data Collection

Our study received permission from the University of Dayton Institutional Review Board (IRB) for data collection involving a cohort of 89 healthy adults in accordance with ethical guidelines. The cohort comprised 57 male and 32 female participants. Taking into account a broad spectrum of height and weight variations is necessary to generate training data that can represent a diverse population, which leads to better generalization and reduces biases. A summary of the participants’ demographic characteristics can be found in Table 4. Further, incorporating a diversified set of weight and height parameters during the training phase is crucial to ensuring that the model encompasses an ample quantity of varied and representative data points. This allows the model to effectively generalize its classification capabilities when applied to the test data.
Our study centered on the recognition of five distinct bodily positions: standing, walking, sitting, lying, and falling, which constitute the five output categories, as depicted in Figure 7. The process of radar signal handling, detailed in Section 3, commences with the emission of chirp signals by the radar’s transmitters. Subsequently, the receiver captures these signals following their interaction with the participants, leading to the extraction of data pertaining to trackable objects. Also, our dataset is carefully labeled to differentiate five activities, reducing overlap and enhancing clarity. This dataset comprises key attributes, including track ID, position, velocity, and physical dimensions. Throughout the study protocol, each participant engaged in a sequence of activities. These activities encompassed standing before the sensor for a duration of 30 s, engaging in random walking movements within the sensor’s coverage area for an additional 30 s, assuming a seated posture on a chair for 30 s, lying on a bed for 30 s with intermittent rolling to both sides, and ultimately transitioning from a standing position to an abrupt fall, remaining in the fallen position for an additional 30 s. The mmwave radar sensor effectively captured the participants’ movements and generated point cloud data. This collected data file for each participant’s activity contains numerous data frames, which significantly increases the dataset’s size. Data collection was conducted across various positions within the room to ensure diversity and enhance data quality. Five samples, each spanning a duration of 30 s, were acquired from each participant during this process.

4.3. Data Analysis

To evaluate the efficacy of our PointNet model and its architecture, a comprehensive array of evaluation metrics was used. These encompassed fundamental measures such as the Receiver Operating Characteristic (ROC) curve, F1-score, precision, recall, and confusion matrices.
The data were split into training (80%) and validation (20%) sets to adjust hyperparameters, prevent overfitting, and evaluate the model’s performance on unseen data. The model was trained on the training set, and its performance was evaluated on the validation set, which was not used during training.
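A minimal sketch of such an 80/20 split using scikit-learn is shown below; the array shapes and the use of stratification are assumptions for illustration.

```python
# Minimal sketch of the 80/20 train/validation split with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split

# point_clouds: (n_samples, n_points, 3) radar frames; labels: activity class 0-4.
point_clouds = np.random.rand(1000, 64, 3)            # placeholder data
labels = np.random.randint(0, 5, size=1000)           # placeholder labels

X_train, X_val, y_train, y_val = train_test_split(
    point_clouds, labels, test_size=0.2, random_state=42, stratify=labels)
print(X_train.shape, X_val.shape)                     # (800, 64, 3) (200, 64, 3)
```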

4.4. Receiver Operating Characteristic (ROC) Curve

The assessment of the model included ROC curve analysis, which is commonly utilized as a diagnostic instrument for evaluating the performance of classification models. This is achieved by evaluating the trade-off between the true positive rate and the false positive rate across different discriminatory thresholds. The present study involved the application of ROC analysis to assess the accuracy of the radar in identifying the five predetermined physical activities.
The findings depicted in Figure 8 highlight the performance of the proposed model across the activity categories. The ROC curves for these classes not only demonstrate prominent peaks but also validate the model’s performance in accurately distinguishing various activities, including scenarios involving fall detection. The ROC curves for all five classes exhibit similar and parallel trajectories, indicating a stable and equally proficient true recognition rate across various positions. The mean ROC curve suggests that the efficacy of the HAR utilizing mmwave radar for the classification of five distinct activities is 99.5%, highlighting the effectiveness of the proposed system in accurately discriminating between various tasks.

4.5. Confusion Matrices

The confusion matrix is a useful tool for assessing the effectiveness of classification models. The abscissa represents the true labels, whereas the ordinate represents the predicted labels. Enhanced model performance is shown by a higher concentration of predicted values along the diagonal of the confusion matrix. In Figure 9, the confusion matrix visually portrays the classification performance. Notably, all activity classes are classified with excellent accuracy. While ‘Lying’ and ‘Falling’ have certain similarities in terms of their proximity to the ground and body posture, the key distinguishing feature between them is the height at which the human target is positioned above the ground surface, which differs greatly between the two. Radar’s ability to generate a three-dimensional representation allows it to overcome these hurdles, resulting in highly effective classification performance.

4.6. F1-Score, Precision, Recall

An evaluation of the classification performance is conducted by calculating the F1-score, precision, and recall metrics. These measures consider the quantities of true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs). Mathematically, the metrics are defined in the following manner:
The F1-score is a metric that calculates a weighted average of precision and recall.
$$F1\text{-}score = \frac{2 \times Recall \times Precision}{Recall + Precision}$$
The precision metric is formally defined as the ratio produced by dividing the count of true positive instances by the sum of the count of true positive instances and the count of false positive instances.
$$Precision = \frac{TPs}{TPs + FPs}$$
The mathematical expression for the recall formula involves dividing the number of true positives by the sum of true positives and false negatives.
$$Recall = \frac{TPs}{TPs + FNs}$$
A summary of the evaluation metric results can be found in Table 5.
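For reference, the sketch below computes a confusion matrix, per-class precision, recall, and F1-score, and a one-vs-rest ROC AUC with scikit-learn; the labels and predictions are synthetic placeholders rather than our experimental data.

```python
# Sketch of the evaluation metrics using scikit-learn with placeholder data.
import numpy as np
from sklearn.metrics import (classification_report, confusion_matrix,
                             roc_auc_score)

classes = ["standing", "walking", "sitting", "lying", "falling"]
y_true = np.tile(np.arange(5), 40)                 # 200 placeholder labels
y_pred = y_true.copy()
y_pred[:5] = (y_pred[:5] + 1) % 5                  # introduce a few mistakes

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=classes))

# One-vs-rest ROC AUC needs per-class scores; here we fabricate softmax-like
# outputs (each row sums to 1) purely for illustration.
y_scores = np.eye(5)[y_pred] * 0.9 + 0.02
print("Macro ROC AUC:", roc_auc_score(y_true, y_scores, multi_class="ovr"))
```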

5. Results and Discussions

This section presents the results of our proposed system for continuous monitoring of human activities. We conducted two separate experiments, one for a short period (5 min) and another for a longer duration (30 min), which provided comprehensive data analysis reports regarding activity distribution over time and space. We also tested the fall detection and alert mechanism, which is connected via the Twilio REST API protocol, for notifying supervisory personnel in the event of a fall.

5.1. Real-Time Detection of Human Activity

During real-time monitoring of the five different activities (standing, walking, sitting, lying, and falling), we assigned each activity a specific color: red for standing, yellow for walking, green for sitting, pink for lying, and blue for falling, which provides an easy way to discern between them. This is shown in Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14: each human target that appears in the field of view of the mmwave sensor is assigned a specific trackID, the cloud points are then recolored according to the current activity, and the prediction message is displayed on the Rviz screen of the Jetson Nano in the “Activity_State” parameter, along with position, velocity, and acceleration values in the x, y, and z dimensions. These values are continuously updated to provide a comprehensive description of the target’s spatial location in a Cartesian coordinate system. Real images have been added to the figures for illustrative purposes. A sketch of this per-frame classification and recoloring step is shown below.
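The sketch below illustrates the per-frame classification and recoloring step. The color mapping follows the assignment above; the model call and point cloud format follow the earlier PointNet sketch and are assumptions rather than the exact runtime code.

```python
# Hedged sketch: classify one radar frame and pick its display color.
import torch

ACTIVITY_COLORS = {"standing": "red", "walking": "yellow", "sitting": "green",
                   "lying": "pink", "falling": "blue"}
CLASSES = ["standing", "walking", "sitting", "lying", "falling"]

def classify_frame(model, frame_points):
    """Return the activity label and display color for one frame of points."""
    model.eval()                                   # inference mode for BN/dropout
    with torch.no_grad():
        points = torch.as_tensor(frame_points, dtype=torch.float32).unsqueeze(0)
        logits, _ = model(points)                  # model from the PointNet sketch
        activity = CLASSES[int(logits.argmax(dim=1))]
    return activity, ACTIVITY_COLORS[activity]

# Example: activity, color = classify_frame(model, frame_points)
# where frame_points is an (n_points, 3) array for the current frame.
```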

5.2. Short-Term Monitoring of Human Activity

In this experiment, we monitored two human targets for a short period of time (5 min). The system then produced a separate folder for each target, identified by its trackID, containing the time distribution of each activity with its percentage in a table along with a colored pie chart for quick and easy visualization, as shown in Figure 15 and Figure 16. Additionally, we tested the ability of our PointNet algorithm to retain spatial features for creating a 2D tracking map that indicates each position with the corresponding activity color, providing an overview of the monitored room, as shown in Figure 17 and Figure 18, where the axes in the lower center indicate the sensor location in the room. These features can provide healthcare providers with a comprehensive perspective of their patients’ activity levels over time as well as the specific locations of each activity, including falls. This can help healthcare professionals assess whether a fall may be attributable to surrounding environmental factors, as well as identify the specific activity performed immediately preceding the fall so that future falls might be prevented. A sketch of how such a per-target report could be assembled is shown below.
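The sketch below shows how such a per-target report (an activity time pie chart and a 2D tracking map) could be assembled with matplotlib; the activity log format is an assumption for illustration.

```python
# Hedged sketch of the per-target report: pie chart plus 2D tracking map.
import matplotlib.pyplot as plt
import pandas as pd

ACTIVITY_COLORS = {"standing": "red", "walking": "yellow", "sitting": "green",
                   "lying": "pink", "falling": "blue"}

# Placeholder activity log for one tracked target (one row per frame).
log = pd.DataFrame({
    "x": [0.5, 0.8, 1.2, 1.5, 1.6, 1.6],
    "y": [1.0, 1.4, 2.0, 2.4, 2.5, 2.5],
    "activity": ["standing", "walking", "walking", "sitting", "sitting", "lying"]})

fig, (ax_pie, ax_map) = plt.subplots(1, 2, figsize=(10, 4))

# Time distribution: each row is one fixed-length frame, so counts approximate time.
counts = log["activity"].value_counts()
ax_pie.pie(counts, labels=counts.index,
           colors=[ACTIVITY_COLORS[a] for a in counts.index], autopct="%1.0f%%")
ax_pie.set_title("Activity time distribution")

# 2D tracking map: positions colored by the activity at that moment.
ax_map.scatter(log["x"], log["y"], c=[ACTIVITY_COLORS[a] for a in log["activity"]])
ax_map.set_xlabel("x (m)")
ax_map.set_ylabel("y (m)")
ax_map.set_title("Tracking map")
plt.tight_layout()
plt.savefig("target_report.png")
```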

5.3. Long-Term Monitoring of Human Activity

In this experiment, we monitored one human target for about 30 min, after which the system produced a separate folder for the target, as in the previous experiment, shown in Figure 19 and Figure 20. Additionally, we tested the fall alert feature to see how accurately a fall was detected by the system. When a fall was detected, the Twilio REST API was immediately activated to send an SMS notification and make an alert call. The resulting SMS notification is shown in Figure 21.

6. Limitations and Future Directions

Although our work demonstrates the accuracy and feasibility of mmwave radar for classifying and recording human activity, there are limitations to our findings. One limitation is that we used primarily younger healthy subjects to train the model. Therefore, additional validation in older adults and persons with significant movement impairments such as stroke and Parkinson’s disease who also use assistive devices is warranted. Another limitation of this study was that our testing took place in a simulated living environment with limited obstructions. Additional testing in home and institutional settings with a variety of room layouts and obstructions would be beneficial. Additionally, while mmwave radar can recognize and track more than one person in a room, work remains to improve the ability to identify each individual in the room so that the tracking data can be attributed to the correct individual.
While mmwave radar shows promise as a low-cost and portable solution for HAR, there are still barriers to its widespread implementation. First, suitable Health Insurance Portability and Accountability Act (HIPAA) compliant software and apps will need to be developed that are simple to use and will provide relevant data in a usable format so that healthcare providers can make informed decisions. Integration with smart home systems such as video cameras, voice-activated devices, lighting, etc. would also improve the usefulness of the device but would require additional effort. Work still needs to be performed to determine the optimal coverage area so that the radar can be installed in the correct locations and the number of radar units needed for a home or facility can be easily determined. Lastly, even though radar does not record video images, privacy issues should still be considered carefully as radar systems continue to improve their ability to recognize and record human activity.

7. Conclusions

We describe a mmwave radar-based system that can accurately and efficiently classify and monitor five distinct activities: standing, walking, sitting, lying, and falling, in real time and over extended periods. The purpose of our work was to demonstrate its use as a tool for telemedicine in home and institutional settings so that caregivers and healthcare providers can engage in remote activity monitoring. The proposed system was developed on an NVIDIA Jetson Nano platform that uses PointNet neural networks to manage the point cloud data from the mmwave radar system. Our methodology does not depend on intermediary representations, such as 3D voxels or images, and maintains spatial relationships that are essential for object tracking. As a result, the proposed system demonstrates the capacity to accurately identify and classify five distinct activities in real time, regardless of whether they involve a single target or several targets, with 99.5% accuracy. Our proposed system offers detailed analyses and reports of activities over time and space, providing insights into human behavior. It can generate reports showing the time period of activities and spatial tracking maps for a more comprehensive understanding of movement patterns. Finally, it incorporates functionality for sending a fall alert message and a call to healthcare providers following a fall so that an immediate response can occur.

Author Contributions

Conceptualization, A.K.A. and V.P.C.; methodology, A.K.A. and M.A.A.; software, A.K.A., A.H.A. and S.M.A.; validation, A.K.A. and K.J.; formal analysis, A.K.A., M.A.A., J.J., C.D. and A.B.; investigation, A.K.A.; resources, K.J. and V.P.C.; data curation, A.K.A., M.A.A., J.J., C.D., A.B. and K.J.; writing—original draft preparation, A.K.A.; writing—review and editing, A.K.A., K.J. and V.P.C.; visualization, A.K.A., A.H.A. and S.M.A.; supervision, K.J. and V.P.C.; project administration, K.J. and V.P.C.; funding acquisition, V.P.C. All authors have read and agreed to the published version of the manuscript.

Funding

We would like to acknowledge the financial support received from the Umm Al-Qura University in Makkah, Saudi Arabia, and the School of Engineering at the University of Dayton.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of the University of Dayton (protocol code 18549055, 10 January 2022).

Informed Consent Statement

Informed consent was obtained from all participants.

Data Availability Statement

The data presented in this study are available upon request from author A.K.A.

Acknowledgments

The authors acknowledge the editors and reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Health Organization (WHO). National Programmes for Age-Friendly Cities and Communities: A Guide. 2023. Available online: https://www.who.int/teams/social-determinants-of-health/demographic-change-and-healthy-ageing/age-friendly-environments/national-programmes-afcc. (accessed on 1 June 2023).
  2. Administration for Community Living (ACL). 2021 Profile of Older Americans; The Administration for Community Living: Washington, DC, USA, 2022; Available online: https://acl.gov/sites/default/files/Profile%20of%20OA/2021%20Profile%20of%20OA/2021ProfileOlderAmericans_508.pdf. (accessed on 5 May 2023).
  3. Debauche, O.; Mahmoudi, S.; Manneback, P.; Assila, A. Fog IoT for Health: A new Architecture for Patients and Elderly Monitoring. Procedia Comput. Sci. 2019, 160, 289–297. [Google Scholar] [CrossRef]
  4. Burns, E.; Kakara, R.; Moreland, B. A CDC Compendium of Effective Fall Interventions: What Works for Community-Dwelling Older Adults, 4th ed.; Centers for Disease Control and Prevention, National Center for Injury Prevention and Control: Atlanta, GA, USA, 2023; Available online: https://www.cdc.gov/falls/pdf/Steadi_Compendium_2023_508.pdf (accessed on 10 July 2023).
  5. Bargiotas, I.; Wang, D.; Mantilla, J.; Quijoux, F.; Moreau, A.; Vidal, C.; Barrois, R.; Nicolai, A.; Audiffren, J.; Labourdette, C.; et al. Preventing falls: The use of machine learning for the prediction of future falls in individuals without history of fall. J. Neurol. 2023, 270, 618–631. [Google Scholar] [CrossRef] [PubMed]
  6. Chakraborty, C.; Ghosh, U.; Ravi, V.; Shelke, Y. Efficient Data Handling for Massive Internet of Medical Things: Healthcare Data Analytics; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  7. Sakamaki, T.; Furusawa, Y.; Hayashi, A.; Otsuka, M.; Fernandez, J. Remote patient monitoring for neuropsychiatric disorders: A scoping review of current trends and future perspectives from recent publications and upcoming clinical trials. Telemed.-Health 2022, 28, 1235–1250. [Google Scholar] [CrossRef] [PubMed]
  8. Alanazi, M.A.; Alhazmi, A.K.; Alsattam, O.; Gnau, K.; Brown, M.; Thiel, S.; Jackson, K.; Chodavarapu, V.P. Towards a low-cost solution for gait analysis using millimeter wave sensor and machine learning. Sensors 2022, 22, 5470. [Google Scholar] [CrossRef]
  9. Palanisamy, P.; Padmanabhan, A.; Ramasamy, A.; Subramaniam, S. Remote Patient Activity Monitoring System by Integrating IoT Sensors and Artificial Intelligence Techniques. Sensors 2023, 23, 5869. [Google Scholar] [CrossRef] [PubMed]
  10. World Health Organization. Telemedicine: Opportunities and Developments in Member States. Report on the Second Global Survey on eHealth; World Health Organization: Geneva, Switzerland, 2010. [Google Scholar]
  11. Zhang, X.; Lin, D.; Pforsich, H.; Lin, V.W. Physician workforce in the United States of America: Forecasting nationwide shortages. Hum. Resour. Health 2020, 18, 8. [Google Scholar] [CrossRef] [PubMed]
  12. Lucas, J.W.; Villarroel, M.A. Telemedicine Use among Adults: United States, 2021; US Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics: Hyattsville, MD, USA, 2022. [Google Scholar]
  13. Alanazi, M.A.; Alhazmi, A.K.; Yakopcic, C.; Chodavarapu, V.P. Machine learning models for human fall detection using millimeter wave sensor. In Proceedings of the 2021 55th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 24–26 March 2021; pp. 1–5. [Google Scholar]
  14. Seron, P.; Oliveros, M.J.; Gutierrez-Arias, R.; Fuentes-Aspe, R.; Torres-Castro, R.C.; Merino-Osorio, C.; Nahuelhual, P.; Inostroza, J.; Jalil, Y.; Solano, R.; et al. Effectiveness of telerehabilitation in physical therapy: A rapid overview. Phys. Ther. 2021, 101, pzab053. [Google Scholar] [CrossRef]
  15. Usmani, S.; Saboor, A.; Haris, M.; Khan, M.A.; Park, H. Latest research trends in fall detection and prevention using machine learning: A systematic review. Sensors 2021, 21, 5134. [Google Scholar] [CrossRef]
  16. Li, X.; He, Y.; Jing, X. A survey of deep learning-based human activity recognition in radar. Remote Sens. 2019, 11, 1068. [Google Scholar] [CrossRef]
  17. Texas Instruments. IWR6843, IWR6443 Single-Chip 60- to 64-GHz mmWave Sensor. 2021. Available online: https://www.ti.com/lit/ds/symlink/iwr6843.pdf?ts=1669861629404&ref_url=https%253A%252F%252Fwww.google.com.hk%252F (accessed on 25 June 2023).
  18. Alhazmi, A.K.; Alanazi, M.A.; Liu, C.; Chodavarapu, V.P. Machine Learning Enabled Fall Detection with Compact Millimeter Wave System. In Proceedings of the NAECON 2021-IEEE National Aerospace and Electronics Conference, Dayton, OH, USA, 16–19 August 2021; pp. 217–222. [Google Scholar]
  19. Singh, A.D.; Sandha, S.S.; Garcia, L.; Srivastava, M. Radhar: Human activity recognition from point clouds generated through a millimeter-wave radar. In Proceedings of the 3rd ACM Workshop on Millimeter-Wave Networks and Sensing Systems, Los Cabos, Mexico, 25 October 2019; pp. 51–56. [Google Scholar]
  20. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  21. Huang, T.; Liu, G.; Li, S.; Liu, J. RPCRS: Human Activity Recognition Using Millimeter Wave Radar. In Proceedings of the 2022 IEEE 28th International Conference on Parallel and Distributed Systems (ICPADS), Nanjing, China, 10–12 January 2023; pp. 122–129. [Google Scholar]
  22. Beddiar, D.R.; Nini, B.; Sabokrou, M.; Hadid, A. Vision-based human activity recognition: A survey. Multimed. Tools Appl. 2020, 79, 30509–30555. [Google Scholar] [CrossRef]
  23. Bulling, A.; Blanke, U.; Schiele, B. A tutorial on human activity recognition using body-worn inertial sensors. Acm Comput. Surv. (CSUR) 2014, 46, 1–33. [Google Scholar] [CrossRef]
  24. Bouchabou, D.; Nguyen, S.M.; Lohr, C.; LeDuc, B.; Kanellos, I. A survey of human activity recognition in smart homes based on IoT sensors algorithms: Taxonomies, challenges, and opportunities with deep learning. Sensors 2021, 21, 6037. [Google Scholar] [CrossRef] [PubMed]
  25. Kim, K.; Jalal, A.; Mahmood, M. Vision-based human activity recognition system using depth silhouettes: A smart home system for monitoring the residents. J. Electr. Eng. Technol. 2019, 14, 2567–2573. [Google Scholar] [CrossRef]
  26. Zhang, S.; Li, Y.; Zhang, S.; Shahabi, F.; Xia, S.; Deng, Y.; Alshurafa, N. Deep learning in human activity recognition with wearable sensors: A review on advances. Sensors 2022, 22, 1476. [Google Scholar] [CrossRef] [PubMed]
  27. Bibbò, L.; Carotenuto, R.; Della Corte, F. An overview of indoor localization system for human activity recognition (HAR) in healthcare. Sensors 2022, 22, 8119. [Google Scholar] [CrossRef]
  28. Tarafdar, P.; Bose, I. Recognition of human activities for wellness management using a smartphone and a smartwatch: A boosting approach. Decis. Support Syst. 2021, 140, 113426. [Google Scholar] [CrossRef]
  29. Tan, T.H.; Shih, J.Y.; Liu, S.H.; Alkhaleefah, M.; Chang, Y.L.; Gochoo, M. Using a Hybrid Neural Network and a Regularized Extreme Learning Machine for Human Activity Recognition with Smartphone and Smartwatch. Sensors 2023, 23, 3354. [Google Scholar] [CrossRef]
  30. Ramezani, R.; Cao, M.; Earthperson, A.; Naeim, A. Developing a Smartwatch-Based Healthcare Application: Notes to Consider. Sensors 2023, 23, 6652. [Google Scholar] [CrossRef]
  31. Kheirkhahan, M.; Nair, S.; Davoudi, A.; Rashidi, P.; Wanigatunga, A.A.; Corbett, D.B.; Mendoza, T.; Manini, T.M.; Ranka, S. A smartwatch-based framework for real-time and online assessment and mobility monitoring. J. Biomed. Inform. 2019, 89, 29–40. [Google Scholar] [CrossRef]
  32. Montes, J.; Young, J.C.; Tandy, R.; Navalta, J.W. Reliability and validation of the hexoskin wearable bio-collection device during walking conditions. Int. J. Exerc. Sci. 2018, 11, 806. [Google Scholar]
  33. Ravichandran, V.; Sadhu, S.; Convey, D.; Guerrier, S.; Chomal, S.; Dupre, A.M.; Akbar, U.; Solanki, D.; Mankodiya, K. iTex Gloves: Design and In-Home Evaluation of an E-Textile Glove System for Tele-Assessment of Parkinson’s Disease. Sensors 2023, 23, 2877. [Google Scholar] [CrossRef] [PubMed]
  34. di Biase, L.; Pecoraro, P.M.; Pecoraro, G.; Caminiti, M.L.; Di Lazzaro, V. Markerless radio frequency indoor monitoring for telemedicine: Gait analysis, indoor positioning, fall detection, tremor analysis, vital signs and sleep monitoring. Sensors 2022, 22, 8486. [Google Scholar] [CrossRef] [PubMed]
  35. Rezaei, A.; Mascheroni, A.; Stevens, M.C.; Argha, R.; Papandrea, M.; Puiatti, A.; Lovell, N.H. Unobtrusive Human Fall Detection System Using mmWave Radar and Data Driven Methods. IEEE Sensors J. 2023, 23, 7968–7976. [Google Scholar] [CrossRef]
  36. Pareek, P.; Thakkar, A. A survey on video-based human action recognition: Recent updates, datasets, challenges, and applications. Artif. Intell. Rev. 2021, 54, 2259–2322. [Google Scholar] [CrossRef]
  37. Xu, D.; Qi, X.; Li, C.; Sheng, Z.; Huang, H. Wise information technology of med: Human pose recognition in elderly care. Sensors 2021, 21, 7130. [Google Scholar] [CrossRef] [PubMed]
  38. Lan, G.; Liang, J.; Liu, G.; Hao, Q. Development of a smart floor for target localization with bayesian binary sensing. In Proceedings of the 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA), Taipei, Taiwan, 27–29 March 2017; pp. 447–453. [Google Scholar]
  39. Luo, Y.; Li, Y.; Foshey, M.; Shou, W.; Sharma, P.; Palacios, T.; Torralba, A.; Matusik, W. Intelligent carpet: Inferring 3d human pose from tactile signals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11255–11265. [Google Scholar]
  40. Zhao, Y.; Zhou, H.; Lu, S.; Liu, Y.; An, X.; Liu, Q. Human activity recognition based on non-contact radar data and improved PCA method. Appl. Sci. 2022, 12, 7124. [Google Scholar] [CrossRef]
  41. Iovescu, C.; Rao, S. The Fundamentals of Millimeter Wave Sensors; Texas Instrument: Dallas, TX, USA, 2017; pp. 1–8. [Google Scholar]
  42. Jin, F.; Sengupta, A.; Cao, S. mmfall: Fall detection using 4-d mmwave radar and a hybrid variational rnn autoencoder. IEEE Trans. Autom. Sci. Eng. 2020, 19, 1245–1257. [Google Scholar] [CrossRef]
  43. Broeder, G. Human Activity Recognition Using a mmWave Radar. Bachelor’s Thesis, University of Twente, Enschede, The Netherlands, 2022. [Google Scholar]
  44. An, S.; Ogras, U.Y. Mars: Mmwave-based assistive rehabilitation system for smart healthcare. Acm Trans. Embed. Comput. Syst. (TECS) 2021, 20, 1–22. [Google Scholar] [CrossRef]
  45. Zhang, R.; Cao, S. Real-time human motion behavior detection via CNN using mmWave radar. IEEE Sens. Lett. 2018, 3, 3500104. [Google Scholar] [CrossRef]
  46. Jin, F.; Zhang, R.; Sengupta, A.; Cao, S.; Hariri, S.; Agarwal, N.K.; Agarwal, S.K. Multiple patients behavior detection in real-time using mmWave radar and deep CNNs. In Proceedings of the 2019 IEEE Radar Conference (RadarConf), Boston, MA, USA, 22–26 April 2019; pp. 1–6. [Google Scholar]
  47. Cui, H.; Dahnoun, N. Real-time short-range human posture estimation using mmWave radars and neural networks. IEEE Sens. J. 2021, 22, 535–543. [Google Scholar] [CrossRef]
  48. Liu, K.; Zhang, Y.; Tan, A.; Sun, Z.; Ding, C.; Chen, J.; Wang, B.; Liu, J. Micro-doppler feature and image based human activity classification with FMCW radar. In Proceedings of the IET International Radar Conference (IET IRC 2020), Online, 4–6 November 2020; Volume 2020, pp. 1689–1694. [Google Scholar]
  49. Tiwari, G.; Gupta, S. An mmWave radar based real-time contactless fitness tracker using deep CNNs. IEEE Sens. J. 2021, 21, 17262–17270. [Google Scholar] [CrossRef]
  50. Wu, J.; Cui, H.; Dahnoun, N. A voxelization algorithm for reconstructing MmWave radar point cloud and an application on posture classification for low energy consumption platform. Sustainability 2023, 15, 3342. [Google Scholar] [CrossRef]
  51. Li, Z.; Ni, H.; He, Y.; Li, J.; Huang, B.; Tian, Z.; Tan, W. mmBehavior: Human Activity Recognition System of millimeter-wave Radar Point Clouds Based on Deep Recurrent Neural Network. 2023; preprint. [Google Scholar] [CrossRef]
  52. Li, Z.; Li, W.; Liu, H.; Wang, Y.; Gui, G. Optimized pointnet for 3d object classification. In Proceedings of the Advanced Hybrid Information Processing: Third EAI International Conference, ADHIP 2019, Nanjing, China, 21–22 September 2019; Proceedings, Part I. Springer: Berlin/Heidelberg, Germany, 2019; pp. 271–278. [Google Scholar]
  53. Rajab, K.Z.; Wu, B.; Alizadeh, P.; Alomainy, A. Multi-target tracking and activity classification with millimeter-wave radar. Appl. Phys. Lett. 2021, 119, 034101. [Google Scholar] [CrossRef]
  54. Ahmed, S.; Park, J.; Cho, S.H. FMCW radar sensor based human activity recognition using deep learning. In Proceedings of the 2022 International Conference on Electronics, Information, and Communication (ICEIC), Jeju, Republic of Korea, 6–9 February 2022; pp. 1–5. [Google Scholar]
  55. Werthen-Brabants, L.; Bhavanasi, G.; Couckuyt, I.; Dhaene, T.; Deschrijver, D. Split BiRNN for real-time activity recognition using radar and deep learning. Sci. Rep. 2022, 12, 7436. [Google Scholar] [CrossRef]
  56. Hassan, S.; Wang, X.; Ishtiaq, S.; Ullah, N.; Mohammad, A.; Noorwali, A. Human Activity Classification Based on Dual Micro-Motion Signatures Using Interferometric Radar. Remote Sens. 2023, 15, 1752. [Google Scholar] [CrossRef]
  57. Sun, Y.; Hang, R.; Li, Z.; Jin, M.; Xu, K. Privacy-preserving fall detection with deep learning on mmWave radar signal. In Proceedings of the 2019 IEEE Visual Communications and Image Processing (VCIP), Sydney, NSW, Australia, 1–4 December 2019; pp. 1–4. [Google Scholar]
  58. Senigagliesi, L.; Ciattaglia, G.; Gambi, E. Contactless walking recognition based on mmWave radar. In Proceedings of the 2020 IEEE Symposium on Computers and Communications (ISCC), Rennes, France, 7–10 July 2020; pp. 1–4. [Google Scholar]
  59. Xie, Y.; Jiang, R.; Guo, X.; Wang, Y.; Cheng, J.; Chen, Y. mmFit: Low-Effort Personalized Fitness Monitoring Using Millimeter Wave. In Proceedings of the 2022 International Conference on Computer Communications and Networks (ICCCN), Honolulu, HI, USA, 25–28 July 2022; pp. 1–10. [Google Scholar]
60. Texas Instruments. IWR6843ISK-ODS Product Details. Available online: https://www.ti.com/product/IWR6843ISK-ODS/part-details/IWR6843ISK-ODS (accessed on 9 April 2023).
61. Texas Instruments. Detection Layer Parameter Tuning Guide for the 3D People Counting Demo; Revision 3.0; Texas Instruments Incorporated: Dallas, TX, USA, 2023. [Google Scholar]
62. Texas Instruments. Group Tracker Parameter Tuning Guide for the 3D People Counting Demo; Revision 1.1; Texas Instruments Incorporated: Dallas, TX, USA, 2023. [Google Scholar]
63. NVIDIA Corporation. Jetson Nano Module. Available online: https://developer.nvidia.com/embedded/jetson-nano (accessed on 2 January 2023).
64. NVIDIA Corporation. Jetson Nano System-on-Module Data Sheet; Version 1; NVIDIA Corporation: Santa Clara, CA, USA, 2019. [Google Scholar]
65. Jeong, E.; Kim, J.; Ha, S. TensorRT-based framework and optimization methodology for deep learning inference on Jetson boards. ACM Trans. Embed. Comput. Syst. (TECS) 2022, 21, 51. [Google Scholar] [CrossRef]
  66. NVIDIA Corporation. NVIDIA TensorRT Developer Guide, NVIDIA Docs; Release 8.6.1; NVIDIA Corporation: Santa Clara, CA, USA, 2023. [Google Scholar]
67. Twilio Inc. Twilio’s REST APIs. Available online: https://www.twilio.com/docs/usage/api (accessed on 23 August 2023).
Figure 1. Classification of human activity recognition approaches.
Figure 2. Texas Instruments IWR6843ISK-ODS mmwave sensor with MMWAVEICBOOST.
Figure 3. mmwave signal processing chain elements.
Figure 4. PointNet model architecture.
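For readers who want a concrete reference point for the architecture in Figure 4, the following is a minimal PointNet-style classifier sketch in Python/Keras. It captures the core PointNet idea of shared per-point MLPs followed by a symmetric max-pooling operation and a dense classification head, but it omits the input and feature transform (T-Net) blocks; the point count, per-point features, and layer widths are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal PointNet-style classifier sketch (assumed sizes, not the authors' exact model).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_POINTS = 64     # assumed number of radar points per frame
NUM_FEATURES = 5    # assumed per-point features, e.g., x, y, z, Doppler, SNR
NUM_CLASSES = 5     # standing, walking, sitting, lying, falling

def build_pointnet_classifier():
    inputs = layers.Input(shape=(NUM_POINTS, NUM_FEATURES))

    # Shared per-point MLPs implemented as 1D convolutions with kernel size 1.
    x = layers.Conv1D(64, 1, activation="relu")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Conv1D(128, 1, activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv1D(256, 1, activation="relu")(x)
    x = layers.BatchNormalization()(x)

    # Symmetric function: global max pooling makes the model invariant to point order.
    x = layers.GlobalMaxPooling1D()(x)

    # Classification head.
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_pointnet_classifier().summary()
```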
Figure 5. Falling alert system architecture.
Figure 6. The experimental setup.
Figure 7. Sample activities performed by participants during the data collection phase. (a) Standing. (b) Walking. (c) Sitting. (d) Lying. (e) Falling.
Figure 8. The ROC curve of the PointNet model.
Figure 9. The confusion matrix of the PointNet model.
Figure 10. Real-time detection of a person in a standing position.
Figure 11. Real-time detection of a person in a walking position.
Figure 12. Real-time detection of a person in a sitting position.
Figure 13. Real-time detection of a person in a lying position.
Figure 14. Real-time detection of a person in a falling position.
Figure 15. Activity monitoring report of track ID 1. (a) Table of the time distribution of activities. (b) Pie chart of the time distribution of activities.
Figure 16. Activity monitoring report of track ID 2. (a) Table of the time distribution of activities. (b) Pie chart of the time distribution of activities.
Figure 17. Tracking map of track ID 1.
Figure 18. Tracking map of track ID 2.
Figure 19. Activity monitoring report of track ID 3. (a) Table of the time distribution of activities. (b) Pie chart of the time distribution of activities.
Figure 20. Tracking map of track ID 3.
Figure 21. SMS fall event alert.
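As a rough illustration of how a detected fall could trigger the SMS notification shown in Figure 21 through Twilio's REST API [67], the sketch below uses the Twilio Python helper library. The credentials, phone numbers, environment variable names, and message text are placeholders, and the alerting logic is an assumption rather than the authors' exact implementation.

```python
# Hedged sketch: sending an SMS fall alert via Twilio's Python helper library.
# Credentials, environment variable names, and phone numbers are placeholders.
import os
from twilio.rest import Client

def send_fall_alert(track_id: int, location: str = "living room") -> str:
    """Send an SMS alert for a detected fall and return the Twilio message SID."""
    account_sid = os.environ["TWILIO_ACCOUNT_SID"]   # assumed environment variables
    auth_token = os.environ["TWILIO_AUTH_TOKEN"]
    client = Client(account_sid, auth_token)

    message = client.messages.create(
        body=f"Fall alert: track ID {track_id} detected falling in the {location}.",
        from_="+15550100000",   # placeholder Twilio number
        to="+15550100001",      # placeholder caregiver number
    )
    return message.sid

# Example: called when the classifier reports the 'falling' class for a tracked person.
# sid = send_fall_alert(track_id=1)
```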
Table 1. Overview of studies using mmwave radar with machine learning for detecting simple HAR.

| Ref. | Preprocessing Method | Sensor Type | Model | Activities Detected | Overall Accuracy |
|---|---|---|---|---|---|
| [45] | Micro-Doppler signatures | TI AWR1642 | CNN ¹ | Walking, swinging hands, sitting, and shifting | 95.19% |
| [46] | Micro-Doppler signatures | TI AWR1642 | CNN | Standing, walking, falling, swing, seizure, restless | 98.7% |
| [53] | Micro-Doppler signatures | TI IWR6843 | DNN ² | Standing, running, jumping jacks, jumping, jogging, squats | 95% |
| [54] | Micro-Doppler signatures | TI IWR6843ISK | CNN | Stand, sit, move toward, away, pick up something from ground, left, right, and stay still | 91% |
| [55] | Micro-Doppler signatures | TI xWR14xx, TI xWR68xx | RNN ³ | Stand up, sit down, walk, fall, get in, lie down, roll in, sit in, and get out of bed | NA ⁴ |
| [56] | Dual micro-motion signatures | TI AWR1642 | CNN | Standing, sitting, walking, running, jumping, punching, bending, and climbing | 98% |
| [57] | Reflection heatmap | Two TI IWR1642 | LSTM ⁵ | Falling, walking, pickup, stand up, boxing, sitting, and jogging | 80% |
| [58] | Doppler maps | TI AWR1642 | PCA ⁶ | Fast walking, slow walking (with or without swinging hands), and limping | 96.1% |
| [59] | Spatial-temporal heatmaps | TI AWR1642 | CNN | 14 common in-home full-body workouts | 97% |
| [47] | Heatmap images | TI IWR1443 | CNN | Standing, walking, and sitting | 71% |
| [48] | Doppler images | TI AWR1642 | SVM ⁷ | Stand up, pick up, drink while standing, walk, sit down | 95% |
| [49] | Doppler images | TI AWR1642 | SVM | Shoulder press, lateral raise, dumbbell, squat, boxing, right and left triceps | NA |
| [19] | Voxelization | TI IWR1443 | T-D ⁸ CNN, B-D ⁹ LSTM | Walking, jumping, jumping jacks, squats, and boxing | 90.47% |
| [50] | Voxelization | TI IWR1443 | CNN | Sitting postures with various directions | 99% |
| [21] | Raw point cloud | TI IWR1843 | PointNet | Walking, rotating, waving, stooping, and falling | 95.40% |
| This work | Raw point cloud | TI IWR6843 | PointNet | Standing, walking, sitting, lying, falling | 99.5% |

¹ CNN: Convolutional Neural Network. ² DNN: Deep Neural Network. ³ RNN: Recurrent Neural Network. ⁴ NA: Not Available. ⁵ LSTM: Long Short-Term Memory. ⁶ PCA: Principal Component Analysis. ⁷ SVM: Support Vector Machine. ⁸ T-D: Time-Distributed. ⁹ B-D: Bi-Directional.
Table 2. Parameters of the mmwave Radar Sensor IWR6843ISK-ODS.

| Parameter | Value |
|---|---|
| Type | FMCW |
| Frequency band | 60–64 GHz |
| Start frequency (f_o) | 60.75 GHz |
| Idle time (T_idle) | 30 μs |
| Bandwidth (B) | 1780.41 MHz |
| Number of transmitters (Tx) | 3 |
| Number of receivers (Rx) | 4 |
| Total virtual antennas (N_Tx, N_Rx) | 12 |
| Transmit power (P_Tx) | −10 dBm |
| Noise figure of the receiver (η) | 16 dB |
| Combined Tx/Rx antenna gain (G_Tx, G_Rx) | 16 dB |
| Azimuth FoV | 120° |
| Elevation FoV | 120° |
| Chirp time (T_c) | 32.5 μs |
| Inter-chirp time (T_r) | 267.30 μs |
| Number of chirps per frame (N_c) | 96 |
| Maximum beat frequency (f_b) | 2.66 MHz |
| Center frequency (f_c) | 63.01 GHz |
| Required detection SNR | 12 dB |
| Maximum unambiguous range (r_max,u) ¹ | 7.28 m |
| Maximum detection range based on SNR (r_max,d) | 18.27 m |
| Maximum unambiguous velocity (v_max) ² | 4.45 m/s |
| Range resolution (δr) ³ | 0.0842 m |
| Velocity resolution (δv) ⁴ | 0.0928 m/s |
¹ r_max,u = c·f_b/(2K), where K = B/T_c is the chirp slope. ² v_max = c/(4·T_r·f_c). ³ δr = c/(2B). ⁴ δv = c/(2·N_c·T_r·f_c).
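As a quick sanity check on the derived entries in Table 2, the short script below recomputes them from the configured waveform parameters using the footnote formulas; the numeric inputs are taken directly from the table, and small rounding differences are expected.

```python
# Recompute the derived radar parameters in Table 2 from the configured values.
c = 3e8                 # speed of light (m/s)
f_b = 2.66e6            # maximum beat frequency (Hz)
B = 1780.41e6           # chirp bandwidth (Hz)
T_c = 32.5e-6           # chirp time (s)
T_r = 267.30e-6         # inter-chirp time (s)
N_c = 96                # chirps per frame
f_c = 63.01e9           # center frequency (Hz)

K = B / T_c                          # chirp slope (Hz/s)
r_max_u = c * f_b / (2 * K)          # maximum unambiguous range
v_max = c / (4 * T_r * f_c)          # maximum unambiguous velocity
delta_r = c / (2 * B)                # range resolution
delta_v = c / (2 * N_c * T_r * f_c)  # velocity resolution

print(f"r_max,u ~ {r_max_u:.2f} m")      # ~ 7.28 m
print(f"v_max   ~ {v_max:.2f} m/s")      # ~ 4.45 m/s
print(f"delta_r ~ {delta_r:.4f} m")      # ~ 0.0842 m
print(f"delta_v ~ {delta_v:.4f} m/s")    # ~ 0.0928 m/s
```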
Table 3. The NVIDIA Jetson Nano System-on-Module's characteristics.

| Parameter | Specification |
|---|---|
| GPU | 128-core NVIDIA Maxwell architecture |
| CPU | Quad-core ARM Cortex-A57 multiprocessor unit |
| RAM | 64-bit LPDDR4 |
| Memory capacity | 4 GB |
| Max memory bus frequency | 1600 MHz |
| Peak bandwidth | 25.6 GB/s |
| Storage | 16 GB eMMC 5.1 |
| Power | 5 W |
| Mechanical | 69.6 mm × 45 mm, 260-pin edge connector |
Table 4. The demographic details of the participants.

| Parameter | Mean ± SD (Range) |
|---|---|
| Age (years) | 24 ± 7.42 (21–53) |
| Height (cm) | 169 ± 5.32 (158–186) |
| Weight (kg) | 76 ± 11.53 (55–115) |
| BMI ¹ (kg/m²) | 25.44 ± 4.36 (19.56–40.55) |
| Gender (M/F) ² | 57/32 |

¹ BMI: Body Mass Index. ² M: Male, F: Female.
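To make the summary format of Table 4 concrete, the snippet below computes mean ± SD (range) statistics and BMI (weight in kg divided by the square of height in m) for a small made-up set of participants; the values are illustrative only and are not the study data.

```python
# Illustrative summary statistics in the Table 4 format (made-up data, not the study sample).
from statistics import mean, stdev

heights_cm = [165, 172, 158, 181, 169]   # hypothetical participant heights
weights_kg = [61, 80, 55, 95, 74]        # hypothetical participant weights

# BMI = weight (kg) / height (m)^2
bmi = [w / (h / 100) ** 2 for w, h in zip(weights_kg, heights_cm)]

def summarize(values, label, unit=""):
    print(f"{label}: {mean(values):.2f} ± {stdev(values):.2f} "
          f"({min(values):.2f}–{max(values):.2f}) {unit}")

summarize(heights_cm, "Height", "cm")
summarize(weights_kg, "Weight", "kg")
summarize(bmi, "BMI", "kg/m^2")
```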
Table 5. The evaluation metrics of the PointNet model.

| Metric | Standing | Walking | Sitting | Lying | Falling |
|---|---|---|---|---|---|
| F1-score | 1.00 | 1.00 | 1.00 | 0.9767 | 0.9756 |
| Precision | 1.00 | 1.00 | 1.00 | 1.00 | 0.9524 |
| Recall | 1.00 | 1.00 | 1.00 | 0.9545 | 1.00 |
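The per-class figures in Table 5 follow directly from the confusion matrix in Figure 9: F1 is the harmonic mean of precision and recall, 2PR/(P + R), so, for example, the lying class gives 2(1.00)(0.9545)/(1.00 + 0.9545) ≈ 0.9767, matching the table. As a sketch of how such numbers can be reproduced, the snippet below derives precision, recall, and F1-score per class with scikit-learn, using short placeholder label arrays rather than the actual test set.

```python
# Hedged sketch: per-class precision, recall, and F1 from predicted vs. true labels.
# The label arrays are placeholders, not the paper's test set.
from sklearn.metrics import classification_report, confusion_matrix

classes = ["standing", "walking", "sitting", "lying", "falling"]

y_true = [0, 1, 2, 3, 4, 3, 4, 0, 1, 2]   # placeholder ground-truth class indices
y_pred = [0, 1, 2, 3, 4, 4, 4, 0, 1, 2]   # placeholder model predictions

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=classes, digits=4))
```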
