Review

Remote Health Monitoring Systems for Elderly People: A Survey

1 Department of Computer Science, Capital University of Science and Technology, Islamabad 44000, Pakistan
2 Department of Information Engineering Technology, National Skills University, Islamabad 44000, Pakistan
3 School of Computing, Engineering and Physical Sciences, University of the West of Scotland, Paisley PA1 2BE, UK
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2023, 23(16), 7095; https://doi.org/10.3390/s23167095
Submission received: 22 May 2023 / Revised: 3 August 2023 / Accepted: 3 August 2023 / Published: 10 August 2023
(This article belongs to the Section Physical Sensors)

Abstract

This paper addresses the growing demand for healthcare systems, particularly among the elderly population. The need for these systems arises from the desire to enable patients and seniors to live independently in their homes without relying heavily on their families or caretakers. Achieving substantial improvements in healthcare requires the continuous development and availability of information technologies tailored explicitly to patients and elderly individuals. The primary objective of this study is to comprehensively review the latest remote health monitoring systems, with a specific focus on those designed for older adults. To facilitate understanding, we categorize these systems and provide an overview of their general architectures; we also describe the standards used in their development and highlight the challenges encountered throughout the development process. Moreover, this paper identifies several potential areas for future research that promise further advances in remote health monitoring systems. Addressing these research gaps can drive progress and innovation, ultimately enhancing the quality of healthcare services available to elderly individuals and empowering them to lead more independent and fulfilling lives in the comfort and familiarity of their own homes.

1. Introduction

Remote Health Monitoring Systems (RHMS) can manage, maintain, and monitor a specific set of tasks efficiently over a network with reduced cost and errors. This network can be an Internet-of-Things (IoT) system or a local network with a series of connected devices [1]. Such systems are scalable and provide multiple opportunities to implement changes. They use either vision-based devices, such as cameras, or sensor-based devices, such as accelerometers or gyroscopes, to form a network. The selection of devices for these systems depends on the environment, requirements, and application [2].
Remote Monitoring Systems (RMS) are often used in sensor-based technologies, with applications such as radars, satellites, and aeroplanes. In the context of this study, another impactful application of RMS is in the health sector, usually labelled as RHMS. Real-time health monitoring of patients by a doctor from a remote location has a significant impact on the avoidance of irregularities and enables first aid to be provided in the nick of time. These systems show great promise, especially for elderly and physically disabled patients [3].
Wearable sensors and health-monitoring devices, including heart rate sensors, pulse sensors, oxygen sensors, and blood pressure sensors, play a crucial role in remote health monitoring systems, whether in open or closed environments, for observing patients [4]. These sensors detect abnormalities in the patient’s behavior and prompt caregivers or doctors to take the necessary measures promptly. In addition to these wearable sensors, vision-based sensors are also employed to monitor the health conditions of patients. For instance, a camera mounted in the patient’s vicinity keeps track of the patient’s movement; if the system detects any abnormal movement, it promptly triggers an alarm to notify the caretaker [5]. By combining wearable and vision-based sensors, healthcare providers can comprehensively monitor and respond to the well-being of patients in real time.
To the best of our knowledge, no recent survey in the healthcare domain summarizes the different computing platforms used in remote healthcare applications. In addition, we compare a variety of machine learning approaches used to identify health-related diagnoses and activities. The primary objective of this study is to comprehensively review the latest remote health monitoring systems, with a specific focus on those designed for the elderly. Moreover, this paper identifies several potential areas for future research, such as concept drift, which hold promise for further advancements in RHMS.

1.1. Data Analysis in the Context of RHMS

One aspect of RHMS is the availability of a considerable amount of data that must be analysed and exploited to attain the desired results. Many of these applications run on web servers and require continuous transmission and reception of data, which introduces delays in the service; when real-time monitoring of patients is concerned, such a delay can turn into a life-threatening scenario. To avoid this persistent problem, an efficient remote health monitoring system is analysed based on a layered structure. The layers usually comprise edge computing, fog computing, and cloud computing [6].
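To make this layered division of labour concrete, the following minimal Python sketch (our own illustration, not taken from any cited system) routes each sensor reading so that urgent events are handled immediately at the edge, the fog tier buffers recent samples, and only periodic summaries travel to the cloud. The heart-rate thresholds, buffer size, and handler functions are hypothetical.

```python
# Minimal sketch of edge/fog/cloud tiering, assuming a 1 Hz heart-rate
# stream. Thresholds and function names are illustrative assumptions.

def trigger_local_alarm(reading):
    # Edge tier: react immediately, with no network round trip.
    print(f"ALERT: abnormal {reading['sensor']} = {reading['value']}")

def upload_summary_to_cloud(buffer):
    # Cloud tier: long-term storage and analytics on summaries only.
    avg = sum(r["value"] for r in buffer) / len(buffer)
    print(f"uploading 1 summary instead of {len(buffer)} raw samples (avg={avg:.1f})")

def handle_reading(reading, fog_buffer, window=60):
    """reading: dict such as {'sensor': 'heart_rate', 'value': 72}."""
    if reading["sensor"] == "heart_rate" and not 40 <= reading["value"] <= 150:
        trigger_local_alarm(reading)       # latency-critical path stays local
    fog_buffer.append(reading)             # fog tier: buffer samples nearby
    if len(fog_buffer) >= window:          # e.g., one minute of 1 Hz samples
        upload_summary_to_cloud(fog_buffer)
        fog_buffer.clear()

buffer = []
for value in [72] * 59 + [180]:            # simulated heart-rate stream
    handle_reading({"sensor": "heart_rate", "value": value}, buffer)
```

The key design point is that the alarm path never depends on cloud connectivity, while the cloud still receives enough summarized data for long-term analysis.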

1.2. Fog and Cloud Computing

With cloud computing, there is no need to manage and maintain local or dedicated online servers for data management and processing. Cloud servers handle and compute the data over the Internet, which reduces cost and increases the efficiency of the RHMS. Cloud-based monitoring enables effective remote monitoring and smart resource scheduling by removing delays and data communication issues [7]. Many cloud-based health monitoring systems have been presented to overcome the limitations of manual server-based data communication [8]. Though there are many advantages to migrating to cloud-based servers, there are also some concerns. The vast distance between multiple devices can cause high latency in data communication, which is a problem for IoT applications that require low latency. Security and privacy are also major concerns: since the data are communicated globally through channels shared with other users, they may be lost or exposed to cyberattacks [9].
Fog computing is an extension of cloud computing; both are composed of different nodes that ultimately link to the devices, but the nodes in fog computing are closer to the end-user or devices than cloud computing nodes. In general, cloud computing can be described as a centralized system, while fog computing is more of a distributed system. Fog computing is not a separate or independent system but a mediator between the device and the cloud server, which handles the flow of data from the devices to the cloud. The decision regarding which data, and how much, should be sent to the cloud server is extremely critical; it enables the cloud server to work efficiently and maintains the load balance between client and server [10]. Abundant work has been conducted utilizing the fog computing framework in healthcare monitoring as well. Vora et al. [11] presented a fog computing-based health monitoring system for ambient assisted living. Movement patterns of patients are collected through inertial sensors and the data are passed through fog gateways. The major contribution of this work is the low data communication latency and the data load management between the cloud server and output devices. In another work, Vora et al. [12] proposed an architecture for remote monitoring of the heart rate of patients using fog computing. They used a heart rate sensor to gather data from multiple patients with heart diseases and sent the data to the fog layer, where a data compression technique was introduced for efficient bandwidth utilization. The results showed better accuracy compared to other state-of-the-art cloud-based architectures.
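As a toy illustration of compressing sensor streams at the fog layer before they are forwarded to the cloud, the sketch below delta-encodes heart-rate samples; this is a deliberately simple stand-in for the compression scheme of [12], whose actual technique is not reproduced here.

```python
# Illustrative fog-layer compression: send the first sample plus small
# differences instead of full values. A stand-in, not the method of [12].

def delta_encode(samples):
    """[72, 73, 73, 75] -> (72, [1, 0, 2]): first value plus differences."""
    return samples[0], [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(first, deltas):
    out = [first]
    for d in deltas:
        out.append(out[-1] + d)
    return out

first, deltas = delta_encode([72, 73, 73, 75, 74, 74])
assert delta_decode(first, deltas) == [72, 73, 73, 75, 74, 74]
# Small deltas compress well under any entropy coder, reducing the
# bandwidth the fog gateway consumes when uploading to the cloud.
```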
Though fog computing improves the overall client/server data transmission, it has some drawbacks. With the addition of an extra layer (the fog computing layer), the overall system becomes more complex. Costs also increase because additional software and hardware utilities must be implemented, and another important consideration is that fog computing is not as scalable as cloud computing [13].
Mobile Edge Computing (MEC) [14] is a distributed computing architecture that brings computational capabilities and resources closer to the edge of the network. It aims to reduce latency and improve the efficiency of mobile networks by deploying computing resources, such as servers, storage, and networking equipment, at the edge of the network, typically at base stations or access points. MEC [15] enables data processing, analysis, and storage to be performed locally, reducing the time it takes for data to travel back and forth to centralized cloud data centers.

1.3. Human Activity Recognition in RHMS

Due to a rapidly increasing aging population and its associated challenges in health and social care, ambient assistive living has become a focal point for researchers and industry alike [16,17,18]. The goal of improving healthcare services while reducing their cost is on the agenda of every government.
As mentioned earlier, acquiring large volumes of data covering every aspect of elderly people’s lives is no longer a major obstacle. Once the data are available, the next important component of a remote health monitoring system is their analysis to recognize patients’ behaviour and activities. The automatic interpretation of human activities can play a pivotal role in revolutionizing various routine tasks, and human activity recognition has been one of the quintessential research areas for the past two decades [19]. Activity recognition refers to the ability to infer an ongoing activity by processing raw data through diverse mechanisms, ranging from traditional statistical measures to advanced data mining, machine learning, and deep learning concepts [20,21,22,23].
Human activity recognition systems are beneficial in inferring human activities and providing feedback so that the necessary actions can be taken for intervention. Typically, human activities are classified into two broad categories: (i) ambulation or fitness activities and (ii) functional activities [2]. Numerous motions, such as walking, jogging, and walking in upward/downward directions, fall under the category of ambulation or fitness activities. Functional activities include routine tasks such as attending calls, washing food, and cooking [20,24]. These behavioural or functional activities can play a consequential role in determining human wellness [25,26]. Human motions of the same activity may differ significantly because of constraints including environment, time, and culture [27]. Automatic recognition of human activities therefore becomes a challenging task due to the diversity in humans’ physical appearances and actions.

1.4. Structure of the Paper

This paper provides a comprehensive survey of remote health monitoring systems, especially in the context of elderly people. To this end, it discusses various gateways, ranging from smart devices to fog/edge computing to cloud-based solutions. Moreover, the paper discusses automated human activity recognition techniques that provide the basis for doctors and experts to make decisions.
The rest of this paper is organized as follows: Section 2 details the review of gateways often used in remote health monitoring systems, Section 3 provides the details of human activity recognition in the context of the monitoring of elderly people, Section 4 reviews RHMS designed for elderly people, and Section 5 lists existing challenges related to remote health monitoring systems, followed by the conclusions.

2. GATEWAYS: Data Computation in a Remote Health Monitoring System

Remote health monitoring systems help patients reduce the number of visits to the hospital and, with recent fog-enabled healthcare technologies, predict health-related anomalies sooner [28]. The most commonly used sensors in health applications measure respiratory rate, heart rate, blood pressure, body temperature, blood glucose, the electrocardiogram (ECG), and the electroencephalogram (EEG) [29,30].
In IoT and cloud-based architectures, cloud servers store and process massive amounts of data collected from sensor nodes. Therefore, applications deployed on cloud servers can benefit from large amounts of resources and computation power [31]. Despite the benefits of this architecture, a cloud-based approach is not suitable for health applications because of high network bandwidth requirements, latency, and scalability issues [32]. Healthcare applications are latency-sensitive; therefore, a fog-based approach plays a beneficial role in designing effective real-time applications [33].
In healthcare applications, processing and transmitting large and complex data increases the probability of wrong treatment decisions [34]. Many articles included in this survey discuss scalable solutions for health applications using fog architecture, enabling real-time analysis and decision making based on local network resources [35]. In critical healthcare applications, distributed computing on the fog layer at the edge of the network ensures low latency, reduced energy consumption, low execution time, scalability, privacy, and security [36].
In rural areas, where remote health monitoring of patients is more challenging because of weak Internet coverage, fog computing is a workable solution due to its lower latency and the spare capacity of locally available resources. If the Internet is unavailable, the most critical tasks can be performed on the patient’s data at the fog nodes, and the results can be forwarded to the cloud later, once connectivity is restored [37].
Mach and Becvar [38] discussed other data computation architectures, such as Mobile Cloud Computing (MCC) and Mobile Edge Computing (MEC). The authors discussed the critical challenges in MEC: (i) computation offloading from User Equipment (UE) to reduce energy consumption and execution delay, (ii) efficient allocation of computing resources to minimize execution delay and balance the load on both computing resources and communication links, and (iii) mobility management for applications offloaded to the MEC, guaranteeing service continuity. The paper also discussed the computational offloading possibilities, including full offloading, partial offloading, and no offloading of MEC applications.
Wang et al. [39] discussed fog computing for real-time processing and event response in healthcare applications. Their experiments showed that a healthcare system using fog computing responds faster and is more energy-efficient than cloud-only approaches. For example, fog computing can efficiently detect falls of stroke patients by scheduling analytical tasks to the most appropriate edge server, guaranteeing low latency and high throughput.
Rahmani et al. [40] proposed a smart e-Health system based on a fog computing-based remote health monitoring architecture that uses gateway resources to improve the processing efficiency of health sensors’ data. The authors presented an RHMS with Early Warning Scores (EWS) using hierarchical fog-assisted cloud computing. The results show an improvement in sensing-to-actuation latency of up to 140 ms for fog computing over cloud computing.
Stantchev et al. [41] illustrated a three-level architecture to emphasize the essentials of the fog computing paradigm and its improved performance. First, before accessing the cloud, the sensing devices connect to localized fog devices that cater to their needs, such as computing and storage; fog devices can provide local management for the sensors and handle mobility. Second, this computing paradigm offers interconnectivity with enhanced Quality of Service (QoS), as latency is reduced because of the proximity between the fog device and the sensors. Third, fog devices provide computing redundancy and backup if the link to the cloud is faulty. In addition, access control can provide better management of data flow to and from the cloud.
Ko et al. [42] discussed issues in the conventional design for mobile computation offloading, in which a computation task is fetched to another server only when it is handed over. This mechanism requires excessive fetching of large volumes of data at handover and thus incurs long fetching latency; it also places heavy loads on the MEC network. The proposed solution handles this issue by pre-fetching parts of future computation data to potential servers during the server-computation time, referred to as live pre-fetching. This technique can significantly reduce the handover latency via mobility prediction and enable energy-efficient computation offloading by enlarging the transmission time. However, it also encounters several challenges, the two most critical of which are as follows. The first challenge arises from trajectory prediction: an accurate prediction can allow seamless handovers among edge servers and reduce the pre-fetching redundancy. The second challenge lies in the selection of the pre-fetched computation data: the computation-intensive components should be pre-fetched earlier, with adaptive transmission power control, to maximize the probability of completely offloading data from edge devices.
Saidi [43] focused on extending elderly home care in the safest possible conditions by preventing the risks faced by people living alone. The proposed system monitors older people remotely and provides a method for solving the privacy protection issues of healthcare data based on the fog-to-cloud (F2C) computing scheme.
Jamil et al. [44] discussed that increasing numbers of Internet of Everything (IoE) devices are generating vast amounts of data, due to which cloud computing is unable to meet the requirements of real-time applications, such as low latency, location awareness, and mobility support.
Fog computing overcomes these limitations and has emerged as a new computing paradigm that complements cloud computing by providing real-time processing, analytics, and storage facilities near the edge device. The authors designed and implemented an optimized job-scheduling algorithm to minimize delays for latency-critical applications, and a healthcare case study demonstrates latency-sensitive and delay-tolerant requests from IoE devices. The proposed algorithm schedules jobs on fog devices based on their length and reduces the average loop delay and network usage. However, while the algorithm reduces waiting time, it can starve tuples with larger lengths.
In Table 1, we identify a set of parameters from the literature to summarize the features and performance of each technology. Next, we explore each of these parameters comprehensively and illustrate how they can affect healthcare applications. In Figure 1, cloud-based sensor data processing is compared with fog- and MEC-based infrastructures in terms of lower and higher bandwidth utilization.

2.1. Network Latency

Cloud servers are centralized and usually deployed far away from the end-users, so network latency is high. On the other hand, fog gateways are near end-users’ devices, so the network latency is much lower than in the cloud, as discussed in [40]. The experiment performed in [40] shows a latency of 21 ms in fog versus 161 ms in the cloud, using a WiFi platform for a healthcare application. In MEC, user devices connect with a server at the base station, so the network latency is higher than in fog.

2.2. Internet Bandwidth Utilization

For applications such as remote health monitoring, where health monitoring sensors frequently produce data in large amounts, fog gateways and MEC servers can process and store data locally so that only the summarized data are transmitted to the cloud. As a result, bandwidth utilization over the Internet is reduced significantly [45]. The difference in bandwidth utilization between these architectures can be seen in Figure 1.

2.3. Power Consumption

The performance of fog computing (FC) and MEC is significantly better than that of the cloud computing approach in terms of power consumption. Hartmann [46] discussed that a distributed architecture such as fog consumes less power than MEC and cloud computing. Moreover, for healthcare applications, refining sensor data at the local level reduces the amount of data transmitted to the cloud, which in turn reduces power consumption.

2.4. Execution Time

The discussion conducted in [47] concludes that the cloud is rich in storage capacity and computing resources and is capable of extending these resources on demand, based on the requirements of data processing. Consequently, the execution time for processing the immense data generated by health applications is very low. In MEC, servers at base stations have relatively limited resources, so the execution time to process data is higher. Furthermore, fog devices are usually low-power devices, and their limited processing power and storage capacity result in even higher execution times.

2.5. Context Awareness

Context awareness is one of the essential concepts utilized in health applications, where information such as the patient’s location, other patients in the vicinity, and resources in the environment can be exploited. Collaboration among MEC platforms could therefore enable real-time context-aware applications, such as remote health monitoring, to make better decisions. Since the nodes in fog computing are usually devices with a narrower view of the network, such as routers or switches, their context awareness is lower than that of MEC; however, the ability of the nodes to communicate among themselves mitigates this issue to some extent. Cloud computing, on the other hand, offers little context awareness because it is deployed as a standalone server connected to the Internet [48,49].

2.6. Real-Time Compatibility

In the cloud computing approach, health sensor data are sent directly to cloud servers without filtering. For this reason, the cloud computing approach is not helpful for real-time applications. In fog computing, local data processing helps health applications detect unwanted events in real time and quickly notify the persons concerned. Real-time feedback collection encompasses solutions aiming to provide quick responses to critical situations in healthcare environments [45]. In [50], the authors implemented a real-time signal processing algorithm for fall detection, delivering information to caregivers; these algorithms are executed at the network’s border by fog servers, which collect and process all health information. Another approach [51] proposes a real-time analytic system to monitor falls caused by strokes. All the mentioned applications need a real-time response to caregivers or actuators for further action.
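As a minimal example of the kind of real-time processing a fog node can execute locally, the sketch below flags a possible fall when the 3-axis acceleration magnitude spikes. The threshold is an illustrative assumption and does not reproduce the algorithms of [50,51].

```python
# Simple threshold-based fall check on a fog node. The 2.5 g threshold
# is an illustrative assumption, not the detector used in [50,51].
import math

FALL_THRESHOLD_G = 2.5  # the impact after a fall spikes well above 1 g

def possible_fall(ax, ay, az):
    """ax, ay, az: accelerometer readings in units of g."""
    return math.sqrt(ax**2 + ay**2 + az**2) > FALL_THRESHOLD_G

if possible_fall(0.3, -2.9, 1.1):
    # Running at the network edge, this alert reaches the caregiver
    # without waiting for a round trip to a distant cloud server.
    print("Possible fall detected: notify caregiver")
```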

3. Human Activity Recognition in RHMS

RHMS that work independently at home and in offices are now vital in the wake of growing lifestyle-related disease and an increase in the ageing population. Independent of hospitals, the acquisition and processing of patients’ physiological data are conducted while they are at home or at the workplace [52].
Human Activity Recognition (HAR) is a process that involves the recognition, detection, interpretation, and examination of human activities. It mainly focuses on the movements and actions that humans make in their daily lives. The data on these actions and movements are collected through different wearable sensors and devices and can be utilized in several ways that make our daily lives better and more convenient [20].
Vision-based devices, such as cameras and video recorders, can be fixed in a certain place to track the movements of a person. Over a certain amount of time, enough data can be gathered to identify the movement of a person for a given action. Similarly, wearable sensors such as motion detectors, compasses, and accelerometers can be attached to a person at different body locations, and movement data can be gathered. The data gathered from these devices are then cleaned to remove any redundancies before being fed to a data-processing model [53].
In recent years, HAR research has attracted significant attention because of its advantages and widespread applications, which include (but are not limited to) fashion, smart homes, self-driving cars, surveillance, and healthcare. The ability to recognize and detect human activity has been an important concept in the field of machine learning [27,54,55,56,57]. Different HAR systems have been designed to automate these applications; however, building a fully automated HAR system can be very difficult because it requires a huge pool of labeled data and efficient data classification models. Moreover, it is very difficult to accurately classify movement data, as a single activity can be performed in multiple ways, and different activities can be performed in similar ways. Nonetheless, progress is still being made in HAR, as hybrid models are being introduced that can efficiently differentiate between activities, leading to better classification of features [58].
Activity is recorded using varying modes such as video cameras, RADAR, wearable physiological sensors, device-free sensing (e.g., Wi-Fi), and acoustic sensors [59,60]. The contemporary state-of-the-art divides these modes into vision-based devices and body-worn sensors [53]. Vision-based devices harness modes such as a video camera to capture ongoing activity and are currently employed by numerous security applications [61,62,63]. However, they often suffer from veracity-related issues due to camera angle and background and, more importantly, they cannot differentiate between targeted objects and other similar moving objects in the area of interest. For instance, a video camera placed to capture the movement of a patient in a room also captures the activity of people other than the patient [27]. In such scenarios, vision-based approaches may produce inaccurate results for applications wherein criticality must not be overlooked, such as medical health monitoring. On the other hand, wearable physiological sensors mitigate such issues by providing correct information with the least involvement of unwanted media [26]. Wearable sensors such as gyroscopes and accelerometers are worn on different parts of the human body to produce 3-axis orientation and acceleration data, respectively. Besides this, these sensors are comparatively inexpensive, environmentally friendly, and hold the potential to produce multi-level data resulting in precise information [27].
Presently, ubiquitous sensing, which focuses on discovering knowledge from data collected via pervasive sensors, is considered a hot area of research [26]. This kind of activity recognition is employed in two different forms: external and wearable sensors [60]. With external sensors, a predetermined point of interest is selected to place devices that capture voluntary interactions between users and sensors [64,65,66,67,68], whereas wearable sensors are attached to the user. In particular, embedding powerful sensors in smartphones for human activity recognition (HAR) is receiving major attention from the scientific community to meet escalating demands in areas including pervasive and mobile computing and context-aware computing [26,62,63]. In this survey, we present a comprehensive analysis of the state-of-the-art in human activity recognition along three different dimensions: (1) the evolution of HAR measures from conventional means to machine learning and deep learning, (2) the role of different sensors in remote health monitoring, and (3) strategies to process real-time health data in cloud computing [38,39,40,41,42,43].
With the increasing importance of HAR in IoT and daily life comforts, it is being applied significantly in the Internet of Health Things (IoHT) environment as well [69,70]. HAR is being used to help patients with psychological and paralysis issues. Moreover, HAR is being used for patients with congenital diseases and conditions, especially for children with motor disabilities, to encourage them towards physical activities. HAR is also being used to detect abnormalities in cardiac patients; it is even used to detect early signs of sickness and illness [71,72,73,74,75,76]. Another aspect of HAR is the monitoring of elderly patients to detect their physical state. Monitoring elderly patients by attaching sensors to different parts of the body, or by observing the patient’s movement through a camera, can help collect motion data, which can be used to predict irregularities in a patient’s condition, e.g., whether the patient has fallen, is standing, is lying, is walking, or is running. Collecting these data and implementing an interactive method to observe patients’ movements has a significant impact on the health monitoring of elderly patients [77,78,79].
Currently, there are two major ways to acquire human activity data: via vision-based devices or through body-worn sensors. While vision-based data collection has a mature base, it also has some limitations, for example, the lighting conditions of the place, the image quality of the device, the angle of the device, and, last but not least, privacy issues. Sensor-based activity recognition, by contrast, has been initiated only in the past couple of years and does not have a mature base, but it has no comparably severe limitations. Over time, different types of sensors, e.g., the accelerometer and gyroscope, have been introduced, and most now come embedded in our smartphones as well. Sensor development has improved to the extent that issues of magnetic interference in sensor data have also been addressed: state-of-the-art sensors can accurately predict the effects of magnetic interference on real-time sensor data. With the rapid development of new and improved sensors, the data collected from them are becoming more and more accurate [25,80,81].
Sensors have evolved significantly in the past years, and so has machine learning. The data obtained from vision-based devices or sensors are used to train a machine learning method to efficiently detect features from the training data. However, the accuracy of these methods depends on the quality of the data and the effectiveness of feature extraction: the better labeled the data are, the better they can be classified and features extracted; similarly, the better the training model, the better the accuracy. Traditional machine learning models are available for data classification, but their shortcoming is manual feature extraction. For better accuracy, deep learning models such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM) are used. The robustness of a HAR system depends on the results of the data classification of these deep learning models, and inaccurate classification of features can have detrimental effects for users [82,83,84,85,86].
Human activities are difficult to classify due to their irregularities. Human activities vary from person to person and can be performed in multiple ways, making it challenging to classify them efficiently; HAR using vision-based devices is even more challenging, as it involves various limitations. The data collected from videos or pictures may have background lighting issues, camera angle issues, and so on. These issues cannot be sorted out by generic means, making such systems costly and complex. To overcome these hurdles and limitations, in the past few years, sensors have been used to extract movement data with minimal interference from unwanted media [87,88,89].
Sensors are less costly, can provide multi-levelled and precise data, and can be used in any environment. However, even a slight fault or malfunction in the sensor hardware can adversely affect the collected data; it is therefore essential to verify that a sensor works accurately before gathering data. Activity recognition using standard machine learning approaches, such as the Support Vector Machine (SVM) and decision tree, can produce substantial results in a controlled environment, but these models are strictly dependent on the data, because huge datasets require a huge amount of training time. All in all, the accuracy of these machine learning models strictly depends on feature extraction, as that process is not automated [20,90].
Chang et al. [91] showed that SVMs, along with conventional Artificial Neural Networks, can achieve acceptable results; however, they still lacked accuracy. Islam et al. [92] proposed a hybrid feature selection model, which used Sequential Floating Forward Search (SFFS) for feature selection and extraction. The best features are selected from a subset of features based on certain criteria, and pairs of best features are created and compared with the next subset of features. The overall feature extraction process thereby becomes efficient and optimal features are extracted; an SVM is then used to classify the data. The main contribution of this work is the efficient utilization of the SFFS module for feature extraction. However, the drawback of this approach is that it performs well only if the dataset is relatively small: performance degrades if the dataset is too large or contains too many redundancies. Hence, it can be concluded that HAR systems using conventional machine learning models usually require preprocessed data to produce convincing results.
Due to the limitations mentioned previously, machine learning has been integrated with deep learning techniques, which address the limitations of conventional machine learning models and provide a broader scope for feature extraction. Deep learning involves in-depth processing of data, efficient feature extraction, and a layered structure for better classification. Deep learning has few drawbacks other than the increased computational cost for complex and growing amounts of data; deep learning models are scalable but hard to debug if the algorithm is too complex [93].
In recent years, deep learning has achieved remarkable results in the areas of human activity recognition, image recognition, and Natural Language Processing. One of its most impactful aspects is automatic feature detection and data classification with high accuracy, hence its high demand in the field of HAR. Several deep learning models have been introduced over the past years, each with positives and negatives of its own. Deep learning models are designed and inspired by the structure of the human brain, which is why they are usually referred to as neural networks. A CNN is a multilayered model used to extract features from data [94]. Its generic structure consists of an input layer, two hidden layers, and an output layer. The input layer receives the input data, which are fed to the neural network. Each hidden layer usually consists of two sub-layers: a convolution layer, in which multiple filters are applied to the data, and a pooling layer, which merges the data from the convolution layer. The output layer is connected to, and is part of, a fully connected layer that consists of the merged data from all the hidden layers and classifies it.
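The sketch below gives a minimal PyTorch rendering of this generic structure for windowed sensor data: two convolution + pooling stages followed by a fully connected classifier. The window length, channel counts, and six activity classes are illustrative assumptions rather than the configuration of any cited system.

```python
# Minimal 1D CNN for sensor-based HAR, following the generic structure
# described above. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleHARCNN(nn.Module):
    def __init__(self, in_channels=3, num_classes=6, window_len=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),  # convolution layer
            nn.ReLU(),
            nn.MaxPool1d(2),                                       # pooling layer
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Fully connected output layer that classifies the merged features.
        self.classifier = nn.Linear(64 * (window_len // 4), num_classes)

    def forward(self, x):              # x: (batch, channels, window_len)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SimpleHARCNN()
logits = model(torch.randn(8, 3, 128))  # 8 windows of 3-axis accelerometer data
print(logits.shape)                      # torch.Size([8, 6])
```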
An RNN is a gate-based neural network that consists of multiple gated units. The output of each gated unit can be fed as input to the next unit, because the units can remember the input of other units. RNNs are widely used in Natural Language Processing and image detection applications [95].
The LSTM is a type and subclass of RNN. The major difference between the LSTM and the generic RNN is that the latter suffers from the vanishing gradient problem; the LSTM overcomes this problem by introducing gates. Moreover, LSTMs contain memory cells that can store input data for a long period, which is very beneficial when a huge pool of data is being trained [96].
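A correspondingly minimal LSTM classifier for the same kind of windowed sensor data might look as follows; the hidden size and class count are again illustrative assumptions.

```python
# Minimal LSTM classifier for windowed sensor data. Sizes are
# illustrative assumptions, not from any cited system.
import torch
import torch.nn as nn

class SimpleHARLSTM(nn.Module):
    def __init__(self, in_features=3, hidden=64, num_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(in_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):              # x: (batch, time, features)
        out, _ = self.lstm(x)          # memory cells carry context across time steps
        return self.head(out[:, -1])   # classify from the last time step

model = SimpleHARLSTM()
logits = model(torch.randn(8, 128, 3))  # 8 windows, 128 samples, 3 axes
```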
These are the most widespread classification models used in HAR in recent years due to their exceptional performance and results. Most of these approaches utilize sensor data for training purposes, and several HAR systems use wearable sensors to collect those data. These wearable sensors mostly include inertial sensors, such as accelerometers and gyroscopes, because they are also included in smartphones [58,97].
Many activities, such as lying, walking, running, and sleeping, have been identified by different HAR systems to detect and help prevent falls in elderly people. However, these approaches utilize multiple sensors to collect movement data, which can be a hassle that affects people’s daily lives. Hence, further studies have been conducted based on a single sensor; these show notable results for basic activities such as walking, running, and sitting but fail for complex activities such as smoking, dancing, and exercising [98].
To overcome this hurdle, sensors such as accelerometers or compasses are used in conjunction with other sensors, e.g., gyroscopes and heart-rate sensors. Though the huge pool of data from multiple sensors must be segmented, an efficient classification model can achieve higher accuracy in feature extraction [99,100]. Wan et al. [101] proposed a CNN-based approach, which showed that the classic CNN still outperforms the conventional LSTM, Bidirectional LSTM (BLSTM), MLP, and SVM models in feature-extraction accuracy. However, the structure of these models was not optimized, so the results may vary in terms of precision. The major contribution of these approaches is the significantly increased efficiency achieved by adding a context-aware classification module that overcame certain errors. The results showed a significant increase in classification accuracy compared to other classification models such as the decision table and Random Tree.
Wang et al. [102] presented a comparative analysis of conventional and advanced HAR approaches. Conventional HAR approaches based on pattern recognition collect raw data from devices such as sensors, Bluetooth, and Wi-Fi; features are extracted manually and input to a machine learning model for training. However, these methods have limitations: only simplistic features can be extracted, which ultimately leads to average performance. Advanced HAR approaches based on deep learning models overcame these limitations by introducing neural networks, fully automating the previously manual feature extraction. Moreover, the performance of deep learning methods on unlabeled data far exceeds that of conventional machine learning methods. The focus of this survey was to demonstrate the importance of deep learning in HAR and the need for lightweight deep learning models to minimize the cost of complex systems.
Ignatov [103] proposed an enhanced CNN that overcame time-series length effects when extracting features for real-time activity recognition. Tested on the UCI-HAR (University of California Irvine—Human Activity Recognition) and WISDM (Wireless Sensor Data Mining) datasets, it achieved superior results compared to state-of-the-art CNN-based systems. The proposed work was not only superior in feature accuracy but also low cost. However, the data required for this approach were preprocessed; as there is no noise removal or auto-labeling module involved, the performance of this approach will not be up to the mark for weakly labeled data.
Zhou et al. [104] designed a semi-supervised deep learning framework able to extract features from weakly labeled data. An auto-labeling system was introduced to label the unlabeled data, which drastically increased the learning accuracy; a distance-based reward rule strategy was used to handle the labeling. The results of the auto-labeling module were fused with other sensor data and finally passed through a conventional LSTM module for better feature extraction, ultimately leading to efficient classification. However, this approach required a large dataset of unlabeled data for efficient auto-labeling, increasing the cost of the proposed system. Moreover, with a BLSTM and the availability of a larger dataset, the data could have been classified even more efficiently.
Xu et al. [105] proposed a hybrid neural network approach (InnoHAR), which combined an RNN with an Inception Neural Network (INN). The INN, consisting of various deep layers, has multiple convolution layers parallel to pooling layers, which together form an inception layer. These convolution layers use filters of size 1 × 1, 1 × 3, and 1 × 5, respectively. The main idea behind the inception layer is to allow the network to select the required filter size itself rather than wasting resources. The output of the INN is then passed through two GRU (Gated Recurrent Unit) layers for better time efficiency. Results on three public datasets were better than those of the state-of-the-art Deep Convolutional LSTM (DeepConvLSTM) and CNN models. However, this approach considered already-available preprocessed datasets and did not experiment on real-time sensor data, which requires additional modules such as noise removal and data segmentation. Moreover, the INN has poor initialization, which requires a lot of computation to overcome, and minor changes to the model require costly retraining. A fine-tuned CNN can achieve the same performance, which is why the INN is not used much in state-of-the-art approaches.
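The sketch below illustrates the idea of an inception-style block with parallel 1 × 1, 1 × 3, and 1 × 5 one-dimensional convolutions plus a pooling branch, loosely in the spirit of InnoHAR’s inception layers [105]; the channel counts are illustrative assumptions.

```python
# Inception-style 1D block: parallel branches with different filter
# sizes, concatenated on the channel axis. A loose illustration of
# the idea behind [105], not its exact architecture.
import torch
import torch.nn as nn

class InceptionBlock1D(nn.Module):
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.b1 = nn.Conv1d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv1d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv1d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.pool = nn.Sequential(
            nn.MaxPool1d(kernel_size=3, stride=1, padding=1),
            nn.Conv1d(in_ch, branch_ch, kernel_size=1),
        )

    def forward(self, x):  # x: (batch, in_ch, time)
        # Concatenating the parallel branches lets the network "choose"
        # among filter sizes instead of committing to one.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)

block = InceptionBlock1D(in_ch=6)
y = block(torch.randn(8, 6, 128))  # -> shape (8, 64, 128)
```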
Jiang et al. [106] proposed an Attention-based Bidirectional LSTM (ABLSTM), which uses a BLSTM to process the data in both directions. This approach is based on Wi-Fi data and thus needs a BLSTM, which can process the state of the signal data both before and after a given point. A two-layered BLSTM is used, with one layer passing the data forward and the other passing the data backward. The output from the BLSTM is fed as input to the attention model, which focuses on the features of the data that are of interest: vector data are scored using the ReLU function and then passed through a softmax layer for classification. The results showed superior classification accuracy compared to other similar approaches. However, the experiments were based on a single-channel Wi-Fi device without real-time data collection, and these two factors can have a huge impact on the accuracy of the proposed system: real-time data involve multiple types of interference in the readings, caused by the environment or by magnetic interference. In addition, the dataset was based on single-user data and was fully supervised, so a lot of further work can be conducted on this approach. Past research has focused on supervised learning because labeled data are not available in abundance; hence, recent approaches mostly focus on semi-supervised data.
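Conceptually, the ABLSTM combines a bidirectional LSTM with a learned weighting over time steps. The sketch below is a loose, simplified rendering of that idea; the layer sizes, single BLSTM layer, and input dimension (e.g., 30 Wi-Fi subcarriers) are illustrative assumptions, not the exact architecture of [106].

```python
# Bidirectional LSTM whose per-time-step outputs are weighted by a
# learned attention layer before classification. A simplified
# illustration of the ABLSTM idea [106]; sizes are assumptions.
import torch
import torch.nn as nn

class AttentionBLSTM(nn.Module):
    def __init__(self, in_features=30, hidden=64, num_classes=6):
        super().__init__()
        self.blstm = nn.LSTM(in_features, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Sequential(nn.Linear(2 * hidden, 1), nn.ReLU())
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.blstm(x)                     # (batch, time, 2*hidden)
        w = torch.softmax(self.score(h), dim=1)  # attention weights over time
        context = (w * h).sum(dim=1)             # weighted sum of time steps
        return self.head(context)

model = AttentionBLSTM()
logits = model(torch.randn(8, 200, 30))  # e.g., 200 Wi-Fi samples of 30 subcarriers
```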
Zhu et al. [107] proposed a novel Deep LSTM (DLSTM) approach for efficient feature recognition. Both labeled and unlabeled data are used to train the model to detect human activities using smartphone sensors. The DLSTM involves multiple LSTM layers between the input and output layers. The raw data are passed through an augmentation phase to increase the amount of data, and Gaussian noise removal is performed to filter any inconsistencies in the data, followed by the extraction of low-level features. These low-level features are dropped out and the remaining features are passed to the DLSTM for high-level feature extraction. A loss over the unsupervised (unlabeled) data is calculated, and labels are assigned based on the model’s predictions. The proposed approach was benchmarked on the UCI-HAR dataset, and the results showed its superiority over other semi-supervised learning methods. However, this approach was evaluated in a controlled environment; in an uncontrolled environment, where a single activity can be performed in multiple ways or different activities can be performed in similar ways, the results may vary.
In a recent study, Wang et al. [108] proposed a hybrid one-dimensional approach. Data from multiple sensors are passed through a convolutional neural network, and the output is passed to an LSTM module, which classifies the data. The main achievement of this approach is the identification of activity transitions along with the activities themselves. Most proposed works do not consider this factor, yet in human behavior recognition it is an important task, and activity transition detection has a significant effect on real-time movement recognition. The data from two sensors (accelerometer and gyroscope) are combined into a 2D array and then passed to the CNN, a three-layered architecture with three hidden layers, each consisting of a convolution layer and a pooling layer. The output from the CNN is passed into the LSTM module in the form of a vector. The features extracted by the LSTM are passed to a fully connected layer, which undergoes batch normalization and is finally forwarded to the softmax layer for classification. This approach was benchmarked on the publicly available HAPT (Human Activities and Postural Transitions) dataset, which already contains activity-transition data. The results showed that this approach not only had a better activity recognition rate than other deep learning models such as CNN, LSTM, CNN-BLSTM, and CNN-GRU, but also a better activity-transition recognition rate. Its limitations lie in the fact that only basic activity transitions (lying–standing or sitting–standing) were identified; complex activity transitions, such as walking–smoking, driving–eating, or sitting–reading, may be a different case [108]. Moreover, multiple users may perform activity transitions with different movements, while this study is based on the movements of a small number of people.
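A simplified rendering of such a hybrid pipeline is sketched below: three convolution + pooling stages over fused accelerometer and gyroscope channels, an LSTM over the resulting feature sequence, batch normalization, and a linear (softmax) classifier. All layer sizes are illustrative assumptions rather than the exact configuration of [108].

```python
# Hybrid CNN-LSTM in the spirit of [108]: CNN for local features,
# LSTM for temporal order. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    def __init__(self, in_channels=6, num_classes=12):
        super().__init__()
        self.cnn = nn.Sequential(  # three convolution + pooling stages
            nn.Conv1d(in_channels, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.bn = nn.BatchNorm1d(64)
        self.head = nn.Linear(64, num_classes)  # softmax applied in the loss

    def forward(self, x):                  # x: (batch, 6, window_len)
        f = self.cnn(x).transpose(1, 2)    # (batch, time, 64) for the LSTM
        out, _ = self.lstm(f)
        return self.head(self.bn(out[:, -1]))

model = HybridCNNLSTM()
logits = model(torch.randn(8, 6, 128))  # accelerometer + gyroscope, 3 axes each
```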
Lu et al. [109] proposed an approach for efficient data classification, focusing on daily life activities. They categorized activities into two types: countable activities, which involve a fixed number and iteration of gestures, such as walking, sitting, eating, and smoking, and uncountable activities, which involve complex and uncountable gestures, such as dancing and exercising. Interestingly, in some cases even walking can be considered an uncountable activity. The researchers used Sequential Floating Forward Selection (SFFS) to select relevant features for data extraction and introduced three new features to enhance the classification process. Using a sliding window, nine features were extracted for every activity from the publicly available DaLiAc dataset and a self-gathered (AmA) dataset.
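As an illustration of sliding-window feature extraction of this kind, the sketch below computes nine simple statistical features per window from a 3-axis accelerometer stream; the window size, overlap, and specific feature set are illustrative assumptions, not the features of [109].

```python
# Sliding-window feature extraction over a 3-axis accelerometer
# stream, of the kind fed to classical ML classifiers. The window
# size, overlap, and feature set are illustrative assumptions.
import numpy as np

def sliding_window_features(signal, window=128, step=64):
    """signal: (num_samples, 3) accelerometer array -> (num_windows, 9)."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        feats.append(np.concatenate([
            w.mean(axis=0),                            # 3 mean features
            w.std(axis=0),                             # 3 standard deviations
            np.abs(np.diff(w, axis=0)).mean(axis=0),   # 3 mean absolute differences
        ]))
    return np.array(feats)

stream = np.random.randn(1000, 3)    # placeholder sensor stream
X = sliding_window_features(stream)  # feature matrix for a classical classifier
print(X.shape)                       # (14, 9)
```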
The extracted features were based on each activity’s specific patterns and were tested on conventional machine learning (ML) models such as KNN, SVM, GBDT (Gradient Boosted Decision Trees), and Random Forest. The results showed a huge boost in classification accuracy compared to the conventional state-of-the-art ML approaches mentioned before. However, the sliding window has certain limitations, such as its computational cost; the cost can be reduced by increasing the window size, but this affects the accuracy of feature extraction. Moreover, as mentioned before, the same activity can be performed in multiple ways and vice versa, and this method does not take that factor into account. Table 2 summarizes this section, pointing out the strengths and weaknesses of the referenced approaches. In the “Strength” column, the focus is on the positive aspects or advantages of each model: the key features or functionalities that make a particular model effective for activity recognition, such as excelling in feature extraction, achieving high accuracy, handling long-term dependencies, or outperforming conventional approaches. In the “Weakness” column, the focus is on the limitations or drawbacks of each model: the aspects where a particular model may fall short or face challenges, such as limited accuracy, slow network performance, overfitting issues, difficulty adapting to certain configurations, or high time complexity.
To summarize, vision-based devices such as cameras offer convenient tracking but can suffer from lighting, viewpoint, and privacy issues. Wearable sensors provide precise, multi-level data at lower cost and with less interference. Deep learning models automate feature extraction and outperform traditional machine learning models. Multiple sensors improve classification accuracy, especially for complex activities, and semi-supervised learning and context-aware classification enhance efficiency. Overall, wearable sensors and deep learning models show promise in HAR, overcoming the limitations of vision-based approaches and manual feature extraction.

4. RHMS for Elderly People

A smart healthcare monitoring system is proposed in [52]. This system can contribute greatly to providing a comfortable and safe environment for elderly and disabled people, helping them live independently, without fear of an emergency or critical healthcare situation, through continuous monitoring of their health. The proposed framework collects and accumulates patients’ physiological data with the help of wearable sensors and transmits them to a cloud server for analysis and processing. Any change or disorder detected in a patient’s health data is reported to the patient’s doctor through the hospital’s cloud platform. The framework is a simple technology based on a flexible architecture that can be scaled and easily expanded, thus providing a stable and cost-efficient system for monitoring elderly patients remotely. The results show that the system, by monitoring patients’ health and detecting symptoms remotely and in real time, can efficiently contribute to improving healthcare services.
Having a powerful effect on physical and mental health and a robust association with many rehabilitation programs, Physical Activity Recognition and Monitoring (PARM) has been considered a key paradigm for smart healthcare [110]. Traditional methods for PARM used controlled environments, intending to increase the number of completely identifiable activity subjects and to improve recognition accuracy and system robustness using novel body-worn sensors or advanced learning algorithms. The landscape has now changed with cost-effective heterogeneous wearable devices and mobile applications: PARM has moved to uncontrolled and open environments. However, these technologies and how their results compare with traditional PARM are currently less well known. To help understand the use of IoT technology in PARM studies, the work in [110] provides a systematic review, inspecting PARM studies from a typical IoT layer-based perspective. First, it summarizes the modern techniques in traditional PARM methodologies as used in the healthcare domain, including sensing, feature extraction, and recognition techniques. Second, it identifies new research trends and challenges in PARM studies in the IoT environment. Finally, it includes a few successful studies that incorporate PARM into industrial applications. Over the last two decades, several studies have addressed critical issues in PARM because of its importance in healthcare support for a variety of chronic diseases, musculoskeletal rehabilitation, independent living of the elderly, and fitness goals for active lifestyles. The contribution of this work is its Internet of Things (IoT) perspective, sequentially covering the sensing, network, processing, and application layers and distinctly and systematically summarizing existing primary PARM devices, methods, and environments. Wearable and portable sensors/devices, inertial signal data processing, and classification/clustering approaches are described and compared in light of physical activity types, subjects, accuracy, flexibility, and energy. Typical research and project applications regarding PARM are also introduced.
In [111], the use of RFID sensors and accelerometers was proposed to recognize a user’s daily activity. A decision tree is employed to classify five human body states using two wireless accelerometers, and the detection of RFID-tagged objects during hand movements provides additional information about instrumental activities. The system includes tagging and visualization tools, making it widely applicable for caring for elderly people’s health.
Ming et al. [112] proposed a CNN-based elderly monitoring system. The approach utilizes vision-based devices to capture movement data, and sensitive data are protected by a key-based authorization module introduced alongside the CNN. The experimental evaluation shows the system to be resilient to explicit breach attempts. The proposed framework was tested on the publicly available UCI-HAR dataset with six basic and six transitional activities and achieved an overall accuracy of 92.02%.
Yang et al. [66] proposed an RFID-based CNN model for the posture detection of elderly people. Van Kasteren’s dataset is used to evaluate the approach; it consists of 245 action instances for seven different activities over 28 days, sensed using RFID technology. Four daily life activities were recorded: brushing teeth, taking a bath, eating, and getting dressed. The CNN utilized dense layers, making the proposed model complicated and prone to errors. The model demonstrated an accuracy of 82.78%, which shows the effectiveness of the proposed approach; however, in real-time scenarios, this approach may not produce substantial results.
To compare the aforementioned approaches, let us take a closer look. In one approach [52], a smart healthcare monitoring system is proposed, utilizing wearable sensors to collect patients’ physiological data, which are then transmitted to a cloud server for analysis. Changes in a patient’s health data are detected and reported to their doctor, enabling real-time monitoring and detection of symptoms. This scalable and cost-efficient system aims to improve healthcare services for elderly patients remotely.
Another approach [110] focuses on PARM in uncontrolled environments using IoT technology. It provides a systematic review of traditional PARM methodologies, discussing sensory, feature extraction, and recognition techniques. It also explores new research trends and challenges in PARM studies within the IoT environment.
Additionally, RFID sensors and accelerometers are employed in an approach [111] to recognize daily activities of users, using a decision tree for classification. This system combines wearable sensors and RFID technology to provide valuable information about user activities.
A CNN-based elderly monitoring system [112] utilizes vision-based devices and sensitive data protection measures, achieving high accuracy in activity recognition. This approach ensures privacy and security while effectively monitoring elderly individuals’ movements.
Finally, a CNN model for posture detection of elderly people is proposed [66] using RFID technology, demonstrating good effectiveness but potential limitations in real-time scenarios. This system focuses on posture detection, which is crucial for maintaining the health and safety of elderly individuals.
Each of these approaches brings unique features and advantages to the field of healthcare monitoring and human activity recognition, catering to different scenarios and requirements.
Table 3 summarizes Section 4 and points out the strengths of the referenced approaches. Figure 2 presents a summarized accuracy-comparison graph that visualizes the information in the table. The approaches are arranged in the graph according to both their publication year and the novelty of their architectures; approaches from the reference table that consist solely of systematic reviews of other frameworks are excluded. The graph shows that the mean accuracy of the various approaches lies in the range of 90% to 95%, a typical level for neural-network architectures. Notably, most of these approaches prioritize reducing model training time rather than accuracy alone. As discussed in the preceding paragraphs, training time should not be disregarded, particularly when designing an approach that targets unsupervised data.
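As an illustration of how such an accuracy comparison can be visualized, the sketch below plots only the two accuracies quoted in this section against the 90% to 95% band mentioned above; the styling is an assumption and does not reproduce Figure 2 exactly.

```python
# Sketch of an accuracy-comparison bar chart in the style of Figure 2. Only
# the two accuracies quoted in this section are real values from the text;
# the styling and the shaded 90-95% band are illustrative choices.
import matplotlib.pyplot as plt

approaches = ["CNN + vision [112]", "RFID + CNN [66]"]
accuracy = [92.02, 82.78]  # values reported above

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(approaches, accuracy)
ax.axhspan(90, 95, alpha=0.15, label="typical NN range (90-95%)")
ax.set_ylabel("Reported accuracy (%)")
ax.set_ylim(75, 100)
ax.legend(loc="lower right")
fig.tight_layout()
plt.show()
```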

5. Major Challenges in RHMS

5.1. Data Accuracy and Availability in Real-Time

Arguably, the most complex challenge concerns the accuracy of remotely collected data, which comes under scrutiny from both patients and medical staff. Patients accustomed to traditional methods find it hard to trust that a small device can supply their doctor with reliable data about their health. Likewise, front-line medical providers find it easier to make decisions based on information obtained through traditional methods, and tend to avoid automated digital systems because of data inaccuracies. Delivering information in real time is an equally serious challenge. If data must travel from a patient's device to a doctor's system over a mobile network, they first pass through the service provider's infrastructure and then reach their destination via the internet; a failure anywhere along this path means the data are not delivered, and mobile networks are not always available. If data on which a patient's life depends do not reach the doctor at the right time, the entire RHM system becomes meaningless. A simple device-side mitigation is sketched below.
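A common device-side mitigation is store-and-forward buffering: readings are queued locally and retransmitted once connectivity returns, so no vital sign is silently dropped. The sketch below illustrates the idea; the `send_to_server` uplink is a hypothetical placeholder, not an API from any system surveyed here.

```python
# Store-and-forward sketch: readings are queued on the patient device and
# retransmitted once the mobile network returns. `send_to_server` is a
# hypothetical placeholder uplink.
import time
from collections import deque

buffer = deque(maxlen=10_000)  # bounded local queue; oldest readings evicted first

def send_to_server(reading: dict) -> bool:
    """Hypothetical uplink; a real implementation would transmit over the
    network and return True only on an acknowledged delivery."""
    return False  # placeholder: behaves as if the network is currently down

def publish(reading: dict) -> None:
    buffer.append(reading)
    while buffer:
        if send_to_server(buffer[0]):
            buffer.popleft()   # confirmed delivery, safe to drop locally
        else:
            break              # network down: keep the backlog, retry later

for _ in range(5):             # demo loop; a real device samples continuously
    publish({"ts": time.time(), "hr": 72})
    time.sleep(0.1)

print(len(buffer), "readings buffered awaiting connectivity")
```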

5.2. Data Security and Protection

Besides the accuracy and availability of data, their security and protection are also critical. Healthcare standards must be met, and strong data management practices are essential. A large part of data management is usually handled by third parties, which in itself poses a risk to individuals' data. The challenges for hospitals are likewise considerable, since integrating third-party systems risks endangering the safety and privacy of their patients. One standard mitigation is sketched below.
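One standard mitigation, shown here under the assumption that keys are provisioned out of band, is to encrypt readings before they ever reach a third-party data manager, so the third party stores only ciphertext. The example uses the Fernet primitive from the widely used `cryptography` package; it illustrates the principle and is not a prescription from the surveyed systems.

```python
# Encrypting readings before they reach a third-party data manager, so the
# third party stores only ciphertext. Key provisioning between the patient
# device and the hospital is assumed to happen securely out of band.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice provisioned securely, not generated here
cipher = Fernet(key)

reading = {"patient_id": "anon-17", "hr": 72, "spo2": 97}
token = cipher.encrypt(json.dumps(reading).encode())  # what the third party stores

# Only holders of the key (e.g., the hospital) can recover the plaintext.
restored = json.loads(cipher.decrypt(token))
assert restored == reading
```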

5.3. Selection of Sensors and Devices

In any RHMS, the centrepiece is the set of sensors and wearable gadgets, which are available in a variety of sizes and types. A sensor may play a vital role for one disease and be completely irrelevant for another. The selection of sensors therefore affects the overall efficiency of the system and is not a straightforward task.

5.4. Detection of Concept Drift

In RHMS, machine learning algorithms are used to build prediction models over the cloud, MEC, fog, and edge layers. These models are trained on historical medical data from different patients to predict, for example, stroke risk [113], falls [114], and other diseases affecting a patient's health [115]. Over time, however, such models can stop predicting accurately, a phenomenon known as model or concept drift [116]. Many drift detection approaches have been proposed in other application streams [117,118,119], but far fewer in the RHMS stream.
One recent approach, Ensemble and Continual Federated Learning (ECFL) [120], is a distributed machine learning approach that performs concept drift detection across multiple mobile devices. It uses ensemble learning, in which multiple learning algorithms train models locally and aggregate them globally to obtain better predictive performance. For drift detection, ECFL stores the confidence (probability) of each predicted label, the data instance values, and the predicted label itself in a sliding window with every new prediction. Its drift detection algorithm then compares two consecutive windows on each edge device and raises a drift alarm when the change crosses a threshold. Because this approach relies on the model's prediction confidence, a poorly calibrated model may, after drift, continue to predict labels with the same confidence as before; in that case the confidence values do not change and the algorithm may fail to detect the drift.
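A much-simplified sketch of such a window-based test is given below: it compares the mean prediction confidence of two consecutive sliding windows and raises an alarm when the drop exceeds a threshold. The window size and threshold are illustrative assumptions, and ECFL itself additionally tracks instance values and predicted labels, which this sketch omits.

```python
# A much-simplified, window-based drift test in the spirit of ECFL [120]:
# compare the mean prediction confidence of two consecutive sliding windows
# and raise an alarm when the drop exceeds a threshold. Window size and
# threshold are illustrative assumptions.
from collections import deque

class ConfidenceDriftDetector:
    def __init__(self, window: int = 100, threshold: float = 0.10):
        self.prev = deque(maxlen=window)  # older window of confidences
        self.curr = deque(maxlen=window)  # newer window of confidences
        self.threshold = threshold

    def update(self, confidence: float) -> bool:
        """Feed the confidence of each new prediction; True signals drift."""
        if len(self.curr) == self.curr.maxlen:
            self.prev.append(self.curr.popleft())
        self.curr.append(confidence)
        if len(self.prev) < self.prev.maxlen:
            return False  # not enough history to compare yet
        drop = sum(self.prev) / len(self.prev) - sum(self.curr) / len(self.curr)
        return drop > self.threshold  # confidence fell sharply: likely drift

detector = ConfidenceDriftDetector()
# for conf in model_confidences:
#     if detector.update(conf):
#         raise_drift_alarm()  # hypothetical downstream handler
```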
Under distributed concept drift, the change-points of the underlying probability distributions can occur at different times across clients/devices [121]. Jothimurugesan et al. [121] measure the model loss at each client and form a cluster of clients on which drift is detected; retraining is then performed only for the clients in that cluster. This approach maintains the accuracy of the overall application built on the federated platform. However, how the model loss should be measured remains outside the scope of that article.
The area of concept drift still offers vast room for research. The issue is critical in RHMS because drift leads to inaccurate predictions. More research is needed to improve the detection and handling of concept drift in distributed machine learning systems, especially in the context of real-time health monitoring.

6. Conclusions

The rapidly growing elderly population and the provision of healthcare facilities to them present a major challenge for governments and healthcare departments. For elderly people, going to hospitals daily to inform doctors about their health and seek advice is impractical. Therefore, Remote Health Monitoring Systems (RHMS) offer a viable solution for both elderly patients and doctors. In this paper, we conducted a survey of literature related to the development of efficient RHMS.
We reviewed existing data sensing and gateway technologies, as well as state-of-the-art Human Activity Recognition (HAR) systems. Our survey explores the benefits of fog and mobile edge computing, which overcome cloud computing limitations such as high bandwidth usage, high latency, and high power consumption. Fog and edge computing have emerged as a new computing paradigm that brings real-time sensor processing, analytics, and storage close to the edge device. The study provides insights for future work, encouraging the use of fog and edge computing in health applications to achieve real-time responses to actuators.
Furthermore, we surveyed promising existing RHMS, highlighting their potential in healthcare. Finally, we identified and discussed current challenges in the development of RHMS. Our future work following this survey will address issues related to concept drift in machine learning (ML) health monitoring models in distributed fog environments. Concept drift is a crucial consideration in ensuring the accuracy and reliability of real-time health monitoring systems.
In conclusion, RHMS holds great promise in addressing the healthcare needs of the elderly population. By leveraging fog and edge computing, we can enhance the efficiency and effectiveness of health monitoring systems, enabling real-time responses and improving the overall quality of healthcare services for the elderly.

Author Contributions

Survey of gateways, S.A.; survey of machine learning, S.I.; analysis, S.A. and S.I.; writing—original draft preparation, S.A.; writing—review and editing, N.K., N.M., N.A. and N.R.; visualization, N.R.; supervision, N.M.; project administration, N.M. and N.A.; funding acquisition, N.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported under the Erasmus+ Capacity Building in Higher Education (CBHE) programme under SAFE-RH Project-619483-EPP-1-2020-1-UK-EPPKA2-CBHE-JP.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Rashid, M.M.; Khan, S.U.; Eusufzai, F.; Redwan, M.A.; Sabuj, S.R.; Elsharief, M. A Federated Learning-Based Approach for Improving Intrusion Detection in Industrial Internet of Things Networks. Network 2023, 3, 158–179. [Google Scholar] [CrossRef]
  2. Blount, M.; Batra, V.M.; Capella, A.N.; Ebling, M.R.; Jerome, W.F.; Martin, S.M.; Nidd, M.; Niemi, M.R.; Wright, S.P. Remote health-care monitoring using Personal Care Connect. IBM Syst. J. 2007, 46, 95–113. [Google Scholar] [CrossRef]
  3. Kalid, N.; Zaidan, A.; Zaidan, B.; Salman, O.H.; Hashim, M.; Muzammil, H. Based real time remote health monitoring systems: A review on patients prioritization and related "big data" using body sensors information and communication technology. J. Med. Syst. 2018, 42, 30. [Google Scholar]
  4. Mohammed, K.; Zaidan, A.; Zaidan, B.; Albahri, O.S.; Alsalem, M.; Albahri, A.S.; Hadi, A.; Hashim, M. Real-time remote-health monitoring systems: A review on patients prioritisation for multiple-chronic diseases, taxonomy analysis, concerns and solution procedure. J. Med. Syst. 2019, 43, 223. [Google Scholar] [CrossRef] [PubMed]
  5. Rahman, H.; Ahmed, M.U.; Begum, S. Vision-based remote heart rate variability monitoring using camera. In Proceedings of the Internet of Things (IoT) Technologies for HealthCare: 4th International Conference, HealthyIoT 2017, Angers, France, 24–25 October 2017; Proceedings 4. Springer: Berlin/Heidelberg, Germany, 2018; pp. 10–18. [Google Scholar]
  6. Rincon, J.A.; Guerra-Ojeda, S.; Carrascosa, C.; Julian, V. An IoT and fog computing-based monitoring system for cardiovascular patients with automatic ECG classification using deep neural networks. Sensors 2020, 20, 7353. [Google Scholar] [CrossRef]
  7. Hao, Y.; Helo, P.; Gunasekaran, A. Cloud platforms for remote monitoring system: A comparative case study. Prod. Plan. Control 2020, 31, 186–202. [Google Scholar] [CrossRef]
  8. Hossain, M.S.; Muhammad, G. Cloud-assisted industrial internet of things (iiot)–enabled framework for health monitoring. Comput. Netw. 2016, 101, 192–202. [Google Scholar] [CrossRef]
  9. Pramanik, P.K.D.; Pareek, G.; Nayyar, A. Security and privacy in remote healthcare: Issues, solutions, and standards. In Telemedicine Technologies; Elsevier: Cambridge, MA, USA, 2019; pp. 201–225. [Google Scholar]
  10. Hu, P.; Dhelim, S.; Ning, H.; Qiu, T. Survey on fog computing: Architecture, key technologies, applications and open issues. J. Netw. Comput. Appl. 2017, 98, 27–42. [Google Scholar] [CrossRef]
  11. Vora, J.; Tanwar, S.; Tyagi, S.; Kumar, N.; Rodrigues, J.J. FAAL: Fog computing-based patient monitoring system for ambient assisted living. In Proceedings of the 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom), Dalian, China, 12–15 October 2017; pp. 1–6. [Google Scholar]
  12. Vora, J.; Tanwar, S.; Tyagi, S.; Kumar, N.; Rodrigues, J.J. HRIDaaY: Ballistocardiogram-based heart rate monitoring using fog computing. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 9–13 December 2019; pp. 9–13. [Google Scholar]
  13. Firdhous, M.; Ghazali, O.; Hassan, S. Fog computing: Will it be the future of cloud computing? In Proceedings of the Third International Conference on Informatics and Applications (ICIA2014), Kuala Terengganu, Malaysia, 8–10 October 2014; pp. 8–10. [Google Scholar]
  14. Tran, T.X.; Hajisami, A.; Pandey, P.; Pompili, D. Collaborative mobile edge computing in 5G networks: New paradigms, scenarios, and challenges. IEEE Commun. Mag. 2017, 55, 54–61. [Google Scholar] [CrossRef] [Green Version]
  15. Moghaddasi, K.; Rajabi, S. Learning at the Edge: Mobile Edge Computing and Reinforcement Learning for Enhanced Web Application Performance. In Proceedings of the 2023 9th International Conference on Web Research (ICWR), Tehran, Iran, 3–4 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 300–304. [Google Scholar]
  16. Dohr, A.; Modre-Opsrian, R.; Drobics, M.; Hayn, D.; Schreier, G. The internet of things for ambient assisted living. In Proceedings of the 2010 Seventh International Conference on Information Technology: New Generations, Las Vegas, NV, USA, 12–14 April 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 804–809. [Google Scholar]
  17. Costa, R.; Carneiro, D.; Novais, P.; Lima, L.; Machado, J.; Marques, A.; Neves, J. Ambient assisted living. In Proceedings of the 3rd Symposium of Ubiquitous Computing and Ambient Intelligence, 2008; Springer: Berlin/Heidelberg, Germany, 2009; pp. 86–94. [Google Scholar]
  18. van den Broek, G.; Cavallo, F.; Wehrmann, C. Aaliance Ambient Assisted Living Roadmap; IOS Press: Amsterdam, The Netherlands, 2010; Volume 6. [Google Scholar]
  19. Wang, L.; Hu, W.; Tan, T. Recent developments in human motion analysis. Pattern Recognit. 2003, 36, 585–601. [Google Scholar] [CrossRef]
  20. Ramasamy Ramamurthy, S.; Roy, N. Recent trends in machine learning for human activity recognition—A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1254. [Google Scholar] [CrossRef]
  21. Subasi, A.; Khateeb, K.; Brahimi, T.; Sarirete, A. Human activity recognition using machine learning methods in a smart healthcare environment. In Innovation in Health Informatics; Elsevier: Cambridge, MA, USA, 2020; pp. 123–144. [Google Scholar]
  22. Gumaei, A.; Hassan, M.M.; Alelaiwi, A.; Alsalman, H. A hybrid deep learning model for human activity recognition using multimodal body sensing data. IEEE Access 2019, 7, 99152–99160. [Google Scholar] [CrossRef]
  23. Ann, O.C.; Theng, L.B. Human activity recognition: A review. In Proceedings of the 2014 IEEE International Conference on Control System, Computing And Engineering (ICCSCE 2014), Penang, Malaysia, 28–30 November 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 389–393. [Google Scholar]
  24. Maurya, A.; Yadav, R.K.; Kumar, M.; Saumya. Comparative study of human activity recognition on sensory data using machine learning and deep learning. In Proceedings of the Integrated Intelligence Enable Networks and Computing, Gopeshwar, India, 25–27 May 2020; Springer: Singapore, 2021; pp. 63–71. [Google Scholar]
  25. Lara, O.D.; Labrador, M.A. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutorials 2012, 15, 1192–1209. [Google Scholar] [CrossRef]
  26. Alam, M.A.U.; Roy, N.; Holmes, S.; Gangopadhyay, A.; Galik, E. Automated functional and behavioral health assessment of older adults with dementia. In Proceedings of the 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Washington, DC, USA, 27–29 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 140–149. [Google Scholar]
  27. Chen, L.; Wei, H.; Ferryman, J. A survey of human motion analysis using depth imagery. Pattern Recognit. Lett. 2013, 34, 1995–2006. [Google Scholar] [CrossRef]
  28. de Moura Costa, H.J.; da Costa, C.A.; da Rosa Righi, R.; Antunes, R.S. Fog computing in health: A systematic literature review. Health Technol. 2020, 10, 1025–1044. [Google Scholar] [CrossRef]
  29. Manogaran, G.; Varatharajan, R.; Lopez, D.; Kumar, P.M.; Sundarasekar, R.; Thota, C. A new architecture of Internet of Things and big data ecosystem for secured smart healthcare monitoring and alerting system. Future Gener. Comput. Syst. 2018, 82, 375–387. [Google Scholar] [CrossRef]
  30. Sood, S.K.; Mahajan, I. Wearable IoT sensor based healthcare system for identifying and controlling chikungunya virus. Comput. Ind. 2017, 91, 33–44. [Google Scholar] [CrossRef] [PubMed]
  31. Mell, P.; Grance, T. The NIST Definition of Cloud Computing. National Institute of Standards and Technology: Gaithersburg, MD, USA, 2011. [Google Scholar]
  32. Kraemer, F.A.; Braten, A.E.; Tamkittikhun, N.; Palma, D. Fog computing in healthcare—A review and discussion. IEEE Access 2017, 5, 9206–9222. [Google Scholar] [CrossRef]
  33. Andriopoulou, F.; Dagiuklas, T.; Orphanoudakis, T. Integrating IoT and fog computing for healthcare service delivery. Components and Services for IoT Platforms: Paving the Way for IoT Standards; Springer International Publishing: Cham, Switzerland, 2017; pp. 213–232. [Google Scholar]
  34. Gia, T.N.; Jiang, M.; Rahmani, A.M.; Westerlund, T.; Liljeberg, P.; Tenhunen, H. Fog computing in healthcare internet of things: A case study on ecg feature extraction. In Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), Liverpool, UK, 26–28 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 356–363. [Google Scholar]
  35. Mouradian, C.; Naboulsi, D.; Yangui, S.; Glitho, R.H.; Morrow, M.J.; Polakos, P.A. A comprehensive survey on fog computing: State-of-the-art and research challenges. IEEE Commun. Surv. Tutorials 2017, 20, 416–464. [Google Scholar] [CrossRef] [Green Version]
  36. Mutlag, A.A.; Abd Ghani, M.K.; Arunkumar, N.a.; Mohammed, M.A.; Mohd, O. Enabling technologies for fog computing in healthcare IoT systems. Future Gener. Comput. Syst. 2019, 90, 62–78. [Google Scholar] [CrossRef]
  37. Mutlag, A.A.; Khanapi Abd Ghani, M.; Mohammed, M.A.; Maashi, M.S.; Mohd, O.; Mostafa, S.A.; Abdulkareem, K.H.; Marques, G.; de la Torre Díez, I. MAFC: Multi-agent fog computing model for healthcare critical tasks management. Sensors 2020, 20, 1853. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Mach, P.; Becvar, Z. Mobile edge computing: A survey on architecture and computation offloading. IEEE Commun. Surv. Tutorials 2017, 19, 1628–1656. [Google Scholar] [CrossRef] [Green Version]
  39. Wang, S.; Zhang, X.; Zhang, Y.; Wang, L.; Yang, J.; Wang, W. A survey on mobile edge networks: Convergence of computing, caching and communications. IEEE Access 2017, 5, 6757–6779. [Google Scholar] [CrossRef]
  40. Rahmani, A.M.; Gia, T.N.; Negash, B.; Anzanpour, A.; Azimi, I.; Jiang, M.; Liljeberg, P. Exploiting smart e-Health gateways at the edge of healthcare Internet-of-Things: A fog computing approach. Future Gener. Comput. Syst. 2018, 78, 641–658. [Google Scholar] [CrossRef]
  41. Stantchev, V.; Barnawi, A.; Ghulam, S.; Schubert, J.; Tamm, G. Smart items, fog and cloud computing as enablers of servitization in healthcare. Sens. Transducers 2014, 185, 121–128. [Google Scholar]
  42. Ko, S.W.; Huang, K.; Kim, S.L.; Chae, H. Live prefetching for mobile computation offloading. IEEE Trans. Wirel. Commun. 2017, 16, 3057–3071. [Google Scholar] [CrossRef] [Green Version]
  43. Saidi, H.; Labraoui, N.; Ari, A.A.A.; Bouida, D. Remote health monitoring system of elderly based on Fog to Cloud (F2C) computing. In Proceedings of the International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 9–11 June 2020; pp. 1–7. [Google Scholar]
  44. Jamil, B.; Shojafar, M.; Ahmed, I.; Ullah, A.; Munir, K.; Ijaz, H. A job scheduling algorithm for delay and performance optimization in fog computing. Concurr. Comput. Pract. Exp. 2020, 32, e5581. [Google Scholar] [CrossRef]
  45. Vilela, P.H.; Rodrigues, J.J.; Righi, R.d.R.; Kozlov, S.; Rodrigues, V.F. Looking at fog computing for e-health through the lens of deployment challenges and applications. Sensors 2020, 20, 2553. [Google Scholar] [CrossRef]
  46. Hartmann, M.; Hashmi, U.S.; Imran, A. Edge computing in smart health care systems: Review, challenges, and research directions. Trans. Emerg. Telecommun. Technol. 2022, 33, e3710. [Google Scholar]
  47. Nair, G.; Hadresh, G.; Pdinesh, V. A Comparison Analysis of Fog and Cloud Computing. IJRAR 2020, 6, 1386–1390. [Google Scholar]
  48. Dolui, K.; Datta, S.K. Comparison of edge computing implementations: Fog computing, cloudlet and mobile edge computing. In Proceedings of the 2017 Global Internet of Things Summit (GIoTS), Geneva, Switzerland, 6–9 June 2017; pp. 1–6. [Google Scholar]
  49. Schilit, B.; Adams, N.; Want, R. Context-aware computing applications. In Proceedings of the 1994 First Workshop on Mobile Computing Systems and Applications—WMCSA 1994, Santa Cruz, CA, USA, 8–9 December 1994; pp. 85–90. [Google Scholar]
  50. Craciunescu, R.; Mihovska, A.; Mihaylov, M.; Kyriazakos, S.; Prasad, R.; Halunga, S. Implementation of Fog computing for reliable E-health applications. In Proceedings of the 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 8–11 November 2015; pp. 459–463. [Google Scholar]
  51. Cao, Y.; Chen, S.; Hou, P.; Brown, D. FAST: A fog computing assisted distributed analytics system to monitor fall for stroke mitigation. In Proceedings of the 2015 IEEE International Conference on Networking, Architecture and Storage (NAS), Boston, MA, USA, 6–7 August 2015; pp. 2–11. [Google Scholar]
  52. Al-Khafajiy, M.; Baker, T.; Chalmers, C.; Asim, M.; Kolivand, H.; Fahim, M.; Waraich, A. Remote health monitoring of elderly through wearable sensors. Multimed. Tools Appl. 2019, 78, 24681–24706. [Google Scholar] [CrossRef] [Green Version]
  53. Zhang, S.; Wei, Z.; Nie, J.; Huang, L.; Wang, S.; Li, Z. A review on human activity recognition using vision-based method. J. Healthc. Eng. 2017, 2017, 3090343. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Ishimaru, S.; Hoshika, K.; Kise, K.; Dengel, A.; Kunze, K. Towards reading trackers in the wild: Detecting reading activities by EOG glasses and deep neural networks. In Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and ACM International Symposium on Wearable Computers, UbiComp/ISWC 2017, Hawaii, HI, USA, 11–15 September 2017; Association for Computing Machinery, Inc.: New York, NY, USA, 2017; pp. 704–711. [Google Scholar]
  55. Tamás, V. Human Behavior Recognition In Video Sequences. Technical University of Cluj-Napoca; Technical University of Cluj-Napoca: Cluj-Napoca, Romania, 2013. [Google Scholar]
  56. Banos, O.; Damas, M.; Pomares, H.; Prieto, A.; Rojas, I. Daily living activity recognition based on statistical feature quality group selection. Expert Syst. Appl. 2012, 39, 8013–8021. [Google Scholar] [CrossRef]
  57. Chen, L.; Nugent, C.D.; Wang, H. A knowledge-driven approach to activity recognition in smart homes. IEEE Trans. Knowl. Data Eng. 2011, 24, 961–974. [Google Scholar] [CrossRef]
  58. Gao, Z.; Xuan, H.Z.; Zhang, H.; Wan, S.; Choo, K.K.R. Adaptive fusion and category-level dictionary learning model for multiview human action recognition. IEEE Internet Things J. 2019, 6, 9280–9293. [Google Scholar] [CrossRef]
  59. Khan, M.A.A.H.; Hossain, H.S.; Roy, N. Infrastructure-less occupancy detection and semantic localization in smart environments. In Proceedings of the 12th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, Coimbra, Portugal, 22–24 July 2015; pp. 51–60. [Google Scholar]
  60. Khan, M.A.A.H.; Kukkapalli, R.; Waradpande, P.; Kulandaivel, S.; Banerjee, N.; Roy, N.; Robucci, R. RAM: Radar-based activity monitor. In Proceedings of the IEEE INFOCOM 2016—The 35th Annual IEEE International Conference on Computer Communications, San Francisco, CA, USA, 10–14 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–9. [Google Scholar]
  61. Turaga, P.; Chellappa, R.; Subrahmanian, V.S.; Udrea, O. Machine recognition of human activities: A survey. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1473–1488. [Google Scholar] [CrossRef] [Green Version]
  62. Candamo, J.; Shreve, M.; Goldgof, D.B.; Sapper, D.B.; Kasturi, R. Understanding transit scenes: A survey on human behavior-recognition algorithms. IEEE Trans. Intell. Transp. Syst. 2009, 11, 206–224. [Google Scholar] [CrossRef]
  63. Joseph, C.; Kokulakumaran, S.; Srijeyanthan, K.; Thusyanthan, A.; Gunasekara, C.; Gamage, C. A framework for whole-body gesture recognition from video feeds. In Proceedings of the 2010 5th International Conference on Industrial and Information Systems, Mangalore, India, 29 July–1 August 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 430–435. [Google Scholar]
  64. Van Kasteren, T.; Englebienne, G.; Kröse, B.J. An activity monitoring system for elderly care using generative and discriminative models. Pers. Ubiquitous Comput. 2010, 14, 489–498. [Google Scholar] [CrossRef] [Green Version]
  65. Tolstikov, A.; Hong, X.; Biswas, J.; Nugent, C.; Chen, L.; Parente, G. Comparison of fusion methods based on dst and dbn in human activity recognition. J. Control. Theory Appl. 2011, 9, 18–27. [Google Scholar] [CrossRef]
  66. Yang, J.; Lee, J.; Choi, J. Activity recognition based on RFID object usage for smart mobile devices. J. Comput. Sci. Technol. 2011, 26, 239–246. [Google Scholar] [CrossRef]
  67. Sarkar, J.; Vinh, L.T.; Lee, Y.K.; Lee, S. GPARS: A general-purpose activity recognition system. Appl. Intell. 2011, 35, 242–259. [Google Scholar] [CrossRef]
  68. Hong, J.; Ohtsuki, T. A state classification method based on space-time signal processing using SVM for wireless monitoring systems. In Proceedings of the 2011 IEEE 22nd International Symposium on Personal, Indoor and Mobile Radio Communications, Toronto, ON, Canada, 11–14 September 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2229–2233. [Google Scholar]
  69. Kaur, H.; Atif, M.; Chauhan, R. An internet of healthcare things (IoHT)-based healthcare monitoring system. In Proceedings of the Advances in Intelligent Computing and Communication: Proceedings of ICAC 2019, Umea, Sweden, 16–20 June 2019; Springer: Singapore, 2020; pp. 475–482. [Google Scholar]
  70. Mukherjee, A.; Ghosh, S.; Behere, A.; Ghosh, S.K.; Buyya, R. Internet of Health Things (IoHT) for personalized health care using integrated edge-fog-cloud network. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 943–959. [Google Scholar] [CrossRef]
  71. Ke, S.R.; Thuc, H.L.U.; Lee, Y.J.; Hwang, J.N.; Yoo, J.H.; Choi, K.H. A review on video-based human activity recognition. Computers 2013, 2, 88–131. [Google Scholar] [CrossRef] [Green Version]
  72. Damaševičius, R.; Vasiljevas, M.; Šalkevičius, J.; Woźniak, M. Human activity recognition in AAL environments using random projections. Comput. Math. Methods Med. 2016, 2016, 4073584. [Google Scholar] [CrossRef] [Green Version]
  73. Avci, A.; Bosch, S.; Marin-Perianu, M.; Marin-Perianu, R.; Havinga, P. Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: A survey. In Proceedings of the 23th International Conference on Architecture of Computing Systems, Hannover, Germany, 22–23 February 2010; VDE: Frankfurt, Germany, 2010; pp. 1–10. [Google Scholar]
  74. Attal, F.; Mohammed, S.; Dedabrishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical human activity recognition using wearable sensors. Sensors 2015, 15, 31314–31338. [Google Scholar] [CrossRef] [Green Version]
  75. Shahmohammadi, F.; Hosseini, A.; King, C.E.; Sarrafzadeh, M. Smartwatch based activity recognition using active learning. In Proceedings of the 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Philadelphia, PA, USA, 17–19 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 321–329. [Google Scholar]
  76. Lee, Y.; Song, M. Using a smartwatch to detect stereotyped movements in children with developmental disabilities. IEEE Access 2017, 5, 5506–5514. [Google Scholar] [CrossRef]
  77. Capela, N.A.; Lemaire, E.D.; Baddour, N. Feature selection for wearable smartphone-based human activity recognition with able bodied, elderly, and stroke patients. PLoS ONE 2015, 10, e0124414. [Google Scholar] [CrossRef] [Green Version]
  78. Schrader, L.; Vargas Toro, A.; Konietzny, S.; Rüping, S.; Schäpers, B.; Steinböck, M.; Krewer, C.; Müller, F.; Güttler, J.; Bock, T. Advanced sensing and human activity recognition in early intervention and rehabilitation of elderly people. J. Popul. Ageing 2020, 13, 139–165. [Google Scholar] [CrossRef] [Green Version]
  79. Lentzas, A.; Vrakas, D. Non-intrusive human activity recognition and abnormal behavior detection on elderly people: A review. Artif. Intell. Rev. 2020, 53, 1975–2021. [Google Scholar] [CrossRef]
  80. Ronao, C.A.; Cho, S.B. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst. Appl. 2016, 59, 235–244. [Google Scholar] [CrossRef]
  81. Bulling, A.; Blanke, U.; Schiele, B. A tutorial on human activity recognition using body-worn inertial sensors. ACM Comput. Surv. (CSUR) 2014, 46, 1–33. [Google Scholar] [CrossRef]
  82. Deng, L.; Yu, D. Deep learning: Methods and applications. Found. Trends® Signal Process. 2014, 7, 197–387. [Google Scholar] [CrossRef] [Green Version]
  83. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  84. Khan, M.; Jan, B.; Farman, H.; Ahmad, J.; Farman, H.; Jan, Z. Deep learning methods and applications. In Deep Learning: Convergence to Big Data Analytics; Springer: Singapore, 2019; pp. 31–42. [Google Scholar]
  85. Coşkun, M.; YILDIRIM, Ö.; Ayşegül, U.; Demir, Y. An overview of popular deep learning methods. Eur. J. Tech. (EJT) 2017, 7, 165–176. [Google Scholar] [CrossRef] [Green Version]
  86. Nguyen, H.; Kieu, L.M.; Wen, T.; Cai, C. Deep learning methods in transportation domain: A review. IET Intell. Transp. Syst. 2018, 12, 998–1004. [Google Scholar] [CrossRef]
  87. Labrador, M.A.; Yejas, O.D.L. Human Activity Recognition: Using Wearable Sensors and Smartphones; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  88. Lai, X.; Liu, Q.; Wei, X.; Wang, W.; Zhou, G.; Han, G. A survey of body sensor networks. Sensors 2013, 13, 5406–5447. [Google Scholar] [CrossRef] [Green Version]
  89. González-Villanueva, L.; Cagnoni, S.; Ascari, L. Design of a wearable sensing system for human motion monitoring in physical rehabilitation. Sensors 2013, 13, 7735–7755. [Google Scholar] [CrossRef] [Green Version]
  90. Jobanputra, C.; Bavishi, J.; Doshi, N. Human activity recognition: A survey. Procedia Comput. Sci. 2019, 155, 698–703. [Google Scholar] [CrossRef]
  91. Cheng, L.; Guan, Y.; Zhu, K.; Li, Y. Recognition of human activities using machine learning methods with wearable sensors. In Proceedings of the 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 9–11 January 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–7. [Google Scholar]
  92. Ahmed, N.; Rafiq, J.I.; Islam, M.R. Enhanced human activity recognition based on smartphone sensor data using hybrid feature selection model. Sensors 2020, 20, 317. [Google Scholar] [CrossRef] [Green Version]
  93. Chen, K.; Zhang, D.; Yao, L.; Guo, B.; Yu, Z.; Liu, Y. Deep learning for sensor-based human activity recognition: Overview, challenges, and opportunities. ACM Comput. Surv. (CSUR) 2021, 54, 1–40. [Google Scholar] [CrossRef]
  94. Li, Y.; Hao, Z.; Lei, H. Survey of convolutional neural network. J. Comput. Appl. 2016, 36, 2508. [Google Scholar]
  95. Murad, A.; Pyun, J.Y. Deep recurrent neural networks for human activity recognition. Sensors 2017, 17, 2556. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  96. Zhao, Z.; Chen, W.; Wu, X.; Chen, P.C.; Liu, J. LSTM network: A deep learning approach for short-term traffic forecast. IET Intell. Transp. Syst. 2017, 11, 68–75. [Google Scholar] [CrossRef] [Green Version]
  97. Gao, H.; Duan, Y.; Miao, H.; Yin, Y. An approach to data consistency checking for the dynamic replacement of service process. IEEE Access 2017, 5, 11700–11711. [Google Scholar] [CrossRef]
  98. Cao, L.; Wang, Y.; Zhang, B.; Jin, Q.; Vasilakos, A.V. GCHAR: An efficient Group-based Context—Aware human activity recognition on smartphone. J. Parallel Distrib. Comput. 2018, 118, 67–80. [Google Scholar] [CrossRef]
  99. Zhao, Y.; Li, H.; Wan, S.; Sekuboyina, A.; Hu, X.; Tetteh, G.; Piraud, M.; Menze, B. Knowledge-aided convolutional neural network for small organ segmentation. IEEE J. Biomed. Health Inform. 2019, 23, 1363–1373. [Google Scholar] [CrossRef]
  100. Li, W.; Liu, X.; Liu, J.; Chen, P.; Wan, S.; Cui, X. On improving the accuracy with auto-encoder on conjunctivitis. Appl. Soft Comput. 2019, 81, 105489. [Google Scholar] [CrossRef]
  101. Wan, S.; Qi, L.; Xu, X.; Tong, C.; Gu, Z. Deep learning models for real-time human activity recognition with smartphones. Mob. Netw. Appl. 2020, 25, 743–755. [Google Scholar] [CrossRef]
  102. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep learning for sensor-based activity recognition: A survey. Pattern Recognit. Lett. 2019, 119, 3–11. [Google Scholar] [CrossRef] [Green Version]
  103. Ignatov, A. Real-time human activity recognition from accelerometer data using Convolutional Neural Networks. Appl. Soft Comput. 2018, 62, 915–922. [Google Scholar] [CrossRef]
  104. Zhou, X.; Liang, W.; Kevin, I.; Wang, K.; Wang, H.; Yang, L.T.; Jin, Q. Deep-learning-enhanced human activity recognition for Internet of healthcare things. IEEE Internet Things J. 2020, 7, 6429–6438. [Google Scholar] [CrossRef]
  105. Xu, C.; Chai, D.; He, J.; Zhang, X.; Duan, S. InnoHAR: A deep neural network for complex human activity recognition. IEEE Access 2019, 7, 9893–9902. [Google Scholar] [CrossRef]
  106. Chen, Z.; Zhang, L.; Jiang, C.; Cao, Z.; Cui, W. WiFi CSI based passive human activity recognition using attention based BLSTM. IEEE Trans. Mob. Comput. 2018, 18, 2714–2724. [Google Scholar] [CrossRef]
  107. Zhu, Q.; Chen, Z.; Soh, Y.C. A novel semisupervised deep learning method for human activity recognition. IEEE Trans. Ind. Inform. 2018, 15, 3821–3830. [Google Scholar] [CrossRef]
  108. Wang, H.; Zhao, J.; Li, J.; Tian, L.; Tu, P.; Cao, T.; An, Y.; Wang, K.; Li, S. Wearable sensor-based human activity recognition using hybrid deep learning techniques. Secur. Commun. Netw. 2020, 2020, 2132138. [Google Scholar] [CrossRef]
  109. Lu, J.; Zheng, X.; Sheng, M.; Jin, J.; Yu, S. Efficient human activity recognition using a single wearable sensor. IEEE Internet Things J. 2020, 7, 11137–11146. [Google Scholar] [CrossRef]
  110. Qi, J.; Yang, P.; Waraich, A.; Deng, Z.; Zhao, Y.; Yang, Y. Examining sensor-based physical activity recognition and monitoring for healthcare using Internet of Things: A systematic review. J. Biomed. Inform. 2018, 87, 138–153. [Google Scholar] [CrossRef]
  111. Hong, Y.J.; Kim, I.J.; Ahn, S.C.; Kim, H.G. Mobile health monitoring system based on activity recognition using accelerometer. Simul. Model. Pract. Theory 2010, 18, 446–455. [Google Scholar] [CrossRef]
  112. Tao, M.; Li, X.; Wei, W.; Yuan, H. Jointly optimization for activity recognition in secure IoT-enabled elderly care applications. Appl. Soft Comput. 2021, 99, 106788. [Google Scholar] [CrossRef]
  113. Dritsas, E.; Trigka, M. Stroke risk prediction with machine learning techniques. Sensors 2022, 22, 4670. [Google Scholar] [CrossRef]
  114. Luna-Perejón, F.; Muñoz-Saavedra, L.; Civit-Masot, J.; Civit, A.; Domínguez-Morales, M. AnkFall—Falls, falling risks and daily-life activities dataset with an ankle-placed accelerometer and training using recurrent neural networks. Sensors 2021, 21, 1889. [Google Scholar] [CrossRef] [PubMed]
  115. Ahsan, M.M.; Siddique, Z. Machine learning-based heart disease diagnosis: A systematic literature review. Artif. Intell. Med. 2022, 128, 102289. [Google Scholar] [CrossRef]
  116. MS, A.R.; Nirmala, C.; Aljohani, M.; Sreenivasa, B. A novel technique for detecting sudden concept drift in healthcare data using multi-linear artificial intelligence techniques. Front. Artif. Intell. 2022, 5, 950659. [Google Scholar]
  117. Gama, J.; Medas, P.; Castillo, G.; Rodrigues, P. Learning with drift detection. In Proceedings of the Advances in Artificial Intelligence—SBIA 2004: 17th Brazilian Symposium on Artificial Intelligence, Sao Luis, Maranhao, Brazil, 29 September–1 October 2004; Proceedings 17. Springer: Berlin, Germany, 2004; pp. 286–295. [Google Scholar]
  118. Wang, L.; Chen, S.; He, Q. Concept drift-based runtime reliability anomaly detection for edge services adaptation. IEEE Trans. Knowl. Data Eng. 2021. [Google Scholar] [CrossRef]
  119. Gulcan, E.B.; Can, F. Unsupervised concept drift detection for multi-label data streams. Artif. Intell. Rev. 2023, 56, 2401–2434. [Google Scholar] [CrossRef]
  120. Casado, F.E.; Lema, D.; Iglesias, R.; Regueiro, C.V.; Barro, S. Ensemble and continual federated learning for classification tasks. Mach. Learn. 2023, 1–41. [Google Scholar] [CrossRef]
  121. Jothimurugesan, E.; Hsieh, K.; Wang, J.; Joshi, G.; Gibbons, P.B. Federated learning under distributed concept drift. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Valencia, Spain, 25–27 April 2023; PMLR; pp. 5834–5853. [Google Scholar]
Figure 1. Comparison of cloud-based computing with Fog and MEC.
Figure 2. Comparison of machine learning and deep learning techniques in healthcare systems [91,92,95,101,102,103,104,105,106,107,108,111,112,113].
Table 1. Gateways comparison with different parameters.
Parameter | Cloud Computing | Mobile Edge Computing | Fog Computing
--- | --- | --- | ---
Network Latency | High | Medium | Low
Internet Bandwidth Utilization | High | Medium | Low
Power Consumption | High | High | Low
Access Mechanisms | Wi-Fi, Mobile Networks | Mobile Networks | Bluetooth, Wi-Fi
Execution Time | Low | Medium | High
Resources Availability | High | Medium | Low
Context Awareness | Low | High | Medium
Real Time Compatibility | Low | Medium | High
Technology Devices | Centralized Servers, Data Centers | Servers running in base stations | Gateways (Routers, Switches)
Table 2. Summary for Section 3.
Ref. | Model (M.L/D.L/Hybrid) | Strength | Weakness
--- | --- | --- | ---
[91] | M.L (SVM + ANN) | A unique architecture fusing a basic SVM with a conventional ANN; can be very useful for shallow feature extraction. | Average accuracy; slow network.
[92] | M.L (SVM + SFFS) | A very efficient, lightweight feature filtration technique employing an SFFS module. | Shows good results on smaller datasets only.
[94] | D.L (CNN) | Detailed survey of CNNs and their state-of-the-art applications. | CNNs can overfit, and typical models fail to adapt to certain configurations.
[95] | D.L (RNN) | Can outperform CNNs in extracting long-term dependencies. | Can suffer from extreme exploding gradients.
[96] | D.L (LSTM) | The memory cell enables the network to backpropagate and remember long-term dependencies, which correlates data better; hence, LSTMs outperform conventional RNNs. | Training time increases exponentially on larger datasets.
[101] | D.L (CNN) | Context-aware classification handled some errors of conventional CNNs, which increased the overall accuracy. | Compared only with vanilla models without parameter adjustment or fine-tuning.
[102] | M.L/D.L/Hybrid | A detailed survey of conventional vs. advanced activity recognition approaches in both M.L and D.L, portraying the advances in D.L. | None.
[103] | D.L (Enhanced CNN) | Showed superior results compared to state-of-the-art works. | Results are based on strongly labelled data only; performance may vary on weakly labelled data.
[104] | D.L (LSTM + ALM) | The auto-labelling module showed a significant improvement in accuracy. | The auto-labelling module requires a large pool of unlabelled data, which makes the system costly.
[105] | Hybrid (InnoHAR) | A fusion of RNNs with Inception neural networks showed good performance on both smaller and larger datasets. | Not implemented in real-time scenarios; moreover, the configuration of the INN is very complicated to change or update.
[106] | D.L (ABiLSTM) | Attention-based BLSTM implemented on Wi-Fi data; the attention module filtered the features of interest and dropped low-level features, making the approach time-efficient. | A single-channel Wi-Fi setup was used without real-time data collection, which is not a viable source of activity recognition data.
[107] | D.L (DLSTM) | A DLSTM trained on labelled and unlabelled data; it extracts high-level features, retrains low-level features, and labels the unlabelled data. Superior accuracy compared to state-of-the-art LSTM works. | Results were generated in a controlled environment; performance may vary in real time. Moreover, the DLSTM structure slows the network and increases time complexity.
[108] | Hybrid (CNN + LSTM) | A state-of-the-art work combining postural transitions with static activities; showed superior performance compared to several approaches employing transition activities. | The complex structure considers only basic static and transition activities, and the experiments used a pre-processed dataset with abundant features; performance may vary on datasets with far fewer features.
Table 3. Summary for Section 4.
Approach | Type | Strength
--- | --- | ---
[52] | Wearable sensors | Able to detect abnormalities in elderly people and capable of deployment in real-time scenarios.
[111] | RFID + wearable sensors | By tracking hand motion, RFID-tagged objects can be detected, which provides additional pattern data for efficient human activity recognition.
[112] | CNN + vision devices | Introduced an authentication-based access network to prevent unwanted access to, or breaches of, the network.
[66] | RFID + CNN | A dense CNN with an RFID unit brought forward a novel RSS-based approach; however, no solid strengths were presented in the research work.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
