Article

A Multi-Agent System for Data Fusion Techniques Applied to the Internet of Things Enabling Physical Rehabilitation Monitoring

by
Héctor Sánchez San Blas
1,*,
André Sales Mendes
1,
Francisco García Encinas
1,
Luís Augusto Silva
1,2 and
Gabriel Villarubia González
1
1
Expert Systems and Applications Lab—ESALAB, Faculty of Science, University of Salamanca, Plaza de los Caídos s/n, 37008 Salamanca, Spain
2
Laboratory of Embedded and Distribution Systems, University of Vale do Itajaí, Rua Uruguai 458, C.P. 360, Itajaí 88302-901, Brazil
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(1), 331; https://doi.org/10.3390/app11010331
Submission received: 28 November 2020 / Revised: 27 December 2020 / Accepted: 28 December 2020 / Published: 31 December 2020
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Featured Application

Featured Application: The present work is a potential application for improving the rehabilitation process through remote supervision.

Abstract

There are more than 800 million people in the world with chronic diseases. Many of these people do not have easy access to healthcare facilities for recovery. Telerehabilitation seeks to provide a solution to this problem. In the literature, the topic has been approached as a form of medical aid that combines technologies such as the Internet of Things and virtual reality. The main objective of this work is to design a distributed platform to monitor the patient’s movements and status during rehabilitation exercises. Later, this information can be processed and analyzed remotely by the doctor assigned to the patient. In this way, the doctor can follow the patient’s progress, enhancing the improvement and recovery process. To achieve this, a case study has been made using a PANGEA-based multi-agent system that coordinates different parts of the architecture using ubiquitous computing techniques. In addition, the system uses real-time feedback from the patient. This feedback system makes the patients aware of their errors so that they can improve their performance in later executions. An evaluation was carried out with real patients, achieving promising results.

1. Introduction

There are more than 800 million people in the world with chronic diseases [1]. Many of them do not have easy access to healthcare facilities for their recovery. According to the authors of [2,3], more than 50% of these people could benefit from integrating rehabilitation services into their homes and everyday devices, e.g., smartphones, computers, and tablets. The reasons that account for this range from disabilities to travel-related issues. The concept of telemedicine tries to fill this gap by offering remote access to healthcare. Telemedicine [4] refers to the ability to perform medical diagnosis and treatment remotely. Thus, it uses Information and Communication Technologies (ICT) for its implementation. There are several works that fall under this concept [5,6,7].
The concept of telerehabilitation is found within the field of telemedicine. It refers to the use of ICTs to carry out a rehabilitation service remotely [8]. Many studies have been carried out around this concept using different technological paradigms. Some use the Internet of Things (IoT), which allows a greater amount of information to be collected about users. Such information enables healthcare professionals to monitor them remotely and offer them help to achieve their objective [9]. On the other hand, there is an increased usage of so-called wearables. Using them, a great variety of information can be collected, such as heart rate or blood oxygen level. These devices can be integrated easily into a person’s daily life and are able to retrieve data quite precisely [10]. It is also worth highlighting the use of Virtual Reality (VR) in this area of work [11]. When using VR technology, users can choose from a wide range of commercial VR headsets. In addition, VR simulations are a simple and low-cost alternative to traditional rehabilitation sessions. Furthermore, this technology allows complex scenarios to be designed and incorporated in a simple way [12]. However, user acceptance is one of the limiting factors in VR applications [13]. Thus, this work also aims to demonstrate the viability of VR technology within this field.
The main objective of this work is to propose a robust telerehabilitation process. This process is carried out remotely, through a connection between the doctor, the devices, and the patient. The doctor assigned to the patient can check a patient’s information and see their progress from a distance. In addition, the users are able to check their own progress so that they are aware of how their rehabilitation is advancing.
To this end, a platform capable of monitoring people’s movements and condition while they perform specific exercises has been developed. A suit incorporating Inertial Measurement Unit (IMU) sensors is used to monitor user movements; each of these sensors provides information about the position of the device. Additional wearable devices are also used on this platform so that different vital signs, such as heart rate and the user’s blood oxygen level, can be monitored. All this information is sent to the server for processing, analysis, and storage. In this way, it is possible to consult this information in real time and to provide visual feedback to the user. Using this feedback, the user can find out what errors they are making and, thus, correct them. To offer better accessibility for consulting this information, the system is available on different platforms: web, mobile, and VR.
To coordinate and communicate all the entities involved in this process, a Multi-Agent System (MAS) is used. This MAS allows the platform to be reconfigured dynamically [14]. By using this MAS, it is possible to distribute resources and capabilities among different nodes. In this way, problems that usually occur in centralized systems, such as bottlenecks or recurrent access to critical resources, are eliminated. In addition, the efficiency of the system when recovering, filtering, and coordinating information is increased. The MAS that best adapts to the use case at hand is the open-source PANGEA architecture [15]. This architecture allows different elements of the system to behave dynamically according to the requirements of the platform at any given moment.
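As an illustration of this organizational style, the following minimal Python sketch (hypothetical; it does not use the actual PANGEA API, and all names are invented for the example) shows agents grouped into a virtual organization that dispatches messages by name, so that an agent can be replaced without touching its callers:

```python
# Hypothetical sketch of a virtual organization of agents (not PANGEA code):
# agents register under a name and receive messages through the organization,
# so responsibilities stay decentralized and agents remain swappable.

class Agent:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # callable(message) -> reply

class Organization:
    def __init__(self, name):
        self.name = name
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def send(self, agent_name, message):
        # Dynamic lookup: replacing an agent does not affect its callers.
        return self.agents[agent_name].handler(message)

capture = Organization("CaptureSystem")
capture.register(Agent("NodePosition", lambda msg: {"pose": msg["raw"]}))
capture.register(Agent("Pulsometer", lambda msg: {"bpm": msg["raw"]}))

print(capture.send("Pulsometer", {"raw": 72}))  # {'bpm': 72}
```

Swapping the monitoring suit, for instance, would amount to registering a different "NodePosition" agent under the same name.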
This article is organized as follows. Section 2 focuses on an in-depth review of the state of the art on IoT, VR, and existing MASs. Section 3 describes the architecture of the proposed system. The materials and methods are described in Section 4. Section 5 shows the case study carried out with the results obtained. Finally, the conclusions reached are presented in Section 6.

2. Background

The last decade has seen a slow increase in telemedicine applications [16]. Within this field, several works have been carried out in areas such as patient activity monitoring systems [17] or systems for the recovery of chronic disease patients [18]. In works that deal with rehabilitating patients remotely, different kinds of data are recorded to monitor the patients; in remote rehabilitation projects, these data are usually collected following an IoT paradigm [19,20]. Among the works that make use of this paradigm, Gaddam et al. [21] analyzed users’ gait and mobility to give them certain strategies when performing outdoor exercise. The system makes use of Near Field Communication (NFC) technology to obtain the information that is later analyzed. On the other hand, Celesti et al. [22] carried out an analysis of different NoSQL databases to determine which is better suited for use in IoT-based telemedicine systems. In this case, they highlighted the performance of the MongoDB database in the Cloud for handling this information.
Among IoT systems, there are those dedicated to monitoring movements made by different parts of the body to check that movements are carried out properly. These works are focused on the development and use of the so-called exoskeletons [23,24]. Erdogan et al. [25] developed a system based on an exoskeleton for the rehabilitation of the ankle. To this end, they constructed an exoskeleton to treat the multiple phases of treatment and shorten the patient’s recovery time. On the other hand, Wang et al. [26] developed an ankle exoskeleton to direct three rotation movements through a simple and reliable structure, managing to implement a control system capable of accurately positioning the ankle according to the exercise being performed.
From another point of view, there are solutions capable of monitoring whole-body movements of the users. These solutions make use of Deep Learning algorithms for the detection of postures through image processing techniques [27,28,29]. Hernandez et al. [30] developed a motion capture system capable of measuring the kinematics of spacesuits in an underwater test environment. However, these solutions are not 100% effective, which can result in erroneous data that can lead to errors in reviewing patient progress. On the other hand, the development and usage of motion capture suits to capture this information is an active area of research. Kim [31] used the Rokoko motion-capture suit to identify different strategies that could be used to capture the movements of dancers. These movements are later used in film production, editing, and special effects.
Another important point of these telerehabilitation systems is user feedback. This feedback makes the user feel more integrated and motivated while showing them their actions. Accessibility, ease of use, and human–computer interactions are also very important aspects. To achieve this, most of the systems developed make use of VR technology. Some studies demonstrate the advantages of using this technology. Dif [32] showed that users performing motor function-related tasks improve their performance after training when given visual feedback. The research group of the authors of this study, the ESALab research group, has previous experience with VR and rehabilitation systems. de la Iglesia et al. [33] proposed an immersive VR rehabilitation system based on the repetition of certain exercises with the help of an exoskeleton. Postolache et al. [34] included VR technology to allow a patient with motor difficulties to perform exercises in a very interactive and non-intrusive way, using a set of wearable devices, thus contributing to his or her motivational rehabilitation process.
In projects dedicated to telerehabilitation, communication and coordination between patients and healthcare members are crucial. Healthcare professionals have to plan exercises and, at the same time, check progress during the rehabilitation process. Meanwhile, users should be able to receive feedback from medical professionals and updates on the exercises. Some studies (e.g., [35]) seek to solve this problem by using multi-agent systems that allow one to create a dynamic, scalable, and decentralized system. Calvaresi et al. [36] carried out an analysis of solutions for telerehabilitation and highlighted that multi-agent systems are useful: they allow contextualizing scenarios in a simple way in situations where planning and problem-solving uncertainties are intertwined with distributed information source coordination and sophisticated concurrency controls. Calvaresi and Calbimonte [37] presented a model that represents sensors as autonomous agents capable of programming tasks and performing interactions and negotiations that comply with strict time constraints.
In this context, the proposed work seeks to build a novel IoT system capable of monitoring whole-body movements using the techniques described throughout this section. The objective is to allow the assigned doctor to remotely monitor the patient in real time. The user should be aware of their recovery process and the effectiveness of the exercises carried out. The errors performed should be presented to the user to allow the patient to correct them. This closer involvement with the patient in the rehabilitation process will influence their motivation and, therefore, their state of mind. In addition, the use of a multi-agent system will make it possible to coordinate the different assets, resulting in real-time and dynamic monitoring. In the following section, the system developed from these specifications is shown.

3. Proposed System and Architecture

This section presents the proposed system. First, we present the proposed architecture based on the PANGEA multi-agent architecture, describing each agent that is part of the system and its functionality. The main objective of this proposal is to create a system that can be accessible to everyone. Moreover, since the development of the system is modular, it can be used through different devices, either in its entirety or only with some of them. In this case, we integrate these device connections with the system in an abstract way so that they can be replaced by other models of devices suitable to the users’ possibilities.
The main characteristic of the proposed architecture is its capacity to include new functions and to adapt to new environments in the future in a simple way. To solve the problem, the architecture must exhibit a series of well-defined characteristics to achieve correct operation. The objective of this research work is the creation of a platform that allows the rehabilitation of people who need to do some physical exercise independently and who, in turn, can be accompanied by a specialist who indicates the exercises to be done. To do this, the user or patient of the application can use various devices, such as a motion capture suit or a heart rate monitor, that allow their progress to be monitored, and they can use various terminal environments that guide them and show the results of their progress. The architecture must be able to incorporate new functionalities so that it is adaptive, scalable, and distributed and supports the different communication protocols needed to integrate the different parts of the system. To this end, we propose the use of a multi-agent architecture that offers the functionalities described above. In a multi-agent-based architecture, each agent must have a well-defined functionality and task so that it can coordinate and interact with the other agents. Several solutions are currently available to speed up the construction of multi-agent systems, ranging from simple libraries, such as the Python library SPADE, to complete and complex systems, such as JADE, PANGEA, and osBrain. Figure 1 shows the proposed architecture using the PANGEA MAS with its virtual organizations and the main agents that are part of the designed architecture.
The architecture proposed for this research work uses PANGEA as its starting point. This is because PANGEA is based on the theory of organizations, which allows it to be applied to most systems and allows for the modelling of human interaction with the system. The main advantage of PANGEA over other multi-agent systems is its internal rule engine, which allows for the distribution of the computational load among the different agents.
The designed architecture is divided into different parts. There are two well-defined parts: The upper part of the image shows the minimum agents required for the operation of the PANGEA multi-agent system. The lower part shows the virtual organizations of agents belonging to the case study.
Capture System Organization: This organization acts as a bridge between the system and the elements of the sensorization system to monitor the execution of the exercises. There are two main agents in this organization. The first is the Node Position agent, which connects to the monitoring suits, in this specific case via Bluetooth LE. It has the ability to detect changes in movement and publish them so that they can be used by the system. The second is the Pulsometer agent, which, similar to the agent described above, connects via Bluetooth LE with the heart rate monitors to capture the heart rate at each instant the system is in use. This agent allows for recommendations to increase or decrease the rate of exercise performance.
Monitoring Organization: This organization aims to carry out the tasks of history storage, progress analysis, patient profile control, and report generation. This organization is detailed in Table 1.
Simulation Organization: This organization is in charge of the classification and validation of poses made by the user as well as the execution of actions. This organization has an important role within the whole system since it is in charge of checking that the exercises are carried out and that they are done correctly. To this end, the main agents of this organization are described:
  • Pose Estimation: This agent can recreate the human pose from the raw data coming from the Node Position agent. It can filter out and discard invalid poses caused by a temporary error in a sensor or produced while the system is starting up and the user is putting on the suit.
  • Pose Validation: This agent has the functionality of classifying the poses estimated by the Pose Estimation agent by comparing them with the positions previously registered in the system, to know the position of the user.
  • Action Recognition: This agent makes use of the poses recognized by the Pose Validation agent. This agent’s functionality is to recognize actions through the validated poses over time.
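The three-stage pipeline above (estimation, validation, recognition) can be sketched as follows. This is a hypothetical illustration: the pose representation, labels, and tolerance are invented for the example and are not taken from the system.

```python
# Hypothetical sketch of the Simulation Organization pipeline: estimate poses
# from raw frames, validate them against registered reference poses, then
# recognize an action from the validated sequence. All values are invented.

REGISTERED_POSES = {"arms_up": [1.0, 1.0], "arms_down": [0.0, 0.0]}

def estimate_pose(raw_frame):
    """Discard frames flagged invalid (e.g., sensor start-up noise)."""
    return None if raw_frame.get("invalid") else raw_frame["angles"]

def validate_pose(angles, tolerance=0.3):
    """Return the closest registered pose within tolerance, if any."""
    best, best_dist = None, tolerance
    for label, ref in REGISTERED_POSES.items():
        dist = max(abs(a - r) for a, r in zip(angles, ref))
        if dist < best_dist:
            best, best_dist = label, dist
    return best

def recognize_action(frames):
    """Recognize a 'raise_arms' action as arms_down followed by arms_up."""
    labels = [validate_pose(p) for p in filter(None, map(estimate_pose, frames))]
    labels = [l for l in labels if l]
    return "raise_arms" if labels == ["arms_down", "arms_up"] else None

frames = [
    {"invalid": True, "angles": [9.0, 9.0]},   # start-up glitch, dropped
    {"angles": [0.1, 0.05]},                   # matches arms_down
    {"angles": [0.95, 1.1]},                   # matches arms_up
]
print(recognize_action(frames))  # raise_arms
```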
Application Interface Organization: This organization is in charge of adapting the information generated by the system to the application layer. This organization is used as an interface so that the applications can interact directly with the system. This organization is in charge of converting the raw information of the system into information that can be easily interpreted by humans. In this case, the information displayed acts as an interface for the VR applications, the mobile applications, and the expert monitoring application.
PANGEA Multi-Agent System Organization: This organization is composed of the minimum necessary agents for the operation of PANGEA. This organization aims to manage the virtual organizations and coordinate the agents within each of them. The agents of this organization can be seen below:
  • Database Agent: This is the only agent with database access permissions and can store the information present within the organization, such as records, historical agents, and tasks performed by each of the system agents.
  • Information Agent: This agent keeps track of the services offered by all of the agents, which can be requested by other agents of the system. When a new agent wants to join the system, it must indicate its available services to the Information Agent, which can then inform the other agents of the services available.
  • Normative Agent: This agent is responsible for imposing and ensuring that the rules are complied with by the communication established between the agents.
  • Service Agent: This agent has the functionality of arranging and distributing the system’s functionality through web services. Its role can be considered that of a gateway: it allows external services to communicate with the virtual organization of agents. This allows agents to be easily integrated and built in any programming language.
  • Manager Agent: This agent is important within the whole system, as it is in charge of checking the status of the system periodically. It detects a load in a part of the system, any overloaded functionality, or possible failures in agents from different organizations.
  • Organization Agent: This agent is responsible for verifying all operations of virtual organizations, checking security and load balancing, and offering encryption for communication between agents.
Within the system, it is worth noting that two databases with well-defined functionalities are used. The PANGEA database is only used by the PANGEA virtual organization and is intended to store the information on the organizations and the services available to each of their agents. The APP Database Storage contains information specific to the use case. In this particular case, patient profiles, information on the experts responsible for the rehabilitation, exercises to be performed, exercise and monitoring histories, alerts, and incidents are stored.
The proposed architecture indicates the existence of external agents. These represent the two available roles within the application that interact directly with it. The first user of the application is the patient, who carries out the rehabilitation task. The second is the expert or doctor, who is in charge of setting up the user profile, proposing the exercises, and checking the progress of the recovery, which must be validated manually by an expert.
The modules and agents of the architecture are each specialized in a specific objective or task. The advantage of this architecture is that it allows the replacement of either one agent or a set of agents with similar characteristics without affecting the rest of the system. An example of this would be the use of a different motion capture suit: we would only have to replace the Node Position agents with ones that can communicate with the suit to be incorporated.

4. Materials and Methods

In this section, we describe the devices and the methods used to collect and unify the data. These data represent the user’s movements in the platform. In addition, the processing system to be carried out to classify the movements made by the patient is described. Finally, the tools and devices used to visualize the patient’s evolution during the rehabilitation process are outlined.

4.1. Pose Detection Devices

The pose detection system aggregated to the architecture is used to collect the movements made by the user. With this information, the system can estimate the user’s pose. The key to this process is the use of clothing with IMU sensors to track the movement of the user’s body. In this work, we used the Enflux Suit [38], composed of five motion sensors with an accuracy of ±2 degrees, working in a three-dimensional coordinate system. It uses Bluetooth LE 4.0 technology, connecting to the central module to receive and send data. It has an internal refresh rate of 125 Hz and uses a 32 MHz Arm M4 microprocessor. Figure 2 shows the location of the sensors in the suit and their characteristics.
The use of this suit is ideal because of its low cost and the IMU sensors used. These sensors are electronic devices that measure the speed, orientation, and gravitational forces acting on a device using a combination of accelerometers, gyroscopes, and magnetometers. Each sensor contributes to a single combined result: the gyroscope measures the turns made, the accelerometer measures the linear acceleration, and the magnetometer obtains information about the direction of the Earth’s magnetic field. The suit connects through Bluetooth to the user’s devices, which transmit the information to the server.
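The fusion idea can be illustrated with a minimal complementary filter that blends gyroscope and accelerometer readings into one orientation estimate. This is a generic sensor-fusion sketch, not the suit's actual firmware; the 0.008 s step merely mirrors the suit's 125 Hz refresh rate.

```python
# Generic sensor-fusion sketch (not the Enflux firmware): a complementary
# filter combining an integrated gyroscope rate with the pitch angle implied
# by the accelerometer's gravity reading.
import math

def complementary_filter(pitch, gyro_rate, accel, dt, alpha=0.98):
    """Blend integrated gyro rate with the accelerometer's gravity angle."""
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Trust the gyro in the short term, the accelerometer in the long term.
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

# Stationary device lying flat: gravity along +z, no rotation.
pitch = 0.5  # deliberately wrong initial estimate (radians)
for _ in range(200):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel=(0.0, 0.0, 9.81), dt=0.008)
print(round(pitch, 3))  # the wrong initial estimate has decayed toward 0
```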
On the other hand, a wearable device is used, in this case a Garmin Forerunner 245 Music [39] smartwatch. This device is connected to the device used by the patient through a Bluetooth connection, allowing for the collection of information regarding the user’s heart rate and blood oxygen level in order to monitor the patient’s condition and be able to track them.
Therefore, the data collected from the above-mentioned devices are sent to the server for processing so that the capture system agent collects this information and sends it to the simulation node for processing.

4.2. Exercise Estimation

After the data collection process, the data are received by the simulation agent. This agent collects and processes the information from the IMU sensors and the smartwatch, as shown in Figure 3.
The information generated by the IMU sensors is collected as quaternions [40]. Quaternions are extensions of real numbers, similar to complex numbers, but they are an extension generated by adding the imaginary units i, j, and k to real numbers such that
$i^2 = j^2 = k^2 = ijk = -1$
The set of all quaternions can be expressed as follows:
$\mathbb{H} = \{\, q_0 + q_1 i + q_2 j + q_3 k \mid q_0, q_1, q_2, q_3 \in \mathbb{R} \,\}$
where, writing a quaternion as $q = (q_0, \mathbf{q})$, $q_0$ is the real part and $\mathbf{q} = (q_1, q_2, q_3)^T$ (equivalently $q_1 i + q_2 j + q_3 k$) is the imaginary part of the quaternion. In addition, Hamilton’s product between two quaternions fulfills the following properties:
$p q \neq q p, \qquad p q r = (p q) r = p (q r), \qquad \text{where } p, q, r \in \mathbb{H}$
The use of quaternions to collect the information produced by the IMU sensors allows us to process this information in the Unity3D [41] environment using 3D points in space that have rotations.
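For illustration, a minimal Python implementation of the Hamilton product (not the system's actual code, which handles quaternions through Unity3D) confirms the defining relations and the non-commutativity stated above:

```python
# Illustrative sketch: quaternions as 4-tuples (w, x, y, z) with the
# Hamilton product, as used to represent the IMU sensors' rotations.

def qmul(p, q):
    """Hamilton product p*q of two quaternions (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    )

i = (0.0, 1.0, 0.0, 0.0)
j = (0.0, 0.0, 1.0, 0.0)
k = (0.0, 0.0, 0.0, 1.0)

# Defining relations: i^2 = j^2 = k^2 = ijk = -1
assert qmul(i, i) == (-1.0, 0.0, 0.0, 0.0)
assert qmul(qmul(i, j), k) == (-1.0, 0.0, 0.0, 0.0)
# Non-commutativity: ij = k but ji = -k
assert qmul(i, j) != qmul(j, i)
```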
After the transformation of the information, it is processed to determine whether the patient is doing the exercise properly, as well as to assess his or her condition.
To determine whether the patient has performed the exercise correctly, a skilled user (who may be the doctor or someone with knowledge of the sport or motor system) must enter the exercise into the system. To do this, the process to be carried out is similar to Figure 3. The expert user must indicate to the system the introduction of a new exercise, performing the relevant movements relating to the exercise. The system collects the information generated, transforms it, and stores it for later use.
The processing of the data allows for the generation of information that is used for the following things:
  • Visualization of movements: The data received from the Enflux suit are transformed into quaternions for storage. The position of each of the nodes, which are equivalent to the IMU sensors, can be represented within the Unity3D environment, which allows us to carry out a representation of the patient’s movements within the system.
  • Information on errors made: Once the quaternions have been obtained, they are used to detect movements that have been made incorrectly. This is discussed below.
  • Data for patient monitoring: The data generated by the Garmin watch and the Enflux Suit, once transformed, are processed to obtain statistics and information that can be displayed to know the status of the user and the progress of their rehabilitation.
As for error detection, the angular distance between quaternions is used: given two rotations $a$ and $b$, it is the angle of the single rotation that takes $a$ to $b$. The equation for obtaining this angle is as follows:
$\mathrm{angle} = 2 \cdot \cos^{-1}(x)$
where $x$ is the real part of $b a^{-1}$.
Bearing this in mind, it is possible to discern correctly executed movements by knowing the angles formed between the various IMU sensors at every instant of time; knowing the movements and angle values of the exercise as performed by the expert and by the patient, this information can be processed to identify the errors made. The use of angles makes it possible to generalize this processing independently of the size and body of the person, since the angle formed during the execution of a movement is independent of the values of these points in space.
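A minimal sketch of this error metric, assuming unit quaternions (so that the inverse equals the conjugate), could look as follows; it is illustrative, not the system's actual code, and taking the absolute value of the real part to pick the shorter arc is an assumption of the sketch:

```python
# Illustrative sketch: angular distance between two unit quaternions
# (w, x, y, z), following angle = 2 * acos(Re(b * a^-1)).
import math

def conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    )

def angular_distance(a, b):
    """Angle in radians of the rotation taking a to b (unit quaternions)."""
    # For a unit quaternion, the inverse equals the conjugate.
    w = qmul(b, conjugate(a))[0]
    w = max(-1.0, min(1.0, abs(w)))  # clamp; |w| selects the shorter arc
    return 2.0 * math.acos(w)

identity = (1.0, 0.0, 0.0, 0.0)
# 90-degree rotation about the z axis: w = cos(45 deg), z = sin(45 deg)
rot90z = (math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))
print(math.degrees(angular_distance(identity, rot90z)))  # ≈ 90.0
```

Comparing this angle, per sensor and per instant, against the expert's recording is what flags an erroneous movement.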
When this information processing is finished, the system proceeds to store the information that has been generated. In addition, for the patient performing the exercise, the system returns feedback with the results of the movement analysis, so the patient can observe the errors made, become aware of them, and try to correct them. Finally, the system can visually represent the movements so that the user can compare them with the example exercises, know whether they are performing the exercise well or badly, and try to adjust to the appropriate movements in real time.

5. Experimental Results and Contributions

This section shows the final system developed from the proposed architecture. In addition, to validate the developed system, the test that it was subjected to and the results obtained from it are shown.

5.1. Monitoring and Information Display System

In this section, the final result of the developed system is presented. For this purpose, each of the subsystems was analyzed individually to verify its implementation. For each implementation, the methodology followed and the final result achieved are presented.

5.1.1. Virtual Reality System

The virtual reality system, as well as the 3D model representation system used in the proposed system, is based on the Unity3D development environment. The integration of VR technology into the system is aimed at achieving the following objectives:
  • offer more advanced rehabilitation methods as an alternative to traditional therapy, thus maximizing the effect of the rehabilitation measures;
  • allow patients to perform actions that they are not able to do in real life due to their disabilities;
  • provide individualized treatment plans developed based on careful assessment and following the treatment goals of each case;
  • increase patient commitment and motivation with virtual environments where the tasks to be performed are simulated;
  • provide immediate and illustrative feedback;
  • improve the results by measuring and analyzing different data related to the user’s actions; and
  • provide a controlled environment through a dynamic environment that can be managed according to certain conditions as well as actions taken by the user.
However, when designing a VR environment dedicated to rehabilitation, there are several aspects [42] to be considered before the tool is developed to make it suitable for users; notably, the scenes in these environments resemble those in video games. The aspects to be taken into account are the following:
  • Award: According to the body of research on the neuroscience of reward and motivation, the limbic system, in particular the Nucleus Accumbens (NA), is critical for learning new behaviors, especially those associated with reward-seeking, pleasure, and addiction. Activity in NA has been shown to scale linearly to the likelihood of receiving a reward, and differences in NA activity correlate with individual differences in sensation-seeking.
  • Difficulty: It is important to consider difficulty as an interaction of individual and environmental limitations to understand how difficulties might arise directly from an injury/illness or from the changes that accompany these individual changes. Skill transfer is particularly important for rehabilitation, where skills acquired in play are expected to be transferred to activities of daily living.
  • Feedback: Feedback can be used to achieve better long-term retention of the developed skill. However, positive feedback must be given more often to efficiently influence the developed skill.
  • Interaction: The user’s interaction with the system increases the user’s connection with the virtual environment. The exploration of new stimuli and new environments is strongly associated with physiological rewards.
  • Clear objectives and mechanics: Goal-directed tasks lead to a higher probability of acceptance of assistive devices, whereas a lack of objectives and instructions can significantly undermine patient motivation. Therapeutic goals that are not clear to patients can therefore compromise the recovery process. Patients with high motivation in a rehabilitation setting reported communicating more actively with their therapist; clear and consistent instructions created a sense of comfort in knowing that they were progressing towards their therapy goals, whereas unclear instructions confused and frustrated patients and eventually lowered their motivation.
The VR system is deployed on the Oculus Quest device [43], allowing interaction with different VR scenarios in which the user performs the exercises. Figure 4 shows the main components involved in representing the VR scenario, together with the information exchanged between them.
The system implemented allows the patient to receive instructions for each of the exercises, both visually and acoustically, and to observe which movements must be performed to achieve the objective of the exercise. During the exercise, the user sees two avatars, as shown in Figure 5. The avatar on the left represents the user, while the avatar on the right represents the expert user who entered the exercise into the system.
During the exercise, the patient can observe both their own movements and the movements demonstrated by the expert user. In this way, the patient knows which movements to make and can check whether they are being made correctly. Figure 5 shows a counter with the number of repetitions completed and the number remaining. After finishing the exercise, the user can see the mistakes made, per repetition and for each part of the body, as can be seen in Figure 6.
The data generated during the execution of the exercise provides information, such as the level of performance, and makes the patient aware of their progress in the rehabilitation process. All of this is overseen by the doctor, who can consult the information remotely to track the patient's progress, adapting the rehabilitation process accordingly and ensuring that it is carried out correctly.

5.1.2. Mobile/Web System

This system is similar to the one described in the previous section; however, it sacrifices the immersion provided by VR in exchange for greater accessibility through a web tool or mobile application. The system can thus be accessed from any everyday device with an operating system, a visual interface, and Internet access, such as a mobile phone or personal computer, which makes the telerehabilitation that this work pursues achievable. The use of Unity3D is key to the objectives mentioned in the previous section: this development environment allows the 3D scenario to be exported to different platforms, in this case web and mobile. The mobile development is oriented more towards patient use, allowing patients to carry out their rehabilitation exercises and obtain information on their progress during the rehabilitation process. One of the objectives of the mobile application is to make it intuitive and understandable so that it is easy to use. Figure 7a shows the application's access screen, and Figure 7b shows a list of exercises to be performed by the patient.
The web application is oriented more towards health professionals, who can monitor a patient's progress in the rehabilitation process more precisely and with more information, as represented in Figure 8. Using this platform and this information, the doctor assigned to the patient can modify the exercises to be performed according to the patient's progress. In addition, the platform allows patients to consult their doctor through a chat, increasing communication between patient and doctor.
The aim of monitoring the patient through the platform is to provide the health specialist with important information, allowing the rehabilitation process to be examined in greater depth and improved by adapting it to the patient's progress. To achieve this, different visual representations of the information are shown, as can be seen in Figure 9.
These representations show the doctor in detail what needs to be checked more closely. The doctor assigned to the patient can replay the exercises in the embedded virtual reality player, as shown in Figure 9, and can also send the patient a note about the exercise performed.

5.2. Real Environment Validation

To validate the system in a real environment, a test was carried out with five rehabilitation patients who used the system for a month. The study was carried out in collaboration with a clinic specializing in physical rehabilitation. All the patients voluntarily agreed to take part in the test and were duly informed. The tests involving human subjects were carried out in accordance with the 1964 Declaration of Helsinki, Ethical Principles for Medical Research Involving Human Subjects.
The volunteers (two men between 23 and 52 years old and three women between 26 and 47 years old) were undergoing rehabilitation due to muscular pain when performing certain daily movements. The experiment lasted one month, with an initial session and four control sessions, one per week.
The exercises were designed to eliminate certain causes of pain by preventing certain muscles from atrophying. They were entered into the system by a sports expert under the supervision of a specialized doctor, who confirmed that the exercises were performed properly and would help the patient.
Throughout the month, the users performed the exercises established by the doctor to improve their condition. In each session, the parameters collected by the system were analyzed, such as the angle of the movements, the heart rate (beats per minute), and the blood oxygen level. Figure 10 shows a patient wearing the VR equipment.
In this case, the VR system was used for the experiments so that its operation and the patients' level of satisfaction could be monitored. Figure 11 shows a patient performing an exercise with the VR system.
To verify that the system correctly detected the errors made by the patient, a face-to-face control was carried out in the first session. During this control, each patient was asked to perform each of the established exercises with the system components switched on. In addition, the patient was recorded from different angles. The sports expert was then asked to visually count the number of errors made by the patient. Once the exercise was completed, the number of errors detected by the system was collected in order to compare its effectiveness at detecting these errors. The results obtained are shown in Table 2.
Table 2 indicates that the system detected more errors in real time than the expert. To confirm that these additional errors were real, the expert reviewed the images captured during the exercise, verifying that the count was correct and not due to a malfunction of the system. In this way, the effectiveness of the system in detecting bad posture was confirmed.
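The gap between the expert's visual count and the system's count can be quantified directly from the values reported in Table 2. The following sketch (a plain tabulation of the published numbers, with an illustrative helper function `extra_errors` that is not part of the system) sums the errors per patient over the four exercises and reports how many additional errors the system detected:

```python
# Error counts from Table 2: per patient, visually detected vs.
# system-detected errors for Exercises 1-4 (values from the table).
visual = {
    "Patient 1": [8, 7, 5, 10],
    "Patient 2": [7, 8, 3, 9],
    "Patient 3": [6, 7, 5, 9],
    "Patient 4": [4, 7, 4, 9],
    "Patient 5": [9, 7, 6, 9],
}
system = {
    "Patient 1": [20, 13, 12, 23],
    "Patient 2": [17, 14, 9, 22],
    "Patient 3": [17, 15, 15, 18],
    "Patient 4": [12, 13, 11, 18],
    "Patient 5": [15, 19, 14, 19],
}

def extra_errors(visual, system):
    """Additional errors detected by the system per patient, summed over exercises."""
    return {p: sum(system[p]) - sum(visual[p]) for p in visual}

for patient, extra in extra_errors(visual, system).items():
    print(f"{patient}: system detected {extra} more errors than the expert")
```

For every patient and every exercise the system count exceeds the visual count, which is the observation the expert's video review subsequently confirmed.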
Subsequently, during the four remaining sessions, information was gathered to verify that the patients were able to improve their execution of each exercise by reducing the number of errors made. These sessions took place on Days 7, 14, 21, and 28 of the treatment. The information collected corresponds to the average number of errors made by users during these control sessions, shown in Figure 12. The figure shows that the patients reduced the errors committed in a roughly logarithmic manner: rapid learning and assimilation during the first 14 days greatly reduced the errors committed, and after 14 days the users maintained low average error values, supporting their rehabilitation.
The same evolution is observed if the errors of all patients are grouped by exercise for each of the monitored body parts, as shown in Figure 13. These plots show, for each day on which a control session was performed, the errors made by the patients in each exercise for the monitored body parts.
The same trend can be observed here as well: the errors made over the first 14 days are greatly reduced and remain at low levels over the last 14 days, and these values decrease in a general way throughout the patient's body.
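The "logarithmic" reduction described above can be checked with a simple least-squares fit of the model errors ≈ a + b·ln(day), where a decreasing trend corresponds to b < 0. The per-session averages below are purely illustrative (the exact values behind Figure 12 are not tabulated in the paper):

```python
import math

# Hypothetical per-session average error counts, illustrative only;
# the exact values behind Figure 12 are not given numerically.
days = [7, 14, 21, 28]
avg_errors = [9.0, 4.0, 3.2, 3.0]

# Ordinary least-squares fit of: errors ~ a + b * ln(day).
x = [math.log(d) for d in days]
n = len(x)
mx = sum(x) / n
my = sum(avg_errors) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, avg_errors)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
print(f"fit: errors = {a:.2f} + ({b:.2f}) * ln(day)")  # b < 0: errors decrease
```

With data of this shape, the slope b comes out negative and most of the drop is absorbed by the first two sessions, matching the qualitative description of rapid early learning followed by a plateau.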

6. Conclusions

This article presents the development and implementation of a rehabilitation training system based on a suit with IMU sensors, integrated with web, mobile, and VR clients. The system is built on the PANGEA multi-agent platform, which provides a decentralized architecture that can be reconfigured dynamically, handles heterogeneous information, and monitors the patient, making it possible to observe the evolution of the rehabilitation process. The system processes the data collected while monitoring the user, evaluates the movements made, and informs patients of their errors so that they can improve their performance, thus improving the rehabilitation process. It also monitors the patient's evolution and adapts the exercises needed to advance. Furthermore, the different forms of access to the system (web, mobile, and VR) allow it to be used by a wide variety of people, improving its accessibility by adapting to the context of each person. The use of VR technology engages the patient more interactively during exercise, positively influencing their motivation and progress.
Comparing the developed system with other similar rehabilitation systems makes the contributions of this project clear. Regarding data collection, the system uses a suit with IMU sensors covering the entire body, monitoring the movements of both the lower and upper parts of the patient's body, except for the head. In addition, the incorporation of a wearable device makes it possible to monitor the patient's vital signs, revealing whether the exercise is being performed correctly and whether the effort involved for the patient is high. Each of the patient's movements is thus captured with precision, resulting in a more accurate follow-up of the patient. This information is also processed to make users aware of their evolution and of the effectiveness of their movements, which can improve the performance of the exercises planned for recovery. The latter has a positive influence on the recovery process, as shown by the results obtained during the case study.
Furthermore, through the use of the multi-agent system together with the Unity3D engine, exercises can be added to the system remotely, accommodating different rehabilitation methods. This allows the system to adapt the activities to the patient's progress and performance.
For future work, we intend to improve the system by using artificial intelligence techniques to identify errors made by the user automatically and efficiently according to the exercises performed. In addition, a study will be carried out with a greater number of patients with a diversity of chronic illnesses, which will make it possible to compare their evolution according to the pathology diagnosed.

Author Contributions

Conceptualization, H.S.S.B. and A.S.M.; Investigation, H.S.S.B. and A.S.M.; Methodology, H.S.S.B. and G.V.G.; Project Administration, H.S.S.B. and G.V.G.; Resources, A.S.M. and L.A.S.; Supervision, G.V.G. and F.G.E.; Validation, G.V.G. and F.G.E.; Writing—original draft, H.S.S.B. and A.S.M.; Writing—review and editing, L.A.S. and G.V.G.; and Financial support, G.V.G. and H.S.S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Junta De Castilla y León—Consejería De Economía Y Empleo: System for simulation and training in advanced techniques for the occupational risk prevention through the design of hybrid-reality environments with ref. J118. André Filipe Sales Mendes’s research was co-financed by the European Social Fund and Junta de Castilla y León (Operational Programme 2014–2020 for Castilla y León, EDU/556/2019 BOCYL). Francisco García Encinas’s research was partly supported by the Spanish Ministry of Education and Vocational Training (FPU Fellowship under Grant FPU19/02455).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Knai, C.; Brusamento, S.; Legido-Quigley, H.; Saliba, V.; Panteli, D.; Turk, E.; Car, J.; McKee, M.; Busse, R. Systematic review of the methodological quality of clinical guideline development for the management of chronic disease in Europe. Health Policy 2012, 107, 157–167. [Google Scholar] [CrossRef] [PubMed]
  2. Huang, F.H. Explore Home Care Needs and Satisfaction for Elderly People with Chronic Disease and their Family Members. Procedia Manuf. 2015, 3, 173–179. [Google Scholar] [CrossRef] [Green Version]
  3. Varshney, U. Mobile health: Four emerging themes of research. Decis. Support Syst. 2014, 66, 20–35. [Google Scholar] [CrossRef]
  4. Hjelm, N.M. Benefits and Drawbacks of Telemedicine. J. Telemed. Telecare. 2005, 11, 60–70. [Google Scholar] [CrossRef]
  5. Vogt, L.; Lucki, K.; Bach, M.; Banzer, W. Rollator use and functional outcome of geriatric rehabilitation. J. Rehabil. Res. Dev. 2010, 47, 151–156. [Google Scholar] [CrossRef]
  6. Institute of Electrical and Electronics Engineers; IEEE Sensors Council. IEEE Sensors 2017: October 29–November 1, 2017, Glasgow, Scotland, UK, Scottish Event Campus (SEC): 2017 Conference Proceedings; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
  7. Lay-Ekuakille, A.; Vergallo, P.; Trabacca, A.; De Rinaldis, M.; Angelillo, F.; Conversano, F.; Casciaro, S. Low-frequency detection in ECG signals and joint EEG-Ergospirometric measurements for precautionary diagnosis. Meas. J. Int. Meas. Confed. 2013, 46, 97–107. [Google Scholar] [CrossRef]
  8. Brennan, D.M.; Mawson, S.; Brownsell, S. Telerehabilitation: Enabling the remote delivery of healthcare, rehabilitation, and self management. In Studies in Health Technology and Informatics; IOS Press: Amsterdam, The Netherlands, 2009; Volume 145, pp. 231–248. [Google Scholar] [CrossRef]
  9. Velázquez, R.; Pissaloux, E.; Rodrigo, P.; Carrasco, M.; Giannoccaro, N.I.; Lay-Ekuakille, A. An outdoor navigation system for blind pedestrians using GPS and tactile-foot feedback. Appl. Sci. 2018, 8, 578. [Google Scholar] [CrossRef] [Green Version]
  10. Hiremath, S.; Yang, G.; Mankodiya, K. Wearable Internet of Things: Concept, Architectural Components and Promises for Person-Centered Healthcare; Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (ICST): Athens, Greece, 2014. [Google Scholar] [CrossRef] [Green Version]
  11. Pan, C.T.; Lin, Z.C.; Sun, P.Y.; Chang, C.C.; Wang, S.Y.; Yen, C.K.; Yang, Y.S. Design of virtual reality systems integrated with the lower-limb exoskeleton for rehabilitation purpose. In Proceedings of the 4th IEEE International Conference on Applied System Innovation 2018, ICASI 2018, Tokyo, Japan, 13–17 April 2018; pp. 498–501. [Google Scholar] [CrossRef]
  12. Sadihov, D.; Migge, B.; Gassert, R.; Kim, Y. Prototype of a VR upper-limb rehabilitation system enhanced with motion-based tactile feedback. In Proceedings of the 2013 World Haptics Conference—WHC 2013, Daejeon, Korea, 14–17 April 2013; pp. 449–454. [Google Scholar] [CrossRef]
  13. Burdea, G.C. Virtual rehabilitation–benefits and challenges. Methods Inf. Med. 2003, 42. [Google Scholar] [CrossRef]
  14. Abdellaoui, G.; Bendimerad, F.T. Dynamic reconfiguration of LPWANs pervasive system using multi-agent approach. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 300–305. [Google Scholar] [CrossRef] [Green Version]
  15. Villarrubia, G.; De Paz, J.F.; Bajo, J.; Corchado, J.M. Ambient agents: Embedded agents for remote control and monitoring using the PANGEA platform. Sensors 2014, 14, 13955–13979. [Google Scholar] [CrossRef] [Green Version]
  16. AlDossary, S.; Martin-Khan, M.G.; Bradford, N.K.; Smith, A.C. A Systematic Review of the Methodologies Used to Evaluate Telemedicine Service Initiatives in Hospital Facilities. Int. J. Med. Inform. 2017, 97, 171–194. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Huang, H.; Li, X.; Liu, S.; Hu, S.; Sun, Y. TriboMotion: A Self-Powered Triboelectric Motion Sensor in Wearable Internet of Things for Human Activity Recognition and Energy Harvesting. IEEE Internet Things J. 2018, 5, 4441–4453. [Google Scholar] [CrossRef]
  18. Lin, B.S.; Chen, J.L.; Hsu, H.C. Novel Upper-Limb Rehabilitation System Based on Attention Technology for Post-Stroke Patients: A Preliminary Study. IEEE Access 2017, 6, 2720–2731. [Google Scholar] [CrossRef]
  19. Buranapanichkit, D.; Jindapetch, N.; Thongpull, K.; Thongnoo, K.; Chetpattananondh, K.; Duangsoithong, R.; Sengchuai, K. A Patient Monitoring System for Multiple IoT Rehabilitation Devices; Technical Report; IEEE: Pattaya, Thailand, 2019. [Google Scholar]
  20. Bisio, I.; Garibotto, C.; Lavagetto, F.; Sciarrone, A. When eHealth Meets IoT: A Smart Wireless System for Post-Stroke Home Rehabilitation. IEEE Wirel. Commun. 2019, 26, 24–29. [Google Scholar] [CrossRef]
  21. Gaddam, A.; Wilkin, T.; Angelova, M.; Valera, A.; McIntosh, J.; Marques, B. Design development of iot based rehabilitation outdoor landscape for gait phase recognition. In Proceedings of the International Conference on Sensing Technology, Sydney, Australia, 2–4 December 2019. [Google Scholar] [CrossRef]
  22. Celesti, A.; Lay-Ekuakille, A.; Wan, J.; Fazio, M.; Celesti, F.; Romano, A.; Bramanti, P.; Villari, M. Information management in IoT cloud-based tele-rehabilitation as a service for smart cities: Comparison of NoSQL approaches. Meas. J. Int. Meas. Confed. 2020, 151. [Google Scholar] [CrossRef]
  23. Pavón-Pulido, N.; Antonio López-Riquelme, J.; Feliú-Batlle, J.J. IoT Architecture for Smart Control of an Exoskeleton Robot in Rehabilitation by Using a Natural User Interface Based on Gestures. J. Med. Syst. 2020, 44, 1–10. [Google Scholar] [CrossRef] [PubMed]
  23. Morales Salcedo, R.; Elías Espinosa, M.C. Smart Rehabilitation Solutions Through IoT and Mobile Devices. Manag. Stud. 2019, 7. [Google Scholar] [CrossRef]
  25. Erdogan, A.; Celebi, B.; Satici, A.C.; Patoglu, V. Assist On-Ankle: A reconfigurable ankle exoskeleton with series-elastic actuation. Auton. Robot. 2017, 41, 743–758. [Google Scholar] [CrossRef]
  26. Wang, C.; Wang, L.; Qin, J.; Wu, Z.; Duan, L.; Li, Z.; Cao, M.; Li, W.; Lu, Z.; Li, M.; et al. Development of an ankle rehabilitation robot for ankle training. In Proceedings of the 2015 IEEE International Conference on Information and Automation, ICIA 2015—In Conjunction with 2015 IEEE International Conference on Automation and Logistics, Lijinag, China, 8–10 August 2015; pp. 94–99. [Google Scholar] [CrossRef]
  27. Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep High-Resolution Representation Learning for Human Pose Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–21 June 2019; pp. 5693–5703. [Google Scholar]
  28. Kreiss, S.; Bertoni, L.; Alahi, A. PifPaf: Composite Fields for Human Pose Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–21 June 2019; pp. 11977–11986. [Google Scholar]
  29. Zhao, L.; Peng, X.; Tian, Y.; Kapadia, M.; Metaxas, D.N. Semantic Graph Convolutional Networks for 3D Human Pose Regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–21 June 2019; pp. 3425–3435. [Google Scholar]
  30. Hernandez, Y.; Kim, K.H.; Benson, E.; Jarvis, S.; Meginnis, I.; Rajulu, S. Underwater space suit performance assessments part 1: Motion capture system development and validation. Int. J. Ind. Ergon. 2019, 72, 119–127. [Google Scholar] [CrossRef]
  31. Kim, E.S. Ghost in the Virtual Reality: Translating the human essence with motion captured dance. BCS Learn. Dev. 2019. [Google Scholar] [CrossRef] [Green Version]
  32. Tretriluxana, S.; Tretriluxana, J. Differential effects of feedback in the virtual reality environment for arm rehabilitation after stroke. In Proceedings of the 8th Biomedical Engineering International Conference (BMEiCON), Pattaya, Thailand, 25–27 November 2015. [Google Scholar]
  33. de la Iglesia, D.H.; Mendes, A.S.; González, G.V.; Jiménez-Bravo, D.M.; de Paz Santana, J.F. Connected elbow exoskeleton system for rehabilitation training based on virtual reality and context-aware. Sensors 2020, 20, 858. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Postolache, O.; Alexandre, R.; Geman, O.; Jude Hemanth, D.; Gupta, D.; Khanna, A. Remote Monitoring of Physical Rehabilitation of Stroke Patients using IoT and Virtual Reality. IEEE J. Sel. Areas Commun. 2020. [Google Scholar] [CrossRef]
  35. Rios-Ramos, E.S.; Melendez-Armenta, R.A.; Vazquez-Lopez, J.A.; Morales-Rosales, L.A. Multi-Agent System for Post-Stroke Medical Monitoring in Web-Based Platform. Comput. Inf. Sci. 2020, 13, 46. [Google Scholar] [CrossRef]
  36. Calvaresi, D.; Marinoni, M.; Dragoni, A.F.; Hilfiker, R.; Schumacher, M. Real-time multi-agent systems for telerehabilitation scenarios. Artif. Intell. Med. 2019, 96, 217–231. [Google Scholar] [CrossRef] [PubMed]
  37. Calvaresi, D.; Calbimonte, J.P. Real-time compliant stream processing agents for physical rehabilitation. Sensors 2020, 20, 746. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Enflux Motion Capture Clothing. 2020. Available online: https://www.getenflux.com/ (accessed on 27 July 2020).
  39. Garmin. 2020. Available online: https://www.garmin.com/es-ES/ (accessed on 27 July 2020).
  40. Parwana, H.; Kothari, M. Quaternions and Attitude Representation. arXiv 2017, arXiv:1708.08680. Available online: https://arxiv.org/pdf/1708.08680.pdf (accessed on 30 December 2020).
  41. Unity Real-Time Development Platform|3D, 2D VR & AR Visualization. 2020. Available online: https://unity.com/es (accessed on 25 October 2020).
  42. Lohse, K.; Shirzad, N.; Verster, A.; Hodges, N.; Van Der Loos, H.F. Video games and rehabilitation: Using design principles to enhance engagement in physical therapy. J. Neurol. Phys. Ther. 2013, 37, 166–175. [Google Scholar] [CrossRef]
  43. Oculus. 2020. Available online: https://www.oculus.com/?locale=es_ES (accessed on 27 July 2020).
Figure 1. Proposed architecture using a multi-agent system (MAS) PANGEA.
Figure 2. Monitoring suit sensors representation.
Figure 3. Block diagram of the data processing algorithm.
Figure 4. Main components of the virtual reality environment.
Figure 5. Virtual reality scenario.
Figure 6. Errors shown after the user has completed the exercise.
Figure 7. Mobile interface examples.
Figure 8. Monitoring interface.
Figure 9. Information graphics.
Figure 10. User with Oculus and Enflux suit.
Figure 11. User doing exercise.
Figure 12. Evolution of users’ errors.
Figure 13. Average deviation for each part of the body.
Table 1. Agents of the job monitoring organization.
Agent: Description
  Historical: This agent is responsible for generating and obtaining historical data on exercise executions so that they can be reviewed and evaluated by an expert in the future.
  Evolution: This agent is in charge of calculating the evolution from the percentage of completion of the exercises.
  Medical Record: The function of this agent is to simulate integration with a health system from which patients' medical conditions can be obtained and reported.
  Profile Data: This agent is responsible for making the user's profile, with the user's parameters, available to the entire system. It is also responsible for updating the profile values as progress is made.
  Report Generator: Agent in charge of generating reports, either of each exercise or of the rehabilitation progress.
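The organization of agents in Table 1 could be sketched, purely for illustration, as a minimal registry that routes messages to the responsible agent. The class and method names below (`Agent`, `MonitoringOrganization`, `dispatch`) are hypothetical and are not part of PANGEA's actual API, which the paper does not detail:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One agent of the monitoring organization, identified by name."""
    name: str
    description: str

    def handle(self, message: dict) -> None:
        # Placeholder behavior: a real agent would process the message.
        print(f"[{self.name}] received: {message}")

@dataclass
class MonitoringOrganization:
    """Registry routing messages to the agents of Table 1."""
    agents: dict = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def dispatch(self, agent_name: str, message: dict) -> None:
        self.agents[agent_name].handle(message)

org = MonitoringOrganization()
for name, desc in [
    ("Historical", "Stores exercise histories for later expert review"),
    ("Evolution", "Computes progress from exercise completion percentages"),
    ("Medical Record", "Simulates integration with a health system"),
    ("Profile Data", "Publishes and updates the user's profile"),
    ("Report Generator", "Generates per-exercise and progress reports"),
]:
    org.register(Agent(name, desc))

org.dispatch("Evolution", {"exercise": 1, "completion": 0.85})
```

The point of the sketch is the separation of concerns: each responsibility in Table 1 maps to one addressable agent, which is what allows the PANGEA-based architecture to be reconfigured dynamically.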
Table 2. Comparison between expert errors detected and system errors detected.
Patient: Visual errors (Exercises 1–4) | System errors (Exercises 1–4)
  Patient 1: 8, 7, 5, 10 | 20, 13, 12, 23
  Patient 2: 7, 8, 3, 9 | 17, 14, 9, 22
  Patient 3: 6, 7, 5, 9 | 17, 15, 15, 18
  Patient 4: 4, 7, 4, 9 | 12, 13, 11, 18
  Patient 5: 9, 7, 6, 9 | 15, 19, 14, 19