
Design and implementation of a distributed fall detection system based on wireless sensor networks

Abstract

Pervasive healthcare is one of the most important applications of the Internet of Things (IoT). As part of the IoT, wireless sensor networks (WSNs) are responsible for sensing abnormal behavior of the elderly or patients. In this article, we design and implement a fall detection system called SensFall. With resource-restricted sensor nodes, it is vital to find an efficient feature to describe the scene. Optical flow analysis shows that the thermal energy variation of each sub-region of the monitored region is a salient spatio-temporal feature that characterizes the fall. The main contribution of this study is to develop a feature-specific sensing system that captures this feature so as to detect the occurrence of a fall. In our system, the three-dimensional (3D) object space is segmented into distinct, non-overlapping sampling cells, and pyroelectric infrared (PIR) sensors are employed to detect the variation of the thermal flux within these cells. A hierarchical classifier (two-layer hidden Markov models, HMMs) is proposed to model the time-varying PIR signals and classify different human activities. We use self-developed PIR sensor nodes mounted on the ceiling and construct a WSN based on the ZigBee (IEEE 802.15.4) protocol. We conducted experiments in a real office environment, with volunteers simulating several kinds of activities: falling, sitting down, standing up from a chair, walking, and jogging. Encouraging experimental results confirm the efficacy of our system.

1 Introduction

The Internet of Things (IoT) concerns the seamless interaction of objects, sensors, and computing devices [1]. As wireless sensor networks (WSNs) become increasingly integrated with the Internet, the IoT is fast becoming a reality. The IoT changes the web from a virtual online space into a system that can both sense and affect its environment. WSNs, as a subpart of the IoT, extend the Internet's digital nerve-endings into everyday objects. All kinds of sensors, such as RFID, video, and infrared, are recognized as the critical "atomic components" that will bridge the gap between the physical world and the digital world [2].

The IoT can be applied to various areas; the most often cited include business logistics, home automation, and healthcare [3]. Although fall detection is only one specific healthcare problem, it attracts significant research effort, because accidental falls are among the leading causes of death for people over 65 [4]. According to the report of Chan et al. [5], approximately one-third of people aged 75 or older suffer a fall each year. Falls among the elderly are a serious problem in an aging society [6]. Immediate treatment of people injured by a fall is critical: a rapid response not only increases the independent-living ability of the elderly and patients, but also relieves the pressure caused by the shortage of nurses. Therefore, designing a rapid alarm system for fall detection has long been an active research topic in elderly healthcare.

Camera-based methods can realize fall detection for elderly people in a non-intrusive fashion. For example, Williams et al. [7] extracted the human target from video with a simple background subtraction method, and then used the aspect ratio of the body image as the cue to determine whether a fall had happened. If the aspect ratio, i.e., the width of the person divided by the height, is below a particular threshold, the person is assumed to be upright; otherwise, the person is assumed to have fallen. Rougier et al. [8] integrated the motion history image (MHI) and the variance of body shape as the feature for fall recognition. Although many works have demonstrated the efficiency of camera-based methods [6], these studies rest on the assumption that the lighting conditions remain fairly stable. This assumption does not always hold in everyday life: camera-based analysis can be influenced by changes of illumination and by shadows, and accurate body extraction from video is still a thorny issue in the computer vision community. With resource-constrained sensor nodes, such sophisticated algorithms are not preferable choices. In addition, camera-based methods infringe on privacy; no one likes the feeling of being monitored by a camera all day long. Is it possible to find another sensing method to detect the fall? This is the motivation of our research.

In the WSN portion of the IoT, the choice of sensing modality is critical. Recently, there has been a growing tendency to research sensing modalities based on pyroelectric infrared (PIR) sensors [9-11]. The PIR sensor is a thermal sensing technology that responds only to temperature changes caused by human motion. It has promising advantages for overcoming the limitations of traditional camera-based sensing, since human motion information is acquired directly, without sensing redundant background and chromatic information. The output of a PIR sensor is a low-dimensional temporal data stream, which avoids high-dimensional data processing. However, PIR sensors provide fairly crude data from which it is difficult to acquire spatial information. Thus, the primary goal of the sensing system design is to enhance the spatial awareness of the PIR sensors and to capture the spatio-temporal feature of the fall.

In this article, we design and implement a system, SensFall, which detects falls efficiently and effectively. Sensing model design is the most important part of our system design. Our sensing model springs from the reference structure tomography (RST) paradigm [12], which permits scan-free multidimensional imaging and data- and computation-efficient source analysis. The reference structure plays the role of modulating the visibility between the object space and the measurement space. Thus, after object space segmentation, the spatial awareness of the PIR sensors in the measurement space is enhanced, and the spatio-temporal feature of the fall can be captured by the PIR sensors.

In particular, with a Fresnel lens array around it, each PIR sensor can sensitively detect the thermal fluctuation induced by human motion within its field of view (FOV). Each PIR sensor can be considered a single pixel. An opaque mask covers part of the surface of the Fresnel lens array, acting as the geometric reference structure [13]. As a result, the PIR sensor senses only part of its original FOV. Several PIR sensors with their own masks are multiplexed in one sensor node to modulate the visibility pattern of the object space.

By this method, the object space is segmented into multiple sampling cells: contiguous points in each cell share the same unique visibility signature. The human body acts as a structured thermal source. As the body passes the boundaries of the sampling cells, the PIR sensors generate output with different characteristics corresponding to different human motions, e.g., falling, sitting down, standing up from a chair, walking, and jogging.

We develop a corresponding hierarchical classifier, a two-layer hidden Markov model (HMM), to model the time-varying PIR signals and detect the fall.

The advantages of our system are clear. First, the dimension of the PIR data stream is low, which avoids high-dimensional data processing; in our prototype implementation, the data stream dimension is 7 × 1 at a 25 Hz sample rate. Second, the communication burden is low (message payload 1400 bit/s), which can be supported by most low-rate wireless personal area network (LR-WPAN) protocols; for example, the data rate of ZigBee is 250 kbit/s. Third, our system works well under any illumination conditions, even in a totally dark environment, and the sensor nodes can be easily deployed in an elderly person's house. Last but not least, our system preserves privacy, because it does not capture images of the elder as camera sensors do. The design of our system works toward the vision of the IoT: anytime, anywhere, any media, and anything [1].

2 Related study

Much of the existing work on the IoT has focused on addressing power and computational resource constraints through the design of specific routing, MAC, and cross-layer protocols [14, 15]. However, for specific applications, further effort should be directed toward improving the sensing efficiency and minimizing the data transmitted between sensor nodes. The focus of our study is to design a novel sensing paradigm that achieves fall detection on a resource-constrained wireless network.

Using wearable sensors is the most common method of detecting falls and other abnormal behaviors. Several researchers have explored the use of wearable acceleration sensors placed in human clothing [16, 17]. Based on the multidimensional signals, a simple threshold or posture model is built to detect abnormal activities [18]. Bourke and Lyons [19] used gyroscopes mounted on the torso to measure the pitch and roll angular velocities, and introduced a threshold-based algorithm that analyzes the collected changes of angular velocity to raise the fall alarm. Although accelerometers and gyroscopes are able to provide discriminative time-varying signals for fall detection, they are intrusive and their usage is restrictive: they need the cooperation of the elderly, which largely depends on the person's ability and willingness. Elderly people may forget to wear them, and wearable sensors can cause discomfort to the wearer.

Camera-based body shape change analysis rests on a general principle: the shape of a lying person is significantly different from that of a standing person. For example, Anderson et al. [20] used an HMM-based algorithm to detect falls, where the HMMs use the width-to-height ratio of the bounding box extracted from the silhouette. However, a single camera limits the viewing angle of the scene and, more importantly, of the residents. The collaboration of multiple cameras can overcome this limitation: Cucchiara et al. [21] used a 3D shape of the human body, obtained from multiple cameras calibrated in advance, to detect falls. The advantages of camera-based methods include [6]: (1) compared with wearable sensors, they are less intrusive because they are installed in the building, not worn by users; (2) the recorded video can be used for remote post-verification and analysis. Generally speaking, body shape change detection can run in real time, whereas 3D body shape reconstruction needs more computation and more cameras. However, the existing algorithms in this category are mainly based on shape features extracted from the human contour, e.g., the width-to-height ratio of the bounding box, so accurate background and shadow subtraction is a basic premise. This premise is sometimes violated in real environments, and the performance of camera-based methods then degrades.

Moreover, the high-dimensional visual data stream brings great computational and communication pressure, limiting the construction of pervasive visual sensor networks. In addition, privacy is an inevitable concern with camera-based methods. Recently, several studies have exploited the advantages of PIR sensors for automated surveillance. Shankar et al. [22] explored the response characteristics of PIR sensors and used them for human motion tracking. Hao et al. [23] demonstrated a multiple-lateral-view wireless PIR sensor system for human tracking, where the human location is inferred from the angle of arrival (AoA) at the distributed sensor modules. Hao et al. [24] subsequently showed that PIR sensors have the potential to provide a reliable biometric solution for the verification/identification of a small group of human subjects. Burchett et al. [25] presented a lightweight biometric detection system based on PIR sensors. However, the studies mentioned above do not address how to discriminate abnormal behavior from normal behavior, as done in our study.

The first study that employed PIR sensors for fall detection was conducted by Sixsmith and Johnson [26]. They used an integrated pyroelectric sensor array (16 × 16) to collect human motion information without sensing the background. They installed the device on the wall to capture the thermal image of the human body and estimate the vertical velocity, and trained a neural network to detect falls in realistic scenarios. However, vertical velocity information alone is not robust enough to detect falls; their system intrinsically captures body shape change, as camera-based methods do. Besides, their experiments did not analyze the system's ability to distinguish between falls and other similar normal vertical activities.

Recently, Liu et al. [27] used direction-sensitive PIR sensors to construct a distributed fall detection system. Their distributed sensing paradigm aims at capturing the synergistic motion patterns of the head, upper limbs, and lower limbs, and their experimental results are encouraging. However, to capture the motion patterns of different parts of the human body, their system has to be deployed in a side-view configuration, which makes it easy to occlude with other objects, e.g., furniture. Furthermore, their system is view-dependent: each sensor node can only detect falls that happen perpendicular to the FOV of its PIR sensors. These limitations are overcome in our design of a novel, efficient sensing model.

3 Feature analysis and sensing model

In this section, we discuss the spatio-temporal feature of the fall, followed by the sensing model design.

3.1 Optical flow based analysis

To obtain the discriminative spatio-temporal feature of the fall, it is necessary to analyze the difference between the fall and other normal activities; this is the key to detecting falls efficiently. Based on this analysis, we design the corresponding sensing model.

The well-established optical flow method is employed to analyze the spatio-temporal features as a human performs different activities. Optical flow computation estimates the pixel motion between two consecutive frames [28]. Sample images of normal activities and the fall are shown in Figure 1. The motion images are taken from a video sampled at 25 frames/s; for visualization, we select three frames and their corresponding optical flow vector images for each category of activity. We divide the monitored region into four sub-regions, as shown in Figure 1a, and aggregate the horizontal vector magnitude within each region separately, denoted as the horizontal motion energy (HME), as shown in Figure 2. The HME reflects the horizontal component of the human motion crossing the sub-region perpendicularly, and is used as the cue to analyze the spatio-temporal features of different activities. As shown in Figure 2a "walking" and 2d "jogging", the HME peaks of the sub-regions appear one by one at roughly fixed intervals, reflecting the motion characteristic of walking and jogging: the four sub-regions are crossed sequentially at about the same horizontal speed. As shown in Figure 2b,c, the HME peaks of "sitting down" and "standing up" disappear or appear gradually, as these are controlled human activities. By contrast, a fall causes the HME outputs of adjacent sub-regions to overlap within a relatively short period of time, which corresponds to the velocity features of the fall [29], as shown in Figure 2e. These observations are consistent with the dynamics of the fall. In [29], Wu identified two velocity features of the fall: (1) the magnitude of both the vertical and horizontal velocities of the trunk increases dramatically during the falling phase, reaching 2 to 3 times that of any other controlled movement; (2) the increases of the vertical and horizontal velocities usually occur simultaneously, about 300-400 ms before the end of the fall, which is strongly dissimilar to controlled human activities.

Based on the above analysis, the time-varying HME of each sub-region is a discriminative spatio-temporal feature that can distinguish the fall from other normal activities. To leverage this feature for fall detection in an efficient fashion, the feature-specific system should (1) segment the monitored region into sub-regions and (2) collect the energy variation of each sub-region with its sensors. This is the inspiration for our sensing model design, which is elaborated in the succeeding sections.
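
To make the HME feature concrete, the following sketch computes the per-sub-region horizontal motion energy from consecutive video frames. It is an illustration in Python with OpenCV, not the authors' original Matlab code; the Farneback flow parameters, the vertical four-way split of the frame, and the file name are our assumptions.

```python
import cv2
import numpy as np

def horizontal_motion_energy(prev_gray, curr_gray, n_regions=4):
    """Sum |horizontal flow component| inside each vertical strip (sub-region)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    horiz = np.abs(flow[..., 0])                 # horizontal component magnitude
    bounds = np.linspace(0, horiz.shape[1], n_regions + 1, dtype=int)
    return [horiz[:, bounds[i]:bounds[i + 1]].sum() for i in range(n_regions)]

# Track HME over a video sampled at 25 frames/s (hypothetical file name).
cap = cv2.VideoCapture("activity.avi")
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(horizontal_motion_energy(prev, curr))  # one HME value per sub-region
    prev = curr
cap.release()
```

Plotting the four HME traces over time reproduces the qualitative patterns of Figure 2: staggered peaks for walking and jogging, gradual ramps for sitting and standing, and overlapping bursts for the fall.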

Figure 1. The sample images and their corresponding optical flow images. The first row of each sub-figure shows the original images, and the second row the corresponding optical flow images: (a) walking; (b) sitting down; (c) standing up; (d) jogging; (e) falling. We further divided the monitored region into four sub-regions and aggregated the horizontal vector magnitude of these regions as the human crosses them, as shown in Figure 2.

Figure 2. The change of the optical flow of each sub-region. The sum of the horizontal optical flow motion vectors as the human passes the four regions performing different activities: (a) walking; (b) sitting down; (c) standing up; (d) jogging; (e) falling.

3.2 Sensing model

To capture the most discriminative spatio-temporal feature of the fall, namely the HME of each sub-region, the sensing model has to be designed deliberately. Our model springs from the reference structure tomography (RST) paradigm, which uses multidimensional modulations to encode mappings between radiating objects and measurements [12].

The schematic diagram of our sensing model is shown in Figure 3. The object space refers to the space where the thermal object moves; the measurement space refers to the space where the PIR sensors are placed. The reference structure specifies the mapping from the object space to the measurement space [12], and is used to modulate the FOV of each PIR. In the case of an opaque reference structure, the visibility function $v_j(\mathbf{r})$ is binary valued, depending on whether the point $\mathbf{r}$ in the object space is visible to the $j$th PIR sensor:

Figure 3. Schematic diagram of the sensing model: the measurement space, reference structure, object space, and sampling cells.

$$v_j(\mathbf{r}) = \begin{cases} 1, & \mathbf{r} \text{ is visible to the } j\text{th PIR} \\ 0, & \text{otherwise} \end{cases}$$

The function of the PIR sensors is to transform the incident radiation into measurements. The measurement of the j th PIR sensor is given by

$$m_j(t) = h(t) * \int_{\Omega} v_j(\mathbf{r})\, s(\mathbf{r}, t)\, \mathrm{d}\mathbf{r} \qquad (1)$$

where "*" denotes convolution, h(t) is the impulse response of the PIR sensor, Ω R3 is the object space covered by the FOV of the j th PIR sensor, v j (r) is the visibility function, and s(r,t) is the thermal density function in the object space.

Assume that there are $M$ sensors in the measurement space and that their FOVs are multiplexed. Every point $\mathbf{r}$ in the object space can then be associated with a binary signature vector $[v_j(\mathbf{r})] \in \{0, 1\}^M$, which specifies its visibility to these $M$ sensors. In the object space, contiguous points with the same signature form a cell that is referred to as a sampling cell. As a result, the 3D object space $\Omega$ can be divided into $L$ discrete non-overlapping sampling cells, denoted as $\Omega_i$:

$$\Omega = \bigcup_{i} \Omega_i, \qquad \Omega_i \cap \Omega_j = \emptyset \qquad (2)$$

where $i, j = 1, \ldots, L$ and $i \neq j$. Then (1) can be rewritten in the discrete form

$$m_j(t) = h(t) * \sum_{i=1}^{L} \int_{\Omega_i} v_j(\mathbf{r})\, s(\mathbf{r}, t)\, \mathrm{d}\mathbf{r} = h(t) * \sum_{i=1}^{L} v_{ji} \int_{\Omega_i} s(\mathbf{r}, t)\, \mathrm{d}\mathbf{r} = \sum_{i=1}^{L} v_{ji}\, h(t) * \int_{\Omega_i} s(\mathbf{r}, t)\, \mathrm{d}\mathbf{r} = \sum_{i=1}^{L} v_{ji}\, s_i(t) \qquad (3)$$

where $v_{ji}$ is the $j$th element of the signature vector of $\Omega_i$, and $s_i(t) = h(t) * \int_{\Omega_i} s(\mathbf{r}, t)\, \mathrm{d}\mathbf{r}$ is the sensor measurement of sampling cell $\Omega_i$.

Then (3) can be written in a matrix form as

$$\mathbf{m} = \mathbf{V}\mathbf{s} \qquad (4)$$

where $\mathbf{m} = [m_j(t)]_{M \times 1}$ is the measurement vector, $\mathbf{V} = [v_{ji}]_{M \times L}$ is the measurement matrix determined by the visibility modulation scheme, and $\mathbf{s} = [s_i(t)]_{L \times 1}$ collects the measurements of the sampling cells.

As analyzed in Section 3.1, capturing the discriminative spatio-temporal feature of the fall requires the system to sense the time-varying HME of each sub-region. In our sensing model design, each sampling cell corresponds to a sub-region of the monitored region, and the PIR sensors capture the time-varying HME of these sampling cells; our sensing model therefore satisfies the design requirements. The model is intrinsically non-isomorphic, meaning that the number of PIR sensors $M$ is less than the number of sampling cells $L$, and the measurement of each PIR sensor is a linear combination of the sampling-cell signals [30]. Nevertheless, its sensing efficiency is high: it can robustly detect falls by processing low-dimensional sensor data directly. The reason is that our sensing model captures the most discriminative spatio-temporal feature of the fall through efficient spatial segmentation. In Section 5, we elaborate the system implementation and list the specification of the reference structure.
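
To illustrate the linear measurement model of Equation (4), the following minimal numpy sketch simulates how multiplexed sensors mix per-cell signals. The 3-sensor, 4-cell visibility matrix and the random cell signals are toy assumptions, not the actual 7 × 17 configuration given in Section 5.2.

```python
import numpy as np

# Toy visibility matrix V (M = 3 sensors, L = 4 sampling cells):
# V[j, i] = 1 iff sampling cell i is visible to sensor j.
V = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])

# Per-cell thermal signals s_i(t) over T time steps (random stand-ins).
T = 100
s = np.random.rand(4, T)

# Each sensor measurement is a linear combination of cell signals: m = V s.
m = V @ s
print(m.shape)  # (3, 100): M low-dimensional sensor streams, M < L
```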

3.3 Signal feature extraction

To represent the energy variation of the time-varying PIR sensor signals, it is critical to select an appropriate feature. Because the short-time energy (STE) has proved effective in depicting the energy variation of sine-like waveforms [31], we employ it as the feature of the PIR signals. The STE of the $n$th frame of the $j$th PIR is defined as

$$p_j(n) = \sum_{k=0}^{Z_n - 1} \left| m_j(k) - \mathrm{avSTE}_j(n) \right| \qquad (5)$$

with

$$\mathrm{avSTE}_j(n) = \frac{1}{Z_n} \sum_{k=0}^{Z_n - 1} m_j(k) \qquad (6)$$

where $j \in \{1, \ldots, M\}$ is the index of the PIR sensor, $Z_n$ is the total number of sampling points in the $n$th frame, $\mathrm{avSTE}_j(n)$ is the average energy of the sampling points, and $m_j(k)$ is the signal amplitude of the $k$th sampling point.
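
Equations (5) and (6) translate directly into code. The sketch below is a Python illustration; the default frame length and hop follow the 2 s window and 1 s overlap at 25 Hz stated in Section 5.3, and the synthetic test signal is a placeholder.

```python
import numpy as np

def short_time_energy(signal, frame_len=50, hop=25):
    """STE per frame: sum of |sample - frame mean|, Eqs. (5)-(6)."""
    ste = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        av = frame.mean()                      # avSTE_j(n), Eq. (6)
        ste.append(np.abs(frame - av).sum())   # p_j(n), Eq. (5)
    return np.array(ste)

# Example on a synthetic, PIR-like decaying oscillation sampled at 25 Hz.
t = np.arange(0, 10, 1 / 25.0)
x = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)
print(short_time_energy(x))
```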

3.4 Hierarchical classifier

Based on the extracted signal feature, the STE of each frame, we can design the corresponding classifier. The design of the classifier is problem-specific. Fall detection can be regarded as a binary classification problem: fall or normal activity. However, because it is difficult to design a single classifier to accomplish this task, a coarse-to-fine strategy is a better choice [32]. Thus, we design a binary hierarchical classifier for fall detection.

The hierarchical classifier in our study is based on hidden Markov models (HMMs). HMMs have been demonstrated to be a powerful tool for modeling time-varying sequence data, such as speech [33] and video streams [34]. The parameters of an HMM can be denoted compactly by $\lambda = (A, B, \Pi)$, where $A = \{a_{ij}\}$ is the hidden-state transition probability matrix, $B = \{b_i(p(n))\}$ denotes the probability density of the observation vector, and $\Pi = \{\pi_i\}$ is the initial state probability vector [33]. The parameters are learned from training data using the Baum-Welch method; this is done for each class separately.

The binary hierarchical classifier we designed is the two-layer HMMs model shown in Figure 4. The normal activities comprise normal horizontal activities and normal vertical activities. The first-layer HMMs are responsible for classifying unknown activities into normal horizontal activities and the rest: the horizontal activities include walking and jogging, and the rest include fall, sitting down, and standing up. In other words, we train two HMMs to separate the two groups of activities, G1 = {walking, jogging} and G2 = {fall, sitting down, standing up}. By the Bayesian rule (with equal priors), given the sequence $P = [p_j(n)]$, the posterior $p(\lambda_i \mid P)$ is proportional to the likelihood $p(P \mid \lambda_i)$. That is, we label the input sequence $P$ with the HMM of the highest likelihood,

Figure 4. Binary hierarchical classifier: the two-layer HMMs model. The first-layer HMMs classify unknown activities into normal horizontal activities and the rest. The rest are then classified by the second-layer HMMs, which distinguish the fall from other normal vertical activities. The fall is detected eventually.

$$i^* = \arg\max_{i \in \{1, 2\}} p(P \mid \lambda_i) \qquad (7)$$

where λ1 and λ2 correspond to G1 and G2, respectively.

In the same way, the second-layer HMMs distinguish the fall from the other normal vertical activities (sitting down and standing up).
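
The two-layer decision rule of Equation (7) might be implemented as follows. This is a sketch using the hmmlearn library, an assumption on our part rather than the authors' Matlab implementation; the feature sequences are expected as arrays of per-frame STE vectors, and the default state and mixture counts are placeholders (Section 6 discusses how N and M_G are actually chosen).

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

def train_hmm(sequences, n_states=4, n_mix=2):
    """Fit one Gaussian-mixture HMM on a list of STE sequences (each T x M)."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify(seq, layer1, layer2):
    """Layer 1: horizontal (G1) vs rest (G2); layer 2: fall vs other vertical."""
    lam_1, lam_2 = layer1              # trained on G1 and G2, respectively
    if lam_1.score(seq) > lam_2.score(seq):
        return "normal horizontal activity"
    lam_fall, lam_vert = layer2
    return "fall" if lam_fall.score(seq) > lam_vert.score(seq) \
           else "normal vertical activity"
```

Each model's score method returns the log-likelihood of the sequence, so the comparison at each layer is exactly the argmax of Equation (7).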

4 System design

Our SensFall system seeks to provide an efficient solution for fall detection. The design of the system includes two main parts: (1) data acquisition and (2) data processing. Data acquisition concerns how the system acquires useful sensor data through the sensing model, which is the most important aspect of our system. Data processing consists of feature extraction and classification; an alarm is raised if a fall is detected.

The distributed fall detection system SensFall comprises several PIR sensor nodes, a sink, and a host, as shown in Figure 5.

Figure 5. The SensFall system model. The combination of our system and the Internet forms the IoT.

A canonical node is equipped with several PIR sensors, a micro-controller unit (MCU), and a radio, as well as on-board RAM and flash memory. Nodes are assumed to be tetherless and battery-powered; consequently, the overall constraints for each node are energy and data transmission bandwidth. The PIR sensors are responsible for collecting the time-varying HME of each sub-region, and the MCU converts the analog signal of each PIR sensor into a digital signal. If the amplitude of the PIR signal exceeds the pre-defined threshold, the radio unit on the node sends the data to the sink.
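
The node-side logic can be summarized as follows. This is a behavioral sketch in Python rather than the actual 8051/CC2430 firmware; the 25 Hz sample rate and 8-bit quantization come from Sections 5.3 and 7, while the threshold value, the mid-scale rest level of 128, and the read_adc/radio_send interfaces are assumptions.

```python
import time

SAMPLE_HZ = 25    # per-PIR sample rate (Section 5.3)
REST = 128        # assumed mid-scale rest level of the 8-bit ADC
THRESHOLD = 12    # assumed motion threshold, in ADC counts above rest

def node_loop(read_adc, radio_send, n_pir=7):
    """Sample all PIR channels; transmit a frame only when motion is seen."""
    while True:
        sample = [read_adc(ch) for ch in range(n_pir)]   # 8-bit values
        if any(abs(v - REST) > THRESHOLD for v in sample):
            radio_send(bytes(sample))   # 7 bytes per frame to the sink
        time.sleep(1.0 / SAMPLE_HZ)
```

Transmitting only above-threshold frames keeps the radio idle during quiet periods, which matters for the battery-powered nodes described above.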

The sink collects all the data transmitted from the sensor nodes and transfers them to the host through a serial cable. The host is responsible for the data processing, including feature extraction and activity classification. If "fall" is the result of the activity classification, the alarm is raised. The host is connected to the Internet; if an emergency occurs, it sends a message to the doctors and nurses in a nearby hospital.

5 System implementation

This section describes the implementation of SensFall based on the design discussed in the previous section.

5.1 Hardware architecture

In this section, we present some of the important hardware aspects of the preindustrial prototype developed by the authors, shown in Figure 6. The sensor node is mounted on the ceiling, 3 m above the floor, as shown in Figure 7. The hemispherical Fresnel lens arrays, model 8005, were obtained from Haiwang Sensors Corporation [35], and the PIR detectors, D205B, from Senba Corporation [36]. The MCUs embedded in the sensor node and the sink are Chipcon CC2430 modules; the CC2430 combines an RF transceiver with an industry-standard enhanced 8051 MCU, 128 KB of flash memory, and 8 KB of RAM [37]. After the CC2430 is configured, the data generated by the sensor node are sent to the sink over the 2.4 GHz IEEE 802.15.4 (ZigBee) protocol, and the sink then transports the data to the PC host through an RS232 serial port for data processing.

Figure 6. SensFall hardware prototype. The sensor node carries seven PIR sensors, mounted at a height of 3 m from the floor and looking down to classify human activities. The 2.4 GHz RF transceiver module is embedded in the CC2430.

Figure 7. Typical experimental scenarios: (a) falling; (b) jogging.

5.2 Reference structure specification

We propose a proof-of-concept implementation scheme of the reference structure for our SensFall system. Optical flow analysis is view-dependent; however, to detect falls that may happen in any direction within the monitored region, our system should be view-independent. Thus, the design of the reference structure follows two principles: (1) the volumes of the sampling cells should be as equal as possible; (2) the sampling cells should be symmetric about the axis of the monitored region.

For each PIR sensor, a Fresnel lens array is located one focal length away from the detector. The Fresnel lens array is composed of a number of small Fresnel lenses, which collect the thermal energy within their FOV. Before visibility modulation, the FOV of each PIR sensor is a full cone. The opaque masks play the role of the reference structure. The first type of mask, Type I, is fan-shaped, as shown in Figure 8a. After applying such a mask, the FOV of the PIR sensor is no longer a full cone but a partial cone, called a fan cone; the fan cone's sweep angle is 120°, only 1/3 of the full cone. The second type of mask, Type II, is ring-shaped, as shown in Figure 8b. The FOV of the masked PIR sensor is still a full cone, but its cone angle β is less than that of the original cone. These two types of masks provide two degrees of freedom (DOF) of spatial partition: bearing segmentation by the Type I mask and radial segmentation by the Type II mask. By using these two kinds of masks and adjusting their parameters appropriately, design principles (1) and (2) are achieved.

Figure 8. Schematic diagram of the reference structure. (a) Type I mask for PIR1, PIR2, PIR3, and PIR4. (b) Type II mask for PIR5, PIR6, and PIR7. (c) The multiplexing of PIR sensors with masks forms the sampling cells. (d) The measurement space, the object space, and the thermal target. The human target can be approximately modeled as a vertical cylindrical thermal source. During a fall, the thermal target crosses the boundaries of the sampling cells, and the PIR sensors generate output correspondingly.

In our system implementation, seven PIR sensors with masks are multiplexed to segment the object space into sampling cells: four PIR sensors are masked with the Type I mask, and the remaining three with the Type II mask. The FOV specification of each PIR, the sector angle ϕ and cone angle β, is listed in Table 1. In this configuration, the object space is segmented into 17 sampling cells, as shown in Figure 8c. The sampling cells are symmetric about the cone axis, so falls happening in any direction can be detected; that is, the monitoring region of the sensor node is view-independent.

Table 1 Specification of PIR sensors

Referring to Equation (4), M = 7, L = 17, and the measurement matrix V is given by

$$\mathbf{V} = \begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 1
\end{bmatrix}^{T}$$
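
The segmentation property can be checked directly from $\mathbf{V}$: each of the 17 sampling cells must carry a distinct 7-bit visibility signature (a column of $\mathbf{V}$), otherwise two cells would be indistinguishable to the sensor node. The following numpy check, which we add purely for illustration, encodes the matrix above row by row in its transposed form.

```python
import numpy as np

# Rows of VT are the 7-bit visibility signatures of the 17 sampling cells.
VT = np.array([
    [0,0,0,1,0,1,1], [1,0,0,1,0,1,1], [1,0,0,0,0,1,1], [1,1,0,0,0,1,1],
    [0,1,0,0,0,1,1], [0,1,1,0,0,1,1], [0,0,1,0,0,1,1], [0,0,1,1,0,1,1],
    [0,0,0,0,1,1,1], [0,0,0,1,0,0,1], [1,0,0,1,0,0,1], [1,0,0,0,0,0,1],
    [1,1,0,0,0,0,1], [0,1,0,0,0,0,1], [0,1,1,0,0,0,1], [0,0,1,0,0,0,1],
    [0,0,1,1,0,0,1]])
V = VT.T                                  # 7 x 17 measurement matrix

signatures = {tuple(row) for row in VT}
assert len(signatures) == 17              # every cell is distinguishable
print(V.shape)                            # (7, 17): M = 7 sensors, L = 17 cells
```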

Our system can be regarded as a concrete implementation of a geometric reference structure [13]. The PIR sensors with Fresnel lens arrays are essentially sensitive to thermal change, especially at the boundaries of the sampling cells, which is preferable in terms of sensing efficiency: it reduces the number of sensors involved without degrading the sensing performance. When a fall occurs, the human body crosses the boundaries of the sampling cells, and the output of the PIR sensor array reflects the spatio-temporal characteristics of the action. Typical outputs of the PIR sensors for different human activities are shown in Figure 9. Because the dynamic process of a fall is quite different from that of other normal activities, the output of the PIR sensors provides a powerful cue for fall detection.

Figure 9. The output of the PIR sensors caused by different human activities: (a) falling; (b) sitting down.

5.3 Software architecture

The software framework of SensFall is shown in Figure 10. In our implementation, the sample rate of each PIR output is 25 Hz. The STE is calculated over a 2 s window with 1 s overlap. A threshold is set to determine the starting and ending points of each activity. The two-layer HMMs are trained on the training samples, and the model parameters λ = (A, B, Π) are saved for testing. The main difference between our SensFall system and its camera-based counterparts is that our system contains no background segmentation component [38]. The reason lies in a characteristic of the PIR sensors: they sense only the motion of the thermal target, not the chromatic background. As a result, the most obvious merit of SensFall is its low-dimensional input data stream for data processing.
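
The front end of this pipeline, activity segmentation by thresholding, can be sketched as follows. The 25 Hz rate, 2 s window, and 1 s overlap come from the text above; the threshold value and the minimum segment duration are our assumptions.

```python
import numpy as np

FS = 25                      # Hz, PIR sample rate
WIN, HOP = 2 * FS, 1 * FS    # 2 s window, 1 s overlap -> 50/25 samples

def segment_activities(stream, thresh=10.0, min_frames=2):
    """Return (start, end) sample indices of segments whose per-window
    energy stays above thresh for at least min_frames windows."""
    n_win = (len(stream) - WIN) // HOP + 1
    energy = [np.abs(stream[i*HOP:i*HOP+WIN]
                     - stream[i*HOP:i*HOP+WIN].mean()).sum()
              for i in range(n_win)]
    segments, start = [], None
    for i, active in enumerate([e > thresh for e in energy] + [False]):
        if active and start is None:
            start = i                             # activity begins
        elif not active and start is not None:
            if i - start >= min_frames:
                segments.append((start * HOP, (i - 1) * HOP + WIN))
            start = None                          # activity ends
    return segments
```

The STE sequence of each detected segment is then passed to the two-layer HMMs of Section 3.4 for classification.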

Figure 10. SensFall software architecture and data processing flow.

6 Experimental results

The experiments were carried out in an office environment. The region monitored by the sensor node was a cone with a 3 m radius. Eight volunteers participated in our experiments, three females and five males; their heights ranged from 1.64 m to 1.80 m and their weights from 50 kg to 70 kg. Each volunteer emulated five kinds of activities: fall, sitting down, standing up from a chair, walking, and jogging. Each activity was emulated ten times by each volunteer, at a self-selected speed and strategy, as shown in Figure 7. In total, we obtained 400 samples: 80 simulated falls and 320 normal activity samples.

The experiments were divided into two stages: the training stage and the testing stage. In the training stage, half of the samples were randomly selected to train the parameters $\lambda = (A, B, \Pi)$ for each HMM, where $A$ is $N \times N$, $B$ has $N \times M_G$ components, and $\Pi$ is $N \times 1$. The number of hidden states $N$ and the number of Gaussian mixtures $M_G$ have to be specified manually before the Baum-Welch (equivalently, EM) method is employed [33]. Different combinations of $N$ and $M_G$ affect the classification accuracy of the system. Figure 11 presents the average likelihood output of the first-layer HMMs with different parameters, and Figure 12 that of the second-layer HMMs. Based on their performance, we specify $N$ and $M_G$ for the two layers of HMMs as in Table 2, where λ11 and λ12 correspond to vertical and horizontal activities in the first-layer HMMs, and λ21 and λ22 correspond to the fall and other normal activities in the second-layer HMMs, respectively.
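
This parameter choice can be automated with a small grid search, sketched below. It reuses the hypothetical train_hmm helper from the example in Section 3.4, and the candidate value ranges are placeholders; the held-out average log-likelihood stands in for the "average likelihood output" plotted in Figures 11 and 12.

```python
import itertools

def select_hmm_params(train_seqs, val_seqs,
                      states=(2, 3, 4, 5), mixes=(1, 2, 3)):
    """Pick (N, M_G) maximizing the mean validation log-likelihood."""
    best, best_ll = None, float("-inf")
    for n, m in itertools.product(states, mixes):
        model = train_hmm(train_seqs, n_states=n, n_mix=m)
        ll = sum(model.score(s) for s in val_seqs) / len(val_seqs)
        if ll > best_ll:
            best, best_ll = (n, m), ll
    return best, best_ll
```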

Figure 11. Average likelihood output of the first-layer HMMs.

Figure 12. Average likelihood output of the second-layer HMMs.

Table 2 The specification of HMMs

After the two layers of HMMs were trained (four HMMs in total), we used the remaining samples for testing. We repeated this process 20 times for cross-validation; Table 3 shows the overall normal and abnormal event detection accuracy rates. The data processing ran as Matlab code on an Intel Pentium Dual-Core 2.60 GHz computer. For each testing sample (4-10 s), the average time spent on the first-layer HMMs was 4.1 ms (maximum 7 ms), and on the second-layer HMMs 4.3 ms (maximum 7.7 ms). This shows that our system fulfills the real-time processing requirement.

Table 3 The average experimental results

7 Discussion

The experimental results of related systems are listed in Table 3. The first study related to ours was conducted by Sixsmith and Johnson [26]. Their SIMBAD system used a low-cost array of infrared detectors to detect falls; specifically, a neural network classified falls using the vertical velocity information extracted from an integrated (16 × 16) pyroelectric sensor array. However, vertical velocity alone is not sufficient to discriminate a real fall from other similar activities. Another related study was conducted by Liu et al. [27]. The feature they extracted with PIR sensors is the synergistic motion pattern of the head, upper limbs, and lower limbs of the human target. This feature is efficient; however, to capture it, their system requires a side-view deployment, with the PIR sensors placed parallel to the separate parts of the human body, e.g., the head, upper limbs, and lower limbs. This means the FOV of the PIR sensors is easily blocked by furniture in a real environment. More importantly, because the PIR sensors are direction-sensitive, the side-view deployment can only efficiently detect falls that occur perpendicular to the FOV of the sensor node; falls happening along the axis of the sensor node do not show the same characteristics. In other words, their monitoring region is view-dependent.

The primary insight of our study is that segmenting the object space into distinct sampling cells and detecting their abnormal thermal variation is an efficient method of fall detection. In other words, the variation of the HME of each sub-region is a highly discriminative feature for fall detection, and the output of the PIR sensors reflects the spatio-temporal characteristics of different human activities. As a result, the most prominent advantage of our feature-specific sensing paradigm is its low input data dimension (7 × 1 at 25 Hz), which is considerably lower than its camera-based counterparts, e.g., Cyclops (128 × 128, 10 fps) [39] and CMUcam3 (352 × 288, 50 fps) [40]. Encouraging experimental results confirm the efficacy of the feature extraction, showing that a deliberate design of the data acquisition greatly reduces the complexity of the data processing.

Although the monitoring region of each sensor node is a cone with a 3 m radius, the node can be deployed in any position where falls are likely to happen, e.g., a corridor. The ceiling-mounted deployment makes the sensor node difficult to occlude with furniture. More importantly, our sensor node is view-independent: it can detect a fall that happens anywhere within its monitoring region, in any direction. It also works in totally dark environments, unlike its camera-based counterparts. The quantization of the PIR measurement is 8 bits, and the payload of the messages sent from the sensor node to the sink is 1400 bit/s (covering the measurements of all seven PIR sensors), so it can be carried by most low-rate wireless personal area network (LR-WPAN) standards; for example, the transfer rate of ZigBee is 250 kbit/s [37]. Thus, it is possible to construct scalable WSNs to achieve ubiquitous monitoring, which is the future goal of our research.
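
The payload figure follows directly from the sampling parameters stated above:

$$7 \text{ sensors} \times 25 \text{ samples/s} \times 8 \text{ bits/sample} = 1400 \text{ bit/s} \approx 0.56\% \text{ of ZigBee's } 250 \text{ kbit/s}$$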

8 Conclusions

In this article, we design and implement a fall detection system, SensFall, based on an efficient spatial-segmentation sensing model. It is advantageous not only in providing a low-cost, privacy-preserving, non-intrusive motion sensing method, but also as a practical guide for constructing a coverage-scalable, easy-to-deploy, energy-saving wireless network for an integrated healthcare information system. Our ultimate goal is to enhance the quality of life of the elderly, afford them a greater sense of comfort and reassurance, and facilitate independent living.

References

1. The Internet of Things. ITU Internet Reports 2005. [http://www.itu.int/internetofthings/]

2. Atzori L, Iera A, Morabito G: The internet of things: a survey. Comput Netw 2010, 54(15):2787-2805.

3. Fok C, Julien C, Roman G, Lu C: Challenges of satisfying multiple stakeholders: quality of service in the internet of things. In Proceedings of the 2nd Workshop on Software Engineering for Sensor Network Applications, SESENA'11. Waikiki, Honolulu, HI, USA, ACM; 2011:55-60.

4. Alemdar H, Ersoy C: Wireless sensor networks for healthcare: a survey. Comput Netw 2010, 54(15):2688-2710.

5. Chan B, Marshall L, Winters K, Faulkner K, Schwartz A, Orwoll E: Incident fall risk and physical activity and physical performance among older men. Am J Epidemiol 2007, 165(6):696-703.

6. Yu X: Approaches and principles of fall detection for elderly and patient. In Proceedings of the 10th IEEE International Conference on e-health Networking, Applications and Services, Healthcom'08. Biopolis, Singapore, IEEE; 2008:42-47.

7. Williams A, Ganesan D, Hanson A: Aging in place: fall detection and localization in a distributed smart camera network. In Proceedings of the 15th International Conference on Multimedia, MULTIMEDIA'07. Augsburg, Germany, ACM; 2007:892-901.

8. Rougier C, Meunier J, St-Arnaud A, Rousseau J: Fall detection from human shape and motion history using video surveillance. In Proceedings of the 21st International Conference on Advanced Information Networking and Applications Workshops, AINA'07, vol. 2. Niagara Falls, Canada, IEEE; 2007:875-880.

9. Gopinathan U, Brady D, Pitsianis N: Coded apertures for efficient pyroelectric motion tracking. Opt Express 2003, 11(18):2142-2152.

10. Fang J, Hao Q, Brady D, Shankar M, Guenther B, Pitsianis N, Hsu K: Path-dependent human identification using a pyroelectric infrared sensor and Fresnel lens arrays. Opt Express 2006, 14(2):609-624.

11. Liu J, Guo X, Liu M, Wang G: Motion tracking based on Boolean compressive infrared sampling. In Proceedings of the 16th International Conference on Parallel and Distributed Systems, ICPADS'10. Shanghai, China, IEEE; 2010:652-657.

12. Brady D, Pitsianis N, Sun X: Reference structure tomography. J Opt Soc Am A Opt Image Sci Vision 2004, 21(7):1140-1147.

13. Agarwal P, Brady D, Matoušek J: Segmenting object space by geometric reference structures. ACM Trans Sensor Netw (TOSN) 2006, 2(4):455-465.

14. Mast N, Owens T: A survey of performance enhancement of transmission control protocol (TCP) in wireless ad hoc networks. EURASIP J Wirel Commun Netw 2011, 2011:96.

15. De Poorter E, Moerman I, Demeester P: Enabling direct connectivity between heterogeneous objects in the internet of things through a network-service-oriented architecture. EURASIP J Wirel Commun Netw 2011, 2011:61.

16. Karantonis D, Narayanan M, Mathie M, Lovell N, Celler B: Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Trans Inf Technol Biomed 2006, 10:156-167.

17. Lai C, Huang Y, Chao H, Park J: Adaptive body posture analysis using collaborative multi-sensors for elderly falling detection. IEEE Intell Syst 2010, 25:20-30.

18. Lee Y, Kim J, Son M, Lee M: Implementation of accelerometer sensor module and fall detection monitoring system based on wireless sensor network. In Proceedings of the 29th IEEE International Conference on Engineering in Medicine and Biology Society, EMBS'07. Lyon, France, IEEE; 2007:2315-2318.

19. Bourke A, Lyons G: A threshold-based fall-detection algorithm using a bi-axial gyroscope sensor. Med Eng Phys 2008, 30:84-90.

20. Anderson D, Keller J, Skubic M, Chen X, He Z: Recognizing falls from silhouettes. In Proceedings of the 28th IEEE International Conference on Engineering in Medicine and Biology Society, EMBS'06. New York City, USA, IEEE; 2006:6388-6391.

21. Cucchiara R, Prati A, Vezzani R: A multi-camera vision system for fall detection and alarm generation. Expert Syst 2007, 24(5):334-345.

22. Shankar M, Burchett J, Hao Q, Guenther B, Brady D: Human-tracking systems using pyroelectric infrared detectors. Opt Eng 2006, 45(10):1-10.

23. Hao Q, Brady D, Guenther B, Burchett J, Shankar M, Feller S: Human tracking with wireless distributed pyroelectric sensors. IEEE Sensors J 2006, 6(6):1683-1696.

24. Hao Q, Hu F, Xiao Y: Multiple human tracking and identification with wireless distributed pyroelectric sensor systems. IEEE Syst J 2009, 3(4):428-439.

25. Burchett J, Shankar M, Hamza A, Guenther B, Pitsianis N, Brady D: Lightweight biometric detection system for human classification using pyroelectric infrared detectors. Appl Opt 2006, 45(13):3031-3037.

26. Sixsmith A, Johnson N: A smart sensor to detect the falls of the elderly. IEEE Pervasive Comput 2004, 3(2):42-47.

27. Liu T, Guo X, Wang G: Elderly-falling detection using distributed direction-sensitive pyroelectric infrared sensor arrays. Multidimensional Syst Signal Process 2011, 2011:1-17.

28. Horn B, Schunck B: Determining optical flow. Artificial Intell 1981, 17(1-3):185-203.

29. Wu G: Distinguishing fall activities from normal activities by velocity characteristics. J Biomech 2000, 33(11):1497-1500.

30. Peng M, Xiao Y: A survey of reference structure for sensor systems. IEEE Commun Surv Tutor 2011, PP(99):1-14.

31. Lu L, Zhang H, Jiang H: Content analysis for audio classification and segmentation. IEEE Trans Speech Audio Process 2002, 10(7):504-516.

32. Ribeiro P, Santos-Victor J: Human activity recognition from video: modeling, feature selection and classification architecture. In Proceedings of the International Workshop on Human Activity Recognition and Modeling, HAREM'05. Oxford, UK; 2005.

33. Rabiner L: A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE 1989, 77(2):257-286.

34. Yamato J, Ohya J, Ishii K: Recognizing human action in time-sequential images using hidden Markov model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR'92. Champaign, IL, USA, IEEE; 1992:379-385.

35. Haiwang Sensors & Controls Co Ltd: Fresnel lens arrays. [http://www.szhaiwang.cn]

36. Senba Optical & Electronic Co Ltd: PIR sensors. [http://www.sbcds.com.cn]

37. Chipcon: CC2430 RF transceiver data sheet. [http://www.chipcon.com]

38. Hu W, Tan T, Wang L, Maybank S: A survey on visual surveillance of object motion and behaviors. IEEE Trans Syst Man Cybernet Part C: Appl Rev 2004, 34(3):334-352.

39. Rahimi M, Baer R, Iroezi OI, Garcia JC, Warrior J, Estrin D, Srivastava M: Cyclops: in situ image sensing and interpretation in wireless sensor networks. In Proceedings of the 3rd International Conference on Embedded Networked Sensor Systems, SenSys'05. San Diego, California, USA, ACM; 2005:192-204.

40. Rowe A, Goode A, Goel D, Nourbakhsh I: CMUcam3: an open programmable embedded vision sensor. Carnegie Mellon Robotics Institute Technical Report, RI-TR-07-13, 2007.


Acknowledgements

This study was financially supported by the National Natural Science Foundation of China (Grant No. 61074167). The authors would like to thank the anonymous reviewers for their constructive comments and suggestions. They also wish to thank all the staff of the Information Processing & Human-Robot Systems lab at Sun Yat-sen University for their help in conducting the experiments.

Author information


Corresponding author

Correspondence to Guoli Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Luo, X., Liu, T., Liu, J. et al. Design and implementation of a distributed fall detection system based on wireless sensor networks. J Wireless Com Network 2012, 118 (2012). https://doi.org/10.1186/1687-1499-2012-118

