Article

A Novel Approach Based on Time Cluster for Activity Recognition of Daily Living in Smart Homes

1 School of Information Science & Technology, Dalian Maritime University, Dalian 116026, China
2 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
3 Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong 643000, China
4 School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, China
* Author to whom correspondence should be addressed.
Symmetry 2017, 9(10), 212; https://doi.org/10.3390/sym9100212
Submission received: 23 May 2017 / Revised: 13 September 2017 / Accepted: 14 September 2017 / Published: 1 October 2017
(This article belongs to the Special Issue Applications of Internet of Things)

Abstract

With the trend of an increasingly ageing population, many elderly people encounter problems in their daily lives. To enable them to live more carefree lives, smart homes are designed to assist elderly people by recognizing their daily activities. Although various models and algorithms that use temporal and spatial features for activity recognition have been proposed, the rigid representation of these features damages recognition accuracy. In this paper, a two-stage approach is proposed to recognize the activities of a single resident. Firstly, the approximate duration, start time and end time are extracted from the activity records as temporal features. Secondly, the set of activity records is clustered according to these temporal features. Classifiers are then used to recognize the daily activities in each cluster according to the spatial features. Finally, two experiments on two public datasets are performed to compare the proposed approach with a one-dimensional model. The results demonstrate that the proposed approach favorably outperforms the one-dimensional model, achieving average accuracies of 80% and 89% on the two datasets, respectively.

1. Introduction

With the ageing of the population, more elderly people live alone and cannot receive care from their children or spouses. It is well known that elderly people are prone to accidents in their daily lives. In traditional homes, it is difficult to recognize in a timely manner that an accident has occurred. To help elderly people who live alone lead healthy lives, smart homes are being developed to detect their daily activities. Activity recognition is thus the key function in smart home development.
Over the last decade, there has been considerable research on activity recognition in smart homes. This research can be divided into five categories according to the monitoring technology [1]. The first monitoring technology is based on video cameras fixed in rooms [2,3]: activities are recorded by the cameras and recognized by analyzing the video records. Although camera-based approaches have been criticized for exposing privacy, video-processing techniques were recently introduced to anonymize the footage and record only situations in which the user is in potential danger. At present, the main problem of camera-based approaches is the infrastructure required for pervasive, whole-home coverage. The second monitoring technology is based on sound recognition [4], which uses microphones to detect different classes of daily activity, e.g., the sound of doing the dishes or of a falling object or person. Such recognition is, however, easily disturbed by noise. The third monitoring technology is based on body-worn sensors [5,6]: residents are required to wear monitoring devices, and their activities are recorded by the devices in real time. Although this method protects the resident's privacy, wearing the devices is an extra burden. The fourth monitoring technology is based on pressure sensors [7], which detect the presence of residents on chairs or in bed as well as sit-to-stand and stand-to-sit transfers; this technology can only detect a few simple activities. The fifth monitoring technology is based on ambient sensors [8,9,10,11,12,13,14] placed in various rooms; generally, the ambient sensors include light sensors, temperature sensors, magnetic door sensors, etc. When the resident moves or does something in a room, a sequence of ambient sensors is activated, and the current activity is inferred from the resulting sensor events. For example, "washing" activates the ambient sensors fixed to the taps. In this manner, the resident's privacy is protected and no extra devices need to be worn. The drawback of ambient sensor-based solutions is the expense required to cover the entire residence. Furthermore, even if sensors are placed in every room, blind spots can remain, and the recognition model cannot address cases in which the user modifies his or her behavior patterns at certain instants. To obtain better performance, ambient sensors and body-worn sensors have also been used jointly. For example, Atallah et al. presented a Bayesian classification framework for activity recognition [11]: to alleviate the user's load, a lightweight e-AR sensor generates prior information, while ambient sensors produce more detailed activity profiling.
Substantial effort has been expended on activity recognition [15,16]. Most researchers have attempted to introduce machine learning into their approaches. Some have used time series models to recognize activities, e.g., the hidden Markov model (HMM) or conditional random fields (CRF). For example, Kasteren et al. used the hidden Markov model and the hierarchical hidden Markov model to recognize residents' activities [17,18,19]. Tong et al. used latent-dynamic conditional random fields and the hidden state conditional random field to recognize abnormal activities and the activities of single and multiple residents [20,21,22]. What these approaches have in common is their emphasis on the order of activities and the order of sensor events. However, time series models usually have poor robustness [23]: the daily order of a resident's activities is not always identical; for a single activity, the order of sensor events often changes; and the order of one resident's activities differs from that of another. To improve robustness, researchers have exploited static classifiers for activity recognition, e.g., naive Bayesian (NB), k-nearest neighbor (kNN) or the support vector machine (SVM) [24]. For example, Cook et al. used NB to recognize daily activities [25]. Yin et al. used the one-class SVM to recognize abnormal daily activities [26,27,28].
Temporal features have been interlaced with spatial features in earlier publications [29,30]. Krishnan et al. [31] incorporated time decay, mutual information-based weighting of sensor events and contextual information to perform activity recognition on streaming sensor data. Lotfi et al. [32] used the echo state network to predict abnormal behaviors of elderly dementia sufferers; in their approach, the start time and duration were considered. In addition, temporal relations between sensor events have been exploited as features to infer activities [33,34,35,36]: generally, the temporal relations are built first, and the logical rules for activity recognition are then generated. In this paper, a two-stage approach is proposed for activity recognition. Firstly, the temporal features of an activity record, i.e., the approximate start time, approximate end time and approximate duration, are used to divide a dataset of activity records into two clusters. Then, the spatial features of the sensor events are used to recognize the activities in each cluster. Like the previous approaches, the proposed approach thus makes use of temporal features.
In this paper, a data-driven approach is presented. Compared to the one-dimensional recognition model, the two-stage approach is new. In the first stage, the temporal features are used to cluster the training set: activity records in different clusters are temporally sensitive, whereas activity records within the same cluster are temporally insensitive, so that within a cluster the temporal features amount to noise. In the second stage, the activity records within the same cluster are spatially sensitive and are recognized using spatial features only. The contribution of this paper is as follows: compared to the one-dimensional recognition model, the recognition performed in the second stage is not disturbed by this noise and achieves higher performance, as shown by the experiments.
The remainder of this paper is organized as follows: firstly, we introduce the related work. Then, the approach to activity recognition is described. Next, we validate the proposed approach and discuss the results. Finally, the findings are summarized and future work is outlined.

2. Related Work

Activity recognition approaches are commonly divided into data-driven approaches and knowledge-driven approaches. The knowledge-driven approaches emphasize recognition rules. In most situations, the recognition rules are generated by following a heuristic strategy and are represented in some logical language, e.g., temporal logic or description logic. Rugnone et al. [37] used temporal logic to represent recognition rules for recognizing abnormal activity. Latfi et al. [38] and Chen et al. [39] represented recognition rules as ontologies, and Chen et al. [40] proposed an improved ontology-based approach. The core of this approach is an iterative process that begins from so-called "seed" activity models, which are created by ontological engineering, deployed, and subsequently evolved through incremental activity discovery and model updates. The knowledge-driven approaches are robust because the recognition rules can be used in different environments. However, raw data commonly include substantial noise and uncertain information, which are difficult to identify and consequently harm the accuracy of activity recognition.
The data-driven approaches are classified as supervised, semi-supervised and unsupervised. Early research focused on supervised approaches, where the activity records are labelled in advance and the labelled records are then learned by a classifier, e.g., NB [25], DT [41], kNN [42], SVM [43,44], HMM [17,18,19] or CRF [20,21,22]. Although supervised approaches achieve high accuracy, labelling the activity records is expensive and time consuming. To avoid data-labelling efforts, unsupervised approaches were proposed [45,46]. Maekawa et al. [47] used information about the end user's physical characteristics to recognize activities by their similarity to the activities of other users. Although the labelling effort is avoided, unsupervised approaches are typically criticized for poor accuracy. To reach a trade-off among the accuracy, overhead and labelling efforts of the supervised and unsupervised methods, a growing number of recent studies have focused on semi-supervised approaches, in which the training data are separated into labelled and unlabelled parts. Four tasks must be fulfilled in order to generate the recognition model. Firstly, features are extracted from the sensor events of both the labelled and the unlabelled data. Secondly, for the labelled data, the sensor events are segmented at the start and end boundaries of each complete activity record, and activity features are generated from the resulting activity records. Thirdly, for the unlabelled data, the sensor events are segmented by a predefined strategy, e.g., a sliding window; activity features are generated from the segmented sensor events, and activity labels are assigned to the segments by computing the feature similarity between the labelled and unlabelled activity records. Finally, the activity features from both the labelled and unlabelled activity records are harnessed to generate the recognition model. Some documented semi-supervised studies are as follows. Nef et al. [48] combined an unsupervised part and a supervised part to recognize eight activities of daily living: clustering algorithms were first employed to group identical activities according to the spatial locations of the firing sensors, and three classifiers (NB, SVM and RF) were then used for activity recognition. The experiment showed that the random forest classifier was superior to the naive Bayesian classifier and the support vector machine in terms of specificity, sensitivity, precision and F-measure. Bourobou and Yoo [23] proposed a similar pattern for activity recognition: the K-pattern clustering algorithm was first used to find the frequent patterns of activities [49], which were viewed as activity features, and an artificial neural network (ANN) was then used to recognize the activities according to these features. Wen and Zhong [50,51] divided the training sensor events into labelled and unlabelled parts: the labelled sensor events were used to find initial activity patterns, and the unlabelled events were used to enrich the initial activity patterns whenever the similarity between an unlabelled event and an initial pattern satisfied a minimum threshold; otherwise, the unlabelled events were clustered to find new activity patterns.

3. Terminologies

To represent the proposed approach, some terminologies are defined in advance. For clarity, a segment of activity records is shown in Table 1.
Definition 1.
For a sensor s, sr = (d, h, m, sn, sv, al) is a sensor event iff s is run, where d is the date on which s was run, h is the hour, m is the minute, sn is the name of s, sv is the value of s, and al is an explanatory activity label.
Throughout this manuscript, sr.d, sr.h, sr.m, sr.sn, sr.sv and sr.al are used to represent the tuples d, h, m, sn, sv and al of a sensor event sr, respectively. Ω is used to represent the set of sensor events.
For example, “15 June 2011 00:25:01.892474 LS013 7 Sleep” denotes that sensor LS013 is activated at 00:25:01.892474 on 15 June 2011 with the measured value of seven. At that time, the resident was sleeping.
Definition 2.
Given two sensor events sr1 and sr2, sr1 is said to be the precursor of sr2 iff sr1.d < sr2.d, or (sr1.d == sr2.d AND sr1.h < sr2.h), or (sr1.d == sr2.d AND sr1.h == sr2.h AND sr1.m < sr2.m) holds. sr2 is said to be the successor of sr1 iff sr1 is the precursor of sr2.
Throughout this manuscript, sr1 < sr2 indicates that sr1 is the precursor of sr2.
For example, {15 June 2011 00:25:01.892474 LS013 7 Sleep} is the precursor of {15 June 2011 01:05:01.622637 BATV013 9460 Sleep}. {15 June 2011 01:05:01.622637 BATV013 9460 Sleep} is the successor of {15 June 2011 00:25:01.892474 LS013 7 Sleep}.
Definition 3.
Given two sensor events sr1 and sr2 for which sr1 < sr2 holds, sr1 is said to be the direct precursor of sr2 iff ¬∃sr ∈ Ω such that sr1 < sr AND sr < sr2 holds. sr2 is said to be the direct successor of sr1 iff sr1 is the direct precursor of sr2.
Throughout this manuscript, sr1 → sr2 indicates that sr1 is the direct precursor of sr2.
For example, {15 June 2011 00:25:01.892474 LS013 7 Sleep} is the direct precursor of {15 June 2011 01:05:01.622637 BATV013 9460 Sleep}. {15 June 2011 01:05:01.622637 BATV013 9460 Sleep} is the direct successor of {15 June 2011 00:25:01.892474 LS013 7 Sleep}.
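The precursor relations of Definitions 2 and 3 amount to a lexicographic comparison on (date, hour, minute). Below is a minimal Python sketch of the two relations, assuming each sensor event is stored as a (d, h, m, sn, sv, al) tuple whose date value supports comparison; all names are illustrative, not from the paper.

```python
def is_precursor(sr1, sr2):
    """Definition 2: sr1 precedes sr2 in (date, hour, minute) order."""
    return (sr1[0], sr1[1], sr1[2]) < (sr2[0], sr2[1], sr2[2])

def is_direct_precursor(sr1, sr2, omega):
    """Definition 3: sr1 precedes sr2 and no event of omega lies strictly between."""
    return (is_precursor(sr1, sr2) and
            not any(is_precursor(sr1, sr) and is_precursor(sr, sr2)
                    for sr in omega))
```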
Definition 4.
Given an activity a and n + 2 sensor events sr0, sr1, sr2, …, srn, srn+1, SRs = (sr1, sr2, …, srn) is said to be a sensor sequence of a iff ∀1 ≤ i ≤ n, sri.al == a; sr0.al ≠ a; srn+1.al ≠ a; and ∀1 ≤ i ≤ n − 1, sri → sri+1 holds.
Definition 5.
For an activity a and a sensor sequence sr1, sr2, …, srn of a, ar = (sr1.h, srn.h, u, SNT, a) is an activity record, where u is the approximate duration of a and SNT is the spatial feature, defined as a set {(sn, T)} in which sn ∈ {sri.sn | 1 ≤ i ≤ n} is the name of some sensor and T = |{sri | 1 ≤ i ≤ n ∧ sri.sn = sn}| is the number of times the sensor named sn fires. u and SNT are computed by Algorithm 1.
Algorithm 1. GenerateActivityRecord
  Input: sr1, sr2, …, srn, a sequence of sensor events
  Output: u, SNT
1.  u ← (srn.d − sr1.d) * 24 * 60 + (srn.h − sr1.h) * 60 + (srn.m − sr1.m)
2.  SNT ← ∅
3.  for each sr in {sr1, sr2, …, srn}
4.    if (sr.sn, T) ∈ SNT then // T is the number of times that sensor sr.sn has run so far
5.      delete (sr.sn, T) from SNT
6.      SNT ← SNT ∪ {(sr.sn, T + 1)}
7.    else
8.      SNT ← SNT ∪ {(sr.sn, 1)}
9.    end if
10. end for
11. return u, SNT
For the example in Table 1, “(0, 3, 212, {(M021, 2), (BATV012, 1), (BATV013, 1), (LS013, 2)}, Sleep)” is an activity record of “Sleep”. The duration u is “212” min because 212 min elapse between the approximate start and approximate end times of “Sleep”. The ambient sensors “M021” and “LS013” are each run twice; the ambient sensors “BATV012” and “BATV013” are each run once.
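For illustration, a compact Python equivalent of Algorithm 1 follows; this is a sketch rather than the authors' code, assuming the date field d is a day ordinal so that day differences can be taken directly, and using a Counter in place of the pseudocode's explicit delete-and-reinsert bookkeeping.

```python
from collections import Counter

def generate_activity_record(events):
    """Algorithm 1 sketch: derive the approximate duration u (in minutes) and
    the spatial feature SNT from a sensor sequence of (d, h, m, sn, sv, al)
    tuples, where d is assumed to be a day ordinal."""
    first, last = events[0], events[-1]
    u = ((last[0] - first[0]) * 24 * 60
         + (last[1] - first[1]) * 60
         + (last[2] - first[2]))
    snt = Counter(sr[3] for sr in events)  # sensor name -> firing count
    return u, dict(snt)
```

For the “Sleep” sequence of Table 1, this returns u = 212 and {“M021”: 2, “BATV012”: 1, “LS013”: 2, “BATV013”: 1}, matching the activity record above.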

4. Methodology

The entire process comprises five tasks: Proposing, Clustering, GeneratingModel, Aligning and AssigningActivityLabel, as shown in Figure 1. The first task, Proposing, generates a set of activity records from the set of sensor events, as shown in Algorithm 2. The activity record set is then divided into a training dataset and a test dataset in some proportion; in the experiment, the proportion is set to 9:1. The second task, Clustering, clusters the training data and test data with K-means according to the tuples sr1.h, srn.h and u of each activity record. The third task, GeneratingModel, generates a recognition model for each training cluster. Note that both training data and test data participate in the Clustering task only through the temporal tuples sr1.h, srn.h and u; the tuples al and SNT are not involved in clustering. The training data are input into the classifier to generate a recognition model from the tuples SNT and al, where al is the class label of the activity record. In contrast, the activity records in the test set are assigned class labels by the recognition model using the tuple SNT alone. The fourth task, Aligning, finds an optimal alignment between the recognition models and the test clusters, as shown in Algorithm 3: for each potential alignment, an outlier detection algorithm (LOF) decides whether each test activity record is an outlier against the corresponding training cluster, and the alignment with the minimum number of outliers is optimal. The last task, AssigningActivityLabel, assigns an activity label to each activity record of the test set, as shown in Algorithm 4.
Note that both real-time and off-line activity recognition are discussed in the smart home context [8]. The approach proposed in this paper focuses on recognition over pre-segmented sensor data rather than real-time sensor data.
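The following Python sketch condenses the pipeline using scikit-learn, under assumptions not fixed by the paper: k = 2 clusters, a random forest as the per-cluster classifier, and test records assigned to the nearest training cluster rather than being clustered separately and aligned via Algorithm 3. All function and variable names are illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def two_stage_fit_predict(Xt_tr, Xs_tr, y_tr, Xt_te, Xs_te, k=2):
    """Xt_* are [sr1.h, srn.h, u] temporal vectors, Xs_* are SNT-derived
    spatial vectors, y_tr the activity labels; all NumPy arrays."""
    # Stage 1 (Clustering): cluster on temporal features only.
    km = KMeans(n_clusters=k, n_init=10).fit(Xt_tr)
    tr_c, te_c = km.labels_, km.predict(Xt_te)
    # Stage 2 (GeneratingModel): one classifier per cluster, spatial features only.
    models = {c: RandomForestClassifier(n_estimators=100)
                 .fit(Xs_tr[tr_c == c], y_tr[tr_c == c])
              for c in range(k)}
    # AssigningActivityLabel: each test record is labelled by its cluster's model.
    return [models[c].predict(x.reshape(1, -1))[0]
            for c, x in zip(te_c, Xs_te)]
```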
Algorithm 2. GenerateActivityRecordSet
  Input: sr1, sr2, …, srn, a sequence of sensor events
  Output: AR, the set of activity records
1.  i ← 1
2.  k ← 1
3.  AR ← ∅
4.  while (i <= n − 1)
5.    if sri.al != sri+1.al then
6.      ar ← GenerateActivityRecord(srk, srk+1, …, sri)
7.      AR ← AR ∪ {ar}
8.      k ← i + 1
9.    end if
10.   i ← i + 1
11. end while
12. ar ← GenerateActivityRecord(srk, srk+1, …, srn)
13. AR ← AR ∪ {ar}
14. return AR
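Since the input stream is already annotated, Algorithm 2 amounts to splitting the stream at every change of activity label. A Python sketch follows, reusing generate_activity_record from the earlier sketch and assuming al is tuple field index 5; names are illustrative.

```python
from itertools import groupby

def generate_activity_record_set(events):
    """Algorithm 2 sketch: one activity record per maximal run of consecutive
    events sharing the same activity label."""
    return [generate_activity_record(list(run))
            for _, run in groupby(events, key=lambda sr: sr[5])]
```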
Algorithm 3. AlignTrainSet&TestSet
  Input: M, set of recognition models
         PTes, test set of activity records
  Output: MP, a subset of M × PTes
1.  Let M be {m1, m2, …, mn}
2.  Let PTes be {Tes1, Tes2, …, Tesn}
3.  sm ← +∞
4.  for each one-to-one pairing (mx1, Tesx1), (mx2, Tesx2), …, (mxn, Tesxn) in M × PTes
5.    oc ← 0 // oc counts the outliers of Tesxi against the training cluster of mxi
6.    i ← 1
7.    while (i <= n)
8.      for each ar in Tesxi
9.        if isOutlier(mxi, ar) then // isOutlier(mxi, ar) decides whether ar is an outlier
10.         oc ← oc + 1
11.       end if
12.     end for
13.     i ← i + 1
14.   end while
15.   if oc < sm then
16.     sm ← oc
17.     MP ← {(mx1, Tesx1), (mx2, Tesx2), …, (mxn, Tesxn)}
18.   end if
19. end for
20. return MP
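A sketch of the Aligning task is given below, assuming scikit-learn's LocalOutlierFactor as the LOF implementation (the paper does not name a library). train_clusters[i] holds the spatial feature matrix behind model mi, and test_clusters[j] the feature matrix of test cluster Tesj; the exhaustive search over all n! pairings is cheap here because n = 2.

```python
from itertools import permutations
from sklearn.neighbors import LocalOutlierFactor

def align(train_clusters, test_clusters):
    best, fewest = None, float("inf")
    for perm in permutations(range(len(test_clusters))):
        oc = 0  # outlier count for this candidate alignment
        for i, j in enumerate(perm):
            # novelty=True lets the fitted LOF score unseen records
            lof = LocalOutlierFactor(novelty=True).fit(train_clusters[i])
            oc += int((lof.predict(test_clusters[j]) == -1).sum())  # -1 marks outliers
        if oc < fewest:
            best, fewest = perm, oc
    return best  # best[i] is the test cluster aligned with model i
```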
Algorithm 4. RecognizeActivity
  Input: {m1, m2, …, mn}, set of recognition models
         {Tes1, Tes2, …, Tesn}, test set
         C, a classifier among NB, kNN, C4.5 and RF
  Output: {AL1, AL2, …, ALn}, sets of activity labels
1.  i ← 1
2.  while (i <= n)
3.    Let Tesi be {ari1, ari2, …, arim} // arij is an activity record
4.    ALi ← ∅
5.    j ← 1
6.    while (j <= m)
7.      alij ← assignLabel(C, mi, arij) // arij is fed to model mi of classifier C, which outputs an activity label
8.      ALi ← ALi ∪ {alij}
9.      j ← j + 1
10.   end while
11.   i ← i + 1
12. end while
13. return {AL1, AL2, …, ALn}

5. Results and Evaluation

5.1. Datasets

The two datasets were selected from the “single-resident apartment data” provided by Washington State University, USA [52,53]. Datasets “HH102” and “HH104” were used to validate the recognition of daily activities by comparing the two-stage approach (“TS”) with the one-dimensional (“SS”) model.
The details of dataset “HH102” are displayed in Table 2. The data were obtained from a healthy elderly resident over a measurement period of 64 days. A total of 97 sensors were installed in the apartment. The sensors fell into the following categories.
(1) Identifiers starting with “BA” indicate sensor battery levels: BATP013, BATP019, BATV001–BATV023 and BATV102–BATV105.
(2) Identifiers starting with “D” indicate magnetic door sensors: D001, D002, D005 and D006.
(3) Identifiers starting with “L” and “LL” indicate light switches: L001–L005, LL001 and LL005.
(4) Identifiers starting with “LS” indicate light sensors: LS001–LS023.
(5) Identifiers starting with “M” indicate infrared motion sensors: M001–M022.
(6) Identifiers starting with “MA” indicate wide-area infrared motion sensors: MA003, MA009, MA010, MA013, MA014, MA020 and MA023.
(7) Identifiers starting with “T” indicate temperature sensors: T101–T105.
Thirty activities were annotated in the dataset. The raw dataset included 413,142 sensor events, and the activity dataset included 2087 activity records. In our experiment, 12 activities were selected to test the proposed approach: “Sleep” (“S”), “Bathe” (“B”), “Dress” (“D”), “Eat_Breakfast” (“E_B”), “Eat_Dinner” (“E_D”), “Groom” (“G”), “Take_Medicine” (“T_M”), “Toilet” (“T”), “Wash_Breakfast_Dishes” (“W_B_D”), “Wash_Dinner_Dishes” (“W_D_D”), “Watch_TV” (“W_T”) and “Work_At_Table” (“W_A_T”). In total, 951 activity records were used.
The details of dataset “HH104” are displayed in Table 3. The measurement period was 61 days, and a total of 136 sensors were installed in the apartment. The sensors fell into the following categories.
(1) Identifiers starting with “BA” indicate sensor battery levels: BATP001–BATP006, BATP101–BATP106, BATV001–BATV026 and BATV101–BATV106.
(2) Identifiers starting with “D” indicate magnetic door sensors: D001–D006.
(3) Identifiers starting with “L” and “LL” indicate light switches: L001–L006.
(4) Identifiers starting with “LS” indicate light sensors: LS001–LS026.
(5) Identifiers starting with “M” indicate infrared motion sensors: M001–M013, M016 and M020–M026.
(6) Identifiers starting with “MA” indicate wide-area infrared motion sensors: MA014, MA015, MA017–MA019 and MA022.
(7) Identifiers starting with “T” indicate temperature sensors: T101–T107.
Twenty-eight activities were annotated in the dataset, which included 347,102 sensor events. The activity dataset included 3139 activity records. In our experiment, 12 activities were selected to test the proposed approach: “Sleep_Out_Of_Bed” (“S_O_O_B”), “Evening_Meds” (“E_M”), “Dress” (“D”), “Cook_Breakfast” (“C_B”), “Cook_Dinner” (“C_D”), “Phone” (“P”), “Take_Medicine” (“T_M”), “Toilet” (“T”), “Wash_Breakfast_Dishes” (“W_B_D”), “Wash_Dinner_Dishes” (“W_D_D”), “Morning_Meds” (“M_M”) and “Work_On_Computer” (“W_O_C”). In total, 1121 activity records were used.

5.2. Classifiers and Evaluation Metrics

In our experiment, the recognition model is based on one of five classifiers: naive Bayesian (NB), k-nearest neighbor (kNN), C4.5, random forest (RF) and the hidden Markov model (HMM) [17]. The details of the classifiers are shown in Table 4.
$$a_{ij} = \frac{c_{ij}}{\sum_{k=1}^{N} c_{ik}}, \quad 1 \le i, j \le N \tag{1}$$

$$b_{jk} = \frac{E_j(V_k)}{\sum_{t=1}^{M} E_j(V_t)}, \quad 1 \le j \le N,\ 1 \le k \le M \tag{2}$$

$$\pi_i = \frac{Init(i)}{\sum_{j=1}^{N} Init(j)}, \quad 1 \le i \le N \tag{3}$$
The evaluation metrics are accuracy, precision and F-measure, defined in Formulas (4)–(6), respectively. In Formulas (4)–(6), Q is the number of activity labels; for activity i, TPi is the number of true positives, FPi the number of false positives, FNi the number of false negatives, and TNi the number of true negatives. Each validation was performed as 10-fold cross-validation.
$$\text{Accuracy} = \frac{\sum_{i=1}^{Q} \frac{TP_i}{TP_i + FP_i + FN_i + TN_i}}{Q} \tag{4}$$

$$\text{Precision} = \frac{\sum_{i=1}^{Q} \frac{TP_i}{TP_i + FP_i}}{Q} \tag{5}$$

$$\text{F-Measure} = \frac{2 \cdot \text{Precision} \cdot \text{Accuracy}}{\text{Precision} + \text{Accuracy}} \tag{6}$$
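A small Python sketch of Formulas (4)–(6) follows, assuming the per-activity counts have already been accumulated from the 10-fold cross-validation; the function name is illustrative.

```python
import numpy as np

def macro_metrics(tp, fp, fn, tn):
    """Formulas (4)-(6): per-activity ratios averaged over the Q labels.
    tp, fp, fn and tn are length-Q arrays of per-activity counts."""
    tp, fp, fn, tn = map(np.asarray, (tp, fp, fn, tn))
    accuracy = np.mean(tp / (tp + fp + fn + tn))
    precision = np.mean(tp / (tp + fp))
    f_measure = 2 * precision * accuracy / (precision + accuracy)
    return accuracy, precision, f_measure
```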

5.3. Results from HH102

The accuracy, precision and F-measure of the activity recognition of classifiers NB, kNN, C4.5 and RF are shown in Figure 2, Figure 3 and Figure 4, respectively.
“SS” had an average accuracy of 0.6 (NB), 0.66 (kNN), 0.7 (C4.5) and 0.75 (RF); an average precision of 0.53 (NB), 0.53 (kNN), 0.57 (C4.5) and 0.58 (RF); and an average F-measure of 0.51 (NB), 0.53 (kNN), 0.56 (C4.5) and 0.57 (RF). “TS” had an average accuracy of 0.74 (NB), 0.73 (kNN), 0.74 (C4.5) and 0.8 (RF); an average precision of 0.67 (NB), 0.66 (kNN), 0.68 (C4.5) and 0.78 (RF); and an average F-measure of 0.66 (NB), 0.65 (kNN), 0.67 (C4.5) and 0.7 (RF). “TS” thus had higher average accuracies, precisions and F-measures than “SS”. For individual activities, there were 9 (NB), 7 (kNN), 9 (C4.5) and 8 (RF) activities for which “TS” had higher accuracies than “SS”, and 9 (NB), 7 (kNN), 8 (C4.5) and 11 (RF) activities for which “TS” had higher precisions than “SS”.
In addition, we compared the proposed approach with the HMM-based approach. The HMM is defined as a five-tuple (S, V, A, B, π). Activities map to the states of the HMM, and two consecutive activity records map to a transition from one state to another. For an activity record, each pair (s, f) maps to an observation, where s is a sensor name and f is the frequency with which s occurs in the activity record. As shown in Figure 5, the HMM had an accuracy, precision and F-measure of 0.68, 0.64 and 0.66, respectively, whereas “TS” had an average accuracy, precision and F-measure of 0.78, 0.76 and 0.77 over NB, kNN, C4.5 and RF. “TS” therefore had a higher accuracy, precision and F-measure than the HMM. For individual activities, there were nine activities for which “TS” had higher accuracies than the HMM and seven activities for which “TS” had higher precisions.
In addition, the Pearson correlation coefficient was used to examine the correlation between the average durations of activities and the average accuracies of activity recognition. As shown in Figure 6, the correlations are 0.45 (“SS”) and 0.36 (“TS”) for dataset HH102. For activities longer than 25 min, accuracy is strongly correlated with activity duration; for activities shorter than 25 min, duration is only weakly related to recognition accuracy.
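Such a correlation can be computed, for example, with SciPy's pearsonr; the values below are purely illustrative, not taken from the datasets.

```python
from scipy.stats import pearsonr

durations = [212.0, 15.0, 8.0, 35.0]   # illustrative per-activity average durations (min)
accuracies = [0.92, 0.55, 0.48, 0.81]  # illustrative per-activity average accuracies
r, p_value = pearsonr(durations, accuracies)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```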

5.4. Results from HH104

The accuracies, precisions and F-measures of the activity recognition of classifiers NB, kNN, C4.5 and RF are shown in Figure 7, Figure 8 and Figure 9, respectively.
“SS” had an average accuracy of 0.7 (NB), 0.78 (kNN), 0.79 (C4.5) and 0.83 (RF); an average precision of 0.5 (NB), 0.56 (kNN), 0.67 (C4.5) and 0.77 (RF); and an average F-measure of 0.5 (NB), 0.6 (kNN), 0.57 (C4.5) and 0.63 (RF). “TS” had an average accuracy of 0.79 (NB), 0.81 (kNN), 0.87 (C4.5) and 0.89 (RF); an average precision of 0.6 (NB), 0.62 (kNN), 0.78 (C4.5) and 0.75 (RF); and an average F-measure of 0.55 (NB), 0.6 (kNN), 0.7 (C4.5) and 0.72 (RF). “TS” had higher average accuracies, precisions and F-measures than “SS”. For individual activities, there were 9 (NB), 6 (kNN), 9 (C4.5) and 8 (RF) activities for which “TS” had higher accuracies than “SS”. There were 8 (NB), 8 (kNN), 9 (C4.5) and 9 (RF) activities for which “TS” had higher precisions than “SS”.
In addition, we compared the proposed approach with the HMM-based one. As shown in Figure 10, the HMM had an accuracy, precision and F-measure of 0.57, 0.59 and 0.62, respectively, whereas “TS” had an average accuracy, precision and F-measure of 0.63, 0.72 and 0.68, respectively, over NB, kNN, C4.5 and RF. “TS” therefore had a higher accuracy, precision and F-measure than the HMM. For individual activities, there were eight activities for which “TS” had higher accuracies than the HMM and nine activities for which “TS” had higher precisions.
As shown in Figure 11, the correlations are 0.24 (“SS”) and 0.1 (“TS”) for dataset HH104. Again, for activities longer than 25 min, accuracy is strongly correlated with activity duration, whereas for activities shorter than 25 min, duration is only weakly related to recognition accuracy.

5.5. Discussion

In this paper, we presented a new two-stage approach for activity recognition. In the first stage, the activity records are clustered according to temporal features; in the second stage, the activity records of each cluster are recognized by a classifier. It would be interesting to combine the proposed approach with current studies: the datasets are public and used in many current studies, and the chosen classifiers are common in them. Whereas those studies generally focus on feature selection and classifier optimization, the proposed approach focuses on the recognition process. Hence, the two-stage idea can easily extend such studies by improving feature selection, as the experiment results also show.
The experiments show that “TS” performs better than “SS” and the hidden Markov model. The same conclusion can be drawn by analyzing the proposed approach. In the first stage, the temporal features are used to cluster the training set: activity records in different clusters are temporally sensitive, whereas activity records within the same cluster are temporally insensitive, so that within a cluster the temporal features amount to noise. In the second stage, the activity records within the same cluster are spatially sensitive and are recognized based on spatial features only. Compared to the one-dimensional recognition model, the recognition performed in the second stage is not disturbed by this noise and achieves higher performance.
However, performance varied considerably across individual activities. Activities with sufficient samples obtained high performance. In contrast, activities with few samples obtained low performance because it is difficult for the classifiers to learn valid activity features from so few samples. In addition, activities with a notably small number of samples are likely to be spread over different clusters, leaving each cluster with even fewer samples. For dataset “HH102”, the activities “Sleep”, “Dress”, “Eat_Dinner”, “Toilet”, “Wash_Dinner_Dishes” and “Watch_TV” had high performance. For dataset “HH104”, the activities “Sleep_Out_Of_Bed”, “Dress”, “Cook_Breakfast”, “Cook_Dinner”, “Toilet”, “Morning_Meds” and “Work_On_Computer” had high performance. In contrast, a few activities had poor performance. For dataset “HH102”, the activities “Bathe” and “Take_Medicine” had notably small samples: “Bathe” had only 36 activity records and “Take_Medicine” only 24, far fewer than the average of 80. For dataset “HH104”, the activity “Take_Medicine” had a notably small sample of only 12 activity records. Fewer activity records mean that the features of the corresponding activities are difficult to learn adequately.
Furthermore, even when the same sensors were involved in detecting certain activities, the accuracies still varied widely. For instance, the activity “Eat_Breakfast” (“Wash_Breakfast_Dishes”) is harder to detect than “Eat_Dinner” (“Wash_Dinner_Dishes”). Compared to “Eat_Dinner” (“Wash_Dinner_Dishes”), the activity “Eat_Breakfast” (“Wash_Breakfast_Dishes”) is temporally irregular; for instance, the activity “Wash_Breakfast_Dishes” sometimes happened at noon or in the evening.

6. Conclusions and Future Work

To better aid elderly people through context-aware services, this paper proposes a two-stage approach for the activity recognition of a single resident. To validate the proposed model, naive models were used for comparison in two experiments, each of which recognized twelve daily activities. The results show that the proposed approach outperforms the naive spatial model in terms of both the accuracy of individual activities and the average accuracy.
The two-stage approach demonstrates its superiority in recognizing single-resident activities. In the next stage, we will attempt to apply the proposed approach to more complex data situations, e.g., unlabelled data and multiple-resident situations. In this paper, the clustering algorithm partitions the set of activity records into two clusters, and the number of clusters is vital to the performance of activity recognition. A supplementary algorithm that outputs a reasonable number of clusters will therefore be conceived. In addition, we will study in depth how the sample size affects the computed accuracy of an activity.

Acknowledgments

We thank all the reviewers for their useful comments, which helped improve the paper. This work was supported by the National Natural Science Foundation of China (No. 61672122, No. 61402070, No. 61602077, No. 61672261), the Natural Science Foundation of Liaoning Province of China (No. 2015020023), the Educational Commission of Liaoning Province of China (No. L2015060), the Fundamental Research Funds for the Central Universities (No. 3132016348) and the Open Project Program of the Artificial Intelligence Key Laboratory of Sichuan Province (2016RYJ01).

Author Contributions

Yaqing Liu and Dantong Ouyang conceived of the research subject and contributed critical suggestions. Yong Liu conducted the experiments. Rong Chen drafted the paper. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Peetoom, K.K.B.; Lexis, M.A.S.; Joore, M.; Dirksen, C.D.; Witte, L.P.D. Literature review on monitoring technologies and their outcomes in independently living elderly people. Disabil. Rehabil. Assist. Technol. 2015, 10, 271–294.
2. McCowan, I.; Gatica-Perez, D.; Bengio, S.; Lathoud, G.; Barnard, M.; Zhang, D. Automatic analysis of multimodal group actions in meetings. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 305–317.
3. Kim, Y.; Lee, H.; Provost, E.M. Deep learning for robust feature generation in audiovisual emotion recognition. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013.
4. Fleury, A.; Noury, N.; Vacher, M.; Glasson, H. Sound and speech detection and classification in a health smart home. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008.
5. Bloch, F.; Gautier, V.; Noury, N.; Lundy, J.E.; Poujaud, J.; Claessens, Y.E.; Rigaud, A.S. Evaluation under real-life conditions of a stand-alone fall detector for the elderly subjects. Ann. Phys. Rehabil. Med. 2011, 54, 391–398.
6. Wang, L.; Gu, T.; Tao, X.; Lu, J. Recognizing multi-user activities using wearable sensors in a smart home. Pervasive Mob. Comput. 2011, 7, 287–298.
7. Arcelus, A.; Herry, C.L.; Goubran, R.A.; Knoefel, F. Determination of sit-to-stand transfer duration using bed and floor pressure sequences. IEEE Trans. Biomed. Eng. 2009, 56, 2485–2492.
8. Wan, J.; O’Grady, M.J.; O’Hare, G.M.P. Dynamic sensor event segmentation for real-time activity recognition in a smart home context. Pers. Ubiquitous Comput. 2015, 19, 287–301.
9. Shen, J.; Tan, H.; Wang, J.; Wang, J.; Lee, S. A Novel Routing Protocol Providing Good Transmission Reliability in Underwater Sensor Networks. J. Int. Technol. 2015, 16, 171–178.
10. Xie, S.; Wang, Y. Construction of Tree Network with Limited Delivery Latency in Homogeneous Wireless Sensor Networks. Wirel. Pers. Commun. 2014, 7, 231–246.
11. Atallah, L.; Lo, B.; Ali, R.; King, R.; Yang, G.Z. Real-Time Activity Classification Using Ambient and Wearable Sensors. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 1031–1039.
12. Salomons, E.L.; Havinga, P.J.M.; Leeuwen, H.V. Inferring Human Activity Recognition with Ambient Sound on Wireless Sensor Nodes. Sensors 2016, 16, 1586–1606.
13. Sawsan, M.; Ahmad, L.; Caroline, L. User Activities Outliers Detection; Integration of Statistical and Computational Intelligence Techniques. Comput. Intell. 2014, 32, 247–250.
14. Helal, S.; Mann, W.; Elzabadani, H.; King, J.; Kaddoura, Y.; Jansen, E. The Gator Tech Smart House: A Programmable Pervasive Space. Computer 2005, 38, 50–60.
15. Benmansour, A.; Bouchachia, A.; Feham, M. Human Activity Recognition in Pervasive Single Resident Smart Homes: State of Art. In Proceedings of the 12th International Symposium on Programming and Systems, Algiers, Algeria, 28–30 April 2015; pp. 276–284.
16. Benmansour, A.; Bouchachia, A.; Feham, M. Multioccupant Activity Recognition in Pervasive Smart Home Environments. ACM Comput. Surv. 2015, 48, 34.
17. Kasteren, T.; Noulas, A.; Englebienne, G.; Krose, B. Accurate activity recognition in a home setting. In Proceedings of the 10th International Conference on Ubiquitous Computing, Seoul, Korea, 21–24 September 2008; pp. 1–9.
18. Kasteren, T.L.M.V.; Englebienne, G.; Kröse, B.J.A. Hierarchical activity recognition using automatically clustered actions. Lect. Notes Comput. Sci. 2011, 7040, 82–91.
19. Singla, G.; Cook, D.; Schmitter, E.M. Recognizing independent and joint activities among multiple residents in smart environments. J. Ambient Intell. Humaniz. Comput. 2010, 1, 57–63.
20. Chen, R.; Tong, Y. A Two-stage Method for Solving Multi-resident Activity Recognition in Smart Environments. Entropy 2014, 16, 2184–2203.
21. Tong, Y.; Chen, R.; Gao, J. Hidden State Conditional Random Field for Abnormal Activity Recognition in Smart Homes. Entropy 2015, 17, 1358–1378.
22. Tong, Y.; Chen, R. Latent-Dynamic Conditional Random Fields for recognizing activities in smart homes. J. Ambient Intell. Smart Environ. 2014, 6, 39–55.
23. Bourobou, S.T.M.; Yoo, Y. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm. Sensors 2015, 15, 11953–11971.
24. Wen, X.; Shao, L.; Xue, Y.; Fang, W. A rapid learning algorithm for vehicle classification. Inf. Sci. 2015, 295, 395–406.
25. Cook, D.; Schmitter-Edgecombe, M. Assessing the quality of activities in a smart environment. Methods Inf. Med. 2009, 48, 480–485.
26. Yin, J.; Yang, Q.; Pan, J.J. Sensor-based abnormal human-activity detection. IEEE Trans. Knowl. Data Eng. 2008, 20, 1082–1090.
27. Gu, B.; Sheng, V.S.; Wang, Z.; Ho, D.; Osman, S.; Li, S. Incremental learning for ν-Support Vector Regression. Neural Netw. 2015, 67, 140–150.
28. Gu, B.; Sheng, V.S.; Tay, K.Y.; Romano, W.; Li, S. Incremental Support Vector Learning for Ordinal Regression. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 1403–1416.
29. Urwyler, P.; Rampa, L.; Stucki, R.; Büchler, M.; Müri, R.; Mosimann, U.P.; Nef, T. Recognition of activities of daily living in healthy subjects using two ad-hoc classifiers. Biomed. Eng. Online 2015, 14, 1–15.
30. Stucki, R.A.; Urwyler, P.; Rampa, L.; Müri, R.; Mosimann, U.P.; Nef, T.A. A Web-Based Non-Intrusive Ambient System to Measure and Classify Activities of Daily Living. J. Med. Int. Res. 2014, 16, e175.
31. Krishnan, N.C.; Cook, D.J. Activity Recognition on Streaming Sensor Data. Pervasive Mob. Comput. 2014, 10, 138–154.
32. Lotfi, A.; Langensiepen, C. Smart homes for the elderly dementia sufferers: identification and prediction of abnormal behaviour. J. Ambient Intell. Humaniz. Comput. 2012, 3, 1–14.
33. Gottfried, B.; Guesgen, H.W.; Hübner, S. Spatiotemporal reasoning for smart homes. Lect. Notes Comput. Sci. 2006, 4008, 16–34.
34. Guesgen, H.; Marsland, S. Recognising human behaviour in a spatio-temporal context. In Handbook of Research on Ambient Intelligence and Smart Environments: Trends and Perspective; IGI Global: Hershey, PA, USA, 2011; pp. 443–459.
35. Jakkula, V.; Cook, D. Anomaly detection using temporal data mining in a smart home environment. Methods Inf. Med. 2008, 47, 70–75.
36. Jakkula, V.; Cook, D.J. Mining Sensor Data in Smart Environment for Temporal Activity Prediction. In Proceedings of the 1st International Workshop on Knowledge Discovery from Sensor Data, San Jose, CA, USA, 12–15 August 2007.
37. Rugnone, A.; Poli, F.; Vicario, E.; Nugent, C.D.; Tamburini, E.; Paggetti, C. A visual editor to support the use of temporal logic for ADL monitoring. In Pervasive Computing for Quality of Life Enhancement, Proceedings of the 5th International Conference on Smart Homes and Health Telematics, Nara, Japan, 21–23 June 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 217–225.
38. Latfi, F.; Lefebvre, B.; Descheneaux, C. Ontology-based management of the telehealth smart home, dedicated to elderly in loss of cognitive autonomy. In Proceedings of the OWLED 2007 Workshop on OWL: Experiences and Directions, Innsbruck, Austria, 6–7 June 2007.
39. Chen, L.; Nugent, C. Ontology-based activity recognition in intelligent pervasive environments. Int. J. Web Inf. Syst. 2009, 5, 410–430.
40. Chen, L.; Nugent, C.; Okeyo, G. An Ontology-Based Hybrid Approach to Activity Modeling for Smart Homes. IEEE Trans. Hum. Mach. Syst. 2014, 44, 92–105.
41. Hevesi, P.; Wille, S.; Pirkl, G.; When, N.; Lukowicz, P. Monitoring household activities and user location with a cheap, unobtrusive thermal sensor array. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 141–145.
42. Sundholm, M.; Cheng, J.; Zhou, B.; Sethi, A.; Lukowicz, P. Smart-mat: Recognizing and counting gym exercises with low-cost resistive pressure sensing matrix. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 373–382.
43. Gu, B.; Sheng, V.S.; Li, S. Bi-parameter space partition for cost-sensitive SVM. In Proceedings of the 24th International Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015; pp. 3532–3539.
44. Gu, B.; Sheng, V.S. A Robust Regularization Path Algorithm for ν-Support Vector Classification. IEEE Trans. Neural Netw. Learn. Syst. 2016, 1, 1–8.
45. Sukanya, P.; Gayathri, K.S. An Unsupervised Pattern Clustering Approach for Identifying Abnormal User Behaviors in Smart Homes. Int. J. Comput. Sci. Netw. 2013, 3, 115–122.
46. Zheng, Y.; Jeon, B.; Xu, D.; Wu, Q.M.J.; Zhang, H. Image segmentation by generalized hierarchical fuzzy C-means algorithm. J. Intell. Fuzzy Syst. 2015, 28, 961–973.
47. Maekawa, T.; Watanabe, S. Unsupervised activity recognition with user’s physical characteristics data. In Proceedings of the 15th Annual International Symposium on Wearable Computers, San Francisco, CA, USA, 12–15 June 2011; pp. 89–96.
48. Nef, T.; Urwyler, P.; Büchler, M.; Tarnanas, I.; Stucki, R.; Cazzoli, D.; Müri, R.; Mosimann, U. Evaluation of Three State-of-the-Art Classifiers for Recognition of Activities of Daily Living from Smart Home Ambient Data. Sensors 2015, 15, 11725–11740.
49. Zhang, Y.; Sun, X.; Wang, B. Efficient Algorithm for K-Barrier Coverage Based on Integer Linear Programming. China Commun. 2016, 13, 16–23.
50. Wen, J.; Zhong, M. Activity discovering and modelling with labelled and unlabelled data in smart environments. Expert Syst. Appl. 2015, 42, 5800–5810.
51. Wen, J.; Zhong, M.; Wang, Z. Activity recognition with weighted frequent patterns mining in smart environments. Expert Syst. Appl. 2015, 42, 6423–6432.
52. Cook, D.; Crandall, A.; Thomas, B.; Krishnan, N. CASAS: A smart home in a box. IEEE Comput. 2013, 46, 62–69.
53. WSU CASAS Datasets. Available online: http://ailab.wsu.edu/casas/datasets.html (accessed on 2 February 2016).
Figure 1. The entire process of activity recognition based on the proposed approach.
Figure 2. Accuracies on dataset “HH102” using classifiers NB, kNN, C4.5 and RF.
Figure 3. Precisions on dataset “HH102” using classifiers NB, kNN, C4.5 and RF.
Figure 4. F-measures on dataset “HH102” using classifiers NB, kNN, C4.5 and RF.
Figure 5. Accuracies, precisions and F-measures on dataset “HH102” based on the proposed approach and HMM.
Figure 6. Correlation between the durations and accuracies of activity recognition for dataset “HH102”.
Figure 7. Accuracies on dataset “HH104” using classifiers NB, kNN, C4.5 and RF.
Figure 8. Precisions on dataset “HH104” using classifiers NB, kNN, C4.5 and RF.
Figure 9. F-measures on dataset “HH104” using classifiers NB, kNN, C4.5 and RF.
Figure 10. Accuracies, precisions and F-measures for dataset “HH104” based on the proposed approach and HMM.
Figure 11. Correlation between the durations and accuracies of activity recognition for dataset “HH104”.
Table 1. A segment of activity records.

Number | Date         | Time            | Sensor  | Value | Activity
1      | 15 June 2011 | 00:06:32.834414 | M021    | ON    | Sleep
2      | 15 June 2011 | 00:12:32.670631 | BATV012 | 9540  |
3      | 15 June 2011 | 00:15:01.957718 | LS013   | 6     |
4      | 15 June 2011 | 00:25:01.892474 | LS013   | 7     |
5      | 15 June 2011 | 01:05:01.622637 | BATV013 | 9460  |
6      | 15 June 2011 | 03:38:28.21206  | M021    | ON    |
7      | 15 June 2011 | 03:38:44.482092 | MA013   | ON    | Bed_Toilet_Transition
8      | 15 June 2011 | 03:38:45.133517 | M018    | OFF   |
9      | 15 June 2011 | 03:38:47.644521 | MA013   | OFF   |
Table 2. Description of dataset “HH102”.

Sensors | Activities | Raw Sensor Events | Raw Activity Records | Selected Activities | Selected Activity Records
97      | 30         | 413,142           | 2087                 | 12                  | 951
Table 3. Description of dataset “HH104”.

Sensors | Activities | Raw Sensor Events | Raw Activity Records | Selected Activities | Selected Activity Records
136     | 28         | 347,102           | 3139                 | 12                  | 1121
Table 4. Parameters of the classifiers.

NB
  • numDecimalPlaces: 2
kNN
  • number of neighbors to use: 1
  • distanceWeighting: no distance weighting
  • nearestNeighbourSearchAlgorithm: LinearNNSearch
C4.5
  • confidenceFactor: 0.25
  • minNumObj: 2
  • numDecimalPlaces: 2
RF
  • numIterations: 100
  • numDecimalPlaces: 2
HMM
  • The set of sensor events is divided into windows of 60 s duration.
  • S = {S1, S2, …, SN} is the set of states; activities map to states.
  • V = {V1, V2, …, VM} is the set of observations; sensors map to observations.
  • A = {aij = P(qt+1 = Sj | qt = Si), 1 ≤ i, j ≤ N} is the set of transition probabilities. Each aij represents the probability of a transition from state Si to state Sj and is computed by Formula (1), where cij is the observed frequency of transitions from Si to Sj; the states of two consecutive windows map to one transition.
  • B = {bjk = P(ot = Vk | qt = Sj), 1 ≤ j ≤ N, 1 ≤ k ≤ M} is the set of emission probabilities. Each bjk represents the probability of observation Vk being emitted in state Sj and is computed by Formula (2), where Ej(Vk) is the frequency with which Vk fires in state Sj.
  • π = {πi = P(q1 = Si), 1 ≤ i ≤ N} is the initial state distribution. Each πi represents the probability that Si is the start state and is computed by Formula (3), where Init(i) is the frequency with which Si occurs as a start state; the first state of each window sequence maps to an initial state.
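The count-based estimates of Formulas (1)–(3) can be written, for instance, as the following NumPy sketch; C, E and init are assumed count matrices gathered from the windowed training data, and zero-count rows would need smoothing in practice.

```python
import numpy as np

def estimate_hmm_parameters(C, E, init):
    """Formulas (1)-(3): turn raw counts into HMM parameters.
    C[i, j]  -- frequency of transitions from state Si to Sj (N x N)
    E[j, k]  -- frequency with which observation Vk fires in state Sj (N x M)
    init[i]  -- frequency with which Si opens a window sequence (length N)"""
    A = C / C.sum(axis=1, keepdims=True)   # Formula (1)
    B = E / E.sum(axis=1, keepdims=True)   # Formula (2)
    pi = np.asarray(init) / np.sum(init)   # Formula (3)
    return A, B, pi
```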
