INTELLIGENT INTERACTION FOR HUMAN-FRIENDLY SERVICE ROBOT IN SMART HOUSE ENVIRONMENT



Introduction
Various robotic systems have been developed to help humans in home environments as well as in public places. Robots that perform work and activities for human beings are called service robots 1. Service tasks include fetching and delivering articles and food in the home 2,3,4. Also, in large public places such as offices, hospitals, or museums, service robots can provide similar services such as guiding visitors, moving objects, and rendering entertainment 5,6,7.
Rehabilitation robots are specialized service robots that assist older people or people with disabilities, depending on the level of disability. KARES II 3, for example, is designed to assist the daily life activities of patients with spinal cord injuries. It performs 12 predefined service tasks, such as assisting drinking and eating, turning a switch on/off, and picking up objects, via various human-machine interfaces. FRIEND 8 and Care-O-Bot II 9 have also been developed to assist people with disabilities in daily activities.
Unlike general-purpose service robots, an assistive robot should be designed in consideration of the characteristics of the target user. L. Leifer has proposed three general guiding principles for assistive service robots 10: (1) assistive service robots should be designed as social agents; (2) they must possess some intelligence to tolerate ambiguity; (3) all applications should be reapplications.
In practice, it is emphasized that the target user, possibly along with his/her caretakers, should be involved in the design stage, and that the specifics and characteristics of the user, such as the kind and degree of disability, should be analyzed beforehand; the required service tasks and design specifications are then determined. In particular, the designer should take into account the fact that older persons and/or people with disabilities, as target users, would have difficulty controlling complex robotic systems and reacting rapidly to unexpected situations. Thus, service tasks should be performed as autonomously, or semi-autonomously, as possible by the robot's own intelligence for a given task command. The words 'intelligence' and 'intelligent system' have been widely used, but in a non-unique way 11. We find that general notions of intelligence do not provide any specific guideline for designing a practical system, because the required type of intelligence varies with the target application. AI (artificial intelligence) for knowledge abstraction and information fusion is important, for example, in financial/economic analysis, while cognitive intelligence is more useful in the biometrics field. Likewise, computational intelligence for manipulation/control in uncertain, complex environments is effective in the robotics area.
In this paper, we discuss a set of effective intelligent techniques that can be utilized in a smart house environment in which the human, controllable devices, and robotic agents are treated as subsystems of a human-in-the-loop system. In the design of a smart house for people in need, there are many important factors to consider when evaluating the performance of the system, such as safety, reliability, and accessibility, but we concentrate here on the aspect of human-robot interaction. Note that human-robot interaction (HRI) becomes intense and frequent in a home environment where robotic services are desired for the independent living of residents such as aged people and/or people with physical handicaps. This paper is organized as follows. In Section 2, notions of intelligence for a smart house environment are briefly discussed and a smart house environment, called Intelligent Sweet Home (ISH), is described. Then, for human-friendly human-machine interaction, the tasks that a robot should execute with intelligent techniques are grouped into three types in view of the context of the total human-in-the-loop system in Section 3. We describe each context-based HRI type in detail in Sections 4, 5, and 6, respectively. Finally, brief concluding remarks are stated in Section 7.

Intelligent Sweet Home as a Smart House Environment
As a practical example, we describe a smart house environment developed at KAIST, called Intelligent Sweet Home (ISH). This system aims to enable a person with movement limitations to perform various complicated everyday tasks as independently as possible in a residential space 12. Based on the consensus gathered from a group of potential users in a special questionnaire survey 13, ISH is designed to provide the services needed for independent activities: going outdoors, meal preparation, eating, drinking, controlling home appliances from the bed or wheelchair, and bringing/removing objects while the user is in bed. The ISH includes several robotic subsystems, such as an intelligent bed, an intelligent wheelchair, and a robotic hoist to assist transfer between the bed and the wheelchair, as well as human-machine interaction modules that provide a natural and convenient means of conveying information between the user and the subsystems installed in the home. All of the home-installed components/devices are connected via a home network that includes both wired and wireless communication modules. Responding to the user's commands, the central unit generates a set of actions for each robotic agent or automation device, together with the sequence information according to which these actions are to be executed. Most of the assistive modules are designed to perform their own specific functions as well as to cooperate with other related subsystems whenever necessary. Since the ISH consists of a number of subsystems and tasks, and since each task usually requires more than one subsystem for cooperative execution, it would be quite difficult for a user to control and command all the subsystems through the various corresponding human-robot interfaces. To resolve this difficulty, we have developed a steward robot, named Joy, as shown in Fig. 1, so that the user can obtain a service by interacting with the steward robot alone when operating the whole system 14.
The steward robot is endowed with a learning function to handle uncertain services and with an intention-reading capability, so that awkward or difficult situations can be avoided for the user with physical disabilities. The system is also capable of providing personalized services depending on the resident's preferences and lifestyle. All of these functions enable the home system to perform appropriate tasks autonomously or semi-autonomously, in place of cumbersome manual operations, by observing the resident's behavior. Figure 2 shows the hardware structure of the steward robot and the ISH.

Comments on Intelligence for a Smart House Environment
Imagine a home environment where the resident, as a user, interacts with a service robot in various ways. The user may command the robot to bring a cup or to assist with walking outside. He/she may also want the robot to adjust the indoor temperature or turn on the TV to a favorite channel. While the robot is performing a given task command, the user may express his/her feelings as feedback on the robot's performance or give another instruction to be executed after the current task. In this situation, observe that human-robot interaction occurs frequently, and sometimes intensely, and that it is dealt with in the framework of a human-in-the-loop system. Traditionally, a major concern in designing a total human-in-the-loop system is to minimize troublesome human factors. In an airplane control system, for example, the pilot is trained not only to be skillful in maneuvering the airplane but also not to make human errors. That is, the target system is first designed according to given specifications, and the operator is then required to make the system perform optimally, i.e., to adapt to the machine. We may call such a design approach a machine-centered approach.
Recall that older people and people with disabilities may have difficulty learning how to control a complex system such as a smart house. A smart house environment is designed to assist potential users in independent living by means of assistive components, and the number of components grows with the number of service tasks, so the users' cognitive load in controlling all the devices and subsystems grows correspondingly. For this type of smart house, we adopt a different philosophy, called the "human-friendly design approach", according to which the robot is designed to minimize human training and cognitive load in operating the system. A service robot designed under the human-friendly design philosophy is expected to adapt itself to the human by learning and understanding human characteristics and behaviors; this is sometimes also called the "human-centered" system design approach 2. One obvious implication of this approach is that the robot has to possess some capability to understand human characteristics.
When human-robot interaction takes place in a smart house environment, the human would prefer communication with the system to be easy and convenient. In communication, the human responds mostly with perception-based input data while employing an approximate-reasoning inference mechanism, whereas the robot operates on measurement-based input data under well-defined mathematical logic and formulae. This difference makes it difficult for the human and the target robotic machine to communicate with each other unless properly interfaced. Thus, the realization of effective human-machine interaction relies on appropriate human-machine interfaces capable of translating one type of information and knowledge into a completely different type, and for this, intelligent techniques are essential to make human-machine interfaces more effective and more human-friendly.
Note that, when a human communicates with another human, he/she can express instructions or intentions in various forms, such as natural language, facial expressions, gestures, and behavioral patterns. To make interaction natural and easy for the human, we may design a robot to learn these human communication skills. There have been tremendous attempts to build mathematical models of such skills, with quite limited success. Conventional mathematics-based techniques have limited applicability to making a robot human-friendly, because the human is an entity that is very difficult to model, owing to subjectivity, time-variance, ambiguity, inconsistency, susceptibility, and high dimensionality. Thus, some form of intelligence is needed, and we propose to use soft computing techniques, or the techniques of computational intelligence such as fuzzy logic, genetic algorithms (GA), and artificial neural networks (ANN), which can be effectively utilized for mimicking human intelligence in HRI.
Another aspect to note is that service tasks are desired to be performed by the robot as autonomously as possible in many situations, and for this, a high level of intelligence may be needed. Imagine a service task in which a robot is asked to bring a cup from a table. If the user has to designate the accurate 3-dimensional position of the target object for the task to be performed, let us call this the 1st level of intelligence. This kind of intelligence is viewed as a physical ability of the robot. If the robot additionally possesses its own cognitive intelligence, sensing and recognizing the target object, we regard this as the 2nd level of intelligence. This kind of intelligence enables the robot to handle a great deal of uncertainty. Finally, if the robot can bring the cup autonomously while avoiding and/or rearranging obstacles in its way, we may call this the 3rd level of intelligence for the given task. Note that, for a robot with a lower level of intelligence, the human must be more involved in getting the robot to perform a task, and the robot requires continuous human intervention during its operation. On the other hand, a robot with a higher level of intelligence can perform the given task from a simplified human command. Therefore, the capability of autonomously performing a task reduces unnecessary interaction between the human and the target machine. Of course, a robot with a much higher level of intelligence may even seek target tasks autonomously, without any designated command, by estimating the intended command of the user.
In designing and realizing an assistive robotic system, many additional functional requirements should be taken care of, such as safety, reliability, and usability. In this paper, however, we concentrate on the human-robot interaction perspective, for which intelligence in the human-machine interface, as well as in autonomous task performance, is considered essential in a smart house environment.

Human-friendly Interaction between Human and Robot
Human-friendly human-robot interaction can be studied from many perspectives, including those of psychology, neuroscience, cognitive science, sociology, and engineering. In particular, a functionally driven engineering approach has been vigorously investigated on a variety of issues involving interactive design, safety, modeling, and task management. From a realization point of view, considerable aspects of HRI are found to be modified versions of HCI or HAI (human-agent interaction) techniques. For the analysis and design of an HRI module, it is necessary to identify the mode of interaction, where "mode" denotes the number of robots versus the number of humans: 1 : 1, 1 : N, N : 1, or N : M. Usually, we deal with the 1 : 1 mode, in which a single user interacts with a robot in the house one at a time. In an office environment, the mode can be 1 : N, with a variety of service menus for different users, while in a smart house the robot works in the N : 1 mode, where a single user must deal with several robotic subsystems. When multiple robots and a number of people cooperatively clean and arrange the scattered chairs of a classroom, the situation can be considered an N : M mode.
In modeling the human for interaction, the distance between the user and the robot becomes an important factor. A human can be modeled as a simple moving object when the distance is very large, while the same human becomes a complex entity rendering various forms of input to the HRI module, such as facial emotional expressions, hand gestures, or EMG signals, when such physical signals are used for controlling subsystems such as the wheelchair. These considerations of mode, distance, and others can be fused into a concept of "context".
Here, we propose that the set of interaction tasks that a service robot is supposed to carry out can be partitioned according to the context in which the robot is situated. The degree of context awareness and the specificity of the context information determine the tasks that a robot should do in HRI, as in context-aware services. Here, "context" refers to the situation of an entity, or the properties of a system, that are relevant to the interaction between the system and its surroundings and help it adapt its behavior accordingly. It is known that any representation is context dependent. In modeling or describing the HRI environment, the robotic tasks should be designed in consideration of the context. The context information can be well-defined and crisp, or it can be fuzzy or uncertain.
We consider the HRI module to be a system that takes input from the human and generates output for the human. Generation of the output depends on the context of the situation in which the human and robot are placed. We divide the context into three types: (1) crisp context, (2) fuzzy context, and (3) uncertain context. The HRI module thus generates its output under context reasoning, where the context is hierarchically structured: at the bottom level the context is crisp, and task execution proceeds as if there were no context at all, whereas at the top level the context is uncertain, the executed task may produce erroneous or unsatisfactory output from the human point of view, and learning is necessary on a long-term basis.
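The three context levels above can be sketched as a simple dispatcher. This is a minimal illustration, not part of the ISH implementation: the handler names and the placeholder return strings are invented for the example.

```python
from enum import Enum, auto

class Context(Enum):
    CRISP = auto()      # well-defined input -> deterministic task planning
    FUZZY = auto()      # ambiguous input -> fuzzy gesture recognition
    UNCERTAIN = auto()  # inconsistent input -> learning on a long-term basis

# Placeholder handlers standing in for the subsystems of Sections 4, 5 and 6.
HANDLERS = {
    Context.CRISP: lambda cmd: f"plan:{cmd}",
    Context.FUZZY: lambda cmd: f"fuzzy-recognize:{cmd}",
    Context.UNCERTAIN: lambda cmd: f"learn-then-act:{cmd}",
}

def handle_input(context: Context, command: str) -> str:
    """Route a user input to the subsystem matching the context level."""
    return HANDLERS[context](command)

print(handle_input(Context.CRISP, "turn on TV"))  # plan:turn on TV
```

The point of the sketch is only that the same user input is processed differently depending on which context level the HRI module has inferred.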

Fig. 3. HRI Module Subsystem in Context
In this paper, the interaction tasks done by the robot are considered in terms of HRI in crisp context, fuzzy context, or uncertain context, and are exemplified by three subsystems of the ISH: (1) the task planning subsystem, (2) the soft remote control subsystem, and (3) the probabilistic fuzzy rule-based subsystem. The steward robot is also discussed as a representative example in view of computational intelligence for effective human-machine interaction.

Task Planning Subsystem for HRI in Crisp Context
When human-robot interaction (HRI) takes place in crisp context, the HRI module can react or take actions precisely, in a well-defined way, in response to an input.
The subsystem for crisp context amounts to a task planning subsystem, for which an advanced algorithm is utilized in this paper to improve task performance with autonomy, convenience, and easy controllability for the user. Recall that the ISH is equipped with several robotic subsystems and devices, and that the increased number and complexity of subsystems can decrease accessibility and cause a great deal of inconvenience. As an approach toward human-friendly human-robot interaction in crisp context, therefore, we focus on endowing the steward robot with a versatile task planning capability, aiming at 'simplification of task commands' by the user.

System Overview
In the ISH, the service tasks are categorized as follows 15: 1) low-level tasks, which control a single target device (e.g., turning the TV on/off), and 2) high-level tasks, which control multiple target devices in a proper sequence (e.g., preparing to go outside, preparing a meal). Consider, for example, a high-level task scenario in which a person with lower-limb paralysis wants to go outside, starting from the bed in the ISH. In this case, the target user with a mobility disability has to control the intelligent bed robot, the robotic hoist, the intelligent wheelchair, the steward robot Joy, and the other devices in a proper sequence, in a way that reduces the task-execution time and avoids possible collisions among the mobile agents. This requires cumbersome planning of manual operations, and it is quite difficult for a person with a physical disability to conduct such a sequence of operations under possibly unexpected events.
To resolve this potential difficulty, a task planning system is proposed for the steward robot Joy to control the subsystems automatically in a proper sequence. Based on observation of the environmental status and device conditions, the task planning system generates an appropriate sequence of low-level tasks from the given high-level task.
A task planning subsystem finds a sequence of actions that moves the system from an initial state to a goal state such that all the facts of the goal state become true. Here, a fact is defined as the discretized status of a device, and a state is a set of facts. The goal state is given by a high-level task, and the initial state is determined from environmental sensory information. The task planning system in this paper is based on the STRIPS (STanford Research Institute Problem Solver) representation 16,17. Many task planning systems have been developed on the STRIPS representation, including Graphplan 18, BSR-Graphplan 19, and split planning 20, to name a few. Among them, the split planning method has been shown to outperform the other planning algorithms in computation time under the condition that the number of facts in the initial state is larger than that in the goal state 15. However, we have noted that the computation time of the split planning method is still too long for a practical real-time system such as the ISH. To meet this requirement, we have applied a backward graph-construction scheme to the existing split planning method, together with a state-partitioning technique. Figure 4 shows the overall structure of the proposed task planning system. As shown in Fig. 4, the task planning system consists of four kinds of processing submodules: an initial planning submodule, a state partitioning submodule, BSP (Backward Split Planning) submodules, and a sequence merging submodule. For the given high-level task and the observed sensory information, the initial planning submodule generates an initially planned graph based on the STRIPS representation. The initial graph is then decomposed into several sub-problems in the state partitioning submodule. For each sub-problem, a BSP submodule generates a planning result, which is a sequence of actions.
Finally, the sequences are merged into a single sequence of actions for the original problem. The detailed procedures are explained in the next Section.

A Backward Split Planning Method with State Partitioning: Intelligent Technique in the Task Planning System
To find an appropriate sequence of actions, the condition given to a planning system consists of three tuples: an initial state, a goal state, and a set of defined actions 16. All actions are predefined in the STRIPS representation, which has two elements: states and actions. An action is a mapping from a current state to a new state.
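The STRIPS representation described here can be sketched in a few lines. The wheelchair facts and the action name below are hypothetical examples for illustration, not the actual fact vocabulary of the ISH.

```python
from dataclasses import dataclass

# A "fact" is the discretized status of a device; a "state" is a set of facts.
State = frozenset

@dataclass(frozen=True)
class Action:
    """STRIPS-style action: applicable when its preconditions hold in the
    current state; applying it deletes some facts and adds others."""
    name: str
    pre: State
    add: State
    delete: State

    def applicable(self, state: State) -> bool:
        return self.pre <= state  # all preconditions are true

    def apply(self, state: State) -> State:
        return (state - self.delete) | self.add

# Hypothetical example: dock the wheelchair beside the bed.
move = Action(
    name="move_wheelchair_to_bed",
    pre=frozenset({"wheelchair_at_charger"}),
    add=frozenset({"wheelchair_at_bed"}),
    delete=frozenset({"wheelchair_at_charger"}),
)

s0 = frozenset({"wheelchair_at_charger", "user_on_bed"})
s1 = move.apply(s0)  # the action maps the current state to a new state
```

A goal state is then any set of facts the planner must make true, and a plan is a sequence of such mappings.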
In general, Graphplan-based task planners (e.g., Graphplan, BSR-Graphplan, split planning, and so on) first construct a graph and then search it for an optimal path to the given goal. Recall that one of the performance measures is the computation time of the planning system, and this time is mostly spent in the graph-construction phase. We find that, in a forward graph-construction planner such as split planning, the planning time mainly increases with the number of redundant facts in the initial state 15.
Some subsystems in the ISH, such as the robotic hoist and the intelligent wheelchair, are highly related to high-level tasks, whereas some devices and home appliances are not. Thus, as far as a high-level task in the ISH is concerned, the number of redundant facts in the initial state is larger than that in the goal state. In this case, we also found that a backward graph-construction algorithm such as BSR-Graphplan shows better performance. Therefore, we have applied a backward graph-construction scheme in the split planning algorithm. A brief procedure is shown in Table 1. In addition, a state-partitioning technique has been applied to the backward split planning algorithm: decomposing the original planning problem into a set of independent sub-problems can greatly reduce the computation time. The crux of this step is grouping the initial and goal states into several sub-initial and sub-goal states that do not affect each other. Starting from the initially divided sub-goal states, each consisting of one fact of the original goal state, the planner finds the essential sub-initial state for each sub-goal state. Based on the obtained sub-initial states, we analyze their dependencies, and the decomposition of the original planning problem into independent sub-problems is then conducted by merging dependent sub-initial and sub-goal states. Table 2 shows an example of task planning in the ISH for the 'going out' task explained above. Figure 5 shows an abstract world model of the ISH using the world abstraction technique 21. Using the proposed task planning system, we finally obtain the sequence of actions for the given high-level task, as shown in Table 2. As shown in Table 3, the computation time has been remarkably reduced by the state-partitioning technique, and the world abstraction technique reduces it further.
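As a rough illustration of the partition-then-plan idea (not the actual BSP algorithm), the following sketch groups goal facts by their producing actions and plans each independent group separately. It searches forward for brevity, whereas the paper's method constructs the graph backward from the goal; the two-action 'going out' fragment is hypothetical.

```python
from collections import deque

# An action is a dict of frozensets: preconditions, added and deleted facts.
def applicable(a, state):   return a["pre"] <= state
def apply_action(a, state): return (state - a["del"]) | a["add"]

def partition_goal(goal, actions):
    """Group goal facts whose producing actions overlap; independent groups
    can be planned separately and their sequences merged afterwards (a
    simplified stand-in for the paper's state-partitioning step)."""
    groups = []
    for fact in goal:
        producers = {a["name"] for a in actions if fact in a["add"]}
        for facts, acts in groups:
            if acts & producers:              # dependent: merge into group
                facts.add(fact); acts.update(producers)
                break
        else:
            groups.append(({fact}, producers))
    return [frozenset(facts) for facts, _ in groups]

def bfs_plan(init, subgoal, actions):
    """Plan one sub-problem by breadth-first search (forward, for brevity)."""
    frontier, seen = deque([(init, ())]), {init}
    while frontier:
        state, plan = frontier.popleft()
        if subgoal <= state:
            return list(plan)
        for a in actions:
            if applicable(a, state):
                nxt = apply_action(a, state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + (a["name"],)))
    return None

# Hypothetical 'going out' fragment: two independent goal facts.
acts = [
    {"name": "open_door", "pre": frozenset(),
     "add": frozenset({"door_open"}), "del": frozenset()},
    {"name": "switch_tv_off", "pre": frozenset({"tv_on"}),
     "add": frozenset({"tv_off"}), "del": frozenset({"tv_on"})},
]
init = frozenset({"tv_on"})
goal = frozenset({"door_open", "tv_off"})
seq = [step for sub in partition_goal(goal, acts)
            for step in bfs_plan(init, sub, acts)]
```

Each sub-problem here searches a much smaller state space than the joint problem, which is the source of the speed-up the partitioning technique exploits.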
The proposed task planning system can provide a proper action sequence within one second for the high-level task of preparing to 'go out', which is quite satisfactory for application in a practical situation.

Soft Remote Control Subsystem for HRI in Fuzzy Context
In this Section, we present a form of HRI in fuzzy context, called the "soft remote control system", which is a hand gesture recognition-based interface. Using the soft remote control system, the user can give commands directly to the target devices by pointing and by predefined gestures. Human gestures are intuitive, natural, and easy: the user can express intentions of approval and satisfaction, or give instructions, by pointing at devices and directions. The information content, however, is fuzzy, and the interaction takes place in a fuzzy context. Facial emotional expression recognition is another example of this category.

System Overview
The soft remote control system allows users to control various home appliances naturally, without any body-attached devices, in a smart house 22. Figure 6 shows the overall configuration of the soft remote control system. Multiple color cameras with pan/tilt/zoom modules acquire images of the room. In the vision processing system, the user's hand gesture commands are analyzed, and the resulting information is transferred to the home server via TCP/IP. The home server then sends infrared (IR) signals to control the home appliances. The command procedure for operating a function of a specific device is defined in a simple and natural way, so that the user can command easily and intuitively; the detailed procedure is described in Fig. 7. The user first selects the device he/she wants to control by pointing at it. The user can then command the desired function via 10 predefined basic hand-motion commands, consisting of 1-D and 2-D motions, as shown in Fig. 8. If no command gesture occurs within a few seconds, the activated device is released; otherwise, the user can command other functions of the currently activated device. To compensate for errors in recognizing the user's pointing direction, we adopt a feedback concept by which the user can adjust the pointing direction and confirm the recognition result, as shown in Fig. 9. When the pointing direction misses the target, the soft remote control system finds the appliance closest to the current pointing direction and announces in which direction the pointing hand should be moved, by displaying the location of the target and the currently pointed spot. The display also shows the selected device when a device is pointed at, and the recognized command gesture when an operation of the selected device is commanded.
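The pointing-feedback step can be illustrated with a toy nearest-appliance lookup. The appliance bearings, the angular tolerance, and the left/right wording are all hypothetical assumptions standing in for the vision system's 3-D pointing estimate.

```python
# Hypothetical appliance layout: name -> bearing (degrees) of each device
# relative to the user, as estimated by the camera system.
APPLIANCES = {"TV": 10.0, "lamp": 55.0, "curtain": 130.0}

def closest_appliance(pointing_deg, tolerance_deg=15.0):
    """Return the selected device, or the nearest candidate together with
    the correction the feedback display should announce to the user.
    (Which way 'left'/'right' maps depends on the bearing convention.)"""
    name, bearing = min(APPLIANCES.items(),
                        key=lambda kv: abs(kv[1] - pointing_deg))
    error = bearing - pointing_deg
    if abs(error) <= tolerance_deg:
        return name, "selected"
    hint = ("move pointing hand toward larger bearing" if error > 0
            else "move pointing hand toward smaller bearing")
    return name, hint

print(closest_appliance(12.0))   # ('TV', 'selected')
```

In the real system the announced correction is shown on the display together with the target location and the currently pointed spot, closing the feedback loop with the user.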

Intelligent Technique in the Soft Remote Control System: Fuzzy Garbage Model-based Gesture Recognition
Even when the user does not intend a hand gesture command, the soft remote control system may take some hand motion as one. This fuzzy context causes wrong recognition results. Such a situation can happen when an ordinary hand behavior looks similar to one of the predefined command gestures. To resolve this problem, a fuzzy garbage model has been proposed 23.

Fig. 8. Command gestures using hand motions: (a) 1-dimensional motions, (b) 2-dimensional motions

The fuzzy garbage model is a fuzzy model of meaningless gestures that are similar to the command set; such gestures are called garbage in this paper. A fuzzy command model is likewise defined from the command gestures. Each model is constructed from the corresponding fuzzy rules and membership functions, and produces an output score indicating how strongly the input gesture belongs to that class. By comparing the output scores, the user's gesture is recognized as garbage or as a command, as shown in Fig. 10. Note that fuzzy logic can be effectively utilized to handle the uncertainty of human gestures: the ambiguity can be expressed and treated as linguistic values. In this paper, for example, the eating motion and the 'up' command are discussed to explain the overall system, with two features used for recognition. The fuzzy rules of Table 4 are used to recognize the 'up' command, and the COS (Center of Sums) method is adopted for defuzzification 24. For a clear understanding of the problem, the defuzzified values of the recognition result for five different users are shown in Fig. 11.
Observe that the defuzzified value of user #3's non-command gesture is higher than that of user #2's command gesture. This means that some command gestures may not be correctly recognized if a single threshold is applied in the recognition system.
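Comparing two model scores, instead of applying one global threshold, can be sketched with a single feature. The triangular membership parameters below are invented for illustration and are not the paper's tuned values.

```python
def tri(x, left, center, right):
    """Triangular membership function."""
    if x <= left or x >= right:
        return 0.0
    return ((x - left) / (center - left) if x < center
            else (right - x) / (right - center))

# Hypothetical one-feature sketch: score the input against a command model
# and a garbage model; accept the gesture only if the command score wins.
def command_score(x): return tri(x, 0.4, 0.8, 1.2)
def garbage_score(x): return tri(x, 0.0, 0.4, 0.8)

def classify(x):
    return "command" if command_score(x) > garbage_score(x) else "garbage"

print(classify(0.9))  # command
print(classify(0.3))  # garbage
```

The decision boundary is now per-input (wherever the two memberships cross) rather than a fixed defuzzified-value cutoff, which is the property the garbage model exploits.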

If A is PS and B is ZO then Out is PB
If A is ZO and B is PS then Out is ZO
If A is PS and B is PS then Out is PM
If A is ZO and B is PM then Out is ZO
If A is PS and B is PM then Out is PB
If A is ZO and B is PB then Out is PS
If A is PS and B is PB then Out is PS
If A is PS and B is ZO then Out is PS
If A is PB and B is ZO then Out is PB
If A is PS and B is PS then Out is ZO
If A is PB and B is PS then Out is PB
If A is PS and B is PM then Out is ZO
If A is PB and B is PM then Out is PM
If A is PS and B is PB then Out is PB
If A is PB and B is PB then Out is PS
To resolve this difficulty, fuzzy rules for the garbage gesture are generated, as shown in Table 5, using the same features. A genetic algorithm (GA) is adopted to optimize the membership functions because of its global optimization capability and robustness 25. The lengths of the left and right sides and the center point of each membership function are the parameters to be optimized by the optimization rule in Eq. (1). Figure 12 shows the membership functions optimized by the proposed method and the resulting recognition error rates. Observe that user #5 shows a particularly high error rate. This is a person-dependence problem: the same model has been applied to different users, but each user's gesture characteristics differ from one another.
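A minimal sketch of GA-based membership tuning, assuming a single triangular membership and a handful of invented labeled samples (the real system optimizes the left side, right side, and center of every membership function in the rule base):

```python
import random

def tri(x, l, c, r):
    if x <= l or x >= r:
        return 0.0
    return (x - l) / (c - l) if x < c else (r - x) / (r - c)

# Labeled samples: (feature value, 1 if command else 0). Invented toy data.
SAMPLES = [(0.9, 1), (0.85, 1), (0.7, 1), (0.3, 0), (0.2, 0), (0.5, 0)]

def fitness(params):
    """Classification accuracy of one triangular command membership,
    accepting an input when its membership exceeds 0.5."""
    l, c, r = params
    if not (l < c < r):
        return 0.0
    hits = sum((tri(x, l, c, r) > 0.5) == bool(y) for x, y in SAMPLES)
    return hits / len(SAMPLES)

def ga(pop_size=30, gens=60, seed=0):
    """Elitist GA: keep the better half, breed children by blend crossover
    with Gaussian mutation of the (left, center, right) parameters."""
    rng = random.Random(seed)
    pop = [sorted(rng.uniform(0, 1.5) for _ in range(3))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            children.append(sorted((x + y) / 2 + rng.gauss(0, 0.05)
                                   for x, y in zip(a, b)))
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
```

Because the fitness landscape over membership parameters is multimodal, a population-based global search of this kind is less likely to stall in a poor local optimum than a purely local method.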
Thus, a personalized, individually optimized system is desired, and for this, a two-stage user adaptation technique has been adopted. For the first stage of user adaptation, a GA is applied because of its strong search capability; this adaptation is done in advance, before the system starts, and the initial parameter values are obtained from its optimized result. However, we find that the gesture characteristics of a single user change across environments, so additional adaptation is required during operation. This is the second stage of user adaptation, for which the steepest descent method has been adopted because of its speed 26. The adaptation rule and cost function J are described in Eq. (2), where D is a defuzzified value. The parameter-updating equations are described in Fig. 13. This adaptation is performed incrementally, with the learning rate chosen to prevent performance deterioration caused by the user's accidental gestures. The experimental results are described in Table 6 and Table 7. They were conducted for the previous soft remote control system, comparing a fuzzy model with a single threshold value, the fuzzy garbage model, and the adapted fuzzy garbage model. Of the 75 data patterns collected from five different users, 25 are used for training and 50 for testing.
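The second-stage, incremental steepest-descent adaptation can be sketched on a single parameter. The triangle bounds, learning rate, and target value here are illustrative assumptions, and the gradient of the cost is taken numerically rather than from the closed-form updating equations of Fig. 13.

```python
def tri(x, l, c, r):
    if x <= l or x >= r:
        return 0.0
    return (x - l) / (c - l) if x < c else (r - x) / (r - c)

def adapt_center(c, x, target, l=0.4, r=1.2, lr=0.05, eps=1e-4):
    """One steepest-descent step on the center of a triangular membership,
    minimizing J = (target - mu(x))^2. The small learning rate guards
    against drift caused by an accidental gesture."""
    def J(center):
        return (target - tri(x, l, center, r)) ** 2
    grad = (J(c + eps) - J(c - eps)) / (2 * eps)  # numerical gradient
    return c - lr * grad

c = 0.9
for _ in range(200):          # the user repeatedly performs the gesture
    c = adapt_center(c, x=0.7, target=1.0)
```

After repeated observations of the user's own gesture at feature value 0.7, the center drifts toward it, so the membership of that user's gesture approaches 1, which is the personalization effect of the second stage.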

If A is PB and B is ZO then Out is ZO
If A is PB and B is PS then Out is PS
If A is PB and B is PM then Out is ZO
If A is PB and B is PB then Out is ZO
If A is PS and B is PS then Out is PM
If A is PS and B is PM then Out is PM
If A is PS and B is PB then Out is PM
The results are described in terms of recognition rate (RR), false negative (FN) error, and false positive (FP) error. As shown in Table 6 and Table 7, the recognition rate of the garbage model with adaptation is the highest. We remark that, when a command gesture is mixed with many other similar gestures, the recognition rate may degrade considerably, because it is difficult to find features and fuzzy rules that discriminate the command gesture from the many similar ones. Therefore, a complementary method is required to enhance the discrimination capability of the system.
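To make the seven rules listed above concrete, here is a minimal sketch of how they could be evaluated: min for the fuzzy AND, then a weighted average of output-label centers as the defuzzified value D. The label centers, the common membership width, and the zero-order Sugeno-style defuzzification are illustrative assumptions; the paper's actual membership functions are GA-optimized.

```python
# Assumed representative centers for the linguistic labels on a [0, 1] scale.
CENTERS = {"ZO": 0.0, "PS": 0.33, "PM": 0.67, "PB": 1.0}

def mu(label, x, width=0.33):
    """Symmetric triangular membership around the label center."""
    return max(0.0, 1.0 - abs(x - CENTERS[label]) / width)

RULES = [  # (A label, B label, Out label) -- the seven rules above
    ("PB", "ZO", "ZO"), ("PB", "PS", "PS"), ("PB", "PM", "ZO"), ("PB", "PB", "ZO"),
    ("PS", "PS", "PM"), ("PS", "PM", "PM"), ("PS", "PB", "PM"),
]

def infer(a, b):
    """Min for AND; weighted average of output centers gives the defuzzified D."""
    num = den = 0.0
    for la, lb, lo in RULES:
        w = min(mu(la, a), mu(lb, b))  # firing strength of this rule
        num += w * CENTERS[lo]
        den += w
    return num / den if den else 0.0
```

For example, a large A with B near zero fires only the first rule, so the output collapses to ZO, which is the garbage-rejection behavior the rule base encodes.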

Probabilistic Fuzzy Rule-based Learning Subsystem for HRI in Uncertain Context
Finally in this Section, we discuss HRI in an uncertain context, which can be considered an indirect human-machine interface in the ISH. Recall that the user can generally express his/her intention directly to the target machine or robot through a direct and well-defined set of commands or behaviors. Recall also that, due to the inconsistency and time-variance of human expressions as input to the HRI Module, direct interpretation of the input for proper interaction can be erroneous, and sometimes impossible without proper learning. It is therefore necessary to have an advanced form of human-machine interface that can infer the user's intentions by indirectly observing his/her behavioral patterns, or that can respond in the long run to inconsistent commands and a changing environment through proper learning. Learning is essential for interaction in an uncertain context, and it allows the robot to render services intelligently.

System Overview
The learning system is a subcomponent of the steward robot Joy, as shown in Fig. 2, and is applied to learning the resident's behavioral patterns. One of the key features of the steward robot is its learning capability. The learning system collects data on the behavioral patterns of the resident from the available sensory information, converts them into an appropriate knowledge form, and utilizes the acquired knowledge to control target devices. In general, an appropriate learning model and learning algorithm are selected based on analysis of the learning target. In most pattern classification/learning problems, data patterns with well-separable classes are considered as the learning target. However, human behavioral patterns and some physiological bio-signals (e.g., EEG, EMG, ECG, etc.) show complex characteristics such as high dimensionality, nonlinear coupling of attributes, subjectivity, apparent inconsistency, susceptibility to environments and disturbances, and time-variance as well as situation-dependency 2 , and therefore it is difficult to handle them with a complete mathematical model.
Note that, in practical situations, the measurement data available from sensors for behavioral pattern monitoring are limited, and thus I/O training examples can be sparse and may contain apparently inconsistent examples. Therefore, we select the PFRB (probabilistic fuzzy rule base) as the knowledge representation to handle such inconsistent characteristics of the target data pattern. Also, we have proposed an IFCS (Iterative Fuzzy Clustering with Supervision) algorithm to extract a meaningful PFRB from numerical data patterns. 26 Furthermore, to handle the time-varying characteristics of the target data pattern, a life-long learning structure has been designed with an adaptation scheme based on four interconnected functional memory blocks: STM (Short-Term Memory), ITM (Interim-Transition Memory), LTM (Long-Term Memory), and ABM (Action-Buffer Memory). Figure 14 shows the overall architecture of the proposed learning system. The proposed learning system can be applied to a home appliance control system in the ISH. As an example, in this paper we introduce the learning system as applied to a TV viewing genre/channel recommendation system. The learning system can recommend to the resident his/her favorite TV genres/channels, in a sequential order, based on the acquired probabilistic fuzzy rule-based knowledge. The detailed intelligent techniques are explained in the next Section.
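A skeleton of the four-memory flow might look as follows. The capacity, the trivial frequency-count "rule base", and the consolidation policy are placeholders standing in for the IFCS learning step and the similarity-based model comparison described later; only the STM → ITM → LTM → ABM data flow is meant to mirror the architecture of Fig. 14.

```python
from collections import deque

class LifelongLearner:
    """Sketch of the STM/ITM/LTM/ABM pipeline (capacities and the
    extraction step are illustrative placeholders)."""

    def __init__(self, stm_capacity=50):
        self.stm = deque(maxlen=stm_capacity)  # recent raw training examples
        self.itm = []    # candidate rule bases awaiting consolidation
        self.ltm = []    # consolidated (long-term) models
        self.abm = None  # model currently used for control

    def observe(self, example):
        self.stm.append(example)
        if len(self.stm) == self.stm.maxlen:   # STM full: run a learning step
            self.itm.append(self.extract_rule_base(list(self.stm)))
            self.stm.clear()

    def extract_rule_base(self, examples):
        # Placeholder for IFCS: just summarize output-label frequencies.
        counts = {}
        for _, label in examples:
            counts[label] = counts.get(label, 0) + 1
        return counts

    def consolidate(self):
        # ITM -> LTM transfer; ABM holds the most recently consolidated model.
        while self.itm:
            self.ltm.append(self.itm.pop(0))
        if self.ltm:
            self.abm = self.ltm[-1]
```

Separating a volatile STM from a consolidated LTM is what lets the system learn continuously without overwriting knowledge that is still valid.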

Sub-Intelligent Technique in the Probabilistic Fuzzy Rule-based Learning System: A Life-long Learning System with Multiple Probabilistic Fuzzy Models
While fuzzy logic is effective for dealing with linguistic uncertainty in the antecedent part of a rule, probability theory is effective for handling probabilistic uncertainty in the consequent part. In many fuzzy rule-based systems, a pre-defined homogeneous (grid-like) fuzzy partition is employed initially, and a set of fuzzy rules for the partitioned input space is then acquired from numerical input-output data. Note that partitioning can affect system performance significantly. A well-partitioned input space, for example, can induce a reduced set of rules that describes the given data pattern with high interpretability, whereas an ill-partitioned input space may generate redundant rules, some of which can even be conflicting or inconsistent. Thus, a methodology for extracting fuzzy rules with a meaningful, self-organized fuzzy partition is desirable. We remark that the target data pattern, such as a human behavioral pattern, is usually intermingled with both inseparable and separable data groups because of its probabilistic and time-varying characteristics. In this case, we find that conventional unsupervised fuzzy clustering such as FCM 27,28 shows lower performance because it cannot correctly extract the separability information of a cluster. In this context, we think that an effective combination of an unsupervised learning process with a proper supervisory scheme can help in searching for general regularities in data patterns and, in particular, in finding more separable and analyzable groups of geometrical shapes.
More specifically, the IFCS learning algorithm iteratively extracts a meaningful PFRB from a set of numerical training examples in view of separability. The learning system starts with a fully unsupervised learning process using the FCM clustering algorithm and a cluster validity criterion 29 , and then gradually constructs meaningful fuzzy partitions over the input space. The corresponding rules with probabilities are obtained through an iterative learning process of selective clustering with supervisory guidance based on cluster pureness and class separability. If separable classes are found during the learning process, the selected cluster is re-clustered; if not, probabilistic information is extracted from the selected cluster. Figure 15 shows the learning procedure of the IFCS algorithm.
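The spirit of this loop can be sketched as: cluster, check each cluster's pureness, re-cluster impure clusters that may still be separable, and otherwise emit a probabilistic rule. The toy version below substitutes 1-D k-means for FCM and a fixed purity threshold for the paper's pureness/separability criteria; both substitutions, and the recursion depth limit, are simplifications.

```python
def kmeans_1d(points, centers, iters=20):
    """Toy 1-D k-means standing in for FCM; points are (x, class) pairs."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x, y in points:
            j = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
            groups[j].append((x, y))
        centers = [sum(x for x, _ in g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return groups

def class_probs(group):
    """Empirical class probabilities within one cluster."""
    counts = {}
    for _, y in group:
        counts[y] = counts.get(y, 0) + 1
    return {y: c / len(group) for y, c in counts.items()}

def ifcs(points, centers, purity_threshold=0.9, depth=2):
    """Simplified IFCS loop: pure (or no-longer-separable) clusters emit a
    probabilistic rule; impure clusters are selectively re-clustered."""
    rules = []
    for g in kmeans_1d(points, centers):
        if not g:
            continue
        probs = class_probs(g)
        if max(probs.values()) >= purity_threshold or depth == 0:
            center = sum(x for x, _ in g) / len(g)
            rules.append((center, probs))  # rule: "near center -> class probs"
        else:
            lo, hi = min(x for x, _ in g), max(x for x, _ in g)
            rules += ifcs(g, [lo, hi], purity_threshold, depth - 1)
    return rules
```

Note how an inseparable cluster, such as two coincident examples with different labels, ends up as a single rule whose consequent carries the class probabilities rather than a forced crisp label.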
Another aspect to be considered is that the behavioral pattern of the resident may change over time. Also, the acquired knowledge can be evaluated by the target user and then refined into more reliable knowledge through incessant learning and control. Referring to learning throughout the entire lifespan of a system 30 , we may use the terminology "life-long learning" or continuous learning. Grossberg asserts that, in contrast to a paradigm that only adapts to a changing environment, the notion of life-long learning suggests preservation of previously learned knowledge if it does not contradict the current task. 31 In conjunction with the notion of a human-in-the-loop system, in this paper we use a refined definition of life-long learning for designing a practical learning system: a repeated knowledge accumulation process that alternates between an inductive learning process and a deductive learning process throughout the entire lifespan of the system, as shown in Fig. 16. In view of PFRB construction, the inductive learning process constructs and updates the PFRB inductively from pairs of accumulated training examples, monitored over a period, using the IFCS learning algorithm. The deductive learning process also modifies the PFRB: the learning system provides control output for a given input condition using the acquired PFRB and modifies the PFRB according to the user's feedback. In both learning processes, a judging process (or decision maker) decides between adding a rule/rule base and modifying (or deleting) an existing rule/rule base in response to an incoming rule/rule base.
The proposed learning structure is shown in Fig. 17. There have been several attempts to design a learning system from the viewpoint of memory structure, in reference to a human cognitive learning model. It is instructive to refer to Hawkins 31 , who asserts that, if an intelligent machine is ever to behave like a human, it should have a memory structure functionally similar to the neocortex of the human brain. Also, some required memory blocks in a logical architecture of memories in the brain have been proposed. 32 The proposed life-long learning structure is based on multiple probabilistic fuzzy models, employing the STM, ITM, LTM, and ABM. We find that it is difficult to model a human behavioral pattern with a single probabilistic fuzzy rule base; here, a probabilistic fuzzy model denotes a probabilistic fuzzy rule base. Therefore, the accumulated knowledge is expressed as a collection of multiple probabilistic fuzzy models, that is, a probabilistic fuzzy model base.
We first consider the inductive learning process, as shown in Fig. 17 (a). A PFRB is generated by the IFCS algorithm in the STM and then transferred to the ITM. Based on the model base construction scheme, which maximizes dissimilarities between the constructed models, the PFRB is adapted into the model base in the LTM using a similarity-based model comparator. After a model is constructed in the LTM, the deductive learning process is activated, as shown in Fig. 17 (b). We adopt a model estimator for rapid selection of an appropriate model among the existing models in the LTM. The value of the model estimator is updated by calculating the compatibility between the incoming training examples and the existing models. The model selected for control by the model estimator is transferred from the LTM to the ABM. The learning system provides a classification result for a test instance using the PFRB in the ABM. Then, using a training example obtained from feedback, the value of the model estimator is updated. An on-line adaptation scheme is applied in the ABM in parallel with updating the model estimator: the selected model in the ABM is continuously adapted by the on-line adaptation scheme, but it is discarded if a new model is selected by the model estimator. Even though the proposed learning system was originally designed to handle inconsistent data, we first tested it on well-separable benchmark data from the UCI Machine Learning Repository. By 10-fold cross validation, we obtained 95.1 % (2 PFRs), 95.4 % (3 PFRs), and 97.0 % (3 PFRs) for the Iris, Wine, and Wisconsin Breast-Cancer data sets, respectively.
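The model estimator's compatibility bookkeeping can be sketched as an exponentially weighted score per LTM model, updated from feedback examples and used to pick the ABM model. The forgetting factor, the table-lookup form of the models, and the input labels below are illustrative assumptions.

```python
class ModelEstimator:
    """Keeps a running compatibility score per LTM model and selects the
    most compatible one for the ABM (forgetting factor is illustrative)."""

    def __init__(self, models, forgetting=0.9):
        self.models = models          # list of {input_label: {class: prob}}
        self.scores = [0.0] * len(models)
        self.forgetting = forgetting

    def update(self, input_label, observed_class):
        """Blend each model's probability for the observed feedback example
        into its score (exponentially weighted toward recent behavior)."""
        for i, model in enumerate(self.models):
            p = model.get(input_label, {}).get(observed_class, 0.0)
            self.scores[i] = (self.forgetting * self.scores[i]
                              + (1 - self.forgetting) * p)

    def select(self):
        """Index of the model to place in the ABM for control."""
        return max(range(len(self.models)), key=lambda i: self.scores[i])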
To evaluate the proposed learning system as a TV genre/channel recommendation system, TV viewing data collected from 25 persons over one year have been used. We obtained an average success rate of 80.63 % for genre selection and 69.98 % for channel selection with 5 probabilistic fuzzy models, including the second most probable class, in the presence of highly inconsistent data. We also found that the learning system performs better on repeated/periodic data patterns. The experimental results show that the learning system can recommend favorite TV genres/channels to the resident with a high degree of satisfaction within two or three trials, which is very useful in a practical system. Note that one of the merits of a probabilistic fuzzy rule-based approach is that it can provide probable outputs sequentially according to the probabilities of each class in the PFRB.
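The sequential-output merit noted above amounts to ranking a rule's consequent classes by their probabilities and offering them in order. A minimal sketch, with hypothetical genre probabilities:

```python
def recommend(probs, k=3):
    """Return up to k output classes in decreasing consequent probability,
    for sequential recommendation; `probs` is one rule's consequent."""
    return [g for g, _ in sorted(probs.items(), key=lambda kv: -kv[1])][:k]

# Hypothetical consequent of one probabilistic fuzzy rule:
viewing = {"news": 0.45, "drama": 0.30, "sports": 0.15, "music": 0.10}
ordered = recommend(viewing)
```

With a ranking like this, a miss on the first suggestion still leaves the second and third most probable classes available, which is why a few trials usually suffice.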

Concluding Remarks
Interaction between human and robot in a smart house environment takes place frequently, in various forms, under different situations and contexts. When special considerations are needed for the resident of the house, as for older persons or people with physical disabilities, a human-friendly design approach is preferred, in the sense that the robotic system should adapt to the human rather than the human being trained for expert operation of the robotic system. Machine intelligence is a crucial factor for such a design approach. In this paper, we have proposed that, for the robot system, interaction be considered as input and output of the HRI module, and that the robotic response be designed as a system output in accordance with three different situations: crisp context, fuzzy context, and uncertain context. A successful example of human-robot interaction has been provided for each case.
The configuration of interaction between human and robot in terms of contextual situations is an attempt to be studied further in conjunction with the rich body of existing research results, and we believe that there will be active studies in search of effective and efficient HRI structures hybridized with various techniques of computational intelligence and with context theories 33 .