An Implementation of Seamless Human-Robot Interaction for Telerobotics

Achieving human-robot cooperation in a telerobotics system is the focus of this paper. The implementation of human-robot systems can be extremely challenging when the robot is not directly controlled by the human. The interaction mode, depending on the task context, can be continuous manual, semi-autonomous or autonomous. To address the Human-Robot Interaction (HRI) issues in such a system, a concept of seamless HRI is introduced. Seamless HRI implies flexibility in human control in interacting with a robot in different situations, and the adaptability of the robot autonomy in response to the human control. The main idea is to design a telerobotics system that allows a shift from manual to autonomous operation, dynamically, via different human-robot roles and relationships. These roles are Master-Slave, Supervisor-Subordinate, Partner-Partner, Teacher-Learner and a fully autonomous mode by the robot. This paper presents the theoretical foundations and the requirements for seamless HRI. An implementation of the concept and an experimental evaluation are presented.


Introduction
Technological advancement and a significant reduction in the cost of devices that compute, sense, and actuate make the integration of various robot entities into people's daily lives a reality. Physical mobile robotic agents are now poised to become part of our everyday landscape, with viable applications in the workplace, in the home, in the hospital, in remote and hazardous environments, and on the battlefield. It is clear that this trend will continue to produce more competent robotic agents for different applications to help improve the 'quality of life'. The current state of human interaction with robots in a Human-Robot System (HRS) is quite different from that with directly teleoperated "machines" that operate in structured industrial environments. Robots differ from these machines in that they are capable of functioning in evolving situations, reasoning and acting in a relatively complex domain, and adapting to some extent on the basis of human feedback or to their own environment (Lopes & Connell 2001). In spite of promising research from the fields of robotics and artificial intelligence (AI), it is apparent that attempts to develop and employ intelligent autonomous robots have not succeeded in overcoming limitations in what the robot can perceive and reason about without human input (Lopes & Connell 2001, Murphy & Rogers 2001). The primary reason is the high degree of complexity of perception and motion required of a robot in an unstructured and dynamic environment. This makes research in robotics and AI extraordinarily difficult. The complexity arises in part from the need for a robot to perceive, act and reason about the uncertainty of the environment, under real-time constraints. Although the goal of building truly intelligent autonomous robots has not been achieved, robots in their current state can perform useful tasks (e.g. Casper & Murphy 2003) and can provide appropriate assistance to the human, correcting his control input errors by supporting perception and cooperative task execution (Burke et al. 2004, Fong 2001). Systems that enable a robot to cooperate with a human are available, and are attracting increasing attention from researchers. This leads to an important research issue in the design and development of an HRS: how should the human and the robot "interact" so as to cooperatively accomplish task objectives (Burke et al. 2004)? This issue embraces an emerging research area in robotics, namely Human-Robot Interaction (HRI).

Concept of Seamless Human-Robot Interaction
Consider an HRS where a human and a robot are requested to perform assigned tasks. In order to adapt to changes in tasks, the work environment and the differing capabilities of the human or the robot during task execution, both of them need to change their interaction roles to take on different responsibilities. This presents new challenges. In remote operation applications such as space exploration, military operations, automated security, search and rescue, etc., the human does not have direct visual awareness of the environment in which the required tasks are performed. In these applications, a tight interaction between the human and the robot is required for effective cooperation. This raises an interaction dilemma. On the one hand, the robot operating in the remote environment can be expected to be in a "better position" to react locally to that environment, and should refuse erroneous human commands that may result in collisions with obstacles. On the other hand, due to its limited ontologies, the robot requires human assistance on tasks such as object recognition and decision-making. To overcome this interaction dilemma, adopting appropriate roles that exploit the capabilities of both the human and the robot, as well as crafting natural and effective modes of interaction, is important to creating a cooperative HRS. To this end, innovative paradigms have been proposed over the years to redefine the roles of the human and the robot. The traditional master-slave relationship (Hancock 1992, Sheridan 1992) is being refined into a model of the human as cooperator (e.g. Bruemmer et al. 2003, Fong 2001), supervisor (e.g. Bourhis & Agostini 1998, Sheridan 1992) or teacher (e.g. Nicolescu & Matarić 2001), rather than just as the master controller of the robot. The slave robot, in turn, is modelled as an active assistant/partner (e.g. Bruemmer et al. 2003, Fong 2001), subordinate (e.g. Bourhis & Agostini 1998, Sheridan 1992) or learner (e.g. Nicolescu & Matarić 2001) of the human, supporting perception and cooperative task execution. All of the above human-robot roles and relationships are important, since each stresses a different aspect of HRI. It would be beneficial to formulate a framework that characterises how a human and a robot cooperate based on these human-robot roles. An understanding of the nature of the interactions of these roles can lead to the identification and classification of different HRI control strategies. This is important for effective cooperation, where the human and the robot can engage in different roles based on their task capabilities to meet new and unexpected task situations. The types of tasks concerned here are those involving reasoning, sensing and acting, which draw on their capabilities to perform these functions. The abovementioned concept is depicted in Fig. 1, in accordance with the different human-robot roles discussed in Ong et al. (2005). A summary of these roles is presented in Table 1. The term seamless refers to flexibility in human control in interacting with a robot in different situations and the adaptability of the robot autonomy in response to human control. Here, the term robot autonomy is defined as "the ability of an agent (robot) to act efficiently without any human's intervention" (adapted from Braynov & Hexmoor 2002). Stating that a robot is autonomous does not mean that the robot is thoroughly self-governing and capable of complete self-planning and self-control. Rather, it is expected to be able to operate with some level of capability in the absence of human supervision or management for a defined period of time (Jackson et al. 1991). In this context, flexibility means the ability of the human to perform different aspects of the HRS task easily. Adaptability means adjustment of the robot autonomy for performing the task. A robot should be able to carry out its processes no matter what disturbances occur in the task environment (Lopes & Connell 2001). In this manner, both the human and the robot may work together more coherently to ensure a high level of system performance and the satisfaction of task demands.

Table 1 (excerpt). Roles and characteristics. Master-Slave: to let the robot mimic the human's actions exactly in performing a task.

Paper Overview
In this paper, the main concern for the design and development of a cooperative HRS is the achievement of seamless HRI via different human-robot roles (Fig. 1). The type of HRS addressed here is a telerobotics system, in which the robot is not directly teleoperated throughout the complete work cycle, but can operate in continuous manual, semi-autonomous or autonomous modes depending on the situation. This paper is structured as follows. Section 2 first discusses the theoretical foundations and outlines the requirements for seamless HRI. To study how this concept can be applied, it is then employed in the implementation of a telerobotics system in Section 3. This is followed by a report of the experimental evaluation conducted to assess seamless HRI in Section 4. To distinguish our work from other research that allows dynamic HRI, Section 5 provides a discussion of recent related work in the domains of HRI and social robotics. Finally, a discussion regarding the concept of seamless HRI in the modelling of an HRS is presented in Section 6.

Theoretical Foundations
Within a human-human team, participants normally engage in different roles based on their task skills and change their interaction roles to meet new and unexpected challenges. When a human team interacts to perform a common task, tight coordination of their actions is needed to achieve seamless cooperation. They cooperate by coordinating their actions through monitoring of each other's actions and sharing of information pertaining to the task process. This information includes task procedures, task assignment, task status, the coordination protocol and so forth. Each team member monitors the others' actions so as to be aware of the present state of the current task execution, and of which action is necessary. Throughout the task execution, their interaction roles may change depending on individual performance and in accordance with different situations. Appropriate actions may be taken to assist one another in correcting each other's errors/mistakes, or at least to alert the other members to their mistakes. The primary motivation for seamless HRI is to achieve a resemblance of the human-human coordination described above. The approach adopted here is based on the concept of sharing and trading from Ong (2006): a human-robot cooperation concept that allows the human and the robot to work as a team by letting them contribute according to their degree/level of expertise in different task situations and demands. This concept considers not only how a robot might assist the human but also how the human might assist the robot. Through this, a spectrum of HRI modes ranging from "no assistance provided to the human by the robot" to "no assistance provided to the robot by the human" can be envisaged to address contingencies that emerge when the human and the robot work together during task execution (Fig. 2). As a consequence, it facilitates the achievement of seamless HRI. As shown in Fig. 2, to characterise the five human-robot roles and relationships (Table 1), four discrete levels of interaction modes are defined, namely manual mode, exclusive shared mode, exclusive traded mode and autonomous mode. The characteristics of these four interaction modes are depicted in Table 2, in accordance with the implementation in Ong (2006). Here, the term "exclusive" is used to highlight that the shared mode is exclusively envisaged to let the robot assist the human, while the traded mode is exclusively envisaged to let the human assist the robot.

Autonomous Mode
The human is responsible only for relatively long-term planning. Once the control system is set up, essentially all of the robot control is autonomous; the human can monitor but cannot influence the low-level process.

Exclusive Traded Mode
Here, control is delegated to the robot while the human typically assumes a monitoring role. The human may resume (i.e. trade) control from the robot when it encounters any problems. The robot's competence includes the capabilities to choose its own path, to respond intelligently to the environment, and to accomplish local goals using a sequence of behaviours.

Exclusive Shared Mode
The human and the robot can control different aspects of the system concurrently. The robot has the same basic competence as in the exclusive traded mode. Although the robot only handles the low-level task, the human may intermittently control the robot by closing a command loop. The human may control some variables while the robot performs the remaining operations.

Manual Mode
The robot takes no initiative except to stop when communications break down. The human is responsible for every action taken by the robot. The robot can be configured to take basic initiative to protect itself by assessing its status and surrounding environment to decide whether the commands issued by the human are safe.

Table 2. A summary of the interaction modes in Fig. 2
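The mode-to-behaviour mapping implied by Table 2 can be sketched in a few lines. The behaviour names and the exact mapping below are illustrative assumptions, not the paper's actual process set; the paper's implementation lowers the priority of unused behaviours rather than switching them off.

```python
from enum import Enum, auto

class InteractionMode(Enum):
    MANUAL = auto()            # Master-Slave: human responsible for every action
    EXCLUSIVE_SHARED = auto()  # Partner-Partner: robot assists the human
    EXCLUSIVE_TRADED = auto()  # Supervisor-Subordinate: human monitors/assists robot
    AUTONOMOUS = auto()        # robot fully autonomous; human only monitors

# Hypothetical mapping from mode to active robot behaviours.
ACTIVE_BEHAVIOURS = {
    InteractionMode.MANUAL:           {"safety_stop"},
    InteractionMode.EXCLUSIVE_SHARED: {"safety_stop", "obstacle_avoidance"},
    InteractionMode.EXCLUSIVE_TRADED: {"safety_stop", "obstacle_avoidance",
                                       "path_planning"},
    InteractionMode.AUTONOMOUS:       {"safety_stop", "obstacle_avoidance",
                                       "path_planning", "navigation", "localisation"},
}

def is_active(mode, behaviour):
    """Return True if the given behaviour runs in the given mode."""
    return behaviour in ACTIVE_BEHAVIOURS[mode]
```

Moving along the continuum then amounts to switching the mode key, which changes the set of behaviours that can influence the robot.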

Continuum of Interaction Modes
Fig. 2 arranges the interaction modes along a continuum according to the five human-robot roles and relationships characterised in Fig. 1:
Manual Mode: based on the Master-Slave relationship (no assistance provided to the human by the robot); suited to tasks that are best performed by the human.
Exclusive Shared Mode: based on the Partner-Partner relationship, to let the robot provide assistance to the human; within this mode, the Teacher-Learner relationship can be adopted. Suited to tasks that require constant or frequent cooperation between the human and the robot.
Exclusive Traded Mode: based on the Supervisor-Subordinate relationship, to let the human provide assistance to the robot; within this mode, the Teacher-Learner relationship can be adopted for teaching the robot. Suited to tasks that require temporal cooperation between the human and the robot.
Autonomous Mode: based on the Fully Autonomous relationship (no assistance provided to the robot by the human); suited to tasks that are best performed by the robot.
The reason for placing the shared mode below the traded mode in Fig. 2 is that it requires more human control involvement. That is, in the exclusive shared mode, the human is required to work together with the robot by providing continuous or intermittent control input during task execution. In the exclusive traded mode, on the other hand, once the task is delegated to the robot, the human's role is more one of monitoring than of controlling, and it does not require "close" human-robot cooperation as the exclusive shared mode does.
Therefore, the interactions between the human and the robot in this mode resemble the supervisor-subordinate paradigm rather than the partner-partner-like interaction of the exclusive shared mode. However, this does not mean that these two primary modes employ pre-defined robot autonomy. Within these two modes, a range of sub-modes can be incorporated with varying degrees of robot autonomy to provide a finer grain of HRI as needed for particular applications (Ong 2006).

Requirements
To achieve seamless interaction between the human and the robot based on the HRI modes depicted in Fig. 2, Ong (2006) posits four essential requirements: each party must infer or perceive (i) the capabilities and limitations of its team member (i.e. its counterpart); (ii) the action(s) of its team member; (iii) the goals or intentions of its team member; and (iv) the status of its team member. First, these four requirements pertain to the shared representation between the human and the robot. Second, they highlight the need for both the human and the robot to monitor each other's actions and states so as to develop and update a task model of each other. Third, they may need to alter and negotiate their interaction strategies in accordance with the task or with each other's performance. Hence, both the human and the robot need to know and update (i.e. learn) the shared representation from the interaction. To operate efficiently in response to changing HRI situations, the underlying HRS should be designed to facilitate a shift from manual to autonomous operation dynamically (Fig. 2). Within this manual-autonomous continuum, the human-robot team is allowed to engage in tightly coordinated HRI to promote team cooperation. To support this, the system must be able to resolve any arising conflicts flexibly and dynamically. Fig. 3 provides an overview of the above requirements by highlighting the essential points and their dependencies. A brief overview of the requirements on shared representation, monitoring and conflict resolution is presented here to provide context for the implementation and evaluation of the concept of seamless HRI in Sections 3 and 4 respectively.

Requirement 1: Shared Representation
The key to shared representation between human and robot is communication.
In human-human communication, humans communicate with each other easily using the same language. They can communicate through electronic communication devices or face-to-face. Human-robot communication is not that straightforward, because the human cannot communicate with the robot directly.
A well-defined communication framework is required to address the different modes of interaction between the human and the robot. Hence, to facilitate role changes during HRI, the communication framework for changing responsibilities, or the level of task interaction, must support both the requesting and the accepting of a change in interaction mode. This requires a shared frame of reference (i.e. a shared representation). This is supported by the work of Breazeal et al. (2004) in social robotics, who state that shared communication is required for coordinating human-robot roles and actions, because it serves to establish and maintain a set of "common ground" (e.g. the current state of the task and the responsibilities of each teammate) for effective HRI. In the context of human-to-robot communication, the human needs to be given an effective mechanism for joint interaction with the robot's surroundings, so as to command and advise the robot. The style of interaction with the robot should be consistent in terms of human control input, the forms of the robot's output, and the communicative roles held by the human and the robot. From the perspective of robot-to-human communication, the communication tools should provide mechanisms that allow the robot to convey its states, intents, actions and goals to the human. They should also facilitate attracting the human's attention when the robot requires assistance. The robot's actions should be observable, and the reasons for those actions should be clearly explicated in human-understandable form, via speech or visual forms. As each action taken by the robot is embedded in a context, the action perceived and the most salient aspects of the context must be represented. The intention is to facilitate human monitoring (Ong et al. 2004).
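As a minimal sketch of the request/accept handshake such a framework must support, a mode change might be negotiated as follows; the class and method names are assumptions for illustration, not the paper's actual interface.

```python
class ModeChangeChannel:
    """Both parties share this channel. A mode change takes effect only
    after the counterpart accepts, preserving the common ground about
    who currently holds which responsibility."""

    def __init__(self, initial_mode="manual"):
        self.mode = initial_mode
        self.pending = None  # (requester, requested_mode) or None

    def request(self, requester, new_mode):
        """Either 'human' or 'robot' may request a change of responsibilities."""
        self.pending = (requester, new_mode)

    def respond(self, responder, accept):
        """The counterpart accepts or rejects the pending request."""
        if self.pending is None or responder == self.pending[0]:
            return self.mode  # nothing pending, or self-acceptance not allowed
        if accept:
            self.mode = self.pending[1]
        self.pending = None
        return self.mode
```

For example, the robot might request a shift to the exclusive traded mode when the human repeatedly issues unsafe commands; the shift occurs only once the human accepts.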

Requirement 2: Monitoring
To be an active team member, both the human and the robot need to monitor the behaviours of one another. To provide appropriate assistance without commanding the robot beyond its abilities, the human must be able to monitor the robot so as to be aware of its capabilities and limitations. If possible, it is useful if the monitoring task can be viewed from different sensory perceptions. However, this requires that the underlying display interface be able to generate and integrate multiple perspectives and representations from the robot.
To monitor the human's behaviour so as to provide appropriate assistance, the robot must perceive and reason about the human's actions and intentions in any particular situation. Ideally, the human can explicitly convey his action and intention to the robot by means of menu-based control interfaces, preferably with haptic feedback. In this way, the robot can perceive the action and intention of the human directly, without any uncertainty. This is called explicit communication. If the action and intention of the human cannot be accessed directly, the robot needs to infer them. This is called implicit communication.
For example, a human teleoperating a mobile robot may give "inaccurate" control signals to the robot due to poor video feedback or perception. In this case, based on an interpretation of its sensor information about the environment together with the human control signal, the robot must infer the human's action and intention and make its own decisions to reach the desired goal safely. The challenge here is to find an approach for this implicit communication that captures the human's action and intention. Approaches to implicit communication can be loosely classified into model-based and behaviour-based paradigms (Demeester et al. 2003). The model-based paradigm requires a formal model of human control behaviours and is normally confined to specific application tasks.
Approaches in the behaviour-based paradigm in robotics are normally based on the coordination of the different possible interpretations of the human's action and intention. Possible coordination mechanisms to achieve this are presented in Requirement 3. A model-based paradigm may be preferred over a behaviour-based one for modelling the human control strategy, but it is not suitable for the current implementation. A model-based paradigm requires the robot to adopt a formal model of the human user's control behaviour so as to infer his control actions efficiently. Achieving this requires many experimental trials to generate a reliable human control model. The evaluation reported in Section 4 does not allow such experimental trials; it requires the robot to capture the control actions of different human users in a very short time. Hence, the behaviour-based paradigm is employed to infer the human's control actions based on the user's desired travelling direction and speed from a human-computer interface (e.g. keyboard, joystick or PDA).
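One hedged sketch of inferring the user's desired travelling direction and speed without a formal operator model is to smooth the raw interface commands over time; the smoothing constant and command format below are assumptions, not details from the paper.

```python
class IntentEstimator:
    """Estimate the human's intended translation speed and turn rate from
    noisy joystick/keyboard commands using an exponential moving average,
    so that a single erroneous input does not dominate the inferred intent."""

    def __init__(self, alpha=0.7):
        self.alpha = alpha   # assumed smoothing weight on past estimates
        self.speed = 0.0     # estimated translation speed (m/s)
        self.turn = 0.0      # estimated rotation rate (rad/s)

    def update(self, cmd_speed, cmd_turn):
        # blend the new command with recent history
        self.speed = self.alpha * self.speed + (1 - self.alpha) * cmd_speed
        self.turn = self.alpha * self.turn + (1 - self.alpha) * cmd_turn
        return self.speed, self.turn
```

Feeding a run of consistent forward commands drives the estimate toward the commanded speed, while one spurious reversal barely moves it; the robot can then act on the filtered intent rather than the raw signal.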

Requirement 3: Conflict Resolution
Due to the complex interaction between a robotic system and the real world, the issue of selecting an emergent action arises when different perception-action processes are coordinated. This becomes more difficult when the robot system needs to respond to the human's control actions concurrently while performing the desired tasks. To facilitate this, the system must have a conflict resolution unit, both "locally" in terms of resolving conflicting actions and intentions, and "globally" in terms of coordinating different interaction roles or modes during HRI. Basically, there are two predominant classes of coordination mechanisms in robotics (Saffiotti 1997): arbitration (Brooks 1987) and command fusion (Arkin 1991).

Arbitration Coordination Mechanisms
This class allows one or more processes, at any instant, to take control for a period of time until another process or set of processes is activated. Normally, the arbitrator needs to determine the most appropriate perception-action process for each situation from a group of competing processes. Hence, it is suitable for coordinating a set of processes in accordance with the robot's changing objectives and requirements under dynamic conditions.

Command Fusion Coordination Mechanisms
This class allows multiple processes to combine the actions given by all the processes and generate an output to control the robot. Thus, it facilitates all the processes contributing simultaneously to the control of the robot in a cooperative, rather than competitive, manner. This class is often suited to control problems with multiple objectives. The implementation described in Section 3 adopts a hybrid approach based on these two classes of coordination mechanisms.
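The two classes can be contrasted in a few lines; the (priority, command) and (weight, command) representations below are illustrative assumptions, with commands given as (translation, rotation) pairs.

```python
def arbitrate(proposals):
    """Arbitration: the single highest-priority process takes full control.
    proposals: list of (priority, (v, w)) with v translation, w rotation."""
    return max(proposals, key=lambda p: p[0])[1]

def fuse(proposals):
    """Command fusion: all processes contribute simultaneously; here a
    normalised weighted superposition of the proposed commands.
    proposals: list of (weight, (v, w))."""
    total = sum(weight for weight, _ in proposals)
    v = sum(weight * cmd[0] for weight, cmd in proposals) / total
    w = sum(weight * cmd[1] for weight, cmd in proposals) / total
    return v, w
```

With proposals [(2, (0.6, 0.0)), (1, (0.0, 0.9))], arbitration yields (0.6, 0.0) outright, whereas fusion blends the two into a compromise command, which is why fusion suits problems with multiple concurrent objectives.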

Telerobotics System
The implementation of the mobile telerobotics system is based on the HRI testbed presented in Ong et al. (2004). There are three main subsystems in the telerobotics system (Fig. 4): the human control interface, the communication link and the mobile robot. In our current implementation, the mobile robot's capabilities encompass not only functions for remote operation (e.g. path planning, navigation, localisation, etc.) but also functions such as monitoring human control behaviours in order to provide assistance to the human. The main objective of the robot-assisted scheme is to let the robot provide appropriate guidance (e.g. staying on the path) and obstacle avoidance while performing a navigation task under semi-autonomous control.
The coordination architecture that realises this is depicted in Fig. 5. As shown in Fig. 5, a hybrid approach based on priority-based arbitration and superposition-based command fusion is used to efficiently coordinate both the human and the robot behaviours. Priority-based arbitration is used for control strategy selection by the human. This implies that the human determines which level of interaction mode is suitable for performing a particular task. Through this, the coordinator determines the appropriate robot behaviours to use. This greatly reduces the need to coordinate all the robot behaviours during task execution and hence requires less computation, because the behaviours that are not in use are set to a lower priority (or off) according to their process identity. On the other hand, superposition-based command fusion is used to facilitate semi-autonomous control of the robot by giving the human as much control as possible within a safe limit. In this approach, the command from the human is treated like any other behavioural unit in the robotic system and is incorporated into the robot's overall behaviour. Finally, to ensure safe navigation and respond to the human control dynamically, the robot maintains a short-term memory of the front perception (6 m x 3 m occupancy grid) of its environment based on the sonar and laser sensors (Fig. 6).
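The short-term front-perception memory can be sketched as a decaying occupancy grid. The 6 m x 3 m extent comes from the paper; the cell size, decay factor and evidence increments below are assumed values for illustration.

```python
class ShortTermOccupancyGrid:
    """A 6 m (depth) x 3 m (width) grid in the robot frame. Each sensor
    update decays old evidence, so the memory stays short-term and the
    robot reacts to the current scene rather than stale readings."""

    def __init__(self, depth_m=6.0, width_m=3.0, cell_m=0.25, decay=0.95):
        self.cell = cell_m
        self.decay = decay
        self.rows = int(depth_m / cell_m)   # forward direction
        self.cols = int(width_m / cell_m)   # lateral direction
        self.grid = [[0.0] * self.cols for _ in range(self.rows)]

    def update(self, hits):
        """hits: (x, y) obstacle points in metres, robot frame, x forward,
        y lateral measured from the grid's left edge."""
        for r in range(self.rows):          # fade all previous evidence
            for c in range(self.cols):
                self.grid[r][c] *= self.decay
        for x, y in hits:
            r, c = int(x / self.cell), int(y / self.cell)
            if 0 <= r < self.rows and 0 <= c < self.cols:
                self.grid[r][c] = min(1.0, self.grid[r][c] + 0.4)

    def occupied(self, x, y, threshold=0.5):
        r, c = int(x / self.cell), int(y / self.cell)
        return self.grid[r][c] >= threshold
```

Repeated sonar/laser hits on the same cell push it above the occupancy threshold; once hits stop, the cell fades back below it after a few dozen update cycles.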

Experimental Evaluation
To evaluate seamless HRI arising from ongoing interaction role changes between a human and a robot in performing a desired task, the following hypothesis is established: "A better task performance can be achieved if the human and robot can change their interaction roles dynamically, as compared to the use of a fixed interaction role." To assess the validity of this hypothesis, it is essential that the set of performance criteria used for the experimental evaluation clearly characterises how the task performance is improved. However, the selection of an appropriate set of performance criteria is specific to the application of the system. The primary applications envisaged for the telerobotics system presented in Section 3 are surveillance, reconnaissance, object transportation, etc., in which navigation, obstacle avoidance and target tracking are just some example tasks. Thus, the navigation task has been implemented. For this system, a navigation task of moving from location A to location B can be performed either manually by the human, i.e. via the manual mode, based on the Master-Slave paradigm, where the human is not provided with any robot assistance; or cooperatively by both the human and the robot, i.e. via the exclusive shared mode, based on the Partner-Partner paradigm, where appropriate robot assistance is provided to the human. The intention here is to use both modes to evaluate the hypothesis. However, to achieve the change in interaction roles, both the manual and the exclusive shared modes must be available concurrently during task execution. Based on this, the evaluation of the hypothesis is achieved by comparing the performance obtained from the experiment on interaction role transitions with the experiments on the manual mode and the exclusive shared mode.

Experimental Design and Procedure
A user evaluation was conducted to investigate the hypothesis. The experimental setup was based on our past experience of remote mobile teleoperation evaluation (Tachi et al. 1988). Here, the task was for a human to drive the ATRV-Jr™ mobile robot (Fig. 4) remotely through a test course under the following three experimental conditions:
Experimental Condition A - Manual mode: Apart from providing safe operation, such as an emergency stop in situations such as a communications breakdown or an impending collision with an object, the robot does not provide any other assistance to the human.
Experimental Condition B - Exclusive shared mode: Operation with robot assistance such as real-time obstacle avoidance.

Experimental Condition C -Adaptive interaction modes:
Operation with the mode varied by the human, from the manual mode to the exclusive shared mode or vice versa, during task execution, so as to let the human control the robot in different situations. Between the extremes of the manual mode and the exclusive shared mode, there exists a range of transition sub-modes implemented based on different sensing distances and angles. Basically, operation under these experimental conditions requires the human to monitor the robot's actions and control the robot by initiating and terminating each of the robot's actions in sequence based on his/her sensory-motor coordination. The experimental setup to facilitate this is presented in Fig. 4. The test course is approximately 184 square metres, 27 metres long by 4 metres wide, and comprises two narrow straight paths (~1.4 metres in width) and two U-turns. Obstacles included chairs, tables of different sizes and cables from the work benches. Six participants (3 men and 3 women), aged from 19 to 44, took part in this experimental evaluation. All participants were unpaid volunteers from our centre, and all completed the entire study (i.e. all three experimental conditions). Four participants had driving licences and three had prior experience with remote driving. However, none of the participants had controlled a mobile robot before.

Experimental Design
The experiment is based on the principle of the Latin square (Kirk 1995), which is common in psychological research. This approach is adopted because it is able to minimise the effect of any factor that may vary over the duration of the experiment, such as environmental factors in the field situation. The effects would then cancel each other out, especially in the presence of a gradient effect such as the training or learning effect in this experiment. In addition, it is useful in serial experiments where different treatments (i.e. the three experimental conditions) are given to the same subjects (i.e. the six participants) in sequence. The performance criteria (or measures) for assessing the experimental conditions are: task completion time, which determines the efficiency of each interaction mode (modes with faster task completion times are regarded as more efficient); number of collisions (i.e. actual contact) with obstacles in the test environment, which determines the safety of each interaction mode (safer interaction modes correspond to fewer collisions); number of stops (i.e. translation and rotation speed equal to zero for more than one second), which determines the ease of operating each interaction mode (interaction modes with fewer stops are regarded as easier to operate, as the participants are more confident in controlling the robot); and number of turn-on-spots (i.e. translation speed equal to zero for more than one second), where interaction modes with fewer or no turn-on-spots during driving are deemed easier to operate (i.e. smoother driving). The data values for the above criteria are obtained on-line by means of automatic logging during task execution. However, side collisions of the mobile robot's wheels with obstacles, which cannot be detected automatically by the front and rear bumpers, are recorded through observation by the experimenter. Apart from assessing the experimental conditions based on the above evaluation criteria, a subjective rating is also employed to determine how participants feel about the evaluation, i.e. user satisfaction (Scholtz & Bahrami 2003). This is important because it allows participants to submit their analysis of the control interface and interaction modes, as well as their opinions.
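A cyclic Latin square of the three conditions yields the counterbalanced presentation orders; this is the standard construction, sketched here for illustration (with six participants, each order would be used by two of them).

```python
def latin_square(treatments):
    """Each treatment appears exactly once in every row (a presentation
    order) and once in every column (a session position), so order
    effects such as learning tend to cancel out across participants."""
    n = len(treatments)
    return [[treatments[(row + col) % n] for col in range(n)]
            for row in range(n)]

orders = latin_square(["A", "B", "C"])  # the three experimental conditions
```

Here participant group 1 runs A, B, C; group 2 runs B, C, A; and group 3 runs C, A, B.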

Experiment Procedure
There are three main stages to the evaluation: pre-evaluation, actual evaluation and post-evaluation. First, each participant was required to fill in the pre-evaluation questionnaire. The purpose of this was to gauge the experience of the participants, which might affect the opinions and results. Subsequently, they were briefed on the hardware of the ATRV-Jr™ and the control interface (Fig. 4). This provided a better understanding of the robot that they were controlling. Next, participants were given a verbal explanation accompanied by a demonstration of how to drive the ATRV-Jr™ within their line of sight using the control interface. The purpose was to allow them to observe what effect their actions on the control interface would have on the actual robot. Once the participant was comfortable with the control interface, the session entered a training phase in which the participant practised all three interaction modes without line of sight. To minimise training and learning effects, the interaction mode presentation (i.e. the sequence in which the interaction modes were used) was randomly selected for each participant. The selected interaction mode presentation remained the same during the actual evaluation.
The participant then practiced all the interaction modes until he/she displayed an acceptable level of competence in each interaction mode. The time spent by the participants on each interaction mode was about ten minutes. All training was done on a small training course. It is important to note that training was done off the actual test course, so that the participant was not able to learn anything that would assist him/her during the actual evaluation. The actual evaluation was then explained to the participant. This explanation included the route of the test course and the goal of the evaluation (i.e. the hypothesis), along with which interaction mode they should start with for each trial. For each interaction mode, the participant was asked to complete the navigation task as quickly as possible, while ensuring the robot was safe.
To determine when the task had been completed and to count the number of side collisions between the robot wheels and obstacles (which cannot be automatically recorded by the data logger), the robot operation was observed throughout by the experimenter. After the task had been completed, the participant was informed by the experimenter and the system was shut down. The total evaluation time for each participant to complete two trials was approximately three hours, with a fifteen-minute interval between each experimental condition for the documentation of the recorded data.
After completing the evaluation, participants were asked to answer the post-evaluation questionnaire.

Experimental Results
For both trials, all the participants were able to complete the test course. Results of the four performance measures are summarised in Table 3. To analyse these four performance measures, the Analysis of Variance (ANOVA) technique is employed (Kirk 1995). The ANOVA analysis shows significant differences between the different types of interaction modes for all the performance measures. Among these four measures, the comparison for the task completion time is the most significant. For this measure, the use of the adaptive interaction modes took the shortest time to complete the navigation course, whilst the manual mode took the longest. On average, the exclusive shared mode is faster than the manual mode by 22.75 seconds (an 11.3% improvement). The adaptive interaction modes, in turn, are faster than the manual and the exclusive shared modes by 56.25 seconds (a 28% improvement) and 33.5 seconds (a 19% improvement) respectively. This result is expected, since the participant is given the flexibility to adjust the degree of robot assistance as needed, between the manual mode (no robot assistance) and the exclusive shared mode (maximum robot assistance).
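The ANOVA comparison can be illustrated with a simplified one-way analysis over completion times. The sketch below uses invented numbers purely for illustration; the paper's actual design also includes participant and run-to-run factors (Table 4), which a one-way F test does not capture.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: ratio of between-group to
    within-group mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Variation explained by group membership (interaction mode)
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Residual variation within each group
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical completion times (seconds) for the three interaction modes
manual   = [210, 205, 198, 215]
shared   = [180, 176, 185, 178]
adaptive = [150, 148, 155, 152]
f = one_way_anova_f([manual, shared, adaptive])
# The F(2, 9) critical value at the 5% level is about 4.26; an F statistic
# well above it indicates a significant difference between the modes.
```

In practice a library routine such as `scipy.stats.f_oneway` would be used rather than a hand-rolled computation; the explicit sums are shown here only to make the between/within decomposition visible.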

Discussion
The analyses of the results of the objective measures presented in Table 3 show that participants were able to operate more efficiently with the exclusive shared mode than with the manual mode. This is because the exclusive shared mode automatically adjusts for drift, provides guidance (e.g. during turning) and performs obstacle avoidance, whereas the manual mode does not. However, compared to the manual mode, the exclusive shared mode could not reach the maximum speed (i.e. 0.6 metres/second for this experimental setup), because the robot was constantly reacting to the obstacles in the test environment to ensure safe navigation. The manual mode, on the other hand, allowed the participants to operate constantly at the maximum speed. Although participants were able to drive faster under the manual mode, they took a longer time to complete the test course. This was due to the need to make numerous adjustments to compensate for drift when travelling along the narrow paths, and also to make fine adjustments to achieve smooth turning.
Taking advantage of both the manual and exclusive shared modes described above, participants completed the test course in a much shorter time under the adaptive interaction modes than under a "fixed" single interaction mode, i.e. either the manual or the exclusive shared mode (Table 3). This result provides significant evidence in support of the hypothesis posited in Section 4. However, the evaluation shows that even though participants were given the flexibility to change their interaction roles to attain the required degree of robot assistance, they needed to know when to perform role transitions. For example, when making a U-turn, it was better for the participants to switch to the exclusive shared mode so as to enable maximum robot assistance. Conversely, the robot must constantly monitor the participants' control behaviours to adapt its autonomy in response to their control changes. This implies that to achieve seamless HRI, both issues discussed above must be met. The results obtained from the user evaluation show that this can be achieved if the participants are given appropriate feedback for perceiving when role transitions should be performed.
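The spectrum between manual operation and exclusive shared control described above is often realised as a weighted blend of the human's and the robot's velocity commands. The paper does not give its coordination law, so the following is a generic shared-control sketch; the assistance level `alpha` and the (translation, rotation) command representation are assumptions, not the system's actual interface.

```python
def blend_command(u_human, u_robot, alpha):
    """Linear shared-control blend of (translation, rotation) commands.
    alpha = 0.0 -> pure manual control (no robot assistance);
    alpha = 1.0 -> exclusive shared mode (maximum robot assistance)."""
    return tuple((1.0 - alpha) * h + alpha * r
                 for h, r in zip(u_human, u_robot))

# Example: the human drives straight at 0.6 m/s while the robot,
# avoiding an obstacle, suggests slowing down and turning.
u_human = (0.6, 0.0)
u_robot = (0.3, 0.4)
print(blend_command(u_human, u_robot, 0.0))  # -> (0.6, 0.0), manual
print(blend_command(u_human, u_robot, 1.0))  # -> (0.3, 0.4), full assistance
```

Under such a scheme, a role transition (e.g. switching to exclusive shared mode before a U-turn) amounts to moving `alpha` towards 1.0, while the robot's continuous monitoring of the operator's input corresponds to adapting `alpha` on-line.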
For this evaluation, real-time video feedback (from the robot's on-board camera) was provided to the participants to let them perform role transitions seamlessly. Real-time video feedback was required because this evaluation required the participants to teleoperate the robot remotely (without line of sight) at a fast speed.
In the context of the robot responding to each participant's control, the results obtained show that the robot can respond to control changes seamlessly by adjusting its autonomy (so as to attain the required degree of assistance needed by each participant), with an insignificant number of stops (an average of 2.07 stops over two trials, see Table 3) during teleoperation. The stops were not caused by the robot; they were due to the uncertainty the participants had during role transitions.

Related Work
The concept of seamless HRI presented here is related to work in social robotics, which looks into the development of robots that interact with humans in an engaging, seamless manner. Research in this domain looks beyond issues such as designing robots with a human-like morphology, towards equipping robots with social skills to learn from and collaborate with people as teammates. One recent good example is "Leonardo" (Breazeal et al. 2004), a humanoid robot envisioned to work shoulder-to-shoulder with humans, sharing their workspace and tools. In contrast to the research in social robotics, which models HRI through the social appropriateness and adeptness of autonomous robots, the work here addresses HRI issues in terms of the control of semi-autonomous robots via human supervision or management. The aim is to incorporate human control as an integral part of the robot's system. The concept of seamless HRI is similar to work that develops HRS architectures for multiple levels of human control intervention in the functioning of an autonomous robotic system (e.g., Bruemmer et al. 2003, Fong 2001).
The purpose is to use interaction to resolve problems that arise during task execution. However, there are two differences. First, from the HRS design perspective, the basic idea of how the human and the robot interact differs from the work of Bruemmer et al. and Fong, as it is based on different HRI roles (Fig. 1). This is an advantage because it decomposes the HRI problem into smaller sub-problems. System designers can now concentrate on each specified HRI role and design the appropriate functions. As a result, this provides an interactive HRS development that is based on what the human and the robot are each best suited for, so as to let them work under different task situations and different levels of system autonomy (Fig. 1). Second, the approach for addressing the human-robot authority issues during the interaction process is different. The concept presented here does not claim that the robot has the responsibility and authority to direct the human in performing a task. In this concept, the human retains the overall responsibility for the outcome of the tasks undertaken by the robot, and retains the authority corresponding with that responsibility. The robot may, however, have the authority to guide certain aspects of the tasks, such as correcting human control actions. This is achieved via interaction role transitions to the required level of system autonomy (Fig. 1).

Conclusion and Future Work
The main concern for the design and development of a cooperative HRS is the achievement of seamless HRI.The essential concepts regarding the achievement of flexibility in human control and the adaptability of the robot autonomy for seamless HRI are identified.
Consequently, to study how the concept can be applied, it is employed in the implementation of a telerobotics system. The development includes: (1) the implementation of a shared and traded control architecture that supports a spectrum of interaction modes using different coordination mechanisms. The key idea behind the development of different interaction modes is based on the different human-robot roles (Table 1). Instead of using only one fixed interaction role strategy, both the human and the robot can engage in different roles to compensate for the unique limitations possessed by each other; (2) the establishment of a highly flexible communication protocol between the human and the robot that facilitates the coordination of their actions both implicitly (for task sharing) and explicitly (for task trading), in accordance with different task situations, demands and needs. This paper has only explained the achievement of seamless HRI using the master-slave and the partner-partner human-robot roles. To further show that the concept is useful, future work will include conducting experimental studies to evaluate seamless HRI based on the other three human-robot roles.
They are the teacher-learner, supervisor-subordinate and fully autonomous roles, as presented in Table 1. The purpose is to assess the cooperation between human and robot arising from changes of human-robot roles with completely different types of task specifications. This includes studying how human assistance can be provided to the robot in a seamless way, and when human assistance is required, using practical telerobotics tasks. For instance, human assistance may not only be needed in novel situations that hinder the robot from performing a task or to ensure safe operation, but also in situations where the human notices an opportunity to improve the robot's task performance. This assessment will further show the importance of, and the need to provide, different types of interaction strategies for the human to assist or guide the robot in different aspects of task situations, as proposed in this paper.

Fig. 3. An overview of the requirements for seamless HRI

Fig. 6. Robot front perception representation and human control behaviours mapping

At a minimum, this context may be defined in terms of the robot's current state, intents, actions and goals. In addition, the robot must share (i.e. communicate) its capabilities and limitations by providing appropriate feedback to the human, so that the human may delegate tasks to the robot in a reasonable manner. In our implementation, when appropriate robot assistance is provided to the human, the robot states clearly what, why and how the human control actions are being corrected, via speech and force feedback through a 2-axis joystick during control intervention.

Table 3. Results of the evaluation

ANOVA is a technique by which variations associated with factors can be used for testing significance. Here, it is tested at the 5% significance level (i.e. F0.05). The factors in this experiment are: (a) the different types of interaction modes, (b) the participants, (c) run-to-run variation and (d) random error. The ANOVA table for one of the performance measures (task completion time) is presented in Table 4.

Table 4. ANOVA table for task completion time