
Measuring the Quality of Learning in a Human–Robot Collaboration: A Study of Laparoscopic Surgery

Published: 13 July 2022


Abstract

Robot-Assisted Laparoscopic Surgery (RALS) is now prevalent in operating rooms. This situation requires future surgeons to learn Classic Laparoscopic Surgery (CLS) and RALS simultaneously. Therefore, along with investigating the differences in performance between the two techniques, it is essential to study the impact of training in RALS on the skills mastered in CLS. In this article, we study comanipulated RALS (Co-RALS), one of the two designs for RALS, in which the human and the robot share the execution of the task. We use a measuring tool rarely used in Human–Robot Interaction, gaze tracking, together with time recording, to measure the acquisition of skills in CLS after training in Co-RALS or in CLS, and time recording alone to compare the learning curves of Co-RALS and CLS. These metrics allow us to observe differences between Co-RALS and CLS. Training in Co-RALS develops slightly, but not significantly, better hand–eye coordination skills and significantly better timewise performance than training in CLS alone. Compared with CLS, Co-RALS enhances timewise performance in laparoscopic surgery on tasks that require precision rather than depth perception skills. The results obtained enable us to further define the quality of the Human–Robot Interaction in Co-RALS.


1 INTRODUCTION

Robotic surgery is increasingly used in operating rooms (ORs), especially the da Vinci (cf. Figure 1(j)), with over 1 million procedures performed in 2018 [36]. This robot is essentially used to facilitate the practice of laparoscopic surgery. Without the robot, i.e., in Classic Laparoscopic Surgery (CLS) (cf. Figure 1(e)), surgery is performed through small holes in the patient's belly, into which long instruments and a camera, i.e., an endoscope, are inserted. The image of the endoscope is displayed on a two-dimensional (2D) screen, showing the inside of the patient's body. CLS is advantageous for patients (less pain, shorter recovery, aesthetic benefits) but disadvantageous for surgeons, who face more visual and physical difficulties than in open surgery. Visual difficulties are partly generated by the absence of direct vision: The working space is visualized on a 2D screen while gestures are performed in a 3D space, making hand–eye coordination and depth perception complicated [7]. Physical difficulties are due to the length of the instruments and to the fact that their insertion point (the small incisions in the patient's body) induces kinematic restrictions and the appearance of the fulcrum, or lever arm, effect [25]. Robot-Assisted Laparoscopic Surgery (RALS) aims to alleviate these difficulties to varying degrees depending on the robot's interface and features. There are two interaction designs for RALS: telemanipulated RALS (Tele-RALS) (cf. Figure 1(f–j)), such as with the da Vinci, and comanipulated RALS (Co-RALS) (cf. Figure 1(k–m)). In Tele-RALS, as indicated by the semantics, a distance exists between the surgeon (who sits at the master console) and the patient (above whom stand the robotic arms holding the instruments, the slave console).
This distance removes the hand–eye coordination and physical difficulties, as the instruments at the slave console are controlled with joysticks at the master console through a communication channel between the two. However, it introduces new difficulties. To use the robot, surgeons and future surgeons have to learn, on top of traditional skills linked with CLS, more technology-oriented skills. Here, we study the learning period during which surgeons simultaneously learn to perform classic (without robot, cf. Figure 1(a,b,e)) and robot-assisted laparoscopic surgery. During this period, the discrepancy between the psycho-motor skills required to perform the two techniques is problematic. Most importantly, since learning laparoscopic surgery with a telemanipulated assistant does not result in a transfer of skills to the classic technique [4], switching between techniques requires re-learning for each.

Fig. 1.

Fig. 1. CLS (top); Tele-RALS (middle); Co-RALS (bottom). A CLS OR diagram (a) and an image of an OR in CLS (b). Mechanical tools (c), activated by pulling a trigger and a rotation knob (d). Surgeons insert instruments through small holes (e). A Tele-RALS OR diagram (f) and an image of an OR in Tele-RALS (g), where a surgeon works at the master console and two assistants at the bedside. Moving joysticks (i) controls tools (h) on the robotic arms at the slave console (j). A Co-RALS OR diagram (k) and an image of an OR in Co-RALS (l). The same tools as in CLS (c) are used and manipulated the same way (d), while gestures are secured and improved by the robotic arms (m). Extended from "Impacts of Telemanipulation in Robotic Assisted Surgery" by Avellino, I., et al. [2]. Extended with permission.

A comanipulated robotic assistant, such as the Acrobot [17], aims to improve safety and dexterity when performing the surgical gesture while keeping the surgeon close to her/his patient. Comanipulation means that control of the instruments is shared between the robotic assistant and the surgeon [24] (cf. Figure 1(l)): Both the robotic arms and the user hold the instruments needed to perform the surgery. As in laparoscopic surgery, while it is technically possible to view the image of the endoscope in 3D, Co-RALS is most frequently performed viewing the image of the endoscope on a 2D screen. This design is at the fourth degree of the human–robot collaboration scale [14]: Supportive. At this level, the human and the robot work together at the same time on the same workpiece to complete a common task. Arguably, a successful Supportive human–robot collaboration is defined by its ability to augment performance compared to the same task performed without a robot, but also to confer on the human a cognitive and physical load that is neither too high nor too low, diminishing the difficulty of the task while keeping the human involved and active [27]. For a task as complex as surgery performed simultaneously by a human operator and a robot, where robots are far from being able to replace human operators, it can be argued that keeping the human involved and active is mandatory for him/her to maintain his/her acquired skill level. Indeed, in our case, the robot assists the human in a task s/he already masters. Comanipulated robots for surgery are intended to work very closely with surgeons and future surgeons, increasing their performance without disturbing their dexterity skills. The robot only augments already existing technical skills. Still, to our knowledge, the quality of the human–robot interaction in Co-RALS has not yet been investigated.
We propose a method to measure, with quantitative metrics, how a collaborative robotic assistant can increase performance and learning in very complex tasks, and also the impact it has on the technical skills mastered by the human. We study this through two research questions, situated in the specific context of learning, when the acquisition of skills is of crucial importance.

The main research question is the following: Does training with a comanipulated robot in what we call a target task, in our case exercises of laparoscopic surgery, develop hand–eye coordination and timewise skills in this target task (which is simultaneously learned in a surgical curriculum)? We focus on hand–eye coordination skills, as these are especially complex to learn and perform in laparoscopic surgery for the previously mentioned reasons. To measure hand–eye coordination skills, we use a tool that, to our knowledge, has rarely been used in Human–Robot Interaction: gaze tracking. The number of fixations (the event when the gaze remains on a point for 50 to 600 ms [18]) on the aimed target, the fixation rate per second, and the duration of fixations when performing in CLS are measured after training either in Co-RALS or in CLS. Previous works have shown that these metrics enable the measurement of hand–eye coordination skills developed during training [13], especially in laparoscopic surgery [19, 42, 43]. To measure timewise skills, the time elapsed is recorded when performing exercises in CLS after training in Co-RALS or in CLS. This shows the operator's performance efficiency [22] in the target task after training in Co-RALS. The secondary research question is the following: Does using Co-RALS improve timewise performance on exercises of laparoscopic surgery compared with the same exercises performed in CLS? The time elapsed is also recorded when performing repeated exercises with each technique. The repetition of the exercises highlights the shapes of the learning curves. Other metrics are used as exploratory measures: The NASA Task Load Index (TLX) [12] serves to compare the workload between the robot-assisted technique and the classic technique, and a performance score is given for the exercise performed in the classic technique after training with or without the robotic assistant.

In Section 2, the existing connections among motor learning, robotic systems, and their interfaces are discussed, and comanipulation is presented. In Section 3, the experiment's methods are detailed. In Section 4, the results are presented. These results, as measured by gaze tracking and time recording, suggest that Co-RALS, contrary to what was shown for Tele-RALS, does not negatively impact skills in CLS, developing slightly but not significantly better hand–eye coordination skills. Additionally, during the training session, the exercise of laparoscopic surgery that develops precision skills presents a significantly shorter learning curve when performed in Co-RALS compared with CLS, while an exercise of laparoscopic surgery that develops depth perception skills presents a learning curve that is not statistically different from that of CLS. The results, although reporting on a relatively small number of participants, demonstrate the ability of the chosen metrics to show differences in the development of hand–eye coordination skills between the classic and the robot-assisted techniques for laparoscopic surgery and differences in the learning curves between the two.


2 STATE OF THE ART

In this section, the consequences of interaction design for RALS on skills mastered by the user are presented and comanipulation is described.

2.1 Performance, Human Skills, and Interfaces for RALS

Numerous articles in the field of Human–Computer Interaction (HCI) have focused on the impact of interfaces for RALS on non-technical skills [45] related to laparoscopic surgery: workflow, communication, situation awareness, and teamwork [2, 6, 29, 32, 33, 34]. These works deal exclusively with Tele-RALS, and the metrics used are essentially subjective. These robotic systems enhance the surgeon's experience by making them more autonomous in performing surgery and increasing the number of instruments they can control. But the distance imposed from the patient keeps surgeons away from their team. This physical distance is compounded by visual, auditory, and mental distances. It disrupts access to information, changes power distribution, and decreases the surgeon's situation awareness. It also has consequences for students, who then have fewer tasks to perform, decisions to make, and actions to take [41] than when the surgical intervention is performed without the robot. Their learning process is made more difficult: They can no longer learn by observing, hearing, and doing according to the well-known saying "see one, do one, teach one" [30], as they are no longer standing next to the surgeon [23].

Although less studied in HCI, the impact of the interaction design of robotic assistance on technical skills [26] related to laparoscopic surgery is just as strong. The interaction design of robotic surgery defines the nature and the difficulty of the motor skills either learned or mastered by the user and how they transfer to performance without robotic assistance. Robotic assistance can easily improve performance while being used. However, as stressed in a 2015 review of the literature on robotic assistance of motor learning by Heuer et al. from the field of neurosciences, it is complicated to ensure that, after the robot is turned off, the user continues to perform as well as if s/he had never used it [15]. A risk exists that the use of robotic assistance becomes "normal" and the motor learning highly dependent on the robot-specific dynamic environment. This situation would lead to a negative impact of training with a robotic assistant on skills mastered in the classic technique, as seems to be the case for Tele-RALS [3, 4]. Both of these studies by Blavier et al. from 2007 were conducted with medical students without any prior surgical experience, and the results suggest that training in Tele-RALS has negative consequences on the mastery of skills in CLS compared with training in CLS alone. In the case of a conversion from Tele-RALS to CLS during a surgical intervention, a scenario that regularly happens for different reasons, such as those mentioned in Blavier et al.'s article from 2007 [3], this negative impact can have major consequences. Here, we hypothesize that training in Co-RALS results in hand–eye coordination and timewise abilities equivalent to those resulting from training in CLS. Hence, we take a different approach from previous work: With objective metrics, we seek to observe the quality of the human–robot interaction through the analysis of the skills developed by the human in conjunction with the performance achieved by the human–robot team.

2.2 Comanipulated Interfaces in Robotics

The definition of comanipulation can be found in Morel's article from 2013: "A comanipulator is thus any robotic system performing a task, most often in contact with the environment, that can be controlled through direct contact by an operator. It aims to increase the manipulation performance of the operator" [24].

Comanipulation is an interaction paradigm involving a robot and a user simultaneously manipulating a load or a tool (cf. Figure 1(m)). The robot is employed as a comanipulated device, in the sense that the gesture control of the instrument is shared by the robot and the surgeon. Of the difficulties of CLS mentioned earlier, comanipulation aims to alleviate the physical ones in particular. The currently commercialized comanipulated robotic systems are designed for specific types of surgical tasks [10, 16, 17, 28]. Research institutes have also exploited the idea for precise surgical tasks [5]. Comanipulation can be applied to tasks that require both precise manipulation and human judgment so as to enhance gesture quality [37]. This interaction design for robotics can at least compensate for gravity and filter tremors [31] while the surgical gesture is performed. Some other physical difficulties can also be dealt with, including the reduction of the fulcrum effect, for example with the help of active force feedback [35]. In this study, two comanipulated robotic arms are used, one for each instrument manipulated when performing exercises in laparoscopic surgery. Only the basic functions of the robots are studied. Gravity compensation for the instruments is ensured, as well as tremor filtration. The algorithm used for tremor filtration is a viscous field, i.e., a damping algorithm proportional to the velocity of movement [8]. At the instrument tip, the robots exhibit high viscosity at low velocities and no viscosity at high velocities. This helps the surgeons during precise surgical tasks (performed at low speed) by filtering unintentional movements and augmenting the precision of the gesture. This viscous field also increases the rate of adaptation of the user, who is forced to remain active when performing her/his task.
A similar algorithm, implemented in a comanipulated robotic assistant where the human controls the direction and the speed while the robot ensures the precision and smoothness of motions by suppressing sudden and abrupt gestures, has been shown to significantly improve performance in manual welding tasks [9]. In another research article [39], a human–robot cooperative calligraphic task is performed in which the human and the robot grasp a writing brush at the same time. The robot is controlled against the human force to prevent vibration and to enhance accuracy, and the results show the advantage of the control method and the benefits of a variable damping coefficient. Thus, referring to the second research question, it is hypothesized that in Co-RALS, the adaptive damping algorithm implemented enables better timewise performance on exercises of laparoscopic surgery compared to the same exercises performed without robotic assistance.
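As an illustration, a velocity-dependent viscous field of the kind described above (high damping at low tip speeds to filter tremor, none at high speeds so deliberate gestures are unimpeded) can be sketched as follows. The function name, the speed thresholds, and the damping gain are hypothetical placeholders, not the experimentally tuned parameters of References [8, 20]:

```python
import numpy as np

def damping_force(velocity, b_max=20.0, v_low=0.005, v_high=0.05):
    """Viscous tremor filter: strong damping at low tip speeds, none at
    high speeds. b_max (N*s/m) and the speed thresholds (m/s) are
    hypothetical values, not the experimentally tuned ones."""
    speed = float(np.linalg.norm(velocity))
    if speed <= v_low:
        b = b_max                                        # full damping: filter tremor
    elif speed >= v_high:
        b = 0.0                                          # free motion for deliberate gestures
    else:
        b = b_max * (v_high - speed) / (v_high - v_low)  # linear fade-out
    return -b * np.asarray(velocity, float)              # force opposes motion
```

The fade-out between the two thresholds is what makes the field feel adaptive: slow, precise motions meet high viscosity, while fast repositioning motions feel transparent.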


3 METHODS

To investigate (1) whether training with a comanipulated device in laparoscopic surgery results in an equivalent development of skills in the target task (exercises of laparoscopic surgery) compared with training directly in the target task and (2) how the learning curve of the robot-assisted task compares with that of the classic task, we present an experimental protocol involving two conditions. The participants in the Classic condition are pre-tested in CLS during the Pre-Learning Step, then trained in CLS, and then post-tested in CLS during the Post-Learning Step. The participants in the Robot condition are also pre-tested in CLS, then trained in RALS, and then post-tested in CLS (cf. Table 1). Both groups are pre-tested and post-tested in the same task, CLS, to compare the learning process of each group, Classic and Robot, in CLS. The main research question is studied by comparing the skills in CLS of the two groups, Classic and Robot, with gaze tracking and time recording, and the secondary research question by comparing the learning curves of the two groups (Classic and Robot) during their learning session with time recording. The exercises of laparoscopic surgery performed at each step were chosen among basic training exercises for this discipline, depending on the skills they allow to train and measure. During the Pre- and Post-Learning steps, the exercise chosen was the Peg Transfer, as it enables us to observe a large panel of a participant's skills in laparoscopic surgery: bimanual coordination, precision, and depth perception. It requires the participant to lift, with a grasper, six objects placed on the left side of a board, first using the non-dominant (in this case, left) hand, and to transfer each object midair to the dominant hand. The participant then has to place each object on a peg on the right side of the board. It is also performed from the dominant to the non-dominant hand. This exercise is further described later.
"Pea on a Peg" and "Loops and Wire" were chosen because they train precision and depth perception, respectively. The Pea on a Peg exercise consists of placing 14 beads on pegs of different heights placed on a board, and the Loops and Wire exercise consists of passing a wire through four different loops placed on a board. Both exercises are also further described later. The fact that each exercise trains different skills enables us to better define the differences between CLS and Co-RALS in terms of learning. This experimental protocol is tested both with Resident participants and, in greater number, Non-Resident participants. The similarity of the exercises in laparoscopic surgery performed by each group is controlled. Level equivalence between the Robot and Classic groups is statistically verified for Non-Resident participants during the Pre-Learning Step. There were not enough Resident participants involved in the protocol to perform this verification, hence the Resident participants' results are presented as indications of tendency and insights for future research rather than results that can be generalized. The experiment is thus divided into three steps: (1) a Pre-Learning Step to control the participants' level, (2) a Learning Step to train the participants of each group in their respective task (CLS or Co-RALS) and compare their performance, and (3) a Post-Learning Step to compare the mean level of each group in the target activity, i.e., CLS.

Table 1. Experimental Protocol: Exercises in Bold Are Those during Which Data Are Recorded

Group   | Pre-Learning Step              | Learning Step           | Post-Learning Step
Classic | Task CLS                       | Task CLS                | Task CLS
        | Peg Transf. Domin. (4 min)     | Pea on a Peg (30 min)   | Peg Transf. Domin. (4 min)
        | Peg Transf. Non Domin. (4 min) | Loops and Wire (15 min) | Peg Transf. Non Domin. (4 min)
Robot   | Task CLS                       | Task Co-RALS            | Task CLS
        | Peg Transf. Domin. (4 min)     | Pea on a Peg (30 min)   | Peg Transf. Domin. (4 min)
        | Peg Transf. Non Domin. (4 min) | Loops and Wire (15 min) | Peg Transf. Non Domin. (4 min)

Note: The others are familiarization exercises. The order of the exercises performed during the Learning Step is randomized.

3.1 Participants

Fourteen Non-Resident participants and six Residents in medicine are recruited. Among them, seven Non-Resident participants and three Resident participants (group Classic) performed their training session in CLS (task CLS). Seven other Non-Resident participants and three other Resident participants (group Robot) performed their training session in RALS on a comanipulated robot (task Co-RALS). Resident participants have limited and heterogeneous experience in laparoscopic surgery (\( 6\pm 4.5 \) hours of practice). The number of Resident participants involved in the study does not allow statistical analysis. Still, their results enable us to verify whether the trend observed among novices may apply to intermediate-level users. Because of their tight schedules, Residents in medicine could only perform one exercise, Pea on a Peg, during the training session, and, due to a lack of technical and time resources, no gaze data could be recorded during the Peg Transfer exercise before and after the Learning session. Their results are presented separately. Five of them are first-year residents and one is a third-year resident. Non-Resident participants all declared themselves to be novices in laparoscopic surgery and are all university students. All procedures are in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

3.2 Material

3.2.1 Apparatus.

Material includes two set-ups: one in CLS and one in Co-RALS.

Task CLS–The CLS set-up: This includes two surgical graspers. Exercises are performed in a pelvi-trainer, above which is a 2D screen displaying the 3D working space inside it (cf. Figure 2(a)). The endoscope does not move during the entire learning session.

Fig. 2.

Fig. 2. (a) CLS set-up and (b) Co-RALS set-up.

Task Co-RALS–The Co-RALS set-up: This consists of two robotic arms, each holding one of the same two graspers as those used in the CLS set-up (cf. Figure 2(b)). The two robotic arms are modified Haption Virtuose 3D robots, characterized by six rotational joints. The robots used are 3D robots, as the other degrees of freedom are constrained by the entry point in the patient's belly. The first three joints are fully actuated, and the other three form a free wrist that allows full motion across the surgical workspace [21]. Our research team designed the software implemented in these robotic arms: (i) tool weight compensation, so that holding the tools is transparent to surgeons [38], and (ii) a damping algorithm proportional to the velocity of movement, experimentally tuned as in References [8, 20]. The same pelvi-trainer as in the CLS set-up is used (cf. Figure 2(a)). The endoscope is not moved during the entire learning session.
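The tool weight compensation in (i) can be illustrated by the following generic sketch of a static gravity-compensation wrench. This is a standard textbook computation under assumed names and frames, not the authors' implementation [38]:

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity in the robot base frame (m/s^2)

def compensation_wrench(tool_mass, com_in_base):
    """Static tool-weight compensation (generic sketch, hypothetical
    names): the robot exerts the opposite of the tool's weight at its
    center of mass, so the tool feels weightless to the surgeon."""
    force = -tool_mass * G                 # upward force cancelling the weight
    torque = np.cross(com_in_base, force)  # moment about the base origin
    return force, torque
```

In practice the center-of-mass position must be expressed in the current base frame at every control cycle; here it is simply passed in as a vector.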

3.2.2 Measuring Tools and Metrics.

Different measuring tools and metrics are used at each step of the experiment to assess hand–eye coordination abilities, timewise performance, workload, and scores. They were chosen for their ability to show not only the observed performance at the end of a task but also the learning process itself. Gaze tracking is used during the Pre-Learning Step to verify the absence of heterogeneity in the average hand–eye coordination skills in laparoscopic surgery of groups Classic and Robot before the Learning session. It is also used during the Post-Learning Step to quantify differences in the same skills between these groups after the Learning session. Time is recorded at each step of the experiment. Before and after the Learning session, this metric, together with gaze tracking, enables us to answer the first research question. During the Learning session, time recording serves to answer the secondary research question. The results obtained from the NASA Task Load Index during the Learning session and the scores on the Peg Transfer exercise are explored, respectively, as self-perceived comfort with the task and observed performance.

Pre-Learning Step

  • Gaze-Tracking: Observing gaze patterns has been proven relevant in distinguishing novices from experts [13, 19, 42, 44]. Experts have a greater ability than novices to anticipate their gestures with their gaze. In tasks of target reaching (such as the Peg Transfer), this results in a greater ability to look at the aimed target rather than at the instruments used to reach it. In other words, experts anticipate their movement by taking their eyes off it and projecting them to its goal, while novices are focused on achieving the movement itself and keep their eyes on it: They struggle to detach their eyes from their hands or the instruments they hold and to project their gaze to the movement's goal. This can be measured by the number of fixations (the maintenance of the gaze in a single location) on the aimed target before reaching it with the instruments, a greater number meaning a better ability to anticipate and greater expertise [1]. Experts also tend to make fewer back-and-forth movements with the eyes (movements used by novices to estimate distances, lengths, and velocities) and fewer fixations, and the duration of their fixations tends to be longer than for novices. Hence, the number of fixations on the aimed target before it has been reached with the instruments, the fixation rate per second, and the duration of fixations are analyzed. A higher number of fixations on the aimed target at the specific moment of reaching for the target in the Peg Transfer means a greater level of comfort with the task. Conversely, fewer fixations per second and a longer duration of fixations while performing the entire exercise mean better hand–eye coordination skills. Gaze data analysis is performed separately for each phase of the Peg Transfer exercise described later in the article. It is cut into three phases: Grab, Transfer, and Drop. Grab corresponds to the moment when participants grab the object with their dominant hand, Transfer to the moment they pass it to their non-dominant hand, and Drop to the moment they put it on the peg. The Transfer and Drop phases are the most interesting, as they represent the moments when the participants are specifically performing a target-reaching task. Recording gaze data enables the observation of hand–eye coordination abilities. The eye tracker used to record gaze data is a Tobii X3-120 screen-based eye tracker with a sampling rate of 120 Hz. It is mounted at the bottom of the screen, at the eye level of participants. Data analysis is performed with Tobii Pro Lab, and the filtering algorithm used is the I-VT Classifier. To identify the number of fixations on the aimed target, we manually defined, using Tobii Pro Lab, dynamic zones of interest: the zones on the 2D screen aimed at by the participants when performing the Peg Transfer exercise described later. Statistics on the number of fixations in these zones can then be extracted.

  • Time: A timer is set during performance of exercises.

  • Score: Video from the camera used to perform the exercises is recorded to allow scoring of the Peg Transfer exercise. Ten penalty points are counted per dropped sleeve; when a sleeve falls off the pegboard, 20 penalty points are counted. The total score is the time in seconds plus penalty points. This score reflects temporal as well as qualitative performance.
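The fixation-based metrics above (fixation count, fixation rate per second, mean fixation duration) can be illustrated with a toy velocity-threshold classifier in the spirit of I-VT. This is a simplified sketch with hypothetical threshold values, not Tobii Pro Lab's implementation:

```python
import numpy as np

def ivt_fixations(t, x, y, vel_thresh=30.0, min_dur=0.05):
    """Toy velocity-threshold (I-VT-style) fixation classifier.
    Consecutive samples whose point-to-point velocity stays below
    vel_thresh form a fixation; runs shorter than min_dur seconds
    are discarded. Thresholds are illustrative."""
    t, x, y = (np.asarray(v, float) for v in (t, x, y))
    v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)  # sample-to-sample velocity
    slow = v < vel_thresh
    fixations, i = [], 0
    while i < len(slow):
        if slow[i]:
            j = i
            while j < len(slow) and slow[j]:
                j += 1
            if t[j] - t[i] >= min_dur:          # run spans samples i..j
                fixations.append((t[i], t[j]))
            i = j
        else:
            i += 1
    return fixations

def fixation_metrics(fixations, total_time):
    """Count, rate per second, and mean duration of fixations."""
    durs = [end - start for start, end in fixations]
    return {"count": len(fixations),
            "rate_per_s": len(fixations) / total_time,
            "mean_dur": float(np.mean(durs)) if durs else 0.0}
```

On synthetic 120 Hz data with a stable gaze interrupted by one saccade, this classifier returns two fixations, from which the rate and mean duration follow directly.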

Learning Step

  • Time: A timer is set during each trial of the two exercises performed during the learning session, described later in the article.

  • NASA TLX: This is used as an exploratory measure of the comfort level with the tasks executed during the learning session. This index is a subjective, multidimensional assessment tool that rates perceived workload to assess a task, system, or team's effectiveness or other aspects of performance. It was developed by the Human Performance Group at NASA's Ames Research Center. The higher the score, the higher the perceived workload. In this study, the weighted NASA TLX score is used. Participants were required to fill in the NASA TLX after they had performed the learning session.
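The weighted NASA TLX score follows a standard computation: the six subscale ratings are weighted by how often each subscale was chosen across the 15 pairwise comparisons. A minimal sketch of that standard procedure (not study-specific code):

```python
def weighted_tlx(ratings, weights):
    """Weighted NASA TLX score (standard weighting procedure):
    ratings are the six subscale scores (0-100); weights count how
    often each subscale was selected in the 15 pairwise comparisons,
    so they must sum to 15."""
    assert len(ratings) == len(weights) == 6
    assert sum(weights) == 15
    return sum(r * w for r, w in zip(ratings, weights)) / 15
```

The division by 15 keeps the weighted score on the same 0-100 scale as the raw subscale ratings.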

Post-Learning Step. The measuring tools and metrics are the same as during the Pre-Learning Step.

3.3 Procedures

The experiment is divided into three steps (cf. Table 1):

3.3.1 Pre-Learning Step.

This step consists of the measurement of participants’ base level. The data recorded enable us to ensure that the two groups of participants, Classic and Robot, do not show, initially, significant differences in level in the target activity. Means of each group before the learning session are compared to check for statistical differences.

The exercises performed are as follows:

  • Peg Transfer Non Dominant: This is a familiarization exercise, during which participants discover the instruments and the set-up; data are not recorded. There are six sleeves and 12 pegs on the board. The six sleeves are positioned on six pegs on one side of the board. Subjects have to transfer the sleeves from one side of the board to the other. To pick up a sleeve, they use the instrument held in the non-dominant hand. Then they transfer the sleeve to the instrument held in the dominant hand. The maximum time is 4 minutes.

  • Peg Transfer Dominant: This is a recorded exercise. Subjects have to transfer the sleeves now resting on pegs on the other side of the board. They pick the sleeves up with the instrument held in the dominant hand, transfer them to the instrument held in the non-dominant hand, and drop them on the other side of the board. They have to transfer as many as possible within 4 minutes. Data are recorded during this exercise only (cf. Figure 3(a)).

Fig. 3. (a) Peg Transfer; (b) Pea on a Peg; (c) Loops and Wire.

3.3.2 Learning Step.

Participants are randomly assigned either to group Classic, which performs a 1-hour training session in CLS, or to group Robot, which performs 1 hour of training in Co-RALS. During each learning session, participants have to perform two exercises: Pea on a Peg and Loops and Wire (cf. Figure 3). These two exercises are performed in random order. Each is performed repeatedly: Participants have a maximum of five trials per exercise. To avoid the influence of fatigue on performance, a maximum total time per exercise and a maximum time per trial are set. The exercises are the same whether performed in Co-RALS or in CLS. The exercises performed are the following:

  • Pea on a Peg: Participants have to place 14 beads on pegs of different heights. The beads are positioned on a peg board with the cup containing them in front. A maximum time per trial of 10 minutes is set as well as a maximum number of trials of 5 and a maximum time spent on exercise of 30 minutes. Once all the beads are placed on the pegs, the trial is finished. The psychomotor skills developed are similar to those involved during Peg Transfer: fine motor skills, coordination, precision, and depth perception (cf. Figure 3(b)).

  • Loops and Wire: The exercise uses a peg board with four loops, on which a flexible wire is positioned. Participants have to insert the wire into the four loops in a specific order indicated on the board. A maximum time per trial of 4 minutes is set, as well as a maximum number of trials of 5 and a maximum time spent on the exercise of 15 minutes. Once the wire is inserted in all the loops, the trial is finished. The psychomotor skills developed are related to depth perception and manipulation of the instruments (cf. Figure 3(c)).

3.3.3 Post-Learning Step.

Participants have to perform the same exercises as those performed during the pre-learning step:

  • Peg Transfer Non Dominant: familiarization exercise.

  • Peg Transfer Dominant: recorded exercise.


4 RESULTS

We present the results given by each of the measuring tools and metrics used during the experiment. As mentioned before, gaze data could only be recorded for Non-Resident participants, while time and score were recorded for all participants. Statistical tests compared the means of groups Classic and Robot for every metric used during the Pre-Learning, Learning, and Post-Learning steps. All sets of data were normally distributed, variances were homogeneous, and samples were independent; hence, Student's t-tests were performed for every pair of data: number of fixations, duration of fixations, and time to perform Peg Transfer.
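The fractional degrees of freedom reported in this section (e.g., t(11.2), t(8.75)) are consistent with Welch's variant of the two-sample t-test, which does not assume equal variances. A minimal sketch of that statistic and its Welch–Satterthwaite degrees of freedom follows; the function name and data layout are assumptions for illustration, not the authors' actual analysis code:

```python
import math

def welch_t_test(a, b):
    """Welch's two-sample t-test.

    Returns (t statistic, Welch-Satterthwaite degrees of freedom).
    The df is fractional in general, matching values such as t(11.2)
    reported in the text.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances.
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2a, se2b = va / na, vb / nb
    t = (ma - mb) / math.sqrt(se2a + se2b)
    # Welch-Satterthwaite approximation of the degrees of freedom.
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (na - 1) + se2b ** 2 / (nb - 1))
    return t, df
```

The p-value is then obtained from the t distribution with `df` degrees of freedom (e.g., via `scipy.stats.t.sf` in practice).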

4.1 Main Research Question: Does Learning Laparoscopic Surgery with a Comanipulated Robotic Assistance Result in Development of Skills in CLS?

4.1.1 Pre-Learning Step.

Gaze data and time data from the Pre-Learning step are analyzed to compare group Classic and group Robot and to verify that there is no statistical difference between the two groups prior to the learning session. Mean time taken to perform Peg Transfer in the Pre-Learning step differs between group Classic (\( 3.8 \pm 0.4 \)) and group Robot (\( 2.7 \pm 0.9 \)) but not significantly (Student's t-test, \( t(11.2)=2.09 \), \( p = 0.06 \)). Still, because of the low p-value, the difference in time taken to perform Peg Transfer after learning between groups Classic and Robot should be considered with caution. All other data measured before learning show no statistical difference between group Classic and group Robot in terms of gaze data, i.e., duration of fixations during the Grab phase (Student's t-test, \( t(8.75)=0.63 \), \( p = 0.5 \)), Transfer phase (Student's t-test, \( t(9.49) = -0.52 \), \( p = 0.6 \)), and Drop phase (Student's t-test, \( t(11.33) = -0.87 \), \( p = 0.4 \)), and the number of fixations during the Grab phase (Student's t-test, \( t(8.19) = 0.59 \), \( p = 0.5 \)), Transfer phase (Student's t-test, \( t(8.24) = 0.43 \), \( p = 0.6 \)), and Drop phase (Student's t-test, \( t(9.87) = 0.54 \), \( p = 0.6 \)).

These tests have been performed for Non-Resident participants; there were not enough Resident participants to perform statistical comparisons.

4.1.2 Post-Learning Step.

  • Mean number of fixations on aimed target: For Non-Resident participants, before learning, participants perform on average \( 0.6 \pm 0.1 \) fixations on an aimed target during the Transfer phase. In the same phase, after learning, group Classic performs on average \( 0.55 \pm 0.2 \) fixations on aimed targets and group Robot \( 0.59 \pm 0.2 \). Before learning, participants perform on average \( 0.54 \pm 0.19 \) fixations on aimed target during the Drop phase. In the same phase, after learning, group Classic performs \( 0.55 \pm 0.22 \) fixations and group Robot \( 0.45 \pm 0.13 \) fixations on average on an aimed target (cf. Figure 4). These differences are not significant (Student's t-test, \( t(20.3)=0.12 \), \( p = 0.9 \)).

    Number of fixations on targets before reaching them with the instrument during the Transfer and Drop phases: Non-Residents. Ta1-3 is the mean number of fixations on Targets no. 1, 2, and 3 before reaching them with the instrument; Ta4-5 is the same for Targets no. 4 and 5; and Ta6 for Target no. 6. Targets are grouped according to the percentage of participants who managed to reach them, this percentage being proportionally lower for the higher-numbered targets. On average, group Robot makes more fixations than group Classic on targets 1 to 5 in the Transfer phase and on targets 1 to 3 in the Drop phase. Group Classic makes more fixations than group Robot on target no. 6, but its variability is also twice as large: \( \pm 0.28 \) for group Classic versus \( \pm 0.14 \) for group Robot.

  • Mean fixation rate per second: For Non-Resident participants, the fixation rate per second decreases from before to after learning for both groups in all phases, except in the Grab phase, where it slightly increases for group Classic. After learning, in the Grab phase, group Classic increases its fixation rate per second by 4% on average compared to before learning, while group Robot stagnates. In the Transfer phase, group Classic makes on average as many fixations per second after learning as before, while group Robot makes 5% fewer. Finally, in the Drop phase, group Classic decreases its fixation rate per second by 6% on average and group Robot by 10% (cf. Figure 5). The difference between the groups is not statistically significant (Student's t-test, \( t(9.7)=1.9 \), \( p = 0.08 \)).

    Mean fixation rate per second and duration of fixations during Peg Transfer Post-Learning session: Non-Residents. Participants in group Robot reduce the mean fixation rate per second from before to after learning in the Transfer and Drop phases, while group Classic stagnates in the Transfer phase. The standard deviation decreases strongly for both groups from before to after learning. Participants in both groups increase the mean duration of fixations from before to after learning; here too, the standard deviation decreases strongly for both groups.

  • Duration of fixations: For Non-Resident participants, the duration of fixations increases from before to after learning for both groups in all phases of the Peg Transfer exercise, except in the Grab phase, where it slightly decreases for group Classic. After learning, in the Grab phase, group Classic decreases its mean duration of fixations by 9% compared to before learning, while group Robot increases it by 6%. In the Transfer phase, group Classic increases the duration of fixations by 3% and group Robot by 5%. Finally, in the Drop phase, group Classic increases the mean duration of fixations by 2% and group Robot by 7% (cf. Figure 5). The difference between the groups is not statistically significant (Student's t-test, \( t(11.9)=-0.7 \), \( p = 0.4 \)).

  • Time to perform Peg Transfer: For Non-Resident participants, both groups decrease the time taken to perform the Peg Transfer exercise after the learning session. Mean time to perform the exercise before the learning session is \( 3.1 \pm 0.9 \) minutes, while mean time after the learning session is \( 2.6 \pm 0.6 \) minutes for group Classic and \( 1.9 \pm 0.4 \) minutes for group Robot (cf. Figure 6). The difference between group Classic (M = 2.6, SD = 0.23) and group Robot (M = 1.9, SD = 0.18) is significant (Student's t-test, \( t(11.2)=2.09 \), \( p=0.05 \)).

    Mean time to perform peg transfer pre- and post-learning session.

  • Time to perform Peg Transfer: For Resident participants, both groups decrease time to perform the Peg Transfer exercise after the learning session. Mean time to perform the exercise before the learning session is \( 3.3 \pm 0.9 \) minutes, while mean time to perform the exercise after the learning session is, for group Residents Classic, \( 1.8 \pm 0.6 \) minutes and, for Residents Robot, \( 1.8 \pm 0.4 \) minutes (cf. Figure 6).
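The gaze metrics reported above (fixation rate per second and mean fixation duration) can be derived per phase from a list of fixation events. A minimal sketch follows; the function name, inputs (fixation durations in milliseconds and a known phase duration in seconds), and units are assumptions for illustration, since the article does not detail its gaze-processing pipeline:

```python
def fixation_metrics(fixation_durations_ms, phase_duration_s):
    """Per-phase gaze metrics from a list of fixation events.

    Returns (fixation rate in fixations/second,
             mean fixation duration in milliseconds).
    """
    rate = len(fixation_durations_ms) / phase_duration_s
    if fixation_durations_ms:
        mean_duration = sum(fixation_durations_ms) / len(fixation_durations_ms)
    else:
        mean_duration = 0.0
    return rate, mean_duration
```

With such a function, each participant contributes one (rate, duration) pair per phase (Grab, Transfer, Drop), and the group comparisons above are t-tests over those pairs.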

4.2 Secondary Research Question: Does Using a Comanipulated Interface for RALS Improve Performance Compared with CLS?

  • Time to perform training exercises: For Non-Resident participants, during the learning session for the Pea on a Peg exercise, group Classic starts Trial no. 1 with a mean time of \( 8.6 \pm 1.9 \) minutes and ends Trial no. 5 with a mean time of \( 4 \pm 2.4 \) minutes. Group Robot starts Trial no. 1 with a mean time of \( 4.4 \pm 2.8 \) minutes and ends Trial no. 5 with a mean time of \( 3.1 \pm 1.3 \) minutes. The time taken differs significantly across conditions for Trial no. 1 at the \( p \lt 0.002 \) level [F(1,13) = 17.11, \( p = 0.001 \)]. For the Loops and Wire exercise, no significant difference is observed between group Classic and group Robot. Group Classic starts Trial no. 1 with a mean time of \( 2.7 \pm 0.7 \) minutes and ends Trial no. 5 with a mean time of \( 1.6 \pm 1.4 \) minutes. Group Robot starts Trial no. 1 with a mean time of \( 3.2 \pm 1.2 \) minutes and ends Trial no. 5 with a mean time of \( 1.5 \pm 0.7 \) minutes (cf. Figure 8). After verifying the normality and sphericity of the data and the independence of samples, a repeated measures analysis of variance was performed to test for an effect of learning. This effect is significant for group Classic on the Pea on a Peg exercise (ANOVA, F(1,6) = 63.25, \( p = 0.0002 \)) but not for group Robot (ANOVA, F(1,7) = 2.215, \( p = 0.18 \)). For the Loops and Wire exercise, the effect of learning is not significant for group Classic (ANOVA, F(1,6) = 3.68, \( p = 0.1 \)), but it is for group Robot (ANOVA, \( F(2,5) = 8.4 \), \( p = 0.025 \)) (cf. Figure 7).

    Learning session non-residents.

  • Time to perform training exercises: For Resident participants, only the Pea on a Peg exercise is performed. Group Residents Classic starts Trial no. 1 with a mean time of \( 4.7 \pm 1 \) minutes and ends Trial no. 5 with a mean time of \( 2.9 \pm 1.4 \) minutes. Group Residents Robot starts Trial no. 1 with a mean time of \( 6.9 \pm 2.8 \) minutes and ends Trial no. 5 with a mean time of \( 1.4 \pm 1.6 \) minutes (cf. Figure 8).

    Learning session residents.
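The learning-effect test above is a one-way repeated measures ANOVA over trials within one group. A plain-Python sketch of the F statistic follows; the layout of `data` (one row of trial times per subject) and the function name are assumptions for illustration, not the authors' actual analysis pipeline:

```python
def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.

    `data` is a list of per-subject lists: data[i][j] is subject i's
    time on trial j. Returns (F, df_effect, df_error), where the effect
    is the within-subject factor (trial number).
    """
    n = len(data)      # number of subjects
    k = len(data[0])   # number of trials per subject
    grand = sum(sum(row) for row in data) / (n * k)
    # Between-subject variability is removed from the error term.
    ss_subj = k * sum((sum(row) / k - grand) ** 2 for row in data)
    trial_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_trial = n * sum((m - grand) ** 2 for m in trial_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_subj - ss_trial
    df_effect, df_error = k - 1, (n - 1) * (k - 1)
    f = (ss_trial / df_effect) / (ss_error / df_error)
    return f, df_effect, df_error
```

The p-value is then read from the F distribution with (df_effect, df_error) degrees of freedom, matching the F(1,6)- and F(1,7)-style reports in the text.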

4.3 Exploratory Results: Self-Perceived Comfort Level with the Task and Observed Performance

  • Score for NASA TLX: Non-Resident participants show an almost equal score for the two groups. Group Classic rates a mean score of \( 61 \pm 6 \) and group Robot of \( 64 \pm 12 \) (cf. Figure 9).

    NASA TLX.

  • Score for NASA TLX: Both groups of Resident participants rate the task very similarly: \( 58 \pm 18 \) for group Classic and \( 60 \pm 19 \) for group Robot (cf. Figure 9).

  • Score for the Peg Transfer exercise: For Non-Resident participants, the mean score before the learning session for the two groups is \( 230 \pm 81 \); after learning, it is \( 163 \pm 56 \) for group Classic and \( 158 \pm 56 \) for group Robot. The two groups decrease their scores similarly, showing an equivalent improvement in skill.

  • Score for the Peg Transfer exercise: For Resident participants, the mean score before the learning session for the two groups is \( 222 \pm 86 \); after learning, it is \( 111 \pm 59 \) for group Classic and \( 147 \pm 38 \) for group Robot. Group Classic decreases its score slightly more than group Robot.
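The NASA TLX scores above, in the raw (unweighted) variant, are the mean of the six subscale ratings [12]. This sketch assumes the raw variant, since the article does not state whether the pairwise-comparison weighting step was applied:

```python
def nasa_tlx_raw(ratings):
    """Raw (unweighted) NASA TLX score.

    `ratings` holds the six subscale ratings (mental demand, physical
    demand, temporal demand, performance, effort, frustration), each on
    a 0-100 scale. The raw score is simply their mean.
    """
    if len(ratings) != 6:
        raise ValueError("NASA TLX expects exactly six subscale ratings")
    return sum(ratings) / 6
```

In the weighted variant, each subscale is instead multiplied by a weight (0 to 5) obtained from 15 pairwise comparisons, and the weighted sum is divided by 15.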


5 DISCUSSION

The results from the measuring tools and metrics confirm their ability to reveal differences in skills and performance between the two experimental conditions (Co-RALS and CLS). They also confirm that training in Co-RALS has no negative impact on skills in CLS. Gaze tracking makes it possible to observe subtle disparities in hand–eye coordination skills in CLS between training in Co-RALS and training in CLS, while showing that both train the human to perform laparoscopic surgery. Time recording showed a modest but significant difference in speed in CLS for the group trained in Co-RALS compared with the group trained in CLS; this result should be interpreted with care, as group Robot already showed better timewise skills than group Classic before learning. The scores obtained in CLS by each group show no difference in achieved performance. Time recording during similar repeated exercises in Co-RALS and in CLS shows that the number of repetitions needed to master a task, and the time performance that can be achieved, depend on the type of task performed in each condition. Finally, the NASA TLX confirms that no additional workload was felt when performing in Co-RALS compared with CLS.

5.1 Main Research Question: Does Learning Laparoscopic Surgery with a Comanipulated Robotic Assistance Result in Development of Skills in CLS?

5.1.1 Non-Resident Participants.

Both group Classic and group Robot decrease the time taken to perform the Peg Transfer exercise. The difference between the groups is statistically significant, with group Robot taking on average 0.7 minutes less than group Classic after the learning session. Thus, group Robot seems to have developed better timewise skills in laparoscopic surgery than group Classic. Gaze patterns between the two groups are very similar. Still, after the learning session, group Robot makes on average slightly fewer and longer fixations in the three phases of the exercise (Grab, Transfer, and Drop). These differences are not significant, but one can postulate that longer and repeated learning sessions would widen them. Also, group Robot increased, on average, the number of fixations on target before reaching it with the instrument in the Transfer and Drop phases for targets no. 1 to 5, while group Classic, on average, slightly decreased it, showing a small improvement for the first group and consistency for the second. We interpret these results as a consequence of having to deal with the algorithm implemented in the robot, the viscous field forcing the participants in group Robot to remain active while performing the gesture. This may have increased their attention state, even though their score is almost equivalent to that of group Classic. In other words, the results suggest that group Robot has developed skills in laparoscopic surgery in a way that is not statistically different from those developed by group Classic on the same task. Although not statistically significant, group Robot even obtains better results in terms of the number of fixations on target, fixation rate per second, and mean duration of fixations after learning than group Classic.

5.1.2 Resident Participants.

Groups Classic and Robot, for Resident participants, took on average the same time to perform the Peg Transfer exercise after the learning session, suggesting they have equivalently improved their skills. Time taken to perform the Peg Transfer decreases equivalently for both group Classic and group Robot. The results from the small cohort of residents show a similar trend to that observed in the non-residents’ results. This suggests that the same results could be obtained on this experiment with larger groups of residents in medicine.

5.2 Secondary Research Question: Does Using a Comanipulated Interface for RALS Improve Performance Compared with CLS?

5.2.1 Non-Resident Participants.

For Non-Resident participants, it appears that training with robotic assistance decreases the time taken to perform exercises that require precision and fine motor skills, such as Pea on a Peg. The group trained in Co-RALS took significantly less time to perform the first trial and visibly less time to perform every other trial. Contrary to what was expected, for the Loops and Wire exercise, in which proprioceptive and instrument-manipulation skills are required, comanipulated robotic assistance seems to hinder learning compared to no robotic assistance. The damping algorithm may, in this case, have disturbed the participants, as it forces them to adapt to a new hand–eye coordination that offers no advantage for the goal of this exercise. Implementing another algorithm, such as virtual fixtures, might improve performance on such an exercise but could have a negative impact on skill transfer to the unassisted condition; further investigation is needed. Scores obtained on the NASA Task Load Index are very close for the two groups and not significantly different, showing an equivalent perceived cognitive and physical load when training either in CLS or in Co-RALS and suggesting that the learning session was as difficult to perform in CLS as in Co-RALS.

5.2.2 Resident Participants.

For Resident participants, who had a higher baseline level in laparoscopic surgery, more trials in Co-RALS were required on the Pea on a Peg exercise than for Non-Resident participants to reach the same level as in CLS, but better performance was attained within the same number of trials. This may be because, being used to performing in CLS, they had to re-adapt to the new setting before they could achieve their best performance with it. No significant difference was observed in the NASA Task Load Index scores between training in CLS and training in Co-RALS. As with the main research question, a trend similar to that of the novices is observed. This paves the way for more research on the effect of Co-RALS on subjects with an intermediate level in CLS, to assess whether their performance can be improved significantly compared with CLS.

5.3 Exploratory Results: Self-perceived Comfort Level with the Task and Observed Performance

5.3.1 Non-Resident Participants.

Groups Classic and Robot have, on average, equivalently improved their scores at the Peg Transfer exercise from before to after learning, with group Robot performing slightly better than group Classic. The mean NASA TLX score is almost equivalent for the two groups, showing no additional workload for group Robot compared with group Classic.

5.3.2 Resident Participants.

Resident participants started with a score equivalent to that of the Non-Resident participants (222 compared with 230) and ended with a slightly better score than the Non-Resident participants (111 compared with 163 for group Classic and 147 compared with 158 for group Robot). No difference is observed in perceived workload between group Robot and group Classic: the mean NASA TLX scores are very close for the two groups.


6 CONCLUSION

The experiment conducted and its results first demonstrate that the measuring tools, metrics, and exercises used succeed in showing the learning process and performance of participants on tasks of robotic and classic laparoscopic surgery. The results of the different metrics also indicate the advantages and limits of Co-RALS for learning. A comanipulated interface for RALS seems to succeed in keeping the user's motor-skill learning active in the target task (CLS) while performing the robot-assisted task (Co-RALS), contrary to what was observed by Blavier et al. with the da Vinci [3, 4]. Indeed, these results confirm the motivations that led to the development of comanipulation [24]: to support and assist the gesture to make it easier to perform without changing its characteristics. Gaze tracking and time recording permit the observation of two different aspects of the learning process: the acquisition of psychomotor skills and the immediately visible performance. Taken together, these two measuring tools provide a comprehensive overview of what has been learned when training with robotic assistance. They also complement other measurements whose purpose is to study the performance of the human–robot team. Thus, to evaluate the learning curve on laparoscopy exercises when training either in Co-RALS or in CLS, we also used time recording. We observed the learning curves of two exercises: one is learned faster and performed more rapidly in Co-RALS than in CLS, while the other is learned and performed more slowly with the robot than without. Hence, the superiority of the human–robot team's performance in terms of the learning curve seems to depend greatly on the type of exercise performed. Co-RALS increases performance on exercises of fine motor skills and precision, but for a task of proprioceptive and manipulation skills, Loops and Wire, it does not improve the learning process.
The exploratory measures show other aspects of the human–robot interaction. The results obtained with the NASA TLX suggest that interacting with a robotic assistant for laparoscopic surgery results in a workload equivalent to that of performing CLS. The comparison of the Peg Transfer scores in CLS after training either in Co-RALS or in CLS demonstrates an equivalent performance between the two groups, confirming the previous results.

This study encourages us to pursue research in human–robot interaction using quantitative metrics to qualify the conditions under which interactions are right for the human and for the human–robot team's performance. The results also argue for interaction designs for RALS other than the dominant one, Tele-RALS. Still, these findings present some limitations. First, the number of exercises performed is limited. Future research on comanipulated interfaces and RALS with longer, more complex, and realistic tasks may show more clearly how they can benefit surgery students. Second, the number of Resident participants is small. A study with a greater number of Resident participants, with groups of different levels, would show more precisely how the comanipulated robotic system impacts performance and develops psychomotor skills depending on the level in CLS. Also, our study focuses on only one aspect of laparoscopic surgery: dexterity. One could imagine future research that also involves knowledge of anatomical structures, procedures, risks, and so on. Despite these limitations, our results indicate a need for more research on human–robot collaboration, which could lead to more adapted and hence more easily adopted technologies.


7 ACKNOWLEDGMENTS

The authors thank Xavier Deffieux and Sofiane Bendifallah, hospital practitioners who helped recruiting Resident participants, and Gilles Bailly for his important contributions to experimental design and article redaction. The authors also thank all members of the Agathe team at the Institute for Intelligent Systems and Robotics (ISIR) who helped in editing the article, especially Rémi Chalard, Ignacio Avellino, Angélina Bellicha, Etienne Moullet, Jesus Mago, Felix Péchereau, Fabien Vérité, and Mégane Millan. The authors declare that they have no conflict of interest.

REFERENCES

  1. [1] Ashraf Hajra, Sodergren Mikael H., Merali Nabeel, Mylonas George, Singh Harsimrat, and Darzi Ara. 2018. Eye-tracking technology in medical education: A systematic review. Med. Teach. 40, 1 (2018), 6269.Google ScholarGoogle ScholarCross RefCross Ref
  2. [2] Avellino Ignacio, Bailly Gilles, Canlorbe Geoffroy, Belghiti Jérémie, Morel Guillaume, and Vitrani Marie-Aude. 2019. Impacts of telemanipulation in robotic assisted surgery. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’19).Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. [3] Blavier Adélaïde, Gaudissart Quentin, Cadière Guy-Bernard, and Nyssen Anne-Sophie. 2007. Comparison of learning curves and skill transfer between classical and robotic laparoscopy according to the viewing conditions: Implications for training. Am. J. Surg. 194, 1 (2007), 115121.Google ScholarGoogle ScholarCross RefCross Ref
  4. [4] Blavier Adélaïde, Gaudissart Quentin, Cadiere Guy-Bernard, and Nyssen Anne-Sophie. 2007. Perceptual and instrumental impacts of robotic laparoscopy on surgical performance. Surg. Endosc. 21, 10 (2007), 18751882.Google ScholarGoogle ScholarCross RefCross Ref
  5. [5] Chalard Rémi, Reversat David, Morel Guillaume, Mozer Pierre, and Vitrani Marie-Aude. 2018. Precisely positioning the tip of an instrument inserted through an orifice with a free wrist robot: Application to prostate biopsies. Int. J. Comput. Assist. Radiol. Surg. 13, 5 (2018), 611618.Google ScholarGoogle ScholarCross RefCross Ref
  6. [6] Cheatle Amy, Pelikan Hannah, Jung Malte, and Jackson Steven. 2019. Sensing (Co) operations: Articulation and compensation in the robotic operating room. Proc. ACM Hum.-Comput. Interact. 3, CSCW (2019), 126.Google ScholarGoogle ScholarDigital LibraryDigital Library
  7. [7] Dharia S. and Falcone T.. 2005. Robotics in reproductive medicine. fertility and sterility. Fertility and Sterility 84, 1 (2005), 1–11.Google ScholarGoogle Scholar
  8. [8] Dong Lin. 2017. Assistance to Laparoscopic Surgery through Comanipulation. Ph.D. Dissertation. Paris 6.Google ScholarGoogle Scholar
  9. [9] Erden Mustafa Suphi and Marić Bobby. 2011. Assisting manual welding with robot. Robot. Comput.-Integr. Manufact. 27, 4 (2011), 818828.Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. [10] Ferrand-Sorbets Sarah, Taussig D., Fohlen Martine, Bulteau Christine, Dorfmuller Georg, and Delalande Olivier. 2010. Frameless stereotactic robot-guided placement of depth electrodes for stereo-electroencephalography in the presurgical evaluation of children with drug-resistant focal epilepsy. In CNS Annual Meeting.Google ScholarGoogle Scholar
  11. [11] Gruijthuijsen Caspar, Borghesan Gianni, Reynaerts Dominiek, and Poorten Emmanuel Vander. 2019. A hybrid active/passive wrist approach for increasing virtual fixture stiffness in comanipulated robotic minimally invasive surgery. IEEE Robot. Autom. Lett. 4, 3 (2019), 30293036.Google ScholarGoogle ScholarCross RefCross Ref
  12. [12] Hart Sandra G. and Staveland Lowell E.. 1988. Development of NASA-TLX (task load index): Results of empirical and theoretical research. In Advances in Psychology. Vol. 52. Elsevier, 139183.Google ScholarGoogle Scholar
  13. [13] Harvey Adrian, Vickers Joan N., Snelgrove Ryan, Scott Matthew F., and Morrison Sheila. 2014. Expert surgeon’s quiet eye and slowing down: Expertise differences in performance and quiet eye duration during identification and dissection of the recurrent laryngeal nerve. Am. J. Surg. 207, 2 (2014), 187193.Google ScholarGoogle ScholarCross RefCross Ref
  14. [14] Helms Evert, Schraft Rolf Dieter, and Hagele M.. 2002. rob@ work: Robot assistant in industrial environments. In Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication. IEEE, 399404.Google ScholarGoogle ScholarCross RefCross Ref
  15. [15] Heuer Herbert and Luettgen Jenna. 2015. Robot assistance of motor learning: A neuro-cognitive perspective. Neurosci. Biobehav. Rev. 56 (2015), 222240.Google ScholarGoogle ScholarCross RefCross Ref
  16. [16] Hughes Gwyneth, Vadera Sumeet, Bulacio Juan, and Gonzalez-Martinez Jorge. 2013. Robotic placement of intracranial depth electrodes for long-term monitoring: Utility and efficacy. ASSFN Biennial Meeting.Google ScholarGoogle Scholar
  17. [17] Jakopec Matjaz, Baena F. Rodriguez y, Harris Simon J., Gomes Paula, Cobb Justin, and Davies Brian L.. 2003. The hands-on orthopaedic robot “Acrobot”: Early clinical trials of total knee replacement surgery. IEEE Trans. Robot. Autom. 19, 5 (2003), 902911.Google ScholarGoogle ScholarCross RefCross Ref
  18. [18] Land Michael and Tatler Benjamin. 2009. Looking and Acting: Vision and Eye Movements in Natural Behaviour. Oxford University Press.Google ScholarGoogle ScholarCross RefCross Ref
  19. [19] Law Benjamin, Atkins M. Stella, Kirkpatrick Arthur E., and Lomax Alan J.. 2004. Eye gaze patterns differentiate novice and experts in a virtual laparoscopic surgery training environment. In Proceedings of the Symposium on Eye Tracking Research & Applications. ACM, 4148.Google ScholarGoogle ScholarDigital LibraryDigital Library
  20. [20] Dong Guillaume Morel and Lin. 2021. Control strategy at different instrument points using lever model in laparoscopic surgery. In 2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM’21). IEEE, 7–12.Google ScholarGoogle Scholar
  21. [21] Mago Jesus, Aricò Mario, Silva Jimmy Da, and Morel Guillaume. 2019. Safe teleoperation of a laparoscope holder with dynamic precision but low stiffness. In Proceedings of the International Conference on Robotics and Automation (ICRA’19). IEEE, 26932699.Google ScholarGoogle ScholarDigital LibraryDigital Library
  22. [22] Marvel Jeremy A., Bagchi Shelly, Zimmerman Megan, and Antonishek Brian. 2020. Towards effective interface designs for collaborative HRI in manufacturing: Metrics and measures. ACM Trans. Hum.-Robo. Interact. 9, 4 (2020), 155.Google ScholarGoogle ScholarDigital LibraryDigital Library
  23. [23] Mondada L.. 2014. Instructions in the operating room: How the surgeon directs their assistant’s hands. Discourse Stud. 2 (2014), 131161.Google ScholarGoogle ScholarCross RefCross Ref
  24. [24] Morel Guillaume, Szewczyk Jérôme, and Vitrani Marie-Aude. 2013. Comanipulation. Med. Robot. (2013), 303350.Google ScholarGoogle ScholarCross RefCross Ref
  25. [25] Nisky Ilana, Huang Felix, Milstein Amit, Pugh Carla M., Mussa-Ivaldi Ferdinando A., and Karniel Amir. 2012. Perception of stiffness in laparoscopy—The fulcrum effect. Stud. Health Technol. Inf. 173 (2012), 313.Google ScholarGoogle Scholar
  26. [26] Gastrointestinal Society of American and Surgeons Endoscopic. 2010. Fundamentals of Laparoscopic Surgery (FLS). Retrieved from http://www.flsprogram.org.Google ScholarGoogle Scholar
  27. [27] Parasuraman Raja and Riley Victor. 1997. Humans and automation: Use, misuse, disuse, abuse. Hum. Factors 39, 2 (1997), 230253.Google ScholarGoogle ScholarCross RefCross Ref
  28. [28] Pearle Andrew D., Kendoff Daniel, Stueber Volker, Musahl Volker, and Repicci John A.. 2009. Perioperative management of unicompartmental knee arthroplasty using the MAKO robotic arm system (MAKOplasty). Am. J. Orthoped. 38, 2 (2009), 1619.Google ScholarGoogle Scholar
  29. [29] Pelikan Hannah R. M., Cheatle Amy, Jung Malte F., and Jackson Steven J.. 2018. Operating at a distance-how a teleoperated surgical robot reconfigures teamwork in the operating room. Proc. ACM Hum.-Comput. Interact. 2, CSCW (2018), 128.Google ScholarGoogle ScholarDigital LibraryDigital Library
  [30] Polavarapu Harsha V., Kulaylat Afif N., Sun Susie, and Hamed O. H. 2013. 100 years of surgical education: The past, present, and future. Bull. Am. Coll. Surg. 98, 7 (2013), 22–27.
  [31] Poquet Cécile, Mozer Pierre, Vitrani Marie-Aude, and Morel Guillaume. 2014. An endorectal ultrasound probe comanipulator with hybrid actuation combining brakes and motors. IEEE/ASME Trans. Mechatr. 20, 1 (2014), 186–196.
  [32] Randell Rebecca, Alvarado Natasha, Honey Stephanie, Greenhalgh Joanne, Gardner Peter, Gill Arron, Jayne David, Kotze Alwyn, Pearman Alan, and Dowding Dawn. 2015. Impact of robotic surgery on decision making: Perspectives of surgical teams. In Proceedings of the AMIA Annual Symposium, Vol. 2015. American Medical Informatics Association, 1057.
  [33] Randell Rebecca, Honey Stephanie, Alvarado Natasha, Pearman Alan, Greenhalgh Joanne, Long Andrew, Gardner Peter, Gill Arron, Jayne David, and Dowding Dawn. 2016. Embedding robotic surgery into routine practice and impacts on communication and decision making: A review of the experience of surgical teams. Cogn. Technol. Work 18, 2 (2016), 423–437.
  [34] Randell Rebecca, Honey S. A., Hindmarsh Jon, Alvarado Natasha, Greenhalgh Joanne, Pearman Alan, Long Andrew, Cope Alexandra, Gill Arron, Gardner Peter, et al. 2017. A realist process evaluation of robot-assisted surgery: Integration into routine practice and impacts on communication, collaboration and decision-making. Health Serv. Deliv. Res. 5, 20 (2017).
  [35] Schmitt François, Sulub Josue, Avellino Ignacio, Da Silva Jimmy, Barbé Laurent, Piccin Olivier, Bayle Bernard, and Morel Guillaume. 2019. Using comanipulation with active force feedback to undistort stiffness perception in laparoscopy. In Proceedings of the International Conference on Robotics and Automation (ICRA'19). IEEE, 3902–3908.
  [36] Intuitive Surgical. 2019. Intuitive Surgical Announces Preliminary Fourth Quarter and Full Year 2018 Results. Retrieved from https://isrg.intuitive.com/news-releases/news-release-details/intuitive-surgical-announces-preliminary-fourth-quarter-and-4.
  [37] Taylor Russell, Jensen Pat, Whitcomb Louis, Barnes Aaron, Kumar Rajesh, Stoianovici Dan, Gupta Puneet, Wang ZhengXian, Dejuan Eugene, and Kavoussi Louis. 1999. A steady-hand robotic system for microsurgical augmentation. Int. J. Robot. Res. 18, 12 (1999), 1201–1210.
  [38] Torterotot C. Poquet, Vitrani M. A., and Morel G. 2014. Proximal comanipulation of a minimally invasive surgical instrument to emulate distal forces. In Proceedings of the 4th Joint Workshop on New Technologies for Computer/Robot Assisted Surgery (CRAS'14). 48–51.
  [39] Tsumugiwa T., Yokogawa R., and Hara K. 2002. Variable impedance control based on estimation of human arm stiffness for human-robot cooperative calligraphic task. In Proceedings of the IEEE International Conference on Robotics and Automation, Vol. 1. 644–650.
  [40] Vitrani Marie-Aude, Poquet Cécile, and Morel Guillaume. 2016. Applying virtual fixtures to the distal end of a minimally invasive surgery instrument. IEEE Trans. Robot. 33, 1 (2016), 114–123.
  [41] Wasen Kristian. 2010. Replacement of highly educated surgical assistants by robot technology in working life: Paradigm shift in the service sector. Int. J. Soc. Robot. 2, 4 (2010), 431–438.
  [42] Wilson Mark, McGrath John, Vine Samuel, Brewer James, Defriend David, and Masters Richard. 2010. Psychomotor control in a virtual laparoscopic surgery training environment: Gaze control parameters differentiate novices from experts. Surg. Endosc. 24, 10 (2010), 2458–2464.
  [43] Wilson Mark R., Poolton Jamie M., Malhotra Neha, Ngo Karen, Bright Elizabeth, and Masters Rich S. W. 2011. Development and validation of a surgical workload measure: The surgery task load index (SURG-TLX). World J. Surg. 35, 9 (2011), 1961.
  [44] Wilson Mark R., Vine Samuel J., Bright Elizabeth, Masters Rich S. W., Defriend David, and McGrath John S. 2011. Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: A randomized, controlled study. Surg. Endosc. 25, 12 (2011), 3731–3739.
  [45] Yule S., Flin Rhona, Paterson-Brown S., and Maran N. 2006. Non-technical skills for surgeons in the operating room: A review of the literature. Surgery 139, 2 (2006), 140–149.

Published in: ACM Transactions on Human-Robot Interaction, Volume 11, Issue 3 (September 2022), 364 pages. EISSN: 2573-9522. DOI: 10.1145/3543995.

Publisher: Association for Computing Machinery, New York, NY, United States.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publication History:

• Received: 1 October 2020
• Revised: 1 June 2021
• Accepted: 1 July 2021
• Online AM: 14 February 2022
• Published: 13 July 2022
