Dexterous Manipulation of Unknown Objects Using Virtual Contact Points

The manipulation of unknown objects is a problem of special interest in robotics, since it is not always possible to have exact models of the objects with which the robot interacts. This paper presents a simple strategy to manipulate unknown objects using a robotic hand equipped with tactile sensors. The hand configurations that allow the rotation of an unknown object are computed using only tactile and kinematic information obtained during the manipulation process, reasoning about the desired and real positions of the fingertips. This is done taking into account that the desired positions of the fingertips are not physically reachable, since they are located in the interior of the manipulated object; they are therefore virtual positions, with associated virtual contact points. The proposed approach was satisfactorily validated using three fingers of an anthropomorphic robotic hand (Allegro Hand) with the original fingertips replaced by tactile sensors (WTS-FT). In the experimental validation, several everyday objects with different shapes were successfully manipulated, rotating them without knowledge of their shape or any other physical property.


Introduction
Dexterous manipulation of objects has been one of the most relevant topics in robotics research during the past years [1]. Several approaches to the development of new manipulation strategies are inspired by the human capability to manipulate very different objects with the hands, and this has led to the development of a wide variety of anthropomorphic robotic hands [2]. The list of robotic hands is long, but some representative examples are the DLR-HIT Hand II [3], the MA-I [4], the Shadow Hand [5], the Schunk Dexterous Hand [6], the Robonaut 2 Hand [7], the Robotiq three-finger gripper [8] and the Allegro Hand [9], among several others.
Dexterous manipulation has been defined in different ways in the literature. Looking at pioneering works in the field, the definitions range, for instance, from "dexterity is defined as the ability of a grasp to achieve one or more useful secondary objectives while satisfying the kinematic relationship (between joint and Cartesian spaces) as the primary objective" (from Reference [10]) to "manipulation that achieves the goal configuration for the object and the [grasp] contacts" (from Reference [11]), while a more recent work summarizes it as "In the robotics research literature 'dexterity' often refers to the manipulation of an object in the hand, with the hand" (from Reference [12]). There are other definitions, but all of them refer explicitly or implicitly to the manipulation of the object by properly locating/changing the positions of the grasp contact points, that is, by properly managing the finger configurations. This in turn gives rise to the expression "in-hand manipulation" to explicitly refer to object manipulation using only finger movements [12][13][14][15][16].
In-hand manipulation can be done in two ways [11], or combinations of them. The work presented in this paper merges and extends the initial ideas previously presented in References [40] and [41]. The rest of the paper is organized as follows: Section 2 introduces the problem statement; the proposed approach and the manipulation algorithm are described in Section 3; the description of the hardware used for experimentation and illustrative experimental results are presented in Section 4; finally, Section 5 presents the conclusions and the proposed future work.

Problem Statement
The problem addressed in this work is the manipulation of unknown objects with a robotic hand equipped with tactile sensors, understanding by "unknown object" an object whose properties, like shape, weight and center of mass, are not known during the manipulation; that is, the manipulation is done without knowledge of an object model. The manipulation task is the rotation of the grasped object, avoiding its fall and making the contact force at each fingertip remain within a threshold around a desired value. The manipulation commands are continuously provided by the user at a high level, that is, in each iteration the system receives a user command indicating the sense of the rotation movement and the system autonomously determines the finger movements. There is no external measurement of the object orientation but, by adding, for instance, a vision system, the proposed methodology could be used to position the object at a commanded absolute orientation, if such an orientation is actually reachable. The computation of the finger movements for the object manipulation uses only the tactile information (contact forces and contact points) and kinematic information (values of the finger joints) obtained online during the manipulation; that is, no other external feedback sources (like, for instance, a vision system) are considered to obtain information about the object position.
Considering that the finger joints work under position control, the commanded hand configurations must be such that the commanded positions of the fingertips lie "inside" the object in order to apply a force on the object surface. It must be noted that if the fingertips are positioned exactly on the surface of the object, they will not produce grasping forces on it. From now on, in this work, we will refer to the commanded fingertip positions located "inside" the object as "virtual contact points", since they are not physically reachable. Furthermore, the magnitude of the force applied by each fingertip on the object surface depends on the distance between the virtual contact points and the real contact points actually reached on the object surface. Thus, each virtual contact point is adjusted as a function of the force error, that is, the difference between the desired and the current contact force sensed on each fingertip. Determining the finger movements using only the virtual contact points allows the object manipulation without knowing its real shape or any other physical property.
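The force-error adjustment of a virtual contact point can be sketched as follows. The proportional update rule and the gain value are illustrative assumptions chosen only to convey the idea, not the exact expressions used later in the paper.

```python
def adjust_virtual_depth(depth_mm, f_desired, f_measured, gain_mm_per_n=1.0):
    """Adjust how far the virtual contact point lies "inside" the object.

    depth_mm: current penetration of the virtual point beyond the sensed
    contact point; under position control, a larger penetration yields a
    larger applied force. The proportional update and the gain value are
    hypothetical, used only to illustrate the concept.
    """
    force_error = f_desired - f_measured  # positive: pressing too lightly
    return depth_mm + gain_mm_per_n * force_error
```

With this sign convention, the virtual point moves deeper into the object when the measured force is below the desired one, and retreats towards the surface when the measured force is too large.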
It is assumed that the initial grasp is a force-closure grasp [42], but the determination of the initial grasp is outside the scope of this work; it can be obtained, for instance, using a generic grasp planner [43] or even by trial and error. In our experimentation, we simply move the fingers towards the object surface until a proper grasp is obtained. We use three fingers of a robotic hand to grasp and manipulate the object, performing a tripod grasp [44], that is, the thumb works opposite to the other two fingers (abduction movement) in the same way that humans do it. In this work, we consider that the Thumb works as the supporting finger, while the Index and Middle fingers lead the object movements.

Proposed Manipulation Strategy
The object manipulation is performed by an iterative process such that, in each iteration, the finger movements are computed according to the sense of rotation, s_k, indicated by the user. In this work, the indexes k and k+1 denote the current and the next iteration, respectively.
A finger f_i, i ∈ {I, M, T}, with I, M and T corresponding respectively to the Index, Middle and Thumb fingers, is a kinematic serial chain with n_i degrees of freedom (DOF) and n_i links. Each finger link has an associated reference frame ε_ij, j ∈ {1, ..., n_i}, which defines its position in the absolute reference frame W located at the palm of the hand. The position of each link j with respect to the previous one is determined by the joint angle q_ij. The finger configuration q_i is given by the concatenation of all the joint angles of the finger as q_i = {q_i1, ..., q_in_i}. The hand configuration is given by the concatenation of the finger configurations as Q = {q_I, q_M, q_T}.
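As a concrete illustration of these definitions, the sketch below builds finger configurations q_i and a hand configuration Q, and computes a fingertip position with planar forward kinematics over the flexion/extension joints. The link lengths and joint values are made-up numbers, not the actual Allegro Hand parameters.

```python
import numpy as np

def planar_fingertip(q, link_lengths):
    """Fingertip position of a planar serial chain: each joint angle q_ij is
    the relative rotation of link j with respect to link j-1."""
    x = y = theta = 0.0
    for q_j, l_j in zip(q, link_lengths):
        theta += q_j                 # accumulated orientation of link j
        x += l_j * np.cos(theta)
        y += l_j * np.sin(theta)
    return np.array([x, y])

# Hypothetical finger configurations (rad) and the hand configuration Q
q_I = [0.20, 0.40, 0.30]   # Index
q_M = [0.25, 0.35, 0.30]   # Middle
q_T = [0.10, 0.50, 0.20]   # Thumb
Q = {"I": q_I, "M": q_M, "T": q_T}

# Fingertip of the Index finger for hypothetical link lengths (m)
tip_I = planar_fingertip(q_I, link_lengths=[0.055, 0.038, 0.030])
```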
The flexion/extension joints of each finger i move the finger within a working plane Π_i, defined by three points corresponding to the positions of the reference frames ε_ij of the three phalanges of the finger. The variables involved in the manipulation are computed using the projections of the relevant points on the working plane of each finger. In a tripod grasp, the finger working planes must be oriented as parallel as possible to each other, as shown in Figure 1. In this way, the fingers can perform cooperative movements and the object can be rotated around an axis orthogonal to the working planes of the fingers, as is usually done by human beings. Nevertheless, the proposed procedure can be easily generalized to rotate objects around any arbitrary axis; there is no restriction that prevents this, although it is evident that the kinematics of the hand may allow only very small rotations around some particular axes.
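The working plane Π_i defined by the three phalanx frame positions can be computed with a cross product. This is a generic geometric sketch, not code from the authors:

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three points (e.g., the positions of
    the phalanx reference frames of finger i). The object rotation axis is
    orthogonal to the (approximately parallel) finger working planes."""
    n = np.cross(np.subtract(p2, p1), np.subtract(p3, p1))
    norm = np.linalg.norm(n)
    if norm == 0.0:
        raise ValueError("the three points are collinear")
    return n / norm
```

Comparing the normals of the three working planes gives a quick numerical check of how close a tripod grasp is to the parallel-plane configuration of Figure 1.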
Figure 1. Allegro hand with the finger working planes Π_i for the Index, Middle and Thumb, and the axis for the object rotation.
Given the current virtual contact points P_i^k, the computation of the points P_i^{k+1} for the leading fingers (Index and Middle) is done as follows. Two auxiliary points P*_i^{k+1}, i = {I, M}, are defined as the points resulting from a displacement ±ζ of P_i^k along the line perpendicular to the segment between P_i^k and P_T^k, as shown in Figure 2; the intention is to make the axis of rotation pass through P_T^k. The sign of ζ depends on the desired sense of rotation for the current iteration. Since the shape of the object is unknown, any movement of the fingers may alter the contact force F_i^k. The module of F_i^k must remain within a threshold around a desired value F_i^d: if it increases too much, the object or the hand may be damaged, and if it decreases too much, the grasp may fail and the object may fall down. In order to control the value of the grasping forces, a force error e_i^k is defined as the difference between the desired force F_i^d and the current force measured by the sensors F_i^k, that is, e_i^k = F_i^d - F_i^k. Now, let us consider the distance d_i, defined as the Euclidean distance between each virtual contact point P_i, i = {I, M}, and the rotation point P_T, that is, d_i = ||P_i - P_T||. An adjustment of d_i^k allows the grasping force applied on the object to be changed; then, d_i^k is modified in each iteration depending on the force error e_i^k by properly determining the final positions of P_i^{k+1} and P_T^{k+1}. P_i^{k+1} is determined by applying to P*_i^{k+1} an adjustment Δd_i^k, computed from e_i^k using a predefined constant λ, empirically obtained, with a different gain depending on the sign of e_i^k. The reason for the different gain is that a potential fall of the object (F_i^k → 0) is considered more critical than a potential application of large grasping forces (F_i^k ≫ F_i^d).
Figure 2. Example of the computation of P_i^{k+1}, i = {I, M}, when the contact force F_i^k is larger than F_i^d (i.e., e_i^k ≤ 0).
After obtaining P*_i^{k+1} with a displacement ζ from the current position P_i^k, the target virtual contact point P_i^{k+1} is obtained by applying the adjustment Δd_i^k to displace P*_i^{k+1} away from P_T^{k+1}. All the points are projections onto Π_i.
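The two-step construction of the target point for a leading finger (rotation displacement ±ζ, then force adjustment Δd away from the thumb point) can be sketched in 2-D on the working plane. The function name and the explicit vector algebra are illustrative assumptions consistent with the description above, not the authors' exact formulation.

```python
import numpy as np

def leading_finger_target(P_i, P_T, s, zeta, delta_d):
    """Target virtual contact point for a leading finger (Index or Middle),
    working with projections on the finger working plane (2-D points).

    s: sense of rotation (+1 or -1); zeta: rotation step; delta_d: force
    adjustment (its sign comes from the force error, computed elsewhere).
    """
    P_i, P_T = np.asarray(P_i, float), np.asarray(P_T, float)
    v = (P_T - P_i) / np.linalg.norm(P_T - P_i)
    perp = np.array([-v[1], v[0]])      # perpendicular to segment P_i-P_T
    P_star = P_i + s * zeta * perp      # auxiliary point P*_i^{k+1}
    u = (P_star - P_T) / np.linalg.norm(P_star - P_T)
    return P_star + delta_d * u         # displace away from P_T by delta_d
```

A negative delta_d moves the target towards the thumb point, which is how the distance d_i (and with it the grasping force) can be decreased.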
In the case of the Thumb, since it is only used as a supporting point for the object rotation, the computation of P_T^{k+1} is done with the only aim of adjusting the contact force, without computing any intermediate point; P_T^{k+1} is computed considering an adjustment with respect to the Index and Middle fingers. Finally, the new hand configuration Q_{k+1} is computed using the inverse kinematics (IK) of P_i^{k+1}, i = {I, M, T}. The movement of the fingers is executed only if each P_i^{k+1} belongs to the workspace of the corresponding finger, that is, if the target Q_{k+1} lies within the hand workspace. Algorithm 1 summarizes the main steps of the computation of the hand configuration that allows the desired object manipulation.

Algorithm 1: Manipulation algorithm
repeat
    Read the direction of rotation s_k
    Compute the finger working planes Π_i
    Read the contact points and contact forces F_i^k from the tactile sensors
    Compute the force errors e_i^k and the adjustments Δd_i^k
    Compute the auxiliary points P*_i^{k+1} and the virtual contact points P_i^{k+1}
    Compute Q_{k+1} from P_i^{k+1} using IK
    if Q_{k+1} belongs to the hand workspace then
        Move the hand to Q_{k+1}
        k = k + 1
    end
until stop by user

Hardware Set-up
The Allegro Hand from Wonik Robotics [9] was used for the experimental validation. This is a 4-finger anthropomorphic hand with 4 degrees of freedom (DOF) per finger (see Figure 3). The Index, Middle and Ring fingers have the same kinematic structure: the first DOF fixes the orientation of the working plane Π_i within the finger workspace, while the other three DOF (flexion/extension) are used to make the fingertip reach a position and an orientation in this plane. In the case of the Thumb, the first DOF produces the abduction movement and the second DOF fixes the orientation of the working plane, leaving only two DOF to work in this plane; that is, the position and the orientation of the fingertip are not independent. The joints of the hand are actuated by DC motors, and their positions are measured by potentiometers with a resolution of 0.002 degrees. The Allegro Hand is connected to a PC by a CAN bus. The joints of the hand have PID position controllers and the system includes gravity compensation. The fingertips of the commercial version of the Allegro Hand do not have tactile sensors; thus, the original fingertips were replaced by sensorized WTS-FT fingertips from Weiss Robotics [45], increasing in this way the capabilities of the hand. Each WTS-FT sensor has a tactile sensing matrix with 4 × 8 taxels. The surface of each taxel is a square with a side length of 3.8 mm. The pressure measurement in each taxel returns a value between 0, when no force is applied, and 4095, for the maximum measurable normal force of 1.23 N. In this work, the contact is modeled using the point-contact model [46]. Thus, when the contact between a fingertip and the object takes place over a contact region including several taxels, the barycenter of this region is considered as the current effective contact point, and the summation of the forces sensed at the individual taxels is considered as the current contact force.
Two virtual prismatic joints, l_1 (x) and l_2 (y), are used to locate the contact point on the sensor pad, assuming that the sensor surface is flat. These virtual joints add two non-controlled DOF at the end of the finger kinematic chain. Figure 4 shows the taxel distribution of the WTS-FT sensor with an example of a contact region remarked with an ellipsoid. The pressure measured at each taxel is represented by a color. The Figure also shows the barycenter of the contact region (which is considered as the effective contact point between the fingertip and the object) and the virtual joints that localize the contact point with respect to the fingertip center point (TCP) on the sensor surface.
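The contact-point and contact-force computation described above (barycenter of the pressed region, summation of taxel forces) can be sketched as follows. The linear raw-to-force conversion and the taxel-center coordinates are assumptions based on the sensor figures quoted in the text, not the manufacturer's calibration.

```python
import numpy as np

TAXEL_SIDE_MM = 3.8                 # side length of each square taxel
RAW_MAX, FORCE_MAX_N = 4095, 1.23   # raw reading range and max normal force

def contact_from_taxels(raw):
    """Effective contact point (mm on the pad) and total contact force (N)
    from a 4 x 8 matrix of raw taxel readings. The two barycenter
    coordinates play the role of the virtual prismatic joints l1 and l2."""
    forces = np.asarray(raw, float) / RAW_MAX * FORCE_MAX_N  # assumed linear
    total = forces.sum()
    if total == 0.0:
        return None, 0.0                    # no contact detected
    rows, cols = np.indices(forces.shape)
    x = (cols + 0.5) * TAXEL_SIDE_MM        # assumed taxel-center coordinates
    y = (rows + 0.5) * TAXEL_SIDE_MM
    barycenter = ((forces * x).sum() / total, (forces * y).sum() / total)
    return barycenter, total
```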
Robot Operating System (ROS) [47] is the communication layer that allows the integration of the software modules developed for the implementation of the proposed approach: a module to control the Allegro Hand with a PID controller and gravity compensation, a module to get the measurements of the tactile sensor system, a module with a graphical user interface to command the movements of each joint of the hand (used to perform the initial grasp), and a module to perform the object manipulation following the proposed approach, as shown in Figure 5.

Experiments
In the following illustrative experiments, the fingers of the Allegro Hand were closed around an unknown object until approximately reaching a desired contact force F_i^d = 5 N. In a first set of experiments, the initial grasp was obtained using the graphical application to control each hand joint individually; this application also allows the visualization of the force measured by the sensor on each fingertip. The objects used for experimentation, shown in Figure 6, were chosen looking for different shapes, so that the performance of the proposed approach can be illustrated under different conditions; the objects also have different stiffness. The constant λ used to compute Δd_i and the displacement ζ used to compute the auxiliary points P*_i^{k+1} were both set to 1 mm. The manipulation experiment for each object included the following steps: first, the initial grasp was performed; then, the object was rotated clockwise until reaching the limit of the hand workspace; then, the object was rotated counterclockwise until reaching again the limit of the hand workspace; and, finally, the object was released. Figure 7 shows snapshots of the manipulation of three objects with different shapes: a regular bottle, a bottle with multiple curvatures and a jar with flat faces. From left to right, the first picture shows the user putting the object in the workspace of the hand; the second picture shows the hand performing the initial grasp; the third picture shows the configuration of the hand when the limit of the hand workspace was reached after rotating the object clockwise; and the last picture shows the configuration of the hand when the limit of the hand workspace was reached after rotating the object counterclockwise.

Figure 7 rows (top to bottom): regular bottle, bottle with multiple curvatures, jar with flat faces.

The following Figures show the evolution of the commanded and reached values of the finger joints when the regular bottle (first row in Figure 7), the bottle with multiple curvatures (second row in Figure 7) and the jar with flat faces (third row in Figure 7) were manipulated. The commanded joint values correspond to the virtual contact points P_i^k, i = {I, M, T}, and the reached joint values are those obtained due to the real contact with the object surface. Figures 11-13 show the evolution of the measured forces at the fingertips for the three manipulation examples. In these Figures, five regions are remarked using vertical dashed lines and a number inside a circle: region 1 shows the joint and force values at the initial hand configuration, before grasping the object; region 2 shows the evolution of the values while the initial grasp was performed; region 3 shows the evolution of the values while the object was rotated clockwise; region 4 shows the evolution of the values while the object was rotated counterclockwise; and, finally, region 5 shows the values when the object was released and the hand returned to the initial configuration. In each region, the contact forces had the following behaviours. In region 1, the contact forces were zero at all the fingertips, since there was no contact between them and the object. In region 2, when the initial grasp was performed, the contact forces at the fingertips did not appear at the same time, because the finger movements were performed individually and sequentially using the graphical interface (see the previous subsection). In regions 3 and 4, during the manipulation, the measured forces remained close to the desired value. Finally, in region 5, the measured forces were constant until the object was released. In a second set of experiments, the hand is part of a dual-arm mobile manipulator and the initial grasps of the objects (shown in Figure 17) were performed by the robot itself.
It must be noted that one of the objects (the box) is almost completely rigid. The arm moves the hand to a position such that it envelopes the object, and then the fingers are closed until the object is grasped with contact forces close to the desired value. We remark that, as stated before, the problem of obtaining optimized initial grasps is outside the scope of this work. Once the object is grasped, it is lifted and then rotated counterclockwise and clockwise until reaching the limits of the hand workspace. The adjustable parameters were set to the same values as in the first set of experiments. Figure 18 shows snapshots of the manipulation of the two objects. Figures 19 and 20 show the evolution of the commanded and reached values of the finger joints when the objects were manipulated. Figures 21 and 22 show the evolution of the measured forces at the fingertips for each manipulation example. Videos showing the system performance for each case in both sets of experiments can be found at https://bit.ly/2lLvbDY.

Conclusions
This paper has presented a simple but effective approach for the manipulation of unknown objects based on tactile information. The approach is based on geometric reasoning to determine the movements of the fingers and is able to keep the grasping forces around the predefined value. The experimental validation was done using three fingers of an anthropomorphic robotic hand equipped with tactile sensors to rotate objects of different shapes and stiffness around an axis parallel to the palm of the hand, clockwise and counterclockwise. The manipulation was performed without using any model of the object, so the object is unknown to the system. The experimental results showed that the approach is effective and can be applied in real practical cases. Some positive aspects of the proposed approach are that the finger movements are determined in a very simple way using basic geometry, which is fast and effective, and that it can be easily implemented for hands with different kinematics and basic position control in the finger joints. On the other hand, since the object shape is unknown, it is not possible to predict, nor to know with precision, how much the object actually rotates for each commanded movement of the fingers. It must be noted that the maximum possible rotation range of the object depends on the initial grasp (contact points on the object and hand configuration); this is not a particular limitation of the proposed algorithm but is inherent to this type of in-hand manipulation. Starting from a good grasp increases the range of the possible rotation of the object but, since the object shape is unknown, the initial grasp commonly obtained by closing the fingers until touching the object may not be the most convenient one for the in-hand manipulation (as already mentioned, looking for an adequate initial grasp is outside the scope of this work).
A natural extension of the proposed approach is to consider the optimization of a grasp quality index as another goal during the manipulation process.