3D Object Modeling Using Eye on Hand Approach

This research proposes a vision measurement system consisting of a camera carried on the hand of a robot, which captures 2D images of an object from two sides at a constant distance from the object. Several experimental steps are needed to achieve this work. The first step is calibrating the camera using a standard block to find the best distance between the camera and the object; the best distance found is 410 mm. The second step uses MATLAB 7.12.0 (R2011a) to perform image processing and extract digital information (the number of object pixels in each row and column) with the proposed line-by-line scanning algorithm, from which the 2D object dimensions are computed. The resulting dimensions are found to be close to the real object dimensions measured with a digital vernier and a 3D digital probe. The last step manipulates the 2D images using the proposed algorithms to reconstruct the 3D objects from the extracted information (pixel counts).

The work of S. Jawad et al. (2012) [3] depends greatly on reading a colored reference extrusion sample image and a colored target extrusion image, captured under ambient light only to minimize possible illumination noise, and then reducing the image information by converting both to gray-scale. Because the gray-scale images still retain much information and are noisy due to varying brightness intensities, a threshold operation is applied to both the reference and target gray-scale images to eliminate these effects. J. Draréni, S. Roy and P. Sturm (2011) [4] presented a novel linear method to estimate the intrinsic and extrinsic parameters of a 1D camera using a planar object. As opposed to traditional calibration schemes based on 3D-2D correspondences of landmarks, their method uses homographies induced by images of a planar object.
F. Zhou et al. (2012) [5] presented a novel 3D optimization method based on the measurement coordinate system, constructing a new objective function that minimizes the metric distance between the calculated point and the real point in 3D space. G. Du and P. Zhang (2013) [6] presented a method that requires a camera rigidly attached to the robot end effector and a calibration board placed around the robot where the camera can see it; an efficient automatic approach to detecting the corners in images of the calibration board is proposed. Z. Marton et al. (2009) [7] presented a method for approximating complete models of objects with 3D shape primitives by exploiting symmetries common in objects of daily use. Experimental results on real-world data sets containing a large number of objects seen from different views at different distances and orientations show fairly robust performance. M. Sun et al. (2010) [8] presented a method for the challenging problem of generating 3D models of generic object categories from a single uncalibrated image. The method leverages an algorithm that enables a partial reconstruction of the object from a single view; a full reconstruction is then achieved in an object completion stage, where modified state-of-the-art 3D shape and texture completion techniques recover the complete 3D model. Results on images containing objects from five categories (mice, staplers, mugs, cars, and bicycles) show photo-realistic and accurate reconstructions. N. Mahmood, C. Omar and T. Tjahjadi (2012) [9] investigated an inexpensive passive method involving 3D surface reconstruction from video images taken at multiple views. The 15 measurements of different lengths on the reconstructed and actual dummy limb are highly correlated.
M. Barrero et al. (2013) [10] proposed a novel probabilistic method to reconstruct a hand-shape image from its template. The experimental results show a high chance of breaking a hand recognition system using this approach. Furthermore, since the method is probabilistic, several synthetic images can be generated from each original sample, which increases the chances that the attack succeeds.

Aim of Work
The aims of this work are: explaining the process of camera calibration to determine the best distance between objects and a camera carried on a robot's hand; finding the dimensions of objects and comparing the results with the real dimensions measured by a digital vernier and a digital probe; and reconstructing the objects from the extracted dimensions.

System Configuration
The system in this experimental work consists of a robot hand and a camera held by the robot's gripper, with the camera plane kept parallel to the plane of the object face to be measured, as shown in Figure (1). A SONY camera, model DSC-W380, is used, with 5X optical zoom and 14.1 megapixel resolution.


Camera Calibration
In this work, images must be calibrated to achieve an approximate dimensional image representation. Camera calibration is the heart of this measuring system; its results are critical to the accuracy of the equipment and are used directly for image rectification, so even a small calibration inaccuracy significantly affects the measurement accuracy. All measurements performed on digital images refer to a pixel coordinate system, whereas real-world measurements refer to a metric coordinate system; hence, calibration was performed and results were obtained over a number of experimental runs. The calibration uses the front view of a standard rectangular block of 100 mm length and 30 mm width. A set of images is captured at different camera distances (D), and each image is processed to get the average length and width of the rectangular component in pixel units. The actual object dimensions (length and width) are then compared to the dimensions extracted from the captured images, and the relationship between actual and acquired dimensions is computed. Figures (2) and (3) graph the results needed to obtain the mathematical equation that best fits the acquired dimensional data. It is worth noting that, due to the non-linear character of the obtained measurements, iterations are performed to find the equation that best fits the data. The best obtained relationships are: (1) the relationship between camera distance (x_L) in mm and image length (p_L) in pixels, and (2) the relationship between camera distance (x_W) in mm and image width (p_W) in pixels. The scale factor for length is S_L = 739 pixels / 100 mm = 7.39 pixel/mm, and the scale factor for width is S_W = 259 pixels / 30 mm = 8.633 pixel/mm.
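The calibration procedure above can be sketched as follows. This is a minimal illustration in Python (the authors used MATLAB), and the distance/pixel sample data below is invented for demonstration, not the paper's actual measurements; only the 739-pixel and 259-pixel readings at 410 mm, and the resulting scale factors, come from the text.

```python
import numpy as np

# Illustrative calibration data (NOT the paper's measurements):
# camera-to-block distance in mm, and measured block length in pixels.
distance_mm = np.array([350.0, 380.0, 410.0, 440.0, 470.0])
length_px = np.array([865.0, 798.0, 739.0, 689.0, 645.0])

# Fit a second-degree polynomial, the form the paper reports fits best.
coeffs = np.polyfit(distance_mm, length_px, deg=2)
fit = np.poly1d(coeffs)

# At the adopted 410 mm distance, the 100 mm block spans 739 pixels and
# the 30 mm width spans 259 pixels, giving the paper's scale factors.
scale_length = 739.0 / 100.0   # 7.39 pixel/mm
scale_width = 259.0 / 30.0     # ~8.633 pixel/mm

def pixels_to_mm(pixels, scale):
    """Convert a pixel count to millimeters using a calibration scale."""
    return pixels / scale

print(round(pixels_to_mm(739, scale_length), 2))  # 100.0
print(round(pixels_to_mm(259, scale_width), 2))   # 30.0
```

A second-degree fit is used because, as noted above, the pixel-to-distance relationship is non-linear; a single scale factor is only valid at the calibrated 410 mm distance.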

Real Test of Camera Calibration
To verify the measurement of object dimensions from the pixel counts extracted from images, two standard blocks with dimensions (40 * 30) mm and (60 * 30) mm are used, as shown in Figure (4). The blocks are placed at a distance of 410 mm, images of them are captured, and the scanning program computes the number of pixels along the length and width of both objects.

Figure (4): (a) Standard block (40 * 30) mm, (b) Standard block (60 * 30) mm.

Using the previous relationships, the resulting dimensions are obtained, compared with the real dimensions, and the error between them is computed.
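The line-by-line scanning idea described above can be sketched as below. This is an illustrative Python version (the paper's program is in MATLAB): the image is binarized with a threshold, object pixels are counted in every row and column, and the maximum counts give the object's extent in pixels, which the scale factors convert to millimeters. The threshold value and the synthetic test image are assumptions for demonstration.

```python
import numpy as np

def scan_dimensions(gray, threshold):
    """Line-by-line scan: binarize the image, then count object pixels
    in every row and column; the maxima give the horizontal and
    vertical extent of the object in pixels."""
    binary = gray < threshold              # dark object on a light background
    pixels_per_row = binary.sum(axis=1)    # object pixels in each row
    pixels_per_col = binary.sum(axis=0)    # object pixels in each column
    length_px = int(pixels_per_row.max())  # widest horizontal run
    width_px = int(pixels_per_col.max())   # tallest vertical run
    return length_px, width_px

# Synthetic test image: a 30 x 80 pixel dark rectangle on white background.
img = np.full((200, 300), 255, dtype=np.uint8)
img[85:115, 110:190] = 0

length_px, width_px = scan_dimensions(img, threshold=128)
print(length_px, width_px)  # 80 30

# Convert to millimeters with the paper's calibration scale factors.
print(round(length_px / 7.39, 2), round(width_px / 8.633, 2))
```

The per-row and per-column pixel counts are exactly the digital information the paper extracts; keeping the full count arrays (not just the maxima) is what later allows the object profile to be rebuilt in 3D.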

3D Reconstruction Process
For every object, two pictures are taken: the first from the front view and the second from the top or side view, depending on which dimension of the object is needed for reconstruction. Figure (5) shows pictures taken in the front view with respect to the objects, and Figure (6) shows pictures taken in the top view.
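Combining a front view with a top or side view as described above can be sketched as a simple two-view intersection. The paper does not give its reconstruction algorithm in detail, so the following Python sketch shows one plausible scheme under that assumption: a voxel is kept only if its projection lies inside the object silhouette in both views (a minimal visual-hull-style intersection); the silhouette masks here are synthetic.

```python
import numpy as np

def reconstruct_voxels(front_mask, top_mask):
    """Combine a front-view silhouette (y-x plane) and a top-view
    silhouette (z-x plane) into a 3D occupancy grid: voxel (y, x, z)
    is filled only if both views see the object at that position."""
    h, w = front_mask.shape            # y (height), x (width)
    assert top_mask.shape[1] == w, "views must share the x axis"
    # Broadcast: front gives (y, x), top gives (z, x) -> volume (y, x, z)
    volume = front_mask[:, :, None] & top_mask.T[None, :, :]
    return volume

# Synthetic silhouettes of a box: 4 px tall, 6 px wide, 3 px deep.
front = np.zeros((10, 12), dtype=bool)
front[3:7, 2:8] = True                 # front view: height x width
top = np.zeros((5, 12), dtype=bool)
top[1:4, 2:8] = True                   # top view: depth x width

vol = reconstruct_voxels(front, top)
print(vol.shape, int(vol.sum()))  # (10, 12, 5) 72
```

With the camera plane kept parallel to the object face and the distance fixed at 410 mm, each voxel edge corresponds to a known millimeter size through the calibration scale factors, so the occupancy grid directly yields the metric 3D model.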

A Pen Shape Object
The real pen dimensions are (10.18, 134.79) mm, and measured with a 3D digital probe they are (10.775, 134.8) mm. The pen-shaped object was captured at the adopted distance of 410 mm, as shown in Figure (7). Using the proposed algorithm, the dimensions of this object are obtained along the three coordinates x, y and z. The dimensions were converted from pixel units to metric units (mm), which enabled reconstruction of the 3D model of the pen-shaped object, as shown in Figure (8). The real dimensions of the pen are compared with those measured by a vernier and a digital probe; the dimensions and errors are listed in Table (2).

A Bottle of Drug Shape Object
A bottle of drug has real dimensions of (119.02, 49.19) mm. The bottle-shaped object was captured at the adopted distance of 410 mm, as shown in Figure (10). Using the proposed algorithm, the dimensions of this object are obtained along the three coordinates x, y and z, and converted from pixel units to metric units (mm), as shown in Figure (11). The resulting dimensions are compared with the real dimensions of the 3D bottle measured with a vernier; the errors are listed in Table (3) and shown in Figure (12).

Figure (12): (a, b) Comparison of digital vernier, scanning program and digital probe readings; (c, d) comparison of the dimension errors for the bottle shape.

A Span Shape Object
A span-shaped object has real dimensions of (104.42, 11.47, 11.06 and 4.95) mm. The object was captured at the adopted distance of 410 mm, as shown in Figure (13). Figure (14) shows the span views and the resulting 3D model. Tables (4) and (5) list the errors of the scanning program against the vernier and against the digital probe, which are illustrated in Figure (15).

Figure (15): (a, b, c, d) Comparison of digital vernier, scanning program and digital probe readings for the span shape.

A Connecting Rod Shape Object
A connecting rod object has real dimensions of (208.24, 45.46, 17.5 and 25.85) mm, and measured with a 3D digital probe they are (208.305, 45.465, 17.49 and 25.81) mm for the maximum length, big end, small end and thickness respectively. The connecting-rod-shaped object was captured at the adopted distance of 410 mm from two views, as shown in Figure (17). The dimensions are converted from pixel units to metric units (mm), which enabled reconstruction of the connecting rod in three views and as a 3D model, as shown in Figure (18). The resulting dimensions are compared with the real dimensions measured with a vernier; the errors are listed in Tables (6) and (7). The errors of the scanning program against the vernier and against the digital probe are illustrated in Figure (19) (a, b, c, d).

RESULTS
The best distance between the object and the camera is 410 mm; this distance is applied to all objects. The image of each object is processed in MATLAB, and the scanning program obtains the number of pixels in the X and Y directions. These pixel counts are applied in equation (1) to compute the object length in mm and in equation (2) to compute the object width in mm; finally, the error magnitudes for length and width from Table (2) are added. After the object dimensions are found and compared with the real dimensions measured with the digital vernier and digital probe, the object is reconstructed from the extracted dimensions.

CONCLUSIONS
On the basis of this study and the observations recorded experimentally, the following findings can be concluded:
1. Camera calibration is the first and an important step in this work; it determined the best distance between the camera and the object.
2. An automatic thresholding method is selected for the captured images. It is suggested because it succeeded in distinguishing the object in the scene without any prior information about the object or the scene.
3. Using the scanning program in MATLAB to count the number of pixels in the X and Y dimensions, converted to millimeter units, measures the object dimensions from edge to edge with very low error and low cost.
4. Curve fitting with a second-degree polynomial was found to give good results.
5. The dimensions found from the images contain errors of different magnitudes, attributed to misalignment between the center of the camera lens and the center of the object.
6. The 3D reconstruction system, benefiting from image processing and pixel counting, achieves good accuracy.
7. Some reconstructed models show scattered points, especially at the edges of the figure. This is due to the image density, which depends on the step between sampled points in each row and column.
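The automatic thresholding of conclusion 2, which separates the object from the scene without prior information, can be illustrated with a sketch. The paper does not name its method, so the Python version below shows one common choice of this kind, Otsu's method, which picks the threshold that maximizes the between-class variance of the gray-level histogram; the test image is synthetic.

```python
import numpy as np

def otsu_threshold(gray):
    """Automatic threshold selection by maximizing between-class
    variance (Otsu's method); needs no prior knowledge of the scene."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0       # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark object (~30) on a light background (~220).
img = np.full((100, 100), 220, dtype=np.uint8)
img[40:60, 30:70] = 30
t = otsu_threshold(img)
print(30 < t < 220)  # True
```

Because the threshold is derived from the image's own histogram, the same code works for any of the captured objects without per-object tuning, which matches the property highlighted in the conclusion.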