Verification of a Program for the Control of a Robotic Workcell with the Use of AR

This paper contributes a theoretical discussion and, through a practical example, information about the possibilities of using elements of augmented reality for creating control programs for a robotic workplace and for their simulated verification. It first surveys the current state of robotic systems that use virtual objects and describes existing and anticipated approaches. It then describes an experimental robotic workplace, explains the realization of a new way of verifying the control program of the robotic workplace, and outlines possibilities for further development of the working concept.


Introduction
Current manufacturing industries experience rapid innovation. Product life cycles are shortening and product ranges are diversifying, all in the context of progressive globalization; at the same time there is a shortage of skilled workers, who moreover represent high costs. Automation based on industrial robots is a promising way to achieve both productivity and flexibility. However, creating a control program for an industrial robotic system for a specific application is still difficult, time-consuming and expensive, so small enterprises can have enormous difficulties taking advantage of robotic automation.
In practice today there are two main categories of robot programming methods: online and offline programming.
In online programming, the teach pendant is usually used to move the effector manually through each stage of the task. The robot controller records the configurations, and a program is written that includes all the paths, postures and actions. This is suitable only for simpler processes and geometries, and the quality of the program naturally corresponds to the skill of the operator. Despite these limitations, this intuitive and rather cheap approach is widely used.
In the field of offline programming, new methods have been proposed. For example, the OLP method uses a complete 3D model of the robotic workcell, shifting the tasks of the robot operator to a software engineer. Compared to online programming it provides increased flexibility, but it usually requires additional setup procedures and calibration [1,2].
The programming and verification method proposed in this paper does not require large capital investment and tries to combine the advantages of both basic methods: a robotic workcell uses elements of augmented reality as a bridge between programming and its simulated verification.

Online Programming
Online programming is usually carried out by skilled robot operators. They guide the robot along the required trajectory using a teach pendant; this is called the lead-through method. While jogging the robot through the desired path, the robot controller records specific points and uses them to create motion commands according to the path definition. Although this method is simple and widely used, it has several disadvantages. The operator must always keep track of the coordinate frame of the current jogging action, which can be quite complicated. Once the program is done, it requires extensive testing to assure reliability, accuracy and operational safety. Moreover, the program itself is not very flexible when it must adapt to different conditions (workpiece, robot position), and during online programming the robot is excluded from the production cycle. In spite of all this, online programming is still the usual method in small companies (Figure 1).
Techniques of online programming have been improved using various sensors for detecting forces and positions, and in some cases beam sensors and cameras. These enhancements have sometimes even removed the need for jogging, as the robot is able to check the required path itself, physically or visually. Some authors state that the accuracy of the final program then need not depend on the skill of the robot operator, and that a 3D robot path with higher accuracy can be generated automatically. This would be a significant advantage, especially for applications where the process tool is in contact with the workpiece or a surface (machining, etc.) [1].
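The record-then-replay cycle of lead-through teaching can be sketched in a few lines. The `Waypoint` and `LeadThroughRecorder` names and the `MOVE_LINEAR` command format below are illustrative inventions for this sketch, not part of any real controller API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    """A recorded robot configuration: a label plus a Cartesian position in mm."""
    name: str
    x: float
    y: float
    z: float

class LeadThroughRecorder:
    """Minimal sketch of lead-through teaching: the operator jogs the robot,
    presses 'record', and the controller stores the current pose."""

    def __init__(self) -> None:
        self.waypoints: List[Waypoint] = []

    def record(self, x: float, y: float, z: float) -> Waypoint:
        # Name points p10, p20, ... in the style common in robot programs.
        wp = Waypoint(f"p{10 * (len(self.waypoints) + 1)}", x, y, z)
        self.waypoints.append(wp)
        return wp

    def to_motion_commands(self) -> List[str]:
        # Turn the stored poses into one linear-move command per waypoint.
        return [f"MOVE_LINEAR {wp.name} ({wp.x:.1f}, {wp.y:.1f}, {wp.z:.1f})"
                for wp in self.waypoints]
```

A real pendant additionally records orientation, speed and zone data at each point; the sketch keeps only the position to show the structure of the cycle.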

Offline Programming
Offline programming methods have been developed to avoid some of the disadvantages of the online form. Their characteristic feature is a PC-based offline programming interface connected to the robot controller. Among known and common techniques is so-called graphical programming. It is based on acquiring the 3D geometrical data of the workpiece, the robotic device and its environment (machines, fixtures, other objects), that is, everything that makes up the workcell. The data of the robot and other workcell equipment are usually available as CAD models, and workpiece geometry can be obtained from a coordinate measuring machine or from 3D scanning. The entire program package of the robotic device, including its paths and actions, is then prepared in the offline mode of the software environment, while the robot concerned can be used for other tasks. The offline method allows computation to be brought in and thus provides tools for path optimization. Having the program in a graphical software environment also makes it possible to run simulations and visualize the future performance of the robot [3].

Robot Programming with Use of Augmented Reality
Besides online and offline programming, there are other ways of making robot programming more visual and effective. A team of researchers at the Mechanical Engineering Department of the Faculty of Engineering, National University of Singapore, has developed a system for programming robots using elements of augmented reality. It can be understood as a form of offline programming, but the ideas behind it are advanced enough to be considered beyond conventional programming methods.
The system, called RPAR-II (Figure 2), includes a manipulator arm, an electrical gripper, a robot controller, a desktop PC, a display unit, a stereo camera and a handheld device with a marker. In this solution the kinematics and dynamics of the robot are taken into account while the real environment is augmented with a virtual robot. An interaction device is used to guide the virtual robot along the desired path. The system includes the definition of initial and final points together with complex mathematical computation for the optimization of robot paths: once the geometric path is obtained, the trajectory planning process deals with the kinematic and dynamic constraints of the robot. Both planned and simulated paths can be displayed simultaneously in the real working environment, so their difference can be seen and evaluated. The implementation of elements of augmented reality in programming processes is interesting mainly because it opens up the future possibility of considering additional constraints (velocity, acceleration) and increasing the level of human-robot interactivity. The main remaining issue with this method is low accuracy, as dimensional data about objects and spatial entities depend on the tracking systems [5].
(Figure 2: Unconventional programming with the use of elements of augmented reality: the RPAR-II system, Singapore [4])

Controlling an Experimental Robotic Workcell with an ABB Robotic Device

Hardware Characteristic of the Workcell
The robotic device used at the experimental workplace designed at the Faculty of Manufacturing Technologies (FMT) in Presov is the compact robot IRB 140 from ABB (Figure 3). It is a machine with 6 degrees of freedom offering a unique combination of high acceleration, working radius and payload. It is the fastest robot in its class, with good position repeatability and very good path accuracy (±0.03 mm). With a payload of 6 kg it can manipulate objects up to a distance of 810 mm. It can be installed on the floor or on a wall. Currently it is situated on a floor stand, with the intention of adding a slide for easy changes of position, or eventually a table that would be freely movable around the room.
As for the application area, the robot in this laboratory is used as a manipulator between different machining sequences. It can also be used for welding, assembly, cutting of material, packaging, batching, machine tending, etc. Its initial position is a place from which it can reach the working areas of both machining devices. These are didactic EMCO manufacturing devices intended for basic milling and turning operations. For the programming method and the verification of its results, models of all the objects present are available: the robot model in STL form is downloadable from the Internet, and the models of the mill and lathe were created in the CAD module of the engineering system ProEngineer.

Visualization Features
In many manufacturing activities, elements of AR are applied by software overlaying of the geometries of virtual models onto the real environment recorded with camera sensors [6]. This method is effective, but it requires watching a monitor that lies outside the normal working area, which sometimes causes problems with the synchronization of working activities and movements. To address this issue, a new visualization unit was created at FMT in Presov. Its philosophy lies in the creation of a new mixed working environment. Thanks to the use of a half-silvered surface, it provides better interactivity of the application and increases user comfort directly in the active working environment of the programmer [7-9]. The surface of the glass is either half-silvered or covered with a semi-transparent foil that creates a reflection while still allowing an unobstructed view into the working environment with no loss of quality. This commonly available kind of mirror is often used in gaming, medicine and business presentations. By optically combining two seemingly different views, it creates an ideal platform for a realistic spatial effect (Figure 5). The display works by projecting a mirrored image onto the reflective surface, using an LCD monitor placed above the working area, outside the viewing angle of the worker (programmer). A disadvantage of this visualization variant is that the quality and character of the created view depend on a fixed viewing point. This unpleasant attribute was resolved by applying a combined view running under the open-source system Blender, where a script was activated for tracking the user's face.

Face Tracking
Face tracking uses libraries and program elements from the freely accessible library known as OpenCV. This is a special library for creating computer vision applications, with the possibility of freely activating partial visualization scripts. It can be used on different platforms (Windows, Linux, MacOS, even iPhone). The OpenCV library was developed (Intel, 1999) for solving tasks based on complicated algorithms and logical operations in the areas of computer vision and artificial intelligence (AI).
A solution using the face tracking technique is ideal in cases that require coordinating the displayed view with the motion of the face (body). The monitoring process starts by activating the face tracking script and launching the data flow of video images recorded in real time by the web camera. These images are processed by a logic script that automatically identifies and selects the face of the user in the observed area (using a face pattern). The script draws a rectangle over the detected face and uses it to determine the geometrical centre of the face (the intersection of the diagonals defines the virtual coordinate system of the user). In Blender, the numerical values of this point are connected to the attributes of the imaging section, and the script sets the image location according to them (Figure 7).
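The geometric step of the face-tracking pipeline can be illustrated in isolation. The rectangle `(x, y, w, h)` is assumed to be the output of an OpenCV-style face detection; the function names and the normalized-offset mapping are our own illustration of how the centre point could drive the image location in Blender:

```python
def face_centre(rect):
    """Intersection of the diagonals of a detection rectangle (x, y, w, h):
    the origin of the user's virtual coordinate system."""
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def view_offset(centre, frame_size):
    """Map the face centre to a normalized offset in [-1, 1] that a
    Blender-side script could apply to the displayed image's location."""
    cx, cy = centre
    fw, fh = frame_size
    return (2.0 * cx / fw - 1.0, 2.0 * cy / fh - 1.0)
```

In the actual system the rectangle would be refreshed for every webcam frame; the sketch shows only the per-frame geometry, not the detection itself.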

Additional Inputs and Outputs
Another way to increase the quality of the implementation of elements of AR in the real working space (robotic workcell) is to use audio inputs and outputs. The programmer of a robotic device can receive audio instructions and information, for example about collision threats detected on virtual objects, violation of a safety zone, the start and end of motion or activity of a real or virtual robot, and eventually about reaching or recording the desired position.
In addition to receiving information, the programmer can also give voice commands. By simply activating a microphone on a regular PC (thanks to the possibility of linking the audio input with the Blender application), the programmer's voice becomes an interactive feature of the work, usable for immediate and more comfortable execution of partial programming functions.
Together with the audio, information can also be output directly as text in the view displayed on the half-silvered glass plate. Different text packages (coordinates of the required point, position and state of the effector, collisions, important positions, warnings) can be displayed directly in the field of view of the programmer, in the desired form and in real time, according to the connections defined in Blender.
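One way the audio and text channels described above could share a common source is a small event-to-message dispatcher. The event names, messages and `notify` helper below are purely illustrative assumptions, not part of the system described in the paper:

```python
# Illustrative mapping from workcell events to human-readable messages.
EVENTS = {
    "collision_virtual": "Collision threat detected on a virtual object",
    "safety_zone": "Safety zone violated",
    "motion_start": "Robot motion started",
    "motion_end": "Robot motion finished",
    "position_reached": "Desired position reached",
}

def notify(event, sinks):
    """Send the message for `event` to every registered output sink,
    e.g. a text overlay on the half-silvered glass or a speech channel.
    Returns the message, or None for an unknown event."""
    message = EVENTS.get(event)
    if message is None:
        return None
    for sink in sinks:
        sink(message)
    return message
```

Registering both a text-overlay callback and a text-to-speech callback as sinks would deliver each event simultaneously in both forms.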

Programming and Visual Verification of Control Program of a Robot with the Use of Elements of AR
The proposed programming concept is based on the creation of a displaying unit and on the connection of multiple software environments. The displaying unit consists of the construction (static frame), half-silvered glass (reflective and transmissive), an LCD displaying device (emitting the image onto the glass from a point outside the user's field of view), a camera (detecting the motion of the programmer's face) and a PC (synchronizing the received and broadcast images and running the Blender application itself) [10].
The program interconnection of the several software environments is so far only partially realized. For full functionality, and thus for real online programming from behind the imaging glass with the creation of augmented reality, some additional programming corrections are required. The principle is the mutual interaction of data coming from the different software packages: data from the RobotStudio application must be available to the main imaging and computation application running in Blender, and the Blender script has to generate (for example, with the use of RobotStudio) output in the form of a program in robot syntax. A suitable improvement would also be the availability of data from the control system of the machining devices for the calculation purposes of the Blender application, which is supported by the simulated control of the mill and the lathe in the Windows environment. The concept of the overall combined environment of the robotic workcell and the composition of its components, developed at FMT in Presov, is presented in Figure 8. Thanks to the combination of real and virtual data, the programmer has in his field of view an image combining real objects, such as the devices, lathe and mill, with virtually inserted models, for example a robot, a group of robots or another machine [11].
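As an illustration of what the Blender-side output generation might look like, the following sketch formats named waypoints into a simplified, RAPID-like program for an ABB controller. The abbreviated robtarget form and the `to_rapid` helper are assumptions for illustration; the paper does not specify the actual interchange format between Blender and RobotStudio:

```python
def to_rapid(waypoints, speed="v100", zone="z10", tool="tool0"):
    """Emit a simplified RAPID-like program body from (name, (x, y, z))
    waypoints. The robtarget declarations are abbreviated: a real RAPID
    robtarget also carries orientation and axis-configuration data."""
    decls, moves = [], []
    for name, (x, y, z) in waypoints:
        decls.append(f"CONST robtarget {name} := [[{x:.1f},{y:.1f},{z:.1f}], ...];")
        moves.append(f"    MoveL {name}, {speed}, {zone}, {tool};")
    return "\n".join(decls + ["PROC Path_10()"] + moves + ["ENDPROC"])
```

A script of this kind, fed with waypoints taken from the virtual robot in Blender, would give RobotStudio a program skeleton to complete and download to the controller.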
The advantage of such imaging also lies in the possibility of using it for the design and layout of the robotic workplace: the designer can visually check, in real time, the suitability of the proposal, the placement of machines and robots, working radiuses, etc. In Figure 9 another production device is inserted into the workcell, and the working range of the robot is verified against this machining device, which is not currently installed; the programmer can use the virtual space of the Blender application for such verifications, with potential problems signalled by different colours or combined with an audio signal. The creation and simulation of control programs is thus open even for a workplace that is not yet built, or for cases where the layout is to be changed [12].
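A minimal sketch of the colour-coded working-range check described above, using the 810 mm reach of the IRB 140 stated earlier; the `reach_status` helper and the margin value are illustrative assumptions, and a real check would also consider joint limits and the unreachable zone close to the base:

```python
import math

IRB140_REACH_MM = 810.0  # reach of the IRB 140 as stated in the text

def reach_status(base, point, reach=IRB140_REACH_MM, margin=50.0):
    """Classify a point for the visual check: 'green' if comfortably
    inside the working range, 'yellow' if within `margin` mm of the
    limit, 'red' if unreachable."""
    d = math.dist(base, point)  # Euclidean distance from the robot base
    if d > reach:
        return "red"
    if d > reach - margin:
        return "yellow"
    return "green"
```

In the combined view, the returned colour could tint the virtual model of the candidate machine, so an unreachable placement is immediately visible through the glass.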

Conclusion
This research focuses on the improvement of important features of robot control, concerning both the areas of programming and simulation. Details of the research and related concept are explained in the example of the experimental robotic workcell situated at the Department of Manufacturing Technologies, Faculty of Manufacturing Technologies in Presov, Slovakia [13].
The idea is based on the utilization of a newly created displaying unit built on the principle of half-silvered glass, fixed in a frame situated between the programmer and the workcell, which reflects and simultaneously transmits light. Looking into the workplace through this glass, the programmer sees the real objects behind it in combination with virtual ones inserted from the software environment of the application created in Blender. This can be considered a new approach among current methods of robot programming and simulation, as it stands on the border between online and offline programming (the programmer is physically in the workcell, but programming tasks are realized largely virtually) and tries to use the advantages of both. It makes robot programming more comfortable, more visual and easier. Future improvements will take the form of better inter-software communication and solutions for accuracy, which could bring very successful results.