Development and application of a maintainability evaluation system using immersive VR

Immersion in virtual reality (VR) is the perception of being physically present in a non-physical world. Using immersive VR in maintenance brings great convenience; for example, a virtual human can operate in a hazardous environment before a real person does, which prevents injury. In this paper, a system for immersive virtual maintenance (IVM) is presented. It is divided into three parts. The first part is the Input, which contains the maintenance task and the maintainability design. The second part is the Method, which makes the virtual human and the user act in a unified way; the key to building the IVM system is the interface between the motion capture system and the virtual simulation software. The third part is the Output, which presents the maintainability evaluation results. This paper focuses on how to build the IVM system; the difficulties and their solutions are also detailed. Finally, a simple application is given.


Introduction
Maintainability is "the ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment" [1]. It is one of the main factors not only constituting system effectiveness but also affecting life-cycle cost [2]. Traditional maintainability analysis and verification has some insufficiencies; for example, defects can be detected only after the product has been put into use, by which time the design is difficult to modify [3].
Virtual maintenance provides a powerful tool to solve these problems in maintainability design; it greatly improves design efficiency and reduces the cost of design and future maintenance. Immersion in VR is the perception of being physically present in a non-physical world. The greatest advantage of an immersive VR system is that it allows the user to experience a virtual space built with computer graphics as if it were the real world [4].
The rapid development of the virtual reality technology makes a big progress in immersive virtual simulation [5]. Virtual humans have been used in many applications. Thorisson's [6] interactive guide, Gandalf, gives solar system tours. USC's Institute for Creative Technologies created virtual experiences to train soldiers in interpersonal leadership [7]. Just VR [8] allows a medical trainee to work with a virtual assistant to assess and treat a virtual victim. Balcisoy et al. [9] created a system where users play chess against a virtual human augmented to the real world. The Human Modelling and Simulation Group at the University of Pennsylvania uses virtual humans for task analysis and assembly validation [10].
Immersive virtual simulation plays a significant role in improving product quality and saving production cost; moreover, the virtual human can work in harsh conditions, which avoids accidents. Ramanarayan [11] studied virtual humans' different types of walking. M. Susan Hallbeck [12] used virtual simulation to formulate efficient and healthy postures. By integrating dynamic simulation and ergonomics evaluation, a system designer can visualize and improve workplace design in the digital space [13]. Majid Hashemipour et al. [14] applied virtual technology in teaching.
However, the integration of motion capture systems and virtual simulation software has not made satisfactory progress. The data collected from motion capture is authentic but lacks interaction and controllability, which makes reusing the data in a dynamic environment difficult [15]. Virtual maintenance simulation software has strong simulation-analysis functions, but modelling the virtual human's motion is tedious and the edited actions are not realistic enough [16]. Therefore, integrating the two systems can provide control of the virtual human, yield more effective data, and realize a true sense of immersion.
In the quest for a functioning, cost-effective design, maintainability is often neglected. However, the cost of poor maintainability and reliability can be extremely high, and the accumulated costs over the life of a plant often exceed the initial capital cost. In engineering applications, manual maintenance exposes many problems; for example, at the design stage, maintenance personnel do not know whether the machine will be convenient to maintain when it malfunctions. To solve these problems, immersive VR is introduced into maintenance. With the help of IVM, maintainability evaluation (ME) and verification can be carried out at the design stage, which helps find insufficiencies before the product is manufactured. Moreover, in the IVM system a realistic maintenance experience can be obtained and the maintenance activities can be simulated.
In this paper, an IVM system is presented. The main functional goals of IVM are as follows: 1) import the virtual prototype and simulate the maintenance environment; 2) simulate and verify the maintenance activities; 3) analyse the maintenance visibility, accessibility, operating space and maintenance time; 4) output the maintainability evaluation.
The IVM system is divided into three parts. The first is the Input part, which contains the maintenance task and the maintainability design. The second is the Method part, which makes the virtual human and the user act in a unified way; it gives the main ideas of the method, and the key to building the IVM system is the interface between the motion capture (MC) system and the virtual simulation software (VSS). The third is the Output part, which gives the maintainability evaluation. This paper focuses on how to build the IVM system; the difficulties and their solutions are also detailed. Finally, an application of IVM is given.

The system of IVM
The system is designed according to the functional goals. As shown in Figure 1, the overall frame structure at the logic level is divided into three parts: the input part, the IVM method part and the output part. The input part contains the maintenance task and the standards of ME. The IVM method part contains MC, construction of the VR scene, and the interface between the MC system and the VSS. The output part contains the record of maintenance activities, the ME, and modification advice.

The basic definitions of IVM
In order to better understand this article, some definitions are given here.
Virtual Environment (VE): A three-dimensional model of a space displayed to a human user from an egocentric point of view using real-time 3D computer graphics.
Virtual Reality (VR): The experience of being within a VE.
Immersion: The feeling of "being there" that is experienced in some VEs. A VE user is immersed when he feels that the virtual world surrounds him and has to some degree replaced the physical world as the frame of reference.
Virtual Maintenance: Virtual maintenance is based on computer technology and virtual reality technology. In the VE, which contains the product's digital prototype and a virtual human model, the virtual human is driven to complete the repair-process simulation. Virtual maintenance is a kind of simulation, but it stresses the sense of reality.
Immersive Virtual Maintenance (IVM): The maintenance personnel wear the MC equipment, while the motion data is loaded into the computer to control the virtual human in completing the repair-process simulation.
Motion Capture (MC): Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, medical applications, and for validation of computer vision [17] and robotics.
Head Mounted Display (HMD): a display device, worn on the head, which has a small display optic in front of each eye [18]. The HMD, Virtual Research VR 1280, is used in this paper.

The method of IVM
As shown in Figure 1, to build the IVM system, the maintenance task and the standards of ME should first be prepared. The key components of the IVM system are the MC, the VR scene and the Interface, as shown in Figure 2. The MC here refers to the user, wearing the MC system and a head-mounted display, realizing the feeling of immersion in the VR. The VR scene consists of the virtual human, the virtual prototype, and the VSS.
The key to building the IVM system is the interface between the MC system and the VSS, which helps the virtual human and the user act in a unified way. The outputs of the IVM are the Record, the Modification Advice, and the ME. We choose ShapeWrap as the MC system. ShapeWrap is a portable and durable motion capture system; freedom from studios and cameras, combined with its robust design, makes ShapeWrap suitable for use in the field, the classroom, or almost anywhere.
Because of its wireless capability, ShapeWrap can be operated away from the computer that receives the MC data. ShapeWrap conforms to fit almost any size of person, and no special suit is required. Many researchers have used this MC system to capture human motion and obtain credible action data [19][20][21][22].

The analysis of position and orientation of virtual human in ShapeWrap.
In the GPO (Global Position and Orientation) data format from ShapeWrap, all limb data is stored with reference to the world coordinate system. All positional data is measured in mm and all orientation data is measured as roll, pitch and yaw angles in degrees, as shown in Figure 3. The X-axis extends outwards in the direction that the person is facing during the homing calibration, the Y-axis extends upwards, and the Z-axis extends to the person's right during homing. The order of rotation is yaw first about the +Y-axis, followed by pitch about the rotated +Z-axis, and then roll about the doubly rotated +X-axis. The default limb position (all rotation angles set to zero) is a limb extending outwards along the X-axis. In other words, if the posture of a certain part of the human manikin is changed, we can obtain the corresponding series of degrees of freedom. In Figure 2, the red line is the +X-axis, the green line is the +Y-axis, and the blue line is the +Z-axis. During the homing calibration, the local coordinate systems are not all the same as the global coordinate system: the coordinate systems of the head, arms, hands and lumbar are the same as the global one, while the legs' coordinate system is rotated 90 degrees clockwise about the Z-axis. The virtual human in DELMIA is treated as a product, and its position and orientation are analysed as follows.
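For illustration, the ShapeWrap rotation order described above (yaw about +Y, then pitch about the rotated +Z, then roll about the doubly rotated +X) can be written as a product of elementary rotation matrices. This is a sketch of the convention, not ShapeWrap SDK code:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def shapewrap_rotation(yaw_deg, pitch_deg, roll_deg):
    """Limb rotation for ShapeWrap GPO angles: intrinsic yaw (Y),
    then pitch (Z'), then roll (X''). Intrinsic rotations compose
    by right-multiplication, hence Ry * Rz * Rx."""
    y, p, r = (math.radians(v) for v in (yaw_deg, pitch_deg, roll_deg))
    return matmul(matmul(rot_y(y), rot_z(p)), rot_x(r))
```

With all angles zero the result is the identity, matching the default limb position along the X-axis; a 90-degree yaw turns a limb pointing along +X towards -Z (the person's left), consistent with Z pointing to the person's right.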

The analysis of position and orientation of virtual human in DELMIA.
Like ShapeWrap, DELMIA also has its GPO: all positional data is measured in mm and all orientation data is measured as roll, pitch and yaw, as shown in Figure 4. But there are some differences between the two systems. Firstly, in DELMIA not all limb data is stored with reference to the world coordinate system; the arms' position and orientation differ from the global position and orientation, as shown in Figure 4. Secondly, in ShapeWrap the orientation angle data are saved in one array, while in DELMIA the data are saved in two arrays: one for the human posture data and the other for the orientation and position data.
In the GPO, the X-axis extends outwards in the direction that the person is facing during the homing calibration, the Y-axis extends to the person's left during homing, and the Z-axis extends upwards. The order of rotation is yaw first about the +Z-axis, followed by pitch about the rotated +Y-axis, and then roll about the doubly rotated +X-axis.

Interface.
The purpose of the interface is to make the virtual human and the user act in a unified way, so that the maintenance activities can be evaluated in the VR scene. To obtain the interface, firstly, the conversion matrix between the MC system and the VSS should be known; secondly, the conversion of coordinates between the MC system and the VSS should be analysed.

The conversion matrix of MC system and VSS.
The homogeneous transformation matrix from robotics is applied in the algorithm of the software interface design, as shown in Formula 1:

    T = [ R   p ]
        [ 0   1 ]        (1)

where R is the 3×3 orientation block constructed from the roll, pitch and yaw angles, and the fourth column p gives the coordinate position of the object. The human body's pose is defined by this array: the upper 3×3 block represents the orientation, and the remaining 3×1 column gives the human's position.
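As an illustrative sketch of Formula 1 (not the paper's own code), the following assembles a 4×4 homogeneous matrix from yaw/pitch/roll angles and a position; the Z-Y'-X'' rotation order is taken from the DELMIA GPO convention (yaw about +Z, pitch about the rotated +Y, roll about the doubly rotated +X) and is otherwise an assumption:

```python
import math

def homogeneous_matrix(yaw, pitch, roll, x, y, z):
    """4x4 transformation matrix in the Formula-1 layout: the upper-left
    3x3 block is the orientation, the fourth column is the position (mm).
    Angles in radians; rotation order Z-Y'-X'' (an assumption here)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # R = Rz(yaw) * Ry(pitch) * Rx(roll), written out element by element
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    return [R[0] + [x], R[1] + [y], R[2] + [z], [0.0, 0.0, 0.0, 1.0]]
```

With all angles zero, the orientation block is the identity and the fourth column carries the position, matching the description of Formula 1 above.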

The analysis of the coordinate system.
Comparing the coordinate systems of the MC system and the VSS, as shown in Table 1, we can see the differences between them.
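A minimal sketch of the remapping implied by the two conventions (ShapeWrap: X forward, Y up, Z right; DELMIA: X forward, Y left, Z up); we assume Table 1 records this same correspondence:

```python
def shapewrap_to_delmia(x, y, z):
    """Remap a point from ShapeWrap axes (X forward, Y up, Z right)
    to DELMIA axes (X forward, Y left, Z up): forward stays on X,
    ShapeWrap's up (Y) becomes DELMIA's Z, and ShapeWrap's right (Z)
    becomes DELMIA's negative Y (since DELMIA's Y points left)."""
    return (x, -z, y)
```

For example, a point one unit above the origin in ShapeWrap, (0, 1, 0), maps to (0, 0, 1) in DELMIA.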

The interface algorithm description.
The interface algorithm is mainly about data conversion. The data conversion from the MC system to the VSS is shown in Figure 5.
Step 1 Get the GPO data from the MC software. ShapeWrap has two data-output modes, real-time output and file output; here we choose real-time output. There are four data-output formats, GPO, C3D, BVH and MotionBuilder; here we choose GPO.
Step 2 Distinguish the GPO data into Particle data and Rigid Body data. The GPO data contains two kinds of data, Particle data and Rigid Body data; before data conversion, they should be distinguished.
Step 3 Data conversion. The GPO data from the MC system cannot be used directly in the VSS, since the GPO coordinate systems and the numbers of data differ.
Convert the GPO data into data that DELMIA can use, and store it in Array OPV and Array iAxis. OPV saves the human joints' curvature data, and iAxis saves the orientation and position data. The detailed calculation is given in Section 3.1.
Step 4 Assign the converted data to DELMIA. After the assignment, the virtual human's motion in DELMIA is the same as that of the user wearing the ShapeWrap, as shown in Figure 6.
Step 5 Repeat steps 2 to 4 until the MC system stops sending data to DELMIA.
Figure 6. The practical application of the simulation system.
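The Step 1-5 flow above can be sketched as a frame-processing loop. The four callables below are hypothetical stand-ins for the ShapeWrap SDK and the DELMIA automation interface, not real API names:

```python
def run_interface(read_frame, split_frame, convert, apply_to_manikin):
    """Sketch of the Step 1-5 conversion loop.

    read_frame       -- Step 1: returns one real-time GPO frame, or None
                        when the MC system stops sending data
    split_frame      -- Step 2: separates Particle and Rigid Body data
    convert          -- Step 3: GPO -> (OPV, iAxis) data for DELMIA
    apply_to_manikin -- Step 4: assigns the converted data to the manikin
    The loop itself is Step 5. Returns the number of frames processed."""
    frames = 0
    while True:
        frame = read_frame()                      # Step 1
        if frame is None:                         # MC stopped sending
            break
        particle, rigid = split_frame(frame)      # Step 2
        opv, i_axis = convert(particle, rigid)    # Step 3
        apply_to_manikin(opv, i_axis)             # Step 4
        frames += 1                               # Step 5: next frame
    return frames
```

In a real deployment the callables would wrap the ShapeWrap streaming API and DELMIA's automation layer; here they are injected so the control flow can be exercised on its own.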

Output
The output of the IVM system contains the video recording, the ME, and modification advice. The video recording is used for teaching, and the modification advice follows from the conclusions of the ME. Using the IVM system, the maintenance visibility, accessibility and operating space can be analysed more easily, and the maintenance time can be obtained more accurately. The ME of IVM consists of qualitative evaluation and quantitative evaluation: the qualitative evaluation covers visibility, accessibility, operating space and the human comfort degree during maintenance, while the quantitative evaluation assesses the maintenance time.

1) The maintainability qualitative evaluation
The criterion of visibility is shown in Figure 7.1. Best vision means the user's vision is good throughout the maintenance. Max vision means the target can be seen, but may not remain visible during the maintenance. Invisible means the maintenance target cannot be seen.
The criterion of accessibility is shown in Figure 7.2. In area 1, the maintenance target can be reached easily and the accessibility is good, while in area 2 the accessibility is bad.
Operating space means there should be enough space to carry out the maintenance task. The red circles shown in Figure 7.3 indicate collisions between the hand and the machine; the more collisions there are, the smaller the operating space is. The human comfort degree during maintenance is shown in Table 2: the upper arm, forearm, hand, neck and trunk are analysed, a score range is given, and each score is matched with a specific colour.

2) The maintainability quantitative evaluation
The maintenance time is evaluated here. In the IVM system, the length of time that the virtual human takes is equal to the real maintenance time. At the beginning of the design, the allowed maintenance time is determined by the maintainability allocation, so the time obtained from the IVM system can be used to judge whether the requirement is met.

The difficulties and solutions
The conversion matrix is T, as shown in Formula 1; using its first three columns as the orientation block, we get B = T·A, where A is the GPO coordinate matrix in the MC system and B is the GPO coordinate matrix in the VSS. Both A and B are orthogonal matrices, with |A| = |B| = 1, so T = B·A⁻¹ = B·Aᵀ. Then T can be obtained as follows: 1) let the user, wearing the MC system, perform the same action as the virtual human in the VSS, and record the GPO data of A and B; 2) compute T = B·Aᵀ.
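Since A and B are orthogonal with unit determinant, the inverse of A is simply its transpose, and the conversion matrix follows directly. A minimal sketch:

```python
def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def conversion_matrix(A, B):
    """Solve B = T * A for T. Because A is orthogonal (|A| = 1),
    its inverse equals its transpose, so T = B * A^T."""
    return matmul(B, transpose(A))
```

As a sanity check, multiplying the resulting T back onto A must reproduce B.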

Calculate the virtual human's joints curvature in VSS.
The joints' curvature is calculated from the MC-system data. Here, the right arm is taken as an example: given the orientation data of the upper arm and the forearm as two orientation arrays, we calculate Array A, which contains the elbow angle data; these angle data can then be used in DELMIA.
Array A can be obtained by composing the two limb orientations: A equals the inverse of the upper arm's orientation matrix multiplied by the forearm's orientation matrix. A is a 3×3 array whose value represents the forearm's orientation with respect to the upper arm. The coordinate system has 3 degrees of freedom, whose rotation sequence is assumed to be α first, then β, and finally γ.
where α, β and γ represent the angle of yaw, the angle of pitch, and the angle of roll, respectively. They are stored in OPV[b1], OPV[b2] and OPV[b3], and can be used as the elbow data in DELMIA.
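A hedged sketch of this elbow-angle calculation, using illustrative names R_upper and R_forearm for the two limb orientation matrices (the paper's own array labels were lost) and assuming the ShapeWrap Y-Z'-X'' angle sequence:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def elbow_angles(R_upper, R_forearm):
    """Relative orientation of the forearm with respect to the upper arm:
    A = R_upper^T * R_forearm (R_upper is orthogonal, so its inverse is
    its transpose). Angles are then extracted assuming the ShapeWrap
    sequence: alpha (yaw) about Y, then beta (pitch) about Z', then
    gamma (roll) about X''. Degrees-converted, these would fill
    OPV[b1], OPV[b2], OPV[b3]."""
    A = matmul(transpose(R_upper), R_forearm)
    beta = math.asin(A[1][0])               # pitch
    gamma = math.atan2(-A[1][2], A[1][1])   # roll
    alpha = math.atan2(-A[2][0], A[0][0])   # yaw
    return alpha, beta, gamma               # radians
```

Round-tripping a known rotation (building R_forearm from chosen angles and extracting them back) is a quick way to verify the extraction formulas.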

Calculate the fingers' bending angle in VSS.
Assume y, p and r respectively represent the yaw angle, pitch angle and roll angle; the finger geometry is shown in Figure 8. Finally, we obtain the data of the finger joint between F1 and F2.
OPV[y] and OPV[p] represent the yaw angle and pitch angle; the finger's roll angle does not exist.

Holding tools in VSS.
A maintenance tool is necessary during an actual maintenance operation, and so it is in IVM. In order to get convincing ME results, the virtual human in the VSS should hold the tools. Judging whether the virtual human is holding a tool in the VSS is a major difficulty in the IVM system. The algorithm is shown in Figure 9: judge the real-time frame data from the MC system; if the hand collides with the tool, then use the previous frame's data to judge whether it was also colliding; if yes, then update the tool's position with the hand; if not, then bind the hand to the tool.
If the real-time frame shows no collision, then analyse the previous frame's data: if the hand was colliding with the tool, then update the tool's position; if not, the hand has not touched the tool.
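The branching described above can be written as a small decision function. The state names are our own labels, and the two boolean inputs would come from the VSS's collision detection:

```python
def tool_state(collides_now, collided_before):
    """Decision table for the tool-holding algorithm (Figure 9), taken
    literally from the text. collides_now is the hand/tool collision in
    the current frame, collided_before in the previous frame."""
    if collides_now:
        # colliding in both frames: hand already holds the tool,
        # so the tool's position follows the hand
        return "update_tool_position" if collided_before else "bind_hand_to_tool"
    # no collision now: a previous collision means the tool position
    # still needs updating; otherwise the hand never touched the tool
    return "update_tool_position" if collided_before else "no_contact"
```

Running the function over the four input combinations reproduces the four outcomes of the flow chart.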

Case study
It is significant to introduce the IVM system into engineering maintenance. The user, wearing the MC system and head-mounted display, disassembles a component from a machine, as shown in Figure 10.
The virtual human holds the wrench to turn the screw. From Area 1 (Figure 10.1), we can see that the accessibility and visibility are good, while the operating space is not big enough: the upper arm hits the machine.
Figure 10.2 shows the human comfort degree during maintenance. The upper arm's colour is red and the forearm's is yellow; compared with Table 2, the comfort degree is 3. In Figure 10.3, the user holds the component and walks down the steps. The user's vision is good during the maintenance, and the visibility is the best.