Training of Procedural Tasks Through the Use of Virtual Reality and Direct Aids

A high percentage of human activities are based on procedural tasks, for example cooking a cake, driving a car or fixing a machine. Reading an instruction book, watching a video or listening to the explanation of an expert have been the traditional methods to learn procedural tasks. However, most researchers agree that procedural tasks are learnt gradually, as a result of practice through repeated exposure to the task.


Introduction
A high percentage of human activities are based on procedural tasks, for example cooking a cake, driving a car or fixing a machine. Reading an instruction book, watching a video or listening to the explanation of an expert have been the traditional methods to learn procedural tasks. However, most researchers agree that procedural tasks are learnt gradually, as a result of practice through repeated exposure to the task.
Nowadays, Virtual Reality (VR) technologies can be used to improve the learning of a procedural task and, moreover, to evaluate the performance of the trainee. For example, a Virtual Environment (VE) can allow trainees to interact physically with the virtual scenario by integrating a haptic device with the computer vision system, so that trainees can interact with and manipulate the virtual objects, feeling the collisions among them and, most importantly, practicing the task under the approach of learning by doing. In this way, trainees can practice in order to improve their abilities until they are proficient with the task. This chapter introduces a new Multimodal Training System (MTS) that provides a new interactive tool for assisting trainees during the learning of procedural assembly and disassembly tasks. This MTS uses haptic feedback to simulate the real behaviour of the task and special visual aids to provide information and help trainees to undertake the task. One of the main advantages of this platform is that trainees can learn and practice different procedural tasks without the need for physical access to real tools, components and machines. For example, trainees can assemble/disassemble the different components of a virtual machine as would be done in real life. This system was designed and implemented as an activity in the context of the SKILLS project (IST FP6 ICT-IP-035005-2006).
During the learning of a task, most of the time, trainees need to receive aids with information about how to proceed with the task. From the point of view of the authors, these aids can be divided into two groups according to the type of information that they provide and how this information is provided: direct aids and indirect aids. The difference between both types of aids lies in the cognitive load required from the trainees to interpret and understand the information provided by the aid. In this way, indirect aids require that trainees are active in interpreting the information they receive.

What is a procedural task
A procedural task involves the execution of an ordered sequence of operations/steps that need to be carried out to achieve a specific goal. In other words, this kind of activity requires following a procedure, and it is usually evaluated by the speed and accuracy of its performance. There are two main types of skills (abilities) that may be involved in each step:
1. Physical skill, called here motor skill, which entails the execution of physical movements, like unscrewing a nut.
2. Mental skill, called here cognitive skill, which entails the knowledge about how the task should be performed, like knowing which nut should be unscrewed in each step.
The cognitive skill has an important role in the process of learning a procedural task: it reflects the ability of humans to obtain a good mental representation of the task organization, and to know what actions should be done, when to do them (appropriate time) and how to do them (appropriate method).
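The distinction between knowing the next step (cognitive skill) and executing it (motor skill) can be made concrete as a small data structure. The sketch below is ours, with illustrative names; it is not part of the system described in this chapter.

```python
from dataclasses import dataclass


@dataclass
class Step:
    """One operation of a procedural task, e.g. 'unscrew nut'."""
    name: str
    completed: bool = False


@dataclass
class ProceduralTask:
    """An ordered sequence of steps; the order itself is part of the skill."""
    steps: list
    current: int = 0

    def attempt(self, step_name):
        """Return True if step_name is the correct next step, else False.

        Choosing the right step at the right time exercises the cognitive
        skill; physically executing it would exercise the motor skill.
        """
        if self.current < len(self.steps) and self.steps[self.current].name == step_name:
            self.steps[self.current].completed = True
            self.current += 1
            return True
        return False

    def finished(self):
        return self.current == len(self.steps)


task = ProceduralTask([Step("unscrew nut"), Step("remove cover"), Step("replace filter")])
assert not task.attempt("remove cover")   # wrong order: a cognitive error
assert task.attempt("unscrew nut")        # correct next step
```

A training system built on such a representation can immediately classify an error as a sequencing (cognitive) error, independently of how well the movement itself was executed.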

Learning of procedural tasks
From a dictionary, one can identify two primary meanings for the word learning. The first is the active process of acquiring information, knowledge, procedures and capabilities for performing operations and achieving goals to satisfy the requirements of a given task. The second is knowledge or ability that is acquired by instruction or study. During the learning process, the learner selects and transforms information, constructs hypotheses and makes decisions, relying on a cognitive structure (e.g. schemas and mental models) built from their past and current knowledge and experiences (Dale (1969)).
There are many ways by which people can learn a new task. People can learn by reading instructions, by watching a video, by listening to an expert and, of course, by practicing themselves. In the context of procedural tasks, training involves learning the sequence of steps needed to accomplish the goal of the task, as well as learning new motor skills or improving existing ones.
Edgar Dale (Dale (1969)) developed the cone of learning from his experience in teaching and his observations of learners. He demonstrates that the least effective methods to learn and remember things are reading a manual and hearing a presentation, because after two weeks people remember 10% of what they read and 20% of what they hear. In contrast, the most effective methods involve direct learning experiences, such as doing the physical task or simulating the real task, because people are able to remember 90% of what they do. This approach is called enactive learning or learning by doing.
Learning by doing is the acquisition of knowledge or abilities through the direct experience of carrying out a task, as part of a training process, and it is closely associated with practical experience (McLaughlin & Rogers (2010)). The reason to focus on guided enactive training is that it is direct, intuitive, fast and embodied in common action-perception behaviours, and that it may be more efficient than other types of training methods for becoming an expert. Besides, if the learning by doing approach is supervised, trainees can not only practice the task but also receive feedback about their actions along the training sessions, in order to avoid errors and improve their performance.
In this way, direct practice under the guidance of an expert seems to be the preferable approach for acquiring both the procedural cognitive knowledge and the motor skills associated with procedural tasks. However, this preferred situation is expensive and often hard to achieve with traditional training methods. Sometimes practice in the real environment is impossible due to safety, cost and time constraints; other times the physical co-location of trainee and expert is impossible. Consequently, new training tools are required to improve the learning of procedural tasks, and this is where VR technologies can play an important role.

The use of VR technologies for training
Virtual environments are increasingly used for teaching and training in a range of domains, including surgery (Albani & Lee (2007); Howell et al. (2008)), aviation (Blake (1996)), anesthesia (Gaba (1991); Gaba & DeAnda (1988)), rehabilitation (Holden (2005)), aeronautics assembly (Borro et al. (2004); Savall et al. (2004)) and driving (Godley et al. (2002); Lee et al. (1998)). These simulated and virtual worlds can be used for acquiring new abilities or improving existing ones (Derossis et al. (1998); Kneebone (2003); Weller (2004)). In general, compared to traditional training methodologies, VR technologies provide new opportunities to capture the essence of the abilities involved in the task to be learnt, allowing them to be stored and transferred efficiently. Speed, efficiency and, above all, transferability of training are three major requirements in the design of new virtual training systems.

Advantages and risks of the VR technologies for training
The use of VR technologies for training provides a range of benefits with respect to traditional training systems, mainly:

• It allows teaching a task under the approach of learning by doing, so trainees can practice the task as many times as needed to achieve the requested level of proficiency.
• It eliminates the constraints of using the real environment, mainly availability, safety, time and cost constraints. For example, in medical domains there is no danger to the patients and little need for valuable elements such as cadavers or animals. Similarly, in aviation there is no risk to the aircraft, nor fuel costs. In the case of maintenance tasks, VR systems can offer an environment free of the risk of damaging machines or harming technicians (Morgan et al. (2004); Sanderson et al. (2008)).
• It can provide extra cues, not available in the real world, that can facilitate the learning of the task. These cues can be based on visual, audio and/or haptic feedback; a combination of these cues is called multimodal feedback. For example, to provide information about the motion trajectory that users should follow, the system could provide visual aids (e.g. displaying the target trajectory on the virtual scenario), haptic aids (e.g. applying an extra force to constrain the motion of the user along the target trajectory) or audio aids (e.g. playing a sound when the user leaves the target trajectory).
• It allows simulating the task in a flexible way, adapting it to the needs of trainees and the training goal, for example removing some constraints of the task in order to emphasize only key aspects.
• It can provide enjoyment, increasing the motivation of trainees (Scott (2005)).
• It allows logging the evolution of the trainees along the training process.
On the other hand, the greatest potential danger of the use of VR systems is that learners become increasingly dependent on features of the system which may inhibit the ability to perform the task in the absence of the features.
Developing dependence, or at least reliance, on VR features that do not exist in the real environment or are very different from their real world counterparts can result in negative transfer to the real world.If fidelity cannot be preserved or is hard to achieve, it is much better to avoid the use of the VR instantiation, or alternatively, particular care must be taken to develop a training program that identifies the VR-real world mismatch to the trainee and provides compensatory training mechanisms.It may also be necessary to manipulate the relational properties between feedbacks, i.e., the congruency between visual, audio, and haptic stimulations, in order to favor cross-modal attention.
The experiment described in this chapter demonstrates that the controlled use of multimodal feedback does not harm trainee performance when the trainee moves from the VR system to the real world, and it therefore avoids the main disadvantage of the use of VR reported in the literature: negative or null transfer.

Multimodal systems for training
Within virtual reality systems we can find the multimodal systems. The term multimodal comes from the word multi, which refers to more than one, and the word modal, which refers to the human sense used to perceive information, the sensory modality (Mayes (1992)). Therefore, in this chapter a multimodal system is defined as a virtual reality system that supports communication with the trainees through more than one sensory modality (see Figure 1), mainly visual, haptic and auditory. In order to support this multimodal communication properly, multimodal systems need highly demanding hardware and software technologies. Nowadays, thanks to the latest advances in computer technologies, which have increased their capabilities for processing and managing diverse information in parallel, the use of multimodal systems is growing.
Fig. 1. A multimodal system is a virtual system where the user interacts with the virtual scene through more than one sensory modality.
According to Figure 1, a multimodal system can provide information to the user through different types of interfaces, mainly:
1. Visual interfaces: the primary interaction style of multimodal interfaces is usually visual interaction through computer display technology (such as computer screens and 3D projections) to render the virtual scenario. In addition, these interfaces can be used to provide relevant information for the task through messages, animations and visual aids.
2. Auditory interfaces: traditionally, sound in computer systems has been used either to convey feedback to the users or to alert them to a system event, for example a beep to indicate an error. However, some research studies are looking at the use of more complex speech systems to provide richer non-visual information/interaction to the user. One example of the research work in auditory interfaces is the auditory icons (Gaver (1994)), which provide information about computer system events without the need to use any of the visual display space.
3. Haptic interfaces: the word haptic comes from the Greek haptikós and refers to the sense of touch. The human touch system consists of various skin receptors, muscle and tendon receptors, nerve fibers that transmit the touch signals to the touch centre of the brain, as well as the control system for moving the body. In normal tactile exploration the receptors in the hairless skin play the dominant role, but in haptic interaction the focus is shifted towards the proprioceptive and kinesthetic touch systems. A distinction is usually made between tactile interfaces and force (kinesthetic) feedback interfaces. A tactile interface is one that provides information more specifically for the skin receptors, and thus does not necessarily require movement in the same way as a kinesthetic interface does. For simplicity, in this chapter the term haptic devices will denote the force feedback interfaces.
Haptic devices are mechatronic input-output systems capable of tracking the physical movements of the users (input) and providing them with force feedback (output). Therefore, haptic devices allow users to interact with a virtual scenario through the sense of touch. In other words, with these devices users can touch and manipulate virtual objects inside a 3D environment as if they were working in the real environment. These devices make it possible to simulate both unimanual tasks, which are performed with just one hand (for example grasping an object and inserting it into another one), and bimanual tasks, in which two hands are needed to manipulate an object or in which two objects need to be dexterously manipulated at the same time (Garcia-Robledo et al. (2011)). Figure 2 shows some commercial haptic devices. Some relevant features of a haptic device are its workspace, the number of degrees of freedom, the maximum level of force feedback and the number of contact points. The selection of a haptic device to be used in an MTS depends on several factors, mainly the type of task to be learnt, the preferences/needs of trainees and cost constraints. The sensorial richness of multimodal systems translates into a more complete and coherent experience of the virtual scenario, and therefore the sense of being present inside this VE is stronger (Held & Durlach (1992); Sheridan (1992); Witmer & Singer (1998)). The experience of being present is especially strong if the VE includes haptic (tactile and kinesthetic) sensations (Basdogan et al. (2000); Reiner (2004)).
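To illustrate how a force-feedback device turns tracked position into an output force, the sketch below uses the common penalty (spring) approximation for contact with a virtual wall. This is a generic illustration, not the rendering method of any specific device; the stiffness value is arbitrary.

```python
def wall_force(pos_z, wall_z=0.0, stiffness=800.0):
    """Penalty-based haptic force (in N) for a flat virtual wall.

    pos_z: probe position along the wall normal (m); the wall occupies
    the half-space z < wall_z. While the probe is in free space the force
    is zero; once it penetrates the wall, a spring proportional to the
    penetration depth pushes it back out.
    """
    penetration = wall_z - pos_z
    if penetration <= 0.0:
        return 0.0                       # free space: no contact force
    return stiffness * penetration       # spring opposes the penetration

assert wall_force(0.01) == 0.0           # 1 cm above the wall: no force
assert wall_force(-0.005) == 4.0         # 5 mm inside: 800 N/m * 0.005 m
```

In a real device this computation runs in the high-rate haptic loop (typically around 1 kHz), since low update rates make stiff virtual surfaces feel soft or unstable.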
In this chapter, a multimodal system combined with a set of suitable training strategies is considered a Multimodal Training System.

Multimodal feedback in Virtual Training Systems
The use of multimodal feedback can improve perception and enhance performance in a training process; for example, tasks whose information is distributed over multiple feedback channels can be performed better than tasks relying on a single channel (Wickens (2002)). Various studies have supported the multiple resource theory of attention. Wickens et al. (Wickens et al. (1983)) found that performance in a dual-task setting was better when feedback was manual and verbal than when feedback was manual only. Similarly, Oviatt et al. (Oviatt et al. (2004)) found that a flexible multimodal interface supported users in managing cognitive load.
Ernst and Bülthoff (Ernst & Bülthoff (2004)) suggested that no single sensory signal can provide reliable information about the structure of the environment in all circumstances.

Virtual Reality and Environments, www.intechopen.com
Synergy, redundancy and increased bandwidth of information transfer are proposed benefits of multimodal presentation (Sarter (2006)). If information is presented redundantly in multiple modalities, then various concurrent forms and different aspects of the same process are presented. Concurrent information may coordinate activities in response to the ongoing processes.
Several studies have shown that using auditory and visual feedback together increases performance compared to using each of these feedbacks separately. For example, under noisy conditions, observing the movements of the lips and gestures of the speaker can compensate for the lost speech perception, which can be equivalent to increasing the auditory signal-to-noise ratio (Sumby & Pollack (1954)). It has also been demonstrated that reaction times for detecting a visual signal were slower in unimodal conditions than in a condition in which a sound was heard in close temporal synchrony with the visual signal (Doyle & Snowden (2001)). Handel and Buffardi (Handel & Buffardi (1969)) found that in a pattern judging task, performance was better using a combination of auditory and visual cues than either audition or vision alone.
On the other hand, adding haptic feedback to another modality was also found to improve performance compared to the use of each modality individually. Pairs of auditory and haptic signals delivered simultaneously were detected faster than the same signals presented unimodally (Murray et al. (2005)). In a texture discrimination task, participants' accuracy improved (fewer errors) when they received bimodal visual and haptic cues, as compared to unimodal conditions in which only the haptic or the visual cue was presented (Heller (1982)). Sarlegna et al. (Sarlegna et al. (2007)) demonstrated that combining feedback from different sensory modalities improved performance in a reaching task.
In general, the advantages of the use of multimodal feedback are summarized below:
1. Various forms and different aspects of the same task can be presented simultaneously when information is presented using multimodal feedback.
2. The use of a second channel to provide information can compensate for the loss of information in the first one.
3. When one sensorial channel is overloaded, another channel can be used to convey additional information.
4. Reaction times are faster and performance is increased with the use of multimodal feedback.
However, in order to provide all these benefits it is essential to ensure coherence and consistency among the different stimuli. Conflicts among the information provided by the different channels can deteriorate the performance of trainees. In addition, it is important to identify the different components of the task to be learnt, in order to select and employ the most suitable channel to provide useful information for each one.

State of the art on multimodal systems for procedural tasks
Some studies have tested and analyzed the use of multimodal systems as simulation or training tools to teach and evaluate the knowledge of a trainee in procedural tasks. The review presented in this section is focused on procedural tasks related to assembly and disassembly processes.
There are several multimodal systems that simulate assembly and disassembly tasks, although most of them are designed to analyze and evaluate assembly planning during the design process of new machines, and only a few of them provide additional training modules. Below is a description of some of these systems:
• VEDA: Virtual Environment for Design for Assembly (Gupta et al. (1997)). VEDA is a desktop virtual environment in which designers see a visual 2-D representation of the objects and are able to pick and place active objects, move them around, and feel the forces through haptic interface devices with force feedback. They also hear collision sounds when objects hit each other. The virtual models are 2-D in order to preserve interactive update rates. The simulation duplicates as closely as possible the weight, shape, size and frictional characteristics of the physical task. The system simulates simple tasks, such as the peg-in-hole task.
• VADE: A Virtual Assembly Design Environment (Jayaram et al. (1999)). VADE was designed and implemented at Washington State University. It is a VR-based engineering application that allows engineers to plan, evaluate and verify the assembly of mechanical systems. Once the mechanical system is designed using a CAD system (such as Pro/Engineer), the system automatically exports the necessary data to VADE. The various parts and tools (screwdriver, wrench, and so on) involved in the assembly process are presented to users in a VE. Users perform the assembly using their hands and the virtual assembly tools. VADE supports both one-handed and two-handed assembly. The virtual hand is based on an instrumented glove device, such as the CyberGlove, and a graphical model of a hand. VADE also provides audio feedback to assist novice users. The system lets users make decisions, make design changes and perform a host of other engineering tasks. During this process, VADE maintains a link with the CAD system. At the end of the VADE session, users have generated design information that automatically becomes available in the CAD system.
• HIDRA: Haptic Integrated Dis/Reassembly Analysis (McDermott & Bras (1999)). HIDRA is an application that integrates a haptic device into a disassembly/assembly simulation environment. Two PHANToM haptic interfaces provide the user with force feedback on a virtual index finger and thumb. The goal is to use HIDRA as a design tool, so designers must be able to use it in parallel with other CAD packages they may use throughout the design process. The V-Clip library is used to perform collision detection between objects inside the virtual scene. The system simulates simple scenarios, for example a simple shaft with a ring.
• MIVAS: A Multi-modal Immersive Virtual Assembly System (Wan et al. (2004)). MIVAS was developed at the State Key Lab of CAD&CG, Zhejiang University. This system provides an intuitive and natural way of assembly evaluation and planning, using tracked devices for following both hand and head motions, a data glove with force feedback for simulating the realistic operation of the human hand, voice input for issuing commands, sound feedback for prompts, and a fully immersive four-sided CAVE as working space. CrystalEyes shutter glasses and emitters are used to obtain a stereoscopic view. Users can feel the size and shape of digital CAD models using the CyberGrasp haptic device (by Immersion Corporation). Since haptic feedback was only provided in gripping tasks, the application lacked force information when parts collided. The system can simulate complex scenarios, such as disassembling the different components of an intelligent hydraulic excavator. To make the disassembly process easier, the parts that can currently be disassembled are highlighted in blue to help users make their selection.
• HAMMS: Haptic Assembly, Manufacturing and Machining System (Ritchie et al. (2008)). HAMMS was developed by researchers at Heriot-Watt University to explore the use of immersive technology and haptics in assembly planning. The hardware comprises a Phantom haptic device for interaction with the virtual environment, along with a pair of CrystalEyes® stereoscopic glasses for stereo viewing if required. Central to HAMMS is the physics engine, which enables rigid body simulations in real time. HAMMS logs data for each virtual object in the scene, including the devices used for interaction. The basic logged data comprises position, orientation, time stamps, velocity and an object index (or identifying number).
By parsing through the logged data text files an assembly procedure can be automatically formulated.
• SHARP: a dual-handed haptic interface for virtual assembly applications. The system allows users to simultaneously manipulate and orient CAD models to simulate assembly/disassembly operations. This interface provides both visual and haptic feedback; in this way, collision force feedback is provided to the user during assembly. Using VRJuggler as an application platform, the system can operate on different VR system configurations, including low-cost desktop configurations, Power Wall, and four-sided and six-sided CAVE systems. Finally, different modules were created to address issues related to maintenance and training (record and play), and to facilitate collaboration (networked communication).
• A further system is an interactive and immersive VR system designed to imitate real physical training environments in terms of visualization and physical limitations. Head Mounted Displays equipped with 6DOF trackers are used for immersive visualization, keeping the virtual view synchronized with the human vision, and PHANTOM® devices are used to impose physical movement constraints. In addition, 5DT data gloves are used to provide a representation of the human hand within the virtual world. The aim of the system is to support the learning process of general assembly operators. Users can repeat their learning practices until they are proficient with the assembly task.
As can be seen, most of the existing multimodal systems in the domain of assembly and disassembly tasks are focused mainly on simulation, without explicitly addressing training. In the next Section, the authors present a new multimodal training system (combining visual, audio and haptic feedback) for training assembly and disassembly procedural tasks. Figure 3 shows a comparison between the previous systems and the new multimodal training system described in the next Section.

The new multimodal training system
This section presents a controlled multimodal training system for learning assembly and disassembly procedural tasks. This platform supports the approach of learning by doing by means of an active multimodal interaction with the virtual scenario (visual, audio and haptic), eliminating the constraints of using the physical scenario (such as availability, time, cost and safety constraints). In this way, trainees can interact with and manipulate the components of the virtual scenario, sensing the collisions with the other components and simulating assembly and disassembly operations. In addition, the new system provides different multimodal aids and learning strategies that help and guide the trainees during their training process. One of the main features of this system is its flexibility to adapt itself to the task demands and to the trainees' preferences and needs. In this way the system can be used with different types of haptic devices, and allows undertaking mono-manual and bimanual operations in assembly and disassembly tasks, with or without tools.

Set-up of the system
The new multimodal training system consists of a screen displaying a 3-D graphical scene, a haptic device and the training software, which simulates and teaches assembly and disassembly procedural tasks. As will be discussed later, thanks to a special interface the platform is flexible enough to use different types of haptic devices with different features (large or small workspace, one or two contact points, different DoFs, etc.) depending on the task demands.
Figure 4 shows the training system, in which trainees grasp the stylus of the haptic device with one hand to interact with the virtual pieces of the scene. Besides, trainees can use the keyboard to send commands to the application (e.g. grasp a piece, change the view, ...).
Fig. 4. Multimodal Training System. On the left, the system is configured to use a large-workspace haptic device. On the right, the system is configured to use a desktop (small-workspace) haptic device.
The 3D graphical scene is divided into two areas. The first area, the repository, is a back wall that contains the set of pieces to be assembled (in the case of an assembly task). The second area, the working area, is where trainees have to assemble/disassemble the machine or model. On the right part of the screen there is a tools menu with the virtual tools that can be chosen to accomplish the different operations of the task. When the user is close to a tool icon, the user can "grasp" the corresponding tool and manipulate it. Figure 5 shows an example of a virtual assembly scene of the experimental task described in the next Section. Using the haptic device, trainees must grasp the pieces from the repository (with or without a tool) and move them to place them in their correct position in the model. Along the training session, the system provides different types of information about the task, such as information about the task progress, technical descriptions of the components/tools and critical information about the operations. Critical information can also be sent through audio messages. When trainees make an error during the task, the system also displays a message with the type of error and plays a sound to indicate the fault. The platform also provides different utilities to configure the training process: for example, to start the training session at any step; to select the constraint on the order of the sequence of steps (the steps can be done in any flexible order or must follow a fixed order); to allow making steps automatically, so that trainees can skip the easy steps and focus on the complex ones; or even to "undo" steps in order to repeat difficult ones. The system logs information about the training sessions in order to analyze the performance of the trainee. This information can then be processed to extract different performance measures. At the end of the session, the system provides a performance report containing information about:
• Step performance: time, information about how the step was performed (correct step, without errors but with aids, with errors but without aids, with errors and aids, not finished, performed automatically), the number of times that help was needed to finish the step, and the number of errors.
• Overall performance: total time, description of the sequence done, total number of correct steps, total number of steps without errors but with aids, total number of steps with errors but without aids, total number of steps with errors and aids, total number of steps not finished, total number of steps performed automatically by the system, total number of times that help was needed, total number of errors, and total number of consecutive steps done correctly.
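The overall report described above can be understood as a simple aggregation of the per-step log records. The sketch below illustrates the idea; the record fields and function names are ours, not the system's actual log format.

```python
from collections import Counter


def overall_report(step_logs):
    """Aggregate per-step records into an overall performance summary.

    Each record is a dict with illustrative fields:
      "time"    - step duration in seconds
      "outcome" - one of "correct", "aids_only", "errors_only",
                  "errors_and_aids", "not_finished", "automatic"
      "errors"  - number of errors in the step
      "aids"    - number of times help was requested
    """
    outcomes = Counter(r["outcome"] for r in step_logs)
    # longest run of consecutive correct steps
    best = run = 0
    for r in step_logs:
        run = run + 1 if r["outcome"] == "correct" else 0
        best = max(best, run)
    return {
        "total_time": sum(r["time"] for r in step_logs),
        "total_errors": sum(r["errors"] for r in step_logs),
        "total_aids": sum(r["aids"] for r in step_logs),
        "steps_by_outcome": dict(outcomes),
        "max_consecutive_correct": best,
    }


logs = [
    {"time": 12.0, "outcome": "correct", "errors": 0, "aids": 0},
    {"time": 30.5, "outcome": "errors_and_aids", "errors": 2, "aids": 1},
    {"time": 8.0,  "outcome": "correct", "errors": 0, "aids": 0},
]
report = overall_report(logs)
assert report["total_errors"] == 2
assert report["max_consecutive_correct"] == 1
```

Keeping the raw per-step records and deriving the summary afterwards, as here, lets new performance measures be added later without changing the logging itself.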

Architecture and components
The multimodal interaction of the trainees with the virtual scene involves different activities, such as haptic interaction, collision detection, visual feedback, command management, etc. All these activities must be synchronized and executed at the correct frequency in order to provide the trainees with a consistent and proper interaction. Therefore, a multi-threaded solution with three threads is employed (see Figure 6):
1. The haptic thread analyzes the user position, at the frequency required by the haptic device (usually 1 kHz), in order to calculate the force that will be fed back to the trainee. It also manages the communication between the system and the haptic device through a special API (Application Programming Interface): the API-HAL (Haptic Abstract Library). This API makes the software application independent of the haptic device drivers, and allows easily changing the system configuration and using different haptic devices according to the task demands.

2. The simulation thread validates the motion requested by the trainee according to the task, for example, detecting collisions of the model in movement, simulating an operation or constraining the motion of the trainee along a specific path.
3. The rendering thread updates the graphical rendering of the virtual scene, sends the requested audio feedback at each time, processes the commands of the trainees (e.g. zooming in, rotating the view, etc.) entered through the keyboard or the switches of the haptic device, and controls the training process.
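The three-thread split can be sketched as follows. This is an illustrative Python skeleton only: the loop rates and function names are assumptions, and the real system runs native device loops with precise timing rather than Python threads:

```python
import threading
import time

def run_loop(hz, work, stop):
    """Run `work` repeatedly at roughly `hz` iterations per second."""
    period = 1.0 / hz
    while not stop.is_set():
        work()
        time.sleep(period)  # a real haptic loop would use a precise timer

def start_threads(haptic_step, sim_step, render_step):
    """Launch the three loops described in the text at different rates."""
    stop = threading.Event()
    rates = [(1000, haptic_step),  # ~1 kHz for stable force feedback
             (100, sim_step),      # collision checks / operation simulation
             (60, render_step)]    # graphics, audio, command processing
    threads = [threading.Thread(target=run_loop, args=(hz, w, stop), daemon=True)
               for hz, w in rates]
    for t in threads:
        t.start()
    return stop, threads
```

The essential point the sketch captures is that the haptic loop must run an order of magnitude faster than rendering, which is why the activities cannot share one loop.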
Fig. 6. Multi-thread solution implemented in the IMA-VR system.
The main components of the new training system are:
• Graphics Rendering Module: it loads and displays the virtual scenario from a VRML file. Each object is represented by a triangle mesh and a transformation matrix. This module also includes some features to simulate the visual deformation of flexible objects such as cables.
• Haptic Rendering Module: it analyzes the collisions between the user and the scene for exploration tasks, and between the object in movement and the rest of the objects for manipulation tasks. This module also supports two haptic devices working simultaneously to simulate bi-manual operations.
• Operation Simulator: it simulates some of the most common operations of an assembly/disassembly task, such as inserting/removing a component in/from another one, aligning two components along an axis, etc.
• Virtual Fixtures Module: the term virtual fixture refers to a task-dependent aid (visual, audio or haptic) that guides the motion and actions of users. This module provides both visual and haptic fixtures. Examples of visual fixtures are: changing the colour of a virtual object, making a copy of an object and displaying it in a specific position, and rendering auxiliary virtual elements such as arrows, trajectories, reference planes, points, etc. The output of the haptic fixtures is rendered in the form of forces that depend on the type of fixture: e.g. trainees can receive forces that constrain their hand motion along a specific path, or attraction/repulsion forces with different levels of force and areas of influence (a surrounding area of an object or the whole working space). This library of virtual fixtures is the basis of most of the training strategies provided by the system.

• Training Controller Module: this module provides several training strategies with different degrees of interaction between the trainees and the system in order to perform the task.
At one extreme is the observational learning strategy, where trainees do not interact with the virtual scenario at all; they just receive information about how to undertake the task in order to develop a mental model of it. In between, there are strategies where trainees have to perform the virtual task while being able to receive different types of multimodal aids from the system (based on the fixtures explained before); one of these training strategies is described in detail in the next section. Finally, there is the learning test strategy, in which trainees have to perform the virtual task by themselves without receiving any aid from the system. Usually, a training programme combines several of these strategies; their selection depends on the task complexity and the profile of the trainees.
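As a concrete illustration of the haptic fixtures underlying these strategies, an attraction force with a limited area of influence could be computed as in the following hedged Python sketch. The gains, radius and cap are illustrative values, not the system's actual parameters:

```python
import math

def attraction_force(hand, target, k=40.0, radius=0.05, f_max=3.0):
    """Haptic attraction fixture: a spring pull towards `target` that is
    only active inside a sphere of influence of the given radius.
    Positions are (x, y, z) in metres; gains are illustrative only."""
    d = [t - h for t, h in zip(target, hand)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist == 0.0 or dist > radius:
        return (0.0, 0.0, 0.0)           # outside the area of influence
    scale = min(k * dist, f_max) / dist  # spring force, capped at f_max
    return tuple(scale * c for c in d)
```

A repulsion fixture would simply negate the direction, and a path-constraint fixture would project the hand position onto the path before computing the pull.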

Experiment: the use of direct aids to train a procedural task
During the learning of a task, trainees may need information about how to proceed. As commented in previous sections, the way of providing these aids is essential to ensure the effectiveness of the training. Direct aids provide the information needed to perform the task in an easy way; however, some research works suggest that their use can have adverse effects on training. This section describes an experiment conducted to analyze whether the controlled use of direct aids to train a procedural task damages the transfer of the involved skills.

Experimental task
The selected experimental task was learning how to build a LEGO™ helicopter model composed of 75 bricks. Participants were trained in the task using the multimodal training system described in the last section with the OMNI device by SensAble. In order to avoid the effect of short-term memory, the experiment was conducted over two consecutive days. On the first day, the participants had to learn the procedural task using the virtual training system. On the second day, they had to build the same model using the physical LEGO™ bricks, as shown in Figure 7.

Learning conditions
According to the goal of the experiment, two experimental groups were defined to compare the effectiveness of the controlled direct aids with respect to a classical aid, namely an instruction manual (indirect aids). The participants of both groups trained in the task using the same multimodal platform, so all participants were able to interact with the bricks of the virtual LEGO™ model, through the haptic device, and build the virtual helicopter. The difference between the groups was the way of providing the information about the task:
• Group 1 - Indirect aids: the participants did not receive any aid from the training platform; to get information about the immediate action to perform, they had to consult an instruction book, and each consultation was logged as one aid. This book contained step-by-step diagrams of the 75 steps of the task. Each diagram included a picture of the LEGO™ brick needed for the current step and the view of the model at the end of the step (see Figure 8). This kind of aid was considered an indirect aid, because the participants had to translate, in a cognitive way, the information of each diagram into the actions that had to be undertaken at that step.
• Group 2 - Direct aids: the training platform provided information about the immediate action that participants had to perform in a direct way, through visual and haptic aids. Each step consists of two main operations: selecting the correct brick and placing the brick in its correct position on the model. For the first action, selecting the correct brick, the target brick was highlighted in yellow (visual aid) and the trainee received an attraction force towards it (haptic aid); see Figure 9 on the top. For the second action, placing the brick, a copy of the correct brick was rendered at its target position (visual aid) and an attraction force towards that position was applied as well (haptic aid); see Figure 9 on the bottom.

Fig. 7. Building a real LEGO™ helicopter after being trained in the task with a multimodal training system.
Fig. 9. Direct aids provided by the platform. On the top: aids to select the correct brick. On the bottom: aids to place the brick in its correct position.

From the point of view of the authors, the main disadvantage of direct aids is that their overuse can inhibit an active exploration of the task, and trainees can become dependent on these aids. This could impede the transfer of skills to the real situation, where the aids are no longer available. To avoid this problem, the authors proposed a training strategy based on providing these direct aids in a controlled way and reducing them along the training process. In this experiment, the training period consisted of four training sessions. During the first training session, in order to give an overview of the task, the aids were displayed automatically by the system, i.e. after finishing an action the trainee automatically received information for the next action. After that, during the second training session, the aids were displayed only when the trainees requested them by pressing the help key. Finally, during the third and fourth sessions, the aids were displayed only when the trainees requested them, as in the second session, but only if they had also made at least one attempt at selecting the correct brick, i.e. the trainees had to try selecting the correct brick by themselves before being able to receive the aid. Each requested help was automatically logged as one aid.
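The controlled-aid schedule described above can be summarized as a small decision function. This Python sketch is illustrative, not the platform's actual code:

```python
def aid_allowed(session, requested, attempts):
    """Controlled direct-aid policy, simplified from the experiment design:
    session 1 shows aids automatically; session 2 shows them on request;
    sessions 3-4 show them on request only after at least one attempt
    at selecting the correct brick."""
    if session == 1:
        return True                      # aids displayed automatically
    if session == 2:
        return requested                 # aids only on pressing the help key
    return requested and attempts >= 1   # sessions 3-4: request + prior attempt
```

Each call that returns True would also be logged as one aid, matching the bookkeeping described in the text.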
In both groups, when trainees selected an incorrect piece, the platform played a "beep" sound and displayed a message to indicate the mistake. To evaluate the performance of the trainees, the system also logged the number of incorrect pieces selected. Additionally, the order of the steps was fixed and numbered from 1 to 75, so both strategies followed the same building order.

Participants
The experiment was undertaken with 40 undergraduate students and staff from CEIT, the University of Navarra and TECNALIA. Following a between-participants design, each participant was assigned to one of the two experimental groups (20 participants in each). For the assignment of the participants to the groups, they first filled in a demographic questionnaire whose answers were used to distribute them across the two groups in a homogeneous way. In the direct aids group, 48% of participants were female, the average age was 33 with a range from 24 to 50, and only 32% of participants did not have any experience with LEGO™ models. Meanwhile, in the indirect aids group, 53% of participants were female, the average age was 36 with a range from 26 to 48, and only 37% of participants did not have any experience with LEGO™ models. All participants reported a normal sense of touch and vision and did not have experience in using haptic technologies.
Before starting the experiment, participants signed an informed consent agreement and were informed that if they achieved the best performance of their group, they would receive a reward.

Virtual Reality and Environments

Procedure
The experiment was conducted over two consecutive days. On the first day, the participants were familiarized with the MTS and the corresponding training condition. Later, they had to use the training system to learn how to build the virtual helicopter model during four training sessions (i.e. they built the virtual helicopter four times), with a short break between sessions to allow the examiner to restart the system. The initial position of the bricks was generated randomly, so it was different in each training session. The evaluator was in the room along with the participant, and information about the task performance (number of aids, number of errors and the training time) was logged automatically by the system for further analysis. At the end of the training, the participants of both groups were asked to fill in a questionnaire to evaluate their experience with the multimodal system.
On the second day, the participants had to build the real LEGO™ helicopter model (the same one as in the training sessions). Each participant was instructed to complete the model as rapidly as possible, but without mistakes or missing bricks. This time, the participants had the instruction book available to consult when they did not know how to continue with the task, and each consultation was recorded as one "aid". The session was recorded on video for further analysis.

Performance measures
The measures used to analyze the final performance of the trainees were: the training time; the real task performance (execution time, number of aids with the instruction book, number of non-solved errors and type of errors); and the evolution of the performance of the trainees along the training process and in the transition from the training system to the real task. The subjective evaluation of the platform was also analyzed.

Results and discussion
Before analyzing the results, it should be noted that, although the experimental task is conceptually simple, its correct execution is difficult due to the cognitive skills it requires. The task required memorizing the exact position of 75 bricks, most of them without any functional or semantic meaning (just bricks with different sizes, colours and numbers of pins), and part of the assembly procedure was totally arbitrary (different combinations of the bricks could generate the same shape of the helicopter model, but only one specific combination was considered valid and the rest of the options were counted as non-solved errors).
In general, there were no significant differences between the results obtained in the two groups. Statistical analysis was performed with independent-samples t-tests, equal variances assumed.
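For reference, the t statistic of an independent-samples t-test with equal variances assumed (pooled variance) can be computed as in this self-contained Python sketch; the data shown in the test are synthetic, not the experiment's:

```python
from statistics import mean, variance

def pooled_t(a, b):
    """Student's independent-samples t statistic, equal variances assumed.
    The pooled variance weights each sample variance by its degrees of
    freedom; the p-value is then read from a t-distribution with
    len(a) + len(b) - 2 degrees of freedom."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
```

In practice one would use a statistics package (e.g. `scipy.stats.ttest_ind`) that also returns the p-value, but the formula above is what "equal variances assumed" refers to.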
Figure 10 shows that the mean real task execution time for the direct aids group, 18.6 minutes, did not differ significantly from the mean time for the indirect aids group, 17.9 minutes (t(36) = 0.36, p = 0.721). Nevertheless, it is important to point out that, during the building of the real LEGO™ model, the participants of the indirect aids group had some advantage with respect to the others, because they were used to consulting the instruction book during the training period and could therefore get the information more quickly.
In order to analyze the real task performance obtained in each group, each step of the task (75 steps in total) was classified according to the following criteria:
• Correct step: the participants made the step correctly (without any non-solved error) and without the use of the instruction book.
• Step with aids: the participants made the step correctly (without any non-solved error), although they looked at the instruction book because they had some doubts about how to undertake the step, or they did not know how to continue or how to solve an error.
• Step with non-solved errors: the participants made some error in the step and did not solve it, because they did not know how to solve it or were not aware of the error.
Taking this classification into account, Figure 11 shows that there were no significant differences between the real task performances obtained in each group. In both groups, the mean number of correct steps (i.e. steps performed by the participants without the use of the instruction book and without non-solved errors) was around 60% of the total steps, and the mean number of steps with non-solved errors was very small, less than 2% of the total steps. The typology of the non-solved errors was also similar in both groups: most of them were related to the missing or wrong assembly of small pieces. Figure 12 shows some examples. In more detail, Figure 13 shows the mean number of consultations of the instruction book during the real task, for which there were no significant differences between the groups (t(36) = 0.14, p = 0.889). Although no participant was able to build the real LEGO™ model without consulting the instruction book, the mean number of aids was less than 29 for both strategies. Additionally, the percentage of errors corrected by the participants without using the instruction book (see Figure 14) was larger in the direct aids group (28.6%) than in the indirect aids group (20.7%), so it seems that the participants of the direct aids group could have created a better mental model of the task.
Regarding the evolution of the performance of the trainees along the training process and the transition from the training platform to the real scenario, Figures 15 and 16 show that the percentage of correct steps (without non-solved errors and without aids) increased along the training trials. Moreover, in the direct aids group the performance obtained in the real task was a little better than in the last trial of the training process, even though the real task was performed the day after the training sessions. Additionally, the statistical analysis shows that there were no significant differences between the groups in the transition from the MTS to the real task (t(36) = 0.11, p = 0.911). This demonstrates that the controlled use of direct aids does not damage the trainees' performance when they change from the training platform (virtual task) to the real world (physical task), and therefore it eliminates the main disadvantage of the use of direct aids reported in the bibliography (Yuviler-Gavish et al., 2011). That is, the controlled direct aids did not impede the transfer of knowledge, as hypothesised in this experiment. As shown in Figures 19 and 20, the results of the questionnaires did not show significant differences between the two groups. This seems logical, since the interaction of the participants with the training system was similar in both groups; the only difference was the way in which the information about the task was provided. In general, the questionnaire results were quite positive.

Questions 1 to 4 (see Figure 19) measured the usability of the training platform. The results of these questions indicate that participants concentrated quite well on the task and that the platform was easy to use. Nevertheless, participants did not feel totally comfortable using the system. Some participants suggested eliminating the use of the keyboard for sending commands to the application by adding more buttons on the stylus of the haptic device. Moreover, they suggested increasing the duration of the familiarisation session in order to increase their level of confidence using the training platform.
Questions 5 to 8 evaluated the system as a training tool. As shown in Figure 20, participants were quite confident about performing the real task and rated the system as a training tool with a score of 5.7 (in the direct aids group) and 5.9 (in the indirect aids group). Some of the participants in the direct aids group commented that they did not like being forced to make an attempt at grasping the correct brick before being able to receive the aid.

Conclusions and future research
This chapter presents a new Multimodal Training System for assembly and disassembly procedural tasks. Some features of this system are:
• It supports the approach of learning by doing, by means of an active multimodal interaction (visual, audio and haptic) with the virtual scenario, eliminating the constraints of using the physical (real) scenario: mainly availability, time, cost and safety constraints. For example, it can provide training when the machine is still in the design phase, when it is working and cannot be stopped, or when errors during the training could damage the machine or harm the trainee.
• It provides different multimodal aids, not available in the real world, that help and guide the trainees during the training process.
• It is flexible enough to adapt itself to the demands of the task and to the preferences/needs of the trainees, for example: flexibility in the available training strategies, in the sensory channel used to provide feedback and in the supported haptic devices.
In this work, the characteristics, advantages and disadvantages of the use of VR technologies for training were also described. One of the main drawbacks in the use of virtual training systems is that trainees may become increasingly dependent on the features of these systems, which can inhibit their ability to perform the task in the absence of them. This negative effect of virtual aids was analyzed in the experiment described in this chapter, and the findings suggest that a strategy based on providing direct aids in a controlled way does not damage the knowledge transfer from the virtual system to the real world.
This outcome contrasts with other research works that show negative effects from the use of direct aids. Moreover, from the point of view of the authors, the use of direct aids could reduce the training time and therefore increase the efficiency of the training process. Thus, further experiments should be run in order to analyze in which way the use of direct aids could decrease the training time without damaging the final performance of the trainees.
During the experiment described in this chapter, the authors detected three main behaviour patterns in the participants, which can be useful for defining some recommendations for the design of virtual aids:
1. Participants who like trying the next action by themselves, but request help when they do not know how to continue. In this case, it would not be necessary to add any constraint on receiving the aid; it would be enough to provide the aid on demand.

2. Participants who prefer performing the next action by themselves even when they do not know it, making many attempts to guess the correct option. In most cases, these participants only spent more training time without making any improvement in their knowledge. It would be suitable to define a maximum number of attempts, so that after reaching this limit the aid is provided automatically by the system. This maximum number of attempts should be configurable and could depend on the profile of the trainee or the target task.
3. Participants who want easy access to the aid when they do not know how to continue. These participants were annoyed by being forced to make an attempt at grasping the correct brick before being able to receive the aid, and in addition their training time increased without any benefit. In this case, when trainees do not know how to continue, it is important that they can receive the aid from the system easily.
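The recommendation in pattern 2 can be sketched as a simple aid-granting rule; this is an illustrative function with a hypothetical configurable limit, not part of the described system:

```python
def should_show_aid(requested, attempts, max_attempts=3):
    """Grant the aid on demand, but also trigger it automatically once a
    configurable attempt limit is reached. `max_attempts` is illustrative
    and, as recommended above, should depend on the trainee profile and
    the target task."""
    return requested or attempts >= max_attempts
```

Under this rule, trainees who guess repeatedly (pattern 2) stop wasting time after `max_attempts` failed tries, while trainees who ask early (patterns 1 and 3) get the aid immediately.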
Lastly, a final recommendation for future implementations of virtual training systems is that the system should detect and evaluate the behaviour of trainees along the training session in order to display adequate information according to the evolution of their performance.

Fig. 2. Commercial haptic devices. On the left, PHANToM haptic devices with 6 DOF by SensAble. In the middle, the FALCON device with 3 DOF by NOVINT. On the right, the Quanser 5-DOF Haptic Wand by Quanser.

Fig. 3. Comparison between the new multimodal training system and other systems. *NIA: no information available.

Fig. 5. Virtual assembly scene. Trainees have to grasp the correct piece from the back wall and place it in its correct position in the model.

Fig. 8. Three pages of the instruction book, showing for each step the target brick and the final result of the step.

Fig. 10. Real task execution time.

Fig. 11. Comparison of the real task performance between the two strategies.

Fig. 12. Examples of a correct assembly, a missing piece and a piece in the wrong position.

Fig. 13. Mean number of consultations of the instruction book during the real task.

Fig. 14. Percentage of errors corrected by the trainees without using the book during the real task.

Fig. 16. Detailed information about the performance of the trainees (percentage of correct steps, without non-solved errors and without aids) in the last training trial and in the real task.
In relation to the last performance measure, Figure 17 shows the mean total training time for each group, where the statistical analysis shows that there was no significant difference between the groups (t(36) = -0.78, p = 0.442). It is also interesting to analyze the training time across the training sessions (see Figure 18).

Fig. 17. Total training time in each experimental group.

Fig. 18. Evolution of the training time along the four training sessions.
At the end of the training sessions, all participants filled in a questionnaire, which was a reduced version of Witmer & Singer's Presence Questionnaire (Witmer & Singer, 1998). Since haptic technologies provide a new interaction paradigm, this set of questions was useful to get extra information about the experience with the haptic training platform. The questionnaire consisted of 8 main semantic differential items, each giving a score on a scale from 1 (worst) to 7 (best). The questionnaire comprised the following questions:
• Q1. How natural was the mechanism which controlled movement through the environment? (Extremely artificial = 1, Natural = 7).

• Q2. How well could you concentrate on the assigned tasks or required activities rather than on the mechanisms used to perform those tasks or activities? (Not at all = 1, Completely = 7).
• Q3. How was the interaction with the virtual environment during the training session? (Difficult = 1, Easy = 7).
• Q4. During the training session, how did you feel? (Not at all comfortable = 1, Very comfortable = 7).
• Q5. During the training sessions, did you learn the task? (Nothing, 0% = 1; All, 100% = 7).
• Q6. Are you able to repeat the task with the real bricks? (No, I can't do it = 1; Yes, I can do it = 7).
• Q7. How consistent was your experience in the virtual environment with respect to the real scenario? (Not at all consistent = 1, Very consistent = 7).
• Q8. What mark do you give to the Multimodal Training System as a training tool? (The worst = 1, The best = 7).

Fig. 19. Level of usability of the Multimodal Training System.