Article

Efficiency of VR-Based Safety Training for Construction Equipment: Hazard Recognition in Heavy Machinery Operations

1 Department of Civil Engineering, Monash University, Melbourne, VIC 3800, Australia
2 Department of Infrastructure Engineering, The University of Melbourne, Melbourne, VIC 3010, Australia
3 Department of Information Technology, Monash University, Melbourne, VIC 3800, Australia
4 Department of Building and Real Estate, Hong Kong Polytechnic University, Hong Kong, China
* Author to whom correspondence should be addressed.
Buildings 2022, 12(12), 2084; https://doi.org/10.3390/buildings12122084
Submission received: 26 October 2022 / Revised: 17 November 2022 / Accepted: 25 November 2022 / Published: 28 November 2022

Abstract

Machinery operations on construction sites result in many serious injuries and fatalities. Practical training in a virtual environment is key to improving the safety performance of machinery operators on construction sites. However, there is limited research on the factors responsible for the efficiency of virtual training in increasing hazard identification ability among novice trainees. This study analyzes the efficiency of virtual safety training delivered through head-mounted VR displays against flat-screen displays among novice operators. Toward this aim, a cohort of tower crane operation trainees was subjected to multiple simulations in a virtual environment. During the simulations, feedback was collected using a joystick to record the accuracy of hazard identification, while a post-simulation questionnaire was used to collect responses regarding the factors responsible for effective virtual training. Questionnaire responses were analyzed using the interval type-2 fuzzy analytical hierarchy process to interpret the effect of display type on training efficiency, while joystick response times were statistically analyzed to understand the effect of display type on the accuracy of identification across different types of safety hazards. It was observed that VR headsets increase the efficiency of virtual safety training by providing greater immersion, realism, and depth perception while increasing the accuracy of hazard identification for critical hazards such as electric cables.

1. Introduction

The construction industry accounts for more than 15% of all work-related fatal accidents in developed economies [1]. With the increased use of heavy machinery for construction operations, the key factor for avoiding incidents and managing risks effectively is to develop construction workers' ability to identify hazards. The highest number of fatalities recorded on construction sites is related to workers being struck by moving machinery and vehicles [2,3]. Even in 2020, about 17% of fatalities on construction sites in Australia were caused by workers being hit by moving vehicles or moving objects, or being trapped by moving machinery [1]. Earthmoving machinery and cranes are also responsible for many fatal accidents on construction sites [4,5]. Most of these accidents have been due to operator negligence or lack of training [6].
Several studies have analyzed the root causes of accidents and provided recommendations for avoiding them. One of the most frequently cited recommendations is providing site-specific safety training to heavy machinery operators and construction workers to increase their hazard perception ability while familiarizing them with the operation site [6,7]. Another important cause of accidents that needs to be addressed is the lack of structured training for novice heavy-machinery operators such as tower crane operators. Lack of experience combined with a lack of site-specific training among novice operators can lead to accidents on site [8]. To address this, many studies have recommended structured virtual training for novice construction operators [9]. However, only a few studies in the domain of heavy-machinery operations have employed head-mounted displays (HMDs) to exploit the depth perception offered by virtual reality (VR) HMDs to enhance safety during operations [10,11,12]. The importance of depth perception in training crane operators to perform critical tasks has also been highlighted by Juang et al., who developed stereoscopic and kinesthetic vision for training crane operators in a virtual environment [13]. This study aims to analyze the difference in training effectiveness for crane operations between head-mounted VR displays and flat screens by comparing the performance of novice operators in the virtual environment. To this end, this paper uses an analytical approach based on multicriteria decision-making (MCDM) to interpret participants' feedback and reactions. The received feedback is analyzed using the interval type-2 fuzzy analytical hierarchy process to rank the factors most critical to hazard perception. A sensitivity analysis is then performed to develop in-depth insight into the impact of the shortlisted factors on the overall training outcomes.
The rest of the paper is organized as follows: Section 2 presents a review of the literature on safety training and the analysis of MCDM problems in this space. Section 3 details the methodology used in this study. Section 4 presents the results obtained from the analysis, and Section 5 discusses the findings and draws conclusions.

2. Literature Review

Heavy equipment is increasingly used in construction, increasing productivity and efficiency [14]. However, the downside of this advancement is an increase in accident rates involving heavy equipment [15]. Previous research shows that around 15% of site accidents involve workers being struck by heavy equipment [16], with "Struck by moving objects" ranking behind only "Falls" and "Collapse of object" as a leading cause of accidents in most recent years. The use of construction equipment is one of the leading causes of fatalities on construction and infrastructure projects, with tower cranes in particular accounting for up to 8% of all fatalities [4]. Previous research identified human error and falling from height as the two most common causes of crane accidents [5].
Furthermore, collapsing crane structures have been found to be another frequent cause of crane accidents [17]. In another study, worker negligence or human error was confirmed as the primary cause of tower crane accidents [18]. Considering the findings of these studies, it can be concluded that safety training can increase awareness of crane operators related to construction safety and hazardous situations, potentially reducing site accidents and injuries.

2.1. Safety Training for Heavy Construction Machinery Operations

The medium of training is an essential factor in accident prevention on construction and infrastructure projects [19]. The effectiveness of VR training over conventional training has been measured by analyzing immediate effectiveness, short-term effectiveness, and information retention among construction workers [20]. A similar study used both VR and conventional displays to compare knowledge retention, self-efficacy, engagement, and self-reported presence in safety training [21]. The findings of these studies have been implemented for the skill enhancement of various heavy machinery operators. The applicability of VR systems for training crane operators was effectively demonstrated by Patrão and Menezes [22], who developed two different simulators for stationary and mobile cranes that effectively trained crane operators in an immersive environment. In another safety study focused on crane operations, Li et al. demonstrated the effectiveness of VR systems in training workers for the dismantlement of tower cranes [9]. Their multiuser virtual safety training system demonstrated better memory retention and hazard identification among workers trained in the virtual environment.
Recent studies have also demonstrated an extension of research for skill enhancement, real-time data visualization, and instruction for optimized performance. The prediction of individual learning performance to indicate support requirements for training and learning experiences was studied by Arashpour et al. [23]. The use of 4D-BIM to create real-time augmented reality (AR) visualizations for crane operations, thereby optimizing safety and time of operation, was demonstrated by Lin et al. [24]. Similarly, the development of digital twins for the remote monitoring of tower cranes and the provision of information on an AR display for safety enhancement was demonstrated by He et al. [25]. In another study, real-time information on construction progress, lift path, and hazards present in the construction environment was relayed to tower crane operators using an intelligent crane operations assistance system [26].
Enhancing safety during crane operations has been the recent focus of studies that use VR and gamification of the virtual environment for operator training or lift path analysis. The use of VR to visualize lift paths and gamify the lifting process to detect collisions, thereby enhancing safety and lift planning, was demonstrated by Zhang et al. [27]. Virtual reality-based training for the assembly and dismantling of tower cranes was developed by Li et al., which focuses on training construction workers to recognize and respond to hazards during these operations [9]. Similarly, the use of VR for interactive tower crane layout planning and operator training has been investigated by Zhang et al. [28]. This study was an advancement of their earlier work on lift planning and operator training: they demonstrated interactivity in multicriteria analysis using VR, thereby utilizing a human-in-the-loop approach to arrive at the optimum solution for the safety and efficiency of lifts. In a similar study, the effectiveness of VR training in optimizing lift planning and safety during lift operations was demonstrated by Kayhani et al. [29].

2.2. Multi-Criteria Decision Making for Assessing the Effectiveness of VR Safety Training

The ability of a machine operator to recognize hazards, understand the risks involved, and react in a timely manner to perceived hazards and risks is key to maintaining safety on a construction site. An important aspect of recognizing hazards stems from the ability to judge the distance to an object, i.e., depth perception. The superiority of VR in imparting hazard-perception knowledge compared to two-dimensional representations of information has been demonstrated by Perlman et al. [30]. The effectiveness of VR in memory retention and understanding of tasks has been demonstrated by Lee et al. [31], while cognitive load in immersive VR environments has been analyzed using the fuzzy Analytical Hierarchy Process (f-AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) by Wang et al. [32]. However, none of these studies have evaluated equipment operators' depth perception and hazard identification ability as a result of VR-based safety training.
The Analytical Hierarchy Process (AHP) is a process for ranking alternatives and criteria to reach an optimum solution to MCDM problems. AHP and fuzzy logic have been used to rank alternatives and analyze important criteria in the VR and training domains. Type-1 fuzzy AHP has been used to analyze the performance of VR and conventional displays in overhead crane operation training [11]. Moreover, this method has been used to analyze the effectiveness of VR-based training systems for mining environments [12]. Fuzzy AHP has also been used to solve MCDM problems by studying the effects of stereoscopic and kinesthetic vision on depth perception during safety training simulations [13].
Because type-1 fuzzy sets have a limited ability to capture the uncertainty inherent in linguistic judgments, type-2 fuzzy sets have been proposed as effective MCDM problem-solvers. Site selection and the prioritization of alternatives for renewable energy generation are among the complex problems that have been addressed using type-2 fuzzy AHP [33,34]. Fuzzy AHP has also been used to analyze the efficacy of VR for studying people's behavior during simulated fire evacuations [35]. Selection problems such as Tunnel Boring Machine (TBM) selection [36] and the multicriteria evaluation of construction equipment [37] have been among the applications of fuzzy AHP methods. Fuzzy AHP has also been combined with TOPSIS to assess construction sites' safety risks and relevant controls [38,39].
Given the proven robustness of fuzzy AHP analysis in the ranking of multiple criteria for decision making [40,41], this paper employs interval type-2 fuzzy sets to convert qualitative inputs into quantitative fuzzy sets, allowing outputs to be evaluated according to their degree of membership. The Interval Type-2 Fuzzy Analytical Hierarchy Process (IT2F-AHP) is used to obtain the quantitative relative weights of factors contributing to depth perception in hazard identification. These factors are then analyzed as an MCDM problem to rank the better of the two display types. This paper adopts a user-centric approach to assess depth perception and hazard identification among participants subjected to VR training with immersive and semi-immersive displays.

3. Methodology

This study focuses on analyzing the hazard recognition of crane operators on construction sites that employ tower cranes for transporting bulky precast or prefabricated modules during construction. Safety incidents simulated in the virtual environment include (1) electrocution as a result of the crane hook or load colliding with electric lines; (2) collisions of the hook or module with the building structure or stationary vehicles; (3) collisions with moving vehicles or workers on site, leading to "struck-by" incidents; and (4) collisions with other tower cranes on site.
Unity3D [42] was used to create the virtual environment due to its ability to simulate real-world physics. Tower crane simulations were based on a "pick-and-place" activity wherein the crane operator picks up a load or module at a supply point and places it at a demand point. All simulations were developed from a crane operator's view in a tower crane cabin and had multiple incidents modeled during each simulation (Figure 1). A total of four scenarios were developed representing real-world pick-and-place activity with multiple hazards and safety incidents. Further, a script was developed that recorded the reaction time whenever a participant provided feedback using a joystick.
Users were shown four different simulations based on the four scenarios developed. To ensure consistency during the experiment, two simulations (randomly selected from the four scenarios) were shown on a semi-immersive screen and the other two on an immersive display. To register instances when they identified hazards during each simulation, participants were instructed to move the joystick lever in any of the four directions. The joystick used for the experiment was a four-axis Arduino-based joystick whose form was similar to the jib-operation joystick found in most hammerhead tower-crane cabins. Reaction times from joystick inputs were recorded for each participant and analyzed against the modeled time of each incident to understand the ease of hazard recognition for the different hazard types modeled in the virtual environment, as sketched in the example below. The experimental procedure is represented in Figure 2.
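To make the scoring concrete, a minimal Python sketch of how joystick inputs can be matched to modeled incident times is given below. The data layout, the 10 s detection window, and all names are hypothetical illustrations; the actual logging was performed by a script inside the Unity environment and is not reproduced here.

```python
# Hypothetical sketch (not the study's logging script) of scoring joystick
# inputs against the modeled incident times of a simulation scenario.

from dataclasses import dataclass

@dataclass
class Incident:
    hazard_type: str       # e.g., "electric_cable", "struck_by"
    modeled_time_s: float  # simulation time at which the incident occurs

def score_responses(incidents, joystick_times, window_s=10.0):
    """Match joystick inputs to modeled incidents and compute lead times.

    A positive lead time means the hazard was flagged before the incident.
    """
    results = []
    for inc in incidents:
        # joystick inputs falling inside the detection window of this incident
        hits = [t for t in joystick_times
                if inc.modeled_time_s - window_s <= t <= inc.modeled_time_s + window_s]
        if hits:
            lead = round(inc.modeled_time_s - min(hits), 2)
            results.append((inc.hazard_type, lead, lead > 0))
        else:
            results.append((inc.hazard_type, None, False))  # missed hazard
    return results

# Example: one electric-cable incident modeled at t = 42 s
print(score_responses([Incident("electric_cable", 42.0)], [35.2, 60.1]))
# -> [('electric_cable', 6.8, True)]
```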
The stepwise procedure for conducting this study is listed below:
  • Participants are given a presentation on “pick and place” tower crane operations and potential hazards on construction sites.
  • Participants are familiarized with the VR environment and registration of inputs using the joystick in the VR environment.
  • Once the participants are familiar with the VR settings, they are shown two simulations on VR headset and two simulations on a semi-immersive flat-screen display. Their inputs to hazard identification are recorded in a database for further analysis.
  • After the experiment, they are asked to provide feedback regarding various aspects of the simulations via a questionnaire survey.
Before being subjected to the VR simulations, participants were given a short five-minute discourse on the hazards frequently observed during tower crane operations (details of participants' prior VR experience are provided in Section 3.1).
Once familiar with the virtual environment and the operation of the joystick, participants were shown the four simulations, and their feedback on the joysticks was recorded (Figure 3). At the end of all four simulations, participants were asked to respond to a questionnaire evaluating hazard recognition. Feedback from participants and reaction times form the basis of the analysis to answer the following questions:
  • How much difference does a VR headset make to training efficiency for construction machinery operator safety training?
  • What type of hazards are better perceived using VR?
  • Which aspects of construction machinery operator safety training can be improved using VR?

3.1. Participants

A cohort of 30 participants was selected as novice operators from a course imparting knowledge on tower crane operations. These participants were asked about their previous experience with VR: out of the 30 participants, only three had previously used VR for training and seven had experience using VR for gaming applications, while the remaining 20 participants had not used VR before.
The selected participants did not have experience operating tower cranes and can be considered a fair representation of this study's target group, i.e., novice construction machinery operators.

3.2. Determining the Set of Evaluation Factors

The evaluation factors for the efficiency of virtual training were divided into four main factors related to performance, effectiveness, presence, and device, based on the existing body of knowledge as elaborated in this section, combined with expert interviews. Each factor contains two or more subfactors used to evaluate hazard perception in virtual environments. These evaluation factors are presented in Figure 4.

3.2.1. Performance Factors

Performance factors focus on the core task of spotting hazards and reacting to them. Performance factors comprise the following sub-factors.
  • Ease of operations: This criterion measures the ease of conducting the given task in VR, including task complexity and problem-solving ability [43].
  • Enjoyability: This criterion relates to the ease of learning and the sense of enjoyment and satisfaction while performing the task in VR [44,45].
  • Level of Engagement: The amount of learning absorbed from virtual training depends on how closely the training correlates with the actual task [46].
  • Response Accuracy: The accuracy of response to the hazard recognition task, which can be measured in terms of time, i.e., how quickly a participant can recognize and react to a hazard during the operation [12].

3.2.2. Effectiveness Factors

Effectiveness factors are related to the utility and usefulness of the device while spotting hazardous conditions in the virtual environment.
  • Objective usability: Previous studies have employed a comprehensive fuzzy evaluation of safety training systems to measure the effectiveness of the training in terms of cognitive load and decomposition of the task [12]. These terms have been captured by “objective usability” under effectiveness factors.
  • Perceived usefulness: An important sub-factor that constitutes training efficiency is its “perceived usefulness”. This sub-factor focuses on the difference in learning and recognition when using virtual safety training for the enhancement of depth perception.
  • Ease of recognition: Participants’ response to “ease of recognition” was also evaluated under effectiveness evaluation. The aim is to understand the effectiveness of visuals for recognizing hazards when using VR displays and flat screens [47].

3.2.3. Presence Factors

The feeling of presence in a virtual environment depends upon involvement, interface quality, sensory fidelity, and immersion relayed by the technology [48]. Since this experiment relies on reactions to perceived hazards in a simulated environment, the only two applicable subfactors are "realism" and "immersion". Notable publications that have analyzed presence factors as part of studies of VR-based safety training include [11,12,49].
  • Immersion: The level of involvement and the feeling of “being there”. This sub-factor measures the effect of immersion on the hazard perception of participants.
  • Realism: This sub-factor aims to understand the effect of visual fidelity on depth perception and hazard recognition.

3.2.4. Device Factors

Along with studying the effect of depth perception in recognizing and reacting to hazardous situations, the level of comfort during training was captured. To compare training with respect to comfort, device factors consisting of discomfort and ease of interaction were considered. The weight of the device, feelings of sickness, and the freedom of interaction were evaluated by seeking participants' responses to these criteria. These factors have been studied in multiple articles with respect to comfort of use [11,12,50,51].
  • Weight: Feedback from participants on whether the weight of the VR device caused any discomfort during the training.
  • Sickness: Investigating whether the VR headset caused any nausea or disorientation during the training.
  • Interaction: This factor aims to understand whether training using VR and a flat screen offered different levels of interactivity.

3.3. Interval Type-2 Fuzzy AHP Process

The Analytical Hierarchy Process (AHP) was developed by Saaty [52] as a practical decision-making process for solving complex problems using qualitative and quantitative criteria. AHP has been widely used in various research fields to solve MCDM problems. Using AHP to solve an MCDM problem involves creating a hierarchy of criteria and having decision-makers conduct pairwise comparisons of the criteria.
Linguistic variables are widely used to compare and rank criteria and alternatives, but their use can introduce vagueness into decision-making. Fuzzy sets were introduced by Zadeh in 1965 to cater to this vagueness in responses to multi-factor problems [53]. Most fuzzy AHP methods utilize type-1 fuzzy sets to represent the ambiguity in linguistic variables. However, since linguistic terms can have different meanings for different people (i.e., linguistic terms are relative), an additional degree of freedom is needed to model uncertainty and verbal ambiguity. The concept of type-2 fuzzy sets was therefore introduced by Zadeh [54] as an extension of type-1 fuzzy sets, representing this uncertainty through a third dimension in the membership function. However, due to the complexity of calculations with general type-2 fuzzy sets, a special case named interval type-2 fuzzy (IT2F) sets has seen greater application due to its computational simplicity [55]. This paper employs IT2F numbers to deal with vagueness while performing AHP. The advantage of this method is that it converts qualitative inputs into quantitative fuzzy sets, allowing outputs to be evaluated according to their degree of membership [12].
Using IT2F sets in an AHP evaluation can be complex due to the various methods available for the defuzzification of IT2F sets. A popular method for using IT2F sets with AHP, favored for the simplicity of its defuzzification process, was developed by Kahraman et al. [56]. The linguistic variables and the equivalent IT2F numbers used in this study were selected based on the scale suggested by Kahraman et al. [56], given in Table 1 and graphically represented in Figure 5.
Although other scales for representing linguistic terms with IT2F sets exist [57,58], this scale allows participants to rate two options as equal at the lowest end of the scale; as this study aims to differentiate between two alternatives, it is deemed the most suitable. The IT2F set $\tilde{\tilde{A}}$ can be written as follows:
$$\tilde{\tilde{A}} = \left( \left( l^U, m_1^U, m_2^U, u^U;\ \alpha^U, \beta^U \right),\ \left( l^L, m_1^L, m_2^L, u^L;\ \alpha^L, \beta^L \right) \right) \quad (1)$$
where $u^U$ is the largest possible value of the upper membership function; $l^U$ is the least possible value of the upper membership function; $m_1^U$ and $m_2^U$ are the second and third parameters of the upper membership function, respectively; $u^L$ is the largest possible value of the lower membership function; $l^L$ is the least possible value of the lower membership function; $m_1^L$ and $m_2^L$ are the second and third parameters of the lower membership function, respectively; $\alpha^U$ and $\beta^U$ are the membership degrees of $m_1^U$ and $m_2^U$, respectively; and $\alpha^L$ and $\beta^L$ are the membership degrees of $m_1^L$ and $m_2^L$, respectively.
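For a computational view, the trapezoidal IT2F number of Equation (1) maps directly onto a small data structure. The Python sketch below is illustrative only; the field names follow the notation above, and the example scale entry uses placeholder values, not the actual Table 1 scale.

```python
# Illustrative container for the trapezoidal IT2F number of Equation (1).

from dataclasses import dataclass

@dataclass(frozen=True)
class IT2F:
    """Trapezoidal interval type-2 fuzzy number, Equation (1)."""
    lU: float; m1U: float; m2U: float; uU: float  # upper trapezoid
    aU: float; bU: float                          # heights of m1U, m2U
    lL: float; m1L: float; m2L: float; uL: float  # lower trapezoid
    aL: float; bL: float                          # heights of m1L, m2L

# Hypothetical linguistic-scale entry (placeholder numbers, not Table 1):
VERY_STRONG = IT2F(7.0, 8.0, 9.0, 9.0, 1.0, 1.0,
                   7.2, 8.2, 8.8, 9.0, 0.8, 0.8)
```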
The steps taken in this study for solving the MCDM problem are illustrated in Figure 6 and described thoroughly in this section.
Step-1—Construct IT2F pairwise comparison matrices: The pairwise comparison matrices at all hierarchy levels, including the main-factor, subfactor, and alternative levels, are constructed from participant responses using linguistic terms.
Step-2—Consistency check: In order to evaluate the consistency of a comparison matrix with IT2F sets as its elements, defuzzification of the matrix should be performed. Defuzzification of a type-2 fuzzy set is a two-step process: the IT2F set is first converted to a type-1 fuzzy set using a reduction process, and a defuzzification method for ordinary type-1 fuzzy sets is then used to find the crisp equivalent of the IT2F set. Reduction of an IT2F set to a type-1 fuzzy set using the centroid of the type-2 fuzzy set was proposed by Karnik and Mendel [59], while the use of pessimistic, optimistic, realistic, and weighted-average indices to incorporate different points of view during the reduction was proposed by Niewiadomski et al. [60]. These methods, along with methods for ranking type-2 fuzzy sets [61,62], were compared by Kahraman et al. [56], who proposed two new ranking methods based on the center of area (CoA) for better ranking of interval type-2 fuzzy sets according to their shape. The equation for the defuzzification of trapezoidal type-2 fuzzy sets (DTraT) is given below:
$$\mathrm{DTraT} = \frac{1}{2}\left[ \frac{(u^U - l^U) + (\alpha^U \cdot m_1^U - l^U) + (\beta^U \cdot m_2^U - l^U)}{4} + l^U + \frac{(u^L - l^L) + (\alpha^L \cdot m_1^L - l^L) + (\beta^L \cdot m_2^L - l^L)}{4} + l^L \right] \quad (2)$$
Elements of the IT2F set used in the above equation are described in Equation (1).
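A direct transcription of Equation (2) into code, reusing the IT2F container sketched above, might look as follows. This is a minimal sketch, not the authors' implementation.

```python
# Direct transcription of the DTraT defuzzification, Equation (2).

def dtrat(x: IT2F) -> float:
    upper = ((x.uU - x.lU) + (x.aU * x.m1U - x.lU) + (x.bU * x.m2U - x.lU)) / 4 + x.lU
    lower = ((x.uL - x.lL) + (x.aL * x.m1L - x.lL) + (x.bL * x.m2L - x.lL)) / 4 + x.lL
    return (upper + lower) / 2

print(dtrat(VERY_STRONG))  # crisp equivalent of the placeholder scale entry
```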
Step-3—Aggregate pairwise comparison matrices: In order to create an aggregated pairwise comparison matrix at each level, the response matrices from all participants are aggregated. Aggregation of each element is conducted by taking the geometric mean of the respective elements. The following equation is used to obtain the element $\tilde{\tilde{r}}_{ij}$ by aggregating $n$ pairwise comparison matrices:

$$\tilde{\tilde{r}}_{ij} = \left( \tilde{\tilde{a}}_{ij,1} \otimes \tilde{\tilde{a}}_{ij,2} \otimes \cdots \otimes \tilde{\tilde{a}}_{ij,n} \right)^{1/n} \quad (3)$$

where $\tilde{\tilde{r}}_{ij}$ is the aggregated IT2F number related to the pairwise comparison of elements $i$ and $j$, and $\tilde{\tilde{a}}_{ij,k}$ is the IT2F number given by the $k$th participant. The pairwise comparison matrices for level-1 and level-2 factors are tabulated in Table 2, Table 3, Table 4, Table 5 and Table 6.
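The aggregation of Equation (3) can be sketched as follows. Applying the geometric mean parameter-by-parameter to the trapezoids and taking the minimum over the membership heights is a common approximation in IT2F-AHP implementations; since the paper does not spell out the element-wise arithmetic, that choice is an assumption here.

```python
# Sketch of the participant-level aggregation in Equation (3), assuming
# parameter-wise geometric means and minimum membership heights.

import math

def geo_mean(values):
    return math.prod(values) ** (1.0 / len(values))

def aggregate(judgments):
    """Aggregate IT2F judgments a_ij,k over participants k = 1..n."""
    p = {f: geo_mean([getattr(a, f) for a in judgments])
         for f in ("lU", "m1U", "m2U", "uU", "lL", "m1L", "m2L", "uL")}
    h = {f: min(getattr(a, f) for a in judgments)
         for f in ("aU", "bU", "aL", "bL")}
    return IT2F(p["lU"], p["m1U"], p["m2U"], p["uU"], h["aU"], h["bU"],
                p["lL"], p["m1L"], p["m2L"], p["uL"], h["aL"], h["bL"])
```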
Step-4—Fuzzy geometric mean: In order to calculate the weights of criteria and alternatives, the fuzzy geometric mean of each row of the aggregated comparison matrix is calculated. This step is represented by the following equation, in which D is the dimension of the pairwise comparison matrix:

$$\tilde{\tilde{r}}_i = \left( \tilde{\tilde{r}}_{i1} \otimes \tilde{\tilde{r}}_{i2} \otimes \cdots \otimes \tilde{\tilde{r}}_{iD} \right)^{1/D} \quad (4)$$

Step-5—Calculate fuzzy weights of factors, subfactors, and criteria: Using the geometric means, the priority weight of the ith element, $\tilde{\tilde{w}}_i$, can then be calculated using the following equation:

$$\tilde{\tilde{w}}_i = \tilde{\tilde{r}}_i \otimes \left( \tilde{\tilde{r}}_1 \oplus \tilde{\tilde{r}}_2 \oplus \cdots \oplus \tilde{\tilde{r}}_i \oplus \cdots \oplus \tilde{\tilde{r}}_D \right)^{-1} \quad (5)$$
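Steps 4 and 5 can be sketched by reusing the helpers above. For brevity, the sketch below defuzzifies the row geometric means of Equation (4) before normalizing, instead of carrying the full fuzzy arithmetic of Equation (5); this shortcut is our assumption, whereas the procedure in the paper keeps the weights fuzzy until Step-6.

```python
# Sketch of Steps 4-5 using crisp intermediate values (a simplification).

def crisp_weights(matrix):
    """matrix[i][j] is the aggregated IT2F comparison of elements i and j."""
    row_means = [dtrat(aggregate(row)) for row in matrix]  # Equation (4), defuzzified
    total = sum(row_means)               # stands in for r_1 (+) ... (+) r_D
    return [r / total for r in row_means]
```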
Step-6—Defuzzify and normalize weights: The fuzzy weights for criteria and alternatives calculated in the previous step are defuzzified using Equation (2). These weights are normalized to arrive at local weights for all subfactors, and global weights are then calculated according to the defined hierarchy of factors, as detailed in Step-7 and Step-8. The fuzzy, local, and global weights of evaluation factors and subfactors are presented in Table 7 and Table 8, respectively.
Step-7—Determine priority: Based on the local weights, the priorities of the main evaluation factors and subfactors are determined. These priorities identify the most important evaluation factors and subfactors linking depth perception and hazard identification. These priorities are also referred to as local weightages.
Step-8—Rank alternatives: Based on the local weightages, the overall rankings (global weightages) of subfactors and alternatives are determined. To arrive at the global weightage of a subfactor, its local weight is multiplied by the local weight of the evaluation factor under which it is located. Similarly, to arrive at the global weight of an alternative for each subfactor, the local weight of the alternative is multiplied by the global weight of the subfactor, as illustrated in the sketch below. At this step, the most important subfactors and selected alternatives are determined as the most appropriate solution to the MCDM problem, as shown in Table 9.
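The weight roll-up in Step-8 is plain multiplication down the hierarchy. A toy example with made-up numbers (not the study's weights) follows.

```python
# Step-8 arithmetic with illustrative numbers, not the study's results.

factor_local    = 0.30   # e.g., local weight of the "Performance" factors
subfactor_local = 0.35   # e.g., local weight of "Response Accuracy" under it
alt_local       = 0.65   # e.g., local weight of "VR headset" for that subfactor

subfactor_global = factor_local * subfactor_local        # subfactor's global weight
alt_global       = alt_local * subfactor_global          # alternative's global weight
print(round(subfactor_global, 3), round(alt_global, 3))  # 0.105 0.068
```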
Step-9—Sensitivity analysis: To understand the effect of variations in weightages on the outcome of the analysis and to test the robustness of the model, a sensitivity analysis was performed. The global weightages of the evaluation factors and their subfactors were varied one by one, changing the priorities to observe the effect of the changed priorities and subfactor weightages on the final selection of alternatives, as sketched below. This step is presented in more detail in Section 4.
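A single sensitivity case of the kind described in Step-9 can be sketched as follows. The proportional renormalization of the untouched weights is our assumption about how the 21 cases were constructed; all numbers are illustrative.

```python
# Sketch of one sensitivity case: bump one factor's weight, renormalize the
# remaining weights proportionally, and recompute each alternative's score.

def rescore(factor_weights, alt_scores, bumped, new_w):
    """alt_scores[alt][factor] is the alternative's weight under that factor."""
    old_rest = sum(w for f, w in factor_weights.items() if f != bumped)
    w = {f: (new_w if f == bumped else wf * (1.0 - new_w) / old_rest)
         for f, wf in factor_weights.items()}
    return {alt: sum(w[f] * s[f] for f in w) for alt, s in alt_scores.items()}

# Example with illustrative numbers: raise the device factors' weight to 0.6
weights = {"performance": 0.35, "effectiveness": 0.30, "device": 0.15, "presence": 0.20}
scores = {"vr_headset":  {"performance": 0.6, "effectiveness": 0.7, "device": 0.3, "presence": 0.8},
          "flat_screen": {"performance": 0.4, "effectiveness": 0.3, "device": 0.7, "presence": 0.2}}
print(rescore(weights, scores, "device", 0.6))
```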

4. Results and Discussion

To analyze the efficiency of safety training and to answer research questions 1 and 2, the reaction times obtained from each participant's joystick inputs were analyzed. The purpose of analyzing these data was to understand which types of hazards were correctly identified by each participant and to correlate this with the perception offered by each display type.
As discussed in Section 3, each participant was shown four scenes during the simulation: two scenes using a flat screen and two scenes using a VR headset. They were instructed to provide input using a joystick to register the time of identification of a hazard during the simulations. The breakdown of the total number of hazards identified by the participants is shown in Figure 7.
It can be observed that participants correctly identified a greater percentage of total hazards with the VR display than with the flat-screen display. Moreover, this trend is magnified when judging the distance to electric cables, which appear as two-dimensional objects when viewed from a greater distance. Participants could correctly identify the location of only 18.5% of electricity cables when using a flat-screen display but identified the location of 78.3% when using a VR headset.
A graphical representation of reaction accuracy, i.e., the timely identification of hazards, is shown in Figure 8. Participants were able to identify hazardous situations before the incident occurred in most cases when using a VR headset; zero in Figure 8 represents the time at which the incident occurs. Positive values represent the time of evasive reaction in seconds before the incident, while negative values represent seconds elapsed after the incident occurred.
Another area of significant improvement was the accuracy of hazard identification: the number of false identifications dropped by 50% when using VR headsets. This is shown in Figure 9.
To answer research question 3, the procedure described in Section 3.3 was used: fuzzy weights of all the criteria and alternatives were calculated and used to rank the alternatives with respect to each criterion and sub-criterion. For ease of comparison and ranking, the fuzzy weights were converted into defuzzified weights using Equation (2), and these crisp weights were then further normalized. The aggregated IT2F comparison matrices for the main evaluation factors and subfactors are tabulated in Table 2, Table 3, Table 4, Table 5 and Table 6, while the calculated fuzzy weights, crisp weights, and normalized weights of the main criteria, sub-criteria, and alternatives are given in Table 7, Table 8 and Table 9.
Using the relative weights of factors and subfactors in Table 7 and Table 8, the relative importance of each factor and subfactor is obtained. Accordingly, the order of relative importance of the evaluation factors is Performance Factors > Effectiveness Factors > Device Factors > Presence Factors. Multiplying the local weights of the subfactors by the priority weights of their evaluation factors yields the global weights of all subfactors, as shown in Figure 10. The same methodology is used to rank the subfactors to obtain the local and global weights. Table 9 provides the fuzzy, crisp, and normalized weights of the display alternatives with respect to all subfactors.
Among all the subfactors, Realism and Immersion were the highest ranked, with individual global weightages greater than 0.1. As can be seen in Figure 10, the subfactors were divided into three distinct categories: High (red), Medium (yellow), and Low (green), corresponding to weightages greater than 0.1, between 0.08 and 0.1, and less than 0.08, respectively. This categorization refers to the contribution of each factor toward the depth perception that leads to efficient hazard recognition during the simulations.
Similarly, the global weights of the alternatives are calculated by multiplying the local weights of the alternatives by the global weights of the subfactors assessed by the participants. As seen in Figure 11, the difference in global weights between the flat screen and the VR headset is highest for immersion and realism. Another interesting observation in Figure 11 is that the flat screen is the alternative of choice for the Ease of operation, Weight, and Sickness subfactors.
Finally, to answer the research question regarding the aspects of construction machinery operator safety training that can be improved using VR, the fuzzy sets corresponding to each alternative were calculated and defuzzified using the DTraT method (Equation (2)) to calculate their crisp weights. The final representation of the IT2F sets for both alternatives can be seen in Figure 12, where the defuzzified values are represented at the center of these type-2 fuzzy sets. After applying the DTraT method, the global weights of the alternatives are 3.36 for the VR headset and 1.42 for the flat screen. It can thus be observed that the aspects related to presence factors, i.e., the perception of immersion and realism, which contribute to distance perception, can be conveyed effectively for better hazard identification. Overall, the VR headset is more effective than the flat screen for virtual safety training.
To assess the effect of the chosen MCDM analysis technique on the obtained results, Buckley's type-1 fuzzy AHP method [63] served as a control technique, and the same set of participant responses was analyzed using type-1 fuzzy sets. The weights obtained, compared with those from the IT2F-AHP analysis, are presented in Table 10. Both methods produce equivalent results with the same rankings for all the subfactors considered in this study. Accordingly, the type-2 fuzzy AHP results are considered sufficiently robust, and the findings of this study are independent of the analysis technique used.
Further, a sensitivity analysis was conducted to validate the outcomes and to understand whether changes to the weights of the main evaluation factors could affect the overall selection of display technology for training involving depth perception. For the analysis, 21 cases were defined by varying the weights of the evaluation factors and subfactors. The first nine cases were defined by gradually increasing the weights of the main factors one by one. Next, the weights of the subfactors under the performance factors were varied by assigning a high weightage to one subfactor and low weightages to the others; the weights of the remaining subfactors were varied similarly. The case definitions can be seen in Table 11, and the variations in weights across the different cases can be seen in Table 12.
The variation in the final weights of the alternatives was graphed to observe any overall changes to the alternative selection. It can be seen in Figure 13a that for all but three of the cases, the weight of the VR headset remains higher than that of the flat screen. However, the weight of the flat screen exceeds that of the VR headset in Cases 10, 19, and 20. This variation can be attributed to the high weights assigned to the Ease of operation, device Weight, and Sickness subfactors. Accordingly, it can be observed from Figure 13b that the ranking of the flat screen exceeds that of the VR headset only in Cases 10, 19, and 20.
The results show that realism and immersion are two of the most important factors leading to better hazard recognition among novice heavy-machinery operators. Looking at the global weights of the alternatives for the subfactors in Figure 11, this finding is reinforced, as the weightages for both immersion and realism are highest for the immersive VR display. This finding demonstrates that immersive VR displays can portray realism better than semi-immersive displays, and it underlines the greater effectiveness of VR-based training with immersive displays due to their ability to provide better depth perception.
On the other hand, it can be observed from Figure 11 that the flat-screen display was preferred by the participants over the VR display within the Device Factors category (Weight and Sickness) and for Ease of use within the Performance Factors category. In their feedback, most first-time VR users reported being initially uncomfortable with the immersive environment, as the simulations provided a feeling of moving around in a tower crane cabin while they were actually sitting in a stationary chair. Additionally, not being able to see the physical joystick with the headset on contributed to the flat screen being preferred for ease of use.
The effectiveness and suitability of VR displays for virtual training are also supported by the statistics on hazards correctly identified by the participants, presented in Figure 7. Immersive displays provided participants with sufficient depth of field to identify the distance to seemingly two-dimensional objects such as overhead electricity cables. Additionally, Figure 9 shows that the identification of critical hazards improved by more than 300% with an immersive VR display compared with a flat-screen display.
This study demonstrated the importance of depth perception in hazard identification safety training by comparing the performance of novice tower crane operators in immersive and semi-immersive virtual environments. It can be concluded that better training efficiency can be achieved with immersive virtual displays when imparting hazard identification training to novice heavy machinery operators.

5. Conclusions

In this study, a comprehensive set of factors key to the safety training of heavy equipment operations was analyzed. A group of operators from a course on tower crane operation was identified and shown simulations of tower crane operations in a virtual environment using flat-screen displays and head-mounted VR displays. Their hazard identification performance was recorded using a joystick during the simulations. Finally, the participants were asked to respond to a questionnaire recording their responses regarding key factors in hazard perception during safety training. Interval type-2 fuzzy AHP was used to analyze participant feedback, converting linguistic responses into quantitative data to rank the effectiveness of factors and subfactors. Further, a sensitivity analysis was conducted to observe the variations in hazard identification and depth perception.
Data analysis of participant response times reveals that immersive training with VR headsets improves the perception of hazards: participants identified more hazards, with sufficient time before incident occurrence, and with higher accuracy compared with their performance using the flat-screen display. Further, a very large improvement of about 300% was observed in the timely identification of critical safety hazards such as electricity cables, which account for the greatest number of fatal accidents involving tower cranes.
Observing the results from the interval type-2 fuzzy AHP and those from the sensitivity analysis, it can be concluded that VR simulations can provide effective training for heavy machinery operators when a VR headset is used as the display type. Among the main evaluation factors, performance factors have the highest ranking as rated by the participants, followed by effectiveness factors, device factors, and presence factors. The IT2F-AHP results highlight the areas related to immersion and realism that can be strengthened by using an immersive virtual environment, including depth perception, spatial awareness, and familiarization with construction sites. Out of the twelve subfactors, there were only three (Ease of use, Sickness, and Weight) for which participants rated the flat-screen display better than the VR headset for the training experience.
The sensitivity analysis reveals that varying weights across all levels of the hierarchy can change the preferred alternative, but increasing the weight of only the main evaluation factors or of individual subfactors does not, in most cases, alter the preference between the alternatives.
Combining the feedback from participants and the data generated during the experiment, this study demonstrates that the depth perception conveyed by immersive displays such as head-mounted VR headsets can increase the efficiency of virtual safety training. Virtual safety training with HMDs can be used as an effective tool to increase awareness and hazard perception ability among novice heavy-machinery operators across most hazard types. This practice can drastically reduce injuries and fatalities on construction sites by reducing accidents caused by a lack of operator experience, as novice operators can be trained in a virtual environment and taught effective ways of identifying hazards by their experienced mentors. The acquisition of operator skills by novice operators and their safe implementation on construction sites should be continuously assessed in addition to safety training.

Author Contributions

Conceptualization, A.S. and E.M.G.; Methodology, A.S., E.M.G. and M.A.; Formal analysis, A.S.; Investigation, A.S., M.A.; Validation, A.R., H.L.; Supervision, M.A., T.D.; Visualization, A.S.; Writing—original draft, A.S.; Writing—review and editing, M.A., E.M.G.; Project administration, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was reviewed by the Monash University Human Research Ethics Committee, which was satisfied that the proposal meets the requirements of the National Statement on Ethical Conduct in Human Research and granted approval (project ID 28033, approved 23 April 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Raw data supporting this research article will be shared upon reasonable request.

Acknowledgments

The authors acknowledge the contributions of the members of the ASCII Lab at Monash University in critiquing the manuscript and providing constructive feedback.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Safe Work Australia. Work-related Traumatic Injury Fatalities, Australia 2020. Available online: https://www.safeworkaustralia.gov.au/sites/default/files/2021-11/Work-related%20traumatic%20injury%20fatalities%20Australia%202020.pdf (accessed on 5 September 2022).
  2. Gharaie, E.; Lingard, H.; Cooke, T. Causes of Fatal Accidents Involving Cranes in the Australian Construction Industry. Constr. Econ. Build. 2015, 15, 1–12. [Google Scholar] [CrossRef] [Green Version]
  3. Kazan, E.; Usmen, M.A. Worker safety and injury severity analysis of earthmoving equipment accidents. J. Saf. Res. 2018, 65, 73–81. [Google Scholar] [CrossRef] [PubMed]
  4. Beavers, J.E.; Moore, J.R.; Rinehart, R.; Schriver, W.R. Crane-Related Fatalities in the Construction Industry. J. Constr. Eng. Manag. 2006, 132, 901–910. [Google Scholar] [CrossRef]
  5. Raviv, G.; Fishbain, B.; Shapira, A. Analyzing risk factors in crane-related near-miss and accident reports. Saf. Sci. 2017, 91, 192–205. [Google Scholar] [CrossRef]
  6. Dong, X.S.; Largay, J.A.; Wang, X.; Cain, C.T.; Romano, N. The construction FACE database—Codifying the NIOSH FACE reports. J. Saf. Res. 2017, 62, 217–225. [Google Scholar] [CrossRef]
  7. Teizer, J.; Cheng, T.; Fang, Y. Location tracking and data visualization technology to advance construction ironworkers’ education and training in safety and productivity. Autom. Constr. 2013, 35, 53–68. [Google Scholar] [CrossRef]
  8. Dzeng, R.-J.; Lin, C.-T.; Fang, Y.-C. Using eye-tracker to compare search patterns between experienced and novice workers for site hazard identification. Saf. Sci. 2015, 82, 56–67. [Google Scholar] [CrossRef]
  9. Li, H.; Chan, G.; Skitmore, M. Multiuser Virtual Safety Training System for Tower Crane Dismantlement. J. Comput. Civ. Eng. 2012, 26, 638–647. [Google Scholar] [CrossRef] [Green Version]
  10. Goh, J.; Hu, S.; Fang, Y. Human-in-the-loop Simulation for Crane Lift Planning in Modular Construction on-site Assembly. In Proceedings of the ASCE International Conference on Computing in Civil Engineering 2019: Visualization, Information Modeling, and Simulation, Atlanta, GA, USA, 17–19 June 2019; pp. 71–78. [Google Scholar]
  11. Dhalmahapatra, K.; Maiti, J.; Krishna, O. Assessment of virtual reality based safety training simulator for electric overhead crane operations. Saf. Sci. 2021, 139, 105241. [Google Scholar] [CrossRef]
  12. Zhang, H.; He, X.; Mitri, H. Fuzzy comprehensive evaluation of virtual reality mine safety training system. Saf. Sci. 2019, 120, 341–351. [Google Scholar] [CrossRef]
  13. Juang, J.; Hung, W.; Kang, S. SimCrane 3D+: A crane simulator with kinesthetic and stereoscopic vision. Adv. Eng. Inform. 2013, 27, 506–518. [Google Scholar] [CrossRef]
  14. Arashpour, M.; Ngo, T.; Li, H. Scene understanding in construction and buildings using image processing methods: A comprehensive review and a case study. J. Build. Eng. 2020, 33, 101672. [Google Scholar] [CrossRef]
  15. Arashpour, M.; Kamat, V.; Heidarpour, A.; Hosseini, M.R.; Gill, P. Computer vision for anatomical analysis of equipment in civil infrastructure projects: Theorizing the development of regression-based deep neural networks. Autom. Constr. 2022, 137, 104193. [Google Scholar] [CrossRef]
  16. Zhang, F.; Fleyeh, H.; Wang, X.; Lu, M. Construction site accident analysis using text mining and natural language processing techniques. Autom. Constr. 2019, 99, 238–248. [Google Scholar] [CrossRef]
  17. Zhang, W.; Xue, N.; Zhang, J.; Zhang, X. Identification of Critical Causal Factors and Paths of Tower-Crane Accidents in China through System Thinking and Complex Networks. J. Constr. Eng. Manag. 2021, 147, 04021174. [Google Scholar] [CrossRef]
  18. Zhang, X.; Zhang, W.; Jiang, L.; Zhao, T. Identification of Critical Causes of Tower-Crane Accidents through System Thinking and Case Analysis. J. Constr. Eng. Manag. 2020, 146, 04020071. [Google Scholar] [CrossRef]
  19. Arashpour, M.; Heidarpour, A.; Nezhad, A.A.; Hosseinifard, Z.; Chileshe, N.; Hosseini, R. Performance-based control of variability and tolerance in off-site manufacture and assembly: Optimization of penalty on poor production quality. Constr. Manag. Econ. 2019, 38, 502–514. [Google Scholar] [CrossRef]
  20. Sacks, R.; Perlman, A.; Barak, R. Construction safety training using immersive virtual reality. Constr. Manag. Econ. 2013, 31, 1005–1017. [Google Scholar] [CrossRef]
  21. Buttussi, F.; Chittaro, L. Effects of Different Types of Virtual Reality Display on Presence and Learning in a Safety Training Scenario. IEEE Trans. Vis. Comput. Graph. 2018, 24, 1063–1076. [Google Scholar] [CrossRef]
  22. Patrão, B.; Menezes, P. A Virtual Reality System for Training Operators. Int. J. Online Biomed. Eng. 2013, 9, S8. [Google Scholar] [CrossRef]
  23. Arashpour, M.; Golafshani, E.M.; Parthiban, R.; Lamborn, J.; Kashani, A.; Li, H.; Farzanehfar, P. Predicting individual learning performance using machine-learning hybridized with the teaching-learning-based optimization. Comput. Appl. Eng. Educ. 2022. [Google Scholar] [CrossRef]
  24. Lin, Z.; Petzold, F.; Hsieh, S.H. 4D-BIM based real time augmented reality navigation system for tower crane operation. In Proceedings of the Construction Research Congress 2020: Computer Applications, Tempe, AZ, USA, 8–10 March 2020; pp. 828–836. [Google Scholar]
  25. He, F.; Ong, S.K.; Nee, A.Y.C. An Integrated Mobile Augmented Reality Digital Twin Monitoring System. Computers 2021, 10, 99. [Google Scholar] [CrossRef]
  26. Chen, Y.C.; Chi, H.-L.; Kang, S.-C.; Hsieh, S.-H. A smart crane operations assistance system using augmented reality technology. In Proceedings of the 28th International Symposium on Automation and Robotics in Construction, ISARC 2011, Seoul, Korea, 29 June–2 July 2011. [Google Scholar]
  27. Zhang, Z.; Pan, W. Virtual reality (VR) supported lift planning for modular integrated construction (MiC) of high-rise buildings. HKIE Trans. 2019, 26, 136–143. [Google Scholar] [CrossRef]
  28. Zhang, Z.; Pan, W. Virtual reality supported interactive tower crane layout planning for high-rise modular integrated construction. Autom. Constr. 2021, 130, 103854. [Google Scholar] [CrossRef]
  29. Kayhani, N.; Taghaddos, H.; Noghabaee, M.; Hermann, U. Utilization of virtual reality visualizations on heavy mobile crane planning for modular construction. In Proceedings of the ISARC 2018—35th International Symposium on Automation and Robotics in Construction and International AEC/FM Hackathon: The Future of Building Things, Berlin, Germany, 20–25 July 2018; pp. 1226–1230. [Google Scholar]
  30. Perlman, A.; Sacks, R.; Barak, R. Hazard recognition and risk perception in construction. Saf. Sci. 2014, 64, 22–31. [Google Scholar] [CrossRef]
  31. Lee, H.; Kim, H.; Monteiro, D.V.; Goh, Y.; Han, D.; Liang, H.-N.; Yang, H.S.; Jung, J. Annotation vs. Virtual Tutor: Comparative Analysis on the Effectiveness of Visual Instructions in Immersive Virtual Reality. In Proceedings of the 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Beijing, China, 14–18 October 2019; pp. 318–327. [Google Scholar] [CrossRef]
  32. Wang, Y.; Chardonnet, J.-R.; Merienne, F. Enhanced cognitive workload evaluation in 3D immersive environments with TOPSIS model. Int. J. Hum. Comput. Stud. 2021, 147, 102572. [Google Scholar] [CrossRef]
  33. Çolak, M.; Kaya, İ. Prioritization of renewable energy alternatives by using an integrated fuzzy MCDM model: A real case application for Turkey. Renew. Sust. Energy Rev. 2017, 80, 840–853. [Google Scholar] [CrossRef]
  34. Ayodele, T.; Ogunjuyigbe, A.; Odigie, O.; Munda, J. A multi-criteria GIS based model for wind farm site selection using interval type-2 fuzzy analytic hierarchy process: The case study of Nigeria. Appl. Energy 2018, 228, 1853–1869. [Google Scholar] [CrossRef]
  35. Bourhim, E.M.; Cherkaoui, A. Efficacy of Virtual Reality for Studying People’s Pre-evacuation Behavior under Fire. Int. J. Hum. Comput. Stud. 2020, 142, 102484. [Google Scholar] [CrossRef]
  36. Yazdani-Chamzini, A.; Yakhchali, S.H. Tunnel Boring Machine (TBM) selection using fuzzy multicriteria decision making methods. Tunn. Undergr. Space Technol. 2012, 30, 194–204. [Google Scholar] [CrossRef]
  37. Zhang, F.; Ju, Y.; Gonzalez, E.D.S.; Wang, A. SNA-based multi-criteria evaluation of multiple construction equipment: A case study of loaders selection. Adv. Eng. Inform. 2020, 44, 101056. [Google Scholar] [CrossRef]
  38. Ilbahar, E.; Karaşan, A.; Cebi, S.; Kahraman, C. A novel approach to risk assessment for occupational health and safety using Pythagorean fuzzy AHP & fuzzy inference system. Saf. Sci. 2018, 103, 124–136. [Google Scholar] [CrossRef]
  39. Taylan, O.; Bafail, A.O.; Abdulaal, R.M.; Kabli, M.R. Construction projects selection and risk assessment by fuzzy AHP and fuzzy TOPSIS methodologies. Appl. Soft Comput. 2014, 17, 105–116. [Google Scholar] [CrossRef]
  40. Ramanathan, R. Multicriteria Analysis of Energy. In Encyclopedia of Energy; Cleveland, C.J., Ed.; Elsevier: New York, NY, USA, 2004; pp. 77–88. [Google Scholar]
  41. Hopkins, L.D. Multi-attribute decision making in urban studies. In International Encyclopedia of the Social & Behavioral Sciences; Smelser, N.J., Baltes, P.B., Eds.; Pergamon: Oxford, UK, 2001; pp. 10157–10160. [Google Scholar]
  42. Unity—Game Engine Release 2020.3.2f1; Unity Technologies: San Francisco, CA, USA, 2022. Available online: https://unity3d.com/get-unity/download/archive (accessed on 3 August 2021).
  43. Raimbaud, P.; Lou, R.; Danglade, F.; Figueroa, P.; Hernandez, J.; Merienne, F. A Task-Centred Methodology to Evaluate the Design of Virtual Reality User Interactions: A Case Study on Hazard Identification. Buildings 2021, 11, 277. [Google Scholar] [CrossRef]
  44. Kim, S.-Y. Effects of Military Training Based on the Virtual Reality of Army Using AHP Method. Turk. J. Comput. Math. Educ. 2021, 12, 551–556. [Google Scholar]
  45. Anderson, L.W.; Krathwohl, D.R.; Bloom, B.S. A taxonomy for learning teaching and assessing. In A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives; Pearson Education Limited: London, UK, 2001. [Google Scholar]
  46. Afzal, M.; Shafiq, M. Evaluating 4D-BIM and VR for Effective Safety Communication and Training: A Case Study of Multilingual Construction Job-Site Crew. Buildings 2021, 11, 319. [Google Scholar] [CrossRef]
  47. Tan, Y.; Xu, W.; Li, S.; Chen, K. Augmented and Virtual Reality (AR/VR) for Education and Training in the AEC Industry: A Systematic Review of Research and Applications. Buildings 2022, 12, 1529. [Google Scholar] [CrossRef]
  48. Witmer, B.G.; Jerome, C.J.; Singer, M.J. The Factor Structure of the Presence Questionnaire. PRESENCE Teleoperators Virtual Augment. Real. 2005, 14, 298–312. [Google Scholar] [CrossRef]
  49. Perroud, B.; Régnier, S.; Kemeny, A.; Merienne, F. Model of realism score for immersive VR systems. Transp. Res. Part F Traffic Psychol. Behav. 2019, 61, 238–251. [Google Scholar] [CrossRef] [Green Version]
  50. Kurilovas, E.; Vinogradova, I. Improved fuzzy AHP methodology for evaluating quality of distance learning courses. Int. J. Eng. Educ. 2016, 32, 1618–1624. [Google Scholar]
  51. Try, S.; Panuwatwanich, K.; Tanapornraweekit, G.; Kaewmoracharoen, M. Virtual reality application to aid civil engineering laboratory course: A multicriteria comparative study. Comput. Appl. Eng. Educ. 2021, 29, 1771–1792. [Google Scholar] [CrossRef]
  52. Saaty, T.L. What is the analytic hierarchy process? In Mathematical Models for Decision Support; Springer: Berlin/Heidelberg, Germany, 1988; pp. 109–121. [Google Scholar]
  53. Zadeh, L.A. Fuzzy sets. Inform. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  54. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—I. Inf. Sci. 1975, 8, 199–249. [Google Scholar] [CrossRef]
  55. Mendel, J.M.; John, R.I.; Liu, F. Interval Type-2 Fuzzy Logic Systems Made Simple. IEEE Trans. Fuzzy Syst. 2006, 14, 808–821. [Google Scholar] [CrossRef] [Green Version]
  56. Kahraman, C.; Öztayşi, B.; Sarı, I.U.; Turanoğlu, E. Fuzzy analytic hierarchy process with interval type-2 fuzzy sets. Knowl. Based Syst. 2014, 59, 48–57. [Google Scholar] [CrossRef]
  57. Chen, S.-M.; Lee, L.-W. Fuzzy multiple attributes group decision-making based on the interval type-2 TOPSIS method. Expert Syst. Appl. 2010, 37, 2790–2798. [Google Scholar] [CrossRef]
  58. Chen, S.-M.; Chen, J.-H. Fuzzy risk analysis based on similarity measures between interval-valued fuzzy numbers and interval-valued fuzzy number arithmetic operators. Expert Syst. Appl. 2009, 36, 6309–6317. [Google Scholar] [CrossRef]
  59. Karnik, N.N.; Mendel, J.M. Operations on type-2 fuzzy sets. Fuzzy Sets Syst. 2001, 122, 327–348. [Google Scholar] [CrossRef]
  60. Niewiadomski, A.; Ochelska, J.; Szczepaniak, P.S. Interval-valued linguistic summaries of databases. Control Cybern. 2006, 35, 415–443. [Google Scholar]
  61. Chen, S.-M.; Lee, L.-W. Fuzzy multiple attributes group decision-making based on the ranking values and the arithmetic operations of interval type-2 fuzzy sets. Expert Syst. Appl. 2010, 37, 824–833. [Google Scholar] [CrossRef]
  62. Lee, L.-W.; Chen, S.-M. Fuzzy multiple attributes group decision-making based on the extension of TOPSIS method and interval type-2 fuzzy sets. In Proceedings of the 2008 International Conference on Machine Learning and Cybernetics, Kunming, China, 12–15 July 2008; pp. 3260–3265. [Google Scholar]
  63. Buckley, J.J. Fuzzy hierarchical analysis. Fuzzy Sets Syst. 1985, 17, 233–247. [Google Scholar] [CrossRef]
Figure 1. Hazards modeled in simulations.
Figure 2. Experimentation steps.
Figure 3. Simulation of heavy machinery operation with a flat screen (left) and a VR headset (right).
Figure 4. IT2F-AHP analysis of criteria for evaluation of training efficiency.
Figure 5. IT2F scales of the linguistic variables.
Figure 6. The steps taken for analyzing the data in this study.
Figure 7. Hazard identification performance of participants.
Figure 8. Effect of display type on reaction accuracy.
Figure 9. Total number of false identifications in each simulation.
Figure 10. Global weights of subfactors.
Figure 11. Global weights of alternatives for subfactors.
Figure 12. IT2F sets and defuzzified values of (a) the VR headset and (b) the flat screen.
Figure 13. Sensitivity analysis: (a) variation in alternative weights; (b) case-wise ranking changes.
Table 1. Definition of the linguistic variables and the equivalent IT2F scales.

Linguistic Variable | Trapezoidal IT2F Scale
Absolutely Strong (AS) | ((7, 8, 9, 9; 1, 1), (7.2, 8.2, 8.8, 9; 0.8, 0.8))
Very Strong (VS) | ((5, 6, 8, 9; 1, 1), (5.2, 6.2, 7.8, 8.8; 0.8, 0.8))
Fairly Strong (FS) | ((3, 4, 6, 7; 1, 1), (3.2, 4.2, 5.8, 6.8; 0.8, 0.8))
Slightly Strong (SS) | ((1, 2, 4, 5; 1, 1), (1.2, 2.2, 3.8, 4.8; 0.8, 0.8))
Exactly Equal (E) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1))
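The scale in Table 1 maps each linguistic judgment onto a trapezoidal IT2F number defined by four corner points and two membership heights. The following minimal Python sketch (our illustration, not the authors' implementation; the class name IT2FN and its method are our own) shows one way to represent these scale values, including the reciprocal used to fill the lower triangle of a pairwise comparison matrix:

```python
from dataclasses import dataclass

@dataclass
class IT2FN:
    """Trapezoidal interval type-2 fuzzy number: an upper and a lower
    trapezoid (a1, a2, a3, a4), each with its own membership height."""
    upper: tuple
    lower: tuple
    h_upper: float = 1.0
    h_lower: float = 0.8

    def reciprocal(self) -> "IT2FN":
        # 1/x applied corner-wise, with the corner order reversed so the
        # result is again a valid (non-decreasing) trapezoid.
        return IT2FN(
            tuple(round(1 / x, 4) for x in reversed(self.upper)),
            tuple(round(1 / x, 4) for x in reversed(self.lower)),
            self.h_upper, self.h_lower,
        )

# The linguistic scale of Table 1
SCALE = {
    "AS": IT2FN((7, 8, 9, 9), (7.2, 8.2, 8.8, 9)),
    "VS": IT2FN((5, 6, 8, 9), (5.2, 6.2, 7.8, 8.8)),
    "FS": IT2FN((3, 4, 6, 7), (3.2, 4.2, 5.8, 6.8)),
    "SS": IT2FN((1, 2, 4, 5), (1.2, 2.2, 3.8, 4.8)),
    "E":  IT2FN((1, 1, 1, 1), (1, 1, 1, 1), 1.0, 1.0),
}

print(SCALE["FS"].reciprocal().upper)  # (0.1429, 0.1667, 0.25, 0.3333)
```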
Table 2. Aggregated IT2F comparison matrix (Level 1).

 | Performance | Effectiveness | Presence | Device
P | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((0.66, 0.75, 0.98, 1.15; 1, 1), (0.68, 0.77, 0.94, 1.11; 0.8, 0.8)) | ((0.56, 0.67, 0.91, 1.09; 1, 1), (0.58, 0.69, 0.88, 1.04; 0.8, 0.8)) | ((0.84, 1.19, 1.89, 2.39; 1, 1), (0.92, 1.24, 1.81, 2.27; 0.8, 0.8))
E | ((0.86, 1.04, 1.34, 1.51; 1, 1), (0.9, 1.06, 1.3; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((0.46, 0.56, 0.86, 1.18; 1, 1), (0.49, 0.58, 0.81, 1.09; 0.8, 0.8)) | ((0.94, 1.2, 1.7, 2.03; 1, 1), (1, 1.24, 1.63, 1.95; 0.8, 0.8))
Pr | ((0.92, 1.11, 1.49, 1.77; 1, 1), (0.96, 1.13, 1.44; 0.8, 0.8)) | ((0.84, 1.18, 1.78, 2.14; 1, 1), (0.92, 1.22, 1.72, 2.06; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((2.39, 3.07, 4.26, 4.82; 1, 1), (2.53, 3.19, 4.15, 4.71; 0.8, 0.8))
D | ((0.42, 0.53, 0.85, 1.18; 1, 1), (0.44, 0.55, 0.8; 0.8, 0.8)) | ((0.49, 0.6, 0.84, 1.06; 1, 1), (0.51, 0.61, 0.8, 1; 0.8, 0.8)) | ((0.2, 0.24, 0.33, 0.42; 1, 1), (0.21, 0.24, 0.31, 0.39; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1))
Note: P: Performance, E: Effectiveness, Pr: Presence, D: Device.
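The cells of Table 2 aggregate the judgments of all respondents. Assuming the usual geometric-mean aggregation (Buckley [63], extended to IT2F sets by Kahraman et al. [56]), each aggregated trapezoid is the element-wise geometric mean of the individual trapezoids, applied separately to the upper and lower membership functions. A minimal sketch with two hypothetical respondents:

```python
import math

def geo_mean(trapezoids):
    """Element-wise geometric mean of trapezoid corner tuples."""
    n = len(trapezoids)
    return tuple(round(math.prod(t[i] for t in trapezoids) ** (1 / n), 2)
                 for i in range(4))

# Upper trapezoids of "Fairly Strong" and "Very Strong" from Table 1;
# the two respondents here are hypothetical. Lower trapezoids are
# aggregated in exactly the same way.
fs_upper = (3, 4, 6, 7)
vs_upper = (5, 6, 8, 9)
print(geo_mean([fs_upper, vs_upper]))  # (3.87, 4.9, 6.93, 7.94)
```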
Table 3. Pairwise IT2F comparison matrix: Performance factors.

 | EO | EN | LE | RA
EO | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((1.37, 1.82, 2.74, 3.44; 1, 1), (1.46, 1.9, 2.61, 3.25; 0.8, 0.8)) | ((0.44, 0.61, 0.95, 1.25; 1, 1), (0.47, 0.63, 0.9, 1.16; 0.8, 0.8)) | ((0.37, 0.42, 0.54, 0.6; 1, 1), (0.38, 0.43, 0.51, 0.59; 0.8, 0.8))
EN | ((0.29, 0.37, 0.56, 0.73; 1, 1), (0.31, 0.38, 0.52; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((0.13, 0.15, 0.21, 0.27; 1, 1), (0.13, 0.15, 0.2, 0.25; 0.8, 0.8)) | ((0.33, 0.41, 0.61, 0.78; 1, 1), (0.35, 0.42, 0.57, 0.73; 0.8, 0.8))
LE | ((0.8, 1.06, 1.68, 2.27; 1, 1), (0.85, 1.1, 1.58; 0.8, 0.8)) | ((3.71, 4.84, 6.81, 7.67; 1, 1), (3.95, 5.06, 6.61, 7.5; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((0.99, 1.39, 2.09, 2.47; 1, 1), (1.07, 1.45, 2, 2.38; 0.8, 0.8))
RA | ((1.65, 1.9, 2.39, 2.71; 1, 1), (1.69, 1.94, 2.3; 0.8, 0.8)) | ((1.27, 1.67, 2.46, 2.97; 1, 1), (1.36, 1.74, 2.35, 2.85; 0.8, 0.8)) | ((0.4, 0.48, 0.73, 1; 1, 1), (0.42, 0.5, 0.69, 0.93; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1))
Note: EO: Ease of Operation, EN: Enjoyability, LE: Level of Engagement, RA: Response Accuracy.
Table 4. Pairwise IT2F comparison matrix: Effectiveness factors.

 | OU | PU | ER
OU | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((0.49, 0.61, 0.91, 1.18; 1, 1), (0.52, 0.63, 0.87, 1.11; 0.8, 0.8)) | ((0.18, 0.21, 0.28, 0.35; 1, 1), (0.18, 0.21, 0.27, 0.34; 0.8, 0.8))
PU | ((0.84, 1.11, 1.64, 2.03; 1, 1), (0.9, 1.15, 1.58; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((0.19, 0.23, 0.32, 0.39; 1, 1), (0.2, 0.23, 0.3, 0.37; 0.8, 0.8))
ER | ((2.81, 3.58, 4.91, 5.52; 1, 1), (2.96, 3.72, 4.75; 0.8, 0.8)) | ((2.54, 3.19, 4.44, 5.2; 1, 1), (2.68, 3.32, 4.29, 5.02; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1))
Note: OU: Objective Usability, PU: Perceived Usefulness, ER: Ease of Recognition.
Table 5. Pairwise IT2F comparison matrix: Presence factors.

 | IM | RL
IM | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((0.48, 0.73, 1.37, 2.07; 1, 1), (0.54, 0.77, 1.28, 1.87; 0.8, 0.8))
RL | ((0.48, 0.73, 1.37, 2.07; 1, 1), (0.54, 0.77, 1.28; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1))
Note: IM: Immersion, RL: Realism.
Table 6. Pairwise IT2F comparison matrix: Device factors.

 | W | SK | IT
W | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((0.25, 0.31, 0.52, 0.85; 1, 1), (0.26, 0.31, 0.48, 0.75; 0.8, 0.8)) | ((0.92, 1.28, 1.97, 2.37; 1, 1), (0.99, 1.34, 1.89, 2.28; 0.8, 0.8))
SK | ((1.17, 1.92, 3.3, 3.97; 1, 1), (1.33, 2.06, 3.15; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1)) | ((1.12, 1.6, 2.75, 3.81; 1, 1), (1.22, 1.69, 2.59, 3.52; 0.8, 0.8))
IT | ((0.42, 0.51, 0.79, 1.08; 1, 1), (0.44, 0.52, 0.75; 0.8, 0.8)) | ((0.26, 0.37, 0.63, 0.9; 1, 1), (0.28, 0.38, 0.59, 0.82; 0.8, 0.8)) | ((1, 1, 1, 1; 1, 1), (1, 1, 1, 1; 1, 1))
Note: W: Weight, SK: Sickness, IT: Interaction.
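From each comparison matrix, fuzzy weights such as those in Table 8 can be derived via Buckley's row geometric-mean method [63]. A sketch on the upper trapezoids of the Presence matrix (Table 5) follows; the small differences from the published IM row of Table 8 are consistent with rounding in the printed matrix and with the full IT2F arithmetic that this simplification omits:

```python
import math

def row_geo_mean(row):
    """Element-wise geometric mean of the trapezoids in one matrix row."""
    n = len(row)
    return tuple(math.prod(t[i] for t in row) ** (1 / n) for i in range(4))

# Upper trapezoids of the Presence matrix (Table 5); the lower
# trapezoids are processed the same way.
im_row = [(1, 1, 1, 1), (0.48, 0.73, 1.37, 2.07)]
rl_row = [(0.48, 0.73, 1.37, 2.07), (1, 1, 1, 1)]

r_im, r_rl = row_geo_mean(im_row), row_geo_mean(rl_row)
total = tuple(a + b for a, b in zip(r_im, r_rl))
# Fuzzy division for the normalized weight: (a1/b4, a2/b3, a3/b2, a4/b1).
w_im = tuple(round(r_im[i] / total[3 - i], 2) for i in range(4))
print(w_im)  # ~(0.24, 0.36, 0.69, 1.04); cf. IM in Table 8: (0.23, 0.36, 0.68, 0.98)
```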
Table 7. Local weights of evaluation factors.

Factors | Fuzzy Weights | Crisp Weights | Normalized Weights
Performance Factors | ((0.14, 0.18, 0.32, 0.44; 1, 1), (0.14, 0.2, 0.3, 0.39; 0.8, 0.8)) | 0.2513 | 0.2621
Effectiveness Factors | ((0.13, 0.18, 0.31, 0.43; 1, 1), (0.15, 0.2, 0.3, 0.39; 0.8, 0.8)) | 0.2488 | 0.2595
Presence Factors | ((0.12, 0.17, 0.29, 0.39; 1, 1), (0.13, 0.17, 0.27, 0.36; 0.8, 0.8)) | 0.2265 | 0.2362
Device Factors | ((0.12, 0.17, 0.29, 0.4; 1, 1), (0.14, 0.18, 0.28, 0.37; 0.8, 0.8)) | 0.2323 | 0.2422
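The crisp weights are obtained by defuzzifying the fuzzy weights. The paper's exact defuzzification operator is not reproduced here; the sketch below uses a simple average of the height-weighted trapezoid centers (our assumption, cf. Kahraman et al. [56]), which, after normalization, recovers Table 7's normalized weights to within rounding:

```python
def defuzzify(upper, lower, h_upper=1.0, h_lower=0.8):
    # Simplified defuzzification: average of the height-weighted
    # centers of the upper and lower trapezoids.
    return (sum(upper) / 4 * h_upper + sum(lower) / 4 * h_lower) / 2

fuzzy_weights = {  # fuzzy factor weights from Table 7
    "Performance":   ((0.14, 0.18, 0.32, 0.44), (0.14, 0.20, 0.30, 0.39)),
    "Effectiveness": ((0.13, 0.18, 0.31, 0.43), (0.15, 0.20, 0.30, 0.39)),
    "Presence":      ((0.12, 0.17, 0.29, 0.39), (0.13, 0.17, 0.27, 0.36)),
    "Device":        ((0.12, 0.17, 0.29, 0.40), (0.14, 0.18, 0.28, 0.37)),
}
crisp = {k: defuzzify(u, l) for k, (u, l) in fuzzy_weights.items()}
total = sum(crisp.values())
print({k: round(v / total, 4) for k, v in crisp.items()})
# ~{'Performance': 0.2624, 'Effectiveness': 0.2594,
#   'Presence': 0.2362, 'Device': 0.242}; cf. Table 7
```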
Table 8. Weights of subfactors.

Subfactor | Fuzzy Weights | Crisp Weights | Normalized Local Weights
Performance Factors:
EO | ((0.11, 0.16, 0.28, 0.39; 1, 1), (0.12, 0.17, 0.26, 0.37; 0.8, 0.8)) | 0.2218 | 0.2534
EN | ((0.12, 0.16, 0.27, 0.36; 1, 1), (0.13, 0.17, 0.25, 0.33; 0.8, 0.8)) | 0.2133 | 0.2437
LE | ((0.09, 0.15, 0.27, 0.38; 1, 1), (0.11, 0.15, 0.26, 0.37; 0.8, 0.8)) | 0.2123 | 0.2426
RA | ((0.12, 0.17, 0.29, 0.39; 1, 1), (0.14, 0.18, 0.26, 0.36; 0.8, 0.8)) | 0.2278 | 0.2603
Effectiveness Factors:
OU | ((0.18, 0.23, 0.35, 0.44; 1, 1), (0.18, 0.24, 0.33, 0.42; 0.8, 0.8)) | 0.2820 | 0.3316
PU | ((0.16, 0.21, 0.35, 0.46; 1, 1), (0.17, 0.22, 0.33, 0.43; 0.8, 0.8)) | 0.2775 | 0.3263
ER | ((0.17, 0.23, 0.36, 0.49; 1, 1), (0.18, 0.23, 0.33, 0.45; 0.8, 0.8)) | 0.2910 | 0.3422
Presence Factors:
IM | ((0.23, 0.36, 0.68, 0.98; 1, 1), (0.26, 0.39, 0.63, 0.89; 0.8, 0.8)) | 0.5270 | 0.5000
RL | ((0.23, 0.36, 0.68, 0.98; 1, 1), (0.26, 0.39, 0.63, 0.89; 0.8, 0.8)) | 0.5270 | 0.5000
Device Factors:
W | ((0.15, 0.21, 0.39, 0.58; 1, 1), (0.16, 0.23, 0.37, 0.53; 0.8, 0.8)) | 0.3125 | 0.3125
SK | ((0.12, 0.2, 0.46, 0.74; 1, 1), (0.14, 0.23, 0.42, 0.67; 0.8, 0.8)) | 0.3563 | 0.3563
IT | ((0.16, 0.23, 0.42, 0.59; 1, 1), (0.18, 0.25, 0.4, 0.55; 0.8, 0.8)) | 0.3313 | 0.3313
Table 9. Subfactor weights with respect to alternatives.

Alternative | Fuzzy Weights | Crisp Weights | Normalized Weights
Ease of Operation:
Flat | ((2.09, 2.43, 3.06, 3.39; 1, 1), (2.14, 2.48, 2.97, 3.31; 0.8, 0.8)) | 2.5975 | 0.6270
VR | ((1.41, 1.49, 1.71, 1.91; 1, 1), (1.44, 1.5, 1.68, 1.86; 0.8, 0.8)) | 1.5455 | 0.3730
Enjoyability:
Flat | ((0.31, 0.38, 0.5, 0.59; 1, 1), (0.33, 0.38, 0.48, 0.58; 0.8, 0.8)) | 1.2785 | 0.2483
VR | ((0.26, 0.33, 0.56, 0.73; 1, 1), (0.27, 0.36, 0.53, 0.69; 0.8, 0.8)) | 3.8710 | 0.7517
Level of Engagement:
Flat | ((0.31, 0.38, 0.5, 0.59; 1, 1), (0.33, 0.38, 0.48, 0.58; 0.8, 0.8)) | 1.3203 | 0.2692
VR | ((0.26, 0.33, 0.56, 0.73; 1, 1), (0.27, 0.36, 0.53, 0.69; 0.8, 0.8)) | 3.5843 | 0.7308
Response Accuracy:
Flat | ((0.37, 0.41, 0.48, 0.52; 1, 1), (0.39, 0.42, 0.47, 0.51; 0.8, 0.8)) | 1.2438 | 0.2297
VR | ((0.34, 0.39, 0.5, 0.58; 1, 1), (0.36, 0.4, 0.48, 0.55; 0.8, 0.8)) | 4.1715 | 0.7703
Objective Usability:
Flat | ((0.31, 0.41, 0.61, 0.75; 1, 1), (0.33, 0.42, 0.58, 0.72; 0.8, 0.8)) | 1.5685 | 0.3809
VR | ((0.3, 0.4, 0.62, 0.78; 1, 1), (0.32, 0.41, 0.6, 0.75; 0.8, 0.8)) | 2.5498 | 0.6191
Perceived Usefulness:
Flat | ((0.29, 0.38, 0.58, 0.74; 1, 1), (0.3, 0.4, 0.55, 0.71; 0.8, 0.8)) | 1.2643 | 0.2424
VR | ((0.24, 0.35, 0.63, 0.88; 1, 1), (0.27, 0.38, 0.6, 0.82; 0.8, 0.8)) | 3.9505 | 0.7576
Ease of Recognition:
Flat | ((0.32, 0.4, 0.55, 0.67; 1, 1), (0.34, 0.41, 0.53, 0.64; 0.8, 0.8)) | 1.2810 | 0.2457
VR | ((0.28, 0.37, 0.6, 0.78; 1, 1), (0.3, 0.38, 0.57, 0.73; 0.8, 0.8)) | 3.9323 | 0.7543
Immersion:
Flat | ((0.33, 0.36, 0.45, 0.51; 1, 1), (0.33, 0.37, 0.44, 0.5; 0.8, 0.8)) | 1.2320 | 0.2255
VR | ((0.28, 0.33, 0.49, 0.6; 1, 1), (0.28, 0.35, 0.48, 0.58; 0.8, 0.8)) | 4.2315 | 0.7745
Realism:
Flat | ((0.3, 0.35, 0.45, 0.51; 1, 1), (0.31, 0.36, 0.43, 0.48; 0.8, 0.8)) | 1.2308 | 0.2198
VR | ((0.24, 0.32, 0.49, 0.64; 1, 1), (0.26, 0.33, 0.46, 0.61; 0.8, 0.8)) | 4.3695 | 0.7802
Weight:
Flat | ((0.24, 0.35, 0.59, 0.83; 1, 1), (0.27, 0.36, 0.56, 0.76; 0.8, 0.8)) | 2.8550 | 0.6443
VR | ((0.18, 0.31, 0.67, 1.06; 1, 1), (0.21, 0.33, 0.62, 0.95; 0.8, 0.8)) | 1.5765 | 0.3557
Sickness:
Flat | ((0.41, 0.45, 0.5, 0.55; 1, 1), (0.41, 0.45, 0.5, 0.54; 0.8, 0.8)) | 2.9223 | 0.6633
VR | ((0.38, 0.43, 0.53, 0.58; 1, 1), (0.4, 0.44, 0.51, 0.57; 0.8, 0.8)) | 1.4833 | 0.3367
Interaction:
Flat | ((0.29, 0.38, 0.58, 0.74; 1, 1), (0.3, 0.4, 0.55, 0.71; 0.8, 0.8)) | 1.2810 | 0.2457
VR | ((0.24, 0.35, 0.63, 0.88; 1, 1), (0.27, 0.38, 0.6, 0.82; 0.8, 0.8)) | 3.9323 | 0.7543
Table 10. Comparison of Type-1 and IT2F-AHP results.

Subfactor | Type-1 Fuzzy AHP Global Weight | Type-1 Ranking | IT2F-AHP Global Weight | IT2F-AHP Ranking
Realism | 0.1187 | 1 | 0.1181 | 1
Immersion | 0.1187 | 2 | 0.1181 | 2
Ease of Recognition | 0.0888 | 3 | 0.0888 | 3
Sickness | 0.0863 | 4 | 0.0863 | 4
Objective Usability | 0.0850 | 5 | 0.0860 | 5
Perceived Usefulness | 0.0837 | 6 | 0.0847 | 6
Interaction | 0.0788 | 7 | 0.0802 | 7
Weight | 0.0751 | 8 | 0.0757 | 8
Response Accuracy | 0.0692 | 9 | 0.0682 | 9
Ease of Operation | 0.0672 | 10 | 0.0664 | 10
Enjoyability | 0.0649 | 11 | 0.0639 | 11
Level of Engagement | 0.0636 | 12 | 0.0636 | 12
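The IT2F-AHP global weights in Table 10 follow directly from multiplying each factor's normalized weight (Table 7) by the corresponding subfactor's normalized local weight (Table 8). The short check below reproduces the IT2F-AHP column and ranking:

```python
factor_w = {"Performance": 0.2621, "Effectiveness": 0.2595,
            "Presence": 0.2362, "Device": 0.2422}        # Table 7
local_w = {                                              # Table 8
    "Performance": {"EO": 0.2534, "EN": 0.2437, "LE": 0.2426, "RA": 0.2603},
    "Effectiveness": {"OU": 0.3316, "PU": 0.3263, "ER": 0.3422},
    "Presence": {"IM": 0.5000, "RL": 0.5000},
    "Device": {"W": 0.3125, "SK": 0.3563, "IT": 0.3313},
}
global_w = {sub: round(factor_w[f] * lw, 4)
            for f, subs in local_w.items() for sub, lw in subs.items()}
for sub, w in sorted(global_w.items(), key=lambda kv: -kv[1]):
    print(sub, w)  # RL/IM 0.1181, ER 0.0888, SK 0.0863, ... as in Table 10
```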
Table 11. Sensitivity analysis case descriptions.

Case | Description
Case 1 | Current case (baseline weights)
Case 2 | Medium-high weight of Performance factors, low for other factors
Case 3 | High weight of Performance factors, low for other factors
Case 4 | Medium-high weight of Effectiveness factors, low for other factors
Case 5 | High weight of Effectiveness factors, low for other factors
Case 6 | Medium-high weight of Presence factors, low for other factors
Case 7 | High weight of Presence factors, low for other factors
Case 8 | Medium-high weight of Device factors, low for other factors
Case 9 | High weight of Device factors, low for other factors
Case 10 | High weight for Ease of Operation
Case 11 | High weight for Enjoyability
Case 12 | High weight for Level of Engagement
Case 13 | High weight for Response Accuracy
Case 14 | High weight for Objective Usability
Case 15 | High weight for Perceived Usefulness
Case 16 | High weight for Ease of Recognition
Case 17 | High weight for Immersion
Case 18 | High weight for Realism
Case 19 | High weight for Weight
Case 20 | High weight for Sickness
Case 21 | High weight for Interaction
Table 12. Sensitivity analysis case weightages.

Factor-level cases:
Case | Performance Factors | Effectiveness Factors | Presence Factors | Device Factors
Case 1 | 0.26 | 0.26 | 0.24 | 0.24
Case 2 | 0.51 | 0.18 | 0.15 | 0.16
Case 3 | 0.76 | 0.09 | 0.07 | 0.08
Case 4 | 0.18 | 0.51 | 0.14 | 0.15
Case 5 | 0.10 | 0.76 | 0.06 | 0.07
Case 6 | 0.18 | 0.18 | 0.49 | 0.16
Case 7 | 0.10 | 0.09 | 0.74 | 0.08
Case 8 | 0.18 | 0.18 | 0.15 | 0.49
Case 9 | 0.10 | 0.09 | 0.07 | 0.74

Performance subfactor cases:
Case | Ease of Operation | Enjoyability | Level of Engagement | Response Accuracy
Case 10 | 0.95 | 0.01 | 0.01 | 0.03
Case 11 | 0.02 | 0.94 | 0.01 | 0.03
Case 12 | 0.02 | 0.01 | 0.94 | 0.03
Case 13 | 0.02 | 0.01 | 0.01 | 0.96

Effectiveness subfactor cases:
Case | Objective Usability | Perceived Usefulness | Ease of Recognition
Case 14 | 0.93 | 0.03 | 0.04
Case 15 | 0.03 | 0.93 | 0.04
Case 16 | 0.03 | 0.03 | 0.94

Presence subfactor cases:
Case | Immersion | Realism
Case 17 | 0.9 | 0.1
Case 18 | 0.1 | 0.9

Device subfactor cases:
Case | Weight | Sickness | Interaction
Case 19 | 0.91 | 0.06 | 0.03
Case 20 | 0.01 | 0.96 | 0.03
Case 21 | 0.01 | 0.06 | 0.93
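Under our reading of the sensitivity procedure, each case in Table 12 re-scores the two display types by combining the case-specific factor weightages with the fixed subfactor weights (Table 8) and alternative weights (Table 9). The sketch below is our reconstruction, not the authors' code, and evaluates three representative factor-level cases; in this reconstruction the VR headset keeps the higher overall score even when Device factors dominate (Case 9), which can be compared against the case-wise rankings plotted in Figure 13b.

```python
local_w = {                                              # Table 8
    "Performance": {"EO": 0.2534, "EN": 0.2437, "LE": 0.2426, "RA": 0.2603},
    "Effectiveness": {"OU": 0.3316, "PU": 0.3263, "ER": 0.3422},
    "Presence": {"IM": 0.5000, "RL": 0.5000},
    "Device": {"W": 0.3125, "SK": 0.3563, "IT": 0.3313},
}
alt_w = {                                                # Table 9, normalized
    "EO": {"Flat": 0.6270, "VR": 0.3730}, "EN": {"Flat": 0.2483, "VR": 0.7517},
    "LE": {"Flat": 0.2692, "VR": 0.7308}, "RA": {"Flat": 0.2297, "VR": 0.7703},
    "OU": {"Flat": 0.3809, "VR": 0.6191}, "PU": {"Flat": 0.2424, "VR": 0.7576},
    "ER": {"Flat": 0.2457, "VR": 0.7543}, "IM": {"Flat": 0.2255, "VR": 0.7745},
    "RL": {"Flat": 0.2198, "VR": 0.7802}, "W":  {"Flat": 0.6443, "VR": 0.3557},
    "SK": {"Flat": 0.6633, "VR": 0.3367}, "IT": {"Flat": 0.2457, "VR": 0.7543},
}
cases = {                                                # Table 12 (subset)
    "Case 1": {"Performance": 0.26, "Effectiveness": 0.26, "Presence": 0.24, "Device": 0.24},
    "Case 3": {"Performance": 0.76, "Effectiveness": 0.09, "Presence": 0.07, "Device": 0.08},
    "Case 9": {"Performance": 0.10, "Effectiveness": 0.09, "Presence": 0.07, "Device": 0.74},
}
for case, fw in cases.items():
    score = {"Flat": 0.0, "VR": 0.0}
    for f, subs in local_w.items():
        for sub, lw in subs.items():
            for alt in score:
                score[alt] += fw[f] * lw * alt_w[sub][alt]
    print(case, {k: round(v, 3) for k, v in score.items()})
# e.g. Case 1 -> ~{'Flat': 0.343, 'VR': 0.657}
```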