Article

Image Processing and QR Code Application Method for Construction Safety Management

Intelligent Construction Automation Center, Kyungpook National University, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2021, 11(10), 4400; https://doi.org/10.3390/app11104400
Submission received: 24 March 2021 / Revised: 2 May 2021 / Accepted: 10 May 2021 / Published: 12 May 2021
(This article belongs to the Section Civil Engineering)

Abstract

Construction safety accidents occur due to a combination of factors. Even a minor accident that could have been treated as a simple injury can lead to a serious accident or death, depending on when and where it occurs. Methods that track worker behavior are currently being studied as a way to manage such construction safety accidents. However, applying these methods to a construction site requires various additional elements (e.g., sensors, transmitters, wearable equipment, and control systems) that must be installed and managed, and the cost of installing and managing them increases in proportion to the size of the site and the number of targets to be managed. In addition, introducing new equipment and new rules lowers the work efficiency of workers. This paper describes (1) an overview of the proposed system, (2) an image processing and QR code-based methodology for recognizing safety management targets, and (3) a technique for determining object locations by applying geometric transformation. Finally, the proposed methodology was tested to confirm its operation in the field, and the experimental results and conclusions are reported.

1. Introduction

In the construction industry, serious and fatal safety accidents remain common in developing and advanced countries alike [1]. Researchers and practitioners in the construction industry are devoting considerable effort to this problem, but there is still a long way to go to reach zero accidents and injuries [2]. Even though devices and technologies for construction safety management are constantly developing, the accident rate in the construction field remains high. According to data released by the U.S. Department of Labor, construction accounted for the highest share of fatal occupational injuries among the goods-producing sectors (i.e., natural resources and mining, construction, and manufacturing) in 2018, at 49.05% [3].
Stakeholders in construction projects have different levels of risk awareness and safety understanding, depending on their experiences and perceptions [4]. In addition, it is difficult to organize a systematic management process because the organizational composition changes continuously and new participants join according to the project schedule and stage [5,6]. This structural limitation of construction projects lowers the efficiency of safety learning among project participants. Continuous safety training is essential to ensure that participants who are newly organized for each project and schedule follow the project's standards and rules for safety management. Moreover, even after safety training is completed, a system should be established in which the safety manager can check in real time whether those being managed comply with safety regulations, in order to manage workers' unsafe attitudes and behaviors. The ideal form of direct safety management is to employ enough managers to observe all workers scattered throughout the construction site; the larger the site, the more safety managers must be employed. The biggest problems with this approach, however, are (1) the large economic expenditure unrelated to the project's production cost and (2) the difficulty of establishing that the managers, whatever their safety career, have sufficient knowledge and capability for safety management. In addition, recent studies have pointed out that existing construction safety management methods do not sufficiently reflect potential risk factors occurring in real time on site [7,8]. Researchers argue that the current approach should be changed to a bottom-up management method to solve these problems. Such a method can quickly transfer information generated on-site to headquarters and the person in charge of management, helping to start an appropriate response process and prepare alternatives [9,10].
In this paper, a methodology is presented that identifies workers individually, estimates their locations, and updates their historical data. The proposed method can be applied to any site where surveillance cameras are installed, and to all cameras simultaneously. Accordingly, it can be developed into a prototype of an automatic safety information management system for construction. The system can record workers' safety-related history without additional devices (e.g., sensors, transmitters, wearable equipment, or control systems) and can later be developed into a personalized safety management system. A case study analyzes the practical application of the methodology.

2. Literature Review

2.1. Construction Safety Management

In recent years, as high-resolution cameras have become available at construction sites at low prices, the resolution of the various images (i.e., photographs and videos) generated on site has improved greatly. Moreover, research is being conducted on applying artificial intelligence technology to multiple purposes in the construction field [11,12]. Computer vision technology has now developed to a level that allows various construction activities to be analyzed automatically [13,14,15,16]. Accordingly, research on automating construction safety management with computer vision is increasing proportionally. In traditional safety management, safety accidents were considered to be caused by individual carelessness, so personal responsibility was emphasized. However, even the most experienced workers cannot identify all of the safety risk situations that safety managers can [17]. Safety issues should therefore be treated separately from a worker's experience, construction skill, and workability, and cannot be defined as individual problems alone. As a result, corporate responsibility for construction safety management has been increasingly emphasized.
Safety management provides various advantages to construction projects (e.g., reduced work injuries, risk control, productivity improvement, and cost reduction) [18]. However, observing these factors directly takes a great deal of capital and time [19,20]. Fortunately, the low-cost prevalence of high-resolution cameras and advances in computer vision technology have shown the potential to solve these problems. To apply the technology to the right target, the computer must recognize what the observer sees in the same way. For this purpose, object recognition based on changes in color-related information (e.g., RGB values, hue, saturation) has been conducted for accurate detection of objects [21]. Furthermore, as the resolution of surveillance cameras deployed at construction sites has gradually improved, multiple workers can be recognized in one video [22], and the path of each worker can also be tracked [23,24,25]. A study on human–object interactions (HOIs) [26] and a skeleton-based study [27] have provided a basis for judging a worker's current stability by recognizing the work situation and pose. As the classification of objects became clearer, researchers attempted to analyze the movement and activity of objects from 2D images [28,29,30,31]. Moreover, by estimating the 3D coordinates of an object from multiple images, methods have been studied that can place an object's position on a three-dimensional plane without any sensor (e.g., distance or ultrasonic sensors) [32,33,34,35]. Current computer vision studies thus detect management targets in the field immediately and locate them based on the distance between the surveillance camera and the object. However, most studies classify objects only at a simple class level (e.g., worker, person) and do not attempt to recover identity information (e.g., a worker's name). If statistics enabled workers to recognize how often they are exposed to unsafe situations, their perception of safety could improve by itself [36]. Such an immediate feedback system could also be combined, at an economical cost, with a system dynamics approach [37,38] to determine whether the current governance structure is working properly.

2.2. You Only Look Once (YOLO) v3

In this study, image processing based on You Only Look Once (YOLO) was applied to implement object recognition. YOLO treats the bounding boxes and class probabilities in an image as a single regression problem, so it can infer an object's type and location by looking at the image only once, calculating the class probabilities for multiple bounding boxes through a single convolutional network [39]. In this study, the updated version, YOLO v3, was used (Figure 1).
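As a rough illustration of this single-pass design, the following sketch runs a pretrained YOLOv3 network through OpenCV's DNN module. The file names (yolov3.cfg, yolov3.weights), input size, and thresholds are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of single-pass YOLOv3 inference with OpenCV's DNN module.
# File names and thresholds are assumptions for illustration only.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect(image, conf_threshold=0.5):
    h, w = image.shape[:2]
    # YOLO sees the whole image once; one forward pass yields all candidate boxes.
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(layer_names):
        for row in output:  # row = [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id]) * float(row[4])
            if confidence > conf_threshold:
                cx, cy = row[0] * w, row[1] * h
                bw, bh = row[2] * w, row[3] * h
                boxes.append((class_id, confidence,
                              int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes
```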

2.3. Geometric Transformation

The video recorded by an on-site surveillance camera reflects distance differences caused by perspective: the distance an object moves in the video differs from the actual distance it moves. Thus, estimating an object's actual position from an image requires steps that offset the perspective and map the image onto a plane. In this study, the surveillance camera image is matched to the site drawing by applying a geometric transformation to the image. Geometric transformations can be classified into five types, as shown in Figure 2 [40]. In this study, distortion correction is used.

2.4. Quick Response (QR) Code

The larger the amount of information a QR code contains, the greater its number of pixels, and the recognizing device must have enough resolution to discriminate the pixel pattern. QR codes allow users to encode a variety of information, but the number of required pixels increases in proportion to the amount of information (Figure 3). Therefore, if high-capacity information is entered into a QR code, a high-resolution device is required to recognize it. Conversely, limiting the code to the minimal information required for object recognition reduces the number of pixels in the QR code and increases its recognition rate, even with low-resolution devices. In this work, the object's ID (five digits) is used as the minimum information needed to distinguish objects.
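As an illustration of reading such a minimal five-digit ID, the sketch below uses OpenCV's built-in QR detector; the crop handling and ID validation are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: reading a five-digit worker ID from a QR code image region
# with OpenCV's built-in detector. The validation rule is an assumption.
import cv2

detector = cv2.QRCodeDetector()

def read_worker_id(qr_crop):
    """Return the five-digit worker ID encoded in the QR code, or None."""
    text, points, _ = detector.detectAndDecode(qr_crop)
    if text and len(text) == 5 and text.isdigit():
        return text
    return None
```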

3. Image Processing-Based Worker Location Estimation System

3.1. Image Processing Learning Model

The system is based on YOLO v3; the system code and user interface were developed using Python, Anaconda, Ubuntu, and C++. In this study, people (i.e., person and worker) and things (QR code) are recognized as three classes. To build custom classes for the worker and QR code, the image data needed to train the weight model were collected from the web and the case-study site; YOLO already provides a pretrained weight model for the person class. Among the obtained data, images that were difficult to label (i.e., to assign a class and draw a bounding box) were excluded, and only usable images were selected for learning. Additional training data were created by reproducing the images with horizontal and vertical flips. Finally, 921 directly acquired images and 2763 reprocessed images, a total of 3684 images, were used in the learning process (Table 1). The images used for learning (i.e., worker and QR code) were composed of sub-labels as shown in Table 1.
In developing the learning model, 50% of the acquired images were used for training, 30% for validation, and 20% for testing. The training settings applied to the system are shown in Table 2.
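A minimal sketch of the flip-based augmentation and the 50/30/20 split described above, assuming OpenCV for image handling; bounding-box label handling and file organization are simplified.

```python
# Sketch of flip augmentation and the 50/30/20 dataset split.
# Note: in practice the bounding-box labels must be flipped to match
# the flipped images; that step is omitted here for brevity.
import cv2
import random

def augment_with_flips(image):
    """Reproduce one image as two extra samples: horizontal and vertical flips."""
    return [cv2.flip(image, 1),   # horizontal flip
            cv2.flip(image, 0)]   # vertical flip

def split_dataset(samples, seed=42):
    """Split samples into 50% training, 30% validation, 20% test."""
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train, n_val = int(n * 0.5), int(n * 0.3)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```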

3.2. System Algorithm

The image processing-based worker location estimation system is executed as follows: (1) image acquisition from a field surveillance camera, (2) object recognition using image processing, and (3) information matching in the system DB. The system algorithm is shown in Figure 4.

3.2.1. Matching Coordinates between On-Site Video and Drawing

When a region to be managed has been selected, a surveillance camera is installed. An ID is assigned to the camera, and its video is collected in real time. The design drawing is updated to show where the camera is located on the plane. Reference points are selected from the acquired image and the drawing, and the position reference coordinates are matched. After all coordinate matching is completed, the system checks whether a recognized object is projected onto the correct position on the plane (Figure 5).
As shown in Figure 5a, the mapping of the site topography is based on four coordinate points selected in the surveillance camera video. After the coordinates are set, the system detects objects (person, worker, and QR code). In Figure 5b, the coordinates of a recognized object and the rectangular reference coordinates are read as X, Y coordinates of the video, and the X, Y offsets between the recognized object and the four points are recorded.
The rectangle and four points shown in Figure 5b,c actually represent the same space and coordinates. Accordingly, a transformation between the four points and the object's position must be found. When the coordinate points in Figure 5b are expressed as input values (x, y), the corresponding points in Figure 5c can be expressed as result values (x′, y′), as in Equations (1)–(3).
\[
\begin{pmatrix} wx' \\ wy' \\ w \end{pmatrix}
= DRL \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
= \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \tag{1}
\]
\[
x' = \frac{p_{11}x + p_{12}y + p_{13}}{p_{31}x + p_{32}y + p_{33}} \tag{2}
\]
\[
y' = \frac{p_{21}x + p_{22}y + p_{23}}{p_{31}x + p_{32}y + p_{33}} \tag{3}
\]
The equations for the x- and y-coordinates are obtained from the movement relationship of each point. Therefore, the distortion rate matrix (DRL) is obtained by solving the eight equations given by the four pairs of input and output coordinates. Moreover, when the position of the object recognized in Figure 5b is (m, n), its position in Figure 5d is calculated from Equations (4) and (5) using the DRL.
\[
\begin{pmatrix} a \\ b \\ c \end{pmatrix} = DRL \begin{pmatrix} m \\ n \\ 1 \end{pmatrix} \tag{4}
\]
\[
(m', n') = (a/c,\ b/c) \tag{5}
\]
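Equations (1)–(5) describe a standard perspective (homography) transformation, so the DRL can be computed, for instance, with OpenCV, as in the sketch below. The reference coordinates shown are made-up placeholders, not the case-study values.

```python
# Sketch of Equations (1)-(5): the 3x3 DRL matrix is the perspective
# transform between four video points and four drawing points.
import cv2
import numpy as np

# Four reference points in the camera image (pixels) -- placeholder values.
video_pts = np.float32([[102, 540], [873, 522], [955, 980], [60, 1002]])
# Their matching positions on the site drawing (metres) -- placeholder values.
drawing_pts = np.float32([[150.0, 20.0], [210.0, 20.0],
                          [210.0, 60.0], [150.0, 60.0]])

# Solves the eight equations of (1)-(3) for p11..p33 (up to scale).
DRL = cv2.getPerspectiveTransform(video_pts, drawing_pts)

def to_drawing(m, n):
    """Map an image point (m, n) to drawing coordinates, Equations (4)-(5)."""
    a, b, c = DRL @ np.array([m, n, 1.0])
    return a / c, b / c
```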

3.2.2. Target Object and QR Code Recognition

An on-site surveillance camera films from one direction, and unless the space has a complex structure, a single device is installed per space. In this study, the worker's feet were chosen as the object's reference point. Accordingly, the center of the lower edge of the object's bounding box is assumed to be the floor position on which the worker is standing (Figure 6).
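A one-line helper expressing this assumption, for a detection box given in (x, y, width, height) form:

```python
def floor_reference(box):
    """Bottom-centre of a bounding box (x, y, w, h): the assumed foot position."""
    x, y, w, h = box
    return (x + w / 2.0, y + h)
```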
Equation (6) represents the anchor box coordinates of the object, and Equation (7) represents those of the QR code. Both form rectangles made up of four points.
\[
A = [(x_1, y_1),\ (x_1, y_2),\ (x_2, y_1),\ (x_2, y_2)] = [A,\ B,\ C,\ D] \tag{6}
\]
\[
QA = [(n_1, m_1),\ (n_1, m_2),\ (n_2, m_1),\ (n_2, m_2)] = [E,\ F,\ G,\ H] \tag{7}
\]
In this study, Equation (8) is applied to determine whether the QR code is included in the object.
\[
(\overrightarrow{AB} \times \overrightarrow{AE}) \cdot \mathbf{k}
= (B_x - A_x)(E_y - A_y) - (B_y - A_y)(E_x - A_x) \ge 0 \tag{8}
\]
Equation (8) calculates the cross product of vectors AB and AE (Figure 7). If the result is positive, point E is located to the left of line AB. Consequently, if the corner coordinates (E, F, G, H) of anchor box QA satisfy this inequality for every edge of anchor box A, the QR code is proven to be inside the object.
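A sketch of this containment test, assuming the rectangle corners are supplied in a consistent counter-clockwise traversal order:

```python
# Point-in-rectangle test via the z-component of the 2D cross product,
# applied to every edge, as in Equation (8).
def cross_z(a, b, p):
    """z-component of AB x AP, Equation (8)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rect_contains(rect, p):
    """rect = [A, B, C, D] corners in counter-clockwise order; p = (x, y)."""
    return all(cross_z(rect[i], rect[(i + 1) % 4], p) >= 0 for i in range(4))

def qr_inside_worker(worker_rect, qr_rect):
    """The QR anchor box [E, F, G, H] is inside the worker box if every corner is."""
    return all(rect_contains(worker_rect, corner) for corner in qr_rect)
```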

3.2.3. Construction Management DB

The system recognizes three classes: person, worker, and QR code. At the construction site, an object containing a QR code is a person included in the system DB, that is, one of the project's personnel. If no QR code is recognized, there are two cases: the target is wearing work clothes, or it is not. If the target is wearing work clothes, either the QR code was not recognized or the target is not included in the DB. Otherwise, the target is regarded as neither a worker nor project personnel.
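The decision logic might be sketched as follows; the detection object and DB interface are placeholders, not the authors' schema.

```python
# Hedged sketch of the three-way classification described above.
# `detection.class_name` and `db.has_worker` are illustrative placeholders.
def classify(detection, qr_id, db):
    if qr_id is not None and db.has_worker(qr_id):
        return "registered worker"           # QR read and found in project DB
    if detection.class_name == "worker":     # work clothes, but no readable QR
        return "worker: QR unread or not in DB"
    return "person: not project personnel"   # no work clothes, no QR
```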

4. Case Study

A site was recruited to analyze the practical application of the proposed methodology. The site is an apartment complex construction project with an area of 65,245 m2, on which the structural work had been completed. The site drawing provided by the manager was in CAD file format (.dwg), and a drawing in which the site boundary was easy to understand was selected and used. To calibrate the drawing, the actual positions of four reference sets of X and Y coordinates were entered; as a result, the coordinate positions of all points could be calculated (Figure 8). The area used for the case study is marked in red in Figure 8.
In the drawing, the parts entered as reference coordinates are the four red dots shown in Figure 9a, where the location and angle of view of the surveillance camera are also displayed. The image captured by the actual surveillance camera is shown in Figure 9b, and the four reference coordinates set in the video were labeled T1–T4 (Figure 9c). The QR code was painted directly onto the clothes using a stencil technique to prevent peeling during work (Figure 9d). As a result of object recognition, of the four workers in the scene, all were recognized except the worker located at the front apartment entrance; thus, a total of three were recognized as workers. Even though the QR code was distorted by wrinkled clothes, the system read it correctly and printed the worker's identification number, "00121" (Figure 9e). Finally, worker "00121" was estimated to be located at (197.4, 31.8) in the drawing space set as the X-axis (0–384.6 m) and Y-axis (0–178.6 m) (Figure 9f).

5. Conclusions

This study developed a methodology for automating construction safety management using surveillance cameras installed on site. Three methods were presented: (1) classification and recognition of objects; (2) allocation of unique identification information using QR codes; and (3) extraction of an object's coordinates projected onto a 2D plane. During recruitment of the experiment site, the research team was able to conduct the test only on the condition that it did not infringe on the work environment of the on-site staff; accordingly, a limitation is that workers and managers were not surveyed. The authors discussed the case study results with the safety managers. The managers commented that the methodology is easy to apply on-site because it requires neither additional equipment installation nor special instructions to workers. They also suggested that immediate application could positively affect major management targets (e.g., workers with less than 1 year of experience and workers in hazardous areas).
The images used for training were taken under sufficient brightness. Because such images make it easy to distinguish objects from the background, neural networks trained on them can record high object identification rates for images with vivid colors. However, most construction work is done outdoors, where the amount of sunlight fluctuates with the position of clouds and the sun, and in images shot in the shade it becomes very difficult to distinguish the object from the background. This situation increases the uncertainty and inaccuracy of object recognition. Edirisinghe et al. [41] proposed a QR code-based approach through which workers could easily access and understand information on work precautions, with meaningful results that enabled workers to prepare themselves against injuries. Building on these results, feeding personalized statistical data back to each worker, including physical fatigue and injuries caused by improper posture, could further improve the accident prevention effect.
Accordingly, future research should improve object recognition accuracy by applying fuzzy theory [42,43] and obtain personal safety information by applying improper-working-posture recognition [44,45].
This study provides the basis for a methodology that can automatically collect each worker's individual safety history. If the problems described above are solved and the system's reliability is sufficiently secured, even the simple process of data updating [46] by a manager using a smart device is expected to become unnecessary.

Author Contributions

Conceptualization, Y.-J.P. and C.-Y.Y.; methodology, J.-S.K., C.-Y.Y. and Y.-J.P.; software, J.-S.K.; validation, C.-Y.Y., J.-S.K. and Y.-J.P.; formal analysis, C.-Y.Y. and Y.-J.P.; investigation, J.-S.K., C.-Y.Y. and Y.-J.P.; resources, Y.-J.P.; data curation, J.-S.K.; writing—original draft preparation, J.-S.K., C.-Y.Y. and Y.-J.P.; writing—review and editing, Y.-J.P.; visualization, J.-S.K.; supervision, Y.-J.P.; project administration, Y.-J.P.; funding acquisition, Y.-J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Kyungpook National University Development Project Research Fund, 2018.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Guo, H.; Yu, Y.; Skitmore, M. Visualization technology-based construction safety management: A review. Autom. Constr. 2017, 73, 135–144.
2. Zhou, Z.; Goh, Y.M.; Li, Q. Overview and analysis of safety management studies in the construction industry. Saf. Sci. 2015, 72, 337–350.
3. Bureau of Labor Statistics, U.S. Department of Labor. Table A-9. Fatal Occupational Injuries by Event or Exposure for All Fatalities and Major Private Industry Sector, All United States, 2009. Available online: https://www.bls.gov/iif/oshwc/cfoi/cftb0330.htm; http://bls.gov/iif/oshwc/cfoi/cftb0249.pdf (accessed on 18 March 2021).
4. Zhao, D.; McCoy, A.P.; Kleiner, B.M.; Mills, T.H.; Lingard, H. Stakeholder perceptions of risk in construction. Saf. Sci. 2016, 82, 111–119.
5. Li, H.; Lu, M.; Hsu, S.C.; Gray, M.; Huang, T. Proactive behavior-based safety management for construction safety improvement. Saf. Sci. 2015, 75, 107–117.
6. Huang, Y.H.; Yang, T.R. Exploring on-site safety knowledge transfer in the construction industry. Sustainability 2019, 11, 6426.
7. Asadzadeh, A.; Arashpour, M.; Li, H.; Ngo, T.; Bab-Hadiashar, A.; Rashidi, A. Sensor-based safety management. Autom. Constr. 2020, 113, 103128.
8. Chen, K. Enhancing construction safety management through edge computing: Framework and scenarios. J. Inf. Technol. Constr. 2020, 25, 438–451.
9. Hinze, J. The need for academia to address construction site safety through design. In Proceedings of the Construction Congress VI: Building Together for a Better Tomorrow in an Increasingly Complex World, Orlando, FL, USA, 20–22 February 2000; American Society of Civil Engineers: Reston, VA, USA, 2000; Volume 278, pp. 1189–1195.
10. Huang, X.; Hinze, J. Owner’s Role in Construction Safety: Guidance Model. J. Constr. Eng. Manag. 2006, 132, 174–181.
11. Akinosho, T.D.; Oyedele, L.O.; Bilal, M.; Ajayi, A.O.; Delgado, M.D.; Akinade, O.O.; Ahmed, A.A. Deep learning in the construction industry: A review of present status and future innovations. J. Build. Eng. 2020, 32, 101827.
12. Zhong, B.; Wu, H.; Ding, L.; Love, P.E.D.; Li, H.; Luo, H.; Jiao, L. Mapping computer vision research in construction: Developments, knowledge gaps and implications for research. Autom. Constr. 2019, 107, 102919.
13. Yang, J.; Shi, Z.; Wu, Z. Vision-based action recognition of construction workers using dense trajectories. Adv. Eng. Inform. 2016, 30, 327–336.
14. Peddi, A.; Huan, L.; Bai, Y.; Kim, S. Development of human pose analyzing algorithms for the determination of construction productivity in real-time. In Proceedings of the 2009 Construction Research Congress: Building a Sustainable Future, Seattle, WA, USA, 5–7 April 2009; pp. 11–20.
15. Park, M.W.; Brilakis, I. Construction worker detection in video frames for initializing vision trackers. Autom. Constr. 2012, 28, 15–25.
16. Park, M.W.; Brilakis, I. Continuous localization of construction workers via integration of detection and tracking. Autom. Constr. 2016, 72, 129–142.
17. Perlman, A.; Sacks, R.; Barak, R. Hazard recognition and risk perception in construction. Saf. Sci. 2014, 64, 13–21.
18. Jazayeri, E.; Dadi, G.B. Construction Safety Management Systems and Methods of Safety Performance Measurement: A Review. J. Saf. Eng. 2017, 2017, 15–28.
19. Seo, J.; Han, S.; Lee, S.; Kim, H. Computer vision techniques for construction safety and health monitoring. Adv. Eng. Inform. 2015, 29, 239–251.
20. Khosrowpour, A.; Niebles, J.C.; Golparvar-Fard, M. Vision-based workface assessment using depth images for activity analysis of interior construction operations. Autom. Constr. 2014, 48, 74–87.
21. Escorcia, V.; Dávila, M.A.; Golparvar-Fard, M.; Niebles, J.C. Automated vision-based recognition of construction worker actions for building interior construction operations using RGBD cameras. In Proceedings of the Construction Research Congress 2012: Construction Challenges in a Flat World, West Lafayette, IN, USA, 21–23 May 2012; pp. 879–888.
22. Yang, J.; Arif, O.; Vela, P.A.; Teizer, J.; Shi, Z. Tracking multiple workers on construction sites using video cameras. Adv. Eng. Inform. 2010, 24, 428–434.
23. Teizer, J.; Vela, P.A. Personnel tracking on construction sites using video cameras. Adv. Eng. Inform. 2009, 23, 452–462.
24. Yang, J.; Vela, P.A.; Teizer, J.; Shi, Z.K. Vision-based crane tracking for understanding construction activity. Congr. Comput. Civ. Eng. Proc. 2011, 28, 258–265.
25. Yuan, C.; Li, S.; Cai, H. Vision-Based Excavator Detection and Tracking Using Hybrid Kinematic Shapes and Key Nodes. J. Comput. Civ. Eng. 2017, 31, 04016038.
26. Tang, S.; Roberts, D.; Golparvar-Fard, M. Human-object interaction recognition for automatic construction site safety inspection. Autom. Constr. 2020, 120, 103356.
27. Guo, H.; Yu, Y.; Ding, Q.; Skitmore, M. Image-and-Skeleton-Based Parameterized Approach to Real-Time Identification of Construction Workers’ Unsafe Behaviors. J. Constr. Eng. Manag. 2018, 144, 04018042.
28. Han, S.; Lee, S. A vision-based motion capture and recognition framework for behavior-based safety management. Autom. Constr. 2013, 35, 131–141.
29. Luo, X.; Li, H.; Cao, D.; Yu, Y.; Yang, X.; Huang, T. Towards efficient and objective work sampling: Recognizing workers’ activities in site surveillance videos with two-stream convolutional networks. Autom. Constr. 2018, 94, 360–370.
30. Bügler, M.; Ogunmakin, G.; Teizer, J.; Vela, P.A.; Borrmann, A. A comprehensive methodology for vision-based progress and activity estimation of excavation processes for productivity assessment. In Proceedings of the EG-ICE 2014, 21st International Workshop on Intelligent Computing in Engineering, Cardiff, UK, 16–18 July 2014.
31. Liu, M.; Han, S.; Lee, S. Tracking-based 3D human skeleton extraction from stereo video camera toward an on-site safety and ergonomic analysis. Constr. Innov. 2016, 16, 348–367.
32. Lee, Y.J.; Park, M.W. 3D tracking of multiple onsite workers based on stereo vision. Autom. Constr. 2019, 98, 146–159.
33. Konstantinou, E.; Brilakis, I. Matching Construction Workers across Views for Automated 3D Vision Tracking On-Site. J. Constr. Eng. Manag. 2018, 144, 04018061.
34. Soltani, M.M.; Zhu, Z.; Hammad, A. Framework for Location Data Fusion and Pose Estimation of Excavators Using Stereo Vision. J. Comput. Civ. Eng. 2018, 32, 04018045.
35. Brilakis, I.; Park, M.W.; Jog, G. Automated vision tracking of project related entities. Adv. Eng. Inform. 2011, 25, 713–724.
36. Hallowell, M.R.; Teizer, J.; Blaney, W. Application of sensing technology to safety management. In Proceedings of the Construction Research Congress 2010: Innovation for Reshaping Construction Practice, Banff, AB, Canada, 8–10 May 2010; pp. 31–40.
37. Di Nardo, M.; Madonna, M.; Santillo, L.C. Safety management system: A system dynamics approach to manage risks in a process plant. Int. Rev. Model. Simul. 2016, 9.
38. Di Nardo, M.; Madonna, M.; Murino, T.; Castagna, F. Modelling a Safety Management System Using System Dynamics at the Bhopal Incident. Appl. Sci. 2020, 10, 903.
39. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
40. Luo, R. Optical proximity correction using a multilayer perceptron neural network. J. Opt. 2013, 15, 75708.
41. Edirisinghe, R.; Lingard, H.; Broadhurst, D. Exploring the potential for using video to communicate safety information to construction workers: Case studies of organisational use. In Proceedings of the 31st Annual ARCOM Conference, Lincoln, UK, 7–9 September 2015; Volume 34, pp. 519–528.
42. Versaci, M.; Morabito, F.C.; Angiulli, G. Adaptive Image Contrast Enhancement by Computing Distances into a 4-Dimensional Fuzzy Unit Hypercube. IEEE Access 2017, 5, 26922–26931.
43. Malarvizhi, N.; Selvarani, P.; Raj, P. Adaptive fuzzy genetic algorithm for multi biometric authentication. Multimed. Tools Appl. 2020, 79, 9131–9144.
44. Ray, S.J.; Teizer, J. Real-time posture analysis of construction workers for ergonomics training. In Proceedings of the Construction Research Congress 2012, West Lafayette, IN, USA, 21–23 May 2012; Volume 26, pp. 1001–1010.
45. Chen, J.; Qiu, J.; Ahn, C. Construction worker’s awkward posture recognition through supervised motion tensor decomposition. Autom. Constr. 2017, 77, 67–81.
46. Shohet, I.M.; Wei, H.-H.; Skibniewski, M.J.; Tak, B.; Revivi, M. Integrated Communication, Control, and Command of Construction Safety and Quality. J. Constr. Eng. Manag. 2019, 145, 04019051.
Figure 1. Conception of YOLO v3 network architecture.
Figure 2. Examples of geometric transformation.
Figure 3. QR code pixel count changing according to the amount of information contained.
Figure 4. Worker location estimation system algorithm.
Figure 5. Conception of matching process between image and drawing: (a) setting 4 points to be used as the location standard from the on-site CCTV video; (b) estimating the relative position of the recognized object from the polygon created by the set 4 points; (c) setting the positions corresponding to the 4 points in the drawing and calculating the polygon’s distortion rate; (d) estimating the position on the drawing of an object recognized by CCTV using the calculated results.
Figure 6. The reference position of the object recognized as an anchor box.
Figure 7. The outer product of the two anchor boxes to verify the worker’s QR code (i.e., ID).
Figure 8. Inputting of XY coordinates of reference drawing.
Figure 9. Applying process and results of case study: (a) location of the camera installed on-site and 4 points to be used as the location standard; (b) on-site video file; (c) polygon plane created from 4 points (T1~T4) set in the image; (d) QR code drawn on clothes with a stencil technique to identify each worker’s ID; (e) identification result of each object and reading result of QR code information; (f) estimating the location of the operator “00121” and marking the location on the map.
Table 1. Class and sub-label composition for object classification.

Class | Training Samples | Sub-Labels
Worker | 3172 | Standing; Bending; Sitting
Person | referred from YOLO v3 data | –
QR code | 512 | Original; Stretched; Distorted
Table 2. Configuration of the training inputs.

Parameter | Value
Batch | 64
Subdivisions | 4
Width, Height | 448, 448
Channels | 3
Decay | 0.9
Angle | 0.005
Saturation | 3
Exposure | 1.5
Hue | 1.5
Learning rate | 0.0005
Burn in | 1000
Max batches | 40,000
Policy | steps
Steps | 200; 400; 600; 20,000; 30,000
Scales | 2.5; 2; 2; 0.1; 0.1


