Towards self-recovering construction schedules: a new method for periodically updating project plans and optimizing recovery actions

ABSTRACT It is common for a construction schedule to deviate from its originally planned baseline, as uncertainty is inherent in all construction activities. Accordingly, planners are required to perform periodic schedule updates that learn from retrospective progress to schedule the remaining activities more accurately and to draw up optimal recovery plans. This research proposes a method that uses a neural network regression model to forecast upcoming productivity rates from retrospective progress and accordingly updates the schedule at regular time intervals, with the resource adjustments required to meet the planned project end date at optimal cost. The method was tested on brickwork activities at a residential complex construction project in the UAE, using retrospective progress data of 1487 working days for 132 masons, and was found to be 98% accurate in predicting labor productivity, which was then used as the basis for drawing schedule recovery plans according to the proposed framework. In essence, this research provides a platform toward an automated self-recovering scheduling system that serves construction managers in proactively preventing potential schedule deficiencies.


Introduction
Construction project scheduling systems are currently based on an initial baseline plan and a limited number of recovery plans, which are made when the project's baseline plan becomes unrealistic and far from the real progress at the site. A recovery plan is a new plan for the project that starts from the current actual status of the project and applies one of the suggested strategies to recover the delay and finish the project on time with the least possible additional cost (Guida and Sacco 2019). Building a recovery plan goes through a long process: it is built by the contractor's planning engineer, revised by the consultant, and then approved by the client as the new project plan. Unfortunately, recovery plans usually entail drastic, expensive actions: they either add resources to accomplish the remaining activities in a significantly more compressed duration than in the baseline plan, or divide the remaining work among more subcontractors working in parallel, which decreases the profit share (Martens and Vanhoucke 2019).
One of the most widespread recovery plan strategies is activity crashing. It involves loading the critical activities with more resources up to the maximum applicable resource load, which is usually dictated by limited working space; once that limit is reached, resources are moved or added to the next activity, and so on (Hady, Al-Kindi, and Mahmoud 2020). Another strategy is fast tracking, which is the execution of activities in parallel. This strategy requires assigning multiple subcontractors to the same type of activities in the project. It demands advanced management skills and considerable effort from the management team, owing to the risk of losing the sequence of activities, so special attention must be given to activity interdependencies. Moreover, additional cost falls on the main contractor due to commencing new contracts with subcontractors, who receive a proportion of the originally assigned overhead (Ballesteros-Pérez, Elamrousy, and González-Cruz 2019).
The above-mentioned strategies are highly expensive with respect to cost and labor. For example, fast tracking requires the manager to strike new deals under compressed conditions, which forces costly decisions in order to divide the work among multiple subcontractors and start the work in parallel. Activity crashing, on the other hand, adds to the overall project cost and might saturate site conditions, where the supervision staff may become insufficient to maintain good quality.
The objective of this research is to propose a new periodically modified scheduling system based on proactive data reported from the site on a regular basis (e.g., weekly, daily) which includes the actual productivity of the crew and the current delay of the project. After each report, a neural network predicts the future productivity rates, which provides dynamism for the system. Then, the remaining activities are modified according to the predicted productivity rates. If the project is found to be behind schedule in the modified schedule, then the system enters into an optimization algorithm. In this algorithm, different recovery strategies, namely, activity crashing, fast tracking, and do nothing, are separately considered, and the best alternative is chosen.
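The update-and-recover cycle described above can be sketched in Python. All function and field names here (predict_productivity, reschedule, cost_of, etc.) are illustrative placeholders for the components the text describes, not part of any published implementation:

```python
def periodic_update(baseline_finish, progress_reports, predict_productivity,
                    reschedule, cost_of):
    """Yield (period, recovery action) for every report that shows a delay."""
    for report in progress_reports:
        # 1. Forecast upcoming productivity from retrospective progress.
        forecast = predict_productivity(report)
        # 2. Recompute the remaining durations with the forecast rates.
        expected_finish = reschedule(report, forecast)
        # 3. If behind schedule, choose the cheapest recovery alternative.
        if expected_finish > baseline_finish:
            options = ("activity_crashing", "fast_tracking", "do_nothing")
            yield report["period"], min(
                options, key=lambda o: cost_of(o, report, forecast))
```

The generator form mirrors the periodic nature of the system: each progress report triggers at most one mini recovery decision.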
The major task of the system is the automated process of continuously changing the planned duration, resource load, and/or the relationships of activities based on the results of the current and historical data of the project, which represents a starting point toward a bigger research that focuses on forming lean construction practice. This research also aims to change the role of the construction program in the project from monitoring to planning, which currently depends on the manual engineer's modifications. The periodic mini recovery actions do not only minimize the cost of the delay recovery but also provide active contribution in the project time and cost management.

Labor productivity factors and productivity prediction models
A labor productivity prediction model based on a multilayer feedforward neural network (FFNN) with a back-propagation learning algorithm was developed to predict the productivity of concrete work on two power plant construction projects in Iran. The two major techniques used to prevent overfitting and provide generalized networks, "early stopping" and "Bayesian regularization", were compared, and it was concluded that the generalization performance of Bayesian regularization is better than that of early stopping. A sensitivity analysis of factors affecting productivity found that labor qualification, decision-making quality, motivation, site layout organization, and planning are the most significant factors (Heravi and Eslamdoost 2015). A comparison between FFNN and radial basis neural network (RBNN) models was conducted while analyzing the factors affecting the productivity of masonry crews. The models were evaluated using both the mean absolute percentage error (MAPE) and the correlation coefficient (R). Based on the MAPE evaluation of model performance, RBNN was found to be more effective than FFNN; although both slightly overestimate productivity, both techniques were successful in predicting construction productivity (Gerek et al. 2015).
Based on a regression model, the significant factors affecting labor productivity of reinforced concrete construction were found to be those related to workers (such as experience, health, motivation), work characteristics (such as work method, prefabrication, accessibility), work technologies (such as information technology, work continuity, and rework), and work management (which includes safety, planning, management system, and manager capabilities) (Jang et al. 2011).
Umit Dikmen and Sonmez (2011) developed an Artificial Neural Network (ANN) learning model to predict the man-hours needed for reinforced concrete formwork. They tested it in two case studies, and the results were considerably close to the actual field data, despite the model having a limited number of parameters, including building size, weather conditions, and working heights. According to an on-site questionnaire conducted with laborers and mid-level project personnel at a Chilean construction company, equipment, materials, and rework are the most significant factors affecting labor productivity (Rivas et al. 2011). A marble flooring productivity estimation model using an ANN, based on 10 factors affecting productivity and on data from different types of projects, was also developed. Its performance was measured using four statistical measures, indicating a highly accurate model, which was attained using only one ANN hidden layer of one node in addition to the input and output layers. The most influential factors affecting productivity were found to be age, experience, and the number of assisting laborers. A statistical analysis of the same marble flooring dataset using multivariable linear regression (MLR) was carried out to compare the MLR and ANN techniques; both produced significantly high accuracy measures, with slightly higher values for the linear regression model, which fits these specific marble flooring data better (Al-Zwainy, Rasheed, and Ibraheem 2012).
Project managers need to identify and evaluate factors affecting labor construction productivity to increase productivity in construction and thus avoid cost and schedule overruns. Considerable costs can be saved if productivity is improved because the same work can be done with less manpower, thus reducing overall labor cost. Productivity factors are classified into various groups. Different authors suggest different ways to categorize productivity factors as seen in the literature review. Hence, considering all the categories, productivity factors can be broadly classified as shown in Figure 1.
The productivity factors as mentioned contribute to the productivity rates of each labor working in a project. Most of the time, the work will not progress according to the schedule due to various reasons. In such cases, appropriate recovery plans can be adopted to overcome the delay and ultimately meet the actual schedule of the project.

Project crashing techniques
The major techniques adopted to overcome the delay and to restructure the future activities to comply with the actual schedules are "activity crashing" and "fast tracking." After it is found that the progress update is behind schedule, the contractor can either increase the manpower (activity crashing), do activities in parallel (fast tracking), or can even "do nothing" in response to the delay (Ballesteros-Pérez, Elamrousy, and González-Cruz 2019). The cost of each recovery option varies according to the project.
Activity crashing starts by filtering the longest path, after which the model computes the compressed duration of the first activity required to recover the delay, along with the required increase in crew size. The constraints on the maximum crew size are workspace limitations and activity-specific conditions. When the workspace limit on applicable manpower is exceeded, overtime working is a considerable option, if applicable. The total cost of activity crashing is calculated by summing the costs of the added manpower and the overtime working costs across all of the modified activities (Harun et al. 2020).
Fast tracking is the next option, in which activities are overlapped in a way that decreases the overall duration without affecting the sequence of work. The process starts by filtering the activities on the critical path; for the first activity, the system checks whether overlapping is applicable and to what extent, and then compares the needed overlap with the applicable range. If the needed overlap exceeds the maximum, the system enters a loop in which it keeps overlapping subsequent activities until the delay is recovered. The system then reschedules the progress update and checks for other critical paths to be modified. Finally, the cost of fast tracking is computed by summing the costs of assigning subcontractors or adding resources (Sindhu et al. 2018).
"Do nothing" is a recovery option adopted when the recovery cost exceeds the delay cost. The delay cost includes the delay penalty, salary and resource costs, the losses incurred when the start of the next activity is delayed, administration costs, and so on (Lock 2017).
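The cost comparison among the three recovery options can be illustrated with a minimal sketch. The cost formulas below are hypothetical simplifications of the components listed above, for demonstration only:

```python
# Illustrative cost models for the three recovery options described above.
# All formulas and figures are hypothetical simplifications.

def crashing_cost(extra_workers, daily_wage, days, overtime_cost=0.0):
    """Cost of adding manpower (plus optional overtime) on crashed activities."""
    return extra_workers * daily_wage * days + overtime_cost

def fast_tracking_cost(subcontract_fee, overhead_share):
    """Cost of commissioning a subcontractor who takes part of the overhead."""
    return subcontract_fee + overhead_share

def do_nothing_cost(delay_days, daily_penalty, daily_indirect):
    """Delay penalty plus indirect costs (salaries, admin) over the overrun."""
    return delay_days * (daily_penalty + daily_indirect)

def cheapest_option(costs):
    """Return the name of the lowest-cost recovery option."""
    return min(costs, key=costs.get)

# Example: crash 2 workers for 5 days vs. a subcontract vs. a 3-day overrun.
costs = {
    "activity_crashing": crashing_cost(2, 100, 5),   # 1000
    "fast_tracking": fast_tracking_cost(2000, 500),  # 2500
    "do_nothing": do_nothing_cost(3, 400, 200),      # 1800
}
```

With these hypothetical numbers, activity crashing is the cheapest option, consistent with the comparison logic described in the text.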

Project crashing models
Rescheduling the activities of a delayed project to meet the originally planned completion date is a serious task for project managers. Many past efforts have developed mathematical models and heuristic and metaheuristic algorithms to address this challenge. A linear programming model was developed in which the cost, time, and quality of the project activities were taken into consideration for project crashing (Babu and Suresh 1996). Fuzzy theory and genetic algorithms were applied to develop models based on activity durations (Leu, Chen, and Yang 2001). Similarly, many other methods, including but not limited to nonlinear models, stochastic models, two-stage robust scheduling algorithms, genetic and simulated annealing algorithms, multiobjective genetic algorithms, and deterministic particle swarm optimization algorithms, have been developed for time-cost-quality tradeoff problems (Feylizadeh et al. 2018).
Different models have taken different parameters into consideration for dealing with project delay, including cost, time, etc. This study focuses on labor productivity to monitor the progress of the project. Labor productivity depends on several factors, as seen in the initial portion of this section; hence, using productivity to decide on recovery actions can be beneficial, since it reflects many other factors too. Neural network techniques have previously been applied to predict labor productivity, as seen above. This study, however, adopts a neural network algorithm to develop a self-recovering schedule that updates the original schedule based on the actual productivity rates of the labors at the site. Thus, this paper represents a proactive rather than a reactive approach toward project delays. In addition, this study selected brickwork to develop the model because it is one of the basic activities adopted in all types of construction projects.

Data collection and preprocessing
In order to measure the productivity of the labors, an activity needs to be selected. In this study, block work is selected as the activity to study productivity. The data associated with the block work in the construction of a villa complex project in Dubai, UAE, are utilized in this study. This activity was selected because it was comparatively easy to monitor this activity and data related to this activity were available. Labor details including their ID number, labor type, blocks per day (productivity of the labor), and the skill level of the labor and the supervisor are noted. These types of data are generally collected by supervisors in the industry. For model development, labor skill, supervisory skill, temperature, and humidity were selected as inputs since these factors were easy to monitor and in addition influenced productivity rate to a great extent as seen in the literature review. Productivity of the labor is measured by considering the number of blocks prepared by the labor per day. Labor skill is assumed to be the average productivity of the labor during the duration of the data collection. Supervisor skill corresponds to the average productivity of all labors supervised by a foreman. In addition, environmental conditions including temperature and humidity are included from the daily weather history. Skill levels, blocks per day, temperature, and humidity are normalized to prepare the database for model development. An example of the data collected is provided in Table 1.
Data corresponding to 3 weeks were monitored. Data from the first 2 weeks were used to train the model to predict the forthcoming productivity of the labors, and the data from week 3 were utilized to validate the results. One hundred and thirty-two masons are considered in this study. The difference in the actual value and the predicted value of the productivity rate was calculated to estimate the error in the model.
Each labor is under the supervision of a foreman who supervises the labor in his work. More than one labor can be under the supervision of a single foreman. Likewise, there are 17 supervisors in this project. Table 2 lists the number of labors under the supervision of each supervisor.

Neural network model
This model uses a machine learning algorithm composed of a multilayer FFNN with a back-propagation learning algorithm to predict productivity rates of the remaining activities based on a generative database of previous productivities for each type of work at the site, so that the planned durations of the remaining activities are modified based on the proactive productivity data. The output of the neural network is the future estimated productivity rate, while the inputs are the training set (the generative database of previous productivity records) and model-specific features affecting productivity. The durations of the remaining activities are then recomputed based on the predicted productivity rates, which updates the project schedule and the expected finish date to more realistic and representative values based on the actual progress at the site.
The basic representation of a neural network model is given in Figure 2, having three layers: an input layer, a hidden layer, and an output layer. In Figure 2, "L" represents the layer number, "S" the number of units in the layer, "a" the activations of the units, and "θ" the weights assigned in the network. The learning process starts with initial values of the weights θ_ij^(L), which are modified at the end of each iteration, and goes through the following four major steps (Shaik et al. 2020).
• Forward propagation: the activation of unit "j" in layer "L", a_j^(L), is calculated by running the network forward with the input data to obtain the network output. The input data are the values of the features x_i. The activations are computed using the sigmoid activation function σ(z) = 1/(1 + e^(−z)), as in equation (1).
• Error calculation: for each iteration, given a set of training data points (actual outputs) y_j and the corresponding nodes of the output layer, a_j^(3), the error E is written as in equation (2), E = (1/2) Σ_j (a_j^(3) − y_j)².
• Back-propagation: the rate of change of the error with respect to each weight in the network, ∂E/∂θ_ij^(L), is calculated. This value is used to perform gradient descent with respect to the weights, and θ_ij^(L) is adjusted by propagating the error backward through the network.
• Weight and bias update: network training starts by choosing initial values for the weights and biases, which are then modified in each iteration to reduce the error, as represented by equations (3) and (4), of the form θ_ij^(L) ← θ_ij^(L) − α ∂E/∂θ_ij^(L), where α is the learning rate (usually set to 0.1) so that small steps are taken toward the minimum error.
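The four training steps above can be illustrated with a minimal NumPy implementation of a single-hidden-layer network. The layer sizes, weight initialization, and toy data here are illustrative only and do not reproduce the paper's model:

```python
import numpy as np

# Minimal network illustrating forward propagation, error calculation,
# back-propagation, and the weight update with learning rate alpha.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyFFNN:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))   # weights theta^(1)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))  # weights theta^(2)

    def forward(self, X):
        self.a1 = sigmoid(X @ self.W1)        # hidden-layer activations
        self.a2 = sigmoid(self.a1 @ self.W2)  # output-layer activations
        return self.a2

    def train_step(self, X, y, alpha=0.1):
        out = self.forward(X)
        err = out - y                         # dE/da for E = 0.5*sum((a-y)^2)
        # Back-propagate through the sigmoid derivative a*(1-a).
        d2 = err * out * (1 - out)
        d1 = (d2 @ self.W2.T) * self.a1 * (1 - self.a1)
        # Gradient-descent update with learning rate alpha.
        self.W2 -= alpha * self.a1.T @ d2
        self.W1 -= alpha * X.T @ d1
        return 0.5 * np.sum(err ** 2)         # squared error E
```

Repeated calls to train_step drive the squared error down, mirroring the iterative weight adjustment described above (bias terms are omitted here for brevity).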
The software used in this research to build the neural network is Visual Gene Developer. It has an ANN toolbox facilitating architecture generation, training, validation, and prediction, and data can be imported from Microsoft Excel. Different transfer functions, namely Gaussian, modified Gaussian, sigmoid, modified sigmoid, and hyperbolic tangent, are available, and different values of the momentum coefficient and learning rate can be used. Before initiating the training process, all the input data should be normalized. The predicted productivity rates are then used to modify the progress update, which changes the expected delay to be more realistic according to the actual progress at the site.

Framework
The framework of the periodically modified construction plan is represented in Figure 3. The model consists of two main stages. In the first stage, the neural network model analyzes the actual productivity rates and predicts the future productivity rate in order to obtain the updated work schedule. In the second stage, a suitable recovery technique is adopted to recover the delay and restructure the schedule to meet the original baseline plan.
Initially, the baseline plan of a particular activity is considered; in this study, the activity is brickwork. The planned productivity of a particular group "i", denoted PP(i), together with the crew size of Group "i", determines the duration required for the members of Group "i" to complete the activity. The brickwork activity is then executed with the planned productivity, planned duration, and planned crew size, and the progress of the work is recorded at a particular time interval. The actual productivity calculated from the recorded progress details forms the actual productivity database.
The actual productivity database is then fed into the neural network model. The input of the model includes the number of blocks prepared per day and the data of the most prominent factors affecting productivity including temperature, humidity, and the skill level of both labor and supervisor (Table 1).
The output of the neural network model is the future estimated productivity rate of Group "i" (PF). The future rate is then compared with the planned productivity value. If the future rate is greater than or equal to the planned productivity rate (PF ≥ PP(i)), there is no risk of delay in the project. If the future productivity rate is less than the planned productivity rate (PF < PP(i)), there is a risk of the project falling behind schedule. To estimate the actual progress, the durations of the remaining activities are recomputed based on the predicted productivity rates, which updates the project schedule and the expected finish date to more realistic values representative of the actual progress at the site.
The updated schedule is compared with the baseline plan. This comparison reveals whether the project is progressing according to schedule or is behind the planned schedule. If the updated schedule is on or ahead of the baseline, no modification of the baseline plan is required and the remaining activities are executed accordingly. On the other hand, if the updated schedule is behind the baseline, the model proceeds to the second stage, where various recovery options are considered to cope with the delay and meet the original schedule. The options considered are activity crashing, fast tracking, and do nothing. Here, activity crashing was found to be the lowest-cost option.
In this model, activity crashing is adopted in which extra labors are supplied to carry out the activity. In order to supply more labors, the required number of extra labors should be known. The difference between the planned productivity rate and the future productivity rate gives the extra productivity required to overcome the delay (P(extra)).
The productivity of the next group (PP(i + 1)) is calculated and analyzed to estimate whether it can compensate for the requirement of Group "i". If not, the productivities of the other groups i + 2, i + 3, ..., i + (n − 1) are calculated and analyzed until a group suitable to meet the need of Group "i" is found. Once the appropriate group to supply productivity to Group "i" is identified, the required number of masons with their helpers is transferred to Group "i". This procedure can thus help in recovering the delay of the project and can be efficiently utilized to meet the baseline plan/schedule of a project.
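The second-stage logic, computing the extra productivity P(extra) and searching the other groups for a donor crew, can be sketched as follows. The function names and the rule that a donor group keeps at least one mason are assumptions for illustration:

```python
import math

# Sketch of the donor-group search in the framework's second stage.
# Names and the "donor keeps at least one mason" rule are illustrative.

def extra_productivity(planned_rate, forecast_rate, crew_size):
    """P(extra): the group's shortfall in blocks/day."""
    return max(0.0, (planned_rate - forecast_rate) * crew_size)

def find_donor_group(p_extra, groups, per_mason_rate):
    """Return (group name, masons to transfer), or None if no group can help.

    groups maps a group name to its current crew size.
    """
    masons_needed = math.ceil(p_extra / per_mason_rate)
    for name, crew in groups.items():
        if crew > masons_needed:  # donor must keep at least one mason
            return name, masons_needed
    return None
```

For example, a group of 5 masons planned at 125 blocks/day but forecast at 100 blocks/day is short 125 blocks/day; at roughly 115 blocks/day per mason, two transferred masons cover the shortfall.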

Results
Data related to labor skill level, supervisory skill level, temperature, and humidity were selected as inputs, and a mason's productivity on a given working day (i.e., blocks per day) was selected as the output for developing the neural network model. Each data point in the dataset therefore represents one mason's input values and productivity output for a given day.
The data collected included 1487 data points, and it was divided into training set and testing set randomly. The training set included 1046 data points (70% of the total data), and the testing set included 441 data points (30% of the total data). A portion of the test set including 60 data points (5 masons for 2 weeks) was used to validate the model.
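A random 70/30 split of this kind might be implemented as below. Note that an exact 70% cut of 1487 points gives 1041/446, slightly different from the 1046/441 split reported, so the exact partitioning procedure used by the authors is assumed:

```python
import random

# Hedged sketch of a random 70/30 train/test split.

def split_dataset(points, train_frac=0.7, seed=42):
    """Shuffle a copy of the data and cut it at train_frac."""
    shuffled = points[:]                  # copy so the original order is kept
    random.Random(seed).shuffle(shuffled)
    cut = round(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```

Fixing the seed makes the split reproducible across runs, which matters when later validation (here, the 60-point subset) must refer to the same test partition.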
The training set of the prediction model includes 1046 data points of the above-mentioned factors and the corresponding labor productivity of all the 132 masons at the construction site throughout 2 weeks (11 working days; 5 working days in Week 1 and 6 working days in Week 2).
After the data were collected, all the values were normalized and randomized before being fed into the neural network. The parameters normalized are the number of blocks per day, temperature, humidity, and the skill levels. All factors are scaled to the range [−1, 1], where −1 and 1 correspond to the highest negative and positive effects, respectively.
The normalization to [−1, 1] is done using equation (5):

x_n = 2(x − x_min)/(x_max − x_min) − 1 (5)

where x_n is the normalized value, x the original value, x_min the minimum value in the data range, and x_max the maximum value in the data range. The minimum and maximum values of each factor are as follows:
• Temperature (from the weather history): x_min = 10 °C and x_max = 55 °C
• Humidity (from the weather history): x_min = 18% and x_max = 100%
• Labor and supervisor skills: the minimum and maximum values were calculated from the Excel sheet of the input data.
After that, the normalized data are randomized before being fed to the neural network.
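Assuming the standard min-max mapping to [−1, 1] as in equation (5), the normalization can be written as a one-line helper; the temperature and humidity ranges are those stated above:

```python
def normalize(x, x_min, x_max):
    """Min-max normalization to [-1, 1], as in equation (5)."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

# Example: a 32.5 °C day sits exactly at the midpoint of the 10-55 °C range,
# so it normalizes to 0.0; the range endpoints map to -1 and +1.
mid_temp = normalize(32.5, 10, 55)   # -> 0.0
```

The same helper applies unchanged to humidity (range 18-100%) and to the skill levels once their minima and maxima are known.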

Neural network building, training, and prediction
The free software Visual Gene Developer was used to build the neural network. The dataset was divided randomly into training and testing sets in the proportion 70:30, and the training set was used to develop the model. The optimal neural network model with the desired performance is obtained by constantly adjusting the weights to minimize the error. After multiple runs, an optimal network was obtained with an input layer of 4 nodes, 10 nodes in the first hidden layer, 5 nodes in the second hidden layer, and 1 node in the output layer. The hyperbolic tangent transfer function was selected for the hidden layers, and the momentum coefficient and learning rate were also adjusted to obtain the desired model. The representation of the neural network before and after training is given in Figures (4) and (5), respectively, from which it can be seen that there are four inputs and one output. The neural network model obtained after training was tested to verify its performance: the test set (with the output variable withheld) was used to compare actual and predicted results. The difference between the actual and predicted output values, as obtained from the software, was calculated as a percentage and found to be 7.11%. Hence, a multilayer feedforward back-propagation network was designed and tested. Furthermore, to validate the performance of the neural network model in a real construction project, a sample of the test set covering five masons for 2 weeks was utilized, as seen in the next section.
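The percentage error reported here is consistent with a mean absolute percentage error (MAPE), the metric also used in the studies reviewed earlier; whether the software computes exactly this quantity is an assumption. A minimal sketch:

```python
# Mean absolute percentage error (MAPE), assumed to be the
# percentage-difference metric reported by the software.

def mape(actual, predicted):
    """Average of |actual - predicted| / actual, expressed in percent."""
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)

# Example: errors of 10% and 5% on two data points average to 7.5%.
err = mape([100, 120], [90, 126])
```

Applied to the 441-point test set, such a metric would yield the single 7.11% figure quoted above.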

Validation of the model
To validate the model, one group of labors was selected from the total of 17 groups (Table 2). Group 2, which consists of five masons with their helpers working in one two-floor villa, was selected. The ground-floor block work quantity is 10,000 blocks, and the first-floor quantity is 9000 blocks. According to the baseline plan, the ground-floor block work activity is planned to finish within 3 weeks (16 working days), as shown in Table 3.
The data from the training set (Week 1 and Week 2) were fed into the machine learning model to compare the actual and planned values, as in Table 4. Instead of the planned productivity of 125 blocks/day for each mason, the average productivity was 118.8 blocks/day and 117 blocks/day in the 1st and 2nd weeks, respectively. With the neural network model, the actual productivity values obtained for Week 1 and Week 2 are used to update the schedule for Week 3. According to the initial plan, the productivity was expected to be 125 blocks/day; however, the actual value in the field was different. Hence, learning from the actual productivity automatically updates the construction schedule and prescribes suitable recovery strategies to meet the required output. Table 3 implies that the total quantity of blocks to be produced by the end of Week 3 is 10,000, and Table 4 shows that the actual output from Week 1 and Week 2 is 6480. Thus, the remaining number of blocks to be laid in Week 3 to meet the planned schedule is calculated and tabulated in Table 5.

Table 5. Data for Week 3.
Remaining quantity: total planned quantity − actual quantity of Weeks 1 and 2 = 10,000 − 6480 = 3520 blocks
Predicted number of blocks (from the machine learning model): 2873 blocks
Activity lag: remaining quantity − predicted number of blocks = 3520 − 2873 = 647 blocks
Thus, the project is behind schedule by 647 blocks, which need to be laid in Week 3. This shortage is to be compensated by supplying more resources. Table 6 shows the proposed analysis of the progress update at the end of Week 2 (14 September 2017), in which the expected delay is modified based on the predicted productivity rather than the originally planned productivity, which the group did not achieve. Because the machine learning model predicts the remaining duration from the actual productivity rates, not from initially planned rates that cannot be achieved, the model is realistic and practically applicable. According to the conventional progress update, one extra mason is required; according to the modified progress update, two extra masons are required to complete the activity on time.
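The Week 3 figures can be checked with a short worked calculation using the numbers from Table 5; the per-mason weekly rate below is derived from the model's forecast for the existing five masons, an assumption about how the extra-mason count is obtained:

```python
import math

# Worked check of the Week 3 figures (numbers from the text and Table 5).
total_planned = 10_000        # ground-floor block quantity
done_weeks_1_2 = 6_480        # actual output of Weeks 1 and 2
predicted_week_3 = 2_873      # ML forecast for the existing 5 masons

remaining = total_planned - done_weeks_1_2     # 3520 blocks
lag = remaining - predicted_week_3             # 647 blocks

# Each of the 5 masons is forecast to lay 2873 / 5 = 574.6 blocks over
# Week 3, so the lag of 647 blocks needs more than one extra mason.
per_mason_week = predicted_week_3 / 5
extra_masons = math.ceil(lag / per_mason_week)
```

The result, two extra masons, matches the modified progress update above.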
From Table 6, it is concluded that two extra masons with their helpers are needed to provide 647 blocks in Week 3 for Group 2. According to the framework explained in Figure 3, once the extra productivity P(extra) required by Group "i" (here, Group 2) is determined, the model checks the productivity of the other groups one by one [PP(i + (n − 1))]. The group that can supply enough resources is selected to deal with the delay. Group 1 consists of 13 masons (Table 2), so two masons with their helpers, with a total predicted productivity of 1023 blocks, were moved from Group 1 to Group 2 (activity crashing; more resources are added). Group 2 then consisted of seven masons with their helpers. The actual number of blocks observed was 3970 for the seven masons. The predicted value was obtained by multiplying the number of days (5 days) by the productivity predicted by the machine learning model (574.6 blocks/day) and adding the productivity of the two extra labors (1023 blocks). The predicted and actual values were very close, and the error was very low. This result validates the model.
With the seven masons, Week 3 progressed as described above. In this study, the calculation of labor skill and supervisory skill depends on the number of blocks prepared, which may cause autocorrelation or multicollinearity between the variables in certain situations. An additional analysis using only the independent variables (temperature and humidity) was performed, resulting in a model with 92.01% accuracy, in contrast to 98.14% for the model above.

Discussion
In the present study, a periodically modified project planning approach based on a neural network is adopted. The major factors affecting labor productivity were discussed, and the most prominent ones, such as skill level and environmental factors, were used in the case study. The recovery option selected was activity crashing, in which excess resources are pumped in to overcome the delay. The algorithm developed was validated and found to be 98.14% accurate. Previous studies on similar approaches for recovering delay also generated promising results (Agyei 2015; Abdul-Rahman et al. 2006). Apart from the above-mentioned merits, there are certain limitations to the proposed prediction approach that can be addressed in future research. First, this study considers the ideal case in which the labors are assumed to be present for work every day. Labors can be absent on any given day for various reasons (Durdyev, Omarov, and Ismail 2017); thus, the effect of labor absence should be considered in future studies.
The factors considered in developing the model were limited to labor skill, supervisory skill, temperature, and humidity, owing to the unavailability of sufficient data. However, many other variables should be considered in a decision-making model (Perera, Sutrisna, and Yiu 2016); hence, the effect of other factors affecting productivity is a point for future research. Moreover, the recovery option adopted in the present study is the addition of labor to increase the productivity of a group. This assumption is not always valid, especially for activities with workspace limitations (Li, Love, and Drew 2000). Depending on the activity, the number of laborers affects productivity, and the relationship between productivity and crew size need not be linear. Hence, in future research, the profile of the relationship between productivity and crew size should depend on the type of activity, and workspace limitations should be treated as a prominent factor when selecting a recovery action, in addition to the cost comparison of each recovery action. Thus, the model works well in the ideal case where no laborer is absent on any working day and productivity is proportional to crew size; other cases will not necessarily yield equally accurate solutions.
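The nonlinearity caveat above can be illustrated with a toy crew-size selection. This is not the paper's model: the diminishing-returns rate function, the congestion factor, the daily wage, and the target are all hypothetical assumptions, used only to show how a recovery action could be chosen under a cost comparison when each added mason contributes a little less than the last:

```python
# Illustrative sketch: pick the cheapest crew that meets a daily block target
# when productivity has diminishing returns (e.g. workspace congestion).

def crew_output(crew_size, base_rate=115.0, congestion=0.03):
    """Blocks/day for a crew; each added mason contributes slightly less."""
    return sum(base_rate * (1 - congestion) ** i for i in range(crew_size))

def cheapest_crew(target_per_day, max_crew=15, daily_wage=80.0):
    """Smallest (hence cheapest, at a flat wage) crew meeting the target."""
    for n in range(1, max_crew + 1):
        if crew_output(n) >= target_per_day:
            return n, n * daily_wage
    return None  # target infeasible within the workspace limit

crew, cost = cheapest_crew(target_per_day=790.0)
print(crew, cost)  # 8 masons at a cost of 640.0 per day
```

Under a linear assumption (115 blocks/mason/day) seven masons would suffice for this target; the congestion term pushes the answer to eight, which is exactly the kind of effect the text argues a future model should capture.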
The accuracy of the model was lower when only environmental factors were considered, implying that the performance of the model depends on the type of input variables selected. In the future, considering additional input variables that closely represent the physical and physiological characteristics of the laborers, their years of experience, etc., may increase the applicability of the model. Advances in data collection techniques, such as gathering data via sensors and cameras, are making data far more available than before (Yu et al. 2021). The activity examined in this study is blockwork, and the data associated with this activity were used to develop the model. The scope of this novel approach can be widened by considering additional construction activities, such as plastering and painting, in future studies.
This study demonstrates the possibility of cost and time management in construction projects through periodically modified construction plans. It shows that the large sums construction firms spend on recovery plans can be replaced with simple, inexpensive solutions through periodic monitoring and self-recovering schedules. This study can be considered an initial step toward the development of similar models that include more factors and different scenarios.

Conclusion
Delays are an unavoidable feature of the construction industry. The periodically modified construction plan developed in this research, using a feed-forward neural network (FFNN) with a backpropagation learning algorithm, can be used to build self-recovering schedules. The machine learning approach predicts future productivity rates and automates the calculation of recovery actions, followed by a cost optimization algorithm that chooses a suitable recovery action. The approach was validated with a case study and found to be more than 98% accurate.
Moreover, the modified progress update provides a realistic expectation of delay at the site because it is based on actual rather than planned data. The schedule is continuously updated with the actual progress of the activity on site. When construction plans are regularly updated with real data, delays can be captured at an early stage, helping decision-makers adopt suitable, low-cost recovery actions in the initial phase of an activity delay. Furthermore, the application of neural networks and machine learning techniques can improve the scope, accuracy, and applicability of this scheduling approach.
This research project is expected to be an eye-opener for continuously modified construction plans. The results obtained in this study imply that developing the model further, by adding more of the factors that cause project delays, can lead to better cost and time management than conventional project recovery methods. The modeling approach developed in this research should be particularly useful for projects in which significant cost savings and high standards of both quality and time management are most important.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
The authors are grateful to the UAE University for the facilities provided for research. This research was financially supported by the Research Affairs Office at UAE University under grant number 12R099.

Notes on contributors
Hamad AlJassmi is the Director of the Emirates Center for Mobility Research (ECMR) at UAE University. He is an Associate Professor of Civil Engineering at UAE University and a Research Fellow of the University of New South Wales, Sydney. Apart from academia, Dr. Al Jassmi has acted as Project Director, Project Manager, or Project Lead on numerous consultancy projects related to traffic safety, mobility policy making, and planning. He obtained his PhD in Construction Engineering Management from the School of Civil and Environmental Engineering at UNSW in Australia. In his thesis, he developed mathematical methods for analysing the complex generative mechanisms of defects in construction. His research interests include expert systems and machine learning for the buildings and infrastructure sector. He has published more than 50 peer-reviewed articles, several of which appeared in top-ranked international journals.
Yusef Abduljalil is a Senior Planning Engineer with a knack for ensuring that projects are planned and executed efficiently, and he is adept at employing diverse building methods to achieve project goals. Yusef's point of strength is his knowledge and experience in delay analysis, as well as in implementing the latest methods in project planning and control.
Babitha Philip is a research assistant at the Emirates Center for Mobility Research (ECMR), UAE University. She is a PhD candidate at the Civil and Environmental Engineering Department, UAE University. Babitha completed her master's in Construction Engineering and Management and her bachelor's in Civil Engineering in India. Babitha's research focuses on the application of Artificial Intelligence techniques in the civil engineering field.