The case for digital twins in metal additive manufacturing

The digital twin (DT) is a relatively new concept that is finding increased acceptance in industry. A DT is generally considered as comprising a physical entity, its virtual replica, and two-way digital data communication between the two. Its primary purpose is to leverage the process intelligence captured within digital models, or more usually their faster-solving surrogates, towards generating increased value from the physical entities. The surrogate models are created using machine learning based on data obtained from the field, from experiments and from digital models, which may be physics-based or statistics-based. Anomaly detection and correction, and diagnostic closed-loop process control, are examples of how a process DT can be deployed. In the manufacturing industry, its use can achieve improvements in product quality and process productivity. Metal additive manufacturing (AM) stands to gain tremendously from the use of DTs because the AM process is inherently chaotic, resulting in poor repeatability. A DT acting in a supervisory role, however, can inject certainty into the process by actively keeping it within bounds through real-time control commands. Closed-loop control is achieved by observing the process through sensors that monitor critical parameters; if deviations from their respective optimal ranges are detected, suitable corrective actions are triggered. The type of corrective action (e.g. a change in laser power or a modification to the scanning speed) and its magnitude are determined by interrogating the surrogate models. Because of their artificial intelligence (AI)-endowed predictive capabilities, which allow them to foresee a future state of the physical twin (e.g. the AM process), DTs proactively take context-sensitive preventative steps, whereas traditional closed-loop feedback control is usually reactive.
Apart from assisting a build process in real-time, a DT can help with planning the build of a part by pinpointing the optimum processing window relevant to the desired outcome. Again, the surrogate models are consulted to obtain the required information. In this article, we explain how the application of DTs to the metal AM process can significantly widen its application space by making the process more repeatable (through quality assurance) and cheaper (by getting builds right the first time).


Introduction
Metal additive manufacturing (AM) is an enabling technology that brings previously unattainable manufacturing goals within reach. For instance, its application resulted in the reduction of 855 components on the Cessna Denali aircraft engine to a mere 12, while at the same time, through optimised part shapes, increasing power output by 10% and boosting fuel efficiency by 20% [1]. However, despite the clear benefits provided by the technology, there exists a fundamental challenge: the AM process is inherently unstable and chaotic [2]. This unpredictability produces poor repeatability. Consequently, the properties of the parts made by AM can vary significantly [3]. This characteristic of AM has been one of the barriers to its wider adoption by industry players in the aerospace, space, defence, automotive and medical sectors who routinely make mission-critical parts [4,5]. Nonetheless, there is also an effective solution. If one could imagine a 'superhuman' operator with vast process intelligence and 'see-through vision' who could supervise the AM process 24/7 and make real-time adjustments based on any anomalies detected, the process could be constrained within close bounds to its optimal route. Also, if this operator, through acquired process intelligence, could find this ideal process route even before the build begins, then that would be a bonus. A digital twin (DT) is such an operator, but one that resides in the virtual domain. A product resulting from a DT-optimised and supervised process will be assured of quality.
The meshing between the DT and AM is exceptional: AM is full of uncertainty, in that it produces mostly unique components, each with its own individual processing route, and operates in an unstructured and unpredictable environment, whereas the DT can bring structure and certainty by exercising control. This ability is imparted to the DT through the process insights generated by machine learning (ML) models trained using AM data. The data is obtained from the field as well as from experiments and physics-based or statistical models.
In the following sections, we first expand on what DTs are and how they are progressively being introduced into modern manufacturing environments. Subsequently, we explain how guidance from DTs may be used as a powerful strategy for optimising return from a process such as AM, which is widely applied to produce expensive, bespoke parts.
It must be noted that, while DTs hold evident promise for the AM industry of the future, some barriers need to be overcome before they are adopted widely in manufacturing facilities. Present challenges for creating a DT for AM include the following: (a) a need to encapsulate process-related knowledge within accurate and realistic computational models that the artificial intelligence (AI) systems can interrogate, and (b) the development of affordable, advanced, high-speed, high-resolution sensors that are capable of online sensing. For online monitoring, typically high-speed sensors such as photodiodes or 'melt pool' cameras (calibrated pyrometers) are used (e.g. see [6]).
The current Focus Issue is primarily designed as a platform for disseminating work relating to the generation of process intelligence for the AI systems of metallic AM processes through computational models and experiments. In the present article, we take the opportunity to explain the relevance of DTs in AM of the future.

DTs and their role in the manufacturing industry
The DT is expected to guide the smart and connected factories of the future, sometimes referred to as the Industry 4.0 era, towards achieving previously unattainable levels of process productivity and part quality [7]. The DT can be described as a real-time virtual representation of a physical entity, which may be a part, an assembly of parts (e.g. an automobile), a complete system (e.g. a city), an asset, a process or a service. Thus, the DT is a 'living' model that continuously updates as the physical counterpart changes to represent its status, working conditions, product geometries and resource states synchronously [8]. The updates are facilitated by real-time sensor data and have the potential to incorporate simulations, enabling prognostic assessment [9]. Notably, the twins maintain two-way communication so that the digital counterpart is not only aware of the instantaneous state of its physical twin, but can also provide appropriate real-time control commands to guide the physical twin's behaviours (figure 1).
Real-time control commands are made possible through the use of fast-solving 'surrogate models.' These are computationally efficient reduced-order models that are derived from full models (which can be physics- or statistics-based) for replicating the salient features of the parent models almost instantaneously. These surrogate models are typically trained by ML algorithms using data obtained from the field (via sensors from the physical twin), experiments or full models. Consequently, they contain the required process intelligence and can be interrogated in real-time, unlike the full models that typically take much longer to solve. The process models could either be physics-rich computational models or statistical models synthesised from field and maintenance data, or a combination of the two types. The ML models also use real-time sensor data from the physical twin to enable the DT to learn, reason and dynamically recalibrate to stay up-to-date. The ML models typically contain algorithms for prediction and anomaly detection. These bestow additional capabilities on the DT, such as being able to direct a process to recover from an excursion outside set limits. In addition, the predictive capabilities of DTs based on process intelligence may be leveraged to determine the optimum processing windows through the posing of 'what-if' queries. It was recently reported by General Electric (GE) [10] that 90% of manufacturers are already investing or planning to invest in predictive capabilities in the next five years, and 75% are investing in ML capabilities. A recent review records the application of ML in defect recognition, thermal deformation compensation, process parameter optimisation, topology optimisation, surface roughness reduction, lowering of macro-porosity and cracks, prediction of fusion zone depth and many other applications in AM [11].
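To make the surrogate idea concrete, the following minimal sketch fits a fast polynomial surrogate to data generated by a stand-in 'full model' relating laser power and scan speed to melt-pool width. The analytic form, coefficients and parameter ranges are invented for illustration and do not represent any real AM simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def full_model(power, speed):
    """Stand-in for a slow physics-based melt-pool model (hypothetical
    relationship: width grows with power, shrinks with scan speed)."""
    return 4.0 * power / np.sqrt(speed) + rng.normal(0.0, 2.0, np.shape(power))

# Generate training data from the "full model" (in practice, this data
# would come from simulations, experiments and field sensors).
power = rng.uniform(100.0, 400.0, 500)    # laser power, W
speed = rng.uniform(200.0, 1200.0, 500)   # scan speed, mm/s
width = full_model(power, speed)          # melt-pool width, um

# Fit a quadratic polynomial surrogate by least squares.
def features(p, v):
    return np.column_stack([np.ones_like(p), p, v, p * v, p**2, v**2])

coef, *_ = np.linalg.lstsq(features(power, speed), width, rcond=None)

def surrogate(p, v):
    """Fast-solving surrogate: evaluated in microseconds, suitable for
    real-time interrogation by a DT."""
    return features(np.atleast_1d(p), np.atleast_1d(v)) @ coef

print(surrogate(250.0, 800.0))  # near-instant melt-pool width estimate (um)
```

Once fitted, the surrogate can be evaluated essentially instantaneously, which is what permits real-time interrogation by a DT; in practice the full model would be a validated physics-based simulation and the surrogate a trained ML model rather than a simple polynomial.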
The ability of the DT to mimic its physical twin resides within its AI capabilities, which receive behavioural information from digital simulation models. Businesses and researchers are increasingly exploring the DT concept as a tool for improving the physical counterpart's performance [12]. Well-known companies that use the technology include GE, Siemens, Tesla, Altair, SAP, Oracle, Ansys, PTC and Dassault Systèmes [13,14]. Yet, there is presently no formal consensus definition of a DT in the open literature [13,15]. This is primarily because of the evolving, cutting-edge nature of DTs, and partially because of their diverse forms (e.g. product DT, process DT, service DT, asset DT, etc). Our treatment of the DT concept is, however, aligned with the popular treatment [7,12,13,15-21]. Process DTs, such as the DT for metallic AM, are a small subset of the DT space, in which part DTs are the dominant type.
As much as it sounds revolutionary, the DT concept is only a logical extension of the digital technologies that have been used in the manufacturing sector for decades. That is because the manufacturing industry took good advantage of the digital revolution of the late twentieth century in multiple ways. These included: (a) Using the digital computer for simulations of the manufacturing processes to increase understanding of these processes. Such virtual studies of cause-and-effect relationships enabled scientists and engineers to identify quality improvements and productivity enhancements efficiently while leveraging the ever-increasing power of processing units. (b) Introducing digital sensors to implement 'quality control' (QC) measures centred on observing processes and products. The QC philosophy was to reject non-compliant output, frequently created by an out-of-specification process. Data logged from these sensors were usually analysed offline, and corrective measures were recommended for future runs of the process. (c) Adopting programmable logic controller based feedback control automation in factories to replace labour and increase process repeatability. Resulting benefits included substantial productivity gains as well as cost reductions.
While the above improvements have continued to be deployed with increasing levels of sophistication over the years, the transformational progress achieved in software and hardware capabilities, as well as in AI algorithms, in the recent past has created significant opportunities for introducing more advanced digital innovations in the manufacturing sector. A noteworthy feature is the combination of digital models, ML and data science with sensing and control, deployed to improve product quality at a lower cost.
The DT concept is one such innovation and makes use of the technologies associated with the three examples mentioned above. The DT extracts value for businesses through synergies generated by combining all three and superimposing AI capabilities. The result is a paradigm in which machines can think for themselves. These machines can make autonomous, real-time decisions based on process intelligence for keeping processes within specifications. Such smart connectivity introduced by the DT (figure 1), which also enables AI-initiated instantaneous changes to be instigated in processes through software-controlled hardware (e.g. computer numeric controlled actuators), represents a major leap in the capabilities of the manufacturing industry. That is because the connection back from the digital to the physical world, which complements the physical-to-digital linkage realised decades earlier, enables active closed-loop control commands to be autonomously triggered (figure 2) before a process produces a non-compliant product. The emphasis, therefore, shifts from the previous quality control (QC) strategy, where rejects were identified after manufacture, to 'quality assurance' of the process and, by extension, of the product.
The schematic in figure 2 not only depicts the connection between the physical (left) and digital (right) worlds in a DT-deployed facility but also the contrasts between the traditional (top) and Industry 4.0-based (bottom) factories. Conventionally, digital data was logged for offline analysis by engineers or scientists, who would recommend potential improvements to the process for the future. In smart manufacturing associated with the factory of the future, on the other hand, the AI-enabled DT can make autonomous decisions in real time to instantly nudge a deviating process back to its optimum path. Thus, the DT transforms the value of the logged data by immediately analysing and acting on it to avoid making defective parts in the current process itself.

Manufacturing process DTs
The smart factory is a key construct of Industry 4.0 [22]. It is envisioned as a fully connected manufacturing system, operating without direct human involvement by generating, transferring, receiving and processing digital data to conduct all the tasks required for producing all kinds of goods. Smart manufacturing is the dramatically intensified application of 'manufacturing intelligence' throughout the manufacturing and supply chain enterprise [18]. It comprises the real-time understanding, reasoning, planning and management of all aspects of manufacturing processes, facilitated by the pervasive use of advanced sensor-based data analytics, modelling and simulation [23]. Within this context, DTs form an evolutionary paradigm for smart manufacturing in smart factories [24], enhanced by their interlinks with other Industry 4.0 technologies, including cyber-physical systems and the internet of things (IoT). For example, a recent publication [25] reported the use of a DT in the manufacture of engine blocks.
While there are multiple types of DTs suited to the manufacturing industry, a DT that emulates the AM process would fall within the 'process DT' category. There are at least three types of process DTs available, based on the roles for which they are chosen [26]:
• Process DTs with supervisory capabilities: at their most basic level, such DTs offer visual emulation (or animation) to support understanding of static or dynamic operational characteristics and how they relate to performance outcomes. For example, they can help ensure consistent operation and ongoing standards; plan and test the impact of critical scenarios; or visually test new conceptual business models.
• Process DTs with diagnostic and control capabilities: these provide dynamic diagnostic capabilities, enabling the analysis of end-to-end process performance in real time. They support the optimisation of real-world asset and process performance. Such twins link sensor data from the physical world to analytical and data-mining algorithms, giving detailed, accurate and actionable insight. This allows the triggering of alerts to prompt closer monitoring, diagnose issues and identify performance improvements. One can also link physical assets and control systems to adjust process parameters automatically via an actuation loop. As a result, operational plans can be executed more effectively. Benefits include increased productivity, improved control and more stable and reliable operational performance.
• Process DTs with predictive capabilities: these DTs are created using specially designed predictive simulation software and can be used to evaluate future scenarios. Using such DTs, decision-makers can test and understand the impact of different scenarios, identifying opportunities and risks without incurring any cost. One can ask questions like 'what if?', 'what's best?' and 'how do we?' and get answers using dynamic models of real business and operational processes.
Qi et al [13], Tao et al [14] and Kritzinger et al [15] have recently provided reviews of current DT use in manufacturing. It is apparent from their assessments that the general manufacturing sector's interest is currently limited mainly to applying the DT concept to logistical or services-related activities such as production planning and control, maintenance, layout planning, and shipment scheduling. That is not to say the benefits of 'process DTs' are not understood-see figure 3 and [27]. By mirroring the manufacturing process digitally, process DTs create models of 'the best way' to run a process in a given environment-often referred to as 'the golden batch' [10]. By identifying the optimal process to manufacture a given product, plant operators can ensure they are consistently delivering against quality, cost and volume objectives.
Process DTs are not yet commonplace in the manufacturing industry. There are multiple reasons for the lag in the progress of process DTs. According to GE [10], this is due in part to the high variability of manufacturing processes (equipment, scale, environment, complexity and low data quality). Despite the challenges, some large corporations have developed process DTs for simulating their processes. For example, the deployment of a process DT in a Siemens factory was reported very recently [28]. In this instance, dust from a milling operation on printed circuit boards caused drill bits to jam, resulting in unplanned downtime. To remedy the situation, automation engineers added sensors to the machine; readings from these were fed to the DT. With the aid of AI, the DT could predict machine breakdowns well before they occurred, allowing the engineers to take preventative measures. GE Digital has already delivered process DTs in multiple industries, including the mining, food and beverage, oil and gas, power generation, and water and wastewater industries. Unfortunately, these activities have not been publicised for commercial reasons. Similarly, process DTs for metal AM are yet to be reported in the public domain. Thus, there is a need for process DTs to be developed for a host of manufacturing processes and, in particular, for metal AM, before the transition to intelligent factories of the future. In the following section, we explain why AM is an ideal candidate for the application of DTs.

The exceptional fit between DTs and AM
AM is an enabling technology that is anticipated to grow into a $15.5 billion industry by 2030 [29]. The highly digitalised AM process is a perfect contender for the application of the DT concept, which, in turn, offers compelling solutions for the inherent problems associated with the AM process, as described below.
The process waste, in the form of defective builds, is expensive as the AM parts are usually high-value items. This is because of their low-volume, customised nature and the high cost of the raw materials and machines. Hence, there is room for closed-loop process control enabled by DTs to improve the AM process, which suffers from some unique drawbacks. Firstly, the metal AM process is inherently chaotic (figure 4) [2,30]. This can sometimes produce poor repeatability [31][32][33][34]. Secondly, improper energy densities may be applied at certain locations, either due to incorrect design of the process or due to unexpected or unknown hardware system drifts, resulting in defects. Consequently, the properties of the parts made by AM can vary significantly [3]. The lack of reproducibility of parts and the occurrence of defects have been barriers to the wider adoption of AM by industry players in the aerospace, space, defence, automotive and medical sectors that routinely make mission-critical parts [4,5].

The ability of a DT to provide real-time diagnostic process control
A DT capable of diagnostic process control can react to process excursions instantaneously. The role of the DT in this manufacturing setup would be similar to that of an experienced process specialist who works 24 h a day to steer an operating process to perform continually at its optimum. The DT would have the assistance of 'superhuman' diagnostic process vision, enabled by digital sensors, that allows the 'virtual specialist', who has AI capabilities, to track the process and autonomously instigate changes where necessary. In addition, the AI can choose an optimum process route during the planning stages, even before that process is initiated in the plant [11].
Therefore, if a DT first helps determine the optimum processing window for a part, and then ensures that the manufacturing process stays within boundaries established using that processing window, the quality of the process will be assured. By extension, the quality of the product will also be assured. Such certainty can transform AM by providing confidence to the customer in the integrity of its output. There are multiple flow-on benefits for businesses: (a) The increased confidence gained in process repeatability will significantly widen the scope of AM; it will then be possible for AM to manufacture mission-critical parts with a guaranteed minimum quality level. It will also lessen the burden of part qualification. (b) The improved product integrity will significantly reduce the need to destructively test AM parts; this will significantly reduce costs since expensive parts need no longer be sacrificed to satisfy current requirements for part certification. Equally, a quality-assured process and product will expedite and simplify part certification procedures. This is beneficial for two practical reasons. Firstly, it will help deliver urgent parts on time. Secondly, it will simplify strategies concerning the certification of customised AM parts, each of which tends to require specific processing conditions and, therefore, has distinctive material properties.
(c) A quality-assured process and its product will make it possible to significantly reduce rejects; this will eradicate expensive waste. It will also reduce the burden of process qualification. (d) The ability to determine the optimum processing window for each customised part before its build even begins reduces or eliminates trial and error; this saves time and money as the products will be built right the first time (and every time), e.g. see [35]. (e) Since parts will be built right the first time, plant capacity will not be wasted in rebuilding products; hence plant productivity will be higher.

The capacity of a DT to predict a future state of the process under multiple scenarios
A traditional closed-loop feedback control system is a set of mechanical and/or electronic devices that automatically regulates a process output (response) to a desired state or set point without human interaction. A control loop is the system of hardware components and software control functions involved in measuring the response and adjusting an input variable that controls the measured response of the process. Although closed-loop control systems have been used in industry for some time, the advent of DTs that contain significant AI promises to raise the rigour and breadth of the controls employed in such systems. That is because traditional feedback control systems that are manually programmed only implement known solutions to problems with which the programming engineer is familiar. This limits the number of scenarios that can be handled by closed-loop control. However, DTs run by ML algorithms can handle a multitude of unprogrammed problem scenarios because of their enhanced predictive capabilities, which can foresee a future state of the physical twin (based on the current state) and take appropriate corrective action determined by the AI. This predictive ability of DTs is derived from their capacity to observe and learn from the physical twin (e.g. the AM process) and from the AI embedded within them. Thus, DTs can operate proactively in a context-sensitive manner, through what is sometimes called the 'feed-forward loop' [36], whereas traditional closed-loop control (or feedback) systems are usually reactive. This enhanced ability to predict and fix multiple process issues is ideal for a process such as AM, for which the permutations and combinations of process variables, and therefore potential process states, are numerous. For instance, it can prevent the build of an expensive part from becoming defective by initiating suitable preemptive action in case the process veers off-course. This is another reason why an outstanding match exists between AM and DTs.
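The contrast between reactive feedback and the proactive, prediction-based action described above can be illustrated with a toy first-order thermal model; all coefficients and numbers below are invented for illustration, and actuator limits are ignored for clarity:

```python
import numpy as np

# Toy first-order thermal model of a melt pool:
# T[k+1] = a*T[k] + b*P[k] + d[k], where d[k] is a disturbance such as
# heat accumulation near an overhang, which a DT can predict from the
# build geometry before it occurs.
a, b = 0.8, 0.5
T_set = 100.0
d = np.zeros(60)
d[30:40] = 20.0          # predictable heat-accumulation event

def run(controller):
    T = 50.0
    history = []
    for k in range(60):
        P = controller(k, T)
        T = a * T + b * P + d[k]
        history.append(T)
    return np.array(history)

def feedback(k, T):
    # Reactive: corrects based on the current state only, so the
    # disturbance is seen one step after it has already acted.
    return (T_set - a * T) / b

def feedforward(k, T):
    # Proactive: the DT's model predicts the disturbance d[k] and
    # cancels it in advance, on top of the feedback action.
    return (T_set - a * T - d[k]) / b

err_fb = np.abs(run(feedback) - T_set).max()
err_ff = np.abs(run(feedforward) - T_set).max()
print(err_fb, err_ff)   # feed-forward holds the set point much tighter
```

In the sketch, the feed-forward controller cancels the predicted disturbance before it perturbs the process, whereas the feedback controller only corrects after the deviation has appeared, which is the essential distinction drawn above.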
Therefore, it is clear that DTs offer business value propositions that are particularly compelling for metal AM, which suffers from a lack of process repeatability and a heavy reliance on trial and error, resulting in expensive and excessive waste. A more detailed treatment of this topic may be found in [37], which is a summary of topics discussed at an international symposium held in Australia in 2019 under the title of 'Towards a True Digital Twin of Metal Additive Manufacturing.' A recent paper [38] concluded that the use of DTs would make metal AM products cheaper, thus increasing their market share. Another work [21] dealt with how the certification process stands to gain from the introduction of DTs into AM.

Application of closed-loop control in AM
In this section, we illustrate how manual closed-loop control may be applied to the AM process for the avoidance of undesired process conditions during a build. The intention is to provide a preview of the autonomous diagnostic process control that can be set up using a DT during an AM build.
We use information from a recently published work [39] for the illustration, noting that other examples exist (e.g. [40,41]). In that work, an open-source laser powder bed fusion (L-PBF) machine known as Aconity3D, which can be user-programmed, was used to build a stainless-steel part. The melt pool was monitored, and pyrometer readings were taken during a series of builds with varying parameters such as scan speed and scan offset. The melt pool dimensions were then manually correlated with pyrometer emission spectra to create a limited database. Similarly, correlations between the emission spectra and the following variables: scan rate, scan offset and laser power were obtained and added to the database. Two demonstration builds followed. In the first build, the laser power was left unchanged throughout the process; several hot spots were recorded, as monitored by the pyrometer. In the second build, with the same process parameters, the laser power was programmed to change via closed-loop control based on pyrometer readings, according to the information in the database. This had the effect of avoiding the hot spots. While demonstrating closed-loop control in AM convincingly, this study relied on manually generated insights into the process. Thus, its scope was limited to the previously investigated process parameters and their boundaries. Also, a considerable amount of time was spent on carrying out the prior experimental investigations as well as analysing data manually to obtain the necessary correlations. It is therefore clear that the study would have benefited from the use of a surrogate ML model enhanced by data from physics models. The ML model would have been able to support many more process parameters along with their interactions, providing the investigation with a more robust capability to avoid not just hot spots, but a host of other defects as well. Nevertheless, this study remains a compelling example of how an AM machine may be controlled in real-time for process improvements. Hence, it may be considered a forerunner to a DT-controlled AM process.

Figure 5. The different components that contribute to the construction of a DT [21]. Reprinted from [21], Copyright (2019), with permission from Elsevier.
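A minimal sketch of the kind of database-driven correction loop used in [39] is given below; the pyrometer calibration values and the interpolation scheme are invented for illustration and do not reproduce the actual data of that study:

```python
import numpy as np

# Hypothetical calibration database relating the normalised pyrometer
# signal to the laser-power scaling that restored nominal melt-pool
# emission in prior experiments (values invented for illustration).
pyro_signal = np.array([0.8, 0.9, 1.0, 1.1, 1.2])    # normalised emission
power_scale = np.array([1.15, 1.07, 1.00, 0.94, 0.88])

def corrected_power(nominal_power_w, pyro_reading):
    """Interpolate the database to obtain the corrective power command.
    Readings above nominal (hot spots) reduce power; cold spots raise it."""
    scale = np.interp(pyro_reading, pyro_signal, power_scale)
    return nominal_power_w * scale

# A hot spot (reading 1.15) triggers a power reduction:
print(corrected_power(200.0, 1.15))  # ~182 W
```

A DT would replace the fixed lookup table with a surrogate ML model, allowing corrections to generalise beyond the experimentally investigated parameter ranges.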
Despite the significant potential benefits, a recent review on the subject [42] found no process DT currently exists for AM. This is a considerable research gap that must be filled in the next decade or so with a view to making AM increasingly quality assured and cheaper-and consequently, more attractive to several industries that make high-end mission-critical parts.

AM process models containing AI for DTs
A DT for AM should not only provide advice on the best processing path but must also be capable of supervising a process autonomously and effectively to assure the quality of the process itself and the resulting product. To achieve this goal, the DT must have sufficient AI to identify, by simulating the behaviour of the process, the process parameters that will achieve the stipulated product quality. Process intelligence for this purpose is provided to the AI by computational models that encapsulate the behavioural characteristics of the physical twin, which is the AM process in our case.
Initially, the computational models are developed as physics-rich mechanistic models or as data-driven statistical counterparts (figure 5). It is also not unusual to have hybrid models that combine the characteristics of both mechanistic and statistical models. These models must be suitably validated. Then the data generated from these models are used [43,44], along with field data, to train surrogate ML models that help provide real-time guidance to DTs. The DT will have algorithms (e.g. in the form of control models, etc) for data analysis and decision-making.
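As a simple illustration of the anomaly-detection algorithms such a DT might contain, the following sketch flags excursions in a synthetic melt-pool emission trace using a rolling robust z-score; the window length, threshold and signal are all hypothetical:

```python
import numpy as np

def detect_anomalies(signal, window=50, threshold=5.0):
    """Flag samples deviating more than `threshold` robust standard
    deviations from the rolling median of the preceding window.
    A minimal stand-in for a DT's anomaly-detection algorithms."""
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        ref = signal[i - window:i]
        med = np.median(ref)
        mad = np.median(np.abs(ref - med)) + 1e-12   # robust spread
        if abs(signal[i] - med) / (1.4826 * mad) > threshold:
            flags[i] = True
    return flags

# Synthetic melt-pool emission trace with an injected excursion:
rng = np.random.default_rng(1)
trace = 1.0 + 0.01 * rng.standard_normal(500)
trace[300:305] += 0.2                      # simulated hot-spot event

flags = detect_anomalies(trace)
print(np.flatnonzero(flags))               # indices near 300 are flagged
```

In a deployed DT, a detection such as this would trigger the decision-making algorithms, which would interrogate the surrogate models to choose a corrective action.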
Before computational models can be developed, a large amount of effort is required to observe the AM processes and record the vital diagnostic parameters through sensors; these observations will form the basis for constructing the models. We highlight below some aspects of the AM process that would benefit from experimental investigations as well as computational modelling related to the development of AI for an AM process DT. Given that the powder-based AM processes start with near-spherical metal powders that are tens of micrometres in diameter and end with parts with dimensions of tens of centimetres, a multiscale modelling approach is necessary to connect the models of the various sub-processes involved.

Powder spreading (recoating) model (for powder-bed applications)
A common technique employed in AM powder-bed systems is to add successive layers of metal powder by spreading (also known as recoating or raking) a new layer across the existing surface. Understanding this spreading process and how the properties of the powder particles (e.g. size and shape distributions, density, interaction properties) and process parameters (e.g. height of powder layer, rake geometry, recoater speed) affect the properties of the bed after raking is crucial in optimising the performance of the system and ensuring the quality of the 3D-printed part [45,46]. It is, therefore, essential to develop a quantitative understanding of the spreading process. Some models for this process exist (figure 6) [45,46], but further refinements are considered useful.

Figure 6 (right) [46]: calculated spatial particle distribution, with particles coloured according to size (typically tens of µm). Reproduced from [46]. CC BY 4.0.

Figure 7. Melt pool width for the Gaussian vs. donut beam profiles for different process parameters [50]. Reproduced from [50], with the permission of the Laser Institute of America. https://doi.org/10.2351/7.0000100.

Laser-metal interactions and melting of powder
Understanding the interaction between the laser (or electron beam) and the metal powder bed is also critical in predicting optimum processing regimes in L-PBF AM of metals [47]. It is thus a key part of the process that needs to be modelled well [48]. In most models, the distribution of heat flux over the powder bed during the laser-powder interaction is assumed to follow a Gaussian beam pattern. However, the heat flux pattern depends on several factors, such as the beam quality factor and laser wavelength, and must be described mathematically so that the model represents the actual beam. In a recent work [49], a non-Gaussian laser beam model was used to model the temperature profile, bead geometry and elemental evaporation in the powder-bed process. The model's predictions of temperature, bead shape and alloying-element concentration agreed better with experiments on Inconel 718 alloy than did those of a model that assumed a Gaussian beam profile. The importance of considering the actual profile in models was further highlighted in an experimental work [50] on alloy AlSi10Mg in terms of its effect on melt widths and depths (results in figure 7).
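The practical consequence of the beam profile can be illustrated by comparing the heat flux delivered by a Gaussian beam with that of a ring-shaped ('donut') beam of the same total power. The sketch below uses textbook TEM00 and LG01-like intensity expressions; the power and beam radius are illustrative values chosen by us, not parameters from [50].

```python
import numpy as np

P = 300.0   # laser power, W (illustrative)
w = 50e-6   # beam radius, m (illustrative)

def gaussian_flux(r):
    """TEM00 Gaussian heat flux, normalised so it integrates to total power P."""
    return 2.0 * P / (np.pi * w**2) * np.exp(-2.0 * r**2 / w**2)

def donut_flux(r):
    """Ring-shaped (LG01-like 'donut') heat flux, same total power P."""
    return 4.0 * P * r**2 / (np.pi * w**4) * np.exp(-2.0 * r**2 / w**2)

r = np.linspace(0.0, 3.0 * w, 20001)
dr = r[1] - r[0]
# Check that both profiles deliver the same power (integrate over the beam area).
power_gauss = float(np.sum(gaussian_flux(r) * 2.0 * np.pi * r) * dr)
power_donut = float(np.sum(donut_flux(r) * 2.0 * np.pi * r) * dr)

# The donut peak sits at r = w/sqrt(2) and is lower than the Gaussian peak.
peak_ratio = float(donut_flux(r).max() / gaussian_flux(0.0))
print(power_gauss, power_donut, peak_ratio)
```

For equal power, the donut profile's peak flux is lower than the Gaussian's by a factor of e, which helps explain why the two profiles produce different melt-pool widths and depths.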
The complicated mechanisms [51] that operate during this stage of the AM process (figure 8) are being progressively understood and modelled, but several questions remain. This subject has been reviewed recently in [52]. Models for the melting of powders by the laser also exist (e.g. figure 9), but can be improved to account adequately for the various contributing parameters.
Figure 8. The physical effects and processes taking place during AM include powder particle dynamics due to vapour recoil pressure [75], thermal fluid dynamics capturing solid-liquid-vapour transitions when interacting with laser radiation, solid-state transformation such as precipitation during potential remelting and 'heat treatment' in the heat-affected zone, and subsequent solid mechanics to deal with damage mechanisms such as cracking [32]. Reproduced from [75]. CC BY 4.0.
Figure 9. Reprinted from [53], Copyright (2020), with permission from Elsevier.

Solidification and microstructure formation
When the microscopic molten pool solidifies, it does so at an extremely high rate since the substrate (i.e. the build plate) acts as an enormous heat sink. The rapid solidification results in microstructures that are distinctive to AM, the formation of which is governed by the unique non-equilibrium conditions and directional heat flow. Thus, new theories and methodologies are often required to describe the phenomena involved. Since microstructures determine the properties of the part (e.g. see [54]) and, ultimately, its mechanical performance, a quantitative understanding of their formation has to be built into the AI process intelligence. Modelling of microstructures can be tackled in various ways, including the less computationally intensive phenomenological method and the more accurate but computationally expensive phase-field method. A recent comprehensive review [55] on the subject concluded that the bulk (macroscopic) AM microstructures predicted by analytical or quasi-analytical temperature models were inadequate in terms of capturing salient features of the microstructures, including surface topology on real parts. The authors compared various techniques, including the phase-field method, cellular automata and kinetic Monte Carlo, and discussed their advantages and disadvantages. The quality of the models and their predictions (figure 10) is progressively improving, but the generation of additional knowledge in the area for the various alloys and process conditions is necessary.
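The strength of the link between cooling rate and microstructure can be conveyed with the widely used empirical power law for secondary dendrite arm spacing, lambda = A * (dT/dt)^(-n). The coefficients below are illustrative placeholders only; both A and n are alloy-specific and must be fitted to measurements.

```python
def dendrite_arm_spacing_um(cooling_rate_K_per_s, A=40.0, n=1.0 / 3.0):
    """Empirical secondary dendrite arm spacing, lambda = A * (dT/dt)^-n.
    A (um) and n are alloy-specific; the values here are illustrative only."""
    return A * cooling_rate_K_per_s ** (-n)

# Conventional casting (~1 K/s) vs laser powder-bed fusion (~1e6 K/s):
print(dendrite_arm_spacing_um(1.0))   # coarse spacing, tens of micrometres
print(dendrite_arm_spacing_um(1e6))   # sub-micrometre spacing, AM-typical
```

The six-orders-of-magnitude difference in cooling rate compresses the spacing by roughly a factor of one hundred, which is why AM microstructures are so much finer than cast ones and why casting-era models often need revisiting.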

Residual stress and distortion
Large temperature gradients are usually present in an AM part during the build process: while the substrate acts as a large heat sink, the buildup of layers on top of the part results in substantial heating. When the entire part cools to room temperature at the end of the build, these temperature gradients cause different regions of the part to contract by different amounts. The resulting non-uniform thermal contraction triggers residual stresses. These stresses are partially relieved when the part is cut from the substrate, resulting in distortion. Hence, to limit these undesirable effects, practitioners attempt to manage temperature gradients. Models have been developed to assist with mitigation strategies [57,58] (figure 11). However, the impact of residual stresses on the mechanical behaviour of AM materials remains largely unexplored [59].
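A back-of-the-envelope estimate shows why these stresses are so severe. For a fully constrained region cooling by dT, the biaxial thermal stress is sigma = E * alpha * dT / (1 - nu). With illustrative 316L-like property values (our assumption, not figures from the cited works), this upper bound far exceeds the yield strength, which is why the stresses relax through plastic deformation and parts retain residual stresses near yield.

```python
def constrained_thermal_stress_MPa(E_GPa, alpha_per_K, dT_K, nu=0.3):
    """Upper-bound biaxial thermal stress for a fully constrained region:
    sigma = E * alpha * dT / (1 - nu)."""
    return E_GPa * 1e3 * alpha_per_K * dT_K / (1.0 - nu)

# Illustrative 316L-like values: E ~ 200 GPa, alpha ~ 16e-6 /K, dT ~ 500 K.
sigma = constrained_thermal_stress_MPa(200.0, 16e-6, 500.0)
print(sigma)  # ~2286 MPa, several times a typical yield strength
```

The estimate ignores stress relaxation at elevated temperature and the actual degree of constraint, so real residual stresses are lower, but it makes clear that managing temperature gradients is essential rather than optional.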
The continued development of these models will make future simulations more realistic. Models can be improved using experimental data on residual stress formation in AM environments, which are progressively being published; see, e.g., recent reviews in [60,61]. These works have assessed, for instance, the current research status on residual stress sources, characteristics and mitigation. An outstanding example [59] combined in-situ synchrotron x-ray diffraction experiments with computational modelling to quantify the lattice strains in different families of grains with specific orientations, and the associated intergranular residual stresses, in an AM 316L stainless steel under uniaxial tension. This work, which trained attention on residual stresses at the microscopic scale, pushes the boundaries of knowledge in this area.

Microstructure-property relationships
In parts made using traditional manufacturing methods like casting, the properties of the part material may be reliably represented by a single bulk value because the cooling rates do not vary significantly from location to location (with some exceptions). However, this is not the case in AM, where cooling rates change considerably with location, which means material properties follow a similar pattern. Further complications in the form of microstructure modifications arise (figure 12) from the fact that the continuous deposition of hot layers in close proximity to already solidified locations can reheat the latter and, sometimes, even remelt them. Since the localised material properties play a role in the AM part's mechanical performance, their estimation is extremely useful to ensure that minimum quality requirements are satisfied. Knowledge of the properties would also help with the certification of the part. From a practitioner's point of view, the inverse methodology, where processing parameters are tailored based on the minimum required quality, is a useful approach. Hence, a knowledge of the AM microstructure-property relationships is essential. Researchers (e.g. [62-64]) have studied these relationships for several alloy and process combinations and developed models to quantify them. Mechanistic models may be combined with ML to extract the salient relationships, as shown in figure 13.

Figure 11. Von Mises stress (Pa) distribution and corresponding deformation at various time points, as calculated by the multiphysics simulation of the LENS fabrication of a square box [57]. The deformation is scaled by a factor of 50. Reprinted from [57], Copyright (2018), with permission from Elsevier.

Figure 12. Relationship between temperature and measured mean grain size in an AM duplex titanium alloy [62]. Reprinted from [62], Copyright (2019), with permission from Elsevier.

Figure 13. Integrated computational and ML framework to predict process-structure-property relationships in the AM process, adapted from [65]. Reproduced with permission from [65].
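As a minimal sketch of the kind of microstructure-property relationship such models encode, consider the classical Hall-Petch dependence of yield strength on grain size; the coefficients below are illustrative placeholders, not fitted values from the cited studies.

```python
def hall_petch_yield_MPa(grain_size_um, sigma0_MPa=240.0, k_MPa_um05=280.0):
    """Hall-Petch relation: sigma_y = sigma0 + k / sqrt(d).
    sigma0 and k are alloy-specific; illustrative values are used here."""
    return sigma0_MPa + k_MPa_um05 * grain_size_um ** -0.5

# Location-dependent grain size implies location-dependent strength:
for d in (100.0, 25.0, 1.0):  # grain size in micrometres
    print(d, hall_petch_yield_MPa(d))
```

A DT holding such a relationship (fitted per alloy and process) can map a predicted local grain size directly to a local strength estimate, supporting the inverse methodology in which processing parameters are chosen to meet a minimum required property.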

Special alloys tailored for AM
While most of the alloy systems used in metal AM to date were developed for traditional processes such as casting and welding, there are opportunities for the design of unique alloys suitable for AM [66]. Some guidance is available for the quantification of the desired characteristics of these alloys (e.g. [67]). These alloy systems must then also be included in the knowledge base for interrogation by DTs for AM. As new printable alloys are developed, they can be added to the printability database, which can provide information about the common defects for a combination of alloy and AM process [67]. Fields such as computational materials discovery [68] and ML-assisted materials discovery (using big data) [69] can play important roles in the future development of highly printable alloys.

Challenges
As is often the case during the introductory phase of any new technology that substantially alters the status quo, some challenges need to be overcome before DTs may be successfully deployed in the control of AM. Key challenges are considered below. It must however be pointed out that, despite these challenges, successful research is being carried out using model-assisted closed-loop control of AM (e.g. [36,41]), and AM machine manufacturers [70] and large AM industry players such as Lockheed Martin [71] alike are committed to applying the technique to improve their AM processes.

Barriers to the creation of robust AI on the metal AM process
DTs rely heavily on the AI embedded in them to make critical decisions. The AI is held in the form of digital models that contain vast knowledge about the process, exceeding that of an expert with several years of experience. For this AI to be robust, the digital models must accurately reflect the actual process in every plausible situation. These digital models are created using data obtained from the field, controlled laboratory experiments and physics-rich computational models. Therefore, an obvious impediment to the development of AI in metal AM is the large number of process variables involved, giving rise to numerous permutations and combinations of scenarios. Consequently, large amounts of relevant data need to be generated for the construction of robust AI. In addition, while ML works well in situations where the data of interest have strong statistical features, which fortunately is the case for the majority of data in metal AM, it cannot be depended upon to predict so-called 'outlier' events. An example of this in metal AM is building an ML model to predict the location of the maximum pore within a powder-bed laser fusion build [72]. Because the maximum pore is a statistical outlier (a very rare event), nearly all ML algorithms will ignore it. Further challenges exist in ensuring the standardised treatment of data between its producers and end-users, conducting experiments with sufficient rigour, and developing accurate computational models that sufficiently account for the multiscale and multiphysics nature of the sub-processes that constitute an AM process, as well as their interactions.
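The outlier problem can be demonstrated in a few lines: for a heavily imbalanced data set, the loss-optimal predictor (here, the mean, which minimises squared error) is effectively blind to the rare extreme value. The pore-size numbers below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic pore sizes (um): overwhelmingly small pores plus one rare large pore.
pores = np.concatenate([rng.normal(10.0, 2.0, 999), [200.0]])

# The MSE-optimal constant predictor is simply the mean of the data,
# so the single 200 um pore shifts the prediction by only ~0.2 um.
prediction = float(pores.mean())
print(prediction, float(pores.max()))  # ~10 um predicted vs 200 um critical pore
```

Since fatigue life is typically governed by precisely that largest pore, standard loss-minimising ML must be supplemented, e.g. with extreme-value statistics or physics-based reasoning, before a DT can be trusted with such predictions.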
In a separate work [73], the present authors have discussed in detail the barriers to the development of physics-rich models for DTs in AM, and offered potential solutions. The hurdles were considered under the following categories: difficulties in modelling stemming from multiscale and multiphysics aspects; verification (confirming that the equations in the models are solved correctly in time and space); validation (determining the degree to which a model accurately represents the real world from the perspective of its intended uses); the need for better material properties at elevated temperatures; uncertainty quantification and the control of uncertainty propagation within and between linked models; the need for standardisation in generating data and developing models; the requirement for faster hardware and more efficient software; and the use of ML for creating surrogate models. In addition, the need for a shared global vision and coordination, the types of partnerships that can be forged, the benefits of bringing industry and research institutions together, intellectual property treatment and funding issues were discussed.

Observability and controllability issues
For a closed-loop control system to work as intended, the process in question must be adequately observable and effectively controllable. This can be challenging in some instances of AM because of the speed at which some processes take place (e.g. laser scanning, solidification), the randomness inherent in the process itself (e.g. powder flow, spatter dynamics) and the microscopic nature of the melt pool and its flow dynamics. However, as outlined by the authors elsewhere [73], progress is being made in observation with the advent of advanced sensors capable of high-speed, high-resolution image capture. Control hardware, such as the actuators and valves that must take corrective action based on the observations, must then also respond in good time. Where required, research directed at reducing thermal and/or mechanical latencies in the operation of such hardware can be conducted (e.g. see [74]). Accurate calibration of sensor and actuator hardware is another area that may need increased attention. Other circumstances that must be planned for include how a DT should respond to a malfunction of sensors and/or actuators during an ongoing AM build process.
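As a minimal sketch of the corrective-action loop discussed above, consider a proportional controller that nudges laser power towards a target melt-pool width, one correction per sensor frame. The plant model, gain and limits are toy assumptions of ours, not a validated controller; a DT would instead interrogate its surrogate models to choose the correction.

```python
def melt_pool_width_um(power_W):
    """Toy plant model (illustrative): width grows with the square root of power."""
    return 8.0 * power_W ** 0.5

target_um = 120.0
power = 300.0   # initial laser power, W
gain = 2.0      # proportional gain, W per um of width error (hand-tuned)

for step in range(50):                    # one iteration per sensor frame
    error = target_um - melt_pool_width_um(power)
    power += gain * error                 # corrective command to the laser
    power = min(max(power, 50.0), 500.0)  # respect hardware limits

print(power, melt_pool_width_um(power))   # settles near the 120 um target
```

Even this toy loop exposes the observability and controllability constraints in the text: the sensor must deliver a fresh width estimate, and the laser must accept the power command, within each frame, or the loop destabilises.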

Conclusions
While DTs are already being deployed in many industries, process DTs are still uncommon. Nonetheless, they hold the promise of providing real-time predictive (or feedforward) closed-loop control of manufacturing systems by seamlessly linking computational models, sensor data and automation hardware through ML algorithms. AI-driven DTs can autonomously arrive at context-sensitive resolutions. Consequently, they represent a step-change over traditional closed-loop control methods, which are based only on feedback and are manually programmed, and are thus limited to correcting problems already recognised by engineers using known solutions.
Nowhere is the potential value of DTs clearer than in their application to AM. The inherent process variability, the ability to produce bespoke parts, and the high cost of production of AM parts make a compelling case for the development and implementation of process DTs of AM. DTs can be applied to determine optimum processing routes for bespoke products, to improve AM process repeatability and, by assuring part quality, to reduce waste and the need for destructive testing.
Researchers and engineers have already demonstrated the value of manually programmed closed-loop control in modifying process parameters to improve part quality in industry. However, many requirements have to be fulfilled before the goal of autonomous closed-loop control, as required for a process DT, can be reached. This type of feedforward control can tackle scenarios unknown to researchers, because these circumstances are discoverable by trained ML models, which are also designed to derive the associated solutions using big data. The ability to provide autonomous control is another reason why DTs offer exceptional value to AM, since the permutations and combinations of scenarios in AM are numerous given the multitude of process variables involved.
We have shown the need for fast-solving reduced-order ML models that hold process intelligence for real-time interrogation by DTs during diagnostic process control. Further, we have provided examples of the different physical and materials science aspects of the AM process for which process intelligence, obtained through computational modelling and experimentation, must be embedded in the AI capabilities, thus implementing a physics-informed ML approach.
There is a compelling case for the development of DTs of AM processes. However, this remains a challenging task that will require the skills and collaboration of scientists and engineers from many disciplines.

Data availability statement
No new data were created or analysed in this study.