A review of automation of laser optics alignment with a focus on machine learning applications

In industrial and laboratory-based laser systems, the positioning of the various optical components involves complicated, time-consuming processes. Machine learning has proven itself in recent years as a reliable tool in general control automation and adjustment tasks. However, machine learning has not yet found widespread application in specific tasks that require a highly skilled workforce to assemble and adjust high-precision equipment, such as the wide array of optical components implemented across the vast number of laser systems within the field of photonics. This review provides a comprehensive summary of research in which automation and machine learning have been used in the processes of mirror positional adjustment, triangulation, and the selection of optimal laser parameters alongside other control parameters of various optical components. Promising research directions are presented, with corresponding proposals on the use of machine learning for the task of setting up industrial and laboratory laser systems. The review was conducted according to the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).


Introduction
Today, lasers are used in a very wide and diverse range of production technologies, covering industries as varied as defense, medicine, entertainment and the home, to name a few [1,2]. For some industrial applications, the accurate positioning of the laser source is solved via calibration processes [3,4], but many applications also require the characteristics and direction of the laser beam to be adjusted in real time with sub-micron precision. Aligning optical components poses challenges in a wide array of applications. For example, difficulties in aligning the optical components of telescopes exist in the assembly of satellites monitoring the Earth's surface [5]. Similar problems are encountered during the assembly of micro-optical components involving lenses with a short focal length, despite the significantly different scales involved. In recent years, there has been significant progress in the automation of laser system production lines towards the creation of smart factories [6,7], where the main tool for achieving the set goals is machine learning [8].
To obtain optimal laser performance, precise alignment processes must normally be performed [9]. This is especially important when working with high-precision systems such as those used in laser interferometry [10]. Misalignments often lead to reduced efficiency and system performance. The alignment of laser optics requires the adjustment of parameters such as the tilt, offset and rotation of the various optical elements, which also typically have a direct influence on each other. The interaction between these parameters leads to a multidimensional alignment space, which in turn makes it difficult to efficiently find the optimal alignment configuration using human logic [11]. It is much easier for machine learning algorithms, optimization processes and search methods to navigate this complicated parameter space to find the best alignment configuration. Quite often, laser devices need to be continuously monitored and adaptively aligned due to changes in environmental conditions, thermal effects, and/or system dynamics [12]. Such real-time adaptability is a challenging aspect not only for manual work but also for automation; machine learning methods, especially reinforcement learning, can allow systems to autonomously adjust alignment in response to these dynamic changes. At the same time, manual alignment of laser optics, in addition to being time-consuming, is still subject to human error. Therefore, automation is necessary to optimize these alignment processes and increase overall system efficiency.
Laser alignment of optical components can be implemented through automation and/or machine learning. Process automation follows predefined instructions for an efficient step-by-step laser alignment process, with each step being explicitly written into the algorithm. The advantage of using automation is that there is a complete understanding of the incorporated alignment process and, generally, if a non-standard situation arises it is easy to identify. To implement an automated alignment process, it is necessary to use specialized diagnostic equipment and cameras, and to periodically calibrate these. The main disadvantage of automation is that the algorithm at its core is not flexible and will not be able to adapt independently if any operating and/or system parameters are changed. Machine learning includes the ability of machines to adapt and learn from data, improve performance over time, and deal with complex, unstructured situations. While automation is successful at repetitive tasks, machine learning holds much higher promise for laser alignment tasks typically done by highly trained personnel, as machine learning will eventually be capable of enabling autonomous alignment systems to perform more complex cognitive tasks [13].
A general overview of the existing attempts of automating the alignment of optical laser systems is provided, followed by discussions on how, when, and where machine learning techniques can be beneficial in these processes.

Research methodology and criteria for comparison
In this section, the principles by which the articles were selected for analysis are described in detail. The search was performed through a combination of keywords using three major databases of academic literature: Web of Science, Scopus, and IEEE Xplore. In addition, the databases of the leading research publishing houses Elsevier, Springer, MDPI, Taylor & Francis, and Wiley were searched, as were the scientific literature platforms Google Scholar, ResearchGate, Academia, Publons, and PubMed.
Focus was placed on articles published in the last 10 years, as significant breakthroughs in machine learning occurred after the victory of the neural network AlexNet [14] in the ImageNet competition in 2012. Keywords were used in various combinations, such as "machine learning and automation" for "laser tuning", "FAC control" and "detection control of mirrors using machine learning", combined with different types of lasers and optical elements.
Particular attention in the analyzed works is paid to: the types of neural networks and their architecture, the types of machine learning methods used, dataset quality and availability in the public domain, experimental conditions and technical information about the equipment used, and the repeatability of the experiments. In Fig. 1, the flow diagram for choosing the core literature that constitutes the basis of this review is given, according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) paradigm [15]. Initially, 415 articles were found, of which 12 were removed as duplicates, and the remaining 403 were screened. The criteria of machine learning being used to control the laser parameters, to control the position of optical components, and to analyse the shape of the laser beam together narrowed this collection down to 43 papers. Of these, close to half were excluded because they lacked the complete information necessary for implementation and repetition of the methods. As a result, 19 articles remained as the core review literature. These 19 articles directly discuss the use of machine learning to automate the process of controlling optical components and hence are the works that most closely match the search criteria specified above. They are closely reviewed in Appendix A, which provides an understanding of the current status of research in this area. Besides these 19 core publications, the review includes 102 further citations at the end of the paper. These citations were necessary to explain the laser alignment process and equipment to an extent that the discussion of automation and machine learning for optical systems can easily be comprehended within the correct context.
In Fig. 2, a schematic of the structure of the paper is shown to clarify that, while the dataset collection and processing for optical laser systems is reviewed, mirror calibration methods are also briefly introduced, as understanding these two topics together is important to appreciate the application of machine learning techniques to alignment processes.This diagram also clarifies that among the various machine learning techniques reviewed as applied to laser alignment, a specific focus is placed upon reinforcement learning as this is the most common and perhaps most effective machine learning method in this field.

Related reviews and the focus of this study
There are plentiful research papers on tuning laser parameters by use of machine learning, but there is much less work on directly controlling the laser beam, such as by controlling the position of mirrors. Mill et al. [16] reviewed papers in which machine learning was used to process metal, plastic, ceramic, glass, and other non-metals. Attention was paid to the control of the main parameters of the laser operations: pulse frequency, pulse width, and average laser power. The authors concluded that implementing modern neural network architectures during laser processing (depending on the effect of laser exposure) can provide error compensation in real time.
Many reviews are devoted to the use of machine learning in laser welding processes [17]. In these works, the emphasis is mainly on controlling the power of the laser, with the direction of the laser beam being controlled by physically moving the beam via automated means. This format of work correlates to a lesser extent with the goals of this review. Research in which machine learning models are used for image analysis to assess the effect of a laser on a material has been considered previously by Deng et al. [18]. Zuo et al. [19] presented a comprehensive review on deep learning in optical metrology. The authors considered a wide range of problems in the context of neural networks, including interference suppression, phase recovery and phase unwrapping. However, the topic of laser sources was only indirectly covered.
It is important to note that various machine learning models can be involved in the processes of controlling optical elements and lasers, as both are extremely diverse [20,21], and therefore choosing the necessary machine learning architecture can be a rather laborious task. Often, in searching for the optimum, authors test several well-known architectures on one dataset. Such a method is simply not capable of giving a completely robust result. Therefore, the search for optimal machine learning architectures is also relevant, even though it is separate from the problems encountered in the field of applied optics. Again, there are many review papers, both specific to a certain area and more general, within the field of machine learning. For example, Lawal et al. [22] presented an overview of root methods in evolving environments, Francis et al. [23] provided an overview of deep neural networks, Alzubaidi et al. [24] produced an overview of convolutional neural networks (CNNs) and Zhuang et al. [25] formed a review of the rather promising direction of transfer learning.
The very subject of using neural networks in tuning optical systems was described back in 1993 in the technical paper written by Decker et al. [26], using the example of a spatial filter. In this paper, the advantages of neural networks over standard optimization methods are described in sufficient detail and substantial theoretical work is presented. However, as this was written three decades ago, there was still not enough computing power at the time to use machine learning in practice, hence the publication does not provide any practical examples. Transverse CNNs for laser mode analysis were discussed by Jefferson-Brain et al. in [27], but again no examples were provided with real data. The authors considered the general theory of CNNs, but the practical application was demonstrated only on non-deep CNNs (beam net).
Majarena et al. [28] presented a detailed overview of kinematic and gauge models to improve the behavior of spatial parallel mechanisms, but the authors focused on the number of sensors and required configurations, the type and impact of errors, and the number of parameters required, rather than the machine learning itself.
A full review dedicated specifically to the use of machine learning for the adjustment of industrial and laboratory laser and optical systems was not found; thus, the knowledge presented in various works and reviews close to this topic has been collated and discussed.

Focus of the current review
Special attention was paid to works that have found practical application rather than those that have only worked in simulation. Therefore, the practical applications of machine learning have been highlighted, specifically in exercising control over robotic or mechatronic devices for adjusting lasers and optical devices. A particularly important point of focus is the feedback loop between robotic/mechatronic devices and the input data to the machine learning models. In addition, the depth of the machine learning architectures, the time and methods of their training, and the use of transfer learning are also mentioned.
An example of one of the devices considered in this review is shown in Fig. 3, to provide context for the knowledge reviewed in this paper. The work highlighted includes methods of controlling the position of mirrors for laser galvanometers and beam shape analysis. Of particular interest are papers in which machine learning is used to control the steering of the laser beam via optical components (for example, precision kinematic mirror mounts controlled by robotic/mechatronic devices that receive coordinate data for the actuation mechanisms from neural networks), as well as monitoring of the position and shape of the laser beam via some form of sensor to provide feedback to a neural network. Also touched upon within this review are the areas in which machine learning has been used to analyze the effect of a laser on a material and the use of machine learning to control the process of cutting, engraving, etc. In these works, the laser parameters were calculated, but feedback was not considered, and the laser parameters were not changed during operation [29][30][31][32][33][34][35].

Literature review
In the following section the conducted literature review is presented. This review is structured under the subtitles of "Datasets", "Mirror Calibration Methods", "Control Related Aspects", "Machine Learning Applications for Laser and Optic Control", "Reinforcement Learning for Laser and Optic Control", "Supervised Learning for Laser Control" and "Limitations in the Reviewed Literature".

Fig. 2. Structure of the paper content.

Fig. 3. An example set of devices that were considered in this review as they require optical alignment. a - complete device (1 - laser source, 2 - lenses and phase mask, 3 - camera used to image laser source) [36], b - block diagram of laser beam control [37], c - result of laser beam decoding with CCD camera [38].

I. Rakhmatulin et al.

Datasets
Data preparation is one of the critical aspects of any machine learning application. In the field of photonics, alignment processes are useful for providing datasets for laser systems [39], optical components [40], and cameras/sensors. Within this context, machine learning tools are most often used for the task of measuring a laser beam on a CCD matrix [41]. The OpenCV library can be used to incorporate machine vision [42] for recording data in optical systems. This library can work without a learning process for object detection tasks and implements many methods for identifying objects of a given shape [43].
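As an illustration of how a beam position can be extracted from a camera frame for such datasets, the sketch below computes an intensity-weighted centroid on a synthetic NumPy image; this is a minimal stand-in for equivalent OpenCV routines such as cv2.moments, and the spot size and position are invented for the example.

```python
import numpy as np

def beam_centroid(frame: np.ndarray) -> tuple:
    """Return the intensity-weighted centroid (x, y) of a beam image, in pixels."""
    frame = frame.astype(float)
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    return (xs * frame).sum() / total, (ys * frame).sum() / total

# Synthetic Gaussian spot centred at pixel (40, 25) on a 64x64 sensor
ys, xs = np.indices((64, 64))
spot = np.exp(-((xs - 40) ** 2 + (ys - 25) ** 2) / (2 * 3.0 ** 2))
cx, cy = beam_centroid(spot)
```

In a real setup the centroid accuracy is bounded by pixel size, focus quality and sensor noise, which is why camera characteristics matter so much for dataset quality.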
It is worthwhile to note that, despite there being a significant amount of research investigating the tuning of optical components and laser parameters with machine learning techniques, as reviewed in this paper, none of these studies provides training datasets for such purposes. Consequently, despite the popularity of articles on datasets for a wide variety of topics, no review articles on datasets, or research papers in which the datasets were made available to the public, could be found.

Mirror calibration methods
In the problem of directing laser light onto an arbitrary surface using a three-dimensional control system, there are several methods for controlling and calibrating components.Fig. 4 shows the process of steering a laser beam onto a plane using two mirrors [44].
The calculation of the laser position is explained in several papers [45]. The coordinates of the laser beam on the plane can be expressed (in a commonly used form of the two-mirror projection model) as

X = (d/cos(2θy) + e)·tan(2θx), Y = d·tan(2θy),

where d is the distance between the mirror and the object, e is the distance between mirror x and mirror y, and θx and θy are the mirror rotation angles about the x- and y-axes respectively. However, despite the mathematical understanding of the process of controlling the position of the laser on the plane, automating this process remains a difficult task. Expensive high-precision actuation mechanisms are required to accurately and repeatably control the position of optical elements (in this case mirrors to steer the beam accordingly) [46]. Additionally, there is a large variety of possible errors in optical element placement to potentially account for [47], and high-resolution cameras are needed to adequately measure the fine adjustments involved, particularly as the system approaches an optimally aligned state [48]. For this reason, methods that implement calibration from a reference table are still popular [49]. This method shows good results in practice but sacrifices versatility, and as a result, when the operating modes and/or system parameters change, re-calibration is required [50].
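The two-mirror projection geometry can be sketched numerically as follows. This is a hedged illustration of one commonly used model form (beam reflects off mirror x, travels a distance e to mirror y, then a distance d to the plane; a mirror rotation of θ deflects the beam by 2θ); the specific distances are arbitrary example values.

```python
import math

def galvo_spot(theta_x: float, theta_y: float, d: float, e: float):
    """Spot coordinates (X, Y) on the target plane for a two-mirror galvanometer.

    Commonly used projection model: a mirror rotated by theta deflects
    the incident beam by 2*theta.
    """
    Y = d * math.tan(2 * theta_y)
    X = (d / math.cos(2 * theta_y) + e) * math.tan(2 * theta_x)
    return X, Y

# With both mirrors at their rest angles the beam hits the plane origin
X0, Y0 = galvo_spot(0.0, 0.0, d=100.0, e=10.0)
```

For small angles this reduces to the familiar linear approximation Y ≈ 2dθy, which is why look-up-table calibration works well near the centre of the scan field but degrades at larger deflections.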
Yu et al. [51] controlled a galvanometer laser scanner by developing a mathematical model which did not require information about the position of the mirrors. This model considers the possible impact of errors which can be made during the assembly of the laser system. The calibration process consists of seven steps, during which the objective function needs to optimize eleven independent unknown parameters. Consequently, while the procedure gave high accuracy, it noticeably complicated the calibration process. This same issue is present in the method proposed by Yang et al. [52]. In this work the authors proposed a highly accurate position calibration method based on the use of both a precise motorized linear stage and a flat target. Alternatively, calibration of the laser plane itself was again made using a precise motorized linear stage via Zhang's method [53]. Chi et al. [54] also calibrated the position of the mirrors using Zhang's method but, unlike previous work, the authors managed to achieve universality with their method. This was showcased via field-calibrated single-axis galvanometers being able to solve the refraction problem caused by the air-glass-water interface for automatic underwater scanning systems. To achieve this, the authors proposed performing calibration with a coplanar limitation of these points. It was proposed to use the rotation and displacement matrices between the coordinate systems of the target and the camera, in the process of calibrating the internal parameters of the camera, to directly transform the coordinates of the interface points.
As previously mentioned, there are many different methods for calibrating the position of mirrors [55,56], but the common problem of the presented methods is that they are time-consuming in nature. This arises from the adjustments being performed step by step according to the various algorithms used, with a primary focus on accuracy, typically at the expense of universality.
In the papers discussed above, there are several shortcomings expressed in the complexity of the processes of controlling the position of optical components within the calibration processes. The required precision of positional control can be realized with high-precision drives [57] or through high-precision measuring equipment [58], but these tend to further complicate the setup process and lead to additional production costs. Choosing between these options requires consideration of the accuracy requirements, the complication of the adjustment process, the production cost, comparative analysis, and risk assessment; quantifying these aspects will require data collection and experimentation.
Machine learning algorithms require datasets that are very laborious to collect, requiring highly skilled engineers to repeat the calibration process many times, while the average duration of a calibration session depends on both the design of the device and the qualifications of the specialist. This data acquisition can, however, be automated. For example, an automated algorithm can run through the "beam walking" calibration process [59,60], which ultimately provides data on the initial positions of the actuators on the kinematic mirrors and the final positions after the calibration process. The data throughout this process can be recorded at custom sample rates, and some alterations or slight deviations from the procedure can be introduced to generate variations in the data, providing a rich set of input-output pairs of alignment data that can be registered automatically.
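The automated logging of such input-output pairs can be sketched as below. This is a toy simulation only: the "optimal" actuator positions, the perturbation magnitudes, and the two-mount geometry are hypothetical stand-ins for real beam-walking hardware.

```python
import random

def beam_walk_dataset(n_sessions: int, noise: float = 0.05):
    """Simulate logging of automated beam-walking calibration sessions.

    Each record pairs the initial actuator positions of a kinematic
    mirror mount with the final positions found by the calibration
    routine; random perturbations enrich the dataset, mimicking the
    deliberate deviations described in the text.
    """
    optimal = (1.25, -0.40)  # hypothetical ideal actuator positions
    samples = []
    for _ in range(n_sessions):
        start = tuple(p + random.uniform(-1.0, 1.0) for p in optimal)
        end = tuple(p + random.gauss(0.0, noise) for p in optimal)
        samples.append({"start": start, "end": end})
    return samples

data = beam_walk_dataset(100)
```

A dataset built this way directly yields the supervised pairs (initial state, aligned state) that a regression model or neural network could later be trained on.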
The quality of a dataset recorded in this way will largely depend on the accuracy of the equipment used. Machine vision can be used to monitor the position of the laser relative to the centre of the aperture, and the detection accuracy will depend on the characteristics of the camera itself (along with any imaging optics used, such as magnification lenses). These include the number and size of the pixels, the quality of the focus setting and the distance to the object, to name a few. Unlike the position of the laser beam, the position of the actuator is easier to control, as the accuracy largely depends on the accuracy of the encoders used and the responsivity of the mechanics controlling the position of the actuator.

Fig. 4. Projection diagram of a laser beam propagating through a galvanometer scanner [19].
Once the connection between the machine vision system used to generate the feedback and the machine learning algorithm that physically controls the position of the optical components is established, it becomes possible for the tedious calibration process to be performed in an autonomous manner [61,62].

Control related aspects
Automation aims to increase the achievable accuracy while also speeding up the process of tuning optical components by reducing the time spent by specialists in setting up individual devices. Salazar-Serrano et al. [63] automated a kinematic mount with stepper motors, but the accuracy of these actuating motors is potentially limited due to susceptibility to resonance and the possibility of skipping steps [64].
Fang et al. [65] calibrated the position of a lens using a relatively complicated algorithm with access to eight degrees of freedom and, by considering the image of the focal plane, presented a self-alignment method for a reconfigurable system. However, this method uses only an on-axis point source and does not take into account the effects of optical aberrations on alignment across the entire field of view. The authors suggest that similar results would be obtained for on-axis and off-axis sources. Lahari et al. [66] positioned a laser beam on a detector consisting of four photodiodes using proportional-integral-derivative (PID) and fuzzy logic controllers. The authors succeeded in detecting the deviation of the laser spot from the center of the detector. The developed system demonstrated that it can solve the problem of beam position alignment in free-space optical communication applications, with the fuzzy logic controller providing better control action than the PID controller.
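The beam-centering principle behind such controllers can be illustrated with a minimal PID loop. This sketch is not the implementation from [66]: the plant model (controller output directly nudging the spot position) and the gains are purely illustrative, and the integral term is left at zero in this short run.

```python
class PID:
    """Minimal discrete PID controller for one beam-steering axis."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy plant: each control output directly shifts the spot on the detector.
pid = PID(kp=0.4, ki=0.0, kd=0.1, dt=1.0)
position = 1.0  # initial spot offset from the detector centre (arb. units)
for _ in range(50):
    position += pid.step(0.0 - position)  # error = setpoint (0) - position
```

In a real quadrant-detector system the error signal would come from the normalized difference of the four photodiode currents rather than from a known position.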
The processes of automatic control of optical components are typically performed by piezoelectric inertia actuators (such as in the work published by Choi et al. [67]), adapted linear actuators [68,69] or stepper motors [70]. Another popular device for aligning the position of mirrors [71] and controlling the position of the laser [72] is the hexapod. The hexapod is a platform that uses six mechanised limbs to precisely change its position and orientation. This can also be successfully controlled by machine learning [73], where a gyroscope is often used as a controlling sensor [74]. In any case, hexapods tend to be implemented in more niche applications [75] at a much higher cost (in comparison to the systems above), so current developments in this area are not elaborated upon here.
Although automatic processes can greatly simplify the overall calibration process, the initial adjustment of the position of the optical components still requires time and qualified personnel. Advances in the field of machine learning show promise that, if implemented in conjunction with robotic and/or mechatronic devices, could significantly speed up optical alignment processes.

Machine learning applications for laser and optic control
Determining the best algorithm for tuning and controlling optical components in optical alignment processes remains a challenging task to date, in part due to the large number of highly precise adjustments required of suitably trained personnel. Machine learning has the potential to eliminate the need to give exact instructions to the software for performing the necessary calibration processes. The task of the implemented algorithm is to independently find optimal solutions by discovering patterns in the available measurement data and finding mathematical dependencies between the input data and the quantities to be determined; the resulting pre-trained weights are subsequently used for prediction in real conditions. As input data, all of the possible laser parameters, information about the position of the optical elements and other experimental parameters (e.g. sample properties in laser machining contexts) can be used.
Wissel et al. [29] tested ridge regression, support vector regression and Gaussian processes for a galvanometric triangulation device based on two mirrors and showed that these all performed similarly to look-up table calibrations. These approaches showed advantages over ANNs, which often encounter generalization problems due to overfitting, limited data, high dimensionality, noise in the data, etc. At the same time, this method demonstrates performance that is suitable for real-time tasks on low-power equipment. Elsewhere, to reduce the computing power spent on calculations, Wagner et al. [76] used linear regression to calibrate the triangulation system for the optical head tracking system in intracranial radiosurgery. Zhao et al. [30] developed a static method for calibrating a system used for the inspection of sheet metal parts via laser triangulation, using a model consisting of linear combinations of coordinates for calibration. The method is based on preparing data on a scalar composite target with a geometric score as a quality indicator. Ludtke et al. [31] presented a calibration method based on statistical learning methods. The authors compared linear regression methods and artificial neural networks, showing that the statistical learning methods outperformed the artificial neural networks. The authors investigated linear regression with polynomial basis functions. To avoid overfitting, the complexity of the underlying function was reduced to a suitable level, and the regression function was smoothed by adding a penalty to large coefficients. Alomet et al. [77] investigated the classification of laser beam images from the National Ignition Facility (NIF) to find abnormalities in the creation of background X-ray illumination. They proposed a relatively simple, fast and high-precision CNN with multiple layers to perform alignment via parabolic mirrors. The implemented CNN consists of two main parts: a feature extractor and a classifier. In the feature extraction layers, each network layer receives the output of the prior layer as its input and passes its output as the input to the next layer. Xie et al. [78] used a CNN with four layers to control the process of laser material processing with real-time feedback. The authors successfully detected minor (sub-pixel) unintentional movements in the position of the laser beam, with the neural network making it possible to set up and control several parameters of laser processing. McDonnell et al. [32] used artificial neural networks and generative adversarial networks (GANs) in material processing to optimize laser parameters (such as the pulse energy, repetition rate and the number of pulses used) and to predictively visualize the expected results in laser surface texturing. The authors successfully attained a positive outcome by devising a methodology to optimize hyperparameters, using the specific example of optimizing the number of neurons at each layer as a case study. Zhang et al. [33] used a Wavelet Kernel Extreme Learning Machine (KELM) to calibrate optical components in real time. The Wavelet KELM algorithm was adopted to establish the mapping model f: xi → ti between the measurement data and the spot position data, upon which an online calibration model is built. This algorithm can be used in real time and allows a balance between calibration accuracy and calculation speed.
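The ridge-regression style of calibration discussed above can be sketched in closed form. This is a toy example: the linear angle-to-position map, the noise level and the regularisation strength are invented for illustration and do not reproduce any of the cited setups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: mirror tilt angles -> measured spot coordinate.
# A noisy linear map stands in for a real triangulation device.
angles = rng.uniform(-0.1, 0.1, size=(200, 2))        # (theta_x, theta_y) pairs
true_w = np.array([180.0, 15.0])                      # assumed geometry factors
spots = angles @ true_w + rng.normal(0.0, 0.01, 200)  # measured positions

# Closed-form ridge regression: w = (X^T X + lambda*I)^-1 X^T y.
# The penalty on large coefficients smooths the fit, as in Ludtke et al.
lam = 1e-3
X, y = angles, spots
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

rmse = float(np.sqrt(np.mean((X @ w - y) ** 2)))
```

The appeal for real-time use is clear from the sketch: once w is fitted, a prediction is a single matrix-vector product, which is cheap enough for low-power embedded controllers.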
Khachikyan et al. [34] trained a CNN for micro-optics positioning for fast-axis collimation. In this work, images from cameras were used as feedback, with the neural networks controlling the position of micro-optics along six axes. The authors trained three popular CNNs (AlexNet, VGG19, ResNet152) to predict the optimal lens position, using a training set of 2011 images. As a result, the authors achieved a reduction of the lens alignment time to hundreds of milliseconds. This approach to the collimation of high-power diode laser systems reduces the assembly time; however, the implementation is currently limited to one degree of freedom, i.e. alignment along the optical axis. Tu et al. [79] introduced a single-hidden-layer feedforward neural network to control galvanometer drives. The authors described in detail the theory of triangulation and the implementation of the neural network. A distinctive feature of this work was that stereo vision was used to control the position of the laser beam. Jefferson-Brain et al. [35] introduced a CNN for laser mode composition estimation that can be performed in less than 3 ms on consumer-grade hardware. The authors mentioned that the developed CNN was successfully applied to the alignment of solid-state laser systems operating on higher-order modes, but they did not provide details of the alignment process. The authors used a custom dataset of 18,000 images of 128 × 128 pixels at 8-bit resolution.
From the highlighted research, it can be observed that machine learning has shown high efficiency in optimizing the positions of various optical components.While the assembly of the dataset is laborious, the authors of the considered works agree that using machine learning methods for performing the calibration process is advantageous as it shows higher accuracy and requires fewer overall steps.

Reinforcement learning for laser and optic control
When implementing machine learning in production processes, an important aspect is the ability to work in real time, using an adaptable machine learning model to handle rapidly changing data. It is necessary that the model can compensate for errors in laser alignment and that the compensation time is as short as possible. In recent years, there has been a breakthrough in the field of reinforcement learning [80,81], primarily due to earlier success when applied to various games [82], with it now being widely used to control robots [83,84]. All of this has led to the creation of architectures that are able to control laser parameters [85,86] and the positions of optical components [87].
Bruchon et al. [88] used episodic Q-learning [89] with linear function approximation and Natural Policy Gradient Reinforcement [90] to optimize and recover laser performance, respectively, during seed laser alignment of a free-electron laser system. The authors completely automated the procedure for aligning the seed laser with respect to the electron beam, which was previously only achievable manually. Bruchon et al. [91] also used the Q-learning algorithm for laser alignment by monitoring the intensity and spectral characteristics of the laser beam. The authors adapted Q-learning to function effectively within a continuous state space, using a linear approximation of the action-value function with Gaussian radial basis functions as the features.
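The combination of Q-learning with Gaussian radial basis features can be sketched as follows. This is a toy one-dimensional "misalignment" task written for illustration only; the state space, reward, dynamics, and all hyperparameters are assumptions, not the cited free-electron laser setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 16                                  # number of RBF centres
N_ACTIONS = 3                                    # nudge mirror -1, 0, +1
CENTRES = np.linspace(-1.0, 1.0, N_FEATURES)     # centres over the state
SIGMA = 0.2                                      # RBF width

def rbf_features(state):
    """Gaussian RBF feature vector for a scalar state."""
    return np.exp(-((state - CENTRES) ** 2) / (2 * SIGMA ** 2))

weights = np.zeros((N_ACTIONS, N_FEATURES))      # Q(s, a) = w[a] . phi(s)

def q_values(state):
    return weights @ rbf_features(state)

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """Semi-gradient Q-learning update on the linear approximator."""
    phi = rbf_features(state)
    td_target = reward + gamma * np.max(q_values(next_state))
    td_error = td_target - weights[action] @ phi
    weights[action] += alpha * td_error * phi

def step(state, action):
    """Toy dynamics: actions nudge the misalignment; reward favours 0."""
    state = float(np.clip(state + (action - 1) * 0.1, -1.0, 1.0))
    return state, -abs(state)

for episode in range(300):                       # epsilon-greedy training
    s = rng.uniform(-1, 1)
    for _ in range(50):
        if rng.random() < 0.15:
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(np.argmax(q_values(s)))
        s2, r = step(s, a)
        q_update(s, a, r, s2)
        s = s2
```

The RBF features turn the continuous state into a fixed-length vector, so standard tabular-style Q-learning updates carry over with only the linear weight vector changing.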
Sorokin et al. [61] used reinforcement learning to calibrate a physical Mach-Zehnder interferometer. The authors wrote a simulation (Interferobot) in which the model was trained, then transferred the trained policy to the physical interferometer, which was successfully aligned without any fine manual tuning. Interferobot was penalized for each additional step taken, but significantly rewarded for high alignment quality. Makarenko et al. [62] trained RL agents with a continuous action space, in contrast to Sorokin et al., whose agent used discrete actions. Mathew et al. [92] used the open-source machine learning package M-LOOP to improve on an optimized manual alignment of a laser beam into a single-mode optical fibre, using signals from a custom photodiode circuit to continually measure the output power. Although M-LOOP contains several machine learning algorithms, here the authors used only Gaussian processes (GP). The algorithm controlled four stepper motors to align the mirrors mounted in two precision kinematic mirror mounts, and took 20 min to optimize the four parameters (stepper motor positions). In the papers described above, the machine learning calibration process achieves results comparable to those of a qualified specialist, but the calibration itself is much faster.
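The Gaussian-process approach to tuning a handful of motor positions against a measured power signal can be sketched as follows. This is not M-LOOP's implementation: the GP is written directly in NumPy, the "photodiode" objective is a synthetic stand-in, and the optimum, ranges, kernel length scale, and acquisition rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
OPTIMUM = np.array([0.2, -0.4, 0.1, 0.3])   # hypothetical best motor positions

def measure_power(motors):
    """Synthetic fibre-coupling power: peaked at OPTIMUM, slightly noisy."""
    return float(np.exp(-np.sum((motors - OPTIMUM) ** 2) / 1.0)
                 + rng.normal(0.0, 0.01))

def rbf_kernel(A, B, ls=0.5):
    d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * ls**2))

def gp_posterior(X, y, Xs):
    """GP posterior mean and std at candidate points Xs."""
    K = rbf_kernel(X, X) + 1e-4 * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y
    var = 1.0 - np.sum(Ks * (K_inv @ Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Initial random probes of the 4-D motor space, then UCB-driven search
X = rng.uniform(-1, 1, size=(10, 4))
y = np.array([measure_power(x) for x in X])

for _ in range(60):
    cand = rng.uniform(-1, 1, size=(1000, 4))
    mu, sigma = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(mu + sigma)]     # upper confidence bound
    X = np.vstack([X, x_next])
    y = np.append(y, measure_power(x_next))

best_motors = X[np.argmax(y)]
```

Each iteration fits a surrogate model to all power measurements so far and proposes the next motor setting that balances predicted power against uncertainty, which is why such optimizers need far fewer probe measurements than a grid or random search.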
Despite the successes obtained in the works described above, one should not forget that overfitting can quickly lead to state overload, which can consequently degrade the results. The use of reinforcement learning should be well justified and should not be applied to simpler problems, given the many intricacies involved in sufficiently training the model. Conversely, in tasks that involve highly trained experts, and where other machine learning models encounter numerous difficulties, reinforcement learning can be particularly suitable, as it forms an understanding of the process much as a person would through repeated trial and error. In many cases, training with reinforcement learning also requires a mathematical model to be used alongside the data, and the difficulty lies in establishing a mathematical description of the laser path in systems with a large number of optical components. In general, the success of transferring the architectures described in this section to real setups depends largely on how data randomization was implemented, since the success of moving machine learning models from simulation to a real optical setup hinges on it. Domain randomization involves introducing random variations and perturbations into the simulation environment during training: various environmental factors, such as lighting conditions, textures (if the real setup uses machine vision for feedback), and the positions of objects, change randomly during the simulation. The model thus learns to adapt to a wide range of possible scenarios and becomes more robust to the uncertainties and disturbances present in real environments, smoothing out the discrepancy between the simulated environment and real conditions. The model must be trained to adapt to different conditions for it to work reliably on a real setup.
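Domain randomization for a simulated alignment environment can be sketched as re-sampling physical and visual parameters at every episode reset, so a policy trained in simulation never sees a single fixed configuration. All parameter names and ranges below are hypothetical, not taken from the cited works.

```python
import random
from dataclasses import dataclass

@dataclass
class SimParams:
    beam_jitter: float        # RMS angular noise on the beam (rad)
    actuator_backlash: float  # dead band of the motorised mounts
    camera_gain: float        # stands in for lighting variation
    detector_noise: float     # additive sensor noise level

def sample_params() -> SimParams:
    """Draw one random environment configuration."""
    return SimParams(
        beam_jitter=random.uniform(0.0, 0.05),
        actuator_backlash=random.uniform(0.0, 0.02),
        camera_gain=random.uniform(0.7, 1.3),
        detector_noise=random.uniform(0.0, 0.01),
    )

class AlignmentSim:
    def reset(self) -> SimParams:
        """Re-sample every episode, as in domain randomization."""
        self.params = sample_params()
        return self.params

    def observe(self, true_intensity: float) -> float:
        """Detector reading distorted by the current randomized params."""
        return (self.params.camera_gain * true_intensity
                + random.gauss(0.0, self.params.detector_noise))
```

A policy trained over many such randomized episodes is forced to rely on signals that survive all sampled perturbations, which is the mechanism behind robust simulation-to-real transfer.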

Supervised learning for laser control
In several research papers, the process of laser tuning is considered outside the context of manipulating the various optical components. Teixidor et al. [93] explored K-nearest neighbours algorithms, neural networks, decision trees, and linear regression models to control laser micromachining of hardened steel. Their analysis showed that neural networks better model the dimensions of the product (channel depth), while decision trees model the material removal rate more accurately. Lu et al. [94] used a neural network to control the position and temperature of the laser crystal, the length of the cavity, and the position of the focusing lens in an 18 mW mode-locked deep-ultraviolet laser with a centre wavelength of 205 nm. The authors created a feed-forward network with a single hidden layer and one output layer; the shallow network ensured fast operation for real-time optimization. Galvin et al. [95] used a deep Bayesian neural network to reduce the effects of fluctuations and laser drift in a high-energy laser system with a time between pulses of 1 ms. The authors were able to match the input power spectrum of the laser with its output power spectrum and demonstrated the speed of this approach. In these papers, the problems of setting and controlling laser parameters are considered locally, for solving specific problems. However, if they were considered within the context of setting up optical components, machine learning could provide even more opportunities to further increase the efficiency and accuracy of the alignment processes.
CNNs, as the most popular type of deep neural network, are particularly suited to image recognition tasks, where they excel at identifying and classifying patterns within image data. They are not specifically designed for linear/nonlinear dependence or regression analyses, where the goal is to model the relationship between input and output variables, as is typically required in laser alignment problems. Therefore, machine learning techniques that can leverage analytical models of laser beam propagation, such as linear regression, logistic regression, or support vector machines, may be more suitable for finding dependencies between parameters.
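As a minimal example of leveraging an analytical model: for small angles, the spot displacement on a detector depends linearly on mirror tilt (displacement ≈ 2·L·tilt for a lever arm L), so ordinary least squares recovers the sensitivity directly from noisy measurements. The lever arm, tilt range, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
L_ARM = 0.75   # metres; hypothetical mirror-to-detector lever arm

# Simulated calibration sweep: tilt the mirror, record the spot position
tilt = rng.uniform(-1e-3, 1e-3, size=100)            # radians
spot = 2 * L_ARM * tilt + rng.normal(0, 1e-6, 100)   # metres, noisy sensor

# Least-squares fit of spot = k * tilt + b
A = np.column_stack([tilt, np.ones_like(tilt)])
(k, b), *_ = np.linalg.lstsq(A, spot, rcond=None)
```

The fitted slope `k` is the alignment sensitivity (ideally 2·L), after which a measured spot offset maps directly to the tilt correction `offset / k`; no deep network is needed when the physical dependence is this simple.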

Limitations in the reviewed literature
Machine-learning-empowered artificial intelligence systems have shown significant progress in replacing humans involved in comparatively low-skilled work, with an estimated 85 million jobs expected to be replaced by machine-led automation by 2025 [96]. This review instead focused on the use of machine learning to replace the work of highly qualified experts, and in this area similar trends are not yet observed. This is because higher-skilled work requires specific knowledge and experience; there is therefore a need to understand the process itself in order to transfer human experience to machine-based automation. For such tasks, machine learning techniques could first be used to analyze and understand how experts perform the task. In other words, machine learning could be used to capture the required skills.
Consequently, it is necessary to understand more precisely the mechanisms behind the optimisation of laser systems. Despite the vast number of techniques available, this tuning is still mostly done manually, and the time taken depends on the experience of the specialist. In the future, machine learning is expected not only to control these configuration processes, but also to capture the logic of the configuration and to answer the questions of how experts perform these processes, how they decide on suitable outputs/results, and why one specialist takes more time than another.
Optical tuning and/or alignment is used in many different areas, each of which has its own specific aims and therefore requires sufficiently in-depth analysis to understand the feasibility of implementing machine learning methods. Although this review focused on the use of machine learning, it was hindered by the fact that most of the works under consideration provided links to neither the code nor, more importantly, the datasets.
This review covers numerous scientific fields; therefore, for the most part, the technical characteristics of the optical components used and the principles of calibration were not extensively elaborated upon. Although the type of machine learning used (such as the depth of the neural networks) affects the required processing power of the CPU or GPU [97], the hardware used was also not considered. The importance of computing capability is reduced, since even heavy machine learning models can now be implemented on single-board computers. Le et al. [98] have previously covered this in a detailed look at the prospects for deep learning hardware.

Conclusion and future directions
This review provides an overview of solutions aimed at automating the alignment of various optical components through the implementation of machine learning methods. The most relevant works have been analysed, the results of which can be used in practice. Both positive and negative aspects of the machine learning models used in the covered research have been discussed, with the consequent feasibility of using machine learning in tasks that require highly qualified employees to adjust optical elements being considered. There are various classical methods for laser optics alignment, but machine learning is not as sensitive to initial errors made in the calibration processes and can adapt to different conditions. Additionally, machine learning mechanisms do not rely on decades of knowledge gained in the field of tuning optical components. On the one hand, this is one of the main advantages of machine learning architectures, since they can be entrusted with the process of setting up optical components while neglecting the initial working conditions. On the other hand, there are fears that a laser engineer or operator could, as a result, lose their understanding of the logic of the tuning process. It should also be considered that even with a slight change in the adjustment algorithm, machine learning models may not give the expected results. At the same time, if an error arises through the use of machine learning, it can be very difficult to analyse and correct. The underlying reasons behind such errors may include a lack of training data, the wrong machine learning architecture being implemented, or the chosen theoretical model being unable to work correctly on new data.
In this work, the main achievements to date in the field of automating optical alignment processes have been highlighted. The reviewed works provide examples of which algorithms can be used and how successful they are in the practice of setting up optical components. Reinforcement learning models have generally been found to be the preferred machine learning technique for laser optics alignment. Machine learning competes with manual optimization methods, whose success is largely dependent on the prior experience of the user. Additionally, with manual tuning there is "human error" to consider, whereas machine learning depends primarily on the model chosen, alongside the mechanisms and sensors involved in controlling the parameters of the laser and/or manipulating the various optical components.
As a general rule, the comparative advantage of machine learning over traditional methods is not sufficiently discussed in the works reviewed. Due to the rising popularity of machine learning, it is increasingly considered a necessary tool. Particular attention should be paid to the possibility of using the proposed algorithms in real time. It should also be noted that the number of machine learning techniques involved in the reviewed papers is quite limited relative to the complete available range, and that many of these other techniques were not considered for controlling mechanical components and/or device parameters.
The limiting factor in implementing machine learning in laser alignment processes is its generalizability to the alignment of different optical systems. The successful use of an algorithm to align a specific optical setup does not guarantee success with another optical system, even if they share similar characteristics. Automation lacks the flexibility to adapt to changes in the algorithm or the addition of new components. For example, when calibration is performed by a specialist, the addition of a new component normally only increases the time of the entire calibration, but with machine-learned automation the alignment might not be successful at all. However, in recent years, many machine learning architectures have been implemented and new directions in machine learning have been formed which might be promising for enhancing the generalizability of machine-learned models. These rather novel machine learning approaches have not yet found wide application in the field of optical component control. One such promising direction is Multi-Task Learning [99], which makes the architecture more flexible and able to cope with different operating conditions. A Similarity Learning Network [100] could also potentially be used to evaluate the similarity of calibration conditions after changes have been made to the optical setup, and subsequently to reconfigure the architectures responsible for calibration. To fully exploit the adaptivity of such machine learning approaches, machine learning perhaps needs to be applied not only to align specific optical systems, but rather to learn the common strategies applied by experts when aligning a large range of optical systems. Machine learning might then help to explain why and how an experienced operator calibrates an optical system faster and/or more accurately than a less experienced user.
As it currently stands, there is no ideal solution for aligning optical components through machine learning, as the solutions presented are typically local in nature, for bespoke setups. There are, however, several works that have implemented the most widely used optical components, such as cameras and mirrors, and/or have involved altering the parameters of the laser source in the context of forming one automated line. Continual progress is being made in this area, as evidenced by the increasing publication activity on this topic. There remain ample opportunities to test and consider algorithms across numerous optical control problems.

Declaration of Competing Interest
There are no conflicts of interest to declare.
Donald Risbridger graduated from Heriot-Watt in 2017 with an MPhys. His Master's project involved microfluidic device design and parameter studies for the optimal optical isolation of yeast cells. Donald's PhD project at Heriot-Watt revolved around ultrafast laser resection of biological tissues using various beam geometries, with the aim of improving the precision and efficiency of minimally invasive surgical procedures. He is currently undertaking a multidisciplinary research endeavour looking at implementing a robotic arm to automate the alignment of various laser systems. This entails working closely with robotics and machine learning researchers in addition to industrial collaborators.

Prof Esser is researching new materials, architectures and manufacturing methods of direct diode laser modules, diode-pumped solid-state lasers in the near- and mid-infrared, as well as laser-based digital manufacturing. Key challenges of power- and energy-scaling of solid-state laser systems are addressed, focusing on high-brightness laser diodes, micro-optic beam shaping, thin-slab architectures and compact amplifier concepts. Novel manufacturing techniques, such as ultrafast laser welding of glass-to-metal, autonomous robotic assembly and metal additive manufacturing, are critically evaluated for their use and uptake in the design and manufacture of future industrial laser systems.

Fig. 1. PRISMA flow diagram: schematic representation of the process followed to identify the 19 core studies.

Ildar Rakhmatulin received the Ph.D. degree in electrical engineering from South Ural State University, Russia, in 2015. Since 2022 he has been a research associate at Heriot-Watt University, School of Engineering and Physical Sciences. His interests lie in the fields of machine learning and machine vision.
Richard M. Carter graduated from the University of Edinburgh in 2005 with an MPhys, from the Universities of St Andrews and Heriot-Watt with an MSc in photonics and opto-electronic devices in 2007, and from Heriot-Watt in 2012 with a PhD developing long-period fibre gratings for environmental gas sensing. He was one of the founding members of the EPSRC Centre for Innovative Manufacturing in Laser Based Production Processes. In 2018 he was appointed a Research Fellow (Lecturer) within the Institute for Photonics and Quantum Sciences at Heriot-Watt University, and in 2021 was promoted to his current role as Associate Professor with teaching responsibilities in both Physics and Mechanical Engineering. His research focuses on the use of lasers for high-value manufacturing. M J Daniel Esser obtained his Doctorate of Engineering in Photonics (2010) from Heriot-Watt University while working full-time at the CSIR National Laser Centre, South Africa. He joined Heriot-Watt in 2013 as Professor and held the Royal Academy of Engineering & Leonardo Research Chair in Laser Device Physics & Engineering. He is an alumnus of the Global Young Academy.