Methodology for the development of in-line optical surface measuring instruments with a case study for additive surface finishing

The productivity rate of a manufacturing process is limited by the speed of any measurement processes at the quality control stage. Fast and effective in-line measurements are required to overcome this limitation. Optical instruments are the most promising methods for in-line measurement because they are faster than tactile measurements, able to collect high-density data, can be highly flexible to access complex features and are free from the risk of surface damage. In this paper, a methodology for the development of fast and effective in-line optical measuring instruments for the surfaces of parts with millimetre- to micrometre-size features is presented, and its implementation is demonstrated on an industrial case study in additive manufacturing. Definitions related to in-line measurement and barriers to implementing in-line optical measuring instruments are discussed.


Introduction
The productivity of a manufacturing line depends on the throughput of high-quality parts that are produced within a prescribed period of time. To control the quality of parts without reducing the production rate, the parts should be inspected accurately and quickly [1,2] . Furthermore, quality control procedures should be applied at each step of a complex process chain to reduce stack-up variations during the processes and to meet desired tolerances [3,4] . Hence, fast (less than the cycle time of a process) and effective (satisfying desired requirements, for example accuracy) in-line measuring instruments are required to control the quality of parts without halting the manufacturing processes [4] .
In this paper, a general methodology to develop in-line surface measuring instruments, focusing on millimetre- to micrometre-size features, is presented. The applications of this type of in-line measurement are, for example, in micro-scale injection moulding, micro-scale polishing, micro-scale milling, micro-scale electrical discharge machining, micro-scale electrolyte jet machining, precision grinding, additive manufacturing of small parts and surface coating systems. A case study is used to show implementation of the methodology for in-line measurement (see Section 1.1) of product properties; in this case, the surfaces produced by a polishing process for additively manufactured polymer parts. The development of in-line measuring instruments is a challenging task. The challenges are not only the measurement speed requirement, but also issues such as environmental noise, data fusion and system-level integration.
Fig. 1. Definitions of in-situ, in-line and on-machine measurement (adapted from [3]).

Definitions
Following the CIRP draft definitions [3], several terms related to in-line measurement are given below.
• In-situ measurement: a measurement of a part surface that is carried out on the same manufacturing floor/shop floor, without isolating the measurement process outside the manufacturing line.
• In-line/on-line measurement: a measurement of a part surface that is carried out in a production or manufacturing line, either inside (on-machine) or outside (off-machine) a production machine. For off-line measurement, by contrast, the measurement is carried out outside the production or manufacturing line, but still on the same manufacturing floor.
• On-machine measurement: a measurement of a part surface that is carried out inside the production machine that manufactures the part. The measurement can be carried out in-process (during the process) or off-process (before or after the process).
From the definitions above, in-situ measurement includes both inline/on-line and on-machine measurement. Fig. 1 shows a pictorial representation of the definitions.

Barriers to developing in-line measuring instruments
Focusing on optical measuring instruments, there are challenges that need to be overcome for in-line measurement. We divide the challenges into five groups: methods; speed; system integration and control; traceability; and intelligence (defined here as the ability of an instrument to take decisions and learn from prior measurements) ( Table 1 ). However, several challenges are not only relevant for optical instruments, but also for tactile measuring instruments, such as the ability to undertake multi-scale measurements (measurement of form at hundreds of millimetre scales and surface texture at sub-micrometre scales) and to measure in noisy environments (for nanometric accuracy).
Method is related to the limitations of the optical measurement working principles. A few examples of physical limitations are given. Physical limitations related to imaging optics are the slope and resolution limits of the numerical aperture, limited measuring areas and the relatively short working distance to the measured surface [7] . The intensity of light from highly reflecting surfaces may exceed the pixel threshold value causing saturation of the imaging sensor, which can negatively affect a 3D surface reconstruction calculation [7] . Also, environmental noise from vibration, temperature, humidity and pressure variations, can significantly contribute to the uncertainty of optical measurement results [8] .
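As a small illustration of the saturation problem described above, the sketch below masks pixels at or above a sensor's saturation value so that a downstream 3D reconstruction can exclude or re-weight them. This is a minimal sketch, not the paper's method; the 8-bit saturation threshold and the example patch are illustrative assumptions.

```python
import numpy as np

def mask_saturated(image, saturation_value=255):
    """Return the image as float with saturated pixels set to NaN.

    Many 3D reconstruction algorithms mis-handle clipped intensities,
    so marking them lets later steps exclude or in-paint those pixels."""
    out = image.astype(float)
    out[image >= saturation_value] = np.nan
    return out

# Illustrative 8-bit image patch containing a specular highlight.
patch = np.array([[120, 255], [90, 254]], dtype=np.uint8)
masked = mask_saturated(patch)
```

In practice the NaN entries would be propagated into the reconstruction pipeline as invalid points rather than treated as genuine intensities.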
Speed here refers not only to the speed of a measurement, but also to speed of motion required to access a surface, and to the speed of handling and processing of high-density data obtained from an optical instrument. The total measurement speed of the instrument depends on how fast the instrument can capture the raw data, for example, a stack of images, and the processing speed to derive a measurement result from the raw data. The processing speed becomes more relevant when measuring a surface, reconstructing a 3D model and calculating specific parameters from the reconstructed surface data.
System integration and control includes the integration of an in-line measuring instrument into a machine or production line and the integration of the measured data into a system-level data management system. The economic advantage of an in-line measuring instrument becomes more significant when a production system can be controlled and maintained by an integrated organisational data management system [9]. The use of methods such as statistical process control can detect, currently off-line or per production batch, whether a process deviates from its predefined operating conditions. The main challenge for in-line control is to have a fast in-line measuring instrument, working in an uncontrolled environment with noise, for example inside a machining chamber, with a measuring time less than or equal to the production cycle time. Also, integration and control are related to the use of modular design concepts to be able to produce a flexible in-line measuring instrument that can be adapted to various machines or production lines, regardless of, for example, space constraints.
Traceability , an essential factor for measurement, covers the issues of performance verification and calibration of in-line optical measuring instruments and the uncertainty estimation associated with their measurement results. For performance verification, there has been substantial work on how to develop material measures and procedures to verify the performance of off-line optical measuring instruments [10] . There has also been some research to verify the optical performance of in-line measuring instruments [11,12] . For measurement uncertainty estimation, there is still a need for methods to identify relevant influence factors for an in-line optical instrument [3] .
Intelligence refers to the challenge of building intelligent in-line optical instruments and includes the application of machine learning (ML) methods, which are only now being utilised for measurement applications, to enhance the capability and performance of the instruments, for example, the capability to understand surface orientations [13], to automatically segment 3D point clouds [14] and to infer surface information from missing data using a priori information [15]. Recently, deep learning neural network methods have been used in many applications, for example, to automatically segment objects, especially for machine vision applications [16,17]. The availability of abundant data, low-cost computers with high computing power and advanced learning algorithms has caused deep neural networks, that is, neural networks with many layers and multiple hierarchies of abstraction [16], to perform significantly better than neural networks with only a few layers [18] and classical machine learning methods, for example, support vector machines [19]. However, despite the popularity of deep learning methods, their lack of capability to estimate prediction uncertainty [20] is a problem for their implementation in measurement applications, as the methods are then only used as black-box systems; for example, part conformance determination in quality control requires measurement uncertainty statements to aid a decision [21]. The use of Bayesian probabilistic approaches may address uncertainty prediction in machine learning [22], but suffers from the need for higher memory capacity than neural network methods [19]. The combination of deep learning and Bayesian approaches holds the possibility to enhance the capability of deep learning methods, while maintaining an estimate of prediction uncertainty [20].
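As a hedged illustration of how a Bayesian-flavoured approach can attach an uncertainty to a network's prediction, the sketch below applies Monte Carlo dropout: dropout is kept active at prediction time, and the spread of repeated stochastic forward passes serves as an uncertainty estimate. The two-layer network, random weights and layer sizes are arbitrary placeholders for illustration, not taken from the paper or its references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary small untrained network: 4 inputs -> 16 hidden -> 1 output.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_rate=0.2):
    """One stochastic forward pass with dropout left ON (MC dropout)."""
    h = np.maximum(0.0, x @ W1)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate   # random dropout mask
    h = h * mask / (1.0 - drop_rate)         # inverted dropout scaling
    return (h @ W2).item()

def predict_with_uncertainty(x, n_passes=200):
    """Mean prediction and standard deviation over repeated passes."""
    samples = np.array([forward(x) for _ in range(n_passes)])
    return samples.mean(), samples.std()

x = np.array([0.5, -1.0, 0.3, 0.8])
mean, std = predict_with_uncertainty(x)
```

The standard deviation across passes plays the role of a prediction uncertainty, which could then feed into a conformance decision rather than treating the network output as a single black-box number.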
It is worth noting that we do not necessarily need to overcome all the above challenges to develop an in-line measuring instrument. Rather, only the barriers that are relevant to our specific in-line measurement requirements need to be addressed. The identification of the relevant challenges for the development of a specific in-line surface measuring instrument is elaborated in Section 2. Table 1 summarises many of the challenges and the current state of the art.

Table 1. Challenges for in-line optical surface measuring instruments and the current state of the art.

Methods
• Measurement with high spatial bandwidth (multi-scale measurement, covering both form and surface texture) is difficult. Most optical measuring instruments either measure a large area with low resolution or a small area with high resolution [23,24], and measuring areas are limited [7]; to measure a larger area, multiple measurements followed by data stitching should be carried out.
• Measurement of features with high slope angles is difficult. The ability to measure surfaces with high slope angles is limited by the numerical aperture (NA) of an objective lens [25]; however, with certain configurations, advanced measurement models and a priori information, high-slope measurements beyond the NA limit are possible [26].
• Specular surfaces can negatively affect 3D surface reconstructions. High reflection from a surface can saturate the imaging sensor pixels, and saturated pixels cause problems for many 3D surface reconstruction algorithms [7].
• Measurement accuracy is reduced in noisy environments, for example with vibration and temperature variation. Environmental noise, such as ground vibration, is a significant factor for measurements at micrometre and higher accuracy levels [27]; for example, small levels of vibration cause differences between an encoder reading and the actual position during a measurement.

Speed
• Measuring faster than a process cycle time is still a challenge. Areal surface measurements require motion to access a surface and a large number of images/computations, so the measurement time (commonly > 1 min [1,7]) is longer than many manufacturing cycle times, which can be within seconds.
• Handling and analysing high-density data from optical measuring instruments requires relatively long computing times. A large number of data points, from hundreds of thousands [28] to millions or more [29], can be obtained in a relatively short period of time [1,7]; however, the time required to process the data exceeds the time to acquire it.

System integration and control
• Modular and environment-robust design is needed to integrate instruments into various types of manufacturing machine. Different types of machine require specifically-built in-line measuring instruments, for example, due to space constraints [30-32]; design studies to isolate vibration using lattice structures have been proposed [33,34].
• Combining in-line measurement data from different sensors with different resolutions, for efficient control of a manufacturing system, requires algorithm development. A large number and variety of data are obtained from sensors with different resolutions and accuracy levels, and data fusion is often needed to combine data of different densities [35]; fusion of data from two optical instruments [36] and from a tactile and an optical instrument [37,38] has been proposed.
• Integration of an in-line instrument with system-level control (for example, on-line statistical process control, run-to-run control and predictive maintenance) needs to be performed continuously in real time. Current practice divides system-level control into several types [39]; however, the lack of in-line instruments means that run-to-run process control has to be carried out in batch rather than continuous mode [40,41].
• Integration of measurement data into enterprise production planning and scheduling is still a gap to be bridged. Measurement data from in-line instruments need to be integrated into the resource planning management systems of companies and enterprises for production planning and scheduling [42].
• Data transfer speed must be addressed to avoid data bottlenecks. Large amounts of data are congested by limited data transfer speeds, originating from hardware or software [43].

Traceability
• Calibration and performance verification of in-line measuring instruments are required to assure the instruments work within their specifications. Performance verification procedures and material measures for the determination of length measurement errors of optical instruments are still lacking [10,44]; performance verification infrastructures are available for some off-line instruments [45-47], and in some situations, calibration of in-line optical instruments for surface measurements can follow that already available for off-line optical instruments [10,48-50].
• Estimation of the measurement uncertainty associated with in-line measurement results is essential to establish measurement traceability. Methods of measurement uncertainty estimation are commonly applied only to off-line tactile and optical measuring instruments [51,52].

Intelligence
• The use of machine learning methods for in-line optical measurements is still limited. A recent application of deep learning in fringe projection measurement to rapidly track the projector orientation has been reported [13], as have applications of deep learning for object classification from 3D point clouds [14,53].
• Training from very large data sets for in-line optical measurements takes long periods of time and needs large amounts of data. Parallel computation methods leveraging graphics processing units have been used [54]; regularisation methods, such as dropout [55] and penalisation [56,57], avoid overfitting of large training data sets and increase the accuracy of deep learning methods.
• Uncertainty estimation with machine learning methods is still lacking (commonly, machine learning methods are used as "black-box" methods). Recent research combining a Bayesian framework with deep learning to provide uncertainty estimation has been proposed [58-60]; with uncertainty estimation, the confidence of a prediction can be obtained to decide whether the prediction is reasonable.

The proposed methodology
An information-rich metrology (IRM) framework is an essential element and the foundation of the proposed methodology [15,82]. IRM refers to the use of any available information that can be included to improve a measurement process [15,82]. The available information can concern the measured object, the manufacturing process that makes the object, the instrument-surface interaction and the optical instrument characteristics. The information can be obtained, for example, from a priori knowledge, the physics of a measurement method, mathematical modelling and simulations, or from other measurement processes. All this information is aggregated by the use of smart data processing, that is, the ability to use a priori information, rigorous modelling and learning from prior measurements to improve future measurement processes and results. This smart data processing leverages various methods and algorithms, for example, machine learning and data fusion.
Information about a measured object can be obtained from its 3D CAD model, where the nominal form and dimensions and their tolerances are available. Information about the manufacturing process can also be obtained, for example, the materials that can be processed, the process capability and the typical features and defects it generates. By knowing the typical features and defects, the IRM framework can deliver improved metrology for quality inspection, for example, by improving the speed of defect detection.
One of the main focuses of the IRM framework is to develop improved mathematical models that describe the interaction between a measured surface and an optical instrument. Rigorous mathematical models that describe the principles of many optical measurement technologies are already available [7]. However, those models are designed to be general, to measure various surfaces in different scenarios with little a priori knowledge of the measurement. With such general models, most optical instruments have limitations in their measurement capabilities. In fact, many surface measurement scenarios provide much additional information [15,29]; for example, at the macroscopic scale, information regarding form and nominal dimensions is available, and at the microscopic scale, information regarding surface texture and manufacturing fingerprints is available.
With all the additional information, the IRM framework requires a new type of data processing pipeline to homogenise and aggregate all the information, and then, exploit it to give better overall measurement results and performance. Data fusion methods are essential for various data homogenisation and aggregation. To data mine relevant relationships between variables and obtain statistical models, machine learning methods provide significant support for smart data processing and, finally, smart measurement solutions.
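As a minimal sketch of one common aggregation step in such a pipeline, the example below fuses height values of the same surface point measured by two instruments with different uncertainties, using inverse-variance weighting. The sensor roles and numerical values are illustrative assumptions, not data from the paper.

```python
import numpy as np

def fuse(values, sigmas):
    """Inverse-variance weighted fusion of independent measurements.

    Returns the fused value and its standard uncertainty."""
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    fused = np.sum(weights * values) / np.sum(weights)
    fused_sigma = np.sqrt(1.0 / np.sum(weights))
    return fused, fused_sigma

# Illustrative heights (micrometres) of one surface point from two
# instruments: a fast in-line sensor (sigma = 2.0 um) and a slower
# high-resolution off-line instrument (sigma = 0.5 um).
h, u = fuse([10.3, 10.9], [2.0, 0.5])
```

The fused result is pulled towards the lower-uncertainty instrument, and the fused uncertainty is smaller than either input, which is the basic economic argument for combining in-line and off-line data rather than discarding one source.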
The methodology for the development of in-line surface measuring instruments is based on the IRM framework and consists of three phases: Phase 1 for knowledge and data (a priori) gathering, Phase 2 for instrument (and software) development and integration, based on the data gathered in Phase 1, and Phase 3 for the development of a control system that uses the measurement system from Phase 2 (see Fig. 2 ).
For the methodology presented here, in Phase 1, knowledge and data (a priori) gathering is carried out, for example, by conducting measurements of parts to understand the relationship between measured properties, in this case defects, and component functions. Component functions are important because those functions are the reason why the component is produced and what makes an assembled product work as intended. In addition, Phase 1 identifies the properties that are most relevant to be measured in-line (those whose small changes in value may significantly affect the component's functions). Phase 2 is the development phase for an in-line measuring instrument, both hardware and software, and the integration of the developed instrument into a production line/machine. In Phase 2, the aim is that the developed instrument should be as "simple" as possible for the required measurement task: specificity, not versatility, is the design aim. Finally, Phase 3 is the development and implementation of the control system of a manufacturing process or product, leveraging the in-line measurement data obtained from the developed in-line instrument. Table 2 summarises the aspects to be considered in the three phases during the development of an in-line measuring instrument.

Phase 1: knowledge and data (a priori) gathering
The main goal of Phase 1 is to gather information related to instrument requirements, measured surfaces, measurement models and manufacturing processes to support the IRM framework, and to identify the most important defects to be measured in-line so that a measurement can be carried out as fast as possible. In other words, this phase defines the measurand and collects all the information that supports the IRM framework in developing an in-line instrument. To achieve this goal, a high-level understanding of the functionality or operation of a part is necessary. Subsequently, based on this understanding, the types of relevant defects should be defined. In addition, the measurement should be categorised as absolute or relative (a comparison to a reference quantity), and correlations between measured defects and the functionality/operation of the part should be established. Understanding this correlation is necessary to determine the range of defect values that needs to be controlled. Where possible, functional tests should be carried out to understand the relevance of a feature with respect to the functionality of the part.
Phase 1 is very often carried out using high-resolution measuring instruments, which are commonly off-line instruments, to gather data about surface topographies and defects. High-resolution and accurate instruments commonly need relatively long measurement times (compared to the process cycle time). With these instruments, high-resolution measurement data, covering the features on a part comprehensively, can be obtained to study the most important defects that significantly affect the part's functionality. From this study, one can determine the minimum number of defects to be measured, and further determine the range of values that needs to be controlled for the measured defects.
Other factors to consider in Phase 1 relate to, for example, data structures and analysis, procedures for uncertainty estimation, enhancement of sensors for data capture and the types of machine learning methods that can be leveraged to improve an instrument's performance. The type of data structure, for example grids or vectors, determines the most appropriate data analysis methods, for example 2D image or 3D point cloud processing. Procedures to estimate uncertainty need to be planned in this phase, for example, identifying which influence factors are relevant for a specific type of in-line measurement. Effective and efficient machine learning methods should be selected, where possible, to improve instrument performance while minimising the increase in computational cost. The time required for Phase 1 can be from several days to several months of study, but time invested at this stage can save considerable time and cost in subsequent phases.
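To illustrate why the choice of data structure matters, the sketch below converts a grid-type height map, amenable to 2D image processing, into an unstructured 3D point cloud, amenable to point-cloud processing. The array size and pixel spacing are illustrative assumptions.

```python
import numpy as np

def height_map_to_point_cloud(z, pixel_spacing):
    """Convert a gridded height map z[i, j] into an (N, 3) point cloud.

    Grid indices become x/y coordinates scaled by the lateral pixel
    spacing; each row of the result is one (x, y, z) point."""
    rows, cols = z.shape
    y, x = np.mgrid[0:rows, 0:cols].astype(float) * pixel_spacing
    return np.column_stack([x.ravel(), y.ravel(), z.ravel()])

# Illustrative 3 x 4 height map (micrometres) with 5 um pixel spacing.
z = np.arange(12.0).reshape(3, 4)
cloud = height_map_to_point_cloud(z, pixel_spacing=5.0)
```

The grid form keeps the neighbourhood structure needed by image filters, while the point-cloud form is what segmentation and registration algorithms for 3D data typically expect; converting between the two is cheap, but the analysis methods available differ.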

Phase 2: instrument and software development
In this phase, instrument and software development for an in-line measuring instrument is carried out. The goal is to develop the simplest and/or most efficient in-line optical instrument, utilising the IRM framework, for the required in-line measurement tasks. The development is based on the outputs of Phase 1, that is, the minimum set of defects (measurands) that are relevant to the quality of a part. Several important aspects that need to be considered are speed requirements, instrument cost, accuracy level, sensor type, size constraints, modular design and programming language.
Speed requirements can be considered as the first aspect to be taken into account in designing an in-line measuring instrument. The reason is that, often, the instrument will only be used by industry if it can measure faster than (or equal to) the cycle time of a high-throughput manufacturing process of interest.

Table 2. Aspects to be considered in the three phases of the development of an in-line measuring instrument.

Phase 1: knowledge and data gathering
- Type of measurement: absolute or relative
- Form or surface texture measurements
- Selection of a high-resolution off-line measuring instrument for a comprehensive study of defects (measurands)
- Correlation of measurement data with respect to a process applied to a part
- List of potential defects to be measured by an in-line measurement
- Types of ML methods that can be efficiently and effectively implemented

Phase 2: instrument and software development
- Speed requirement, from obtaining raw data to presenting a measurement result
- Target instrument cost for both hardware and software
- Level of accuracy required for an in-line instrument
- Type of sensor that will be used (contact or non-contact)
- Size constraint of an in-line instrument, defined by the space available in a machine
- Modular design possibility, to increase the flexibility of integration
- Type of programming language used to develop the software of an in-line instrument
- Type of in-line integration: in-line/on-line or on-machine
- Design of the instrument cover
- Type of positioning system, for example a Cartesian robot, an articulated-arm robot or a linear motion stage
- Programming method to control a positioning system, for example serial or socket (TCP/IP) programming
- Safety issues, for example a fail-safe system, safety fence and cable management
- Calibration and performance verification

Phase 3: development of the control system
- A control system for process or product quality control
- Types of SPC that can be implemented for a process
- Uncertainty consideration for the determination of part conformance or non-conformance
- Additional information that can be used as feedback for a process controller
- Leveraging machine learning methods for intelligent statistical process control
Cost is also an important consideration. The instrument should have a significantly lower cost than the manufacturing process, to justify its economic benefit. Accuracy levels should be achieved as per the requirements from Phase 1; significant cost will be incurred if the instrument is designed with accuracy levels beyond the requirement. The sensor type can be selected based on the previous considerations of speed, cost and accuracy; a low-cost image sensor can potentially be used to lower the instrument cost. The maximum available space within a machine or a process should be considered when designing the overall dimensions of the instrument.
The design of an in-line instrument should be robust to environmental noise, for example vibration, and to process contamination. Fit-for-purpose or modular approaches can be selected. Fit-for-purpose design is needed for special-purpose production machines, especially for on-machine and in-process measurement (see definitions in Section 1.1); such a design is optimised for a specific machine and process, so that an optimised and fast measuring instrument can be obtained. However, in some cases, perhaps for in-line measurement (see definitions in Section 1.1), modular design can be considered to increase the adaptability of the instrument to various types of production machine, for example, an in-line instrument that can fit into various types of tool holder in milling machines.
The programming languages used to write the control and data analysis software of the instrument need to be carefully chosen. The main considerations for language selection are speed and compatibility. Commonly, C/C++, a compiled programming language, is used to develop software for an embedded instrument; this is because C/C++ offers machine-level compilation suitable for instruments with high measuring speeds, and can be interfaced with many instrument control systems [61]. However, the Python programming language is becoming popular for writing instrument software because, although it is slower than C/C++, it is compatible with many instruments and also supports Internet-of-Things (IoT) protocols [62] for system-to-machine and machine-to-machine communications, which are the basis of Industry 4.0 [63]. In addition, many state-of-the-art ML libraries are available for Python. The selection of an ML method should take into account considerations such as the availability and simplicity of collecting data for model training, and the computational complexity of the method.
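As a hedged sketch of the socket (TCP/IP) programming approach mentioned above, the snippet below shows a minimal exchange between an instrument-control client and a stand-in server in Python. The port number, the ASCII command "MEAS?" and the reply format are illustrative placeholders, not a real instrument protocol.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007  # illustrative local endpoint

def instrument_server():
    """Stand-in for an instrument: replies to one 'MEAS?' query."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            if conn.recv(64).strip() == b"MEAS?":
                conn.sendall(b"Z=12.345\n")  # illustrative height reading

def query_instrument(command):
    """Send one ASCII command and return the reply line."""
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(command.encode() + b"\n")
        return sock.recv(64).decode().strip()

server = threading.Thread(target=instrument_server, daemon=True)
server.start()
time.sleep(0.2)  # give the server time to start listening
reply = query_instrument("MEAS?")
```

A real deployment would add framing for multi-packet messages, reconnection logic and error handling; the point here is only that a few lines of standard-library Python suffice for machine-to-machine communication over TCP/IP.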
To integrate the instrument, the type of enclosure design and the positioning system of the in-line measuring instrument (on-machine or off-machine, see Section 1.1) need to be considered. On-machine measuring instruments commonly have tighter space constraints, due to the limited working/processing volume of a machine, and higher environmental disturbances, for example dust and coolants. In contrast, in-line measuring instruments commonly have fewer space constraints than on-machine instruments.
The instrument enclosure is designed depending on the application. Hazardous and extreme environments are some of the main challenges in designing instrument enclosures. For example, the cover may have a water-resistant capability to either protect the instrument from any liquids generated from a process, or enable the instrument to be used in submerged applications. Safety issues are also important in that a fail-safe system may need to be provided.
For on-machine instruments, fixed or small movements of the instruments are often required for measurement purposes. For in-line and off-machine instruments, a large movement of the instrument may be required. For the movement of the instruments, the type of positioning system considered can range from a linear motion stage to robotic manipulation with different coordinate systems. The contribution of positioning system errors that affect the accuracy of measurement results must also be considered.

Phase 3: the development of in-line control system
Phase 3 is the development of a control system for product/process monitoring. The control system could be a simple "go/no-go" system that prevents defective parts from being sent to subsequent processes or to customers, and provides information about a process for further improvement. Alternatively, an advanced control system can leverage both a process model and feedback data from in-line instruments to reduce process drift, variation and shift, preventing defective products from passing to subsequent processes or repairing defects before the product proceeds [3,40], for example by the use of statistical process control (SPC). Moreover, product usage data collected during the operating life cycle can also feed information to the control system; for example, usage data may reveal new types of defect that affect operation and have to be considered by the control system.
Statistical process control (SPC) is a well-known industrial method for advanced control systems, used to control process/product shift, drift and variation. SPC detects assignable causes in a process and gives an "alarm" so that a corrective action can be carried out. Classical SPC is usually applied off-line, which means corrective actions can only be undertaken after a process has drifted or shifted too far from its limits and defective products have already been produced. To counter this issue, run-to-run control is applied to small batches, while a production line is still operating, so that corrective actions can be made faster than with the classical SPC method [40]. Intelligent SPC controllers that utilise ML methods can also be developed to adapt to correlated data and data from different probability distributions [40,64].
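A minimal sketch of the control-chart logic behind SPC: three-sigma limits for an individuals chart are estimated from an in-control reference sample, and new in-line measurements are flagged when they fall outside those limits. The numbers below are simulated for illustration, not real process data.

```python
import numpy as np

def control_limits(reference, k=3.0):
    """Centre line and k-sigma control limits from in-control data."""
    mu, sigma = reference.mean(), reference.std(ddof=1)
    return mu - k * sigma, mu, mu + k * sigma

def out_of_control(x, lcl, ucl):
    """True where a new measurement signals an assignable cause."""
    x = np.asarray(x, dtype=float)
    return (x < lcl) | (x > ucl)

rng = np.random.default_rng(1)
reference = rng.normal(10.0, 0.1, size=50)   # in-control surface parameter
lcl, cl, ucl = control_limits(reference)
new = [10.02, 9.97, 10.8]                    # the last value has drifted
alarms = out_of_control(new, lcl, ucl)
```

In a run-to-run setting, an alarm would trigger a corrective action on the next batch; the intelligent variants cited above replace the fixed normal-distribution limits with models adapted to correlated or non-normal data.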
The uncertainty of measured data should also be considered in the design of the control system; for example, in quality control, uncertainty values have to be considered to determine the conformance of a part [21,65]. Moreover, measurement uncertainty can be integrated into a system controller to support better decisions about which actions to take to control the process [66].

Case study: the development of an in-line surface condition detection system for post-processed additively manufactured polymer parts
The case study presents the development of an in-line instrument to detect the surface condition of post-processed additively manufactured (AM) polymer parts and to establish closed-loop feedback control of the post-processing machine, so that the process can be monitored and controlled. The measurement system is considered productive because it has substantial added value [69]. AM parts generally have rough surfaces due to the so-called "staircase effect" resulting from the layer-by-layer process [67] and other effects, for example the balling effect in metal additive manufacturing processes. These effects become more pronounced for surfaces processed at high inclination angles and with excessive support structures [68]. To improve the texture of AM polymer parts, post-processing of the surface has to be performed. A new automated solution for the post-processing of polymer AM parts, the so-called Postpro3D, has been developed by Additive Manufacturing Technologies (AMT) using their proprietary method. Fig. 3 shows the automatic post-processing machine that improves the surface finish of AM polymer parts. The automatic post-processing solution results in a significant increase in the productivity of AM polymer processes, due to the time saved over manual post-processing and the improved surface texture quality. The Postpro3D post-processing machine uses a physical-chemical process that can smooth a wide variety of polymers used in AM, including Nylon-12, Nylon-11, Nylon-6, flame-resistant nylons, carbon/glass-filled derivatives of nylon, thermoplastic polyurethane (TPU), thermoplastic elastomers, ULTEM 9085, PMMA, PLA and other polymer types [70,71]. Postpro3D is a non-line-of-sight process that can smooth complex internal cavities of polymer parts.
The advantages of the Postpro3D machine are that it is highly controllable, allowing reproducible results, and that it closes surface pores, so that the surface is water-tight and has a surface finish comparable to that of a part manufactured by injection moulding (see Fig. 4). It is worth noting that the images presented in Fig. 4 are obtained by focus-stacking images at different focus positions, not from a single microscope image.
In this case study, the in-line instrument is integrated outside the post-processing chamber to measure the surface condition directly after post-processing, so it is categorised as off-machine (in-line measurement carried out outside a production machine). Relevant barriers (see Table 1) that need to be addressed in this type of in-line measurement are:
• Multi-scale measurements to capture features on AM polymer surfaces with different spatial wavelengths.
• Measurements under noisy or harsh environments, for example vibration in a workshop, dust and chemical vapour.
• Measurements faster than the cycle time of the post-processing.
• Efficient and effective handling of the large data volumes from measurements.
• Flexible in-line integration into the post-processing chain and the use of measurement data for AM polymer part quality control.
Fig. 5 shows the schematic view of the three phases of the in-line instrument development. In Phase 1, a focus variation microscopy (FVM) measuring instrument was used to study the surface texture of AM polymer parts. Following Phase 1, an instrument and its software are developed in Phase 2 based on the results obtained in Phase 1. Finally, in Phase 3, the developed instrument is integrated into the post-processing process chain. Details of each phase are explained in the following sections.

Phase 1: high resolution measurement of polymer surfaces
The first step in Phase 1 is to define the requirements of the in-line instrument for surface condition detection. The requirements are:
• The maximum dimensions of the instrument should be < (200 × 200 × 200) mm to comply with the end-effector of a collaborative robot.
• The maximum mass of the instrument should be < 3 kg to comply with the maximum payload of a small collaborative articulated robot arm.
• The instrument is equipped with stand-alone, robust and fast software.
• The maximum surface condition detection time is < 15 s.
• The cost of the instrument should be acceptable (significantly lower than the machine cost).
• The instrument should be flexible, simple to integrate in-line into the post-processing chain, and portable.
In this case study, a surface texture measurement type is required (see Table 2). To understand the evolution of polymer surfaces during post-processing, an FVM instrument with a 20× objective lens was used. With this objective lens, the FVM theoretically provides up to 10 nm vertical resolution and 0.8 μm lateral sampling distance, so that small features on polymer surfaces can be captured to understand the evolution of the surfaces. Fig. 6 shows the high-resolution measurements with the FVM instrument, which is a type of off-line measurement. Since this measurement is used to study the evolution of the polymer surfaces after different levels of post-processing, measuring time is not relevant (it belongs to Phase 1, see Section 2.1). Instead, understanding the surface evolution is more relevant for deciding which attributes need to be measured.
Nylon-12 and TPU polymer surfaces were measured in the surface evolution study. A total of 18 parts were measured for each type of polymer. For each type, six post-processing levels (three parts per level) were applied: 0% (no post-processing), 25%, 50%, 75%, 100% (optimal processing) and a sixth, >100% "over-processed" stage. For Nylon-12 parts, two areas (on the top and bottom surfaces) were measured, giving a total of 36 measurements. For TPU parts, nine areas (one on a flat surface and eight on inclined surfaces) were measured, giving a total of 162 measurements (see Fig. 6).
The results of the Nylon-12 and TPU surface texture measurements are shown in Fig. 7. In Fig. 7, Sq areal parameters [82] were calculated from a (2.5 × 2.5) mm area with an S-nesting index [83] of 2.5 μm and an L-nesting index [83] of 500 μm. Sq is the root mean square of the heights within a measured area [84]. The S- and L-nesting indices are filtration operators that remove short-scale and long-scale components, respectively, from an extracted surface texture. From the results in Fig. 7, the post-processing significantly improves the surface finish of the polymer parts. In this particular test, Sq at the 100% level was around 2 μm for both TPU and Nylon-12 surfaces. Overall, depending on the application, the post-processing can improve the surface finish by reducing Sq from tens of micrometres to below 1 μm for both TPU and Nylon-12 surfaces. For the TPU surface, >100% "over-processing" resulted in an increase in texture roughness, whereas for the Nylon-12 surface the difference was insignificant.
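The Sq parameter itself is straightforward to compute from an areal height map. The following sketch illustrates the definition only; it omits the S- and L-nesting-index filtration steps and applies simple mean-plane removal in place of a full form-removal operator:

```python
import numpy as np

def sq(heights):
    """Root mean square height Sq of an areal height map,
    after removing the mean plane (here, just the mean)."""
    z = np.asarray(heights, dtype=float)
    z = z - z.mean()                   # mean-plane removal (simplified)
    return float(np.sqrt(np.mean(z ** 2)))

if __name__ == "__main__":
    # toy 2 x 2 height map in micrometres
    print(sq([[1.0, -1.0], [1.0, -1.0]]))  # → 1.0
```

On real data, the scale-limited surface would first be obtained by applying the S-filter (2.5 μm) and L-filter (500 μm) before evaluating Sq.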
In this case study, a relative measurement is required (see Table 2); that is, the measurement task is to differentiate a correctly post-processed surface (at the 100% post-processing level) from surfaces processed at other post-processing levels. For this type of measurement, 3D surface measurement is not considered suitable, because it requires relatively long measurement times (typically of the order of minutes) and a higher development cost. For example, many 3D surface measurement methods require a scan through the focus positions of a measured surface to collect a stack of images from which a 3D surface model is reconstructed, which in turn requires a precision optical system and a linear motion stage.
Subsequently, a solution based on microscope-based 2D machine vision is selected for several reasons:
• Only relative measurements are required. The measurement involves quantitative image comparisons between a measured surface and a reference surface, defined as a pre-defined surface with a smooth surface finish.
• A low development cost can be achieved because the cost of 2D imaging complementary metal-oxide-semiconductor (CMOS) sensors has fallen significantly.
• A significant performance improvement of a 2D machine vision instrument can be obtained by implementing a machine learning method to improve the classification of various AM polymer surface textures after post-processing.

Phase 2: the development of a fast in-line measuring instrument
Based on the results from Phase 1, the development of a 2D machine vision instrument and its control software is presented in this section. In addition, the validation of the developed instrument and software, using both simulated images and real measurement images, is also presented.
Fig. 6. Phase 1 - High resolution measurements with an FVM instrument. In this example, a TPU surface was measured.

Instrument development
The required in-line instrument should be low-cost, low-mass, small and based on a non-contact method (see Table 2). Based on these requirements, a small and compact instrument using microscope-based 2D machine vision that can capture surface texture features was developed. A 3D solid model of the instrument design is shown in Fig. 8. The instrument has maximum dimensions of (203 × 121 × 84) mm. The instrument complies with the low-cost, low-mass and compactness requirements, so that it has high flexibility for integration into the post-processing chain. A small area of a surface can be captured and magnified to obtain detailed texture features for further analysis.
The instrument consists of illumination and microscope modules. The microscope module is constructed from a camera with a CMOS sensor, a beam splitter, a tube lens, an objective tool changer and objective lenses with 4× and 10× magnifications (see Fig. 8). With the objective tool changer, further objective lenses with different magnifications can be mounted. Parallel light reflected from a measured surface enters the aperture of the objective lens and is transformed into an image on the CMOS sensor by the tube lens. The beam splitter deflects the off-axis parallel rays from the white light source (after passing through a diffuser) onto the axis of the microscope. Both the beam splitter and the tube lens have transmission spectra of 400 nm to 700 nm. The CMOS sensor has a resolution of (1280 × 1024) pixels and a frame rate of up to 45 fps.
The illumination module consists of a white light-emitting diode (LED) and a diffuser lens (see Fig. 8, bottom). The LED has a total power output of 250 mW with an intensity of 3 mW/cm². The emission spectrum of the LED is 400 nm to 700 nm. To improve the cross-sectional intensity distribution of the light from the LED, a diffuser lens with a transmission spectrum of 380 nm to 1100 nm is used. With the diffuser lens, the LED provides a uniform intensity across the field of view of the objective lens of the microscope. Fig. 9 shows the developed instrument without and with an enclosure. The total mass of the instrument with the enclosure is 2.4 kg, suitable for mounting on small robotic arm systems that commonly have a payload of around 3 kg.
Fig. 7. Improved surface finish after the different stages of the post-processing for TPU (left) and Nylon-12 (right). The process type numbers 1, 2, 3, 4, 5 and 6 indicate surfaces with post-processing of 0%, 25%, 50%, 75%, 100% and >100% (over-processed), respectively.

Software development
For the software development, the selection of the programming language is essential to the performance of the developed software (see Table 2). In this case study, the C/C++ programming language is used to achieve high software performance and comply with the general requirements (see Section 3.1).
A general unsupervised classifier for different types of polymer surface, post-processed at different levels, was developed. In this case, a machine learning method that does not require a large training data set and that has a very fast learning process is required. The classifier is based on an unsupervised machine learning approach using principal component analysis (PCA) [72]. The fundamental idea of PCA is to reduce high-dimensional data to a lower dimension. Here, the high-dimensional data are the pixels of a (1280 × 1024) pixel image obtained from the CMOS sensor, which can be reduced to a lower number of dimensions that still contains the important surface texture information. Implementing PCA directly on a raw image requires expensive computation and large memory. Therefore, to improve the computational efficiency of the PCA, a total of 54 image parameters are pre-calculated from the captured image of a surface, obtained from the developed instrument, and fed to the PCA algorithm. By calculating these parameters, a data-reduction pre-step is applied that increases the speed of the PCA algorithm.
The fundamental idea of PCA is as follows. Let $N$ be the number of training images and $m$ the number of image parameters. The column vector of mean image parameters $\bar{x}_m$, averaged over the $N$ training images, is

$$\bar{x}_m = \frac{1}{N}\sum_{n=1}^{N} X_{mn} \tag{1}$$

where $X_{mn}$ is the vector of image parameters with $m$ elements for the $n$-th training image. PCA projects the parameter data onto the principal axes, also called principal components (PCs), $u_{mk}$, where $k$ is the number of reduced dimensions, $k \in \{1, \dots, 54\}$, that maximise the variance in the training data:

$$u'_{mk} = \arg\max_{\|u\|=1} \; u^{\mathrm{T}} S_{mm}\, u \tag{2}$$

where $S_{mm}$ is the covariance matrix of the parameter data, calculated as

$$S_{mm} = \frac{1}{N}\sum_{n=1}^{N} \left(X_{mn} - \bar{x}_m\right)\left(X_{mn} - \bar{x}_m\right)^{\mathrm{T}} \tag{3}$$

The principal axes $u'_{mk}$ that maximise the variance in Eq. (2) are the eigenvectors of $S_{mm}$ corresponding to its largest eigenvalues. The classification of polymer surface conditions is carried out by calculating a similarity value, defined as the Euclidean distance $d$ between the projected image parameters of a measured surface, $\tilde{a}_k$, and the projected image parameters of a reference surface, $\tilde{b}_k$, on the principal axes $u'_{mk}$; that is, the distance of a new point (from a new measurement) to the mean of the class cluster (obtained from training). Note that the number of elements of both $\tilde{a}_k$ and $\tilde{b}_k$ is equal to the number of reduced dimensions $k$. The projected image parameters are calculated as

$$\tilde{a}_k = u'^{\,\mathrm{T}}_{mk}\left(X^{\mathrm{meas}}_{m} - \bar{x}_m\right) \tag{4}$$

and

$$\tilde{b}_k = u'^{\,\mathrm{T}}_{mk}\left(X^{\mathrm{ref}}_{m} - \bar{x}_m\right) \tag{5}$$

The Euclidean distance $d$ in PC space, that is, the similarity value, between $\tilde{a}_k$ and $\tilde{b}_k$ is

$$d = \sqrt{\sum_{k} \left(\tilde{a}_k - \tilde{b}_k\right)^2} \tag{6}$$

The PCA classification of images of different surface conditions is calculated from the 54 image parameters. With this approach, the calculation of the PCA classification is more efficient than applying PCA to all the raw pixels of an image. During training, the best number of considered dimensions (from 3 to 54) can be determined.
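The training and similarity-value calculations described above can be sketched as follows. This is a minimal NumPy illustration; the paper's actual software is written in C/C++, and the parameter extraction that produces the m × N input matrix is omitted:

```python
import numpy as np

def fit_pca(X, k=3):
    """Fit PCA on an m x N matrix of pre-computed image parameters
    (m parameters per image, N training images). Returns the mean
    parameter vector and the top-k principal axes (m x k)."""
    xbar = X.mean(axis=1, keepdims=True)
    S = np.cov(X)                        # m x m covariance of the parameters
    evals, evecs = np.linalg.eigh(S)     # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:k]  # keep the k largest
    return xbar, evecs[:, order]

def similarity(x_meas, x_ref, xbar, U):
    """Similarity value: Euclidean distance in PC space between the
    projections of a measured and a reference parameter vector."""
    a = U.T @ (x_meas - xbar.ravel())
    b = U.T @ (x_ref - xbar.ravel())
    return float(np.linalg.norm(a - b))
```

Here `fit_pca` plays the role of the training step and `similarity` of the detection step; in the paper both operate on the 54 pre-calculated image parameters rather than raw pixels.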
The 54 image parameters include both colour-related and texture-related parameters to represent the texture of surfaces [73]. The colour-related parameters include, for example, statistical parameters of the colour and the histogram entropy of an image [72]. The texture-related parameters include, for example, statistical parameters of the blobs of an image, binary local patterns [74,75] and local edge descriptors, which are part of the multimedia content description interface (MPEG-7) [76]. Table 3 shows the 54 calculated parameters used as the input to the PCA algorithm.
The developed software, implemented in C/C++, works as stand-alone software to control the developed instrument, to process images for surface condition detection, and to control a collaborative robot used to position the instrument at the focus position with respect to a part surface for measurement. The image processing uses the robust OpenCV image processing library [77], and the graphical user interface (GUI) is developed using the Qt framework [78]. The developed software is shown in Fig. 10. As shown in Fig. 10, the software has two main modules: measurement and machine learning.
The measurement module provides the capability to control the collaborative robot, to adjust camera settings and to detect a surface condition by comparing a measured surface with a reference surface. The camera settings can be adjusted to find an optimal surface colour; an auto-exposure algorithm [79] and a white-balancing algorithm [80] are implemented to optimise the colour adjustment. The detection process is based on the machine learning approach already described, which learns distinctive image property data from a measured surface and from a reference surface and compares them. Based on the learning process, a measured surface can be monitored and classified as similar or dissimilar with respect to the reference surface. The machine learning module also provides the functionality to control the collaborative robot, to adjust camera settings and to train the software on a specific reference surface. This module allows the number of training images and the number of reduced dimensions (from 2 to 54) to be set.
The machine learning process is as follows. Images are captured from the CMOS sensor according to the number of training images N set by the user. For each captured image, the 54 image parameters are calculated as a first data reduction. This reduction increases the training efficiency, so that only of the order of one hundred images are required to conduct the machine learning process effectively. The mean of the 54 parameters is calculated and a matrix containing the differences between the 54 parameters of each image and the mean parameters is derived. Subsequently, a 54 × N training matrix is constructed. Finally, the PCA method is applied to the training matrix: a singular value decomposition is applied to obtain the eigenvectors and eigenvalues of the training data. The trained data are stored in a file, so that the file can be recalled when a specific surface detection is to be carried out. A similarity value is calculated between the reference surface and the measured surface to decide whether the two surfaces are similar. With the calculation of the similarity value, subjectivity in determining a specific surface texture condition is eliminated.
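The training pipeline above, in which a singular value decomposition (SVD) of the mean-centred 54 × N parameter matrix replaces an explicit eigendecomposition of the covariance, might be sketched as follows (illustrative NumPy; the file name is hypothetical and the actual software is C/C++):

```python
import numpy as np

def train(P, k=3):
    """Train on a 54 x N matrix P of pre-computed image parameters.
    The left singular vectors of the mean-centred matrix are the
    eigenvectors of the parameter covariance matrix, so an SVD
    replaces an explicit eigendecomposition."""
    mean = P.mean(axis=1, keepdims=True)
    A = P - mean                           # difference-to-mean matrix
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    eigvals = s ** 2 / (P.shape[1] - 1)    # eigenvalues of the covariance
    # store the trained data for later recall (hypothetical file name)
    np.savez("trained_surface.npz", mean=mean, axes=U[:, :k], eigvals=eigvals[:k])
    return mean, U[:, :k], eigvals[:k]
```

The SVD route is numerically better conditioned than forming the covariance matrix explicitly, which matters little at 54 parameters but would matter if PCA were applied to raw pixels.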

Instrument and software testing
Before the integration of the developed instrument and software into the post-processing chain, several tests were carried out to verify their effectiveness for surface condition detection. Two stages of testing were applied: testing with simulated images and testing with real TPU surface images. The tests with simulated images assess how well the algorithm can separate different surface images; since the degree of difference between simulated images can be understood and controlled, the separations among the simulated images in PC space can be correlated with those known differences.
As the first test, images with simulated speckle features were generated. The simulated speckles have different sizes and densities to represent different features and conditions on a surface, and are generated by a method described elsewhere [81]. Four types of simulated speckle image were generated, namely Type 1, Type 2, Type 3 and Type 4 (see Fig. 11), with a total of 100 images per type. Type 1 images represent an unprocessed surface and have the largest speckle patterns at the lowest density. In contrast, Type 4 images represent a processed surface and have the smallest speckle patterns at the highest density. A Type 4 simulated image is selected as the reference surface, and a total of 100 images are used for training. The trained data are used to calculate similarity values to distinguish the different types of simulated image from the reference image. In this test, three PCs (number of reduced dimensions k = 3) are considered for the surface detection.
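A simple way to generate such controllable test images is sketched below. This is not the generation method of [81], which is only cited here, but an illustrative stand-in that places random circular speckles of a chosen size and count:

```python
import numpy as np

def speckle_image(shape=(256, 256), n_speckles=50, radius=6, seed=0):
    """Generate a grey-scale test image with randomly placed circular
    speckles, so that the 'difference' between image types (speckle
    size and density) can be controlled exactly."""
    rng = np.random.default_rng(seed)
    img = np.zeros(shape, dtype=np.uint8)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for _ in range(n_speckles):
        cy, cx = rng.integers(0, shape[0]), rng.integers(0, shape[1])
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 255
    return img

# Type 1: few large speckles (unprocessed); Type 4: many small speckles (processed)
type1 = speckle_image(n_speckles=10, radius=20, seed=1)
type4 = speckle_image(n_speckles=400, radius=2, seed=2)
```

Because the speckle size and density are set explicitly, the separation of the image types in PC space can be checked against these known differences.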
The projections of the simulated image parameters onto the three PCs are shown in Fig. 12. Fig. 12a shows the separation plot of the projected data from each image type considering only two of the three PCs (2D view), while the separation plot considering all three PCs (3D view) is shown in Fig. 12b. From Fig. 12, the different types of surface can be classified into four different groups. The Type 4 surfaces, as the reference, are largely separated from the other types. It is worth noting that the Type 1 and Type 2 simulated surfaces are separated along the direction of PC2 (see Fig. 12b). The calculated similarity values are significantly smaller for Type 4 than for the other types. Table 3 shows the calculated similarity values for the four types of surface compared to the reference surface (Type 4). From Table 3, Type 4 surfaces can be identified from the other types of surface by setting a threshold value.
Furthermore, tests were also carried out for the measurement of real polymer surfaces: TPU and Nylon-12. Fig. 13 shows one of the measurements of each sample. The testing with both material surface images uses five types of surface with different post-processing levels, namely Type 1, Type 2, Type 3, Type 4 and Type 5, representing 0% (unprocessed), 25%, 50%, 75% and 100% (fully processed) surfaces, respectively. The type refers to a specific process parameter for a specific polymer, such as processing time. A total of 100 images were captured for each type of surface. To cover the various types of feature on each surface type, the 100 images were captured from different areas covering the entire surfaces.
Figs. 14 and 15 show a measurement process for one of the TPU and Nylon-12 surfaces at the different post-processing levels, respectively. From Figs. 14 and 15, the Type 1 (unprocessed) surface has high roughness and the Type 5 (fully processed) surface has low roughness. The reference surface is a Type 5 surface. The Type 4 and Type 5 surfaces differ only slightly in their textures.
The training procedure used a total of 100 Type 5 images for both the TPU and Nylon-12 materials. Surface condition measurements are compared with respect to the Type 5 surfaces. The similarity values of all measurements were calculated by considering three of the 54 PCs from the training data. Fig. 16a and b show the separation plots of each TPU image type in PC space as 2D (two PCs) and 3D (three PCs) plots, respectively. From Fig. 16a and b, the Type 5 TPU surfaces can be isolated from the other types of TPU surface. However, the group of Type 4 surfaces lies close to the group of Type 5 surfaces, consistent with the qualitative observation from the images in Figs. 14 and 15 that the Type 4 surface is similar to the Type 5 surface. Fig. 17a and b show the separation plots of each Nylon-12 image type in PC space as 2D (two PCs) and 3D (three PCs) plots, respectively. Similarly to the TPU measurements, the Type 5 Nylon-12 surfaces can be isolated from the other types of Nylon-12 surface, as shown in Fig. 17a and b. For the Nylon-12 surfaces, the group of Type 4 surfaces lies quite far from the group of Type 5 surfaces, consistent with the qualitative observation from the images in Fig. 15 that the Type 4 surface is less similar to the Type 5 surface. Table 4 shows the calculated similarity values for the five types of TPU and Nylon-12 surface compared to the Type 5 reference surface. From Table 4, the closer the condition or texture of a surface is to its reference surface, the lower the similarity value. All surfaces close to their reference surface have the lowest similarity values, which means that those surfaces are considered similar to their reference surfaces. A threshold can be set to detect Type 5 surfaces among the other types. The detection time ranges from around 2 s to 4 s, depending on the number of features in the surface texture, and is less than the required maximum detection time of 15 s.

Sensitivity analysis
It is important to quantitatively analyse the effect of the variation of pixel intensity on the CMOS sensor on the variation of the similarity values. The pixel detectors on the CMOS sensor are noisy, so the intensity value of a pixel varies over time. The intensity variation was analysed by monitoring single pixel intensity values on the detector over time, using a Nylon-12 surface. The sampling frequency of the detector was set to 15 fps, because the sampling frequency range of the camera during measurements is around 10 fps to 15 fps. A total of 100 pixels were sampled over a period of 6.6 s. The sampling period is considered sufficient, since it is longer than the detection time of around 2 s to 4 s. Fig. 18a shows the pixel intensity variation over 6.6 s. The results show that the standard deviation of the pixel intensity is 2 pixel-intensity units. The similarity value was then analysed by comparing a Nylon-12 surface image with the same image subjected to increasing levels of pixel intensity variation. Gaussian noise with a mean of 0 and a standard deviation ranging from 0 to 100 pixel-intensity units was used to perturb the pixel intensities of the image. Fig. 18b shows the results of the sensitivity analysis of the similarity value. From Fig. 18b, the similarity value is stable below a noise level of 30 pixel-intensity units. From this result, the surface detection is considered robust, since the measured pixel intensity variation of only 2 pixel-intensity units lies well within the region to the left of the red line in Fig. 18b.
Fig. 20. Measurement of the green coloured Nylon-12 surfaces.
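The noise-perturbation step of this sensitivity analysis can be sketched as follows (the image and noise levels are illustrative); each perturbed image would then be fed through the parameter extraction and similarity calculation:

```python
import numpy as np

def perturb(img, sigma, seed=0):
    """Add zero-mean Gaussian noise with standard deviation sigma
    (in pixel-intensity units) to an 8-bit image, clipping the
    result back into the valid 0-255 range."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# sweep the noise level as in the sensitivity analysis
img = np.full((64, 64), 128, dtype=np.uint8)
perturbed = {sigma: perturb(img, sigma) for sigma in (0, 2, 30, 100)}
```

Plotting the similarity value against sigma reproduces the kind of stability curve shown in Fig. 18b, from which the robustness margin can be read off.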

In-line integration into post-processing chain
The developed instrument is integrated in-line into the post-processing chain. Factors considered in the integration are the selection of a positioning system, the design of the enclosure for the instrument and the type of programming method used to control the positioning system (Table 2). For the positioning system, a collaborative articulated-arm robot (cobot) was selected for its flexibility and its ability to work alongside humans. The cobot has a linear resolution of 0.1 mm and a rotational resolution of 0.5°. The enclosure for the instrument is designed to be stiff, with 2 mm thick aluminium sheet, because the cobot and the instrument are placed in an open area within the post-processing chain. To control the robot from the developed software, a socket programming approach was selected because it is universal across robot manufacturers: with the socket programming method, the same control procedure can be applied to cobots from different manufacturers, increasing the flexibility of the integration. Fig. 19 shows the in-line integration of the developed instrument and software with the cobot. The in-line measurement is carried out after post-processing has finished.

Phase 3: control system implementation
In this case, a simple "go/no-go" control system based on feedback data is implemented. The main goal is to distinguish parts whose surface quality differs from that of the reference surfaces. Defective parts are re-processed to achieve the desired level of surface finish. A demonstration is showcased by measuring coloured Nylon-12 surfaces. The purpose of the demonstration after the integration is to test the ability of the selected cobot, as the positioning system, to effectively position the instrument at its focus position for capturing images, and to test the classification ability of the instrument. Only two types of green Nylon-12 part are used: unprocessed and processed at 50%. Fig. 20 shows the measurement of the polymer parts.
The demonstration uses the processed part, which has smooth surfaces, as the reference. For the training process, a total of 150 images of the reference (processed) surfaces were captured to extract the learning data. Fig. 21a shows the measurement area (green box) for the 150 training images. The unprocessed part is shown in Fig. 21c as a validation pair. Measurements on both the unprocessed and processed parts were carried out covering the entire top surfaces of the parts (see the red boxes in Fig. 21b and d). For each part, a total of 100 measurement images were captured. The threshold value for classifying the surfaces (as belonging to the unprocessed or processed parts) was set to five times the reference similarity value calculated from the training process. The threshold selection is based on the results shown in Table 3. The demonstration shows that all the images captured from the two surfaces can be correctly classified as processed or unprocessed, with a 100% success rate, and a "go/no-go" decision can be made for parts with surface quality different from the reference surfaces.
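The go/no-go decision rule described above reduces to a one-line comparison; a sketch, using the factor of five from the demonstration, is:

```python
def go_no_go(d, d_ref, factor=5.0):
    """Classify a measured surface: 'go' if its similarity value d
    is within factor times the reference similarity value d_ref
    obtained during training, otherwise 'no-go' (part is re-processed)."""
    return "go" if d < factor * d_ref else "no-go"

print(go_no_go(0.8, 0.3))  # well within 5 x 0.3 → go
print(go_no_go(4.0, 0.3))  # exceeds the threshold → no-go
```

"No-go" parts are returned for re-processing rather than passed to subsequent processes or customers.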

Conclusion and future work
In this paper, a methodology for developing in-line measuring instruments has been proposed. The methodology can be used as a general framework to develop in-line surface measuring instruments, and it has been validated with a case study: the development of an in-line surface measuring instrument for post-processed AM polymer parts. The purpose of the developed instrument is to quantitatively detect the condition of surfaces at different post-processing levels. The results show that, by using the methodology, an in-line instrument can be successfully developed and implemented. With the developed instrument, subjectivity in classifying the condition of the surfaces is eliminated, since the condition is quantitatively represented as a similarity value. Future work includes applying the proposed methodology to the development of an in-line surface measuring instrument for absolute measurements, as well as further fundamental research to overcome the various barriers mentioned.