Multi-objective Optimisation of Machine Tool Error Mapping Using Automated Planning

Keywords: Machine tool error mapping; Uncertainty of measurement; Automated planning; PDDL; Multi-objective optimisation

Abstract

Error mapping of machine tools is a multi-measurement task that is planned based on expert knowledge. There are no intelligent tools aiding the production of optimal measurement plans. In previous work, a method of intelligently constructing measurement plans demonstrated that it is feasible to optimise the plans either to reduce machine tool downtime or the estimated uncertainty of measurement due to the plan schedule. However, production scheduling and a continuously changing environment can impose conflicting constraints on downtime and the uncertainty of measurement. In this paper, the use of the produced measurement model to minimise machine tool downtime, the uncertainty of measurement, and the arithmetic mean of both is investigated and discussed through twelve different error mapping instances. The multi-objective search plans on average show a 3% reduction in the time metric when compared to the downtime of the uncertainty-optimised plan, and a 23% improvement in the estimated uncertainty of measurement metric when compared to the uncertainty of the temporally optimised plan. Further experiments on a High Performance Computing (HPC) architecture demonstrated that there is on average a 3% improvement in optimality when compared with the experiments performed on the PC architecture. This demonstrates that even though a 4% improvement is beneficial, in most applications a standard PC architecture will result in a valid error mapping plan.


Introduction
A machine tool is a mechanically powered device used during subtractive manufacturing to cut material. The design and configuration of a machine tool is chosen for a particular role and differs depending on, amongst other things, the volume and complexity range of the work-pieces to be produced. A common factor throughout all configurations of machine tools is that they provide the mechanism to support and manoeuvre the functional position, and sometimes the orientation, between the cutting tool and work-piece. The physical manner by which the machine moves is determined by the machine's kinematic chain (Moriwaki, 2008). The kinematic chain will typically comprise a combination of linear and rotary axes. Fig. 1(a) shows an example of a five-axis gantry machine tool that has three linear and two rotary axes which are used to move the tool around the work-piece. Typically, this configuration of machine will be used to machine heavy, multi-sided, large-volume work-pieces. Fig. 1(b) shows an alternative design of a three-axis C-frame machine tool. This particular machine tool configuration consists of three linear and no rotary axes and is typically used to machine smaller work-pieces than the five-axis machine.
In a perfect world, a machine tool would be able to move to predictable points and orientations in three-dimensional space, resulting in a machined artefact that is geometrically identical to the designed part. However, due to tolerances in the production of machine tools and wear during operation, this is very difficult to achieve mechanically. Pseudo-static errors are the geometric positioning errors resulting from the movement of the machine tool's axes that exist when the machine tool is nominally stationary. Machine tool error mapping is the process of quantifying these errors (Schwenke et al., 2008) so that predictions as well as improvements of part accuracy can be made via numerical analysis and compensation.
The significance of the error mapping process depends on the application; manufacturers machining high-value parts to tight tolerances, usually in the order of less than a few tens of micrometres, should have their machines regularly error mapped, otherwise they are at risk of producing non-conforming parts. Manufacturers with broader tolerances may calibrate less frequently. There are many error components that collectively result in deviation of the machine tool from the nominal. For analytical and correction purposes, it is important to measure each error component. For example, as seen in Fig. 2(a), a machine tool with three linear axes will have twenty-one geometric errors: each linear axis has six degrees of freedom, and there is a squareness error between each pair of nominally perpendicular axes (Mekid, 2009; Schwenke et al., 2008). Additionally, as seen in Fig. 2(b), a rotary axis will have six motion errors, two location errors, and two squareness errors (Bohez et al., 2007; Khim and Keong, 2010; Srivastava et al., 1995). Therefore, a five-axis machine tool will have a total of forty-one geometric errors.
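The error counts above can be checked with a short sketch (the function name and structure are illustrative, not from the paper):

```python
def geometric_error_count(linear_axes, rotary_axes):
    """Count pseudo-static geometric errors of a machine tool.

    Each linear axis contributes six degree-of-freedom errors, plus one
    squareness error per pair of nominally perpendicular linear axes;
    each rotary axis contributes six motion errors, two location errors
    and two squareness errors.
    """
    dof_errors = 6 * linear_axes
    # three mutually perpendicular linear axes form three squareness pairs
    squareness_errors = linear_axes * (linear_axes - 1) // 2
    rotary_errors = (6 + 2 + 2) * rotary_axes
    return dof_errors + squareness_errors + rotary_errors

print(geometric_error_count(3, 0))  # three-axis machine: 21
print(geometric_error_count(3, 2))  # five-axis machine: 41
```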
The measurement of each error will involve the use of a test method and a measurement device. The selection of equipment will usually be done in unison with the test method, influenced by the engineer's preference. However, there are many cases where several different instruments can be used to perform the same test method, each requiring a different duration to install and to perform the test. For example, both a laser interferometer and a granite straight edge can be used to measure the straightness of a linear axis. The laser interferometer might take longer to set up, but if the machine has an axis with a long travel, the granite straight edge might need to be repositioned multiple times to measure the entire axis, therefore taking more time overall. For most manufacturers, removing a machine tool from manufacturing has large financial implications. Downtime can cost in excess of £120 per hour (Shagluf et al., 2013). Therefore, the cumulative cost for a manufacturer with many machines can be large. For example, consider a manufacturer with 10 machine tools, each of which undergoes a 12 h error mapping exercise per year. The estimated downtime cost would be £14,400. However, this is a conservative figure for many high-value manufacturing companies.
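The downtime cost arithmetic above can be expressed as a small helper; the function name and defaults are illustrative, with only the £120/h rate and the 10 machines × 12 h example coming from the text:

```python
def annual_downtime_cost(machines, hours_per_map,
                         maps_per_year=1, rate_gbp_per_hour=120):
    """Estimated annual downtime cost (GBP) of error mapping a fleet."""
    return machines * hours_per_map * maps_per_year * rate_gbp_per_hour

# the worked example from the text: 10 machines, one 12 h exercise each
print(annual_downtime_cost(10, 12))  # 14400
```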
The estimated uncertainty of measurement is a parameter associated with the result of a measurement that characterises the dispersion of the values that could reasonably be attributed to the measurand (BIPM, 2008). The uncertainty of measurement is calculated for each individual measurement, and the cumulative estimated uncertainty of measurement has a direct effect on the tolerance conformance of parts manufactured using the machine. Therefore, from the manufacturer's perspective, it is desirable to reduce the estimated uncertainty of measurement. The estimated uncertainty of measurement is affected by change in environmental temperature. If the same calibration plan were carried out at different times throughout a working day while the temperature is continuously changing, the cumulative estimated uncertainty would also change.
Depending upon the manufacturer's motivation for performing the error map, they may want to optimise the error map plan to either minimise financial cost or maximise the quality of the error mapping exercise. The optimisation criteria considered in this paper are: (1) the reduction of machine tool downtime, (2) the reduction of the estimated uncertainty of measurement, and (3) balancing the two parameters with the possibility of customising their individual weighting. The change in environmental temperature throughout a measurement, as well as between interrelated measurements, will have a significant impact on the estimated uncertainty of measurement (ISO 230-9, 2005). The decision-making process involved in constructing optimal error map plans is exhaustive. However, computational intelligence in the form of domain-independent Artificial Intelligence (AI) planning can be used to provide optimal solutions when given a model of the problem (Ghallab et al., 2004).
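As a sketch of criterion (3), the two objectives can be combined into a single weighted metric. The function shape and normalisation scales below are assumptions for illustration, not the paper's exact formulation:

```python
def plan_metric(downtime, uncertainty, w_time=0.5, w_unc=0.5,
                time_scale=1.0, unc_scale=1.0):
    """Weighted combination of the two optimisation criteria.

    With w_time = w_unc = 0.5 (and equal scales) this is the arithmetic
    mean used as the third criterion; setting one weight to zero
    recovers the single-objective cases.  The scale factors (assumed
    here) bring the two quantities to comparable magnitudes.
    """
    return w_time * downtime / time_scale + w_unc * uncertainty / unc_scale

# single-objective time optimisation recovers the raw downtime metric
print(plan_metric(100, 50, w_time=1.0, w_unc=0.0))  # 100.0
# equal weighting gives the arithmetic mean of the two metrics
print(plan_metric(100, 50))  # 75.0
```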
In this paper, a description of the individual factors that affect a machine's downtime and estimated uncertainty of measurement during error mapping is given. This leads to a discussion of a previously developed model (Parkinson et al., 2012a, 2012b) that can be used by state-of-the-art domain-independent AI planning tools to find optimised solutions (Russell et al., 1995). The development of this model to produce measurement plans that are optimised to reduce machine downtime and the estimated uncertainty of measurement due to the plan order is presented and discussed. This multi-objective optimisation is motivated by the desire to reduce both machine tool downtime and the uncertainty of measurement, which to some extent are competing, as a temporally optimised plan will generally have a high estimated uncertainty of measurement. Following the development of a multi-objective model, twelve different case-study examples are presented and described to show the ability to generate plans that are optimised for (1) downtime, (2) uncertainty of measurement, and (3) the arithmetic mean of both. The generated calibration plans are then examined and discussed to evaluate their fitness for purpose. It is then identified that computational resources are restricting the planner's ability to find optimal solutions within a ten-minute time allocation. This leads to a further investigation into the produced measurement plans when solving the planning problem on both personal and High Performance Computing architectures.

Related work
The complexity associated with machine tool geometric error measurement (Mekid, 2009; Schwenke et al., 2008) and the desire to reduce measurement uncertainty (Bringmann and Knapp, 2009; Bringmann et al., 2008) and machine tool downtime are well known for individual measurements. However, surveying the literature suggests that the potential to reduce machine tool downtime and the uncertainty of measurement by intelligent construction of the multiple-measurement plan is less well known. Bringmann and Knapp (2009) and Bringmann et al. (2008) have identified that the current ISO 230 part 2 (ISO230, 2006) is based on sequential testing of single geometric component errors. However, an exception is made in ISO 230 part 4 (ISO230, 2005), where several machine errors are tested together while the machine tool is performing multi-axis movement. Bringmann and Knapp (2009) continue by describing the importance of interrelated errors, using the example of linear yaw deviation affecting the non-orthogonality measurement at different positions in the plane of the non-orthogonality measurement. The authors identify that this process is time consuming, and in response have shown the calibration of a machine tool using a 3D ball plate, where the amplification of interrelated measurements can be identified. However, when such an approach cannot be used, they suggest using a Monte Carlo simulation that uses an approximation of the machine tool, the measurement, and the machine's performance after calibration to estimate the uncertainty of measurement. Performing the Monte Carlo simulation sufficiently often will produce a distribution for the uncertainty of the identified errors. This work succeeded in producing optimal measurement plans when considering interrelated measurements by suggesting the use of a 3D ball plate, or in reducing measurement uncertainty using Monte Carlo simulation.
In one example, the uncertainty of measurement for the X-axis linear positioning error E_XX is reduced from 30 µm to 10 µm. The limitation of this work is that it is concerned with achieving the best possible measurement sequence with respect to the uncertainty of measurement at all costs, ignoring machine tool downtime. Muelaner et al. (2010) produced a method of large-volume instrumentation selection and measurability analysis. This work is not explicitly for machine tool calibration, but does consider the suitability of instrumentation based on measurement method and instrumentation criteria. The implementation resulted in a prototype piece of software capable of finding the best instrumentation and measurement method from an internal database. Although this work is capable of always finding the optimum selection based on the predefined criteria, it pays no consideration to temporal aspects. Additionally, the produced model and software take no account of interrelated measurements, which would allow for optimal sequencing.
Recent advancements in measurement instrumentation have demonstrated how multiple error components can be measured simultaneously using the same instrument. These techniques can simplify the calibration planning process as the calibration will require less time to complete, reducing the duration between measurements. Therefore, the likelihood of being able to schedule the measurements over a temperature-stable duration is increased. However, this significantly depends on the machine tool's environment. For example, the API XD™ (API, 2014) allows all six degrees of freedom of one linear horizontal axis to be measured simultaneously from a single set-up.
Other methods of machine tool calibration include indirectly measuring all the geometric error components simultaneously. One such method is the Etalon laserTRACER (Etalon, 2014), which has a linear measurement resolution of 0.001 µm for measuring axes up to 15 m in length. The laserTRACER tracks the actual path of the machine tool throughout the entire working volume. This is done by attaching a reflector to the machine tool at the tool fixing point. From the acquired information, the system can perform a full calibration of multi-axis Cartesian machines, including all six degrees of freedom and the non-orthogonality errors. Using this method to calibrate a machine tool reduces the requirement for multiple instruments and measurement methods; therefore, the need for the type of calibration planning discussed in this paper is reduced. However, due to the high cost of such equipment, the majority of machine tool owners and providers of calibration services will not yet own such a device.
The literature survey suggests that although both industrial and academic experts are currently producing valid machine tool calibration plans, there is little evidence to suggest that they are considering optimisation. It has also been identified that there is a desire to minimise machine tool downtime during calibration and to improve the machine's accuracy. From these observations, it has been established that there is potential benefit from developing a method to automatically produce optimised machine tool calibration plans.

Temporal optimisation
A machine tool will not be available for normal manufacturing while the error mapping process is taking place. For this reason, it is important to consider the temporal aspects when performing a measurement. Measuring an error component has several temporal implications (Schwenke et al., 2008). The following list describes the different phases associated with all measurements. The duration of each phase is based on empirical observation (Parkinson et al., 2012a) and can easily be adjusted should the user require.

1. Set-up of the instrumentation is normally a manual process where the instrumentation will be taken from its protective packaging and set up on the machine for use. This duration includes the time taken for fine tuning of the instrumentation (e.g. laser beam alignment) and can also include the time taken for the instrument to stabilise in terms of self-heating and of settling to the environmental conditions, although with good planning this can be achieved "offline" without the need for the machine.
2. Measurement of the component error can be manual or automated, but either way it will require time to complete. During measurement, the measurement data as well as any necessary environmental data will be recorded. The duration will be affected by the sampling frequency, the interval between targets, the dwell time to take a measurement, and the feedrate of the machine.
3. Removal, adjustment and repositioning of instrumentation are post-measurement durations. Removal is simply the time taken to remove the instrumentation and package it suitably for storage. Adjustment is the duration for which an instrument is required to be adjusted to measure another component error. For example, after measuring linear positioning using a laser interferometer, the optics could be changed to angular optics without having to go through the complete set-up. Repositioning is where the instrument needs moving to perform another measurement for the same component error. For example, when measuring the straightness of a long axis using a Short Range Displacement Transducer (SRDT) and a granite straight edge, it is possible that the straight edge will need to be repositioned multiple times to cover a sufficient amount of travel. Another example is using a granite square and SRDT; the square can be adjusted to measure another axis, taking less time than setting the instrumentation up in the first instance.

In this analysis, the "taking out of the box" time is not included; it is assumed that the measurement engineers have the expertise to do this efficiently. However, planning could be extended to include this.

Downtime calculation
During planning, the estimated machine tool downtime can be calculated by summing the individual durations associated with each measurement. A minimisation function can be used to return the most efficient selection of measurements where the objective is to reduce estimated time. Each measurement task comprises several sub-tasks that have an associated duration. Eq. (1) shows an abstract minimisation function, f(t), for error mapping the machine tool t, where each individual measurement m (error component) is made up of a sum of durations d — for example, the duration to set up a measurement and the duration to perform it. The minimisation selects the combination of durations d for each measurement m such that the accumulation of all the durations is as low as possible.
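In symbols, the minimisation can be sketched as follows (a reconstruction consistent with the description above, since the typeset equation is not reproduced here):

```latex
% Eq. (1): total estimated downtime to be minimised for machine tool t,
% summing the durations d of every sub-task of every selected measurement m.
f(t) = \min \sum_{m \in t} \sum_{d \in m} d
```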

Uncertainty of measurement
Uncertainty of measurement is a parameter associated with the result of a measurement that characterises the dispersion of the values that could reasonably be attributed to the measurand (BIPM, 2008). For example, a thermometer might have an uncertainty value of ±0.1°C. Therefore, it can be stated that when the thermometer is displaying 20°C, the true value lies within 20°C ±0.1°C with a confidence level of 95%, where the confidence level is determined by the distribution and knowledge of the system. Quantifying and reducing uncertainty of measurement is an important task and is required to be reported on the calibration certificate. More importantly, it is required to determine if the measurement method is suitable to establish whether the machine is capable of meeting its tolerances.
The philosophy behind the investigation performed in this paper is that, rather than calculating the total estimated uncertainty for each individual measurement, it is more efficient to consider only the contributors due to scheduling that affect the estimated uncertainty. This means that it is only necessary to model aspects that cause the estimated uncertainty of measurement to change, thus simplifying the domain model.
There are many potential contributors that affect the uncertainty of measurement. However, when automatically constructing an error map plan, the aim is to select the most suitable instrumentation and measurement technique that has the lowest estimated uncertainty. In addition, the estimated uncertainty should take into consideration the changing environmental temperature and, where possible, schedule the measurements to take place where the effect of temperature on the estimated uncertainty is at its lowest. Fig. 3 shows a real-world example of the Y-axis straightness in the X-axis direction measured on a gantry milling machine at two different temperatures. From this figure, it is evident that the error quadruples with a 4.5°C increase in temperature. This example illustrates the difference in measuring the same error component at different temperatures and demonstrates the significance that environmental temperature has on the estimated uncertainty of measurement.
The following list provides the factors that affect the estimated uncertainty of the error map plan, and suggests how they can be optimised.
1. Measurement instrumentation having the lowest estimated uncertainty of measurement. Where possible, intelligently selecting instrumentation with the lowest uncertainty will reduce the overall estimated uncertainty of measurement. However, such instrumentation may also have a higher temporal cost.
2. The change in environmental temperature throughout the duration of a measurement can significantly increase the uncertainty of measurement. Where possible, the measurement should be scheduled to take place when the temperature is stable.
3. When considering interrelated measurements, the change in environmental temperature between their measurements can significantly increase the uncertainty. During planning, it is important to schedule interrelated measurements where the change in environmental temperature is at its lowest. For example, Mian et al. (2013) report a 5°C environmental temperature change over a three-day period that resulted in 18 µm displacement in the Y-axis and 35 µm in the Z-axis.
4. Allowing the instrument to correctly stabilise in the environment before the measurement can reduce the uncertainty due to thermal distortion and self-heating.

Uncertainty calculation
One known method, recommended by the International Organization for Standardization (ISO), involves combining the individual uncertainties using the root of the sum of squares to produce a combined uncertainty u_c. In this paper, Eq. (2) is used for calculating u_c (ISO230-9, 2005; BIPM, 2008):

u_c = sqrt(Σ_i u_i²), (2)

where u_c is the combined standard uncertainty in micrometres (µm) and u_i is the standard uncertainty of uncorrelated contributor i, in micrometres (µm). Once u_c has been calculated, the expanded uncertainty U is obtained by multiplying u_c by the coverage factor k; in this case k = 2, which provides a confidence level of approximately 95%.
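A minimal sketch of this calculation — root-sum-of-squares combination followed by the coverage factor k = 2, as described above (function names are illustrative):

```python
from math import sqrt

def combined_uncertainty(contributors):
    """Root-sum-of-squares combination of uncorrelated standard
    uncertainties (ISO 230-9 / GUM style), in micrometres."""
    return sqrt(sum(u * u for u in contributors))

def expanded_uncertainty(contributors, k=2):
    """Expanded uncertainty U = k * u_c; k = 2 gives ~95% confidence."""
    return k * combined_uncertainty(contributors)

# two uncorrelated contributors of 3 µm and 4 µm
print(combined_uncertainty([3.0, 4.0]))   # 5.0
print(expanded_uncertainty([3.0, 4.0]))   # 10.0
```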
A comprehensive example of calculating the uncertainty of measurement for measuring the squareness of two perpendicular axes can be found in Parkinson et al. (2014a, 2014b).

Automated planning
Planning is an abstract, explicit deliberation process that chooses and organises actions by anticipating their expected outcome. Automated planning is a branch of Artificial Intelligence (AI) that studies this deliberation process computationally and aims to provide tools that can be used to solve real-world planning problems (Ghallab et al., 2004).
Domain-independent planning is a form of planning where a piece of software (planner) takes as input the problem specification and knowledge about the domain in the form of an abstract model of actions. Searching for solution plans is a PSPACE hard problem (Erol et al., 1995). PSPACE describes the computational complexity associated with decision problems that can be solved by a Turing machine using a polynomial amount of space. One key difficulty encountered with domain-independent planners is the very broad range of planning problems which could be presented, requiring any guidance strategy to be effective across the potential range of problems.
Advances in domain-independent research resulted in the formation of the International Planning Competition (IPC) (McDermott, 2000), where state-of-the-art planners try to solve an ever-increasing set of complex benchmark problems. The birth of the IPC brought a standardised formalism for describing planning domains and problems that could be used to make direct comparisons between the performance of planners, thereby supporting faster progress in the community. This formalism is called the Planning Domain Definition Language (PDDL) (McDermott et al., 1998) and has gone through many revisions in which new features, allowing for more expressive domain modelling, have been added.

Domain model
The previously developed temporal model (Parkinson et al., 2012a, 2012b) has been extended to encode the knowledge of uncertainty of measurement reduction (Parkinson et al., 2014a). Fig. 4 shows the functional flow between the PDDL actions within the extended model. In the figure, durative actions are represented using a circle with a solid line. Fig. 4 shows that the measurement action has been split into two different actions, and an adjust action has been added, which may need to be executed if the instrumentation is not effective for the travel length of the axis. It is necessary to have two versions of the measurement action because not all measurements have other errors propagating down the kinematic chain and affecting their uncertainty. The following list details the two measurement actions along with the adjustment action:
1. Measure_no: represents a measurement where no consideration is required to be taken for any interrelated measurements.
2. Measure_in: conversely, represents a measurement where consideration is required to be taken because of interrelated measurements.
3. Adjust: some axes are longer than the range of the measuring device. In this case, the measuring device needs to be readjusted, perhaps several times, in order to measure the full range of the axis.
The developed model is encoded in PDDL 2.1 because it uses numbers, time, and durative actions. Regular numeric fluents model the constants and variables relating to the uncertainty of measurements. For example, a device's uncertainty, such as that of a laser interferometer, can be represented in PDDL as (= (device-u ?i) 0.001), where the instrument object ?i has the value of 0.001 µm.
In the temporal model, the cost of each action is the time taken to perform that specific task. Using this model will produce an error map plan, indicating the ordering of the measurements and the time taken to perform each test. In order to encode temperature-dependent equations, access is needed to the change in environmental temperature that occurs between the start and end of a measurement. Assuming access to the temperature as a fluent, the value can be sampled at the start and end of the measurement with the at start and at end syntax of PDDL. Thus, a temporary value (start-temp) is recorded at the start of the action, and the temperature deviation, ΔT, is then calculated at the end of the action by subtracting the start temperature from the current temperature.
As stated previously, in order for this modelling choice to work, it must be possible to query the temperature as a fluent at any given time. The method used in order to achieve this is described in the following section.

Temperature profile
In PDDL2.1 it is not possible to directly represent predictable continuous, non-linear numeric change. More specifically, it is not possible to represent the continuous temperature change throughout the error mapping process. This presents the challenge of how to optimise the sequence of measurements while considering temperature. The solution implemented in the model involves discretising the continuous temperature change into sub-profiles of linear continuous change. This environmental temperature profile contains the systematic heating and cooling profile for the environment of a machine tool. This information can be obtained by non-invasive monitoring or from historical data supplied by the machine tool owner. It is not possible to predict the non-systematic environmental temperature deviation, and the magnitude of the systematic element could fluctuate slightly. However, using this systematic profile allows prediction of how the temperature will change throughout the day, in particular where the rate of change is at its lowest. Scheduling against this profile gives the best available chance of producing realistic and accurate results. During the actual error mapping process, deviations from the systematic profile are recorded and taken into account on the calibration certificate.
The continuous temperature profiles are split into a discrete set of linear sections by iterating over the temperature data, looking for a difference in temperature greater than a given sensitivity. This allows the temperature profile to be discretised into a set of linear sub-profiles. An example can be seen in Fig. 5, where the environmental temperature profile (difference from 20°C) for a forty-eight-hour period is shown (Monday and Tuesday). The chosen sensitivity, s, is based on the minimum temperature sensitivity of the available instrumentation; in this example, it is 0.1°C. The graph shows the temperature profile across 48 h; the second 24 h period displays a higher temperature profile than the first and appears to reach relative stability. This is due to the initial state on the Monday morning after the weekend shut-down.
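A minimal sketch of this discretisation step, assuming the profile is supplied as (time, temperature) samples; the representation and function name are assumptions for illustration:

```python
def discretise(samples, sensitivity=0.1):
    """Split a sampled temperature profile into linear sub-profiles.

    `samples` is a list of (time, temperature) pairs.  A new sub-profile
    boundary is created whenever the temperature drifts more than
    `sensitivity` degrees from the value at the current boundary,
    mirroring the iteration over the temperature data described above.
    """
    breakpoints = [samples[0]]
    for point in samples[1:]:
        if abs(point[1] - breakpoints[-1][1]) > sensitivity:
            breakpoints.append(point)
    if breakpoints[-1] != samples[-1]:
        breakpoints.append(samples[-1])  # always close the final section
    return breakpoints

profile = [(0, 20.0), (10, 20.05), (20, 20.3), (30, 20.35)]
print(discretise(profile))  # boundaries of the linear sub-profiles
```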
To model these sub-profiles in the PDDL model, they are represented as predetermined exogenous effects. In order to encode these in PDDL2.1, the standard technique of clipping durative actions together is used. The #t syntax is used to model the continuous linear change through the sub-profile. Because the (temp) fluent is never used as a precondition, the measure actions can make use of the continuously changing value without violating the no-moving-targets rule. Given the predefined times t_1, ..., t_n at which the sub-profiles p_1, ..., p_n change, a collection of durative actions d_1, ..., d_n is created that occur for the durations t_1, t_2 - t_1, ..., t_n - t_{n-1}. An example durative action d_1 that represents a sub-profile p_1 can be seen in Fig. 6, where the duration t_1 = 42. In the measure-influence action, the temperature at the start of the measurement action, t_1, and at the end of the action, t_2, are stored in start-temp and temp, respectively. Therefore, in the measurement action it is possible to calculate the maximum temperature deviation, ΔT, based on the two temperatures t_1 and t_2.
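The start/end temperature sampling can be mirrored outside PDDL as follows; the piecewise-linear tuple representation (t_start, t_end, temp_start, rate) is an assumed encoding of the sub-profiles, not the model's PDDL syntax:

```python
def temp_at(subprofiles, t):
    """Temperature at time t from piecewise-linear sub-profiles, each
    given as (t_start, t_end, temp_start, rate)."""
    for t0, t1, temp0, rate in subprofiles:
        if t0 <= t <= t1:
            return temp0 + rate * (t - t0)
    raise ValueError("time outside the temperature profile")

def delta_t(subprofiles, start, end):
    """Temperature deviation over a measurement, mirroring the
    start-temp / temp sampling in the measure-influence action."""
    return temp_at(subprofiles, end) - temp_at(subprofiles, start)

# one sub-profile lasting 42 time units, starting 0.5 degrees above 20 C
# and rising at 0.01 degrees per unit
profile = [(0, 42, 0.5, 0.01)]
print(delta_t(profile, 10, 30))  # ~0.2 degree rise over the measurement
```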

Uncertainty equations
Equations whose result is influenced by other measurements are also encoded in the PDDL using numeric fluents. For example, Fig. 7 shows the calculation for the squareness error measurement using a granite square and a dial test indicator, where the uncertainty is influenced by the two straightness errors. In the model, this is encoded by assigning two fluents, (error-val ?ax ?e1) and (error-val ?ax ?e2), the maximum permissible straightness error in the PDDL initial state description. These fluents will then be updated once the measurement estimation has been performed. The planner will then schedule the measurements to reduce the effect of the contributing uncertainty. This shows how the uncertainty can be reduced by the ordering of the plan.
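As an illustration of why ordering matters, the sketch below treats the two straightness error values as additional contributors to the squareness measurement's uncertainty; the combination rule and values are illustrative assumptions, not the model's exact equation:

```python
from math import sqrt

def squareness_uncertainty(u_device, straightness_e1, straightness_e2):
    """Order-dependent squareness uncertainty: the two straightness
    error fluents act as extra root-sum-of-squares contributors
    (an assumed simplification of the equation in Fig. 7)."""
    return sqrt(u_device**2 + straightness_e1**2 + straightness_e2**2)

# before the straightness errors are measured, the maximum permissible
# values are used, giving a pessimistic estimate ...
worst = squareness_uncertainty(1.0, 5.0, 5.0)
# ... once measured, the actual (smaller) values replace the fluents
better = squareness_uncertainty(1.0, 1.2, 0.8)
print(worst > better)  # True: measuring straightness first reduces u_c
```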

Planner
Local Search for Planning Graphs with Timed Initial Literals (TILs) and derived predicates (LPG-td) (Gerevini et al., 2008) is a domain-independent planning tool and was a top performer in the third International Planning Competition (IPC), solving 428 planning problems with a success rate of 87%. Additionally, LPG-td was a top performer in domains involving predictable exogenous events (which are represented as TILs in PDDL) (Gerevini et al., 2006). LPG-td implements an extended local search algorithm and action graph representation. This representation is a Numerical Action (NA) graph, which extends the action graph (Gerevini and Serina, 2002) to contain propositional nodes and numerical nodes, labelled with propositions and numerical expressions, respectively (Serina, 2004). Since the release of LPG-td, many other planners have been developed that can solve PDDL2.2 problems and beyond. However, few planners support the full semantics of PDDL.
The experiments were performed on an AMD Phenom II 3.50 GHz processor with 4 GB of RAM. The results show the most efficient plan produced within a 10 min CPU time limit. All the produced plans were then validated using VAL (Howey et al., 2004), the automatic validation tool for PDDL that is capable of validating PDDL solutions against PDDL problems and domains. These experiments were carried out without the ability to schedule measurements concurrently, because the effect that concurrent measurements have on the uncertainty of measurement has not been accounted for in the current model. It is likely that uncertainties could improve owing to a smaller change in ambient conditions between related measurements, but this could be counteracted by any need to use instrumentation with a higher uncertainty in order to achieve concurrent measurement. Table 1 shows the empirical data from these experiments. From these results, it is evident that when optimising for time, no consideration is given to the uncertainty due to the plan order. Similarly, when optimising for the uncertainty due to the plan order, no consideration is given to temporal implications. However, when optimising the plan for both the uncertainty due to the order of the plan and the overall timespan, the planner can establish a good compromise.
From Table 1 it is noticeable that a solution to each problem instance is found within the 10 min time limit. In addition, Table 2 shows exactly how many plans were produced within this time limit and at what time the best plan was discovered. This information shows that the best plans were discovered on average after 8 min 29 s of execution, which suggests that truly optimal plans may not be found within the 10 min period. It is also worth highlighting that the results demonstrate the potential advantage of using automated planning based on the developed model. However, it is possible that experts with different opinions and knowledge might produce error map plans that have a lower estimated uncertainty of measurement. Encoding this new knowledge in the model would then allow comparably optimised error map plans to be produced.

High Performance Computing
To investigate this further, without imposing a strict computation restriction, experiments were performed on a hardware platform with increased resource availability. The chosen platform is the Huddersfield University QueensGate Grid (QGG) High Performance Computing (HPC) architecture. The dedicated hardware has 37 cores with a clock speed of 2.53 GHz and 8 GB of RAM allocated to each core. The same experiments as for the PC were performed on the QGG with a CPU execution time-limit of 24 h. The motivation for allowing the planners to run for 24 h is that the HPC architecture has clock speeds comparable with a PC architecture; as LPG-td is a single-core application, allowing it to execute for only 10 min would yield similar results to the PC. The chosen HPC architecture, however, makes it possible to execute many instances of LPG-td simultaneously and for prolonged periods, which would render the average PC unusable for other activities. Table 3 shows the results from these experiments. From these results, it is evident that in almost all instances plans have been found with a lower metric, which highlights that providing significantly more computation time can result in better-optimised plans. However, it is important to weigh the gain in optimality to evaluate whether the extra computational resources are necessary. Table 4 shows the percentage improvement for each metric when comparing the experiments performed on the QGG with those on the PC. It is noticeable that while in most cases there is an improvement in the optimised metric, there is also often a deterioration in the non-optimised metric. Additionally, there is an improvement in both metrics for the multi-objective experiments.
The use of the QGG has shown that improvements over the best solutions identified on a PC can be achieved with greater computation power. However, whether this is necessary is down to the end user. For example, for a measurement engineer wishing to perform a quick and effective error mapping process on an old machine tool operating with large tolerances, a PC architecture is sufficient. Conversely, a measurement engineer error mapping a state-of-the-art machine tool that operates to sub-micron tolerances within the aerospace sector will want the quickest and most effective error mapping plan that minimises the uncertainty of measurement, making the use of HPC justified for this engineer.

Plan excerpts
The following three plan excerpts illustrate the produced plans for the three different metrics and the differences between the measurement orders. Fig. 8 shows an excerpt from a temporally optimised plan produced from the 31A problem instance. The motivation for showing this particular excerpt is to investigate how the measurement of interrelated errors is scheduled in the produced plan. Firstly, it is noticeable that measurements that can use the same instrumentation are clustered together so the instruments can be adjusted from a previous measurement to save time, rather than set up from a packaged state. It is also noticeable that the measurement order is not optimal for reducing the estimated uncertainty of measurement, because of the measurement of the angular deviation of the Y-axis about the Y-axis (ECY). This adds a time increase of around one hour between the interrelated straightness and non-orthogonality errors. The significance of this time period for uncertainty is that the continuing temperature increase will have a negative impact on the estimated uncertainty of measurement. From Table 1 it can be seen that the total machine downtime when using this error mapping plan would be 33 h and 12 min, with an uncertainty of measurement due to the plan order metric of 99 μm. Fig. 9 illustrates an excerpt from the plan produced for the same 31A instance, but this time optimising for the uncertainty of measurement due to the ordering of the plan. Similarly to Fig. 8, Fig. 9 also displays the section of the plan that details the scheduling of interrelated measurements. From the plan, it is noticeable that temporal aspects have not been considered: even though measurements using the same instrumentation are grouped together, the planner has scheduled additional set-ups for the instrumentation. It is also noticeable that the plan is optimised to reduce the estimated uncertainty of measurement due to the plan order.
This can be seen from the fact that the two interrelated straightness errors (EYX and EZY) are scheduled sequentially, followed by the measurement of non-orthogonality between the Y- and X-axes (EC0Y). Scheduling these errors sequentially means that any effect due to changing temperature over time can be minimised. It can also be seen in the produced plan that the temperature variation over the course of the three interrelated measurements is only 0.3°C. The machine downtime when using this error mapping plan would be 34 h and 12 min, with a plan order uncertainty of measurement metric of 52 μm. This plan results in an increased downtime of 1 h over the temporally optimised plan, but reduces the uncertainty of measurement metric by 47 μm.
The third plan excerpt, shown in Fig. 10, shows the plan order when optimising for both machine tool downtime and the uncertainty of measurement due to the plan order for problem instance 31A. Firstly, it is evident that temporal optimisation has been achieved by scheduling measurements that use the same instrumentation sequentially, so that the instrumentation only needs to be adjusted, not removed and set up once again. Secondly, it can be seen that the uncertainty of measurement due to the plan order has been reduced by scheduling interrelated measurements together, as well as scheduling them where the temperature difference is at its lowest. From examining the temperature profile in Fig. 5, it is evident that there are areas where the rate of change of temperature is lower. However, when solving multi-objective optimisation planning problems, a trade-off between both metrics takes place. In Table 1 this trade-off can be seen: the error mapping plan duration is 33 h 38 min and the uncertainty of measurement metric is 53 μm. Both metrics are not as low as when optimised individually, but the plan is a suitable compromise, showing a significant reduction in both machine tool downtime and the uncertainty of measurement due to the plan order. Compared with the single-objective optimal plans, the metrics in the multi-objective plans are on average 2% worse for time and 8% worse for the uncertainty of measurement than when they are optimised individually. However, the multi-objective search plans have on average a 3% reduction in the time metric when compared to the downtime of the uncertainty-optimised plan, and a 23% improvement in the estimated uncertainty of measurement metric when compared to the uncertainty of the temporally optimised plan.
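The 31A figures quoted above can be checked with a short calculation (these are instance-level percentages, so they differ from the averages over all twelve instances):

```python
# Worked check of the 31A trade-off, using the figures from the text:
# downtime in minutes, plan-order uncertainty metric in micrometres.
time_opt = {"time": 33 * 60 + 12, "uncert": 99}    # temporally optimised
uncert_opt = {"time": 34 * 60 + 12, "uncert": 52}  # uncertainty optimised
multi = {"time": 33 * 60 + 38, "uncert": 53}       # multi-objective

def pct_change(new, old):
    """Signed percentage change from old to new (negative = improvement)."""
    return 100.0 * (new - old) / old

# Versus each single-objective optimum, the multi-objective plan gives
# up a little on the optimised metric...
print(round(pct_change(multi["time"], time_opt["time"]), 1))       # 1.3
print(round(pct_change(multi["uncert"], uncert_opt["uncert"]), 1)) # 1.9
# ...but improves substantially on each plan's neglected metric.
print(round(pct_change(multi["time"], uncert_opt["time"]), 1))     # -1.7
print(round(pct_change(multi["uncert"], time_opt["uncert"]), 1))   # -46.5
```

For this instance the compromise costs under 2% on each optimised metric while nearly halving the uncertainty metric relative to the temporally optimised plan.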
The graph presented in Fig. 11(a) shows the average metrics for the six different three-axis error mapping instances, and Fig. 11(b) shows the six different five-axis error mapping instances. In these two figures, the effect on both metrics of performing a single-objective optimisation can be visualised. Additionally, the trade-off between time and uncertainty when performing the multi-objective optimisation, and the compromise in the final solution, can easily be visualised. From these two graphs, it can be concluded that performing the multi-objective optimisation is beneficial, and adjusting the weighting of each metric would allow optimisation on a case-by-case basis.

[Key to the plan excerpts in Figs. 8-10: each measurement comprises instrumentation set-up, instrumentation adjustment, measurement, and set-up removal. EYX, EZX: straightness of the X-axis in the Y- and Z-axis directions; EZY, EXY: straightness of the Y-axis in the Z- and X-axis directions; EXZ, EYZ: straightness of the Z-axis in the X- and Y-axis directions; EC0Y: non-orthogonality between the Y- and X-axes; EB0Z: between the Z- and Y-axes; EA0Y: between the X- and Z-axes.]
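One way such a weighting could be realised is sketched below (a hypothetical normalised formulation, not the metric used by the planner; w_time = 0.5 corresponds to the equal-weight arithmetic-mean objective used in this paper):

```python
# Hypothetical weighted combination of the two plan metrics. The metrics
# are normalised by reference values (e.g. the single-objective optima)
# so that neither unit dominates; lower combined values are better.
def combined_metric(time_min, uncert_um, time_ref, uncert_ref, w_time=0.5):
    """w_time = 0.5 recovers the arithmetic mean of the normalised metrics."""
    return (w_time * time_min / time_ref
            + (1.0 - w_time) * uncert_um / uncert_ref)

# With the 31A figures, an aerospace user might weight uncertainty heavily:
balanced = combined_metric(2018, 53, time_ref=1992, uncert_ref=52)
precision = combined_metric(2018, 53, time_ref=1992, uncert_ref=52, w_time=0.1)
print(round(balanced, 3), round(precision, 3))
```

Sweeping w_time between 0 and 1 would trace out the case-by-case trade-offs suggested above, from purely temporal to purely uncertainty-driven optimisation.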

Conclusions
In this paper it has been identified that, in addition to optimising error mapping plans to minimise downtime or the uncertainty of measurement, multi-objective optimisation can be performed. The undertaken case studies show that, in comparison with the single-objective optimal plans, the metrics in the multi-objective plans are on average 2% worse for time and 9% worse for the uncertainty of measurement than when they are optimised individually. However, the multi-objective search plans have on average a 3% reduction in the time metric when compared to the downtime of the uncertainty-optimised plan and a 23% improvement in the estimated uncertainty of measurement metric when compared to the uncertainty of the temporally optimised plan. Evaluation of the generated plans has validated their fitness-for-purpose and demonstrates the merit of automatically generating measurement plans.
Knowledge gained about the discovery of optimal plans when performing the experiments on a PC architecture highlighted that the experimental analysis could also be performed on an HPC architecture. These experiments showed that there is on average a 4% improvement in optimality when compared with the experiments performed on the PC architecture. This warrants the use of HPC resources for measurement engineers working to sub-micron tolerances, and also suggests that a standard PC architecture is sufficient for most applications. As the state of the art in both AI automated planners and PC computation power improves, the requirement for HPC resources should reduce. This paper presents novel contributions to both the machine tool metrology and automated planning (AP) communities. The first contribution is the ability to use AP to model both the temporal and the uncertainty of measurement aspects of a machine tool error mapping process. This involved modelling the durations of each individual measurement, as well as discretising continuous environmental temperature change. The paper then describes how this model can be used by state-of-the-art AP algorithms to find optimal plans based on three different metrics: (1) downtime of the machine tool during measurement, (2) estimated uncertainty of measurement, and (3) the arithmetic mean of both time and uncertainty. The work undertaken in this paper has also identified that, for most manufacturers, using the proposed technology on a standard PC architecture will produce sufficient results. Producing both temporally optimised and estimated-uncertainty-optimised plans is a novel contribution to the machine tool metrology community, and provides a solution to the identified literature gap. This has significant implications for all metrological processes, whose cost can be reduced through less downtime and whose quality can be improved through lower uncertainty.
The work presented in this paper is also significant for the AP community as it provides a real-world multi-objective problem to use to initiate the development of more powerful AP tools.
Future work includes the extension of the developed model to account for other continuous factors that affect the uncertainty of measurement. For example, the effect of performing simultaneous measurements on the estimated uncertainty of measurement needs to be investigated and modelled. This would allow concurrent measurements to be scheduled to minimise the uncertainty of measurement, further reducing both machine tool downtime and the uncertainty of measurement.