Instrument localisation for endovascular aneurysm repair: Comparison of two methods based on tracking systems or using imaging

In endovascular aneurysm repair (EVAR) procedures, medical instruments are currently navigated with two-dimensional image-based guidance requiring X-rays and contrast agent.


| INTRODUCTION
Cardiovascular diseases, like abdominal aortic aneurysm (AAA), are among the most frequent causes of death in western industrial nations. 1 AAAs are local dilatations of the abdominal aorta with a diameter greater than 1.5 times its normal size and a prevalence of 3.9%-7.7% in people over 65 years of age. 2,3 In 80% of the cases, AAAs are free of any symptoms. Thus, they are usually detected by chance. 4 Since untreated AAAs with a large diameter have a high risk of rupture, which is associated with a mortality rate of 80%, 5 several countries have recently established ultrasound screenings. 3 Besides surgery, a possible therapy option is implantation of a stentgraft to reduce the stress on the aneurysm wall. Usually, this treatment is done in a minimally invasive procedure called endovascular aneurysm repair (EVAR). 6 In an EVAR procedure, guide wires as well as catheters are used to navigate and fixate the stentgraft in the aneurysm. The most technically demanding steps in this procedure, and thus those with a major impact on its duration, are the cannulation of small-diameter vessels of the aorta or parts of the stentgraft. State-of-the-art navigation is achieved by visual guidance based on fluoroscopy and digital subtraction angiography (DSA) in combination with a contrast agent administered to visualise the actual vessel volume and to verify technical success of the procedure. However, this image-based guidance has several drawbacks. Firstly, fluoroscopy imaging and DSA provide only two-dimensional (2D) projections of the patient's anatomy and inserted instruments. This makes the navigation of the guide wires and catheters difficult and leads to long EVAR procedure times. Furthermore, patients and physicians are exposed to X-rays during the procedure. The commonly used iodine contrast agents carry health risks like nephrotoxicity. 7 These burdens may increase further in the future, as screening studies show an increase in EVAR procedures. 
8 Hence, reducing the health risks for the surgical team as well as for the patient and facilitating the navigation of the inserted devices will gain more importance.
A preoperative computed tomography (CT) angiography is part of the guideline-based diagnosis and is therefore usually available.
Knowing the three-dimensional (3D) localisation of the instruments and their relative location in the vascular system (e.g. for visualisation on preoperative CT) simplifies the procedure. 9 In contrast to the conventional 2D view, it allows the instrument positions to be viewed from different viewpoints and shows whether the inserted instrument has the correct orientation.
For endovascular navigation, not only the position and orientation of the tip is relevant, but also the overall 3D shape of the device. This provides information about the flexibility of the device and the mechanical stress exerted on the vessel wall by the inserted devices.
Misjudgements can lead to potentially life-threatening complications, such as rupture of the vessel wall. In addition, prediction of tip motion based on navigation commands given at the proximal end of the device requires knowledge of the device shape. During navigation, the position of the tip of the device may not change even though the physician inserts the device into the vascular system. In this situation, the shape of the device contains turns and the mechanical stress on the vessel wall increases. Such situations can be easily recognised by clinicians when the entire 3D shape is tracked.
The accuracy of the instrument's 3D positions that must be achieved in EVAR procedures depends on the proximity of the stentgraft to be placed, the structures to be preserved, vessel sizes and territories and ultimately the individual case. In the renal artery region, a clinical accuracy of 3 mm can generally be defined as acceptable, which is half the average diameter of the renal artery ostium. 10 For other conditions, lower accuracies might be acceptable.
A common approach to obtaining 3D information is based on 2D/3D image registration, which refers to the process of determining the spatial relationship between a preoperative 3D image and an intraoperative 2D image. A comprehensive review of the latest approaches and methods can be found in Markelj et al. 11 Registration is used to transfer information from intraoperative fluoroscopic images to the preoperative 3D image or vice versa, and also allows reconstruction of the 3D shape. The field of 2D/3D registration is an active area of research in minimally invasive image-guided therapy. 12,13 2D/3D registration approaches can be divided into feature-based and intensity-based approaches. In addition, methods differ in whether only a single 2D fluoroscopic image or multiple 2D fluoroscopic images from different directions are used for registration.
Mitrović et al. 13 compared several state-of-the-art mono- and bi-planar 2D/3D registration approaches for cerebral vascular data. It was shown that mono-planar approaches achieve similar registration accuracy but are not as robust as the bi-planar approaches. A disadvantage of the bi-planar approaches is that they require a bi-planar system, which is not very common, or manual rotation of the C-arm, which requires additional steps. Furthermore, the radiation exposure is higher.
A completely different way of guidance, without the need for X-rays and contrast agent, is the use of tracking systems. The most common tracking technologies are optical tracking, electromagnetic (EM) tracking and fibre optical sensors.
EM tracking provides the position and orientation (pose) of medical instruments. 14 EM sensors are very small and thin and measure an EM field generated by a base station. Thus, they can be easily integrated into medical instruments like needles, catheters or endoscopes. Since EM sensors do not need a line of sight to the field generator, they are suited to track instruments inside the human body. Normally, the pose of a tracked device is displayed in relation to the patient's anatomy obtained from a preoperative CT scan. [15][16][17] For this purpose, fiducial markers are placed on the patient/phantom during preoperative image acquisition. During navigation, these markers are placed at approximately the same positions as in the preoperative scan and their positions in the EM space are measured. Then, a rigid transformation from the EM measurement space into the preoperative space can be determined with the marker coordinates so that the EM tracked instruments can be visualised in the preoperative data. With EM tracking the 3D pose can be obtained for one specific position of the instrument, but not the shape of the instrument.
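The marker-based mapping from EM space to preoperative space described above is a point-based rigid registration. A minimal sketch of one standard way to solve it, the SVD-based Kabsch/Horn method, is given below; the cited reference 26 may use a different formulation, and all names here are ours:

```python
import numpy as np

def point_based_rigid_registration(src, dst):
    """Least-squares rigid transform with R @ src_i + t ~= dst_i
    (Kabsch/Horn method via SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if one is returned
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Applied to the measured marker positions in EM space (`src`) and their counterparts in the CT scan (`dst`), the returned pair (R, t) maps EM-tracked poses into the preoperative space.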
Fibre optical sensors, such as fibre Bragg gratings (FBGs), are another tracking technology, which allows shape sensing of medical instruments but cannot track the absolute position of the instrument. [18][19][20] FBGs are interference filters, which are inscribed into the core of an optical fibre and reflect a specific Bragg wavelength. Combining several FBGs at the same longitudinal position in different fibres as an FBG array allows estimation of curvature and direction angle so that the shape of the fibre can be reconstructed. In multicore fibres, FBG arrays are inscribed in three or more cores of a single optical fibre. 21 Since they have a small diameter and are flexible, optical fibres can be easily integrated into medical instruments.
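To illustrate the curvature estimation at one FBG array, a toy sketch under a pure-bending model is shown below. The three-core geometry at 120° spacing, the strain model and all names are illustrative assumptions for this sketch, not the method of the cited works:

```python
import numpy as np

def curvature_from_fbg_strains(strains, core_angles, core_radius):
    """Estimate curvature magnitude (1/m) and bending direction (rad) at
    one FBG array of a multicore fibre.
    Assumed pure-bending model: strain_i = -kappa * r * cos(theta - phi_i),
    solved as linear least squares in (a, b) = kappa * (cos theta, sin theta)."""
    strains = np.asarray(strains, float)
    phi = np.asarray(core_angles, float)
    # strain_i = -r * (a*cos(phi_i) + b*sin(phi_i))
    A = -core_radius * np.column_stack([np.cos(phi), np.sin(phi)])
    (a, b), *_ = np.linalg.lstsq(A, strains, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)
```

Repeating this at every FBG array along the fibre yields a curvature/direction profile from which the 3D shape can then be reconstructed by integration.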
Combining shape sensing based on optical fibres with EM tracking enables 3D guidance of flexible medical instruments. Shi et al. 22 described a catheter which included one EM sensor, an intravascular ultrasound probe at the tip and an optical fibre with FBGs. However, in this work no method for combining the fibre optical shape sensing (FOSS) information with the EM tracking information was introduced. In Jäckle et al., 23 a first method for fusing FOSS with the position information of three EM sensors was introduced. Since EM sensors can sense orientation information, the necessary number of sensors could be further reduced. In Jäckle et al., 24 an approach for locating the measured shape with the position and direction of two EM sensors was introduced. To the best of our knowledge, there are currently no other publications dealing with the fusion of these two technologies.
Reducing the necessary number of EM sensors has several advantages. As already mentioned, EM sensors are very small and thin. However, at each position where an EM sensor is integrated, the medical instrument becomes stiffer and loses flexibility. In addition, each EM sensor needs space for its cable and tubing, which limits the instruments into which all required EM sensors can be integrated. Thus, reducing the necessary number of EM sensors to one single sensor enables further applications in thinner and more flexible instruments, such as thinner catheters or even guide wires.
Furthermore, EM sensors can only be tracked inside the measurement volume of the field generator. For an instrument with three integrated EM sensors distributed over the whole instrument, the range where all three EM sensors of the instrument can be tracked is limited. Using only one EM sensor significantly enlarges the area where the medical instrument can be located. Thereby, an instrument with only one integrated EM sensor can also be used in procedures where the instrument has to be tracked over long distances.
The main objective of this work is to evaluate, compare and relate image-based and tracking-based methods for 3D localisation of medical instruments. To this end, we first introduce a novel 3D shape localisation approach for a stentgraft system using FOSS, the pose of only one EM sensor and the vasculature information of the preoperative CT scan. Secondly, as a representative example, we introduce an alternative image-based method for 3D shape reconstruction using only a single 2D fluoroscopic image and the 3D vessel information from the preoperative CT scan as input. Subsequently, a joint experiment was performed using a stentgraft system with integrated tracking systems in a realistic vascular phantom. In this experiment, the stentgraft system was inserted at different depths into the vascular phantom and both tracking data and 2D fluoroscopic images were acquired. Afterwards, the data were used to compute the 3D shapes of the stentgraft system with both approaches and the localisation accuracy was determined. Finally, the two methods are compared and discussed based on the results.

The FBGs inscribed into the optical fibre were not visible, but a fibre region of 40 cm was marked by the manufacturer in which the 38 FBG arrays are located, allowing shape sensing over 38 cm. Thus, the EM sensor was not placed exactly at the tip of the fibre but further inside, to ensure that the sensor is within the shape sensing region. The EM sensor was fixed rigidly to the capillary tube and covered separately with shrink tubing to protect the sensor and its cable from damage.

| Tracking systems
The optical fibre is connected to a fanout, which in turn is connected to an interrogator (FBGS Technologies GmbH, Jena, Germany) to obtain the reflected wavelengths of all FBGs. The interrogator measures the spectrum of the reflected light, and the fanout allows specific cores of the multicore fibre to be selected for this measurement.
Based on the measured wavelengths of all FBGs, the shape of the 38 cm long shape sensing region of the fibre is reconstructed using the method explained in Jäckle et al. 25 In that article, each step of the shape sensing model has been analysed and optimised for our optical fibre. The resulting shape Ŝ is represented by 761 equispaced 3D points.

| Localisation model
For the calibration step and for the evaluation of the localisation method, CT and cone-beam CT (CBCT) scans were acquired.
Metallic markers were placed on the phantom before the experiment and their positions were acquired with an EM-tracked stylus during the experiment. Thus, a transformation F^CT_EM, which maps the EM sensor pose P^EM_1 from the EM space to the CT or CBCT space pose P^CT_1, can be computed by means of a point-based registration. 26 A spatial calibration step was performed before the evaluation to find a correspondence between the reconstructed shape Ŝ and the measured EM sensor pose P^CT_1. For this purpose, two variables were determined. First, the index i1 of the shape point nearest to the EM sensor tip is determined and used to obtain the corresponding shape point Ŝ_i1. Second, the offset between the EM sensor and the corresponding shape point has to be corrected. To this end, a correction vector v⃗_1 for mapping the measured EM sensor position to the corresponding shape point Ŝ_i1 is determined depending on the current orientation of the EM sensor. This calibration step is illustrated in Figure 2 and was already introduced in detail in Ref. 23. Afterwards, the shape Ŝ can be located in the CT space with one EM sensor and the preoperative data using the values obtained in the calibration. Firstly, the shape is transformed into CT space.
The EM sensor provides full orientation information.
However, with our optical fibre it is not possible to accurately measure the rotation along the fibre direction. As already reported in Ref. 25, the rotation angles are not stable and shift over time along the whole fibre. Before shape sensing is started, the angles are calibrated, and the observed angle changes are small enough for an accurate shape sensing of flexible instruments (see also the reported accuracies in Ref. 25). However, angle changes of several degrees can be observed locally along the fibre. Another problem, known from the mathematical theory of curves, is that the angle is not uniquely defined for straight shapes, because no direction of bending is given. Since integrated EM sensors make the instruments stiffer, the shape tends to be straight and the angle may not be uniquely defined.
As a result, only the direction and position of the two tracking systems coincide. Thus, the shape is transformed into the CT space such that its position and direction correspond to those of the EM sensor.
Then, the shape is prealigned and registered into the vasculature to obtain the correct rotation along its direction. An overview of all processing steps is given in Figure 3. In the following subsections, each step of the shape localisation with one EM sensor and the preoperative data is introduced.

| Transformation of shape into CT space
For this shape localisation method, the shape is reconstructed such that the shape point corresponding to the EM sensor is the origin. This shape point points into the third dimension and corresponds to the direction of the EM sensor. Hence, the shape is transformed to the EM space with the pose of the EM sensor P^EM_1 and by applying the correction vector v⃗^EM_1, summarised as one transformation. Afterwards, the shape is transformed to the CT space by applying the transformation F^CT_EM obtained from the metallic markers. As a result, the shape is located at the EM sensor position in the CT scan and points in the same direction as the EM sensor, but with a possibly wrong rotation about this direction.
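Under these conventions (sensor shape point at the origin, pointing along the third axis), chaining the sensor pose, the calibrated correction vector and the marker-based EM-to-CT transform could be sketched as follows; the function and variable names are ours, not from the paper:

```python
import numpy as np

def locate_shape_in_ct(shape, R_em, p_em, v_corr, R_ct_em, t_ct_em):
    """Map reconstructed shape points (sensor point at origin, pointing
    along the third axis) into CT space.
    R_em, p_em       : orientation/position of the EM sensor in EM space
    v_corr           : calibrated sensor-to-fibre correction vector (EM space)
    R_ct_em, t_ct_em : marker-based rigid transform EM -> CT."""
    shape = np.asarray(shape, float)
    shape_em = shape @ R_em.T + p_em + v_corr   # apply sensor pose + correction
    return shape_em @ R_ct_em.T + t_ct_em       # apply EM -> CT transform
```

After this step the shape still has an undetermined rotation about the sensor direction, which the subsequent prealignment and registration steps resolve.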

| Shape prealignment with vessel centreline
In this step the information from the preoperative scan is used. From the CT scan the mask image M of the vessel volume is obtained by segmentation. Furthermore, the centreline of the chosen insertion vessel path, represented as the point list C = (c_0, …, c_n), is semi-automatically extracted from the segmentation mask. Based on this, the insertion depth of the sensed shape is estimated as follows: Firstly, the point c_i of the centreline C with the shortest distance to the shape point Ŝ^CT_i1 is determined. Then, the arc length along the centreline from the access point c_0 to the point c_i,

L = Σ_{j=0}^{i−1} ‖c_{j+1} − c_j‖_2,

is determined. Afterwards, the shape point Ŝ^CT_i0 corresponding to the access point c_0 can be estimated such that the arc length along the shape from Ŝ^CT_i0 to Ŝ^CT_i1 equals L. Then, the shape is rotated about the direction of the shape point Ŝ^CT_i1 by the angle α such that the access point and its corresponding shape point are as close as possible. This is done by estimating the angle α between the vectors from the rotation axis to P_{c_0} and to P_{Ŝ^CT_i0}, where P_X is the projection of point X onto the line Z generated with the position and the direction of the shape point Ŝ^CT_i1. Note that this approach only results in a good estimation when not the whole sensed shape is inserted into the vascular system, so that a shape point corresponding to the access point exists. If the whole shape is inserted, then all shape points are inserted deeper than the access point c_0 and thus no corresponding shape point exists for the access point.
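The insertion-depth estimate, arc length along the centreline matched against arc length measured backwards along the sensed shape, can be sketched as below. This is a simplified illustration with names of our choosing, not the authors' implementation:

```python
import numpy as np

def estimate_access_shape_point(shape, i1, centreline):
    """Estimate the index i0 of the shape point corresponding to the
    vessel access point c_0 (first centreline point).
    The arc length from c_0 to the centreline point nearest to the EM
    sensor's shape point shape[i1] is matched against the arc length
    covered walking backwards along the sensed shape from index i1."""
    shape = np.asarray(shape, float)
    C = np.asarray(centreline, float)
    i = int(np.argmin(np.linalg.norm(C - shape[i1], axis=1)))   # nearest c_i
    L = float(np.sum(np.linalg.norm(np.diff(C[: i + 1], axis=0), axis=1)))
    dist, i0 = 0.0, i1
    while i0 > 0 and dist < L:          # walk back until arc length L is covered
        dist += float(np.linalg.norm(shape[i0] - shape[i0 - 1]))
        i0 -= 1
    return i0
```

As noted above, the estimate is only meaningful while part of the sensed shape remains outside the vessel system; otherwise the walk simply terminates at the first shape point.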

| Shape registration with vessel volume
To get an accurate localisation, a registration of the shape, represented as a curve Ŝ^CT, with the vessel mask M : R³ → {0, 1} has to be performed. Here we assume that the inserted part of the shape is located completely inside the vessel system. Since we have one EM sensor, which gives a position estimate of sufficient accuracy, its corresponding shape point Ŝ^CT_i1 should not change its position and is used as a landmark. Since the shape can be measured accurately with the optical fibre and is not deformed, the aim of the registration is to find a rigid transformation y_rigid : R³ → R³ that maps the shape inside the vessel mask subject to the landmark constraint. To compute a solution, the shape is discretised by the equispaced points Ŝ^CT_0, …, Ŝ^CT_m and the objective

J(y) = Σ_{i=0}^{m} D_M(y(Ŝ^CT_i))² + β ‖y(Ŝ^CT_i1) − Ŝ^CT_i1‖²

is minimised, where D_M(x) ≔ inf_{z: M(z)=1} ‖x − z‖_2 is the distance map of the mask M.
The weighting parameter β of the landmark penalty has been chosen very high (β = 10000) to ensure that the position corresponding to the EM sensor does not change. As a result, the shape from the EM sensor to the access point will be aligned to the iliac arteries.
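The registration step can be sketched as follows: a Euclidean distance map of the vessel mask supplies D_M, and a generic optimiser searches the six rigid parameters. This is a toy sketch with a nearest-voxel distance lookup and a Powell optimiser of our choosing; the paper does not specify its discretisation or optimiser:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.optimize import minimize

def register_shape_to_vessel(shape, mask, i1, beta=1e4):
    """Rigidly register a discretised shape into a binary vessel mask
    (voxel coordinates). Cost: sum of squared distances of the shape
    points to the vessel (distance map D_M) plus a strongly weighted
    landmark term keeping point i1 (the EM sensor point) fixed."""
    shape = np.asarray(shape, float)
    D = distance_transform_edt(mask == 0)        # D_M: 0 inside the vessel
    p_fix = shape[i1].copy()

    def transform(params):
        ax, ay, az = params[:3]                  # Euler angles
        cx, sx, cy, sy, cz, sz = (np.cos(ax), np.sin(ax), np.cos(ay),
                                  np.sin(ay), np.cos(az), np.sin(az))
        R = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
             @ np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
             @ np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
        return (shape - p_fix) @ R.T + p_fix + params[3:]  # rotate about landmark

    def objective(params):
        pts = transform(params)
        idx = np.clip(np.round(pts).astype(int), 0, np.array(D.shape) - 1)
        dist = D[idx[:, 0], idx[:, 1], idx[:, 2]]          # D_M at shape points
        return np.sum(dist ** 2) + beta * np.sum((pts[i1] - p_fix) ** 2)

    res = minimize(objective, np.zeros(6), method="Powell")
    return transform(res.x)
```

Parameterising the rotation about the landmark point makes the landmark penalty depend only on the translation part, which keeps the two terms of the objective well separated.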

| IMAGE-BASED SHAPE LOCALISATION
The image-based approach to localise a 3D shape consists of the following steps: First, an intraoperative fluoroscopic image is registered with the preoperative CT scan. Next, the stentgraft system is segmented in the registered fluoroscopic image. This segmentation is then projected back along the lines from the detector plane to the point source into the CT volume. In the CT volume, possible paths of the stentgraft system are obtained from the vessel centrelines. The path with minimal distance to the back projection of the stentgraft system is selected as the path where the stentgraft system is inserted. To locate the path in the intraoperative space, this path is projected onto the back projection. An overview of the process is provided in Figure 5 and the individual steps are explained in more detail in the following.

| Intensity-based 2D/3D registration
For registering the 2D intraoperative fluoroscopic image with the 3D preoperative CT scan we build on our previous work. 27,28 We use an image intensity-based approach for registration of the fluoroscopic image with so-called digitally reconstructed radiographs (DRRs). DRRs simulate X-ray images by generating a perspective projection of a 3D image (e.g. a CT scan) onto a 2D image plane. 27,29,30 Our overall approach consists of a preprocessing step, a preregistration step and the final rigid registration.
For the preprocessing step we use the available information. In a subsequent preregistration step we perform a brute-force search for an offset of the CT volume position along all three axes.
Assuming that the patient is positioned in the same way on the table for the CT and in the operating room, we can limit the brute-force search to translational offsets and do not have to take rotations into account.
For each offset we calculate a DRR and compare it with the fluoroscopic image using a so-called image similarity or distance measure D. In the literature, a variety of distance measures have been proposed, for example sum-of-squared-differences, cross-correlation or mutual information. 11 In vascular fluoroscopic imaging, vessels are the dominant structures providing strong gradient information. Therefore, a distance measure that is sensitive to image gradients is a suitable choice. To this end, we propose using the so-called normalised gradient fields distance measure (NGF), 31 which was also successfully used for 2D/3D registration of vascular images in our previous work. 27 Given two images R and T, it is given by

D^NGF(R, T) = Σ_x ( 1 − ( 〈∇R(x), ∇T(x)〉_{ε_R ε_T} / (‖∇R(x)‖_{ε_R} ‖∇T(x)‖_{ε_T}) )² )

with 〈x, y〉_ε ≔ x⊤y + ε, ‖x‖_ε ≔ √(〈x, x〉_{ε²}) and so-called edge parameters ε_R, ε_T > 0 steering the sensitivity to gradient magnitude and noise.
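A minimal sketch of a discrete NGF distance for 2D images, using finite-difference gradients, might look as follows (a simplified illustration of the measure, not the implementation of Refs. 27/31):

```python
import numpy as np

def ngf_distance(R, T, eps_r=1.0, eps_t=1.0):
    """Normalised gradient fields (NGF) distance of two 2D images.
    Each pixel contributes 1 minus the squared (regularised) cosine of
    the angle between the image gradients; 0 means the gradients are
    parallel or anti-parallel everywhere."""
    gR = np.gradient(np.asarray(R, float))
    gT = np.gradient(np.asarray(T, float))
    inner = gR[0] * gT[0] + gR[1] * gT[1] + eps_r * eps_t   # <grad R, grad T>_eps
    norm_r = np.sqrt(gR[0] ** 2 + gR[1] ** 2 + eps_r ** 2)  # ||grad R||_eps
    norm_t = np.sqrt(gT[0] ** 2 + gT[1] ** 2 + eps_t ** 2)  # ||grad T||_eps
    return float(np.sum(1.0 - (inner / (norm_r * norm_t)) ** 2))
```

The edge parameters act as a noise floor: gradients much smaller than eps are effectively ignored, so the measure is driven by strong edges such as contrasted vessels.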
In our intensity-based 2D/3D registration we search for a 3D rigid transformation y_rigid : R³ → R³ which, applied to the 3D CT volume, minimises the distance measure between the resulting DRR and the fluoroscopic image. Furthermore, the multilevel strategy as presented in Ref. 28 was applied.

| Segmentation of the stentgraft system in fluoroscopic images
To segment the stentgraft system in the registered fluoroscopic image we use a semi-automatic approach. An initial pre-segmentation is based on a deep learning approach with a 2D U-Net consisting of four levels. As the network was trained in a different context on simulated fluoroscopic image data with inserted catheters, the segmentation results were adapted manually.

| Back projection of segmentation to CT volume
The segmented stentgraft system mask is then projected back into the CT volume by using a ray sampling method and therefore 'inverting' the DRR calculation. For this purpose, we create a binary 3D back projection image B that has the same size as the CT image, where we set BðxÞ ¼ 1 if the voxel located at x is hit by any ray passing the point source and the segmented stentgraft system in the 2D detector plane, and BðxÞ ¼ 0 otherwise (cf. Figure 10).
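The back projection can be sketched by projecting every voxel centre onto the detector plane and testing the hit pixel, which marks the same voxels as explicit ray casting. The geometry below (source position, detector in a plane z = z_det with unit, axis-aligned pixels) is a simplifying assumption for illustration:

```python
import numpy as np

def back_project_mask(seg2d, volume_shape, source, z_det):
    """Binary back projection B of a 2D detector segmentation into a 3D
    volume. Assumed geometry: point source at `source`, detector in the
    plane z = z_det, detector pixels of size 1 aligned with the x/y axes.
    B[x, y, z] = 1 iff the ray from the source through the voxel hits a
    segmented detector pixel."""
    seg2d = np.asarray(seg2d, bool)
    B = np.zeros(volume_shape, dtype=bool)
    sx, sy, sz = source
    for x in range(volume_shape[0]):
        for y in range(volume_shape[1]):
            for z in range(volume_shape[2]):
                if z == sz:
                    continue                      # ray parallel to detector plane
                t = (z_det - sz) / (z - sz)       # ray/detector intersection
                if t <= 0:
                    continue                      # voxel behind the source
                u = int(round(sx + t * (x - sx)))
                v = int(round(sy + t * (y - sy)))
                if 0 <= u < seg2d.shape[0] and 0 <= v < seg2d.shape[1]:
                    B[x, y, z] = seg2d[u, v]
    return B
```

For realistic volume sizes the three nested loops would of course be vectorised; the loop form is kept here for readability.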

| Shape estimation
Based on the vessel segmentation of the preoperative CT scan, we compute the centrelines of all vessels. Then, the centrelines are used to calculate all possible paths C_1, …, C_n connecting any two endpoints of the vessel system. A path C_j = (c^j_1, …, c^j_mj) is made up of a list of m_j consecutive points c^j_i from the centrelines. Then, for each path C_j we calculate the average Euclidean distance d_j to the back projection B of the segmented stentgraft system,

d_j = (1/m_j) Σ_{i=1}^{m_j} D_B(c^j_i),

where D_B(x) ≔ inf_{z: B(z)=1} ‖x − z‖_2 is the distance map of the back projection B. Calculating the distance between the path and the back projection for all paths, the path with the smallest distance is chosen as the one where the stentgraft system is located. Then, we have a rough shape localisation in the space of the preoperative CT scan. Of course, the centreline will most likely not correspond to the exact position of the stentgraft system, as the stentgraft system will behave differently in a vessel due to its physical properties. But to include this would go beyond the scope of this work, so we will stick to the centreline of the vessel as a rough estimate of the actual 3D position of the stentgraft system in the preoperative space.
Let C* = (c*_1, …, c*_m) be the found centreline path with minimal distance to the back projection B. Then, finally, we define the intraoperative shape Ĉ* as the projection of C* onto the back projection, that is, each point c*_i is mapped to its nearest point of B,

ĉ*_i = argmin_{z: B(z)=1} ‖c*_i − z‖_2

(cf. Figures 10 and 11).
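Both the path selection and the projection onto the back projection can be sketched with a single Euclidean distance transform, which also returns, per voxel, the indices of the nearest foreground voxel. This is an illustrative sketch with names of our choosing:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def select_and_project_path(paths, B):
    """Choose the centreline path with the smallest average distance to
    the binary back projection B (d_j = mean of D_B over the path
    points), then snap each point of the winning path onto its nearest
    voxel of B."""
    B = np.asarray(B, bool)
    # D = distance map D_B; `nearest` holds the indices of the closest B voxel
    D, nearest = distance_transform_edt(~B, return_indices=True)
    avg = []
    for C in paths:
        idx = np.round(np.asarray(C, float)).astype(int)
        avg.append(D[idx[:, 0], idx[:, 1], idx[:, 2]].mean())   # d_j
    j = int(np.argmin(avg))
    idx = np.round(np.asarray(paths[j], float)).astype(int)
    proj = np.stack([nearest[k][idx[:, 0], idx[:, 1], idx[:, 2]]
                     for k in range(3)], axis=1)
    return j, proj.astype(float)
```

Computing the distance transform once and reusing its index arrays avoids a nearest-neighbour search per path point.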

| EVALUATION
In order to perform a proof of concept for the two methods presented above and to compare and evaluate the accuracy, an experiment was conducted on a vascular phantom. The stentgraft system with EM sensor and optical fibre was inserted into the phantom and tracking data and 2D fluoroscopic images were acquired at several positions. In order to obtain ground truth data a 3D CBCT image was additionally acquired for each position. The details of the phantom, the experimental setup and procedure, the evaluation measures and the results and discussion for both approaches are described subsequently.

| Experimental setup and procedure
Calibration of the tracking system was performed as demonstrated before. 24 In the angiosuite, the field generator of the EM tracking system was placed vertically using a wooden fixation device to avoid artefacts introduced by the coils of the field generator in the acquired images.
The vessel system of the phantom was filled with water doped with contrast agent (ratio 1:15). Afterwards, the phantom was placed as close as possible to the field generator and fixed to the table. The whole experimental setup is shown in Figure 7.
In concordance with clinical routine, a stiff guide wire was inserted and moved to the healthy aorta proximal of the aneurysm.
Then, the stentgraft system was inserted into the phantom and advanced stepwise to five insertion depths; at each depth, tracking data and fluoroscopic images were acquired.

| Evaluation measures

For the evaluation of the EM sensor, the Euclidean distance e to the ground truth was calculated. For the reconstructed shape points and the located shape points estimated with one of the introduced approaches, the average, maximum and tip errors were calculated as

e_avg = (1/(m+1)) Σ_{i=0}^{m} ‖x_i − x^gt_i,nearest‖_2, e_max = max_{i=0,…,m} ‖x_i − x^gt_i,nearest‖_2 and e_tip = ‖x_t − x^gt_t,nearest‖_2,

where x_0, …, x_m are the estimated shape points, x_t denotes the shape point at the instrument tip and x^gt_i,nearest is the nearest ground truth point to point x_i.

For the image-based approach the error measures e_avg, e_max and e_tip include the error resulting from using the centreline to estimate the shape. To determine the error of the 2D/3D registration alone, we additionally calculate the so-called reprojection distance r. 32 Since we do not have a ground truth shape in the CT, we use the ground truth shape from the CBCT and the marker-based registration between CT space and CBCT space mentioned above to generate a ground truth shape in the CT space. The average and maximum 2D/3D reprojection errors were calculated as

r_avg = (1/(q+1)) Σ_{i=0}^{q} D_B(x^gt_i) and r_max = max_{i=0,…,q} D_B(x^gt_i),

where x^gt_0, …, x^gt_q denote the ground truth points and, as above, D_B(x) = inf_{z: B(z)=1} ‖x − z‖_2 denotes the distance to the stentgraft back projection B.
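The point-to-nearest-point error measures can be sketched compactly; the choice of the first point as the tip is an assumption of this sketch, as the paper does not fix the point ordering:

```python
import numpy as np

def shape_errors(est, gt):
    """Average, maximum and tip error of an estimated shape against a
    densely sampled ground-truth shape, using point-to-nearest-point
    Euclidean distances."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    # distance of every estimated point to its nearest ground-truth point
    d = np.linalg.norm(est[:, None, :] - gt[None, :, :], axis=2).min(axis=1)
    return d.mean(), d.max(), d[0]   # tip assumed to be the first point
```

Note that such one-sided nearest-point distances underestimate the true error when the ground truth is sampled coarsely, which is why a densely sampled ground-truth shape is assumed here.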

| Results and discussion of the FOSS and EM based approach
The results of the five insertion depths are given in Tables 1 and 2.
The movements of the shape and the EM sensor during the image acquisitions were low for all insertion depths. This indicates that the optical fibre and the EM sensor were fixed very well inside the stentgraft system and that the stentgraft system did not move. Thus, the measured shapes and positions should correspond to the ground truth obtained from the CBCT scans.
For the reconstructed shapes we obtained errors e_avg < 1.3 mm, e_max < 4.1 mm and e_tip < 3.2 mm. These measured errors are higher than in previous experiments with the same multicore fibre. [23][24][25] One reason for that might be that the stentgraft system was rotated during insertion and pressure was applied at the insertion sheath. This could have introduced a twist of the fibre and thus resulted in higher errors. Moreover, almost all measured errors decrease from the first to the last insertion depth. This indicates that the twist of the fibre may have decreased over time and that the fibre received less pressure in the stentgraft system.
In addition, the positions of the EM sensor were measured accurately (e < 1.6 mm) and the measured errors were comparable to those measured in previous experiments 23,24 (in each study, average error lower than: 1.00 mm/1.50 mm) and to those reported in [15][16][17] (in each study, average error: 1.20 mm/1.28 mm/1.30 mm).
The shape located with one EM sensor and preoperative data was estimated with errors e avg < 3.2 mm, e max < 5.5 mm and e tip <4.6 mm. These measured errors of the shape located with one EM sensor are higher than the measured errors of the reconstructed shape and the EM sensor positions separately ( Table 1). The reason for this is that both the shape sensing error and the EM sensor error influence the resulting located shape. Despite the twisted shape, the errors of the located shape did not increase much for most cases in comparison to the errors of the reconstructed shape and the shapes were located accurately.
However, the quality of the located shape depends on the measured data. In this evaluation twist was presumably introduced to the multicore fibre and thus the resulting shape had higher errors than those in previous experiments. This is also visible in Figure 8.
The twist caused noticeable shape differences at the end of the femoral leg artery. To overcome this problem and to improve the shape reconstruction, the fibre could be integrated into the medical instrument such that no twist is applied to the fibre. Another option is to use fibre technologies which can measure twist, such as fibres with helically wrapped FBGs. 33 Another limitation of optical fibres is that bending diameters of less than 2 cm cannot be measured and the fibre might break.
Moreover, the measurements of EM sensors are strongly influenced by electric devices and metallic objects. To minimise this error, the C-arm was moved as far away as possible from the phantom while acquiring the positions of the metallic markers with the EM-tracked pointer and during the measurements of the EM sensor and the optical fibre. Also, the accuracy of the shape localisation is influenced by the transformation from the EM space into the CT space, which is determined with markers. If the markers are moved, the accuracy of the EM sensors decreases. In our experimental setting, the phantom was fixed well and only the stentgraft system was inserted and moved. Thus, this error could be minimised in our experiment, but it can be higher in a real interventional setting. EM tracking systems have the limitation that the sensor can only be tracked inside the measurement volume of the field generator.
Accordingly, the region of the human body, where the EM sensor has to be tracked, has to be inside the measurement volume.
Another source of error are deformations of the vessel structure during the procedure. As a result, the vessel segmentation obtained from the preoperative scan can significantly differ from the current, intraoperative vessel structure. Such deformations can be caused, for example, by the inserted instruments. In this study, a vessel deformation occurred in the right iliac artery. Using motion compensation to correct those deformation could further improve the proposed method.
TABLE 2 Errors and length l (in mm) of the FOSS and EM-based and the image-based shape localisation methods

| Results and discussion of the image-based method
A visual result of the 2D/3D registration approach for the fluoroscopic image from 0° AP and the third insertion depth is shown in Figure 9. Despite the artefacts, visible as horizontal green lines, and the stentgraft system, which is missing in the preoperative CT scan and therefore in the DRR, the registration works well. Only in the right external iliac artery, where some motion was caused by the insertion of the stentgraft system, is the registration not accurate. Since the motion is non-rigid, it clearly cannot be compensated by the rigid registration approach.
As the proximal part is irrelevant for navigation (in contrast to the tip) and part of the catheter is fixated in the sheath, the traumatic potential to the vessel is negligible. Thus, this measurement error has no clinical relevance and does not impact clinical applicability.
The results of the 2D/3D registration in terms of the reprojection distance are given for all insertion depths and angulations in Table 3. The average error lies between 0.14 mm ≤ r avg ≤ 0.41 mm whereas the maximum error r max is smaller than 1.6 mm. Note that this error also depends on the accuracy of segmentation and rigid marker registration between the CBCT scan and the CT scan.
Therefore, an average registration error that is less than the voxel size of the CBCT scan seems tolerable. Recent works on manual or semiautomatic 2D/3D registration approaches used in clinical routine for navigated EVAR procedures report an accuracy in the range of 4-5 mm. 34 Figure 10 shows the estimated intraoperative shape in the region of interest; the corresponding errors are given in Table 2. As explained above, it is not surprising that the main errors are observed in the direction from the source to the detector, cf. Figure 10.
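The reprojection distance used above measures, for each target point, how far the registered point lies from the X-ray ray through its true projection. The following sketch (an illustration with invented geometry and point values, not the authors' exact evaluation protocol) also makes the depth insensitivity visible: a registration offset along the source-detector axis barely changes the reprojection distance.

```python
import numpy as np

def point_ray_distance(p, source, detector_pt):
    """Distance of 3D point p to the ray from the X-ray source
    through a point on the detector."""
    d = detector_pt - source
    d = d / np.linalg.norm(d)                   # unit ray direction
    v = p - source
    return np.linalg.norm(v - (v @ d) * d)      # perpendicular component

# hypothetical C-arm geometry: source at y = -500, detector plane at y = +500
source = np.array([0.0, -500.0, 0.0])

def project(p):
    """Perspective projection of p onto the detector plane y = 500."""
    s = (500.0 - source[1]) / (p[1] - source[1])
    return source + s * (p - source)

# ground-truth 3D target points and a small translational registration error;
# the largest offset (0.8 mm) lies along the source-detector (y) direction
targets = np.array([[10.0, 0.0, 20.0], [-15.0, 10.0, 5.0], [0.0, -5.0, -30.0]])
offset = np.array([0.3, 0.8, -0.2])

rpd = [point_ray_distance(p + offset, source, project(p)) for p in targets]
print(f"r_avg = {np.mean(rpd):.2f} mm, r_max = {np.max(rpd):.2f} mm")
```

Because the 0.8 mm depth component is nearly parallel to the rays, the resulting reprojection distances stay well below the full offset magnitude, mirroring why depth errors dominate the residual 3D error while remaining small in this metric.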
Clearly, we performed this experiment under optimal conditions. In real procedures, the situation might be less ideal: for example, the contrast agent might not be well distributed in the vessel volume, and the image might contain multiple complex anatomical structures such as bones, which were not simulated in this phantom study; therefore, the registration quality might decrease. Finally, larger vascular movements are also expected due to the insertion of instruments, which cannot be compensated by the rigid registration.

| DISCUSSION
Comparing the two approaches, we observe slightly different results, as can be seen in Figure 8. However, since the errors of the image-based approach occur mainly in the direction from the source to the detector, they can be detected and controlled relatively easily by the interventionist. The largest deviation of the shape estimated with the image-based approach is observed in the right external iliac artery (compare Figure 11). This is due to the movement of the vessels caused by the insertion of the instrument. Because our algorithm computes a rigid transformation, this motion cannot be compensated. A possible way to improve the accuracy would be to use a non-rigid registration.
An advantage of the image-based approach compared to the FOSS & EM-based localisation is that it does not require any additional measurements or devices. The method uses only the image data, which are acquired in standard EVAR procedures. In contrast, the FOSS & EM-based localisation works equally well for each position in the vessel system, as is visible in Figure 8. Thus, this approach can be used in the whole vessel system. Further advantages of this method are that neither contrast agent nor 2D fluoroscopic imaging is needed during the intervention, and that the FOSS & EM-based approach allows tracking of the whole 3D shape of the instrument, whereas the image-based method can only determine the shape of the tool parts that are inside the acquired fluoroscopic image.
In summary, the FOSS & EM-based approach resulted in low errors, and these results satisfy the accuracy requirements defined in the introduction. The image-based method can also yield accurate 3D shapes, but its accuracy depends on the RAO positioning angle and on the vessel region in which the instrument has to be guided. The smaller the vessel diameters, the more accurate the resulting estimated shapes. Therefore, this method is more suitable for regions with smaller vessels, such as the peripheral arteries. Nevertheless, the image-based approach can be easily integrated into the current workflow of EVAR procedures and can support the navigation of the instruments.

FIGURE 10 Back projection B of the segmented stentgraft system for 40° RAO and the first insertion depth (blue) in the preoperative CT space, the vessel segmentation (red) and the centreline of the path C with minimal distance to the back projection (yellow). The intraoperative shape Ĉ is shown in orange. For comparison, the ground truth from the CBCT scan is shown in green.

FIGURE 11 Intraoperative shapes Ĉ estimated with the image-based approach for the third insertion depth and 0° AP (blue), 20° RAO (orange) and 40° RAO (red), together with the ground truth from the CBCT scan (green).
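The selection of the centreline path with minimal distance to the back projection, as described in the caption of Figure 10, can be sketched as a nearest-neighbour matching between back-projected instrument points and densely sampled candidate centrelines. This is a simplified illustration with toy geometry, not the authors' implementation:

```python
import numpy as np

def mean_min_distance(backprojection, path):
    """Mean distance from each back-projected 3D point to its nearest
    point on a densely sampled centreline path (both of shape (N, 3))."""
    d = np.linalg.norm(backprojection[:, None, :] - path[None, :, :], axis=2)
    return d.min(axis=1).mean()

def closest_path(backprojection, candidate_paths):
    """Index of the candidate centreline path with minimal mean distance."""
    scores = [mean_min_distance(backprojection, p) for p in candidate_paths]
    return int(np.argmin(scores))

# toy example: two straight candidate centrelines, 20 mm apart, and
# back-projected instrument points lying close to the first one
t = np.linspace(0.0, 100.0, 201)[:, None]
path_a = np.hstack([t, np.zeros_like(t), np.zeros_like(t)])
path_b = np.hstack([t, 20.0 + 0.0 * t, np.zeros_like(t)])
points = np.hstack([t[::10], 0.5 * np.ones_like(t[::10]),
                    np.zeros_like(t[::10])])

print(closest_path(points, [path_a, path_b]))  # → 0
```

For real vessel trees the candidate paths would be the centrelines of the segmented vasculature, and a spatial index (e.g. a k-d tree) would replace the brute-force distance matrix for efficiency.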
The major problem areas in endovascular interventions are harmful radiation exposure and navigation in 2D, which makes orientation difficult and can lead to complications, especially in complex pathological anatomies. Both presented localisation solutions offer a potential approach to reduce radiation exposure and to facilitate endovascular procedures by providing a 3D representation of the anatomy and the endovascular devices. Consequently, cannulation and stent placement during procedures are simplified and patient outcomes are potentially improved. However, it has to be kept in mind that the presented setting is experimental and does not fully reflect the clinical application.

| CONCLUSION
The main goal of this work was to compare and relate image-based and tracking-based localisation methods for EVAR procedures. The tracking-based guidance method can be improved by using motion compensation to correct the vessel segmentation used. Further improvements of the image-based approach are also planned: because of vessel movement, a non-rigid registration could improve the 3D shape estimation, and a comparison with an image-based localisation method that uses fluoroscopic images from two or more view angles will be carried out. In addition, both introduced approaches could be evaluated in a more clinical setting. This would also allow a comparison with the current gold-standard navigation, for example in terms of procedure duration, radiation exposure and the amount of contrast agent administered.

ACKNOWLEDGMENTS
This work was funded by the German Federal Ministry of Education and Research (BMBF, Nav EVAR project, funding code: 13GW0228) and by the Ministry of Economic Affairs, Employment, Transport and Technology of Schleswig-Holstein.
Open access funding enabled and organized by Projekt DEAL.