Use of mixed reality for surgery planning: Assessment and development workflow

Meticulous preoperative planning is an important part of any surgery to achieve high levels of precision and avoid complications. Conventional medical 2D images and their corresponding three-dimensional (3D) reconstructions are the main components of an efficient planning system. However, these systems still use flat screens for visualisation of 3D information, thus losing depth information which is crucial for 3D spatial understanding. Currently, cutting-edge mixed reality systems have been shown to be a worthy alternative for providing 3D information to clinicians. In this work, we describe the development details of the different steps in the workflow for the clinical use of mixed reality, including results from a qualitative user evaluation and clinical use-cases in laparoscopic liver surgery and heart surgery. Our findings indicate a very high general acceptance of mixed reality devices with our applications, which were consistently rated high in the device, visualisation and interaction areas of our questionnaire. Furthermore, our clinical use-cases demonstrate that the surgeons perceived the HoloLens to be useful and recommendable to other surgeons, and that it provided a definitive answer at a multi-disciplinary team meeting.


Introduction
Visualisation of medical images of organs in 3D is gaining importance in healthcare, especially for surgical procedures. Understanding the patient-specific 3D anatomy helps surgeons to better prepare for surgery, thus delivering better treatment to patients. Currently, conventional volumetric images such as contrast-enhanced magnetic resonance (MR) and computed tomography (CT) images are used to plan surgery. These high-quality medical images provide excellent discrimination properties, which are necessary to differentiate anatomical structures and understand highly variable morphologies, for instance in congenital heart disease. However, these 3D volumetric images are still viewed as 2D slices, leaving important depth information behind, which may be insufficient for complex cases. By planning surgery using 2D slices, surgeons have to create the 3D model in their minds, with obvious potential flaws [1].
Alternatively, 3D reconstructions or segmented 3D models of the volumetric medical images, based on different tissue categories (e.g. vessels, parenchyma and lesions for the liver; chambers, septa and vessels for the heart), can be used for better visualisation, simplifying the understanding of the patient anatomy [2,3]. Visualising these segmented 3D models in real 3D could provide the surgeons with a better anatomical overview. This could greatly impact the decision-making process and lead to different and better decisions in surgical planning.
Mixed reality, the merging of real and virtual worlds where physical and digital objects co-exist and interact in real time, has already been shown to provide better 3D information to surgeons [1,4-7]. Microsoft® HoloLens™ (hereafter called HoloLens) is the first wearable device classified as a head-mounted display (HMD) that brought mixed reality to the market. HMDs are generally perceived to be of great value for the future of medicine [8]. The HoloLens can map the user's real environment to place virtual 3D models in a fixed position relative to real objects, thus providing the user with a realistic, interactive experience, which is essential for medical applications. This allows the user to visualise virtual 3D models alongside real objects, to interact with them, and to walk around a virtual model as if it were a real object. Also, being a mobile all-in-one device, the HoloLens can be used anywhere, allowing clinicians to bring it into the operating room for a shared 3D understanding of a specific patient's anatomy just before and during a procedure.
In recent years, several works have addressed the use of mixed reality in medicine [1,4,7,9-11]. However, these works have provided neither a detailed processing workflow from medical images to 3D visualisation in mixed reality to help readers replicate the work, nor an evaluation from a wide demographic of participants to get a better global understanding of such devices. Thus, in this paper, we present results from our qualitative user evaluation of a mixed reality application performed with 62 participants (including both medical doctors (MD) and non-MD participants) from 22 different countries at the conference of the European Association for Endoscopic Surgery 2018. We also describe a detailed development and clinical-use workflow from data acquisition to visualisation in mixed reality, and present its pilot real-world use in surgery planning through clinical use-cases in laparoscopic liver resection and congenital heart surgery.

Data acquisition
The datasets used for the laparoscopic liver resection use-case were CT images, with contrast-enhanced portal phase acquisition and contrast agent Iomeron 350 mg I/ml (Iomeron®, Bracco, Italy). For the late portal phase, the CT images were acquired 70 s after the end of the contrast injection. The dataset used for the congenital heart surgery use-case was MR images from a 1.5 T scanner with a 3D Truefisp sequence in the coronal direction, with a slice thickness of 1.33 mm and an in-plane acquisition matrix of 256 × 256 pixels.

Liver anatomy segmentation
The segmentations of the liver, hepatic vein, portal vein and tumours are performed on the same input image to keep the same coordinate system and origin. After the initial data acquisition for laparoscopic liver resection, the first step for segmentation is pre-processing. Here the images are processed to enhance the regions of the liver, blood vessels and tumour. The three main parts of the pre-processing are Remapping, Curvature Flow Smoothing and Resampling [12]. For MR images alone, the pre-processing starts by inverting the image to attribute positive bright intensities to blood vessel structures in the liver. At Remapping, the image intensities are remapped and rescaled to an intensity range that only contains the regions of the liver, blood vessels and tumour. At Curvature Flow Smoothing, the remapped image is de-noised while still preserving the edge details needed for obtaining a good segmentation [13,14]. At Resampling, the image voxels are resampled to isotropic voxels to have equal spacing along all axes of the image. The next step is to segment the liver and the tumours from the pre-processed image, which has enhanced intensities at the regions of interest. Here, the segmentation is performed in a semi-automatic manner using active contours, implemented in ITK-Snap [15]. For both liver and tumour segmentation, the process starts with initial blob placement by the user at the region of interest. Once the blobs are initialised, the contours of the blobs are expanded based on the probability that a voxel belongs to the liver or a tumour. The user stops the contour expansion when the desired boundaries of the liver or tumours are reached. The segmentations of the liver and tumours are done separately so that they do not influence each other.
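The pre-processing chain above can be sketched in a few lines. The snippet below is an illustration only: a Gaussian filter stands in for curvature flow smoothing (not available in plain SciPy), and the window limits `lo`/`hi` are hypothetical values covering the liver, vessel and tumour intensities.

```python
import numpy as np
from scipy import ndimage

def preprocess(image, spacing, lo, hi, iso_spacing=1.0, smooth_sigma=1.0):
    """Sketch of the Remapping -> Smoothing -> Resampling chain."""
    # Remapping: clip to the intensity window of interest, rescale to [0, 1]
    remapped = np.clip(image.astype(float), lo, hi)
    remapped = (remapped - lo) / (hi - lo)
    # Smoothing: a mild Gaussian approximates the edge-preserving
    # curvature flow de-noising used in the actual pipeline
    smoothed = ndimage.gaussian_filter(remapped, sigma=smooth_sigma)
    # Resampling: zoom each axis so the voxels become isotropic
    zoom = [s / iso_spacing for s in spacing]
    return ndimage.zoom(smoothed, zoom, order=1)
```

For example, a volume with 2 mm slice spacing along the last axis is doubled in that direction so every voxel becomes 1 mm isotropic.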
The final step of the liver anatomy segmentation is the hepatic and portal vein segmentation. We start the segmentation process with our earlier developed whole-image multiscale vessel enhancement method [12]. The multiscale vesselness filter is calculated using the eigenanalysis of the Hessian matrix at every voxel of the image. Sorting the eigenvalues obtained from the eigenanalysis by absolute value, |λ1| ≤ |λ2| ≤ |λ3|, the lowest eigenvalue corresponds to the direction along the vessel and the other two eigenvalues correspond to the orthogonal directions along the cross-section of the vessel. Using this information, the single-scale vesselness filter, Eq. (1), is formulated, where S is the root of the sum of the eigenvalues and c adjusts the noise sensitivity of the filter. The multiscale vesselness filter is calculated by taking the additive integration of Eq. (1) over various scales, to detect vessels of varying radii, and multiplying with a scale-dependent weighting term to normalise.
The application of the multiscale vesselness filter, Eq. (2), at every voxel of the image results in the enhancement of every vessel structure in the image that has a radius within the range of sigma scales used. The enhancement of the vessels clearly differentiates the vessel structures from the background. This differentiation between the vessels and their background allows the use of active contour-based segmentation [15] to obtain the best-fitting segmentation of the portal and hepatic veins from user-initiated contours.
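As an illustration, a Frangi-style multiscale vesselness filter of this kind can be sketched with NumPy and SciPy as below. This is not the exact formulation of [12] (the sensitivity terms alpha, beta, c and the scale combination are generic choices here), but it follows the same Hessian eigenanalysis described above.

```python
import numpy as np
from scipy import ndimage

def vesselness_3d(image, sigmas=(1.0, 2.0), alpha=0.5, beta=0.5, c=None):
    """Frangi-style multiscale vesselness for bright tubes on a dark
    background; an illustrative stand-in for the filter of [12]."""
    result = np.zeros(image.shape)
    for s in sigmas:
        # Scale-normalised Hessian via second-order Gaussian derivatives
        H = np.empty(image.shape + (3, 3))
        for i in range(3):
            for j in range(3):
                order = [0, 0, 0]
                order[i] += 1
                order[j] += 1
                H[..., i, j] = s ** 2 * ndimage.gaussian_filter(image, s, order=order)
        # Eigenvalues sorted by absolute value: |l1| <= |l2| <= |l3|
        eig = np.linalg.eigvalsh(H)
        idx = np.argsort(np.abs(eig), axis=-1)
        l1, l2, l3 = np.moveaxis(np.take_along_axis(eig, idx, axis=-1), -1, 0)
        Ra = np.abs(l2) / (np.abs(l3) + 1e-12)                # line vs plate
        Rb = np.abs(l1) / (np.sqrt(np.abs(l2 * l3)) + 1e-12)  # blob-ness
        S = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)              # structureness
        cc = c if c is not None else 0.5 * S.max() + 1e-12
        v = ((1 - np.exp(-Ra ** 2 / (2 * alpha ** 2)))
             * np.exp(-Rb ** 2 / (2 * beta ** 2))
             * (1 - np.exp(-S ** 2 / (2 * cc ** 2))))
        v[(l2 > 0) | (l3 > 0)] = 0.0   # keep only bright-on-dark responses
        result += v                    # additive combination across scales
    return result / len(sigmas)       # crude stand-in for the weighting in [12]
```

A synthetic bright line in a dark volume responds strongly on the tube axis and negligibly in the background, which is exactly the separation the active contours then exploit.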

Heart blood volume segmentation
For heart blood volume segmentation, we utilised some of the same methods that were used for liver segmentation, such as active contour-based segmentation [15]. To preserve fine structures such as papillary muscles in the ventricles and to avoid creating non-anatomical communications between structures, we opted not to use smoothing techniques such as those used for liver segmentation. Instead, speed images were generated based directly on the unprocessed magnitude images from the scanner.
The initialisation of contours or blobs for the active contour-based segmentation is done in ways that allow the iterative algorithm to fill the region of interest. Since the region of interest often changes, each patient being a unique case, there are very few rules for placing the blobs. A good rule of thumb is to use as few blobs as possible to model large uniform structures such as the aorta or an atrium. In areas with fine, tubular or mesh-like structures, however, there is a good chance that the contour will not spread into every corner of the region of interest, so it may be necessary to employ one or several additional methods to complete the segmentation, such as careful alteration of the weighting factors or adjustment of the speed-image thresholds. Placing multiple blobs in a smaller area can help the contours overcome small barriers. Some cases may require that an entire structure is segmented separately from the rest of the heart using completely different settings and weights. Local thresholding with 3D interpolation has also proven valuable. Lastly, the segmentation should be revised manually to ensure that key structures are properly defined and that contours have not leaked through places where noise or image inhomogeneity has made the boundary less clear than usual.
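A threshold-based speed image of the kind ITK-Snap derives from the magnitude image can be sketched as follows. The smooth ramp and its `smoothness` parameter are illustrative choices, not the tool's exact mapping: voxels inside the intensity window get positive speed (the contour grows), voxels outside get negative speed (the contour shrinks).

```python
import numpy as np

def speed_image(magnitude, lo, hi, smoothness=4.0):
    """Illustrative thresholded speed image from a raw magnitude image.
    (lo, hi) is the intensity window of the blood pool; `smoothness`
    controls how soft the transition at the window edge is."""
    centre = 0.5 * (lo + hi)
    half = 0.5 * (hi - lo)
    # Distance from the window centre, scaled so the window edge is 1
    d = np.abs(magnitude.astype(float) - centre) / half
    # Smooth ramp from +1 (well inside the window) to -1 (well outside)
    return np.tanh(smoothness * (1.0 - d))
```

Tightening the window (raising `lo`, lowering `hi`) is the kind of speed-image threshold adjustment mentioned above for stopping leaks through faint boundaries.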

Surface model generation
The segmentations are still volumetric images, with non-zero intensities representing the foreground or segmented region and zero representing the background. These segmentations are now converted to surface models for visualisation as 3D models. One of the most widely used techniques for creating 3D surface models is marching cubes [16]. The method works by extracting iso-surfaces from the segmented structures. We have used marching cubes surface extraction for the tumours and blood vessels of the liver anatomy, and for the heart. Although marching cubes gives accurate iso-surfaces that strictly follow the underlying data, it leads to surfaces with a high number of polygons and vertices, and also creates staircase artefacts. Thus for the liver, a large organ, we have used Poisson surface reconstruction, as it has been shown to have a better accuracy/complexity trade-off than marching cubes surface reconstruction [17]. This also helps in better visualisation of large datasets, such as the liver, in mixed reality using the HoloLens. Additionally, for the heart, the 3D models were then hollowed for visualising the inner structures of the heart.
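The marching cubes step can be reproduced with scikit-image, shown here on a toy spherical "segmentation" rather than the labelled patient volumes described above:

```python
import numpy as np
from skimage import measure

# Build a toy segmentation: a solid sphere in a 32^3 label volume
z, y, x = np.mgrid[:32, :32, :32]
seg = ((z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 10 ** 2).astype(np.uint8)

# Extract the iso-surface at the boundary between background (0) and label (1)
verts, faces, normals, values = measure.marching_cubes(seg, level=0.5)

# `verts` holds vertex coordinates in voxel units; `faces` holds index
# triplets into `verts`. For large organs such as the liver, this dense
# mesh would then be replaced by a lighter reconstruction (e.g. Poisson)
# before being sent to the HoloLens.
```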

Mixed reality device
HoloLens is one of the market-leading mixed reality head-mounted devices. It uses an optical projection of 3D or 2D images onto its see-through lenses. With the use of a single depth camera, multiple environment cameras and an ambient light sensor, the HoloLens can create a spatial map of the user's surroundings. The HoloLens gives users the ability to interact in multiple ways, such as voice commands, gaze tracking (by head movement tracking) and hand gestures. The hand gestures currently available are bloom (opening and closing of the hand, for the "home" gesture), air-tap (tapping of index finger and thumb in the air while closing the other fingers, for interaction similar to a mouse click), and tap-and-hold (which allows features such as object translation or scaling). Also, the spatial sound feature in the HoloLens helps the user experience the sound in relation to how far away and in which direction the model is placed. The device has an Intel 32-bit processor and a Microsoft® Holographic Processing Unit, with 64 GB of flash memory and 2 GB of RAM.
Fig. 1 shows the development scheme that we have used for making our Universal Windows Platform (UWP) app for the HoloLens. The Unity™ game engine is the core development platform in this scheme. It enabled the development of our custom interaction and visualisation options for the app. Unity utilises C# scripting with Visual Studio®. The Application Programming Interface (API) needed for the mixed reality features such as spatial stages, gestures, motion controllers or voice input is built directly into Unity™. Server information such as the IP address and port are the main external information collected for downloading the 3D models associated with a given patient ID. The apps are generated by compiling a separate UWP Visual Studio® solution from Unity™, and then building and deploying this UWP solution to the HoloLens Emulator (a PC window to test holographic apps) or to the HoloLens.
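On the app side, fetching the models reduces to building a download URL from the collected server information. The endpoint layout below is purely hypothetical, as the paper does not specify the server API:

```python
from urllib.parse import urlunsplit

def model_url(ip: str, port: int, patient_id: str) -> str:
    """Build the download URL for the 3D models of a given patient ID.
    The '/models/<id>' path is a hypothetical example, not the actual
    server layout used by the application."""
    return urlunsplit(("http", f"{ip}:{port}", f"/models/{patient_id}", "", ""))

# The app would then fetch the data, e.g. with
# urllib.request.urlopen(model_url("10.0.0.5", 8080, "case-042")).read()
```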

Custom interaction and visualization
Below are the custom-made interaction options that we have made available in our mixed reality application (see Fig. 2). All the interaction options can be activated or deactivated with the air-tap option. These interactive buttons also enlarge when the user's gaze hovers over them.
Scale, Rotate, Move. These interactions allow the user to scale, rotate or move a 3D surface model respectively. The interactive buttons turn green on activating them.
Marker Placement. The marking tool allows the user to mark a point of interest on the model that can be seen by all the users in the shared view. The markers are in the shape of a needle with a sphere at its handle, and these spheres can be in three different colours. The marker can also be selected separately after initial placement for readjustment.
CT/MR Images (Medical Image Viewer). An important interaction that is generally required is to view the CT/MR images associated with the 3D models. On activating this interaction, the associated CT/MR image stack is loaded in a separate 2D viewer, where the user has an option to scroll through the image stack.
Selective Visualisation. When multiple models with different labels are available (in our case, for the liver application), it is sometimes needed to view only one or a few at a time. This interaction allows the user to choose which of the models are visible.

Surgery planning
Apart from the basic interaction and visualisation options that we have made available in the mixed reality application, we have also implemented specific surgery planning tools for liver and heart surgery.
Resection (Liver Resection Planning Tool). For mixed reality liver resection planning, we have reimplemented the liver resection planning using deformable Bezier surfaces [17] that was initially developed for use in the software 3DSlicer [18]. Fig. 3a shows the interactive button, "Resection", to start the resection planning. The process starts with an initial placement of a Bezier surface in the liver model, which can then be deformed to match the required resection. The deformation can be performed by interactive manipulation of the coordinates of the control points that are distributed on a grid. In addition to the deformable plane, a resection margin (safety indicator) is also shown on the plane. The resection margin is computed by assuming the tumour to be spherical and then calculating the point-to-point distance from the centre of the tumour to every point of the Bezier plane. Finally, the areas violating the resection margin (arbitrarily chosen to be 10 mm) are visualised in red on the deformable plane.
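The margin check described above can be sketched directly: with a spherical tumour assumption, a point on the deformable plane violates the margin when it lies closer to the tumour centre than the tumour radius plus the margin. All names below are illustrative, and whether the radius is folded into the margin is an assumption of this sketch.

```python
import numpy as np

def margin_violations(plane_pts, tumour_centre, tumour_radius, margin=10.0):
    """Flag plane points violating the resection margin (units: mm).
    Assumes a spherical tumour, as in the planning tool description."""
    d = np.linalg.norm(np.asarray(plane_pts) - np.asarray(tumour_centre), axis=-1)
    return d < (tumour_radius + margin)  # True -> coloured red on the plane
```

For a 5 mm-radius tumour and a 10 mm margin, a plane point 12 mm from the centre is flagged, while one at 20 mm is not.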
Slice Heart (Heart Slicing Tool). For heart surgery planning, the biggest challenge is to see the inner structures of the generated patient-specific 3D heart surface model. To achieve this, we have implemented a "Slice Heart" interaction button in the application, as shown in Fig. 3b, for literally slicing the heart to see the inner structures. The slicing is performed by placing an invisible cube such that its intersection with the heart model is made completely transparent. Furthermore, additional interaction with a green sphere is provided to move the invisible cube, thereby giving the user the ability to change the slicing angle.
Fig. 4 shows the step-by-step workflow for visualisation of patient-specific models through the mixed reality application for clinical use. The workflow for the clinical use of the HoloLens starts with the acquisition of the medical images. The images can be either MR or CT, according to the request made by the clinicians. These medical images are a stack of 2D image slices forming a 3D volumetric image. The image acquisition step typically takes 15 to 60 min depending on the protocols. Semi-automatic heart or liver anatomy segmentations are then performed on the medical images, in-house, to create labelled 3D volume images of the anatomies of interest. Depending on the image quality and the complexity of the structures to segment, segmentation can take from 30 min to 6 h. Patient-specific 3D surface models are then created in seconds by surface model generation from the segmented volumetric images. Afterwards, the generated surface models are uploaded to a database server, where all patient-specific models can be saved and tagged with a case number. Our HoloLens application has been programmed to access the server to download the patient-specific 3D surface models, according to the clinical use. Both data upload and download take just seconds on high-speed networks, depending on connectivity and data size.
Finally, during the clinical use, multiple HoloLens can be used with our sharing mode that allows multiple clinicians to see and interact with the same model. Here, the clinicians take as much time as needed to make a good clinical decision.

Subjective evaluation
General user experience and the clinical use-case for laparoscopic liver resection were evaluated using a questionnaire designed to evaluate the device, visualisation and interaction. The questionnaire consisted of 10 questions with answers on a six-point Likert scale. The questionnaires were completed right after the demonstration for the general user experience evaluation and after completed surgery for the clinical case. There were no common participants between the questionnaire for the general user experience and the clinical case. A six-point Likert scale was chosen for this study, considering that reliability growth flattens from seven points and higher [19].
The 10 questions in the questionnaire were as follows:
Q1: Rate the comfort level while wearing the HoloLens.
Q2: Rate the 3D depth perception of the model.
Q3: Rate the screen size of the HoloLens.
Q4: Rate the visibility when using the HoloLens.
Q5: Rate the understanding of the morphology with the HoloLens.
Q6: Would you recommend the HoloLens to others in your profession?
Q7: Rate the ability to walk around the model.
Q8: Rate the ability to rotate the model.
Q9: Rate the ability to scale the model.
Q10: Rate the ability to move the model.
The device was evaluated on the level of comfort, 3D depth perception, screen size and obstruction of sight. The visualisation was evaluated on the level of understanding of the anatomy and the ability to move around the model. Interaction with the model was evaluated on the ability to rotate, scale and move the model. The participants also had to rate how likely they were to recommend the device to colleagues in their profession.
SPSS software (IBM Corp. Released 2017. IBM SPSS Statistics for Windows, version 25.0, Armonk, NY, USA: IBM corp) was used for statistical analysis.

General user experience with mixed reality
Participants of the annual conference of the European Association for Endoscopic Surgery (EAES) 2018 took part in the evaluation; as noted in the introduction, 62 participants (both MD and non-MD) from 22 different countries were included. The non-MD participants also included a student undertaking a medical degree. The evaluation process consisted of a live demonstration of the HoloLens with the developed application. Participants were able to put on the device to view and interact with a liver model, and were asked to walk around the model to examine the anatomy and to perform rotation, scaling and movement of the model.
After the demo, participants filled in a questionnaire with 10 questions on a six-point Likert scale, which was designed to evaluate the device, visualisation and interaction. Table 1 shows the results from the questionnaire and Fig. 5 shows box-plots presenting the results of the questionnaire.
The participants considered the device to be very comfortable, with very good 3D depth perception, with a median score of 5 out of 6. Participants did not perceive that the HoloLens obstructed their normal sight, with a median score of 6. All the available interaction options were rated 5. 74% of the participants rated 5 or higher that they would recommend the HoloLens to others in their profession.
Bivariate correlation analysis showed no significant correlation between the questionnaire answers and the age, sex or time since graduation of the participants.
A Mann-Whitney test, comparing results between medical doctors (MD) and others (non-MD), showed a significant (alpha = 0.05) effect of profession in the following categories: comfort (p = 0.034) and rotation of the model (p = 0.013). In the aforementioned categories, medical doctors rated on median 1 point higher (1; 1).
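A group comparison of this kind can be reproduced with SciPy's Mann-Whitney U test. The scores below are fabricated placeholders purely to show the call, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical six-point Likert scores for one question, split by profession
md_scores = [5, 6, 5, 6, 5, 6, 5, 5]
non_md_scores = [4, 5, 4, 4, 5, 3, 4, 4]

# Two-sided Mann-Whitney U test at alpha = 0.05, as in the study
stat, p = mannwhitneyu(md_scores, non_md_scores, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}, significant: {p < 0.05}")
```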

Laparoscopic liver resection use-case
A 56-year-old male presented with locally advanced rectal cancer with metastasis to the liver, with 8 lesions in 7 liver segments. During the multidisciplinary team meeting, it was decided to proceed with laparoscopic parenchyma-sparing liver resection.
Due to the high number of lesions and a spread throughout the liver, the creation of a liver model was requested by the surgeons as a map to plan multiple local resections. A 3D liver model was created from segmented pre-operative CT images and transferred to HoloLens. This patient-specific 3D model in HoloLens was then used for resection planning using our liver resection planning tool. Fig. 6 shows the segmented 3D liver model, and surgeons using HoloLens for surgery planning and later during surgery.
In the sterile setting before starting surgery, the liver model was opened together with the volumetric images in sharing mode. These were placed conveniently by the surgeons in the operating room using hand gestures. The surgery plan was discussed using the patient-specific 3D model. During surgery, the model was used for localisation of the lesions and to guide the initial placement of the laparoscopic ultrasound probe for lesion location confirmation. It was also used as a roadmap for tracking the surgery and the sequence of resections. This way the surgeons could enhance surgical plan tracking and have discussions at the operating table with no need for an additional computer screen or assistance. The surgeons wore the HoloLens for over an hour, until all planned lesions were located. Compared to other available methods of viewing and planning surgery, this was perceived as superior. The surgeons also noted that the lights in the OR did not at all reduce the quality of the model, and that sharing the model between surgeons was very helpful. The surgeons noted that they would recommend this to other colleagues for planning and during surgery.

Congenital heart surgery use-case
A 6-year-old girl, diagnosed at birth with a large apical muscular ventricular septal defect plus a secundum atrial septal defect, was primarily operated with ASD closure and a pulmonary artery band. The patient was discussed several times in the paediatric heart surgical conference based on standard images such as repeat echocardiograms and a contrast-enhanced cardiac CT angiogram, but the findings were inconclusive concerning whether the VSD was too large and too difficult to close with preserved ventricular function. A 3D object was created from the segmented pre-operative CT images, then hollowed and uploaded to the HoloLens for shared use. The hollowing of the 3D model allows visualisation of inner structures when using our heart slicing tool in the mixed reality platform. Fig. 7 shows the 3D models of the patient-specific heart created and then visualised and discussed by surgeons and cardiologists using the HoloLens. This holographic presentation of the patient-specific heart convinced the surgical conference team members that she was operable, aided by the detailed 3D view of the location and size of the defect in relation to adjacent important structures such as coronary arteries and papillary muscles. The operation method and access route, with the incision site and size on the left ventricular surface, could be planned in detail. For the surgeons, the holographic 3D model made it clearer that the operation method and access route were feasible given the morphology of the defect and the surrounding structures, as observed in real 3D visualisation.

Discussion
We present in this paper a detailed workflow and assessment of the use of HoloLens for the planning of liver and heart surgery.
Overall acceptance of the HoloLens for visualising 3D models was very high, with a median score of 5 for most topics, including Q6 in Table 1. Almost all the participants found that the transparent display of the HoloLens did not obstruct their view of the real environment. The main objective of using holographic visualisation platforms such as the HoloLens is to provide true depth perception, giving the user a real 3D experience. The participants in this study gave a median score of 5 for the 3D depth perception of the HoloLens, Q2 in Table 1, which is consistent with another recent study on user experience [6]. Also, we did not find any significant difference based on age or gender in the general acceptance of the HoloLens, coherent with a study on user satisfaction [20].
The present study showed that the advantages of the 3D visualisation of the models, including the depth perception, provided the users with a better understanding of the anatomy, Q5 in Table 1, with a median score of 5. It has already been shown that visualisation through mixed reality significantly decreases the time required for spatial understanding of an organ [5]. Improved anatomical understanding is also supported by the ability of the user to move freely around the models with the use of the HoloLens. In mixed reality, the user can manipulate and walk around the holograms in the real environment, as opposed to the limited interaction provided in other augmented or virtual reality platforms, like Oculus Rift or Google Glass [21]. Though this ability gave a better understanding of the anatomy, one user in our study commented that he was afraid to move around as he might hit some objects in the room, with his focus completely on the model. This would be an even bigger problem with totally immersive virtual reality applications.
The study also showed that the interactions made available in the HoloLens application had high acceptance, with a median score of 5 for the rotation, movement and scaling interactions; Q8, Q9 and Q10 in Table 1. However, we still believe the interactive ability of the HoloLens to be limited by the need for the user to understand and get used to some of the gesturing techniques. This could be solved with the HoloLens 2 in the future, as it has much more intuitive interaction abilities.
The comfort of wearing the HoloLens is important for the general acceptance of such devices for clinical applications. Our assessment of comfort based on the EAES participants is coherent with another study [5], with a median score of 5 for Q1 in Table 1. However, the surgeons from the surgical use-case with liver resection gave a low neutral score of 3 for comfort. This might be mainly because the surgeons wore the HoloLens for an extended period of more than 1 h, compared to an average time of 5 min spent by each of the EAES participants. This might indicate that the current version of the HoloLens is not yet suitable for long-term use, with or without the headband. However, a recent study has pointed out that the HoloLens is safe to use, in terms of cognitive and physiological functions, in prolonged tasks of 90 min [22]. Nevertheless, more surgical use-case studies are required to evaluate this clinically.
The surgeons at our centre perceived the HoloLens application to be superior compared to the current state-of-the-art methods of studying images and planning surgery, similar to other studies [7,9,23,24]. Compared to 2D and 3D screens, the gesture controls of the HoloLens allow surgeons to be unbound by keyboard and mouse for interaction with the objects, and to walk freely around the model. This form of interaction allows the use of the HoloLens under sterile conditions (using sterile gloves), which might be difficult or inconvenient using other methods. Our assessment of the visibility through the HoloLens, Q4 in Table 1, with a median score of 6, has shown that objects visualised in mixed reality do not obstruct the surgeon from following the normal workflow in the OR. It is also important to note that, during the laparoscopic procedure with dimmed lights, the surgeons rated that the light in the OR did not affect the 3D model visualisation through the HoloLens.
Using multiple mixed reality devices and sharing the virtual models between surgeons provides a way to discuss images between colleagues, during meetings as well as during surgeries. Several of the EAES participants commented that a device such as the HoloLens could be highly useful in MDT meetings. It was the use of the HoloLens at the MDT meeting of the congenital heart use-case that convinced the surgeons that the patient was operable. In the future, the application needs to be tested during more MDT meetings, to check whether the treatment plan changes after viewing the 3D model.
In our questionnaire to the EAES participants, we also compared the results of MD and non-MD participants. It was interesting to note here that there was a significant difference on questions related to comfort, rotation interaction, ability to move around the model and recommendation. We believe the major reason behind this difference is that the non-MDs were more interested in evaluating the technology and device from a technical aspect in high detail compared to MDs, who were mainly trying to evaluate based on the clinical importance of such an application. Thus, for the purpose of this paper, we consider that the assessment of MDs is more important to understand the overall acceptance of mixed reality applications for clinical fields.
The two presented applications are for research purposes only, not for clinical use, although they clearly exemplify the wide variety of uses and potential benefits of including mixed reality in clinical routines. Presenting patient-specific anatomy in an easily understandable medium for surgical cases makes it possible to use the model as a roadmap for the surgical plan. Overall, the intraoperative use of the HoloLens has been rated high and recommended by the clinical users.
For liver surgery, the presented workflow enhances the communication between the surgeons and synchronises the planned resection by using a 3D patient-specific model. These models can be utilised during surgery to communicate immediate surgical actions by indicating locations and directions on the hologram. Additionally, a model with multiple lesions can be used to maintain a progressive surgical plan by hiding already completed resections. In the case of laparoscopic surgery, where the liver is deformed due to pneumoperitoneum, the previous surgical plan needs to be adapted. In the future, with an intraoperative CT scanner, images could be acquired intraoperatively and subsequent 3D models made [25]. In this way, up-to-date 3D models could be loaded into the mixed reality platform for more accurate guidance during surgery.
For paediatric heart surgery, the model provides a detailed and complete presentation of the image data, enabling the surgeon to plan where to open the heart without damaging structures such as papillary muscles. Further, by viewing the defect as it will appear during the operation, the suture line for the patch that closes the defect is planned in detail, along muscular ridges inside the heart, finding a safe route that does not damage coronary arteries and valves in the immediate vicinity.

Conclusions
Our study indicates that mixed reality applications have a very high acceptance for diverse clinical applications. The results from the clinical use-cases show that such applications provide a better understanding of the patient-specific anatomy and thus improve surgical planning. The laparoscopic liver resection use-case also shows that they play a major role during surgery as well, where surgeons can revisit the plan. In the methods section, we also present a detailed workflow on how to create, visualise and interact with patient-specific 3D models in the HoloLens. Routine clinical use of such applications in the treatment process would require approval from regulatory authorities. In the future, we will work to test the system further with more clinical applications and also test it with the HoloLens 2. Additionally, we will later also focus on how to use mixed reality for navigation during surgery.