Artificial Intelligence and Transcatheter Interventions for Structural Heart Disease: A glance at the (near) future

With innovations in therapeutic technologies and changes in population demographics, transcatheter interventions for structural heart disease have become the preferred treatment and will keep growing. Yet, thorough clinical selection and an efficient pathway from diagnosis to treatment and follow-up are mandatory. In this review we reflect on how artificial intelligence may help to improve patient selection, pre-procedural planning, procedure execution and follow-up, so as to establish efficient and high-quality health care for an increasing number of patients.


Introduction
The demand for Transcatheter Interventions for Structural Heart Disease (SHD) will increase, given the age-associated rise in valvular heart disease, atrial fibrillation (AF) and stroke, and incessant innovations in technology [1,2]. This calls for a more refined process of data analysis along the entire clinical pathway, from treatment decision and planning to execution and follow-up, to ensure cost-effective and patient-tailored (precision) medicine. Since Artificial Intelligence (AI) enables computers to perform tasks traditionally performed by humans in a faster and potentially more precise fashion, it may be the tool to achieve this goal and is the subject of this paper [3-7].

Concepts and definitions
Artificial Intelligence is a field of computer science enabling computers to perform tasks that traditionally could only be carried out by humans.
Machine Learning is the process by which the computer is taught by supervised, unsupervised or reinforcement learning ( Table 1 ) [3-11]. Deep Learning (DL) is a branch of ML (supervised or unsupervised) based on artificial neural networks (ANNs) composed of "neurons" arranged in multiple layers. Each neuron receives inputs from multiple neurons in the previous layer and transmits outputs to multiple neurons in the next layer until a final ("desired") output is produced ( Fig. 1 ) [3-7]. The number of layers and the structure of their interconnections define the network architecture, and different architectures may be appropriate for different tasks. For instance, convolutional neural networks are particularly suited for segmentation tasks, whereas recurrent neural networks are more suited to processing sequential data (e.g. cine imaging) [12]. The complex interactions between layers allow the computer to learn features (e.g. edge detection) for data processing and interpretation. This is not possible with traditional ML algorithms, where features have to be programmed by humans (feature engineering) [4,5]. DL allows the generation of new features that may remain undetected by humans [13]. This renders DL particularly efficient in processing graphic data and hence in the analysis of (medical) images. The complexity of DL algorithms, however, demands substantial computing power for processing and storage (e.g. graphics processing units (GPUs), cloud computing) [3-7,12,14].

Big Data refers to large amounts of complex, multidimensional data that are hard to process using traditional methods, of which medical data are exemplary. The digitalization of health records requires the integration of information from medical files plus surveillance from remote devices (e.g. wearables) and "omics" concerning millions of patients [3]. Because ML and DL rely on experience (i.e. the more extensive the training dataset, the more accurate the algorithm) and process a large amount of information, big data and AI are interdependent ( Central Illustration ). For instance, weakly supervised ML (minimal human interaction) effectively labelled aortic valve abnormalities from MRI sequences of >14,000 subjects [15].

Table 1. Types of Machine Learning.

Supervised Learning
• The computer learns by being exposed to a dataset in which the input-output relationship is known (e.g. the anatomic structure of interest is annotated for image analysis; the outcome is labelled for prognosis prediction)
• The output is manually annotated in the training dataset (requiring significant human interaction)
• Once trained, the algorithm is able to predict the output from an unlabelled dataset
Example:
• The computer is given an input of CT images in which the aortic annulus has been manually annotated (labelled output, ground truth), until it learns to identify the aortic annulus from an unlabelled set of images [9]
Relevance of previous example:
• Reduced image-processing time
• Reduced inter- and intra-observer variability

Unsupervised Learning
• The computer learns to detect certain patterns within an unlabelled dataset (minimal human interaction)
Example:
• The computer is exposed to a dataset composed of clinical variables concerning patients with aortic stenosis and automatically identifies clusters of phenotypes that resemble each other (i.e. alike vs not alike), which may be associated with different outcomes and demand a specific treatment/clinical approach [10,11]
Relevance of previous example:
• The different disease/patient phenotypes may confer different prognoses and/or responses to treatment and thus have implications for treatment decisions

Reinforcement Learning
• The computer is trained in a trial-and-error fashion, where each outcome is positively or negatively rewarded according to whether it is right or wrong
• Currently seldom applied in medicine
Example:
• The computer selects a given valve size for a given anatomy, is confronted with the outcome (positive vs negative), thereby positively or negatively rewarding the initial treatment decision, and ultimately learns to make the right clinical decision for each specific anatomy
Comment: There is neither clinical experience with nor any study of the use of reinforcement learning in structural heart disease yet
Relevance of previous example:
• Automation and refinement of treatment algorithms

Fig. 1. Schematic of a Deep Learning network. Artificial neurons (top left) receive inputs that are multiplied by their weights and summed to produce an output. Artificial neural networks are composed of multiple neurons arranged in layers: input layers, hidden layers and output layers. Input layers process and transmit inputs from the dataset to the hidden layers. In the hidden layers, each neuron receives inputs from multiple neurons in the previous layer and transmits its output to multiple neurons in the next layer. The depth of the network (e.g. number of hidden layers) and architecture (e.g. arrangement and connections) confer on each network specific characteristics suitable for the desired task. The output layer receives inputs from the hidden layers and produces the final output.
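The forward pass described above and in Fig. 1 — each neuron multiplying its inputs by weights, summing them and applying an activation — can be sketched in a few lines of Python. The weights and inputs below are arbitrary illustrative values, not learned parameters:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_layer, output_neuron):
    """Tiny two-layer network: each hidden neuron sees all inputs,
    and the output neuron sees all hidden activations."""
    hidden = [neuron(x, w, b) for (w, b) in hidden_layer]
    return neuron(hidden, *output_neuron)

# Illustrative (weights, bias) pairs -- in practice these are learned from data.
hidden_layer = [([0.5, -1.2], 0.1), ([1.0, 0.8], -0.3)]
output_neuron = ([1.5, -0.7], 0.2)

y = forward([0.9, 0.4], hidden_layer, output_neuron)
print(round(y, 3))  # a single value between 0 and 1
```

Training such a network consists of adjusting the weights so that the final output matches the labelled ground truth; the depth and connectivity choices are what Fig. 1 calls the architecture.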
Similar to traditional statistics, the dataset used to construct ML and DL algorithms needs to meet criteria of quality and completeness to adequately address the research/clinical question. The data need to be multidimensional, relevant and of high quality. While robotics, NLP and other technologies can be used for the automated extraction of data from patient files, variability between clinical registries hinders data fidelity. Also, ML and particularly DL are prone to overfitting when too many "noisy" or "confounding" variables are present or when the algorithm is too complex for a dataset, resulting in algorithms that are unreliable outside the scope of the training and testing data and, therefore, invalid in other populations [16,17]. Moreover, the incorporation of a large volume of data increases model complexity and processing time, which may render models inefficient, particularly in the case of supervised learning. In such cases, appropriate variable selection is mandatory to retain the most useful information while maintaining model reproducibility [11,18]. Natural Language Processing (NLP) recognizes and discerns the meaning of speech or text, in particular when empowered by DL [7,19]. NLP is useful for identifying relevant data from patient files and scientific publications, potentially reinforcing clinical decision-making [20-22].
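As a minimal illustration of text mining from clinical notes — far simpler than the DL-empowered NLP systems cited above, which learn rather than enumerate their vocabulary — the sketch below scans a note for a fixed list of condition names. The vocabulary, abbreviations and note are invented for the example:

```python
import re

# Hypothetical vocabulary of conditions of interest; a real NLP system
# would learn such mappings rather than rely on a hand-written list.
CONDITIONS = {
    "aortic stenosis": "AS",
    "atrial fibrillation": "AF",
    "mitral regurgitation": "MR",
}

def extract_conditions(note):
    """Return the abbreviations of all listed conditions mentioned in the note."""
    found = set()
    for phrase, abbrev in CONDITIONS.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", note, flags=re.IGNORECASE):
            found.add(abbrev)
    return found

note = ("78-year-old with severe Aortic Stenosis and paroxysmal "
        "atrial fibrillation, referred for TAVI work-up.")
print(sorted(extract_conditions(note)))  # → ['AF', 'AS']
```

Even this rule-based approach hints at how structured variables can be pulled automatically from free-text records; DL-based NLP generalizes the idea to phrasing the rules never anticipated.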
Computer Vision focuses on image/video interpretation and object recognition and is particularly useful for automatically detecting abnormalities (e.g. tumors) or specific anatomical structures based on differences in pixel features, which serves automated quantitative analysis and the generation of 3D models [7,19].
Cognitive computing encompasses NLP, computer vision and ML. It seeks to mimic the human process of decision-making by teaching the computer to acknowledge information from multiple sources (image, sound, text) and to interpret such information in light of previous experience (associative memory) [3,23,24]. By accounting for previous experience in the decision process, cognitive computing goes beyond most ML algorithms, which rely only on logical reasoning. Sengupta et al. used speckle-tracking and standard echocardiographic variables from 94 patients with either restrictive cardiomyopathy or constrictive pericarditis (the diagnosis was based on multimodality imaging, right heart catheterization and, for constriction, confirmation at the time of pericardiectomy) to build an ML-based associative memory classifier that included the most powerful predictors of each diagnosis, with excellent diagnostic accuracy (area under the curve 0.96) [25].

Diagnosis and treatment selection
Unsupervised Learning : Currently, indications for valvular heart intervention are based mainly on echocardiographic findings and symptoms, whereas recommendations for interventions such as left atrial appendage occlusion (LAAO) rely on estimated ischemic (i.e. stroke) and bleeding risks [26,27]. The contemporary guidelines are necessarily restrictive, as they are based on a limited number of variables collected from a limited number of studies and thus pertain to selected populations. Although they are easy to implement and reflect the strongest prognostic factors reported to date, they may omit other factors that remain elusive to conventional statistical analyses. This can be addressed by Cluster Analysis, which refers to unsupervised ML algorithms with the power to elicit hidden patterns/associations within the data and, hence, to identify patient/disease phenotypes with different outcomes ( Table 1 ). Such algorithms can help to improve the appropriateness of care (precision medicine) by identifying phenotypes associated with a benign outcome that do not warrant treatment, those at high risk for whom treatment is indicated, and those with an advanced disease state for whom treatment is futile or even harmful. In other words, they can elicit heterogeneous treatment effects in patients with the same disease (e.g. aortic stenosis). This has been demonstrated by Kwak et al., who found three different aortic stenosis (AS) phenotypes with different outcomes: one with predominant cardiac dysfunction and high cardiovascular mortality, one comprising mostly elderly patients with increased cardiac and non-cardiac mortality, and a third cluster of mainly "healthy" AS patients with a benign prognosis [11].
Similarly, topological data analysis of cross-sectional echocardiographic data identified two pathways of progression from mild to severe AS: one associated with preserved left ventricular (LV) function and little LV hypertrophy, the other with depressed LV function and increased LV mass [10]. Similar findings of different outcomes in various disease phenotypes have been reported in patients with mitral valve prolapse and AF [28-30]. Conceptually, Cluster Analysis may help to refine treatment selection for patients with valve disease (catheter-based treatment vs surgery, mitral valve repair or replacement) and AF (differentiation of stroke risk with specific LAA anatomies).
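The clustering principle underlying such phenotype discovery can be illustrated with a toy k-means run on synthetic data. The two "phenotypes", their feature values and the fixed starting centroids below are fabricated purely for reproducibility of the example and carry no clinical meaning:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a list of equal-length tuples."""
    return tuple(sum(p[d] for p in pts) / len(pts) for d in range(len(pts[0])))

def kmeans(points, init, iters=25):
    """Plain k-means: repeatedly assign each point to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = list(init)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

# Synthetic "patients" described by two made-up features
# (ejection fraction %, mean gradient mmHg) -- values are illustrative only.
rng = random.Random(1)
phenotype_a = [(55 + rng.gauss(0, 3), 20 + rng.gauss(0, 3)) for _ in range(30)]
phenotype_b = [(30 + rng.gauss(0, 3), 50 + rng.gauss(0, 3)) for _ in range(30)]

# Fixed, well-separated starting centroids keep the toy run deterministic.
_, clusters = kmeans(phenotype_a + phenotype_b,
                     init=[phenotype_a[0], phenotype_b[0]])
print([len(c) for c in clusters])  # the two synthetic groups are recovered
```

The algorithm never sees the group labels, yet separates the two populations — the same logic, scaled to many clinical variables, underlies the phenotype clusters described above.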
Supervised Learning : While unsupervised ML is particularly helpful for the identification of patterns of association (phenotype/outcome) within a population, supervised ML is an alternative to conventional statistics for identifying prognostic predictors ( Table 1 ). Unlike Cluster Analysis, where the computer identifies distinct groups of patients based on their characteristics irrespective of the outcomes (i.e. the computer is unaware of any data classification), supervised ML serves to identify relationships between the input data and a specific outcome [4,14]. It typically produces stronger prediction models than conventional statistics, relying on fewer assumptions and being able to learn relationships within the data that escape human comprehension [8,14]. Also, at variance with conventional logistic regression, it allows the identification of not only linear but also non-linear associations within a large multidimensional dataset such as a biologic one [17]. As mentioned above, AI algorithms are not free of bias: they depend upon the quality, relevance and completeness of the data. Supervised ML has been used to predict stroke in AF and outcomes after TAVI, although the accuracy of these algorithms still limits their application in clinical practice [31,32]. It has also been proposed to facilitate early diagnosis of AS, as it can predict significant AS from ECG analysis [33].
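A minimal sketch of the supervised paradigm — here plain logistic regression trained by stochastic gradient descent — may make the input-output learning concrete. The features and labels are simulated and clinically meaningless; real outcome models use far richer data and stronger algorithms:

```python
import math
import random

def sigmoid(z):
    """Logistic function, clamped to avoid overflow in exp()."""
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=500):
    """Fit weights and bias by stochastic gradient descent on the log-loss."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss with respect to the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5

# Synthetic labelled dataset: two made-up features per "patient",
# with label 1 when a fictitious outcome occurred.
rng = random.Random(0)
xs, ys = [], []
for _ in range(200):
    outcome = rng.random() < 0.5
    xs.append((rng.gauss(2 if outcome else -2, 1),
               rng.gauss(1 if outcome else -1, 1)))
    ys.append(1 if outcome else 0)

w, b = train_logistic(xs, ys)
acc = sum(predict(w, b, x) == bool(y) for x, y in zip(xs, ys)) / len(xs)
print(f"training accuracy: {acc:.2f}")
```

The model is trained on labelled examples and then maps unseen feature vectors to an outcome probability — the same input-output contract, at toy scale, as the TAVI and stroke prediction models cited above.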
Application of AI, and particularly DL, to automate image analysis (interpretation and quantification) allows faster image processing with less intra- and inter-observer variability. This is of particular relevance for prevalent diseases whose diagnoses rely heavily on imaging (e.g. valve disease). Most AI algorithms are directed towards the automated segmentation of echocardiographic, CT and MRI images, allowing fast (within seconds) and accurate structure recognition and delineation (e.g. valves, LV borders) or 3D model generation [9,34-43]. This is relevant for procedural planning (see below), by facilitating structure measurements and the generation of 3D models to be used for patient-specific computer modeling and simulation (CM&S), thereby improving treatment planning and execution. It also allows the quantification of volumes, flow and ejection fraction, aiding diagnosis and the determination of disease severity [3,9,36,40,44].
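Once chamber borders have been delineated (manually or by a segmentation algorithm), volume and ejection fraction quantification reduce to simple geometry. The sketch below uses a simplified single-plane method of disks; the disc diameters are invented stand-ins for what an automated segmentation might output, not real measurements:

```python
import math

def lv_volume_disks(diameters_cm, length_cm):
    """Single-plane method of disks (simplified Simpson's rule): the ventricle
    is sliced into n discs of equal height; each disc contributes
    pi * (d/2)^2 * h, and the sum approximates the cavity volume in mL."""
    h = length_cm / len(diameters_cm)
    return sum(math.pi * (d / 2.0) ** 2 * h for d in diameters_cm)

def ejection_fraction(edv, esv):
    """EF (%) = (EDV - ESV) / EDV * 100."""
    return (edv - esv) / edv * 100.0

# Hypothetical disc diameters (cm) from base to apex at end-diastole and
# end-systole -- illustrative values only.
ed_diams = [4.8, 5.0, 5.0, 4.8, 4.4, 3.8, 3.0, 2.0]
es_diams = [3.6, 3.8, 3.8, 3.6, 3.2, 2.7, 2.1, 1.4]

edv = lv_volume_disks(ed_diams, length_cm=8.5)
esv = lv_volume_disks(es_diams, length_cm=7.8)
print(f"EDV {edv:.0f} mL, ESV {esv:.0f} mL, "
      f"EF {ejection_fraction(edv, esv):.0f}%")
```

What DL automates is the hard part — tracing the borders that yield the diameters — after which the quantification itself is deterministic arithmetic, which is why automated pipelines can report volumes and EF within seconds.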
Knackstedt et al. used ML-empowered analysis of echocardiographic images for automatic quantification of ejection fraction and longitudinal strain and reported good agreement with the manually tracked values (processing time ~8 s) [37]. Models for automated assessment of valvular heart disease have also shown encouraging results [45,46]. Playford et al. developed an echocardiography-based AI algorithm to diagnose severe aortic stenosis while overcoming the limitations inherent to LV outflow tract measurements. AI predicted a greater survival difference between severe and non-severe aortic stenosis than traditional measurements, while labeling "severe AS" more frequently than traditional methods [47]. It remains unclear whether this should trigger an earlier intervention or be regarded as co-existing heart disease such as age-related diastolic dysfunction.
Cognitive computing proved useful to differentiate restrictive cardiomyopathy and constrictive pericarditis (vide supra) [25] . Algorithms integrating imaging plus other relevant clinical data could further refine the diagnostic process in a manner that more closely resembles the physician's clinical thinking, where clinical, laboratory, imaging data and more come into play [ 3 , 12 ].

Treatment planning and guidance
ML allows fully- or semi-automated identification and quantification of anatomic structures, as a result of which tasks such as aortic (annulus) measurements for valve selection for TAVI will become less time-consuming and more efficient. Automated perimeter measurement of the aortic and mitral annulus is feasible within seconds and with an error similar to or smaller than that between different operators (i.e. within inter-observer variability) [9,34,41,48]. Fig. 4 shows an example of how the computer is taught to recognize the aortic annulus from CT images (labelled dataset) until it learns to execute this task in an unlabelled dataset.
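Once a segmentation algorithm has delineated an annulus contour, perimeter and area follow from elementary geometry. The sketch below assumes a synthetic elliptical contour (semi-axes 14 and 11 mm, chosen arbitrarily) in place of a real CT-derived segmentation:

```python
import math

def perimeter(points):
    """Sum of edge lengths around a closed polygonal contour."""
    n = len(points)
    return sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))

def area(points):
    """Shoelace formula for the area enclosed by a closed polygonal contour."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

# Hypothetical annulus contour: an ellipse sampled at 64 points,
# standing in for the point set a segmentation network would output.
contour = [(14 * math.cos(t), 11 * math.sin(t))
           for t in (2 * math.pi * k / 64 for k in range(64))]

print(f"perimeter {perimeter(contour):.1f} mm, area {area(contour):.0f} mm^2")
```

Because the measurement step is deterministic once the contour exists, automating the segmentation is what removes both the processing time and the inter-observer variability from annulus sizing.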
ML also enhances the generation of 3D computer models for simulation, streamlining the selection of the device that best fits the individual patient. ML minimizes measurement variability in addition to improving time efficiency, while CM&S predicts valve performance and complications by assessing the device/host interaction. This has been validated for TAVI and shown for transcatheter mitral valve replacement (TMVR), MitraClip procedures and LAA occlusion [49-55]. Of note, it affects not only device selection (size, type) but also procedural technique, such as depth of implantation to prevent conduction abnormalities (TAVI) or neo-LVOT obstruction (TMVR) [56,57]. Enhancement and refinement of such models by ML could promote more widespread implementation in clinical practice, which would be of particular relevance for procedures that are performed less frequently and for which experience is low, such as TMVR, or for lower-volume centers with less experienced operators.
ML has also been applied to enhance fusion imaging for the guidance of transcatheter SHD interventions. Fluoroscopy has limited ability to differentiate soft tissue structures, and fusion with echocardiography or CT provides more anatomic detail. By facilitating the segmentation process and allowing structure recognition, AI enables the superimposition of two different imaging techniques and allows automatic identification of landmark anatomic structures relevant for correct valve/device implantation [4,7,35,58]. Fusion of 3D transesophageal echocardiography with fluoroscopy has been used to guide TAVI and transcatheter LAAO, possibly reducing procedural time and radiation [59,60]. CT-fluoroscopy imaging has proved useful to guide transapical access for complex interventions such as mitral valve-in-valve, PVL occlusion or ventricular septal defect occlusion [61]. Procedural guidance/planning may even be taken a step further by the use of augmented reality, where different imaging modalities (e.g. fluoroscopy and CT) are combined to generate holograms allowing real-time 3-dimensional visualization of cardiac structures and catheters [62].

Prognosis, surveillance and rehabilitation
A premise for establishing a patient-oriented, efficient and cost-effective surveillance program is that patients at high risk of adverse events, who will be the target of more rigorous, time- and resource-consuming follow-up surveillance, are accurately separated from those at lower risk. ML has already been shown to outperform classical statistical methods in predicting outcomes in patients with heart failure, coronary artery disease and congenital heart disease [63-66]. Recently, Hernandez-Suarez et al. used supervised ML to predict in-hospital mortality after TAVI and TMVR with an accuracy surpassing that of previous models [67,68]. Prediction of longer-term prognosis is more challenging, even with AI [31].
The advent of Telemedicine and its empowerment by upcoming mobile devices (e.g. smartphones, smartwatches and other remote sensing devices) will support early discharge protocols by guaranteeing safety via close surveillance, including remote rehabilitation [69]. There is a plethora of platforms and hardware offering remote monitoring, in particular for the detection of arrhythmias, that is potentially helpful for early discharge after SHD interventions [69-72]. Mobile devices also allow real-time transmission of blood pressure and other vital signs [72]. Early signs of pulmonary congestion can be detected by a wearable vest with two sensors, fostering early intervention and avoidance of hospital admission [67]. Accelerometers assessing daily step count can monitor and promote physical activity [72-74]. Setting up a remote surveillance and rehabilitation program is challenging, as it demands the efficient incorporation of the collected data into the electronic health records plus a structured and timely response to the incoming information. AI nevertheless is a way to endorse a patient-driven and patient-tailored health care system. The creation of tele- and virtual-medicine centers, such as Mercy Virtual, which launched a single-hub electronic intensive care unit (ICU) in the USA in 2006, is an illustration thereof. It impacts those in need of care and those who deliver care (continuous training, education, reorganization) as well as the health-care authorities. Proper positioning and application of AI in medicine demands a profound understanding of its principles, strengths and pitfalls.

Challenges in applying Artificial Intelligence in clinical practice
The development, validation and application of ML algorithms share some common ground with conventional statistics [7]. First, as discussed above, ML algorithms are vulnerable to bias, and high-quality data are required for the generation of first-rate algorithms [8,14,75]. This is particularly an issue when very large datasets containing unneeded and/or confounding variables are being used. Bias is especially difficult to ascertain in DL algorithms, whose methodology of data processing may be incomprehensible to the human brain [7,75]. Secondly, the algorithms' reproducibility needs to be assessed, and so far most evidence stems from single-center studies lacking validation in different populations. On the other hand, in some cases different algorithms may be required for different populations [14,75]. For instance, cluster analysis in American AF patients yielded different phenotype clusters than in a Japanese population, probably due to different environmental, cultural (including health system structure) and genetic factors [29,30]. Thirdly, the use of AI in clinical practice may be complicated by the fact that the user may be unaware of how the output was created, and a clinical or pathophysiologic explanation of the associations may be absent. Human intelligence needs artificial intelligence, but the reverse is equally true: human understanding is required to grasp the complexity of AI and hence ensure its appropriate application. Fourthly, the digitalization of health records comes with privacy and data safety issues. A breach in security could jeopardize the confidential information of millions of patients [72-75]. However, the development of more robust algorithms, particularly those concerning less frequent pathologies or interventions (such as TMVR), may require data sharing between institutions, adding complexity to the privacy issue.

Steady incorporation of AI into clinical decision support systems is envisioned by many but may still be a distant reality. It implies that robust algorithms are built and adequately validated, and that they are efficiently embedded in institutional software so that they can be used in real time by healthcare providers without imposing a cumbersome change in daily routine [75]. Currently, studies demonstrating the advantages and cost-effectiveness of AI algorithms over "traditional" clinical practice are lacking. AI algorithms may eventually fail due to faulty design or inappropriate application (population), however robust they have proven in test populations. It is therefore unclear how much human supervision will be required and who will be held responsible in case of error: the healthcare provider or the company responsible for the technology [17,76]. This requires specific regulation as well as standardization of the validation and approval of novel AI algorithms. It also requires continuous training, education and information of all stakeholders in health care to overcome doubts and suspicions, concerns of inappropriate use and fears of the obsolescence of human resources [4]. The purpose of AI is to ease clinicians' workflow, for instance by allowing healthcare practitioners to dedicate more time to tasks that cannot be performed by machines (personal contact, humanization of healthcare), and to grant health-care accessibility to a larger number of patients.
In the light of the above, we feel that human and artificial intelligence are complementary and that final clinical decision-making is the responsibility of the physician who has an understanding of the nature and pathophysiology of the AI-derived associations or predictions.

Conclusion
AI is a promising tool for improving the delivery of care in fields such as SHD interventions. Upon further research and development, it has the potential to enhance Precision Medicine at each step of the clinical pathway, including diagnosis, treatment stratification and device selection, procedure execution and guidance, and post-procedural/discharge surveillance and rehabilitation.

Ethical statement
Hereby, I, Joana Maria Ribeiro, consciously assure that for the manuscript "Artificial Intelligence and Transcatheter Interventions for Structural Heart Disease: A glance at the (near) future " the following is fulfilled: (1) This material is the authors' own original work, which has not been previously published elsewhere.
(2) The paper is not currently being considered for publication elsewhere.
(3) The paper reflects the authors' own research and analysis in a truthful and complete manner.
(4) The paper properly credits the meaningful contributions of co-authors and co-researchers.
(5) The results are appropriately placed in the context of prior and existing research.
(6) All sources used are properly disclosed (correct citation). Literal copying of text is indicated as such by using quotation marks and giving proper reference.
(7) All authors have been personally and actively involved in substantial work leading to the paper, and will take public responsibility for its content.
I agree with the above statements and declare that this submission follows the policies of this journal as outlined in the Guide for Authors and in the Ethical Statement.