Ethical Challenges and Opportunities in Applying Artificial Intelligence to Cardiovascular Medicine

Much anticipation surrounds artificial intelligence's (AI) emergence as a promising tool in health care. It offers the potential to revolutionise clinical practice through assistive and autonomous operation. The high prevalence of cardiac disease globally provides an opportunity for AI technology to increase health care efficiency and improve patient outcomes. This article explores the ethical considerations necessary for safe and acceptable implementation of AI within the health care space. We aim to highlight several challenges, such as data privacy, consent, and sustainability.

Artificial intelligence (AI) has the potential to revolutionise health care. Over the past few years there has been an exponential increase in public interest around AI developments, driven largely by the widespread availability of large language models (LLMs) such as ChatGPT. This has invariably led to both excitement and apprehension regarding a future with AI. 1 Dystopian concerns abound regarding machines controlling human affairs and freedom. 2 Recently, government, business, and technology leaders called for a moratorium on the development of AI to allow public discourse to take place and rigorous regulation to be established, as stated in the Bletchley Declaration. 3 The high prevalence of cardiac disease places an overwhelming burden on health care systems around the world. AI represents an opportunity to reduce this pressure and translate the latest medical advances into clinical practice.
Wearable electrocardiogram (ECG) monitors can analyse heart rhythms and advise patients to seek emergency medical aid. 5,6 AI models are also widely used in echocardiography software to generate automated measurements and reports. 7,10,11 Robotically assisted percutaneous coronary intervention is likely to improve angiography accuracy and reduce occupational risks to the proceduralist, 12 and AI can provide clinical decision support during angiography. 13 The various uses of AI within cardiovascular medicine will increase over the coming decade.
AI's opportunities and challenges are linked. 6 Responsible use of the technology requires system-wide efforts to implement regulation that protects patients without being so restrictive as to stymie innovation and progress. Serious negative consequences could arise if ethical principles and human rights obligations are not prioritised by those who fund, design, regulate, and use AI technologies for health care.
This article describes the ethical considerations for applying AI to cardiovascular medicine, explores potential ethical challenges involved in its implementation, and details future opportunities in this field.

Ethical Considerations for Use of AI in Cardiovascular Medicine
Integrating AI into health care, and specifically cardiovascular medicine, requires the consideration of several ethical principles, discussed below, which have been proposed by the World Health Organisation. 14 Developers, users, and regulators have a responsibility to adhere to ethical principles in the context of development, deployment, and ongoing assessment of AI use in health care. The grounding bioethical principles of beneficence, nonmaleficence, autonomy, and justice are central to the consideration of AI-specific ethical challenges 15 (Table 1). In Figure 1, we summarise competing ethical challenges and opportunities in this area. Inevitably, the routine application of standardised ethical principles may not provide compelling answers to the ethical uncertainties provoked by AI. An effective response to this challenge will require fundamental reflection about what clinicians and society value about health care and what we consider to be its main points of vulnerability. 16 Figures 2 and 3 illustrate clinical case studies that demonstrate ethical conundrums in the practical application of AI to cardiovascular medicine.

Promoting human health and the public interest
The impetus for using AI systems in cardiovascular medicine should be to benefit individual patients and improve population-level outcomes. Most current AI systems focus on assisting and complementing physicians, but these will likely evolve in the coming years to being capable of providing independent or autonomous care. 17 Clear safeguards are required to ensure that AI does not result in physical or mental harm. Care will need to be taken to protect patients when AI systems reveal a diagnosis that cannot be addressed or remedied with contemporary medical care. For example, AI has proved to be useful in ensuring that radiologic findings receive guideline-recommended follow-up. 18 The approach to incidental findings made by AI systems must balance a "duty to warn" with the risk of misdiagnosis and overdiagnosis. Human clinicians are best placed to make this determination and should take responsibility for appropriate disclosure of AI-derived findings to patients. Considerations include the right of the patient to have access to personal health information, the likelihood of clinically relevant disease, the risks of further investigations (radiation and invasive biopsies), and the actionability of a given result. 19 For example, a benign-appearing lung lesion incidentally noted on a computed tomographic coronary angiogram should be documented in the text of the radiology report for full transparency, but its extremely low risk of malignant potential should be explicitly stated to avoid referrer or patient anxiety and confusion.
It is imperative that systems are thoroughly scrutinised by regulatory bodies and deployed into clinical care only when safety, accuracy, and efficacy can be guaranteed. The precedent for clinical adoption of AI systems is the use of assistive software in the field of radiology. Imaging-related AI products currently make up more than 75% of the 691 AI-enabled "devices" that have received US Food and Drug Administration (FDA) approval. 20 For example, BriefCase is a machine learning (ML)-enabled medical device approved for computer-aided triage of suspected intracranial vessel occlusions (limited to the first segment of the middle cerebral artery) on computed tomography and angiography. 21 Once it was commercially available, it became clear that some clinicians were using the program to evaluate for occlusions in vessels that the algorithm had not been trained on or validated to detect. This resulted in a cautionary letter from the FDA to users to clarify its intended use and warn that application outside of those boundaries could result in adverse patient outcomes. 22

By their nature, risk-prediction algorithms in cardiovascular medicine may deal with diseases that take decades to emerge or become clinically relevant. Policy makers will have to strike a balance between safety, through careful postimplementation analysis and rigorous protections for patients, and medical innovation, which will require freedoms granted to AI developers and clinicians to undertake clinical trials in this area. Similarly, assessment of digital health technology in the cardiovascular space must allow for the rapid and continuously iterative nature of these algorithms and devices. This will require determinations to be made on evidence that is less mature compared with approvals for traditional medical devices. 23

The onus is on developers and users of AI to perform quality assurance on algorithms to ensure that they work as intended and are continuously improved as the technology matures. A well publicised case of AI technology failing to meet community expectations of safety involved an algorithm designed to advise on appropriate cancer treatment based on review of patients' medical records. Postimplementation analysis of the program revealed unsafe and incorrect recommendations, which were thought to be related to software training on synthesised information rather than real patient data. 24

AI used in health care should be shared as widely as possible. It should be designed in a way that encourages use regardless of age, sex, gender, income, race, ability, sexual orientation, ethnicity, or location. Provisions will have to be made to cater for individuals with culturally and linguistically diverse backgrounds, particularly indigenous and remote populations who have inconsistent access to health care and information technology.

Human autonomy
Foundational respect for human autonomy necessitates that clinicians maintain control over the extent to which AI systems are used in health care. There have been some recent real-world demonstrations of the safe operation of autonomous algorithms improving efficiency within health care, for example, in diabetic retinopathy management and reduction of delays in treatment for stroke. 25,26 The degree of independent functionality that is appropriate in each context will involve consideration of the risks associated with the AI system and the ability of users to transparently monitor activity. For example, AI tools that perform intermediary tasks, such as optimising workflow through triage, will require less oversight than those performing high-level tasks, such as diagnostics. The latter require caveats to ensure that results are verified by human clinicians. Human discretion acting as a "safety check" will always be required in a clinical decision making algorithm.
As technology advances there may come a time when AI is proven to be more reliable or accurate than human clinicians, as studies in the radiology field have signalled. 27 This situation will require deep reflection about what we value about medicine and the patient-clinician interaction. Will patients be satisfied with a diagnosis and management recommendations delivered by LLM software as the entirety of their medical care? A systematic review of patient attitudes toward clinical AI found overall positive views, although there were concerns about AI-based tools for patient-facing interactions, such as cancer screening or answering emergency calls. Non-patient-facing uses, such as clinical documentation in health records, were viewed more favourably. Patients also generally did not favour AI "replacing" clinicians, instead endorsing its use as a failsafe. 28

The bio-psycho-social model of care argues that, as social beings, people desire a broader consideration of values, culture, and spiritual and emotional needs during clinical interactions. In essence, a social contract must be created between the patient and the human clinician. Currently, AI systems cannot cater to these spheres of unique human psychology. As these technologies mature, the Turing Triage Test, 29 which was devised to determine whether machines have achieved the moral standing of humans, may be useful in gauging whether AI is an acceptable adjunct to the patient-clinician relationship. As AI becomes more widely integrated into all aspects of life, patients' attitudes toward AI in health care will likely change.
The widespread use of AI technology in medicine presents a major epistemologic challenge, as clinicians have traditionally been at the centre of medical knowledge production and dissemination. How will cardiologists adjust to AI potentially displacing human clinicians as the arbiters of predicting patient health outcomes and disease evolution?

Privacy and consent
The fundamental bioethical principle of autonomy implies a duty to protect the privacy and confidentiality of individuals. The development of AI systems requires access to vast amounts of detailed patient data to train algorithms. The volume of data involved typically necessitates cloud-based storage, which leaves sensitive data vulnerable to cyber threats. Rigorous legal frameworks are required for data protection, and these must be respected by system designers. Standard data protection laws typically require sensitive information to be encrypted and stored in a de-identified fashion, with various safeguards to control access to data, duration of storage, and the ability to re-identify.
Patient data should not be accessed by AI systems without informed, valid consent. Medical treatment or essential services should not be restricted if an individual withholds consent. Additional incentives should not be offered by organisations to individuals to induce consent, as seen with insurance companies that offer wearable technology to customers in return for access to health metadata. 30

There is ethical uncertainty regarding the extent of clinicians' responsibility to educate and inform patients about the use of AI in medical care. The technical operation of most algorithms is so complex that it would be difficult for an ordinary clinician to digest, understand, and convey to a patient. This lack of understanding of algorithm functionality has been demonstrated to be worrisome for clinicians and patients alike. 31 As health chatbots and wearable technology proliferate, there are concerns from bioethicists about user agreements and their relationship to informed consent. 32 Most individuals do not read user agreements, and frequent software updates make it hard to follow what terms of service they have agreed to. If information is to be collected from patient-facing AI health applications, such as smartwatches, and then fed into clinical decision making, then there must be a rigorous informed consent process at the outset, so that patients can understand how their health data may be used.
There has been widespread publicity about legal challenges regarding ownership of published works that are accessed in the public domain and used to train AI algorithms. 33 As we move into the medical AI era, regulations should establish personal ownership and control of data, empowering patients to move or share their own health data as they like.

Minimisation of bias
AI algorithms require large amounts of data for training and will base their understanding of a particular problem on the representative data set. Providing limited or inaccurate data to AI algorithms will inevitably result in biased models. For example, cardiovascular risk assessments may be imprecise for minority ethnic groups if they were not adequately represented in the original training data. This can lead to over- or underestimation of risk, inappropriate testing, and adverse cardiovascular outcomes. This is a recognised issue in cardiovascular disease, where traditional risk assessments, for example, the Framingham Risk Score, substantially underestimate risk in Indigenous Australian populations. 34 Societal bias can also manifest in algorithmic outcomes when historical inequities are not considered. For example, an algorithm designed to estimate future health needs was found to be biased against Black patients, because calculations were based primarily on past health care spending and failed to account for historically limited access to care for Black patients. 35

Furthermore, bias may be inherent within AI algorithms. This reflects the developers' biases, which may pertain to patient demographics or specific outcomes. A programmed bias will be amplified as the algorithm learns, resulting in inaccurate and potentially discriminatory results that may then lead to poor patient outcomes. For example, favouring inputs from Western European or North American databases rather than giving equal weight to data collected from Asian or African countries will result in naturally biased algorithms.
For an AI model to be universally applicable, developers and end users must be aware of their biases, so that preventative steps can be taken. Individuals from underrepresented races must be incorporated into the AI model building process to ensure that algorithms are applicable to all populations, including historically excluded ones. From a practical perspective, AI technology should be available in multiple languages, or have built-in translation functionality, to maximise its potential for widespread use in non-English-speaking populations.
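One practical guard against the kind of bias described above is a routine subgroup calibration audit before deployment. The sketch below is a minimal, hypothetical illustration of that check (the column names and data are invented, not drawn from any cited study): it compares each group's mean predicted risk with its observed event rate, where a ratio well below 1 signals the kind of underestimation reported for the Framingham Risk Score in Indigenous Australian populations.

```python
import pandas as pd

def calibration_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Compare mean predicted risk with observed event rate per subgroup.

    Assumes columns: 'group' (e.g., ethnicity), 'predicted_risk' (0-1),
    and 'event' (1 if the outcome occurred, else 0). Illustrative only.
    """
    summary = df.groupby("group").agg(
        n=("event", "size"),
        predicted=("predicted_risk", "mean"),
        observed=("event", "mean"),
    )
    # Ratio < 1 suggests the model underestimates risk for that group.
    summary["calibration_ratio"] = summary["predicted"] / summary["observed"]
    return summary

# Synthetic example: identical predictions, very different observed rates.
df = pd.DataFrame({
    "group": ["A"] * 4 + ["B"] * 4,
    "predicted_risk": [0.10, 0.20, 0.15, 0.05] * 2,
    "event": [0, 1, 0, 0, 1, 1, 0, 1],
})
print(calibration_by_group(df))
```

In this toy data set the model is identically confident for both groups, yet group B experiences three times as many events, which is exactly the pattern a pre-deployment audit should surface.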

Transparency and explainability
A common criticism of AI use in health care is the "black box" nature of advanced technology. This refers to the perceived lack of comprehension among health care providers regarding the development, testing, reliability, and pitfalls of AI systems. This concept is not unique to AI and could also be applied to pharmaceuticals, where novel drugs are developed largely outside the clinical space and undergo initial testing before most doctors are aware of their existence. However, with education about the details relevant to medical practice, clinicians can prescribe and use medications safely and appropriately.
Opacity in AI decision making is one of the most significant challenges for regulatory approval and clinical implementation. 36 Developing explainable AI remains a priority for policy makers seeking to build trust in novel systems. 37 Providing a descriptive account of the functioning of complex algorithms is not achievable for complicated ML models. Instead, contemporary methods of AI explanation rely on "global or local interpretability," which seeks to describe the information and correlations on which a model has relied in coming to its conclusions, but not the causal drivers that produce the correlations themselves. 38

The need for doctors to comprehend algorithms on a detailed level may be unnecessary and have little effect on their ability to safely incorporate AI into health care decision making. However, a certain degree of understanding will be necessary so that patients can provide informed consent to the use of AI in their medical care. As AI technology becomes integrated into clinical practice, there will be a natural increase in algorithmic understanding as health care workers become more familiar with AI and undertake focused educational activities. Furthermore, it is likely that new allied health roles will arise to meet the demand for expertise in health care AI, akin to the roles fulfilled by pharmacists and radiographers today.
The issue of transparency can be addressed by adding "explainable AI" (XAI) features to AI models. XAI programs aim to break down the decision making pathway of the AI model. Local interpretable model-agnostic explanations (LIMEs) are algorithms designed to help humans "trust" AI by explaining complex nonlinear learning models with simple linear models. 39 These are developed post hoc and can often be applied across multiple complex systems rather than being tailored to a particular AI model. This provides consistency across AI platforms. LIMEs also aid in identifying potential biases and other concerns within AI systems. 40

Ultimately, modern health care is structured around a team with the doctor at the head and the patient at the centre. The clinician's ultimate responsibility is to collate expertise from other health care professionals and use clinical judgement to apply this information in the patient's best interest. AI is likely to be additive to this information base, driving improved outcomes.
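To make the LIME approach described above concrete, the following is a minimal sketch of a LIME-style local surrogate rather than the published algorithm: it perturbs a single patient's feature vector, queries an arbitrary black-box model, and fits a proximity-weighted linear model whose coefficients approximate the local influence of each feature. The black-box function, noise scale, and kernel width are stand-ins chosen purely for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_style_explanation(predict_fn, x, n_samples=1000, kernel_width=0.5, noise=0.3):
    """Fit a local linear surrogate around instance x (a LIME-style sketch).

    predict_fn: black-box model returning a risk score per input row.
    Perturbs x with Gaussian noise, weights samples by proximity to x,
    and returns per-feature coefficients of the weighted linear fit.
    """
    rng = np.random.default_rng(0)
    samples = x + rng.normal(scale=noise, size=(n_samples, x.size))
    preds = predict_fn(samples)
    # Proximity kernel: nearby perturbations dominate the local fit.
    distances = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
    return surrogate.coef_  # sign/magnitude = local feature influence

# Hypothetical black box: a nonlinear "risk" of two features.
black_box = lambda X: 1.0 / (1.0 + np.exp(-(X[:, 0] ** 2 - X[:, 1])))
x0 = np.array([1.0, 0.5])
print(lime_style_explanation(black_box, x0))
```

Because the surrogate is refit around each instance, the explanation is deliberately local: coefficients produced for one patient need not generalise to another, which is the trade-off the "global or local interpretability" distinction above captures.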

Responsibility and accountability
Clinicians will be required to understand clear indications for the use of AI systems, as well as the conditions under which they will be able to perform accurately. Users will be expected to operate AI systems for appropriate, prescribed tasks but retain overall responsibility for clinical decision making. This "human warranty" requires regulatory steps to be applied upstream and downstream of the AI system, to establish points of human authority and supervision. Active participation in the regulation process by patients and clinicians will spur public consultation and debate, and ultimately help algorithm development.
When AI technology is deployed in health care settings, robust mechanisms must be developed to provide restitution to patients if errors occur or harm is sustained. The issue of responsibility for algorithmically informed decisions is complex, but a model of "collective responsibility" would allow reasonable apportionment among AI developers, AI-using clinicians, and the health care institution managers who promote AI system use. 41 These parties have a shared duty to assume responsibility for decisions made by algorithms, even if it is not feasible to explain in detail how the AI systems produced their results (see the Transparency and Explainability section above).
AI can circumvent the traditional acquisition of knowledge, allowing nonmedical personnel to be trained to perform basic cardiac investigations (eg, echocardiography, pacing device interrogation) within a short time frame. The ultimate responsibility for accurately performing and reporting cardiac investigations should still lie with the responsible clinician.

Foreseeable Ethical Challenges
The case for AI: The promises and the peril

There is a great deal of anticipation around the deployment of AI technologies in cardiovascular medicine. It is worth considering what lies behind the relentless drive for new or updated technology. Invariably there are commercial motivations for medical device companies to promote the use and sale of their new products, as has been seen with cardiac implantable electronic devices such as leadless pacemakers in recent years. Some argue that our society holds an enduring appeal for technologic solutionism, in which new technology such as AI may be viewed as a remedy for deeper issues, be they structural, societal, or economic. 42

Herein lies the risk of unrealistic estimates of what could be achieved as AI evolves in the field of health care. There is a risk of enthusiastic uptake of unproven products and services that have not been subjected to rigorous evaluations of their safety and efficacy. 43 At present, there remains a paucity of evidence that AI can positively affect patient outcomes compared with current standards of care. 44 Health care AI systems are prone to type 1 errors, ie, false positives. 45 There are several examples of AI models failing to live up to expectations, such as a sepsis prediction tool that did not identify 67% of septic hospitalised patients, 46 a retinopathy screening tool that malfunctioned in 21% of cases because of different lighting conditions, 47 and a medical scribe algorithm that is able to accurately capture only 80% of the data it processes. 48

Attention and resources may be diverted to fashionable AI projects, and away from proven but underfunded medical interventions, particularly in lower- and middle-income countries. In these areas, financial resources and information technology infrastructure may fall well below the level that is required to operate modern AI systems.
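Figures like the 67% miss rate above are statements about a confusion matrix, and the tension between type 1 errors (false positives) and missed cases reduces to simple arithmetic. A minimal sketch with invented counts, chosen only so the numbers echo the sepsis example:

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Basic performance arithmetic for a binary screening algorithm."""
    return {
        "sensitivity": tp / (tp + fn),          # 1 - miss (false negative) rate
        "false_positive_rate": fp / (fp + tn),  # type 1 error rate
        "ppv": tp / (tp + fp),                  # precision of a positive alert
    }

# Illustrative only: a tool that detects 33 of 100 true sepsis cases
# (a 67% miss rate) while alerting on 150 of 900 patients without sepsis.
print(screening_metrics(tp=33, fp=150, fn=67, tn=750))
```

With these hypothetical counts, fewer than 1 in 5 alerts represents true disease, which illustrates why postimplementation analysis, and not headline accuracy, should drive regulatory judgements.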
Recently there have been pilot studies of the use of AI-enhanced echocardiography for rheumatic heart disease (RHD) screening in endemic populations in Western Africa and Northern Australia. 49 The updated World Heart Federation guidelines for echocardiographic diagnosis of RHD encourage consideration of hand-held devices to screen for early signs of valve disease. These devices may be equipped with AI to allow real-time feedback and image optimisation for nonexpert sonographers. This in turn would allow operators from the local community to screen for RHD. 50 The benefits of this model include fostering trust in Western health care systems, improved integration of indigenous health models and health workers, and increased efficacy of screening programs.
Although these projects seek to improve access to medical technology in remote communities, they may divert funds from primordial prevention efforts against rheumatic fever, such as reducing overcrowding and improving sanitation; economic analyses suggest, however, that efforts focused on secondary prevention in RHD are associated with swifter results and lower costs. 51 Regulators and governments must ensure that resources are allocated to the areas most likely to provide improved health outcomes, align with the local community's wishes, and ensure that approved AI technologies are accurate and effective.
All novel pharmaceutical agents and medical devices are subject to stringent regulatory approval processes to protect patient safety. Errors in AI systems, such as miscoding, present risks to patient groups. 52 Unlike clinician error, which may be recognised and remedied before harms are repeated, an encoded error fixed within an algorithm could cause widespread harm to large numbers of patients if the technology is widely used.
As health care systems become increasingly dependent on AI, for example, in the field of wearable and implanted cardiac devices, these technologies may be targeted by malicious actors in cyberattacks. Patients must be protected from nefarious actors who may seek to exploit AI systems, and the data they train on, for material gain. The increasing number of hacking scandals afflicting technology companies means that consumers are becoming more vigilant about their personal data.

Sustainability
The training and development of AI models require vast amounts of computing power, which has led to scrutiny of their potential adverse impact on climate change. The World Health Organisation considers climate change an urgent global health challenge that requires immediate and focused action to rapidly reduce greenhouse gas emissions, to improve health, and to build resilient and environmentally sustainable health systems. 54 Some research suggests that the training of an LLM may generate carbon dioxide emissions equivalent to those of 60 average North Americans in a given calendar year. 53 Extending AI into health care risks potentiating climate-related harms, particularly in low-income countries. Emerging technology should be designed to minimise carbon emissions, potentially by using curated data sets that may lead to increased accuracy and efficiency of algorithms. 55
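The comparison cited above reduces to multiplying training energy by grid carbon intensity. The sketch below shows only that arithmetic; the training energy, grid intensity, and per-person footprint are placeholder assumptions, not measurements, and all three vary enormously by model, region, and accounting method.

```python
def training_emissions_tonnes(energy_mwh: float, kg_co2_per_kwh: float) -> float:
    """Tonnes of CO2 from a training run: energy (MWh) x grid intensity (kgCO2/kWh)."""
    kwh = energy_mwh * 1000
    return kwh * kg_co2_per_kwh / 1000  # kg -> tonnes

# Placeholder assumptions only: a 1300 MWh training run, a grid emitting
# 0.4 kg CO2/kWh, and ~9 t CO2 per person per year.
emissions = training_emissions_tonnes(1300, 0.4)  # = 520 t CO2
print(f"{emissions:.0f} t CO2, roughly {emissions / 9:.0f} person-years")
```

Under these assumed figures a single training run is on the order of tens of person-years of emissions, which is the scale the cited estimate describes; different assumptions shift the answer by an order of magnitude in either direction.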

Data collection and use
The collection and analysis of massive quantities of personal and health data, referred to as "biomedical big data," form the basis for the operation of AI systems within cardiovascular medicine. These various types of data include demographics, medical records, ECGs, radiographic images, and genomics. Health care systems should normalise data sharing by educating patients that high-quality data are foundational for the development and training of AI systems. These technologies, once developed, will be used to benefit patients through improved diagnostics and health care efficiencies. Owing to the sensitive nature of health data, it is imperative that patients consensually share their biometric information, such as their ECGs in the emergency department, which can enable algorithm training. For example, the recently described Queen of Hearts AI bot is designed to detect occlusive myocardial infarction by analysing 12-lead ECGs not meeting ST-segment elevation myocardial infarction criteria. 56

The counterpoint to the broad sharing of health data is the concern regarding the safeguarding of individual privacy. The former UK Information Commissioner pointed out that "the price of innovation does not need to be the erosion of fundamental privacy rights." 57 Individuals may suffer harm if sensitive health data are broadcast or shared without their consent, for example, discrimination based on health status that can affect job opportunities and insurance policies. 58 Patients can be protected against these types of harm through robust antidiscrimination laws, similar to the emerging regulatory frameworks that govern genetic privacy. 59

Questions arise about whether insights derived from AI algorithms should be treated as confidential health data, owned by the patient. Should third parties, such as AI technology funders, be able to use the health data generated by AI systems? Surveys show that the public is uncomfortable with government or for-profit organisations selling health data for profit, 60 but it is broadly accepted that those seeking to use patient data for AI development should show that they are adding value to the health of those same patients who own the data. 61

Epic Systems, an American health care software company, recently announced an agreement with Microsoft to incorporate its GPT-4 LLM into their electronic health record database, which has been used for more than 305 million patients worldwide. 62 This has been marketed as likely to increase productivity and reduce administrative burden, allowing clinicians more time to spend with their patients. This partnership has implications for patient privacy and should be closely observed by regulatory bodies during its rollout to ensure absolute protection of confidential health data.
Meaningful consent can overcome many privacy concerns about data sharing and its use by AI algorithms. However, the scale and complexity of the biomedical big data that are fed into AI systems challenge the concept of meaningful consent. How are patients to understand the potential future uses of their data when algorithms are so complex that their function cannot be readily explained even by their developers? Contemporary consent for the use of personal data in this field must include consideration of future uses that may deviate markedly from the original intended purpose.
Data custodians have a well recognised duty of confidentiality, which acts to protect patient privacy alongside consent mechanisms. De-identification and anonymisation of health data remove personal identifiers from information and use technical safeguards, such as encryption, to reduce the risk of re-identification and potential privacy breaches. Full anonymisation may not be possible with some types of data, such as genomic sequences for inheritable cardiomyopathies or predictive AI, which requires multiple inputs over multiple time points to allow the algorithm to learn. 63
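As one concrete example of the technical safeguards mentioned above, direct identifiers can be replaced with keyed pseudonyms before records leave the custodian for algorithm training. This is a minimal sketch with hypothetical field names, not a complete de-identification pipeline; notably, it does nothing about quasi-identifiers such as rare genomic variants, which is exactly why full anonymisation may be impossible for some data types.

```python
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-data-custodian"  # rotate and store securely

def pseudonymise(record: dict, identifiers=("name", "medicare_no")) -> dict:
    """Replace direct identifiers with a keyed HMAC-SHA256 pseudonym.

    The same input always maps to the same token, so records can still be
    linked across time points without exposing the identifier itself.
    """
    out = dict(record)
    for field in identifiers:
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # stable, non-reversible token
    return out

# Hypothetical ECG metadata record:
record = {"name": "Jane Doe", "medicare_no": "1234 56789 0", "ecg_hr": 88}
print(pseudonymise(record))
```

Keying the hash matters: an unkeyed hash of a small identifier space (eg, Medicare numbers) could be reversed by brute force, whereas re-identification here requires the custodian's secret.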

Patient preferences
As this nascent technology becomes formally embedded into health care, patients' perceptions are likely to evolve. Some may object to the use of AI-based technology in favour of clinician-based care. Commonly cited concerns include mistrust of new technologies and fear of reduced personal interaction. 64 A patient should retain the prerogative to decline AI-based health care if it is practical to do so. Catering to patient choice is more problematic in resource-poor settings: AI tools may not be available despite patients' wishes, or alternatively AI-based care may be the only available option.

Future Opportunities
AI-enhanced coronary artery assessment is already a rapidly evolving field. AI will likely be able to assist in real-time assessment of coronary lesions to definitively determine the presence or absence of acute coronary syndrome beyond our present capabilities with ECG, biomarkers, and clinical assessment. Early studies have already shown promise in AI-enhanced identification of hemodynamically significant lesions. 65

AI offers great promise in meaningfully tackling cardiometabolic health issues. Globally, cardiometabolic disease prevention strategies have been unable to stem the tide of rising obesity, diabetes, and atherosclerotic disease rates. Despite extensive research into the optimal "healthy lifestyle," it is highly challenging for clinicians to provide more than generic advice to patients. In multicultural countries such as Canada, dietary advice that ignores cultural and ethnic preferences is unlikely to produce any meaningful change.
The possibility of providing highly individualised lifestyle prescriptions has been posited since the advent of wearable AI technology. Smartwatches can track heart rate and rhythm, energy expenditure, sleep patterns, and other health metrics in real time. The accuracy of these measurements is continually debated but continues to improve with each iteration. Data collected from AI wearables allow tailored advice regarding diet, sleep, and exercise that aligns with an individual patient's preferences, lifestyle, and abilities. 66 This opportunity for precision medicine before the onset of end-organ disease could vastly improve primary prevention efforts.
The use of AI in clinical decision making and procedures may raise concerns regarding "deskilling" of cardiologists. Conversely, AI could be used to build highly accurate human models with which to train doctors alongside exposure to real patients. High-fidelity simulations could limit the exposure of patients to harm by allowing trainees to build clinical decision making skills in a controlled environment. Training clinicians could also gain increased exposure to rare disease processes and uncommon procedures. There are also favourable attitudes toward AI use in medical school education. 67 The counterargument is that AI technology is unable to replicate the real-world interactions between patient and doctor and may be unable to reproduce the subtleties in disease presentation that experienced clinicians can identify.
Clerical work is a frequent contributor to physician burnout, especially when it occupies 20% or more of total work time. 68,69 Clinician-only documentation has been linked with lower staff satisfaction owing to its inefficiency. 70 AI has great potential to reduce the cognitive and administrative load in health care via automatic dictation, correspondence generation, documentation, and streamlined booking processes. 71 Doctors unburdened from administrative tasks could dedicate more time to clinical duties, leading to improved efficiency and less work outside normal working hours, improving job satisfaction.
There is space for AI in every aspect of cardiovascular medicine. A recent paper from the American Heart Association outlined a framework for the successful implementation of AI into cardiovascular medicine. 72 Given its highly researched and evidence-driven nature, cardiology is continuously evolving at a rapid pace. The application of risk scores in primary care could potentially result in a lower population burden of cardiovascular disease as more targeted preventative care becomes normalised. It is not far-fetched to envision a future in which cardiologists perform complex angioplasty while seated in front of a holographic model of the heart, just as it was unimaginable for a physician in the 19th century to conceive of the modern interventional laboratory. As AI expands its functionality, it is inevitable that our roles in cardiovascular medicine will undergo transformation.

Conclusion
Reliance on AI will become deeply ingrained in our health care systems, making it inseparable from modern clinical practice. Looking ahead, cardiologists will need to undergo training in using AI systems to acquire new skills and perform advanced procedures. Embracing the full spectrum of AI possibilities holds the potential for improved outcomes for both doctors and patients. However, as stewards of health care, it is our responsibility to regulate the use of AI and continually evaluate its value to our patients. We must remain cognisant of the potential pitfalls and constraints of AI, approaching our aspirations with a healthy dose of scepticism regarding the potential for misuse.
We can advocate for AI-specific legislation to ensure that AI developers are motivated to create products that align with modern ethical standards. We can educate the public about the advantages and disadvantages of AI in the prevention and treatment of cardiovascular disease. By deepening our understanding of AI, we can cultivate patient trust in AI-enhanced health care and in our own profession. With the multitude of AI applications in cardiovascular medicine, the future cardiologist is poised to assume a leadership role in modern AI-enhanced health care.

Figure 1. Ethical challenges and opportunities for using artificial intelligence (AI) in cardiovascular medicine.

Table 1. Bioethical principles central to the application of artificial intelligence (AI) to cardiovascular medicine.