AI Quality Standards in Health Care: Rapid Umbrella Review

Background: In recent years, there has been an upwelling of artificial intelligence (AI) studies in the health care literature. During this period, there has been an increasing number of proposed standards to evaluate the quality of health care AI studies.

Objective: This rapid umbrella review examines the use of AI quality standards in a sample of health care AI systematic review articles published over a 36-month period.

Methods: We used a modified version of the Joanna Briggs Institute umbrella review method. Our rapid approach was informed by the practical guide by Tricco and colleagues for conducting rapid reviews. Our search was focused on the MEDLINE database supplemented with Google Scholar. The inclusion criteria were English-language systematic reviews, regardless of review type, with mention of AI and health in the abstract, published during a 36-month period. For the synthesis, we summarized the AI quality standards used and the issues noted in these reviews drawing on a set of published health care AI standards, harmonized the terms used, and offered guidance to improve the quality of future health care AI studies.

Results: We selected 33 review articles published between 2020 and 2022 for our synthesis. The reviews covered a wide range of objectives, topics, settings, designs, and results. Over 60 AI approaches across different domains were identified with varying levels of detail spanning different AI life cycle stages, making comparisons difficult. Health care AI quality standards were applied in only 39% (13/33) of the reviews and in 14% (25/178) of the original studies from the reviews examined, mostly to appraise their methodological or reporting quality. Only a handful mentioned the transparency, explainability, trustworthiness, ethics, and privacy aspects. A total of 23 AI quality standard-related issues were identified in the reviews. There was a recognized need to standardize the planning, conduct, and reporting of health care AI studies and to address their broader societal, ethical, and regulatory implications.

Conclusions: Despite the growing number of AI standards to assess the quality of health care AI studies, they are seldom applied in practice. With the increasing desire to adopt AI in different health topics, domains, and settings, practitioners and researchers must stay abreast of and adapt to the evolving landscape of health care AI quality standards and apply these standards to improve the quality of their AI studies.


Growth of Health Care Artificial Intelligence
In recent years, there has been an upwelling of artificial intelligence (AI)-based studies in the health care literature. While there have been reported benefits, such as improved prediction accuracy and monitoring of diseases [1], health care organizations face potential patient safety, ethical, legal, social, and other risks from the adoption of AI approaches [2,3]. A search of the MEDLINE database for the terms "artificial intelligence" and "health" in the abstracts of articles published in 2022 alone returned >1000 results. Even when narrowed to systematic review articles, the same search returned dozens of results. These articles cover a wide range of AI approaches applied in different health care contexts, including such topics as the application of machine learning (ML) in skin cancer [4], the use of natural language processing (NLP) to identify atrial fibrillation in electronic health records [5], image-based AI in inflammatory bowel disease [6], and predictive modeling of pressure injury in hospitalized patients [7]. The AI studies reported are also at different AI life cycle stages, from model development, validation, and deployment to evaluation [8]. Each of these AI life cycle stages can involve different contexts, questions, designs, measures, and outcomes [9]. With the number of health care AI studies rapidly on the rise, there is a need to evaluate the quality of these studies in different contexts. However, the means to examine the quality of health care AI studies have grown more complex, especially when considering their broader societal and ethical implications [10-13].
Coiera et al [14] described a "replication crisis" in health and biomedical informatics, where issues regarding experimental design and reporting of results impede our ability to replicate existing research. Poor replication raises concerns about the quality of published studies as well as the ability to understand how context could affect replication across settings. The replication issue is prevalent in health care AI studies, as many are single-setting approaches, and we do not know the extent to which they can be translated to other settings or contexts. One solution to the replication issue in AI studies has been the development of a growing number of AI quality standards. Most prominent are the reporting guidelines from the Enhancing the Quality and Transparency of Health Research (EQUATOR) network [15]. Examples include the CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) extension for reporting AI clinical trials [16] and the SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence) extension for reporting AI clinical trial protocols [17]. Beyond the EQUATOR guidelines, there are also the Minimum Information for Medical AI Reporting standard [18] and the Minimum Information About Clinical Artificial Intelligence Modeling checklist [19] on the minimum information needed in published AI studies. These standards mainly focus on the methodological and reporting quality of AI studies to ensure that the published information is rigorous, complete, and transparent.

Need for Health Care AI Standards
However, there is a shortage of standard-driven guidance that spans the entire AI life cycle of design, validation, implementation, and governance. The World Health Organization has published 6 ethical principles to guide the use of AI [20]: (1) protecting human autonomy; (2) promoting human well-being and safety and the public interest; (3) ensuring transparency, explainability, and intelligibility; (4) fostering responsibility and accountability; (5) ensuring inclusiveness and equity; and (6) promoting AI that is responsive and sustainable. In a scoping review, Solanki et al [21] operationalized health care AI ethics through a framework of 6 guidelines that spans the entire AI life cycle of data management, model development, deployment, and monitoring. The National Health Service England has published a best practice guide on getting health care AI right that encompasses a governance framework, addressing data access and protection issues, spreading good innovation, and monitoring use over time [22]. To further promote the quality of health care AI, van de Sande et al [23] have proposed a step-by-step approach with specific AI quality criteria that span the entire AI life cycle from development and implementation to governance.
Despite the aforementioned principles, frameworks, and guidance, there is still widespread variation in the quality of published AI studies in the health care literature. For example, 2 systematic reviews of 152 prediction and 28 diagnosis studies showed poor methodological and reporting quality that made it difficult to replicate, assess, and interpret the study findings [24,25]. The recent shifts beyond study quality to broader ethical, equity, and regulatory issues have also raised additional challenges for AI practitioners and researchers regarding the impact, transparency, trustworthiness, and accountability of the AI studies involved [13,26-28]. Increasingly, we are also seeing reports of various types of AI implementation issues [2]. There is a growing gap between the expected quality and the performance of health care AI that needs to be addressed. We suggest that the overall issue is a lack of awareness and use of principles, frameworks, and guidance in health care AI studies.
This rapid umbrella review addressed these issues by focusing on the principles and frameworks for health care AI design, implementation, and governance. We analyzed and synthesized the use of AI quality standards as reported in a sample of published health care AI systematic review articles. In this paper, AI quality standards are defined as the guidelines, criteria, checklists, statements, guiding principles, or framework components used to evaluate the quality of health care AI studies in different domains and life cycle stages. In this context, quality covers the trustworthiness and the methodological, reporting, and technical aspects of health care AI studies. Domains refer to the disciplines, branches, or areas in which AI can be found or applied, such as computer science, medicine, and robotics. The findings from this review can help address the growing need for AI practitioners and researchers to navigate the increasingly complex landscape of AI quality standards as they plan, conduct, evaluate, and report health care AI studies.

Overview
With the increasing volume of systematic review articles appearing in the health care literature each year, the umbrella review has become a popular and timely approach to synthesizing knowledge from published systematic reviews on a given topic. For this paper, we drew on the umbrella review method in the typology of systematic reviews for synthesizing evidence in health care by MacEntee [29]. In this typology, umbrella reviews synthesize multiple systematic reviews from different sources into a summarized form to address a specific topic. We used a modified version of the Joanna Briggs Institute (JBI) umbrella review method to tailor the process, including developing an umbrella review protocol, applying a rapid approach, and eliminating duplicate original studies [30]. Our rapid approach was informed by the practical guide to conducting rapid reviews by Tricco et al [31] in the areas of database selection, topic refinement, searching, study selection, data extraction, and synthesis. A PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram of our review process is shown in Figure 1 [32]. A PRISMA checklist is provided in Multimedia Appendix 1 [32].

Objective and Questions
The objective of this rapid umbrella review was to examine the use of AI quality standards based on a sample of published health care AI systematic reviews. Specifically, our questions were as follows:
1. What AI quality standards have been applied to evaluate the quality of health care AI studies?
2. What key quality standard-related issues are noted in these reviews?
3. What guidance can be offered to improve the quality of health care AI studies through the incorporation of AI quality standards?

Search Strategy
Our search strategy focused on the MEDLINE database supplemented with Google Scholar. Our search terms consisted of "artificial intelligence" or "AI," "health," and "systematic review" mentioned in the abstract (refer to Multimedia Appendix 2 for the search strings used). We used the .TW search field tag as it searches the title and abstract as well as other fields such as Medical Subject Heading (MeSH) terms and subheadings. Our rationale for limiting the search to MEDLINE with simple terms was to keep the process manageable given the huge volume of health care AI-related literature reviews that have appeared in the last few years, especially on COVID-19. One author conducted the MEDLINE and Google Scholar searches with assistance from an academic librarian. For Google Scholar, we restricted the search to the first 100 citations returned.
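For illustration only, an Ovid MEDLINE-style query consistent with the terms described above might look like the following (this is our own sketch of the grouping and field tags; the strings actually used are those in Multimedia Appendix 2):

```
("artificial intelligence" or "AI").tw.
and health.tw.
and "systematic review".tw.
```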

Inclusion Criteria
We considered all English-language systematic review articles published over the 36-month period from January 1, 2020, to December 31, 2022. The review could be of any type defined in the review typology by MacEntee [29]: systematic review, meta-analysis, narrative review, qualitative review, scoping review, meta-synthesis, realist review, or umbrella review.
The overarching inclusion criteria were AI and health as the focus. To be considered for inclusion, the review articles must meet the following criteria:

Review Article Selection
One author conducted the literature searches and retrieved the citations after eliminating duplicates. The author then screened the citation titles and abstracts against the inclusion and exclusion criteria. Articles that met the inclusion criteria were retrieved for full-text review independently by 2 other authors. Any disagreements in final article selection were resolved through consensus between the 2 authors or with a third author.
The excluded articles and the reasons for their exclusion were logged.

Quality Appraisal
In total, 2 authors independently applied the JBI critical appraisal checklist to appraise the quality of the selected reviews [30]. The checklist has 11 questions that allow for yes, no, unclear, or not applicable as the response. The questions cover the review question, inclusion criteria, search strategy and sources, appraisal criteria used, use of multiple reviewers, methods of minimizing data extraction errors and combining studies, publication bias, and recommendations supported by data. The reviews were ranked as high, medium, or low quality based on their JBI critical appraisal score (≥0.75 was high quality, ≥0.5 and <0.75 was medium quality, and <0.5 was low quality). All low-quality reviews were excluded from the final synthesis.
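The scoring and banding described above can be sketched in a few lines of Python. This is a minimal sketch; it assumes the score is computed as the proportion of applicable checklist questions answered "yes," which is one common way of deriving a fractional JBI score.

```python
def jbi_score(responses):
    """Compute a JBI critical appraisal score as the fraction of
    applicable checklist questions answered 'yes'.

    `responses` is a list of answers to the 11 checklist questions:
    'yes', 'no', 'unclear', or 'na' (not applicable).
    (Assumption: 'na' answers are excluded from the denominator.)
    """
    applicable = [r for r in responses if r != "na"]
    if not applicable:
        return 0.0
    return sum(r == "yes" for r in applicable) / len(applicable)


def quality_tier(score):
    """Map a score to the quality bands used in this review:
    >=0.75 high, >=0.5 and <0.75 medium, <0.5 low."""
    if score >= 0.75:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"


# Hypothetical appraisal of one review: 8 yes, 1 no, 1 unclear, 1 n/a
answers = ["yes"] * 8 + ["no", "unclear", "na"]
score = jbi_score(answers)  # 8 of 10 applicable questions -> 0.8
print(quality_tier(score))  # prints "high"
```

A review scoring 0.36, as in our sample, would fall in the low band and be excluded from the synthesis.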

Data Extraction
One author extracted data from the selected review articles using a predefined template. A second author validated all the articles for correctness and completeness. As this review was focused on AI quality standards, we extracted data relevant to this topic. We created a spreadsheet template with the following data fields to guide data extraction:

1. Author, year, and reference: first author last name, publication year, and reference number
2. URL: the URL where the review article can be found
3. Objective or topic: objective or topic being addressed by the review article
4. Type: type of review reported (eg, systematic review, meta-analysis, or scoping review)
5. Sources: bibliographic databases used to find the primary studies reported in the review article
6. Years: period of the primary studies covered by the review article
7. Studies: total number of primary studies included in the review article
8. Countries: countries where the studies were conducted
9. Settings: study settings reported in the primary studies of the review article
10. Participants: number and types of individuals being studied as reported in the review article
11. AI approaches: the type of AI model, method, algorithm, technique, tool, or intervention described in the review article
12. Life cycle and design: the stage or design of the AI study in the AI life cycle in the primary studies being reported, such as requirements, design, implementation, monitoring, experimental, observational, training-test-validation, or controlled trial
13. Appraisal: quality assessment of the primary studies using predefined criteria (eg, risk of bias)
14. Rating: quality assessment results of the primary studies reported in the review article
15. Measures: performance criteria reported in the review article (eg, mortality, accuracy, and resource use)
16. Analysis: methods used to summarize the primary study results (eg, narrative or quantitative)
17. Results: aggregate findings from the primary studies in the review article
18. Standards: name of the quality standards mentioned in the review article
19. Comments: issues mentioned in the review article relevant to our synthesis

Removing Duplicate AI Studies
We identified all unique AI studies across the selected reviews after eliminating duplicates that appeared in more than one review. We then retrieved the full-text article for every tenth of these unique studies and searched it for mention of AI quality standard-related terms. This was to ensure that all relevant AI quality standards were accounted for even if the reviews did not mention them.
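The deduplication and 10% sampling steps above can be sketched as follows. This is a minimal illustration; the citation key (author, year, title) and the data fields are our own assumptions, not the matching rule actually used.

```python
def unique_studies(reviews):
    """Collect the unique primary studies across a list of reviews,
    deduplicating by a normalized citation key (illustrative
    assumption: lowercased first author, year, and title)."""
    seen, unique = set(), []
    for review in reviews:
        for study in review:
            key = (study["author"].lower(), study["year"], study["title"].lower())
            if key not in seen:
                seen.add(key)
                unique.append(study)
    return unique


def every_tenth(studies):
    """Select every tenth unique study for full-text screening."""
    return studies[9::10]


# Two reviews sharing one primary study (hypothetical records)
reviews = [
    [{"author": "Smith", "year": 2021, "title": "ML for sepsis"}],
    [{"author": "SMITH", "year": 2021, "title": "ML for Sepsis"},  # duplicate
     {"author": "Lee", "year": 2020, "title": "NLP for atrial fibrillation"}],
]
print(len(unique_studies(reviews)))  # prints 2
```

Sampling every tenth study yields roughly the 10% subset described in the methods, for example, studies at positions 10, 20, 30, and so on in the deduplicated list.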

Analysis and Synthesis
Our analysis was based on a set of recent publications on health care AI standards: (1) the AI life cycle step-by-step approach by van de Sande et al [23] with a list of AI quality standards as benchmarks, (2) the reporting guidelines by Shelmerdine et al [15] with specific standards for different AI-based clinical studies, (3) the international standards for evaluating health care AI by Wenzel and Wiegand [26], and (4) the broader requirements for trustworthy health care AI across the entire life cycle by the National Academy of Medicine (NAM) [8] and the European Union Commission (EUC) [34].
As part of the synthesis, we created a conceptual organizing scheme drawing on the published literature on AI domains and approaches to visualize their relationships (via a Euler diagram) [35]. All analyses and syntheses were conducted by one author and then validated by another, with any differences resolved through consensus.
For the analysis, we (1) extracted key characteristics of the selected reviews based on our predefined template; (2) summarized the AI approaches, life cycle stages, and quality standards mentioned in the reviews; (3) extracted any additional AI quality standards mentioned in the 10% sample of unique AI studies from the selected reviews; and (4) identified AI quality standard-related issues reported.
For the synthesis, we (1) mapped the AI approaches to our conceptual organizing scheme, visualized their relationships with the AI domains and health topics found, and described the challenges in harmonizing these terms; (2) established key themes from the AI quality standard issues identified and mapped them to the NAM and EUC frameworks [8,34]; and (3) created a summary list of the AI quality standards found and mapped them to the life cycle phases by van de Sande et al [23].
Drawing on these findings, we proposed a set of guidelines that can enhance the quality of future health care AI studies and described their practice, policy, and research implications. Finally, we identified the limitations of this rapid umbrella review as caveats for readers to consider. As health care, AI, and standards are replete with industry terminology, we used acronyms where the terms are mentioned in the paper and compiled an alphabetical list of acronyms with their spelled-out forms at the end of the paper.

Summary of Included Reviews
We found 69 health care AI systematic review articles published between 2020 and 2022, of which 35 (51%) met the inclusion criteria. The included articles covered different review types, topics, settings, numbers of studies, designs, participants, AI approaches, and performance measures (refer to Multimedia Appendix 3 for the review characteristics). We excluded the remaining 49% (34/69) of the articles because they (1) covered multiple technologies (eg, telehealth), (2) had insufficient detail, (3) were not specific to health care, or (4) were not in English (refer to Multimedia Appendix 4 for the excluded reviews and reasons). The JBI critical appraisal scores of the included reviews ranged from 1.0 to 0.36, with 49% (17/35) rated as high quality, 40% (14/35) rated as medium quality, and 6% (2/35) rated as low quality (Multimedia Appendix 5). The 6% (2/35) of the reviews with low JBI scores were excluded [69,70], leaving a sample of 33 reviews for the final synthesis.

Use of Quality Standards in Health Care AI Studies
To make sense of the different AI approaches mentioned, we used a Euler diagram [71] as a conceptual organizing scheme to visualize their relationships with AI domains and health topics (Figure 2 [36,41-43,47,48,51-54,56-58,60,62,65,67]). The Euler diagram shows that AI broadly comprised approaches in the domains of computer science, data science with and without NLP, and robotics, which could overlap. The main AI approaches were ML and deep learning (DL), with DL being a more advanced form of ML that uses artificial neural networks [33]. The diagram also shows that AI can exist without ML and DL (eg, decision trees and expert systems). There are also outliers in these domains with borderline AI-like approaches mostly intended to enhance human-computer interactions, such as social robotics [42,43], robotic-assisted surgery [47], and exoskeletons [54]. The health topics in our reviews spanned the AI domains, with most falling within data science with or without NLP. This was followed by computer science, mostly for communication or database and other functional support, and robotics for enhanced social interactions that may or may not be AI driven. There were borderline AI approaches such as programmed social robotics [42,43] or AI-enhanced social robots [54]; these approaches focus on AI-enabled social robotic programming and did not use ML or DL. Borderline AI approaches also included virtual reality [60] and wearable sensors [65,66,68].
Regarding AI life cycle stages, we harmonized the different terms used in the original studies by mapping them to the 5 life cycle phases by van de Sande et al [23]: 0 (preparation), I (model development), II (performance assessment), III (clinical testing), and IV (implementation). Most AI studies in the reviews mapped to the first 3 phases. These studies would typically describe the development and performance of the AI approach on a given health topic in a specific domain and setting, including their validation, sometimes done using external data sets [36,38]. A small number of reviews reported AI studies that were at the clinical testing phase [60,61,66,68]. A total of 7 studies were described as being in the implementation phase [66,68]. On the basis of the descriptions provided, few of the AI approaches in these studies had been adopted for routine use in clinical settings [66,68] with quantifiable improvements in health outcomes (refer to Multimedia Appendix 6 for details).
Regarding AI quality standards, only 39% (13/33) of the reviews applied specific AI quality standards in their results [37-40,45,46,50,54,58,59,61,63,66], and 12% (4/33) mentioned the need for standards [55,63,68]. The standards applied included the Prediction Model Risk of Bias Assessment Tool [37,38,58,59], the Newcastle-Ottawa Scale [39,50], the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies checklist [38,59], the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis-Machine Learning extension [50], levels of evidence [61], the Critical Appraisal Skills Program Clinical Prediction Rule Checklist [40], the Mixed Methods Appraisal Tool [66], and CONSORT-AI [54]. Another review applied 7 design justice principles as the criteria to appraise the quality of its AI studies [68]. Broader-level standards were also mentioned, including the European Union ethical guidelines for trustworthy AI [44]; international AI standards from the International Organization for Standardization (ISO); and AI policy guidelines from the United States, Russia, and China [46] (refer to Multimedia Appendix 6 for details). We updated the Euler diagram (Figure 2 [36,41-43,47,48,51-54,56-58,60,62,65,67]) to show in red the health topics in reviews with no mention of specific AI standards. A summary of the harmonized AI topics, approaches, domains, life cycle phases by van de Sande et al [23], and quality standards derived from our 33 reviews and the 10% sample of unique studies within them is shown in Table 1. There were also other AI quality standards not mentioned in the reviews or their unique studies. These included guidelines such as the do no harm road map, the Factor Analysis of Information Risk, HIPAA, and the FDA regulatory framework mentioned by van de Sande et al [23]; AI clinical study reporting guidelines such as the Clinical Artificial Intelligence Modeling and Minimum Information About Clinical Artificial Intelligence Modeling checklists mentioned by Shelmerdine et al [15]; and international technical AI standards such as ISO and International Electrotechnical Commission 22989, 23053, 23894, 24027, 24028, 24029, and 24030 mentioned by Wenzel and Wiegand [26].
With these additional findings, we updated the original table of AI standards in the study by van de Sande et al [23] showing crucial steps and key documents by life cycle phase (Table 2).

Quality Standard-Related Issues
We extracted a set of AI quality standard-related issues from the 33 reviews and assigned themes based on keywords used in the reviews (Multimedia Appendix 8). In total, we identified 23 issues, with the most frequently mentioned being clinical utility and economic benefits (n=10); ethics (n=10); benchmarks for data, models, and performance (n=9); privacy, security, data protection, and access (n=8); and federated learning and integration (n=8). Table 3 shows the quality standard issues by theme from the 33 reviews. To provide a framing and means of conceptualizing the quality-related issues, we performed a high-level mapping of the issues to the AI requirements proposed by the NAM [8] and the EUC [34].
The mapping was done by 2 of the authors, with the remaining authors validating the results. The final mapping was the result of consensus across the authors (Table 4). Examples of the review-level quality issues and their assigned key themes include the following:

- Need to integrate different variables and include multilevel data preprocessing to reduce dimensionality (data integration and preprocessing)
- Zidaru et al [68]: need design justice principles to engage the public and ensure a fair and equitable AI system (design justice, equity, and fairness)
- Kaelin et al [52]: select the best AI algorithms and outputs to customize care for specific individuals (personalized care and targeted interventions)
- Welch et al [65]: need well-designed studies and quality data to conduct AI studies (quality data and studies)
- Buchanan et al [43]: need to balance human caring needs with AI advances and understand the societal impact of AI interventions (social justice and social implications)
- Choudhury et al [45]: need governance of the collection, storage, use, and transfer of data, with accountability and transparency in the process (governance)
- Vélez-Guerrero et al [64]: need adaptable and flexible AI systems that can improve over time (self-adaptability)

We found that all 23 quality standard issues were covered in the AI frameworks by the NAM and the EUC. Both frameworks have a detailed set of guidelines and questions to be considered at different life cycle stages of health care AI studies. While there was consistency in the mapping of the AI issues to the NAM and EUC frameworks, there were some differences between them. For the NAM, the focus was on key aspects of AI model development, infrastructure and governance, and implementation tasks. For the EUC, the emphasis was on achieving trustworthiness by addressing all 7 interconnected requirements of accountability; human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, nondiscrimination, and fairness; and societal and environmental well-being. The quality standard issues were based on our analysis of the review articles, and our mapping was at times more granular than the issues from the NAM and EUC frameworks. However, our results showed that the 2 frameworks do provide sufficient terminology for quality standard-related issues. By embracing these guidelines, one can enhance the buy-in and adoption of AI interventions in the health care system.

Principal Findings
Overall, we found that, despite the growing number of health care AI quality standards in the literature, they are seldom applied in practice, as shown in our sample of recently published systematic reviews of health care AI studies. In the reviews that mentioned AI quality standards, most standards were used to ensure the methodological and reporting quality of the AI studies involved. At the same time, the reviews identified many AI quality standard-related issues, including broader ones such as ethics, regulations, transparency, interoperability, safety, and governance. Examples of broader standards mentioned in a handful of reviews or original studies are ISO-12207, the Unified Medical Language System, HIPAA, the FDA Software as a Medical Device framework, the World Health Organization AI governance guidance, and the American Medical Association augmented intelligence recommendations. These findings reflect the evolving nature of health care AI, which has not yet reached maturity or been widely adopted. There is a need to apply appropriate AI quality standards to demonstrate the transparency, robustness, and benefits of these AI approaches in different AI domains and health topics while protecting the privacy, safety, and rights of individuals and society from the potential unintended consequences of such innovations.
Another contribution of our study was a conceptual reframing toward a systems-based perspective to harmonize health care AI. We did not look at AI studies solely as individual entities but rather as part of a bigger system that includes clinical, organizational, and societal aspects. Our findings complement those of recent publications, such as an FDA paper that advocates helping people understand the broader system of AI in health care, including across different clinical settings [72]. Moving forward, we advocate for AI research that examines how AI approaches mature over time. AI approaches evolve through different phases of maturity as they move from development to validation to implementation. Each phase of maturity has different requirements [23] that must be assessed as part of evaluating AI approaches across domains as the number of health care applications rapidly increases [73]. However, comparing AI life cycle maturity across studies was challenging because of the variety of life cycle terms used across the reviews. To address this issue, we mapped the life cycle terms from the original studies to the system life cycle phases by van de Sande et al [23] as a common terminology for AI life cycle stages. A significant finding from the mapping was that most AI studies in our selected reviews were still at early stages of maturity (ie, model preparation, development, or validation), with very few progressing to later phases such as clinical testing and implementation. If AI research in health systems is to evolve, we need to move past single-case studies with external data validation to studies that achieve higher levels of life cycle maturity, such as clinical testing and implementation across a variety of routine health care settings (eg, hospitals, clinics, patient homes, and other community settings).
Our findings also highlighted the many AI approaches and quality standards used across domains in health care AI studies. To better understand their relationships and the overall construct of each approach, we applied a conceptual organizing scheme for harmonized health care AI that characterizes AI studies according to AI domains, approaches, health topics, life cycle phases, and quality standards. The health care AI landscape is complex: the Euler diagram shows multiple AI approaches in one or more AI domains for a given health topic. These domains can overlap, and the AI approaches can be driven by ML, DL, or other types (eg, decision trees and robotics). This complexity is expected to increase as the number of AI approaches and the range of applications across health topics and settings grow over time. For meaningful comparison, we need a harmonized scheme such as the one described in this paper to make sense of the multitude of AI terminology for the types of approaches reported in the health care AI literature. The systems-based perspective in this review provides a means of harmonizing AI life cycles and incorporating quality standards through different maturity stages, which could help advance health care AI research by scaling up to clinical validation and implementation in routine practice. Furthermore, we need to move toward explainable AI approaches in which applications are based on clinical models if we are to reach the later stages of AI maturity in health care (eg, clinical validation and implementation) [74].

Proposed Guidance
To improve the quality of future health care AI studies, we urge AI practitioners and researchers to draw on the published health care AI quality standard literature, such as the standards identified in this review. The types of quality standards to be considered should cover the trustworthiness, methodological, reporting, and technical aspects. Examples include the NAM and EUC AI frameworks that address trustworthiness and the EQUATOR network with its catalog of methodological and reporting guidelines identified in this review. Also included are the Minimum Information for Medical AI Reporting guidelines and technical ISO standards (eg, robotics) that are not in the EQUATOR catalog. The components that should be standardized are the AI ethics, approaches, life cycle stages, and performance measures used in AI studies to facilitate their meaningful comparison and aggregation. The technical standards should address such key design features as data, interoperability, and robotics. Given the complexities of the different AI approaches involved, rather than focusing on the underlying model or algorithm design, one should compare their actual performance based on life cycle stages (eg, degree of accuracy in model development or assessment vs outcome improvement in implementation). The summary list of the AI quality standards described in this paper is provided in Multimedia Appendix 9 for those wishing to apply them in future studies.
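The principle of comparing studies on stage-appropriate performance evidence, rather than on model internals, can be sketched as a simple lookup. The measure names below are illustrative assumptions chosen to reflect the examples in the text (accuracy at development vs outcome improvement at implementation), not a prescribed set.

```python
# Illustrative sketch: the performance evidence that makes studies
# comparable depends on their life cycle stage, not on model design.
# Measure names are assumptions for illustration.
STAGE_MEASURES = {
    "development": ["accuracy", "AUROC", "calibration"],
    "validation": ["external accuracy", "sensitivity", "specificity"],
    "clinical testing": ["diagnostic yield", "clinician agreement"],
    "implementation": ["clinical outcome improvement", "cost-effectiveness"],
}

def comparable_measures(stage: str) -> list:
    """Return the stage-appropriate measures on which studies can be compared."""
    return STAGE_MEASURES.get(stage.strip().lower(), [])
```

Under such a scheme, two studies are only compared head to head when they report measures from the same life cycle stage, which avoids conflating early model metrics with downstream clinical impact.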

Implications
Our review has practice, policy, and research implications. For practice, better application of health care AI quality standards could help AI practitioners and researchers become more confident regarding the rigor and transparency of their health care AI studies. Developers adhering to standards may help make AI approaches less of a black box and reduce unintended consequences such as systemic bias or threats to patient safety. AI standards may also help health care providers better understand, trust, and apply study findings in relevant clinical settings. For policy, these standards can provide the necessary guidance to address the broader impacts of health care AI, such as the issues of data governance, privacy, patient safety, and ethics. For research, AI quality standards can help advance the field by improving the rigor, reproducibility, and transparency in the planning, design, conduct, reporting, and appraisal of health care AI studies. Standardization would also allow for the meaningful comparison and aggregation of different health care AI studies to expand the evidence base in terms of their performance impacts, such as cost-effectiveness and clinical outcomes.

Limitations
Despite our best efforts, this umbrella review has limitations. First, we only searched for peer-reviewed English articles with "health" and "AI" as the keywords in MEDLINE and Google Scholar covering a 36-month period. We may have missed relevant or important reviews that did not meet our inclusion criteria. Second, some of the AI quality standards were only published in the last few years, at approximately the same time as the AI reviews were conducted. As such, the AI review and study authors may have been unaware of these standards or the need to apply them. Third, the AI standard landscape is still evolving; thus, there are likely standards that we missed in this review (eg, Digital Imaging and Communications in Medicine in pattern recognition with convolutional neural networks [75]). Fourth, the broader socioethical guidelines are still in the early stages of being refined, operationalized, and adopted. They may not yet be in a form that can be easily applied compared with the more established methodological and reporting standards with explicit checklists and criteria. Fifth, our literature review did not include any literature reviews on LLMs [76], and we know there are reviews of LLMs published in 2023 and beyond. Nevertheless, our categorization of NLP could coincide with NLP and DL in our Euler diagram; furthermore, LLMs could be used in health care via approved chatbot applications at an early life cycle phase, for example, by first using decision trees to prototype the chatbot as clinical decision support [77] before advancing it in the mature phase toward a more robust LLM-based AI solution. Finally, only one author was involved in screening citation titles and abstracts (although 2 were later involved in the full-text review of all articles that were screened in), and we may have erroneously excluded an article on the basis of its title and abstract. Despite these limitations, this umbrella review provided a snapshot of the current state of knowledge and the gaps that exist with respect to the use of and need for AI quality standards in health care AI studies.

Conclusions
Despite the growing number of AI standards to assess the quality of health care AI studies, they are seldom applied in practice. With the recent unveiling of broader ethical guidelines such as those of the NAM and EUC, more transparency and guidance in health care AI use are needed. The key contribution of this review was the harmonization of different AI quality standards, which could help practitioners, developers, and users understand the relationships among AI domains, approaches, life cycles, and standards. Specifically, we advocate for common terminology on AI life cycles to enable the comparison of AI maturity in and across studies.

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram based on the work by Page et al [32]. JBI: Joanna Briggs Institute.

Figure 2. Euler diagram showing the overlapping artificial intelligence (AI) domains and health topics. Health topics in red are from reviews with no mention of specific AI quality standards. Health topics in blue are from reviews with mention of AI quality standards. DL: deep learning; mHealth: mobile health.

Table 1. Summary of artificial intelligence (AI) approaches, domains, life cycle phases, and quality standards in the reviews.

Table 2. Summary of health care standards in the reviews mapped to the life cycle phases by van de Sande et al [23].

Table 3. Summary of quality standard–related issues in the reviews.

Table 4. Quality standard–related issues by theme mapped to the National Academy of Medicine (NAM) and European Union Commission (EUC) requirements.