Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in Artificial Intelligence/Machine Learning: A Modified Delphi Approach

Background: Artificial intelligence (AI) and machine learning (ML) technology design and development continues to be rapid, despite major limitations in its current form as a practice and discipline for addressing all socio-humanitarian issues and complexities. From these limitations emerges an imperative to strengthen AI/ML literacy in underserved communities and to build a more diverse AI/ML design and development workforce engaged in health research. Objective: AI/ML has the potential to account for and assess a variety of factors that contribute to health and disease and to improve prevention, diagnosis, and therapy. Here, we describe recent activities within the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Ethics and Equity Workgroup (EEWG) that led to the development of deliverables that will help to put ethics and fairness at the forefront of AI/ML.


In addition to the principles and glossary, the EEWG developed a concept relationship diagram that describes the logical flow of and relationship between the definitional concepts. Lastly, the interview guide provides questions that can be used or adapted to garner stakeholder and community perspectives on the principles and glossary.
Conclusions: Ongoing engagement is needed around our principles and glossary to identify and/or predict potential limitations in their use(s) in AI/ML research settings, especially for institutions with limited resources. This requires time, careful consideration, and honest discussion around what makes an engagement incentive meaningful enough to support and sustain full engagement. By slowing down to meet historically and presently under-resourced institutions and communities where they are, and where they are able to engage and compete, there is higher potential to achieve the needed diversity, ethics, and equity in AI/ML implementation in health research.


Introduction
Artificial intelligence (AI) and machine learning (ML) technology design and development continues to be rapid.[2][3][4][5] It has also become imperative to strengthen AI/ML literacy in underserved communities and to build a more diverse workforce in AI/ML design and development. However, whether as a practice or as an academic discipline, AI/ML is not yet engineered to address all socio-humanitarian issues and complexities. This is especially true for socially and/or economically marginalized communities whose members are frequently unheard or have limited engagement in research, discovery, and innovation pipelines for cultivating shared prosperity.
The general population still has limited knowledge about AI/ML, with one study reporting that only about one-quarter of people have heard of AI or ML and only about half are at least somewhat aware of AI and ML.[6] Furthermore, individuals and communities who are subject to potentially detrimental outcomes (e.g., persons with mental health care needs and disabilities, persons with marginalized racial/ethnic identities, etc.) may be more aware of the potential harms of AI/ML, particularly when it comes to the risk for harm from bias.[7,8] Thus, people who are presently or historically underserved or marginalized may be particularly concerned that they will be harmed by AI or ML technologies, especially in cases where AI or ML is used or applied without their awareness.
The overall lack of understanding about AI/ML, and the awareness of bias among historically and presently marginalized populations, could result in limited trust in the technology and its use. To build trust among those most subject to bias or at risk of detrimental outcomes, it is critical for AI/ML developers to assess their own reliability and adapt their practices to build trustworthiness with the most vulnerable stakeholders. In this context, it is also important to recognize that trust varies across and within populations, and people may have more or less trust in health care technologies based on factors such as prior experience of racial bias.[9] When implemented responsibly, AI/ML has the power to account for and assess a variety of factors that contribute to health and disease to improve prevention, diagnosis, and therapy. The ability to predict the risk of adverse health outcomes and identify high-risk patients for targeted preventive interventions offers tremendous potential to improve the health of individuals and medically underserved populations.[10,11] Various factors that may negatively affect how people engage in the development of AI/ML may limit the technology's potential benefits for those people in health-related settings (see Table 1). For instance, failure to educate about AI/ML, and to contextualize its impact on an individual and their community, can bias whether people provide data to build such technologies and, subsequently, who benefits from their application. As a consequence, poor engagement can exacerbate inequities in the creation, development, and application of AI/ML.
Table 1. Factors that may engender inequitable access to AI/ML or demotivate participation in AI/ML

Under-engagement of communities in research, development, and use of AI/ML often reflects limited knowledge and crucial misunderstandings about AI/ML, including how it is used in healthcare settings to advance health-related innovations and solutions. Thus, stronger, more targeted, and more intentional engagement is required to help these groups identify and address real or potential harms associated with problematic implementation of AI/ML in high-consequence settings. To address this challenge, the National Institutes of Health's (NIH) Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) was established in 2021 with a mission to address factors that undermine achieving health equity through the design, use, and application of AI/ML, including the lack of:
• Adequate data and data infrastructure.
• Consensus that ethics can strengthen innovation.
The tension between individual desires and population needs challenges ethics and equity. Thus, the Ethics and Equity Workgroup (EEWG) was formed within the AIM-AHEAD consortium to ensure that ethics and fairness are at the forefront of AI/ML applications to build equity in biomedical research, education, and healthcare. Activities within the workgroup have included study and deliberation toward developing actionable guiding principles, a glossary of key terms, and other engagement tools to encourage greater attention to ethics and equity in AI/ML development. This article describes the EEWG process to refine and reach consensus on its initial collaborative work products to serve and inform the AIM-AHEAD community of stakeholders and other external consortia, organizations, and/or communities that have goals similar to those of AIM-AHEAD.

Workgroup Establishment
The AIM-AHEAD EEWG was created in 2021 to guide the ethical and equitable development and implementation of AI/ML tools and processes broadly within the AIM-AHEAD Consortium.
Simultaneously, an Equitable Policy Development Workgroup was developed within the AIM-AHEAD Infrastructure Core. To ensure rapid and coordinated progress in embedding ethics and equity into AIM-AHEAD Consortium activities, both within and outside of the Infrastructure Core, the EEWG's efforts were harmonized and merged with the Infrastructure Core's Equitable Policy Development Workgroup, upon the recommendation of the EEWG co-chair and Multiple Principal Investigator for the AIM-AHEAD Infrastructure Core. The newly reconfigured EEWG began by defining its scope of activities (see Figure 1).

Workgroup Membership
At the start of the program in Year 1, the EEWG was composed of 51 members (AIM-AHEAD principal investigators and co-investigators) and three co-chairs. AIM-AHEAD participants either requested to join or were selected to join by their project leaders within the program. During Year 2, the EEWG's membership was consolidated into two co-chairs and approximately 40 AIM-AHEAD principal investigators, co-investigators, leadership fellows, and research fellows. This reduction in EEWG co-chairs and members occurred for two main reasons: 1) time and effort among members were reallocated to other activities within AIM-AHEAD (i.e., administrative planning for regional hubs, research, etc.), and 2) given the evolution of the program over time, the Year 1 members were provided an opportunity to recommit to the EEWG for Year 2. In both years, EEWG co-chairs and members represented a variety of academic disciplines and focus areas, including but not limited to: medicine, computational science, population health, health science, data science, bioethics, law, community engagement, human-centered design, health disparities research, biological science, social science, and engineering.

Development of a Set of Ethical Principles for AI/ML
The initial effort of the EEWG during Year 1 was to produce a set of principles and a glossary to inform the practice of ethics and equity in AI/ML development and implementation in health research.
During Year 1, members convened in weekly meetings that led to consensus on the development of specific workgroup deliverables.[14][15][16][17][18][19][20][21][22][23][24][25][26][27] To develop the Principles, the EEWG used a modified Delphi approach to facilitate discussions around tangible steps that the Consortium should take to ensure that ethics and fairness are front and center in AI/ML applications to build equity in biomedical research, education, and healthcare.[28] The EEWG approached the development of the Principles with optimism about the potential of AI/ML to address health disparities by empowering communities, yet with recognition of complex societal challenges: inadequate representation or misrepresentation in datasets, algorithmic bias, imbalances in communities' access to data and information about themselves, misuses of AI/ML tools, and threats to the civil and human rights of individuals and communities who are or may be subject to illegal or pervasive AI/ML surveillance, to name just a few.

Development of a Glossary
To develop the Glossary, the EEWG began during Year 1 by defining ways in which outputs of AI/ML can: 1) fail to be informative or useful for individuals and groups; 2) distinguish among individuals in inappropriate ways as a result of bias, failure of inclusion, or misuse; or 3) be poorly vetted, as a result of insufficient engagement with key stakeholders (including data subjects), by individuals and groups who are or may be subject to potentially harmful actions and/or decisions made by key or authoritative stakeholders that rely on AI/ML for decision support.
Using a modified Delphi approach, consensus was reached on terms to define.[29] Identified terms included or described demographic characteristics, such as self-defined or assigned race, ethnicity, sex, ability, and gender, that can lead to errors in the development of AI/ML, which can in turn lead to potentially irreversible, intergenerational, and multigenerational harm to individuals and groups subjected to decisions informed by or based on AI/ML outputs. During Year 2, virtual meetings were held on a bi-weekly basis to further deliberate on and refine the Principles and Glossary. Refinements were based on expert stakeholder feedback gathered via a survey among participants in AIM-AHEAD pilot projects and during virtual convenings.

Development of an Interview Guide
The EEWG initially sought to conduct a quantitative survey to assess how AIM-AHEAD researchers would implement the Principles in practice. A draft survey was developed by two volunteers within the workgroup, who later shared the draft with the broader workgroup for iterative feedback and edits. The draft survey was also shared with awardees of AIM-AHEAD pilot projects for feedback. As the EEWG deliberated on the feedback, it ultimately determined that a qualitative interview (versus a quantitative survey) would be a more useful approach to garnering AIM-AHEAD researchers' perspectives on implementing the Principles in practice. Thereafter, the EEWG met regularly to convert the quantitative survey into an interview guide, with the intent of learning the interviewees' perspectives around, and natural reactions to, the AIM-AHEAD Ethics and Equity Principles and Glossary.

Ethical Considerations
The EEWG's efforts in developing the interview guide and conducting the interviews were focused exclusively on program-specific planning for AIM-AHEAD and were not intended as human subjects research. AIM-AHEAD investigators' responses to the interviews were wholly voluntary, and their comments were used exclusively to develop the program's Principles rather than being subject to further assessment for generalizable knowledge.

AIM-AHEAD Ethics and Equity Principles
Based on the EEWG's internal Delphi process, informed by insights from interviews with AIM-AHEAD investigators, the workgroup articulated five core principles, each with subparts, that describe best practices for working with stakeholders from historically and presently underrepresented communities.

1. BUILD TRUST WITH COMMUNITIES.
Build trust and share power to enable data-driven decision-making among multiple partners. Trust must be earned through longstanding, sustained relationships in the community, which take time, investment, and resources to manifest.
• Through authentic community engagement, determine, understand, and deliver value in a manner that is community-driven, community-defined, and community-led.
• Use asset-based language and thinking in collecting, interpreting, and reporting community-level data (in lieu of deficit-based language and thinking).
• Be transparent about the structure of AI models, data that are contextually limited or incomplete, and limitations in the capabilities of data analytics tools and platforms.
• Commit to ongoing engagement and bi-directional communication between AI/ML developers and communities around interventions to address limitations in the capabilities of data analytics tools and platforms.

2. DESIGN AND IMPLEMENT AI/ML WITH INTENTION.
Take collective action and engage in data-driven decision-making towards embedding equity, which requires shared goal-setting, design, implementation, and accountability.
• Determine shared goals that serve as a commitment anchor and barometer for co-created actions.
• Design with intent to overcome root causes of bias in order to solve or address (versus merely explore) an immediate, ongoing, and/or systemic problem affecting communities experiencing hardships that have contributed to health inequity.
• Develop and implement ongoing AI/ML design mechanisms and procedures to monitor AI/ML algorithms, with the goal of preventing and/or mitigating harm.

3. CO-CREATE, DON'T DICTATE.
Move from superficial community engagement to true community partnership through meaningful co-creation.
• Develop AI/ML infrastructure, protocols, and programs in partnership with key and affected community stakeholders.
• Avoid tokenizing individuals and communities to achieve asymmetric goals that are, or can be perceived as, to the detriment of communities.
• Limit uses of computational methods that are, or can be perceived as, a substitution for data that could only be obtained through strong community engagement.
• Be transparent about the short-, medium-, and long-term sponsorships of, investors in, and potential beneficiaries of AI/ML projects.

4. BUILD CAPACITY.
Invest in people, data, and computational technology-today, as community leaders dig into this work, and tomorrow, as society collectively builds a stronger, more diverse tech talent pipeline.
• Educate stakeholders to enable AI/ML competency across clinical practice, community, and research settings (e.g., build AI/ML model fact labels that can summarize or explain algorithms).
• Develop a plan to promote eHealth literacy in marginalized and underserved communities and groups.
• Build equitable access to AI/ML technology and its development, applications, and uses across real-world health contexts, including social determinants of health and research.
• Develop a plan for building capacity that includes hiring and supporting a diverse workforce, dedicating funds for sustaining an existing workforce, and creating metrics that allow institutions to measure their success.

5. RESET THE RULES.
Reexamine the mechanisms that hold institutions accountable and resist the urgency of quick fixes to complex issues like systemic racism.
• Engage communities to determine their experiences with, and desires to overcome, the digital divide, and facilitate the equitable inclusion and consideration of populations in AI/ML models and algorithms.
• Create equitable and liberated access to AI/ML development, implementation, and maintenance to oversee and correct model drift and guide entities in their reactions to AI/ML outputs.
• Identify and correct information asymmetries that may leave communities lacking pertinent, actionable, and critical information that is exclusively held by powerful institutions.

AIM-AHEAD Ethics and Equity Glossary Terms
Developers of AI/ML platforms and tools must contemplate, anticipate, mitigate, and address potential issues with downstream data aggregation, interpretation, and use. Meeting these goals requires a shared understanding of the terms used in these policies and processes. The EEWG determined that, in many cases, sensitive demographic characteristics (e.g., race, ethnicity, sex, ability, and gender) are particularly problematic as variables used in AI/ML because they are often inappropriately understood as being rooted solely or primarily in genetic or phenotypic differences, rather than as strongly influenced by discriminatory sociohistorical and sociocultural practices.
To capture and promote a shared understanding of key terms, the EEWG developed a glossary of 12 words (listed in Table 1) that follow or build upon existing understandings of these concepts, highlighting their particular importance for the optimal development, refinement, and implementation of AI/ML. In addition, the EEWG developed a concept relationship diagram that describes the logical flow of and relationship between the definitional concepts described in Table 1 (see Figure 2). The center of this diagram is equity, which requires AI developers and implementers to enforce fairness and avoid bias in a population with sufficient diversity by being inclusive. Implementing diversity requires collecting data on representatives characterized by a minimal set of aspects: ethnicity, race, gender, and sexual orientation. These representatives form a representative sample if they reflect the characteristics of a population. A representative sample can mitigate algorithmic bias, which is one specific type of bias.

Interview Guide
As mentioned, extensive and iterative feedback received during the development of the quantitative survey led the EEWG co-chairs and members to determine that a qualitative engagement approach was warranted to facilitate meaningful and diverse stakeholder engagement in disseminating and implementing the principles and glossary. Therefore, the EEWG developed an interview guide that can be used or adapted to garner and understand AIM-AHEAD members' and other community perspectives on the principles and glossary. The interview guide is provided in Appendix 1.

Discussion
Over time, more attention has been devoted to assessing the potential harms and benefits of research to the people who are studied, albeit primarily as viewed by investigators (typically White men) and institutional review boards (typically comprised of researchers with minimal or latent community involvement).[31][32][33] Incentivizing representation of non-scientific, non-affiliated community members on institutional review boards, engaging members of historically underrepresented groups in more visible roles as investigators, and engaging minority-serving institutions as partners in AI/ML research are necessary to promote equitable access to opportunities and careers in AI/ML. Such an intentional approach also, importantly, demonstrates an appreciation for local knowledge and facilitates the design of more culturally informed interventions that consider how research will affect the heterogeneous populations being studied in AI/ML research. This form of appreciation is necessary for tailoring engagement to the needs of diverse groups and understanding how to overcome barriers to AI/ML research and use.[34] Beyond promoting diverse and equitable opportunities for participation in AI/ML research, it is necessary to recognize the need to translate that work into actual practice, which historically has also been a barrier for health equity. For example, the association of lower-quality pulse oximetry data with dark skin tones has long been known, and versions of the technology have been designed to account for this discrepancy, but pulse oximeters with biased tendencies remain in wide use.[35] There is a real risk that AI/ML technology will follow a similar pathway if there is not sufficient action to build ethics and equity into the research.
Our effort reported here achieves two goals. The first is to describe what is needed, procedurally and substantively, to achieve equity. This is a complex process that must take place and evolve over time; it cannot be addressed as a one-time event or by filling out a checklist. Achieving equity requires rebalancing the interests at stake in research, which, at a minimum, means truly considering and addressing the interests of the people who will be affected by the results. Ideally, research participants can become co-creators as ethics in AI/ML and related ethical principles evolve into more commonly accepted policies and practices. The second goal of this reported effort is to emphasize that addressing equity requires an inclusive, ongoing process with a shared understanding of salient terms that will evolve over time. Recent engagements within the AIM-AHEAD program have shown this to be true even for terms like AI/ML, as today very few stakeholders can clearly articulate how AI/ML can be or is used in the real world.[34] New and ongoing national initiatives, such as the National Academy of Medicine's Artificial Intelligence Code of Conduct project, which intends to develop a "code of conduct for the development and use of artificial intelligence in health, medical care, and health research", are encouraged to learn from the EEWG's efforts.[36]
Inclusive and ongoing processes to develop a shared understanding of salient terms like AI/ML and those described in our glossary require more time, greater inclusion, and deeper incorporation of diverse community perspectives. This approach differs drastically from the typical project life cycles afforded by the gold-rush mentality that has emerged around AI/ML today. Therefore, one key step moving forward would be to persuade leaders in the AI/ML research enterprise to broadly disseminate the lessons that may be learned in operationalizing our EEWG principles and/or glossary. Programs like AIM-AHEAD need to objectively assess their administrative processes and evaluation criteria for what constitutes ethical and equitable opportunities for an AI/ML investigation, including investigator inclusion, data governance, data sources, and data infrastructure.
There are limitations to consider in our process and recommendations. First, the EEWG has continuously revisited the principles and glossary for potential editing based on the members' evolving experience and expert opinions; making these deliverables "living documents" complicates the process of achieving sustainable consensus. Nonetheless, the principles and glossary will require reflection, appreciation, and adjustment over time to account for the effects of real-world events, human choices, or interpersonal phenomena on relevant perspectives. Also, some of our proposed glossary terms may already be limited in scope with respect to real-world events and phenomena. For instance, although our definition of "representative" concerns "an individual or body chosen or appointed to act or speak for an individual, population, or subpopulation," there are certain matters in which a representative may be self-appointed, without specific authorization from those they wish to represent.
Therefore, ongoing engagement around the use(s) of our principles and glossary in AI/ML research settings is encouraged to maximize their potential benefits and minimize any potential harm.However, ongoing engagement with institutions that have limited resources to support their full participation requires careful consideration and discussion of how to incentivize, support, and sustain meaningful engagement, beyond mere compensation.One way to accomplish this is to seek institutional input through authentic connections to determine what they consider a valuable investment for their time, instead of deciding for them.For example, such connections can be made both within and outside of conferences, convenings, and events hosted by minority-serving institutions nationwide (e.g., the Annual Biomedical Research Conference for Minoritized Scientists, or National Society of Black Engineers' Annual Convention).

Conclusions and Next Steps
An overemphasis on velocity works against taking the time needed to foster the inclusion of historically and presently underrepresented communities in the development of AI/ML, ultimately rewarding AI/ML "haves" over "have-nots." In the private sector (e.g., big technology companies and startups), the pace of AI/ML development is extremely rapid and difficult to manage. Inequitable divisions in access to resources like computers, smartphones, and the Internet have vastly decreased over the past decade, but how the new technology is used, with adequate operational know-how and e-literacy, cost of use, human resources and staffing to maintain cyberinfrastructure, and many other technical and non-technical resources, is where AI/ML can catalyze this divide.
An equity-oriented public sector intervention, such as AIM-AHEAD, can be more effective in achieving diversity and inclusion goals by emphasizing actions that do not sacrifice trust-building for the sake of rapid development of technology, especially in the initial stages. By slowing down to meet historically and presently under-resourced institutions and communities where they are, and where they are able to engage and compete, we can evaluate AI/ML implementation and results for bias over time and expand the potential to achieve the aims of ethics and equity. We envision a virtuous cycle of shared learning, building on our EEWG deliverables, that may bring researchers and impacted communities together at a new intersection of computational sciences, ethics, and health equity.
• Introductions from the project team.
• Discuss the purpose and goals for the interview (to learn the interviewees' perspectives around, and natural reactions to, the AIM-AHEAD Ethics and Equity Principles and Glossary).
  o Note that the interview should take no more than one hour of their time.
• Ask for permission to record.
• Encourage the interviewer to pursue threads in the conversation as they arise and to ask follow-up questions to flesh out details.

Figure 1. AIM-AHEAD Ethics and Equity Workgroup Scope of Activities

Table 1. AIM-AHEAD Ethics and Equity Glossary Terms and Definitions

Ethnicity: The language, lifestyle, illness, and health beliefs encountered among an individual or representative population, regardless of race, that may subject the individual or population to bias or discrimination.

Race: A social construct or assumption based on patterns in an individual's or representative population's language, lifestyle, and/or health beliefs, and immutable characteristics, such as skin tone/color or hair texture, regardless of immigration status, socioeconomic status, genetic ancestry, or geographic origin, which may subject the individual or population to bias, structural racism, and/or discrimination that would warrant corrective anti-racism action(s).

Gender Identity: A sense of oneself as male, female, or something else. When an individual's gender identity and biological sex are not congruent, the individual may identify along the transgender spectrum. An individual may choose to change their gender one or more times. Varying cultural indicators of gender, such as clothing choice, speech patterns, and personality traits, relate to gender but are not acceptable means to determine another's gender identity. The change in an individual's gender can be used to abuse, discriminate against, and misrepresent individuals and groups.

Sexual Orientation: An individual's capacity for attraction to and sexual activity with the same or different sex. An individual's sexual orientation is indicated by one or more of the following: how an individual identifies their own sexual orientation, an individual's capacity for experiencing sexual and/or affectional attraction to people of the same and/or different gender, and/or an individual's sexual behavior with people of the same and/or different gender. Sexual orientation incorporates three core ideas: consensual human relationships (sexual, romantic, or both), the biological sex of an individual's actual or potential relationship partners, and enduring patterns of experience and behavior. Sexual minorities, or people whose sexual orientation does not conform to heteronormative cultural expectations, are vulnerable to violence and discrimination.

Inclusive: Avoiding bias by providing equitable and open access to opportunities and resources for engagement. This can be accomplished, for example, by enforcing fairness in data collection methods, enforcing fairness in the assignment of labels, developing explainable, transparent, and interpretable models, and having diverse teams monitor models to look for and eliminate biases.

Diversity: The wide variety of shared and different personal and group characteristics among human beings. There are many kinds of diversity, including gender, sexual orientation, class, age, country of origin, education, religion, geography, physical or cognitive abilities, or other characteristics. Valuing diversity means recognizing differences between people, acknowledging that these differences are a valued asset, and striving for diverse representation as a critical step towards equity.

Appendix 1. Interview Guide

• Request an introduction from the interviewee, including name, organization, and role in AIM-AHEAD.
• Please take a moment to review the AIM-AHEAD Ethics and Equity Principles.
  o Provide up to 15 minutes at the start of the interview to help familiarize the interviewee with the Principles and Glossary.
• Can you describe which principle resonates with you the most?
  o If any, ask the participant to indicate which principle(s) and discuss why the principle(s) resonate the most.
• Can you describe which principle(s) applies the most to your work?
  o If any, ask the participant to indicate which principle(s) and discuss why the principle(s) applies to their work the most.
• Can you describe which principle(s) applies the least to your work?
  o If any, ask the participant to indicate which principle(s) and discuss why the principle(s) applies to their work the least.
• Would you like to share any experiences within your scope of work or interests that relate to one or more of these principles?
  o Based on those experiences alone, what are your natural reactions to the principle(s)?
• Are there other important principles missing from this list?
  o If yes, ask the participant to elaborate.
• Looking at the AIM-AHEAD Ethics and Equity Glossary, does any particular term stand out to you?
  o If no/yes, ask the participant to elaborate.
• Do any of these terms align with your understanding of how they are or can be used within AI/ML?
• Would you like to share any additional thoughts, perspectives, reactions, and/or feedback concerning the AIM-AHEAD Ethics and Equity Glossary?
• On a scale of 1 to 5, how might you perceive the level of difficulty in implementing these principles within your institution?
  o Very easy (1), somewhat easy (2), neutral (3), somewhat hard (4), very hard (5).
  o If examples are needed:
    ▪ Applying any of these principles within existing institutional structures
    ▪ Adhering to policies that may align or conflict with these principles
    ▪ Changing existing policies and/or culture to apply these principles
• Can you describe one way in which you apply these principles in your work/research/institution?