Navigating the Grey Area: Students' Ethical Dilemmas in Using AI Tools for Coding Assignments

Abstract: Integrating artificial intelligence (AI) in higher education, particularly in coding assignments for Information Technology (IT) students, represents a rapidly evolving research area with significant implications for academic practices and integrity. This study focuses on the ethical challenges faced by IT students when using AI tools like ChatGPT for coding assignments. Despite the growing use of AI in education, there is a notable gap in understanding how students perceive and navigate the ethical dilemmas associated with these technologies. To address this gap, this study employed a thematic analysis of qualitative data collected from interviews with IT students. The results reveal a complex landscape of ethical considerations, including issues of originality, academic integrity, and the potential for misuse of AI tools. Students reported challenges in balancing the benefits of AI assistance with the need to maintain independent learning and adhere to ethical standards. The implications of this research are significant for educators, institutions, and policymakers. Understanding the ethical challenges students face can inform the development of more effective teaching strategies, assessment methods, and institutional policies. This study contributes to the ongoing dialogue about AI ethics in academia, providing valuable insights for creating an educational environment that leverages the power of AI while upholding the principles of academic integrity and meaningful learning.


Introduction
Integrating artificial intelligence (AI) in education, particularly in higher education, represents a rapidly evolving research area with far-reaching implications for academic practices and integrity (Takács et al., 2023). This study focuses on a critical issue within this domain: the ethical challenges faced by Information Technology (IT) students when using AI tools like ChatGPT for coding assignments. As AI technologies become increasingly sophisticated and accessible, their potential to revolutionize learning is matched by their capacity to disrupt traditional academic norms and practices, raising complex ethical questions that demand careful consideration. The issues of academic dishonesty and ethics have long been a challenge (Mutanga, 2020). The importance of this issue cannot be overstated. As Mohamad & Nazlan (2024) point out, "the probability that AI will soon relieve humans of the daily tasks that humans usually do and as such, the trust in this technology must be paramount." This sentiment underscores the urgency of addressing the ethical implications of AI in education, particularly in coding assignments, where the line between leveraging AI as a learning aid and potential academic misconduct can become blurred. The complexity of this issue lies in balancing the benefits of AI as a learning tool with the fundamental educational goals of skill development, critical thinking, and academic integrity (Slimi & Villarejo-Carballido, 2024). The rapid advancement of AI has led to its integration into various fields, including education. In the realm of coding assignments, AI tools have emerged as a transformative technology, offering both opportunities and ethical challenges. This sentiment is echoed by Slimi & Carballido (2023), who highlight the "ethical challenges and dilemmas" that have surfaced with the swift integration of AI into the education system, particularly concerning students' misuse of the technology. The exploration of these ethical dilemmas is crucial to ensure that the integration of AI into coding education is conducted responsibly and transparently.
Recent literature has begun to explore the multifaceted impact of AI on education. Mohamad & Nazlan (2024) propose a framework for evaluating the ethics of AI, emphasizing the need for AI to adhere to moral rules and not be used for illicit purposes. They stress the importance of addressing ethical issues such as "human sense and empathy" that may be lacking in AI algorithms, as well as concerns over "the security of data" and "the opaque nature of the algorithms." The authors argue that it is "imperative to evaluate their ethics" to ensure that the technology being used does not violate ethical principles.
In the realm of IT education specifically, the emergence of AI-generated code has posed a significant challenge to traditional computer science education. Porayska-Pomsta et al. (2022) argue that "instructors should instead reconsider assessment design in their pedagogy in light of recent developments, with a focus on how students build knowledge, practice skills, and develop processes." This suggests a need to rethink how computer science is taught and assessed, recognizing the opportunities and limitations presented by AI tools.
The perspectives of educators have also been explored in recent studies. Lo (2023) conducted research to gather the views of university programming instructors on how they plan to adapt to the growing presence of AI code generation tools, such as ChatGPT and GitHub Copilot. The study found that "in the short-term, many planned to take immediate measures to discourage AI-assisted cheating," while longer-term opinions diverged on whether to ban or integrate these tools into their courses. This highlights the diversity of approaches and the need to further explore best practices in this rapidly evolving landscape.
Similarly, Agrawal et al. (2021) investigate the balance between AI advancements and ethical concerns in higher education assessments. Their survey of diverse educators found a "growing interest in AI-based educational tools, along with a demand for rigorous training and ethical standards for their equitable application." This underscores the importance of addressing ethical considerations as AI becomes more prevalent in assessment practices.
The ethical implications of AI extend beyond coding assignments to other areas of academic work. Chan (2023) explores the ethical dilemmas in using AI for academic writing, focusing on the field of nephrology. They highlight the potential for "scholars incorporating AI-generated text into their manuscripts, potentially undermining academic integrity." The authors propose solutions, including "the adoption of sophisticated AI-driven plagiarism detection systems" and "a robust augmentation of the peer-review process with an 'AI scrutiny' phase," to mitigate the unethical use of AI in academia.
From the student perspective, Gupta et al. (2024) present a study on the impact of AI tools, such as ChatGPT, on the student experience in programming courses. Their preliminary findings "describe a range of students' attitudes and behaviours towards ChatGPT that provides insight for future research and plans for incorporating such AI tools in a course." This underscores the need to understand the student perspective and its implications for the effective integration of AI tools in programming education.
The ethical challenges associated with AI in education are part of a broader discourse on AI ethics. Abimbola et al. (2024) examine the ethical dilemma of regulating AI chatbots, particularly ChatGPT. The study addresses potential ethical issues related to "data privacy, algorithmic bias, and the potential for chatbots to replace human interaction and support." The authors emphasize the need to find a balance between regulation and innovation to maximize the benefits of ChatGPT while minimizing its potential harms.
In the medical field, Ciriaco & Marín (2023) explore the ethical dilemmas of using AI. They highlight issues related to "informed consent, respect for confidentiality, protection of personal data, and the accuracy of the information it uses." The authors emphasize that the ethical analysis of AI in medicine must address "nonmaleficence and beneficence, both in correlation with patient safety risks, ability versus inability to detect correct information from inadequate or even incorrect information." While these concerns are specific to medicine, they highlight the broader ethical considerations that arise with the integration of AI in professional and educational settings. Kooli (2023) conducts an ethics-based audit on leading Large Language Models (LLMs), including GPT-4, to assess their moral reasoning and normative values. The author employs an "experimental, evidence-based approach that challenges the models with ethical dilemmas" to probe human-AI alignment. Their findings include "underlying normative frameworks with clear bias towards particular cultural norms" and "troubling authoritarian tendencies" in many of the models. This highlights the need for rigorous evaluation and regulation of AI systems to ensure alignment with human values and ethical principles.
While existing research provides valuable insights into the broader implications of AI in education and its ethical challenges, there is a notable gap in our understanding of how students, particularly in technical fields like computer programming, perceive and navigate these ethical dilemmas. The unique nature of coding assignments, which often involve problem-solving and algorithm development, presents specific challenges when considering the ethical use of AI tools. This gap is critical, as students' perspectives and decision-making processes are central to developing effective educational policies and ethical guidelines. The severity of this gap is underscored by the rapid adoption of AI tools among students, with a survey by Mohamad & Nazlan (2024) indicating that 67% of students have used ChatGPT for schoolwork. Furthermore, the rapid evolution of AI technology, exemplified by tools like ChatGPT, has outpaced the development of ethical guidelines and educational policies. This creates a pressing need for research that explores how students are adapting to these new tools and the ethical frameworks they are developing to guide their use (Egbe et al., 2016).
To address this gap, our study poses the main research question: How do IT students perceive and navigate the ethical challenges of using ChatGPT in coding assignments? Understanding students' perspectives is crucial for developing informed strategies to harness AI's potential while maintaining academic integrity and ensuring meaningful learning outcomes. The complexity of this research question lies in several factors: the evolving nature of AI technology, which requires ongoing reassessment of ethical guidelines; the diversity of coding assignments, presenting unique ethical considerations; varying levels of AI literacy among students, influencing their ethical decision-making; the intersection of institutional policies and personal ethics; and the potential impact on skill development, balancing AI as a learning aid against the need to develop crucial coding skills. These multifaceted aspects underscore the intricate ethical landscape that students must navigate when using AI tools in their academic work.
In this paper, we adopted Kitchener's Five Ethical Principles (Kitchener, 1984) as the theoretical framework to analyze students' ethical behaviour and decision-making when using AI tools in coding assignments. This framework allows us to explore key aspects of students' ethical considerations and has been applied in other contexts; for instance, Duncan & Geist (2022) used it to investigate the understanding and awareness of ethics among psychology students.
Our findings reveal a complex landscape of ethical considerations, encompassing students' motivations, strategies for balancing AI assistance with independent learning, and perceptions of ethical AI use in academic settings. These insights provide a deeper understanding of the ethical dilemmas faced by IT students and their decision-making processes, offering valuable potential to inform the development of more effective policies on AI use in IT education. By understanding students' perspectives, educators and institutions can create realistic guidelines that address the challenges and opportunities presented by AI in coding education. Furthermore, this work contributes to broader discussions on adapting pedagogical approaches for a future where AI is integral to professional IT practice.
As the world moves forward in this era of rapid technological advancement, we must develop a comprehensive understanding of the ethical implications of AI in education. This study, therefore, represents a step towards that understanding, focusing on the perspectives of those at the forefront of this technological revolution: the students themselves.
The rest of the paper is structured as follows: The next section presents the theoretical framework. The methodology details data collection and analysis processes, including transcription, coding, and thematic analysis. The findings and discussion highlight IT students' ethical dilemmas when using AI tools for coding assignments, focusing on originality, academic integrity, and AI misuse. The conclusion summarizes key findings and their implications for educators and policymakers, emphasizing the need for ethical guidelines.

Theoretical Framework
The ethical use of artificial intelligence (AI) in academic settings is a complex and multifaceted issue. To understand how students navigate the ethical dilemmas associated with the use of AI tools, this study employs Kitchener's Five Ethical Principles (Kitchener, 1984) as the theoretical framework. These principles provide a robust foundation for analyzing ethical behaviour and decision-making in many contexts, including education.

Autonomy
Autonomy refers to respecting the individual's right to make informed decisions about their actions. In the context of using ChatGPT, students' autonomy involves their ability to decide how and when to use the AI tool, provided they are aware of the ethical implications. This principle emphasizes the importance of students' understanding of academic integrity policies and their capacity to make choices that align with these guidelines. It also highlights the role of educational institutions in providing clear and comprehensive information about acceptable and unacceptable uses of AI tools.

Nonmaleficence
The principle of nonmaleficence centres on the obligation to avoid causing harm. Applying this principle to the use of ChatGPT involves ensuring that the use of AI does not negatively impact students' learning experiences, academic development, or the integrity of their work. This principle is crucial in understanding the potential risks associated with over-reliance on AI, such as the erosion of critical thinking skills and the temptation to engage in academic dishonesty.

Beneficence
Beneficence involves actively promoting the well-being of others. In this study, beneficence is considered in terms of how ChatGPT can enhance students' learning, support their academic performance, and contribute to their overall educational experience. This principle requires examining the positive aspects of AI use, such as providing additional resources for understanding complex topics, aiding in brainstorming and idea generation, and offering personalized learning support. It also involves balancing these benefits with potential drawbacks to ensure that the use of AI is genuinely beneficial to students.

Justice
Justice pertains to fairness and the equitable distribution of benefits and burdens. In the context of ChatGPT, this principle examines whether all students have equal access to AI tools and whether the use of these tools creates or exacerbates disparities among students. Justice also involves considering the fairness of using AI-generated content in academic work and the implications for grading and assessment. Ensuring that policies regarding AI use are applied consistently and fairly across different student groups is essential to uphold this principle.

Fidelity
Fidelity involves maintaining trustworthiness, honesty, and integrity in relationships and actions. For students, fidelity means adhering to academic integrity standards and being honest about their use of ChatGPT in their work. This principle underscores the importance of transparency in disclosing AI assistance and the ethical responsibility to produce original work. Fidelity also extends to the relationship between students and educators, emphasizing the need for clear communication and mutual understanding regarding the ethical use of AI tools.

Research Design
This research employs a qualitative case study approach to explore how students navigate ethical dilemmas when using ChatGPT for academic purposes. A case study design is chosen because it allows for an in-depth examination of the area under investigation (Hennink et al., 2020). This approach is particularly well-suited to understanding complex phenomena like ethical decision-making, where contextual factors play a significant role (Hancock et al., 2021; Schoch, 2020). In essence, the rationale for using a qualitative methodology is to capture the rich, detailed narratives of students' interactions with ChatGPT, providing insights that quantitative methods may overlook.

Data Collection Methods and Procedures
Data were collected from IT students enrolled in computer programming courses through a combination of semi-structured interviews and focus groups. This approach has been shown to give robust and comprehensive data, allowing for the triangulation of findings and thereby enhancing the credibility of the results (Heiselberg & Stępińska, 2023). In the semi-structured interviews, a purposive sample of 20 students was selected based on their willingness to discuss their experiences with ChatGPT. Each interview lasted approximately 45-60 minutes and was conducted in person or via video conferencing, based on participants' preferences. The interviews were audio-recorded with participants' consent and transcribed verbatim for analysis. The focus groups involved three separate sessions, each consisting of 6-8 students with varied experience levels with ChatGPT. These focus groups, facilitated by the researcher, lasted about 90 minutes. Discussions were audio-recorded and transcribed to capture diverse perspectives. This approach encouraged the students to reflect on and debate their views in a group setting.

Data Analysis Techniques
The data analysis followed a systematic and iterative process to ensure thorough and credible findings. The primary techniques used were thematic analysis and narrative analysis. Thematic analysis involved coding the transcripts from interviews and focus groups using a combination of deductive and inductive approaches, as described in the work by Kiger & Varpio (2020). Initial codes were derived from Kitchener's Five Ethical Principles, while additional codes emerged from the data. Codes were grouped into broader themes that captured the critical aspects of students' ethical dilemmas and decision-making processes. Themes were refined through multiple rounds of review and discussion. Narrative analysis focused on understanding students' stories and experiences to identify common patterns and unique variations in how they navigated ethical dilemmas.
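The hybrid deductive-inductive coding step described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' actual analysis pipeline: the keyword lists and excerpts are hypothetical, and real thematic analysis relies on researcher judgment rather than keyword matching. It only shows the logic of assigning a priori codes first and setting aside unmatched excerpts as candidates for new inductive codes.

```python
# Illustrative sketch of a hybrid coding pass (hypothetical keywords, not the
# study's real codebook). Deductive codes come from Kitchener's Five Ethical
# Principles; excerpts matching no a priori keyword are set aside for a
# second, inductive coding round.
from collections import defaultdict

# Hypothetical keyword lists standing in for each deductive code.
DEDUCTIVE_CODES = {
    "autonomy": ["decide", "choice", "informed"],
    "nonmaleficence": ["harm", "over-reliance", "dishonesty"],
    "beneficence": ["helps", "understand", "learning"],
    "justice": ["access", "fair", "equal"],
    "fidelity": ["honest", "cite", "transparent"],
}

def code_excerpts(excerpts):
    """Assign each excerpt every matching deductive code; collect the rest
    as candidates for new, inductively derived codes."""
    coded = defaultdict(list)
    uncoded = []
    for text in excerpts:
        lowered = text.lower()
        matches = [code for code, kws in DEDUCTIVE_CODES.items()
                   if any(kw in lowered for kw in kws)]
        if matches:
            for code in matches:
                coded[code].append(text)
        else:
            uncoded.append(text)  # candidate for an inductive code
    return dict(coded), uncoded

# Hypothetical excerpts in the spirit of the quotes reported in this study.
excerpts = [
    "I always cite ChatGPT when I use it.",
    "It helps me understand difficult topics.",
    "Everyone was talking about the deadline pressure.",
]
coded, uncoded = code_excerpts(excerpts)
```

In this toy run, the first excerpt lands under "fidelity", the second under "beneficence", and the third matches nothing, so it would be reviewed for an emergent theme (here, perhaps academic pressure).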

Awareness and Understanding of Ethical Implications of Using ChatGPT
Our findings revealed that students exhibit varied levels of awareness regarding the ethical implications of using ChatGPT. Some students clearly understand what constitutes ethical use, recognizing the importance of maintaining academic integrity and adhering to institutional guidelines. AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in developing and using technologies (Balasubramaniam et al., 2022). These students tend to use ChatGPT responsibly, employing it as a supplementary tool to enhance their learning without compromising the originality of their work. One student noted, "I know it's important to use ChatGPT wisely. I use it to get ideas and understand concepts, but I always make sure my work is my own." However, a significant portion of the student population appears to be less informed about the ethical boundaries associated with using AI tools like ChatGPT. This lack of awareness often leads to misuse, where students might unintentionally engage in academic dishonesty by submitting AI-generated content as their own work or relying too heavily on the tool, thereby undermining their learning process (Jobin et al., 2019). Academic dishonesty has long been reported as a major challenge in higher education (Barnes & Hutson, 2024). A student admitted, "Sometimes I just copy what ChatGPT gives me because I'm not sure if it's okay to use it directly. I don't want to get in trouble, but it's not always clear what the rules are."
A critical factor contributing to this varied level of awareness is the absence of clear and consistent guidelines from educational institutions. Studies have been conducted to investigate the challenges related to adhering to specific ethical principles of AI, such as fairness, accountability, and privacy (Leslie, 2019). Due to the lack of explicit policies and instructions on the ethical use of AI, students are left to navigate these complexities on their own, leading to inconsistent practices and potential ethical violations (Kong et al., 2023). One student pointed out, "We don't get a lot of guidance on how to use tools like ChatGPT. Some professors talk about it, but others don't mention it at all, so it's confusing." The disparity in guidelines is further exacerbated by the differing rules set by individual lecturers. Academic institutions are still grappling with relevant standards to teach and implement AI ethics (Kong et al., 2023), and the lack of AI guidelines for students creates confusion among learners (Fan & Li, 2023). This inconsistency leaves students struggling to reconcile conflicting directives from different courses and instructors. For instance, one lecturer might emphasize the importance of citing AI assistance, while another might provide no guidance at all, leaving students uncertain about what is acceptable. One student said, "One of my lecturers says we need to cite ChatGPT if we use it, but another hasn't said anything about it, so I don't know what's right." This lack of uniformity not only affects students' understanding of ethical use but also impacts their behaviour, as they may inadvertently breach academic integrity standards due to unclear expectations. Lecturer autonomy also plays a part in students' use of AI; such autonomy has been defined as teachers' perception of whether they control themselves and their work environment.

Perceptions of Ethical Use
Students' perceptions of the ethical use of ChatGPT vary significantly depending on the type of assignment and its context. Student perceptions play a vital role in determining their motivation, engagement, and academic achievement, whereas negative perceptions result in reduced motivation and hinder academic success (Lavrič & Škraba, 2023). This study found that many students differentiate their use of AI tools based on whether the assignment is a take-home task or an in-class exam. For take-home assignments, some students view ChatGPT as a carte blanche to employ AI tools however they see fit. Students now find support by using ChatGPT to tackle their assignments (Wibowo et al., 2023). They argue that if they can use resources like Google to learn and perform programming tasks, then using ChatGPT, which provides precise and relevant answers, should be equally acceptable. One student stated, "If we can use Google to figure out programming tasks, why not use ChatGPT? It gives us exactly what we need." Students who adopt a deep approach to learning seek to understand what they are learning, engage actively with their learning material, and attempt to base conclusions on evidence and reasoned arguments (Gordon & Debus, 2002). This perception is often articulated in student comments such as, "Why do we need to use Google to learn how to perform certain programming tasks when ChatGPT can give us the exact thing we are looking for?" Such viewpoints reflect a pragmatic approach to AI use, where efficiency and accuracy are highly valued, sometimes at the expense of deeper learning and skill development. The motivation connected with a deep learning approach is fundamentally intrinsic: the student seeks to satisfy the personal novelty of learning. Such students are aware of more aspects of their learning situations and experiences than students who adopt a surface approach to learning (Gordon & Debus, 2002). However, there is a consensus among students that using ChatGPT for brainstorming or getting explanations is generally seen as more acceptable than using it to generate entire essays or solve exam questions. ChatGPT tools, if used correctly, present the opportunity to enhance group brainstorming sessions (Lavrič & Škraba, 2023). The former is perceived as a way to enhance understanding and foster creativity, while the latter is viewed as crossing an ethical line by outsourcing substantial portions of academic work to an AI tool. A student explained, "It's fine to use ChatGPT to get ideas or understand something better, but writing my whole essay with it feels wrong." Despite these distinctions, high academic pressure and workload significantly influence students' ethical decision-making. ChatGPT has revolutionized educational paradigms and reorganized student engagement modes with digital content and their social environment (Shahzad et al., 2024). Under intense stress and time constraints, students are more likely to rationalize the unethical use of ChatGPT. Moreover, ChatGPT serves as a robust tool for effective time management, task prioritization, and as a repository of supplemental learning resources (Shahzad et al., 2024). Students may justify using the tool in ways they would normally consider inappropriate, such as copying large sections of AI-generated text directly into their assignments or relying on ChatGPT to complete tasks that they do not understand. One student admitted, "When I'm stressed and running out of time, it's easy just to take what ChatGPT gives me and use it, even if I know I shouldn't." This rationalization is driven by the urgent need to meet deadlines and achieve high grades, often overshadowing concerns about academic integrity.

Usage Patterns and Ethical Dilemmas
The study uncovered diverse usage patterns of ChatGPT among students, highlighting a spectrum that ranges from support to dependence. However, the utilization of AI in education must be approached carefully to ensure it reinforces rather than reduces critical thinking skills (Darwin et al., 2024). Our findings show that many students use ChatGPT as a supplementary tool to aid their understanding of complex concepts. However, a fine line exists between using ChatGPT for support and becoming overly dependent, as some students admitted that they do not know where to draw the line. If left unchecked, an over-reliance on AI tools for problem-solving could lead to a passive learning approach. The number of students who admitted to being overly reliant on ChatGPT was significant. One student remarked, "I use ChatGPT to understand difficult topics, but sometimes I worry I'm not putting in enough effort myself." It is, thus, imperative for lecturers to ensure that students do not overly rely on this technology at the expense of their learning.
ChatGPT's effectiveness in handling programming questions has solidified its position as a trusted study partner for many students. In essence, AI tools have become a powerful means to supplement critical thinking skills, especially in learning settings (Rogers et al., 2024). Some students expressed that they could not imagine life without ChatGPT, indicating significant reliance on the tool. One student shared, "I can't imagine studying without ChatGPT." This reliance is so pronounced that many students use ChatGPT daily, even during classes. Over-dependence on ChatGPT reduces the level of critical thinking (Bouzar et al., 2024; Wu, 2024). When lecturers pose questions, students admit that they often turn to ChatGPT for answers, bypassing the need to engage in critical thinking or problem-solving themselves. This is despite the fact that the importance of critical thinking skills cannot be overstated. Critical thinking, as a skill, is crucial for assessing information, solving problems, and making informed decisions, both in academic and real-world scenarios (Dilekli & Boyraz, 2024). If that learning opportunity is handed over entirely to ChatGPT, students are robbed of the chance to use their creative thinking. Another student admitted, "If a lecturer asks a tough question in class, I just type it into ChatGPT using my phone and get the answer immediately."

This behaviour shows a troubling trend where the convenience and accuracy of AI tools diminish students' motivation to develop their cognitive skills. Studies of ChatGPT's usage show proof of deductive reasoning, a sequential thought process, and the ability to maintain a long-term dependency (Essel et al., 2024).
Students often struggle to distinguish between legitimate uses of ChatGPT, such as paraphrasing or seeking clarification, and unethical practices, such as submitting AI-generated text as their own work. The line between these practices can be very blurry for students, which may subsequently lead to unintentional unethical practices. One student said, "It's hard to know when I'm crossing the line. Sometimes, I just use what ChatGPT gives me because it's easier, but I know that's not always right." Students' heavy reliance on ChatGPT exacerbates this issue, as they may not fully grasp the importance of producing original work or may feel justified in using AI-generated content due to the perceived ubiquity of the tool. Despite the several potential benefits of ChatGPT as a teaching and learning tool, researchers have also highlighted causes for discussion and concern about possible disruptions to education, as this tool is still an unregulated technology presenting issues with academic integrity, data privacy, and other ethical concerns (Essel et al., 2024). This heavy reliance not only affects their learning outcomes but also raises serious ethical questions about academic integrity, as demonstrated by one student who stated, "Everyone uses ChatGPT, so it feels like it's okay to use it too much, but I know it's not the same as doing the work myself."

Motivations and Justifications
Students who use ChatGPT typically do so primarily for academic purposes, specifically to pass their modules. During the interviews, many students defended their use of ChatGPT by emphasizing how it would help them learn the material better and get better grades. One student stated, "Using ChatGPT helps me get better grades because it explains things clearly and helps me understand concepts I struggle with." Interestingly, the use of ChatGPT purely as a learning tool was rarely mentioned by students. Instead, the emphasis was predominantly on the outcomes of using the tool, such as higher grades, rather than the learning process itself. This indicates that the tool is primarily seen as a means to an end rather than an integral part of the learning journey.
Another major reason mentioned by most students for using ChatGPT is the reduction of time needed to complete assignments. For many students, especially those juggling multiple responsibilities such as part-time jobs and extracurricular activities, completing assignments quickly is a strong motivator. One student explained, "ChatGPT saves me a lot of time. I can get answers fast and finish my assignments quickly, which is great because I have a lot of other commitments."

Ethical Dilemmas and Resolution Strategies
Students stated that they frequently encounter conflicting norms when using ChatGPT. Essentially, they struggle to balance personal values, peer practices, and institutional expectations. For students, these contradictory norms create a challenging landscape. One student expressed this dilemma: "I want to do things the right way, but it's hard when I see others using ChatGPT to get ahead without any consequences." This illustrates the struggle between maintaining personal integrity and succumbing to the pressure of following what seems to be the norm among peers.
In response to these conflicts, some students stated that they developed personal ethical codes based on their experiences and understanding of what constitutes acceptable use of ChatGPT. These personal codes served as internal guidelines, helping students make decisions that align with their values and ethical beliefs. One student shared, "I have my own rules for using ChatGPT. I only use it to check my work or get explanations, but I never copy and paste answers." Such personal codes reflect a commitment to ethical practices, even without clear or consistent institutional guidelines.
By creating and adhering to these personal ethical codes, students attempt to resolve the dilemmas they face and to navigate the complexities of using AI tools like ChatGPT. These codes provide a framework for ethical decision-making, enabling students to use ChatGPT to support their learning while maintaining academic integrity. However, the effectiveness of these personal strategies can vary depending on each student's understanding of institutional policies. It is thus crucial for institutions to ensure that all students share a common understanding of those policies.

Ethical Dilemmas in Collaborative Work
In collaborative settings such as computer programming, the ethical use of ChatGPT becomes more complex due to group dynamics and peer influence, which can lead to varied approaches to using the tool and sometimes result in ethical conflicts within groups. When working on group assignments, students reported that they often face dilemmas about incorporating ChatGPT into their collective work. One student shared, "It gets tricky when you're in a group. Not everyone has the same opinion on how to use ChatGPT, and it can cause disagreements." This complexity is compounded by group members' diverse ethical standards and practices, leading to confusion and conflict. In addition, computer programming is perceived as a difficult subject (Msane et al., 2020; Egbe et al., 2020), which may tempt students to use AI tools to make their lives easier.
On the other hand, students also reported that they are sometimes not fully aware of how ethically their counterparts present their contributions in a group assignment, which usually creates tension and uncertainty within the group. As one student said, "You don't always know if someone has just copied and pasted something from ChatGPT. It makes you question the integrity of the entire project." This uncertainty can undermine trust and collaboration, making it difficult for students to work together effectively.
The issue of collective responsibility further complicates the ethical dilemmas in group projects. Students often face dilemmas regarding collective responsibility and the use of ChatGPT, especially when there are differing opinions on what constitutes ethical use. The potential for the entire group to be punished for the unethical actions of one member adds to the stress and complexity of managing group work. One student expressed this concern: "It's frustrating because if one person decides to use ChatGPT unethically, the whole group can get in trouble. It's hard to control what everyone does." This shared responsibility for maintaining academic integrity highlights the need for clear communication and agreement within the group on how to use AI tools like ChatGPT ethically.

Conclusion
Integrating artificial intelligence (AI) in higher education, particularly in computer programming assignments, presents ethical challenges. This study has highlighted the critical ethical dilemmas faced by students, such as balancing AI assistance with originality, maintaining academic integrity, and navigating the potential for misuse of AI tools like ChatGPT. By applying a theoretical framework based on autonomy, nonmaleficence, beneficence, justice, and fidelity, we have gained valuable insights into how students perceive and manage these ethical considerations.
The implications of this research are far-reaching. For educators, understanding the ethical challenges students face when using AI tools can inform the development of more effective teaching strategies and assessment methods. This includes rethinking assessment design to better integrate AI tools in a way that promotes learning while upholding academic integrity. Institutions can also benefit by developing clear guidelines and policies that address the ethical use of AI in educational settings, ensuring that students are equipped with the knowledge to use these technologies responsibly. Moreover, this study emphasizes the need for continuous dialogue between students, educators, and policymakers to adapt to the rapid advancements in AI. As AI technologies evolve, so too must the ethical frameworks and educational practices that govern their use. This research suggests that incorporating ethical training and AI literacy into the curriculum could help students better navigate the complexities of using AI tools in their academic work.
The translational importance of this work lies in its potential to shape future educational policies and practices. By providing a nuanced understanding of the ethical dilemmas associated with AI in education, this study can guide the creation of robust ethical guidelines and promote a balanced approach to AI integration. This can ensure that AI tools are used to enhance learning without compromising the core values of education.
Future research could extend this work by exploring the long-term impacts of AI use on student learning outcomes and the development of coding skills. Additionally, comparative studies across different educational contexts and disciplines could provide a broader perspective on the ethical challenges and best practices for AI integration in academia.