AI's Ethical Dilemmas in Writing: Plagiarism, Bias, and the Future of Academic Integrity in Education
Universities across Kazakhstan are witnessing a silent revolution, led by artificial intelligence (AI). From writing centers to dorm rooms, students are employing AI resources like ChatGPT, Grammarly, and QuillBot to aid in their writing tasks. Some utilize these tools for basic editing, idea generation, or structure, while others rely on them to pen entire essays.
Now that these resources are readily available, the debate over AI's role in education has shifted. The focus is no longer on whether AI is appropriate but rather on how it should be utilized effectively.
AI holds immense potential for enriching academic life in Kazakhstan. It can provide instant, personalized feedback to multilingual learners dealing with writing requirements in Russian, Kazakh, and English. However, the unchecked and uncritical adoption of AI in writing poses serious ethical dilemmas concerning plagiarism and bias.
These issues aren't theoretical. They challenge the very foundations of education: promoting originality, critical thinking, and equity.
Evolving plagiarism in the AI age
The age-old problem of plagiarism has acquired a new facet in the era of AI. Traditionally, plagiarism referred to copying someone else's work without giving credit. But with AI, the lines blur. When a student utilizes AI to craft an essay and submits it without alteration, is that plagiarism? What about when it's slightly modified? Or when AI is only used for structure and transitions?
This isn't just an academic dilemma but a pedagogical one. Students who outsource intellectual work to AI miss out on the essence of writing: learning to think, synthesize, and analyze. Universities in Kazakhstan need to adapt their academic integrity policies to account for the nuances of AI usage.
Most students understand that submitting AI-generated content without attribution or alteration constitutes cheating. The uncertainty lies in knowing the specific policy of their university, as AI-related institutional policies are still taking shape. Uncertainty is further compounded by inconsistencies across faculties, as some professors endorse modest AI tool usage for idea generation or language assistance, while others prohibit it altogether.
To navigate this gray zone, universities in Kazakhstan can learn from international institutions that are creating transparent and nuanced guidelines for AI-generated content.
However, a punitive approach alone will not be effective. To address plagiarism, universities must change the academic culture. Students need to understand not only how to avoid plagiarism, but why originality and authorship matter. Faculty need to instill an understanding of writing as a thought process, fostering an environment where AI serves as a tool to facilitate that process, rather than a substitute for it.
The concealed biases of supposedly neutral technology
Another critical ethical issue that often goes unnoticed is bias. While AI is powered by algorithms, it is not neutral. AI models are trained on extensive data sets that predominantly come from Western sources and are mostly in English. Even OpenAI, ChatGPT's creator, acknowledges this on its website. This means that AI reflects the Western cultural, linguistic, and ideological assumptions embedded in that data.
This poses two primary challenges for students:
First, there's a risk that AI-based writing reinforces Anglo-American scholarly practices at the expense of local systems of knowledge. AI-generated writings tend to prioritize linear, thesis-based argument structures, citation practices, and critical styles that may not align with the patterns of native or multilingual scholarly practices. If Kazakh students employ AI to support their writings, they might inadvertently adopt these practices, forgoing the opportunity to develop an academic voice that reflects their regional context.
Second, AI can perpetuate and intensify existing inequalities. Students from rural areas or those most comfortable using Kazakh or Russian may discover that AI tools perform better with English content or Western examples. This sets the stage for an uneven playing field, where linguistic ability and access to global discourse impact the quality of AI assistance a student receives. This disparity risks exacerbating existing educational inequalities, favoring those already conversant with the dominant discourse of global academia.
To address these issues, universities must make these biases an explicit topic of discussion in their courses. Assignments could be structured around local interpretations of regional or global issues, counteracting the cultural homogenization that uncritical AI usage may otherwise encourage.
Paving the way: a call for ethical leadership
With its unique multilingual and multicultural environment, coupled with investments in education, Kazakhstan has the potential to lead on this issue. This will involve revamping academic integrity policies to accommodate AI-generated content and investing in widespread training for faculty, staff, and students.
Institutions can host regular workshops on the ethical use of AI, develop standardized institutional guidelines for AI assistance disclosure and citation, and embed discussions on digital ethics and algorithmic bias into the curriculum.
Banning AI from classrooms would be impractical and detrimental to both students and educators. Instead, we must confront and discuss the ways AI reshapes the learning and thinking processes.
This includes emphasizing the importance of originality and critical inquiry, encouraging educators to view writing as a mental journey rather than a final product, and fostering assignments that prioritize individual voice and reflection. AI should serve as a tool that supports learning, not replace it. Fairness, equity, and intellectual honesty should remain the cornerstones of education.
The author is Michael Jones, a writing and communications instructor at the School of Social Science and Humanities, Nazarbayev University, Astana.
