Signs of an AI-Authored Paper, as Identified by a Scholar
In the halls of Florida Southwestern State College, English Composition teacher Mark Massaro has been battling a new foe: artificial intelligence (AI). As AI has become more prevalent in higher education, it has increasingly done the talking (and the thinking) for students, robbing them of the chance to find their voice at a critical point in their academic careers.
Massaro, who has taught at the college for years, has taken it upon himself to push back against AI-generated essays. Rather than rely on software, he uses his own judgment to assess whether a paper was illegitimately generated, working from a checklist of tell-tale signs: a plethora of em dashes, uniform sentence and paragraph lengths, a rhythmic and mechanical feel, and an overly polished academic voice.
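The stylistic tells in a checklist like this can be partially mechanized. The sketch below is purely illustrative: the function name, the metrics chosen, and any thresholds a reader might apply are assumptions of this article, not Massaro's actual method or a validated detector.

```python
import re
import statistics

def ai_style_flags(text: str) -> dict:
    """Score a text against two stylistic tells often attributed to
    AI-generated prose: heavy em-dash use and unusually uniform
    sentence lengths. These are rough heuristics, not proof."""
    # Split into sentences on terminal punctuation followed by whitespace.
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    words_per_sentence = [len(s.split()) for s in sentences]

    em_dashes = text.count("\u2014")  # U+2014 EM DASH

    # A low coefficient of variation (stdev / mean) in sentence length
    # suggests the "rhythmic, mechanical feel" described above.
    if len(words_per_sentence) > 1:
        mean_len = statistics.mean(words_per_sentence)
        sentence_length_cv = statistics.pstdev(words_per_sentence) / mean_len
    else:
        sentence_length_cv = None

    return {
        "em_dashes_per_1000_chars": 1000 * em_dashes / max(len(text), 1),
        "sentence_length_cv": sentence_length_cv,
        "sentence_count": len(sentences),
    }
```

A human reviewer would still interpret these numbers in context; polished human writing can score "AI-like" on any single metric.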
AI-assisted writing has spread rapidly, with the use of apps like ChatGPT for cheating becoming commonplace in U.S. higher education by 2023. Current AI detection tools, however, are of limited effectiveness: some advanced detectors report accuracy of around 84%, while many free or less sophisticated tools achieve only about 68%, a substantial error rate either way.
Tools like Turnitin's AI detection have been criticized for missing much AI-generated content or issuing false positives. Moreover, many detectors ultimately leave the critical judgment to human instructors, because they only estimate what portion of a text is AI-generated without verifying its accuracy or providing clear evidence of misconduct.
Experts and educators recommend using AI detectors cautiously as one component of a broader academic integrity strategy. This strategy includes human review of flagged work to avoid unfair penalties, clear communication with students about the use and limitations of AI tools, teaching responsible AI use, and fostering student understanding.
In practice, AI detection tools are inconsistent, prone to manipulation, and can produce conflicting results, making sole reliance on them problematic for institutional decision-making. Research also highlights ongoing vulnerabilities, such as students leaving prompt text from their chatbot interactions in the essays they submit, or submitting text with no paragraph indentations.
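Leftover chatbot residue of the kind mentioned above is one of the few signals that can be checked mechanically. The phrase list below is an assumption for illustration only, not an exhaustive or validated signature set, and a match would still warrant human review rather than an automatic finding of misconduct.

```python
import re

# Phrases like these sometimes survive a careless copy-paste from a
# chatbot session. Illustrative examples only.
CHATBOT_RESIDUE_PATTERNS = [
    r"as an ai language model",
    r"certainly[,!]? here is",
    r"regenerate response",
    r"i hope this (essay|helps)",
]

def find_chatbot_residue(essay: str) -> list[str]:
    """Return the residue patterns that appear in the essay,
    matched case-insensitively."""
    lowered = essay.lower()
    return [p for p in CHATBOT_RESIDUE_PATTERNS if re.search(p, lowered)]
```

Because such strings are trivial for a student to delete, their absence proves nothing; their presence is merely a starting point for a conversation.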
Despite these challenges, Massaro continues to fight the good fight. He believes that while current AI detection tools provide some assistance and continue to improve, they still require complementary human judgment and policy frameworks to catch AI-generated essays reliably. After all, the stakes are high: the future of education depends on it.
- Artificial intelligence (AI) has become increasingly prevalent in higher education, altering the landscape of student essays, as seen in English Composition teacher Mark Massaro's battle against AI-generated essays at Florida Southwestern State College.
- Amid the rise of AI-assisted writing, advanced AI detectors report accuracy of around 84%, while free or less sophisticated tools achieve only about 68%, indicating a substantial error rate in identifying AI-generated content.
- AI detection tools, which struggle with consistency and are prone to manipulation, should be employed cautiously as one part of a broader academic integrity strategy that includes human review, clear communication with students, responsible AI education, and fostering student understanding.